Article

Using PRISMA Hyperspectral Data for Land Cover Classification with Artificial Intelligence Support

1 Department of Agricultural and Forest Sciences (DAFNE), Tuscia University, Via S. Camillo de Lellis snc, 01100 Viterbo, Italy
2 Department of Architecture, University of Naples Federico II, Via Forno Vecchio, 36, 80134 Naples, Italy
* Author to whom correspondence should be addressed.
Sustainability 2023, 15(18), 13786; https://doi.org/10.3390/su151813786
Submission received: 11 July 2023 / Revised: 28 August 2023 / Accepted: 5 September 2023 / Published: 15 September 2023

Abstract

Hyperspectral satellite missions, such as PRISMA of the Italian Space Agency (ASI), have opened up new research opportunities. The use of PRISMA data for land cover classification has yet to be fully explored and is the main focus of this paper. Historically, the main purposes of remote sensing have been to identify land cover types, to detect changes, and to determine the vegetation status of forest canopies or agricultural crops. The ability to achieve these goals can be improved by increasing spectral resolution. At the same time, improved AI algorithms open up new classification possibilities. This paper compares three supervised classification techniques for agricultural crop recognition using PRISMA data: random forest (RF), artificial neural network (ANN), and convolutional neural network (CNN). The study was carried out over an area of 900 km2 in the province of Caserta, Italy. The PRISMA HDF5 file, pre-processed by the ASI at the reflectance level (L2d), was converted to GeoTiff using a custom Python script to facilitate its management in Qgis. The Qgis plugin AVHYAS was used for classification tests. The results show that CNN gives the best results in terms of overall accuracy (0.973), K coefficient (0.968), and F1 score (0.842).

1. Introduction

The potential for Earth observation (EO) from space for resource management and optimization emerged in the early 1960s and throughout the 20th century, with the advent of digital computers, pattern-recognition technology, and the first artificial satellites. The ability to monitor large areas, while reducing per-unit costs, is one of the main advantages of space-based technologies [1]. Remote sensing (RS) through EO offers the possibility to obtain quantitative information at the pixel level. Having this information helps to understand environmental and socioeconomic trends. Land cover and land use maps can be used for sustainable resource management and the study of climate change phenomena. According to the Eurostat definition, land cover (LC) corresponds to the physical coverage of the land surface, while land use (LU) refers to the socioeconomic use of the land [2]. It is possible to create thematic maps of land cover using different classification methods. Whereas photointerpretation can be ambiguous and subjective, automatic classification is a defined, quantifiable, and repeatable process [3] (p. 359).
Classifying involves assigning a land cover class to all pixels in a digital image. Automatic LULC classification is typically based on the notion that different land cover types have different spectral reflectance behaviors [4]. Classification techniques (e.g., maximum likelihood, decision tree, and neural networks) are then used to define spectral signatures using sample data to discriminate between different classes of selected LULC based on pixel values [5,6]. Image classification, which can be supervised or unsupervised, is the process of extracting semantic information from raw data, i.e., pixel values, by assigning a class label to each pixel [7,8]. When classifying land cover, supervised approaches offer better performance than unsupervised approaches but require a sufficient number of accurate samples [9].
Machine learning (ML) and deep learning (DL) are two approaches that use artificial intelligence (AI) to classify RS images. Among the classification techniques, support vector machine (SVM), random forest (RF), and maximum likelihood are ML-based algorithms. On the other hand, artificial neural networks (ANNs) can be either ML or DL depending on the layers of the network [10]. All these methods are common supervised classification techniques that are also used for hyperspectral imagery (HSI) [11]. The use of ANNs for image classification and change detection has been a well-known technique for many years. Later, the focus shifted to ML algorithms such as SVM, which can handle large data sets with few training samples, or RF, which is easy to use and achieves good accuracy [10]. In the last decade, however, with the rise of DL, there has been a renewed interest in ANNs for their ability to produce good results in image analysis, including land cover classification [10]. These methods are derived from neural networks but have greater computational power. DL methods use deeper layers to extract feature information, particularly in the case of CNN, by considering both spatial and spectral characteristics of images [10]. DL models provide promising results for object recognition or classification of hyperspectral data, by better handling images with high spatial and spectral resolution [11,12].
Multispectral data are a key resource in remote sensing because they provide more than one spectral measurement per pixel [13] (p. 4). Most multispectral satellites record information in the visible and near-infrared regions of the electromagnetic spectrum, in a number of bands ranging from three to six or more, improving the ability to distinguish man-made surfaces, vegetation, clear and turbid water, rocks, and soil [14]. However, bands in multispectral sensors are not contiguous across the spectrum and often have bandwidths of 100–200 nm. As a result, unlike hyperspectral sensors, they do not have sufficient spectral resolution to directly identify materials with diagnostic characteristics [15] (p. 306). Hyperspectral data cover a wide range, from the visible (0.4–0.7 µm) to the shortwave infrared (SWIR, up to 2.4 µm), and are useful for detailed land cover legends. The ability to distinguish features with similar spectral signatures is enhanced by the availability of multiple bands [16]. In addition to improving the ability to discriminate between similar objects, hyperspectral data make it possible to perform advanced studies, for example, predicting the type and amount of crop traits by estimating grassland biochemical parameters [17,18].
HSI from aircraft has been in development since the 1980s, but the first hyperspectral space EO missions for civil and scientific purposes have only been available since the early 21st century [19]. The main providers of space-based hyperspectral data over the last few decades have been Earth Observer-1 (NMP/EO-1) [20], launched under NASA’s New Millennium Program in 1999, with the Hyperion spectrometer, and PROBA (Project for On-Board Autonomy), launched in 2001 with the Compact High-Resolution Imaging Spectrometer (CHRIS) developed by the European Space Agency [21]. Among the latest EO hyperspectral missions are PRISMA (Hyperspectral Precursor of the Application Mission) and EnMAP (Environmental Mapping and Analysis Program). EnMAP is a German imaging spectroscopy mission that was launched in April 2022 and recently completed its commissioning phase [22,23]. PRISMA is developed by the ASI and has been in orbit since March 2019, with commissioning completed in January 2020 [24]. The PRISMA mission is expected to contribute to the advancement of environmental RS by providing hyperspectral data for various applications, such as monitoring of agricultural crops, forest resources, and inland and coastal waters; mapping of natural resources, soil properties, and soil degradation; climate change studies; and environmental research [25].
Working with HSI raises issues that are well known in the scientific community, related to the size of the data. The very high dimensionality of hyperspectral data, due to the large amount of information recorded in different bands, has some disadvantages: high storage costs, redundancy, and degraded performance [26]. Dimensionality reduction is addressed using techniques based on band selection or feature extraction, as discussed below. However, the reduction in the size of the data must preserve the most relevant information it contains. To predict the separability of two classes of materials, the statistical distance between their spectral distributions must be measured, and one of the most common measures is the Bhattacharyya distance [1]. The other issue is related to the notion that the number of samples used to train a classifier has an impact on its accuracy. Therefore, in order to maintain accuracy, the number of training pixels per class needs to increase as the dimensionality of the data increases, according to the curse of dimensionality or the Hughes phenomenon (Hughes 1968) [13,27].
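For illustration, the sketch below implements the Gaussian form of the Bhattacharyya distance in Python; the class statistics (means and covariances over two hypothetical bands) are invented for the example and are not taken from the PRISMA data.

```python
import numpy as np

def bhattacharyya_distance(mu1, cov1, mu2, cov2):
    """Bhattacharyya distance between two Gaussian class models.
    Larger values indicate better statistical separability of the two
    classes in the chosen band subspace."""
    mu1, mu2 = np.asarray(mu1, float), np.asarray(mu2, float)
    cov1, cov2 = np.asarray(cov1, float), np.asarray(cov2, float)
    cov_avg = (cov1 + cov2) / 2.0
    diff = mu1 - mu2
    term_mean = 0.125 * diff @ np.linalg.inv(cov_avg) @ diff
    term_cov = 0.5 * np.log(np.linalg.det(cov_avg)
                            / np.sqrt(np.linalg.det(cov1) * np.linalg.det(cov2)))
    return term_mean + term_cov

# Toy example: two crop classes described by mean reflectance and
# covariance in two hypothetical bands.
mu_a, cov_a = [0.32, 0.45], [[0.010, 0.002], [0.002, 0.012]]
mu_b, cov_b = [0.28, 0.51], [[0.011, 0.001], [0.001, 0.010]]
print(bhattacharyya_distance(mu_a, cov_a, mu_b, cov_b))
```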
The literature on the use of PRISMA data, particularly in the field of land cover classification, is still limited. Most of the research shows the potential of PRISMA data for specific purposes, such as forest conservation with wildfire fuel mapping [28] or fire detection [29], geological applications [30], cryospheric applications [31], urban surface detection [32], and mapping methane point emissions [33]. There are also interesting studies in the agricultural field dealing with specific crop or vegetation type discrimination [34,35,36,37]. However, the possibility of using PRISMA data to distinguish LULC classes has not been fully investigated.
This research aims to evaluate the possibility of classifying permanent agricultural crops (i.e., orchards, fruit trees, olive groves, etc.) in a highly fragmented agricultural area using PRISMA data. The purpose of this study is to discriminate entities with very similar spectral signatures using HSI. From this point of view, high spectral resolution becomes an advantage. However, the handling of hyperspectral images may not be an easy task due to the high dimensionality of the data [26,27]. Therefore, as explained in the following sections, two techniques were tested to achieve data dimensionality reduction. For the purposes of the research, it was decided to compare three different AI-based classifiers by evaluating the accuracy of their results. In particular, among the consolidated methods for supervised classification tasks, RF, ANN, and CNN were chosen. This choice was made to test the PRISMA data using algorithms with known performance, which are expected to produce different results [38]. In addition, it was found useful to report the processing time for data of different dimensionality, used as input in each classifier.

2. Materials and Methods

The main steps of the proposed methodology are shown in Figure 1. The starting point of the procedure is the selection of the study area. Then, on the one hand, sample data were collected, both in the field and through photointerpretation, to build a training data set, which was also crucial for the selection of the LC classes. On the other hand, the PRISMA satellite cube downloaded from the ASI portal was processed, including dimensionality reduction. Finally, three types of classifiers (RF, ANN, and CNN) were used for the supervised classification of the hyperspectral cube. The classification maps obtained were subjected to an accuracy evaluation phase, which allowed the selection of the best result.

2.1. Study Area

The study area is in Italy, between the provinces of Naples and Caserta, and covers an area of approximately 900 km2 north of the city of Naples, as shown in Figure 2.
The area is largely coincident with what was once Campania Felix and is still known as “Terra di lavoro” [39] (pp. 16–17). This is a large flat agricultural area, on the left side of the Volturno River, partially crossed by a network of drainage canals known as Regi Lagni. It is an area characterized by a high degree of fragmentation. The complexity of agricultural uses in this area can be derived from the Campania Agricultural Land Use Map (CUAS) data, as shown in Table 1 [40]. The CUAS is an official map, accredited at the conventional scale of 1:50,000. It has been used in this research because it allows, although with unproven accuracy, the surveying of much smaller minimum mappable units than the Corine Project, which has a minimum mapping unit of 25 ha [41]. In 2009, the reference year of the CUAS, the area was mainly occupied by arable land (31%), artificial areas (28%), and orchards (18%). This study area is particularly suitable for testing the classification of complex mosaics due to its extreme fragmentation.
Table 2 shows the data obtained from the Corine Land Cover Classification (CLC) [41] for the year 2012. The largest percentage of the study area (30%) is occupied by anthropic areas (code_12 “111” to “142”). Agricultural areas (code_12 “211” to “213”) occupy 23%, while areas with permanent tree crops (code_12 “221” to “223”) occupy about 17% of the area.
Based on the study area configuration, 18 classes were selected for ground truth collection and image classification. As shown in Table 3, the level of detail was decreased or increased starting from the subdivision corresponding to Corine project level III. The level of detail was reduced for classes that were not of interest for the case study if these classes had similar characteristics. Conversely, the level of detail was increased for categories of relevance to the case study, particularly in the case of fruit trees.

2.2. PRISMA Hyperspectral Image

According to the technical specification document [42], the PRISMA satellite orbits at an altitude of 615 km and can deliver 223 images per day. The PRISMA spectrometer is integrated with a medium-resolution panchromatic camera and can provide images with high spectral resolution: the radiation reflected from the Earth’s surface is recorded in the following spectral ranges: VNIR (400–1010 nm) and SWIR (920–2505 nm). The hyperspectral cube consists of 240 bands with a bandwidth less than or equal to 12 nm, a swath of 30 km at nadir, and a GSD of about 30 m in the hyperspectral channels and 5 m in the panchromatic. The PRISMA products, available at http://prisma.asi.it/, are provided at different levels [43], in particular:
  • Level 0 (L0) refers to raw data that include ancillary data from satellite instruments;
  • Level 1 (L1) are top-of-atmosphere (TOA) radiance images, radiometrically corrected and calibrated in physical units;
  • Level 2b (L2b) products are derived from L1: spectral radiance is transformed into geophysical parameters, and images are geolocated and geocoded at the bottom of the atmosphere;
  • Level 2c (L2c) products are geolocated bottom-of-atmosphere reflectance images, with Angstrom correction applied;
  • Level 2d (L2d) product is a geocoded version of L2c; images are orthorectified by the ASI using a DEM.
The study area falls within the 30 × 30 km scene “PRS_L2D_STD_20191130100450_20191130100454_0001”, acquired by the PRISMA satellite on 30 November 2019 and shown in Figure 3.
For this study, it was decided to use the L2d level of the ASI data, provided with reflectance values, geocoded and orthorectified. “SRTM90_CGIAR_4.1” is the digital elevation model (DEM) used by the ASI to orthorectify the L2d product of the scene. The SRTM acronym stands for “Shuttle Radar Topography Mission” and refers to the joint mission of NASA and the German and Italian space agencies that started in 2000, with the goal of obtaining a high-quality global digital elevation model [44]. Version 4.1 of the SRTM DEM is distributed by the Consortium for Spatial Information (CGIAR-CSI) [45]. It is important to note that global DEMs are generally less accurate than local DEMs and may contain errors, the main sources of which are the post-processing or generation techniques used [46]. The area chosen for this study is largely flat, so negligible errors in altitude can be accepted. However, it was observed that the geocoded data were also not very accurate in spatial distribution (+/−300 m in latitude and +/−150 m in longitude). In addition, functionality to handle the HDF5 format of PRISMA data was not implemented in all mainstream software until a few years ago. For these reasons, a Python script (Prisma Tool) was developed [47]. The Prisma Tool allows data to be converted from HDF5 to the more common GeoTiff format, making the data easier to use in GIS software. At the same time, it allows for better georeferencing of images through the input of the correct vertex coordinates.
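The sketch below illustrates the kind of conversion performed, using h5py and rasterio; it is not the Prisma Tool itself. The dataset path and attribute names follow the PRISMA L2D product convention but should be verified against the product specification, and the rescaling of the stored digital numbers to reflectance (via the scale attributes of the product) is omitted for brevity.

```python
# A minimal sketch of the HDF5 -> GeoTiff step, not the authors' Prisma Tool.
# Dataset path and attribute names follow the PRISMA L2D convention and
# should be checked against the product specification document.
import h5py
import numpy as np
import rasterio
from rasterio.transform import from_bounds

SRC = "PRS_L2D_STD_20191130100450_20191130100454_0001.he5"  # local file path

with h5py.File(SRC, "r") as f:
    # The L2D cube is stored as (rows, bands, cols); reorder to (bands, rows, cols)
    vnir = f["HDFEOS/SWATHS/PRS_L2D_HCO/Data Fields/VNIR_Cube"][...]
    cube = np.transpose(vnir, (1, 0, 2)).astype(np.float32)
    # Scene corners and EPSG code are global attributes of the product
    ul_e, ul_n = f.attrs["Product_ULcorner_easting"], f.attrs["Product_ULcorner_northing"]
    lr_e, lr_n = f.attrs["Product_LRcorner_easting"], f.attrs["Product_LRcorner_northing"]
    epsg = int(f.attrs["Epsg_Code"])

bands, rows, cols = cube.shape
transform = from_bounds(ul_e, lr_n, lr_e, ul_n, cols, rows)

with rasterio.open("prisma_vnir.tif", "w", driver="GTiff",
                   height=rows, width=cols, count=bands, dtype="float32",
                   crs=f"EPSG:{epsg}", transform=transform) as dst:
    dst.write(cube)
```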

Dimensionality Reduction

As mentioned above, hyperspectral data have great potential for distinguishing features with similar spectral signatures but also some disadvantages related to data dimensionality. Dimensionality reduction techniques can generally be divided into two types: those based on feature extraction and those based on band selection. Techniques based on feature extraction, such as principal component analysis (PCA), which derives a reduced set of new features capturing most of the variance in the original data, involve a loss of information. This is why band selection is often the preferred dimensionality reduction technique in HSI: a subset of bands is selected without loss of their physical meaning [48]. Several techniques have been proposed in the literature to solve the band selection challenge [49].
In this case study, starting with the original 240-band cube (Cube-O, where “O” stands for original), two different methods were used.
  • The first is based on a supervised band selection approach, in which the analyst chooses which bands to keep (a code sketch of both reduction steps follows this list). This step was aimed at eliminating bands corresponding to atmospheric windows with low or zero radiance values, rather than at greatly reducing the size of the data. Random spectral signatures associated with different types of surfaces were compared, as shown in Figure 4. In accordance with previous approaches [48,50], the bands with the lowest signal-to-noise ratio and atmospheric water vapor absorption were selected for removal. Through this process, a new hyperspectral cube of 156 bands (Cube-R1, where “R1” indicates the first reduction) was derived from the original 240 bands. The bands removed were those in the ranges 1290–1490 nm, 1700–2050 nm, and 2350–2495 nm.
Figure 4. Random spectral signature of the original PRISMA Cube-O; the red rectangles correspond to the area in which the bands with the lowest values have been removed—authors’ elaboration using Qgis with the EnMAP-Box plugin described in the following section.
  • The second method is the statistical method of PCA [51] (pp. 115–128), widely used in the scientific community [49] to reduce the cube dimension by replacing strongly correlated bands with a small number of uncorrelated components. The original hyperspectral cube was used as input data. At the end of the processing, 20 principal components were selected, preserving 98.7% of the variance in the data (Cube-R2, where “R2” indicates the second reduction). The AVHYAS plugin described in the following section was used for PCA.
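A minimal sketch of the two reduction steps is given below, assuming the cube is held in memory as a (bands, rows, cols) NumPy array with a matching vector of band-center wavelengths in nm; the function names are ours, and the AVHYAS PCA implementation may differ in detail.

```python
import numpy as np
from sklearn.decomposition import PCA

def remove_noisy_bands(cube, wl):
    """Cube-R1: drop the bands falling in the low-transmittance windows
    identified above (1290-1490, 1700-2050 and 2350-2495 nm)."""
    wl = np.asarray(wl, float)
    drop = (((wl >= 1290) & (wl <= 1490))
            | ((wl >= 1700) & (wl <= 2050))
            | ((wl >= 2350) & (wl <= 2495)))
    return cube[~drop], wl[~drop]

def pca_reduce(cube, n_components=20):
    """Cube-R2: project each pixel spectrum onto the first principal
    components and report the variance preserved."""
    bands, rows, cols = cube.shape
    X = cube.reshape(bands, -1).T                    # one row per pixel
    pca = PCA(n_components=n_components)
    scores = pca.fit_transform(X)                    # (pixels, components)
    print(f"variance preserved: {pca.explained_variance_ratio_.sum():.3f}")
    return scores.T.reshape(n_components, rows, cols)
```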
It is worth noting that Cube-R2, obtained through PCA, retains a high percentage of the variance computed on Cube-O. However, while PCA is very useful for distinguishing classes that differ greatly from each other, it could mask the information recorded in the original data about small differences between similar classes. For the classification task, the authors therefore decided to test the two images (Cube-R1 and Cube-R2) alternately to evaluate this issue.

2.3. Software Used in Research

Qgis was used as the open-source GIS software for this study. It is an official project of the Open-Source Geospatial Foundation (OSGeo) [52] and one of the most widely used open-source GIS packages worldwide, thanks to its ability to easily implement a variety of functions through plugins. Qgis offers several plugins for supervised classification of satellite images.
However, the ability to classify hyperspectral data, especially using DL-based approaches such as neural networks, is still limited. A solution is offered by the Advanced Hyperspectral Data Analysis (AVHYAS) plugin [53]. The AVHYAS plugin was developed by the Space Applications Centre (SAC) of the Indian Space Research Organization. It is an HSI processing and analysis plugin based on Python 3 that extends the existing Qgis GDAL functionality. It uses several modules, such as TensorFlow for DL and Scikit-Learn for machine learning. The purpose of the plugin is to exploit the visualization capabilities of Qgis by integrating them with the most advanced techniques available in the literature for hyperspectral data management and classification. The ability to experiment with different supervised classification techniques based on ML and DL within the same application was another reason for choosing AVHYAS.
Another very useful plugin was EnMAP-Box, developed by the Humboldt University of Berlin and the University of Greifswald to handle data from the German hyperspectral satellite EnMAP [54]. This plugin constituted the starting point for the development of the AVHYAS plugin mentioned above. Like AVHYAS, it combines the visualization capabilities of Qgis with the ability to manage and process hyperspectral data. Its main features are (i) the ability to easily visualize HSI, both as a two-dimensional image and as a hyperspectral cube; (ii) the exploration of high-dimensional data and hyperspectral libraries; and (iii) the collection of spectral signatures in spectral libraries. It also allows data to be processed using some of the most popular algorithms in the scientific community (i.e., support vector machine or RF-based raster classification, regression, and clustering approaches from the scikit-learn library).
For field sample data collection, the QField mobile application [55] has been used; it is based on Qgis and was developed specifically for field work. The main goal is to help users perform the tasks required in the field, such as data collection and storage.

2.4. Sample Data

In order to apply supervised classification techniques, it was necessary to collect a set of labeled data, to be used both for training the classifiers and for testing the results. In addition, as mentioned above, the number of training samples affects the accuracy of a classifier: using a large number of bands requires a large number of observations to maintain classification accuracy [20]. Since a large number of bands was retained as input data, a correspondingly large number of ground truths was collected. Specifically, three sets of labeled data were used, each collected at a different time and in a different way.
(A) The first set was collected on 23 October 2019. The coordinates of the collected points, corresponding to the centroids of the agricultural plots, were obtained using differential GPS and then entered into a CSV file. The CSV file, which also included attributes related to the type of LC, was then imported into GIS software. Using the coordinates in the attribute table, the points were automatically georeferenced. A point vector layer was thus obtained, from which polygon vector data were derived by considering the perimeter of the cultivated areas. Photointerpretation and geotagged images collected during the survey made this step possible. During the first survey, 160 control points were surveyed and classified close to the time of the satellite image acquisition. Of these, 106 involved agricultural parcels, which in turn were divided into 43 points for permanent crops and 63 for temporary crops. Figure 5 shows the preservation of the canopy for almost all the permanent crops and trees classified in this study at the time of the first survey. This study is not based on a multitemporal analysis of the study area: the classification was made on the basis of a November image. During this period, the canopy of the trees may have had few leaves. However, the leaves of the plants selected for this study were not yet in the phase of leaf abscission, as can be seen in the photos below.
(B) Later, the research was suspended due to the COVID-19 pandemic. However, it was necessary to increase the number of labeled data to have more confidence in the statistical validation of the results. It was assumed that ground truths could still be integrated for the arboriculture sector three years after the image acquisition. In the survey, great care was taken in estimating the probable planting date from the tree characteristics, so that only plantings older than the date of the satellite image were included. The second survey took place in April 2023 and allowed the detection of 150 additional ground truths, bringing the total number of GCPs related to permanent crops to 180. In this case, the QField app was used during the survey to collect point vector data directly in the field, using the AGPS of a smartphone. In an attempt to open up to the world of citizen science and crowdsensing data collection, this type of approach was preferred over the traditional one. The accuracy of the AGPS provided by common smartphones was considered adequate for this study, given the 30 m spatial resolution of the PRISMA image. Using the QField app, the collected points could be entered directly into the GIS environment in their correct location [55]. This is possible through the activation of a satellite base map and smartphone localization in the app during the survey. The georeferenced point vector layer was then used to derive a polygon layer using the same methodology as for the first labeled dataset (A).
(C) Photointerpretation also allowed for the addition of new classes such as clouds, shadows, and water. It also increased the area for some of the classes already present, such as impervious surfaces, bare soil, and temporary agricultural surfaces. Photointerpretation was performed using the results of different false-color compositions of the PRISMA image. A total of six different false-color band compositions were used. The main objective was to improve the ability to distinguish between bare soil and temporary or permanent vegetation by photointerpretation. The selection of the bands is based on the combination method of the raster layer styling panel of the Qgis EnMAP-Box plugin [54]. Table 4 shows the RGB bands used for each composition, and Figure 6 shows the resulting maps (a code sketch is given after Figure 6).
Table 4. This table shows the bands used in false-color composition to obtain maps 1 to 6, shown in Figure 6.
Map                   No.   Red Channel     Green Channel    Blue Channel
                            Band (nm)       Band (nm)        Band (nm)
Natural color         1     34 (669.5)      22 (562.5)       13 (492.5)
False color           2     34 (669.5)      50 (833.4)       22 (562.5)
Color infrared        3     50 (833.4)      34 (669.5)       22 (562.5)
Agriculture           4     128 (1616.5)    50 (833.4)       13 (492.5)
Healthy vegetation    5     50 (833.4)      128 (1616.5)     13 (492.5)
Recent harvest        6     193 (2198.8)    50 (833.4)       22 (562.5)
Figure 6. Maps resulting from false-color combination: 1—natural color; 2—false color; 3—color infrared; 4—agriculture; 5—healthy vegetation; 6—recent harvest.
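A sketch of how such a composite can be produced from a GeoTiff version of the cube with rasterio is shown below; the band indices follow Table 4 (here, map 4, “agriculture”), while the percentile stretch and file names are illustrative.

```python
import numpy as np
import rasterio

def false_color(src_path, dst_path, r, g, b):
    """Write a 3-band composite from 1-based band indices; e.g., the
    'agriculture' composition (map 4) uses R=128, G=50, B=13."""
    with rasterio.open(src_path) as src:
        rgb = src.read([r, g, b]).astype(np.float32)
        profile = src.profile
    # Simple 2-98 percentile stretch per channel, for display only
    for i in range(3):
        lo, hi = np.nanpercentile(rgb[i], (2, 98))
        rgb[i] = np.clip((rgb[i] - lo) / (hi - lo), 0, 1)
    profile.update(count=3, dtype="float32")
    with rasterio.open(dst_path, "w", **profile) as dst:
        dst.write(rgb)

false_color("prisma_cube.tif", "map4_agriculture.tif", 128, 50, 13)
```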
Table 5 shows the dimensions in hectares of the surfaces in the sample dataset by class and group.

2.5. Supervised Classification Techniques

The primary purpose of this study is to evaluate the possibility of using PRISMA data to classify land cover classes, in particular different types of orchards, as mentioned in the introduction. For this purpose, three of the most common [11,38] classification techniques were investigated: RF, ANN, and CNN. Classification tests were performed in Qgis 3.14.1 software using the AVHYAS plugin [53]. The different techniques were tested by classifying the same study area. The images used as input were both the Cube-R1 with 156 bands and the Cube-R2 obtained by PCA. The same labeled data were used in each test run. In the case of neural networks, the labeled data set was randomly divided into training (70%) and test (30%) sets by AVHYAS prior to each test run. For random forest, 3-fold cross-validation was used for validation (see the sketch below). The AVHYAS plugin [53] provides some of the most common networks in the literature [38] through a dedicated DL classification module; reflecting the evolution of the literature, 13 different neural network architectures are available within this module [53]. Default settings were used for some parameters, since the aim was not to define the optimal scenario for the use of this tool. In addition to the different input data, the number of trees for the RF algorithm and the number of epochs for the neural networks differed between tests.
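The validation protocol can be reproduced with scikit-learn as sketched below; the arrays stand in for the labeled pixel spectra and are generated randomly for illustration.

```python
import numpy as np
from sklearn.model_selection import KFold, train_test_split

rng = np.random.default_rng(0)
X = rng.random((1000, 156)).astype("float32")  # stand-in labeled pixel spectra
y = rng.integers(1, 19, size=1000)             # 18 class labels

# Neural networks: random 70/30 train/test split before each run
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.30, random_state=0, stratify=y)

# Random forest: 3-fold cross-validation over all labeled pixels
for fold, (tr, te) in enumerate(
        KFold(n_splits=3, shuffle=True, random_state=0).split(X)):
    print(f"fold {fold}: train={len(tr)} validate={len(te)}")
```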

2.5.1. Random Forest

The RF algorithm, developed by Breiman [56] in 2001, is an ML-based approach. Classification is based on individual decision trees working as an ensemble. A random subset of labeled samples is used to train each tree. The randomization of the training data reduces the probability of error as the number of trees increases. Each tree expresses only one vote per instance, and the final classification is given by the majority of votes from all trees [56,57]. The choice of the number of trees affects the classification result. Considering similar works, a suitable number to balance accuracy and processing time is around 500 [58,59]. However, in this study, six different tests with 100, 500, and 1000 trees were performed, alternately using Cube-R1 and Cube-R2 as input, to test performance with the different input data.
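A minimal scikit-learn sketch of the ensemble approach follows, with synthetic stand-ins for the training spectra and the cube to classify; in the study, the inputs were Cube-R1 or Cube-R2 and the AVHYAS implementation was used.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X_train = rng.random((1000, 156)).astype(np.float32)  # toy labeled spectra
y_train = rng.integers(1, 19, size=1000)              # 18 classes
cube = rng.random((156, 50, 50)).astype(np.float32)   # toy cube to classify

# Each tree is fit on a bootstrap sample of the training pixels; the final
# label is the majority vote across trees. n_estimators was varied
# (100, 500, 1000) in the tests described above.
rf = RandomForestClassifier(n_estimators=500, n_jobs=-1, random_state=0)
rf.fit(X_train, y_train)

bands, rows, cols = cube.shape
pred_map = rf.predict(cube.reshape(bands, -1).T).reshape(rows, cols)
```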

2.5.2. Artificial Neural Network

Some tests were carried out with a classification approach based on an ANN, using the DL module available in AVHYAS [53] and the basic network proposed by the plugin, i.e., the “Baseline (Fully Connected NN)” model. This is a sequential model of a fully connected network with 3 hidden layers, where each neuron is fully connected to all those in the previous layer. The learning rate was left at the default value of 0.0001. This small value slows training, in that it requires more epochs, but it is generally more suitable for dealing with complex problems [60]. A total of 6 trials were run, with 10, 50, and 100 epochs, where Cube-R1 was used as input in 3 cases and Cube-R2 in the remaining 3.
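The sketch below builds a fully connected network of the same general shape in TensorFlow/Keras. The layer widths are our assumptions, not the plugin's defaults; the learning rate (0.0001), the 70/30 split, and the epoch counts mirror the tests described above.

```python
import numpy as np
import tensorflow as tf

rng = np.random.default_rng(0)
X = rng.random((1000, 156)).astype("float32")  # toy pixel spectra (Cube-R1 size)
y = rng.integers(0, 18, size=1000)             # 18 classes, 0-indexed

# Fully connected baseline: 3 hidden layers, each neuron connected to
# all neurons of the previous layer; widths are illustrative.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(156,)),
    tf.keras.layers.Dense(256, activation="relu"),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(18, activation="softmax"),
])
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-4),
              loss="sparse_categorical_crossentropy", metrics=["accuracy"])
model.fit(X, y, epochs=100, validation_split=0.3, verbose=0)
```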

2.5.3. Convolutional Neural Network

The utility of CNNs for HSI classification has been widely described in the literature [38,59,61]. Thanks to specific convolutional kernels, CNNs can use both spatial and spectral information to classify images. In the present work, among the models available in the AVHYAS plugin, the “Hu (1-D CNN)” model was used. This model is based on the one proposed by Hu [62]. The architecture of the network is similar to the one shown in Figure 7. It is a one-dimensional network and considers only spectral signatures, using a convolutional layer and a fully connected layer. The default value for the learning rate, which was kept in this case, is 0.01. Six trials were run with 10, 50, and 100 epochs, using Cube-R1 and Cube-R2 alternately as input data.
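A hedged Keras sketch of a 1-D spectral CNN in the spirit of Hu's model is given below: a single convolutional layer followed by a fully connected layer, operating on the spectral signature alone. The filter count and kernel size are illustrative and may differ from the AVHYAS implementation; the learning rate (0.01) matches the tests.

```python
import numpy as np
import tensorflow as tf

rng = np.random.default_rng(0)
X = rng.random((1000, 156, 1)).astype("float32")  # spectra as 1-D sequences
y = rng.integers(0, 18, size=1000)                # 18 classes, 0-indexed

# 1-D spectral CNN: one convolutional layer, max pooling, one fully
# connected layer; filter count and kernel size are illustrative.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(156, 1)),
    tf.keras.layers.Conv1D(20, kernel_size=17, activation="tanh"),
    tf.keras.layers.MaxPooling1D(pool_size=5),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(100, activation="tanh"),
    tf.keras.layers.Dense(18, activation="softmax"),
])
model.compile(optimizer=tf.keras.optimizers.SGD(learning_rate=0.01),
              loss="sparse_categorical_crossentropy", metrics=["accuracy"])
model.fit(X, y, epochs=100, validation_split=0.3, verbose=0)
```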

2.6. Accuracy Assessment

In case studies involving the use of remotely sensed data to produce land cover maps, assessment of the accuracy of the results is essential. For a quantitative assessment of the accuracy of the results, an error matrix can be generated and used for a number of descriptive and analytical statistical analyses. The most common metrics are overall accuracy (OA), kappa coefficient (K) (Cohen 1960), user’s accuracy (UA), and producer’s accuracy (PA) [63]. OA is the percentage of correctly classified pixels and is given by the ratio of the number of correctly classified pixels to the total number of test pixels used. Cohen’s K is a measure of the agreement between the predicted values and the actual values: it measures how much better the classification is than one made by chance. UA estimates the commission error and indicates the probability that a pixel predicted to be in a certain class actually belongs to that class. PA estimates the omission error and indicates the probability that a pixel belonging to a given class has been correctly classified. Another statistical indicator used is the F1 score, the harmonic mean of precision and recall for each class, varying from 0 (worst case) to 1 (best case) [64].
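All of these indices can be derived from the error matrix; the sketch below computes them with scikit-learn on a toy pair of label vectors.

```python
import numpy as np
from sklearn.metrics import (accuracy_score, cohen_kappa_score,
                             confusion_matrix, f1_score)

y_true = np.array([1, 1, 2, 2, 3, 3, 3, 1])  # toy reference labels
y_pred = np.array([1, 2, 2, 2, 3, 3, 1, 1])  # toy predicted labels

oa = accuracy_score(y_true, y_pred)             # overall accuracy
k = cohen_kappa_score(y_true, y_pred)           # agreement beyond chance
f1 = f1_score(y_true, y_pred, average="macro")  # mean per-class F1

cm = confusion_matrix(y_true, y_pred)           # rows: reference, cols: predicted
pa = np.diag(cm) / cm.sum(axis=1)  # producer's accuracy (1 - omission error)
ua = np.diag(cm) / cm.sum(axis=0)  # user's accuracy (1 - commission error)
print(oa, k, f1, pa, ua)
```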
The error matrix automatically generated by the AVHYAS plugin in the final report was used to evaluate the results in this case study. The classification results were evaluated globally in terms of OA and K. Results for individual classes were evaluated in terms of UA, PA, and F1 score. In the case of random forest, cross-validation was used, so the validation was carried out on the total number of labeled pixels. In the case of neural networks, 30% of the randomly selected pixels for the test phase were used to validate the results.

3. Results

This section presents the results of the classification techniques used. The three classification methods (RF, ANN, and CNN) produce very different results, both in terms of the global OA, K, and F1 score indices and in terms of the PA, UA, and F1 evaluated for each class. Table 6 compares the best results of the methods used for classification in terms of global accuracy (OA, K, and F1) for all the classes and running time (h).
Figure 8 shows a comparison graph between the RF, ANN, and CNN performances in terms of the F1 score obtained for the classes of permanent agricultural cover that are mainly considered in this study (ID from 9 to 18).
The following subsections report the values of the best results obtained in the different tests using RF, ANN, and CNN. The first table of each subsection reports, in addition to OA and K, the data used as input and the processing time. The second table in each subsection shows the results for each of the classes considered and the area obtained in hectares.

3.1. Random Forest Results

Table 7 shows the overall values for the best result of the RF algorithm obtained with the PCA-derived Cube-R2. Figure 9 shows the resulting classification map, while a detail is shown in Figure 10.
Table 8 shows the values by class for the best result obtained with the RF algorithm.

3.2. Artificial Neural Network Results

Table 9 shows the overall values for the best result obtained with ANN, using Cube-R1 with 156 bands as input. The OA and K index values are very high but not homogeneous across classes, because they are strongly influenced by the results obtained for the first eight classes. Figure 11 shows the resulting classification map, while a detail is shown in Figure 12.
Table 10 shows the values by class for the best ANN score.

3.3. Convolutional Neural Network Results

Table 11 shows the global values for the best result obtained with CNN, using Cube-R1 with 156 bands. Figure 13 shows the resulting classification map, while a detail is shown in Figure 14.
Table 12 shows the values by class for the best result obtained with CNN.

4. Discussion

Land cover classifications based on multispectral data have allowed us to distinguish general land cover classes. The use of hyperspectral data has advanced the production of detailed LULC maps. However, as noted in the introduction, most of the research focuses on temporary crops [34,61,65] or a few tree species [57,66]. The results show that a detailed legend for land cover maps can be obtained by supervised classification using HSI. In particular, it is possible to extend the classification to the fourth level, starting from the third level of CLC. This study shows how PRISMA data can be used to distinguish between many types of permanent crops. In this case study, CLC class “222—Fruit and berry plantations” was divided into eight subclasses. The results show that in the case of CNN, using Cube-R1, the F1 score is higher than 0.7 for the following classes: hazelnut orchard—0.93; olive grove—0.90; citrus grove—0.88; walnut orchard—0.81; persimmon orchard—0.77; peach orchard—0.71. Based on these results, the main objective of this study can be considered achieved: supervised classification using HSI provides an opportunity to obtain a detailed legend for land cover maps. Furthermore, to understand the actual applicability of the data in the application domain, the worst-case scenario was assumed (non-multitemporal analysis with a single autumn image), but even in this case, the results were very interesting. It is also true, as the photos in Figure 5 show, that the climatic conditions of the study area were conducive to the maintenance of the canopy over a longer period of time.
Among the different techniques used for the classification, the best choice seems to be the use of CNN. In terms of OA, K, PA, UA, and F1, this technique gave the best global result and per-class evaluation.
The classification with the RF algorithm gave interesting global results (OA—0.887; K—0.867; F1—0.603) using Cube-R2, although not comparable to those of the neural networks. It is an excellent alternative for land cover mapping with aggregated classes due to its very fast computation times. Indeed, excellent results were obtained with F1 scores above 80% for impervious, bare soil, temporary crops/low vegetation, and woods and forests. With respect to the classes of interest (ID from 9 to 18), the best results concern the classes hazelnut orchard (67%) and olive grove (59%).
Classification with ANN produced excellent overall results (OA—0.963; K—0.956; F1—0.766) using Cube-R1. Although the F1 scores are generally higher than in the RF case, for some of the classes of interest (ID from 9 to 18), such as cherry and poplar groves, the F1 score is below 60%, and it drops to 0% in the case of vineyards. It may be possible to obtain better results for these classes with an image from a different season. But in the case of olive, hazelnut, and persimmon groves, results higher than 80% in terms of F1 seem very favorable. Among the disadvantages, it is necessary to highlight the heavy computational cost in terms of processing hours.
In the case of convolutional neural networks, using Cube-R1 as in the previous case, the OA and global K coefficients are very high (OA—0.973; K—0.968; F1—0.842). This is because they are influenced by the very high level of accuracy in the case of aggregated classes such as impervious. However, in this case also, UA, PA, and F1 values are higher compared to the previous techniques. The convolutional network seems to guarantee better classification results, especially for the following classes: hazelnut, olive, persimmon, and walnut. On the contrary, for cherry, vineyard, and apricot, the classification is less accurate. For the classes of interest (ID from 9 to 18), the classification output shows percentages higher than 70% in terms of the F1 score in 6 out of 10 cases (hazelnut orchard—0.93; olive grove—0.90; citrus grove—0.88; walnut orchard—0.81; persimmon orchard—0.77; peach orchard—0.71).
Convolutional technology produces an overall F1 score that is almost 10% better than ANN and 25% better than RF. When evaluating the results obtained in the classes of interest (ID from 9 to 18), CNN guarantees almost 30% better F1 results compared to RF in all classes. Compared to ANN, there are improvements of more than 20% in the vineyard, citrus, and poplar classes; they are equivalent for the apricot and persimmon classes; while for all the others, there are slightly better results with CNN.
Based on the results obtained, the convolutional network provides excellent results with PRISMA hyperspectral data, especially with the cube resulting from the band selection (Cube-R1) and with a number of epochs equal to 100.

5. Conclusions

This study shows that HSI data can discriminate between different types of permanent crops (ID from 9 to 18), achieving very high classification results in terms of F1 scores for several classes. As shown before, the results in terms of F1 score are higher than 0.70 in 6 out of 10 cases (hazelnut orchard—0.93; olive grove—0.90; citrus grove—0.88; walnut orchard—0.81; persimmon orchard—0.77; peach orchard—0.71). This shows that satellite imagery can be used for level IV classification and confirms studies based on PRISMA data to distinguish crops [34,67], vegetation [36], or forest types [35].
The most promising technology for this kind of application is based on neural network methods, especially CNNs (CNN F1 results are almost 30% better compared to RF for permanent crops). Indeed, various studies have demonstrated the potential of CNNs for hyperspectral data classification compared to other techniques [38,59,61,62]. The best results were obtained with CNN using Cube-R1 (156 bands). This confirms that strong dimensionality reduction is not necessary when using DL-based methods [37].
When dealing with high dimensionality of data, methods based on ML such as RF have limitations [68]. This aspect necessitates information reduction, which inevitably has an impact on the ability to discriminate classes with similar characteristics. However, as shown in [58], better results can be obtained with focused band selection.
The processing times may not be relevant for a case like this, where only a single data set is processed. However, this aspect could become crucial for multitemporal analyses or when experiments are carried out with different sample data. Although the best results were obtained with CNN, this method is more time consuming than RF. In fact, the main disadvantages of this method are the computational time and computer power consumption [59,69]. However, it is important to note that the algorithms have been used on open-source tools, so most of the processing/analysis is single-threaded. The time required could be significantly reduced as open-source software evolves to a multithreaded perspective.
In terms of reducing the high dimensionality of the data, the best results were obtained using Cube-R1 derived by band selection. Eliminating bands in the three regions of the spectrum where the transmittance is low is always useful [48,49,50]. Data obtained with this method preserve more than 150 bands of the original 240. It guarantees good results when used as input for neural networks, but not for RF. Feature extraction using the PCA method is essential in this case. Starting from the original Cube-O, PCA can significantly reduce data dimensions while preserving a high percentage of variance. Therefore, it is useful for discriminating between very different classes, but as stated in the previous paragraphs, it tends to overlook the few differences between similar classes.
As mentioned in the introduction, there is to date a lack of studies using PRISMA for LULC classification. The reasons are mainly due to the difficulties in accessing and managing the data. However, future improvements are expected in both data availability, partly due to new hyperspectral missions such as EnMAP [23], and data quality, using higher-accuracy DEMs for orthorectification. This would improve the reflectance accuracy, and thus even more accurate classifications are likely. It is expected that the interest in HSI will increase soon, especially because of the possibilities and advantages in the field of natural and agricultural land classification and monitoring.
The analysis of multi-seasonal imagery will certainly be a possible future application of PRISMA hyperspectral data for land cover mapping. The most promising way seems to be the use of convolutional networks. Future tests will certainly favor CNN by including different models for the network architecture. A chance to improve the results could certainly be achieved by making use of 3D CNNs that take advantage of the combination of spatial and spectral information [70]. Also, a different dimensionality reduction method with more specific band selection may be used. This would allow for better results with ML algorithms that are less time-consuming, such as RF, or less sensitive to the Hughes phenomenon, such as SVM [71].

Author Contributions

Conceptualization, G.D., L.B. and M.N.R.; methodology, G.D. and E.C.; software, L.B. and G.D.; validation, G.D., M.P. and E.C.; investigation, G.D.; resources, L.B. and G.D.; data curation, G.D.; writing—original draft preparation, G.D.; writing—review and editing, G.D., E.C., M.P., M.N.R. and L.B.; supervision, L.B. and M.N.R.; funding acquisition, M.N.R. All authors have read and agreed to the published version of the manuscript.

Funding

This research was primarily funded by the Ministry of University and Research (MUR). This research was partially funded by the PRIN 2022 project—CUP J83C20001990005. In addition, this work was supported by the “National Biodiversity Future Center-NBFC” project code CN_000033, Decreto Direttoriale MUR n.1034 del 17 Giugno 2022. The funder had no role in conducting the research and/or during the preparation of the article.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Data are available on request. Please contact the authors at the following addresses [email protected] or [email protected].

Acknowledgments

The authors would like to thank Massimo Fagnano (Professor of Agronomy at the University of Naples Federico II), who participated in the first survey (23 October 2019) and classified the crops from the first 160 ground control points. This study was carried out using PRISMA products, of the Italian Space Agency (ASI), delivered under an ASI License to use. The authors also thank the anonymous reviewers for their helpful comments.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

ANN      Artificial neural network
ASI      Italian Space Agency
CLC      Corine Land Cover
CNN      Convolutional neural network
CUAS     Carta Utilizzazione Agricola dei Suoli (Agricultural Land Use Map of Campania Region)
Cube-O   PRISMA original 240-band hyperspectral cube
Cube-R1  Reduced 156-band hyperspectral cube
Cube-R2  Reduced 20-band hyperspectral cube
DL       Deep learning
EnMAP    Environmental Mapping and Analysis Program
EO       Earth observation
F1       F1 score
HSI      Hyperspectral images
K        Cohen's kappa coefficient
LC       Land cover
LU       Land use
ML       Machine learning
OA       Overall accuracy
PA       Producer's accuracy
PAN      Panchromatic
PRISMA   PRecursore IperSpettrale della Missione Operativa (Hyperspectral Precursor of the Application Mission)
RF       Random forest
RS       Remote sensing
SVM      Support vector machine
SWIR     Short-wavelength infrared
TOA      Top of atmosphere
UA       User's accuracy
VNIR     Visible and near-infrared

References

  1. Landgrebe, D. Hyperspectral Image Data Analysis as a High Dimensional Signal Processing Problem. IEEE Signal Process. Mag. 2002, 19, 17–28. [Google Scholar] [CrossRef]
  2. EUROSTAT Glossary. Available online: https://ec.europa.eu/eurostat/statistics-explained/index.php?title=Thematic_glossaries (accessed on 31 May 2023).
  3. Brivio, P.A.; Lechi, G.; Zilioli, E. Principi e Metodi Di Telerilevamento; CittàStudi: Milan, Italy, 2006; ISBN 978-88-251-7293-5. [Google Scholar]
  4. Townshend, J.R. Global Data Sets for Land Applications from the Advanced Very High Resolution Radiometer: An Introduction. Int. J. Remote Sens. 1994, 15, 3319–3332. [Google Scholar] [CrossRef]
  5. Pfeifer, M.; Disney, M.; Quaife, T.; Marchant, R. Terrestrial Ecosystems from Space: A Review of Earth Observation Products for Macroecology Applications. Glob. Ecol. Biogeogr. 2012, 21, 603–624. [Google Scholar] [CrossRef]
  6. Tassi, A.; Vizzari, M. Object-Oriented LULC Classification in Google Earth Engine Combining SNIC, GLCM, and Machine Learning Algorithms. Remote Sens. 2020, 12, 3776. [Google Scholar] [CrossRef]
  7. Jia, X. Field Guide to Hyperspectral/Multispectral Image Processing; SPIE: Bellingham, WA, USA, 2022; ISBN 978-1-5106-5215-6. [Google Scholar] [CrossRef]
  8. Sarath, T. A Study on Hyperspectral Remote Sensing Classifications. Int. J. Comput. Appl. 2014, 6. Available online: https://www.ijcaonline.org/proceedings/icict/number3/17974-1422 (accessed on 10 July 2023).
  9. Jamali, A. Evaluation and Comparison of Eight Machine Learning Models in Land Use/Land Cover Mapping Using Landsat 8 OLI: A Case Study of the Northern Region of Iran. SN Appl. Sci. 2019, 1, 1448. [Google Scholar] [CrossRef]
  10. Ma, L.; Liu, Y.; Zhang, X.; Ye, Y.; Yin, G.; Johnson, B.A. Deep Learning in Remote Sensing Applications: A Meta-Analysis and Review. ISPRS J. Photogramm. Remote Sens. 2019, 152, 166–177. [Google Scholar] [CrossRef]
  11. Lv, W.; Wang, X. Overview of Hyperspectral Image Classification. J. Sens. 2020, 2020, 4817234. [Google Scholar] [CrossRef]
  12. Makantasis, K.; Karantzalos, K.; Doulamis, A.; Doulamis, N. Deep Supervised Learning for Hyperspectral Data Classification through Convolutional Neural Networks. In Proceedings of the 2015 IEEE International Geoscience and Remote Sensing Symposium (IGARSS), Milan, Italy, 26–31 July 2015; pp. 4959–4962. [Google Scholar]
  13. Richards, J.A.; Jia, X. Remote Sensing Digital Image Analysis: An Introduction, 4th ed.; Springer: Berlin/Heidelberg, Germany, 2006; ISBN 978-3-540-25128-6. [Google Scholar] [CrossRef]
  14. Govender, M.; Chetty, K.; Bulcock, H. A Review of Hyperspectral Remote Sensing and Its Application in Vegetation and Water Resource Studies. Water SA 2009, 33, 145–151. [Google Scholar] [CrossRef]
  15. Rast, M.; Painter, T.H. Earth Observation Imaging Spectroscopy for Terrestrial Systems: An Overview of Its History, Techniques, and Applications of Its Missions. Surv. Geophys. 2019, 40, 303–331. [Google Scholar] [CrossRef]
  16. Rasti, B.; Hong, D.; Hang, R.; Ghamisi, P.; Kang, X.; Chanussot, J.; Benediktsson, J.A. Feature Extraction for Hyperspectral Imagery: The Evolution From Shallow to Deep: Overview and Toolbox. IEEE Geosci. Remote Sens. Mag. 2020, 8, 60–88. [Google Scholar] [CrossRef]
  17. Capolupo, A.; Kooistra, L.; Berendonk, C.; Boccia, L.; Suomalainen, J. Estimating Plant Traits of Grasslands from UAV-Acquired Hyperspectral Images: A Comparison of Statistical Approaches. IJGI 2015, 4, 2792–2820. [Google Scholar] [CrossRef]
  18. Tagliabue, G.; Boschetti, M.; Bramati, G.; Candiani, G.; Colombo, R.; Nutini, F.; Pompilio, L.; Rivera-Caicedo, J.P.; Rossi, M.; Rossini, M.; et al. Hybrid Retrieval of Crop Traits from Multi-Temporal PRISMA Hyperspectral Imagery. ISPRS J. Photogramm. Remote Sens. 2022, 187, 362–377. [Google Scholar] [CrossRef]
  19. Filchev, L. Satellite Hyperspectral Earth Observation Missions—A Review. Aerosp. Res. Bulg. 2014, 26, 191–207. [Google Scholar]
  20. Ungar, S.G.; Pearlman, J.S.; Mendenhall, J.A.; Reuter, D. Overview of the Earth Observing One (EO-1) Mission. IEEE Trans. Geosci. Remote Sens. 2003, 41, 1149–1159. [Google Scholar] [CrossRef]
  21. Barnsley, M.J.; Settle, J.J.; Cutter, M.A.; Lobb, D.R.; Teston, F. The PROBA/CHRIS Mission: A Low-Cost Smallsat for Hyperspectral Multiangle Observations of the Earth Surface and Atmosphere. IEEE Trans. Geosci. Remote Sens. 2004, 42, 1512–1520. [Google Scholar] [CrossRef]
  22. Guanter, L.; Kaufmann, H.; Segl, K.; Foerster, S.; Rogass, C.; Chabrillat, S.; Kuester, T.; Hollstein, A.; Rossner, G.; Chlebek, C.; et al. The EnMAP Spaceborne Imaging Spectroscopy Mission for Earth Observation. Remote Sens. 2015, 7, 8830–8857. [Google Scholar] [CrossRef]
  23. Storch, T.; Honold, H.-P.; Chabrillat, S.; Habermeyer, M.; Tucker, P.; Brell, M.; Ohndorf, A.; Wirth, K.; Betz, M.; Kuchler, M.; et al. The EnMAP Imaging Spectroscopy Mission towards Operations. Remote Sens. Environ. 2023, 294, 113632. [Google Scholar] [CrossRef]
  24. Caporusso, G.; Ettore, L.; Rino, L.; Rosa, L.; Rocchina, G.; Girolamo, D.M.; Patrizia, S. The Hyperspectral Prisma Mission in Operations. In Proceedings of the IGARSS 2020–2020 IEEE International Geoscience and Remote Sensing Symposium, Waikoloa, HI, USA, 26 September 2020; pp. 3282–3285. [Google Scholar]
  25. Cogliati, S.; Sarti, F.; Chiarantini, L.; Cosi, M.; Lorusso, R.; Lopinto, E.; Miglietta, F.; Genesio, L.; Guanter, L.; Damm, A.; et al. The PRISMA Imaging Spectroscopy Mission: Overview and First Performance Analysis. Remote Sens. Environ. 2021, 262, 112499. [Google Scholar] [CrossRef]
  26. Hong, D.; Yokoya, N.; Chanussot, J.; Xu, J.; Zhu, X.X. Learning to Propagate Labels on Graphs: An Iterative Multitask Regression Framework for Semi-Supervised Hyperspectral Dimensionality Reduction. ISPRS J. Photogramm. Remote Sens. 2019, 158, 35–49. [Google Scholar] [CrossRef]
  27. Alonso, M.C.; Malpica, J.A.; de Agirre, A.M. Consequences of the Hughes Phenomenon on some Classification Techniques. In Proceedings of the ASPRS 2011 Annual Conference, Milwaukee, WI, USA, 1–5 May 2011. [Google Scholar]
  28. Shaik, R.U.; Laneve, G.; Fusilli, L. An Automatic Procedure for Forest Fire Fuel Mapping Using Hyperspectral (PRISMA) Imagery: A Semi-Supervised Classification Approach. Remote Sens. 2022, 14, 1264. [Google Scholar] [CrossRef]
  29. Amici, S.; Piscini, A. Exploring PRISMA Scene for Fire Detection: Case Study of 2019 Bushfires in Ben Halls Gap National Park, NSW, Australia. Remote Sens. 2021, 13, 1410. [Google Scholar] [CrossRef]
  30. Tripathi, P.; Garg, R.D. Feature Extraction of Desis and Prisma Hyperspectral Remote Sensing Datasets for Geological Applications. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2021, 44, 169–173. [Google Scholar] [CrossRef]
  31. Kokhanovsky, A.; Di Mauro, B.; Colombo, R. Snow Surface Properties Derived from PRISMA Satellite Data over the Nansen Ice Shelf (East Antarctica). Front. Environ. Sci. 2022, 10, 1420. [Google Scholar] [CrossRef]
  32. Cavalli, R.M. The Weight of Hyperion and PRISMA Hyperspectral Sensor Characteristics on Image Capability to Retrieve Urban Surface Materials in the City of Venice. Sensors 2023, 23, 454. [Google Scholar] [CrossRef] [PubMed]
  33. Guanter, L.; Irakulis-Loitxate, I.; Gorroño, J.; Sánchez-García, E.; Cusworth, D.H.; Varon, D.J.; Cogliati, S.; Colombo, R. Mapping Methane Point Emissions with the PRISMA Spaceborne Imaging Spectrometer. Remote Sens. Environ. 2021, 265, 112671. [Google Scholar] [CrossRef]
  34. Spiller, D.; Ansalone, L.; Carotenuto, F.; Mathieu, P.P. Crop Type Mapping Using Prisma Hyperspectral Images and One-Dimensional Convolutional Neural Network. In Proceedings of the 2021 IEEE International Geoscience and Remote Sensing Symposium IGARSS, Brussels, Belgium, 11 July 2021; pp. 8166–8169. [Google Scholar] [CrossRef]
35. Vangi, E.; D'Amico, G.; Francini, S.; Giannetti, F.; Lasserre, B.; Marchetti, M.; Chirici, G. The New Hyperspectral Satellite PRISMA: Imagery for Forest Types Discrimination. Sensors 2021, 21, 1182.
36. Pepe, M.; Pompilio, L.; Gioli, B.; Busetto, L.; Boschetti, M. Detection and Classification of Non-Photosynthetic Vegetation from PRISMA Hyperspectral Data in Croplands. Remote Sens. 2020, 12, 3903.
37. Yang, H.; Chen, M.; Wu, G.; Wang, J.; Wang, Y.; Hong, Z. Double Deep Q-Network for Hyperspectral Image Band Selection in Land Cover Classification Applications. Remote Sens. 2023, 15, 682.
38. Paoletti, M.E.; Haut, J.M.; Plaza, J.; Plaza, A. Deep Learning Classifiers for Hyperspectral Imaging: A Review. ISPRS J. Photogramm. Remote Sens. 2019, 158, 279–317.
39. Giordano, A.; Caprio, A.; Natale, M. Terra di Lavoro; Guida Editori: Bisignano, Italy, 2003; ISBN 978-88-7188-774-6.
40. Data obtained by the authors through the elaboration of shapefiles of the agricultural land use map of the year 2009, open data available on the geo-portal of the Campania Region. Available online: https://sit2.regione.campania.it/content/carta-utilizzazione-agricola-dei-suoli (accessed on 26 April 2023).
41. Data obtained by the authors through the elaboration of shapefiles available on the Copernicus Land open data portal, taken from the Corine project map for the year 2018, not yet validated. Available online: https://land.copernicus.eu/pan-european/corine-land-cover (accessed on 26 April 2023).
42. PRISMA Technical Specification Documents. Available online: https://prisma.asi.it/missionselect/docs/ (accessed on 9 January 2023).
43. Guarini, R.; Loizzo, R.; Facchinetti, C.; Longo, F.; Ponticelli, B.; Faraci, M.; Dami, M.; Cosi, M.; Amoruso, L.; De Pasquale, V.; et al. PRISMA Hyperspectral Mission Products. In Proceedings of the IGARSS 2018–2018 IEEE International Geoscience and Remote Sensing Symposium, Valencia, Spain, 22–27 July 2018; pp. 179–182.
44. Farr, T.G.; Rosen, P.A.; Caro, E.; Crippen, R.; Duren, R.; Hensley, S.; Kobrick, M.; Paller, M.; Rodriguez, E.; Roth, L.; et al. The Shuttle Radar Topography Mission. Rev. Geophys. 2007, 45, RG2004.
45. Merryman Boncori, J.P. Caveats Concerning the Use of SRTM DEM Version 4.1 (CGIAR-CSI). Remote Sens. 2016, 8, 793.
46. Capolupo, A. Improving the Accuracy of Global DEM of Differences (DoD) in Google Earth Engine for 3-D Change Detection Analysis. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2021, 14, 12332–12347.
47. The Beta Version of Prisma Tool Is Available for Free Download. Available online: https://www.larp.unina.it/.
48. Zhang, W.; Li, X.; Zhao, L. Band Priority Index: A Feature Selection Framework for Hyperspectral Imagery. Remote Sens. 2018, 10, 1095.
49. Sawant, S.; Prabukumar, M. A Survey of Band Selection Techniques for Hyperspectral Image Classification. J. Spectr. Imaging 2020, 9, a5.
50. Flynn, K.C.; Frazier, A.E.; Admas, S. Nutrient Prediction for Tef (Eragrostis Tef) Plant and Grain with Hyperspectral Data and Partial Least Squares Regression: Replicating Methods and Results across Environments. Remote Sens. 2020, 12, 2867.
51. Jolliffe, I.T. Principal Component Analysis and Factor Analysis. In Principal Component Analysis; Jolliffe, I.T., Ed.; Springer Series in Statistics; Springer: New York, NY, USA, 1986; pp. 115–128. ISBN 978-1-4757-1904-8.
52. Qgis Official Website. Available online: https://www.qgis.org/en/site/about (accessed on 12 June 2023).
53. Lyngdoh, R.B.; Sahadevan, A.S.; Ahmad, T.; Rathore, P.S.; Mishra, M.; Gupta, P.K.; Misra, A. AVHYAS: A Free and Open Source QGIS Plugin for Advanced Hyperspectral Image Analysis. In Proceedings of the 2021 International Conference on Emerging Techniques in Computational Intelligence (ICETCI), Hyderabad, India, 25–27 August 2021.
54. van der Linden, S.; Rabe, A.; Held, M.; Jakimow, B.; Leitão, P.J.; Okujeni, A.; Schwieder, M.; Suess, S.; Hostert, P. The EnMAP-Box—A Toolbox and Application Programming Interface for EnMAP Data Processing. Remote Sens. 2015, 7, 11249–11266.
55. QField Official Website. Available online: https://docs.qfield.org/reference/qfieldcloud/concepts/ (accessed on 9 August 2023).
56. Breiman, L. Random Forests. Mach. Learn. 2001, 45, 5–32.
57. Lim, J.; Kim, K.-M.; Jin, R. Tree Species Classification Using Hyperion and Sentinel-2 Data with Machine Learning in South Korea and China. ISPRS Int. J. Geo-Inf. 2019, 8, 150.
58. Naidoo, L.; Cho, M.A.; Mathieu, R.; Asner, G. Classification of Savanna Tree Species, in the Greater Kruger National Park Region, by Integrating Hyperspectral and LiDAR Data in a Random Forest Data Mining Environment. ISPRS J. Photogramm. Remote Sens. 2012, 69, 167–179.
59. Sothe, C.; De Almeida, C.M.; Schimalski, M.B.; La Rosa, L.E.C.; Castro, J.D.B.; Feitosa, R.Q.; Dalponte, M.; Lima, C.L.; Liesenberg, V.; Miyoshi, G.T.; et al. Comparative Performance of Convolutional Neural Network, Weighted and Conventional Support Vector Machine and Random Forest for Classifying Tree Species Using Hyperspectral and Photogrammetric Data. GIScience Remote Sens. 2020, 57, 369–394.
60. Wilson, D.R.; Martinez, T.R. The Need for Small Learning Rates on Large Problems. In Proceedings of the IJCNN'01 International Joint Conference on Neural Networks (Cat. No.01CH37222), Washington, DC, USA, 15–19 July 2001; Volume 1, pp. 115–119.
61. Pandian, J.A.; Gupta, S.K.; Kumar, R.; Hazra, S.; Kanchanadevi, K. Classification of Land Cover Hyperspectral Images Using Deep Convolutional Neural Network. In Advanced Computing and Intelligent Technologies; Shaw, R.N., Das, S., Piuri, V., Bianchini, M., Eds.; Lecture Notes in Electrical Engineering; Springer Nature Singapore: Singapore, 2022; Volume 914, pp. 89–97. ISBN 978-981-19297-9-3.
62. Hu, W.; Huang, Y.; Wei, L.; Zhang, F.; Li, H. Deep Convolutional Neural Networks for Hyperspectral Image Classification. J. Sens. 2015, 2015, 258619.
63. Congalton, R. Accuracy Assessment and Validation of Remotely Sensed and Other Spatial Information. Int. J. Wildland Fire 2001, 10, 321–328.
64. Schuster, C.; Foerster, M.; Kleinschmit, B. Testing the Red Edge Channel for Improving Land-Use Classifications Based on High Resolution Multi-Spectral Satellite Data. Int. J. Remote Sens. 2012, 33, 5583–5599.
65. Zhang, L.; Liu, Q.; Lin, H.; Sun, H.; Chen, S. The Land Cover Mapping with Airborne Hyperspectral Remote Sensing Imagery in Yanhe River Valley. In Proceedings of the 2010 18th International Conference on Geoinformatics, Beijing, China, 18–20 June 2010; pp. 1–5.
66. Bandyopadhyay, D.; Mukherjee, S.; Ball, J.; Vincent, G.; Coomes, D.A.; Schönlieb, C.-B. Tree Species Classification from Hyperspectral Data Using Graph-Regularized Neural Networks. arXiv 2023, arXiv:2208.08675.
67. Amato, U.; Antoniadis, A.; Carfora, M.F.; Colandrea, P.; Cuomo, V.; Franzese, M.; Pignatti, S.; Serio, C. Statistical Classification for Assessing PRISMA Hyperspectral Potential for Agricultural Land Use. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2013, 6, 615–625.
68. Friedman, J.H. On Bias, Variance, 0/1—Loss, and the Curse-of-Dimensionality. Data Min. Knowl. Discov. 1997, 1, 55–77.
69. Huang, C.; Davis, L.S.; Townshend, J.R.G. An Assessment of Support Vector Machines for Land Cover Classification. Int. J. Remote Sens. 2002, 23, 725–749.
70. Arun, S.A.; Akila, A.S. Land-Cover Classification with Hyperspectral Remote Sensing Image Using CNN and Spectral Band Selection. Remote Sens. Appl. Soc. Environ. 2023, 31, 100986.
71. Melgani, F.; Bruzzone, L. Classification of Hyperspectral Remote Sensing Images with Support Vector Machines. IEEE Trans. Geosci. Remote Sens. 2004, 42, 1778–1790.
Figure 1. This flow chart shows the main stages of the proposed methodology—authors' elaboration.
Figure 2. Location of the study area—authors' elaboration.
Figure 3. PRISMA scene in natural colors—authors' elaboration.
Figure 5. Photos representative of some of the classes of interest (ID from 9 to 18) in this study, collected during the field survey on 23 October 2019.
Figure 7. This figure shows a CNN architecture similar to the one proposed by Hu (2015)—authors' elaboration based on [62].
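To make the architecture in Figure 7 concrete, the following minimal sketch builds a per-pixel 1-D CNN in the spirit of Hu et al. (2015) [62] using Keras. It is not the implementation used in this study (the classification tests were run through the AVHYAS plugin [53]), and the kernel count, kernel width, pooling size, and dense-layer width are illustrative assumptions.

```python
# A minimal sketch, not the study's implementation: a per-pixel 1-D CNN
# broadly following Hu et al. (2015) [62]. Layer sizes are assumptions.
from tensorflow.keras import layers, models

N_BANDS = 156    # spectral bands retained in the PRISMA cube (Cube-R1)
N_CLASSES = 18   # land cover classes used in this study

model = models.Sequential([
    layers.Input(shape=(N_BANDS, 1)),                      # one spectral vector per pixel
    layers.Conv1D(20, kernel_size=11, activation="tanh"),  # spectral feature extraction
    layers.MaxPooling1D(pool_size=3),                      # downsample the spectral axis
    layers.Flatten(),
    layers.Dense(100, activation="tanh"),                  # fully connected layer
    layers.Dense(N_CLASSES, activation="softmax"),         # one score per class
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```

Training would then call model.fit on labeled pixel spectra of shape (n_pixels, 156, 1) with integer class labels.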
Figure 8. RF, ANN, and CNN comparison in terms of F1 score for classes of interest (ID from 9 to 18).
Figure 9. Random forest classification results. The white rectangle corresponds to the detail shown in Figure 10.
Figure 10. This figure shows a detail of Figure 3 corresponding to the white rectangle. The visual comparison is between (a) the pixels used as labeled data to train the classifier and test the results and (b) the detail of the RF classification results.
Figure 11. Artificial neural network classification results. The white rectangle corresponds to the detail shown in Figure 12.
Figure 12. This figure shows a detail of Figure 3 corresponding to the white rectangle. The visual comparison is between (a) the pixels used as labeled data to train the classifier and test the results and (b) the detail of the ANN classification results.
Figure 13. Convolutional neural network classification map. The white rectangle corresponds to the detail shown in Figure 14.
Figure 14. This figure shows a detail of Figure 3 corresponding to the white rectangle. The visual comparison is between (a) the pixels used as labeled data to train the classifier and test the results and (b) the detail of the CNN classification results.
Table 1. CUAS surfaces classification. Source: Campania Region 2009 [40].

CUAS Class | Hectares | %
Urbanized environment and artificial surfaces | 25,373 | 28.30%
Orchards and small fruit trees | 15,931 | 17.77%
Spring–summer crops—vegetable crops | 12,594 | 14.05%
Spring–summer crops—industrial crops | 8632 | 9.63%
Natural pasture and upland grassland | 4401 | 4.91%
Vernal fall crops—cereals | 3703 | 4.13%
Deciduous forests | 3354 | 3.74%
Spring–summer crops—grain crops | 2801 | 3.12%
Permanent pasture, meadow | 2588 | 2.89%
Complex cropping and plots | 2512 | 2.80%
Olive groves | 2245 | 2.50%
Protected crops—horticultural and fruit crops | 1240 | 1.38%
Temporary crops associated with permanent crops | 1180 | 1.32%
Areas with sparse vegetation | 655 | 0.73%
Grasslands | 649 | 0.72%
Water | 508 | 0.57%
Fall green crops—tubers | 397 | 0.44%
Shrubs and bushes | 220 | 0.25%
Vineyards | 161 | 0.18%
Pastures unused or of uncertain use | 138 | 0.15%
Pasture | 127 | 0.14%
Poplar groves, willow groves, other deciduous trees | 92 | 0.10%
Bare rocks and outcrops | 82 | 0.09%
Citrus groves | 44 | 0.05%
Coniferous forests | 16 | 0.02%
Areas of natural colonization | 11 | 0.01%
Other permanent crops or orchards | 6 | 0.01%
Areas degraded by fire and other events | 3 | 0.00%
Artificially recolonized areas (reforestation) | 2 | 0.00%
Mixed broadleaf and coniferous forests | 1 | 0.00%
Tot. | 89,666 | 100%
Table 2. CLC surfaces classification. Source: Corine project 2012 [41].

CLC Class | Code_12 | Hectares | %
Non-irrigated arable land | 211 | 20,982 | 23%
Complex cultivation patterns | 242 | 16,687 | 19%
Continuous urban fabric | 111 | 14,062 | 16%
Fruit trees and berry plantations | 222 | 10,641 | 12%
Discontinuous urban fabric | 112 | 6351 | 7%
Industrial or commercial units | 121 | 3877 | 4%
Broad-leaved forest | 311 | 3215 | 4%
Permanently irrigated land | 212 | 3038 | 3%
Natural grasslands | 321 | 2399 | 3%
Olive groves | 223 | 1987 | 2%
Land principally occupied by agriculture […] | 243 | 1441 | 2%
Annual crops associated with permanent crops | 241 | 1416 | 2%
Transitional woodland–shrub | 324 | 954 | 1%
Sclerophyllous vegetation | 323 | 553 | 1%
Mineral extraction sites | 131 | 431 | 0%
Pastures | 231 | 411 | 0%
Road and rail networks and associated land | 122 | 368 | 0%
Airports | 124 | 225 | 0%
Water courses | 511 | 195 | 0%
Dump sites | 132 | 157 | 0%
Green urban areas | 141 | 141 | 0%
Construction sites | 133 | 48 | 0%
Burnt areas | 334 | 43 | 0%
Sport and leisure facilities | 142 | 26 | 0%
Coniferous forest | 312 | 18 | 0%
Tot. | | 89,666 | 100%
Table 3. Correspondence between the level III classification of the CLC project and the classes used in this study.

Code_12 | CLC Class | Matching Class | ID
111 | Continuous urban fabric | Impervious | 1
112 | Discontinuous urban fabric | Impervious | 1
121 | Industrial or commercial units | Impervious | 1
122 | Road and rail networks and associated land | Impervious | 1
124 | Airports | Impervious | 1
131 | Mineral extraction sites | Bare soil | 2
132 | Dump sites | Impervious | 1
133 | Construction sites | Impervious | 1
141 | Green urban areas * | Impervious | 1
142 | Sport and leisure facilities | Impervious | 1
211 | Non-irrigated arable land | Bare soil—crops or low vegetation ** | 2–3
212 | Permanently irrigated land | Bare soil—crops or low vegetation ** | 2–3
221 | Vineyards | Vineyards | 18
222 | Fruit trees and berry plantations | Citrus grove | 9
222 | Fruit trees and berry plantations | Apricot orchard | 10
222 | Fruit trees and berry plantations | Cherry orchard | 11
222 | Fruit trees and berry plantations | Persimmon orchard | 12
222 | Fruit trees and berry plantations | Hazelnut orchard | 13
222 | Fruit trees and berry plantations | Walnut orchard | 14
222 | Fruit trees and berry plantations | Peach orchard | 15
222 | Fruit trees and berry plantations | Poplar orchard | 16
223 | Olive groves | Olive grove | 17
231 | Pastures | Bare soil—crops or low vegetation ** | 2–3
241 | Annual crops associated with permanent crops | Bare soil—crops or low vegetation ** | 2–3
242 | Complex cultivation patterns | Bare soil—crops or low vegetation ** | 2–3
243 | Land principally occupied by agriculture […] | Bare soil—crops or low vegetation ** | 2–3
311 | Broad-leaved forest | Woods and forest | 4
312 | Coniferous forest | Woods and forest | 4
321 | Natural grasslands | Bare soil—crops or low vegetation ** | 2–3
323 | Sclerophyllous vegetation | Woods and forest | 4
324 | Transitional woodland–shrub | Woods and forest | 4
334 | Burnt areas | Bare soil | 2
511 | Water courses | Water | 7
– | – | Clouds | 5
– | – | Shadows | 6

* Given the 30 × 30 m pixel size of the PRISMA image, this class was assigned to the impervious class. ** Since this study was not based on multitemporal analysis, some CLC classes (Code_12: 211, 212, 231, 241, 242, 243, 321) may correspond either to class "ID 2—Bare soil" or to class "ID 3—Crops or low vegetation", depending on whether vegetation cover was present at the time of PRISMA image acquisition (e.g., some areas had been tilled with agricultural machinery). The two classes were kept separate during classification, but they must be aggregated to compare them with the CLC classes in this table, as in the sketch below.
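To make the aggregation concrete, here is a minimal sketch assuming integer class IDs as in this table; the mapping name and helper function are hypothetical, not part of the study's toolchain.

```python
# Hypothetical helper (not part of the study's toolchain): fold class
# "3—Crops or low vegetation" into class "2—Bare soil" so that per-pixel
# results can be compared with the CLC classes listed in Table 3.
MERGE_FOR_CLC = {3: 2}

def to_clc_comparable(class_id: int) -> int:
    """Return the class ID to use when comparing against CLC level III."""
    return MERGE_FOR_CLC.get(class_id, class_id)

assert to_clc_comparable(3) == to_clc_comparable(2) == 2  # classes 2-3 merge
assert to_clc_comparable(17) == 17                        # all others unchanged
```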
Table 5. Surfaces (ha) of labeled data distinguished by class and group: A—ground truths surveyed in 2019; B—ground truths surveyed in 2023; C—labeled data obtained by photointerpretation.

ID | Class | A | B | C | Tot.
1 | Impervious | 8.23 | 0.00 | 203.85 | 212.08
2 | Bare soil | 26.35 | 8.75 | 86.81 | 121.90
3 | Temporary crops | 19.83 | 1.29 | 47.29 | 68.41
4 | Woods and forest | 6.49 | 0.82 | 42.60 | 49.91
5 | Clouds | 0.00 | 0.00 | 176.35 | 176.35
6 | Shadows | 0.00 | 0.00 | 217.53 | 217.53
7 | Waters | 0.52 | 0.00 | 17.90 | 18.42
8 | Greenhouses | 1.97 | 0.00 | 27.65 | 29.62
9 | Citrus groves | 0.00 | 2.88 | 0.00 | 2.88
10 | Apricot orchard | 0.73 | 9.28 | 0.00 | 10.00
11 | Cherry orchard | 0.00 | 4.34 | 0.00 | 4.34
12 | Persimmon orchard | 4.53 | 8.82 | 0.00 | 13.35
13 | Hazelnut orchard | 0.00 | 18.14 | 0.00 | 18.14
14 | Walnut orchard | 5.99 | 12.61 | 0.00 | 18.60
15 | Peach orchard | 4.87 | 1.44 | 0.00 | 6.32
16 | Poplar orchard | 2.16 | 2.73 | 0.00 | 4.89
17 | Olive groves | 11.29 | 5.40 | 0.00 | 16.70
18 | Vineyard | 1.11 | 3.68 | 0.00 | 4.79
Tot. | | 94.07 | 80.18 | 819.98 | 994.23
Table 6. RF, ANN, and CNN comparison.

Classifier | Input Data | OA | K | F1 | Hours
RF | Cube-R2 (20 PCA) | 0.89 | 0.87 | 0.60 | 0.1
ANN | Cube-R1 (156 bands) | 0.96 | 0.96 | 0.77 | 67.2
CNN | Cube-R1 (156 bands) | 0.97 | 0.97 | 0.84 | 2.4
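The overall metrics in Tables 6, 7, 9 and 11 can be reproduced from any pair of reference and predicted label vectors. Below is a minimal sketch with scikit-learn, using toy labels rather than the study's data and assuming a macro-averaged F1.

```python
# A minimal sketch, not the study's pipeline: OA, kappa, and macro F1
# computed from toy reference/predicted label vectors.
import numpy as np
from sklearn.metrics import accuracy_score, cohen_kappa_score, f1_score

y_true = np.array([1, 1, 2, 3, 3, 9, 17, 18])  # stand-in reference labels
y_pred = np.array([1, 1, 2, 3, 9, 9, 17, 18])  # stand-in classifier output

oa = accuracy_score(y_true, y_pred)             # overall accuracy (OA)
k = cohen_kappa_score(y_true, y_pred)           # kappa coefficient (K)
f1 = f1_score(y_true, y_pred, average="macro")  # unweighted mean of per-class F1
print(f"OA={oa:.3f}  K={k:.3f}  F1={f1:.3f}")
```

The macro-average assumption can be checked against the per-class tables: the unweighted mean of the 18 per-class F1 scores in Table 12 is 84.2%, matching the 0.842 reported for the CNN.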
Table 7. RF classification results, overall evaluation.

Input | n. Trees | OA | K | F1 | Hours
Cube-R2 | 500 | 0.887 | 0.867 | 0.603 | 0.052
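As an illustration of the configuration summarized in Table 7, here is a minimal scikit-learn sketch with synthetic stand-in spectra; the study itself ran RF through the AVHYAS plugin [53], so this is not the authors' code.

```python
# A minimal sketch, assuming synthetic stand-in data: RF on the
# PCA-reduced cube (Cube-R2, 20 components), 500 trees as in Table 7.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.random((1000, 156))         # stand-in for labeled PRISMA spectra (156 bands)
y = rng.integers(1, 19, size=1000)  # stand-in class IDs 1-18

pca = PCA(n_components=20)                                     # 156 bands -> 20 components
rf = RandomForestClassifier(n_estimators=500, random_state=0)  # 500 trees
rf.fit(pca.fit_transform(X), y)
print(rf.predict(pca.transform(X[:5])))  # per-pixel class predictions
```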
Table 8. RF classification results, per class evaluation.

ID | Class | UA [%] | PA [%] | F1 [%] | Ha
1 | Impervious | 93% | 88% | 91% | 37,203
2 | Bare soil | 92% | 96% | 94% | 4922
3 | Temporary crops | 83% | 80% | 81% | 16,671
4 | Woods and forest | 84% | 81% | 83% | 5444
5 | Clouds | 100% | 100% | 100% | 2698
6 | Shadows | 99% | 99% | 99% | 53,888
7 | Waters | 72% | 99% | 84% | 225
8 | Greenhouses | 91% | 88% | 90% | 1002
9 | Citrus groves | 27% | 69% | 39% | 103
10 | Apricot orchard | 31% | 28% | 30% | 2690
11 | Cherry orchard | 6% | 16% | 9% | 123
12 | Persimmon orchard | 40% | 38% | 39% | 2402
13 | Hazelnut orchard | 72% | 62% | 67% | 377
14 | Walnut orchard | 40% | 33% | 36% | 3539
15 | Peach orchard | 29% | 38% | 33% | 342
16 | Poplar orchard | 18% | 71% | 28% | 213
17 | Olive groves | 59% | 60% | 59% | 2566
18 | Vineyard | 14% | 78% | 24% | 78
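In Tables 8, 10 and 12, the per-class F1 score is the harmonic mean of user's accuracy (UA, i.e., precision) and producer's accuracy (PA, i.e., recall):

$$\mathrm{F1} = \frac{2 \cdot \mathrm{UA} \cdot \mathrm{PA}}{\mathrm{UA} + \mathrm{PA}}$$

As a worked check against Table 8, the citrus groves class gives F1 = 2 × 0.27 × 0.69 / (0.27 + 0.69) ≈ 0.39, i.e., the 39% reported. The low UA of most orchard classes also explains why the RF macro F1 (0.603) is much lower than its overall accuracy (0.887), which is dominated by the large impervious and shadow classes.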
Table 9. ANN classification results, overall evaluation.

Input | n. Epochs | OA | K | F1 | Hours
Cube-R1 | 100 | 0.963 | 0.956 | 0.766 | 67.166
Table 10. ANN classification results, per class evaluation.

ID | Class | UA [%] | PA [%] | F1 [%] | Ha
1 | Impervious | 100% | 98% | 99% | 34,668
2 | Bare soil | 99% | 100% | 99% | 4628
3 | Temporary crops | 94% | 96% | 95% | 13,118
4 | Woods and forest | 97% | 99% | 98% | 2500
5 | Clouds | 100% | 100% | 100% | 3247
6 | Shadows | 99% | 100% | 100% | 51,221
7 | Waters | 98% | 100% | 99% | 664
8 | Greenhouses | 96% | 100% | 98% | 1275
9 | Citrus groves | 60% | 55% | 57% | 1224
10 | Apricot orchard | 91% | 45% | 60% | 6589
11 | Cherry orchard | 29% | 50% | 36% | 297
12 | Persimmon orchard | 77% | 83% | 80% | 3375
13 | Hazelnut orchard | 98% | 79% | 88% | 1703
14 | Walnut orchard | 71% | 79% | 75% | 3296
15 | Peach orchard | 64% | 67% | 65% | 2209
16 | Poplar orchard | 35% | 55% | 43% | 862
17 | Olive groves | 87% | 87% | 87% | 2852
18 | Vineyard | 0% | 0% | 0% | 540
Table 11. CNN classification results, overall evaluation.

Input | n. Epochs | OA | K | F1 | Hours
Cube-R1 | 100 | 0.973 | 0.968 | 0.842 | 2.399
Table 12. CNN classification results, per class evaluation.

ID | Class | UA [%] | PA [%] | F1 [%] | Ha
1 | Impervious | 99% | 100% | 99% | 30,850
2 | Bare soil | 99% | 100% | 100% | 5429
3 | Temporary crops | 99% | 95% | 97% | 13,956
4 | Woods and forest | 98% | 96% | 97% | 4290
5 | Clouds | 100% | 100% | 100% | 3050
6 | Shadows | 100% | 100% | 100% | 53,105
7 | Waters | 100% | 100% | 100% | 570
8 | Greenhouses | 99% | 98% | 98% | 1733
9 | Citrus groves | 80% | 100% | 89% | 1217
10 | Apricot orchard | 53% | 71% | 61% | 3533
11 | Cherry orchard | 36% | 50% | 42% | 622
12 | Persimmon orchard | 77% | 77% | 77% | 2571
13 | Hazelnut orchard | 93% | 93% | 93% | 718
14 | Walnut orchard | 84% | 79% | 81% | 3778
15 | Peach orchard | 68% | 75% | 71% | 1778
16 | Poplar orchard | 76% | 57% | 65% | 1643
17 | Olive groves | 96% | 85% | 90% | 3396
18 | Vineyard | 40% | 86% | 55% | 2027