Article

Investigating the Effects of a Combined Spatial and Spectral Dimensionality Reduction Approach for Aerial Hyperspectral Target Detection Applications

1 Department of Electronic and Electrical Engineering, University of Strathclyde, Glasgow G1 1XW, UK
2 BAE Systems, Air Sector, Filton, Bristol BS34 7QW, UK
* Author to whom correspondence should be addressed.
Remote Sens. 2021, 13(9), 1647; https://doi.org/10.3390/rs13091647
Submission received: 26 February 2021 / Revised: 9 April 2021 / Accepted: 19 April 2021 / Published: 23 April 2021
(This article belongs to the Special Issue Feature Extraction and Data Classification in Hyperspectral Imaging)

Abstract

Target detection and classification is an important application of hyperspectral imaging in remote sensing. A wide range of algorithms for target detection in hyperspectral images have been developed in the last few decades. Given the nature of hyperspectral images, they exhibit large quantities of redundant information and are therefore compressible. Dimensionality reduction is an effective means of both compressing and denoising data. Although spectral dimensionality reduction is prevalent in hyperspectral target detection applications, the spatial redundancy of a scene is rarely exploited. By applying simple spatial masking techniques as a preprocessing step to disregard pixels of definite disinterest, the subsequent spectral dimensionality reduction process is simpler, less costly and more informative. This paper proposes a processing pipeline to compress hyperspectral images both spatially and spectrally before applying target detection algorithms to the resultant scene. The combination of several different spectral dimensionality reduction methods and target detection algorithms, within the proposed pipeline, are evaluated. We find that the Adaptive Cosine Estimator produces an improved F1 score and Matthews Correlation Coefficient when compared to unprocessed data. We also show that by using the proposed pipeline the data can be compressed by over 90% and target detection performance is maintained.

1. Introduction

Remote sensing from aerial and satellite platforms has become increasingly prevalent and is an important source of information in areas of research including disaster relief [1], determining land usage [2] and assessing vegetation health [3]. Remote sensing platforms are also often deployed in military and security applications such as change detection [4,5], target tracking [6] and classification. Target Detection (TD) from airborne imagery is a major challenge and active area of research within the disciplines of signal and image processing [7,8,9]. There have been a wide range of TD algorithms of varying complexities developed over the last few decades [10], ranging from mathematical models to those based on more intuitive approaches such as angles or distances. The most notable difficulties in aerial TD are discussed in [11] and include sensor noise effects, atmospheric attenuation and subsequent correction which can both lead to variabilities in target signature.
Depending on the system, remote sensing data can consist of high resolution RGB colour data, radar, multispectral, or hyperspectral images. The latter, while providing a great deal of useful information, often at wavelengths beyond the range of human vision, introduces a vast quantity of data which must be handled and processed. Dimensionality Reduction (DR) techniques offer methods of compressing and remapping this high dimensionality data into a reduced, and sometimes more informative, uncorrelated subspace. As hyperspectral images contain high levels of redundancy they are easily compressed using sparsity-based approaches [12] or by applying DR methods. Coupling spectral DR with TD in order to improve detection and classification rates has been covered widely in the literature [11,13,14,15,16,17,18,19] and has been shown to improve the performance of TD and classification algorithms.
In TD applications, often the targets are sparsely positioned in an imaged scene, therefore large amounts of spatial redundancy are exhibited. This spatial redundancy, like the spectral redundancy also present in hyperspectral images, can be exploited in order to attain increased performance and efficiency. In [18,19], we investigated using the Normalised Difference Vegetation Index (NDVI) as a spatial mask on the detected image in order to constrain the region of interest in the scene. In this paper, however, the spatial DR is applied prior to the calculation of the dimensionality reduced image in order to refine the subspace in which any TD is performed. NDVI and its variants are most often used in remote sensing applications to quickly and effectively assess vegetation health [3]. Other similar indices are used to detect water/snow in an image or for assessing how built upon an area is. However, such indices could be used to provide a measure of how informative a pixel may be or how likely it is to hold a target signature. Pixels are categorised as informative or non-informative with the non-informative pixels being discarded. By removing such pixels, the DR calculation can be simplified by reducing the number of samples, whilst also simplifying and suppressing the background class. As TD algorithms can be represented as a binary classification, improving the separation between target and background classes consequently improves TD performance [8]. While various information indices are commonly used in remote sensing tasks, to the best of the authors’ knowledge, they have never been used to perform spatial DR or coupled with spectral DR in this way with the aim of improving hyperspectral TD applications.
In this paper, we investigate the use of coupled spatial and spectral DR for hyperspectral TD applications. With this approach, we aim to decrease both the spatial and spectral redundancy exhibited in hyperspectral images, improving the efficiency and performance of various benchmark TD algorithms. The proposed method was tested on two hyperspectral datasets containing multiple targets in varied scenes.

2. Materials and Method

In this section, we first introduce the notation used in this paper as well as the relevant background information on each of the datasets used. Secondly, the various spectral DR methods used are introduced, followed by the spatial DR method developed for the purpose of TD. Finally, the various detection algorithms are described.

2.1. Notation

Hyperspectral images can most easily be represented as 3-dimensional datacubes, with two spatial dimensions and a third spectral dimension. Any hyperspectral image X can be represented as L individual greyscale images, each exposed at a particular wavelength or spectral band λ_l, X_l : l ∈ {1, 2, …, L}, where L represents the total number of spectral bands. Alternatively, an image, X, can be thought of as N individual pixels, each comprised of an L-dimensional vector, as seen in Equation (1):

X_{3D} = \begin{bmatrix} x_{1,1} & x_{2,1} & \cdots & x_{i,1} \\ x_{1,2} & x_{2,2} & \cdots & x_{i,2} \\ \vdots & \vdots & \ddots & \vdots \\ x_{1,j} & x_{2,j} & \cdots & x_{i,j} \end{bmatrix} \quad (1)

where i and j represent the number of columns and rows in the hyperspectral datacube X_{3D}, respectively. Generally, when applying hyperspectral image processing algorithms to images, it is desirable for the image to be in a 2-dimensional matrix form, X. This is shown in Equation (2), where each column consists of a single pixel, x_i : i ∈ {1, 2, …, N}, represented by an L-dimensional vector, as seen in Equation (3).

X_{2D} = [\, x_1, x_2, x_3, \ldots, x_i, \ldots, x_N \,] \quad (2)

x = [\, x_{\lambda_1};\ x_{\lambda_2};\ \ldots;\ x_{\lambda_L} \,] \quad (3)
The vector in Equation (3) represents a single hyperspectral pixel, or a single spectral measurement.
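As an illustration, the reshaping between the two representations can be written in a few lines of NumPy; the function and variable names below are ours and purely illustrative, not part of either dataset's conventions.

```python
import numpy as np

def cube_to_matrix(cube):
    """Convert a (rows, cols, L) datacube into an (L, N) matrix of pixel spectra."""
    rows, cols, L = cube.shape
    # Each column of the output is one pixel's L-dimensional spectrum.
    return cube.reshape(rows * cols, L).T

def matrix_to_cube(X, rows, cols):
    """Invert cube_to_matrix, restoring the two spatial dimensions."""
    L = X.shape[0]
    return X.T.reshape(rows, cols, L)

# Example with a synthetic 100 x 120 pixel image of 80 bands.
cube = np.random.rand(100, 120, 80)
X = cube_to_matrix(cube)                      # shape (80, 12000)
assert np.allclose(matrix_to_cube(X, 100, 120), cube)
```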

2.2. Image Acquisition

Images from two sources have been used to validate the techniques described here. The first dataset, "OP7", provided by BAE Systems, consists of three images acquired on 18 May 2014 from an aerial platform flying at an altitude of approximately 0.78 km. The platform used a Visible and Near-InfraRed (VNIR) hyperspectral sensor with a spectral range of roughly 400–1000 nm.
The second set of images was supplied by the UK Defence Science and Technology Laboratory (DSTL) as part of the University Defence Research Collaboration (UDRC) from the Selene trial. Part of this trial collected airborne hyperspectral imagery of large numbers of spectrally varied targets across a two-week period between the 4th and 15th of August 2014 at an altitude between 0.9 and 1.05 km. A common region from a selection of seven images captured over this period was used so as to exhibit varied targets under different environmental conditions. The camera used was also in the VNIR range, with a similar spectral range of roughly 400 nm to 1000 nm, but with fewer spectral measurements and a much higher spatial resolution than the OP7 dataset.
Sample false-colour images from each of the datasets can be seen in Figure 1 along with cropped portions of the target area indicated by a red box.

2.3. Spectral Dimensionality Reduction Techniques

Due to the high correlation between successive bands in hyperspectral images, compression and DR techniques can be readily applied. In this section, we review four of the most common techniques which we have included in this analysis.

2.3.1. Principal Component Analysis

Principal Component Analysis (PCA) [20] is a classical method of DR. It seeks to remap highly correlated data into an uncorrelated space using a set of optimal orthogonal basis vectors, or Principal Components (PC), calculated from the input data. There are multiple ways of achieving this, through both iterative and non-iterative algorithms; we have included two in this analysis, Eigenvalue Decomposition (EVD) and Non-linear Iterative Partial Least Squares (NIPALS). The EVD is a common method for performing PCA and consists of the matrix decomposition Σ = UΛU^T, where Λ is a diagonal matrix containing the eigenvalues of Σ, i.e., Λ = diag(λ_1, λ_2, …, λ_L), and the matrix U contains the related eigenvectors [u_1, u_2, …, u_L]. The eigenvalues in Λ are ordered such that λ_1 > λ_2 > … > λ_L, hence the K largest eigenvalues correspond to the first K eigenvectors. These first K eigenvectors, or PCs, can be used as a set of basis vectors to transform the original data into an uncorrelated K-dimensional subspace, where K < L, which represents the most significant information contained in the data.
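A minimal sketch of EVD-based PCA is given below, assuming the image is already in the (L × N) matrix form of Equation (2); it is illustrative rather than a reproduction of any particular implementation used in this work.

```python
import numpy as np

def pca_evd(X, K):
    """Project an (L, N) matrix of pixel spectra onto its first K principal components."""
    mean = X.mean(axis=1, keepdims=True)
    Xc = X - mean                                  # mean-centre each band
    cov = (Xc @ Xc.T) / (Xc.shape[1] - 1)          # L x L covariance matrix
    eigvals, eigvecs = np.linalg.eigh(cov)         # eigh returns ascending eigenvalues
    order = np.argsort(eigvals)[::-1]              # reorder to descending variance
    U_K = eigvecs[:, order[:K]]                    # first K eigenvectors (PCs)
    scores = U_K.T @ Xc                            # K x N dimensionality reduced data
    return scores, U_K, mean
```

Reference target spectra can be projected with the same U_K and mean so that detection is carried out in the same reduced subspace as the image.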
In some cases, such as when the desired number of retained components is known in advance, it is unnecessary, and therefore preferable to avoid, calculating every PC as an EVD requires. In these cases, iterative techniques can be used to calculate each successive PC in turn until the required number, K, has been reached. The NIPALS algorithm can be used to achieve this and consists of the decomposition X = TP^T, where X is some mean-centred matrix, the columns of T are the scores and the columns of P are the loadings. P forms an optimal transform matrix which can be used in an identical manner to the matrix of eigenvectors from an EVD in transforming input data into a dimensionality reduced subspace. An overview of the NIPALS algorithm can be found in [21].
In testing, both the EVD and NIPALS algorithms produced PCs with identical magnitudes, although some exhibited opposite polarity, as the sign of each orthogonal basis vector is arbitrary. The EVD does not need to converge and is therefore faster while producing minimal error. With this in mind, only the EVD was used to perform PCA-based DR.

2.3.2. Maximum Noise Fraction

The Maximum Noise Fraction (MNF) [22] transform is similar in operation to PCA but also accounts for the noise present in input data [23]. Rather than ordering the PCs of an input image, X, by their variance, as in PCA, they are instead sorted by their estimated Signal-to-Noise Ratio (SNR). In MNF, it can be assumed that the covariance of the data, Σ, is a sum of the covariance of the signal, Σ_s, and the covariance of the noise, Σ_n, i.e., Σ = Σ_s + Σ_n. The MNF transform seeks to maximise the calculated eigenvalues with respect to the estimated SNR and can be interpreted as two separate PCAs computed in turn, the first to noise-whiten the data and the second to calculate the PCs. The complete MNF algorithm is described in [22].
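The following sketch illustrates this two-stage interpretation, under the common simplifying assumption that the noise covariance can be estimated from differences between horizontally adjacent pixels; the full algorithm in [22] differs in detail and should be consulted for a faithful implementation.

```python
import numpy as np

def mnf(cube, K):
    """Illustrative MNF sketch: a noise-whitening PCA followed by a second PCA.
    `cube` is a (rows, cols, L) datacube; the noise covariance is approximated
    from differences between horizontally adjacent pixels (an assumption)."""
    rows, cols, L = cube.shape
    X = cube.reshape(-1, L)                              # N x L pixel matrix
    Xc = X - X.mean(axis=0)

    # Estimate the noise covariance from neighbouring-pixel differences.
    noise = (cube[:, 1:, :] - cube[:, :-1, :]).reshape(-1, L) / np.sqrt(2)
    cov_n = np.cov(noise, rowvar=False)

    # First PCA: whiten the data with respect to the estimated noise.
    w_vals, w_vecs = np.linalg.eigh(cov_n)
    whiten = w_vecs / np.sqrt(np.maximum(w_vals, 1e-12))
    Xw = Xc @ whiten

    # Second PCA: ordering the whitened components by variance now corresponds
    # to ordering by estimated SNR.
    s_vals, s_vecs = np.linalg.eigh(np.cov(Xw, rowvar=False))
    order = np.argsort(s_vals)[::-1]
    return Xw @ s_vecs[:, order[:K]]                     # N x K MNF components
```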

2.3.3. Folded Principal Component Analysis

With both PCA and MNF, as well as many other PCA-like methods, it is necessary to compute the full covariance matrix Σ. This covariance matrix is of size L × L, where L is equal to the number of spectral bands in an image; for images with high spectral resolution it can therefore be computationally expensive and time-consuming to compute. In order to circumvent this challenge, Folded Principal Component Analysis (FPCA) [24] seeks to reduce the size of the covariance matrix and also incorporate the correlation within spectra into the calculation. In order to perform FPCA, each of the N mean-centred spectral vectors, x̄, is folded into an H × W matrix, A, where H × W = L for some positive integers H and W. A partial covariance matrix can be calculated as Σ_partial = A^T A, and using each of these N partial covariance matrices the full FPCA covariance matrix can be calculated as Σ_FPCA = (1/N) Σ_{i=1}^{N} A_i^T A_i. Images can be projected into the FPCA domain by performing the EVD of Σ_FPCA and using the resultant eigenvectors to project the input data into the PC space. Auxiliary target spectra can then be folded using the same H and W and projected using the eigenvectors of Σ_FPCA, before being unfolded again to be processed in the FPCA domain.
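The folding and accumulation of the partial covariance matrices can be sketched as follows; the function name and the choice of H and W are illustrative only, and the padding behaviour discussed in Section 4 is not handled here.

```python
import numpy as np

def fpca_covariance(X, H, W):
    """Accumulate the folded (W x W) FPCA covariance matrix from an (L, N) matrix
    of mean-centred spectra, where L = H * W for chosen positive integers H, W."""
    L, N = X.shape
    assert H * W == L, "fold dimensions must satisfy H * W = L"
    cov = np.zeros((W, W))
    for i in range(N):
        A = X[:, i].reshape(H, W)      # fold the i-th spectrum into an H x W matrix
        cov += A.T @ A                 # accumulate the W x W partial covariance
    return cov / N

# The eigenvectors of this W x W matrix are then used, in place of the full
# L x L covariance eigenvectors, to project folded spectra into the FPCA domain.
```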

2.3.4. Independent Component Analysis

Independent Component Analysis (ICA) is a common method for performing Blind Source Separation (BSS) used in DR. Unlike PCA, MNF or FPCA, ICA seeks to separate an ensemble of mixed signals into a finite set of distinct sources, or Independent Components (IC). This is achieved by maximising the statistical independence of the calculated components [25]. As hyperspectral images are made up of a weighted sum of a finite set of pure spectra, or endmembers, it is possible to use ICA to separate the mixed spectra into pure spectral endmembers. There are multiple algorithms for calculating the ICs of a set of data; two of the most widely used are the FastICA algorithm [26] and the Joint Approximation Diagonalization of Eigen-matrices (JADE) algorithm [27]. In this paper, the FastICA algorithm is used instead of the JADE algorithm as it reached convergence both faster and more reliably. In order to perform ICA-based DR, the number of ICs required to represent the data needs to be estimated. This is achieved using the notion of Virtual Dimensionality (VD) [28], which estimates the number of spectrally distinct sources in the image. Using the method from [29], ICA-DR can be achieved with K ICs.
PCA and MNF are both classified as second-order statistics-based transforms, which can be insufficient in some applications [29]. ICA preserves higher-order moments, such as skewness and kurtosis, which can aid in applications that require the characterisation of subtle differences in signature, such as classification or the detection of small or rare targets. While second-order statistics may in principle fail to preserve such characterising information, this was not found to be the case in this application. Although it performs favourably when compared to other ICA algorithms such as JADE, FastICA is much slower than the other, non-iterative, DR methods listed here. This is due to the need for multiple iterations to reach convergence and is therefore an important consideration when choosing it for any practical application.
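As an illustration, ICA-based DR can be performed with an off-the-shelf FastICA implementation such as the one in scikit-learn; the value of K and the placeholder data below are assumptions, and in practice K would be set from the VD estimate rather than fixed by hand.

```python
import numpy as np
from sklearn.decomposition import FastICA

K = 20                                     # illustrative value; in practice set from VD
X = np.random.rand(5000, 80)               # placeholder (N, L) matrix of pixel spectra

ica = FastICA(n_components=K, max_iter=500, random_state=0)
scores = ica.fit_transform(X)              # N x K independent component scores

# Reference target signatures are projected with the same fitted model so that
# detection can be carried out in the IC subspace.
target_ic = ica.transform(np.random.rand(1, 80))
```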

2.4. Spatial Dimensionality Reduction Using Vegetation Indices

As well as exploiting the spectral redundancy exhibited in hyperspectral images, the spatial redundancy can also be utilised for TD, either through compression or by creating new features. By investigating the spectral properties of the scene, spatial areas of interest can be selected and areas of non-interest can be discarded from further processing, often saving large computational costs. Vegetation Indices (VI) such as NDVI and its variants are of particular interest in TD applications as they offer simple and effective methods to discriminate between vegetative and non-vegetative pixels. Three NDVI variants were selected and tested in discriminating between the desired background of vegetation and the foreground of synthetic materials to which the target objects of interest belong. Each of the methods used in this work is listed in Table 1.
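A sketch of this masking operation is given below. The band indices and threshold are scene-dependent; the values shown are placeholders rather than those used in this work.

```python
import numpy as np

def ndvi_mask(cube, red_band, nir_band, threshold):
    """Illustrative spatial-DR mask: compute a normalised difference index from
    two band indices and keep pixels below the threshold (non-vegetative)."""
    red = cube[:, :, red_band].astype(float)
    nir = cube[:, :, nir_band].astype(float)
    index = (nir - red) / (nir + red + 1e-12)      # NDVI-style ratio in [-1, 1]
    return index < threshold                        # True where the pixel is retained

# For NDVIre, the `nir_band` argument would instead point at a band on the red
# edge (roughly 700-750 nm); all values below are placeholders.
mask = ndvi_mask(np.random.rand(100, 120, 80), red_band=30, nir_band=45, threshold=0.4)
pixels_kept = int(mask.sum())
```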

2.5. Target Detection Algorithms

In this paper, five common classical methods for TD and Anomaly Detection (AD) are investigated for use in combination with spatial and spectral DR. These are the Adaptive Cosine Estimator (ACE) [34], Constrained Energy Minimisation (CEM) [13], the Spectral Angle Mapper (SAM) [35], Spectral Information Divergence (SID) [36], and the Reed-Xiaoli Detector (RXD) [37]. Each method, with the exception of the latter, is a TD algorithm and, as such, requires a priori information about the targets to be detected in the form of reference or ground truth spectra. The final method, the RXD, does not require prior information about a target; it instead finds outlying or anomalous pixels within the image and is cited as the benchmark AD algorithm [11]. Whilst other TD algorithms such as Orthogonal Subspace Projection (OSP) [38,39] are often used to good effect [40,41,42], such methods require prior knowledge of the background, which may not be fully known and can as a result hinder performance in a TD application; hence, they are omitted from this analysis. ACE in particular has been shown to achieve favourable results in similar comparisons with other TD algorithms [11,14,17,43].
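As an example of the detectors used, a minimal sketch of the ACE statistic is given below, assuming the background mean and covariance are estimated globally from the scene; the implementations used in this work follow the cited references rather than this sketch.

```python
import numpy as np

def ace_detector(X, target):
    """Adaptive Cosine Estimator sketch. X is an (L, N) matrix of pixel spectra,
    target is an (L,) reference spectrum; background statistics are estimated
    globally from the whole scene (an assumption of this sketch)."""
    mu = X.mean(axis=1, keepdims=True)
    Xc = X - mu
    s = target.reshape(-1, 1) - mu
    cov_inv = np.linalg.pinv((Xc @ Xc.T) / (Xc.shape[1] - 1))
    sTCs = float(s.T @ cov_inv @ s)
    num = (s.T @ cov_inv @ Xc) ** 2                        # (1, N) numerator terms
    den = sTCs * np.einsum('ln,ln->n', Xc, cov_inv @ Xc)   # per-pixel Mahalanobis norm
    return (num / den).ravel()                             # ACE score per pixel
```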

2.6. Performance Measures

In order to assess the performance of each of the TD algorithms, a number of measures are used in this paper. Each of the TD and AD algorithms used returns a probability or confidence measure as to whether each pixel contains a target. By varying the threshold above which a pixel is classified as a target, the behaviour and performance of a TD algorithm can be assessed. Both Receiver Operator Characteristic (ROC) curves [44] and Precision-Recall (PR) curves [45,46] are useful measures in determining an optimal operational threshold in order to maintain an acceptable False Alarm Rate (FAR). The Area Under the Curve (AUC) is a useful measure for comparing the ROC and PR behaviours of various algorithms. The ROC curve can be created by plotting the Probability of Detection (Pd) against the Probability of False Alarm (Pfa) at a series of thresholds.
Although ROC curves are a simple and effective way of rapidly visualising the performance of a classifier, it has been shown that ROC analysis can be flawed for unbalanced classes, as is the case for TD applications. In [45], it is shown that PR curves are more informative for unbalanced classes as they correctly evaluate the fraction of True Positive (TP) detections amongst the total number of positive predictions, or the precision of the classifier. Precision is calculated using the number of TP and False Positive (FP) detections, where precision = TP / (TP + FP). Recall, the fraction of TP detections amongst the total number of positive examples, is calculated using the number of False Negative (FN) detections, where recall = TP / (TP + FN). In the case of TD applications, the number of positive examples is the total number of target pixels present in an image, and PR curves can be obtained by plotting the precision of a classifier against its recall at a series of thresholds.
Along with these graph-based methods, four other measures were used to assess each of the TD algorithms. Three are commonly used in assessing binary classifier performance: the F1 score [46], the Matthews Correlation Coefficient (MCC) [47] and balanced accuracy [48]. As TD algorithms can be represented as a binary classification between a positive target class and a negative background class, these measures are applied to assess how each algorithm performs. The final metric used in this work is the visibility measure [14]. Visibility is an indication of how distinct a target is from its background, which is useful in assessing how the detection can be affected by applying DR to input image data.
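For reference, the scalar measures can be computed directly from the confusion counts at a given threshold, as in the following sketch (the visibility measure of [14] is computed separately and is not shown here).

```python
import numpy as np

def binary_measures(scores, truth, threshold):
    """Compute precision, recall, F1, balanced accuracy and MCC from detector
    scores, a boolean ground-truth map, and a chosen decision threshold."""
    pred = scores >= threshold
    tp = np.sum(pred & truth)
    fp = np.sum(pred & ~truth)
    fn = np.sum(~pred & truth)
    tn = np.sum(~pred & ~truth)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)                          # also the true positive rate
    f1 = 2 * precision * recall / (precision + recall)
    balanced_accuracy = 0.5 * (recall + tn / (tn + fp))
    mcc = (tp * tn - fp * fn) / np.sqrt(
        float(tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return precision, recall, f1, balanced_accuracy, mcc
```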

2.7. Proposed Methodology

In this paper, we are proposing a pipeline to improve TD in hyperspectral images by combining both spatial and spectral DR methods. This is achieved by performing a spatial DR on an input image, removing any vegetative, and therefore, non-target pixels, before projecting the subset of the image into a subspace using more traditional spectral DR methods. Any relevant ground truth target spectra can also be projected into the same subspace using the forward transform of each DR method. The TD can then be performed in the dimensionality reduced subspace. This pipeline is shown in Figure 2.
In previous work [18,19], both NDVI and PCA were combined to improve the performance of a hyperspectral Hit-or-Miss Transform (HMT) for use as a TD algorithm. By reducing the spatial and spectral redundancy, the computational overhead of the proposed algorithm was reduced. NDVI was used to mask the already dimensionality reduced data; however, this meant that the NDVI had no influence over the performance of the detection. When it is already known that the target is non-vegetative, the application of NDVI masking prior to the use of spectral DR improves the performance of TD algorithms because a much more informative subset of pixels is exploited. Rather than using the spectral information of vegetation in the DR calculation, which can skew the resultant basis vectors away from representing desirable signatures, it is instead overlooked, and the DR is targeted towards representing potentially more informative pixels. By suppressing the vegetative part of the background class, an improved separation between the target and remaining background can be achieved in the DR subspace. The aim is that, by reducing the number of samples in this way, the calculation of the dimensionality reduced data is not only simplified, but more useful information is also retained in potentially fewer components.
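The pipeline of Figure 2 can be summarised, reusing the illustrative helper functions sketched in the preceding subsections (ndvi_mask, pca_evd and ace_detector), as follows; the parameter values are placeholders and PCA stands in for any of the spectral DR methods.

```python
import numpy as np

def spatio_spectral_td(cube, target_spectrum, red_band, re_band, threshold, K):
    """Sketch of the proposed pipeline: spatial DR, then spectral DR, then TD."""
    rows, cols, L = cube.shape
    mask = ndvi_mask(cube, red_band, re_band, threshold)    # spatial DR (NDVIre-style mask)
    X = cube[mask].T                                        # (L, N_kept) retained pixels only
    scores, U_K, mean = pca_evd(X, K)                       # spectral DR on retained pixels
    target_dr = U_K.T @ (target_spectrum.reshape(-1, 1) - mean)
    det = ace_detector(scores, target_dr.ravel())           # TD in the reduced subspace
    out = np.zeros((rows, cols))                            # re-insert scores into the scene
    out[mask] = det
    return out
```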

3. Experimental Results

In this section, we will first investigate the optimal method of removing vegetative pixels and then discuss the achievable compression rates when combining both spectral DR and NDVI-based spatial DR. We then select the optimal detection algorithm for use with the proposed spatio-spectral DR pipeline shown in Figure 2, and present a subset of the results gathered using both the OP7 dataset and the UDRC Selene Trial data. Finally, we investigate the effects of the various spatial and spectral DR schemes combined with the chosen TD algorithm. Each of the spectral DR methods, PCA, MNF, FPCA and ICA, is tested with K = 20 components retained, and "Raw" refers to the full dimensionality image, where L = 100 for the OP7 data and L = 80 for the Selene trial images.

3.1. Selection of the Optimal Vegetation Index for Spatial Dimensionality Reduction

In order to assess which VI gave the best separation between vegetative and non-vegetative pixels, the ground truth spectra of multiple green targets from the Selene dataset as well as the average spectra of a patch of vegetation were investigated. Figure 3 shows the test image used as well as the results of each of the three VIs.
All three of the VIs achieve good separation between vegetation and most other non-vegetative background pixels. However, some of the green targets present in the scene, despite exhibiting distinctly non-vegetative spectra, can produce a ratio similar to that of the surrounding grass; this is most apparent when using the basic NDVI. The regions investigated are indicated by the blue and orange elements in Figure 3, matching the colour of the plotted spectral signatures in Figure 4. Figure 4 shows the target spectra, background spectrum, and VI bands used to calculate the ratio of each VI result, respectively.
Two of the targets from Figure 3a, green perspex (circled in blue) and green ceramic (circled in orange), were investigated for separation from the background when using VI-based spatial DR. Each of the three VIs investigated produces a ratio between the intensity of a pixel at two bands; the VI values produced by the two targets are shown in Table 2.
From Figure 3b–d, Figure 4 and Table 2, it is possible to see that NDVI and the Red Edge Normalised Difference Vegetation Index (RENDVI) have lower separability between the "green perspex" target and the background when compared with the Normalised Difference Vegetation Index (red-edge) (NDVIre). In fact, it can be observed that the green perspex target, pinpointed by the blue arrows in Figure 3b–d, is nearly indistinguishable from the background in Figure 3b, with only six of the seven targets having a low enough NDVI value to be reliably distinguished from the background. Despite having a distinct spectral profile, as shown in Figure 4a, the green perspex has an almost identical NDVI value to the background (0.48 vs. 0.53), indicating that the ratio between the two NDVI bands is nearly the same. By moving the Near-InfraRed (NIR) band into the red-edge portion of the spectrum, as is the case when using NDVIre, a much greater separation is achieved (0.09 vs. 0.39). This is due to the red-edge phenomenon, where the reflectance of vegetation rises sharply as it begins to reflect NIR light. RENDVI, whilst successfully segmenting all seven targets in this example, creates a lower contrast between background and target when compared with NDVIre. As NDVIre provides the best separation between the most difficult targets and the background, it is used to implement spatial DR in this paper.

3.2. Combining Spatial and Spectral DR for Hyperspectral Compression

Here we briefly discuss the effects on image size and compression when combining spatial and spectral DR techniques. NDVIre is used as a spatial mask, selecting pixels that are relevant and can be used in subsequent spectral DR and TD processes. By masking certain pixels, they can be discarded from further processing, reducing the sample size. Then, by performing spectral DR and retaining K components from L spectral bands, the sample size is reduced further. By combining the remaining spatial and spectral components, a compressed representation of the relevant data is retained for further processing. Table 3 details the size of each of the images used in this paper, as well as their compressed spatial and spectral sizes, along with the percentage of the original data retained after compression.
On average, the OP7 images can first be compressed to 1.72% of their original size, as NDVIre selects only a small proportion of the total pixels for further processing. By retaining K = 20 components in the subsequent spectral DR stage, this is reduced further to an average of 0.34% of their original size. The images in the Selene trial have a much higher spatial resolution, and a larger sample is retained after using the NDVIre spatial mask as a large proportion of the pixels represent non-target and non-vegetative materials, as shown in Figure 1. The pixels retained after NDVIre represent an average of 18.45% of the original image, and applying spectral DR, with K = 25, reduces this to 4.61% on average.
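The retained-data fraction reported in Table 3 follows directly from the number of masked pixels and retained components; the short sketch below uses placeholder sizes rather than values from either dataset.

```python
# All numbers here are placeholders, not values from the paper.
N, L = 1_000_000, 80           # original number of pixels and bands (assumed)
N_kept, K = 200_000, 25        # pixels surviving the mask and retained components (assumed)

spatial_fraction = N_kept / N                  # fraction kept after spatial DR alone
combined_fraction = (N_kept * K) / (N * L)     # fraction kept after both stages
print(f"{spatial_fraction:.2%} spatial, {combined_fraction:.2%} combined")
```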

3.3. Comparison of the TD Algorithms Used

Each of the detection algorithms used was individually tested for its suitability when combined with the spatial and spectral DR schemes selected. In order to validate which algorithm performed optimally, the proposed method was applied to a subset of the Selene data. First, a ROC analysis was performed, with examples of ROC curves for each combination of TD and DR algorithms shown in Figure 5 for the full spatial scene and in Figure 6 when combined with NDVIre. Figure 5 and Figure 6 show the upper left quadrant of the ROC curves in order to highlight the differences between each of the methods used.
As previously stated, ROC analysis, in isolation, is insufficient for comparing unbalanced binary classifiers [45]. However, it is interesting to note the disparity between the ROC curves from each of the TD algorithm outputs. In Figure 5, each of the algorithms used have near ideal ROC curves regardless of which spectral DR scheme is used when working on the full spatial scene. However when spatial DR is employed, only the ACE and CEM algorithms remain near ideal as seen in Figure 6. The AUC of the ROC curves increase for each spectral DR scheme when combined with NDVIre-based spatial DR and the ACE algorithm, as shown in Figure 6a. By simplifying the background, and therefore improving the covariance estimate, the ACE algorithm can achieve better separation between the known target and the estimated background. Similarly, by suppressing the background, the FIR filter weight estimation that is necessary for the CEM algorithm is simplified. This is reflected in the increased AUC values of the ROC curve when using CEM with NDVIre-based spatial DR, as shown in Figure 6b.
As well as ROC curves, PR curves were generated for each of the combinations of TD and DR algorithms with and without the NDVIre spatial DR. The PR curves of each of the TD algorithms when considering both the full spatial scene and with the application of NDVIre-based spatial DR are shown in Figure 7.
Investigating the PR curves shown in Figure 7 and the corresponding AUC values in Table 4, we see that ACE, CEM and SAM generally all yield high AUC values for each of the spectral DR schemes used. When NDVIre-based spatial DR is used in combination with the spectral DR, the AUC increases in almost every case, including on the raw data where no spectral DR is used. SID, when used on the full dimensionality data, provides a moderate AUC, which is once again improved when using NDVIre-based spatial DR. The RXD performs well when using the full data and each of the spectral DR algorithms, with the exception of PCA, where it fails to discriminate target materials. This is because, mathematically, PCA is the inverse operation of the RXD [49]. PCA exploits the redundancy of hyperspectral images by only retaining the PCs corresponding to the largest eigenvalues, whereas the RXD works by investigating the anomalous data attributed to the smaller eigenvalues, which have been discarded.
Both the ROC and PR analyses were performed on a per-target basis. The results shown in Figure 5, Figure 6 and Figure 7 and Table 4 are from the detection of a single target; however, they are generally representative of the performance over every target present in the scene. Along with the ROC and PR curves, the other performance measures detailed in Section 2.6 were calculated for each of the targets in the scene. These measures were then averaged in order to obtain an overview of each TD algorithm's general performance, the results of which can be seen in Figure 8.
Similar to the results drawn from Figure 5, Figure 6 and Figure 7, each of the TD algorithms performs well when considering the AUC of the ROC curves. ACE and CEM give the highest AUC of the PR curves, with ACE, CEM and SID each performing better when combined with spatial DR. Generally, using the spatial DR reduces the visibility, with the exception of CEM and the RXD, where it slightly increases. ACE gives the highest visibility both when considering the full scene and when using spatial DR, indicating that it is the best of the algorithms investigated at separating the target from the background. ACE and SID display the best precision, with both methods improving when using spatial DR. ACE also displays the highest balanced accuracy, F1 score and MCC of the detectors tested. For these reasons, the remaining results in this paper are generated solely using the ACE algorithm. It is interesting to note that, as well as reducing the sample size for increased efficiency in each of the detection algorithms, the performance after the application of spatial DR is generally as good as, or an improvement over, using the full scene, as seen in Figure 8.

3.4. Results on the OP7 Dataset

The first of the two datasets used in this paper was provided by BAE Systems. It consists of three images of a forest scene, each portraying a common target area from overlapping views. The target area contains three calibration panels, one grey, one black and one white, which were used as the targets of interest. Figure 9a shows a false colour representation of one of the images with all three targets present in the scene. Figure 9c shows the same image masked using the NDVIre method detailed in Section 2.4. Figure 9b,d are enlarged views of the target areas of Figure 9a,c, respectively.
Of the two datasets, OP7 is simpler as it contains fewer distinct materials and objects in the scene compared to the images from the Selene Trial dataset. The OP7 images also have a lower spatial resolution when compared to the Selene Trial data, with a Ground Sample Distance (GSD) of approximately 1 metre. As a result roughly nine pixels per target contain pure spectra.
In order to assess how each TD algorithm’s behaviour varied with the number of components retained using each DR scheme, the F1 score, MCC, balanced accuracy and visibility were calculated at various values for K between K = 10 and K = L , where L = 100 for the OP7 data as shown in Figure 10. It must be noted that, when using FPCA, the dip in performance in each measure is a consequence of an implementation limitation which results in the creation of a singular matrix. This restricts the choice of the number of retained components and is discussed further in Section 4.
As seen in Figure 10, both balanced accuracy and visibility are largely invariant to the number of components, K, retained. Interestingly though, at lower values of K, the visibility using each spectral DR method is greater than that of the raw data. Conversely, the F1 score and MCC both vary as the number of components increases towards the original spectral dimensionality of the data, both with and without the application of spatial DR. This is to be expected: in the case where K = L, the data is functionally identical, although it has been remapped, and no information has been lost in the DR operation. By using spatial DR prior to spectral DR, both the F1 score and MCC are increased above what is achieved using the raw full dimensionality data without spatial DR.
When comparing TD performance on the full spatial dimensionality images with that of the NDVIre masked images, both with and without the application of spectral DR, the F1 score and MCC both increase. However when spatial DR is applied, the average F1 score and MCC are considerably higher. The removal of the vegetative background discards highly disparate observations and simplifies the problem of separating background from target. This increases the precision of the detection as seen in Table 5 and Table 6. By reducing the complexity of the background, the targets, although more similar to the remaining background, can be separated in the subspace more easily.
The MCC, in comparison to the F1 score, is slightly higher in both spatial DR cases as it takes into account the correct identification of the true negative class. The balanced accuracy drops slightly when NDVIre is applied. As the balanced accuracy is the average of the True Positive Rate (TPR) and True Negative Rate (TNR), the decrease in the size of the True Negative (TN) class, without a corresponding proportional decrease in FP, results in a lower balanced accuracy. Despite the increase in False Positive Rate (FPR) when using NDVIre, the absolute number of FP detections decreases. It can also be seen in Figure 10 that, by removing the easily separated vegetative background using NDVIre, the visibility of the targets decreases. This occurs because the materials which remain are, on average, more similar to the targets.
Further comparisons were made by retaining 20 components from each of the spectral DR methods as this provided a good balance of performance and compression. As shown in Figure 10, K = 20 components also gave clear improvements over the raw, full dimensionality, scene when combined with spatial DR. The improvement in detection when using spatial DR can be seen in Figure 11 where there is less confusion in the detection map where NDVIre is applied, Figure 11d. The target is the brightest object in the scene in each case, indicating good separability from the background.
In order to quantify this improvement, the ROC and PR curves for both the full and spatial dimensionality reduced images are shown in Figure 12 for each spectral DR method where K = 20 .
The ROC curves in Figure 12a,b are of the ACE detection statistics on the full scene and NDVIre masked scene, respectively. The AUC of the ROC curves alone is not significant as, regardless of the spatial and spectral DR used, it remains nearly identical. The AUC of the PR curves (Figure 12c,d) when using the raw uncompressed data, PCA, MNF or ICA dimensionality reduced data increases when spatial DR is applied. However, when applying FPCA the AUC falls slightly.
The results from Figure 10, Figure 11 and Figure 12 are all calculated from a single target in order to display an example of the performance achieved. The average results for each target are shown in Table 5 when considering the full scene and in Table 6 when spatial DR has been applied.
In general, as shown in Table 5 and Table 6, the AUC of the ROC curves is similar regardless of the spectral and spatial DR used. The AUC of the PR curves varies with the spectral DR used, with each of the methods providing a moderate AUC. Generally, employing spectral DR maintains the performance when considering the full spatial scene, but when combined with spatial DR there is a slight decrease in the AUC of the PR curves. Applying NDVIre-based spatial DR improves the AUC when considering the full dimensionality data. The precision of the spatial DR coupled methods is increased in comparison to using the full spatial scene, as certain false positives are removed either directly via the masking operation or indirectly by improving the spectral DR calculation. The recall drops slightly; however, this may not be significant in TD applications as one pixel on target can be sufficient for the identification and classification of an object of interest. Figure 10 shows a drop in the visibility and balanced accuracy measures when applying spatial DR which is consistent for each of the targets, as shown in Figure 13.
The visibility drops significantly when using the spatial DR as the highly dissimilar vegetative background is removed, making the average background and target spectra more similar. The balanced accuracy falls when using NDVIre as the TN class decreases without a corresponding drop in FP detections. The F1 score and MCC both increase when using NDVIre-based spatial DR when applied to the full dimensionality data as well as for each spectral DR scheme used. In nearly all of the measures tested, the full spectral dimensionality image with and without spatial DR performed the best of all methods on average with the application of spatial DR tending to improve the performance. Each of the spectral DR methods employed retained only 20 components of the original 100, reducing the computational complexity and cost of performing the TD while maintaining similar performance.

3.5. Results on the UDRC Selene Dataset

The second of the two datasets used in this paper was provided by DSTL. It consists of seven images of a different forest scene, containing a large concrete area with metal containers, vehicles and other objects, captured over the course of two weeks in August 2014. Each image covers a different view of this common target area, containing between five and seven calibration panels of various colours and materials, with a GSD of roughly 0.3 m.
Figure 14c shows the image masked using the NDVIre method detailed in Section 2.4 with Figure 14b,d providing an enlarged view of the target area from Figure 14a,c, respectively.
The same process of plotting the F1 score, MCC, balanced accuracy and visibility of a target from the OP7 data against the number of components, as in Figure 10, was applied to one of the target materials (green ceramic) present in the images from Figure 14. These graphs can be seen in Figure 15.
As in Figure 10 using the OP7 data, the average F1 score and MCC both increase with the number of retained components until K = L . ICA and FPCA both perform well on average at K = 20 whereas both PCA and MNF require more components to represent the data fully. Applying the spatial DR to each of the spectral DR methods improves both their F1 score and MCC regardless of the number of components retained. Similar to the results from Section 3.4, the balanced accuracy and visibility are lowered when using spatial DR because of the reduced TN class and more similar average background signature. As in the results gathered from the OP7 dataset, applying spectral DR improves the balanced accuracy and visibility of the full spatial scene at lower values of K. The remaining results were obtained with K = 20 as it provided a good balance between detector performance and compression. The results shown in Figure 15 also indicate that improved performance could be obtained at K = 40 at the expense of compression rate. It must again be noted that FPCA requires more careful consideration when selecting the value of K in order to avoid the creation of a singular matrix and subsequent drops in performance as seen in Figure 15. This is discussed in detail in Section 4.
Similar to the results obtained on the OP7 dataset in Figure 11, removing the vegetation and simplifying the background class improves separation between the synthetic background and targets. Whilst there is an overall decrease in target visibility, as the average spectra is more similar to the desired targets, there is less varied information to be represented, either in the full dimensionality image or in a dimensionality reduced subspace. This leads to less confusion in the detection image, as shown in Figure 16d, where the clutter present in the scene is less likely to be misidentified as a target, when compared to Figure 16b.
The ROC curves in Figure 17a,b are of the ACE detection statistics on the full scene and NDVIre-based spatial DR scene, respectively. The two sets of ROC curves are almost identical and do not provide definitive results, but indicate a small improvement when using the spatial DR. Comparing the PR curves in Figure 17c,d shows that when each spectral DR scheme is used in conjunction with spatial DR the AUC is increased by 10–15%.
The average results for each target in the Selene dataset are shown in Table 7 when considering the full scene and in Table 8 when spatial DR has been applied. The average performance of the ACE detector when combined with each spatial and spectral DR method used are shown in Figure 18.
In general, from Table 7 and Table 8, the AUC of both the ROC and PR curves is similar regardless of the spectral and spatial DR used. By applying NDVIre-based spatial DR, the precision of the spatial DR coupled methods increases in comparison to using the full spatial scene with the recall dropping slightly. As seen in Figure 15, there is a decrease in visibility of the target when using the spatial DR as the highly dissimilar vegetative background is removed. Figure 18 shows that, on average, the visibility drops slightly for each of the spectral DR methods when NDVIre is applied when K = 20 components are retained. The balanced accuracy also decreases slightly due to the reduction in the size of the TN class. Both the F1 score and MCC are improved when using spatial DR in all methods tested. The full dimensionality images with and without spatial DR have the best performance. However, of the spectral DR methods used, MNF, FPCA and ICA perform similarly despite retaining the equivalent of only 25% of the total spectral components. When combined with spatial DR both ICA and FPCA maintain similar levels of performance compared to the full dimensionality image with no spatial DR applied. Applying the proposed method to the Selene dataset (Figure 18 and Table 7 and Table 8) allows for improved results, however these improvements are not as significant as those achieved from the processing of the OP7 dataset. This can be attributed to the increased complexity of the Selene trial images when compared to the OP7 data. The performance can be improved further by retaining additional DR components, as shown in Figure 15, albeit at the expense of compression and therefore at an increased computational cost.

4. Discussion

The proposed NDVIre-based spatial DR is relatively simple, requiring information from only two wavelengths, and can be readily applied to TD and other similar applications. By using NDVIre it is possible to detect varied spectral targets composed of metals, plastics and other synthetic materials against a vegetative background. NDVI variants allow for the discrimination between vegetative and non-vegetative pixels due to known material characteristics in the red-edge portion of the spectrum. Other VIs were not considered here, as exploiting the red-edge portion was determined to be the key component of this method; however, they may provide alternative insights and allow for better detection of additional materials in other environments. By combining both spatial and spectral DR, the computational complexity and memory requirements can be reduced whilst maintaining, or in some cases improving upon, detection performance, as shown in Figure 13 and Figure 18. Using spatial DR had little effect on the AUC of the ROC or PR curves; the main improvements came from the increased F1 score, MCC and precision. On average, there is a slight reduction in recall and balanced accuracy; however, one correctly detected and classified pixel per target may be sufficient for certain applications.
The complexity and performance of the spectral DR methods utilised varies. PCA is the simplest method used but also requires the most spectral components to be retained in order to be competitive. Applying the spatial DR and simplifying the background prior to performing spectral DR improved the performance of all methods but most notably when using PCA, which was competitive in both datasets with the addition of spatial DR. MNF can be conceptualised as two PCAs, one for noise reduction and the second to transform the noise whitened data into the reduced subspace. This extra noise removal step offers a distinct advantage when compared to PCA and allows it to perform similarly to FPCA and ICA. FPCA performed favourably in both datasets and is efficient given the simplification when calculating the partial covariance matrix. However, when using FPCA the choices of the number of components, K, and the height, H, and width, W, of the folded matrix are far more sensitive than the other methods and are subject to two rules:
  • K must be a factor of the total number of wavelengths L or
  • When selecting the folding parameters H and W, L > (H − 1)W
In any case where the first rule is true, the expression in the second rule will automatically be satisfied. H was selected to be half the value of K in order to adapt with the changing number of components. However, the folded array must be padded with zeros in order to fulfil the expression H × W = L; if these zeros form an entire row of the covariance matrix, they produce a zero component in both the projected image and target. When these interact in each of the TD algorithms, usually via an inner product, a singular matrix is formed. As matrix inverses are prevalent in the implementations of the TD algorithms used, singular matrices completely suppress the detection. This phenomenon caused the undulating behaviours present in Figure 10 and Figure 15 and informed the choice of the number of DR components used to compare each TD algorithm. ICA is the most complicated and computationally expensive method, but performed well on both datasets; only the full dimensionality data, with and without spatial DR, improved on the ICA-based methods. In general, the spectral DR methods, whilst increasing the balanced accuracy and visibility when smaller numbers of components are retained, decrease the F1 score and MCC when compared with the raw full dimensionality data. Both FPCA and ICA offer consistent and improved detection when combined with ACE and NDVIre-based spatial DR. In general, the most impressive results are obtained using the ACE TD algorithm, which corroborates the conclusions of other similar works investigating this topic [11,14,17].
The methods detailed here offered improvements to the TD performance on both datasets considered; however, greater improvements were obtained on the simpler dataset. Increasing the number of spectral DR components retained, to account for the increased variability in the Selene dataset, would improve the performance, at the expense of the compression rates that can be achieved at lower values of K. On average, applying NDVIre-based spatial DR increases the precision and slightly decreases the recall of the TD algorithms used. The visibilities of the targets decrease as background pixels which are dissimilar to the targets are no longer considered; the average background signature, after applying the NDVIre-based spatial DR, becomes more similar to the target signatures. However, applying spectral DR and mapping the data into a more informative subspace can alleviate this issue.

5. Conclusions

DR is a tool often employed in various hyperspectral imaging applications, usually to reduce the number of spectral bands present in an image due to its high spectral redundancy. However, known spatial redundancies are rarely exploited. This paper provides an investigation into how spatial DR can be utilised in a TD application. We have shown that in each case tested using multiple spectral DR schemes, the addition of a spatial DR pre-processing stage improved the performance of the TD algorithm considered. By applying both spatial and spectral DR the complexity of the data is reduced and computational cost and memory requirements can be lowered.
We used robust, classical TD/AD and DR algorithms in order to assess the proposed method. The provision of a priori information gives the TD algorithms an advantage over AD algorithms such as the RXD. Whilst the RXD correctly identifies the anomalous pixels, it fails to discriminate between specific target spectra, resulting in low precision; therefore, AD is insufficient for the application we are proposing. Of the detection methods tested, the ACE algorithm performs the best both when considering the full spatial scene and when applying the NDVIre-based spatial DR, especially when combined with the FPCA and ICA DR algorithms.
We have shown that the proposed pipeline can compress an input image by >90% whilst maintaining the detector performance seen in the processing of the raw images. This pipeline is readily applicable in TD scenarios where the predominant background is comprised of vegetative pixels. The proposed method may be adapted to suppress other, highly predictable, background signatures given an appropriate index. Indices such as the built-up index could provide the inverse to NDVI and its variants masking non-vegetative pixels directly, or alternatively providing auxiliary features. Additionally, multiple indices can be generated rapidly and combined to provide additional information about the pixels in a scene. Existing indices could also be used in the detection of camouflaged objects or bespoke alternative measures may be developed.
Potential future work includes using an adaptive method for selecting the optimal number of components, K, to retain in each DR method. In PCA, MNF and FPCA, variations on scree plots [20] can be used to find the elbow point. Alternatively, the value of K at which the number of components represent a sufficient percentage of the variance in the data could be chosen. Similarly for ICA, VD [28] can be used to estimate the number of spectrally distinct sources in the image and allows for the automation of this approach.
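For example, a variance-threshold rule of the kind mentioned above could be implemented as in the following sketch; the 99% threshold is an arbitrary illustrative choice, not a recommendation from this work.

```python
import numpy as np

def choose_k_by_variance(eigvals, fraction=0.99):
    """Return the smallest K whose leading eigenvalues explain at least
    `fraction` of the total variance (a simple automated choice of K)."""
    eigvals = np.sort(eigvals)[::-1]
    cumulative = np.cumsum(eigvals) / eigvals.sum()
    k = int(np.searchsorted(cumulative, fraction) + 1)
    return min(k, len(eigvals))
```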
Although the proposed spatial DR approach has been tested on classical DR and TD/AD algorithms more state-of-the-art approaches to spectral DR could be considered as well as more complex detection algorithms. While the visibility of the target generally dropped when using spectral DR, the detection was improved and so a measure which can determine how distinctive the target is within the reduced subspace would be of benefit. Along with spectral DR other methods of spatial DR could be considered.
In order to avoid saturation of tables and results, the most informative and interesting results were included here. The full set of results generated from this work will be available online at a later date.

Author Contributions

Conceptualization, F.M., P.M. and S.M.; Methodology, F.M. and P.M.; Software, F.M.; Original Draft Preparation, F.M.; Review & Editing, H.W., P.M. and S.M.; Supervision, P.M., S.M. and H.W. All the authors contributed significantly to the research. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Acknowledgments

This work was supported by BAE Systems, the Engineering and Physical Sciences Research Council of the UK (EPSRC) Grant numbers EP/S000631/1 and EP/R512114/1 (ref. no. 1969791), and the UK MOD University Defence Research Collaboration (UDRC) in Signal Processing.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

ACE: Adaptive Cosine Estimator
AD: Anomaly Detection
AUC: Area Under the Curve
CEM: Constrained Energy Minimisation
DR: Dimensionality Reduction
EVD: Eigenvalue Decomposition
FAR: False Alarm Rate
FN: False Negative
FNR: False Negative Rate
FP: False Positive
FPCA: Folded Principal Component Analysis
FPR: False Positive Rate
GSD: Ground Sample Distance
IC: Independent Components
ICA: Independent Component Analysis
MCC: Matthews Correlation Coefficient
MNF: Maximum Noise Fraction
NDVI: Normalised Difference Vegetation Index
NDVIre: Normalised Difference Vegetation Index (red-edge)
NDWI: Normalised Difference Water Index
NDSI: Normalised Difference Snow Index
NIPALS: Non-linear Iterative Partial Least Squares
NIR: Near-InfraRed
OSP: Orthogonal Subspace Projection
Pd: Probability of Detection
Pfa: Probability of False Alarm
PC: Principal Components
PCA: Principal Component Analysis
PR: Precision-Recall
RENDVI: Red Edge Normalised Difference Vegetation Index
ROC: Receiver Operator Characteristic
RXD: Reed-Xiaoli Detector
SAM: Spectral Angle Mapper
SID: Spectral Information Divergence
SNR: Signal-to-Noise Ratio
TD: Target Detection
TN: True Negative
TNR: True Negative Rate
TP: True Positive
TPR: True Positive Rate
VD: Virtual Dimensionality
VI: Vegetation Indices
VNIR: Visible and Near-InfraRed

References

1. Van Aardt, J.A.; McKeown, D.; Faulring, J.; Raqueño, N.; Casterline, M.; Renschler, C.; Eguchi, R.; Messinger, D.; Krzaczek, R.; Cavillia, S. Geospatial disaster response during the Haiti earthquake: A case study spanning airborne deployment, data collection, transfer, processing, and dissemination. Photogramm. Eng. Remote Sens. 2011, 77, 943–952.
2. Ettehadi Osgouei, P.; Kaya, S.; Sertel, E.; Alganci, U. Separating Built-Up Areas from Bare Land in Mediterranean Cities Using Sentinel-2A Imagery. Remote Sens. 2019, 11, 345.
3. Xue, J.; Su, B. Significant remote sensing vegetation indices: A review of developments and applications. J. Sens. 2017, 2017, 1353691.
4. Theiler, J.; Kucer, M.; Ziemann, A. Experiments in anomalous change detection with the Viareggio 2013 trial dataset. In Algorithms, Technologies, and Applications for Multispectral and Hyperspectral Imagery XXVI; International Society for Optics and Photonics: Bellingham, WA, USA, 2020; Volume 11392, p. 1139211.
5. Ghaderpour, E.; Vujadinovic, T. Change Detection within Remotely Sensed Satellite Image Time Series via Spectral Analysis. Remote Sens. 2020, 12, 4001.
6. Kwan, C.; Chou, B.; Yang, J.; Tran, T. Compressive object tracking and classification using deep learning for infrared videos. In Pattern Recognition and Tracking XXX; International Society for Optics and Photonics: Washington, DC, USA, 2019; Volume 10995, p. 1099506.
7. Manolakis, D.; Truslow, E.; Pieper, M.; Cooley, T.; Brueggeman, M. Detection algorithms in hyperspectral imaging systems: An overview of practical algorithms. IEEE Signal Process. Mag. 2013, 31, 24–33.
8. Chang, C.I. Hyperspectral target detection. In Real-Time Progressive Hyperspectral Image Processing; Springer: Berlin/Heidelberg, Germany, 2016; pp. 131–172.
9. Wang, Q.; Lin, Q.; Li, M.; Tian, Q. A new target detection algorithm: Spectra sort encoding. Int. J. Remote Sens. 2009, 30, 2297–2307.
10. Nasrabadi, N.M. Hyperspectral target detection: An overview of current and future challenges. IEEE Signal Process. Mag. 2014, 31, 34–44.
11. Manolakis, D.; Lockwood, R.; Cooley, T.; Jacobson, J. Is there a best hyperspectral detection algorithm? In Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XV; International Society for Optics and Photonics: Bellingham, WA, USA, 2009; Volume 7334, p. 733402.
12. Chang, C.I.; Cao, H.; Chen, S.; Shang, X.; Yu, C.; Song, M. Orthogonal Subspace Projection-Based Go-Decomposition Approach to Finding Low-Rank and Sparsity Matrices for Hyperspectral Anomaly Detection. IEEE Trans. Geosci. Remote Sens. 2020.
13. Chang, C.I. Hyperspectral Imaging: Techniques for Spectral Detection and Classification; Springer Science & Business Media: Berlin/Heidelberg, Germany, 2003; Volume 1.
14. Jin, X.; Paswaters, S.; Cline, H. A comparative study of target detection algorithms for hyperspectral imagery. In Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XV; International Society for Optics and Photonics: Bellingham, WA, USA, 2009; Volume 7334, p. 73341W.
15. Olson, C.; Nichols, J.; Michalowicz, J.; Bucholtz, F. Improved Outlier Identification in Hyperspectral Imaging via Nonlinear Dimensionality Reduction; SPIE Defense, Security, and Sensing; SPIE: Bellingham, WA, USA, 2010; Volume 7695.
16. Manolakis, D.; Pieper, M.; Truslow, E.; Cooley, T.; Brueggeman, M.; Lipson, S. The remarkable success of adaptive cosine estimator in hyperspectral target detection. In Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XIX; International Society for Optics and Photonics: Bellingham, WA, USA, 2013; Volume 8743, p. 874302.
17. Bakken, S.; Orlandic, M.; Johansen, T.A. The effect of dimensionality reduction on signature-based target detection for hyperspectral remote sensing. In CubeSats and SmallSats for Remote Sensing III; International Society for Optics and Photonics: Bellingham, WA, USA, 2019; Volume 11131, p. 111310L.
18. Macfarlane, F.; Murray, P.; Marshall, S.; White, H. A Fast Hyperspectral Hit-or-Miss Transform with Integrated Projection-Based Dimensionality Reduction. Hyperspectral Imaging HSI 2018. Available online: https://strathprints.strath.ac.uk/65308/ (accessed on 12 November 2020).
19. Macfarlane, F.; Murray, P.; Marshall, S.; White, H. Object Detection and Classification in Aerial Hyperspectral Imagery Using a Multivariate Hit-or-Miss Transform; SPIE Defense + Commercial Sensing; SPIE: Bellingham, WA, USA, 2019; Volume 10986.
20. Abdi, H.; Williams, L.J. Principal component analysis. Wiley Interdiscip. Rev. Comput. Stat. 2010, 2, 433–459.
21. Andrecut, M. Parallel GPU implementation of iterative PCA algorithms. J. Comput. Biol. 2009, 16, 1593–1599.
22. Green, A.A.; Berman, M.; Switzer, P.; Craig, M.D. A transformation for ordering multispectral data in terms of image quality with implications for noise removal. IEEE Trans. Geosci. Remote Sens. 1988, 26, 65–74.
23. Luo, G.; Chen, G.; Tian, L.; Qin, K.; Qian, S.E. Minimum Noise Fraction versus Principal Component Analysis as a Preprocessing Step for Hyperspectral Imagery Denoising. Can. J. Remote Sens. 2016, 42, 106–116.
24. Zabalza, J.; Ren, J.; Yang, M.; Zhang, Y.; Wang, J.; Marshall, S.; Han, J. Novel Folded-PCA for improved feature extraction and data reduction with hyperspectral imaging and SAR in remote sensing. ISPRS J. Photogramm. Remote Sens. 2014, 93, 112–122.
25. Cao, L.J.; Chua, K.S.; Chong, W.K.; Lee, H.P.; Gu, Q.M. A comparison of PCA, KPCA and ICA for dimensionality reduction in support vector machine. Neurocomputing 2003, 55, 321–336.
26. Hyvärinen, A.; Oja, E. A fast fixed-point algorithm for independent component analysis. Neural Comput. 1997, 9, 1483–1492.
27. Cardoso, J.F.; Souloumiac, A. Blind beamforming for non-Gaussian signals. In IEE Proceedings F (Radar and Signal Processing); IET: London, UK, 1993; Volume 140, pp. 362–370.
28. Chang, C. A Review of Virtual Dimensionality for Hyperspectral Imagery. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2018, 11, 1285–1305.
29. Wang, J.; Chang, C. Applications of Independent Component Analysis in Endmember Extraction and Abundance Quantification for Hyperspectral Imagery. IEEE Trans. Geosci. Remote Sens. 2006, 44, 2601–2616.
30. Rouse, J.W., Jr.; Haas, R.H.; Schell, J.; Deering, D. Monitoring the Vernal Advancement and Retrogradation (Green Wave Effect) of Natural Vegetation 1973. Available online: https://core.ac.uk/download/pdf/42887948.pdf (accessed on 5 January 2021).
31. Hansen, P.; Schjoerring, J. Reflectance measurement of canopy biomass and nitrogen status in wheat crops using normalized difference vegetation indices and partial least squares regression. Remote Sens. Environ. 2003, 86, 542–553.
32. Gitelson, A.; Merzlyak, M.N. Spectral reflectance changes associated with autumn senescence of Aesculus hippocastanum L. and Acer platanoides L. leaves. Spectral features and relation to chlorophyll estimation. J. Plant Physiol. 1994, 143, 286–292.
33. Sims, D.A.; Gamon, J.A. Relationships between leaf pigment content and spectral reflectance across a wide range of species, leaf structures and developmental stages. Remote Sens. Environ. 2002, 81, 337–354.
34. Kraut, S.; Scharf, L.L.; Butler, R.W. The adaptive coherence estimator: A uniformly most-powerful-invariant adaptive detection statistic. IEEE Trans. Signal Process. 2005, 53, 427–438.
35. Kruse, F.A.; Lefkoff, A.; Boardman, J.; Heidebrecht, K.; Shapiro, A.; Barloon, P.; Goetz, A. The spectral image processing system (SIPS)—interactive visualization and analysis of imaging spectrometer data. Remote Sens. Environ. 1993, 44, 145–163.
36. Chang, C.I. Spectral information divergence for hyperspectral image analysis. In Proceedings of the IEEE 1999 International Geoscience and Remote Sensing Symposium, IGARSS'99 (Cat. No. 99CH36293), Hamburg, Germany, 28 June–2 July 1999; IEEE: Piscataway, NJ, USA, 1999; Volume 1, pp. 509–511.
37. Reed, I.S.; Yu, X. Adaptive multiple-band CFAR detection of an optical pattern with unknown spectral distribution. IEEE Trans. Acoust. Speech Signal Process. 1990, 38, 1760–1770.
38. Chang, C.I.; Chen, J. Orthogonal Subspace Projection Using Data Sphering and Low-Rank and Sparse Matrix Decomposition for Hyperspectral Target Detection. IEEE Trans. Geosci. Remote Sens. 2021, 1–19.
39. Chang, C.I. Orthogonal subspace projection (OSP) revisited: A comprehensive study and analysis. IEEE Trans. Geosci. Remote Sens. 2005, 43, 502–518.
40. Shen, S.S.; Bajorski, P.; Lewis, P.E. Impact of missing endmembers on the performance of the OSP detector for hyperspectral images. In Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XIII; International Society for Optics and Photonics: Bellingham, WA, USA, 2007.
41. Bajorski, P. Analytical Comparison of the Matched Filter and Orthogonal Subspace Projection Detectors for Hyperspectral Images. IEEE Trans. Geosci. Remote Sens. 2007, 45, 2394–2402.
42. Bajorski, P. Target Detection Under Misspecified Models in Hyperspectral Images. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2012, 5, 470–477.
43. Gross, W.; Boehler, J.; Schilling, H.; Middelmann, W.; Weyermann, J.; Wellig, P.; Oechslin, R.; Kneubühler, M. Assessment of target detection limits in hyperspectral data. In Target and Background Signatures; International Society for Optics and Photonics: Bellingham, WA, USA, 2015; Volume 9653, p. 96530J.
44. Kerekes, J. Receiver operating characteristic curve confidence intervals and regions. IEEE Geosci. Remote Sens. Lett. 2008, 5, 251–255.
45. Saito, T.; Rehmsmeier, M. The precision-recall plot is more informative than the ROC plot when evaluating binary classifiers on imbalanced datasets. PLoS ONE 2015, 10, e0118432.
46. Powers, D.M. Evaluation: From precision, recall and F-measure to ROC, informedness, markedness and correlation. arXiv 2020, arXiv:2010.16061.
47. Matthews, B.W. Comparison of the predicted and observed secondary structure of T4 phage lysozyme. Biochim. Biophys. Acta (BBA) Protein Struct. 1975, 405, 442–451.
48. Brodersen, K.H.; Ong, C.S.; Stephan, K.E.; Buhmann, J.M. The balanced accuracy and its posterior distribution. In Proceedings of the 2010 20th International Conference on Pattern Recognition, Istanbul, Turkey, 23–26 August 2010; IEEE: Piscataway, NJ, USA, 2010; pp. 3121–3124.
49. Verdoja, F.; Grangetto, M. Graph Laplacian for image anomaly detection. Mach. Vis. Appl. 2020, 31, 1–16.
Figure 1. False colour images from the datasets used in this work. (a) OP7 image. (b) Target Region of (a). (c) UDRC Selene image. (d) Target Region of (c).
Figure 2. Spatio-spectral dimensionality reduction pipeline for target detection.
Figure 3. Optimal VI experiments (a) UDRC Test Image. (b) NDVI ratio. (c) NDVIre ratio. (d) RENDVI ratio. (In (b-d) warmer colours indicate higher levels of vegetation and colder colours indicate non-vegetation.)
Figure 4. VI ratio of each of the three test spectra (a) NDVI ratio and band locations. (b) NDVIre ratio and band locations. (c) RENDVI ratio and band locations.
Figure 5. ROC Curves for each TD and spectral DR scheme pairing on the full scene. (a) ACE. (b) CEM. (c) SAM. (d) SID. (e) RXD.
Figure 6. ROC Curves for each TD and spectral DR scheme pairing in combination with spatial DR. (a) ACE. (b) CEM. (c) SAM. (d) SID. (e) RXD.
Figure 7. PR Curves for each TD and spectral DR scheme pairing in combination with spatial DR. (a) ACE. (b) CEM. (c) SAM. (d) SID. (e) RXD.
Figure 8. Average performance of each TD and DR algorithm combination when used on the full scene vs. when combined with spatial DR.
Figure 9. Example of the OP7 Dataset. (a) False colour image of the target area. (b) Enlarged version of (a). (c) Retained pixels following the NDVIre spatial masking. (d) Enlarged version of (c).
Figure 10. Performance measures using the ACE TD algorithm and each DR scheme with a varying number of retained components on the OP7 dataset.
Figure 11. Detection statistics of the ACE algorithm on the full dimensionality data where yellow indicates a response of 1 and blue represents a response of 0. (a) Without spatial DR. (b) Enlarged version of (a). (c) With NDVIre. (d) Enlarged version of (c).
Figure 12. ROC and PR curves for detecting Target 1 (grey tile) in the OP7 dataset (a) ROC curve using the full spatial dimensionality data. (b) ROC curve using spatial DR pre-processing. (c) PR curve using the full spatial dimensionality data. (d) PR curve using spatial DR pre-processing.
Figure 13. Comparison between the performance measures when combining spatial and spectral DR on the OP7 dataset.
Figure 14. Example image from the UDRC Selene Dataset. (a) False colour image of the target area. (b) Enlarged version of (a). (c) Retained pixels following the NDVIre spatial masking. (d) Enlarged version of (c).
Figure 15. Performance measures using the ACE TD algorithm and each DR scheme with a varying number of retained components on the UDRC Selene dataset.
Figure 16. Detection statistics of the ACE algorithm on the full dimensionality data where yellow indicates a response of 1 and blue represents a response of 0. (a) Without spatial DR. (b) Enlarged version of (a). (c) With NDVIre. (d) Enlarged version of (c).
Figure 17. ROC and PR curves for detecting Target 3 (green ceramic) in the Selene dataset (a) ROC curve using the full spatial dimensionality data. (b) ROC curve using spatial DR pre-processing. (c) PR curve using the full spatial dimensionality data. (d) PR curve using spatial DR pre-processing.
Figure 18. Comparison between the performance measures when combining spatial and spectral DR on the Selene dataset.
Table 1. Vegetation indices used for spatial DR.
Index | Acronym | Equation | Reference
Normalised Difference Vegetation Index | NDVI | (λ_NIR - λ_Red) / (λ_NIR + λ_Red) | Rouse et al. [30]
Normalised Difference Vegetation Index (red-edge) | NDVIre | (λ_re - λ_Red) / (λ_re + λ_Red) | Hansen & Schjoerring [31]; Ettehadi et al. [2]
Red-Edge Normalised Difference Vegetation Index | RENDVI | (λ_750 - λ_705) / (λ_750 + λ_705) | Gitelson & Merzlyak [32]; Sims & Gamon [33]
Table 2. Vegetation index ratios obtained for background and targets.
Vegetation Index | Green Perspex Ratio | Green Carpet Ratio | Background Ratio
NDVI | 0.48 | 0.14 | 0.53
NDVIre | 0.09 | 0.13 | 0.39
RENDVI | 0.10 | 0.06 | 0.28
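To make the masking step concrete, the sketch below shows one way a red-edge index could be thresholded to discard vegetation pixels before spectral DR. The band indices and the 0.25 threshold (chosen here simply to sit between the target ratios and the background ratio in Table 2) are illustrative assumptions, not the exact values used in the pipeline.

```python
import numpy as np

def ndvire_mask(cube, red_band, red_edge_band, threshold=0.25):
    """Keep only pixels whose red-edge NDVI falls below a vegetation threshold.

    cube:          (rows, cols, bands) reflectance array.
    red_band,
    red_edge_band: band indices nearest the red and red-edge wavelengths
                   (sensor dependent; the values used below are placeholders).
    Returns the boolean mask and the retained (pixels x bands) matrix that
    would be passed on to spectral DR and target detection.
    """
    red = cube[:, :, red_band].astype(float)
    red_edge = cube[:, :, red_edge_band].astype(float)
    ndvire = (red_edge - red) / (red_edge + red + 1e-12)  # guard against /0
    mask = ndvire < threshold        # True = non-vegetation pixel, retained
    return mask, cube[mask]

# Placeholder cube and band positions purely for demonstration
cube = np.random.rand(100, 100, 80)
mask, retained = ndvire_mask(cube, red_band=30, red_edge_band=40)
print(mask.sum(), retained.shape)
```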
Table 3. Achieved compression for combined spatial and spectral DR.
Image | # Samples Full | # Samples NDVIre | L | K | Spatial Comp. (%) | Spectral Comp. (%) | Total Comp. (%) | Average Comp. (%)
OP7_1 | 160,000 | 3504 | 100 | 20 | 2.19 | 20 | 0.44 | 0.34
OP7_2 | 160,000 | 2500 | 100 | 20 | 1.56 | 20 | 0.31 |
OP7_3 | 160,000 | 2232 | 100 | 20 | 1.40 | 20 | 0.28 |
IM_140804 | 3,210,191 | 649,435 | 80 | 20 | 20.23 | 25 | 5.06 | 4.61
IM_140806 | 3,839,976 | 578,674 | 80 | 20 | 15.07 | 25 | 3.77 |
IM_140807 | 3,415,052 | 689,245 | 80 | 20 | 20.18 | 25 | 5.05 |
IM_140808 | 3,015,944 | 543,569 | 80 | 20 | 18.02 | 25 | 4.51 |
IM_140812 | 4,360,159 | 610,172 | 80 | 20 | 13.99 | 25 | 3.50 |
IM_140813 | 3,301,404 | 807,262 | 80 | 20 | 24.45 | 25 | 6.11 |
IM_140815 | 3,640,769 | 626,776 | 80 | 20 | 17.22 | 25 | 4.30 |
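The compression columns in Table 3 are the percentages of samples, bands and overall data volume retained: the spatial figure is the ratio of retained to original samples, the spectral figure is K/L, and the total figure is their product (e.g., 3504/160,000 ≈ 2.19% and 2.19% × 20% ≈ 0.44%). A small sketch of this arithmetic (the function name is illustrative):

```python
def compression_figures(n_samples_full, n_samples_retained, n_bands, k):
    """Percentage of samples, bands and total data volume retained,
    as reported in Table 3."""
    spatial = 100.0 * n_samples_retained / n_samples_full
    spectral = 100.0 * k / n_bands
    total = spatial * spectral / 100.0
    return spatial, spectral, total

# OP7_1 row: 3504 of 160,000 pixels kept, 20 of 100 bands kept
print(compression_figures(160_000, 3504, 100, 20))  # approximately (2.19, 20.0, 0.44)
```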
Table 4. Comparison between the AUC of the PR curves using the full and spatial DR images.
TD Algorithm | Raw (Full) | Raw (NDVIre) | PCA (Full) | PCA (NDVIre) | MNF (Full) | MNF (NDVIre) | FPCA (Full) | FPCA (NDVIre) | ICA (Full) | ICA (NDVIre)
ACE | 0.649 | 0.7556 | 0.6038 | 0.7505 | 0.6229 | 0.7393 | 0.56 | 0.7186 | 0.5999 | 0.7507
CEM | 0.6207 | 0.6852 | 0.6208 | 0.7673 | 0.6033 | 0.6633 | 0.6124 | 0.6669 | 0.6195 | 0.6849
SAM | 0.577 | 0.6723 | 0.5127 | 0.4443 | 0.4938 | 0.0993 | 0.528 | 0.6194 | 0.6006 | 0.7507
SID | 0.5315 | 0.6112 | 0.131 | 0.3582 | 0.0187 | 0.0102 | 0.3314 | 0.2625 | 0.1809 | 0.5871
RXD | 0.5153 | 0.5086 | 0.0055 | 0.0175 | 0.5358 | 0.6604 | 0.5445 | 0.5816 | 0.5224 | 0.5049
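The ROC and PR AUC values reported in Tables 4-8 can be computed directly from the per-pixel detection statistics and the target ground truth. The sketch below uses scikit-learn as an assumed tooling choice rather than the authors' actual implementation.

```python
import numpy as np
from sklearn.metrics import roc_auc_score, precision_recall_curve, auc

def detection_aucs(scores, truth):
    """AUC of the ROC and PR curves for a per-pixel detector output.

    scores: 1-D array of detection statistics (e.g., ACE responses).
    truth:  1-D binary array, 1 for target pixels, 0 for background.
    """
    roc_auc = roc_auc_score(truth, scores)
    precision, recall, _ = precision_recall_curve(truth, scores)
    pr_auc = auc(recall, precision)
    return roc_auc, pr_auc

# Toy example with six pixels
scores = np.array([0.90, 0.80, 0.35, 0.30, 0.10, 0.05])
truth = np.array([1, 1, 0, 1, 0, 0])
print(detection_aucs(scores, truth))
```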
Table 5. Average performance measures for each target in the OP7 dataset without spatial DR applied using the ACE algorithm.
ACE-Full
Target (K = 20) | DR | AUC ROC | AUC PR | Visibility | Precision | Recall | Bacc | F1 | MCC
Grey Tile | Raw | 1.00 | 0.84 | 0.88 | 0.37 | 0.88 | 0.93 | 0.35 | 0.43
Grey Tile | PCA | 1.00 | 0.77 | 0.94 | 0.12 | 0.95 | 0.96 | 0.12 | 0.18
Grey Tile | MNF | 1.00 | 0.79 | 0.93 | 0.15 | 0.95 | 0.96 | 0.15 | 0.21
Grey Tile | FPCA | 1.00 | 0.80 | 0.92 | 0.17 | 0.94 | 0.96 | 0.17 | 0.23
Grey Tile | ICA | 1.00 | 0.78 | 0.93 | 0.16 | 0.95 | 0.96 | 0.16 | 0.23
Black Tile | Raw | 1.00 | 0.06 | 0.60 | 0.09 | 0.62 | 0.80 | 0.06 | 0.11
Black Tile | PCA | 1.00 | 0.10 | 0.68 | 0.05 | 0.72 | 0.84 | 0.05 | 0.09
Black Tile | MNF | 1.00 | 0.15 | 0.70 | 0.09 | 0.75 | 0.85 | 0.07 | 0.12
Black Tile | FPCA | 1.00 | 0.13 | 0.71 | 0.05 | 0.76 | 0.85 | 0.05 | 0.09
Black Tile | ICA | 1.00 | 0.11 | 0.67 | 0.04 | 0.72 | 0.83 | 0.05 | 0.09
White Tile | Raw | 1.00 | 0.74 | 0.79 | 0.57 | 0.79 | 0.89 | 0.53 | 0.59
White Tile | PCA | 1.00 | 0.67 | 0.93 | 0.22 | 0.94 | 0.96 | 0.28 | 0.37
White Tile | MNF | 1.00 | 0.68 | 0.85 | 0.39 | 0.86 | 0.92 | 0.41 | 0.47
White Tile | FPCA | 1.00 | 0.60 | 0.83 | 0.33 | 0.84 | 0.91 | 0.35 | 0.41
White Tile | ICA | 1.00 | 0.67 | 0.79 | 0.47 | 0.80 | 0.89 | 0.44 | 0.49
All Spectra | Raw | 1.00 | 0.55 | 0.76 | 0.34 | 0.77 | 0.87 | 0.32 | 0.37
All Spectra | PCA | 1.00 | 0.52 | 0.85 | 0.13 | 0.87 | 0.92 | 0.15 | 0.22
All Spectra | MNF | 1.00 | 0.54 | 0.83 | 0.21 | 0.85 | 0.91 | 0.21 | 0.27
All Spectra | FPCA | 1.00 | 0.51 | 0.82 | 0.18 | 0.85 | 0.91 | 0.19 | 0.25
All Spectra | ICA | 1.00 | 0.52 | 0.80 | 0.22 | 0.82 | 0.89 | 0.22 | 0.27
Table 6. Average performance measures for each target in the OP7 dataset with NDVIre-based spatial DR applied using the ACE algorithm.
ACE-NDVIre
Target (K = 20) | DR | AUC ROC | AUC PR | Visibility | Precision | Recall | Bacc | F1 | MCC
Grey Tile | Raw | 1.00 | 0.86 | 0.72 | 0.73 | 0.74 | 0.86 | 0.59 | 0.65
Grey Tile | PCA | 1.00 | 0.83 | 0.75 | 0.56 | 0.79 | 0.87 | 0.48 | 0.54
Grey Tile | MNF | 1.00 | 0.85 | 0.76 | 0.56 | 0.80 | 0.87 | 0.48 | 0.54
Grey Tile | FPCA | 1.00 | 0.84 | 0.81 | 0.54 | 0.84 | 0.90 | 0.51 | 0.57
Grey Tile | ICA | 1.00 | 0.75 | 0.73 | 0.52 | 0.77 | 0.86 | 0.43 | 0.50
Black Tile | Raw | 0.98 | 0.37 | 0.52 | 0.40 | 0.57 | 0.76 | 0.25 | 0.33
Black Tile | PCA | 0.94 | 0.08 | 0.48 | 0.06 | 0.56 | 0.74 | 0.09 | 0.14
Black Tile | MNF | 0.96 | 0.09 | 0.46 | 0.07 | 0.54 | 0.73 | 0.09 | 0.14
Black Tile | FPCA | 0.94 | 0.09 | 0.47 | 0.08 | 0.54 | 0.73 | 0.10 | 0.15
Black Tile | ICA | 0.93 | 0.08 | 0.42 | 0.05 | 0.49 | 0.71 | 0.08 | 0.12
White Tile | Raw | 0.97 | 0.66 | 0.57 | 0.83 | 0.58 | 0.78 | 0.58 | 0.63
White Tile | PCA | 0.95 | 0.61 | 0.59 | 0.78 | 0.60 | 0.79 | 0.59 | 0.63
White Tile | MNF | 0.95 | 0.62 | 0.61 | 0.74 | 0.62 | 0.80 | 0.58 | 0.62
White Tile | FPCA | 0.92 | 0.59 | 0.59 | 0.64 | 0.61 | 0.79 | 0.51 | 0.55
White Tile | ICA | 0.94 | 0.63 | 0.57 | 0.73 | 0.59 | 0.78 | 0.53 | 0.58
All Spectra | Raw | 0.98 | 0.63 | 0.61 | 0.65 | 0.63 | 0.80 | 0.47 | 0.53
All Spectra | PCA | 0.96 | 0.50 | 0.61 | 0.47 | 0.65 | 0.80 | 0.39 | 0.44
All Spectra | MNF | 0.97 | 0.52 | 0.61 | 0.46 | 0.65 | 0.80 | 0.38 | 0.44
All Spectra | FPCA | 0.96 | 0.50 | 0.62 | 0.42 | 0.66 | 0.81 | 0.37 | 0.43
All Spectra | ICA | 0.96 | 0.49 | 0.57 | 0.43 | 0.62 | 0.78 | 0.35 | 0.40
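For completeness, the confusion-matrix-based columns of Tables 5-8 (Precision, Recall, Bacc, F1 and MCC) follow standard definitions; the sketch below computes them from a thresholded detection map and the target ground truth. The Visibility column is not reproduced here, and the helper name is an assumption.

```python
import numpy as np

def confusion_metrics(detection, truth):
    """Precision, Recall (TPR), balanced accuracy, F1 and MCC for a
    thresholded detection map against a binary ground-truth mask."""
    detection = np.asarray(detection, dtype=bool).ravel()
    truth = np.asarray(truth, dtype=bool).ravel()
    tp = np.sum(detection & truth)
    fp = np.sum(detection & ~truth)
    fn = np.sum(~detection & truth)
    tn = np.sum(~detection & ~truth)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)                      # true positive rate
    tnr = tn / (tn + fp)                         # true negative rate
    bacc = 0.5 * (recall + tnr)                  # balanced accuracy
    f1 = 2 * precision * recall / (precision + recall)
    mcc = (tp * tn - fp * fn) / np.sqrt(
        float(tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return precision, recall, bacc, f1, mcc

# Tiny example: 3 detections, 2 of which are true targets
detection = np.array([1, 1, 1, 0, 0, 0, 0, 0])
truth     = np.array([1, 1, 0, 1, 0, 0, 0, 0])
print(confusion_metrics(detection, truth))
```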
Table 7. Average performance measures for each target in the Selene dataset without spatial DR applied using the ACE algorithm.
ACE-Full
Target (K = 20) | DR | AUC ROC | AUC PR | Visibility | Precision | Recall | Bacc | F1 | MCC
Brown Carpet | Raw | 0.97 | 0.33 | 0.65 | 0.19 | 0.67 | 0.82 | 0.17 | 0.24
Brown Carpet | PCA | 0.97 | 0.06 | 0.60 | 0.04 | 0.64 | 0.80 | 0.03 | 0.07
Brown Carpet | MNF | 0.97 | 0.46 | 0.75 | 0.17 | 0.78 | 0.87 | 0.15 | 0.21
Brown Carpet | FPCA | 0.97 | 0.55 | 0.75 | 0.22 | 0.78 | 0.87 | 0.20 | 0.26
Brown Carpet | ICA | 0.97 | 0.57 | 0.75 | 0.25 | 0.78 | 0.87 | 0.22 | 0.28
Green Carpet | Raw | 0.98 | 0.61 | 0.82 | 0.32 | 0.83 | 0.91 | 0.36 | 0.43
Green Carpet | PCA | 0.95 | 0.07 | 0.45 | 0.06 | 0.51 | 0.72 | 0.03 | 0.06
Green Carpet | MNF | 0.98 | 0.54 | 0.80 | 0.23 | 0.83 | 0.89 | 0.24 | 0.30
Green Carpet | FPCA | 0.98 | 0.58 | 0.85 | 0.25 | 0.89 | 0.92 | 0.29 | 0.35
Green Carpet | ICA | 0.98 | 0.60 | 0.86 | 0.22 | 0.90 | 0.92 | 0.27 | 0.32
Green Ceramic | Raw | 0.99 | 0.65 | 0.94 | 0.19 | 0.94 | 0.96 | 0.29 | 0.39
Green Ceramic | PCA | 0.98 | 0.60 | 0.85 | 0.16 | 0.89 | 0.92 | 0.19 | 0.26
Green Ceramic | MNF | 0.99 | 0.60 | 0.93 | 0.13 | 0.95 | 0.96 | 0.20 | 0.29
Green Ceramic | FPCA | 0.99 | 0.54 | 0.94 | 0.12 | 0.96 | 0.96 | 0.20 | 0.30
Green Ceramic | ICA | 0.99 | 0.52 | 0.94 | 0.13 | 0.96 | 0.96 | 0.20 | 0.30
Green Perspex | Raw | 1.00 | 0.63 | 0.95 | 0.22 | 0.95 | 0.97 | 0.32 | 0.42
Green Perspex | PCA | 0.99 | 0.44 | 0.91 | 0.08 | 0.93 | 0.95 | 0.12 | 0.20
Green Perspex | MNF | 1.00 | 0.55 | 0.95 | 0.16 | 0.97 | 0.97 | 0.24 | 0.33
Green Perspex | FPCA | 1.00 | 0.51 | 0.95 | 0.16 | 0.97 | 0.97 | 0.25 | 0.34
Green Perspex | ICA | 1.00 | 0.57 | 0.96 | 0.15 | 0.97 | 0.97 | 0.23 | 0.33
Grey Ceramic | Raw | 0.99 | 0.61 | 0.77 | 0.31 | 0.78 | 0.88 | 0.27 | 0.34
Grey Ceramic | PCA | 0.98 | 0.47 | 0.81 | 0.13 | 0.83 | 0.90 | 0.11 | 0.17
Grey Ceramic | MNF | 0.99 | 0.58 | 0.85 | 0.16 | 0.88 | 0.92 | 0.18 | 0.24
Grey Ceramic | FPCA | 0.99 | 0.55 | 0.84 | 0.18 | 0.87 | 0.91 | 0.21 | 0.27
Grey Ceramic | ICA | 0.99 | 0.53 | 0.82 | 0.18 | 0.85 | 0.90 | 0.19 | 0.25
Orange Perspex | Raw | 0.99 | 0.32 | 0.90 | 0.12 | 0.90 | 0.95 | 0.20 | 0.31
Orange Perspex | PCA | 0.99 | 0.25 | 0.92 | 0.05 | 0.93 | 0.96 | 0.08 | 0.18
Orange Perspex | MNF | 0.99 | 0.29 | 0.93 | 0.07 | 0.94 | 0.96 | 0.13 | 0.24
Orange Perspex | FPCA | 0.99 | 0.30 | 0.93 | 0.08 | 0.94 | 0.96 | 0.14 | 0.25
Orange Perspex | ICA | 0.99 | 0.31 | 0.93 | 0.08 | 0.94 | 0.96 | 0.14 | 0.25
White Perspex | Raw | 0.98 | 0.07 | 0.48 | 0.07 | 0.49 | 0.74 | 0.05 | 0.10
White Perspex | PCA | 0.99 | 0.27 | 0.83 | 0.04 | 0.85 | 0.91 | 0.05 | 0.11
White Perspex | MNF | 0.99 | 0.10 | 0.65 | 0.04 | 0.67 | 0.82 | 0.03 | 0.08
White Perspex | FPCA | 0.98 | 0.03 | 0.56 | 0.04 | 0.59 | 0.78 | 0.03 | 0.07
White Perspex | ICA | 0.98 | 0.02 | 0.45 | 0.03 | 0.49 | 0.72 | 0.02 | 0.05
All Spectra | Raw | 0.99 | 0.46 | 0.77 | 0.21 | 0.78 | 0.88 | 0.23 | 0.31
All Spectra | PCA | 0.98 | 0.30 | 0.74 | 0.08 | 0.77 | 0.87 | 0.08 | 0.14
All Spectra | MNF | 0.99 | 0.46 | 0.82 | 0.14 | 0.85 | 0.91 | 0.16 | 0.23
All Spectra | FPCA | 0.99 | 0.45 | 0.82 | 0.16 | 0.84 | 0.90 | 0.19 | 0.26
All Spectra | ICA | 0.99 | 0.45 | 0.79 | 0.15 | 0.82 | 0.89 | 0.18 | 0.25
Table 8. Average performance measures for each target in the Selene dataset with NDVIre-based spatial DR applied using the ACE algorithm.
ACE-NDVIre
Target (K = 20) | DR | AUC ROC | AUC PR | Visibility | Precision | Recall | Bacc | F1 | MCC
Brown Carpet | Raw | 1.00 | 0.20 | 0.53 | 0.15 | 0.55 | 0.76 | 0.13 | 0.18
Brown Carpet | PCA | 0.97 | 0.01 | 0.31 | 0.00 | 0.37 | 0.65 | 0.01 | 0.03
Brown Carpet | MNF | 0.99 | 0.15 | 0.56 | 0.12 | 0.61 | 0.78 | 0.09 | 0.14
Brown Carpet | FPCA | 0.99 | 0.27 | 0.59 | 0.17 | 0.64 | 0.79 | 0.13 | 0.18
Brown Carpet | ICA | 1.00 | 0.47 | 0.68 | 0.22 | 0.73 | 0.84 | 0.19 | 0.25
Green Carpet | Raw | 1.00 | 0.63 | 0.82 | 0.37 | 0.83 | 0.91 | 0.41 | 0.47
Green Carpet | PCA | 0.95 | 0.05 | 0.41 | 0.05 | 0.46 | 0.70 | 0.04 | 0.07
Green Carpet | MNF | 1.00 | 0.48 | 0.75 | 0.24 | 0.79 | 0.87 | 0.24 | 0.31
Green Carpet | FPCA | 1.00 | 0.58 | 0.82 | 0.30 | 0.86 | 0.91 | 0.32 | 0.38
Green Carpet | ICA | 1.00 | 0.61 | 0.89 | 0.28 | 0.93 | 0.94 | 0.34 | 0.40
Green Ceramic | Raw | 1.00 | 0.70 | 0.96 | 0.37 | 0.96 | 0.97 | 0.49 | 0.56
Green Ceramic | PCA | 1.00 | 0.63 | 0.91 | 0.25 | 0.93 | 0.95 | 0.30 | 0.38
Green Ceramic | MNF | 1.00 | 0.62 | 0.96 | 0.27 | 0.97 | 0.97 | 0.38 | 0.46
Green Ceramic | FPCA | 1.00 | 0.60 | 0.96 | 0.28 | 0.97 | 0.97 | 0.39 | 0.46
Green Ceramic | ICA | 1.00 | 0.62 | 0.96 | 0.30 | 0.98 | 0.98 | 0.42 | 0.49
Green Perspex | Raw | 1.00 | 0.68 | 0.97 | 0.41 | 0.97 | 0.98 | 0.54 | 0.60
Green Perspex | PCA | 1.00 | 0.61 | 0.96 | 0.23 | 0.98 | 0.98 | 0.31 | 0.38
Green Perspex | MNF | 1.00 | 0.60 | 0.97 | 0.32 | 0.98 | 0.98 | 0.43 | 0.50
Green Perspex | FPCA | 1.00 | 0.60 | 0.97 | 0.33 | 0.98 | 0.98 | 0.44 | 0.50
Green Perspex | ICA | 1.00 | 0.64 | 0.97 | 0.33 | 0.98 | 0.98 | 0.44 | 0.51
Grey Ceramic | Raw | 1.00 | 0.62 | 0.73 | 0.40 | 0.75 | 0.86 | 0.35 | 0.41
Grey Ceramic | PCA | 1.00 | 0.47 | 0.76 | 0.22 | 0.79 | 0.87 | 0.19 | 0.26
Grey Ceramic | MNF | 1.00 | 0.60 | 0.78 | 0.29 | 0.82 | 0.89 | 0.28 | 0.35
Grey Ceramic | FPCA | 1.00 | 0.58 | 0.78 | 0.29 | 0.83 | 0.89 | 0.30 | 0.35
Grey Ceramic | ICA | 1.00 | 0.60 | 0.79 | 0.28 | 0.83 | 0.89 | 0.27 | 0.34
Orange Perspex | Raw | 1.00 | 0.35 | 0.87 | 0.18 | 0.87 | 0.93 | 0.25 | 0.36
Orange Perspex | PCA | 1.00 | 0.25 | 0.91 | 0.09 | 0.91 | 0.95 | 0.14 | 0.25
Orange Perspex | MNF | 1.00 | 0.35 | 0.92 | 0.11 | 0.93 | 0.95 | 0.18 | 0.29
Orange Perspex | FPCA | 1.00 | 0.33 | 0.92 | 0.12 | 0.92 | 0.95 | 0.18 | 0.30
Orange Perspex | ICA | 1.00 | 0.36 | 0.93 | 0.11 | 0.94 | 0.96 | 0.17 | 0.29
White Perspex | Raw | 0.98 | 0.11 | 0.41 | 0.10 | 0.43 | 0.70 | 0.08 | 0.12
White Perspex | PCA | 0.99 | 0.24 | 0.72 | 0.10 | 0.75 | 0.85 | 0.09 | 0.16
White Perspex | MNF | 0.98 | 0.06 | 0.50 | 0.06 | 0.54 | 0.75 | 0.04 | 0.09
White Perspex | FPCA | 0.98 | 0.06 | 0.48 | 0.05 | 0.53 | 0.74 | 0.04 | 0.09
White Perspex | ICA | 0.94 | 0.02 | 0.35 | 0.02 | 0.41 | 0.67 | 0.02 | 0.05
All Spectra | Raw | 1.00 | 0.46 | 0.73 | 0.28 | 0.74 | 0.86 | 0.30 | 0.37
All Spectra | PCA | 0.98 | 0.30 | 0.67 | 0.13 | 0.70 | 0.83 | 0.14 | 0.20
All Spectra | MNF | 0.99 | 0.40 | 0.75 | 0.20 | 0.78 | 0.87 | 0.22 | 0.28
All Spectra | FPCA | 0.99 | 0.43 | 0.76 | 0.22 | 0.79 | 0.88 | 0.24 | 0.31
All Spectra | ICA | 0.99 | 0.47 | 0.77 | 0.22 | 0.80 | 0.88 | 0.26 | 0.32
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
