Article

What Can Multifractal Analysis Tell Us about Hyperspectral Imagery?

by Michał Krupiński 1,*, Anna Wawrzaszek 1, Wojciech Drzewiecki 2, Małgorzata Jenerowicz 1 and Sebastian Aleksandrowicz 1
1 Centrum Badań Kosmicznych Polskiej Akademii Nauk (CBK PAN), Bartycka 18A, 00-716 Warszawa, Poland
2 Department of Photogrammetry, Remote Sensing of Environment and Spatial Engineering, AGH University of Science and Technology, Al. Mickiewicza 30, 30-059 Krakow, Poland
* Author to whom correspondence should be addressed.
Remote Sens. 2020, 12(24), 4077; https://doi.org/10.3390/rs12244077
Submission received: 25 November 2020 / Revised: 9 December 2020 / Accepted: 11 December 2020 / Published: 12 December 2020

Abstract

Hyperspectral images provide complex information about the Earth’s surface due to their very high spectral resolution (hundreds of spectral bands per pixel). Effective processing of such a large amount of data requires dedicated analysis methods. Therefore, this research applies, for the first time, the degree of multifractality to the global description of all spectral bands of Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) data. Subsets of four hyperspectral images, presenting four landscape types, are analysed. In particular, we verify whether multifractality can be detected in all spectral bands. Furthermore, we analyse variability in multifractality as a function of wavelength, for data before and after atmospheric correction. We try to identify absorption bands and discuss whether multifractal parameters provide additional value or can help in the problem of dimensionality reduction in hyperspectral data or landscape type classification.

1. Introduction

1.1. Fractals

Mandelbrot ([1,2]) first drew the attention of scientists to fractals, which have since proven helpful in developing a numerical description of irregularities occurring in nature. He argued that fractals can be characterised by a quantitative parameter—the fractal dimension (D_F). The latter corresponds to measures used in Euclidean geometry (e.g., length or area), but is expressed as a function of the measurement scale. Over time, applications have expanded from a simple description of linear features to surface feature analysis and research on information stored in the form of images [3]. Remote sensing researchers have applied the concept to aerial and satellite imagery analysis, including radar data processing for irrigation mapping [4], surface modelling [5], land cover mapping [6], estimation of building density [7] and segmentation [8], among others.
The most popular D_F calculation methods used in remote sensing applications include the blanket method [9], the triangular prism [10], the isarithm method [11], Sevcik’s method [12], the power spectrum method [13], the modified variogram method [14] and the adapted Hausdorff metric [15]. Besides the variety of algorithms for D_F calculation, there are several ways in which this parameter can be associated with remote sensing data. For example, in a single spectral band, it can be calculated for each pixel (local description) or for a square subset/patch of the image (global description) [16]. In some cases, the subset covers the whole image.
In the context of hyperspectral data, D_F has been used in two ways. The first approach concerns an analysis of spectral signatures/profiles that refer to single pixels (e.g., [15,17,18]). Usually, the purpose of these analyses is to reduce the dimensionality of hyperspectral data. For example, Mukherjee et al. [19] showed that D_F can be almost as accurate as conventional methods, but with lower computational requirements.
In the second approach, D_F is calculated for spectral bands in the whole image or a subset ([20,21,22]). Qiu et al. [20] compared a few calculation methods for each spectral band in two aerial images acquired by the Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) sensor. Krupiński et al. [22] analysed the same images with the Differential Box Counting (DBC) method [23] and additionally analysed four types of landscape. The latter work found significant variability in D_F determined for radiance and reflectance data as a function of wavelength. In addition, disturbances in D_F values were observed in noisy bands as a result of strong radiation absorption by water molecules. The authors suggested that this parameter could be used, inter alia, to determine the quality of spectral bands or the separability of landscape types.

1.2. Multifractals

A number of experiments have shown, however, that a fractal formalism may not be sufficient for satellite imagery description, and that multifractals (an extension of fractal theory) should be used instead (e.g., [24,25,26]). The multifractal approach is based on the assumption that it is necessary to use a number of non-trivially connected fractals, each with a different dimension of self-similarity, to describe data complexity. What is more, there are various functions (e.g., generalised dimensions, multifractal spectra [27,28]) and a number of quantitative parameters associated with multifractal description. Together, these open up a wide range of possible applications of multifractals in the Earth Observation domain (see Table 1).
A review of existing implementations of a multifractal formalism in the analysis of remote sensing images indicates that they are mostly related to the segmentation and classification of radar imagery. Other applications include, for example, the detection of changes ([16,31]), the edge-preserving smoothing of high-resolution satellite images [32], super-resolution image analysis [33], compression of remote sensing imagery [34] and usage as textural features for content-based image retrieval [35]. There are also combinations with methods such as granulometric analysis [36] or principal component analysis [37] (see Table 2).
Our previous research comprehensively analysed the multifractal character of panchromatic very high resolution (VHR) satellite images acquired by the following satellites: EROS [26], WorldView-2 [25], GeoEye-1 and Pléiades 1A [45,46], using the box-counting based moment method. In particular, we proposed a quantitative parameter, named the degree of multifractality (Δ), and demonstrated its usefulness in attempts to distinguish the basic forms of land cover (water, urban areas, forests and agriculture). In general, the method was effective in automatically assigning image subsets to certain classes. We also showed that the degree of multifractality can be a very effective descriptor of image content, compared to other textural features [47]. However, it should be noted that our global analysis was focused exclusively on the panchromatic band of satellite imagery; the exception was 30-m resolution Landsat imagery, for which we initially considered six spectral bands [48].
In the context of hyperspectral data analyses, we found only four examples in the literature where a multifractal formalism was applied. More precisely, Combrexelle et al. [30] calculated coefficients of the polynomial describing the multifractal spectrum for each spectral band. Li et al. [42] showed that using multifractal parameters to describe spectral profiles may improve the average accuracy by 7–8%. Incorporation of multifractal parameters resulted in higher overall accuracy (almost 10%) in the algorithm proposed by Wan et al. [43]. Krupiński et al. [44] showed that the overall classification accuracies of methods that use multifractal parameters to describe the spectral curve may differ by 7–9%, depending on the pre-processing method. A summary of the current state of knowledge regarding the usefulness of the (multi)fractal formalism in the analysis of hyperspectral data is presented in Table 1 (global and local descriptions of spectral bands) and Table 2 (spectral curve description). It is worth noting that in all of the reviewed papers, the analysed datasets were limited to a small number of images or subsets (from 1 to 6), and mostly concerned airborne imagery. Our summary indicates that the multifractal analysis of hyperspectral imagery remains very limited, and we aim to fill this gap.
Therefore, the aim of this research is to consider, for the first time, the degree of multifractality as a new global description of all spectral bands of AVIRIS data, for four landscape types. In particular, we verify whether a multifractal character is present in all spectral bands. Additionally, we analyse the variability of multifractal features as a function of wavelength, both before and after atmospheric correction. Following this analysis, we discuss areas of potential applicability of the presented methodology. We consider the identification of absorption bands and whether multifractal parameters can help to overcome the problem of dimensionality reduction in hyperspectral data or support landscape type classification. Finally, a short comparison of the determined multifractal features with statistical and fractal descriptors is performed.
This paper is organised as follows. In Section 2, we present our dataset. The methodology related to the concept of the multifractal is described in Section 3. The results of our analysis are presented and discussed in Section 4, the discussion is in Section 5 and the main conclusions are outlined in Section 6.

2. Data

The Jet Propulsion Laboratory (JPL), managed by NASA, designed and developed the Airborne Visible/Infrared Imaging Spectrometer (AVIRIS). It was the first imaging spectrometer able to measure the solar spectrum from 400 to 2500 nm as 224 contiguous channels (spectral bands) [49].
Data acquired by AVIRIS over California (USA) are used in our analyses. More precisely, we considered four subsets of four images. Each subset measured 512 × 512 pixels and represented a different type of landscape, i.e., agriculture, mountains, urban and water (see Figure 1). Analyses were performed on two types of data—radiance (224 bands) and reflectance (197 bands). The reflectance dataset did not include bands with strong radiation absorption caused by water particles in the air, which are removed by the JPL during the atmospheric correction process. As noted in [49], the Atmosphere Removal Algorithm (ATREM), developed by Gao et al. [50] and updated by Gao and Davis [51], is applied to generate Level-2 products. The spatial resolution of the analysed images is about 15–16 m. A list of images used for sample selection is presented in Table A1 (Appendix A). The subset size follows from the calculation method, which requires square images whose number of rows (and columns) is a power of 2. The same dataset, among others, was previously analysed with the D_F parameter by Krupiński et al. in [22]. According to the reviews presented in Table 1 and Table 2, AVIRIS is the most commonly used sensor for fractal-related research.
To facilitate the results’ description, we divided the spectrum into six zones (VIS, NIR, A 1, SWIR 1, A 2 and SWIR 2). The bands of zones A 1 and A 2 were removed from reflectance as these bands were strongly affected by radiation absorption caused by water particles in the air (noisy bands) [49]. Exact ranges (bands and wavelengths) of the six zones are summarised in Table 3. More details about the analysed subsets may be found in Figure A1 (Appendix A), which presents four statistical moments (mean, variance, skewness and kurtosis) for radiance (left column) and reflectance (right column) samples.
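For readers who want to work with the same spectral partition, the zone boundaries of Table 3 can be encoded directly. The short sketch below (Python/NumPy; the cube layout of bands × rows × columns is our assumption, not something fixed by the AVIRIS products) simply slices one zone out of a data cube.

```python
import numpy as np

# Band ranges of the six spectral zones (1-based AVIRIS band numbers, Table 3).
ZONES = {
    "VIS":    (1, 40),     #  366-724 nm
    "NIR":    (41, 103),   #  734-1313 nm
    "A 1":    (104, 114),  # 1323-1423 nm, water absorption (removed from reflectance)
    "SWIR 1": (115, 152),  # 1433-1802 nm
    "A 2":    (153, 168),  # 1811-1937 nm, water absorption (removed from reflectance)
    "SWIR 2": (169, 224),  # 1947-2496 nm
}

def zone_slice(cube, zone):
    """Return the bands of one zone from a cube shaped (bands, rows, cols)."""
    first, last = ZONES[zone]
    return cube[first - 1:last]  # convert 1-based band numbers to 0-based slicing
```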

3. Methodology

The degree of multifractality (Δ) is a quantitative multifractal parameter that describes the non-homogeneity of the considered data, and several methods can be used for its estimation (e.g., [25,44]). In this study, to calculate the degree of multifractality (Δ) for each spectral band separately, for both types of imagery (radiance and reflectance), we applied the box-counting based moment method [27] as described in, for example, [25]. More precisely, we considered the variability of the spectrum of the generalised dimension $D_q$ as a non-increasing function of the real index $q$, $-\infty < q < +\infty$ (see, for example, [52]). In the first step, an image (in a given spectral band) of size $m \times m$ was divided into $N(\delta) = (m/\delta)^2$ square boxes of size $\delta \times \delta$ (starting with a single pixel of size $\delta = 1$ and ending with a box of size $\delta = m$); see [25] and Figure 1. For a given box, the normalised measure was calculated according to the formula:
$$\mu_i(\delta) = \frac{p_i(\delta)}{\sum_{i=1}^{N(\delta)} p_i(\delta)} \tag{1}$$
where $i = 1, \ldots, N(\delta)$ are labels given to individual boxes of size $\delta$, and $p_i(\delta) = \sum_{(k,l) \in \Omega_i} g(k,l)$, where $g(k,l)$ is the greyscale intensity at point $(k,l)$, and $\Omega_i$ is the set of all pixels in the $i$-th box. It is worth noting that in the context of multifractal analysis, different measures can be constructed that emphasise various effects and describe different physical processes in the considered data ([47,53]). In the next step, the partition function $\chi(q, \delta)$ for various values of $\delta$ and $q$ was computed according to the formula:
$$\chi(q, \delta) = \sum_{i=1}^{N(\delta)} \left( \mu_i(\delta) \right)^q \tag{2}$$
As $q$ varies in (2), different subsets associated with different densities dominate. For multifractal measures, the partition function $\chi(q, \delta)$ scales with the box size $\delta \to 0$ as:
$$\chi(q, \delta) \sim \delta^{D_q (q-1)} \tag{3}$$
Based on (2) and (3), we obtained the spectrum of generalised dimensions $D_q$ defined, for the first time, by [28]:
$$D_q = \frac{1}{q-1} \lim_{\delta \to 0} \frac{\log \sum_{i=1}^{N(\delta)} \left( \mu_i(\delta) \right)^q}{\log \delta} \tag{4}$$
Additionally, the uncertainties in determining the corresponding slopes of $\log \sum_{i=1}^{N(\delta)} (\mu_i(\delta))^q$ against $\log \delta$ (Equation (4)), obtained using unweighted least squares fitting, informed us about the accuracy of the calculation of $D_q$.
Next, the variability of the multifractal function $D_q$, given as the difference between the maximum ($D_-$) and minimum ($D_+$) dimensions, defines the degree of multifractality Δ [52,53] as follows:
$$\Delta = D_- - D_+ \tag{5}$$
Theoretically, this generalised dimension function $D_q$ is defined for all real values of $q$ [53]. In practice, the limited dataset means that we can only determine values of $D_q$ for a narrow range of moments $q$ (see, for example, [45,47,52]). In this study, we performed an initial examination and verified what range of $q$ each sample yields. Then, an optimum range $-3 \le q \le 8$ that overlaps the individual $q$ ranges was chosen and applied in further steps. Finally, the parameter Δ was calculated as the difference between $D_{-3}$ and $D_{+8}$, while the sum of the errors of $D_{-3}$ and $D_{+8}$ gave the error obtained for each value of Δ.
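As an illustration of the procedure above, the following minimal sketch (Python/NumPy, written for this description rather than taken from the study) estimates Δ for a single spectral band; it evaluates only the two extreme moments needed for Δ and assumes a square image whose side is a power of 2.

```python
import numpy as np

def degree_of_multifractality(band, q_minus=-3, q_plus=8):
    """Minimal sketch of the box-counting moment method for one spectral band.

    `band` is a square 2-D array whose side length is a power of 2.
    Returns (delta, delta_error) with Delta = D_{q_minus} - D_{q_plus} (Eq. 5).
    """
    band = np.asarray(band, dtype=float)
    m = band.shape[0]
    if band.shape != (m, m) or m & (m - 1):
        raise ValueError("square image with power-of-2 side expected")

    total = band.sum()
    box_sizes = [2 ** k for k in range(int(np.log2(m)) + 1)]   # delta = 1, 2, ..., m

    def generalized_dimension(q):
        log_chi, log_delta = [], []
        for d in box_sizes:
            # p_i: grey-level sum in each d x d box; mu_i: normalised measure (Eq. 1).
            p = band.reshape(m // d, d, m // d, d).sum(axis=(1, 3))
            mu = p / total
            mu = mu[mu > 0]                           # skip empty boxes so negative q stays finite
            log_chi.append(np.log((mu ** q).sum()))   # partition function (Eq. 2)
            log_delta.append(np.log(d))
        # Unweighted least-squares slope of log chi vs log delta gives (q - 1) * D_q (Eqs. 3-4).
        coeffs, cov = np.polyfit(log_delta, log_chi, 1, cov=True)
        d_q = coeffs[0] / (q - 1)
        d_q_err = np.sqrt(cov[0, 0]) / abs(q - 1)
        return d_q, d_q_err

    d_minus, err_minus = generalized_dimension(q_minus)
    d_plus, err_plus = generalized_dimension(q_plus)
    return d_minus - d_plus, err_minus + err_plus     # Eq. 5 and its error estimate
```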

4. Results

The multifractal analysis described in Section 3 was performed for each spectral band for four examples of different landscapes and, in each case, for radiance and reflectance data. In all considered cases, the scaling given by Equation (3) was revealed and the multifractal nature of the analysed data confirmed. As a consequence, the function (4) and final degree of multifractality (Δ) could be determined and used as a quantitative data descriptor.

4.1. Degree of Multifractality for Radiance

Figure 2 presents the change in values of the degree of multifractality Δ (Y-axis) with wavelength/band number (X-axis) for the four landscape types: water (blue), mountains (black), agriculture (green) and urban (red). Information about error estimation is also given in the form of vertical bars (Δ ± error). A general overview of the results shows that the values of Δ determined for radiance data samples fall within the range of 0 to 0.768. It is worth noting that both the lower and upper extremes relate to the water landscape sample (bands 4 and 111, respectively).
In the case of agriculture, the minimum value of Δ is 0.006 and the maximum is 0.498. For mountains, Δ ranges from 0.001 to 0.581, and from 0.002 to 0.484 for urban areas. Water landscape has the lowest Δ values in all spectral bands, except the A 1 and A 2 zones. Figure 2 shows that for all landscape types, Δ is lowest within the VIS zone. With the exception of agriculture, the highest Δ value is found in the A 1 or A 2 zones. Besides the A 1 and A 2 zones, Δ values are the highest and the most variable in the SWIR 2 zone, for all four types of landscape. More detailed information about Δ values (minimum, mean and maximum) is presented in Table A2.
An analysis of the whole electromagnetic spectrum registered by AVIRIS shows that the highest mean error in the Δ estimation was obtained for the urban landscape (0.0160), followed by agriculture (0.0124), mountains (0.0059) and water (0.0039). Although error values increase with wavelength, in no zone does the mean error exceed 18% of the Δ values. The highest mean error in the Δ estimation is observed for the A 2 zone (0.0217 | 18%), followed by A 1 (0.0159 | 14%), SWIR 2 (0.0145 | 12%), SWIR 1 (0.0075 | 12%), VIS (0.0052 | 13%) and NIR (0.0050 | 14%). Table A3 contains precise values of error in the Δ estimation.

4.2. Degree of Multifractality for Reflectance

In order to study the multifractal character of reflectance data, we prepared a comparison between Δ values computed for radiance and reflectance. Figure 3 presents these values for each zone of the electromagnetic spectrum listed in Table 3 (except A 1 and A 2). Moreover, we calculated the difference | Δ r e f l e c t a n c e Δ r a d i a n c e | and it is presented in Table 4.
The first thing we notice, analysing the difference, is the huge increment (exceeding 7000) in Δ values calculated for the first few bands of the mountain landscape subset. A closer examination of these bands revealed that they contain between 1.5% and 99.7% of pixels with a value of 0 (this issue is not observed in radiance data). Moreover, checks of the original source image indicated that most mountain areas are covered by 0-value pixels in the first few bands of reflectance data. As analogous situations were noted in other types of landscape, we decided to analyse only bands without this error. Consequently, all bands where the increment in number of pixels with a value of 0 (between radiance and reflectance) was higher than 1% (~2600 pixels) were excluded from further comparison. Bands 1–3 and 224 were removed from the agriculture reflectance sample; 1–7 from the mountain sample; 1–2 and 223–224 from the urban sample; and 223–224 from the water sample. Table 4 summarises the statistics after the removal of bands with this error.
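The band-screening rule described above can be expressed compactly. The sketch below is a hypothetical helper (not the code used in this study) that flags bands whose fraction of zero-valued pixels grows by more than 1% between two band-aligned cubes.

```python
import numpy as np

def bands_with_zero_pixel_error(radiance, reflectance, threshold=0.01):
    """Return 0-based indices of bands whose share of 0-value pixels grows by
    more than `threshold` (1% of 512 x 512 = 262,144 pixels, i.e. ~2600)
    after atmospheric correction.

    Both cubes are assumed to be shaped (bands, rows, cols) and already
    aligned band-for-band.
    """
    zeros_before = (radiance == 0).mean(axis=(1, 2))    # fraction of 0-pixels per band
    zeros_after = (reflectance == 0).mean(axis=(1, 2))
    return np.where(zeros_after - zeros_before > threshold)[0]
```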
Detailed analysis of Figure 3 and Table 4 reveals that the first zone, VIS, is the part of the electromagnetic spectrum where the biggest differences between radiance and reflectance results appear. More precisely, in the whole VIS zone, Δ reflectance values are higher than Δ radiance values for all four landscape types (almost four times higher on average). This means that the influence of atmospheric correction is reflected in the Δ values.
In the NIR zone, the average absolute change in Δ values is 9% (the highest in water—25%—and the lowest in the agriculture sample—4%). In the SWIR 1 zone, the average absolute change in Δ is 7% (the highest is for water (11%); the lowest is for agriculture (4%)). The biggest changes in Δ occur in bands 115–120 and 151–152 for mountains, observed as a smoothing of curve shape in Figure 3. These bands represent regions of the spectrum that are adjacent to zones of water absorption (A 1 and A 2).
In the SWIR 2 zone, there is the lowest average change in Δ in the whole analysed spectrum (3%), so influence of atmospheric correction on these bands is very small. Like SWIR 1, the highest relative change is found for water (6%) and the lowest for agriculture (1%). The most significant changes are also observed on the edges of the SWIR 2 zone for mountains (bands 169–170 and 221–224).
Errors in Δ estimation for reflectance data were analysed in comparison to errors for radiance data. Absolute and relative differences for all landscape types and the four spectrum zones are combined in Table A4. The biggest change in error values is observed in the VIS zone (0.005). This may be because in this zone, the change in Δ values is the highest. In the three other zones (NIR, SWIR 1 and SWIR 2), average error is the same—0.001. For all zones, the absolute average change in error values is highest for urban landscape and lowest for water. In most cases, error values increase after atmospheric correction, notably for all bands in urban and water landscapes.

5. Discussion

To the best of our knowledge, in this paper, for the first time, global multifractal parameters are used for hyperspectral data analyses. For this reason, our results may be discussed only by reference to the global multifractal analysis of multispectral and panchromatic images or by reference to the global fractal analysis of hyperspectral data (listed in Table 1). In this section, we discuss the issues of atmospheric absorption, landscape type separability, comparison to fractal dimension and statistical moments and indicate potential directions for further analyses.

5.1. Interpretation of the Multifractal Results

5.1.1. Influence of Atmospheric Absorption

The influence of atmosphere components on data in this experiment is analysed in two ways. Firstly, variability of Δ curves may be compared to the transmittance spectrum of the atmosphere (Figure 2 in [49]). Secondly, comparing Δ for radiance and for reflectance, we may observe the magnitude of changes and indirectly study how atmospheric correction modifies the data of specific spectral bands.
The biggest influence of atmosphere components on acquired radiance and, in consequence, on pixel values is visible in the A 1 and A 2 zones. An analysis of the shape of Δ curves in Figure 2 highlights that the smoothness of the curves is disrupted in zones A 1 and A 2. For mountain and water landscapes, Δ values in A 1 and A 2 zones exceed the values of neighbouring zones. In the case of urban landscape, we observe the opposite situation (Δ values are lower than in adjacent zones). Values seem to preserve their continuity mostly in agricultural areas. Visual interpretation of these bands confirmed that images of other landscape types are noisier in these two zones. These differences may be related to different levels of humidity in the atmosphere over specific types of landscape and different image acquisition dates.
In the VIS zone, after band 35, Δ decreases rapidly until band 40, which corresponds to O2 absorption. In the NIR zone, three subzones can be clearly observed (bands 41–63, 64–84 and 85–103). This division may be caused by the absorbance of solar radiation by H2O vapour at the edges. The values for mountains reveal two fragments where there is a sudden increment (bands 63–65 and 81–86). These parts of the electromagnetic spectrum are where absorption caused by H2O is observed. Atmospheric correction causes smoothing of the Δ curve for the mountain landscape in these bands. It could suggest that the Δ parameter may be used to evaluate atmospheric correction quality. Moreover, in the bands adjacent to these zones (in SWIR 1 and SWIR 2), we observed increased values only for agriculture and mountain landscapes. It may be related to the fact that these two landscape samples include plants. After atmospheric correction, Δ for these bands also seems to be smoothed.
It is worth noting that plots with mean and variance values of specific bands also reveal the influence of atmosphere on the data acquired (compare Figure 2 with Figure 2 in [49]). However, these two parameters describe only pixel values’ distribution, while Δ describes spatial and spectral complexity at the same time.

5.1.2. Landscape Types and Dimensionality Reduction

It is possible to identify parts of the electromagnetic spectrum where certain landscape samples have different Δ values and the differences between them are larger than the error values. In such cases, the Δ values of those bands can be used to clearly distinguish landscape types; where the Δ values are distinct but the differences are smaller than the error values, landscape types can only tentatively be distinguished. In the VIS zone, clear separation of all landscape types is observed in bands 23–38. In the NIR zone, it is possible to clearly distinguish landscape types using bands from the second and third subzones, although the differences in Δ are lower than in the VIS zone. Similar values of Δ for neighbouring bands can be explained by the fact that these bands represent adjacent wavelengths. Low variability in Δ within specific landscape types (except agriculture) indicates that the information capacity, from the point of view of multifractal analysis, is similar. Therefore, Δ could possibly be useful for data dimensionality reduction (especially in NIR) if agricultural land is not of interest.
In SWIR 1, no overlap between Δ values (±errors) indicates that all four types of landscape could be separated using bands 115–117 or 121–125. In SWIR 2, all four types of landscape could be separated using almost any band from this zone. The exceptions are bands 182–187, where Δ values for mountains are very close to those for urban landscape.
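The separability criterion used above (non-overlapping Δ ± error intervals for every pair of landscape types) can be checked band by band. A sketch, assuming hypothetical dictionaries of per-band Δ values and errors for each landscape type:

```python
from itertools import combinations

def clearly_separating_bands(delta, error):
    """Return 0-based band indices where the (delta - error, delta + error)
    intervals of all landscape types are pairwise disjoint.

    `delta` and `error` map a landscape name (e.g. "water") to a sequence of
    per-band values; this layout is illustrative, not taken from the paper.
    """
    classes = list(delta)
    n_bands = len(delta[classes[0]])
    selected = []
    for b in range(n_bands):
        # Two intervals are disjoint iff the distance between their centres
        # exceeds the sum of their half-widths.
        if all(abs(delta[a][b] - delta[c][b]) > error[a][b] + error[c][b]
               for a, c in combinations(classes, 2)):
            selected.append(b)
    return selected
```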
In this research, single samples of four landscape types have been used. To confirm whether a specific landscape type is characterised by the same or similar Δ curves, more samples should be analysed. To assess the usefulness of Δ for dimensionality reduction, additional experiments should also be performed.

5.1.3. Size of Images and Spatial Resolution

The size of the analysed images is related to the methodology used to estimate the degree of multifractality and influences the range that is used for multifractal scaling (3). Our review of the literature found that sizes ranging from 17 × 17 to 512 × 512 pixels have been analysed (Table 1). However, only Myint et al. [21] investigated different sizes of hyperspectral subsets representing the same land cover class, and those subsets did not represent exactly the same area. Moreover, the data were analysed using a fractal, and not a multifractal, formalism.
In our previous work, we calculated the degree of multifractality for several subset sizes: 1024 × 1024 in [25], 512 × 512 in [22,26,47], 256 × 256 in [16,46] and 64 × 64 and 32 × 32 in [45]. These earlier studies indicated that the bigger the size, the more reliable the multifractal scaling and Δ estimations. To verify this hypothesis, image subsets were resampled from their original size to 256 × 256, 128 × 128 and 64 × 64 pixels and analysed. In consequence, we obtained images with pixel sizes of 30, 60 and 120 m.
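The paper does not state which resampling kernel was used; the sketch below shows one plausible choice, block averaging (pixel aggregation), which halves the linear size per step and roughly doubles the ground sampling distance (512 × 512 at ~15 m becomes 256 × 256 at ~30 m, and so on).

```python
import numpy as np

def block_average(band, new_size):
    """Downsample a square power-of-2 band to new_size x new_size by averaging
    non-overlapping pixel blocks (one plausible resampling choice; the paper
    does not specify the kernel actually used)."""
    m = band.shape[0]
    factor = m // new_size                      # e.g. 512 -> 256 gives factor 2
    return band.reshape(new_size, factor, new_size, factor).mean(axis=(1, 3))
```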
Stable results were found for the degree of multifractality determined for different image sizes, for all analysed landscape types (see Figure 4). However, estimation errors increased as image size decreased. A detailed analysis was performed with absolute and relative mean values of change in Δ (and errors), and the results are presented in Table 5.
A smaller subset size changes Δ values for all landscapes. Generally, the smaller the subset (and the lower the spatial resolution), the bigger the changes compared to the original image (512 × 512 pixels). The average absolute change increases evenly by 0.005, starting at 0.010 (15%) for 256 × 256 and rising to 0.019 (39%) for the smallest subsets (64 × 64). The biggest absolute differences are noted for urban landscape, 0.023 (256 × 256), 0.033 (128 × 128) and 0.037 (64 × 64), and the smallest for water (0.005, 0.008 and 0.011, respectively). In most cases, resampling the agriculture subset to 256 × 256 or 128 × 128 caused an increment in Δ values; the exception was 64 × 64, where Δ tended to fall. For the mountain and urban landscapes, Δ fell in almost all bands. In the case of the water subset, each resampling increased the Δ values.
To analyse how errors in the Δ estimation change when images are resampled, we compared absolute relative values. In general, we observe that the smaller the image size, the bigger the change in error. The smallest average absolute change in errors was 0.003 (for water), and the highest was 0.009 (for urban landscape).

5.2. Comparison with Other Characteristics

5.2.1. Correlation with Statistical Moments

Following Combrexelle et al. [30], we calculated the coefficient of determination (r2) to evaluate the complementarity between Δ and statistical moments. In the first assessment, values of Δ for all bands for the four landscape types in reflectance images were combined. Low values of r2 were obtained with four moments: mean (0.003), standard deviation (0.225), skewness (0.114) and kurtosis (0.034). In the next step, each landscape type was analysed separately (Table 6). In this case, we observed the highest coefficient values for urban landscape, notably for the mean (0.834) and skewness (0.858). This may be because the urban sample has the most complex structure and contains a dense mixture of dark and bright objects (see Figure 1c). Coefficient values were much lower for the three other landscape types.
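For a single predictor, the coefficient of determination of a linear fit equals the squared Pearson correlation, so values of this kind can be computed along the following lines (the variable names are illustrative, not taken from the study):

```python
import numpy as np

def r_squared(x, y):
    """Coefficient of determination of a simple linear fit of y on x; for one
    predictor this equals the squared Pearson correlation coefficient."""
    return float(np.corrcoef(x, y)[0, 1] ** 2)

# Illustrative use with per-band arrays for one landscape type, e.g.
# r_squared(mean_per_band_urban, delta_per_band_urban)  (cf. Table 6)
```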
In general, the comparison of statistical and multifractal descriptions of hyperspectral data performed here confirms our previous analyses of panchromatic bands ([26,35,47]) and indicates that Δ provides information complementary to the moments and may be used as an additional classification feature. This is in agreement with the fact that multifractal analysis considers both positive and negative higher-order moments (see Section 3), while in the case of statistical description, only positive or particular moments are used.

5.2.2. Comparison with Fractal Dimension

The data used in this experiment were previously analysed with a fractal parameter—the fractal dimension (D_F)—in [22], where the differential box-counting method was applied [23]. Values of D_F (± error of estimation) from that paper are combined in Figure A2 (for radiance) and Figure A3 (for reflectance). Similarly to the shape of the Δ curves, D_F values also increase with wavelength in VIS. There, however, water reached the highest values, followed by urban, agriculture and mountains. This may confirm the fractal character of water, observed previously in panchromatic VHR images, inter alia, in [47]. When D_F was calculated with other methods (isarithm and triangular prism) in [20], urban landscapes also had higher values than rural ones. Moreover, the first few bands had very high D_F values. It is worth noting that the image samples used in [20] present more heterogeneous landscape samples than those in [22] and in this paper.
Unlike Δ, the D_F values in [22] did not allow for clear separation of landscape types in any band because of overlapping error bars; in [20], errors were not presented. This reveals an advantage of the multifractal over the fractal parameter.
Both Δ and D_F show the increment in values caused by atmospheric correction in the VIS zone. In NIR, the three subzones indicated by the Δ values were also visible when D_F was used in [22]. Moreover, both parameters change at the edges of these subzones (compare our results to Figure 2 in [49]) when atmospheric correction is applied, especially in the case of the mountain landscape. In the SWIR 1 zone, the shape of the Δ curve for agriculture is very similar to the shape of the D_F curve presented in [22]. However, in the latter, agriculture has the lowest values of all landscape types. The other three landscapes present almost flat curves, with the highest values for water, followed by mountains, urban and agriculture.
Previous research on D_F ([20,22]) revealed disturbances in the A 1 and A 2 zones, similar to those observed for Δ. Using Δ or D_F (estimated with the DBC method), the continuity of the curve was disturbed, and values were mostly elevated, with few exceptions. Using the isarithm and triangular prism methods, D_F values were clearly elevated for rural and urban samples. This suggests that both parameters (Δ and D_F) may be used for the evaluation of absorption bands.

6. Conclusions

The research presented here is one of the first attempts to describe hyperspectral aerial images using multifractal parameters. Our results confirmed the multifractal nature of hyperspectral data acquired by AVIRIS. In particular, we calculated, for the first time, the degree of multifractality (Δ) for each spectral band of four image samples that present different landscape types. For each image, two levels of data processing were used (radiance and reflectance). The analysis of variability in Δ highlighted a clear division of the spectrum into the following four ranges: VIS, NIR, SWIR 1 and SWIR 2. Our results are presented in detail for each range. Two zones of absorption were visible in radiance subsets (A 1 and A 2). Here, radiation absorption caused by water particles in the atmosphere leads to significant disturbances in the shape of the Δ curve. This observation suggests that Δ could be used as an indicator to discriminate noisy bands.
In the selected bands, our proposed Δ values seem to be able to distinguish different landscape types, inter alia, at bands 23–38, 62–103, 115–117, 121–125 and 169–224. Water differs from the three other types in most bands, but especially in SWIR 2. Very low Δ values and a flat curve suggest that water represents monofractal scaling; this is in line with our previous studies on VHR panchromatic satellite images (e.g., [47]). Agriculture differs significantly in the SWIR 2 and VIS (bands 23–38), NIR (bands 41–64) and SWIR 1 zones (bands 115–123). Urban landscape differs most in the VIS zone (bands 23–38) and mountains in all zones, except SWIR 2. Regions where Δ values form flat horizontal lines indicate bands with similar information content.
Based on our results, we can assume that variability in Δ could possibly be used as a parameter for feature selection and identification of optimal bands to distinguish between different landscape types in the context of content-based image retrieval or image information mining. A comparison with results obtained from fractal analysis [22] indicates that our method identifies more bands where the degree of multifractality can distinguish landscape types. Moreover, the smaller error in the Δ estimation confirms that the formalism is better suited to the analysis of aerial hyperspectral images. In general, a comparison of the usability of Δ for distinguishing landscape types highlights that better performance is achieved using radiance data. The exception is the NIR zone, where reflectance data provide more bands with clear interclass separability than radiance data. Moreover, in most cases, Δ estimates increase after atmospheric correction.
In some bands, large differences in Δ were found for data before and after atmospheric correction. The increase in Δ after atmospheric correction shows that the parameter could be used to assess atmospheric correction quality, especially given that one value of Δ may be calculated for the whole image. Both Δ and its error can be used to accurately detect noisy bands. Like D_F used in [22], both approaches (fractal and multifractal) indicate that atmospheric correction causes the biggest increase in parameter values in the visible part of the electromagnetic spectrum. Additionally, data acquired in absorption zones (A 1 and A 2) cause disturbances in both fractal and multifractal parameters, and smoothing effects are found in bands 62–64 and 81–85 for the mountain landscape. However, unlike D_F, the degree of multifractality can detect bands dominated by 0-value pixels, which should be excluded from the study.
Variability in Δ does not change significantly with decreasing subset size. Experiments with images of four different sizes (512 × 512, 256 × 256, 128 × 128 and 64 × 64) revealed that variability in Δ is stable, regardless of landscape type and size. Moreover, an analysis of changes in Δ (and its errors) found that the smaller the subset size, the bigger the differences in Δ and its error. The biggest absolute differences in Δ (and its error) were noted for urban landscape, and the lowest for water.
Another interesting observation is the generally low correlation coefficient values found between Δ and statistical moments (mean, standard deviation, skewness and kurtosis). This finding suggests that Δ may be used to complement the description of hyperspectral data.
Finally, it should be noted that our research introduces a new parameter for the global description of hyperspectral data. To sum up, we may conclude that the performed multifractal analysis resulted in several interesting findings and seems to be useful for a number of possible applications. Further studies on larger and more varied datasets are required to confirm our conclusions. These studies will be performed in the next stage of our work, during which we will analyse different land cover/use classes and images from other hyperspectral sensors. Another interesting avenue will be to apply a different method to estimate Δ and different feature extraction methods. We believe that these studies may make a significant contribution to the problem of dimensionality reduction in hyperspectral data.

Author Contributions

Conceptualisation, M.K. and A.W.; methodology, A.W. and M.K.; software, A.W. and M.K.; investigation, M.K., A.W. and W.D.; writing—original draft preparation, M.K.; writing—methodology description, A.W.; writing—review and editing, A.W., W.D., M.J. and S.A.; visualisation, M.K.; project administration and funding acquisition, A.W. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the Polish National Science Centre (NCN) through grant 2016/23/B/ST10/01151.

Acknowledgments

The authors would like to thank the JPL for supplying the AVIRIS images.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A

Images used for the analyses were downloaded from the AVIRIS Data Portal (https://aviris.jpl.nasa.gov/alt_locator/).
Table A1. Files used in the analyses.

| Landscape Type | Original Image (Radiance) | Original Image (Reflectance) |
|---|---|---|
| Agriculture | f130522t01p00r11rdn_e | f130522t01p00r11_refl |
| Mountains | f130522t01p00r12rdn_e | f130522t01p00r12_refl |
| Urban | f130612t01p00r11rdn_e | f130612t01p00r11_refl |
| Water | f130606t01p00r10rdn_e | f130606t01p00r10_refl |
Figure A1. Values of the first four statistical moments (from top: mean, variance, skewness and kurtosis) determined for radiance (left column) and reflectance (right column) data for four landscape types: water (blue), mountains (black), agriculture (green) and urban (red).

Appendix B

Table A2. Values of the Δ estimation for radiance and reflectance data. Bold font indicates the lowest and the highest Δ values for specific landscape types.

Radiance:

| Landscape | Statistic | VIS | NIR | A 1 | SWIR 1 | A 2 | SWIR 2 | All |
|---|---|---|---|---|---|---|---|---|
| Agriculture | Min | 0.006 | 0.039 | 0.047 | 0.068 | 0.075 | 0.157 | 0.006 |
| | Mean | 0.088 | 0.070 | 0.073 | 0.106 | 0.197 | 0.264 | 0.137 |
| | Max | 0.210 | 0.109 | 0.159 | 0.188 | 0.423 | 0.498 | 0.498 |
| Mountains | Min | 0.001 | 0.024 | 0.033 | 0.052 | 0.164 | 0.095 | 0.001 |
| | Mean | 0.031 | 0.037 | 0.116 | 0.070 | 0.349 | 0.135 | 0.092 |
| | Max | 0.084 | 0.076 | 0.222 | 0.138 | 0.581 | 0.278 | 0.581 |
| Urban | Min | 0.002 | 0.034 | 0.024 | 0.071 | 0.032 | 0.082 | 0.002 |
| | Mean | 0.060 | 0.048 | 0.083 | 0.087 | 0.048 | 0.134 | 0.080 |
| | Max | 0.104 | 0.055 | 0.484 | 0.103 | 0.062 | 0.172 | 0.484 |
| Water | Min | 0.0003 | 0.004 | 0.004 | 0.004 | 0.011 | 0.001 | 0.0003 |
| | Mean | 0.002 | 0.005 | 0.081 | 0.005 | 0.079 | 0.007 | 0.014 |
| | Max | 0.004 | 0.005 | 0.768 | 0.006 | 0.234 | 0.030 | 0.768 |

Reflectance:

| Landscape | Statistic | VIS | NIR | SWIR 1 | SWIR 2 | All |
|---|---|---|---|---|---|---|
| Agriculture | Min | 0.032 | 0.039 | 0.068 | 0.157 | 0.032 |
| | Mean | 0.151 | 0.072 | 0.101 | 0.262 | 0.151 |
| | Max | 0.497 | 0.110 | 0.171 | 0.497 | 0.497 |
| Mountains | Min | 0.031 | 0.025 | 0.053 | 0.096 | 0.025 |
| | Mean | 0.100 | 0.034 | 0.064 | 0.129 | 0.079 |
| | Max | 0.206 | 0.047 | 0.083 | 0.208 | 0.208 |
| Urban | Min | 0.045 | 0.041 | 0.058 | 0.061 | 0.041 |
| | Mean | 0.126 | 0.056 | 0.089 | 0.134 | 0.098 |
| | Max | 0.198 | 0.064 | 0.108 | 0.176 | 0.198 |
| Water | Min | 0.003 | 0.005 | 0.005 | 0.002 | 0.002 |
| | Mean | 0.005 | 0.006 | 0.006 | 0.006 | 0.006 |
| | Max | 0.025 | 0.006 | 0.006 | 0.022 | 0.025 |
Table A3. Errors in the Δ estimation for radiance data (each cell gives the error value | its percentage of the corresponding Δ). Bold font indicates the highest mean value within a zone; green represents the lowest mean error for a specific landscape type; and red, the highest.

| Landscape | Statistic | VIS | NIR | A 1 | SWIR 1 | A 2 | SWIR 2 | All |
|---|---|---|---|---|---|---|---|---|
| Agriculture | Min | 0.0004 \| 4% | 0.0031 \| 8% | 0.0036 \| 7% | 0.0051 \| 7% | 0.0060 \| 7% | 0.0125 \| 8% | 0.0004 \| 4% |
| | Mean | 0.0070 \| 7% | 0.0056 \| 8% | 0.0061 \| 8% | 0.0083 \| 8% | 0.0240 \| 13% | 0.0245 \| 9% | 0.0124 \| 8% |
| | Max | 0.0192 \| 9% | 0.0086 \| 8% | 0.0132 \| 13% | 0.0161 \| 9% | 0.0464 \| 29% | 0.0581 \| 12% | 0.0581 \| 29% |
| Mountains | Min | 0.0001 \| 5% | 0.0008 \| 2% | 0.0009 \| 2% | 0.0034 \| 4% | 0.0065 \| 4% | 0.0047 \| 3% | 0.0001 \| 2% |
| | Mean | 0.0026 \| 9% | 0.0027 \| 7% | 0.0061 \| 5% | 0.0044 \| 6% | 0.0244 \| 7% | 0.0075 \| 6% | 0.0059 \| 7% |
| | Max | 0.0068 \| 15% | 0.0087 \| 11% | 0.0150 \| 8% | 0.0071 \| 8% | 0.0458 \| 9% | 0.0156 \| 7% | 0.0458 \| 15% |
| Urban | Min | 0.0001 \| 8% | 0.0080 \| 18% | 0.0030 \| 8% | 0.0062 \| 9% | 0.0012 \| 4% | 0.0031 \| 4% | 0.0001 \| 4% |
| | Mean | 0.0104 \| 17% | 0.0108 \| 22% | 0.0292 \| 19% | 0.0163 \| 19% | 0.0096 \| 20% | 0.0247 \| 18% | 0.0160 \| 19% |
| | Max | 0.0158 \| 24% | 0.0131 \| 24% | 0.2524 \| 52% | 0.0185 \| 23% | 0.0167 \| 28% | 0.0347 \| 22% | 0.2524 \| 52% |
| Water | Min | 0.0000 \| 17% | 0.0006 \| 16% | 0.0006 \| 14% | 0.0005 \| 13% | 0.0021 \| 19% | 0.0001 \| 10% | 0.0000 \| 10% |
| | Mean | 0.0003 \| 17% | 0.0008 \| 17% | 0.0224 \| 24% | 0.0009 \| 16% | 0.0288 \| 33% | 0.0012 \| 17% | 0.0039 \| 18% |
| | Max | 0.0006 \| 23% | 0.0008 \| 17% | 0.2124 \| 34% | 0.0009 \| 17% | 0.0846 \| 46% | 0.0083 \| 28% | 0.2124 \| 46% |
| All | Mean | 0.0052 \| 13% | 0.0050 \| 14% | 0.0159 \| 14% | 0.0075 \| 12% | 0.0217 \| 18% | 0.0145 \| 12% | |
Table A4. Modulus of the difference between error values of Δ calculated for reflectance and radiance data. Average values (absolute value | relative value).

| Landscape Type | VIS | NIR | SWIR 1 | SWIR 2 |
|---|---|---|---|---|
| Agriculture | 0.007 \| 304% | 0.0003 \| 5% | 0.001 \| 5% | 0.001 \| 2% |
| Mountains | 0.004 \| 542% | 0.001 \| 32% | 0.001 \| 12% | 0.001 \| 7% |
| Urban | 0.009 \| 494% | 0.003 \| 32% | 0.003 \| 21% | 0.002 \| 6% |
| Water | 0.001 \| 440% | 0.0002 \| 25% | 0.0001 \| 11% | 0.00005 \| 6% |
| All | 0.005 \| 442% | 0.001 \| 21% | 0.001 \| 12% | 0.001 \| 5% |

Appendix C

Figure A2. Fractal dimension determined for radiance images of water (blue), mountain (black), agriculture (green) and urban (red) landscapes.
Figure A3. Fractal dimension determined for reflectance images of water (blue), mountain (black), agriculture (green) and urban (red) landscapes.

References

  1. Mandelbrot, B.B. Fractals: Form, Chance, and Dimension; W.H. Freeman & Company: San Francisco, CA, USA, 1977; ISBN 0716704730. [Google Scholar]
  2. Mandelbrot, B.B. The Fractal Geometry of Nature; Einaudi Paperbacks; Henry Holt and Company: New York, NY, USA, 1983; ISBN 9780716711865. [Google Scholar]
  3. Chaudhuri, B.B.; Sarkar, N. Texture Segmentation Using Fractal Dimension. IEEE Trans. Pattern Anal. Mach. Intell. 1995, 17, 72–77. [Google Scholar] [CrossRef]
  4. Gao, Q.; Zribi, M.; Escorihuela, J.M.; Baghdadi, N.; Segui, Q.P. Irrigation Mapping Using Sentinel-1 Time Series at Field Scale. Remote Sens. 2018, 10, 1495. [Google Scholar] [CrossRef] [Green Version]
  5. Di Martino, G.; Di Simone, A.; Riccio, D. Fractal-Based Local Range Slope Estimation from Single SAR Image with Applications to SAR Despeckling and Topographic Mapping. Remote Sens. 2018, 10, 1294. [Google Scholar] [CrossRef] [Green Version]
  6. Di Martino, G.; Iodice, A.; Riccio, D.; Ruello, G.; Zinno, I. The Role of Resolution in the Estimation of Fractal Dimension Maps From SAR Data. Remote Sens. 2018, 10, 9. [Google Scholar] [CrossRef] [Green Version]
  7. Zhou, Y.; Lin, C.; Wang, S.; Liu, W.; Tian, Y. Estimation of Building Density with the Integrated Use of GF-1 PMS and Radarsat-2 Data. Remote Sens. 2016, 8, 969. [Google Scholar] [CrossRef] [Green Version]
  8. Chen, Q.; Li, L.; Xu, Q.; Yang, S.; Shi, X.; Liu, X. Multi-Feature Segmentation for High-Resolution Polarimetric SAR Data Based on Fractal Net Evolution Approach. Remote Sens. 2017, 9, 570. [Google Scholar] [CrossRef] [Green Version]
  9. Peleg, S.; Naor, J.; Hartley, R.; Avnir, D. Multiple Resolution Texture Analysis and Classification. IEEE Trans. Pattern Anal. Mach. Intell. 1984, PAMI-6, 518–523. [Google Scholar] [CrossRef] [Green Version]
  10. Clarke, K.C. Computation of the Fractal Dimension of Topographic Surfaces Using the Triangular Prism Surface Area Method. Comput. Geosci. 1986, 12, 713–722. [Google Scholar] [CrossRef]
  11. Lam, N.S.N.; De Cola, L. Fractals in Geography; Prentice Hall: Upper Saddle River, NJ, USA, 1993; ISBN 9780131058675. [Google Scholar]
  12. Sevcik, C. A Procedure to Estimate the Fractal Dimension of Waveforms. Complex. Int. 1998, 5, 1–19. [Google Scholar]
  13. Turcotte, D.L. Fractals and Chaos in Geology and Geophysics, 2nd ed.; Cambridge University Press: Cambridge, UK, 1997. [Google Scholar]
  14. Mukherjee, K.; Ghosh, J.K.; Mittal, R.C. Variogram Fractal Dimension Based Features for Hyperspectral Data Dimensionality Reduction. J. Indian Soc. Remote Sens. 2013, 41, 249–258. [Google Scholar] [CrossRef]
  15. Ghosh, J.K.; Somvanshi, A. Fractal-based Dimensionality Reduction of Hyperspectral Images. J. Indian Soc. Remote Sens. 2008, 36, 235–241. [Google Scholar] [CrossRef]
  16. Aleksandrowicz, S.; Wawrzaszek, A.; Drzewiecki, W.; Krupinski, M. Change Detection Using Global and Local Multifractal Description. IEEE Geosci. Remote Sens. Lett. 2016, 13, 1183–1187. [Google Scholar] [CrossRef]
  17. Dong, P. Fractal Signatures for Multiscale Processing of Hyperspectral Image Data. Adv. Space Res. 2008, 41, 1733–1743. [Google Scholar] [CrossRef]
  18. Mukherjee, K.; Bhattacharya, A.; Ghosh, J.K.; Arora, M.K. Comparative Performance of Fractal Based and Conventional Methods for Dimensionality Reduction of Hyperspectral Data. Opt. Lasers Eng. 2014, 55, 267–274. [Google Scholar] [CrossRef]
  19. Mukherjee, K.; Ghosh, J.K.; Mittal, R.C. Dimensionality Reduction of Hyperspectral Data Using Spectra Fractal Feature. Geocarto Int. 2012, 27, 515–531. [Google Scholar] [CrossRef]
  20. Qiu, H.-I.; Lam, N.; Quattrochi, D.; Gamon, J. Fractal Characterization of Hyperspectral Imagery. Photogramm. Eng. Remote Sens. 1999, 65, 63–71. [Google Scholar]
  21. Myint, S.W. Fractal Approaches in Texture Analysis and Classification of Remotely Sensed Data: Comparison with Spatial Autocorrelation Techniques and Simple Descriptive Statistics. Int. J. Remote Sens. 2003, 24, 1925–1947. [Google Scholar] [CrossRef]
  22. Krupinski, M.; Wawrzaszek, A.; Drzewiecki, W.; Aleksandrowicz, S. Usefulness of the Fractal Dimension in the Context of Hyperspectral Data Description. In Proceedings of the 14th SGEM GeoConference on Informatics, Geoinformatics and Remote Sensing; STEF92 Technology, Albena, Bulgaria, 17–26 June 2014; Volume 3, pp. 367–374. [Google Scholar] [CrossRef]
  23. Sarkar, N.; Chaudhuri, B.B. An efficient differential box-counting approach to compute fractal dimension of image. IEEE Trans. Syst. Man Cybern. 1994, 24, 115–120. [Google Scholar] [CrossRef] [Green Version]
  24. Sun, W.; Xu, G.; Gong, P.; Liang, S. Fractal analysis of remotely sensed images: A review of methods and applications. Int. J. Remote Sens. 2006, 27, 4963–4990. [Google Scholar] [CrossRef]
  25. Wawrzaszek, A.; Aleksandrowicz, S.; Krupiski, M.; Drzewiecki, W. Influence of Image Filtering on Land Cover Classification when using Fractal and Multifractal Features. Photogramm. Fernerkund. Geoinf. 2014, 2014, 101–115. [Google Scholar] [CrossRef]
  26. Drzewiecki, W.; Wawrzaszek, A.; Krupinski, M.; Aleksandrowicz, S.; Bernat, K. Comparison of selected textural features as global content-based descriptors of VHR satellite image—The EROS—A study 2013. In Proceedings of the 2013 Federated Conference on Computer Science and Information Systems, Krakow, Poland, 8–11 September 2013; pp. 43–49. [Google Scholar]
  27. Halsey, T.C.; Jensen, M.H.; Kadanoff, L.P.; Procaccia, I.; Shraiman, B.I. Fractal measures and their singularities: The characterization of strange sets. Nucl. Phys. B Proc. Suppl. 1987, 2, 501–511. [Google Scholar] [CrossRef]
  28. Hentschel, H.G.E.; Procaccia, I. The infinite number of generalized dimensions of fractals and strange attractors. Phys. D Nonlinear Phenom. 1983, 8, 435–444. [Google Scholar] [CrossRef]
  29. Su, H.; Sheng, Y.; Du, P. A New Band Selection Algorithm for Hyperspectral Data Based on Fractal Dimension. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. Beijing 2008, XXXVII, 279–284. [Google Scholar]
  30. Combrexelle, S.; Wendt, H.; Tourneret, J.-Y.; Mclaughlin, S.; Abry, P. Hyperspectral Image Analysis Using Multifractal Attributes. In Proceedings of the 7th IEEE Workshop on Hyperspectral Image and SIgnal Processing: Evolution in Remote Sensing (WHISPERS 2015), Tokyo, Japan, 2–5 June 2015; pp. 1–4. [Google Scholar] [CrossRef] [Green Version]
  31. Aleksandrowicz, S.; Wawrzaszek, A.; Jenerowicz, M.; Drzewiecki, W.; Krupinski, M. Local Multifractal Description of Bi-Temporal VHR Images. In Proceedings of the 2019 10th International Workshop on the Analysis of Multitemporal Remote Sensing Images (MultiTemp), Shanghai, China, 5–7 August 2019; pp. 1–3. [Google Scholar] [CrossRef]
  32. Grazzini, J.; Turiel, A.; Yahia, H.; Herlin, I.; Rocquencourt, I. Edge-preserving smoothing of high-resolution images with a partial multifractal reconstruction scheme. In Proceedings of the ISPRS 2004—International Society for Photogrammetry and Remote Sensing XXXV, Istambul, Turkey, 12–23 July 2004; pp. 1125–1129. [Google Scholar]
  33. Hu, M.-G.; Wang, J.-F.; Ge, Y. Super-resolution reconstruction of remote sensing images using multifractal analysis. Sensors 2009, 9, 8669–8683. [Google Scholar] [CrossRef]
  34. Chen, Z.; Hu, Y.; Zhang, Y. Effects of Compression on Remote Sensing Image Classification Based on Fractal Analysis. IEEE Trans. Geosci. Remote Sens. 2019, 57, 4577–4590. [Google Scholar] [CrossRef]
  35. Drzewiecki, W.; Wawrzaszek, A.; Aleksandrowicz, S.; Krupinski, M.; Bernat, K. Comparison of selected textural features as global content-based descriptors of VHR satellite image. In Proceedings of the 2013 IEEE International Geoscience and Remote Sensing Symposium—IGARSS, Melbourne, Australia, 21–26 July 2013; pp. 4364–4366. [Google Scholar] [CrossRef]
  36. Kupidura, P. The Comparison of Different Methods of Texture Analysis for Their Efficacy for Land Use Classification in Satellite Imagery. Remote Sens. 2019, 11, 1233. [Google Scholar] [CrossRef] [Green Version]
  37. Chen, Q.; Zhao, Z.; Jiang, Q.; Zhou, J.-X.; Tian, Y.; Zeng, S.; Wang, J. Detecting subtle alteration information from ASTER data using a multifractal-based method: A case study from Wuliang Mountain, SW China. Ore Geol. Rev. 2019, 115, 103182. [Google Scholar] [CrossRef]
  38. Ghosh, J.K.; Somvanshi, A.; Mittal, R.C. Fractal Feature for Classification of Hyperspectral Images of Moffit Field, USA. Curr. Sci. 2008, 94, 356–358. [Google Scholar]
  39. Junying, S.; Ning, S. A Dimensionality Reduction Algorithm of Hyper Spectral Image Based on Fract Analysis. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2008, XXXVII, 297–302. [Google Scholar]
  40. Ziyong, Z. Multifractal Based Hyperion Hyperspectral Data Mining. In Proceedings of the 2010 Seventh International Conference on Fuzzy Systems and Knowledge Discovery, Yantai, China, 10–12 August 2010; Volume 5, pp. 2109–2113. [Google Scholar]
  41. Hosseini, A.; Ghassemian, H. Classification of Hyperspectral and Multifractal Images by Using Fractal Dimension of Spectral Response Curve. In Proceedings of the 20th Iranian Conference on Electrical Engineering (ICEE2012), Tehran, Iran, 15–17 May 2012; pp. 1452–1457. [Google Scholar] [CrossRef]
  42. Li, N.; Zhao, H.; Huang, P.; Jia, G.R.; Bai, X. A novel logistic multi-class supervised classification model based on multi-fractal spectrum parameters for hyperspectral data. Int. J. Comput. Math. 2015, 92, 836–849. [Google Scholar] [CrossRef]
  43. Wan, X.; Zhao, C.; Wang, Y.; Liu, W. Stacked sparse autoencoder in hyperspectral data classification using spectral-spatial, higher order statistics and multifractal spectrum features. Infrared Phys. Technol. 2017, 86, 77–89. [Google Scholar] [CrossRef]
  44. Krupiński, M.; Wawrzaszek, A.; Drzewiecki, W.; Aleksandrowicz, S.; Jenerowicz, M. Multifractal Parameters for Spectral Profile Description. In Proceedings of the IGARSS 2019—2019 IEEE International Geoscience and Remote Sensing Symposium, Yokohama, Japan, 28 July–2 August 2019; pp. 1256–1259. [Google Scholar] [CrossRef]
  45. Jenerowicz, M.; Wawrzaszek, A.; Krupiński, M.; Drzewiecki, W.; Aleksandrowicz, S. Aplicability of Multifractal Features as Descriptors of the Complex Terrain Situation in IDP/Refugee Camps. In Proceedings of the IGARSS 2019—2019 IEEE International Geoscience and Remote Sensing Symposium, Yokohama, Japan, 28 July–2 August 2019; pp. 2662–2665. [Google Scholar] [CrossRef]
  46. Jenerowicz, M.; Wawrzaszek, A.; Drzewiecki, W.; Krupiński, M.; Aleksandrowicz, S. Multifractality in Humanitarian Applications: A Case Study of Internally Displaced Persons/Refugee Camps. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2019, 1–8. [Google Scholar] [CrossRef]
  47. Drzewiecki, W.; Wawrzaszek, A.; Krupiński, M.; Aleksandrowicz, S.; Bernat, K. Applicability of multifractal features as global characteristics of WorldView-2 panchromatic satellite images. Eur. J. Remote Sens. 2016, 49, 809–834. [Google Scholar] [CrossRef] [Green Version]
  48. Wawrzaszek, A.; Walichnowska, M.; Krupiński, M. Evaluation of degree of multifractality for description of high resolution data aquired by Landsat satellites. Arch. Fotogram. Kartogr. Teledetekcji 2015, 27, 175–184. (In Polish) [Google Scholar] [CrossRef]
  49. Green, R.O.; Eastwood, M.L.; Sarture, C.M.; Chrien, T.G.; Aronsson, M.; Chippendale, B.J.; Faust, J.A.; Pavri, B.E.; Chovit, C.J.; Solis, M.; et al. Imaging spectroscopy and the Airborne Visible/Infrared Imaging Spectrometer (AVIRIS). Remote Sens. Environ. 1998, 65, 227–248. [Google Scholar] [CrossRef]
  50. Gao, B.-C.; Heidebrecht, K.B.; Goetz, A.F.H. Derivation of scaled surface reflectances from AVIRIS data. Remote Sens. Environ. 1993, 44, 165–178. [Google Scholar] [CrossRef]
  51. Gao, B.-C.; Davis, C.O. Development of a line-by-line-based atmosphere removal algorithm for airborne and spaceborne imaging spectrometers. In Proceedings of the Imaging Spectrometry III, San Diego, CA, USA, 31 October 1997; Volume 3118. [Google Scholar]
  52. Wawrzaszek, A.; Krupinski, M.; Aleksandrowicz, S.; Drzewiecki, W. Fractal and multifractal characteristics of very high resolution satellite images. In Proceedings of the 2013 IEEE International Geoscience and Remote Sensing Symposium—IGARSS, Melbourne, Australia, 21–26 July 2013; pp. 1501–1504. [Google Scholar] [CrossRef]
  53. Wawrzaszek, A.; Echim, M.; Bruno, R. Multifractal analysis of heliospheric magnetic field fluctuations observed by Ulysses. Astrophys. J. 2019, 876, 153. [Google Scholar] [CrossRef] [Green Version]
Figure 1. Selected spectral bands of four Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) images used in the analyses: (a) agriculture, (b) mountains, (c) urban and (d) water.
Figure 2. Degree of multifractality (Δ) determined for radiance images of water (blue), mountain (black), agriculture (green) and urban (red) landscapes. The whole spectrum is divided according to the ranges presented in Table 3.
Figure 3. Degree of multifractality calculated for radiance and reflectance images of water (blue), mountain (black), agriculture (green) and urban (red) landscapes, divided into four zones according to Table 3.
Figure 4. Degree of multifractality calculated for radiance images of different sizes: 512 × 512, 256 × 256, 128 × 128 and 64 × 64 pixels, and four landscape types: agriculture (green), mountains (black), urban (red) and water (blue).
Table 1. Examples of local and global (multi)fractal descriptions applied to hyperspectral data analysis.

| Description | Paper | Sensor/Dataset | Number of Bands | Image Size | Parameters Used/Method |
|---|---|---|---|---|---|
| Global | Qiu et al. (1999) [20] | AVIRIS ¹/Malibu; AVIRIS/LA | 224; 224 | 614 × 512; 614 × 512 | Fractal dimension/isarithm method and triangular prism method |
| | Myint et al. (2003) [21] | ATLAS (5 classes) | 15 | 17 × 17; 33 × 32; 65 × 65 | Fractal dimension/isarithm, triangular prism and variogram method |
| | Su et al. (2008) [29] | OMIS ²/Beijing | 64 | 536 × 512 | Fractal dimension/double blanket method |
| | Krupiński et al. (2014) [22] | AVIRIS (4 classes); AVIRIS/Malibu; AVIRIS/LA | 224; 224; 224 | 512 × 512; 512 × 512; 512 × 512 | Fractal dimension/differential box counting method |
| Local | Combrexelle et al. (2015) [30] | Hyspex/Madonna; AVIRIS/Moffit Field | 160; 224 | 256 × 256; 64 × 64; 16 × 16 | Coefficients of the polynomial describing multifractal spectrum/wavelet leader multifractal formalism |

¹ Airborne Visible/Infrared Imaging Spectrometer, ² Operational Modular Imaging Spectrometer.
Table 2. Examples of (multi)fractal spectral curve descriptions applied to hyperspectral data analysis.

| Paper | Sensor/Dataset | No. of Bands | No. of Classes | Parameters Used/Method |
|---|---|---|---|---|
| Dong et al. (2008) [17] | HYPERION | 138 of 242 | 5 | Fractal dimension/blanket method |
| Ghosh et al. (2008) [15] | AVIRIS/Moffit Field | 224 | 4 | Fractal dimension/adapted Hausdorff metric |
| Ghosh et al. (2008) [38] | AVIRIS/Moffit Field | 30; 128 | 5 | Fractal dimension/adapted Hausdorff metric |
| Junying et al. (2008) [39] | MAIS ¹; OMIS | 176 of 220 | 4; 4 | Fractal dimension/step measurement method |
| Ziyong et al. (2010) [40] | HYPERION | 191 of 210; 12 | – | Fractal dimension/modified blanket method |
| Hosseini et al. (2012) [41] | HYDICE ²/Washington; F210 | 191 of 210; 12 | 6; 9 | Fractal dimension/Hausdorff metric |
| Mukherjee et al. (2012) [14] | HYDICE; AVIRIS/Indian Pine; AVIRIS/Cuprite | 188 of 210; 200 of 224; 197 of 224 | 5; 9 of 16; 14 | Fractal dimension/power spectrum method |
| Mukherjee et al. (2013) [19] | HYDICE; AVIRIS/Indian Pine; AVIRIS/Cuprite | 188 of 210; 200 of 224; 197 of 224 | 5; 9 of 16; 14 | Fractal dimension/variogram method |
| Mukherjee et al. (2014) [18] | AVIRIS/Indian Pine; AVIRIS/Cuprite | 200 of 224; 197 of 224 | 9 of 16; 14 | Fractal dimension/Sevcik’s method, power spectrum method, variogram method |
| Li et al. (2015) [42] | PHI ³/Fanglu; AVIRIS/Indian Pines | 64; 200 of 224 | 6; 16 | 4 parameters related to multifractal spectrum |
| Wan et al. (2017) [43] | AVIRIS/Indian Pines; AVIRIS/KSC | 200 of 224; 176 of 224 | 9 of 16; 13 | Holder exponent, multifractal spectrum features |
| Krupiński et al. (2019) [44] | CASI ⁴/University of Houston | 144 | 15 | 6 parameters related to multifractal spectrum/multifractal detrended fluctuation analysis |

¹ Modular Airborne Imaging Spectrometer, ² Hyperspectral Digital Imagery Collection Experiment, ³ Pushbroom Hyperspectral Imager, ⁴ Compact Airborne Spectrographic Imager.
Table 3. Division of the electromagnetic spectrum into six zones.

| Zone Name | VIS | NIR | A 1 | SWIR 1 | A 2 | SWIR 2 |
|---|---|---|---|---|---|---|
| Bands | 1–40 | 41–103 | 104–114 | 115–152 | 153–168 | 169–224 |
| Wavelength (nm) | 366–724 | 734–1313 | 1323–1423 | 1433–1802 | 1811–1937 | 1947–2496 |
Table 4. Modulus of the difference between Δ values for reflectance and radiance data. Average absolute and relative (%) values (absolute value | relative value). Relative values refer to radiance data.

| Landscape Type | VIS | NIR | SWIR 1 | SWIR 2 |
|---|---|---|---|---|
| Agriculture | 0.074 \| 233% | 0.003 \| 4% | 0.007 \| 4% | 0.004 \| 1% |
| Mountains | 0.062 \| 524% | 0.005 \| 11% | 0.004 \| 8% | 0.009 \| 5% |
| Urban | 0.063 \| 383% | 0.007 \| 15% | 0.000 \| 5% | 0.002 \| 2% |
| Water | 0.003 \| 446% | 0.001 \| 25% | 0.001 \| 11% | 0.0003 \| 6% |
| All | 0.049 \| 394% | 0.005 \| 9% | 0.004 \| 7% | 0.004 \| 3% |
Table 5. Modulus of the difference between values of Δ calculated for radiance data for original and resampled data. Average values (absolute value | relative value).

| Landscape Type | Mean Δ difference, 256 × 256 | 128 × 128 | 64 × 64 | Mean error difference, 256 × 256 | 128 × 128 | 64 × 64 |
|---|---|---|---|---|---|---|
| Agriculture | 0.007 \| 4% | 0.007 \| 4% | 0.012 \| 10% | 0.003 \| 2% | 0.008 \| 5% | 0.014 \| 11% |
| Mountains | 0.006 \| 6% | 0.012 \| 14% | 0.015 \| 18% | 0.001 \| 1% | 0.002 \| 2% | 0.002 \| 2% |
| Urban | 0.023 \| 29% | 0.033 \| 44% | 0.037 \| 52% | 0.008 \| 4% | 0.012 \| 7% | 0.014 \| 8% |
| Water | 0.005 \| 21% | 0.008 \| 44% | 0.011 \| 76% | 0.002 \| 2% | 0.004 \| 2% | 0.004 \| 5% |
| All | 0.010 \| 15% | 0.015 \| 26% | 0.019 \| 39% | 0.003 \| 2% | 0.005 \| 3% | 0.009 \| 6% |
Table 6. Correlation coefficients (r²) between the degree of multifractality and four moments calculated for reflectance for the four landscape types after removal of outliers.

| Landscape Type | Mean | Standard Deviation | Skewness | Kurtosis |
|---|---|---|---|---|
| Agriculture | 0.452 | 0.003 | 0.012 | 0.508 |
| Mountains | 0.664 | 0.075 | 0.038 | 0.012 |
| Urban | 0.834 | 0.500 | 0.858 | 0.762 |
| Water | 0.255 | 0.180 | 0.039 | 0.008 |
