*3.3. Qualitative Parameters for Compressed Aerial Images*

In aerial images, different areas such as grass, sand, vegetation, and water can be treated as regions of different texture. Since texture is defined by the local fluctuations of intensity or color brightness in an image, a human can discern the corresponding regions by their distinct textures even in grayscale images. A rough-textured region is characterized by contrasting values of neighboring pixels (e.g., forest canopy), whereas a smooth region contains pixels of similar values (e.g., calm water) [27]. Texture characteristics are used in various remote sensing applications such as segmentation and classification [23,53,54]. Aerial images are rich not only in texture but also in color information, which is used for edge detection, segmentation, classification, and other purposes [5,36,37].

Visual features depend on the image content, since they are represented by the spatial arrangement and interrelationships of pixel values. Pixel-based approaches are widely used for change detection in remote sensing data [23,55]. It is therefore essential to evaluate the impact of lossy compression on the visual features of aerial image content. Changes of the texture and color characteristics can be evaluated by numerical pixel-based statistical measures [20,55]. No prior processing of the image is required to assess the statistical change of information after lossy compression: the statistics are calculated for the compressed and the original image, and the differences are compared.

There are different methods to calculate texture features, such as Gabor filters [56], wavelets [57], and the Grey Level Co-occurrence Matrix (GLCM) [53–55]. The GLCM-based method is commonly used for texture analysis and discrimination using second-order histogram statistics. In [58], the authors proposed 14 statistical properties to describe texture. Five Haralick statistical measures, largely uncorrelated with each other, are commonly used for texture analysis in remote sensing images: contrast, correlation, energy, entropy, and homogeneity [53,59]. The statistical texture measures are calculated from the probability matrix *P* of the GLCM method. The number of grey levels in the aerial image determines the dimensions of this matrix; the grey levels can be quantized at the cost of some loss of information [60]. Each element (*i*, *j*) of the probability matrix *P* gives the frequency with which a pixel of grayscale intensity *i* occurs at a specified distance *d* and direction *θ* from a pixel of grayscale intensity *j*. Smaller distances are used for capturing local information [54].

In this research, we used five GLCM-based statistics to evaluate the distortions introduced into grayscale image textures (the Y channel of the YCbCr color space) by lossy compression: contrast, correlation, homogeneity, energy, and entropy.

The contrast statistic [53,54,58] measures the intensity contrast between each image pixel and its neighbor. It captures the local variations in the image content and is defined by the equation:

$$Contrast = \sum\_{i=0}^{N-1} \sum\_{j=0}^{N-1} |i - j|^2 P(i, j),\tag{2}$$

where *P* is the GLCM probability matrix and (*i*, *j*) is the pair of grey levels indexing its elements.

The correlation [53,54,58] shows how a pixel is related to its neighbors. It also reflects texture similarity in a given direction and is high for image regions with a linear structure. The correlation is expressed as:

$$Correlation = \sum\_{i=0}^{N-1} \sum\_{j=0}^{N-1} \frac{(i - \mu\_i) \left(j - \mu\_j\right) P(i, j)}{\sigma\_i \sigma\_j},\tag{3}$$

where *µ<sub>i</sub>*, *µ<sub>j</sub>* are the means and *σ<sub>i</sub>*, *σ<sub>j</sub>* are the standard deviations of the marginal distributions of *P*.

The homogeneity statistic [53,54,58] reflects how closely the elements of the GLCM are distributed to its diagonal. A low-contrast image yields high homogeneity values. This statistic is closely related to the change of pixel intensity values within an image region. The homogeneity is calculated as:

$$Homogeneity = \sum\_{i=0}^{N-1} \sum\_{j=0}^{N-1} \frac{P(i,j)}{1+|i-j|}. \tag{4}$$

The energy statistic [53,54,58] measures uniformity. The less smooth the image texture, the lower its energy value. The energy is computed as:

$$Energy = \sum\_{i=0}^{N-1} \sum\_{j=0}^{N-1} P^2(i, j). \tag{5}$$

The entropy [53,54,58] measures the amount of information in the image. High entropy reflects high complexity and disorder of the image textures, while smooth textures have low entropy values. The entropy statistic is computed as:

$$Entropy = -\sum\_{i=0}^{N-1} \sum\_{j=0}^{N-1} P(i,j) \log P(i,j). \tag{6}$$
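As an illustration, the five statistics above can be computed from a GLCM built with plain NumPy. This is a minimal sketch for a single (distance, direction) offset; the `glcm` and `haralick_stats` helper names are ours, not from the cited works:

```python
import numpy as np

def glcm(img, levels, offset=(0, 1)):
    """Build the GLCM probability matrix P for one (distance, direction) offset."""
    P = np.zeros((levels, levels))
    dr, dc = offset
    rows, cols = img.shape
    for r in range(rows):
        for c in range(cols):
            r2, c2 = r + dr, c + dc
            if 0 <= r2 < rows and 0 <= c2 < cols:
                P[img[r, c], img[r2, c2]] += 1
    return P / P.sum()  # normalize co-occurrence counts to probabilities

def haralick_stats(P):
    """Contrast, correlation, homogeneity, energy, entropy per Equations (2)-(6)."""
    n = P.shape[0]
    i, j = np.indices(P.shape)
    contrast = np.sum(np.abs(i - j) ** 2 * P)
    homogeneity = np.sum(P / (1 + np.abs(i - j)))
    energy = np.sum(P ** 2)
    nz = P > 0  # skip zero entries so the logarithm is defined
    entropy = -np.sum(P[nz] * np.log(P[nz]))
    p_i, p_j = P.sum(axis=1), P.sum(axis=0)  # marginal distributions
    k = np.arange(n)
    mu_i, mu_j = np.sum(k * p_i), np.sum(k * p_j)
    sigma_i = np.sqrt(np.sum((k - mu_i) ** 2 * p_i))
    sigma_j = np.sqrt(np.sum((k - mu_j) ** 2 * p_j))
    correlation = np.sum((i - mu_i) * (j - mu_j) * P) / (sigma_i * sigma_j)
    return contrast, correlation, homogeneity, energy, entropy
```

In practice, optimized GLCM routines from libraries such as scikit-image would be used; the sketch only makes the definitions above concrete.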

In our research, the effect of lossy compression on image color is described by the first-order statistics (mean and standard deviation) of the Cb and Cr channels of the YCbCr color space. These histogram-based statistics are global, since they do not localize image distortions in the spatial domain. A one-dimensional histogram provides statistical information about a grayscale or color image or texture. The probability density function *p*(*i*) is calculated by dividing the values of the intensity-level histogram *h*(*i*) by the number of image pixels *N* × *M* [61]:

$$p(i) = \frac{h(i)}{NM},\tag{7}$$

where *i* = 0, 1, . . . , *G* − 1, and *G* is the number of image intensity levels.

The mean statistic defines the average intensity level of the image, and the standard deviation describes the dispersion of the image intensities around the mean.
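As a sketch (the helper name is ours), the histogram-based mean and standard deviation of a color channel follow directly from Equation (7):

```python
import numpy as np

def channel_stats(channel, levels=256):
    """First-order statistics of a color channel from its intensity histogram."""
    h = np.bincount(channel.ravel(), minlength=levels)  # histogram h(i)
    p = h / channel.size                                # p(i) = h(i) / (N * M)
    i = np.arange(levels)
    mean = np.sum(i * p)                                # average intensity level
    std = np.sqrt(np.sum((i - mean) ** 2 * p))          # dispersion around the mean
    return mean, std
```

Comparing the (mean, std) pairs of the Cb and Cr channels before and after compression quantifies the global color change.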

Supervised quality metrics (PSNR, PSNR-HVS-M, SSIM, MS-SSIM) compare the distorted image with a reference. We included a subjective metric alongside these commonly used supervised objective metrics to evaluate the quality of lossy aerial image compression.

The simple pixel-difference-based metric, Peak Signal-to-Noise Ratio (PSNR), is commonly used to assess image distortion after lossy compression, although it is only an approximation of human visual perception. The PSNR between the original image *Im*(*i*, *j*) and the compressed image *Im*′(*i*, *j*) [62] is calculated as:

$$PSNR = 10 \log\_{10} \frac{(2^B - 1)^2}{MSE},\tag{8}$$

where *MSE* is the mean square error and *B* is the number of bits per sample. The Mean Square Error (MSE) [62] is:

$$MSE = \frac{1}{MN} \sum\_{i=0}^{M-1} \sum\_{j=0}^{N-1} \left(Im(i,j) - Im'(i,j)\right)^2,\tag{9}$$

where *M* and *N* are the width and height of the aerial image, respectively. For the YCbCr color space, PSNR is computed as [62]:

$$PSNR\_{YCbCr} = \frac{6PSNR\_Y + PSNR\_{Cb} + PSNR\_{Cr}}{8}.\tag{10}$$
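Equations (8)-(10) can be sketched as follows (the function names are ours; `psnr` operates on a single channel):

```python
import numpy as np

def psnr(ref, dist, bits=8):
    """Peak Signal-to-Noise Ratio between a reference and a distorted channel."""
    mse = np.mean((ref.astype(np.float64) - dist.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")                  # identical images
    peak = 2 ** bits - 1                     # 255 for 8-bit samples
    return 10 * np.log10(peak ** 2 / mse)    # Equation (8)

def psnr_ycbcr(psnr_y, psnr_cb, psnr_cr):
    """Equation (10): luma-weighted combination for YCbCr images."""
    return (6 * psnr_y + psnr_cb + psnr_cr) / 8
```

The 6:1:1 weighting in `psnr_ycbcr` reflects the dominant contribution of the luminance channel to perceived quality.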

The PSNR-HVS-M metric was designed to improve the performance of PSNR [63,64] by taking the human visual system (HVS) into account. The original and distorted images are divided into 8 × 8 non-overlapping blocks of pixels. The difference *δ*(*i*, *j*) between each distorted and original block of DCT coefficients is multiplied by a contrast masking (CM) factor, and the result is weighted using the coefficients of the Contrast Sensitivity Function (CSF) [64]:

$$\delta\_{PSNR-HVS-M}(i,j) = \delta(i,j) \cdot CM(i,j) \cdot CSF(i,j).\tag{11}$$

Then, the MSE in the DCT domain is [64]:

$$MSE\_{PSNR-HVS-M} = \frac{1}{MN} \sum\_{I=1}^{M/8} \sum\_{J=1}^{N/8} \sum\_{i=1}^{8} \sum\_{j=1}^{8} \left( \delta\_{PSNR-HVS-M}(i, j) \right)^2,\tag{12}$$

where (*I*, *J*) is the position of the 8 × 8 non-overlapping block in the image and (*i*, *j*) is the position of the coefficient within the block.

PSNR-HVS-M is computed using Equation (8) with MSE replaced by *MSE<sub>PSNR-HVS-M</sub>*. Values of the PSNR and PSNR-HVS-M metrics lie in the range [0, +∞) dB.

The Structural Similarity Index (SSIM) [65,66] measures the similarity between two images: the original and the one reconstructed after compression. Changes of structural information are analyzed using three components: structure *s*, luminance *l*, and contrast *c*. This HVS-based metric is usually applied to the luminance channel of images (the Y channel of the YCbCr color space). For the *j*-th scale, the quality assessment is defined as [66]:

$$SSIM\_j = \frac{1}{N\_j} \sum\_{i} c(x\_{j,i}, y\_{j,i}) \, s(x\_{j,i}, y\_{j,i}),\tag{13}$$

for *j* = 1, . . . , *M*−1 and

$$SSIM\_M = \frac{1}{N\_M} \sum\_{i} l(x\_{M,i}, y\_{M,i}) \, c(x\_{M,i}, y\_{M,i}) \, s(x\_{M,i}, y\_{M,i}),\tag{14}$$

for *j* = *M*. In (13) and (14), *x<sub>j,i</sub>*, *y<sub>j,i</sub>* are the *i*-th local image patches at the *j*-th scale, extracted from the *i*-th evaluation window, and *N<sub>j</sub>* is the number of evaluation windows at that scale.

The overall multiscale SSIM is denoted as MS-SSIM [66] and is expressed by the equation:

$$MSSSIM = \prod\_{j=1}^{M} (SSIM\_j)^{\beta\_j},\tag{15}$$

where *β<sub>j</sub>* are exponents obtained through psychophysical measurements [56].

The values of the SSIM and MS-SSIM metrics lie in the range [0, 1]. High image quality is indicated by high scores of SSIM, MS-SSIM, PSNR, and PSNR-HVS-M.
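Given per-scale SSIM scores, the final weighted product of Equation (15) is straightforward. The sketch below uses the five *β<sub>j</sub>* weights commonly cited in the MS-SSIM literature for a five-scale decomposition; the helper name is ours:

```python
import numpy as np

# Commonly cited 5-scale MS-SSIM weights (they sum to approximately 1)
BETA = [0.0448, 0.2856, 0.3001, 0.2363, 0.1333]

def ms_ssim_from_scales(ssim_per_scale, beta=BETA):
    """Equation (15): MS-SSIM as a weighted product of per-scale SSIM values."""
    return float(np.prod([s ** b for s, b in zip(ssim_per_scale, beta)]))
```

Because the weights sum to approximately one, equal per-scale scores yield roughly that same score as the overall MS-SSIM value.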

A subjective evaluation of distorted image quality is based on human visual perception. This method is called the Mean Opinion Score (MOS) [62] and is defined as the average of the opinion scores. MOS is commonly used to assess image quality in a broad spectrum of applications, including image compression. Human judgment is important, but the subjective method is time-consuming and can fail for high-resolution images: because of the vast amount of data present, it is almost impossible to perceive and estimate small distortions. It is therefore reasonable to combine this method with objective ones.

These four groups of qualitative parameters are presented in Figure 3.

**Figure 3.** Qualitative parameters for the evaluation of the reconstructed aerial image after lossy compression. Different colors represent different groups of qualitative parameters.

These parameters were used as criteria for the qualitative evaluation of aerial image lossy compression within the MCDM methodology. Combining different types of qualitative measurements can improve the assessment of the image reconstructed after lossy compression.
