Article

Using Predictive and Differential Methods with K2-Raster Compact Data Structure for Hyperspectral Image Lossless Compression †

by Kevin Chow *, Dion Eustathios Olivier Tzamarias, Ian Blanes and Joan Serra-Sagristà
Department of Information and Communications Engineering, Universitat Autònoma de Barcelona, Cerdanyola del Vallès, 08193 Barcelona, Spain
*
Author to whom correspondence should be addressed.
This paper is an extended version of our paper published in the 6th ESA/CNES International Workshop on On-Board Payload Data Compression Proceedings.
Remote Sens. 2019, 11(21), 2461; https://doi.org/10.3390/rs11212461
Submission received: 31 August 2019 / Revised: 16 October 2019 / Accepted: 17 October 2019 / Published: 23 October 2019
(This article belongs to the Special Issue Remote Sensing Data Compression)

Abstract

This paper proposes a lossless coder for real-time processing and compression of hyperspectral images. After applying either a predictor or a differential encoder to reduce the bit rate of an image by exploiting the close similarity of pixels in neighboring bands, it uses a compact data structure called k²-raster to further reduce the bit rate. The advantage of using such a data structure is its compactness, with a size comparable to that produced by some classical compression algorithms while still providing direct access to its content for queries without any need for full decompression. Experiments show that using k²-raster alone already achieves much lower rates (up to 55% reduction), and with preprocessing, the rates are further reduced by up to 64%. Finally, we provide experimental results showing that the predictor produces a higher rate reduction than differential encoding.

Graphical Abstract

1. Introduction

Compact data structures [1] are examined in this paper as they can provide real-time processing and compression of remote sensing images. These structures are stored in reduced space in a compact form. Functions can be used to access and query each datum or group of data directly and efficiently, without an initial full decompression. Such a structure should also have a size close to the information-theoretic minimum. The idea was explored by Guy Jacobson in his doctoral thesis in 1988 [2] and in a paper he published a year later [3]. Earlier works had expressed similar ideas, but Jacobson's paper is often considered the starting point of the field. Since then the topic has gained attention and a number of research papers have been published. Algorithms such as the FM-index [4,5] and the Burrows-Wheeler transform [6] were proposed, and applications were released; notable examples include bzip2 (https://linux.die.net/man/1/bzip2), Bowtie [7] and SOAP2 [8]. One advantage of compact data structures is that the compressed form can be loaded into main memory and accessed directly. The smaller compressed size also helps data move through communication channels faster. Another advantage is that there is no need to compress and decompress the data, as is the case with data compressed by a classical compression algorithm such as gzip or bzip2, or by a specialized algorithm such as CCSDS 123.0-B-1 [9] or KLT+JPEG 2000 [10,11]. The resulting image has the same quality as the original.
Hyperspectral images are image data that contain many bands from across the electromagnetic spectrum. They are usually acquired by hyperspectral satellite and airborne sensors. Data are extracted from certain bands of the spectrum to help locate specific objects of interest, such as oil fields and minerals. However, due to their large sizes and the huge amount of data collected, hyperspectral images are normally compressed with lossy or lossless algorithms to save space. In the past several decades, much research has gone into keeping storage sizes to a minimum. However, to retrieve the data, it is still necessary to decompress all of them. With our approach using compact data structures, we can query the data without fully decompressing them first, and this is the main motivation for this work.
Prediction is one of the schemes used in lossless compression. CALIC (Context Adaptive Lossless Image Compression) [12,13] and 3D-CALIC [14] belong to this class of schemes. In 1994, Wu et al. introduced CALIC, which uses both context and prediction of the pixel values. In 2000, the same authors proposed a related scheme called 3D-CALIC, in which the predictor is extended to pixels across bands. Later, in 2004, Magli et al. [15] proposed M-CALIC, whose algorithm is related to 3D-CALIC. All these methods take advantage of the fact that in a hyperspectral image, neighboring pixels in the same band (spatial correlation) are usually close in value, and even more so for pixels at the same position in two neighboring bands (spectral correlation).
Differential encoding is another way of encoding an image, obtained by taking the difference between neighboring pixels; in this work, it is a special case of the predictive method. It exploits only the spectral correlation. However, this correlation between pixels becomes weaker as the bands grow further apart, so its effectiveness is expected to decrease when the current band is far from the reference band.
The latest studies on hyperspectral image compression, both lossy and lossless, are focused on CCSDS 123.0, vector quantization, Principal Component Analysis (PCA), JPEG2000, and the Lossy Compression Algorithm for Hyperspectral Image Systems (HyperLCA), among many others. Some of these research works are listed in [16,17,18,19]. In this work, however, we investigate lossless compression of hyperspectral images through the proposed k²-raster for 3D images, a compact data structure that provides bit-rate reduction as well as direct access to the data without full decompression. We also explore the use of a predictor and a differential encoder as preprocessing for the compact data structure, to see whether they can provide further bit-rate reduction. The predictive method and the differential method are also compared. The flow chart shown in Figure 1 depicts how the encoding/decoding of this proposal works.
This paper is organized as follows: In Section 2, we present the k²-raster and discuss it in detail, beginning with the quadtree, followed by the k²-tree and the k²-raster. Later in the same section, details of the predictive method and the differential method are discussed. Section 3 shows the experimental results on how the two methods fare using k²-raster on hyperspectral images, and further results on how other factors, such as different k-values, affect the bit rates. Finally, we present our conclusions in Section 4.

2. Materials and Methods

One way to build a structure that is small and compact is to use a tree structure without pointers. Pointers usually take up a large amount of space, each being on the order of 32 or 64 bits on most modern machines. A tree structure with n pointers has a storage complexity of O(n log n), whereas a pointer-less tree occupies only O(n). In pointer-less trees, the elements are accessed with the rank and select functions [3], which require only simple arithmetic to find a parent's or child's position. This is the premise on which compact data structures are based. In this work, we use the k²-raster from Ladra et al. [20], a concept developed from the k²-tree, also a type of compact data structure, together with the idea of recursive decomposition used in quadtrees. The results of k²-raster were quite favorable on the data sets used by its authors. We therefore extend their approach and investigate whether the structure can be used for 3D hyperspectral images. The Results section will show that it is quite competitive compared to other commonly-used classical compression techniques, with a bit-rate reduction of up to 55% for the test images. With predictive or differential preprocessing, a further bit-rate reduction of up to 64% can be attained. For that reason, we propose in this paper an encoder that applies the predictive or differential method to k²-raster for hyperspectral images.

2.1. Quadtrees

Quadtree structures [21], which have been used in many kinds of data representation such as image processing and computer graphics, are based on the principle of recursive decomposition. As there are many variants of quadtree, we describe the one pertinent to our discussion: the region quadtree. Basically, a quadtree is a tree structure where each internal node has 4 children. Given a 2D square matrix, it is partitioned recursively into four equal subquadrants. In the tree built to represent this, the root node at level 0 has 4 children at level 1, each child representing a node and a subquadrant. Next, if a subquadrant has a size larger than 2 × 2, each such subquadrant is partitioned to give 4 more children and a new level 2 is added to the tree, and so on. Note that the tree nodes are traversed in left-to-right order.
Consider a matrix of size n × n where n is a power of 2; it is recursively divided until each subquadrant has a size of 2 × 2. For example, if the size of the matrix is 8 × 8, the recursive division yields 8²/2² = 16 subquadrants. It should be noted that the value of n in the image matrix needs to be a power of 2. Otherwise, the matrix has to be enlarged widthwise and heightwise to the next power of 2, and the additional pixels are padded with zeros. As k²-trees are based on quadtrees, the division and the resulting tree of a quadtree are very similar to those of a k²-tree. Figure 2 illustrates how a quadtree's recursive partitioning works.
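The padding and subquadrant arithmetic above can be sketched in a few lines of Python (an illustrative sketch with helper names of our own choosing; the paper's actual implementation is written in C):

```python
def next_pow2(n):
    """Smallest power of two >= n: the side length the matrix is padded to."""
    p = 1
    while p < n:
        p *= 2
    return p

def leaf_count(n):
    """Number of 2 x 2 leaf subquadrants after fully dividing an n x n matrix,
    padding n up to a power of two first (padded pixels are zeros)."""
    n = next_pow2(n)
    return (n * n) // (2 * 2)
```

For an 8 × 8 matrix this gives 8²/2² = 16 leaf subquadrants, matching the example above; a 6 × 6 matrix is first padded to 8 × 8.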

2.2. LOUDS

The k²-tree is based on unary encoding and LOUDS, a compact data structure introduced by Guy Jacobson in his thesis and paper [2,3]. A bit string is formed by a breadth-first, left-to-right traversal of an ordinal (rooted, ordered) tree structure. Each parent node is encoded as a string of '1' bits whose length indicates the number of children it has, terminated by a '0' bit. If a node has no children, a single '0' bit suffices.
The parent and child relationship can be computed by two cornerstone functions for compact data structures: rank and select. These functions give us information about the node’s first-child, next-sibling(s), and parent, without the need of using pointers. They are described below:
rank_b(m) returns the number of bits set to b up to and including position m in the bitmap, where b is 0 or 1.
select_b(i) returns the position of the i-th b bit in the bitmap, where b is 0 or 1.
By default, b is 1, i.e., rank(m) = rank_1(m). These operations are inverses of each other; in other words, rank(select(m)) = select(rank(m)) = m. Since a linear scan is required to process the rank and select functions, the worst-case time complexity is O(n).
To clarify how these functions work, consider the binary trees depicted in Figure 3 where the one on the left shows the values and the one on the right shows the numbering of the same tree. If the node has two children, it will be set to 1. Otherwise, it is set to 0. The values of this tree are put in a bit string shown in Figure 4. Figure 5 shows how the position of the left child, right child or parent of a certain node m is computed with the rank and select functions. An example follows:
To find the left child of node 8, we first compute rank(8), the total number of 1's from node 1 up to and including node 8, which is 7. Therefore, the left child is located at 2·rank(8) = 2·7 = 14 and the right child at 2·rank(8)+1 = 2·7+1 = 15. The parent of node 8 is found by computing select(⌊8/2⌋) = select(4): counting the 1 bits starting from node 1 and skipping the nodes with '0' bits, node 4 is where the count first reaches 4, so node 4 is the parent of node 8.
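The two functions can be sketched with a plain linear scan, matching the O(n) worst case stated above (a minimal illustration; positions are 1-indexed as in the example, and the bitmap content below is hypothetical, not the one in Figure 4):

```python
def rank(bits, m, b=1):
    """Number of bits equal to b in positions 1..m (inclusive), by linear scan."""
    return sum(1 for bit in bits[1:m + 1] if bit == b)

def select(bits, i, b=1):
    """Position of the i-th bit equal to b, or -1 if there is no such bit."""
    count = 0
    for pos in range(1, len(bits)):
        if bits[pos] == b:
            count += 1
            if count == i:
                return pos
    return -1

# hypothetical bitmap; index 0 is a dummy so that positions start at 1
bits = [None, 1, 1, 0, 1, 1, 0, 1, 1, 0, 0]
```

Note that select(rank(m)) = m holds whenever the bit at position m equals b, which is the inverse relationship stated above.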
In the next section, we will explain how the rank function can be used to determine the children's positions in a k²-tree, thus enabling us to query the values of the cells.

2.3. k²-Tree

Originally proposed for compressing Web graphs, the k²-tree is a LOUDS-variant compact data structure [22]. The tree represents the binary adjacency matrix of a graph (see Figure 2). It is constructed by recursively partitioning the matrix into square submatrices of equal size until each submatrix reaches a size of k × k, where k ≥ 2. During partitioning, if at least one cell in the submatrix has the value 1, the corresponding tree node is set to 1. Otherwise, it is set to 0 (i.e., it is a leaf and has no children) and that submatrix is not partitioned any further. Figure 2 illustrates an example of a graph of 6 nodes, its 8 × 8 binary adjacency matrix at various stages of recursive partitioning, and the k²-tree constructed from the matrix.
The values of a k²-tree are stored in two bitmaps denoted T (tree) and L (leaves). The values are traversed in breadth-first order starting at the first level. The T bitmap stores the bits of all levels except the last, whose bits are stored in the L bitmap. Note that the bit values of T, either 0 or 1, are stored as a bit vector. To illustrate with an example, we again use the binary matrix in Figure 2. The T bitmap contains all the bits from levels 1 and 2; thus T holds the bits 1001 1101 1000 (see Figure 6). The bits from the last level, level 3, are stored in the L bitmap: 1001 0001 1011 1111.
Consider a set S with elements from 1 to n. To find the position of the first child or the parent of a node m in a k²-tree, we perform the following operations:
first-child(m) ← rank(m) · k², where 1 ≤ m ≤ n
parent(m) ← select(⌊m/k²⌋), where 1 ≤ m ≤ n
Once again using the k²-tree in Figure 2 as an example, with the T bitmap (Figure 6) and the rank and select functions, we can navigate the tree and obtain the positions of the first child and the parent. Figure 7 shows how the nodes of the k²-tree are numbered.
Ex. Locate the first child of node 8:
rank_1(8) × 4 = 6 × 4 = 24
(There are 6 one bits in the T bitmap from node 0 up to and including node 8.)
Ex. Locate the parent of node 11:
select_1(⌊11/4⌋) = select_1(2) = 3
(Starting from node 0 and skipping all nodes with '0' bits, node 3 is the first node at which the 1-bit count reaches 2. Therefore, node 3 is the parent.)
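The two navigation rules and the worked examples above can be checked with a short Python sketch over the T bitmap of the running example (node numbering as in Figure 7; the helper names are ours):

```python
K2 = 4  # k = 2, so each internal node has k^2 = 4 children

# T bitmap of the running example (levels 1 and 2): 1001 1101 1000
T = [1, 0, 0, 1, 1, 1, 0, 1, 1, 0, 0, 0]

def rank1(bits, m):
    """Number of 1 bits in positions 0..m (inclusive)."""
    return sum(bits[:m + 1])

def select1(bits, i):
    """Position of the i-th 1 bit, counting positions from 0."""
    count = 0
    for pos, bit in enumerate(bits):
        count += bit
        if bit == 1 and count == i:
            return pos
    return -1

def first_child(m):
    return rank1(T, m) * K2

def parent(m):
    return select1(T, m // K2)
```

Here first_child(8) returns 24 and parent(11) returns 3, reproducing the two examples above.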
It was shown that the k²-tree performs best when the matrix is sparse, with large clusters of 0's or 1's [20].

2.4. DACs

This section describes DACs (Directly Addressable Codes), which are used in k²-raster to provide direct access to variable-length codes. Based on the concept of compact data structures, DACs were proposed by Brisaboa et al. in papers published in 2009 and 2013 [23,24], and the structure was shown to yield good compression ratios for variable-length integer sequences. By means of the rank function, it gains fast direct access to any position of the sequence in very compact space. The original authors also noted that it is best suited to sequences of integers whose frequency distribution is skewed toward smaller values.
Different types of encoding are used for DACs, and the one we are interested in for k²-raster is called Vbyte coding. Consider a sequence of integers x. Each integer x_i, which is represented with ⌊log₂ x_i⌋ + 1 bits, is broken into blocks of S bits. Each block is stored as a chunk of S + 1 bits: the chunk holding the most significant bits has its highest bit set to 0, while the other chunks have their highest bit set to 1. For example, the integer 20 (10100₂) is 5 bits long; with block size S = 3, it is stored in 2 chunks: 0010 1100.
To show how the chunks are organized and stored, consider three integers of variable length, 20 (10100₂), 6 (110₂) and 73 (1001001₂), again with block size 3. They have the following representations:
20 → 0010 1100 (B1,2 A1,2 B1,1 A1,1)
6 → 0110 (B2,1 A2,1)
73 → 0001 1001 1001 (B3,3 A3,3 B3,2 A3,2 B3,1 A3,1)
We will store them in three chunks of arrays A and bitmaps B. This is depicted in Figure 8. To retrieve the values in the arrays A, we make use of the corresponding bitmaps B with the rank function.
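The Vbyte chunking described above can be sketched as follows (a minimal illustration with S = 3; the function name is ours, and the full DACs layout into level arrays A and bitmaps B is omitted):

```python
def vbyte_chunks(x, S=3):
    """Split x into S-bit blocks (least significant first), then prepend a flag
    bit to each block: 0 for the chunk holding the most significant bits,
    1 for the others. Chunks are returned most significant first, as printed
    in the text."""
    blocks = []
    while True:
        blocks.append(x & ((1 << S) - 1))
        x >>= S
        if x == 0:
            break
    chunks = []
    for i, block in enumerate(blocks):
        flag = 0 if i == len(blocks) - 1 else 1  # last block is most significant
        chunks.append((flag << S) | block)
    return [format(c, "0{}b".format(S + 1)) for c in reversed(chunks)]
```

This reproduces the three representations above: vbyte_chunks(20) gives ['0010', '1100'], vbyte_chunks(6) gives ['0110'], and vbyte_chunks(73) gives ['0001', '1001', '1001'].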
More information on DACs and the software code can be found in the papers [23,24].

2.5. k²-Raster

k²-raster is a compact data structure that allows us to store raster pixels in reduced space. It consists of several basic components: bitmaps, DACs and LOUDS. As with a k²-tree, the image matrix is partitioned recursively until each subquadrant has size k × k. The resulting LOUDS tree topology is kept in the bitmap T, whose elements are accessed with the rank function. Unlike the k²-tree, at each tree level the maximum and minimum values of each subquadrant are stored in two bitmaps called Vmax and Vmin, respectively. To compress the structure further, the maximum and minimum values at each level are compared with the corresponding values of the parent, and their differences replace the values stored in Vmax and Vmin. The rationale is to obtain smaller values at each node so as to get better compression with DACs. An example of a simple 8 × 8 matrix illustrating this point is given in Figure 9. A k²-raster constructed from this matrix, with maximum and minimum values stored in each node, is shown in Figure 10. The structure is then modified, as discussed above, to form a tree with smaller maximum and minimum values, shown in Figure 11.
Next, with the exception of the root node at the top level, the Vmax and Vmin bitmaps at all levels are concatenated to form the Lmax and Lmin bitmaps. The root's maximum (rMax) and minimum (rMin) are integer values and remain uncompressed.
For an image of size n × n with n bands, the time complexity to build all the k²-rasters is O(n³) [22]. To query a cell from the structure, which has a tree height of at most log_k n levels, the time to extract a codeword at a single Lmax level is O(log_k n), which is the worst-case time to traverse from the root node to the last level of the structure. The number of levels L in Lmax is determined by the maximum integer in the sequence, giving a cell-query time complexity of O(log_k n · L) [23,25].
To sum up, a k²-raster structure is composed of a bitmap T, a maximum bitmap Lmax, a minimum bitmap Lmin, a root maximum integer value rMax and a root minimum integer value rMin.
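The parent-relative maximum/minimum differences described above can be sketched with a simplified recursion (an illustration only: the real structure serializes these values into the T, Lmax and Lmin bitmaps via DACs, which is omitted here, and the exact sign conventions are our assumption):

```python
def build_node(matrix, r0, c0, size, pmax=None, pmin=None):
    """Return (dmax, dmin, children) for the square submatrix at (r0, c0).

    dmax/dmin are stored relative to the parent's max/min so they are small
    non-negative integers (good for DACs); the root keeps absolute values.
    Uniform subquadrants (max == min) are not subdivided further."""
    vals = [matrix[r][c]
            for r in range(r0, r0 + size)
            for c in range(c0, c0 + size)]
    vmax, vmin = max(vals), min(vals)
    dmax = vmax if pmax is None else pmax - vmax
    dmin = vmin if pmin is None else vmin - pmin
    children = []
    if size > 2 and vmax != vmin:
        half = size // 2
        for dr in (0, half):
            for dc in (0, half):
                children.append(
                    build_node(matrix, r0 + dr, c0 + dc, half, vmax, vmin))
    return (dmax, dmin, children)
```

The differences are always non-negative because a child's maximum can never exceed its parent's maximum, and its minimum can never fall below the parent's minimum.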

2.6. Predictive Method

As mentioned in the Introduction, an interband predictor called 3D-CALIC was proposed by Wu et al. in 2000, and another predictor called M-CALIC by Magli et al. in 2004. Our predictor is based on the least squares method and the use of reference bands discussed in both the 3D-CALIC [14] and M-CALIC [15] papers. Consider two neighboring or nearly neighboring bands of the same hyperspectral image. These bands can be represented by two vectors X = (x_1, x_2, x_3, …, x_{n-1}, x_n) and Y = (y_1, y_2, y_3, …, y_{n-1}, y_n), where x_i and y_i are two pixels located at the same spatial position but in different bands, and n is the number of pixels in each band. We can then exploit the close similarity between the bands to predict each pixel value in the current band Y from the corresponding pixel value in band X, which we designate as the reference band.
A predictor for a particular band can be built from the linear equation
Ŷ = αX + β
so as to minimize ||Ŷ − Y||₂², where Ŷ is the predicted value and Y is the actual value of the current band. The optimal values of α and β minimize the prediction error of the current pixel and are obtained from the least squares solution:
α̂ = (n Σ_{i=1}^{n} x_i y_i − Σ_{i=1}^{n} x_i · Σ_{i=1}^{n} y_i) / (n Σ_{i=1}^{n} x_i² − (Σ_{i=1}^{n} x_i)²),
β̂ = (Σ_{i=1}^{n} y_i − α̂ Σ_{i=1}^{n} x_i) / n,
where n is the size of each band, i.e., the height multiplied by the width, α̂ the optimal value of α and β̂ the optimal value of β.
The difference between the actual and predicted pixel values of a band is known as the residual value or the prediction error. When all the pixel values in the current band are calculated, these prediction residuals will be saved in a vector, which will later be used as input to a k 2 -raster.
In other words, for a particular pixel in the current band and the corresponding pixel in the reference band, with δ_i the residual value, y_i the actual value of the current band, and x_i the value of the reference band, encoding uses the following equation:
δ_i = y_i − (α̂ · x_i + β̂).
To decode, the following equation is used:
y_i = δ_i + (α̂ · x_i + β̂).
The distance from the reference band affects the residual values. The closer the current band is to the reference band, the smaller the residual values would tend to be. We can arrange the bands into groups. For example, the first band can be chosen as the reference and the second, third and fourth bands will have their residual values calculated with respect to the first band. And the next group starts with the fifth band as the reference band, etc.
For this coding method, the group size (stored as a 2-byte short integer) as well as the α̂ and β̂ values for each band (stored as 8-byte doubles) need to be saved for use in both the encoder and the decoder. Note that the size of these extra data, generally around 3.5 kB, is insignificant compared to the overall size of the structure.
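The least squares fit and the encode/decode equations above can be sketched as follows (a minimal Python illustration; the rounding of the prediction to an integer is our assumption, and lossless reconstruction only requires that encoder and decoder round identically):

```python
def fit_band(x, y):
    """Least squares alpha-hat and beta-hat for predicting band y from
    reference band x, following the closed-form solution in the text."""
    n = len(x)
    sx, sy = sum(x), sum(y)
    sxy = sum(a * b for a, b in zip(x, y))
    sxx = sum(a * a for a in x)
    alpha = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    beta = (sy - alpha * sx) / n
    return alpha, beta

def encode_band(x, y, alpha, beta):
    """Residuals delta_i = y_i - round(alpha * x_i + beta)."""
    return [yi - round(alpha * xi + beta) for xi, yi in zip(x, y)]

def decode_band(x, delta, alpha, beta):
    """Inverse: y_i = delta_i + round(alpha * x_i + beta)."""
    return [di + round(alpha * xi + beta) for xi, di in zip(x, delta)]
```

Since the decoder applies the same rounded prediction that the encoder subtracted, the round trip is exact.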

2.7. Differential Method

In differential encoding, which is the special case of the predictor with α = 1 and β = 0, the residual value is obtained simply by taking the difference between the current band and the reference band. For a particular pixel in the current band and the corresponding pixel in the reference band, with δ_i the residual value, y_i the actual value of the current band, and x_i the value of the reference band, encoding uses the following equation:
δ_i = y_i − x_i.
To decode, the following equation is used:
y_i = δ_i + x_i.
Like the predictor, we can use the first band as the reference band and the next several bands can use this reference band to find the residual values. Again, the grouping is repeated up to the last band. For this coding method, only the group size (stored as a 2-byte short integer) needs to be saved.
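The differential scheme with band grouping can be sketched as follows (a minimal illustration; the group size g and the function names are ours):

```python
def diff_encode(ref, cur):
    """Residuals delta_i = y_i - x_i against the reference band."""
    return [y - x for x, y in zip(ref, cur)]

def diff_decode(ref, delta):
    """Inverse: y_i = delta_i + x_i."""
    return [d + x for x, d in zip(ref, delta)]

def encode_cube(bands, g):
    """For each group of g bands, keep the first band as the reference and
    store the remaining bands as differences against it."""
    out = []
    for start in range(0, len(bands), g):
        group = bands[start:start + g]
        out.append(group[0][:])  # reference band stored as-is
        out.extend(diff_encode(group[0], band) for band in group[1:])
    return out

def decode_cube(enc, g):
    """Rebuild the original bands group by group."""
    out = []
    for start in range(0, len(enc), g):
        group = enc[start:start + g]
        out.append(group[0][:])
        out.extend(diff_decode(group[0], delta) for delta in group[1:])
    return out
```

The residual bands produced by encode_cube would then be fed to the k²-raster builder in place of the raw bands.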

2.8. Related Work

Since the publication of the proposals on k²-tree and k²-raster, more research has been done to extend these structures to 3D, where the first and second dimensions represent the spatial element and the third dimension the time element.
Based on their previous research on k²-raster, Silva-Coira et al. [26] proposed a structure called Temporal k²-raster (Tk²-raster), which represents a time series of rasters. It exploits the fact that in a time series the values in a matrix M1 are very close to, if not the same as, those in the next matrix M2, or even the one after that, M3, along the timeline. The matrices are grouped into τ time instants, and the element values of the first matrix in the group are subtracted from the corresponding ones in the current matrix. The result is smaller integer values, which help form a more compact tree since there are likely to be more zeros than before. Their experimental results bear this out: when τ is small (τ = 4), the sizes are small; as would be expected, the results are less favorable when τ becomes larger (τ = 50). Akin to the Temporal k²-raster, the differential encoding on k²-raster that we propose in this paper also exploits the similarity between neighboring matrices or bands of a hyperspectral image to form a more compact structure.
Another compact representation of raster images in a time series was proposed earlier this year by Cruces et al. [27]. This method is based on a 3D-to-2D mapping of a raster, where 3D tuples ⟨x, y, z⟩ are mapped into a 2D binary grid. That is, a raster of size w × h with values between 0 and v inclusive is represented by a binary matrix of w × h columns and v + 1 rows. All the rasters are then concatenated into a 3D cube and stored as a k³-tree.

3. Results

In this section we describe some of the experiments that were performed to show the use of compact data structures, prediction and differential encoding for real-time processing and compression. First, we show the results against other compression algorithms and techniques currently in use, such as gzip, bzip2, xz, M-CALIC [15] and CCSDS 123.0-B-1 [9]. Then we compare the build time and the data access time for k²-raster with and without prediction and differential encoding. Next, we show the different rates produced by k²-raster as different k-values are applied. Similarly, the results for different group sizes in prediction and differential encoding are shown. Finally, the predictive method and the differential method are compared.
Experiments were conducted using hyperspectral images from different sensors: Atmospheric Infrared Sounder (AIRS), Airborne Visible/Infrared Imaging Spectrometer (AVIRIS), Compact Reconnaissance Imaging Spectrometer for Mars (CRISM), Hyperion, and Infrared Atmospheric Sounding Interferometer (IASI). Except for IASI, all of them are publicly available for download (http://cwe.ccsds.org/sls/docs/sls-dc/123.0-B-Info/TestData). Table 1 gives more detailed information on these images, and also shows the bit-rate reduction obtained with k²-raster with and without prediction. Performance is evaluated in terms of bit rate and entropy.
For best results with k²-raster on the test images, we used the optimal k-value; likewise, for the predictor and the differential encoder, the optimal group size for each image was used. The effects of using different k-values and different group sizes are discussed and tested in two of the subsections below.
To build the k²-raster structure and the cell query functions, a program in C was written. The algorithms presented in the paper by Ladra et al. [20] were the basis and reference for writing the code. The DACs software used in conjunction with our program is available at the Universidade da Coruña's Database Laboratory (Laboratorio de Bases de Datos) website (http://lbd.udc.es/research/DACS/); the package is called "DACs, optimization with no further restrictions". As for the predictive and differential methods, another C program was written to perform the tasks needed to produce the results discussed below. All the code was compiled using gcc or g++ 5.4.0 20160609 with -Ofast optimization.
The machine that these experiments ran on has an Intel Core 2 Duo CPU E7400 @2.80GHz with 3072KB of cache and 3GB of RAM. The operating system is Ubuntu 16.04.5 LTS with kernel 4.15.0-47-generic (64 bits).
To ensure that there was no loss of information, the image was reconstructed by reverse transformation and verified to be identical to the original in the case of the predictive and differential methods. For k²-raster, after saving the structure to disk, we made sure that the original image could be reconstructed from the saved data.

3.1. Comparison with Other Compression Algorithms

k²-raster, both with and without predictive and differential encoding, was compared to commonly-used compression algorithms such as gzip, bzip2 and xz, and to specialized algorithms such as M-CALIC and CCSDS 123.0-B-1. The results of the comparison are shown in Table 2 and depicted in Figure 12.
It can be seen that k²-raster alone already performed better than gzip. When used with the predictor, it produced a bit rate basically on a par with, and sometimes better than, compression algorithms such as xz or bzip2. However, it could not reach the bit rates achieved by CCSDS 123.0-B-1 or M-CALIC. This was to be expected, as both are specialized compression techniques, and CCSDS 123.0-B-1 is considered a baseline against which compression algorithms for hyperspectral images are measured. Nevertheless, k²-raster provides direct access to the elements without full decompression, and this is undoubtedly its major advantage over all the aforementioned compression algorithms.

3.2. Build Time

Both the time to build the k²-raster alone and the time to build it with predictive or differential preprocessing were measured. They were then compared against the time to compress the data with gzip. The results are presented in Table 3. We can see that building the k²-raster alone took half as long as gzip. Comparing the predictive and the differential methods, the time difference is small, although the former generally took longer than the latter due to the additional time needed to compute the values of α̂ and β̂. Both, however, still took less time than gzip compression.

3.3. Access Time

Several tests were conducted to measure the time needed to query the cells in each image. We found that a random cell access takes longer with a predictor than with the k²-raster alone. This was expected, but bear in mind that the bit rates are reduced when a predictor is used, thus decreasing storage size and transmission cost. Table 4 shows the access time in milliseconds for 100,000 (10⁵) iterations of random cell queries performed with getCell(), a function described in the paper by Ladra et al. [20] for accessing pixel values in a k²-raster. Note that the last column also lists the time to decompress a gzip image file, which took at least 4 or 5 times longer than randomly accessing the data 10⁵ times using a predictor.

3.4. Use of Different k-Values

With k²-raster, we found that different k-values produce different bit rates and access times. In general, for most of our test images, the bit rate is optimal when k is between 4 and 9. The reason is that as the k-value increases, the height of the constructed tree decreases; therefore, the number of nodes in the tree decreases, and so does the size of the bitmaps Lmax and Lmin that need to be stored in the structure. Table 5 shows the bit rates of some of the test images for k between 2 and 20. Additionally, experiments show that as the k-value grows, the access time becomes shorter, as can be seen in Table 6: a larger k makes the tree shorter, and hence faster to traverse from the top level down when searching for a particular node. As there is a trade-off between storage size and access time, the experiments used the k-value that produces the lowest bit rate for each image.
As a rule of thumb for choosing a k-value that gives the best rate, or close to it, we recommend k = 6. As Table 5 shows, the difference between the rate produced by this value and the one produced by the optimal k-value averages only about 0.19 bpppb.

3.5. Use of Different Group Sizes

Tests were performed to see how the group size affects the predictive and differential methods, using group sizes of 2, 4, 8, 12, 16, 20, 24, 28 and 32. The results in Table 7 and Figure 13 show that most images reach their optimal bit rates at a group size of 4 or 8. The best bit-rate values are highlighted in red. Over the range of group sizes tested, the bit rates for the predictor are always lower than those for differential encoding, irrespective of the group size, except for the CRISM scenes, whose pixels have low spatial correlation and therefore yield inaccurate predictions.
For users who want to know which group size works best for the predictive and differential methods, a size of 4 is recommended for general use: the difference in bit rate between this group size and the optimal one averages about 0.06 bpppb.
For the rest of the experiments, the optimal group size for each image was used to obtain the bit rate.

3.6. Predictive and Differential Methods

The proposed differential and predictive methods were used to transform the test images into data with lower bit rates, which were then used as input to k 2 -raster to reduce the bit rates further. Their performance was compared with the Reversible Haar Transform at levels 1 and 5, and the results are presented in Table 8. Figure 14 shows the entropy comparison of Yellowstone03 using the differential and predictive methods, while Figure 15 shows the bit-rate comparison between the two methods. Both show that the proposed algorithm lowers the entropy and the bit rates. The data for reference bands are left out of the plots to give the reader a clearer overall picture of the bit-rate comparison.
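For context, the level-1 Reversible Haar Transform (integer S-transform) maps a pair of co-located pixels from consecutive bands to an integer average and a difference, and is exactly invertible. A minimal sketch under our reading of the transform, not the experiment code:

```python
def haar_forward(a: int, b: int) -> tuple:
    """Level-1 integer reversible Haar (S-transform) of a two-band pixel pair."""
    low = (a + b) // 2   # rounded-down average (floor division)
    high = a - b         # difference between the two bands
    return low, high

def haar_inverse(low: int, high: int) -> tuple:
    """Exact inverse: floor division here matches the forward rounding."""
    b = low - high // 2
    a = b + high
    return a, b

# Lossless round trip for an arbitrary integer pixel pair:
assert haar_inverse(*haar_forward(1023, 517)) == (1023, 517)
```

At level 5 this pairing is applied recursively to the low-pass outputs, which is what makes decoding more involved than reversing a single pixel pair.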
The predictive method outperforms the other methods, with the exception of the Reversible Haar Transform at level 5. It should be noted, however, that while the predictive and differential methods require only two pixels (the reference pixel and the current pixel) to perform the reverse transformation, decoding data produced by the Reversible Haar Transform at a higher level is a much more involved process. The experiments show that, for all the test images, the predictive method performs better than the differential method in almost all bands. This can be explained by the fact that in predictive encoding the values of α and β in Equation (1) take into account not only the spectral correlation but also the spatial correlation between the pixels in the bands when determining the prediction values, whereas differential encoding exploits only the spectral correlation.
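The contrast between the two preprocessing methods can be illustrated on flattened band vectors; the least-squares fit of α and β below is our reading of Equation (1), so the helper names and the band-wise fit are assumptions:

```python
def fit_alpha_beta(ref, cur):
    """Least-squares slope and intercept for cur ~ alpha * ref + beta
    (hypothetical helper; the paper estimates its own alpha-hat, beta-hat)."""
    n = len(ref)
    mean_r = sum(ref) / n
    mean_c = sum(cur) / n
    cov = sum((r - mean_r) * (c - mean_c) for r, c in zip(ref, cur))
    var = sum((r - mean_r) ** 2 for r in ref)
    alpha = cov / var
    return alpha, mean_c - alpha * mean_r

def differential_residuals(ref, cur):
    """Differential encoding: the residual uses only spectral correlation."""
    return [c - r for r, c in zip(ref, cur)]

def predictive_residuals(ref, cur):
    """Predictive encoding: subtract a rounded linear prediction, so the
    residual also benefits from structure shared between the bands."""
    alpha, beta = fit_alpha_beta(ref, cur)
    return [c - round(alpha * r + beta) for r, c in zip(ref, cur)]

# Two strongly correlated bands (flattened): the predictor absorbs the
# gain and offset, leaving far smaller residuals than plain differencing.
ref = list(range(16))
cur = [2 * v + 10 for v in ref]
print(max(abs(d) for d in differential_residuals(ref, cur)))  # 25
print(max(abs(p) for p in predictive_residuals(ref, cur)))    # 0
```

When the bands differ only by a gain and an offset, plain differencing keeps that offset in every residual, while the fitted prediction cancels it, which mirrors the behavior observed in the experiments.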

4. Conclusions

In this work, we have shown that the k 2 -raster structure can help reduce the bit rates of a hyperspectral image while providing direct access to its elements without prior full decompression. The predictive and differential methods can be applied to reduce the rates further: our experiments showed that if the image data are first transformed by either method, the bit rates drop further, shrinking the storage and transmission volume of the data. The results also verified that the predictor gives a better bit-rate reduction than the differential encoder and is therefore the preferred choice for hyperspectral images.
For future work, we are interested in exploring the possibility of modifying the elements in a k 2 -raster. This investigation is based on the dynamic structure, d k 2 -tree, as discussed in the papers by de Bernardo et al. [29,30]. Additionally, we would like to improve on the variable-length encoding which is currently in use with k 2 -raster, and hope to further reduce the size of the structure [23,24].

Author Contributions

Conceptualization, K.C., D.E.O.T., I.B. and J.S.-S.; methodology, K.C., D.E.O.T., I.B. and J.S.-S.; software, K.C.; validation, K.C., I.B. and J.S.-S.; formal analysis, K.C., D.E.O.T., I.B. and J.S.-S.; investigation, K.C., D.E.O.T., I.B. and J.S.-S.; resources, K.C., D.E.O.T., I.B. and J.S.-S.; data curation, K.C., I.B. and J.S.-S.; writing—original draft preparation, K.C., I.B. and J.S.-S.; writing—review and editing, K.C., I.B. and J.S.-S.; visualization, K.C., I.B. and J.S.-S.; supervision, I.B. and J.S.-S.; project administration, I.B. and J.S.-S.; funding acquisition, I.B. and J.S.-S.

Funding

This research was funded by the Spanish Ministry of Economy and Competitiveness and the European Regional Development Fund under grants RTI2018-095287-B-I00 and TIN2015-71126-R (MINECO/FEDER, UE) and BES-2016-078369 (Programa Formación de Personal Investigador), and by the Catalan Government under grant 2017SGR-463.

Acknowledgments

The authors would like to thank Magli et al. for providing the M-CALIC software used in some of the experiments in this research work.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
AIRS: Atmospheric Infrared Sounder
AVIRIS: Airborne Visible InfraRed Imaging Spectrometer
CALIC: Context Adaptive Lossless Image Compression
CCSDS: Consultative Committee for Space Data Systems
CRISM: Compact Reconnaissance Imaging Spectrometer for Mars
DACs: Directly Addressable Codes
IASI: Infrared Atmospheric Sounding Interferometer
JPEG 2000: Joint Photographic Experts Group 2000
KLT: Karhunen–Loève Transform
LOUDS: Level-Order Unary Degree Sequence
MDPI: Multidisciplinary Digital Publishing Institute
PCA: Principal Component Analysis
SOAP: Short Oligonucleotide Analysis Package

References

  1. Navarro, G. Compact Data Structures: A Practical Approach; Cambridge University Press: Cambridge, UK, 2016. [Google Scholar]
  2. Jacobson, G. Succinct Static Data Structures. Ph.D. Thesis, Carnegie-Mellon, Pittsburgh, PA, USA, 1988. [Google Scholar]
  3. Jacobson, G. Space-efficient static trees and graphs. In Proceedings of the Annual Symposium on Foundations of Computer Science (FOCS), Research Triangle Park, NC, USA, 30 October–1 November 1989; pp. 549–554. [Google Scholar]
  4. Grossi, R.; Gupta, A.; Vitter, J.S. High-order entropy-compressed text indexes. In Proceedings of the 14th Annual ACM-SIAM Symposium on Discrete Algorithms, Baltimore, MD, USA, 12–14 January 2003; Volume 72, pp. 841–850. [Google Scholar]
  5. Ferragina, P.; Manzini, G. Opportunistic data structures with applications. In Proceedings of the 41st Annual Symposium on Foundations of Computer Science, Redondo Beach, CA, USA, 12–14 November 2000; p. 390. [Google Scholar]
  6. Burrows, M.; Wheeler, D. A Block Sorting Lossless Data Compression Algorithm; Technical Report; Digital Equipment Corporation: Maynard, MA, USA, 1994. [Google Scholar]
  7. Langmead, B.; Trapnell, C.; Pop, M.; Salzberg, S.L. Ultrafast and memory-efficient alignment of short DNA sequences to the human genome. Genome Biol. 2009, 10, R25. [Google Scholar] [CrossRef] [PubMed]
  8. Li, R.; Yu, C.; Li, Y.; Lam, T.; Yiu, S.; Kristiansen, K.; Wang, J. SOAP2: an improved ultrafast tool for short read alignment. Bioinformatics 2009, 25, 1966–1967. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  9. Consultative Committee for Space Data Systems (CCSDS). Image Data Compression CCSDS 123.0-B-1; Blue Book; CCSDS: Washington, DC, USA, 2012. [Google Scholar]
  10. Jolliffe, I.T. Principal Component Analysis; Springer: Berlin, Germany, 2002; p. 487. [Google Scholar]
  11. Taubman, D.S.; Marcellin, M.W. JPEG 2000: Image Compression Fundamentals, Standards and Practice; Kluwer Academic Publishers: Boston, MA, USA, 2001. [Google Scholar]
  12. Wu, X.; Memon, N. CALIC—A context based adaptive lossless image CODEC. In Proceedings of the 1996 IEEE International Conference on Acoustics, Speech, and Signal Processing Conference Proceedings, Atlanta, GA, USA, 9 May 1996. [Google Scholar]
  13. Wu, X.; Memon, N. Context-based adaptive, lossless image coding. IEEE Trans. Commun. 1997, 45, 437–444. [Google Scholar] [CrossRef]
  14. Wu, X.; Memon, N. Context-based lossless interband compression—Extending CALIC. IEEE Trans. Image Process. 2000, 9, 994–1001. [Google Scholar] [PubMed]
  15. Magli, E.; Olmo, G.; Quacchio, E. Optimized onboard lossless and near-lossless compression of hyperspectral data using CALIC. IEEE Geosci. Remote Sens. Lett. 2004, 1, 21–25. [Google Scholar] [CrossRef]
  16. Kiely, A.; Klimesh, M.; Blanes, I.; Ligo, J.; Magli, E.; Aranki, N.; Burl, M.; Camarero, R.; Cheng, M.; Dolinar, S.; et al. The new CCSDS standard for low-complexity lossless and near-lossless multispectral and hyperspectral image compression. In Proceedings of the 6th International WS on On-Board Payload Data Compression (OBPDC), ESA/CNES, Matera, Italy, 20–21 September 2018. [Google Scholar]
  17. Fjeldtvedt, J.; Orlandić, M.; Johansen, T.A. An efficient real-time FPGA implementation of the CCSDS-123 compression standard for hyperspectral images. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2018, 11, 3841–3852. [Google Scholar] [CrossRef]
  18. Báscones, D.; González, C.; Mozos, D. Hyperspectral image compression using vector quantization, PCA and JPEG2000. Remote Sens. 2018, 10, 907. [Google Scholar] [CrossRef]
  19. Guerra, R.; Barrios, Y.; Díaz, M.; Santos, L.; López, S.; Sarmiento, R. A new algorithm for the on-board compression of hyperspectral images. Remote Sens. 2018, 10, 428. [Google Scholar] [CrossRef]
  20. Ladra, S.; Paramá, J.R.; Silva-Coira, F. Scalable and queryable compressed storage structure for raster data. Inf. Syst. 2017, 72, 179–204. [Google Scholar] [CrossRef]
  21. Samet, H. The Quadtree and related hierarchical data structures. ACM Comput. Surv. (CSUR) 1984, 16, 187–260. [Google Scholar] [CrossRef]
  22. Brisaboa, N.R.; Ladra, S.; Navarro, G. k2-trees for compact web graph representation. In International Symposium on String Processing and Information Retrieval; Springer: Berlin/Heidelberg, Germany, 2009; pp. 18–30. [Google Scholar]
  23. Brisaboa, N.R.; Ladra, S.; Navarro, G. DACs: Bringing direct access to variable-length codes. Inf. Process Manag. 2013, 49, 392–404. [Google Scholar] [CrossRef]
  24. Brisaboa, N.R.; Ladra, S.; Navarro, G. Directly addressable variable-length codes. In International Symposium on String Processing and Information Retrieval; Springer: Berlin/Heidelberg, Germany, 2009; pp. 122–130. [Google Scholar]
  25. Silva-Coira, F. Compact Data Structures for Large and Complex Datasets. Ph.D. Thesis, Universidade da Coruña, A Coruña, Spain, 2017. [Google Scholar]
  26. Cerdeira-Pena, A.; de Bernardo, G.; Fariña, A.; Paramá, J.R.; Silva-Coira, F. Towards a compact representation of temporal rasters. In String Processing and Information Retrieval; SPIRE 2018; Lecture Notes in Computer Science; Springer: Cham, Switzerland, 2018; Volume 11147. [Google Scholar]
  27. Cruces, N.; Seco, D.; Gutiérrez, G. A compact representation of raster time series. In Proceedings of the Data Compression Conference (DCC) 2019, Snowbird, UT, USA, 26–29 March 2019; pp. 103–111. [Google Scholar]
  28. Álvarez Cortés, S.; Serra-Sagristà, J.; Bartrina-Rapesta, J.; Marcellin, M. Regression Wavelet Analysis for Near-Lossless Remote Sensing Data Compression. IEEE Trans. Geosci. Remote Sens. 2019. [Google Scholar] [CrossRef]
  29. De Bernardo, G.; Álvarez García, S.; Brisaboa, N.R.; Navarro, G.; Pedreira, O. Compact querieable representations of raster data. In International Symposium on String Processing and Information Retrieval; Springer: Cham, Switzerland, 2013; pp. 96–108. [Google Scholar]
  30. Brisaboa, N.R.; De Bernardo, G.; Navarro, G. Compressed dynamic binary relations. In Proceedings of the 2012 Data Compression Conference, Snowbird, UT, USA, 10–12 April 2012; pp. 52–61. [Google Scholar]
Figure 1. A flow chart showing the encoding and decoding of this coder.
Figure 2. A graph of 6 nodes (top) with its 8 × 8 binary adjacency matrix at various stages of recursive partitioning. At the bottom, a k 2 -tree ( k = 2 ) is constructed from the matrix.
Figure 3. A binary tree example for LOUDS. The one on the left shows the values of the nodes and the one on the right shows the same tree with the numbering of the nodes in a left-to-right order. In this case the numbering starts with 1 at the root.
Figure 4. A bit string with the values from the binary tree in Figure 3.
Figure 5. With the rank and select functions listed in the first column, we can navigate the binary tree in Figure 3 and compute the node position of the left child, right child, or parent of a node.
Figure 6. A T bitmap with the first node labeled as 0.
Figure 7. An example showing how the rank function is computed to obtain the children’s position on a k 2 -tree node (k=2) based on the tree in Figure 2. It starts with 0 on the first child of the root (first level) and the numbering traverses from left to right and from top to bottom.
Figure 8. Organization of 3 Directly Addressable Codes (DACs) clusters.
Figure 9. An example of an 8 × 8 matrix for k 2 -raster. The matrix is recursively partitioned into square subquadrants of equal size; a subquadrant is partitioned further unless all of its cells have the same value, in which case its partitioning stops.
Figure 10. A k 2 -raster ( k = 2 ) tree storing the maximum and minimum values for each quadrant of every recursive subdivision of the matrix in Figure 9. Every node contains the maximum and minimum values of the subquadrant, separated by a dash. On the last level, only one value is shown as each subquadrant contains only one cell.
Figure 11. Based on the tree in Figure 10, the maximum value of each node is subtracted from that of its parent while the minimum value of the parent is subtracted from the node’s minimum value. These differences will replace their corresponding values in the node. The maximum and minimum values of the root remain the same.
Figure 12. A rate (bpppb) comparison with other compression techniques.
Figure 13. A rate (bpppb) comparison of different group sizes.
Figure 14. An entropy comparison of Yellowstone03 using differential and predictive methods. Data for reference bands are not included.
Figure 15. A bit rate comparison of Yellowstone03 using differential and predictive methods on k 2 -raster. Data for reference bands are not included.
Table 1. Hyperspectral images used in our experiments, with the bit rate and bit-rate reduction using k 2 -raster with and without the predictor. x is the image width, y the image height and z the number of spectral bands. The unit bpppb stands for bits per pixel per band.
Sensor | Name | C/U 🟉 | Acronym | Original Dimensions (x × y × z) | Bit Depth (bpppb) | Optimal k | k 2 -Raster Bit Rate (bpppb) | Bit-Rate Reduction (%) | k 2 -Raster + Predictor Bit Rate (bpppb) | Bit-Rate Reduction (%)
AIRS | 9 | U | AG9 | 90 × 135 × 1501 | 12 | 6 | 9.49 | 21% | 6.76 | 44%
AIRS | 16 | U | AG16 | 90 × 135 × 1501 | 12 | 6 | 9.12 | 24% | 6.63 | 45%
AIRS | 60 | U | AG60 | 90 × 135 × 1501 | 12 | 6 | 9.81 | 18% | 7.06 | 41%
AIRS | 126 | U | AG126 | 90 × 135 × 1501 | 12 | 6 | 9.61 | 20% | 7.05 | 41%
AIRS | 129 | U | AG129 | 90 × 135 × 1501 | 12 | 6 | 8.65 | 28% | 6.47 | 46%
AIRS | 151 | U | AG151 | 90 × 135 × 1501 | 12 | 6 | 9.53 | 21% | 7.02 | 41%
AIRS | 182 | U | AG182 | 90 × 135 × 1501 | 12 | 6 | 9.68 | 19% | 7.19 | 40%
AIRS | 193 | U | AG193 | 90 × 135 × 1501 | 12 | 6 | 9.44 | 21% | 7.06 | 41%
AVIRIS | Yellowstone sc. 00 | C | ACY00 | 677 × 512 × 224 | 16 | 6 | 9.61 | 40% | 6.87 | 57%
AVIRIS | Yellowstone sc. 03 | C | ACY03 | 677 × 512 × 224 | 16 | 6 | 9.42 | 41% | 6.72 | 58%
AVIRIS | Yellowstone sc. 10 | C | ACY10 | 677 × 512 × 224 | 16 | 4 | 7.57 | 53% | 5.84 | 64%
AVIRIS | Yellowstone sc. 11 | C | ACY11 | 677 × 512 × 224 | 16 | 6 | 8.81 | 45% | 6.52 | 59%
AVIRIS | Yellowstone sc. 18 | C | ACY18 | 677 × 512 × 224 | 16 | 6 | 9.78 | 39% | 7.04 | 56%
AVIRIS | Yellowstone sc. 00 | U | AUY00 | 680 × 512 × 224 | 16 | 9 | 11.92 | 25% | 9.04 | 44%
AVIRIS | Yellowstone sc. 03 | U | AUY03 | 680 × 512 × 224 | 16 | 9 | 11.74 | 27% | 8.87 | 45%
AVIRIS | Yellowstone sc. 10 | U | AUY10 | 680 × 512 × 224 | 16 | 9 | 9.99 | 38% | 8.00 | 50%
AVIRIS | Yellowstone sc. 11 | U | AUY11 | 680 × 512 × 224 | 16 | 9 | 11.27 | 30% | 8.77 | 45%
AVIRIS | Yellowstone sc. 18 | U | AUY18 | 680 × 512 × 224 | 16 | 9 | 12.15 | 24% | 9.29 | 42%
CRISM | frt000065e6_07_sc164 | U | C164 | 640 × 420 × 545 | 12 | 6 | 10.08 | 16% | 10.02 | 16%
CRISM | frt00008849_07_sc165 | U | C165 | 640 × 450 × 545 | 12 | 6 | 10.37 | 14% | 10.33 | 14%
CRISM | frt0001077d_07_sc166 | U | C166 | 640 × 480 × 545 | 12 | 6 | 11.05 | 8% | 11.08 | 8%
CRISM | hrl00004f38_07_sc181 | U | C181 | 320 × 420 × 545 | 12 | 5 | 9.97 | 17% | 9.52 | 21%
CRISM | hrl0000648f_07_sc182 | U | C182 | 320 × 450 × 545 | 12 | 5 | 10.11 | 16% | 9.84 | 18%
CRISM | hrl0000ba9c_07_sc183 | U | C183 | 320 × 480 × 545 | 12 | 5 | 10.65 | 11% | 10.59 | 12%
Hyperion | Agricultural 2905 † | C | HCA1 | 256 × 2905 × 242 | 12 | 8 | 8.20 | 32% | 7.47 | 38%
Hyperion | Agricultural 3129 † | C | HCA2 | 256 × 3129 × 242 | 12 | 8 | 8.08 | 33% | 7.50 | 37%
Hyperion | Coral Reef † | C | HCC | 256 × 3127 × 242 | 12 | 8 | 7.38 | 39% | 7.41 | 38%
Hyperion | Urban † | C | HCU | 256 × 2905 × 242 | 12 | 8 | 8.59 | 28% | 7.83 | 35%
Hyperion | Filtered Erta Ale † | U | HFUEA | 256 × 3187 × 242 | 12 | 8 | 6.84 | 43% | 5.99 | 50%
Hyperion | Filtered Lake Monona † | U | HFULM | 256 × 3176 × 242 | 12 | 8 | 6.79 | 43% | 6.06 | 49%
Hyperion | Filtered Mt. St. Helena † | U | HFUMS | 256 × 3242 × 242 | 12 | 8 | 6.78 | 43% | 5.88 | 51%
Hyperion | Erta Ale † | U | HUEA | 256 × 3187 × 242 | 12 | 8 | 7.57 | 37% | 6.99 | 42%
Hyperion | Lake Monona † | U | HULM | 256 × 3176 × 242 | 12 | 8 | 7.52 | 37% | 7.08 | 41%
Hyperion | Mt. St. Helena † | U | HUMS | 256 × 3242 × 242 | 12 | 8 | 7.49 | 38% | 6.93 | 42%
IASI | Level 0 1 ‡ | U | I01 | 60 × 1528 × 8359 | 12 | 4 | 5.93 | 51% | 4.69 | 61%
IASI | Level 0 2 ‡ | U | I02 | 60 × 1528 × 8359 | 12 | 4 | 5.90 | 51% | 4.75 | 60%
IASI | Level 0 3 ‡ | U | I03 | 60 × 1528 × 8359 | 12 | 4 | 5.42 | 55% | 4.58 | 62%
IASI | Level 0 4 ‡ | U | I04 | 60 × 1528 × 8359 | 12 | 4 | 6.23 | 48% | 4.90 | 59%
🟉: Calibrated or Uncalibrated; †: Cropped to 256 × 512 × 242; ‡: Cropped to 60 × 256 × 8359.
Table 2. A rate (bpppb) comparison with other compression techniques. The optimal values for all compression algorithms (except for M-CALIC, CCSDS 123.0-B-1) are highlighted in red. Results for CCSDS 123.0-B-1 are from [28].
All values are in bpppb.
Sensor | Name | C/U 🟉 | Acronym | k 2 -Raster | + Predictor | + Differential | gzip | bzip2 | xz | M-CALIC | CCSDS 123.0-B-1
AIRS | 9 | U | AG9 | 9.49 | 6.76 | 7.52 | 10.16 | 7.42 | 7.90 | 4.19 | 4.21
AIRS | 16 | U | AG16 | 9.12 | 6.63 | 7.29 | 9.82 | 7.15 | 7.66 | 4.19 | 4.18
AIRS | 60 | U | AG60 | 9.81 | 7.06 | 7.82 | 10.53 | 7.71 | 8.23 | 4.41 | 4.36
AIRS | 126 | U | AG126 | 9.61 | 7.05 | 7.78 | 10.33 | 7.64 | 8.10 | 4.39 | 4.38
AIRS | 129 | U | AG129 | 8.65 | 6.47 | 6.96 | 9.50 | 6.68 | 7.22 | 4.08 | 4.12
AIRS | 151 | U | AG151 | 9.53 | 7.02 | 7.74 | 10.31 | 7.43 | 7.97 | 4.39 | 4.41
AIRS | 182 | U | AG182 | 9.68 | 7.19 | 7.94 | 10.64 | 7.79 | 8.33 | 4.45 | 4.42
AIRS | 193 | U | AG193 | 9.44 | 7.06 | 7.77 | 10.15 | 7.47 | 7.94 | 4.42 | 4.42
AVIRIS | Yellowstone sc. 00 | C | ACY00 | 9.61 | 6.87 | 7.79 | 10.12 | 7.51 | 8.04 | 4.12 | 3.95
AVIRIS | Yellowstone sc. 03 | C | ACY03 | 9.42 | 6.72 | 7.65 | 9.59 | 7.10 | 7.62 | 3.95 | 3.82
AVIRIS | Yellowstone sc. 10 | C | ACY10 | 7.57 | 5.84 | 6.26 | 7.41 | 5.30 | 5.73 | 3.31 | 3.36
AVIRIS | Yellowstone sc. 11 | C | ACY11 | 8.81 | 6.52 | 6.85 | 9.04 | 6.65 | 7.07 | 3.71 | 3.63
AVIRIS | Yellowstone sc. 18 | C | ACY18 | 9.78 | 7.04 | 7.53 | 10.00 | 7.45 | 7.95 | 4.09 | 3.90
AVIRIS | Yellowstone sc. 00 | U | AUY00 | 11.92 | 9.04 | 10.04 | 12.39 | 9.99 | 10.61 | 6.32 | 6.20
AVIRIS | Yellowstone sc. 03 | U | AUY03 | 11.74 | 8.87 | 9.91 | 11.98 | 9.54 | 10.23 | 6.14 | 6.07
AVIRIS | Yellowstone sc. 10 | U | AUY10 | 9.99 | 8.00 | 8.57 | 10.17 | 7.71 | 8.40 | 5.53 | 5.58
AVIRIS | Yellowstone sc. 11 | U | AUY11 | 11.27 | 8.77 | 9.21 | 11.49 | 9.08 | 9.66 | 5.91 | 5.84
AVIRIS | Yellowstone sc. 18 | U | AUY18 | 12.15 | 9.29 | 9.92 | 12.29 | 9.90 | 10.58 | 6.33 | 6.21
CRISM | frt000065e6_07_sc164 | U | C164 | 10.08 | 10.02 | 10.06 | 10.98 | 8.42 | 7.15 | 7.34 | 4.86
CRISM | frt00008849_07_sc165 | U | C165 | 10.37 | 10.33 | 10.37 | 11.03 | 8.68 | 7.51 | 7.73 | 4.91
CRISM | frt0001077d_07_sc166 | U | C166 | 11.05 | 11.08 | 11.14 | 11.20 | 9.04 | 7.64 | 8.44 | 5.44
CRISM | hrl00004f38_07_sc181 | U | C181 | 9.97 | 9.52 | 9.52 | 10.77 | 8.28 | 8.20 | 7.09 | 4.27
CRISM | hrl0000648f_07_sc182 | U | C182 | 10.11 | 9.84 | 9.86 | 10.90 | 8.53 | 7.90 | 7.28 | 4.49
CRISM | hrl0000ba9c_07_sc183 | U | C183 | 10.65 | 10.59 | 10.64 | 10.87 | 8.52 | 7.28 | 7.91 | 4.96
Hyperion | Agricultural 2905 † | C | HCA1 | 8.20 | 7.47 | 7.47 | 8.90 | 7.07 | 7.40 | 5.39 | -
Hyperion | Agricultural 3129 † | C | HCA2 | 8.08 | 7.50 | 7.50 | 8.84 | 7.04 | 7.35 | 5.28 | 5.70
Hyperion | Coral Reef † | C | HCC | 7.38 | 7.41 | 7.41 | 7.45 | 5.74 | 5.90 | 4.59 | 5.42
Hyperion | Urban † | C | HCU | 8.59 | 7.83 | 7.83 | 9.24 | 7.46 | 7.83 | 5.25 | 5.71
Hyperion | Filtered Erta Ale † | U | HFUEA | 6.84 | 5.99 | 6.15 | 7.63 | 5.55 | 6.00 | 4.19 | 4.32
Hyperion | Filtered Lake Monona † | U | HFULM | 6.79 | 6.06 | 6.18 | 7.61 | 5.50 | 5.94 | 4.21 | 4.45
Hyperion | Filtered Mt. St. Helena † | U | HFUMS | 6.78 | 5.88 | 6.15 | 7.18 | 5.44 | 5.74 | 4.11 | 4.35
Hyperion | Erta Ale † | U | HUEA | 7.57 | 6.99 | 7.06 | 8.69 | 6.41 | 6.73 | 4.87 | 4.32
Hyperion | Lake Monona † | U | HULM | 7.52 | 7.08 | 7.13 | 8.69 | 6.46 | 6.74 | 4.94 | 4.45
Hyperion | Mt. St. Helena † | U | HUMS | 7.49 | 6.93 | 7.04 | 8.26 | 6.28 | 6.48 | 4.82 | 4.36
IASI | Level 0 1 ‡ | U | I01 | 5.93 | 4.69 | 5.01 | 5.90 | 4.48 | 3.98 | 2.94 | 2.89
IASI | Level 0 2 ‡ | U | I02 | 5.90 | 4.75 | 5.03 | 5.96 | 4.44 | 4.01 | 2.92 | 2.88
IASI | Level 0 3 ‡ | U | I03 | 5.42 | 4.58 | 4.79 | 5.25 | 3.94 | 3.75 | 2.92 | 2.88
IASI | Level 0 4 ‡ | U | I04 | 6.23 | 4.90 | 5.20 | 6.30 | 4.71 | 4.24 | 2.97 | 2.90
🟉: Calibrated or Uncalibrated; †: Cropped to 256 × 512 × 242 except for CCSDS 123.0; ‡: Cropped to 60 × 256 × 8359 except for CCSDS 123.0.
Table 3. A comparison of build time (in seconds) using k 2 -raster only and k 2 -raster with predictive and differential methods.
Hyperspectral Image | k 2 -Raster Build (s) | k 2 -Raster + Predictor (s) | k 2 -Raster + Differential (s) | gzip Compression (s)
AG9 | 1.86 | 2.23 | 2.12 | 3.18
AG16 | 1.78 | 2.22 | 2.09 | 3.49
ACY00 | 8.32 | 10.11 | 9.49 | 15.01
ACY03 | 8.26 | 10.00 | 9.47 | 15.32
AUY00 | 5.56 | 7.39 | 6.84 | 12.10
AUY03 | 5.59 | 7.38 | 6.76 | 12.68
C164 | 17.84 | 21.32 | 21.59 | 27.94
C165 | 17.89 | 22.83 | 22.92 | 30.83
HCA1 | 1.98 | 2.67 | 2.47 | 5.59
HCA2 | 1.98 | 2.64 | 2.42 | 5.80
HFUEA | 2.38 | 3.01 | 3.05 | 7.59
HFULM | 2.41 | 3.04 | 2.87 | 7.57
HFUMS | 2.33 | 2.95 | 2.76 | 8.26
I01 | 14.58 | 18.62 | 16.56 | 31.59
I02 | 14.66 | 17.49 | 16.66 | 29.64
Table 4. A comparison of access time (in milliseconds) using k 2 -raster only and k 2 -raster with predictive and differential encoders.
Times are for 100,000 iterations of random access.
Hyperspectral Image | k 2 -Raster (ms) | k 2 -Raster + Predictor (ms) | k 2 -Raster + Differential (ms) | gzip Decompression (ms)
AG9 | 90 | 125 | 92 | 474
AG16 | 85 | 121 | 85 | 459
ACY00 | 275 | 485 | 426 | 1949
ACY03 | 269 | 474 | 424 | 1912
AUY00 | 151 | 489 | 402 | 1941
AUY03 | 151 | 485 | 402 | 1957
C164 | 273 | 400 | 381 | 4048
C165 | 301 | 420 | 397 | 4382
HCA1 | 77 | 131 | 127 | 735
HCA2 | 76 | 121 | 118 | 737
HFUEA | 93 | 150 | 129 | 684
HFULM | 92 | 148 | 129 | 680
HFUMS | 91 | 146 | 134 | 670
I01 | 155 | 222 | 244 | 2517
I02 | 168 | 236 | 255 | 2396
Table 5. Rates (bpppb) for different k-values for some of the testing images. The k-value with the lowest rate is in red.
Hyperspectral Image | k=2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12 | 13 | 14 | 15 | 16 | 17 | 18 | 19 | 20
AG9 | 13.06 | 10.11 | 10.03 | 10.47 | 9.49 | 9.98 | 10.68 | 9.89 | 10.65 | - | 11.23 | 10.33 | 11.29 | 9.53 | 11.57 | 11.72 | 10.78 | 12.52 | 12.13
AG16 | 12.72 | 9.78 | 9.66 | 10.11 | 9.12 | 9.57 | 10.32 | 9.51 | 10.29 | - | 10.82 | 9.98 | 10.86 | 9.17 | 11.11 | 11.28 | 10.32 | 12.07 | 11.68
ACY00 | 12.34 | 10.20 | 9.76 | - | 9.61 | 9.91 | - | 9.69 | 9.83 | 9.87 | 9.95 | 10.24 | 10.20 | - | - | - | - | - | -
ACY03 | 11.81 | 9.87 | 9.56 | - | 9.42 | 9.71 | - | 9.50 | 9.65 | 9.70 | 9.76 | 10.01 | 9.98 | - | - | - | - | - | -
AUY00 | 15.31 | 12.93 | 12.20 | - | 12.08 | 12.35 | - | 11.92 | 12.11 | 12.13 | 12.17 | 12.52 | 12.43 | - | - | - | - | - | -
AUY03 | 15.03 | 12.60 | 12.00 | - | 11.90 | 12.20 | - | 11.74 | 11.93 | 11.94 | 12.00 | 12.34 | 12.25 | - | - | - | - | - | -
C164 | 12.60 | 10.42 | 10.17 | - | 10.08 | - | - | 10.34 | 10.20 | 10.76 | 10.48 | - | - | - | - | - | - | - | -
C165 | 12.84 | 10.67 | 10.48 | - | 10.37 | - | - | 10.54 | 10.51 | 10.79 | 11.03 | - | - | - | - | - | - | - | -
HCA1 | 10.79 | 9.41 | 8.85 | 8.45 | 8.74 | 9.36 | 8.20 | 8.51 | 8.68 | 8.85 | 8.88 | 8.92 | 9.21 | - | - | - | - | - | -
HCC | 9.43 | 8.12 | 7.79 | 7.41 | 7.75 | 8.40 | 7.38 | 7.67 | 7.85 | 8.06 | 8.12 | 8.26 | 8.56 | - | - | - | - | - | -
HFUEA | 8.82 | 7.80 | 7.30 | 7.24 | 7.41 | 8.07 | 6.84 | 7.25 | 7.43 | 7.66 | 7.68 | 7.71 | 8.07 | - | - | - | - | - | -
HFULM | 8.69 | 7.70 | 7.20 | 7.13 | 7.33 | 8.02 | 6.79 | 7.21 | 7.40 | 7.64 | 7.66 | 7.68 | 8.05 | - | - | - | - | - | -
I01 | 8.03 | - | 5.93 | - | - | 6.45 | - | - | - | - | - | - | - | - | 6.59 | 7.79 | 8.30 | 8.73 | 6.36
I02 | 8.02 | - | 5.90 | - | - | 6.48 | - | - | - | - | - | - | - | - | 6.64 | 7.92 | 8.46 | 8.97 | 6.45
Table 6. Access time (ms) for different k-values for some of the testing images. The best access time is in red.
Hyperspectral Image | k=2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12 | 13 | 14 | 15 | 16 | 17 | 18 | 19 | 20
AG9 | 345 | 167 | 130 | 114 | 91 | 84 | 83 | 82 | 82 | - | 60 | 59 | 56 | 56 | 55 | 52 | 60 | 51 | 47
AG16 | 334 | 152 | 114 | 108 | 85 | 79 | 78 | 80 | 75 | - | 55 | 54 | 53 | 53 | 59 | 51 | 58 | 48 | 45
ACY00 | 3553 | 1085 | 573 | - | 291 | 225 | - | 152 | 133 | 125 | 114 | 110 | 104 | - | - | - | - | - | -
ACY03 | 3521 | 1112 | 572 | - | 277 | 223 | - | 149 | 131 | 122 | 112 | 120 | 102 | - | - | - | - | - | -
AUY00 | 3569 | 1135 | 592 | - | 292 | 228 | - | 153 | 133 | 126 | 115 | 113 | 106 | - | - | - | - | - | -
AUY03 | 3559 | 1123 | 585 | - | 279 | 221 | - | 152 | 133 | 124 | 115 | 109 | 103 | - | - | - | - | - | -
C164 | 2924 | 964 | 606 | - | 272 | - | - | 159 | 161 | 138 | 131 | - | - | - | - | - | - | - | -
C165 | 3754 | 1017 | 555 | - | 290 | - | - | 178 | 156 | 152 | 145 | - | - | - | - | - | - | - | -
HCA1 | 1179 | 384 | 213 | 154 | 124 | 106 | 80 | 78 | 69 | 71 | 62 | 61 | 62 | - | - | - | - | - | -
HCC | 1203 | 406 | 233 | 172 | 139 | 123 | 95 | 94 | 86 | 85 | 83 | 79 | 79 | - | - | - | - | - | -
HFUEA | 1409 | 465 | 262 | 184 | 148 | 127 | 93 | 95 | 89 | 84 | 77 | 76 | 76 | - | - | - | - | - | -
HFULM | 1427 | 467 | 262 | 193 | 155 | 130 | 94 | 96 | 87 | 90 | 79 | 81 | 80 | - | - | - | - | - | -
I01 | 999 | - | 779 | - | - | 679 | - | - | - | - | - | - | - | - | 610 | 709 | 715 | 728 | 450
I02 | 1047 | - | 759 | - | - | 698 | - | - | - | - | - | - | - | - | 651 | 746 | 746 | 730 | 472
Table 7. A rate (bpppb) comparison of different group sizes using the predictive and the differential methods. The optimal values are highlighted in red.
Hyperspectral Image | Group size 2 | 4 | 8 | 12 | 16 | 20 | 24 | 28 | 32
AG9-Pred | 8.03 | 7.41 | 7.19 | 7.13 | 7.16 | 7.15 | 7.17 | 7.17 | 7.25
AG9-Diff | 8.11 | 7.60 | 7.51 | 7.52 | 7.63 | 7.58 | 7.74 | 7.73 | 8.00
ACY00-Pred | 7.74 | 7.05 | 7.03 | 7.36 | 7.55 | 7.78 | 7.81 | 8.18 | 8.18
ACY00-Diff | 8.03 | 7.58 | 7.79 | 8.18 | 8.40 | 8.73 | 9.09 | 9.48 | 9.13
AUY00-Pred | 9.94 | 9.22 | 9.18 | 9.53 | 9.70 | 10.03 | 10.02 | 10.53 | 10.34
AUY00-Diff | 10.33 | 9.91 | 10.18 | 10.49 | 10.71 | 11.14 | 11.32 | 11.91 | 11.36
C164-Pred | 10.08 | 10.11 | 10.17 | 10.20 | 10.23 | 10.20 | 10.31 | 10.24 | 10.24
C164-Diff | 10.06 | 10.08 | 10.13 | 10.16 | 10.18 | 10.20 | 10.31 | 10.22 | 10.17
HCA1-Pred | 7.51 | 7.30 | 7.31 | 7.51 | 7.65 | 8.15 | 8.31 | 8.09 | 8.16
HCA1-Diff | 7.57 | 7.47 | 7.68 | 7.66 | 8.00 | 8.34 | 8.95 | 8.84 | 8.47
HFUEA-Pred | 6.07 | 6.01 | 6.05 | 6.13 | 6.25 | 6.33 | 6.41 | 6.21 | 6.55
HFUEA-Diff | 6.11 | 6.15 | 6.30 | 6.44 | 6.75 | 6.63 | 6.81 | 6.70 | 7.07
I01-Pred | 5.22 | 4.91 | 4.78 | 4.75 | 4.73 | 4.74 | 4.69 | 4.73 | 4.71
I01-Diff | 5.32 | 5.10 | 5.01 | 5.04 | 5.01 | 5.06 | 5.02 | 5.05 | 5.06
Table 8. A rate (bpppb) comparison using different transformed methods: predictor, differential, reversible Haar level 1 and reversible Haar level 5 on k 2 -raster. The optimal values are highlighted in red.
Hyperspectral Image | Without Transformation | Predictor | Differential | Reversible Haar (Level 1) | Reversible Haar (Level 5)
AG9 | 9.49 | 6.76 | 7.52 | 8.10 | 6.83
AG16 | 9.12 | 6.63 | 7.29 | 7.81 | 6.60
ACY00 | 9.63 | 6.87 | 7.79 | 8.01 | 7.00
ACY03 | 9.44 | 6.72 | 7.65 | 7.86 | 6.87
AUY00 | 11.92 | 9.04 | 10.04 | 10.33 | 9.35
AUY03 | 11.74 | 8.87 | 9.91 | 10.18 | 9.23
C164 | 10.08 | 10.02 | 10.06 | 10.01 | 9.83
C165 | 10.37 | 10.33 | 10.37 | 10.33 | 10.16
HCA1 | 8.20 | 7.47 | 7.47 | 7.37 | 7.05
HCC | 7.38 | 7.50 | 7.50 | 6.71 | 6.54
HFUEA | 6.84 | 5.99 | 6.15 | 7.12 | 6.75
HFULM | 6.79 | 6.06 | 6.18 | 7.14 | 6.83
I01 | 5.93 | 4.69 | 5.01 | 5.26 | 4.54
I02 | 5.90 | 4.75 | 5.03 | 5.26 | 4.57
