Article

A Low-Complexity Lossless Compression Method Based on a Code Table for Infrared Images

1 Shanghai Institute of Technical Physics, Chinese Academy of Sciences, Shanghai 200083, China
2 University of Chinese Academy of Sciences, Beijing 100049, China
3 Laboratory of Infrared Detection and Imaging Technology, Chinese Academy of Sciences, Shanghai 200083, China
* Author to whom correspondence should be addressed.
Appl. Sci. 2025, 15(5), 2826; https://doi.org/10.3390/app15052826
Submission received: 19 December 2024 / Revised: 3 March 2025 / Accepted: 4 March 2025 / Published: 5 March 2025

Abstract: Traditional JPEG series image compression algorithms have limitations in speed. To improve the storage and transmission of 14-bit/pixel images acquired by infrared line-scan detectors, a novel method is introduced for achieving high-speed and highly efficient compression of line-scan infrared images. The proposed method utilizes the features of infrared images to reduce image redundancy and employs improved Huffman coding for entropy coding. The improved Huffman coding addresses the low-probability long codes of 14-bit images by truncating long codes, which results in low complexity and minimal loss in the compression ratio. Additionally, a method is proposed to obtain a Huffman code table that bypasses the pixel counting process required for entropy coding, thereby improving the compression speed. The final implementation is a low-complexity lossless image compression algorithm that achieves fast encoding through simple table lookup rules. The proposed method results in only a 10% loss in compression performance compared to JPEG 2000, while achieving a 20-fold speed improvement. Compared to dictionary-based methods, the proposed method can achieve high-speed compression while maintaining high compression efficiency, making it particularly suitable for the high-speed, high-efficiency lossless compression of line-scan panoramic infrared images. The compression ratio achieved with the code table is 5% lower than the theoretical value. The algorithm can also be extended to images with higher bit depths.

1. Introduction

Matter, energy, and information are the three fundamental elements constituting the objective world. Information technology, in particular, plays a key role in humanity’s perception of the objective world, with infrared imaging technology serving as a vital component of this technology. Infrared imaging technology encompasses types such as line-array detector scanning imaging and area-array detector staring imaging [1]. The area-array detector captures the entire field of view at once and is commonly used in applications such as night vision devices, video surveillance, and others. In contrast, the line-array detector captures panoramic images by rotating 360 degrees, making it ideal for large-field applications like airborne small-target detection and border security surveillance. As infrared imaging technology advances toward higher resolution, faster frame rates, and greater pixel bit depth, high-rate data transmission and storage face corresponding challenges. For example, a 14-bit line-scan image with 3072 pixels per column and approximately 60,000 columns per panoramic image has an imaging cycle of about 25 microseconds per column, leading to an imaging rate of 240 MB/s and a frame size of 350 MB. Lossless data compression, which can reduce data redundancy, is essential for improving data transmission and storage efficiency.
Image compression methods leverage the natural compressibility of images by utilizing correlations between adjacent pixels to reduce redundancy and concentrate information. Finally, entropy coding is applied to remove coding redundancy, further achieving compression. Image transformation and prediction are widely used techniques for reducing image redundancy. To complement these approaches, dictionary-based compression methods, such as Lempel–Ziv–Welch (LZW), Lempel–Ziv 77 (LZ77), and Lempel–Ziv 78 (LZ78), are also employed [2,3]. These methods reduce redundancy by identifying and encoding repeated patterns or sequences in the image data, which can be especially effective for certain types of image content. Transform-based image compression converts images into a domain with less redundancy, where information is concentrated and easier to encode. The image is usually transformed into the frequency or spatial domain and then encoded using the properties of the transform coefficients, such as image compression based on discrete wavelet transform (DWT) [4,5,6], discrete cosine transform (DCT) [7], and integer discrete Tchebichef transform [8]. Prediction-based image compression exploits the correlation between pixels. The residual image obtained by subtracting the original image from the predicted image contains less redundant information. The residual image typically has a narrow range of pixel values, making it more effectively compressed through entropy coding techniques such as Huffman coding [9] or arithmetic coding [10]. Typical prediction-based compression includes DPCM [11] and LOCO-I [12]. Predictive techniques are typically integrated with transformations, such as utilizing DPCM for DC coefficients in the DCT within the JPEG, or applying DPCM to wavelet coefficients in JPEG 2000. This amalgamation allows for further compression by exploiting the inter-coefficient correlation. The intra prediction coding in video coding standards such as H.264/AVC and High Efficiency Video Coding (HEVC) also utilizes DPCM [13,14]. Dictionary-based compression methods dynamically build or reference predefined dictionaries of patterns, replacing repeated sequences with shorter symbols to achieve efficient compression without relying on transformation-specific knowledge. However, the data encoded by the dictionary may still contain redundancy, such as certain symbols or indices appearing more frequently. To address this, entropy coding is applied to further compress the data by assigning shorter codes to more frequently occurring symbols, such as in the Lempel–Ziv–Markov chain Algorithm (LZMA) and Deflate [15,16]. Dictionary-based lossless image compression has inherent limitations, as these methods rely on repeated patterns within the data and do not fully leverage spatial redundancy in images, which makes them less effective in eliminating spatial redundancy. Furthermore, dictionary encoding lacks sufficient capability to handle fine details. These limitations become particularly evident in 14-bit line-scan panoramic images, which feature a large number of source symbols, a wide symbol range, abundant details, complex structures, and rapidly changing scenes.
Traditional image compression methods, such as JPEG, JPEG-LS, and JPEG 2000 [17,18,19,20,21], are designed to minimize perceived quality loss by the human visual system (HVS) while reducing data transmission rates to improve the efficiency of image transmission and storage. JPEG (Joint Photographic Experts Group) is a widely used lossy image compression standard that reduces file sizes by discarding information that is less noticeable to the human eye, often resulting in some loss of image quality. JPEG-LS (JPEG Lossless and Near-Lossless Compression), on the other hand, is a standard for lossless or near-lossless compression, providing higher compression ratios while maintaining the original image quality. In recent years, more advanced image compression schemes have been continuously proposed and developed. For instance, ref. [22] proposed a compressive sensing-based image compression system. Nevertheless, their advantages over JPEG are not significant enough to justify the creation of new standards.
Image compression includes both lossy and lossless methods. JPEG is a lossy image compression method designed for lower bit-depth images, specifically for 8-bit images. Ref. [22] is essentially a lossy compression method. However, lossy compression may not be suitable for many imaging applications that require high precision, such as hyperspectral imaging and infrared weak target detection. Since lossless compression methods can fully recover the original data, they are more favored in these applications. JPEG 2000 offers lossless compression capabilities, achieving efficient compression at the cost of increased computational and memory resource consumption. Its complexity arises from the implementation of the 5/3 lifting wavelet transform, bit-plane coding, and MQ coding, which refers to a context-based arithmetic coding technique used to efficiently encode the quantized wavelet coefficients in the image, improving compression performance by exploiting the statistical dependencies between the coefficients. JPEG-LS achieves a low-complexity lossless compression that is easy to implement in hardware, using simple prediction, context modeling, and Golomb coding. This approach sacrifices compression efficiency in favor of speed improvement. The performance of JPEG-LS improves with simpler image scenes, whereas in more complex scenes, the prediction and context updating become more intricate, leading to a decrease in compression speed. JPEG-LS is only applicable to 8-bit and 12-bit images and is not suitable for images with higher bit depths. The complexity of image redundancy removal and entropy coding, along with limitations in pixel bit depth, restricts the application of JPEG-based algorithms in line-scan imaging with high data rates and high pixel bit depths.
This study focuses on 14-bit line-scan infrared panoramic images. Unlike traditional area-array images, each row of the image is generated by the same photosensitive element, leading to stronger inter-column correlation. Conventional JPEG series image compression methods do not take into account the characteristics of the line-column scanning image. Based on the characteristics of 14-bit line-scan infrared panoramic images, this paper analyzes the feasibility of removing spatial redundancy through inter-column differencing. The inter-column differencing DPCM prediction method is used to replace the complex wavelet transform in JPEG2000 for removing spatial redundancy in the image. This paper also designs an improved Huffman coding scheme to replace the complex entropy coding in JPEG 2000. The improved Huffman coding simplifies the image compression process by using a code table. Additionally, a method for generating the code table is proposed, simplifying the compression process by avoiding pixel statistics in entropy coding. Based on the proposed methods, a low-complexity lossless compression algorithm based on the code table is ultimately implemented using a simple lookup method. The structure of this paper is as follows: Section 2 introduces related work on Huffman coding, Section 3 analyzes methods for redundancy removal in line-scan infrared panoramic images, Section 4 presents the code-table-based lossless compression method proposed in this paper, Section 5 provides the experimental results, and Section 6 concludes the paper.

2. Related Works

The speed and efficiency of image compression are common objectives in both academia and industry. There are two main approaches to improving encoder speed: hardware acceleration and designing low-complexity algorithms. The speed of the encoder depends on the processor’s performance, and the slowdown in processor performance improvements has driven the development of parallel architecture processors. As a result, image encoding algorithms have transitioned from single-threaded to multi-threaded algorithms [23,24]. However, on embedded platforms with limited hardware resources, it is crucial to design lossless compression algorithms that support higher bit depths while keeping the computational complexity low.
The core of the method proposed in this article is Huffman coding, which we studied and improved. Lossless data compression is based on information theory, with its theoretical limit being entropy [25]. Huffman coding counts the probability of occurrence of source symbols and assigns shorter codes to the symbols with high probability and longer codes to those with low probability, thereby achieving entropy coding that approaches the theoretical entropy value. Huffman coding requires two passes over the source data to build the frequency table and generate the code table. Vitter [26] proposed a dynamic Huffman algorithm that scans the data only once. However, because the Huffman tree is constantly modified as new symbols appear, the dynamic Huffman algorithm leads to a rapid increase in computational effort and is not suitable for compressing large datasets with many source symbols. Schwartz’s [27] canonical Huffman coding requires minimal data storage to reconstruct the Huffman tree. Unfortunately, both Huffman and canonical Huffman coding need to count the probability of symbol occurrence before compression, which seriously affects the compression speed. Reinhardt [28] improved compression speed by pre-allocating a Huffman code table. Nevertheless, this method requires offline data analysis to define the code table and is limited to 256 symbols, restricting its range of applications. Yunge’s [29] dynamic code table algorithm enhances adaptability by increasing the number of code tables. However, it requires evaluating the variance of symbol changes to reselect the code table each time, leading to high complexity and reduced compression speed. The code table method necessitates longer codes when compressing large datasets with many source symbols. Nonetheless, symbols corresponding to these longer codes typically have low occurrence probabilities, and constructing such long codes can adversely affect both the compression efficiency and the processing speed. Reinhardt [30] proposed a truncated Huffman tree algorithm that preserves only codes with high-frequency occurrences, while symbols with low-frequency occurrences have no designated code. To distinguish between coded and uncoded bit streams, an extra 1-bit flag (0 or 1) is required, which increases the length of all encoded symbols by one bit and significantly degrades the compression performance. Xu’s [31] modified adaptive Huffman coding algorithm adds unknown symbol nodes to achieve uniform coding without requiring additional 1-bit identifiers, but it suffers from high complexity.
To reduce the algorithmic complexity, this paper proposes a novel method for constructing a Huffman code table that bypasses the process of calculating pixel occurrence probabilities in entropy coding. Additionally, an improved Huffman coding scheme is introduced to handle the longer codes required for 14-bit images by truncating longer codes with low complexity and minimal compression ratio loss. This ultimately achieves a low-complexity lossless compression method based on a code table for infrared images.

3. Redundancy Analysis of Line-Scan Panoramic Infrared Images

From information theory, information can be measured by self-information. Suppose the set of source symbols is S = \{ s_1, s_2, \ldots, s_n \} and the probability of occurrence of each symbol is P = \{ p(s_1), p(s_2), \ldots, p(s_n) \}. The self-information I(s_i) of s_i is defined by the equation
I(s_i) = \log_2 \frac{1}{p(s_i)} \quad (\text{bit}),
The smaller the probability of symbol s_i, the more information it conveys. The information entropy H(S) is the mathematical expectation of the self-information I(s_i). The information entropy H(S) indicates the minimum number of bits needed to represent each source symbol in a binary computer, and is defined as
H(S) = -\sum_{i=1}^{n} p(s_i) \cdot \log_2 p(s_i) \quad (\text{bit/symbol}),
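To make the entropy definition concrete, the following minimal Python sketch (not part of the original paper; array names are illustrative) estimates H(S) from the empirical symbol distribution of an image array.

import numpy as np

def entropy_bits_per_symbol(pixels: np.ndarray) -> float:
    """Return H(S) in bit/symbol for the empirical symbol distribution of an image."""
    _, counts = np.unique(pixels.ravel(), return_counts=True)
    p = counts / counts.sum()                 # empirical probabilities p(s_i)
    return float(-np.sum(p * np.log2(p)))     # H(S) = -sum p(s_i) * log2 p(s_i)

# Example usage for a 14-bit image stored in 16-bit words:
# img = np.fromfile("scene.raw", dtype=np.uint16).reshape(512, 640)
# print(entropy_bits_per_symbol(img))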
A two-dimensional (2D) image is a kind of information that humans can intuitively perceive. The continuous tonal distribution in nature leads to significant spatial redundancy in visible images, while the scene’s infrared radiation continuity results in similar redundancy in infrared images. The image spatial redundancy is manifested as a large number of neighboring pixels with little or no change, resulting in a high correlation between the image pixels. Infrared images, due to their spatial redundancy, exhibit low actual information entropy, indicating high compression potential. However, obtaining the exact source entropy is challenging and can only be approximated closely by certain methods. In the field of digital imaging, image differencing represents the changes between adjacent pixels. The correlation between difference pixels is weak, effectively reducing spatial redundancy. Digital image differentiation involves both inter-column and inter-row differencing. Inter-column differencing is defined as
d_p(i, j) = p(i, j) - p(i, j - 1),
where p(i, j) is the current column pixel, p(i, j - 1) is the previous column pixel, and d_p(i, j) is the differential pixel (j \ge 2). The inter-row differencing can be derived analogously.
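As an illustration only, a short NumPy sketch of the inter-column differencing defined above; the first column is kept unchanged so that the image can be restored later (the function name is an assumption, not the authors' code).

import numpy as np

def inter_column_difference(img: np.ndarray) -> np.ndarray:
    """d_p(i, j) = p(i, j) - p(i, j-1); column 0 keeps the original pixels."""
    dif = img.astype(np.int32)               # widen the type: differences may be negative
    dif[:, 1:] = dif[:, 1:] - dif[:, :-1]    # columns 2..N become inter-column differences
    return dif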
Infrared line scanning imaging, as it is distinct from area array imaging, involves capturing panoramic images through a 360-degree rotating scan. In line-scan images, each row is acquired by the same photosensitive element, leading to stronger inter-column correlations compared to traditional images. We choose two 640 × 512 14-bit infrared images, which are a portion of a line-scan panoramic infrared image, and analyze the correlation of the original and differential images, named image A and image B. We calculate the inter-column correlation coefficients and count the probability of occurrence pixels. The experimental results are shown in Figure 1 and Table 1.
The inter-column correlation coefficient is defined as
r(j) = \frac{\sum_{i=1}^{512} \sigma(i, j) \cdot \sigma(i, j-1)}{\sqrt{\sum_{i=1}^{512} \sigma(i, j)^{2} \cdot \sum_{i=1}^{512} \sigma(i, j-1)^{2}}},
\sigma(i, j) = p(i, j) - \bar{p}(j),
where \bar{p}(j) is the mean value of the jth column. The inter-row correlation coefficient can be derived analogously.
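For reference, a small NumPy sketch of the inter-column correlation coefficient defined above. The (rows × columns) array layout and the square root in the denominator follow the standard Pearson form assumed here; this is an illustration, not the authors' implementation.

import numpy as np

def inter_column_correlation(img: np.ndarray, j: int) -> float:
    """Correlation coefficient r(j) between column j and column j-1 (0-based indices)."""
    a = img[:, j].astype(np.float64)
    b = img[:, j - 1].astype(np.float64)
    sa = a - a.mean()                         # sigma(i, j)   = p(i, j)   - column-j mean
    sb = b - b.mean()                         # sigma(i, j-1) = p(i, j-1) - column-(j-1) mean
    return float(np.sum(sa * sb) / np.sqrt(np.sum(sa ** 2) * np.sum(sb ** 2)))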
The experimental results indicate that the correlation coefficients of the original image all exceed 0.99. Inter-column differencing significantly reduces the correlation of the image. The entropy of the original image A is 10.3854 bit/symbol, while the entropy of the inter-column differential image A, at 4.9182 bit/symbol, is lower than that of the inter-row differential image A, which is 6.3071 bit/symbol. Further analysis on images from 51 diverse scenarios consistently demonstrates that the entropy of inter-column differential images reaches a minimum. These findings are summarized in Table 2. The 51 images are 14-bit infrared images captured by a line-scan infrared detector in different scenarios. The image sizes vary and include 640 × 512, 1000 × 2000, 2000 × 4000, 1000 × 4000, and 2000 × 8000.
Additionally, the infrared panoramic images processed in this study consist of 3072 pixels per column, with an image width of approximately 60,000 columns. When employing inter-column or inter-row differencing, preserving the first column or row is necessary for the restoration of subsequent rows or columns. As the first column of the image is generated at the beginning of the detector’s operation, only 3072 original pixels need to be immediately stored for inter-column differencing as opposed to 60,000 for inter-row differencing at different times. To maximize the compression speed, we only utilize inter-column differencing to eliminate image redundancy.
After the image redundancy is removed, the subsequent entropy coding requires the probability distribution information of the pixels. For a 14-bit infrared image, the dynamic range of pixel values is [0, 2^{14} - 1], but the differential image dynamic range is doubled to [-2^{14} + 1, 2^{14} - 1]. The experimental results also show that differential images often contain many 0 pixels, approximating a Laplace distribution. Therefore, through extensive experimentation, a general probability distribution model can be found to predict other differential images. A general probability distribution model can generate a general Huffman code table applicable to generic images. The higher the prediction accuracy of the probability distribution model, the better the compression.

4. Proposed Method

4.1. The Creation and Coding of General Code Table

4.1.1. Canonical Huffman Coding

The first step in Huffman coding is counting the frequency of each pixel symbol. The frequency serves as the weight, and the n pixel symbols can be used to construct a Huffman tree with n leaf nodes based on the weights.
A 3 × n array N is created to store the differential pixels, their frequencies of occurrence, and their code lengths. The code-length column is initialized to 0 and becomes known after building the Huffman tree (Algorithm 1) and calculating the code lengths (Algorithm 2). After counting the pixels of an image, a frequency table of n = 188 pixel symbols is obtained, with data from image B used for illustration. The pixel frequency statistics are shown in Table 3.
The canonical Huffman tree only needs to provide the code length of each pixel symbol, and an array H of size 2 × (2n − 1) can describe the Huffman tree for this purpose. The two columns of H record the weight and parent of each node, respectively. The first n rows of H record the leaf nodes and the last n − 1 rows record the newly synthesized nodes, with the parent of the root node set to 0. The canonical Huffman tree construction algorithm is shown in Algorithm 1. H after running Algorithm 1 is shown in Table 4.
Algorithm 1 Canonical Huffman tree creation.
Require: N
Ensure: H
  H ← 0, i = 1
  while i ≤ n do
      H(i, 1) = N(i, 2)
      i = i + 1
  end while
  while i ≤ 2 × n − 1 do
      (index1, index2) = find2minweight(H, i − 1)
      H(index1, 2) = i
      H(index2, 2) = i
      H(i, 1) = H(index1, 1) + H(index2, 1)
      i = i + 1
  end while
The Huffman tree, also known as an optimal binary tree, has the shortest weighted path length. Higher weights in a Huffman tree correspond to closer proximity to the root node. In image coding, higher weights signify a higher frequency of occurrence of pixel symbols, while being closer to the root node indicates shorter codes. Among all binary trees, the Huffman tree therefore minimizes the weighted path length and provides the optimal compression for images.
Algorithm 2 Root node backtracking for calculating code lengths.
Require: H
Ensure: N
  i = 1
  while i ≤ n do
      code_length = 0
      parent = H(i, 2)
      while parent ≠ 0 do
          code_length = code_length + 1
          parent = H(parent, 2)
      end while
      N(i, 3) = code_length
      i = i + 1
  end while
However, Huffman coding requires significant space to store the Huffman tree, so canonical Huffman coding is commonly employed. Canonical Huffman coding reconstructs the Huffman tree by recording only the number of codes of each code length. After H is established, the code length of each leaf node can be calculated by backtracking from the leaf node to the root node, as shown in Algorithm 2. Figure 2 shows the leaf node −3 backtracking towards the root node. A canonical Huffman code table C can be obtained by counting the number of leaf nodes of each code length, as shown in Algorithm 3.
Algorithm 3 Calculate C.
Require: N
Ensure: C
  i = 1, C ← 0
  while i ≤ n do
      code_length = N(i, 3)
      C(code_length) = C(code_length) + 1
      i = i + 1
  end while
The result of the experiment is C = [0, 1, 0, 5, 6, 6, 6, 9, 17, 23, 20, 20, 22, 10, 10, 12, 11, 10]. The result shows that the longest code length is 18 bits and there are 188 codes in total, corresponding to the 188 differential pixel symbols that occur. Canonical Huffman coding can recover the Huffman tree from C using three rules:
  • The first code value of the shortest code length is 0.
  • Within the same code length, successive code values increase by 1.
  • When the code length increases by m (m > 0), the first code value of the new length = (the last code value of the previous length + 1) × 2^m.
A set of prefix codes can be obtained from the three rules and C, as shown in Table 5.
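As a hedged illustration of the three rules, the following Python sketch derives (code length, code value) pairs from a length-count array C, where C[k] is the number of codes of length k + 1. Applied to C = [0, 1, 0, 5, 6, ...] above, it reproduces the codes in Table 5; the function name and list representation are illustrative choices, not the authors' implementation.

def canonical_codes(C):
    codes = []                          # list of (code_length, code_value), in frequency order
    value, prev_len = 0, None
    for k, count in enumerate(C):
        length = k + 1
        if count == 0:
            continue
        if prev_len is None:
            value = 0                                    # rule 1: first shortest code is 0
        else:
            value = (value + 1) << (length - prev_len)   # rule 3: shift when the length grows
        for _ in range(count):
            codes.append((length, value))
            value += 1                                   # rule 2: +1 within the same length
        value -= 1                                       # keep 'value' at the last assigned code
        prev_len = length
    return codes

# codes[0] -> (2, 0) i.e. "00"; codes[1] -> (4, 4) i.e. "0100", matching Table 5.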

4.1.2. Canonical Huffman Coding Based on General Code Table

For the compression of multi-frame images or large panoramic image datasets, the frequency table of each frame would have to be counted before constructing its Huffman tree, which seriously affects the compression speed. Section 3 shows that the probability distribution of differential pixels follows a certain pattern and approximately adheres to a Laplace distribution. Therefore, through extensive experimentation, a differential pixel probability distribution model suitable for most scenarios can be statistically derived. This model can then be used to generate a code table based on Huffman coding rules.
We select the 53 images mentioned earlier and count the frequency of occurrence of the differential pixels. Some of the images are shown in Figure 3. For each boundary value, we calculate the cumulative probability of pixels whose absolute values are smaller than that of the boundary pixel. The cumulative probability for boundary pixels with values less than 150 exceeds 0.99, as shown in Figure 4. Thus, the differential pixels are limited to [−150, +151] to create a general code table for broad application. In total, there are 302 points ranging from −150 to 151. We sort the differential pixels by frequency and plot the first 302 points with the highest probability of occurrence, as shown in Figure 5.
The mean probability distribution of the 53 images serves as a model for most scenes, as shown in Figure 5. The Huffman tree is constructed from the average probability distribution model to obtain the general code table: C = [0, 1, 0, 4, 8, 7, 7, 7, 8, 15, 23, 32, 36, 43, 44, 53, 14]. This C has a total of 302 codes, and each pixel symbol in the differential pixel range [−150, +151] has a one-to-one mapped code.
An array S is created to record the code values and code lengths in order from the smallest to the largest code value. The pixel values correspond to the S element index as follows:
index = \begin{cases} 2 \times (0 - pixel), & \text{if } pixel \le 0 \\ 2 \times pixel - 1, & \text{otherwise}, \end{cases}
The probability distribution of the difference image pixels is essentially symmetric around zero: the probability of a difference pixel being +pixel (pixel > 0) is approximately equal to that of −pixel. Huffman coding requires a probability model in which the probabilities are ranked from high to low. Therefore, −pixel is placed after +pixel and before +(pixel + 1). This is the reasoning behind the equation above. The new coding rules are shown in Table 6.
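A one-line Python sketch of the index mapping above (illustrative only; the function name is an assumption):

def pixel_to_index(pixel: int) -> int:
    # 0 -> 0, +1 -> 1, -1 -> 2, +2 -> 3, ..., -150 -> 300, +151 -> 301
    return 2 * (0 - pixel) if pixel <= 0 else 2 * pixel - 1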
For a 14-bit image, the number of symbols may exceed 302, so longer codes would be required. However, constructing long codes negatively impacts both the compression efficiency and speed. In practice, differential pixels are concentrated in a small range, and pixels beyond this range occur infrequently but inevitably, which makes the general code table C difficult to apply directly. Therefore, an improved Huffman coding lossless compression algorithm is proposed.

4.2. An Improved Huffman Coding Lossless Compression Algorithm

Since the probability distribution is concentrated in a small range, we only keep the shorter codes with high frequency. However, this leaves out-of-range pixels without a code. The proposed method represents all out-of-bounds pixels by the code with the maximum code value, i.e., the code “11111111111111111”, which previously encoded “+151”. The improved Huffman coding rules are as follows:
index = \begin{cases} 2 \times (0 - pixel), & \text{if } pixel \le 0 \\ 2 \times pixel - 1, & \text{if } pixel > 0 \\ 301, & \text{if } index > 300, \end{cases}
The coding rules are shown in Table 7.
When index > 300, we let index = 301 and encode the out-of-bounds pixel with the maximum code value; an array B records the original values of the out-of-bounds pixels in the order in which they appear. Since the probability of an out-of-bounds pixel is extremely small in the whole image, B does not take up much space and has little impact on the compression ratio. When the longest code is decoded, the out-of-bounds pixels are taken out of B sequentially so that the differential image can be recovered. The algorithm is described in Algorithm 4 and Figure 6.
Algorithm 4 Lossless compression algorithm for the improved Huffman coding based on a code table.
Require: img_data, B
Ensure: compress_data
  i = 1
  S = create_S(C)
  data1 = save_first_column(img_data)
  dif_img_data = inter_column_differencing(img_data)
  while i ≤ img_width − 1 do
      j = 1
      while j ≤ img_height do
          pixel = dif_img_data(j, i)
          if pixel ≤ 0 then
              index = 2 × (0 − pixel)
          else
              index = 2 × pixel − 1
          end if
          if index > 300 then
              index = 301
              B ← pixel
          end if
          bitstream ← encoding(S, index)
          j = j + 1
      end while
      i = i + 1
  end while
  compress_data = save_data(data1, bitstream, B)
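The following Python sketch mirrors the encoding loop of Algorithm 4; it is an illustration under the assumptions of this paper, not the authors' code. S is assumed to be a list of 302 (code length, code value) pairs built from the general code table, the first column of the difference image is assumed to hold the original pixels, and out-of-range pixels are appended to B.

def encode_difference_image(dif_img, S):
    bits = []          # encoded bit stream collected as '0'/'1' strings
    B = []             # original values of out-of-range differential pixels
    rows, cols = dif_img.shape
    for j in range(1, cols):                  # column 0 is stored uncompressed
        for i in range(rows):
            pixel = int(dif_img[i, j])
            index = 2 * (0 - pixel) if pixel <= 0 else 2 * pixel - 1
            if index > 300:                    # outside [-150, +151]: use the longest code
                index = 301
                B.append(pixel)
            length, value = S[index]           # simple table lookup, no statistics needed
            bits.append(format(value, f"0{length}b"))
    return "".join(bits), B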
For images with higher bit depths, a new differential pixel range for one-to-one encoding must be found. The new range must cover most of the pixels in typical scenes in order to ensure that the improved Huffman coding remains efficient.
The original image can be recovered from the first column of the original image data and the differential image data. The formula for image recovery is as follows:
p(i, j) = \begin{cases} p(i, 1), & \text{if } j = 1 \\ p(i, j - 1) + d_p(i, j - 1), & \text{if } j > 1, \end{cases}
where p(i, j) is the current column pixel of the recovered image, p(i, j - 1) is the previous column pixel of the recovered image, p(i, 1) is the first column of the original image data, and d_p(i, j - 1) is the difference pixel.
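For completeness, a short NumPy sketch of the recovery formula above, assuming the decoded differential columns are stored as a (rows, cols − 1) array; names and layout are illustrative assumptions.

import numpy as np

def restore_image(first_column: np.ndarray, dif_columns: np.ndarray) -> np.ndarray:
    """Rebuild the original image from the stored first column and decoded differences."""
    rows = first_column.shape[0]
    img = np.empty((rows, dif_columns.shape[1] + 1), dtype=np.int32)
    img[:, 0] = first_column
    for j in range(1, img.shape[1]):
        img[:, j] = img[:, j - 1] + dif_columns[:, j - 1]   # p(i,j) = p(i,j-1) + d_p
    return img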

5. Results

The mean square error (MSE) between the original and reconstructed images is used to verify that the compression is lossless. MSE is defined as
MSE = \frac{1}{M \cdot N} \sum_{i=1}^{M} \sum_{j=1}^{N} \left[ p(i, j) - p'(i, j) \right]^2,
where p'(i, j) is the pixel at row i and column j of the reconstructed image, and p(i, j) is the pixel at row i and column j of the original image. The MSE describes the reconstruction error between the reconstructed image and the original image. The MSEs of the 53 scenes calculated in our experiment are all 0, indicating that the proposed algorithm achieves lossless compression.
Additionally, SSIM (Structural Similarity Index) is a metric used to measure the similarity between two images, defined as follows:
SSIM(p, p') = \frac{2 \mu_p \mu_{p'} + C_1}{\mu_p^2 + \mu_{p'}^2 + C_1} \cdot \frac{2 \sigma_{p p'} + C_2}{\sigma_p^2 + \sigma_{p'}^2 + C_2},
where \mu_p and \mu_{p'} are the mean values of the original and reconstructed images, respectively; \sigma_p^2 and \sigma_{p'}^2 denote their variances; and \sigma_{p p'} represents the covariance between the original and reconstructed images. C_1 and C_2 are small constants used to stabilize the computation and prevent the denominator from approaching zero. All 53 images have an SSIM of 1, further confirming that the compression is lossless and the reconstructed images are structurally identical to the originals, with no loss in image quality.
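A possible verification sketch in Python follows; the SSIM computation uses scikit-image, which is an assumed dependency not mentioned in the paper.

import numpy as np
from skimage.metrics import structural_similarity

def verify_lossless(original: np.ndarray, restored: np.ndarray):
    # MSE as defined above; should be exactly 0.0 for a lossless reconstruction
    mse = float(np.mean((original.astype(np.float64) - restored.astype(np.float64)) ** 2))
    # SSIM with the data range of 14-bit pixels; should be 1.0 for identical images
    ssim = float(structural_similarity(original, restored, data_range=2**14 - 1))
    return mse, ssim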
The code table is calculated from the 53 scenes, so additional scenes are needed to verify the generality of the algorithm. We capture another 37 scenes for validation, partially displayed in Figure 7.
The compression ratio is defined as
C_r = \frac{\text{Source file size}}{\text{Compressed file size}},
We use a 16-bit space to store a 14-bit pixel, so the theoretical limit compression ratio is defined as
TC_r = \frac{16}{\text{Image information entropy}},
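Both ratios can be computed directly; the following trivial Python sketch uses illustrative helper names (not from the paper).

def compression_ratio(source_file_size: int, compressed_file_size: int) -> float:
    # Cr = source file size / compressed file size
    return source_file_size / compressed_file_size

def theoretical_compression_ratio(difference_image_entropy: float) -> float:
    # TCr = 16 / image information entropy (14-bit pixels stored in 16-bit words)
    return 16.0 / difference_image_entropy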

5.1. Proposed Method Compared with JPEG Series Algorithms

On an experimental platform of 12th Gen Intel(R) Core(TM) i7-12700H, a 20-core CPU at 2.30 GHz, and 16 GB of RAM, we test JPEG 2000, JPEG XL, JPEG XT, and the method proposed in this paper, and the results are shown in Figure 8 and Table 8. Figure 8a shows the compression ratio test results of the proposed method and JPEG series methods on 37 images, along with the theoretical compression ratio calculated based on the entropy of the difference image. Figure 8b presents the speed test results. Table 8 records the average values of the compression ratio and speed from Figure 8. It also includes the percentage change in the compression ratio of the proposed algorithm compared to both the JPEG series methods and the theoretical value.
The method proposed in this paper achieves an average compression ratio of 3.3 for infrared line-scan images, which is 5% lower than the theoretical value. Compared to JPEG 2000, the proposed method incurs a 10% loss in compression efficiency but provides a 20-fold speed improvement, reaching an average of 210 MB/s. Its performance and speed both outperform JPEG XT, and it is approximately 8 times faster than JPEG XL.

5.2. Proposed Method Compared with TIFF

TIFF (Tagged Image File Format) is an image storage format. The term “Tagged” in “TIFF” refers to the complex file structure of this format. TIFF allows the flexible use of compression methods to maintain image integrity and clarity. Lossless compression methods that can be used include LZW, Deflate, LZMA, and Packbits [32,33]. The dictionary-based table lookup encoding in TIFF, such as LZW, completely avoids frequency statistics, providing a very high compression speed. However, dictionary-based lossless image compression has limitations, as it cannot effectively remove spatial redundancy in images. The Deflate method can achieve higher compression efficiency by adjusting the dictionary size. However, using an excessively large dictionary increases memory and time consumption, which may degrade the compression speed.
In the experiment, we test the performance of LZW, Deflate, LZMA, and Packbits. The results are shown in Figure 9 and Table 9.
The Deflate method combines LZ77 and Huffman coding. Under the condition of maximum compression efficiency, its speed is 8 times faster than JPEG 2000. However, it experiences a significant loss in compression ratio, approximately 46%. In contrast, the proposed method outperforms Deflate in both speed and efficiency.
In TIFF, the compression efficiency of the LZW method is not adjustable. Although it achieves a significant speed improvement, approximately 31 times faster than JPEG 2000, the compression ratio loss is substantial, reaching 63%. The proposed method, compared to LZW, is better at achieving high-speed image compression while maintaining high compression efficiency.
The LZMA method achieves a high compression ratio, but this comes at the expense of compression speed, with speeds comparable to JPEG2000 at its highest compression efficiency, yet resulting in a 17% loss in compression ratio. The proposed method outperforms LZMA in both speed and efficiency.
The PackBits method is a simple variant of Run-Length Encoding (RLE). The RLE pattern is as follows: repeated symbol + count of repetitions. While the PackBits method provides high compression speed, in images with few repeating patterns, the overhead of recording the repetition count causes the compressed files to be twice the size of the original, resulting in no effective compression.
It is worth noting that the method in this paper was tested only in single-threaded, single-core mode, without any SIMD (Single Instruction, Multiple Data) optimizations. Based on the above experiments, we conclude that, compared to dictionary-based methods, the proposed method ensures high compression efficiency while achieving fast compression for line-scan panoramic infrared images.
The detailed experimental data are shown in Table A1 and Table A2 in Appendix A.

6. Conclusions

This paper designs a new low-complexity lossless compression algorithm for 14-bit line-scan infrared images. This paper proposes a method for constructing a Huffman code table, replacing the pixel probability statistical step in entropy coding, thereby improving the compression speed. For more bit images, the method proposed in this paper can be used to find a new probability model to compute a new code table. Additionally, this paper designs an improved Huffman coding scheme to handle the longer codes of 14-bit images, truncating long codes with low complexity and minimal compression ratio loss, ultimately realizing a low-complexity lossless image compression algorithm. The method proposed in this paper achieves an average compression ratio of 3.3 for infrared line-scan images, which is 5% lower than the theoretical value. Compared to JPEG 2000, the proposed method incurs a 10% loss in compression efficiency but provides a 20-fold speed improvement, reaching an average of 210 MB/s. Its performance and speed both outperform JPEG XT. Compared to dictionary-based lossless compression methods, the proposed method can achieve high-speed compression while maintaining high compression efficiency.
The method proposed in this paper can be extended to other general image domains that require high-speed compression and can tolerate some loss in compression ratio. Additionally, if the code table is concealed, this method could also facilitate encrypted image transmission, making it applicable to secure communication systems. However, there are certain limitations to the proposed method. For instance, while the method significantly speeds up compression, the 10% loss in compression efficiency compared to JPEG 2000 suggests that further optimizations could be made to balance both the speed and compression ratio. Additionally, the proposed algorithm might face challenges when applied to more complex image types or images with larger bit depths, as the probability model used may need to be adapted to these cases.
Future research can explore several directions. First, improving the Huffman coding scheme to achieve better compression efficiency without compromising speed is a potential area of investigation. Second, adapting the proposed method to work with images of higher bit depths or more complex data types may enhance its applicability in other fields, such as medical imaging or remote sensing. This could be achieved by incorporating more widely used image prediction methods to reduce redundancy, which would allow the approach to better handle more complex image types. Finally, investigating hybrid methods that combine the strengths of both dictionary-based and statistical compression techniques could lead to even more efficient algorithms for infrared image compression.

Author Contributions

Conceptualization, Y.Z. (Yaohua Zhu); Supervision, Y.Z. (Yong Zhang); Validation, M.H. and Y.Z. (Yanghang Zhu); Writing—original draft, Y.Z. (Yaohua Zhu). All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Data will be made available upon reasonable request.

Acknowledgments

The authors sincerely thank the anonymous reviewers for their insightful comments and valuable suggestions.

Conflicts of Interest

The authors declare that there are no conflicts of interest.

Appendix A

Table A1. The detailed experimental data for JPEG series.
Scene | TCr | Cr of the Proposed Method | Compression Speed of the Proposed Method (MB/s) | Cr of JPEG 2000 | Compression Speed of JPEG 2000 (MB/s) | Cr of JPEG XL | Compression Speed of JPEG XL (MB/s) | Cr of JPEG XT | Compression Speed of JPEG XT (MB/s)
1 | 3.1768 | 2.9919 | 220.5027 | 3.5481 | 11.9596 | 3.2813 | 29.7892 | 2.5360 | 8.2258
2 | 3.0486 | 2.8599 | 208.7386 | 3.4462 | 11.1542 | 3.1815 | 22.4394 | 2.5160 | 7.8654
3 | 3.1527 | 2.9754 | 215.9772 | 3.5805 | 12.2606 | 3.3094 | 29.4655 | 2.6200 | 8.2037
4 | 3.1756 | 2.9938 | 209.7428 | 3.5775 | 11.5322 | 3.3189 | 31.1404 | 2.6602 | 8.2928
5 | 3.1685 | 2.9889 | 210.6113 | 3.5943 | 11.5322 | 3.3319 | 28.4655 | 2.7169 | 8.0310
6 | 3.2182 | 3.0275 | 213.7085 | 3.6435 | 10.1285 | 3.3655 | 27.1404 | 2.7588 | 7.9061
7 | 3.0342 | 2.8428 | 209.0245 | 3.4227 | 11.1542 | 3.1791 | 25.4314 | 2.5528 | 7.7065
8 | 3.1534 | 2.9606 | 211.3406 | 3.5747 | 11.1542 | 3.3046 | 27.2479 | 2.6841 | 7.6294
9 | 3.1471 | 2.9638 | 210.1762 | 3.5644 | 11.6240 | 3.3050 | 27.2479 | 2.6657 | 7.8654
10 | 3.2034 | 3.0163 | 217.8271 | 3.6192 | 10.8144 | 3.3512 | 29.9192 | 2.6967 | 7.5914
11 | 3.2252 | 3.0337 | 216.1302 | 3.6467 | 10.6719 | 3.3722 | 27.3439 | 2.7325 | 7.5539
12 | 3.2179 | 3.0270 | 204.2676 | 3.6107 | 12.3951 | 3.3584 | 28.2571 | 2.6170 | 7.9473
13 | 3.2383 | 3.0417 | 203.8388 | 3.6076 | 11.0542 | 3.3598 | 25.8416 | 2.5991 | 7.6294
14 | 3.1951 | 2.9963 | 205.9216 | 3.6026 | 11.1285 | 3.2906 | 23.8419 | 2.6255 | 7.6294
15 | 3.2632 | 3.0518 | 204.2676 | 3.6090 | 12.1285 | 3.3435 | 28.2571 | 2.6780 | 7.7851
16 | 3.1874 | 2.9937 | 217.9827 | 3.5267 | 11.3951 | 3.2584 | 32.4655 | 2.5446 | 7.8250
17 | 3.1927 | 3.0095 | 209.0245 | 3.5053 | 10.6240 | 3.2840 | 25.4314 | 2.5461 | 7.8654
18 | 3.1927 | 3.0095 | 201.0381 | 3.5053 | 10.6240 | 3.2840 | 26.3083 | 2.5461 | 8.0310
19 | 3.0372 | 2.8662 | 205.3673 | 3.3692 | 10.6240 | 3.2028 | 26.3083 | 2.4760 | 7.7851
20 | 3.1576 | 2.9898 | 200.7735 | 3.5245 | 11.1285 | 3.3199 | 26.3083 | 2.6196 | 7.6294
21 | 3.1619 | 2.9896 | 204.8160 | 3.5364 | 11.3849 | 3.3211 | 25.4314 | 2.6349 | 7.7065
22 | 3.1744 | 2.9936 | 209.0245 | 3.5708 | 10.3849 | 3.3346 | 27.2479 | 2.6729 | 7.7065
23 | 3.2536 | 3.0648 | 219.7090 | 3.6500 | 11.3951 | 3.3833 | 31.1404 | 2.7380 | 7.9061
24 | 3.7155 | 3.5391 | 214.6103 | 3.6718 | 10.1285 | 4.1812 | 25.4314 | 2.6504 | 7.8654
25 | 3.9185 | 3.7452 | 210.4661 | 3.7511 | 11.1285 | 4.2950 | 25.4314 | 2.7711 | 7.9473
26 | 3.9677 | 3.8641 | 212.6661 | 3.7611 | 10.9596 | 4.2307 | 25.1404 | 2.7459 | 8.2928
27 | 3.9908 | 3.8094 | 210.7568 | 3.8141 | 10.3951 | 4.3504 | 26.3083 | 2.8613 | 8.0310
28 | 3.8319 | 3.7394 | 213.4096 | 3.6899 | 11.3951 | 4.1242 | 28.3439 | 2.7448 | 8.1598
29 | 4.3438 | 4.1742 | 212.3701 | 3.9925 | 11.2588 | 4.4811 | 29.1713 | 2.6606 | 8.6208
30 | 4.5013 | 4.2840 | 221.1419 | 4.1943 | 10.2588 | 4.6864 | 28.9085 | 2.8206 | 8.3840
31 | 3.6849 | 3.5832 | 211.8644 | 3.6936 | 7.8127 | 4.0498 | 12.9763 | 2.6066 | 5.2085
32 | 4.0417 | 3.9486 | 217.3617 | 3.8516 | 10.5322 | 4.2752 | 27.5176 | 2.7352 | 8.3840
33 | 4.0588 | 3.9554 | 218.2946 | 3.8121 | 12.1285 | 4.2626 | 32.4655 | 2.6399 | 8.5245
34 | 4.0579 | 3.9499 | 218.9209 | 3.7868 | 12.3849 | 4.3119 | 23.8419 | 2.7562 | 8.2037
35 | 3.5538 | 3.4228 | 223.2143 | 3.4917 | 10.4169 | 3.9657 | 12.8410 | 2.5395 | 4.8078
36 | 3.7599 | 3.6812 | 210.7568 | 3.6787 | 11.1285 | 4.0933 | 29.9192 | 2.6909 | 7.9473
37 | 3.4841 | 3.2131 | 208.3333 | 3.5793 | 7.8127 | 4.0715 | 13.1251 | 2.5615 | 5.6820
Mean | 3.4564 | 3.2864 | 211.7291 | 3.6379 | 10.9728 | 3.6599 | 26.3214 | 2.6546 | 7.7399
Table A2. The detailed experimental data for TIFF.
Scene | Cr of TIFF-Deflate | Compression Speed of TIFF-Deflate (MB/s) | Cr of TIFF-Lzw | Compression Speed of TIFF-Lzw (MB/s) | Cr of TIFF-Packbits | Compression Speed of TIFF-Packbits (MB/s) | Cr of TIFF-LZMA | Compression Speed of TIFF-LZMA (MB/s)
1 | 1.7323 | 137.2362 | 1.2237 | 354.335 | 0.4961 | 764.2569 | 2.6897 | 12.8439
2 | 1.7141 | 88.9397 | 1.2233 | 338.4602 | 0.4961 | 675.058 | 2.5507 | 12.7596
3 | 1.9159 | 44.2973 | 1.4343 | 387.4113 | 0.4961 | 667.0406 | 2.7915 | 11.8134
4 | 1.9748 | 41.3006 | 1.4649 | 389.5875 | 0.4961 | 691.4349 | 2.7722 | 12.0544
5 | 1.9429 | 42.8154 | 1.4577 | 388.409 | 0.4961 | 684.289 | 2.7927 | 11.7786
6 | 1.9797 | 44.5966 | 1.4961 | 386.0983 | 0.4961 | 733.714 | 2.8641 | 11.8401
7 | 1.7188 | 81.3959 | 1.2235 | 337.59 | 0.4961 | 616.2892 | 2.5039 | 12.8117
8 | 1.8508 | 53.9238 | 1.3696 | 343.7339 | 0.4961 | 641.3252 | 2.747 | 12.0309
9 | 1.8356 | 60.8802 | 1.3469 | 344.9068 | 0.4961 | 663.6218 | 2.7225 | 11.8702
10 | 1.8551 | 56.232 | 1.3971 | 375.5523 | 0.4961 | 717.4474 | 2.7892 | 11.8174
11 | 1.8572 | 51.9528 | 1.3714 | 341.9328 | 0.4961 | 692.1899 | 2.8047 | 11.5897
12 | 1.8167 | 66.8348 | 1.32 | 342.2499 | 0.4961 | 678.6133 | 2.746 | 11.8523
13 | 1.8042 | 77.4624 | 1.3211 | 341.2405 | 0.4961 | 639.7997 | 2.7303 | 12.2185
14 | 2.0123 | 44.9636 | 1.541 | 361.3333 | 0.4961 | 525.7964 | 2.8806 | 11.7433
15 | 1.8378 | 70.7508 | 1.3248 | 341.7528 | 0.4961 | 601.8106 | 2.7392 | 12.159
16 | 1.7254 | 112.1859 | 1.1928 | 358.6109 | 0.4961 | 664.3967 | 2.6726 | 12.6656
17 | 1.7668 | 81.4572 | 1.2561 | 336.6756 | 0.4961 | 687.7876 | 2.6101 | 12.3507
18 | 1.7668 | 81.6966 | 1.2561 | 335.509 | 0.4961 | 642.4015 | 2.6101 | 12.1252
19 | 1.6724 | 95.5526 | 1.1785 | 340.2141 | 0.4961 | 625.8755 | 2.4342 | 12.3639
20 | 1.8337 | 61.2563 | 1.3462 | 343.386 | 0.4961 | 669.5403 | 2.6852 | 11.9983
21 | 1.82 | 52.1605 | 1.3543 | 348.6664 | 0.4961 | 673.4058 | 2.6875 | 11.8417
22 | 1.8759 | 46.1301 | 1.427 | 353.5438 | 0.4961 | 635.2889 | 2.7911 | 11.6741
23 | 1.9998 | 39.9932 | 1.5235 | 388.0428 | 0.4961 | 739.2285 | 2.8998 | 11.6483
24 | 2.274 | 85.3041 | 1.446 | 343.6877 | 0.4961 | 616.094 | 3.4519 | 11.4221
25 | 2.4019 | 69.5528 | 1.5407 | 349.4065 | 0.4961 | 625.0381 | 3.607 | 10.425
26 | 2.2553 | 84.8093 | 1.4204 | 373.6603 | 0.4961 | 733.056 | 3.52 | 11.229
27 | 2.3745 | 71.4099 | 1.4962 | 350.0444 | 0.4961 | 685.9447 | 3.6391 | 11.1167
28 | 2.2085 | 84.5489 | 1.415 | 373.6742 | 0.4961 | 599.9558 | 3.4078 | 11.3581
29 | 2.2003 | 141.8779 | 1.2285 | 356.2801 | 0.4961 | 699.6826 | 3.7269 | 11.9364
30 | 2.2873 | 189.0449 | 1.2331 | 362.9844 | 0.4961 | 689.2137 | 3.9037 | 13.0254
31 | 1.9966 | 49.1781 | 1.2575 | 130.3163 | 0.496 | 329.6872 | 3.162 | 7.8971
32 | 2.0742 | 161.997 | 1.1831 | 351.285 | 0.4961 | 704.2595 | 3.4939 | 12.3269
33 | 2.0721 | 189.1148 | 1.1485 | 350.3113 | 0.4961 | 670.6786 | 3.5074 | 12.7368
34 | 2.252 | 94.9202 | 1.352 | 341.5509 | 0.4961 | 643.1316 | 3.6304 | 12.1598
35 | 2.0703 | 41.3493 | 1.2683 | 127.6347 | 0.496 | 296.0901 | 3.2096 | 8.0878
36 | 2.1171 | 102.2465 | 1.3242 | 373.0382 | 0.4961 | 718.1107 | 3.3303 | 11.9698
37 | 2.2903 | 24.6021 | 1.4882 | 127.7127 | 0.496 | 316.9249 | 3.4111 | 7.0688
Mean | 1.9779 | 79.0262 | 1.3473 | 337.5899 | 0.4961 | 639.4183 | 3.0139 | 11.6381

References

  1. Li, H. Infrared thermal imaging technology towards the new century. Laser Optoelectron. Prog. 2002, 39, 48–51. [Google Scholar]
  2. Singh, S.; Pandey, P. Enhanced LZW technique for medical image compression. In Proceedings of the 2016 3rd International Conference on Computing for Sustainable Global Development (INDIACom), New Delhi, India, 16–18 March 2016; pp. 1080–1084. [Google Scholar]
  3. Giri, K.; Mishra, A.; Rongali, A. An innovation analysis of LZ77 and LZ78 Compression Algorithms for Data Compression & Source Coding. In Proceedings of the 2024 15th International Conference on Computing Communication and Networking Technologies (ICCCNT), Mandi, India, 18–22 June 2024; pp. 1–5. [Google Scholar]
  4. Antonini, M.; Barlaud, M.; Mathieu, P.; Daubechies, I. Image coding using wavelet transform. IEEE Trans. Image Process. 1992, 1, 20–25. [Google Scholar] [CrossRef] [PubMed]
  5. Yea, S.; Pearlman, W.A. A wavelet-based two-stage near-lossless coder. IEEE Trans. Image Process 2006, 11, 3488–3500. [Google Scholar] [CrossRef] [PubMed]
  6. Usevitch, B.E. A tutorial on modern lossy wavelet image compression: Foundations of JPEG 2000. IEEE Signal Process. Mag. 2001, 18, 22–35. [Google Scholar] [CrossRef]
  7. Mandyam, G.; Ahmed, N.; Magotra, N. Lossless image compression using the discrete cosine transform. J. Vis. Commun. Image Represent. 1997, 8, 21–26. [Google Scholar] [CrossRef]
  8. Xiao, B.; Lu, G.; Zhang, Y.; Li, W.; Wang, G. Lossless image compression based on integer Discrete Tchebichef Transform. Neurocomputing 2016, 214, 587–593. [Google Scholar] [CrossRef]
  9. Huffman, D.A. A method for the construction of minimum-redundancy codes. Proc. IRE 1952, 40, 1098–1101. [Google Scholar] [CrossRef]
  10. Witten, I.H.; Neal, R.M.; Cleary, J.G. Arithmetic coding for data compression. Commun. ACM 1987, 30, 520–540. [Google Scholar] [CrossRef]
  11. Jiang, W.W.; Kiang, S.Z.; Hakim, N.Z.; Meadows, H.E. Lossless compression for medical imaging systems using linear/nonlinear prediction and arithmetic coding. In Proceedings of the 1993 IEEE International Symposium on Circuits and Systems (ISCAS), Chicago, IL, USA, 3–6 May 1993; pp. 283–286. [Google Scholar]
  12. Weinberger, M.J.; Seroussi, G.; Sapiro, G. The LOCO-I lossless image compression algorithm: Principles and standardization into JPEG-LS. IEEE Trans. Image Process. 2000, 9, 1309–1324. [Google Scholar] [CrossRef]
  13. Tan, Y.H.; Yeo, C.; Li, Z. Residual DPCM for lossless coding in HEVC. In Proceedings of the 2013 IEEE International Conference on Acoustics, Speech and Signal Processing, Vancouver, BC, Canada, 26–31 May 2013; pp. 2021–2025. [Google Scholar]
  14. Sanchez, V.; Auli-Llinas, F.; Serra-Sagrista, J. DPCM-based edge prediction for lossless screen content coding in HEVC. IEEE J. Emerg. Sel. Top. Circuits Syst. 2016, 6, 497–507. [Google Scholar] [CrossRef]
  15. Harnik, D.; Khaitzin, E.; Sotnikov, D. A fast implementation of deflate. In Proceedings of the 2014 Data Compression Conference, Snowbird, UT, USA, 26–28 March 2014; pp. 223–232. [Google Scholar]
  16. Leavline, E.; Singh, D. Hardware implementation of LZMA data compression algorithm. Int. J. Appl. Inf. Syst. 2013, 5, 51–56. [Google Scholar]
  17. Wallace, G.K. The JPEG still picture compression standard. Commun. ACM 1991, 34, 30–44. [Google Scholar] [CrossRef]
  18. Skodras, A.; Christopoulos, C.; Ebrahimi, T. The JPEG 2000 still image compression standard. IEEE Signal Process. Mag. 2001, 18, 36–58. [Google Scholar] [CrossRef]
  19. Chiou, P.T.; Sun, Y.; Young, G.S. A complexity analysis of the JPEG image compression algorithm. In Proceedings of the 2017 9th Computer Science and Electronic Engineering (CEEC), Colchester, UK, 27–29 September 2017; pp. 65–70. [Google Scholar]
  20. Alakuijala, J.; Van Asseldonk, R.; Boukortt, S.; Bruse, M.; Comșa, I.M.; Firsching, M. JPEG XL next-generation image compression architecture and coding tools. In Proceedings of the Applications of Digital Image Processing XLII, San Diego, CA, USA, 6 September 2019; Volume 11137, pp. 112–124. [Google Scholar]
  21. Artusi, A.; Mantiuk, R.K.; Richter, T.; Hanhart, P.; Korshunov, P.; Agostinelli, M. Overview and evaluation of the JPEG XT HDR image compression standard. J. Real-Time Image Process. 2019, 16, 413–428. [Google Scholar] [CrossRef]
  22. Yuan, X.; Haimi-Cohen, R. Image compression based on compressive sensing: End-to-end comparison with JPEG. IEEE Trans. Multimed. 2020, 22, 2889–2904. [Google Scholar] [CrossRef]
  23. Cea-Dominguez, C.; Moure, J.C.; Bartrina-Rapesta, J.; Auli-Llinas, F. Complexity scalable bitplane image coding with parallel coefficient processing. IEEE Signal Process. Lett. 2020, 27, 840–844. [Google Scholar] [CrossRef]
  24. Wu, Z.; Zhang, W.; Jing, P.; Liu, Y. A High-Performance Dual-Context MQ Encoder Architecture Based on Extended Lookup Table. IEEE Trans. Very Large Scale Integr. (VLSI) Syst. 2023, 31, 897–901. [Google Scholar] [CrossRef]
  25. Shannon, C.E. A mathematical theory of communication. Bell Syst. Tech. J. 1948, 27, 379–423. [Google Scholar] [CrossRef]
  26. Vitter, J.S. Design and analysis of dynamic Huffman coding. In Proceedings of the 26th Annual Symposium on Foundations of Computer Science, Portland, OR, USA, 21–23 October 1985; pp. 293–302. [Google Scholar]
  27. Schwartz, E.S.; Kallick, B. Generating a canonical prefix encoding. Commun. ACM 1964, 7, 166–169. [Google Scholar] [CrossRef]
  28. Reinhardt, A.; Christin, D.; Hollick, M.; Schmitt, J.; Mogre, P.S.; Steinmetz, R. Trimming the tree: Tailoring adaptive huffman coding to wireless sensor networks. In Proceedings of the Wireless Sensor Networks: 7th European Conference, Coimbra, Portugal, 17–19 February 2010; pp. 33–48. [Google Scholar]
  29. Yunge, D.; Park, S.; Kindt, P.; Chakraborty, S. Dynamic alternation of Huffman codebooks for sensor data compression. IEEE Embed. Syst. Lett. 2017, 9, 81–84. [Google Scholar] [CrossRef]
  30. Reinhardt, A.; Christin, D.; Steinmetz, R. Pre-allocating code mappings for energy-efficient data encoding in wireless sensor networks. In Proceedings of the 2013 IEEE International Conference on Pervasive Computing and Communications Workshops (PERCOM Workshops), San Diego, CA, USA, 18–22 March 2013; pp. 578–583. [Google Scholar]
  31. Xu, L.; Li, Q.; Zhu, B. Modified adaptive Huffman coding algorithm for wireless sensor network. J. Nanjing Univ. Sci. Technol. 2013, 37, 813–817. [Google Scholar]
  32. Kabachinski, J. TIFF, GIF, and PNG: Get the picture? Biomed. Instrum. Technol. 2007, 41, 297–300. [Google Scholar] [CrossRef] [PubMed]
  33. Wei, Z.; Sun, Z.; Xie, Y. GPU Acceleration of integer wavelet transform for TIFF image. In Proceedings of the 2010 3rd International Symposium on Parallel Architectures, Algorithms and Programming, Dalian, China, 18–20 December 2010; pp. 138–143. [Google Scholar]
Figure 1. Frequency statistics and correlation analysis of infrared images. (a) Original image A. (b) Original image A pixel frequency. (c) Difference image A. (d) Difference image A pixel frequency. (e) Correlation analysis of A. (f) Original image B. (g) Original image B pixel frequency. (h) Difference image B. (i) Difference image B pixel frequency. (j) Correlation analysis of B.
Figure 2. Leaf node −3 backtracking towards the root node.
Figure 3. Various experimental scenes (partial of 53).
Figure 4. The total probability of the occurrence of pixels within the boundary pixel.
Figure 5. Differential pixels probability distribution. (a) Probability distribution of the first 302 differential pixels in 53 images. (b) Average probability distribution of 53 images.
Figure 6. The lossless compression algorithm. (a) Coding rules of the improved Huffman coding. (b) Framework of the lossless compression algorithm.
Figure 7. Various experimental scenes (partial of 37).
Figure 8. Speed and compression ratio of the proposed method and JPEG series. (a) Compression ratio. (b) Compression speed.
Figure 9. Speed and compression ratio of the proposed method and TIFF. (a) Compression speed. (b) Compression speed. (c) Compression ratio.
Table 1. Image information entropy.
Image | Original Image | Entropy1 ¹ | Entropy2 ²
A | 10.3854 | 4.9182 | 6.3071
B | 9.9112 | 4.3792 | 5.7613
¹ entropy1 represents inter-column differential image entropy (bit/symbol). ² entropy2 represents inter-row differential image entropy (bit/symbol).
Table 2. Image information entropy.
Image | Original Image | Entropy1 | Entropy2 | Image | Original Image | Entropy1 | Entropy2
1 | 9.9600 | 5.2174 | 6.7721 | 27 | 9.8706 | 4.3181 | 5.8708
2 | 9.9683 | 4.7322 | 6.4009 | 28 | 9.7086 | 3.9973 | 5.4276
3 | 10.2698 | 4.5515 | 6.4508 | 29 | 9.5983 | 3.4375 | 5.2666
4 | 9.5139 | 5.3250 | 6.1609 | 30 | 9.2272 | 3.5810 | 5.1778
5 | 9.7559 | 4.1851 | 5.9531 | 31 | 9.4682 | 3.6876 | 5.3521
6 | 9.8494 | 5.1104 | 6.1961 | 32 | 9.1860 | 4.0401 | 5.5243
7 | 10.1632 | 5.0388 | 6.9598 | 33 | 9.2012 | 3.9555 | 5.4476
8 | 9.3903 | 3.9345 | 5.3837 | 34 | 8.9853 | 3.8260 | 5.4016
9 | 8.8980 | 3.7792 | 5.2160 | 35 | 9.0392 | 4.0108 | 5.4408
10 | 7.9109 | 3.5563 | 4.7983 | 36 | 11.4619 | 4.5274 | 6.3586
11 | 9.7017 | 4.0398 | 5.4874 | 37 | 11.0728 | 4.8246 | 6.3974
12 | 10.0977 | 4.3205 | 5.7156 | 38 | 10.0466 | 5.1714 | 6.0446
13 | 10.0881 | 4.9229 | 6.1077 | 39 | 9.0339 | 4.8714 | 5.6439
14 | 11.4858 | 4.0247 | 6.1788 | 40 | 10.4894 | 5.0793 | 6.0699
15 | 10.6309 | 4.6104 | 6.3493 | 41 | 9.7625 | 5.0286 | 5.4812
16 | 10.2181 | 5.0015 | 6.1660 | 42 | 8.7624 | 5.0047 | 5.2910
17 | 10.5578 | 5.2280 | 6.2924 | 43 | 9.0660 | 5.0095 | 5.3068
18 | 10.3562 | 4.8746 | 5.9831 | 44 | 8.1629 | 4.9647 | 5.2399
19 | 10.5153 | 5.8710 | 6.6050 | 45 | 8.6185 | 5.1068 | 5.4800
20 | 10.2684 | 4.9927 | 6.0834 | 46 | 10.2691 | 5.4793 | 6.0277
21 | 10.9376 | 4.9940 | 6.2051 | 47 | 10.5526 | 5.5133 | 6.0654
22 | 10.4347 | 4.8137 | 6.2901 | 48 | 10.5964 | 5.6067 | 6.1470
23 | 10.2343 | 4.9127 | 6.2521 | 49 | 10.4337 | 5.3266 | 5.8868
24 | 10.1956 | 4.9139 | 6.3524 | 50 | 10.0411 | 5.4127 | 5.9088
25 | 10.5413 | 5.6776 | 6.5868 | 51 | 10.1527 | 5.2030 | 5.8944
26 | 8.9403 | 3.7664 | 5.1789 | | | |
Table 3. Differential pixel frequency statistics of image B.
Pixel ¹ | Frequency | Code Length
… ² | … | …
−3 | 14,253 | 0
−2 | 17,124 | 0
−1 | 19,885 | 0
0 | 106,708 | 0
1 | 17,535 | 0
2 | 14,818 | 0
3 | 12,135 | 0
¹ ‘Pixel’ represents differential pixel; ² ‘…’ represents omitted data.
Table 4. H describing the Huffman tree.
Pixel | Index | Weight | Parent
−3 | 77 | 14,253 | 364
−2 | 78 | 17,124 | 366
−1 | 79 | 19,885 | 368
0 | 80 | 106,708 | 374
1 | 81 | 17,535 | 366
2 | 82 | 14,818 | 365
3 | 83 | 12,135 | 363
\ ¹ | 356 | 12,674 | 363
\ | 357 | 14,755 | 364
\ | 358 | 15,480 | 365
\ | 361 | 21,246 | 368
\ | 362 | 23,294 | 369
\ | 363 | 24,809 | 369
\ | 364 | 29,008 | 370
\ | 365 | 30,298 | 370
\ | 366 | 34,659 | 371
\ | 367 | 37,261 | 371
\ | 368 | 41,131 | 372
\ | 369 | 48,103 | 372
\ | 370 | 59,306 | 373
\ | 371 | 71,920 | 373
\ | 372 | 89,234 | 374
\ | 373 | 131,226 | 375
\ | 374 | 195,942 | 375
\ | 375 | 327,168 | 0
¹ ‘\’ represents a synthesized node, which is a non-leaf node, so it has no corresponding pixel.
Table 5. Prefix codes derived from C.
Pixel | Frequency | Code Length | Code Value | Code
0 | 106,708 | 2 | 0 | 00
−1 | 19,885 | 4 | 4 | 0100
1 | 17,535 | 4 | 5 | 0101
−2 | 17,124 | 4 | 6 | 0110
2 | 14,818 | 4 | 7 | 0111
−3 | 14,253 | 4 | 8 | 1000
3 | 12,135 | 5 | 18 | 10010
Table 6. Coding rules of C.
Pixel | Index | S (Code Length) | S (Code Value) | Code
0 | 0 | 2 | 0 | 00
1 | 1 | 4 | 4 | 0100
−1 | 2 | 4 | 5 | 0101
2 | 3 | 4 | 6 | 0110
−2 | 4 | 4 | 7 | 0111
3 | 5 | 5 | 16 | 10000
−3 | 6 | 5 | 17 | 10001
… | … | … | … | …
150 | 299 | 17 | 131,069 | 11111111111111101
−150 | 300 | 17 | 131,070 | 11111111111111110
151 | 301 | 17 | 131,071 | 11111111111111111
Table 7. Coding rules of the variant Huffman coding.
Pixel | Index | S (Code Length) | S (Code Value) | Code
0 | 0 | 2 | 0 | 00
1 | 1 | 4 | 4 | 0100
−1 | 2 | 4 | 5 | 0101
2 | 3 | 4 | 6 | 0110
−2 | 4 | 4 | 7 | 0111
3 | 5 | 5 | 16 | 10000
−3 | 6 | 5 | 17 | 10001
… | … | … | … | …
150 | 299 | 17 | 131,069 | 11111111111111101
−150 | 300 | 17 | 131,070 | 11111111111111110
<−150 or >150 | 301 | 17 | 131,071 | 11111111111111111
Table 8. Experimental results of the proposed method and JPEG series.
The Average Results | Proposed Method | Theoretical Value | JPEG 2000 | JPEG XL | JPEG XT
Cr | 3.2864 | 3.4564 | 3.6379 | 3.6599 | 2.6546
The percentage change | \ | −4.9175% | −9.6633% | −10.2072% | 23.7988%
Compression speed (MB/s) | 211.7291 | \ | 10.9728 | 26.3214 | 7.7399
Table 9. Experimental results for TIFF.
The Average Results | JPEG 2000 | TIFF-Deflate | TIFF-Lzw | TIFF-Packbits | TIFF-LZMA
Cr | 3.6379 | 1.9779 | 1.3473 | 0.4961 | 3.0139
The percentage change | \ | −45.6307% | −62.9649% | −86.3630% | −17.1528%
Compression speed (MB/s) | 10.9728 | 79.0262 | 337.5899 | 639.4183 | 11.6381