Article

Edge-Based and Prediction-Based Transformations for Lossless Image Compression

by Md. Ahasan Kabir * and M. Rubaiyat Hossain Mondal
Institute of Information and Communication Technology, Bangladesh University of Engineering and Technology, Dhaka 1205, Bangladesh
* Author to whom correspondence should be addressed.
J. Imaging 2018, 4(5), 64; https://doi.org/10.3390/jimaging4050064
Submission received: 7 February 2018 / Revised: 24 April 2018 / Accepted: 1 May 2018 / Published: 4 May 2018

Abstract

Pixelated images are used to transmit data between computing devices that have cameras and screens. Significant compression of pixelated images has been achieved by an “edge-based transformation and entropy coding” (ETEC) algorithm recently proposed by the authors of this paper. The study of ETEC is extended in this paper with a comprehensive performance evaluation. Furthermore, a novel algorithm termed “prediction-based transformation and entropy coding” (PTEC) is proposed in this paper for pixelated images. In the first stage of the PTEC method, the image is divided hierarchically to predict the current pixel using neighboring pixels. In the second stage, the prediction errors are used to form two matrices, where one matrix contains the absolute error value and the other contains the polarity of the prediction error. Finally, entropy coding is applied to the generated matrices. This paper also compares the novel ETEC and PTEC schemes with the existing lossless compression techniques: “joint photographic experts group lossless” (JPEG-LS), “set partitioning in hierarchical trees” (SPIHT) and “differential pulse code modulation” (DPCM). Our results show that, for pixelated images, the new ETEC and PTEC algorithms provide better compression than other schemes. Results also show that PTEC has a lower compression ratio but better computation time than ETEC. Furthermore, when both compression ratio and computation time are taken into consideration, PTEC is more suitable than ETEC for compressing pixelated as well as non-pixelated images.

1. Introduction

In today’s information age, the world is overwhelmed with a huge amount of data. With the increasing use of computers, laptops, smartphones, and other computing devices, the amount of multimedia data in the form of text, audio, video, images, etc. is growing at an enormous speed. Storage of large volumes of data has already become an important concern for social media, email providers, medical institutes, universities, banks, and many other offices. Digital media such as digital cameras, digital cinemas, and films require high resolution images. In addition to storage, data often need to be transmitted over the Internet at the highest possible speed. Due to constraints in storage capacity and limitations in transmission bandwidth, the compression of data is vital [1,2,3,4,5,6,7,8].
The basic idea of compressing images lies in the fact that neighboring image pixels are correlated, and this correlation can be exploited to remove redundant information [9]. The removal of redundancy and irrelevancy leads to a reduction in image size. There are two major types of image compression: lossy and lossless [10,11,12]. In the case of lossless compression, the reconstruction process can recover the original image exactly from the compressed image. On the other hand, images that go through lossy compression cannot be precisely recovered to their original form. Examples of lossy compression include wavelet-based methods such as embedded zerotrees of wavelet transforms (EZW), as well as joint photographic experts group (JPEG) and moving picture experts group (MPEG) compression.
A large number of research papers report image compression algorithms. For example, one study [13] describes discrete cosine transform (DCT)-based lossless image compression in which the higher-energy coefficients in each block are quantized. Next, an inverse DCT is performed only on the quantized coefficients, yielding pixel values in the 2-D spatial domain. The pixel values of two neighboring regions are then subtracted to obtain a residual error sequence, which is encoded by an entropy coder such as Arithmetic or Huffman coding [13]. Image compression in the frequency domain using wavelets is reported in several studies [12,14,15,16,17]. In the method described in [14], a lifting-based bi-orthogonal wavelet transform is used, which produces coefficients that can be rounded without any loss of data. In the work of [18], the wavelet transform concentrates the image energy within fewer coefficients, which are encoded by the “set partitioning in hierarchical trees” (SPIHT) algorithm.
In [19], JPEG lossless (JPEG-LS), a prediction-based lossless scheme, is proposed for continuous tone images. In [14], the embedded zerotree wavelet (EZW) coding method is proposed based on the zerotree hypothesis. The study in [12] proposes a compression algorithm based on a combination of the discrete wavelet transform (DWT) and intensity-based adaptive quantization coding (AQC). In the AQC method, the image is divided into sub-blocks, and the quantizer step of each sub-block is computed by subtracting the minimum value of the block from the maximum and dividing the result by the quantization level. In the intensity-based adaptive quantizer coding (IBAQC) reported in [12], each image sub-block is classified as a low or high intensity block based on its intensity variation. To encode a high intensity block, a large quantization level is required, depending on the desired peak signal-to-noise ratio (PSNR). On the other hand, if a pixel value in a low intensity block is less than a threshold, the value is encoded without quantization; otherwise, it is quantized with a smaller quantization level. In the composite DWT-IBAQC method, IBAQC is applied to the DWT coefficients of the image. Since the whole energy of the image is carried by only a few wavelet (DWT) coefficients, IBAQC is used to encode only the coarse (low pass) wavelet coefficients [12].
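The AQC quantizer-step computation described above can be sketched as follows. The sub-block values, the choice of eight quantization levels, and the helper name `aqc_quantizer_step` are illustrative assumptions, not details taken from [12]:

```python
def aqc_quantizer_step(block, levels):
    """Quantizer step for one sub-block: (max - min) / quantization level."""
    return (max(block) - min(block)) / levels

block = [10, 12, 50, 47]             # one flattened image sub-block (assumed values)
step = aqc_quantizer_step(block, 8)  # (50 - 10) / 8 = 5.0
# quantize each pixel relative to the block minimum (step must be nonzero,
# i.e., the block is not of uniform intensity)
codes = [round((p - min(block)) / step) for p in block]
```

A larger number of levels shrinks the step and hence the quantization error, at the cost of more bits per code.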
Some researchers describe prediction-based lossless compression [1,3,19,20,21,22,23,24]. Moreover, combinations of the wavelet transform and the concept of prediction are presented in some studies [25,26]. In [25], the image is pre-processed by DPCM and then the wavelet transform is applied to the output of the DPCM. In [26], the image pixels are predicted by a hierarchical prediction scheme and then the wavelet transform is applied to the prediction error. Other works [5,9,11,27,28,29,30] apply various types of image transformation, pixel differencing, or simple entropy coding. An image transformation scheme known as “J bit encoding” (JBE) has been proposed in [11]. It can be noted that image transformation means rearranging the positions of the image components or pixels to make the image more amenable to compression. In that work [11], the original data are divided into two matrices, where one matrix holds the original nonzero data bytes, while the other defines the positions of the zero/nonzero bytes.
A number of research papers use the high efficiency video coding (HEVC) standard for image compression [31,32,33,34]. The work in [31] describes a lossless scheme that carries out sample-based prediction in the spatial domain. The work in [33] provides an overview of the intra coding techniques in HEVC. The authors of [32] present a collection of DPCM-based intra-prediction methods which are effective for predicting strong edges and discontinuities. The work in [34] proposes piecewise mapping functions on residual blocks computed after DPCM-based prediction for lossless coding. Besides HEVC, compression using JPEG 2000 [35,36] and graph-based transforms [37] is also reported. Moreover, the work in [5] presents a combination of a fixed-size codebook and row-column reduction coding for the lossless compression of discrete-color images. Table 1 provides a comparative study of different image compression algorithms reported in the literature.
One special type of image is the pixelated image, which is used to carry data between optical modulators and optical detectors; such a setup is known in the literature as a pixelated optical wireless communication system. Figure 1 illustrates an example of a pixelated system [38]. In such systems, a sequence of image frames is transmitted by a liquid crystal display (LCD) or an array of light emitting diodes (LEDs). A smartphone camera or an array of photodiodes with an imaging lens can be used as the optical receiver [6,7,8]. Such systems have the potential to achieve very high data rates, as there are millions of pixels on the transmitter screen. The images created on the optical transmitter must lie within the field of view (FOV) of the receiver imaging lens. Pixelated links can be used for secure data communication in banking and military applications. For instance, pixelated systems can be useful at gatherings such as shopping malls, retail stores, trade shows, galleries, and conferences, where business cards, product videos, brochures, and photos can be exchanged without an Internet connection. The storage of pixelated images may be vital for offline processing. Since data are embedded within the image pixels, pixelated images must be processed by lossless compression methods: any loss of image information may lead to loss of the embedded data. A very important feature of pixelated images is that each pixel block has a single intensity value representing a single piece of data, and this intensity changes abruptly at the transitions between pixel blocks. This feature is not exploited in existing image compression techniques. Hence, none of the above-mentioned studies are optimal for pixelated images, as the special features of these images are yet to be exploited for compression. In fact, a new compression algorithm for pixelated images has been proposed by the authors of this paper in a very recent study [39].
This new algorithm is termed edge-based transformation and entropy coding (ETEC), and it achieves a high compression ratio at moderate computation time. In the previous study [39], the ETEC method was evaluated on only four pixelated images. This paper extends the study of the ETEC method to fifty (50) different pixelated images. Moreover, a new algorithm termed prediction-based transformation and entropy coding (PTEC) is proposed to overcome the computation-time limitation of ETEC. The main contributions of this paper can be summarized as follows:
(1)
Providing a framework for ETEC method as a combination of JBE and entropy coding, and then evaluating its effectiveness for compressing a wide range of pixelated images.
(2)
Developing a new algorithm termed as PTEC by combining the aspects of hierarchical prediction approach, JBE method, and entropy coding.
(3)
Comparing the proposed ETEC and PTEC schemes with the existing compression techniques for a number of pixelated and non-pixelated standard images.
The rest of the paper is organized as follows. Section 2 describes JPEG-LS, SPIHT, Huffman coding, Arithmetic coding and other existing methods. Section 3 describes the new ETEC and PTEC methods. The results of the different image compression methods are reported in Section 4. Finally, Section 5 presents the concluding remarks.

2. Existing Image Compression Techniques

The JPEG-LS compression algorithm is suited for continuous tone images. The algorithm consists of four main parts: a fixed predictor, a bias canceller or adaptive corrector, a context modeler and an entropy coder [19]. In JPEG-LS, edge detection is performed by the “median edge detection” (MED) process [19]. JPEG-LS uses context modeling to measure the quantized gradients of the surrounding image pixels. This context modeling of the prediction error gives good results for images with texture patterns. Next, correction values are added to the prediction error, and the remaining residual error is encoded by the Golomb coding [40] scheme. SPIHT [18,41] is an advanced encoding technique based on progressive image coding. SPIHT uses a threshold and encodes the most significant bits of the transformed image, followed by increasingly fine refinement passes. This paper considers the SPIHT algorithm with a lifting-based wavelet transform using the 5/3 Le Gall wavelet filter.
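The MED predictor mentioned above has a standard closed form, sketched below with the usual JPEG-LS convention that `a` is the left neighbor, `b` the upper neighbor, and `c` the upper-left neighbor of the current pixel (the function name is ours):

```python
def med_predict(a, b, c):
    """JPEG-LS median edge detection (MED) predictor.

    a = left neighbor, b = upper neighbor, c = upper-left neighbor.
    """
    if c >= max(a, b):
        return min(a, b)   # horizontal edge above the pixel
    if c <= min(a, b):
        return max(a, b)   # vertical edge to the left of the pixel
    return a + b - c       # smooth region: planar prediction

# residual passed on to the entropy coder for a pixel of intensity 128
residual = 128 - med_predict(120, 130, 110)
```

In smooth regions the planar term a + b − c dominates, so the residuals stay small and entropy coding becomes effective.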
The differential pulse code modulation (DPCM) [42] predictor predicts the current pixel based on its neighboring pixels, as in the JPEG-LS predictor. Subtracting the predictor output from the current pixel intensity gives the prediction error e. The quantizer quantizes the error value using a suitable quantization level; in the case of lossless compression, the quantization step is unity. Next, entropy coding is performed to obtain the final bit stream. The predictor can be expressed by the following equation:
\hat{x}_s(i,j) = a \cdot I(i,j-1) + b \cdot I(i-1,j-1) + c \cdot I(i-1,j) + d \cdot I(i-1,j+1) + \cdots
where x̂_s is the predictor output, the terms a, b, c and d are constants, I is the intensity value, and (i, j) are the spatial indices of the pixels.
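A minimal sketch of lossless DPCM along one image row, using the first-order special case of the predictor above (a = 1, all other coefficients zero; the function names are ours):

```python
def dpcm_encode(row, a=1.0):
    """First-order DPCM along one row: residual e = x - round(a * previous).

    With lossless coding the residual is kept exact (unit quantization step).
    """
    prev, errors = 0, []
    for x in row:
        errors.append(x - round(a * prev))
        prev = x
    return errors

def dpcm_decode(errors, a=1.0):
    """Invert dpcm_encode: x = e + round(a * previous reconstructed pixel)."""
    prev, out = 0, []
    for e in errors:
        x = e + round(a * prev)
        out.append(x)
        prev = x
    return out

row = [100, 102, 101, 140]
e = dpcm_encode(row)   # small residuals in smooth regions, a spike at the edge
```

Because the decoder repeats the encoder's prediction exactly, the round trip is lossless.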
Arithmetic coding is an entropy coding used for lossless compression [43]. In this method, infrequently occurring symbols are encoded with more bits than frequently occurring symbols. An important feature of Arithmetic coding is that it encodes the full message into a single long number, representing the current information as a range. Huffman coding [44] is a prefix coding method that assigns variable-length codes to input symbols: the most frequently occurring symbol is assigned the shortest code in the code table.
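As an illustration of the variable-length principle behind Huffman coding (a generic textbook construction, not the exact coder used in this paper), the code length of each symbol can be derived from the symbol frequencies by repeatedly merging the two least frequent subtrees:

```python
import heapq

def huffman_code_lengths(freqs):
    """Return {symbol: code length} via the classic Huffman merge."""
    # heap entries: (frequency, tie-break counter, {symbol: depth so far})
    heap = [(f, i, {s: 0}) for i, (s, f) in enumerate(sorted(freqs.items()))]
    heapq.heapify(heap)
    counter = len(heap)
    while len(heap) > 1:
        f1, _, d1 = heapq.heappop(heap)   # two least frequent subtrees
        f2, _, d2 = heapq.heappop(heap)
        # merging pushes every symbol in both subtrees one level deeper
        merged = {s: depth + 1 for s, depth in {**d1, **d2}.items()}
        heapq.heappush(heap, (f1 + f2, counter, merged))
        counter += 1
    return heap[0][2]
```

For frequencies a:5, b:2, c:1, d:1, the most frequent symbol gets a 1-bit code and the rarest symbols get 3-bit codes, matching the prefix-coding principle stated above.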

3. Proposed Algorithms

This section describes the recently proposed ETEC method and then proposes the PTEC method.

3.1. ETEC

The study of ETEC is extended in this paper with a detailed analysis of the ETEC algorithm. As mentioned in Section 1, each pixel block of a pixelated image carries a single intensity value, i.e., a single piece of data. The pixel blocks have abrupt transitions and thus many directional edges. The ETEC method can be described in three steps. In the first step, this special feature of pixelated images is used to calculate a residual error ε by using the following intensity gradient:
\nabla I = \begin{bmatrix} g_x \\ g_y \end{bmatrix} = \begin{bmatrix} \partial I / \partial x \\ \partial I / \partial y \end{bmatrix}
where ∂I/∂x is the derivative with respect to the x direction, ∂I/∂y is the derivative with respect to the y direction, I is the intensity value, and (x, y) are the spatial indices of the pixels. The maximum change of gradient between two coordinates indicates the presence of an edge in either the vertical or the horizontal direction.
The edge pixels are responsible for the increase in the level of the residual error ε. It can be noted that, in the presence of vertical edges, the value of ε can be reduced by taking the vertical intensity gradient; similarly, in the presence of horizontal edges, the value of ε can be reduced by taking the horizontal intensity gradient. In order to detect a strong edge, a threshold Th is applied to the residual error of the previous neighbors. If the previous residual error is greater than the threshold Th, then the present pixel I(x, y) is considered to be on an edge, and the direction of the gradient is changed. This can be mathematically described as:
if ε = ∂I/∂x > Th, then ε = ∂I/∂y,
and if ε = ∂I/∂y > Th, then ε = ∂I/∂x.
As long as the previous residual error is less than the threshold, i.e., ε < Th, the scanning direction remains the same. After the whole scan, ε has lower entropy than the original image.
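A minimal sketch of one possible reading of step 1, assuming a raster scan that takes horizontal differences and switches to the vertical difference whenever the previous residual magnitude exceeds Th; the function name, boundary handling, and the exact switching rule are our own assumptions, not the authors' reference implementation:

```python
def etec_residual(img, th=20):
    """Residual matrix from directional intensity differences (a sketch)."""
    rows, cols = len(img), len(img[0])
    err = [[0] * cols for _ in range(rows)]
    prev = 0
    for i in range(rows):
        for j in range(cols):
            horiz = img[i][j] - (img[i][j - 1] if j > 0 else 0)
            vert = img[i][j] - (img[i - 1][j] if i > 0 else 0)
            # a strong edge was seen on the previous step: change direction
            err[i][j] = vert if abs(prev) > th else horiz
            prev = err[i][j]
    return err

# a tiny blocky "pixelated" image: two 2x2-ish blocks of uniform intensity
blocks = [[10, 10, 50, 50],
          [10, 10, 50, 50]]
err = etec_residual(blocks)
```

Inside uniform blocks the residual is zero, so most of the entropy concentrates at block transitions, which is what makes the subsequent entropy coding effective.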
In the second step of the ETEC method, two matrices A and B are generated to encode ε. The dimension of matrix A is X × Y. The possible values of matrix A are 0, 1 or 2, depending on the value of ε. Matrix A is assigned a value of 0 where ε(x, y) = 0, and values of 1 and 2 where ε(x, y) is greater than and less than 0, respectively. Matrix B, on the other hand, is assigned the absolute value of ε(x, y), except where ε(x, y) = 0. After assigning the values of the two matrices, run-length coding [45] is applied to A; this coding targets the values with the longest runs. This transformation manipulates the data to reduce their size and to optimize the input of the subsequent algorithm. Figure 2 shows the block diagram of step 2 of the ETEC method.
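The matrix construction of step 2 can be sketched as follows; the helper names `split_sign_magnitude` and `run_length` are ours:

```python
def split_sign_magnitude(err):
    """Split a residual matrix into the ternary sign map A
    (0 = zero, 1 = positive, 2 = negative) and the magnitude list B."""
    A, B = [], []
    for row in err:
        for e in row:
            if e == 0:
                A.append(0)
            elif e > 0:
                A.append(1)
                B.append(e)
            else:
                A.append(2)
                B.append(-e)
    return A, B

def run_length(seq):
    """Run-length code: list of [value, run count] pairs."""
    out = []
    for s in seq:
        if out and out[-1][0] == s:
            out[-1][1] += 1
        else:
            out.append([s, 1])
    return out

A, B = split_sign_magnitude([[0, 3, -2, 0, 0]])
```

Because pixelated residuals are mostly zero, A is dominated by long zero runs, which run-length coding compresses well before the final entropy-coding stage.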
In the third step of ETEC, Huffman or Arithmetic coding is applied to matrices A and B. As in other image compression methods, the decompression process in ETEC is simply the reverse of compression. Figure 3 shows the flowchart of the proposed ETEC algorithm.

3.2. PTEC

The main purpose of the proposed PTEC algorithm is to optimize the compression ratio and computation time for pixelated images as well as for other continuous tone images. In the case of a gray scale image, the signal variation is generally much smaller than that of a color image, but the intensity variation is still large near the edges. For more accurate prediction of these signals and for accurate modeling of the prediction error, a hierarchical prediction scheme is used in PTEC. The method is described here for the case where an image is divided into four subimages. At first, the gray scale image is decomposed into two subimages: a set of even-numbered rows and a set of odd-numbered rows. Figure 4 and Figure 5 show the hierarchical decomposition of the input image X_0. The input image is separated into an even subimage X_e, formed by gathering all even rows, and an odd subimage X_o, formed from all odd rows. Each subimage is further divided into two subimages based on the even and odd columns. Then X_ee is encoded and used to predict the pixels in X_eo; X_ee is also used to estimate the statistics of the prediction errors of X_eo. After encoding, X_ee and X_eo are used to predict the pixels in X_oe. Finally, the three subimages X_ee, X_eo and X_oe are used to predict the subimage X_oo. As the number of subimages used to predict a given subimage increases, the prediction error tends to decrease. To predict the pixels of the last subimage X_oo, a maximum of eight adjacent neighbors is used, as is evident from Figure 5. It can be noted that if the original image were divided into eight or more subimages instead of four, the complexity and computation time would increase.
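The four-subimage decomposition above can be sketched with simple slicing, assuming 0-based indexing (so rows 0, 2, … play the role of the "even" rows; the function name is ours):

```python
def split4(X):
    """Split an image (list of rows) into the four hierarchical subimages."""
    Xe, Xo = X[0::2], X[1::2]            # even rows / odd rows
    Xee = [row[0::2] for row in Xe]      # even columns of the even rows
    Xeo = [row[1::2] for row in Xe]      # odd columns of the even rows
    Xoe = [row[0::2] for row in Xo]
    Xoo = [row[1::2] for row in Xo]
    return Xee, Xeo, Xoe, Xoo

X = [[0, 1, 2, 3],
     [4, 5, 6, 7],
     [8, 9, 10, 11],
     [12, 13, 14, 15]]
Xee, Xeo, Xoe, Xoo = split4(X)
```

Each subimage is a quarter of the original, which is why the hierarchical stage works on matrices one quarter of the input size.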
Suppose the image is scanned in raster-scan order; then the predictor is always based on its past causal neighbors (the “context”). Figure 6 shows the order of the causal neighbors. The current pixels of the subimage X_ee are predicted based on the causal neighbors. A reasonable assumption made for this subimage source is the Nth-order Markovian property, meaning that the N nearest causal neighbors are required to predict a pixel. The current pixel X(n) is then predicted as follows:
\hat{X}(n) = \sum_{k=1}^{N} a(k) \, X(n-k)
where a(k) are the prediction coefficients, and X(n−k) are the neighbors of X(n). For the prediction of the X_eo pixels using X_ee, directional prediction is adopted to avoid large prediction errors near edges. For each pixel X_eo(i, j) in X_eo, the horizontal predictor X̂_h(i, j) and the vertical predictor X̂_v(i, j) are defined as shown in the following. Both X̂_h(i, j) and X̂_v(i, j) are determined by averaging two different predictions. First, consider the case of X̂_h(i, j). The first prediction value, X̂_h1(i, j), is expressed as
\hat{X}_{h1}(i,j) = X_{eo}(i,j-1) + \mathrm{round}\left\{ \frac{X_{eo}(i-1,j-1) - X_{eo}(i-1,j)}{2} \right\}
The second prediction value, X ^ h 2 ( i , j ) , is expressed as
\hat{X}_{h2}(i,j) = \mathrm{round}\left\{ \frac{X_{ee}(i,j) + X_{ee}(i,j+1)}{2} \right\}
Now, the term X ^ h ( i , j ) is determined using the average of X ^ h 1 ( i , j ) and X ^ h 2 ( i , j ) as follows:
\hat{X}_{h}(i,j) = \mathrm{round}\left\{ \frac{\hat{X}_{h1}(i,j) + \hat{X}_{h2}(i,j)}{2} \right\}
Similarly, the term X ^ v ( i , j ) can be expressed as follows:
\hat{X}_{v}(i,j) = X_{eo}(i-1,j) + \mathrm{round}\left\{ \frac{\left(X_{ee}(i-1,j) - X_{ee}(i,j)\right) + \left(X_{ee}(i-1,j+1) - X_{ee}(i,j+1)\right)}{4} \right\}
Among these, one is selected as the predictor for X_eo(i, j) from Equations (10) and (11). With these two possible predictors, the most common approach to encoding is mode selection, where the better predictor is selected for each pixel; the selection depends on the vertical and horizontal edges. If |X_eo(i, j) − X̂_h(i, j)| is smaller than |X_eo(i, j) − X̂_v(i, j)|, the horizontal edge is stronger than the vertical edge; otherwise, the vertical edge is stronger. For the prediction of X_oe using X_ee and X_eo, the vertical and horizontal edges as well as the diagonal edges can be suitably predicted. For each pixel X_oe(i, j) in X_oe, the horizontal predictor X̂_h(i, j), the vertical predictor X̂_v(i, j), and the diagonal predictors X̂_dl(i, j) (left) and X̂_dr(i, j) (right) are defined in the following. Again, X̂_h(i, j), X̂_v(i, j), X̂_dl(i, j) and X̂_dr(i, j) are determined by taking the average of two different predictions. The term X̂_h(i, j) is determined as follows:
\hat{X}_{h}(i,j) = X_{oe}(i,j-1) + \mathrm{round}\left\{ \frac{\left(X_{ee}(i,j-1) - X_{ee}(i,j)\right) + \left(X_{ee}(i+1,j-1) - X_{ee}(i+1,j)\right)}{4} \right\}
Now, consider the case for X ^ v ( i , j ) . The first prediction value, X ^ v 1 ( i , j ) , is expressed as
\hat{X}_{v1}(i,j) = X_{ee}(i,j) + \mathrm{round}\left\{ \frac{\left(X_{eo}(i,j-1) - X_{eo}(i+1,j-1)\right) + \left(X_{eo}(i,j) - X_{eo}(i+1,j)\right)}{4} \right\}
The second prediction value, X ^ v 2 ( i , j ) , is expressed as
\hat{X}_{v2}(i,j) = \mathrm{round}\left\{ \frac{X_{ee}(i,j) + X_{ee}(i+1,j)}{2} \right\}
The term X ^ v ( i , j ) is determined using the average of X ^ v 1 ( i , j ) and X ^ v 2 ( i , j ) as follows:
\hat{X}_{v}(i,j) = \mathrm{round}\left\{ \frac{\hat{X}_{v1}(i,j) + \hat{X}_{v2}(i,j)}{2} \right\}
Now, consider the case for X ^ d r ( i , j ) . The first prediction value, X ^ d r 1 ( i , j ) , is expressed as
\hat{X}_{dr1}(i,j) = X_{eo}(i,j) + \mathrm{round}\left\{ \frac{\left(X_{ee}(i,j) - X_{ee}(i+1,j-1)\right) + \left(X_{ee}(i,j+1) - X_{ee}(i+1,j)\right)}{4} \right\}
The second prediction value, X ^ d r 2 ( i , j ) , is expressed as
\hat{X}_{dr2}(i,j) = \mathrm{round}\left\{ \frac{X_{eo}(i,j) + X_{eo}(i+1,j-1)}{2} \right\}
The term X ^ d r ( i , j ) is determined using the average of X ^ d r 1 ( i , j ) and X ^ d r 2 ( i , j ) as follows:
\hat{X}_{dr}(i,j) = \mathrm{round}\left\{ \frac{\hat{X}_{dr1}(i,j) + \hat{X}_{dr2}(i,j)}{2} \right\}
Now, consider the case for X ^ d l ( i , j ) . The first prediction value, X ^ d l 1 ( i , j ) , is expressed as
\hat{X}_{dl1}(i,j) = X_{eo}(i,j-1) + \mathrm{round}\left\{ \frac{\left(X_{ee}(i,j-1) - X_{ee}(i+1,j)\right) + \left(X_{ee}(i,j) - X_{ee}(i+1,j+1)\right)}{4} \right\}
The second prediction value, X ^ d l 2 ( i , j ) , is expressed as
\hat{X}_{dl2}(i,j) = \mathrm{round}\left\{ \frac{X_{eo}(i,j-1) + X_{eo}(i+1,j)}{2} \right\}
The term X ^ d l ( i , j ) is determined using the average of X ^ d l 1 ( i , j ) and X ^ d l 2 ( i , j ) as follows:
\hat{X}_{dl}(i,j) = \mathrm{round}\left\{ \frac{\hat{X}_{dl1}(i,j) + \hat{X}_{dl2}(i,j)}{2} \right\}
Moreover, the selection of the predictor depends on the directivity of strong edges: using Equations (12)–(21), it is possible to find an edge with a specified direction. Next, the residual error is encoded using modified J bit encoding. At the final stage, entropy coding is applied to the J bit encoded data.
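Under our reconstruction of the averaged predictors for X_eo, the mode selection for a single X_eo pixel can be sketched as follows. Here `rnd` stands in for the round{·} operator, and the selection rule (keep the predictor closer to the actual pixel, as the edge-strength comparison in the text implies) together with all helper names are our own assumptions:

```python
def rnd(x):
    """Round half away from zero, a stand-in for the paper's round{.}."""
    return int(x + 0.5) if x >= 0 else -int(-x + 0.5)

def predict_xeo(Xee, Xeo, i, j):
    """Mode-selected prediction of X_eo(i, j) from causal neighbors."""
    # horizontal predictor: average of two partial predictions
    h1 = Xeo[i][j - 1] + rnd((Xeo[i - 1][j - 1] - Xeo[i - 1][j]) / 2)
    h2 = rnd((Xee[i][j] + Xee[i][j + 1]) / 2)
    h = rnd((h1 + h2) / 2)
    # vertical predictor: previous X_eo row plus an X_ee gradient estimate
    v = Xeo[i - 1][j] + rnd(((Xee[i - 1][j] - Xee[i][j])
                             + (Xee[i - 1][j + 1] - Xee[i][j + 1])) / 4)
    # mode selection: keep whichever predictor is closer to the pixel
    x = Xeo[i][j]
    return h if abs(x - h) <= abs(x - v) else v

# smooth region: the prediction matches the constant intensity
Xee = [[10, 10, 10], [10, 10, 10]]
Xeo = [[10, 10, 10], [10, 10, 10]]
pred = predict_xeo(Xee, Xeo, 1, 1)   # → 10
```

In a smooth region both predictors agree with the pixel, so the residual handed to the JBE/entropy stage is zero.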

4. Results and Discussion

This section evaluates the performance of the ETEC and PTEC schemes for various types of images. The evaluation is done in MATLAB on a computer with an Intel Core i3-3110M 2.4 GHz processor (Intel, Shanghai, China), 4 GB RAM (Kingston, Shanghai, China), a 1 GB VGA graphics card (Intel, Shanghai, China) and the Windows 7 (32-bit) operating system (Microsoft, Shanghai, China). The intensity levels of the images range from 0 to 255, and the threshold Th is assumed to have a value of 20. This value of Th has been selected as a near-optimal value, since a high value of Th may miss some edges in the images, whereas a low value of Th may unnecessarily treat any small transition as an edge.
Figure 7 shows the 50 different pixelated images used for evaluating the compression algorithms. Some of these images were created using the MATLAB tool, and the remaining ones are available in [46,47,48,49]. Figure 7a,b contain 25 images each. These images are made of different pixel blocks, and each block has a different pixel size. Each pixel block has a uniform intensity level. In some cases, a pixelated image may have very small pixel blocks or no blocks at all (each pixel block made of one pixel only). A number of metrics, such as compression ratio, bits per pixel, saving percentage [28], and computation time, are considered for comparing the algorithms. In this study, the compression ratio is defined as the ratio of the size of the original image to that of the compressed image, and the saving percentage is the difference between the original and the compressed image sizes expressed as a percentage of the original. Mathematically, the compression ratio is C1/C2 and the saving percentage is (1 − C2/C1) × 100%, where C1 and C2 are the sizes of the original image and the compressed image, respectively. The bits per pixel parameter is obtained by dividing the compressed image size (in bits) by the number of pixels in the image. The computation time is the total time required to perform the image compression using the MATLAB tool on the computer specified earlier in this section.
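The metrics just defined translate directly into code; the Lena numbers quoted later in this section serve as a consistency check (the helper names are ours):

```python
def compression_ratio(c1, c2):
    """C1 = original size, C2 = compressed size (same units)."""
    return c1 / c2

def saving_percentage(c1, c2):
    """(1 - C2/C1) expressed as a percentage of the original size."""
    return (1 - c2 / c1) * 100

def bits_per_pixel(compressed_bits, n_pixels):
    return compressed_bits / n_pixels

# Lena with JPEG-LS (from Section 4): 2,097,152 bits -> 1,063,464 bits
cr = compression_ratio(2097152, 1063464)     # ~1.972
bpp = bits_per_pixel(1063464, 512 * 512)     # ~4.057
```

Note that compression ratio and saving percentage carry the same information: a ratio of 4 corresponds to a 75% saving.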
First, consider the compression ratio (denoted as CR) and bits/pixel parameters. In Table 2, the compression ratio and bits/pixel metrics of the proposed ETEC and PTEC techniques are compared with those of the existing JPEG-LS, SPIHT, and DPCM methods. The comparison is done for the 50 pixelated images illustrated in Figure 7. The bits per pixel values of the first 15 images are plotted in Figure 8 for the proposed and existing compression algorithms. Now consider the saving percentage and computation time. Table 3 presents the saving percentage and computation time for those 50 images. The computation time in seconds is also plotted for the first 15 images in Figure 9. It can be seen from Table 2 that, for the pixelated images, the average bits per pixel of ETEC (0.299) and PTEC (0.592) are lower (better) than those of the existing JPEG-LS (0.836), SPIHT (2.105) and DPCM (2.17). Table 2 also shows that the average compression ratios of ETEC (29.39) and PTEC (10.28) are better than those of SPIHT (3.178), JPEG-LS (9.264) and DPCM (3.09). However, the compression ratio of PTEC is not better than that of JPEG-LS for all 50 pixelated images. In particular, PTEC compresses better than JPEG-LS for pixelated images having large pixel blocks, whereas for small pixel blocks its compression performance is worse than that of JPEG-LS. This is because of the hierarchical prediction of PTEC: for small pixel block images, the prediction error for the first subimage is very high due to the high randomness of the pixel intensities, while for large pixel block images this problem is significantly reduced. Table 3 indicates that the computation time of ETEC (62.58 s) is worse than those of SPIHT (13.9 s) and DPCM (17.48 s), but better than that of JPEG-LS (526 s). Furthermore, the PTEC method has a computation time of 18.406 s, which is much better than ETEC (62.58 s) and comparable to SPIHT (13.9 s).
So, for pixelated images and for the case where both compression and computation time are important, PTEC may be more suitable than ETEC, SPIHT, JPEG-LS and DPCM.
In the following, the different compression algorithms are evaluated for standard, non-pixelated images. Figure 10 illustrates eight standard test images available in [50,51,52,53,54]. These images have a resolution of 512 × 512 pixels. All eight images are used to test the compression ratio of the different algorithms. For example, the Lena image occupies 2,097,152 bits, and it compresses to 1,063,464, 1,399,968, 1,170,220, 1,145,985 and 1,263,345 bits using JPEG-LS, SPIHT, ETEC, PTEC and DPCM, respectively. Therefore, for JPEG-LS on the Lena image, the compression ratio is 1.972 (2,097,152/1,063,464) and the bits/pixel value is 4.0567 (1,063,464/(512 × 512)). The compression ratio and bits/pixel values for the other algorithms and images can be obtained similarly; these values are summarized in Table 4, which compares the compression ratio and bits per pixel of the ETEC and PTEC techniques with those of the existing JPEG-LS, SPIHT and DPCM for non-pixelated images. Figure 11 and Figure 12 are the corresponding visual representations of Table 4 for bits per pixel and compression ratio, respectively. When the average compression ratio is considered, PTEC (2.06) is better than SPIHT (1.76), ETEC (1.93) and DPCM (1.72), but worse than JPEG-LS (2.16). Similarly, PTEC is better than SPIHT, ETEC and DPCM but worse than JPEG-LS in terms of the average bits/pixel metric. Table 5 presents the saving percentage and computation time for the compression algorithms. It can be seen from Table 5 that PTEC is better than SPIHT, ETEC and DPCM but worse than JPEG-LS in terms of the saving percentage metric. Table 5 also shows that, for the non-pixelated images, the average computation time of PTEC (74.50 s) is comparable to SPIHT (43.45 s) and DPCM (43.48 s), and better than ETEC (347.44 s) and JPEG-LS (2279.36 s). Note that PTEC has a much better computation time than ETEC.
This is because of the hierarchical approach used in PTEC, in which the computational data matrix is reduced to one quarter of the original data matrix; handling a smaller matrix requires less time than handling a larger one.
So, for non-pixelated images and for the case where both compression and computation time are important, PTEC, SPIHT and DPCM may be more suitable than ETEC and JPEG-LS.

5. Conclusions

This work describes two algorithms for the compression of images, particularly pixelated images. One algorithm, termed ETEC, has recently been conceptualized by the authors of this paper. The other is a new prediction-based algorithm termed PTEC. The ETEC and PTEC techniques are compared with the existing JPEG-LS, SPIHT, and DPCM methods in terms of compression ratio and computation time. For pixelated images, the compression ratio of PTEC is around 10.28, which is worse than ETEC (29.39) but better than JPEG-LS (9.264), SPIHT (3.178), and DPCM (3.09). In particular, for images having large pixel blocks, the PTEC method provides a much greater compression ratio than JPEG-LS. In terms of average computation time, PTEC (18.406 s) is comparable with SPIHT (13.90 s) and DPCM (17.42 s) for pixelated images, and better than JPEG-LS (526 s) and ETEC (62.58 s). The compression ratio of PTEC (2.06) for non-pixelated images is comparable with JPEG-LS (2.16), but better than SPIHT (1.76), ETEC (1.93), and DPCM (1.72). Therefore, for pixelated images, and for cases where both the compression ratio and the computation time matter, PTEC is a better choice than ETEC, JPEG-LS, SPIHT, and DPCM. Moreover, for non-pixelated images, PTEC, along with DPCM and SPIHT, is a better choice than ETEC and JPEG-LS when both compression ratio and computation time are important. Therefore, PTEC is an attractive candidate for the lossless compression of standard images, including pixelated and non-pixelated images. The proposed PTEC method may be modified in the future by applying an error correction algorithm to the prediction error caused by hierarchical prediction; the resultant values would then be encoded by JBE and an entropy coder as usual.

Author Contributions

M.A.K. performed the study under the guidance of M.R.H.M. Both M.A.K. and M.R.H.M. wrote the paper.

Funding

This research received no external funding.

Acknowledgments

This work is part of the Master's thesis of M.A.K., carried out under the supervision of M.R.H.M. and submitted to the Institute of Information and Communication Technology (IICT) of Bangladesh University of Engineering and Technology (BUET). The authors would like to thank IICT, BUET for providing research facilities.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Koc, B.; Arnavut, Z.; Kocak, H. Lossless compression of dithered images. IEEE Photonics J. 2013, 5, 6800508.
2. Jain, A.K. Image data compression: A review. Proc. IEEE 1981, 69, 349–389.
3. Kim, S.; Cho, N.I. Hierarchical prediction and context adaptive coding for lossless color image compression. IEEE Trans. Image Process. 2014, 23, 445–449.
4. Kabir, M.A.; Khan, M.A.M.; Islam, M.T.; Hossain, M.L.; Mitul, A.F. Image compression using lifting based wavelet transform coupled with SPIHT algorithm. In Proceedings of the 2nd International Conference on Informatics, Electronics & Vision, Dhaka, Bangladesh, 17–18 May 2013.
5. Alzahir, S.; Borici, A. An innovative lossless compression method for discrete-color images. IEEE Trans. Image Process. 2015, 24, 44–56.
6. Mondal, M.R.H.; Armstrong, J. Analysis of the effect of vignetting on MIMO optical wireless systems using spatial OFDM. J. Lightwave Technol. 2014, 32, 922–929.
7. Mondal, M.R.H.; Panta, K. Performance analysis of spatial OFDM for pixelated optical wireless systems. Trans. Emerg. Telecommun. Technol. 2017, 28, e2948.
8. Perli, S.D.; Ahmed, N.; Katabi, D. PixNet: Interference-free wireless links using LCD-camera pairs. In Proceedings of the 16th Annual International Conference on Mobile Computing and Networking (MOBICOM), Chicago, IL, USA, 20–24 September 2010.
9. Shantagiri, P.V.; Saravanan, K.N. Pixel size reduction loss-less image compression algorithm. Int. J. Comput. Sci. Inf. Technol. 2013, 5, 87.
10. Ambadekar, S.; Gandhi, K.; Nagaria, J.; Shah, R. Advanced data compression using J-bit Algorithm. Int. J. Sci. Res. 2015, 4, 1366–1368.
11. Suarjaya, A.D. A new algorithm for data compression optimization. Int. J. Adv. Comput. Sci. Appl. 2012, 3, 14–17.
12. Al-Azawi, S.; Boussakta, S.; Yakovlev, A. Image compression algorithms using intensity based adaptive quantization coding. Am. J. Eng. Appl. Sci. 2011, 4, 504–512.
13. Mandyam, G.; Ahmed, N.; Magotra, N. Lossless Image Compression Using the Discrete Cosine Transform. J. Vis. Commun. Image Represent. 1997, 8, 21–26.
14. Munteanu, A.; Cornelis, J.; Cristea, P. Wavelet-Based Lossless Compression of Coronary Angiographic Images. IEEE Trans. Med. Imaging 1999, 18, 272–281.
15. Taujuddin, N.S.A.M.; Ibrahim, R.; Sari, S. Progressive pixel to pixel evaluation to obtain hard and smooth region for image compression. In Proceedings of the 6th International Conference on Intelligent Systems, Modeling and Simulation, Kuala Lumpur, Malaysia, 9–12 February 2015.
16. Oh, H.; Bilgin, A.; Marcellin, M.W. Visually Lossless Encoding for JPEG2000. IEEE Trans. Image Process. 2013, 22, 189–201.
17. Yea, S.; Pearlman, W.A. A Wavelet-Based Two-Stage Near-Lossless Coder. IEEE Trans. Image Process. 2006, 15, 3488–3500.
18. Usevitch, B.E. A Tutorial on Modern Lossy Wavelet Image Compression: Foundations of JPEG 2000. IEEE Signal Process. Mag. 2001, 18, 22–35.
19. Weinberger, M.J.; Seroussi, G.; Sapiro, G. The LOCO-I Lossless Image Compression Algorithm: Principles and Standardization into JPEG-LS. IEEE Trans. Image Process. 2000, 9, 1309–1324.
20. Santos, L.; Lopez, S.; Callico, G.M.; Lopez, J.F.; Sarmiento, R. Performance Evaluation of the H.264/AVC Video Coding Standard for Lossy Hyperspectral Image Compression. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2012, 5, 451–461.
21. Al-Khafaji, G.; Rajab, M.A. Lossless and Lossy Polynomial Image Compression. IOSR J. Comput. Eng. 2016, 18, 56–62.
22. Wu, X. Lossless Compression of Continuous-Tone Images via Context Selection, Quantization, and Modeling. IEEE Trans. Image Process. 1997, 6, 656–664.
23. Said, A.; Pearlman, W.A. An Image Multiresolution Representation for Lossless and Lossy Compression. IEEE Trans. Image Process. 1996, 5, 1303–1310.
24. Li, X.; Orchard, M.T. Edge-Directed Prediction for Lossless Compression of Natural Images. IEEE Trans. Image Process. 2001, 10, 813–817.
25. Abo-Zahhad, M.; Gharieb, R.R.; Ahmed, S.M.; Abd-Ellah, M.K. Huffman Image Compression Incorporating DPCM and DWT. J. Signal Inf. Process. 2015, 6, 123–135.
26. Lohitha, P.; Ramashri, T. Color Image Compression Using Hierarchical Prediction of Pixels. Int. J. Adv. Comput. Electron. Technol. 2015, 2, 99–102.
27. Wu, H.; Sun, X.; Yang, J.; Zeng, W.; Wu, F. Lossless Compression of JPEG Coded Photo Collections. IEEE Trans. Image Process. 2016, 25, 2684–2696.
28. Kaur, M.; Garg, E.U. Lossless Text Data Compression Algorithm Using Modified Huffman Algorithm. Int. J. Adv. Res. Comput. Sci. Softw. Eng. 2015, 5, 1273–1276.
29. Rao, D.; Kamath, G.; Arpitha, K.J. Difference based Non-linear Fractal Image Compression. Int. J. Comput. Appl. 2011, 30, 41–44.
30. Oshri, E.; Shelly, N.; Mitchell, H.B. Interpolative three-level block truncation coding algorithm. Electron. Lett. 1993, 29, 1267–1268.
31. Tan, Y.H.; Yeo, C.; Li, Z. Residual DPCM for lossless coding in HEVC. In Proceedings of the 2013 IEEE International Conference on Acoustics, Speech and Signal Processing, Vancouver, BC, Canada, 26–31 May 2013; pp. 2021–2025.
32. Sanchez, V.; Aulí-Llinàs, F.; Serra-Sagristà, J. DPCM-Based Edge Prediction for Lossless Screen Content Coding in HEVC. IEEE J. Emerg. Sel. Top. Circuits Syst. 2016, 6, 497–507.
33. Lainema, J.; Bossen, F.; Han, W.J.; Min, J.; Ugur, K. Intra Coding of the HEVC Standard. IEEE Trans. Circuits Syst. Video Technol. 2012, 22, 1792–1801.
34. Sanchez, V.; Aulí-Llinàs, F.; Serra-Sagristà, J. Piecewise Mapping in HEVC Lossless Intra-Prediction Coding. IEEE Trans. Image Process. 2016, 25, 4004–4017.
35. Hernández-Cabronero, M.; Marcellin, M.W.; Blanes, I.; Serra-Sagristà, J. Lossless Compression of Color Filter Array Mosaic Images with Visualization via JPEG 2000. IEEE Trans. Multimedia 2018, 20, 257–270.
36. Taubman, D.S.; Marcellin, M.W. JPEG2000: Standard for interactive imaging. Proc. IEEE 2002, 90, 1336–1357.
37. Egilmez, H.E.; Said, A.; Chao, Y.H.; Ortega, A. Graph-based transforms for inter predicted video coding. In Proceedings of the 2015 IEEE International Conference on Image Processing (ICIP), Quebec City, QC, Canada, 27–30 September 2015; pp. 3992–3996.
38. Hranilovic, S.; Kschischang, F.R. A pixelated MIMO wireless optical communication system. IEEE J. Sel. Top. Quantum Electron. 2006, 12, 859–874.
39. Kabir, M.A.; Mondal, M.R.H. Edge-based Transformation and Entropy Coding for Lossless Image Compression. In Proceedings of the International Conference on Electrical, Computer and Communication Engineering (ECCE), Cox's Bazar, Bangladesh, 16–18 February 2017; pp. 717–722.
40. Huffman, D. A method for the construction of minimum redundancy codes. Proc. IRE 1952, 40, 1098–1101.
41. Miaou, S.-G.; Chen, S.-T.; Chao, S.-N. Wavelet-based Lossy-to-lossless Medical Image Compression using Dynamic VQ and SPIHT Coding. Biomed. Eng. Appl. Basis Commun. 2003, 15, 235–242.
42. Tomar, R.R.S.; Jain, K. Lossless Image Compression Using Differential Pulse Code Modulation and Its Application. Int. J. Signal Process. Image Process. Pattern Recognit. 2016, 9, 197–202.
43. Chen, Y.-Y.; Tai, S.-C. Embedded Medical Image Compression using DCT based Subband Decomposition and Modified SPIHT Data Organization. In Proceedings of the IEEE Symposium on Bioinformatics and Bioengineering, Taichung, Taiwan, 21 May 2004; pp. 167–175.
44. Sharma, M. Compression using Huffman Coding. Int. J. Comput. Sci. Netw. Secur. 2010, 10, 133–141.
45. Salomon, D. A Concise Introduction to Data Compression; Springer: London, UK, 2008.
46. Wallpaperswide. Available online: http://wallpaperswide.com/pixelate-wallpapers.html (accessed on 23 April 2018).
47. Freepik. Available online: https://www.freepik.com/free-photo/pixelated-image_946034.htm (accessed on 23 April 2018).
48. Famed Pixelated Paintings. Available online: https://www.trendhunter.com/trends/digitzed-classic-paintings (accessed on 23 April 2018).
49. Pixabay. Available online: https://pixabay.com/en/pattern-super-mario-pixel-art-block-1929506/ (accessed on 23 April 2018).
50. Image Processing Place. Available online: http://www.imageprocessingplace.com/root_files_V3/image_databases.htm (accessed on 23 April 2018).
51. Computational Imaging and Visual Image Processing. Available online: https://www.io.csic.es/PagsPers/JPortilla/image-processing/bls-gsm/63-test-images (accessed on 23 April 2018).
52. Wikimedia Commons: Sprgelenkli. Available online: https://commons.wikimedia.org/wiki/File:Sprgelenkli131107.jpg#filelinks (accessed on 23 April 2018).
53. Wikimedia Commons: Putamen. Available online: https://commons.wikimedia.org/wiki/File:Putamen.jpg (accessed on 23 April 2018).
54. Wikimedia Commons: MRI Glioma 28 Yr Old Male. Available online: https://commons.wikimedia.org/wiki/File:MRI_glioma_28_yr_old_male.JPG (accessed on 23 April 2018).
Figure 1. Illustration of (a) a pixelated optical wireless communication system [38] and (b) a transmitted pixelated image.
Figure 2. Modified J-bit encoding process.
Figure 3. Flowchart of the “edge-based transformation and entropy coding” (ETEC) algorithm.
Figure 4. Illustration of hierarchical decompression.
Figure 5. Input image and its decomposition.
Figure 6. Ordering of the causal neighbors.
Figure 7. Tested pixelated images [46,47,48,49].
Figure 8. Comparison of bits per pixels for pixelated images.
Figure 9. Comparison of computation time for pixelated images.
Figure 10. Standard test images: (a) lena [51]; (b) peppers [51]; (c) ankle [52]; (d) brain [53]; (e) mri_top [54]; (f) boat [51]; (g) barbara [50]; (h) house [51].
Figure 11. Comparison of bits per pixels for non-pixelated images.
Figure 12. Comparison of compression ratio for non-pixelated images.
Table 1. Technical scenarios of a few existing lossless and near-lossless image compression algorithms.
Ref. No. | Prediction Based | Wavelet Based | Pixel Difference Based | DCT | Entropy Coding | Image Encoder/Transformer | Image Type | Hierarchical Approach
[1] | Yes | No | No | No | Yes | PDT | Dithering | No
[3] | Yes | No | No | No | Yes | No | Continuous | Yes
[5] | No | No | No | No | Yes | Row-column reduction encoding | Map images | No
[9] | No | No | No | No | Yes | LZW | Continuous | No
[11] | No | No | No | No | Yes | J-bit encoding | Continuous | No
[12] | No | Yes | No | No | No | AQC | Continuous | No
[13] | No | No | Yes | Yes | Yes | No | Continuous | No
[14] | No | Yes | No | No | No | Modified EZW | Continuous | No
[15] | No | Yes | No | No | No | No | All | No
[16] | No | Yes | No | No | No | No | Color | No
[17] | No | Yes | No | No | Yes | No | Continuous | No
[19] | Yes | No | No | No | Yes | No | Continuous | No
[20] | Yes | No | No | No | No | H.264/AVC | Hyper-spectral | No
[21] | Yes | No | No | No | Yes | Taylor series | Continuous | No
[22] | Yes | No | No | No | Yes | AQC | Continuous | Yes
[23] | Yes | No | Yes | No | Yes | S+P transform | Continuous | No
[24] | Yes | No | No | No | No | LS based | Natural | No
[25] | Yes | Yes | No | No | Yes | No | Medical image | No
[26] | Yes | Yes | No | No | No | Color transform | Color | Yes
[27] | Yes | No | No | Yes | Yes | Geometric, photometric transformation | JPEG image | No
[28] | No | No | No | No | Yes | Dynamic bit reduction | Continuous | No
[29] | No | No | Block diff | No | No | No | Fractal | No
[30] | No | No | No | No | No | AQC (3-level) | Continuous | No
[31] | Yes | No | No | Yes | Yes | Residual coding | All | No
[32] | Yes | No | No | No | No | Residual coding | All | No
[33] | Yes | No | No | No | No | Residual coding | All | No
[34] | Yes | No | No | No | Yes | Residual coding | All | No
[35] | Yes | Yes | No | No | Yes | Residual coding | All | No
[36] | Yes | Yes | No | No | Yes | Embedded block coding | All | No
[37] | Yes | No | No | No | Yes | Graph-based transforms | All | No
Table 2. Comparison of bits per pixel and compression ratio for pixelated images.
Methods: JPEG-LS | Le Gall 5/3 + SPIHT (Subblock) | ETEC (Arithmetic) | PTEC | DPCM (each method: Bits/Pixel, CR)
Image No. | Bits/Pixel | CR | Bits/Pixel | CR | Bits/Pixel | CR | Bits/Pixel | CR | Bits/Pixel | CR
1 | 0.573 | 13.95 | 2.105 | 3.8 | 0.162 | 49.52 | 0.393 | 20.35 | 2.29 | 3.49
2 | 0.715 | 11.19 | 2.822 | 2.84 | 0.176 | 45.42 | 0.812 | 9.85 | 2.70 | 2.96
3 | 1.236 | 6.47 | 1.084 | 7.38 | 0.493 | 16.23 | 2.128 | 3.76 | 3.98 | 2.01
4 | 1.202 | 6.65 | 3.021 | 2.65 | 1.145 | 6.98 | 1.702 | 4.70 | 3.25 | 2.46
5 | 0.814 | 9.83 | 2.465 | 3.25 | 0.764 | 10.48 | 0.905 | 8.84 | 1.89 | 4.22
6 | 0.834 | 9.59 | 2.085 | 3.84 | 0.457 | 17.52 | 0.783 | 10.22 | 2.61 | 3.06
7 | 0.964 | 8.3 | 1.495 | 5.35 | 0.734 | 10.9 | 1.08 | 7.41 | 2.10 | 3.81
8 | 2.098 | 3.81 | 3.172 | 2.52 | 1.527 | 5.24 | 2.241 | 3.57 | 4.14 | 1.93
9 | 1.744 | 4.58 | 2.846 | 2.81 | 1.271 | 6.29 | 1.956 | 4.09 | 3.70 | 2.16
10 | 1.618 | 4.94 | 2.555 | 3.13 | 1.36 | 5.88 | 1.831 | 4.37 | 3.52 | 2.27
11 | 0.282 | 28.4 | 2.462 | 3.25 | 0.046 | 175 | 0.166 | 48.21 | 1.39 | 5.72
12 | 0.297 | 26.98 | 5.01 | 1.59 | 0.027 | 297 | 0.183 | 43.65 | 2.11 | 3.79
13 | 0.879 | 9.09 | 2.017 | 3.97 | 0.385 | 20.77 | 0.843 | 9.49 | 0.96 | 8.31
14 | 1.055 | 7.58 | 3.724 | 2.15 | 0.241 | 33.24 | 0.502 | 15.94 | 2.56 | 3.12
15 | 0.836 | 9.56 | 2.105 | 3.8 | 0.299 | 26.73 | 0.592 | 13.52 | 2.17 | 3.68
16 | 1.612 | 4.96 | 2.695 | 2.97 | 1.703 | 4.70 | 1.751 | 4.57 | 2.465 | 3.25
17 | 2.279 | 3.51 | 4.662 | 1.72 | 2.396 | 3.34 | 2.312 | 3.46 | 3.714 | 2.15
18 | 2.265 | 3.53 | 3.295 | 2.43 | 2.209 | 3.62 | 2.281 | 3.51 | 4.020 | 1.99
19 | 1.723 | 4.64 | 2.536 | 3.15 | 1.509 | 5.30 | 1.699 | 4.71 | 3.173 | 2.52
20 | 2.838 | 2.82 | 3.669 | 2.18 | 2.422 | 3.30 | 2.689 | 2.98 | 4.348 | 1.84
21 | 0.713 | 11.23 | 2.172 | 3.68 | 0.713 | 11.23 | 1.117 | 7.16 | 3.030 | 2.64
22 | 0.094 | 85.47 | 1.623 | 4.93 | 0.094 | 85.47 | 0.305 | 26.24 | 1.774 | 4.51
23 | 0.778 | 10.28 | 2.462 | 3.25 | 0.254 | 31.51 | 0.783 | 10.22 | 3.237 | 2.47
24 | 1.588 | 5.04 | 3.827 | 2.09 | 1.865 | 4.29 | 1.599 | 5.00 | 3.030 | 2.64
25 | 1.539 | 5.20 | 2.963 | 2.70 | 1.594 | 5.02 | 1.597 | 5.01 | 1.770 | 4.52
26 | 1.398 | 5.72 | 2.669 | 3.00 | 0.948 | 8.44 | 1.377 | 5.81 | 2.180 | 3.67
27 | 1.590 | 5.03 | 3.499 | 2.29 | 1.608 | 4.98 | 1.594 | 5.02 | 1.932 | 4.14
28 | 0.683 | 11.72 | 2.492 | 3.21 | 0.087 | 92.28 | 0.914 | 8.75 | 1.946 | 4.11
29 | 1.750 | 4.57 | 2.724 | 2.94 | 1.264 | 6.33 | 1.729 | 4.63 | 3.721 | 2.15
30 | 1.678 | 4.77 | 2.617 | 3.06 | 1.376 | 5.81 | 1.633 | 4.90 | 3.540 | 2.26
31 | 1.468 | 5.45 | 2.006 | 3.99 | 1.931 | 4.14 | 1.882 | 4.25 | 4.494 | 1.78
32 | 2.900 | 2.76 | 3.724 | 2.15 | 3.067 | 2.61 | 3.030 | 2.64 | 4.571 | 1.75
33 | 0.422 | 18.95 | 1.868 | 4.28 | 0.025 | 326.28 | 0.086 | 92.55 | 1.460 | 5.48
34 | 1.257 | 6.36 | 2.346 | 3.41 | 0.766 | 10.45 | 1.192 | 6.71 | 2.606 | 3.07
35 | 0.876 | 9.14 | 2.075 | 3.86 | 0.803 | 9.96 | 0.872 | 9.17 | 1.843 | 4.34
36 | 1.198 | 6.68 | 2.271 | 3.52 | 1.118 | 7.16 | 1.236 | 6.47 | 2.410 | 3.32
37 | 1.586 | 5.04 | 2.759 | 2.90 | 1.630 | 4.91 | 1.590 | 5.03 | 3.053 | 2.62
38 | 1.714 | 4.67 | 2.756 | 2.90 | 3.101 | 2.58 | 2.920 | 2.74 | 4.908 | 1.63
39 | 1.965 | 4.07 | 2.985 | 2.68 | 1.889 | 4.23 | 1.961 | 4.08 | 4.020 | 1.99
40 | 1.701 | 4.70 | 3.105 | 2.58 | 1.727 | 4.63 | 1.766 | 4.53 | 3.478 | 2.30
41 | 0.990 | 8.08 | 2.466 | 3.24 | 0.847 | 9.44 | 0.986 | 8.11 | 2.540 | 3.15
42 | 1.510 | 5.30 | 2.630 | 3.04 | 1.180 | 6.78 | 1.594 | 5.02 | 3.376 | 2.37
43 | 1.078 | 7.42 | 2.525 | 3.17 | 0.742 | 10.79 | 1.077 | 7.43 | 2.508 | 3.19
44 | 1.135 | 7.05 | 2.522 | 3.17 | 0.832 | 9.62 | 1.105 | 7.24 | 2.581 | 3.10
45 | 0.988 | 8.09 | 2.298 | 3.48 | 0.608 | 13.16 | 0.974 | 8.21 | 2.410 | 3.32
46 | 0.944 | 8.48 | 2.321 | 3.45 | 0.601 | 13.31 | 0.905 | 8.84 | 2.332 | 3.43
47 | 1.560 | 5.13 | 3.189 | 2.51 | 1.242 | 6.44 | 1.610 | 4.97 | 3.463 | 2.31
48 | 1.428 | 5.60 | 2.889 | 2.77 | 1.141 | 7.01 | 1.518 | 5.27 | 3.125 | 2.56
49 | 1.792 | 4.46 | 3.197 | 2.50 | 1.924 | 4.16 | 1.900 | 4.21 | 3.419 | 2.34
50 | 1.261 | 6.34 | 2.362 | 3.39 | 0.903 | 8.86 | 1.252 | 6.39 | 2.888 | 2.77
Average | 0.836 | 9.264 | 2.105 | 3.178 | 0.299 | 29.39 | 0.592 | 10.28 | 2.17 | 3.09
Table 3. Comparison of percentage saving and computation time for pixelated images.
Methods: JPEG-LS | Le Gall 5/3 + SPIHT (Subblock) | ETEC (Arithmetic) | PTEC | DPCM (each method: Saving %, Time (s))
Image No. | Saving % | Time (s) | Saving % | Time (s) | Saving % | Time (s) | Saving % | Time (s) | Saving % | Time (s)
1 | 92.83 | 295 | 73.68 | 20.8 | 97.98 | 134.5 | 95.09 | 22.93 | 71.33 | 31.5
2 | 91.06 | 309 | 64.79 | 16.7 | 97.8 | 133.65 | 89.85 | 28.35 | 66.23 | 35.05
3 | 84.54 | 495 | 86.45 | 38.5 | 93.84 | 126.74 | 73.40 | 49.21 | 50.24 | 48.23
4 | 84.96 | 751 | 62.26 | 18.72 | 85.69 | 92.12 | 78.72 | 30.42 | 59.39 | 29.26
5 | 89.83 | 389 | 69.23 | 17.52 | 90.46 | 78.63 | 88.69 | 20.82 | 76.32 | 19.64
6 | 89.57 | 501 | 73.96 | 14.94 | 94.29 | 74.13 | 90.22 | 19.31 | 67.32 | 25.25
7 | 87.95 | 405 | 81.31 | 7.25 | 90.83 | 43.01 | 86.50 | 17.92 | 73.76 | 17.44
8 | 73.75 | 235 | 60.32 | 3.55 | 80.91 | 3.76 | 71.99 | 9.94 | 48.18 | 9.34
9 | 78.2 | 246 | 64.41 | 3.61 | 84.12 | 3.66 | 75.55 | 7.02 | 53.81 | 8.66
10 | 79.76 | 247 | 68.05 | 3.40 | 83.00 | 3.57 | 77.12 | 8.23 | 56.00 | 7.95
11 | 96.48 | 114 | 69.23 | 2.34 | 99.43 | 16.18 | 97.93 | 5.53 | 82.51 | 9.17
12 | 96.29 | 92 | 37.3 | 7.44 | 99.66 | 16.03 | 97.71 | 5.57 | 73.63 | 12.75
13 | 89.01 | 840 | 74.81 | 24.84 | 95.19 | 164.5 | 89.46 | 15.01 | 87.97 | 7.26
14 | 86.82 | 862 | 53.49 | 34.5 | 96.99 | 221.58 | 93.73 | 44.17 | 67.95 | 38.56
15 | 91.47 | 407 | 68.85 | 21.22 | 98.92 | 93.44 | 92.60 | 3.14 | 72.84 | 45.29
16 | 79.84 | 788 | 66.31 | 18.71 | 78.71 | 98.08 | 78.12 | 20.82 | 69.19 | 15.67
17 | 71.51 | 779 | 41.72 | 25.15 | 70.05 | 102.80 | 71.10 | 35.04 | 53.57 | 28.52
18 | 71.69 | 180 | 58.81 | 2.49 | 72.39 | 2.77 | 71.49 | 4.84 | 49.75 | 4.23
19 | 78.47 | 196 | 68.29 | 2.76 | 81.14 | 2.70 | 78.77 | 5.09 | 60.33 | 4.35
20 | 64.53 | 277 | 54.14 | 3.79 | 69.73 | 4.68 | 66.39 | 5.61 | 82.17 | 3.60
21 | 85.23 | 149 | 72.85 | 2.17 | 86.03 | 1.26 | 86.03 | 3.89 | 62.12 | 2.78
22 | 94.85 | 625 | 79.71 | 25.72 | 98.83 | 255.55 | 96.19 | 39.48 | 77.83 | 33.47
23 | 90.27 | 360 | 69.22 | 11.10 | 96.83 | 31.93 | 90.21 | 12.72 | 59.54 | 9.52
24 | 80.16 | 370 | 52.16 | 11.24 | 76.68 | 23.60 | 80.02 | 5.57 | 62.12 | 3.75
25 | 82.52 | 833 | 66.63 | 18.71 | 88.15 | 95.19 | 82.79 | 25.75 | 72.75 | 22.43
26 | 80.76 | 1130 | 62.96 | 24.16 | 80.08 | 137.40 | 80.04 | 38.19 | 77.88 | 31.36
27 | 80.12 | 1152 | 56.27 | 27.60 | 79.90 | 142.31 | 80.08 | 39.71 | 75.85 | 35.09
28 | 91.46 | 407 | 68.85 | 21.22 | 98.92 | 93.44 | 88.57 | 27.05 | 75.67 | 23.68
29 | 78.13 | 234 | 65.96 | 3.21 | 84.20 | 3.40 | 78.39 | 5.47 | 53.49 | 3.39
30 | 81.64 | 291 | 74.92 | 4.89 | 75.86 | 4.64 | 76.47 | 7.52 | 43.82 | 23.5
31 | 63.75 | 326 | 53.45 | 3.86 | 61.66 | 8.09 | 62.12 | 6.85 | 42.86 | 31.52
32 | 79.02 | 173 | 67.29 | 2.43 | 82.80 | 2.41 | 79.59 | 4.09 | 55.75 | 6.15
33 | 94.72 | 118 | 76.65 | 6.98 | 99.69 | 16.48 | 98.92 | 11.32 | 81.75 | 5.43
34 | 84.28 | 1600 | 70.68 | 35.25 | 90.43 | 27.98 | 85.10 | 43.52 | 67.43 | 11.27
35 | 89.05 | 784 | 74.07 | 22.64 | 89.96 | 121.54 | 89.09 | 29.46 | 76.96 | 18.32
36 | 85.03 | 170 | 71.61 | 3.32 | 86.03 | 3.34 | 84.54 | 5.09 | 69.88 | 3.05
37 | 80.18 | 118 | 65.51 | 1.92 | 79.63 | 2.18 | 80.12 | 3.71 | 61.83 | 2.54
38 | 78.57 | 174 | 65.55 | 1.78 | 61.24 | 2.66 | 63.50 | 3.15 | 38.65 | 2.71
39 | 75.44 | 567 | 62.69 | 6.93 | 76.38 | 14.65 | 75.49 | 9.37 | 49.75 | 8.52
40 | 78.74 | 723 | 61.19 | 10.65 | 78.41 | 22.53 | 77.92 | 15.80 | 56.52 | 12.63
41 | 87.62 | 390 | 69.18 | 9.91 | 89.41 | 21.68 | 87.67 | 15.24 | 68.25 | 13.04
42 | 81.13 | 275 | 67.13 | 3.63 | 85.25 | 4.00 | 80.08 | 5.40 | 57.81 | 3.96
43 | 86.53 | 966 | 68.43 | 23.45 | 90.73 | 65.22 | 86.54 | 29.43 | 68.65 | 23.79
44 | 85.81 | 914 | 68.47 | 24.47 | 89.60 | 121.80 | 86.19 | 32.26 | 67.74 | 25.64
45 | 87.65 | 922 | 71.28 | 22.10 | 92.40 | 131.33 | 87.82 | 32.04 | 69.88 | 25.54
46 | 88.21 | 859 | 70.99 | 22.34 | 92.49 | 133.39 | 88.69 | 31.47 | 70.85 | 24.98
47 | 80.50 | 602 | 60.14 | 9.10 | 84.47 | 18.17 | 79.88 | 13.85 | 56.71 | 11.07
48 | 82.15 | 673 | 63.89 | 11.30 | 85.73 | 24.07 | 81.02 | 18.90 | 60.94 | 14.00
49 | 77.60 | 1254 | 60.03 | 20.83 | 75.95 | 119.82 | 76.25 | 27.27 | 57.26 | 21.77
50 | 84.24 | 717 | 70.47 | 13.68 | 88.71 | 62.66 | 84.35 | 17.86 | 63.90 | 14.45
Average | 83.48 | 526 | 66.11 | 13.90 | 86.15 | 62.58 | 82.76 | 18.406 | 64.54 | 17.42
Table 4. Comparison of bits per pixel and compression ratio for non-pixelated images.
Methods: JPEG-LS | Le Gall 5/3 + SPIHT (Subblock) | ETEC (Arithmetic) | PTEC | DPCM (each method: Bits/Pixel, CR)
Image Name | Bits/Pixel | CR | Bits/Pixel | CR | Bits/Pixel | CR | Bits/Pixel | CR | Bits/Pixel | CR
lena | 4.0567 | 1.972 | 5.3404 | 1.498 | 4.464 | 1.792 | 4.364 | 1.83 | 4.81 | 1.66
peppers | 4.5050 | 1.775 | 5.1543 | 1.552 | 4.998 | 1.601 | 4.704 | 1.70 | 4.84 | 1.65
ankle | 2.93 | 2.73 | 3.704 | 2.16 | 3.265 | 2.45 | 2.694 | 2.97 | 3.940 | 2.03
brain | 2.52 | 3.17 | 3.54 | 2.26 | 3.019 | 2.65 | 2.74 | 2.92 | 3.791 | 2.11
mri_top | 3.60 | 2.22 | 4.372 | 1.83 | 3.922 | 2.04 | 3.774 | 2.12 | 4.678 | 1.71
boat | 4.8618 | 1.645 | 5.4581 | 1.465 | 5.15 | 1.553 | 5.067 | 1.58 | 5.35 | 1.49
barbara | 4.8280 | 1.657 | 5.3452 | 1.496 | 5.559 | 1.439 | 5.402 | 1.48 | 5.74 | 1.39
house | 3.8535 | 2.076 | 4.4817 | 1.785 | 4.176 | 1.915 | 4.227 | 1.89 | 4.64 | 1.72
Average | 3.89 | 2.16 | 4.67 | 1.76 | 4.32 | 1.93 | 4.12 | 2.06 | 4.72 | 1.72
Table 5. Comparison of percentage saving and computation time for non-pixelated images.
Methods: JPEG-LS | Le Gall 5/3 + SPIHT (Subblock) | ETEC (Arithmetic) | PTEC | DPCM (each method: Saving %, Time (s))
Image Name | Saving % | Time (s) | Saving % | Time (s) | Saving % | Time (s) | Saving % | Time (s) | Saving % | Time (s)
lena | 49.29 | 2934 | 33.24 | 59.7 | 44.2 | 330.26 | 45.451 | 97.16 | 39.92 | 53.70
peppers | 43.69 | 1705 | 35.57 | 76.27 | 37.527 | 390.89 | 41.197 | 104.6 | 39.52 | 54.87
ankle | 63.37 | 1548.7 | 53.70 | 22.13 | 59.18 | 398.25 | 66.33 | 37.63 | 50.74 | 27.29
brain | 68.45 | 1761.67 | 55.75 | 36.62 | 62.26 | 500.39 | 65.75 | 61.43 | 52.61 | 29.94
mri_top | 54.95 | 1941.48 | 45.36 | 34.25 | 50.98 | 393.27 | 52.83 | 62.12 | 41.52 | 33.76
boat | 39.23 | 3352 | 31.77 | 58.19 | 35.625 | 397 | 36.665 | 106.3 | 33.03 | 60.86
barbara | 39.65 | 4037 | 33.18 | 50.70 | 30.517 | 355.47 | 32.473 | 109.1 | 28.22 | 72.41
house | 51.83 | 955 | 43.98 | 9.7 | 47.794 | 14.13 | 47.163 | 17.6 | 41.96 | 15.01
Average | 51.30 | 2279.36 | 41.57 | 43.45 | 46.01 | 347.44 | 48.48 | 74.50 | 40.85 | 43.48
