Article

Fast Linde–Buzo–Gray (FLBG) Algorithm for Image Compression through Rescaling Using Bilinear Interpolation

1 Department of Information Engineering Technology, University of Technology, Nowshera 24170, Pakistan
2 Department of Electrical and Computer Engineering, Pak-Austria Fachhochschule: Institute of Applied Sciences & Technology, Mang Haripur 22621, Pakistan
3 Institut d’Informatica i Aplicacions, Universitat de Girona, 17003 Girona, Spain
4 Department of Electrical and Electronic Engineering, Daffodil International University, Dhaka 1216, Bangladesh
* Author to whom correspondence should be addressed.
J. Imaging 2024, 10(5), 124; https://doi.org/10.3390/jimaging10050124
Submission received: 29 March 2024 / Revised: 5 May 2024 / Accepted: 15 May 2024 / Published: 20 May 2024
(This article belongs to the Special Issue Image Processing and Computer Vision: Algorithms and Applications)

Abstract
Vector quantization (VQ) is a block coding method that is known for its high compression ratio and simple encoder and decoder implementation. Linde–Buzo–Gray (LBG) is a renowned VQ technique that uses a clustering-based approach to find the optimum codebook. Numerous algorithms, such as Particle Swarm Optimization (PSO), the Cuckoo Search algorithm (CS), the bat algorithm, and the firefly algorithm (FA), have been used for codebook design. These algorithms focus primarily on improving the image quality in terms of the PSNR and SSIM, but they rely on exhaustive searching to find the optimum codebook, which makes their computational time very high. Our algorithm enhances LBG by reducing the total number of comparisons between the codebook and the training vectors using a match function, thereby lowering the computational complexity. At the encoder, the input image forms the training vectors, and the initial codebook is generated by randomly selecting vectors from the input image. Rescaling using bilinear interpolation through the nearest-neighborhood method is performed to reduce the number of comparisons between the codebook and the training vectors. The image is first downsized by the encoder and then upscaled at the decoder side during decompression. The results demonstrate that the proposed method reduces the computational complexity by 50.2% compared to LBG and by more than 97% compared to the other LBG-based algorithms. Moreover, a 20% reduction in the memory size is also obtained, with no significant loss in image quality compared to the LBG algorithm.

1. Introduction

Images are important representations of objects and are used in many applications, such as digital cameras, satellite and medical imaging, and computer storage of pictures. Commonly, sampling, quantization, and encoding are performed on a 2D analog signal to generate a digital image, which produces a large volume of data that is impractical to store and transmit. It is therefore necessary to compress images for practical storage and transmission. Real-time transmission places restrictions on image compression techniques, as it requires fast buffering and low computational complexity [1]. Conversely, compressing images for storage in memory imposes no such restrictions, because the algorithms are executed in non-real time and no buffers are needed for the communication channel [2]. Image compression can be categorized into two main types: lossless compression, which incurs no loss of image quality and is used in applications where no loss is tolerable, such as medical imaging, scientific research, and satellite imaging; and lossy compression, which tolerates a loss in quality and is suitable for applications such as video streaming, web publishing, and social media [3]. The goal of compressing images is to reduce their storage requirement.
The lossless image compression algorithm formulated by Garcia et al. [4] is a well-known technique that uses Huffman coding [5] (hierarchical encoding) to assign variable-length codes, in which more bits are assigned to less frequently occurring data and fewer bits to more frequently occurring data. The Huffman coding algorithm is adopted in many standards, including the JPEG (Joint Photographic Experts Group) standard [6]. However, it is efficient only if the probabilities of the data are provided and the data are encoded integrally. As histograms vary from image to image, it is not certain that the Huffman algorithm performs optimally. Another well-known lossless image compression technique is arithmetic coding [7], which, like the Huffman algorithm, uses variable-length coding. It is based on reducing the redundant codes present in the image data and performs efficiently if the probability of occurrence of each symbol is known. Arithmetic coding works on the principle of generating an interval for every symbol, calculated through cumulative probabilities. It assigns intervals to symbols from high to low probability and rescales the remaining intervals until all the symbols have been processed. It is an error-sensitive technique; a one-bit error can corrupt the entire image [8]. Lossless predictive coding is commonly applied in image coding to eliminate inter-pixel redundancy [9], ensuring accurate reconstruction without loss of data. It predicts the current pixel value from the neighboring pixels and encodes the difference between the predicted and original pixel values.
On the other hand, lossy image compression techniques outperform lossless techniques in terms of compression [10]. There are many applications in which some loss in the image is tolerable, and lossy compression is preferred for its ability to lower the bit rate as desired by the application. Various lossy schemes have been proposed, including predictive coding [11], which makes predictions using neighboring pixels and quantizes the error signal obtained from the predicted and actual pixel values, as shown in Figure 1.
If the prediction is accurate, the encoding will produce high compression. Adding many pixels to the prediction process directly increases the computational cost, and it has been observed that using more than three previous pixels yields no substantial improvement in compression, as seen in algorithms such as the JPEG (Joint Photographic Experts Group) standard. Transform coding is extensively employed for image compression; it converts the image from one domain to another, resulting in densely packed coefficients [12]. Some coefficients exhibit high energy, while others carry little energy. Transform coding techniques aim to pack the information efficiently into a limited number of coefficients, with a quantization step used to discard coefficients containing minimal information. The Karhunen–Loeve Transform (KLT) [13] is another compression technique that represents the image in a low-dimensional subspace. It constructs correlation matrices from the original image, from which orthogonal basis vectors are computed, and the original image is then represented as a linear combination of these vectors. The KLT is not an ideal coding scheme because of its image-dependent basis and high computational cost. Another important lossy technique is the discrete cosine transform (DCT), an orthogonal transform that maps the image to the frequency domain. Such a representation is relatively compact and handles discontinuities at block boundaries better than the Discrete Fourier Transform [14]. The DCT is adopted in many standards, including the JPEG. The JPEG effort was initiated in 1987 and became an international standard of the International Standards Organization (ISO) in 1992 [15]. The standard contains four modes: hierarchical, progressive, baseline, and lossless. The baseline mode is the default and the most widely adopted. It follows a two-step model: first, the DCT is applied and a quantization process removes the psycho-visual redundancy; second, entropy encoders are used to remove the coding redundancy.
JPEG 2000 was developed after the original JPEG release as a joint venture of the ISO and the International Telecommunication Union (ITU). It has an advantage over the JPEG due to its higher compression ratio and better functionality. It differs from the JPEG in terms of the transform coefficients and provides additional functions regarding progressive resolution transmission, better quality, error calculation, and location information.
One of the best-known image compression techniques is vector quantization, which is recognized for achieving high compression ratios with minimal distortion at specified bit rates [16]. It offers many options, including higher compression with a larger block size and lower compression with a smaller block size; the distortion can thus be tailored to specific applications by adjusting the block size. Moreover, it provides rapid decompression using codebooks and indexes, making it suitable for web and multimedia applications in which images are decompressed many times. Generating the optimal codebook is a pivotal aspect of vector quantization, and it can be refined through various optimization techniques, including genetic algorithms [17] and ant colony optimization [18]. Linde, Buzo, and Gray [19] developed an algorithm that recursively produces an optimal codebook of size N from a randomly selected initial codebook. It divides the training vectors into N clusters and refines the codebook until the distortion falls within an acceptable threshold.
Several optimization algorithms have been applied to LBG for codebook generation and optimization; these algorithms improve the quality of the reconstructed image but suffer greatly in terms of computational time [20]. Hence, in the proposed research, a fast LBG-based codebook generation method is proposed that improves the computational time and storage requirements of LBG and LBG-based algorithms.

2. Recent Algorithms for Codebook Generation

Vector quantization is a block coding method in which the encoder searches for the final codebook. The codebook contains the codewords and indexes, which are transmitted to the receiver. The decoder uses the indexes and codewords to reconstruct the image. The encoder and decoder of VQ are shown in Figure 2.
The test image of dimensions N × N is treated as a set of training vectors by dividing it into N_b smaller blocks of size n × n. The blocks are represented as X_i, where i = 1, 2, 3, …, N_b. Specific vectors from the N_b blocks are chosen as codewords; these vectors are selected based on the minimum distortion D between the codeword and the training vector, and they are denoted as C_j, where j = 1, 2, 3, …, N_c. In this context, N_c denotes the total number of codewords within the codebook. The index of the codebook represents the location of each codeword in the codebook and is updated during each iteration after D is calculated. After the indexes and codewords are finalized, they are combined and transmitted as a codebook to the receiver. The decoder, after receiving the codebook, uses the indexes and codewords to reconstruct the original image. The distortion of the codebook with respect to the training vectors (the Euclidean distance measure) is calculated as
D = \frac{1}{N_c} \sum_{j=1}^{N_c} \sum_{i=1}^{N_b} V_{ij} \, \lVert X_i - C_j \rVert^2    (1)
subject to the conditions
\sum_{j=1}^{N_c} V_{ij} = 1, \quad \text{for all } i \in \{1, 2, 3, \ldots, N_b\}    (2)
V_{ij} = \begin{cases} 1, & \text{if } X_i \text{ belongs to the } j\text{th cluster} \\ 0, & \text{otherwise} \end{cases}    (3)
For VQ, two conditions must be satisfied:
(1) The partition R_j, for all j = 1, 2, 3, \ldots, N_c, satisfies
R_j \supseteq \{ x \in X : d(x, C_j) < d(x, C_k), \ \forall k \neq j \}    (4)
(2) C_j is the centroid of R_j, where
C_j = \frac{1}{N_j} \sum_{i=1}^{N_j} x_i, \quad x_i \in R_j    (5)
Here, N_j represents the total number of vectors belonging to R_j.
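To make the distortion measure in Equation (1) and the two conditions above concrete, the following is a minimal NumPy sketch (not the authors' code) of the nearest-codeword assignment and the centroid update; the array names are illustrative, and the distortion here is normalized by the number of training vectors for simplicity.

```python
import numpy as np

def assign_and_distortion(X, C):
    """Assign each training vector to its nearest codeword (partition condition)
    and return the assignments together with the average distortion.

    X : (Nb, k) array of training vectors
    C : (Nc, k) array of codewords
    """
    # Squared Euclidean distance between every training vector and every codeword.
    d2 = ((X[:, None, :] - C[None, :, :]) ** 2).sum(axis=2)   # shape (Nb, Nc)
    assign = d2.argmin(axis=1)                                # index of the nearest codeword
    D = d2[np.arange(len(X)), assign].mean()                  # average distortion
    return assign, D

def update_centroids(X, assign, Nc, rng=None):
    """Centroid condition: each codeword becomes the mean of its cluster R_j."""
    rng = rng or np.random.default_rng()
    return np.array([X[assign == j].mean(axis=0) if np.any(assign == j)
                     else X[rng.integers(len(X))]             # re-seed an empty cluster
                     for j in range(Nc)])
```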

2.1. Vector Quantization Using LBG

The first algorithm to apply the VQ technique, described by Linde et al. (1980), is known as the Linde–Buzo–Gray (LBG) algorithm [21]. Algorithm 1 illustrates its steps. The method employs a k-means-style clustering approach, using a proximity function to identify a locally optimal solution; this function attempts to ensure that the distortion does not increase between iterations. Because of its randomly initialized codebook, the technique can become trapped in local optima and fail to find the globally optimal solution.
Algorithm 1: LBG Algorithm
The quantizer is a mapping R^k → CB. Initialize X = (x_1, x_2, …, x_k) as the initial training vectors. The Euclidean distance between two vectors x and y is denoted D(x, y).
Step 1: Initial codebook C B 0 , which is generated randomly.
Step 2: Initialize i = 0.
Step 3: Execute the given steps for each training vector. Calculate the distance between the training vector and each codeword in CB_i as D(X; C) = \lVert x_t - c_t \rVert.
Find the closest codeword in CBi.
Step 4: Divide the codebook in clusters of N number of blocks.
Step 5: Calculate the centroid of each block for obtaining the new codebook CBi + 1.
Step 6: Calculate the average distortion of CB_{i+1}. If there is no improvement over the last iteration, the codebook is finalized and execution stops. Otherwise, set i = i + 1 and go to Step 3.
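A compact Python rendition of Algorithm 1 is given below; it is a simplified sketch (random initial codebook, relative-improvement stopping rule), not the authors' implementation, and the tolerance and iteration limit are illustrative.

```python
import numpy as np

def lbg(X, Nc, tol=1e-4, max_iter=100, seed=0):
    """Simplified LBG: alternate nearest-codeword assignment and centroid
    updates until the average distortion stops improving (Algorithm 1)."""
    rng = np.random.default_rng(seed)
    C = X[rng.choice(len(X), size=Nc, replace=False)].astype(float)  # Step 1: random codebook
    prev_D = np.inf
    for _ in range(max_iter):
        # Step 3: distance of every training vector to every codeword.
        d2 = ((X[:, None, :] - C[None, :, :]) ** 2).sum(axis=2)
        assign = d2.argmin(axis=1)
        D = d2[np.arange(len(X)), assign].mean()
        # Step 6: stop when the distortion no longer improves noticeably.
        if prev_D - D < tol * prev_D:
            break
        prev_D = D
        # Steps 4-5: new codebook from the cluster centroids.
        for j in range(Nc):
            members = X[assign == j]
            C[j] = members.mean(axis=0) if len(members) else X[rng.integers(len(X))]
    return C, assign
```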

2.2. Particle Swarm Optimization Vector Quantization Algorithm

Hsuan and Ching proposed Particle Swarm Optimization (PSO) to find the optimum codebook for VQ [3]. The method employs swarm intelligence to adjust the codewords based on the natural behavior principles observed in schools of fish, as depicted in Algorithm 2. This approach can deliver a globally better codebook when the particle update velocity is kept high, but it requires many iterations to find the global best solution.
Algorithm 2: PSO-LBG Algorithm
Step 1: Implement the LBG algorithm to discover the codebook and designate as the global best (gbest) codebook.
Step 2: Randomly generate additional codebooks.
Step 3: Compute the fitness values for each codebook.
\text{Fitness}(C) = \frac{1}{D(C)}    (6)
\frac{1}{D(C)} = \frac{N_b}{\sum_{j=1}^{N_c} \sum_{i=1}^{N_b} u_{ij} \, \lVert X_i - C_j \rVert^2}    (7)
Step 4: Upon observing an enhancement in the fitness of the codebook compared to the previous fitness (pbest), assign the new fitness value as pbest.
Step 5: Identify the codebook with the highest fitness value; if the fitness surpasses that of gbest, update gbest with the new value.
Step 6: Update velocities and elements to transition to a new position.
V_{ik}^{\,n+1} = V_{ik}^{\,n} + C_1 r_1^{\,n} \left( pbest_{ik}^{\,n} - X_{ik}^{\,n} \right) + C_2 r_2^{\,n} \left( gbest_{k}^{\,n} - X_{ik}^{\,n} \right)    (8)
X_{ik}^{\,n+1} = X_{ik}^{\,n} + V_{ik}^{\,n+1}    (9)
The variable K denotes the total number of solutions, "i" denotes the index of a particle, r_1 and r_2 represent random numbers, and C_1 and C_2 signify the cognitive and social influence rates, respectively.
Step 7: Repeat Steps 3–6 until the maximum number of iterations or another stopping criterion is met.
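As an illustration of the update in Step 6, the following sketch applies Equations (8) and (9) to a set of codebook particles; the acceleration coefficients are illustrative defaults, not values taken from the paper.

```python
import numpy as np

def pso_update(particles, velocities, pbest, gbest, c1=2.0, c2=2.0, rng=None):
    """One PSO velocity/position update for codebook particles.

    particles, velocities, pbest : arrays of shape (K, Nc, k)
    gbest                        : array of shape (Nc, k), broadcast to all particles
    """
    rng = rng or np.random.default_rng()
    r1 = rng.random(particles.shape)   # random factors in [0, 1)
    r2 = rng.random(particles.shape)
    new_v = velocities + c1 * r1 * (pbest - particles) + c2 * r2 * (gbest - particles)
    return particles + new_v, new_v    # new positions (codebooks) and velocities
```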

2.3. Quantum-Inspired Particle Swarm Optimization Vector Quantization Algorithm

By the procedure outlined in Algorithm 3, Wang et al. implemented the Quantum Swarm Evolutionary Algorithm (QPSO) [22], whereby the local points are estimated as P i utilizing Equation (10) derived from the local best (pbest) and global best (gbest) codebooks. The adjustment of particle positions is facilitated by manipulating parameters u and z. It is noted that refining these parameters to improve PSNR entails substantial computational resources, surpassing those demanded by PSO and LBG algorithms.
P_i = \frac{r_1 \, pbest_i + r_2 \, gbest_i}{r_1 + r_2}    (10)

2.4. Firefly Vector Quantization Algorithm

The firefly algorithm for codebook design was introduced by MH Horng [23]. This algorithm, inspired by the flashing behavior of fireflies, incorporates brightness into its objective function. It operates by generating multiple codebooks, analogous to fireflies, with the goal of transitioning from lower to higher intensities or brightness values. However, if there is a lack of brighter fireflies within the search space, the algorithm’s performance in terms of the PSNR may deteriorate. The FA-VQ algorithm is depicted as Algorithm 4 in the present study.
Algorithm 3: QPSO-LBG Algorithm
Step 1: Initialization of the LBG algorithm involves assigning the global best codebook (gbest) and initializing the remaining codebooks and velocities randomly.
Step 2: The fitness of each codebook is computed.
Step 3: If the newly computed fitness surpasses the previous best fitness (pbest), the new fitness value replaces pbest.
Step 4: The largest fitness value among all particles is taken, and, if an improvement is detected in gbest, it is updated with the new value.
Step 5: Random values r_1, r_2, and u are chosen within the range of 0 to 1, and the local point P_i is calculated using Equation (10).
Step 6: The elements of the codebook Xi are updated according to Equations (11)–(13).
L_i = z \, \lvert X_i - p_i \rvert    (11)
\text{if } u > 0.5: \quad X_i(t+1) = p_i - L_i \ln(1/u)    (12)
\text{else: } \quad X_i(t+1) = p_i + L_i \ln(1/u)    (13)
In this context, the constant ‘z’ is maintained such that it satisfies the condition z < 1/\ln 2, where ‘t’ represents the iteration.
Step 7: Steps 3 to 6 are iterated until the maximum allowable number of iterations is reached.
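The quantum-inspired move of Steps 5–6 (Equations (10)–(13)) can be sketched as follows; the value of z respects the stated bound z < 1/ln 2, and everything else is an illustrative assumption.

```python
import numpy as np

def qpso_update(X, pbest, gbest, z=1.0, rng=None):
    """Quantum-inspired update of one codebook particle X (shape (Nc, k))."""
    assert z < 1.0 / np.log(2.0), "the algorithm requires z < 1/ln 2"
    rng = rng or np.random.default_rng()
    r1, r2, u = rng.random(3)
    p = (r1 * pbest + r2 * gbest) / (r1 + r2)   # local attractor P_i, Equation (10)
    L = z * np.abs(X - p)                       # Equation (11)
    if u > 0.5:                                 # Equations (12) and (13)
        return p - L * np.log(1.0 / u)
    return p + L * np.log(1.0 / u)
```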
Algorithm 4: FA-LBG Algorithm
Step 1: Implement the LBG algorithm and designate its output as the brighter firefly (codebook).
Step 2: Initialize the parameters alpha (α), beta (β), and gamma (γ).
Step 3: Randomly initialize codebooks; select maximum iteration count j.
Step 4: Start count m = 1 .
Step 5: Evaluate the fitness of all codebooks using Equation (6). Choose a codebook randomly based on its fitness value and commence moving codebooks toward the brighter fireflies using Equations (14)–(17).
\text{Euclidean distance: } r_{ij} = \lVert X_i - X_j \rVert    (14)
\lVert X_i - X_j \rVert = \sqrt{ \sum_{k=1}^{N_c} \sum_{h=1}^{L} \left( X_{ikh} - X_{jkh} \right)^2 }    (15)
\beta = \beta_0 \, e^{-\gamma r_{ij}^2}    (16)
where 0 < u < 1 and k = 1, 2, 3, \ldots, N_c.
Step 6: If brighter fireflies cannot be located, begin moving randomly within the search space in pursuit of brighter ones using
X_{jkh} = (1 - \beta) X_{ikh} + \beta X_{jkh} + u_{jkh}    (17)
Step 7: If (m = j), execution stops.
Step 8: Increment m = m + 1.
Step 9: Jump to step 5.
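A minimal sketch of the movement step of Algorithm 4 (Equations (14)–(17)) is given below; the attractiveness decay β = β₀·exp(−γ·r²) and the parameter values are illustrative assumptions.

```python
import numpy as np

def firefly_move(Xi, Xj, beta0=1.0, gamma=0.01, rng=None):
    """Move codebook (firefly) Xj toward a brighter codebook Xi.

    Xi, Xj : arrays of shape (Nc, L) holding the two codebooks.
    """
    rng = rng or np.random.default_rng()
    r = np.linalg.norm(Xi - Xj)                 # Euclidean distance r_ij, Equations (14)-(15)
    beta = beta0 * np.exp(-gamma * r ** 2)      # attractiveness, Equation (16)
    u = rng.random(Xj.shape)                    # random term u_jkh
    return (1.0 - beta) * Xi + beta * Xj + u    # movement rule, Equation (17)
```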

2.5. BA Vector Quantization Algorithm

Karri et al. introduced a bat algorithm (BA) for vector quantization, inspired by the echolocation behavior of bats [24]. In this algorithm, each codebook is considered a bat, and the global codebook is estimated through three key parameters: loudness, frequency, and pulse rate. Compared to other LBG-based techniques, it achieves a notably high PSNR. Nonetheless, it requires the calculation of an extra parameter, leading to a notable rise in computation time compared with the PSO, QPSO, and LBG algorithms. Algorithm 5 presents the BA algorithm.
Algorithm 5: BA-LBG Algorithm
Step 1: Begin by allocating N codebooks, represented as bats, and defining the parameters ‘A’ (loudness), ‘V’ (velocity), ‘R’ (pulse rate), ‘Qmin’ (minimum frequency), and ‘Qmax’ (maximum frequency).
Step 2: Implement the LBG algorithm to establish the initial codebook. Randomly select the remaining codebooks, denoted as X i (where i = 1, 2, 3,...N− 1).
Step 3: Set the iteration counter m to 1 and define the maximum count as j.
Step 4: Evaluate all codebooks’ fitness values using Equation (6). Identify X b s t as the best-performing codebook.
Step 5: Update the positions of the codebooks by adjusting their frequency and velocity according to Equations (18) through (20).
Q_i(t+1) = Q_{\max}(t) + \Delta Q(t), \quad \text{where } \Delta Q(t) = \left( Q_{\min}(t) - Q_{\max}(t) \right) \cdot R    (18)
V_i(t+1) = V_i(t) + \Delta V(t), \quad \text{where } \Delta V(t) = \left( X_i(t) - X_{best}(t) \right) \cdot Q_i(t+1)    (19)
X_i(t+1) = X_i(t) + V_i(t+1)    (20)
Step 6: Select the size of the step between 0 and 1 for the random walk (W).
Step 7: If the step size exceeds the pulse rate (R), the codebook is shifted using Equation (21).
X_i(t+1) = X_{best}(t) + W \cdot R    (21)
Step 8: Produce a randomized value, and, if its magnitude is below the threshold of loudness, incorporate it into the codebook.
Step 9: Sort the codebooks with respect to their fitness values and update X_{best}.
Step 10: If the condition (m = j) is satisfied, the execution halts. Otherwise, the value of m is incremented by 1.
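The frequency, velocity, and position updates of Steps 5–7 (Equations (18)–(21)) can be sketched as below; the frequency range, pulse rate, and step size are illustrative placeholders rather than values from the paper.

```python
import numpy as np

def bat_update(X, V, X_best, Q_min=0.0, Q_max=2.0, R=0.5, W=0.1):
    """One bat move for a codebook X with velocity V (arrays of shape (Nc, k)).
    R is the pulse rate and W the random-walk step size."""
    Q = Q_max + (Q_min - Q_max) * R      # frequency, Equation (18)
    V_new = V + (X - X_best) * Q         # velocity pulled toward the best codebook, Equation (19)
    X_new = X + V_new                    # position update, Equation (20)
    if W > R:                            # local walk around the best codebook, Equation (21)
        X_new = X_best + W * R
    return X_new, V_new
```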

3. Proposed Fast-LBG Algorithm

The conventional LBG algorithm faces inefficiencies because of the random initialization of the initial codebook, often leading to entrapment in local optima. While PSO and QPSO methods generate efficient codebooks, higher-velocity particles may suffer from instability. Constructing a codebook for HBMO [25] demands numerous tuning parameters. Additionally, the FA algorithm’s convergence is compromised when there are no brighter fireflies [26] in the search space. Failure to meet the convergence conditions in the CS algorithm [20] necessitates numerous iterations. To address these challenges, a novel approach is proposed, modifying the LBG algorithm by incorporating rescaling via bilinear interpolation to reduce the computational time while maintaining a PSNR and SSIM near the LBG algorithm. The proposed method’s block diagram is depicted in Figure 3.

Bilinear Interpolation for Codebook Rescaling

Bilinear interpolation allows for the estimation of a function’s value at any location within a rectangle given that its value is known at each of the rectangle’s four corners. This technique is particularly relevant in our methodology, where resizing the image is essential for reducing computational complexity. In our proposed method, bilinear interpolation is employed using the nearest neighborhood method to rescale the image and reduce the comparison between the codebook and the training vector. Bilinear interpolation estimates the value of an unknown function at a specific location (x, y) based on the known values at four surrounding sites. The encoder uses the input image as a training vector, which is initialized with a random selection of vectors from the image. Assume we want to find the value of the unknown function f at a specific location ( x , y ) . The values of f at the four sites Q 11 = ( x 1 , y 1 ) , Q 12 = ( x 1 , y 2 ) , Q 21 = ( x 2 , y 1 ) , and  Q 22 = ( x 2 , y 2 ) are assumed to be known. First, a bilinear interpolation is performed in the x direction using the following equation:
f(x, y_1) = \frac{x_2 - x}{x_2 - x_1} f(Q_{11}) + \frac{x - x_1}{x_2 - x_1} f(Q_{21})    (22)
An analogous interpolation in the x direction at y_2 gives f(x, y_2) from f(Q_{12}) and f(Q_{22}). The required estimate is then obtained by interpolating in the y direction:
f(x, y) = \frac{y_2 - y}{y_2 - y_1} f(x, y_1) + \frac{y - y_1}{y_2 - y_1} f(x, y_2)    (23)
Through interpolation, we reduce the image to one quarter of its original size. This reduction lowers the total number of comparisons between the training vectors and the codebook. The decoder performs upscaling to reverse the encoder's downscaling: the image is upscaled by a factor of 4, and the indexes at the decoder are used to replicate the image pixels at the receiving end so that the decompressed image matches the size of the original image. Based on the findings, the proposed method reduces the computing complexity. The proposed algorithm is shown in Algorithm 6.
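Before turning to Algorithm 6, the rescaling step can be illustrated with the short sketch below, which resizes a grayscale image with bilinear weights (Equations (22) and (23)); it is a generic NumPy illustration, not the authors' MATLAB code.

```python
import numpy as np

def bilinear_resize(img, out_h, out_w):
    """Resize a 2-D grayscale image using bilinear interpolation."""
    in_h, in_w = img.shape
    # Sampling positions in the source image for every output pixel.
    ys = np.linspace(0, in_h - 1, out_h)
    xs = np.linspace(0, in_w - 1, out_w)
    y0 = np.floor(ys).astype(int); y1 = np.minimum(y0 + 1, in_h - 1)
    x0 = np.floor(xs).astype(int); x1 = np.minimum(x0 + 1, in_w - 1)
    wy = (ys - y0)[:, None]
    wx = (xs - x0)[None, :]
    # Interpolate in x along the two surrounding rows, then in y (Equations (22)-(23)).
    top = (1 - wx) * img[y0][:, x0] + wx * img[y0][:, x1]
    bottom = (1 - wx) * img[y1][:, x0] + wx * img[y1][:, x1]
    return (1 - wy) * top + wy * bottom

# Encoder side: shrink the image to one quarter of its area (half in each dimension).
# Decoder side: scale the reconstruction back up to the original size.
# small = bilinear_resize(img, img.shape[0] // 2, img.shape[1] // 2)
# restored = bilinear_resize(small, *img.shape)
```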
Algorithm 6: Proposed FLBG Algorithm
Step 1: Resize the image using bilinear interpolation, Equations (22) and (23).
Step 2: Find the Euclidean Distance “D” between the two vectors as D(x,y).
Step 3: Initial codebook C B 0 , which is generated randomly.
Step 4: i = 0.
Step 5: Execute the given steps for each training vector. Calculate the distance between the training vector and each codeword in CB_i as D(X; C) = \lVert x_t - c_t \rVert.
Find the closest codeword in CBi.
Step 6: Divide the codebook into clusters of N number of blocks.
Step 7: Calculate the centroid of each block for obtaining the new codebook CBi + 1.
Step 8: Calculate the average distortion of CB_{i+1}. If there is no improvement over the last iteration, the codebook is finalized and execution stops; otherwise, set i = i + 1 and go to Step 5.
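Putting the pieces together, the following end-to-end sketch shows the flow of Algorithm 6 under the assumptions made so far; it reuses the bilinear_resize and lbg functions from the earlier sketches, the block helpers are illustrative, and image dimensions are assumed to remain divisible by the block size after downscaling (true for the 512 × 512 test images).

```python
import numpy as np

def to_blocks(img, b=4):
    """Split an image into non-overlapping b x b blocks, flattened to vectors."""
    h, w = img.shape
    blocks = img.reshape(h // b, b, w // b, b).swapaxes(1, 2)
    return blocks.reshape(-1, b * b)

def from_blocks(vectors, shape, b=4):
    """Inverse of to_blocks: rebuild an image from flattened block vectors."""
    h, w = shape
    return vectors.reshape(h // b, w // b, b, b).swapaxes(1, 2).reshape(h, w)

def flbg_roundtrip(img, Nc=64, b=4):
    """Downscale, vector-quantize with LBG, reconstruct, and upscale back."""
    small = bilinear_resize(img.astype(float), img.shape[0] // 2, img.shape[1] // 2)
    X = to_blocks(small, b)            # training vectors from the downscaled image
    C, assign = lbg(X, Nc)             # codebook and indexes (the transmitted data)
    small_hat = from_blocks(C[assign], small.shape, b)
    return bilinear_resize(small_hat, img.shape[0], img.shape[1])  # decoder upscaling
```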

4. Results and Discussion

The evaluation of the codebook involved conducting tests on grayscale images. For the comparative study, five distinct test images, shown in Figure 4, were utilized: ‘Cameraman.png’, ‘Baboon.png’, ‘peppers.png’, ‘Barb.png’, and ‘Goldhill.png’. The simulations were performed on a 32-bit Windows 11 Pro system with an Intel® Core™ i5-3210M processor running at 2.54 GHz with a 3 MB cache and 4 GB of DDR3 RAM. MATLAB version R2019A was used to compile the code. All the tests were conducted on 512 × 512 grayscale images, as depicted in Figure 4.
Initially, each test image is partitioned into non-overlapping blocks of 4 × 4 pixels for compression. This yields 16,384 training vectors, with the dimension of each input vector set to 16. The comparison metrics used are the bit rate per pixel (bpp), the Mean Square Error (MSE), and the peak signal-to-noise ratio (PSNR), calculated using Equations (24), (25), and (26), respectively.
bpp = \frac{\log_2 N_c}{k}    (24)
Here, k represents the size of the block, while N_c indicates the size of the codebook. The bit rate normalized by the number of pixels serves as a metric for evaluating the degree of compression of an image.
MSE = \frac{1}{M \times N} \sum_{i=1}^{M} \sum_{j=1}^{N} \left[ f(i, j) - \hat{f}(i, j) \right]^2    (25)
M × N denotes the overall pixel count, where i and j signify the x and y coordinates of the pixels. The test image is referenced as f(i, j), while the compressed image is denoted as \hat{f}(i, j).
PSNR = 10 \log_{10} \left( \frac{255^2}{MSE} \right) \ \text{dB}    (26)
PSNR measurements are employed to assess the quality of the decompressed image. Five test images are examined, each standardized to a size of M × N (512 × 512) pixels, while employing varied codebook sizes (8, 16, 32, 64, 128, 256, 512, and 1024). Each test image simulation is executed four times, and the maximum average PSNR values are used to determine the parameters of the proposed method. Table 1, Table 2, Table 3, Table 4 and Table 5 present the PSNR evaluation of the test images using FLBG in comparison with the existing algorithms. The analysis of the average variation in the peak signal-to-noise ratio with respect to the bit rate indicates that the proposed algorithm achieves a PSNR level comparable to that of LBG.
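For reference, the three metrics of Equations (24)–(26) can be computed as in the short sketch below (255 assumes 8-bit images); it is a generic illustration rather than the evaluation script used by the authors.

```python
import numpy as np

def bpp(Nc, k=16):
    """Bit rate per pixel for Nc codewords and k-pixel blocks, Equation (24)."""
    return np.log2(Nc) / k

def mse(original, reconstructed):
    """Mean squared error between two images, Equation (25)."""
    diff = original.astype(float) - reconstructed.astype(float)
    return np.mean(diff ** 2)

def psnr(original, reconstructed, peak=255.0):
    """Peak signal-to-noise ratio in dB for 8-bit images, Equation (26)."""
    return 10.0 * np.log10(peak ** 2 / mse(original, reconstructed))

# Example: a 64-codeword codebook with 4 x 4 blocks gives log2(64) / 16 = 0.375 bpp.
```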
Although PSNR is a valuable metric for comparing image quality, it may not fully correlate with human visual perception, especially in distinguishing structural details. To overcome this limitation and facilitate structural comparison, we computed the Structural Similarity Index Measure (SSIM) metrics [27]. These metrics evaluate luminance, contrast, and structure among the test images and the compressed images. The SSIM score is determined using Equation (27).
SSIM(X, Y) = [L(X, Y)]^{\alpha} \cdot [C(X, Y)]^{\beta} \cdot [S(X, Y)]^{\gamma}    (27)
where S, C, and L denote structure, contrast, and luminance, respectively, while α, β, and γ signify the relative significance of these parameters. For simplicity, it was presumed that α = β = γ = 1. The SSIM values vary between 0 and 1, where 0 indicates no similarity and 1 signifies a high degree of resemblance between the two images. As shown in Figure 5, Figure 6, Figure 7, Figure 8 and Figure 9, the SSIM scores (expressed as percentages) of the five test images obtained using FLBG are compared to those obtained with the existing algorithms.
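Where a ready-made SSIM implementation is preferred, scikit-image provides one; the call below is a typical usage sketch with α = β = γ = 1 (the library default) and the data range set for 8-bit images.

```python
import numpy as np
from skimage.metrics import structural_similarity

def ssim_percent(original, reconstructed):
    """SSIM between two 8-bit grayscale images, expressed as a percentage."""
    score = structural_similarity(original, reconstructed, data_range=255)
    return 100.0 * score

# Usage (arrays are hypothetical 512 x 512 uint8 images):
# print(f"SSIM: {ssim_percent(original, reconstructed):.1f}%")
```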
The graph indicates that the proposed algorithm achieves an SSIM percentage that is approximately equal to that of the LBG algorithm. Figure 10, Figure 11 and Figure 12 examine and contrast three reconstructed test images utilizing FLBG- and LBG-derived algorithms, employing a codebook capacity of 64 and a block size of 16. It is observed that the image quality of the reconstructed images depicted in Figure 10, Figure 11 and Figure 12 is comparable to that of LBG. Simulations were conducted by varying the codebook sizes. Increasing the codebook size enhances the image quality but also increases the total number of comparisons among codewords and training vectors, resulting in longer computation times and lower compression ratios. Nevertheless, a notable reduction in computation time was achieved during testing. Table 6 presents the average processing times for various test images measured using the FLBG and comparable algorithms. Each test image was run six times to compute the average processing time for accurate evaluation. The results demonstrated a significant efficiency advantage of FLBG over traditional LBG and other LBG-based algorithms. Specifically, FLBG achieved a 47.7% reduction in processing time compared to LBG, showcasing a substantial improvement. Furthermore, FLBG outperformed other LBG-based algorithms by more than 97%, indicating its superior performance.
It can be observed from these tables that FLBG requires less computational time and produces a smaller image size than the LBG-based algorithms, such as HBMO-LBG, FA-LBG, BA-LBG, PSO-LBG, QPSO-LBG, and CS-LBG.
It is important to mention here that this work specifically focuses on an LBG compression method that is a VQ-based technique. We acknowledge that its focus may seem narrow in comparison to the widely used transform-based methods like the JPEG, JPEG 2000, and WebP. However, we would like to emphasize that our intention was not to directly compete with these established algorithms in terms of computational time or file size reduction. Instead, our aim was to enhance the computational speed of the VQ-based LBG compression method.
While it is true that comparing the resultant image sizes and execution times of various algorithms is crucial for selecting the most suitable compression method, we believe that a qualitative comparison is equally important, especially when considering different algorithmic approaches. Transform-based methods excel in many scenarios, but there are specific cases where VQ methods offer unique advantages, such as preserving perceptual quality, exploiting correlated data, and facilitating fixed-rate compression.
In this work, we sought to highlight the importance of VQ methods in certain application domains where these advantages are critical. Although our enhancements may not directly improve the execution time or file size reduction compared to the state-of-the-art transform-based methods, they contribute to the broader discussion on the relevance and necessity of VQ techniques.

5. Conclusions

A fast LBG algorithm is proposed for compressing images, wherein codebook generation is enhanced by pre-scaling the image prior to applying the LBG algorithm. This pre-scaling exploits the inherent high correlation among the pixels, thus reducing inter-pixel redundancies. Reducing the training vector size before applying the LBG algorithm significantly cuts the processing time by minimizing the number of required comparisons between the training vectors and the codebook codewords. Consequently, the resultant image size is contingent upon the chosen rescaling factor, enabling increased compression at the expense of image quality.
Based on the simulation outcomes, it is evident that the proposed algorithm excels in terms of computational efficiency, advocating for its adoption in LBG and LBG-based algorithms to mitigate the computational complexity and diminish compressed image dimensions.
The potential avenues for future research encompass exploring the efficacy of the algorithm on polychrome, monochrome, and three-dimensional images. Moreover, employing advanced scaling mechanisms such as edge-directed interpolation or Sinc and Lanczos resampling could further enhance the results.

Author Contributions

Conceptualization, writing—original draft, visualization, validation, experiments, formal analysis, M.B.; methodology, algorithms/software, visualization, and writing—original draft, writing—review, formal analysis, resources, Z.U.; writing—original draft, writing—review, formal analysis, resources, O.M.; writing—review, resources, T.F. All authors have read and agreed to the published version of the manuscript.

Funding

This research has not received any external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Data will be made available upon request to M.B.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Bilal, M.; Ullah, Z.; Islam, I.U. Fast codebook generation using pattern based masking algorithm for image compression. IEEE Access 2021, 9, 98904–98915. [Google Scholar] [CrossRef]
  2. Wang, M.; Xie, W.; Zhang, J.; Qin, J. Industrial Applications of Ultrahigh Definition Video Coding With an Optimized Supersample Adaptive Offset Framework. IEEE Trans. Ind. Inform. 2020, 16, 7613–7623. [Google Scholar] [CrossRef]
  3. Chen, Z.; Bodenheimer, B.; Barnes, J.F. Robust transmission of 3D geometry over lossy networks. In Proceedings of the Eighth International Conference on 3D Web Technology, Saint Malo, France, 9–12 March 2003; p. 161-ff. [Google Scholar]
  4. Garcia, N.; Munoz, C.; Sanz, A. Image compression based on hierarchical encoding. Image Coding 1986, 594, 150–157. [Google Scholar]
  5. Huffman, D.A. A method for the construction of minimum-redundancy codes. Proc. IRE 1952, 40, 1098–1101. [Google Scholar] [CrossRef]
  6. Wallace, G.K. The JPEG still picture compression standard. Commun. ACM 1991, 34, 30–44. [Google Scholar] [CrossRef]
  7. Howard, P.G.; Vitter, J.S. New methods for lossless image compression using arithmetic coding. Inf. Process. Manag. 1992, 28, 765–779. [Google Scholar] [CrossRef]
  8. Khalaf, W.; Mohammad, A.S.; Zaghar, D. Chimera: A New Efficient Transform for High Quality Lossy Image Compression. Symmetry 2020, 12, 378. [Google Scholar] [CrossRef]
  9. Nam, J.H.; Sim, D. Lossless video coding based on pixel-wise prediction. Multimed. Syst. 2008, 14, 291–298. [Google Scholar] [CrossRef]
  10. Rahman, M.A.; Hamada, M. Lossless image compression techniques: A state-of-the-art survey. Symmetry 2019, 11, 1274. [Google Scholar] [CrossRef]
  11. Magli, E.; Olmo, G. Lossy predictive coding of SAR raw data. IEEE Trans. Geosci. Remote Sens. 2003, 41, 977–987. [Google Scholar] [CrossRef]
  12. Wohlberg, B.; De Jager, G. A review of the fractal image coding literature. IEEE Trans. Image Process. 1999, 8, 1716–1729. [Google Scholar] [CrossRef]
  13. Abbas, H.; Fahmy, M. Neural model for Karhunen-Loeve transform with application to adaptive image compression. IEE Proc. I (Commun. Speech Vis.) 1993, 140, 135–143. [Google Scholar]
  14. Otto, J.K. Image Reconstruction for Discrete Cosine Transform Compression Schemes; Oklahoma State University: Stillwater, OK, USA, 1993. [Google Scholar]
  15. Kajiwara, K. JPEG compression for PACS. Comput. Methods Programs Biomed. 1992, 37, 343–351. [Google Scholar] [CrossRef] [PubMed]
  16. Oehler, K.L.; Gray, R.M. Combining image compression and classification using vector quantization. IEEE Trans. Pattern Anal. Mach. Intell. 1995, 17, 461–473. [Google Scholar] [CrossRef]
  17. Mitra, S.K.; Murthy, C.; Kundu, M.K. Technique for fractal image compression using genetic algorithm. IEEE Trans. Image Process. 1998, 7, 586–593. [Google Scholar] [CrossRef] [PubMed]
  18. Shivashetty, V.; Rajput, G. Adaptive Lifting Based Image Compression Scheme Using Interactive Artificial Bee Colony Algorithm. Csity Sigpro Dtmn 2015, 9–21. [Google Scholar] [CrossRef]
  19. Gray, R. Vector quantization. IEEE Assp Mag. 1984, 1, 4–29. [Google Scholar] [CrossRef]
  20. Nag, S. Vector quantization using the improved differential evolution algorithm for image compression. Genet. Program. Evol. Mach. 2019, 20, 187–212. [Google Scholar] [CrossRef]
  21. Linde, Y.; Buzo, A.; Gray, R. An Algorithm for Vector Quantizer Design. IEEE Trans. Commun. 1980, 28, 84–95. [Google Scholar] [CrossRef]
  22. Wang, Y.; Feng, X.Y.; Huang, Y.X.; Pu, D.B.; Zhou, W.G.; Liang, Y.C.; Zhou, C.G. A novel quantum swarm evolutionary algorithm and its applications. Neurocomputing 2007, 70, 633–640. [Google Scholar] [CrossRef]
  23. Horng, M.H. Vector quantization using the firefly algorithm for image compression. Expert Syst. Appl. 2012, 39, 1078–1091. [Google Scholar] [CrossRef]
  24. Karri, C.; Jena, U. Fast vector quantization using a Bat algorithm for image compression. Eng. Sci. Technol. Int. J. 2016, 19, 769–781. [Google Scholar] [CrossRef]
  25. Rini, D.P.; Shamsuddin, S.M.; Yuhaniz, S.S. Particle swarm optimization: Technique, system and challenges. Int. J. Comput. Appl. 2011, 14, 19–26. [Google Scholar] [CrossRef]
  26. Sekhar, G.C.; Sahu, R.K.; Baliarsingh, A.; Panda, S. Load frequency control of power system under deregulated environment using optimal firefly algorithm. Int. J. Electr. Power Energy Syst. 2016, 74, 195–211. [Google Scholar] [CrossRef]
  27. Wang, Z.; Bovik, A.C.; Sheikh, H.R.; Simoncelli, E.P. Image quality assessment: From error visibility to structural similarity. IEEE Trans. Image Process. 2004, 13, 600–612. [Google Scholar] [CrossRef] [PubMed]
Figure 1. High-level block diagram of predictive coding.
Figure 2. Block diagram of VQ encoder and decoder.
Figure 3. Block diagram of the proposed fast-LBG algorithm.
Figure 4. (a–e) The images utilized for analytical purposes underwent compression during the experimentation.
Figure 5. Similarity index measure for Cameraman image.
Figure 6. Similarity index measure for Baboon image.
Figure 7. Similarity index measure for Peppers image.
Figure 8. Similarity index measure for Barb image.
Figure 9. Similarity index measure for Goldhill image.
Figure 10. The image Goldhill reconstructed employing eight distinct algorithms: (a) Linde–Buzo–Gray. (b) Linde–Buzo–Gray Particle Swarm Optimization. (c) Linde–Buzo–Gray Quantum Particle Swarm Optimization. (d) Linde–Buzo–Gray Honey Bee Mating Optimization. (e) Linde–Buzo–Gray firefly algorithm. (f) Linde–Buzo–Gray bat algorithm. (g) Linde–Buzo–Gray Cuckoo Search Optimization. (h) Fast Linde–Buzo–Gray.
Figure 11. The image Barb reconstructed employing eight distinct algorithms: (a) Linde–Buzo–Gray. (b) Linde–Buzo–Gray Particle Swarm Optimization. (c) Linde–Buzo–Gray Quantum Particle Swarm Optimization. (d) Linde–Buzo–Gray Honey Bee Mating Optimization. (e) Linde–Buzo–Gray firefly algorithm. (f) Linde–Buzo–Gray bat algorithm. (g) Linde–Buzo–Gray Cuckoo Search Optimization. (h) Fast Linde–Buzo–Gray.
Figure 12. The image Peppers reconstructed employing eight distinct algorithms: (a) Linde–Buzo–Gray. (b) Linde–Buzo–Gray Particle Swarm Optimization. (c) Linde–Buzo–Gray Quantum Particle Swarm Optimization. (d) Linde–Buzo–Gray Honey Bee Mating Optimization. (e) Linde–Buzo–Gray firefly algorithm. (f) Linde–Buzo–Gray bat algorithm. (g) Linde–Buzo–Gray Cuckoo Search Optimization. (h) Fast Linde–Buzo–Gray.
Table 1. Image ‘Cameraman’ PSNR vs. bitrate comparison (PSNR in decibels).

Bpp   | LBG  | PSO  | QPSO | HBMO | FA   | BA    | FLBG
0.15  | 25.2 | 25.4 | 25.2 | 25.4 | 25.3 | 25.5  | 25.3
0.25  | 26.4 | 26.3 | 26.4 | 26.5 | 26.4 | 26.5  | 26.3
0.325 | 26.4 | 26.5 | 26.5 | 26.6 | 26.4 | 26.5  | 26.4
0.375 | 26.2 | 26.7 | 27.2 | 26.9 | 26.8 | 27.35 | 26.2
0.435 | 26.3 | 28.6 | 28.6 | 28.5 | 28.7 | 29.2  | 26.5
0.485 | 26.5 | 29.8 | 29.4 | 29.5 | 29.7 | 29.9  | 26.6
0.55  | 26.7 | 30.2 | 30.2 | 30.2 | 30.1 | 30.5  | 26.8
0.625 | 26.7 | 31.4 | 31.5 | 31.6 | 31.6 | 31.8  | 26.8
Table 2. Image ‘Baboon’ PSNR vs. bitrate comparison (PSNR in decibels).

Bpp   | LBG  | PSO  | QPSO | HBMO | FA   | BA   | FLBG
0.15  | 18.2 | 18.3 | 18.1 | 18.7 | 19.1 | 19.1 | 18.1
0.25  | 19.6 | 19.6 | 19.7 | 19.8 | 19.6 | 20.1 | 19.3
0.325 | 19.5 | 20.2 | 20.2 | 20.1 | 20.2 | 20.2 | 19.4
0.375 | 19.6 | 20.5 | 20.7 | 21.2 | 20.8 | 21.6 | 19.2
0.435 | 19.7 | 21.5 | 21.4 | 21.8 | 21.6 | 22.2 | 19.1
0.485 | 19.7 | 22.1 | 22.3 | 22.7 | 22.5 | 23   | 19.3
0.55  | 19.6 | 23.1 | 23.2 | 23.4 | 23.1 | 23.6 | 19.4
0.625 | 19.7 | 23.4 | 23.4 | 23.6 | 23.5 | 24.4 | 19.4
Table 3. Image ‘Peppers’ PSNR vs. bitrate comparison (PSNR in decibels).

Bpp   | LBG  | PSO  | QPSO | HBMO | FA   | BA   | FLBG
0.15  | 24.2 | 24.3 | 24.4 | 24.4 | 24.4 | 24.6 | 24.1
0.25  | 25.1 | 25.3 | 25.4 | 25.2 | 25.2 | 25.5 | 24.8
0.325 | 25.2 | 26.2 | 26.4 | 26.3 | 26.1 | 26.3 | 25.1
0.375 | 25.2 | 27.1 | 27.2 | 27.4 | 27.6 | 28.4 | 25.1
0.435 | 25.2 | 29.1 | 29.4 | 29.4 | 29.6 | 30.2 | 24.7
0.485 | 25.3 | 30.1 | 30.3 | 30.4 | 30.5 | 30.7 | 25.2
0.55  | 25.4 | 31.2 | 31.4 | 31.5 | 31.6 | 31.8 | 24.7
0.625 | 25.3 | 32.4 | 32.5 | 32.6 | 32.7 | 32.5 | 25.2
Table 4. Image ‘Barb’ PSNR vs. bitrate comparison (PSNR in decibels).

Bpp   | LBG  | PSO  | QPSO | HBMO | FA   | BA   | FLBG
0.15  | 23.7 | 24.2 | 23.6 | 23.4 | 23.4 | 24.2 | 23.2
0.25  | 23.7 | 24.1 | 24.1 | 24.1 | 24.1 | 24.5 | 23.2
0.325 | 24.1 | 25.7 | 25.7 | 25.7 | 26.2 | 26.2 | 23.7
0.375 | 24.1 | 27.1 | 27.2 | 27.1 | 27.2 | 27.5 | 24.2
0.435 | 24.1 | 28.1 | 28.3 | 28.2 | 28.4 | 28.3 | 23.8
0.485 | 24.7 | 28.7 | 28.7 | 28.8 | 29.1 | 29.1 | 24.5
0.55  | 24.5 | 30.1 | 30.2 | 29.7 | 29.8 | 30.2 | 24.1
0.625 | 25.1 | 30.2 | 30.1 | 29.7 | 29.8 | 30.8 | 24.8
Table 5. Image ‘Goldhill’ PSNR vs. bitrate comparison (PSNR in decibels).

Bpp   | LBG  | PSO  | QPSO | HBMO | FA   | BA   | FLBG
0.15  | 24.4 | 24.5 | 24.2 | 24.2 | 24.2 | 24.8 | 23.6
0.25  | 25.1 | 25.2 | 25.2 | 25.5 | 25.7 | 25.8 | 24.7
0.325 | 25.5 | 25.2 | 25.8 | 25.8 | 26.1 | 26.1 | 25.1
0.375 | 25.6 | 26.1 | 26.1 | 26.8 | 26.7 | 27.1 | 25.1
0.435 | 25.6 | 27.2 | 27.3 | 27.6 | 27.7 | 28.2 | 25.2
0.485 | 25.6 | 28.2 | 28.1 | 28.7 | 28.6 | 28.8 | 24.8
0.55  | 25.6 | 29.7 | 29.8 | 30.1 | 30.1 | 30.2 | 24.7
0.625 | 25.6 | 30.1 | 30.1 | 30.4 | 30.3 | 30.8 | 24.8
Table 6. The average computational times taken across different test/experimental images (all values in seconds; the Percentage Improvement row reports the reduction achieved by FLBG relative to each algorithm).

Size of Codebook: 16 — Average time taken for computation at bitrate = 0.25
Image     | LBG   | PSO-LBG | QPSO-LBG | HBMO-LBG | FA-LBG  | BA-LBG  | CS-LBG  | FLBG
CAMERAMAN | 8.13  | 591.11  | 618.56   | 1232.22  | 1173.37 | 599.61  | 2521.45 | 3.12
PEPPER    | 8.92  | 487.57  | 493.45   | 1105.28  | 1040.34 | 630.44  | 3326.92 | 3.33
BABOON    | 9.44  | 669.84  | 695.21   | 1983.12  | 1964.46 | 698.98  | 3031.06 | 4.13
GOLDHILL  | 9.64  | 625.37  | 740.91   | 1158.50  | 1130.75 | 513.28  | 2480.95 | 4.66
BARB      | 9.21  | 555.67  | 656.91   | 1567.51  | 1549.53 | 690.42  | 2811.53 | 4.87
Average   | 9.07  | 585.91  | 641.01   | 1409.33  | 1371.69 | 626.55  | 2834.38 | 4.02
Percentage Improvement | 55.65 | 99.31 | 99.37 | 99.71 | 99.71 | 99.36 | 99.86 | —

Size of Codebook: 32 — Average time taken for computation at bitrate = 0.3125
Image     | LBG   | PSO-LBG | QPSO-LBG | HBMO-LBG | FA-LBG  | BA-LBG  | CS-LBG  | FLBG
CAMERAMAN | 9.16  | 521.32  | 554.42   | 1291.34  | 1298.46 | 593.81  | 2209.83 | 4.12
PEPPER    | 9.82  | 532.17  | 428.92   | 898.76   | 934.76  | 546.91  | 1713.63 | 4.59
BABOON    | 8.88  | 468.12  | 497.96   | 1249.01  | 1243.71 | 549.34  | 2715.31 | 3.93
GOLDHILL  | 7.72  | 476.64  | 538.46   | 1340.21  | 1299.82 | 480.45  | 2625.02 | 3.06
BARB      | 10.03 | 423.93  | 474.92   | 1349.01  | 1320.15 | 422.78  | 2025.72 | 5.23
Average   | 9.12  | 484.44  | 498.94   | 1225.67  | 1219.38 | 518.66  | 2257.90 | 4.19
Percentage Improvement | 54.11 | 99.14 | 99.16 | 99.66 | 99.66 | 99.19 | 99.81 | —

Size of Codebook: 64 — Average time taken for computation at bitrate = 0.3750
Image     | LBG   | PSO-LBG | QPSO-LBG | HBMO-LBG | FA-LBG  | BA-LBG  | CS-LBG  | FLBG
CAMERAMAN | 11.31 | 665.23  | 685.12   | 1563.76  | 1491.62 | 671.45  | 2982.76 | 5.41
PEPPER    | 11.31 | 597.42  | 599.24   | 1247.54  | 1278.45 | 636.77  | 4468.23 | 5.63
BABOON    | 12.25 | 573.61  | 590.12   | 1412.32  | 1437.11 | 740.02  | 3984.18 | 6.12
GOLDHILL  | 14.31 | 622.21  | 637.74   | 1577.14  | 1181.33 | 498.56  | 4305.03 | 7.11
BARB      | 16.73 | 460.21  | 466.24   | 1306.21  | 854.17  | 398.74  | 2721.12 | 8.32
Average   | 13.18 | 583.74  | 595.69   | 1421.39  | 1248.54 | 589.11  | 3692.26 | 6.52
Percentage Improvement | 50.55 | 98.88 | 98.91 | 99.54 | 99.48 | 98.89 | 99.82 | —

Size of Codebook: 128 — Average time taken for computation at bitrate = 0.4375
Image     | LBG   | PSO-LBG | QPSO-LBG | HBMO-LBG | FA-LBG  | BA-LBG  | CS-LBG  | FLBG
CAMERAMAN | 16.28 | 645.34  | 657.54   | 1080.44  | 1054.19 | 628.82  | 1932.31 | 8.94
PEPPER    | 19.41 | 600.32  | 675.32   | 1132.68  | 1081.51 | 623.62  | 2220.82 | 10.71
BABOON    | 22.61 | 467.57  | 536.07   | 1112.14  | 1060.77 | 963.91  | 2785.66 | 11.11
GOLDHILL  | 18.21 | 835.27  | 860.38   | 1413.08  | 1343.73 | 502.21  | 1962.02 | 9.92
BARB      | 30.34 | 579.21  | 589.54   | 1291.81  | 1271.32 | 562.24  | 2438.01 | 12.12
Average   | 21.37 | 625.54  | 663.77   | 1206.03  | 1162.30 | 656.16  | 2267.76 | 10.56
Percentage Improvement | 50.58 | 98.31 | 98.41 | 99.12 | 99.09 | 98.39 | 99.53 | —

Size of Codebook: 256 — Average time taken for computation at bitrate = 0.50
Image     | LBG   | PSO-LBG | QPSO-LBG | HBMO-LBG | FA-LBG  | BA-LBG  | CS-LBG  | FLBG
CAMERAMAN | 23.15 | 898.23  | 922.46   | 824.23   | 816.44  | 696.46  | 1627.23 | 13.24
PEPPER    | 18.33 | 760.12  | 760.12   | 1010.65  | 984.54  | 574.41  | 1750.66 | 13.98
BABOON    | 28.25 | 599.51  | 568.24   | 1019.86  | 1040.91 | 572.32  | 2039.75 | 14.13
GOLDHILL  | 29.92 | 931.61  | 560.64   | 850.06   | 834.32  | 981.93  | 2978.06 | 14.33
BARB      | 27.84 | 689.71  | 698.66   | 847.23   | 837.72  | 596.06  | 2598.43 | 13.82
Average   | 25.50 | 775.84  | 702.02   | 910.41   | 902.79  | 684.24  | 2198.83 | 13.90
Percentage Improvement | 45.49 | 98.21 | 98.02 | 98.47 | 98.46 | 97.97 | 99.37 | —

Size of Codebook: 512 — Average time taken for computation at bitrate = 0.5625
Image     | LBG   | PSO-LBG | QPSO-LBG | HBMO-LBG | FA-LBG  | BA-LBG  | CS-LBG  | FLBG
CAMERAMAN | 36.84 | 731.32  | 758.23   | 1125.23  | 1078.35 | 849.45  | 1643.64 | 19.13
PEPPER    | 39.31 | 934.72  | 887.53   | 665.21   | 650.81  | 533.07  | 1371.76 | 19.12
BABOON    | 20.71 | 657.02  | 715.02   | 712.05   | 723.12  | 803.57  | 1158.42 | 16.32
GOLDHILL  | 35.14 | 582.97  | 601.74   | 955.58   | 960.07  | 885.02  | 1253.42 | 18.33
BARB      | 72.52 | 815.84  | 706.91   | 878.68   | 872.67  | 693.68  | 1805.61 | 36.23
Average   | 40.90 | 744.37  | 733.89   | 867.35   | 857.00  | 752.96  | 1446.57 | 21.83
Percentage Improvement | 46.64 | 97.07 | 97.03 | 97.48 | 97.45 | 97.10 | 98.49 | —

Size of Codebook: 1024 — Average time taken for computation at bitrate = 0.625
Image     | LBG    | PSO-LBG | QPSO-LBG | HBMO-LBG | FA-LBG  | BA-LBG  | CS-LBG  | FLBG
CAMERAMAN | 66.74  | 1532.32 | 1572.27  | 1918.71  | 2013.43 | 1576.42 | 3679.84 | 35.46
PEPPER    | 63.44  | 1022.78 | 1156.53  | 1254.02  | 1231.37 | 855.74  | 2059.35 | 32.13
BABOON    | 66.55  | 1435.45 | 1439.54  | 1664.23  | 1636.13 | 1665.13 | 2396.33 | 34.13
GOLDHILL  | 94.67  | 1353.84 | 1369.65  | 1489.71  | 1483.25 | 775.30  | 2340.92 | 48.36
BARB      | 112.32 | 1515.02 | 1503.81  | 1199.27  | 1181.88 | 1133.72 | 2400.28 | 58.23
Average   | 80.74  | 1371.88 | 1408.36  | 1505.19  | 1509.21 | 1201.26 | 2575.34 | 41.66
Percentage Improvement | 48.40 | 96.96 | 97.04 | 97.23 | 97.24 | 96.53 | 98.38 | —

Share and Cite

MDPI and ACS Style

Bilal, M.; Ullah, Z.; Mujahid, O.; Fouzder, T. Fast Linde–Buzo–Gray (FLBG) Algorithm for Image Compression through Rescaling Using Bilinear Interpolation. J. Imaging 2024, 10, 124. https://doi.org/10.3390/jimaging10050124
