*Article* **Hybrid Data Hiding Based on AMBTC Using Enhanced Hamming Code**

#### **Cheonshik Kim 1,\*,†, Dongkyoo Shin 1, Ching-Nung Yang 2,† and Lu Leng 3,4,\***


Received: 3 July 2020; Accepted: 30 July 2020; Published: 2 August 2020

**Abstract:** The image-based data hiding method is a technology used to transmit confidential information secretly. Since images (e.g., grayscale images) usually have sufficient redundant information, they are a very suitable medium for hiding data. Absolute Moment Block Truncation Coding (AMBTC) is a compression method that is appropriate for embedding data due to its very low complexity and acceptable distortion. However, since an AMBTC-compressed image has little redundant data compared to a grayscale image, embedding data in it is a very challenging topic; that is the motivation and challenge of this research. Meanwhile, Hamming codes, block codes that can detect up to two simultaneous bit errors and correct single bit errors, are used to embed the secret bits. In this paper, we propose an effective data hiding method for the two quantization levels of each AMBTC block using Hamming codes. Bai and Chang introduced a method applying Hamming (7,4) to the two quantization levels; however, their scheme is inefficient, and the image distortion error is relatively large. To solve the problem of image distortion errors, this paper introduces a way of optimizing codewords and reducing pixel distortion by utilizing Hamming (7,4) and lookup tables. In the experiments, when concealing 150,000 bits in the Lena image, the averages of the Normalized Cross-Correlation (NCC) and Mean-Squared Error (MSE) of our proposed method were 0.9952 and 37.9460, respectively, which were the best among the compared methods. Extensive experiments confirmed that the performance of the proposed method is satisfactory in terms of embedding capacity and image quality.

**Keywords:** data hiding; AMBTC; BTC; Hamming code; LSB

#### **1. Introduction**

Recently, the Internet has become like a single trading world in which almost all digital content is distributed, because every trading system is connected by high-speed networks such as 5G. Many people distribute digital content in this space and constantly consume it. The problem with this digital space is that copyright problems occur because digital content is easily redistributed, copied, and modified by illegal users. There are various solutions to this problem, but the most commonly used one is digital watermarking [1–3], which protects the integrity and reliability of digital media.

Besides watermarking technology, Data Hiding (DH) is the most commonly used technology for concealing information in digital media. The DH [4–6] technique can be used in various fields, such as digital signatures, fingerprint recognition, authentication, and secret communication. It has been demonstrated many times that DH can be used for secret communication, as well as for the copyright protection of digital content. People who use Internet communication know that the Internet is not a fully protected communication channel due to the many attackers. However, secret communication using DH can safely protect secret messages in a digital cover medium even over this insecure Internet channel.

DH can serve as a secret communication strategy only when it satisfies two important criteria. First, the quality of the stego image (the cover image containing the data) should not differ significantly from the quality of the original image, since the hidden data must not be detected by attackers while the image is transmitted. Second, it must be able to transmit a large amount of secret data to the receiver securely.

The DH method is mainly conducted in two domains, namely the spatial domain and the frequency domain. In the spatial domain, secret bits are concealed directly in the pixels of a host image. Spatial-domain DH is typically applied to grayscale images; even when the four Least Significant Bits (LSBs) [7–10] of each pixel are used for information hiding, the changes may not be detected by the Human Visual System (HVS).
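As a minimal illustration of this idea (not any specific scheme from [7–10]; the function names are ours), *k*-LSB substitution on a single pixel can be sketched in Python as follows:

```python
def embed_lsb(pixel: int, bits: str) -> int:
    """Replace the len(bits) least significant bits of pixel with bits."""
    k = len(bits)
    return (pixel >> k << k) | int(bits, 2)

def extract_lsb(pixel: int, k: int) -> str:
    """Read back the k least significant bits as a bit string."""
    return format(pixel & ((1 << k) - 1), f"0{k}b")

stego = embed_lsb(0b10110110, "0011")  # 182 -> 179: only the 4 LSBs change
```

Since only low-order bits change, the per-pixel error is bounded by 2<sup>*k*</sup> − 1, which is why four LSBs remain imperceptible to the HVS.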

Reversible DH (RDH) [11–20] is a special case of DH in the academic community. In RDH, after the embedded bits are extracted, the stego image can be recovered back to the original image without distortion. Representative RDH methods are based on Difference Expansion (DE) [11,12], image compression [13], Histogram Shifting (HS) [14,15], Prediction-Error expansion (PE) [16,17], and encrypted images [18,19] for privacy preserving.

In the frequency domain, a cover image is converted into a frequency representation, and then the data are concealed in the frequency coefficients. The two most common frequency-domain methods are based on the Discrete Cosine Transform (DCT) [21,22] and the Discrete Wavelet Transform (DWT) [23]. Since changing a coefficient adversely affects the image quality, it is necessary to find and change the positions of the coefficients that have a relatively small influence on the image quality during data insertion. Spatial-domain methods can conceal more secret data than frequency-domain methods, and the image quality is better, but they are more vulnerable to compression, noise, and filtering attacks. Meanwhile, well-compressed images like JPEG are preferred as digital media because the file size is small compared to raw images and they are easily transmitted. For this reason, many researchers have long studied watermarking and DH methods based on JPEG compression.

Block Truncation Coding (BTC) [24] is a compression method whose configuration is very simple compared to conventional JPEG. Thus, the computation time of BTC is much shorter than that of JPEG, and the quality of a BTC image is not significantly deteriorated compared to the original image. For this reason, many researchers have recently become interested in DH based on Absolute Moment Block Truncation Coding (AMBTC) [25–27], which originated from BTC. Chuang and Chang [28] proposed a DH method based on AMBTC that divides the blocks of an image into smooth blocks and complex blocks and then directly replaces the bitmaps of the smooth blocks with the secret bits. It is called the Direct Bitmap Substitution (DBS) method. The merit of this method is that it can control the quality of the stego image by adjusting the threshold *T*(= *b* − *a*), because the number of blocks used for DH is decided according to the threshold value *T*. Here, *a* and *b* are the quantization levels of each block in AMBTC. As the threshold *T* increases, the embedding capacity increases, while the image quality worsens. When the threshold *T* decreases, the quality of the image improves, but the embedding capacity may be reduced.

Ou and Sun [29] introduced a way to embed data in the bitmaps of smooth blocks and proposed a method to reduce the distortions of the image by adjusting the two quantization levels through re-computation, but the original image is required for the re-calculation. Bai and Chang [30] proposed a way to embed secret data by applying a Hamming Code, i.e., HC(7,4) [7], to the two quantization levels and the bitmaps of AMBTC, respectively. When HC(7,4) is used for a complex block of AMBTC, it may be undesirable for high image quality. Kumar et al. [31] used two threshold values to increase the capacity of DH without significantly degrading the image quality. Chen et al. [32] proposed a lossless DH method using the order of the two quantization levels in a *trio*. This method is named the Order of Two Quantization Level (OTQL) method, which can conceal one bit per block. For example, to store the bit "1", the order of the two quantization levels, *a* and *b*, is reversed as *trio*(*b*, *a*, *BM*). This method does not change the values of the two quantization levels, so it does not affect the quality of the image.
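The OTQL idea described above is simple enough to sketch directly (function names are ours):

```python
# Minimal sketch of OTQL [32]: one bit per block is carried purely by the
# order of the two quantization levels, so no pixel value ever changes.
def otql_embed(a: int, b: int, bit: int):
    """Store bit in the trio order: (a, b) encodes 0, (b, a) encodes 1
    (assumes a < b, as produced by AMBTC)."""
    return (b, a) if bit == 1 else (a, b)

def otql_extract(q1: int, q2: int) -> int:
    """The hidden bit is 1 iff the levels arrive in descending order."""
    return 1 if q1 > q2 else 0
```

Because the decoder only needs to compare the two levels, extraction is blind and the reconstruction via Equation (5) can simply restore the ascending order first.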

Hong [33] proposed a DH method using Pixel Pair Matching (PPM) [34], where PPM is applied to the quantization levels, while the existing OTQL and DBS are used for complex and smooth blocks, respectively. In 2017, Huang et al. [35] proposed a scheme for hiding data using the pixel differences (hidden bits = *log*<sub>2</sub>*T*, derived from the difference expansion method) at the two quantization levels and introduced a method to adjust the differences in the quantization levels to maintain the image quality. This method is also a hybrid of OTQL and DBS. Chen and Chi [36] sub-divided blocks into less complex blocks and highly complex blocks. In 2016, Malik et al. [37] introduced a DH based on AMBTC using a two bit plane and four quantization levels. The merit of this method is the high payload, and the demerit is the decrease in the compression rate.

The motivations for proposing a DH method using the Hamming code on images compressed with AMBTC are as follows. First, AMBTC is suitable for DH because it has reasonable compression performance, very low computational complexity, and (although not many) redundant bits; in addition, DH for AMBTC-compressed images is relatively less studied than for grayscale images. Second, the Hamming code is very efficient for media with redundant bits, such as grayscale images, as demonstrated in previous studies [7,10]. However, since an image compressed with AMBTC has fewer redundant bits than a grayscale image, embedding enough secret bits in the two quantization levels negatively affects the image when the bitmap is decoded. Third, Bai and Chang [30] attempted to conceal data at the two quantization levels, but did not achieve optimized performance. Therefore, it is essential to develop an optimized DH process.

The main contributions of this paper are summarized as follows:


The rest of this paper is organized as follows. Section 2 gives the introduction of the background research. The proposed method is described in detail in Section 3. The experimental results are analyzed in Section 4. Section 5 draws the conclusions.

#### **2. Preliminaries**

#### *2.1. AMBTC*

Absolute Moment Block Truncation Coding (AMBTC) [25] efficiently reduces the computation time of Block Truncation Coding (BTC) and improves the image quality over BTC. The basic configuration of one block in AMBTC is two quantization levels and one bitmap, and each block is compressed by preserving the absolute moment. Here, the two quantization values are obtained by calculating the higher mean and the lower mean of each block. For AMBTC compression, the grayscale image is first divided into (*k* × *k*) blocks without overlapping, where *k* determines the compression level, e.g., (4 × 4), (6 × 6), or (8 × 8). AMBTC adopts block-by-block operations. For each block, the average pixel value is calculated by:

$$\bar{x} = \frac{1}{k \times k} \sum_{i=1}^{k^2} x_i \tag{1}$$

where *x*<sub>*i*</sub> represents the *i*th pixel value of this block with a size of *k* × *k*. All pixels in this block are quantized into a bitmap *b*<sub>*i*</sub> (zero or one); that is, if the corresponding pixel *x*<sub>*i*</sub> is greater than or equal to the average (*x*¯), it is replaced with "1"; otherwise, it is replaced with "0". The pixels in each block are thus divided into two groups, "0" and "1". The symbols *t* and *k*<sup>2</sup> − *t* refer to the numbers of pixels in the "0" and "1" groups, respectively. The means *a* and *b* of the two groups are the quantization levels of the groups "0" and "1". The two quantization levels are calculated by Equations (2) and (3).

$$a = \left\lfloor \frac{1}{t} \sum_{x_i < \bar{x}} x_i \right\rfloor \tag{2}$$

$$b = \left\lfloor \frac{1}{k^2 - t} \sum_{x_i \ge \bar{x}} x_i \right\rfloor \tag{3}$$

where *a* and *b* are also used to reconstruct AMBTC.

$$b_i = \begin{cases} 1, & \text{if } x_i \ge \bar{x}, \\ 0, & \text{if } x_i < \bar{x}. \end{cases} \tag{4}$$

$$g_i = \begin{cases} a, & \text{if } b_i = 0, \\ b, & \text{if } b_i = 1. \end{cases} \tag{5}$$

The bitmap is obtained from Equation (4), and the compressed block is simply decompressed by using Equation (5); that is, the compressed code unit, *trio*(*a*, *b*, *BM*), may be obtained by using Equations (2)–(5). The image block is compressed into two quantization levels *a*, *b* and a Bitmap (BM) and can be represented as *trio*(*a*, *b*, *BM*). The BM contains the bit-plane that represents the pixels, and the values *a* and *b* are used to decode the AMBTC compressed image by using Equation (5).

**Example 1.** *Here, we describe the encoding and decoding procedure of one block of a grayscale image using an example. Figure 1a is a grayscale block, and the mean value of the pixels is 106. By applying Equations (2)–(4) on (a), we can obtain the bitmap as shown in (b) and two quantization levels (a* = 102; *b* = 107*). The basic unit of each block is trio*(*a*, *b*, *BM*) *= (102, 107, 0101111111011001). Using the information of the trio and Equation (5), the decoded grayscale block in (c) is reconstructed.*


Compressed unit: *trio*(*a*, *b*, *BM*) = (102, 107, 0101111111011001)

**Figure 1.** An example of AMBTC: (**a**) a natural block; (**b**) a bitmap block; (**c**) a reconstructed block. BM, Bitmap.
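The encoding and decoding of Equations (1)–(5) can be sketched as follows (a minimal Python version; function names are ours, and the sketch assumes the block is not constant, so both groups are non-empty):

```python
import numpy as np

def ambtc_encode(block: np.ndarray):
    """Compress one k x k block into trio(a, b, BM), Equations (1)-(4)."""
    mean = block.mean()                        # Equation (1)
    bm = (block >= mean).astype(int)           # Equation (4)
    a = int(np.floor(block[bm == 0].mean()))   # Equation (2): low mean
    b = int(np.floor(block[bm == 1].mean()))   # Equation (3): high mean
    return a, b, bm

def ambtc_decode(a: int, b: int, bm: np.ndarray) -> np.ndarray:
    """Reconstruct the block from trio(a, b, BM), Equation (5)."""
    return np.where(bm == 1, b, a)
```

Every reconstructed pixel is either *a* or *b*, which is why a block with a small range (*b* − *a*) decodes with little visible distortion.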

#### *2.2. Hamming Code*

The Hamming Code (HC) [38] is a single error-correcting linear block code with a minimum distance of three for all the codewords. In HC(*n*, *k*), *n* is the length of the codeword, *k* is the number of information bits, and (*n* − *k*) is the number of parity bits.

Let *x* be a *k* bit information word. The *n* bit codeword *y* is created by *y* = *xG*, where *G* is the *k* × *n* generator matrix. Suppose *y*˜ is the received word, and let *e* = *y* ⊕ *y*˜ be the error vector that indicates whether an error occurred while sending *y*. If *e* = 0, no error occurred, and *y*˜ = *y*.

Otherwise, the weight of *e* represents the number of errors. Let *H* be an (*n* − *k*) × *n* parity check matrix with the relation *G* · *H*<sup>*T*</sup> = [0]<sub>*k*×(*n*−*k*)</sub>. Assume that the received word *y*˜ contains an error *e* (= *y* ⊕ *y*˜). In this case, we can correct a one bit error in *y*˜ by using the syndrome *S* = *y*˜ · *H*<sup>*T*</sup>, where the syndrome denotes the position of the error in the codeword. As shown in Equation (6), the error *e* can be obtained.

$$\begin{cases} \tilde{y} \cdot H^T = (e \oplus y) \cdot H^T = e \cdot H^T + y \cdot H^T \\ y \cdot H^T = (x \cdot G) \cdot H^T = x \cdot (G \cdot H^T) = 0 \\ e \cdot H^T + 0 = e \cdot H^T \end{cases} \tag{6}$$

Consider HC(7,4) with the following parity matrix.

$$H = \begin{bmatrix} 0 & 0 & 0 & 1 & 1 & 1 & 1\\ 0 & 1 & 1 & 0 & 0 & 1 & 1\\ 1 & 0 & 1 & 0 & 1 & 0 & 1 \end{bmatrix} \tag{7}$$

For example, assume that a one bit error occurred in *y*, e.g., the second bit from the left, so that *e* = (*e*<sub>1</sub>, *e*<sub>2</sub>, ... , *e*<sub>7</sub>) = (0100000). We can obtain the error position and recover the one bit error from the received word by calculating the syndrome *S* (= *y*˜ · *H*<sup>*T*</sup> = (010)).
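This syndrome computation can be sketched directly from the *H* of Equation (7) (variable names are ours):

```python
import numpy as np

# Parity check matrix H of HC(7,4) from Equation (7). Its columns, read as
# 3-bit binary numbers, are 1..7, so the syndrome equals the error position.
H = np.array([[0, 0, 0, 1, 1, 1, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [1, 0, 1, 0, 1, 0, 1]])

def syndrome(word: np.ndarray) -> list:
    """S = word . H^T over GF(2); H @ word yields the same three bits."""
    return list((H @ word) % 2)

codeword = np.zeros(7, dtype=int)   # the all-zero word is a valid codeword
received = codeword.copy()
received[1] ^= 1                    # flip the second bit from the left
# syndrome(received) is [0, 1, 0]: binary 010 = 2, the flipped position.
```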

#### *2.3. Bai and Chang's Method*

For DH, the AMBTC algorithm is applied to the original cover image to obtain a low mean, a high mean, and a bitmap for every block. Then, the secret message is concealed in the AMBTC compressed *trio*(*a*, *b*, *BM*). The merit of AMBTC-based DH is that it achieves a higher payload compared to other DH schemes performed in the compression domain. The method proposed by Bai and Chang is composed of two stages; one of them embeds three bits in the two quantization levels of *trio*(*a*, *b*, *BM*) by using HC(7,4). The detailed process of this method is as follows.


additional bit. If the bit to be embedded is "1", swap the order of the two quantization levels as (*trio*(*b*, *a*, *BM*)), otherwise no change is conducted.

In Step 4, it is possible to embed an additional bit only under the given condition (*b* − *a* ≥ 8). This condition is necessary because, if the difference between the values of *a* and *b* is small, the order of the two values may be reversed as a result of the Hamming code computation, which may cause an ambiguous result in the decoding procedure.

#### **3. The Proposed Scheme**

In this section, we introduce a DH method that embeds secret data in the bitmaps and quantization levels of AMBTC using an optimized Hamming code and the DBS method. First, the compressed blocks, *trios*, are classified into smooth blocks and complex blocks. Then, DBS is applied to the bitmaps of the smooth blocks, while the Hamming code may be applied to the quantization levels regardless of the block characteristics. The method proposed by Bai and Chang results in large distortion of the cover image; in Section 3.1, we introduce a way to solve this problem.

#### *3.1. Embedding Procedure*

We introduce a way of DH using the Hamming code, DBS, and OTQL based on AMBTC and explain the details of the procedure step-by-step as follows. Additionally, the flowchart of the embedding process is described in Figure 2.


$$y = (a_4 a_3 a_2 a_1 \,||\, b_3 b_2 b_1) \tag{8}$$

where the symbol || denotes concatenation. Note that *a*<sub>4</sub> and *b*<sub>1</sub> are the MSB and LSB of the rearranged codeword *y*, respectively.

**Step 4:** In Figure 3b, the location of the coset leader that matches the decimal number *d* of the three message bits (*m*<sub>*i*</sub>, *m*<sub>*i*+1</sub>, *m*<sub>*i*+2</sub>) is retrieved from the Lookup Table (LUT) using the procedure in Figure 4. Assuming that *x* = (*x*<sub>7</sub> *x*<sub>6</sub> *x*<sub>5</sub> *x*<sub>4</sub> *x*<sub>3</sub> *x*<sub>2</sub> *x*<sub>1</sub>) is a codeword in the row of the retrieved coset leader, it is converted to (*α*′, *β*′); that is, *α*′ = *bin*2*dec*(*x*<sub>7</sub> *x*<sub>6</sub> *x*<sub>5</sub> *x*<sub>4</sub>) and *β*′ = *bin*2*dec*(*x*<sub>3</sub> *x*<sub>2</sub> *x*<sub>1</sub>). Meanwhile, the codeword *y* generated in Step 3 is converted to (*α*, *β*), where *α* = *bin*2*dec*(*y*<sub>7</sub> *y*<sub>6</sub> *y*<sub>5</sub> *y*<sub>4</sub>) and *β* = *bin*2*dec*(*y*<sub>3</sub> *y*<sub>2</sub> *y*<sub>1</sub>). The distance between *x* and *y* is calculated using Equation (9). After calculating (*α* − *α*′)<sup>2</sup> + (*β* − *β*′)<sup>2</sup> for all codewords, the codeword with the minimum distance is selected and denoted *h* = (*α*′, *β*′).

$$\epsilon = \min\left( (\alpha - \alpha')^2 + (\beta - \beta')^2 \right) \tag{9}$$

For the codeword *h*, two quantization levels, *a* and *b*, are constructed as follows:

$$\begin{cases} a = (a_8 a_7 a_6 a_5 \,||\, h_7 h_6 h_5 h_4) \\ b = (b_8 b_7 b_6 b_5 b_4 \,||\, h_3 h_2 h_1) \end{cases} \tag{10}$$

Before the next step, the index variable *i* is increased by three.
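The codeword search of Step 4 can be sketched as an exhaustive search. This is a simplified stand-in for the LUT of Figures 3 and 4: we assume the common matrix-embedding convention that the hidden bits equal the syndrome, which may differ in detail from the paper's LUT indexing, and the helper names are ours.

```python
import numpy as np
from itertools import product

# Parity check matrix H of HC(7,4) from Equation (7).
H = np.array([[0, 0, 0, 1, 1, 1, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [1, 0, 1, 0, 1, 0, 1]])

def syndrome(word):
    return tuple((H @ np.asarray(word)) % 2)

def split_ab(word):
    """Decimal values of the 4-bit and 3-bit halves, as in Step 4."""
    bits = "".join(str(b) for b in word)
    return int(bits[:4], 2), int(bits[4:], 2)

def best_word(y, m):
    """Among all 7-bit words whose syndrome equals the 3 message bits m,
    return the one minimizing (alpha - alpha')^2 + (beta - beta')^2
    (Equation (9)) against the codeword y built from the two levels."""
    alpha, beta = split_ab(y)
    candidates = [w for w in product([0, 1], repeat=7)
                  if syndrome(w) == tuple(m)]
    return min(candidates,
               key=lambda w: (split_ab(w)[0] - alpha) ** 2
                           + (split_ab(w)[1] - beta) ** 2)
```

Measuring distance on the decimal halves (*α*, *β*) rather than in Hamming distance is the key design choice: it directly bounds the change to the two quantization levels after reassembly by Equation (10).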


**Figure 2.** The flowchart of the data embedding process. OTQL, Order of Two Quantization Level.


**Figure 3.** Standard array of HC(7,4) for Data Hiding (DH): (**a**) binary presentation and (**b**) decimal presentation.

**Figure 4.** The flowchart of the lookup codeword with *m*. HC, Hamming Code.

#### *3.2. Extraction Procedure*

The procedure for extracting the hidden secret bits is shown in Figure 5. The process is explained in detail according to the following procedure.



#### *3.3. Examples*

Here, we show how to minimize the errors in the encoding process through an optimized method rather than the existing method. The detailed procedure of our proposed DH is explained by the process shown in Figure 6 using *trio*(103, 109, 0000010001110111) and the secret bits *m* = (1011010111100001100). Since *b* − *a* = 109 − 103 = 6 ≤ *T* (= 7), the *trio* is classified as a smooth block. Therefore, in Figure 2, the data concealment process proceeds according to the processing corresponding to the smooth block of the *trio*. From now on, the process shown in Figure 6a is explained step-by-step.

(1) The two quantization levels of a given *trio* are assigned to variables *a* and *b* and then converted to binary, i.e., *a* = 103 = 01100111<sub>2</sub> and *b* = 109 = 01101101<sub>2</sub>.


**Figure 5.** The flowchart of the extracting procedure.

In Figure 6b, we explain a way of embedding secret bits into the bitmap.


To extract secret bits from the two quantization levels, we need to construct a codeword using the quantization levels. To construct the codeword, the procedure of Figure 6a is followed. That is, the codeword (*y* = 0111100) is obtained by extracting the four LSBs (0111) and the three LSBs (100) from the two quantization levels *a* (= 103) and *b* (= 108) and concatenating them. Here, we obtain the hidden secret bits, *m* = (101), by applying the equation *S* = *y* · *H*<sup>*T*</sup> to the codeword. The decoding of the secret bits in the BM extracts the hidden bits by moving all pixels in the BM into the variable *m* array directly.

**Figure 6.** Illustration of data embedding.
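Assuming a syndrome-as-message convention (a common matrix-embedding formulation that may differ in detail from the paper's LUT indexing; the helper names are ours), extraction from the two quantization levels can be sketched as:

```python
import numpy as np

# Parity check matrix H of HC(7,4) from Equation (7).
H = np.array([[0, 0, 0, 1, 1, 1, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [1, 0, 1, 0, 1, 0, 1]])

def extract_from_levels(a: int, b: int):
    """Rebuild y = (4 LSBs of a || 3 LSBs of b) and read its syndrome."""
    y = [(a >> i) & 1 for i in (3, 2, 1, 0)] + \
        [(b >> i) & 1 for i in (2, 1, 0)]
    return tuple((H @ np.array(y)) % 2)   # S = y . H^T
```

Extraction is blind: only the stego *trio* and *H* are needed, not the original image.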

#### **4. Experimental Results**

In this section, we evaluate the performance of our proposed scheme by comparing it with existing methods, i.e., those of Bai and Chang [30], W Hong [33], and Chuang et al. [28]. As shown in Figure 7, six grayscale images sized 512 × 512 are used for our experiments. In addition, the block size of AMBTC is set to 4 × 4, and the secret bits are generated by a pseudo-random number generator. The Embedding Capacity (EC) and the Peak Signal-to-Noise Ratio (PSNR) are widely used as objective image evaluation indices. Here, EC indicates the number of secret bits that can be embedded in a cover image. A relatively high PSNR value means that the quality of the stego image is good. The DH capacity is the number of secret bits that are embedded in the cover image. The quality of the image is measured by the PSNR, defined as:

$$PSNR = 10 \log_{10} \frac{255^2}{MSE} \tag{11}$$

The Mean-Squared Error (MSE) used in the PSNR denotes the average intensity difference between the stego and reference images.
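The PSNR of Equation (11) and its underlying MSE can be computed as follows (a short sketch for 8-bit images; function names are ours):

```python
import numpy as np

def mse(p: np.ndarray, q: np.ndarray) -> float:
    """Mean-squared error between reference p and distorted q."""
    return float(np.mean((p.astype(float) - q.astype(float)) ** 2))

def psnr(p: np.ndarray, q: np.ndarray) -> float:
    """PSNR of Equation (11); 255 is the peak intensity of 8-bit images."""
    m = mse(p, q)
    return float("inf") if m == 0 else 10 * np.log10(255.0 ** 2 / m)
```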


**Figure 7.** Test images: (**a**∼**f**) 512 × 512.

The lower the MSE value of a stego image, the better the quality of the image. The MSE is calculated using the reference image *p* and the distorted image *p*′ as follows.

$$MSE = \frac{1}{N \times N} \sum_{i=1}^{N} \sum_{j=1}^{N} (p_{ij} - p'_{ij})^2 \tag{12}$$

The error value *p*<sub>*ij*</sub> − *p*′<sub>*ij*</sub> indicates the difference between the original and the distorted pixels. The term 255<sup>2</sup> in Equation (11) denotes the maximum allowable pixel intensity. A typical PSNR value for a lossy image with an eight bit depth is from 30 dB to 50 dB; the higher, the better. Structural SIMilarity (SSIM) [39] estimates whether changes such as image brightness, contrast, and other residual errors are identified as structural changes. The SSIM value is limited to a range between zero and one. If the SSIM value is close to one, the stego image is similar to the cover image and of high quality. The equation of SSIM is as follows:

$$SSIM(x,y) = \frac{(2\mu_x\mu_y + c_1)(2\sigma_{xy} + c_2)}{(\mu_x^2 + \mu_y^2 + c_1)(\sigma_x^2 + \sigma_y^2 + c_2)} \tag{13}$$

where *μ*<sub>*x*</sub> and *μ*<sub>*y*</sub> are the mean values of the cover image (*x*) and stego image (*y*); *σ*<sub>*x*</sub><sup>2</sup>, *σ*<sub>*y*</sub><sup>2</sup>, and *σ*<sub>*xy*</sub> are the variances and covariance of the cover and stego images; and *c*<sub>1</sub> and *c*<sub>2</sub> are constants to avoid the division by zero problem.
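The global form of SSIM in Equation (13) can be sketched as follows. Note that the reference implementation computes SSIM over local windows and averages the results; this whole-image version, with the standard constants *c*<sub>1</sub> = (0.01*L*)<sup>2</sup> and *c*<sub>2</sub> = (0.03*L*)<sup>2</sup> as an assumption, is only illustrative.

```python
import numpy as np

def ssim(x: np.ndarray, y: np.ndarray, L: int = 255) -> float:
    """Whole-image SSIM of Equation (13); c1, c2 follow the common
    choice c1 = (0.01*L)**2, c2 = (0.03*L)**2 (an assumption here)."""
    c1, c2 = (0.01 * L) ** 2, (0.03 * L) ** 2
    x, y = x.astype(float), y.astype(float)
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))
```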

The Normalized Cross-Correlation (NCC) has been commonly used as a metric to evaluate the degree of similarity (or dissimilarity) between two compared images. The main advantage of the NCC is that it is less sensitive to linear changes in the amplitude of illumination in the two compared images. Furthermore, the NCC is confined to the range between −1 and one. NCC is calculated by the formula given in Equation (14).

$$NCC = \frac{\sum_{x=1}^{M} \sum_{y=1}^{N} \left( S(x,y) \times C(x,y) \right)}{\sum_{x=1}^{M} \sum_{y=1}^{N} S(x,y)^2} \tag{14}$$
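Equation (14) can be computed directly (a short sketch; the function name is ours, and we keep the paper's normalization by the stego image's energy rather than the symmetric form):

```python
import numpy as np

def ncc(S: np.ndarray, C: np.ndarray) -> float:
    """NCC of Equation (14): stego S against cover C, normalized by
    the stego image's energy as the paper writes it."""
    S, C = S.astype(float), C.astype(float)
    return float((S * C).sum() / (S ** 2).sum())
```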

Table 1 presents the comparison of EC and PSNR between the proposed scheme and the existing methods, i.e., Ou and Sun [29], Bai and Chang [30], and W Hong et al. [33]. Specifically, we compare the performance of our scheme and the existing methods on six images when the threshold value *T*(= *b* − *a*) is 5, 10, and 20. Evaluating EC and the PSNR at the same threshold values is necessary for an objective and fair comparison; that is, data measured under the same threshold value allow a more meaningful comparison. One important point about EC and PSNR is that there is a trade-off between the two metrics: if EC is higher, the PSNR is reduced, and vice versa. However, when a method performs very well, the deviation from this trade-off may not be large. Our proposed method achieves an efficient EC of 151,173 bits when *T* = 5.

**Table 1.** PSNR and Embedding Capacity (EC) according to different thresholds *T*.


In Table 1, Bai and Chang's PSNR (=33.0656 dB) is higher than that (=32.6911 dB) of our proposed method. However, the EC of Bai and Chang [30] is 55,407 bits, while the EC of our method is 151,173 bits. In the end, our proposed method can conceal about 95,000 more bits than that of Bai and Chang.

If the same threshold *T* and EC were given for a fair measurement, the PSNR of our proposed method would be the highest, because the number of hidden bits affects the PSNR. Clearly, a relatively good method has high values for both the PSNR and EC. When *T* = 10, our method's EC (=219,847 bits) is the largest. W Hong's method [33] and our proposed method have the same PSNR (31.7 dB), which is 0.1 dB lower than that of Ou and Sun's method [29]. However, in this case as well, when considering the EC, our method outperforms the other two methods.

When *T* = 20, the PSNR of the proposed method is higher than those of the previous two methods (Ou and Sun [29] and W Hong [33]), and the EC of our method is the highest. It can be seen from the simulation results in Table 1 that the proposed method conceals about 140,000 more bits than Bai and Chang's method [30] in terms of EC.

Figure 8 shows the performance comparison between our proposed method and the existing methods, where we measured the PSNR while increasing the capacity of the secret bits from 20,000 bits to 310,000 bits in four images ((a) Lena, (b) Boat, (c) Pepper, and (d) Zelda) by using the proposed and existing methods.

**Figure 8.** Performance comparisons of the proposed method and other related methods (i.e., Ou and Sun, Bai and Chang, Hong) based on four images: (**a**) Lena, (**b**) Boat, (**c**) Pepper, and (**d**) Zelda.

We propose a way to improve the performance of Bai and Chang's method [30], and as shown in Figure 8, it is confirmed that our proposed method is superior to the existing methods. On the other hand, our proposed method shows almost the same performance as W Hong's method [33], but it can be confirmed that the performance of our proposed scheme is slightly better. Ou and Sun's method [29] is superior to Bai and Chang's method [30], but its performance is not as high as that of our proposed method.

AMBTC has difficulty hiding enough data because it is a compressed code; unlike conventional grayscale images, it is not easy to achieve high embedding capacity owing to the constraint of compressed pixels. It is difficult to improve the DH performance for images with many complex blocks, and if we exploit many pixels for high data concealment, the image quality may deteriorate.

In Figure 8, we can see that the EC of Bai and Chang's method [30] is very low. That is because this method can hide only six bits of data per *trio* while inverting up to two pixels in each bitmap. Thus, there is a limit to embedding enough data in the *trio*'s bitmaps. Since this method cannot conceal many secret bits for the same threshold *T*, it shows a relatively high PSNR; after all, that is why it is inferior to the other methods. If we would like to increase the number of secret bits even at the expense of the PSNR, we can increase the threshold *T*. However, the PSNR often becomes worse than expected without increasing the number of hidden bits. For example, when *T* ≤ 4, the three methods except Bai and Chang's can hide about 130,000 bits with only slightly reduced PSNRs. For such a large amount of data to embed, they all exploit the DBS method with respect to *BM* equally.

Bai and Chang's method must increase the *T* value in order to conceal 130,000 bits of data, and as a result, the errors accumulate rapidly. Since Bai and Chang's method [30] uses up to four LSBs for data concealment, the size of the error inevitably increases. Since our proposed method uses at most three LSBs, and changes to all three LSBs occur infrequently, the negative effect on image quality is smaller than that of Bai and Chang's method [30]. In the end, this shows that the proposed method has better optimization performance than Bai and Chang's method [30].

Figure 9 shows the evaluation by comparing the histograms of stego images generated by the proposed method and the existing methods, i.e., W Hong, Ou and Sun, and Bai and Chang. Here, the stego images are generated after concealing 150,000 bits in the cover Lena image by the existing and proposed methods. The pixel value range on the x-axis is [95, 115]. In Figure 9, the curves of our proposed method and the two existing methods (i.e., W Hong and Ou and Sun) are similar, while Bai and Chang's histogram curve has a larger amplitude than the other methods. The reason is that the maximum EC of Bai and Chang's method is 150,000 bits; in other words, the quality of the image reaches its lower limit because the method exhausts all possible resources. The histograms do not show much difference because our proposed method and the two existing methods all keep more than 33 dB when concealing 150,000 bits. As shown in Figure 8, as the EC increases, the histogram of the stego image also moves farther from the histogram of the original cover image.

**Figure 9.** Compared histograms among the proposed method and other related methods with the Lena image when the number of hidden bits is 150,000.

Table 2 shows an experiment comparing the PSNR and SSIM after concealing the same amount of data (120,000 bits) in the cover image for a more objective and reliable comparison. The SSIM of the proposed method shows the highest value. On the other hand, in the case of the PSNR, W Hong's method [33] shows a higher average. In fact, the PSNR only quantifies the quality of reconstructed or damaged images relative to the original. For this reason, we introduce SSIM as a secondary evaluation criterion. SSIM evaluates the structure of the image: the SSIM of an image against itself is always one, and a value close to one means that the image quality is excellent. Therefore, we can see that our proposed method is superior to the existing methods in terms of SSIM.

**Table 2.** Performance comparison of the PSNR and SSIM among the proposed and previous schemes (using 120,000 bits).


Table 3 shows the MSE and NCC simulation results of the existing and proposed methods for the four images. The average MSE of the proposed method is lower than those of the three other methods. For the Airplane image, the MSE of our proposed method is slightly higher than those of Ou and Sun [29] and W Hong [33]; however, the NCC scores show no meaningful difference, so the performance of our proposed method is not degraded. Furthermore, when the EC of Ou and Sun's method reaches its maximum of 270,000 bits, the PSNR drops to 23 dB, whereas our proposed method maintains a PSNR above 30 dB, demonstrating the usefulness of its DH performance. Our proposed method obtains better performance than W Hong's method by using a lookup table to find more nearly optimal values.
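As a concrete reference for these two measures, the sketch below computes the MSE and one common watermarking definition of the NCC (the inner product of the cover and stego images normalized by the cover's energy). The exact NCC variant used in the experiments is not restated here, so this definition should be read as an illustrative assumption.

```python
import numpy as np

def mse(cover, stego):
    # Mean-squared error between two grayscale images of equal shape.
    d = cover.astype(np.float64) - stego.astype(np.float64)
    return float(np.mean(d * d))

def ncc(cover, stego):
    # Normalized Cross-Correlation: inner product of the two images
    # normalized by the cover image's energy; equals 1 for identical images.
    x = cover.astype(np.float64).ravel()
    y = stego.astype(np.float64).ravel()
    return float(np.dot(x, y) / np.dot(x, x))
```

With this definition, an NCC close to one indicates that the stego image is nearly indistinguishable from the cover in terms of correlation, which is why identical NCC scores across methods support the comparison above.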

**Table 3.** Performance comparison of the MSE and Normalized Cross-Correlation (NCC) between the proposed and previous schemes (using 150,000 bits).


Table 4 compares the CPU execution time of the proposed and existing methods. The experiments were run on a YOGA 730 laptop with an Intel(R) Core(TM) i5-8250U CPU at 1.6 GHz, using MATLAB R2019a. We measured the CPU time needed to conceal random bit streams of 20,000 to 200,000 bits in the Lena image with each of the four methods (i.e., Ou and Sun, W Hong, Bai and Chang, and our proposed method). The measured process includes AMBTC compression, data embedding, and AMBTC decompression. Bai and Chang's method is the most time consuming, and Ou and Sun's is the least. Our proposed method is faster than Bai and Chang's but slower than the other two. However, if the method were implemented in C, the required time would likely be less than 1 s.


**Table 4.** Comparing the CPU time between the proposed and the existing methods (measurement: seconds).

#### **5. Conclusions**

In this paper, we introduced a DH method that applies DBS and an optimized HC(7,4) to AMBTC-compressed grayscale images. The basic unit of AMBTC is the *trio*, which consists of two quantization levels and one bitmap and is represented as *trio*(*a*, *b*, *bitmap*). An AMBTC-compressed image is thus a set of trios, and the proposed DH method is applied to each block of the image. Since the final performance may differ depending on the characteristics of each block, we divided the blocks into two groups (smooth blocks and complex blocks) before applying the proposed method. Whether a block is smooth or complex is determined by the difference between its two quantization levels: if the difference (|*a* − *b*|) is smaller than or equal to the threshold *T*, the block is categorized as smooth. Hiding data in a complex block, whose difference exceeds the threshold *T*, increases the MSE compared to a smooth block, so distinguishing the two types of blocks is important for DH. In other words, the smoother the blocks are, the more they help to maintain the image quality while concealing more data. Our proposed method achieves optimized quantization levels with HC(7,4) based on a lookup table. The experiments showed that our proposed scheme surpasses the performance of Hong's method and provides a high EC while suppressing the loss of quality of the cover image. In the future, we will devise a method to calculate a more nearly optimal distance when applying HC(7,4) to two quantization levels and will investigate ways to further minimize data-concealment errors.
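The trio construction and the smooth/complex split summarized above can be sketched as follows. The 4 × 4 block size, the default threshold, and the rounding choices are illustrative assumptions, not the paper's exact implementation.

```python
import numpy as np

def ambtc_trio(block):
    """Compute the AMBTC trio (a, b, bitmap) for one grayscale block."""
    mean = block.mean()
    bitmap = block >= mean                # 1 where the pixel is >= the block mean
    lo = block[~bitmap]                   # pixels below the mean
    hi = block[bitmap]                    # pixels at or above the mean
    # a and b are the low and high quantization levels (means of each group);
    # a uniform block has an empty low group, so fall back to the block mean.
    a = int(round(lo.mean())) if lo.size else int(round(mean))
    b = int(round(hi.mean())) if hi.size else int(round(mean))
    return a, b, bitmap

def is_smooth(a, b, T=10):
    # Smooth block: the quantization-level difference is at most the threshold T.
    return abs(a - b) <= T
```

A flat block yields a = b and is always classified as smooth, while a high-contrast block (e.g., half black, half white) produces a large |a − b| and is classified as complex for any small T.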

**Author Contributions:** Conceptualization, C.-N.Y. and C.K.; methodology, D.S. and C.K.; validation, C.-N.Y. and C.K.; formal analysis, C.-N.Y.; writing, original draft preparation, C.K. and L.L.; funding acquisition, D.S., C.-N.Y., C.K., and L.L. All authors have read and agreed to the published version of the manuscript.

**Funding:** This work was supported by the National Natural Science Foundation of China (61866028), the Key Program Project of Research and Development (Jiangxi Provincial Department of Science and Technology) (20171ACE50024), the Foundation of China Scholarship Council (CSC201908360075), and the Open Foundation of Key Laboratory of Jiangxi Province for Image Processing and Pattern Recognition (ET201680245, TX201604002). This research was supported in part by the Ministry of Science and Technology (MOST), under grants MOST 108-2221-E-259-009-MY2 and 109-2221-E-259-010. This research was supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by 2015R1D1A1A01059253 and 2018R1D1A1B07047395 and was supported under the framework of the international cooperation program managed by NRF (2016K2A9A2A05005255).

**Conflicts of Interest:** The authors declare no conflict of interest.

#### **References**


© 2020 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).
