#### **1. Introduction**

Steganography is a technique for secret communication in which secret messages are embedded into common digital media such as images, audio, and video. The original medium is called the "cover", and the medium containing the embedded message is called the "stego". Of these media, images are the most popular because they are widely transmitted over the Internet. Many image-based steganography methods [1–17] have been proposed and applied to raw [1–5], JPEG [6], VQ [7], and absolute moment block truncation coding (AMBTC) [8] images. However, their application to palette images [9–17] is limited. Palette images [18] have gained popularity. The graphics interchange format (GIF) is a type of palette image frequently used by young consumers to communicate and express themselves, and GIF animations are widely used on social media platforms such as Tumblr, Twitter, Facebook, and Line [18,19].

A palette image includes a palette and an index table. The palette consists of a few (generally no more than 256) colors. Each pixel is represented by a color in the palette, and the index table records each pixel's corresponding color index. In general, the order of colors in the palette is random. Because only limited colors are used in palette images, modifying a pixel's color index considerably distorts the pixel color. Thus, embedding data into a palette image is more challenging than in other image formats.

Embedding methods are of two types. In pixel-based methods (PBMs) [9–14], a pixel is used as a unit to embed secret data. In block-based methods (BBMs) [15–17], a block is used as a unit to embed secret data.

Fridrich [11] proposed a method in which the colors of a palette are partitioned into two sets with different parities (*R* + *G* + *B* mod 2) to represent one secret bit. The capacity of this method is 1 bit per pixel (bpp). Under the same embedding capacity, Fridrich and Du [12] proposed an optimal parity assignment algorithm to improve image quality. Tzeng et al. [13] proposed an adaptive data-hiding scheme for palette images based on local complexity; for most images, its embedding capacity is approximately 0.1 bpp [20]. Tanaka et al. [14] proposed an algorithm to partition colors into 2<sup>*k*</sup> sets, where each set represents one value of *k*-bit secret data. For each pixel, according to the secret data, the closest color in the corresponding set is determined to replace the pixel's color. The embedding capacity is thus increased to *k* bpp.

In [9–12,14], each pixel was used to embed secret data in PBMs. Therefore, the embedding distortion for a pixel may be large. In [13], an adaptive scheme was provided to skip pixels with large distortions. However, this reduced the capacity.

Imaizumi and Ozawa [15] proposed a BBM to embed *k* bits into each 3 × 3 block. In this method, first, all colors in a palette are reordered according to the Euclidean color distance. Next, for each block, the sum of all color indices is calculated and divided by 2<sup>*k*</sup> to obtain a remainder. Then, some pixels in the block are selected, and their color indices are adjusted so that the remainder equals the value of the *k* secret data bits with the least embedding distortion. The embedding capacity is *k*/9 bpp. Imaizumi and Ozawa [16] adopted a block size of 2 × 2 to improve the embedding capacity of their previous method [15]. Subsequently, Aryal et al. [17] used a block size of 1 × (2<sup>*k*−2</sup> + 1) to improve the embedding capacity of the aforementioned method [16] and used the *L\*a\*b* color space instead of the *RGB* color space to improve image quality. In all of these BBMs [15–17], the palette of an image is first reordered. Then, for each block, under the consideration of minimum embedding distortion, several pixels are selected, and their color indices are modified by +1 or −1. In the reordered palette, two colors with adjacent color indices may not be similar; thus, some selected pixels may have large embedding distortion, which degrades image quality after data embedding. Furthermore, the color indices of some selected pixels may overflow or underflow after data embedding. To avoid using these blocks, an additional location map is required to record whether a block is used to embed secret data. This reduces the embedding capacity for secret data because some blocks are skipped and others are used to embed the location map.

To address the aforementioned problem, a novel BBM for palette images was proposed in this study. First, Tanaka et al.'s *k*-bit parity assignment [14] was used to assign a *k*-bit parity to each color in the palette of an image. Then, for each block, at most one pixel was modified for data embedding, and no location map was required for data extraction. The proposed method provided higher capacity.

Note that the primary goal of steganographic methods [21] is to develop statistically undetectable methods with high steganographic capacity. Steganographic capacity [21] is defined as the maximum number of bits that can be hidden in a given cover work such that the probability of detection by an adversary is negligible. Embedding efficiency [21] is one measure of undetectability; it is defined as the number of embedded random message bits per embedding change. Crandall [22] noted that the most obvious way to reduce the possibility of detecting hidden information is to reduce the change density in the cover image. Because decreasing the change density raises the embedding efficiency, higher embedding efficiency lowers detectability. Because BBMs reduce the change density, the detectability of these methods is lower.

To measure the undetectability of the proposed method, chi-square attack [23], RS steganalysis [24], and embedding efficiency are used. Estimation and experimental results demonstrated that the proposed method provided higher undetectability than the aforementioned BBMs [16,17].

The remainder of this paper is organized as follows. Related works are introduced in Section 2. The proposed method is presented in Section 3. The analysis of embedding capacities is given in Section 4. Experimental results are provided in Section 5. Finally, conclusions are presented in Section 6.

#### **2. Related Works**

In this section, the parity assignment method proposed by Tanaka et al. [14], which is used in the proposed method, is first described. Then, the method proposed by Aryal et al. [17], which is compared with the proposed method, is described.

#### *2.1. Parity Assignment Method Proposed by Tanaka et al.*

Tanaka et al. proposed an algorithm to assign a *k*-bit parity to each color in the palette of an image. The algorithm contains two parts. In the first part, 2<sup>*k*</sup> closest colors (*c*'<sub>0</sub>, ... , *c*'<sub>2<sup>*k*</sup>−1</sub>) are determined sequentially from the palette, and *c*'<sub>*i*</sub> is assigned the *k*-bit parity with value *i*. Next, 2<sup>*k*</sup> sets (*S*<sub>0</sub>, ... , *S*<sub>2<sup>*k*</sup>−1</sub>) are established, that is, *S<sub>i</sub>* = {*c*'<sub>*i*</sub>}. In the second part, from the unassigned colors, the color *c* with the minimal distance from the last assigned color is determined. Next, the minimal distance *md<sub>p</sub>* of *c* from each set *S<sub>p</sub>* is evaluated. Then, the maximal distance among {*md<sub>p</sub>* | *p* = 0, ... , 2<sup>*k*</sup> − 1} is determined, and the corresponding *p* is assigned as the parity of color *c*. The procedure is repeated until all colors are assigned.

In the algorithm, the distance *d*<sub>*c<sub>i</sub>*,*c<sub>j</sub>*</sub> between two colors *c<sub>i</sub>* = (*r<sub>i</sub>*, *g<sub>i</sub>*, *b<sub>i</sub>*) and *c<sub>j</sub>* = (*r<sub>j</sub>*, *g<sub>j</sub>*, *b<sub>j</sub>*) is defined using the following expression:

$$d_{c_i,c_j} = \sqrt{(r_i - r_j)^2 + (g_i - g_j)^2 + (b_i - b_j)^2}.\tag{1}$$

The details of the parity assignment are described as follows:

Step 1: Let *A* be the set of all colors in the palette. Let *S<sub>q</sub>* = ∅ and assign parity *q* to *S<sub>q</sub>*, *q* = 0, ... , 2<sup>*k*</sup> − 1.

Step 2: Let *h* = 0. Find an initial color *c*'<sub>*h*</sub> using the following expression:

$$c'_h = \arg\min_{c_i \in A} \left( 256^2 r_i + 256 g_i + b_i \right), \tag{2}$$

$$S_h = S_h \cup \{c'_h\}; \; A = A \backslash \{c'_h\}.$$

Step 3: Let *h* = *h* + 1. Find the color *c*'<sub>*h*</sub> in *A* with the minimum distance from *c*'<sub>*h*−1</sub> using the following equation:

$$c'_h = \arg\min_{c_i \in A} d_{c'_{h-1}, c_i}, \tag{3}$$

$$S_h = S_h \cup \{c'_h\}; \; A = A \backslash \{c'_h\}.$$


Step 4: Repeat Step 3 until *h* = 2<sup>*k*</sup> − 1, so that each set *S<sub>h</sub>* contains exactly one color.

Step 5: Let *h* = *h* + 1. Find the color *c*'<sub>*h*</sub> in *A* with the minimum distance from *c*'<sub>*h*−1</sub>.

Step 6: For each set *S<sub>q</sub>*, calculate the minimum distance *md<sub>q</sub>* between *c*'<sub>*h*</sub> and the colors in *S<sub>q</sub>* using the following expression:

$$md_q = \min_{c_i \in S_q} d_{c'_h, c_i}. \tag{4}$$

Step 7: Find the index *q*' with the maximum distance among all *md<sub>q</sub>* using the following expression:

$$q' = \arg\max_{q} md_q.\tag{5}$$

$$S_{q'} = S_{q'} \cup \{c'_h\}; \; A = A \backslash \{c'_h\}.$$

Step 8: Repeat Steps 5 to 7 until *A* is empty.
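The two-part procedure above can be sketched in Python as follows. This is a minimal sketch, not Tanaka et al.'s reference implementation: the helper names are ours, and the Euclidean RGB distance of Equation (1) is used throughout.

```python
import math

def dist(c1, c2):
    # Euclidean RGB distance, Equation (1)
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(c1, c2)))

def assign_parities(palette, k):
    """Sketch of the k-bit parity assignment (Steps 1-8)."""
    A = [tuple(c) for c in palette]
    sets = [[] for _ in range(2 ** k)]  # S_0, ..., S_{2^k - 1} (Step 1)
    # Step 2: initial color minimizing 256^2*r + 256*g + b (Equation (2))
    last = min(A, key=lambda c: 256 ** 2 * c[0] + 256 * c[1] + c[2])
    sets[0].append(last)
    A.remove(last)
    # Steps 3-4: seed S_1, ..., S_{2^k - 1} with successively closest colors
    for h in range(1, 2 ** k):
        last = min(A, key=lambda c: dist(last, c))  # Equation (3)
        sets[h].append(last)
        A.remove(last)
    # Steps 5-8: give each remaining color the parity of its "farthest" set
    while A:
        last = min(A, key=lambda c: dist(last, c))
        md = [min(dist(last, c) for c in S) for S in sets]  # Equation (4)
        q = md.index(max(md))                               # Equation (5)
        sets[q].append(last)
        A.remove(last)
    return {c: q for q, S in enumerate(sets) for c in S}
```

With this assignment, colors that are close in RGB space tend to receive different parities, so a pixel can always switch parity at a small distortion cost.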

#### *2.2. Aryal et al.'s Method*

Aryal et al.'s method [17] improves the capacity and image quality of Imaizumi et al.'s method [16]. In this method, the color *c* of each pixel in the *RGB* color space is first converted to the color (*L\**, *a\**, *b\**) in the *L\*a\*b* color space, and the CIEDE2000 formula [25] is used to calculate the color distance. The method includes three processes: Palette reordering, embedding, and extraction. The *k*-bit messages are embedded into each 1 × (2<sup>*k*−2</sup> + 1) pixel block. For convenience, we assume *k* = 3 in the following processes.

#### 2.2.1. Palette Reordering Process

Step 1: Let the original palette *P* be {*c*<sub>0</sub>, ... , *c*<sub>255</sub>}, the new palette *P*' be {*c*'<sub>0</sub>, ... , *c*'<sub>255</sub>}, and *A* = *P*.

Step 2: Set *h* = 0. Find the first color *c*'<sub>*h*</sub> in *A* using the following expression:

$$c'_h = \arg\min_{c_i \in A} L^*_i, \tag{6}$$

where *L*<sup>∗</sup><sub>*i*</sub> is the luminance component of color *c<sub>i</sub>* in the *L\*a\*b* color space.

Step 3: Let *A* = *A*\{*c*'<sub>*h*</sub>} and *h* = *h* + 1. Find the color *c*'<sub>*h*</sub> in *A* with the minimum distance from *c*'<sub>*h*−1</sub> using the following expression:

$$c'_h = \arg\min_{c_i \in A} \Delta E_{c'_{h-1}, c_i}, \tag{7}$$

where Δ*E*<sub>*c*'<sub>*h*−1</sub>,*c<sub>i</sub>*</sub> is the CIEDE2000 color distance [25] between *c*'<sub>*h*−1</sub> and *c<sub>i</sub>*.

Step 4: Repeat Step 3 until *h* = 255.
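The greedy reordering above can be sketched as follows; here `luminance` and `distance` are caller-supplied stand-ins for the *L\** component and the CIEDE2000 distance, which we do not implement.

```python
def reorder_palette(palette, luminance, distance):
    """Sketch of the reordering (Steps 1-4): start from the color with the
    lowest luminance and repeatedly append the closest remaining color."""
    A = list(palette)
    ordered = [min(A, key=luminance)]  # Equation (6)
    A.remove(ordered[0])
    while A:                           # Steps 3-4
        nxt = min(A, key=lambda c: distance(ordered[-1], c))  # Equation (7)
        ordered.append(nxt)
        A.remove(nxt)
    return ordered
```

For a toy grayscale palette with mean intensity as luminance and Euclidean RGB distance in place of CIEDE2000, the result simply sorts from dark to light.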

#### 2.2.2. Embedding Process


In the embedding process, the index table is divided into nonoverlapping 1 × 3 blocks (for *k* = 3), whose color indices are denoted *I*<sub>0</sub>, *I*<sub>1</sub>, and *I*<sub>2</sub>. For each block, the 3-bit secret data *w* is converted to the pair (*t*<sub>0</sub>, *t*<sub>1</sub>) according to Table 1, and the two remainders *T*<sub>0</sub> and *T*<sub>1</sub> are calculated as follows:

$$T_0 = I_0 \bmod 2,\tag{8}$$

$$T_1 = (I_1 + I_2) \bmod 4.\tag{9}$$

If *T*<sub>0</sub> ≠ *t*<sub>0</sub>, then *I*<sub>0</sub> is changed by +1 or −1, depending on which change yields the smaller color distance. If *T*<sub>1</sub> ≠ *t*<sub>1</sub>, then *I*<sub>1</sub> and *I*<sub>2</sub> are adjusted according to the following cases:

- Case 1: If |*T*<sub>1</sub> − *t*<sub>1</sub>| = 2, then both *I*<sub>1</sub> and *I*<sub>2</sub> are changed by +1 or by −1, depending on whether Δ*E*<sub>*c*'<sub>*I*1+1</sub>,*c*'<sub>*I*1</sub></sub> + Δ*E*<sub>*c*'<sub>*I*2+1</sub>,*c*'<sub>*I*2</sub></sub> or Δ*E*<sub>*c*'<sub>*I*1−1</sub>,*c*'<sub>*I*1</sub></sub> + Δ*E*<sub>*c*'<sub>*I*2−1</sub>,*c*'<sub>*I*2</sub></sub> is smaller.
- Case 2: If *T*<sub>1</sub> − *t*<sub>1</sub> = 1 or *T*<sub>1</sub> − *t*<sub>1</sub> = −3, then either *I*<sub>1</sub> or *I*<sub>2</sub> is changed by −1, depending on whether Δ*E*<sub>*c*'<sub>*I*1−1</sub>,*c*'<sub>*I*1</sub></sub> or Δ*E*<sub>*c*'<sub>*I*2−1</sub>,*c*'<sub>*I*2</sub></sub> is smaller.
- Case 3: If *T*<sub>1</sub> − *t*<sub>1</sub> = −1 or *T*<sub>1</sub> − *t*<sub>1</sub> = 3, then either *I*<sub>1</sub> or *I*<sub>2</sub> is changed by +1, depending on whether Δ*E*<sub>*c*'<sub>*I*1+1</sub>,*c*'<sub>*I*1</sub></sub> or Δ*E*<sub>*c*'<sub>*I*2+1</sub>,*c*'<sub>*I*2</sub></sub> is smaller.

This procedure is repeated for each block until all secret data are embedded.


If any Δ*E* > 5 occurs in the embedding process, the current block is skipped, and no secret data are embedded in it. An additional location map is required to record whether a block is used to embed secret data.
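Under our reading of the three cases, the adjustment of *I*<sub>1</sub> and *I*<sub>2</sub> can be sketched as follows; `cost_plus` and `cost_minus` are hypothetical stand-ins for the Δ*E* terms incurred by changing an index by +1 or −1, and the names are ours.

```python
def adjust_pair(I1, I2, t1, cost_plus, cost_minus):
    """Adjust I1, I2 so that (I1 + I2) % 4 == t1, picking the cheaper change."""
    diff = (I1 + I2) % 4 - t1  # T1 - t1
    if diff == 0:
        return I1, I2
    if abs(diff) == 2:   # Case 1: change both indices by +1 or both by -1
        if cost_plus[I1] + cost_plus[I2] <= cost_minus[I1] + cost_minus[I2]:
            return I1 + 1, I2 + 1
        return I1 - 1, I2 - 1
    if diff in (1, -3):  # Case 2: decrement whichever index is cheaper
        return (I1 - 1, I2) if cost_minus[I1] <= cost_minus[I2] else (I1, I2 - 1)
    # diff in (-1, 3) -- Case 3: increment whichever index is cheaper
    return (I1 + 1, I2) if cost_plus[I1] <= cost_plus[I2] else (I1, I2 + 1)
```

Note that decrementing index 0 or incrementing index 255 is exactly the overflow/underflow situation that forces a block to be skipped and recorded in the location map.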


**Table 1.** Relation between secret data *w* and (*t*<sub>0</sub>, *t*<sub>1</sub>).

#### 2.2.3. Extraction Process


In the extraction process, the index table of the stego image is divided into nonoverlapping 1 × 3 blocks in the same manner as in the embedding process. For each embedded block with color indices *I*<sub>0</sub>, *I*<sub>1</sub>, and *I*<sub>2</sub>, the values *t*<sub>0</sub> and *t*<sub>1</sub> are calculated as follows:

$$t_0 = I_0 \bmod 2,\tag{10}$$

$$t_1 = (I_1 + I_2) \bmod 4.\tag{11}$$

Then, *w* is extracted from *t*<sub>0</sub> and *t*<sub>1</sub> according to Table 1. This procedure is repeated until all embedded blocks are processed.

In Imaizumi et al.'s and Aryal et al.'s methods, the receiver requires the positions of the embedded blocks for data extraction. Thus, a location map with one bit for each block must be transmitted to the receiver; this reduces the embedding capacity for secret data, as discussed in Section 4. To overcome this disadvantage, a novel BBM that does not require a location map is proposed.

#### **3. Proposed Method**

To avoid using a location map and to obtain higher embedding efficiency and capacity, a novel BBM for palette images is proposed. The method includes three processes: Parity assignment, embedding, and extraction. In the parity assignment process, Tanaka et al.'s assignment [14] is used to assign a *k*-bit parity to each color in a palette. Through this assignment, a stego image with lower embedding distortion can be obtained, and a location map is not required for secret data extraction. In the embedding process, an optimal scheme is provided to select the pixel in a block with the minimal embedding distortion; this allows every block to be used for data embedding. The embedding process is performed as follows:

#### *3.1. Embedding Process*

Step 1: Divide the cover image into nonoverlapping blocks, each of which contains *n* × *m* pixels.


Step 2: Use Tanaka et al.'s parity assignment [14] to assign a *k*-bit parity to each color in the palette.

Step 3: Take the next unprocessed block *B*, which contains pixels *B*<sub>0</sub>, ... , *B*<sub>*n*×*m*−1</sub>.

Step 4: Read the next *k* secret bits and let *w* denote their value. For each pixel *B<sub>j</sub>*, let *q<sub>j</sub>* be the parity of its color *c*<sub>*B<sub>j</sub>*</sub>.

Step 5: Calculate *r* using the following expression:

$$r = \sum\_{j=0}^{n \times m-1} q\_j \bmod 2^k. \tag{12}$$

Step 6: If *r* = *w*, then go to Step 10.

Step 7: For each pixel *B<sub>j</sub>*, the parity *q*'<sub>*j*</sub> is calculated using the following equation:

$$\mathcal{q}\_{j}^{\prime} = \left(q\_{j} + w - r + 2^{k}\right) \bmod 2^{k}, j = 0, \ldots, n \times m - 1. \tag{13}$$

Step 8: For each pixel *B<sub>j</sub>*, find a new color *c*<sup>∗</sup><sub>*B<sub>j</sub>*</sub> with parity *q*'<sub>*j*</sub> according to the following equation:

$$c^*_{B_j} = \arg\min_{c_i \in S_{q'_j}} d_{c_{B_j}, c_i}, \tag{14}$$

where *S*<sub>*q*'<sub>*j*</sub></sub> is the set of all colors with parity *q*'<sub>*j*</sub>.

Step 9: Find the pixel *B*<sub>α</sub> satisfying the following expression, and set the color of *B*<sub>α</sub> to *c*<sup>∗</sup><sub>*B*<sub>α</sub></sub>:

$$\alpha = \arg\min_{j \in \{0, \dots, n \times m-1\}} d_{c_{B_j}, c^*_{B_j}}. \tag{15}$$

Step 10: Repeat Steps 3 to 9 until all blocks are processed.

In Step 2, Tanaka et al.'s parity assignment is used; it ensures that each color can find a close color in every set with a different parity. In Step 8, for each pixel, the color with the required parity and the minimal embedding distortion is selected to replace the original color. In Step 9, at most one pixel is modified, with the minimal embedding distortion. Through these three steps, every block is used to embed secret data, and the embedding quality is also improved.
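Steps 5-9 for a single block can be sketched in Python as follows. This is our own illustrative sketch: `parity` maps a color to its *k*-bit parity from Section 2.1, `sets[p]` lists the colors with parity *p*, and the Euclidean RGB distance of Equation (1) is used.

```python
import math

def dist(c1, c2):
    # Euclidean RGB distance, Equation (1)
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(c1, c2)))

def embed_block(block_colors, w, parity, sets, k):
    """Embed the k-bit value w into one block, changing at most one pixel."""
    n = 2 ** k
    q = [parity[c] for c in block_colors]
    r = sum(q) % n                       # Step 5, Equation (12)
    if r == w:                           # Step 6: block already carries w
        return list(block_colors)
    best = None                          # (distortion, pixel index, new color)
    for j, c in enumerate(block_colors):
        qp = (q[j] + w - r + n) % n      # Step 7, Equation (13)
        cand = min(sets[qp], key=lambda x: dist(c, x))  # Step 8, Equation (14)
        d = dist(c, cand)
        if best is None or d < best[0]:  # Step 9, Equation (15)
            best = (d, j, cand)
    out = list(block_colors)
    out[best[1]] = best[2]               # modify exactly one pixel
    return out
```

Because the replacement color is drawn from the palette itself, no index can overflow or underflow, which is why no location map is needed.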

#### *3.2. Extraction Process*

In the extraction process, the receiver uses the same parity assignment as that used in the embedding process to assign a *k*-bit parity to each color. The extraction process is as follows:

Step 1: Divide the stego image into nonoverlapping *n* × *m* blocks, as in the embedding process.

Step 2: For the next block, let *q<sub>j</sub>* be the parity of the color of pixel *B<sub>j</sub>*, and extract the *k* secret bits *w* using the following expression:

$$w = \sum_{j=0}^{n \times m-1} q_j \bmod 2^k.$$

Step 3: Repeat Step 2 until all blocks are processed.


Because each block is used to embed secret data, the receiver does not require a location map in data extraction.
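The extraction step is a one-line parity sum; a minimal sketch, where `parity` is the same color-to-parity map used for embedding:

```python
def extract_block(block_colors, parity, k):
    # Recompute the parity sum of Equation (12); its remainder is w
    return sum(parity[c] for c in block_colors) % (2 ** k)
```

Since every block carries data, the receiver simply applies this to each block in order.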

#### **4. Analysis of Embedding Capacities**

As mentioned previously, both Imaizumi et al.'s [16] and Aryal et al.'s [17] methods need a location map in the extraction process; thus, the location map should be transmitted through another channel or embedded in the stego image. However, transmitting the location map through another channel is unreasonable. Thus, in the following, we consider only the case in which the location map is embedded in the stego image. Let the image size be *N* × *M*, the block size be *n* × *m*, and the number of embedded bits per block be *k*; then, the total number of blocks is *T* = *N* × *M*/(*n* × *m*). Assume that in the location map, each block is represented by one bit: 1 indicates that the corresponding block is used for embedding, and 0 indicates that it is skipped. Let *X* be the size of the

location map, and assume that the location map is embedded in the first ⌈*X*/*k*⌉ blocks; then, the number of available blocks for secret data embedding is *T* − ⌈*X*/*k*⌉. Thus, *X* should satisfy the following equation:

$$X = T - \lceil X/k \rceil.\tag{16}$$

Note that the embedding capacity (in bits) is *kX*. Let the image size be 256 × 256 and the block size be 2 × 2; then, *T* = 16,384. Let *k* = 3; from Equation (16), we obtain *X* = 12,288. That is, the number of available blocks for secret data embedding is 12,288, and the number of blocks needed for recording the location map is 4096. Table 2 shows the embedding capacities of Imaizumi et al.'s, Aryal et al.'s, and the proposed methods. From the table, the proposed method is superior to Imaizumi et al.'s and Aryal et al.'s methods in embedding capacity.
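The numbers above can be reproduced by solving Equation (16) directly; a small sketch (function name is ours):

```python
import math

def usable_blocks(T, k):
    """Largest X satisfying X = T - ceil(X / k), i.e. Equation (16)."""
    for X in range(T, -1, -1):
        if X == T - math.ceil(X / k):
            return X
    return 0

T = (256 * 256) // (2 * 2)  # 16384 blocks of size 2 x 2
X = usable_blocks(T, 3)     # k = 3: 12288 data blocks, 4096 map blocks
```

For *T* = 16,384 and *k* = 3 this yields *X* = 12,288, matching the capacity of 3 × 12,288 = 36,864 bits discussed above.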

**Table 2.** The comparisons of embedding capacities among Imaizumi et al.'s [16], Aryal et al.'s [17], and the proposed methods for image size 256 × 256 and *k* = 3.


#### **5. Experimental Results**

In the experiments, the 25 images of size 256 × 256 shown in Figure 1 were used. These images were obtained from the Standard Image Database [26] or the CBIR Image Database [27] in the TIFF or JPG format. Photopea [18] was first used to resize and crop each image to 256 × 256; then, Cloudconvert [18] was applied to convert the TIFF (JPG) format into the GIF format. The embedded secret data were generated using a pseudorandom number generator. Image quality was measured using the peak signal-to-noise ratio (PSNR). The chi-square attack, RS steganalysis, and embedding efficiency were used to measure the undetectability of a steganography method.
