*1.3. Our Contribution*

As the illustrative example of PEE shows, the EC depends on the prediction accuracy of the pixels: when PEE is applied, data bits are embedded only when the prediction-error is −1 or 0. Hence, we have the following hypothesis.

#### **Hypothesis 1.** *As the prediction accuracy is improved, the performance of the PEE techniques for RDH is enhanced.*

In this paper, we devote our efforts to validating this hypothesis. Specifically, *we aim at improving the prediction accuracy in PEE using an artificial neural network (ANN), a family of models that has developed rapidly and been studied extensively in the past decade.* We propose a novel method based on a *multilayer perceptron* (MLP), a well-known ANN consisting of multiple sequential fully connected layers that provides a nonlinear mapping between input and output data through nonlinear activation functions. Moreover, we consider *eight octants of the three-dimensional space for embedding*, which makes better use of the space (cf. [5], which considers only the first octant). We conduct experiments by applying our proposed method to six test images: Lena, Baboon, Boat, Peppers, Airplane (F-16), and House. The experimental results support our hypothesis well: the EC increases greatly, reaching 1.9–9.8 times that of previous methods. Meanwhile, the image quality is still well maintained, with a PSNR that remains competitive with previous work.

**Remark 1.** *Our MLP consists of layers of nodes. The nodes between consecutive layers are fully connected by weighted edges. Each node receives input from nodes in the previous layer and sends output by passing the aggregated input through a nonlinear activation function. It has been shown that a well-trained MLP can approximate any smooth and measurable function [28], and the MLP has been proven to be an effective alternative to more traditional statistical techniques [29]. Recently, the MLP has been widely used in many different fields of research (e.g., see [30–34] for more details). Our proposed method applies the MLP to the* pixel prediction *phase of prediction-error histogram modification. We train the MLP network and use it to derive more accurate pixel predictions. Unlike other statistical techniques, the MLP makes no prior assumptions about the data distribution and can be applied accurately even when new or unseen data appear. These features make the MLP an attractive alternative when developing numerical models and choosing between statistical methods.*

#### **2. Related Work and Comparisons between the Methods**

Shi et al. [6] reviewed the advances on RDH over the past two decades, including various RDH schemes in the image spatial domain, RDH for compressed images, robust RDH which aims at recovering the hidden message from a lossily compressed image, RDH for encrypted images, and RDH for video and audio. RDH in the image spatial domain is the most investigated subject and is strongly related to this paper. We summarize progress on this subject below.

1. Lossless compression-based methods.

Most early RDH was implemented based on lossless compression [35–42]. Partial space is released by losslessly compressing a feature set of the original image, and the data is embedded in the released space to achieve RDH. The performance of this approach depends on the lossless compression algorithm used and on the selection of the compressed feature set. Experimental results suggest that algorithms based on lossless compression result in greater distortion and a poorer embedding effect than the subsequent RDH methods.

2. Integer-transform-based methods.

> Integer-transform-based methods can be seen in [36,39,41]. In this type of method, the original image is first divided so that multiple adjacent pixels form an embedding unit; the secret information is then embedded into each unit using an integer transform. However, this type of method usually uses the average value of a pixel block to predict each pixel in the block, so the image redundancy cannot be well utilized. Moreover, the algorithm cannot control the maximum modification range of each pixel, so the embedding distortion cannot be controlled effectively. Due to these two defects, the embedding performance of integer-transform-based methods is limited: while significantly improved compared to the lossless compression-based methods, it still falls short of good embedding performance.

3. Two-phase embedding with location maps.

Some RDH schemes proceed in two phases (e.g., [43–45]), using location maps which map each pixel to a certain value and thereby ensure the reversibility of the cover image. In [44], Malik et al. considered even-valued and odd-valued pixels separately and embedded a secret data bit into each pixel of the cover image by changing its value by at most 1. Their work improves the earlier complementary embedding strategy of Chang and Kieu [43], which uses vertical embedding and horizontal embedding separately in two phases. Kumar et al. [45] also considered even-valued and odd-valued pixels with location maps; the cover image is divided into non-overlapping 2-by-2 blocks of pixels, and the secret bits are converted into 2-bit segments and embedded into the blocks by increasing or decreasing the pixel values of the corresponding block by at most 1. Since the second-phase embedding acts as a complement of the first-phase embedding, this kind of approach preserves the stego-image's quality while doubling the EC.

4. Histogram modification-based methods.

> In this type of method, the original image is first mapped to a lower-dimensional space by using the redundancy of the image. A histogram is then generated by counting the distribution over that low-dimensional space. Finally, reversible embedding is realized by modifying the histogram. The earliest method with great impact was proposed by Ni et al. in 2006 [46]. In this method, the secret data is embedded into the pixels with the highest frequency in the image histogram by expanding the histogram. The stego-image produced by this method maintains high image quality, but the embedding rate is low. Therefore, Lee et al. [47] improved the method of [46] by using the image difference histogram, whose shape resembles a Laplace distribution. The histogram of this method has a very high peak and drops rapidly; therefore, it can achieve a better embedding capacity while maintaining image quality.

### *2.1. Further Discussion on Histogram Modification-Based Approaches*

The method of Ni et al. [46] constitutes a rough framework and foundation for RDH based on histogram modification, and hence has been further developed in follow-up research [5,8–27,30,48–51]. In these studies, a histogram is first generated from the prediction errors of pixels, and then modified by expansion or shifting to achieve reversible embedding. Currently, such methods, which modify the *prediction error histogram* (PEH), are collectively referred to as prediction error expansion (PEE). RDH based on histogram modification has the following two advantages:


From the above points of view, methods based on histogram modification, especially PEE based on PEH modification, have better embedding performance than other methods. Therefore, we focus on histogram generation and three-dimensional histogram modification. Note that current RDH methods based on histogram modification mainly involve the following aspects:

• Generation method of histogram.

> Combined with PEE, methods in this research direction mainly aim to generate a sharp, rapidly dropping PEH by using better image prediction methods, e.g., the methods of [12,13,19,20,23,24].

• Modification method of histogram.

> Different from the early expansion methods [8,9,16,24], which use a peak in the histogram, several authors [15,25–27] proposed methods that expand the histogram by adaptive selection according to the frequency of pixels in the image histogram. These methods can significantly reduce the embedding distortion of PEE.

• Selection of embedding location.

> This type of method first selects the image areas that are more suitable for reversible embedding (usually smooth areas), and then uses the selected areas as a new carrier for RDH. The effect of these methods is remarkable; combined with PEE, they can effectively reduce the embedding distortion of PEE. The idea was first proposed by Kamstra et al. [18], and many subsequent works have applied it as an auxiliary means to further optimize the embedding performance.


• Use of multiple histograms.

> In [11,22], reversible embedding methods based on multiple histograms are proposed. Compared with methods using a single histogram, multiple histograms offer greater flexibility and can further improve the performance of PEE algorithms.

• PEH for color images.

> In [51], Zhan et al. applied the 3D-PEH to color images. Their approach predicts the pixel values of each RGB channel of a color image and establishes a 3D prediction-error histogram. Their results yield low distortion for color images.

Below we summarize two recent lines of progress on other perspectives of histogram modification-based methods.


Li et al. [20] proposed the pixel value ordering (PVO) technique, which is an advancement of PEE. The cover image is divided into blocks; PVO first sorts the pixel values in each block and then identifies the minimum, maximum, second-minimum, and second-maximum pixels, which are used for data embedding depending on the minimum and maximum prediction errors in the block. PVO changes pixel values by at most 1; hence, it generates high-quality stego-images. Kaur et al. [54] proposed an RDH technique using PVO and pairwise PEE to improve the EC while retaining the quality of the stego-image. The embedding is performed in two phases on three-pixel blocks: pixels are traversed in a zig-zag manner and then sorted based on their rhombus means. The key to PVO's increased EC is that smaller prediction errors are derived after the pixels are sorted, as the sketch below illustrates. Kaur et al. [55] also considered PVO-based RDH for images with rough texture. For a more thorough survey of PVO-based RDH approaches, refer to [54].
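To make the PVO prediction step concrete, here is a minimal sketch (our own illustration, not the authors' code) of how sorting a block yields small prediction errors for the extreme pixels; the function name and the block used in the example are ours.

```python
import numpy as np

def pvo_prediction_errors(block):
    """Compute the PVO prediction errors for one pixel block.

    The block's pixels are sorted; the maximum is predicted by the
    second-maximum and the minimum by the second-minimum, so the
    prediction errors stay small in smooth blocks.
    """
    sorted_vals = np.sort(block.ravel())
    e_max = int(sorted_vals[-1]) - int(sorted_vals[-2])  # error for the maximum pixel
    e_min = int(sorted_vals[1]) - int(sorted_vals[0])    # error for the minimum pixel
    return e_max, e_min

# Example: a smooth 2x2 block yields small errors suitable for embedding.
print(pvo_prediction_errors(np.array([[120, 121], [119, 121]])))  # (0, 1)
```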

### *2.2. Comparisons and Highlight of Our Approach*

According to the above discussion, we list a general comparison of RDH methods in Table 1. For the histogram modification-based framework, which has attracted much attention, is strongly related to our work, and covers the PEE paradigm we mainly follow, Table 2 highlights our proposed approach through comparisons with other approaches of this type, namely Ni et al. [46], Lee et al. [47], Li et al. [21], and Cai et al. [5], using average experimental results over six gray-scale images. As Table 2 shows, the embedding capacity of our method is much larger than that of the other four methods, while the image quality is slightly sacrificed due to somewhat larger image distortion; this is tolerable since the PSNR remains close to 50 dB. Our results reveal that, owing to the much better prediction accuracy of pixel values, our method achieves a high embedding capacity while suffering only slight image distortion.

**Table 1.** General comparison of RDH methods. ×: poor; Δ: unable to control effectively/limited; ◦: good; -: even better.


**Table 2.** Comparison of histogram modification-based RDH methods. The image quality is measured by the average PSNR (dB) when the maximum embedding capacity is attained. The average embedding capacity is measured in bits.


### **3. The Proposed Approach**

In this section, we introduce our reversible data hiding scheme based on 3D-PEH modification with an MLP as the pixel value predictor. As the hypothesis in Section 1 states, we expect that the performance of such an RDH scheme can be greatly enhanced by an accurate MLP predictor. The correlation between an image pixel value and its neighboring pixels is exploited, so that the accuracy of pixel prediction can hopefully be improved by a well-trained MLP model; this, in turn, leads to an increased embedding capacity. Overall, our proposed method consists of four parts: the pre-processing phase, the training and prediction phase, the embedding and shifting phase, and the extraction and recovery phase. The flowchart of the proposed method is shown in Figure 2. We specify the four phases in the following subsections.

**Figure 2.** The flowchart of our method.

### *3.1. The Pre-Processing Phase*

The pixel values of the cover image will be modified by +1 or −1 when the secret data are embedded based on the 3D-PEH. Therefore, in order to avoid overflow and underflow, the cover image is pre-processed: pixels with value 0 are amended to 1, and pixels with value 255 to 254. Meanwhile, a location map is created to record the positions of the modified pixels. The location map is a binary sequence, which can be losslessly compressed to reduce its size. The secret data and the compressed location map are then combined (hereinafter referred to simply as the secret data), which completes the pre-processing phase; afterwards, they are embedded together into the pre-processed cover image. A minimal sketch of this step is given below.
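The following sketch illustrates this pre-processing step, assuming an 8-bit grayscale cover image stored as a NumPy array; the function name is ours, and the lossless compression of the location map is left out.

```python
import numpy as np

def preprocess(cover):
    """Clamp boundary pixel values and record them in a location map.

    Pixels valued 0 become 1 and pixels valued 255 become 254, so the
    later +1/-1 modifications cannot overflow or underflow.
    """
    location_map = (cover == 0) | (cover == 255)   # 1 where a pixel was amended
    pre = cover.copy()
    pre[cover == 0] = 1
    pre[cover == 255] = 254
    # The binary location map would be losslessly compressed and combined
    # with the secret data; we return it uncompressed here for clarity.
    return pre, location_map.astype(np.uint8)

cover = np.array([[0, 37, 255], [10, 255, 0]], dtype=np.uint8)
pre, lmap = preprocess(cover)
print(pre)   # [[  1  37 254] [ 10 254   1]]
print(lmap)  # [[1 0 1] [0 1 0]]
```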

### *3.2. The Training and Prediction Phase*

The PEE method exploits the correlations between pixels to derive accurate predictions, whose prediction-errors are then modified separately. However, the traditional PEE method uses the same algorithm to predict pixels for all images. This results in poor prediction accuracy, and the prediction error increases when the image is relatively complex. Therefore, our proposed method, which leverages the power of a trained MLP model, predicts the pixels of the cover image and significantly reduces the prediction-error, so that the embedding capacity can hopefully be increased.

In the MLP training stage, except for the pixels located on the borders, the pixels are scanned from left to right and top to bottom to derive the cover sequence (*y*1, ... , *yn*). Consider the four-neighbor tuple (*x*top, *x*bottom, *x*left, *x*right) of a given pixel *yi*, shown in the left part of Figure 3. The four-neighbor tuple is used as the input data of the neural network, and the desired output value is *yi*.

The structure of our MLP neural network has one input layer, two hidden layers, and one output layer, as shown in Figure 3. The four-neighbor tuples (*x*top, *x*bottom, *x*left, *x*right) from the cover image are fed into the input layer of the MLP. Between the input and output layers, the two hidden layers have 100 and 200 neurons, respectively. After the input is processed by the network, the output layer provides one output *y*˜*i* as the value predicted by the MLP, and the corresponding *yi* in the cover image is used as the reference. We use the mean squared error (MSE) as the loss function, calculated as the average squared difference between the predicted and reference pixel values. The MSE function is defined in Equation (1); evidently, there are no prediction errors if and only if the MSE value is 0.

$$\text{MSE} = \frac{1}{N} \sum_{i=1}^{N} (\tilde{y}_i - y_i)^2. \tag{1}$$

Here, *N* is the number of data points, *y*˜*i* is the value returned by the model, and *yi* is the actual value for data point *i*. Based on these input and reference data, the MLP network is trained with this loss function so that the edge weights of the MLP are optimized to best associate the given neighborhoods with the reference pixel values. A sketch of the training procedure follows Figure 3.

**Figure 3.** The structure of our MLP neural network.
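Below is a minimal sketch of the training stage, assuming scikit-learn's MLPRegressor as the MLP implementation. The layer sizes (100 and 200 neurons) and the MSE loss follow the text above; the activation function, iteration count, and all identifiers here are our assumptions, and the random image stands in for a real cover image.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

def four_neighbor_dataset(img):
    """Build (x_top, x_bottom, x_left, x_right) -> y_i pairs for all
    non-border pixels, scanned left to right and top to bottom."""
    h, w = img.shape
    X, y = [], []
    for r in range(1, h - 1):
        for c in range(1, w - 1):
            X.append([img[r - 1, c], img[r + 1, c], img[r, c - 1], img[r, c + 1]])
            y.append(img[r, c])
    return np.array(X, dtype=float), np.array(y, dtype=float)

img = np.random.randint(0, 256, size=(64, 64))  # stand-in for the cover image
X, y = four_neighbor_dataset(img)

# Two hidden layers with 100 and 200 neurons; MLPRegressor minimizes the
# squared error, matching the MSE loss of Equation (1).
mlp = MLPRegressor(hidden_layer_sizes=(100, 200), activation='relu', max_iter=500)
mlp.fit(X, y)
pred = mlp.predict(X)
print("MSE:", np.mean((pred - y) ** 2))  # Equation (1) on the training data
```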

### *3.3. The Embedding and Shifting Phase*

After the training and prediction phase is completed, the scheme enters the embedding and shifting phase. In order to embed the binary secret data in the cover image, three-dimensional PEH (3D-PEH) modification is used for embedding and shifting. However, in previous work on 3D-PEH modification, only the points located in the first octant of the three-dimensional coordinate system are modified. This way of hiding secrets does not make use of most of the space in the three-dimensional coordinate system; hence, relatively few pixels are embeddable and the embedding capacity is smaller. Instead, our proposed method embeds secret data in all eight octants of the three-dimensional space, so that we can exploit much more space than previous approaches.

We adopt rhombus prediction and double-layered embedding, in the same way as [5,24], to generate non-overlapping prediction-error triples (*ex*, *ey*, *ez*) = (*e*3*i*−2, *e*3*i*−1, *e*3*i*) for each feasible *i* (i.e., each pixel in the triple has four neighboring pixels). A 3D-PEH is generated by counting the non-overlapping prediction-error triples, and the data embedding is realized by modifying the obtained 3D-PEH using the designed reversible mapping. The data embedding procedure is briefly described as follows.

First, double-layered embedding is adopted to divide the cover image into two sets, denoted "star" and "dot" (as shown in Figure 4a). The star and dot sets are each embedded with half of the secret data. Except for the pixels located on the borders, the pixels of the star or dot set are scanned from left to right and top to bottom to derive the cover sequence (*p*1, ..., *pn*). The scan orders for star and dot pixels are shown in Figure 4b,c; a sketch of the partition follows Figure 4.

**Figure 4.** (**a**) Star/dot pixels partition. (**b**) Scan order for star pixels. (**c**) Scan order for dot pixels.
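A minimal sketch of the star/dot partition and scan is given below; the checkerboard rule used here, (row + col) parity, is our assumption of how Figure 4a assigns the two sets, and the function name is ours.

```python
import numpy as np

def star_dot_scan(img):
    """Split non-border pixels into 'star' and 'dot' checkerboard sets and
    scan each set left to right, top to bottom.

    Whether (row + col) even means 'star' or 'dot' is our assumption; the
    actual assignment follows Figure 4a.
    """
    h, w = img.shape
    star, dot = [], []
    for r in range(1, h - 1):
        for c in range(1, w - 1):
            (star if (r + c) % 2 == 0 else dot).append(img[r, c])
    return np.array(star), np.array(dot)

img = np.arange(25).reshape(5, 5)
star, dot = star_dot_scan(img)
print(star)  # [ 6  8 12 16 18]
print(dot)   # [ 7 11 13 17]
```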

Then, the 4-neighbor pixels of each *pi* are fed into the trained MLP to obtain its predicted value *p*ˆ*i*. The predicted values determine the prediction-error sequence (*e*1, ... , *en*), and the sequence is divided into prediction-error triples (*ex*, *ey*, *ez*). The prediction-error *ei* is obtained as

$$e_i = p_i - \hat{p}_i. \tag{2}$$

Lastly, each prediction-error triple (*ex*, *ey*, *ez*) is modified to (*e*∗*x*, *e*∗*y*, *e*∗*z*), and we get (*p*˜*x*, *p*˜*y*, *p*˜*z*) = (*p*ˆ*x* + *e*∗*x*, *p*ˆ*y* + *e*∗*y*, *p*ˆ*z* + *e*∗*z*) to embed data based on the 3D-PEH, following the mappings shown in Tables 3–5. The 3D-PEH mapping method is divided into seven types: Type *A* to Type *G*. A sketch of the error-triple construction is given below.
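The following sketch shows the triple construction based on Equation (2), assuming the scanned pixel sequence and the MLP predictions are already available; the function name and rounding of the MLP outputs are our assumptions.

```python
import numpy as np

def prediction_error_triples(pixels, predictions):
    """Compute e_i = p_i - p_hat_i (Equation (2)) and group the error
    sequence into non-overlapping triples (e_x, e_y, e_z)."""
    e = np.asarray(pixels, dtype=int) - np.rint(predictions).astype(int)
    n = len(e) - len(e) % 3            # drop a ragged tail, if any
    return e[:n].reshape(-1, 3)

# Pixels and (already rounded) predictions matching Example 1's first triple.
triples = prediction_error_triples([210, 99, 131], [210, 99, 131])
print(triples)  # [[0 0 0]] -> a Type-A case
```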

**Table 3.** Type *A*–*C* of the marked values of the prediction-error triple (*ex*, *ey*, *ez*) and cover pixel triple (*px*, *py*, *pz*) in different types of the proposed method, with *embedding* as the data embedding operation on (*ex*, *ey*, *ez*).



**Table 4.** Type *D*–*F* of the marked values of the prediction-error triple (*ex*, *ey*, *ez*) and cover pixel triple (*px*, *py*, *pz*) in different types of the proposed method, with *embedding* as the data embedding operation on (*ex*, *ey*, *ez*).

**Table 5.** Type *G* of the marked values of the prediction-error triple (*ex*, *ey*, *ez*) and cover pixel triple (*px*, *py*, *pz*) in different types of the proposed method, with *shifting* as the data embedding operation on (*ex*, *ey*, *ez*).


Figure 5 visualizes how the secret data are embedded by the mappings. The goal of this visualization is to provide an intuitive way to verify the reversibility of our proposed method. There are seven types of embedding in the proposed method, and the mapping relationships of Types A, B, ..., G are visualized in Figure 5. An arrow from a starting point *x* to an end point *y* means that the data *x* is transformed into the data *y* by the mapping. That is, the prediction-error triples (*ex*, *ey*, *ez*) and the cover pixel triples (*px*, *py*, *pz*) are modified by Type *A* to Type *F* according to the secret bits to be embedded. For example, Type A can hide data by transforming (0, 0, 0) into one of (0, 0, 0), (0, 0, 1), (0, 1, 0), (0, −1, 0), (1, 0, 0), (0, 0, −1), and (−1, 0, 0); accordingly, Figure 5a shows the arrows from (0, 0, 0) to each of these destinations. One can therefore check that the mapping for data hiding is reversible by checking that no point in the mapping diagram can be reached from multiple points; a sketch of this check appears after Figure 5.



**Figure 5.** The 3D-PEH mappings for the proposed scheme.
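The reversibility condition can be phrased as an injectivity check over the mapping tables: no marked triple may be reachable from two different (source triple, secret bits) pairs. The sketch below verifies this for a few illustrative Type-A-style entries; the entries shown are placeholders, as the actual mappings are those of Tables 3–5.

```python
from collections import defaultdict

def is_reversible(mapping):
    """A 3D-PEH mapping is reversible iff no marked triple is reachable
    from two different (source triple, secret bits) pairs."""
    sources = defaultdict(list)
    for (src, bits), dst in mapping.items():
        sources[dst].append((src, bits))
    return all(len(v) == 1 for v in sources.values())

# Illustrative Type-A-style entries (the actual mapping is in Tables 3-5):
# the all-zero triple fans out to distinct neighbors depending on the bits.
type_a = {
    ((0, 0, 0), (0, 0, 0)): (0, 0, 0),
    ((0, 0, 0), (0, 0, 1)): (0, 0, 1),
    ((0, 0, 0), (0, 1, 0)): (0, 1, 0),
    ((0, 0, 0), (1, 0, 0)): (1, 0, 0),
}
print(is_reversible(type_a))  # True: every arrow head has a unique tail
```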

After the embedding and shifting phase, the stego-image embedded with the secret data is obtained. Then, the stego-image and the trained MLP model are sent to the receiver through the communication channel.

**Example 1.** *Consider the cover image P* = {210, 99, 131, 65, 72, 162, 17, 19, 25, 161, 25, 71, 86, 95, 47}*, the secret bits S* = {0, 0, 0, 1, 1, 0, 1, 1, 1, 1, 0}*, and the prediction errors E* = {0, 0, 0, 0, 0, 0, 1, 1, −1, 1, 0, 0, 0, 1, 0}*.*

• *Step 1:*

- *1. Get three bits from E* = {**0, 0, 0**, 0, 0, 0, 1, 1, −1, 1, 0, 0, 0, 1, 0}*:* (*ex*, *ey*, *ez*) = (0, 0, 0)*. This is a Type-A case.*
- *2. Get three bits from S* = {**0, 0, 0**, 1, 1, 0, 1, 1, 1, 1, 0} *if the first two bits are* [0], [0]*. Since the secret bits are* [0], [0], [0]*, we have* (*e*∗*x*, *e*∗*y*, *e*∗*z*) = (0, 0, 0)*.*

- *3. Get three units from P* = {**210, 99, 131**, 65, 72, 162, 17, 19, 25, 161, 25, 71, 86, 95, 47} *and derive* (*p*˜*x*, *p*˜*y*, *p*˜*z*) = (210, 99, 131) + (0, 0, 0) = (210, 99, 131)*.*

*The results of this step are E*∗ = {0, 0, 0, ...} *and P*˜ = {210, 99, 131, ...}*.*

• *Step 2:*

*The results of this step are E*∗ = {0, 0, 0, −1, 0, 0, ...} *and P*˜ = {210, 99, 131, 64, 72, 162, ...}*.*

• *Step 3:*


*The results of this step are E*∗ = {0, 0, 0, −1, 0, 0, 1, 1, −1, ...} *and P*˜ = {210, 99, 131, 64, 72, 162, 17, 19, 25, ...}*.*

• *Step 4:*

- *1. Get three bits from E* = {0, 0, 0, 0, 0, 0, 1, 1, −1, **1, 0, 0**, 0, 1, 0}*:* (*ex*, *ey*, *ez*) = (1, 0, 0)*. This is a Type-C case.*
- *2. Get three bits from S* = {0, 0, 0, 1, 1, 0, **1, 1, 1**, 1, 0}*. Since the secret bits are* [1], [1], [1]*, we have* (*e*∗*x*, *e*∗*y*, *e*∗*z*) = (2, −1, 0)*.*
- *3. Get three units from P* = {210, 99, 131, 65, 72, 162, 17, 19, 25, **161, 25, 71**, 86, 95, 47} *and derive* (*p*˜*x*, *p*˜*y*, *p*˜*z*) = (161, 25, 71) + (1, −1, 0) = (162, 24, 71)*.*

*The results of this step are E*∗ = {0, 0, 0, −1, 0, 0, 1, 1, −1, 2, −1, 0, ...} *and P*˜ = {210, 99, 131, 64, 72, 162, 17, 19, 25, 162, 24, 71, ...}*.*

• *Step 5:*


*The results of this step are E*∗ = {0, 0, 0, −1, 0, 0, 1, 1, −1, 2, −1, 0, 1, 2, 0} *and P*˜ = {210, 99, 131, 64, 72, 162, 17, 19, 25, 162, 24, 71, 87, 96, 47}*.*

### *3.4. The Extraction and Recovery Phase*

Through the communication channel, the stego-image and the trained MLP model are received. Next, we consider the extraction of the secret data from the stego-image and the recovery of the original image; the scheme then enters the extraction and recovery phase.

In the extraction and recovery stage, the procedure of the secret data extraction and the stego-image recovery is similar to the procedure of embedding and shifting. The secret data extraction process is briefly described as follows.

First, rhombus prediction and double-layered embedding are adopted to divide the stego-image into two sets, denoted "star" and "dot" (as shown in Figure 4a), and half of the secret data is extracted from each of the star and dot sets. Except for the pixels located on the borders, the pixels of the star or dot set are scanned from left to right and top to bottom to derive the stego sequence (*p*1, ..., *pn*).

Then, the 4-neighbor pixels of each *pi* are fed into the trained MLP to obtain its predicted value *p*ˆ*i*. The predicted values determine the prediction-error sequence (*e*1, ... , *en*), and the sequence is divided into prediction-error triples (*ex*, *ey*, *ez*). The prediction-error *ei* can be obtained as

$$e_i = p_i - \hat{p}_i.$$

Finally, each recovered triple (*px*, *py*, *pz*) is extracted based on the 3D-PEH following the method shown in Tables 6–8. The 3D-PEH recovery method is divided into seven types: Type *A* to Type *G*. Here, (*ex*, *ey*, *ez*) are the prediction-errors between the marked pixels (in the stego-image) and the predictions of the marked pixels. When the prediction-error *ei* is 1, the pixel is recovered as *pi* − 1, and when the prediction-error *ei* is −1, it is recovered as *pi* + 1.

The secret data bits are extracted by Type *A* to Type *G* according to the prediction-error triple, and the stego pixel triple (*px*, *py*, *pz*) is recovered to pixel values identical to those of the cover pixel triple. In addition, Type *G* carries no embedded data bits, so the stego pixel triples are only recovered, without secret data extraction. A sketch of the shift-undo rule is given below.
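The following is a minimal sketch of the shift-undo rule quoted above; in the full scheme, the rule that applies to a pixel depends on the type of its triple (Tables 6–8), so this function covers only the simple ±1 case, and its name is ours.

```python
def recover_shifted_pixel(p_marked, e_marked):
    """Undo a +/-1 histogram shift: when the marked prediction-error is 1
    the original pixel is p_marked - 1, and when it is -1 the original
    pixel is p_marked + 1 (other errors are handled by Tables 6-8)."""
    if e_marked == 1:
        return p_marked - 1
    if e_marked == -1:
        return p_marked + 1
    return p_marked

print(recover_shifted_pixel(100, 1))   # 99
print(recover_shifted_pixel(50, -1))   # 51
```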


**Table 6.** Type *A*–*C* of the extracted secret bits and the recovered values of the prediction-error triple (*ex*, *ey*, *ez*) and stego pixel triple (*px*, *py*, *pz*) in different types of the proposed method, with *embedding* as the data embedding operation on (*ex*, *ey*, *ez*).

**Table 7.** Type *D*–*F* of the extracted secret bits and the recovered values of the prediction-error triple (*ex*, *ey*, *ez*) and stego pixel triple (*px*, *py*, *pz*) in different types of the proposed method, with *embedding* as the data embedding operation on (*ex*, *ey*, *ez*).




**Table 8.** Type *G* of the extracted secret bits and the recovered values of the prediction-error triple (*ex*, *ey*, *ez*) and stego pixel triple (*px*, *py*, *pz*) in different types of the proposed method, with *no embedded data bit* on (*ex*, *ey*, *ez*).


Through the extraction and recovery phase, the secret data and the recovered image are obtained.

**Example 2.** *Let P* = {210, 99, 131, 64, 72, 162, 17, 19, 25, 162, 24, 71, 87, 96, 47}*, and E* = {0, 0, 0, −1, 0, 0, 1, 1, −1, 2, −1, 0, 1, 2, 0}*.*

• *Step 1:*

- *1. Get three bits from E* = {**0, 0, 0**, −1, 0, 0, 1, 1, −1, 2, −1, 0, 1, 2, 0}*:* (*ex*, *ey*, *ez*) = (0, 0, 0)*. This is a Type-A case. The extracted secret bits are* (0, 0, 0)*.*
- *2. Get three units from P* = {**210, 99, 131**, 64, 72, 162, 17, 19, 25, 162, 24, 71, 87, 96, 47}*. Then, we can derive* (*px*, *py*, *pz*) = (210, 99, 131)*.*

*The results of this step are S* = {0, 0, 0, . . .} *and P* = {210, 99, 131, . . .}*.*

• *Step 2:*

*The results of this step are S* = {0, 0, 0, 1, 1, ...} *and P* = {210, 99, 131, 65, 72, 162, ...}*.*

• *Step 3:*


*The results of this step are S* = {0, 0, 0, 1, 1, 0, ...} *and P* = {210, 99, 131, 65, 72, 162, 17, 19, 25 . . .}*.*

• *Step 4:*

- *1. Get three bits from E* = {0, 0, 0, −1, 0, 0, 1, 1, −1, **2, −1, 0**, 1, 2, 0}*:* (*ex*, *ey*, *ez*) = (2, −1, 0)*. This is a Type-C case. The extracted secret bits are* [1], [1], [1]*.*
- *2. Get three units from P* = {210, 99, 131, 64, 72, 162, 17, 19, 25, **162, 24, 71**, 87, 96, 47}*. Then, we can derive* (*px*, *py*, *pz*) = (162 − 1, 24 + 1, 71) = (161, 25, 71)*.*

*The results of this step are S* = {0, 0, 0, 1, 1, 0, 1, 1, 1, ...} *and P* = {210, 99, 131, 65, 72, 162, 17, 19, 25, 161, 25, 71 . . .}*.*

• *Step 5:*


*The results of this step are S* = {0, 0, 0, 1, 1, 0, 1, 1, 1, 1, 0} *and P* = {210, 99, 131, 65, 72, 162, 17, 19, 25, 161, 25, 71, 86, 95, 47}*.*
