*3.4. Watermark Extraction*

Watermark extraction includes extracting the watermark from the encrypted model and extracting it from the decrypted model.

#### 3.4.1. Extracting the Watermark in the Encrypted Domain and Restoring the Original Encrypted Model

The watermarked model is first divided into patches, and the direction values in the ciphertext are calculated and mapped to the direction values in the plaintext using the MMI method and the mapping table. Then, the direction histogram is constructed, and the watermark is extracted from it. Finally, with the embedding key (*F*(*Nl*), *T*(*Nl*)), the embedding function *B*(*Nl*) can be obtained to restore the original encrypted model. Let *w*(*l*)(*j*) denote the watermark bit embedded in the *j*-axis of the *l*th patch; it is extracted by Equation (32).

$$w^{(l)}(j) = \begin{cases} 0, & \text{if } d^{(l)}(j) \in (-F(N\_l), F(N\_l)) \\ 1, & \text{otherwise} \end{cases} \tag{32}$$
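The decision rule of Equation (32) can be sketched as follows; the threshold value *F*(*Nl*) used here is an illustrative stand-in, not a value from the paper.

```python
# Watermark-bit extraction per Equation (32): a bit is 0 when the direction
# value lies strictly inside (-F, F) and 1 otherwise.

def extract_bit(d, F):
    """Return the watermark bit carried by direction value d, given threshold F."""
    return 0 if -F < d < F else 1

# Example with a hypothetical threshold F(N_l) = 4:
bits = [extract_bit(d, 4) for d in (1.5, -3.9, 6.2, -7.0)]
print(bits)  # [0, 0, 1, 1]
```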

The original encrypted model can be restored by histogram shifting, which is the reverse of the embedding process. To restore the original encrypted model, the modular multiplicative inverse $\theta\_{g^{B(N\_l)}}$ of $g^{B(N\_l)}$ needs to be calculated through the extended Euclidean algorithm.

$$
\theta\_{g^{B(N\_l)}} \cdot g^{B(N\_l)} = 1 \bmod N^2 \tag{33}
$$

Therefore, the original encrypted vertex coordinate *C*(*l*)(*p*, *j*) can be obtained by Equation (34).

$$C^{(l)}(p,j) = \begin{cases} C\_w^{(l)}(p,j) \cdot \theta\_{g^{B(N\_l)}} = C^{(l)}(p,j) \cdot g^{B(N\_l)} \cdot \theta\_{g^{B(N\_l)}} \bmod N^2 \\ \qquad \text{if } d^{(l)}(j) \in [F(N\_l) + T(N\_l), 2F(N\_l) + T(N\_l)) \text{ and } p = 2, \ldots, N \\ C\_w^{(l)}(p,j) \cdot \theta\_{g^{B(N\_l)}} = C^{(l)}(p,j) \cdot g^{B(N\_l)} \cdot \theta\_{g^{B(N\_l)}} \bmod N^2 \\ \qquad \text{if } d^{(l)}(j) \in (-2F(N\_l) - T(N\_l), -F(N\_l) - T(N\_l)] \text{ and } p = 1 \\ C\_w^{(l)}(p,j) \\ \qquad \text{if } d^{(l)}(j) \in (-F(N\_l), F(N\_l)) \end{cases} \tag{34}$$
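The multiplicative restoration in Equation (34) can be sketched numerically: a shifted ciphertext coordinate is multiplied by the inverse θ, cancelling the homomorphic shift. All moduli and exponents here are toy stand-ins, not real Paillier parameters.

```python
# Undoing the embedding shift in ciphertext: C_w = C * g^{B(N_l)} mod N^2,
# so C_w * theta mod N^2 recovers C, where theta is the modular inverse
# of g^{B(N_l)} from Equation (33).

N = 187                      # hypothetical N = 11 * 17
N2 = N * N
g, B = N + 1, 7              # g = N + 1 is a common Paillier generator; B stands in for B(N_l)
g_B = pow(g, B, N2)
theta = pow(g_B, -1, N2)     # modular inverse (Python 3.8+)

C = 12345 % N2               # some encrypted coordinate
C_w = (C * g_B) % N2         # the shift applied during embedding
restored = (C_w * theta) % N2
print(restored == C)         # True
```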

where $C\_w^{(l)}(p,j)$ is the watermarked vertex coordinate in the patch $C\_w^{(l)}$, and the processing in the ciphertext is equivalent to the change in the plaintext given by Equation (35).

$$P^{(l)}(p,j) = \begin{cases} P\_w^{(l)}(p,j) - B(N\_l), & \text{if } d^{(l)}(j) \in [F(N\_l) + T(N\_l), 2F(N\_l) + T(N\_l)) \text{ and } p = 2, \ldots, N \\ P\_w^{(l)}(p,j) - B(N\_l), & \text{if } d^{(l)}(j) \in (-2F(N\_l) - T(N\_l), -F(N\_l) - T(N\_l)] \text{ and } p = 1 \\ P\_w^{(l)}(p,j), & \text{if } d^{(l)}(j) \in (-F(N\_l), F(N\_l)) \end{cases} \tag{35}$$
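The plaintext view of the restoration in Equation (35) can be sketched as a small branch on the direction value; the thresholds *F*, *T*, *B* below are illustrative stand-ins for *F*(*Nl*), *T*(*Nl*), *B*(*Nl*).

```python
# Plaintext restoration per Equation (35): vertices that were shifted during
# embedding subtract B(N_l); coordinates in the 0-bit region are unchanged.

def restore_coordinate(P_w, d, p, F, T, B):
    """Undo the embedding shift on one plaintext coordinate P_w."""
    if F + T <= d < 2 * F + T and p >= 2:       # positive 1-bit region, vertices 2..N
        return P_w - B
    if -2 * F - T < d <= -F - T and p == 1:     # negative 1-bit region, reference vertex
        return P_w - B
    return P_w                                  # 0-bit region: no shift was applied

print(restore_coordinate(P_w=10.0, d=9.0, p=2, F=4, T=2, B=3))  # 7.0
```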

After the above process, $d\_w^{(l)}$ is restored to $d^{(l)}$, and the original encrypted model can be obtained.

#### 3.4.2. Extracting the Watermark from the Decrypted Model

With the private key, the watermarked model can be decrypted; with the embedding key, the watermark can be extracted and the original 3D model restored. First, the watermarked model is divided into patches, and the direction values of each patch are calculated using Equation (14). Then, the direction histogram is constructed, and the watermark is extracted from it using Equation (32). Finally, with the embedding key, the original model can be restored using Equation (35).

The decrypted model with watermark may be vulnerable to common attacks such as noise interference during transmission. Since a robust interval is reserved during histogram shifting, the proposed method is robust to common attacks such as Gaussian noise, translation, and scaling. As illustrated in Figure 8, the 0-bit area and the 1-bit area are separated by the robust interval of size *T*(*Nl*). If the decrypted watermarked model is attacked slightly, the direction values fluctuate within a small range. However, as long as the direction values do not enter the error area, the receiver can still correctly extract the watermark. To improve the accuracy of watermark extraction after such disturbances, the watermark is extracted using Equation (36).

$$w^{(l)}(j) = \begin{cases} 0, & \text{if } d^{(l)}(j) \in \left( -F(N\_l) - T(N\_l)/3,\ F(N\_l) + T(N\_l)/3 \right) \\ 1, & \text{otherwise} \end{cases} \tag{36}$$
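The widened decision rule of Equation (36) can be sketched as follows; the thresholds *F* and *T* are hypothetical values for illustration.

```python
# Robust extraction per Equation (36): the 0-bit decision region is widened
# by T(N_l)/3 on each side, so small attack-induced drift in the direction
# values does not flip the extracted bit.

def extract_bit_robust(d, F, T):
    return 0 if -F - T / 3 < d < F + T / 3 else 1

F, T = 4, 3
print(extract_bit_robust(4.5, F, T))   # 0: within the widened interval (|d| < 5)
print(extract_bit_robust(5.5, F, T))   # 1: beyond the robust margin
```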

#### **4. Experimental Results and Discussion**

The proposed watermarking method was implemented in MATLAB R2016b under Windows 7. The following experiments were conducted on 40 3D models, and the results were averaged over all 40 models. Figure 9 shows six of the models used in the experiments.

The quality of the decrypted watermarked model is evaluated by the signal-to-noise ratio (*SNR*). The higher the *SNR* value, the better the imperceptibility after embedding the watermark. *SNR* is computed as

$$SNR = 10 \lg \frac{\sum\_{i=1}^{N\_V} \left[ (v\_{i,x} - \overline{v}\_x)^2 + (v\_{i,y} - \overline{v}\_y)^2 + (v\_{i,z} - \overline{v}\_z)^2 \right]}{\sum\_{i=1}^{N\_V} \left[ (g\_{i,x} - v\_{i,x})^2 + (g\_{i,y} - v\_{i,y})^2 + (g\_{i,z} - v\_{i,z})^2 \right]} \tag{37}$$

where $\overline{v}\_x$, $\overline{v}\_y$, $\overline{v}\_z$ are the means of the vertex coordinates, $v\_i(v\_{i,x}, v\_{i,y}, v\_{i,z})$ are the original coordinates, and $g\_i(g\_{i,x}, g\_{i,y}, g\_{i,z})$ are the coordinates of the watermarked model $M\_w$.
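Equation (37) can be sketched directly over coordinate arrays; the vertices below are made-up toy data, not from the paper's models.

```python
# SNR of Equation (37): ratio of the original model's variance-like energy
# to the squared embedding distortion, in decibels.
import numpy as np

def snr(v, g_w):
    """v, g_w: (N_V, 3) arrays of original / watermarked vertex coordinates."""
    signal = np.sum((v - v.mean(axis=0)) ** 2)   # deviations from the coordinate means
    noise = np.sum((g_w - v) ** 2)               # squared embedding distortion
    return 10 * np.log10(signal / noise)

v = np.array([[0.0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]])
g_w = v + 0.001                                  # a tiny uniform perturbation
print(round(snr(v, g_w), 1))  # 52.7
```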

In addition, the bit error rate (BER) is used to measure the error rate of the extracted watermark. The lower the value, the higher the accuracy of the extracted watermark.
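The BER is simply the fraction of extracted bits that differ from the embedded ones; a minimal sketch:

```python
# Bit error rate between the embedded and the extracted watermark sequences:
# BER = (number of mismatched bits) / (total number of bits).

def ber(embedded, extracted):
    assert len(embedded) == len(extracted)
    errors = sum(a != b for a, b in zip(embedded, extracted))
    return errors / len(embedded)

# 2 mismatches out of 8 bits:
print(ber([0, 1, 1, 0, 1, 0, 0, 1], [0, 1, 0, 0, 1, 0, 1, 1]))  # 0.25
```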

### *4.1. The Value of* β

According to Equation (29), *B*(*Nl*) = β + ϕ(*Nl* − 1), so the embedding function *B*(*Nl*) changes with the value of β. According to Equation (31), a large β yields high distortion of the decrypted model but high accuracy of watermark extraction, and vice versa. To observe the effect of β on the quality of the decrypted model and on the bit error rate of the extracted watermark, we varied β over the 40 test models and averaged the results. The relationship between β and the distortion *SNR* is illustrated in Figure 10a: as β increases, *SNR* gradually decreases. When β = 588, the *SNR* of the decrypted model is slightly greater than 30 dB. For imperceptibility, and thus better model quality, the value of β cannot exceed 588. The relationship between β and BER is shown in Figure 10b. When β ≥ 528, the watermark was correctly extracted in the absence of attacks. Therefore, the value of β cannot be less than 528.

**Figure 9.** Six tested 3D models. (**a**) Fairy. (**b**) Boss. (**c**) Solider. (**d**) Devil. (**e**) Thing. (**f**) Lord.

**Figure 10.** The effect of β on the distortion of decrypted model and the bit error rate of the extracted watermark. (**a**) β is related to signal-to-noise ratio (*SNR* ). (**b**) β is related to bit error rate (BER).

#### *4.2. The Value of t*

As shown in Figure 7, the 0-bit area and 1-bit area are separated by the robust interval of size *t* · (*Nl* − 1). If the robust interval is large, the robustness is high. However, as *t* increases, the quality of the decrypted model is reduced. Therefore, *t* needs to be adjusted according to the actual application scenario. If higher robustness is required, a greater value of *t* can be assigned. If better quality of decrypted model is required, a smaller value of *t* is set. In order to choose a suitable value, experiments were conducted on 40 models to test the robustness with different values of *t*.

As illustrated in Figure 11, the BER of watermark extraction is low under Gaussian noise (0.01). By increasing *t*, the BER could be reduced. When *t* = 50, the watermark could be extracted correctly. Therefore, when higher robustness is required, the value of *t* can be assigned to be 50.

**Figure 11.** The BER under Gaussian attacks (the strength is 0.01).

#### *4.3. Feasibility of the Watermarking*

To show the feasibility of the proposed watermarking method, the 3D model "Devil" with 30,000 vertices was tested; the other models produced similar results. The watermark was a 1024-bit pseudo-random sequence. First, the original model was divided into patches, and the encrypted model was obtained by encrypting the 3D model with the public key, as illustrated in Figure 12. Second, with the embedding key, the watermark was embedded to obtain the watermarked model, as illustrated in Figure 12c. Then, the directly decrypted model (Figure 12d) was obtained by decrypting the encrypted model; the *SNR* of the decrypted model was 30.93 dB. Lastly, the watermark was extracted and the model restored (Figure 12e); the *SNR* of the restored model approaches infinity, which shows that the restored model was exactly the same as the original. Figure 12f shows that all watermark bits were correctly extracted. The experimental results show that the proposed method achieves reversible embedding and extraction and restores the original model. Figure 13 shows that five decrypted 3D models had little distortion compared with the original models, and Figure 13f shows that the *SNR* of the decrypted models was close to 30 dB, which indicates that the proposed method achieves good quality.

**Figure 12.** Experiment with 3D model 'devil' (**a**) The original model; (**b**) the encrypted model; (**c**) the watermarked model; (**d**) the decrypted model. After decryption, the *SNR* was 30.93. (**e**) The restored model. After restoration, the *SNR* approached infinity. (**f**) The bit error rate after watermark extraction.

**Figure 13.** Five watermarked 3D models. (**a**) The watermarked "Fairy"; (**b**) the watermarked "Boss"; (**c**) the watermarked "Solider"; (**d**) the watermarked "Thing"; (**e**) the watermarked "Lord"; (**f**) *SNR* of the five watermarked models.

#### *4.4. Robustness Analysis*

In order to compare the robustness under attacks, several attacks were performed on the decrypted 3D model. Table 1 shows the bit error rate of watermark extraction under different attacks.


**Table 1.** The BER under several common attacks.

#### 4.4.1. Robustness Against Translation Attacks

The robustness against translation attacks was tested. As shown in Table 1, the method perfectly resisted translation attacks. When the model is subjected to a translation attack, all vertex coordinates of a patch are increased by the same value. According to Equation (14), when the vertex coordinates in a patch are all changed by the same amount, the direction values do not change. Therefore, the watermark can be extracted correctly.
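The translation invariance can be demonstrated numerically. Equation (14) is not reproduced in this section, so the direction function below is an illustrative stand-in consistent with the text: a per-axis difference between the mean of vertices 2..*N* and vertex 1, and any such difference of coordinates cancels a common translation.

```python
# Direction values are differences of vertex coordinates within a patch
# (hypothetical form standing in for Equation (14)), so adding the same
# offset to every vertex leaves them unchanged.
import numpy as np

def direction_values(patch):
    """patch: (N, 3) vertex coordinates; one direction value per axis j."""
    return patch[1:].mean(axis=0) - patch[0]

patch = np.array([[0.0, 0, 0], [2, 1, 0], [0, 3, 1], [1, 0, 5]])
shifted = patch + np.array([10.0, -4.0, 7.0])   # translation attack
print(np.allclose(direction_values(patch), direction_values(shifted)))  # True
```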

#### 4.4.2. Robustness Against Scaling Attacks

The robustness against scaling attacks was tested with different scaling factors (0.8, 1.2, 1.5) applied to the decrypted 3D model. As shown in Table 1, the proposed method was robust to scaling attacks. Under such an attack, all vertex coordinates of a patch are multiplied by the same coefficient, so according to Equation (14), the direction values increase or decrease accordingly. As illustrated in Figure 7, the direction values of most patches are concentrated in the central area. Therefore, when the model was enlarged, most direction values remained in their original areas and only a small number were shifted, so the robustness was high. When the model was shrunk, the 1-bit area was easily shifted into the 0-bit area, which affected the accuracy of watermark extraction. Therefore, the robustness was much higher when the 3D model was enlarged than when it was shrunk.

#### 4.4.3. Robustness to Gaussian Noise Attacks

The robustness against Gaussian noise attacks was tested by applying noise of different strengths (0.005, 0.01, 0.02) to the decrypted 3D model. As shown in Table 1, the robustness against Gaussian noise attacks was high. When the model is attacked by Gaussian noise, the vertex coordinates are slightly disturbed, and according to Equation (14), the direction values are also slightly modified. As illustrated in Figure 7, the direction values of most patches are concentrated in the central area, and only a few lie in the non-central area. Therefore, when the model was slightly disturbed, the direction values in the central area fluctuated only slightly, and only a few direction values in the non-central area were shifted.

However, the proposed method cannot resist cropping and simplification attacks, because these attacks change the order of the vertices. Moreover, it cannot resist salt-and-pepper noise, mainly because this attack markedly changes the relative positions between vertices.

#### *4.5. Compared with the Existing Watermark Method in an Encrypted Domain*

To our knowledge, few effective robust reversible watermarking methods for 3D models in the encrypted domain have been reported in the literature. To show the effectiveness of the proposed method, the method of Jiang [1] was extended to encrypted 3D models. As shown in Table 2, the proposed method has a slightly higher embedding capacity than Jiang [1], mainly because each patch has three coordinate axes and can therefore carry three bits. To sum up, the proposed method has good security and robustness, and the decrypted 3D model has low distortion.


**Table 2.** Compared to the method of Jiang [1].

#### **5. Conclusions**

In this paper, a robust reversible three-dimensional (3D) model watermarking method based on homomorphic encryption is presented for protecting the copyright of 3D models. The 3D model is divided into non-overlapping patches, and the vertices in each patch are encrypted using the Paillier cryptosystem. On the cloud side, three direction values are computed for each patch, and a symmetrical direction histogram is constructed and shifted to embed the watermark. To obtain robustness, a robust interval is designed into the histogram-shifting process. The watermark can be extracted from the direction histogram, and the original encrypted model can be restored by histogram shifting. Experimental results show that the decrypted 3D models have less distortion than those of existing methods, which indicates that the proposed method can embed more secret data without increasing the distortion of the 3D models. Moreover, the proposed method can resist a series of attacks that existing watermarking methods on encrypted 3D models cannot. Thus, the proposed method efficiently protects the copyright of 3D models in the cloud even when the cloud administrator does not know the content of the models, which existing methods cannot do.

In the future, we will investigate the following two possible research directions. (1) Reduce the distortion of the directly decrypted 3D model. (2) Further improve the robustness against more kinds of attacks, such as cropping and salt and pepper noise.

**Author Contributions:** Conceptualization and funding acquisition are credited to L.L. Methodology and writing—original draft are due to S.W. Conceptualization and supervision are credited to S.Z. Writing—review & editing is credited to T.L. Formal analysis and investigation are originated by C.-C.C. All authors have read and agreed to the published version of the manuscript.

**Funding:** This work was partially supported by National Natural Science Foundation of China (No. 61370218, No. 61971247), Public Welfare Technology and Industry Project of Zhejiang Provincial Science Technology Department (No. LGG19F020016), and Ningbo Natural Science Foundation (No. 2019A610100).

**Conflicts of Interest:** The authors declare no conflict of interest.

#### **References**


© 2020 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).
