Article

QuatJND: A Robust Quaternion JND Model for Color Image Watermarking

School of Information Science and Engineering, Shandong Normal University, Jinan 250358, China
* Author to whom correspondence should be addressed.
Entropy 2022, 24(8), 1051; https://doi.org/10.3390/e24081051
Submission received: 6 June 2022 / Revised: 25 July 2022 / Accepted: 28 July 2022 / Published: 30 July 2022

Abstract:
Robust quantization watermarking with a perceptual JND model has achieved great success in image copyright protection. However, existing models generally either restore each color channel separately or process the vector representation of the three color channels with a traditional monochromatic model, and therefore cannot make full use of the high correlation among the RGB channels. In this paper, we propose a robust quaternion JND model for color image watermarking (QuatJND). In contrast to existing perceptual JND models, QuatJND integrates the quaternion representation domain and colorfulness simultaneously, and incorporates a pattern-guided contrast masking effect in the quaternion domain. Furthermore, to efficiently utilize the color information, we develop a robust quantization watermarking framework that uses the color properties of the quaternion DCT coefficients in QuatJND, in which the quantization step of each quaternion DCT block is optimal. Experimental results show that our method achieves good robustness with better visual quality.

1. Introduction

The protection of digital images is one of the urgent security issues that need to be solved nowadays, and digital image watermarking technology provides an effective solution. Digital image watermarking embeds watermark information into multimedia carriers without degrading the perceived quality, while at the same time resisting common attacks. The technology must satisfy robustness, imperceptibility and watermark capacity requirements [1]. In past decades, digital image watermarking has been widely studied for grayscale images, whereas color images have received much less attention even though they constitute most of the displayed multimedia content. Color information is also viewed as a significant feature in many fields of image processing. If correctly handled, color information leads to more effective watermarking schemes, especially for achieving a good trade-off between imperceptibility and robustness [2]. Exploiting color information in digital image watermarking has therefore become a hot research topic.
At present, most color image watermarking algorithms extract the luminance information of color images or process only a single color channel, for example: (1) by transforming the color space model, the color image is converted from RGB to YCbCr (or YUV) color space, and the luminance component Y is selected to embed the watermark; (2) according to the insensitivity of the human visual system (HVS) to changes in the blue component, the watermark is embedded by modifying the blue component of the color image [3]; (3) the three color channels are processed separately, and watermark embedding is carried out on the three color components individually. Therefore, how to better exploit the correlation between the three channels of a color image is an issue that cannot be ignored.
In order to realize a better tradeoff between robustness and invisibility, the watermark strength can be set by the JND, which is the maximum distortion not perceived by the HVS. The best-known JND model was proposed by Watson et al. [4]; it consists of a sensitivity function and two masking components based on luminance and contrast masking. Lihong et al. [5] proposed robust algorithms that incorporate Watson's model to compute the quantization steps, and the perceptual model brought a significant improvement in robustness against common attacks. In the past few years, the JND model has been a research focus because of its excellent performance in digital image analysis, with examples including Kim's model [6], Zhang's model [7] and Wan's model [8]. Based on these developments in JND modeling, several JND model-based watermarking algorithms have been proposed [9,10,11]. In addition, visual saliency (VS) has been considered to facilitate JND metrics. However, these existing JND models restore each color channel separately or process the vector representation of three color channels with the traditional monochromatic model, and thus cannot make full use of the high correlation among RGB channels. To account for this, a quaternion perceptual JND model is needed.
Quaternions, which have been increasingly used in color image processing in the past two decades, offer a solution to achieve this goal. They represent an image by encoding its three color channels on the imaginary parts of quaternion numbers. Compared with traditional color image processing technologies, the main advantage of such a representation is that a color image can be processed holistically as a vector field, exploiting the correlation between the three color components, which also benefits color image watermarking [12].
Recently, many algorithms have been proposed for color image watermarking based on the Quaternion Discrete Fourier Transform (QDFT). Bas et al. [13] first proposed a non-blind color image watermarking algorithm in the QDFT domain using quantization index modulation, but the algorithm has a low peak signal-to-noise ratio (PSNR) and poor robustness against attacks. Ma et al. [14] proposed a watermarking scheme for color images based on local quaternion Fourier spectral analysis (LQFSA), introducing an invariant feature transform (IFT) and a geometric correction scheme to enhance robustness against geometric attacks. Jiang et al. [15] pointed out that Bas et al. [13] did not consider that the real part of the quaternion matrices obtained by inverse QDFT should be equal to zero, which can lead to a loss of watermark energy. They selected the real part of the QDFT coefficient matrices to insert the watermark and modified the coefficients of the real part symmetrically. Based on this constraint of symmetric distortion, Chen et al. [16] provided a full 4-D quaternion discrete Fourier transform watermarking framework and illustrated the overall gain in imperceptibility, capacity and robustness achievable compared with other quaternion Fourier transform based algorithms.
Furthermore, other quaternion-based algorithms have been proposed, such as the Quaternion Singular Value Decomposition (QSVD). In [17], a blind color image watermarking algorithm based on QSVD is proposed, in which QSVD and rotation are employed for watermark embedding and extraction. Liu et al. [18] first performed QSVD to obtain the U matrix, and then inserted the watermark into optimally selected coefficients of the quaternion elements in the first column of the U matrix to enhance invisibility. Recently, because the Discrete Cosine Transform (DCT) is compatible with the JPEG image compression standard, watermarking in the QDCT domain has received considerable attention [19]. It is therefore meaningful to study how to introduce the Quaternion Discrete Cosine Transform (QDCT) into watermarking algorithms.
In this paper, a robust quaternion JND model for color image watermarking (QuatJND) is proposed, together with a novel and efficient robust quantization watermarking framework for color images that exploits the QuatJND model in the quaternion DCT domain. In our method, the watermark is embedded into the QDCT domain by spread transform dither modulation (STDM). First, the colorfulness, obtained in the QDCT domain, is introduced as a new impact factor of the QuatJND model. The QuatJND model is then used to derive the optimum quantization step for embedding.
In summary, our main contributions are listed as follows:
(1)
We propose a perceptual unit pure quaternion for the QDCT watermarking scheme, which improves the performance of the proposed scheme.
(2)
A quaternion perceptual JND model (QuatJND) is calculated in the QDCT domain.
(3)
The color information and the pattern guided contrast masking effect in quaternion domain are considered for the QuatJND model.
(4)
A logarithmic STDM watermarking scheme incorporating the QuatJND model is proposed. The proposed watermarking scheme achieves better performance in terms of the Peak Signal-to-Noise Ratio (PSNR) and the Quaternion Structural Similarity Index (QSSIM).
The rest of this paper is organized as follows. Section 2 introduces the basic definitions of quaternions and the QDCT of color images. Section 3 presents the QuatJND model used in the scheme, including the colorfulness masking effect in the quaternion DCT domain, and then describes the proposed watermarking scheme based on QDCT combined with the QuatJND model. Experimental results and comparisons are provided in Section 4 to demonstrate the superior performance of the proposed scheme. Finally, conclusions are drawn in Section 5.

2. Quaternion DCT Definition

Quaternions were introduced by the mathematician Hamilton in 1843 [20]. For easy reading, the main abbreviations and symbols used in this paper are listed in Table 1.
A quaternion is an extension of the real and complex numbers; it has one real part and three imaginary parts, given by
q = a + b i + c j + d k
where a, b, c, d ∈ ℝ, and i, j, k are three imaginary units which obey the following rules
i² = j² = k² = −1
i·j = −j·i = k, k·i = −i·k = j, j·k = −k·j = i
If the real part a = 0 , q is called a pure quaternion.
Pei et al. [21] first applied quaternions to color images and proposed the quaternion model of a color image, which considers the three color components R, G, B as the three imaginary parts of a quaternion. Let f(x, y) be an RGB image with the quaternion representation (QR); then each pixel can be represented as a pure quaternion as
f ( x , y ) = f R ( x , y ) i + f G ( x , y ) j + f B ( x , y ) k
where f R ( x , y ) , f G ( x , y ) and f B ( x , y ) are the pixel values of the R, G and B color components at position ( x , y ) , respectively.
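As a quick illustration (our own Python sketch, not the paper's code), the quaternion representation above can be stored as a four-channel array whose real channel is zero:

```python
import numpy as np

def to_quaternion_image(rgb):
    """Quaternion representation (QR) of an RGB image: each pixel becomes a
    pure quaternion 0 + R*i + G*j + B*k, stored as [real, i, j, k]."""
    h, w, _ = rgb.shape
    q = np.zeros((h, w, 4), dtype=float)
    q[..., 1:] = rgb          # i <- R, j <- G, k <- B; the real part stays 0
    return q

pixel = np.array([[[10.0, 20.0, 30.0]]])   # a single RGB pixel
q = to_quaternion_image(pixel)
print(q[0, 0])   # [ 0. 10. 20. 30.]
```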
Because of the non-commutative multiplication rule of quaternions, the QDCT has two forms, the left-handed form and the right-handed form [19]. Without loss of generality, only the left-handed form is considered in this paper, which satisfies the following equation
C(p, s) = α(p) α(s) Σ_{x=0}^{M−1} Σ_{y=0}^{N−1} μ · f(x, y) · N(p, s, x, y)
Corresponding to QDCT, the inverse Quaternion Discrete Cosine Transform (IQDCT) of f ( x , y ) is defined as
f(x, y) = Σ_{p=0}^{M−1} Σ_{s=0}^{N−1} α(p) α(s) μ · C(p, s) · N(p, s, x, y)
where,
N(p, s, x, y) = cos[π(2x + 1)p / (2M)] · cos[π(2y + 1)s / (2N)]
and
α(p) = √(1/M) for p = 0, and α(p) = √(2/M) for p ≠ 0
α(s) = √(1/N) for s = 0, and α(s) = √(2/N) for s ≠ 0
and μ is a unit pure quaternion which satisfies μ² = −1.
In order to reduce the complex computations and to make full use of the existing real-valued DCT codes, this subsection describes the relationship between QDCT and DCT. This relationship can provide not only an efficient computation approach for QDCT but also an approach to analyse the constraints for the watermark embedding.
Considering the general unit pure quaternion μ = ξ i + η j + γ k , substituting Equation (4) into Equation (5), we have
C(p, s) = Σ_{x=0}^{M−1} Σ_{y=0}^{N−1} α(p) α(s) μ · f(x, y) · N(p, s, x, y) = C0(p, s) + C1(p, s) i + C2(p, s) j + C3(p, s) k
where,
C0(p, s) = −[ξ DCT(fR(x, y)) + η DCT(fG(x, y)) + γ DCT(fB(x, y))]
C1(p, s) = η DCT(fB(x, y)) − γ DCT(fG(x, y))
C2(p, s) = −ξ DCT(fB(x, y)) + γ DCT(fR(x, y))
C3(p, s) = ξ DCT(fG(x, y)) − η DCT(fR(x, y))
where DCT(fR(x, y)), DCT(fG(x, y)) and DCT(fB(x, y)) are the conventional DCT matrices of the red, green and blue channels, respectively, and DCT(·) denotes the conventional discrete cosine transform.
Similarly, applying IQDCT, we get the reconstructed image
f̄(x, y) = Σ_{p=0}^{M−1} Σ_{s=0}^{N−1} α(p) α(s) μ · C(p, s) · N(p, s, x, y) = f̄0(x, y) + f̄1(x, y) i + f̄2(x, y) j + f̄3(x, y) k
where,
f̄0(x, y) = −[ξ IDCT(C1(p, s)) + η IDCT(C2(p, s)) + γ IDCT(C3(p, s))]
f̄1(x, y) = ξ IDCT(C0(p, s)) + η IDCT(C3(p, s)) − γ IDCT(C2(p, s))
f̄2(x, y) = −ξ IDCT(C3(p, s)) + η IDCT(C0(p, s)) + γ IDCT(C1(p, s))
f̄3(x, y) = ξ IDCT(C2(p, s)) − η IDCT(C1(p, s)) + γ IDCT(C0(p, s))
Here, IDCT(·) is the conventional inverse discrete cosine transform.
For the color image signal, it can be drawn from Equation (12) that IQDCT must be a pure quaternion matrix after modifying some QDCT coefficients to insert watermark. Otherwise, taking only the three imaginary parts of this quaternion matrix to get the watermarked image will discard non-null real part data and result in a loss of watermark energy. Based on the above relationships Equations (10) and (12) and depending on the pure unit quaternion considered, one can identify the constraint to respect when modifying QDCT coefficients so as to avoid watermark energy loss. After the watermark embedding process, f ¯ should be a pure quaternion, or more clearly
f 0 ¯ = 0
where 0 is a zero matrix.
For the IQDCT coefficients matrix, we can obtain the real part from Equation (13) as
f̄0(x, y) = −[ξ IDCT(C1(p, s)) + η IDCT(C2(p, s)) + γ IDCT(C3(p, s))]
To respect the constraint in Equation (14), we can see from Equation (15) that f̄0 does not depend on the component C0(p, s). Hence, if we modify C0(p, s) to insert the watermark, the precondition f̄0 = 0 remains satisfied.
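These relationships can be checked numerically. The following Python sketch (our own illustration, not the paper's MATLAB code; the signs follow our derivation of the left-sided quaternion product) computes the QDCT of an 8 × 8 pure-quaternion block from three real DCTs and verifies that the real part of the reconstruction stays zero when only the C0 part is modified:

```python
import numpy as np

def dct_mat(n):
    # Orthonormal DCT-II matrix: alpha(0) = sqrt(1/n), alpha(k>0) = sqrt(2/n)
    k, x = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
    m = np.cos(np.pi * (2 * x + 1) * k / (2 * n))
    m[0] *= np.sqrt(1.0 / n)
    m[1:] *= np.sqrt(2.0 / n)
    return m

def dct2(a):
    D = dct_mat(a.shape[0])
    return D @ a @ D.T

def idct2(a):
    D = dct_mat(a.shape[0])
    return D.T @ a @ D

def qdct(fr, fg, fb, xi, eta, gam):
    """Left-sided QDCT of a pure-quaternion block via three real DCTs."""
    Dr, Dg, Db = dct2(fr), dct2(fg), dct2(fb)
    C0 = -(xi * Dr + eta * Dg + gam * Db)
    C1 = eta * Db - gam * Dg
    C2 = gam * Dr - xi * Db
    C3 = xi * Dg - eta * Dr
    return C0, C1, C2, C3

def iqdct_real(C0, C1, C2, C3, xi, eta, gam):
    """Real part of the reconstructed quaternion image; it depends only
    on C1, C2 and C3, not on C0."""
    return -(xi * idct2(C1) + eta * idct2(C2) + gam * idct2(C3))

rng = np.random.default_rng(0)
fr, fg, fb = rng.random((3, 8, 8))
xi = eta = gam = 1 / np.sqrt(3)
C0, C1, C2, C3 = qdct(fr, fg, fb, xi, eta, gam)
C0 += 0.5   # embed a change in C0 only
f0 = iqdct_real(C0, C1, C2, C3, xi, eta, gam)
print(np.allclose(f0, 0))  # True: no watermark energy is lost
```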

3. Proposed Method

3.1. Perceptual Unit Pure Quaternion

To avoid watermark energy loss, the real part C0(p, s) after QDCT is selected for watermark embedding. As can be seen from Equation (16), different unit pure quaternions yield different C0(p, s) transform coefficients, and hence different coefficient-modification schemes for embedding the watermark. The combination of the unit pure quaternion and its weights therefore affects the performance of the watermarking algorithm.
C0(p, s) = −[ξ DCT(fR(x, y)) + η DCT(fG(x, y)) + γ DCT(fB(x, y))]
where DCT(fR(x, y)), DCT(fG(x, y)) and DCT(fB(x, y)) are the conventional DCT matrices of the red, green and blue channels, respectively. The C0(p, s) part can therefore be regarded as a weighted aggregate of the color components R, G and B. Although embedding watermark information into the C0(p, s) part changes the distribution of its values, in the spatial domain these differences spread over all three color components of the whole image.
During the QDCT transformation, the unit pure quaternion μ = (i + j + k)/√3, with ξ = η = γ = 1/√3, is most commonly used. This unit pure quaternion causes the same amount of change in the three color components R, G and B. However, because the human eye's sensitivity to R, G and B differs, such uniform changes degrade the invisibility of the watermarking method. To improve the invisibility of the watermarking scheme, we propose the perceptual unit pure quaternion.
In exploring the weights of ξ, η and γ, we note that Zhu et al. [22] pointed out that the RGB input signal can be converted into a YCbCr signal to remove the redundancies across the three color channels, with good experimental results. The luminance component Y can be represented using the R, G and B color components with weights of 0.299, 0.587 and 0.114, respectively. Moreover, some color image watermarking algorithms in YCbCr (or YUV) space [23,24] modify the luminance component to inject the watermark, and their experimental results show good invisibility.
Therefore, to obtain good imperceptibility of the watermarked image, the weights of the unit pure quaternion are chosen according to the relative relationship between the color channels R, G and B, i.e., 0.299:0.587:0.114, while satisfying the constraint μ² = −1. The perceptual unit pure quaternion is then
μ = ξ * i + η * j + γ * k
and,
μ² = (ξ* i + η* j + γ* k)(ξ* i + η* j + γ* k) = −(ξ*)² + ξ*η* k − ξ*γ* j − ξ*η* k − (η*)² + η*γ* i + ξ*γ* j − η*γ* i − (γ*)² = −(ξ*)² − (η*)² − (γ*)² = −1
where the weights ξ*, η* and γ* of the perceptual unit pure quaternion μ are in the ratio 0.299:0.587:0.114. Substituting this ratio into Equation (18), we obtain ξ* = 0.4472, η* = 0.8780 and γ* = 0.1705. The experimental results provided in Section 4.3.1 show that the perceptual unit pure quaternion μ yields better performance.
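The weights can be reproduced in a few lines (a sketch; the 0.299:0.587:0.114 ratio is taken from the text, and the normalization enforces (ξ*)² + (η*)² + (γ*)² = 1):

```python
import numpy as np

# Normalize the YCbCr luminance weights so the quaternion is a unit pure
# quaternion, i.e. mu^2 = -1.
w = np.array([0.299, 0.587, 0.114])
xi, eta, gam = w / np.linalg.norm(w)
print(round(xi, 4), round(eta, 4), round(gam, 4))  # 0.4472 0.878 0.1705
```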

3.2. Proposed Quaternionic JND Model

A high-precision perceptual JND profile for an image usually accounts for several perceptual effects, including the spatial contrast sensitivity function (CSF), the luminance adaptation (LA) effect and the contrast masking (CM) effect. For color images, color sensitivity must also be considered. The JND in the QDCT domain is typically expressed as the product of a base threshold and several modulation factors. In this paper, the real part C0(p, s) after QDCT is selected for watermark embedding. To obtain the JND threshold of the modified coefficients in C0(p, s), a novel contrast masking effect considering colorfulness is introduced in this section:
J N D ( t , m , n ) = τ · N · J q _ b a s e · M q _ C M · M q _ L A · M q _ C O L
where the parameter t is the index of a QDCT block, and ( m , n ) is the position of the QDCT block coefficients. τ is to account for the summation effect of individual JND thresholds over a spatial neighborhood for the visual system and is set to 0.14. N is the dimension of QDCT (8 in this case). J q _ b a s e is the base CSF threshold, M q _ L A is the LA effect and M q _ C M is the CM effect [8,9,25,26]. And M q _ C O L is an important factor to reflect the colorfulness.

3.2.1. Spatial CSF in Quaternion Domain

J q _ b a s e is the quaternion domain JND value for the component C 0 ( p , s ) generated by spatial CSF on a uniform background image [6] and can be given by considering the oblique effect in QDCT domain as
J_q_base = (J_q_d(ω_{m,n}) − J_q_v(ω_{m,n})) · sin²(φ_{m,n}) + J_q_v(ω_{m,n})
where J_q_d(ω_{m,n}) and J_q_v(ω_{m,n}) are formulated from the QDCT coefficient frequency as
J_q_d(ω_{m,n}) = 0.0293 · ω_{m,n}² − 0.1382 · ω_{m,n} + 1.75
J_q_v(ω_{m,n}) = 0.0238 · ω_{m,n}² − 0.1771 · ω_{m,n} + 1.75
where ω_{m,n} is the spatial frequency in cycles per degree (cpd) for the (m, n)-th QDCT coefficient and is given by
ω_{m,n} = √(m² + n²) / (2Nθ)
and,
θ = tan⁻¹[1 / (2 · R_VH · H)]
where θ indicates the horizontal/vertical length of a pixel in degrees of visual angle, R V H is the ratio of the viewing distance to the screen height, and H is the number of pixels in the screen height. φ m , n stands for the direction angle of the corresponding QDCT component, which is expressed as
φ_{m,n} = sin⁻¹(2 · ω_{m,0} · ω_{0,n} / ω_{m,n}²)

3.2.2. Luminance Adaptation in Quaternion Domain

A luminance adaptation factor M_q_LA that employs both the spatial frequency ω_{m,n} in cycles per degree (cpd) and the average intensity value μ_la of the block can be formulated as
M_q_LA = 1 + (M_0.1 − 1) · ((0.3 − μ_la) / 0.2)^0.8 for μ_la ≤ 0.3, and M_q_LA = 1 + (M_0.9 − 1) · ((μ_la − 0.3) / 0.6)^0.6 for μ_la > 0.3
where M_0.1 and M_0.9 are empirically set as
M_0.1 = 2.468 × 10⁻⁴ · ω_{m,n}² + 4.466 × 10⁻³ · ω_{m,n} + 1.14
M_0.9 = 1.230 × 10⁻⁴ · ω_{m,n}² + 1.433 × 10⁻³ · ω_{m,n} + 1.34
where ω m , n is expressed as in Equation (22) and the average intensity value of the t-block μ l a can be expressed as
μ_la = C0(0, 0) · C / E_d
where C0(0, 0) is the QDCT coefficient at position (0, 0) of the t-th C0 block, called the Q-DC coefficient (quaternion DC coefficient). E_d denotes the maximum directional energy of the image block, given in Equation (28), and C is a fixed constant approximately equal to E_d, which ensures the invariance and stability of μ_la. The proposed formula can therefore resist the fixed-gain attack, since it varies linearly with amplitude changes.
E d = max ( C 0 ( 0 , 1 ) , C 0 ( 1 , 0 ) , C 0 ( 1 , 1 ) )
where C 0 ( 0 , 1 ) , C 0 ( 1 , 0 ) and C 0 ( 1 , 1 ) are the QDCT coefficients at position ( 0 , 1 ) , ( 1 , 0 ) and ( 1 , 1 ) of the t-th C 0 block called Q-AC coefficient (Quaternion AC coefficient). Similar to DCT transformation [27], the Q-AC coefficients obtained after QDCT transformation can reflect the image block direction energy. In our work, we select C 0 ( 0 , 1 ) , C 0 ( 1 , 0 ) , C 0 ( 1 , 1 ) to reflect the directional energy of the block in the horizontal, vertical and diagonal direction, respectively.
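A direct transcription of the luminance adaptation factor and the directional energy (our sketch; the piecewise form is our reading of Equation (25) and should be treated as an assumption):

```python
def m_q_la(mu_la, w_mn):
    """Luminance adaptation factor M_q_LA.
    mu_la: normalized average block intensity; w_mn: frequency in cpd."""
    M01 = 2.468e-4 * w_mn ** 2 + 4.466e-3 * w_mn + 1.14
    M09 = 1.230e-4 * w_mn ** 2 + 1.433e-3 * w_mn + 1.34
    if mu_la <= 0.3:
        return 1 + (M01 - 1) * ((0.3 - mu_la) / 0.2) ** 0.8
    return 1 + (M09 - 1) * ((mu_la - 0.3) / 0.6) ** 0.6

def max_directional_energy(C0):
    """E_d: maximum directional energy of the block, taken over the
    horizontal, vertical and diagonal Q-AC coefficients."""
    return max(C0[0][1], C0[1][0], C0[1][1])

print(round(m_q_la(0.3, 5.0), 4))  # 1.0: both branches meet at the breakpoint
```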

3.2.3. Pattern Guided Contrast Masking in Quaternion Domain

M q _ C M is modeled for boosting the J q _ b a s e based on local spatial texture complexity (e.g., smoothness, edge or texture), which is given by
M_q_CM = 1 + (g1(ω_{m,n}) − 1) · (μ_cm / 0.15) for 0 ≤ μ_cm < 0.15; M_q_CM = g1(ω_{m,n}) for 0.15 ≤ μ_cm < 0.2; M_q_CM = g1(ω_{m,n}) + (g2(ω_{m,n}) − g1(ω_{m,n})) · ((μ_cm − 0.2) / 0.1) otherwise
where g_l(ω_{m,n}) is modeled in a gamma-pdf form and expressed as
g_l(ω_{m,n}) = ((β_cm^α_cm / Γ(α_cm)) · ω^(α_cm − 1) · e^(−β_cm · ω)) · γ_cm + δ_m
g_{l=1}(ω_{m,n}): α_cm = 3.4, β_cm = 2, γ_cm = 8.0, δ_m = 1.42
g_{l=2}(ω_{m,n}): α_cm = 3.4, β_cm = 2, γ_cm = 12.4, δ_m = 2.83
where μ_cm represents the contrast masking effect of the t-th QDCT block. In this paper, both pattern complexity and luminance contrast are considered to construct the contrast masking effect, and μ_cm is defined as
μ c m = f ( C p ) · μ ( C l )
where, C p is the pattern complexity and C l is the luminance contrast of t-th QDCT block, respectively.
The pattern complexity measurement of the block proposed by Wan et al. [9] is the ratio of the maximum directional energy and the DC coefficient of each 8 × 8 block, which can measure energy in different directions while keeping the measurement of pattern complexity insensitive to the changes caused by the watermarking process. However, this method ignores the relationship between the directional energy of a DCT block and its neighboring DCT blocks. Therefore, we propose a new pattern complexity representation that combines the directional energy within a QDCT block and the directional energy of its neighboring QDCT blocks. This method is more effective in representing the complexity relationship of image patterns.
Firstly, we choose a neighborhood of size 3 × 3 for each 8 × 8 QDCT block. If the directional location of the maximum directional energy of its neighboring block is the same as this QDCT block, then the neighboring block is marked. We choose the ratio of the number of marked neighborhood blocks to all neighborhood blocks of this QDCT block as the pattern complexity C p .
Therefore, the pattern complexity C p of the image block is represented by
C_p = (1 / n) Σ_{i=1}^{n} D_i
where, D i represents the correlation between the image block and its neighbor in Equation (34), and n is the number of neighborhoods of t-th QDCT block.
D_i = 1 if location(E_d) = location(E_{d,i}), and D_i = 0 otherwise
where, E d is the maximum directional energy of t-th QDCT block and E d , i ( i = 1 , 2 , , n ) are the maximum directional energy of neighboring blocks of the t-th QDCT block.
Since the pattern complexity of the irregular regions in the image is stronger, the diminishing effect of C p follows the non-linear transducer as
f(C_p) = 1 − 0.2 · C_p^0.7
The luminance contrast C l can be obtained from Q-AC coefficients C 0 ( 0 , 1 ) , C 0 ( 1 , 0 ) and C 0 ( 1 , 1 )
C_l = Norm(√(C0(0, 1)² + C0(1, 0)² + C0(1, 1)²))
where Norm(·) is the normalization operation. Following a logarithmic form, the increasing effect of C_l can be represented as
μ ( C l ) = ln ( 1 + 0.47 · C l )
Figure 1 shows the μ c m of three types of image blocks, such as smoothness, edge and texture. The yellow image block is smooth, and its μ c m is less than 0.15. The blue image block is an edge block whose μ c m is greater than 0.15 and less than 0.2. An image block with its μ c m greater than 0.2 is a texture block, such as the green image block in Figure 1.
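To make the construction of μ_cm concrete, here is a small sketch (our own illustration): the pattern complexity Cp is the fraction of 3 × 3 neighbors whose maximum-energy direction matches the block's, and μ_cm combines its diminishing effect with the increasing effect of the normalized luminance contrast:

```python
import math

def pattern_complexity(block_dir, neighbor_dirs):
    """Cp: share of neighboring blocks whose maximum directional energy
    lies in the same direction as the block's ('h', 'v' or 'd')."""
    return sum(d == block_dir for d in neighbor_dirs) / len(neighbor_dirs)

def f_cp(cp):
    return 1 - 0.2 * cp ** 0.7        # diminishing effect of Cp

def mu_cl(cl):
    return math.log(1 + 0.47 * cl)    # increasing effect of normalized Cl

# A block whose dominant direction is horizontal, with 8 neighbors:
cp = pattern_complexity('h', ['h', 'v', 'h', 'd', 'h', 'h', 'v', 'h'])
mu_cm = f_cp(cp) * mu_cl(0.6)         # mu_cm = f(Cp) * mu(Cl)
print(round(cp, 3))  # 0.625
```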

3.2.4. Colorfulness Masking in Quaternion Domain

In this part, we propose a new masking function that considers the colorfulness masking effect from the C1, C2 and C3 parts. For color images, when human eyes observe different colors, the interaction between the colors interferes with the judgment of color. Colorfulness is the attribute of chrominance information that humans perceive. Hasler and Susstrunk [28] have shown that colorfulness can be represented effectively with combinations of image statistics (the variance and mean values). Panetta et al. [29] pointed out that, like the human visual system (HVS), human eyes capture color information in opponent color spaces such as the red-green (R-G) and yellow-blue (Y-B) spaces. In short, colorfulness can be formulated using image statistics in opponent color spaces.
In this paper, we select the C1, C2 and C3 parts after QDCT to calculate the image block's colorfulness. In the QDCT domain, the coefficients are first transformed into the opponent red-green and yellow-blue color space as follows:
K1(R−G) = C3, K2(Y−B) = C2 − C1
Then, for a QDCT block ( 8 × 8 ), the image colorfulness Q c is defined as
Q_c = (√(σ_K1² + σ_K2²) + 0.3 · √(μ_K1² + μ_K2²)) / 85.59
where, σ K 1 2 , σ K 2 2 , μ K 1 and μ K 2 represent the variance and mean values along these two opponent color axes and can be expressed by the coefficients of QDCT block
μ_K1 = (1 / N) Σ_{p=1}^{N} K1_p
σ_K1² = (1 / N) Σ_{p=1}^{N} (K1_p² − μ_K1²)
and μ_K2 and σ_K2² are computed analogously over K2.
Figure 2 shows the comparisons of colorfulness metrics. Figure 2a,b are from TID2008 database [30]. The colorfulness of Figure 2a,b is 0.9462 and 0.4563, respectively. The results indicate the colorfulness metrics have a good correlation with human color perception. Inspired by this, a factor obtained from colorfulness is used to make JND a better match for human beings. The colorfulness masking factor M q _ C O L is defined as
M_q_COL = 1 + (Norm(Q_c) − 0.3) · 0.28
where Norm(·) is the normalization operation.
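The block colorfulness statistic can be sketched as follows (our own illustration; the opponent mapping K1 = C3 and K2 = C2 − C1 is our reading of the text):

```python
import numpy as np

def block_colorfulness(C1, C2, C3):
    """Q_c: Hasler-Susstrunk-style statistic over the opponent channels
    K1 = C3 (red-green) and K2 = C2 - C1 (yellow-blue)."""
    K1, K2 = C3.ravel(), (C2 - C1).ravel()
    sigma = np.sqrt(K1.var() + K2.var())           # joint std term
    mu = np.sqrt(K1.mean() ** 2 + K2.mean() ** 2)  # joint mean term
    return (sigma + 0.3 * mu) / 85.59

rng = np.random.default_rng(1)
C1, C2, C3 = rng.normal(scale=20.0, size=(3, 8, 8))  # synthetic QDCT parts
qc = block_colorfulness(C1, C2, C3)
print(qc > 0)  # True
```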

3.3. QuatJND-Based Watermarking

In this section, the flowchart of the proposed watermarking scheme based on the QuatJND model is briefly introduced.

3.3.1. Adaptive Quantization Step

In this paper, some of the QDCT coefficients are taken as the host vector X, and the maximum imperceptible change along a random direction v can be given as X^T v. To ensure the independence between the quantization compensation and the original signal in the watermarking process, the host vector is first transformed into the logarithmic domain:
Y = F(X^T v) = ln(1 + z · X^T v / E_d) / ln(1 + z)
where v is the random projection vector, E_d is the maximum directional energy of the image block in Equation (28), which is used to resist linear amplitude variation, and z is used as a secret key.
In this arrangement, the transformed vector Y is quantized into Y w regarding the watermark bit as
Y_w = Q(Y, Δ, w, d_m) = Δ · round((Y + d_m) / Δ) − d_m, w ∈ {0, 1}
where d_m is the dither signal corresponding to the message bit w, and the proposed JND model is used as a slack S to calculate the adaptive quantization step Δ
Δ = ln(1 + 2 · S^T v / E_d) / ln(1 + z)
Thus, when the image is scaled by a fixed gain, the coefficients to be watermarked and the estimated quantization step Δ can ensure stability.
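Putting the logarithmic transform, the dither quantizer and the adaptive step together, a toy round trip looks like this (our own sketch with illustrative values; the dither construction is our assumption, not the paper's):

```python
import numpy as np

def log_transform(xv, z, Ed):
    """Logarithmic transform of the projected host value X^T v."""
    return np.log(1 + z * xv / Ed) / np.log(1 + z)

def quantize(y, delta, w, dither):
    """Dither quantization of y for message bit w."""
    d = dither[w]
    return delta * np.round((y + d) / delta) - d

def detect(yw, delta, dither):
    """Minimum-distance detector over b in {0, 1}."""
    return min((0, 1), key=lambda b: abs(yw - quantize(yw, delta, b, dither)))

z, Ed = 2.0, 50.0          # z acts as a secret key (illustrative value)
xv = 12.3                  # projected host value X^T v (illustrative)
sv = 4.0                   # projected JND slack S^T v (illustrative)
delta = np.log(1 + 2 * sv / Ed) / np.log(1 + z)   # adaptive quantization step
dither = {0: 0.0, 1: delta / 2}                   # key-dependent dither (assumed)

y = log_transform(xv, z, Ed)
yw = quantize(y, delta, 1, dither)   # embed bit 1
print(detect(yw, delta, dither))     # 1
```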

3.3.2. Watermark Embedding Procedure

The proposed watermarking scheme includes two parts, embedding and extraction procedure. Figure 3 illustrates the embedding steps of the watermarking scheme. Here, taking Lena image as an example, the procedures of the watermark embedding are shown as follows:
Step 1: For an original image, it is first divided into non-overlapped blocks of 8 × 8 size, and each block is converted to the quaternion representation by Equation (4).
Step 2: Apply QDCT which used the perceptual unit pure quaternion μ to each block, and the QDCT spectrum coefficients are obtained by Equation (11).
Step 3: Estimate the QuatJND factors including the spatial CSF effect, luminance adaptation and contrast masking in C 0 quaternion domain by Equations (20), (25) and (29), respectively.
Step 4: Extract colorfulness feature from C 1 , C 2 and C 3 by Equation (39). Quantize and calculate the colorfulness masking of each 8 × 8 block for QuatJND profile by Equation (42).
Step 5: The final QuatJND value of each block, combined with the colorfulness masking, is determined by Equation (19). The proposed QuatJND value serves as the perceptual redundancy vector S.
Step 6: The C0 coefficients from the fourth to the tenth, except the fifth, after zigzag scanning are selected to form the host vector X. The host vector X and the perceptual redundancy vector S are used to obtain the transformed vector Y and the adaptive quantization step Δ.
Step 7: One bit of the watermark message w, after Arnold transformation, is embedded into the transformed vector Y as follows:
Y w = Q ( Y , Δ , w , d m )
Step 8: Transform the modulated coefficients Y_w back to obtain the watermarked vector X_w.
Step 9: Finally, the inverse QDCT on each block is performed, and then the watermarked image is obtained.

3.3.3. Watermark Extracting Procedure

The extraction algorithm is the inverse of the embedding algorithm. Figure 4 illustrates the extracting steps of the watermarking scheme. The procedures of watermark extraction are as follows:
Step 1: The watermarked image is first divided into non-overlapped blocks of 8 × 8 size, and each block is converted to the quaternion representation by Equation (4).
Step 2: Apply QDCT which used the perceptual unit pure quaternion μ to each block, and the QDCT spectrum coefficients are obtained by Equation (11).
Step 3: Estimate the QuatJND factors including the spatial CSF effect, luminance adaptation and contrast masking in C 0 quaternion domain by Equations (20), (25) and (29), respectively.
Step 4: Extract colorfulness feature from C 1 , C 2 and C 3 by Equation (39). Quantize and calculate the colorfulness masking of each 8 × 8 block for QuatJND profile by Equation (42).
Step 5: The final QuatJND value of each block, combined with the colorfulness masking, is determined by Equation (19). The proposed QuatJND value serves as the perceptual redundancy vector S.
Step 6: The C0 coefficients from the fourth to the tenth, except the fifth, after zigzag scanning are selected to form the host vector X. The host vector X and the perceptual redundancy vector S are used to obtain the transformed vector Y and the adaptive quantization step Δ.
Step 7: The watermark can be detected according to the minimum distance detector as follows
$$w' = \arg\min_{b \in \{0,1\}} \left\| Y - Q(Y, \Delta, b, d_m) \right\|$$
Step 8: The final watermark image is obtained by the inverse Arnold transform.
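The minimum-distance detection in Step 7 can be sketched as follows, again with a generic dither-modulation quantizer in place of the scheme's Q and with illustrative Δ and dither values:

```python
import numpy as np

def dm_quantize(y, delta, bit, dither):
    """Dithered uniform quantizer Q(y, Δ, b, d_m)."""
    d = dither[bit]
    return delta * np.round((y - d) / delta) + d

def dm_detect(y, delta, dither):
    """Minimum-distance detector: return the bit whose quantizer
    reconstructs the received projection most closely."""
    errors = [abs(y - dm_quantize(y, delta, b, dither)) for b in (0, 1)]
    return int(np.argmin(errors))

delta, dither = 0.8, (0.0, 0.4)
y_w = dm_quantize(1.37, delta, 1, dither)    # projection carrying bit 1
print(dm_detect(y_w + 0.05, delta, dither))  # bit survives a small perturbation -> prints 1
```

As long as the attack perturbs the projection by less than half the distance between the two dither lattices, the correct bit is recovered.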

4. Experimental Results and Comparisons

In this section, we present and discuss the experimental results. To evaluate the effectiveness and robustness of the proposed scheme, all experiments were performed with original code in MATLAB (MathWorks, Natick, MA, USA) R2019a on a 64-bit Windows 10 system with 16 GB of memory and a 3.40 GHz Intel (R) Core (TM) i7-6700 CPU (Intel, Santa Clara, CA, USA).

4.1. Performance Metrics

In the experiments, two objective criteria, the Peak Signal to Noise Ratio (PSNR) and the Quaternion Structural Similarity Index (QSSIM), are used to measure fidelity. The Bit Error Rate (BER) is computed to evaluate the robustness of the algorithms.
(1)
Peak Signal to Noise Ratio (PSNR)
PSNR provides an objective standard for measuring image distortion or noise level. In this experiment, we use PSNR to evaluate the quality of the watermarked image relative to the original image, i.e., the invisibility of the embedded watermark. The result is expressed in dB (decibels); the larger the PSNR between the two images, the better the invisibility of the watermarking scheme. For a host color image I of size M × N and its watermarked version I′, the PSNR is defined as
$$\mathrm{PSNR} = 10 \lg\!\left[ \frac{255^2}{\frac{1}{3MN}\sum_{x=1}^{M}\sum_{y=1}^{N}\sum_{\theta \in \{R,G,B\}} \bigl(I_\theta(x,y) - I'_\theta(x,y)\bigr)^2} \right]$$
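The definition can be implemented directly; a minimal sketch, assuming 8-bit RGB arrays and averaging the squared error over the three channels:

```python
import numpy as np

def psnr_color(host, watermarked):
    """PSNR (dB) between a host RGB image and its watermarked version,
    with the MSE averaged over all pixels of all three channels."""
    diff = host.astype(np.float64) - watermarked.astype(np.float64)
    mse = np.mean(diff ** 2)  # (1 / 3MN) * sum over x, y and theta in {R, G, B}
    return 10 * np.log10(255.0 ** 2 / mse)
```

For example, a uniform error of 16 gray levels on every channel gives a PSNR of roughly 24 dB.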
(2)
Quaternion Structural Similarity Index (QSSIM)
Kolaman et al. [31] developed a visual quality metric that better evaluates the quality of color images, named the quaternion SSIM (QSSIM). QSSIM takes values in [0, 1]; the closer the value is to 1, the better the visual quality of the image. QSSIM is defined by Equation (49), which has the same form as SSIM but with quaternion components.
$$\mathrm{QSSIM} = \frac{2\,\mu_{q_I}\cdot\mu_{q_{I'}}}{\|\mu_{q_I}\|^2 + \|\mu_{q_{I'}}\|^2} \cdot \frac{2\,\sigma_{q_I q_{I'}}}{\sigma_{q_I}^2 + \sigma_{q_{I'}}^2}$$
where q_I and q_{I′} are the quaternion representations (QR) of image I and its watermarked version I′, respectively; μ_{q_I} and μ_{q_{I′}} are their means; σ²_{q_I} and σ²_{q_{I′}} are their variances; and σ_{q_I q_{I′}} is their covariance.
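A simplified, global version of QSSIM can be sketched as follows, treating each RGB pixel as the pure quaternion Ri + Gj + Bk stored as a 3-vector. Note that the original metric of Kolaman et al. [31] is computed over local windows with stabilizing constants, which this sketch omits:

```python
import numpy as np

def qssim_global(img, img_w):
    """Global QSSIM sketch: quaternion means, variances and covariance
    computed over the whole image, combined as in the SSIM formula."""
    q1 = img.reshape(-1, 3).astype(np.float64)    # pixels of I as pure quaternions
    q2 = img_w.reshape(-1, 3).astype(np.float64)  # pixels of I'
    mu1, mu2 = q1.mean(axis=0), q2.mean(axis=0)
    var1 = np.mean(np.sum((q1 - mu1) ** 2, axis=1))
    var2 = np.mean(np.sum((q2 - mu2) ** 2, axis=1))
    cov = np.mean(np.sum((q1 - mu1) * (q2 - mu2), axis=1))
    luminance = 2 * np.dot(mu1, mu2) / (np.dot(mu1, mu1) + np.dot(mu2, mu2))
    structure = 2 * cov / (var1 + var2)
    return luminance * structure
```

An unchanged image scores exactly 1; distortion drives the score toward 0.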
(3)
Bit Error Rate (BER)
The Bit Error Rate is used to evaluate the quality of the extracted binary watermark w′ compared with its original version w, both of M_w × N_w pixels. The BER between w′ and w is given by
$$\mathrm{BER} = \frac{\sum_{x=1}^{M_w}\sum_{y=1}^{N_w} w'(x,y) \oplus w(x,y)}{M_w \times N_w}$$
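The BER is simply the fraction of mismatched bits; a minimal sketch, assuming both watermarks are binary arrays:

```python
import numpy as np

def ber(w, w_ext):
    """Bit error rate: fraction of positions where the extracted binary
    watermark differs from the original (XOR summed, then normalized)."""
    w = np.asarray(w, dtype=int)
    w_ext = np.asarray(w_ext, dtype=int)
    return float(np.mean(w ^ w_ext))

print(ber([[0, 1], [1, 0]], [[0, 1], [0, 0]]))  # one flipped bit out of four -> 0.25
```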

4.2. Imperceptibility

To verify the performance of the proposed color image watermarking algorithm, 109 color images available from the Computer Vision Group at the University of Granada (http://decsai.ugr.es/cvg/dbimagenes/, accessed on 21 September 2020) were considered. A binary watermark “SDNU” of length 4096 bits ( 64 × 64 ) is embedded into the original cover images, as shown in Figure 5. Eight standard images, ‘Lena’, ‘Avion’, ‘Baboon’, ‘House’, ‘Athens’, ‘Sailboat’, ‘Butrfly’ and ‘Goldgate’, were used as test images. The eight test images, each of size 512 × 512, are shown in Figure 6.
For evaluating the invisibility of the embedded watermark, we embed the watermark in Figure 5 into the host images in Figure 6a–h, respectively. The proposed scheme was compared with popular watermarking schemes: QDFT [16], QSVD [32], the color image watermarking based on orientation diversity and color complexity (CIW-OCM) proposed by Wang et al. [10], the robust image watermarking via a perceptual structural regularity-based JND model (RIW-SJM) proposed by Wang et al. [11], and Su [33]. First of all, a good watermarking scheme must show satisfying invisibility. Figure 7 gives the visual quality scores of the watermarked images. The tested images in Figure 7 are first constrained to the same PSNR = 42 dB, which we ensure by adjusting the embedding intensity factor, and the QSSIM values are then compared; the higher the QSSIM value, the more completely the details and structure of the image are preserved. The average QSSIM values of the compared algorithms are 0.9850, 0.9886, 0.9794, 0.9814 and 0.9864, respectively, while that of the proposed scheme is 0.9810. Although the proposed scheme does not obtain the best result, its QSSIM values are close to those of the other schemes on average; with the same PSNR guaranteed, the QSSIM of our scheme is comparable to theirs. This is because, in order to achieve a balance between imperceptibility and robustness, our scheme estimates the perceptual redundancy of the image more accurately while still satisfying imperceptibility, which allows larger modifications of the image. The algorithm in this paper can therefore obtain better robustness while satisfying imperceptibility, as the robustness tests in Section 4.3 below also demonstrate.
To show that the proposed image watermarking scheme produces a high watermark quality and that the watermark can be extracted correctly without attack, the test images were watermarked at a uniform fidelity, a fixed Peak Signal to Noise Ratio (PSNR) of 42 dB, and the bit error rate (BER) was computed for the objective performance evaluation. Figure 8 shows the cover images, watermarked images, and extracted watermarks. Intuitively, it is noticeable that the proposed method provides a good visual quality of the extracted watermark image.

4.3. Robustness

4.3.1. Evaluation of Different Unit Pure Quaternions

In order to show that the perceptual unit pure quaternion in Section 3.1 produces a better watermark quality, we compare the robustness results with different unit pure quaternions: μ1 = (2j + 8k)/√68 [13], μ2 = (j − k)/√2 [34], and μ3 = (i + j + k)/√3 [20]. Note that μ3 is the most commonly used unit quaternion in the quaternion-based image processing literature. Table 2 shows the performance for different μ. From the results, the perceptual unit quaternion yields a lower BER under JPEG compression, reflecting an advantage of the QDCT transform itself, which is compatible with the JPEG compression standard. Although the perceptual unit quaternion is not the best performer under Gaussian noise and filtering, it still has a low BER and shows good robustness. Overall, the perceptual unit pure quaternion μ performs better against common signal attacks, especially JPEG attacks.
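As a quick sanity check, each candidate μ above is indeed a unit pure quaternion: zero real part and unit norm (quaternions represented here as (a, b, c, d) coefficient vectors of a + bi + cj + dk):

```python
import numpy as np

# Candidate quaternions as coefficient vectors (a, b, c, d) of a + bi + cj + dk.
mu1 = np.array([0.0, 0.0, 2.0, 8.0]) / np.sqrt(68)  # (2j + 8k)/sqrt(68)
mu2 = np.array([0.0, 0.0, 1.0, -1.0]) / np.sqrt(2)  # (j - k)/sqrt(2)
mu3 = np.array([0.0, 1.0, 1.0, 1.0]) / np.sqrt(3)   # (i + j + k)/sqrt(3)

for mu in (mu1, mu2, mu3):
    assert mu[0] == 0.0                           # pure: zero real part
    assert abs(np.linalg.norm(mu) - 1.0) < 1e-12  # unit norm
```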

4.3.2. Evaluation of Different JND Models within QDCT Watermarking Algorithm

This experiment compares the performance of different JND models used within the proposed QDCT watermarking algorithm. To verify the robustness of our QuatJND-guided watermarking scheme, the proposed scheme is compared with three other JND models: Watson’s model [4], Kim’s model [6] and Zhang’s model [7].
In this experiment, we recomputed the features of Watson’s, Kim’s and Zhang’s models in the quaternion DCT domain. For example, for Kim’s model, we used the C_0 coefficients to calculate the base threshold, luminance adaptation and contrast masking in the quaternion domain. The test images are first watermarked and constrained to the same PSNR = 42 dB, and the average BER values are compared. As shown in Table 3, compared with the other JND models, the proposed model always has the lowest BER for different noise intensities, indicating that it performs much better than the others. Under JPEG compression, the four JND models behave differently: the average BERs of Watson’s, Kim’s, Zhang’s and the QuatJND model are 0.0828, 0.1144, 0.0775, and 0.0331, respectively, when the JPEG compression quality is 30. From Figure 9c, the extracted watermark can be clearly identified at JPEG compression quality 30. Median filtering and Gaussian filtering are also used to attack the watermarked image. For Median filtering (3,3), the BER of the proposed model is 4.5% higher than that of Kim’s model, but as Figure 10b shows, the extracted watermark can still be correctly recognized. In summary, our proposed QuatJND model performs excellently in the quaternion domain.

4.3.3. Evaluation of Watermarking Algorithms in Different Domains

This experiment compares the performance of different watermarking algorithms in the DCT domain and the spatial domain. To verify the effectiveness of the quaternion DCT and the advantage of the quaternion representation, the proposed scheme is compared with CIW-OCM [10], RIW-SJM [11] and Su [33].
(1)
Under common attacks
During image transmission, the watermarked image is easily and inevitably attacked by common operations such as Gaussian noise, Salt and Pepper noise, JPEG compression and Amplitude scaling. Table 4 lists the average robustness results for the eight test images using the different watermarking schemes under various attacks: Gaussian noise with zero mean and variances 0.0003, 0.0008 and 0.0012; Salt and Pepper noise with densities 0.004, 0.008 and 0.015; JPEG compression with quality factors 30, 50 and 80; and Amplitude scaling with factors 0.3, 1.2 and 1.5.
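The two noise attacks can be simulated as below. This is a sketch assuming pixel values normalized to [0, 1], following the conventions of MATLAB's imnoise rather than any code from the paper:

```python
import numpy as np

def gaussian_noise(img, variance, rng):
    """Zero-mean Gaussian noise with the given variance, clipped to [0, 1]."""
    return np.clip(img + rng.normal(0.0, np.sqrt(variance), img.shape), 0.0, 1.0)

def salt_pepper_noise(img, density, rng):
    """Force roughly a `density` fraction of pixels to 0 (pepper) or 1 (salt)."""
    noisy = img.copy()
    mask = rng.random(img.shape) < density
    noisy[mask] = rng.integers(0, 2, img.shape)[mask].astype(float)
    return noisy

rng = np.random.default_rng(0)
img = np.full((64, 64), 0.5)
attacked = salt_pepper_noise(gaussian_noise(img, 0.0008, rng), 0.004, rng)
```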
First, Table 4 shows that our proposed scheme obtains the minimum bit error rate among the compared schemes under Gaussian noise and Salt and Pepper noise attacks: the average BER of the proposed scheme is at least about 0.3% lower than that of CIW-OCM [10]. As the density of the Salt and Pepper noise increases, Su [33] shows a lower BER than ours at density 0.015. For traditional JPEG compression attacks, our scheme obtains similar results when the JPEG compression quality is greater than 50, which is 0.1–0.4% lower than CIW-OCM [10]; in general, the proposed model has the best average performance against JPEG compression. Finally, when the watermarked image is distorted by the Amplitude Scaling attack, the performance of the proposed model is not the best, but the results are close to those of the other schemes on average. From Figure 9d, the extracted watermark can be clearly identified when the Amplitude Scaling factor is 1.5, which satisfies the robustness requirement of the watermarking scheme against Amplitude Scaling attacks.
(2)
Under filtering attacks
Filtering attacks such as Median filtering and Gaussian filtering are commonly used to attack the watermarked image, and they can destroy the visual perception of the extracted watermark; the ability of a watermarking model to resist filtering attacks therefore needs to be considered. Table 5 and Figure 10a,b present the comparison results for filtering. For Median filtering (3,3), the BER of the proposed model is 1% lower than that of RIW-SJM [11]. For Gaussian filtering, the proposed model has the lowest BER among all models, which ensures that the extracted watermark image remains highly recognizable.
(3)
Under cropping attacks
In practice, the watermarked image may also be contaminated by other attacks such as cropping and geometric attacks. In this experiment, image rotation is considered as the geometric attack, which changes both the pixel values and the image size. We first compare the robustness results after cropping attacks in Table 6 and Figure 11. The watermarked image is affected by Central cropping (1/8 of the image), Left upper cropping (1/8), Row cropping (1/8) and Column cropping (1/8). From the results in Table 6, the proposed model achieves the lowest BER among the compared algorithms, which means that the proposed method provides a good visual quality of the extracted watermark image after different types of cropping attacks.
(4)
Under rotation attacks
To verify that the proposed image watermarking scheme is robust to geometric attacks, we test its robustness under image rotation. In this experiment, the watermarked image first undergoes a forward rotation and is then corrected by the inverse rotation. Specifically, the watermarked image is rotated clockwise by 30°, 60°, 90° and 120°, and then rotated counter-clockwise by the same angle before extracting the watermark.
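The rotate-then-correct procedure can be sketched with scipy (an illustrative stand-in for whatever rotation routine the paper used; what the watermark must survive is the residual interpolation error, not the rotation itself):

```python
import numpy as np
from scipy.ndimage import rotate

def rotation_attack(img, angle):
    """Rotate the watermarked image clockwise by `angle` degrees, then
    correct it with the inverse rotation before watermark extraction."""
    attacked = rotate(img, -angle, axes=(0, 1), reshape=False, order=1, mode="nearest")
    return rotate(attacked, angle, axes=(0, 1), reshape=False, order=1, mode="nearest")
```

For multiples of 90° the pixel grid maps onto itself and the image is recovered almost exactly; for 30° or 60° the bilinear interpolation leaves a small residual distortion.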
The robustness results for image rotation are listed in Table 7 and Figure 10c,d. Our proposed method has the lowest BER among the compared methods; the BER obtained by our method does not exceed 0.2% for rotation angles of 30°, 60°, 90° and 120°, which demonstrates significant robustness to image rotation.

4.3.4. Evaluation of Different Quaternion Watermarking Schemes

This experiment compares the performance of different quaternion watermarking algorithms. To verify the robustness of the proposed scheme in the quaternion DCT domain, it is compared with QDFT [16] and QSVD [32].
Table 8 shows the BER values of watermarked images attacked by Gaussian noise (GN), JPEG compression, Salt and Pepper noise (SPN), Median filtering (MF), Gaussian filtering (GF) and Amplitude Scaling (AS). For the traditional JPEG compression attack, both QSVD [32] and QDFT [16] perform worse than the proposed scheme; the likely reason is that working in the QDCT domain enhances resistance to JPEG attacks. When the watermarked image is distorted by the Amplitude Scaling attack, the proposed scheme outperforms all schemes except QSVD [32]. In QSVD [32], the watermark is inserted by modifying the coefficients f_{1,1} and f_{2,1} of the quaternion elements in the U matrix, and the Amplitude Scaling attack has minimal effect on the relative relationship between f_{1,1} and f_{2,1}; QSVD [32] therefore shows superior performance against Amplitude Scaling.
Table 9 compares the average BER values of our scheme and the other methods for different image attacks at a fixed image quality, QSSIM = 0.9820. Although QDFT [16] has better robustness against Gaussian noise and Salt and Pepper noise, it performs poorly under JPEG compression. For JPEG compression, Table 9 shows that our method has a lower BER than the other watermarking algorithms when the JPEG quality factor is 30 or 50. In addition, the robustness of our method is clearly better than the others under combined attacks in which JPEG compression is applied first, followed by Gaussian noise or Salt and Pepper noise. Overall, the watermarking framework based on the QuatJND model in the QDCT domain has better robustness than the other methods in most cases.

4.3.5. Evaluation of Combined Attacks

Table 3, Table 4, Table 5, Table 6, Table 7, Table 8 and Table 9 list the robustness performance under single image attacks. However, in actual digital signal transmission, the watermarked image may be damaged by multiple attacks simultaneously. We therefore further compare the robustness results under various combined attacks in Figure 12 and Figure 13. Figure 12 shows the BER after JPEG compression followed by Gaussian noise, Salt and Pepper noise, Gaussian filtering or Median filtering.
Figure 13 shows the BER after Gaussian noise followed by Amplitude Scaling, cropping or image rotation. From the results in Figure 12 and Figure 13, the human eye can still recognize the extracted watermark information after the different combined attacks. In summary, our method shows good robustness under combined image attacks, which means it can provide effective image copyright protection in practical applications.
On the whole, the existing quaternion watermarking algorithms QSVD [32] and QDFT [16], the DCT-domain watermarking algorithms CIW-OCM [10] and RIW-SJM [11], and Su [33], an improved watermarking algorithm based on Schur decomposition, show good invisibility for watermarked images in the results of Figure 7, but they have poorer robustness under some attacks and cannot achieve a good tradeoff between invisibility and robustness. As for CIW-OCM [10] and RIW-SJM [11], although these algorithms achieve a good tradeoff by using JND models, they neglect the correlation among the three color components. The proposed model exploits the correlation of the three color channels and uses the QuatJND model to obtain the optimum quantization step, and the results show that our scheme is more robust than the others.

5. Conclusions

In this paper, we proposed a robust quaternion JND model for color image watermarking (QuatJND). We first obtain the quaternion DCT coefficients using the perceptual unit pure quaternion; the QuatJND model is then calculated from these coefficients, with color information also taken into account. A logarithmic STDM scheme is further built on QuatJND. Our scheme is evaluated under different types of attacks, such as Gaussian noise, JPEG compression, Gaussian filtering, Median filtering and geometric attacks like image rotation and cropping, and it also provides robustness results under combined attacks. Experimental results show that our scheme provides better robustness than existing techniques. Color is very important content in images; in future research, we can further exploit the color information in images to enhance the accuracy of the QuatJND model, for example by analyzing the cross-masking effect of the luminance and color components to improve the imperceptibility of watermarked images. Meanwhile, deep learning methods can extract image features more effectively and may help to build a more accurate JND model whose robustness under various attacks is even stronger.

Author Contributions

Conceptualization, W.W.; methodology, W.W. and W.L. (Wenqing Li); validation, W.W. and W.L. (Wenqing Li); formal analysis, W.L. (Wenxiu Liu) and Z.D.; investigation, W.L. (Wenxiu Liu) and Z.D.; writing—original draft preparation, W.W. and W.L. (Wenqing Li); writing—review and editing, W.W.; supervision, Y.Z.; funding acquisition, W.W. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Natural Science Foundation of China, grant numbers 61601268, 61803237 and 61901246, and by the Natural Science Foundation of Shandong Province, grant numbers ZR2019BF035, ZR2020MF042 and ZR2020QF034.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Su, Q.; Chen, B. Robust color image watermarking technique in the spatial domain. Soft Comput. 2018, 22, 91–106. [Google Scholar] [CrossRef]
  2. Tsui, T.K.; Zhang, X.P.; Androutsos, D. Color image watermarking using multidimensional Fourier transforms. IEEE Trans. Inf. Forensics Secur. 2008, 3, 16–28. [Google Scholar] [CrossRef]
  3. Thongkor, K.; Amornraksa, T. Digital image watermarking with partial embedding on blue color component. In Proceedings of Asia-Pacific Signal and Information Processing Association Annual Summit and Conference 2014 (APSIPA), Chiang Mai, Thailand, 9–12 December 2014; pp. 1–4. [Google Scholar]
  4. Watson, A.B. DCT quantization matrices visually optimized for individual images. In Proceedings of the Human Vision, Visual Processing, and Digital Display IV, San Jose, CA, USA, 1–4 February 1993; pp. 202–216. [Google Scholar]
  5. Ma, L.; Yu, D.; Wei, G.; Tian, J.; Lu, H. Adaptive spread-transform dither modulation using a new perceptual model for color image watermarking. IEICE Trans. Inf. Syst. 2010, 93, 843–857. [Google Scholar] [CrossRef]
  6. Bae, S.H.; Kim, M.A. A novel DCT-based JND model for luminance adaptation effect in DCT frequency. IEEE Signal Process. Lett. 2013, 20, 893–896. [Google Scholar]
  7. Zhang, X.H.; Lin, W.S.; Xue, P. Improved estimation for just-noticeable visual distortion. Signal Process. 2005, 85, 795–808. [Google Scholar] [CrossRef]
  8. Wan, W.; Liu, J.; Sun, J.; Gao, D. Improved logarithmic spread transform dither modulation using a robust perceptual model. Multimed. Tools Appl. 2016, 75, 13481–13502. [Google Scholar] [CrossRef]
  9. Wan, W.; Wang, J.; Li, J.; Meng, L.; Sun, J.; Zhang, H.; Liu, J. Pattern complexity-based JND estimation for quantization watermarking. Pattern Recognit. Lett. 2020, 130, 157–164. [Google Scholar] [CrossRef]
  10. Wang, J.; Wan, W.; Li, X.; Sun, J.; Zhang, H. Color image watermarking based on orientation diversity and color complexity. Expert Syst. Appl. 2020, 140, 112868.1–112868.16. [Google Scholar] [CrossRef]
  11. Wang, C.; Xu, M.; Wan, W.; Wang, J.; Meng, L.; Li, J.; Sun, J. Robust image watermarking via perceptual structural regularity-based JND model. Ksii Trans. Internet Inf. Syst. 2019, 13, 1080–1099. [Google Scholar]
  12. Tsui, T.K.; Zhang, X.P.; Androutsos, D. Quaternion image watermarking using the spatio-chromatic fourier coefficients analysis. In Proceedings of the 14th ACM International Conference on Multimedia, Santa Barbara, CA, USA, 23–27 October 2006; pp. 149–152. [Google Scholar]
  13. Bas, P.; Le Bihan, N.; Chassery, J.M. Color image watermarking using quaternion Fourier transform. In Proceedings of the 2003 IEEE International Conference on Acoustics, Speech, and Signal Processing, Hong Kong, China, 6–10 April 2003; p. 521. [Google Scholar]
  14. Ma, X.; Xu, Y.; Song, L.; Yang, X.; Burkhardt, H. Color image watermarking using local quaternion Fourier spectral analysis. In Proceedings of the 2008 IEEE International Conference on Multimedia and Expo, Hannover, Germany, 23–26 June 2008; pp. 233–236. [Google Scholar]
  15. Jiang, S.H.; Zhang, J.Q.; Hu, B. Content based image watermarking algorithm in hypercomplex frequency domain. Syst. Eng. Electron. 2009, 31, 2242–2248. [Google Scholar]
  16. Chen, B.; Coatrieux, G.; Chen, G.; Sun, X.; Coatrieux, J.L.; Shu, H. Full 4-D quaternion discrete Fourier transform based watermarking for color images. Digit. Signal Process. 2014, 28, 106–119. [Google Scholar] [CrossRef]
  17. Yanshan, L. A new color image blind watermarking algorithm based on quaternion. In Proceedings of the IEEE 10th International Conference on Signal Processing Proceedings, Beijing, China, 24–28 October 2010; pp. 1698–1701. [Google Scholar]
  18. Liu, F.; Ma, L.H.; Liu, C.; Lu, Z.M. Optimal blind watermarking for color images based on the U matrix of quaternion singular value decomposition. Multimed. Tools Appl. 2018, 77, 23483–23500. [Google Scholar] [CrossRef]
  19. Li, J.; Lin, Q.; Yu, C.; Ren, X.; Li, P. A QDCT and SVD-based color image watermarking scheme using an optimized encrypted binary computer-generated hologram. Soft Comput. 2018, 22, 47–65. [Google Scholar] [CrossRef]
  20. Feng, W.; Hu, B. Quaternion discrete cosine transform and its application in color template matching. In Proceedings of the 2008 Congress on Image and Signal Processing, Washington, DC, USA, 27–30 May 2008; pp. 252–256. [Google Scholar]
  21. Pei, S.C.; Ding, J.J.; Chang, J.H. Efficient implementation of quaternion Fourier transform, convolution, and correlation by 2-D complex FFT. IEEE Trans. Signal Process. 2001, 49, 2783–2797. [Google Scholar]
  22. Zhu, S.Y.; He, Z.Y.; Chen, C.; Liu, S.C.; Zhou, J.; Guo, Y.; Zeng, B. High-quality color image compression by quantization crossing color spaces. IEEE Trans. Circuits Syst. Video Technol. 2019, 29, 1474–1487. [Google Scholar] [CrossRef]
  23. Koju, R.; Joshi, S.R. Comparative analysis of color image watermarking technique in RGB, YUV, and YCbCr color channels. Nepal J. Sci. Technol. 2014, 15, 133–140. [Google Scholar] [CrossRef]
  24. Roy, A.; Maiti, A.K.; Ghosh, K. An HVS inspired robust non-blind watermarking scheme in YCbCr Color Space. Int. J. Image Graph. 2018, 18, 1850015. [Google Scholar] [CrossRef]
  25. Wan, W.; Wang, J.; Li, J.; Sun, J.; Zhang, H.; Liu, J. Hybrid JND model-guided watermarking method for screen content images. Multimed. Tools Appl. 2020, 79, 4907–4930. [Google Scholar] [CrossRef]
  26. Wang, J.; Wan, W. A novel attention-guided JND Model for improving robust image watermarking. Multimed. Tools Appl. 2020, 79, 24057–24073. [Google Scholar] [CrossRef]
  27. Muthuswamy, K.; Rajan, D. Salient motion detection in compressed domain. IEEE Signal Process. Lett. 2013, 20, 996–999. [Google Scholar] [CrossRef]
  28. Hasler, D.; Suesstrunk, S.E. Measuring colorfulness in natural images. In Proceedings of Human Vision and Electronic Imaging VIII, Santa Clara, CA, USA, 20–24 January 2003; pp. 87–95. [Google Scholar]
  29. Panetta, K.; Gao, C.; Agaian, S. No reference color image contrast and quality measures. IEEE Trans. Consum. Electron. 2013, 59, 643–651. [Google Scholar] [CrossRef]
  30. Ponomarenko, N.; Lukin, V.; Zelensky, A.; Egiazarian, K.; Carli, M.; Battisti, F. TID2008—A Database for Evaluation of Full-Reference Visual Quality Assessment Metrics. Adv. Mod. Radioelectron. 2009, 10, 30–45. [Google Scholar]
  31. Kolaman, A.; Yadid-Pecht, O. Quaternion Structural Similarity: A New Quality Index for Color Images. IEEE Trans. Image Process. 2011, 21, 1526–1536. [Google Scholar] [CrossRef] [PubMed]
  32. Liu, F.; Feng, H.; Lu, C. Blind watermarking scheme based on U matrix through QSVD transformation. Int. J. Secur. Its Appl. 2015, 9, 203–216. [Google Scholar] [CrossRef]
  33. Su, Q.; Zhang, X.; Wang, G. An improved watermarking algorithm for color image using Schur decomposition. Soft Comput. 2020, 24, 445–460. [Google Scholar] [CrossRef]
  34. Ell, T.A.; Sangwine, S.J. Decomposition of 2D hypercomplex Fourier transforms into pairs of complex Fourier transforms. In Proceedings of the 10th European Signal Processing Conference, Tampere, Finland, 4–8 September 2000; pp. 1–4. [Google Scholar]
Figure 1. The μ c m of three types of image blocks. The μ c m of yellow image block is 0.0035; the μ c m of blue image block is 0.1886; the μ c m of green image block is 0.2481.
Figure 2. Image comparisons of colorfulness metrics. (a) Reference image with colorfulness is 0.9462; (b) Chrominance distortion image with colorfulness is 0.4563.
Figure 3. The flowchart of the proposed watermark embedding scheme.
Figure 4. The flowchart of the proposed watermark extracting scheme.
Figure 5. Watermark image.
Figure 6. Original cover images. (a) Lena, (b) Avion, (c) Baboon, (d) House, (e) Athens, (f) Sailboat, (g) Butrfly, (h) Goldgate.
Figure 7. QSSIM comparison of different models (QDFT [16], QSVD [32], CIW-OCM [10], Su [33], RIW-SJM [11]) with PSNR = 42 dB.
Figure 8. Experimental results without attack. (a–c) Cover images; (d–f) watermarked images; (g–i) extracted watermarks from (d–f).
Figure 9. Results under different types of attacks and recovered watermark from Lena image. (a) Gaussian Noise (var = 0.0015) (b) Salt and Pepper noise (density = 0.0015) (c) JPEG compression (Q = 30) (d) Amplitude Scaling 1.5.
Figure 10. Results under different types of attacks and recovered watermark from Lena image. (a) Gaussian filtering (3,3) (b) Median filtering (3,3) (c) Image rotation (angle = 60) (d) Image rotation (angle = 120).
Figure 11. Results under different types of attacks and recovered watermark from Lena image. (a) Central cropping 1/4 (b) Left upper cropping 1/4 (c) Row cropping 1/4 (d) Column cropping 1/4.
Figure 12. Results under different types of attacks and recovered watermark from Lena image. (a) JPEG 30 + Gaussian Noise (var = 0.0015) (b) JPEG 30 + Salt and Pepper noise (density = 0.0015) (c) JPEG 30 + Gaussian filtering (3,3) (d) JPEG 30 + Median filtering (3,3).
Figure 13. Results under different types of attacks and recovered watermark from Lena image. (a) Gaussian Noise (var = 0.0015) + Amplitude Scaling 0.3 (b) Gaussian Noise (var = 0.0015) + Central cropping 1/16 (c) Gaussian Noise (var = 0.0015) + Left upper cropping 1/16 (d) Gaussian Noise (var = 0.0015) + Image rotation (angle = 60).
Table 1. The relevant abbreviations and symbols used in this paper.
SymbolsMeaningSymbolsMeaning
JNDJust noticeable difference ω m , n Cycle per degree
QuatJNDQuaternion JND model φ m , n The direction angle
PSNRPeak Signal to Noise Ratio J q _ d , J q _ v The oblique effect
QSSIMQuaternion Structural Similarity Index μ l a Average intensity
QDCTQuaternion Discrete Cosine Transform E d Max direction feature
QDFTQuaternion Discrete Fourier Transform μ c m Contrast masking effect
QSVDQuaternion Singular Value Decomposition C p Pattern complexity
qA quaternion C l Luminance contrast
a,b,c,dFour real numbers of a quaternion K 1 ( R G ) Red-Green color space
i,j,kThree imaginary numbers of a quaternion K 2 ( Y B ) Yellow-Blue color space
μ Unit pure quaternion μ K 1 Mean value
C ( p , s ) QDCT coefficients σ K 1 2 Variance value
C 0 ( p , s ) , C 1 ( p , s ) , C 2 ( p , s ) , C 3 ( p , s ) Four parts of C ( p , s ) vRandom vector
f ¯ ( x , y ) Inverse QDCT coefficientszSecret key
f 0 ¯ ( x , y ) , f 1 ¯ ( x , y ) , f 2 ¯ ( x , y ) , f 3 ¯ ( x , y ) Four parts of f ¯ ( x , y ) SSlack vector
C 0 ( 0 , 0 ) Quaternion DC coefficients of C 0 ( p , s ) XHost vector
C 0 ( 0 , 1 ) , C 0 ( 1 , 0 ) , C 0 ( 1 , 1 ) Quaternion AC coefficients of C 0 ( p , s ) YTransformed vector
M q _ L A Luminance adaptation effect d m Dither signal
M q _ C O L Colorfulness maskingwWatermark bit
J q _ b a s e The base CSF threshold Δ Quantization step
M q _ C M Contrast masking Y w Quantization vector
Q c Colorfulness value w Extracted watermark
RIW-SJMWang et al. [11]
CIW-OCMWang et al. [10]
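Several symbols in Table 1 rest on basic quaternion arithmetic: a quaternion q = a + bi + cj + dk with the Hamilton rules i² = j² = k² = ijk = −1, and an RGB pixel encoded as a pure quaternion (a = 0). As a minimal illustrative sketch (the function name and tuple layout are our own, not from the paper's implementation):

```python
def qmult(p, q):
    """Hamilton product of two quaternions given as (a, b, c, d) tuples."""
    a1, b1, c1, d1 = p
    a2, b2, c2, d2 = q
    return (
        a1 * a2 - b1 * b2 - c1 * c2 - d1 * d2,  # real part
        a1 * b2 + b1 * a2 + c1 * d2 - d1 * c2,  # i component
        a1 * c2 - b1 * d2 + c1 * a2 + d1 * b2,  # j component
        a1 * d2 + b1 * c2 - c1 * b2 + d1 * a2,  # k component
    )

# An RGB pixel as a pure quaternion: q = 0 + R*i + G*j + B*k
pixel = (0, 120, 200, 35)

# The Hamilton rules in action: i * i = -1, and j * k = i
print(qmult((0, 1, 0, 0), (0, 1, 0, 0)))  # (-1, 0, 0, 0)
print(qmult((0, 0, 1, 0), (0, 0, 0, 1)))  # (0, 1, 0, 0)
```

Because the Hamilton product is non-commutative, quaternion transforms such as the QDCT must fix a multiplication side, which is why the unit pure quaternion μ appears explicitly in the symbol table.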
Table 2. BER comparison results of ‘Lena’ with PSNR = 42 dB.

| Attack | μ₁ | μ₂ | μ₃ | Perceptual μ |
|---|---|---|---|---|
| JPEG 30 | 0.0983 | 0.0893 | 0.0813 | 0.0352 |
| JPEG 50 | 0.0346 | 0.0324 | 0.0115 | 0.0008 |
| JPEG 80 | 0.0155 | 0.0195 | 0.0034 | 0.0000 |
| Gaussian noise 0.0008 | 0.0042 | 0.1423 | 0.0039 | 0.0042 |
| Salt and Pepper noise 0.008 | 0.0498 | 0.1531 | 0.0579 | 0.0617 |
| Amplitude Scaling 0.5 | 0.0000 | 0.0049 | 0.0000 | 0.0000 |
| Median filtering (3,3) | 0.1458 | 0.1528 | 0.1555 | 0.1408 |
| Gaussian filtering (3,3) | 0.0063 | 0.0381 | 0.0110 | 0.0083 |
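The BER values reported throughout these tables are the fraction of watermark bits that differ between the embedded sequence w and the extracted sequence w′ after an attack. A minimal sketch of the metric, assuming a binary watermark stored as a list of bits (variable names are illustrative):

```python
def bit_error_rate(embedded, extracted):
    """Fraction of watermark bits flipped by an attack (0.0 = perfect recovery)."""
    assert len(embedded) == len(extracted), "watermarks must be the same length"
    errors = sum(1 for w, w_ext in zip(embedded, extracted) if w != w_ext)
    return errors / len(embedded)

w = [1, 0, 1, 1, 0, 0, 1, 0]
w_attacked = [1, 0, 0, 1, 0, 0, 1, 1]  # two of eight bits flipped
print(bit_error_rate(w, w_attacked))  # 0.25
```

A BER of 0.0000, as for JPEG 80 in Table 2, means the watermark was recovered without a single bit error.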
Table 3. Average BER with different JND models.

| Attack | Watson Model [4] | Kim Model [6] | Zhang Model [7] | QuatJND Model |
|---|---|---|---|---|
| Gaussian noise 0.0003 | 0.0029 | 0.0189 | 0.0085 | 0.0005 |
| Gaussian noise 0.0008 | 0.0249 | 0.0356 | 0.0195 | 0.0065 |
| Salt and Pepper noise 0.004 | 0.0466 | 0.0572 | 0.0454 | 0.0289 |
| Salt and Pepper noise 0.008 | 0.0821 | 0.0947 | 0.0825 | 0.0562 |
| JPEG 30 | 0.0828 | 0.1144 | 0.0775 | 0.0331 |
| JPEG 50 | 0.0167 | 0.0253 | 0.0127 | 0.0018 |
| Gaussian filtering (3,3) | 0.0344 | 0.0248 | 0.0224 | 0.0136 |
| Median filtering (3,3) | 0.1861 | 0.1467 | 0.1551 | 0.1917 |
| Rotation 30° | 0.0030 | 0.0183 | 0.0465 | 0.0014 |
| Rotation 60° | 0.0036 | 0.0185 | 0.0143 | 0.0016 |
Table 4. Average BER comparison results with PSNR = 42 dB.

| Attack | CIW-OCM [10] | RIW-SJM [11] | Su [33] | Proposed |
|---|---|---|---|---|
| Gaussian noise 0.0003 | 0.0008 | 0.0043 | 0.0052 | 0.0005 |
| Gaussian noise 0.0008 | 0.0135 | 0.0343 | 0.0172 | 0.0065 |
| Gaussian noise 0.0012 | 0.0298 | 0.0668 | 0.0312 | 0.0172 |
| Salt and Pepper noise 0.004 | 0.0620 | 0.0363 | 0.0302 | 0.0289 |
| Salt and Pepper noise 0.008 | 0.1157 | 0.0694 | 0.0568 | 0.0562 |
| Salt and Pepper noise 0.015 | 0.1394 | 0.2039 | 0.0921 | 0.1089 |
| JPEG 30 | 0.0589 | 0.1265 | 0.1544 | 0.0331 |
| JPEG 80 | 0.0002 | 0.0008 | 0.0326 | 0.0001 |
| Amplitude Scaling 0.3 | 0.0037 | 0.0178 | 0.1311 | 0.0001 |
| Amplitude Scaling 1.2 | 0.0207 | 0.0204 | 0.0684 | 0.0216 |
| Amplitude Scaling 1.5 | 0.1370 | 0.1702 | 0.1241 | 0.1289 |
Table 5. Average BER of filtering attacks with PSNR = 42 dB.

| Attack | CIW-OCM [10] | RIW-SJM [11] | Su [33] | Proposed |
|---|---|---|---|---|
| Median filtering (3,3) | 0.2062 | 0.2016 | 0.2335 | 0.1917 |
| Gaussian filtering (3,3) | 0.0183 | 0.0258 | 0.0142 | 0.0136 |
Table 6. Average BER of cropping attacks with PSNR = 42 dB.

| Attack | CIW-OCM [10] | RIW-SJM [11] | Su [33] | Proposed |
|---|---|---|---|---|
| Central cropping 1/8 | 0.0097 | 0.0099 | 0.0127 | 0.0091 |
| Left upper cropping 1/8 | 0.0162 | 0.0579 | 0.0456 | 0.0129 |
| Row cropping 1/8 | 0.0707 | 0.0730 | 0.0831 | 0.0674 |
| Column cropping 1/8 | 0.0703 | 0.0786 | 0.0873 | 0.0646 |
Table 7. Average BER of rotation attacks with PSNR = 42 dB.

| Attack | CIW-OCM [10] | RIW-SJM [11] | Su [33] | Proposed |
|---|---|---|---|---|
| Rotation 30° | 0.0034 | 0.0027 | 0.0039 | 0.0014 |
| Rotation 60° | 0.0039 | 0.0037 | 0.0035 | 0.0016 |
| Rotation 90° | 0.0000 | 0.0000 | 0.0000 | 0.0000 |
| Rotation 120° | 0.0034 | 0.0027 | 0.0039 | 0.0014 |
Table 8. BER of different attacks with PSNR = 42 dB.

| Attack | Lena, QDFT [16] | Lena, QSVD [32] | Lena, Proposed | House, QDFT [16] | House, QSVD [32] | House, Proposed |
|---|---|---|---|---|---|---|
| GN (0.0008) | 0.0007 | 0.0135 | 0.0059 | 0.0051 | 0.0562 | 0.0042 |
| GN (0.0012) | 0.0095 | 0.0225 | 0.0149 | 0.0181 | 0.0928 | 0.0171 |
| SPN (0.004) | 0.0405 | 0.0271 | 0.0291 | 0.0359 | 0.0571 | 0.0332 |
| SPN (0.008) | 0.0906 | 0.0525 | 0.0640 | 0.0842 | 0.1060 | 0.0524 |
| JPEG (30) | 0.3043 | 0.3400 | 0.0347 | 0.3355 | 0.3558 | 0.0308 |
| JPEG (50) | 0.1270 | 0.2076 | 0.0007 | 0.2250 | 0.2787 | 0.0022 |
| GF (3,3) | 0.0095 | 0.0432 | 0.0083 | 0.0225 | 0.0945 | 0.0149 |
| MF (3,3) | 0.1775 | 0.4724 | 0.1526 | 0.2100 | 0.4846 | 0.2063 |
| AS (0.3) | 0.0454 | 0.0001 | 0.0001 | 0.1240 | 0.0002 | 0.0004 |
| AS (1.2) | 0.0249 | 0.0004 | 0.0029 | 0.0862 | 0.0007 | 0.0461 |
Table 9. Average BER of different attacks with QSSIM = 0.9820.

| Attack | CIW-OCM [10] | RIW-SJM [11] | Su [33] | QDFT [16] | QSVD [32] | Proposed |
|---|---|---|---|---|---|---|
| GN (0.0008) | 0.0157 | 0.0108 | 0.0100 | 0.0003 | 0.0207 | 0.0083 |
| GN (0.0012) | 0.0337 | 0.0249 | 0.0183 | 0.0006 | 0.0367 | 0.0203 |
| SPN (0.004) | 0.0380 | 0.0330 | 0.0242 | 0.0238 | 0.0376 | 0.0248 |
| SPN (0.008) | 0.0735 | 0.0648 | 0.0441 | 0.0440 | 0.0513 | 0.0554 |
| JPEG (30) | 0.0708 | 0.0589 | 0.0983 | 0.2739 | 0.2866 | 0.0401 |
| JPEG (50) | 0.0047 | 0.0034 | 0.0660 | 0.1278 | 0.2063 | 0.0029 |
| GF (3,3) | 0.0160 | 0.0116 | 0.0048 | 0.0069 | 0.0435 | 0.0112 |
| MF (3,3) | 0.1949 | 0.1845 | 0.1983 | 0.2628 | 0.4626 | 0.1807 |
| AS (0.3) | 0.0037 | 0.0006 | 0.1451 | 0.0476 | 0.0001 | 0.0001 |
| AS (1.2) | 0.0211 | 0.0210 | 0.0733 | 0.0177 | 0.0006 | 0.0170 |
| JPEG (50) + GN (0.0008) | 0.0415 | 0.0324 | 0.1195 | 0.1556 | 0.2168 | 0.0254 |
| JPEG (50) + GN (0.0012) | 0.0596 | 0.0498 | 0.1480 | 0.1734 | 0.2301 | 0.0394 |
| JPEG (50) + SPN (0.004) | 0.0516 | 0.0444 | 0.1435 | 0.1499 | 0.2122 | 0.0367 |
| JPEG (50) + SPN (0.008) | 0.0905 | 0.0764 | 0.1554 | 0.1683 | 0.2208 | 0.0672 |
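Tables 2–8 fix the embedding distortion at PSNR = 42 dB so that all schemes are compared at equal visual quality. For 8-bit images, PSNR = 10·log₁₀(255²/MSE); a minimal NumPy sketch of this standard metric (not the paper's own implementation) is:

```python
import numpy as np

def psnr(original, watermarked):
    """PSNR in dB between two equal-shape 8-bit images."""
    diff = original.astype(np.float64) - watermarked.astype(np.float64)
    mse = np.mean(diff ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(255.0 ** 2 / mse)

# A uniform difference of 1 gray level gives MSE = 1, i.e. ~48.13 dB
a = np.zeros((64, 64, 3), dtype=np.uint8)
b = np.ones((64, 64, 3), dtype=np.uint8)
print(round(psnr(a, b), 2))  # 48.13
```

Table 9 instead fixes QSSIM, a quaternion-domain structural similarity index, at 0.9820; unlike pixel-wise PSNR, it scores the three color channels jointly through their quaternion representation.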
Wan, W.; Li, W.; Liu, W.; Diao, Z.; Zhan, Y. QuatJND: A Robust Quaternion JND Model for Color Image Watermarking. Entropy 2022, 24, 1051. https://doi.org/10.3390/e24081051
