Article

Feature-Based Image Watermarking Algorithm Using SVD and APBT for Copyright Protection

School of Mechanical, Electrical and Information Engineering, Shandong University, Weihai 264209, China
* Author to whom correspondence should be addressed.
Future Internet 2017, 9(2), 13; https://doi.org/10.3390/fi9020013
Submission received: 8 March 2017 / Revised: 4 April 2017 / Accepted: 14 April 2017 / Published: 19 April 2017
(This article belongs to the Collection Information Systems Security)

Abstract

Watermarking techniques can be applied to digital images to maintain their authenticity and integrity for copyright protection. In this paper, the scale-invariant feature transform (SIFT) is combined with local digital watermarking, and a digital watermarking algorithm based on SIFT, singular value decomposition (SVD), and the all phase biorthogonal transform (APBT) is proposed. The generation process of the SIFT algorithm is described in detail, yielding a set of scale-invariant feature points. From these, a large number of candidate feature points is selected, and their neighborhoods are used to embed the watermark. For the selected feature points, block-based APBT is carried out on their neighborhoods, and a coefficients matrix formed from certain APBT coefficients is decomposed by SVD to embed the encrypted watermark. Experimental results demonstrate that the proposed watermarking algorithm has stronger robustness than some previous schemes. In addition, the APBT-based digital watermarking algorithm has good imperceptibility and is more robust to different combinations of attacks, so it can be applied for the purpose of copyright protection.

1. Introduction

Since the 1990s, information hiding technology has been a major research focus in the field of information technology. The purpose of information hiding is to embed hidden data in digital media such as images, audio, and video. With the development of related technologies, information hiding has become an independent discipline, and its coverage is still expanding. Depending on the purpose of hiding information and on the technical requirements, invisibility, robustness, undetectability, security [1], and other attributes are all distinctive features of information hiding technology.
As digital media can be easily copied, altered, and illegally distributed, copyright protection of digital media has attracted increasingly widespread attention, and protecting the integrity and authenticity of digital media has become urgent and important. Thus, digital signature [2] and digital watermarking [3] have been proposed as data hiding methods to solve this problem. Owing to its superior performance, digital watermarking is applied as an effective method to solve the image copyright issue. According to their functions, digital watermarking schemes can be classified into three types: robust watermarking, fragile watermarking, and semi-fragile watermarking. Robust watermarking can resist various attacks and common processing operations, and is applied for copyright protection. Fragile watermarking is sensitive to any modification, including malicious tampering and common processing. Combining the advantages of the two above methods, semi-fragile watermarking is designed to distinguish malicious tampering from non-malicious modification, with a certain ability to resist common processing. According to the domain where the watermark is embedded, two types of digital watermarking techniques can be defined [4,5]: spatial domain watermarking [6] and frequency (transform) domain watermarking [7]. Spatial domain methods modify the pixel values of the digital image directly to embed the watermark. The least significant bit (LSB) algorithm is the most common and simplest spatial watermarking scheme [8], embedding the watermark information into the LSB or multiple bit layers. Correspondingly, frequency domain methods embed the watermark information by modifying the transform coefficients of the original image. Compared with spatial methods, transform-based schemes have better imperceptibility and robustness. The discrete cosine transform (DCT), discrete wavelet transform (DWT), discrete sine transform (DST), and singular value decomposition (SVD) [9] are common transforms applied in frequency domain watermarking. Moreover, other special transforms, such as the redundant DWT (RDWT) [10], the fractional wavelet transform (FWT) [11], and QR decomposition [12], can be applied for better performance in certain aspects. In recent years, these transforms have still been widely applied in the watermarking field. For example, Rasti et al. [12] proposed a robust color video watermarking scheme based on QR decomposition and entropy analysis, embedding the watermark into a block selected by entropy analysis; DWT, the Chirp-Z transform (CZT), QR decomposition, and SVD are performed on the selected block for watermark embedding. In [13], Cheng et al. proposed a robust watermarking scheme combining DWT and DCT. In the embedding process, the cover image is decomposed by a two-level DWT, the HL2 sub-band coefficients are divided into 4 × 4 blocks, and the DCT is performed on each block; according to the watermark bit, two pseudo-random sequences are embedded into the middle band coefficients of the DCT. In addition, Guo et al. [14] proposed a robust watermarking scheme based on DCT and DWT in the encrypted domain to improve robustness, using an image splitting method to embed the watermark into the low-middle frequency coefficients. Nguyen et al. [15] proposed a fragile watermarking scheme for image authentication in the wavelet domain, where the authentication code is embedded into the two-level low frequency sub-band.
In [16], Martino and Sessa introduced a fragile watermarking scheme for tamper detection in compressed images based on the fuzzy transform, which is applied to compress each block. In addition, a fuzzy c-means clustering technique is used to model the relationship between image blocks, and this method achieves more accurate tamper detection and localization. Lin and Tsay [17] presented a passive approach, based on spatio-temporal coherence analysis, for tamper detection and localization in video sequences with camera motion.
The scale-invariant feature transform (SIFT) [18] is a classical algorithm for interest point detection and local feature description, which has three advantages: (1) the number of extracted feature points, or keypoints, is large with appropriate parameter settings; (2) the features extracted by SIFT are highly distinctive, allowing accurate matching; and (3) SIFT features are invariant to rotation, scaling, and translation. Owing to its efficiency [19], the SIFT algorithm is widely used in image processing to match two images, locate positions, and recognize objects. Moreover, SIFT has improved the performance of watermarking schemes in recent years. Lee et al. [20] first proposed a novel image watermarking scheme using local invariant features, embedding the watermark into circular patches generated by SIFT. In [21], Lyu et al. proposed an image watermarking scheme based on SIFT and DWT, applying the DWT to the selected SIFT areas. Thorat and Jadhav [22] proposed a watermarking scheme resistant to geometric attacks based on the integer wavelet transform (IWT) and SIFT, where feature points are extracted by SIFT from the red component of the image, and the IWT is applied to the blue and green components to extract low frequency coefficients for watermark embedding. Luo et al. [23] presented an adaptive robust watermarking scheme based on the discrete Fourier transform (DFT) and SIFT to deal with synchronization errors. Pham et al. [24] proposed a robust SIFT and DCT-based watermarking scheme, embedding the watermark into the selected feature regions after DCT. In [25], Zhang and Tang introduced an SVD and SIFT-based watermarking algorithm to solve the synchronization problem, where SIFT is used for watermark resynchronization.
Owing to the robustness of SIFT and SVD, as well as the better energy concentration of the all phase biorthogonal transform (APBT) in low frequencies, we propose combining local digital watermarking with SIFT, SVD, and APBT, and compare the proposed watermarking algorithm with previous schemes in this paper. The purpose of this paper is to propose a robust watermarking algorithm with better imperceptibility and stronger robustness for copyright protection.
The remainder of this paper is organized as follows. Section 2 introduces the related work. Section 3 describes the proposed algorithm in detail. Experimental results and analyses are presented in Section 4. Conclusions and future work are given in Section 5.

2. Related Work

2.1. Digital Watermarking

The purpose of digital watermarking is to embed identifying information into digital data. The embedded mark is usually invisible or difficult to detect, but it can be detected or extracted by a suitable extraction algorithm. The watermark is hidden in the original data as an integral part of it, and the general framework of digital watermarking technology is shown in Figure 1.
The digital watermark $W$ is combined with a key $K$ and embedded into the original signal $I$ by the embedding method $E$, generating the watermarked signal $I_w$. During transmission, owing to the interference of noise $N_c$, the watermarked signal $I_w$ may be transformed into $I_r$; the received signal $I_r$ can then be evaluated or the watermark extracted by applying the detection algorithm $D$ or the extraction algorithm $E_x$.

2.2. APBT

APBT [26] is an improved transform first proposed by Hou et al., which can be applied in image processing tasks such as image compression, denoising, demosaicking, and so on [27]. In practice, because APBT has better energy concentration in low frequencies and good attenuation characteristics in high frequencies, it lends itself well to image processing applications. Suppose that $X$ is the data of a $B \times B$ image block and $V$ is the APBT matrix of size $B \times B$; the transform coefficients block $Y$ after the two-dimensional APBT can be denoted by:
$$Y = V X V^{T},$$ (1)
where $V^{T}$ is the transpose of $V$. Conversely, we use:
$$X = V^{-1} Y (V^{-1})^{T},$$ (2)
to reconstruct the image, where $V^{-1}$ is the inverse matrix of $V$.
When the transform matrix $V$ takes the forms given in Equations (3) and (4), respectively, the corresponding transforms are defined as the all phase discrete cosine biorthogonal transform (APDCBT) and the all phase inverse discrete cosine biorthogonal transform (APIDCBT):
$$V_1(p, q) = \begin{cases} \dfrac{B - p}{B^2}, & p = 0, 1, \ldots, B - 1, \; q = 0, \\[2mm] \dfrac{1}{B^2}\left[(B - p)\cos\dfrac{pq\pi}{B} - \csc\dfrac{q\pi}{B}\sin\dfrac{pq\pi}{B}\right], & p = 0, 1, \ldots, B - 1, \; q = 1, 2, \ldots, B - 1, \end{cases}$$ (3)
$$V_2(p, q) = \begin{cases} \dfrac{1}{B}, & p = 0, \; q = 0, 1, \ldots, B - 1, \\[2mm] \dfrac{\sqrt{2}\,(B - p)}{B^2}\cos\dfrac{p(2q + 1)\pi}{2B}, & p = 1, 2, \ldots, B - 1, \; q = 0, 1, \ldots, B - 1. \end{cases}$$ (4)
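To make the transform concrete, the following Python sketch builds the APDCBT matrix of Equation (3) and applies the forward and inverse transforms of Equations (1) and (2) to one block. It is an illustrative sketch only: the helper names, the block size B = 8, and the random test block are assumptions, not part of the original scheme (which the authors implemented in MATLAB).

```python
import numpy as np

def apdcbt_matrix(B):
    """APDCBT matrix V1 of Eq. (3), size B x B."""
    V = np.zeros((B, B))
    for p in range(B):
        for q in range(B):
            if q == 0:
                V[p, q] = (B - p) / B ** 2
            else:
                V[p, q] = ((B - p) * np.cos(p * q * np.pi / B)
                           - np.sin(p * q * np.pi / B) / np.sin(q * np.pi / B)) / B ** 2
    return V

def apbt_forward(X, V):
    """Eq. (1): Y = V X V^T."""
    return V @ X @ V.T

def apbt_inverse(Y, V):
    """Eq. (2): X = V^{-1} Y (V^{-1})^T."""
    V_inv = np.linalg.inv(V)
    return V_inv @ Y @ V_inv.T

if __name__ == "__main__":
    B = 8                                  # assumed block size for illustration
    V = apdcbt_matrix(B)
    X = np.random.rand(B, B)               # illustrative image block
    X_rec = apbt_inverse(apbt_forward(X, V), V)
    print(np.allclose(X, X_rec))           # True: the transform round-trips
```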

2.3. SIFT Algorithm

The SIFT algorithm was proposed by Lowe [28] to extract local scale-invariant feature points, which are used to match two related images. The generation steps of SIFT feature descriptors are described in the following.
The first step is scale-space construction and extrema detection. The basic idea of the scale-space approach is to introduce a scale parameter into the visual information processing model and to obtain visual information at different scales by continuously varying this parameter. The information is then integrated to explore the essential characteristics of the image. The Gaussian convolution kernel [29] is the only linear kernel that achieves this scale transformation, and the scale-space output $f_{\mathrm{out}}$ can be defined as:
$$f_{\mathrm{out}} = K_n * f_{\mathrm{in}},$$ (5)
where $K_n$ is the kernel, $f_{\mathrm{in}}$ is the input signal, and $*$ denotes the convolution operation.
The scale-space $S(x, y, \sigma)$ of image $I(x, y)$ is defined as:
$$S(x, y, \sigma) = G(x, y, \sigma) * I(x, y),$$ (6)
$$G(x, y, \sigma) = \frac{1}{2\pi\sigma^2} e^{-(x^2 + y^2)/2\sigma^2},$$ (7)
where $G(x, y, \sigma)$ is the scale-variable Gaussian function, $(x, y)$ are the spatial coordinates, and $\sigma$ is the scale-space factor, which decides the smoothness of the image. A large value of $\sigma$ yields a smooth image that captures coarse structure, while a small value of $\sigma$ preserves fine detail. To effectively detect stable points in the scale-space, Lowe [28] proposed the difference of Gaussians (DOG) scale-space, defined as:
$$D(x, y, \sigma) = [G(x, y, k\sigma) - G(x, y, \sigma)] * I(x, y) = S(x, y, k\sigma) - S(x, y, \sigma),$$ (8)
where $I(x, y)$ is the input image, $k$ is the multiplicative factor between two neighboring scales, and $*$ denotes the convolution operation. To detect the local maxima and minima of $D(x, y, \sigma)$, each point is compared with its eight neighbors in the current image and its nine neighbors in the scales above and below. With the help of the DOG scale-space images, all extreme points can be detected as candidate feature points.
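The following Python sketch illustrates Equations (6)–(8) and the 26-neighbor extremum test described above. It is a simplified, single-octave illustration: the base scale 1.6, the factor k = √2, and the number of levels are assumed values not specified in this paper, and the Gaussian smoothing relies on scipy.ndimage.gaussian_filter.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def dog_stack(image, sigma0=1.6, k=2 ** 0.5, levels=5):
    """Build S(x, y, sigma) for a few scales and return the DOG images
    D = S(x, y, k*sigma) - S(x, y, sigma) of Eq. (8)."""
    sigmas = [sigma0 * k ** i for i in range(levels)]
    blurred = [gaussian_filter(image.astype(float), s) for s in sigmas]
    return np.stack([blurred[i + 1] - blurred[i] for i in range(levels - 1)])

def is_extremum(D, s, y, x):
    """Keep a point if it is the maximum or minimum of its 26 neighbors:
    8 in the same DOG image and 9 in the scales above and below.
    Assumes 1 <= s <= D.shape[0] - 2 and (y, x) away from the border."""
    cube = D[s - 1:s + 2, y - 1:y + 2, x - 1:x + 2]
    center = D[s, y, x]
    return center == cube.max() or center == cube.min()
```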
The second step is to locate the feature points, aiming to determine their positions precisely. The first step produces a large number of extreme points; however, not all extreme points are feature points, so a suitable method is needed to eliminate the unsuitable ones.
The third step is feature point orientation assignment, whose purpose is to make the SIFT features rotation invariant. For the Gaussian-smoothed image $I_L$ at the feature point's scale, the central derivatives of $I_L$ at each feature point can be computed. The gradient magnitude and orientation at feature point $(x, y)$ are calculated by:
$$\begin{cases} \theta(x, y) = \tan^{-1}\dfrac{I_L(x, y + 1) - I_L(x, y - 1)}{I_L(x + 1, y) - I_L(x - 1, y)}, \\[2mm] g(x, y) = \sqrt{[I_L(x + 1, y) - I_L(x - 1, y)]^2 + [I_L(x, y + 1) - I_L(x, y - 1)]^2}, \end{cases}$$ (9)
where $\theta(x, y)$ is the orientation of the gradient and $g(x, y)$ is its magnitude.
After obtaining the gradient orientations and magnitudes, the dominant direction of each feature point is determined from a gradient orientation histogram; the highest peak represents the main direction. When another peak in the histogram reaches 80% of the main peak's value, that direction is set as an auxiliary direction of the feature point. Once the location, scale, and orientation have been assigned, the final step is to compute the local image descriptor of each feature point.
This completes all steps of the SIFT algorithm.
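As an illustration of the orientation assignment step, the sketch below evaluates Equation (9) on a Gaussian-smoothed image patch and accumulates a gradient orientation histogram; the main peak gives the dominant direction and any other bin reaching 80% of it gives an auxiliary direction. The 36-bin histogram and the 4-pixel window radius are illustrative assumptions, not parameters stated in this paper.

```python
import numpy as np

def orientation_histogram(L, x, y, radius=4, bins=36):
    """Accumulate gradient magnitudes g(x, y) of Eq. (9) into an orientation
    histogram around feature point (x, y). L is the Gaussian-smoothed image
    (2-D float array) and (x, y) is assumed to lie away from the border."""
    hist = np.zeros(bins)
    for i in range(y - radius, y + radius + 1):
        for j in range(x - radius, x + radius + 1):
            dx = L[i, j + 1] - L[i, j - 1]            # horizontal central difference
            dy = L[i + 1, j] - L[i - 1, j]            # vertical central difference
            g = np.hypot(dx, dy)                      # gradient magnitude g
            theta = np.arctan2(dy, dx) % (2 * np.pi)  # gradient orientation theta
            hist[int(theta / (2 * np.pi) * bins) % bins] += g
    main = int(hist.argmax())                         # dominant direction (bin index)
    aux = [b for b in range(bins)
           if b != main and hist[b] >= 0.8 * hist[main]]  # auxiliary directions
    return main, aux
```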

3. The Proposed Algorithm

In this section, a novel watermarking algorithm based on SIFT, SVD, and APBT is proposed and compared with previous watermarking schemes.

3.1. Preparation before Embedding

Although a large number of feature points is obtained by SIFT, some feature points are still not suitable for watermark embedding and extraction in terms of scale and orientation. Therefore, the feature points should be screened and refined before the embedding operation.
In this section, the Airplane image is chosen as a representative host for watermark embedding. Figure 2 shows the feature point information obtained by SIFT. The length of each blue arrow in the host image represents the scale of a feature point and its direction indicates the feature point's orientation; in total, 2050 feature points are detected by SIFT.
Since the scale range of the initially obtained feature points is 0–50, points with overly large or small scales may not be easily detected or successfully matched. Therefore, only points within a certain scale range are kept as candidates, and points whose scale lies between 2.3 and 8 are selected as feature points. Then, an 8 × 8 neighborhood is chosen for each selected feature point to define its feature region for the following operations. Additionally, feature points whose neighborhoods extend beyond the image boundary or overlap with the neighborhoods of other points are eliminated. Finally, the feature points for embedding watermarks are obtained, as shown in Figure 3, where the feature points are located at the centers of the rectangles.
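A minimal Python sketch of this screening procedure is given below. It assumes the keypoints are available as (x, y, scale) tuples from any SIFT implementation; the function name and the greedy overlap test are illustrative choices, not the authors' code.

```python
def screen_feature_points(keypoints, img_w, img_h, half=4,
                          scale_min=2.3, scale_max=8.0):
    """Keep keypoints whose scale lies in [scale_min, scale_max], whose 8 x 8
    neighborhood (half = 4 pixels on each side) fits inside the image, and
    whose neighborhood does not overlap that of an already accepted point."""
    selected = []
    for x, y, scale in keypoints:
        if not (scale_min <= scale <= scale_max):
            continue                       # outside the accepted scale range
        if x - half < 0 or y - half < 0 or x + half > img_w or y + half > img_h:
            continue                       # neighborhood crosses the image boundary
        if any(abs(x - sx) < 2 * half and abs(y - sy) < 2 * half
               for sx, sy, _ in selected):
            continue                       # overlapping neighborhoods are discarded
        selected.append((x, y, scale))
    return selected
```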

3.2. Watermark Embedding

Figure 4 shows the procedure of watermark embedding based on SIFT, SVD, and APBT.
Step 1. SIFT is performed on the host image $I$, and the feature points, together with their orientation, scale, and location parameters, are obtained. All feature points are screened in the next step.
Step 2. According to the rules described in Section 3.1, the scales of the feature points are restricted to a certain range, and the final feature points are selected for embedding. The coordinates of the final feature points are saved as the keys $K$.
Step 3. To improve the watermark security, the Arnold transform is performed on the watermark $W$ according to Equation (10):
$$\begin{bmatrix} x_{i+1} \\ y_{i+1} \end{bmatrix} = \begin{pmatrix} 1 & b \\ a & ab + 1 \end{pmatrix} \begin{bmatrix} x_i \\ y_i \end{bmatrix} \bmod N,$$ (10)
where $(x_i, y_i)$ are the original coordinates of an image pixel and $(x_{i+1}, y_{i+1})$ are the scrambled coordinates; $a$, $b$, and $N$ are positive integers, mod denotes the modulo operation, and $N$ is the width of the image matrix. After the Arnold transform, the watermark matrix is encrypted as $W_a$ (a minimal sketch of this scrambling is given after the step list).
Step 4. The host image is divided into 4 × 4 blocks, and APBT is performed on each block. For each block, the first four APBT coefficients in the zig-zag order, located at positions (1, 1), (1, 2), (2, 1), and (3, 1), are selected and placed at the corresponding positions of a new coefficients matrix $M_c$. This step is complete once all blocks have been processed.
Step 5. SVD is performed on the coefficients matrix to obtain the singular value matrix $S$, as well as two orthogonal matrices $U$ and $V$. The encrypted watermark $W_a$ is embedded into the singular values of the coefficients matrix according to Equations (11) and (12):
$$M_c = U S V^{T},$$ (11)
$$S_w = S + \alpha \times W_a,$$ (12)
where $\alpha$ is the scaling factor, which determines the trade-off between robustness and imperceptibility of the watermarking scheme, and $\times$ denotes the product of $\alpha$ and $W_a$.
Step 6. SVD is performed again on the $S_w$ obtained in Equation (12), as illustrated by:
$$S_w = U_w S_{ww} (V_w)^{T},$$ (13)
where $S_{ww}$ is the singular value matrix of $S_w$, and $U_w$ and $V_w$ are two orthogonal matrices.
Step 7. The watermarked coefficients matrix $M_{cw}$ is generated from $S_{ww}$, $U$, and $V$:
$$M_{cw} = U S_{ww} V^{T}.$$ (14)
Step 8. Conversely to Step 4, the values in the coefficients matrix $M_{cw}$ are selected and put back into the corresponding block positions to form the watermarked blocks.
Step 9. The inverse APBT is performed to obtain the watermarked image $I_w$.
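The sketch below illustrates the Arnold scrambling of Step 3 (Equation (10)) and the SVD embedding of Steps 5–7 (Equations (11)–(14)) in Python. It is a simplified model of these steps, not the authors' MATLAB implementation: the function names, the default parameters a = b = 1 with a single scrambling round, and the assumption that the coefficients matrix M_c and the scrambled watermark W_a are square matrices of the same size are all illustrative.

```python
import numpy as np

def arnold(W, a=1, b=1, rounds=1):
    """Arnold scrambling of a square matrix W, Eq. (10):
    (x', y')^T = [[1, b], [a, a*b + 1]] (x, y)^T mod N."""
    N = W.shape[0]
    out = W.copy()
    for _ in range(rounds):
        scrambled = np.empty_like(out)
        for x in range(N):
            for y in range(N):
                scrambled[(x + b * y) % N, (a * x + (a * b + 1) * y) % N] = out[x, y]
        out = scrambled
    return out

def embed_svd(M_c, W_a, alpha=5.0):
    """Steps 5-7: embed the scrambled watermark W_a into the singular values
    of the coefficients matrix M_c (Eqs. (11)-(14))."""
    U, S, Vt = np.linalg.svd(M_c)              # Eq. (11): M_c = U S V^T
    S_w = np.diag(S) + alpha * W_a             # Eq. (12): S_w = S + alpha * W_a
    U_w, S_ww, V_wt = np.linalg.svd(S_w)       # Eq. (13): S_w = U_w S_ww V_w^T
    M_cw = U @ np.diag(S_ww) @ Vt              # Eq. (14): M_cw = U S_ww V^T
    # U_w, V_wt, and the original singular values S are kept as side
    # information for the extraction stage.
    return M_cw, (U_w, V_wt, np.diag(S))
```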

3.3. Watermarking Extraction

The procedure of watermark extraction based on SIFT, SVD, and APBT is shown in Figure 5.
Step 1. The 4 × 4 block-based APBT is carried out on the received image after attacks, $I_a$, and the first four coefficients of each block are picked out according to the zig-zag order.
Step 2. According to the keys $K$, the blocks around all feature points are processed as in Step 1 to obtain the received watermarked coefficients matrix $M_{cw}^{*}$. To distinguish them from the matrices in Section 3.2, the received matrices are marked with the superscript $*$.
Step 3. SVD is performed on $M_{cw}^{*}$ using Equation (15), and $S_{ww}^{*}$ is recorded for the next step:
$$M_{cw}^{*} = U^{*} S_{ww}^{*} (V^{*})^{T}.$$ (15)
Step 4. $S_w^{*}$ is retrieved by Equation (16):
$$S_w^{*} = U_w S_{ww}^{*} (V_w)^{T}.$$ (16)
Step 5. The extracted (still scrambled) watermark is computed from the coefficients matrix $M_{cw}^{*}$ of Step 3, as shown in Equation (17):
$$W_e^{*} = (S_w^{*} - S)/\alpha.$$ (17)
Step 6. The inverse Arnold transform is performed to obtain the extracted watermark $W_e$.
This completes the watermark extraction process.
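For completeness, a matching sketch of the core extraction computations (Equations (15)–(17)) and the inverse Arnold transform is given below, under the same illustrative assumptions as the embedding sketch in Section 3.2; in particular, the side information (U_w, V_w, S) is assumed to be available at the extractor.

```python
import numpy as np

def extract_svd(M_cw_star, side_info, alpha=5.0):
    """Steps 3-5 of the extraction: recover the (still scrambled) watermark
    from the received coefficients matrix (Eqs. (15)-(17))."""
    U_w, V_wt, S = side_info                    # kept from the embedding stage
    _, S_ww_star, _ = np.linalg.svd(M_cw_star)  # Eq. (15)
    S_w_star = U_w @ np.diag(S_ww_star) @ V_wt  # Eq. (16)
    return (S_w_star - S) / alpha               # Eq. (17): W_e^*

def arnold_inverse(Wa, a=1, b=1, rounds=1):
    """Step 6: undo the Arnold scrambling of Eq. (10). The scrambling matrix
    has determinant 1, so its inverse modulo N is [[a*b + 1, -b], [-a, 1]]."""
    N = Wa.shape[0]
    out = Wa.copy()
    for _ in range(rounds):
        restored = np.empty_like(out)
        for x in range(N):
            for y in range(N):
                restored[((a * b + 1) * x - b * y) % N, (-a * x + y) % N] = out[x, y]
        out = restored
    return out
```

In the attack-free case, extract_svd applied to the output of embed_svd returns the scrambled watermark exactly (up to floating-point error), and arnold_inverse then restores the original watermark.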

4. Experimental Results and Comparative Analyses

In this section, the performance of the proposed method in terms of watermark robustness and imperceptibility is presented and analyzed. The scheme is tested on images of 512 × 512 pixels, and the watermark sizes applied are 64 × 64 and 32 × 32 pixels. Moreover, to balance the imperceptibility and robustness of the proposed algorithm, the scaling factor $\alpha$ is set to 5. The execution time is measured from image reading to watermark extraction. The algorithm is tested in MATLAB R2014a (MathWorks, Natick, MA, USA) on a computer with an Intel (R) Core (TM) i3-2100 3.10 GHz CPU (Intel, Santa Clara, CA, USA) and 4 GB of memory.

4.1. Performance Evaluation Indexes

After watermark embedding, the quality of the watermarked image is reduced compared with the original image. The peak signal-to-noise ratio (PSNR) is an important objective quality evaluation index, adopted in most watermarking methods to evaluate the watermark's imperceptibility. For an image of size $M \times N$, the PSNR is defined as:
$$\mathrm{PSNR} = 10 \log_{10} \frac{255^2 MN}{\sum_{i=0}^{M-1}\sum_{j=0}^{N-1}[I(i, j) - I_w(i, j)]^2} \; (\mathrm{dB}),$$ (18)
where $I(i, j)$ and $I_w(i, j)$ denote the pixel values in row $i$ and column $j$ of the host image and the watermarked image, respectively.
The bit error rate (BER) is used to measure the difference between the extracted and original watermarks, as shown in Equation (19):
$$\mathrm{BER} = \frac{1}{l}\sum_{i=0}^{m-1}\sum_{j=0}^{n-1} |W_e(i, j) - W(i, j)|,$$ (19)
where $W(i, j)$ and $W_e(i, j)$ denote the pixel values at location $(i, j)$ in the original watermark and the recovered watermark, respectively; $l$ is the length of the one-dimensional watermark vector, and $m \times n$ is the size of the watermark.
The similarity between the original and extracted watermarks can be evaluated by the normalized correlation (NC), which is defined as:
$$\mathrm{NC} = \frac{\sum_{i=0}^{m-1}\sum_{j=0}^{n-1}[W(i, j) \times W_e(i, j)]}{\sum_{i=0}^{m-1}\sum_{j=0}^{n-1}[W(i, j) \times W(i, j)]}.$$ (20)
Higher NC values represent a better quality of the extracted watermark.
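A minimal sketch of these three indexes, assuming the images and watermarks are given as NumPy arrays (binary values for the watermarks), might look as follows.

```python
import numpy as np

def psnr(I, I_w):
    """Eq. (18): PSNR (dB) between host image I and watermarked image I_w."""
    mse = np.mean((I.astype(float) - I_w.astype(float)) ** 2)
    return 10 * np.log10(255 ** 2 / mse)

def ber(W, W_e):
    """Eq. (19): bit error rate between the original and extracted watermarks."""
    return np.sum(np.abs(W.astype(float) - W_e.astype(float))) / W.size

def nc(W, W_e):
    """Eq. (20): normalized correlation between the original and extracted watermarks."""
    W, W_e = W.astype(float), W_e.astype(float)
    return np.sum(W * W_e) / np.sum(W * W)
```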

4.2. Performance Comparisons

In this section, since APDCBT and APIDCBT have similar performance when applied in the proposed algorithm, one typical transform matrix in APBT theory, the APDCBT matrix, is used to test the performance of the proposed algorithm concretely. To demonstrate the applicability of the proposed algorithm, images with different entropies are tested in this section, as shown in Figure 6.
To give more detailed subjective experimental results, the original and watermarked images, as well as the watermarked images and the APDCBT-based extracted watermarks, are compared in Figure 7. Considering symmetrical and asymmetrical watermarks, the logo watermark 1 and the letter watermark 2, denoted $w_1$ and $w_2$, are applied in this experiment. As Figure 7 shows, the embedded watermark cannot be distinguished by the naked eye, which illustrates that the proposed algorithm achieves good imperceptibility.
Table 1 gives the PSNR, NC, and BER values of the experimental results. Owing to the different entropies of the images, the number of selected feature points differs according to Section 3.1. Images with a large number of refined feature points (more than 256), namely Airplane, Clown, and Cornfield, are tested with a 64 × 64 watermark image. Images with a small number of refined feature points (less than 256), namely Baboon, Barbara, Bank, Elaine, Barche, Milkdrop, Announcer, Vogue, Cablecar, Canyon, Frog, and Fruits, are tested with a 32 × 32 watermark. As can be seen, both the symmetrical and the asymmetrical watermarks achieve good PSNR performance.
To give more detailed results, the Airplane image and the Shandong University logo are taken as examples for further demonstration. Table 2 compares the PSNR, execution time, and NC values of the methods proposed by Jain et al. [30], Zhang et al. [31], and Fzali and Moeini [32] with those of the proposed algorithm. In [30], the principal components of the watermark are embedded into the host image. In [31], the watermark is scrambled by shuffled SVD (SSVD), and its principal components are extracted for watermark embedding. In [32], the watermark is embedded into the singular values of the DCT coefficients matrix, which is derived from four segmented sub-images in the DWT domain.
As can be seen in Table 2, owing to the large number of feature points, the watermark information is embedded sparsely into the neighborhoods of the points, which leads to the high PSNR value of the proposed scheme. Because of the block partition in the algorithm, the execution time is higher. Additionally, the proposed watermarking method performs better than the previous watermarking schemes under some common image attacks. In [32], the host image is manually divided into four sub-images for watermark embedding, which results in better robustness to JPEG compression.
Moreover, the watermarked image may suffer various types of attacks during transmission. The performance of the APDCBT-based algorithm is presented in Figure 8, in which (a) shows the result with no attack, labeled 1, and (b–f) show the watermarked images and extracted watermarks based on APDCBT after five types of attacks: Gaussian noise (0, 0.01), salt and pepper noise (0.01), JPEG compression (QF = 100), cropping of the sparse region (100 × 100), and cropping of the dense region (100 × 100), labeled 2 to 6, respectively.
Figure 9 shows the watermarked images and extracted watermarks of the APDCBT-based watermarking scheme under combinations of the above attacks. Both Figure 8 and Figure 9 demonstrate that the proposed watermarking scheme can extract the watermark with high quality, which also suggests the robustness of the proposed method.
Additionally, Table 3 illustrates the robustness of the proposed algorithm under combinations of different attacks. In terms of BER and NC, the proposed scheme based on APDCBT has excellent performance.

5. Conclusions

In this paper, a robust digital image watermarking algorithm based on SIFT, SVD, and APBT is proposed for copyright protection. The SIFT algorithm is introduced in detail, and the watermarking algorithm is constructed by combining SIFT with local digital watermarking. The neighborhoods of the feature points used for embedding are obtained by filtering out points that lie beyond the image boundary or whose neighborhoods overlap with others. Block-based APBT is then carried out on each neighborhood to obtain the coefficients matrix for SVD-based watermark embedding. To make the scheme more secure, the Arnold transform is applied to the watermark.
Experimental results show that the proposed method has better performance in terms of robustness and imperceptibility. In future work, we will continue to improve the algorithm for host images with small entropies and for asymmetrical watermarks, in order to achieve better performance on more types of images. Additionally, we will apply the proposed algorithm to color images.

Acknowledgments

This work was supported by the National Natural Science Foundation of China (No. 61201371), the Research Award Fund for Outstanding Young and Middle-Aged Scientists of Shandong Province, China (No. BS2013DX022), and the Natural Science Foundation of Shandong Province, China (No. ZR2015PF004).

Author Contributions

Yunpeng Zhang and Chengyou Wang conceived the algorithm; Yunpeng Zhang and Xiaoli Wang designed the experiments; Yunpeng Zhang and Min Wang performed the experiments; Chengyou Wang and Xiaoli Wang analyzed the results; Yunpeng Zhang drafted the manuscript; and Chengyou Wang revised the manuscript. All authors have read and approved the final manuscript.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Skodras, A.; Christopoulos, C.; Ebrahimi, T. The JPEG 2000 still image compression standard. IEEE Signal Process. Mag. 2001, 18, 36–58.
2. Lou, D.C.; Liu, J.L. Fault resilient and compression tolerant digital signature for image authentication. IEEE Trans. Consum. Electron. 2000, 46, 31–39.
3. Langelaar, G.C.; Setywan, I.; Lagendijk, R.L. Watermarking digital image and video data. IEEE Signal Process. Mag. 2000, 17, 20–46.
4. Van Schyndel, R.G.; Tirkel, A.Z.; Osborne, C.F. A digital watermark. In Proceedings of the 1st IEEE International Conference on Image Processing, Austin, TX, USA, 13–16 November 1994; pp. 86–90.
5. Mishra, A.; Agarwal, C.; Sharma, A.; Bedi, P. Optimized gray-scale image watermarking using DWT-SVD and Firefly algorithm. Expert Syst. Appl. 2014, 41, 7858–7867.
6. Sakthivel, S.M.; Ravi Sankar, A. A real time watermarking of grayscale images without altering its content. In Proceedings of the International Conference on VLSI Systems, Architecture, Technology and Applications, Bengaluru, India, 8–10 January 2015; pp. 1–6.
7. Jin, X.Z. A digital watermarking algorithm based on wavelet transform and Arnold. In Proceedings of the International Conference on Computer Science and Service System, Nanjing, China, 27–29 June 2011; pp. 3806–3809.
8. An, Z.Y.; Liu, H.Y. Research on digital watermark technology based on LSB algorithm. In Proceedings of the 4th International Conference on Computational and Information Sciences, Chongqing, China, 17–19 August 2012; pp. 207–210.
9. Qi, X.J.; Xin, X. A singular-value-based semi-fragile watermarking scheme for image content authentication with tamper localization. J. Vis. Commun. Image Represent. 2015, 30, 312–327.
10. Makbol, N.M.; Khoo, B.E. Robust blind image watermarking scheme based on redundant discrete wavelet transform and singular value decomposition. AEU—Int. J. Electron. Commun. 2013, 67, 102–112.
11. Pujara, C.; Bhardwaj, A.; Gadre, V.M.; Khire, S. Secure watermarking in fractional wavelet domains. IETE J. Res. 2007, 53, 573–580.
12. Rasti, P.; Samiei, S.; Agoyi, M.; Escalera, S.; Anbarjafari, G. Robust non-blind color video watermarking using QR decomposition and entropy analysis. J. Vis. Commun. Image Represent. 2016, 38, 838–847.
13. Cheng, M.Z.; Yan, L.; Zhou, Y.J.; Min, L. A combined DWT and DCT watermarking scheme optimized using genetic algorithm. J. Multimed. 2013, 8, 299–305.
14. Guo, J.T.; Zheng, P.J.; Huang, J.W. Secure watermarking scheme against watermark attacks in the encrypted domain. J. Vis. Commun. Image Represent. 2015, 30, 125–135.
15. Nguyen, T.S.; Chang, C.C.; Yang, X.Q. A reversible image authentication scheme based on fragile watermarking in discrete wavelet transform domain. AEU—Int. J. Electron. Commun. 2016, 70, 1055–1061.
16. Martino, F.D.; Sessa, S. Fragile watermarking tamper detection with images compressed by fuzzy transform. Inf. Sci. 2012, 195, 62–90.
17. Lin, C.S.; Tsay, J.J. A passive approach for effective detection and localization of region level video forgery with spatio-temporal coherence analysis. Dig. Investig. 2014, 11, 120–140.
18. Lowe, D.G. Object recognition from local scale-invariant features. In Proceedings of the 7th IEEE International Conference on Computer Vision, Kerkyra, Greece, 20–27 September 1999; Volume 2, pp. 1150–1157.
19. Mikolajczyk, K.; Schmid, C. A performance evaluation of local descriptors. In Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, Madison, WI, USA, 18–20 June 2003; Volume 2, pp. 257–263.
20. Lee, H.; Kim, H.; Lee, H. Robust image watermarking using local invariant features. Opt. Eng. 2006, 45, 535–545.
21. Lyu, W.L.; Chang, C.C.; Nguyen, T.S.; Lin, C.C. Image watermarking scheme based on scale-invariant feature transform. KSII Trans. Int. Inf. Syst. 2014, 8, 3591–3606.
22. Thorat, C.G.; Jadhav, B.D. A blind digital watermark technique for color image based on integer wavelet transform and SIFT. Proced. Comput. Sci. 2010, 2, 236–241.
23. Luo, H.J.; Sun, X.M.; Yang, H.F.; Xia, Z.H. A robust image watermarking based on image restoration using SIFT. Radioengineering 2011, 20, 525–532.
24. Pham, V.Q.; Miyaki, T.; Yamasaki, T.; Aizawa, K. Geometrically invariant object-based watermarking using SIFT feature. In Proceedings of the 14th IEEE International Conference on Image Processing, San Antonio, TX, USA, 16–19 September 2007; Volume 5, pp. 473–476.
25. Zhang, L.; Tang, B. A combination of feature-points-based and SVD-based image watermarking algorithm. In Proceedings of the International Conference on Industrial Control and Electronics Engineering, Xi’an, China, 23–25 August 2012; pp. 1092–1095.
26. Hou, Z.X.; Wang, C.Y.; Yang, A.P. All phase biorthogonal transform and its application in JPEG-like image compression. Signal Process. Image Commun. 2009, 24, 791–802.
27. Yang, F.F.; Wang, C.Y.; Huang, W.; Zhou, X. Embedding binary watermark in DC components of all phase discrete cosine biorthogonal transform. Int. J. Secur. Appl. 2015, 9, 125–136.
28. Lowe, D.G. Distinctive image features from scale-invariant keypoints. Int. J. Comput. Vis. 2004, 60, 91–110.
29. Babaud, J.; Witkin, A.P.; Baudin, M.; Duda, R.O. Uniqueness of the Gaussian kernel for scale-space filtering. IEEE Trans. Pattern Anal. Mach. Intell. 1986, 8, 26–33.
30. Jain, C.; Arora, S.; Panigrahi, P.K. A reliable SVD based watermarking scheme. arXiv 2008. Available online: https://arxiv.org/pdf/0808.0309.pdf (accessed on 18 April 2017).
31. Zhang, Z.; Wang, C.Y.; Zhou, X. Image watermarking scheme based on DWT-DCT and SSVD. Int. J. Secur. Appl. 2016, 10, 191–206.
32. Fzali, S.; Moeini, M. A robust image watermarking method based on DWT, DCT, and SVD using a new technique for correction of main geometric attacks. Optik 2016, 127, 964–972.
Figure 1. General framework of digital watermarking.
Figure 2. Feature points of the host image by the SIFT algorithm.
Figure 3. Ultimate feature points for watermark embedding.
Figure 4. Watermark embedding procedure of the proposed scheme.
Figure 5. Watermark extraction procedure of the proposed scheme.
Figure 6. Tested images: (a) Airplane; (b) Baboon; (c) Barbara; (d) Bank; (e) Elaine; (f) Barche; (g) Milkdrop; (h) Panzer; (i) Announcer; (j) Vogue; (k) Cablecar; (l) Canyon; (m) Clown; (n) Cornfield; (o) Frog; (p) Fruits.
Figure 7. Experimental results on APDCBT-based algorithms: (a) original image and $w_1$; (b) watermarked image and extracted $w_1$ based on APDCBT; (c) original image and $w_2$; (d) watermarked image and extracted $w_2$ based on APDCBT.
Figure 8. Watermarked images and extracted watermarks based on APDCBT under five types of attacks: (a) no attack; (b) Gaussian noise (0, 0.01); (c) salt and pepper noise (0.01); (d) JPEG compression (QF = 100); (e) cropping the sparse region (100 × 100); (f) cropping the dense region (100 × 100).
Figure 9. APDCBT-based watermarked images and extracted watermarks under combinations of different attacks: (a) 2 and 4; (b) 2 and 5; (c) 2 and 6; (d) 3 and 4; (e) 3 and 5; (f) 3 and 6; (g) 4 and 5; (h) 4 and 6; (i) 2, 4, and 5; (j) 2, 4, and 6; (k) 3, 4, and 5; (l) 3, 4, and 6.
Table 1. Experimental results on different images with different entropies.
| Images | Watermark | PSNR (dB) | NC | BER |
|---|---|---|---|---|
| Airplane | $w_1$ | 90.56 | 0.9994 | 0.0008 |
| Airplane | $w_2$ | 93.28 | 0.9961 | 0.0137 |
| Baboon | $w_1$ | 79.31 | 0.9683 | 0.0156 |
| Baboon | $w_2$ | 89.76 | 0.9043 | 0.0068 |
| Barbara | $w_1$ | 91.18 | 0.9648 | 0.0176 |
| Barbara | $w_2$ | 90.27 | 0.8681 | 0.0447 |
| Bank | $w_1$ | 82.92 | 0.9577 | 0.0195 |
| Bank | $w_2$ | 91.18 | 0.9137 | 0.0317 |
| Elaine | $w_1$ | 74.59 | 0.9401 | 0.0225 |
| Elaine | $w_2$ | 77.68 | 0.7354 | 0.0680 |
| Barche | $w_1$ | 79.55 | 0.9542 | 0.0195 |
| Barche | $w_2$ | 92.32 | 0.8772 | 0.0391 |
| Milkdrop | $w_1$ | 78.12 | 0.9225 | 0.0273 |
| Milkdrop | $w_2$ | 81.05 | 0.7129 | 0.0653 |
| Panzer | $w_1$ | 78.85 | 0.9683 | 0.0146 |
| Panzer | $w_2$ | 85.78 | 0.8803 | 0.0389 |
| Announcer | $w_1$ | 76.48 | 0.8121 | 0.0957 |
| Announcer | $w_2$ | 85.50 | 0.8973 | 0.0376 |
| Vogue | $w_1$ | 77.78 | 0.7928 | 0.0969 |
| Vogue | $w_2$ | 83.29 | 0.8515 | 0.0434 |
| Cablecar | $w_1$ | 86.30 | 0.8577 | 0.0770 |
| Cablecar | $w_2$ | 90.28 | 0.9044 | 0.0342 |
| Canyon | $w_1$ | 85.98 | 0.8299 | 0.0784 |
| Canyon | $w_2$ | 93.29 | 0.8918 | 0.0400 |
| Clown | $w_1$ | 81.42 | 0.8208 | 0.1056 |
| Clown | $w_2$ | 87.69 | 0.8281 | 0.0489 |
| Cornfield | $w_1$ | 86.63 | 0.8618 | 0.0853 |
| Cornfield | $w_2$ | 84.19 | 0.8949 | 0.0365 |
| Frog | $w_1$ | 82.54 | 0.8270 | 0.0842 |
| Frog | $w_2$ | 85.07 | 0.8820 | 0.0430 |
| Fruits | $w_1$ | 75.17 | 0.8879 | 0.0784 |
| Fruits | $w_2$ | 81.49 | 0.9217 | 0.0291 |
Table 2. Comparison of watermarking algorithms based on APDCBT and previous schemes under different attacks.
| Different Indexes | Jain et al. [30] | Zhang et al. [31] | Fzali and Moeini [32] | APDCBT |
|---|---|---|---|---|
| PSNR (dB) | 20.98 | 39.75 | 56.42 | 90.56 |
| Execution Time (s) | 1.10 | 2.43 | 3.46 | 6.07 |
| Gaussian Noise (0, 0.01) | 0.9695 | 0.8016 | 0.9897 | 0.9983 |
| Salt and Pepper Noise (0.01) | 0.9157 | 0.8925 | 0.9842 | 0.9923 |
| JPEG Compression (QF = 50) | 0.9877 | 0.9850 | 0.9865 | 0.9830 |
| JPEG Compression (QF = 100) | 0.9960 | 0.9984 | 0.9999 | 0.9838 |
| Sparse Region Cropping (100 × 100) | 0.8236 | 0.9236 | 0.9999 | 0.9841 |
| Sparse Region Cropping (256 × 256) | - | 0.8187 | 0.9602 | 0.9821 |
| Dense Region Cropping (100 × 100) | 0.6589 | 0.9255 | 0.9804 | 0.9847 |
| Dense Region Cropping (256 × 256) | - | 0.8182 | 0.9682 | 0.9881 |
Table 3. Evaluation of the watermarking algorithms based on APDCBT under different combinations of attacks.
| Combination of Attacks | BER | NC |
|---|---|---|
| 1 | 0.0008 | 0.9994 |
| 2 and 4 | 0.0061 | 0.9945 |
| 2 and 5 | 0.0039 | 0.9974 |
| 2 and 6 | 0.0049 | 0.9991 |
| 3 and 4 | 0.0046 | 0.9949 |
| 3 and 5 | 0.0049 | 0.9949 |
| 3 and 6 | 0.0081 | 0.9915 |
| 4 and 5 | 0.0088 | 0.9838 |
| 4 and 6 | 0.0120 | 0.9847 |
| 2, 4, and 5 | 0.0046 | 0.9991 |
| 2, 4, and 6 | 0.0046 | 0.9993 |
| 3, 4, and 5 | 0.0061 | 0.9889 |
| 3, 4, and 6 | 0.0066 | 0.9915 |
