Article

Securing Color Video When Transmitting through Communication Channels Using DT-CWT-Based Watermarking

Department of Information Technology, College of Computer and Information Sciences, Princess Nourah Bint Abdulrahman University, P.O. Box 84428, Riyadh 11671, Saudi Arabia
*
Author to whom correspondence should be addressed.
Electronics 2022, 11(12), 1849; https://doi.org/10.3390/electronics11121849
Submission received: 1 May 2022 / Revised: 4 June 2022 / Accepted: 7 June 2022 / Published: 10 June 2022
(This article belongs to the Section Computer Science & Engineering)

Abstract: In this paper, a color video watermarking system based on SVD in the complex wavelet domain is proposed. Video watermarking, the process of inserting copyright information into video bit streams, has been advocated in recent years as a solution to the problem of unlawful digital video alteration and dissemination. An effective, robust, and invisible video watermarking algorithm is proposed in this paper. The approach is built on a cascade of two powerful mathematical transforms: the two-level dual-tree complex wavelet transform (DT-CWT) and singular value decomposition (SVD). This hybrid technique demonstrates a high level of security as well as robustness against various levels of attack. The proposed algorithm was put to the test for imperceptibility and robustness and achieved excellent results. We compared our suggested method to a DWT-SVD-based technique and found it to be far more reliable and effective.

1. Introduction

Information sharing is now commonplace among institutions and individuals all around the world. The introduction of 3G and 4G networks for high-speed communication, as well as the advent of social media such as Facebook, Twitter, and the WhatsApp application on mobile phones, has enabled people to share information 24 hours a day, seven days a week. Sharing information with others has created the problem of data being altered and subsequently republished under a different name. These issues prompted researchers to create algorithms that would put an end to the practice by embedding security into digital content. This effort resulted in digital watermarking [1].
A digital watermark is essentially a piece of ownership information inserted in digital data to protect it from unauthorized access. Watermarking for digital video is simply a more advanced variant of watermarking for digital images: digital video is made up of frames, a series of still images. The payload is the quantity of information that is embedded.
Video watermarking, unlike image watermarking, must also address the issue of large data volume. The watermark-embedding algorithm tucks the watermark into the host media (image/video) or an altered version of the host media. In the transform domain watermarking approach, the frames of the video are converted to the frequency domain, and the watermark is then placed in the transformed media.
Illegal digital movie distribution is a regular and serious danger to the film business. A pirated duplicate of a digital video can now be easily transmitted to a global audience due to the arrival of high-speed broadband internet access. Digital video watermarking, in which additional information, known as a watermark, is inserted in the host video, may be one way to limit this type of digital theft. This watermark can be retrieved from the video content at the decoder and used to determine whether it is watermarked [2].
SVD is popular due to its ease of implementation and appealing mathematical properties. The use of SVD for watermarking has gained favor. In image processing, pattern recognition, and information security, the SVD approach has received a great deal of attention. One method of watermarking with SVD is to apply it to the entire cover image and alter all of the SVs to incorporate the watermark data. For most sorts of attacks, the singular values (SVs) fluctuate relatively little, which is a key feature of SVD watermarking.
The piracy of a digital movie is a serious problem for movie studios and producers but can be addressed by digital video watermarking. Robustness to several attacks on the watermark has increased with existing watermarking methods. However, none of the available solutions is resistant to a combination of video compression and histogram equalization. In this study, a blind video watermarking approach is presented in which the watermark is embedded in the singular values of the dual-tree complex wavelet transform coefficients of the luminance channel.
The original video quality is preserved since the human eye is less sensitive to distortion in the luminance channel. The singular value decomposition is applied because of the strong stability of its singular values, and the approximate shift invariance of the dual-tree complex wavelet transform ensures robustness against attacks. The proposed approach withstands noise addition, compression, and histogram equalization. Our proposed technique using DT-CWT and SVD is compared with the existing DWT-SVD watermarking method.
The following is the paper’s structure. Section 2 introduces a background of Singular Value Decomposition (SVD) and the CWT Transform with Dual Trees (DT-CWT), which is used in watermarking. Section 3 reviews the related literature. Section 4 proposes the novel robust CWT-based SVD watermarking approach. The experimental results are presented in Section 5. Finally, Section 6 introduces our conclusions.

2. Background Review

2.1. Singular Value Decomposition (SVD)

As defined in algebra, an image is a matrix of non-negative entries. If A is a square image, denoted A ∈ R^(n×n), where R represents the real number domain, then the SVD of A is defined as [2]:
A = U S V^T
where U and V are orthogonal matrices such that U^T U = I and V^T V = I, with I an identity matrix, and S = diag(σ1, …, σp), where p = min(m, n) and σ1 ≥ σ2 ≥ … ≥ σp ≥ 0 are the singular values of A. This decomposition is known as the singular value decomposition (SVD) of A and can be written as follows [2]:
SVD(A) = [U, S, V]
or
A = U S V^T
The stability of SVs states that, when A undergoes a minor perturbation, the deviation of its SVs is not greater than the greatest SV of the perturbation matrix [3]. SVD has advantages when applied to digital images [3,4,5]:
- When a small perturbation occurs in an image, the SVs of the image have good stability.
- The SVs represent the image's algebraic properties, which are not visually apparent.
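The SV stability property above can be checked numerically. The following sketch (with a random matrix standing in for a grayscale frame; the frame and noise parameters are illustrative, not from the paper) verifies that the deviation of the singular values under a small perturbation is bounded by the largest singular value of the perturbation matrix:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.uniform(0, 255, size=(64, 64))      # stand-in for a grayscale frame
E = rng.normal(0, 1.0, size=A.shape)        # small perturbation ("attack")

s_clean = np.linalg.svd(A, compute_uv=False)
s_noisy = np.linalg.svd(A + E, compute_uv=False)

# The SV deviation is bounded by the largest SV (2-norm) of the perturbation.
deviation = np.max(np.abs(s_noisy - s_clean))
bound = np.linalg.svd(E, compute_uv=False)[0]
print(deviation <= bound)    # True: singular values are stable
```

This is exactly why SVD-based watermarking survives mild attacks: the embedded information lives in quantities that move very little under small perturbations.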

2.2. The CWT Transform with Dual Trees

The dual-tree CWT was originally introduced by Kingsbury in 1998 [6]. The idea behind the dual-tree technique is relatively straightforward and is similar in spirit to post-filtering of real subband signals. The dual-tree CWT employs two real DWTs: the first DWT provides the real part of the transform, while the second provides the imaginary part. Figure 1 and Figure 2 show the analysis and synthesis filter banks (FBs) used to implement the dual-tree CWT and its inverse. Each of the two real wavelet transforms uses a different set of filters satisfying the perfect reconstruction (PR) conditions, and the two sets of filters are jointly designed so that the overall transform is approximately analytic.
Let h0(n) and h1(n) denote the low-pass/high-pass filter pair of the upper FB, and let g0(n) and g1(n) denote the low-pass/high-pass filter pair of the lower FB, where ψh(t) and ψg(t) are the two real wavelets associated with the two real wavelet transforms. The filters are designed so that, in addition to satisfying the PR conditions, the complex wavelet ψ(t) = ψh(t) + jψg(t) is approximately analytic. Equivalently, ψg(t) is approximately the Hilbert transform of ψh(t), denoted ψg(t) ≈ H{ψh(t)}.
It is worth noting that the filters themselves are real; no complex arithmetic is needed to compute the dual-tree CWT. To invert the transform, the real and imaginary parts are each inverted: the inverse of each of the two real DWTs is applied to obtain two real signals, and the final output is the average of these signals. Although the original signal x(n) can be recovered from either the real or the imaginary part alone, such inverse DT-CWTs do not capture all of the advantages of an analytic wavelet transform. If the square matrices Fh and Fg represent the two real DWTs, then the dual-tree CWT can also be represented in matrix form.
The DT-CWT is also simple to implement. Because there is no data flow between the two real DWTs, they can be realized with existing DWT software or hardware, and the transform is naturally parallelizable for efficient hardware implementation. Furthermore, because the DT-CWT is implemented using two real wavelet transforms, its application can be guided by the existing theory and practice of real wavelet transforms. Wavelet-based signal processing methods developed for real wavelet transforms, such as wavelet coefficient thresholding, can also be applied to the DT-CWT [6].
On the other hand, the DT-CWT requires the design of new filters. It needs a pair of filter sets chosen so that the corresponding wavelets form a Hilbert transform pair; existing wavelet filters cannot simply be reused to implement each tree of the DT-CWT. Pairs of Daubechies wavelet filters, for example, do not satisfy the requirement that ψg(t) ≈ H{ψh(t)}. If the DT-CWT is used with filters that do not meet this condition, the transform will not provide the full benefits of analytic wavelets.
These issues prevent ordinary wavelets from being used in other areas of image processing. The lack of shift invariance can be avoided by not downsampling the filter outputs at each level; however, this significantly increases the computational cost, and the resulting undecimated wavelet transform is still unable to distinguish between opposing diagonals because the transform remains separable. To distinguish opposing diagonals with separable filters, the filter frequency responses for positive and negative frequencies must be unequal.
Complex wavelet filters, which can be designed to suppress negative frequency components, are a good way to achieve this. The CWT thus outperforms the separable DWT in terms of shift invariance and directional selectivity. As in the basic DWT, two trees are applied to the rows and then the columns of the picture to compute the 2-D CWT of images (Figure 3).
The CWT decomposes an image into six directed subimages, each level resulting from uniformly spaced directional filtering and subsampling.
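The key property exploited above, that an analytic signal has no negative-frequency content, can be illustrated with a minimal FFT-based construction. This is a sketch of the analytic-signal idea only, not of the DT-CWT filter design itself; the test signal is an arbitrary cosine chosen for illustration:

```python
import numpy as np

def analytic(x):
    """FFT-based analytic signal: keep DC, double positive bins, zero negative bins."""
    N = len(x)                  # assumes N even
    X = np.fft.fft(x)
    h = np.zeros(N)
    h[0] = 1
    h[1:N // 2] = 2
    h[N // 2] = 1
    return np.fft.ifft(X * h)   # x + j * Hilbert(x)

t = np.linspace(0, 1, 256, endpoint=False)
x = np.cos(2 * np.pi * 8 * t)   # real oscillation (wavelet-like test signal)
z = analytic(x)

spec = np.fft.fft(z)
neg = np.abs(spec[129:])        # strictly negative-frequency bins
print(neg.max())                # ~0: negative frequencies are suppressed
```

The real part of z equals the original signal, and the imaginary part is its Hilbert transform, which is exactly the relationship ψg(t) ≈ H{ψh(t)} that the dual-tree filters approximate.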
There are four main flaws in the wavelet transform. The first is oscillation: the wavelet coefficients tend to fluctuate between positive and negative values around singularities, which complicates wavelet-based processing and makes singularity extraction and signal modeling, in particular, extremely difficult [6]; a wavelet overlapping a singularity may even have a small or zero coefficient. The second issue is shift variance: a tiny shift of the signal significantly perturbs the oscillation pattern of the wavelet coefficients around singularities, which further complicates wavelet-domain processing. To better understand wavelet coefficient oscillations and shift variance, consider a piecewise smooth signal x(t − t0) similar to the step function [6],
u(t) = 0, t < 0;  u(t) = 1, t ≥ 0
Analyzed with a wavelet basis having a sufficient number of vanishing moments, its wavelet coefficients are samples of the wavelet's step response [6]:
d(j, n) ≈ 2^(3j/2) Δ ∫_{−∞}^{2^j t0 − n} ψ(t) dt
where Δ is the height of the jump. As ψ(t) is a zero-mean oscillating bandpass function, the step response d(j, n) oscillates as a function of n. Furthermore, the factor 2^j in the upper limit (j ≥ 0) amplifies the sensitivity of d(j, n) to the time shift t0, resulting in strong shift variance. The third issue is aliasing, caused by computing the wavelet coefficients with iterated discrete-time down-sampling operations interleaved with non-ideal low-pass and high-pass filters. The IDWT cancels this aliasing, but only if the wavelet and scaling coefficients are left unchanged. The fourth issue is lack of directionality, which makes modeling and processing of geometric visual elements, such as ridges and edges, difficult.
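The shift variance of the decimated DWT can be demonstrated with a minimal one-level Haar transform (a toy stand-in chosen for brevity; the paper's transform is the DT-CWT). Shifting a step edge by a single sample completely changes its detail coefficients:

```python
import numpy as np

def haar_level1(x):
    """One level of the decimated Haar DWT: approximation and detail coefficients."""
    pairs = x.reshape(-1, 2)
    approx = (pairs[:, 0] + pairs[:, 1]) / np.sqrt(2)
    detail = (pairs[:, 0] - pairs[:, 1]) / np.sqrt(2)
    return approx, detail

# A step edge (like u(t) above) and a copy shifted by one sample
step = np.zeros(32); step[16:] = 1.0
shifted = np.zeros(32); shifted[17:] = 1.0

_, d0 = haar_level1(step)
_, d1 = haar_level1(shifted)

# Edge aligned with a decimation boundary: all detail coefficients are zero.
# Shifted by one sample: the edge falls inside an analysis pair, detail ~0.707.
print(np.abs(d0).max(), np.abs(d1).max())
```

A shift-invariant transform would produce detail coefficients of (nearly) the same magnitude for both signals; the dual-tree CWT's approximate shift invariance is precisely what avoids this failure.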

Complex Wavelets as a Solution

There is a straightforward solution to these four DWT flaws. The key is to notice that these issues do not affect the Fourier transform. First, the magnitude of the Fourier transform does not oscillate between positive and negative values but instead provides a smooth positive envelope in the Fourier domain. Second, the Fourier transform is perfectly shift-invariant, encoding a shift as a simple linear phase offset. Third, the Fourier coefficients are not aliased, and signal reconstruction does not rely on delicate aliasing cancellation properties.
Finally, the sinusoids of the multi-dimensional Fourier basis are directional plane waves. In this research, we provide a CWT-based SVD watermarking approach for gray and color frames. The watermark embedding operates on the SVs of the high-frequency bands, and watermarks are inserted using the LSB and additive approaches.

3. Related Work

Several publications on the use of the DWT with SVD for information hiding in digital photographs have appeared in the literature. In [7], DWT is used to split a picture into frequency subbands, and one of the bands is prepared for data embedding by dividing it into 4 × 4 blocks. SVD is applied to every block, and a watermark is embedded in the diagonal of each block. When a watermarked image is attacked, the normalized correlation coefficient NC is utilized to determine the degree of similarity between the original watermark and the extracted watermark. When DWT and SVD are coupled, the watermarking method exceeds the traditional DWT method in terms of attack resistance. In [8], the method presented works well for image processing operations, using the DWT and SVD methods to hide data in the image's high frequency range.
The authors in [9] provided a lossless video watermarking strategy based on optimal key frame selection employing a linear wavelet transform and an intelligent gravitational search algorithm. The histogram difference approach is used to extract color motion and motionless frames from the cover video. On the chrominance channel of motion frames, a one-level linear wavelet transform is applied, and the low-frequency sub-band LL is used for watermark embedding. In terms of imperceptibility and robustness, the suggested approach was tested against 12 video processing attacks. Experiments show that the suggested method outperformed five state-of-the-art techniques on the attacks under consideration.
In terms of complexity, embedding the watermark in all frames of the cover video is not a smart approach. A study [10] offered a solution in which the watermark does not appear in all frames of the cover video. Due to the widespread illicit copying of digital media, such as images, videos, and audio, watermarking is an important strategy for copyright protection. This work proposed an SCD approach for embedding a watermark in selected cover frames. The approach is resistant to numerous attacks due to the use of the wavelet transform and singular value decomposition. The algorithm's performance measurement demonstrates its resistance to both intentional and unintentional attacks.
An efficient compression-based secure digital watermarking scheme was proposed in [11]. In this scheme, the video is first transformed into a number of frames. Each frame is then decomposed using a dual-tree complex wavelet transform. The embedding places are then optimally picked using the adaptive cuttlefish optimization technique to boost the system's security. Then, using the elliptic curve cryptography method, the secret images are encrypted. The encrypted images are then transformed to binary bits, which are embedded in the designated places of the video frame. After the encryption procedure, the H.265 video encoding strategy is used to reduce the size of the encrypted document. This method efficiently reduces the image size without compromising the image's quality.
Articles [12,13] explain the principles of the suggested method and provide a strategy to cope with the most common sorts of attacks in video watermarking by thoroughly studying the types of attacks. By offering a three-dimensional discrete cosine transform technique, they offer a secure solution against collusion attempts. According to the tests, the proposed approach is immune to collusion attacks, which are the most common type of digital watermarking attack. The suggested technique also has a larger watermarking capacity than other methods. The suggested algorithm is capable of covering all types of video, both dynamic and static, due to its unique blocking mechanism.
Based on the 2-D DFT (two-dimensional discrete Fourier transform), the work in [14] provided a geometrically robust multi-bit video watermarking technique. While most known video watermarking systems require synchronization to extract the watermark from rotated or scaled videos, which takes time and reduces accuracy, the suggested method can extract the watermark directly from rotated, scaled, or cropped videos without synchronization. Circular templates in the DFT domain are translated into spatial masks and added to the video frames in the spatial domain to insert the watermark. To maintain the quality of the watermarked video, a perceptual model based on local contrast is used. The authors also present an accurate and efficient extraction approach based on cross-correlating the Wiener-filtered DFT magnitude with the stretched DFT magnitude.
Based on the integer wavelet transform and the generalized chaotic sine map, the work in [15] presents a strong blind and secure video watermarking method. Each main frame of the standard video is subjected to an integer wavelet transform in this method. The three principles of content quality, data resilience, and data capacity are used to evaluate watermarking strategies. As a result, the watermark is placed in low-frequency coefficients to ensure the quality of the watermarked video. By using a chaotic map as a watermark, an adequate security level is added to improve the efficiency and usefulness. In addition, the key criterion of resistance measurement is the normalized correlation coefficient (NCC) between the main and extraction watermarks. The results reveal that the proposed strategy is effective in a variety of situations.
A color video watermarking algorithm based on the hyperchaotic Lorentz system was presented in [16]. To begin, the color watermark images are scrambled using a hyperchaotic Lorentz system to increase their privacy. Second, shot boundary detection is employed to extract the video's non-motion frames. The chaotic sequence is then utilized to select specific non-motion frames. The discrete wavelet transform is then applied to the selected frames to obtain the required subbands. Finally, the encrypted watermarks are placed into the chosen subbands. The peak signal-to-noise ratio, normalized correlation, and structural similarity index measure are used to evaluate the suggested method's performance. Experiments reveal that the average PSNR and SSIM of watermarked frames are 57.00 dB and 0.99, respectively, indicating that the suggested approach has a good level of effectiveness.
The authors in [17] used a two-level discrete wavelet transform (DWT) and an efficient singular value decomposition (SVD) technique. For non-blind and blind watermarking techniques, performance characteristics such as processing time, peak signal-to-noise ratio (PSNR), and normalized correlation (NC) are compared. When subjected to tough noise attacks, such as geometrical, filtering, and salt-and-pepper noise, average PSNR and NC values of 40 dB and 0.99, respectively, are achieved. For difficult attacks, such as salt-and-pepper, Gaussian, and Gamma noise, the range conversion method enhances the PSNR.
Based on the undecimated discrete wavelet transform (UDWT), the paper [18] provides an improved video watermarking approach. The cover video's frames are broken into 8 × 8 blocks, and two AC coefficients are chosen in each block to insert the watermark bit. The technique is applied to four UDWT bands, and the redundancy of this transform enables large-capacity video watermarking. The watermarking approach is rendered oblivious (blind) owing to the masking properties of the human visual system in the UDWT domain. The experimental results show that the proposed video watermarking system meets all four watermarking requirements: security, obliviousness, robustness, and capacity.

4. The Proposed Method

The suggested DT-CWT-based SVD video watermarking methods include two procedures: one embeds the watermark frame by frame, while the other extracts it from the watermarked version of the video clip. The embedding technique for each frame of the video is shown in the block diagram in Figure 4. Figure 5 shows the two watermarks, and Figure 6a–d shows the different videos used in this paper, where Figure 6c,d shows the two scene images of one video.

4.1. The Proposed CWT-Based SVD Watermarking Method

The proposed strategy is implemented by modifying the SVs of frames’ high frequency subbands using either the LSB or the addition methods.

4.1.1. CWT Using the LSB Method

Embedding a Watermark

(1)
The watermark is subjected to a one-level complex wavelet transform.
(2)
All high-pass bands are then subjected to the SVD transform.
(3)
Use the DT-CWT and SVD to process the original frame.
(4)
For the entire frame, the one-level CWT is computed. Six high-frequency sub-bands are created using this procedure.
(5)
Application of SVD to the high-frequency subband
A_j = U_j × S_j × V_j^T,  j = 1, 2, …, 6
where j indexes the high-pass subbands of the one-level decomposition, and S_j denotes the associated SVs matrix.
(6)
Using the LSB technique, replace the SVs of the high-pass sub band with the SVs of the watermark.
(7)
After that, embed the values of the SVs of the CWT coefficients of the watermark into the LSB of the SVs of the original frame.
Note that the subbands of the cover frame are larger than those of the watermark, so all singular values of the watermark can be hidden in the LSBs of the singular values of the cover frame.
(8)
Obtain the new DT-CWT coefficients of six subbands:
Â_j = U_j × Ŝ_j × V_j^T,  j = 1, 2, …, 6
(9)
Finally, apply the inverse CWT to the modified CWT coefficients. This operation produces the final watermarked frame.
(10)
Then, we generate the watermarked video as shown in Figure 7.

The Watermark Extraction

Without using the cover frame, the watermark is retrieved. The following are the steps of the extraction procedure:
(1)
Calculate the frame’s one-level DT-CWT. This results in six high-frequency subbands.
(2)
Every high-pass subband is subjected to SVD.
Â_j = U_j × Ŝ_j × V_j^T,  j = 1, 2, …, 6
(3)
Extract the SVs from every high-frequency band, and take the LSB of each of the SVs of the watermarked frame's coefficients.
(4)
Using the matrix of the watermarked image and the vectors obtained during the embedding process, construct the DT-CWT coefficients of the six high-frequency subbands.
A_{w,j} = U_{w,j} × S_j^w × V_j^T,  j = 1, 2, …, 6
(5)
Finally, the watermarks are reconstructed using the inverse DT-CWT.

4.1.2. Proposed Additive Method

The Watermark’s Embedding

(1)
The watermark is decomposed using one level of CWT.
(2)
Every high-pass band is subjected to SVD.
(3)
Use the DT-CWT and SVD to process the original image.
(4)
The two-level CWT is calculated. Six high-frequency sub-bands are created using this procedure.
(5)
Decompose each high-frequency subband using the singular value decomposition:
A_j = U_j × S_j × V_j^T,  j = 1, 2, …, 6
where S_j denotes the SVs matrix and j indexes the high-pass subbands of the two-level decomposition.
Note that the subbands of the cover frame are the same size as those of the watermark, so the singular values of the watermark's subbands can be added to those of the cover's subbands.
(6)
Using the additive method, combine the SVs of each high-frequency subband with the SVs of the watermark.
Ŝ_j = S_j + k × S_j^w
(7)
Obtain the following six subbands:
Â_j = U_j × Ŝ_j × V_j^T,  j = 1, 2, …, 6
(8)
Finally, using the altered DT-CWT coefficients, apply the inverse of DT-CWT; this operation provides the final watermarked frame.
(9)
Then, we generate the watermarked video.
These steps are shown in Figure 8.
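The additive embedding and extraction steps above can be sketched on a single subband. This is a minimal illustration, not the full pipeline: the arrays below are random stand-ins for one DT-CWT high-pass subband of the cover frame and of the watermark (computing real DT-CWT subbands would require a DT-CWT implementation, which is outside this sketch), and the gain k = 0.05 is an illustrative choice:

```python
import numpy as np

def embed_additive(subband, wm_subband, k=0.05):
    """Embed watermark SVs into a cover subband: S_hat = S + k * S_w."""
    U, S, Vt = np.linalg.svd(subband)
    Sw = np.linalg.svd(wm_subband, compute_uv=False)
    S_hat = S + k * Sw
    # Return the watermarked subband plus the original SVs (needed at extraction).
    return U @ np.diag(S_hat) @ Vt, S

def extract_additive(wm_band, S_orig, k=0.05):
    """Recover the watermark SVs: S_w = (S_hat - S) / k."""
    S_hat = np.linalg.svd(wm_band, compute_uv=False)
    return (S_hat - S_orig) / k

rng = np.random.default_rng(1)
cover = rng.uniform(0, 255, (64, 64))   # stand-in high-pass subband of cover frame
wm = rng.uniform(0, 255, (64, 64))      # same-size watermark subband

band_w, S_key = embed_additive(cover, wm)
Sw_rec = extract_additive(band_w, S_key)

Sw_true = np.linalg.svd(wm, compute_uv=False)
print(np.max(np.abs(Sw_rec - Sw_true)))    # near-zero recovery error
```

In the full method, the recovered SVs would be recombined with the U and V matrices of the watermark's subbands (kept from the embedding stage) and passed through the inverse DT-CWT, as in steps (4) and (5) of the extraction procedure.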

Extraction of the Watermark

The following is a description of the watermark extraction method:
(1)
Calculate the frame’s two-level DT-CWT. This results in the creation of six high-frequency sub-bands.
(2)
For each high-frequency subband, use SVD.
Â_j = U_j × Ŝ_j × V_j^T,  j = 1, 2, …, 6
(3)
Each high-frequency subband’s SVs should be extracted.
S_j^w = (Ŝ_j − S_j) / k
(4)
Using the SV framework of the watermarked frame and the vectors calculated during the insertion operation, calculate the DT-CWT coefficients of the six high-frequency subbands.
A_{w,j} = U_{w,j} × S_j^w × V_j^T,  j = 1, 2, …, 6
(5)
Finally, use the inverse DT-CWT to create the visual watermarks.

5. Performance Evaluation

The performance of a digital watermark technique is primarily measured in terms of imperceptibility and resilience. This includes both subjective and objective evaluations. The subjective rating is based on human perception.
Subjective evaluation has some practical utility for assessing the quality of watermarking systems, although subjective findings can vary greatly among people with different experiences. As a result, we use objective assessment criteria, namely the peak signal-to-noise ratio (PSNR), the structural similarity index measure (SSIM), and the normalized cross-correlation (NC), to assess the watermarking algorithm's imperceptibility and robustness.

5.1. Imperceptibility Assessment

The capacity of the host to hide the watermark information is known as imperceptibility.

5.1.1. Peak Signal-to-Noise Ratio

The PSNR measures the mean square error between the original image and the watermarked image and is related to imperceptibility: the higher the PSNR, the better the quality of the watermarked image [19]. The two error measures used to compare picture-watermarking quality are the mean square error (MSE) and the peak signal-to-noise ratio (PSNR); the PSNR and the MSE have an inverse relationship. This measurement is given by Equation (16).
PSNR = 10 log10( L^2 / [ (1/(MN)) Σ_{M,N} (A_w(i,j) − A(i,j))^2 ] ) = 10 log10( L^2 / MSE )
where L is the maximum value possible in each pixel, A(i,j) is the original image, Aw(i,j) is the watermarked image, and M and N are the number of rows and columns in the original images.
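Equation (16) translates directly into code. In this sketch the tiny 4 × 4 frames are illustrative test data, not from the paper's experiments:

```python
import numpy as np

def psnr(orig, marked, L=255):
    """PSNR between an original and a watermarked frame, per Equation (16)."""
    mse = np.mean((orig.astype(np.float64) - marked.astype(np.float64)) ** 2)
    return float('inf') if mse == 0 else 10 * np.log10(L ** 2 / mse)

a = np.full((4, 4), 100.0)
b = a.copy(); b[0, 0] = 110.0   # one pixel off by 10 -> MSE = 100/16 = 6.25
print(round(psnr(a, b), 2))     # 40.17 dB
```

A value around 40 dB is commonly considered good imperceptibility, which matches the quality levels reported for the schemes reviewed in Section 3.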

5.1.2. Correlation Coefficient

This metric is computed between the original and extracted watermarks. The value of Cr ranges from −1 to 1, with values closer to 1 indicating that the images are more similar. Mathematically, this measure is given as Equation (17):
C_r = Σ_{M,N} A(i,j) A_W(i,j) / Σ_{M,N} A(i,j)^2
where A(i,j) and A_W(i,j) are the original and extracted watermarks, respectively. The values of Cr determine how robust the system is to image processing operations; a higher Cr indicates that the system is highly robust to attacks.
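The correlation coefficient of Equation (17) can be implemented in a few lines; the random watermark below is an arbitrary test image:

```python
import numpy as np

def corr_coeff(A, Aw):
    """Cr per Equation (17): sum(A * Aw) / sum(A^2)."""
    A = A.astype(np.float64)
    Aw = Aw.astype(np.float64)
    return np.sum(A * Aw) / np.sum(A ** 2)

wm = np.random.default_rng(2).uniform(0, 255, (8, 8))
print(corr_coeff(wm, wm))   # identical watermarks give Cr = 1.0
```

Note that, unlike the Pearson-style NC of Equation (19), this form normalizes only by the energy of the original watermark.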

5.1.3. Structural Similarity Index Measure

The structural similarity index measure (SSIM) is a method for comparing the original image to the watermarked image. The SSIM index is a full-reference metric: it measures image quality using an uncompressed or distortion-free image as a baseline. SSIM is intended to improve on older measures, such as the PSNR and MSE, which have been shown to be inconsistent with human vision. SSIM takes a numerical value between −1 and 1.
A value of 1 indicates two identical sets of data. Unlike the PSNR or MSE, the SSIM index evaluates perceived errors and treats picture degradation as a perceived change in structural information. The notion behind structural information is that pixels have substantial interdependencies when they are spatially close, and these dependencies carry crucial information about the structure of the visual scene's elements. The mean, variance, and covariance of the values of the original and watermarked images capture this structural information. SSIM is given by Equation (18):
SSIM = [ (2 μ_x μ_y + S_1)(2 σ_xy + S_2) ] / [ (μ_x^2 + μ_y^2 + S_1)(σ_x^2 + σ_y^2 + S_2) ]
where μ_x is the average of x, μ_y the average of y, σ_xy the covariance of x and y, σ_x^2 the variance of x, and σ_y^2 the variance of y; S_1 = (k_1 D)^2 and S_2 = (k_2 D)^2 are two variables that stabilize the division when the denominator is weak. D is the dynamic range of the pixel values (typically 2^(#bits per pixel) − 1), with k_1 = 0.01 and k_2 = 0.03 by default.
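Equation (18) as written is a single-window SSIM computed over the whole image; practical SSIM implementations apply it in sliding windows and average, but the global form below is a faithful sketch of the formula itself (the test image is arbitrary):

```python
import numpy as np

def ssim_global(x, y, D=255, k1=0.01, k2=0.03):
    """Single-window SSIM per Equation (18) (no sliding window)."""
    x = x.astype(np.float64)
    y = y.astype(np.float64)
    s1, s2 = (k1 * D) ** 2, (k2 * D) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + s1) * (2 * cov + s2)) / \
           ((mx ** 2 + my ** 2 + s1) * (vx + vy + s2))

img = np.random.default_rng(3).uniform(0, 255, (16, 16))
print(round(ssim_global(img, img), 6))   # identical images give SSIM = 1.0
```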

5.2. Robustness

The capacity of a watermark technology to withstand diverse attacks is referred to as robustness. The NC is used as the robustness evaluation approach in this case. This is how it is defined in (19):
NC = Σ (P_i − P_a)(Q_i − Q_a) / sqrt( Σ (P_i − P_a)^2 × Σ (Q_i − Q_a)^2 )
where Pi represents the intensity of ith pixel in the image m and Pa is the mean intensity of image m. Qi represents the intensity of ith pixel in the image n, and Qa is the mean intensity of image n.
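Equation (19) is the Pearson correlation between the two images, which is straightforward to implement (the random test image below is illustrative):

```python
import numpy as np

def nc(m, n):
    """Normalized correlation per Equation (19) (Pearson form)."""
    p = m.astype(np.float64) - m.mean()   # P_i - P_a
    q = n.astype(np.float64) - n.mean()   # Q_i - Q_a
    return np.sum(p * q) / np.sqrt(np.sum(p ** 2) * np.sum(q ** 2))

a = np.random.default_rng(4).uniform(0, 255, (8, 8))
print(round(nc(a, a), 6))        # identical images  ->  1.0
print(round(nc(a, 255 - a), 6))  # inverted image    -> -1.0
```

Values near 1 after an attack indicate that the extracted watermark closely matches the original, which is how robustness is scored in Section 6.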

6. Simulation Results

6.1. Results of Proposed DT-CWT-Based SVD Video Watermarking Using Additive Method

A total of 30 video frames were used to evaluate the proposed approach on video sequences. Thirty frames were chosen for watermarking in order to minimize memory utilization and to reduce embedding delay. The suggested approach embeds one watermark in the first 15 frames and another watermark in the other 15 frames. Figure 9 and Figure 10 show how the correlation coefficient varies with the frame index. The performance of the suggested method is clearly superior to that of DWT-based SVD watermarking, as shown in Figure 9. We use a 30-frame color video clip from a mobile video, and each frame is 576 × 704 pixels in size. The video clip was divided into two scenes; we chose 15 frames from each scene to hide a watermark. Figure 9 and Figure 10 show snapshots of each scene. Using the additive approach, the watermark was then embedded in the Y component of each frame of the two scenes, with a different watermark in each scene. The two watermark pictures are 288 × 352 pixels in size.
The two watermarked scenes without attacks are shown in Figure 9 and Figure 10. The suggested method's resilience to conventional attacks, such as JPEG compression, Gaussian noise, and histogram equalization, is shown in Figure 11, Figure 12, Figure 13, Figure 14, Figure 15 and Figure 16. The Gaussian noise attack is applied to the watermarked video frames with variance = 0.01. The correlation values in Figure 11 and Figure 12 show that, for some frames, the DWT-based SVD watermarking approach is more resilient to the insertion of Gaussian noise than the DT-CWT-based SVD watermarking method.
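The Gaussian noise attack above can be sketched as follows (a hedged reconstruction: we assume the variance of 0.01 is specified on the [0, 1] intensity scale, as is conventional, e.g. in MATLAB's imnoise):

```python
import numpy as np

def gaussian_noise_attack(frame, variance=0.01, seed=None):
    """Zero-mean Gaussian noise attack on a frame with pixel values in
    [0, 255]; the noise variance is given on the [0, 1] scale."""
    rng = np.random.default_rng(seed)
    noisy = frame / 255.0 + rng.normal(0.0, np.sqrt(variance), frame.shape)
    return np.clip(noisy, 0.0, 1.0) * 255.0
```

Clipping back to the valid range mimics what happens when the attacked frame is re-quantized for display or storage.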
The watermarked video frames are compressed with a quality of 50% for video compression. The correlation coefficients reveal that the proposed watermarking is more robust to video compression for most frames than the DWT-based SVD watermarking, as illustrated in Figure 13 and Figure 14.
Histogram equalization is an image processing technique that uses the histogram to alter the contrast of an image. It spreads out the most frequent pixel intensity values, or expands the image's intensity range, so that the image's low-contrast sections obtain more contrast. Equalization can be applied to images that appear washed out due to a lack of contrast, and it is especially effective when the image has many low intensities. As illustrated in Figure 15 and Figure 16, the correlation coefficient values clearly indicate the suggested approach's robustness when compared to the DWT-based SVD method.
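The equalization attack described above is the classic CDF-remapping procedure, sketched here for an 8-bit grayscale image (a generic implementation, not the paper's code):

```python
import numpy as np

def histogram_equalize(img):
    """CDF-based histogram equalization for an 8-bit grayscale image:
    remaps intensities so the cumulative histogram becomes roughly
    linear. Assumes the image is not constant."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0][0]                       # first non-zero CDF value
    lut = np.round((cdf - cdf_min) / (cdf[-1] - cdf_min) * 255.0)
    return lut.astype(np.uint8)[img]
```

A uniform-histogram image passes through unchanged, while a low-contrast image is stretched to the full [0, 255] range, which is exactly why this attack perturbs embedded watermarks.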

6.2. The Effect of the Depth of the Watermark

Table 1 shows the effect of the depth of the watermark added to the video on the quality of the video, i.e., the relation between the depth of the added watermark and the correlation between the original watermark and the extracted watermark. We find that, as the depth increases, the quality of the recovered watermark increases, even under attack.
The additive technique embeds the watermark in the image using the following equation:

\[ S' = S + k_1 W \]

where S is the diagonal matrix of singular values of the host band, W is the watermark, and S' is the modified singular-value matrix. We studied the effect of k_1 on the quality of the recovered image and the recovered watermark.
The quality of the watermarked videos decreased with increasing depth; however, the correlation between the watermarked video and the recovered one did not change. A PSNR comparison was made for the effect of the depth of the watermark. The best quality was found when the depth was equal to 0.01; therefore, we used this value in our embedding technique.
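The additive rule above can be sketched in the style of classic SVD watermarking (function names are ours, and for simplicity the band and watermark are square; in the paper, the band would be a DT-CWT subband of the Y channel):

```python
import numpy as np

def embed_additive(band, watermark, k1=0.01):
    """Embed via S' = S + k1*W on the singular values of a transform band.
    Returns the watermarked band plus side information for extraction."""
    u, s, vt = np.linalg.svd(band, full_matrices=False)
    s_marked = np.diag(s) + k1 * watermark          # S' = S + k1 * W
    uw, sw, vwt = np.linalg.svd(s_marked)
    return u @ np.diag(sw) @ vt, (s, uw, vwt)

def extract_additive(band_marked, key, k1=0.01):
    """Recover the watermark: W = (Uw diag(s_rec) Vw^T - S) / k1."""
    s, uw, vwt = key
    _, sw_rec, _ = np.linalg.svd(band_marked, full_matrices=False)
    return (uw @ np.diag(sw_rec) @ vwt - np.diag(s)) / k1
```

Because the watermark is spread over all singular values, a small k_1 such as 0.01 keeps the watermarked band visually indistinguishable while the division by k_1 at extraction restores the full-amplitude watermark.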

6.3. Results of Proposed DT-CWT-Based SVD Video Watermarking Using LSB Method

Using a color video clip of 30 frames, we analyzed the performance of the suggested DT-CWT-based SVD video watermarking approach. Figure 17 and Figure 18 show how the correlation coefficient varies with the frame index; as they show, the performance of the suggested method is clearly superior to that of DWT-based SVD watermarking. Figure 17 and Figure 18 also include snapshots of each scene. Using the LSB approach, the watermark was embedded in the Y component of each frame of the two scenes, with a different watermark in each scene. The two watermark images are 288 × 352 pixels in size, and each frame is 576 × 704 pixels in size.
Figure 19 and Figure 20 show that the DT-CWT-based SVD watermarking method is more resilient than the DWT-based SVD watermarking method to the insertion of Gaussian noise with variance = 0.01.
As illustrated in Figure 21 and Figure 22, the correlation coefficients values clearly indicate the suggested approach’s robustness when compared to the DWT-based SVD method.
The proposed technique was then also applied to two 300-frame videos, Akiyo.mp4 and Foreman.mp4. Table 2 and Table 3 report the PSNR and SSIM of the watermarked video frames as well as the NC of the extracted watermark. The PSNR of the watermarked frames reached 74 dB, and the SSIM values were all around 0.999, showing that the watermark has good invisibility. In addition, the extracted watermark's NC values were all 1.0000, demonstrating that the watermark can be extracted reliably when the host is not under attack.
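For reference, the PSNR figures in Tables 2 and 3 follow the standard definition for 8-bit frames (a generic sketch, not the paper's code):

```python
import numpy as np

def psnr(original, distorted, peak=255.0):
    """Peak signal-to-noise ratio in dB between two equally sized frames."""
    diff = np.asarray(original, np.float64) - np.asarray(distorted, np.float64)
    mse = np.mean(diff ** 2)
    return float("inf") if mse == 0.0 else 10.0 * np.log10(peak ** 2 / mse)
```

A PSNR of about 74 dB corresponds to a mean squared error well below one gray level, consistent with the imperceptibility claim.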

6.4. Image Processing Attacks

Noise and filter attacks are two types of image processing attacks. During transmission, noise may affect the host to varying degrees. Table 4 shows how the approach performs when the host is subjected to various types of noise, filtering, rotation, histogram equalization, sharpening, gamma correction, compression, cropping, and blurring attacks. The average NC value of the extracted watermark is above 0.9. This demonstrates that the technique is robust and can meet the requirements for safe and high-quality watermark transfer when a specific component of the watermarked video frames is subjected to various attacks.

6.5. Capacity

Capacity can easily be determined with the proposed method; it is determined by the host as well as the watermark. Table 5 compares the capacity of several watermarking technologies currently in use, assuming the host's size is 576 × 704. When the watermark is a grayscale image, Table 5 displays the number of bits that can be embedded. It is easy to see that the proposed method can embed more watermark information into the host than the methods in papers [16,19,20].
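As a worked check of the "Proposed" column in Table 5 (our reading: one embedded value per pixel of the 288 × 352 watermark, the size of one subband of the 576 × 704 host):

```python
# Each decomposition level halves the host dimensions, so a 576 x 704
# host yields a 288 x 352 subband -- the watermark size used throughout
# the paper.
host_h, host_w = 576, 704
wm_h, wm_w = host_h // 2, host_w // 2
capacity = wm_h * wm_w
print(capacity)  # 101376, matching the 101,376 bits reported in Table 5
```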

6.6. Comparison and Discussion

The proposed method’s findings are compared to those of similar methods in this section. It was not possible to find works with full similarity or very close matching, owing to the small number of works in the field of video watermarking and the lack of a standardized dataset; therefore, the methods chosen were those with the most similarity to the proposed method and, as far as possible, similar conditions. The suggested method’s results are compared to the outcomes of each of these approaches in Table 6 and Table 7, where we compare the quality of the watermarked video and the time required for implementation. In Table 8, the proposed technique is compared to existing methods [16,21,22,23] under attacks, showing the NC of the recovered watermark. As shown in the tables, the proposed strategy produces superior outcomes to similar methods in the majority of circumstances.
The experimental results reveal that the suggested LSB watermarking approach is robust for most frames, with and without attacks; moreover, when we measure the fidelity of the watermarked video, the proposed method outperforms the DWT-based SVD watermarking.

7. Conclusions

In this paper, we offered a new approach for watermarking digital videos. The method employs the LSB and additive methods with a cascade of two powerful mathematical transforms: DT-CWT and SVD. The DT-CWT's spatio-frequency localization and the SVD's compact capture, in its significant components, of semi-global features and the geometric information of images were integrated to exploit their appealing qualities. The proposed method's resilience was demonstrated by the fact that it successfully recovered the watermark from each frame without using the original video in the experiments, and the retrieved watermark was extremely similar to the original watermark. Compared with the DWT-based SVD approach, the correlation coefficients revealed that the proposed watermarking was more robust to video compression for most frames.

Author Contributions

Conceptualization, R.A. and H.A.A.; Data curation, R.A. and H.A.A.; Formal analysis, R.A. and H.A.A.; Funding acquisition, R.A. and H.A.A.; Investigation, R.A. and H.A.A.; Methodology, R.A. and H.A.A.; Project administration, R.A. and H.A.A.; Resources, R.A. and H.A.A.; Software, R.A. and H.A.A.; Supervision, R.A. and H.A.A.; Validation, R.A. and H.A.A.; Visualization, R.A. and H.A.A.; Writing—original draft, R.A. and H.A.A.; Writing—review & editing, R.A. and H.A.A. All authors have read and agreed to the published version of the manuscript.

Funding

Princess Nourah bint Abdulrahman University Researchers Supporting Project number (PNURSP2022R323), Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia.

Acknowledgments

Princess Nourah bint Abdulrahman University Researchers Supporting Project number (PNURSP2022R323), Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Asikuzzaman, M.; Pickering, M.R. An Overview of Digital Video Watermarking. IEEE Trans. Circuits Syst. Video Technol. 2017, 28, 2131–2153. [Google Scholar] [CrossRef]
  2. Shieh, J.-M.; Lou, D.-C.; Chang, M.-C. A semi-blind digital watermarking scheme based on singular value decomposition. Comput. Stand. Interfaces 2006, 28, 428–440. [Google Scholar] [CrossRef]
  3. Zhou, Z.; Tang, B.; Liu, X. A Block-SVD Based Image Watermarking Method. In Proceedings of the 2006 6th World Congress on Intelligent Control and Automation, Dalian, China, 21–23 June 2006; pp. 10347–10351. [Google Scholar]
  4. Chang, C.-C.; Tsai, P.; Lin, C.-C. SVD-based digital image watermarking scheme. Pattern Recognit. Lett. 2005, 26, 1577–1586. [Google Scholar] [CrossRef]
  5. Roy, S.; Pal, A.K. An SVD Based Location Specific Robust Color Image Watermarking Scheme Using RDWT and Arnold Scrambling. Wirel. Pers. Commun. 2017, 98, 2223–2250. [Google Scholar] [CrossRef]
  6. Selesnick, I.W.; Baraniuk, R.G.; Kingsbury, N.G. The Dual-Tree Complex Wavelet Transform. IEEE Signal Process. Mag. 2005, 22, 123–151. [Google Scholar] [CrossRef]
  7. Liang, L.; Qi, S. A new SVD-DWT composite watermarking. In Proceedings of the IEEE International Conference on Signal Processing (ICSP2006), Guilin, China, 16–20 November 2006; Volume 4. [Google Scholar]
  8. Li, Q. Adaptive DWT-SVD Domain Image Watermarking Using Human Visual Model. In Proceedings of the 9th International Conference on Advanced Communication Technology, Gangwon, Korea, 12–14 February 2007; Volume 3, pp. 1947–1951. [Google Scholar]
  9. Singh, R.; Mittal, H.; Pal, R. Optimal keyframe selection-based lossless video-watermarking technique using IGSA in LWT domain for copyright protection. Complex. Intell. Syst. 2021, 8, 1047–1070. [Google Scholar] [CrossRef]
  10. Saxena, P.; Kumar, S. SCD based Video Watermarking by Using Wavelet Transform and SVD. In Proceedings of the 2021 Fourth International Conference on Computational Intelligence and Communication Technologies (CCICT), Sonepat, India, 3 July 2021; pp. 288–291. [Google Scholar] [CrossRef]
  11. Dhevanandhini, G.; Yamuna, G. An effective and secure video watermarking using hybrid technique. Multimed. Syst. 2021, 27, 953–967. [Google Scholar] [CrossRef]
  12. Keyvanpour, M.R.; Khanbani, N.; Boreiry, M. A secure method in digital video watermarking with transform domain algorithms. Multimed. Tools Appl. 2021, 80, 20449–20476. [Google Scholar] [CrossRef]
  13. Du, M.; Luo, T.; Xu, H.; Song, Y.; Wang, C. Robust HDR video watermarking method based on saliency extraction and T-SVD. Vis. Comput. 2021, 1–15. [Google Scholar] [CrossRef]
  14. Sun, X.-C.; Lu, Z.-M.; Wang, Z.; Liu, Y.-L. A geometrically robust multi-bit video watermarking algorithm based on 2-D DFT. Multimed. Tools Appl. 2021, 80, 13491–13511. [Google Scholar] [CrossRef]
  15. Farri, E.; Ayubi, P. A blind and robust video watermarking based on IWT and new 3D generalized chaotic sine map. Nonlinear Dyn. 2018, 93, 1875–1897. [Google Scholar] [CrossRef]
  16. Cao, Z.; Wang, L. A secure video watermarking technique based on hyperchaotic Lorentz system. Multimed. Tools Appl. 2019, 78, 26089–26109. [Google Scholar] [CrossRef]
  17. Shanmugam, M.; Chokkalingam, A. Performance analysis of 2 level DWT-SVD based non blind and blind video watermarking using range conversion method. Microsyst. Technol. 2018, 24, 4757–4765. [Google Scholar] [CrossRef]
  18. Meenakshi, K.; Swaraja, K.; Kora, P.; Karuna, G. A Robust Blind Oblivious Video Watermarking Scheme Using Undecimated Discrete Wavelet Transform. In Intelligent System Design; Springer: Berlin/Heidelberg, Germany, 2020; pp. 169–177. [Google Scholar] [CrossRef]
  19. Loan, N.A.; Hurrah, N.N.; Parah, S.A.; Lee, J.W.; Sheikh, J.A.; Bhat, G.M. Secure and Robust Digital Image Watermarking Using Coefficient Differencing and Chaotic Encryption. IEEE Access 2018, 6, 19876–19897. [Google Scholar] [CrossRef]
  20. Agilandeeswari, L.; Kaliyaperumal, G. A bi-directional associative memory based multiple image watermarking on cover video. Multimed. Tools Appl. 2015, 75, 7211–7256. [Google Scholar] [CrossRef]
  21. Akhlaghian, F.; Bahrami, Z. A new robust video watermarking algorithm against cropping and rotating attacks. In Proceedings of the 2015 12th International Iranian Society of Cryptology Conference on Information Security and Cryptology (ISCISC), Rasht, Iran, 8–10 September 2015; pp. 122–127. [Google Scholar] [CrossRef]
  22. Yassin, N.I.; Salem, N.M.; El Adawy, M.I. Block based video watermarking scheme using wavelet transform and principle component analysis. IJCSI Int. J. Comput. Sci. 2012, 9, 296. [Google Scholar]
  23. Bhardwaj, A.; Verma, V.S.; Jha, R.K. Robust video watermarking using significant frame selection based on coefficient difference of lifting wavelet transform. Multimed. Tools Appl. 2017, 77, 19659–19678. [Google Scholar] [CrossRef]
Figure 1. Dual-tree discrete CWT decomposition.
Figure 2. Synthesis for dual-tree CWT.
Figure 3. Complex filter responses.
Figure 4. DT-CWT watermark embedding procedure in video frames.
Figure 5. The two different watermarks embedded in the scenes of the video clip.
Figure 6. (a) Akiyo. (b) Foreman. (c,d) Mobile. (c,d) Snapshots of the two scenes of the video clip.
Figure 7. DT-CWT-based SVD watermark embedding procedure using the LSB method.
Figure 8. CWT-based SVD embedding method using additive method.
Figure 9. (a) The watermarked frame using the proposed method. (b) Extracted watermark from frame 10 of the first video scene in the proposed method. (c) Extracted watermark from frame 10 of the first video scene in the DWT-based SVD method. (d) Correlation values for the extracted watermarks for frames 1 to 15 in the first video scene without attacks.
Figure 10. (a) The watermarked frame using the proposed method. (b) Extracted watermark from frame 10 of the second video in the proposed method. (c) Extracted watermark from frame 10 of the second video scene in the DWT-based SVD method. (d) Correlation values due to detected watermarks from each frame in the second video scene without attack.
Figure 11. (a) The watermarked frame under Gaussian noise attack using the proposed method. (b) Extracted watermark from frame 10 of the first video scene in the proposed method. (c) Extracted watermark from frame 10 of the first video scene in the DWT-based SVD method. (d) Correlation values for the extracted watermarks for frames 1 to 15 in the first video scene under Gaussian attack.
Figure 12. (a) The watermarked frame under Gaussian noise attack using the proposed method. (b) Extracted watermark from frame 10 of the second video in the proposed method. (c) Extracted watermark from frame 10 of the second video scene in the DWT-based SVD method. (d) Correlation values due to detected watermarks from each frame in the second video scene under Gaussian attack.
Figure 13. (a) The watermarked frame under compression attack using the proposed method. (b) Extracted watermark from frame 10 of the first video scene in the proposed method. (c) Extracted watermark from frame 10 of the first video scene for the DWT-based SVD method. (d) Correlation values for the extracted watermarks for frames 1 to 15 in the first video scene under compression attack.
Figure 14. (a) The watermarked frame under compression attack using the proposed method. (b) Extracted watermark from frame 10 of the second video in the proposed method. (c) Extracted watermark from frame 10 of the second video scene in the DWT-based SVD method. (d) Correlation values for the extracted watermarks from each frame in the second video scene under compression attack.
Figure 15. (a) The watermarked frame under the histeq attack using the proposed method. (b) Extracted watermark from frame 10 of the first video scene in the proposed method. (c) Extracted watermark from frame 10 of the first video scene for the DWT-based SVD method. (d) Correlation values for the extracted watermarks for frames 1 to 15 in the first video scene under histeq attacks.
Figure 16. (a) The watermarked frame under the histeq attack using the proposed method. (b) Extracted watermark from frame 10 of the second video in the proposed method. (c) Extracted watermark from frame 10 of the second video scene in the DWT-based SVD method. (d) Correlation values for the extracted watermarks from each frame in the second video scene under histeq attack.
Figure 17. (a) The watermarked frame using the proposed method. (b) Extracted watermark from frame 10 of the first video scene in the proposed method. (c) Extracted watermark from frame 10 of the first video scene for the DWT-based SVD method. (d) Correlation values for the extracted watermarks from each six frames in the first video scene without attacks.
Figure 18. (a) The watermarked frame using the proposed method. (b) Extracted watermark from frame 10 of the second video in the proposed method. (c) Extracted watermark from frame 10 of the second video scene for the DWT-based SVD method. (d) Correlation values for the extracted watermarks from each frame in the second video scene without attack.
Figure 19. (a) The watermarked frame under Gaussian noise attack using the proposed method. (b) Extracted watermark from frame 10 of the first video scene in the proposed method. (c) Extracted watermark from frame 10 of the first video scene for the DWT-based SVD method. (d) Correlation values for the extracted watermarks from each six frames in the first video scene under Gaussian attack with variance 0.01.
Figure 20. (a) The watermarked frame under Gaussian noise attack using the proposed method. (b) Extracted watermark from frame 10 of the second video in the proposed method. (c) Extracted watermark from frame 10 of the second video scene for the DWT-based SVD method. (d) Correlation values for the extracted watermarks from each frame in the second video scene under Gaussian attack with variance 0.01.
Figure 21. (a) The watermarked frame under the histeq attack using the proposed method. (b) Extracted watermark from frame 10 of the first video scene in the proposed method. (c) Extracted watermark from frame 10 of the first video scene for the DWT-based SVD method. (d) Correlation values for the extracted watermarks from each six frames in the first video scene under histeq.
Figure 22. (a) The watermarked frame under the histeq attack using the proposed method. (b) Extracted watermark from frame 15 in the proposed method. (c) Extracted watermark from frame 15 of the DWT-based SVD method. (d) Correlation values for the extracted watermarks from each frame in the second video scene under the histeq attack.
Table 1. The effect of the depth of the watermark (Mobile video).

k1 (depth)                           | 0.01      | 0.05    | 0.1      | 0.5      | 1
PSNR of watermarked frames (average) | 115.13 dB | 99.7 dB | 91.48 dB | 75.58 dB | 68.34 dB
Correlation of watermark             | 0.99      | 0.99    | 0.99     | 0.99     | 0.99
SSIM                                 | 0.999     | 0.9989  | 0.956    | 0.91     | 0.6765
Table 2. PSNR, SSIM, and NC of watermarked Akiyo.mp4.

Frame     | PSNR (dB) | SSIM  | NC
Frame 2   | 74.6      | 0.998 | 1
Frame 5   | 74.65     | 0.99  | 1
Frame 100 | 74.67     | 0.99  | 1
Frame 200 | 74.66     | 0.99  | 1
Frame 250 | 74.56     | 0.99  | 1
Frame 300 | 74.7      | 0.99  | 1
Table 3. PSNR, SSIM, and NC of watermarked "Foreman.mp4".

Frame     | PSNR (dB) | SSIM  | NC
Frame 2   | 74.7      | 0.998 | 1
Frame 50  | 74.9      | 0.99  | 1
Frame 100 | 74.6      | 0.99  | 1
Frame 150 | 74.8      | 0.99  | 1
Frame 200 | 74.8      | 0.99  | 1
Frame 250 | 74.7      | 0.99  | 1
Table 4. Extracted watermark NC values under various attacks, averaged over the first/second 15 watermarked frames of Mobile.mp4 (30 frames), Foreman.mp4 (301 frames), and Akiyo.mp4 (300 frames).

Type of attack          | Mobile (1st/2nd) | Foreman (1st/2nd) | Akiyo (1st/2nd)
Without attack          | 1 / 1            | 1 / 1             | 1 / 1
Median filter           | 0.938 / 0.975    | 0.945 / 0.997     | 0.976 / 0.953
Histogram equalization  | 0.924 / 0.932    | 0.914 / 0.923     | 0.928 / 0.922
Blurring                | 0.932 / 0.943    | 0.933 / 0.951     | 0.931 / 0.961
Sharpening              | 0.965 / 0.964    | 0.974 / 0.972     | 0.967 / 0.987
Gamma correction        | 0.983 / 0.985    | 0.987 / 0.988     | 0.979 / 0.979
Rotation 30             | 0.996 / 0.995    | 0.995 / 0.996     | 0.997 / 0.996
Cropping                | 0.996 / 0.997    | 0.997 / 0.998     | 0.996 / 0.996
Additive Gaussian noise | 0.956 / 0.967    | 0.978 / 0.988     | 0.976 / 0.977
Compression 50%         | 0.933 / 0.932    | 0.942 / 0.934     | 0.955 / 0.945
Table 5. Capacity comparison with existing methods.

Watermark type   | Paper [19] | Paper [20] | Paper [16]  | Proposed
Grayscale image  | 4096 bits  | 8192 bits  | 40,960 bits | 101,376 bits
Table 6. Comparison between the proposed watermarking method and the traditional methods without attacks.

Method                    | PSNR (dB) | Run Time (s)
DWT                       | 65.96     | 15.7
SVD                       | 33.37     | 18.8
DWT-based SVD             | 65.96     | 25.2
Proposed DT-CWT-based SVD | 74.6      | 26.6
Table 7. Comparison between the proposed watermarking method and the traditional methods without attacks using the LSB method for the mobile video.

Method                                 | PSNR (dB) | Run Time (s)
DWT                                    | 35.90     | 16.3
DWT-based SVD watermarking             | 45.77     | 28.916
Proposed DT-CWT-based SVD watermarking | 60.65     | 29.69
Table 8. NC comparison against various attacks of the proposed method with existing methods.

Type of attack         | Paper [21] | Paper [16] | Paper [22] | Paper [23] | Proposed Method
No attack              | 1          | 1          | 1          | 1          | 1
Gaussian noise         | 0.965      | 0.707      | 0.880      | 0.81       | 0.956
Rotation               | 0.998      | 0.787      | -          | -          | 0.996
Compression            | -          | -          | -          | -          | 0.945
Histogram equalization | 0.981      | 0.694      | 0.990      | 0.886      | 0.924
Median filter          | 0.996      | 0.839      | -          | 1          | 0.953
Blurring               | -          | -          | -          | -          | 0.961
Gamma correction       | -          | -          | 0.935      | 1          | 0.979
Sharpening             | -          | -          | -          | 1          | 0.967
Cropping               | 0.991      | -          | 0.070      | 0.900      | 0.996
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
