Article

Template Watermarking Algorithm for Remote Sensing Images Based on Semantic Segmentation and Ellipse-Fitting

1 Key Laboratory of Virtual Geographic Environment, Nanjing Normal University, Ministry of Education, Nanjing 210023, China
2 State Key Laboratory Cultivation Base of Geographical Environment Evolution (Jiangsu Province), Nanjing 210023, China
3 Jiangsu Center for Collaborative Innovation in Geographical Information Resource Development and Application, Nanjing 225127, China
4 Division of Geographic Information Management, Ministry of Natural Resources, Beijing 100812, China
5 School of Geoscience and Technology, Zhengzhou University, Zhengzhou 450001, China
6 Hunan Engineering Research Center of Geographic Information Security and Application, Changsha 410007, China
* Author to whom correspondence should be addressed.
Remote Sens. 2025, 17(3), 502; https://doi.org/10.3390/rs17030502
Submission received: 20 November 2024 / Revised: 11 January 2025 / Accepted: 26 January 2025 / Published: 31 January 2025

Abstract:
This study presents a ring template watermarking method utilizing semantic segmentation and elliptical fitting to address the inadequate resilience of digital watermarking techniques for remote sensing images against geometric attacks and affine transformations. The approach employs a convolutional neural network to determine the coverage position of the annular template watermark automatically. Subsequently, it applies the least squares approach to align with the relevant elliptic curve of the annular watermark, facilitating the restoration of the watermark template post-deformation due to an attack. Ultimately, it acquires the watermark information by analyzing the binarized image according to the coordinates. The experimental results indicate that, despite various geometric and affine modification attacks, the NC mean value of watermark extraction exceeds 0.83, and the PSNR value surpasses 35, thereby ensuring substantial invisibility and enhanced robustness. In addition, the methods presented in this paper provide useful references for imaging data in other fields.

1. Introduction

Remote sensing imaging is a vital strategic asset utilized in military operations, environmental conservation, land management, and other essential fields [1,2,3]. In recent years, advancements in Internet information technology have significantly enhanced the utilization and transmission of remote sensing imagery, yet they have also exacerbated security concerns such as data theft, leakage, and unlawful publication [4]. To completely eradicate security concerns associated with remote-sensing image data and to guarantee copyright protection throughout data distribution and utilization, there is an immediate necessity for reliable technical methods to safeguard the lawful and compliant use of remote-sensing image data [5]. Digital watermarking technology, a conventional technique of copyright protection, provides robust technological support for safeguarding remote-sensing image data by creating a tight link between digitized data and copyright information [6,7].
Digital watermarking methods for remote sensing imagery currently fall into two categories: spatial domain watermarking and transform domain watermarking [8]. Spatial domain watermarking methods [9,10] embed watermark information directly into pixel values, brightness levels, and other spatial features. These algorithms exhibit notable resilience to general translation, lossy compression, and similar attacks. However, they are sensitive to geometric processing of the image and rely too heavily on the embedding carrier: an attack on the carrier directly affects the embedded information itself, so these methods cannot withstand scaling, interpolation, noise, and other attacks applied to the image data during transmission.
Algorithms based on the transform domain apply an invertible mathematical transformation to the carrier data before embedding, replace the transform-domain coefficients according to the embedding rules, and then invert the transform to obtain the watermarked carrier data [11,12]. Invertible transforms include orthogonal transforms such as the DWT [13], DCT [14], and DFT [15]. Transform-domain algorithms that embed watermarks in low-frequency regions have little effect on the original data as a whole, and modification operations on the original data have little effect on the low-frequency coefficients in the transform domain, which significantly improves the watermark's resistance to attacks such as compression and noise while ensuring very high invisibility. Current research on transform-domain watermarking algorithms mostly addresses resistance to conventional attacks (such as noise and filtering) and minor geometric attacks, but remains inadequate against strong geometric transformation attacks.
Template watermarking algorithms are a significant category of watermarking techniques based on the transform domain. This category of methods uses standard watermarking patterns, allowing for the extraction of the watermark template from the compromised image to recover the watermark information post-attack [16]. In comparison to conventional transform domain algorithms, the template watermarking algorithm enhances resistance to geometric transformation attacks, resulting in its broader application. The effectiveness of template watermarking methods is contingent upon the precision of watermark template detection and the efficacy of template matching. Chen Hui and colleagues [17] propose a template-matching watermarking algorithm based on LS-SVM, which effectively withstands conventional geometric attacks such as noise, filtering, and cropping while enhancing detection accuracy; however, the algorithm may fail with a certain probability under extreme geometric attacks. Pang Xinyan and associates employ a ring-shaped watermarking template and implement a cyclic watermarking embedding method to bolster resistance against geometric attacks [18]. Similarly, Qifei Zhou and his team utilize circular template watermarking and a multi-scale LCM method for watermark detection and extraction, further enhancing the accuracy and success rate of watermark template detection [19]. When the carrier data undergoes an affine transformation, the watermark template changes from a typical circle to an ellipse, resulting in a dramatic decline in the success rate of template matching. This algorithm shows little resistance to affine transformations, which is the fundamental issue that limits its practical applicability.
In conclusion, the spatial domain watermarking technique is easy to build; however, the watermarking information is excessively reliant on the carrier and is unduly sensitive to image processing, failing to satisfy robustness requirements in actual applications. The transform domain-based watermarking technique enhances resistance to conventional attacks, including compression and noise, to some degree, and can withstand modest geometric attacks; nevertheless, it fails to resist geometric attacks of arbitrary magnitude. The transform domain-based template watermarking algorithm enhances the watermarking’s resistance to arbitrary geometric attacks; however, accurately identifying the watermark template and restoring the watermark information in the presence of affine transformations remains a significant challenge due to template deformation.
This work provides a watermarking approach for remote sensing images that uses semantic segmentation and elliptical fitting to resolve the aforementioned issues. We employ the feature learning capabilities of deep learning technology, implementing semantic segmentation to precisely and autonomously identify the coverage location of the ring template watermark. Additionally, we apply the least-squares method to perform ellipse fitting on the detected coordinate points, thereby utilizing mathematical relations to restore the deformed ring template watermark and extract the embedded watermark information. This paper is structured as follows: The initial section presents the proposed fundamental algorithm concept and requisite background knowledge; the subsequent section details the algorithm and its implementation; the third section elucidates the experimental results and their analysis; the fourth section examines the impact of the algorithm’s parameter configurations on key metrics; and the final section concludes the paper.

2. Materials and Methods

2.1. Basic Idea

The essence of the ring template watermarking algorithm presented in this paper rests on the precision of watermark template detection and template matching. Irrespective of any deformation resulting from an attack on the carrier image, both the watermark detection algorithm and the template matching algorithm must be able to identify and restore the template in the carrier image, thereby extracting information that is closely associated with the original watermark data. The fundamental concept for addressing this issue is to leverage the feature learning capabilities of deep learning to isolate the watermark information within the image spectrogram from other data, enabling the watermark extraction algorithm to identify the coverage location of the watermark and return the corresponding coordinates. Because the returned coordinates form an array of scattered points, an ellipse-fitting algorithm must be applied to curve-fit them, thereby acquiring the parameters of the ellipse, including the major and minor axes, rotation angle, and focal point. Mathematical methods are then utilized to restore the deformed curve and ascertain the actual coordinate points required for detection.
This paper employs semantic segmentation technology to isolate watermark information from other data, thereby identifying the watermark coverage area. It utilizes the least-squares method to fit the detected coordinate scatter, enabling the restoration of the watermark template based on the fitting results. Ultimately, it extracts watermark information from the original image spectrogram using the restored coordinate points. The algorithm’s framework is illustrated in Figure 1, and the primary algorithm module is principally separated into two components:
(1) Watermark design and embedding component. The watermarking algorithm, which combines template and message watermarking, converts the watermark information into a diminutive target within the DFT domain, embedding the watermark into the magnitude coefficients of the DFT domain at a predetermined radius to create a standard watermark circle.
(2) Watermark extraction component. Initially, the algorithm employs U-Net to semantically segment the spectrogram containing watermark data, isolating the watermark information bit from extraneous data and providing the coordinates of the watermark information bit. Subsequently, an ellipse is fitted to the watermark information bit utilizing the least-squares method to derive the general equation of the ellipse along with parameters such as the major and minor axes, the focal point, and the angle of rotation, thereby facilitating the reduction of the template watermark to ascertain the coordinates of the watermark information bit. The original image is denoised and binarized, and the watermark information is recovered based on the coordinates.

2.2. U-Net Semantic Segmentation

The U-Net technique is a convolutional neural network architecture designed for image segmentation [20]. The U-Net algorithm features a U-shaped network architecture, as illustrated in Figure 2.
The U-Net architecture comprises two components: the encoder and the decoder [21]. The encoder incrementally harvests image features and diminishes resolution via a sequence of convolutional and pooling layers, whereas the decoder progressively reinstates image resolution by up-sampling and convolutional layers, ultimately producing the segmentation results. The two components are linked by skip connections, allowing the decoder to leverage feature information from the encoder, thereby enhancing segmentation accuracy and detail preservation, as well as precisely identifying target points for detection. Additionally, U-Net demonstrates strong performance on small sample datasets, offering advantages of high efficiency and lightweight design. Consequently, this research employs U-Net for the binary segmentation of images.
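To make the encoder–decoder structure with skip connections concrete, the following is a minimal two-level U-Net sketch in PyTorch. It is illustrative only: the paper uses the standard U-Net of [20], and the depth, channel widths, and sigmoid head here are our assumptions.

```python
import torch
import torch.nn as nn

class MiniUNet(nn.Module):
    """Minimal two-level U-Net sketch for binary (watermark/background)
    segmentation. Hypothetical configuration, not the paper's exact network."""
    def __init__(self, in_ch=1, base=16):
        super().__init__()
        def block(ci, co):
            return nn.Sequential(
                nn.Conv2d(ci, co, 3, padding=1), nn.ReLU(inplace=True),
                nn.Conv2d(co, co, 3, padding=1), nn.ReLU(inplace=True))
        self.enc1 = block(in_ch, base)           # encoder level 1
        self.enc2 = block(base, base * 2)        # encoder level 2 (lower resolution)
        self.pool = nn.MaxPool2d(2)              # downsampling
        self.up = nn.ConvTranspose2d(base * 2, base, 2, stride=2)  # upsampling
        self.dec1 = block(base * 2, base)        # decoder, fed by skip connection
        self.head = nn.Conv2d(base, 1, 1)        # 1-channel mask logits

    def forward(self, x):
        e1 = self.enc1(x)
        e2 = self.enc2(self.pool(e1))
        # skip connection: concatenate encoder features with upsampled decoder features
        d1 = self.dec1(torch.cat([self.up(e2), e1], dim=1))
        return torch.sigmoid(self.head(d1))      # per-pixel watermark probability
```

The skip connection (the `torch.cat`) is what lets the decoder reuse the encoder's spatial detail, which is the property the text credits for precise localization of the detection points.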

2.3. Ellipse-Fitting Algorithm: Least-Squares Approach

Ellipse fitting amounts to solving the general elliptic equation. Equation (1) shows that the five parameters can be determined from the coordinates of five points. In practice, we obtain more than five coordinate points, resulting in an over-determined system that lacks a unique solution. Consequently, it is essential to determine its least-squares solution [22].
$$x_i^2 + A x_i y_i + B y_i^2 + C x_i + D y_i + E = 0$$
The concept of the algorithm is as follows:
Step 1: Let the detected coordinates of $N$ points on the known ellipse be $(x_i, y_i)$ (with error), where $i = 1, 2, 3, \ldots, N$. Substituting each coordinate point $(x_i, y_i)$ into Equation (2) yields the equation error $\varepsilon_i$, represented by the following expression:

$$x_i^2 + A x_i y_i + B y_i^2 + C x_i + D y_i + E = \varepsilon_i, \qquad i = 1, 2, \ldots, N$$
Step 2: Define μ as the sum of the squares of the errors in the equations following the substitution of each detection point, and compute its specific value. The equation is as follows:
$$\mu = \sum_{i=1}^{N} \varepsilon_i^2 = \sum_{i=1}^{N} \left( x_i^2 + A x_i y_i + B y_i^2 + C x_i + D y_i + E \right)^2$$
Step 3: The approach aims to determine the least value of the squared error μ of the equation. Based on the extreme value concept, Equation (4) can be derived:
$$\frac{\partial \mu}{\partial A} = 0, \quad \frac{\partial \mu}{\partial B} = 0, \quad \frac{\partial \mu}{\partial C} = 0, \quad \frac{\partial \mu}{\partial D} = 0, \quad \frac{\partial \mu}{\partial E} = 0$$
Step 4: By expanding the equation derived from Step 3, the subsequent equation can be formulated, which can be resolved to ascertain the values of the five parameters A, B, C, D, and E.
$$
\begin{bmatrix}
\sum_{i=1}^{N} x_i^2 y_i^2 & \sum_{i=1}^{N} x_i y_i^3 & \sum_{i=1}^{N} x_i^2 y_i & \sum_{i=1}^{N} x_i y_i^2 & \sum_{i=1}^{N} x_i y_i \\
\sum_{i=1}^{N} x_i y_i^3 & \sum_{i=1}^{N} y_i^4 & \sum_{i=1}^{N} x_i y_i^2 & \sum_{i=1}^{N} y_i^3 & \sum_{i=1}^{N} y_i^2 \\
\sum_{i=1}^{N} x_i^2 y_i & \sum_{i=1}^{N} x_i y_i^2 & \sum_{i=1}^{N} x_i^2 & \sum_{i=1}^{N} x_i y_i & \sum_{i=1}^{N} x_i \\
\sum_{i=1}^{N} x_i y_i^2 & \sum_{i=1}^{N} y_i^3 & \sum_{i=1}^{N} x_i y_i & \sum_{i=1}^{N} y_i^2 & \sum_{i=1}^{N} y_i \\
\sum_{i=1}^{N} x_i y_i & \sum_{i=1}^{N} y_i^2 & \sum_{i=1}^{N} x_i & \sum_{i=1}^{N} y_i & \sum_{i=1}^{N} 1
\end{bmatrix}
\begin{bmatrix} A \\ B \\ C \\ D \\ E \end{bmatrix}
= -\begin{bmatrix}
\sum_{i=1}^{N} x_i^3 y_i \\ \sum_{i=1}^{N} x_i^2 y_i^2 \\ \sum_{i=1}^{N} x_i^3 \\ \sum_{i=1}^{N} x_i^2 y_i \\ \sum_{i=1}^{N} x_i^2
\end{bmatrix}
$$
Step 5: Utilizing the geometric characteristics of an ellipse, determine the lengths of the major axis $a$ and the minor axis $b$, the center coordinates $(x_0, y_0)$, and the angle of rotation $\theta$:

$$x_0 = \frac{2BC - AD}{A^2 - 4B}, \qquad y_0 = \frac{2D - AC}{A^2 - 4B}$$

$$a = \sqrt{\frac{2\left(ACD - BC^2 - D^2 + 4BE - A^2E\right)}{(A^2 - 4B)\left(B + 1 - \sqrt{A^2 + (1-B)^2}\right)}}, \qquad
b = \sqrt{\frac{2\left(ACD - BC^2 - D^2 + 4BE - A^2E\right)}{(A^2 - 4B)\left(B + 1 + \sqrt{A^2 + (1-B)^2}\right)}}$$

$$\theta = \tan^{-1}\sqrt{\frac{a^2 - b^2 B}{a^2 B - b^2}}$$
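The normal equations of Steps 1–5 can equivalently be solved with a standard linear least-squares routine. Below is a minimal NumPy sketch (the function name and the use of `np.linalg.lstsq` are illustrative choices, not the authors' implementation); it recovers the conic coefficients and the ellipse center from boundary points.

```python
import numpy as np

def fit_ellipse(xs, ys):
    """Least-squares fit of x^2 + A*xy + B*y^2 + C*x + D*y + E = 0.

    Hypothetical helper illustrating Steps 1-5; returns the conic
    coefficients (A, B, C, D, E) and the derived center (x0, y0).
    """
    xs = np.asarray(xs, dtype=float)
    ys = np.asarray(ys, dtype=float)
    # Design matrix for the unknowns [A, B, C, D, E]; the x^2 term has a
    # fixed coefficient of 1 and therefore moves to the right-hand side.
    M = np.column_stack([xs * ys, ys**2, xs, ys, np.ones_like(xs)])
    rhs = -(xs**2)
    (A, B, C, D, E), *_ = np.linalg.lstsq(M, rhs, rcond=None)
    # Ellipse center from the conic coefficients (Step 5).
    x0 = (2 * B * C - A * D) / (A**2 - 4 * B)
    y0 = (2 * D - A * C) / (A**2 - 4 * B)
    return (A, B, C, D, E), (x0, y0)
```

Solving the over-determined system with `lstsq` is numerically equivalent to forming and solving the normal equations above, but avoids building the 5 × 5 moment matrix explicitly.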

3. Algorithm and Implementation

This study presents a template watermarking system that integrates both template and message watermarking. The watermarking template features a predefined design for attack mitigation while concurrently storing binary copyright information. The algorithm’s design and implementation are as follows:
Step 1: Encode the copyright information as a binary sequence of a defined length (recommended 40 to 80 bits) and arrange the binary sequence sequentially into a circular configuration at consistent angular intervals.
Step 2: Identify the embedding location of the watermark circle’s center (in this study, we utilize the center of the amplitude spectrum), ascertain the radius for the watermark circle embedding, and systematically substitute the amplitude values in the spectrum map to facilitate the embedding of the watermark information.
Step 3: The spectrogram of the target image is denoised and provided as an input to the semantic segmentation network, with the result subsequently binarized.
Step 4: Utilize the output from the previous step to sequentially deposit the coordinates of the non-zero points from the binary image into an array. Employ the least-squares method to fit the scatter points within the array, derive the general equation of the ellipse, and calculate the ellipse’s major and minor axes, focal lengths, angle of rotation, and additional parameters.
Step 5: Simplify the elliptic curve based on the resolved parameters to obtain the precise coordinate spots for detection.
Step 6: The spectrogram is denoised and binarized, followed by the sequential extraction of binary values based on the coordinate points to retrieve the watermark information.

3.1. Template Watermark Design

The watermark information comprises a binary sequence of “0”s and “1”s, referred to as the watermark, with its length denoted as $len$. The watermark is expressed as $W = \{w_i \mid w_i \in \{0,1\},\; i = 1, 2, 3, \ldots, len\}$. The ring template watermark is formulated utilizing the magnitude coefficients within the DFT domain [23]. Figure 3 illustrates the schematic representation of the template watermark, wherein the black sections denote the watermark information bit “0” and the white highlights signify the watermark information bit “1”.
The watermark information is uniformly dispersed around the ring. The angular step $slen$ between two watermark bits is given as follows:

$$slen = \frac{360}{2 \, len}$$

3.2. Watermark Embedding Algorithm

Designate the horizontal length of the image as $p$ and the vertical length as $q$. The center of the image serves as the center of the circular template watermark, represented as $(c_p, c_q)$ and computed with the following formulas:

$$c_p = \begin{cases} \dfrac{p}{2} + 1, & \bmod(p, 2) = 0 \\[4pt] \dfrac{p+1}{2}, & \bmod(p, 2) \neq 0 \end{cases}$$

$$c_q = \begin{cases} \dfrac{q}{2} + 1, & \bmod(q, 2) = 0 \\[4pt] \dfrac{q+1}{2}, & \bmod(q, 2) \neq 0 \end{cases}$$
where mod denotes the modulo operation. The high-frequency center point $(c_p, c_q)$ of the DFT domain serves as the circle’s center, from which a circular template watermark with radii in $[R - S, R + S]$ is created. The value $S$ regulates the template’s width, with $S = 2$ being recommended in this article.
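The center computation above can be sketched in a few lines (a hypothetical helper, assuming the 1-based pixel indexing implied by the equations):

```python
def template_center(p, q):
    """Center (cp, cq) of the circular template watermark, following the
    even/odd case split in the equations above (1-based indexing assumed)."""
    cp = p // 2 + 1 if p % 2 == 0 else (p + 1) // 2
    cq = q // 2 + 1 if q % 2 == 0 else (q + 1) // 2
    return cp, cq
```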
The watermark information is embedded by substituting the magnitude values in the spectrogram at the designated radius $R$. The watermark information is embedded at coordinates $(x, y)$ with the step $slen$ of Section 3.1 as the embedding interval, and the position is determined as follows:

$$\begin{cases} x = c_p + R \cos d(\theta) \\ y = c_q + R \sin d(\theta) \end{cases}, \qquad \theta \in [0, 180 - slen]$$

where $\cos d(\theta)$ and $\sin d(\theta)$ convert the angle $\theta$ from degrees to radians before computing the cosine and sine, respectively. The magnitude of angle $\theta$ is jointly determined by the radius $R$ and the step size $slen$. Owing to symmetry, the symmetric position $(x', y')$ for $\theta \in [180, 360 - slen]$ is determined as follows:

$$\begin{cases} x' = 2c_p - x \\ y' = 2c_q - y \end{cases}, \qquad \theta \in [180, 360 - slen]$$
The formula for watermark replacement is as follows:
$$M(x, y) = \begin{cases} A, & w_i = 1 \\ 0, & w_i = 0 \end{cases}$$
Figure 4 illustrates the spectral image of the ring template post-watermark embedding, where the highlighted circle denotes the watermark information bit “1”, and the non-highlighted portion of the circle signifies the watermark information bit “0”.
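The embedding steps of this section can be sketched as follows with NumPy's FFT routines. This is a simplified illustration, not the authors' code: the replacement amplitude `A`, the grayscale float image, the rounding of coordinates, and the handling of the spectrum via `fftshift` are all assumptions.

```python
import numpy as np

def embed_ring_watermark(img, bits, R=120, A=120.0):
    """Sketch of the ring-template embedding of Section 3.2.

    Replaces DFT magnitudes on a circle of radius R, one position per
    watermark bit, plus the centrally symmetric position. Illustrative only.
    """
    p, q = img.shape
    cp, cq = p // 2, q // 2                  # center of the shifted spectrum
    F = np.fft.fftshift(np.fft.fft2(img))
    mag, phase = np.abs(F), np.angle(F)
    slen = 360.0 / (2 * len(bits))           # angular step between bits
    for i, w in enumerate(bits):             # theta covers [0, 180 - slen]
        theta = np.deg2rad(i * slen)
        x = int(round(cp + R * np.cos(theta)))
        y = int(round(cq + R * np.sin(theta)))
        val = A if w == 1 else 0.0
        # also set the centrally symmetric point (x', y') = (2cp - x, 2cq - y)
        for (u, v) in [(x, y), (2 * cp - x, 2 * cq - y)]:
            mag[u, v] = val
    Fw = mag * np.exp(1j * phase)
    out = np.real(np.fft.ifft2(np.fft.ifftshift(Fw)))
    return out, mag
```

Because the bit positions lie on the half circle $\theta \in [0, 180 - slen]$ and are mirrored through the center, each bit occupies two symmetric points of the magnitude spectrum, matching the symmetry argument in the text.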

3.3. Watermark Extraction Algorithm

3.3.1. Semantic Segmentation

This paper delineates the training dataset into two components: the original image and its associated mask. The original image is segmented into two components: background and watermark, as illustrated in Figure 5, where the black section represents the background and the red section denotes the watermark.
This paper utilizes 600 spectrum images and their corresponding masks, both unaltered and subjected to various attacks (e.g., projection transformation, affine transformation, rotation transformation, etc.), as training data, with the training and test sets divided in a 5:1 ratio.
The model employs the sigmoid activation function, with a training duration of 200 epochs, and the loss curve is illustrated in Figure 6.
The model test sample is depicted in Figure 7a, and the resulting image is binarized to produce Figure 7b. The coordinates of all non-zero points post-binarization are retained in the array template, and elliptical fitting is executed to derive the elliptic equation of the watermarked coverage region.

3.3.2. Ellipse Fitting

Utilizing the coordinate array template defined in Section 3.3.1, extract several coordinate points and allocate the x coordinates to the sample sequence x_sample and the y coordinates to the sample sequence y_sample. x_sample and y_sample serve as the input coordinates for ellipse fitting, employing the least-squares approach to derive the elliptic curve P. Calculate the rotation angle θ of the ellipse, the focal length 2 c , the semi-major axis a , and the semi-minor axis b in sequence to produce the fitted elliptical curve. This is illustrated in Figure 8.

3.3.3. Ellipse Repositioning

The localization algorithm is outlined as follows:
  • The major axis of the ellipse is denoted as $a$, the minor axis as $b$, the distance between the two foci as $2c$, and the angle of inclination of the ellipse as $\theta$, where $\theta \in [0, \pi)$, as derived in Section 3.3.2.
  • Rotate image P clockwise by $\theta$. Designate the rotated image as P′. With the center of the ellipse positioned at the center of the image, the major axis $a$ aligns with the x-axis in the horizontal orientation, while the minor axis $b$ aligns with the y-axis in the vertical orientation.
The relocation of the watermark elliptic curve is finalized, as illustrated in the subsequent image (Figure 9).

3.3.4. Determination of Watermark Information Coordinates

1.
Methods for establishing standard circular coordinates:
A circle is a specific type of ellipse characterized by equal axis lengths, and the length of an arc of a circle corresponding to a central angle $\theta$ is given by $R \cdot \theta$. The angular step between two watermark points on the circle is:

$$slen = \frac{360}{2 \, len}$$
The conventional equation of a circular curve is $x^2 + y^2 = R^2$, with the corresponding parametric equations:

$$\begin{cases} x = R \cos\theta \\ y = R \sin\theta \end{cases}$$
The position of the points to be detected, $M_i(x_i, y_i)$, can be expressed in terms of the step size $slen$:

$$\begin{cases} x_i = R \cos(i \cdot slen) \\ y_i = R \sin(i \cdot slen) \end{cases}, \qquad i = 0, 1, 2, \ldots, len$$
2.
The elliptic coordinates are defined as follows:
The standard equation of an ellipse:
$$\frac{x^2}{a^2} + \frac{y^2}{b^2} = 1$$
It can be regarded as the standard circle $x^2 + y^2 = 1$ with all horizontal coordinate components scaled by a factor of $a$ and all vertical coordinate components scaled by a factor of $b$. If the circular template watermark ring is compressed to a radius of 1, the coordinates of the point to be detected are $M(x_i, y_i)$, where the values of $x_i$ and $y_i$ are:

$$\begin{cases} x_i = \cos(i \cdot slen) \\ y_i = \sin(i \cdot slen) \end{cases}, \qquad i = 0, 1, 2, \ldots, len$$
The coordinates of the points to be identified on the adjusted elliptical template watermark curve can then be derived as $K_i(x_i', y_i')$:

$$\begin{cases} x_i' = a \cos(i \cdot slen) \\ y_i' = b \sin(i \cdot slen) \end{cases}, \qquad i = 0, 1, 2, \ldots, len$$
All points $K_i(x_i', y_i')$ are rotated about the origin $O(0,0)$ counterclockwise by the declination angle $\theta$ to obtain the coordinates $F(x_i'', y_i'')$ in the original watermarked image, where the values of $x_i''$ and $y_i''$ are specified in the following equation:

$$\begin{cases} x_i'' = x_i' \cos\theta + y_i' \sin\theta \\ y_i'' = -x_i' \sin\theta + y_i' \cos\theta \end{cases}, \qquad i = 0, 1, 2, \ldots, len, \quad \theta \in [0, \pi)$$
That is:
$$\begin{cases} x_i'' = a \cos(i \cdot slen) \cos\theta + b \sin(i \cdot slen) \sin\theta \\ y_i'' = -a \cos(i \cdot slen) \sin\theta + b \sin(i \cdot slen) \cos\theta \end{cases}, \qquad i = 0, 1, 2, \ldots, len, \quad \theta \in [0, \pi)$$
3.
Watermark information extraction:
The original watermark spectrogram is denoised and binarized; the pixel value at each coordinate $F(x_i'', y_i'')$ is read sequentially. If the value is non-zero, the watermark bit is “1”; otherwise, it is “0”. The watermark information is denoted as $W = \{w_i \mid w_i \in \{0,1\},\; i = 1, 2, 3, \ldots, len\}$:

$$W_i = \begin{cases} 1, & f(x_i'', y_i'') \neq 0 \\ 0, & f(x_i'', y_i'') = 0 \end{cases}, \qquad i = 1, 2, 3, \ldots, len$$
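The coordinate computation of Sections 3.3.3–3.3.4 and the final bit reading can be sketched together as follows. All names are illustrative (not the authors' identifiers), and the function assumes the ellipse parameters have already been obtained from the fitting step.

```python
import numpy as np

def read_ring_bits(binary_spec, x0, y0, a, b, theta, wm_len):
    """Sketch of watermark readout: compute detection points F(x'', y'')
    on the fitted ellipse and read the bits from the binarized spectrogram.

    binary_spec: denoised, binarized spectrogram (2-D array)
    (x0, y0): ellipse center; a, b: semi-axes; theta: rotation angle.
    """
    slen = np.deg2rad(360.0 / (2 * wm_len))  # angular step, in radians
    bits = []
    for i in range(wm_len):
        # point K_i on the axis-aligned ellipse ...
        xk = a * np.cos(i * slen)
        yk = b * np.sin(i * slen)
        # ... rotated by theta into the attacked image frame, giving F
        xi = xk * np.cos(theta) + yk * np.sin(theta)
        yi = -xk * np.sin(theta) + yk * np.cos(theta)
        u = int(round(x0 + xi))
        v = int(round(y0 + yi))
        bits.append(1 if binary_spec[u, v] != 0 else 0)
    return bits
```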

4. Experiment and Analysis

4.1. Presentation of Experimental Data

To assess the efficacy of the proposed approach across multiple dimensions, we selected four images from Landsat 4-5 TM for experimentation, as depicted in Figure 10a–d, and averaged the experimental outcomes. The images depict diverse geographic features, including mountains, rivers, and plains, with the reference coordinate system and size information for each dataset presented in Table 1. The experimental data are available for download from the Geospatial Data Cloud (https://www.gscloud.cn, accessed on 1 October 2024).
The experimental environment comprises a Windows 11 PC, Python 3.12, an AMD 5800x CPU, a GeForce RTX 3080 GPU, and Torch version 2.4.0+cu124.

4.2. Assessment Metrics

4.2.1. Invisibility

This research evaluates the imperceptibility of the watermarking algorithm using PSNR (Peak Signal-to-Noise Ratio) [24]. PSNR assesses the resemblance between the watermarked image and the original image by determining the ratio of the maximum image signal value to the background noise. Increased similarity correlates with enhanced imperceptibility before and after watermark embedding. This is demonstrated in the following equation:
$$PSNR = 10 \log_{10} \frac{MAX^2}{MSE}$$
$MAX$ represents the maximum value of the image’s color depth, which is 255 for an 8-bit color depth image. $MSE$ denotes the Mean Square Error between the original image and the watermarked image. A greater PSNR value indicates reduced distortion of the watermarked image compared to the original, resulting in superior image quality. A PSNR ranging from 35 to 40 typically signifies good image quality [25].
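The PSNR computation above is a one-liner in NumPy; this hedged sketch (function name ours) follows the equation directly.

```python
import numpy as np

def psnr(original, watermarked, max_val=255.0):
    """PSNR between two images, per the base-10 log equation above."""
    mse = np.mean((original.astype(float) - watermarked.astype(float)) ** 2)
    if mse == 0:
        return float("inf")  # identical images: no distortion
    return 10 * np.log10(max_val ** 2 / mse)
```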

4.2.2. Robustness

In watermark information identification on dubious data or post-attack data, it is essential to assess the similarity between the extracted watermark information and the original copyright information, followed by conducting a watermark similarity evaluation. The watermark evaluation metrics include NC, BER, PSNR, RMSE, etc. Among these, the normalized correlation (NC) is the most frequently employed similarity assessment index [26]. Consequently, this paper utilizes NC to evaluate the precision of the extracted watermarks [27]. A higher NC value between the extracted watermark $W'$ and the original watermark $W$ signifies greater accuracy and robustness of the extracted watermark. The NC is delineated by the following equation:
$$NC = \frac{\displaystyle\sum_{i=1}^{len} W_i W_i'}{\sqrt{\displaystyle\sum_{i=1}^{len} W_i^2} \, \sqrt{\displaystyle\sum_{i=1}^{len} W_i'^2}}$$
The NC threshold is an empirical value established at 0.75 in this paper. If the observed NC exceeds this level, it is deemed a successful detection; otherwise, the detection is deemed unsuccessful [19].
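The NC metric and the 0.75 detection threshold can be sketched as follows (function names are ours, illustrating the equation above on binary bit sequences).

```python
import numpy as np

def nc(w, w_ext):
    """Normalized correlation between original and extracted bit sequences."""
    w = np.asarray(w, dtype=float)
    w_ext = np.asarray(w_ext, dtype=float)
    return float(np.sum(w * w_ext) /
                 (np.sqrt(np.sum(w**2)) * np.sqrt(np.sum(w_ext**2))))

def detected(w, w_ext, threshold=0.75):
    """Detection succeeds when NC exceeds the empirical threshold."""
    return nc(w, w_ext) > threshold
```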

4.3. Results and Analysis of Experiments

This experiment assesses the method’s resilience against four types of attacks: rotation, scaling, cropping, and affine transformations. Various strengths are utilized for each attack type, and the experimental outcomes are compared with Xinyan Pang’s method (designated as Method A) [18] and Heidari’s method (designated as Method B) [28]. Method A utilizes the Discrete Fourier Transform (DFT) and employs ring template watermarking, while Method B is founded on the Discrete Wavelet Transform (DWT) and incorporates a binary image of 32 × 32 pixels as watermarking data, as shown in Figure 11.
Method A employs a watermark embedding strength of 120, a watermark radius control of 120 px, and a watermark information bit control of 40 bits. This paper establishes the embedded watermark radius at 120 px, the watermark strength at 120, and the watermark information at 40 bits. Furthermore, if the experimental image requires cropping, the lower right corner is excised while prioritizing the preservation of the upper left corner. If filling is necessary, the default fill value is set to 0, maintaining the upper left corner and filling the lower right corner.

4.3.1. Experiments on Invisibility

This research employs PSNR values to assess the invisibility of watermarks. Table 2 presents the PSNR findings for the three techniques. The table indicates that the PSNR value of the suggested method in this work is 39.856, compared to 38.874 for Method A and 38.165 for Method B. This method exhibits marginally superior invisibility compared to Method A and Method B.

4.3.2. Robustness Experiments

Rotation, scaling, and cropping attacks are prevalent threats to remotely sensed image data and represent the primary challenges encountered by digital watermarking methods. This section presents three experiments involving conventional geometric attacks to assess the methods’ robustness. Various strengths are applied to each attack to evaluate the methods’ effectiveness across multiple scenarios and to quantitatively analyze the watermark extraction results using NC values.
1.
Rotation attacks
In this experiment, we rotate the image clockwise by a specified angle $\theta$. The value of $\theta$ varies from 15° to 180° in increments of 15°. Figure 12 illustrates the rotation results.
Figure 12 illustrates that the NC value of the watermark recovered by the technique presented in this research exceeds 0.94 at any rotation angle, with an average value of 0.967, demonstrating exceptional resilience to rotational attacks. The retrieved watermarks exhibit a near-total correlation with the original watermarks, irrespective of the rotation angle. This results from the circle’s symmetry, where any angle can align with various rotational angles while maintaining a constant radius to interpret the watermark’s coordinate data.
Conversely, while Method A possesses the capability to withstand arbitrary rotational angles, the NC values across all 12 experimental datasets are inferior to the lowest NC value of this method, which is 0.94. Consequently, this method outperforms Method A in both accuracy and detection stability. Method B exhibits significant variations in NC values across multiple rotational attack angles. The method’s NC value is already below the threshold of 0.75 at rotation angles of 60°, 105°, and 120° to 165°, with a minimum NC value of 0.25 at a rotation angle of 135°. The experimental findings indicate that Method B exhibits inadequate resilience to rotating attacks.
2.
Scaling attacks
Scaling attacks encompass image interpolation and image resampling. This section establishes a scaling factor S to assess the method’s resilience against scaling attacks. An A × B pixel image transforms into an (A × S) × (B × S) image under the scaling factor, indicating that the image is either reduced or enlarged by a factor of S in both horizontal and vertical dimensions. As shown in Figure 13 and Table 3, the value of S increases in increments of 0.1 over the range 0.5 to 1. An additional S = 1.2 is included to validate watermark extraction after interpolation, using an image of 7891 × 6941 pixels as an example to illustrate the dimensions following the scaling attack.
Figure 14 illustrates the experimental outcomes of this method under scaling attacks. The NC value of the proposed method remains high under most scaling attacks, except at a scaling factor of 0.5, where the NC value of the extracted watermark is 0.751. This reduction occurs because the watermark circle approaches the edge of the spectrogram at this factor, so part of the watermark information is lost and the complete watermark template cannot be retrieved. Excluding the S = 0.5 point, the NC values of the proposed method consistently exceed 0.95, closely matching the experimental results of Method A. The findings demonstrate that the recovered watermark remains strongly correlated with the original under scaling attacks with factors from 0.5 to 1.2. At S = 0.5, the NC value of Method A is slightly higher than that of the proposed method: although both use ring templates, the number of “1”s in the watermark information bits differs slightly, producing some fluctuation in the results. Conversely, the NC value of Method B falls below the 0.75 threshold when the scaling factor is below 0.7 or above 1, reaching only 0.12 at S = 0.5 and dropping to 0 once S exceeds 1. This shows that Method B can only handle mild scaling attacks and cannot cope with scaling attacks involving image interpolation.
This method and Method A exhibit similar resistance to scaling attacks and outperform Method B under such conditions, routinely achieving NC values exceeding 0.75, indicating superior robustness.
3. Cropping attacks
This experiment employs a cropping attack that preserves the upper left corner while setting a cropping ratio r, the ratio of the cropped region to the original image. r ranges from 20% to 70% in 10% increments, as illustrated in Figure 15 and Table 4. The larger the value of r, the smaller the retained image. For instance, for an image of 7891 × 6941 pixels, when r = 70% the cropped data measure 4322 × 3802 pixels.
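The sizes in Table 4 imply that r measures the cropped *area*, so the retained dimensions shrink by a factor of √(1 − r). The sketch below is our inference from those numbers, not code from the paper:

```python
import math

def cropped_size(width, height, r):
    """Retained image size after cropping a fraction r of the image area
    while keeping the upper-left corner (dimensions scale by sqrt(1 - r))."""
    s = math.sqrt(1.0 - r)
    return round(width * s), round(height * s)

# Reproduces sizes listed in Table 4 for a 7891 x 6941 image:
assert cropped_size(7891, 6941, 0.5) == (5580, 4908)
assert cropped_size(7891, 6941, 0.7) == (4322, 3802)
```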
Figure 16 illustrates the experimental outcomes of the proposed method under various intensities of cropping attack. The figure shows that the proposed method, Method A, and Method B exhibit comparable resistance to cropping attacks. The NC values of all three methods decline steadily as the cropping ratio rises from 20% to 70%, yet consistently remain above 0.83. This suggests that even when a large portion of the image is cropped, watermark information strongly correlated with the original watermark can still be retrieved. In this experiment, Method A fluctuates more than both the proposed method and Method B, giving slightly inferior stability. The NC value of the proposed method is marginally higher than that of Method B in most instances and nearly equivalent at cropping ratios of 40%, 50%, and 60%.
These results demonstrate that the proposed approach extracts complete watermark information despite heavy cropping, indicating strong robustness against such attacks.

4.3.3. Affine Attacks

An affine transformation can be represented as a combination of a translation and a linear transformation; any affine transformation can be expressed as “multiplying by a matrix and adding a vector”. Neither Method A nor Method B can extract the original watermark information after an affine transformation.
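“Multiplying by a matrix and adding a vector” can be written directly: for a point p, the attacked position is A·p + t. A minimal sketch with illustrative matrix values (not those of Figure 17):

```python
def affine(point, A, t):
    """Apply an affine transformation: linear part A (2x2) plus translation t."""
    x, y = point
    return (A[0][0] * x + A[0][1] * y + t[0],
            A[1][0] * x + A[1][1] * y + t[1])

# A shear plus a translation: circles map to ellipses under such transforms,
# which is exactly why the ring template deforms.
A = [[1.0, 0.3],
     [0.0, 1.0]]
t = (10.0, -5.0)
px, py = affine((2.0, 4.0), A, t)
assert abs(px - 13.2) < 1e-9 and abs(py + 1.0) < 1e-9
```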
This work presents six affine transformations, illustrated in Figure 17, to assess the method’s resilience against various affine transformation attacks.
Figure 18 illustrates the extent of image distortion, the variation in the spectral graphs, the semantic segmentation outcomes, and the ellipse-fitting results (red line sections) after the six affine transformations. As the figure shows, for all six affine transformation matrices the semantic segmentation model trained in this paper effectively extracts the deformed annular template, the obtained coordinate points satisfy the requirements of ellipse fitting, and the NC values of the extracted watermarks all remain above 0.83, exceeding the 0.75 threshold. For Method A, an affine transformation morphs the circular template watermark into an ellipse that no longer has the properties of a standard circle, so template matching fails and watermark extraction is impossible. Method B likewise fails to achieve the anticipated outcomes once the spectral pattern is deformed. The proposed method therefore outperforms both Method A and Method B under affine transformation attacks.
In the majority of affine transformation attacks, the proposed method successfully retrieves watermark information closely associated with the original watermark, with NC values of the extraction results exceeding the threshold, demonstrating stability, accuracy, and efficiency. This approach is thus deemed robust against affine transformations.
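Recovering the deformed template rests on a least-squares ellipse fit to the segmented ring points. The sketch below uses one standard conic formulation (normalising the constant term to 1); the paper’s exact normalisation and solver may differ:

```python
import numpy as np

def fit_ellipse(xs, ys):
    """Least-squares fit of the conic a*x^2 + b*x*y + c*y^2 + d*x + e*y = 1
    to segmented ring points (a standard formulation)."""
    D = np.column_stack([xs**2, xs * ys, ys**2, xs, ys])
    coef, *_ = np.linalg.lstsq(D, np.ones(len(xs)), rcond=None)
    return coef  # (a, b, c, d, e)

# Sample noiseless points from a known ellipse and verify the fit reproduces them.
t = np.linspace(0.0, 2.0 * np.pi, 50, endpoint=False)
xs = 3.0 + 5.0 * np.cos(t)   # centre (3, 2), semi-axes 5 and 2
ys = 2.0 + 2.0 * np.sin(t)
coef = fit_ellipse(xs, ys)
D = np.column_stack([xs**2, xs * ys, ys**2, xs, ys])
assert np.abs(D @ coef - 1.0).max() < 1e-8  # all samples lie on the fitted conic
```

In practice the segmented points are noisy, and least squares averages that noise out, which is why the fitted curve can reposition the watermark coordinates after deformation.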

5. Discussion

The findings from the aforementioned trials indicate that the algorithm presented in this research is resilient against diverse attacks and surpasses two exemplary methods. This section discusses the impact of algorithm parameter settings on the imperceptibility and resilience of watermarking to elucidate the method’s properties.

5.1. Impact of Watermarking Parameters on Imperceptibility

The imperceptibility of the watermark is influenced by several parameters, including the radius of watermark embedding, the strength of watermark embedding, and the quantity of “1”s in the watermark information bits, all of which affect the PSNR value of the image. This paper presents two experiments aimed at quantitatively assessing the impact of these three parameters on the PSNR value.
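How these three parameters interact can be seen in a sketch of ring-template construction: each “1” bit becomes a peak of the embedding strength on a circle of the embedding radius. The bit layout below is illustrative; the paper’s exact geometry may differ:

```python
import math

def ring_template(size, bits, radius, strength):
    """Sketch of a ring template: write each '1' bit as a peak of the given
    strength on a circle of the given radius around the spectrum centre.
    (Illustrative layout, not the paper's exact bit geometry.)"""
    grid = [[0.0] * size for _ in range(size)]
    cx = cy = size // 2
    n = len(bits)
    for k, bit in enumerate(bits):
        if bit == 1:
            ang = 2.0 * math.pi * k / n
            x = cx + int(round(radius * math.cos(ang)))
            y = cy + int(round(radius * math.sin(ang)))
            grid[y][x] = strength
    return grid

bits = [1, 0] * 20  # a 40-bit message with 20 ones
tpl = ring_template(256, bits, radius=60, strength=120.0)
assert sum(v > 0 for row in tpl for v in row) == 20  # one peak per '1' bit
```

More “1”s or a higher strength means more or brighter peaks added to the spectrum, which is exactly why both parameters lower the PSNR, as the following experiments quantify.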

5.1.1. Impact of Watermark Embedding Intensity on PSNR

To examine the influence of watermark embedding strength on PSNR, we varied the embedding strength from 120 to 200 in increments of 20 and the watermark radius from 60 to 120 in increments of 10, while keeping the number of watermark bits set to “1” at 18. Figure 19 illustrates the impact of watermark embedding strength on PSNR across different watermark radii: as the embedding strength increases, the PSNR value decreases, diminishing the watermark’s imperceptibility. At an embedding strength of 200, the PSNR value remains approximately 36. This paper therefore advises that the embedding strength of the watermark information not exceed 200 so as to maintain the algorithm’s imperceptibility.
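PSNR itself is the standard fidelity metric, 10·log₁₀(MAX²/MSE). A minimal sketch (toy pixel values for illustration):

```python
import math

def psnr(original, marked, peak=255.0):
    """Peak signal-to-noise ratio between two equal-size images (flat pixel lists)."""
    mse = sum((a - b) ** 2 for a, b in zip(original, marked)) / len(original)
    return float("inf") if mse == 0 else 10.0 * math.log10(peak ** 2 / mse)

orig   = [100, 120, 140, 160]
marked = [102, 118, 142, 158]       # every pixel off by 2, so MSE = 4
assert 42.0 < psnr(orig, marked) < 42.2   # ~42.1 dB, well above the ~35 dB bar
```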

5.1.2. Impact of Varying Quantities of “1” on PSNR

To examine the impact of the quantity of watermark information bits “1” on PSNR, we set the number of embedded “1” bits from 18 to 30 in increments of 4, the watermark radius from 60 to 120 in increments of 10, and the watermark intensity at 120. Figure 20 illustrates the impact of the number of “1” bits on the PSNR value across various watermark radii, indicating that its influence on PSNR is less significant than that of the embedding intensity. In the ring watermark containing 40 bits of information, a larger number of “1”s corresponds to a lower PSNR value, although the overall PSNR remains within the range of 38 to 41. Consequently, the number of information bits set to “1” can be chosen based on practical application requirements. To ensure the precision of semantic segmentation and ellipse fitting, this study recommends that the count of “1”s exceed 10.

5.2. Impact of Watermarking Parameters on Robustness

The watermark radius, the quantity of watermark information bits “1”, and the intensity of watermark embedding all influence the algorithm’s robustness. Two experiments are conducted to quantitatively assess these three factors, using the normalized correlation (NC) value as the evaluation criterion.
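The NC value used throughout is the standard normalized correlation between the original and extracted watermark bit sequences. A minimal sketch with toy bit sequences:

```python
import math

def nc(w, w_ext):
    """Normalized correlation between original and extracted watermark bits."""
    num = sum(a * b for a, b in zip(w, w_ext))
    den = math.sqrt(sum(a * a for a in w)) * math.sqrt(sum(b * b for b in w_ext))
    return num / den

w      = [1, 0, 1, 1, 0, 1, 0, 1]
intact = list(w)
noisy  = [1, 0, 1, 0, 0, 1, 0, 1]   # one '1' bit lost during extraction
assert abs(nc(w, intact) - 1.0) < 1e-9
assert 0.85 < nc(w, noisy) < 0.95   # 4/sqrt(20) ≈ 0.894, still above 0.75
```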

5.2.1. Impact of Watermark Embedding Intensity on NC Value

To examine the influence of watermark embedding strength on the NC value, we varied the embedding strength from 120 to 200 in increments of 20 and the watermark radius from 60 to 120 in increments of 10, keeping the number of information bits “1” constant at 18. Figure 21 illustrates the impact of embedding strength on the NC value across different watermark radii. The watermark radius exerts minimal influence on the NC value, whereas the embedding strength affects it significantly: greater intensity yields a higher NC value. This indicates that stronger embedding improves the accuracy of semantic segmentation and ellipse fitting, leading to more precise localization of the watermark information bits while mitigating the influence of image noise on the extraction results.

5.2.2. Influence of Varying Quantities of “1” on NCs

To examine the impact of the quantity of watermark information bits “1” on NC, we set the number of embedded “1” bits from 18 to 30 in increments of 4, the watermark radius from 60 to 120 in increments of 10, and the watermark intensity at 120. Figure 22 illustrates the impact of the number of embedded “1” bits on the NC value across various watermark radii. The correlation is evident: a larger number of watermark bits set to “1” corresponds to a higher NC value, enhancing the integrity of watermark extraction. This indicates that the watermark information bit “1” significantly influences the precision of semantic segmentation and ellipse fitting: more “1”s produce more bright spots in the image spectrum, making the watermark information easier to distinguish from the background and thereby improving segmentation and fitting accuracy.

6. Conclusions

This study presents a robust annular-template watermarking approach, utilizing semantic segmentation and elliptic curve fitting, to address the limited robustness of classic template watermarking against geometric and affine attacks. The method incorporates deep learning into the watermark extraction process, overcoming the limitation of classic ring template watermarking, which fails to extract watermark information once the template is deformed by an affine attack. The method is resilient to diverse geometric attacks, such as scaling, rotation, and cropping. Even under various affine transformations, the watermark template can still be restored, allowing the watermark coordinates to be recovered and the watermark information extracted. The experimental results demonstrate that the watermark information extracted by the algorithm retains substantial integrity across diverse challenging conditions. Future work will improve the precision of semantic segmentation to accommodate a wider range of contexts and modify the watermark template to carry a greater number of effective information bits.

Author Contributions

Conceptualization, X.C. and N.R.; methodology, X.C. and Q.Z.; software, Q.Z. and W.Z.; investigation, C.Z.; writing—original draft preparation, X.C. and W.Z.; writing—review and editing, N.R. and C.Z.; supervision, N.R. and C.Z.; funding acquisition, N.R. All authors have read and agreed to the published version of the manuscript.

Funding

This research was supported by the National Natural Science Foundation of China (Grant No. 42471440) and the Hunan Provincial Natural Science Foundation of China (No. 2024JJ8360).

Data Availability Statement

The data that support the findings of this study are available on request from the corresponding author. The data are not publicly available due to privacy or ethical restrictions.

Conflicts of Interest

The authors declare no conflicts of interest.

Figure 1. Main framework of the algorithm.
Figure 2. U-Net network structure.
Figure 3. Ring template watermark.
Figure 4. Spectrogram of ring template watermark.
Figure 5. Original spectrogram (a) with mask (b).
Figure 6. Loss curve.
Figure 7. Model test result (a) with binarized image (b).
Figure 8. Ellipse fitting result.
Figure 9. Elliptic curve repositioning.
Figure 10. Experimental remote sensing imagery data.
Figure 11. Method A ring watermark template (a) and Method B watermark information (b).
Figure 12. Results of the rotation attacks.
Figure 13. Images after scaling attacks.
Figure 14. Results of scaling attacks.
Figure 15. Image after cropping attacks.
Figure 16. Results of cropping attacks.
Figure 17. Affine transformation matrices.
Figure 18. Results of affine attacks.
Figure 19. Effects of watermark embedding strength q and watermark radius R on PSNR.
Figure 20. Effects of the number of watermark information bits “1” and watermark radius R on PSNR.
Figure 21. Effects of watermark embedding strength q and watermark radius R on NC.
Figure 22. Effects of the number of watermark bits “1” and watermark radius R on NC.
Table 1. Information on experimental remote sensing image data.

Data Number | Spatial Reference Coordinate System | Image Data Size
(a)         | WGS_1984_UTM_zone_47N               | 7891 × 6941
(b)         | WGS_1984_UTM_zone_14N               | 8211 × 7411
(c)         | WGS_1984_UTM_zone_19N               | 7981 × 7011
(d)         | WGS_1984_UTM_zone_19N               | 7981 × 7011
Table 2. Results of invisibility.

Method | Proposed Method | Method A | Method B
PSNR   | 39.856          | 38.874   | 38.165
Table 3. Image size after scaling attacks.

Scaling Factor “S” | Image Size After Scaling Attacks
0.5                | 3946 × 2471
0.6                | 4735 × 4165
0.7                | 5524 × 4859
0.8                | 6312 × 5553
0.9                | 7102 × 6247
1.0                | 7891 × 6941
1.2                | 9469 × 8329
Table 4. Image size after cropping attacks.

Cropping Ratio r (%) | Image Size After Cropping Attacks
20                   | 7058 × 6028
30                   | 6602 × 5807
40                   | 6112 × 5376
50                   | 5580 × 4908
60                   | 4991 × 4390
70                   | 4322 × 3802