Article

Study of the Bias of the Initial Phase Estimation of a Sinewave of Known Frequency in the Presence of Phase Noise

Instituto de Telecomunicações, Instituto Superior Técnico, Universidade de Lisboa, 1649-004 Lisboa, Portugal
*
Author to whom correspondence should be addressed.
Sensors 2024, 24(12), 3730; https://doi.org/10.3390/s24123730
Submission received: 30 April 2024 / Revised: 3 June 2024 / Accepted: 6 June 2024 / Published: 8 June 2024

Abstract:
The estimation of the parameters of a sinusoidal signal is of paramount importance in various applications in the fields of sensors, signal processing, parameter estimation, and device characterization, among others. The presence in the measurement system of non-ideal phenomena such as additive noise in the signals, phase noise in the stimulus generation, jitter in the sampling system, and frequency error in the experimental setup leads to increased uncertainty and bias in the quantities estimated by least squares methods and those derived from them. From a metrological point of view, it is therefore important to be able to theoretically predict and quantify those uncertainties in order to properly design the measurement system and its parameters, such as the number of samples to acquire or the stimulus signal amplitude to use, so as to minimize the uncertainty in the estimated values. Previous works have shown that the presence of these non-ideal phenomena leads to increased uncertainty and bias in the estimation of the sinewave amplitude. The present work complements this knowledge by focusing specifically on the effect of phase noise and sampling jitter on the bias of the initial phase estimation of a sinusoidal signal of known frequency (three-parameter sine fitting procedure). A theoretical derivation of the bias of the initial phase estimation that takes into consideration the presence of phase noise in the sinewave is presented. Since a Taylor series approximation was used in which only the first term was retained, it was necessary to validate the analytical derivations with numerical simulations using a Monte Carlo type of procedure. This process was applied to different conditions regarding the phase noise standard deviation, the initial phase value, and the number of samples. It is concluded that, in most scenarios, initial phase estimation using sine fitting is unbiased in the presence of phase noise or jitter. It is shown, however, that a bias does occur for extremely high phase noise standard deviations combined with a very low number of samples.

1. Introduction

In almost all areas of engineering, it is valuable to estimate the parameters of various systems or signals [1]. It is often the case that these signals are sinusoidal in time or can be decomposed into a sum of sinewaves. The parameters of a sinewave that may need to be estimated include the amplitude, the offset, the frequency, and the initial phase. The present work focuses on the latter. Estimating the initial phase of an electrical signal is important for measuring various indirect physical quantities, such as displacement [2], strain, acceleration, and power quality [3], among others, as well as in the context of various applications such as sonar, radar [4], and vibration analysis [5]. In addition, phase measurements are essential for characterizing the frequency response of electrical or electronic circuits and instruments. There are also applications in estimating the phase difference between two sinewaves [6] such as electric power calibration or measuring an unknown electrical impedance [7], and estimating the phase difference between voltage and current sinewaves. There are many systems that perform this estimation using specialized hardware circuits [8]. However, there are an increasing number of systems that sample and digitize a signal from the real world and then store and process it digitally on a computer. It is in the latter case that the current work finds its value.
This estimation, carried out on real signals, usually obtained from some kind of sensor, is always affected by the non-ideal phenomena present, which one tries to minimize but cannot completely eliminate. The most recognized one is the presence of additive noise [9], which is often of a thermal origin but can also appear due to other sources, such as interference between systems. As such, this type of noise has a broad frequency spectrum that is, in principle, white, but it often becomes colored due to unintentional or intentional filtering. Since all signals in an electronic system have to be generated somewhere, and this process is also not perfect, phase noise is always present where the phase of the generated sinewave changes randomly due to various reasons, often also related to the presence of thermal noise. It has been shown in the past that the presence of additive noise leads to a bias in the amplitude estimation of a sinewave [10]. This is also the case when jitter is present at the sampling instant [11]. Phase noise in oscillators is also a common non-ideal phenomenon encountered in practice [12]. It is therefore pertinent to ask whether or not these non-ideal phenomena lead to an estimation bias in the case of initial phase estimation. Even a negative result is valuable in the larger context of advancing current scientific knowledge. There is also another effect that contributes to the uncertainty of sinewave parameter estimation, which is the quantization error introduced when the signal of interest is sampled and converted from analogue to digital, which does influence the sine fitting procedure [13].
Elsewhere, the problem of estimating the precision of the sinewave initial phase estimation in the presence of this type of phase noise and jitter [14], as well as other non-ideal phenomena such as additive noise [15], colored noise [16], and frequency error [17], has been addressed. In [15], it was shown that in the case of extremely short record length, there is a bias in the initial phase estimation when additive noise is present.
Here, we focus on the bias of the estimation in the presence of phase noise or sampling jitter. We will conclude that it is, in most situations, null. We will also show that in some very particular cases, it is not. These results were validated using a Monte Carlo type of procedure in which the estimation was carried out many times (one million, in this case) with different noise values, and a confidence interval was obtained that includes the value 0. It should be noted that this study was conducted assuming that the signal frequency is known. In the future, it will undoubtedly be helpful to also consider the case where the frequency is unknown and must be estimated using a different algorithm, for example, a four-parameter least squares sine fitting algorithm.

2. Least Squares Approximation

Before delving into the problem of estimating the initial phase of a sinewave using a least squares procedure, it is important to be clear about what this procedure involves and which mathematical assumptions are made [18,19].
Given a set of data points $(t_i, v_i)$ for $i = 1, 2, \ldots, M$, where $t_i$ is the independent variable value and $v_i$ is the observed dependent variable, we want to find a function $g(t; \mathbf{a})$ that depends on a vector of parameters $\mathbf{a} = (a_1, a_2, \ldots, a_n)$ such that the sum of squared residuals is minimized. The residual for each data point is given by the difference between the observed value $v_i$ and the predicted value $g(t_i; \mathbf{a})$.
The goal is to minimize the sum of squared residuals given by
$$S(\mathbf{a}) = \sum_{i=1}^{M} \left[ v_i - g(t_i; \mathbf{a}) \right]^2, \qquad (1)$$
hence the name "least squares". This is thus an optimization problem where the function to be optimized is $S(\mathbf{a})$, the decision variables are $\mathbf{a} = (a_1, a_2, \ldots, a_n)$, and we look to minimize the value of $S(\mathbf{a})$. In the context of this work, the function $g$ is
$$g(t; A, \varphi, C) = C + A \cos(2\pi f t + \varphi), \qquad (2)$$
where the parameter $f$, the frequency, is assumed to be known. The three parameters of the model that must be determined are the amplitude ($A$), the initial phase ($\varphi$), and the offset ($C$).
It is convenient to rewrite the model as
$$g(t; A_I, A_Q, C) = C + A_I \cos(2\pi f t) - A_Q \sin(2\pi f t), \qquad (3)$$
where, instead of the amplitude and initial phase, we use two different parameters called the in-phase amplitude ($A_I$) and the in-quadrature amplitude ($A_Q$), which are related to the amplitude and initial phase by
$$A_I = A \cos(\varphi) \quad \text{and} \quad A_Q = A \sin(\varphi). \qquad (4)$$
The least squares estimates of the three sinewave parameters are obtained, in matrix form, by
$$\begin{bmatrix} \hat{A}_I \\ \hat{A}_Q \\ \hat{C} \end{bmatrix} = \left( \mathbf{D}^{T} \mathbf{D} \right)^{-1} \mathbf{D}^{T} \begin{bmatrix} v_1 \\ v_2 \\ \vdots \\ v_M \end{bmatrix}, \qquad (5)$$
where
$$\mathbf{D} = \begin{bmatrix} \cos(2\pi f t_1) & -\sin(2\pi f t_1) & 1 \\ \cos(2\pi f t_2) & -\sin(2\pi f t_2) & 1 \\ \vdots & \vdots & \vdots \\ \cos(2\pi f t_M) & -\sin(2\pi f t_M) & 1 \end{bmatrix}. \qquad (6)$$
To recover the estimates of the original sinewave parameters, besides the offset $\hat{C}$, we simply use
$$\hat{A} = \sqrt{\hat{A}_I^2 + \hat{A}_Q^2} \qquad (7)$$
and
$$\hat{\varphi} = \arctan\left( \frac{\hat{A}_Q}{\hat{A}_I} \right). \qquad (8)$$
Note that if $\hat{A}_I$ turns out to be negative, the constant $\pi$ is conventionally added to the initial phase estimate. We thus obtain estimates for the sinewave amplitude, initial phase, and offset from the $M$ data samples.
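The procedure in (5)–(8) can be sketched numerically. The following is a minimal sketch (function and variable names are ours, not from the paper), using NumPy's least squares solver in place of the explicit normal-equation inverse:

```python
import numpy as np

def sine_fit_3param(t, v, f):
    """Three-parameter least squares sine fit for a sinewave of known frequency f.

    Fits v ~ C + A_I*cos(2*pi*f*t) - A_Q*sin(2*pi*f*t) and returns the
    estimated amplitude A, initial phase phi (rad), and offset C."""
    w = 2 * np.pi * f
    # Design matrix D: one row per sample, columns cos, -sin, 1 (equation (6))
    D = np.column_stack([np.cos(w * t), -np.sin(w * t), np.ones_like(t)])
    (AI, AQ, C), *_ = np.linalg.lstsq(D, v, rcond=None)
    A = np.hypot(AI, AQ)         # equation (7)
    phi = np.arctan2(AQ, AI)     # equation (8), quadrant handled automatically
    return A, phi, C

# Noiseless check: M = 20 samples covering exactly one signal period
M, f, A, phi, C = 20, 1000.0, 1.0, np.pi / 5, 0.1
t = np.arange(M) / (M * f)       # fs = M*f, so the record spans one period
v = C + A * np.cos(2 * np.pi * f * t + phi)
A_hat, phi_hat, C_hat = sine_fit_3param(t, v, f)
```

Using `arctan2` instead of a plain arctangent applies the $\pi$ correction for negative $\hat{A}_I$ automatically.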
The least squares method assumes that the variance of the errors (residuals) is constant across all values of the independent variable (homoscedasticity). If the variance of the errors varies (heteroscedasticity), the method may produce biased estimates. In the present situation, where the independent variable is time, we assume that the variance of the phase noise does not change with time.
Another assumption is that the value of the phase noise at one instant in time is not correlated with the value of the phase noise at a different instant in time, and hence the different residuals are independent of each other.
A third assumption is that the data points cover at least one full period of the sinewave. If this is violated, the initial phase estimation may indeed be biased, as shown in [15].

3. Estimating the Initial Phase of a Sinewave

Consider a sinewave with amplitude $A$, initial phase $\varphi$, offset $C$, and frequency $f$:
$$x(t) = C + A \cos(2\pi f t + \varphi). \qquad (9)$$
Furthermore, consider that this signal is sampled at a frequency of $f_s$, leading to a set of $M$ samples numbered from $i = 1$ to $i = M$. The sampled signal values are thus given by
$$x_i = C + A \cos(2\pi f t_i + \varphi), \quad i = 1, \ldots, M, \qquad (10)$$
where
$$t_i = \frac{i - 1}{f_s}, \quad i = 1, \ldots, M. \qquad (11)$$
In this work, we specifically consider the phenomena of phase noise in the signal generator and the occurrence of sampling jitter in the acquired samples. These two phenomena are considered together since, mathematically, they can be treated simultaneously. The other non-ideal phenomena mentioned have already been considered elsewhere [14].
The sampling jitter is represented here by the random variable $\delta_i$, such that the sampling instants become
$$t'_i = t_i + \delta_i. \qquad (12)$$
The phase noise, in turn, is represented by the random variable $\eta_i$, such that the value of the samples in the presence of these two types of noise is given by
$$z_i = C + A \cos\left[ 2\pi f (t_i + \delta_i) + \varphi + \eta_i \right], \quad i = 1, \ldots, M. \qquad (13)$$
We can then compare this with Equation (10), the ideal sample values. The values of the estimated parameters are obtained using the in-phase amplitude estimate, given by (5), which can be written as (in this case, using $z_i$ instead of $v_i$ as the dependent variable)
$$\hat{A}_I = \frac{2}{M} \sum_{i=1}^{M} z_i \cos(\omega t_i), \qquad (14)$$
and the in-quadrature amplitude estimate
$$\hat{A}_Q = -\frac{2}{M} \sum_{i=1}^{M} z_i \sin(\omega t_i), \qquad (15)$$
where $\omega = 2\pi f$. From these, one can obtain the estimated sinewave amplitude and initial phase given by (7) and (8).
As can easily be seen, any non-ideal phenomenon that affects the sample values $z_i$ will also affect the amplitude and initial phase estimates, $\hat{A}$ and $\hat{\varphi}$.
There are two random variables in (13), namely $\delta_i$ and $\eta_i$: the jitter in the sampling instant and the phase noise of the stimulus signal at the instant of sampling. In most scenarios, the phenomena leading to the randomness of these two variables are unrelated. It is therefore justifiable to consider them as independent variables and to describe them mathematically with a single variable, represented here by
$$\theta_i = \omega \delta_i + \eta_i, \quad i = 1, \ldots, M. \qquad (16)$$
Note that in this work we consider the frequency of the sinusoidal signal, $f$, to be known. Our model (13) thus becomes
$$z_i = C + A \cos(\omega t_i + \varphi + \theta_i), \quad i = 1, \ldots, M. \qquad (17)$$
These are the values of the dependent variable to which we want to fit, via least squares, a sinewave model with three unknown parameters.
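Sample values following model (17) can be generated as in the short sketch below (names and parameter values are ours; the normal distributions for $\delta_i$ and $\eta_i$ anticipate the assumption used in the next section):

```python
import numpy as np

def noisy_sine_samples(M, f, fs, A, phi, C, sigma_jitter, sigma_phase, rng):
    """Generate samples following model (17):
    z_i = C + A*cos(w*t_i + phi + theta_i), with theta_i = w*delta_i + eta_i,
    where delta_i is the sampling jitter and eta_i the generator phase noise."""
    w = 2 * np.pi * f
    t = np.arange(M) / fs                      # nominal instants, equation (11)
    delta = rng.normal(0.0, sigma_jitter, M)   # jitter (seconds)
    eta = rng.normal(0.0, sigma_phase, M)      # phase noise (radians)
    theta = w * delta + eta                    # combined variable, equation (16)
    return t, C + A * np.cos(w * t + phi + theta)

rng = np.random.default_rng(0)
t, z = noisy_sine_samples(20, 1000.0, 20000.0, 1.0, np.pi / 5, 0.0, 1e-6, 0.05, rng)
```

With both standard deviations set to zero, the function reproduces the ideal samples (10) exactly.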

4. Expected Value of the Initial Phase Estimation

Since the estimated initial phase, $\hat{\varphi}$, is a function of the estimated in-phase and in-quadrature amplitudes, we can express the expected value of the initial phase estimate using an approximate expression made of the first term in a Taylor series approximation, using Equation (7.20) in [20]:
$$\mu_{\hat{\varphi}} \approx \arctan\left( \frac{\mu_{\hat{A}_Q}}{\mu_{\hat{A}_I}} \right) + \frac{1}{2} \left( \frac{\partial^2 \hat{\varphi}}{\partial \hat{A}_I^2} \sigma_{\hat{A}_I}^2 + 2 \frac{\partial^2 \hat{\varphi}}{\partial \hat{A}_I \partial \hat{A}_Q} \mathrm{Cov}\{\hat{A}_I, \hat{A}_Q\} + \frac{\partial^2 \hat{\varphi}}{\partial \hat{A}_Q^2} \sigma_{\hat{A}_Q}^2 \right), \qquad (18)$$
where the partial derivatives should be computed at $\hat{A}_I = \mu_{\hat{A}_I}$ and $\hat{A}_Q = \mu_{\hat{A}_Q}$, the expected values of the estimated in-phase and in-quadrature amplitudes, respectively. Next, we must compute the partial derivatives, the variances, and the covariance. In the end, we can bring all these parts together to obtain an expression for the average of the estimated initial phase.

4.1. Mean Values of the In-Phase and In-Quadrature Amplitudes

Computing the means of the two estimated amplitudes $\hat{A}_I$ and $\hat{A}_Q$ is straightforward. From (15), we can deduce
$$\mu_{\hat{A}_Q} = -\frac{2}{M} \sum_{i=1}^{M} E\{z_i\} \sin(\omega t_i). \qquad (19)$$
The expected value of the sample values can be obtained from (17); the offset $C$ is omitted in what follows since it does not contribute to (19) when the samples cover an integer number of periods. One thus obtains
$$E\{z_i\} = A \, E\{\cos(\omega t_i + \varphi + \theta_i)\}. \qquad (20)$$
Assuming that the phase noise is normally distributed with null mean, we can write
$$E\{\cos(a + \xi)\} = \int_{-\infty}^{+\infty} \cos(a + \xi) \frac{1}{\sqrt{2\pi}\,\sigma_\xi} e^{-\frac{\xi^2}{2\sigma_\xi^2}} \, d\xi = \cos(a) \, e^{-\frac{1}{2}\sigma_\xi^2}, \qquad (21)$$
where $a = \omega t_i + \varphi$ and $\xi = \theta_i$. This leads to
$$E\{z_i\} = A \cos(\omega t_i + \varphi) \, e^{-\frac{1}{2}\sigma_\theta^2}. \qquad (22)$$
Inserting this into (19) leads to
$$\mu_{\hat{A}_Q} = -\frac{2}{M} \sum_{i=1}^{M} A \cos(\omega t_i + \varphi) \, e^{-\frac{1}{2}\sigma_\theta^2} \sin(\omega t_i). \qquad (23)$$
Making use of
$$\cos(a)\sin(b) = \frac{1}{2}\sin(a + b) - \frac{1}{2}\sin(a - b) \qquad (24)$$
leads to
$$\mu_{\hat{A}_Q} = -\frac{A}{M} e^{-\frac{1}{2}\sigma_\theta^2} \sum_{i=1}^{M} \sin(2\omega t_i + \varphi) + \frac{A}{M} e^{-\frac{1}{2}\sigma_\theta^2} \sum_{i=1}^{M} \sin(\varphi).$$
Since the first summation covers an integer number of periods, it results in a null value. The second summation results in $M \sin(\varphi)$. Hence, we obtain
$$\mu_{\hat{A}_Q} = A \sin(\varphi) \, e^{-\frac{1}{2}\sigma_\theta^2}. \qquad (25)$$
Repeating the same computations for the in-phase amplitude leads to
$$\mu_{\hat{A}_I} = A \cos(\varphi) \, e^{-\frac{1}{2}\sigma_\theta^2}. \qquad (26)$$
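As a quick numerical cross-check (ours, not part of the original derivation; the parameter values are illustrative), the means (25) and (26) can be compared against empirical Monte Carlo averages of $\hat{A}_I$ and $\hat{A}_Q$:

```python
import numpy as np

rng = np.random.default_rng(1)
M, A, phi, sigma = 20, 1.0, np.pi / 5, 0.3
w_t = 2 * np.pi * np.arange(M) / M        # omega*t_i over exactly one period
R = 100_000                               # Monte Carlo repetitions

theta = rng.normal(0.0, sigma, (R, M))    # combined phase noise, one row per run
z = A * np.cos(w_t + phi + theta)
AI = (2 / M) * (z * np.cos(w_t)).sum(axis=1)    # equation (14)
AQ = -(2 / M) * (z * np.sin(w_t)).sum(axis=1)   # equation (15)

shrink = np.exp(-sigma**2 / 2)            # attenuation factor in (25) and (26)
mu_AI_th = A * np.cos(phi) * shrink       # equation (26)
mu_AQ_th = A * np.sin(phi) * shrink       # equation (25)
```

The empirical means agree with the analytic ones to within a few Monte Carlo standard errors.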
We now move on to the computation of the second raw moments using a similar procedure.

4.2. Second Raw Moment of the In-Phase and In-Quadrature Amplitudes

Here, we determine the second raw moment of the in-phase amplitude, which will later enable us to compute its variance. We obtain, from (14),
$$E\{\hat{A}_I^2\} = E\left\{ \left[ \frac{2}{M} \sum_{i=1}^{M} z_i \cos(\omega t_i) \right]^2 \right\}. \qquad (27)$$
By expanding the square of the bracketed summation, we obtain
$$E\{\hat{A}_I^2\} = \frac{4}{M^2} E\left\{ \sum_{i,j} z_i z_j \cos(\omega t_i) \cos(\omega t_j) \right\}. \qquad (28)$$
The expected value can be moved inside the double summation, leading to
$$E\{\hat{A}_I^2\} = \frac{4}{M^2} \sum_{i,j} E\{z_i z_j\} \cos(\omega t_i) \cos(\omega t_j). \qquad (29)$$
To compute $E\{z_i z_j\}$, we start with the expression of $z_i$ given in (17) and obtain (terms involving the offset $C$ are again omitted since they do not contribute to (29) when the record covers an integer number of periods)
$$E\{z_i z_j\} = E\left\{ \left[ A \cos(\omega t_i + \varphi + \theta_i) \right] \left[ A \cos(\omega t_j + \varphi + \theta_j) \right] \right\}. \qquad (30)$$
Multiplying the terms in the two square brackets leads to
$$E\{z_i z_j\} = E\left\{ A^2 \cos(\omega t_i + \varphi + \theta_i) \cos(\omega t_j + \varphi + \theta_j) \right\}. \qquad (31)$$
By making use of
$$\cos(a)\cos(b) = \frac{1}{2}\cos(a + b) + \frac{1}{2}\cos(a - b), \qquad (32)$$
we can write
$$E\{z_i z_j\} = E\left\{ \frac{A^2}{2} \cos(\omega t_i + \omega t_j + 2\varphi + \theta_i + \theta_j) + \frac{A^2}{2} \cos(\omega t_i - \omega t_j + \theta_i - \theta_j) \right\}. \qquad (33)$$
Using the fact that the expected value of a sum is equal to the sum of the expected values, and that the expected value of a constant times a random variable is equal to that constant times the expected value of the random variable, we can deduce that
$$E\{z_i z_j\} = \frac{A^2}{2} E\{\cos(\omega t_i + \omega t_j + 2\varphi + \theta_i + \theta_j)\} + \frac{A^2}{2} E\{\cos(\omega t_i - \omega t_j + \theta_i - \theta_j)\}. \qquad (34)$$
Writing two different expressions for the cases where the indexes are different (called $a_{ij}$) and where they are the same (called $b_i$) leads to
$$a_{ij} = E\{z_i z_j\}_{i \neq j} = \frac{A^2}{2} E\{\cos(\omega t_i + \omega t_j + 2\varphi + \theta_i + \theta_j)\} + \frac{A^2}{2} E\{\cos(\omega t_i - \omega t_j + \theta_i - \theta_j)\} \qquad (35)$$
and
$$b_i = E\{z_i z_j\}_{i = j} = \frac{A^2}{2} E\{\cos(2\omega t_i + 2\varphi + 2\theta_i)\} + \frac{A^2}{2}. \qquad (36)$$
We are now able to compute the expected values when considering that $\theta_i$ and $\theta_j$ are two independent random variables (if $i \neq j$), normally distributed with null mean and the same variance $\sigma_\theta^2$. We are going to make use of the definition of the expected value,
$$E\{\cos(k + \xi)\} = \int_{-\infty}^{+\infty} \cos(k + \xi) \frac{1}{\sqrt{2\pi}\,\sigma_\xi} e^{-\frac{\xi^2}{2\sigma_\xi^2}} \, d\xi = \cos(k) \, e^{-\frac{1}{2}\sigma_\xi^2}, \qquad (37)$$
where $k$ is a generic constant and $\xi$ is a random variable. The coefficient $a_{ij}$, given in (35), when considering that $\xi = \theta_i \pm \theta_j$ and thus that $\sigma_\xi^2 = 2\sigma_\theta^2$, can be written as
$$a_{ij} = \frac{A^2}{2} \cos(\omega t_i + \omega t_j + 2\varphi) \, e^{-\sigma_\theta^2} + \frac{A^2}{2} \cos(\omega t_i - \omega t_j) \, e^{-\sigma_\theta^2}. \qquad (38)$$
Equation (36), when considering that $\xi = 2\theta_i$ and thus that $\sigma_\xi^2 = 4\sigma_\theta^2$, becomes
$$b_i = \frac{A^2}{2} + \frac{A^2}{2} \cos(2\omega t_i + 2\varphi) \, e^{-2\sigma_\theta^2}. \qquad (39)$$
The analytical expression for the expected value $E\{z_i z_j\}$ is thus different depending on whether the two indexes are different ($a_{ij}$) or the same ($b_i$):
$$E\{\hat{A}_I^2\} = \frac{4}{M^2} \sum_{i,j} \begin{cases} a_{ij}, & i \neq j \\ b_i, & i = j \end{cases} \; \cos(\omega t_i) \cos(\omega t_j). \qquad (40)$$
This can be written as the addition of two summations:
$$E\{\hat{A}_I^2\} = \frac{4}{M^2} \sum_{i \neq j} a_{ij} \cos(\omega t_i) \cos(\omega t_j) + \frac{4}{M^2} \sum_{i} b_i \cos^2(\omega t_i). \qquad (41)$$
To compute the first summation, the one involving $a_{ij}$, it is convenient to have a complete summation, that is, one where the $i = j$ terms are not missing. To achieve this, we use the complete double summation, as desired, and subtract, using a single summation, the terms that should not be in it. We can thus write the expected value of the square of the in-phase amplitude as
$$E\{\hat{A}_I^2\} = \frac{4}{M^2} \sum_{i,j} a_{ij} \cos(\omega t_i) \cos(\omega t_j) - \frac{4}{M^2} \sum_{i} a_{ii} \cos^2(\omega t_i) + \frac{4}{M^2} \sum_{i} b_i \cos^2(\omega t_i). \qquad (42)$$
The same procedure was used in [14]. Applying (32) in the first term of (42) leads to
$$E\{\hat{A}_I^2\} = \frac{2}{M^2} \underbrace{\sum_{i,j} a_{ij} \cos(\omega t_i + \omega t_j)}_{s_a} + \frac{2}{M^2} \underbrace{\sum_{i,j} a_{ij} \cos(\omega t_i - \omega t_j)}_{s_b} - \frac{4}{M^2} \underbrace{\sum_{i} a_{ii} \cos^2(\omega t_i)}_{s_c} + \frac{4}{M^2} \underbrace{\sum_{i} b_i \cos^2(\omega t_i)}_{s_d}. \qquad (43)$$
Note that each summation has been named $s_a$, $s_b$, $s_c$, or $s_d$, such that
$$E\{\hat{A}_I^2\} = \frac{2}{M^2} s_a + \frac{2}{M^2} s_b - \frac{4}{M^2} s_c + \frac{4}{M^2} s_d. \qquad (44)$$
Inserting the coefficients $a_{ij}$ and $b_i$ into each of the summation terms, starting with $s_a$, produces
$$s_a = \sum_{i,j} a_{ij} \cos(\omega t_i + \omega t_j), \qquad (45)$$
and using $a_{ij}$, given by (38), leads to
$$s_a = \sum_{i,j} \frac{A^2}{2} \cos(\omega t_i + \omega t_j + 2\varphi) \, e^{-\sigma_\theta^2} \cos(\omega t_i + \omega t_j) + \sum_{i,j} \frac{A^2}{2} \cos(\omega t_i - \omega t_j) \, e^{-\sigma_\theta^2} \cos(\omega t_i + \omega t_j). \qquad (46)$$
Carrying out the product of the cosine functions using (32) leads to
$$s_a = \sum_{i,j} \frac{A^2}{4} \cos(2\omega t_i + 2\omega t_j + 2\varphi) \, e^{-\sigma_\theta^2} + \sum_{i,j} \frac{A^2}{4} \cos(2\varphi) \, e^{-\sigma_\theta^2} + \sum_{i,j} \frac{A^2}{4} \cos(2\omega t_i) \, e^{-\sigma_\theta^2} + \sum_{i,j} \frac{A^2}{4} \cos(2\omega t_j) \, e^{-\sigma_\theta^2}. \qquad (47)$$
Considering that the summations of the first, third, and fourth terms occur over an integer number of periods, they result in a null value. Only the second term remains:
$$s_a = \frac{A^2 M^2}{4} \cos(2\varphi) \, e^{-\sigma_\theta^2}. \qquad (48)$$
Moving now to $s_b$, given in (43), we can deduce that
$$s_b = \sum_{i,j} a_{ij} \cos(\omega t_i - \omega t_j), \qquad (49)$$
and making use of $a_{ij}$, given in (38), leads to
$$s_b = \sum_{i,j} \frac{A^2}{2} \cos(\omega t_i + \omega t_j + 2\varphi) \, e^{-\sigma_\theta^2} \cos(\omega t_i - \omega t_j) + \sum_{i,j} \frac{A^2}{2} \cos(\omega t_i - \omega t_j) \, e^{-\sigma_\theta^2} \cos(\omega t_i - \omega t_j). \qquad (50)$$
Multiplying the cosines using (32) leads to
$$s_b = \sum_{i,j} \frac{A^2}{4} \cos(2\omega t_i + 2\varphi) \, e^{-\sigma_\theta^2} + \sum_{i,j} \frac{A^2}{4} \cos(2\omega t_j + 2\varphi) \, e^{-\sigma_\theta^2} + \sum_{i,j} \frac{A^2}{4} \cos(2\omega t_i - 2\omega t_j) \, e^{-\sigma_\theta^2} + \sum_{i,j} \frac{A^2}{4} e^{-\sigma_\theta^2}. \qquad (51)$$
Since the first three summations occur over an integer number of periods of a sinusoid, they result in 0. The fourth one does not. By using the fact that the double summation of a constant is just that constant multiplied by the number of terms of the summation, $M^2$, we obtain
$$s_b = \frac{A^2 M^2}{4} e^{-\sigma_\theta^2}. \qquad (52)$$
We can now tackle $s_c$, as defined in (43):
$$s_c = \sum_{i} a_{ii} \cos^2(\omega t_i). \qquad (53)$$
Evaluating $a_{ij}$, given in (38), at $i = j$ yields
$$a_{ii} = \frac{A^2}{2} \cos(2\omega t_i + 2\varphi) \, e^{-\sigma_\theta^2} + \frac{A^2}{2} e^{-\sigma_\theta^2}. \qquad (54)$$
This leads to
$$s_c = \sum_{i} \frac{A^2}{2} \cos(2\omega t_i + 2\varphi) \, e^{-\sigma_\theta^2} \cos^2(\omega t_i) + \sum_{i} \frac{A^2}{2} e^{-\sigma_\theta^2} \cos^2(\omega t_i). \qquad (55)$$
Using
$$\cos^2(x) = \frac{1}{2} + \frac{1}{2}\cos(2x) \qquad (56)$$
results in
$$s_c = \sum_{i} \frac{A^2}{4} \cos(2\omega t_i + 2\varphi) \, e^{-\sigma_\theta^2} + \sum_{i} \frac{A^2}{4} \cos(2\omega t_i + 2\varphi) \, e^{-\sigma_\theta^2} \cos(2\omega t_i) + \sum_{i} \frac{A^2}{2} e^{-\sigma_\theta^2} \cos^2(\omega t_i). \qquad (57)$$
Using (32) in the second term leads to
$$s_c = \sum_{i} \frac{A^2}{4} \cos(2\omega t_i + 2\varphi) \, e^{-\sigma_\theta^2} + \sum_{i} \frac{A^2}{8} \cos(4\omega t_i + 2\varphi) \, e^{-\sigma_\theta^2} + \sum_{i} \frac{A^2}{8} \cos(2\varphi) \, e^{-\sigma_\theta^2} + \sum_{i} \frac{A^2}{2} e^{-\sigma_\theta^2} \cos^2(\omega t_i). \qquad (58)$$
The first two summations result in a null value. Notice that, in the third one, the argument does not depend on $i$. We can thus multiply its contents by the number of terms, which is $M$, leading to
$$s_c = \frac{A^2 M}{8} \cos(2\varphi) \, e^{-\sigma_\theta^2} + \sum_{i} \frac{A^2}{2} e^{-\sigma_\theta^2} \cos^2(\omega t_i). \qquad (59)$$
Using (56) in the remaining summation results in
$$s_c = \frac{A^2 M}{8} \cos(2\varphi) \, e^{-\sigma_\theta^2} + \sum_{i} \frac{A^2}{4} e^{-\sigma_\theta^2} + \sum_{i} \frac{A^2}{4} e^{-\sigma_\theta^2} \cos(2\omega t_i). \qquad (60)$$
The contents of the first summation are independent of $i$, and the second summation is 0. We thus obtain
$$s_c = \frac{A^2 M}{8} \cos(2\varphi) \, e^{-\sigma_\theta^2} + \frac{A^2 M}{4} e^{-\sigma_\theta^2}. \qquad (61)$$
The final term, $s_d$, from expression (43), can be written as
$$s_d = \sum_{i} b_i \cos^2(\omega t_i). \qquad (62)$$
Inserting $b_i$, given by (39), results in
$$s_d = \sum_{i} \frac{A^2}{2} \cos^2(\omega t_i) + \sum_{i} \frac{A^2}{2} \cos(2\omega t_i + 2\varphi) \, e^{-2\sigma_\theta^2} \cos^2(\omega t_i). \qquad (63)$$
Using (56) leads to
$$s_d = \sum_{i} \frac{A^2}{4} + \sum_{i} \frac{A^2}{4} \cos(2\omega t_i) + \sum_{i} \frac{A^2}{4} \cos(2\omega t_i + 2\varphi) \, e^{-2\sigma_\theta^2} + \sum_{i} \frac{A^2}{4} \cos(2\omega t_i + 2\varphi) \, e^{-2\sigma_\theta^2} \cos(2\omega t_i). \qquad (64)$$
Using (32) in the last term leads to
$$s_d = \sum_{i} \frac{A^2}{4} + \sum_{i} \frac{A^2}{4} \cos(2\omega t_i) + \sum_{i} \frac{A^2}{4} \cos(2\omega t_i + 2\varphi) \, e^{-2\sigma_\theta^2} + \sum_{i} \frac{A^2}{8} \cos(4\omega t_i + 2\varphi) \, e^{-2\sigma_\theta^2} + \sum_{i} \frac{A^2}{8} \cos(2\varphi) \, e^{-2\sigma_\theta^2}, \qquad (65)$$
which can be simplified by again using the fact that a summation over an integer number of periods is 0, leading to
$$s_d = \frac{A^2 M}{4} + \frac{A^2 M}{8} \cos(2\varphi) \, e^{-2\sigma_\theta^2}. \qquad (66)$$
Inserting the four terms just derived, $s_a$, $s_b$, $s_c$, and $s_d$, back into (44) leads to
$$E\{\hat{A}_I^2\} = \frac{2}{M^2} \left[ \frac{A^2 M^2}{4} \cos(2\varphi) \, e^{-\sigma_\theta^2} \right] + \frac{2}{M^2} \left[ \frac{A^2 M^2}{4} e^{-\sigma_\theta^2} \right] - \frac{4}{M^2} \left[ \frac{A^2 M}{8} \cos(2\varphi) \, e^{-\sigma_\theta^2} + \frac{A^2 M}{4} e^{-\sigma_\theta^2} \right] + \frac{4}{M^2} \left[ \frac{A^2 M}{4} + \frac{A^2 M}{8} \cos(2\varphi) \, e^{-2\sigma_\theta^2} \right]. \qquad (67)$$
By simplifying, we obtain
$$E\{\hat{A}_I^2\} = \frac{A^2}{M} + \frac{A^2}{2} \left[ \cos(2\varphi) + 1 - \frac{1}{M}\cos(2\varphi) - \frac{2}{M} \right] e^{-\sigma_\theta^2} + \frac{A^2}{2M} \cos(2\varphi) \, e^{-2\sigma_\theta^2}. \qquad (68)$$
Repeating the same procedure for the in-quadrature amplitude results in
$$E\{\hat{A}_Q^2\} = \frac{A^2}{M} + \frac{A^2}{2} \left[ -\cos(2\varphi) + 1 + \frac{1}{M}\cos(2\varphi) - \frac{2}{M} \right] e^{-\sigma_\theta^2} - \frac{A^2}{2M} \cos(2\varphi) \, e^{-2\sigma_\theta^2}, \qquad (69)$$
which differs from the second raw moment of the in-phase amplitude only in the signs of the $\cos(2\varphi)$ terms.
Having computed the expected values and the second raw moments, we can move on to the computation of the variances.

4.3. Variances of the In-Phase and In-Quadrature Amplitudes

In order to compute the variance of the in-phase amplitude, we subtract the square of the expected value given in (26) from the second raw moment in (68), leading to
$$\sigma_{\hat{A}_I}^2 = \frac{A^2}{M} + \frac{A^2}{2} \left[ \cos(2\varphi) + 1 - \frac{1}{M}\cos(2\varphi) - \frac{2}{M} \right] e^{-\sigma_\theta^2} + \frac{A^2}{2M} \cos(2\varphi) \, e^{-2\sigma_\theta^2} - \left[ A \cos(\varphi) \, e^{-\frac{1}{2}\sigma_\theta^2} \right]^2. \qquad (70)$$
Applying (56) to the last term results in
$$\sigma_{\hat{A}_I}^2 = \frac{A^2}{M} + \frac{A^2}{2} \left[ \cos(2\varphi) + 1 - \frac{1}{M}\cos(2\varphi) - \frac{2}{M} \right] e^{-\sigma_\theta^2} + \frac{A^2}{2M} \cos(2\varphi) \, e^{-2\sigma_\theta^2} - \frac{A^2}{2} \left[ \cos(2\varphi) + 1 \right] e^{-\sigma_\theta^2}. \qquad (71)$$
Carrying out some simplifications leads to
$$\sigma_{\hat{A}_I}^2 = \frac{A^2}{M} - \frac{A^2}{2M} \left[ 2 + \cos(2\varphi) \right] e^{-\sigma_\theta^2} + \frac{A^2}{2M} \cos(2\varphi) \, e^{-2\sigma_\theta^2}. \qquad (72)$$
In the case of the in-quadrature amplitude, given by (15), we can see that it has a sine function in place of the cosine function that one can observe in (14). A similar derivation leads to
$$\sigma_{\hat{A}_Q}^2 = \frac{A^2}{M} - \frac{A^2}{2M} \left[ 2 - \cos(2\varphi) \right] e^{-\sigma_\theta^2} - \frac{A^2}{2M} \cos(2\varphi) \, e^{-2\sigma_\theta^2}, \qquad (73)$$
which is very similar to the variance obtained previously for the in-phase amplitude, given in (72), but with a minus sign before the two $\cos(2\varphi)$ terms. We can thus write them in a different form using
$$\sigma_{\hat{A}_I}^2 = P + R \qquad (74)$$
and
$$\sigma_{\hat{A}_Q}^2 = P - R, \qquad (75)$$
where
$$P = \frac{A^2}{M} \left( 1 - e^{-\sigma_\theta^2} \right) \qquad (76)$$
and
$$R = -\frac{A^2}{2M} \cos(2\varphi) \, e^{-\sigma_\theta^2} + \frac{A^2}{2M} \cos(2\varphi) \, e^{-2\sigma_\theta^2}, \qquad (77)$$
which is equivalent to
$$R = \frac{A^2}{2M} \cos(2\varphi) \left( e^{-2\sigma_\theta^2} - e^{-\sigma_\theta^2} \right). \qquad (78)$$
We have thus, at this point, determined the variances of the in-phase and in-quadrature amplitudes of the sinewave as a function of the sinusoid's amplitude ($A$), initial phase ($\varphi$), number of acquired samples ($M$), and phase noise standard deviation ($\sigma_\theta$):
$$\sigma_{\hat{A}_I}^2 = \frac{A^2}{M} \left( 1 - e^{-\sigma_\theta^2} \right) + \frac{A^2}{2M} \cos(2\varphi) \left( e^{-2\sigma_\theta^2} - e^{-\sigma_\theta^2} \right) \qquad (79)$$
and
$$\sigma_{\hat{A}_Q}^2 = \frac{A^2}{M} \left( 1 - e^{-\sigma_\theta^2} \right) - \frac{A^2}{2M} \cos(2\varphi) \left( e^{-2\sigma_\theta^2} - e^{-\sigma_\theta^2} \right). \qquad (80)$$
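The variance expressions (79) and (80) lend themselves to a quick Monte Carlo cross-check (a sketch of ours, with illustrative parameter values), comparing the empirical variances of the estimated amplitudes against $P \pm R$:

```python
import numpy as np

rng = np.random.default_rng(2)
M, A, phi, sigma = 20, 1.0, np.pi / 5, 0.3
w_t = 2 * np.pi * np.arange(M) / M      # omega*t_i over exactly one period
R_mc = 100_000                          # Monte Carlo repetitions

theta = rng.normal(0.0, sigma, (R_mc, M))
z = A * np.cos(w_t + phi + theta)
AI = (2 / M) * (z * np.cos(w_t)).sum(axis=1)    # equation (14)
AQ = -(2 / M) * (z * np.sin(w_t)).sum(axis=1)   # equation (15)

P = A**2 / M * (1 - np.exp(-sigma**2))                                               # (76)
Rt = A**2 / (2 * M) * np.cos(2 * phi) * (np.exp(-2 * sigma**2) - np.exp(-sigma**2))  # (78)
var_AI_th = P + Rt    # equation (79)
var_AQ_th = P - Rt    # equation (80)
```

With $10^5$ repetitions, the empirical variances match the analytic values to well within a few percent.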
It is time to determine the covariance of the two amplitude components.

4.4. Co-Variance of the In-Phase and In-Quadrature Amplitudes

To determine the covariance between $\hat{A}_I$ and $\hat{A}_Q$, we use
$$\mathrm{Cov}\{\hat{A}_I, \hat{A}_Q\} = E\{\hat{A}_I \hat{A}_Q\} - E\{\hat{A}_I\} E\{\hat{A}_Q\}. \qquad (81)$$
The expected value of the product of the two estimated amplitudes is
$$E\{\hat{A}_I \hat{A}_Q\} = -\frac{4}{M^2} E\left\{ \sum_{i,j} z_i z_j \cos(\omega t_i) \sin(\omega t_j) \right\}. \qquad (82)$$
Repeating the process used earlier for $E\{\hat{A}_I^2\}$, given by (28), but with a sine function in place of the second cosine function, leads to
$$E\{\hat{A}_I \hat{A}_Q\} = \frac{A^2}{2} \sin(2\varphi) \left( 1 - \frac{1}{M} \right) e^{-\sigma_\theta^2} + \frac{A^2}{2M} \sin(2\varphi) \, e^{-2\sigma_\theta^2}. \qquad (83)$$
Inserting this into expression (81), together with (25) and (26), leads to the following:
$$\mathrm{Cov}\{\hat{A}_I, \hat{A}_Q\} = \frac{A^2}{2} \sin(2\varphi) \left( 1 - \frac{1}{M} \right) e^{-\sigma_\theta^2} + \frac{A^2}{2M} \sin(2\varphi) \, e^{-2\sigma_\theta^2} - \left[ A \cos(\varphi) \, e^{-\frac{1}{2}\sigma_\theta^2} \right] \left[ A \sin(\varphi) \, e^{-\frac{1}{2}\sigma_\theta^2} \right]. \qquad (84)$$
Carrying out some simplifications leads to
$$\mathrm{Cov}\{\hat{A}_I, \hat{A}_Q\} = \frac{A^2}{2M} \sin(2\varphi) \left( e^{-2\sigma_\theta^2} - e^{-\sigma_\theta^2} \right). \qquad (85)$$
Notice the similarity between this expression and the one defined earlier for R , in (78).
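The covariance (85) can also be verified with a short Monte Carlo sketch (ours, with illustrative parameter values), comparing the empirical covariance of $\hat{A}_I$ and $\hat{A}_Q$ with the analytic expression:

```python
import numpy as np

rng = np.random.default_rng(3)
M, A, phi, sigma = 20, 1.0, np.pi / 5, 0.3
w_t = 2 * np.pi * np.arange(M) / M      # omega*t_i over exactly one period
R_mc = 100_000                          # Monte Carlo repetitions

theta = rng.normal(0.0, sigma, (R_mc, M))
z = A * np.cos(w_t + phi + theta)
AI = (2 / M) * (z * np.cos(w_t)).sum(axis=1)    # equation (14)
AQ = -(2 / M) * (z * np.sin(w_t)).sum(axis=1)   # equation (15)

# Analytic covariance, equation (85); negative here since sin(2*phi) > 0
cov_th = A**2 / (2 * M) * np.sin(2 * phi) * (np.exp(-2 * sigma**2) - np.exp(-sigma**2))
cov_emp = np.cov(AI, AQ)[0, 1]
```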

4.5. Partial Derivatives

Computing the second-order partial derivatives of $\hat{\varphi}$, given by (8), leads to
$$\frac{\partial^2 \hat{\varphi}}{\partial \hat{A}_I^2} = \frac{2 \mu_{\hat{A}_Q}}{\mu_{\hat{A}_I} \left( \mu_{\hat{A}_I}^2 + \mu_{\hat{A}_Q}^2 \right)} - \frac{2 \mu_{\hat{A}_Q}^3}{\mu_{\hat{A}_I} \left( \mu_{\hat{A}_I}^2 + \mu_{\hat{A}_Q}^2 \right)^2} \qquad (86)$$
and
$$\frac{\partial^2 \hat{\varphi}}{\partial \hat{A}_Q^2} = -\frac{2 \mu_{\hat{A}_I} \mu_{\hat{A}_Q}}{\left( \mu_{\hat{A}_I}^2 + \mu_{\hat{A}_Q}^2 \right)^2}. \qquad (87)$$
The second-order derivative of the estimated initial phase relative to the in-phase amplitude, given by (86), becomes, after inserting (25) and (26),
$$\frac{\partial^2 \hat{\varphi}}{\partial \hat{A}_I^2} = \frac{2}{A^2} \frac{1}{e^{-\sigma_\theta^2}} \left[ \frac{\sin(\varphi)/\cos(\varphi)}{\cos^2(\varphi) + \sin^2(\varphi)} - \frac{\sin^3(\varphi)/\cos(\varphi)}{\left( \cos^2(\varphi) + \sin^2(\varphi) \right)^2} \right], \qquad (88)$$
which simplifies to
$$\frac{\partial^2 \hat{\varphi}}{\partial \hat{A}_I^2} = \frac{2}{A^2} \frac{1}{e^{-\sigma_\theta^2}} \frac{\sin(\varphi) \left[ 1 - \sin^2(\varphi) \right]}{\cos(\varphi)}, \qquad (89)$$
which further simplifies to
$$\frac{\partial^2 \hat{\varphi}}{\partial \hat{A}_I^2} = \frac{1}{A^2} e^{\sigma_\theta^2} \sin(2\varphi). \qquad (90)$$
Focusing now on the second-order derivative relative to the in-quadrature component, given by (87), leads, after inserting (25) and (26) and simplifying, to
$$\frac{\partial^2 \hat{\varphi}}{\partial \hat{A}_Q^2} = -\frac{1}{A^2} e^{\sigma_\theta^2} \sin(2\varphi). \qquad (91)$$
Note that this has a symmetrical relationship with (90).
Next, the remaining second-order derivative present in (18), the mixed derivative of (8), is
$$\frac{\partial^2 \hat{\varphi}}{\partial \hat{A}_I \partial \hat{A}_Q} = \frac{2 \mu_{\hat{A}_Q}^2}{\left( \mu_{\hat{A}_I}^2 + \mu_{\hat{A}_Q}^2 \right)^2} - \frac{1}{\mu_{\hat{A}_I}^2 + \mu_{\hat{A}_Q}^2}. \qquad (92)$$
Inserting (25) and (26) leads to
$$\frac{\partial^2 \hat{\varphi}}{\partial \hat{A}_I \partial \hat{A}_Q} = \frac{2 A^2 \sin^2(\varphi) \, e^{-\sigma_\theta^2}}{\left( A^2 \cos^2(\varphi) \, e^{-\sigma_\theta^2} + A^2 \sin^2(\varphi) \, e^{-\sigma_\theta^2} \right)^2} - \frac{1}{A^2 \cos^2(\varphi) \, e^{-\sigma_\theta^2} + A^2 \sin^2(\varphi) \, e^{-\sigma_\theta^2}}. \qquad (93)$$
After some simplification, we obtain
$$\frac{\partial^2 \hat{\varphi}}{\partial \hat{A}_I \partial \hat{A}_Q} = \frac{2}{A^2} \sin^2(\varphi) \, e^{\sigma_\theta^2} - \frac{1}{A^2} e^{\sigma_\theta^2}. \qquad (94)$$
Finally, using the trigonometric relation $2\sin^2(\varphi) = 1 - \cos(2\varphi)$, we obtain
$$\frac{\partial^2 \hat{\varphi}}{\partial \hat{A}_I \partial \hat{A}_Q} = -\frac{1}{A^2} e^{\sigma_\theta^2} \cos(2\varphi), \qquad (95)$$
which concludes the determination of the partial derivatives.
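The closed forms (90), (91), and (95) can be cross-checked against central finite differences of $\arctan(\hat{A}_Q/\hat{A}_I)$ evaluated at the means (25) and (26). A small sketch of ours, with illustrative parameter values:

```python
import numpy as np

A, phi, sigma = 1.0, np.pi / 5, 0.3
mu_I = A * np.cos(phi) * np.exp(-sigma**2 / 2)   # equation (26)
mu_Q = A * np.sin(phi) * np.exp(-sigma**2 / 2)   # equation (25)

f = lambda ai, aq: np.arctan2(aq, ai)            # the estimator (8)
h = 1e-4                                         # finite-difference step

# Central second differences approximating the partial derivatives
d2_II = (f(mu_I + h, mu_Q) - 2 * f(mu_I, mu_Q) + f(mu_I - h, mu_Q)) / h**2
d2_QQ = (f(mu_I, mu_Q + h) - 2 * f(mu_I, mu_Q) + f(mu_I, mu_Q - h)) / h**2
d2_IQ = (f(mu_I + h, mu_Q + h) - f(mu_I + h, mu_Q - h)
         - f(mu_I - h, mu_Q + h) + f(mu_I - h, mu_Q - h)) / (4 * h**2)

e = np.exp(sigma**2) / A**2
# Analytic values: (90) = e*sin(2*phi), (91) = -e*sin(2*phi), (95) = -e*cos(2*phi)
```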

4.6. Bringing It All Together

At this point, we are able to bring together all the terms in (18) and obtain results regarding the bias of the estimated initial phase of a sinewave in the presence of phase noise and/or jitter. Inserting the expected values of the in-phase and in-quadrature amplitudes, given in (25) and (26), leads to
$$\mu_{\hat{\varphi}} \approx \varphi + \frac{1}{2} \left( \frac{\partial^2 \hat{\varphi}}{\partial \hat{A}_I^2} \sigma_{\hat{A}_I}^2 + 2 \frac{\partial^2 \hat{\varphi}}{\partial \hat{A}_I \partial \hat{A}_Q} \mathrm{Cov}\{\hat{A}_I, \hat{A}_Q\} + \frac{\partial^2 \hat{\varphi}}{\partial \hat{A}_Q^2} \sigma_{\hat{A}_Q}^2 \right). \qquad (96)$$
Inserting the second-order derivatives given in (90), (91), and (95) leads to
$$\mu_{\hat{\varphi}} \approx \varphi + \frac{1}{2} \left\{ \left[ \frac{1}{A^2} e^{\sigma_\theta^2} \sin(2\varphi) \right] \sigma_{\hat{A}_I}^2 + 2 \left[ -\frac{1}{A^2} e^{\sigma_\theta^2} \cos(2\varphi) \right] \mathrm{Cov}\{\hat{A}_I, \hat{A}_Q\} + \left[ -\frac{1}{A^2} e^{\sigma_\theta^2} \sin(2\varphi) \right] \sigma_{\hat{A}_Q}^2 \right\}. \qquad (97)$$
Inserting the variances, (79) and (80), and the covariance, (85), leads to
$$\mu_{\hat{\varphi}} \approx \varphi + \frac{1}{2} \left\{ \left[ \frac{1}{A^2} e^{\sigma_\theta^2} \sin(2\varphi) \right] \left[ \frac{A^2}{M} \left( 1 - e^{-\sigma_\theta^2} \right) + \frac{A^2}{2M} \cos(2\varphi) \left( e^{-2\sigma_\theta^2} - e^{-\sigma_\theta^2} \right) \right] + 2 \left[ -\frac{1}{A^2} e^{\sigma_\theta^2} \cos(2\varphi) \right] \left[ \frac{A^2}{2M} \sin(2\varphi) \left( e^{-2\sigma_\theta^2} - e^{-\sigma_\theta^2} \right) \right] + \left[ -\frac{1}{A^2} e^{\sigma_\theta^2} \sin(2\varphi) \right] \left[ \frac{A^2}{M} \left( 1 - e^{-\sigma_\theta^2} \right) - \frac{A^2}{2M} \cos(2\varphi) \left( e^{-2\sigma_\theta^2} - e^{-\sigma_\theta^2} \right) \right] \right\}. \qquad (98)$$
We will now simplify this rather large mathematical expression in several simple stages. The first is to see that the first term inside the second square brackets of the first line cancels the corresponding term in the third line, due to the leading minus sign there. We thus obtain
$$\mu_{\hat{\varphi}} \approx \varphi + \frac{1}{2} \left\{ \left[ \frac{1}{A^2} e^{\sigma_\theta^2} \sin(2\varphi) \right] \left[ \frac{A^2}{2M} \cos(2\varphi) \left( e^{-2\sigma_\theta^2} - e^{-\sigma_\theta^2} \right) \right] + 2 \left[ -\frac{1}{A^2} e^{\sigma_\theta^2} \cos(2\varphi) \right] \left[ \frac{A^2}{2M} \sin(2\varphi) \left( e^{-2\sigma_\theta^2} - e^{-\sigma_\theta^2} \right) \right] + \left[ -\frac{1}{A^2} e^{\sigma_\theta^2} \sin(2\varphi) \right] \left[ -\frac{A^2}{2M} \cos(2\varphi) \left( e^{-2\sigma_\theta^2} - e^{-\sigma_\theta^2} \right) \right] \right\}. \qquad (99)$$
We can now observe that the first and third lines inside the curly brackets have the same overall sign (the third line has two minus signs that cancel each other). We can thus add the first and third lines together, leading to
$$\mu_{\hat{\varphi}} \approx \varphi + \frac{1}{2} \left\{ 2 \left[ \frac{1}{A^2} e^{\sigma_\theta^2} \sin(2\varphi) \right] \left[ \frac{A^2}{2M} \cos(2\varphi) \left( e^{-2\sigma_\theta^2} - e^{-\sigma_\theta^2} \right) \right] + 2 \left[ -\frac{1}{A^2} e^{\sigma_\theta^2} \cos(2\varphi) \right] \left[ \frac{A^2}{2M} \sin(2\varphi) \left( e^{-2\sigma_\theta^2} - e^{-\sigma_\theta^2} \right) \right] \right\}. \qquad (100)$$
We can now see that the two lines are symmetric if we note the minus sign in the second one and the fact that we can swap the order of the $\sin(2\varphi)$ and $\cos(2\varphi)$ factors. Their sum is thus null, and we reach
$$\mu_{\hat{\varphi}} \approx \varphi, \qquad (101)$$
showing that the expected value of the estimator, $\mu_{\hat{\varphi}}$, is equal to the real value that we are estimating, that is, $\varphi$. The conclusion is thus that this estimator is unbiased in the presence of phase noise and/or jitter:
$$e_{\hat{\varphi}} = \mu_{\hat{\varphi}} - \varphi \approx 0. \qquad (102)$$
In the derivation carried out here, a Taylor series approximation was used in which only the first term was kept. In order to validate the adequacy of this step, it is necessary to carry out some numerical simulations. This is done in the next section using a Monte Carlo type of procedure.

5. Monte Carlo Validation

In order to validate the correctness of the derivations presented and to justify their domain of applicability, several numerical simulations have been carried out, through which the initial phase of a sinewave has been estimated. These simulations involved creating a set of equally spaced data points that follow a given mathematical model: in the present case, the model of a sinusoidal signal. In this model, we included the non-ideal effects we are interested in. For the current purpose, these include the presence of phase noise and jitter in the sampling instant. To this effect, we sampled from a random variable with a specific statistical distribution. Here, we focus on a normal distribution with different values of standard deviation. The use of this type of statistical distribution is justified considering that most sources of phase noise are of thermal origin.
Those noise values were then added to the product of the sampling instants and the angular frequency, and the value of the sinewave at the resulting phase arguments was obtained. At this point, one could add the effect of other non-ideal phenomena that might affect the signal, like additive noise or quantization error, but that was not the case in the current study; the only mathematically modeled non-ideal effect was the random phase noise following a normal distribution.
With those corrupted data points from the created sinewave in hand, we employed the least squares three-parameter sine fitting procedure we are studying here and obtained an estimate of its parameters, namely, the amplitude, offset, and initial phase. In the present work, we focus solely on the estimation of the initial phase.
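Because the frequency is known, the three-parameter fit reduces to a linear least squares problem in the in-phase, quadrature, and offset coefficients. The following Python sketch (the function name and interface are illustrative, not taken from the paper) recovers the amplitude, offset, and initial phase:

```python
import numpy as np

def sine_fit_3param(y, wt):
    """Three-parameter least squares sine fit for a known frequency.

    y  : acquired samples
    wt : known phases omega * t_n of the sampling instants
    Returns estimates of (amplitude, offset, initial phase)."""
    # Linear model: y = a*cos(wt) + b*sin(wt) + c
    D = np.column_stack([np.cos(wt), np.sin(wt), np.ones_like(wt)])
    (a, b, c), *_ = np.linalg.lstsq(D, y, rcond=None)
    # a*cos(wt) + b*sin(wt) = A*sin(wt + phi), with a = A*sin(phi), b = A*cos(phi)
    return np.hypot(a, b), c, np.arctan2(a, b)

# Noiseless check: with exact data, the fit recovers the parameters
wt = 2 * np.pi * np.arange(20) / 20
y = 0.3 + 1.0 * np.sin(wt + np.pi / 5)
A_hat, C_hat, phi_hat = sine_fit_3param(y, wt)   # -> ~1.0, ~0.3, ~pi/5
```

The `arctan2` form resolves the phase quadrant unambiguously, which a plain `arctan(a/b)` would not.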
By varying the different parameters of the model, we can see how they affect the estimates. In the current work, we are studying the bias that might be induced in the initial phase estimation of the sinewave. We can then compare those results with the expected ones that we derived analytically.
Focusing on the case in point, we studied the influence of the phase noise standard deviation, $\sigma_\theta$, on the systematic error (bias) of the initial phase estimation of a sinusoid, $e_{\hat{\varphi}}$. The results are plotted in Figure 1. The solid circles represent the estimated average value of the initial phase error. In this specific instance, chosen as illustrative, we simulated a sinusoidal voltage signal with 1 V amplitude and an initial phase of π/5 rad. The number of samples acquired was 20 (M), and they were acquired at a rate such that they covered exactly one signal period, by making the ratio between the sampling frequency ($f_s$) and the signal frequency ($f_x$) exactly equal to the number of samples:
$\frac{f_s}{f_x} = M.$
The vertical error bars were computed for $10^6$ repetitions (R). The higher the number of repetitions carried out, the smaller the error bars become. Since the theoretical value of the quantity being estimated (the initial phase error) is null, there is no obvious reference against which to judge whether the error bars are small enough to claim that the estimation is unbiased. Alternatively, we can use the value of the phase noise standard deviation. Since a range of values up to 0.5 rad was used, and given that the error bars grow with the phase noise standard deviation and are, in the worst case (where $\sigma_\theta$ equals 0.5 rad), shorter than 0.001 rad, we can claim that the error bars are 500 times smaller than the phase noise standard deviation. Using more repetitions would lengthen the simulation without, arguably, leading to different conclusions.
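The repetition loop and the construction of the error bars described above can be sketched as follows. This is an illustrative Python sketch with a much reduced number of repetitions (the names and the explicit 99.9% normal quantile are assumptions of this sketch, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(1)
A, phi, M = 1.0, np.pi / 5, 20
wt = 2 * np.pi * np.arange(M) / M        # coherent sampling, fs/fx = M
D = np.column_stack([np.cos(wt), np.sin(wt), np.ones(M)])

def phase_error_stats(sigma_theta, R):
    """Mean initial phase error and 99.9% confidence half-width over R runs."""
    errs = np.empty(R)
    for r in range(R):
        theta = rng.normal(0.0, sigma_theta, M)      # Gaussian phase noise
        y = A * np.sin(wt + phi + theta)
        (a, b, _), *_ = np.linalg.lstsq(D, y, rcond=None)
        errs[r] = np.arctan2(a, b) - phi             # estimated minus true phase
    z = 3.29                                         # approx. two-sided 99.9% normal quantile
    return errs.mean(), z * errs.std(ddof=1) / np.sqrt(R)

mean_err, half_width = phase_error_stats(0.2, 2000)
# An error bar [mean_err - half_width, mean_err + half_width] that covers 0
# is consistent with the unbiasedness result derived above.
```

Increasing `R` toward the $10^6$ repetitions used in the paper shrinks the half-width in proportion to $1/\sqrt{R}$.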
The theoretical expected values were drawn in the same chart using a thick solid line. In this case, all the points have a null value. As we can see in Figure 1, the error bars are all close to this line, which shows that the result derived agrees with the Monte Carlo simulation.
As an extreme case, we repeated the same experiment with an extremely low number of samples: just three, which is the minimum possible since we are estimating three parameters (amplitude, offset, and initial phase). We also simulated an amount of phase noise that goes up to 3 rad, which is almost half the sinewave period. This situation, with such a low number of samples and so much phase noise, is not usually encountered in practice and is only intended to show the limits of our model. In this case, the first-order Taylor series approximation made in (18) is obviously not enough, as seen in Figure 2.
For low values of the phase noise standard deviation, the error bars are centered around 0, but for values higher than 0.3 rad, they are not.
To further study this behavior and its dependence on the number of samples, we carried out a third Monte Carlo simulation in which the number of samples varied from three to twenty for a very large phase noise standard deviation of 10 rad, which is much higher than the values encountered in practical situations. The results are depicted in Figure 3. As can be seen, only the first error bar, corresponding to three samples, is outside the theoretical value of 0.
Finally, a fourth numerical simulation was carried out where the phase noise standard deviation varied, and the number of samples used was 20 (the other parameters remained the same). The results can be seen in Figure 4.
In this case, we can see that, once again, the error bars of the data points are all close to the theoretical value of 0, as expected, which once again validates the final analytical results that were given in expression (104).

6. Conclusions

In this work, we studied how the bias of the least squares estimation of the initial phase of a sinewave depends on the amount of phase noise and jitter that affects the signal. The detailed analytical derivations presented have shown that the bias is null in these circumstances. In previous studies, it has also been shown, for example, that, in the presence of additive noise, the initial phase estimation is not biased [21]. In the case of the amplitude estimation of a sinusoid, however, it has been shown that the estimate is biased in the presence of additive noise [10] and also in the presence of sampling jitter [14].
This type of knowledge is important in order to statistically characterize the measurements made regarding the initial phase of a sinusoid and, together with information about its precision, can be used to define a confidence interval for the estimated sinewave initial phase. We were thus able to show that phase noise and jitter do not bias the estimation of the initial phase of a sinewave.
Recall from Section 2 that three assumptions were made that, if not met, could render the conclusions of this work invalid; i.e., if those assumptions do not hold, the initial phase estimation may be biased. Those assumptions were: (i) that the data points are homoscedastic, that is, that a constant value of the phase noise standard deviation is maintained; (ii) that the phase noise values of different samples are uncorrelated; and (iii) that the data points cover at least one entire period of the sinewave.
Even a null result like this is relevant in that it allows for this type of signal processing technique and estimation to be used confidently and in full knowledge of what does and does not affect it. Note, however, that the derivations made were approximate in as much as they employed only the first term of the Taylor series expansion in (18). For this reason, a numerical simulation was carried out using a Monte Carlo procedure. The results of this simulation allow us to conclude with a high degree of confidence that the estimator is indeed unbiased in typical situations.
This study has also shown that, for extremely high phase noise standard deviation values and very small numbers of samples, a bias in the estimation of the initial phase does occur, as demonstrated by the numerical simulations carried out. In essence, these results tell us that an approximation using only the first term of the Taylor series expansion is insufficient in this particular case. This is important when statistically characterizing the estimations made and adding information about the expected bias of the results. It also serves as a warning of the dangers of using an unusually small number of samples. However, this work did not establish a bound for this error, as it arises only in situations that rarely occur in practice.

Author Contributions

Conceptualization, D.P., L.X. and F.A.C.A.; methodology, F.A.C.A.; software, D.P. and F.A.C.A.; investigation, D.P., L.X. and F.A.C.A.; writing—original draft preparation, D.P. and F.A.C.A.; writing—review and editing, L.X. and F.A.C.A.; supervision, F.A.C.A. All authors have read and agreed to the published version of the manuscript.

Funding

This research was developed at Instituto de Telecomunicações (IT) and was supported by the Portuguese Science and Technology Foundation (FCT), under projects UIDBASE-LX-UIDB/50008/2020, POCTI/ESE/46995/2002 and FCT/2022.72436.CPCA.A0, and by Instituto de Telecomunicações under the project IT/LA/295/2005, whose support the authors gratefully acknowledge.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data presented in this study are available on request from the corresponding author.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Han-Xiong, L.; Lee, T.H.; Yu, W. Model-Based Parameter Estimation: Theory and Applications; Springer: Berlin/Heidelberg, Germany, 2022. [Google Scholar]
  2. Mercuri, M.; Liu, Y.; Young, A.; Torfs, T.; Bourdoux, A.; Van Hoof, C. Digital IF phase-tracking doppler radar for accurate displacement measurements and vital signs monitoring. In Proceedings of the 2017 IEEE MTT-S International Microwave Symposium (IMS), Honolulu, HI, USA, 4–9 June 2017; pp. 999–1002. [Google Scholar]
  3. Carta, A.; Locci, N.; Muscas, C. A PMU for the Measurement of Synchronized Harmonic Phasors in Three-Phase Distribution Networks. IEEE Trans. Instrum. Meas. 2009, 58, 3723–3730. [Google Scholar] [CrossRef]
  4. Ristea, N.C.; Anghel, A.; Ionescu, R.T. Estimating the Magnitude and Phase of Automotive Radar Signals Under Multiple Interference Sources with Fully Convolutional Networks. IEEE Access 2021, 9, 153491–153507. [Google Scholar] [CrossRef]
  5. Santamaria-Caballero, I.; Pantaleon-Prieto, C.J.; Ibanez-Diaz, J.; Gomez-Cosio, E. Improved procedures for estimating amplitudes and phases of harmonics with application to vibration analysis. IEEE Trans. Instrum. Meas. 1998, 47, 209–214. [Google Scholar] [CrossRef]
  6. Vucijak, N.M.; Saranovac, L.V. A Simple Algorithm for the Estimation of Phase Difference Between Two Sinusoidal Voltages. IEEE Trans. Instrum. Meas. 2010, 59, 3152–3158. [Google Scholar] [CrossRef]
  7. Lario-Garcia, J.; Pallas-Areny, R. Analysis of a three-component impedance using two sine waves. In Proceedings of the 20th IEEE Instrumentation Technology Conference, Vail, CO, USA, 20–22 May 2003; Volume 2, pp. 1282–1284. [Google Scholar] [CrossRef]
  8. Rahmani, H. A new digital phase estimation technique based on centroid formula. Electr. Power Syst. Res. 2022, 22, 108631. [Google Scholar] [CrossRef]
  9. Kollar, I.; Blair, J.J. Improved determination of the best fitting sine wave in ADC testing. In Proceedings of the 21st IEEE Instrumentation and Measurement Technology Conference, Como, Italy, 18–20 May 2004; Volume 2, pp. 829–834. [Google Scholar] [CrossRef]
  10. Corrêa Alegria, F.A. Bias of Amplitude Estimation Using Three-Parameter Sine Fitting in the Presence of Additive Noise. Measurement 2009, 42, 748–756. [Google Scholar] [CrossRef]
  11. Corrêa Alegria, F.A.; Cruz Serra, F. Gaussian Jitter Induced Bias of Sine Wave Amplitude Estimation Using Three-Parameter Sine Fitting. IEEE Trans. Instrum. Meas. 2010, 59, 2328–2333. [Google Scholar] [CrossRef]
  12. Dalt, N.; Sheikholeslami, A. Understanding Jitter and Phase Noise: A Circuits and Systems Perspective; Cambridge University Press: Cambridge, UK, 2018. [Google Scholar]
  13. Carbone, P.; Schoukens, J. A Rigorous Analysis of Least Squares Sine Fitting Using Quantized Data: The Random Phase Case. IEEE Trans. Instrum. Meas. 2014, 63, 512–530. [Google Scholar] [CrossRef]
  14. Corrêa Alegria, F.A. Contribution of jitter to the error of amplitude estimation of a sinusoidal signal. Metrol. Meas. Syst. 2000, 16, 465–478. [Google Scholar]
  15. Plotkin, E.I.; Roytman, L.M.; Swamy, M.N. Estimation of an Initial Phase Over Extremely Short Record-Length of a Sinewave Signal. IEEE Trans. Instrum. Meas. 1985, 34, 624–629. [Google Scholar] [CrossRef]
  16. Cohen, G.; Francos, J.M. Least Squares Estimation of 2-D Sinusoids in Colored Noise: Asymptotic Analysis. IEEE Trans. Inf. Theory 2002, 48, 2243–2252. [Google Scholar] [CrossRef]
  17. Corrêa Alegria, F.A. Precision of sinewave amplitude estimation in the presence of additive noise and quantization error. J. Electr. Eng. 2023, 74, 374–381. [Google Scholar] [CrossRef]
  18. Kay, S.M. Modern Spectral Estimation: Theory and Application; Prentice Hall: Hoboken, NJ, USA, 1998. [Google Scholar]
  19. Kootsookos, P.J.; Williamson, R.C. Fitting a sinusoid using a linear model. IEEE Trans. Signal Process. 1999, 47, 1363–1367. [Google Scholar] [CrossRef]
  20. Papoulis, A. Probability, Random Variables and Stochastic Processes, 3rd ed.; McGraw-Hill: New York, NY, USA, 1991; p. 156. [Google Scholar]
  21. Corrêa Alegria, F.A. Phase Estimation Using Sine Fitting with Low Number of Samples is Biased in the Presence of Additive Noise. Glob. J. Eng. Technol. Adv. 2023, 17, 120–130. [Google Scholar]
Figure 1. Numerical simulation of the error of the estimated initial phase of a sinewave with 1 V amplitude and an initial phase of π/5 rad as a function of the phase noise standard deviation. The circles represent the values obtained via Monte Carlo analysis. The confidence intervals for a confidence level of 99.9% are represented by the vertical bars and were obtained with $10^6$ repetitions (R). The solid line represents the theoretical value, which is null. This situation is for a case where 20 samples are acquired (M).
Figure 2. Numerical simulation of the error of the estimated initial phase of a sinewave with 1 V amplitude and an initial phase of π/5 rad as a function of the phase noise standard deviation. The circles represent the values obtained via Monte Carlo analysis. The confidence intervals for a confidence level of 99.9% are represented by the vertical bars and were obtained with $10^6$ repetitions (R). The solid line represents the theoretical value, which is null. This situation is for a case where three samples are acquired (M).
Figure 3. Numerical simulation of the error of the estimated initial phase of a sinewave with 1 V amplitude and an initial phase of π/5 rad as a function of the number of samples for a phase noise standard deviation of 10 rad. The circles represent the values obtained via Monte Carlo analysis. The confidence intervals for a confidence level of 99.9% are represented by the vertical bars and were obtained with $10^6$ repetitions (R). The solid line represents the theoretical value, which is null.
Figure 4. Numerical simulation of the error of the estimated initial phase as a function of the phase noise standard deviation. The circles represent the values obtained via Monte Carlo analysis. The confidence intervals for a confidence level of 99.9% are represented by the vertical bars. The solid line represents the theoretical value, which is null.