Article

Application of the Complex Moments for Selection of an Optimal Sensor

by Raoul R. Nigmatullin * and Vadim S. Alexandrov
Radioelectronics and Informative-Measurement Technics Department, Kazan National Research Technical University named after A.N. Tupolev (KNRTU-KAI), K. Marx Str. 10, 420111 Kazan, Tatarstan, Russia
* Author to whom correspondence should be addressed.
Sensors 2021, 21(24), 8242; https://doi.org/10.3390/s21248242
Submission received: 25 October 2021 / Revised: 30 November 2021 / Accepted: 7 December 2021 / Published: 9 December 2021
(This article belongs to the Section Physical Sensors)

Abstract

For the first time, we apply the statistics of the complex moments to the selection of an optimal pressure sensor (from an available set of sensors) based on its statistical/correlation characteristics. The complex moments contain an additional source of information and therefore allow the comparison of random sequences registered for almost identical devices or gadgets. The proposed general algorithm calculates 12 key correlation parameters in the significance space, and these correlation parameters realize the desired comparison. The new algorithm is rather general and can be applied to other data sets if they are presented in the form of rectangular matrices, each containing N data points and M columns associated with repeated measurement cycles. In addition, we want to underline that the value of the correlations evaluated with the help of the Pearson correlation coefficient (PCC) has a relative character. One can also introduce external correlations based on the statistics of the fractional/complex moments, which form a complete picture of the correlations. To the PCC value of the internal correlations one can add at least seven additional external correlators evaluated in the space of the fractional and complex moments in order to make a justified choice. We suppose that the proposed algorithm (containing an additional source of information in the complex space) can find wide application in the treatment of different data, where it is necessary to select the "best" sensors/chips based on their measured data, usually presented in the form of random rectangular matrices.

1. Introduction and Formulation of the Problem

The widespread use of sensor systems in various technological and design solutions requires a more detailed assessment of the output parameters of the physical quantities converted into an electric signal. It is necessary to understand exactly how close the received signal is to the real one; that is, the final result, which contains a certain hidden error, must always be evaluated. A large number of solutions have been proposed that form the fundamental basis for selecting efficient methods of processing the measured data [1,2,3,4,5,6,7,8,9,10]. When the analyzed data source is easily fitted by well-known analytical dependences that follow from probability theory or from some proposed model, sensor calibration problems usually do not arise. The situation becomes much more complicated for complex data arrays. In this case, it is not possible to establish the optimal distribution law because of the influence of many random factors. To solve the problem in this complex case, the basic statistical methods of data processing, briefly reviewed below, are used.
Statistical test method (Monte Carlo method). In this case, in order to estimate some unknown parameter, it is necessary to carry out a set of tests and determine the arithmetic mean of the resulting values. It should be noted that, as a result, we obtain an approximate evaluation of the desired value with a certain error: repeating a series of experiments N times drives the initial distribution, according to the central limit theorem, toward the normal one, but the actual result can differ significantly from these random tests.
The least squares method is based on minimizing the sum of the squared deviations of the fitting function (test data) from the original (reference data). This approach is quite simple to implement; however, it inevitably brings the following difficulties: limited accuracy of the adjusted parameters and a strong binding to the chosen time interval. The method also implies that the fitting function is known, which in many cases it is not.
The maximum likelihood method (based on the calculation of the joint sampling function) determines the parameters of the general population that maximize the likelihood function. The disadvantage of this approach is that the distribution law of the data array must be known a priori. However, a researcher should keep in mind that random data sequences do not generally follow well-known distribution laws, so this basic requirement cannot be satisfied, especially when one deals with sequences that are close to trendless fluctuations/noise.
Hypothesis testing (F-, t-, and chi-square distributions). In analyzing various types of data, a potential researcher makes some assumption about the correspondence (even a remote and unjustified one) of the studied dataset to a certain distribution law. Hypotheses are tested using various known criteria [1,2,3,4,5]. For example, the F-distribution is the ratio of two independent quantities scaled relative to the original data; a useful feature here is the specific transformation of the original dataset into a new key set. When it is necessary to assess the scatter of some parameters (analysis of variance), the chi-square method [2,3] is used. When the initial sampling is small, Student's t-test can be used to estimate the average of the selected values [1,2,3,4,5,6,7]. These methods work well within probability theory; however, in the case of trendless sequences, it is necessary to evaluate the data sets by other, more accurate and reliable methods.
Analysis of variances and regressions. In this approach one has to take into account the influence of many external factors (input variables) on the parameters of the studied system (dependent variables), including the studied data, which are considered as output variables. However, the calculation of these parameters relies on the central limit theorem, i.e., on the normal distribution. In addition, the absence of correlations and of mutual influence among the studied parameters is assumed a priori. Unfortunately, this assumption introduces an uncontrollable error that is rather difficult to evaluate.
Time series analysis. This procedure is intended to eliminate the statistical fluctuation component and determine the initial dependence based on the controlled parameters [1,8,9,10]. However, a problem arises here that is associated with finding the controlled (key) parameters, which in many cases remain unknown. In addition, owing to the finite time interval, it is not always possible to analyze the behavior of the function outside the studied interval, which can introduce a significant error. Therefore, it is advisable to move from time frames to discrete (counted) data points. The advantages and disadvantages of these basic methods are collected in Table 1.
Based on the brief analysis of the conventional methods listed in Table 1, two general classes of disadvantages inherent in the existing methods can be underlined:
(1) The presence of an uncontrolled model (in many cases it is difficult to evaluate the limits and drawbacks of the proposed model) and numerous treatment errors caused by the selection of the mathematical method and by the approximate evaluation of a random variable.
(2) The requirement of a priori knowledge of the distribution law for the set of chosen random variables and the reduction of the investigated data array to the normal distribution law, guided by the central limit theorem. In real situations it is difficult to satisfy the requirements of this theorem, and a potential researcher has no means of verifying them.
This brief analysis of the existing methods allows us to put forward the following question:
Is it possible to propose a "universal" method that is free from model assumptions and treatment errors and can be applied to any set of random functions?
Such a method should contain an additional source of information in another space. We mean that, besides the conventional temporal and frequency spaces associated through the Fourier transformation, one can introduce a space of complex numbers associated with the complex moments. The method should also evaluate a set of desired correlations that follow from independent expressions. From our point of view, the answer lies in a generalization of the well-known Pearson correlation coefficient (PCC) and of the existing concept of the integer moments. In this paper, we make a further step and propose the concept of the complex moments, which gives us four additional mathematical expressions suitable for independent evaluations of the external correlations. These expressions can be considered as an additional source of information for the evaluation of the desired correlations. In addition, with the help of independent evaluations of the ranges (see expression (13) below) one can compare all independent correlations together and choose the optimal sensor based on the statistical analysis of the measured pressure data.
The paper is organized as follows. In the second section we present the theory of the complex moments and obtain expressions useful for the further data treatment. In the third section we give the measurement details related to obtaining the desired set of data. In the fourth section we propose the treatment algorithm and obtain the necessary results. In the final section, we draw the necessary conclusions and discuss some details that can be useful for potential researchers dealing with their own random data.

2. Statistics of the Complex Moments

Correlation analysis plays a key role in data and signal processing. Any manual on conventional statistics and many other books and papers are devoted to this important subject [11,12,13,14,15,16,17,18,19]. Why is this type of analysis significant? The evaluation of different correlations is especially important in complex systems, where a fitting function that could be derived from a simple model is absent in many cases.
The first generalization from the integer moments to a set of fractional moments was made in paper [20] and was then applied successfully to available data [21,22,23]. The conventional pair correlation coefficient for two sequences y1(j) and y2(j) can be written as
$$\cos(\theta) = \frac{(y_1 y_2)}{\sqrt{(y_1 y_1)\,(y_2 y_2)}}, \qquad (y_1 y_2) = \sum_{j=1}^{N} y_1(j)\, y_2(j) \tag{1}$$
However, an attentive analysis shows that the conventional Pearson correlation coefficient (PCC) cannot cover the whole variety of "resemblances" that exist between the compared random sequences. From our point of view, the value of the pair correlations obtained with the help of expression (1) has a relative character. A generalization of this important formula, describing some internal correlations, becomes possible if one generalizes the concept of the integer moments and introduces the fractional moments that cover the whole admissible interval of the real moments. In papers [21,22,23] and in the recent book [24], one of the authors (RRN) introduced the definition of the generalized Pearson correlation function (GPCF) for a pair of chosen random sequences y1(j), y2(j) (j = 1, 2, …, N), based on the generalized mean value function Gp(y1, y2):
$$GPCF_p = \frac{G_p(y_1, y_2)}{\sqrt{G_p(y_1, y_1)\, G_p(y_2, y_2)}}, \qquad G_p(y_1, y_2, \ldots, y_k) = \left( \frac{1}{N} \sum_{j=1}^{N} \left| y_{n1}(j)\, y_{n2}(j) \cdots y_{nk}(j) \right|^{mom_p} \right)^{\frac{1}{mom_p}} \tag{2}$$
Here, the set of functions ynk(j), k = 1, 2, …, K defines the normalized values of the compared sequences, located in the interval [0, 1]. The normalized value yn(j) is defined as
$$y_n(j) = \begin{cases} \dfrac{y_j + |y_j|}{\max\left(y_j + |y_j|\right)}, \\[2ex] \dfrac{y_j - |y_j|}{\min\left(y_j - |y_j|\right)}, \\[2ex] \dfrac{y_j - \min(y_j)}{\max(y_j) - \min(y_j)} \quad \text{(for an initially positive sequence)} \end{cases} \tag{3}$$
The values of the moments are taken from the interval
$$mom_p = \exp\left(-r + \frac{2p}{P}\, r\right), \qquad e^{-r} \le mom_p \le e^{r}, \qquad p = 0, 1, \ldots, P \tag{4}$$
The interval chosen in (4) for the set of moments covers practically all positive values of the real moments and, therefore, can be considered as an optimal one. Usually, the limiting value of r in (4) is taken from the interval (10–15), and this selected interval is sufficient for the evaluation of all possible correlations given by expression (2). We should also notice that (2), being located in the interval [0, 1], has a "universal" behavior: at negative values of the argument ln(momp) it tends to unity, while at positive values it reaches the limiting value L at p = P. The value L, in turn, can take three kinds of values: (a) L = 1 corresponds to the case of strong correlations; (b) the interval M < L < 1 corresponds to intermediate correlations; (c) the case L = M (M being the minimal value of (2)) corresponds to weak correlations. We should also notice that weak correlations always exist, and the complete absence of correlations between two random sequences can only be considered as a supposition. However, the absence of correlations can be prepared artificially by means of the Gram-Schmidt orthonormalization procedure widely used in quantum mechanics. This "universal" behavior of the function (2) allows us to determine the complete correlation factor as the following product
$$CF = L \cdot M, \qquad M^2 \le CF \le M \tag{5}$$
The complete correlation factor CF characterizes the degree of correlation between a couple of chosen random sequences. The analysis based on expressions (2) and (4) has been tested on many real and mimic data [11,22,23,24]. In addition, one can introduce a parameter that defines the class of correlations described above:
$$Cls = \frac{L - M}{1 - M} \tag{6}$$
If L = 1, M ≠ 1, and 0.8 ≤ Cls ≤ 1, then CF is referred to the high-correlation (HC) case. When L ≅ M and 0 ≤ Cls < 0.2, CF is referred to the low-correlation (LC) case. When Cls occupies an intermediate position (0.2 < Cls < 0.8), we deal with the intermediate-correlation (IM) case. We should also note the case M = 1, when the minimum of (2) is absent and the compared functions are identical to each other; in this case we put Cls = 0/0 = 0.
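To make the procedure of expressions (2)-(6) concrete, the following minimal Python sketch evaluates the GPCF over the moment grid (4) and extracts L, M, CF and Cls for two synthetic sequences. It is an illustration only, not the authors' code: the default r = 5 is smaller than the interval (10-15) recommended above (a log-domain implementation is advisable for larger r), and a small epsilon guards the normalized values against exact zeros.

```python
import numpy as np

def normalize_01(y):
    """Min-max normalization to [0, 1] (the last case of expression (3))."""
    y = np.asarray(y, dtype=float)
    return (y - y.min()) / (y.max() - y.min())

def gmv(y1, y2, mom, eps=1e-12):
    """Generalized mean value function G_p(y1, y2) from expression (2) for a real moment mom."""
    return np.mean((np.abs(y1 * y2) + eps) ** mom) ** (1.0 / mom)

def gpcf(y1, y2, r=5.0, P=200):
    """Generalized Pearson correlation function (2) evaluated on the moment grid (4)."""
    p = np.arange(P + 1)
    moms = np.exp(-r + 2.0 * p * r / P)              # e^{-r} <= mom_p <= e^{r}
    vals = np.array([gmv(y1, y2, m) / np.sqrt(gmv(y1, y1, m) * gmv(y2, y2, m))
                     for m in moms])
    return moms, vals

# Illustrative usage on two correlated synthetic sequences
rng = np.random.default_rng(0)
x = rng.normal(size=1000)
y1 = normalize_01(x)
y2 = normalize_01(0.7 * x + 0.3 * rng.normal(size=1000))
moms, g = gpcf(y1, y2)
L, M = g[-1], g.min()                                # limiting value at p = P and minimum of (2)
CF = L * M                                           # expression (5)
Cls = (L - M) / (1.0 - M) if M < 1.0 else 0.0        # expression (6)
print(f"L = {L:.4f}, M = {M:.4f}, CF = {CF:.4f}, Cls = {Cls:.4f}")
```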
However, in order to close the problem with correlations it is necessary to make the next step and consider the generalized Pearson correlation function based on the total set of the moments, including their complex parts. Therefore, we define the generalized Pearson correlation function for the complex moments (GPFCM) by the following expression
$$GPFCM_p = \frac{GMVC_p(y_1, y_2)}{\sqrt{GMVC_p(y_1, y_1)\, GMVC_p(y_2, y_2)}}, \qquad GMVC_{Z_p}(y_1, y_2, \ldots, y_k) = \left( \frac{1}{N} \sum_{j=1}^{N} \left| y_{n1}(j)\, y_{n2}(j) \cdots y_{nk}(j) \right|^{Z_p} \right)^{\frac{1}{Z_p}} \tag{7}$$
In expression (7), the initial set of random sequences yn1(j), yn2(j), …, ynk(j) is normalized to the interval [0, 1] in accordance with expression (3); the value of Zp is complex: it contains two parts located in the following intervals:
$$Z_p = R_p + i\,\Omega_p, \qquad R_p = \exp\left(-L + \frac{2Lp}{P}\right), \qquad \Omega_p = \Omega_0 + \frac{p}{P}\left(\Omega_L - \Omega_0\right),$$
$$e^{-L} \le R_p \le e^{L}, \qquad \Omega_0 \le \Omega_p \le \Omega_L, \qquad p = 0, 1, \ldots, P \tag{8}$$
In order to separate the real and imaginary parts of the numerator of expression (7), we introduce the following sums
$$Sr_p(1,2) = \frac{1}{N} \sum_{j=1}^{N} \left( y_{n1}(j)\, y_{n2}(j) \right)^{R_p} \cos\left[ \Omega_p \ln\left( y_{n1}(j)\, y_{n2}(j) \right) \right],$$
$$Sm_p(1,2) = \frac{1}{N} \sum_{j=1}^{N} \left( y_{n1}(j)\, y_{n2}(j) \right)^{R_p} \sin\left[ \Omega_p \ln\left( y_{n1}(j)\, y_{n2}(j) \right) \right] \tag{9}$$
With the help of expressions (9) the numerator of expression (7) can be presented in the form.
$$G_{Z_p}(1,2) = \left( \frac{1}{N} \sum_{j=1}^{N} \left( y_{n1}(j)\, y_{n2}(j) \right)^{Z_p} \right)^{\frac{1}{Z_p}} = \left| G_{Z_p}(1,2) \right| \exp\left( i\, \Phi_p(1,2) \right),$$
$$\left| G_{Z_p}(1,2) \right| = \exp\left[ \frac{\ln \left| S_p(1,2) \right|}{|Z_p|} \right], \qquad \Phi_p(1,2) = \varphi_p(1,2) - \tan^{-1}\left( \frac{\Omega_p}{R_p} \right) \tag{10}$$
The values $|S_p(1,2)|$ and $\varphi_p(1,2)$ are determined with the help of expressions (9) as
$$\left| S_p(1,2) \right| = \sqrt{Sr_p^2(1,2) + Sm_p^2(1,2)}, \qquad \varphi_p(1,2) = \tan^{-1}\left( \frac{Sm_p(1,2)}{Sr_p(1,2)} \right) \tag{11}$$
The expressions in the denominator of (7) are obtained easily with the help of the last expressions (9)-(11), where it is only necessary to replace the corresponding sequences (1↔2). Finally, expression (7), after separation of the real and imaginary parts, can be presented in the form
$$GPFCM_p = \frac{\left| G_{Z_p}(1,2) \right|}{\sqrt{\left| G_{Z_p}(1,1) \right| \left| G_{Z_p}(2,2) \right|}} \exp\left( i \left[ \Phi_p(1,2) - \frac{1}{2}\left( \Phi_p(1,1) + \Phi_p(2,2) \right) \right] \right) \tag{12}$$
The other expressions entering (12) are defined in (10). Expression (12) is the final formula that generalizes the previous results obtained earlier in papers [21,22,23,24] for the case of real fractional moments. With the help of the new expressions (10)-(12) we want to show that a possible generalization of the concept of correlations based on the conventional definition (1) has a relative character. It means that the correlation is transformed from a fixed number into a distribution; the values can be arbitrary, occupy the whole admissible interval [−1, 1], and even exceed it. These expressions will be used in Section 4 for the evaluation of the different types of correlations given by the complex moments.
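A minimal numerical sketch of expressions (9)-(12), as reconstructed above, is given below. It computes the modulus and phase of the complex GMV-function through the sums Sr and Sm and then assembles GPFCM_p on the grid (8). The grid limits L = 2, Ω0 = −5, ΩL = 5 are illustrative assumptions only, and the small constant eps is a numerical guard against log(0).

```python
import numpy as np

def gmvc(y1, y2, R, Om, eps=1e-12):
    """Modulus and phase of the complex GMV-function, expressions (9)-(11),
    for sequences normalized to [0, 1]; eps guards the logarithm against zeros."""
    prod = np.abs(y1 * y2) + eps
    Sr = np.mean(prod ** R * np.cos(Om * np.log(prod)))     # Sr_p(1,2), expression (9)
    Sm = np.mean(prod ** R * np.sin(Om * np.log(prod)))     # Sm_p(1,2), expression (9)
    S_abs = np.hypot(Sr, Sm)                                # |S_p(1,2)|, expression (11)
    phi = np.arctan2(Sm, Sr)                                # phi_p(1,2), expression (11)
    G_abs = np.exp(np.log(S_abs) / np.hypot(R, Om))         # |G_Zp(1,2)|, expression (10)
    Phi = phi - np.arctan2(Om, R)                           # Phi_p(1,2), expression (10)
    return G_abs, Phi

def gpfcm(y1, y2, L=2.0, Om0=-5.0, OmL=5.0, P=100):
    """Modulus and phase of GPFCM_p, expression (12), on the complex grid (8)."""
    p = np.arange(P + 1)
    Rp = np.exp(-L + 2.0 * L * p / P)
    Omp = Om0 + p / P * (OmL - Om0)
    mod, ph = [], []
    for R, Om in zip(Rp, Omp):
        g12, f12 = gmvc(y1, y2, R, Om)
        g11, f11 = gmvc(y1, y1, R, Om)
        g22, f22 = gmvc(y2, y2, R, Om)
        mod.append(g12 / np.sqrt(g11 * g22))
        ph.append(f12 - 0.5 * (f11 + f22))
    return Rp, Omp, np.array(mod), np.array(ph)
```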
Finishing this section, we want to stress one important point. The concept of the complex moments, alongside the fractional moments introduced earlier, enriches the reduced "feature" space by an additional set of independent parameters and can help in differentiating "hidden" differences when the conventional methods do not "see" them. This becomes possible because the concept of the complex moments provides the additional formulae (10) and (12), which can be helpful in detecting the "hidden" differences in these complex cases.

3. Experimental Scheme and Measurement Details

The collection of the experimental data was carried out in accordance with the scheme shown in Figure 1.
Ten sensors were considered as the initial data source. Their operating principle is based on the conversion of the controlled mechanical pressure into an electrical signal. These measuring devices are installed in the fuel rail of the automotive system and are designed to regulate the fuel supply to each cylinder. Note that sensors 1 and 3 operate in a symmetrical mode (odd channels), as do sensors 2 and 4 (even channels); therefore, their output signals can be expected to be the same. For registration of the desired data, a personal computer with preinstalled Car Scanner software and the appropriate drivers was used. The connection was made using the OBD-II protocol, which is currently installed in all similar vehicles. To obtain more reliable information, the connection was made using a tested cable, excluding a wireless connection.
The cable was connected and the program was started with the car ignition off (the output signal from the sensors is absent). When the ignition is turned on, the output signal registered by the sensors is proportional to the engine speed. Further measurements were made only after the so-called "warm-up revolutions", i.e., when the "established idle speed" state was reached. By adjusting the throttle position with the accelerator pedal, the amount of air supplied to the combustion chamber was changed. As is known, the fuel mixture has a certain ratio and this ratio is kept unchanged. Consequently, a change in the angle of the flap position entails a change in the amount of fuel supplied to the given injector, which affects the output readings of the sensors in each cylinder. To obtain a more accurate result, the same experiment was repeated 10 times. As an output signal, one obtains an array of data, which was written to a computer file through the electronic control unit (ECU). The next step is to analyze the data and compare them with each other. We paid special attention to the operation of the sensors working in symmetric modes.
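As an illustration of how the repeated measurements can be assembled into the rectangular N × M matrix used below, a short sketch is given here; the file names and the CSV layout are hypothetical, since the actual export format of the Car Scanner software is not specified in the text.

```python
import glob
import numpy as np

# Hypothetical layout: one exported log per repetition (run_00.csv ... run_09.csv),
# each with a single column containing the sensor response in volts.
files = sorted(glob.glob("run_*.csv"))
runs = [np.loadtxt(f, delimiter=",", usecols=0) for f in files]
n = min(len(r) for r in runs)                      # align the repetitions to a common length N
data = np.column_stack([r[:n] for r in runs])      # rectangular N x M matrix, M = 10 cycles
print(data.shape)
```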
Of particular interest to us is the operation of sensors 2 and 4, which operate in a symmetric mode. The process of fault diagnosis is coded by the ECU system. It turned out that a lean mixture was registered in channel No. 2; therefore, an error of the corresponding type was stored in the memory of the computer system and displayed on the indicator block as a "check engine" warning.
When comparing several elements with each other, such concepts as "identical", "similar", "different", "completely different", etc., arise, which are usually estimated by some quantitative value called a statistical error or a fitting error. Whenever this error value is low/high enough, it is necessary to decide which of the tested elements is considered a priori as the reference one (i.e., the one that works in the mode established by the manufacturer with high values of the considered parameters), and which sensor will be the tested one (the subject of verification).
In our case, sensor number 4 in channels 2–4 can be considered a priori as the reference, since it does not generate errors, while the sensors of type 2, including the whole measured set (0–9), can be considered as the tested ones, since there is an error in the system memory connected with the selected cylinder. However, an important question related to the selection of the "reference" sensor remains: what kind of justified arguments can one add in order to select the desired reference sensor?
In the symmetrical system 2–4, the choice of sensor 4 can be argued as follows: the message about the malfunction of the second cylinder was recorded in the memory of the electronic control unit (ECU) as the error P0171 "too lean fuel-air mixture", which persisted for a long period of time. Therefore, for the analysis of the real state of channel 2, this sensor was accepted as the tested one. The discrepancy of the parameters was confirmed by a superficial analysis of the studied data using the built-in Pearson correlation coefficient (PCC). In addition, with the help of the more sensitive method associated with the calculation of the fractional and complex moments, there is a chance to establish more noticeable and significant differences in the studied channels operating in symmetric modes. The treatment algorithm is given below in Section 4.
To evaluate and confirm the high efficiency of the described method, we applied a similar analysis to channels 1–3 and established, as was assumed, a higher degree of correlation than for the previously described symmetric channel. Our assumptions were confirmed. Since there was no information about a malfunction of any sensor (1 or 3) in the system, the choice of the reference variant remains questionable, because in the symmetric mode the sensors 1–3 do not demonstrate significant differences. However, it should be noted that, in order to obtain a significant result, it is necessary to choose the reference and tested options based on some quantitative parameters.
In our opinion, such parameters can be the short-term fuel correction time (STFCT), which refers to instantaneous changes in the fuel mixture, and the long-term fuel correction time (LTFCT), which shows the change in the fuel mixture over a long period of time based on the indications of the short-term correction. These parameters can be obtained by connecting a diagnostic scanner according to the scheme in Figure 1. It should be noted that these parameters change infrequently in the ECU memory, and during the series of 10 tests they remained constant; therefore, they can be considered as stable. What kind of quantitative parameters can characterize this stable state? At the moment when the fuel-air mixture burns, the oxygen sensor located in the exhaust system of the car determines which component of the mixture is present in greater quantity: fuel or air. It should be noted that the readings of the oxygen sensor are completely different from those of the sensors considered earlier, since the correction system is located in a completely different node of the car. If all the fuel and air are burned in the required ratio, the correction factor gives a zero percentage. If an excess of fuel is detected, a correction with a "plus" sign occurs; if there is an excess of air, a correction with a "minus" sign occurs, for the two parameters LTFCT and STFCT, respectively. By evaluating the arithmetic mean of these parameters, it becomes possible to identify which of the channels is less affected by the found correction; the channel that operates in the more stable mode is considered as the reference. For the sensors under consideration, the values of the estimated parameters are listed in Table 2. Since it is not known exactly which values should be used, both the current results (taking the sign into account) and the absolute values should be analyzed.
Analyzing the results given in Table 2 for the two given channels, one can see that the correction value for sensor number 1 is smaller both with the sign taken into account and in absolute value. Based on this observation, we concluded that sensor number 1 will be the reference and sensor number 3 will be the tested sensor. It should be noted that the calculated results turned out to be quite close, which allows us to conclude that the given sensors operate in symmetrical modes. Nevertheless, the problem of selecting a reference sensor remains, and this problem should be solved in each particular case independently. Here, we have demonstrated a possible solution.

4. Proposed Algorithm and Data Treatment Procedure

The key problem that follows from the statistics of the complex moments can be formulated as:
What new informative component is added by the imaginary part of the complex moments in the feature space for evaluating the correlations (external and internal) more accurately (free of model and treatment errors) in comparison with the conventional PCC and other methods?
These new "informative units" extracted from the complex moments should be helpful in differentiating the initial data and detecting possible correlations between columns, if the initial data are presented, as usual, in the form of a rectangular N × M matrix having N rows (j = 1, 2, …, N, the number of data points) and M columns (m = 1, 2, …, M). More specifically, we want to demonstrate the solution of the following problem:
Is it possible to compare a couple of random functions in the space of the fractional and complex moments and obtain more adequate functions pretending to a more accurate "resemblance" to each other? This problem is facilitated if one of the random functions is chosen as the reference one. For the solution of this problem, it is necessary to normalize the initial data and make them completely dimensionless and close to each other with the help of the expression
$$yn_j = \frac{y_j - \langle y \rangle}{Range(y)}, \qquad \langle y \rangle = \frac{1}{N} \sum_{j=1}^{N} y_j, \qquad Range(y) = \max(y) - \min(y), \qquad Range(yn) = 1 \tag{13}$$
For the comparison of two random functions, we use a simple expression that can serve as an additional evaluation of the external correlations
$$Ecr(y_i, y_j) = \frac{Range(y_i) + Range(y_j)}{\max(y_i, y_j) - \min(y_i, y_j)} \tag{14}$$
Any random function y is located in a rectangle with sides Range(x) = max(x) − min(x), coinciding with the horizontal direction, and Range(y) = max(y) − min(y), coinciding with the height of the rectangle in the orthogonal direction. If the function of external correlations Ecr(yi, yj) lies in the interval [1, 2], the compared functions (read rectangles) cross each other. If this correlation function Ecr(yi, yj) lies in the interval (0, 1), the two compared functions (yi, yj) do not cross each other and, therefore, are uncorrelated. Expression (14) turns out to be useful for the evaluation of the external correlations of different random functions, especially in cases when a fitting function following from some proposed model is absent.
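The following short sketch illustrates expressions (13) and (14): the centering-and-range normalization and the external correlation parameter Ecr. The synthetic sequences and the random seed are placeholders only.

```python
import numpy as np

def normalize_range(y):
    """Expression (13): subtract the mean and divide by the range, so that Range(yn) = 1."""
    y = np.asarray(y, dtype=float)
    return (y - y.mean()) / (y.max() - y.min())

def ecr(yi, yj):
    """External correlation parameter Ecr, expression (14).
    Values in [1, 2]: the 'rectangles' of the two functions overlap; values in (0, 1): they do not."""
    joint = max(yi.max(), yj.max()) - min(yi.min(), yj.min())
    return ((yi.max() - yi.min()) + (yj.max() - yj.min())) / joint

# Two identical curves give Ecr = 2 (complete overlap)
rng = np.random.default_rng(1)
a = normalize_range(rng.normal(size=500))
b = normalize_range(rng.normal(size=500)) + 0.2   # shifted copy of another normalized curve
print(ecr(a, a), ecr(a, b))
```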
We choose the following correlation parameters, keeping in mind that the true value of a correlation has a relative character. These parameters are defined below and used as notations in Table 3, Table 4 and Table 5 (for sensors 2–4) and Table 6, Table 7 and Table 8 (for sensors 1–3), where the chosen data corresponding to the given pressure sensors are compared with each other. We should also note that the initial data can be compressed by means of the procedure of reduction to three incident points, which was widely used earlier in papers [25,26]. This procedure is simple and is used for testing the similarity of the initial data under compression. In addition, it decreases the computational cost and keeps all the basic peculiarities that are necessary for the further data processing. In our case the compression parameter is b = 100. Figure 2 and Figure 3 demonstrate the effectiveness of this procedure and show that in many cases the measured curves are self-similar/fractal. The reduced curves (compressed 100 times) are placed in the insets.
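The reduction procedure itself is described in [25,26]; the block-wise sketch below only illustrates the kind of compression meant here (b = 100), under the assumption that each block of b points is replaced by its minimal, mean and maximal values. The exact definition of the "incident points" should be taken from the cited papers.

```python
import numpy as np

def reduce_blocks(y, b=100):
    """Block-wise compression: split y into consecutive blocks of b points and keep the
    minimal, mean and maximal values of each block (a sketch of the reduction to
    'incident points' used in [25,26]; the exact definition may differ)."""
    y = np.asarray(y, dtype=float)
    n = (len(y) // b) * b                       # drop an incomplete trailing block
    blocks = y[:n].reshape(-1, b)
    return blocks.min(axis=1), blocks.mean(axis=1), blocks.max(axis=1)

# 263,600 measured points compressed with b = 100 give 2,636 points per reduced curve
```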
We chose the data averaged over all sensors of type 4 and accepted these data as the reference. The different sensors of class 2 are considered as the tested ones. We also have the data averaged over all the available data belonging to class 2, together with 10 separate datasets obtained for the different sensors of class 2. The same procedure was realized for the other class of sensors, 1–3: for this class, the average over class 1 is chosen as the reference, and the other data, referring to class 3, are considered as the tested sensors.
In order to analyze in depth all possible correlations and possible deviations from the reference data, we propose the following set of parameters:
CP1(s)—direct calculation of the external correlations with the help of expression (14) for the normalized data yn(s) obtained from (13);
CP2(s)—calculation of the external correlations for a couple of GMV-functions. These functions are evaluated with the help of the GMV function defined in (2);
CP3(s)—calculation of the complete correlation factor (CF) based on the internal correlations. The factor is evaluated from expression (5);
CP4(s)—Pearson correlation parameter (PCC). It is defined by expression (1);
CP5(s)—calculation of the correlation parameter that determines the class of correlations. For its evaluation we use expression (6);
CP6(s)—calculation of the ranges of the differences DFs = ys − y1, where y1 = Av_dat_4 (chosen as the reference data) and ys = Dat_s(2), s = 0, 1, …, 9 is the set of the tested data. For sensors 1–3, y1 = Av_dat_1 and ys = Dat_s(3);
CP7(s)—external correlation parameter that corresponds to the comparison of the absolute values of the complex moments for the pattern functions y1 = Av_dat_4 (y1 = Av_dat_1) with the others ys = Dat_s(2) (ys = Dat_s(3)) for s = 0, 1, …, 9. It is evaluated for the two module functions $|G_{Z_p}(y_1, y_1)|$ and $|G_{Z_p}(y_s, y_s)|$ from (10). We use the same notations as above;
CP8(s)—external correlations for the comparison of the two phases $\Phi_{Z_p}(y_1, y_1)$ and $\Phi_{Z_p}(y_s, y_s)$ obtained in the frame of the complex moments;
CP9(s)—external correlations of the absolute values $|G_{Z_p}(y_1, y_s)|$ and $\sqrt{|G_{Z_p}(y_1, y_1)|\,|G_{Z_p}(y_s, y_s)|}$ obtained in the comparison of the pair correlations for the complex moments;
CP10(s)—external correlations of the phase parts $\Phi_{Z_p}(y_1, y_s)$ and $\tfrac{1}{2}\left(\Phi_{Z_p}(y_1, y_1) + \Phi_{Z_p}(y_s, y_s)\right)$ obtained for the complex moments;
CP11(s)—ranges of the absolute values for the significant difference ys − y1;
CP12(s)—ranges of the phase parts for the significant difference ys − y1.
These 12 quantitative parameters (actually functions of s) turn out to be useful for obtaining the set of desired correlations. The selection criterion is to find the maximal values of the parameters CP1–5 and CP7–10 in the interval [1, 2] and the minimal deviations for the parameters CP6, 11, 12. If some tested curve ys collects the maximal number of such "scores" among the whole compared set (s = 0, 1, …, 9), it can be admitted as the "best" one from the statistical point of view. If a parameter CP1–12 pretending to be optimal yields too few of the desired "scores", one includes the values of the standard deviations, extending the selection interval to [max(y) − stdev(y), max(y)] (for the parameters CP1–5, CP7–10) and [min(y), min(y) + stdev(y)] (for the parameters CP6, 11, 12). This procedure helps to find the closest correlations that can form the "best" class. The "best" selected sensors (their correlated values in bold) are presented in Table 5 and Table 8.
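The score counting described above can be summarized by the following sketch. It assumes that the columns of Table 3 and Table 4 (without the two averaged rows) are available as arrays; the count obtained per row corresponds to the "Number of Optimal Parameters" column of Table 5 and Table 8, up to the exact interval convention used by the authors.

```python
import numpy as np

def count_scores(table, maximize, minimize):
    """Count, for every tested sensor (row index), how many correlation parameters fall
    into the 'best' interval: [max - stdev, max] for the correlators and
    [min, min + stdev] for the range parameters.
    `table` maps a parameter name (e.g. 'CP1') to a 1-D array over the tested sensors."""
    n = len(next(iter(table.values())))
    scores = np.zeros(n, dtype=int)
    for name, col in table.items():
        col = np.asarray(col, dtype=float)
        if name in maximize:
            scores += (col >= col.max() - col.std()).astype(int)
        elif name in minimize:
            scores += (col <= col.min() + col.std()).astype(int)
    return scores   # the sensor with the maximal score is admitted as the "best" one

# Example call (column arrays taken row-wise from Tables 3 and 4, averaged rows excluded):
# scores = count_scores(table,
#                       maximize={"CP1", "CP2", "CP3", "CP4", "CP5", "CP7", "CP8", "CP9", "CP10"},
#                       minimize={"CP6", "CP11", "CP12"})
```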
The meanings of the chosen correlation parameters presented in Table 3, Table 4, Table 6 and Table 7 are explained above.
To finish this section, we describe the universal criterion that was used for selecting the closest correlations. In each column we choose the three or four maximal correlations that are close to each other. Then these correlations are compared with the values from the other columns located in each line. The maximal number of correlators and the minimal number of ranges (for the parameters CP6, 11, 12) located in each line help to choose the optimal sensor.
In these tables, we underlined the most probable values, i.e., those close to the maximal values for the parameters CP1–4 and CP7–10; for CP6, 11–12 we underlined the parameters that are close to the minimal deviations. Obviously, we excluded the first two rows of these tables because they demonstrate the "ideal" case (the first row) and the effectiveness of the averaging procedure (the second row). All bolded values of a parameter CP lie in the interval [max(CP) − stdev(CP), max(CP)] or [min(CP), min(CP) + stdev(CP)], as appropriate. We chose the following criterion for the selection of the best pressure sensor: the maximal number of bolded entries in each row. In accordance with this criterion, we obtain the optimal selection given in Table 5 (sensors 2) and Table 8 (sensors 3).
The meanings of the remaining correlation parameters presented in Table 3 are also explained in the text. Here and below, the bolded numbers coincide with the maximal values in the columns CP1–CP5 and with the minimal value for CP6; the maximal/minimal value is additionally underlined.
Based on the parameters given in Table 5, one can conclude that the first place ("gold medal") belongs to sensors 9 and 6; the second place ("silver medal") can be divided between sensors 7, 8 and 3; and, finally, the third place ("bronze medal") belongs to sensors 1 and 2.
The meanings of the remaining correlation parameters presented in Table 7 are also explained in the text.
Based on the parameters given in Table 8, one can conclude that the first place ("gold medal") belongs to sensor 9; the second place ("silver medal") can be divided between sensors 3 and 6; and, finally, the third place ("bronze medal") belongs to sensor 5.
In addition, Figure 2, Figure 3 and Figure 4 explain the behavior of the reduced curves (shown in the insets of Figure 2) in the space of the fractional (Figure 3) and complex moments (Figure 4a,b), respectively. As one can see from these figures, the behavior of these curves is rather "universal": Figure 4a, corresponding to the module of the complex moments, forms a "resonance" curve with a maximum near the "zero" moment ln(momp). The phase behavior depicted in Figure 4b is also "universal": this curve demonstrates strong oscillations for negative values of Ωp and tends to zero for positive values of Ωp. This behavior allows one to select the fractional/complex moments for the analysis of a wide class of random curves.

5. Discussion and Basic Conclusions

In this paper, we have applied the statistics of the complex moments to the evaluation of additional correlations. To finish, one can draw the following conclusions:
• We propose a "universal" tool as a source of additional information for the detection of "hidden" correlations in many complex systems, when a specific model is absent and it is therefore difficult to differentiate the desired differences;
• For a reliable evaluation of the values of the internal correlations, the conventional PCC is not sufficient. As one can see from the paper, the statistics of the fractional/complex moments allows the correlations to be divided into two independent classes: (a) external correlations, which allow the comparison of samplings having different or equal numbers of data points; (b) internal correlations, which allow the comparison of samplings having equal numbers of data points only. As follows from this analysis, the concept of correlation has a relative character; it is impossible to obtain absolute values of the correlations;
• The method proposed in this paper is free from model and treatment errors and has a rather general character. It can be applied to any set of data (with a trend or without one);
• The algorithm for the application of these statistics is described in Section 4. We should stress here the usefulness of expressions (13) and (14), which make the initial data dimensionless and close to each other. Expression (14) for Ecr(y1, y2) also has a universal character: it can be applied to the comparison of any couple of random functions located in the given interval. We should stress that the combination (Ecr(y1, y2) − 1)·100% gives the degree of overlapping between the two functions y1 and y2: 100% corresponds to complete "fusion", while a value ≅ 0% signifies the absence of fusion. If this combination becomes negative, it shows the degree of "disconnection" between the two compared functions;
• The criterion for the selection of the "best" sensor is described in Section 4. This criterion is rather universal and can be applied to any data presented in the form of a rectangular matrix. Each column of this matrix describes the independent correlations given by the statistics of the fractional/complex moments, while each line of this matrix belongs to a compared part; in our case a "part" is associated with a sensor;
• As follows from the analysis of the specific data presented in Table 3, Table 4, Table 6 and Table 7, one can apply 6 external and independent correlations (CP1–2, CP7, 8, 9, 10) and a couple of internal correlations (CP3–4) for the reliable selection of an "optimal" sensor based on its correlation characteristics. In addition, one can evaluate the ranges of the relative differences (CP6, 11, 12) between the tested and the reference data, which also turn out to be useful for the selection of the "best" sensor.
Finishing this final section, one can say that a potential researcher receives a new and general tool for the analysis of different data, especially in cases when it is necessary to compare a couple of random functions and select the "best" one, provided that one of the compared functions is considered as the reference/pattern function.

Author Contributions

R.R.N.: conceptualization, validation, supervision and methodology. V.S.A.: measurements and software preparation. Both authors were involved in original draft preparation and in writing—review and editing. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

No institutional statement is required for this research.

Data Availability Statement

The sensors' measured data can be provided to interested readers upon request.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

ECU: Electronic control unit
GMV-function: Generalized mean value function
GPCF: Generalized Pearson correlation function
LTFCT: Long-term fuel correction time
OBD-II: On-board diagnostics, 2nd generation
PCC: Pearson correlation coefficient
STFCT: Short-term fuel correction time

References

1. Ramírez, J.G.; Brandt, S.; Cowan, G. Data Analysis: Statistical and Computational Methods for Scientists and Engineers. Technometrics 2000, 42, 312.
2. Geng, S.; Tan, C.; Niu, D.; Guo, X. Optimal allocation model of virtual power plant capacity considering Electric vehicles. Math. Probl. Eng. 2021, 4, 5552323.
3. Mandel, A.; Belyakov, A.; Semyenov, D. Expert-Statistical Processing of Data and the Method of Analogs in Solution of Applied Problems in Control Theory. Appl. Inf. Commun. Technol. 2008, 41, 3180–3185.
4. Kozierski, P.; Lis, M.; Krolikowski, A. Implementation of Fast Uniform Random Number Generator on FPGA. Pozn. Univ. Technol. Acad. J. 2014, 80, 167–173.
5. Khusainiva, R.; Shilkova, Z. Selection of appropriate statistical methods for research results processing. Math. Educ. 2016, 11, 303–315.
6. Moore, R.H.; Eadie, W.T.; Drijard, D.; James, F.E.; Roos, M.; Sadoulet, B. Statistical Methods in Experimental Physics. J. Am. Stat. Assoc. 1973, 68, 494.
7. Ghosh, J.; Delampady, M.; Samanta, T. An Introduction to Bayesian Analysis: Theory and Methods, 2nd ed.; Springer: New Delhi, India, 2006; pp. 38–144.
8. Bertsimas, D.; Sim, M. Tractable approximations to robust conic optimization problems. Math. Program. 2006, 107, 5–36.
9. Revuelta, J.; Maydeu-Olivares, A.; Ximénez, C. Factor Analysis for Nominal (First Choice) Data. Struct. Equ. Model. 2020, 27, 781–797.
10. Kempton, R.; Fox, P. Statistical Methods for Plant Variety Evaluation, 1st ed.; Chapman & Hall: London, UK, 2012; pp. 136–192.
11. Bastos, D.; Kowada, L.; Machado, R. On pseudorandom number generators. ACTA IMEKO 2020, 9, 128–135.
12. Skrobova, N. Statistical data analysis in the DANSS experiment. J. Phys. Conf. Ser. 2019, 1390, 012056.
13. Mandel, A.; Bordukov, D.; Dorofeyuk, A.; Dorofeyuk, Y.; Chernyavskiy, A. A Structural Prediction Concept for Railway State Forecasting Problem. IFAC-PapersOnLine 2015, 48, 1338–1342.
14. Mandel, A.; Vilms, M. Local Supply Chain Control Model with Unreliable Suppliers. IFAC-PapersOnLine 2016, 49, 437–442.
15. Chen, X.; Xie, M. A Split and conquer approach for extraordinary large data analysis. Stat. Sin. 2014, 24, 1655–1684.
16. Parvin, C. An Introduction to Multivariate Statistical Analysis, 3rd ed.; Anderson, T.W., Ed.; John Wiley & Sons: Hoboken, NJ, USA, 2004; pp. 981–982.
17. Blagus, N.; Subelj, L.; Weiss, G.; Bajec, M. Sampling promotes community structure in social and information networks. Phys. A Stat. Mech. Appl. 2015, 432, 206–215.
18. Greenacre, M. Compositional Data Analysis in Practice, 1st ed.; Chapman & Hall: London, UK, 2018; pp. 41–70.
19. Hastie, T.; Tibshirani, R.; Friedman, J. The Elements of Statistical Learning: Data Mining, Inference, and Prediction. Math. Intell. 2004, 27, 83–85.
20. Nigmatullin, R. The statistics of the fractional moments: Is there any chance to "read quantitatively" any randomness? Signal Process. 2006, 86, 2529–2547.
21. Pershin, S.M.; Bunkin, A.F.; Lukyanchenko, V.A.; Nigmatullin, R.R. Detection of the OH band fine structure in liquid water by means of new treatment procedure based on the statistics of the fractional moments. Laser Phys. Lett. 2007, 4, 809–813.
22. Nigmatullin, R.R.; Osokin, S.I.; Nelson, S.O. Application of fractional-moments statistics to data for two-phase dielectric mixtures. IEEE Trans. Dielectr. Electr. Insul. 2008, 15, 1385–1392.
23. Nigmatullin, R.R. Strongly correlated variables and existence of a universal distribution function for relative fluctuations. Phys. Wave Phenom. 2008, 16, 119–145.
24. Nigmatullin, R.; Lino, P.; Maione, G. New Digital Signal Processing Methods: Applications to Measurement and Diagnostics, 3rd ed.; Springer: Berlin/Heidelberg, Germany; New Delhi, India, 2020; pp. 1–433.
25. Nigmatullin, R.R.; Zhang, W.; Striccoli, D. General theory of experiment containing reproducible data: The reduction to an ideal experiment. Commun. Nonlinear Sci. Numer. Simul. 2015, 27, 175–192.
26. Nigmatullin, R.R.; Maione, G.; Lino, P.; Saponaro, F.; Zhang, W. The general theory of the Quasi-reproducible experiments: How to describe the measured data of complex systems? Commun. Nonlinear Sci. Numer. Simul. 2017, 42, 324–341.
Figure 1. Block diagram explaining the measurement procedure for the experimental data.
Figure 2. (a) On the left: the initial data containing 263,600 data points for the averaged data associated with the reference sensor (bold line); the red line corresponds to the averaged data for the class-2 sensors. In the upper right corner we place the same data reduced with the compression parameter b = 100; the correlation coefficient equals 0.987. The reduced data exhibit the scaling/fractal properties of the initial curves and accelerate the calculations. The vertical axis gives the value of the response in volts; the horizontal axis is dimensionless, the measured data points being normalized to the unit value. (b) Data for the reference function (bold black line) are shown on the right; the red line depicts the data for sensor 0 (3). In the left corner of the figure we depict the same data compressed b = 100 times. As in Figure 2a, the vertical axis gives the value of the response in volts and the horizontal axis is dimensionless, the measured data points being normalized to the unit value.
Figure 3. In the central figure we compare the GMV-functions for the same curves Av_dat_1-1 and Dat_0(3) calculated in the space of the fractional moments. In the lower right corner we depict the generalized Pearson correlation function (expression (2)) in the space of the fractional moments, placed in the same interval [−10, 10].
Figure 4. (a) The behavior of the initial functions (see Figure 2 above) in the space of the complex moments; the corresponding modules are shown. In the upper left corner we show the difference between these functions. (b) The behavior of the initial functions (see Figure 2 above) in the space of the complex moments; the corresponding phases are shown. In the upper left corner we show the sequences of the range amplitudes calculated for these functions. These functions are sensitive for the evaluation of the differences between them.
Table 1. The conventional statistical methods that are used in information processing.

| No. | Method | Advantages | Disadvantages |
|---|---|---|---|
| 1 | The Monte Carlo method | Simple structure of the computational algorithm; applicable to a wide class of mathematical models. | An uncontrolled error arises due to the approximate evaluation of a random variable; an acceptable result is possible only after a multiple increase in the number of tests. |
| 2 | The least squares method | Easy to implement; has a "universal" computational procedure. | Limited accuracy of the adjusted parameters; strong binding to the selected time interval. |
| 3 | The maximum likelihood method | Allows registering and processing both grouped and non-grouped information; low value of the variance and, accordingly, of the standard deviation. | Needs a priori information about the distribution law of the data array, which is difficult to provide for trendless sequences; a significant number of calculations is required. |
| 4 | Hypothesis testing (chi-square, F-test, t-test) | Allows verifying and describing the proposed hypothesis with a high degree of accuracy for most data types; usually these hypotheses are close to the Gaussian distribution. | An uncontrolled error arises due to the insensitivity of the method to data containing trendless noise. |
| 5 | Analysis of variances | Allows testing more complex hypotheses compared to the methods described above owing to factor analysis. | The absence of correlation and mutual influence of the studied parameters on each other is stated a priori. |
| 6 | Regression analysis | Gives an acceptable result for the adjusted data relative to the initial data on a finite time interval if a limited set of the studied factors is taken into account. | An uncontrolled error arises due to the inability to take into account the basic external factors on which the output value depends. |
| 7 | Time-series analysis | Allows determining the initial dependence on the controlled variable with a high degree of accuracy. | The problem of finding the controlled (key) parameters; for a fixed and finite time interval it is not possible to analyze the behavior of the function beyond the boundary of the studied interval, and such an attempt is accompanied by a significant uncontrolled error. |
Table 2. Values of the STFCT and LTFCT parameters registered for sensors 1 and 3.

| No. of Sensor | STFCT | LTFCT | (STFCT + LTFCT)/2 | Average of the absolute values |
|---|---|---|---|---|
| 1 | +0.04% | −0.02% | 0.01% | 0.03% |
| 3 | +0.06% | +0.04% | 0.05% | 0.05% |
Table 3. Comparison of different pressure sensors for the selection of the optimal one among the sensors of type 2. Sensor 4 is chosen as the pattern one.

| Parameters | CP1 | CP2 | CP3 | CP4 | CP5 | CP6 |
|---|---|---|---|---|---|---|
| Av_dat_4-4 | 2.00000 | 2.00000 | 1.00000 | 1.00000 | 0.00000 | 0.00000 |
| Av_dat_2-4 | 1.92061 | 1.98463 | 0.99090 | 0.99476 | 1.00000 | 0.55556 |
| Dat_0(2)-4 | 1.70812 | 1.87657 | 0.35850 | 0.50096 | 0.00000 | 0.73878 |
| Dat_1(2)-4 | 1.77456 | 1.97723 | 0.51981 | 0.77936 | 0.00000 | 0.82199 |
| Dat_2(2)-4 | 1.82209 | 1.98434 | 0.42453 | 0.76471 | 0.00000 | 0.59856 |
| Dat_3(2)-4 | 1.64490 | 1.90466 | 0.42725 | 0.91952 | 0.00000 | 0.55907 |
| Dat_4(2)-4 | 1.70848 | 1.89775 | 0.01748 | 0.41120 | 0.00000 | 1.00556 |
| Dat_5(2)-4 | 1.70610 | 1.89553 | 0.08288 | 0.75774 | 0.03199 | 0.90528 |
| Dat_6(2)-4 | 1.84175 | 1.95250 | 0.35258 | 0.91273 | 0.00000 | 0.77201 |
| Dat_7(2)-4 | 1.64295 | 1.94093 | 0.75510 | 0.60961 | 1.00000 | 0.52327 |
| Dat_8(2)-4 | 1.58442 | 1.93771 | 0.38956 | 0.89027 | 0.00000 | 0.50330 |
| Dat_9(2)-4 | 1.64399 | 1.95634 | 0.46726 | 0.93791 | 0.00000 | 0.45556 |
Table 4. Additional correlation parameters that were extracted from the complex moments.

| Parameters | CP7 | CP8 | CP9 | CP10 | CP11 | CP12 |
|---|---|---|---|---|---|---|
| Av_dat_4-4 | 2.00000 | 2.00000 | 2.00000 | 2.00000 | 0.00000 | 0.00000 |
| Av_dat_2-4 | 1.97672 | 1.96094 | 1.62537 | 1.96052 | 0.14781 | 0.62314 |
| Dat_0(2)-4 | 1.89042 | 1.97025 | 1.43929 | 1.78006 | 0.32158 | 0.37876 |
| Dat_1(2)-4 | 1.80958 | 1.91885 | 1.55008 | 1.93186 | 0.50223 | 0.51391 |
| Dat_2(2)-4 | 1.85315 | 1.94167 | 1.47640 | 1.84744 | 0.39057 | 0.45681 |
| Dat_3(2)-4 | 1.94629 | 1.95683 | 1.61756 | 1.87538 | 0.35610 | 0.50784 |
| Dat_4(2)-4 | 1.55217 | 1.91368 | 1.15797 | 1.80164 | 0.42078 | 0.95679 |
| Dat_5(2)-4 | 1.55215 | 1.89460 | 1.17895 | 1.91341 | 0.42383 | 0.99566 |
| Dat_6(2)-4 | 1.96605 | 1.91791 | 1.58189 | 1.94122 | 0.37310 | 0.52606 |
| Dat_7(2)-4 | 1.64515 | 1.83867 | 1.59583 | 1.92842 | 0.39639 | 1.11090 |
| Dat_8(2)-4 | 1.99556 | 1.97457 | 1.48013 | 1.83007 | 0.42879 | 0.36855 |
| Dat_9(2)-4 | 1.96199 | 1.92880 | 1.55389 | 1.91143 | 0.41355 | 0.53128 |
Table 5. The selection of the best sensor among the sensors numbered 2.

| The Available Set of the Selected Pressure Sensors | Number of Optimal Parameters |
|---|---|
| Dat_0(2)-4 | 2 |
| Dat_1(2)-4 | 4 |
| Dat_2(2)-4 | 4 |
| Dat_3(2)-4 | 6 |
| Dat_4(2)-4 | 0 |
| Dat_5(2)-4 | 2 |
| Dat_6(2)-4 | 7 |
| Dat_7(2)-4 | 5 |
| Dat_8(2)-4 | 5 |
| Dat_9(2)-4 | 7 |
Table 6. Similar parameters for the sensors 1–3. Sensor 1 is selected as the reference one.

| Parameters | CP1 | CP2 | CP3 | CP4 | CP5 | CP6 |
|---|---|---|---|---|---|---|
| Av_dat_1-1 | 2.00000 | 2.00000 | 1.00000 | 1.00000 | 0.00000 | 0.00000 |
| Av_dat_1-3 | 1.99003 | 1.99831 | 0.99755 | 0.99990 | 0.99991 | 0.02523 |
| Dat_0(3)-1 | 1.74616 | 1.95858 | 0.85306 | 0.90528 | 0.00000 | 0.55612 |
| Dat_1(3)-1 | 1.76948 | 1.98327 | 0.82897 | 0.85659 | 0.99358 | 0.52659 |
| Dat_2(3)-1 | 1.78993 | 1.98334 | 0.83776 | 0.87725 | 0.99896 | 0.52485 |
| Dat_3(3)-1 | 1.99544 | 1.99565 | 0.96327 | 0.98647 | 0.95587 | 0.30900 |
| Dat_4(3)-1 | 1.74988 | 1.96089 | 0.86640 | 0.91007 | 0.00799 | 0.55524 |
| Dat_5(3)-1 | 1.89731 | 1.99238 | 0.98029 | 0.97867 | 0.99978 | 0.33086 |
| Dat_6(3)-1 | 1.95304 | 1.99429 | 0.97197 | 0.98523 | 0.99753 | 0.31062 |
| Dat_7(3)-1 | 1.77347 | 1.98065 | 0.77719 | 0.85356 | 1.00000 | 0.55825 |
| Dat_8(3)-1 | 1.92388 | 1.99132 | 0.90513 | 0.96582 | 0.97290 | 0.38061 |
| Dat_9(3)-1 | 1.96922 | 1.99437 | 0.94730 | 0.98235 | 0.99679 | 0.32916 |
Table 7. Additional correlation parameters for the sensors 1–3 that were extracted from the complex moments.

| Parameters | CP7 | CP8 | CP9 | CP10 | CP11 | CP12 |
|---|---|---|---|---|---|---|
| Av_dat_1-1 | 2.00000 | 2.00000 | 2.00000 | 2.00000 | 0.00000 | 0.00000 |
| Av_dat_1-3 | 1.98055 | 1.98136 | 1.99998 | 1.92022 | 0.03030 | 1.23588 |
| Dat_0(3)-1 | 1.69569 | 1.96787 | 1.88438 | 1.99307 | 0.22103 | 0.67753 |
| Dat_1(3)-1 | 1.48682 | 1.98256 | 1.79888 | 1.94699 | 0.35405 | 1.06661 |
| Dat_2(3)-1 | 1.49624 | 1.98947 | 1.80194 | 1.92174 | 0.33250 | 1.01893 |
| Dat_3(3)-1 | 1.92799 | 1.97649 | 1.99932 | 1.96534 | 0.21478 | 0.81327 |
| Dat_4(3)-1 | 1.69850 | 1.96966 | 1.89341 | 1.89286 | 0.19699 | 0.74544 |
| Dat_5(3)-1 | 1.89501 | 1.96770 | 1.99855 | 1.95234 | 0.23908 | 0.75165 |
| Dat_6(3)-1 | 1.99082 | 1.99202 | 1.99440 | 1.94151 | 0.23857 | 0.85859 |
| Dat_7(3)-1 | 1.44803 | 1.98635 | 1.77841 | 1.98673 | 0.34385 | 0.83632 |
| Dat_8(3)-1 | 1.75915 | 1.97309 | 1.90003 | 1.98770 | 0.25560 | 0.97468 |
| Dat_9(3)-1 | 1.86734 | 1.99604 | 1.98785 | 1.97617 | 0.24269 | 0.79408 |
Table 8. The selection of the "best" sensor among the sensors numbered 3. Sensor 1 is kept as the reference.

| The Available Set of the Selected Pressure Sensors between 3-1 | Number of Optimal Parameters |
|---|---|
| Dat_0(3)-1 | 3 |
| Dat_1(3)-1 | 0 |
| Dat_2(3)-1 | 1 |
| Dat_3(3)-1 | 8 |
| Dat_4(3)-1 | 2 |
| Dat_5(3)-1 | 7 |
| Dat_6(3)-1 | 9 |
| Dat_7(3)-1 | 2 |
| Dat_8(3)-1 | 2 |
| Dat_9(3)-1 | 10 |
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
