1. Introduction
Perinatal mortality remains high throughout the world. According to the World Health Organization (WHO), an estimated 800 women die every day from pregnancy- or birth-related causes, and four out of ten pregnancies are considered high risk. Moreover, overcrowded hospital infrastructure often prevents the timely diagnosis of fetal distress.
Continuous remote monitoring technology could solve some of the problems of high-risk pregnancies. Patients would no longer need to be at the hospital; from the comfort of their homes, they could directly carry out and forward the studies prescribed by their physicians. The monitoring equipment could be configured so that the patient performs the corresponding studies without the need for a specialist. Patients may feel calm and reassured, knowing that their studies, once transmitted by the monitor, will be checked and assessed by a specialist.
Fetal monitoring technology faces reliability issues. When compared against intermittent auscultation, the use of technology reduces perinatal mortality [1,2,3]. However, other studies from the 1970s and 1980s did not show similar results [4], and the contribution of technology to the reduction in perinatal mortality observed in recent decades is still under discussion. On the other hand, the increase in rates of births via Caesarean section has become a matter of public interest in many countries, and there is a general belief that continuous monitoring during childbirth (cardiotocography, CTG) has contributed to this.
CTG is subject to poor inter-observer agreement, particularly in the assessment of variability and decelerations, as well as in the classification of tracings [5,6,7]. Diagnosis depends on the criteria used for tracing analysis [8], so objective guidelines are required to provide a practical approach.
An approach that improves the reliability of the data produced by the remote fetal monitor is necessary to increase diagnostic precision [9,10]. Fetal monitoring systems should yield reliable data regardless of the conditions of use (location, installation, number of samples, etc.) to support the review of fetal health and the prompt response of healthcare professionals.
In medical technology implementation, safe and efficient operation must be guaranteed [11]. Attention is focused on the practical implementation of technology within a healthcare environment. Simply having the necessary technology is not a whole solution: the processes of planning, evaluation, selection, and implementation of new technologies must be considered, both in concept and in context. This may directly address specific health technology assessments. When developing new technology, guaranteeing precision and reliability for the chosen environment contributes to safer results and sound medical attention.
Under these conditions, the main problem is to ensure the reliability of the data obtained by the remote fetal monitor, since healthcare professionals are not present and only a few samples of the variables (heart rate (HR) and uterine activity) are available for the detection of anomalies. Therefore, the analysis of measurement uncertainty becomes necessary.
Reliable estimation of measurement uncertainty requires guaranteeing certain properties (distribution independence, a large sample size, and a normal distribution). Healthcare professionals are not familiar with the measurement process carried out by remote fetal monitoring and, in general, only have a few samples. This means it is not possible to verify the hypotheses required for a good estimator. Several statistical works on uncertainty estimation have addressed cases where the hypotheses of the classical theory are not verified.
A fetal monitor measures a stochastic process with few samples available and is therefore very difficult to model by conventional methods. Different approaches have been used when the hypotheses of the classical theory are not verified, including the Monte-Carlo method (MCM) [12], parametric and non-parametric theory formulation [13], adaptive estimators [14], spectrum-based estimation [15], stochastic gradient [16] and mean square error [17] estimators with Cramer–Rao type restrictions, and reliability estimation under degradation measures [18].
The MCM is a widely used tool for modeling stochastic systems, since with a large enough sample, the probability that an estimator deviates from the expected value can be made as small as required. The MCM is used to obtain information about estimator properties: the smoothing and shape parameters of the estimating function are adjusted to modify the performance of maximum-likelihood estimators, and the MCM can also model unknown distribution functions. Parametric and non-parametric models are broad probability-function estimation methods; their theory has been developed considering local asymptotic behavior and restrictions on sample size. Adaptive estimators that guarantee consistency in finite samples for any distribution function are helpful in the fetal monitor application. The term adaptive conveys that such estimators adapt to the sample, using the data to estimate a non-parametric density function. Adaptive estimators have been applied with the least-squares method on finite samples, reaching substantial gains in efficiency, but under hypotheses that are difficult to verify in applications with few samples. The spectrum-based method is useful when the relationship between the observations and the model parameters is noisy, potentially non-linear, and not invertible. Stochastic gradient estimation is a numerically stable and robust method for fitting parameters and predicting standard errors. Although only the typical case was considered, with some modifications, the hypotheses that lead to the Cramer–Rao type bound for such estimators have been relaxed. The mean square error can be improved by implementing estimators with parametric restrictions that do not satisfy certain conditions; with a new definition of bias in restricted neighborhoods, a Cramer–Rao type restriction was determined for weighted mean-square-error estimators, but only for the normal distribution, and correlated perturbations were not considered.
For a system whose degradation measures are monotonic, reliability estimation can recover the failure-time distribution and, therefore, the system reliability.
The remote fetal monitor is applied in a specific context. The aim of this work is to obtain an integral maternal-fetal care program, built on a technological platform that allows the timely detection of perturbations in the fetal heart rate and that remotely sends information to an obstetric monitoring center for a timely and reliable diagnosis of the patient.
When modeling a process, the most complicated component of the model is represented in probabilistic terms; in many situations the hypotheses taken as valid are not met, and the results are not valid. This work presents a proposal of how, using the existing theory together with a deterministic model and the Monte-Carlo method [12], it is possible to apply probabilistic models that are genuinely representative of the application and that characterize the statistical properties used. In a specific application, the statistical properties of interest are generally the mean, variance, extreme values, dependence on sample size, bias, etc. [13,14,15,16,17]. In general, only the mean and variance parameters are considered to obtain a probabilistic model, because they characterize the uniform distributions that represent the worst-case scenario (total ignorance of the model) and the Gaussian distributions considered as typical or normal. However, estimating these parameters requires sampling, which yields additional information about the system model; in practice, this additional information is not used. This work proposes a new approach to include the experimental data and generate a representative probabilistic model without relying on suppositions that are difficult to verify. The suggested procedure is applied to a fetal monitor to correct the standard deviation estimate, considering the type of noise that occurs during the regular operation of the device.
2. Uncertainty Estimation Model
One of the main problems in metrology is the estimation of measurement uncertainty. According to [19], “the estimated standard deviation associated with the output, or measurement estimate, is called combined standard uncertainty, and it is determined from the estimated standard deviation associated with each input estimate, called standard uncertainty”. In general, the process to be measured is unknown, and regularly only a few samples are available. Measurements have defects that lead to an error in the measurement result. Although it is not possible to compensate for the random error of a measurement result, its effect can generally be reduced by increasing the number of observations, since its expected value is zero.
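This averaging effect can be illustrated with a minimal numerical sketch (the unit-variance Gaussian error model is an assumption for illustration only): the standard deviation of the mean of n observations decays as sigma/sqrt(n).

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical illustration: averaging n observations of a zero-mean random
# error reduces its effect; the standard deviation of the mean is sigma/sqrt(n).
sigma = 1.0
spread = {}
for n in (10, 100, 1000):
    # 5000 repetitions of averaging n observations each
    means = rng.normal(0.0, sigma, size=(5000, n)).mean(axis=1)
    spread[n] = means.std()  # empirically close to sigma / sqrt(n)
```

Each tenfold increase in the number of observations shrinks the spread of the mean by roughly a factor of sqrt(10).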
The modeling of stochastic systems has been thoroughly studied under the hypotheses of the law of large numbers. However, a measurement system is a stochastic system with a limited or finite number of samples, and the remote fetal monitor is such a system. An uncertainty estimation model for a measurement system with limited samples is presented below.
Consider a probability space $(\Omega, \mathcal{F}, P)$ for a random variable $\xi$, where the conditional expectation $E[\xi \mid \mathcal{G}]$ with respect to a sigma-algebra $\mathcal{G} \subseteq \mathcal{F}$ [20] may be expressed as the best mean-square estimator,
$$E[\xi \mid \mathcal{G}] = \arg\min_{\eta}\, E\left[(\xi - \eta)^2\right],$$
where $\eta$ ranges over the $\mathcal{G}$-measurable random variables.
This means that the best estimate for the random variable, when there is no a priori knowledge of the process, is the mathematical expectation, and the associated standard uncertainty ($u$) may be obtained from the probability density function (PDF) of the random variable [21]. Therefore, the mean is proposed to estimate the following sample, and the standard uncertainty is obtained from the standard deviation estimator.
Let $A$ be the event where the device does not fail; the reliability is by definition $R = P(A)$. If the indicator function $I_A$ is considered,
$$R = P(A) = E[I_A].$$
This means that the problem of calculating the reliability is transformed into the problem of estimating an expected value, for which the Monte-Carlo method is used in this work.
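This reduction can be sketched with a minimal Monte-Carlo example; the tolerance threshold and the Gaussian perturbation model below are hypothetical, chosen only for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical event A: the measurement error stays within a tolerance.
# Reliability R = P(A) = E[I_A], so averaging the indicator function over
# simulated draws gives a Monte-Carlo estimate of R.
tolerance = 2.0                          # assumed acceptance threshold
errors = rng.normal(0.0, 1.0, 100_000)   # assumed perturbation model

indicator = (np.abs(errors) < tolerance).astype(float)  # I_A for each draw
r_hat = indicator.mean()  # estimate of E[I_A] = P(A); ~0.954 for N(0,1), |e|<2
```

The Monte-Carlo error of `r_hat` shrinks with the number of simulated draws, which is what makes the method attractive when the analytic distribution is unknown.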
In the estimation of the mathematical expectation, the equation regularly used is
$$\hat{m} = \frac{1}{N} \sum_{i=1}^{N} \xi_i,$$
with $N$ being the sample number.
In the normal case with independent samples, the mean estimator is optimal in the sense of the mean square error. In addition, by the law of large numbers it is consistent for large samples, its distribution is known by the central limit theorem [20], and it is efficient [22].
In the general case, the estimator is a random variable that depends on the samples:
$$\hat{m} = \hat{m}(\xi_1, \xi_2, \ldots, \xi_N).$$
To measure the quality of this estimate, the standard deviation (called uncertainty in the metrology area) is used. This standard deviation is itself estimated with $\hat{\sigma}$, defined as
$$\hat{\sigma}^2 = \frac{1}{N-1} \sum_{i=1}^{N} \left(\xi_i - \hat{m}\right)^2;$$
for independent samples with the same distribution we get
$$\sigma_{\hat{m}} = \frac{\sigma}{\sqrt{N}}.$$
So, the problem of determining the quality of the estimate focuses on determining $\sigma$. However, we must identify the restrictions in the estimation.
A widespread practice is to consider that if $E[\hat{\sigma}^2] = \sigma^2$, then $E[\hat{\sigma}] = \sigma$. Nonetheless, defining a random variable $z = \hat{\sigma}$ and applying the Cauchy–Bunyakowskii–Schwartz inequality [20],
$$E[z] \leq \sqrt{E[z^2]},$$
that is,
$$E[\hat{\sigma}] \leq \sigma;$$
this inequality depends on all three of the following: the sample size $N$, the distribution function $F$, and the correlation dependence.
To a sample $\xi = (\xi_1, \ldots, \xi_N)$ there corresponds a density function $f(\xi; \theta)$, for which the score is defined as
$$s(\theta) = \frac{\partial}{\partial \theta} \ln f(\xi; \theta),$$
and the Fisher information matrix as
$$I(\theta) = E\left[s(\theta)\, s(\theta)^{T}\right].$$
The Cramer–Rao theorem is fulfilled [20]: if $\hat{\theta}$ is an unbiased estimator of $\theta$, then
$$\operatorname{cov}(\hat{\theta}) \geq I(\theta)^{-1},$$
and this bound also depends on the sample size $N$, the distribution $f$, and the correlation dependence.
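A quick numerical check of the bound can be sketched under an assumed Gaussian model, where the bound is attained by the sample mean.

```python
import numpy as np

rng = np.random.default_rng(8)

# For N i.i.d. samples of N(theta, sigma^2), the Fisher information for theta
# is I(theta) = N / sigma^2, so any unbiased estimator satisfies
# Var(theta_hat) >= sigma^2 / N; the sample mean attains this bound.
N, sigma = 20, 2.0
means = rng.normal(0.0, sigma, size=(50_000, N)).mean(axis=1)

empirical_var = means.var()          # Monte-Carlo variance of the estimator
cramer_rao_bound = sigma**2 / N      # theoretical lower bound = 0.2
```

For non-Gaussian or correlated samples the information matrix, and hence the bound, changes, which is the point made above for the fetal monitor.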
This means that, in the case of the fetal monitor, where there are few samples, it is possible to determine the boundaries for the reliability estimation and its uncertainty using the Monte-Carlo method.
3. Uncertainty Estimation Approach
This section describes a proposed approach to obtain a reliability estimator for heart rate measurement.
Reliability $R$, the probability that the measurement is correct, is a function of the measurement device, the perturbations (represented by their distribution function and their correlation), and the number of data used in the estimation.
To estimate the reliability and the uncertainty in the measurement, the following procedure is proposed:
System model: device model and perturbation model.
Distribution function estimation of the perturbations and correlation function.
Estimators properties analysis using the Monte-Carlo method.
Correction factors calculation to compensate estimators.
Confidence intervals for the estimation.
3.1. System Model
In the block diagram of Figure 1, a single-input single-output (SISO) system is shown in simplified form. The presence of noise is assumed a priori; in this case white-noise properties are attributed to it, and it appears as additive noise in the system output.
It is considered that an input–output data set of the system is available, where $N$ is the number of measurements taken from the input and the output for modeling.
3.1.1. Device Model
To describe the dynamic behavior from a set of input and output measurements, an auto-regressive with exogenous input (ARX) [23] (p. 81) linear model structure is assumed. A scheme for system modeling is shown in Figure 2.
The ARX structure model is
$$y(t) = \varphi^{T}(t)\,\theta + e(t),$$
where
$$\varphi(t) = \left[-y(t-1), \ldots, -y(t-n),\; u(t-1), \ldots, u(t-n)\right]^{T}$$
is the regression vector,
$$\theta = \left[a_1, \ldots, a_n,\; b_1, \ldots, b_n\right]^{T}$$
is the parameter vector, and $n$ is the system degree. Based on Equation (15), the relationship between the input and the output can be written compactly in the following vector form:
$$Y = \Phi\,\theta + E,$$
where $Y = [y(1), \ldots, y(N)]^{T}$, $\Phi = [\varphi(1), \ldots, \varphi(N)]^{T}$, and $E = [e(1), \ldots, e(N)]^{T}$.
With this information and applying the least-squares method [24] (pp. 17–27), a deterministic model is obtained that represents the fetal monitor dynamics; the solution is
$$\hat{\theta} = \left(\Phi^{T}\Phi\right)^{-1} \Phi^{T} Y.$$
Finally, the relationship between the white noise $e(t)$ at the system input and the additive noise $v(t)$ at the system output for the ARX model structure is
$$v(t) = \frac{1}{A(q)}\, e(t),$$
where $A(q) = 1 + a_1 q^{-1} + \cdots + a_n q^{-n}$.
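The identification step can be sketched as follows; the second-order system and its coefficients are hypothetical stand-ins used only to show the least-squares fit, not the fetal monitor's actual dynamics.

```python
import numpy as np

rng = np.random.default_rng(1)

# Sketch of ARX identification by least squares on a hypothetical 2nd-order
# system: y(t) = -a1*y(t-1) - a2*y(t-2) + b1*u(t-1) + b2*u(t-2) + e(t).
n = 2        # assumed system degree
N = 500
u = rng.normal(size=N)          # input sequence
e = 0.05 * rng.normal(size=N)   # white noise
y = np.zeros(N)
for t in range(n, N):
    y[t] = 1.2 * y[t-1] - 0.5 * y[t-2] + 0.8 * u[t-1] + 0.3 * u[t-2] + e[t]

# Regression rows phi(t) = [-y(t-1), ..., -y(t-n), u(t-1), ..., u(t-n)]
Phi = np.array([np.r_[-y[t-n:t][::-1], u[t-n:t][::-1]] for t in range(n, N)])
Y = y[n:]

# Least-squares solution theta_hat = (Phi^T Phi)^{-1} Phi^T Y
theta, *_ = np.linalg.lstsq(Phi, Y, rcond=None)
# theta approximates [a1, a2, b1, b2] = [-1.2, 0.5, 0.8, 0.3]
```

`np.linalg.lstsq` solves the normal equations in a numerically stable way, avoiding the explicit matrix inversion of the closed-form solution.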
3.1.2. Perturbation Model
The probabilistic part is modeled by a stochastic process
, which must be strongly stationary, uncorrelated, with a cumulative distribution function (CDF)
. These hypotheses are experimentally validated. The probabilistic model is used to determine the properties of the estimators used to determine the heart rate; considering that the noise
is naturally present and that it is transformed into
by the system during the route that the signal travels from the fetus heart through amniotic fluid, body fat, etc., to the fetal monitor output.
Figure 3 shows a block diagram of the process for estimating the error distribution function
.
3.2. Distribution Function Estimation of the Perturbations and Correlation Function
For each patient, a stochastic model is determined that describes the perturbation present in the fetal heart rate signal, which arises from the individual anatomical characteristics of the patient, the amniotic fluid, body fat, etc.
The stochastic model is obtained from data acquired in clinical studies using the patient’s fetal heart rate monitor. Each clinical study (sample) is a set of measurements. Using the normalized data histogram, the probability density function (PDF) is estimated from the samples; numerically integrating it and applying an interpolator yields an estimator of the probability distribution function of the perturbations, which is used to build the simulator required by the Monte-Carlo method. Each added sample improves the estimate of the probability distribution function, so the more samples available, the better the estimator.
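The histogram-to-simulator step can be sketched as follows; the Laplace-distributed residuals are a synthetic placeholder for real clinical perturbation data.

```python
import numpy as np

rng = np.random.default_rng(2)

# Sketch: estimate the perturbation CDF from a normalized histogram and build
# an inverse-transform simulator for the Monte-Carlo method.
residuals = rng.laplace(0.0, 1.0, 2000)   # stand-in for clinical residuals

# Normalized histogram -> PDF estimate; cumulative sum -> CDF estimate
counts, edges = np.histogram(residuals, bins=50, density=True)
widths = np.diff(edges)
cdf = np.concatenate(([0.0], np.cumsum(counts * widths)))
cdf /= cdf[-1]                            # guard against round-off

def simulate(n_draws):
    """Inverse-transform sampling: interpolate F^{-1} at uniform draws."""
    u = rng.uniform(size=n_draws)
    return np.interp(u, cdf, edges)       # linear interpolator of the inverse CDF

sim = simulate(10_000)                    # simulated perturbations
```

As more clinical studies arrive, their measurements can simply be appended to `residuals` and the histogram recomputed, which is the "each added sample improves the estimator" property noted above.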
To guarantee that the bias of the expectation estimate is zero, it must be verified that the correlation between the perturbation and the regression vector used in the least-squares model is zero. For this, the degree of correlation between the perturbations and the measured signal is estimated [20,22]. The correlation is defined as
$$\rho_{xy} = \frac{E\left[(x - m_x)(y - m_y)\right]}{\sigma_x \sigma_y},$$
where $\rho_{xy}$ represents the correlation between the random variables $x$ and $y$. This magnitude is estimated using
$$\hat{\rho}_{xy} = \frac{\sum_{i=1}^{N} (x_i - \bar{x})(y_i - \bar{y})}{\sqrt{\sum_{i=1}^{N} (x_i - \bar{x})^2 \sum_{i=1}^{N} (y_i - \bar{y})^2}}.$$
If there is a substantial correlation, a filter must be designed so that the perturbation behaves with white-noise features.
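The whiteness check can be sketched as follows; the signals are synthetic, and the decision threshold is an illustrative convention, not the paper's criterion.

```python
import numpy as np

rng = np.random.default_rng(3)

# Sketch of the uncorrelatedness check between the perturbation and the
# measured signal (synthetic stand-ins for the clinical data).
x = rng.normal(size=1000)      # stands in for the measured signal
eps = rng.normal(size=1000)    # stands in for the model residuals

# Sample correlation coefficient rho_hat in [-1, 1]
rho_hat = np.corrcoef(x, eps)[0, 1]

# Illustrative decision rule: flag correlations outside the ~95% band that
# white noise of this length would produce; if flagged, design a whitening filter.
needs_whitening = abs(rho_hat) > 2 / np.sqrt(len(x))
```

For genuinely uncorrelated signals of length 1000, `rho_hat` concentrates near zero with a spread of roughly 1/sqrt(1000).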
3.3. Estimators Properties Analysis Using the Monte-Carlo Method
After obtaining the models of the system and the perturbations, it is necessary to estimate the value of the fetal heart rate. Simulated samples drawn through the inverse distribution function $F^{-1}$ were used for this estimation. To carry out this task, the sample mean and the sample deviation were used: the mean determines the fetal heart rate value, and the standard deviation quantifies the quality of the estimate, known as uncertainty in the metrological area. However, for these estimators to be used properly, their properties must be determined specifically for each patient. The most important property to determine is how the sample length impacts the distribution function.
On the other hand, the sample mean $\hat{m}$ and the standard deviation $\hat{\sigma}$ are in turn random variables, characterized by their probability distribution functions $F_{\hat{m}}$ and $F_{\hat{\sigma}}$, respectively. In classical theory, these functions can be approximated under large samples and hypotheses of probabilistic independence, which turn out to be difficult to verify in this application. To solve this mathematical problem, the Monte-Carlo method is proposed as a tool to determine these distribution functions without those hypotheses. The procedure is presented in Figure 4, where $N$ is the number of samples used in each simulation and $M$ is the number of repetitions used to determine $F_{\hat{m}}$ and $F_{\hat{\sigma}}$.
This procedure allows us to estimate the variance of the estimators, because if $M$ is large, the following is fulfilled:
$$\frac{1}{M} \sum_{j=1}^{M} \left(\hat{\sigma}_j - \overline{\hat{\sigma}}\right)^2 \approx \operatorname{Var}(\hat{\sigma}).$$
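The procedure of Figure 4 can be sketched as follows, with a Laplace distribution standing in for the patient-specific simulator (an assumption for illustration only).

```python
import numpy as np

rng = np.random.default_rng(4)

# Draw M batches of N simulated perturbations, compute the sample mean and
# sample deviation of each batch, and study their empirical distributions.
N = 10        # samples per batch (the fetal monitor's few-sample regime)
M = 20_000    # Monte-Carlo repetitions

batches = rng.laplace(0.0, 1.0, size=(M, N))  # stand-in simulator
means = batches.mean(axis=1)                  # M draws from F_mean
stds = batches.std(axis=1, ddof=1)            # M draws from F_std

# With M large, empirical moments converge to those of the estimators:
var_of_mean = means.var()     # approaches Var(xi) / N
mean_of_std = stds.mean()     # reveals the small-sample bias E[sigma_hat] < sigma
true_sigma = np.sqrt(2.0)     # sigma of Laplace(0, 1)
```

The gap between `mean_of_std` and `true_sigma` is exactly the bias that the correction factors of the next subsection are designed to remove.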
3.4. Correction Factors Calculation to Compensate Estimators
This procedure removes the constraints that prevent an adequate estimate of the reliability and provides the factors that must be included to eliminate the estimators’ bias: an additive term and a multiplicative factor. In our case, $\Phi$ and $\Xi$ turned out to be uncorrelated, so the additive term is close to zero and was not considered in this study.
The multiplicative factor for $\hat{\sigma}$ is obtained from the Cauchy–Bunyakowskii–Schwartz inequality described in Equation (10). In the many cases where $E[\hat{\sigma}] \neq \sigma$, the question is how far $E[\hat{\sigma}]$ is from $\sigma$, so a factor $k$ is introduced to eliminate this bias, that is to say,
$$\sigma = k\, E[\hat{\sigma}].$$
The new standard deviation estimator will be $\hat{\sigma}^{*} = k\,\hat{\sigma}$; this factor is estimated using the Monte-Carlo method and the relation
$$k = \frac{\sigma}{E[\hat{\sigma}]}.$$
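The Monte-Carlo estimation of the multiplicative factor can be sketched as follows; the Laplace simulator with known sigma is an illustrative stand-in for the patient-specific distribution.

```python
import numpy as np

rng = np.random.default_rng(5)

# Estimate k = sigma / E[sigma_hat] by Monte Carlo, then correct the
# standard deviation estimator as sigma_hat_corrected = k * sigma_hat.
N = 10                         # few-sample regime
M = 50_000                     # Monte-Carlo repetitions
sigma_true = np.sqrt(2.0)      # known sigma of the Laplace(0, 1) simulator

stds = rng.laplace(0.0, 1.0, size=(M, N)).std(axis=1, ddof=1)
k = sigma_true / stds.mean()   # Monte-Carlo estimate of the correction factor

# The corrected estimator k * sigma_hat is unbiased by construction:
corrected_mean = (k * stds).mean()   # matches sigma_true
```

Because E[sigma_hat] < sigma for any non-degenerate distribution, the factor `k` comes out above one, and it grows as the sample size N shrinks.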
3.5. Confidence Intervals for the Estimation
When clinical studies are performed on patients, new data are obtained, which must be analyzed to determine whether they are correct. For this purpose, confidence intervals are constructed on-line to ensure, with a given confidence level, that the test is correct. The variations of the latest samples are analyzed. In a process with a set of clinical studies, each consisting of a number of samples, these variations are measured by the standard deviation, which is estimated through the corrected standard deviation estimator. From a statistical point of view, a measurement is considered incorrect if it is an atypical value. The decision criterion is built by considering the probability that the current measurement falls outside a range defined by a standard deviation estimator, including the necessary compensations for the estimators used.
A criterion is now determined to eliminate measurements that, with high probability, are wrong, since their values lie far from most of the measurements (atypical data). To do so, the Chebyshev inequality [25] is used:
$$P\left(|x - m| \geq k\,\sigma\right) \leq \frac{1}{k^2},$$
where $m$ and $\sigma$ are the mean and the standard deviation of the process, respectively.
If no further information is available, but considering that new information arrives every time a measurement is taken, statistics that include a correction factor are considered; this factor accounts for the finite number of samples and for the propagation law of the standard deviation.
Since the mean, the standard deviation, and the correction factor of the process are typically unknown, these parameters are replaced by their estimators $\hat{m}$, $\hat{\sigma}$, and $\hat{k}$, respectively. Finally, the selection criterion is obtained:
$$\left| x - \hat{m} \right| \leq k\, \hat{k}_j\, \hat{\sigma},$$
which is equivalent to
$$\hat{m} - k\, \hat{k}_j\, \hat{\sigma} \;\leq\; x \;\leq\; \hat{m} + k\, \hat{k}_j\, \hat{\sigma},$$
where $x$ is the current measurement and $\hat{k}_j$ is the correction factor that corresponds to the $j$-th clinical study.
Figure 5 shows the flowchart of the reliability estimation algorithm based on Chebyshev’s inequality.
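The on-line criterion can be sketched as follows; the Chebyshev constant, the correction factor, and the synthetic heart-rate samples are illustrative assumptions, not the values used in the clinical algorithm.

```python
import numpy as np

rng = np.random.default_rng(6)

# Sketch of the selection criterion: a new measurement is flagged as atypical
# when it falls outside m_hat +/- k * c * s_hat, where k comes from the
# Chebyshev bound P(|x - m| >= k*sigma) <= 1/k^2 and c is an assumed
# finite-sample correction factor for the deviation estimator.
def is_atypical(x_new, history, k=4.47, c=1.06):
    """Chebyshev criterion: k = 4.47 gives P(outside) <= 1/k^2 ~ 5%."""
    m_hat = history.mean()
    s_hat = c * history.std(ddof=1)      # corrected deviation estimator
    return abs(x_new - m_hat) > k * s_hat

heart_rate = rng.normal(140.0, 5.0, 30)  # synthetic FHR samples (bpm)
ok = is_atypical(141.0, heart_rate)      # plausible beat: accepted
bad = is_atypical(300.0, heart_rate)     # far outside the band: rejected
```

Because Chebyshev's inequality holds for any distribution, the band is conservative: it never rejects more than 1/k^2 of correct measurements, regardless of the patient-specific perturbation model.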
3.6. Remote Fetal Monitoring Equipment
A fetal monitor was developed to provide reliable remote monitoring of both mother and fetus during pregnancy. The fetal monitor can record fetal heart rate, uterine activity, and fetal movement. Its compact design allows the mother to use it herself. In addition, the fetal monitor allows the recording of studies and can send them directly to the obstetric monitoring center (OMC) to be evaluated by a specialist.
The purpose of this medical device is to provide monitoring of high-risk pregnancies, especially for patients in marginalized areas who cannot visit a clinical specialist periodically, in order to reduce fetal distress and, therefore, perinatal mortality in Mexico.
The fetal monitor is an embedded system that includes a digital signal processor, which performs the computations needed to achieve higher precision and reliability in the fetal heart rate and uterine activity signals. The developed algorithm incorporates the methodology proposed in this work to guarantee the reliability of remote measurement. The instrument has ample hardware resources, such as remote connectivity through a mobile phone via the general packet radio service (GPRS), geo-referenced transmission, an alarm indicator for the patient, a speaker to listen to the baby’s heartbeat, a flash memory to store studies, and two Doppler ultrasound sensors (1.2 MHz frequency, 350 mW output power) for the acquisition and processing of the fetal monitoring variables. A block diagram with these functions is presented in Figure 6.
3.7. Experimental Clinical Studies
During the experimental tests, several fetal monitoring instruments at remote sites were connected to patients and sent data signals to the OMC based at the Central Hospital. The patients were informed about the use of their data and consented to its publication for research purposes. The data collected at the OMC were primary signals without processing, that is, taken directly from the analog–digital converter of the instrument. For each test, a data set with the fetal heart rate and uterine activity signals was stored.
Table 1 details the clinical studies used to validate the proposed approach.
Experimental tests were conducted in the State of Querétaro. In the first stage, healthcare professionals of a women and children’s hospital in the city of Querétaro assessed the patients. Patients were trained to handle the equipment at home, to visualize data on the fetal monitor screen step by step, and to place the two sensors on the abdominal region: one sensor measures the fetal heart rate, and the other measures the uterine activity. In the second stage, experimental tests of the fetal monitor were conducted with patients at remote sites. Data generated by each patient were sent to an OMC in the city of Querétaro. The patients transmitting their records were located in many different areas of the State of Querétaro or in nearby places. The distances from these sites to the OMC were 180 km on average. Nineteen patients were evaluated, with around 40 studies per patient and, on average, 900 measurements per clinical study, to test the approach proposed in this work, which provides measurement reliability for fetal heart rate and uterine activity.
Figure 7 shows some images of the sites where tests were conducted.
The samples from each medical device were received and stored at the OMC. The objective at this stage was to generate a database of unprocessed signals, containing bioelectrical perturbations, with which to implement and validate the proposed methodology. Only the fetal heart rate signals were processed.
5. Conclusions
Reliability has become an important consideration from the beginning of product design, particularly for medical applications, which benefit from including the reliability level of the device as a quality criterion.
An analysis of the effect of the distribution function and the sample length on the calculation of the typical estimators, the mean and the standard deviation, was presented. Distribution estimates specific to each patient were used; this procedure suggests a natural way to integrate the additional information obtained in each study. Several studies carried out on patients verified that the real distributions were far from the theoretical distributions usually used to determine the estimators’ properties. Furthermore, these distributions were used to determine the factors necessary to obtain an unbiased estimator of the uncertainty. This information was used to implement a simple algorithm that improves the reliability estimation. With the available computational tools, it is possible to obtain specific information that improves the behavior of the estimators used in decision making, without assuming hypotheses that are difficult to verify in everyday applications. Applying the classical theory without verifying that its hypotheses are met may lead to incorrect results; however, it was demonstrated that these results may be corrected by using compensation factors.
Considering how quickly the electronic systems available for the integration of medical devices change, together with the way reliability is assessed, the restrictions imposed by classical estimation theory, and the available computing resources, the developed methodology aims to be the spearhead of a series of works leading to a more accurate estimation of reliability.