## **1. Background: The Concept of Measurement Uncertainty**

In the 1990s, the "Guide to the expression of uncertainty in measurement", known as GUM, introduced the concept of measurement uncertainty and provided guidelines for its representation and its propagation through the measurement function. In particular, measurement uncertainty is defined as "*a parameter, associated with the result of a measurement, that characterizes the dispersion of the values that could reasonably be attributed to the measurand*" [1], as also recalled in [2].

This definition refers to a "dispersion of the values" because, as is widely known, when a quantity (the measurand) is measured several times, the measurement result generally varies, due to the different contributions affecting the measurement procedure. This means that, because of this "dispersion of the values", from a strict metrological point of view, the "true value" of the measurand cannot be known.

The uncertainty associated with a measured value therefore aims to provide information about how large this "dispersion of the values" is [1,2].
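As a concrete illustration (not taken from the paper, and with invented readings), the dispersion of repeated measurements can be summarized by their mean and an interval built from the experimental standard deviation of the mean, in the spirit of a GUM Type A evaluation:

```python
import statistics

# Hypothetical repeated readings of the same measurand (invented values)
readings = [10.02, 9.98, 10.05, 9.97, 10.01, 10.03, 9.99, 10.00]

mean = statistics.mean(readings)
# Experimental standard deviation of the mean (Type A evaluation)
u = statistics.stdev(readings) / len(readings) ** 0.5

# Expanded uncertainty with coverage factor k = 2
# (roughly 95% coverage for a normal distribution)
k = 2
interval = (mean - k * u, mean + k * u)
print(f"result: {mean:.4f} +/- {k * u:.4f}")
print(f"coverage interval: [{interval[0]:.4f}, {interval[1]:.4f}]")
```

The coverage factor `k` and the readings are purely illustrative; the point is only that the "dispersion of the values" is what the reported interval quantifies.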

Therefore, from a strictly semantic point of view, it can be stated that the uncertainty value reflects the lack of exact, complete knowledge about the value of the measurand. Hence, when one speaks about a measurement result, one always speaks about incomplete information; this incomplete information must somehow be represented to give validity to the measured value.

How can this representation be done? According to the GUM, the aim of the uncertainty evaluation is "*to provide an interval about the measurement result that may be expected to encompass a large fraction of the distribution of values that could reasonably be attributed to the quantity subject to measurement*" [1]. Furthermore, it clearly states that "*the ideal method for evaluating and expressing measurement uncertainty should be capable of readily providing such an interval, in particular, one with a coverage probability or level of confidence that corresponds in a realistic way to that required*" [1].

**Citation:** Salicone, S.; Jetti, H.V. A General Mathematical Approach Based on the Possibility Theory for Handling Measurement Results and All Uncertainties. *Metrology* **2021**, *1*, 76–92. https://doi.org/10.3390/metrology1020006

Academic Editor: Han Haitjema

Received: 9 August 2021; Accepted: 30 August 2021; Published: 1 October 2021

As stated above, the "dispersion of the values" is due to different contributions affecting the measurement procedure. In particular, in the "International vocabulary of metrology", known as VIM [3], two contributions are defined: the random and the systematic contributions to uncertainty. (A common mistake is to replace the words "random" and "systematic" with the words "type A" and "type B", respectively, as defined in the GUM. However, "type A" and "type B" refer to methods of evaluating an uncertainty contribution, not to the nature of the contribution itself.)

The random contribution is defined as the "component of measurement error that in replicate measurements varies in an unpredictable manner" [3], while the systematic one is defined as the "component of measurement error that in replicate measurements remains constant or varies in a predictable manner" [3]. Therefore, due to the random contributions to uncertainty, the dispersion of the measured values defines an interval around the mean of the measured values, and this interval may indeed be the "interval about the measurement result that may be expected to encompass a large fraction of the distribution of values that could reasonably be attributed to the quantity subject to measurement" required by the GUM [1]. In Figure 1, the blue dot represents the measurand value, the pink asterisk is the mean of the measured values, and the purple line represents the interval that includes all the measured values. It can be easily seen that the purple interval also encompasses the measurand value, as generally happens if a proper coverage factor is applied. However, if a systematic contribution also affects the measurement result, then the interval that includes all the measured values is shifted to the right or to the left with respect to the previous interval, according to whether the systematic effect is positive or negative, as shown by the blue and orange intervals in Figure 1. It can be easily seen that these last intervals no longer represent the "interval about the measurement result that may be expected to encompass a large fraction of the distribution of values that could reasonably be attributed to the quantity subject to measurement", since the measurand value lies completely outside them.

**Figure 1.** The effects of random and systematic contributions to uncertainty. Blue dot: unknown value of the measurand. Purple line: dispersion of the values and obtained interval when only random contributions affect the measured values. Blue and orange lines: obtained intervals when a positive or a negative systematic error, respectively, affects the measured values. Red line: obtained interval when the effects of both random and unknown, uncompensated systematic contributions are considered.

If one wants to provide an interval that takes into account both the random and the systematic contributions to uncertainty, he/she should also consider the possible variability of the effect of the systematic contributions and, hence, should widen the uncertainty interval, as shown by the red line in Figure 1. Therefore, the purple interval is the uncertainty interval when only random contributions affect the measurement result, while the red interval is the uncertainty interval when systematic contributions also affect it.
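The behavior sketched in Figure 1 can be reproduced numerically. The snippet below is an illustrative sketch with invented values (not the authors' code): it draws repeated readings with and without a constant systematic offset and checks whether the interval spanned by the readings covers the measurand value.

```python
import random

random.seed(42)
true_value = 10.0      # the (normally unknown) measurand value
sigma = 0.05           # std. dev. of the random contribution (invented)
offset = 0.3           # uncompensated systematic contribution (invented)

# Repeated readings: random contribution only, then random + systematic
random_only = [random.gauss(true_value, sigma) for _ in range(200)]
with_offset = [r + offset for r in random_only]

def interval(values):
    """Interval spanned by the dispersion of the measured values."""
    return min(values), max(values)

lo_r, hi_r = interval(random_only)   # the "purple" interval of Figure 1
lo_s, hi_s = interval(with_offset)   # the shifted "blue/orange" interval

print("random only covers measurand:", lo_r <= true_value <= hi_r)
print("with offset covers measurand:", lo_s <= true_value <= hi_s)
```

With an offset larger than the random scatter, the shifted interval no longer contains the measurand value, which is exactly the failure mode the blue and orange intervals of Figure 1 depict.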

The GUM states that "*It is assumed that the result of a measurement has been corrected for all recognized significant systematic effects and that every effort has been made to identify such effects*" [1]; in other words, the GUM requires that all efforts be made to identify, measure and correct for all the significant systematic effects. Under this assumption, only the random effects are present, and the uncertainty interval is reduced as shown in Figure 1.

## **2. The Authors' Point of View**

The previous section summarized the concept of measurement uncertainty and recalled the GUM requirement that all the significant systematic effects be identified and compensated for. Satisfying this requirement leads to the following important conclusions:


There are also some mathematical ways to treat the systematic contributions to uncertainty within the framework of the probability theory, such as, for instance, a proper use of the correlation coefficients. In any case, however, the probability theory was born to handle random phenomena, and it can correctly handle only random phenomena, because of the way that pdfs combine with each other.
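To make the last point concrete: when two independent random contributions combine, their pdfs convolve, so the variances add (root-sum-of-squares for standard uncertainties), whereas a systematic contribution merely shifts the pdf without widening it. The sketch below (illustrative, with invented uniform pdfs) checks both facts by Monte Carlo sampling:

```python
import random

random.seed(0)
N = 100_000

# Two independent random contributions, uniform on [-a, a] and [-b, b]
a, b = 0.3, 0.4
x = [random.uniform(-a, a) for _ in range(N)]
y = [random.uniform(-b, b) for _ in range(N)]
s = [xi + yi for xi, yi in zip(x, y)]   # combination = convolution of the pdfs

def var(v):
    """Sample variance."""
    m = sum(v) / len(v)
    return sum((vi - m) ** 2 for vi in v) / (len(v) - 1)

# Variances of independent random contributions add
print(var(s), var(x) + var(y))          # nearly equal

# A constant systematic contribution c just shifts the pdf:
# the variance (hence the "random" uncertainty) is unchanged
c = 0.5
shifted = [si + c for si in s]
print(var(shifted) - var(s))            # essentially zero
```

This is why a probabilistic combination captures the random part well, while a constant offset is invisible to the dispersion and must be handled by other means.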

Furthermore, the GUM requires the compensation of the "significant systematic effects" [1], where the word "significant" is very important and raises a crucial question: when is an effect (on the final measurement result) significant?

Obviously, an effect can be significant in one field and not significant in another. From the metrological point of view, "significance" can be assessed by considering the "target uncertainty", defined by the VIM as the "*measurement uncertainty specified as an upper limit and decided on the basis of the intended use of measurement results*" [3]. The target uncertainty is, therefore, a value that depends on the field: it is generally as small as possible in primary metrology, or in the industrial world in the limited cases in which very precious objects (such as diamonds, for instance) are measured. However, in most practical industrial situations, the target uncertainty is a trade-off between the cost of the uncertainty evaluation and the waste production; therefore, there is no need to set the target uncertainty as small as possible. In these situations, the correction of the "significant systematic effects" is generally not necessary to avoid exceeding the target uncertainty. Therefore, the industrial world is generally not interested in reducing the overall uncertainty by identifying and compensating for systematic effects.

In any case, whether it is compensated for or not, a systematic effect must be considered in the uncertainty evaluation in order to state whether it is significant. It becomes, therefore, an important issue to be able to mathematically evaluate the overall uncertainty in the best possible way.

Methods that employ a mathematical theory different from the probability theory adopted by the GUM have been proposed in the literature [4–8]. These methods are based on the possibility theory, as is the RFV method recalled in this paper, which tries to comply with the definitions of the GUM while overcoming its limitations.

The RFV method recalled in this paper can handle both random and systematic contributions to uncertainty in closed form. This is possible because, in this mathematical framework, many different operators are available between the variables defined in it. Therefore, different operators can be chosen to simulate the combination of the variables in a random or a nonrandom way. To introduce this method, the theory of evidence is briefly recalled in the next section, with the aim of providing a foundation for the method rather than giving full mathematical details, for which the readers are referred to [9–11].
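To give a flavor of how a nonrandom combination can work, the toy sketch below illustrates the alpha-cut idea underlying fuzzy/possibility variables; it is *not* the full RFV construction, for which the readers are referred to [9–11]. A possibility distribution can be handled through its alpha-cuts, which are intervals; under the min t-norm, the nonrandom sum of two variables reduces to plain interval addition at each alpha level. All shapes and values here are invented for illustration.

```python
# Toy sketch of alpha-cut arithmetic on triangular possibility distributions.
# Illustrative only: the RFV machinery of the paper is richer than this.

def tri_alpha_cut(lo, mode, hi, alpha):
    """Alpha-cut (an interval) of a triangular possibility distribution."""
    return (lo + alpha * (mode - lo), hi - alpha * (hi - mode))

def add_nonrandom(t1, t2, levels=5):
    """Sum of two fuzzy variables under the min t-norm:
    interval addition, level by level."""
    cuts = []
    for i in range(levels + 1):
        alpha = i / levels
        a1, b1 = tri_alpha_cut(*t1, alpha)
        a2, b2 = tri_alpha_cut(*t2, alpha)
        cuts.append((alpha, a1 + a2, b1 + b2))
    return cuts

# Two variables carrying nonrandom (e.g., systematic) uncertainty
X = (9.8, 10.0, 10.2)
Y = (4.9, 5.0, 5.1)

for alpha, lo, hi in add_nonrandom(X, Y):
    print(f"alpha={alpha:.1f}: [{lo:.2f}, {hi:.2f}]")
```

Choosing a different t-norm at this step would combine the same two variables in a different (e.g., random-like) way, which is precisely the flexibility the text attributes to the possibility framework.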
