### 5.2. Robustness

Fusion using the default MCS-based topology is sensitive to unexpected behaviour of information sources with respect to their consistency (see Section 4.2). In the following, the MCS fusion design approach and the resulting topology are evaluated for robustness on selected real-world datasets. First, the consistency-based design is compared to the redundancy-based design approach. Subsequently, the adaptations of discounting and estimation fusion are evaluated. Implementation and data preprocessing are detailed to improve reproducibility.

#### 5.2.1. Data Preprocessing

Several data preprocessing steps are performed before the implementation. These are necessary (i) to homogenise heterogeneous frames of discernment, (ii) to reduce the effects of noise (aleatoric uncertainty) on the fusion results and topology design, and (iii) to handle data that are not available as possibility distributions but rather as singular values or probability distributions. Preprocessing comprises the three following steps.

• If data are singular values or probability distributions, they are transferred into possibility distributions first. For this step, a singular value *x*′ is interpreted as a probability distribution with *p*(*x*′) = 1 and *p*(*x*) = 0 for all *x* ∈ *X* \ {*x*′}. The transformation is conducted by the truncated triangular probability-possibility transformation [49,77,78], resulting in *π*(*x*).


$$\mu(x) = \begin{cases} 2^{-d(x,\, p_l)} & \text{if } x \le \overline{x}, \\ 2^{-d(x,\, p_r)} & \text{if } x > \overline{x}, \end{cases} \tag{31}$$

$$\text{with } d(x, p_l) = \left(\frac{|x - \overline{x}|}{C_l}\right)^{D_l} \text{ and } d(x, p_r) = \left(\frac{|x - \overline{x}|}{C_r}\right)^{D_r},$$

with $\overline{x}$ being the arithmetic mean of the given training data **x**. The parameters are determined as follows: $C_l = \overline{x} - \min_{j\in\{1,2,\dots,m\}} x_j$, $C_r = \max_{j\in\{1,2,\dots,m\}} x_j - \overline{x}$, and $D_l, D_r \in \mathbb{N}_{>1}$. $D_l$ and $D_r$ are often determined empirically [21,80]. A training routine for $D_l$ and $D_r$ based on density estimations is given by Mönks et al. [81].
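As an illustration, the parameterisation of the unimodal potential function in (31) can be sketched as follows. This is a minimal sketch: the fixed exponents $D_l = D_r = 2$ are an assumption for demonstration, since in general they are determined empirically or via the training routine of Mönks et al. [81].

```python
# Sketch of the unimodal potential function mu(x) from Eq. (31).
# Assumption: D_l = D_r = 2 (in practice these are trained).

def fit_potential(train):
    """Derive the Eq. (31) parameters x_bar, C_l, C_r from training data."""
    x_bar = sum(train) / len(train)
    c_l = x_bar - min(train)      # left spread down to the training minimum
    c_r = max(train) - x_bar      # right spread up to the training maximum
    return x_bar, c_l, c_r

def mu(x, x_bar, c_l, c_r, d_l=2, d_r=2):
    """Membership mu(x) = 2^{-d(x, p)} with side-dependent distance d."""
    if x <= x_bar:
        d = (abs(x - x_bar) / c_l) ** d_l
    else:
        d = (abs(x - x_bar) / c_r) ** d_r
    return 2.0 ** (-d)

train = [1.0, 2.0, 3.0, 4.0, 5.0]          # hypothetical training data
x_bar, c_l, c_r = fit_potential(train)
print(mu(3.0, x_bar, c_l, c_r))            # membership 1 at the mean
print(mu(1.0, x_bar, c_l, c_r))            # membership 0.5 at the minimum
```

By construction, the membership equals 1 at the training mean and decays to 0.5 at the training minimum and maximum, which is why filtering the raw training data would distort $C_l$ and $C_r$.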

The possibility distribution *π*(*x*) is then mapped to *π*(*μ*) via the extension principle as follows:

$$
\pi(\mu) := \max_{x \in X:\, \mu(x) = \mu} \pi(x).
$$
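On a discrete frame, the extension-principle mapping above can be sketched as a grouped maximum over all *x* sharing the same membership value. The sample points and the triangular $\pi(x)$ below are assumptions for illustration only:

```python
# Sketch: mapping pi(x) to pi(mu) via the extension principle on a discrete
# grid, i.e. pi(mu) := max over all x with mu(x) = mu of pi(x).

def extend(xs, pi_of_x, mu_of_x, decimals=6):
    """Group x values by (rounded) membership and take the maximal pi(x)."""
    pi_mu = {}
    for x in xs:
        m = round(mu_of_x(x), decimals)   # bin equal membership values
        pi_mu[m] = max(pi_mu.get(m, 0.0), pi_of_x(x))
    return pi_mu

def pi_of_x(x):
    """Hypothetical triangular possibility distribution pi(x)."""
    return max(0.0, 1.0 - abs(x) / 2.0)

def mu_of_x(x):
    """Symmetric potential function with C_l = C_r = 1, D_l = D_r = 2."""
    return 2.0 ** (-(abs(x) ** 2))

xs = [-2.0, -1.0, 0.0, 1.0, 2.0]          # hypothetical discrete frame X
pi_mu = extend(xs, pi_of_x, mu_of_x)
```

Each membership level inherits the largest possibility among the *x* values that map to it, which is exactly the maximisation in the extension principle.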

A detailed description and visualisations of these preprocessing steps are given in previous work [39]. Together, the preprocessing steps allow the proposed design algorithms to be applied even to heterogeneous, noisy, and nonpossibilistic data. Robustness against noise can additionally be increased by data filtering. However, since the parameters of (31) rely on the minimum and maximum values of the training data **x**, applying a filter directly to the training data would distort the borders of the unimodal potential function. For this reason, the memberships *μ*(*x*), rather than the data themselves, are filtered during preprocessing.
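The idea of filtering memberships instead of raw data can be sketched as follows. The causal moving-average filter and its window length are assumptions chosen for illustration; any smoothing filter applied to the membership sequence would serve the same purpose without touching the min/max-based borders of (31):

```python
# Sketch: noise reduction by filtering the memberships mu(x_j) instead of the
# raw training data x_j, so that C_l and C_r in Eq. (31), which depend on the
# minimum and maximum of the data, are left undistorted.

def moving_average(values, window=3):
    """Causal moving-average filter over a membership sequence."""
    out = []
    for j in range(len(values)):
        lo = max(0, j - window + 1)       # shrink the window at the start
        out.append(sum(values[lo:j + 1]) / (j + 1 - lo))
    return out

memberships = [1.0, 0.2, 1.0, 0.9, 1.0]   # mu(x_j) with one noisy outlier
filtered = moving_average(memberships)
```

The outlier membership 0.2 is smoothed towards its neighbours, while the raw data, and hence the borders of the potential function, remain unchanged.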
