Let us now present several numerical examples, together with an analysis and discussion of the above entropy measures and a comparison with commonly used entropy functions.
4.1. Application of the Proposed Entropy in the Multiple Attribute Decision-Making Problem
In many areas, including investment decision-making and project evaluation, MADM approaches are extensively used. They involve obtaining decision information, compiling that information in a specific manner, evaluating the options, and choosing the best alternative. Numerous MADM techniques can be found in [36,37,38,39]. As a first example, we recreated the experiment conducted in [13] with multiple attribute decision-making (i.e., making decisions in the presence of multiple, usually conflicting, criteria). The model proposed in [13] is as follows:
An alternative with the highest score is considered the best.
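Since the defining equations of the model are given in [13] and only its final selection rule is restated here, the following is a minimal sketch of a typical entropy-weight MADM scoring pipeline. The weighting scheme, the Shannon entropy used for illustration, and the decision matrix values are assumptions for illustration only; they do not reproduce the exact model or data of [13].

```python
import numpy as np

def shannon_entropy(p):
    """Shannon entropy of a discrete distribution (0*log 0 := 0)."""
    p = p[p > 0]
    return -np.sum(p * np.log(p))

def rank_alternatives(D):
    """Rank alternatives from a decision matrix D (rows: alternatives,
    columns: attributes) with the common entropy-weight scheme.
    NOTE: an illustrative stand-in, not the exact model of [13]."""
    # Normalize each attribute column into a probability distribution.
    P = D / D.sum(axis=0)
    m = D.shape[0]
    # Normalized entropy of each attribute (divide by ln(m) so E is in [0, 1]).
    E = np.array([shannon_entropy(P[:, j]) for j in range(D.shape[1])]) / np.log(m)
    # Attributes with lower entropy discriminate more, so they get more weight.
    w = (1.0 - E) / np.sum(1.0 - E)
    # Score each alternative as the weighted sum of its normalized values.
    scores = P @ w
    return scores, np.argsort(-scores)  # best (highest score) first

# Hypothetical 5 alternatives x 4 attributes (price, locality, design, safety).
D = np.array([[7, 9, 9, 8],
              [8, 7, 8, 7],
              [9, 6, 8, 9],
              [6, 7, 8, 6],
              [6, 9, 7, 5]], dtype=float)
scores, order = rank_alternatives(D)
print("scores:", np.round(scores, 4), "ranking (best first):", order + 1)
```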
Let us now consider the example of a buyer, as in [13], with five alternatives taken into consideration and four attributes used to rank the apartments: price, locality, design, and safety. Let the characteristics of the alternatives be represented by a fuzzy decision matrix. For this example, we fix the values of the two parameters of the proposed measure.
Using the model and the measure given in Equation (39), we obtain the entropy values shown in Table 2.
By calculating Equation (55), we obtain the scores of the alternatives. We have also calculated the scores for the other tested values of the two parameters and listed them in Table 3.
Based on the obtained results, the alternatives were ranked for each of the five tested pairs of parameter values. In each case, the same alternative is the best choice; varying the values of the parameters in the proposed measure does not change the choice of the best alternative.
Comparison with other measures: let us now compute several existing entropy measures on the same example and compare the resulting rankings with those of the newly proposed measure.
When we apply the entropy proposed by Sharma and Taneja, given in Equation (20), to the example in (56), we obtain the score function values (for the chosen parameter value) and the resulting ranking of the alternatives.
When we apply the entropy proposed by Fan and Ma, given in Equation (21), to the example in (56), we likewise obtain the score function values and the resulting ranking of the alternatives.
The same procedure applied with the entropy proposed by Hooda, given in Equation (22), yields the corresponding score function values and ranking.
Finally, applying the entropy proposed by Joshi and Kumar, given in Equation (23), to the example in (56) yields the score function values and the resulting ranking of the alternatives.
We see that varying the values of the parameters in the proposed measure does not change the choice of the best alternative, but it does change the order of the remaining alternatives. One of the parameters does not seem to influence the ranking in most cases; however, for greater values of the other parameter, the priority of the alternatives changes according to the greatest differences in the attribute values. In both cases in which this parameter was set to 50, one alternative was given priority over another on the basis of a single dominant attribute, and the same effect was observed for a second pair of alternatives. We take a closer look at how the entropy value changes with the parameters in the next example.
4.2. A Comparison of the Proposed Measure with Classical Probability Entropies
In the next example, we applied the proposed entropy measures to two examples of data from the Household Finance and Consumption Survey, a joint project of the central banks and national statistical offices of the European Union (EU) [40]. The dataset provides detailed household-level statistics on various aspects of household balance sheets and related economic and demographic variables. To test the entropy measures listed earlier, we chose two examples, namely the distributions of household size and of educational attainment in four different EU member states, together with aggregate information for the EU. The selected EU members were Belgium (BE), Germany (DE), Croatia (HR), and Hungary (HU).
Table 4 shows the percentage distribution of household sizes in the EU and in the four selected countries. For each country, a list of entropies was calculated for the given probability distribution. The entropy measure based on the generalized Dombi operator is given in the table for different values of its two parameters, alongside the Shannon and Rényi entropies. This example clearly shows the large differences between the entropy values.
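To make the baseline comparison reproducible, the following sketch computes the Shannon and Rényi entropies of a discrete probability distribution. The household-size shares used here are placeholder values, not the HFCS percentages of Table 4, and the Dombi-operator-based measure is omitted since its formula is defined earlier in the paper.

```python
import numpy as np

def shannon(p):
    """Shannon entropy (natural logarithm)."""
    p = p[p > 0]
    return -np.sum(p * np.log(p))

def renyi(p, alpha):
    """Rényi entropy of order alpha (alpha > 0, alpha != 1)."""
    return np.log(np.sum(p ** alpha)) / (1.0 - alpha)

# Placeholder household-size shares (1, 2, 3, 4, 5+ persons); substitute
# the actual HFCS percentages from Table 4 here.
p = np.array([0.34, 0.31, 0.16, 0.13, 0.06])
assert abs(p.sum() - 1.0) < 1e-9
print(f"Shannon: {shannon(p):.4f}")
for a in (0.5, 2.0, 5.0):
    print(f"Renyi (alpha={a}): {renyi(p, a):.4f}")
```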
The above entropy measures were also calculated for the distribution of the educational level of the population in the EU and the four countries studied, as shown in Table 5.
Figure 3 shows the values of the entropy measure for different values of each of the two parameters. The proposed entropy measure is more sensitive to the choice of one of the parameters, as can be seen in Figure 3a. From Figure 3b, we see that the choice of the other parameter affects the measure most strongly when its value is between −10 and 10.
4.3. Extracting Useful Information Content from Noisy Time-Frequency Distributions
Motivated by the differences in entropy values observed in the previous examples, we implemented the entropy measure based on the generalized Dombi operator in the state-of-the-art 2D Local Entropy Method (2DLEM) for extracting useful content from noisy signal time-frequency distributions, proposed in [28]. The flowchart of the method is shown in Figure 4.
The 2DLEM method is based on a local 2D entropy calculation, and the useful content is extracted from the resulting entropy map. The entropy level in the time-frequency domain differs significantly between regions containing signal components (useful information content) and regions containing mainly noise. The original method used the Rényi entropy for information extraction. In this study, we modified the technique to use the proposed entropy measure based on the Dombi operator instead, and compared the denoising results with those obtained by the 2DLEM using the Shannon and Rényi entropies.
The 2DLEM method consists of several steps. The method works with time-frequency distributions, more precisely with the spectrogram (the squared magnitude of the short-time Fourier transform) of the noisy signal, as well as with other quadratic time-frequency distributions from Cohen's class [41,42]. The local 2D entropy is calculated by treating the time-frequency distribution as a probability density function. For each point in the distribution, the entropy is calculated for different window sizes, and the entropy value for the optimal window size is determined using the relative intersection of confidence intervals (RICI) algorithm [43,44,45]. After this process is completed for each point in the distribution, we obtain the entropy map.
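The following is a minimal sketch of this first step, assuming a fixed square window in place of the per-point, RICI-selected window size of the actual method, and using the Rényi entropy as in the original 2DLEM.

```python
import numpy as np

def renyi_entropy(patch, alpha=3.0):
    """Rényi entropy of a nonnegative patch treated as a probability density."""
    s = patch.sum()
    if s == 0:
        return 0.0
    p = patch[patch > 0] / s
    return np.log(np.sum(p ** alpha)) / (1.0 - alpha)

def local_entropy_map(tfd, half=8, alpha=3.0):
    """Local 2D entropy map of a time-frequency distribution (TFD).
    A fixed (2*half+1) x (2*half+1) window stands in for the per-point,
    RICI-selected window size of the actual 2DLEM method."""
    tfd = np.abs(tfd)
    padded = np.pad(tfd, half, mode="reflect")
    emap = np.zeros_like(tfd, dtype=float)
    for i in range(tfd.shape[0]):
        for j in range(tfd.shape[1]):
            window = padded[i:i + 2 * half + 1, j:j + 2 * half + 1]
            emap[i, j] = renyi_entropy(window, alpha)
    return emap
```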
In the second step, the method again uses the RICI algorithm, this time to extract the useful content from the entropy map. The signal energy is calculated for different threshold values; based on this calculation, the RICI algorithm extracts information from the entropy map by comparing the confidence intervals of the signal energy at a particular threshold with those of the other thresholds.
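As a simplified stand-in for the RICI-based selection, the sketch below only computes the retained signal energy over a sweep of entropy-map thresholds; the direction of the thresholding inequality is an assumption (signal components are assumed to lower the local entropy relative to noise-only regions).

```python
import numpy as np

def energy_vs_threshold(entropy_map, tfd, thresholds):
    """Signal energy retained in the TFD for each entropy-map threshold.
    In the actual method, the RICI algorithm compares confidence intervals
    of these energies to select the threshold; here we only compute the curve."""
    tfd_energy = np.abs(tfd) ** 2
    energies = []
    for t in thresholds:
        # Components are assumed to lower the local entropy; flip if needed.
        mask = entropy_map < t
        energies.append(tfd_energy[mask].sum())
    return np.array(energies)

# Example sweep over the entropy-map value range:
# thresholds = np.linspace(emap.min(), emap.max(), 100)
# E = energy_vs_threshold(emap, tfd, thresholds)
```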
By thresholding the entropy map, we obtain a binary classification mask, where 1 represents the useful content. In the calculation of the 2DLEM method with the proposed entropy, we selected different values of the two parameters. The results of extracting useful information from noisy signals are given for two synthetic, nonstationary test signals. The first test signal consists of three signal atoms, while the second is a multicomponent frequency-modulated signal. The time series of the signals are shown in Figure 5, and their time-frequency distributions in Figure 6.
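The exact definitions of the two test signals are not restated here; the following sketch generates loosely similar stand-ins (three Gaussian atoms and a two-component sinusoidally frequency-modulated signal, with illustrative parameter values) and adds white Gaussian noise at a target SNR, as used in the evaluation.

```python
import numpy as np

fs, N = 1024, 1024
t = np.arange(N) / fs

# Stand-in for signal 1: three Gaussian time-frequency atoms
# (centers and frequencies are illustrative, not the paper's exact values).
def atom(t, t0, f0, width):
    return np.exp(-((t - t0) / width) ** 2) * np.cos(2 * np.pi * f0 * t)

x1 = atom(t, 0.25, 100, 0.05) + atom(t, 0.5, 250, 0.05) + atom(t, 0.75, 400, 0.05)

# Stand-in for signal 2: two sinusoidally frequency-modulated components.
x2 = (np.cos(2 * np.pi * (150 * t + 10 * np.sin(2 * np.pi * t)))
      + np.cos(2 * np.pi * (350 * t + 10 * np.sin(2 * np.pi * t))))

def add_noise(x, snr_db, rng=np.random.default_rng(0)):
    """Add white Gaussian noise at a target SNR in dB."""
    p_sig = np.mean(x ** 2)
    p_noise = p_sig / (10 ** (snr_db / 10))
    return x + rng.normal(scale=np.sqrt(p_noise), size=x.shape)

x1_noisy = add_noise(x1, snr_db=-3)  # the highest tested noise level
```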
The performance of the applied technique was analyzed for different signal-to-noise ratios (SNRs) ranging from −3 dB to 6 dB. The reference model used for the evaluation was the noise-free signal spectrogram. Hard thresholding at 5% was then performed to obtain a classification mask containing only ones and zeros, where ones are treated as the useful content of the probability distribution. The ideal classification masks obtained in this way for the two signals are shown in Figure 7a,b. The feasibility and performance of this method were studied in previous articles, but only with the Rényi entropy [28,46].
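A possible reading of the 5% hard thresholding step is sketched below, assuming the threshold is taken relative to the maximum of the noise-free spectrogram (the window length and overlap are illustrative).

```python
import numpy as np
from scipy.signal import spectrogram

def ideal_mask(x, fs, rel_thresh=0.05):
    """Binary reference mask from a noise-free signal: spectrogram points
    above rel_thresh * max are treated as useful content (our assumed
    reading of the 5% hard threshold)."""
    f, t, S = spectrogram(x, fs=fs, nperseg=128, noverlap=120)
    return (S > rel_thresh * S.max()).astype(int)
```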
The efficiency of extracting useful information from the noisy time-frequency domain is measured using the accuracy and F1 score.
Accuracy is defined as
$$\mathrm{Accuracy}=\frac{TP+TN}{TP+TN+FP+FN},$$
where TP denotes true positives, TN true negatives, FP false positives, and FN false negatives. The ideal masks are shown in Figure 7, while the obtained masks result from thresholding the entropy maps (Figure 8, Figure 9 and Figure 10). The result of subtracting the obtained classification masks (Figure 11, Figure 12 and Figure 13) from the ideal mask is used to calculate the metric values. Elements where the two masks agree are counted as TP or TN (matching ones give TP, and matching zeros give TN). If the subtraction result is 1, the obtained mask did not recognize the useful content at that point, which counts as an FN. If the result is −1, there was no useful content at that point, but it was wrongly classified as useful, which counts as an FP.
The F1 score takes into account both the precision and the recall of the classification. It is the harmonic mean of these two scores, defined as
$$F_1=\frac{2\cdot \mathrm{Precision}\cdot \mathrm{Recall}}{\mathrm{Precision}+\mathrm{Recall}},$$
where
$$\mathrm{Precision}=\frac{TP}{TP+FP} \quad \text{and} \quad \mathrm{Recall}=\frac{TP}{TP+FN}.$$
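The sketch below computes all four counts and both metrics directly from the ideal and obtained binary masks, following the subtraction convention described above.

```python
import numpy as np

def mask_metrics(ideal, obtained):
    """Accuracy and F1 score from the ideal and obtained binary masks,
    using the subtraction convention described above (ideal - obtained)."""
    diff = ideal - obtained
    tp = np.sum((ideal == 1) & (obtained == 1))  # matching ones
    tn = np.sum((ideal == 0) & (obtained == 0))  # matching zeros
    fn = np.sum(diff == 1)    # useful content missed by the obtained mask
    fp = np.sum(diff == -1)   # noise wrongly classified as useful content
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    precision = tp / (tp + fp)  # assumes tp + fp > 0
    recall = tp / (tp + fn)     # assumes tp + fn > 0
    f1 = 2 * precision * recall / (precision + recall)
    return accuracy, f1
```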
Table 6 shows the classification results for the first analyzed signal, consisting of signal atoms. The proposed entropy measure based on the generalized Dombi operator improved the classification accuracy compared to the Shannon and Rényi entropies for all tested SNRs. The two parameter values used for the results reported in Table 6 and Table 7 are 2 and 0.8, respectively. The proposed entropy outperforms the Rényi entropy by 0.0134 to 0.024 in accuracy and by 0.007 to 0.0134 in F1 score. The largest improvement in both accuracy and F1 score was observed for the highest tested noise level of SNR = −3 dB.
The results for the second test signal are shown in Table 7. As in the previous case, the proposed entropy measure based on the generalized Dombi operator outperforms the other entropy measures, while for this test signal the Shannon entropy outperforms the Rényi entropy. For the SNR of −3 dB, the proposed entropy improves the accuracy by up to 0.0118 compared to the Shannon entropy and by up to 0.0667 compared to the Rényi entropy; the F1 score is improved by up to 0.0081 and 0.0439, respectively. Moreover, the difference remains significant at higher SNR values: the greatest difference was observed for SNR = 6 dB, where the accuracy was improved by 0.042 and the F1 score by 0.0256.
Figure 8 and Figure 9 show the entropy maps computed with the Shannon and Rényi entropies, respectively, while Figure 10 shows the entropy map computed using the proposed entropy measure. These maps were then used as input for the next step of the 2DLEM method, which extracts the useful content in the form of a classification mask, shown in Figure 11 for the Shannon measure, in Figure 12 for the Rényi measure, and in Figure 13 for the measure based on the generalized Dombi operator.
Inspecting Figure 11, Figure 12 and Figure 13, and comparing the quantitative results given in Table 6 and Table 7, we see that the proposed entropy measure based on the generalized Dombi operator significantly outperformed the Rényi entropy in extracting useful information content from the noisy signal in the time-frequency domain. By applying this measure, more of the useful content is preserved and less noise is included in the final result. As a consequence, the novel technique produced a higher classification accuracy and F1 score for all SNRs. Results for different values of the two parameters of the 2DLEM method combined with the proposed entropy measure are reported in Table 8 and Table 9. There are slight differences between the parameter settings, but regardless of the values used, the proposed entropy measure still outperforms the Shannon and Rényi entropies. The choice of values was motivated by Section 4.2, where we saw that one of the parameters has a greater influence on the results, with the largest differences occurring between −10 and 10 (as can be seen in Figure 3b). This result clearly shows the potential of our entropy function based on the generalized Dombi operator for analyzing the information content of nonstationary noisy data in the time-frequency domain, as well as in similar areas where the Shannon or Rényi entropy is traditionally applied.