Article

Defuzzify Imprecise Numbers Using the Mellin Transform and the Trade-Off between the Mean and Spread

1 Department of Business Administration, Chung Yuan Christian University, No. 200 Chung Pei Road, Chung Li District, Taoyuan 320, Taiwan
2 Department of Computer Science & Information Management, SooChow University, No. 56 Kueiyang Street, Section 1, Taipei 100, Taiwan
* Author to whom correspondence should be addressed.
Algorithms 2022, 15(10), 355; https://doi.org/10.3390/a15100355
Submission received: 2 August 2022 / Revised: 19 September 2022 / Accepted: 26 September 2022 / Published: 28 September 2022

Abstract

Uncertainty or vagueness usually reflects the limitations of human subjective judgment in practical problems. Conventionally, imprecise numbers, e.g., fuzzy and interval numbers, are used to cope with such issues. However, imprecise numbers are hard for decision-makers to act on directly, and therefore many defuzzification methods have been proposed. In this paper, the information of the mean and spread/variance of imprecise data, obtained via the Mellin transform, is used to defuzzify the data. We present four numerical examples to demonstrate the proposed method and extend it to the simple additive weighting (SAW) method. According to the results, our method resolves the inconsistency between the mean and spread that affects the center of area (CoA) and bisector of area (BoA) methods, and it is easy and efficient to use in further applications.

1. Introduction

As we know, the real world is imperfect, imprecise, and uncertain. Hence, many theories and approaches, e.g., fuzzy sets, rough sets, and interval sets, have been proposed to reflect the vagueness of the real world. To capture the subjective uncertainty of human linguistic judgments, Zadeh introduced fuzzy numbers, with a membership function providing the corresponding weights [1]. Usually, a fuzzy number is represented by a triangular fuzzy number (a1, a2, a3), where a1, a2, and a3 are the left, center, and right values of the fuzzy number, respectively. Alternative forms, e.g., trapezoidal or Gaussian, can also be used to represent fuzzy numbers.
However, in most practical situations, we still hope the information for decision-making is clear and crisp. Therefore, many defuzzification methods have been proposed and used to transform imprecise values, e.g., interval or fuzzy numbers, into crisp values for decision-making. Note that defuzzifying imprecise data means using some function to transform fuzzy or interval numbers into crisp values. Since imprecise data analysis, whether with interval or fuzzy numbers, is popular in many fields, e.g., multiobjective programming [2,3,4], data envelopment analysis (DEA) [5,6,7], and decision sciences [8,9,10], the defuzzification of imprecise numbers is a critical topic.
The problem of defuzzifying interval numbers is similar to that of defuzzifying fuzzy numbers. The main difference is that the y-axis of a fuzzy number denotes the membership function, whereas the y-axis of an interval number denotes the probability density function (pdf). The authors of [11] first proposed the concept of ranking fuzzy numbers using the mean and spread. However, this rule cannot determine the better fuzzy number or interval when one has both a higher mean and a higher spread. Although the concept of the coefficient of variation (CV) was provided in [12] to cope with this inconsistency, it fails when there are crisp values, and it is hard to apply in follow-up multiple attribute decision-making (MADM) applications, such as the simple additive weighting (SAW) or weighted product method (WPM), because the original measurement unit is changed. Hence, an easy and efficient algorithm to defuzzify imprecise numbers is a critical issue for practical applications.
In this paper, we first find the mean and spread of imprecise numbers via the Mellin transform. Then, we derive the trade-off coefficient between the mean and the spread. Finally, we develop a preference relation to rank imprecise numbers. In addition, we use four examples to demonstrate our concepts and extend our method to choosing the best alternative with the SAW method. Based on the numerical results, our method resolves the inconsistency between the mean and the spread that affects the center of area (CoA) and the bisector of area (BoA) methods, and it can easily be used in further applications.
The rest of this paper is organized as follows. Section 2 states the problem of the inconsistency between the mean and the spread. Section 3 derives the relation between the mean and the spread according to utility theory. The details of the Mellin transform are presented in Section 4. Four numerical examples are used in Section 5 to demonstrate our concept. The discussion is presented in Section 6, and the final section presents conclusions.

2. Statement of the Problem

Assume there are two alternatives, A_i and A_j, whose means and spreads are μ_i, μ_j and σ_i, σ_j, respectively. According to [11], the rules for ranking fuzzy numbers are A_i ≻ A_j if μ_i > μ_j, or A_i ≻ A_j if μ_i = μ_j and σ_i < σ_j. However, if we consider two imprecise numbers with uniform distributions, these rules seem incorrect when one number has a smaller mean but also a much smaller spread, as in Figure 1.
It is intuitive for a decision-maker to prefer A1 over A2, even though A2 has a higher mean than A1. That is, a higher mean does not necessarily imply a higher ranking order if the spread is too large; the mean and the spread form a trade-off. This situation is also related to the risk premium (RP), which represents the additional compensation required for assuming higher risk and is usually discussed in economics and financial management.
In addition, [12] proposed another criterion, called the CV index, to improve Lee and Li’s method [11] according to the following equation:
$$CV = \frac{\sigma}{|\mu|}$$
where σ denotes the spread and μ denotes the mean.
However, it is clear that the CV index fails when the data are crisp or the mean is zero. Another shortcoming is that the result is difficult to interpret because the measurement unit is changed. Hence, many researchers have proposed different methods to efficiently rank fuzzy numbers, e.g., [13,14,15,16,17], or interval numbers, e.g., [16,17].
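As a minimal sketch of these failure modes, assuming only the CV definition above (the helper name is ours, not from [12]):

```python
def cv_index(mu: float, sigma: float) -> float:
    """CV index from [12]: spread divided by the absolute value of the mean."""
    return sigma / abs(mu)

# Crisp values have zero spread, so every crisp value gets CV = 0 and the
# index cannot distinguish between them.
print(cv_index(3.0, 0.0), cv_index(300.0, 0.0))   # 0.0 0.0

# A zero mean makes the index undefined.
try:
    cv_index(0.0, 1.0)
except ZeroDivisionError:
    print("CV is undefined when the mean is zero")
```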
Another practical issue is that ranking imprecise numbers is not enough for further applications; a decision-maker may also want the crisp values of the alternatives [18]. Certainly, plenty of defuzzification methods have been proposed to derive crisp values from imprecise numbers efficiently, e.g., the center of sums method, the CoA method, the BoA method, the weighted average method, and different kinds of maxima methods. However, none of them can handle all the situations considered here, and we will compare our method with CoA and BoA to justify the proposed method. Here, we incorporate the concept of the risk premium into our method and derive the trade-off coefficient between the mean and spread to rank and defuzzify imprecise numbers.

3. A Trade-Off between the Mean and Spread

According to utility theory, it is intuitive that a higher mean (μ_A) yields a higher utility (u(A)) and, correspondingly, a lower spread (σ_A) yields a higher utility. For simplicity, we assume these relationships are linear:
$$u(A) = a + b\,\mu_A$$
$$u(A) = c - d\,\sigma_A$$
and
$$\frac{du}{d\mu_A} > 0, \qquad \frac{du}{d\sigma_A} < 0$$
Next, we derive the relationship between the mean and spread according to our assumptions. Let two points be μ_A1 and μ_A2; then u(A1), u(A2), σ_A1, and σ_A2 can be derived from Equations (2) and (3) to form the relationship between μ_A and σ_A, as shown in Figure 2.
According to Figure 2, we know that having an alternative with a larger spread is the same as having a smaller mean and vice versa. Hence, the trade-off between the mean and the spread can be defined as:
$$\Delta\mu_A = \theta\,\Delta\sigma_A$$
where θ < 0 is an indifference ratio between the mean and the spread.
Next, we develop a criterion that incorporates the mean and the spread according to Equation (4). If we can transform the spread into the mean, we can easily compare any imprecise data using the rule that a higher mean implies a higher ranking order. Based on this concept, we can transform the difference between the spreads into a difference between the means and let:
$$\Delta\mu_A = \mu_\Delta(A_1, A_2) = \mu_{A_1} - \mu_{A_2}$$
$$\Delta\sigma_A = \sigma_\Delta(A_1, A_2) = \sigma_{A_2} - \sigma_{A_1}$$
Then, we can obtain the following criterion to rank imprecise data:
$$\Delta\Psi_A = \Psi(A_1, A_2) = \mu_\Delta(A_1, A_2) + \lambda\,\sigma_\Delta(A_1, A_2)$$
where ΔΨ_A denotes the preference difference between two imprecise numbers and λ = −θ indicates the trade-off ratio between the mean and the spread.
Equation (7) indicates whether the utility of A1 is superior to that of A2, so we can conclude that:
$$\Delta\Psi_A > 0 \quad \text{if } A_1 \succ A_2$$
$$\Delta\Psi_A < 0 \quad \text{if } A_1 \prec A_2$$
$$\Delta\Psi_A = 0 \quad \text{if } A_1 \sim A_2$$
As the mean and spread are monotone functions, it is obvious that ΔΨ_A satisfies the transitive axiom (i.e., if A1 ≻ A2 and A2 ≻ A3, then A1 ≻ A3). In addition, our simulation indicates that θ is approximately equal to −0.25; the method of our experiment is described in Appendix A.
Let us consider the defuzzification problem under the proposed method as follows. Let A_i be a status quo alternative. The defuzzified value of A_i is its mean μ_Ai, and the defuzzified value of any other alternative A_j can be derived as:
$$A_j = \mu_{A_j} + \lambda\,\sigma_\Delta(A_j, A_i) = \mu_{A_i} + \Psi(A_j, A_i)$$
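A minimal sketch of the ranking criterion and the defuzzification rule above (the function names are ours; λ = 0.25 follows the experiment in Appendix A):

```python
LAMBDA = 0.25  # trade-off ratio lambda = -theta, estimated in Appendix A

def psi(mu_1, sigma_1, mu_2, sigma_2, lam=LAMBDA):
    """Preference difference Psi(A1, A2) = (mu_1 - mu_2) + lam * (sigma_2 - sigma_1).
    Positive: A1 preferred; negative: A2 preferred; zero: indifferent."""
    return (mu_1 - mu_2) + lam * (sigma_2 - sigma_1)

def defuzzify(mu_j, sigma_j, sigma_status_quo, lam=LAMBDA):
    """Crisp value of A_j relative to a status quo alternative A_i with spread
    sigma_status_quo: mu_j + lam * (sigma_i - sigma_j)."""
    return mu_j + lam * (sigma_status_quo - sigma_j)
```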
In addition, this paper uses the Mellin transform to calculate the mean and spread/variance of any distribution quickly. Next, we describe the details of the Mellin transform and link it to the nth moment to calculate the mean and spread.

4. An Application of the Mellin Transform

Given a random variable x ∈ ℝ⁺, the Mellin transform M(s) of a pdf f(x) can be defined as:
$$M\{f(x); s\} = M(s) = \int_0^{\infty} f(x)\, x^{s-1}\, dx$$
Let h be a measurable function from ℝ⁺ into ℝ⁺ and let Y = h(X) be a random variable. Then, some properties of the Mellin transform can be described, as shown in Table 1. For example, if Y = aX, then the scaling property can be expressed as:
$$M\{f(ax); s\} = \int_0^{\infty} f(ax)\, x^{s-1}\, dx = a^{-s} \int_0^{\infty} f(ax)\, (ax)^{s-1}\, d(ax) = a^{-s} M(s)$$
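A quick numerical sketch of the definition and the scaling property, using an exponential test density (the mellin helper is ours; SciPy is assumed to be available):

```python
import numpy as np
from scipy.integrate import quad

def mellin(f, s):
    """Numerical Mellin transform: integral over (0, inf) of f(x) * x^(s-1) dx."""
    value, _ = quad(lambda x: f(x) * x ** (s - 1), 0.0, np.inf)
    return value

f = lambda x: np.exp(-x)          # test pdf on (0, inf)
a, s = 2.0, 2.5
lhs = mellin(lambda x: f(a * x), s)
rhs = a ** (-s) * mellin(f, s)
print(np.isclose(lhs, rhs))       # True: M{f(ax); s} = a^(-s) M(s)
```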
Given a continuous non-negative random variable X, the nth moment of X is denoted by E(X^n) and defined as:
$$E(X^n) = \int_0^{\infty} x^n f(x)\, dx$$
For n = 1, the mean of X can be expressed as:
$$E(X) = \int_0^{\infty} x f(x)\, dx$$
and the variance of X can be calculated by:
$$\sigma_X^2 = E(X^2) - [E(X)]^2$$
Since the nth moment of X can be linked to the Mellin transform by:
$$E(X^n) = \int_0^{\infty} x^{(n+1)-1} f(x)\, dx = M\{f(x); n+1\}$$
the mean and the variance of X can be calculated by:
$$E(X) = M\{f(x); 2\}$$
$$\sigma_X^2 = M\{f(x); 3\} - \left[ M\{f(x); 2\} \right]^2$$
According to Equations (18) and (19), we can quickly calculate the mean and the spread for any distribution. In practice, the uniform, triangular, and trapezoidal distributions are the most commonly used, and their Mellin transforms are summarized in Table 2. Mellin transforms of additional probability density functions can be found in [19].
Based on Table 2, the mean and spread values can be efficiently derived by calculating M(2) and M(3). Then, we can rank and defuzzify imprecise data using Equation (7). The following section uses four numerical examples to show the detailed procedures of the proposed method.
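A sketch of the closed forms in Table 2 together with the mean/spread recipe M(2) and sqrt(M(3) − M(2)²) (the function names are ours):

```python
import math

def mellin_uniform(a, b, s):
    """M(s) for UNI(a, b)."""
    return (b ** s - a ** s) / (s * (b - a))

def mellin_triangular(l, m, u, s):
    """M(s) for TRI(l, m, u)."""
    return 2.0 / ((u - l) * s * (s + 1)) * (
        u * (u ** s - m ** s) / (u - m) - l * (m ** s - l ** s) / (m - l))

def mellin_trapezoidal(a, b, c, d, s):
    """M(s) for TRA(a, b, c, d)."""
    return 2.0 / ((c + d - a - b) * s * (s + 1)) * (
        (d ** (s + 1) - c ** (s + 1)) / (d - c)
        - (b ** (s + 1) - a ** (s + 1)) / (b - a))

def mean_and_spread(mellin_of_s):
    """Mean = M(2); spread = sqrt(M(3) - M(2)^2)."""
    m2, m3 = mellin_of_s(2), mellin_of_s(3)
    return m2, math.sqrt(m3 - m2 ** 2)

# Uniform interval (2, 10), reused in Example 1 below:
print(mean_and_spread(lambda s: mellin_uniform(2, 10, s)))   # (6.0, 2.309...)
```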

5. Numerical Examples

In this section, we demonstrate four examples to show how our method ranks imprecise data. The first three examples illustrate how to rank various kinds of imprecise data. The last example demonstrates how our approach extends to the simple additive weighting method. The Mellin transform derives the values of the mean and spread, and the preference of the alternatives is obtained by incorporating the spread into the mean value.
Example 1.
Let four alternatives be measured by imprecise data about which we have no distributional information (i.e., we assume uniform distributions). Their imprecise preferences are A1 = (2, 10), A2 = (4, 8), A3 = (5, 7), and A4 = (6, 9), and their probability density functions are shown in Figure 3.
According to the Mellin transform, we can obtain:
E(A1) = 6.0, σ_A1 = 2.3094
E(A2) = 6.0, σ_A2 = 1.1547
E(A3) = 6.0, σ_A3 = 0.5774
E(A4) = 7.5, σ_A4 = 0.8660
then the preference differences between the alternatives can be calculated as:
Ψ(A1, A2) = μ_Δ(A1, A2) + λ σ_Δ(A1, A2) = 0 + 0.25 × (1.1547 − 2.3094) = −0.2887 < 0
Ψ(A1, A3) = μ_Δ(A1, A3) + λ σ_Δ(A1, A3) = 0 + 0.25 × (0.5774 − 2.3094) = −0.4330 < 0
Ψ(A1, A4) = μ_Δ(A1, A4) + λ σ_Δ(A1, A4) = −1.5 + 0.25 × (0.8660 − 2.3094) = −1.8609 < 0
Based on the above results, together with the defuzzified values in Table 3, the ranking is A4 ≻ A3 ≻ A2 ≻ A1. We set A1 as the status quo alternative and defuzzify the other alternatives; the results are compared with CoA and BoA in Table 3.
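The Example 1 figures can be reproduced from the uniform-interval means and spreads as follows (a sketch with our variable names; λ = 0.25):

```python
import math

def uniform_mean_spread(a, b):
    # For UNI(a, b): M(2) = (a + b) / 2 and M(3) - M(2)^2 = (b - a)^2 / 12.
    return (a + b) / 2.0, (b - a) / math.sqrt(12)

alternatives = {"A1": (2, 10), "A2": (4, 8), "A3": (5, 7), "A4": (6, 9)}
stats = {k: uniform_mean_spread(*v) for k, v in alternatives.items()}
lam = 0.25
mu1, s1 = stats["A1"]                    # A1 is the status quo

for name in ("A2", "A3", "A4"):
    mu, s = stats[name]
    psi = (mu1 - mu) + lam * (s - s1)    # Psi(A1, name)
    crisp = mu + lam * (s1 - s)          # defuzzified value relative to A1
    print(name, round(psi, 4), round(crisp, 2))
# A2 -0.2887 6.29, A3 -0.433 6.43, A4 -1.8608 7.86 (cf. Table 3)
```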
Example 2.
Consider two imprecise numbers with triangular distributions, A1 = (0, 6, 10) and A2 = (3.8, 4.8, 5.8), and a crisp value, A3 = 4.6, as the preference measurements of A1, A2, and A3. Their probability density functions are shown in Figure 4.
By using the Mellin transform,
E(A1) = 5.0, σ_A1 = 2.0412
E(A2) = 4.8, σ_A2 = 0.4082
E(A3) = 4.6, σ_A3 = 0
then the preference differences between the alternatives can be calculated as:
Ψ(A1, A2) = μ_Δ(A1, A2) + λ σ_Δ(A1, A2) = 0.2 + 0.25 × (0.4082 − 2.0412) = −0.2083 < 0
Ψ(A2, A3) = μ_Δ(A2, A3) + λ σ_Δ(A2, A3) = 0.2 + 0.25 × (0 − 0.4082) = −0.0980 < 0
The above result indicates that the preference relation is A3 ≻ A2 ≻ A1, where A3 is the best choice. Then, we can compare the proposed method with CoA and BoA, as shown in Table 4.
Example 3.
Assume three alternatives (A1, A2, and A3) are measured by imprecise preference ratings: A1 = (4, 7) is interval data, while A2 = (4, 5, 6, 7) and A3 = (2, 7, 8) are known to have trapezoidal and triangular distributions, respectively. Their probability density functions are shown in Figure 5.
By using the Mellin transform, the mean and the spread can be calculated as:
E(A1) = 5.5, σ_A1 = 0.8660
E(A2) = 5.5, σ_A2 = 0.4167
E(A3) = 5.67, σ_A3 = 1.2329
then the preference differences between the alternatives can be calculated as:
Ψ(A1, A2) = μ_Δ(A1, A2) + λ σ_Δ(A1, A2) = 0 + 0.25 × (0.4167 − 0.8660) = −0.1378 < 0
Ψ(A1, A3) = μ_Δ(A1, A3) + λ σ_Δ(A1, A3) = −0.1 + 0.25 × (1.2329 − 0.8660) = −0.008 < 0
Ψ(A2, A3) = μ_Δ(A2, A3) + λ σ_Δ(A2, A3) = −0.17 + 0.25 × (1.2329 − 0.4167) = 0.0341 > 0
The above result indicates that A2 ≻ A3 ≻ A1, and we can compare the proposed method with CoA and BoA, as shown in Table 5.
Example 4.
In order to extend our concept to further MADM applications, we demonstrate how to employ it within the SAW method. The SAW method can be expressed as:
$$V(A_i) = V_i = \sum_{j=1}^{n} w_j v_j(x_{ij}), \quad i = 1, \ldots, m;\; j = 1, \ldots, n$$
where V(A_i) is the value function of alternative A_i, and w_j and v_j(·) are the weight and value function of attribute j, respectively. After a normalization process, the value of alternative A_i can be rewritten as:
$$V_i = \sum_{j=1}^{n} w_j r_{ij}, \quad i = 1, \ldots, m$$
Next, we can apply our concept to the simple additive weighting method according to the following equations:
$$\Delta V_{ik} = V_i - V_k = \sum_{j=1}^{n} w_j r_{ij} - \sum_{j=1}^{n} w_j r_{kj} = \sum_{j=1}^{n} w_j \Delta r_{ikj}, \quad i, k = 1, \ldots, m;\; j = 1, \ldots, n$$
and let:
$$\Delta r_{ikj} = \mu_\Delta(A_{ij}, A_{kj}) + \lambda\,\sigma_\Delta(A_{ij}, A_{kj})$$
Then,
$$\Psi(A_i, A_k) = \sum_{j=1}^{n} w_j\, \mu_\Delta(A_{ij}, A_{kj}) + \sum_{j=1}^{n} w_j \lambda\, \sigma_\Delta(A_{ij}, A_{kj})$$
When Ψ(A_i, A_k) > 0, A_i ≻ A_k, where i, k ∈ {1, ..., m}, and vice versa.
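As a compact sketch of this weighted criterion (the helper name is ours; λ = 0.25 as before):

```python
def weighted_psi(weights, stats_i, stats_k, lam=0.25):
    """Psi(A_i, A_k) = sum_j w_j * [(mu_ij - mu_kj) + lam * (sigma_kj - sigma_ij)],
    where stats_i and stats_k are lists of (mean, spread) pairs, one per attribute."""
    return sum(w * ((mi - mk) + lam * (sk - si))
               for w, (mi, si), (mk, sk) in zip(weights, stats_i, stats_k))
```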
Assume there are three alternatives, described in Table 6, where each alternative is measured using interval data. In addition, we assume these alternatives follow uniform distributions.
Next, we need to normalize all alternatives, and linear normalization is most often used in the SAW method [20]. Therefore, in this example, we normalize the interval data according to the following equation:
$$r_{ij} = \left[ \frac{r_{ij}^{-}}{r_j^{*}}, \frac{r_{ij}^{+}}{r_j^{*}} \right]$$
where r⁻ and r⁺ are the lower and upper interval boundaries and r*_j is the maximum (upper-bound) value in the jth attribute. Based on Equation (20), the normalized interval data are shown in Table 7.
After this normalization process, we can calculate the differences between the alternatives’ means as follows:
Ψ(A1, A2) = Σ_j w_j μ_Δ(A_1j, A_2j) + Σ_j w_j λ σ_Δ(A_1j, A_2j) = 0.0206 + 0.0884 − 0.0846 = 0.0244 > 0
Ψ(A1, A3) = Σ_j w_j μ_Δ(A_1j, A_3j) + Σ_j w_j λ σ_Δ(A_1j, A_3j) = −0.0205 + 0.0118 − 0.0717 = −0.0804 < 0
According to these results, A1 ≻ A2 and A3 ≻ A1; therefore, A3 ≻ A1 ≻ A2 and alternative A3 is the best choice.
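A short end-to-end sketch of Example 4 (our helper names; λ = 0.25): normalize the Table 6 intervals by the column-wise maximum upper bound and evaluate the weighted preference differences:

```python
import math

raw = {  # interval scores from Table 6, assumed uniform
    "A1": [(3.6, 5.8), (4.3, 6.7), (2.3, 5.7)],
    "A2": [(2.3, 6.0), (3.8, 4.6), (4.8, 6.9)],
    "A3": [(4.3, 6.4), (4.7, 5.8), (4.8, 6.2)],
}
weights, lam = [0.2, 0.5, 0.3], 0.25

# Linear normalization: divide both bounds by the column-wise maximum upper bound.
col_max = [max(intervals[j][1] for intervals in raw.values()) for j in range(3)]
norm = {k: [(lo / col_max[j], hi / col_max[j]) for j, (lo, hi) in enumerate(v)]
        for k, v in raw.items()}

def stats(interval):                     # mean and spread of a uniform interval
    lo, hi = interval
    return (lo + hi) / 2.0, (hi - lo) / math.sqrt(12)

def weighted_psi(a, b):
    return sum(w * ((ma - mb) + lam * (sb - sa))
               for w, (ma, sa), (mb, sb)
               in zip(weights, map(stats, norm[a]), map(stats, norm[b])))

print(round(weighted_psi("A1", "A2"), 4))   # about 0.0245 > 0 (the text reports 0.0244)
print(round(weighted_psi("A1", "A3"), 4))   # about -0.0804 < 0, so A3 is preferred to A1
```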

6. Discussion

One critical issue for imprecise numbers is how to rank them and defuzzify their values. This paper focuses on imprecise data and uses the mean and spread to defuzzify and rank the data. Although the CoA and BoA are widely used to defuzzify fuzzy numbers because of their simplicity, they can produce wrong preferences in more complicated situations. According to utility theory, the difference between the spreads can be transformed into a difference between the means by multiplying by a specific ratio. Therefore, we developed the equations above to achieve this purpose.
Our method has several characteristics when ranking imprecise numbers, based on our numerical examples. First, when imprecise numbers have the same mean, the one with the smaller spread has the higher ranking preference. Second, the influence of the spread on the utility is clearly reflected in the results, so a higher mean does not necessarily indicate a higher ranking preference if the spread is too large; this trade-off between the mean and the spread is not considered in Lee and Li's method [11]. Third, our method is suitable for imprecise data, crisp data, or mixed data. Last, our method does not change the measurement unit, so the results can be interpreted more intuitively and easily applied in further areas of research.
The main concept of our method is that the difference in utility should be larger than zero when one preference is larger than another. We use three examples and the SAW method to demonstrate this concept. Based on the results, our method ranks the preferences correctly. However, the correctness of the proposed method highly depends on the choice of θ. Although we propose an experimental method to determine this parameter, more sophisticated approaches could be considered in different applications or settings. The choice of the parameter also provides decision-makers with the flexibility to reflect the problem they face.

7. Conclusions

According to probability theory, we can describe the characteristics of imprecise data by its mean and spread. These two criteria are also used to rank interval and fuzzy numbers. However, the mean and spread stand in a trade-off relationship, and it is not necessarily right to choose the alternative with the higher mean if its spread is too large. In order to resolve this inconsistency between the mean and the spread, the difference between the spreads is transformed into a difference between the means. According to the resulting preference differences between alternatives, we can rank the priority of the imprecise data. In addition, we also provide a method to derive the crisp value of an imprecise number.
There are several advantages to the proposed method. First, after transforming the spread, we can overcome the problem of being unable to determine the better ranking when one alternative has both a higher mean and a higher spread. Second, regardless of the distribution of the imprecise data, the mean and spread can be calculated quickly by the Mellin transform. Third, interpretation is easier and more intuitive because the unit is not changed. In addition, our method can easily be extended to other MADM methods for further applications.

Author Contributions

Conceptualization, C.-Y.C. and J.-J.H.; methodology, J.-J.H.; writing—review and editing, C.-Y.C. and J.-J.H. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

Abbreviation | Definition
DEA | Data envelopment analysis
pdf | Probability density function
CV | Coefficient of variation
MADM | Multiple attribute decision-making
SAW | Simple additive weighting
WPM | Weighted product method
CoA | Center of area
BoA | Bisector of area
RP | Risk premium
Symbol | Definition
A | Alternative
μ | Mean
σ | Spread
u(·) | Utility function
θ | Indifference ratio between the mean and the spread
λ | Trade-off ratio between the mean and the spread
ΔΨ_A | Preference difference between two imprecise numbers
M(s) | Mellin transform
[r⁻, r⁺] | Interval boundary

Appendix A

Based on the above discussion, the trade-off relationship between the mean and the spread can be defined as:
$$\Delta\mu_A = \theta\,\Delta\sigma_A$$
where θ < 0 is an indifference ratio between the mean and the spread.
According to this equation, the mean and the spread trade off at the indifference ratio, independently of the shape of the alternative's distribution. Next, we simulate various situations to determine the indifference ratio.
Consider two alternatives, A1 = (4, 6) and A2 = (0, 10), measured by imprecise data with triangular distributions; their probability density functions (f(x1) and f(x2)) are shown in Figure A1. The mean and the spread of A1 and A2 can be calculated using the Mellin transform as:
E(A1) = 5, σ_A1 = 0.5
E(A2) = 5, σ_A2 = 2.0412
Figure A1. The concept of finding the indifference ratio.
It is obvious that A1 ≻ A2 (they have the same mean and A1 has the smaller spread). We then move f(x1) to the left by decreasing E(A1) until the expert can no longer tell which alternative is better, i.e., u(A1) ≈ u(A2). Based on our experiments, when E(A1) ≈ 4.6, then u(A1) ≈ u(A2) and θ ≈ −0.25. The results are described in Figure A2.
Figure A2. The trade-off result where u(A1) ≈ u(A2).
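As a small sketch (our variable names), the reported value of θ can be backed out directly from the indifference point above:

```python
d_mu = 4.6 - 5.0            # change in E(A1) needed to reach indifference
d_sigma = 2.0412 - 0.5      # spread difference sigma_A2 - sigma_A1
theta = d_mu / d_sigma
print(round(theta, 3))      # about -0.26, i.e., theta is roughly -0.25
```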

References

  1. Zadeh, L.A. Fuzzy sets. Inf. Control 1965, 8, 338–353.
  2. Ishibuchi, H.; Tanaka, H. Multiobjective programming in optimization of the interval objective function. Eur. J. Oper. Res. 1990, 48, 219–225.
  3. Shaocheng, T. Interval number and fuzzy number linear programmings. Fuzzy Sets Syst. 1994, 66, 301–306.
  4. Sengupta, A.; Pal, T.K.; Chakraborty, D. Interpretation of inequality constraints involving interval coefficients and a solution to interval linear programming. Fuzzy Sets Syst. 2001, 119, 129–138.
  5. Tavana, M.; Khalili-Damghani, K.; Arteaga, F.J.S.; Mahmoudi, R.; Hafezalkotob, A. Efficiency decomposition and measurement in two-stage fuzzy DEA models using a bargaining game approach. Comput. Ind. Eng. 2018, 118, 394–408.
  6. Zhou, X.; Wang, Y.; Chai, J.; Wang, L.; Wang, S.; Lev, B. Sustainable supply chain evaluation: A dynamic double frontier network DEA model with interval type-2 fuzzy data. Inf. Sci. 2019, 504, 394–421.
  7. Arana-Jiménez, M.; Sánchez-Gil, M.; Younesi, A.; Lozano, S. Integer interval DEA: An axiomatic derivation of the technology and an additive, slacks-based model. Fuzzy Sets Syst. 2021, 422, 83–105.
  8. Akram, M.; Adeel, A. TOPSIS Approach for MAGDM Based on Interval-Valued Hesitant Fuzzy N-Soft Environment. Int. J. Fuzzy Syst. 2019, 21, 993–1009.
  9. Tolga, A.C.; Parlak, I.B.; Castillo, O. Finite-interval-valued Type-2 Gaussian fuzzy numbers applied to fuzzy TODIM in a healthcare problem. Eng. Appl. Artif. Intell. 2020, 87, 103352.
  10. Wu, L.; Gao, H.; Wei, C. VIKOR method for financing risk assessment of rural tourism projects under interval-valued intuitionistic fuzzy environment. J. Intell. Fuzzy Syst. 2019, 37, 2001–2008.
  11. Lee, E.; Li, R.-J. Comparison of fuzzy numbers based on the probability measure of fuzzy events. Comput. Math. Appl. 1988, 15, 887–896.
  12. Cheng, C.-H. A new approach for ranking fuzzy numbers by distance method. Fuzzy Sets Syst. 1998, 95, 307–317.
  13. Ban, A.I.; Coroianu, L. Simplifying the Search for Effective Ranking of Fuzzy Numbers. IEEE Trans. Fuzzy Syst. 2014, 23, 327–339.
  14. Chanas, S.; Zieliński, P. Ranking fuzzy interval numbers in the setting of random sets–further results. Inf. Sci. 1999, 117, 191–200.
  15. Chanas, S.; Delgado, M.; Verdegay, J.; Vila, M. Ranking fuzzy interval numbers in the setting of random sets. Inf. Sci. 1993, 69, 201–217.
  16. Moore, R.E. Interval Analysis; Prentice-Hall: Englewood Cliffs, NJ, USA, 1966; Volume 4, pp. 8–13.
  17. Sengupta, A.; Pal, T.K. On comparing interval numbers. Eur. J. Oper. Res. 2000, 127, 28–43.
  18. Kundu, S. Min-transitivity of fuzzy leftness relationship and its application to decision making. Fuzzy Sets Syst. 1997, 86, 357–367.
  19. Yoon, K. A probabilistic approach to rank complex fuzzy numbers. Fuzzy Sets Syst. 1996, 80, 167–176.
  20. Yoon, K.P.; Hwang, C.L. Multiple Attribute Decision Making: An Introduction; Sage University Paper Series on Quantitative Applications in the Social Sciences; Sage: Thousand Oaks, CA, USA, 1995.
Figure 1. The conflicting situation in imprecise data.
Figure 2. The relationship between the mean and the spread.
Figure 3. The probability density function in the first example.
Figure 4. The probability density function in the second example.
Figure 5. The probability density function in the third example.
Table 1. The properties of the Mellin transform.
Property | Function | Mellin transform
Scaling property | f(ax) | a^(−s) M(s)
Multiplication by x^a | x^a f(x) | M(s + a)
Raising to a real power | f(x^a) | a^(−1) M(s/a), a > 0
Inverse | x^(−1) f(x^(−1)) | M(1 − s)
Multiplication by ln x | (ln x) f(x) | (d/ds) M(s)
Derivative | (d^k/dx^k) f(x) | (−1)^k [Γ(s)/Γ(s − k)] M(s − k)
Table 2. Mellin transforms of some probability density functions.
Distribution | Parameters | M(s)
Uniform | UNI(a, b) | (b^s − a^s) / [s (b − a)]
Triangular | TRI(l, m, u) | 2 / [(u − l) s (s + 1)] × [ u(u^s − m^s)/(u − m) − l(m^s − l^s)/(m − l) ]
Trapezoidal | TRA(a, b, c, d) | 2 / [(c + d − a − b) s (s + 1)] × [ (d^(s+1) − c^(s+1))/(d − c) − (b^(s+1) − a^(s+1))/(b − a) ]
Table 3. The comparison of defuzzifying approaches in Example 1.
Methods | A1 | A2 | A3 | A4 | Rank
CoA | 6 | 6 | 6 | 7.5 | A4 ≻ A3 ∼ A2 ∼ A1
BoA | 6 | 6 | 6 | 7.5 | A4 ≻ A3 ∼ A2 ∼ A1
Proposed | 6 (status quo) | 6.29 | 6.43 | 7.86 | A4 ≻ A3 ≻ A2 ≻ A1
Table 4. The comparison of defuzzifying approaches in Example 2.
Methods | A1 | A2 | A3 | Rank
CoA | 5.33 | 4.59 | 4.6 | A1 ≻ A3 ≻ A2
BoA | 5.5 | 4.6 | 4.6 | A1 ≻ A3 ∼ A2
Proposed | 4.6 | 4.8 (status quo) | 4.9 | A3 ≻ A2 ≻ A1
Table 5. The comparison of defuzzifying approaches in Example 3.
Methods | A1 | A2 | A3 | Rank
CoA | 5.5 | 5.5 | 5.67 | A3 ≻ A1 ∼ A2
BoA | 5.5 | 5.5 | 5.9 | A3 ≻ A1 ∼ A2
Proposed | 5.5 (status quo) | 5.64 | 5.51 | A2 ≻ A3 ≻ A1
Table 6. Data for evaluation in the fourth example.
Alternatives | w1 = 0.2 | w2 = 0.5 | w3 = 0.3
A1 | [3.6, 5.8] | [4.3, 6.7] | [2.3, 5.7]
A2 | [2.3, 6.0] | [3.8, 4.6] | [4.8, 6.9]
A3 | [4.3, 6.4] | [4.7, 5.8] | [4.8, 6.2]
Table 7. Data for evaluation in the fourth example after normalization.
Alternatives | w1 = 0.2 | w2 = 0.5 | w3 = 0.3
A1 | [0.563, 0.906] | [0.642, 1.000] | [0.333, 0.826]
A2 | [0.359, 0.938] | [0.567, 0.687] | [0.696, 1.000]
A3 | [0.672, 1.000] | [0.701, 0.866] | [0.696, 0.899]
