**6. Applications**

In this section, the proposed methods are used to enhance the performance of several methods that rely on fuzzy attribute membership: regression, clustering, and fuzzy evaluation. Four examples are presented, with detailed descriptions and comparisons of the results from the different methods.

#### *6.1. Applications for Regression*

For some regression problems, the number of KFAs is small for various reasons, which degrades regression performance. These KFAs are partially correlated, which researchers can establish from a known mechanism or from experience. To deal with this type of problem, the technique for approximately estimating UFAs in Section 4 is applied to enhance regression performance. We first use the estimation method to approximately estimate the UFAs of the samples, then combine the estimated UFAs and the KFAs as the predictor attributes before regressing. The support vector machine regression model (SVMR) and the Gaussian kernel regression model using random feature expansion (GKR) are chosen as the regression models. For GKR, two kinds of learners are used: an SVM learner and linear regression via ordinary least squares.
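As a rough sketch of the GKR baseline with the ordinary-least-squares learner (the GKR-2 variant below), the Gaussian kernel can be approximated by a random Fourier feature expansion. The function name, `n_features`, `sigma`, and `seed` are illustrative choices, not the paper's settings; SVMR and the SVM learner would normally come from a standard library.

```python
import numpy as np

def gkr_fit_predict(X_train, y_train, X_test, n_features=300, sigma=1.0, seed=0):
    """Gaussian kernel regression via random feature expansion with an
    ordinary-least-squares learner. The map z(x) = sqrt(2/D) cos(xW + b),
    with Gaussian W and uniform b, approximates a Gaussian kernel."""
    rng = np.random.default_rng(seed)
    d = X_train.shape[1]
    W = rng.normal(scale=1.0 / sigma, size=(d, n_features))
    b = rng.uniform(0.0, 2.0 * np.pi, size=n_features)

    def expand(X):
        return np.sqrt(2.0 / n_features) * np.cos(X @ W + b)

    # Ordinary least squares on the expanded features.
    coef, *_ = np.linalg.lstsq(expand(X_train), y_train, rcond=None)
    return expand(X_test) @ coef
```

Swapping the least-squares step for an epsilon-insensitive SVM learner gives the GKR-1 variant.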

**Example 1.** *Let $X = \{x_1, x_2, \cdots, x_n\}$, $n = 392$, be a set of car samples; the measurable attribute sequence of sample $x_i$ is defined as:*

$$S^{x_i} = \langle a_1^{x_i}(weight), a_2^{x_i}(cylinders), a_3^{x_i}(horsepower), a_4^{x_i}(model\ year), a_5^{x_i}(MPG) \rangle. \tag{17}$$

*Let the fuzzy membership grade for attribute $j$ of $x_i$ be calculated by:*

$$\hat{a}_j^{x_i} = \nu_j\left(a_j^{x_i}\right), \tag{18}$$

$$\nu_j\left(a_j^{x_i}\right) = \frac{a_j^{x_i} - \min\left(a_j^{x}\right)}{\max\left(a_j^{x}\right) - \min\left(a_j^{x}\right)}, \tag{19}$$

*where $\nu_j: a_j^{x} \to [0,1]$, $j = 1, 2, \cdots, 5$, is the attribute membership function. The fuzzy measurable attribute sequence of $x_i$ is:*

$$\hat{S}^{x_i} = \langle \hat{a}_1^{x_i}, \hat{a}_2^{x_i}, \dots, \hat{a}_5^{x_i} \rangle. \tag{20}$$

*Choose some attributes from $a_1^{x}, a_2^{x}, a_3^{x}, a_4^{x}$ as predictor variables and $a_5^{x}$ as the response variable, and calculate the regression model between the predictors and the response by SVMR and GKR, respectively, with and without applying the proposed method for approximately estimating UFAs in Section 4. To compare the regression results, calculate the loss indexes: Huber loss (HL), mean squared error (MSE), and the epsilon-insensitive function (EI). The smaller the values of these three indexes, the better the regression performance. The EI index is appropriate for SVM learners only.*
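The three loss indexes can be computed as below; the `delta` and `eps` parameters are illustrative defaults, since the values used in the experiments are not stated here.

```python
import numpy as np

def huber_loss(y_true, y_pred, delta=1.0):
    """Huber loss: quadratic for residuals within delta, linear outside."""
    r = np.abs(np.asarray(y_true) - np.asarray(y_pred))
    return np.mean(np.where(r <= delta, 0.5 * r**2, delta * (r - 0.5 * delta)))

def mse(y_true, y_pred):
    """Mean squared error."""
    r = np.asarray(y_true) - np.asarray(y_pred)
    return np.mean(r**2)

def epsilon_insensitive(y_true, y_pred, eps=0.1):
    """Epsilon-insensitive loss used by SVM regression: residuals
    smaller than eps cost nothing."""
    r = np.abs(np.asarray(y_true) - np.asarray(y_pred))
    return np.mean(np.maximum(0.0, r - eps))
```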

If $a_1^{x}, a_2^{x}, a_3^{x}$ are the predictor variables and $a_5^{x}$ is the response variable, then $a_1^{x}, a_2^{x}, a_3^{x}$ are KFAs, so the KFA attribute number sequence is $N_{KFA} = 1, 2, 3$ in Step 2 of the technique for approximately estimating UFAs. Calculate the regressions by SVMR, GKR with an SVM learner (denoted GKR-1), and GKR with linear regression (denoted GKR-2), with and without the application of the proposed method. Take 20% of the samples out as test samples. The comparisons of the loss indexes of the results are shown in Table 1. Notably, for the SVMR method, the more estimated UFAs there are to learn from, the better the learning performance. Meanwhile, the GKR method needs only a small number of UFAs to expand features; if the number of UFAs is too large, the GKR methods overfit. Thus, the number of UFAs for the SVMR method should be much larger than for the GKR methods. After many tests, the best $N_{UFA}$ in Step 4 for the different methods is shown in Table 1.


**Table 1.** The comparisons of loss indexes of results if $a_1^{x}, a_2^{x}, a_3^{x}$ are knowable fuzzy attributes (KFAs).

1 A = 1: with the application of the proposed method; A = 0: without the application of the proposed method. 2 $\alpha : \beta : \gamma$: the sequence with starting point $\alpha$, step $\beta$, and endpoint $\gamma$. HL: Huber loss; MSE: mean squared error; EI: epsilon-insensitive function; SVMR: support vector machine regression; GKR: Gaussian kernel regression.

If $a_1^{x}, a_2^{x}, a_3^{x}, a_4^{x}$ are the predictor variables and $a_5^{x}$ is the response variable, then $a_1^{x}, a_2^{x}, a_3^{x}, a_4^{x}$ are KFAs, so the KFA attribute number sequence is $N_{KFA} = 1, 2, 3, 4$ in Step 2 of the technique for approximately estimating UFAs. Calculate the regressions by SVMR, GKR-1, and GKR-2, with and without the application of the proposed method. Take 20% of the samples out as test samples. The UFA number sequences of each regression and the comparisons of the loss indexes of the results are shown in Table 2.


**Table 2.** The comparisons of loss indexes of results if $a_1^{x}, a_2^{x}, a_3^{x}, a_4^{x}$ are KFAs.

1 A = 1: with the application of the proposed method; A = 0: without the application of the proposed method. 2 $\alpha : \beta : \gamma$: the sequence with starting point $\alpha$, step $\beta$, and endpoint $\gamma$.

From Tables 1 and 2, we can see that the index values of the regression results (both training and test) with the application of the proposed method are all smaller than those without it, especially in Table 2. Thus, we conclude that applying the proposed method to regression problems in which the number of KFAs is quite small and the known attributes are correlated is effective.

#### *6.2. Applications for Clustering*

For some clustering problems, only some attributes of the given samples are available, which definitely lowers the accuracy of the clustering results. In this section, the proposed methods are used to enhance the performance of fuzzy c-means clustering (FCM), K-means clustering (K-means), and K-medoids clustering (K-medoids): the proposed estimation method is applied to approximately estimate the UFAs of each sample, and the estimated UFAs and KFAs are combined as the sample attributes before clustering. All three are partition-based clustering methods. To analyze and compare the clustering results, the accurate rate (AR), Rand index (RI), and normalized mutual information (NMI) of the clustering results are calculated. The larger the values of these indexes, the better the clustering performance.
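For reference, the AR and RI indexes can be computed as in the sketch below (NMI is available in standard libraries). The brute-force relabeling in `accurate_rate` is an illustrative choice that is feasible only for a small number of clusters; the function names are ours.

```python
from itertools import combinations, permutations

import numpy as np

def accurate_rate(labels_true, labels_pred):
    """AR: clustering accuracy under the best one-to-one relabeling of
    the predicted clusters (brute force over relabelings)."""
    true, pred = np.asarray(labels_true), np.asarray(labels_pred)
    best = 0.0
    for perm in permutations(np.unique(true), len(np.unique(pred))):
        mapping = dict(zip(np.unique(pred), perm))
        relabeled = np.array([mapping[c] for c in pred])
        best = max(best, float(np.mean(relabeled == true)))
    return best

def rand_index(labels_true, labels_pred):
    """RI: fraction of sample pairs on which both partitions agree
    (same cluster in both, or different clusters in both)."""
    pairs = list(combinations(range(len(labels_true)), 2))
    agree = sum(
        (labels_true[i] == labels_true[j]) == (labels_pred[i] == labels_pred[j])
        for i, j in pairs
    )
    return agree / len(pairs)
```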

**Example 2.** *Let $X = \{x_1, x_2, \cdots, x_n\}$, $n = 150$, be a set of iris samples; the measurable attribute sequence of sample $x_i$ is defined as:*

$$S^{x_i} = \left\langle a_1^{x_i}(\text{sepal length}), a_2^{x_i}(\text{sepal width}), a_3^{x_i}(\text{petal length}), a_4^{x_i}(\text{petal width}) \right\rangle. \tag{21}$$

*Let fuzzy membership grade for attribute j of x be calculated by Equations (18) and (19).*

*The fuzzy measurable attribute sequence of xi is:*

$$\hat{S}^{x_i} = \langle \hat{a}_1^{x_i}, \hat{a}_2^{x_i}, \hat{a}_3^{x_i}, \hat{a}_4^{x_i} \rangle. \tag{22}$$

*All 150 iris samples can be divided into three clusters; each cluster contains 50 samples.*

$a_1^{x}, a_2^{x}, a_3^{x}, a_4^{x}$ are KFAs, so the KFA attribute number sequence is $N_{KFA} = 1, 2, 3, 4$ in Step 2 of the technique for approximately estimating UFAs. For the iris, the four attributes of the sample set describe its geometry, so the number of UFAs can be large. Calculate the clustering by FCM, K-means, and K-medoids, with and without the application of the proposed method. Every method is run 100 times, and all clustering calculations use the Euclidean distance. The UFA number sequences of each clustering and the comparisons of the clustering indexes of the results are shown in Table 3.


**Table 3.** The comparisons of clustering indexes of results.

1 A = 1: with the application of the proposed method; A = 0: without the application of the proposed method. AR: accurate rate; RI: Rand index; NMI: normalized mutual information; FCM: fuzzy c-means.

From Table 3, we can see that the index values of the clustering results (worst, mean, and best values of AR, RI, and NMI) with the application of the proposed method are all larger than those without it. We conclude that applying the proposed method to clustering problems in which the number of KFAs is quite small and the known attributes are correlated is effective.

#### *6.3. Applications for Power Quality Evaluation*

Let $X = \{x_1(node\ 1), x_2(node\ 2), x_3(node\ 3), x_4(node\ 4), x_5(node\ 5)\}$ be a set of power network nodes; the measurable attribute sequence of node $x_i$ is defined as:

$$S^{x_i} = \left\langle \begin{array}{l} a_1^{x_i}(\text{frequency deviation}), a_2^{x_i}(\text{voltage deviation}), a_3^{x_i}(\text{voltage sag}), \\ a_4^{x_i}(\text{voltage swell}), a_5^{x_i}(\text{voltage fluctuation}), a_6^{x_i}(\text{voltage flicker}), \\ a_7^{x_i}(\text{voltage harmonics}), a_8^{x_i}(\text{reliability index}), a_9^{x_i}(\text{service index}) \end{array} \right\rangle, \tag{23}$$

whose detailed values are shown in Table 4. The smaller the value of a measurable attribute, the better the power quality.


**Table 4.** Power quality values.

Let the fuzzy membership grade for attribute $j$ of $x_i$ be calculated by Equations (18) and (19). The fuzzy measurable attribute sequence of $x_i$ is:

$$
\hat{S}^{x\_i} = \langle \hat{a}\_1^{x\_i}, \hat{a}\_2^{x\_i}, \dots, \hat{a}\_9^{x\_i} \rangle,\tag{24}
$$

the detailed values of $\hat{S}^{x_i}$ are shown as follows:

$$\hat{S}^{x_1} = \langle 0.0000, 0.0000, 1.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.1778 \rangle,$$
$$\hat{S}^{x_2} = \langle 0.6598, 1.0000, 0.0000, 0.5300, 0.3226, 1.0000, 0.9333, 0.7634, 0.8389 \rangle,$$
$$\hat{S}^{x_3} = \langle 0.2660, 0.3281, 0.5570, 0.5200, 1.0000, 0.4304, 0.3333, 0.3978, 0.0000 \rangle,$$
$$\hat{S}^{x_4} = \langle 0.8918, 0.6107, 0.6694, 0.9100, 0.0645, 0.9439, 0.5754, 1.0000, 1.0000 \rangle,$$
$$\hat{S}^{x_5} = \langle 1.0000, 0.2907, 0.5136, 1.0000, 0.4032, 0.9492, 1.0000, 0.7419, 0.4500 \rangle.$$

If the sequences of attribute weights of all nodes are the same and the attribute weights are equal:

$$\lambda = \left\langle \lambda_1\left(\tfrac{1}{9}\right), \lambda_2\left(\tfrac{1}{9}\right), \lambda_3\left(\tfrac{1}{9}\right), \lambda_4\left(\tfrac{1}{9}\right), \lambda_5\left(\tfrac{1}{9}\right), \lambda_6\left(\tfrac{1}{9}\right), \lambda_7\left(\tfrac{1}{9}\right), \lambda_8\left(\tfrac{1}{9}\right), \lambda_9\left(\tfrac{1}{9}\right) \right\rangle, \tag{25}$$

then the evaluation of the power quality of nodes by the traditional fuzzy evaluation method is calculated by:

$$E^{x_i} = \hat{S}^{x_i} \cdot \lambda^{x_i} = \sum_{j=1}^{9} \hat{a}_j^{x_i} \times \lambda_j. \tag{26}$$

The evaluation results are shown in Table 5. The smaller the evaluation value, the better the power quality of the node. From Table 5, we can conclude that $x_1 \succ x_3 \succ x_2 \succ x_5 \succ x_4$, where $\succ$ means "better than".

**Table 5.** Evaluation results by the traditional method with all attribute information known.
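Using the normalized values listed above, the traditional evaluation of Equation (26) with equal weights reduces to a weighted row sum. A minimal check (node data transcribed from this section):

```python
import numpy as np

# Normalized attribute values of the five nodes (rows x1..x5).
S = np.array([
    [0.0000, 0.0000, 1.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.1778],
    [0.6598, 1.0000, 0.0000, 0.5300, 0.3226, 1.0000, 0.9333, 0.7634, 0.8389],
    [0.2660, 0.3281, 0.5570, 0.5200, 1.0000, 0.4304, 0.3333, 0.3978, 0.0000],
    [0.8918, 0.6107, 0.6694, 0.9100, 0.0645, 0.9439, 0.5754, 1.0000, 1.0000],
    [1.0000, 0.2907, 0.5136, 1.0000, 0.4032, 0.9492, 1.0000, 0.7419, 0.4500],
])

weights = np.full(9, 1.0 / 9.0)      # equal weights, Eq. (25)
E = S @ weights                      # Eq. (26)

# Smaller E means better power quality; rank nodes best-first.
ranking = ["x%d" % (i + 1) for i in np.argsort(E)]
print(ranking)  # -> ['x1', 'x3', 'x2', 'x5', 'x4']
```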


Furthermore, if the conditions mentioned in Section 2 are taken into consideration, the following examples are evaluated with the proposed method of Section 5, and the results are compared with those of other methods.

**Example 3.** *Assume that only the measurable attributes $a_1^{x}, a_2^{x}, a_3^{x}, a_4^{x}, a_5^{x}$ are known and the other four attributes are unknown. According to power quality theory, these unknown attributes are correlated with the five known ones, so the fuzzy measurable attribute sequence of $x_i$ is redefined as follows:*

$$\hat{S}^{x_i} = \langle \hat{a}_1^{x_i}, \hat{a}_2^{x_i}, \dots, \hat{a}_9^{x_i} \rangle = \underbrace{\langle \hat{a}_1^{x_i}, \hat{a}_2^{x_i}, \hat{a}_3^{x_i}, \hat{a}_4^{x_i}, \hat{a}_5^{x_i} \rangle}_{\hat{S}^{x_i}_{KFA}} + \underbrace{\langle \hat{a}_6^{x_i}, \hat{a}_7^{x_i}, \hat{a}_8^{x_i}, \hat{a}_9^{x_i} \rangle}_{\hat{S}^{x_i}_{UFA}}. \tag{27}$$

Additionally, the sequence of attribute weights is redefined as follows:

$$
\lambda = \left\langle \frac{1}{5}, \frac{1}{5}, \frac{1}{5}, \frac{1}{5}, \frac{1}{5} \right\rangle. \tag{28}
$$

Apply the proposed evaluation method to solve this example, and the process is:

Firstly, approximately estimate $\hat{S}^{x_i}_{UFA}$ based on the proposed method. Taking $N_{KFA} = 1, 2, \cdots, 5$ as the independent variable $E$ in Step 2 of the technique for approximately estimating UFAs and $\hat{S}^{x_i}_{KFA}$ as the dependent variable $Y$, interpolate the function curve of each node. There is little heuristic knowledge with which to determine the number of UFAs $r$, so $r = 4$ is a conservative choice. We choose the median values of adjacent pairs of $N_{KFA}$ as the UFA indices, generating the UFA number sequence $N_{UFA} = 1.5, 2.5, 3.5, 4.5$ in Step 4 of the technique to approximately estimate UFAs. Calculate the estimation from the fitted function with $N_{UFA}$ as its input; the results are:

$$\hat{S}^{x_1}_{UFA} = \langle \hat{a}_{1.5}^{x_1}(-0.4063), \hat{a}_{2.5}^{x_1}(0.6563), \hat{a}_{3.5}^{x_1}(0.6564), \hat{a}_{4.5}^{x_1}(-0.4062) \rangle,$$
$$\hat{S}^{x_2}_{UFA} = \langle \hat{a}_{1.5}^{x_2}(1.2571), \hat{a}_{2.5}^{x_2}(0.4079), \hat{a}_{3.5}^{x_2}(0.1352), \hat{a}_{4.5}^{x_2}(0.7405) \rangle,$$
$$\hat{S}^{x_3}_{UFA} = \langle \hat{a}_{1.5}^{x_3}(0.2291), \hat{a}_{2.5}^{x_3}(0.4695), \hat{a}_{3.5}^{x_3}(0.5435), \hat{a}_{4.5}^{x_3}(0.6264) \rangle,$$
$$\hat{S}^{x_4}_{UFA} = \langle \hat{a}_{1.5}^{x_4}(0.7162), \hat{a}_{2.5}^{x_4}(0.5901), \hat{a}_{3.5}^{x_4}(0.8289), \hat{a}_{4.5}^{x_4}(0.7196) \rangle,$$
$$\hat{S}^{x_5}_{UFA} = \langle \hat{a}_{1.5}^{x_5}(0.4976), \hat{a}_{2.5}^{x_5}(0.3168), \hat{a}_{3.5}^{x_5}(0.7975), \hat{a}_{4.5}^{x_5}(0.9317) \rangle.$$
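The estimation step above can be sketched as follows. This excerpt does not pin down the interpolation family, so the global polynomial interpolant through the $(N_{KFA}, \hat{a})$ pairs is only one plausible choice (`estimate_ufa` is our name), and it need not reproduce the exact values listed in this example.

```python
import numpy as np

def estimate_ufa(n_kfa, kfa_values, n_ufa):
    """Fit an interpolating polynomial through the (attribute index,
    KFA value) pairs and evaluate it at the UFA index sequence.
    The polynomial family is an illustrative assumption."""
    coef = np.polyfit(n_kfa, kfa_values, deg=len(n_kfa) - 1)
    return np.polyval(coef, n_ufa)
```

For instance, `estimate_ufa([1, 2, 3, 4, 5], kfa_row, [1.5, 2.5, 3.5, 4.5])` yields four estimated membership grades per node.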

Secondly, generate the final evaluation based on the attribute weight reconfiguration. Because four attributes are estimated from five, the new sequence of attribute weights is calculated by Step 1 of the technique to generate the final evaluation:

$$\lambda^x = \left\langle \lambda^1_1, \lambda^{1,2}_{1.5}, \lambda^2_2, \lambda^{2,3}_{2.5}, \lambda^3_3, \lambda^{3,4}_{3.5}, \lambda^4_4, \lambda^{4,5}_{4.5}, \lambda^5_5 \right\rangle = \left\langle \tfrac{1}{10}, \tfrac{1}{6}, \tfrac{1}{15}, \tfrac{2}{15}, \tfrac{1}{15}, \tfrac{2}{15}, \tfrac{1}{15}, \tfrac{1}{6}, \tfrac{1}{10} \right\rangle, \tag{29}$$

and the evaluation of power quality of nodes is calculated by Step 3 of the technique to generate the final evaluation:

$$E^{x_i} = \hat{a}_1^{x_i} \times \lambda^1_1 + \hat{a}_{1.5}^{x_i} \times \lambda^{1,2}_{1.5} + \hat{a}_2^{x_i} \times \lambda^2_2 + \hat{a}_{2.5}^{x_i} \times \lambda^{2,3}_{2.5} + \hat{a}_3^{x_i} \times \lambda^3_3 + \hat{a}_{3.5}^{x_i} \times \lambda^{3,4}_{3.5} + \hat{a}_4^{x_i} \times \lambda^4_4 + \hat{a}_{4.5}^{x_i} \times \lambda^{4,5}_{4.5} + \hat{a}_5^{x_i} \times \lambda^5_5. \tag{30}$$
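The weight reconfiguration of Equation (29) can be reproduced with a simple sharing rule: split each KFA weight equally among the attribute itself and its adjacent estimated indices, and let each estimated index collect the shares from its neighboring KFAs. This rule is our reconstruction, not a quotation of the paper's Step 1 formula; it does, however, produce weight sequences that sum to 1 for both Examples 3 and 4.

```python
from fractions import Fraction

def reconfigure_weights(n_kfa, n_ufa, kfa_weight):
    """Reconstructed weight-reconfiguration rule: each KFA weight is split
    equally among itself and the estimated indices within distance 1 of
    it; a UFA weight is the sum of the shares it receives. (Assumed
    scheme, inferred from the worked examples.)"""
    shares = {k: Fraction(0) for k in list(n_kfa) + list(n_ufa)}
    for k in n_kfa:
        neighbors = [u for u in n_ufa if abs(u - k) < 1]  # adjacent UFAs
        part = kfa_weight / (len(neighbors) + 1)
        shares[k] += part
        for u in neighbors:
            shares[u] += part
    return [shares[k] for k in sorted(shares)]
```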

The evaluation results are shown in Table 6. The smaller the evaluation value, the better the power quality of the node. From Table 6, we can see that the evaluation result by the traditional method is $x_1 \succ x_2 \succ x_3 \succ x_4 \succ x_5$, while that by the proposed method is $x_1 \succ x_3 \succ x_2 \succ x_5 \succ x_4$, which is the same as the result obtained by the traditional method with all attributes known. To compare the results quantitatively, the Hamming distance can be used to measure the similarity between two evaluation results. The Hamming distance between two equal-length rankings is defined here as the ratio of the minimum number of substitutions required to change one ranking into the other to the length of the ranking. The Hamming distance between the decision result by the traditional method with all attributes known and the result by the traditional method with only the KFAs known is 0.8, while for the proposed method it is 0; i.e., for this decision-making problem with incomplete attribute information, the proposed method obtains the same decision result as the traditional method obtains with complete attribute information.
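The ranking comparison uses a normalized Hamming distance; a direct implementation (the function name is ours):

```python
def ranking_hamming(rank_a, rank_b):
    """Normalized Hamming distance between two equal-length rankings:
    the fraction of positions whose entries differ."""
    assert len(rank_a) == len(rank_b)
    return sum(a != b for a, b in zip(rank_a, rank_b)) / len(rank_a)
```

For the rankings in Example 3, `ranking_hamming(['x1','x2','x3','x4','x5'], ['x1','x3','x2','x5','x4'])` gives 0.8, and two identical rankings give 0.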

**Table 6.** Evaluation results with KFAs known.


**Example 4.** *Assume that only the measurable attributes $a_1^{x}, a_2^{x}, a_4^{x}, a_5^{x}$ are known and the other five attributes are unknown. According to power quality theory, these unknown attributes are correlated with the four known ones, so the fuzzy measurable attribute sequence of $x_i$ is redefined as follows:*

$$\hat{S}^{x_i} = \langle \hat{a}_1^{x_i}, \hat{a}_2^{x_i}, \cdots, \hat{a}_9^{x_i} \rangle = \underbrace{\langle \hat{a}_1^{x_i}, \hat{a}_2^{x_i}, \hat{a}_4^{x_i}, \hat{a}_5^{x_i} \rangle}_{\hat{S}^{x_i}_{KFA}} + \underbrace{\langle \hat{a}_3^{x_i}, \hat{a}_6^{x_i}, \hat{a}_7^{x_i}, \hat{a}_8^{x_i}, \hat{a}_9^{x_i} \rangle}_{\hat{S}^{x_i}_{UFA}}. \tag{31}$$

*Additionally, the sequence of attribute weights is redefined as follows:*

$$
\lambda = \left\langle \frac{1}{4}, \frac{1}{4}, \frac{1}{4}, \frac{1}{4} \right\rangle. \tag{32}
$$

Apply the proposed evaluation method to solve this example, and the process is:

Firstly, approximately estimate $\hat{S}^{x_i}_{UFA}$ based on the proposed method. Taking $N_{KFA} = 1, 2, 3, 4$ as the independent variable $E$ in Step 2 of the technique for approximately estimating UFAs and $\hat{S}^{x_i}_{KFA}$ as the dependent variable $Y$, interpolate the function curve of each node. Here, we also conservatively let $r = 5$. To avoid an excessive effect of $a_1^{x_i}$, we choose 1.5 as the starting point of the UFA sequence, generating the UFA number sequence $N_{UFA} = 1.5, 2.1, 2.7, 3.3, 3.9$ in Step 4 of the technique to approximately estimate UFAs. Calculate the estimation from the fitted function with $N_{UFA}$ as its input; the results are:

$$\hat{S}^{x_1}_{UFA} = \langle \hat{a}_{1.5}^{x_1}(0), \hat{a}_{2.1}^{x_1}(0), \hat{a}_{2.7}^{x_1}(0), \hat{a}_{3.3}^{x_1}(0), \hat{a}_{3.9}^{x_1}(0) \rangle,$$
$$\hat{S}^{x_2}_{UFA} = \langle \hat{a}_{1.5}^{x_2}(0.9982), \hat{a}_{2.1}^{x_2}(0.9718), \hat{a}_{2.7}^{x_2}(0.6922), \hat{a}_{3.3}^{x_2}(0.3914), \hat{a}_{3.9}^{x_2}(0.3009) \rangle,$$
$$\hat{S}^{x_3}_{UFA} = \langle \hat{a}_{1.5}^{x_3}(0.2908), \hat{a}_{2.1}^{x_3}(0.3389), \hat{a}_{2.7}^{x_3}(0.4394), \hat{a}_{3.3}^{x_3}(0.6265), \hat{a}_{3.9}^{x_3}(0.9345) \rangle,$$
$$\hat{S}^{x_4}_{UFA} = \langle \hat{a}_{1.5}^{x_4}(0.5709), \hat{a}_{2.1}^{x_4}(0.6430), \hat{a}_{2.7}^{x_4}(0.8619), \hat{a}_{3.3}^{x_4}(0.8550), \hat{a}_{3.9}^{x_4}(0.2497) \rangle,$$
$$\hat{S}^{x_5}_{UFA} = \langle \hat{a}_{1.5}^{x_5}(0.2977), \hat{a}_{2.1}^{x_5}(0.3427), \hat{a}_{2.7}^{x_5}(0.8004), \hat{a}_{3.3}^{x_5}(1.0821), \hat{a}_{3.9}^{x_5}(0.5993) \rangle.$$

Secondly, generate the final evaluation based on the attribute weight reconfiguration. Because five attributes are estimated from four, the new sequence of attribute weights is calculated by Step 1 of the technique to generate the final evaluation:

$$\lambda^x = \left\langle \lambda^1_1, \lambda^{1,2}_{1.5}, \lambda^2_2, \lambda^{2,3}_{2.1}, \lambda^{2,3}_{2.7}, \lambda^3_3, \lambda^{3,4}_{3.3}, \lambda^{3,4}_{3.9}, \lambda^4_4 \right\rangle = \left\langle \tfrac{1}{8}, \tfrac{3}{16}, \tfrac{1}{16}, \tfrac{9}{80}, \tfrac{9}{80}, \tfrac{1}{20}, \tfrac{2}{15}, \tfrac{2}{15}, \tfrac{1}{12} \right\rangle, \tag{33}$$

and the evaluation of the power quality of nodes is calculated by Step 3 of the technique to generate the final evaluation:

$$E^{x_i} = \hat{a}_1^{x_i} \times \lambda^1_1 + \hat{a}_{1.5}^{x_i} \times \lambda^{1,2}_{1.5} + \hat{a}_2^{x_i} \times \lambda^2_2 + \hat{a}_{2.1}^{x_i} \times \lambda^{2,3}_{2.1} + \hat{a}_{2.7}^{x_i} \times \lambda^{2,3}_{2.7} + \hat{a}_3^{x_i} \times \lambda^3_3 + \hat{a}_{3.3}^{x_i} \times \lambda^{3,4}_{3.3} + \hat{a}_{3.9}^{x_i} \times \lambda^{3,4}_{3.9} + \hat{a}_4^{x_i} \times \lambda^4_4. \tag{34}$$

The evaluation results are shown in Table 7. The smaller the evaluation value, the better the power quality of the node. From Table 7, we can see that the evaluation result by the traditional method is $x_1 \succ x_3 \succ x_4 \succ x_2 \succ x_5$, while that by the proposed method is $x_1 \succ x_3 \succ x_4 \succ x_5 \succ x_2$. The Hamming distance between the result by the traditional method with only the KFAs known and the result by the traditional method with all attributes known is 0.6, while for the proposed method it is 0.4.


**Table 7.** Evaluation results with KFAs known.

Thus, we can conclude that the proposed method is more effective than the traditional method for dealing with this type of power quality evaluation problem.
