**3. Problem**

In this section, further definitions are introduced on the basis of the following theorems. The main problem this paper focuses on is then stated in terms of these definitions.

**Theorem 1.** *PAS and FAMS of x are equivalent:* PAS ∼ FAMS.

**Proof.** Since $\bar{a}_j^x = o_j(a_j^x)$ $(j = 1, \cdots, m)$ and $\tilde{a}_j^x = \nu_j(\bar{a}_j^x)$ $(j = 1, \cdots, m)$, it follows that $\tilde{a}_j^x = \nu_j(o_j(a_j^x))$ $(j = 1, \cdots, m)$, which defines a one-to-one correspondence (bijection) from PAS to FAMS. Moreover, $|\mathrm{PAS}| = |\mathrm{FAMS}| = m$. Thus, PAS ∼ FAMS is proved. □

**Theorem 2.** *PAS and FAMS of x are countable sets.*

**Proof.** Choose two arbitrary attributes from FAMS of *x* and denote them as $\tilde{a}_1^x$ and $\tilde{a}_m^x$; the other attributes are ranked by their similarity to $\tilde{a}_1^x$ and $\tilde{a}_m^x$ from large to small and inserted between $\tilde{a}_1^x$ and $\tilde{a}_m^x$, which yields the sequence

$$\tilde{S}^x = \langle \tilde{a}_1^x, \tilde{a}_2^x, \cdots, \tilde{a}_m^x \rangle.$$

Ranking PAS of *x* by the same method yields the sequence

$$\mathbf{S}^x = \langle \mathbf{a}_1^x, \mathbf{a}_2^x, \cdots, \mathbf{a}_m^x \rangle.$$

Thus, PAS and FAMS of *x* are both countable sets. □
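The ranking procedure in the proof can be sketched in a few lines. This is a minimal illustration, not the paper's method: the similarity measure `sim` is a hypothetical stand-in (the paper does not specify one), and the middle attributes are simply sorted by decreasing similarity to the first endpoint.

```python
# Sketch of the ranking in Theorem 2: fix two endpoint attributes, then
# order the remaining ones between them by similarity to the first endpoint.
# sim() is a hypothetical similarity measure on scalar membership grades.

def sim(u, v):
    """Hypothetical similarity between two membership grades."""
    return 1.0 / (1.0 + abs(u - v))

def rank_attributes(grades):
    """First element, middle elements sorted by decreasing similarity to the
    first element, last element -- a countable sequence as in Theorem 2."""
    first, last = grades[0], grades[-1]
    middle = sorted(grades[1:-1], key=lambda g: sim(g, first), reverse=True)
    return [first] + middle + [last]

seq = rank_attributes([0.9, 0.2, 0.8, 0.5, 0.1])
print(seq)  # grades ordered from most similar to 0.9 down toward 0.1
```

Any bijective ranking would do for countability; this one just makes the insertion between the two endpoints concrete.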

**Definition 5.** *Based on Theorem 2, the attribute vector sequence of x is defined as follows:*

$$\mathbf{S}^x = \langle \mathbf{a}_1^x, \mathbf{a}_2^x, \cdots, \mathbf{a}_m^x \rangle. \tag{2}$$

*The measurable attribute sequence is defined as:*

$$\begin{aligned} \bar{S}^x &= \langle \bar{a}_1^x, \bar{a}_2^x, \cdots, \bar{a}_m^x \rangle \\ &= \underbrace{\langle \bar{a}_{i-1}^x, \cdots, \bar{a}_{j-1}^x, \cdots, \bar{a}_{k-1}^x \rangle}_{\bar{S}^x_{\mathrm{KFA}}} + \underbrace{\langle \bar{a}_i^x, \cdots, \bar{a}_j^x, \cdots, \bar{a}_k^x \rangle}_{\bar{S}^x_{\mathrm{UFA}}}. \end{aligned} \tag{3}$$

*The fuzzy measurable attribute sequence of x has the following form:*

$$\begin{aligned} \tilde{S}^x &= \langle \tilde{a}_1^x, \tilde{a}_2^x, \cdots, \tilde{a}_m^x \rangle \\ &= \underbrace{\langle \tilde{a}_{i-1}^x, \cdots, \tilde{a}_{j-1}^x, \cdots, \tilde{a}_{k-1}^x \rangle}_{\tilde{S}^x_{\mathrm{KFA}}} + \underbrace{\langle \tilde{a}_i^x, \cdots, \tilde{a}_j^x, \cdots, \tilde{a}_k^x \rangle}_{\tilde{S}^x_{\mathrm{UFA}}}. \end{aligned} \tag{4}$$

**Definition 6.** *KFA and UFA are assumed to be correlated:*

$$\tilde{S}^x_{\mathrm{UFA}} \propto \tilde{S}^x_{\mathrm{KFA}},$$

*but the functional relations among these attributes are undefined:*

$$\tilde{a}_{j-1}^x = f\left(\tilde{a}_j^x\right),$$

*where function f has no exact analytic expression, so UFA can only be depicted by approximate estimation:*

$$\hat{S}^x_{\mathrm{UFA}} \cong g\left(\tilde{S}^x_{\mathrm{KFA}}\right),$$

*where g is the function for approximately estimating $\tilde{S}^x_{\mathrm{UFA}}$ with $\tilde{S}^x_{\mathrm{KFA}}$ as the independent variable.*

Based on the above definitions, the main problem this paper focuses on can be described as follows:

**Problem description:** Given a set of fuzzy membership grades of KFA:

$$\left\{ \left\langle i, \tilde{S}^{x_i}_{\mathrm{KFA}} \right\rangle \,\middle|\, x_i \in X,\ i = 1, \cdots, n \right\},$$

evaluate $x_i$ under the following conditions:


Research on the problem with all of the conditions mentioned above is rarely conducted. In fact, condition (d) is an important index for evaluation. To satisfy condition (d), we suggest a fuzzy attribute expansion method to evaluate $x_i$: before the final evaluation, use KFAs to approximately estimate UFAs.

#### **4. Geometric Analysis of PAS, MAS, and FAMS**

In this section, a geometric analysis of PAS, MAS, and FAMS is conducted. Firstly, the generalized geometric structures (GGS) of PAS, MAS, and FAMS are modeled and represented in the form of a diagrammatic sketch. Secondly, the geometric relationship between the GGS of PAS and the GGS of FAMS is analyzed. Thirdly, how the GGS of *x* in FAMS can be approximately estimated from $\tilde{S}^x_{\mathrm{KFA}}$ of *x* based on an interpolation technique is discussed.

These three kinds of sets depict attributes, the relationships between attributes, and attribute membership degrees from different spatial perspectives. PAS is an abstract set that characterizes nonlinear relationships between attributes, especially vague attributes. Each vector represents a different attribute, and all of the vectors together represent *x*. MAS and FAMS are numerical mappings of PAS; in fact, they are projections of PAS into distance space. Both fail to show attribute correlation because of the loss of vector directivity.

To understand these three kinds of sets intuitively, the generalized geometric structures (GGS) in different sets are modeled based on Theorem 2 as follows:

**GGS in PAS**: Use different dotted lines with direction to represent different attributes. These dotted lines are straight lines or curves, depending on whether the relationships between the attributes are linear. To reduce complexity, choose one attribute as the unified reference attribute and compare the other attributes with it; if the relationship is linear (or nonlinear), the dotted lines of these attributes are straight lines (or curves). PAS of *x* is depicted by the combination of the *m* vectors whose ends are located on the curves, as Figure 1 illustrates. The surface consisting of all vector ends is defined as the GGS of *x* in PAS. The GGS in PAS is a smooth and continuous surface.

**GGS in MAS**: Use different line segments to represent the projections of different attributes in distance space (such as Euclidean space), arranged in sequence, such as the integer sequence. $\bar{S}^x_{\mathrm{KFA}}$ and $\bar{S}^x_{\mathrm{UFA}}$ are indicated with solid line segments and dotted line segments, respectively. MAS of *x* is depicted by the combination of *m* line segments, as Figure 2 illustrates. The dotted curve consisting of the *m* line segment ends is defined as the GGS of *x* in MAS.

**GGS in FAMS**: Use different line segments to represent the projections of different measurable attributes in [0, 1], arranged in the same sequence as MAS. $\tilde{S}^x_{\mathrm{KFA}}$ and $\tilde{S}^x_{\mathrm{UFA}}$ are indicated with solid line segments and dotted line segments, respectively. If the fuzzy membership function is linear, FAMS of *x* is depicted by the combination of *m* line segments, as Figure 3 illustrates. The dotted curve consisting of the *m* line segment ends is defined as the GGS of *x* in FAMS. The GGS in FAMS is a smooth and continuous curve.

**Figure 1.** Illustration of generalized geometric structures (GGS) in a pure attribute set (PAS).

**Figure 2.** Illustration of GGS in a measurable attribute set (MAS).

**Figure 3.** Illustration of GGS in a fuzzy attribute membership set (FAMS) with a linear fuzzy membership function.

From the above set models, it is easy to see intuitively that MAS and FAMS are low-dimensional embeddings of PAS. The value ranges of the set elements differ between MAS and FAMS. FAMS is a linear mapping of MAS, determined by the fuzzy membership function of each attribute. In FAMS, some fuzzy attribute membership grades may approach 1 while the corresponding values in MAS are quite small or even negative. Furthermore, after defining and studying the GGS of *x* in these three sets, we find that the GGS of *x* in PAS is a surface, while in MAS and in FAMS it is a curve. The curves intercept the surface.
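The mapping from MAS into FAMS can be sketched with a linear membership function. This is a minimal illustration under an assumption: the paper does not give the function's exact form, so min-max normalization over a fixed attribute range is used here as a stand-in.

```python
# Minimal sketch: FAMS as a linear mapping of MAS into [0, 1].
# Assumption: the linear membership function is min-max normalization over a
# fixed range [lo, hi], with clipping; the paper does not specify the function.

def linear_membership(value, lo, hi):
    """Map a measurable attribute value into [0, 1], clipping outside the range."""
    grade = (value - lo) / (hi - lo)
    return max(0.0, min(1.0, grade))

mas = [-2.0, 0.0, 3.0, 10.0]   # measurable attribute values (may be negative)
fams = [linear_membership(v, 0.0, 5.0) for v in mas]
print(fams)  # [0.0, 0.0, 0.6, 1.0]
```

Note how a small or negative MAS value maps to a grade near 0 while a large one saturates at 1, matching the observation above about the differing value ranges.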

Thus, if we want to depict the GGS of *x* in PAS (the surface) more precisely to solve the problem raised at the beginning of the paper, a feasible way is to calculate the GGS of FAMS (the curve) more precisely from $\tilde{S}^x_{\mathrm{KFA}}$ of *x*. Interpolation is a commonly used and effective technique for approximate estimation. Among interpolation techniques, spline interpolation has the best smoothing ability, which is the most important property for estimating the curve. Since the GGSs in PAS and FAMS are smooth and continuous, spline interpolation can be used to approximately estimate the GGS in FAMS.

#### **5. The Fuzzy Attribute Expansion Method**

In this section, the new fuzzy attribute expansion method to solve the problem is proposed. The method consists of two sub-methods: (1) the method to approximately estimate UFAs and (2) the method to generate the final evaluation. Detailed descriptions of these methods are given as follows.

#### *5.1. The Technique to Approximately Estimate UFAs Based on Interpolation*

The basic idea of the method is as follows: UFAs can be approximately estimated by inputting specified attribute sequence numbers into the interpolation function, which results from applying the curve interpolation technique to KFAs. Notably, the UFAs and KFAs here must be correlated; otherwise, the technique will not work. Basically, this technique can be divided into five steps.

**Step 1:** Rearrange $\tilde{S}^x_{\mathrm{KFA}}$. For new samples, rearrangement of $\tilde{S}^x_{\mathrm{KFA}}$ based on Theorem 2 is needed.

**Step 2:** Determine the attribute sequence numbers. There are no special restrictions on the form of the sequence numbers. Normally, we can simply number the attribute sequence with the positive integer sequence:

$$N_{\mathrm{KFA}} = \langle 1, 2, \cdots, t \rangle. \tag{5}$$

**Step 3:** Generate the interpolation function of KFAs. Choose a suitable interpolation. Taking $N_{\mathrm{KFA}}$ as the independent variable $E = \begin{pmatrix} e_1 & e_2 & \cdots & e_t \end{pmatrix}^{\mathrm{T}}$ and $\tilde{S}^x_{\mathrm{KFA}}$ as the dependent variable $Y = \begin{pmatrix} y_1 & y_2 & \cdots & y_t \end{pmatrix}^{\mathrm{T}}$, minimize the objective:

$$J = p \sum\_{i} (y\_i - s(e\_i))^2 + (1 - p) \int \left(\frac{d^2 s}{d e^2}\right)^2 de,\tag{6}$$

where *s* is the smoothing spline and *p* is the smoothing parameter, defined between 0 and 1 (*p* = 0.95 in this paper). When the optimal solution is found, the resulting *s* is the interpolation function.
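The two competing terms in Equation (6) can be made concrete with a discrete sketch. This is not a spline solver: it only evaluates *J* for a given candidate curve, approximating the second derivative with central differences and the integral with the trapezoidal rule on a uniform grid.

```python
# Discrete sketch of the smoothing objective in Equation (6): a weighted sum of
# the squared residuals and the integrated squared second derivative (roughness).

def objective(y, s_vals, e, p=0.95):
    """Approximate J = p * sum (y_i - s(e_i))^2 + (1 - p) * integral (s'')^2 de
    for a candidate s evaluated on the uniform data sites e."""
    fit = sum((yi - si) ** 2 for yi, si in zip(y, s_vals))
    h = e[1] - e[0]
    # central second differences at the interior sites
    d2 = [(s_vals[i - 1] - 2 * s_vals[i] + s_vals[i + 1]) / h ** 2
          for i in range(1, len(s_vals) - 1)]
    # trapezoidal rule for the roughness integral
    rough = sum(0.5 * (d2[i] ** 2 + d2[i + 1] ** 2) * h for i in range(len(d2) - 1))
    return p * fit + (1 - p) * rough

e = [1.0, 2.0, 3.0, 4.0, 5.0]
y = [0.2, 0.5, 0.9, 0.7, 0.4]
# a straight-line candidate has zero roughness, so only the residual term counts
line = [0.2 + 0.1 * (ei - 1.0) for ei in e]
print(objective(y, line, e))
```

The smoothing spline is the *s* minimizing this trade-off: *p* near 1 favors fitting the data, *p* near 0 favors a flatter curve.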

**Step 4:** Generate the UFA number sequence:

$$N_{\mathrm{UFA}} = \langle 1 + \alpha_1, 1 + \alpha_2, \cdots, 1 + \alpha_i, \cdots, 1 + \alpha_r \rangle, \tag{7}$$

whose length is *r*, with $1 + \alpha_1$ as the start and $1 + \alpha_r$ as the end, where $1 + \alpha_1 \geq 1$, $1 + \alpha_r \leq t$, and $r \geq m - t$. For different problems, *r* differs and is determined by heuristic knowledge, as shown in the examples in Section 6. Each $\alpha_i$ is determined by two adjacent attributes of the attribute sequence. If the sequence is distributed evenly and linearly, the step size $\alpha_i$ should be fixed as:

$$\alpha_i = i \times \frac{t-1}{r}, \tag{8}$$

and the UFA number sequence becomes:

$$N_{\mathrm{UFA}} = \left\langle 1 + \frac{t-1}{r}, \cdots, 1 + r \times \frac{t-1}{r} \right\rangle. \tag{9}$$
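The evenly spaced case of Step 4 can be sketched directly from Equations (8) and (9):

```python
# Sketch of Step 4: build an evenly spaced UFA number sequence (Equation (9))
# between the KFA sequence numbers 1..t, with a fixed step of (t - 1) / r.

def ufa_numbers(t, r):
    """Return <1 + step, 1 + 2*step, ..., 1 + r*step> with step = (t - 1) / r."""
    step = (t - 1) / r
    return [1 + i * step for i in range(1, r + 1)]

n_ufa = ufa_numbers(t=5, r=8)
print(n_ufa)  # [1.5, 2.0, 2.5, 3.0, 3.5, 4.0, 4.5, 5.0]
```

With t = 5 KFAs and r = 8 UFA sites, the step is 0.5 and the sequence stays inside [1, t] as the constraints require. Uneven spacing (smaller steps near more important attributes) would replace the fixed `step` with per-attribute values.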

**Step 5:** Approximately estimate the UFAs. Input $N_{\mathrm{UFA}}$ into *s*, so the UFA membership grade sequence can be estimated as:

$$\hat{S}^x_{\mathrm{UFA}} = s(N_{\mathrm{UFA}}), \tag{10}$$

and the total estimated membership grade sequence is:

$$\begin{aligned} \hat{S}^x &= \langle \tilde{a}_1^x, \hat{a}_{1+\alpha_1}^x, \cdots, \hat{a}_{1+\alpha_j}^x, \cdots, \tilde{a}_i^x, \cdots, \hat{a}_{1+\alpha_r}^x, \tilde{a}_t^x \rangle \\ &= \underbrace{\langle \tilde{a}_1^x, \cdots, \tilde{a}_i^x, \cdots, \tilde{a}_t^x \rangle}_{\tilde{S}^x_{\mathrm{KFA}}} + \underbrace{\langle \hat{a}_{1+\alpha_1}^x, \cdots, \hat{a}_{1+\alpha_j}^x, \cdots, \hat{a}_{1+\alpha_r}^x \rangle}_{\hat{S}^x_{\mathrm{UFA}}}. \end{aligned} \tag{11}$$

For this technique, the membership grades of UFA and KFA are considered as vertical coordinate values on the attribute vector sequence curve, and the interpolation technique is used to depict the curve. Once the function is interpolated, the estimation result can be calculated by inputting the customized horizontal coordinate values. The result $\hat{S}^x_{\mathrm{UFA}}$ is, to some degree, an estimation of $\tilde{S}^x_{\mathrm{UFA}}$. Its accuracy depends on the quality of the interpolation technique, which means the interpolation function *s* is one key to the method; for spline interpolation, *s* is the optimal solution of the objective. Meanwhile, the attribute number sequence is another key: cognition of the KFAs determines the generation of the sequence. For instance, if some attributes of KFA are more important for *x*, which is usually judged artificially, then the steps between them should be smaller than the others.
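Steps 2 through 5 can be run end to end in a small sketch. One loud assumption: the paper uses a smoothing spline for *s*, but to keep this dependency-free, a piecewise-linear interpolant stands in for it; the input grades are illustrative values only.

```python
# End-to-end sketch of Steps 2-5. Piecewise-linear interpolation stands in for
# the smoothing spline s (an assumption made only to keep the sketch minimal).

def interp(x, xs, ys):
    """Piecewise-linear interpolant through (xs[i], ys[i]); xs ascending."""
    if x <= xs[0]:
        return ys[0]
    for i in range(1, len(xs)):
        if x <= xs[i]:
            w = (x - xs[i - 1]) / (xs[i] - xs[i - 1])
            return ys[i - 1] + w * (ys[i] - ys[i - 1])
    return ys[-1]

kfa = [0.2, 0.6, 0.9, 0.5]                              # known grades, t = 4
n_kfa = list(range(1, len(kfa) + 1))                    # Equation (5)
t, r = len(kfa), 6
n_ufa = [1 + i * (t - 1) / r for i in range(1, r + 1)]  # Equation (9)
ufa_hat = [interp(x, n_kfa, kfa) for x in n_ufa]        # Equation (10)
print(ufa_hat)
```

Replacing `interp` with a true smoothing spline changes only Step 3; the sequence-number machinery around it is unchanged.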

#### *5.2. The Technique to Generate the Final Evaluation Based on Attribute Weight Reconfiguration*

Since the UFA membership grade sequence has been approximately estimated, we propose a new technique to generate the final evaluation based on attribute weight reconfiguration.

**Step 1**: Regenerate a new sequence of attribute weights. Divide every element of *λ* into a certain number of parts, equal to the number of estimated UFAs located between two adjacent KFAs; the new sequence of attribute weights can then be written as:

$$\hat{\lambda} = \left\langle \hat{\lambda}_1^1, \cdots, \hat{\lambda}_{i-1}^{i-1}, \cdots, \hat{\lambda}_{1+\alpha_j}^{i-1,i}, \cdots, \hat{\lambda}_i^i, \cdots, \hat{\lambda}_t^t \right\rangle, \tag{12}$$

$$\hat{\lambda}_i^i = \frac{\lambda_i}{d_i}, \tag{13}$$

$$\hat{\lambda}_{1+\alpha_j}^{i-1,i} = \frac{\lambda_{i-1}}{d_{i-1}} + \frac{\lambda_i}{d_i}, \tag{14}$$

where, if $i \in (1, t)$, $d_i$ is the size of $N_{\mathrm{UFA}}$ within $[i-1, i+1]$; if $i = 1$, $d_1$ is the size of $N_{\mathrm{UFA}}$ within $[1, 2]$; and if $i = t$, $d_t$ is the size of $N_{\mathrm{UFA}}$ within $[t-1, t]$.
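The weight reconfiguration of Equations (12)-(14) can be sketched as follows. The indexing of which UFA site borrows from which pair of KFA weights is our reading of Equation (14); the input weights are illustrative.

```python
# Sketch of Step 1 (Equations (12)-(14)): split each original weight by the
# count d_i of UFA numbers in the window around KFA number i, and give each
# UFA between KFA i-1 and i the bridging weight lam_{i-1}/d_{i-1} + lam_i/d_i.

def window_counts(n_ufa, t):
    """d_i for i = 1..t: window [1,2] for i=1, [i-1,i+1] inside, [t-1,t] for i=t."""
    d = []
    for i in range(1, t + 1):
        lo, hi = (1, 2) if i == 1 else ((t - 1, t) if i == t else (i - 1, i + 1))
        d.append(sum(1 for x in n_ufa if lo <= x <= hi))
    return d

def reconfigure(weights, n_ufa):
    """Return the reconfigured KFA weights (Eq. (13)) and UFA weights (Eq. (14))."""
    t = len(weights)
    d = window_counts(n_ufa, t)
    kfa_w = [weights[i] / d[i] for i in range(t)]
    ufa_w = []
    for x in n_ufa:
        i = min(int(x), t - 1)   # x falls between KFA numbers i and i+1 (1-based)
        ufa_w.append(weights[i - 1] / d[i - 1] + weights[i] / d[i])
    return kfa_w, ufa_w

kfa_w, ufa_w = reconfigure([0.5, 0.3, 0.2], [1.5, 2.5])
print(kfa_w, ufa_w)
```

With t = 3 and one UFA in each gap, the counts are d = [1, 2, 1], so the middle weight 0.3 is halved and each UFA site receives the sum of its neighbors' shares.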

**Step 2:** Calculate the products of the corresponding elements of $\hat{\lambda}$ and $\hat{S}^x$:

$$\left\langle \hat{\lambda}_1^1 \times \tilde{a}_1^x,\ \hat{\lambda}_{1+\alpha_1}^{1,2} \times \hat{a}_{1+\alpha_1}^x,\ \cdots,\ \hat{\lambda}_{1+\alpha_j}^{i-1,i} \times \hat{a}_{1+\alpha_j}^x,\ \cdots,\ \hat{\lambda}_i^i \times \tilde{a}_i^x,\ \cdots,\ \hat{\lambda}_{1+\alpha_r}^{t-1,t} \times \hat{a}_{1+\alpha_r}^x,\ \hat{\lambda}_t^t \times \tilde{a}_t^x \right\rangle. \tag{15}$$

**Step 3:** Sum and obtain the final evaluation of *x*:

$$E = \hat{\lambda}_1^1 \times \tilde{a}_1^x + \hat{\lambda}_{1+\alpha_1}^{1,2} \times \hat{a}_{1+\alpha_1}^x + \cdots + \hat{\lambda}_{1+\alpha_j}^{i-1,i} \times \hat{a}_{1+\alpha_j}^x + \cdots + \hat{\lambda}_i^i \times \tilde{a}_i^x + \cdots + \hat{\lambda}_{1+\alpha_r}^{t-1,t} \times \hat{a}_{1+\alpha_r}^x + \hat{\lambda}_t^t \times \tilde{a}_t^x. \tag{16}$$
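Steps 2 and 3 reduce to an element-wise product followed by a sum. A minimal sketch, with illustrative interleaved weights and grades (not values from the paper):

```python
# Sketch of Steps 2-3 (Equations (15)-(16)): multiply each reconfigured weight
# by the corresponding membership grade, then sum to a single evaluation score.

def evaluate(weights, grades):
    """Final evaluation E over the full interleaved KFA/UFA sequence."""
    assert len(weights) == len(grades)
    products = [w * g for w, g in zip(weights, grades)]  # Equation (15)
    return sum(products)                                 # Equation (16)

weights = [0.5, 0.65, 0.15, 0.35, 0.2]  # interleaved KFA/UFA weights
grades = [0.2, 0.4, 0.6, 0.75, 0.5]     # interleaved KFA/estimated UFA grades
print(round(evaluate(weights, grades), 4))  # 0.8125
```

The score aggregates both the known grades and the expanded estimates, which is exactly the point of the attribute expansion: condition-(d)-relevant information enters the evaluation through the interpolated UFAs.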
