2.2.3. ANFIS

ANFIS was introduced by Jang [24] and is an artificial intelligence model that combines the advantages of fuzzy systems and ANNs. In this model, fuzzy inference systems (FIS) are defined by if-then rules and membership functions (MFs), where tuning of the MFs is carried out by ANNs. The three main types of FIS are the Takagi-Sugeno-Kang (TSK), Mamdani, and Tsukamoto [25]. In this research, the TSK FIS model was employed, as it is more powerful at handling non-linear input-output relationships [26]. TSK uses the pattern of the input and output data to create if-then rules. The if-then rules for a TSK model with two inputs are given in Equation (2).

$$\begin{aligned} \text{If } x \text{ is } A\_1 \text{ and } y \text{ is } B\_1 &\Rightarrow f\_1 = p\_1 x + q\_1 y + r\_1\\ \text{If } x \text{ is } A\_2 \text{ and } y \text{ is } B\_2 &\Rightarrow f\_2 = p\_2 x + q\_2 y + r\_2 \end{aligned} \tag{2}$$

where *A*1, *B*1 and *A*2, *B*2 are the MFs associated with input *x* and input *y*, respectively, and *p*1, *q*1, *r*1, *p*2, *q*2, *r*2 are the linear parameters of the then-part (consequent) of the TSK rules. Figure 6 shows the structure of a typical ANFIS model, which contains five layers.
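As a concrete illustration of the then-part, each first-order TSK rule outputs a linear function of the crisp inputs. The sketch below evaluates the consequents of the two rules in Equation (2); all parameter values are hypothetical, chosen only for illustration.

```python
# Consequent (then-part) of a first-order TSK rule: f = p*x + q*y + r.
# The parameter values below are hypothetical, for illustration only.

def rule_consequent(x, y, p, q, r):
    """Linear then-part of a first-order TSK rule."""
    return p * x + q * y + r

# Rule 1: if x is A1 and y is B1 then f1 = p1*x + q1*y + r1
f1 = rule_consequent(2.0, 3.0, p=0.5, q=1.0, r=0.1)   # -> 4.1
# Rule 2: if x is A2 and y is B2 then f2 = p2*x + q2*y + r2
f2 = rule_consequent(2.0, 3.0, p=-0.2, q=0.3, r=1.0)  # -> 1.5
```

How these per-rule outputs are combined into a single crisp output is determined by the firing strengths computed in layers 2-3 of the network.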

**Figure 6.** ANFIS architecture.

The fuzzification process is carried out in the first layer. In this layer, the nodes are square (adaptive) nodes with the node function given in Equation (3).

$$O\_i^1 = \mu\_{A\_i}(x)\tag{3}$$

where *i* is the *i*th node in the layer and *Ai* is the linguistic value of the node. A Gaussian membership function was employed as the MF (Equation (4)).

$$\mu\_{A\_i}(x) = \exp\left[-\left(\frac{x - c\_i}{2a\_i}\right)^2\right] \tag{4}$$

where *ai* and *ci* are the parameters that define the shape of the membership function; these parameters are tuned during the learning process. Layer 2 contains circle (fixed) nodes that multiply the incoming signals; their output, the product of this multiplication, represents the firing strength of each rule, as given in Equation (5).
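Layers 1 and 2 can be sketched as follows: fuzzify each input with the Gaussian MF of Equation (4), then multiply the memberships to obtain a rule's firing strength (Equation (5)). The widths, centers, and input values are hypothetical.

```python
import math

# Gaussian membership function from Equation (4); a_i controls the width
# and c_i the center. All parameter values here are hypothetical.
def gaussian_mf(x, a, c):
    return math.exp(-((x - c) / (2 * a)) ** 2)

# Layer 1: fuzzification of each input.
mu_A1 = gaussian_mf(2.0, a=1.0, c=2.0)  # x at the center of A1 -> 1.0
mu_B1 = gaussian_mf(3.0, a=2.0, c=1.0)  # y away from the center of B1

# Layer 2: firing strength of rule 1 (Equation (5)).
w1 = mu_A1 * mu_B1
```

Note that an input exactly at a MF's center receives full membership (1.0), and the firing strength decays as either input moves away from its rule's centers.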

$$
\omega\_i = \mu\_{A\_i}(x) \times \mu\_{B\_i}(y), \quad i = 1, 2. \tag{5}
$$

In layer 3, node *i* calculates the ratio of the *i*th rule's firing strength to the sum of all rules' firing strengths, which is a normalization process. As in layer 2, the nodes in layer 3 are fixed. Next, the adaptive nodes in layer 4 calculate the contribution of each rule's consequent part, and finally, layer 5, which contains a single node, sums all the outputs of layer 4 [27]. The mathematical descriptions of layers 3, 4, and 5 are given in Equations (6)–(8), respectively.

$$
\bar{\omega}\_i = \frac{\omega\_i}{\omega\_1 + \omega\_2}, \quad i = 1, 2. \tag{6}
$$

$$O\_i^4 = \bar{\omega}\_i f\_i = \bar{\omega}\_i (p\_i x + q\_i y + r\_i). \tag{7}$$

$$O\_i^5 = \sum\_i \bar{\omega}\_i f\_i = \frac{\sum\_i \omega\_i f\_i}{\sum\_i \omega\_i}. \tag{8}$$
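Putting Equations (3)–(8) together, a single forward pass through the five layers can be sketched as below for the 2-input, 2-rule system. All premise and consequent parameter values are hypothetical.

```python
import math

# Minimal sketch of one ANFIS forward pass (Equations (3)-(8)) for a
# 2-input, 2-rule TSK system. All parameter values are hypothetical.

def gaussian_mf(x, a, c):
    """Gaussian MF of Equation (4)."""
    return math.exp(-((x - c) / (2 * a)) ** 2)

def anfis_forward(x, y, premise, consequent):
    # Layer 1: fuzzification (Eqs. 3-4); premise holds (a, c) per MF.
    mus = [(gaussian_mf(x, *pA), gaussian_mf(y, *pB)) for pA, pB in premise]
    # Layer 2: firing strengths (Eq. 5).
    w = [mA * mB for mA, mB in mus]
    # Layer 3: normalization (Eq. 6).
    total = sum(w)
    w_bar = [wi / total for wi in w]
    # Layer 4: weighted rule consequents (Eq. 7).
    f = [p * x + q * y + r for p, q, r in consequent]
    # Layer 5: overall output (Eq. 8).
    return sum(wb * fi for wb, fi in zip(w_bar, f))

# Hypothetical premise ((a, c) per MF) and consequent (p, q, r) parameters.
premise = [((1.0, 0.0), (1.0, 0.0)), ((1.0, 2.0), (1.0, 2.0))]
consequent = [(0.5, 1.0, 0.1), (-0.2, 0.3, 1.0)]
out = anfis_forward(1.0, 1.0, premise, consequent)  # -> 1.35
```

At the midpoint x = y = 1 both rules fire equally, so the output is simply the average of the two rule consequents, illustrating the weighted-average defuzzification of Equation (8).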

ANFIS is trained using input-output data pairs. As seen in the ANFIS architecture and the equations describing each layer, the parameters in layer 1 and layer 4 can be tuned. This process defines the shape of the MFs (tuning of *ai* and *ci*) and specifies the fuzzy rules (tuning of *pi*, *qi* and *ri*). These parameters are tuned according to an error criterion. The backpropagation algorithm can be employed for training ANFIS; however, because of its slow convergence rate and its tendency to become trapped in local minima, it is combined with a least-squares estimator. This combination is called the hybrid method and, because it reduces the dimensionality of the search space, provides a faster convergence rate [28].
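The least-squares half of the hybrid method exploits the fact that, with the premise (MF) parameters held fixed, the ANFIS output of Equation (8) is linear in the consequent parameters (*pi*, *qi*, *ri*), so they can be solved in closed form. The sketch below illustrates this step; the premise parameters, data points, and helper names are hypothetical, not the authors' implementation.

```python
import numpy as np

# Sketch of the least-squares step of hybrid learning: with premise
# parameters fixed, solve for the consequent parameters (p_i, q_i, r_i)
# in closed form. All data and parameter values are hypothetical.

def gaussian_mf(x, a, c):
    """Gaussian MF of Equation (4)."""
    return np.exp(-((x - c) / (2 * a)) ** 2)

def anfis_out(x, y, premise, conseq):
    """Forward pass of Equations (3)-(8) for fixed parameters."""
    w = np.array([gaussian_mf(x, *pA) * gaussian_mf(y, *pB)
                  for pA, pB in premise])
    w_bar = w / w.sum()
    f = np.array([p * x + q * y + r for p, q, r in conseq])
    return float(w_bar @ f)

def lse_consequents(X, Y, targets, premise):
    """Solve for all (p_i, q_i, r_i) by linear least squares."""
    rows = []
    for x, y in zip(X, Y):
        w = np.array([gaussian_mf(x, *pA) * gaussian_mf(y, *pB)
                      for pA, pB in premise])
        w_bar = w / w.sum()
        # Rule i contributes w_bar_i * (p_i*x + q_i*y + r_i) to the output,
        # so each data point yields one row of the design matrix.
        rows.append(np.concatenate([[wb * x, wb * y, wb] for wb in w_bar]))
    A = np.array(rows)
    theta, *_ = np.linalg.lstsq(A, np.array(targets), rcond=None)
    return theta.reshape(-1, 3)  # one (p_i, q_i, r_i) row per rule

# Demo: recover hypothetical consequents from noiseless data.
premise = [((1.0, 0.0), (1.0, 0.0)), ((1.0, 2.0), (1.0, 2.0))]
true_c = [(0.5, 1.0, 0.1), (-0.2, 0.3, 1.0)]
X = [0.0, 0.5, 1.0, 1.5, 2.0, 2.5, 0.3, 1.8]
Y = [1.0, 2.0, 0.0, 0.5, 1.5, 2.2, 0.7, 1.1]
targets = [anfis_out(x, y, premise, true_c) for x, y in zip(X, Y)]
est = lse_consequents(X, Y, targets, premise)
```

In the full hybrid method this closed-form step alternates with gradient (backpropagation) updates of the premise parameters, which is what shrinks the search space that gradient descent must explore.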
