Article

Comparison of T-Norms and S-Norms for Interval Type-2 Fuzzy Numbers in Weight Adjustment for Neural Networks

Fernando Gaxiola 1,*, Patricia Melin 2, Fevrier Valdez 2, Oscar Castillo 2 and Juan R. Castro 3
1 Faculty of Engineering, Autonomous University of Chihuahua, 31110 Chihuahua, Mexico
2 Division of Graduate Studies and Research, Tijuana Institute of Technology, 22414 Tijuana, Mexico
3 Faculty of Engineering and Chemistry Sciences, Autonomous University of Baja California, 22390 Tijuana, Mexico
* Author to whom correspondence should be addressed.
Information 2017, 8(3), 114; https://doi.org/10.3390/info8030114
Submission received: 29 August 2017 / Revised: 12 September 2017 / Accepted: 18 September 2017 / Published: 20 September 2017
(This article belongs to the Section Artificial Intelligence)

Abstract

A comparison of different T-norms and S-norms for interval type-2 fuzzy number weights is proposed in this work. The interval type-2 fuzzy number weights are used in a neural network with an enhanced interval backpropagation learning method for weight adjustment. Experimental results and a comparative study between traditional neural networks and the neural network with interval type-2 fuzzy number weights under different T-norms and S-norms are presented to demonstrate the benefits of the proposed approach. In this research, the definitions of the lower and upper interval type-2 fuzzy numbers with random initial values are presented; this interval represents the footprint of uncertainty (FOU). The proposed work is based on recent works that have considered the adaptation of weights using type-2 fuzzy numbers. To confirm the efficiency of the proposed method, a data prediction case is applied, in particular the Mackey-Glass time series (for τ = 17). Gaussian noise was applied to the testing data of the Mackey-Glass time series to demonstrate that the neural network using the interval type-2 fuzzy number method is less susceptible to noise than other methods.

1. Introduction

In the literature there exists research based on ideas similar to this paper, but with different approaches and implementations, such as the adjustment of fuzzy number weights in the input and output layers during the training process of the neural network [1], or the proposal of fuzzy number operations in a fuzzy neural network [2]. In contrast, the proposed work operates with interval type-2 fuzzy number weights using different T-norms and S-norms in the adaptation of the weights, which represents the contribution and main difference with respect to the methods in the literature [3,4,5,6].
The method proposed in the present research differs from other works, such as those of Gaxiola et al. [7,8], where the fuzzy weights are obtained using interval type-2 fuzzy inference systems in [7] and generalized type-2 fuzzy inference systems in [8] for the connections between the layers, without modifying how the change of the weights is obtained in each epoch of the backpropagation algorithm.
In the present approach, the use of interval type-2 fuzzy number weights is proposed. The lower and upper fuzzy numbers are obtained with the Nguyen-Widrow algorithm for the initial weights, modifying the internal calculations of the neurons by performing the multiplications of the inputs by the lower and upper type-2 fuzzy number weights separately, and then applying the T-norm for the lower outputs and the S-norm for the upper outputs; furthermore, we modified the backpropagation algorithm to obtain the lower and upper changes of the weights in each epoch, respectively.
For time series data, the goal of the proposed approach is to achieve the best, i.e., minimal, prediction error. In this case, prediction of the Mackey-Glass time series is used to verify the efficiency of the proposed approach.
This paper focuses on analyzing fuzzy neural networks with interval type-2 fuzzy number weights and providing a comparison with respect to traditional neural networks. The same learning algorithm is used for both neural models. In addition, with the purpose of further analyzing the performance of the model, we also applied noise to the real test data.
A comparison of the performance of the traditional neural network against the proposed fuzzy neural network with interval type-2 fuzzy number weights is performed in this paper. This comparison is based on the use of fuzzy numbers for the weights instead of the real numbers utilized in the traditional neural network; this modification is of great importance, because the learning process of a neural network is directly affected by the quality of the weights obtained, which in turn has a critical impact on achieving better results [9,10,11].
In the fuzzy neural network with interval type-2 fuzzy number weights, different T-norms and S-norms, namely sum-product, Dombi [12], Hamacher [13,14] and Frank [15], are applied to obtain prediction error results.
The adjustment of the weights in backpropagation learning using interval type-2 fuzzy numbers is the main contribution of the work proposed in this paper for neural networks. This contribution provides the neural network with the robustness to handle real data with uncertainty [16,17,18].
Specifically, the proposed work improves backpropagation learning through the use of lower and upper type-2 fuzzy numbers for the adaptation of the weights. The use of the T-norm and S-norm to obtain the outputs of the neurons, with the sum-product, Dombi, Hamacher and Frank approaches, enables better support for the uncertainty in the training process, with which better results can be accomplished [19,20].
The next section presents a background of research on fuzzy numbers, other methods of weight adaptation, and previous modifications to backpropagation learning in neural networks. Section 3 explains the proposed methodology and describes the problem to be solved in the paper. Section 4 presents the simulation results for the proposed approach and the statistical tests. Finally, in Section 5, conclusions are offered and future work is outlined.

2. Related Work

In the neural network area, the backpropagation algorithm and its variations are the training methods most commonly used by researchers in the literature [21,22,23]. Several papers have proposed different methods to improve the convergence of the backpropagation training algorithm [3,4,6]. In this paper, the most significant and essential works on the representation or management of fuzzy numbers are reviewed [9,10,11].
Dunyak et al. [1] presented a new algorithm for obtaining new weights (inputs and outputs) in the training phase with any type of fuzzy numbers for a fuzzy neural network. Fard et al. [24] presented the sum and the product of two interval type-2 triangular fuzzy numbers and, based on the Stone-Weierstrass theorem, developed a fuzzy neural network working with interval type-2 fuzzy logic. Li Z. et al. [2] described a fuzzified neural network computing the results of operations on two fuzzy numbers, such as addition, subtraction, multiplication and division.
Asady B. [25] outlined a method for approximating trapezoidal fuzzy numbers in comparison with other approximation methods. Coroianu L. et al. [26] described the inverse F-transform to obtain optimal fuzzy numbers, maintaining the support and the convergence of the core. Yang D. et al. [27] presented an interval and a modified interval neuron perceptron with interval weights and biases, and modified the learning algorithm for this approach.
Requena et al. [28] presented trapezoidal fuzzy numbers with the conventional parameters (a, b, c, d) in an artificial neural network, and also proposed a decision personal index (DPI) to obtain the distance between the numbers. Kuo et al. [29] described a fuzzy neural network using a real-coded genetic algorithm to generate the initial fuzzy weights; the fuzzy operations are determined using the extension principle. Molinari [30] presented generalized triangular fuzzy numbers and a comparison with other fuzzy numbers. Chai et al. [31] described a representation of fuzzy numbers, establishing the theorem “that for each fuzzy number there exists a unique skew fuzzy number and a unique symmetric fuzzy number”.
Figueroa-García et al. [32] made a comparison between interval type-2 fuzzy numbers using distance measures. Ishibuchi et al. [33] presented a comparison between real numbers and different fuzzy numbers, such as symmetric triangular, asymmetric triangular and symmetric trapezoidal, as weights in the connections between layers in a neural network. Karnik et al. [34] presented mathematical operations on type-2 fuzzy sets for obtaining the join and meet under t-norms.
Raj et al. [35] described fuzzy ranking alternatives for fuzzy numbers as linguistic variables for fuzzy weights. Chu et al. [36] proposed a ranking of fuzzy numbers based on the area between the original point and the centroid point, presenting numerical examples with triangular fuzzy numbers.
Ishibuchi et al. [37,38] proposed using the weights of a fuzzy neural network as triangular or trapezoidal fuzzy numbers. Feuring [39] presented a new backpropagation algorithm for learning in the neural network, in which new lower and upper limits of the weights are computed. Castro et al. [40] proposed a type-2 fuzzy neuron model, in which the rules use interval type-2 fuzzy neurons in the antecedents and interval type-1 fuzzy neurons in the consequents.
In the area of time series prediction, there exist many recent works in the literature based on the use of type-2 fuzzy logic, such as Castro et al. [41] and other researchers [42,43,44].

3. Proposed Methodology

The proposed method in this work has the goal of generalizing the backpropagation learning algorithm by using interval type-2 fuzzy numbers in the calculations, an approach that makes the neural network less susceptible to data with uncertainty. For interval type-2 fuzzy numbers, it is necessary to obtain the interval of the fuzzy numbers, which constitutes the footprint of uncertainty (FOU); the calculations in the neurons were performed with the T-norms and S-norms of sum-product, Dombi, Hamacher and Frank for the corresponding applications [45,46,47,48].
The method of adjustment of weights for each neuron in the connections between the layers in the backpropagation learning algorithm is modified from the original adjustment (Figure 1).
The method in this paper consists of utilizing interval type-2 fuzzy number weights in the neurons. This development modifies the internal calculation of the neurons and the adjustment of the weights to allow the handling of fuzzy numbers (Figure 2) [49].
We modified the operations in the neurons and the backpropagation learning to adjust the fuzzy number weights and accomplish the desired result, working to find the optimal process for operating with interval type-2 fuzzy number weights [50,51].
To determine the appropriate activation function f(·), the linear and hyperbolic secant functions were considered in this approach.
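For reference, these two activation functions can be written as follows (a minimal sketch in Python; the function names are illustrative, not from the paper):

```python
import numpy as np

def sech(x):
    # Hyperbolic secant: sech(x) = 2 / (e^x + e^-x), bounded in (0, 1]
    return 2.0 / (np.exp(x) + np.exp(-x))

def linear(x):
    # Identity activation, used in the output neuron
    return x
```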

3.1. Architecture of the Traditional Neural Network

The architecture of the neural network used in this work (see Figure 3) consists of a hidden layer with 16 neurons and an output layer with 1 neuron; training data from the Mackey-Glass time series serve as the input data in the input layer.

3.2. Architecture of the Fuzzy Neural Network with Interval Type-2 Fuzzy Numbers Weights

In Figure 4, a scheme of the proposed methodology of the fuzzy neural network with interval type-2 fuzzy number weights (FNNIT2FNW) is presented.
The architecture of the fuzzy neural network with interval type-2 fuzzy number weights (see Figure 5) is explained as follows:
Phase 0: Equation of the input data.
x = [x_1, x_2, \ldots, x_n]
Phase 1: Representation of the Interval type-2 fuzzy number weights [36].
\tilde{w} = [\underline{w}, \overline{w}]
where $\overline{w}$ and $\underline{w}$ are generated with the Nguyen-Widrow algorithm [52] for the initial weights.
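As an illustration, the following sketch initializes the lower and upper weights with the Nguyen-Widrow rule; the symmetric `spread` used to widen each crisp weight into an interval (the FOU) is an assumption for illustration, not a value taken from the paper:

```python
import numpy as np

def nguyen_widrow_interval(n_inputs, n_hidden, spread=0.1, rng=None):
    """Initialize interval type-2 fuzzy number weights [w_lower, w_upper]."""
    rng = np.random.default_rng() if rng is None else rng
    # Nguyen-Widrow: random weights rescaled so the hidden neurons cover the input space
    w = rng.uniform(-0.5, 0.5, size=(n_hidden, n_inputs))
    beta = 0.7 * n_hidden ** (1.0 / n_inputs)
    w *= beta / np.linalg.norm(w, axis=1, keepdims=True)
    # Widen each crisp weight into an interval; this plays the role of the FOU
    delta = spread * np.abs(w)
    return w - delta, w + delta
```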
Phase 2: Calculation of the output of the hidden neurons with interval type-2 fuzzy number weights.
\underline{Net} = f\left( \sum_{i=1}^{n} x_i\, \underline{w_{ij}} \right)
\overline{Net} = f\left( \sum_{i=1}^{n} x_i\, \overline{w_{ij}} \right)
We used the hyperbolic secant as the activation function for the hidden neurons. Subsequently, we applied the T-norm and S-norm to calculate the lower and upper outputs of the hidden neurons, respectively.
\underline{y} = TNorm(\underline{Net}, \overline{Net})
\overline{y} = SNorm(\underline{Net}, \overline{Net})
where we used the following T-norms and S-norms:
Sum-Product:
TNorm(\underline{Net}, \overline{Net}) = \underline{Net} \cdot \overline{Net}
SNorm(\underline{Net}, \overline{Net}) = \underline{Net} + \overline{Net} - TNorm(\underline{Net}, \overline{Net})
Dombi: for γ > 0.
TNorm_D(\underline{Net}, \overline{Net}, \gamma) = \frac{1}{1 + \left[ \left( \frac{1}{\underline{Net}} - 1 \right)^{\gamma} + \left( \frac{1}{\overline{Net}} - 1 \right)^{\gamma} \right]^{1/\gamma}}
SNorm_D(\underline{Net}, \overline{Net}, \gamma) = \frac{1}{1 + \left[ \left( \frac{1}{\underline{Net}} - 1 \right)^{-\gamma} + \left( \frac{1}{\overline{Net}} - 1 \right)^{-\gamma} \right]^{-1/\gamma}}
Hamacher: for γ > 0.
TNorm_H(\underline{Net}, \overline{Net}, \gamma) = \frac{\underline{Net} \cdot \overline{Net}}{\gamma + (1 - \gamma)(\underline{Net} + \overline{Net} - \underline{Net} \cdot \overline{Net})}
SNorm_H(\underline{Net}, \overline{Net}, \gamma) = \frac{\underline{Net} + \overline{Net} + (\gamma - 2)(\underline{Net} \cdot \overline{Net})}{1 + (\gamma - 1)(\underline{Net} \cdot \overline{Net})}
Frank: for s > 0.
TNorm_F(\underline{Net}, \overline{Net}, s) = \log_s \left[ 1 + \frac{(s^{\underline{Net}} - 1)(s^{\overline{Net}} - 1)}{s - 1} \right]
SNorm_F(\underline{Net}, \overline{Net}, s) = 1 - \log_s \left[ 1 + \frac{(s^{1 - \underline{Net}} - 1)(s^{1 - \overline{Net}} - 1)}{s - 1} \right]
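The four T-norm/S-norm pairs above translate directly into code; the sketch below assumes the arguments lie in (0, 1), which holds for the hyperbolic secant outputs of the hidden neurons:

```python
import numpy as np

def t_sum_product(a, b):
    return a * b

def s_sum_product(a, b):
    return a + b - a * b

def t_dombi(a, b, g):
    # Dombi T-norm, gamma > 0
    return 1.0 / (1.0 + ((1.0 / a - 1.0) ** g + (1.0 / b - 1.0) ** g) ** (1.0 / g))

def s_dombi(a, b, g):
    # Dombi S-norm, gamma > 0
    return 1.0 / (1.0 + ((1.0 / a - 1.0) ** -g + (1.0 / b - 1.0) ** -g) ** (-1.0 / g))

def t_hamacher(a, b, g):
    # Hamacher T-norm, gamma > 0
    return (a * b) / (g + (1.0 - g) * (a + b - a * b))

def s_hamacher(a, b, g):
    # Hamacher S-norm, gamma > 0
    return (a + b + (g - 2.0) * a * b) / (1.0 + (g - 1.0) * a * b)

def t_frank(a, b, s):
    # Frank T-norm, s > 0 and s != 1
    return np.log(1.0 + (s ** a - 1.0) * (s ** b - 1.0) / (s - 1.0)) / np.log(s)

def s_frank(a, b, s):
    # Frank S-norm, s > 0 and s != 1
    return 1.0 - np.log(1.0 + (s ** (1 - a) - 1.0) * (s ** (1 - b) - 1.0) / (s - 1.0)) / np.log(s)
```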
Phase 3: Calculation of the output for the output neuron with interval type-2 fuzzy number weights.
\underline{Out} = f\left( \sum_{i=1}^{n} \underline{y_i}\, \underline{w_{ij}} \right)
\overline{Out} = f\left( \sum_{i=1}^{n} \overline{y_i}\, \overline{w_{ij}} \right)
In the output neuron, the linear activation function is utilized. Subsequently, we applied the S-norm and T-norm to obtain the lower and upper outputs of the output neuron, respectively.
\underline{y} = SNorm(\underline{Out}, \overline{Out})
\overline{y} = TNorm(\underline{Out}, \overline{Out})
where we used the following T-norms and S-norms:
Sum-Product:
TNorm(\underline{Out}, \overline{Out}) = \underline{Out} \cdot \overline{Out}
SNorm(\underline{Out}, \overline{Out}) = \underline{Out} + \overline{Out} - TNorm(\underline{Out}, \overline{Out})
Dombi: for γ > 0.
TNorm_D(\underline{Out}, \overline{Out}, \gamma) = \frac{1}{1 + \left[ \left( \frac{1}{\underline{Out}} - 1 \right)^{\gamma} + \left( \frac{1}{\overline{Out}} - 1 \right)^{\gamma} \right]^{1/\gamma}}
SNorm_D(\underline{Out}, \overline{Out}, \gamma) = \frac{1}{1 + \left[ \left( \frac{1}{\underline{Out}} - 1 \right)^{-\gamma} + \left( \frac{1}{\overline{Out}} - 1 \right)^{-\gamma} \right]^{-1/\gamma}}
Hamacher: for γ > 0.
TNorm_H(\underline{Out}, \overline{Out}, \gamma) = \frac{\underline{Out} \cdot \overline{Out}}{\gamma + (1 - \gamma)(\underline{Out} + \overline{Out} - \underline{Out} \cdot \overline{Out})}
SNorm_H(\underline{Out}, \overline{Out}, \gamma) = \frac{\underline{Out} + \overline{Out} + (\gamma - 2)(\underline{Out} \cdot \overline{Out})}{1 + (\gamma - 1)(\underline{Out} \cdot \overline{Out})}
Frank: for s > 0.
TNorm_F(\underline{Out}, \overline{Out}, s) = \log_s \left[ 1 + \frac{(s^{\underline{Out}} - 1)(s^{\overline{Out}} - 1)}{s - 1} \right]
SNorm_F(\underline{Out}, \overline{Out}, s) = 1 - \log_s \left[ 1 + \frac{(s^{1 - \underline{Out}} - 1)(s^{1 - \overline{Out}} - 1)}{s - 1} \right]
Phase 4: The single output of the neural network is obtained:
y = \frac{\underline{y} + \overline{y}}{2}
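Phases 0-4 can be combined into a single forward pass; a sketch under the above definitions follows (names are illustrative; `t_norm` and `s_norm` are any of the pairs defined in phase 2, and the parametric norms assume values in (0, 1)):

```python
import numpy as np

def forward(x, w_h_lo, w_h_up, w_o_lo, w_o_up, t_norm, s_norm):
    """Forward pass of the FNNIT2FNW for one hidden layer and one output neuron."""
    sech = lambda z: 2.0 / (np.exp(z) + np.exp(-z))
    # Phase 2: lower/upper hidden outputs, then T-norm (lower) and S-norm (upper)
    net_lo = sech(w_h_lo @ x)
    net_up = sech(w_h_up @ x)
    y_lo = t_norm(net_lo, net_up)
    y_up = s_norm(net_lo, net_up)
    # Phase 3: linear output neuron, then S-norm (lower) and T-norm (upper),
    # following the equations above
    out_lo = float(w_o_lo @ y_lo)
    out_up = float(w_o_up @ y_up)
    y_final_lo = s_norm(out_lo, out_up)
    y_final_up = t_norm(out_lo, out_up)
    # Phase 4: single crisp output as the midpoint of the interval
    return 0.5 * (y_final_lo + y_final_up)
```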

3.3. Proposed Adjustment for Interval Type-2 Fuzzy Numbers with Backpropagation Learning

The backpropagation learning algorithm is performed by the adjustment of the interval type-2 fuzzy number weights, described as follows:
Stage 1:
The Nguyen-Widrow algorithm is utilized to initialize the lower and upper values of the interval type-2 fuzzy numbers weights for the neural network.
Stage 2:
The input pattern and the desired output for the neural network are established.
Stage 3:
The output of the neural network is calculated: the inputs are presented to the network, and the output is obtained by performing the calculations layer by layer, from the input layer to the output layer.
Stage 4:
Determine the error terms for the neurons of the layers. In the output layer, the calculation of the lower ($\underline{\delta_{pk}^{O}}$) and upper ($\overline{\delta_{pk}^{O}}$) delta for each neuron “k” is performed with the following equations:
\underline{\delta_{pk}^{O}} = (d_{pk} - y_{pk})\, {f_k^{O}}'(\underline{y})
\overline{\delta_{pk}^{O}} = (d_{pk} - y_{pk})\, {f_k^{O}}'(\overline{y})
In the hidden layer, the calculation of the lower ($\underline{\delta_{pj}^{h}}$) and upper ($\overline{\delta_{pj}^{h}}$) delta for each neuron “j” is performed with the following equations:
\underline{\delta_{pj}^{h}} = {f_j^{h}}'(\underline{Net}) \sum_{k} \underline{\delta_{pk}^{O}}\, \underline{w_{kj}}
\overline{\delta_{pj}^{h}} = {f_j^{h}}'(\overline{Net}) \sum_{k} \overline{\delta_{pk}^{O}}\, \overline{w_{kj}}
Stage 5:
A recursive algorithm is used to update the interval type-2 fuzzy number weights, beginning with the output neurons and moving backwards to the neurons in the input layer. The adjustment is described as follows:
The change of the interval type-2 fuzzy number weights is calculated with the following equations:
Calculations of the output neurons:
\underline{w_{kj}}(t+1) = \underline{\delta_{pk}^{O}}\, \underline{y_{pj}}
\overline{w_{kj}}(t+1) = \overline{\delta_{pk}^{O}}\, \overline{y_{pj}}
Calculations of the hidden neurons:
\underline{w_{ji}}(t+1) = \underline{\delta_{pj}^{h}}\, x_{pi}
\overline{w_{ji}}(t+1) = \overline{\delta_{pj}^{h}}\, x_{pi}
Stage 6:
The method is repeated until the error terms are small enough for each of the learned patterns:
E_p = \frac{1}{2} \sum_{k=1}^{M} \delta_{pk}^{2}
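A sketch of Stages 4 and 5 for one training pattern follows; it is a direct transcription of the delta and weight-change equations above, with illustrative variable names (here `net_lo`/`net_up` denote the pre-activation sums of the hidden neurons, and the learning-rate handling is omitted, as in the equations as written):

```python
import numpy as np

def dsech(z):
    # Derivative of the hyperbolic secant: -sech(z) * tanh(z)
    s = 2.0 / (np.exp(z) + np.exp(-z))
    return -s * np.tanh(z)

def weight_changes(d, y, x, net_lo, net_up, y_lo, y_up, w_o_lo, w_o_up):
    """Stage 4: lower/upper deltas; Stage 5: interval weight changes."""
    # Output layer: the linear activation has derivative 1
    delta_o_lo = d - y
    delta_o_up = d - y
    # Hidden layer: back-propagate the deltas through lower/upper weights
    delta_h_lo = dsech(net_lo) * (w_o_lo * delta_o_lo)
    delta_h_up = dsech(net_up) * (w_o_up * delta_o_up)
    # Weight changes for the output and hidden connections
    dw_o_lo = delta_o_lo * y_lo
    dw_o_up = delta_o_up * y_up
    dw_h_lo = np.outer(delta_h_lo, x)
    dw_h_up = np.outer(delta_h_up, x)
    return dw_o_lo, dw_o_up, dw_h_lo, dw_h_up
```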
Alternatively, we have the option of working with fuzzy inputs and fuzzy targets. In this case, the modification of the proposed neural network must be made in phase 2, multiplying by the lower input in Equation (2) and by the upper input in Equation (3); in addition, in phase 4, we can keep the lower and upper final outputs of the output neuron, because there is no need to perform the average.
The neural network used in this research has 16 neurons in the hidden layer, based on a study of the performance of the neural networks when modifying the number of neurons in the hidden layer, starting with 5 neurons and increasing one by one until reaching 120 neurons; this study is presented in the following section. To test the proposed method, experiments in time series prediction are performed. A benchmark chaotic time series, the Mackey-Glass time series (for τ = 17), is used in this study.
Based on previous work, the backpropagation algorithm with gradient descent and an adaptive learning rate is utilized in the experiments.
The neural networks manage interval type-2 fuzzy number weights in the hidden and output layers [53,54]. In the hidden and output layers of the network, we used the backpropagation algorithm modified to work with interval type-2 fuzzy numbers in order to obtain new weights for the next epochs of the network [55,56,57].

4. Simulation Results

We performed the experiments on the Mackey-Glass time series using 1000 data points: 500 data points are considered for the training stage and 500 for the testing stage.
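The Mackey-Glass series can be generated by numerically integrating the delay differential equation dx/dt = 0.2 x(t − τ)/(1 + x(t − τ)^10) − 0.1 x(t) with τ = 17; the sketch below uses a simple Euler step and a common initial condition, which are assumptions rather than settings reported in this work:

```python
import numpy as np

def mackey_glass(n_points=1000, tau=17, x0=1.2):
    """Euler integration of the Mackey-Glass equation (step size 1)."""
    x = np.full(n_points + tau, x0)
    for t in range(tau, n_points + tau - 1):
        x_tau = x[t - tau]
        x[t + 1] = x[t] + 0.2 * x_tau / (1.0 + x_tau ** 10) - 0.1 * x[t]
    return x[tau:]

series = mackey_glass()
train, test = series[:500], series[500:]  # 500/500 split used in this work
```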

4.1. Neural Network with Interval Type-2 Fuzzy Numbers Weights (NNIT2FNW) for T-Norm and S-Norm of Sum-Product

We performed an experiment to determine manually the optimal number of neurons in the hidden layer of the fuzzy neural network with interval type-2 fuzzy numbers; the number of neurons was increased by one unit at a time in the interval from 5 to 120 neurons. The results obtained from the experiments are presented in Table 1. The fuzzy neural network with 16 neurons in the hidden layer obtained the best result, with 0.0149 for the best prediction error and 0.0180 for the average error (MAE). This experiment was performed with the T-norm and S-norm of sum-product.
The mean absolute error (MAE) is used to report the results of the experiments. The average error was obtained over 30 experiments with equal parameters and conditions.
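The MAE reported in the tables reduces to the mean of the absolute prediction errors, averaged over the 30 runs (a trivial sketch; `run_experiment` is a hypothetical function standing in for one training/testing cycle):

```python
import numpy as np

def mae(y_pred, y_true):
    # Mean absolute error for one experiment
    return float(np.mean(np.abs(np.asarray(y_pred) - np.asarray(y_true))))

# Average MAE over 30 experiments with identical parameters and conditions:
# errors = [mae(run_experiment(), test) for _ in range(30)]
# average_mae = np.mean(errors)
```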
The parameters for the fuzzy neural network with interval type-2 fuzzy numbers are 500 epochs and an error goal of 0.0000001 for the training phase.
We observe from Table 1 that the fuzzy neural network with interval type-2 fuzzy numbers and the T-norm and S-norm of sum-product with 16 neurons in the hidden layer (FNNIT2FNSp) shows better results than the others; based on this fact, in the following experiments we work with this architecture for the neural network.
In Figure 6, we present the plot of the real data of the Mackey-Glass time series against the predicted data of the interval type-2 fuzzy neural network (FNNIT2FNSp) with 16 neurons in the hidden layer. In Figure 7, the convergence curves of the training process are illustrated.
We performed the same experiment presented above with the T-norms and S-norms of Dombi, Hamacher and Frank.

4.2. NNIT2FNW for T-Norm and S-Norm of Dombi

The architecture for the fuzzy neural network with the T-norm of Dombi (FNNIT2FND) has 4 neurons in the hidden layer and γ = 0.8, obtaining 0.0457 for the best prediction error and 0.0622 for the average error. Some results for this architecture are shown in Table 2, Figure 8 and Figure 9.

4.3. NNIT2FNW for T-Norm and S-Norm of Hamacher

The architecture for the fuzzy neural network with the T-norm of Hamacher (FNNIT2FNH) has 39 neurons in the hidden layer and γ = 1, obtaining 0.0130 for the best prediction error and 0.0164 for the average error. Some results for this architecture are shown in Table 3, Figure 10 and Figure 11.

4.4. NNIT2FNW for T-Norm and S-Norm of Frank

The architecture for the fuzzy neural network with the T-norm of Frank (FNNIT2FNF) has 19 neurons in the hidden layer and s = 2.8, obtaining 0.0117 for the best prediction error and 0.0167 for the average error. Some results for this architecture are shown in Table 4, Figure 12 and Figure 13.

4.5. Comparison of Traditional Neural Network Against NNIT2FNW for T-Norm and S-Norm

The results obtained in the experiments with the traditional neural network (TNN) are presented in Table 5, Figure 14 and Figure 15; the neural network parameters were obtained through empirical testing. The best prediction error is 0.0169, and the average error is 0.0203 (MAE). In Table 5, we also present the comparison of these results against the results of the fuzzy neural network with interval type-2 fuzzy numbers for all T-norms (FNNIT2FNSp, FNNIT2FND, FNNIT2FNH, FNNIT2FNF).

4.6. Comparison of the Proposed Methods for Mackey-Glass Data with Noise

We also implemented an experiment applying noise levels in the interval from 0.1 to 1 to the test data, to analyze the robustness of the traditional neural network (TNN) and the fuzzy neural networks with interval type-2 fuzzy numbers for all T-norms (FNNIT2FNSp, FNNIT2FND, FNNIT2FNH, FNNIT2FNF). The results obtained for these experiments are presented in Table 6. We applied the noise using the following equations:
Data_{Noise} = Data + (NoiseLevel \times Noise)
Noise = 2 \times [\mathrm{rand}(1, nData) - 0.5]
where “Data” denotes the test data points of the Mackey-Glass time series, “NoiseLevel” is the level of noise in the range (0.1–1), and “rand” is a uniformly distributed random number function used to obtain the noise values.
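These two equations translate directly into the following sketch (NumPy's uniform generator plays the role of “rand”):

```python
import numpy as np

def add_noise(data, noise_level, rng=None):
    """Apply uniform noise in [-1, 1) scaled by noise_level, as in the equations above."""
    rng = np.random.default_rng() if rng is None else rng
    noise = 2.0 * (rng.random(len(data)) - 0.5)
    return np.asarray(data) + noise_level * noise
```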
We observe in Table 6 that the best performance was accomplished by the fuzzy neural networks with interval type-2 fuzzy numbers, for all T-norms, at almost all noise levels in the test data. The prediction error of the traditional neural network increased considerably at the higher noise levels, in contrast to the fuzzy neural networks, which keep the prediction error below 0.21.
We observe in Figure 16 that, when noise is applied to the test data, the fuzzy neural networks with interval type-2 fuzzy numbers with the Hamacher and Frank T-norms achieve lower prediction errors than the traditional neural network at the different noise levels. An important observation is that the fuzzy neural network with the T-norm of Dombi performs better than the others in the test with noise, but has a higher prediction error without noise.
A statistical test, Student's t-test, was applied to compare the performance of the TNN against FNNIT2FNH and FNNIT2FNF in terms of prediction error; we selected these two fuzzy neural networks because they presented better performance than FNNIT2FNSp and FNNIT2FND. In the statistical tests we consider 30 experiments and a 95% confidence level.
We present in Table 7 the parameters for the statistical test of the TNN, FNNIT2FNH and FNNIT2FNF models. For the comparison of these models, we used the null hypothesis H0: TNN = FNNIT2FNH against the alternative hypothesis H1: TNN > FNNIT2FNH, and likewise H0: TNN = FNNIT2FNF against H1: TNN > FNNIT2FNF.
The results obtained with the statistical test for the prediction errors of TNN against FNNIT2FNH are 0.003907 for the estimated mean difference, 0.003153 for the lower limit of the difference, a t value of 10.37, a p value of 0.0001 and 56 degrees of freedom.
The results obtained with the statistical test for the prediction errors of TNN against FNNIT2FNF are 0.003631 for the estimated mean difference, 0.002897 for the lower limit of the difference, a t value of 9.93, a p value of 0.0001 and 54 degrees of freedom.
The results obtained with the statistical test for the prediction errors of FNNIT2FNH against FNNIT2FNF are −0.00027 for the estimated mean difference, −0.000940 for the lower limit of the difference, a t value of −0.84, a p value of 0.407 and 57 degrees of freedom.
The results demonstrate that there exists significant statistical evidence to affirm that FNNIT2FNH and FNNIT2FNF are better than the TNN, and that FNNIT2FNH is statistically equivalent to FNNIT2FNF.

5. Conclusions

Based on the experiments, we have reached the conclusion that the fuzzy neural networks with interval type-2 fuzzy number weights with the T-norms of sum-product (FNNIT2FNSp), Hamacher (FNNIT2FNH) and Frank (FNNIT2FNF) achieve better results than the traditional neural network for the benchmark time series used in this work, Mackey-Glass. This affirmation is based on the best prediction errors of 0.0169 for the TNN, and 0.0149, 0.0130 and 0.0117 for FNNIT2FNSp, FNNIT2FNH and FNNIT2FNF, respectively; the average errors obtained over 30 experiments are 0.0203, 0.0180, 0.0164 and 0.0167, respectively.
The fuzzy neural networks with interval type-2 fuzzy number weights with T-norms presented better tolerance and behavior than the traditional neural network when Gaussian noise was applied to the testing data. This inference was reached by observing that these fuzzy neural networks show only minor increases in prediction error, compared to the traditional neural network, as the noise level grows. Moreover, the analysis of Table 5, Table 6 and Figure 16 shows that FNNIT2FNH and FNNIT2FNF yield lower prediction errors than the other paradigms considered in this work for the Mackey-Glass time series, which is further supported by the results of the statistical tests performed on these paradigms.
The method proposed in this work, a fuzzy neural network with interval type-2 fuzzy number weights with T-norms, presents better performance and robustness, and achieves lower prediction errors, than the traditional neural network. Furthermore, the interval type-2 fuzzy number weights make the neural network less susceptible to increases in the prediction error when the real data are affected by noise.

Author Contributions

Patricia Melin proposed the idea of using fuzzy weights in the backpropagation algorithm. Juan R. Castro proposed the idea of using the T-norm and S-norm in the method. Patricia Melin, Juan R. Castro, Oscar Castillo and Fernando Gaxiola conceived and designed the experiments; Patricia Melin, Fevrier Valdez and Fernando Gaxiola performed the experiments; Juan R. Castro and Fevrier Valdez analyzed the data; Fernando Gaxiola and Oscar Castillo wrote the paper. In addition, Oscar Castillo reviewed the correctness of the use of type-2 fuzzy logic.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Dunyak, J.; Wunsch, D. Fuzzy Number Neural Networks. Fuzzy Sets Syst. 1999, 108, 49–58. [Google Scholar] [CrossRef]
  2. Li, Z.; Kecman, V.; Ichikawa, A. Fuzzified Neural Network based on fuzzy number operations. Fuzzy Sets Syst. 2002, 130, 291–304. [Google Scholar] [CrossRef]
  3. Beale, E.M.L. A Derivation of Conjugate Gradients. In Numerical Methods for Non-Linear Optimization; Lootsma, F.A., Ed.; Academic Press: London, UK, 1972; pp. 39–43. [Google Scholar]
  4. Fletcher, R.; Reeves, C.M. Function Minimization by Conjugate Gradients. Comput. J. 1964, 7, 149–154. [Google Scholar] [CrossRef]
  5. Moller, M.F. A Scaled Conjugate Gradient Algorithm for Fast Supervised Learning. Neural Netw. 1993, 6, 525–533. [Google Scholar] [CrossRef]
  6. Powell, M.J.D. Restart Procedures for the Conjugate Gradient Method. Math. Program. 1977, 12, 241–254. [Google Scholar] [CrossRef]
  7. Gaxiola, F.; Melin, P.; Valdez, F.; Castillo, O. Interval Type-2 Fuzzy Weight Adjustment for Backpropagation Neural Networks with Application in Time Series Prediction. Inf. Sci. 2014, 260, 1–14. [Google Scholar] [CrossRef]
  8. Gaxiola, F.; Melin, P.; Valdez, F.; Castillo, O. Generalized Type-2 Fuzzy Weight Adjustment for Backpropagation Neural Networks in Time Series Prediction. Inf. Sci. 2015, 325, 159–174. [Google Scholar] [CrossRef]
  9. Casasent, D.; Natarajan, S. A Classifier Neural Net with Complex-Valued Weights and Square-Law Nonlinearities. Neural Netw. 1995, 8, 989–998. [Google Scholar] [CrossRef]
  10. Draghici, S. On the Capabilities of Neural Networks using Limited Precision Weights. Neural Netw. 2002, 15, 395–414. [Google Scholar] [CrossRef]
  11. Kamarthi, S.; Pittner, S. Accelerating Neural Network Training using Weight Extrapolations. Neural Netw. 1999, 12, 1285–1299. [Google Scholar] [CrossRef]
  12. Dombi, J. A general class of fuzzy operators, the De Morgan class of fuzzy operators and fuzziness induced by fuzzy operators. Fuzzy Sets Syst. 1982, 8, 149–163. [Google Scholar] [CrossRef]
  13. Weber, S. A general concept of fuzzy connectives, negations and implications based on t-norms and t-conorms. Fuzzy Sets Syst. 1983, 11, 115–134. [Google Scholar] [CrossRef]
  14. Hamacher, H. Über logische verknupfungen unscharfer aussagen und deren zugehorige bewertungsfunktionen. In Progress in Cybernetics and Systems Research, III; Trappl, R., Klir, G.J., Ricciardi, L., Eds.; Hemisphere: New York, NY, USA, 1975; pp. 276–288. (In German) [Google Scholar]
  15. Frank, M.J. On the simultaneous associativity of F(x, y) and x + y − F(x, y). Aequ. Math. 1979, 19, 194–226. [Google Scholar] [CrossRef]
  16. Neville, R.S.; Eldridge, S. Transformations of Sigma–Pi Nets: Obtaining Reflected Functions by Reflecting Weight Matrices. Neural Netw. 2002, 15, 375–393. [Google Scholar] [CrossRef]
  17. Yam, J.; Chow, T. A Weight Initialization Method for Improving Training Speed in Feedforward Neural Network. Neurocomputing 2000, 30, 219–232. [Google Scholar] [CrossRef]
  18. Martinez, G.; Melin, P.; Bravo, D.; Gonzalez, F.; Gonzalez, M. Modular Neural Networks and Fuzzy Sugeno Integral for Face and Fingerprint Recognition. Adv. Soft Comput. 2006, 34, 603–618. [Google Scholar]
  19. De Wilde, O. The Magnitude of the Diagonal Elements in Neural Networks. Neural Netw. 1997, 10, 499–504. [Google Scholar] [CrossRef]
  20. Salazar, P.A.; Melin, P.; Castillo, O. A New Biometric Recognition Technique Based on Hand Geometry and Voice Using Neural Networks and Fuzzy Logic. Soft Comput. Hybrid Intell. Syst. 2008, 154, 171–186. [Google Scholar]
  21. Cazorla, M.; Escolano, F. Two Bayesian Methods for Junction Detection. IEEE Trans. Image Process. 2003, 12, 317–327. [Google Scholar] [CrossRef] [PubMed]
  22. Hagan, M.T.; Demuth, H.B.; Beale, M.H. Neural Network Design; PWS Publishing: Boston, MA, USA, 1996; p. 736. [Google Scholar]
  23. Phansalkar, V.V.; Sastry, P.S. Analysis of the Back-Propagation Algorithm with Momentum. IEEE Trans. Neural Netw. 1994, 5, 505–506. [Google Scholar] [CrossRef] [PubMed]
  24. Fard, S.; Zainuddin, Z. Interval Type-2 Fuzzy Neural Networks Version of the Stone–Weierstrass Theorem. Neurocomputing 2011, 74, 2336–2343. [Google Scholar] [CrossRef]
  25. Asady, B. Trapezoidal Approximation of a Fuzzy Number Preserving the Expected Interval and Including the Core. Am. J. Oper. Res. 2013, 3, 299–306. [Google Scholar] [CrossRef]
  26. Coroianu, L.; Stefanini, L. General Approximation of Fuzzy Numbers by F-Transform. Fuzzy Sets Syst. 2016, 288, 46–74. [Google Scholar] [CrossRef]
  27. Yang, D.; Li, Z.; Liu, Y.; Zhang, H.; Wu, W. A Modified Learning Algorithm for Interval Perceptrons with Interval Weights. Neural Process Lett. 2015, 42, 381–396. [Google Scholar] [CrossRef]
  28. Requena, I.; Blanco, A.; Delgado, M.; Verdegay, J. A Decision Personal Index of Fuzzy Numbers based on Neural Networks. Fuzzy Sets Syst. 1995, 73, 185–199. [Google Scholar] [CrossRef]
  29. Kuo, R.J.; Chen, J.A. A Decision Support System for Order Selection in Electronic Commerce based on Fuzzy Neural Network Supported by Real-Coded Genetic Algorithm. Expert. Syst. Appl. 2004, 26, 141–154. [Google Scholar] [CrossRef]
  30. Molinari, F. A New Criterion of Choice between Generalized Triangular Fuzzy Numbers. Fuzzy Sets Syst. 2016, 296, 51–69. [Google Scholar] [CrossRef]
  31. Chai, Y.; Xhang, D. A Representation of Fuzzy Numbers. Fuzzy Sets Syst. 2016, 295, 1–18. [Google Scholar] [CrossRef]
  32. Figueroa-García, J.C.; Chalco-Cano, Y.; Roman-Flores, H. Distance Measures for Interval Type-2 Fuzzy Numbers. Discret. Appl. Math. 2015, 197, 93–102. [Google Scholar] [CrossRef]
  33. Ishibuchi, H.; Nii, M. Numerical Analysis of the Learning of Fuzzified Neural Networks from Fuzzy If–Then Rules. Fuzzy Sets Syst. 1998, 120, 281–307. [Google Scholar] [CrossRef]
  34. Karnik, N.N.; Mendel, J. Operations on type-2 fuzzy sets. Fuzzy Sets Syst. 2001, 122, 327–348. [Google Scholar] [CrossRef]
  35. Raj, P.A.; Kumar, D.N. Ranking Alternatives with Fuzzy Weights using Maximizing Set and Minimizing Set. Fuzzy Sets Syst. 1999, 105, 365–375. [Google Scholar]
  36. Chu, T.C.; Tsao, T.C. Ranking Fuzzy Numbers with an Area between the Centroid Point and Original Point. Comput. Math. Appl. 2002, 43, 111–117. [Google Scholar] [CrossRef]
  37. Ishibuchi, H.; Morioka, K.; Tanaka, H. A Fuzzy Neural Network with Trapezoid Fuzzy Weights. In Proceedings of the Fuzzy Systems, IEEE World Congress on Computational Intelligence, Orlando, FL, USA, 26–29 June 1994; Volume 1, pp. 228–233. [Google Scholar]
  38. Ishibuchi, H.; Tanaka, H.; Okada, H. Fuzzy Neural Networks with Fuzzy Weights and Fuzzy Biases. In Proceedings of the IEEE International Conference on Neural Networks, San Francisco, CA, USA, 28 March–1 April 1993; Volume 3, pp. 1650–1655. [Google Scholar]
  39. Feuring, T. Learning in Fuzzy Neural Networks. In Proceedings of the IEEE International Conference on Neural Networks, Washington, DC, USA, 3–6 June 1996; Volume 2, pp. 1061–1066. [Google Scholar]
  40. Castro, J.; Castillo, O.; Melin, P.; Rodríguez-Díaz, A. A Hybrid Learning Algorithm for a Class of Interval Type-2 Fuzzy Neural Networks. Inform. Sci. 2009, 179, 2175–2193. [Google Scholar] [CrossRef]
  41. Castro, J.; Castillo, O.; Melin, P.; Mendoza, O.; Rodríguez-Díaz, A. An Interval Type-2 Fuzzy Neural Network for Chaotic Time Series Prediction with Cross-Validation and Akaike Test. Soft Comput. Intell. Control Mob. Robot. 2011, 318, 269–285. [Google Scholar]
  42. Abiyev, R. A Type-2 Fuzzy Wavelet Neural Network for Time Series Prediction. Lect. Notes Comput. Sci. 2010, 6098, 518–527. [Google Scholar]
  43. Karnik, N.; Mendel, J. Applications of Type-2 Fuzzy Logic Systems to Forecasting of Time-Series. Inform. Sci. 1999, 120, 89–111. [Google Scholar] [CrossRef]
  44. Pulido, M.; Melin, P.; Castillo, O. Genetic Optimization of Ensemble Neural Networks for Complex Time Series Prediction. In Proceedings of the 2011 International Joint Conference on Neural Networks (IJCNN), San Jose, CA, USA, 31 July–5 August 2011; pp. 202–206. [Google Scholar]
  45. Pedrycz, W. Granular Computing: Analysis and Design of Intelligent Systems; CRC Press/Francis Taylor: Boca Raton, FL, USA, 2013. [Google Scholar]
  46. Tung, S.W.; Quek, C.; Guan, C. eT2FIS: An Evolving Type-2 Neural Fuzzy Inference System. Inform. Sci. 2013, 220, 124–148. [Google Scholar] [CrossRef]
  47. Zarandi, M.H.F.; Torshizi, A.D.; Turksen, I.B.; Rezaee, B. A new indirect approach to the type-2 fuzzy systems modeling and design. Inform. Sci. 2013, 232, 346–365. [Google Scholar] [CrossRef]
  48. Zhai, D.; Mendel, J. Uncertainty Measures for General Type-2 Fuzzy Sets. Inform. Sci. 2011, 181, 503–518. [Google Scholar] [CrossRef]
  49. Biglarbegian, M.; Melek, W.; Mendel, J. On the robustness of Type-1 and Interval Type-2 fuzzy logic systems in modeling. Inform. Sci. 2011, 181, 1325–1347. [Google Scholar] [CrossRef]
  50. Jang, J.S.R.; Sun, C.T.; Mizutani, E. Neuro-Fuzzy and Soft Computing: A Computational Approach to Learning and Machine Intelligence; Prentice Hall: Englewood Cliffs, NJ, USA, 1997; p. 614. [Google Scholar]
  51. Chen, S.; Wang, C. Fuzzy decision making systems based on interval type-2 fuzzy sets. Inform. Sci. 2013, 242, 1–21. [Google Scholar] [CrossRef]
  52. Nguyen, D.; Widrow, B. Improving the Learning Speed of 2-Layer Neural Networks by choosing Initial Values of the Adaptive Weights. In Proceedings of the International Joint Conference on Neural Networks, San Diego, CA, USA, 17–21 June 1990; Volume 3, pp. 21–26. [Google Scholar]
  53. Montiel, O.; Castillo, O.; Melin, P.; Sepúlveda, R. The evolutionary learning rule for system identification. Appl. Soft Comput. 2003, 3, 343–352. [Google Scholar] [CrossRef]
  54. Sepúlveda, R.; Castillo, O.; Melin, P.; Montiel, O. An Efficient Computational Method to Implement Type-2 Fuzzy Logic in Control Applications. In Analysis and Design of Intelligent Systems Using Soft Computing Techniques; Springer: Berlin/Heidelberg, Germany, 2007; Volume 41, pp. 45–52. [Google Scholar]
  55. Castillo, O.; Melin, P. A review on the design and optimization of interval type-2 fuzzy controllers. Appl. Soft Comput. 2012, 12, 1267–1278. [Google Scholar] [CrossRef]
  56. Hagras, H. Type-2 Fuzzy Logic Controllers: A Way Forward for Fuzzy Systems in Real World Environments. IEEE World Congr. Comput. Intell. 2008, 5050, 181–200. [Google Scholar]
  57. Melin, P. Modular Neural Networks and Type-2 Fuzzy Systems for Pattern Recognition; Springer: Berlin/Heidelberg, Germany, 2012; p. 204. [Google Scholar]
Figure 1. Structure and equation of a neuron with traditional numerical weights.
Figure 2. Scheme of the proposed structure and equations of the neuron with interval type-2 fuzzy number weights.
Figure 3. Illustration of the architecture of the neural network.
Figure 4. Flow chart of the FNNIT2FNW.
Figure 5. Illustration of the architecture of the Fuzzy Neural network.
Figure 6. Illustration of the real data against the prediction data of the Mackey-Glass time series for the fuzzy neural network.
Figure 7. Illustration of the convergence curves in the training process for FNNIT2FNSp.
Figure 8. Illustration of the prediction data of the FNNIT2FND against the real data for the Mackey-Glass time series.
Figure 9. Illustration of the convergence curves in the training process for FNNIT2FND.
Figure 10. Illustration of the prediction data of the FNNIT2FNH against the real data for the Mackey-Glass time series.
Figure 11. Illustration of the convergence curves in the training process for FNNIT2FNH.
Figure 12. Illustration of the prediction data of the FNNIT2FNF against the real data for the Mackey-Glass time series.
Figure 13. Illustration of the convergence curves in the training process for FNNIT2FNF.
Figure 14. Illustration of the prediction data of the traditional neural network against the real data for the Mackey-Glass time series.
Figure 15. Illustration of the convergence curves in the training process for traditional neural network.
Figure 16. Illustration of the results of prediction error of the TNN against the results of FNNIT2FNSp, FNNIT2FND, FNNIT2FNH, and FNNIT2FNF for data with Gaussian noise of the Mackey-Glass time series for MAE.
Table 1. Results for the fuzzy neural network with interval type-2 fuzzy numbers with T-norm of sum-product in time series prediction using Mackey-Glass time series.
No. Neurons    Best Prediction Error (MAE)    Average MAE
5      0.0187    0.0240
6      0.0197    0.0245
7      0.0188    0.0250
8      0.0172    0.0231
9      0.0198    0.0259
10     0.0170    0.0246
11     0.0190    0.0252
12     0.0192    0.0248
13     0.0198    0.0255
14     0.0191    0.0251
15     0.0185    0.0227
16     0.0149    0.0180
17     0.0180    0.0238
18     0.0202    0.0242
19     0.0205    0.0239
20     0.0164    0.0247
21     0.0201    0.0243
22     0.0189    0.0241
23     0.0178    0.0249
24     0.0195    0.0250
25     0.0195    0.0259
26     0.0189    0.0233
27     0.0195    0.0246
28     0.0191    0.0248
29     0.0175    0.0245
30     0.0149    0.0233
31     0.0193    0.0245
32     0.0182    0.0259
33     0.0195    0.0252
34     0.0170    0.0243
35     0.0195    0.0241
36     0.0188    0.0251
37     0.0209    0.0248
38     0.0187    0.0243
39     0.0195    0.0254
40     0.0190    0.0246
41     0.0188    0.0263
42     0.0172    0.0233
43     0.0188    0.0249
44     0.0192    0.0237
45     0.0192    0.0247
46     0.0157    0.0247
47     0.0188    0.0252
48     0.0189    0.0246
49     0.0204    0.0247
50     0.0151    0.0246
51     0.0190    0.0250
52     0.0179    0.0239
53     0.0191    0.0242
54     0.0177    0.0240
55     0.0168    0.0240
56     0.0202    0.0251
57     0.0196    0.0255
58     0.0181    0.0250
59     0.0192    0.0248
60     0.0173    0.0239
61     0.0168    0.0236
62     0.0188    0.0239
63     0.0168    0.0240
64     0.0183    0.0238
65     0.0169    0.0252
66     0.0185    0.0250
67     0.0174    0.0253
68     0.0171    0.0230
69     0.0185    0.0244
70     0.0186    0.0248
71     0.0210    0.0251
72     0.0182    0.0249
73     0.0206    0.0247
74     0.0169    0.0249
75     0.0170    0.0240
76     0.0174    0.0233
77     0.0206    0.0245
78     0.0185    0.0244
79     0.0190    0.0247
80     0.0178    0.0246
81     0.0179    0.0247
82     0.0185    0.0243
83     0.0192    0.0254
84     0.0170    0.0237
85     0.0178    0.0242
86     0.0186    0.0260
87     0.0197    0.0233
88     0.0197    0.0256
89     0.0178    0.0252
90     0.0191    0.0257
91     0.0183    0.0265
92     0.0193    0.0240
93     0.0199    0.0240
94     0.0166    0.0242
95     0.0206    0.0248
96     0.0181    0.0236
97     0.0191    0.0252
98     0.0199    0.0248
99     0.0173    0.0249
100    0.0181    0.0248
101    0.0168    0.0237
102    0.0173    0.0250
103    0.0198    0.0245
104    0.0191    0.0237
105    0.0205    0.0245
106    0.0197    0.0246
107    0.0179    0.0256
108    0.0185    0.0244
109    0.0189    0.0241
110    0.0164    0.0242
111    0.0190    0.0254
112    0.0198    0.0250
113    0.0173    0.0245
114    0.0203    0.0244
115    0.0168    0.0248
116    0.0170    0.0233
117    0.0199    0.0254
118    0.0188    0.0252
119    0.0196    0.0247
120    0.0189    0.0250
Table 2. Results for the fuzzy neural network with interval type-2 fuzzy numbers with T-norm of Dombi in time series prediction using Mackey-Glass time series.
Experiment    Prediction Error
1      0.0457
2      0.0466
3      0.0549
4      0.0581
5      0.0599
6      0.0636
7      0.0656
8      0.0671
9      0.0675
10     0.0694
Average    0.0622
Table 3. Results for the fuzzy neural network with interval type-2 fuzzy numbers with T-norm of Hamacher in time series prediction using Mackey-Glass time series.
Experiment    Prediction Error
1      0.0130
2      0.0138
3      0.0149
4      0.0154
5      0.0163
6      0.0165
7      0.0170
8      0.0175
9      0.0177
10     0.0183
Average    0.0164
Table 4. Results for the fuzzy neural network with interval type-2 fuzzy numbers with T-norm of Frank in time series prediction using Mackey-Glass time series.
Experiment    Prediction Error
1      0.0117
2      0.0140
3      0.0153
4      0.0156
5      0.0158
6      0.0163
7      0.0170
8      0.0175
9      0.0177
10     0.0179
Average    0.0167
Table 5. Results for the traditional neural network (TNN) in the Mackey-Glass time series and the comparison against the FNNIT2FNSp, FNNIT2FND, FNNIT2FNH, and FNNIT2FNF.
Method        Best Prediction Error    Average
TNN           0.0169    0.0203
FNNIT2FNSp    0.0149    0.0180
FNNIT2FND     0.0457    0.0622
FNNIT2FNH     0.0130    0.0164
FNNIT2FNF     0.0117    0.0167
Table 6. Results for the traditional neural network and fuzzy neural networks with all T-norms in the Mackey-Glass time series under different noise levels (n).
Noise Level    TNN       FNNIT2FNSp    FNNIT2FND    FNNIT2FNH    FNNIT2FNF
n = 0      0.0169    0.0149    0.0457    0.0130    0.0117
n = 0.1    0.0564    0.0617    0.0704    0.0556    0.0594
n = 0.2    0.1115    0.1135    0.0981    0.0960    0.0954
n = 0.3    0.1749    0.1275    0.1168    0.1171    0.1175
n = 0.4    0.2311    0.1554    0.1362    0.1360    0.1419
n = 0.5    0.3124    0.1661    0.1502    0.1536    0.1571
n = 0.6    0.3676    0.1897    0.1485    0.1576    0.1589
n = 0.7    0.4250    0.1866    0.1684    0.1770    0.1736
n = 0.8    0.4941    0.2018    0.1744    0.1811    0.1808
n = 0.9    0.5411    0.2077    0.1775    0.1887    0.1858
n = 1      0.5684    0.2075    0.1858    0.1920    0.1935
Table 7. Parameters used in the t-student statistical test for the TNN against FNNIT2FNH and FNNIT2FNF.
Parameter                     TNN        FNNIT2FNH    FNNIT2FNF
No. Experiments               30         30           30
Mean Data                     0.02028    0.01638      0.01665
Standard Deviation            0.00158    0.00133      0.00123
Standard error of the mean    0.00029    0.00024      0.00023
