
Performance Index of Incremental Granular Model with Information Granule of Linguistic Intervals and Its Application

Department of Control and Instrumentation Engineering, Chosun University, Gwangju 61452, Korea
* Author to whom correspondence should be addressed.
Appl. Sci. 2020, 10(17), 5929; https://doi.org/10.3390/app10175929
Submission received: 28 July 2020 / Revised: 24 August 2020 / Accepted: 24 August 2020 / Published: 27 August 2020
(This article belongs to the Section Computing and Artificial Intelligence)

Abstract
This paper addresses the performance index (PI) of an incremental granular model (IGM) with information granules of linguistic intervals. For this purpose, the IGM is designed by combining a linear regression (LR) model and an interval-based granular model (GM). The fundamental scheme of IGM construction comprises two essential phases: (1) development of LR as a basic global model and (2) design of a local granular model that attempts to reduce the errors produced by the LR model. Here, the local interval-based GM is built on an interval-based fuzzy clustering algorithm, which is materialized by information granulation. The PI of the IGM is calculated by multiplying coverage by specificity, because the output of the IGM is not a numerical value but a linguistic interval. The concepts of coverage and specificity guide the construction of the information granules, so the granules are justified by the available experimental evidence and carry clearly defined semantics. To validate the PI method, an experiment is conducted on concrete compressive strength for civil engineering applications. The experimental results confirm that the PI of the IGM is an effective performance evaluation method.

1. Introduction

A fuzzy set is a set in which each element has a degree of membership, and various methodologies, algorithms, and structures based on fuzzy sets have been actively researched [1,2,3,4,5]. A common feature of these studies is that the model output is a numerical value [6,7,8,9,10,11,12,13]. On the other hand, Pedrycz et al. [14] proposed a granular model (GM), in which the model output is a triangular fuzzy number rather than a numerical value. GM generates information granules by using a context-based fuzzy clustering algorithm that exploits the properties of the data in the input and output variables. The incremental granular model (IGM) is a structure that combines a linear regression (LR) model and a GM: the global part produces an error through the LR model, while the local part compensates for it through the GM [15,16]. This method has been improved by evolutionary approaches [17,18]. In addition, Leite et al. [19] proposed a granular neural network that evolves from fuzzy data streams. Loia et al. [20] proposed a granular functional network that takes into account the granularity and time delay of information. Colace et al. [21] revisited recurrent neural networks from a granular perspective.
Accuracy and clarity are important factors in evaluating the performance of the abovementioned prediction models. In general, model accuracy is measured with the conventional root mean square error (RMSE) or the mean absolute percentage error (MAPE). The RMSE method evaluates a model's performance by subtracting the predicted value from the actual output value, squaring the differences, averaging them, and taking the square root. The following papers evaluated model performance using the RMSE method. Huang et al. [22] proposed a minimum mean square error estimator for mobile location using time-difference-of-arrival measurements. Yu et al. [23] presented measurement and empirical modeling of root mean square delay spread in indoor scenarios. Hwang et al. [24] proposed an open-loop low-complexity multiple-input multiple-output spatial multiplexing method that spatially multiplexes multiple data streams, which can be iteratively detected [25].
The following papers use MAPE-based evaluation. Draper et al. [26] estimated root mean square errors in remotely sensed soil moisture over continental-scale domains. Wu [27] presented a method for estimating model parameters under the criterion of minimizing MAPE (also called average relative error). De Myttenaere et al. [28] studied the consequences of using MAPE as a quality measure for regression models. McKenzie [29] used absolute percentage error and bias in economic forecasting applications. Kim [30] applied a new metric of absolute percentage error to intermittent demand forecasts. Besides, various MAPE methods have been studied so far [31,32,33,34,35,36].
Although methods for evaluating model accuracy have been actively researched, studies related to model clarity are still necessary. Pedrycz [37,38,39] proposed a method to evaluate GM accuracy and clarity through a performance index (PI), emphasizing the information granularity of a fuzzy set. In addition, he proposed a method for designing fuzzy sets using the parametric principle of justifiable granularity [38] and free-structure granular mappings based on the principle of justifiable granularity [39]. Zhang et al. [40] proposed a granular aggregation method for fuzzy rule-based models in distributed data environments. Zhongjie et al. [41] studied stabilizing the information granules formed by the principle of justifiable granularity. Liu et al. [42] focused on designing models of higher type based on information granules.
In this study, the prediction performance of the interval-based IGM is compared and analyzed using the PI with information granules of linguistic intervals. The linguistic intervals are generated in the output space under three cases according to the segmentation method, and the PI obtained in each case is compared with the traditional performance evaluation methods. To validate the PI method, experiments are performed on the concrete compressive strength example applied to civil engineering. This paper is organized as follows. Section 2 describes the interval-based fuzzy clustering algorithm and GM. Section 3 describes the architecture and procedure of the IGM. Section 4 describes the performance evaluation method, and Section 5 conducts an experiment on a concrete compressive strength example. Finally, Section 6 concludes the study and outlines future work.

2. Interval-Based GM

In this section, we describe the interval-based GM used to design the interval-based IGM. The interval-based GM generates information granules using the interval-based fuzzy c-means (IFCM) clustering algorithm. The following subsections describe the IFCM clustering method and the interval-based GM.

2.1. IFCM Clustering

In this study, the interval-based GM is designed with IFCM clustering to generate information granules. The IFCM clustering method creates intervals in the output space, taking into account the pattern characteristics of the input and output spaces. The general fuzzy clustering method [43] estimates clusters by using the distance between cluster centers and the input data in the input space, without considering the output space. In contrast, IFCM performs clustering while considering the pattern characteristics of the output space, and thus performs better than the existing clustering method.
Next, IFCM determines the cluster centers and the membership matrix using the following steps.
[Step 1] 
Set the number of intervals I (1 < I < q) and the number of clusters per interval C (2 < C < n).
[Step 2] 
Initialize the membership matrix with random values between 0 and 1.
$U = [u_{ij}]$, $i = 1, \ldots, C$, $j = 1, \ldots, n$
[Step 3] 
Calculate the center $C_i$ $(i = 1, 2, \ldots, C)$ of each cluster by the following equation:
$$C_i = \frac{\sum_{j=1}^{n} u_{ij}^{m} x_j}{\sum_{j=1}^{n} u_{ij}^{m}}$$
[Step 4] 
Compute the partition matrix $U$ considering $f_j$ as follows:
$$u_{ij} = \frac{f_j}{\sum_{k=1}^{C} \left( \frac{\| x_j - C_i \|}{\| x_j - C_k \|} \right)^{2/(m-1)}}$$
where $f_j$ represents the degree of inclusion of $x_j$ in the generated interval; that is, $f_j = A(y_j)$, $j = 1, \ldots, n$, is the membership degree of the output $y_j$ in the linguistic interval $A$. $u_{ij}$ denotes the membership degree induced by the $i$th cluster and the $j$th data point.
[Step 5] 
When the objective function $J$ below converges (i.e., its change between iterations falls below a tolerance), the process stops. Otherwise, the process restarts from Step 3.
$$J = \sum_{j=1}^{n} \sum_{i=1}^{C} u_{ij}^{m} \, \| x_j - C_i \|^{2}$$
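The clustering loop in Steps 2–5 can be sketched in Python. This is an illustrative reconstruction (the function and variable names are our own), assuming the inclusion degrees $f_j$ of the interval's fuzzy set are given:

```python
import numpy as np

def ifcm(X, f, C, m=2.0, max_iter=50, tol=1e-6, seed=0):
    """Conditional (interval-based) fuzzy c-means within one output interval.

    X : (n, d) input data assigned to this interval
    f : (n,) degrees of inclusion f_j of each point in the interval's fuzzy set
    Returns cluster centers (C, d) and partition matrix U (C, n).
    """
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    U = rng.random((C, n))
    U = U / U.sum(axis=0) * f                 # Step 2: columns sum to f_j
    prev_J = np.inf
    for _ in range(max_iter):
        Um = U ** m
        centers = (Um @ X) / Um.sum(axis=1, keepdims=True)   # Step 3
        # Step 4: distances d[i, j] = ||x_j - C_i||
        d = np.linalg.norm(X[None, :, :] - centers[:, None, :], axis=2)
        d = np.fmax(d, 1e-12)                 # guard against zero distance
        # u_ij = f_j / sum_k (d_ij / d_kj)^(2/(m-1))
        U = f / ((d[:, None, :] / d[None, :, :]) ** (2.0 / (m - 1.0))).sum(axis=1)
        J = float(np.sum((U ** m) * d ** 2))  # Step 5: objective function
        if abs(prev_J - J) < tol:
            break
        prev_J = J
    return centers, U
```

Note that, as in conditional fuzzy clustering, the columns of $U$ sum to $f_j$ rather than to 1.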
In IFCM, the numbers of intervals and clusters per interval must be set in advance. Figure 1 shows the concept of the IFCM clustering method, where the intervals and clusters are estimated by setting the numbers of intervals and clusters to 4 and 3, respectively. As shown in Figure 1, each interval has a linguistic meaning, and each cluster produced within the intervals is represented by fuzzy if–then rules.
In general, when creating intervals in the output space, the output is divided evenly into non-overlapping intervals, so only the spacing of each interval needs to be chosen; we define this general partitioning method as Case 1. In this study, the performance of GM is also evaluated with two further methods: dividing the intervals flexibly and without overlapping, based on the probability distribution of the data, and dividing the intervals evenly with overlapping ends. The interval division methods used in this study are as follows.
First, the flexible non-overlapping method adjusts each interval's length according to the probability distribution of the output data: where the distribution value is large the interval is short, and where it is small the interval is long. We define this flexible, non-overlapping partitioning as Case 2. Second, the method of dividing the intervals evenly while overlapping a certain range is similar to the general method, except that the ends of adjacent intervals overlap; this allows each interval to capture additional similar features. We define this partitioning as Case 3. Figure 2 shows a conceptual diagram of Cases 1, 2, and 3 according to the method of interval division. Figure 2a shows Case 1, which divides the output space uniformly, and Figure 2b shows Case 2, which flexibly divides the output space based on a stochastic distribution. The second interval of Figure 2b is short because its distribution value is large. Figure 2c shows Case 3, in which equal intervals overlap over a certain range.
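As an illustration of the three partitioning schemes, the interval boundaries can be computed as follows. The quantile-based rule for the flexible division and the overlap fraction are our own assumptions about how the flexible and overlapping splits might be realized:

```python
import numpy as np

def case1_uniform(y, I):
    """Case 1: divide [min, max] of the output into I equal, non-overlapping intervals."""
    return np.linspace(y.min(), y.max(), I + 1)

def case2_quantile(y, I):
    """Case 2: flexible, non-overlapping intervals from the data distribution.
    Equal-probability (quantile) cuts: dense regions get short intervals."""
    return np.quantile(y, np.linspace(0.0, 1.0, I + 1))

def case3_overlap(y, I, overlap=0.1):
    """Case 3: equal intervals whose ends overlap by a fraction of the width."""
    edges = np.linspace(y.min(), y.max(), I + 1)
    w = edges[1] - edges[0]
    return [(edges[t] - overlap * w, edges[t + 1] + overlap * w) for t in range(I)]
```

For output data concentrated near small values, Case 2 produces a short first interval where the distribution is dense, matching the behavior described for Figure 2b.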
By changing the granularity of the intervals and its distribution in the output, we can adjust the width of the linguistic intervals in the output. The adjusted intervals can be helpful in further enhancing the granular model [16].

2.2. Interval-Based GM

GM is simply a web of associations between the constructed information granules; the model is inherently granular. Figure 3 shows the structure of GM, which comprises an input space, an output space, and three layers. Given numerical input values, the model returns information granules, specifically linguistic intervals, as shown in Figure 3.
The features of GM include the following: first, a set of information granules is generated in the input as well as the output space. Second, the output of GM is expressed as an information granule rather than a numerical value and has the shape of an interval. The GM's output $Y$ is computed as a fuzzy number as follows:
$$Y = \sum_{t=1}^{I} z_t(x_k) \, [V_t^{-}, \; V_t^{+}]$$
Figure 4 shows the GM's final and actual output values. The GM's output has an interval shape, and the prediction performance can be validated by checking whether the actual output is included in the GM's final output interval. The limit values of the GM's output are computed as follows:
$$y_{\mathrm{lower}} = \sum_{t=1}^{I} z_t V_t^{-}, \qquad y_{\mathrm{upper}} = \sum_{t=1}^{I} z_t V_t^{+}$$
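A minimal sketch of this interval aggregation follows, assuming (our choice, not stated explicitly in the text) that the activation levels $z_t$ are normalized to sum to one:

```python
import numpy as np

def gm_interval_output(z, v_lower, v_upper):
    """Aggregate the interval prototypes [V_t^-, V_t^+] weighted by the
    activation levels z_t of the I linguistic intervals for one input x_k.

    Returns the interval [y_lower, y_upper] produced by the granular model.
    """
    z = np.asarray(z, dtype=float)
    z = z / z.sum()                       # normalize activations (assumption)
    v_lower = np.asarray(v_lower, dtype=float)
    v_upper = np.asarray(v_upper, dtype=float)
    return float(z @ v_lower), float(z @ v_upper)
```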

3. Interval-Based IGM

IGM is designed as a combination of an LR model and an interval-based GM as the global and local models, respectively [17,18,19].

3.1. Structure of Interval-Based IGM

IGM comprises a global part and a local part. Figure 5 shows the structure of the interval-based IGM. The global part uses the LR model, while the local part uses the interval-based GM. The global part obtains the error by LR, and the local part compensates for this error through the interval-based GM to calculate the final value.

3.2. Global Part: LR Model

LR models a linear correlation between input variables and an output variable. A simple LR is based on one explanatory variable, while multiple LR is based on two or more explanatory variables. LR is modeled using a linear prediction function, where the unknown parameters are estimated from the data. To illustrate, consider data comprising two input variables and one output variable. An input–output dataset is configured in the form $\{x_k, y_k\}$, $k = 1, 2, \ldots, n$, where $x_k$ represents an input vector and $y_k$ an output value. Figure 6 shows the concept of the well-known LR: the dots represent the data and the line represents the simple linear regression equation. The LR output is
$$z_k = r^{T} x_k + r_0$$
Here, $r^{T}$ represents the coefficient vector and $r_0$ the intercept of the LR model. The LR's error is $e_k = y_k - z_k$, which is expressed through linguistic rules. A new type of data, $\{x_k, e_k\}$, is generated by combining the model error with the input data.
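The global part can be sketched as an ordinary least-squares fit; the helper name is illustrative:

```python
import numpy as np

def fit_global_lr(X, y):
    """Fit the global linear model z_k = r^T x_k + r0 by least squares
    and return its coefficients together with the residuals e_k = y_k - z_k."""
    A = np.column_stack([X, np.ones(len(X))])     # append bias column for r0
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    r, r0 = coef[:-1], coef[-1]
    z = X @ r + r0
    e = y - z                                     # errors to be granulated
    return r, r0, e
```

The residuals `e` paired with the inputs form the input-error data $\{x_k, e_k\}$ passed to the local granular part.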

3.3. Local Part: Interval-Based GM

The interval-based GM is designed using the IFCM clustering method described in Section 2.1. The local interval-based GM is modeled using input-error data, which combine the input data with the error obtained from the global part, rather than the original input-output data. When input-error data enter the interval-based GM, intervals are generated in the output space and clusters corresponding to each interval are generated in the input space. Rules are created from these intervals and clusters. The results of the IFCM clustering method in Layer 3 are combined to generate the interval-based GM output, which is then combined with the LR model output to calculate the final output of the interval-based IGM. In this way, the LR's error is compensated and the model's performance is enhanced. The procedure of the interval-based IGM is as follows.
[Step 1] 
Firstly, the LR model is designed from numerical data. The error is obtained by using the LR model, and input-error data are generated.
[Step 2] 
The intervals are created in the error space, by using Cases 1, 2, and 3.
[Step 3] 
IFCM clustering is performed on the input-error data points. The numbers of intervals and clusters are selected by the user.
[Step 4] 
The clusters in each interval are estimated, and the output value is calculated from the interval to which each activated cluster belongs. The GM's output is a fuzzy number in the form of an interval.
[Step 5] 
The interval-based IGM’s output is calculated by combining the outputs of the LR model and interval-based GM.

4. Performance Evaluation Method

A performance evaluation plays an important role in assessing the accuracy and clarity of the proposed model, and various such methods have been developed so far. Common performance evaluation methods include RMSE and MAPE. The RMSE method evaluates performance by subtracting the model's predicted value from the actual output value, squaring the differences, averaging them, and taking the square root. The MAPE method evaluates performance by subtracting the model's predicted value from the actual output value, dividing by the actual value, and averaging the absolute percentages. Thus, both methods evaluate performance using numerical model predictions.
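For reference, the two numerical measures can be written directly from their definitions:

```python
import numpy as np

def rmse(y_true, y_pred):
    """Root mean square error: square root of the mean squared difference."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))

def mape(y_true, y_pred):
    """Mean absolute percentage error: mean of |error / actual|, in percent."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    return float(np.mean(np.abs((y_true - y_pred) / y_true)) * 100.0)
```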
However, it is difficult to evaluate the proposed model using these general performance evaluation methods, because the output of the interval-based IGM is a fuzzy number in interval form, not a numerical value. Therefore, in this study, the model is evaluated using the PI, which is a performance evaluation method suitable for granular models.

PI

In this study, the PI method evaluates the prediction capability of the interval-based IGM. This method evaluates performance using the properties of coverage and specificity. The initial PI method was proposed by Pedrycz, and since then, various PI methods have been proposed by Hu [35], Reyes-Galaviz [36], and Zhu [37]. Figure 7 and Figure 8 show the characteristics of coverage and specificity, respectively.
Coverage indicates whether the actual output falls within the range of the GM output. If the actual output value falls between the upper and lower bounds of the GM's output, a value close to 1 is given; conversely, if it does not fall within the GM output range, a value close to 0 is given. In other words, coverage checks whether the actual output is included within the range of the interval-shaped fuzzy number that is the GM output. The larger the coverage, the better the GM performance.
Specificity indicates the fineness of the GM output, measured by the distance from the upper bound of the GM output to the lower bound. If this distance is short, specificity takes a high value; if the distance is long, it takes a low value. In other words, specificity checks the distance from the upper bound to the lower bound of the GM output. The larger the specificity, the better the GM performance.
Figure 9 shows the relation between coverage and specificity, where the PI values trace a curve. The two values exhibit a tradeoff: specificity decreases when coverage increases, and vice versa. Therefore, it is important to balance the two values without biasing either side.
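A minimal sketch of the PI computation from coverage and specificity follows; the normalization of the interval width by the output range in the specificity term is an illustrative assumption:

```python
import numpy as np

def performance_index(y_true, y_lower, y_upper):
    """PI = coverage * specificity for interval-valued predictions.

    coverage    : fraction of actual outputs inside [y_lower, y_upper]
    specificity : 1 minus the mean interval width relative to the output
                  range (width normalization is an assumption of this sketch)
    """
    y_true = np.asarray(y_true, dtype=float)
    y_lower = np.asarray(y_lower, dtype=float)
    y_upper = np.asarray(y_upper, dtype=float)
    coverage = float(np.mean((y_true >= y_lower) & (y_true <= y_upper)))
    out_range = y_true.max() - y_true.min()
    specificity = float(np.mean(
        np.clip(1.0 - (y_upper - y_lower) / out_range, 0.0, 1.0)))
    return coverage * specificity, coverage, specificity
```

Widening every interval raises coverage but lowers specificity, reproducing the tradeoff shown in Figure 9.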

5. Experiment and Results Analysis

In this experiment, a concrete compressive strength (CCS) dataset was used to compare and analyze the prediction performance of the interval-based IGM (Cases 1, 2, and 3).

5.1. Database

In this experiment, the CCS dataset [44] was used to compare and analyze the prediction performance of the interval-based IGM. The dataset comprises measurements of the compressive strength of high-performance concrete (HPC) and consists of eight input variables and one output variable, the CCS.
The ASN (Airfoil Self-Noise) dataset contains measurements of the noise generated by a NACA 0012 airfoil in a wind tunnel at various speeds and angles of attack. The ASN dataset includes five input variables and one output variable. The input variables are frequency, angle of attack, chord length in meters, free-stream velocity, and suction-side displacement thickness, and the output variable is the scaled sound pressure level in decibels.
The MPG (miles per gallon) dataset contains data for predicting the fuel consumption of a vehicle. The MPG dataset includes seven input variables and one output variable. The input variables are cylinders, displacement, horsepower, weight, acceleration, model year, and model name, and the output variable is the vehicle's fuel consumption. The training and verification data were divided into two equal halves and normalized to values between 0 and 1 to obtain more accurate results.

5.2. Experimental Method and Results

In the interval-based IGM, the predictive performance of Cases 1, 2, and 3, according to the method of segmenting the intervals in the output space, is compared and analyzed through the PI approach. The interval-based IGM's performance was verified through the PI method using the properties of coverage and specificity described in Section 4. In the experiments, the number of intervals in the interval-based IGM was increased from 2 to 10, the number of clusters per interval was increased from 2 to 10, and the weighting exponent was fixed at 2. The experiments were performed for Cases 1, 2, and 3.
Table 1 and Table 2 show the experimental results for Case 1, which equally divides the intervals without any overlapping. For Case 1, the best result, a PI of approximately 0.39, was obtained with 7 intervals and 10 clusters in each interval. Figure 10 shows the output value for Case 1 and the actual output value, and Figure 11 shows the PI values for Case 1 in the form of a mesh. Table 3 and Table 4 show the experimental results for Case 2, which flexibly divides the intervals through the probability distribution without overlapping. Here, the best result, a PI of approximately 0.49, was obtained with 7 intervals and 10 clusters in each interval. Figure 12 shows the output value for Case 2 and the actual output value, and Figure 13 shows the PI values for Case 2 in the form of a mesh.
Table 5 and Table 6 show the experimental results for Case 3, which evenly divides the intervals with overlapping. Here, the best result, a PI of approximately 0.44, was obtained with 5 intervals and 10 clusters in each interval. Figure 14 visualizes the output value for Case 3 and the actual output value, and Figure 15 shows the PI values for Case 3 in the form of a mesh.
Table 7 summarizes the best prediction results obtained for each interval-based IGM. Cases 1 and 2 showed the best performance when the numbers of intervals and clusters were 7 and 10, respectively, generating 70 rules. Case 3 showed the best performance when the numbers of intervals and clusters were 5 and 10, respectively. According to the experimental results, performance was superior when the intervals overlapped rather than when they did not, and when the division was flexible rather than strictly equal.
Next, we examine the overfitting problem in the construction of the interval-based IGM. For the CCS dataset, the numbers of intervals and clusters per interval were each increased from 2 to 50. As a result, it was confirmed that overfitting occurs when the number of intervals is large. Figure 16 and Figure 17 show the variation of the performance index as the numbers of intervals and clusters per interval increase, for the training and testing datasets, respectively. As shown in Figure 17, the performance index decreases when the number of intervals exceeds 7, whereas it increases only slightly with the number of clusters per interval. We will study the optimal allocation of clusters in future research.

6. Conclusions

In this study, the predictive performance of the interval-based IGM was compared and analyzed with the PI method according to the method of partitioning the intervals in the output space. According to the experimental results, Case 1 performed best with seven intervals and 10 clusters generated in each interval, with a PI value of approximately 0.39. Case 2 performed best with seven intervals and 10 clusters generated in each interval, with a PI value of approximately 0.49. Finally, Case 3 showed the best performance, with a PI value of approximately 0.44, when five intervals and 10 clusters per interval were generated. Analyzing the results, Cases 1 and 2 achieved their best predictions when 70 rules were generated, and Case 3 when 50 rules were generated. It was confirmed that intervals and clusters suited to the data should be generated, rather than simply a large number of intervals and clusters. In the future, segmentation methods and performance evaluation methods other than Cases 1, 2, and 3 will be studied.

Author Contributions

C.-U.Y. suggested the basic idea of the work and performed the experiments. M.-W.L. presented the visualization and writing of the experimental results, K.-C.K. designed the experimental method. All authors wrote and critically revised the paper. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the “Human Resources Program in Energy Technology” of the Korea Institute of Energy Technology Evaluation and Planning (KETEP) and was granted financial resources from the Ministry of Trade, Industry & Energy, Korea. (No. 20194030202410). This research was supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF) and funded by the Ministry of Education (No.2018R1D1A1B07044907).

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Mendel, J.M.; Hagras, H.; Bustince, H.; Herrera, F. Comments on “Interval Type-2 Fuzzy Sets are Generalization of Interval-Valued Fuzzy Sets: Towards a Wide View on Their Relationship”. IEEE Trans. Fuzzy Syst. 2015, 24, 249–250. [Google Scholar] [CrossRef] [Green Version]
  2. Wang, L. A New Look at Type-2 Fuzzy Sets and Type-2 Fuzzy Logic Systems. IEEE Trans. Fuzzy Syst. 2016, 25, 693–706. [Google Scholar] [CrossRef]
  3. Hidalgo, J.I.; Massanet, S.; Mir, A.; Ruiz-Aguilera, D. On the Choice of the Pair Conjunction–Implication into the Fuzzy Morphological Edge Detector. IEEE Trans. Fuzzy Syst. 2014, 23, 872–884. [Google Scholar] [CrossRef]
  4. Lee, C.-S.; Wang, M.-H.; Lan, S.-T. Adaptive Personalized Diet Linguistic Recommendation Mechanism Based on Type-2 Fuzzy Sets and Genetic Fuzzy Markup Language. IEEE Trans. Fuzzy Syst. 2015, 23, 1777–1802. [Google Scholar] [CrossRef]
  5. Ruiz-Garcia, G.; Hagras, H.; Pomares, H.; Ruiz, I.R.; Rojas, I. Toward a Fuzzy Logic System Based on General Forms of Interval Type-2 Fuzzy Sets. IEEE Trans. Fuzzy Syst. 2019, 27, 2381–2395. [Google Scholar] [CrossRef]
  6. Jang, J.-S.R. ANFIS: Adaptive-network-based fuzzy inference system. IEEE Trans. Syst. Man Cybern. 1993, 23, 665–685. [Google Scholar] [CrossRef]
  7. Juang, C.-F.; Chen, C.-Y. An Interval Type-2 Neural Fuzzy Chip with on-chip Incremental Learning Ability for Time-Varying Data Sequence Prediction and System Control. IEEE Trans. Neural Netw. Learn. Syst. 2013, 25, 216–228. [Google Scholar] [CrossRef]
  8. Baranyi, P. The Generalized TP Model Transformation for T–S Fuzzy Model Manipulation and Generalized Stability Verification. IEEE Trans. Fuzzy Syst. 2013, 22, 934–948. [Google Scholar] [CrossRef] [Green Version]
  9. Deng, Z.; Jiang, Y.; Choi, K.-S.; Chung, F.-L.; Wang, S. Knowledge-Leverage-Based TSK Fuzzy System Modeling. IEEE Trans. Neural Netw. Learn. Syst. 2013, 24, 1200–1212. [Google Scholar] [CrossRef]
  10. Kaburlasos, V.G.; Papadakis, S.E. A granular extension of the fuzzy-ARTMAP (FAM) neural classifier based on fuzzy lattice reasoning (FLR). Neurocomputing 2009, 72, 2067–2078. [Google Scholar] [CrossRef]
  11. Kaburlasos, V.G. Granular fuzzy inference system (FIS) design lattice computing. In International Conference on Hybrid Artificial Intelligence Systems; Springer: Berlin/Heidelberg, Germany, 2010; Volume 6077, pp. 410–417. [Google Scholar]
  12. Kaburlasos, V.G.; Papakostas, G.A.; Pachidis, T.; Athinelis, A. Intervals numbers (Ins) interpolation/extrapolation. In Proceedings of the IEEE International Conference on Fuzzy Systems, Hyderabad, India, 7–10 July 2013; pp. 1–8. [Google Scholar]
  13. Kasabov, N.; Song, Q. DENFIS: Dynamic evolving neural-fuzzy inference system and its application for time-series prediction. IEEE Trans. Fuzzy Syst. 2002, 10, 144–154. [Google Scholar] [CrossRef] [Green Version]
  14. Pedrycz, W.; Vasilakos, A. Linguistic models and linguistic modeling. IEEE Trans. Syst. Man Cybern. 1999, 29, 745–757. [Google Scholar] [CrossRef] [PubMed]
  15. Pedrycz, W.; Gomide, F. Fuzzy Systems Engineering: Toward Human-Centric Computing; John Wiley & Sons: New York, NY, USA, 2007. [Google Scholar]
  16. Pedrycz, W.; Kwak, K.-C. The Development of Incremental Models. IEEE Trans. Fuzzy Syst. 2007, 15, 507–518. [Google Scholar] [CrossRef]
  17. Byeon, Y.-H.; Kwak, K.-C. A Design for Genetically Oriented Rules-Based Incremental Granular Models and Its Application. Symmetry 2017, 9, 324. [Google Scholar] [CrossRef] [Green Version]
  18. Yeom, C.-U.; Kwak, K.-C. Incremental Granular Model Improvement Using Particle Swarm Optimization. Symmetry 2019, 11, 390. [Google Scholar] [CrossRef] [Green Version]
  19. Leite, D.; Costa, P.; Gomide, F. Evolving granular neural networks from fuzzy data streams. Neural Netw. 2013, 38, 1–16. [Google Scholar] [CrossRef] [PubMed]
  20. Loia, V.; Parente, M.; Pedrycz, W.; Tomasiello, S. A Granular Functional Network with delay: Some dynamical properties and application to the sign prediction in social networks. Neurocomputing 2018, 321, 61–71. [Google Scholar] [CrossRef]
  21. Colace, F.; Loia, V.; Tomasiello, S. Revising recurrent neural networks from a granular perspective. Appl. Soft Comput. 2019, 82, 105535. [Google Scholar] [CrossRef]
  22. Huang, J.; Wan, Q.; Wang, P. Minimum mean square error estimator for mobile location using time-difference-of-arrival measurements. IET Radar Sonar Navig. 2011, 5, 137–143. [Google Scholar] [CrossRef]
  23. Yu, Y.; Liu, Y.; Lu, W.-J.; Zhu, H.-B. Measurement and empirical modelling of root mean square delay spread in indoor femtocells scenarios. IET Commun. 2017, 11, 2125–2131. [Google Scholar] [CrossRef]
  24. Hwang, T.; Kwon, Y. Root Mean Square Decomposition for EST-Based Spatial Multiplexing Systems. IEEE Trans. Signal Process. 2011, 60, 1295–1306. [Google Scholar] [CrossRef]
  25. Tolosana, R.; Vera-Rodriguez, R.; Fierrez, J.; Ortega-Garcia, J. Exploring Recurrent Neural Networks for On-Line Handwritten Signature Biometrics. IEEE Access 2018, 6, 5128–5138. [Google Scholar] [CrossRef]
Figure 1. Concept of interval-based fuzzy clustering.
Figure 2. Conceptual diagram of interval division method: (a) Case 1; (b) Case 2; (c) Case 3.
Figure 3. Architecture of interval-based granular model (GM).
Figure 4. The model output and actual output of interval-based GM.
Figure 5. Architecture of interval-based incremental granular model (IGM) based on interval-based fuzzy c-means (IFCM) clustering.
Figure 6. Concept of linear regression (LR) model.
Figure 7. Concept of coverage.
Figure 8. Concept of specificity.
Figure 9. Relation between coverage and specificity.
Figure 10. Prediction results for interval-based IGM (Case 1).
Figure 11. Results of performance index (PI) method for interval-based IGM (Case 1).
Figure 12. Prediction results for interval-based IGM (Case 2).
Figure 13. Results of PI method for interval-based IGM (Case 2).
Figure 14. Prediction results for interval-based IGM (Case 3).
Figure 15. Results of PI method for interval-based IGM (Case 3).
Figure 16. The variation of performance index (training data).
Figure 17. The variation of performance index (checking data).
Table 1. Concrete compressive strength (CCS) prediction results for interval-based IGM (Case 1) using training data (I: number of intervals; C: number of clusters).

| I \ C | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 |
|---|---|---|---|---|---|---|---|---|---|
| 2 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| 3 | 0.270 | 0.278 | 0.273 | 0.272 | 0.271 | 0.268 | 0.274 | 0.277 | 0.274 |
| 4 | 0.328 | 0.334 | 0.333 | 0.337 | 0.328 | 0.338 | 0.338 | 0.330 | 0.338 |
| 5 | 0.296 | 0.310 | 0.318 | 0.321 | 0.339 | 0.337 | 0.349 | 0.357 | 0.362 |
| 6 | 0.274 | 0.302 | 0.312 | 0.321 | 0.342 | 0.313 | 0.314 | 0.313 | 0.327 |
| 7 | 0.336 | 0.363 | 0.359 | 0.362 | 0.351 | 0.354 | 0.371 | 0.380 | 0.392 |
| 8 | 0.312 | 0.326 | 0.331 | 0.311 | 0.323 | 0.346 | 0.345 | 0.354 | 0.371 |
| 9 | 0.282 | 0.280 | 0.287 | 0.311 | 0.290 | 0.311 | 0.317 | 0.324 | 0.325 |
| 10 | 0.249 | 0.277 | 0.276 | 0.279 | 0.280 | 0.284 | 0.302 | 0.312 | 0.338 |
Table 2. CCS prediction results for interval-based IGM (Case 1) using testing data.

| I \ C | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 |
|---|---|---|---|---|---|---|---|---|---|
| 2 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| 3 | 0.295 | 0.303 | 0.306 | 0.298 | 0.304 | 0.300 | 0.301 | 0.300 | 0.302 |
| 4 | 0.312 | 0.337 | 0.343 | 0.347 | 0.338 | 0.326 | 0.323 | 0.329 | 0.332 |
| 5 | 0.297 | 0.317 | 0.308 | 0.303 | 0.315 | 0.319 | 0.315 | 0.307 | 0.314 |
| 6 | 0.290 | 0.295 | 0.306 | 0.311 | 0.300 | 0.295 | 0.301 | 0.298 | 0.298 |
| 7 | 0.357 | 0.369 | 0.369 | 0.363 | 0.363 | 0.376 | 0.376 | 0.386 | 0.392 |
| 8 | 0.329 | 0.322 | 0.338 | 0.326 | 0.319 | 0.325 | 0.327 | 0.324 | 0.339 |
| 9 | 0.305 | 0.298 | 0.300 | 0.307 | 0.291 | 0.304 | 0.298 | 0.300 | 0.306 |
| 10 | 0.276 | 0.292 | 0.289 | 0.296 | 0.302 | 0.286 | 0.302 | 0.294 | 0.282 |
Table 3. CCS prediction results for interval-based IGM (Case 2) using training data.

| I \ C | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 |
|---|---|---|---|---|---|---|---|---|---|
| 2 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| 3 | 0.262 | 0.287 | 0.285 | 0.291 | 0.285 | 0.284 | 0.284 | 0.283 | 0.284 |
| 4 | 0.371 | 0.373 | 0.374 | 0.371 | 0.379 | 0.367 | 0.377 | 0.393 | 0.388 |
| 5 | 0.359 | 0.377 | 0.377 | 0.376 | 0.384 | 0.407 | 0.411 | 0.413 | 0.404 |
| 6 | 0.326 | 0.354 | 0.344 | 0.356 | 0.367 | 0.373 | 0.371 | 0.364 | 0.371 |
| 7 | 0.423 | 0.435 | 0.450 | 0.446 | 0.456 | 0.460 | 0.462 | 0.468 | 0.480 |
| 8 | 0.403 | 0.410 | 0.426 | 0.429 | 0.435 | 0.443 | 0.448 | 0.450 | 0.458 |
| 9 | 0.383 | 0.384 | 0.394 | 0.410 | 0.413 | 0.419 | 0.433 | 0.439 | 0.446 |
| 10 | 0.364 | 0.367 | 0.384 | 0.392 | 0.392 | 0.397 | 0.409 | 0.415 | 0.431 |
Table 4. CCS prediction results for interval-based IGM (Case 2) using testing data.

| I \ C | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 |
|---|---|---|---|---|---|---|---|---|---|
| 2 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| 3 | 0.315 | 0.348 | 0.340 | 0.348 | 0.346 | 0.342 | 0.346 | 0.344 | 0.343 |
| 4 | 0.399 | 0.398 | 0.398 | 0.402 | 0.407 | 0.401 | 0.396 | 0.409 | 0.403 |
| 5 | 0.373 | 0.387 | 0.378 | 0.377 | 0.372 | 0.383 | 0.386 | 0.384 | 0.387 |
| 6 | 0.336 | 0.357 | 0.361 | 0.357 | 0.351 | 0.352 | 0.350 | 0.350 | 0.348 |
| 7 | 0.458 | 0.469 | 0.473 | 0.483 | 0.482 | 0.476 | 0.476 | 0.475 | 0.491 |
| 8 | 0.444 | 0.449 | 0.451 | 0.461 | 0.462 | 0.465 | 0.452 | 0.462 | 0.465 |
| 9 | 0.421 | 0.427 | 0.425 | 0.430 | 0.446 | 0.434 | 0.436 | 0.451 | 0.455 |
| 10 | 0.405 | 0.398 | 0.407 | 0.415 | 0.416 | 0.414 | 0.422 | 0.410 | 0.418 |
Table 5. CCS prediction results for interval-based IGM (Case 3) using training data.

| I \ C | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 |
|---|---|---|---|---|---|---|---|---|---|
| 2 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| 3 | 0.250 | 0.260 | 0.258 | 0.261 | 0.257 | 0.261 | 0.260 | 0.263 | 0.260 |
| 4 | 0.383 | 0.398 | 0.394 | 0.400 | 0.406 | 0.407 | 0.408 | 0.407 | 0.413 |
| 5 | 0.386 | 0.396 | 0.396 | 0.406 | 0.417 | 0.432 | 0.431 | 0.432 | 0.436 |
| 6 | 0.352 | 0.385 | 0.390 | 0.397 | 0.391 | 0.399 | 0.408 | 0.423 | 0.419 |
| 7 | 0.339 | 0.363 | 0.357 | 0.361 | 0.342 | 0.361 | 0.370 | 0.378 | 0.387 |
| 8 | 0.310 | 0.321 | 0.327 | 0.321 | 0.321 | 0.332 | 0.346 | 0.366 | 0.369 |
| 9 | 0.281 | 0.280 | 0.288 | 0.310 | 0.298 | 0.309 | 0.310 | 0.314 | 0.325 |
| 10 | 0.250 | 0.277 | 0.275 | 0.278 | 0.288 | 0.282 | 0.296 | 0.313 | 0.323 |
Table 6. CCS prediction results for interval-based IGM (Case 3) using testing data.

| I \ C | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 |
|---|---|---|---|---|---|---|---|---|---|
| 2 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| 3 | 0.310 | 0.313 | 0.317 | 0.319 | 0.319 | 0.320 | 0.321 | 0.323 | 0.324 |
| 4 | 0.414 | 0.417 | 0.424 | 0.417 | 0.426 | 0.428 | 0.435 | 0.432 | 0.441 |
| 5 | 0.414 | 0.414 | 0.417 | 0.418 | 0.427 | 0.435 | 0.431 | 0.439 | 0.443 |
| 6 | 0.383 | 0.400 | 0.399 | 0.400 | 0.396 | 0.405 | 0.419 | 0.432 | 0.416 |
| 7 | 0.356 | 0.369 | 0.367 | 0.363 | 0.370 | 0.370 | 0.373 | 0.383 | 0.390 |
| 8 | 0.330 | 0.335 | 0.338 | 0.336 | 0.327 | 0.336 | 0.330 | 0.359 | 0.335 |
| 9 | 0.304 | 0.298 | 0.300 | 0.303 | 0.295 | 0.306 | 0.301 | 0.300 | 0.309 |
| 10 | 0.276 | 0.314 | 0.296 | 0.301 | 0.301 | 0.289 | 0.291 | 0.281 | 0.294 |
Table 7. CCS prediction result for interval-based IGM (Cases 1, 2, and 3).

| DB | IGMs | Num. of Intervals | Num. of Clusters | Num. of Rules | Training PI | Testing PI |
|---|---|---|---|---|---|---|
| CCS DB | Case 1 | 7 | 10 | 70 | 0.391 | 0.392 |
| | Case 2 | 7 | 10 | 70 | 0.480 | 0.491 |
| | Case 3 | 5 | 10 | 50 | 0.436 | 0.443 |
| ASN DB | Case 1 | 7 | 10 | 70 | 0.448 | 0.427 |
| | Case 2 | 7 | 10 | 70 | 0.534 | 0.522 |
| | Case 3 | 6 | 7 | 42 | 0.457 | 0.441 |
| MPG DB | Case 1 | 7 | 9 | 63 | 0.399 | 0.362 |
| | Case 2 | 8 | 8 | 64 | 0.577 | 0.464 |
| | Case 3 | 5 | 9 | 45 | 0.448 | 0.388 |
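As a quick consistency check on the CCS rows of Table 7, the rule count in every row equals the product of the interval and cluster counts, and Case 2 attains the highest testing PI. A minimal sketch, assuming a larger PI (coverage times specificity) indicates better performance; the tuples below are transcribed from Table 7:

```python
# (intervals, clusters, rules, training_pi, testing_pi) for the CCS DB rows of Table 7
ccs = {
    "Case 1": (7, 10, 70, 0.391, 0.392),
    "Case 2": (7, 10, 70, 0.480, 0.491),
    "Case 3": (5, 10, 50, 0.436, 0.443),
}

# In every row of Table 7 the rule count equals intervals * clusters.
for n_int, n_clu, n_rules, _, _ in ccs.values():
    assert n_rules == n_int * n_clu

# Select the configuration with the largest testing PI.
best = max(ccs, key=lambda name: ccs[name][4])
```

The same interval-times-cluster relation holds for the ASN DB (6 x 7 = 42) and MPG DB (8 x 8 = 64) rows.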
Yeom, C.-U.; Lee, M.-W.; Kwak, K.-C. Performance Index of Incremental Granular Model with Information Granule of Linguistic Intervals and Its Application. Appl. Sci. 2020, 10, 5929. https://doi.org/10.3390/app10175929