Article

A Classification Model Based on Interval Rule Inference Network with Interpretability

1. School of Information Science and Technology, Shijiazhuang Tiedao University, Shijiazhuang 050043, China
2. Shijiazhuang Key Laboratory of Artificial Intelligence, Shijiazhuang Tiedao University, Shijiazhuang 050043, China
3. School of Computer, Guangdong University of Petrochemical Technology, Maoming 525000, China
* Authors to whom correspondence should be addressed.
Appl. Sci. 2025, 15(2), 649; https://doi.org/10.3390/app15020649
Submission received: 5 December 2024 / Revised: 3 January 2025 / Accepted: 7 January 2025 / Published: 10 January 2025

Abstract: Interpretability requirements, complex uncertain data, and limited training data characterize classification in some real industrial applications. The interval belief rule base (IBRB) can deal with various types of uncertainty and provides high interpretability. However, the IBRB contains a large number of parameters, which makes it difficult for experts to set them accurately by hand, limiting its application scope. To address this issue, this paper proposes an interval rule inference network (IRIN) with interpretability for classification, which automatically generates an IBRB by integrating the ideas of the IBRB and the neural network. Firstly, hybrid data of different types are transformed into interval belief distributions for automatic generation processing. Secondly, the interval evidential reasoning (IER) method is utilized as the inference engine to transfer information, ensuring the interpretability of the process. Finally, a reasonable IBRB is generated automatically by updating the parameters with the learning engine of the neural network. Moreover, the differentiability of the IER method in the IRIN is proved as a theoretical foundation of the IRIN, and an interpretability analysis of the IRIN's structures is discussed. Experimental results demonstrate that the proposed method possesses high interpretability, enhancing the reliability of classification while maintaining accuracy. Its application in an actual engineering case illustrates that it is particularly suitable for engineering problems where the explanation of results is a critical requirement.

1. Introduction

Classification problems have long been a key component of machine learning in many fields. In view of the nonlinearity, complexity, and uncertainty of the real world, a variety of classification methods have been proposed that perform well in solving classification problems. However, many classification methods, especially neural networks, achieve superior performance at the expense of low interpretability. In recent years, there has been a surge of interest in interpretable machine learning techniques [1,2,3,4].
To address the opacity issue in the decision making of many classification methods, researchers have begun to explore integrating expert knowledge into classifiers [5]. Rule-based classification systems, which integrate expert knowledge and provide high interpretability, have become a prominent artificial intelligence technology [6]. However, when designing and implementing a rule-based classification system, due to the diversity of input data and the complex uncertainty of the real world, the system will inevitably face challenges such as fuzziness, incompleteness, and uncertainty [7,8].
To tackle these challenges, Yang et al. [9] proposed the belief-rule-based inference methodology using the evidential reasoning approach (RIMER). This method establishes rules using belief distributions to effectively handle various types of uncertain knowledge. Subsequently, Liu et al. [10] further expanded the BRB and proposed the extended belief rule base (EBRB). The EBRB embeds belief degrees in the antecedent attributes as well as in the consequents of each rule to handle uncertainty more efficiently. However, neither the EBRB nor the BRB considers interval uncertainty, so they cannot effectively handle interval uncertainty caused by interval values, interval belief degrees, and interval reference values.
To solve this problem, some researchers have conducted various studies in recent years. Aiming at the interval uncertainty caused by interval reference values, AbuDahab et al. [11] introduced the generalized belief rule base, where grade intervals are used to express consequents, effectively solving the interval uncertainty problem caused by the reference values of the antecedent attributes. Aiming at the interval uncertainty caused by interval values and interval belief degrees, Wang et al. [12] extended the ER method to the interval ER method, which effectively deals with uncertainty, including interval belief degrees and interval values. Gao et al. [7] introduced an interval belief rule-based system (IBRBS) that considers interval data, interval belief degrees, and interval reference values, addressing interval uncertainty as a whole. Zhu et al. [13] proposed an interval-valued belief rule-based inference method using the evidential reasoning approach (IRIMER). The IRIMER embeds interval-valued belief degrees in both the antecedents and consequents of each rule, allowing it to represent uncertain information or knowledge more flexibly and reasonably than previous belief rule bases. However, although these methods can effectively deal with interval uncertainty, the systems are all configured manually by experts. As the data scale or dimensionality increases, the number of parameters in an IBRB system grows rapidly, and this complexity creates the risk of subjective errors. Therefore, how to determine these parameters has become a key issue in IBRB research.
To date, research on the automatic generation of IBRB is still insufficient. In other rule-based systems, there are some methods for automatic generation, which provide research ideas for building methods for automatically generating IBRBs. Liu et al. [14] proposed a student performance prediction model based on a belief rule base with automatic construction, where the accuracy of the model is improved by optimizing the parameters using the P-CMA-ES algorithm. Zhang et al. [15] proposed a new parameter optimization approach based on an improved gray wolf optimizer with interpretability reinforcement, which contributes to the interdisciplinary research on BRB systems and metaheuristic algorithms. Fu et al. [16] proposed a new EBRB system based on the K-nearest neighbor graph index. In the optimization model, all samples are utilized for gradient updates in each training iteration to update the parameters. Huang et al. [17] used a gradient descent algorithm to adjust parameters in the BRB and proposed a rule inference network model to automatically generate a BRB.
To automatically generate an IBRB and break through the limitations of its application, this paper proposes an interpretable interval rule inference network by combining interval belief rule bases with neural networks. This approach effectively handles interval uncertainty, enhances system reliability, and leverages the learning mechanism of neural networks to reduce dependency on experts setting parameters, enabling automatic updates of the interval belief rule base.
The main contributions of this paper include three aspects:
(1)
An interval rule inference network (IRIN) with interpretability is proposed for classification. This model automatically generates an interval belief rule base, avoiding the problem that experts cannot accurately set IBRB parameters manually. It also improves classification performance and extends the application areas compared with the RIN.
(2)
The feedforward process of IRIN is the inference process of the IBRB, and an IBRB is obtained to explain the algorithm's results. This ensures the interpretability of the model, advancing research in the field of interpretable machine learning and meeting the interpretability, intervention ability, and reliability required in some real-world industrial applications.
(3)
Experiments on classification tasks are carried out on multiple public datasets. The experimental results show that IRIN has high interpretability and performs better than other BRB-based methods. This model is used in an engineering application, providing strong support for the practical application of IRIN.

2. Related Work

A rule-based system is an interpretable artificial intelligence technology. Rules can manipulate and incorporate new and existing knowledge, and the behavior of the system can be adjusted through rules. At present, rule-based systems are mainly divided into three types: the belief rule base (BRB) [9], the extended belief rule base (EBRB) [10], and the interval belief rule base (IBRB) [13].
The BRB, as an interpretable language model, can process qualitative and quantitative information under uncertainty and has developed rapidly in recent years. Gao et al. [18] proposed a fast and accurate BRB generation and reduction method, which groups and combines similar rules in BRBs and removes redundant belief rules to alleviate the combinatorial explosion problem of BRBs. Yang et al. [19] proposed a hierarchical BRB using a multi-layer tree structure (MTS-BRB), which effectively overcomes the combinatorial explosion problem of BRBs. Jiao et al. [20] proposed learning a compact and accurate belief rule base with the help of decision tree building techniques that jointly implement feature selection and model construction.
On the basis of the BRB, Liu et al. [10] further proposed an extended BRB, namely, EBRB. EBRB not only inherits the advantages of the BRB inference model, but also has an effective and concise modeling method, so it has attracted the attention of many researchers. Yang et al. [5] introduced data envelopment analysis to evaluate the efficiency of each rule in EBRB, effectively reducing the number of EBRB rules. Fu et al. [21] proposed a rule activation method based on nearest neighbor propagation, which significantly improved the accuracy of EBRB inference and efficiency. Ma et al. [22] proposed a rule generation and activation method for the extended belief rule system based on an improved decision tree, and constructed an effective classification model. This model effectively solved the problems of low EBRB inference efficiency and inconsistent rule activation.
However, since neither EBRB nor BRB considers interval uncertainty, they are limited in handling this type of uncertainty. To solve this problem, some researchers proposed a new rule-based system called IBRB. IBRB effectively handles various types of interval uncertainty by embedding interval-valued belief degrees in both the antecedents and consequents of each rule. Currently, IBRB is still in the improvement and development stage, and most of the research focuses on how to deal with the interval uncertainty caused by interval data, interval reference values, and interval belief degree [7,11,12,13].
It can be found from the above-mentioned related works on rule-based systems that the IBRB is a more flexible and rational rule-based system for expressing uncertain information and knowledge. However, it is a challenge for experts to set multiple parameters manually in an IBRB, especially when the number of rules is large. Hence, the present study focuses on a method that can automatically generate IBRBs, freeing experts from the highly complex task of setting IBRB parameters manually.

3. Preliminaries

3.1. Interval Belief Rule Base

The IBRB is established with interval belief distributions embedded in both the antecedent and consequent terms of each rule, and is capable of capturing interval uncertainty and incompleteness in an integrated way. The IBRB consists of multiple interval belief rules, and the kth rule is profiled as follows:

$$R_k:\ \text{IF}\ \left(U_1\ \text{is}\ \{(A_{n,1}^k,[\alpha_{n,1}^{k-},\alpha_{n,1}^{k+}])\},\ n=1,\dots,j_1\right)\wedge\cdots\wedge\left(U_T\ \text{is}\ \{(A_{m,T}^k,[\alpha_{m,T}^{k-},\alpha_{m,T}^{k+}])\},\ m=1,\dots,j_T\right),$$
$$\text{THEN}\ (D_1,[\beta_{1,k}^-,\beta_{1,k}^+]),\dots,(D_{J_D},[\beta_{J_D,k}^-,\beta_{J_D,k}^+]),\tag{1}$$

with a rule weight $\theta_k\ (k=1,\dots,L)$ and antecedent attribute weights $\delta_i\ (i=1,\dots,T)$,

where $(A_{j,i}^k,[\alpha_{j,i}^{k-},\alpha_{j,i}^{k+}])\ (i=1,\dots,T;\ j=1,\dots,j_i)$ is the interval-valued belief distribution of the ith antecedent; $[\alpha_{j,i}^{k-},\alpha_{j,i}^{k+}]$ is the interval-valued belief degree to which the ith attribute $U_i$ is evaluated to be the referential value $A_{j,i}^k$, with $\alpha_{j,i}^k\in[\alpha_{j,i}^{k-},\alpha_{j,i}^{k+}]$ such that $\sum_{j=1}^{j_i}\alpha_{j,i}^k\le 1$; and $[\beta_{j,k}^-,\beta_{j,k}^+]$ is the interval-valued belief degree to which $D_j$ is believed to be the consequent of the kth rule.
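To make this structure concrete, the following minimal Python sketch (illustrative, not from the paper) shows one way an interval belief rule could be held in code, with every belief degree stored as a [lower, upper] pair:

```python
from dataclasses import dataclass
from typing import List, Tuple

Interval = Tuple[float, float]  # [lower belief, upper belief]

@dataclass
class IntervalBeliefRule:
    # antecedent[i][j]: interval belief that attribute U_i matches reference value A_{j,i}
    antecedent: List[List[Interval]]
    # consequent[j]: interval belief [beta-, beta+] assigned to class D_j
    consequent: List[Interval]
    rule_weight: float              # theta_k
    attribute_weights: List[float]  # delta_i, one per antecedent attribute
```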

3.2. Rule Inference Method

The inference process of IBRB can be divided into three steps: input transformation, activation weight calculation, and the rule inference method based on the IER algorithm.
Assume that an input for the interval belief rule-based system is $(x_1,x_2,\dots,x_T)$, where $x_i\ (i=1,\dots,T)$ is the input for the ith antecedent attribute $U_i$, and $T$ is the total number of antecedent attributes involved in all the rules of the IBRB.
Before rule inference, the input can be transformed into a belief structure using a utility-based information transformation technique as follows:

$$S(x_i)=\{(A_{i,j},\alpha_{i,j}),\ j=1,\dots,j_i\}\tag{2}$$

where $\alpha_{i,j}\in[\alpha_{i,j}^-,\alpha_{i,j}^+]$. For interval-valued input samples, there are usually two different cases.

When $x_i\in[x_i^-,x_i^+]$ lies between two adjacent reference values (with utilities $a_{i,j}$ and $a_{i,j+1}$):

$$\alpha_{i,j}^-=\frac{a_{i,j+1}-x_i^+}{a_{i,j+1}-a_{i,j}},\quad \alpha_{i,j}^+=\frac{a_{i,j+1}-x_i^-}{a_{i,j+1}-a_{i,j}},\quad \alpha_{i,j+1}^-=\frac{x_i^--a_{i,j}}{a_{i,j+1}-a_{i,j}},\quad \alpha_{i,j+1}^+=\frac{x_i^+-a_{i,j}}{a_{i,j+1}-a_{i,j}}\tag{3}$$

When $x_i\in[x_i^-,x_i^+]$ includes one or more reference values:

$$\alpha_{i,j-1}^-=0,\quad \alpha_{i,j-1}^+=I_{j-1,j}\cdot\frac{x_i^--a_{i,j-1}}{a_{i,j}-a_{i,j-1}},\qquad \alpha_{i,j}^-=0,\quad \alpha_{i,j}^+=I_{j-1,j}+I_{j,j+1},$$
$$\alpha_{i,j+1}^-=0,\quad \alpha_{i,j+1}^+=I_{j,j+1}+I_{j+1,j+2},\qquad \alpha_{i,j+2}^-=0,\quad \alpha_{i,j+2}^+=I_{j+1,j+2}\cdot\frac{x_i^+-a_{i,j+1}}{a_{i,j+2}-a_{i,j+1}}\tag{4}$$

where $a_{i,j}$ denotes the utility of the reference value $A_{i,j}$, and $I_{j-1,j}=1$ if $x_i^-$ lies between $a_{i,j-1}$ and $a_{i,j}$; otherwise $I_{j-1,j}=0$.
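As a worked illustration of the first case (Equation (3)), the hedged Python sketch below converts an interval input lying between two adjacent reference values into interval belief degrees; the function name and example values are assumptions for illustration:

```python
def interval_belief_between(x_lo, x_hi, a_j, a_j1):
    """Sketch of Eq. (3): the interval input [x_lo, x_hi] lies between the
    adjacent reference values a_j < a_j1. Returns the interval belief
    degrees assigned to A_j and A_{j+1}."""
    span = a_j1 - a_j
    alpha_j = ((a_j1 - x_hi) / span, (a_j1 - x_lo) / span)   # belief to A_j
    alpha_j1 = ((x_lo - a_j) / span, (x_hi - a_j) / span)    # belief to A_{j+1}
    return alpha_j, alpha_j1

# Example: x in [0.6, 0.7] between M = 0.5 and H = 1.0 gives
# belief to M = [0.6, 0.8] and belief to H = [0.2, 0.4].
print(interval_belief_between(0.6, 0.7, 0.5, 1.0))
```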
Once the input data are converted, the similarity between the ith attribute $x_i$ of the input and $U_i$ of the kth rule is measured, which can be calculated by

$$s_i^k=1-\sqrt{\frac{1}{j_i}\sum_{j=1}^{j_i}\frac{(\alpha_{i,j}^{k-}-\alpha_{i,j}^-)^2+(\alpha_{i,j}^{k+}-\alpha_{i,j}^+)^2}{2}}\tag{5}$$

Next, the activation weight for the kth belief rule can be calculated as follows:

$$w_k=\frac{\theta_k\times\prod_{i=1}^{T_k}(s_i^k)^{\bar{\delta}_i}}{\sum_{l=1}^{L}\left(\theta_l\times\prod_{i=1}^{T_l}(s_i^l)^{\bar{\delta}_i}\right)}\tag{6}$$

with

$$\bar{\delta}_i=\frac{\delta_i}{\max_{i=1,\dots,T}\{\delta_i\}}\tag{7}$$
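Equations (5)-(7) map directly onto array operations. A hedged NumPy sketch, with the assumed array shapes stated in the comments:

```python
import numpy as np

def rule_similarity(alpha_rule, alpha_input):
    """Eq. (5): similarity between two interval belief distributions, each an
    array of shape (j_i, 2) whose columns are the lower and upper beliefs."""
    d = ((alpha_rule[:, 0] - alpha_input[:, 0]) ** 2 +
         (alpha_rule[:, 1] - alpha_input[:, 1]) ** 2) / 2.0
    return 1.0 - np.sqrt(d.mean())

def activation_weights(similarities, theta, delta):
    """Eqs. (6)-(7): similarities has shape (L, T) with s[k, i] = s_i^k,
    theta has shape (L,), delta has shape (T,)."""
    delta_bar = delta / delta.max()                       # Eq. (7)
    scores = theta * np.prod(similarities ** delta_bar, axis=1)
    return scores / scores.sum()                          # Eq. (6)
```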
After calculating the activation weights for each rule, the activated rules are aggregated based on the IER approach to generate the combined belief degree. The calculation is as follows:

$$m_{j,k}=m_k(D_j)\in[m_{j,k}^-,m_{j,k}^+]=[w_k\times\beta_{j,k}^-,\ w_k\times\beta_{j,k}^+]\tag{8}$$

$$\tilde{m}_{D,k}=\tilde{m}_k(D)\in[\tilde{m}_{D,k}^-,\tilde{m}_{D,k}^+]=\left[w_k\times\left(1-\sum_{j=1}^{J_D}\beta_{j,k}^+\right),\ w_k\times\left(1-\sum_{j=1}^{J_D}\beta_{j,k}^-\right)\right]\tag{9}$$

$$\bar{m}_{D,k}=\bar{m}_k(D)=1-w_k\tag{10}$$

where $\bar{m}_{D,k}$ represents the credibility that the kth rule is not assigned to any consequent attribute, and $\tilde{m}_{D,k}$ represents the credibility caused by the incompleteness of the kth rule. The total uncertain credibility is given by $m_{D,k}=\bar{m}_{D,k}+\tilde{m}_{D,k}$. We synthesize the credibility information of all rules and obtain the final belief result of each consequent attribute:

$$m_j=K\left[\prod_{k=1}^{L}(m_{j,k}+\tilde{m}_{D,k}+\bar{m}_{D,k})-\prod_{k=1}^{L}(\tilde{m}_{D,k}+\bar{m}_{D,k})\right]\tag{11}$$

$$\tilde{m}_D=K\left[\prod_{k=1}^{L}(\tilde{m}_{D,k}+\bar{m}_{D,k})-\prod_{k=1}^{L}\bar{m}_{D,k}\right]\tag{12}$$

$$\bar{m}_D=K\left[\prod_{k=1}^{L}\bar{m}_{D,k}\right]\tag{13}$$

$$K=\left[\sum_{j=1}^{J_D}\prod_{k=1}^{L}(m_{j,k}+\tilde{m}_{D,k}+\bar{m}_{D,k})-(J_D-1)\prod_{k=1}^{L}(\tilde{m}_{D,k}+\bar{m}_{D,k})\right]^{-1}\tag{14}$$

$$\max/\min\ y_j=\frac{m_j}{1-\bar{m}_D}\tag{15}$$

4. Interval Rule Inference Network

In this section, an interval rule inference network model is constructed. This model combines the interval belief rule base and neural network, which can not only handle a variety of uncertain information or knowledge including interval uncertainty, but also automatically generate an IBRB through learning algorithms. To facilitate the automatic generation of an IBRB, the framework of the proposed IRIN is presented in Figure 1.
The IRIN is composed of three parts, namely, the IBRB establishment, the inference process, and the parameter determination. The IBRB establishment contains a transformation process where the values of samples of any type are translated into interval belief distributions by statistical methods [23,24,25], and rule base establishment where an IBRB is constructed associated with the samples. The inference process is divided into three layers, i.e., the input layer, the activation layer, and the conclusion layer.
The sample and the rules in the IBRB are both represented with interval belief distributions; their similarity measures are input into the inference process, and the IER-based method is used to reason over the information with interval belief distribution values. The conclusion obtained contains interval belief distribution information.
In the parameter determination, the IBRB is optimized using gradient descent, where the mean squared error is calculated between the outputs of the inference process and the actual labels, which are all represented as interval belief distribution values. The belief degrees of consequents $\beta_{j,k}\in[\beta_{j,k}^-,\beta_{j,k}^+]\ (j=1,\dots,J_D;\ k=1,\dots,L)$ are handled as interval belief distributions, while the attribute weights $\bar{\delta}_i\ (i=1,\dots,T)$ and rule weights $\theta_k\ (k=1,\dots,L)$ are real values.

4.1. IBRB Establishment

In some real-world classification problems, samples are represented by numerical values or interval values. To establish IBRB, data preparation is necessary, including two steps, interval value conversion and belief distribution conversion. An IBRB is constructed using datasets with interval belief distribution representation obtained by the data preparation.

4.1.1. Data Transformation

In the interval value conversion, numerical values are converted into interval values. This process ensures the equivalence and rationality of the information before and after the data conversion.
Assume the ith sample in the dataset is $x_i=\langle x_{1,i},\dots,x_{t,i},\dots,x_{T,i};y_i\rangle$, in which $x_{t,i}$ is a general numerical value, $T$ denotes the total number of attributes, and $y_i$ represents the category of the ith sample, with $y_i\in\{y_1,\dots,y_{J_D}\}$, where $J_D$ indicates the total number of categories. The method for converting a numerical value into an interval value $x_{t,i}\in[x_{t,i}^-,x_{t,i}^+]$ is as follows:

$$x_{t,i}^-=x_{t,i}-\mathrm{rand}()\cdot\sigma_{t,j}\cdot\lambda,\qquad x_{t,i}^+=x_{t,i}+\mathrm{rand}()\cdot\sigma_{t,j}\cdot\lambda,\qquad \text{if } y_i=y_j\tag{16}$$

where $\lambda$ is the step size, and $\sigma_{t,j}$ is the standard deviation of $x_t$ within the same class $y_j$:

$$\sigma_{t,j}=\sqrt{\frac{1}{|y_j|-1}\sum_{x_{t,i}\in y_j}(x_{t,i}-\bar{x}_{t,j})^2}\tag{17}$$

with

$$\bar{x}_{t,j}=\frac{1}{|y_j|}\sum_{x_{t,i}\in y_j}x_{t,i}\tag{18}$$

where $|y_j|$ denotes the number of samples in class $y_j$. The category of the sample is converted to an interval value $[y_i^-,y_i^+]$ as follows:

$$y_i^-=y_i-\frac{1}{2}\gamma,\qquad y_i^+=y_i+\frac{1}{2}\gamma\tag{19}$$

where $\gamma$ is a random number and $\gamma\in(0,1)$.
A given input $x_t$ is transformed into an interval belief distribution of the antecedent attribute $U_t$ as follows:

$$E(x_t)=\{(A_{t,j},[a_{t,j}^-,a_{t,j}^+]),\ j=1,\dots,J;\ t=1,\dots,T\}\tag{20}$$

where $A_{t,j}$ is the jth reference value of the tth antecedent attribute, and $[a_{t,j}^-,a_{t,j}^+]$ is the interval belief degree to which the input $x_t$ belongs to the referential value $A_{t,j}$. $[a_{t,j}^-,a_{t,j}^+]$ is obtained from Equations (3) and (4) for the different cases, according to the nature of the antecedent attribute.

The category of a sample can likewise be expressed as an interval belief distribution. Assuming that the sample belongs to the ith category, it can be expressed as

$$\{(D_1,b_1),\dots,(D_{J_D},b_{J_D})\}\tag{21}$$

where $b_j=[b_j^-,b_j^+]$, with $b_j^-=b_j^+=1$ if $j=i$ and $b_j^-=b_j^+=0$ otherwise.
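A possible NumPy sketch of the numeric-to-interval conversion of Equations (16)-(18); the step size `lam` and the RNG seed are illustrative defaults, not values prescribed by the paper:

```python
import numpy as np

def to_interval(x, y, lam=0.1, rng=None):
    """Sketch of Eqs. (16)-(18): widen numeric attributes into intervals using
    the within-class standard deviation. x: (N, T) values; y: (N,) class ids;
    lam is the step size."""
    rng = rng or np.random.default_rng(0)
    x_lo, x_hi = x.astype(float).copy(), x.astype(float).copy()
    for c in np.unique(y):
        m = y == c
        sigma = x[m].std(axis=0, ddof=1)                  # Eq. (17), per attribute
        x_lo[m] -= rng.random(x[m].shape) * sigma * lam   # Eq. (16), lower bound
        x_hi[m] += rng.random(x[m].shape) * sigma * lam   # Eq. (16), upper bound
    return x_lo, x_hi
```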

4.1.2. IBRB Generation

The samples represented by the above interval belief distributions yield an IBRB of the form shown in Equation (1); the resulting rule base is shown in Table 1.

In Table 1, $A_{t,j}^k\ (k=1,\dots,L;\ t=1,\dots,T;\ j=1,\dots,J)$ is the jth reference value of the antecedent attribute $U_t\ (t=1,\dots,T)$ of the kth rule, $T$ is the total number of antecedent attributes involved in all the rules of the IBRB, $J$ represents the number of reference values, $L$ represents the number of IBRB rules, and $\bar{\delta}_t$ represents the attribute weight. $a_{t,j}^k=[a_{t,j}^{k-},a_{t,j}^{k+}]$ is the interval belief degree belonging to $A_{t,j}^k$, $\beta_{k,j}=[\beta_{k,j}^-,\beta_{k,j}^+]$ is the interval-valued belief degree to which $D_j$ is believed to be the consequent of the kth rule, and $J_D$ represents the number of categories.

The antecedent and consequent terms of each IBR in the IBRB are embedded with interval belief distributions, effectively capturing the interval uncertainty associated with the input samples. As the IBRB generated in Table 1 shows, the kth IBR can be represented as follows:

$$R_k:\ \text{IF}\ \left(U_1\ \text{is}\ \{(A_{1,1}^k,[\alpha_{1,1}^{k-},\alpha_{1,1}^{k+}]),\dots,(A_{1,J}^k,[\alpha_{1,J}^{k-},\alpha_{1,J}^{k+}])\}\right)\wedge\cdots\wedge\left(U_T\ \text{is}\ \{(A_{T,1}^k,[\alpha_{T,1}^{k-},\alpha_{T,1}^{k+}]),\dots,(A_{T,J}^k,[\alpha_{T,J}^{k-},\alpha_{T,J}^{k+}])\}\right),$$
$$\text{THEN}\ (D_1,[\beta_{k,1}^-,\beta_{k,1}^+]),\dots,(D_{J_D},[\beta_{k,J_D}^-,\beta_{k,J_D}^+]),\tag{22}$$

with a rule weight $\theta_k$ and antecedent attribute weights $\delta_i\ (i=1,\dots,T)$.

4.2. Inference Process

After the IBRB is generated, an inference process is performed in order to obtain the final conclusion. Different from previous systems based on BRB or EBRB, the input for IBRB is represented by an interval belief distribution representation. In order to effectively capture the interval information of the sample and the rule, the matching degree of the ith sample x t , i and the antecedent attribute U t k of the kth rule is assessed by the similarity measure between the two interval belief distributions. The matching degree between the input x i and the antecedent attribute U t of the kth rule is calculated by Equation (5).
At the activation layer, the activation weight can describe the activation degree of the IBR, which can be obtained by combining the matching degree obtained above with attribute weight and rule weight. The activation weight is calculated by Equation (6).
As the input for IRIN is interval information, the final conclusion should also contain interval information. At the conclusion layer, the IER-based method aggregates the belief degrees of the consequents (with interval information) and the activation weight of each IBR to obtain the final conclusion. A conclusion presented in the form of intervals can compensate for the lack of expert knowledge and improve the robustness of the system to a certain extent. The calculation is as follows:
$$y_j=[y_j^-,y_j^+]\tag{23}$$

s.t.

$$\beta_{j,k}^-\le\beta_{j,k}\le\beta_{j,k}^+,\qquad \sum_{j=1}^{J_D}\beta_{j,k}=1,\qquad \sum_{k=1}^{L}w_k=1\tag{24}$$

where

$$y_j^-=\frac{\mu^-\left[\prod_{k=1}^{L}(w_k\beta_{j,k}^-+1-w_k)-\prod_{k=1}^{L}(1-w_k)\right]}{1-\mu^-\prod_{k=1}^{L}(1-w_k)}\tag{25}$$

$$y_j^+=\frac{\mu^+\left[\prod_{k=1}^{L}(w_k\beta_{j,k}^++1-w_k)-\prod_{k=1}^{L}(1-w_k)\right]}{1-\mu^+\prod_{k=1}^{L}(1-w_k)}\tag{26}$$

$$\mu^-=\left[\sum_{j=1}^{J_D}\prod_{k=1}^{L}(w_k\beta_{j,k}^-+1-w_k)-(J_D-1)\prod_{k=1}^{L}(1-w_k)\right]^{-1}\tag{27}$$

$$\mu^+=\left[\sum_{j=1}^{J_D}\prod_{k=1}^{L}(w_k\beta_{j,k}^++1-w_k)-(J_D-1)\prod_{k=1}^{L}(1-w_k)\right]^{-1}\tag{28}$$

For classification problems, the final classification result of the sample is the category with the highest average belief degree:

$$c_j=\frac{y_j^-+y_j^+}{2}\tag{29}$$

where $c_j\ (j=1,\dots,J_D)$ is the average belief degree, and the predicted class is

$$f(x)=D_n,\qquad n=\arg\max_{j=1,\dots,J_D}\{c_j\}\tag{30}$$
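Assuming the bounds of Equations (25) and (26) can be evaluated by running the standard analytic ER combination separately on the lower and upper consequent beliefs, a sketch of the conclusion layer and the final decision of Equations (29) and (30) is:

```python
import numpy as np

def conclusion_layer(w, beta_lo, beta_hi):
    """Sketch of Eqs. (25)-(28). w: (L,) activation weights;
    beta_lo, beta_hi: (L, J_D) interval consequent beliefs."""
    def er(beta):
        a = (w[:, None] * beta + 1.0 - w[:, None]).prod(axis=0)  # prod_k(w_k*b_jk + 1 - w_k)
        b = (1.0 - w).prod()                                     # prod_k(1 - w_k)
        mu = 1.0 / (a.sum() - (beta.shape[1] - 1) * b)           # Eqs. (27)-(28)
        return mu * (a - b) / (1.0 - mu * b)                     # Eqs. (25)-(26)
    return er(beta_lo), er(beta_hi)

def predict(w, beta_lo, beta_hi):
    """Eqs. (29)-(30): pick the class with the highest average interval belief."""
    y_lo, y_hi = conclusion_layer(w, beta_lo, beta_hi)
    return int(np.argmax((y_lo + y_hi) / 2.0))
```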

4.3. Parameter Determination

To automatically generate a reasonable IBRB, the parameters should be determined through the learning algorithm of IRIN, including the attribute weights, rule weights, and the belief degrees of consequents. The gradient descent algorithm is used in the parameter learning process. This avoids the subjectivity and difficulty of manual parameter setting by experts when the number of parameters is large, and improves the classification performance of IRIN, so that IRIN can be applied more widely in various areas.
The parameter determination flowchart of IBRB is shown in Figure 2. In order to obtain a reasonable IBRB, the error function E i is used to calculate the difference between the final conclusion obtained by the inference process and the actual label of the input sample. If this difference is small enough, a reasonable IBRB is considered to have been generated. Otherwise, the belief degree of consequents, rule weights, and attribute weights should be updated continuously.

4.3.1. The Theoretical Foundations of IRIN

The traditional gradient descent algorithm is adopted to adjust the parameters in IRIN. In order to update the parameters by partial derivation in different directions for different parameters, it is necessary to prove the differentiability of the approximate reasoning algorithm based on IER in IRIN.
Theorem 1.
If $\sum_{j=1}^{J_D}\prod_{k=1}^{L}\beta_{j,k}^+\neq 0$, $\sum_{j=1}^{J_D}\prod_{k=1}^{L}\beta_{j,k}^-\neq 0$, $0<w_k<1$, and $0\le\beta_{j,k}^-\le\beta_{j,k}^+\le 1$, then the partial derivatives of the overall combined belief degree $y_j$ in Equations (25) and (26) with respect to $\beta_{j,k}^-$, $\beta_{j,k}^+$, and $w_k$ exist.
Proof of Theorem 1.
Let

$$f_j=[f_j^-,f_j^+]=\left[\prod_{k=1}^{L}(w_k\beta_{j,k}^-+1-w_k),\ \prod_{k=1}^{L}(w_k\beta_{j,k}^++1-w_k)\right]\tag{31}$$

$$g=\prod_{k=1}^{L}(1-w_k)\tag{32}$$

It can be seen from the differentiability theorem of functions that the partial derivatives of $f_j$ with respect to $\beta_{j,k}^-$, $\beta_{j,k}^+$, and $w_k$ exist, and $g$ is partially differentiable with respect to $w_k$.

Let

$$\mu_0=[\mu_0^-,\mu_0^+]=\left[\sum_{j=1}^{J_D}\prod_{k=1}^{L}(w_k\beta_{j,k}^-+1-w_k)-(J_D-1)\prod_{k=1}^{L}(1-w_k),\ \sum_{j=1}^{J_D}\prod_{k=1}^{L}(w_k\beta_{j,k}^++1-w_k)-(J_D-1)\prod_{k=1}^{L}(1-w_k)\right]\tag{33}$$

Since $w_k\beta_{j,k}^-+1-w_k\ge 1-w_k$,

$$\mu_0^-=\sum_{j=1}^{J_D}\prod_{k=1}^{L}(w_k\beta_{j,k}^-+1-w_k)-(J_D-1)\prod_{k=1}^{L}(1-w_k)>J_D\prod_{k=1}^{L}(1-w_k)-(J_D-1)\prod_{k=1}^{L}(1-w_k)=\prod_{k=1}^{L}(1-w_k)>0\tag{34}$$

$$\mu_0^+=\sum_{j=1}^{J_D}\prod_{k=1}^{L}(w_k\beta_{j,k}^++1-w_k)-(J_D-1)\prod_{k=1}^{L}(1-w_k)>J_D\prod_{k=1}^{L}(1-w_k)-(J_D-1)\prod_{k=1}^{L}(1-w_k)=\prod_{k=1}^{L}(1-w_k)>0\tag{35}$$

Hence, $\mu_0^-$ and $\mu_0^+$ are nonzero and partially differentiable with respect to $\beta_{j,k}^-$, $\beta_{j,k}^+$, and $w_k$. And since $\mu=1/\mu_0$, $\mu$ is partially differentiable with respect to $\beta_{j,k}^-$, $\beta_{j,k}^+$, and $w_k$. Hence, $y_j=\dfrac{\mu\cdot[f_j-g]}{1-\mu\cdot g}$ is partially differentiable with respect to $\beta_{j,k}^-$, $\beta_{j,k}^+$, and $w_k$.   □
Theorem 2.
If $\sum_{l=1}^{L}\left(\theta_l\times\prod_{i=1}^{T_l}(s_i^l)^{\bar{\delta}_i}\right)\neq 0$, then the partial derivatives of the activation weight $w_k$ in Equation (6) with respect to the rule weight $\theta_k$ and the attribute weight $\bar{\delta}_i$ exist.
Proof of Theorem 2.
Let

$$z_k=\prod_{i=1}^{T_k}(s_i^k)^{\bar{\delta}_i}\tag{36}$$

$z_k$ is an exponential function of $\bar{\delta}_i$, so $z_k$ is partially differentiable with respect to $\bar{\delta}_i$.

$$w_k=\frac{\theta_k\cdot z_k}{\sum_{l=1}^{L}(\theta_l\cdot z_l)}\tag{37}$$

It is obvious that $\theta_k$ and $z_k$ are differentiable and $\sum_{l=1}^{L}(\theta_l\cdot z_l)\neq 0$. Hence, $w_k$ is partially differentiable with respect to $\bar{\delta}_i$ and $z_k$, and therefore with respect to $\bar{\delta}_i$ and $\theta_k$. From Theorems 1 and 2, $y_j$ is partially differentiable with respect to $\bar{\delta}_i$ and $\theta_k$.    □

4.3.2. Parameter Training

To obtain a reasonable IBRB and build a high-performance classification system, IRIN updates the parameters $\beta_{j,k}^-$, $\beta_{j,k}^+$, $\theta_k$, and $\bar{\delta}_i$ using the gradient descent algorithm.
  • Calculate the discrepancy:
A given input (the ith sample) is denoted as $x_i=\langle x_{1,i},\dots,x_{t,i},\dots,x_{T,i};y_i\rangle$, and the conclusion obtained by inference from the IBRB is denoted as $\hat{y}_i=[\hat{y}_i^-,\hat{y}_i^+]$, where $\hat{y}_i^-=\{\hat{y}_1^{(i)-},\dots,\hat{y}_{J_D}^{(i)-}\}$ and $\hat{y}_i^+=\{\hat{y}_1^{(i)+},\dots,\hat{y}_{J_D}^{(i)+}\}$; the actual label is $y_i=[\{y_1^{(i)-},\dots,y_{J_D}^{(i)-}\},\{y_1^{(i)+},\dots,y_{J_D}^{(i)+}\}]$. The discrepancy between $\hat{y}_i$ and $y_i$ is calculated according to the following loss function (a code sketch of this loss appears at the end of this subsection):

$$E_i=\frac{1}{J_D}\sum_{j=1}^{J_D}\frac{(\hat{y}_j^--y_j^-)^2+(\hat{y}_j^+-y_j^+)^2}{2}\tag{38}$$
  • Update of the belief degrees of consequents:
The belief degrees of consequents can be updated as

$$\beta_{j,k}^-=\beta_{j,k}^-+\Delta\beta_{j,k}^-,\qquad \beta_{j,k}^+=\beta_{j,k}^++\Delta\beta_{j,k}^+\qquad(j=1,\dots,J_D;\ k=1,\dots,L)$$

with the increments

$$\Delta\beta_{j,k}^-=-\eta\frac{\partial E_i}{\partial\beta_{j,k}^-},\qquad \Delta\beta_{j,k}^+=-\eta\frac{\partial E_i}{\partial\beta_{j,k}^+}$$

where $\eta$ is the learning rate. The partial derivatives $\partial E_i/\partial\beta_{j,k}^-$ and $\partial E_i/\partial\beta_{j,k}^+$ are calculated by the chain rule as follows:

$$\frac{\partial E_i}{\partial\beta_{j,k}^-}=\sum_{j=1}^{J_D}\frac{\partial E_i}{\partial y_j^-}\cdot\frac{\partial y_j^-}{\partial\beta_{j,k}^-},\qquad \frac{\partial E_i}{\partial\beta_{j,k}^+}=\sum_{j=1}^{J_D}\frac{\partial E_i}{\partial y_j^+}\cdot\frac{\partial y_j^+}{\partial\beta_{j,k}^+}$$

$$\frac{\partial E_i}{\partial y_j^-}=\frac{1}{J_D}(\hat{y}_j^--y_j^-),\qquad \frac{\partial E_i}{\partial y_j^+}=\frac{1}{J_D}(\hat{y}_j^+-y_j^+)$$

$$\frac{\partial y_j^-}{\partial\beta_{j,k}^-}=\frac{(\mu_\beta^- f_j^-+\mu^- f_{j\beta}^-)\cdot g^-+(1-g^-)\cdot\mu_\beta^- f_j^-}{(g^-)^2},\qquad \frac{\partial y_j^+}{\partial\beta_{j,k}^+}=\frac{(\mu_\beta^+ f_j^++\mu^+ f_{j\beta}^+)\cdot g^++(1-g^+)\cdot\mu_\beta^+ f_j^+}{(g^+)^2}$$

where

$$f_j=\left[\prod_{k=1}^{L}(w_k\beta_{j,k}^-+1-w_k)-\prod_{k=1}^{L}(1-w_k),\ \prod_{k=1}^{L}(w_k\beta_{j,k}^++1-w_k)-\prod_{k=1}^{L}(1-w_k)\right]$$

$$g=\left[1-\mu^-\prod_{k=1}^{L}(1-w_k),\ 1-\mu^+\prod_{k=1}^{L}(1-w_k)\right]$$

$$s_k=\left[\prod_{l=1,l\neq k}^{L}(w_l\beta_{j,l}^-+1-w_l),\ \prod_{l=1,l\neq k}^{L}(w_l\beta_{j,l}^++1-w_l)\right]$$

$$h_k=\prod_{l=1,l\neq k}^{L}(1-w_l)$$

Let $\mu_0^-=1/\mu^-$ and $\mu_0^+=1/\mu^+$; then

$$\mu_\beta^-=\frac{\partial\mu^-}{\partial\beta_{j,k}^-}=-\frac{w_k\cdot s_k^-}{(\mu_0^-)^2},\qquad \mu_\beta^+=\frac{\partial\mu^+}{\partial\beta_{j,k}^+}=-\frac{w_k\cdot s_k^+}{(\mu_0^+)^2}$$

$$f_{j\beta}^-=\frac{\partial f_j^-}{\partial\beta_{j,k}^-}=w_k\cdot s_k^-,\qquad f_{j\beta}^+=\frac{\partial f_j^+}{\partial\beta_{j,k}^+}=w_k\cdot s_k^+$$
  • Update the rule weights and attribute weights:
The rule weights and attribute weights can be updated as

$$\theta_k=\theta_k+\Delta\theta_k\ (k=1,\dots,L),\qquad \bar{\delta}_i=\bar{\delta}_i+\Delta\bar{\delta}_i\ (i=1,\dots,T)$$

with the increments

$$\Delta\theta_k=-\eta\frac{\partial E_i}{\partial\theta_k},\qquad \Delta\bar{\delta}_i=-\eta\frac{\partial E_i}{\partial\bar{\delta}_i}$$

From Theorems 1 and 2, $\partial E_i/\partial w_k$, $\partial w_k/\partial\theta_k$, and $\partial w_k/\partial\bar{\delta}_i$ exist. According to the inference process, the activation weight of a rule is a function of all the attribute weights, and the output is a compound function of all the activation weights, which makes parameter learning highly complex. In order to reduce the complexity, a "pseudo-gradient" is used instead of the complex "true gradient"; that is, the partial derivatives $\partial w_k/\partial\bar{\delta}_i$ and $\partial w_k/\partial\theta_k$ are used in place of the full gradients $\partial E_i/\partial\bar{\delta}_i$ and $\partial E_i/\partial\theta_k$. Therefore,

$$\Delta\theta_k=-\eta_1\eta_2\frac{\partial E_i}{\partial w_k}\cdot\frac{\partial w_k}{\partial\theta_k},\qquad \Delta\bar{\delta}_i=-\eta_1\eta_2\frac{\partial E_i}{\partial w_k}\cdot\frac{\partial w_k}{\partial\bar{\delta}_i}$$

The partial derivatives $\partial E_i/\partial\theta_k$ and $\partial E_i/\partial\bar{\delta}_i$ are calculated as follows:

$$\frac{\partial E_i}{\partial\theta_k}=\frac{1}{2}\left(\frac{\partial E_i}{\partial y_j^+}\cdot\frac{\partial y_j^+}{\partial w_k}+\frac{\partial E_i}{\partial y_j^-}\cdot\frac{\partial y_j^-}{\partial w_k}\right)\cdot\frac{\partial w_k}{\partial\theta_k}$$

$$\frac{\partial E_i}{\partial\bar{\delta}_i}=\frac{1}{2}\left(\frac{\partial E_i}{\partial y_j^+}\cdot\frac{\partial y_j^+}{\partial w_k}+\frac{\partial E_i}{\partial y_j^-}\cdot\frac{\partial y_j^-}{\partial w_k}\right)\cdot\frac{\partial w_k}{\partial\bar{\delta}_i}$$

where

$$\frac{\partial y_j}{\partial w_k}=\frac{(\mu_w f_j+\mu f_{jw})\cdot g-g_w\cdot\mu f_j}{g^2}$$

$$\mu_w=\left[\frac{\partial\mu^-}{\partial w_k},\frac{\partial\mu^+}{\partial w_k}\right]=\left[-\frac{\sum_{j=1}^{J_D}(\beta_{j,k}^--1)\cdot s_k^-+(J_D-1)\cdot h_k}{(\mu_0^-)^2},\ -\frac{\sum_{j=1}^{J_D}(\beta_{j,k}^+-1)\cdot s_k^++(J_D-1)\cdot h_k}{(\mu_0^+)^2}\right]$$

$$f_{jw}=\left[\frac{\partial f_j^-}{\partial w_k},\frac{\partial f_j^+}{\partial w_k}\right]=\left[(\beta_{j,k}^--1)\cdot s_k^-+h_k,\ (\beta_{j,k}^+-1)\cdot s_k^++h_k\right]$$

$$g_w=\left[\frac{-\mu_w^-(1-g^-)+(\mu^-)^2\cdot h_k}{\mu^-},\ \frac{-\mu_w^+(1-g^+)+(\mu^+)^2\cdot h_k}{\mu^+}\right]$$

Let

$$z_k=\prod_{i=1}^{T_k}(s_i^k)^{\bar{\delta}_i}$$

For the rule itself ($l=k$):

$$\frac{\partial w_k}{\partial\theta_k}=\frac{z_k\sum_{l=1,l\neq k}^{L}\theta_l z_l}{\left(\sum_{l=1}^{L}\theta_l z_l\right)^2},\qquad \frac{\partial w_k}{\partial\bar{\delta}_i}=\frac{\partial w_k}{\partial z_k}\cdot\frac{\partial z_k}{\partial\bar{\delta}_i}=\frac{\theta_k\sum_{l=1,l\neq k}^{L}\theta_l z_l}{\left(\sum_{l=1}^{L}\theta_l z_l\right)^2}\cdot z_k\ln s_i^k$$

For another rule ($l\neq k$):

$$\frac{\partial w_k}{\partial\theta_l}=-\frac{\theta_k z_k z_l}{\left(\sum_{l=1}^{L}\theta_l z_l\right)^2},\qquad \frac{\partial w_k}{\partial\bar{\delta}_i}=\frac{\partial w_k}{\partial z_l}\cdot\frac{\partial z_l}{\partial\bar{\delta}_i}=-\frac{\theta_k z_k\cdot\theta_l z_l}{\left(\sum_{l=1}^{L}\theta_l z_l\right)^2}\cdot\ln s_i^l$$
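To make one update step tangible, the hedged sketch below implements the Equation (38) loss and a single descent step on the lower consequent beliefs; for brevity it approximates the gradient by central finite differences instead of the closed-form chain-rule expressions above, and it reuses the `conclusion_layer` helper from the Section 4.2 sketch:

```python
import numpy as np

def loss(y_hat_lo, y_hat_hi, y_lo, y_hi):
    """Eq. (38): mean squared discrepancy of the interval bounds."""
    return np.mean(((y_hat_lo - y_lo) ** 2 + (y_hat_hi - y_hi) ** 2) / 2.0)

def update_beta_lo(w, beta_lo, beta_hi, y_lo, y_hi, eta=0.05, h=1e-6):
    """One descent step on the lower consequent beliefs; the gradient of
    Eq. (38) is approximated by central finite differences, a shortcut that
    stands in for the closed-form derivatives above."""
    grad = np.zeros_like(beta_lo)
    for idx in np.ndindex(*beta_lo.shape):
        for sign in (+1.0, -1.0):
            b = beta_lo.copy()
            b[idx] += sign * h
            lo, hi = conclusion_layer(w, b, beta_hi)  # from the Section 4.2 sketch
            grad[idx] += sign * loss(lo, hi, y_lo, y_hi) / (2.0 * h)
    return np.clip(beta_lo - eta * grad, 0.0, 1.0)    # keep beliefs in [0, 1]
```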

4.4. Algorithm Description

An IRIN that can automatically generate an IBRB and effectively handle interval uncertainty is constructed to overcome the limitations of expert-specified parameters discussed above. The construction steps of IRIN are as follows:
Step 1:
Normalize the dataset and convert numeric data to interval data.
Step 2:
Use Equations (3) and (4) to transform the interval data into an interval belief distribution according to the antecedent attribute.
Step 3:
Refer to Section 4.2 to obtain IBRB inference conclusion y ^ .
Step 4:
Compute the difference between the IBRB inference conclusion y ^ and the actual conclusion y using Equation (38). If the difference is greater than the expected error ε min , update the parameters by referring to Section 4.3 and return to Step 3.
Step 5:
Repeat Steps 3 and 4 until the difference is less than the expected error ε min or the number of training iterations exceeds the preset maximum λ max.
The IRIN algorithm is shown in Algorithm 1.    
Algorithm 1: IRIN algorithm.
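Since the published pseudocode figure is not reproduced here, the following Python skeleton restates Steps 1-5; every helper name (`to_interval`, `build_ibrb`, `infer`, `error`, `update_parameters`) is an illustrative placeholder rather than the authors' API:

```python
def train_irin(X, y, eta=0.01, eps_min=1e-4, max_iter=1000):
    """Skeleton of Algorithm 1; helper names are illustrative placeholders."""
    X_lo, X_hi = to_interval(X, y)            # Step 1: numeric -> interval values
    rules = build_ibrb(X_lo, X_hi, y)         # Step 2: interval belief distributions, Eqs. (3)-(4)
    for _ in range(max_iter):                 # Step 5: iterate until converged
        y_hat = infer(rules, X_lo, X_hi)      # Step 3: IER inference (Section 4.2)
        if error(y_hat, y) < eps_min:         # Step 4: loss of Eq. (38)
            break
        update_parameters(rules, eta)         # gradient updates (Section 4.3)
    return rules
```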

5. Interpretability Analysis of IRIN Structures

In this section, the interpretability of the IRIN classification model constructed in Section 4 will be introduced. To alleviate the problem of the “black box” mechanism, IRIN’s classification model should be interpretable. The authors of [26,27] proposed a general interpretability criterion. Through the analysis of the general interpretability criterion, the constructed IRIN interpretability criterion is as follows.
Criterion 1.
The reference value of the attribute can be effectively distinguished.
When constructing an IRIN, each attribute is assigned at least one reference value with a distinct meaning. In the present work, the set of referential values of each attribute is A = {(L, 0.0), (M, 0.5), (H, 1.0)}, where H represents "high", M represents "medium", and L represents "low".
Criterion 2.
A complete membership function.
IRIN can convert data with any type of representation into the corresponding interval data, where each datum can match at least one reference value, and at least one rule can be activated in rule inference.
Criterion 3.
The structures and parameters have actual meaning.
As shown in Table 1, all the parameters of the IBRB automatically generated by IRIN have actual meaning, so that the model is able to deduce a reasonable causal relationship.
Criterion 4.
Standardization of the matching degree.
The standardization of the matching degree helps the understandability of language terms and provides guidance for the classification of IRIN.
Criterion 5.
Reasonable information transformation.
Data of any type, or of hybrid types, need to be converted into interval belief distributions at the data transformation stage. Note that the boundaries of the interval belief distribution obtained by Equations (3) and (4) are not independent interval belief structures and meet the requirement of normalization. Therefore, the data are equivalently expressed in the interval belief structures.
Criterion 6.
A transparent inference engine.
The feedforward process of IRIN is the inference process of IBRB, which uses IER as the inference engine to ensure the interpretability of the rule base in the whole inference process.
Criterion 7.
The simplicity of the rule base.
The numbers of rules, parameters, and reference values for IRIN are moderate to improve the readability of IBRB.
These criteria guarantee the interpretability of IRIN classification and provide a reference for establishing an interpretable classification model.

6. Experiment and Analysis

The datasets used in this paper are described before the experiments. The experiments are divided into three parts: the first part analyzes the interpretability and intervention ability of IRIN, the second part validates the classification performance by comparing the proposed method with other BRB systems, and the third part demonstrates its practical value by applying IRIN to an actual engineering problem.
The experimental environment of this paper is Intel(R) Core(TM) i5-8300H CPU @2.3 GHz (4 CPUs) 24 GB memory, with the Windows 11 operating system. The algorithm in this paper is implemented using the Python 3.7 programming language.

6.1. Experimental Datasets

To verify the effectiveness of the proposed method in this paper, the experiments are conducted on the general datasets from the UC Irvine Machine Learning Repository, with detailed information available at https://archive.ics.uci.edu/; accessed on 2 January 2025. The basic information of the datasets used in this paper is presented in Table 2.
In certain engineering domains, historical data for training are not readily available, while the samples requiring classification prediction are numerous and varied, as in the classification of the mechanical properties of coarse aggregate below. Therefore, this paper employs inverse five-fold cross-validation for the experiments, ensuring the accuracy and reliability of the model in practical applications.

6.2. Interpretability Analysis of IRIN

Experts or decision-makers can intervene in IBRB automatically generated by IRIN based on expert knowledge or experience to modify the results to meet different task needs. In order to illustrate the interpretability and the intervention ability, the IBRB with respect to the Iris dataset automatically generated by IRIN is taken as an example, shown in Table 3.
The Iris dataset contains four attributes, namely, sepal length, sepal width, petal length, and petal width, and three classes, where each class refers to a type of iris plant: "Setosa", "Versicolour", or "Virginica". In Table 3, A1, A2, A3, and A4 represent the antecedent attributes, with attribute weights 0.2496, 0.6754, 0.1146, and 0.8475, respectively; D1, D2, and D3 represent the consequents, which are the class labels. Each row is a rule; for example, R15 is expressed as follows:
R15: IF the sepal length is {(L, [0.0002, 0.0080]), (M, [0.9920, 0.9998]), (H, 0.0)} AND the sepal width is {(L, [0.1667, 0.1766]), (M, [0.8234, 0.8333]), (H, 0.0)} AND the petal length is {(L, 0.0), (M, [0.8298, 0.9136]), (H, [0.1667, 0.1701])} AND the petal width is {(L, 0.0), (M, [0.8333, 0.8412]), (H, [0.1588, 0.1667])}, THEN the iris category is {(Setosa, [0.2181, 0.3661]), (Versicolour, [0.0595, 0.0935]), (Virginica, [0.5404, 0.7224])}, with rule weight 0.8998 and attribute weights {0.2496, 0.6754, 0.1146, 0.8475}.
This rule states that if the membership degrees of the sepal length at short, average, and long are [0.0002, 0.008], [0.992, 0.9998], and 0.0, the membership degrees of the sepal width at narrow, average, and wide are [0.1667, 0.1766], [0.8234, 0.8333], and 0.0, the membership degrees of the petal length at short, average, and long are 0.0, [0.8298, 0.9136], and [0.1667, 0.1701], and the membership degrees of the petal width at narrow, average, and wide are 0.0, [0.8333, 0.8412], and [0.1588, 0.1667], then the membership degrees that the iris category is Setosa, Versicolour, or Virginica are [0.2181, 0.3661], [0.0595, 0.0935], and [0.5404, 0.7224], respectively. The rule weight for this rule is 0.8998, and the antecedent attribute weights of the sepal length, sepal width, petal length, and petal width are 0.2496, 0.6754, 0.1146, and 0.8475, respectively.
When one or more IBRs in IBRB are inconsistent with expert knowledge information, experts or decision-makers can intervene to modify these IBRs according to expert knowledge or experience, thus correcting the results.
The interpretability of IRIN is illustrated by comparing the consistency of the belief distribution between IRIN and the expert knowledge. More details about the belief distribution can be found in [28]. In Figure 3, the belief distributions of the last 16 rules for IRIN and the expert knowledge are depicted. It can be observed that the belief distributions of the IBRB are basically consistent with those of the expert knowledge, especially rules 28, 27, 20, 19, and 18, which are marked with pink ellipses. For rules 16, 17, 21, and 22, the differences in their belief distributions compared to the expert knowledge may be due to noise interference during the automatic generation of the IBRB by IRIN. However, experts or decision-makers can use their knowledge or experience to intervene and modify these rules. Therefore, the belief distribution of the IBRB automatically generated by IRIN can better retain the characteristics of expert knowledge and effectively handle interval uncertainty, while also improving IRIN's interpretability and ensuring classification accuracy.

6.3. Performance Analysis of IRIN

Parameters in an IBRB such as attribute weights, rule weights, and the belief degrees of consequents were previously determined by experts, and since the belief degrees of consequents are interval values, manual parameter setting is further complicated. IRIN avoids this problem by automatically generating an IBRB in which both the antecedent and consequent terms of each rule are embedded with interval belief distributions.
In order to verify the classification performance of IRIN, the classification accuracy of IRIN is compared with some improved belief rule-based systems.
Table 4 shows the comparison results of classification accuracy with other BRB systems. As can be seen from Table 4, the classification accuracies of IRIN on the Diabetes, Glass, and Ecoli datasets are 75.3%, 76.7%, and 88.1%, respectively, which are 0.9%, 3.8%, and 1.0% higher than those of the second-ranked BRB system. The classification accuracy on the Seeds and Iris datasets is slightly lower than that of other BRB systems, ranking second and third, respectively. This may be due to the simplicity of the Seeds and Iris datasets, where the class distribution is relatively uniform and the feature dimensions are low; in such cases, the complexity and advantages of the IRIN method are not fully utilized. Nevertheless, the average ranking of IRIN is still better than that of the other BRB systems. This fully shows that the IBRB automatically generated using the proposed method can effectively handle interval uncertainty, and that uncertainty can be expressed more precisely using interval boundaries, so that the system can better distinguish boundary cases and reduce classification errors.

6.4. Application in Mechanical Properties of Coarse Aggregate of Reinforced Recycled Clay Brick

The utilization of recycled brick aggregate as a coarse aggregate in concrete enhances the recycling of waste bricks and reduces resource depletion. Predicting the classification of the mechanical properties of coarse aggregate of reinforced recycled clay brick in concrete is thus meaningful for both environmental protection and the economy. However, the process requires interpretability, the results should be intervenable, and the historical data available for training are limited. In this study, the proposed IRIN is applied to this classification task.
The data, containing 72 samples with five attributes (apparent density ρ, water absorption rate ω, crushing value σ, substitution rate λ_g, and glue–water ratio), were collected from Zhongtu International Architectural Design Co. Ltd. (Shijiazhuang, China) and Hebei University of Science and Technology (Shijiazhuang, China) [33]. They are divided into two categories, good mechanical properties and bad mechanical properties, depending on the compressive strength.
In this experiment, a five-fold cross-validation method was employed to evaluate the performance of the proposed method in identifying the mechanical properties. The evaluation metrics for classifying performance include the following:
(1)
Precision: The proportion of true positive predictions out of all positive predictions.
(2)
Recall: The proportion of true positive predictions out of all actual positives.
(3)
F1 score: The harmonic mean of precision and recall, considering both accuracy and recall metrics. The calculation formula is as follows:
$$F_1=\frac{2\times precision\times recall}{precision+recall}$$
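For reference, the F1 computation is a one-liner; the numbers in the usage comment are illustrative only:

```python
def f1_score(precision: float, recall: float) -> float:
    """Harmonic mean of precision and recall, as in the formula above."""
    return 2 * precision * recall / (precision + recall)

# Example (illustrative values): f1_score(0.90, 0.80) ~= 0.847
```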
The comparison of the performance metrics for the classification methods is shown in Figure 4. As can be seen from Figure 4, the multilayer perceptron classifier (MLP) and support vector classifier (SVC) perform poorly, while the classification performance of the remaining methods, from best to worst, is the proposed IRIN, random forest classifier (RFC), decision tree classifier (DTC), RIN, XGBoost (XGB), and K-nearest neighbors classifier (KNC). The proposed IRIN ranks in the top two in all three performance indicators. This shows that IRIN automatically generates a reasonable IBRB, which effectively improves the inference effect of the IBRB and, in turn, the performance of IRIN.
The precision, recall, and F1 score values for the seven classification methods are presented in Table 5. As can be seen from Table 5, the precision of the IRIN model is slightly lower than that of the DTC and RFC classifiers (by 0.12 percentage points), but IRIN is higher than all the other classifiers in terms of recall and F1 score. Despite the slightly lower precision, IRIN's higher recall and F1 score indicate its stronger overall performance in identifying both categories of mechanical properties accurately. The higher recall suggests that IRIN effectively captures more instances of coarse aggregate with bad mechanical properties, minimizing the risk of missing critical cases. The F1 score, which balances precision and recall, further highlights IRIN's ability to maintain reliable identification while ensuring a balance between false positives and false negatives.
In addition to the high classification performance, interpretability is a concern in an actual engineering problem. The IBRB automatically generated by IRIN shows the law for identifying the mechanical properties of coarse aggregate of reinforced recycled clay brick in concrete. One of the rules in IBRB is as follows.
R1: IF the apparent density is {(Low, 0.0), (Medium, [0.3213, 0.4912]), (High, [0.5087, 0.6786])} AND the water absorption rate is {(Weak, [0.5617, 1.0]), (Medium, [0.0, 0.4383]), (Strong, 0.0)} AND the crushing value is {(Low, [0.4864, 0.6831]), (Medium, [0.3169, 0.5136]), (High, 0.0)} AND the substitution rate is {(Low, 0.0), (Medium, [0.4378, 0.5515]), (High, [0.4484, 0.5621])} AND the glue–water ratio is {(Low, 0.0), (Medium, [0.6141, 0.6255]), (High, [0.3745, 0.3859])}, THEN the mechanical property is {(Good, [0.6211, 0.7289]), (Bad, [0.2196, 0.3789])}, with rule weight 0.8311 and attribute weights {0.9062, 0.8523, 0.8912, 0.4536, 0.6611}.
It is noted that when the expert or decision-maker believes that the belief degree of "good" mechanical properties in this rule should exceed 80%, they can intervene in the rule to correct the result.
The experimental results show that the proposed IRIN method can effectively identify the mechanical properties of coarse aggregate of reinforced recycled clay brick in concrete and the IBRB can be intervened with to further improve the performance. Therefore, the proposed method can be applied to real-world scenarios.

7. Conclusions

In this paper, the IRIN model, established by integrating the ideas of the IBRB and the neural network, was proposed to solve the problems of interval uncertainty and the inability of experts to accurately set the large number of IBRB parameters. This model uses the inference mechanism of the IBRB and updates parameters according to the gradient descent algorithm to automatically generate a reasonable IBRB. This method not only effectively solves the complexity problem of parameter determination in a rule base, but also effectively handles interval uncertainty while maintaining interpretability.
Experimental results show that the proposed model has interpretability and intervention ability, which improve the reliability of the model, and achieves higher classification accuracy than other BRB systems. In addition, the IRIN system was used to identify the mechanical properties of reinforced recycled clay brick coarse aggregate in concrete, demonstrating that IRIN can effectively capture more instances of coarse aggregate with bad mechanical properties, minimizing the risk of missing critical cases.
However, since IRIN’s learning algorithm only considers the boundaries of intervals, this method may lose part of the interval information, reducing the inference efficiency of IBRB and, thus, affecting the performance of IRIN. Therefore, in future research, the interval information of the interval belief structures will be considered to fully express the characteristics of the interval data.

Author Contributions

Methodology, Y.Z. (Yunxia Zhang) and Y.Z. (Yiming Zhong); formal analysis, X.W., Y.Z. (Yunxia Zhang), and J.B.; writing—original draft, Y.Z. (Yunxia Zhang) and Y.Z. (Yiming Zhong); writing—review and editing, X.W., Y.Z. (Yunxia Zhang), and Y.Z. (Yiming Zhong); data curation, Y.Z. (Yiming Zhong); validation, Y.Z. (Yunxia Zhang) and J.B. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the Hebei Provincial Natural Science Foundation (No. F2022210023, F2023210001), National Natural Science Foundation of China (NSFC) (No. 62341121), and the Projects of PhDs’ Start-up Research of GDUPT (No. 2023bsqd2005).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The original contributions presented in the study are included in the article; further inquiries can be directed to the corresponding author.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Zhang, Y.; Tiňo, P.; Leonardis, A.; Tang, K. A survey on neural network interpretability. IEEE Trans. Emerg. Top. Comput. Intell. 2021, 5, 726–742. [Google Scholar] [CrossRef]
  2. Liu, Z.; Xu, F. Interpretable neural networks: Principles and applications. Front. Artif. Intell. 2023, 6, 974295. [Google Scholar] [CrossRef] [PubMed]
  3. Prokopowicz, P.; Mikołajewski, D. Fuzzy approach to computational classification of burnout—Preliminary findings. Appl. Sci. 2022, 12, 3767. [Google Scholar] [CrossRef]
  4. Cao, J.; Zhou, T.; Zhi, S.; Lam, S.; Ren, G.; Zhang, Y.; Wang, Y.; Dong, Y.; Cai, J. Fuzzy inference system with interpretable fuzzy rules: Advancing explainable artificial intelligence for disease diagnosis—A comprehensive review. Inf. Sci. 2024, 662, 120212. [Google Scholar] [CrossRef]
  5. Yang, R.; Zhao, Y.; Shi, Y. RPREC: A Radar Plot Recognition Algorithm Based on Adaptive Evidence Classification. Appl. Sci. 2023, 13, 12511. [Google Scholar] [CrossRef]
  6. Xu, Z.; Lu, W.; Hu, Z.; Yan, W.; Xue, W.; Zhou, T.; Jiang, F. Decision-Refillable-Based Shared Feature-Guided Fuzzy Classification for Personal Thermal Comfort. Appl. Sci. 2023, 13, 6332. [Google Scholar] [CrossRef]
  7. Gao, F.; Bi, C.; Bi, W.; Zhang, A. A new belief rule base inference methodology with interval information based on the interval evidential reasoning algorithm. Appl. Intell. 2023, 53, 12504–12520. [Google Scholar] [CrossRef]
  8. Hu, G.; He, W.; Sun, C.; Zhu, H.; Li, K.; Jiang, L. Hierarchical belief rule-based model for imbalanced multi-classification. Expert Syst. Appl. 2023, 216, 119451. [Google Scholar] [CrossRef]
  9. Yang, J.B.; Liu, J.; Wang, J.; Sii, H.S.; Wang, H.W. Belief rule-base inference methodology using the evidential reasoning approach-RIMER. IEEE Trans. Syst. Man Cybern. Part A Syst. Hum. 2006, 36, 266–285. [Google Scholar] [CrossRef]
  10. Liu, J.; Martinez, L.; Calzada, A.; Wang, H. A novel belief rule base representation, generation and its inference methodology. Knowl.-Based Syst. 2013, 53, 129–141. [Google Scholar] [CrossRef]
  11. AbuDahab, K.; Xu, D.l.; Chen, Y.W. A new belief rule base knowledge representation scheme and inference methodology using the evidential reasoning rule for evidence combination. Expert Syst. Appl. 2016, 51, 218–230. [Google Scholar] [CrossRef]
  12. Wang, Y.M.; Yang, J.B.; Xu, D.L.; Chin, K.S. The evidential reasoning approach for multiple attribute decision analysis using interval belief degrees. Eur. J. Oper. Res. 2006, 175, 35–66. [Google Scholar] [CrossRef]
  13. Zhu, H.; Zhao, J.; Xu, Y.; Du, L. Interval-valued belief rule inference methodology based on evidential reasoning-IRIMER. Int. J. Inf. Technol. Decis. Mak. 2016, 15, 1345–1366. [Google Scholar] [CrossRef]
  14. Liu, M.; He, W.; Zhou, G.; Zhu, H. A New Student Performance Prediction Method Based on Belief Rule Base with Automated Construction. Mathematics 2024, 12, 2418. [Google Scholar] [CrossRef]
  15. Zhang, H.; Yang, R.; He, W.; Feng, Z. Cooperative performance assessment for multiagent systems based on the belief rule base with continuous inputs. Inf. Sci. 2024, 676, 120815. [Google Scholar] [CrossRef]
  16. Fu, Y.G.; Lin, X.Y.; Fang, G.C.; Li, J.; Cai, H.Y.; Gong, X.T.; Wang, Y.M. A novel extended rule-based system based on K-Nearest Neighbor graph. Inf. Sci. 2024, 662, 120158. [Google Scholar] [CrossRef]
  17. Huang, D.; Zhang, Y.; Lin, H.; Zou, L.; Liu, Z. Rule inference network model for classification. J. Softw. 2020, 31, 1063–1078. (In Chinese) [Google Scholar]
  18. Gao, F.; Bi, W. A fast belief rule base generation and reduction method for classification problems. Int. J. Approx. Reason. 2023, 160, 108964. [Google Scholar] [CrossRef]
  19. Yang, L.H.; Ye, F.F.; Liu, J.; Wang, Y.M. Belief rule-base expert system with multilayer tree structure for complex problems modeling. Expert Syst. Appl. 2023, 217, 119567. [Google Scholar] [CrossRef]
  20. Jiao, L.; Zhang, H.; Geng, X.; Pan, Q. Belief rule learning and reasoning for classification based on fuzzy belief decision tree. Int. J. Approx. Reason. 2024, 175, 109300. [Google Scholar] [CrossRef]
  21. Fu, C.; Hou, B.; Xue, M.; Chang, L.; Liu, W. Extended belief rule-based system with accurate rule weights and efficient rule activation for diagnosis of thyroid nodules. IEEE Trans. Syst. Man Cybern. Syst. 2022, 53, 251–263. [Google Scholar] [CrossRef]
  22. Ma, J.; Zhang, A.; Gao, F.; Bi, W.; Tang, C. A novel rule generation and activation method for extended belief rule-based system based on improved decision tree. Appl. Intell. 2023, 53, 7355–7368. [Google Scholar] [CrossRef]
  23. Leung, Y.; Fischer, M.M.; Wu, W.Z.; Mi, J.S. A rough set approach for the discovery of classification rules in interval-valued information systems. Int. J. Approx. Reason. 2008, 47, 233–246. [Google Scholar] [CrossRef]
  24. Zhang, Y.; Li, T.; Luo, C.; Zhang, J.; Chen, H. Incremental updating of rough approximations in interval-valued information systems under attribute generalization. Inf. Sci. 2016, 373, 461–475. [Google Scholar] [CrossRef]
  25. Zhang, X.; Mei, C.; Chen, D.; Li, J. Multi-confidence rule acquisition and confidence-preserved attribute reduction in interval-valued decision systems. Int. J. Approx. Reason. 2014, 55, 1787–1804. [Google Scholar] [CrossRef]
  26. Cheng, X.; Qian, G.; He, W.; Zhou, G. A Liquid Launch Vehicle Safety Assessment Model Based on Semi-Quantitative Interval Belief Rule Base. Mathematics 2022, 10, 4772. [Google Scholar] [CrossRef]
  27. Han, P.; He, W.; Cao, Y.; Li, Y.; Mu, Q.; Wang, Y. Lithium-ion battery health assessment method based on belief rule base with interpretability. Appl. Soft Comput. 2023, 138, 110160. [Google Scholar] [CrossRef]
  28. Si, Z.; Shen, J.; He, W. Lithium-Ion Battery Health Assessment Method Based on Double Optimization Belief Rule Base with Interpretability. Batteries 2024, 10, 323. [Google Scholar] [CrossRef]
  29. Yang, L.H.; Liu, J.; Wang, Y.M.; Martínez, L. Extended belief-rule-based system with new activation rule determination and weight calculation for classification problems. Appl. Soft Comput. 2018, 72, 261–272. [Google Scholar] [CrossRef]
  30. Gao, F.; Zhang, A.; Bi, W.; Ma, J. A greedy belief rule base generation and learning method for classification problem. Appl. Soft Comput. 2021, 98, 106856. [Google Scholar] [CrossRef]
  31. Fu, Y.; Yin, Z.; Su, M.; Wu, Y.; Liu, G. Construction and reasoning approach of belief rule-base for classification base on decision tree. IEEE Access 2020, 8, 138046–138057. [Google Scholar] [CrossRef]
  32. Yang, L.H.; Wang, Y.M.; Lan, Y.X.; Chen, L.; Fu, Y.G. A data envelopment analysis (DEA)-based method for rule reduction in extended belief-rule-based systems. Knowl.-Based Syst. 2017, 123, 174–187. [Google Scholar] [CrossRef]
  33. Hao, G.; Feng, H.; Yu, H.; Zhao, Y.; Li, J.; Haotian, W. Calculation method of compressive strength of recycled concrete considering substitution rate and brick aggregate quality. Hebei J. Ind. Sci. Technol. 2023, 40, 202–209. [Google Scholar]
Figure 1. IRIN network framework.
Figure 2. Flowchart of IRIN parameter determination.
Figure 3. Belief distribution comparison.
Figure 4. Comparison of performance indicators of various classification methods.
Table 1. Rule generation in IBRB.

| Rule | Rule Weight | Antecedent $U_1(\bar{\delta}_1)$ | ⋯ | Antecedent $U_T(\bar{\delta}_T)$ | Consequent $D_1,\dots,D_{J_D}$ |
|---|---|---|---|---|---|
| $R_1$ | $\theta_1$ | $\{(A_{1,1}^1,a_{1,1}^1),\dots,(A_{1,J}^1,a_{1,J}^1)\}$ | ⋯ | $\{(A_{T,1}^1,a_{T,1}^1),\dots,(A_{T,J}^1,a_{T,J}^1)\}$ | $(\beta_{1,1},\dots,\beta_{1,J_D})$ |
| $R_k$ | $\theta_k$ | $\{(A_{1,1}^k,a_{1,1}^k),\dots,(A_{1,J}^k,a_{1,J}^k)\}$ | ⋯ | $\{(A_{T,1}^k,a_{T,1}^k),\dots,(A_{T,J}^k,a_{T,J}^k)\}$ | $(\beta_{k,1},\dots,\beta_{k,J_D})$ |
| $R_L$ | $\theta_L$ | $\{(A_{1,1}^L,a_{1,1}^L),\dots,(A_{1,J}^L,a_{1,J}^L)\}$ | ⋯ | $\{(A_{T,1}^L,a_{T,1}^L),\dots,(A_{T,J}^L,a_{T,J}^L)\}$ | $(\beta_{L,1},\dots,\beta_{L,J_D})$ |
Table 2. Datasets information.

| Dataset | Samples | Number of Attributes | Number of Classes |
|---|---|---|---|
| Iris | 150 | 4 | 3 |
| Seeds | 209 | 7 | 3 |
| Ecoli | 336 | 7 | 8 |
| Glass | 214 | 9 | 7 |
| Haberman | 305 | 3 | 2 |
| Bupa | 344 | 6 | 2 |
| Diabetes | 768 | 8 | 2 |
Table 3. The IBRB of Iris.

| Rule | Rule Weight | $A_1$ (0.2496) | $A_2$ (0.6754) | $A_3$ (0.1146) | $A_4$ (0.8475) | $D_1$ | $D_2$ | $D_3$ |
|---|---|---|---|---|---|---|---|---|
| $R_1$ | 0.4952 | L([0.3333, 0.3544]) M([0.6456, 0.6667]) H([0.0, 0.0]) | L([0.6667, 0.7254]) M([0.2747, 0.3333]) H([0.0, 0.0]) | L([0.0899, 0.1509]) M([0.8492, 0.9100]) H([0.0, 0.0]) | L([0.1667, 0.2627]) M([0.7375, 0.8334]) H([0.0, 0.0]) | [0.2907, 0.3005] | [0.3076, 0.3769] | [0.3322, 0.3917] |
| $R_{15}$ | 0.8998 | L([0.0002, 0.008]) M([0.992, 0.9998]) H([0.0, 0.0]) | L([0.1667, 0.1766]) M([0.8234, 0.8333]) H([0.0, 0.0]) | L([0.0, 0.0]) M([0.6992, 0.7779]) H([0.2221, 0.3008]) | L([0.0, 0.0]) M([0.5833, 0.6779]) H([0.3221, 0.4167]) | [0.2181, 0.3661] | [0.0595, 0.0935] | [0.5404, 0.7224] |
| $R_{30}$ | 0.6762 | L([0.0, 0.0]) M([0.8333, 0.8446]) H([0.1554, 0.1667]) | L([0.0, 0.1861]) M([0.8139, 1.0]) H([0.0, 0.0]) | L([0.0, 0.0]) M([0.8298, 0.9136]) H([0.1667, 0.1701]) | L([0.0, 0.0]) M([0.8333, 0.8412]) H([0.1588, 0.1667]) | [0.1449, 0.2777] | [0.1119, 0.4737] | [0.3812, 0.6103] |
Table 4. Comparison of classification results with other BRB systems.

| Method | Iris | Seeds | Ecoli | Glass | Diabetes | Average Rank |
|---|---|---|---|---|---|---|
| EBRB [10] | 95.2% (9) | 87.1% (9) | 81.2% (8) | 51.4% (10) | - | 36 (11) |
| Yang-EBRB [29] | 95.8% (6) | 91.2% (7) | 59.3% (13) | 58.8% (9) | - | 35 (9) |
| GSR-BRB [30] | 98.6% (1) | 94.3% (3) | 78.3% (9) | 69.1% (7) | 72.9% (3) | 23 (5) |
| FG-BRB [18] | 95.3% (8) | 91.2% (7) | 75.3% (11) | 68.2% (8) | 72.7% (4) | 38 (12) |
| RD-BRB [18] | 98.3% (2) | 94.2% (4) | 75.8% (10) | 68.2% (8) | 72.7% (4) | 28 (8) |
| DT-BRB [31] | 96.0% (5) | 94.3% (3) | 82.5% (6) | 72.9% (4) | - | 18 (3) |
| AP-EBRB [21] | 95.8% (6) | 97.6% (1) | 87.1% (2) | 50.0% (11) | - | 20 (4) |
| DEA-EBRB [32] | 95.4% (7) | 91.7% (6) | 83.3% (5) | 69.5% (6) | 74.2% (2) | 26 (7) |
| IDT-EBRB [22] | 97.3% (3) | 93.3% (5) | 84.6% (4) | 70.4% (5) | - | 17 (2) |
| MTS-BRB [19] | 96.0% (5) | 87.1% (9) | 81.5% (7) | 73.4% (2) | 68.0% (5) | 28 (8) |
| FBDT [20] | 96.0% (5) | 85.2% (10) | 84.6% (4) | 65.4% (12) | 75.3% (1) | 31 (9) |
| RIN [17] | 96.6% (4) | 90.5% (8) | 84.8% (3) | 60.5% (7) | - | 22 (5) |
| IRIN | 96.6% (4) | 95.2% (2) | 88.1% (1) | 76.7% (1) | 75.3% (1) | 9 (1) |
Table 5. Performance index values of each classification method.

| Method | Recall | Precision | F1 Score |
|---|---|---|---|
| MLP | 60.00% | 36.00% | 45.00% |
| DTC | 92.86% | 94.05% | 92.97% |
| SVC | 35.71% | 12.76% | 18.80% |
| KNC | 73.33% | 76.43% | 73.57% |
| RFC | 92.86% | 94.05% | 92.97% |
| RIN | 86.67% | 86.67% | 86.67% |
| IRIN | 93.94% | 93.93% | 93.12% |
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
