Article

Explaining a Logic Dendritic Neuron Model by Using the Morphology of Decision Trees

Xingqian Chen, Honghui Fan, Wenhe Chen, Yaoxin Zhang, Dingkun Zhu and Shuangbao Song
1 School of Computer Engineering, Jiangsu University of Technology, Changzhou 213001, China
2 Shanghai Huace Navigation Technology Co., Ltd., Shanghai 201700, China
3 School of Computer Science and Artificial Intelligence, Changzhou University, Changzhou 213164, China
* Author to whom correspondence should be addressed.
Electronics 2024, 13(19), 3911; https://doi.org/10.3390/electronics13193911
Submission received: 2 September 2024 / Revised: 20 September 2024 / Accepted: 30 September 2024 / Published: 3 October 2024
(This article belongs to the Section Artificial Intelligence)

Abstract

The development of explainable machine learning methods is attracting increasing attention. Dendritic neuron models have emerged as powerful machine learning methods in recent years. However, providing explainability to a dendritic neuron model has not been explored. In this study, we propose a logic dendritic neuron model (LDNM) and discuss its characteristics. Then, we use a tree-based model called the morphology of decision trees (MDT) to approximate LDNM to gain its explainability. Specifically, a trained LDNM is simplified by a proprietary structure pruning mechanism. Then, the pruned LDNM is further transformed into an MDT, which is easy to understand, to gain explainability. Finally, six benchmark classification problems are used to verify the effectiveness of the structure pruning and MDT transformation. The experimental results show that MDT can provide competitive classification accuracy compared with LDNM, and the concise structure of MDT can provide insight into how the classification results are concluded by LDNM. This paper provides a global surrogate explanation approach for LDNM.

1. Introduction

With the rapid development of artificial intelligence (AI) in recent years, machine learning (ML), a subset of AI, has been widely used in many fields of society [1,2,3]. However, some scientists and regulators have a cautious attitude toward ML techniques when they are applied in sensitive domains [4,5,6,7], such as law, finance, and health care [8], because numerous ML models can provide accurate predictions but lack explainability; thus, the inner workings and predictions of these models are not understandable to humans [9]. Hence, research concerning the explainability of ML has attracted great attention from researchers in recent years [10].
An ML model whose intrinsic architecture can provide human-comprehensible explanations is considered an interpretable (or transparent) model [11]. Typical interpretable models include linear models, decision trees, and k-nearest neighbors (KNN). However, numerous ML models, such as random forest, support vector machine (SVM), and multilayer neural networks, are considered black box (or opaque) models because they are not “interpretable by nature” [12]. Numerous methods have been proposed in the literature to provide explainability to black box models. These methods can be classified into the following two categories [6,12]: global explanation approaches and local explanation approaches. Global explanation approaches aim to provide insight into the whole logic of a black box model; a representative example is partial dependence plots [13]. Local explanation approaches, such as Anchors [14], attempt to explain how a prediction is made by a black box model based on a specific instance and its neighbors [15]. Moreover, these two categories can be further subdivided according to whether surrogate models are used: in surrogate approaches, a model that is much easier to interpret approximates the black box model to provide explainability. Currently, inTrees [16] is a popular global surrogate explanation approach, while LIME [17] and SHAP [18,19] are the most popular local surrogate explanation approaches.
Decision trees belong to a category of interpretable models because they construct tree-like structures that can be directly explained to gain explainability. Several explanation approaches employing decision trees as the core techniques have been proposed in recent years. For example, Zilke et al. proposed an approach called DeepRED to extract rules from deep neural networks [20]. The rules created by C4.5 decision trees for hidden layers are used to explain the whole network. Wu et al. trained a neural network model with novel tree regularization [21]. Decision trees are used to approximate the neural network model for interpretability. Deng proposed an explanation approach called inTrees to gain the explainability of tree ensembles [16]. This approach can extract key rules from a tree ensemble and calculate frequent variable interactions. Lundberg et al. proposed an explanation approach called TreeExplainer for explaining tree-based ML models [22]. The novelty of this approach is interpreting global model structure by combining numerous local explanations. These studies indicate that tree-based methods can serve as promising alternatives for solving the black box model explanation problem.
A neuron is the fundamental unit of the brain and consists of the following three components: dendrites, a cell body, and an axon. McCulloch and Pitts were the pioneers who first proposed an artificial neuron model in 1943 [23]. Subsequently, Rosenblatt improved the McCulloch–Pitts neuron and invented the perceptron [24]. These models laid a foundation for research involving artificial neural networks but are considered simplistic because they can solve linearly separable problems but are unable to solve nonlinearly separable problems, such as the XOR problem [25]. Later, scientists confirmed the importance of dendritic branches in neuron computational power [25,26]. Consequently, numerous neuron models involving the nonlinear mechanisms of dendrites have been proposed in the literature [27,28,29,30,31]. Moreover, synaptic pruning and dendritic pruning are considered two important mechanisms for learning [32,33]. Incorporating these relevant mechanisms into artificial neuron models is considered a promising way to improve their performance.
In recent years, researchers have paid continuous attention to developing and improving artificial neuron models with dendrite morphology. For example, Sossa et al. proposed a dendrite morphological neural network with an efficient training algorithm [34]. Gómez-Flores et al. incorporated smooth maximum and minimum functions into a dendrite morphological neuron model to generate smooth decision boundaries [35]. Luo et al. proposed a decision tree-based method to properly initialize the synaptic weights of a dendritic neuron model [36]. Bicknell et al. developed a pyramidal neuron model and a synaptic learning rule to investigate the nonlinear computations performed in dendrites [37]. In our previous studies, we proposed a series of dendritic neuron models to address several practical problems [38,39,40,41,42]. These studies indicate that a single neuron with dendrite morphology can perform network-level computations in various ML tasks. However, notably, these dendritic neuron models serve as black box models when applied to ML problems. Similar to many sophisticated ML models, dendritic neuron models can make accurate predictions but provide limited explainability. Although research concerning the explainability of ML has become an important issue in the field of artificial intelligence [10,12,43], to the best of the authors' knowledge, providing explainability to a dendritic neuron model has not yet been well explored.
In this study, we attempt to provide explainability to the proposed logic dendritic neuron model (LDNM) by using a tree-based model called the morphology of decision trees (MDT). We first review the definition of LDNM. Then, a structure pruning mechanism based on the characteristics of LDNM is proposed to simplify the structure of a trained LDNM. Next, we propose a method to transform a pruned LDNM into an MDT, which can provide explainability to LDNM. Finally, six benchmark classification problems are used to verify the classification performance of LDNM, the effectiveness of the structure pruning, and the MDT transformation. The contributions of this paper are threefold. First, we propose a method to transform a pruned LDNM into an MDT; to the best of the authors' knowledge, such a transformation method has not been explored before. Second, a global surrogate explanation approach is proposed for LDNM. Third, the effectiveness of the explanation approach is experimentally investigated.
The remainder of this paper is organized as follows. Section 2 presents the definition of the proposed LDNM and a description of the classification and regression tree (CART). Section 3 presents the learning method, structure pruning method, and MDT transformation method of LDNM. The experimental studies are provided in Section 4. Finally, the conclusion of this study is provided in Section 5.

2. Materials

In this section, we introduce two important concepts: LDNM and CART.

2.1. LDNM Formulation

A biological neuron is composed of the following three typical components: a cell body, dendrites, and a single axon. LDNM is composed of the following four components: synaptic layers, dendrite layers, a membrane layer, and a cell body. Figure 1 shows the architecture of the LDNM, and the four components are defined in the following.
Synaptic layer: The synaptic layers receive nerve signals from presynaptic neurons. A synapse connects a synaptic input with a dendritic branch, and a logistic function is employed to formulate this connection. The output Y_{i,j} of the ith (i = 1, 2, …, n) synaptic input located on the jth (j = 1, 2, …, m) dendritic branch is expressed as follows:
Y_{i,j} = \frac{1}{1 + e^{-k (w_{i,j} x_i - q_{i,j})}}        (1)
where k is a positive parameter. x_i is the ith (i = 1, 2, …, n) synaptic input and ranges from 0 to 1. w_{i,j} and q_{i,j} are the weights of a synapse and need to be adjusted by a learning algorithm. In addition, all w_{i,j} and q_{i,j} are randomly initialized in [−1.5, 1.5] before training the LDNM.
Dendritic layer: The dendritic layers perform a multiplicative operation, which is believed to exist on the dendritic tree of a biological neuron [44]. This operation can be considered a logical AND operation in the binary case. In the proposed LDNM, the jth dendritic layer is expressed as follows:
Z_j = \prod_{i=1}^{n} Y_{i,j}        (2)
where Z_{j} represents the output of the jth dendritic layer.
Membrane layer: The membrane layer processes the signals conveyed from all dendritic branches. A summation operation is employed to imitate the calculation performed in the membrane layer. This operation can be considered a logical OR operation when the dendritic signals are binary. The output of membrane layer V can be calculated as follows:
V = \sum_{j=1}^{m} Z_j        (3)
Cell body: The cell body is enclosed by its membrane. The cell body receives signals from the membrane and transports the processed signals up the axon. A logistic function is employed to simulate the information processing of the cell body as follows:
O = \frac{1}{1 + e^{-c (V - \gamma)}}        (4)
where c is a constant parameter. γ is the threshold used for the classification task. O is the actual output of the cell body.
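To make the layer definitions concrete, the following minimal NumPy sketch (ours, not the authors' released implementation) computes the forward pass of Equations (1)–(4); the values k = 5, c = 5, γ = 0.5, and the weight range [−1.5, 1.5] follow the settings stated in this paper.

```python
import numpy as np

def ldnm_forward(x, w, q, k=5.0, c=5.0, gamma=0.5):
    """Forward pass of an LDNM.

    x : (n,) synaptic inputs, scaled to [0, 1]
    w, q : (n, m) synaptic weights, one column per dendritic branch
    Returns the cell-body output O in (0, 1).
    """
    # Synaptic layer, Eq. (1): logistic connection for every synapse
    Y = 1.0 / (1.0 + np.exp(-k * (w * x[:, None] - q)))
    # Dendritic layer, Eq. (2): multiplication (soft logical AND) along each branch
    Z = np.prod(Y, axis=0)
    # Membrane layer, Eq. (3): summation (soft logical OR) over branches
    V = np.sum(Z)
    # Cell body, Eq. (4): logistic squashing with threshold gamma
    return 1.0 / (1.0 + np.exp(-c * (V - gamma)))

# Example: n = 4 features, m = 2n = 8 branches, random weights in [-1.5, 1.5]
rng = np.random.default_rng(0)
n, m = 4, 8
w = rng.uniform(-1.5, 1.5, size=(n, m))
q = rng.uniform(-1.5, 1.5, size=(n, m))
x = rng.uniform(0.0, 1.0, size=n)
print(ldnm_forward(x, w, q))
```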

2.2. Classification and Regression Tree

The decision tree is a popular ML method that uses a tree-like structure to build a learning model. CART [45] is a typical decision tree model that constructs a binary tree for classification and regression tasks. There are two types of nodes in a CART. Each internal node stores an integer d that labels the tested feature and a threshold value φ. Each leaf node stores a variable p that represents a class label (for the classification task) or a real number (for the regression task). CART makes a prediction by executing a top-down search from the root to a leaf. At each internal node, if x_d ≤ φ, the left subtree is explored; otherwise, the right subtree is explored. Figure 2 shows an example of CART for solving the XOR problem.
A trained CART can be translated into a set of decision rules. A decision rule is a decision path (starting from the root node to a leaf node) and has the following form:
IF (condition_1 AND condition_2 AND …) THEN outcome;
where each condition corresponds to an internal node, and the outcome corresponds to the leaf node. For example, the CART shown in Figure 2 can be translated into the following decision rules:
IF (x_1 <= 0.5 AND x_2 <= 0.5) THEN p = 0;
IF (x_1 <= 0.5 AND x_2 > 0.5) THEN p = 1;
IF (x_1 > 0.5 AND x_2 <= 0.5) THEN p = 1;
IF (x_1 > 0.5 AND x_2 > 0.5) THEN p = 0;
Moreover, for binary classification, the aforementioned IF-THEN statements with p = 0 can be merged into an ELSE statement.
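For illustration, the rules above can be written directly as a short Python function (our rendering, not code from the paper); the two rules with p = 0 are merged into the final ELSE branch.

```python
def cart_xor(x1: float, x2: float) -> int:
    """Decision rules extracted from the CART of Figure 2 for the XOR problem."""
    if x1 <= 0.5 and x2 > 0.5:
        return 1
    if x1 > 0.5 and x2 <= 0.5:
        return 1
    return 0  # the two rules with p = 0 merged into an ELSE statement

for a in (0, 1):
    for b in (0, 1):
        print(a, b, cart_xor(a, b))  # reproduces the XOR truth table
```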
The decision tree method is considered simpler and more interpretable than other ML methods because the workflow of decision trees is very similar to human decision behaviors. A trained decision tree can easily be visualized and understood.

3. Methodology

In this section, we provide the details of the proposed approach.

3.1. Learning Method

Since the proposed LDNM is a feedforward neural network, the backpropagation (BP) algorithm is employed as the learning method [39]. Given the training data with P samples, the error function of LDNM in the pth training sample is defined as follows:
E_p = \frac{1}{2} (T_p - O_p)^2        (5)
T_p is the target output of the pth training sample, and O_p is the actual output of LDNM. At each training epoch, the synaptic weights w_{i,j} and q_{i,j} are adjusted based on the gradient descent method as follows:
w_{i,j} = w_{i,j} - \eta \sum_{p=1}^{P} \frac{\partial E_p}{\partial w_{i,j}}        (6)
q_{i,j} = q_{i,j} - \eta \sum_{p=1}^{P} \frac{\partial E_p}{\partial q_{i,j}}        (7)
where η is the learning rate.
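The following sketch illustrates the batch gradient-descent update of Equations (5)–(7); the closed-form gradients are our own chain-rule derivation from Equations (1)–(5), and the code is not the authors' implementation.

```python
import numpy as np

def train_ldnm(X, T, m, k=5.0, c=5.0, gamma=0.5, eta=0.01, epochs=2000, seed=0):
    """Train an LDNM with batch gradient descent (Eqs. (5)-(7)).

    X : (P, n) training inputs in [0, 1];  T : (P,) binary targets.
    Returns the learned weight matrices w and q of shape (n, m).
    """
    P, n = X.shape
    rng = np.random.default_rng(seed)
    w = rng.uniform(-1.5, 1.5, size=(n, m))
    q = rng.uniform(-1.5, 1.5, size=(n, m))
    for _ in range(epochs):
        gw = np.zeros_like(w)
        gq = np.zeros_like(q)
        for x, t in zip(X, T):
            Y = 1.0 / (1.0 + np.exp(-k * (w * x[:, None] - q)))   # Eq. (1)
            Z = np.prod(Y, axis=0)                                # Eq. (2)
            V = Z.sum()                                           # Eq. (3)
            O = 1.0 / (1.0 + np.exp(-c * (V - gamma)))            # Eq. (4)
            # Chain rule: dE/dY_{i,j} = -(t - O) * c*O*(1-O) * prod_{l != i} Y_{l,j}
            delta = -(t - O) * c * O * (1.0 - O)                  # dE/dV
            dZ_dY = Z[None, :] / np.clip(Y, 1e-12, None)          # product over l != i
            dE_dY = delta * dZ_dY
            gw += dE_dY * k * x[:, None] * Y * (1.0 - Y)          # dY/dw = k*x*Y*(1-Y)
            gq += dE_dY * (-k) * Y * (1.0 - Y)                    # dY/dq = -k*Y*(1-Y)
        w -= eta * gw                                             # Eq. (6)
        q -= eta * gq                                             # Eq. (7)
    return w, q
```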

3.2. Structure Pruning

Synaptic pruning is observed in the human brain, where many axons and dendrites are eliminated during development. Neural network pruning increases the efficiency of a trained neural network by removing redundant weights. A trained LDNM can also be pruned by specific mechanisms. The main idea is that synapses contributing minimally to the output of the trained LDNM can be removed. Thus, identifying the unimportant synapses in a trained LDNM is the key issue in pruning it.
Equation (1) describes the relation between the synaptic input x_i and the output Y_{i,j}, namely, the connection state. Obviously, the connection state is uniquely determined by the synaptic weights w_{i,j} and q_{i,j}. According to the values of w_{i,j} and q_{i,j}, a synapse in the trained LDNM stays in one of the following four types of connection states: direct connection, inverse connection, constant 1 connection, and constant 0 connection. Four types of symbols are used to represent these connection states, as shown in Figure 3. Figure 4 shows how the synaptic weights w_{i,j} and q_{i,j} determine the connection state of a trained synapse. A parameter θ_{i,j} = q_{i,j}/w_{i,j} is defined as the threshold of the trained synapse. Obviously, θ_{i,j} is the midpoint of the modified logistic function (Equation (1)) on the horizontal axis, as shown in Figure 4. The four connection states are explained as follows.
State 1.1: w_{i,j} > 0, q_{i,j} < 0 < w_{i,j}; for example, w_{i,j} = 1, q_{i,j} = −0.5, and θ_{i,j} = −0.5, as shown in Figure 4a. This state is a constant 1 connection because the output is almost 1 for any x_i in [0, 1].
State 1.2: w_{i,j} < 0, q_{i,j} < w_{i,j} < 0; for example, w_{i,j} = −1, q_{i,j} = −1.5, and θ_{i,j} = 1.5, as shown in Figure 4d. This state is also a constant 1 connection for the abovementioned reason.
State 2: w_{i,j} > 0, 0 < q_{i,j} < w_{i,j}; for example, w_{i,j} = 1, q_{i,j} = 0.5, and θ_{i,j} = 0.5, as shown in Figure 4b. This state is a direct connection because a larger input x_i leads to a larger output Y_{i,j}.
State 3: w_{i,j} < 0, w_{i,j} < q_{i,j} < 0; for example, w_{i,j} = −1, q_{i,j} = −0.5, and θ_{i,j} = 0.5, as shown in Figure 4e. This state is an inverse connection because a larger input x_i leads to a smaller output Y_{i,j}.
State 4.1: w_{i,j} > 0, 0 < w_{i,j} < q_{i,j}; for example, w_{i,j} = 1, q_{i,j} = 1.5, and θ_{i,j} = 1.5, as shown in Figure 4c. This state is a constant 0 connection because the output is almost 0 for any x_i in [0, 1].
State 4.2: w_{i,j} < 0, w_{i,j} < 0 < q_{i,j}; for example, w_{i,j} = −1, q_{i,j} = 0.5, and θ_{i,j} = −0.5, as shown in Figure 4f. This state is also a constant 0 connection for the abovementioned reason.
After training, each synapse in an LDNM stays in one of the four connection states. As described in Section 2.1, the multiplicative operations in the dendritic layers are considered logical AND operations in the binary case. Since the outputs of the constant 1 connection and the constant 0 connection are always 1 and 0, respectively, these two types of connection states play special roles in the output of a trained LDNM. As a result, two pruning mechanisms can be performed on a trained LDNM.
Synaptic pruning: Given a synapse in the constant 1 state and the dendritic layer connected to this synapse, the synaptic output Y_{i,j} = 1 has no effect on the output of the dendritic layer Z_j because of the Boolean operation “x AND 1 = x”. Therefore, this constant 1 synapse is redundant and can be removed from the trained LDNM. Figure 5a shows how the synaptic pruning operation functions.
Dendritic pruning: Given a synapse in the constant 0 state and the dendritic layer connected to this synapse, the synaptic output Y_{i,j} = 0 leads the dendritic layer to always output Z_j = 0 because of the Boolean operation “x AND 0 = 0”. Moreover, the dendritic layer with Z_j = 0 has no effect on the output of the membrane layer because of the logical OR operation in the membrane layer. As a result, this dendritic layer is redundant and can be removed from the trained LDNM. Figure 5b shows how the dendritic pruning operation functions.
After synaptic pruning and dendritic pruning, all synapses in the constant 1 and constant 0 connection states and all redundant dendritic layers are removed from a trained LDNM. Consequently, the structure of the pruned LDNM is much simpler than that of the original LDNM.
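The two pruning mechanisms can be summarized in the sketch below (our reading of the rules above, not the authors' code): each synapse is classified from the signs of w_{i,j} and q_{i,j} and the threshold θ_{i,j} = q_{i,j}/w_{i,j}, constant 1 synapses are dropped from their branches, and any branch containing a constant 0 synapse is removed entirely.

```python
def connection_state(w, q):
    """Classify one trained synapse by w, q and theta = q / w (cf. Figure 4)."""
    theta = q / w
    if w > 0:
        if theta < 0:        # State 1.1
            return "constant1"
        if theta < 1:        # State 2 (0 < q < w)
            return "direct"
        return "constant0"   # State 4.1
    else:
        if theta > 1:        # State 1.2 (q < w < 0)
            return "constant1"
        if theta > 0:        # State 3 (w < q < 0)
            return "inverse"
        return "constant0"   # State 4.2

def prune_ldnm(w, q):
    """Return the surviving branches as lists of (feature index, state, threshold)."""
    n, m = w.shape
    branches = []
    for j in range(m):
        branch = []
        dead = False
        for i in range(n):
            state = connection_state(w[i, j], q[i, j])
            if state == "constant0":
                dead = True          # dendritic pruning: x AND 0 = 0, drop the branch
                break
            if state == "constant1":
                continue             # synaptic pruning: x AND 1 = x, drop the synapse
            branch.append((i, state, q[i, j] / w[i, j]))
        if not dead and branch:
            branches.append(branch)
    return branches
```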

3.3. Transformation into a Morphology of Decision Trees

The logistic function and the step function are two common activation functions used in artificial neural networks [46]. The logistic function has the following form:
f(x) = \frac{1}{1 + e^{-k (x - \mu)}}        (8)
where μ is the midpoint of the logistic function. k ( k > 0 ) determines the steepness of the curve as shown in Figure 6a. The step function is defined as follows:
H(x) = \begin{cases} 0, & x \le \mu \\ 1, & x > \mu \end{cases}        (9)
where μ is the threshold of the step function, as shown in Figure 6b. The logistic function approaches the step function as k approaches positive infinity; in other words, the logistic function can be regarded as a smooth approximation of the step function, suggesting that these two functions are interchangeable in some cases. In fact, the step function served as the activation function in early artificial neural networks (ANNs), such as the McCulloch–Pitts neuron, where the aggregation of the weighted inputs is compared with a predefined threshold. Since logistic functions are monotonic and differentiable, they fit gradient-based training methods and have become more popular in the field of ANNs.
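A quick numerical check (ours) of this relationship: as k grows, the logistic output saturates toward 0 or 1 on either side of the midpoint μ, i.e., toward the step function H(x).

```python
import numpy as np

mu = 0.5
xs = np.array([0.3, 0.49, 0.51, 0.7])      # points on both sides of the midpoint
for k in (1, 5, 25, 125):
    f = 1.0 / (1.0 + np.exp(-k * (xs - mu)))
    print(k, np.round(f, 3))
# With increasing k the values approach H(x): 0 for x <= mu and 1 for x > mu.
```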
As shown in Equation (1), a modified logistic function is used to model the synaptic layer. After synaptic pruning and dendritic pruning, only direct connections and inverse connections remain in the pruned LDNM. The discussion above suggests that the step function can serve as a substitute for modified logistic functions. For simplicity, the direct connection (Figure 4b) can be replaced with the following step function:
Y_{i,j} = \begin{cases} 0, & x_i \le \theta_{i,j} \\ 1, & x_i > \theta_{i,j} \end{cases}        (10)
The inverse connection (Figure 4e) can be replaced with the following step function:
Y_{i,j} = \begin{cases} 1, & x_i \le \theta_{i,j} \\ 0, & x_i > \theta_{i,j} \end{cases}        (11)
How a pruned LDNM outputs 1 is analyzed in the following. According to Equation (4), the membrane layer output V = 1 leads to the cell body output O = 1 in the binary case, and vice versa. Moreover, since the membrane layer performs a logical OR operation, it outputs V = 1 if at least one dendritic layer outputs Z_j = 1. Furthermore, after all synapses in the constant 1 and constant 0 connection states and the redundant dendritic layers are removed, only synapses in the direct and inverse connection states remain on the dendritic branches. Since the dendritic layers perform a logical AND operation, a dendritic layer outputs Z_j = 1 only if all of its remaining direct-connection and inverse-connection synapses output Y_{i,j} = 1. Considering that the synapses in the direct and inverse connection states have the form of a step function (Equations (10) and (11)), the synaptic output Y_{i,j} is determined by comparing the feature x_i with the threshold θ_{i,j}. Therefore, the pruned LDNM can be expressed as a set of decision rules consisting of comparison, AND, and OR logical operations.
Given the example of a pruned LDNM shown in Figure 7a, based on the aforementioned analysis, the structure of the pruned LDNM can be translated into a set of decision rules as follows:
IF (x_1 > θ_{1,1} AND x_3 <= θ_{3,1}) THEN p = 1;
IF (x_1 <= θ_{1,2} AND x_2 > θ_{2,2}) THEN p = 1;
ELSE p = 0 ;
It is obvious that this set of decision rules is similar to the decision rules of a CART. For simplicity, each IF-THEN statement (corresponding to a dendritic branch) is combined with an ELSE p = 0 statement to construct an equivalent CART. As a result, two CARTs are constructed, and these CARTs have a parallel relation. Finally, a morphology of decision trees (MDT) is generated based on the pruned LDNM, as shown in Figure 7b. The prediction of the MDT is obtained by performing a logical OR operation over the predictions of all individual CARTs.
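Continuing the sketch from Section 3.2, each surviving branch becomes one CART-like rule (an AND of threshold comparisons following Equations (10) and (11)), and the MDT prediction is a logical OR over all branch rules. This is our illustrative rendering of the transformation in Figure 7, using the (feature index, state, threshold) branch descriptions produced by the prune_ldnm sketch above.

```python
def mdt_predict(x, branches):
    """Predict with an MDT: OR over branches, AND over the comparisons on a branch.

    branches : list of branches, each a list of (feature index, state, threshold),
               as returned by the prune_ldnm sketch above.
    """
    for branch in branches:
        fired = True
        for i, state, theta in branch:
            if state == "direct" and not (x[i] > theta):    # Eq. (10)
                fired = False
                break
            if state == "inverse" and not (x[i] <= theta):  # Eq. (11)
                fired = False
                break
        if fired:
            return 1   # at least one dendritic branch (CART) outputs 1
    return 0           # ELSE p = 0

# The pruned LDNM of Figure 7a corresponds to branches like
# [[(0, "direct", theta_11), (2, "inverse", theta_31)],
#  [(0, "inverse", theta_12), (1, "direct", theta_22)]]   (0-based feature indices)
```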

4. Experimental Studies

In this section, we present experimental studies to evaluate the performance of the proposed approach.

4.1. Experimental Setup

All algorithms used in this study are implemented in Python and C language. The experiments are conducted on a Linux system with a Core-i5 CPU and 16 GB RAM. According to our previous studies [39], the parameters of LDNM are set as follows: the number of dendritic branches m = 2 n (n is the number of synaptic inputs), the positive parameter k = 5 , the constant parameter c = 5 , and the threshold γ = 0.5 . The parameters of the learning method of LDNM are set as follows: the learning rate η = 0.01 , and the maximum number of epochs is set to 2000.
Six classification datasets, which can be accessed in the UCI Machine Learning Repository, are used as benchmark problems to evaluate the effectiveness of LDNM and other classifiers. These six datasets are iris, blood transfusion service center, glass identification, origin of wines, heart disease, and heart failure clinical records. In addition, all datasets are processed as binary classification problems. All features of each dataset are normalized in [ 0 , 1 ] to fit the input of LDNM. More details of these datasets are presented in Table 1.
In the experiments, stratified 5-fold cross-validation is conducted six times on each dataset to evaluate a classifier. Specifically, in a 5-fold cross-validation, the samples of a given dataset are randomly divided into five equal-sized groups. Four groups of samples are used to train the classifier, and the remaining group is used as test data; this process is rotated so that each group serves as the test data once. In total, therefore, the training and testing of a classifier on a benchmark problem are performed 30 times.
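This protocol can be reproduced with scikit-learn's RepeatedStratifiedKFold. The snippet below is a generic sketch: the decision tree is only a stand-in classifier, and the binarization of the Iris labels is our assumption, since the paper does not state how each dataset was cast to two classes.

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.model_selection import RepeatedStratifiedKFold, cross_val_score
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)
y = (y == 0).astype(int)                                     # assumed binarization of the labels
X = (X - X.min(axis=0)) / (X.max(axis=0) - X.min(axis=0))    # normalize features to [0, 1]

cv = RepeatedStratifiedKFold(n_splits=5, n_repeats=6, random_state=0)   # 30 runs in total
scores = cross_val_score(DecisionTreeClassifier(max_depth=5), X, y, cv=cv)
print(f"{100 * scores.mean():.2f} ± {100 * scores.std():.2f} (%)")
```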

4.2. Verifying the Classification Performance of LDNM

First of all, we confirm whether LDNM can serve as an effective ML model. To verify the classification performance of LDNM, we compare it with seven common classifiers: SVM, KNN, multilayer perceptron (MLP), the naive Bayes (NB) classifier, decision tree, random forest, and AdaBoost. These classifiers are implemented using the ML library scikit-learn [47]. Their parameters are set according to the recommendations of scikit-learn and are presented in Table 2. Each classifier is trained and tested 30 times on each of the six prepared classification datasets, as described above. The classification accuracies of LDNM and the seven classifiers are summarized in Table 3.
According to Table 3, KNN, SVM, NB, AdaBoost, and LDNM exhibit the best performance in one, one, one, one, and two classification problems, respectively. Decision tree, random forest, and MLP do not yield the best result in any classification problem. In terms of the number of best results, the proposed LDNM is therefore a very effective classifier for these classification problems. Moreover, the Friedman test is used to quantitatively compare the classifiers. The Friedman test ranks the eight classifiers according to their performance and provides the statistical significance; a smaller ranking value represents better performance. The statistical results obtained by the Friedman test are presented in Table 4. According to Table 4, LDNM exhibits the best performance across the six classification problems because it obtains the smallest ranking value of 2.67. In addition, the p-values for each classifier are provided in Table 4. To avoid a Type I error [48], a post hoc method, i.e., Holm's procedure, is used to adjust the p-values to p_Holm values. As shown in Table 4, the p_Holm value of random forest is smaller than the significance level (0.05). This finding suggests that the performance of LDNM is significantly better than that of random forest on the six classification problems. However, the p_Holm values of SVM, NB, decision tree, KNN, MLP, and AdaBoost are larger than 0.05; thus, LDNM is not significantly superior to these six classifiers. Overall, the comparison suggests that the proposed LDNM achieves very competitive performance compared with these classic classifiers and can be considered an effective classifier in terms of classification accuracy.
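The Friedman test can be run with SciPy and the Holm correction with statsmodels. The sketch below uses the mean accuracies of Table 3 and the unadjusted p-values of Table 4 as inputs; it demonstrates the statistical machinery only and is not the authors' evaluation script (the per-classifier p-values in Table 4 come from the post hoc comparison against LDNM based on average ranks).

```python
import numpy as np
from scipy.stats import friedmanchisquare, rankdata
from statsmodels.stats.multitest import multipletests

# rows = datasets, columns = classifiers (mean accuracies from Table 3)
acc = np.array([
    [95.56, 67.22, 94.44, 92.33, 72.11, 91.67, 93.78, 95.56],
    [77.25, 76.20, 77.18, 76.38, 77.05, 75.06, 78.12, 79.36],
    [92.91, 91.12, 94.40, 90.03, 92.29, 90.58, 94.54, 93.29],
    [95.79, 97.74, 91.83, 92.14, 97.56, 95.58, 96.61, 94.82],
    [80.25, 83.46, 76.79, 79.01, 84.14, 84.69, 79.38, 81.73],
    [68.39, 71.57, 79.54, 72.80, 82.44, 76.47, 80.66, 83.95],
])

stat, p = friedmanchisquare(*acc.T)           # one sample array per classifier
ranks = rankdata(-acc, axis=1).mean(axis=0)   # average rank per classifier (1 = best)
print("Friedman statistic:", stat, "p-value:", p)
print("average ranks:", np.round(ranks, 2))

# Holm adjustment of the pairwise p-values against the control classifier (LDNM)
pairwise_p = [0.006717, 0.059346, 0.059346, 0.125506, 0.238593, 0.409395, 0.723674]
reject, p_holm, _, _ = multipletests(pairwise_p, alpha=0.05, method="holm")
print(np.round(p_holm, 6), reject)
```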

4.3. Investigating the Effectiveness of Structure Pruning and MDT Transformation

The effectiveness of the structure pruning and the MDT transformation is verified in this subsection. For each benchmark problem, the LDNM is trained 30 times as described above. Then, the trained LDNMs, the corresponding pruned LDNMs, and the corresponding MDTs are evaluated on the same test datasets. The classification accuracies of the three classifiers are presented in Table 5. Compared with the trained LDNM, the degradation in the classification accuracy of the pruned LDNM is less than 1%, except for two problems (i.e., Heart and HeartFailure). Moreover, compared with the trained LDNM, the degradation in the classification accuracy of the MDT is commonly less than 3%. This finding indicates that the structure pruning and the MDT transformation are effective because classification accuracy is not greatly sacrificed. Moreover, we analyze the precision and recall scores of the three classifiers. As shown in Table 6 and Table 7, the precision scores of the pruned LDNM and MDT are lower than those of the trained LDNM in most cases. However, the recall scores of the pruned LDNM and MDT are higher than those of the trained LDNM in most cases. This finding indicates that structure pruning and MDT transformation improve the ability to detect positive samples (recall) but sacrifice some precision of the positive predictions.

4.4. Exhibiting the Proposed Explanation Approach

For each benchmark problem, a typical pruned LDNM and the corresponding MDT are exhibited in Figure 8 and Figure 9. It can be observed that the structures of the pruned LDNMs are concise. Table 5 reports the number of branches of the trained LDNM and the pruned LDNM for each benchmark problem. The number of branches of the pruned LDNMs is significantly reduced compared with that of the trained LDNMs; fewer than four dendritic branches are retained in most pruned LDNMs. Moreover, as shown in Figure 8 and Figure 9, only a few feature inputs are retained in the pruned LDNMs, so some irrelevant feature inputs are also removed by the structure pruning operations. Overall, the structure pruning mechanisms can greatly simplify the structure of a trained LDNM. Usually, only a small number of synapses in the direct and inverse connection states are retained in the pruned LDNMs.
As shown in Figure 8 and Figure 9, the MDTs are considered more interpretable than the pruned LDNMs because MDTs mainly comprise a small number of independent CARTs. Moreover, these CARTs have reasonable depths (usually less than four). The structures of MDTs can provide insight into how the classification results are obtained by the MDTs, the pruned LDNMs, and the trained LDNMs. As a result, the structures of MDTs help us understand the intrinsic characteristics of these benchmark problems and make LDNM explainable.

4.5. Evaluation of the Explanation Approach

Evaluating an explanation approach is commonly considered subjective [9]. The aforementioned experimental studies have revealed the comprehensibility and accuracy of the proposed explanation approach. In this subsection, we provide further evaluations of the proposed explanation approach in terms of fidelity and certainty.
Since the proposed explanation approach is a global surrogate explanation approach, it is necessary to measure how well MDT approximates LDNM. The fidelity score is a common metric used for this purpose [10]. It is defined as the percentage of predictions of the black box model that are matched by the surrogate explanation approach. Table 8 reports the fidelity scores evaluating how well MDT approximates LDNM on the test datasets. The fidelity scores are high, exceeding 90% in most cases. This finding indicates that MDT can serve as a satisfactory global surrogate explanation approach for LDNM.
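Computing the fidelity score amounts to measuring prediction agreement between the surrogate and the black box on the test set; a minimal sketch (ours) follows.

```python
import numpy as np

def fidelity_score(black_box_pred, surrogate_pred):
    """Percentage of test samples on which the surrogate (MDT) matches the black box (LDNM)."""
    black_box_pred = np.asarray(black_box_pred)
    surrogate_pred = np.asarray(surrogate_pred)
    return 100.0 * np.mean(black_box_pred == surrogate_pred)

# Example: 19 of 20 surrogate predictions match the black box, i.e., a fidelity score of 95%
print(fidelity_score([1, 0, 1, 1, 0, 1, 0, 1, 1, 0, 1, 1, 0, 1, 0, 1, 1, 0, 1, 1],
                     [1, 0, 1, 1, 0, 1, 0, 1, 1, 0, 1, 1, 0, 1, 0, 1, 1, 0, 0, 1]))
```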
Although MDT is considered more concise and can provide satisfactory classification performance, we should note that MDT cannot completely replace LDNM in some practical applications. This is because MDT does not reflect the certainty of LDNM. LDNM performs a real number computation, while MDT performs a logical operation. LDNM can provide a prediction with a probability to show the confidence. However, MDT does not support probability prediction and is not a probabilistic classifier.

5. Conclusions

In this study, we proposed an artificial neuron model called LDNM for classification tasks. To provide explainability to LDNM, we attempted to transform a trained LDNM into a concise tree-based model called MDT. Specifically, we introduced a proprietary structure pruning mechanism to simplify a trained LDNM. Then, we proposed a method to transform the pruned LDNM into an MDT. Since the structure of the MDT mainly comprises some simple decision trees, the explainability of the LDNM for a specific problem can be gained based on the corresponding MDT. Finally, six benchmark classification problems were used to confirm the effectiveness of the proposed explanation approach for LDNM. The experimental results demonstrate that the MDT can approximate a trained LDNM well and provide explainability. As a result, an effective global surrogate explanation approach for LDNM is provided in this paper. The results of this study suggest that the explainability of other dendritic neuron models can be gained by similar decision tree-based explanation approaches.
Nevertheless, this study has two limitations. First, although the experimental results demonstrate the effectiveness of the proposed explanation approach, more high-dimensional classification problems could be used to further verify its effectiveness. Second, the proposed LDNM and its explanation approach are designed to solve binary classification problems. Multiclass classification techniques, such as the one-vs-rest strategy or the one-vs-one strategy, could be used to extend the proposed approach to multiclass classification problems. This issue deserves our further research.
In future studies, we intend to apply the proposed LDNM to more problems in sensitive domains because the explainability of LDNM can contribute to understanding the intrinsic characteristics of these problems. In addition, how to utilize the explainability of LDNM to improve its learning method is worth investigating.

Author Contributions

Conceptualization, X.C., H.F. and S.S.; methodology, X.C.; software, X.C. and S.S.; validation, W.C. and D.Z.; formal analysis, Y.Z. and D.Z.; investigation, H.F.; resources, H.F., W.C. and D.Z.; data curation, X.C. and Y.Z.; writing—original draft preparation, X.C.; writing—review and editing, H.F., W.C., Y.Z., D.Z. and S.S.; visualization, Y.Z. and D.Z.; supervision, H.F.; project administration, S.S.; and funding acquisition, W.C. and S.S. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the National Natural Science Foundation of China (Grant No. 62203069), the Natural Science Foundation of Jiangsu Province of China (Grant No. BK20220619), the Qingpu District Industry University Research Cooperation Development Foundation of Shanghai (No. 202314).

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

All data are contained within the article.

Conflicts of Interest

Author W. Chen was employed by the company Shanghai Huace Navigation Technology Co., Ltd. The remaining authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

References

  1. Vamathevan, J.; Clark, D.; Czodrowski, P.; Dunham, I.; Ferran, E.; Lee, G.; Li, B.; Madabhushi, A.; Shah, P.; Spitzer, M.; et al. Applications of machine learning in drug discovery and development. Nat. Rev. Drug Discov. 2019, 18, 463–477. [Google Scholar] [CrossRef] [PubMed]
  2. Jumper, J.; Evans, R.; Pritzel, A.; Green, T.; Figurnov, M.; Ronneberger, O.; Tunyasuvunakool, K.; Bates, R.; Žídek, A.; Potapenko, A.; et al. Highly accurate protein structure prediction with AlphaFold. Nature 2021, 596, 583–589. [Google Scholar] [CrossRef] [PubMed]
  3. Kobayashi, K.; Alam, S.B. Explainable, interpretable, and trustworthy AI for an intelligent digital twin: A case study on remaining useful life. Eng. Appl. Artif. Intell. 2024, 129, 107620. [Google Scholar] [CrossRef]
  4. Jiménez-Luna, J.; Grisoni, F.; Schneider, G. Drug discovery with explainable artificial intelligence. Nat. Mach. Intell. 2020, 2, 573–584. [Google Scholar] [CrossRef]
  5. Bibal, A.; Lognoul, M.; De Streel, A.; Frénay, B. Legal requirements on explainability in machine learning. Artif. Intell. Law 2021, 29, 149–169. [Google Scholar] [CrossRef]
  6. Ding, W.; Abdel-Basset, M.; Hawash, H.; Ali, A.M. Explainability of artificial intelligence methods, applications and challenges: A comprehensive survey. Inf. Sci. 2022, 615, 238–292. [Google Scholar] [CrossRef]
  7. Sarvaiya, H.; Loya, A.; Warke, C.; Deshmukh, S.; Jagnade, S.; Toshniwal, A.; Kazi, F. Explainable Artificial Intelligence (XAI): Towards Malicious SCADA Communications. In ISUW 2020: Proceedings of the 6th International Conference and Exhibition on Smart Grids and Smart Cities, Chengdu, China, 22–24 October 2022; Springer: Singapore, 2022; pp. 151–162. [Google Scholar]
  8. Imran, S.; Mahmood, T.; Morshed, A.; Sellis, T. Big data analytics in healthcare- A systematic literature review and roadmap for practical implementation. IEEE/CAA J. Autom. Sin. 2020, 8, 1–22. [Google Scholar] [CrossRef]
  9. Miller, T. Explanation in artificial intelligence: Insights from the social sciences. Artif. Intell. 2019, 267, 1–38. [Google Scholar] [CrossRef]
  10. Guidotti, R.; Monreale, A.; Ruggieri, S.; Turini, F.; Giannotti, F.; Pedreschi, D. A survey of methods for explaining black box models. ACM Comput. Surv. (CSUR) 2018, 51, 1–42. [Google Scholar] [CrossRef]
  11. Belle, V.; Papantonis, I. Principles and Practice of Explainable Machine Learning. Front. Big Data 2021, 4, 688969. [Google Scholar] [CrossRef]
  12. Burkart, N.; Huber, M.F. A survey on the explainability of supervised machine learning. J. Artif. Intell. Res. 2021, 70, 245–317. [Google Scholar] [CrossRef]
  13. Goldstein, A.; Kapelner, A.; Bleich, J.; Pitkin, E. Peeking inside the black box: Visualizing statistical learning with plots of individual conditional expectation. J. Comput. Graph. Stat. 2015, 24, 44–65. [Google Scholar] [CrossRef]
  14. Ribeiro, M.T.; Singh, S.; Guestrin, C. Anchors: High-precision model-agnostic explanations. In Proceedings of the AAAI Conference on Artificial Intelligence, New Orleans, LA, USA, 2–7 February 2018; Volume 32. [Google Scholar]
  15. Delgado-Panadero, Á.; Hernández-Lorca, B.; García-Ordás, M.T.; Benítez-Andrades, J.A. Implementing local-explainability in gradient boosting trees: Feature contribution. Inf. Sci. 2022, 589, 199–212. [Google Scholar] [CrossRef]
  16. Deng, H. Interpreting tree ensembles with intrees. Int. J. Data Sci. Anal. 2019, 7, 277–287. [Google Scholar] [CrossRef]
  17. Ribeiro, M.T.; Singh, S.; Guestrin, C. “Why Should I Trust You?”: Explaining the Predictions of Any Classifier. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, New York, NY, USA, 13–17 August 2016; KDD ’16. pp. 1135–1144. [Google Scholar]
  18. Lundberg, S.M.; Lee, S.I. A Unified Approach to Interpreting Model Predictions. In Proceedings of the Advances in Neural Information Processing Systems; Curran Associates, Inc.: Long Beach, CA, USA, 2017; Volume 30. [Google Scholar]
  19. Chen, H.; Covert, I.C.; Lundberg, S.M.; Lee, S.I. Algorithms to estimate Shapley value feature attributions. Nat. Mach. Intell. 2023, 5, 590–601. [Google Scholar] [CrossRef]
  20. Zilke, J.R.; Loza Mencía, E.; Janssen, F. DeepRED—Rule Extraction from Deep Neural Networks. In Proceedings of the Discovery Science; Springer: Cham, Switzerland, 2016; pp. 457–473. [Google Scholar]
  21. Wu, M.; Hughes, M.; Parbhoo, S.; Zazzi, M.; Roth, V.; Doshi-Velez, F. Beyond sparsity: Tree regularization of deep models for interpretability. In Proceedings of the the AAAI Conference on Artificial Intelligence, New Orleans, LA, USA, 2–7 February 2018; Volume 32. [Google Scholar]
  22. Lundberg, S.M.; Erion, G.; Chen, H.; DeGrave, A.; Prutkin, J.M.; Nair, B.; Katz, R.; Himmelfarb, J.; Bansal, N.; Lee, S.I. From local explanations to global understanding with explainable AI for trees. Nat. Mach. Intell. 2020, 2, 56–67. [Google Scholar] [CrossRef]
  23. McCulloch, W.S.; Pitts, W. A logical calculus of the ideas immanent in nervous activity. Bull. Math. Biophys. 1943, 5, 115–133. [Google Scholar] [CrossRef]
  24. Rosenblatt, F. The perceptron: A probabilistic model for information storage and organization in the brain. Psychol. Rev. 1958, 65, 386. [Google Scholar] [CrossRef]
  25. Gidon, A.; Zolnik, T.A.; Fidzinski, P.; Bolduan, F.; Papoutsi, A.; Poirazi, P.; Holtkamp, M.; Vida, I.; Larkum, M.E. Dendritic action potentials and computation in human layer 2/3 cortical neurons. Science 2020, 367, 83–87. [Google Scholar] [CrossRef]
  26. Euler, T.; Detwiler, P.B.; Denk, W. Directionally selective calcium signals in dendrites of starburst amacrine cells. Nature 2002, 418, 845–852. [Google Scholar] [CrossRef]
  27. Poirazi, P.; Brannon, T.; Mel, B.W. Arithmetic of subthreshold synaptic summation in a model CA1 pyramidal cell. Neuron 2003, 37, 977–987. [Google Scholar] [CrossRef] [PubMed]
  28. Todo, Y.; Tamura, H.; Yamashita, K.; Tang, Z. Unsupervised learnable neuron model with nonlinear interaction on dendrites. Neural Netw. 2014, 60, 96–103. [Google Scholar] [CrossRef] [PubMed]
  29. Abbott, L.F.; DePasquale, B.; Memmesheimer, R.M. Building functional networks of spiking model neurons. Nat. Neurosci. 2016, 19, 350–355. [Google Scholar] [CrossRef]
  30. Ji, J.; Gao, S.; Cheng, J.; Tang, Z.; Todo, Y. An approximate logic neuron model with a dendritic structure. Neurocomputing 2016, 173, 1775–1783. [Google Scholar] [CrossRef]
  31. Gao, S.; Zhou, M.; Wang, Z.; Sugiyama, D.; Cheng, J.; Wang, J.; Todo, Y. Fully complex-valued dendritic neuron model. IEEE Trans. Neural Netw. Learn. Syst. 2021, 34, 2105–2118. [Google Scholar] [CrossRef]
  32. Kanamori, T.; Kanai, M.I.; Dairyo, Y.; Yasunaga, K.i.; Morikawa, R.K.; Emoto, K. Compartmentalized calcium transients trigger dendrite pruning in Drosophila sensory neurons. Science 2013, 340, 1475–1478. [Google Scholar] [CrossRef] [PubMed]
  33. Faust, T.E.; Gunner, G.; Schafer, D.P. Mechanisms governing activity-dependent synaptic pruning in the developing mammalian CNS. Nat. Rev. Neurosci. 2021, 22, 657–673. [Google Scholar] [CrossRef]
  34. Sossa, H.; Guevara, E. Efficient training for dendrite morphological neural networks. Neurocomputing 2014, 131, 132–142. [Google Scholar] [CrossRef]
  35. Gómez-Flores, W.; Sossa, H. Smooth dendrite morphological neurons. Neural Netw. 2021, 136, 40–53. [Google Scholar] [CrossRef]
  36. Luo, X.; Wen, X.; Zhou, M.; Abusorrah, A.; Huang, L. Decision-Tree-Initialized Dendritic Neuron Model for Fast and Accurate Data Classification. IEEE Trans. Neural Netw. Learn. Syst. 2022, 33, 4173–4183. [Google Scholar] [CrossRef]
  37. Bicknell, B.A.; Häusser, M. A synaptic learning rule for exploiting nonlinear dendritic computation. Neuron 2021, 109, 4001–4017. [Google Scholar] [CrossRef] [PubMed]
  38. Ji, J.; Song, S.; Tang, Y.; Gao, S.; Tang, Z.; Todo, Y. Approximate logic neuron model trained by states of matter search algorithm. Knowl.-Based Syst. 2019, 163, 120–130. [Google Scholar] [CrossRef]
  39. Song, S.; Chen, X.; Song, S.; Todo, Y. A neuron model with dendrite morphology for classification. Electronics 2021, 10, 1062. [Google Scholar] [CrossRef]
  40. Song, S.; Xu, Q.; Qu, J.; Song, Z.; Chen, X. Training a Logic Dendritic Neuron Model with a Gradient-Based Optimizer for Classification. Electronics 2022, 12, 94. [Google Scholar] [CrossRef]
  41. Song, S.; Zhang, B.; Chen, X.; Xu, Q.; Qu, J. Wart-Treatment Efficacy Prediction Using a CMA-ES-Based Dendritic Neuron Model. Appl. Sci. 2023, 13, 6542. [Google Scholar] [CrossRef]
  42. Song, Z.; Tang, C.; Song, S.; Tang, Y.; Li, J.; Ji, J. A complex network-based firefly algorithm for numerical optimization and time series forecasting. Appl. Soft Comput. 2023, 137, 110158. [Google Scholar] [CrossRef]
  43. Bonifazi, G.; Cauteruccio, F.; Corradini, E.; Marchetti, M.; Terracina, G.; Ursino, D.; Virgili, L. A model-agnostic, network theory-based framework for supporting XAI on classifiers. Expert Syst. Appl. 2024, 241, 122588. [Google Scholar] [CrossRef]
  44. Gabbiani, F.; Krapp, H.G.; Koch, C.; Laurent, G. Multiplicative computation in a visual neuron sensitive to looming. Nature 2002, 420, 320–324. [Google Scholar] [CrossRef]
  45. Carrizosa, E.; Molero-Río, C.; Romero Morales, D. Mathematical optimization in classification and regression trees. Top 2021, 29, 5–33. [Google Scholar] [CrossRef]
  46. Apicella, A.; Donnarumma, F.; Isgrò, F.; Prevete, R. A survey on modern trainable activation functions. Neural Netw. 2021, 138, 14–32. [Google Scholar] [CrossRef]
  47. Pedregosa, F.; Varoquaux, G.; Gramfort, A.; Michel, V.; Thirion, B.; Grisel, O.; Blondel, M.; Prettenhofer, P.; Weiss, R.; Dubourg, V.; et al. Scikit-learn: Machine Learning in Python. J. Mach. Learn. Res. 2011, 12, 2825–2830. [Google Scholar]
  48. García, S.; Fernández, A.; Luengo, J.; Herrera, F. Advanced nonparametric tests for multiple comparisons in the design of experiments in computational intelligence and data mining: Experimental analysis of power. Inf. Sci. 2010, 180, 2044–2064. [Google Scholar] [CrossRef]
Figure 1. The architecture of LDNM.
Figure 2. An example of CART for solving the XOR problem. (a) Inputs and outputs of the XOR problem. (b) Desired solution for the XOR problem by using CART.
Figure 3. A synapse in a trained LDNM stays in one state among the following four types of connection states: direct connection, inverse connection, constant 1 connection, and constant 0 connection.
Figure 4. Four connection states are determined by the synaptic weights w_{i,j} and q_{i,j}. Notably, the parameter θ_{i,j} = q_{i,j}/w_{i,j} is defined as the threshold of the trained synapse, and x_i ranges from 0 to 1.
Figure 5. Two examples showing how synaptic pruning and dendritic pruning are performed in trained LDNMs.
Figure 6. Curves of the logistic function and the step function. The logistic function approaches the step function as k approaches positive infinity.
Figure 7. An example of transforming a pruned LDNM into the MDT.
Figure 8. Typical pruned LDNMs and corresponding MDTs for three datasets (Iris, Transfusion, and Glass). Specifically, the thresholds in each MDT are reverted to the primitive values according to the original datasets (i.e., before normalization).
Figure 9. Typical pruned LDNMs and corresponding MDTs for three datasets (Wine, Heart, and HeartFailure). Specifically, the thresholds in each MDT are reverted to the primitive values according to the original datasets (i.e., before normalization).
Table 1. Details of the six benchmark datasets.

Dataset        Number of Samples    Number of Features    Number of Classes
Iris           150                  4                     2
Transfusion    748                  4                     2
Glass          214                  9                     2
Wine           178                  13                    2
Heart          270                  13                    2
HeartFailure   299                  12                    2
Table 2. Parameter settings of the seven classifiers.

Classifier      Key Parameters               Setting
SVM             Kernel                       Radial basis function
                Regularization parameter     1.0
KNN             Number of neighbors          5
MLP             Size of the hidden layer     100
                Learning method              Adam
NB              Assumption for features      Gaussian distribution
Decision tree   Maximum depth                5
Random forest   Number of trees              10
                Maximum depth of each tree   3
AdaBoost        Number of estimators         50
                Base estimator               Decision tree
Table 3. Comparing the classification accuracies (%) of the eight classifiers.

Dataset        KNN            SVM            Decision Tree   Random Forest   MLP             NB             AdaBoost       LDNM
Iris           95.56 ± 2.90   67.22 ± 1.51   94.44 ± 3.88    92.33 ± 4.73    72.11 ± 11.17   91.67 ± 4.28   93.78 ± 3.92   95.56 ± 3.26
Transfusion    77.25 ± 2.62   76.20 ± 0.25   77.18 ± 2.64    76.38 ± 0.61    77.05 ± 1.38    75.06 ± 2.01   78.12 ± 2.25   79.36 ± 1.97
Glass          92.91 ± 3.72   91.12 ± 3.43   94.40 ± 3.32    90.03 ± 3.96    92.29 ± 3.66    90.58 ± 3.20   94.54 ± 3.59   93.29 ± 3.72
Wine           95.79 ± 3.32   97.74 ± 1.70   91.83 ± 4.94    92.14 ± 3.79    97.56 ± 1.90    95.58 ± 3.09   96.61 ± 3.22   94.82 ± 4.24
Heart          80.25 ± 5.16   83.46 ± 4.38   76.79 ± 6.36    79.01 ± 5.35    84.14 ± 4.87    84.69 ± 4.78   79.38 ± 5.28   81.73 ± 5.51
HeartFailure   68.39 ± 4.17   71.57 ± 1.87   79.54 ± 4.49    72.80 ± 2.90    82.44 ± 4.06    76.47 ± 3.68   80.66 ± 4.09   83.95 ± 4.27
Table 4. Friedman test ranks of the eight classifiers according to their classification performance (in Table 3).

Classifier      Ranking   p          p_Holm
Random forest   6.50      0.006717   0.047017
SVM             5.33      0.059346   0.356079
NB              5.33      0.059346   0.356079
Decision tree   4.83      0.125506   0.502026
KNN             4.33      0.238593   0.715778
MLP             3.83      0.409395   0.818791
AdaBoost        3.17      0.723674   0.818791
LDNM            2.67      -          -
Table 5. Comparisons of the classification accuracies of the LDNM, the pruned LDNM, and the MDT.

Dataset        LDNM Accuracy (%)   LDNM Branches   Pruned LDNM Accuracy (%)   Pruned LDNM Branches   MDT Accuracy (%)
Iris           95.56 ± 3.26        8.00 ± 0.00     95.67 ± 3.12               1.43 ± 0.50            95.44 ± 3.04
Transfusion    79.36 ± 1.97        8.00 ± 0.00     79.01 ± 2.96               2.67 ± 0.60            76.22 ± 4.09
Glass          93.29 ± 3.72        18.00 ± 0.00    93.60 ± 4.01               2.63 ± 0.71            93.14 ± 4.10
Wine           94.82 ± 4.24        26.00 ± 0.00    94.83 ± 4.17               2.30 ± 0.69            92.94 ± 3.96
Heart          81.73 ± 5.51        26.00 ± 0.00    74.63 ± 9.49               3.17 ± 0.82            77.04 ± 6.30
HeartFailure   83.95 ± 4.27        24.00 ± 0.00    81.88 ± 5.43               3.47 ± 0.96            80.72 ± 5.26
Table 6. Comparisons of the precision scores (%) of the LDNM, the pruned LDNM, and the MDT.

Dataset        LDNM           Pruned LDNM     MDT
Iris           92.74 ± 6.68   92.74 ± 6.68    93.16 ± 5.74
Transfusion    64.87 ± 9.69   62.70 ± 11.48   51.05 ± 8.17
Glass          89.00 ± 9.65   88.23 ± 10.28   86.67 ± 10.84
Wine           96.17 ± 5.22   95.13 ± 5.20    91.51 ± 5.29
Heart          84.43 ± 7.30   68.93 ± 9.68    73.74 ± 8.84
HeartFailure   79.63 ± 7.39   69.96 ± 8.03    71.13 ± 10.96
Table 7. Comparisons of the recall scores (%) of the LDNM, the pruned LDNM, and the MDT.

Dataset        LDNM            Pruned LDNM     MDT
Iris           94.67 ± 7.18    95.00 ± 6.19    93.67 ± 7.95
Transfusion    31.64 ± 8.47    34.64 ± 9.72    52.33 ± 7.72
Glass          83.27 ± 13.05   85.88 ± 12.72   85.52 ± 11.40
Wine           90.78 ± 8.50    91.95 ± 8.99    91.21 ± 9.17
Heart          72.78 ± 10.30   84.17 ± 9.77    77.64 ± 9.83
HeartFailure   68.11 ± 12.47   78.67 ± 12.25   72.41 ± 12.99
Table 8. Fidelity scores to evaluate how well MDT approximates LDNM on the test datasets.

Dataset        Fidelity Score (%)
Iris           99.22 ± 1.86
Transfusion    86.16 ± 5.10
Glass          97.97 ± 2.33
Wine           96.45 ± 2.97
Heart          88.89 ± 6.50
HeartFailure   90.41 ± 4.72
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
