Article

Applying the Properties of Neurons in Machine Learning: A Brain-like Neural Model with Interactive Stimulation for Data Classification

1 School of Mathematics and Information Science, Guangxi University, Nanning 530004, China
2 School of Intelligent Manufacturing Engineering, Guangxi Electrical Polytechnic Institute, Nanning 530007, China
* Author to whom correspondence should be addressed.
Brain Sci. 2022, 12(9), 1191; https://doi.org/10.3390/brainsci12091191
Submission received: 15 August 2022 / Revised: 26 August 2022 / Accepted: 29 August 2022 / Published: 3 September 2022
(This article belongs to the Section Computational Neuroscience and Neuroinformatics)

Abstract
Some neural models achieve outstanding results in image recognition, semantic segmentation and natural language processing. However, on structured, small-scale datasets that do not involve feature extraction, their classification performance is worse than that of traditional algorithms, even though they require more time to train. In this paper, we propose a brain-like neural model with interactive stimulation (NMIS) that focuses on data classification. It consists of a primary neural field and a senior neural field that play different cognitive roles: the former represents real instances in the feature space, and the latter stores the category patterns. Neurons in the primary field exchange information through interactive stimulation, and their activation is transmitted to the senior field via inter-field interaction, simulating the mechanisms of neuronal interaction and synaptic plasticity, respectively. The proposed NMIS is biologically plausible and does not involve complex optimization processes. Therefore, it exhibits better learning ability on small-scale, structured datasets than traditional BP neural networks. For large-scale data classification, we propose a nearest neighbor NMIS (NN_NMIS), an optimized version of NMIS, to improve computational efficiency. Numerical experiments on several UCI datasets show that the proposed NMIS and NN_NMIS are significantly superior to some classification algorithms that are widely used in machine learning.

1. Introduction

Cognition is usually understood as the set of internal processes involved in environmental sensing and decision-making [1], and classification is considered one of its main activities. In machine learning, image recognition, semantic segmentation, natural language processing and emotion analysis are ultimately cast as classification problems [2,3,4]. Many algorithms have been proposed that seek to solve specific learning tasks by simulating the neural mechanisms of the brain's cognitive processes; they are called neural networks [5,6,7,8]. In 1958, Rosenblatt proposed the famous perceptron, which is regarded as the basic unit of modern neural networks and led to the first boom in neural network research. The BP algorithm [9] was applied to neural models in 1986, providing an error propagation method for feed-forward neural networks, and a second boom in artificial neural network research followed.
In 1998, the convolutional network LeNet [10] marked a breakthrough in image classification. A variety of deep neural networks followed, such as VGG (Visual Geometry Group) [11], GoogLeNet [12] and ResNet [13]. Deep neural models have achieved great successes [14,15,16,17], and some even surpass humans in large-scale image recognition [18,19]. Google used a multi-head attention mechanism to improve the performance of neural models in natural language processing [20,21], leading to a boom in research on transformers [22,23,24,25]. The vision transformer (ViT) [26] applied the multi-head attention mechanism to computer vision and demonstrated excellent performance in image classification. Consequently, neural networks have become essential to the development of modern machine learning.
Nevertheless, most neural models tend to be understood in terms of their engineering function while their biological interpretability is ignored [27,28,29,30]. For example, what role does a given neuron play in the whole network, and what rules govern the transmission of information between neurons? The lack of clarity regarding these mechanisms makes uncertain risks inevitable and requires tedious manual parameter tuning when designing a neural network, which in turn leads to demanding training conditions [31,32].
The success of neural networks depends on their excellent feature extraction capability, which results from their complex architectures and huge numbers of parameters; these are optimized with the BP algorithm, a mathematical tool rather than a biological rule [33,34]. When the training set is small, or the learning task does not involve feature extraction, the BP algorithm is often unnecessary. We note that, on structured and small-scale datasets, the classification performance of neural networks is weaker than that of some traditional classification algorithms that do not rely on such optimization.
In this paper, we propose a brain-like neural model, NMIS, for small-scale and structured data classification. It simulates hierarchical brain structures and their neural activities using two neural fields: a primary field and a senior field. Its primary neurons (PNs) represent real instances in the feature space, and its senior neurons (SNs) represent category patterns.
At present, research findings on memory generation generally agree that the connection between two neurons is built according to synaptic plasticity rules, such as the Hebbian rule and spike-timing-dependent plasticity (STDP). In NMIS, when the PN corresponding to an instance and the SN storing this instance's category pattern are activated by external input stimulation simultaneously, they tend to establish an inter-field connection, which represents the formation of a memory [35,36,37]. We consider that the inter-field connections between PNs and SNs also follow the Hebbian rule.
In addition, in NMIS's primary field, the PNs corresponding to instances with the same category pattern form a subpopulation [38,39,40]. Once a PN is activated, it tends to trigger other PNs in the same subpopulation and to inhibit unrelated neurons. The neuroscience literature reports that when a neuron in the brain discharges, it releases a chemical substance at the end of its axon that crosses the synaptic gap to connected neurons, activating or inhibiting them [41,42]. NMIS's PNs employ a similar information transmission mechanism, called interactive stimulation, to activate or inhibit other PNs in the primary field. Inter-field interaction is defined as the stimulation of SNs by PNs, which is unidirectional and based on the inter-field connections. Through it, an SN can perceive the PNs that are in an excited state.
The complete cognitive process of NMIS is described as follows: (1) External input stimulation activates a new PN; (2) Through interactive stimulation, other PNs in the same subpopulation as the activated PN are activated; (3) Through inter-field interaction, an SN is activated by these excited PNs, causing a category pattern to be perceived.
Consequently, in NMIS, the roles played by all neurons are clearly defined, and the information interaction mechanism among the neurons is determined explicitly by the interactive stimulation (a DOG function) or the Hebbian rule rather than a BP strategy. Compared with traditional neural networks, NMIS therefore contains no complex optimization steps, involves no manual intervention in parameter selection, avoids the "black box" issue, and performs well in small-scale data classification.
Finally, we analyze the difficulties faced by the proposed NMIS in large-scale data classification. Based on NMIS combined with a nearest neighbor strategy, we propose NN_NMIS, an optimized version of NMIS that focuses on learning from large numbers of instances. The main contributions of the paper are summarized as follows:
  • We propose a brain-like NMIS that consists of the primary field and the senior field, simulating hierarchical brain structures and their neural activities. The connections between neurons in NMIS are determined explicitly by the interactive stimulation or Hebbian rule instead of a BP strategy. So, the NMIS model does not require a complex optimization process.
  • We propose a supervised learning algorithm based on the NMIS model. This algorithm applies a clear neural mechanism that is similar to real cognitive processes and avoids the “black-box” problem. Numerical results confirm that the proposed methods perform better than some widely used classification algorithms.
  • We propose NN_NMIS for structured and large-scale data classification by combining the NMIS with a nearest neighbor strategy. The experimental results demonstrate that its performance is better than that of traditional classification algorithms.

2. Materials and Methods

2.1. The NMIS Model

There are two neural fields in NMIS that simulate the neural behaviors of cognition: the primary field and the senior field. The neurons in the primary field correspond one-to-one to the real instances in the feature space, and the neurons in the senior field correspond one-to-one to the category patterns. Similar hierarchical structures are commonly discussed in brain function research [43,44,45].
Recent investigations in cognitive science and neuroscience confirm that some neurons in the primary visual cortex V1 exhibit sharp selectivity for motion direction, and some of them possess the same preference and respond similarly to stimulation [40,46,47,48,49,50]. In NMIS, these neurons with similar behavior are defined as the PNs whose corresponding instances belong to the same category pattern. They form a subpopulation in NMIS’s primary neural field and are easily activated by each other. The information interaction among PNs is achieved through interactive stimulation, the strength of which is determined by an interaction kernel. Once a PN is activated, its interactive stimulation can activate other neurons that have a similar preference to it and inhibit unrelated neurons. When a PN that an instance corresponds to, and an SN that stores this instance’s category pattern, are activated by the external stimulation at the same time, they establish an inter-field connection. The inter-field interaction among the PNs and the SNs is based on the inter-field connections via which the excited PNs can activate the SNs that store their category pattern.
The interactive stimulation may be unidirectional for some PNs. In NMIS's primary field, each PN has a resting activation. A PN that has not established an inter-field connection with an SN is called an implicit primary neuron (IPN). The resting activation of all IPNs is a uniform value, called the intrinsic resting activation [51,52,53]. The other PNs are called explicit primary neurons (EPNs). The IPNs cannot receive interactive stimulation from other PNs, but activated IPNs can exert interactive stimulation on EPNs; hence, an IPN can only be activated by the external input stimulation from its corresponding instance. Although the activation of neurons is generally positive, to prevent the PNs from remaining in an excitable state for a long time, we set their resting activation to a negative value. In particular, the resting activation of the IPNs should be so low that they cannot be activated by the interactive stimulation from other PNs. Similar properties apply to SNs; we abbreviate the explicit senior neuron to ESN and the implicit senior neuron to ISN.
Based on these interaction mechanisms of NMIS, the activation of neurons in the two fields would be influenced by the external input stimulation, the interactive stimulation, the inter-field interaction and their resting activation. The cognitive ability of the model ultimately depends on the responses of the ESNs to external input stimulation.
We consider the binary classification problem as an example to illustrate the two neural fields of NMIS in Figure 1. We introduce the details of NMIS’s interaction mechanism in the following discussion. The symbols used in this paper are shown in Table 1.

2.1.1. The Primary Neural Field

The primary neural field corresponds to the feature space of the real instances; each primary neuron corresponds to a real instance. Suppose there are $m$ PNs (IPNs and EPNs). Since the activation of a PN is mainly affected by interactive stimulation, external input stimulation and its resting activation, we define the activation of the $i$th PN as $f_i(t)$, $i = 1, 2, \ldots, m$, in a time-continuous dynamic form. If the $i$th PN is an EPN, $f_i(t)$ satisfies the equation
$$\tau \dot{f}_i(t) = C_f\,\eta\!\left(\sum_{k=1}^{m} \omega(z_i - z_k)\,\phi(f_k(t))\right) + h_{f,i} + e_{f,i}(t) - f_i(t). \qquad (1)$$
$\tau$ determines the evolution rate of $f_i$; for simplicity, we usually let $\tau = 1$. $e_{f,i}(t)$ is the external input stimulation from the instance corresponding to the $i$th EPN. Its strength is usually taken to be 1 when external stimulation is present and 0 when it is not.
The term
$$C_f\,\eta\!\left(\sum_{k=1}^{m} \omega(z_i - z_k)\,\phi(f_k(t))\right)$$
describes the interactive stimulation received by the $i$th EPN. The function $\omega(z_i - z_k)$ is a DOG (difference of Gaussians) function that determines the strength of the interactive stimulation. Its form is
$$\omega(z_i - z_k) = A\exp\!\left(-\frac{d(z_i, z_k)^2}{\sigma_1^2}\right) - B\exp\!\left(-\frac{d(z_i, z_k)^2}{\sigma_2^2}\right),$$
where $\sigma_1$ and $\sigma_2$ are two positive constants describing the excitatory and inhibitory interactive stimulation scales of the PNs, respectively. Notice that $\sigma_1$ and $\sigma_2$ are uniform values for all PNs (EPNs and IPNs). Generally, the inhibitory scale is about three times the excitatory scale, so we let $\sigma_2 = 3\sigma_1$. The two exponential terms of $\omega(z_i - z_k)$ are usually selected as the density functions of normal distributions, so
$$A = \frac{1}{\sqrt{2\pi}\,\sigma_1}, \qquad B = \frac{1}{\sqrt{2\pi}\,\sigma_2}.$$
For convenience, we rescale so that $A - B = 1$, which makes the maximum of $\omega(z_i - z_k)$ equal to 1. Since $A/B = \sigma_2/\sigma_1 = 3$ and $A - B = 1$, we obtain two definite values:
$$A = \frac{3}{2}, \qquad B = \frac{1}{2}.$$
Therefore, $\omega(z_i - z_k)$ is finally given as
$$\omega(z_i - z_k) = \frac{3}{2}\exp\!\left(-\frac{d(z_i, z_k)^2}{\sigma_1^2}\right) - \frac{1}{2}\exp\!\left(-\frac{d(z_i, z_k)^2}{\sigma_2^2}\right).$$
Here $d(z_i, z_k)$ is a distance function; the Euclidean distance, cosine distance, etc., can be used. The cosine distance is considered more suitable for image features extracted by deep networks.
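As an illustration, the interaction kernel can be written compactly in code. The following is a minimal NumPy sketch, assuming $\sigma_2 = 3\sigma_1$ and $A = 3/2$, $B = 1/2$ as above; the function name dog_kernel and the use of Euclidean distance are our own choices.

```python
import numpy as np

def dog_kernel(zi, zk, sigma1):
    """Interaction strength between two PNs via the DOG kernel,
    with sigma2 = 3*sigma1 and A = 3/2, B = 1/2 as in the text.
    Euclidean distance is used here; cosine distance is an alternative."""
    sigma2 = 3.0 * sigma1
    d = np.linalg.norm(np.asarray(zi) - np.asarray(zk))  # d(z_i, z_k)
    return 1.5 * np.exp(-d**2 / sigma1**2) - 0.5 * np.exp(-d**2 / sigma2**2)
```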
$\phi(x)$ is an activation function for $x \in \mathbb{R}$. It is monotonically increasing, non-negative and bounded. $\eta(x)$ is a monotonically increasing threshold function; it describes the response of the $i$th EPN to the interactive stimulation it receives and satisfies
$$\lim_{x \to +\infty} \eta(x) = 1, \qquad \lim_{x \to -\infty} \eta(x) = -1.$$
The functions $\phi(x)$ and $\eta(x)$ are given as
$$\phi(x) = \begin{cases} 1 - \exp(-x), & x > 0 \\ 0, & x \le 0, \end{cases}$$
and
$$\eta(x) = \begin{cases} 1 - \exp(-x), & x > 0 \\ -1 + \exp(x), & x \le 0. \end{cases}$$
$C_f$ is a positive constant employed to limit the interactive stimulation. In most cases, $C_f = 1$.
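A minimal sketch of these two functions, assuming the piecewise forms above (the function names phi and eta and the overflow-safe clipping are ours):

```python
import numpy as np

def phi(x):
    """Activation function: monotonically increasing, non-negative, bounded by 1."""
    x = np.asarray(x, dtype=float)
    return np.where(x > 0, 1.0 - np.exp(-np.maximum(x, 0.0)), 0.0)

def eta(x):
    """Threshold function: saturates at +1 for large inputs and -1 for very negative inputs."""
    x = np.asarray(x, dtype=float)
    pos = 1.0 - np.exp(-np.maximum(x, 0.0))
    neg = -1.0 + np.exp(np.minimum(x, 0.0))
    return np.where(x > 0, pos, neg)
```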
We define $h_f$ as the resting activation of all EPNs, $h_{f,i} = h_f$. For an EPN, we specify that it can be activated by strong interactive stimulation from other PNs (EPNs and IPNs) even without any external input stimulation. So, let
$$h_{f,i} = -\alpha \cdot \max\left( C_f\,\eta\!\left(\sum_{k=1}^{m} \omega(z_i - z_k)\,\phi(f_k(t))\right) \right),$$
where $\alpha$ is a positive constant. Normally we take $\alpha = 0.2$, so $h_{f,i} = h_f = -0.2$.
Provided that the $i$th PN is an IPN, its activation behavior $f_i(t)$, $i = 1, 2, \ldots, m$, is given by the equation
$$\tau \dot{f}_i(t) = C_f\,\eta\!\left(\sum_{k=1}^{m} \omega(z_i - z_k)\,\phi(f_k(t))\right) + H_{f,i} + e_{f,i}(t) - f_i(t).$$
$H_{f,i}$ is the intrinsic resting activation of the $i$th IPN. It is assumed that an IPN can only be activated by external input stimulation. Therefore,
$$H_{f,i} \le -\max\left( C_f\,\eta\!\left(\sum_{k=1}^{m} \omega(z_i - z_k)\,\phi(f_k(t))\right) \right).$$
Generally, we let $H_{f,i} = H_f = -1$.
Further, to improve computational efficiency, the interaction term of the IPNs is dropped and the external input stimulation is replaced by a strong stimulation. The updated activation behavior of the $i$th IPN satisfies
$$f_i(t) = H_{f,i} + 2\,e_{f,i}(t).$$
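To make the dynamics concrete, the following sketch integrates the EPN equation with explicit Euler steps until the primary field is approximately stationary, using the simplified IPN rule for a single externally stimulated test instance. It reuses dog_kernel, phi and eta from the sketches above; the step size, number of steps, and function name are our own assumptions rather than part of the model specification.

```python
import numpy as np

def primary_field_stationary(Z_train, z_test, sigma1, h_f=-0.2, H_f=-1.0,
                             C_f=1.0, tau=1.0, dt=0.1, steps=500):
    """Euler-integrate the EPN dynamics until (approximately) stationary.

    Z_train : (l, d) array, instances whose PNs are EPNs.
    z_test  : (d,) array, instance whose IPN receives external stimulation.
    Returns the stationary EPN activations f (length l).
    """
    l = Z_train.shape[0]
    f = np.full(l, h_f)                      # EPNs start at their resting activation
    f_test = H_f + 2.0                       # simplified IPN rule: H_f + 2*e_f with e_f = 1
    # pairwise kernel values among EPNs and from the test IPN to each EPN
    W = np.array([[dog_kernel(Z_train[i], Z_train[k], sigma1) for k in range(l)]
                  for i in range(l)])
    w_test = np.array([dog_kernel(Z_train[i], z_test, sigma1) for i in range(l)])
    for _ in range(steps):
        interaction = C_f * eta(W @ phi(f) + w_test * phi(f_test))
        f_dot = (interaction + h_f - f) / tau   # e_{f,i} = 0 for EPNs during recognition
        f = f + dt * f_dot
    return f
```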

2.1.2. The Senior Neural Field

In NMIS, the senior neural field is used to store the category patterns. Each SN corresponds to a category pattern. When an SN is activated, this indicates that a category pattern is perceived. For convenience, the activation behavior of all SNs, whether ISNs or ESNs, is not time-continuous and there is no interactive stimulation among SNs. The activation of the SNs depends on the inter-field interaction from the primary field and the external input stimulation. In addition, we stipulate that, when the activation of EPNs in the primary field is stable, the information can be transmitted to the senior neural field through the inter-field interaction.
Suppose that there are $m_c$ SNs (ISNs and ESNs). The activation of the $j$th SN (if it is an ESN) is described by $g_j$, $j = 1, 2, \ldots, m_c$. It satisfies
$$g_j = C_g\,\phi\!\left(\frac{1}{C_j}\sum_{i=1}^{m} x_{i,j}\,\phi(f_i)\right) + h_{g,j} + e_{g,j}. \qquad (10)$$
$e_{g,j}$ is the external input stimulation from the category pattern corresponding to the $j$th ESN; it is 0 or 1.
$f_i$ represents the activation of the $i$th EPN once the activations of all EPNs in the primary neural field are stable.
$x_{i,j}$ is the inter-field connection weight between the $i$th EPN and the $j$th ESN, which relies on the Hebbian rule and mediates the inter-field interaction. Because $\phi(\cdot)$ is a non-negative threshold function, only positive inter-field interaction is considered.
$C_j$ is a positive normalization constant that improves the robustness of the model; it is given by the number of EPNs connected to the $j$th ESN by the Hebbian rule. $C_g$ is a positive constant that controls the inter-field interaction from the primary field. Commonly, $C_g = 1$.
$h_g$ is defined as the resting activation of all ESNs, $h_{g,j} = h_g$. Similar to the discussion of the resting activation of EPNs, let
$$h_{g,j} = -\alpha \cdot \max\left( C_g\,\phi\!\left(\frac{1}{C_j}\sum_{i=1}^{m} x_{i,j}\,\phi(f_i)\right) \right).$$
Then, $h_{g,j} = h_g = -0.2$.
Provided that the $j$th SN is an ISN, its activation behavior $g_j$, $j = 1, 2, \ldots, m_c$, is given by the equation
$$g_j = C_g\,\phi\!\left(\frac{1}{C_j}\sum_{i=1}^{m} x_{i,j}\,\phi(f_i)\right) + H_{g,j} + e_{g,j}.$$
Since no EPNs establish inter-field connections with the ISNs, the term
$$C_g\,\phi\!\left(\frac{1}{C_j}\sum_{i=1}^{m} x_{i,j}\,\phi(f_i)\right)$$
vanishes. Therefore, the activation behavior of the $j$th ISN is described by the equation
$$g_j = H_{g,j} + e_{g,j}.$$
$H_{g,j}$ is the intrinsic resting activation of the $j$th ISN. So that the ISN is activated when $e_{g,j} = 1$, we require
$$\lvert H_{g,j} \rvert < e_{g,j}.$$
Generally, we let $H_{g,j} = H_g = -0.9$.
Equations (1) and (10) give the activation behavior and interaction mechanism of NMIS's neurons. Compared with a traditional BP neural network, NMIS has a well-defined neural mechanism and avoids the "black box" problem; it is therefore biologically plausible.
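As an illustration of how the senior field can be evaluated once the primary field is stable, the following sketch computes all SN activations at once from the stationary EPN activations and the inter-field connection matrix. It reuses phi from the earlier sketch; treating X as an l by m_c 0/1 matrix and the vectorized form are our own choices.

```python
import numpy as np

def senior_field_activation(f, X, h_g=-0.2, H_g=-0.9, C_g=1.0, e_g=None):
    """Compute SN activations from stationary EPN activations f.

    f   : (l,) stationary EPN activations.
    X   : (l, m_c) inter-field connection matrix (Hebbian, 0/1 entries).
    e_g : (m_c,) external input to SNs; zero during recognition.
    Returns g, the activations of the m_c senior neurons.
    """
    l, m_c = X.shape
    if e_g is None:
        e_g = np.zeros(m_c)
    C_j = np.maximum(X.sum(axis=0), 1)       # number of EPNs connected to each SN
    drive = C_g * phi((X.T @ phi(f)) / C_j)  # inter-field interaction term
    has_connection = X.sum(axis=0) > 0       # distinguishes ESNs from ISNs
    g = np.where(has_connection, drive + h_g + e_g, H_g + e_g)
    return g
```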

2.2. The Cognitive Process of NMIS

In NMIS, each PN corresponds to a specific external input stimulation. Specifically, in the classification task, this external stimulation comes from an instance in the feature space. The SNs store the category patterns corresponding to these PNs. When an SN is activated, it indicates that a stored category pattern is recalled; similarly, when a PN is activated, it indicates that a specific instance is expressed.
We introduce the cognitive process of NMIS in two main parts: the memory generation stage and the external stimulation recognition stage.

2.2.1. The Memory Generation of NMIS

In NMIS, the inter-field interaction between the primary field and the senior field is realized through the inter-field connections, whose weights are given by $X = (x_{i,j})_{l \times m_c}$. For a training instance $z_i$, $i = 1, 2, \ldots, l$, of $Train = \{z_1, z_2, \ldots, z_l\}$, if the IPN corresponding to $z_i$ and the ISN corresponding to its category pattern are simultaneously activated by the external input stimulation from the instance and from its category pattern, respectively, they tend to establish an inter-field connection according to the Hebbian rule. Commonly, we let their inter-field weight be 1, so we obtain
$$x_{i,j} = \begin{cases} 1, & e_{f,i}(t) = e_{g,j} = 1 \\ 0, & \text{otherwise.} \end{cases}$$
Figure 2 shows the memory generation process of NMIS, and the state that the NMIS is in after the memory is generated.
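A minimal sketch of the memory generation step, assuming the training labels and the list of category pattern names are available as Python lists (the function name is ours):

```python
import numpy as np

def build_interfield_connections(train_labels, category_names):
    """Memory generation: connect the PN of each training instance to the SN
    of its category pattern (Hebbian rule, inter-field weight 1)."""
    l, m_c = len(train_labels), len(category_names)
    X = np.zeros((l, m_c))
    for i, label in enumerate(train_labels):
        j = category_names.index(label)   # SN storing this category pattern
        X[i, j] = 1.0                     # simultaneous activation -> connection
    return X
```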

2.2.2. The External Stimulation Recognition of NMIS

After the memory generation stage of NMIS, the IPNs corresponding to the training instances are transformed into EPNs and can easily be activated again. If a new IPN is activated by external stimulation from a test instance, some EPNs within its cognitive scale will be activated through its interactive stimulation, and the unrelated EPNs will be inhibited. The cognitive scale of a PN therefore plays an important role in NMIS; it is determined by the interactive stimulation scales $\sigma_1$ and $\sigma_2$. Next, we offer a method for calculating the interactive stimulation scale from the distribution of the instances.
For small-scale learning, we cannot determine the interactive stimulation scale from the distribution of the training instances alone; the test instances must also be used. Let $D$ be the $l \times (m - l)$ distance matrix that describes the distances between the training instances and the test instances. Its elements are
$$d_{p,q} = d(z_p, z_q),$$
where $p = 1, 2, \ldots, l$ and $q = 1, 2, \ldots, m - l$, and $d(\cdot)$ is a distance function. Let $d_{min,q}$, $q = 1, 2, \ldots, m - l$, be the minimum element of the corresponding column of $D$ and $d_{max,q}$, $q = 1, 2, \ldots, m - l$, be the maximum element of each column. Then,
$$D_{min} = [d_{min,1}, d_{min,2}, \ldots, d_{min,m-l}]$$
and
$$D_{max} = [d_{max,1}, d_{max,2}, \ldots, d_{max,m-l}]$$
indicate the minimum and maximum distances between the training instances and the test instances, respectively. We cannot evaluate the range of the categories accurately, but it is reasonable to require that the interactive stimulation induced by a new IPN activated by external input stimulation (from a test instance) be large enough to activate at least one EPN. So, when handling a small-scale dataset, the interactive stimulation scale is given by
$$\sigma_1 = \max(D_{min}).$$
Define
$$\sigma_{1,max} = \max(D_{max}) \quad \text{and} \quad \sigma_{1,min} = \min(D_{min}).$$
If the number of training instances for each category pattern is large, we can obtain enough internal information about the sub-populations from the distribution of the training instances alone. Let $\hat{D}$ be an $l \times l$ matrix that describes the distances among the training instances. Its elements are
$$\hat{d}_{p,q} = d(z_p, z_q),$$
where $p = 1, 2, \ldots, l$ and $q = 1, 2, \ldots, l$. Let $\hat{d}_{max,q}$ and $\hat{d}_{min,q}$, $q = 1, 2, \ldots, l$, be the maximum and minimum elements of the corresponding column of $\hat{D}$, respectively. Then,
$$\hat{D}_{max} = [\hat{d}_{max,1}, \hat{d}_{max,2}, \ldots, \hat{d}_{max,l}]$$
and
$$\hat{D}_{min} = [\hat{d}_{min,1}, \hat{d}_{min,2}, \ldots, \hat{d}_{min,l}].$$
Because there is sufficient distribution information, the interactive stimulation scale can be taken as
$$\sigma_1 = \mathrm{mean}(\hat{D}_{max}).$$
Define
$$\sigma_{1,max} = \max(\hat{D}_{max}) \quad \text{and} \quad \sigma_{1,min} = \min(\hat{D}_{min}).$$
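The scale selection above can be sketched as follows, covering both the small-scale case ($\sigma_1 = \max(D_{min})$) and the large-scale case ($\sigma_1 = \mathrm{mean}(\hat{D}_{max})$). The use of scipy.spatial.distance.cdist and the exclusion of zero self-distances when taking column minima of $\hat{D}$ are our own choices.

```python
import numpy as np
from scipy.spatial.distance import cdist

def interaction_scales(Z_train, Z_test=None):
    """Initial interactive stimulation scale sigma1 and its search bounds.

    With few training instances, the scale comes from the train-test
    distance matrix D; with many, from the train-train matrix D_hat."""
    if Z_test is not None:                       # small-scale case
        D = cdist(Z_train, Z_test)               # l x (m - l)
        D_min, D_max = D.min(axis=0), D.max(axis=0)
        return D_min.max(), D_max.max(), D_min.min()   # sigma1, sigma1_max, sigma1_min
    D_hat = cdist(Z_train, Z_train)              # l x l, large-scale case
    np.fill_diagonal(D_hat, np.inf)              # ignore zero self-distances for minima
    D_hat_min = D_hat.min(axis=0)
    np.fill_diagonal(D_hat, 0.0)
    D_hat_max = D_hat.max(axis=0)
    return D_hat_max.mean(), D_hat_max.max(), D_hat_min.min()
```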
After the interactive stimulation scales are determined, we recognize the category pattern of the test instances one by one. Notice that the external input stimulation to the ESNs is blocked; i.e., during the recognition process of NMIS, the ESNs only receive inter-field interaction from the EPNs. So,
$$e_{g,j} = 0, \quad j = 1, 2, \ldots, m_c.$$
For a test instance $z_k$, $k = l+1, l+2, \ldots, m$, of $Test = \{z_{l+1}, z_{l+2}, \ldots, z_m\}$, its initial state is the intrinsic resting activation, that is,
$$f_k(t = 0) = H_{f,k}, \quad k = l+1, l+2, \ldots, m.$$
We then let
$$e_{f,k}(t) = 1, \quad k = l+1, l+2, \ldots, m,$$
to activate the corresponding IPN.
Via the interactive stimulation, some EPNs that belong to the same subpopulation as this activated IPN are triggered and the unrelated EPNs are inhibited. When the primary field is stable, the activation information of the EPNs is transmitted to the senior field through the inter-field interaction, resulting in the change in the ESN’s activation.
Three situations must be considered: (1) exactly one ESN is activated; (2) more than one ESN is activated; and (3) no ESN is activated. Figure 3 shows these general situations.
The first case is ideal because only one category pattern is perceived, so the external input stimulation (test instance) is labeled with this perceived category pattern. In the second case, if the activation of one activated ESN is much higher than that of the others, the external input stimulation is labeled accordingly. In the remaining cases, the interactive stimulation scale must be adjusted. Algorithm 1 gives the details of our scale-adjusting procedure, in which the number of activated ESNs is abbreviated as $A_{esn}$. The complete cognitive procedure is shown in Algorithm 2.
Algorithm 1 Scale-Adjusting Algorithm.
1: INPUT: $A_{esn}$, $\sigma_1$, $\sigma_{1,max}$, $\sigma_{1,min}$, $\lambda$, $\epsilon$;
2: OUTPUT: $\sigma_1$, $\sigma_{1,max}$, $\sigma_{1,min}$;
3: if $A_{esn} = 0$ then
4:     if $\sigma_{1,max} - \sigma_1 < \epsilon$ then
5:         Let $\sigma_{1,max} = \sigma_{1,max} / \lambda$;
6:     end if
7:     Let $\sigma_{1,min} = \sigma_1$;
8:     Calculate $\sigma_1 = \sigma_1 + \lambda(\sigma_{1,max} - \sigma_1)$;
9: else
10:     if $A_{esn} > 1$ then
11:         if $\sigma_1 - \sigma_{1,min} < \epsilon$ then
12:             Let $\sigma_{1,min} = \lambda\,\sigma_{1,min}$;
13:         end if
14:         Let $\sigma_{1,max} = \sigma_1$;
15:         Calculate $\sigma_1 = \sigma_1 - \lambda(\sigma_1 - \sigma_{1,min})$;
16:     end if
17: end if
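A direct transcription of Algorithm 1 into Python might look as follows; the default values of lam and eps are placeholders of our own, since the algorithm leaves them as inputs.

```python
def adjust_scale(A_esn, sigma1, sigma1_max, sigma1_min, lam=0.5, eps=1e-6):
    """One step of the scale-adjusting rule of Algorithm 1 (0 < lam < 1 assumed)."""
    if A_esn == 0:                                # no ESN activated: enlarge sigma1
        if sigma1_max - sigma1 < eps:
            sigma1_max = sigma1_max / lam         # push the upper bound outwards
        sigma1_min = sigma1
        sigma1 = sigma1 + lam * (sigma1_max - sigma1)
    elif A_esn > 1:                               # several ESNs activated: shrink sigma1
        if sigma1 - sigma1_min < eps:
            sigma1_min = lam * sigma1_min         # pull the lower bound inwards
        sigma1_max = sigma1
        sigma1 = sigma1 - lam * (sigma1 - sigma1_min)
    return sigma1, sigma1_max, sigma1_min
```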
Algorithm 2 Recognition Algorithm.
1: INPUT: $Test = \{z_{l+1}, z_{l+2}, \ldots, z_m\}$, $x_{i,j}$, $i = 1, 2, \ldots, l$ and $j = 1, 2, \ldots, m_c$;
2: OUTPUT: $\hat{Z}_{test}$;
3: Let $f_i(t = 0) = -0.2$, $g_j = -0.2$, where $i = 1, 2, \ldots, l$ and $j = 1, 2, \ldots, m_c$;
4: for each $z_k \in Test$ do
5:     Let $e_{f,k}(t) = 1$;
6:     Compute the stationary solution $f_i$, $i = 1, 2, \ldots, l$, of Equation (1);
7:     Compute the stationary solution $g_j$, $j = 1, 2, \ldots, m_c$, of Equation (10);
8:     while $A_{esn} \ne 1$ do
9:         Adjust the scale $\sigma_1$ by Algorithm 1;
10:         Compute the stationary solution $g_j$, $j = 1, 2, \ldots, m_c$, of Equation (10);
11:     end while
12:     Let $\hat{z}_k = \hat{z}^{(r)}$, where $r$ is the category pattern stored by the senior neuron with the highest activation;
13: end for
14: $\hat{Z}_{test} = \{\hat{z}_k\}$, $k = l+1, l+2, \ldots, m$.
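Putting the pieces together, the following sketch mirrors Algorithm 2 using the helper functions defined above. We treat an ESN as "activated" when its activation is positive, bound the number of scale adjustments, and recompute the primary field after each adjustment because the kernel depends on $\sigma_1$; these are our own simplifying assumptions rather than details fixed by the algorithm.

```python
import numpy as np

def recognize(Z_train, train_labels, Z_test, category_names,
              lam=0.5, eps=1e-6, max_adjust=50):
    """End-to-end recognition sketch, reusing the helper functions above."""
    X = build_interfield_connections(train_labels, category_names)
    sigma1, s_max, s_min = interaction_scales(Z_train, Z_test)
    predictions = []
    for z in Z_test:
        s1, smax, smin = sigma1, s_max, s_min
        for _ in range(max_adjust):                     # adjust until exactly one ESN fires
            f = primary_field_stationary(Z_train, z, s1)
            g = senior_field_activation(X=X, f=f)       # e_g = 0 during recognition
            A_esn = int(np.sum(g > 0))                  # number of activated ESNs
            if A_esn == 1:
                break
            s1, smax, smin = adjust_scale(A_esn, s1, smax, smin, lam, eps)
        predictions.append(category_names[int(np.argmax(g))])
    return predictions
```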

2.3. The NN_NMIS

The proposed NMIS has a clear neural mechanism and can effectively recognize the external input stimulation, but it is not an efficient model. The number of its EPNs equals the number of training instances; when the training set is large, very complex interactive stimulation must be calculated, which means that NMIS faces unacceptable computational and storage requirements.
Not all of the interaction information of the EPNs is meaningful. In Figure 4, the interactive stimulation induced by the activated IPN on the EPNs outside the red dotted line (its cognitive scale) is so small that none of those EPNs can be activated or inhibited, yet the model spends most of its resources calculating it. This is inefficient and unreasonable.
In this section, we propose the nearest neighbor NMIS (NN_NMIS) which is mainly used to deal with large-scale data classification. Firstly, we select one representative EPN for each subpopulation, which is the weighted combination of all EPNs belonging to the same subpopulation. Then, the activated IPN only applies interactive stimulation to its nearest K EPNs and these representative EPNs.
The value of $K$ is flexible. However, on the premise that enough meaningful EPNs are included, it should be as small as possible. We cannot obtain an optimal $K$ from prior information, but we can reasonably assume that the number of meaningful EPNs should not exceed the number of EPNs contained in the largest sub-population in the primary field. So, let
$$K < \max(l_j), \quad j = 1, 2, \ldots, m_c,$$
where $l_j$ is the number of EPNs in the $j$th sub-population. Generally, we let $K = 10$.
The representative EPNs should carry much information about the EPNs close to the sub-population center and little information about the marginal EPNs. To generate high-quality representative EPNs, we propose a weight initialization method based on the interactive stimulation among the EPNs in the same sub-population.
If external input stimulation is applied to activate all EPNs belonging to the same sub-population simultaneously, the EPNs near the sub-population center receive strong interactive stimulation from the other EPNs, whereas the interactive stimulation received by the EPNs on the sub-population boundary is tiny. Inspired by this observation, we initialize the weights using the interactive stimulation received by all EPNs as prior knowledge.
Suppose that there are $l_j$ EPNs in the $j$th sub-population. Denote by $ITS_{j,k}$ the interactive stimulation received by the $k$th EPN of the $j$th sub-population. It is described by the instantaneous equation
$$ITS_{j,k} = C_f\,\eta\!\left(\sum_{i=1}^{l_j} \omega(z_{j,k} - z_{j,i})\,\phi(h_{f,j,i} + e_{f,j,i})\right),$$
where $l_j$ is the number of EPNs in the $j$th subpopulation and $e_{f,j,i} = 1$. The representative EPN, abbreviated $REPN$ in the following equations, is given by
$$REPN_j = \sum_{k=1}^{l_j} w_{j,k}\,z_{j,k}, \quad j = 1, 2, \ldots, m_c,$$
where $m_c$ is the number of subpopulations. The weight vector $w_j = [w_{j,1}, w_{j,2}, \ldots, w_{j,l_j}]^T$ is given by
$$w_j = \mathrm{Softmax}(ITS_j),$$
where $ITS_j = [ITS_{j,1}, ITS_{j,2}, \ldots, ITS_{j,l_j}]^T$. The function $\mathrm{Softmax}(\cdot)$ is a normalizing function that ensures $\sum_{k=1}^{l_j} w_{j,k} = 1$.
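A sketch of the representative-EPN construction, reusing dog_kernel, phi and eta from above; the softmax is implemented directly, and the function name and signature are our own.

```python
import numpy as np

def representative_epns(Z_train, train_labels, category_names, sigma1,
                        h_f=-0.2, C_f=1.0):
    """One representative EPN per sub-population, weighted by a softmax over
    the instantaneous interactive stimulation (ITS) each EPN receives."""
    reps = []
    labels = np.array(train_labels)
    for name in category_names:
        Z_j = Z_train[labels == name]            # EPNs of this sub-population
        l_j = Z_j.shape[0]
        its = np.empty(l_j)
        for k in range(l_j):
            w = np.array([dog_kernel(Z_j[k], Z_j[i], sigma1) for i in range(l_j)])
            its[k] = C_f * eta(np.sum(w * phi(h_f + 1.0)))   # e_{f,j,i} = 1
        weights = np.exp(its - its.max())
        weights /= weights.sum()                 # softmax over ITS
        reps.append(weights @ Z_j)               # weighted combination of instances
    return np.asarray(reps)
```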

3. Results

In this section, to illustrate the classification performance of the proposed NMIS and NN_NMIS, we tested them on several real datasets. The experimental datasets, described in Table 2, were all selected from the University of California Irvine (UCI) repository (https://archive.ics.uci.edu/, accessed on 1 August 2022) and underwent only simple preprocessing, such as deleting instances with null values. The original attribute values and dimensions of the datasets were not changed. KNN and SVM were used for comparison because they are considered two effective classification algorithms and are commonly used in various fields. We did not design comparative experiments with BP neural networks because their architectures are so diverse that the fairness of the experiments could not be guaranteed. To avoid chance results, all experimental results are averaged over 30 runs.
The DERMATOLOGY, WINE, GERMAN, SEGMENTATION, PENDIGITS and SATELLITE datasets were selected to assess the model's small-scale data classification capacity. The experiments were designed as one-shot and five-shot learning: one instance or five instances per category pattern were randomly selected as the training set, and the rest were used as the test set. The comparison algorithms were 1-NN (for one-shot learning), 3-NN (for five-shot learning) and SVM with a linear kernel (for all learning tasks).
Figure 5 shows the classification results of one-shot learning. We can see that the proposed NMIS outperformed the other algorithms in both accuracy and stability. On WINE, DERMATOLOGY and SEGMENTATION datasets, the accuracy of NMIS exceeded that of other algorithms by more than 15%.
The classification results of five-shot learning are shown in Figure 6. The accuracy and robustness of NMIS were still better than those of 3-NN, with significant performance gaps on the WINE, DERMATOLOGY, GERMAN and SEGMENTATION datasets. SVM obtained accuracy similar to that of NMIS. Using confusion matrices, we visualize the classification results of NMIS and SVM in Figure 7 and Figure 8; NMIS showed stable recognition ability for rare categories.
To demonstrate the large-scale data classification capacity of NN_NMIS, we tested it on several large datasets: SPAMBASE, COVERTYPE, GERMAN, SEGMENTATION, PENDIGITS and SATELLITE, each containing more than one thousand instances. For SPAMBASE, GERMAN, SEGMENTATION, PENDIGITS and SATELLITE, 30% of the instances of each category pattern were randomly chosen as the training set and the rest were used as the test set. The COVERTYPE dataset is so large that only 0.001% of the instances of each category pattern were selected as the training set. The 3-NN and SVM algorithms were employed for comparison, and K = 10 was used for NN_NMIS on all datasets. The results are shown in Figure 9. The stability of NN_NMIS was so good that there were no visible fluctuations, and on the SPAMBASE, COVERTYPE and GERMAN datasets its accuracy was significantly better than that of the other algorithms, especially SVM.

4. Discussion

Human beings show excellent cognitive ability, which depends on complex neuronal behavior mechanisms. Some scientists have sought to imitate these mechanisms to give machines similar learning ability; the related research outcomes are called artificial neural networks (ANNs). From the initial perceptron to the current diversified architectures (RNN, CNN, GNN and Transformer), the development of ANNs has undergone many reformulations and has achieved extraordinary success in various fields [54,55]. However, the neural mechanism of ANNs is deficient and their training conditions, imposed by the BP strategy, are demanding, which leads to poor biological plausibility and unsatisfactory small-sample learning ability [27,28,29,30,33].
Some properties of real neurons can be applied to neural models to improve their biological interpretability, and the BP algorithm is not necessary when dealing with structured datasets that do not involve feature extraction. In this paper, we propose a brain-like neural model with interactive stimulation (NMIS) that focuses on structured and small-scale data classification. In contrast to traditional BP neural networks, the inspiration for NMIS originates from real cognitive processes. There are two neural fields in NMIS that simulate the neural activation of primary and senior visual cortices, respectively. The information transmission and inter-field connections among the neurons in NMIS depend on interactive stimulation and synaptic plasticity, so its neural mechanism is clear. In addition, all parameters of the proposed model are either selected according to cognitive science or derived from the datasets themselves, so NMIS involves no complicated optimization steps or manual parameter adjustment. To address the unacceptable computing and storage requirements that NMIS faces in large-scale data classification, we propose NN_NMIS, an optimized version of NMIS that combines it with a nearest neighbor strategy. Benefiting from a rational cognitive mechanism, NMIS and NN_NMIS show better classification ability than the compared algorithms.
We do not doubt the excellent results achieved by neural networks. NMIS has no feature extraction capability, which makes it unable to handle unstructured data directly, so structured data such as features provided by neural networks remain necessary. In the future, we intend to consider embedding NMIS into neural networks to achieve further functions, such as emotion analysis, reinforcement learning and image recognition.

5. Conclusions

In this paper, we discuss some problems faced by traditional artificial neural networks, namely poor interpretability and demanding training conditions. Considering the excellent cognitive ability of human beings, some properties of neurons in the brain inspired us to design the NMIS model for small-scale and structured data classification. The neural mechanism of NMIS is clear: it consists of a primary field and a senior field, simulating the neural activation of primary and senior visual cortices, respectively. Its neurons transmit information through interactive stimulation and inter-field interaction, corresponding to the interaction and synaptic plasticity of real neurons. In contrast to the BP strategy, the memories of NMIS are stored as inter-field connections, which are based on the Hebbian rule and do not require strict optimization. Consequently, the proposed NMIS is biologically reasonable and efficient, and is especially suitable for small-scale data classification. In addition, based on NMIS, we propose NN_NMIS for large-scale learning, which only calculates the interaction information among a few important neurons and is therefore efficient. Numerical experiments on several UCI datasets demonstrate that the proposed NMIS and NN_NMIS are feasible and show better performance and generalization ability than some widely used classification algorithms in machine learning.

Author Contributions

Conceptualization, D.L. and Z.H.; methodology, D.L. and M.L.; software, D.L. and M.L.; data curation, M.L. and Z.H.; writing—original draft preparation, D.L. and M.L.; writing—review and editing, D.L. and M.L. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Natural Science Foundation of Guangxi under Grant 2022GXNSFAA035519.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The datasets generated during and/or analysed during the current study are available from the corresponding author on reasonable request.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
NMIS: neural model with interactive stimulation
NN_NMIS: nearest neighbor NMIS
PN: primary neuron
SN: senior neuron
IPN: implicit PN
EPN: explicit PN
ISN: implicit SN
ESN: explicit SN

References

  1. Eysenck, M.W.; Keane, M.T. Cognitive Psychology: A Student’Handbook; Psychology Press: London, UK, 2015. [Google Scholar]
  2. Kotsiantis, S.B.; Zaharakis, I.; Pintelas, P. Supervised machine learning: A review of classification techniques. Emerg. Artif. Intell. Appl. Comput. Eng. 2007, 160, 3–24. [Google Scholar]
  3. Kotsiantis, S.B.; Zaharakis, I.D.; Pintelas, P.E. Machine learning: A review of classification and combining techniques. Artif. Intell. Rev. 2006, 26, 159–190. [Google Scholar] [CrossRef]
  4. Amores, J. Multiple instance classification: Review, taxonomy and comparative study. Artif. Intell. 2013, 201, 81–105. [Google Scholar] [CrossRef]
  5. Sharma, P.; Singh, A. Era of deep neural networks: A review. In Proceedings of the 2017 8th International Conference on Computing, Communication and Networking Technologies (ICCCNT), Delhi, India, 3–5 July 2017; pp. 1–5. [Google Scholar]
  6. Dreiseitl, S.; Ohno-Machado, L. Logistic regression and artificial neural network classification models: A methodology review. J. Biomed. Inform. 2002, 35, 352–359. [Google Scholar] [CrossRef]
  7. Egmont-Petersen, M.; de Ridder, D.; Handels, H. Image processing with neural networks—A review. Pattern Recognit. 2002, 35, 2279–2301. [Google Scholar] [CrossRef]
  8. Liao, S.H.; Wen, C.H. Artificial neural networks classification and clustering of methodologies and applications–literature analysis from 1995 to 2005. Expert Syst. Appl. 2007, 32, 1–11. [Google Scholar] [CrossRef]
  9. Rumelhart, D.E.; Hinton, G.E.; Williams, R.J. Learning representations by back-propagating errors. Nature 1986, 323, 533–536. [Google Scholar] [CrossRef]
  10. LeCun, Y.; Bottou, L.; Bengio, Y.; Haffner, P. Gradient-based learning applied to document recognition. Proc. IEEE 1998, 86, 2278–2324. [Google Scholar] [CrossRef]
  11. Simonyan, K.; Zisserman, A. Very deep convolutional networks for large-scale image recognition. arXiv 2014, arXiv:1409.1556. [Google Scholar]
  12. Szegedy, C.; Liu, W.; Jia, Y.; Sermanet, P.; Reed, S.; Anguelov, D.; Erhan, D.; Vanhoucke, V.; Rabinovich, A. Going deeper with convolutions. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Boston, MA, USA, 7–12 June 2015; pp. 1–9. [Google Scholar]
  13. He, K.; Zhang, X.; Ren, S.; Sun, J. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 770–778. [Google Scholar]
  14. Amodei, D.; Ananthanarayanan, S.; Anubhai, R.; Bai, J.; Battenberg, E.; Case, C.; Casper, J.; Catanzaro, B.; Cheng, Q.; Chen, G.; et al. Deep speech 2: End-to-end speech recognition in english and mandarin. In Proceedings of the International Conference on Machine Learning, New York, NY, USA, 20–22 June 2016; pp. 173–182. [Google Scholar]
  15. Chan, W.; Jaitly, N.; Le, Q.; Vinyals, O. Listen, attend and spell: A neural network for large vocabulary conversational speech recognition. In Proceedings of the 2016 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Shanghai, China, 20–25 March 2016; pp. 4960–4964. [Google Scholar]
  16. Ding, C.; Tao, D. Trunk-branch ensemble convolutional neural networks for video-based face recognition. IEEE Trans. Pattern Anal. Mach. Intell. 2017, 40, 1002–1014. [Google Scholar] [CrossRef]
  17. Shen, W.; Zhou, M.; Yang, F.; Yu, D.; Dong, D.; Yang, C.; Zang, Y.; Tian, J. Multi-crop convolutional neural networks for lung nodule malignancy suspiciousness classification. Pattern Recognit. 2017, 61, 663–673. [Google Scholar] [CrossRef]
  18. Bhandare, A.; Bhide, M.; Gokhale, P.; Chandavarkar, R. Applications of convolutional neural networks. Int. J. Comput. Sci. Inf. Technol. 2016, 7, 2206–2215. [Google Scholar]
  19. Li, Y. Research and Application of Deep Learning in Image Recognition. In Proceedings of the 2022 IEEE 2nd International Conference on Power, Electronics and Computer Applications (ICPECA), Shenyang, China, 21–23 January 2022; pp. 994–999. [Google Scholar]
  20. Vaswani, A.; Shazeer, N.; Parmar, N.; Uszkoreit, J.; Jones, L.; Gomez, A.N.; Kaiser, Ł.; Polosukhin, I. Attention is all you need. In Proceedings of the Advances in Neural Information Processing Systems, Long Beach, CA, USA, 4–9 December 2017; pp. 5998–6008. [Google Scholar]
  21. Parikh, A.P.; Täckström, O.; Das, D.; Uszkoreit, J. A decomposable attention model for natural language inference. arXiv 2016, arXiv:1606.01933. [Google Scholar]
  22. Devlin, J.; Chang, M.W.; Lee, K.; Toutanova, K. Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv 2018, arXiv:1810.04805. [Google Scholar]
  23. Brown, T.; Mann, B.; Ryder, N.; Subbiah, M.; Kaplan, J.D.; Dhariwal, P.; Neelakantan, A.; Shyam, P.; Sastry, G.; Askell, A.; et al. Language models are few-shot learners. Adv. Neural Inf. Process. Syst. 2020, 33, 1877–1901. [Google Scholar]
  24. Kitaev, N.; Kaiser, Ł.; Levskaya, A. Reformer: The efficient transformer. arXiv 2020, arXiv:2001.04451. [Google Scholar]
  25. Zhao, H.; Jiang, L.; Jia, J.; Torr, P.H.; Koltun, V. Point transformer. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Montreal, QC, Canada, 10–17 October 2021; pp. 16259–16268. [Google Scholar]
  26. Dosovitskiy, A.; Beyer, L.; Kolesnikov, A.; Weissenborn, D.; Zhai, X.; Unterthiner, T.; Dehghani, M.; Minderer, M.; Heigold, G.; Gelly, S.; et al. An image is worth 16x16 words: Transformers for image recognition at scale. arXiv 2020, arXiv:2010.11929. [Google Scholar]
  27. Féraud, R.; Clérot, F. A methodology to explain neural network classification. Neural Netw. 2002, 15, 237–246. [Google Scholar] [CrossRef]
  28. Vincenzi, S.; Crivelli, A.J.; Munch, S.; Skaug, H.J.; Mangel, M. Trade-offs between accuracy and interpretability in von B ertalanffy random-effects models of growth. Ecol. Appl. 2016, 26, 1535–1552. [Google Scholar] [CrossRef]
  29. Azodi, C.B.; Tang, J.; Shiu, S.H. Opening the black box: Interpretable machine learning for geneticists. Trends Genet. 2020, 36, 442–455. [Google Scholar] [CrossRef]
  30. Fan, F.L.; Xiong, J.; Li, M.; Wang, G. On interpretability of artificial neural networks: A survey. IEEE Trans. Radiat. Plasma Med. Sci. 2021, 5, 741–760. [Google Scholar] [CrossRef] [PubMed]
  31. Laudani, A.; Lozito, G.M.; Riganti Fulginei, F.; Salvini, A. On training efficiency and computational costs of a feed forward neural network: A review. Comput. Intell. Neurosci. 2015, 2015. [Google Scholar] [CrossRef] [PubMed]
  32. Zhang, Q.; Zhang, M.; Chen, T.; Sun, Z.; Ma, Y.; Yu, B. Recent advances in convolutional neural network acceleration. Neurocomputing 2019, 323, 37–51. [Google Scholar] [CrossRef]
  33. Erb, R.J. Introduction to backpropagation neural network computation. Pharm. Res. 1993, 10, 165–170. [Google Scholar] [CrossRef] [PubMed]
  34. Whittington, J.C.; Bogacz, R. Theories of error back-propagation in the brain. Trends Cogn. Sci. 2019, 23, 235–250. [Google Scholar] [CrossRef]
  35. Pehlevan, C.; Sengupta, A.M.; Chklovskii, D.B. Why do similarity matching objectives lead to Hebbian/anti-Hebbian networks? Neural Comput. 2018, 30, 84–124. [Google Scholar] [CrossRef]
  36. Chu, D.; Le Nguyen, H. Constraints on Hebbian and STDP learned weights of a spiking neuron. Neural Netw. 2021, 135, 192–200. [Google Scholar] [CrossRef]
  37. Schönsberg, F.; Roudi, Y.; Treves, A. Efficiency of Local Learning Rules in Threshold-Linear Associative Networks. Phys. Rev. Lett. 2021, 126, 018301. [Google Scholar] [CrossRef]
  38. Lee, K.S.; Vandemark, K.; Mezey, D.; Shultz, N.; Fitzpatrick, D. Functional synaptic architecture of callosal inputs in mouse primary visual cortex. Neuron 2019, 101, 421–428. [Google Scholar] [CrossRef]
  39. Nishiyama, M.; Matsui, T.; Murakami, T.; Hagihara, K.M.; Ohki, K. Cell-type-specific thalamocortical inputs constrain direction map formation in visual cortex. Cell Rep. 2019, 26, 1082–1088. [Google Scholar] [CrossRef]
  40. Pawar, A.S.; Gepshtein, S.; Savel’ev, S.; Albright, T.D. Mechanisms of spatiotemporal selectivity in cortical area MT. Neuron 2019, 101, 514–527. [Google Scholar] [CrossRef] [PubMed]
  41. Johnson, D.H. Point process models of single-neuron discharges. J. Comput. Neurosci. 1996, 3, 275–299. [Google Scholar] [CrossRef] [PubMed]
  42. Fadiga, L.; Fogassi, L.; Gallese, V.; Rizzolatti, G. Visuomotor neurons: Ambiguity of the discharge or ‘motor’perception? Int. J. Psychophysiol. 2000, 35, 165–177. [Google Scholar] [CrossRef]
  43. Başar, E. Chaos in Brain Function: Containing Original Chapters by E. Basar and TH Bullock and Topical Articles Reprinted from the Springer Series in Brain Dynamics; Springer Science & Business Media: Berlin/Heidelberg, Germany, 2012. [Google Scholar]
  44. Shipp, S. Neural elements for predictive coding. Front. Psychol. 2016, 7, 1792. [Google Scholar] [CrossRef]
  45. Spratling, M.W. A review of predictive coding algorithms. Brain Cogn. 2017, 112, 92–97. [Google Scholar] [CrossRef]
  46. Williams, L.E.; Holtmaat, A. Higher-order thalamocortical inputs gate synaptic long-term potentiation via disinhibition. Neuron 2019, 101, 91–102. [Google Scholar] [CrossRef]
  47. Cossell, L.; Iacaruso, M.F.; Muir, D.R.; Houlton, R.; Sader, E.N.; Ko, H.; Hofer, S.B.; Mrsic-Flogel, T.D. Functional organization of excitatory synaptic strength in primary visual cortex. Nature 2015, 518, 399–403. [Google Scholar] [CrossRef]
  48. Andrillon, T.; Pressnitzer, D.; Léger, D.; Kouider, S. Formation and suppression of acoustic memories during human sleep. Nat. Commun. 2017, 8, 179. [Google Scholar] [CrossRef] [Green Version]
  49. Lee, W.C.A.; Bonin, V.; Reed, M.; Graham, B.J.; Hood, G.; Glattfelder, K.; Reid, R.C. Anatomy and function of an excitatory network in the visual cortex. Nature 2016, 532, 370–374. [Google Scholar] [CrossRef]
  50. Makino, H.; Hwang, E.J.; Hedrick, N.G.; Komiyama, T. Circuit mechanisms of sensorimotor learning. Neuron 2016, 92, 705–721. [Google Scholar] [CrossRef]
  51. Zou, Q.; Ross, T.J.; Gu, H.; Geng, X.; Zuo, X.N.; Hong, L.E.; Gao, J.H.; Stein, E.A.; Zang, Y.F.; Yang, Y. Intrinsic resting-state activity predicts working memory brain activation and behavioral performance. Hum. Brain Mapp. 2013, 34, 3204–3215. [Google Scholar] [CrossRef] [PubMed]
  52. Verrel, J.; Almagor, E.; Schumann, F.; Lindenberger, U.; Kühn, S. Changes in neural resting state activity in primary and higher-order motor areas induced by a short sensorimotor intervention based on the Feldenkrais method. Front. Hum. Neurosci. 2015, 9, 232. [Google Scholar] [CrossRef] [PubMed]
  53. Keilholz, S.D. The neural basis of time-varying resting-state functional connectivity. Brain Connect. 2014, 4, 769–779. [Google Scholar] [CrossRef] [PubMed]
  54. Hegazy, T.; Fazio, P.; Moselhi, O. Developing practical neural network applications using back-propagation. Comput.-Aided Civ. Infrastruct. Eng. 1994, 9, 145–159. [Google Scholar] [CrossRef]
  55. Zou, J.; Han, Y.; So, S.S. Overview of artificial neural networks. In Artificial Neural Networks; Springer: Berlin/Heidelberg, Germany, 2008; pp. 14–22. [Google Scholar]
Figure 1. The primary neural field (below) and senior neural field (above) of NMIS. There are two subpopulations in the primary field, which correspond to two SNs. The white ones are IPNs (in the primary field) and ISNs (in the senior field). All of them are implicit until the inter-field connections are established.
Figure 2. (a) shows the establishment process of the inter-field connection. (b) shows the neurons that have established inter-field connections. The white, pink and red circles represent implicit, explicit and activated neurons, respectively. In (a), the thin blue arrows indicate an inter-field connection; in (b), the inter-field connections between the EPNs in the same subpopulation and the ESN are indicated by a thick blue arrow.
Figure 3. (a–c) show the cases in which only one ESN is activated, more than one ESN is activated and no ESN is activated, respectively. The figures on the left describe the new IPN that is activated by external input stimulation and its excitatory interaction range (the yellow dotted line). The figures on the right describe the EPNs that are activated (red) or inhibited (blue) by interactive stimulation and the ESNs activated by inter-field interaction.
Figure 4. (a) shows the cognitive range of the activated IPN: the yellow dotted line is the excitatory interaction range and the red dotted line is the inhibition interaction range. (b) describes the activated and inhibited EPNs and ESNs in the cognitive range.
Figure 5. The one-shot learning results (%) of NMIS, KNN and SVM on (a) DERMATOLOGY, (b) WINE, (c) GERMAN, (d) SEGMENTATION, (e) PENDIGITS and (f) SATELLITE datasets.
Figure 6. The five-shot learning results (%) of NMIS, KNN and SVM on (a) DERMATOLOGY, (b) WINE, (c) GERMAN, (d) SEGMENTATION, (e) PENDIGITS and (f) SATELLITE datasets.
Figure 7. The confusion matrix of NMIS and SVM on (a) DERMATOLOGY, (b) WINE and (c) GERMAN datasets. Each orange square represents the number of wrongly predicted instances. The main diagonal represents the number of correctly predicted instances. The bottom and right light gray rectangles indicate the prediction accuracy of the corresponding instance categories.
Figure 8. The confusion matrix of NMIS and SVM on (a) SEGMENTATION, (b) PENDIGITS and (c) SATELLITE datasets. Each orange square represents the number of wrongly predicted instances. The main diagonal represents the number of correctly predicted instances. The bottom and right light gray rectangles indicate the prediction accuracy of the corresponding instance categories.
Figure 9. The large-scale data classification results (%) of NN_NMIS, KNN and SVM on (a) SPAMBASE, (b) COVERTYPE, (c) GERMAN, (d) SEGMENTATION, (e) PENDIGITS and (f) SATELLITE datasets.
Table 1. Symbols Table.

Symbol: Description
$Z = \{z_1, z_2, \ldots, z_m\}$: set of all instances
$Train = \{z_1, z_2, \ldots, z_l\}$: set of all training instances
$Test = \{z_{l+1}, z_{l+2}, \ldots, z_m\}$: set of all test instances
$\hat{Z} = \{\hat{z}_1, \hat{z}_2, \ldots, \hat{z}_m\}$: set of all instances' category patterns
$\hat{Z}_{train} = \{\hat{z}_1, \hat{z}_2, \ldots, \hat{z}_l\}$: set of all training instances' category patterns
$\hat{Z}_{test} = \{\hat{z}_{l+1}, \hat{z}_{l+2}, \ldots, \hat{z}_m\}$: set of all test instances' labels
$N = \{\hat{z}^{(1)}, \hat{z}^{(2)}, \ldots, \hat{z}^{(m_c)}\}$: set of all category pattern names
$m$: number of instances
$m_c$: number of category patterns
$l$: number of training instances
$l_j$: number of training instances with category pattern $\hat{z}^{(j)}$
Table 2. Details of datasets.

Dataset: Instances / Classes / Attributes
COVERTYPE: 581,012 / 7 / 54
SPAMBASE: 4601 / 2 / 57
GERMAN: 1000 / 2 / 20
DERMATOLOGY: 366 / 6 / 34
WINE: 178 / 3 / 13
SEGMENTATION: 2310 / 7 / 19
PENDIGITS: 7494 / 10 / 16
SATELLITE: 6435 / 6 / 36
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Share and Cite

Li, D.; Li, M.; Huang, Z. Applying the Properties of Neurons in Machine Learning: A Brain-like Neural Model with Interactive Stimulation for Data Classification. Brain Sci. 2022, 12, 1191. https://doi.org/10.3390/brainsci12091191
