Article

Dynamic Threshold Neural P Systems with Multiple Channels and Inhibitory Rules

Business School, Shandong Normal University, Jinan 250358, China
* Author to whom correspondence should be addressed.
Processes 2020, 8(10), 1281; https://doi.org/10.3390/pr8101281
Submission received: 29 September 2020 / Revised: 7 October 2020 / Accepted: 9 October 2020 / Published: 13 October 2020
(This article belongs to the Section Advanced Digital and Other Processes)

Abstract

In biological neural networks, neurons transmit chemical signals through synapses, and multiple ion channels are involved in this transmission. Moreover, synapses are divided into inhibitory synapses and excitatory synapses. The firing mechanism of previous spiking neural P (SNP) systems and their variants is essentially the same as that of excitatory synapses, but the function of inhibitory synapses is rarely reflected in these systems. In order to more fully simulate the way neurons communicate through synapses, this paper proposes dynamic threshold neural P systems with multiple channels and inhibitory rules (DTNP-MCIR systems). DTNP-MCIR systems are a distributed parallel computing model. We prove that DTNP-MCIR systems are Turing universal as number generating/accepting devices. In addition, we design a small universal DTNP-MCIR system with 73 neurons as a function computing device.

1. Introduction

Membrane computing (MC) studies a class of distributed parallel computing systems, usually called P systems or membrane systems. MC is obtained by researching the structure and functioning of biological cells as well as the communication and cooperation of cells in tissues, organs, and biological neural networks [1,2]. P systems are mainly divided into three categories, namely cell-like P systems, tissue-like P systems, and neural-like P systems. In the past two decades, many P-system variants have been studied and applied to real-world problems, and most of them have been proven to be universal number generating/accepting devices and function computing devices [3,4].

1.1. Related Work

Spiking neural P (SNP) systems are abstracted from the biological fact that neurons transmit spikes to each other through synapses. An SNP system can be regarded as a directed graph, where neurons are the nodes and the synaptic connections between neurons are the arcs [5,6]. SNP systems are the main form of neural-like P systems [7]. An SNP system consists of two components: data and firing rules. Data usually describe the states of neurons and can also indicate the number of spikes contained in every neuron. Firing rules comprise spiking rules and/or forgetting rules, and their application must satisfy necessary conditions. A firing rule has the form $E/a^c \to a^p$, where E is a regular expression. When $a^n \in L(E)$ and $n \ge c$, the rule is applied and the neuron fires; $L(E)$ denotes the language associated with the regular expression E, and $a^n$ indicates that the neuron where the rule is located contains n spikes. Once the rule is enabled, the neuron removes c spikes and transmits the generated p spikes to succeeding neurons. When $p = 0$, the rule $E/a^c \to \lambda$ is called a forgetting rule; $\lambda$ represents the empty string, which indicates that forgetting rules do not generate new spikes. In summary, the firing condition of a rule is $a^n \in L(E)$ and $n \ge c$. This means that the firing of a rule is related only to the state of the neuron itself and has nothing to do with the state of other neurons. Moreover, neurons work in parallel, which makes SNP systems models of distributed parallel computing; accordingly, most SNP systems are equipped with a global clock to mark time. SNP systems are also nondeterministic: if more than one rule can be enabled in a neuron at a certain time, only one of them is selected non-deterministically.
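As a minimal illustration of this firing condition, the following Python sketch checks whether a rule $E/a^c \to a^p$ is enabled for a given spike count. Encoding E as a Python regular expression over strings of a is our own assumption for illustration, not part of the formal definition.

```python
# A minimal sketch (not from the paper) of checking and applying an SNP
# firing rule E/a^c -> a^p; E is encoded here as a Python regex over 'a's.
import re

def try_fire(n_spikes: int, E: str, c: int, p: int):
    """Return (remaining_spikes, emitted_spikes) if the rule fires, else None."""
    # Firing condition: a^n must belong to L(E) and n >= c.
    if re.fullmatch(E, "a" * n_spikes) and n_spikes >= c:
        return n_spikes - c, p  # consume c spikes, emit p spikes
    return None  # rule not enabled

# Example: rule a(aa)*/a -> a fires on any odd number of spikes.
print(try_fire(5, r"a(aa)*", 1, 1))  # (4, 1): fires
print(try_fire(4, r"a(aa)*", 1, 1))  # None: 4 spikes not in L(E)
```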
Neural-like P systems have received extensive attention in recent years. SNP systems were proposed by Ionescu et al. [7]. Usually, SNP systems have spiking and forgetting rules, but Song et al. [8] proposed that neurons can also have request rules, which enable neurons to perceive “stimuli” from the environment by receiving a certain number of spikes. Zeng et al. [9] proposed SNP systems with weights, in which a neuron fires when its potential equals a given value (called a threshold). Zhao et al. [10] introduced a new mechanism called neuron dissolution, which can eliminate redundant neurons generated in the calculation process. In order to enable SNP systems to represent and process fuzzy and uncertain knowledge, weighted fuzzy spiking neural P systems were proposed by Wang et al. [11]. Considering that SNP systems can only fire one neuron in each step, sequential SNP systems were investigated [12,13]. Although most SNP systems are synchronous, asynchronous SNP systems have also been investigated [14,15,16]; Song et al. [15] further studied the computing power of asynchronous SNP systems with local synchronization. Moreover, most SNP systems have been proven Turing universal as number generating/accepting devices [17,18], language generators [19], and function computing devices [20,21]. Furthermore, SNP systems have been applied to real-world problems such as fault diagnosis [22,23,24], clustering [25], and optimization [26].

1.2. Motivation

Peng et al. [27] investigated SNP systems with multiple channels (SNP-MC systems), based on the fact that multiple ion channels are involved in the transmission of chemical signals; accordingly, spiking rules with channel labels were introduced into neural P systems. After SNP-MC systems were proposed, many neural P systems combined with multiple channels were investigated. These studies show that multiple channels can improve the computing power of neural P systems.
In addition, there are the following facts in the biological nervous system:
(1)
The conduction of nerve impulses between neurons is unidirectional, that is, nerve impulses can only be transmitted from the axon of one neuron to the cell body or dendrites of another neuron, but not in the opposite direction.
(2)
Synapses are divided into excitatory synapses and inhibitory synapses. According to the signal from the presynaptic cell, if the excitability of the postsynaptic cell is increased, the connection is an excitatory synapse; if the excitability of the postsynaptic cell is decreased, or excitation is not easily generated, it is an inhibitory synapse.
Peng et al. [28] were inspired by the above two biological facts and proposed SNP systems with inhibitory rules (SNP-IR systems). The firing condition of an inhibitory rule is related not only to the state of the neuron where the rule is located but also to other neurons (presynaptic neurons), reflecting the unidirectionality of nerve impulse transmission. SNP-IR systems have stronger control capabilities than other SNP systems.
Peng et al. [29] first proposed dynamic threshold neural P systems (DTNP systems), which are abstracted from the ICM model of cortical neurons. The firing mechanism of DTNP systems differs from that of SNP systems: DTNP systems adopt a dynamic-threshold-based firing mechanism, have two data units (a feeding input unit and a dynamic threshold unit), and use a maximum spike consumption strategy. In order to improve the computational efficiency of DTNP systems, we introduce multiple channels and inhibitory rules.
The main motivation of this paper is to introduce inhibitory rules into DTNP systems, which better reflect the working mechanism of inhibitory synapses and simulate the actual situation of communication between neurons through synapses. The introduction of multiple channels improves the control capability of DTNP systems so that they can better solve real-world problems.
Compared with DTNP systems, DTNP-MCIR systems have the following innovations:
(1)
DTNP-MCIR systems introduce firing rules with channel labels.
(2)
Inspired by SNP-IR systems, we introduce inhibitory rules to DTNP systems, but the form and firing conditions of inhibitory rules have been re-defined.
(3)
In DTNP systems, the firing rule of neuron $\sigma_r$ (corresponding to register r) has the form $E_r/(a^u, a^\tau) \to a^p(l)$, where $E_r = \{u_r \ge \tau_r \wedge u_r \ge u \wedge \tau_r \ge \tau\}$ is a default firing condition that is usually not displayed because it is already reflected in the rules. In DTNP-MCIR systems, however, $E_r$ can also contain a regular expression, for example $a^{2b+1}/(a^u, a^\tau) \to a^p(1)$, where $a^{2b+1}$ ($b \ge 1$) represents an odd number of spikes. The rule is enabled only when neuron $\sigma_r$ contains an odd number of spikes and satisfies the default firing condition.
If a rule in a neuron of a DTNP-MCIR system is enabled, the number of spikes consumed by the neuron is $u + \tau - p$. When two rules in a neuron can be applied simultaneously at time t, one of them is chosen according to the neuron's maximum spike consumption strategy.
Based on the above, we build a variant of DTNP systems called dynamic threshold neural P systems with multiple channels and inhibitory rules (DTNP-MCIR systems). We will prove the Turing universality of DTNP-MCIR systems as number generating/accepting devices and function computing devices. The rest of this paper is arranged as follows. Section 2 defines DTNP-MCIR systems and gives an illustrative example. Section 3 establishes the Turing universality of DTNP-MCIR systems as number generating/accepting devices. Section 4 presents DTNP-MCIR systems as function computing devices. Section 5 concludes the paper.

2. DTNP-MCIR Systems

In this section, we define a DTNP-MCIR system, describe some of its related details, and give an illustrative example. For convenience, DTNP-MCIR systems use the same notations and terms as SNP systems.

2.1. Definition

A DTNP-MCIR system Π of degree $m \ge 1$ is a tuple
$\Pi = (O, L, \sigma_1, \sigma_2, \ldots, \sigma_m, syn, in, out)$
where:
(1)
$O = \{a\}$ is a singleton alphabet (the object a is called the spike);
(2)
$L = \{1, 2, \ldots, N\}$ is the set of channel labels;
(3)
$\sigma_1, \sigma_2, \ldots, \sigma_m$ are m neurons of the form $\sigma_i = (u_i, \tau_i, L_i, R_i)$, $1 \le i \le m$, where:
(a)
$u_i \ge 0$ is the number of spikes contained in the feeding input unit of neuron $\sigma_i$;
(b)
$\tau_i \ge 0$ is the number of spikes serving as a (dynamic) threshold in neuron $\sigma_i$;
(c)
$L_i \subseteq L$ is a finite set of channel labels used by neuron $\sigma_i$;
(d)
$R_i$ is a finite set of rules in $\sigma_i$, of the following two types:
(i)
firing rules of the form $E_i/(a^u, a^\tau) \to a^p(l)$, where $E_i = \{u_i \ge \tau_i \wedge u_i \ge u \wedge \tau_i \ge \tau\}$ is the firing condition, $u \ge 1$, $\tau \ge 0$, $p \ge 0$, $p \le u$, and $l \in L_i$; when $p \ge 1$ the rule is known as a spiking rule, and when $p = 0$ (with $a^0 = \lambda$) it is known as a forgetting rule and can be written as $E_i/(a^u, a^\tau) \to \lambda$;
(ii)
inhibitory rules of the form $E_{(i,j)}/(a^u, \overline{a^\tau}) \to a^p(l)$, where the subscript $(i, j)$ represents an inhibitory arc between neurons $\sigma_i$ and $\sigma_j$, $E_{(i,j)} = \{u_i \ge \tau_i \wedge u_i \ge u \wedge \tau_i \ge \tau \wedge u_j = \tau_i\}$, $u \ge 1$, $\tau \ge 0$, $p \ge 0$, $p \le u$, $l \in L_i$, and $1 \le j \le m$, $i \ne j$;
(4)
$syn \subseteq \{1, 2, \ldots, m\} \times \{1, 2, \ldots, m\} \times L$ with $i \ne j$ for each $(i, j, l) \in syn$ (synapse connections);
(5)
$in$ indicates the input neuron;
(6)
$out$ indicates the output neuron.
From the perspective of topological structure, a DTNP-MCIR system can be regarded as a directed graph with inhibitory arcs, where the m neurons are the nodes and the synaptic connections between neurons are the arcs. An inhibitory arc is drawn as an arc ending in a solid circle, while a directed arc with an arrow indicates excitatory conduction. Synaptic connections are expressed as tuples $(i, j, l) \in syn$, meaning that neuron $\sigma_i$ is connected to neuron $\sigma_j$ through the l-th channel. The region outside system Π is called the environment. In system Π, only the input neuron $\sigma_{in}$ and the output neuron $\sigma_{out}$ can communicate with the environment. When system Π works in generating mode, the input neuron $\sigma_{in}$ is not considered; by contrast, the output neuron $\sigma_{out}$ is not considered in accepting mode.
Data and rules are the two components of a DTNP-MCIR system. There are two data units in each neuron $\sigma_i$: the feeding input unit $u_i$ and the dynamic threshold unit $\tau_i$. If the calculation of system Π has proceeded to time t, then $u_i(t)$ and $\tau_i(t)$ respectively denote the number of spikes in the two data units of neuron $\sigma_i$ [29]. The direct manifestation of data is the number of spikes contained in each neuron, which also indicates the state of the neuron. The state of the entire system at time t is characterized by the configuration $C_t = ([u_1(t), \tau_1(t)], [u_2(t), \tau_2(t)], \ldots, [u_m(t), \tau_m(t)])$, which records the number of spikes in the feeding input unit and dynamic threshold unit of the m neurons. The initial configuration is denoted $C_0 = ([u_1(0), \tau_1(0)], [u_2(0), \tau_2(0)], \ldots, [u_m(0), \tau_m(0)])$.
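To make these data units concrete, here is a minimal sketch (our own encoding, not part of the formal definition) representing each neuron by its feeding input unit u and threshold unit tau, with a helper that reads off the configuration $C_t$; the values shown reproduce the initial configuration of the example in Section 2.2.

```python
# A minimal sketch of DTNP-MCIR data units: each neuron carries a feeding
# input unit u and a dynamic threshold unit tau, and C_t is the tuple of
# all [u_i, tau_i] pairs. This encoding is ours, for illustration only.
from dataclasses import dataclass

@dataclass
class Neuron:
    u: int    # spikes in the feeding input unit
    tau: int  # spikes in the dynamic threshold unit

def configuration(neurons):
    """Return C_t = ([u_1, tau_1], ..., [u_m, tau_m])."""
    return tuple((n.u, n.tau) for n in neurons)

# Initial configuration of the illustrative example in Section 2.2:
neurons = [Neuron(2, 2), Neuron(2, 2), Neuron(0, 1), Neuron(0, 1)]
print(configuration(neurons))  # ((2, 2), (2, 2), (0, 1), (0, 1))
```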
The rules of DTNP-MCIR systems are of two types: firing rules and inhibitory rules. A firing rule has the form $E_i/(a^u, a^\tau) \to a^p(l)$, with firing condition $E_i = \{u_i \ge \tau_i \wedge u_i \ge u \wedge \tau_i \ge \tau\}$, which is equivalent to $u_i(t) \ge \tau_i(t)$, $u_i(t) \ge u$, and $\tau_i(t) \ge \tau$. If the firing condition of neuron $\sigma_i$ is fulfilled at time t, a rule in neuron $\sigma_i$ is applied to configuration $C_t$. When rule $E_i/(a^u, a^\tau) \to a^p(l)$ is enabled, neuron $\sigma_i$ fires and consumes u spikes from the feeding input unit (retaining $u_i(t) - u$ spikes) and $\tau$ spikes from the dynamic threshold unit (retaining $\tau_i(t) - \tau$ spikes). When $p \ge 1$, the p spikes generated by neuron $\sigma_i$ are transmitted to neuron $\sigma_j$ through the l-th synaptic channel. When $p = 0$, the rule $E_i/(a^u, a^\tau) \to \lambda$ is called a forgetting rule; if a forgetting rule in neuron $\sigma_i$ is enabled, the corresponding spikes in the feeding input unit and dynamic threshold unit are removed and no new spikes are generated.
An inhibitory rule has the form $E_{(i,j)}/(a^u, \overline{a^\tau}) \to a^p(l)$, as shown in Figure 1a, and its firing mechanism is special. Usually the firing of a neuron depends only on its current state, and other neurons have no influence on it. However, the firing condition of an inhibitory rule depends not only on the state of the current neuron but also on the state of the preceding neuron (called an inhibitory neuron). Suppose an inhibitory rule is located in neuron $\sigma_i$ and the inhibitory neuron of $\sigma_i$ is $\sigma_j$. We assume that when there is an inhibitory arc between neurons $\sigma_i$ and $\sigma_j$, a usual arc cannot exist between them. In addition, the inhibitory neuron $\sigma_j$ only controls the firing of neuron $\sigma_i$; neuron $\sigma_i$ has no effect on the inhibitory neuron $\sigma_j$. The firing condition of the inhibitory rule is $E_{(i,j)} = \{u_i \ge \tau_i \wedge u_i \ge u \wedge \tau_i \ge \tau \wedge u_j = \tau_i\}$, which is equivalent to $u_i(t) \ge \tau_i(t)$, $u_i(t) \ge u$, $\tau_i(t) \ge \tau$, and $u_j(t) = \tau_i(t)$, where $u_j(t) = \tau_i(t)$ is an additional constraint compared to the firing condition of a firing rule. Here $u_j(t)$ is the number of spikes at time t in the feeding input unit of inhibitory neuron $\sigma_j$, and $\tau_i(t)$ is the number of spikes at time t in the dynamic threshold unit of neuron $\sigma_i$. The firing condition of an inhibitory rule thus corresponds to the function of an inhibitory synapse: it is related not only to the state of neuron $\sigma_i$ but also to the state of the inhibitory neuron $\sigma_j$. Although the firing conditions of inhibitory rules and firing rules differ, the spike consumption strategy when they are applied is the same. If neuron $\sigma_i$ applies a rule to configuration $C_t = ([u_1(t), \tau_1(t)], [u_2(t), \tau_2(t)], \ldots, [u_m(t), \tau_m(t)])$, we get:
$$u_i(t+1) = \begin{cases} u_i(t) - u + n, & \text{if neuron } \sigma_i \text{ fires} \\ u_i(t) + n, & \text{otherwise} \end{cases}$$
$$\tau_i(t+1) = \begin{cases} \tau_i(t) - \tau + p, & \text{if neuron } \sigma_i \text{ fires} \\ \tau_i(t), & \text{otherwise} \end{cases}$$
where the n spikes come from other neurons and the p spikes are generated by neuron $\sigma_i$.
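A direct transcription of these update equations as a Python sketch (ours; the function name and argument defaults are assumptions for illustration):

```python
# A minimal sketch of the state update for one neuron applying a rule
# (a^u, a^tau) -> a^p while receiving n spikes from presynaptic neighbors.
def step(u_i: int, tau_i: int, fired: bool,
         u: int = 0, tau: int = 0, p: int = 0, n: int = 0):
    if fired:
        # feeding input loses u spikes; threshold loses tau and gains p
        return u_i - u + n, tau_i - tau + p
    # a non-firing neuron only accumulates incoming spikes
    return u_i + n, tau_i

# Neuron [2, 2] firing (a^2, a) -> a(2) with no incoming spikes: -> [0, 2].
print(step(2, 2, fired=True, u=2, tau=1, p=1))  # (0, 2)
```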
A neuron may have multiple inhibitory neurons. As shown in Figure 1b, neuron $\sigma_i$ has z inhibitory neurons $\sigma_{j_1}, \ldots, \sigma_{j_z}$. In this case, the inhibitory rule has the form $E_{(i,j_1),\ldots,(i,j_z)}/(a^u, \overline{a^\tau}) \to a^p(l)$, called an extended inhibitory rule. Its firing condition is $u_i(t) \ge \tau_i(t)$, $u_i(t) \ge u$, $\tau_i(t) \ge \tau$, and $u_{j_1}(t) = \tau_i(t) \wedge \cdots \wedge u_{j_z}(t) = \tau_i(t)$.
If neuron $\sigma_i$ meets a firing condition during the calculation, one of the rules in $R_i$ must be used. If two rules $E_i/(a^{u_1}, a^{\tau_1}) \to a^{p_1}(l_1)$ and $E_i/(a^{u_2}, a^{\tau_2}) \to a^{p_2}(l_2)$ in neuron $\sigma_i$ can both be applied to configuration $C_t$, only one of them is applied. According to the maximum spike consumption strategy of DTNP-MCIR systems, when $u_1 + \tau_1 - p_1 > u_2 + \tau_2 - p_2$ the first rule is applied; otherwise, the second rule is applied; and when $u_1 + \tau_1 - p_1 = u_2 + \tau_2 - p_2$, one of them is selected non-deterministically. Note that this strategy also applies to forgetting rules. For example, if the two rules $E_1/(a^2, a) \to \lambda$ and $E_2/(a, a) \to a(l)$ both satisfy their firing conditions at time t, then since the forgetting rule $E_1/(a^2, a) \to \lambda$ consumes three spikes while the spiking rule $E_2/(a, a) \to a(l)$ consumes only one, the forgetting rule is chosen and applied.
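The selection strategy lends itself to a short sketch (ours; encoding rules as (u, tau, p, channel) tuples is an assumption): pick an enabled rule maximizing $u + \tau - p$ and break ties randomly, reproducing the forgetting-rule example above.

```python
# A sketch of the maximum spike consumption strategy among enabled rules.
import random

def select_rule(enabled_rules):
    """enabled_rules: list of (u, tau, p, channel) tuples."""
    best = max(u + tau - p for (u, tau, p, _) in enabled_rules)
    candidates = [r for r in enabled_rules if r[0] + r[1] - r[2] == best]
    return random.choice(candidates)  # non-deterministic tie-break

# Forgetting rule (a^2, a) -> lambda consumes 3 spikes; spiking rule
# (a, a) -> a(l) consumes 1, so the forgetting rule is selected.
print(select_rule([(2, 1, 0, None), (1, 1, 1, 1)]))  # (2, 1, 0, None)
```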
A transition step takes the system from one configuration to another, and a sequence of transitions starting from the initial configuration is called a calculation. For a configuration $C_t$, if no rule in the system can be applied, the calculation halts. Any calculation corresponds to a binary sequence: write 1 when the output neuron $\sigma_{out}$ emits a spike to the environment, and 0 otherwise. The calculation result is defined as the time interval between the first two spikes emitted by the output neuron $\sigma_{out}$. $N_2(\Pi)$ denotes the set of numbers generated by system Π, and $N_2DTNPMCIR_m^n$ denotes the family of all sets $N_2(\Pi)$ generated by DTNP-MCIR systems having at most m neurons and at most n rules in each neuron; when m or n is not restricted, the symbol * is used instead. System Π can also be used as an accepting device: the input neuron receives spikes from the environment, and the output neuron is removed from the system. The system imports a spike train from the environment and stores the number n in the form of 2n spikes; when the calculation halts, n is the number accepted. $N_{acc}(\Pi)$ denotes the set of numbers accepted by the system, and $N_{acc}DTNPMCIR_m^n$ denotes the family of all sets $N_{acc}(\Pi)$ accepted by DTNP-MCIR systems having at most m neurons and at most n rules in each neuron.
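For illustration, a small helper (hypothetical, not part of the formal definition) that reads a generated number off a binary spike train:

```python
# A sketch of how a generated number is read off a binary spike train:
# the result is the distance between the first two 1s emitted by sigma_out.
def result_from_train(train: str) -> int:
    ones = [t for t, bit in enumerate(train) if bit == "1"]
    return ones[1] - ones[0]  # interval between the first two spikes

print(result_from_train("0100010"))  # 4
```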

2.2. Illustrative Example

In order to clearly understand the working mechanism of DTNP-MCIR systems, we give an example that can generate a finite spike train, as shown in Figure 2.
Suppose a DTNP-MCIR system Π consists of four neurons $\sigma_1, \sigma_2, \sigma_3, \sigma_{out}$. Initially, the feeding input units of neurons $\sigma_1$ and $\sigma_2$ each hold two spikes, while neurons $\sigma_3$ and $\sigma_{out}$ hold none, that is, $u_1(0) = 2$, $u_2(0) = 2$, $u_3(0) = 0$, $u_{out}(0) = 0$. The initial dynamic threshold units of the four neurons hold $\tau_1(0) = 2$, $\tau_2(0) = 2$, $\tau_3(0) = 1$, $\tau_{out}(0) = 1$ spikes, respectively. Therefore, the initial configuration is $C_0 = ([2, 2], [2, 2], [0, 1], [0, 1])$.
At time 1, since $u_1(0) = 2 \ge \tau_1(0) = 2$, the rules $(a^2, a^2) \to a^2(1)$ ($u = \tau = p = 2$) and $(a^2, a) \to a(2)$ ($u = 2$, $\tau = p = 1$) can both be applied in neuron $\sigma_1$, and since the spike consumption of the two rules is equal, $u + \tau - p = 2$ in both cases, one of them is selected non-deterministically. There are therefore the following two cases:
(1)
Case 1: if rule $(a^2, a^2) \to a^2(1)$ is applied in neuron $\sigma_1$ at time 1, then neuron $\sigma_1$ consumes the two spikes in its feeding input unit and sends two spikes to neuron $\sigma_3$ through channel (1). Neuron $\sigma_1$ is the inhibitory neuron of neuron $\sigma_2$; since $u_2(0) = 2 \ge \tau_2(0) = 2$ and $u_1(0) = \tau_2(0)$, rule $(a^2, \overline{a^2}) \to a^2(1)$ reaches its firing condition in neuron $\sigma_2$, so neuron $\sigma_2$ consumes the two spikes in its feeding input unit and sends two spikes to neuron $\sigma_{out}$ via channel (1) at time 1. Therefore, $C_1 = ([0, 2], [0, 2], [2, 1], [2, 1])$. At time 2, since $u_{out}(1) = 2 \ge \tau_{out}(1) = 1$, both rule $(a^2, a) \to a(1)$ and rule $(a, \lambda) \to \lambda$ in neuron $\sigma_{out}$ can be applied; according to the maximum spike consumption strategy, rule $(a^2, a) \to a(1)$ is applied and sends one spike to the environment through channel (1). Further, since $u_3(1) = 2 \ge \tau_3(1) = 1$, rule $(a^2, a) \to a^2(1)$ is enabled in neuron $\sigma_3$, which consumes the two spikes in its feeding input unit and sends two spikes to neuron $\sigma_{out}$. Thus, $C_2 = ([0, 2], [0, 2], [0, 2], [2, 1])$. At time 3, since $u_{out}(2) = 2 \ge \tau_{out}(2) = 1$, rule $(a^2, a) \to a(1)$ fires again and sends one spike to the environment, and the system Π halts. The spike train generated in this case is "011".
(2)
Case 2: if rule $(a^2, a) \to a(2)$ is applied in neuron $\sigma_1$ at time 1, then neuron $\sigma_1$ consumes the two spikes in its feeding input unit and sends one spike to neuron $\sigma_{out}$ through channel (2). Because the state of system Π at time 1 is otherwise the same as in Case 1, neuron $\sigma_2$ consumes the two spikes in its feeding input unit and sends two spikes to neuron $\sigma_{out}$ through channel (1). So $C_1 = ([0, 2], [0, 2], [0, 1], [3, 1])$. At time 2, since $u_{out}(1) = 3 \ge \tau_{out}(1) = 1$, again according to the maximum spike consumption strategy, neuron $\sigma_{out}$ fires by rule $(a^2, a) \to a(1)$ and sends one spike to the environment through channel (1). The configuration of system Π is then $C_2 = ([0, 2], [0, 2], [0, 1], [1, 1])$. At time 3, since $u_{out}(2) = 1 \ge \tau_{out}(2) = 1$, rule $(a, \lambda) \to \lambda$ is applied, removing the one spike in the feeding input unit, and the system Π halts. The spike train generated in this case is "010".

3. Turing Universality of DTNP-MCIR Systems as Number-Generating/Accepting Devices

In this section, we explain the working mechanism of DTNP-MCIR systems in number generating and number accepting modes. We prove the computational completeness of DTNP-MCIR systems by simulating register machines. More specifically, DTNP-MCIR systems can generate/accept all recursively enumerable sets of numbers (whose family is denoted NRE).
A register machine is defined as $M = (m, H, l_0, l_h, I)$, where m is the number of registers, H is the set of instruction labels, $l_0$ is the start label, $l_h$ is the halt label, and I is the set of instructions. Each instruction in I corresponds to a label in H, and the instructions in I have the following three forms (see the interpreter sketch after this list):
(1)
$l_i: (ADD(r), l_j, l_k)$ (add 1 to register r and then move non-deterministically to one of the instructions with labels $l_j$, $l_k$).
(2)
$l_i: (SUB(r), l_j, l_k)$ (if register r is non-zero, subtract 1 from it and go to the instruction with label $l_j$; otherwise go to the instruction with label $l_k$).
(3)
$l_h: HALT$ (halting instruction).
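As a compact reference, the following Python sketch (our own encoding; the dictionary-based program format is an assumption) interprets these three instruction forms, with ADD choosing one of its two targets non-deterministically, as in generating mode.

```python
# A minimal interpreter for the register machine M = (m, H, l0, lh, I).
import random

def run(program, m, l0="l0", lh="lh"):
    regs = [0] * m
    label = l0
    while label != lh:
        op, r, *targets = program[label]
        if op == "ADD":
            regs[r] += 1
            label = random.choice(targets)  # move to l_j or l_k
        else:  # SUB
            lj, lk = targets
            if regs[r] > 0:
                regs[r] -= 1
                label = lj
            else:
                label = lk
    return regs

# l0: (ADD(1), l0, lh) generates an arbitrary positive number in register 1.
print(run({"l0": ("ADD", 1, "l0", "lh")}, m=2))
```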

3.1. DTNP-MCIR Systems as Number Generating Devices

Initially, every register of the register machine M is empty. In generating mode, the register machine computes a number n as follows: it starts with instruction $l_0$ and applies a sequence of instructions until it reaches the halting instruction $l_h$; the number stored in the first register is then the result computed by the register machine. It is known that register machines characterize the family NRE.
Theorem 1.
$N_2DTNPMCIR_*^2 = NRE$.
Proof. 
It is well known that $N_2DTNPMCIR_*^2 \subseteq NRE$, so we only need to prove $NRE \subseteq N_2DTNPMCIR_*^2$. We consider a register machine $M = (m, H, l_0, l_h, I)$ working in generating mode, assuming that all registers except register 1 are empty and that register 1 is never decremented during the calculation. □
We design a DTNP-MCIR system $\Pi_1$ to simulate the register machine working in generating mode. The system $\Pi_1$ includes three modules: an ADD module and a SUB module, used to simulate the ADD and SUB instructions respectively, and a FIN module, which outputs the calculation result of the system $\Pi_1$.
We stipulate that each register r of register machine M corresponds to a neuron $\sigma_r$; note that there are no rules in neuron $\sigma_r$, and numbers are encoded in it: if the number stored in register r is $n \ge 0$, then neuron $\sigma_r$ contains 2n spikes. Each instruction l corresponds to a neuron $\sigma_l$, and some auxiliary neurons are introduced into the modules. Assume that there are no spikes in the feeding input units of the auxiliary neurons, but that neuron $\sigma_{l_0}$ receives two spikes at the beginning. Each neuron has an initial value for its threshold unit: (i) the initial threshold of each instruction neuron $\sigma_l$ is $a^2$; (ii) each register neuron $\sigma_r$ has initial threshold a; (iii) the initial thresholds of the other neurons vary. When a neuron $\sigma_{l_i}$ gets two spikes, the system $\Pi_1$ begins to simulate the instruction $l_i = (OP(r), l_j, l_k)$ (OP is one of the ADD or SUB operations). Starting from the activated neuron $\sigma_{l_i}$, the simulation handles neuron $\sigma_r$ according to OP, and then two spikes are introduced into one of the neurons $\sigma_{l_j}$ and $\sigma_{l_k}$. The simulation continues until neuron $\sigma_{l_h}$ fires, indicating that the simulation is complete [30]. During the calculation, neuron $\sigma_{out}$ sends spikes to the environment twice, at times $t_1$ and $t_2$ respectively, and the calculation result is defined as the interval $t_2 - t_1$, which is also the number contained in register 1.
In order to verify that the system $\Pi_1$ can indeed simulate the register machine M correctly, we explain how the ADD and SUB modules simulate the ADD and SUB instructions, and how the FIN module outputs the calculation result.
(1)
ADD module (shown in Figure 3)—simulating an ADD instruction $l_i: (ADD(r), l_j, l_k)$
The system $\Pi_1$ starts from the ADD instruction $l_0$. Suppose an ADD instruction $l_i: (ADD(r), l_j, l_k)$ is simulated at time t. At this time, neuron $\sigma_{l_i}$ contains two spikes, and the configuration of the ADD module is $C_t = ([u_1, \tau_1], [u_2, \tau_2], \ldots, [u_7, \tau_7])$, involving the 7 neurons $\sigma_{l_i}, \sigma_{c_1}, \sigma_{c_2}, \sigma_{c_3}, \sigma_{c_4}, \sigma_{l_j}, \sigma_{l_k}$ respectively. Thus, $C_t = ([2, 2], [0, 2], [0, 2], [0, 1], [0, 1], [0, 2], [0, 2])$. Since $u_{l_i}(t) = 2 \ge \tau_{l_i}(t) = 2$, rule $(a^2, a^2) \to a^2(1)$ is applied in neuron $\sigma_{l_i}$, and two spikes are sent to neurons $\sigma_{c_1}$, $\sigma_{c_2}$, and $\sigma_r$ via channel (1). Because neuron $\sigma_r$ receives two spikes, register r increases by one. Thus, $C_{t+1} = ([0, 2], [2, 2], [2, 2], [0, 1], [0, 1], [0, 2], [0, 2])$.
At time t + 1, the feeding input unit and dynamic threshold unit of neurons $\sigma_{c_1}$ and $\sigma_{c_2}$ each contain two spikes, so both neurons fire. Rule $(a^2, a) \to a(1)$ in neuron $\sigma_{c_1}$ is applied and sends one spike to neuron $\sigma_{c_3}$. For neuron $\sigma_{c_2}$, rules $(a^2, a^2) \to a^2(1)$ and $(a^2, a) \to a(2)$ both satisfy the firing condition and consume an equal number of spikes, so one of them is selected non-deterministically. We consider the following two cases:
(i)
At time t + 1, if rule $(a^2, a^2) \to a^2(1)$ is applied, neuron $\sigma_{c_2}$ sends two spikes to neuron $\sigma_{l_k}$ via channel (1). Neuron $\sigma_{l_k}$ then holds two spikes, and system $\Pi_1$ starts to simulate instruction $l_k$. Therefore, $C_{t+2} = ([0, 2], [0, 2], [0, 2], [1, 1], [0, 1], [0, 2], [2, 2])$. At time t + 3, since $u_{c_3}(t+2) = 1 \ge \tau_{c_3}(t+2) = 1$, the forgetting rule $(a, \lambda) \to \lambda$ of neuron $\sigma_{c_3}$ is enabled and removes the only spike in its feeding input unit.
(ii)
At time t + 1, if rule $(a^2, a) \to a(2)$ is applied, neuron $\sigma_{c_2}$ sends one spike to neurons $\sigma_{c_3}$ and $\sigma_{c_4}$ via channel (2). Thus, $C_{t+2} = ([0, 2], [0, 2], [0, 2], [2, 1], [1, 1], [0, 2], [0, 2])$. Note that neuron $\sigma_{c_4}$ is the inhibitory neuron of neuron $\sigma_{c_3}$; since $u_{c_4}(t+2) = \tau_{c_3}(t+2)$, the inhibitory rule $(a^2, \bar{a}) \to a^2(1)$ is enabled, and neuron $\sigma_{c_3}$ sends two spikes to neuron $\sigma_{l_j}$. This means that the system $\Pi_1$ starts to simulate instruction $l_j$. The configuration of $\Pi_1$ at this time is $C_{t+3} = ([0, 2], [0, 2], [0, 2], [0, 1], [0, 1], [2, 2], [0, 2])$.
Therefore, ADD instructions are correctly simulated by the ADD module: once neuron $\sigma_{l_i}$ holds two spikes, two spikes are added to neuron $\sigma_r$, and then one of neurons $\sigma_{l_j}$ and $\sigma_{l_k}$ is selected non-deterministically.
(2)
SUB module (shown in Figure 4)—simulating a SUB instruction $l_i: (SUB(r), l_j, l_k)$.
Suppose that the system $\Pi_1$ starts to simulate the SUB instruction $l_i: (SUB(r), l_j, l_k)$ at time t, at which point neuron $\sigma_{l_i}$ contains two spikes. The configuration of the SUB module is $C_t = ([u_1, \tau_1], [u_2, \tau_2], \ldots, [u_7, \tau_7])$, involving the 7 neurons $\sigma_{l_i}, \sigma_r, \sigma_{c_1}, \sigma_{c_2}, \sigma_{c_3}, \sigma_{l_j}, \sigma_{l_k}$ respectively. Thus, $C_t = ([2, 2], [2n, 1], [0, 1], [0, 1], [0, 1], [0, 2], [0, 2])$. Since $u_{l_i}(t) = 2 \ge \tau_{l_i}(t) = 2$, neuron $\sigma_{l_i}$ fires and sends a spike to neurons $\sigma_r$ and $\sigma_{c_1}$ via channel (1). Therefore, $C_{t+1} = ([0, 2], [2n+1, 1], [1, 1], [0, 1], [0, 1], [0, 2], [0, 2])$. Based on the number of spikes in neuron $\sigma_r$, there are two cases:
(i)
If the feeding input unit of neuron $\sigma_r$ contains $2n > 0$ spikes at time t, then at time t + 1 neuron $\sigma_r$ contains $2n + 1 \ge 3$ spikes. At time t + 2, rule $a^{2b+1}/(a^3, a) \to a^2(1)$ is enabled, and neuron $\sigma_r$ sends two spikes to neuron $\sigma_{l_k}$ via channel (1). This indicates that system $\Pi_1$ starts to simulate instruction $l_k$ of register machine M. In addition, rule $(a, a) \to a(1)$ satisfies its firing condition, and neuron $\sigma_{c_1}$ sends a spike to neuron $\sigma_{c_2}$. Thus $C_{t+2} = ([0, 2], [2n-2, 1], [0, 1], [1, 1], [0, 1], [0, 2], [2, 2])$. At time t + 3, the forgetting rule $(a, \lambda) \to \lambda$ of neuron $\sigma_{c_2}$ is enabled and removes the only spike in its feeding input unit. Therefore, $C_{t+3} = ([0, 2], [2n-2, 1], [0, 1], [0, 1], [0, 1], [0, 2], [2, 2])$.
(ii)
If the feeding input unit of neuron $\sigma_r$ contains no spikes at time t, then at time t + 1 neuron $\sigma_r$ contains only one spike. At time t + 2, rule $a^{2b+1}/(a, a) \to a(2)$ is applied, and neuron $\sigma_r$ transmits one spike to neurons $\sigma_{c_2}$ and $\sigma_{c_3}$ via channel (2); at the same time, neuron $\sigma_{c_1}$ also transmits a spike to neuron $\sigma_{c_2}$. So $C_{t+2} = ([0, 2], [0, 1], [0, 1], [2, 1], [1, 1], [0, 2], [0, 2])$. Note that neuron $\sigma_{c_3}$ is the inhibitory neuron of neuron $\sigma_{c_2}$. At time t + 3, the forgetting rule $(a, \lambda) \to \lambda$ of neuron $\sigma_{c_3}$ is enabled and removes the only spike in its feeding input unit. Moreover, since $u_{c_3}(t+2) = \tau_{c_2}(t+2)$, rule $(a^2, \bar{a}) \to a^2(1)$ is applied, and neuron $\sigma_{c_2}$ sends two spikes to neuron $\sigma_{l_j}$. This means that the system $\Pi_1$ starts to simulate instruction $l_j$. Thus, $C_{t+3} = ([0, 2], [0, 1], [0, 1], [0, 2], [0, 1], [2, 2], [0, 2])$.
It can be seen from the above description that the SUB instruction is correctly simulated by the SUB module. The simulation starts when neuron $\sigma_{l_i}$ contains two spikes, and then neuron $\sigma_{l_k}$ or $\sigma_{l_j}$ receives two spikes according to whether neuron $\sigma_r$ contains spikes.
(3)
Module FIN (shown in Figure 5)—outputting the result of computation
We assume that neuron $\sigma_1$ contains 2n spikes. When neuron $\sigma_{l_h}$ holds two spikes at time t, the register machine M halts; rule $(a^2, a) \to a(1)$ is enabled and sends a spike to neurons $\sigma_1$ and $\sigma_{out}$, so the number of spikes in neuron $\sigma_1$ becomes odd. At time t + 1, rule $(a, a) \to a(1)$ in neuron $\sigma_{out}$ is enabled and sends the first spike to the environment. Moreover, both rule $a^{2b+1}/(a^2, \lambda) \to \lambda$ and rule $a^{2b+1}/(a, a) \to a(1)$ can be applied in neuron $\sigma_1$; according to the maximum spike consumption strategy, the forgetting rule $a^{2b+1}/(a^2, \lambda) \to \lambda$ is applied first and two spikes are eliminated. This process repeats until only one spike remains in neuron $\sigma_1$. At time t + n, neuron $\sigma_1$ contains a single spike, so rule $a^{2b+1}/(a, a) \to a(1)$ is applied and transmits a spike to neuron $\sigma_{out}$. At time t + n + 1, rule $(a, a) \to a(1)$ in neuron $\sigma_{out}$ is enabled and sends the second spike to the environment. The time interval between the two spikes transmitted to the environment is $n = (t + n + 1) - (t + 1)$, which is the number in register 1 when the register machine halts.
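The timing argument reduces to simple arithmetic; the following sketch (our own check, with the hypothetical helper name fin_interval) confirms that removing two spikes per step from 2n + 1 spikes takes n steps, which is exactly the interval between the two output spikes.

```python
# A sketch of the FIN module's timing: starting from 2n+1 spikes in
# sigma_1, two spikes are forgotten per step until one remains, so the
# second output spike trails the first by exactly n steps.
def fin_interval(n: int) -> int:
    spikes = 2 * n + 1      # odd count after sigma_lh adds one spike
    steps = 0
    while spikes > 1:
        spikes -= 2         # forgetting rule a^{2b+1}/(a^2, lambda) -> lambda
        steps += 1
    return steps            # equals n, the content of register 1

print(fin_interval(5))  # 5
```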
According to the above description, system $\Pi_1$ correctly simulates register machine M working in generating mode. Therefore, Theorem 1 holds.

3.2. Turing Universality of Systems Working in the Accepting Mode

In the following, we prove the universality of DTNP-MCIR systems as number accepting devices.
Theorem 2.
$N_{acc}DTNPMCIR_*^2 = NRE$.
Proof. 
A DTNP-MCIR system $\Pi_2$ is constructed to simulate a register machine $M = (m, H, l_0, l_h, I)$ working in accepting mode. The initial thresholds are set as in the proof of Theorem 1, and initially the auxiliary neurons contain no spikes. The proof is a modification of the proof of Theorem 1. System $\Pi_2$ contains a deterministic ADD module, a SUB module, and an INPUT module. □
Figure 6 shows the INPUT module, where neuron $\sigma_{in}$ is used to read the spike train $10^{n-1}1$ from the environment. The number accepted by system $\Pi_2$ is the interval $n = (n + 1) - 1$ between the two spikes of the spike train.
Suppose that neuron $\sigma_{in}$ imports the first spike from the environment at time t. The initial configuration of system $\Pi_2$ is then $C_t = ([1, 1], [0, 1], [0, 1], [0, 2], [0, 1])$, involving the 5 neurons $\sigma_{in}, \sigma_{c_1}, \sigma_{c_2}, \sigma_{l_0}, \sigma_1$ respectively. Rule $(a, a) \to a(1)$ in neuron $\sigma_{in}$ is enabled and sends a spike to neurons $\sigma_{c_1}$ and $\sigma_{c_2}$ via channel (1). Thus $C_{t+1} = ([0, 1], [1, 1], [1, 1], [0, 2], [0, 1])$. At time t + 2, rule $(a, a) \to a(1)$ in neurons $\sigma_{c_1}$ and $\sigma_{c_2}$ is applied. From time t + 2 until the second spike is imported, neurons $\sigma_{c_1}$ and $\sigma_{c_2}$ both send a spike to neuron $\sigma_1$ and exchange a spike with each other via channel (1). Neuron $\sigma_1$ thus receives a total of 2n spikes from time t + 2 to time t + n + 1, which indicates that the number contained in register 1 is n. At time t + n + 1, neuron $\sigma_{in}$ receives the second spike, and the configuration of system $\Pi_2$ is $C_{t+n+1} = ([1, 1], [1, 1], [1, 1], [0, 2], [2n, 1])$. At the next moment, rule $(a, a) \to a(1)$ is enabled again and neuron $\sigma_{in}$ sends a spike to neurons $\sigma_{c_1}$ and $\sigma_{c_2}$. Thus, $C_{t+n+2} = ([0, 1], [2, 1], [2, 1], [0, 2], [2n, 1])$. Since neurons $\sigma_{c_1}$ and $\sigma_{c_2}$ each contain two spikes, the forgetting rule $(a^2, \lambda) \to \lambda$ in neuron $\sigma_{c_2}$ is enabled and removes the two spikes in its feeding input unit, while rule $(a^2, a) \to a^2(2)$ in neuron $\sigma_{c_1}$ is applied and transmits two spikes to neuron $\sigma_{l_0}$ via channel (2). Since neuron $\sigma_{l_0}$ then contains two spikes, system $\Pi_2$ starts to simulate instruction $l_0$.
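The net effect of the INPUT module can be summarized by a small sketch (ours; load_register is a hypothetical helper): the interval between the first two spikes of the train determines n, and 2n spikes end up in neuron $\sigma_1$.

```python
# A sketch of the INPUT module's effect: between the two input spikes of
# the train 1 0^{n-1} 1, the auxiliary neurons feed register neuron
# sigma_1 two spikes per step, so it ends up holding 2n spikes.
def load_register(train: str) -> int:
    first = train.index("1")
    second = train.index("1", first + 1)
    n = second - first            # interval between the two spikes
    return 2 * n                  # spikes stored in sigma_1

print(load_register("10001"))  # n = 4, so 8 spikes
```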
In accepting mode, we use the deterministic ADD module shown in Figure 7 to simulate the instruction $l_i: (ADD(r), l_j)$ of the register machine M. When neuron $\sigma_{l_i}$ receives two spikes, rule $(a^2, a) \to a^2(1)$ is applied and two spikes are sent to neurons $\sigma_{l_j}$ and $\sigma_r$. This means that system $\Pi_2$ starts to simulate instruction $l_j$ and that register r increases by 1. The SUB module shown in Figure 4 is used to simulate the instruction $l_i: (SUB(r), l_j, l_k)$.
The FIN module is omitted, but the neuron $\sigma_{l_h}$ remains in system $\Pi_2$. When neuron $\sigma_{l_h}$ receives two spikes, the halt instruction $l_h$ has been reached, that is, the register machine M halts.
The above description shows that system $\Pi_2$ correctly simulates the register machine M working in accepting mode, and that the neurons of system $\Pi_2$ contain at most two rules each. Therefore, Theorem 2 holds.

4. DTNP-MCIR Systems as Function Computing Devices

In this section, we discuss the ability of a small universal DTNP-MCIR system to compute functions. A register machine $M = (m, H, l_0, l_h, I)$ is used to compute a function $f: N^k \to N$. Initially, all registers are assumed to be empty; the k arguments are then introduced into k registers (in general, only the first two registers are used). The register machine starts from instruction $l_0$ and computes until it reaches the halt instruction $l_h$; the function value is then stored in a special register $r_t$. Let $(\varphi_0, \varphi_1, \ldots)$ be a fixed admissible enumeration of the unary partial recursive functions. A register machine $M_u$ is universal if there is a recursive function g such that $\varphi_x(y) = M_u(g(x), y)$ for all natural numbers x, y.
Korec [31] proposed a small universal register machine $M_u = (8, H, l_0, l_h, I)$ containing 8 registers (labeled 0 to 7) and 23 instructions. By introducing the two numbers g(x) and y into registers 1 and 2, respectively, $\varphi_x(y)$ can be computed by $M_u$; when $M_u$ stops, the function value is stored in register 0. We modify the register machine $M_u$ by introducing a new register 8 and replacing the halt instruction with three new instructions: $l_{22}: (SUB(0), l_{23}, l_h)$; $l_{23}: (ADD(8), l_{22})$; $l_h: HALT$. We denote the modified register machine by $M_u'$, shown in Table 1. We will design a small universal DTNP-MCIR system to simulate the register machine $M_u'$.
Theorem 3.
There is a small universal DTNP-MCIR system having 73 neurons for computing functions.
Proof. 
We design a DTNP-MCIR system $\Pi_3$ to simulate the register machine $M_u'$. System $\Pi_3$ contains an INPUT module, an OUTPUT module, and a number of ADD and SUB modules that simulate the ADD and SUB instructions of $M_u'$, respectively. The INPUT module reads the spike train from the environment, and the OUTPUT module handles the calculation result, which is placed in register 8. Each register r and each instruction $l_i$ of $M_u'$ corresponds to a neuron $\sigma_r$ and $\sigma_{l_i}$, respectively. If the feeding input unit of neuron $\sigma_r$ contains 2n spikes, register r contains the number n. When neuron $\sigma_{l_i}$ receives two spikes, the simulation of instruction $l_i$ starts. Initially, all neurons in system $\Pi_3$ are assumed to be empty. □
The INPUT module is shown in Figure 8. It reads the spike train $10^{g(x)}10^y1$ from the environment, after which 2g(x) spikes and 2y spikes are stored in neurons $\sigma_1$ and $\sigma_2$, respectively. We do not use inhibitory rules in this module but make full use of multiple channels, which play a major role in saving neurons and improving the operating efficiency of the system.
Suppose that neuron $\sigma_{in}$ receives the first spike from the environment at time $t_1$. At time $t_1 + 1$, rule $(a, a) \to a(1)$ is applied, and neuron $\sigma_{in}$ sends a spike to neurons $\sigma_{c_1}$, $\sigma_{c_2}$, and $\sigma_{c_3}$. Since neurons $\sigma_{c_1}$, $\sigma_{c_2}$, and $\sigma_{c_3}$ each contain a spike, the rule $(a, a) \to a(1)$ in all three neurons is enabled at time $t_1 + 2$. That is, neuron $\sigma_{c_2}$ sends a spike to neuron $\sigma_{c_1}$, neurons $\sigma_{c_2}$ and $\sigma_{c_3}$ exchange a spike with each other, and neurons $\sigma_{c_1}$ and $\sigma_{c_2}$ each send a spike to neuron $\sigma_1$ via channel (1). This process repeats until the second spike reaches neuron $\sigma_{in}$. Therefore, neuron $\sigma_1$ receives two spikes at each moment from time $t_1 + 2$ to time $t_1 + g(x) + 1$ and imports a total of 2g(x) spikes (that is, the number stored in neuron $\sigma_1$ is g(x)).
Assume the second spike reaches neuron $\sigma_{in}$ at time $t_2$; in fact $t_2 = t_1 + g(x) + 1$. Similarly, rule $(a, a) \to a(1)$ in neuron $\sigma_{in}$ is applied and sends a spike to neurons $\sigma_{c_1}$, $\sigma_{c_2}$, and $\sigma_{c_3}$ at time $t_2 + 1$. Then neurons $\sigma_{c_1}$, $\sigma_{c_2}$, and $\sigma_{c_3}$ each contain two spikes, so rule $(a^2, a^2) \to a(2)$ in neurons $\sigma_{c_1}$ and $\sigma_{c_3}$ is enabled and rule $(a^2, a^2) \to a^2(1)$ is applied in neuron $\sigma_{c_2}$ at time $t_2 + 2$. From this moment on, neurons $\sigma_{c_1}$ and $\sigma_{c_3}$ each send a spike to neuron $\sigma_2$ via channel (2) and exchange spikes with neuron $\sigma_{c_2}$ at each step.
Neuron $\sigma_2$ receives a total of 2y spikes from time $t_2 + 2$ to time $t_2 + y + 1$ (that is, the number stored in neuron $\sigma_2$ is y). When the third spike is imported by neuron $\sigma_{in}$, at the next moment neurons $\sigma_{c_1}$, $\sigma_{c_2}$, and $\sigma_{c_3}$ each contain three spikes. The three spikes are consumed by the forgetting rule $(a^3, \lambda) \to \lambda$ in neurons $\sigma_{c_2}$ and $\sigma_{c_3}$, while rule $(a^3, \lambda) \to a^2(3)$ in neuron $\sigma_{c_1}$ is applied to transmit two spikes to neuron $\sigma_{l_0}$ via channel (3), indicating that system $\Pi_3$ starts to simulate the initial instruction $l_0$.
By inspecting Table 1, we find that all ADD instructions have the form $l_i: (ADD(r), l_j)$; therefore, we use the deterministic ADD module shown in Figure 7 to simulate them. Its working mechanism was clarified in the proof of Theorem 2.
In addition, the SUB instruction $l_i: (SUB(r), l_j, l_k)$ can be simulated by the SUB module in Figure 4, whose working mechanism was discussed in the proof of Theorem 1.
When neuron $\sigma_{l_h}$ receives two spikes at time t, the calculation of system $\Pi_3$ halts. The OUTPUT module shown in Figure 9 handles the calculation result. We assume that neuron $\sigma_8$ contains 2n spikes. The configuration of the OUTPUT module at time t is $C_t = ([u_1, \tau_1], [u_2, \tau_2], \ldots, [u_5, \tau_5])$, involving the five neurons $\sigma_{l_h}, \sigma_{c_1}, \sigma_{c_2}, \sigma_8, \sigma_{out}$ respectively. Thus, $C_t = ([2, 2], [0, 1], [0, 1], [2n, 1], [0, 1])$. Rule $(a^2, a) \to a(1)$ is enabled, and neuron $\sigma_{l_h}$ sends one spike to neurons $\sigma_{c_1}$, $\sigma_{c_2}$, and $\sigma_8$ via channel (1) at time t. Therefore, $C_{t+1} = ([0, 2], [1, 1], [1, 1], [2n+1, 1], [0, 1])$. Since neuron $\sigma_{c_2}$ is the inhibitory neuron of neuron $\sigma_{c_1}$ and $u_{c_2}(t+1) = \tau_{c_1}(t+1)$, rule $(a, \bar{a}) \to a(1)$ is applied, and neuron $\sigma_{c_1}$ sends a spike to neuron $\sigma_{out}$. Rule $(a, a) \to a(1)$ in neuron $\sigma_{c_2}$ and rule $a^{2b+1}/(a^3, a) \to a(1)$ in neuron $\sigma_8$ are also enabled. Although rule $a^{2b+1}/(a^3, a) \to a(1)$ consumes three spikes of neuron $\sigma_8$, neuron $\sigma_8$ and neuron $\sigma_{c_2}$ exchange one spike with each other via channel (1), so neuron $\sigma_8$ loses only two spikes per step. Thus, $C_{t+2} = ([0, 2], [0, 1], [1, 1], [2n-1, 1], [1, 1])$. Rule $(a, a) \to a(1)$ is applied in neuron $\sigma_{out}$, and the first spike is sent to the environment at time t + 2. Rule $a^{2b+1}/(a^3, a) \to a(1)$ continues to be executed until neuron $\sigma_8$ contains only one spike. At time t + n + 1, neuron $\sigma_8$ contains a single spike; rule $a^{2b+1}/(a, a) \to a(2)$ is applied, and neuron $\sigma_8$ transmits a spike to neuron $\sigma_{out}$ via channel (2). At time t + n + 2, rule $(a, a) \to a(1)$ is applied, and neuron $\sigma_{out}$ sends the second spike to the environment. According to the definition in Section 2, the result of system $\Pi_3$ is $n = (t + n + 2) - (t + 2)$, which is also the number contained in register 8.
From the above description, system $\Pi_3$ correctly simulates the computation of register machine $M_u'$. System $\Pi_3$ contains a total of 81 neurons: (1) the INPUT module has 3 auxiliary neurons; (2) the OUTPUT module has 2 auxiliary neurons; (3) each SUB module has 3 auxiliary neurons, for a total of 42 neurons; (4) 9 neurons correspond to the 9 registers; (5) 25 neurons correspond to the 25 instructions.
We can further reduce the number of neurons in system $\Pi_3$ by combining instructions. There are three cases:
Case 1. For the sequence of two consecutive ADD instructions $l_{17}: (ADD(2), l_{21})$ and $l_{21}: (ADD(3), l_{18})$, we can use the ADD-ADD module in Figure 10, so that the neuron $\sigma_{l_{21}}$ corresponding to instruction $l_{21}$ is saved.
Case 2. We can also combine instructions $l_{15}: (SUB(3), l_{18}, l_{20})$ and $l_{20}: (ADD(0), l_0)$ into one instruction $l_{15}: (SUB(3), l_{18}, (ADD(0), l_0))$, so that $\sigma_{l_{20}}$ is saved. The SUB-ADD-1 module shown in Figure 11 simulates the newly formed instruction.
Case 3. The following six pairs of ADD and SUB instructions can be combined.
$l_0: (SUB(1), l_1, l_2)$, $l_1: (ADD(7), l_0)$;
$l_4: (SUB(6), l_5, l_3)$, $l_5: (ADD(5), l_6)$;
$l_6: (SUB(7), l_7, l_8)$, $l_7: (ADD(1), l_4)$;
$l_8: (SUB(6), l_9, l_0)$, $l_9: (ADD(6), l_{10})$;
$l_{14}: (SUB(5), l_{16}, l_{17})$, $l_{16}: (ADD(4), l_{11})$;
$l_{22}: (SUB(0), l_{23}, l_h)$, $l_{23}: (ADD(8), l_{22})$.
In each of these six pairs, the ADD instruction sits at the first exit of the preceding SUB instruction, so the combined instruction can be expressed as $l_i: (SUB(r_1), l_j, l_k)$, $l_j: (ADD(r_2), l_g)$. In this way, we save six neurons. The recombined SUB and ADD instructions can be simulated by the SUB-ADD-2 module in Figure 12.
In summary, by combining ADD/SUB instructions we save eight neurons in total; the recombined instructions are simulated by the ADD-ADD, SUB-ADD-1, and SUB-ADD-2 modules, respectively. The number of neurons in system $\Pi_3$ therefore drops from 81 to 73. This completes the proof of Theorem 3.
To further illustrate the computing power of DTNP-MCIR systems, we compare them with other computing models in terms of the number of computing units needed. From Table 2, DTNP systems, SNP systems, SNP-IR systems, and recurrent neural networks need 109, 67, 100, and 886 neurons, respectively, to achieve Turing universality for computing functions, so DTNP-MCIR systems need fewer neurons than most of these models. Although the SNP-MC systems of [34] require only 38 neurons to compute functions, far fewer than DTNP-MCIR systems, the two have different working modes: those SNP-MC systems work in asynchronous mode, while DTNP-MCIR systems work in synchronous mode. Table 3 gives the full names of the compared models.

5. Conclusions and Further Work

Inspired by SNP systems with inhibitory rules (SNP-IR systems) and SNP systems with multiple channels (SNP-MC systems), this paper proposes dynamic threshold neural P systems with multiple channels and inhibitory rules (DTNP-MCIR systems). Dynamic threshold neural P systems (DTNP systems) had already been studied and proven to be Turing universal number generating/accepting devices. Our original intention in constructing DTNP-MCIR systems is to simulate more fully the actual situation of neurons communicating through synapses, and also to show the use of inhibitory rules and multiple channels in DTNP systems. In addition, we have optimized DTNP systems. Their firing rules have the form $E_i/(a^u, a^\tau) \to a^p$, where $u \ge 1$, $\tau \ge 0$, $p \ge 0$, and $p \le u$; however, $\tau$ and p are always equal in DTNP systems, which we consider a lack of generality, and we improve on it. Moreover, for the rule $(a^u, a^\tau) \to a^p$ of neuron $\sigma_r$ in the SUB module, if neuron $\sigma_r$ contains 2n spikes at time t and $2n \ge u$, then neuron $\sigma_r$ can fire at time t, which is unreasonable for the entire system. Therefore, we improve the form of the firing rules to $a^{2b+1}/(a^u, a^\tau) \to a^p(1)$, where $a^{2b+1}$ is a regular expression we introduce, meaning that neuron $\sigma_r$ can fire only when its number of spikes is odd.
In the future, we want to investigate whether DTNP-MCIR systems can be combined with certain algorithms: the two data units of DTNP-MCIR systems can serve as two parameter inputs, and the use of inhibitory rules and multiple channels gives them stronger control capabilities. Moreover, because DTNP-MCIR systems are a distributed parallel computing model, they can greatly improve the computational efficiency of algorithms. Future work will focus on using DTNP-MCIR systems to solve real-world problems, such as image processing and data clustering.

Author Contributions

Conceptualization, X.Y. and X.L.; methodology, X.Y.; formal analysis, X.Y.; writing—original draft preparation, X.Y.; writing—review and editing, X.Y.; visualization, X.L.; funding acquisition, X.Y. and X.L. All authors have read and agreed to the published version of the manuscript.

Funding

This research project is supported by National Natural Science Foundation of China (61876101, 61802234, 61806114), Social Science Fund Project of Shandong Province, China (16BGLJ06, 11CGLJ22), Natural Science Fund Project of Shandong Province, China (ZR2019QF007), Postdoctoral Project, China (2017M612339, 2018M642695), Humanities and Social Sciences Youth Fund of the Ministry of Education, China (19YJCZH244), Postdoctoral Special Funding Project, China (2019T120607).

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Păun, G. Computing with membranes. J. Comput. Syst. Sci. 2000, 61, 108–143.
2. Song, T.; Gong, F.; Liu, X. Spiking neural P systems with white hole neurons. IEEE Trans. NanoBiosci. 2016, 15, 666–673.
3. Freund, R.; Păun, G.; Pérez-Jiménez, M.J. Tissue-like P systems with channel-states. Theor. Comput. Sci. 2005, 330, 101–116.
4. Song, T.; Pan, L.; Wu, T. Spiking neural P systems with learning functions. IEEE Trans. NanoBiosci. 2019, 18, 176–190.
5. Cabarle, F.G.C.; Adorna, H.N.; Jiang, M.; Zeng, X. Spiking neural P systems with scheduled synapses. IEEE Trans. NanoBiosci. 2017, 16, 792–801.
6. Song, T.; Pan, L. Spiking neural P systems with rules on synapses working in maximum spiking strategy. IEEE Trans. NanoBiosci. 2015, 14, 465–477.
7. Ionescu, M.; Păun, G.; Yokomori, T. Spiking neural P systems. Fundam. Inform. 2006, 71, 279–308.
8. Song, T.; Pan, L. Spiking neural P systems with request rules. Neurocomputing 2016, 193, 193–200.
9. Zeng, X.; Zhang, X. Spiking neural P systems with thresholds. Neural Comput. 2014, 26, 1340–1361.
10. Zhao, Y.; Liu, X. Spiking neural P systems with neuron division and dissolution. PLoS ONE 2016, 11, e0162882.
11. Wang, J.; Shi, P.; Peng, H.; Pérez-Jiménez, M.J.; Wang, T. Weighted fuzzy spiking neural P systems. IEEE Trans. Fuzzy Syst. 2012, 21, 209–220.
12. Jiang, K.; Song, T.; Pan, L. Universality of sequential spiking neural P systems based on minimum spike number. Theor. Comput. Sci. 2013, 499, 88–97.
13. Zhang, X.; Luo, B. Sequential spiking neural P systems with exhaustive use of rules. Biosystems 2012, 108, 52–62.
14. Cavaliere, M.; Ibarra, O.H.; Păun, G.; Egecioglu, O.; Ionescu, M.; Woodworth, S. Asynchronous spiking neural P systems. Theor. Comput. Sci. 2009, 410, 2352–2364.
15. Song, T.; Pan, L.; Păun, G. Asynchronous spiking neural P systems with local synchronization. Inf. Sci. 2013, 219, 197–207.
16. Song, T.; Zou, Q.; Liu, X.; Zeng, X. Asynchronous spiking neural P systems with rules on synapses. Neurocomputing 2015, 151, 1439–1445.
17. Păun, G. Spiking neural P systems with astrocyte-like control. J. Univers. Comput. Sci. 2007, 13, 1707–1721.
18. Wu, T.; Păun, A.; Zhang, Z.; Pan, L. Spiking neural P systems with polarizations. IEEE Trans. Neural Netw. Learn. Syst. 2017, 29, 3349–3360.
19. Păun, A.; Păun, G. Small universal spiking neural P systems. Biosystems 2007, 90, 48–60.
20. Cabarle, F.; Adorna, H.; Pérez-Jiménez, M.; Song, T. Spiking neuron P systems with structural plasticity. Neural Comput. Appl. 2015, 26, 1905–1917.
21. Peng, H.; Chen, R.; Wang, J.; Song, X.; Wang, T.; Yang, F.; Sun, Z. Competitive spiking neural P systems with rules on synapses. IEEE Trans. NanoBiosci. 2017, 16, 888–895.
22. Xiong, G.; Shi, D.; Zhu, L.; Duan, X. A new approach to fault diagnosis of power systems using fuzzy reasoning spiking neural P systems. Math. Probl. Eng. 2013, 2013, 1–13.
23. Wang, T.; Zhang, G.X.; Zhao, J.B.; He, Z.Y.; Wang, J.; Pérez-Jiménez, M.J. Fault diagnosis of electric power systems based on fuzzy reasoning spiking neural P systems. IEEE Trans. Power Syst. 2015, 30, 1182–1194.
24. Peng, H.; Wang, J.; Ming, J.; Shi, P.; Pérez-Jiménez, M.J.; Yu, W.; Tao, C. Fault diagnosis of power systems using intuitionistic fuzzy spiking neural P systems. IEEE Trans. Smart Grid 2018, 9, 4777–4784.
25. Peng, H.; Wang, J.; Shi, P.; Pérez-Jiménez, M.J.; Riscos-Núñez, A. An extended membrane system with active membranes to solve automatic fuzzy clustering problems. Int. J. Neural Syst. 2016, 26, 1–17.
26. Zhang, G.; Rong, H.; Neri, F.; Pérez-Jiménez, M. An optimization spiking neural P system for approximately solving combinatorial optimization problems. Int. J. Neural Syst. 2014, 24, 1440006.
27. Peng, H.; Yang, J.; Wang, J.; Wang, T.; Sun, Z.; Song, X.; Lou, X.; Huang, X. Spiking neural P systems with multiple channels. Neural Netw. 2017, 95, 66–71.
28. Peng, H.; Li, B.; Wang, J. Spiking neural P systems with inhibitory rules. Knowl.-Based Syst. 2020, 188, 105064.
29. Peng, H.; Wang, J. Dynamic threshold neural P systems. Knowl.-Based Syst. 2019, 163, 875–884.
30. Peng, H.; Wang, J. Coupled neural P systems. IEEE Trans. Neural Netw. Learn. Syst. 2019, 30, 1672–1682.
31. Korec, I. Small universal register machines. Theor. Comput. Sci. 1996, 168, 267–301.
32. Zhang, X.; Zeng, X.; Pan, L. Smaller universal spiking neural P systems. Fundam. Inform. 2007, 87, 117–136.
33. Siegelmann, H.T.; Sontag, E.D. On the computational power of neural nets. J. Comput. Syst. Sci. 1995, 50, 132–150.
34. Song, X.; Peng, H. Small universal asynchronous spiking neural P systems with multiple channels. Neurocomputing 2020, 378, 1–8.
Figure 1. (a) inhibitory rule and (b) extended inhibitory rule.
Figure 2. An illustrative example.
Figure 3. Module ADD simulating the ADD instruction $l_i: (ADD(r), l_j, l_k)$.
Figure 4. Module SUB simulating a SUB instruction $l_i: (SUB(r), l_j, l_k)$.
Figure 5. FIN module.
Figure 6. The INPUT module of $\Pi_2$.
Figure 7. Deterministic ADD module, simulating $l_i: (ADD(r), l_j)$.
Figure 8. Module INPUT.
Figure 9. OUTPUT module.
Figure 10. The module simulating the consecutive ADD-ADD instructions $l_{17}: (ADD(2), l_{21})$ and $l_{21}: (ADD(3), l_{18})$.
Figure 11. The SUB-ADD-1 module simulating the combined instructions $l_{15}: (SUB(3), l_{18}, l_{20})$ and $l_{20}: (ADD(0), l_0)$.
Figure 12. The SUB-ADD-2 module simulating the combined instructions $l_i: (SUB(r_1), l_j, l_k)$, $l_j: (ADD(r_2), l_g)$.
Table 1. The universal register machine $M_u'$.

$l_0: (SUB(1), l_1, l_2)$          $l_1: (ADD(7), l_0)$
$l_2: (ADD(6), l_3)$               $l_3: (SUB(5), l_2, l_4)$
$l_4: (SUB(6), l_5, l_3)$          $l_5: (ADD(5), l_6)$
$l_6: (SUB(7), l_7, l_8)$          $l_7: (ADD(1), l_4)$
$l_8: (SUB(6), l_9, l_0)$          $l_9: (ADD(6), l_{10})$
$l_{10}: (SUB(4), l_0, l_{11})$    $l_{11}: (SUB(5), l_{12}, l_{13})$
$l_{12}: (SUB(5), l_{14}, l_{15})$ $l_{13}: (SUB(2), l_{18}, l_{19})$
$l_{14}: (SUB(5), l_{16}, l_{17})$ $l_{15}: (SUB(3), l_{18}, l_{20})$
$l_{16}: (ADD(4), l_{11})$         $l_{17}: (ADD(2), l_{21})$
$l_{18}: (SUB(4), l_0, l_{22})$    $l_{19}: (SUB(0), l_0, l_{18})$
$l_{20}: (ADD(0), l_0)$            $l_{21}: (ADD(3), l_{18})$
$l_{22}: (SUB(0), l_{23}, l_h)$    $l_{23}: (ADD(8), l_{22})$
$l_h: HALT$
Table 2. The comparison of different computing models in terms of the number of computing units.

Computing Models                   Computing Functions
DTNP-MCIR systems                  73
DTNP systems [29]                  109
SNP systems [32]                   67
SNP-IR systems [28]                100
Recurrent neural networks [33]     886
SNP-MC systems [34]                38
Table 3. Abbreviations and corresponding full names of some systems cited in the paper.

DTNP systems [29]      Dynamic threshold neural P systems
SNP-IR systems [28]    Spiking neural P systems with inhibitory rules
SNP-MC systems [27]    Spiking neural P systems with multiple channels
SNP systems [32]       Smaller universal spiking neural P systems
SNP-MC systems [34]    Small universal asynchronous spiking neural P systems with multiple channels
