Article

Spiking Neural P Systems with Rules Dynamic Generation and Removal

College of Business, Shandong Normal University, Jinan 250014, China
* Author to whom correspondence should be addressed.
Appl. Sci. 2023, 13(14), 8058; https://doi.org/10.3390/app13148058
Submission received: 7 June 2023 / Revised: 7 July 2023 / Accepted: 9 July 2023 / Published: 10 July 2023

Abstract

Spiking neural P systems (SNP systems), as computational models abstracted from the biological nervous system, have been a major research topic in biological computing. In conventional SNP systems, the rules in a neuron remain unchanged during the computation. In the biological nervous system, however, the biochemical reactions in a neuron are also influenced by factors such as the substances it contains. Based on this motivation, this paper proposes SNP systems with rules dynamic generation and removal (RDGRSNP systems). In RDGRSNP systems, the application of rules changes the substances in neurons, which in turn changes the rules available in those neurons. The Turing universality of RDGRSNP systems is demonstrated for the system working as a number-generating device and as a number-accepting device, respectively. Finally, a small universal RDGRSNP system for function computation using 68 neurons is given. A comparison with five variants of SNP systems demonstrates that the proposed variant requires fewer neurons.

1. Introduction

Membrane computing (MC) [1], proposed in 1998, is a distributed parallel computational method inspired by the working process of biological cells. The computational models of MC are also known as membrane systems or P systems. P systems can be roughly classified into three groups according to the biological cells they imitate: cell-like P systems, tissue-like P systems, and neural-like P systems.
In neural-like P systems, spikes are used to represent information. Two classes of neural-like P systems have been proposed: axon P systems (AP systems) [2] and spiking neural P systems (SNP systems) [3]. An AP system has a linear structure, so each node can only send spikes to its left and right neighbors. An SNP system has a directed graph structure, in which neurons are the nodes and the synapses between neurons are the directed arcs.
SNP systems, which also belong to the family of spiking neural networks [4], became a major research focus as soon as they were proposed. Research on SNP systems covers four main aspects: variant model design, proofs of computational power, algorithm design (application), and implementation. Theoretical research mainly includes variant model design and proofs of computational power, while algorithmic research mainly includes algorithm design (application) and implementation. Through the continued efforts of many scholars, several variants of SNP systems have been proposed. Numerical SNP (NSNP) systems [5,6,7] are a variant inspired by numerical P systems, in which information is encoded by the values of variables and processed by continuous functions. Compared with the original SNP systems, NSNP systems are no longer discrete but have a continuous numerical nature, which is useful for solving real-world problems. Homogeneous SNP (HSNP) systems [8,9,10,11] are a restricted variant in which every neuron has the same set of rules, making HSNP systems simpler than the original SNP systems. SNP systems with communication on request (SNQP systems) [12,13,14,15,16] add a communication strategy to the original SNP systems: a neuron requests spikes from neighboring neurons according to the number of spikes it contains, and no spikes are consumed or produced during the computation. SNP systems with anti-spikes [17,18,19,20] introduce anti-spikes and annihilation rules into the original SNP systems; a usual spike and an anti-spike annihilate each other when they meet in the same neuron. SNP systems with weights [21,22,23,24,25] introduce synaptic weights into the original SNP systems, so that the number of spikes received by a neuron depends not only on the number of spikes sent by its presynaptic neurons but also on the synaptic weights between neurons.
In addition, as bio-inspired computational models, SNP systems have many desirable characteristics. First, neurons are not fully connected to each other, so the topological structure of SNP systems is sparse and simple. In SNP systems, information is transmitted only when a firing condition on the number of spikes is met, so information transmission is characterized by low power consumption. Additionally, because SNP systems are parallel computational models, tasks can be processed in parallel, significantly reducing the required processing time. The working mechanism of SNP systems is also distinctive: spikes represent information, and the information is processed and transmitted by spiking rules. A neuron can contain multiple rules and uses regular expressions as control conditions, allowing the neuron to freely select which rule to apply depending on the number of spikes it holds. It is because of these characteristics that SNP systems perform well in many kinds of applications [26,27], such as time series forecasting [28,29,30,31], pattern recognition [32,33], image processing [34,35,36,37], etc.
Generally, the rules of neurons in SNP systems are fixed in the initial state and cannot be changed during the computation. However, in biological neurons, biochemical reactions usually change according to factors such as the different substances in the neuron. Motivated by this, spiking neural P systems with rules dynamic generation and removal (RDGRSNP systems) are proposed. In RDGRSNP systems, rules in the rule set of a neuron can be added and removed during the computation.
Our contribution is a new variant of SNP systems called RDGRSNP systems, which enriches the research on SNP systems. In RDGRSNP systems, the application of rules changes the number of spikes in a neuron, which in turn changes the rules the neuron contains. Thus, the rules in RDGRSNP systems are no longer constant, as in traditional SNP systems, but change dynamically during the computation, something not considered in previous theoretical studies of SNP systems. Therefore, the proposed RDGRSNP systems are closer to biological nervous systems and fit biological facts better than the original SNP systems. In addition, by dynamically changing rules according to the substances in neurons, RDGRSNP systems improve the adaptive regulation ability of SNP systems.
The rest of this paper is structured as follows. In Section 2, we introduce the proposed RDGRSNP systems and give an illustrative example. In Section 3, we demonstrate the Turing universality of RDGRSNP systems used as a number-generating device and as a number-accepting device, respectively. In Section 4, a small universal RDGRSNP system for simulating function computation is constructed. In Section 5, the work of this paper is summarized.

2. SNP Systems with Rules Dynamic Generation and Removal

This section presents the definition of SNP systems with rules dynamic generation and removal and explains them with an illustrative example.

2.1. Definition and Description

In this subsection, we give the formal definition of RDGRSNP systems, followed by a detailed description.

2.1.1. Definition

An RDGRSNP system $\Pi$ of degree $m$ is defined as the tuple

$\Pi = (O, \sigma_1, \sigma_2, \ldots, \sigma_m, R, syn, in, out)$

where
(1) $O = \{a\}$ is the singleton alphabet whose only element $a$ represents a spike.
(2) $\sigma_1, \sigma_2, \ldots, \sigma_m$ denote the $m$ neurons of $\Pi$, where each $\sigma_i$ is a pair $\sigma_i = (n_i, R_i)$ in which
(a) $n_i$ denotes the number of spikes in neuron $\sigma_i$;
(b) $R_i$ denotes the rule set contained in neuron $\sigma_i$, with $R_i \subseteq R$.
(3) $R$ is the rule set of $\Pi$, which contains rules of the following two forms:
(a) $r_i: E/a^c \to a^p;\ \alpha_1 r_1, \alpha_2 r_2, \ldots, \alpha_l r_l$. A rule of this form is called a spiking rule with rules dynamic generation and removal, where $E$ is a regular expression over the singleton alphabet $O$, $c$ and $p$ denote the numbers of spikes consumed and produced with $c \ge p \ge 1$, $r_1, r_2, \ldots, r_l$ denote $l$ rules in $R$, and $\alpha_1, \alpha_2, \ldots, \alpha_l \in \{+, -, 0\}$ are labels. The spiking rule with rules dynamic generation and removal $r_i: E/a^c \to a^p;\ \alpha_1 r_1, \alpha_2 r_2, \ldots, \alpha_l r_l$ can be applied when the number of spikes in the neuron satisfies $a^{n_i} \in L(E)$ and $n_i \ge c$. Furthermore, the rule can be abbreviated as $r_i: a^c \to a^p;\ \alpha_1 r_1, \alpha_2 r_2, \ldots, \alpha_l r_l$ when $L(E) = \{a^c\}$.
(b) $r_i: a^s \to \lambda;\ \alpha_1 r_1, \alpha_2 r_2, \ldots, \alpha_l r_l$. A rule of this form is called a forgetting rule with rules dynamic generation and removal, where $s \ge 1$ denotes the number of spikes consumed, with the additional restriction that $a^s \notin L(E)$ for every spiking rule with rules dynamic generation and removal $E/a^c \to a^p;\ \alpha_1 r_1, \alpha_2 r_2, \ldots, \alpha_l r_l$ in $R$. As above, $r_1, r_2, \ldots, r_l$ denote $l$ rules in $R$, and $\alpha_1, \alpha_2, \ldots, \alpha_l \in \{+, -, 0\}$ are labels.
(4) $syn \subseteq \{1, 2, \ldots, m\} \times \{1, 2, \ldots, m\}$, with $i \neq j$ for every synapse $(i, j) \in syn$, is the set of synapses.
(5) $in, out \in \{\sigma_1, \sigma_2, \ldots, \sigma_m\}$ denote the input and output neurons, respectively.

2.1.2. Description

The definition of RDGRSNP systems given above is described here in detail. We mainly describe how rules are applied and executed in RDGRSNP systems, together with the state representation and the graphical conventions of RDGRSNP systems.
When neuron $\sigma_i$ applies a spiking rule with rules dynamic generation and removal $r_i: E/a^c \to a^p;\ \alpha_1 r_1, \alpha_2 r_2, \ldots, \alpha_l r_l$, $c$ spikes in $\sigma_i$ are consumed, and $p$ spikes are sent along each synapse $(i, j)$ to the postsynaptic neurons $\sigma_j$. Meanwhile, neuron $\sigma_i$ adjusts its rule set $R_i$ as follows. When $\alpha_k = +$, $\alpha_k r_k$ can be abbreviated to $r_k$, indicating that rule $r_k$ is added to the rule set $R_i$ of neuron $\sigma_i$; if rule $r_k$ already exists in $R_i$, it is not added again. When $\alpha_k = -$, rule $r_k$ is removed from the rule set $R_i$ of neuron $\sigma_i$; if rule $r_k$ does not exist in $R_i$, no rule is removed. When $\alpha_k = 0$, $\alpha_k r_k$ can be omitted, and rule $r_k$ is neither added to nor removed from the rule set $R_i$ of neuron $\sigma_i$.
When neuron $\sigma_i$ applies a forgetting rule with rules dynamic generation and removal $r_i: a^s \to \lambda;\ \alpha_1 r_1, \alpha_2 r_2, \ldots, \alpha_l r_l$, $s$ spikes in $\sigma_i$ are removed and no spikes are sent. Meanwhile, $\sigma_i$ adjusts its rule set in the same way as when applying a spiking rule with rules dynamic generation and removal.
Note that an RDGRSNP system $\Pi$ is parallel at the system level and sequential at the neuron level. A global clock marks the steps throughout the system. When a neuron contains only one applicable rule, it must apply that rule. When a neuron contains two or more applicable rules, it non-deterministically chooses exactly one of them to apply. When no rule is applicable in any neuron of $\Pi$, the system stops computing.
In general, the state of the system at each step is represented by the number of spikes in each neuron at that step. However, in the proposed RDGRSNP system $\Pi$, since rules can be generated and removed dynamically, the rules contained in each neuron must also be part of the state of $\Pi$. That is, the state of $\Pi$ is represented by the number of spikes and the rule set of each neuron, written $C_t = ((n_1, R_1), (n_2, R_2), \ldots, (n_m, R_m))$. The initial state of $\Pi$ is denoted $C_0$. When neither the numbers of spikes nor the rule sets of the neurons of $\Pi$ change, the system stops computing.
Regarding the graphical definition, rounded rectangles are used to represent neurons, and directed line segments are used to represent synapses between neurons. In an RDGRSNP system, the result of a computation is defined by the time interval between the first two spikes sent to the environment e n v by the output neuron σ o u t .
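To make these conventions concrete, the following is a minimal Python sketch of a neuron and the two kinds of rules. All names (Rule, Neuron, applicable, apply) and the encoding are illustrative assumptions made here, not notation from the formal definition.

```python
import re
from dataclasses import dataclass

# A minimal sketch of the RDGRSNP rule semantics, assuming a Python encoding
# chosen for illustration. A forgetting rule a^s -> lambda is encoded with
# regex "a"*s and produced == 0.

@dataclass
class Rule:
    rid: str              # rule label, e.g. "r1"
    regex: str            # the regular expression E over {a}, e.g. "a(aaaa)+"
    consumed: int         # c: spikes consumed
    produced: int         # p: spikes produced (0 for a forgetting rule)
    updates: tuple = ()   # (alpha, rid) pairs with alpha in {"+", "-"}

@dataclass
class Neuron:
    spikes: int           # n_i
    rules: dict           # rid -> Rule, the current rule set R_i

def applicable(rule: Rule, neuron: Neuron) -> bool:
    # a^{n_i} must belong to L(E), and the neuron must hold at least c spikes
    return (neuron.spikes >= rule.consumed
            and re.fullmatch(rule.regex, "a" * neuron.spikes) is not None)

def apply(rule: Rule, neuron: Neuron, catalogue: dict) -> int:
    """Consume c spikes, adjust the rule set R_i, return the p spikes to emit."""
    neuron.spikes -= rule.consumed
    for alpha, rid in rule.updates:
        if alpha == "+":
            neuron.rules.setdefault(rid, catalogue[rid])  # never added twice
        elif alpha == "-":
            neuron.rules.pop(rid, None)  # removing an absent rule is a no-op
    return rule.produced
```

Here `catalogue` stands for the global rule set $R$, from which generated rules are looked up by label.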

2.2. An Illustrative Example

In Figure 1, an illustrative example is given to elaborate the definition of RDGRSNP systems. In this example, both $\sigma_1$ and $\sigma_2$ initially have one spike, and $\sigma_{out}$ initially has two spikes, so the initial state is $C_0 = ((1, \{r_3\}), (1, \{r_3, r_4\}), (2, \{r_1\}))$.
At step 1, neuron $\sigma_1$ applies $r_3: a \to a$ to send one spike to $\sigma_2$ and $\sigma_{out}$. The output neuron $\sigma_{out}$ applies the spiking rule with rules dynamic generation and removal $r_1: a^2 \to a;\ -r_1, r_2, r_3$, consumes two spikes, sends the first spike to $env$, removes $r_1$, and adds $r_2$ and $r_3$. As for neuron $\sigma_2$, its number of spikes satisfies both rules $r_3$ and $r_4$, so it non-deterministically chooses one of them to apply.
Suppose that at step 1, neuron $\sigma_2$ applies $r_3: a \to a$ and sends one spike to $\sigma_1$ and $\sigma_{out}$. At step 2, $\sigma_1$ applies $r_3: a \to a$ again and sends one spike to $\sigma_2$ and $\sigma_{out}$. As for the output neuron $\sigma_{out}$, it contains two spikes and rules $r_2$ and $r_3$, so it applies rule $r_2: a^2 \to \lambda$, forgetting the two spikes it contains. Thereafter, if neuron $\sigma_2$ continues to apply rule $r_3: a \to a$, the system repeats this cycle until neuron $\sigma_2$ chooses rule $r_4: a \to \lambda$.
Suppose that at step $n$, where $n \ge 1$, neuron $\sigma_2$ chooses rule $r_4: a \to \lambda$, so it sends no spikes. In addition, neuron $\sigma_1$ sends one spike to $\sigma_{out}$. Therefore, at step $n+1$, $\sigma_{out}$ has only one spike, applies rule $r_3: a \to a$, and sends the second spike to $env$.
Thus, $\sigma_{out}$ sends its first two spikes with a time interval of $(n+1) - 1 = n$, where $n \ge 1$, i.e., any positive integer can be generated.
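The example can be replayed in a few lines of Python. The rule sets and synapses below are transcribed from the description above; the tuple encoding and function names are our own illustrative choices, and the regular expressions reduce to exact spike-count checks because every rule here fires on precisely the number of spikes it consumes.

```python
import random

# One run of the Figure 1 system. Rules are (consume, produce, add, remove).
CATALOGUE = {
    "r1": (2, 1, {"r2", "r3"}, {"r1"}),  # a^2 -> a; -r1, r2, r3
    "r2": (2, 0, set(), set()),          # a^2 -> lambda
    "r3": (1, 1, set(), set()),          # a -> a
    "r4": (1, 0, set(), set()),          # a -> lambda
}

def run_once(seed=None, max_steps=100):
    rng = random.Random(seed)
    spikes = {"s1": 1, "s2": 1, "out": 2}
    rules = {"s1": {"r3"}, "s2": {"r3", "r4"}, "out": {"r1"}}
    syn = {"s1": ["s2", "out"], "s2": ["s1", "out"], "out": []}
    env = []  # steps at which sigma_out spikes to the environment
    for step in range(1, max_steps):
        incoming = {n: 0 for n in spikes}
        for n in spikes:
            ready = sorted(r for r in rules[n] if spikes[n] == CATALOGUE[r][0])
            if not ready:
                continue
            r = rng.choice(ready)  # non-deterministic rule choice
            c, p, add, rem = CATALOGUE[r]
            spikes[n] -= c
            rules[n] = (rules[n] - rem) | add
            if p and n == "out":
                env.append(step)
            if p:
                for m in syn[n]:
                    incoming[m] += p
        for n in spikes:
            spikes[n] += incoming[n]  # spikes arrive at the end of the step
        if len(env) == 2:
            return env[1] - env[0]  # the generated number
    return None  # sigma_2 never chose r4 within max_steps

# Repeated runs yield 1, 2, 3, ...: exactly the positive integers.
print(run_once())
```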

3. The Turing Universality of SNP Systems with Rules Dynamic Generation and Removal

This section proves the Turing universality of RDGRSNP systems used as a number-generating device and as a number-accepting device, respectively. It has been shown that register machines characterize the family of Turing computable sets of numbers (NRE), so we demonstrate the Turing universality of RDGRSNP systems by simulating register machines.
For a register machine $M = (m, H, l_0, l_h, I)$, $m$ denotes the number of registers of $M$, $H$ is a set of labels in which each label corresponds to an instruction in the instruction set $I$, $l_0$ denotes the start instruction, $l_h$ denotes the halt instruction, and $I$ denotes the instruction set. The instructions in $I$ have the following three forms:
(1) $l_i: (ADD(r), l_j, l_k)$: add one to the number in register $r$ and then jump non-deterministically to $l_j$ or $l_k$;
(2) $l_i: (SUB(r), l_j, l_k)$: if the number in register $r$ is not zero, subtract one from it and jump to $l_j$; if the number in register $r$ is zero, jump directly to $l_k$;
(3) $l_h: HALT$: the halt instruction, indicating that $M$ stops computing.
In the number-generating mode, all registers of $M$ are initially empty, and $M$ starts from $l_0$ and executes until $l_h$. After the computation finishes, all registers except register 1 are empty, and the number generated by $M$ is stored in register 1. It is assumed that no instruction of the form $l_i: (SUB(r), l_j, l_k)$ acts on register 1, i.e., the number in register 1 is never decremented.
In the number-accepting mode, all registers of $M$ except register 1 are initially empty, and the number to be accepted is stored in register 1. If $M$ can reach the halt instruction $l_h: HALT$, the number is accepted by $M$. Note that the instruction $l_i: (ADD(r), l_j, l_k)$ is changed to the deterministic form $l_i: (ADD(r), l_j)$ in the number-accepting mode.
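Before the proofs, it may help to see such a register machine run directly. The sketch below interprets a program in the number-generating mode; the tuple encoding of instructions is an assumption made here for illustration.

```python
import random

# A toy register machine interpreter in the number-generating mode: all
# registers start empty, execution runs from l0 until HALT, and the result
# is the number left in register 1.

def run_generating(prog, l0="l0", seed=None):
    rng = random.Random(seed)
    regs = {}                            # registers, initially empty
    label = l0
    while True:
        instr = prog[label]
        if instr[0] == "HALT":
            return regs.get(1, 0)        # result is stored in register 1
        op, r, *targets = instr
        if op == "ADD":
            regs[r] = regs.get(r, 0) + 1
            label = rng.choice(targets)  # non-deterministic jump to lj or lk
        elif op == "SUB":
            lj, lk = targets
            if regs.get(r, 0) > 0:
                regs[r] -= 1
                label = lj
            else:
                label = lk

# Example: non-deterministically generate an arbitrary positive integer.
prog = {
    "l0": ("ADD", 1, "l0", "lh"),        # keep incrementing, or stop
    "lh": ("HALT",),
}
print(run_generating(prog))
```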
In this paper, we use $N_2 RDGRSNP(\Pi)$ and $N_{acc} RDGRSNP(\Pi)$ to denote the sets of numbers that can be generated and accepted by an RDGRSNP system $\Pi$, respectively.
In the proofs of this paper, we use neuron $\sigma_{l_i}$ to denote instruction $l_i$, neuron $\sigma_r$ to denote register $r$, and auxiliary neurons $\sigma_{l_i^{(j)}}$, $j \in \mathbb{N}^+$, for module construction. In addition, we stipulate that the simulation of instruction $l_i$ starts when $\sigma_{l_i}$ has four spikes.
Theorem 1.
$N_2 RDGRSNP(\Pi) = NRE$.
Proof. 
By the Church–Turing thesis, $N_2 RDGRSNP(\Pi) \subseteq NRE$ holds, so it suffices to prove $NRE \subseteq N_2 RDGRSNP(\Pi)$. To prove this inclusion, we simulate a register machine, following the process shown in Figure 2.   □
In the initial state, all neurons are empty except neuron $\sigma_{l_0}$, whose four spikes trigger the simulation of the computation. The simulation follows the instructions of the register machine until neuron $\sigma_{l_h}$ receives four spikes. When neuron $\sigma_{l_h}$ receives four spikes, the computation process of the register machine has been successfully simulated, and the FIN module starts to output the computation result stored in register 1.
ADD module: The ADD module is used to simulate instruction $l_i: (ADD(r), l_j, l_k)$, as shown in Figure 3. This module consists of six neurons and eight rules.
Suppose that at step $t$, $\sigma_{l_i}$ has four spikes and starts simulating instruction $l_i: (ADD(r), l_j, l_k)$. Neuron $\sigma_{l_i}$ applies the spiking rule with rules dynamic generation and removal $r_1: a^4/a^2 \to a^2;\ r_2$, consuming two spikes, sending two spikes each to $\sigma_r$, $\sigma_{l_i^{(1)}}$, and $\sigma_{l_i^{(2)}}$, and adding $r_2: a^2 \to a^2;\ -r_2$. At step $t+1$, $\sigma_{l_i}$ has two spikes with rules $r_1$, $r_2$, and $r_7$. Therefore, $\sigma_{l_i}$ applies rule $r_2: a^2 \to a^2;\ -r_2$, sending two spikes each to $\sigma_r$, $\sigma_{l_i^{(1)}}$, and $\sigma_{l_i^{(2)}}$ and removing $r_2$. At step $t+1$, $\sigma_r$ receives two more spikes, for a total of four new spikes, corresponding to adding one to the number in register $r$. In addition, $\sigma_{l_i^{(1)}}$ has four spikes with rules $r_1$ and $r_3$. Since both rules can be applied, neuron $\sigma_{l_i^{(1)}}$ non-deterministically chooses one to apply. The two situations are as follows (the module's rules are also transcribed as data after the case list):
(1)
When neuron $\sigma_{l_i^{(1)}}$ chooses rule $r_1: a^4/a^2 \to a^2;\ r_2$ to apply, it consumes two spikes, sends two spikes to neurons $\sigma_{l_j}$ and $\sigma_{l_i^{(2)}}$, and adds rule $r_2: a^2 \to a^2;\ -r_2$. At step $t+3$, $\sigma_{l_i^{(2)}}$ applies the forgetting rule with rules dynamic generation and removal $r_5: a^6 \to \lambda;\ r_6$, consumes the six spikes it contains, and adds rule $r_6: a^2 \to \lambda;\ -r_6$. In neuron $\sigma_{l_i^{(1)}}$, since it contains two spikes with rules $r_1$, $r_2$, and $r_3$, rule $r_2: a^2 \to a^2;\ -r_2$ is applied, sending two spikes to neurons $\sigma_{l_j}$ and $\sigma_{l_i^{(2)}}$ and removing rule $r_2$. At step $t+4$, $\sigma_{l_i^{(2)}}$ applies $r_6: a^2 \to \lambda;\ -r_6$, consuming two spikes and removing rule $r_6$. In addition, neuron $\sigma_{l_j}$ receives two more spikes, containing four spikes in total, and starts simulating $l_j$.
(2)
When neuron $\sigma_{l_i^{(1)}}$ chooses rule $r_3: a^4/a^2 \to \lambda;\ r_4$ to apply, it consumes two spikes and adds rule $r_4$. At step $t+3$, $\sigma_{l_i^{(1)}}$ contains two spikes with rules $r_1$, $r_3$, and $r_4$. Thus, the spiking rule with rules dynamic generation and removal $r_4: a^2 \to a;\ -r_4$ is applied, sending one spike each to neurons $\sigma_{l_j}$ and $\sigma_{l_i^{(2)}}$ and removing rule $r_4$. At step $t+4$, $\sigma_{l_j}$ applies $a \to \lambda$, forgetting the one spike from neuron $\sigma_{l_i^{(1)}}$. In addition, $r_8: a^5 \to a^4$ is applied in neuron $\sigma_{l_i^{(2)}}$, sending four spikes to $\sigma_{l_k}$. At step $t+5$, $\sigma_{l_k}$ contains four spikes and starts simulating $l_k$.
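As promised above, the module's rules can be written out as data, which makes the dynamic +/− labels easy to see. The tuple encoding (E, c, p, labels) is an illustrative assumption; $r_7$, which Figure 3 places in $\sigma_{l_i}$, is not spelled out in the walkthrough, so it is omitted here.

```python
# The ADD module's rules (Figure 3) as data, encoded as (E, c, p, labels).
# "+" generates a rule in the applying neuron's set; "-" removes one.
ADD_RULES = {
    "r1": ("a^4", 2, 2, (("+", "r2"),)),  # a^4/a^2 -> a^2; r2
    "r2": ("a^2", 2, 2, (("-", "r2"),)),  # a^2 -> a^2; -r2 (self-removing)
    "r3": ("a^4", 2, 0, (("+", "r4"),)),  # a^4/a^2 -> lambda; r4
    "r4": ("a^2", 2, 1, (("-", "r4"),)),  # a^2 -> a; -r4 (self-removing)
    "r5": ("a^6", 6, 0, (("+", "r6"),)),  # a^6 -> lambda; r6
    "r6": ("a^2", 2, 0, (("-", "r6"),)),  # a^2 -> lambda; -r6 (self-removing)
    "r8": ("a^5", 5, 4, ()),              # a^5 -> a^4 (static)
}

# The walkthrough is consistent with these initial rule sets: sigma_li holds
# {r1, r7}, sigma_li(1) holds {r1, r3} (the non-deterministic choice between
# them selects l_j or l_k), and sigma_li(2) holds {r5, r8}; r2, r4, and r6
# exist only while dynamically generated.
```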
Table 1 lists five variants of SNP systems and the number of neurons each requires to construct the ADD module. As seen in Table 1, DSNP, PASNP, PSNRSP, MPAIRSNP, and SNP-IR require 7, 8, 11, 9, and 8 neurons, respectively, while RDGRSNP requires 6. Therefore, the RDGRSNP system proposed in this paper requires the fewest neurons.
SUB module: The SUB module is used to simulate instruction $l_i: (SUB(r), l_j, l_k)$, as shown in Figure 4. This module consists of six neurons and twelve rules.
Suppose that neuron $\sigma_{l_i}$ contains four spikes at step $t$ and starts simulating instruction $l_i: (SUB(r), l_j, l_k)$. Neuron $\sigma_{l_i}$ applies the spiking rule with rules dynamic generation and removal $r_1: a^4/a \to a;\ r_2$, sends one spike each to $\sigma_{l_i^{(1)}}$, $\sigma_{l_i^{(2)}}$, and $\sigma_r$, and adds $r_2$ to its rule set. At step $t+1$, $\sigma_{l_i}$ still holds three spikes with rules $r_1$, $r_2$, and $r_8$, so it applies the spiking rule with rules dynamic generation and removal $r_2: a^3 \to a^2;\ -r_2$, sends two spikes to each postsynaptic neuron, and removes rule $r_2$. In $\sigma_{l_i^{(1)}}$ and $\sigma_{l_i^{(2)}}$, the forgetting rule is applied, forgetting one spike. In addition, different rules are applied in neuron $\sigma_r$ depending on whether it contains spikes, as in the following two cases (a spike-count check for $\sigma_r$ is sketched after the case list):
(1)
If $\sigma_r$ contains $4n$ ($n \ge 1$) spikes, then at step $t+1$, $\sigma_r$ applies $r_3: a(a^4)^+/a \to a;\ r_4$, consumes one spike, and sends one spike each to $\sigma_{l_i^{(1)}}$ and $\sigma_{l_i^{(2)}}$. Thus, at step $t+2$, both $\sigma_{l_i^{(1)}}$ and $\sigma_{l_i^{(2)}}$ have three spikes. Additionally, neuron $\sigma_r$ contains $4n+2$ spikes with rules $r_3$, $r_4$, and $r_6$. Thus, the spiking rule with rules dynamic generation and removal $r_4: a^2(a^4)^+/a^2 \to a^2;\ r_5, -r_4$ is applied, sending two spikes each to $\sigma_{l_i^{(1)}}$ and $\sigma_{l_i^{(2)}}$, adding rule $r_5$, and removing rule $r_4$. At step $t+3$, $\sigma_r$ applies $r_5: (a^4)^+/a^4 \to \lambda;\ -r_5$, forgetting four spikes. Thus, at step $t+4$, $\sigma_r$ has $4n-4$ spikes, corresponding to subtracting one from the number in register $r$. Additionally, at step $t+3$, $\sigma_{l_i^{(1)}}$ applies $r_{11}: a^5 \to a^4$ to send four spikes to $\sigma_{l_j}$. In neuron $\sigma_{l_i^{(2)}}$, the forgetting rule $r_{12}: a^5 \to \lambda$ is applied. At step $t+4$, $\sigma_{l_j}$ contains four spikes, simulating instruction $l_j$.
(2)
If neuron $\sigma_r$ contains no spikes, then at step $t+1$, the spiking rule with rules dynamic generation and removal $r_6: a \to a;\ r_7$ is applied, sending one spike each to $\sigma_{l_i^{(1)}}$ and $\sigma_{l_i^{(2)}}$ and adding $r_7$. At step $t+2$, $\sigma_r$ holds two spikes with rules $r_3$, $r_6$, and $r_7$, so the spiking rule with rules dynamic generation and removal $r_7: a^2 \to a;\ -r_7$ is applied, sending one more spike to $\sigma_{l_i^{(1)}}$ and $\sigma_{l_i^{(2)}}$ and removing rule $r_7$. Thus, at step $t+3$, the auxiliary neuron $\sigma_{l_i^{(1)}}$ applies rule $r_{10}: a^4 \to \lambda$, while $\sigma_{l_i^{(2)}}$ applies rule $r_9: a^4 \to a^4$, sending four spikes to $\sigma_{l_k}$. At step $t+4$, $\sigma_{l_k}$ contains four spikes, simulating instruction $l_k$.
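As noted before the case list, the spike bookkeeping for $\sigma_r$ in case (1) can be checked as plain arithmetic; the sketch below mirrors the walkthrough step by step and is not a full simulation of the module.

```python
# Spike counts in sigma_r during a successful subtraction (case 1),
# starting from 4n spikes for register value n >= 1.
def sub_register_trace(n: int) -> int:
    spikes = 4 * n
    spikes += 1  # step t:   one spike arrives from sigma_li (rule r1)
    spikes -= 1  # step t+1: r3 consumes one spike
    spikes += 2  # step t+1: two spikes arrive from sigma_li (rule r2)
    spikes -= 2  # step t+2: r4 consumes two spikes
    spikes -= 4  # step t+3: r5 forgets four spikes
    return spikes

# The register ends with 4(n-1) spikes: the stored number decreased by one.
assert all(sub_register_trace(n) == 4 * (n - 1) for n in range(1, 10))
```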
Table 2 lists five variants of SNP systems and the number of neurons each requires to construct the SUB module. As seen in Table 2, DSNP, PASNP, PSNRSP, MPAIRSNP, and SNP-IR require 7, 10, 15, 8, and 8 neurons, respectively, while RDGRSNP requires 6. Therefore, the RDGRSNP system proposed in this paper requires the fewest neurons.
FIN module: The FIN module is used to simulate the halt instruction $l_h: HALT$ and to output the number generated by system $\Pi$, as shown in Figure 5. The generated number $n$ is represented by the time interval between the first two spikes sent to the environment $env$ by the output neuron. The FIN module constructed in this paper consists of three neurons and six rules.
Suppose that at step $t$, $\sigma_{l_h}$ has four spikes and starts simulating instruction $l_h: HALT$ and outputting the computation result. Neuron $\sigma_{l_h}$ applies the spiking rule with rules dynamic generation and removal $r_1: a^4/a^2 \to a;\ r_2$, sends one spike to $\sigma_1$ and $\sigma_{out}$, and adds $r_2$ to its rule set. Thus, at step $t+1$, neuron $\sigma_{l_h}$ applies $r_2: a^2 \to a;\ -r_2$, sends one spike to $\sigma_1$ and $\sigma_{out}$, and removes $r_2$. Neuron $\sigma_1$ applies the spiking rule with rules dynamic generation and removal $r_3: a(a^4)^+/a^5 \to a;\ r_4, r_5, -r_3$, consuming five spikes, sending one spike to the output neuron $\sigma_{out}$, adding rules $r_4$ and $r_5$, and removing rule $r_3$. At step $t+2$, $\sigma_{out}$ applies $r_6: a^3 \to a;\ r_5$, sending the first spike to $env$ and adding $r_5$. Then, starting from step $t+2$, neuron $\sigma_1$ forgets four spikes at each step until step $t+n+1$, where $n \ge 1$.
At step $t+n+1$, $\sigma_1$ has only one spike with rules $r_4$ and $r_5$. Therefore, it applies the spiking rule with rules dynamic generation and removal $r_5: a \to a;\ r_3, -r_4, -r_5$, sends one spike to $\sigma_{out}$, adds $r_3$, and removes rules $r_4$ and $r_5$. At step $t+n+2$, $\sigma_{out}$ applies $r_5: a \to a;\ r_3, -r_4, -r_5$ to send the second spike to $env$. Thus, the time interval between the first two spikes sent by $\sigma_{out}$ to the environment is $(t+n+2) - (t+2) = n$, i.e., the number stored in register 1 when the computation halts.
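The output timing can be verified with a couple of lines; this is illustrative arithmetic mirroring the walkthrough (first spike to $env$ at step $t+2$, second at step $t+n+2$), not part of the construction.

```python
def fin_interval(t: int, n: int) -> int:
    first = t + 2       # sigma_out fires r6 and emits the first spike
    second = t + n + 2  # after n forgetting steps in sigma_1, the second spike
    return second - first

# The interval equals n, the number stored in register 1, for any start step t.
assert all(fin_interval(t, n) == n for t in range(5) for n in range(1, 6))
```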
Table 3 lists five variants of SNP systems and the number of neurons each requires to construct the FIN module. As seen in Table 3, DSNP, PASNP, PSNRSP, MPAIRSNP, and SNP-IR require 4, 9, 8, 8, and 5 neurons, respectively, while RDGRSNP requires 3. Therefore, the RDGRSNP system proposed in this paper requires the fewest neurons.
Theorem 2.
$N_{acc} RDGRSNP(\Pi) = NRE$.
Proof. 
Similarly, by the Church–Turing thesis, we have $N_{acc} RDGRSNP(\Pi) \subseteq NRE$, so we only need to prove $NRE \subseteq N_{acc} RDGRSNP(\Pi)$. To prove this inclusion, we simulate a register machine, following the process shown in Figure 6.   □
In the initial state, neuron $\sigma_1$ contains $4n$ spikes, corresponding to the number $n$ to be accepted by the register machine; neuron $\sigma_{l_0}$ contains four spikes to trigger the simulation of the computation; and all remaining neurons are empty. The simulation follows the instructions of the register machine until neuron $\sigma_{l_h}$ receives four spikes. When neuron $\sigma_{l_h}$ receives four spikes, the computation process of the register machine has been successfully simulated, the computation stops, and the number is accepted.
First, in the number-accepting mode, we need an INPUT module to read the number to be accepted. The number $n$ to be accepted is represented by the time interval between the first two spikes entered, i.e., the spike train entered is $10^{n-1}1$.
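In code, this convention is simply a two-spike train whose gap is the number; the helper names below are hypothetical and used only for illustration.

```python
def encode_input(n: int) -> list:
    """Encode n >= 1 as the spike train 1 0^{n-1} 1."""
    assert n >= 1
    return [1] + [0] * (n - 1) + [1]

def decode_input(train: list) -> int:
    """Recover n as the distance between the first two spikes."""
    first, second = [t for t, bit in enumerate(train) if bit == 1][:2]
    return second - first

assert decode_input(encode_input(5)) == 5   # [1, 0, 0, 0, 0, 1] -> 5
```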
INPUT module: The INPUT module is used to read the number to be accepted, as shown in Figure 7. This module consists of nine neurons and seven rules.
Suppose that at step 0, $\sigma_{in}$ receives one spike; then, at step 1, $\sigma_{in}$ applies $r_3: a \to a$ and sends one spike to each of the auxiliary neurons $\sigma_{in_1}$, $\sigma_{in_2}$, $\sigma_{in_3}$, and $\sigma_{in_4}$. Starting from step 2, the auxiliary neurons $\sigma_{in_1}$, $\sigma_{in_2}$, $\sigma_{in_3}$, and $\sigma_{in_4}$ fire in a cycle, each sending one spike to $\sigma_{in_5}$ and $\sigma_{in_6}$. Neuron $\sigma_{in_5}$ applies $r_4: a^4 \to a^4$ to send four spikes to $\sigma_1$; neuron $\sigma_{in_6}$ applies rule $r_5: a^4 \to \lambda$ to forget the four spikes it receives. This process continues until the input neuron $\sigma_{in}$ receives another spike.
At step $n$, $\sigma_{in}$ receives another spike, so at step $n+1$, $\sigma_{in}$ applies $r_3: a \to a$ again and sends one spike to $\sigma_{in_1}$, $\sigma_{in_2}$, $\sigma_{in_3}$, and $\sigma_{in_4}$. At step $n+2$, $\sigma_{in_1}$, $\sigma_{in_2}$, $\sigma_{in_3}$, and $\sigma_{in_4}$ all apply the spiking rule with rules dynamic generation and removal $r_1: a^2 \to a^2;\ r_2, -r_1$, send two spikes to neurons $\sigma_{in_5}$ and $\sigma_{in_6}$, remove rule $r_1$, and add rule $r_2$. Thus, at step $n+3$, these four auxiliary neurons apply rule $r_2: a^2 \to \lambda;\ r_1, -r_2$ to forget two spikes. At step $n+3$, both neurons $\sigma_{in_5}$ and $\sigma_{in_6}$ hold eight spikes. Neuron $\sigma_{in_5}$ applies rule $r_7: a^8 \to \lambda$ to forget its spikes. Neuron $\sigma_1$ receives its last four spikes at step $n+2$, so it contains $4n$ spikes in total. In $\sigma_{in_6}$, rule $r_6: a^8 \to a^4$ is applied to send four spikes to $\sigma_{l_0}$. Then, neuron $\sigma_{l_0}$ contains four spikes, and the simulation of the starting instruction $l_0$ begins.
Table 4 lists five variants of SNP systems and the number of neurons each requires to construct the INPUT module. As seen in Table 4, DSNP, PASNP, PSNRSP, MPAIRSNP, and SNP-IR require 6, 6, 10, 6, and 9 neurons, respectively, while RDGRSNP requires 9. However, this is acceptable because the INPUT module is used only once in the number-accepting mode and does not have a great impact on the overall number of neurons.
In the number-accepting mode, the SUB instruction in $M$ does not change, so we can continue to use the SUB module from Theorem 1 above. Since the system halting means that the input number is accepted, an additional FIN module for outputting results is no longer needed. However, in the number-accepting mode, the ADD instruction in $M$ is no longer of the non-deterministic form $l_i: (ADD(r), l_j, l_k)$ but of the deterministic form $l_i: (ADD(r), l_j)$. Therefore, this paper provides a deterministic ADD module, as shown in Figure 8. This module consists of only three neurons and three rules.
In this deterministic ADD module, $\sigma_{l_i}$ sends spikes to $\sigma_r$, simulating the operation of adding one to the number in register $r$, and at the same time sends spikes to $\sigma_{l_j}$, so that the simulation of instruction $l_j$ starts.
Table 5 lists five variants of SNP systems and the number of neurons each requires to construct the deterministic ADD module. As seen in Table 5, DSNP, PASNP, PSNRSP, MPAIRSNP, and SNP-IR require 4, 3, 5, 3, and 3 neurons, respectively, while RDGRSNP requires 3. Therefore, PASNP, MPAIRSNP, SNP-IR, and the proposed RDGRSNP system tie for the fewest neurons.

4. Small Universal SNP System with Rules Dynamic Generation and Removal

This section presents a small universal RDGRSNP system for simulating function computation and demonstrates, by comparison with five variants of SNP systems, that the proposed RDGRSNP system requires fewer neurons.
Theorem 3.
A small universal RDGRSNP system using 68 neurons can simulate function computation.
Regarding Theorem 3, we again prove it by simulating a register machine. In $M_u$ [41], for the function computation shown in Figure 9, there always exists a recursive function $g$ such that $\theta_x(y) = M_u(g(x), y)$ holds for any $\theta_x(y)$, where $\theta_x(y)$ is any function in the fixed enumeration $(\theta_1, \theta_2, \ldots)$ of all unary partial recursive functions, $M_u$ denotes a universal register machine, and $g(x)$ and $y$ are two parameters with $x, y \in \mathbb{N}$ stored in registers 1 and 2, respectively. The register machine $M_u$ for simulating function computation stops when the execution reaches $l_h: HALT$. The result of the computation is stored in register 0.
Proof 
For Theorem 3, we simulate the register machine to prove it by following the process shown in Figure 10.
In the initial state, neuron $\sigma_1$ contains $4g(x)$ spikes, corresponding to parameter $g(x)$ in register 1; neuron $\sigma_2$ contains $4y$ spikes, corresponding to parameter $y$ in register 2; neuron $\sigma_{l_0}$ contains four spikes to trigger the simulation of the computation; and all remaining neurons are empty. The simulation follows the instructions of the register machine until neuron $\sigma_{l_h}$ receives four spikes. When neuron $\sigma_{l_h}$ receives four spikes, the computation process of the register machine has been successfully simulated and the computation stops. Meanwhile, the FIN module is triggered to output the computation result.
In $M_u$, SUB instructions of the form $l_i: (SUB(r), l_j, l_k)$, the halt instruction $l_h: HALT$, and deterministic ADD instructions of the form $l_i: (ADD(r), l_j)$ are included. Thus, we can continue to use the SUB, FIN, and deterministic ADD modules proposed in Theorems 1 and 2 above. In addition, since two registers are needed to store parameters $g(x)$ and $y$, respectively, this paper slightly modifies the INPUT module of Theorem 2, as shown in Figure 11. We still use the time intervals between spikes to represent the input numbers, so the input sequence is $10^{g(x)-1}10^{y-1}1$.
Suppose that at step 0, $\sigma_{in}$ receives the first spike; then, at step 1, $\sigma_{in}$ applies $r_3: a \to a$ and sends one spike to the four auxiliary neurons $\sigma_{in_1}$, $\sigma_{in_2}$, $\sigma_{in_3}$, and $\sigma_{in_4}$. Thus, from step 2, these four auxiliary neurons fire in a cycle by applying rule $r_3: a \to a$, each sending one spike to $\sigma_{in_5}$, $\sigma_{in_6}$, and $\sigma_{in_7}$. Starting at step 3, neuron $\sigma_{in_5}$ continuously applies $r_5: a^4 \to a^4$ to send four spikes to $\sigma_1$, by which the first input number is stored in $\sigma_1$, while neurons $\sigma_{in_6}$ and $\sigma_{in_7}$ continuously apply rule $r_6: a^4 \to \lambda$ to forget four spikes. This process continues until the input neuron receives the second spike.
At step $g(x)$, $\sigma_{in}$ receives the second spike, so at step $g(x)+1$, $\sigma_{in}$ applies $r_3: a \to a$ again and sends one spike to neurons $\sigma_{in_1}$, $\sigma_{in_2}$, $\sigma_{in_3}$, and $\sigma_{in_4}$. Thus, starting at step $g(x)+2$, these four neurons apply rule $r_4: a^2 \to a^2$ to send two spikes in a cycle, while also sending two spikes to their postsynaptic neurons $\sigma_{in_5}$, $\sigma_{in_6}$, and $\sigma_{in_7}$. Similarly, starting at step $g(x)+3$, neuron $\sigma_{in_6}$ continuously applies rule $r_7: a^8 \to a^4$ to send four spikes to neuron $\sigma_2$, by which the second input number is stored in $\sigma_2$. Neurons $\sigma_{in_5}$ and $\sigma_{in_7}$ apply rule $r_8: a^8 \to \lambda$ to forget their spikes. This process continues until $\sigma_{in}$ receives the third spike. In addition, $\sigma_1$ receives its last four spikes from $\sigma_{in_5}$ at step $g(x)+2$; therefore, $\sigma_1$ contains $4g(x)$ spikes, corresponding to the number $g(x)$ in register 1.
At step $g(x)+y$, $\sigma_{in}$ receives the third spike, so at step $g(x)+y+1$, $\sigma_{in}$ applies $r_3: a \to a$ again and sends one spike to $\sigma_{in_1}$, $\sigma_{in_2}$, $\sigma_{in_3}$, and $\sigma_{in_4}$. At step $g(x)+y+2$, neurons $\sigma_{in_1}$, $\sigma_{in_2}$, $\sigma_{in_3}$, and $\sigma_{in_4}$ all contain three spikes with rules $r_1$, $r_3$, and $r_4$, so they all apply the spiking rule with rules dynamic generation and removal $r_1: a^3 \to a^3;\ r_2, -r_1$, send three spikes to neurons $\sigma_{in_5}$, $\sigma_{in_6}$, and $\sigma_{in_7}$, add rule $r_2$, and remove rule $r_1$. At step $g(x)+y+3$, neurons $\sigma_{in_1}$, $\sigma_{in_2}$, $\sigma_{in_3}$, and $\sigma_{in_4}$ all apply the forgetting rule with rules dynamic generation and removal $r_2: a^3 \to \lambda;\ r_1, -r_2$, forget three spikes, remove rule $r_2$, and add rule $r_1$ again. Meanwhile, neurons $\sigma_{in_5}$ and $\sigma_{in_6}$ both contain 12 spikes and apply rule $r_{10}: a^{12} \to \lambda$ to forget them. Neuron $\sigma_{in_7}$ applies rule $r_9: a^{12} \to a^4$ to send four spikes to $\sigma_{l_0}$. In addition, $\sigma_2$ receives its last four spikes from neuron $\sigma_{in_6}$ at step $g(x)+y+2$, so it holds $4y$ spikes, corresponding to the number $y$ in register 2. At step $g(x)+y+4$, $\sigma_{l_0}$ contains four spikes, and the simulation of the starting instruction $l_0$ begins.
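The three-spike input convention generalizes the two-spike one of Theorem 2: the two gaps carry $g(x)$ and $y$. A small illustrative codec (hypothetical helper names) is given below.

```python
def encode_mu_input(gx: int, y: int) -> list:
    """Encode (g(x), y) as the spike train 1 0^{g(x)-1} 1 0^{y-1} 1."""
    assert gx >= 1 and y >= 1
    return [1] + [0] * (gx - 1) + [1] + [0] * (y - 1) + [1]

def decode_mu_input(train: list):
    s = [t for t, bit in enumerate(train) if bit == 1]
    return s[1] - s[0], s[2] - s[1]    # the two consecutive gaps

assert decode_mu_input(encode_mu_input(3, 4)) == (3, 4)
```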
Table 6 lists five variants of SNP systems and the number of neurons each needs to build the INPUT module for register machine $M_u$. As seen in Table 6, DSNP, PASNP, PSNRSP, MPAIRSNP, and SNP-IR require 9, 9, 13, 9, and 9 neurons, respectively, while RDGRSNP requires 11. However, this is acceptable because the INPUT module is used only once during the simulation of function computation and does not have a great impact on the overall number of neurons.
Using the INPUT, deterministic ADD, SUB, and FIN modules described above, a small universal RDGRSNP system for simulating the function $\theta_x(y) = M_u(g(x), y)$ consists of 71 neurons, as follows:
(1)
Neurons used to start simulating instructions: 25
(2)
Neurons used to simulate registers: 9
(3)
Auxiliary neurons in the ADD module: 0
(4)
Auxiliary neurons in the SUB module: 2 × 14 = 28
(5)
The input and auxiliary neurons in the INPUT module: 8
(6)
The output neuron in the FIN module: 1
We can further reduce the number of neurons through compound connections between modules; the specific structures of these compound connections are as follows.
Figure 12 shows the compound connection of a pair of consecutive ADD instructions $l_i: (ADD(r), l_g)$ and $l_g: (ADD(r), l_j)$. We merge $\sigma_r$ of the former module into the latter module, removing $\sigma_{l_g}$.
Figure 13 shows the compound connection of a consecutive ADD instruction $l_i: (ADD(r), l_g)$ and SUB instruction $l_g: (SUB(r), l_j, l_k)$. We merge neuron $\sigma_r$ of the former ADD module into the latter SUB module, removing $\sigma_{l_g}$.
Through the compound connections between modules described above, we remove the three neurons $\sigma_{l_{21}}$, $\sigma_{l_6}$, and $\sigma_{l_{10}}$, so that the number of neurons required to build a small universal RDGRSNP system for simulating the function $\theta_x(y) = M_u(g(x), y)$ is reduced from 71 to 68.
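A short tally reproduces the neuron budget from items (1)–(6) above and the reduction achieved by the compound connections; the breakdown is copied from the counts above.

```python
counts = {
    "instruction neurons": 25,
    "register neurons": 9,
    "ADD-module auxiliaries": 0,
    "SUB-module auxiliaries": 2 * 14,  # two auxiliaries per SUB instruction
    "INPUT-module neurons": 8,
    "output neuron (FIN)": 1,
}
assert sum(counts.values()) == 71
assert sum(counts.values()) - 3 == 68  # after removing the three merged neurons
```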
Table 7 lists several variants of SNP systems and the number of neurons each requires to construct a small universal SNP system for simulating function computation. As seen in Table 7, the five variants DSNP, PASNP, PSNRSP, MPAIRSNP, and SNP-IR require 81, 121, 151, 95, and 100 neurons, respectively, while the RDGRSNP system proposed in this paper requires only 68.
In the construction of SNP systems, the number of neurons is generally used to measure the computational resources required to build a system: fewer neurons mean fewer computational resources, and more neurons mean more. Therefore, the fewer neurons needed to construct an SNP system, the better. As Table 7 shows, the proposed RDGRSNP systems require fewer neurons to build a small universal SNP system for simulating the function $\theta_x(y) = M_u(g(x), y)$. This means that an RDGRSNP system needs fewer computational resources than the other systems, giving the proposed RDGRSNP systems an advantage.   □

5. Conclusions

In conventional SNP systems, the rules contained in neurons do not change during the computation. However, biochemical reactions in biological neurons tend to differ depending on factors such as the substances in the neuron. Motivated by this, RDGRSNP systems are proposed in this paper. In RDGRSNP systems, applying rules updates the rule sets of neurons. In Section 2, we give the definition of RDGRSNP systems and illustrate how they work with an example.
In Section 3, we demonstrate the computational power of RDGRSNP systems by simulating register machines. Specifically, we demonstrate that RDGRSNP systems are Turing universal when used as a number-generating device and as a number-accepting device. Subsequently, in Section 4, we construct a small universal RDGRSNP system using 68 neurons. By comparison with five variants of SNP systems, DSNP, PASNP, PSNRSP, MPAIRSNP, and SNP-IR, it is demonstrated that the RDGRSNP system proposed in this paper requires fewer resources to construct a small universal system for simulating function computation.
Our future research will focus on the following areas. Although the computational power of RDGRSNP systems has been demonstrated, the potential of RDGRSNP systems goes far beyond that. SNP systems have shown excellent capabilities in solving NP problems, and we have demonstrated the advantage of RDGRSNP systems compared with other variants of SNP systems. Therefore, we believe that RDGRSNP systems can perform better in solving NP problems.
The RDGRSNP systems proposed in this paper work in synchronous mode. However, other modes of operation, such as the asynchronous mode, have not been discussed. In asynchronous mode, the neurons in the system can choose whether or not to apply an applicable rule at each step; of course, rules still need to satisfy their control conditions to be applied. Therefore, neurons have more autonomy in asynchronous mode. Many variants of SNP systems working in asynchronous mode have been discussed, and an exploration of RDGRSNP systems working in asynchronous mode is also a future research direction of this work.

Author Contributions

Conceptualization, Y.S. and Y.Z.; methodology, Y.S. and Y.Z.; writing—original draft preparation, Y.S.; writing—review and editing, Y.Z.; funding acquisition, Y.Z. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the National Natural Science Foundation of China (Nos. 61806114 and 61876101) and the China Postdoctoral Science Foundation (Nos. 2018M642695 and 2019T120607).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Păun, G. Membrane Computing. In Fundamentals of Computation Theory, Proceedings of the 14th International Symposium, FCT 2003, Malmö, Sweden, 12–15 August 2003; Lingas, A., Nilsson, B.J., Eds.; Lecture Notes in Computer Science; Springer: Berlin/Heidelberg, Germany, 2003; Volume 2751, pp. 284–295.
2. Zhang, X.; Pan, L.; Păun, A. On the Universality of Axon P Systems. IEEE Trans. Neural Netw. Learn. Syst. 2015, 26, 2816–2829.
3. Ionescu, M.; Păun, G.; Yokomori, T. Spiking Neural P Systems. Fundam. Inform. 2006, 71, 279–308.
4. Fortuna, L.; Buscarino, A. Spiking Neuron Mathematical Models: A Compact Overview. Bioengineering 2023, 10, 174.
5. Jiang, S.; Liu, Y.; Xu, B.; Sun, J.; Wang, Y. Asynchronous Numerical Spiking Neural P Systems. Inf. Sci. 2022, 605, 1–14.
6. Wu, T.; Pan, L.; Yu, Q.; Tan, K.C. Numerical Spiking Neural P Systems. IEEE Trans. Neural Netw. Learn. Syst. 2021, 32, 2443–2457.
7. Yin, X.; Liu, X.; Sun, M.; Ren, Q. Novel Numerical Spiking Neural P Systems with a Variable Consumption Strategy. Processes 2021, 9, 549.
8. Zeng, X.; Zhang, X.; Pan, L. Homogeneous Spiking Neural P Systems. Fundam. Inform. 2009, 97, 275–294.
9. Jiang, K.; Chen, W.; Zhang, Y.; Pan, L. Spiking Neural P Systems with Homogeneous Neurons and Synapses. Neurocomputing 2016, 171, 1548–1555.
10. Jiang, K.; Song, T.; Chen, W.; Pan, L. Homogeneous Spiking Neural P Systems Working in Sequential Mode Induced by Maximum Spike Number. Int. J. Comput. Math. 2013, 90, 831–844.
11. Wu, T.; Wang, Y.; Jiang, S.; Shi, X. Small Universal Spiking Neural P Systems with Homogenous Neurons and Synapses. Fundam. Inform. 2016, 149, 451–470.
12. Pan, L.; Păun, G.; Zhang, G.; Neri, F. Spiking Neural P Systems with Communication on Request. Int. J. Neural Syst. 2017, 27, 1750042.
13. Pan, L.; Wu, T.; Su, Y.; Vasilakos, A.V. Cell-Like Spiking Neural P Systems with Request Rules. IEEE Trans. Nanobiosci. 2017, 16, 513–522.
14. Pan, T.; Shi, X.; Zhang, Z.; Xu, F. A Small Universal Spiking Neural P System with Communication on Request. Neurocomputing 2018, 275, 1622–1628.
15. Wu, T.; Bîlbîe, F.D.; Păun, A.; Pan, L.; Neri, F. Simplified and Yet Turing Universal Spiking Neural P Systems with Communication on Request. Int. J. Neural Syst. 2018, 28, 19.
16. Wu, T.; Neri, F.; Pan, L. On the Tuning of the Computation Capability of Spiking Neural Membrane Systems with Communication on Request. Int. J. Neural Syst. 2022, 32, 12.
17. Pan, L.; Păun, G. Spiking Neural P Systems with Anti-Spikes. Int. J. Comput. Commun. Control 2009, 4, 273–282.
18. Song, T.; Jiang, Y.; Shi, X.; Zeng, X. Small Universal Spiking Neural P Systems with Anti-Spikes. J. Comput. Theor. Nanosci. 2013, 10, 999–1006.
19. Song, T.; Pan, L.; Wang, J.; Venkat, I.; Subramanian, K.G.; Abdullah, R. Normal Forms of Spiking Neural P Systems with Anti-Spikes. IEEE Trans. Nanobiosci. 2012, 11, 352–359.
20. Song, T.; Liu, X.; Zeng, X. Asynchronous Spiking Neural P Systems with Anti-Spikes. Neural Process. Lett. 2015, 42, 633–647.
21. Wang, J.; Hoogeboom, H.J.; Pan, L.; Păun, G.; Pérez-Jiménez, M.J. Spiking Neural P Systems with Weights. Neural Comput. 2010, 22, 2615–2646.
22. Pan, L.; Zeng, X.; Zhang, X.; Jiang, Y. Spiking Neural P Systems with Weighted Synapses. Neural Process. Lett. 2012, 35, 13–27.
23. Zeng, X.; Pan, L.; Pérez-Jiménez, M.J. Small Universal Simple Spiking Neural P Systems with Weights. Sci. China Inf. Sci. 2014, 57, 11.
24. Zeng, X.; Xu, L.; Liu, X.; Pan, L. On Languages Generated by Spiking Neural P Systems with Weights. Inf. Sci. 2014, 278, 423–433.
25. Zhang, X.; Zeng, X.; Pan, L. Weighted Spiking Neural P Systems with Rules on Synapses. Fundam. Inform. 2014, 134, 201–218.
26. Zhang, X.; Zeng, X.; Luo, B.; Xu, J. Several Applications of Spiking Neural P Systems with Weights. J. Comput. Theor. Nanosci. 2012, 9, 769–777.
27. Fan, S.; Paul, P.; Wu, T.; Rong, H.; Zhang, G. On Applications of Spiking Neural P Systems. Appl. Sci. 2020, 10, 7011.
28. Liu, Q.; Long, L.; Peng, H.; Wang, J.; Yang, Q.; Song, X.; Riscos-Núñez, A.; Pérez-Jiménez, M.J. Gated Spiking Neural P Systems for Time Series Forecasting. IEEE Trans. Neural Netw. Learn. Syst.
29. Liu, Q.; Peng, H.; Long, L.; Wang, J.; Yang, Q.; Pérez-Jiménez, M.J.; Orellana-Martín, D. Nonlinear Spiking Neural Systems with Autapses for Predicting Chaotic Time Series. IEEE Trans. Cybern.
30. Long, L.; Liu, Q.; Peng, H.; Wang, J.; Yang, Q. Multivariate Time Series Forecasting Method Based on Nonlinear Spiking Neural P Systems and Non-Subsampled Shearlet Transform. Neural Netw. 2022, 152, 300–310.
31. Long, L.; Liu, Q.; Peng, H.; Yang, Q.; Luo, X.; Wang, J.; Song, X. A Time Series Forecasting Approach Based on Nonlinear Spiking Neural Systems. Int. J. Neural Syst. 2022, 32, 2250020.
32. Ma, T.; Hao, S.; Wang, X.; Rodríguez-Patón, A.A.; Wang, S.; Song, T. Double Layers Self-Organized Spiking Neural P Systems with Anti-Spikes for Fingerprint Recognition. IEEE Access 2019, 7, 177562–177570.
33. Song, T.; Pan, L.; Wu, T.; Zheng, P.; Wong, M.L.D.; Rodríguez-Patón, A. Spiking Neural P Systems with Learning Functions. IEEE Trans. Nanobiosci. 2019, 18, 176–190.
34. Peng, H.; Wang, J.; Pérez-Jiménez, M.J.; Shi, P. A Novel Image Thresholding Method Based on Membrane Computing and Fuzzy Entropy. J. Intell. Fuzzy Syst. 2013, 24, 229–237.
35. Peng, H.; Yang, Y.; Zhang, J.; Huang, X.; Wang, J. A Region-based Color Image Segmentation Method Based on P Systems. Rom. J. Inf. Sci. Technol. 2014, 17, 63–75.
36. Song, T.; Pang, S.; Hao, S.; Rodríguez-Patón, A.; Zheng, P. A Parallel Image Skeletonizing Method Using Spiking Neural P Systems with Weights. Neural Process. Lett. 2019, 50, 1485–1502.
37. Xue, J.; Wang, Y.; Kong, D.; Wu, F.; Yin, A.; Qu, J.; Liu, X. Deep Hybrid Neural-Like P Systems for Multiorgan Segmentation in Head and Neck CT/MR Images. Expert Syst. Appl. 2021, 168, 114446.
38. Ren, Q.; Liu, X. Delayed Spiking Neural P Systems with Scheduled Rules. Complexity 2021, 2021, 13.
39. Wu, T.; Zhang, T.; Xu, F. Simplified and Yet Turing Universal Spiking Neural P Systems with Polarizations Optimized by Anti-Spikes. Neurocomputing 2020, 414, 255–266.
40. Jiang, S.; Fan, J.; Liu, Y.; Wang, Y.; Xu, F. Spiking Neural P Systems with Polarizations and Rules on Synapses. Complexity 2020, 2020, 12.
41. Liu, Y.; Zhao, Y. Spiking Neural P Systems with Membrane Potentials, Inhibitory Rules, and Anti-Spikes. Entropy 2022, 24, 834.
42. Peng, H.; Li, B.; Wang, J.; Song, X.; Wang, T.; Valencia-Cabrera, L.; Pérez-Hurtado, I.; Riscos-Núñez, A.; Pérez-Jiménez, M.J. Spiking Neural P Systems with Inhibitory Rules. Knowl. Based Syst. 2020, 188, 105064.
Figure 1. An illustrative example.
Figure 2. The flowchart of the SNP system with rules dynamic generation and removal simulating a register machine in the number-generating mode.
Figure 3. ADD module.
Figure 4. SUB module.
Figure 5. FIN module.
Figure 6. Flowchart of the RDGRSNP system simulating the register machine in number-accepting mode.
Figure 7. INPUT module.
Figure 8. Deterministic ADD module.
Figure 9. Register machine $M_u$.
Figure 10. The flowchart of a small universal RDGRSNP system simulating a register machine.
Figure 11. INPUT module in register machine $M_u$.
Figure 12. Compound connection of ADD–ADD.
Figure 13. Compound connection of ADD–SUB.
Table 1. The comparison of the number of neurons required to construct the ADD module.

Variants of SNP Systems    Number of Neurons
DSNP [38]                  7
PASNP [39]                 8
PSNRSP [40]                11
MPAIRSNP [41]              9
SNP-IR [42]                8
RDGRSNP                    6
Table 2. The comparison of the number of neurons required to construct the SUB module.

Variants of SNP Systems    Number of Neurons
DSNP [38]                  7
PASNP [39]                 10
PSNRSP [40]                15
MPAIRSNP [41]              8
SNP-IR [42]                8
RDGRSNP                    6
Table 3. The comparison of the number of neurons required to construct the FIN module.

Variants of SNP Systems    Number of Neurons
DSNP [38]                  4
PASNP [39]                 9
PSNRSP [40]                8
MPAIRSNP [41]              8
SNP-IR [42]                5
RDGRSNP                    3
Table 4. The comparison of the number of neurons required to construct the INPUT module.

Variants of SNP Systems    Number of Neurons
DSNP [38]                  6
PASNP [39]                 6
PSNRSP [40]                10
MPAIRSNP [41]              6
SNP-IR [42]                6
RDGRSNP                    9
Table 5. The comparison of the number of neurons required to construct the deterministic ADD module.

Variants of SNP Systems    Number of Neurons
DSNP [38]                  4
PASNP [39]                 3
PSNRSP [40]                5
MPAIRSNP [41]              3
SNP-IR [42]                3
RDGRSNP                    3
Table 6. The comparison of the number of neurons required to construct the INPUT module in register machine $M_u$.

Variants of SNP Systems    Number of Neurons
DSNP [38]                  9
PASNP [39]                 9
PSNRSP [40]                13
MPAIRSNP [41]              9
SNP-IR [42]                9
RDGRSNP                    11
Table 7. The comparison of the number of neurons required to construct a small universal SNP system for simulating function computation.

Variants of SNP Systems    Number of Neurons
DSNP [38]                  81
PASNP [39]                 121
PSNRSP [40]                151
MPAIRSNP [41]              95
SNP-IR [42]                100
RDGRSNP                    68
