Article

First ElGamal Encryption/Decryption Scheme Based on Spiking Neural P Systems with Communication on Request, Weights on Synapses, and Delays in Rules

Instituto Politécnico Nacional, ESIME Culhuacan, Av. Santa Ana No. 1000, Ciudad de México 04260, Mexico
* Authors to whom correspondence should be addressed.
Mathematics 2025, 13(9), 1366; https://doi.org/10.3390/math13091366
Submission received: 3 April 2025 / Revised: 15 April 2025 / Accepted: 16 April 2025 / Published: 22 April 2025
(This article belongs to the Section E: Applied Mathematics)

Abstract:
During the last five years, spiking neural P (SN P) systems have attracted a lot of attention in the field of cryptography, since their high computational capabilities allow them to support advanced and complex cryptographic algorithms more efficiently. Specifically, these systems can be seen as a potential solution for efficiently executing asymmetric algorithms, which are more computationally demanding than symmetric ones. This factor becomes critical in resource-constrained single-board computer systems, since many of these systems are currently used to ensure the security of IoT applications in portable devices. In this work, we present for the first time an implementation of the asymmetric encryption algorithm ElGamal based on spiking neural P systems and their cutting-edge variants. The proposed design covers both the encryption and decryption processes. In particular, we propose the design of a neural network to efficiently perform the extended Euclidean algorithm used in the decryption task. We devote special effort to creating a compact, high-performance circuit for the extended Euclidean algorithm, since this computation is the most demanding part of the decryption process. Finally, we perform several tests to show the computational capabilities of our proposal in comparison to conventional implementations on single-board computer systems. Our results show that the proposed encryption/decryption scheme can potentially be used to ensure confidentiality, data integrity, and secure authentication, among other applications, in resource-constrained embedded systems.

1. Introduction

In the last five years, spiking neural P systems have attracted significant attention as a cutting-edge solution within cybersecurity. These systems take advantage of the unique properties of spiking neurons, characterized by their discrete signaling mechanisms, which makes them suitable for practical hardware implementations. Recent studies have explored the application of spiking neural P systems within mobile operating systems, focusing on critical issues such as phishing and spam detection [1]. The findings of these experiments reveal an accuracy rate that surpasses 90%, underscoring the effectiveness of this approach in strengthening security protocols and protecting users against malicious online threats. In addition, other works have developed key-agreement protocols based on spiking neural networks with anti-spikes. Specifically, the authors created an advanced system inspired by the functionality of the tree parity machine [2]. More specifically, recent studies have shown that spiking neural P systems can be used to compute the RSA algorithm [3], one of the most widely used algorithms for securing communications over the Internet, protecting the exchange of sensitive information such as emails and messaging traffic. However, the implementation of this algorithm requires large area consumption and processing time. In particular, the calculation of the modular exponentiation is computationally demanding [4]. On the other hand, the ElGamal encryption algorithm can be seen as a potential alternative offering stronger security guarantees than the RSA algorithm. However, ElGamal requires more processing time than RSA. Therefore, the use of spiking neural P systems to support ElGamal opens new horizons in the simulation of advanced public-key cryptographic algorithms, since SN P systems are parallel and distributed computing systems.
Recently, several variants of SN P systems, such as anti-spikes [5,6,7], astrocytes [8], weights on synapses [9,10,11], rules on synapses [12,13,14], asynchronous mechanisms [15], cooperative rules [16], programming synaptic connections between neurons [17], and communication on request [18], have been proposed to increase the computational capabilities of conventional SN P systems. Here, we have made great efforts to design neural circuits to support the ElGamal cryptographic algorithm based on spiking neural P systems with communication on request (SNQ P systems). In particular, we use SNQ P systems to design the Extended Euclidean Algorithm, which is a necessary method for calculating the multiplicative inverse of an integer in the decryption process of the ElGamal encryption algorithm. In addition, we incorporate delays in rules to improve the communication mechanisms between neurons. To verify the validity of our design, we conducted various encryption and decryption tests on different texts.

2. The Proposed Implementation of the ElGamal Encryption/Decryption Scheme Based on SNQ P Systems

Before presenting the proposed ElGamal encryption/decryption scheme based on SNQ P systems, we provide some concepts and notation related to the formal definition of SN P systems with communication on request. In general terms, SN P systems are mainly composed of neurons interconnected through synapses. Specifically, the behavior of the soma is regulated by rules (forgetting rules and firing rules) to process information, which is encoded by means of spikes [19]. The definition of SNQ P systems provided in [18] allows for the handling of different types of spikes; however, in this work, we only use one type of spike. Therefore, the following definition is based on [20].
Definition 1. 
A spiking neural P system with communications on request (SNQ P System) is a tuple consisting of four components: a finite alphabet, neurons, synapses, and an output neuron. The system is mathematically defined as follows:
Π = (O, σ_1, …, σ_m, syn, out)
where
  • m > 0 is the number of neurons;
  • O = {a}, where a is the unique element of this set, called a spike;
  • σ_i = (n_i, R_i), with 1 ≤ i ≤ m, represents the set of neurons, where the following applies:
    - n_i ≥ 0 is the number of spikes present in neuron σ_i in the initial configuration of the system;
    - R_i is a finite set of rules in neuron σ_i, with syntax E/Qw; t. Here, E is a regular expression over a and λ (the empty string), and w is a finite non-empty sequence of queries of the form (a^p, j) or (a^∞, j), where p > 0, 1 ≤ j ≤ m, and j ≠ i. The term t is the delay time after which the rule is applied.
  • syn is the set of synapses. Let s be an element of syn. Then, s has the form ((i, j), w), where the pair (i, j) represents the synaptic connection between σ_i and σ_j, with i, j ∈ {1, 2, …, m} and i ≠ j. The term w indicates that, if c spikes are requested by the receiving neuron, only c − w spikes will reach it (0 ≤ w ≤ c). When the weight is w = 0, we use the notation (i, j) instead of ((i, j), w).
  • out ∈ {1, 2, …, m} denotes the output neuron.
Now, we explain the meaning of the queries (a^p, j) and (a^∞, j). If neuron σ_i contains (a^p, j), then neuron σ_i requests p copies of a from neuron σ_j. If neuron σ_i contains (a^∞, j), then neuron σ_i requests all spikes from neuron σ_j. Note that these types of queries implicitly define the set of synapses. To apply the rule E/Qw; t in neuron σ_i, two firing conditions must be satisfied: (1) a^{n_i} ∈ E (the number of spikes contained in σ_i must be described by E); (2) neuron σ_j must contain at least p spikes of type a. Suppose two or more requesting neurons want to simultaneously apply the rule (a^p, j), and the two firing conditions are satisfied. In that case, only p copies of spike a are removed from the emitting neuron σ_j, and these p spikes are replicated and transmitted simultaneously to each of the requesting neurons. Conflicting queries arise when two or more neurons require a different number of spikes from the same sending neuron. In such cases, the conflict is resolved by selecting one of the queries non-deterministically and discarding the rest. A spiking neural P system with communication on request starts from its initial configuration, which is defined by the number of spikes present in each neuron, and proceeds by applying the rules described above in all neurons to obtain the next configuration; this step is called a transition. A sequence of transitions from one configuration to another is called a computation. A computation halts if no rule can be applied in any neuron. The result of an SNQ P system is the number of spikes present in σ_out in the halting configuration. Although our definition of SNQ P systems incorporates the concept of time delay in rules, it is still equivalent to Turing machines: if we consider a zero delay in all neurons, our definition becomes equivalent to those in [18,20], which are computationally universal.
Recently, the use of spiking neural P systems with communication on request (SNQ P systems) has increased the computational capabilities of arithmetic neural circuits by exploiting their parallel processing capabilities in natural computing, owing to the ability of a neuron to request any number of spikes from another simultaneously. Based on this, we use SNQ P systems to support the ElGamal encryption/decryption scheme, since this algorithm is the most demanding in terms of processing speed.
The ElGamal encryption/decryption scheme includes three steps:
  • The calculation of the modular exponentiation operation, defined by Y_A = α^{X_A} mod q, where q is a prime number, α is a primitive root of q, X_A is the value of the private key used in the ElGamal encryption algorithm, and the public key is composed as follows: PU = {q, α, Y_A}.
  • For the encryption process, we need to calculate three operations: K = Y_A^p mod q, C1 = α^p mod q, and C2 = (K · M) mod q, where p is a random number defined by the system, and M is the plaintext; both values must be less than q.
  • For the decryption process, we compute K = C1^{X_A} mod q, calculate the multiplicative inverse K^{-1}, and recover M = (C2 · K^{-1}) mod q.
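The three steps above can be sketched end-to-end in plain Python. This is a minimal numeric illustration of the scheme only, not the SNQ P circuit itself; the function names are ours, and the parameter values in the usage note are illustrative, not taken from the paper.

```python
def elgamal_keys(q, alpha, x_a):
    # Public key PU = {q, alpha, y_a}, with y_a = alpha^x_a mod q
    return (q, alpha, pow(alpha, x_a, q))

def elgamal_encrypt(pu, m, p):
    q, alpha, y_a = pu
    k = pow(y_a, p, q)          # one-time key K = Y_A^p mod q
    c1 = pow(alpha, p, q)       # C1 = alpha^p mod q
    c2 = (k * m) % q            # C2 = (K * M) mod q
    return c1, c2

def elgamal_decrypt(q, x_a, c1, c2):
    k = pow(c1, x_a, q)         # recover K = C1^x_a mod q
    k_inv = pow(k, -1, q)       # modular inverse (Python 3.8+ built-in)
    return (c2 * k_inv) % q     # M = (C2 * K^{-1}) mod q
```

For example, with q = 19, α = 10 (a primitive root of 19), X_A = 5, p = 6, and M = 17, encryption followed by decryption recovers M = 17.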

2.1. Modular Exponentiation

To design the proposed neural modular exponentiation circuit, we use the fast modular exponentiation algorithm; our design uses fifteen neurons, σ_1, σ_2, …, σ_15, to perform the operation (α^{X_A} mod q), as shown in Figure 1.
The circuit operates as follows. In the first simulation step, neuron σ_1 is loaded with the exponentiation value, neurons σ_5 and σ_6 are loaded with α (also known as B), and neurons σ_7 and σ_13 are loaded with q (also known as m). Neuron σ_2 computes the binary representation of σ_1. Once the binary membrane reaches the halting condition, the membrane executes the operation r = (r × b) mod m. In this process, neurons σ_5 and σ_6 ensure the execution of r × b, storing the result in neuron σ_7. Neuron σ_8 computes the modulo operation, and neuron σ_9 fires a spike when the binary membrane reaches the halting condition.
The values in neurons σ 5 and σ 6 are then overwritten with the result stored in σ 13 . Finally, the membrane B = ( B × B ) mod m follows a similar process, with the only difference being that, instead of computing r × b , it calculates B × B . The result is stored in neuron σ 13 . Ultimately, the final result of ( α X A mod q ) is obtained in neuron σ 7 .
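The square-and-multiply procedure that the circuit simulates can be summarized in a few lines of Python; the function name is ours, and the variables r and B mirror the r = (r × b) mod m and B = (B × B) mod m membranes described above.

```python
def mod_exp(b, e, m):
    # Right-to-left binary (fast) modular exponentiation: b^e mod m.
    r = 1
    B = b % m
    while e > 0:
        if e & 1:            # current binary digit of the exponent
            r = (r * B) % m  # the r = (r * b) mod m membrane
        B = (B * B) % m      # the B = (B * B) mod m membrane
        e >>= 1
    return r
```

The final value of r corresponds to the result obtained in neuron σ_7.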
According to the formal definition of SNQ P systems, the proposed neural modular exponentiation circuit is mathematically expressed as follows.
Π = ( O , σ 1 , σ 2 , σ 3 , σ 4 , σ 5 , σ 6 , σ 7 , σ 8 , σ 9 , σ 10 , σ 11 , σ 12 , σ 13 , σ 14 , σ 15 , s y n , o u t )
where
O = { a } σ 1 = ( n , { { R 1 = a * / Q ( a , 2 ) } ) σ 2 = ( { { R 1 = a * / Q ( a 2 , 1 ) } ) σ 3 = ( a , { { R 1 = h a l t a } , { R 2 = Q ( a , 15 ) } } ) σ 4 = ( r , { R 1 = a * / Q ( a , 3 ) ( a , 8 ) } ) σ 5 = ( b , { { R 1 = a * / Q ( a , 3 ) ( a , 13 ) } , { R 2 = a * / Q ( a , 6 ) } } ) σ 6 = ( b , { { R 1 = a * / Q ( a , 3 ) ( a , 13 ) } , { R 2 = a * / Q ( a , 5 ) } } ) σ 7 = ( m , { { R 1 = a * / Q ( a , 3 ) ( a , 13 ) } , { R 2 = a * / Q ( a , 5 ) } } ) σ 8 = ( { R 1 = a * / Q ( a m , 7 ) } ) σ 9 = ( a , { { R 1 = h a l t a } , { R 2 = Q ( a , 3 ) ( a , 15 ) } } ) σ 10 = ( B , { { R 1 = a * / Q ( a , 9 ) ( a , 13 ) } } ) σ 11 = ( B , { { R 1 = a * / Q ( a , 3 ) ( a , 13 ) } , { R 2 = Q ( a , 12 ) } } ) σ 12 = ( B , { { R 1 = a * / Q ( a , 3 ) ( a , 13 ) } , { R 2 = Q ( a , 11 ) } } ) σ 13 = ( B , { { R 1 = a * / Q ( a , 10 ) ( a , 9 ) } , { R 2 = Q ( a , 12 ) } } ) σ 14 = ( { R 1 = a * / Q ( a m , 13 ) } ) σ 15 = ( a , { { R 1 = h a l t a } , { R 2 = Q ( a , 3 ) } } ) s y n = { ( ( 1 , 2 ) , 1 ) , ( ( 3 , 4 ) , 1 ) , ( ( 3 , 5 ) , 1 ) , ( ( 3 , 6 ) , 1 ) , ( ( 3 , 7 ) , 1 ) , ( 3 , 9 ) , ( 3 , 15 ) , ( 5 , 6 ) , ( 6 , 5 ) , ( 7 , 8 ) , ( ( 9 , 10 ) , 1 ) , ( ( 9 , 11 ) , 1 ) , ( ( 9 , 12 ) , 1 ) , ( ( 9 , 13 ) , 1 ) , ( 9 , 15 ) , ( 11 , 12 ) , ( 12 , 11 ) , ( 12 , 13 ) , ( 13 , 14 ) , ( 15 , 3 ) } o u t = { σ 1 , σ 13 }

2.2. The Proposed Implementation of the ElGamal Encryption Process Based on SNQ P Systems

To design the proposed neural encryption circuit, we use 22 neurons, as shown in Figure 2. The K module performs the operation K = (Y_A)^p mod q and consists of neurons σ_1 to σ_8. The C1 module performs the operation C1 = α^p mod q and is composed of neurons σ_9 to σ_16. Finally, neurons σ_17 to σ_22 perform the operation C2 = (K × M) mod q in the C2 module.
This circuit works as follows. At the first simulation step, neurons σ 1 , σ 9 , and σ 17 are loaded with Y A , α and M, respectively, while neurons σ 2 and σ 10 are loaded with p 1 , and neurons σ 7 , σ 15 , and σ 21 are loaded with the q value. We can observe, in this circuit, that modules K and C 1 perform the modular exponentiation whose operation has already been described in Figure 1, so it only remains to describe the operation of the C 2 module.
Module C2 starts its operation when no rule can be applied to any neuron in module K. At this point, K = (Y_A)^p mod q spikes are loaded into neuron σ_19. In the next simulation step, neuron σ_18 is loaded with K spikes (it receives K − 1 spikes from neuron σ_19 and 1 spike from neuron σ_17). This process is repeated M times until K × M is obtained in neuron σ_20. Finally, neurons σ_21 and σ_22 perform the modular operation to obtain the final result of C2 = (K × M) mod q in neuron σ_20.
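A spike-level reading of the C2 module can be sketched in Python under our own naming: multiplication realized as M repeated additions of K spikes, followed by a modular reduction obtained by repeatedly removing q spikes, as neurons σ_17 to σ_22 do.

```python
def c2_module(k, m_plain, q):
    # Accumulate K spikes, M times (multiplication by repeated addition).
    spikes = 0
    for _ in range(m_plain):
        spikes += k
    # Modular reduction by repeated removal of q spikes.
    while spikes >= q:
        spikes -= q
    return spikes            # C2 = (K * M) mod q
```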
According to the formal definition of SNQ P systems, the proposed neural cipher circuit to perform the operation K = (Y_A)^p mod q, which is shown in Figure 2, is mathematically expressed as follows.
Π = ( O , σ 1 , σ 2 , σ 3 , σ 4 , σ 5 , σ 6 , σ 7 , σ 8 , σ 9 , σ 10 , σ 11 , σ 12 , σ 13 , σ 14 , σ 15 , s y n , o u t )
where:
O = { a } σ 1 = ( n , { { R 1 = a * / Q ( a , 2 ) } ) σ 2 = ( { { R 1 = a * / Q ( a 2 , 1 ) } ) σ 3 = ( a , { { R 1 = h a l t a } , { R 2 = Q ( a , 15 ) } } ) σ 4 = ( r , { R 1 = a * / Q ( a , 3 ) ( a , 8 ) } ) σ 5 = ( b , { { R 1 = a * / Q ( a , 3 ) ( a , 13 ) } , { R 2 = a * / Q ( a , 6 ) } } ) σ 6 = ( b , { { R 1 = a * / Q ( a , 3 ) ( a , 13 ) } , { R 2 = a * / Q ( a , 5 ) } } ) σ 7 = ( m , { { R 1 = a * / Q ( a , 3 ) ( a , 13 ) } , { R 2 = a * / Q ( a , 5 ) } } ) σ 8 = ( { R 1 = a * / Q ( a m , 7 ) } ) σ 9 = ( a , { { R 1 = h a l t a } , { R 2 = Q ( a , 3 ) ( a , 15 ) } } ) σ 10 = ( B , { { R 1 = a * / Q ( a , 9 ) ( a , 13 ) } } ) σ 11 = ( B , { { R 1 = a * / Q ( a , 3 ) ( a , 13 ) } , { R 2 = Q ( a , 12 ) } } ) σ 12 = ( B , { { R 1 = a * / Q ( a , 3 ) ( a , 13 ) } , { R 2 = Q ( a , 11 ) } } ) σ 13 = ( B , { { R 1 = a * / Q ( a , 10 ) ( a , 9 ) } , { R 2 = Q ( a , 12 ) } } ) σ 14 = ( { R 1 = a * / Q ( a m , 13 ) } ) σ 15 = ( a , { { R 1 = h a l t a } , { R 2 = Q ( a , 3 ) } } ) s y n = { ( ( 1 , 2 ) , 1 ) , ( ( 3 , 4 ) , 1 ) , ( ( 3 , 5 ) , 1 ) , ( ( 3 , 6 ) , 1 ) , ( ( 3 , 7 ) , 1 ) , ( 3 , 9 ) , ( 3 , 15 ) , ( 5 , 6 ) , ( 6 , 5 ) , ( 7 , 8 ) , ( ( 9 , 10 ) , 1 ) , ( ( 9 , 11 ) , 1 ) , ( ( 9 , 12 ) , 1 ) , ( ( 9 , 13 ) , 1 ) , ( 9 , 15 ) , ( 11 , 12 ) , ( 12 , 11 ) , ( 12 , 13 ) , ( 13 , 14 ) , ( 13 , 31 ) , ( 15 , 3 ) } o u t = { σ 1 , σ 13 }
According to the formal definition of SNQ P systems, the proposed neural cipher circuit to perform the operation C1 = α^p mod q in Figure 2 is mathematically expressed as follows.
Π = ( O , σ 16 , σ 17 , σ 18 , σ 19 , σ 20 , σ 21 , σ 22 , σ 23 , σ 24 , σ 25 , σ 26 , σ 27 , σ 28 , σ 29 , σ 30 , s y n , o u t )
where
O = { a } σ 16 = ( n , { { R 1 = a * / Q ( a , 17 ) } ) σ 17 = ( { { R 1 = a * / Q ( a 2 , 1 ) } ) σ 18 = ( a , { { R 1 = h a l t a } , { R 2 = Q ( a , 30 ) } } ) σ 19 = ( r , { R 1 = a * / Q ( a , 18 ) ( a , 23 ) } ) σ 20 = ( b , { { R 1 = a * / Q ( a , 18 ) ( a , 28 ) } , { R 2 = a * / Q ( a , 21 ) } } ) σ 21 = ( b , { { R 1 = a * / Q ( a , 18 ) ( a , 28 ) } , { R 2 = a * / Q ( a , 20 ) } } ) σ 22 = ( m , { { R 1 = a * / Q ( a , 18 ) ( a , 28 ) } , { R 2 = a * / Q ( a , 20 ) } } ) σ 23 = ( { R 1 = a * / Q ( a m , 22 ) } ) σ 24 = ( a , { { R 1 = h a l t a } , { R 2 = Q ( a , 18 ) ( a , 30 ) } } ) σ 25 = ( B , { { R 1 = a * / Q ( a , 24 ) ( a , 28 ) } } ) σ 26 = ( B , { { R 1 = a * / Q ( a , 18 ) ( a , 28 ) } , { R 2 = Q ( a , 27 ) } } ) σ 27 = ( B , { { R 1 = a * / Q ( a , 18 ) ( a , 28 ) } , { R 2 = Q ( a , 26 ) } } ) σ 28 = ( B , { { R 1 = a * / Q ( a , 25 ) ( a , 24 ) } , { R 2 = Q ( a , 27 ) } } ) σ 29 = ( { R 1 = a * / Q ( a m , 28 ) } ) σ 30 = ( a , { { R 1 = h a l t a } , { R 2 = Q ( a , 18 ) } } ) s y n = { ( ( 16 , 17 ) , 1 ) , ( ( 18 , 19 ) , 1 ) , ( ( 18 , 20 ) , 1 ) , ( ( 18 , 21 ) , 1 ) , ( ( 18 , 22 ) , 1 ) , ( 18 , 24 ) , ( 18 , 30 ) , ( 20 , 21 ) , ( 21 , 20 ) , ( 22 , 23 ) , ( ( 24 , 25 ) , 1 ) , ( ( 24 , 26 ) , 1 ) , ( ( 24 , 27 ) , 1 ) , ( ( 24 , 28 ) , 1 ) , ( 24 , 30 ) , ( 26 , 27 ) , ( 27 , 26 ) , ( 27 , 28 ) , ( 28 , 29 ) , ( 30 , 18 ) } o u t = { σ 16 , σ 28 }
According to the formal definition of SNQ P systems, the proposed neural cipher circuit to perform the operation C2 = (K × M) mod q in Figure 2 is mathematically expressed as follows.
Π = ( O , σ 31 , σ 32 , σ 33 , σ 34 , σ 35 , σ 36 , s y n , o u t )
where
O = { a } σ 31 = ( M ) σ 32 = ( m 1 , { R 1 = λ / Q ( a K 1 , 33 ) ( a , 31 ) } ) σ 33 = ( K , { { R 1 = λ / Q ( a v , 13 ) } , { R 2 = a * / Q ( a m 1 , 32 ) } } ) σ 34 = ( C 2 , { R 1 = a * / Q ( a m 1 , 32 ) } ) σ 35 = ( q , { R 1 = a * / Q ( a q , 35 ) } ) σ 36 = ( u 2 , { R 1 = a * / Q ( a q , 34 ) ( a q , 35 ) } ) s y n = { ( 31 , 32 ) , ( ( 32 , 33 ) , 1 ) , ( 32 , 34 ) , ( 33 , 32 ) , ( 34 , 35 ) , ( 34 , 36 ) , ( 35 , 36 ) } o u t = { σ 34 }

2.3. Decryption Process

To perform the decryption process, we must design the neural circuits in SNQ P systems for the operations K = (C1)^{X_A} mod q and M = (C2 · K^{-1}) mod q. However, the second operation requires computing the multiplicative inverse K^{-1} using the Extended Euclidean Algorithm. The design of this algorithm in SNQ P systems is shown below.

2.3.1. Extended Euclidean Algorithm

In our design of the Extended Euclidean Algorithm neural circuit, we use two fundamental arithmetic operations:
  • The addition of signed numbers.
  • The multiplication of signed numbers.
The addition of signed numbers.
In Figure 3, we use 19 neurons to design the neural circuit to perform the operation ( ± a ± b ) . At the first simulation step, neurons σ 2 and σ 4 are loaded with the addends a and b, respectively, while neuron σ 1 is loaded with the sign of a and neuron σ 3 with the sign of b (zero spikes correspond to a positive sign and one spike to a negative sign). Neuron σ 16 must be loaded with two spikes.
From the second to the fourth simulation steps, the sign membrane calculates the sign of the operation (±a ± b). In the second cycle, neurons σ_5, σ_6, σ_7, and σ_8 are loaded with the values of neurons σ_1, σ_2, σ_3, and σ_4, respectively. In the third clock cycle, neuron σ_13 extracts a spike from neuron σ_14 (so σ_14 will be activated in the next clock cycle). If neuron σ_6 contains more spikes than neuron σ_8, then the rules in neuron σ_13 assign |n_4 − n_3| spikes to neuron σ_6 and zero spikes to neuron σ_8; otherwise, they assign zero spikes to neuron σ_6 and |n_4 − n_3| spikes to neuron σ_8. In the fourth clock cycle, neuron σ_14 will contain zero spikes and activate one of its two rules, indicating a negative value if the result of the operation (±a ± b) is negative.
From the second to the sixth simulation steps, the signed sum membrane obtains the result of the operation (±a ± b) in neuron σ_19. From the second to the fourth simulation steps, the operation flag membrane determines whether to add or subtract the operands, storing zero spikes or one spike in neuron σ_15 if the operation is an addition or a subtraction, respectively. Due to the delay in the synapse of neuron σ_18 (t = 3), this neuron will be activated when the operation flag membrane has reached the halting condition and neuron σ_15 contains a spike. At this moment, if neuron σ_10 contains more spikes than neuron σ_12, the rules in neuron σ_18 assign |n_6 − n_5| spikes to neuron σ_10 and zero spikes to neuron σ_12; otherwise, they assign zero spikes to neuron σ_10 and |n_6 − n_5| spikes to neuron σ_12. Finally, in the sixth step, neuron σ_19 adds the remaining spikes from neurons σ_10 and σ_12 to obtain the final result in the signed sum membrane.
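The sign-magnitude behavior described above can be sketched as a minimal Python model, under our encoding of zero spikes for a positive sign and one spike for a negative sign (function name ours):

```python
def signed_add(s_a, a, s_b, b):
    # Signs: 0 = positive (zero spikes), 1 = negative (one spike).
    if s_a == s_b:
        # Same signs: plain addition, sign unchanged.
        return s_a, a + b
    # Different signs: spikes cancel pairwise, leaving |a - b|;
    # the sign of the larger magnitude survives (0 for a zero result).
    if a >= b:
        return (s_a if a > b else 0), a - b
    return s_b, b - a
```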
According to the formal definition of SNQ P systems, the proposed neural cipher circuit to perform the operation ± C = ± A ± B , which is shown in Figure 3, is mathematically expressed as follows.
Π = ( O , σ 1 , σ 2 , σ 3 , σ 4 , σ 5 , σ 6 , σ 7 , σ 8 , σ 9 , σ 10 , σ 11 , σ 12 , σ 13 , σ 14 , σ 15 , σ 16 , σ 17 , σ 18 , σ 19 , s y n , o u t )
where
O = { a } σ 1 = ( S 1 ) σ 2 = ( n 1 ) σ 3 = ( S 2 ) σ 4 = ( n 2 ) σ 5 = ( S 3 , { R 1 = λ / Q ( a S 1 , 1 ) } ) σ 6 = ( n 3 , { R 1 = λ / Q ( a n 1 , 2 ) } ) σ 7 = ( S 4 , { R 1 = λ / Q ( a S 2 , 3 ) } ) σ 8 = ( n 4 , { R 1 = λ / Q ( a n 2 , 4 ) } ) σ 9 = ( S 5 , { R 1 = λ / Q ( a S 1 , 1 ) } ) σ 10 = ( n 5 , { R 1 = λ / Q ( a n 1 , 2 ) } ) σ 11 = ( S 6 , { R 1 = λ / Q ( a S 2 , 3 ) } ) σ 12 = ( n 6 , { R 1 = λ / Q ( a n 2 , 4 ) } ) σ 13 = ( n 7 , { { R 1 = λ / Q ( a n 3 , 6 ) ( a n 3 , 8 ) ( a , 14 ) } , { R 2 = λ / Q ( a n 4 , 6 ) ( a n 4 , 8 ) ( a , 14 ) } } ) σ 14 = ( S = 1 , { { R 1 = λ / Q ( a S 3 , 5 ) ( a n 3 , 6 ) } , { R 2 = λ / Q ( a S 4 , 7 ) ( a n 4 , 8 ) } } ) σ 15 = ( S 7 , { { R 1 = a * / Q ( a S 5 , 9 ) ( a S 6 , 11 ) } ) σ 16 = ( q = 2 , { { R 1 = λ / Q ( a q , 15 ) } ) σ 17 = ( S 8 , { { R 1 = λ / Q ( a q , 15 ) ( a q , 16 ) } ) σ 18 = ( r e , { { R 1 = λ / Q ( a n 5 , 10 ) ( a n 5 , 12 ) ( a S 7 , 15 ) ; 3 } , { R 2 = λ / Q ( a n 6 , 10 ) ( a n 6 , 12 ) ( a S 7 , 15 ) ; 3 } } ) σ 19 = ( r , { { R 1 = a * / Q ( a n 5 , 10 ) ; 4 } , { R 2 = a * / Q ( a n 6 , 12 ) ; 4 } } ) s y n = { ( 1 , 5 ) , ( 1 , 9 ) , ( 2 , 6 ) , ( 2 , 10 ) , ( 3 , 7 ) , ( 3 , 11 ) , ( 4 , 8 ) , ( 4 , 12 ) , ( 5 , 14 ) , ( 6 , 13 ) , ( ( 6 , 14 ) , n 3 ) , ( 7 , 14 ) , ( 8 , 13 ) , ( ( 8 , 14 ) , n 4 ) , ( 9 , 15 ) , ( 10 , 18 ) , ( 10 , 19 ) , ( 11 , 15 ) , ( 12 , 18 ) , ( 18 , 19 ) , ( 14 , 13 ) , ( 15 , 16 ) , ( 15 , 17 ) , ( 15 , 18 ) } o u t = { σ 14 , σ 19 }
Multiplication of signed numbers. To compute a − q·b, we use the membranes shown in Figure 3 and Figure 4. The Mult membrane in Figure 4 calculates the product q·b as follows: Initially, neurons σ_25 and σ_26 are loaded with the values of the operands q and b, respectively. Starting from the second simulation step and continuing for q cycles, neuron σ_27 receives b spikes, which accumulate in neuron σ_28.
The operation of the Sign membrane is as follows: In the first simulation step, the neuron σ 20 is loaded with a spike to denote the sign of q, and the neuron σ 21 is loaded with the sign of b. After one simulation step, neuron σ 22 accumulates the spikes extracted from neurons σ 20 and σ 21 . Finally, the neuron σ 23 performs the modulo 2 operation on the spikes contained in the neuron σ 22 .
Once the Sign and Mult membranes reach the halting condition (at which point neuron σ_22 contains the sign and neuron σ_28 contains the product of the expression (q·b)), the query λ/Q(a^∞, 28) is established in neuron σ_4, as shown in Figure 3. Neurons σ_1, σ_2, σ_3, and σ_4 are then loaded with the spike representing the sign of a, the value of a, the negated sign of (q·b), and the product value of (q·b) obtained in neuron σ_28, respectively. In the next simulation step, the circuit in Figure 3 comes into operation so that, when it completes its calculations, the sign of (a − q·b) will be in neuron σ_14 and the result in neuron σ_19.
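The combined behavior of the Sign and Mult membranes can be modeled in a few lines of Python (function name ours; signs again encoded as 0 for positive, 1 for negative):

```python
def signed_mul(s_q, q, s_b, b):
    # Sign membrane: add the sign spikes and reduce modulo 2.
    sign = (s_q + s_b) % 2
    # Mult membrane: q rounds, each delivering b spikes to the accumulator.
    product = 0
    for _ in range(q):
        product += b
    return sign, product
```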
According to the formal definition of SNQ P systems, the proposed neural cipher circuit to perform the operation ± C = ( ± A ) · ( ± B ) , which is shown in Figure 4, is mathematically expressed as follows.
Π = ( O , σ 20 , σ 21 , σ 22 , σ 23 , σ 24 , σ 25 , σ 26 , σ 27 , σ 28 , s y n , o u t )
where
O = { a } σ 20 = ( S 1 = 1 ) σ 21 = ( S 2 ) σ 22 = ( S m , { { R 1 = a + / Q ( a S 1 , 20 ) } , { R 2 = a + / Q ( a S 1 , 21 ) } } ) σ 23 = ( p = 2 , { { R 1 = a * / Q ( a p , 22 ) σ 24 = ( u , { { R 1 = a λ / Q ( a p , 22 ) ( a p , 23 ) σ 25 = ( n 1 ) σ 26 = ( n 2 , { R 1 = a * / Q ( a m 1 , 27 ) } ) σ 27 = ( m 1 , { R 1 = λ / Q ( a n 2 1 , 25 ) ( a , 26 ) } ) σ 28 = ( m 2 , { R 1 = a * / Q ( a m 1 , 27 ) } ) s y n = { ( 20 , 22 ) , ( 21 , 22 ) , ( 22 , 23 ) , ( 22 , 24 ) , ( 23 , 24 ) , ( 25 , 27 ) , ( 26 , 27 ) , ( ( 27 , 26 ) , 1 ) , ( 27 , 28 ) } o u t = { σ 22 , σ 28 }
Extended Euclidean Algorithm (EEA): The Extended Euclidean Algorithm computes the greatest common divisor (GCD) of the integers a and b ( a > b ) and also finds the multiplicative inverse of b. The algorithm proceeds with the following steps as long as the condition b > 0 is satisfied:
q = ⌊a/b⌋
r = a − (q · b)
x = x_2 − (q · x_1)
y = y_2 − (q · y_1)
a = b,  b = r
x_2 = x_1,  x_1 = x
y_2 = y_1,  y_1 = y
The initial values of the algorithm are x_2 = 1, x_1 = 0, y_2 = 0, and y_1 = 1. The design of the EEA neural circuit is shown in Figure 5. In the first simulation step, neurons σ_74 to σ_83 are initialized with zero spikes. Neurons σ_1 to σ_10 are loaded with the values of a, b, the sign of x_2, the value of x_2, the sign of x_1, the value of x_1, the sign of y_2, the value of y_2, the sign of y_1, and the value of y_1, respectively.
In each simulation step, neurons σ_1 to σ_10 update their values according to Equation (1). Starting from the second simulation step, the Quotient membrane becomes active, and by the end of its operations, neurons σ_11, σ_12, and σ_13 hold the values (a mod b), b, and the quotient ⌊a/b⌋, respectively.
When the Quotient membrane reaches the stop condition, neurons σ_14 and σ_15 are assigned a spike and the quotient value obtained in neuron σ_13, respectively (the same process is applied to neurons σ_16 and σ_17).
In the subsequent clock cycles, the M u l t 1 and M u l t 2 membranes are executed to compute the products ( q 1 · x 1 ) and ( q 2 · y 1 ) , respectively. Both membranes function similarly to the membrane in Figure 4, but M u l t 1 numbers its neurons from σ 18 to σ 26 , and M u l t 2 numbers its neurons from σ 46 to σ 54 . When M u l t 1 reaches the stopping condition, the sign and result of ( q 1 · x 1 ) are stored in neurons σ 20 and σ 26 , respectively. Similarly, when M u l t 2 reaches the halting condition, the sign and result of ( q 2 · y 1 ) are stored in neurons σ 48 and σ 54 , respectively.
Once the M u l t 1 and M u l t 2 membranes have reached the halting condition, the A d d i t i o n 1 and A d d i t i o n 2 membranes begin their execution, respectively. The A d d i t i o n 1 membrane calculates the operation x 2 + s x with its neurons numbered from σ 27 to σ 45 . The sign and result of this operation (when M u l t 1 reaches the stop condition) are stored in neurons σ 40 and σ 45 , respectively. Similarly, the A d d i t i o n 2 membrane calculates the operation y 2 + s y with its neurons numbered from σ 55 to σ 73 . The sign and result of this operation (when M u l t 2 reaches the stop condition) are stored in neurons σ 68 and σ 73 , respectively.
When M u l t 1 and M u l t 2 membranes reach the stopping condition, neuron σ 84 extracts all spikes from neurons σ 24 and σ 52 of the membrane shown in Figure 4 (note that the neuron relabeling for M u l t 1 and M u l t 2 corresponds to neurons σ 18 σ 26 and σ 46 σ 54 , respectively).
Once the Q u o t i e n t membrane has reached the stopping condition, and after a delay of a simulation steps, neurons σ 74 and σ 75 are loaded with the value of b and a m o d b , respectively.
Once the A d d i t i o n 1 and A d d i t i o n 2 membranes have reached the stopping condition, neurons σ 74 , σ 75 , σ 76 , σ 77 , σ 78 , σ 79 , σ 80 , σ 81 , σ 82 , and σ 83 are assigned the values a, b, s 1 , x 2 , s 2 , x 1 , s 3 , y 2 , s 4 , y 1 , respectively.
Finally, neurons σ 80 and σ 81 display the sign and value of the multiplicative inverse of b, respectively.
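Equation (1) with the stated initial values corresponds to the following Python routine; mod_inverse additionally folds a negative coefficient into [0, q), matching the sign handling described above. Both function names are ours.

```python
def extended_euclid(a, b):
    # EEA iteration of Equation (1), initial values x2=1, x1=0, y2=0, y1=1.
    x2, x1, y2, y1 = 1, 0, 0, 1
    while b > 0:
        q = a // b
        r = a - q * b
        x = x2 - q * x1
        y = y2 - q * y1
        a, b = b, r
        x2, x1 = x1, x
        y2, y1 = y1, y
    return a, x2, y2           # gcd(a, b) and the Bezout coefficients

def mod_inverse(k, q):
    # Multiplicative inverse of k modulo q (exists only when gcd = 1).
    g, _, y = extended_euclid(q, k)
    if g != 1:
        raise ValueError("inverse does not exist")
    return y % q               # fold a negative coefficient into [0, q)
```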
According to the formal definition of SNQ P systems, the proposed neural cipher circuit to perform the Extended Euclidean Algorithm, which is shown in Figure 5, is mathematically expressed as follows.
Π = ( O , σ 1 , σ 2 , σ 3 , σ 4 , σ 5 , σ 6 , σ 7 , σ 8 , σ 9 , σ 10 , σ 11 , σ 12 , σ 13 , σ 14 , σ 15 , σ 16 , σ 17 , σ 74 , σ 75 , σ 76 , σ 77 , σ 78 , σ 79 , σ 80 , σ 81 , σ 82 , σ 83 , σ 84 , s y n , o u t )
where
O = { a }
σ1 = ( a, { { R1 = a*/Q(a^{n_a}, 74)(a^{n_b}, 75) } } )
σ2 = ( b, { { R1 = a*/Q(a^{n_b}, 75) } } )
σ3 = ( s1, { { R1 = a*/Q(a^{n_{s1}}, 76)(a^{n_b}, 75) } } )
σ4 = ( x2, { { R1 = a*/Q(a^{n_{x2}}, 77)(a^{n_b}, 75) } } )
σ5 = ( s2, { { R1 = a*/Q(a^{n_{s2}}, 78)(a^{n_b}, 75) } } )
σ6 = ( x1, { { R1 = a*/Q(a^{n_{x1}}, 79)(a^{n_b}, 75) } } )
σ7 = ( s3, { { R1 = a*/Q(a^{n_{s3}}, 80)(a^{n_b}, 75) } } )
σ8 = ( y2, { { R1 = a*/Q(a^{n_{y2}}, 81)(a^{n_b}, 75) } } )
σ9 = ( s4, { { R1 = a*/Q(a^{n_{s4}}, 82)(a^{n_b}, 75) } } )
σ10 = ( y1, { { R1 = a*/Q(a^{n_{y1}}, 83)(a^{n_b}, 75) } } )
σ11 = ( d1, { { R1 = a*/Q(a^α, 1) } } )
σ12 = ( d2, { { R1 = λ/Q(a^b, 2) }, { R2 = λ/Q(a^{d2}, 11) } } )
σ13 = ( q, { { R1 = λ/Q(a^{d2}, 12) } } )
σ14 = ( s_{q1} = 1, { { R1 = λ/Q(a, env) } } )
σ15 = ( q1, { { R1 = λ/Q(a^q, 13) } } )
σ16 = ( s_{q2} = 1, { { R1 = λ/Q(a, env) } } )
σ17 = ( q2, { { R1 = λ/Q(a^q, 13) } } )
σ74 = ( n_a, { { R1 = λ/Q(a^{d2}, 12); a } } )
σ75 = ( n_b, { { R1 = λ/Q(a^{d1}, 11); a } } )
σ76 = ( n_{s1}, { { R1 = λ/Q(a^{s2}, 5) } } )
σ77 = ( n_{x2}, { { R1 = λ/Q(a^{x1}, 6) } } )
σ78 = ( n_{s2}, { { R1 = λ/Q(a, 40) } } )
σ79 = ( n_{x1}, { { R1 = λ/Q(a^x, 45) } } )
σ80 = ( n_{s3}, { { R1 = λ/Q(a^{s4}, 9) } } )
σ81 = ( n_{y2}, { { R1 = λ/Q(a^{y1}, 10) } } )
σ82 = ( n_{s4}, { { R1 = λ/Q(a, 68) } } )
σ83 = ( n_{y1}, { { R1 = λ/Q(a^x, 73) } } )
σ84 = ( t1, { { R1 = λ/Q(a, 24) }, { R2 = λ/Q(a, 52) } } )
syn = { (1,11), (2,12), (3,27), (4,28), (5,18), (5,76), (6,23), (6,77), (7,55), (8,56), (9,46), (9,80), (10,51), (10,81), (11,12), (11,75), (12,13), (12,74), (13,15), (13,17), (14,19), (15,24), (16,47), (17,52), (18,20), (19,20), (20,21), (20,22), (20,29), (21,22), (23,25), (24,25), (24,84), ((25,24), 1), (25,26), (26,30), (27,31), (27,35), (28,32), (28,36), (29,33), (29,37), (30,34), (30,38), (31,40), (32,39), ((32,40), n3), (33,40), (34,39), ((34,40), n4), (35,41), (36,44), (36,45), (37,41), (38,44), (38,45), (40,78), (41,42), (41,43), (41,44), (42,43), (45,79), (46,48), (47,48), (48,49), (48,50), (48,57), (49,50), (51,53), (52,53), ((53,52), 1), (53,54), (54,58), (55,59), (55,63), (56,60), (56,64), (57,61), (57,65), (58,62), (58,66), (59,68), (60,67), ((60,68), n3), (61,68), (62,67), ((62,68), n4), (63,69), (64,72), (64,73), (65,69), (66,72), (66,73), (68,82), (69,70), (69,71), (69,72), (70,71), (73,81), ((74,1), 1), ((75,1), n_b−1), (75,2), ((76,3), 1), ((75,3), n_b−1), ((77,4), 1), ((75,4), n_b−1), ((78,5), 1), ((75,5), n_b−1), ((79,6), 1), ((75,6), n_b−1), ((80,7), 1), ((75,7), n_b−1), ((81,8), 1), ((75,8), n_b−1), ((82,9), 1), ((75,9), n_b−1), ((83,10), 1), ((75,10), n_b−1) }
out = { σ80, σ81 }
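For reference, the computation realized by this neural circuit is the classical iterative extended Euclidean algorithm. The following Python sketch uses variable names (q, x1, x2, y1, y2, and the remainders held in neurons n_a, n_b) chosen to mirror the neuron labels above; this correspondence is our reading of the circuit, not part of the formal definition.

```python
def extended_euclid(a, b):
    """Iterative extended Euclidean algorithm.
    Returns (g, x, y) such that a*x + b*y == g == gcd(a, b)."""
    x2, x1 = 1, 0   # Bezout coefficients of a (neurons x2, x1)
    y2, y1 = 0, 1   # Bezout coefficients of b (neurons y2, y1)
    while b != 0:
        q = a // b                  # quotient (neuron q)
        a, b = b, a - q * b         # remainder update (neurons n_a, n_b)
        x2, x1 = x1, x2 - q * x1    # coefficient update for a
        y2, y1 = y1, y2 - q * y1    # coefficient update for b
    return a, x2, y2
```

The coefficient y2 returned for inputs (b, q) is the candidate multiplicative inverse of b modulo q, which is exactly the value collected by the output neurons σ80 and σ81 (sign and magnitude).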

2.3.2. The Proposed Implementation of the ElGamal Decryption Process Based on SNQ P Systems

After the neural circuit for the Extended Euclidean Algorithm is obtained, the design of the decryption process for the ElGamal encryption algorithm is presented in Figure 6. Its description is as follows. The K membrane performs the same operation as the membrane in Figure 1, and it is now used to compute K = (C_1)^{X_A} mod q; this result is stored in neuron σ6.
Neurons σ9 and σ10 extract the sign and value of b, respectively, from neurons σ80 and σ81 in the membrane corresponding to the Euclidean algorithm. If the sign of b is positive, neuron σ9 contains zero spikes, and neuron σ14 receives the value of the multiplicative inverse K^{−1} from neuron σ10. If the sign of b is negative, the Congruent membrane is activated to calculate the value (b mod q) and assign it to neuron σ14. Finally, the M membrane computes M = (C_2 × K^{−1}) mod q, and its operation is similar to that performed by the C_2 membrane described in Figure 2.
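The decryption steps described above can be summarized in a short, self-contained Python sketch. The function and variable names (elgamal_decrypt, x_a, etc.) are illustrative choices, and the comments map each step to the corresponding membrane of the proposed circuit.

```python
def extended_euclid(a, b):
    """Return (g, x, y) with a*x + b*y == g == gcd(a, b)."""
    x2, x1, y2, y1 = 1, 0, 0, 1
    while b != 0:
        q, r = divmod(a, b)
        a, b = b, r
        x2, x1 = x1, x2 - q * x1
        y2, y1 = y1, y2 - q * y1
    return a, x2, y2

def elgamal_decrypt(c1, c2, x_a, q):
    """Decrypt the ElGamal ciphertext (c1, c2) with private key x_a modulo the prime q."""
    k = pow(c1, x_a, q)              # K membrane: K = (C1)^{X_A} mod q
    _, b, _ = extended_euclid(k, q)  # Euclidean membrane: b is the inverse candidate
    if b < 0:                        # Congruent membrane: fold a negative b into [0, q)
        b += q
    return (c2 * b) % q              # M membrane: M = (C2 * K^{-1}) mod q
```

Given a ciphertext produced as c1 = g^k mod q and c2 = m * (y_a)^k mod q for public key y_a = g^{x_a} mod q, this routine recovers m.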
According to the formal definition of SNQ P systems, the proposed neural decipher circuit is mathematically expressed as follows.
Π = ( O , σ 1 , σ 2 , σ 3 , σ 4 , σ 5 , σ 6 , σ 7 , σ 8 , σ 9 , σ 10 , σ 11 , σ 12 , σ 13 , σ 14 , σ 15 , σ 16 , σ 17 , σ 18 , σ 19 , s y n , o u t ) where
O = { a }
σ1 = ( α )
σ2 = ( X_A − 1 )
σ3 = ( x, { { R1 = λ/Q(a^α, 1)(a, 2) }, { R2 = λ/Q(a^v, 6)(a, 2); 1 } } )
σ4 = ( w, { R1 = λ/Q(a^{y−1}, 5)(a, 3) } )
σ5 = ( y, { { R1 = λ/Q(a^α, 1) }, { R2 = a+/Q(a^w, 4) } } )
σ6 = ( v, { R1 = a+/Q(a^w, 4) } )
σ7 = ( q, { R1 = a*/Q(a^q, 6) } )
σ8 = ( u, { R1 = a*/Q(a^q, 6)(a^q, 7) } )
σ9 = ( s_{sk}, { R1 = a*/Q(a, 80) } )
σ10 = ( s_k, { R1 = a*/Q(a, 81) } )
σ11 = ( n_v, { R1 = a*/Q(a, 9)(a, 10) } )
σ12 = ( p_v = q )
σ13 = ( r_v, { R1 = a*/Q(a^{n_v}, 11)(a^{n_v}, 12) } )
σ14 = ( k1, { { R1 = λ/Q(a, 10) }, { R2 = λ/Q(a^{p_v−1}, 12)(a, 13) } } )
σ15 = ( C_2, { R1 = a*/Q(a^{n_3}, 16) } )
σ16 = ( n_3, { R1 = λ/Q(a^{C_2−1}, 15)(a, 14) } )
σ17 = ( M, { R1 = a*/Q(a^{n_3}, 16) } )
σ18 = ( q, { R1 = a*/Q(a^q, 17) } )
σ19 = ( u, { R1 = a*/Q(a^q, 17)(a^q, 18) } )
syn = { ((1,3), 1), (1,5), (2,3), (3,4), ((4,5), 1), (4,6), (5,4), ((6,3), 1), (6,7), (6,8), (7,8), (80,9), (9,11), (81,10), ((10,11), 1), (11,13), (12,13), (12,14), (13,14), (14,16), (15,16), ((16,15), 1), (16,17), (17,18), (17,19), (18,19) }
out = { σ17 }

3. Performance Evaluation

Here, we implement the proposed modular exponentiation neural circuit on an Intel Arria 10 GX 1150 FPGA, as shown in Figure 7. In this way, the proposed digital circuit processes the modular exponentiation neural circuit in parallel, since this operation is the most demanding in terms of area and computational cost. To achieve this, we use basic digital components such as registers, comparators, adders, and multiplexers. In particular, these digital components have allowed us to mimic the neural behavior of the proposed modular exponentiation neural circuit.
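To illustrate why registers, comparators, and adders suffice for this operation, the following sketch expresses modular exponentiation using only addition and comparison, the primitive operations named above. This is a software analogue written for exposition (the function name and structure are our own), not the actual RTL of the circuit.

```python
def mod_exp_add_compare(alpha, exponent, q):
    """Compute alpha^exponent mod q using only addition, subtraction,
    and comparison, mirroring the register/adder/comparator structure
    described for the digital circuit (illustrative sketch only)."""
    def mod_mul(x, y, q):
        # Multiply x by y through repeated addition into an accumulator
        # register, reducing modulo q with a comparator and subtractor.
        acc = 0
        for _ in range(y):
            acc += x
            while acc >= q:   # comparator triggers the subtraction
                acc -= q
        return acc

    result = 1
    for _ in range(exponent):
        result = mod_mul(result, alpha, q)
    return result
```

A hardware realization replaces the loops with a counter register and reuses one adder per cycle, which matches the area-oriented design goal stated above.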
In this work, we implement 15 complex neurons, consuming 28,750 logic elements (LEs), which represents 2.5% of the total available resources. To obtain the power consumption of the FPGA implementation, we use the Altera Quartus Prime tool, which reports a total power consumption of 0.26 W: the static power consumption is 0.005 W (1.9% of the total), and the dynamic power consumption is 0.255 W (98.1% of the total).

4. Conclusions and Future Work

This paper has introduced a new method for implementing the ElGamal cryptographic algorithm using SNQ P systems. We also outlined the implementation of the Extended Euclidean Algorithm using Spiking Neural P Systems with communication on request, which is crucial for calculating an integer’s multiplicative inverse in the ElGamal encryption algorithm’s decryption process. In addition, we incorporated delays in rules to improve communication mechanisms between neurons. Moreover, we conducted various tests to validate the effectiveness of our circuits. In general, our contributions highlight the potential of membrane computing, particularly SNQ P systems, in cryptography and secure communication. This work presents new opportunities to exploit the parallel processing capabilities of natural computing in cryptographic applications.
In future work, we plan to exploit the parallelism of membrane computing to further implement neural circuits that handle reliable key sizes in the ElGamal scheme. An alternative approach is to use addition chains to reduce the computational cost of ElGamal’s modular exponentiation.

Author Contributions

Conceptualization, G.S.; data curation, I.R. and E.V.; formal analysis, G.S.; funding acquisition, G.S.; investigation, I.R. and J.-G.A.; methodology, J.-G.A. and G.S.; resources, I.R. and G.D.; software, I.R. and D.-E.V.; supervision, G.D.; validation, I.R.; writing—original draft, E.V.; writing—review and editing, all authors. All authors have read and agreed to the published version of the manuscript.

Funding

The authors would like to thank the Instituto Politécnico Nacional for its financial support.

Data Availability Statement

No new data were created or analyzed in this study. Data sharing is not applicable to this article.

Acknowledgments

The authors would like to thank the Consejo Nacional de Ciencia y Tecnología (CONACYT) and the IPN for financial support in creating this work.

Conflicts of Interest

The authors declare no conflicts of interest.

Figure 1. The proposed membrane in SNQ P systems for performing modular exponentiation (α^{X_A} mod q).
Figure 2. The proposed neural encryption circuit.
Figure 3. The proposed membrane in SNQ P systems for adding signed numbers (±a ± b).
Figure 4. The proposed membrane in SNQ P systems for performing the product q·b.
Figure 5. The proposed SNQ P system for performing the Extended Euclidean Algorithm.
Figure 6. The proposed decipher circuit.
Figure 7. The proposed digital circuit.