Article

Privately Generated Key Pairs for Post Quantum Cryptography in a Distributed Network

1 School of Informatics Computing and Cyber Systems, Northern Arizona University, Flagstaff, AZ 86011, USA
2 Department of Mathematics, Brown University, Providence, RI 02901, USA
* Author to whom correspondence should be addressed.
Appl. Sci. 2024, 14(19), 8863; https://doi.org/10.3390/app14198863
Submission received: 3 September 2024 / Revised: 26 September 2024 / Accepted: 28 September 2024 / Published: 2 October 2024
(This article belongs to the Special Issue Recent Progress of Information Security and Cryptography)

Abstract:
In the proposed protocol, a trusted entity interacts with the terminal device of each user to verify the legitimacy of the public keys without having access to the private keys, which are generated and kept totally secret by the user. The protocol introduces challenge–response–pair mechanisms enabling the generation, distribution, and verification of cryptographic public–private key pairs in a distributed network with multi-factor authentication, tokens, and template-less biometry. While protocols using generic digital signature algorithms are proposed, the focus of the experimental work was to implement a solution based on Crystals-Dilithium, a post-quantum cryptographic algorithm under standardization. Crystals-Dilithium generates public keys consisting of two interrelated parts: a matrix-generating seed, and a vector computed from the matrix and two randomly picked vectors forming the secret key. We show how such a split of the public keys lends itself to a two-way authentication of both the trusted entity and the users.

1. Introduction

The crucial step needed to implement asymmetrical cryptographic schemes is to generate public/private key pairs and distribute them securely to a set of trusted users through open networks. Practical applications include existing credit cards, SIM cards, cryptocurrencies, secure mail, and many others. In structured implementations such as Public Key Infrastructures (PKIs), a trusted entity that can be called the Certificate Authority (CA) delegates the generation of the key pairs for each user to Hardware Security Modules (HSMs) or another private key store; a second trusted entity that can be called the Registration Authority (RA) solely keeps track of the public keys transmitted to each user by the CA. The transmission of the private keys to the users through an unsecured network is a weak link, as third parties could intercept the keys or pretend to be the CA through man-in-the-middle attacks. We developed a challenge–response–pair (CRP) mechanism enabling each user to generate its own key pair with standardized asymmetrical algorithms while allowing the CA to independently verify that the public keys are legitimate and originate from approved users. The role of the RA is then kept unchanged, enabling users to use Digital Signature Algorithms (DSAs) and Key Encapsulation Mechanisms (KEMs). In the new protocol, each user is equipped with a cryptographic table (CT) that can be generated from, but is not limited to, a Physical Unclonable Function (PUF), a biometric image, or simply a digital file. During an initial enrollment cycle that is performed securely, the users share their cryptographic tables with the CA. The process to generate a new key pair on demand is summarized as follows:
  • The CA picks a random “challenge” and communicates it to the user.
  • The user computes a “response” from this “challenge” using the CRP mechanism and its root of trust.
  • The user picks a random number to generate a public/private key pair.
  • The user sends back to the CA the public key and a digital signature.
  • The CA finally computes independently a “response” from the same “challenge” with the same CRP mechanism and Crypto-Table to verify the digital signature and the validity of the public key without having to ever know the private key.
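The challenge–response flow above can be sketched in Python. This is a minimal illustration, not the paper's implementation: shake_256 applied to a random byte string stands in for a real CRP mechanism (PUF or biometric root of trust), and all names and sizes are assumptions.

```python
import hashlib
import secrets

# Toy stand-in for a CRP mechanism: responses are derived from a secret
# "root of trust" (a random byte string here, standing in for a PUF or
# biometric source) keyed by the challenge.
def crp_response(root_of_trust: bytes, challenge: bytes) -> bytes:
    return hashlib.shake_256(root_of_trust + challenge).digest(32)

# Enrollment (secure setup): the CA records a Crypto-Table of
# challenge/response pairs measured from the user's root of trust.
root = secrets.token_bytes(32)                 # user-side secret, never shared later
crypto_table = {}
for _ in range(16):
    ch = secrets.token_bytes(16)
    crypto_table[ch] = crp_response(root, ch)  # CA-side copy

# Key-generation cycle: the CA picks a challenge; the user derives the
# response locally; the CA checks it against its Crypto-Table without
# ever learning the user's private key material.
challenge = next(iter(crypto_table))
user_response = crp_response(root, challenge)
assert crypto_table[challenge] == user_response
```

The response then seeds the user's key-pair generation, while the CA only ever compares responses, never private keys.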
In the development of a prototype demonstrating the approach, we implemented two factors for the CRP mechanism: (1) a physical root of trust and template-less biometry and (2) standardized PQC algorithms.
The National Institute of Standards and Technology (NIST) standardization effort has been narrowed over the last 7 years [1]. In July 2020, NIST announced the finalists for phase three of the program [2]: CRYSTALS-Kyber, CRYSTALS-Dilithium, SABER, NTRU, and FALCON with lattice cryptography [3,4,5,6,7]; RAINBOW with multivariate cryptography [8]; and Classic McEliece with code-based cryptography [9,10]. The standardized software targets DSA applications and KEMs. Recently, NIST decided to standardize lattice cryptography because of its relative maturity. Lattice-based algorithms rest on computationally hard problems such as the closest vector problem (CVP) and learning with errors (LWE). NIST was able to gather impressive resources to demonstrate the strength of these algorithms for replacing classical digital signature schemes, given the potential of quantum computers to break classical cryptographic schemes. We selected the standardized post-quantum algorithm CRYSTALS-Dilithium, which is based on lattices and learning with errors. The end applications considered for commercial deployment included those for existing public key infrastructures, financial transactions, loyalty programs, online retail, travel, cryptocurrencies, banking cards, and private networks.
The structure of this paper is as follows. In Section 2, we explain the current state of the art of Public Key Infrastructure and review relevant prior work. In Section 3 and Section 3.1, we discuss how Physically Unclonable Functions (PUFs) work with the challenge–response mechanism, and we incorporate this idea into generalized digital signature schemes in Section 3.2. In Section 3.3 and Section 3.4, we propose our novel Public Key Infrastructure and discuss the protocol in detail. In Section 4, we provide the results of our experiments. In Section 4.1, we discuss the Johnson–Lindenstrauss scheme as a potential method for biometric key generation. In Section 4.2, we discuss potential security vulnerabilities of the architecture. In Section 5 and Section 6, we draw some conclusions.

2. Background Information

Current Public Key Infrastructure (PKI) for establishing secure communication and verifying an end user depends on the Certificate Authority (CA) for the authentication of a user [11]. The role of the CA is one of the most crucial parts of security in PKI. This authentication of the user can be performed by applying various methods. In the user-driven certificates method (see Figure 1a), the CA provides a certificate to an end user. The user generates its public and private key pair, encrypts the certificate with the private key, and sends it back to the CA. The Registration Authority (RA) keeps the signed certificate and the public key of the user. To establish secure communication with the user, other users will verify the signed certificate and public key against the RA record. User-driven certificates give the highest privacy to the users as the private keys are never disclosed; however, they can be prone to man-in-the-middle (MIM) attacks [12]. Such a protocol is also vulnerable to bad actors due to its weak authentication capabilities.
Most commercial PKIs rely on third parties to authenticate the users and generate the public and private key pairs. The service provider communicates with both the CA and the end user to provide the key pairs. The CA provides a certificate to the user, who signs it with the private key and sends the signature to the CA. The RA keeps a record of the public key received from the service provider and the signature from the user. Figure 1b shows a graphical representation of the overall process. For example, a Hardware Security Module (HSM) generates the key pairs. The RA receives the user’s public key from the HSM, and the private key can be stored in a device, e.g., a smart card or token, and given to the end user. During the authentication process, the CA issues a certificate for the user, and the user signs the certificate utilizing the private key and sends it back to the CA. Other users can verify the user by using the signature from the CA and the public key from the RA. The described methods have their own strengths and weaknesses.
The third-party driven certificates discussed here can generate new keys easily and are less sensitive to MIM attacks, but the key distribution is a weak link and must rely on additional protections. Tamper-resistant tokens might give the user leverage against MIM attacks and a better key distribution policy, but the whole security depends on the level of confidence in the HSM. In addition, none of the schemes can fully guarantee protection against impersonation attacks. Also, the end users must always rely on a service provider or CA for the key pairs. This creates a weak link in the security scheme and trust issues between the users and the service providers. Most cryptocurrencies, secure mail, banking cards, and Subscriber Identity Modules (SIMs) still rely on the current PKI, and this could lead to a huge security breach as attacks by malicious groups grow more sophisticated every day.
An overall block diagram of the schemes proposed in this paper is shown in Figure 2. The objective is to allow users to generate their own key pairs. The Private Key Store can verify the legitimacy of users and their private keys. Such a protocol thereby combines the benefits of the user-driven protocol of Figure 1a with the third party protocol of Figure 1b without their inherent limitations.

Relevant Prior Work

Cambou et al. proposed a methodology for integrating PUFs as the source of seeds for NIST-approved lattice, NTRU, and code-based cryptographic protocols. They used semiconductor devices as PUFs to create hardware fingerprints that replace random numbers for private/public key generation. In such a scheme, the server administrates the infrastructure while the user independently generates the key pair, which provides a strong two-way authentication [13].
Maletsky et al. proposed a mechanism where a client device that is associated with a host device sends a public key and the associated certificate to the host device. Secure storage associated with the client device stores the parent public key, private key, and certificate. The client device receives instructions from the host device to generate a new key pair, which is the child’s private and public key. To ensure randomness and security, the child’s private key is created based on the first random number produced within the secure device. Then, the client device computes the signature on the child’s public key using the parent’s private key stored in the secure device. Finally, the client device sends the child public key and the first signature to the host device, and the host device verifies the first signature [14].
Barthelemy et al. designed a system where a signer has an identity associated with a public key; the signer also holds two private keys: the first private key and the second private key. The second private key is stored securely with a reliable third party. When the signer wants to sign a message, they use the first private key to create a pre-signature of the message, and the signer then sends both the message and the pre-signature to the reliable third party. The reliable third party verifies the pre-signature using the signer’s public key to ensure that it is valid and, upon successful verification, the third party uses the second private key to sign the final message [15].
Popp used a token to generate a certificate request and send it to the CA. The request includes a public cryptographic key that is uniquely associated with the token. The CA generates a symmetric cryptographic key specifically for the token. The CA encrypts the symmetric key using the public key that was included in the token’s certificate request. This ensures that only the token, which has the corresponding private key, can decrypt and access the symmetric key. The CA creates a digital certificate for the token and this digital certificate contains the encrypted symmetric key as one of its attributes. The CA sends the digital certificate back to the token. Upon receiving the digital certificate, the token uses its private key to decrypt the encrypted symmetric key and stores the decrypted key [16].
Merrill devised a mechanism where, at the time of establishing secure communication between two parties, the first party selects an image that has been digitally signed by the second party. From the image, the first party extracts a portion, referred to as the first image portion, which contains an encoded part of the second party’s digital signature. A second image portion, associated with the first, contains another encoded part of the digital signature. The digital signature reconstructed from the two portions is then validated using the second party’s public key [17].
Nix devised a scheme where the server stores its private key, the device’s public key, and the device’s ephemeral public key. On the other hand, the client device stores the server’s public key. The client and the server utilize the Elliptic Curve Diffie–Hellman Exchange (ECDHE) protocol to securely exchange cryptographic keys over a public channel and, through this process, they mutually derive the same symmetric key K1. The client device encrypts its public key with the symmetric key K1 to create the first ciphertext and sends it to the server. The server encrypts its public key with the derived symmetric key K1 to create a second ciphertext and sends it to the client device [18].
Ekberg et al. used a unique identifier specific to the mobile device for authentication purposes. The mobile device starts by sending a request to the authentication platform for a public key certificate, which allows the mobile device to give access to a particular service. In response, the authentication platform sends an identity challenge back to the mobile device ensuring that the request is coming from the legitimate device. The mobile device responds to the identity challenge by sending a specific tag back to the authentication platform. This tag is unique to the mobile device and serves as a proof of identity. Upon receiving the tag, the authentication platform verifies it to ensure that it corresponds to the mobile device and, after verification, the authentication platform generates a public key certificate of the mobile device [19].
Hwang proposed that a cryptographic trio consisting of three elements, a public modulus, public exponent, and private key-dependent exponent, could be utilized in the digital signature mechanism for user authentication. Users can choose a secret (personalized secret) that can be changed at the user’s discretion. The personalized secret and the two primes, e.g., the public modulus, and public exponent form the cryptographic trio, and the public modulus and public exponent form the public key. During the authentication process, the system issues a challenge, and the user responds to this challenge by generating a digital signature using both the personalized secret and the cryptographic trio. The digital signature is considered valid if the first input matches the user’s personalized secret and the second input matches the trio consisting of the public modulus, public exponent, and private key-dependent exponent. A conventional CA is not required to establish secure communication [20].
Maximov, Hell, and Smeets looked into a method of hashing a one-time signing key. With this method, the device sends the computed hash of the signing key, the identity of the device linked to the public certificate, and the hash path from the one-time signing key to the root hash. The server verifies that the computed hash matches the one-time signing key for the root hash in the device’s public certificate. In addition, the server checks that the index corresponding to the hash path from the one-time signing key to the root hash matches the appropriate time slot. If both checks are fulfilled, the server determines that the device indeed possesses the correct one-time signing key [21].
Gentry proposed a scheme where a sender encrypts a digital message using the recipient’s public key. The recipient must have up-to-date authority from the authorizer to decrypt the message. The authorizer grants permission for validation that the recipient is allowed to access the message. If the recipient has the necessary up-to-date authorization, the recipient can decrypt the message using their private key [22].
Gantman et al. proposed that a pseudorandom string could be used to derive public and private keys. By using the modulus N and the pseudorandom string value, each user can generate their public key and, corresponding to the public key, their private key. The verifier receives the public key of the other entity. By using the common modulus N, their own private key, and the other entity’s public key, each user calculates a shared secret value. Each user uses this shared secret value to calculate an authentication signature value. During verification, users regenerate the public key and shared secret value to generate the authentication signature value and match it with the received one for user verification [23].

3. Privately Generated Key Pairs for Post-Quantum Cryptography

This section describes secure challenge–response–pair (CRP) mechanisms that can enhance security in the current PKI systems that use public–private cryptographic keys. The root of trust of a terminal device can rest on the unique responses it produces to known challenges.
Examples of CRP-generating systems can be tokens containing Physically Unclonable Functions (PUFs) [24], biometric images that contain unique information about an individual, or a structured digital file (see Figure 3). The PUF can be based on SRAM, RRAM, or MRAM [25,26,27]. During an enrollment cycle, which is performed in a secure setup, the CRP generating system generates what is called a Crypto-Table, which is the lookup table containing a list of challenges and their corresponding responses.
Some of the objectives of the scheme are to mitigate problems such as man-in-the-middle (MIM) attacks, impersonation attacks, compromised key generation, distribution, and storage, and malicious HSMs. Each Crypto-Table should be protected by anti-tampering measures, be unique, and be sufficiently complex, i.e., have high entropy. Other priorities include the enhancement of the one-wayness of the CRPs and the implementation of ternary logic [28,29], which masks the fuzzy and unstable CRPs.

3.1. Initial Step: Enrollment of Each Terminal Device—Root of Trust

Each terminal device or client needs to either carry with them or possess at least one CRP system establishing a root of trust. The controlling entity needs to have a database of Crypto-Tables tracking these CRP systems. This must be achieved through an initial step, also called an enrollment cycle. This operation must be performed in a highly secure environment, like the ones in which secret keys are downloaded onto banking cards and other secured tokens. Examples of enrollment cycles are as follows:
  • The Crypto-Tables are generated from the PUFs contained in tokens by a trusted service provider operating like an HSM. The CRPs are carefully measured, the tokens are delivered to the clients, and the Crypto-Tables are transmitted to the CA.
  • The clients must visit their bank, which is equipped to generate sets of CRPs from the facial images. The one-wayness of the CRP mechanism is such that the facial images cannot be reconstructed from the CRPs.
  • A service provider generates Crypto-Tables from sets of digital images by encrypting them, hashing them, and applying an extended output function (XOF). The CA receives the Crypto-Table while the clients receive the encrypted files.
After the completion of the enrollment cycle, the CA keeps a cryptographic table for each device, enabling it to generate the same CRPs as the roots of trust, with shared responses. Malicious insiders at the CAs or HSMs could create a security breach. The proposed protocol aims to mitigate the impact of such attacks in the following way: the clients never disclose their private keys. For many applications, the main role of the CA is limited to the validation of the digital signatures with the public keys. Therefore, a loss of control of the Crypto-Table does not necessarily compromise a financial transaction. However, such a loss of control is not desirable, and a two-way authentication of the CA should be considered.
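The third enrollment option (a Crypto-Table derived from a digital file by encrypting, hashing, and applying an XOF) can be sketched as follows. This is a toy illustration under stated assumptions: an XOR stream stands in for a real cipher, and the function names, challenge count, and sizes are all illustrative.

```python
import hashlib
import secrets

def build_crypto_table(file_bytes: bytes, key: bytes, n_challenges: int = 16):
    """Toy enrollment: encrypt the file, then derive one response per
    challenge index by hashing and expanding with the SHAKE-256 XOF."""
    pad = hashlib.shake_256(key).digest(len(file_bytes))
    encrypted = bytes(a ^ b for a, b in zip(file_bytes, pad))  # stand-in cipher
    table = {}
    for ch in range(n_challenges):
        seed = hashlib.sha3_256(encrypted + ch.to_bytes(2, "little")).digest()
        table[ch] = hashlib.shake_256(seed).digest(32)         # XOF output
    return table, encrypted

file_bytes = secrets.token_bytes(1024)   # stand-in for the digital image
key = secrets.token_bytes(32)
table, encrypted = build_crypto_table(file_bytes, key)

# CA stores `table`; the client stores only `encrypted` and can re-derive
# any response on demand, while the responses do not reveal the file.
client_resp = hashlib.shake_256(
    hashlib.sha3_256(encrypted + (3).to_bytes(2, "little")).digest()).digest(32)
assert client_resp == table[3]
```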

3.2. Generic Digital Signature Algorithm

This section describes a generic mechanism that can strengthen the current PKI system and give clients the authority to generate key pairs rather than relying on the CA. Then, we describe a specific algorithm developed around the learning with errors (LWE)-based lattice cryptography and the standardized Dilithium algorithm. Figure 4 summarizes the proposed generic protocol that can work with all the classical Digital Signature Algorithms (DSAs). The CA controls the validity of client devices through a root of trust while the client generates independently the key pairs. The outline of the protocol for public key distribution and verification for DSAs is as follows:
  • The CA picks a random number to create challenges that are sent over to client devices through a public, i.e., unsecured, network. These challenges act as the instructions for a CRP mechanism in the form of bit streams. For additional security, the transmission of the challenges could be protected by encryption schemes and multi-factor authentication (MFA).
  • Client i uses these challenges and feeds this as input for the root of trust. The root of trust can be a hardware token integrated with a PUF, an image, a file containing biometric information about an individual, or a structured file, as described, for example, in Figure 3.
  • From the root of trust and the CRP mechanism, client i generates the response K_e(i) and sends the hashed message digest H(K_e(i)) to the CA. An example of a hash function is SHA-3.
  • The CA produces K_e(i)′ from the Crypto-Table that is approximately identical to K_e(i). A search engine such as the RBC can recognize K_e(i) by matching the hashed message digest of K_e(i)′ with H(K_e(i)), as well as variations of K_e(i)′ located at short Hamming distances from K_e(i). Clients without a valid root of trust cannot generate a recognizable response K_e(i); thus, the method allows the CA to authenticate client i.
  • The client generates a public–private key pair (P_k(i), S_k(i)) by using its own random number generator and an agreed-upon asymmetrical algorithm.
  • Client i generates a message M, performs an agreed-upon concatenation operation with K_e(i), and hashes the result to obtain H(M||K_e(i)).
  • Client i creates the signature by signing the message digest with the secret key S_k(i) and using the agreed-upon DSA: S = Sign(H(M||K_e(i)), S_k(i)).
  • Client i sends M, S, and P_k(i) to the CA.
  • The CA authenticates client i and verifies the digital signature by using the corrected response K_e(i)′, independently computing S′ = H(M||K_e(i)′). The CA uses S′ and the public key P_k(i) to verify S.
  • If the CA successfully verifies client i, the RA publishes the public key P_k(i).
  • As needed, the CA can transmit S′ back to client i to complete a two-way authentication scheme, allowing the client to verify the authenticity of the CA.
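The outline above can be illustrated with a short Python sketch. As a hedge: a toy textbook-RSA signature stands in for the agreed-upon DSA (the primes are far too small for real use), and the names ke, msg, and the input strings are illustrative, not from the paper's implementation.

```python
import hashlib

# Toy textbook-RSA stand-in for the agreed-upon DSA; any standardized
# DSA slots into the sign/verify roles below.
p, q, e = 1000003, 999983, 65537
n = p * q
d = pow(e, -1, (p - 1) * (q - 1))   # private exponent

def sign(digest: bytes) -> int:
    return pow(int.from_bytes(digest, "big") % n, d, n)

def verify(digest: bytes, sig: int) -> bool:
    return pow(sig, e, n) == int.from_bytes(digest, "big") % n

# Client i: response Ke(i) derived from the root of trust, message M.
ke = hashlib.shake_256(b"root-of-trust||challenge").digest(32)
msg = b"transaction payload"
h = hashlib.sha3_256(msg + ke).digest()          # H(M || Ke(i))
s = sign(h)

# CA: regenerates Ke(i)' from its Crypto-Table, recomputes S' = H(M || Ke(i)')
# and verifies the signature with the client's public key (n, e).
ke_ca = hashlib.shake_256(b"root-of-trust||challenge").digest(32)
s_prime = hashlib.sha3_256(msg + ke_ca).digest()
assert verify(s_prime, s)
```

Note how the CA needs only the shared response and the public key; the private exponent d never leaves the client.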

3.3. PKI for Lattice-Based Digital Signature Algorithm (LWE)

Standardized lattice-based post-quantum cryptographic algorithms with LWE such as Crystals-Dilithium are well suited to designing a variation of the scheme presented in the previous section. The public–private cryptographic key pair generation for a client device i is based on polynomial computations in a lattice ring. The knowledge of the integer-based vector t_i and the matrix A_i, with t_i = A_i·S^(1)_i + S^(2)_i, hides the vector S^(1)_i and the “small” error vector S^(2)_i.
With LWE algorithms, the private key is S_k(i) = (S^(1)_i; S^(2)_i) and the public key is P_k(i) = (t_i; A_i), as follows:
  • A randomly picked seed a_i generates A_i with the extended output function SHAKE.
  • A second randomly picked seed b_i generates the vectors S^(1)_i and S^(2)_i.
  • The vector t_i is computed: t_i = A_i·S^(1)_i + S^(2)_i.
  • Seed a_i and t_i become the public key P_k(i).
  • S^(1)_i and S^(2)_i become the private key S_k(i).
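The key-generation steps above can be mocked up in Python. This is a simplified integer-vector sketch under stated assumptions: real Dilithium works over vectors of polynomials in a ring, the toy dimension N and the helper names are illustrative, and only the structure (seed expansion via SHAKE, small secrets, t = A·s1 + s2 mod q) is preserved.

```python
import hashlib
import secrets

Q = 8380417          # Dilithium modulus; the vector sizes below are toy values
N = 8                # toy dimension; real Dilithium uses polynomial rings
ETA = 2              # bound on "small" secret coefficients

def expand_A(seed: bytes) -> list:
    """Expand the public seed a_i into an N x N matrix with SHAKE-256."""
    stream = hashlib.shake_256(seed).digest(4 * N * N)
    vals = [int.from_bytes(stream[4*k:4*k+4], "little") % Q for k in range(N * N)]
    return [vals[r*N:(r+1)*N] for r in range(N)]

def small_vector(seed: bytes, tag: bytes) -> list:
    """Derive a vector with coefficients in [-ETA, ETA] from a seed."""
    stream = hashlib.shake_256(seed + tag).digest(N)
    return [b % (2 * ETA + 1) - ETA for b in stream]

# Key generation: public key is (seed_a, t); private key is (s1, s2).
seed_a = secrets.token_bytes(32)   # produced by the root of trust in the protocol
seed_b = secrets.token_bytes(32)   # produced by the client's own RNG
A = expand_A(seed_a)
s1 = small_vector(seed_b, b"s1")
s2 = small_vector(seed_b, b"s2")
t = [(sum(A[r][c] * s1[c] for c in range(N)) + s2[r]) % Q for r in range(N)]
```

Because A is regenerated deterministically from seed_a, the public key only needs to carry the 32-byte seed plus t, which is the split the protocol exploits.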
The outline of the signing and verification procedure for Dilithium is as follows:
6. Step 1—Signature: Generate a masking vector of polynomials y.
7. Compute the vector A_i·y and set w_1 to be the high-order bits of the coefficients.
8. Create the challenge c as the hash of the message and w_1.
9. Compute the signature z = y + c·S^(1)_i.
10. If any coefficient of z is larger than a threshold, reject and restart at step 1.
11. If any coefficient of the low-order bits of A_i·z − c·t_i is too large, reject and restart at step 1.
12. Verification: Compute w_1 to be the high-order bits of A_i·z − c·t_i and accept if c is the hash of the message and w_1.
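A scalar analogue of this signing loop can be sketched as follows. This is a toy illustration, not the standardized algorithm: real Dilithium works over vectors of polynomials with carefully chosen rejection bounds, while here the rejection step is implemented as an explicit recheck of the verifier's condition, and all parameters and helper names are illustrative.

```python
import hashlib
import secrets

Q = 8380417
ALPHA = 1 << 16          # granularity used to take "high-order bits"
ETA = 2                  # bound on the small secrets

def high_bits(w: int) -> int:
    return w // ALPHA

def challenge(msg: bytes, w1: int) -> int:
    """Small hash-derived challenge c from the message and w_1."""
    return hashlib.sha3_256(msg + w1.to_bytes(4, "little")).digest()[0]

# Key generation: t = a*s1 + s2 mod Q (scalar analogue of t_i = A_i*S1 + S2).
a  = secrets.randbelow(Q - 1) + 1
s1 = secrets.randbelow(2 * ETA + 1) - ETA
s2 = secrets.randbelow(2 * ETA + 1) - ETA
t  = (a * s1 + s2) % Q

def sign(msg: bytes):
    while True:                                   # Fiat-Shamir with aborts
        y  = secrets.randbelow(Q)                 # masking value
        w1 = high_bits((a * y) % Q)
        c  = challenge(msg, w1)
        z  = y + c * s1
        # Restart unless the verifier would recover the same high bits,
        # i.e., unless c*s2 did not push a*y across an ALPHA boundary.
        if high_bits((a * z - c * t) % Q) == w1:
            return z, c

def verify(msg: bytes, z: int, c: int) -> bool:
    w1 = high_bits((a * z - c * t) % Q)           # a*z - c*t = a*y - c*s2
    return challenge(msg, w1) == c

z, c = sign(b"hello")
assert verify(b"hello", z, c)
```

The identity a·z − c·t = a·y − c·s2 (mod Q) is why verification recovers the same high-order bits: the small term c·s2 rarely changes them, and the rejection loop discards the cases where it does.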
The proposed scheme leverages LWE algorithms in which the public keys consist of two connected elements, the seed a_i and the vector t_i. As shown in Figure 5, the root of trust only generates seed a_i.
The protocol includes the generation and sharing of the challenges Ch-CT from random number generator RNG#1. After recovery, both communicating parties have access to the same seed a_i and, hence, the matrix A_i. A second random number generates the secret key, i.e., the vectors S^(1)_i and S^(2)_i.
The client device transmits t_i, the message M, and the signature S, thereby allowing the CA to verify the signature while keeping the private key secret. The seed a_i is kept secret during the verification process to allow the CA to authenticate the client device and the full public key, which is released to a registration authority (RA).
The protocol is outlined as follows:
  • With RNG#1, the CA picks a random number to create challenges Ch-CT, which are sent over to the client devices through a public, i.e., unsecured, network. These challenges act as the instructions for the CRP mechanism in the form of bit streams. For additional security, the transmission of the challenges could be protected by encryption schemes and multi-factor authentication (MFA).
  • Client i uses these challenges and feeds this as input for the root of trust. The root of trust can be a hardware token integrated with a PUF, an image, a file containing biometric information about an individual, or a structured file.
  • From the root of trust and the CRP mechanism, client i generates the seed a_i as its response and sends the hashed message digest H(a_i) to the CA. An example of a hash function is SHA-3.
  • The CA produces a_i′ from the Crypto-Table that is approximately identical to a_i. A search engine such as the RBC can recognize a_i by matching the hashed message digest of a_i′ with H(a_i), as well as variations of a_i′ located at short Hamming distances from a_i′. Clients without a valid root of trust cannot generate a recognizable response a_i; thus, the method allows the CA to authenticate client i.
  • Client i generates the secret key S_k(i) = (S^(1)_i; S^(2)_i) by using its own random number generator.
  • Client i computes t_i = A_i·S^(1)_i + S^(2)_i and generates a message M and its hash H(M).
  • Client i signs the message M using the secret key S_k(i) and produces the signature S = Sign(H(M), S_k(i)).
  • Client i sends t_i, S, and M to the CA for verification.
  • The CA authenticates client i by reconstructing the full public key {t_i, a_i} and verifying the signature.
  • If the CA successfully verifies client i, the RA publishes the public key P_k(i).
To simplify the block diagram of Figure 5, we note that the public key of Dilithium P_k(i) consists of two parts: the first part P_k1(i) is the seed a_i generating the matrix A_i, and the second part P_k2(i) is the vector t_i. This vector is computed from A_i and the secret key S_k(i): t_i = A_i·S^(1)_i + S^(2)_i. The protocol is shown in Figure 6. The scheme to find the same seed a_i from the Crypto-Table and the root of trust is not included in the simplified diagram.
Here, we exploit the “one-wayness” of the lattice protocol. If the CA can successfully verify message M with the public key P_k(i), the following conclusions can be deduced:
  • Message M is valid.
  • Public key P_k(i) is valid.
  • The CA knows that client i generated the same matrix A_i from the root of trust.
  • The CA knows that client i computed a valid private key from its own random number generator, which was also used to compute the vector t_i = P_k2(i).

3.4. Implementation with Multifactor Authentication

The protocol that we implemented and that is described in this section is based on two factors: a root of trust integrated into a token and facial image-based biometry. During enrollment, a Crypto-Table is generated from the root of trust, and a set of CRPs is extracted from the facial image of the client. The one-wayness of the CRP mechanism is such that the knowledge of the pairs does not disclose the facial image, thereby protecting privacy. The overview of the protocol is shown in Figure 7.
A Key Encapsulation Mechanism (KEM) that is based on the CRPs of the facial images is embedded in the protocol. For this purpose, the random number generator RNG#3 generates the 256 bit long ephemeral key K_1 and uses it to filter the responses. Then, 256 responses, each 256 bit long, are generated from the CRP mechanism. A subset of these responses (SBR) is kept after filtration and encrypted based on the following method:
  • All responses corresponding to a state of “0” of K_1 are erased.
  • All responses corresponding to a state of “1” of K_1 are kept, forming the SBR.
  • The SBR is encrypted with a symmetrical algorithm, e.g., AES, using seed a_1 as the key.
  • The ciphertext SBR′ is transmitted to the CA.
The CA decrypts SBR′ with the same symmetrical algorithm and seed a_1 and then recovers K_1 by comparing the SBR with the full set of responses generated with the CRP mechanism. After the completion of the verification cycle, the CA and the client device can use K_1 as a cryptographic key to protect the exchange of information between them.
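The filtration-based KEM can be sketched as follows. This is a minimal sketch under stated assumptions: a SHAKE-derived XOR stream stands in for the AES step, the toy crp_response function stands in for the real CRP mechanism, and all names are illustrative.

```python
import hashlib
import secrets

def crp_response(root: bytes, index: int) -> bytes:
    """Toy CRP mechanism: response j derived from the shared root of trust."""
    return hashlib.shake_256(root + index.to_bytes(2, "little")).digest(32)

def xof_stream_cipher(key: bytes, data: bytes) -> bytes:
    """SHAKE-based XOR stream standing in for the AES step of the protocol."""
    pad = hashlib.shake_256(key).digest(len(data))
    return bytes(x ^ y for x, y in zip(data, pad))

root   = secrets.token_bytes(32)   # known to both sides via the Crypto-Table
seed_a = secrets.token_bytes(32)   # recovered seed a_1
k1     = secrets.token_bytes(32)   # 256-bit ephemeral key from RNG#3

# Client: keep only the responses where the corresponding bit of K1 is "1".
bits = [(k1[j // 8] >> (j % 8)) & 1 for j in range(256)]
sbr  = b"".join(crp_response(root, j) for j in range(256) if bits[j])
ct   = xof_stream_cipher(seed_a, sbr)          # encrypted subset SBR'

# CA: decrypt SBR' and recover K1 by matching the subset, in order,
# against the full set of 256 responses it can regenerate itself.
plain = xof_stream_cipher(seed_a, ct)
chunks = [plain[i:i+32] for i in range(0, len(plain), 32)]
it = iter(chunks)
nxt = next(it, None)
recovered_bits = []
for j in range(256):
    if nxt is not None and crp_response(root, j) == nxt:
        recovered_bits.append(1)
        nxt = next(it, None)
    else:
        recovered_bits.append(0)
assert recovered_bits == bits
```

The greedy in-order matching works because the kept responses appear in index order and distinct indices yield distinct responses with overwhelming probability.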

4. Experimental Validation

The proposed protocol was validated with an SRAM-based PUF token, provided by Castle Shield Holdings, LLC, Scottsville, VA, USA [30], serving as the root of trust (see Figure 8). The token was manufactured with an STM32F401RBT MCU and an IS61LV6416 SRAM in a 44-pin TSOP package. Each SRAM had 65,536 words of 16 bits, giving access to 1,048,576 unique bits; 256 of them were randomly selected during each key generation to obtain seed a_1.
In this implementation, Crystals-Dilithium 2 was selected, which has 1312-byte-long public keys and 2420-byte-long signatures [31]. In total, 20,000 verification cycles were run with the protocol, with a success rate of 100%. An upper bound of 45 s was set for each test cycle. An Intel(R) Core(TM) i9-9900K CPU @ 3.60 GHz processor was used for the test. The Ternary Addressable Public Key Infrastructure (TAPKI) protocol was an integral part of the verification of the root of trust [29]. The TAPKI server sent the challenges to the token in the form of a handshake.
Four different handshake sizes were used in the tests: 1024, 1280, 1536, and 1792 bits. We found that handshake size had little impact on overall protocol performance, so the reporting in this paper is based on a 1024-bit handshake (see Figure 9). Fragmentation, a technique that improves latency and security with the assistance of a masking scheme for Response-Based error Correction (RBC), was also integrated [32]. In RBC, an error correction process is applied to the hashed key, fixing errors up to a certain level to recover the correct key. The fragmentation scheme creates subkeys out of the full key; during key recovery, this exponentially reduces the search space for each subkey, eventually leading to decreasing latencies for the server. However, the fragmentation level must be bounded, as too much fragmentation has a negative impact on latencies for client devices. With 256-bit keys and a fragmentation level of 16, the client devices used nonces to generate sixteen 256-bit subkeys, hashed them, and transmitted the message digests to the server equipped with the RBC search engine. There was thus a trade-off: no fragmentation was prohibitive for the server, while excessive fragmentation slowed down the client devices. The distribution of errors from the token is shown in Figure 9 for the four handshake sizes and five fragmentation levels (1, 2, 4, 8, 16), making 20,000 verification cycles.
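The fragmentation idea can be sketched as follows. This is an illustrative reconstruction, not the paper's implementation: we simply split an N-bit key into K fragments and hash each fragment with a nonce, whereas the actual scheme pads each subkey to 256 bits; the small parameters keep the exhaustive search instant:

```python
# Client: split the key into K fragments and send only salted digests.
# Server: recover each fragment by trying at most 2^(N/K) candidates,
# instead of 2^N candidates for the whole key.
import hashlib
import secrets

def client_digests(key_bits, k_frag, nonces):
    n = len(key_bits) // k_frag
    frags = [key_bits[i * n:(i + 1) * n] for i in range(k_frag)]
    return [hashlib.sha256(nonces[i] + frags[i].encode()).digest()
            for i in range(k_frag)]

def server_recover(digests, frag_bits, nonces):
    recovered = []
    for i, d in enumerate(digests):
        for cand in range(2 ** frag_bits):          # at most 2^(N/K) trials
            bits = format(cand, f"0{frag_bits}b")
            if hashlib.sha256(nonces[i] + bits.encode()).digest() == d:
                recovered.append(bits)
                break
    return "".join(recovered)

key = "".join(str(secrets.randbits(1)) for _ in range(32))  # toy N = 32
nonces = [secrets.token_bytes(16) for _ in range(4)]        # K = 4
digests = client_digests(key, 4, nonces)
```

With K = 4 fragments of 8 bits each, the server performs at most 4 × 256 hash trials rather than 2^32.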
For fragmentation level 1, no subkeys were created, and only up to three errors could be processed: the time to find the correct key rose exponentially, and the 45 s threshold set up for the test eliminated the cases with more than three errors. Fragmentation levels 2, 4, 8, and 16 showed a similar pattern of error distribution: 0, 1, or 2 errors accounted for more than 72% of the cases, 3 or 4 errors for more than 24%, and 5, 6, 7, or 8 errors were less frequent. Figure 10 shows the impact of fragmentation on the overall latency of the RBC key recovery scheme. Without fragmentation, the worst-case time complexity of the RBC error correction scheme is 2^N, where N is the length of the key, since all possible combinations may have to be tried. With fragmentation, the worst-case complexity per subkey scales down to 2^(N/K), where K is the fragmentation level. Fragmentation level 16 was the optimal solution for the server; however, the burden on light client devices was a problem. From Figure 10, we can observe that the average latency difference between fragmentation levels 2 and 4 was 236.68 ms.
The difference between fragmentation levels 4 and 16 was 38.54 ms. Considering the security aspects of fragmentation, we can safely state that a fragmentation level of 4 is practical, as it does not compromise latencies.
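The worst-case search sizes behind these latency figures can be tabulated directly; a quick sketch for the 256-bit keys used in the tests:

```python
# Worst-case numbers of hash trials: 2^N candidates for an unfragmented
# N-bit key versus K * 2^(N/K) candidates for K fragments.
N = 256
trials = {K: K * 2 ** (N // K) for K in (1, 2, 4, 8, 16)}
for K, t in trials.items():
    print(f"fragmentation level {K:2d}: {t} worst-case trials")
```

The gap between levels 4 and 16 (about 10^19 versus about 10^6 trials) explains why level 16 is optimal for the server, while the fixed per-fragment hashing cost falls on the client.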
In Figure 11, the average latencies for the elements of the protocol are shown, including the Dilithium public key generation time and the server verification time. The overall server time scaled down as the RBC progressed, and we can clearly observe the impact of the RBC for different fragmentation levels on the total server execution time. The server-side latencies could be greatly reduced by parallelizing the RBC error correction scheme; in addition, in practical deployments, the server side would have a more powerful computation system for execution.

4.1. Biometry as a Multi-Factor with Johnson–Lindenstrauss

A public–private key pair for an asymmetric cryptosystem is generated from a random seed. If the entropy of the space from which this seed is chosen is not high enough, the cryptosystem becomes vulnerable to a brute-force search. Recall that in the new protocol discussed here, each user is equipped with a cryptographic table (CT) that can be generated from, for example, a Physical Unclonable Function (PUF), a biometric image, or simply a digital file. During an initial enrollment cycle that is performed securely, the users share their cryptographic tables with the CA. A string of bits related to identifying information is fed into the table, and a deterministic output is then used as the source of the secret key.
Until the present work, it has not been possible to use a biometric image for this function without facing some serious disadvantages. This is because the mapping must be deterministic, that is, the same input must always produce the same output. This can be accomplished with a biometric source if many images are combined and averaged into a template, which is then stored. This template can then be leveraged into a cryptographic table, but there are some serious security issues. First, if the information is stored anywhere, then a breach of the security of that storage space reveals sensitive biometric information. Second, if the image is lost or corrupted, at the current level of technology, the creation of a new template would not produce the same lookup table. Biometric information is simply too fluid and changeable for a second set of measurements to produce an identical lookup table or even a particularly close approximation to the original. However, if a solution could be found whereby a biometric source could be consistently mapped to the same lookup table, then a person’s biometric characteristics could be tied directly to the creation of an asymmetric key pair—a very desirable objective.
In the following, we will use the example of a face as a source of biometric information, but this is only one of a variety of different possibilities. A face has several characteristics that can be detected and measured very easily and at low computational cost. For example, the centers of the eyes and the tip of the nose can be identified and used to establish a coordinate system, and the relative positions of various points on the face, such as the corners of the eyes, cheekbones, etc., can be recorded. Approximately 400 such “landmarks” can be identified and assigned coordinates. At this point, a random subset of these landmarks can be chosen, and distances between landmarks and angles between lines connecting one point to two distinct points can be measured and recorded as a several-hundred-dimensional vector. The naive approach at this point would be to use this vector as a source of “deterministic” randomness and map it to a private key, from which an associated public key would be created. This would suffer from several defects and security flaws. First of all, there is sufficient randomness and variability in the process of taking a photo and performing these measurements that an attempt to reconstruct the same vector from a different photo would fail. A new vector would be created that would be, in terms of Euclidean distance, close to the original vector, but if this new vector were mapped to, for example, a second 256-bit sequence to be used for key creation, it would differ from the 256-bit sequence created by the first vector at enough points to make it computationally infeasible to reconstruct the original key. In addition, even though this vector of measurements is intuitively quite far from the original landmark distribution of the face, it is not at all clear that such a vector, if revealed, would not leak valuable biometric information.
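The landmark-to-vector step described above can be sketched as follows. The landmark names, coordinates, and function names here are invented for illustration; this is not the paper's implementation:

```python
# Turn facial landmark coordinates into a measurement vector of
# pairwise distances and angles (angle at vertex v between v->p and v->q).
import math

def measurements(landmarks, pairs, triples):
    vec = []
    for a, b in pairs:                      # Euclidean distances
        vec.append(math.dist(landmarks[a], landmarks[b]))
    for v, p, q in triples:                 # angles in radians
        ax, ay = (landmarks[p][0] - landmarks[v][0],
                  landmarks[p][1] - landmarks[v][1])
        bx, by = (landmarks[q][0] - landmarks[v][0],
                  landmarks[q][1] - landmarks[v][1])
        cos = (ax * bx + ay * by) / (math.hypot(ax, ay) * math.hypot(bx, by))
        vec.append(math.acos(max(-1.0, min(1.0, cos))))
    return vec
```

Two photos of the same face yield slightly different coordinates, so two calls to `measurements` produce nearby but not identical vectors, which is exactly the instability the rest of this section addresses.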
We are thus left with a seemingly intractable problem. Each individual face can be made to correspond to a collection of vectors in a high-dimensional space. Think of these as lying in a sphere of small radius. We hypothesize that distinct faces will map to the interiors of distinct, non-overlapping spheres and can thus be distinguished from each other. However, how can we associate individual spheres with unique 256 bit long strings and simultaneously establish that the inverse problem of mapping back from a sphere to the original face is computationally infeasible?
We have found an answer that relies on a relatively little-known feature of high-dimensional space. In low dimensions, say 2, 3, or 4, unit spheres and cubes have roughly the same volume. In dimension 1, a sphere of unit radius has a volume (i.e., length) of 2, as does a unit cube centered at the origin. In dimension 2, a unit sphere (circle) has an area of π = 3.14159…, while a cube (square) centered at the origin with corners (1, 1), (−1, 1), (−1, −1), (1, −1) has an area of 4. In 3-space, a unit sphere has a volume of 4π/3, while the corresponding cube has a volume of 8. In general, in even dimension n, the volume of a unit n-sphere is π^(n/2)/(n/2)!, which tends towards zero exponentially quickly with n. The volume of the cube, however, is 2^n, which grows exponentially large with n (when n is odd, the formula for spheres is similar, but slightly more complicated). Thus, from an intuitive point of view, if you look at all points in n-dimensional space whose coordinates have absolute value less than or equal to 1, there is exactly one cube, but room for exponentially many spheres. Continuing with this intuition, spheres have a lot of room to hide in high dimensions: the number of disjoint spheres of fixed radius that can be packed into a cube of fixed side length grows exponentially with the dimension. This is the insight upon which our work is based.
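The volume comparison can be checked numerically; a short sketch for even dimensions:

```python
# Unit n-ball volume pi^(n/2) / (n/2)! (even n) shrinks to zero,
# while the enclosing cube volume 2^n grows without bound.
import math

def ball_volume(n):
    """Volume of the unit ball in even dimension n."""
    return math.pi ** (n // 2) / math.factorial(n // 2)

for n in (2, 4, 10, 20):
    print(n, ball_volume(n), 2 ** n)
```

Already at n = 20 the unit ball occupies well under one part in ten million of its bounding cube.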
What we do is first map the original measurement vector via a continuous function to a high- (that is, several-thousand-) dimensional space. The mapping to a high dimension has the effect of keeping points that are very close together, that is, different vectors created from different photos of the same user, still close together in the high-dimensional target space, while separating vectors created by different users. We then refer to a result dating from 1984: W. B. Johnson and J. Lindenstrauss, Extensions of Lipschitz mappings into a Hilbert space, Contemp. Math. 26 (1984), 189–206 [33]. In this paper, they demonstrate that there exist mappings from high-dimensional spaces to low-dimensional spaces (with a logarithmic relation between the high and low dimensions) that, given any finite set of points, preserve the distances between the points in the low-dimensional range space. They did not provide a way of constructing such functions, but, in the paper “Johnson–Lindenstrauss lemma for circulant matrices”, Hinrichs and Vybíral provided extremely efficient methods for creating such functions from random seeds [34]. Essentially, to project from dimension n down to dimension k, they constructed a family of close-to-orthogonal k by n matrices that send, via multiplication, n-dimensional vectors to k-dimensional vectors, while very closely preserving the distances between pairs of vectors.
We take advantage of two distinct features of this mapping. First, multiplication of an n-dimensional vector by such a matrix can be viewed as taking a k-dimensional “slice” of the vector. As the space of possible matrices is combinatorially enormous, the number of inverse images in n-space of a k-dimensional slice is exponentially large, and we may safely view such a slice as leaking no useful information about the original biometrically created vector. Second, because of the distance-preserving property of the mapping, applying the matrix to two different n-dimensional images created by the same biometric input will result in two close k-dimensional outputs, while applying the matrix to a vector created by a different biometric input will result in a very different output. We will now provide details of how a lookup table can be created. Suppose we want to create an M by M table, with M a large integer (128, 256, 4096, …).
  • Begin with a vector v created from biometric measurements that have been mapped to the n-dimensional space.
  • Randomly generate M seeds S 1 ,…, S M .
  • Map these seeds Si to M distinct k by n close-to-orthogonal matrices Ci. Call these “challenges”.
  • Compute M distinct responses Ri = Ci · v. These are k-dimensional vectors.
  • The collection (Ci, Ri), i = 1, …, M, is the lookup table.
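The enrollment steps above can be sketched as follows. This follows the Johnson–Lindenstrauss construction only in spirit: the actual method in [34] uses circulant matrices, whereas we substitute seeded Gaussian matrices (close to orthogonal in expectation), and all function names are ours:

```python
# Build the (challenge, response) lookup table from a biometric vector v.
# Each seed deterministically expands into a k-by-n projection matrix,
# so the same seed always regenerates the same challenge.
import math
import random

def challenge_matrix(seed, k, n):
    rng = random.Random(seed)
    return [[rng.gauss(0, 1) / math.sqrt(k) for _ in range(n)]
            for _ in range(k)]

def response(matrix, v):
    """Matrix-vector product C . v, a k-dimensional response."""
    return [sum(row[j] * v[j] for j in range(len(v))) for row in matrix]

def build_table(seeds, v, k):
    return [(s, response(challenge_matrix(s, k, len(v)), v)) for s in seeds]
```

Because the matrices are regenerated from the seeds, only the seeds and responses need to be stored, never the biometric vector v itself.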
Now, choose a random subset of half of the integers from 1 to M: {j1, j2, …, jM/2}. This random subset can be used to map a biometric input to a unique and deterministic sequence of M bits as follows. We assume that the user of the lookup table knows the subset of landmarks to use, which measurements to take, and how to map the vector of measurements to the n-dimensional space.
  • Receive a photo from the user (along with proof of liveness, etc.).
  • Take measurements and create an n-dimensional vector v′.
  • For each k between 1 and M/2, pick the challenge matrix Cjk and compute R′jk = Cjk · v′.
  • Compare R′jk with Rjk from the lookup table.
  • If the distance between the vectors is below a threshold, place a 1 in position jk and continue.
  • If the distance is above the threshold, output: INVALID USER.
  • When all M/2 responses have been compared and matched, the vector with 1s in the positions jk is validated as the secret key corresponding to the user.
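A self-contained sketch of this verification loop is given below. A toy scalar challenge–response (a dot product with a seeded pseudo-random vector) stands in for the matrix products, and the names are ours, not the paper's:

```python
# Recompute each selected response from a fresh measurement vector and
# compare it against the stored one: close responses set a 1 bit,
# any distant response aborts with INVALID USER (None).
import random

def resp(seed, v):
    """Toy stand-in for C_jk . v: dot with a seeded random vector."""
    rng = random.Random(seed)
    return sum(rng.gauss(0, 1) * x for x in v)

def verify(table, subset, v_new, threshold):
    bits = [0] * len(table)
    for j in subset:
        seed, stored = table[j]
        if abs(resp(seed, v_new) - stored) > threshold:
            return None                      # INVALID USER
        bits[j] = 1
    return bits
```

A vector re-measured from the same user lands within the threshold at every selected index and reproduces the same bit string; a distant vector fails at the first comparison.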
If this is performed in a signature verification setting, where knowledge of the secret key corresponding to a given public key is proven rather than the key itself being revealed, then the subset would be randomly generated by the user, signed with the corresponding private key, and given to the CA along with a photo, from which the CA would regenerate the same subset that was signed.

4.2. Security and Reliability

The security of the protocol heavily depends on the token developed by Castle Shield. However, this hardware is susceptible to side-channel analysis attacks. Due to the multifactor nature of the protocol, even if the token is compromised, security still relies on the user’s password. Without the password, the client cannot extract the correct challenge from the handshake, resulting in incorrect responses from the PUF.
Even if both the token and password are compromised, there is an additional layer of security. As illustrated in Figure 7, we can incorporate the user’s facial biometric information to encapsulate the key generated by the token. Without the actual user and their authentic biometric data, breaching the protocol becomes impractical.
In summary, the proposed protocol relies on multiple layers of security. We utilized the NIST-approved and recommended post-quantum digital signature scheme, Crystals-Dilithium, which is set to replace classical digital signature schemes such as RSA. Therefore, we assume that Crystals-Dilithium is secure.

5. Discussion and Future Work

We presented a CRP-based mechanism enabling the generation and distribution of cryptographic public–private key pairs in a public network in which client devices never share private keys. However, the CA can still verify the legitimacy of public keys, thereby protecting the privacy of clients. We presented an efficient way to implement the protocols with standardized Crystals-Dilithium. A multi-factor authentication scheme was suggested to add a KEM protocol to equip the communicating parties with ephemeral keys for symmetrical cryptography. Examples of practical applications include the following:
  • Standard Credit Cards and SIM Cards: During the personalization process, a Crypto-Table can be seamlessly integrated into the non-volatile memory of the credit card. This approach requires no additional changes to the hardware infrastructure; however, an update to the embedded software is necessary.
  • Enhanced Credit Cards and SIM Cards: For modified cards, the microcontroller can be equipped with a Physical Unclonable Function (PUF), for instance, SRAM- or RRAM-based PUF technology. During personalization, the Crypto-Table is generated from the PUF and transmitted to the Certificate Authority (CA), where it is integrated into the non-volatile memory of the card.
  • Cryptocurrencies and Tokenless Blockchains: An effective implementation option for the protocol is to generate Crypto-Tables from encrypted files. In this model, the trusted party transmits the Crypto-Table directly to clients instead of distributing key pairs. In this way, the user does not require an HSM or token for key generation.
  • On-Demand Utilization: We propose utilizing both tokens and biometric authentication, alongside the Key Encapsulation Mechanism (KEM) protocol, to enhance security and user experience. This can be directly implemented in secure retailing, online transactions, authentications of users utilizing template-less biometry, verifications of users, and so on.
As part of future work, we are preparing a step-by-step optimization of the protocol:
  • Use of the Johnson–Lindenstrauss protocol as the CRP mechanism for biometry. Such a method has the potential to enhance the one-wayness and entropy of the mechanism.
  • Formulation of Design of Experiment (DOE) for testing purposes to enhance the performance of the overall protocol.
  • Optimization of the correcting schemes to recover the same responses from the Crypto-Tables and roots of trust.
  • Development of real-case applications with commercial partners.

6. Conclusions

In this research, we proposed a novel Public Key Infrastructure (PKI) in which users do not need to rely on any third party, Certificate Authority, or Hardware Security Module (HSM) to generate their public–private key pairs for digital signature schemes. Clients generate their own private keys using the NIST-standardized Crystals-Dilithium algorithm. The generation of the seed for the public key depends on the client’s token, which has an SRAM PUF embedded in it.
Additionally, we incorporated a biometric scheme as an extra layer of security for encapsulating the seed derived from the token. In the experimental section, we demonstrated that our initial implementation is feasible for practical applications.

Author Contributions

Conceptualization, B.C.; methodology, B.C.; software, M.A.; validation, M.A. and B.C.; formal analysis, M.A.; investigation B.C. and M.A.; resources, M.A. and B.C.; data curation, M.A.; writing—original draft preparation, M.A., B.C. and J.H.; writing—review and editing, M.A. and B.C.; visualization, B.C. and M.A.; supervision, B.C.; project administration, B.C. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data presented in this study are available on request from the corresponding author. The data are not publicly available because they are part of ongoing research.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Ding, J.; Chen, M.-S.; Petzoldt, A.; Schmidt, D.; Yang, B.-Y. Rainbow; NIST PQC project round 2, documentation. In Proceedings of the 2nd NIST Standardization Conference for Post-Quantum Cryptosystems, Santa Barbara, CA, USA, 22 September 2019. [Google Scholar]
  2. NIST Status Report of Phase 3 of PQC Program, NISTIR.8309. Available online: https://www.nist.gov/publications/status-report-second-round-nist-post-quantum-cryptography-standardization-process (accessed on 22 July 2020).
  3. Ducas, L.; Kiltz, E.; Lepoint, T.; Lyubashevsky, V.; Schwabe, P.; Seiler, G.; Stehlé, D. CRYSTALS-Dilithium Algorithm Specifications and Supporting Documentation. Part of the Round 3 Submission Package to NIST. Available online: https://pq-crystals.org/dilithium (accessed on 19 February 2021).
  4. Fouque, P.-A.; Hoffstein, J.; Kirchner, P.; Lyubashevsky, V.; Pornin, T.; Prest, T.; Ricosset, T.; Seiler, G.; Whyte, W.; Zhang, Z. Falcon: Fast-Fourier Lattice-Based Compact Signatures over NTRU; NIST PQC Project Round 2, Documentation. Available online: https://falcon-sign.info/falcon.pdf (accessed on 1 October 2020).
  5. Peikert, C.; Pepin, Z. Algebraically Structured LWE Revisited; Springer: Berlin/Heidelberg, Germany, 2019; pp. 1–23. [Google Scholar]
  6. IEEE Standard 1363.1-2008; Specification for Public Key Cryptographic Techniques Based on Hard Problems over Lattices. IEEE: Piscataway, NJ, USA, 2009.
  7. Regev, O. New lattice-based cryptographic constructions. J. ACM 2004, 51, 899–942. [Google Scholar] [CrossRef]
  8. Casanova, A.; Faugere, J.-C.; Macario-Rat, G.; Patarin, J.; Perret, L.; Ryckeghem, J. GeMSS: A Great Multivariate Short Signature; NIST PQC Project Round 2, Documentation. Available online: https://csrc.nist.gov/Projects/post-quantum-cryptography/round-2-submissions (accessed on 3 January 2017).
  9. McEliece, R.J. A Public-Key Cryptosystem Based on Algebraic Coding Theory; California Institute of Technology: Pasadena, CA, USA, 1978; pp. 114–116. [Google Scholar]
  10. Biswas, B.; Sendrier, N. McEliece Cryptosystem Implementation: Theory and Practice. In Post-Quantum Cryptography; PQCrypto, Lecture Notes in Computer Science; Buchmann, J., Ding, J., Eds.; Springer: Berlin/Heidelberg, Germany, 2008; Volume 5299. [Google Scholar]
  11. Buchmann, J.A.; Karatsiolis, E.; Wiesmaier, A. Introduction to Public Key Infrastructures; Springer: Berlin/Heidelberg, Germany, 2013. [Google Scholar]
  12. Mallik, A. Man-in-the-Middle-Attack: Understanding in Simple Words. Cyberspace J. Pendidik. Teknol. Inf. 2019, 2, 109. [Google Scholar] [CrossRef]
  13. Cambou, B.; Ghanaimiandoab, D.; Gowanlock, M.; Lee, K.; Nelson, S.; Philabaum, C.; Stenberg, A.; Wright, J.; Yildiz, B. PUF-Based Key Generation for Lattice and Code Cryptography. 2024. Available online: https://nau.edu/research/innovations/available-technologies/cybersecurity/puf-based-key-generation-for-lattice-and-code-cryptography/ (accessed on 1 September 2024).
  14. Maletsky, K.D.; Seymour, M.J.; Garner, B.P. Generating Keys Using Secure Hardware. 2015. Available online: https://patents.google.com/patent/US9118467B2/en?inventor=Maletsky&oq=Maletsky+ (accessed on 3 September 2024).
  15. Barthelemy, S.; Thieblemont, J. Method for Two Step Digital Signature. U.S. Patent No. 8,589,693, 22 October 2008. [Google Scholar]
  16. Popp, N. Token Provisioning 2004. Available online: https://patents.google.com/patent/US8015599B2/en?q=(Token+Provisioning)&inventor=popp&oq=Token+Provisioning++popp (accessed on 3 September 2024).
  17. Merrill, T.A. Methods and Systems for Digital Authentication Using Digitally Signed Images. U.S. Patent No. 8,122,255, 22 January 2007. [Google Scholar]
  18. Nix, J.A. Public Key Exchange with Authenticated ECDHE and Security against Quantum Computers. U.S. Patent No. 11,777,719, 3 October 2023. [Google Scholar]
  19. Ekberg, J.-E.; Kostiainen, K.; Laitinen, P.; Aarni, V.; Sainio, M.; Knorring, N.V.; Kolesnikov, D.; Lahtiranta, A. Method and Apparatus for Authenticating a Mobile Device. U.S. Patent No. 8,621,203, 31 December 2013. [Google Scholar]
  20. Hwang, J.-J. User Authentication Based on Asymmetric Cryptography Utilizing RSA with Personalized Secret. U.S. Patent 7,958,362, 11 October 2005. [Google Scholar]
  21. Maximov, A.; Hell, M.; Smeets, B.B. Methods of Proving Validity and Determining Validity, Electronic Device, Server and Computer Programs. U.S. Patent No. 10,511,440, 20 February 2015. [Google Scholar]
  22. Gentry, C.B. Certificate-Based Encryption and Public Key Infrastructure. 2002. Available online: https://patents.google.com/patent/US8074073B2/en?q=(Gentry+certificate)&oq=Gentry+certificate (accessed on 29 August 2024).
  23. Gantman, A.; Rose, G.G.; Noerenberg, J.W., II; Hawkes, P.M. Small Public-Key Based Digital Signatures for Authentication. Patent No. US11/817,146, 25 February 2005. [Google Scholar]
  24. Herder, C.; Yu, M.-D.; Koushanfar, F.; Devadas, S. Physical Unclonable Functions and Applications: A Tutorial. Proc. IEEE 2014, 102, 1126–1141. [Google Scholar] [CrossRef]
  25. Schrijen, G.-J.; van der Leest, V. Comparative Analysis of SRAM Memories Used as PUF Primitives. In Proceedings of the 2012 Design, Automation & Test in Europe Conference & Exhibition (DATE), Dresden, Germany, 12–16 March 2012. [Google Scholar]
  26. Liu, R.; Wu, H.; Pang, Y.; Qian, H.; Yu, S. A Highly Reliable and Tamper-Resistant RRAM PUF: Design and Experimental Validation. In Proceedings of the 2016 IEEE International Symposium on Hardware Oriented Security and Trust (HOST), McLean, VA, USA, 3–5 May 2016; Volume 100, pp. 13–18. [Google Scholar]
  27. Aguilar Rios, M.; Alam, M.; Cambou, B. MRAM Devices to Design Ternary Addressable Physically Unclonable Functions. Electronics 2023, 12, 3308. [Google Scholar] [CrossRef]
  28. Garrett, M.L. Experimental Performance Analysis of Physically Unclonable Session Key Protocol for Zero-Trust Environments. Master’s Thesis, Northern Arizona University, Flagstaff, AZ, USA, 2022. [Google Scholar]
  29. Cambou, B.; Telesca, D. Ternary Computing to Strengthen Cybersecurity. In Advances in Intelligent Systems and Computing; Springer International Publishing: Cham, Switzerland, 2018; pp. 898–919. [Google Scholar]
  30. Castle Shield Holdings, LLC. Available online: https://castle-shield.com/ (accessed on 29 August 2024).
  31. Castle Shield Holdings, LLC. Announces a Research Development Partnership with Northern Arizona University. Available online: https://castle-shield.com/castle-shield-holdings-llc-announces-a-research-development-partnership-with-northern-arizona-university/ (accessed on 29 August 2024).
  32. Cambou, B.; Mohammadi, M.; Philabaum, C.; Booher, D. Statistical Analysis to Optimize the Generation of Cryptographic Keys from Physical Unclonable Functions. In Advances in Intelligent Systems and Computing; Springer International Publishing: Cham, Switzerland, 2020; pp. 302–321. [Google Scholar]
  33. Johnson, W.B.; Lindenstrauss, J. Extensions of Lipschitz Mappings into a Hilbert Space. Contemp. Math. 1984, 189–206. [Google Scholar] [CrossRef]
  34. Hinrichs, A.; Vybíral, J. Johnson-Lindenstrauss Lemma for Circulant Matrices. Random Struct. Algorithms 2011, 39, 391–398. [Google Scholar] [CrossRef]
Figure 1. Key generation for PKI: (a) user-driven; (b) third party-driven.
Figure 2. Key generation for Public Key Infrastructure as proposed in this paper.
Figure 3. Root of trust with challenge–response–pair mechanism.
Figure 4. Public key distribution and verification for DSA (generic).
Figure 5. Public key distribution and verification for DSA with LWE cryptography.
Figure 6. Simplified diagram of a key distribution scheme with Dilithium cryptography.
Figure 7. Implementation with multifactor authentication and root of trust with biometry.
Figure 8. SRAM-based PUF token developed by Castle Shield, LLC.
Figure 9. Error distribution for different fragmentation levels.
Figure 10. Latency graph for different fragmentation levels.
Figure 11. Average latencies for important factors of the protocol.

Share and Cite

MDPI and ACS Style

Alam, M.; Hoffstein, J.; Cambou, B. Privately Generated Key Pairs for Post Quantum Cryptography in a Distributed Network. Appl. Sci. 2024, 14, 8863. https://doi.org/10.3390/app14198863
