Article

A CCA-PKE Secure-Cryptosystem Resilient to Randomness Reset and Secret-Key Leakage

Department of Computer Science, University of the Philippines Diliman, Quezon City 1101, Philippines
*
Author to whom correspondence should be addressed.
Cryptography 2022, 6(1), 2; https://doi.org/10.3390/cryptography6010002
Submission received: 6 December 2021 / Revised: 24 December 2021 / Accepted: 27 December 2021 / Published: 4 January 2022

Abstract

In recent years, several new notions of security have begun receiving consideration for public-key cryptosystems, beyond the standard of security against adaptive chosen ciphertext attack (CCA2). Among these are security against randomness reset attacks, in which the randomness used in encryption is forcibly set to some previous value, and against constant secret-key leakage attacks, wherein a constant factor of the secret key's bits is leaked. In terms of formal security definitions, cast as attack games between a challenger and an adversary, a joint combination of these attacks means that the adversary has access to additional encryption queries under a randomness of his own choosing, along with secret-key leakage queries. This implies that both the encryption and decryption processes of a cryptosystem are tampered with under this security notion. In this paper, we address the problem of a joint combination of randomness reset and secret-key leakage attacks through two cryptosystems that incorporate hash proof system and randomness extractor primitives. The first cryptosystem relies on the random oracle model and is secure against a class of adversaries called non-reversing adversaries. We remove the random oracle assumption and the non-reversing adversary requirement in our second cryptosystem, which is a standard-model scheme that relies on a proposed primitive called $L_M$ lossy functions. These functions allow up to $M$ lossy branches in the collection to substantially lose information, allowing the cryptosystem to exploit this loss of information across several encryption and challenge queries. For each cryptosystem, we present detailed security proofs using the game-hopping procedure. In addition, we present a concrete instantiation of $L_M$ lossy functions at the end of the paper, which relies on the DDH assumption.

1. Introduction

Adaptive chosen ciphertext attack (CCA2) secure cryptosystems. Since the invention of the Diffie–Hellman key exchange and the RSA primitive, public-key cryptography has become one of the most well-studied areas in cryptography research [1]. Currently, the security notion required of any public-key cryptosystem is security against adaptive chosen ciphertext attacks [2,3], or CCA2 security. Security against adaptive chosen ciphertext attacks guarantees that ciphertexts are non-malleable, which implies that ciphertexts cannot be modified in transit by any efficient adversarial algorithm. Initially, encryption schemes provided CCA2 security under the random oracle model [4]. Random oracle-based models are heuristic in approach and are randomness-recovering, i.e., they allow a scheme to recover its randomness during an encryption. However, they rely on very strong assumptions, for example, that certain functions, e.g., hash functions, are indistinguishable from truly random functions. A more practical CCA2-secure public-key encryption scheme is presented in [3], which relies on the decisional Diffie–Hellman (DDH) assumption. The scheme of [3] uses hash proof systems, which involve projective hash functions that perform function delegation through auxiliary information. Without this auxiliary information, the function's behaviour is close to uniform and is hard to distinguish from random. The design of several practical public-key encryption schemes essentially uses the hash proof system of [5] or some variant of it [6,7].
Following the hash proof system of [3], several other CCA2 secure cryptosystems have been proposed. Among these is the CCA2 secure cryptosystem of [8], which relies on a lossy function primitive. A lossy function primitive is a collection of functions such that some functions in the collection, i.e., the lossy functions, substantially lose information about the input. It is computationally difficult, however, to determine whether a given function in the collection is lossy or not. By exploiting the loss of information in such functions, along with the computational difficulty of determining the type of a function, several CCA2 secure cryptosystems can be developed under the standard model, thereby presenting an alternative to the practical CCA2 cryptosystem of [3].
Yet, while more practical and efficient CCA2 cryptosystems continue to be developed, a number of recent cryptography papers have called into question the security guarantees provided by CCA2 security [9,10,11,12,13,14], due to newer types of attacks. Among the common categories of these newer attacks are (i) randomness attacks and (ii) secret-key leakage attacks. Both of these attack categories have been shown to be strong enough to trivially break CCA2 security. These attacks are briefly described as follows.
Randomness Attacks. The first category, i.e., randomness attacks, considers the case where the randomness used in cryptosystems fails to be truly random. These attacks tamper with the encryption process of a cryptosystem. Randomness failures can be due to faulty pseudorandom generator design and implementation [15], or due to simple attacks, such as virtual machine resets [16]. In a virtual machine reset, called a randomness reset attack, a computer is forced to restart from some previous state stored in memory, and random number variables are reset to some previous value. This applies especially to virtual machine systems that use virtual machine monitors (or hypervisors) to manage several operating systems. Given the increased use of cloud computing, such as with Amazon's servers, several systems have become reliant on virtual machines. A feature of a virtual machine monitor is the taking of snapshots [16] of the system's state, where the snapshot includes all items of the system in memory, at a certain point in time, for backup and fault tolerance. Included in this snapshot are the random numbers used by the operating system for its encryptions. A hacker, however, may force a virtual machine to be reset to a prior snapshot and re-use the randomness therefrom. For instance, a hacker may perform a denial-of-service attack against a virtual machine, whereupon the virtual machine is forced to reset to some previous state. In particular, [17] points out that snapshots of virtual machines may impair security due to the reuse of security-critical state. Ref. [18] exhibits virtual machine reset vulnerabilities in TLS clients and servers, wherein the attacker takes advantage of snapshots to expose a server's DSA signing key. Given these actual examples, ref. [16] considered the effect of virtual machine resets on existing CCA2 secure cryptosystems. The results were not positive, as [16] showed a scenario wherein an adversary can trivially break CCA2 security by exploiting such vulnerabilities.
We demonstrate such vulnerabilities on a simpler cryptosystem in Section 3, where we present a concrete example of a randomness reset attack in the context of the ElGamal cryptosystem, along with the effect of a randomness reset attack on the formal definition of CCA2 security, as done in [16]. Briefly, CCA2 security is formally defined in terms of an attack game between a challenger and an adversary [19], where the adversary may perform challenge queries and decryption queries. The adversary's task is to correctly guess the underlying message of a challenge ciphertext, given that the challenge ciphertext cannot serve as input to a decryption query. An attack game with a randomness reset attack incorporates additional encryption queries, in which random numbers can be set by the adversary to some previous value. These present difficulties to some existing cryptosystems since, unlike challenge ciphertexts, ciphertexts from encryption queries may validly serve as input for decryption queries. In fact, randomness reset attacks are so strong that [11,16,20] have been forced to rule out situations wherein adversaries can perform arbitrary queries. Instead, adversaries are assumed to satisfy an equality-pattern-respecting constraint. The cryptosystem of [16] provides a generic transformation that renders a CCA2 public-key cryptosystem secure against randomness reset attacks. The transformation involves feeding a random number and an associated plaintext message to a pseudorandom function. If the joint entropy of the random number and the plaintext message is high enough, the security properties of the pseudorandom function can fix any faulty randomness in the cryptosystem. In Section 4, we present a list of schemes that consider various other types of randomness attacks.
Secret-Key Leakage Attacks. The second category, i.e., secret-key leakage attacks, considers the case where an adversary learns bits of the secret key [9]. These attacks tamper with the decryption process of a cryptosystem. Leakage of secret keys may be due to devious means such as side-channel attacks. For instance, [21] reports that practical implementations of cryptosystems in software are often vulnerable to side-channel attacks; for example, the power traces of 8000 encryptions are sufficient to extract the secret key of an ASIC AES implementation, which is substantially faster than a brute-force search for the secret key. It follows that, given enough bits of the secret key, a simple exhaustive search over the set of candidate keys can break any cryptosystem's security. A formal definition of this attack was first considered in [9]. Several articles have provided cryptosystems that are provably secure against secret-key leakage attacks, on top of CCA2 security [12,13]. In particular, the scheme of [13] provides a cryptosystem that is secure against a constant factor of secret-key bits leaked to the adversary, where the factor can be as high as $1/2 - o(1)$ of the secret key's bits. The scheme of [13] is composed of an ensemble of various cryptographic primitives, such as hash proof systems [5], lossy functions [8], and randomness extractors [22,23]. In particular, ref. [13] proposed the one-time lossy filter, which is a special type of lossy function that does not implement a trapdoor. Unlike the cryptosystem of [3], the scheme of [13] is randomness-recovering and can tolerate a higher degree of secret-key leakage. The paper of [12] showed that the cryptosystem of [13] is also secure against arbitrary functions of secret-key leakage. In Section 3, we illustrate how secret-key leakage attacks affect the formal definition of CCA2 security (expressed as an attack game between a challenger and an adversary). Briefly, secret-key leakage attacks provide additional leakage queries for the adversary, through which it learns a constant factor of the bits of the secret key. In Section 4, we present a list of various other types of secret-key leakage attacks along with the primitives that they use.
Our contributions. As noted in [11], given these new types of attacks, an interesting problem is the construction of a public-key cryptosystem that is resistant to both randomness attacks and secret-key leakage attacks. Attacks that jointly involve both types effectively tamper with both the encryption and decryption processes of a cryptosystem, and the cryptosystem has to deal with attacks from both sides. On the one hand, in a randomness attack, the randomness involved in encryption is tampered with and set to some value dictated by the adversary. On the other hand, in a secret-key leakage attack, the adversary learns information about the secret key involved in decryption. In terms of attack games between a challenger and an adversary, this implies that the adversary has access to additional encryption queries and secret-key leakage queries, aside from the usual decryption and challenge queries in a CCA2 attack game. To address these challenges, we propose two cryptosystems; the first relies on the random oracle model, and the second is a standard-model scheme that relies on a proposed primitive called $L_M$ lossy functions. A collection of $L_M$ lossy functions provides multiple lossy branches and is simple to construct from existing ABO lossy functions [8]. Having multiple lossy branches is crucial for our cryptosystem, given that the adversary may make multiple encryption and challenge queries, and, unlike challenge ciphertexts, ciphertexts from encryption queries can validly serve as input for decryption queries. By having several lossy branches, the cryptosystem is able to exploit the loss of information given by a lossy branch even under multiple encryption and challenge queries. We can say that the $L_M$ lossy function forms the core primitive of our second cryptosystem since, without this primitive, the hash proof system would be insufficient for security (at least in the context of our constructions). The presentation of our cryptosystems follows [2,10], beginning with a random oracle model followed by a standard model. This is because, while the random oracle model of [4] is useful for simplifying security proofs, it relies on the strong assumption that some hash functions are truly random, which may not hold in practice [2]. For this reason, standard models usually follow initial random oracle models, albeit with some added complexity in their schemes. Both of our proposed cryptosystems apply several primitives, such as hash proof systems, pseudorandom functions, and randomness extractors.
To put our contributions into context, the problem mentioned in [11] considers general classes of related-randomness and related secret-key leakage attacks. For this paper, however, we approach the problem under the more limited class of randomness reset attacks [16], in which random numbers are reset to previous values, and under constant-bit secret-key leakage attacks [13], in which a constant number of bits of the secret key are leaked. At the end of the paper, we present concrete instances of $L_M$ collections that rely on the decisional Diffie–Hellman assumption and on ElGamal matrix encryptions. We present security proofs for our proposed cryptosystems using the well-known game-hopping proof technique, as described in [2,19].

2. Preliminaries

2.1. Notations

Given the set of natural numbers $\mathbb{N}$, let $[a]$ denote the set $\{1, 2, \ldots, a\}$ for any $a \in \mathbb{N}$. Let $\kappa \in \mathbb{N}$ denote a security parameter, following standard cryptography literature [2]. A function $f(\kappa)$ is negligible in $\kappa$ if $f(\kappa) = o(\kappa^{-c})$ for every fixed constant $c$. A function $f(\kappa)$ is superpolynomial in $\kappa$ if $1/f(\kappa)$ is negligible. Throughout the paper, the notation $x \leftarrow X$ refers to $x$ being randomly drawn from the probability distribution of a random variable $X$. Let $\mathcal{A}$ be any probabilistic polynomial-time algorithm. The advantage of $\mathcal{A}$ is defined to be its capacity to distinguish between the probability distributions of two collections of random variables. For instance, let $X = \{X_\kappa\}_{\kappa \in \mathbb{N}}$ and $Y = \{Y_\kappa\}_{\kappa \in \mathbb{N}}$ be two collections of random variables indexed by $\kappa$. The advantage of $\mathcal{A}$, in this instance, is $|\Pr(\mathcal{A}(1^\kappa, x) = 1) - \Pr(\mathcal{A}(1^\kappa, y) = 1)|$ for $x \leftarrow X_\kappa$ and $y \leftarrow Y_\kappa$. Two collections of random variables $X$ and $Y$ are computationally indistinguishable if the advantage of any polynomial-time algorithm is negligible in $\kappa$. The statistical distance between two random variables $X$ and $Y$ with the same domain $D$ is denoted as $\Delta(X, Y) = (1/2)\sum_{z \in D}|\Pr(X = z) - \Pr(Y = z)|$ [22]. The min-entropy of $X$ is denoted as $\Gamma(X) = -\log(\max_z \Pr(X = z))$. If $X$ is conditioned on $Y$, the average min-entropy of $X$ conditioned on $Y$ is $\tilde{\Gamma}(X \mid Y) = -\log(\mathbb{E}_{y \leftarrow Y}\, 2^{-\Gamma(X \mid Y = y)})$.
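For concreteness, the following small Python sketch (not part of the paper; the distributions are our own toy examples) evaluates the statistical distance, min-entropy, and average min-entropy defined above on explicitly given finite distributions:

```python
# Illustrative only: numerically checking the statistical distance, min-entropy, and
# average min-entropy definitions above on small, explicitly given distributions.
from math import log2

def statistical_distance(p, q):
    """Delta(X, Y) = (1/2) * sum_z |Pr(X = z) - Pr(Y = z)| over a common domain."""
    domain = set(p) | set(q)
    return 0.5 * sum(abs(p.get(z, 0.0) - q.get(z, 0.0)) for z in domain)

def min_entropy(p):
    """Gamma(X) = -log2(max_z Pr(X = z))."""
    return -log2(max(p.values()))

def avg_min_entropy(joint):
    """Gamma~(X|Y) = -log2( E_y [ max_x Pr(X = x | Y = y) ] ) for a dict {(x, y): prob}."""
    p_y = {}
    for (_, y), pr in joint.items():
        p_y[y] = p_y.get(y, 0.0) + pr
    expectation = sum(
        pry * max(pr / pry for (x, yy), pr in joint.items() if yy == y)
        for y, pry in p_y.items()
    )
    return -log2(expectation)

uniform4 = {z: 0.25 for z in range(4)}
biased4 = {0: 0.55, 1: 0.15, 2: 0.15, 3: 0.15}
print(statistical_distance(uniform4, biased4))       # 0.3
print(min_entropy(uniform4), min_entropy(biased4))   # 2.0 and about 0.86
joint = {(x, y): 1 / 8 for x in range(4) for y in range(2)}   # X uniform, independent of Y
print(avg_min_entropy(joint))                        # 2.0: independent conditioning costs nothing
```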

2.2. Hashing and Randomness Extractors

Given the security parameter $\kappa \in \mathbb{N}$, let $l(\kappa)$ and $l'(\kappa)$ be values of polynomials in $\kappa$. A hash function maps inputs of length $l(\kappa)$ to outputs of length $l'(\kappa)$, where $l'(\kappa) < l(\kappa)$. A family of hash functions $\mathcal{H} = \{h_i : X \to Y\}_{i \in \mathbb{N}}$ with domain $X$ and range $Y$ is pairwise independent if, for every distinct pair $x, x' \in X$ and every $y, y' \in Y$, the probability that $h_i(x) = y$ and $h_i(x') = y'$ is equal to $1/|Y|^2$ for a randomly drawn $h_i \in \mathcal{H}$. On the other hand, if, for every distinct pair $x, x' \in X$, we have $\Pr_{h_i \leftarrow \mathcal{H}}(h_i(x) = h_i(x')) = 1/|Y|$, the family $\mathcal{H}$ is a universal family of hash functions, which is a strictly weaker property than pairwise independence [13]. A family of hash functions is collision resistant if no polynomial-time algorithm can compute a distinct pair $x, x' \in X$ such that $h_i(x) = h_i(x')$ for a given $h_i \in \mathcal{H}$. The following useful result regarding average min-entropy will be used in several security proofs.
Lemma 1
([24]). Given the random variables $X$, $Y$, and $Z$, suppose that $Y$ has $2^r$ possible values; then, $\Gamma(X \mid Y, Z) \ge \Gamma(X \mid Z) - r$.
Definition 1.
Randomness Extractor. Let $X$ and $Z$ be random variables such that $X \in \{0,1\}^a$ and $Z \in \{0,1\}^b$ with $a, b \in \mathbb{N}$ and $b < a$. Let $Y$ be any random variable such that $\tilde{\Gamma}(X \mid Y) \ge \nu$ for some real $\nu \ge 0$. Let $R$ be any random variable. An efficiently computable function $E : X \times R \to Z$ is an average-case $(\nu, \epsilon)$-strong extractor if $\Delta((Y, r, E(X, r)), (Y, r, U_Z)) \le \epsilon$, where $r \leftarrow R$, $U_Z$ is the uniform distribution over $Z$, and $\epsilon > 0$ is real.
Concrete instantiations of strong randomness extractors involve a family of universal hash functions. This leads to the following lemma.
Lemma 2
([25]). A universal family of hash functions $\mathcal{H}$ can be used as an average-case $(\tilde{\Gamma}(X \mid Y), \epsilon)$-strong extractor whenever $\tilde{\Gamma}(X \mid Y) \ge \log|Z| + 2\log(1/\epsilon)$.
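As a concrete illustration of Lemma 2 (not part of the formal development; the particular hash family and parameter sizes below are our own choices), the classic Carter–Wegman family $h_{a,b}(x) = ((ax + b) \bmod P) \bmod 2^t$ over a prime $P$ is universal and can therefore serve as the extractor $E$:

```python
# Illustration of Lemma 2 (the hash family and sizes are our own choices): the Carter-Wegman
# family h_{a,b}(x) = ((a*x + b) mod P) mod 2^t is universal, so it can serve as the
# average-case strong extractor E, with the seed r = (a, b) made public.
import secrets

P = (1 << 127) - 1             # a prime larger than the source domain

def sample_seed():
    """The extractor seed r indexes one function h_{a,b} of the universal family."""
    return secrets.randbelow(P - 1) + 1, secrets.randbelow(P)

def extract(seed, x, t):
    """E(x, r): compress the source sample x down to t output bits."""
    a, b = seed
    return ((a * x + b) % P) % (1 << t)

# By Lemma 2, the t output bits are eps-close to uniform whenever the average min-entropy
# of x given the adversary's view is at least t + 2*log2(1/eps).
seed = sample_seed()
x = secrets.randbits(100)      # stand-in for a sample from a high-min-entropy source
print(format(extract(seed, x, 64), '064b'))
```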

2.3. Public Key Cryptosystems and CCA2 Security

A public-key cryptosystem consists of three probabilistic algorithms, ( G , E , D ) , described as follows.
  • G ( 1 κ ) is an initialization algorithm that outputs a public/secret key pair ( p k , s k ) given security parameter κ N .
  • E ( p k , m ; r ) is an encryption algorithm that outputs a ciphertext, c, given p k , a plaintext message, m, and a sampled random number, r, during computation.
  • D ( s k , c ) is a decryption algorithm that outputs m such that m = D ( s k , E ( p k , m ; r ) ) .
We now describe security against adaptive chosen ciphertext attacks or CCA2 security using attack games, following [4].

Security Notion of Adaptive Chosen Ciphertext Attack (CCA2 Security)

This security notion is defined in terms of an attack game between a challenger and an adversary, $\mathcal{A}$, in which both are polynomial-time algorithms. On input $\kappa$, the challenger draws $a \leftarrow \{0,1\}$ and then generates the public/secret key pair $(pk, sk) \leftarrow G(1^\kappa)$. It forwards $pk$ to $\mathcal{A}$. $\mathcal{A}$ can perform decryption queries by providing a ciphertext, $c$, to the challenger, and the challenger returns $D(sk, c)$. $\mathcal{A}$ performs a challenge query by giving a message pair $(m_0, m_1)$ to the challenger, and the challenger returns the challenge ciphertext $c^* = E(pk, m_a)$. To prevent trivial wins, any ciphertext input for decryption must not equal $c^*$. The game ends when $\mathcal{A}$ outputs a guess $a' \in \{0,1\}$, and $\mathcal{A}$ wins the game if $a' = a$. The advantage of $\mathcal{A}$ is $|\Pr(a' = a) - 1/2|$. If the advantage of any polynomial-time adversary in this game is negligible in $\kappa$, the public-key cryptosystem is CCA2 secure.
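For readers who prefer pseudocode, the following Python sketch (our own rendering; $G$, $E$, $D$ are placeholders for an arbitrary scheme) mirrors the challenger's side of this game:

```python
# Our own pseudocode-style rendering of the CCA2 game (G, E, D are placeholders for an
# arbitrary public-key scheme; the adversary interacts through decrypt/challenge/finalize).
import secrets

class CCA2Challenger:
    def __init__(self, G, E, D, kappa):
        self.E, self.D = E, D
        self.a = secrets.randbits(1)                  # hidden challenge bit a
        self.pk, self.sk = G(kappa)
        self.challenge_ciphertexts = []

    def decrypt(self, c):
        # Decryption query: anything may be asked except a challenge ciphertext.
        if c in self.challenge_ciphertexts:
            return None
        return self.D(self.sk, c)

    def challenge(self, m0, m1):
        # Challenge query: return an encryption of m_a under the hidden bit a.
        c_star = self.E(self.pk, (m0, m1)[self.a])
        self.challenge_ciphertexts.append(c_star)
        return c_star

    def finalize(self, a_guess):
        return a_guess == self.a                      # the adversary wins iff a' = a
```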

2.4. Hash Proof Systems

A hash proof system is an encapsulation system that uses a projective hash [5]. The domain of a projective hash consists of two disjoint sets, the valid set and the invalid set. Each projective hash function is associated with a projection function whose role is to provide auxiliary information. Without this auxiliary information, it is computationally difficult to evaluate the projective hash over the valid set in its domain, and its behaviour is close to uniform. In more detail, let Λ s k H denote a projective hash with ciphertext domain C. Let V C denote the set of valid ciphertexts and C \ V denote the set of invalid ciphertexts. Let K denote a set of encapsulated ciphertexts. A hash proof system, H, consists of three polynomial time algorithms ( H g , H p u b , H p r i v ) that are as follows.
  • H g ( 1 κ ) is a parameter generation algorithm that generates a secret key s k H and projective hash Λ s k H : C K , with the associated projection function μ . It computes public key p k H = μ ( s k H ) , representing auxiliary information.
  • H p u b ( p k H , c H , ω ) is a public evaluation algorithm that, given p k H , ciphertext c H C , and witness, ω , of the fact that c H V , outputs k K .
  • H p r i v ( s k H , c ) is a private evaluation algorithm that, given s k H , ciphertext c H C , outputs k K , without requiring witness, ω , of the fact that c H V .
A key property required of H is the subset membership hardness property, whereby V is computationally difficult to distinguish from C \ V . Formally, let A be any polynomial-time algorithm. The advantage of A with respect to the subset membership problem over { C , V } is defined as | Pr ( A ( c 0 ) = 1 | c 0 V ) Pr ( A ( c 1 ) = 1 | c 1 C \ V ) | . If this advantage is negligible for any A , the subset membership problem over { C , V } is computationally hard.
Definition 2.
Given $\epsilon \ge 0$, a projective hash function $\Lambda_{sk_H}$ with corresponding projection function $\mu$ is $\epsilon$-universal if, for all $pk_H$, all $k \in K$, and all $c \in C \setminus V$, we have $\Pr(\Lambda_{sk_H}(c) = k \mid (pk_H, c)) \le \epsilon$, where the probability is computed over all $sk_H$ with $pk_H = \mu(sk_H)$.
The following lemma and definition will be used in the security proofs.
Lemma 3
([13]). Let $\Lambda_{sk_H}$ be an $\epsilon$-universal projective hash function with associated projection function $\mu$. For all $pk_H$ and invalid ciphertexts $c_H \in C \setminus V$, we have $\Gamma(\Lambda_{sk_H}(c_H) \mid (pk_H, c_H)) \ge \log(1/\epsilon)$, where $sk_H$ is a randomly drawn secret key and $pk_H = \mu(sk_H)$.
Definition 3.
A hash proof system, H, is ϵ-universal if the underlying projective hash function is ϵ-universal and the underlying subset membership problem is computationally hard.
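As a concrete toy example (not the paper's construction, and with deliberately tiny parameters of our own choosing), the standard DDH-based projective hash of [5] can be sketched as follows; valid ciphertexts are pairs $(g_1^w, g_2^w)$ with witness $w$, and the projection key is $pk_H = g_1^{x_1} g_2^{x_2}$:

```python
# Toy sketch (tiny parameters, our own choices) of the DDH-based projective hash of [5]:
# valid ciphertexts are pairs (g1^w, g2^w) with witness w, and pk_H = mu(sk_H) = g1^x1 * g2^x2.
import secrets

P, Q = 23, 11                  # subgroup of prime order Q = 11 inside Z_23^*
G1, G2 = 4, 9                  # two generators of that subgroup

def h_g():
    """H_g: sample sk_H = (x1, x2) and the projection pk_H = g1^x1 * g2^x2."""
    x1, x2 = secrets.randbelow(Q), secrets.randbelow(Q)
    return (x1, x2), (pow(G1, x1, P) * pow(G2, x2, P)) % P

def sample_valid():
    """A valid ciphertext (g1^w, g2^w) in V together with its witness w."""
    w = secrets.randbelow(Q)
    return (pow(G1, w, P), pow(G2, w, P)), w

def h_pub(pk_h, c_h, w):
    """Public evaluation: uses only pk_H and the witness that c_H is in V."""
    return pow(pk_h, w, P)

def h_priv(sk_h, c_h):
    """Private evaluation: uses sk_H and works for any c_H in C."""
    (x1, x2), (u1, u2) = sk_h, c_h
    return (pow(u1, x1, P) * pow(u2, x2, P)) % P

sk_h, pk_h = h_g()
c_h, w = sample_valid()
assert h_pub(pk_h, c_h, w) == h_priv(sk_h, c_h)       # both evaluations agree on V
```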

2.5. Lossy Functions

2.5.1. Lossy Functions

A lossy function is a function that loses information from its input. A collection of lossy functions (lossy collection) consists of a set of injective functions along with a set of lossy functions [8]. Let $n(\kappa)$ and $p(\kappa)$ be values of polynomials in $\kappa \in \mathbb{N}$. Given $\kappa$, the input length of any function in the collection is $n(\kappa)$, and the size of the domain is $2^{n(\kappa)}$. A lossy function in the collection has an image size of at most $2^{\log p(\kappa)} = p(\kappa)$, where $p(\kappa) < 2^{n(\kappa)}$. For convenience, the dependence of $n$ and $p$ on $\kappa$ is omitted hereafter. The functions in the collection are indexed by the set $S$. A collection of $(n, p)$ lossy functions is given by the polynomial-time algorithms $(L_g, L_e)$:
  • L g ( 1 κ , i ) for i { 0 , 1 } is a function index sampling algorithm. If i = 1 , it outputs s = L g ( 1 κ , 1 ) S , where s is the index of an injective function. If i = 0 , its output is s = L g ( 1 κ , 0 ) S , where s is the index of a lossy function.
  • L e ( s , k ) is an evaluation algorithm that, on input s S and k { 0 , 1 } n , outputs an element in { 0 , 1 } n . If s refers to an injective function, L e is injective. If s refers to a lossy function, the image size of L e is, at most, p.
Definition 4.
Required properties of a collection of lossy functions: (i) the index of an injective or lossy function can be efficiently sampled, (ii) the distribution of L g ( 1 κ , 0 ) is computationally hard to distinguish from the distribution of L g ( 1 κ , 1 ) .

2.5.2. ABO Lossy Functions

A collection of all-but-one (ABO) lossy functions (ABO collection) consists of functions that are each equipped with a set of branches [8]. One branch corresponds to a lossy branch, while the rest are injective branches. Let $B = \{B_\kappa\}_{\kappa \in \mathbb{N}}$ denote the collection of branches indexed by $\kappa$. Given $\kappa$, define $B := B_\kappa$ with lossy branch $b^* \in B$. The functions in the ABO collection are indexed by the set $S$. An $(n, p)$ collection of ABO lossy functions is given by the polynomial-time algorithms $(L_g^{abo}, L_e^{abo})$, which are as follows.
  • $L_g^{abo}(1^\kappa, b^*)$ is a function index sampling algorithm that, given lossy branch $b^* \in B$, outputs a function index $s = L_g^{abo}(1^\kappa, b^*) \in S$.
  • $L_e^{abo}(s, b, k)$ is an evaluation algorithm that, on input $s \in S$, branch $b \in B$, and $k \in \{0,1\}^n$, outputs an element in $\{0,1\}^n$. If $b = b^*$, its image has size at most $p$. Otherwise, it is injective.
Definition 5.
Required properties of a collection of ABO lossy functions: (i) given $\kappa$, a lossy branch $b^* \in B$ can be efficiently sampled; (ii) $L_g^{abo}$ can efficiently sample $s$ given $b^*$; (iii) it is computationally difficult to distinguish the distributions of $L_g^{abo}(1^\kappa, b^*_0)$ and $L_g^{abo}(1^\kappa, b^*_1)$ for any $b^*_0 \ne b^*_1$; and (iv) given $s$, it is computationally difficult to determine $b^*$.
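The following toy Python sketch (our own illustration, in the spirit of the DDH-based matrix-ElGamal construction of [8]; parameters are tiny, and the trapdoor needed to invert injective branches is omitted) shows how evaluating on the lossy branch collapses the image:

```python
# Toy sketch (our own tiny parameters; the trapdoor needed to invert injective branches is
# omitted) in the spirit of the DDH-based matrix-ElGamal ABO collection of [8]: the index
# encrypts b* times the identity matrix, and evaluating at b = b* collapses the image.
import secrets

P, Q, G = 23, 11, 4            # toy subgroup of prime order Q generated by G inside Z_P^*
N = 6                          # input length n; inputs are bit vectors of length N

def lg_abo(b_star):
    """Sample a function index: a matrix-ElGamal encryption of b* * I."""
    z = [secrets.randbelow(Q) for _ in range(N)]      # per-column ElGamal secrets
    h = [pow(G, zj, P) for zj in z]
    r = [secrets.randbelow(Q) for _ in range(N)]      # per-row encryption randomness
    c1 = [pow(G, ri, P) for ri in r]
    c2 = [[(pow(h[j], r[i], P) * pow(G, b_star if i == j else 0, P)) % P
           for j in range(N)] for i in range(N)]
    return c1, c2

def le_abo(s, b, x):
    """Evaluate branch b on bit vector x by homomorphically shifting the diagonal by -b."""
    c1, c2 = s
    y0, ys = 1, []
    for i in range(N):
        if x[i]:
            y0 = (y0 * c1[i]) % P
    for j in range(N):
        yj = 1
        for i in range(N):
            if x[i]:
                entry = c2[i][j]
                if i == j:
                    entry = (entry * pow(G, -b % Q, P)) % P
                yj = (yj * entry) % P
        ys.append(yj)
    return y0, tuple(ys)

s = lg_abo(b_star=7)
lossy_images = {le_abo(s, 7, [secrets.randbits(1) for _ in range(N)]) for _ in range(200)}
print(len(lossy_images))       # at most Q = 11 distinct outputs on the lossy branch b = b*
```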

2.5.3. $L_M$ Lossy Functions

A collection of $L_M$ lossy functions (or $L_M$ collection, for short) generalizes the ABO collection. Each function in the $L_M$ collection is equipped with a set of branches, but there are several possible lossy branches. $L_M$ collections are similar to the ABN lossy functions in [26] and the ABM lossy functions in [27]. However, they are simpler and can be constructed from a set of ABO collections using Cartesian products. Let $B = \{B_\kappa\}_{\kappa \in \mathbb{N}}$ denote the collection of branches indexed by $\kappa$. Define $B := B_\kappa$, with lossy branch set $B^* \subseteq B$ of size $M = |B^*|$ and with elements $b^* \in B^*$. Define $q$ to be an ordered tuple that corresponds to $B^*$, i.e., $q = (b^*_1, b^*_2, \ldots, b^*_M)$ with $b^*_i \in B^*$ for $i \in [M]$. The functions in the $L_M$ collection are indexed by the set $S$. An $(n, p)$ collection of $L_M$ lossy functions is given by the polynomial-time algorithms $(L_g^M, L_e^M)$, which are as follows.
  • $L_g^M(1^\kappa, q)$ is a function index sampling algorithm that takes as input $q$ corresponding to $B^*$ and outputs $s \in S$.
  • $L_e^M(s, b, k)$ is an evaluation algorithm that, on input $s \in S$, branch $b \in B$, and $k \in \{0,1\}^n$, outputs an element in $\{0,1\}^{nM}$. If $b \in B^*$, the image size is at most $2^{n(M-1) + \log(p)}$.
Definition 6.
Required properties of a collection of $L_M$ lossy functions: (i) given $\kappa$, a lossy branch set $B^* \subseteq B$ can be efficiently sampled; (ii) $L_g^M$ can efficiently sample $s$, given $q$ corresponding to $B^*$; (iii) it is computationally difficult to distinguish the distributions of $L_g^M(1^\kappa, q_0)$ and $L_g^M(1^\kappa, q_1)$ for $q_0 \ne q_1$; and (iv) given $s$, it is computationally difficult to generate an element of $B^*$.
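Continuing the ABO sketch above (it reuses the hypothetical lg_abo and le_abo defined there), the Cartesian-product construction mentioned in this subsection can be written directly:

```python
# Continuation of the ABO sketch above (reuses the hypothetical lg_abo and le_abo): the
# Cartesian-product L_M collection, whose index bundles M independent ABO indices with
# lossy branches forming the set B*.
def lg_m(lossy_branches):
    """q = (b*_1, ..., b*_M): sample one ABO index per intended lossy branch."""
    return [lg_abo(b_star) for b_star in lossy_branches]

def le_m(s, b, x):
    """Evaluate every component ABO on the same branch b and input x."""
    return tuple(le_abo(component, b, x) for component in s)

# With B* = {2, 5, 7}, any branch in B* makes exactly one component lossy, so the image
# size is at most 2^{n(M-1) + log(p)}, matching Definition 6.
s_m = lg_m([2, 5, 7])
out = le_m(s_m, 5, [1, 0, 1, 1, 0, 1])
print(len(out))                # M = 3 component outputs
```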

2.6. Pseudorandom Functions

A pseudorandom function $P : R \times M \to Y$, where $R$ is a key space and $M$ is an input data block space, is a deterministic algorithm that behaves like a truly random function [2].

Security Notion of a Pseudorandom Function

The security of a pseudorandom function, $P$, is defined in terms of an attack game between a challenger and an adversary. Given $\kappa$, at the start of the game the challenger draws $a \leftarrow \{0,1\}$, a key $r \leftarrow R$, and a random function $f$ from $M$ to $Y$. The adversary submits a sequence of queries to the challenger, where each query consists of an element $m \in M$. If $a = 0$, the challenger returns $P(r, m)$ to the adversary. If $a = 1$, the challenger returns $f(m)$. The game ends once the adversary submits a guess $a' \in \{0,1\}$, and the adversary wins if $a' = a$. The advantage of the adversary in this game is defined as $|\Pr(a' = a) - 1/2|$. The pseudorandom function $P$ is a secure PRF if the advantage of any polynomial-time adversary in this game is negligible in $\kappa$.
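As a practical stand-in (our own choice, not mandated by the paper), HMAC-SHA256 is commonly modelled as a secure PRF; the sketch below shows the keyed, deterministic behaviour that the attack game asks the adversary to distinguish from a truly random function:

```python
# Our own stand-in: HMAC-SHA256 treated as the secure PRF P : R x M -> Y. A PRF adversary
# must distinguish these outputs from those of a truly random function without the key.
import hmac, hashlib, secrets

def prf(key: bytes, message: bytes) -> bytes:
    return hmac.new(key, message, hashlib.sha256).digest()

key = secrets.token_bytes(32)
assert prf(key, b"block-0") == prf(key, b"block-0")   # deterministic under a fixed key
print(prf(key, b"block-0").hex())                     # but unpredictable without the key
print(prf(key, b"block-1").hex())
```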

2.7. Strongly Unforgeable One-Time Signatures

A strongly unforgeable one-time signature scheme has the strong one-time unforgeability property [8] and is given by the algorithms below.
 Key Generation. 
F g ( 1 κ ) . On input κ , F g outputs the verification/signing key pair ( v k σ , s k σ ) .
 Signing. 
F s ( s k σ , x ) : given s k σ and a plaintext x, outputs a signature σ .
 Verification. 
$F_v(vk_\sigma, x, \sigma)$: given $vk_\sigma$, $x$, and $\sigma$, it outputs 0 if $\sigma \ne F_s(sk_\sigma, x)$ and 1 otherwise.

Security Notion of a Strongly Unforgeable One-Time Signature Scheme

The security of a strongly unforgeable one-time signature scheme is defined in terms of an attack game between a challenger and an adversary, $\mathcal{A}$. Given $\kappa$, at the start of the game, the challenger generates $(vk_\sigma, sk_\sigma)$ and gives $vk_\sigma$ to $\mathcal{A}$. $\mathcal{A}$ queries a single plaintext message $x$ to the challenger, and the challenger returns $\sigma \leftarrow F_s(sk_\sigma, x)$. $\mathcal{A}$ wins the game if it outputs a message–signature pair $(x', \sigma') \ne (x, \sigma)$ such that $F_v(vk_\sigma, x', \sigma') = 1$. A signature scheme is strongly unforgeable one-time secure if no probabilistic polynomial-time adversary can win this attack game with non-negligible probability.
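For a concrete hash-based example of a one-time signature (not the specific instantiation assumed later in the paper), Lamport's construction can be sketched as follows; its one-time unforgeability rests on the preimage resistance of the hash:

```python
# A concrete one-time signature for illustration (Lamport's hash-based scheme, not the
# specific instantiation assumed later): each key pair may sign at most one message.
import hashlib, secrets

H = lambda data: hashlib.sha256(data).digest()
BITS = 256                                  # messages are hashed to 256 bits before signing

def f_g():
    sk = [[secrets.token_bytes(32), secrets.token_bytes(32)] for _ in range(BITS)]
    vk = [[H(pair[0]), H(pair[1])] for pair in sk]
    return vk, sk

def _bits(x: bytes):
    digest = int.from_bytes(H(x), 'big')
    return [(digest >> i) & 1 for i in range(BITS)]

def f_s(sk, x: bytes):
    """Sign by revealing one secret preimage per message bit."""
    return [sk[i][bit] for i, bit in enumerate(_bits(x))]

def f_v(vk, x: bytes, sigma):
    return all(H(sigma[i]) == vk[i][bit] for i, bit in enumerate(_bits(x)))

vk, sk = f_g()
sigma = f_s(sk, b"hello")
assert f_v(vk, b"hello", sigma) and not f_v(vk, b"hellp", sigma)
```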

3. Security Notions

For illustration, we first describe randomness reset attacks and secret-key leakage attacks in the context of the ElGamal public-key cryptosystem. Recall that, given a group $\mathbb{G}$ of prime order $p$ with generator $g$, the ElGamal cryptosystem draws a secret key $sk = z \leftarrow \mathbb{Z}_p$ and defines the public key as $pk = h = g^z$. A message $m \in \mathbb{Z}_p$ is encrypted as $c = (g^r, h^r g^m)$ for a randomly chosen $r \leftarrow \mathbb{Z}_p$.
Randomness Reset Attack Example. In a randomness reset attack [16], an adversary can force the cryptosystem to re-use a previous random number. In terms of the ElGamal cryptosystem above, suppose that Alice draws a secret key $sk = z$ and gives the public key $pk = h = g^z$ to Bob. Bob encrypts a message $m$ by drawing a random number $r_0$ and sends the ciphertext $c_0 = (g^{r_0}, h^{r_0} g^m)$ to Alice. In a normal setting, without randomness reset attacks, suppose that Bob wants to send another message, $m_a \in \{m_0, m_1\}$, with $m$, $m_0$, and $m_1$ pairwise distinct, to Alice. To do this in the normal setting, Bob draws a fresh random number $r_1$ and sends the new ciphertext $c_1 = (g^{r_1}, h^{r_1} g^{m_a})$ to Alice. Given that $c_1$ and $c_0$ involve different random numbers, they are computationally indistinguishable for any efficient adversary. In a setting with randomness reset attacks, however, an adversary forces Bob to re-use $r_0$ in encrypting $m_a$, i.e., $c_1 = (g^{r_0}, h^{r_0} g^{m_a})$. This arbitrarily breaks the security of the ElGamal cryptosystem. To see this, suppose that some adversary obtained $c_0$ and $c_1$, and that the adversary also knows $m$, $m_0$, and $m_1$. Taking the ratio of the second components, the adversary can compute $c_0 / c_1 = h^{r_0} g^m / h^{r_0} g^{m_a} = g^{m - m_a}$. It follows that if $m_a = m_0$, we have $c_0 / c_1 = g^{m - m_0}$, but if $m_a = m_1$, we have $c_0 / c_1 = g^{m - m_1}$. The adversary can compute $g^{m - m_0}$ and $g^{m - m_1}$ on its own, given that it knows $g$, $m$, $m_0$, and $m_1$. It follows that, under randomness reset attacks, the ElGamal cryptosystem is not even semantically secure. Relating this scenario to a CCA2 attack game between a challenger and an adversary, a randomness reset attack allows the adversary to perform encryption queries apart from challenge queries. In the example above, the adversary can ask the challenger to encrypt $m$ (an encryption query), followed by the encryption of $m_a \in \{m_0, m_1\}$ (a challenge query). The adversary can also force the challenger to re-use a previous random number in both encryption and challenge queries. For more details on the power of randomness attacks, [16] shows that randomness reset attacks may break arbitrary CCA2 cryptosystems far more complicated than the ElGamal cryptosystem if no additional primitives are applied to secure the randomness.
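The attack above can be reproduced directly; the following Python sketch (toy subgroup parameters of our own choosing) recovers which of $m_0$, $m_1$ was encrypted once the randomness $r_0$ is reused:

```python
# Toy demonstration (parameters are illustrative) of the randomness reset attack on the
# ElGamal variant above: reusing r makes c0/c1 = g^(m - m_a), which is testable offline.
import secrets

P, Q, G = 23, 11, 4                     # subgroup of order Q generated by G inside Z_P^*

def keygen():
    z = secrets.randbelow(Q)
    return pow(G, z, P), z              # (pk = h = g^z, sk = z)

def encrypt(h, m, r):
    return pow(G, r, P), (pow(h, r, P) * pow(G, m, P)) % P

h, z = keygen()
m, m0, m1 = 3, 5, 9                     # the adversary knows m and the candidate pair (m0, m1)
r0 = secrets.randbelow(Q)
c0 = encrypt(h, m, r0)                  # Bob's first ciphertext, encrypting m with r0
ma = secrets.choice([m0, m1])
c1 = encrypt(h, ma, r0)                 # the reset forces Bob to reuse r0 when encrypting m_a

inv = pow(c1[1], P - 2, P)              # modular inverse via Fermat's little theorem
ratio = (c0[1] * inv) % P               # h^r0 g^m / h^r0 g^{m_a} = g^{m - m_a}
guess = m0 if ratio == pow(G, (m - m0) % Q, P) else m1
assert guess == ma                      # the adversary learns which message was encrypted
```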
Constant secret-key leakage attack example. In a secret-key leakage attack, the adversary can obtain bits of the secret key. The secret-key leakage attack in [13] considers the leakage of a constant factor of the bits of the secret key, i.e., a fixed percentage of the secret key's length. For any public-key cryptosystem, this percentage obviously cannot equal one; otherwise, the entire secret key is leaked. In terms of the ElGamal cryptosystem, full leakage implies that $sk = z$ is handed to the adversary, which arbitrarily breaks the cryptosystem. However, in some cryptosystems, such as the Cramer and Shoup CCA2 cryptosystem [3], the allowable amount of leakage is even lower, by a factor of $1/2 - o(1)$. This is because the security of the Cramer and Shoup cryptosystem involves jointly using two secret keys, and, if either is leaked, the entire cryptosystem is insecure. Relating this scenario to a CCA2 attack game between a challenger and an adversary, a constant secret-key leakage attack leaks some constant number of bits to the adversary through a leakage query.
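To see why even partial leakage is dangerous, the following small sketch (toy key length, and a placeholder consistency check standing in for a trial decryption) brute-forces the unleaked remainder of the key:

```python
# Illustrative only (toy key length, placeholder check): once lambda bits of an n-bit
# secret key are leaked, exhaustive search over the remaining 2^(n - lambda) candidates
# recovers the key, so the leakage rate directly erodes the effective key length.
import secrets

N_BITS, LEAKED = 24, 16
secret_key = secrets.randbits(N_BITS)
leaked_high_bits = secret_key >> (N_BITS - LEAKED)      # what a Leak query hands over

def consistent(candidate):
    # Stand-in for testing a candidate key, e.g., by trial-decrypting a known ciphertext.
    return candidate == secret_key

recovered = None
for low in range(1 << (N_BITS - LEAKED)):               # only 2^8 = 256 candidates remain
    candidate = (leaked_high_bits << (N_BITS - LEAKED)) | low
    if consistent(candidate):
        recovered = candidate
        break
assert recovered == secret_key
```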
We now present the attack game corresponding to the security notion of a public-key cryptosystem that is secure against (i) adapative chosen ciphertext attacks, (ii) randomness reset attacks, and (iii) constant secret-key leakage attacks. Table 1 presents the attack game.
The attack game is initialized by the challenger through Initialize, where it generates the public/secret key pair $(pk, sk)$ and forwards $pk$ to the adversary. The adversary has access to (i) decryption queries Dec, (ii) secret-key leakage queries Leak, (iii) challenge queries Challenge, and (iv) encryption queries Enc, all of which are described in Table 1. In Table 1, the adversary has access to a set of indices that are mapped to prior random numbers generated during encryption or challenge queries. In any subsequent encryption or challenge query, the adversary can use any of these indices at will, representing a randomness reset attack. The adversary can also ask the challenger, in an encryption query, to use any public key; this follows [16]. In a leakage query, Leak, the adversary may request up to $\lambda(\kappa)$ bits of the secret key $sk$. The game ends once the adversary outputs a bit $a'$ through Finalize, and the adversary wins if $a' = a$. The advantage of the adversary, in this case, is defined as $|\Pr(a' = a) - 1/2|$, and a cryptosystem is secure with respect to the attack game of Table 1 if no polynomial-time adversary has a non-negligible advantage.

3.1. Attack Game with Random Oracles

In certain situations, the random oracle heuristic [4] is convenient for simplifying security proofs. A random oracle Φ captures a truly random function and can be incorporated in cryptosystems. Suppose that a random oracle Φ : X Y is part of a cryptosystem. The corresponding attack game incorporates additional random oracle queries, whereby, on input x X from the adversary, the challenger returns y Y such that Φ ( x ) = y .

3.2. Adversary Constraints

As stated in [11,16], if the adversary can perform a randomness reset attack and has no constraints on encryption/challenge queries, it can trivially win. To prevent this, the adversary is assumed to be equality-pattern-respecting. We provide the definition of an equality-pattern-respecting adversary below, along with the corresponding notion of a non-reversing adversary that will be used in cryptosystem 1.
Definition 7.
Let $\mathcal{A}$ be an adversary in the attack game of Table 1. Let $I$ represent the set of randomness indices mapped to random numbers generated by the challenger during $\mathcal{A}$'s challenge and encryption queries. Let $\mathcal{A}$ perform $Q_e$ encryption queries, and let $\mathcal{A}$ perform $Q_{c,i}$ challenge queries using index $i \in I$. Let $M_i$ represent the set of input messages $m$ in the encryption queries made by $\mathcal{A}$ using the public key $pk^*$ and randomness index $i \in I$, i.e., $Enc(pk^*, i, m)$. Let $(m_0^{i,1}, m_1^{i,1}), \ldots, (m_0^{i,Q_{c,i}}, m_1^{i,Q_{c,i}})$ represent the message pairs given in challenge queries using randomness index $i \in I$. For the attack game of Table 1, we say that $\mathcal{A}$ is equality-pattern-respecting if (1) for all $i \in I$ and all $j \ne k \in [Q_{c,i}]$, $m_0^{i,j} = m_0^{i,k}$ if and only if $m_1^{i,j} = m_1^{i,k}$, and (2) for all $i \in I$ and $j \in [Q_{c,i}]$, we have $m_0^{i,j} \notin M_i$ and $m_1^{i,j} \notin M_i$.
Definition 8.
Let $\mathcal{A}$ be an adversary in the attack game of Table 1 that performs $Q_e$ encryption queries and $Q_d$ decryption queries. Let $\{c_i\}_{i \in [Q_e]}$ denote the set of ciphertexts $c_i$ received by $\mathcal{A}$ from the challenger in its encryption queries. Let $\{c'_j\}_{j \in [Q_d]}$ denote the set of ciphertexts $c'_j$ submitted by $\mathcal{A}$ for its decryption queries. We say that $\mathcal{A}$ is non-reversing if, for every $c_i \in \{c_i\}_{i \in [Q_e]}$, we have $c_i \notin \{c'_j\}_{j \in [Q_d]}$ at every point in the game.

4. Comparison of Cryptosystems/Lossy Functions

Given the security notion presented in the previous section, for context, we present in Table 2 a list of various cryptosystems that deal with the related notions of randomness attacks and secret-key leakage attacks. From Table 2, the types of randomness attacks considered in the literature involve linear and polynomial functions of the random numbers involved in the encryption process. The same holds for secret-key leakage attacks, where the leakage may consist of affine or polynomial functions of the bits of the secret key, along with bounded degrees of secret-key tampering. Our proposed cryptosystems are listed in the last two lines of Table 2 and consider joint attacks involving randomness reset and constant secret-key leakage. Similar to the other constructions in Table 2, we propose both a random oracle model and a standard model of our cryptosystems.
In Table 3, we list several lossy function constructions from the literature. The first lossy function collection in Table 3 is from [8], which uses the DDH assumption. The subsequent lossy function constructions present improvements in terms of the number of lossy function branches or tags that can be sampled efficiently while retaining their amounts of lossiness. In particular, the construction of [27] provides an efficient lossy function that can sample a superpolynomially large number of lossy tags. The construction of [27], however, is quite complicated, since it involves Waters signatures along with chameleon hashing. For the purposes of our cryptosystems, our proposed lossy function collection (the last line of Table 3) is able to sample up to M lossy branches per function index, at the cost of a rather large lossy image size, i.e., $2^{n(M-1)+\log(p)}$. Yet, despite this amount of lossiness, the security proof still holds, given that the ratio of the domain size to the lossy image size, $2^{nM}/2^{n(M-1)+\log(p)} = 2^{n-\log p}$, is still superpolynomial in $\kappa$. In addition, our proposed lossy function collection is simpler to construct, involving only the DDH assumption along with the Cartesian product operation. In terms of size complexity, we show, in the FIS and LBS columns, the size of the function index and the lossy branch index, respectively, where size is measured in terms of matrix representations. For instance, the function index size of the lossy functions in [8] is $n^2$, which means that the index is a square matrix consisting of n rows and n columns. Our proposed $L_M$ collection has a larger function index size, of $n^2 M$, and a larger branch size, of $nM$. This is because it relies on the M-fold Cartesian product operation. A concrete instantiation of an $L_M$ collection is presented in Section 6.
Table 2. List of CCA2 secure PKE schemes that incorporate either randomness attack or secret-key leakage attack along with their primitives. Scheme models are classified according to random oracle or standard, where standard refers to schemes that do not use random oracles. Our proposed schemes are in the last two lines of the table and incorporate joint randomness reset attack and constant-bit secret-key leakage attack.
Reference | Randomness Attack | Secret-Key Leakage Attack | Model | Primitives/Assumptions
Canetti and Goldwasser [28] | | | random oracle | random oracle assumption
Cramer and Shoup [3] | | | standard | hash proof system/DDH
Yilek [16] | randomness reset | | standard | pseudorandom function
Peikert and Waters [8] | | | standard | lossy functions/DDH/DCR
Wee [29] | | linear leakage | standard | BDDH/LWE
Qin and Liu [13] | | constant leakage | standard | hash proof system + lossy filter/DDH
Bellare et al. [10] | chosen distribution attack | | random oracle | random oracle assumption
Bellare et al. [10] | chosen distribution attack | | standard | lossy functions
Paterson et al. [11] | linear/polynomial | | random oracle | random oracle assumption
Paterson et al. [11] | linear functions | | standard | pseudorandom function
Paterson et al. [11] | polynomial functions | | standard | CIS hash functions
Paterson et al. [30] | vector of functions | | standard | Goldreich–Levin extractor
Boneh et al. [31] | | affine leakage | random oracle | random oracle assumption
Boneh et al. [31] | | polynomial leakage | standard | EDBDH
Faonio and Venturi [12] | | leakage + bounded tampering | standard | hash proof system + lossy filter/RSI
ours | randomness reset | constant leakage | random oracle | random oracle assumption
ours | randomness reset | constant leakage | standard | hash proof system + $L_M$ lossy functions
Table 3. List of lossy function constructions found in the literature. Our proposed $L_M$ construction is shown in the last line of the table, where up to M lossy branches are given for each function index. While the lossiness of our construction is higher than that of the other schemes, it is simpler to construct and uses only the DDH assumption. ABO: all-but-one lossy functions; ABN: all-but-N lossy functions; ABM: all-but-many lossy functions; LF: lossy function; LB: lossy branch; LT: lossy tag; DS: domain size; LS: lossiness size; FIS: function index size (in terms of matrix representation, where n is the number of rows/columns); LBS: lossy branch index size; DDH: decisional Diffie–Hellman; DCR: decisional composite residuosity; QR: quadratic residuosity; CH: chameleon hash.
Reference | Primitive | No. of LF/LB/LT | DS | LS | FIS | LBS | Assumptions
Peikert and Waters [8] | lossy functions | several | $2^n$ | $2^{\log p}$ | $n^2$ | $n$ | DDH/DCR (lattice)
Peikert and Waters [8] | ABO lossy functions | 1 LB/function | $2^n$ | $2^{\log p}$ | $n^2$ | $n$ | DDH
Hemenway et al. [26] | ABN lossy functions | N LB/function | $2^n$ | $2^{\log p}$ | - | - | DDH/DCR/QR
Hofheinz [27] | ABM lossy functions | superpolynomial LT | $2^n$ | $2^{\log p}$ | - | - | Waters sig./CH
Qin and Liu [13] | one-time lossy filter | superpolynomial LT | $2^n$ | $2^{\log p}$ | $n^2$ | $n$ | DDH/CH
ours | $L_M$ lossy functions | M LB/function | $2^{nM}$ | $2^{n(M-1)+\log(p)}$ | $n^2 M$ | $nM$ | DDH

5. Proposed Cryptosystems

5.1. Cryptosystem 1

In this section, we present our first cryptosystem that is secure against the attack game of Table 1. It uses several primitives, such as hash proof systems, randomness extractors, and pseudorandom functions, along with a random oracle Φ . Using a random oracle assumption simplifies the security proof and usually serves as the starting point in cryptosystem design, as done in [10]. However, as mentioned, the random oracle assumption is rather strong. In addition, this cryptosystem is limited to facing non-reversing adversaries who cannot submit prior-encrypted ciphertexts for decryption. We overcome the non-reversing limitation in the next cryptosystem—which also does away with the random oracle requirement.

5.2. Cryptosystem 1 Requirements

Let Q e , Q d , and Q c denote the bounds in the number of encryption, decryption and challenge queries respectively. The requirements of cryptosystem 1 are as follows.
  • An ϵ 1 -universal hash proof system H for some ϵ 1 > 0 given by ( H g , H p u b , H p r i v ) . The ciphertext domain of H is C, with V C as its valid subset. S K H and P K H denote the secret-key space and public-key space of H, with elements s k H S K H and p k H P K H . The space W of the witnesses of H is set to R. The encapsulated key space of H is K, with elements k K . H p u b is set as H p u b : P K H × C × R K , and H p r i v : S K H × C K . The projective function is Λ s k H : C K , with associated projection function μ : S K H P K H .
  • A ( n , p ) ABO collection given by ( L g a b o , L e a b o ) . The set of branches is B with elements b B , and lossy branch b B . Functions are indexed by S, and L e a b o : S × B × K { 0 , 1 } n
  • E is a ( ( ν λ ( Q c + Q e ) ( l ) ) / Q c , ϵ 2 ) average-case strong-randomness extractor
  • A secure pseudorandom function P : R × M R
  • A strongly unforgeable one-time signature scheme given by ( F g , F s , F v ) . S K σ and V K σ denote the spaces of signature and verification keys, with elements s k σ S K σ and v k σ V K σ , and where the domain of F s is C × { 0 , 1 } l , and the domain of V K σ is equal to B.
  • Elements of K , R , C , S K H , P K H , S , B have length n. Elements of M have length l.
  • The values of ν , l, and λ are such that λ ν λ ( Q c + Q e ) ( l ) α ( log κ ) for some α 0 .
  • n and p satisfy p < 2 n .
  • The polynomial in κ whose value is n is superpolynomial with respect to Q e , Q d , and Q c .

5.3. Cryptosystem 1

 Key Generation. 
G ( 1 κ ) first runs H g ( 1 κ ) to obtain s k H S K H and Λ s k H with μ . It computes p k H = μ ( s k H ) P K H . The output is a public/secret key pair ( p k , s k ) , where p k = p k H and s k = s k H .
 Encryption. 
$E(pk, m)$: on input $pk = pk_H$ and a message $m \in M$, let $\Phi : K \to \{0,1\}^n$ denote a random oracle. It performs the following:
  • It samples $r_1 \leftarrow R$ and then computes $r_1' = P(r_1, m)$. It sets $\omega = r_1'$. Using $\omega$, it chooses $c_H \in V$.
  • It samples $r_2 \leftarrow R$, then computes $\bar{r}_2 = r_2 \oplus pk_H$, followed by $r_2' = P(\bar{r}_2, m) \in R$.
  • Using $r_2'$, it computes $k = H_{pub}(pk_H, c_H, \omega)$ and $\Psi = E(k, r_2') \oplus m$.
  • It samples $(vk_\sigma, sk_\sigma) \leftarrow F_g(1^\kappa)$ and computes $\sigma = F_s(sk_\sigma, (c_H, r_2', \Psi))$.
  • It computes $\Pi = \Phi(k)$.
  • It returns the ciphertext $c = (\sigma, c_H, r_2', vk_\sigma, \Psi, \Pi)$.
 Decryption. 
$D(sk, c)$: on input $sk = sk_H$ and $c = (\sigma, c_H, r_2', vk_\sigma, \Psi, \Pi)$, it performs the following:
  • It checks whether $F_v(vk_\sigma, (c_H, r_2', \Psi), \sigma) = 1$. If not, it outputs ⊥.
  • It computes $k' = H_{priv}(sk_H, c_H)$.
  • It computes $\Pi' = \Phi(k')$.
  • It checks whether $\Pi' = \Pi$. If not, it outputs ⊥.
  • It returns the plaintext message $m = \Psi \oplus E(k', r_2')$.
For cryptosystem 1, we note that the role of $P$ is to generate fresh random numbers $r_1'$ and $r_2'$ using the joint entropy of the message $m$ and the old random numbers $r_1$ and $r_2$ (which may be subject to reset attacks). It follows that $r_1'$ and $r_2'$ serve as the actual randomness inputs to $H_{pub}$ and $E$, respectively. To show the correctness of the cryptosystem, given $m$ and $pk$, $E$ first computes $r_1'$ and $r_2'$, followed by $k \leftarrow H_{pub}(pk_H, c_H, r_1' = \omega)$, $\Psi = E(k, r_2') \oplus m$, and $\Pi = \Phi(k)$. Let $c = (\sigma, c_H, r_2', vk_\sigma, \Psi, \Pi)$ be the corresponding ciphertext, where $vk_\sigma$ is jointly sampled with $sk_\sigma$ under $F_g$, and $\sigma$ is derived using $F_s$ as shown above. If this $c$ were given to $D$, it follows that $F_v(vk_\sigma, (c_H, r_2', \Psi), \sigma) = 1$, given that $\sigma = F_s(sk_\sigma, (c_H, r_2', \Psi))$ and the signature scheme $(F_g, F_s, F_v)$ satisfies the correctness property. Having passed this first check, $D$ computes $k' = H_{priv}(sk_H, c_H)$, and we have $k' = k$, given that $H_{priv}$ uses the same $c_H$ from encryption, $sk_H$ is paired with $pk_H$ under $H_g$, and $(H_g, H_{pub}, H_{priv})$ satisfies the correctness property. Given that $k' = k$, we have $\Pi' = \Pi$ under $\Phi$, given that $\Phi$ is a function, thereby passing the second check. Finally, with $k' = k$, we have $m = \Psi \oplus E(k', r_2') = \Psi \oplus E(k, r_2')$, and the original message $m$ is recovered.
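The randomness-refresh step is the part of the scheme that neutralizes reset attacks, so we sketch it separately below (using HMAC-SHA256 as a stand-in PRF and interpreting the combination of $r_2$ with $pk_H$ as an XOR of equal-length strings; both are our own illustrative choices):

```python
# Illustrative reading of the refresh step (HMAC-SHA256 as the PRF P and XOR for the
# combination of r2 with pk_H are our own stand-ins): the coins actually used by the
# scheme are r1' = P(r1, m) and r2' = P(r2 XOR pk_H, m), not the resettable r1, r2.
import hmac, hashlib, secrets

def prf(key: bytes, message: bytes) -> bytes:
    return hmac.new(key, message, hashlib.sha256).digest()

def refresh_coins(r1: bytes, r2: bytes, pk_h: bytes, m: bytes):
    r2_masked = bytes(a ^ b for a, b in zip(r2, pk_h))
    return prf(r1, m), prf(r2_masked, m)

pk_h = secrets.token_bytes(32)
r1, r2 = secrets.token_bytes(32), secrets.token_bytes(32)   # coins an attacker may reset
# Even when (r1, r2) are replayed, distinct messages yield distinct effective coins:
print(refresh_coins(r1, r2, pk_h, b"message-0") != refresh_coins(r1, r2, pk_h, b"message-1"))
```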

5.4. Security Results for Cryptosystem 1

Theorem 1.
Let $\Phi : K \to \{0,1\}^n$ be a random oracle and let $(G, E, D)$ denote cryptosystem 1. Then, against any non-reversing, equality-pattern-respecting, polynomial-time adversary that, following the attack game of Table 1, makes (a) at most $Q_c$ challenge queries under multiple randomness indices, (b) at most $Q_e$ encryption queries under multiple randomness indices, and (c) at most $Q_d$ decryption queries, cryptosystem 1 is secure against (i) adaptive chosen ciphertext attacks, (ii) $\lambda$ bits of secret-key leakage, and (iii) randomness reset attacks.
 Game 0. 
This game implements the original cryptosystem with no modifications. The following attack game incorporates random oracle Φ queries in the attack game of Table 1. Φ is modelled using an associative array, map , which follows the faithful/forgetful gnome method [2]. The notation map [ j ] for j 1 refers to the jth element of map .
  • proc.Initialize( κ )
    • ( p k , s k ) G ( 1 κ )
    • initialize empty associative array map : K { 0 , 1 } n
    • initialize empty arrays coins and ciphers
    • send p k to adversary
  • proc.Enc( $pk, j, m$ )
    • if coins$[j] = \perp$, then randomly sample $(r_1, r_2, vk_\sigma, sk_\sigma)$ and set coins$[j] = (r_1, r_2, vk_\sigma, sk_\sigma)$
    • compute $c \leftarrow E(pk, m)$ with line 5 modified as:
      -
      if map$[k] = \perp$, then $\zeta \leftarrow \{0,1\}^n$ and set map$[k] = \zeta$. Set $\Pi =$ map$[k]$
    • return $c$
  • proc.Challenge( $j, m_0, m_1$ )
    • if $|m_0| \ne |m_1|$, return ⊥
    • if coins$[j] = \perp$, then randomly sample $(r_1, r_2, vk_\sigma, sk_\sigma)$ and set coins$[j] = (r_1, r_2, vk_\sigma, sk_\sigma)$
    • compute $c^* \leftarrow E(pk, m_a)$ with line 5 modified as:
      -
      if map$[k] = \perp$, then $\zeta \leftarrow \{0,1\}^n$ and set map$[k] = \zeta$. Set $\Pi =$ map$[k]$
    • ciphers $\leftarrow$ ciphers $\cup \{c^*\}$
    • return $c^*$
  • proc.Dec( $c$ )
    • if $c \in$ ciphers, return ⊥
    • compute $m = D(sk, c)$ with line 3 modified as:
      -
      if map$[k'] = \perp$, then $\zeta \leftarrow \{0,1\}^n$ and set map$[k'] = \zeta$. Set $\Pi' =$ map$[k']$
    • return $m$
  • proc.Leak( λ ( κ ) )
    • return λ ( κ ) bits of s k
  • proc.Oracle( $k$ )
    • if map$[k] = \perp$, then $\zeta \leftarrow \{0,1\}^n$ and set map$[k] = \zeta$
    • return map$[k]$
The adversary can perform any number of encryption queries under different randomness indices in coins . Prior to making any challenge query, the adversary can request λ bits of the secret key. At any point in the game, the adversary can perform a decryption query under the non-reversing condition.
 Game 1. 
This game is similar to Game 0, except that $(r_1', r_2')$ are drawn randomly instead of being computed using the pseudorandom function $P$ in $E$.
 Game 2. 
This game is similar to Game 1, except that, once the adversary submits a ciphertext $c = (\sigma, c_H, r, vk_\sigma, \Psi, \Pi)$ for decryption such that $vk_\sigma$ appears in some $c' \in$ ciphers, the challenger returns ⊥.
 Game 3. 
This game is similar to Game 2, except that Π is sampled randomly in E instead of being queried using Φ .
 Game 4. 
This game is similar to Game 3, except that the challenger computes k = H p r i v ( s k H , c H ) instead of H p u b in E .
 Game 5. 
This game is similar to Game 4, except that c H is sampled from C \ V instead of V in E .
 Game 6. 
This game is similar to Game 5, except that if the adversary submits a ciphertext c = ( σ , c H , r , v k σ , Ψ , Π ) for decryption such that c H C \ V , the challenger returns ⊥.
 Game 7. 
This game is similar to Game 6, except that the challenger draws $\Psi$ uniformly at random from $\{0,1\}^l$ instead of computing $\Psi = E(k, r) \oplus m$ in $E$.
Proposition 1.
Game 0 and Game 1 are computationally indistinguishable, given the security of the pseudorandom function P.
Proof. 
To prove this claim, we define hybrid experiments $H_0$, $H_1$, and $H_2$, where $H_0$ emulates the challenger in Game 0, $H_1$ samples $r_1'$ randomly, and $H_2$ samples both $r_1'$ and $r_2'$ randomly. We have to show that, for any $i \in \{0, 1\}$, experiments $H_i$ and $H_{i+1}$ are computationally indistinguishable.
Suppose that some adversary can efficiently distinguish between $H_i$ and $H_{i+1}$ for some $i \in \{0, 1\}$. Using this adversary, we construct a simulator that breaks the security of $P$. The simulator has access to an oracle that, on input $m$, returns a value $y$, where either $y = P(r, m)$ for some random number $r$, or $y$ is sampled using a truly random function. For $i \in \{0, 1, 2\}$, the simulator emulates the challenger in Game 0 perfectly in the initialization phase and draws $a \leftarrow \{0,1\}$. If $i = 0$, the simulator does not modify anything from the challenger in Game 0. For $H_0$, $H_1$, and $H_2$, the simulator knows $sk$, so it can answer decryption and secret-key leakage queries. In encryption queries with input $(pk, m)$, if $i = 1$, the simulator sends $m$ to its oracle and receives $y$, where either $y = P(r_1, m)$ or $y$ is sampled randomly. It sets $r_1' = y$ and proceeds with the rest as before. If $i = 2$, it samples $r_1'$ randomly and sends $m$ to its oracle. It receives $y$, where either $y = P(r_2 \oplus pk_H, m)$ or $y$ is sampled randomly. It sets $r_2' = y$.
In challenge queries, the input is a message pair $(m_0, m_1)$. If $i = 1$, the simulator sends $(m_0, m_1)$ to its oracle and receives $y$, where either $y = P(r_1, m_a)$ or $y$ is sampled randomly. If $i = 2$, the simulator samples $r_1'$ randomly and sends $(m_0, m_1)$ to its oracle. It receives $y$, where either $y = P(r_2 \oplus pk_H, m_a)$ or $y$ is sampled randomly. When the adversary submits a guess $a' \in \{0,1\}$, the simulator outputs 1 if $a' = a$ and 0 otherwise. By construction, the advantage of the simulator in distinguishing outputs of $P$ from values of a truly random function is equal to the advantage of the adversary in distinguishing $H_i$ from $H_{i+1}$. However, due to the pseudo-randomness of $P$, the simulator's advantage is negligible; hence, the adversary's distinguishing advantage is likewise negligible. By construction, experiment $H_0$ perfectly simulates Game 0, while experiment $H_2$ perfectly simulates Game 1. The proposition thus follows. □
Proposition 2.
Game 1 and Game 2 are computationally indistinguishable, given the strong one-time existential unforgeability of the signature scheme.
Proof. 
Games 1 and 2 behave the same, except when the adversary submits a ciphertext query c = ( σ , c H , r , v k σ , Ψ , Π ) , such that F v ( v k σ , ( c H , Ψ ) , σ ) = 1 and v k σ c for some c ciphers but c c . We construct a simulator that attacks the security of the signature scheme as follows. The simulator emulates Game 2 against an adversary. The simulator has access to an oracle that provides it with a verification key v k σ upon request. Since the simulator does not know s k σ , it can query the oracle for a signature, where on input ( c H , r , Ψ ) , the oracle returns σ computed using the hidden s k σ associated with the latest v k σ provided to the simulator. The simulator emulates the challenger in every aspect of the initialization phase of Game 1, but requests for a preliminary v k σ 0 . Since the simulator knows s k , it can answer any decryption query and secret leakage query. Moreover, prior to any challenge or encryption query, for any decryption query with input c = ( σ , c H , r , v k σ , Ψ , Π ) , the simulator checks if v k σ = v k σ 0 and if F v ( v k σ , ( c H , r , Ψ ) , σ ) = 1 . If this condition is met, the simulator outputs ( ( c H , r , Ψ ) , σ ) as a forgery and terminates the simulation. If this event does not occur and the simulator encounters the first challenge or encryption query, the simulator uses v k σ 0 as the verification key and queries the oracle for σ . Subsequent challenge or encryption queries require the simulator to ask the oracle for a fresh v k σ as well as for σ . Once it receives another decryption query with input c = ( σ , c H , r , v k σ , Ψ , Π ) , it checks if v k σ = v k σ with v k σ c for some c ciphers . If this is true for some c , it checks if σ c and ( c H , r , Ψ ) c and F v ( v k σ , ( c H , r , Ψ ) , σ ) = 1 . If this is true as well, the simulator outputs ( ( c H , r , Ψ ) , σ ) and terminates the simulation. By construction, the simulator emulates Game 2 perfectly against the adversary, and the advantage of the simulator in coming up with a forgery is equal to the probability that the adversary queries a ciphertext that meets the conditions mentioned. However, because the signature scheme is strongly one-time unforgeable, no efficient adversary can query such a ciphertext with non-negligible probability. It follows that the probability whereby the simulator outputs a valid forgery is negligible. Thus, with large probability, the ciphertexts that involve v k σ = v k σ with v k σ c for some c ciphers , and which meet the check requirements for decryption are not forgeries and are equal to some prior challenge ciphertext. However, no challenge ciphertexts can be valid for decryption as of Game 0. □
Proposition 3.
Game 2 and Game 3 are computationally indistinguishable, given that Φ is a random oracle.
Proof. 
We note that due to the non-reversing nature of the adversary, no ciphertext outputs of prior encryption or challenge queries can be submitted for decryption. It follows that in Game 3, the adversary cannot use decryption to check if Π is randomly drawn or not. Thus, the only event where Games 2 and 3 differ is when the adversary performs an oracle query in Game 3 on some input k, where k is computed under line 3 of E in some prior encryption or challenge query, and is associated with some randomly drawn Π such that Φ ( k ) = map [ k ] Π in Game 2. The probability that this event occurs is ( Q c + Q e ) / | K | . Since | K | = 2 n , this probability is negligible. □
Proposition 4.
Game 3 and Game 4 are perfectly equivalent.
Proof. 
The claim readily follows, since the change from computing k using H_pub to computing k using H_priv is merely conceptual. □
Proposition 5.
Game 4 and Game 5 are computationally indistinguishable, given the hardness of the underlying subset membership problem in the hash proof system.
Proof. 
To prove this claim, we define two experiments, H 0 and H 1 . H 0 and H 1 behave the same, except that H 0 samples c H from V, while H 1 samples c H from C \ V . Suppose that some adversary can distinguish H 0 and H 1 with non-negligible probability. Using this adversary, we construct a simulator that can break the hardness of the underlying subset membership problem of hash proof system H. The simulator has access to an oracle that, on input p k H , provides it with c H , where we either have c H V , or c H C \ V . At initialization, the simulator emulates the challenger of Game 4 in all aspects. In encryption queries with input ( p k , m ) the simulator forwards p k H p k to its oracle and receives c H . In challenge queries, the simulator forwards p k H to its oracle and receives c H * . In both challenge and encryption queries, it does not compute for c H using H p u b . Since the simulator knows s k , the simulator can answer any decryption or secret-key leakage queries. Once the adversary submits a guess a { 0 , 1 } , the simulator outputs 1 if a = a . It follows that the advantage of the simulator in distinguishing c H V from c H C \ V is equal to the probability that the adversary outputs a such that a = a . However, given the underlying subset membership problem of hash proof system H is hard, the probability that a = a is negligible. It follows that the simulator’s advantage is likewise negligible. Experiment H 0 emulates Game 4, while H 1 emulates Game 5, thereby proving the proposition. □
Proposition 6.
Game 5 and Game 6 are computationally indistinguishable, given the ϵ 1 -universal hash proof system.
Proof. 
Define Z to be the event that some ciphertext c = ( σ , c H , r , v k σ , Ψ , Π ) with c H C \ V , is accepted for a decryption query in Game 5, but is rejected in Game 6. Games 5 and 6 proceed identically until Z occurs. We claim the following.
$$\Pr(Z) \le \frac{Q_d \, 2^{\lambda + (Q_c + Q_e)\, l}}{2^{\nu} - Q_d} \tag{1}$$
Let c = ( σ , c H , r , v k σ , Ψ , Π ) be a decryption query input that triggers Z. In encryption, c H serves as input to Λ s k H . From the adversary’s point of view, Λ s k H is dependent on the set of challenge ciphertexts { c j * } j [ Q c ] where c j * = ( σ j * , c H , j * , r j * , v k σ , j * , Ψ j * , Π j * ) and on the set of encryption query outputs { c i } i [ Q e ] , where c i = ( σ i , c H , i , r i , v k σ , i , Ψ i , Π i ) . For j [ Q c ] , σ j * do not provide additional information on Λ s k H since it is a function of ( c j * , Ψ j * , v k σ , j * ) . The same applies to σ i for i [ Q e ] . The sets of verification keys { v k σ , j * } j [ Q c ] and { v k i * } i [ Q e ] do not provide additional information on Λ s k H since they are sampled independently. Likewise, { Π j * } j [ Q c ] and { Π i } i [ Q e ] do not provide additional information since they are randomly sampled as of Game 5. It follows that only { c H , j * , Ψ j * } j [ Q c ] and { c H , i , Ψ i } i [ Q e ] provide information on Λ s k H . Given that for all j [ Q c ] , Ψ j * has 2 l possible values and for all i [ Q e ] , Ψ i has 2 l possible values, we have the following using Lemma 1.
$$\tilde{\Gamma}\bigl(\Lambda_{sk_H}(c_H) \mid (pk_H, c_H, \lambda, \{c_j^*\}_{j \in [Q_c]}, \{c_i\}_{i \in [Q_e]})\bigr) \ge \tilde{\Gamma}\bigl(\Lambda_{sk_H}(c_H) \mid (pk_H, c_H, \lambda)\bigr) - Q_e\, l - Q_c\, l$$
Applying Lemma 1 to the secret-key leakage λ , the above reduces to:
$$\tilde{\Gamma}\bigl(\Lambda_{sk_H}(c_H) \mid (pk_H, c_H, \lambda, \{c_j^*\}_{j \in [Q_c]}, \{c_i\}_{i \in [Q_e]})\bigr) \ge \tilde{\Gamma}\bigl(\Lambda_{sk_H}(c_H) \mid (pk_H, c_H)\bigr) - \lambda - (Q_c + Q_e)\, l$$
Using the fact that H is an ϵ 1 -universal hash proof system we have:
$$\Gamma\bigl(\Lambda_{sk_H}(c_H) \mid pk_H, c_H\bigr) \ge \nu = \log(1/\epsilon_1)$$
In addition, we can assume that k = Λ_{sk_H}(c_H) has not been queried to the oracle, since the event that k is queried is taken into consideration in Proposition 3. It follows that, from the adversary's point of view, the mapping from k to Π is injective. Since injective mappings preserve average min-entropies, we have:
$$\tilde{\Gamma}\bigl(\Pi \mid (pk_H, c_H, \lambda, \{c_j^*\}_{j \in [Q_c]}, \{c_i\}_{i \in [Q_e]})\bigr) \ge \nu - \lambda - (Q_c + Q_e)\, l$$
Exponentiating, a single decryption query triggers Z with probability at most 2^{λ + (Q_c + Q_e) l − ν}, so over Q_d decryption queries the probability that Z occurs is at most Q_d · 2^{λ + (Q_c + Q_e) l} / 2^ν. Moreover, assuming that up to Q_d decryption queries are not rejected, the adversary can rule out up to Q_d values of k ∈ K (i.e., outputs of Λ_{sk_H}). Combining these, we obtain Equation (1), which is an upper bound on the probability of Z. This bound is negligible given that λ ≤ ν − λ − (Q_c + Q_e) l − α log(κ) under the assumptions. This proves the proposition. □
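To get a feel for the magnitude of the bound in Equation (1), the short Python sketch below evaluates its right-hand side exactly for purely illustrative parameter values of our own choosing (they are not parameters prescribed by the paper).

```python
from fractions import Fraction

# Illustrative (hypothetical) parameters; nu is chosen so that
# lambda <= nu - lambda - (Q_c + Q_e)*l - alpha*log2(kappa) holds.
nu, lam, l, alpha, kappa = 240, 32, 16, 1, 128
Q_c, Q_e, Q_d = 4, 4, 16

# Right-hand side of Equation (1), evaluated exactly over the rationals.
bound = Fraction(Q_d * 2**(lam + (Q_c + Q_e) * l), 2**nu - Q_d)
approx_log2 = bound.numerator.bit_length() - bound.denominator.bit_length()
print(f"Pr(Z) <= roughly 2^{approx_log2}")   # roughly 2^-75 for these values
```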
Proposition 7.
Game 6 and Game 7 are computationally indistinguishable, given that E is a ((ν − λ − (Q_c + Q_e) l)/Q_c, ϵ_2) average-case strong randomness extractor.
Proof. 
By Game 6, all ciphertexts with an invalid c H component are explicitly rejected for decryption. Given any challenge ciphertext c j * = ( σ j * , c H , j * , r j * , v k σ , j * , Ψ j * , Π j * ) for j Q c , the adversary cannot learn any additional information on Λ s k H ( c H , j * ) aside from those provided by p k , Ψ j * , the secret-key leakage λ , and outputs of up to Q e encryption queries: { c i } i [ Q e ] and Q c challenge queries: { c j * } j [ Q c ] . Let Ψ i c i and Ψ j * c j * . Both Ψ i and Ψ j * have 2 l possible values for i [ Q e ] and j [ Q c ] . Denote by v A the information from the point of view of the adversary, i.e., v A = ( p k H , { Ψ j * } j [ Q c ] , { Ψ i } i [ Q e ] , λ ) . Under the assumptions of the cryptosystem, for all p k H and c H , j * C \ V , we have Γ ˜ ( Λ s k H ( c H , j * ) | ( p k H , c H , j * ) ) ν . Combining these, we apply Lemma 1, and have the following result for each j Q c .
$$\tilde{\Gamma}\bigl(\Lambda_{sk_H}(c_{H,j}^*) \mid v_A\bigr) \ge \tilde{\Gamma}\bigl(\Lambda_{sk_H}(c_{H,j}^*) \mid (pk_H, c_{H,j}^*)\bigr) - \lambda - (Q_c + Q_e)\, l \ge \nu - \lambda - (Q_c + Q_e)\, l$$
Given that extractor E is a ((ν − λ − (Q_c + Q_e) l)/Q_c, ϵ_2) average-case strong randomness extractor, the value of E(Λ_{sk_H}(c_{H,j}*)) is ϵ_2-close to uniform from the point of view of the adversary. The claim thus follows, and combining all of these claims also proves the stated theorem. □
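The extractor E is treated abstractly throughout. Purely as an illustration of the average-case strong extractor interface used above, the sketch below instantiates an extractor from a 2-universal hash family in the spirit of the leftover hash lemma; the modulus, output length, and helper names are our own assumptions, not the parameters required by the scheme.

```python
import secrets

P_MOD = (1 << 127) - 1      # a Mersenne prime used by the toy hash family
OUT_BITS = 32               # extractor output length (illustrative)

def extract(seed: int, k: int) -> int:
    """Evaluate the 2-universal hash h_{a,b}(k) = ((a*k + b) mod P_MOD) truncated to OUT_BITS."""
    a = seed % (P_MOD - 1) + 1          # hash coefficients derived from the public seed
    b = (seed >> 127) % P_MOD
    return ((a * k + b) % P_MOD) & ((1 << OUT_BITS) - 1)

# Usage in the style of the scheme: Psi = E(k, r2) XOR m, recovered with the same k.
seed = secrets.randbits(254)            # public random seed (plays the role of r2)
k = secrets.randbits(126)               # high min-entropy input (plays the role of Lambda_skH(c_H))
m = 0x1234ABCD
psi = extract(seed, k) ^ m
assert psi ^ extract(seed, k) == m
```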

5.5. Cryptosystem 2

The rationale for constructing cryptosystem 2 over cryptosystem 1 is to do away with the non-reversing property of the adversary along with the need for random oracles. Cryptosystem 2 uses an L M collection of lossy functions. The idea behind an L M collection is that the system can sample up to M lossy branches. Having several lossy branches is useful given that, in a cryptosystem that uses lossy function primitives, the branch is published in the ciphertext.

5.6. Cryptosystem 2 Requirements

Let Q_e, Q_d, and Q_c denote the bounds on the number of encryption, decryption, and challenge queries, respectively. Define M := Q_e + Q_c and θ := n(M − 1) + log(p). All requirements of cryptosystem 2 are the same as those of cryptosystem 1, except for requirements 2, 3, 6, and 7, which are now as follows (a small numeric sketch of these parameter definitions follows the list).
2.
An (n, p) L^M lossy function collection given by (L_g^M, L_e^M), with function index set S, branch set B, set of lossy branches B′ ⊆ B containing lossy branches b′ ∈ B′, and where L_e^M : S × B × K → {0,1}^{nM}.
3.
E is a ((ν − λ − (Q_c + Q_e)(θ + l))/Q_c, ϵ_2) average-case strong randomness extractor.
6.
Elements of K, R, C, SK_H, and PK_H have length n. Elements of M have length l. Elements of S have length n^2 M.
7.
ν, l, λ, and p are such that λ ≤ ν − λ − (Q_c + Q_e)(θ + l) − α log(κ) for some constant α ≥ 0.
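As mentioned above, the following small sketch plugs hypothetical values (our own, not prescribed by the paper) into these definitions, computing M, θ, and the smallest ν for which requirement 7 can hold.

```python
import math

# Illustrative (hypothetical) parameter choices for cryptosystem 2's requirements.
kappa = 128                      # security parameter
Q_e, Q_c = 4, 4                  # encryption / challenge query bounds
n, p = 8, 2**61 - 1              # ABO dimension and group order (toy values)
l, lam, alpha = 16, 32, 1        # message length, leakage bound, constant alpha

M = Q_e + Q_c                            # M := Q_e + Q_c lossy branches
theta = n * (M - 1) + math.log2(p)       # theta := n(M - 1) + log(p)

# Requirement 7: lambda <= nu - lambda - (Q_c + Q_e)(theta + l) - alpha*log(kappa),
# so the smallest admissible nu is:
nu_min = 2 * lam + (Q_c + Q_e) * (theta + l) + alpha * math.log2(kappa)
print(f"M = {M}, theta = {theta:.1f}, required nu >= {nu_min:.1f} bits")
```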

5.7. Cryptosystem 2

 Key Generation. 
G(1^κ) first runs H_g(1^κ) to obtain sk_H ∈ SK_H and Λ_{sk_H} with μ. It computes pk_H = μ(sk_H) ∈ PK_H. It defines q := (0^n_1, 0^n_2, ..., 0^n_M) and generates s ← L_g^M(1^κ, q). The output is a public/secret key pair (pk, sk), where pk = (pk_H, s) and sk = sk_H, along with q.
 Encryption. 
E ( p k , m ) : on input p k = ( p k H , s ) and message m M , performs the following:
  • It samples r_1 ∈ R, then computes r_1′ = P(r_1, m) and sets ω = r_1′. Using pk_H and ω, it chooses c_H ∈ V.
  • It samples r_2 ∈ R, then computes r_2′ = r_2 ⊕ pk_H, followed by r_2″ = P(r_2′, m) ∈ R. It then computes k = H_pub(pk_H, c_H, ω) and Ψ = E(k, r_2″) ⊕ m.
  • It generates (vk_σ, sk_σ) ← F_g(1^κ). It defines b = vk_σ and computes σ = F_s(sk_σ, (c_H, r_2″, Ψ)).
  • It computes Π = L_e^M(s, b, k).
  • It returns the ciphertext c = (σ, c_H, r_2″, b, Ψ, Π).
 Decryption. 
D ( s k , c ) : on input s k = s k H and c = ( σ , c H , r , b , Ψ , Π ) , performs the following:
  • The algorithm checks whether F_v(b, (c_H, r, Ψ), σ) = 1. If not, it outputs ⊥.
  • It computes k′ = H_priv(sk_H, c_H) and Π′ = L_e^M(s, b, k′).
  • It checks whether Π′ = Π. If not, it outputs ⊥.
  • It returns the plaintext message m = Ψ ⊕ E(k′, r).
For cryptosystem 2, the role of P is to generate fresh random numbers r_1′ and r_2″ using the joint entropy of the message m and the old random numbers r_1 and r_2 (which may be subject to reset attacks). As in cryptosystem 1, r_1′ and r_2″ serve as the actual randomness inputs to H_pub and E, respectively. To show correctness of the cryptosystem, given m and pk, E first computes r_1′ and r_2″, followed by k ← H_pub(pk_H, c_H, ω = r_1′), Ψ = E(k, r_2″) ⊕ m, and Π = L_e^M(s, b, k), where s is part of pk and b is equal to vk_σ. Let c = (σ, c_H, r_2″, b, Ψ, Π) be the corresponding ciphertext, where b = vk_σ is jointly sampled with sk_σ under F_g and σ is derived using F_s as shown above. If this c is given to D, then F_v(b, (c_H, r_2″, Ψ), σ) = 1, since b = vk_σ, σ = F_s(sk_σ, (c_H, r_2″, Ψ)), and the signature scheme (F_g, F_s, F_v) satisfies the correctness property. Having passed this first check, D computes k′ = H_priv(sk_H, c_H); we have k′ = k, since H_priv uses the same c_H from encryption, sk_H is paired with pk_H under H_g, and (H_g, H_pub, H_priv) satisfies the correctness property of a hash proof system. Given that k′ = k, we have Π′ = Π under L_e^M, since s and b are the same as those used in E and L_e^M is a function, thereby passing the second check. Finally, with k′ = k, we have m = Ψ ⊕ E(k′, r_2″) = Ψ ⊕ E(k, r_2″), and the original message m is recovered.
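To make the control flow of E and D concrete, the following runnable sketch mirrors their structure with intentionally insecure stand-ins: SHA-256 hashes replace the hash proof system, the extractor E, and L_e^M, while a tiny Lamport scheme stands in for the one-time signature (F_g, F_s, F_v). All helper names are ours and purely illustrative; only the overall shape of the two decryption checks matches the scheme.

```python
import hashlib, secrets

def H(*parts) -> bytes:
    """SHA-256 of the repr of the inputs (toy stand-in for several primitives)."""
    h = hashlib.sha256()
    for p in parts:
        h.update(repr(p).encode())
    return h.digest()

# --- toy Lamport one-time signature over 32-bit digests ----------------------
def ots_gen(bits=32):
    sk = [[secrets.token_bytes(16), secrets.token_bytes(16)] for _ in range(bits)]
    vk = [[hashlib.sha256(x).digest() for x in pair] for pair in sk]
    return vk, sk

def ots_sign(sk, msg: bytes):
    d = int.from_bytes(hashlib.sha256(msg).digest()[:4], "big")
    return [sk[i][(d >> i) & 1] for i in range(len(sk))]

def ots_verify(vk, msg: bytes, sig) -> bool:
    d = int.from_bytes(hashlib.sha256(msg).digest()[:4], "big")
    return all(hashlib.sha256(sig[i]).digest() == vk[i][(d >> i) & 1]
               for i in range(len(vk)))

# --- cryptosystem 2 control flow with stand-in primitives --------------------
def keygen():
    sk_H = secrets.token_bytes(32)
    pk_H = H("mu", sk_H)                       # pk_H = mu(sk_H)              (stand-in)
    s = "toy-LM-function-index"                # s <- L_g^M(1^kappa, q)       (stand-in)
    return (pk_H, s), sk_H

def encrypt(pk, m: int):
    pk_H, s = pk
    r1 = H("P", secrets.token_bytes(16), m)        # r1' = P(r1, m); omega := r1'
    c_H = H("cH", pk_H, r1)                        # c_H chosen in V from pk_H and omega
    r2 = H("P", secrets.token_bytes(16), pk_H, m)  # r2'' = P(r2 xor pk_H, m)
    k = H("k", pk_H, c_H)                          # H_pub (witness structure elided here)
    psi = int.from_bytes(H("Ext", k, r2)[:4], "big") ^ m   # Psi = E(k, r2'') xor m
    vk, sk_sig = ots_gen()                         # branch b := vk_sigma
    sigma = ots_sign(sk_sig, H("sig", c_H, r2, psi))
    pi = H("LeM", s, vk, k)                        # Pi = L_e^M(s, b, k)      (stand-in)
    return (sigma, c_H, r2, vk, psi, pi)

def decrypt(sk_H, s, c):
    sigma, c_H, r2, vk, psi, pi = c
    if not ots_verify(vk, H("sig", c_H, r2, psi), sigma):
        return None                                # first check: F_v(b, (c_H, r2'', Psi), sigma)
    k = H("k", H("mu", sk_H), c_H)                 # H_priv recomputes the same k from sk_H
    if H("LeM", s, vk, k) != pi:
        return None                                # second check: Pi' = Pi
    return int.from_bytes(H("Ext", k, r2)[:4], "big") ^ psi   # m = Psi xor E(k, r2'')

pk, sk = keygen()
ct = encrypt(pk, 0xCAFE)
assert decrypt(sk, pk[1], ct) == 0xCAFE
```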

5.8. Security Results for Cryptosystem 2 Scheme

Theorem 2.
Let (G, E, D) denote cryptosystem 2. For any equality-respecting, polynomial-time adversary that makes (a) at most Q_c challenge queries under multiple randomness indices, (b) at most Q_e encryption queries under multiple randomness indices, and (c) at most Q_d decryption queries, following the attack game of Table 1, cryptosystem 2 is secure against (i) adaptive chosen ciphertext attack, (ii) λ bits of secret-key leakage, and (iii) randomness reset attacks.
Denote the challenge ciphertext as c j * = ( σ j * , c j , H * , r j * , b j * , Ψ j * , Π j * ) for j [ Q c ] .
 Game 0. 
This game implements the original cryptosystem with no modifications.
  • proc.Initialize( κ )
    • ( p k , s k , q ) G ( 1 κ )
    • initialize empty arrays coins and ciphers
    • initialize empty associative arrays keys e and keys c
    • for i [ Q e ] sample ( v k σ , i , s k σ , i ) F g ( 1 κ , q ) and set keys e = keys e ( v k σ , i , s k σ , i )
    • for j [ Q c ] sample ( v k σ , j * , s k σ , j * ) F g ( 1 κ , q ) and set keys c = keys c ( v k σ , j * , s k σ , j * )
    • send p k to adversary
  • proc.Enc( p k , j , m )
    • if coins [ j ] = then randomly sample ( r 1 , r 2 , v k σ , s k σ ) and set coins [ j ] = ( r 1 , r 2 , v k σ , s k σ )
    • compute c E ( p k , m )
    • return c
  • proc.Challenge( j , m 0 , m 1 )
    • if | m 0 | | m 1 | return ⊥
    • if coins [ j ] = then randomly sample ( r 1 , r 2 , v k σ , s k σ ) and set coins [ j ] = ( r 1 , r 2 , v k σ , s k σ )
    • compute c* ← E(pk, m_a)
    • ciphers ciphers c *
    • return c *
  • proc.Dec(c)
    • if c ciphers , return ⊥
    • compute m = D ( s k , c )
    • return m
  • proc.Leak( λ ( κ ) )
    • return λ ( κ ) bits of s k
 Game 1. 
This game is similar to Game 0, except that in E , the challenger samples r 1 , r 2 randomly.
 Game 2. 
This game is similar to Game 1, except that once the adversary submits a ciphertext c = ( σ , c H , r , b , Ψ , Π ) for decryption such that b = v k σ * and v k σ * c for some c ciphers , the challenger automatically returns ⊥.
 Game 3. 
This game is similar to Game 2, except in encryption query i for i [ Q e ] , on input ( p k , j , m ) from the adversary such that coins [ j ] = , instead of sampling a fresh verification/signing key pair ( v k σ , s k σ ) F g ( 1 κ , q ) , it sets the verification/signing key pair as ( v k σ , i , s k σ , i ) keys e from the initialization phase.
 Game 4. 
This game is similar to Game 3, except in challenge query j for all j ∈ [Q_c], on input (j, m_0, m_1) from the adversary such that coins[j] = ⊥, instead of sampling a fresh verification/signing key pair (vk_σ*, sk_σ*) ← F_g(1^κ, q), it sets the verification/signing key pair as (vk*_{σ,j}, sk*_{σ,j}) ∈ keys_c from the initialization phase.
 Game 5. 
This game is similar to Game 4, except that during initialization, it defines q : = ( v k σ , 1 , v k σ , 2 , . . . , v k σ , Q e , v k σ , 1 * , v k σ , 2 * , . . . , v k σ , Q c * ) instead of q : = ( 0 1 n , 0 2 n , . . . , 0 M n ) .
 Game 6. 
This game is similar to Game 5, except that in E , the challenger computes k = H p r i v ( s k H , c H ) instead of using H p u b .
 Game 7. 
This game is similar to Game 6, except that in encryption queries or challenge queries, c H is sampled from C \ V instead of V.
 Game 8. 
This game is similar to Game 7, except that if the adversary submits a ciphertext c = ( σ , c H , r , b , Ψ , Π ) for decryption such that c H C \ V , the challenger returns ⊥.
 Game 9. 
This game is similar to Game 8, except that in E , the challenger draws Ψ uniformly at random from { 0 , 1 } l instead of computing Ψ = E ( k , r ) m .
Proposition 8.
Game 0 and Game 1 are computationally indistinguishable, given the security of the pseudorandom function P.
Proof. 
The proof for this proposition is similar to the proof for Proposition 1. □
Proposition 9.
Game 1 and Game 2 are computationally indistinguishable, given the strong one-time existential unforgeability of the signature scheme.
Proof. 
The proof for this proposition is similar to the proof for Proposition 2 since b = v k σ in E . □
Proposition 10.
Games 2 and 3 are perfectly equivalent.
Proof. 
We note that the only difference in Games 2 and 3 is that, in Game 3, the verification/signature keys used in encryption queries are drawn from the initialization phase instead of being sampled on the fly. This does not affect any other part of the computation. □
Proposition 11.
Games 3 and 4 are perfectly equivalent.
Proof. 
We note that the only difference in Games 3 and 4 is that, in Game 4, the verification/signature keys used in challenge queries are drawn from the initialization phase instead of being sampled on the fly. This does not affect any other part of the computation. □
Proposition 12.
Game 4 and 5 are indistinguishable given that two candidate lossy branch sets of the L M collection are computationally indistinguishable.
Proof. 
To prove this proposition, we define two experiments, A and B. Experiment A is a sequence of sub-experiments H_1, ..., H_{Q_e+1}. Experiment B is a sequence of sub-experiments H_1, ..., H_{Q_c+1}. For all sub-experiments in A and B, assume fixed keys_e and keys_c.
For experiment A, sub-experiment H 1 emulates the challenger of Game 4 perfectly. Given i [ Q e + 1 ] , sub-experiment H i defines q as q : = ( v k σ , 1 , v k σ , 2 , . . . , v k σ , i 1 , 0 i n , . . . , 0 M n ) in the initialization phase and computes s L g ( 1 κ , q ) , where for i [ Q e ] , we have v k σ , i ( v k σ , i , s k σ , i ) such that ( v k σ , i , s k σ , i ) keys e . Suppose that there exists an efficient adversary that can distinguish between H i and H i + 1 . Using this adversary, we construct a simulator that can distinguish between two candidate lossy branch sets of the L M collection. The simulator has access to an oracle that, on input ( q 0 , q 1 ) , provides it with the function index s, where s can either be s L g ( 1 κ , q 0 ) or s L g ( 1 κ , q 1 ) . At initialization, given i [ Q e ] , the simulator constructs q 0 = ( v k σ , 1 , v k σ , 2 , . . . , v k σ , i 1 , 0 i n , . . . , 0 M n ) and q 1 = ( v k σ , 1 , v k σ , 2 , . . . , v k σ , i 1 , v k σ , i , 0 i + 1 n . . . , 0 M n ) . It forwards ( q 0 , q 1 ) to its oracle and receives function index s. Using s, it constructs p k and s k , draws a { 0 , 1 } , then forwards p k to the adversary. Since the simulator knows p k , it can answer encryption and challenge queries. Since it knows s k , it can answer decryption and secret-key leakage queries. Once the adversary outputs a guess a , the simulator outputs 1 if a = a . The advantage of the simulator in distinguishing between two candidate lossy branch sets of the L M collection is equivalent to the probability that the adversary outputs a such that a = a less 1/2. However, given that it is computationally difficult to distinguish between two lossy branch sets in an L M collection, no efficient adversary can output a such that a = a with non-negligible probability. It follows that the advantage of the simulator is likewise negligible. By construction, if the oracle computes s L g ( 1 κ , q 0 ) , the simulator is performing sub-experiment H i . If the oracle computes s L g ( 1 κ , q 1 ) , the simulator is performing sub-experiment H i + 1 for any i [ Q e ] .
For experiment B, sub-experiment H 1 emulates sub-experiment H Q e + 1 of experiment A perfectly. Given j [ Q c ] , sub-experiment H j defines q as:
$$q = (vk_{\sigma,1}, vk_{\sigma,2}, \ldots, vk_{\sigma,Q_e}, vk_{\sigma,1}^*, vk_{\sigma,2}^*, \ldots, vk_{\sigma,j-1}^*, 0^n_{Q_e+j}, \ldots, 0^n_M)$$
in the initialization phase and computes s ← L_g(1^κ, q), where for j ∈ [Q_c], we have vk*_{σ,j} ∈ (vk*_{σ,j}, sk*_{σ,j}) such that (vk*_{σ,j}, sk*_{σ,j}) ∈ keys_c, and where for i ∈ [Q_e], we have vk_{σ,i} ∈ (vk_{σ,i}, sk_{σ,i}) such that (vk_{σ,i}, sk_{σ,i}) ∈ keys_e. Suppose that there exists an efficient adversary that can distinguish between H_j and H_{j+1}. Using this adversary, we construct a simulator that can distinguish between two candidate lossy branch sets of the L^M collection. The simulator has access to an oracle that, on input (q_0, q_1), provides it with a function index s, where s can either be s ← L_g(1^κ, q_0) or s ← L_g(1^κ, q_1). At initialization, given j ∈ [Q_c], the simulator constructs (q_0, q_1) as follows:
$$\begin{aligned} q_0 &= (vk_{\sigma,1}, vk_{\sigma,2}, \ldots, vk_{\sigma,Q_e}, vk_{\sigma,1}^*, vk_{\sigma,2}^*, \ldots, vk_{\sigma,j-1}^*, 0^n_{Q_e+j}, \ldots, 0^n_M) \\ q_1 &= (vk_{\sigma,1}, vk_{\sigma,2}, \ldots, vk_{\sigma,Q_e}, vk_{\sigma,1}^*, vk_{\sigma,2}^*, \ldots, vk_{\sigma,j-1}^*, vk_{\sigma,j}^*, 0^n_{Q_e+j+1}, \ldots, 0^n_M) \end{aligned}$$
It forwards ( q 0 , q 1 ) to its oracle and receives function index s. Using s, it constructs p k and s k , draws a { 0 , 1 } , then forwards p k to the adversary. Since the simulator knows p k , it can answer encryption and challenge queries. Since it knows s k , it can answer decryption and secret-key leakage queries. Once the adversary outputs a guess a , the simulator outputs 1 if a = a . The advantage of the simulator in distinguishing between two candidate lossy branch sets of the L M collection is equivalent to the probability that the adversary outputs a such that a = a less 1/2. However, given that it is computationally difficult to distinguish between two lossy branch sets in an L M collection, no efficient adversary can output a such that a = a with non-negligible probability. It follows that the advantage of the simulator is likewise negligible. By construction, if the oracle computes s L g ( 1 κ , q 0 ) , the simulator is performing sub-experiment H j . If the oracle computes s L g ( 1 κ , q 1 ) , the simulator is performing sub-experiment H j + 1 for any j [ Q c ] .
Combining the above, we have that sub-experiment H 1 of experiment A perfectly simulates Game 4. Sub-experiment H 1 of experiment B perfectly simulates sub-experiment H Q e + 1 of experiment A. Sub-experiment H Q c + 1 of experiment B perfectly simulates Game 5. Since all sub-experiments in A and B are pairwise indistinguishable, the proposition thus follows. □
Proposition 13.
Games 5 and 6 are perfectly equivalent.
Proof. 
The proof for this proposition is similar to the proof for Proposition 4. □
Proposition 14.
Games 6 and 7 are indistinguishable given the hardness of the underlying subset membership problem in H.
Proof. 
The proof for this proposition is similar to the proof for Proposition 5. □
Proposition 15.
Game 7 and Game 8 are computationally indistinguishable, given (i) the lossy property of the L M collection, (ii) the computational hardness of determining lossy branches for the L M collection, and (iii) the ϵ 1 -universal property of H.
Proof. 
Define Z to be the event that some ciphertext c = ( σ , c H , r , b , Ψ , Π ) with c H C \ V , is accepted for a decryption query in Game 7, but is rejected in Game 8. It follows that Game 7 and Game 8 proceed identically until Z occurs. Given at most Q d decryption queries, Q e encryption queries, and Q c challenge queries, we claim the following.
$$\Pr(Z) \le \Pr(Z_1) + \Pr(Z_2) \le Q_d\,\Theta + Q_d\,\gamma \tag{2}$$
where γ represents the probability that some adversary generates a lossy branch for the L^M collection, and Θ = 2^{λ + Q_c(θ + l) + Q_e(θ + l)} / (2^ν − Q_d), with θ := n(M − 1) + log(p) and ν = log(1/ϵ_1). We first prove the bound Q_d Θ, which accounts for the event Z_1 that c is accepted for decryption, where b ∈ c is an injective branch and c_H ∈ C∖V. Let c = (σ, c_H, r, b, Ψ, Π) be a decryption query input that triggers Z_1. In encryption, c_H serves as input to Λ_{sk_H}. From the adversary's point of view, Λ_{sk_H} is dependent on pk_H, c_H, λ, the set {c_i}_{i∈[Q_e]} of encryption query outputs, and the set {c_j*}_{j∈[Q_c]} of challenge ciphertexts. For i ∈ [Q_e] and j ∈ [Q_c], the signatures σ_i ∈ c_i and σ_j* ∈ c_j* do not provide additional information on Λ_{sk_H}, since they are functions of (c_{H,i}, Ψ_i, b_i) and (c_{H,j}*, Ψ_j*, b_j*), respectively. For i ∈ [Q_e] and j ∈ [Q_c], the branches b_i and b_j* likewise do not provide additional information, as they are independently sampled. Using the results of Lemma 1, we prove the following:
$$\begin{aligned}
\tilde{\Gamma}\bigl(\Lambda_{sk_H}(c_H) \mid (pk_H, c_H, \lambda, \{c_i\}_{i\in[Q_e]}, \{c_j^*\}_{j\in[Q_c]})\bigr) &\ge \tilde{\Gamma}\bigl(\Lambda_{sk_H}(c_H) \mid (pk_H, c_H, \{c_i\}_{i\in[Q_e]}, \{c_j^*\}_{j\in[Q_c]})\bigr) - \lambda \\
&\ge \tilde{\Gamma}\bigl(\Lambda_{sk_H}(c_H) \mid (pk_H, c_H, \{c_i\}_{i\in[Q_e]})\bigr) - \lambda - Q_c\,\theta - Q_c\,l \\
&\ge \tilde{\Gamma}\bigl(\Lambda_{sk_H}(c_H) \mid (pk_H, c_H)\bigr) - \lambda - Q_c\,\theta - Q_c\,l - Q_e\,\theta - Q_e\,l \\
&\ge \nu - \lambda - Q_c(\theta + l) - Q_e(\theta + l)
\end{aligned}$$
The first inequality applies Lemma 1. For the second inequality, given challenge ciphertext c j * = ( σ j * , c H , j * , r j * , b j * , Ψ j * , Π j * ) for j [ Q c ] , information on s k H is provided by c H , j * , Ψ j * and Π j * as shown above. Given that Ψ j * has 2 l possible values, and Π j * has 2 log ( p ) = p possible values, the second inequality follows by applying Lemma 1. For the third inequality, given any encryption query output c i = ( σ i , c H , i , r i , b i , Ψ i , Π i ) for i [ Q e ] , information on s k H is covered by c H , i , Ψ i and Π i as shown above, where Ψ i has 2 l possible values, while Π i has 2 log ( p ) = p possible values. Given that the total number of encryption queries is Q e , the third inequality follows from Lemma 1. For the last inequality, for all p k H and c H C \ V , we have Γ ( Λ s k H ( c H ) | ( p k H , c H ) ) ν = log ( 1 / ϵ 1 ) , under the assumption that H is an ϵ 1 -universal hash proof system.
Using the above set of inequalities, we note that Π = L_e^M(s, b, k) with k = Λ_{sk_H}(c_H). Starting with Game 2, all ciphertexts submitted for decryption involve injective branches. Injective functions preserve the average min-entropy of their input, so Γ̃(L_e^M(s, b, Λ_{sk_H}(c_H)) | v_A) ≥ Γ̃(Λ_{sk_H}(c_H) | v_A), where v_A = (pk_H, c_H, λ, {c_i}_{i∈[Q_e]}, {c_j*}_{j∈[Q_c]}). Using the fact that H is an ϵ_1-universal hash proof system, we have Γ(Λ_{sk_H}(c_H) | (pk_H, c_H)) ≥ ν = log(1/ϵ_1). We then have:
$$\tilde{\Gamma}\bigl(L_e^M(s, b, \Lambda_{sk_H}(c_H)) \mid v_A\bigr) \ge \nu - \lambda - Q_c(\theta + l) - Q_e(\theta + l)$$
Exponentiating, a single decryption query triggers Z_1 with probability at most 2^{λ + (Q_e + Q_c)(θ + l) − ν}, so over Q_d decryption queries the probability that Z_1 occurs is at most Q_d · 2^{λ + (Q_e + Q_c)(θ + l)} / 2^ν. Assuming that up to Q_d decryption queries are not rejected, the adversary can rule out up to Q_d values of k ∈ K (i.e., outputs of Λ_{sk_H}). Combining these, we have:
$$\Pr(Z_1) \le \frac{Q_d\, 2^{\lambda + (Q_e + Q_c)(\theta + l)}}{2^{\nu} - Q_d} = Q_d\,\Theta$$
Pr(Z_1) is negligible given that λ ≤ ν − λ − (Q_e + Q_c)(θ + l) − α log(κ) by assumption. We now prove the bound on the second term Q_d γ on the right-hand side of Equation (2). Q_d γ accounts for the event Z_2 that an adversary submits c for decryption such that b ∈ c is a new lossy branch, i.e., b ∉ c_i for any prior encryption query output c_i and b ∉ c_j* for any c_j* ∈ ciphers. Suppose that Z_2 occurs under some efficient adversary. Using this adversary, we construct a simulator that can efficiently generate a lossy branch for L^M. The simulator has access to an oracle that provides it with a function index s, but the oracle does not disclose the corresponding set of lossy branches. In encryption query i for i ∈ [Q_e], the oracle provides the simulator with a signing/verification key pair (sk_{i,σ}, vk_{i,σ}) such that vk_{i,σ} is equal to a lossy branch in L^M. The same is done by the oracle for challenge queries. At initialization, the simulator completely emulates the challenger of the current game, but requests s from its oracle. Since the simulator knows pk_H, s, and coins, and can request signing and verification keys from its oracle, it can answer any encryption or challenge query. Since the simulator knows sk, it can answer secret-key leakage queries and decryption queries. The simulator keeps a list of all the ciphertext inputs it received for decryption queries. At the end of the game, the simulator randomly picks i ∈ [Q_d] and outputs the branch b_i of the ith decryption query ciphertext input. Suppose that with probability γ, the adversary is able to query a ciphertext for decryption such that its branch is lossy. The probability that the simulator outputs a lossy branch for L^M is then γ/Q_d. However, given that it is computationally difficult to generate a lossy branch for L^M, γ/Q_d, and hence γ, is negligible. It follows that Q_d γ in Equation (2), i.e., Pr(Z_2), is also negligible. Combining all these facts, the proposition follows. □
Proposition 16.
Game 8 and Game 9 are computationally indistinguishable, given that E is a ((ν − λ − (Q_c + Q_e)(θ + l))/Q_c, ϵ_2) average-case strong randomness extractor.
Proof. 
By Game 8, all ciphertexts with an invalid c_H component are explicitly rejected for decryption. Denote the challenge ciphertexts as c_j* = (σ_j*, c_{H,j}*, r_j*, b_j*, Ψ_j*, Π_j*) for j ∈ [Q_c], and the encryption query outputs as c_i = (σ_i, c_{H,i}, r_i, b_i, Ψ_i, Π_i) for i ∈ [Q_e]. For each c_j*, the adversary cannot learn any information on Λ_{sk_H}(c_{H,j}*) aside from that provided by pk_H, Ψ_j* and Π_j*, the secret-key leakage λ, the encryption query outputs {c_i}_{i∈[Q_e]}, and the challenge ciphertexts {c_j*}_{j∈[Q_c]}. Both Ψ_j* and Ψ_i have 2^l possible values for i ∈ [Q_e] and j ∈ [Q_c], while both Π_j* and Π_i have 2^{log(p)} = p possible values. Under the assumptions of the cryptosystem, for all pk_H and c_{H,j}* ∈ C∖V, we have Γ̃(Λ_{sk_H}(c_{H,j}*) | (pk_H, c_{H,j}*)) ≥ ν. Combining these, we apply Lemma 1 and obtain the following result for each j ∈ [Q_c].
$$\begin{aligned}
\tilde{\Gamma}\bigl(\Lambda_{sk_H}(c_{H,j}^*) \mid (pk_H, c_j^*)\bigr) &\ge \tilde{\Gamma}\bigl(\Lambda_{sk_H}(c_{H,j}^*) \mid (pk_H, c_{H,j}^*, \lambda, \{\Psi_j^*\}_{j\in[Q_c]}, \{\Psi_i\}_{i\in[Q_e]}, \{\Pi_j^*\}_{j\in[Q_c]}, \{\Pi_i\}_{i\in[Q_e]})\bigr) \\
&\ge \tilde{\Gamma}\bigl(\Lambda_{sk_H}(c_{H,j}^*) \mid (pk_H, c_{H,j}^*)\bigr) - \lambda - Q_c(\log(p) + l) - Q_e(\log(p) + l) \\
&\ge \nu - \lambda - (Q_c + Q_e)(\log(p) + l)
\end{aligned}$$
Applying the above for all j ∈ [Q_c], and given that extractor E is a ((ν − λ − (Q_c + Q_e)(θ + l))/Q_c, ϵ_2) average-case strong randomness extractor, the value of E(Λ_{sk_H}(c_{H,j}*)) is ϵ_2-close to uniform from the point of view of the adversary. The claim thus follows, and combining all of these claims also proves the stated theorem. □

6. Concrete Instantiations

Let A_ddh be a group sampling algorithm that takes as input a security parameter κ and outputs a tuple (p, G, g), where G is a cyclic group of order p for some prime p, and g is a generator of the group. Let a, b, c ∈ Z_p. The DDH assumption states that the ensemble of tuples {(G, g^a, g^b, g^{ab})}_{κ∈N} is computationally difficult to distinguish from the ensemble of tuples {(G, g^a, g^b, g^c)}_{κ∈N}.
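For intuition only, the sketch below instantiates A_ddh with a hard-coded toy group (the order-1019 subgroup of quadratic residues modulo the safe prime 2039, with the modulus standing in for the abstract description of G) and forms a DDH tuple next to a random tuple. Real instantiations would derive a group of cryptographic size from κ; everything here is an illustrative assumption.

```python
import secrets

def A_ddh(kappa: int = 128):
    """Toy group sampler: returns (p, P, g) with G the order-p subgroup of Z_P^*.

    The parameters are hard-coded for illustration (P = 2p + 1 is a safe prime
    and g generates the quadratic-residue subgroup of order p); kappa is ignored.
    """
    return 1019, 2039, 4

p, P, g = A_ddh()
a, b, c = (secrets.randbelow(p - 1) + 1 for _ in range(3))
ddh_tuple    = (pow(g, a, P), pow(g, b, P), pow(g, (a * b) % p, P))
random_tuple = (pow(g, a, P), pow(g, b, P), pow(g, c, P))
# The DDH assumption says these two distributions should be hard to tell apart;
# in this toy group they are trivially distinguishable by brute force.
print(ddh_tuple, random_tuple)
```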

6.1. Hash Proof System Based on DDH

This concrete example of a hash proof system follows [2].
 Parameter Generation. 
H_g(1^κ) first runs A_ddh to obtain (p, G, g) ← A_ddh(1^κ). It randomly samples u ∈ G, followed by σ, τ ∈ Z_p. It defines the secret key as sk_H = (σ, τ). It defines the projective hash function Λ_{sk_H} : G² → G as Λ_{sk_H}(v, w) = v^σ w^τ. In this case, the projection function μ associated with Λ_{sk_H} is the same map, i.e., μ := Λ_{sk_H}. Using μ, it constructs the public key pk_H = (h, u), where h := μ(g, u) = g^σ u^τ. In this case, the set of ciphertexts C consists of G², i.e., C := G × G. The set of valid ciphertexts V ⊂ C consists of all elements (v, w) of G × G for which (g, u, v, w) is a DH-tuple, i.e., v = g^ω and w = u^ω for some ω ∈ Z_p.
 Public Evaluation. 
H_pub(pk_H, c_H, ω) takes as input the public key pk_H = (h, u), c_H ∈ G × G, and a witness ω ∈ Z_p that c_H ∈ V. Let c_H = (v, w). Given u, H_pub first checks whether (v, w) = (g^ω, u^ω). If true, H_pub outputs h^ω, which equals Λ_{sk_H}(v, w), given that:
$$\Lambda_{sk_H}(v, w) = v^{\sigma} w^{\tau} = (g^{\omega})^{\sigma} (u^{\omega})^{\tau} = (g^{\sigma} u^{\tau})^{\omega} = h^{\omega}$$
We note here that the value of Λ s k H ( v , w ) is evaluated using only the auxiliary information h, without knowing the secret key ( σ , τ ) .
 Private Evaluation. 
H_priv(sk_H, c_H) takes as input the private key sk_H = (σ, τ) and c_H ∈ G × G. Let c_H = (v, w). H_priv evaluates Λ_{sk_H} on any element of C without requiring a witness. In this case, H_priv outputs v^σ w^τ.
As shown in [2], if Λ_{sk_H} is evaluated at some (v, w) without using the auxiliary information h, the probability that Λ_{sk_H}(v, w) equals any particular value is 1/p, which is close to uniform.
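The following sketch turns this description into runnable toy code over the same illustrative group used above; the helper sample_valid, which draws a valid ciphertext together with its witness, is our own addition for the demonstration and not part of the hash proof system interface.

```python
import secrets

p, P, g = 1019, 2039, 4     # toy group: order-p subgroup of Z_P^*, generator g

def H_g():
    """Parameter generation: sk_H = (sigma, tau), pk_H = (h, u) with h = g^sigma * u^tau."""
    u = pow(g, secrets.randbelow(p - 1) + 1, P)
    sigma, tau = secrets.randbelow(p), secrets.randbelow(p)
    h = pow(g, sigma, P) * pow(u, tau, P) % P
    return (h, u), (sigma, tau)

def sample_valid(pk_H):
    """Draw c_H = (v, w) in V together with its witness omega (our helper, not in the paper)."""
    _, u = pk_H
    omega = secrets.randbelow(p - 1) + 1
    return (pow(g, omega, P), pow(u, omega, P)), omega

def H_pub(pk_H, c_H, omega):
    """Public evaluation: check the witness, then output h^omega."""
    h, u = pk_H
    v, w = c_H
    assert (v, w) == (pow(g, omega, P), pow(u, omega, P))
    return pow(h, omega, P)

def H_priv(sk_H, c_H):
    """Private evaluation: output v^sigma * w^tau without any witness."""
    sigma, tau = sk_H
    v, w = c_H
    return pow(v, sigma, P) * pow(w, tau, P) % P

pk_H, sk_H = H_g()
c_H, omega = sample_valid(pk_H)
assert H_pub(pk_H, c_H, omega) == H_priv(sk_H, c_H)   # correctness of the hash proof system
```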

6.2. ABO Function Collection Based on DDH

Given κ, let (p, G, g) ← A_ddh(1^κ). The ElGamal cryptosystem is initiated as follows: (i) a secret key z ∈ Z_p is drawn randomly, and (ii) the public key is computed as h = g^z [8]. Given public key h and secret key z, let E_h denote the encryption algorithm of ElGamal, while D_z is the decryption algorithm. Encryption of a message m ∈ Z_p is done by first drawing a random number r ∈ Z_p, and the ciphertext c is the tuple E_h(m, r) = c = (g^r, h^r g^m). Decryption, on the other hand, is computed as D_z(c) = log_g(h^r g^m / g^{rz}). The correctness of the ElGamal cryptosystem follows from log_g(h^r g^m / g^{rz}) = log_g(g^{rz} g^m / g^{rz}) = log_g(g^m) = m. From [8], this cryptosystem is semantically secure under the DDH assumption and is additively homomorphic in the sense that, given two ciphertexts c_a = E_h(m_1, r_1) and c_b = E_h(m_2, r_2) for any pair of messages m_1, m_2 ∈ Z_p and random elements r_1, r_2 ∈ Z_p, they can be 'added', in the sense that c_a ⊠ c_b = E_h(m_1 + m_2, r_1 + r_2), where ⊠ is the coordinate-wise multiplication of ciphertexts. This can likewise be applied to the exponentiation operation, i.e., E_h(m, r)^x = E_h(mx, rx). We also define the operation ⊞: given a ciphertext c = (c_1, c_2), we can add a scalar v ∈ Z_p to the underlying plaintext, i.e., c ⊞ v := (c_1, c_2 g^v) = E_h(m + v, r), using the r used in computing c.
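As an illustration of these homomorphic operations, the sketch below implements exponent ElGamal over the same toy group together with the ⊠ and ⊞ operations; decryption brute-forces the discrete logarithm of g^m, which is feasible only because the group is tiny.

```python
import secrets

p, P, g = 1019, 2039, 4          # toy group of prime order p inside Z_P^*

z = secrets.randbelow(p)         # secret key
h = pow(g, z, P)                 # public key h = g^z

def E_h(m, r):                   # exponent ElGamal: E_h(m, r) = (g^r, h^r * g^m)
    return pow(g, r, P), pow(h, r, P) * pow(g, m, P) % P

def D_z(c):                      # decryption brute-forces log_g(g^m); feasible only for toy sizes
    c1, c2 = c
    gm = c2 * pow(pow(c1, z, P), -1, P) % P
    return next(m for m in range(p) if pow(g, m, P) == gm)

def box_mul(ca, cb):             # the paper's coordinate-wise multiplication (⊠)
    return ca[0] * cb[0] % P, ca[1] * cb[1] % P

def box_plus(c, v):              # the paper's scalar addition (⊞): add v to the plaintext
    return c[0], c[1] * pow(g, v, P) % P

m1, m2 = 7, 5
r1, r2 = secrets.randbelow(p), secrets.randbelow(p)
assert D_z(box_mul(E_h(m1, r1), E_h(m2, r2))) == (m1 + m2) % p
assert D_z(box_plus(E_h(m1, r1), 3)) == (m1 + 3) % p
assert D_z(tuple(pow(x, 4, P) for x in E_h(m1, r1))) == (4 * m1) % p   # E_h(m, r)^x = E_h(mx, rx)
```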

6.2.1. Matrix Encryption

Let n > 0 be some integer and p a prime number. Given a matrix M = ( m i , j ) Z p n × n , M can be encrypted based on ElGamal. Encryption produces indistinguishable ciphertexts under the DDH assumption (Lemma 4 below). The first step is to sample z j Z p for j [ n ] , followed by computing h j = g z j for j [ n ] . Encrypting M is done by drawing n random numbers r i Z p , and then, for rows i [ n ] and columns j [ n ] , we compute the ciphertext c i j = E h j ( m i j , r i ) . This scheme outputs a ciphertext tuple ( C 1 , C 2 ) that serves as the encryption of M , where the vector C 1 and the matrix C 2 are as follows.
$$C_1 = \begin{pmatrix} g^{r_1} \\ \vdots \\ g^{r_n} \end{pmatrix}, \qquad C_2 = \begin{pmatrix} h_1^{r_1} g^{m_{1,1}} & h_2^{r_1} g^{m_{1,2}} & \cdots & h_n^{r_1} g^{m_{1,n}} \\ \vdots & \vdots & \ddots & \vdots \\ h_1^{r_n} g^{m_{n,1}} & h_2^{r_n} g^{m_{n,2}} & \cdots & h_n^{r_n} g^{m_{n,n}} \end{pmatrix}$$
Lemma 4
([8]). The ElGamal matrix encryption scheme produces indistinguishable ciphertexts.
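A direct toy transcription of the matrix encryption step: column j has its own key h_j = g^{z_j}, row i reuses a single random value r_i, C_1 collects the g^{r_i}, and C_2 collects the masked entries. Group parameters are the same illustrative ones as before.

```python
import secrets

p, P, g = 1019, 2039, 4                       # toy group, as in the previous sketches
n = 3

def matrix_encrypt(M):
    """Encrypt an n x n matrix M over Z_p entry-wise with exponent ElGamal.

    Column j uses public key h_j = g^{z_j}; row i reuses one random r_i.
    Returns (C1, C2) where C1[i] = g^{r_i} and C2[i][j] = h_j^{r_i} * g^{M[i][j]}.
    """
    z = [secrets.randbelow(p) for _ in range(n)]          # per-column secret keys
    h = [pow(g, zj, P) for zj in z]
    r = [secrets.randbelow(p) for _ in range(n)]          # per-row randomness
    C1 = [pow(g, ri, P) for ri in r]
    C2 = [[pow(h[j], r[i], P) * pow(g, M[i][j] % p, P) % P for j in range(n)]
          for i in range(n)]
    return (C1, C2), z

M = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
(C1, C2), z = matrix_encrypt(M)
# Sanity check: entry (i, j) decrypts to M[i][j] using the column secret z_j.
i, j = 1, 2
gm = C2[i][j] * pow(pow(C1[i], z[j], P), -1, P) % P
assert gm == pow(g, M[i][j] % p, P)
```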

6.2.2. DDH-Based ABO Function Collection

Let an (n, p) collection of ABO functions be given by L_g^abo, L_e^abo ([8] also includes a trapdoor algorithm, but we do not include it here). Given κ, let B := B_κ denote the set of branches, with lossy branch b* ∈ B. Each element of B belongs to Z_p^{1×n}. The algorithms are described as follows:
 Function sampling algorithm. 
L_g^abo(1^κ, b*) takes as input the security parameter κ and the lossy branch b* ∈ B. Denote b* := (b_1*, b_2*, ..., b_n*). The algorithm first runs (p, G, g) ← A_ddh(1^κ). Let I denote the identity matrix over Z_p^{n×n}. L_g^abo samples a vector of secret keys z = {z_j}_{j∈[n]} and forms the vector h = g^z ∈ Z_p^{1×n}. It also samples a random vector r ∈ Z_p^{1×n}. The output of L_g^abo consists of the function index s ∈ Z_p^{n×n}, which is the second element (i.e., the matrix C_2) of the ElGamal matrix encryption of b* I, i.e., the diagonal matrix with the entries of b* on its diagonal (using h, r, and z), along with the trapdoor t consisting of the vector of secret keys z = {z_j}_{j∈[n]} ∈ Z_p^{1×n}. The vectors r and h do not need to be published. In detail, we have s as follows.
$$s = \begin{pmatrix} h_1^{r_1} g^{b_1^*} & h_2^{r_1} & \cdots & h_n^{r_1} \\ h_1^{r_2} & h_2^{r_2} g^{b_2^*} & \cdots & h_n^{r_2} \\ \vdots & \vdots & \ddots & \vdots \\ h_1^{r_n} & h_2^{r_n} & \cdots & h_n^{r_n} g^{b_n^*} \end{pmatrix}$$
 Evaluation algorithm. 
L_e^abo(s, b, m) takes as input the function index s (the matrix encryption of b* I) sampled by L_g^abo, a desired branch b from B, and a vector m ∈ Z_p^{1×n}. The output consists of a vector Π = m(s ⊟ b I) ∈ Z_p^{1×n}, where s ⊟ b I denotes applying ⊞ with −b_j to the jth diagonal entry of s, and the product with m is taken homomorphically in the exponent, i.e., Π_j = ∏_{i∈[n]} (s ⊟ b I)_{i,j}^{m_i}. In summary:
$$\Pi = m(s \boxminus b\,I) = \begin{pmatrix} g^{m_1} & g^{m_2} & \cdots & g^{m_n} \end{pmatrix} \begin{pmatrix} h_1^{r_1} g^{(b_1^* - b_1)} & h_2^{r_1} & \cdots & h_n^{r_1} \\ h_1^{r_2} & h_2^{r_2} g^{(b_2^* - b_2)} & \cdots & h_n^{r_2} \\ \vdots & \vdots & \ddots & \vdots \\ h_1^{r_n} & h_2^{r_n} & \cdots & h_n^{r_n} g^{(b_n^* - b_n)} \end{pmatrix} = \begin{pmatrix} h_1^{\langle m, r \rangle} g^{m_1 (b_1^* - b_1)} \\ h_2^{\langle m, r \rangle} g^{m_2 (b_2^* - b_2)} \\ \vdots \\ h_n^{\langle m, r \rangle} g^{m_n (b_n^* - b_n)} \end{pmatrix}^{T} = \begin{pmatrix} g^{z_1 (m_1 r_1 + m_2 r_2 + \cdots + m_n r_n) + m_1 (b_1^* - b_1)} \\ g^{z_2 (m_1 r_1 + m_2 r_2 + \cdots + m_n r_n) + m_2 (b_2^* - b_2)} \\ \vdots \\ g^{z_n (m_1 r_1 + m_2 r_2 + \cdots + m_n r_n) + m_n (b_n^* - b_n)} \end{pmatrix}^{T}$$
Let y_j denote the jth coordinate of Π and let ⟨x, y⟩ denote the dot product of x and y. Let m_j denote the jth coordinate of m, and let E_{h_j}^{c_2} denote the function E_{h_j} that discards the first element c_1 of its output and returns only c_2. Using the homomorphic properties of the system, we can restate y_j as y_j := E_{h_j}^{c_2}((b_j* − b_j) m_j, r := ⟨m, r⟩). This is computed without knowing r and h from the sampling stage; only s is needed to compute Π.
Lemma 5
([8]). The algorithms L g a b o , L e a b o , described above, give a collection of ( n , p ) ABO lossy functions under the DDH assumption for A d d h .
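The sketch below transcribes L_g^abo and L_e^abo over the toy group; the trapdoor (the vector of secret keys z) is returned but unused, and the diagonal bookkeeping mirrors the matrices shown above. As before, this is an illustration under toy parameters, not a production implementation.

```python
import secrets

p, P, g = 1019, 2039, 4                 # toy group, as in the previous sketches
n = 3

def Lg_abo(b_star):
    """Sample a function index s for lossy branch b_star (a length-n vector over Z_p).

    s is the C_2 part of the ElGamal matrix encryption of the diagonal matrix
    with the entries of b_star on its diagonal; z is the trapdoor (unused here).
    """
    z = [secrets.randbelow(p) for _ in range(n)]       # per-column secret keys
    h = [pow(g, zj, P) for zj in z]
    r = [secrets.randbelow(p) for _ in range(n)]       # per-row randomness
    s = [[pow(h[j], r[i], P) * pow(g, (b_star[i] % p) if i == j else 0, P) % P
          for j in range(n)] for i in range(n)]
    return s, z

def Le_abo(s, b, m):
    """Evaluate on branch b: Pi_j = prod_i ((s with b_j subtracted on the diagonal)[i][j])^{m_i}."""
    pi = []
    for j in range(n):
        acc = 1
        for i in range(n):
            entry = s[i][j]
            if i == j:                                 # apply the ⊞ operation with -b[j]
                entry = entry * pow(g, (-b[j]) % p, P) % P
            acc = acc * pow(entry, m[i], P) % P
        pi.append(acc)
    return pi

b_star = [3, 1, 4]                                     # the lossy branch
s, _ = Lg_abo(b_star)
m = [2, 7, 1]
print(Le_abo(s, b_star, m))     # lossy branch: the image has at most p values
print(Le_abo(s, [1, 1, 1], m))  # any other branch behaves injectively
```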

6.3. L M Lossy Function Collection

Let n, p be values of polynomials in κ, and let M ∈ N. Let S denote the set of function indices of an L^M collection and let B = {B_κ}_{κ∈N} denote the collection of sets of branches indexed by κ. A concrete instantiation of an (n, p) L^M collection is given by algorithms L_g^M, L_e^M, which are as follows.

Concrete Instantiation of the L M Collection

Given κ, define B := B_κ as the branch set corresponding to a function index s ∈ S. Define B′ ⊆ B as a lossy branch set of size M with elements b_i ∈ Z_p^{1×n} for i ∈ [M]. Using B′, define q as the ordered tuple q := (b_1, b_2, ..., b_M) for b_i ∈ B′, i ∈ [M]. Suppose that (L_g^abo, L_e^abo) are algorithms that give an (n, p) ABO collection satisfying the required properties. Using (L_g^abo, L_e^abo), we implement (L_g^M, L_e^M) as follows.
 Function sampling algorithm. 
L_g^M(1^κ, q) takes as input κ and q. The algorithm computes (p, G, g) ← A_ddh(1^κ). Given q and p, L_g^M outputs s, where s ∈ S. Let s ∈ Z_{p,1}^{n×n} × Z_{p,2}^{n×n} × ... × Z_{p,M}^{n×n}. Denote by s_i the ith coordinate of s and by q_i the ith coordinate of q. Using the algorithm L_g^abo, we compute s_i ← L_g^abo(1^κ, q_i) for i ∈ [M], so that s = (s_1, s_2, ..., s_M).
 Evaluation algorithm. 
L e M ( s , b , m ) : on input s, branch b B and message m Z p 1 × n , the output is a vector π Z p , 1 1 × n × Z p , 2 1 × n × . . . × Z p , M 1 × n . Denote by s i the ith coordinate of s and denote by π i the ith coordinate of π . Using the algorithm L e a b o , π i is computed as π i = L e a b o ( s i , b , m ) for i [ M ] , so that π = ( π 1 , π 2 , . . , π M ) .
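A minimal sketch of this Cartesian-product construction, assuming the Lg_abo and Le_abo helpers (and the toy group parameters) from the previous sketch are in scope: each lossy branch in q receives its own ABO index, and evaluation applies every coordinate function to the same branch and input.

```python
def Lg_M(q):
    """q = (b*_1, ..., b*_M): sample one ABO function index per lossy branch."""
    return [Lg_abo(b_star)[0] for b_star in q]       # s = (s_1, ..., s_M)

def Le_M(s, b, m):
    """Evaluate every coordinate ABO on the same branch b and input m."""
    return [Le_abo(s_i, b, m) for s_i in s]          # pi = (pi_1, ..., pi_M)

q = [[3, 1, 4], [1, 5, 9]]        # M = 2 lossy branches (toy values)
s = Lg_M(q)
pi = Le_M(s, q[0], [2, 7, 1])     # evaluating on q[0]: lossy in coordinate 1, injective elsewhere
```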
Lemma 6.
Assume that ( L g a b o , L e a b o ) are algorithms that give a ( n , p ) ABO collection of lossy functions satisfying the required properties for an ABO collection. Using ( L g a b o , L e a b o ) , the algorithms ( L g M , L e M ) , presented above, give a L M collection of lossy functions that satisfies the required properties of a L M collection under the DDH assumption.
Proof. 
Property 1 is satisfied since q ∈ Z_{p,1}^{1×n} × Z_{p,2}^{1×n} × ... × Z_{p,M}^{1×n}, which can be constructed in polynomial time by drawing M elements of Z_p^{1×n}. For property 3, each coordinate s_i of the function index s is computed as s_i ← L_g^abo(1^κ, q_i) in polynomial time; since this is done M times, the total running time of L_g^M is polynomial. For property 4, to generate an element of B′ without knowledge of q, one has to guess one of its M elements among the p^n vectors of Z_p^{1×n}, which cannot be done in polynomial time with non-negligible probability. To prove property 3, we construct experiments H_0, H_1, ..., H_{M+1}. Let q_a, q_b ∈ Z_p^{(1×n)M} with q_a ≠ q_b represent two ordered tuples. Denote q_a = (q_{1,1}, q_{1,2}, ..., q_{1,M}) and q_b = (q_{2,1}, q_{2,2}, ..., q_{2,M}). Experiment H_0 computes s ← L_g^M(1^κ, q_a). For i ∈ [M+1], experiment H_i constructs q as follows:
q = ( q 2 , 1 , q 2 , 2 , q 2 , 3 , . . . , q 2 , i 1 , q 1 , i , q 1 , i + 1 , . . . , q 1 , M )
Suppose that some polynomial time adversary can distinguish between H i and H i + 1 for any i [ M 1 ] . Using this adversary, we construct a simulator that breaks property 3 of the ABO collection given by ( L g a b o , L e a b o ) . The simulator has access to an oracle that, on input ( b 0 , b 1 ) , provides it with either s L g a b o ( 1 κ , b 0 ) or s L g a b o ( 1 κ , b 1 ) . For i [ M 1 ] , let q a , q b be fixed with q a q b and where q a = ( q 1 , 1 , q 1 , 2 , . . . , q 1 , M ) and q b = ( q 2 , 1 , q 2 , 2 , . . . , q 2 , M ) . Given i [ M 1 ] , the simulator constructs q 0 and q 1 as follows during initialization:
q 0 = ( q 2 , 1 , q 2 , 2 , q 2 , 3 , . . . , q 2 , i 1 , q 1 , i , q 1 , i + 1 , . . . , q 1 , M ) q 1 = ( q 2 , 1 , q 2 , 2 , q 2 , 3 , . . . , q 2 , i 1 , q 2 , i , q 1 , i + 1 , . . . , q 1 , M )
For i [ M ] \ i , the simulator computes s i L g a b o ( 1 κ , q 0 ) , since q 0 and q 1 are equal for all coordinates not equal to i . For s i , the simulator forwards ( q 1 , i , q 2 , i ) to its oracle and it receives s i . The simulator then forms s = ( s 1 , s 2 , . . . , s M ) and forwards it to its adversary. Once the adversary outputs a guess a { 0 , 1 } , the simulator outputs a . The advantage of the simulator is thus equal to the probability that the adversary outputs a = 0 and the oracle computes s i L g a b o ( 1 κ , q 0 ) . However, given that the ABO collection given by ( L g a b o , L e a b o ) satisfies the required properties, no efficient adversary would be able to output a correctly with non-negligible probability. It follows that the advantage of the simulator is likewise negligible. By construction, if the oracle computes s i L g a b o ( 1 κ , q 1 , i ) , the simulator is performing experiment H i , while if the oracle computes s i L g a b o ( 1 κ , q 2 , i ) , the simulator is performing experiment H i + 1 for i [ M 1 ] . Experiment H 0 computes s L g M ( 1 κ , q a ) , while H M + 1 computes s L g M ( 1 κ , q b ) . This proves property 3.
We now show that (L_g^M, L_e^M) satisfy the lossiness property. Let s ← L_g^M(1^κ, q) for some q. We note that to efficiently sample a lossy branch, one simply picks any ith coordinate q_i of q. Let π_j denote the jth coordinate of π, where π is the output of L_e^M(s, q_i, k) for some k ∈ Z_p^{1×n}. For j ∈ [M] with i ≠ j, we have π_j = L_e^abo(s_j, q_i, k), which is injective since q_i ≠ q_j. For j = i, we have π_j = L_e^abo(s_i, q_i, k), which is lossy given that s_i ← L_g^abo(1^κ, q_i), i.e., s_i is computed with q_i as the lossy branch. It follows that π_i has at most p possible values. Transferring from the Z_p domain to the binary domain, let π_j with i ≠ j have 2^n possible values for some n ∈ N. It follows that π can have at most 2^{n(M−1)+log(p)} possible values. This proves the lossiness property. □

7. Conclusions and Future Work

In this paper, we proposed cryptosystems that seek to address the problem of constructing CCA2 secure public-key cryptosystems that are resilient against related randomness attacks and related key attacks, albeit under the more limited classes of randomness reset attacks and constant-bit secret-key leakage attacks. Under this security notion, attacks from both the encryption and decryption processes of a cryptosystem are considered. We formally define this security notion in terms of an attack game between a challenger and an adversary, where the adversary has access to encryption queries with randomness reset and secret-key leakage queries aside from the standard challenge queries and decryption queries in CCA2 security. Under this attack game, we have presented two cryptosystems that are provably secure, where the first cryptosystem relies on the random oracle assumption, while the second cryptosystem is a standard model that relies on a proposed primitive called L M lossy functions, which can provide up to M lossy branches. In particular, the second cryptosystem uses the loss of information provided by multiple lossy branches to render it secure under a bounded number of encryption and challenge queries. While the L M collection exhibits a higher degree of information in the lossy branch and requires more memory than other lossy functions, it is easy to construct, as it uses Cartesian product operations over ABO lossy function primitives.
For future work, stronger public-key cryptosystems could, perhaps, be developed such that they are secure against more general types of related randomness attacks or related key attacks. For instance, instead of a randomness reset, a randomness attack may involve linear or polynomial functions of the randomness. The same can be said of secret-key leakage attacks, wherein, instead of the leakage of a constant number of bits, the leakage could be arbitrary functions of the secret key. In addition, further analysis can be done on the efficiency and experimental performance of the algorithm. In particular, the following can be explored: (1) to experimentally assess the efficiency/speed of cryptosystem 1 using an appropriate hash function as a simulated random oracle, and to compare this with cryptosystem 2 using the LM hash function and with other algorithms; (2) to experimentally assess the memory complexity of the LM lossy function relative to the other hash functions in the literature; (3) to provide a discussion on the time and space complexity of our algorithm relative to other algorithms; and (4) to perform an experimental simulation of a randomness reset attack and constant secret-key leakage attack, and demonstrate the algorithm’s response and resiliency to these attacks

Author Contributions

Conceptualization, A.L.; formal analysis, A.L. and H.A.; writing—review and editing, H.A. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Engineering Research and Development Technology (ERDT) program of the Department of Science and Technology (DOST), Philippines.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Koç, Ç.K.; Özdemir, F.; Ödemiş Özger, Z. Rivest-Shamir-Adleman Algorithm. In Partially Homomorphic Encryption; Springer: Berlin/Heidelberg, Germany, 2021; pp. 37–41. [Google Scholar]
  2. Boneh, D.; Shoup, V. A graduate Course in Applied Cryptography. 2020. Available online: https://toc.cryptobook.us/book.pdf (accessed on 23 December 2021).
  3. Cramer, R.; Shoup, V. A practical public key cryptosystem provably secure against adaptive chosen ciphertext attack. In Annual International Cryptology Conference; Springer: Berlin/Heidelberg, Germany, 1998; pp. 13–25. [Google Scholar]
  4. Bellare, M.; Rogaway, P. Random oracles are practical: A paradigm for designing efficient protocols. In Proceedings of the 1st ACM Conference on Computer and Communications Security, Fairfax, VA, USA, 3–5 November 1993; pp. 62–73. [Google Scholar]
  5. Cramer, R.; Shoup, V. Universal hash proofs and a paradigm for adaptive chosen ciphertext secure public-key encryption. In Proceedings of the International Conference on the Theory and Applications of Cryptographic Techniques, Amsterdam, The Netherlands, 28 April–2 May 2002; Springer: Berlin/Heidelberg, Germany, 2002; pp. 45–64. [Google Scholar]
  6. Lucks, S. A variant of the Cramer-Shoup cryptosystem for groups of unknown order. In Proceedings of the International Conference on the Theory and Application of Cryptology and Information Security, Queenstown, New Zealand, 1–5 December 2002; Springer: Berlin/Heidelberg, Germany, 2002; pp. 27–45. [Google Scholar]
  7. Lafourcade, P.; Robert, L.; Sow, D. Fast Cramer-Shoup Cryptosystem. In Proceedings of the 18th International Conference on Security and Cryptography SECRYPT 2021, Paris, France, 6–8 July 2021. [Google Scholar]
  8. Peikert, C.; Waters, B. Lossy trapdoor functions and their applications. SIAM J. Comput. 2011, 40, 1803–1844. [Google Scholar] [CrossRef] [Green Version]
  9. Naor, M.; Segev, G. Public-key cryptosystems resilient to key leakage. In Proceedings of the Annual International Cryptology Conference, Santa Barbara, CA, USA, 16–20 August 2009; Springer: Berlin/Heidelberg, Germany, 2009; pp. 18–35. [Google Scholar]
  10. Bellare, M.; Brakerski, Z.; Naor, M.; Ristenpart, T.; Segev, G.; Shacham, H.; Yilek, S. Hedged public-key encryption: How to protect against bad randomness. In Proceedings of the International Conference on the Theory and Application of Cryptology and Information Security, Tokyo, Japan, 6–10 December 2009; Springer: Berlin/Heidelberg, Germany, 2009; pp. 232–249. [Google Scholar]
  11. Paterson, K.G.; Schuldt, J.C.; Sibborn, D.L. Related randomness attacks for public key encryption. In Proceedings of the International Workshop on Public Key Cryptography, Buenos Aires, Argentina, 26–28 March 2014; Springer: Berlin/Heidelberg, Germany, 2014; pp. 465–482. [Google Scholar]
  12. Faonio, A.; Venturi, D. Efficient public-key cryptography with bounded leakage and tamper resilience. In Proceedings of the International Conference on the Theory and Application of Cryptology and Information Security, Hanoi, Vietnam, 4–8 December 2016; Springer: Berlin/Heidelberg, Germany, 2016; pp. 877–907. [Google Scholar]
  13. Qin, B.; Liu, S. Leakage-resilient chosen-ciphertext secure public-key encryption from hash proof system and one-time lossy filter. In Proceedings of the International Conference on the Theory and Application of Cryptology and Information Security, Bengaluru, India, 1–5 December 2013; Springer: Berlin/Heidelberg, Germany, 2013; pp. 381–400. [Google Scholar]
  14. Marton, K.; Suciu, A.; Ignat, I. Randomness in digital cryptography: A survey. Rom. J. Inf. Sci. Technol. 2010, 13, 219–240. [Google Scholar]
  15. Yuen, T.H.; Zhang, C.; Chow, S.S.; Yiu, S.M. Related randomness attacks for public key cryptosystems. In Proceedings of the 10th ACM Symposium on Information, Computer and Communications Security, Singapore, 14–17 April 2015; pp. 215–223. [Google Scholar]
  16. Yilek, S. Resettable public-key encryption: How to encrypt on a virtual machine. In Proceedings of the Cryptographers’ Track at the RSA Conference, San Francisco, CA, USA, 1–5 March 2010; Springer: Berlin/Heidelberg, Germany, 2010; pp. 41–56. [Google Scholar]
  17. Garfinkel, T.; Rosenblum, M. When Virtual Is Harder than Real: Security Challenges in Virtual Machine Based Computing Environments. In HotOS; USENIX: Berkeley, CA, USA, 2005. [Google Scholar]
  18. Ristenpart, T.; Yilek, S. When Good Randomness Goes Bad: Virtual Machine Reset Vulnerabilities and Hedging Deployed Cryptography. In NDSS; Internet Society: Reston, VA, USA, 2010. [Google Scholar]
  19. Bellare, M.; Rogaway, P. Code-Based Game-Playing Proofs and the Security of Triple Encryption. IACR Cryptol. ePrint Arch. 2004, 2004, 331. [Google Scholar]
  20. Austrin, P.; Chung, K.M.; Mahmoody, M.; Pass, R.; Seth, K. On the impossibility of cryptography with tamperable randomness. Algorithmica 2017, 79, 1052–1101. [Google Scholar] [CrossRef] [Green Version]
  21. Tiri, K. Side-channel attack pitfalls. In Proceedings of the 2007 44th ACM/IEEE Design Automation Conference, San Diego, CA, USA, 4–8 June 2007; pp. 15–20. [Google Scholar]
  22. Nisan, N.; Ta-Shma, A. Extracting randomness: A survey and new constructions. J. Comput. Syst. Sci. 1999, 58, 148–173. [Google Scholar] [CrossRef] [Green Version]
  23. Dodis, Y.; Vaikuntanathan, V.; Wichs, D. Extracting Randomness from Extractor-Dependent Sources. In Advances in Cryptology–EUROCRYPT 2020; 2020; pp. 313–342. Available online: https://par.nsf.gov/servlets/purl/10165162 (accessed on 23 December 2021).
  24. Dodis, Y.; Reyzin, L.; Smith, A. Fuzzy extractors: How to generate strong keys from biometrics and other noisy data. In Proceedings of the International Conference on the Theory and Applications of Cryptographic Techniques, Interlaken, Switzerland, 2–6 May 2004; Springer: Berlin/Heidelberg, Germany, 2004; pp. 523–540. [Google Scholar]
  25. Håstad, J.; Impagliazzo, R.; Levin, L.A.; Luby, M. A pseudorandom generator from any one-way function. SIAM J. Comput. 1999, 28, 1364–1396. [Google Scholar] [CrossRef]
  26. Hemenway, B.; Libert, B.; Ostrovsky, R.; Vergnaud, D. Lossy encryption: Constructions from general assumptions and efficient selective opening chosen ciphertext security. In Proceedings of the International Conference on the Theory and Application of Cryptology and Information Security, Seoul, Korea, 4–8 December 2011; Springer: Berlin/Heidelberg, Germany, 2011; pp. 70–88. [Google Scholar]
  27. Hofheinz, D. All-but-many lossy trapdoor functions. In Proceedings of the Annual International Conference on the Theory and Applications of Cryptographic Techniques, Cambridge, UK, 15–19 April 2012; Springer: Berlin/Heidelberg, Germany, 2012; pp. 209–227. [Google Scholar]
  28. Canetti, R.; Goldwasser, S. An efficient threshold public key cryptosystem secure against adaptive chosen ciphertext attack. In Proceedings of the International Conference on the Theory and Applications of Cryptographic Techniques, Prague, Czech Republic, 2–6 May 1999; Springer: Berlin/Heidelberg, Germany, 1999; pp. 90–106. [Google Scholar]
  29. Wee, H. Public key encryption against related key attacks. In Proceedings of the International Workshop on Public Key Cryptography, Darmstadt, Germany, 21–23 May 2012; Springer: Berlin/Heidelberg, Germany, 2012; pp. 262–279. [Google Scholar]
  30. Paterson, K.G.; Schuldt, J.C.; Sibborn, D.L.; Wee, H. Security against related randomness attacks via reconstructive extractors. In Proceedings of the IMA International Conference on Cryptography and Coding, Oxford, UK, 15–17 December 2015; Springer: Berlin/Heidelberg, Germany, 2015; pp. 23–40. [Google Scholar]
  31. Boneh, D.; Boyen, X.; Shacham, H. Short group signatures. In Proceedings of the Annual International Cryptology Conference, Santa Barbara, CA, USA, 15–19 August 2004; Springer: Berlin/Heidelberg, Germany, 2004; pp. 41–55. [Google Scholar]
Table 1. Attack game with adaptive chosen ciphertext attack, randomness reset, and constant secret-key leakage, where r ( κ ) is the value of a polynomial in κ that represents the combined length of the random numbers used during encryption. The adversary in this game is allowed to perform multiple encryption and challenge queries under different randomness indices. The notation coins [ j ] and ciphers [ j ] , for j 1 , refers to the jth element of coins and ciphers , respectively.
proc.Initialize(κ)
  • a ← {0, 1}
  • (pk, sk) ← G(1^κ)
  • coins ← ∅
  • ciphers ← ∅
  • return pk
proc.Challenge(j, m_0, m_1)
  • if |m_0| ≠ |m_1| return ⊥
  • if coins[j] = ⊥ then coins[j] ← {0,1}^{r(κ)}
  • r_j ← coins[j]
  • c ← E(pk, m_a, r_j)
  • ciphers ← ciphers ∪ c
  • return c
proc.Dec(c)
  • if c ∈ ciphers then return ⊥
  • else return D(sk, c)
proc.Enc(pk, j, m)
  • if coins[j] = ⊥ then coins[j] ← {0,1}^{r(κ)}
  • r_j ← coins[j]
  • c ← E(pk, m, r_j)
  • return c
proc.Leak(λ(κ))
  • return random λ(κ) bits of sk
proc.Finalize(a′)
  • return (a′ = a)
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
