Article

Application of Discrete Pruned Enumeration in Solving BDD

1 Henan Key Laboratory of Network Cryptography Technology, Zhengzhou 450001, China
2 PLA Information Engineering University, Zhengzhou 450001, China
* Author to whom correspondence should be addressed.
Symmetry 2023, 15(2), 355; https://doi.org/10.3390/sym15020355
Submission received: 30 December 2022 / Revised: 20 January 2023 / Accepted: 20 January 2023 / Published: 28 January 2023
(This article belongs to the Special Issue Frontiers in Cryptography)

Abstract

The bounded distance decoding problem (BDD) is a fundamental problem in lattice-based cryptography, derived from the closest vector problem (CVP). In this paper, we adapt lattice enumeration with discrete pruning, a burgeoning method for the shortest lattice vector problem (SVP), to solve BDD in various cryptanalysis scenarios using a direct method. We first transfer the basic definitions involved in the discrete pruning technique from SVP to CVP, prove the corresponding properties, and give the specific procedures of the algorithm. Additionally, we use the discrete pruning technique to interpret the classical CVP algorithms, including Babai's nearest plane and the Lindner–Peikert nearest planes, which can be regarded as discrete pruned enumeration on some special pruning sets. We propose three probability models in the runtime analysis to accurately estimate the cost of our algorithm in different application scenarios. We study the application of discrete pruned enumeration for BDD mainly on LWE-based cryptosystems and on DSA with partially known nonces. The experimental results show that our new algorithm has higher efficiency than previous algorithms that directly solve BDD, including the nearest plane(s) algorithms and lattice enumeration with classical pruning strategies, and we are able to recover the DSA secret with less leaked information than previous works.

1. Introduction

The closest vector problem (CVP) is one of the cornerstones of lattice-based cryptanalysis and has many variants. Its most significant derivative, the bounded distance decoding problem (BDD), is widely applied in the analysis of post-quantum and classical public-key cryptosystems.
In the analysis of lattice-based post-quantum public-key cryptosystems, the BDD attack is the most direct way to solve the learning with errors (LWE) problem, which constitutes the most important hardness assumption underlying many lattice-based cryptographic primitives. Much work has been devoted to solving the BDD problem underlying LWE [1,2,3,4,5,6,7,8].
For classical public-key cryptosystems, lattices are usually applied in conjunction with side-channel attacks, and lattice algorithms are used to solve the hidden number problem (HNP), which is also a variant of BDD. The lattice method is intensively discussed in key recovery attacks on DSA/ECDSA with partially known nonces [9,10,11,12,13,14].
For most cryptanalysis scenarios related to BDD, the target at the implementation level is to find a particular lattice vector whose distance from a target vector is less than a given bound. To solve these problems, one route is to directly solve BDD using approximation algorithms, such as the famous Babai nearest plane algorithm [15] and its extension, the nearest planes algorithm proposed by Lindner and Peikert [1]. This route was further developed by Liu and Nguyen [2] by introducing an SVP algorithm, GNR enumeration [16], into CVP. Another, indirect route is Kannan's embedding technique [17], which embeds the target vector into a lattice basis and recovers the solution to the original CVP by solving a shortest vector problem (SVP) on the newly constructed lattice. This method performs very well in practice, benefiting from the abundant theoretical work on and practical implementations of various SVP algorithms, especially the lattice reduction algorithms [18,19,20,21]. However, this method only works well on unique-SVP instances, which ensure that the shortest lattice vector corresponds to the secret we want to recover. For lattice methods combined with side-channel attacks, this route sometimes fails to recover the secret, since the lattice constructed from the leaked information cannot guarantee such a "gap" between the secret-related vector and the other lattice vectors.
In this work, we further explore the capability of the first route by using a state-of-the-art technique called discrete pruning to directly solve the CVP and BDD problems. The prototype of discrete pruning was first proposed by Schnorr [22] and improved by [23,24] for solving SVP. The formal concept and a meticulous theoretical analysis were given by [25]. The authors of [26] recently proposed further improvements and corrections, and experimentally showed that the efficiency of discrete pruning is higher than that of GNR enumeration, which indicates a potential advantage in solving problems derived from CVP.
Contributions. From the theoretical point of view, we define the translated lattice partition, the core concept of discrete pruning in our setting, and prove the necessary properties inherited from the original definition. Benefiting from the detailed proofs, we are able to uncover the particular relationship between discrete pruning and the classical CVP algorithms: Babai's nearest plane and its extended version, the nearest planes algorithm. We show that the nearest plane and nearest planes algorithms are special cases of discrete pruned enumeration with specific pruning sets, which in turn indicates that the discrete pruning method is a generalization of the nearest plane(s) algorithms with a globally optimal pruning set. We then propose the overall structure and implementation details of discrete pruned enumeration for solving the generalized BDD problem, and give a cost estimation method with three success probability models covering the different distributions of the error vector in most of the possible application scenarios.
From a practical point of view, we apply discrete pruned enumeration to solving the BDD problems underlying the LWE problem and the HNP derived from DSA with partially known nonces, respectively. We experimentally show that, at the same scale, the discrete pruning method has a shorter running time than other algorithms that directly solve BDD, such as the nearest plane(s) algorithms and enumeration with classical pruning. In the DSA case, we also find that our discrete pruning method with the generalized success probability model can even work with less leaked information, which is beyond the ability of other general methods for solving BDD.

2. Preliminaries

2.1. Lattice

A lattice $\mathcal{L}$ is a discrete additive subgroup of $\mathbb{R}^m$. $\mathcal{L}$ is a rank-$n$ lattice if any point $v \in \mathcal{L}$ can be represented as an integral combination of $n$ linearly independent vectors $b_1, \ldots, b_n \in \mathbb{R}^m$. We use the column matrix $B = [b_1, \ldots, b_n]$ to denote a basis of lattice $\mathcal{L}$, and $\mathcal{L}$ can be defined as $\mathcal{L}(B) = \{\sum_{i=1}^{n} x_i b_i : x_i \in \mathbb{Z}\}$. In this work, we typically consider full-rank integral lattices with basis $B \in \mathbb{Z}^{n \times n}$, as is common in cryptography. A basis $B$ of $\mathcal{L}$ can be transformed into another basis $B'$ by applying elementary column operations, i.e., there exists a unimodular matrix $U$ such that $B' = BU$. A lattice basis corresponds to a fundamental parallelepiped $\mathcal{P}(B) = \{\sum_{i=1}^{n} a_i b_i : 0 \le a_i < 1, i = 1, \ldots, n\}$. The volume of the fundamental parallelepiped is an invariant of the lattice $\mathcal{L}$, which can be computed as $\mathrm{vol}(\mathcal{L}) = \sqrt{\det(B^T B)}$. One can also define the origin-symmetric parallelepiped $\mathcal{P}_{1/2}(B) = \{\sum_{i=1}^{n} a_i b_i : -\frac{1}{2} \le a_i < \frac{1}{2}, i = 1, \ldots, n\}$.
Let $S$ be a measurable set of $\mathbb{R}^n$. The Gaussian heuristic assumes that the number of lattice points contained in $S$ is
$$\#\{S \cap \mathcal{L}\} \approx \mathrm{vol}(S)/\mathrm{vol}(\mathcal{L})$$
Under the Gaussian heuristic assumption, the shortest vector length $\lambda_1(\mathcal{L})$ can be estimated by $\mathrm{GH}(\mathcal{L})$, the radius of an $n$-dimensional ball with volume $\mathrm{vol}(\mathcal{L})$, especially when $n$ is sufficiently large ($n \ge 45$):
$$\mathrm{GH}(\mathcal{L}) = \mathrm{vol}(\mathcal{L})^{1/n} \cdot B_n(1)^{-1/n} = \frac{1}{\sqrt{\pi}}\,\Gamma\!\Big(\frac{n}{2}+1\Big)^{\frac{1}{n}} \mathrm{vol}(\mathcal{L})^{\frac{1}{n}} \approx \sqrt{\frac{n}{2\pi e}}\; \mathrm{vol}(\mathcal{L})^{\frac{1}{n}}$$
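For concreteness, the following minimal Python sketch evaluates this estimate numerically; the function name and the log-volume interface are our own choices and not part of the paper.

```python
import math

def gaussian_heuristic(log_volume: float, n: int) -> float:
    """Estimate lambda_1(L) by GH(L) = (vol(L) / V_n(1))^(1/n).

    log_volume is ln(vol(L)); working in logarithms avoids overflow for large lattices.
    V_n(1) = pi^(n/2) / Gamma(n/2 + 1) is the volume of the n-dimensional unit ball.
    """
    log_unit_ball = (n / 2) * math.log(math.pi) - math.lgamma(n / 2 + 1)
    return math.exp((log_volume - log_unit_ball) / n)

# Example: a lattice of dimension 60 with vol(L) = 2^600
print(gaussian_heuristic(600 * math.log(2), 60))
```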
Orthogonalization and projection. For a lattice basis $B = [b_1, \ldots, b_n] \in \mathbb{Z}^{m \times n}$, its Gram–Schmidt orthogonal basis is defined as $B^* = [b_1^*, \ldots, b_n^*] \in \mathbb{Q}^{m \times n}$, where $b_i^* = b_i - \sum_{j=1}^{i-1} \mu_{i,j} b_j^*$ and the orthogonal coefficients are $\mu_{i,j} = \langle b_i, b_j^* \rangle / \langle b_j^*, b_j^* \rangle$. A more general notion is the $i$-th orthogonal projection $\pi_i: \mathbb{R}^m \to \mathrm{span}(b_1, \ldots, b_{i-1})^{\perp}$. For any vector $v \in \mathbb{R}^m$, we have $\pi_i(v) = v - \sum_{j=1}^{i-1} \frac{\langle v, b_j^* \rangle}{\|b_j^*\|^2} b_j^*$. Furthermore, any $v \in \mathbb{R}^m$ can be represented under the orthogonal basis $B^*$ as $v = \sum_{i=1}^{n} u_i b_i^*$ where $u_i = \langle v, b_i^* \rangle / \|b_i^*\|^2$. We also define the parallelepiped of $B^*$ as $\mathcal{P}_{1/2}(B^*) = \{\sum_{i=1}^{n} a_i b_i^* : -\frac{1}{2} < a_i \le \frac{1}{2}, i = 1, \ldots, n\}$, which is essentially a hypercuboid. (It is conventional to use "left closed and right open" intervals to define the parallelepiped, but to unify all the works introduced below and maintain consistency, we slightly modify the definition of $\mathcal{P}_{1/2}(B^*)$ by using "left open and right closed" intervals.)
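Since the Gram–Schmidt basis $B^*$ and the coefficients $\mu_{i,j}$ are used throughout the paper, a small self-contained sketch may help fix the notation; it is our own helper (column-major numpy basis, no normalization), not code from the paper.

```python
import numpy as np

def gram_schmidt(B: np.ndarray):
    """Gram-Schmidt orthogonalization of the columns of B (no normalization).
    Returns Bstar (columns b_i^*) and the coefficient matrix mu with mu[i, j] = mu_{i,j}."""
    m, n = B.shape
    Bstar = np.zeros((m, n))
    mu = np.zeros((n, n))
    for i in range(n):
        Bstar[:, i] = B[:, i]
        for j in range(i):
            mu[i, j] = B[:, i] @ Bstar[:, j] / (Bstar[:, j] @ Bstar[:, j])
            Bstar[:, i] -= mu[i, j] * Bstar[:, j]
    return Bstar, mu

def project(v: np.ndarray, Bstar: np.ndarray, i: int) -> np.ndarray:
    """pi_i(v): component of v orthogonal to span(b_1, ..., b_{i-1}) (i is 1-indexed)."""
    w = v.astype(float).copy()
    for j in range(i - 1):
        w -= (v @ Bstar[:, j]) / (Bstar[:, j] @ Bstar[:, j]) * Bstar[:, j]
    return w
```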
It is a common scenario in lattice-based cryptanalysis to look for a lattice vector very close to a given target vector $t \in \mathbb{R}^n$. Finding such a lattice vector usually means exposing information about the secret key of a cryptosystem, and is generally a hard computational problem.
Definition 1
(Closest Vector Problem (CVP)). Given a target vector $t$ and a lattice $\mathcal{L}$, find the vector $v \in \mathcal{L}$ such that $\|v - t\| \le \|v' - t\|$ for any other $v' \in \mathcal{L}$.
In other words, CVP asks for $v \in \mathcal{L}$ minimizing $\|v - t\|$. Bounded distance decoding (BDD) is a more common variant of CVP in cryptography.
Definition 2
(Original definition of BDD). Given a target vector $t$ which is very close to $\mathcal{L}(B)$, the goal of the bounded distance decoding problem $\mathrm{BDD}(B, t)$ is to find the unique vector $v_0 \in \mathcal{L}$ closest to $t$.
To guarantee the uniqueness of such a lattice vector, we generally require $\mathrm{dist}(t, \mathcal{L}) \le d$ with $d < \lambda_1/2$. The smaller $\mathrm{dist}(t, \mathcal{L})$ is, the easier BDD is. The vector $v_0 - t$ is called the BDD error. However, in some cases the lattice structure cannot ensure the uniqueness of such a $v_0$: the $v_0$ related to the secret may not be the lattice vector closest to $t$, or there may be more than one lattice vector contained in the searching area of the CVP algorithm. To describe this scenario in cryptanalysis, we propose a generalized description of the BDD problem.
Definition 3
(Generalized definition of BDD). Given a target vector $t$, a lattice $\mathcal{L}(B)$ and an oracle $\mathcal{O}$ which outputs 1 only on a certain $v_0$ and outputs 0 on any other input, the generalized BDD problem $\mathrm{BDD}(B, t, \mathcal{O})$ is to find this $v_0 \in \mathcal{L}$, which is very close to $t$.

2.2. Approximate Algorithms for Solving CVP

The most classical and well-known method for approximately solving CVP (and BDD) is Babai’s Nearest Plane algorithm [15]. The detailed procedures are given by Algorithm 1.
Babai's nearest plane algorithm outputs a lattice vector $v$ "relatively close" to $t$ in polynomial time and space, and guarantees $v - t \in \mathcal{P}_{1/2}(B^*)$. However, a typical well-reduced lattice basis has a geometrically decreasing Gram–Schmidt sequence $\{\|b_i^*\|\}_{i=1}^{n}$, which means that $\mathcal{P}_{1/2}(B^*)$, with its unbalanced side lengths $\|b_i^*\|$, is a narrow box, while the components of the BDD error vector $e = v_0 - t$ generally follow an identical uniform or discrete Gaussian distribution. Therefore, NearestPlane fails to find the right solution of BDD with high probability, since $t$ may lie outside $v_0 + \mathcal{P}_{1/2}(B^*)$.
Algorithm 1 NearestPlane($B$, $t$)
Require: Lattice basis $B$, target vector $t \in \mathbb{R}^n$
Ensure: Lattice vector $v$ such that $v - t \in \mathcal{P}_{1/2}(B^*)$
1: $t' \leftarrow t$
2: $v \leftarrow 0$
3: for $i = n$ to 1 do
4:   $x_i \leftarrow \lceil \langle t', b_i^* \rangle / \langle b_i^*, b_i^* \rangle \rfloor$        // $\lceil x \rfloor \overset{def}{=} \lfloor x + 0.5 \rfloor$
5:   $t' \leftarrow t' - x_i b_i$
6:   $v \leftarrow v + x_i b_i$
7: end for
8: return $v$
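As an illustration, a direct Python transcription of Algorithm 1 could look as follows; it reuses the gram_schmidt helper from the sketch in Section 2.1 and is only a reference sketch, not the implementation used in the experiments.

```python
import numpy as np

def nearest_plane(B: np.ndarray, t: np.ndarray) -> np.ndarray:
    """Babai's nearest plane (Algorithm 1): returns v in L(B) with v - t in P_1/2(B*).
    B holds the basis vectors as columns; round(x) is taken as floor(x + 0.5)."""
    Bstar, _ = gram_schmidt(B)          # helper from the earlier Gram-Schmidt sketch
    tt = t.astype(float).copy()
    v = np.zeros_like(tt)
    n = B.shape[1]
    for i in range(n - 1, -1, -1):      # i = n, ..., 1
        x = np.floor(tt @ Bstar[:, i] / (Bstar[:, i] @ Bstar[:, i]) + 0.5)
        tt -= x * B[:, i]
        v += x * B[:, i]
    return v
```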
Lindner and Peikert [1] proposed an extended version of NearestPlane that enlarges and balances the searching area to increase the success probability. Their NearestPlanes algorithm searches for all $v \in \mathcal{L}$ such that $v - t \in \mathcal{P}_{1/2}(B^* \cdot D)$, where $D$ is a diagonal matrix with manually designated entries $D_{i,i} = d_i$. To keep the side lengths $d_i \cdot \|b_i^*\|$ of $\mathcal{P}_{1/2}(B^* \cdot D)$ "balanced", the smaller $\|b_i^*\|$ is, the larger $d_i$ should be.
The running time of NearestPlanes is exactly $\prod_{i=1}^{k} d_i$ times that of Babai's NearestPlane algorithm. In essence, NearestPlanes can be implemented by conducting a depth-first exhaustive search over the hypercuboids $\{\sum_{i=1}^{n} a_i b_i^* : -\frac{1}{2} + x_i \le a_i < \frac{1}{2} + x_i, i = 1, \ldots, n\}$ for all possible combinations of $(x_1, \ldots, x_n)$, where $x_i \in \{x_i^{(0)}, \ldots, x_i^{(d_i-1)}\}$ are computed in line 4 of Algorithm 2.
Algorithm 2 NearestPlanes($B$, $t$, $d$)
Require: Lattice basis $B = [b_1, \ldots, b_k]$, $d = (d_1, \ldots, d_k) \in \mathbb{N}^k$, target vector $t \in \mathbb{R}^k$
Ensure: A vector set $S = \{v_i \in \mathcal{L} : i = 1, \ldots, \prod_{i=1}^{k} d_i\}$ such that $v_i - t \in \mathcal{P}_{1/2}(B^* \cdot \mathrm{diag}(d))$
1: if $k = 0$ then
2:   return $0$
3: else
4:   Compute the $d_k$ integers $x_k^{(0)}, \ldots, x_k^{(d_k - 1)}$ closest to $\langle t, b_k^* \rangle / \langle b_k^*, b_k^* \rangle$
5:   return $\bigcup_{i=0,\ldots,d_k-1} \{ x_k^{(i)} b_k + \text{NearestPlanes}([b_1, \ldots, b_{k-1}],\, t - x_k^{(i)} b_k,\, (d_1, \ldots, d_{k-1})) \}$
6: end if
In CT-RSA 2013, Liu and Nguyen [2] improved the NearestPlanes algorithm and revealed its connection with Schnorr's random sampling method [22] and lattice enumeration [16,27,28], both of which were originally proposed for solving the shortest lattice vector problem (SVP). Lattice enumeration for SVP tries to find all vectors $v \in \mathcal{L}$ such that $\|\pi_{n+1-k}(v)\| \le R \cdot f(k)$ for all $k = 1, \ldots, n$, where $f(k) \in (0, 1]$ is the pruning function. Liu and Nguyen treat BDD decoding as lattice vector enumeration in an $n$-dimensional ball with radius $R$. This new method requires an extra parameter $R \ge \|v_0 - t\|$, which is a reasonable estimation of the BDD error.
For solving BDD, the enumeration algorithm finds $v \in \mathcal{L}$ such that $\|\pi_{n+1-k}(v - t)\| \le R \cdot f(k)$. From the perspective of implementation, their BDD enumeration essentially changes the "center" of the search at each layer of the enumeration tree. Given the target vector represented under the orthogonal basis as $t = \sum_{i=1}^{n} s_i b_i^*$, the algorithm at layer $k$ starts searching for the component value $x_k$ of $v = \sum_{i=1}^{n} x_i b_i$ from $c_k = s_k - \sum_{i=k+1}^{n} \mu_{ik} x_i$, while this value for classical SVP enumeration is $c_k = -\sum_{i=k+1}^{n} \mu_{ik} x_i$ (see [28], Section 6, Algorithm ENUM, lines 7–8). They also use the extreme pruning method [16] to optimize the pruning function $f(k)$, accelerating BDD enumeration as much as possible.
Another, indirect way to solve BDD is to convert it into a unique-SVP problem by Kannan's embedding method [17] and use SVP algorithms, mainly lattice reduction, to recover the BDD error as the unique shortest vector of the new problem. For a BDD instance $\mathrm{BDD}(B, t)$, the new lattice $\mathcal{L}(B')$ is constructed as
$$B' = \begin{pmatrix} B & t \\ 0 & \tau \end{pmatrix}$$
The basis $B' = (b'_1, \ldots, b'_{n+1})$ is an $(n+1) \times (n+1)$ column matrix with the "embedding factor" $\tau$. If the lattice reduction algorithm outputs a short vector of the form $v = \sum_{i=1}^{n} x_i b'_i \pm 1 \cdot b'_{n+1}$, then one can recover the solution to $\mathrm{BDD}(B, t)$ by extracting $\sum_{i=1}^{n} x_i b_i$, where $b_i$ is the $i$-th column vector of basis $B$. The structure of $B'$ and the value of $\tau$ must be chosen carefully to construct a "gap" between the shortest vector and the other lattice vectors, which guarantees that the lattice reduction algorithm can find it. Benefiting from the strong power of some state-of-the-art lattice reduction techniques [18,20,21,29,30,31], this method is widely applied in the security analysis of LWE-based cryptosystems [3,4,5,6,7,8,32], since it is relatively easy to construct such a "gap" when the number of LWE samples is sufficient. When solving BDD instances reduced from the hidden number problem (HNP), however, this method usually runs into a "barrier": the lattice vector corresponding to the secret is not significantly shorter than the other vectors, since the information acquired from the cryptosystem is limited and cannot build a proper "gap". Albrecht and Heninger [14] break this barrier by introducing a verification subalgorithm into uSVP solving to check whether a point is a valid solution.
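The construction of $B'$ is mechanical; the following short sketch (our own helper names, numpy column convention) shows how the embedded basis is assembled. The first $n$ coordinates of the resulting short vector equal $\sum x_i b_i - t$ (up to sign), so the BDD solution is recovered by adding $t$ back.

```python
import numpy as np

def embed_bdd(B: np.ndarray, t: np.ndarray, tau: float) -> np.ndarray:
    """Kannan's embedding: build B' = [[B, t], [0, tau]] as an (m+1) x (n+1) column basis."""
    m, n = B.shape
    Bp = np.zeros((m + 1, n + 1))
    Bp[:m, :n] = B          # original basis in the top-left block
    Bp[:m, n] = t           # target vector as the last column
    Bp[m, n] = tau          # embedding factor in the bottom-right corner
    return Bp
```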

2.3. Lattice Enumeration

Enumeration is the most practical polynomial-space algorithm for solving SVP [28,33]. Write a lattice vector in the form
$$v = \sum_{j=1}^{n} u_j b_j^* = \sum_{j=1}^{n} \Big( x_j + \sum_{i=j+1}^{n} \mu_{ij} x_i \Big) b_j^*$$
The algorithm traverses an enumeration tree of the coordinates $x_n, \ldots, x_1$ by depth-first search, and on the $k$-th layer of the tree it finds all $x_{n-k+1}$ such that the partial sum satisfies $\|\pi_{n-k+1}(v)\|^2 = \sum_{j=n-k+1}^{n} \big( x_j + \sum_{i=j+1}^{n} \mu_{ij} x_i \big)^2 \|b_j^*\|^2 \le R^2$ for some fixed $(x_{n-k+2}, \ldots, x_n)$. The pruning technique efficiently accelerates lattice enumeration by cutting off branches, restricting $\|\pi_{n+1-k}(v)\| \le R \cdot f(k)$. The pruning function $f(k) \in (0, 1]$ naturally satisfies $f(k') \le f(k)$ for all $k' < k$. Since the shortest vector might be cut off in a middle layer, classical lattice enumeration with pruning becomes a probabilistic algorithm instead of a deterministic one.
Extreme pruning (also called GNR enumeration), proposed by Gama et al. [16], is the most important method contributing an exponential acceleration over full enumeration. They provided a detailed analysis of the success probability $p_{succ}$ and the speedup ratio for various pruning functions; the key idea of extreme pruning is to minimize the enumeration cost:
$$\mathrm{Cost}_{extreme} = \frac{T_{reduction} + \#Nodes \cdot T_{node}}{p_{succ}}$$
$\#Nodes$ denotes the number of nodes of the pruned enumeration tree, which is estimated as $\#Nodes = \frac{1}{2} \sum_{k=1}^{n} \frac{V_k(R \cdot f(k))}{\prod_{i=n+1-k}^{n} \|b_i^*\|}$, where $V_k(R)$ denotes the volume of a $k$-dimensional ball with radius $R$.
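The node-count estimate can be evaluated directly from the Gram–Schmidt norms. The sketch below is our own helper (not the fplll Pruner), using the standard formula $V_k(R) = \pi^{k/2} R^k / \Gamma(k/2+1)$.

```python
import math

def log_ball_volume(k: int, R: float) -> float:
    """Natural log of V_k(R), the volume of a k-dimensional ball of radius R."""
    return (k / 2) * math.log(math.pi) - math.lgamma(k / 2 + 1) + k * math.log(R)

def estimate_nodes(Bstar_norms, R, f):
    """Gaussian-heuristic estimate of the pruned enumeration tree size:
    #Nodes ~ 1/2 * sum_k V_k(R*f(k)) / prod_{i=n+1-k..n} ||b_i*||.
    Bstar_norms[i] is ||b_{i+1}*|| and f(k) in (0, 1] is the pruning function."""
    n = len(Bstar_norms)
    total = 0.0
    for k in range(1, n + 1):
        log_denom = sum(math.log(Bstar_norms[i]) for i in range(n - k, n))
        total += math.exp(log_ball_volume(k, R * f(k)) - log_denom)
    return total / 2
```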
Another burgeoning and promising technique is discrete pruning. It originated from random sampling reduction algorithms [22,23], which heuristically search for lattice vectors $v = \sum_{i=1}^{n} u_i b_i^*$ with a certain "pattern" in their Gram–Schmidt coefficients $(u_1, \ldots, u_n)$. In EUROCRYPT 2017, Aono and Nguyen summarized these random sampling strategies into a common framework called discrete pruned enumeration [25]. They proposed the concept of "lattice partition" to build a bijection between lattice vectors and the "tags" of partition cells, which encode the "pattern" of the Gram–Schmidt coefficients. The algorithm then enumerates tags that are likely to correspond to very short lattice vectors and checks whether they are valid. The name "discrete pruning" comes from its discrete searching area, which is a union of many discontinuous hypercuboids.
Definition 4
(Lattice partition [25]). Let $\mathcal{L}$ be a full-rank lattice in $\mathbb{Z}^n$. An $\mathcal{L}$-partition $(\mathcal{C}(\cdot), T)$ is a partition of $\mathbb{R}^n$ such that:
  • $\mathbb{R}^n = \bigcup_{z \in T} \mathcal{C}(z)$ and $\mathcal{C}(z) \cap \mathcal{C}(z') = \emptyset$ for $z \ne z'$. The index set $T$ is countable.
  • There is exactly one lattice point in each cell $\mathcal{C}(z)$, and there exists a polynomial-time algorithm that converts a tag $z$ to the corresponding lattice vector $v \in \mathcal{C}(z) \cap \mathcal{L}$.
A non-trivial partition is generally related to the orthogonal basis B * . We now introduce Babai’s partition and natural partition, which are abstracted from Babai’s Nearest Plane algorithm and natural number representation (NNR) of Fukase–Kashiwabara algorithm [23], respectively.
Definition 5
(Babai's partition [25]). Given a lattice $\mathcal{L}(B)$ with $B \in \mathbb{Z}^{n \times n}$, Babai's partition $(\mathcal{C}_{\mathbb{Z}}(\cdot), \mathbb{Z}^n)$ defines the cell $\mathcal{C}_{\mathbb{Z}}(z)$ as
$$\mathcal{C}_{\mathbb{Z}}(z) = \Big\{ \sum_{i=1}^{n} x_i b_i^* : z_i - \tfrac{1}{2} < x_i \le z_i + \tfrac{1}{2} \Big\}$$
Definition 6
(Natural partition [25]). Given a lattice $\mathcal{L}(B)$ with $B \in \mathbb{Z}^{n \times n}$, the natural partition $(\mathcal{C}_{\mathbb{N}}(\cdot), \mathbb{N}^n)$ defines the cell $\mathcal{C}_{\mathbb{N}}(z)$ as
$$\mathcal{C}_{\mathbb{N}}(z) = \Big\{ \sum_{i=1}^{n} x_i b_i^* : -\tfrac{z_i+1}{2} < x_i \le -\tfrac{z_i}{2} \ \text{ or } \ \tfrac{z_i}{2} < x_i \le \tfrac{z_i+1}{2} \Big\}$$
In other words, given a lattice vector $v = \sum_{j=1}^{n} u_j b_j^*$, under Babai's partition $v \in \mathcal{C}_{\mathbb{Z}}(z)$ if and only if $z = (z_1, \ldots, z_n)$ satisfies $u_i \in (z_i - \frac{1}{2}, z_i + \frac{1}{2}]$ for $i = 1, \ldots, n$. Under the natural partition, $v \in \mathcal{C}_{\mathbb{N}}(z)$ if and only if $z$ satisfies $u_i \in (-\frac{z_i+1}{2}, -\frac{z_i}{2}] \cup (\frac{z_i}{2}, \frac{z_i+1}{2}]$ for $i = 1, \ldots, n$.
If we consider $v$ as an $n$-dimensional random variable distributed in the corresponding $\mathcal{C}_{\mathbb{Z}}(z)$ or $\mathcal{C}_{\mathbb{N}}(z)$, then based on some putative distribution assumptions, one can compute the first moment (mathematical expectation) of $\|v\|^2 = \sum_{i=1}^{n} |u_i|^2 \|b_i^*\|^2$ from the information in a cell's tag $z$:
$$E\big[\mathcal{C}_{\mathbb{Z}}(z)\big] = \sum_{i=1}^{n} \big( z_i^2 + 1/12 \big) \|b_i^*\|^2$$
$$E\big[\mathcal{C}_{\mathbb{N}}(z)\big] = \sum_{i=1}^{n} \big( z_i^2/4 + z_i/4 + 1/12 \big) \|b_i^*\|^2$$
The value of E C ( z ) is an indicator of the “pruning”: the smaller E C ( z ) is, the shorter v C ( z ) might be. Then, a bunch of tags with minimal E C ( z ) will survive the pruning procedure, and go to subsequent processing. The whole route of discrete pruned (DP) enumeration is given in Algorithm 3.
Algorithm 3 DPenumeration
Require: Lattice basis $B$, number of tags $M$, target vector length $R$
Ensure: $v \in \mathcal{L}(B)$ such that $\|v\| < R$
1: Reduce lattice basis $B$
2: while true do
3:   $S \leftarrow \emptyset$
4:   Use binary search to find $r$ such that there are $M$ tags $z$ satisfying $f(z) = E[\mathcal{C}(z)] < r$
5:   Enumerate all these $M$ tags and save them in set $S$
6:   for $z \in S$ do
7:     Decode $z$ to recover the corresponding $v$ such that $v \in \mathcal{C}(z)$
8:     if $\|v\|^2 < R^2$ then
9:       return $v$        // Find a solution
10:    end if
11:  end for
12:  Rerandomize $B$
13:  Reprocess $B$ using lattice reduction algorithms such as BKZ or LLL
14: end while
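To illustrate the tag-scoring step in lines 4–5 of Algorithm 3, here is a toy Python sketch that scores candidate tags by $E[\mathcal{C}_{\mathbb{N}}(z)]$ and keeps the $M$ best. The real algorithm enumerates the tags below a radius found by binary search rather than by brute force; the helper names and the brute-force candidate set are our own simplifications.

```python
import heapq
import itertools

def expected_sq_norm(z, Bstar_sq_norms):
    """E[C_N(z)] = sum_i (z_i^2/4 + z_i/4 + 1/12) * ||b_i*||^2 (randomness assumption)."""
    return sum((zi * zi / 4 + zi / 4 + 1 / 12) * b2 for zi, b2 in zip(z, Bstar_sq_norms))

def best_tags_bruteforce(Bstar_sq_norms, M, max_coord=2):
    """Toy stand-in for lines 4-5 of Algorithm 3: score every tag with entries in
    {0, ..., max_coord} and keep the M tags of smallest expected squared norm.
    Only usable for very small dimensions, since the candidate set is exponential."""
    n = len(Bstar_sq_norms)
    candidates = itertools.product(range(max_coord + 1), repeat=n)
    return heapq.nsmallest(M, candidates, key=lambda z: expected_sq_norm(z, Bstar_sq_norms))
```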

3. Discrete Pruning for Solving BDD

3.1. Lattice Partition with Translation

In a sense, a lattice partition in the context of SVP can be considered as a tessellation centered at the origin $0$. For example, in Babai's partition, the cell $\mathcal{C}_{\mathbb{Z}}(z)$ with tag $z = 0$ exactly corresponds to the origin-symmetric parallelepiped $\mathcal{P}_{1/2}(B^*)$, and for any tag $z$, the closures of $\mathcal{C}_{\mathbb{Z}}(z)$ and $\mathcal{C}_{\mathbb{Z}}(-z)$ are centrally symmetric with respect to the origin. In addition, if a tag $z$ is close to $0$ (i.e., $z$ has sparse coordinates or light "weight"), its cell is more likely to contain a short lattice vector. The natural partition has analogous properties (see Figure 1 in [25]).
Since DP enumeration shows its advantages in lattice enumeration for solving SVP, which can be viewed as a special instance of CVP with target vector $t = 0 \in \mathbb{R}^n$, it is natural to generalize it to CVP instances with $t \ne 0$.
For a given basis $B$ and a target vector $t$, in the following we write $v = \sum_{i=1}^{n} u_i b_i^*$ and $t = \sum_{i=1}^{n} s_i b_i^*$ to denote their decompositions under the Gram–Schmidt basis $B^*$. To minimize the distance $\|v - t\|^2 = \sum_{i=1}^{n} (u_i - s_i)^2 \|b_i^*\|^2$, the discrete pruning technique requires a proper definition of a partition that is centered at $t$.
Definition 7
(Translated Babai's partition). Given basis $B$ and target vector $t = \sum_{i=1}^{n} s_i b_i^*$, Babai's partition translated to $t$ is defined by the lattice partition $(\mathcal{C}_{t,\mathbb{Z}}(\cdot), \mathbb{Z}^n)$, where for tag $z \in \mathbb{Z}^n$ the cell is
$$\mathcal{C}_{t,\mathbb{Z}}(z) = \Big\{ \sum_{i=1}^{n} x_i b_i^* : z_i - \tfrac{1}{2} < x_i - s_i \le z_i + \tfrac{1}{2} \Big\}$$
Definition 8
(Translated natural partition). Given $B$ and $t$ as above, the natural partition translated to $t$ is defined by the lattice partition $(\mathcal{C}_{t,\mathbb{N}}(\cdot), \mathbb{N}^n)$, where for tag $z \in \mathbb{N}^n$ the cell is
$$\mathcal{C}_{t,\mathbb{N}}(z) = \Big\{ \sum_{i=1}^{n} x_i b_i^* : -\tfrac{z_i+1}{2} < x_i - s_i \le -\tfrac{z_i}{2} \ \text{ or } \ \tfrac{z_i}{2} < x_i - s_i \le \tfrac{z_i+1}{2} \Big\}$$
Since $(\mathcal{C}_{t,\mathbb{Z}}(\cdot), \mathbb{Z}^n)$ and $(\mathcal{C}_{t,\mathbb{N}}(\cdot), \mathbb{N}^n)$ are translations of $(\mathcal{C}_{\mathbb{Z}}(\cdot), \mathbb{Z}^n)$ and $(\mathcal{C}_{\mathbb{N}}(\cdot), \mathbb{N}^n)$ by offset $t$, respectively, they are obviously tessellations of $\mathbb{R}^n$ and therefore satisfy the first property of Definition 4. To prove the uniqueness of the lattice vector contained in each cell of $(\mathcal{C}_{t,\mathbb{Z}}(\cdot), \mathbb{Z}^n)$ and $(\mathcal{C}_{t,\mathbb{N}}(\cdot), \mathbb{N}^n)$, we first give the algorithms that recover $v \in \mathcal{L}$ from a tag $z$ and prove their correctness; from these proofs we then derive the bijection between lattice vectors and tag vectors.
Proposition 1.
Algorithm BabaiDecode($t$, $B$, $z$) outputs the unique $v \in \mathcal{L}(B)$ such that $v \in \mathcal{C}_{t,\mathbb{Z}}(z)$; therefore, the translated Babai's partition $(\mathcal{C}_{t,\mathbb{Z}}(\cdot), \mathbb{Z}^n)$ is an $\mathcal{L}$-partition.
Proof. 
We first prove the correctness of Algorithm 4, and then prove the uniqueness of the vector contained in $\mathcal{C}_{t,\mathbb{Z}}(z)$, which also verifies that $(\mathcal{C}_{t,\mathbb{Z}}(\cdot), \mathbb{Z}^n)$ keeps the second property of lattice partition in Definition 4 after translation.
Consider the output of BabaiDecode($t$, $B$, $z$) in the form of Equation (3), i.e., $v = \sum_{k=1}^{n} u_k b_k^*$; we have $u_k = x_k + \sum_{i=k+1}^{n} \mu_{ik} x_i$, where the $x_k$ are calculated in lines 7–8 of Algorithm 4. Then, the following holds:
$$u_k - s_k - z_k = x_k + \sum_{i=k+1}^{n} \mu_{ik} x_i - s_k - z_k = \Big\lceil z_k + s_k - \sum_{i=k+1}^{n} x_i \mu_{i,k} \Big\rfloor + \sum_{i=k+1}^{n} \mu_{ik} x_i - s_k - z_k$$
for all $k = 1, \ldots, n$. From the definition of the rounding operation $\lceil \cdot \rfloor$, it is easy to see that $-\frac{1}{2} < u_k - s_k - z_k \le \frac{1}{2}$, or more directly, $-\frac{1}{2} + z_k < u_k - s_k \le \frac{1}{2} + z_k$, which exactly satisfies the definition of the cell $\mathcal{C}_{t,\mathbb{Z}}(z)$; hence $v \in \mathcal{C}_{t,\mathbb{Z}}(z)$.
Now assume there are two lattice vectors $v \ne v'$ contained in the same cell $\mathcal{C}_{t,\mathbb{Z}}(z)$. Let $v = \sum_{i=1}^{n} x_i b_i$ and $v' = \sum_{i=1}^{n} x_i' b_i$, and assume there exists an index $1 \le k \le n$ such that $x_k \ne x_k'$ and $x_{k+1} = x_{k+1}', x_{k+2} = x_{k+2}', \ldots, x_n = x_n'$. This means that in line 8 of Algorithm 4, $x_k$ and $x_k'$ are derived from different values $y_k \ne y_k'$. This is a contradiction, since both $y_k$ and $y_k'$ are calculated from the same $z_k$, $s_k$, and $x_i \mu_{i,k}$ ($i > k$).    □
Algorithm 4 BabaiDecode($t$, $B$, $z$)
Require: Target vector $t$, lattice basis $B$, tag $z \in \mathbb{Z}^n$
Ensure: Lattice vector $v \in \mathcal{C}_{t,\mathbb{Z}}(z)$
1: Compute orthogonal basis $B^* \in \mathbb{R}^{n \times n}$ and Gram–Schmidt coefficients $U = (\mu_{i,j})_{n \times n}$
2: for $k = 1$ to $n$ do
3:   $x_k \leftarrow 0$
4:   $s_k \leftarrow \langle t, b_k^* \rangle / \langle b_k^*, b_k^* \rangle$        // $t = \sum_{i=1}^{n} s_i b_i^*$
5: end for
6: for $k = n$ to 1 do
7:   $y_k \leftarrow z_k + s_k - \sum_{i=k+1}^{n} x_i \mu_{i,k}$
8:   $x_k \leftarrow \lceil y_k \rfloor$
9: end for
10: return $v = Bx$
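A compact Python sketch of Algorithm 4 follows (again reusing the gram_schmidt helper from Section 2.1). It is a reference sketch with our own function names, not the authors' implementation; with $z = 0$ it reduces to Babai's nearest plane.

```python
import numpy as np

def babai_decode(t: np.ndarray, B: np.ndarray, z: np.ndarray) -> np.ndarray:
    """Return the unique v in L(B) lying in the translated Babai cell C_{t,Z}(z)."""
    Bstar, mu = gram_schmidt(B)                      # helper from the earlier GSO sketch
    n = B.shape[1]
    s = np.array([t @ Bstar[:, k] / (Bstar[:, k] @ Bstar[:, k]) for k in range(n)])
    x = np.zeros(n)
    for k in range(n - 1, -1, -1):                   # k = n, ..., 1
        y = z[k] + s[k] - sum(x[i] * mu[i, k] for i in range(k + 1, n))
        x[k] = np.floor(y + 0.5)                     # round(y) = floor(y + 0.5)
    return B @ x
```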
Similarly, for natural partition, we also give the decoding algorithm NaturalDecode( t , B , z ) in the following and provide an analogous declaration and proof.
Proposition 2.
Algorithm NaturalDecode($t$, $B$, $z$) outputs the unique $v \in \mathcal{L}(B)$ such that $v \in \mathcal{C}_{t,\mathbb{N}}(z)$; therefore, the translated natural partition $(\mathcal{C}_{t,\mathbb{N}}(\cdot), \mathbb{N}^n)$ is an $\mathcal{L}$-partition.
Proof. 
Consider the output of NaturalDecode($t$, $B$, $z$) written in the form above. According to Algorithm 5, for all $k = 1, \ldots, n$ there holds:
$$u_k - s_k = x_k + \sum_{i=k+1}^{n} x_i \mu_{i,k} - s_k = \Big\lceil s_k - \sum_{i=k+1}^{n} x_i \mu_{i,k} \Big\rfloor - \delta_k (-1)^{z_k} \lceil z_k/2 \rceil + \sum_{i=k+1}^{n} x_i \mu_{i,k} - s_k = \lceil y_k \rfloor - y_k - \delta_k \cdot (-1)^{z_k} \lceil z_k/2 \rceil \qquad (9)$$
where $\delta_k = \mathrm{sign}(y_k - \lceil y_k \rfloor)$ and, in particular, we stipulate $\mathrm{sign}(0) = 1$.
Let $\sigma \in \mathbb{N}$; we consider four cases:
Case 1: $z_k = 2\sigma$ and $\delta_k = 1$. In this case, $\delta_k \cdot (-1)^{z_k} \lceil z_k/2 \rceil = \sigma$, and since $y_k - \lceil y_k \rfloor \ge 0$, we have $\lceil y_k \rfloor - y_k \in (-\frac{1}{2}, 0]$. With the third expression in Equation (9), it is easy to see that
$$-\tfrac{1}{2} - \sigma < u_k - s_k \le -\sigma \iff -\tfrac{z_k+1}{2} < u_k - s_k \le -\tfrac{z_k}{2}$$
Case 2: $z_k = 2\sigma + 1$ and $\delta_k = 1$. In this case, $\delta_k \cdot (-1)^{z_k} \lceil z_k/2 \rceil = -(\sigma+1)$. Similarly to case 1, we have
$$-\tfrac{1}{2} + (\sigma + 1) < u_k - s_k \le \sigma + 1 \iff \tfrac{z_k}{2} < u_k - s_k \le \tfrac{z_k+1}{2}$$
Case 3: $z_k = 2\sigma$ and $\delta_k = -1$. In this case, $\delta_k \cdot (-1)^{z_k} \lceil z_k/2 \rceil = -\sigma$. Additionally, we have $\lceil y_k \rfloor - y_k \in (0, \frac{1}{2}]$, and therefore
$$\sigma < u_k - s_k \le \tfrac{1}{2} + \sigma \iff \tfrac{z_k}{2} < u_k - s_k \le \tfrac{z_k+1}{2}$$
Case 4: $z_k = 2\sigma + 1$ and $\delta_k = -1$. In this case, $\delta_k \cdot (-1)^{z_k} \lceil z_k/2 \rceil = \sigma + 1$. Similarly to case 3, we have
$$-(\sigma + 1) < u_k - s_k \le \tfrac{1}{2} - (\sigma + 1) \iff -\tfrac{z_k+1}{2} < u_k - s_k \le -\tfrac{z_k}{2}$$
In all possible cases of Equation (9), we have shown that $u_k - s_k$ satisfies Definition 8, which means $v \in \mathcal{C}_{t,\mathbb{N}}(z)$. The uniqueness of the vector $v$ can be proved by the same method as in the proof of Proposition 1.    □
Algorithm 5 NaturalDecode($t$, $B$, $z$)
Require: Target vector $t$, lattice basis $B$, tag $z \in \mathbb{N}^n$
Ensure: Lattice vector $v \in \mathcal{C}_{t,\mathbb{N}}(z)$
1: Compute orthogonal basis $B^* \in \mathbb{R}^{n \times n}$ and Gram–Schmidt coefficients $U = (\mu_{i,j})_{n \times n}$
2: for $k = 1$ to $n$ do
3:   $x_k \leftarrow 0$
4:   $s_k \leftarrow \langle t, b_k^* \rangle / \langle b_k^*, b_k^* \rangle$
5: end for
6: for $k = n$ to 1 do
7:   $y_k \leftarrow s_k - \sum_{i=k+1}^{n} x_i \mu_{i,k}$
8:   $x_k \leftarrow \lceil y_k \rfloor$
9:   if $x_k > y_k$ then
10:    $x_k \leftarrow x_k + (-1)^{z_k} \lceil z_k/2 \rceil$
11:  else
12:    $x_k \leftarrow x_k - (-1)^{z_k} \lceil z_k/2 \rceil$
13:  end if
14: end for
15: return $v = Bx$
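Analogously, here is a Python sketch of Algorithm 5, again reusing the gram_schmidt helper from Section 2.1. The helper names are ours, and the branch on x_k > y_k reflects the sign conventions reconstructed in the proof of Proposition 2: the coefficient starts at round(y_k) and is shifted away from y_k by ceil(z_k/2) steps, alternating sides with the parity of z_k.

```python
import math
import numpy as np

def natural_decode(t: np.ndarray, B: np.ndarray, z: np.ndarray) -> np.ndarray:
    """Return the unique v in L(B) lying in the translated natural cell C_{t,N}(z)."""
    Bstar, mu = gram_schmidt(B)                      # helper from the earlier GSO sketch
    n = B.shape[1]
    s = np.array([t @ Bstar[:, k] / (Bstar[:, k] @ Bstar[:, k]) for k in range(n)])
    x = np.zeros(n)
    for k in range(n - 1, -1, -1):                   # k = n, ..., 1
        y = s[k] - sum(x[i] * mu[i, k] for i in range(k + 1, n))
        xk = np.floor(y + 0.5)
        shift = ((-1) ** int(z[k])) * math.ceil(z[k] / 2)
        x[k] = xk + shift if xk > y else xk - shift
    return B @ x
```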
Although the lattice vector $v \in \mathcal{C}_{t,\mathbb{Z}}(z)$ (or $\mathcal{C}_{t,\mathbb{N}}(z)$) is determined by the tag $z$, for vectors randomly selected in the lattice, their distribution behavior still shows some pseudo-randomness. This is called the randomness assumption and is commonly applied in the analysis of many SVP algorithms [22,23,25]. The essence of this assumption is to treat the fractional parts of the Gram–Schmidt coefficients $u_i$ of a lattice vector $v = \sum_{i=1}^{n} u_i b_i^*$ as uniformly distributed in a certain interval, and we believe this is also applicable to the translated cases. Here, we propose the randomness assumption for our lattice partitions.
Assumption 1.
Given a lattice $\mathcal{L}(B)$ and some $t = \sum_{i=1}^{n} s_i b_i^*$, let the lattice vector $v = \sum_{i=1}^{n} u_i b_i^*$ be a random variable. We assume that $u_1, \ldots, u_n$ are $n$ independent random variables and that each $u_i$ is uniformly distributed in $(-\frac{1}{2} + z_i + s_i, \frac{1}{2} + z_i + s_i]$ (resp. $(-\frac{z_i+1}{2} + s_i, -\frac{z_i}{2} + s_i] \cup (\frac{z_i}{2} + s_i, \frac{z_i+1}{2} + s_i]$), where $z$ is the tag of the cell $\mathcal{C}_{t,\mathbb{Z}}(z)$ (resp. $\mathcal{C}_{t,\mathbb{N}}(z)$) containing $v$. As the first moment of each $(u_i - s_i)^2$ is $z_i^2 + \frac{1}{12}$ (respectively, $\frac{z_i^2}{4} + \frac{z_i}{4} + \frac{1}{12}$), the expected values of $\|v - t\|^2$ under Babai's partition and the natural partition are
$$E\big[\mathcal{C}_{t,\mathbb{Z}}(z)\big] \overset{def}{=} E\big[\|v - t\|^2\big] = \frac{1}{12} \sum_{i=1}^{n} (12 z_i^2 + 1) \|b_i^*\|^2$$
and
$$E\big[\mathcal{C}_{t,\mathbb{N}}(z)\big] \overset{def}{=} E\big[\|v - t\|^2\big] = \frac{1}{12} \sum_{i=1}^{n} (3 z_i^2 + 3 z_i + 1) \|b_i^*\|^2.$$
Similarly, the variances of $\|v - t\|^2$ are
$$V\big(\mathcal{C}_{t,\mathbb{Z}}(z)\big) = \sum_{i=1}^{n} \Big( \frac{z_i^2}{3} + \frac{1}{180} \Big) \|b_i^*\|^4$$
and
$$V\big(\mathcal{C}_{t,\mathbb{N}}(z)\big) = \sum_{i=1}^{n} \Big( \frac{z_i^2}{48} + \frac{z_i}{48} + \frac{1}{180} \Big) \|b_i^*\|^4.$$
We note that Assumption 1 is a generalization of the original randomness assumption (see Corollary 2 and 3 in [25]). In other words, the original randomness assumption is a special case of Assumption 1 where s i = 0 for all i = 1 , , n .

3.2. Interpreting Nearest Plane(s) Algorithm with Lattice Partition

From the perspective of the output, we can immediately conclude that $t + \mathcal{P}_{1/2}(B^*) = \mathcal{C}_{t,\mathbb{Z}}(0) = \mathcal{C}_{t,\mathbb{N}}(0)$, which means that the output of NearestPlane($B$, $t$) exactly equals that of BabaiDecode($t$, $B$, $z = 0$) and NaturalDecode($t$, $B$, $z = 0$).
For the NearestPlanes algorithm, since
$$t + \mathcal{P}_{1/2}(B^* \cdot \mathrm{diag}(d)) = t + \Big\{ \sum_{i=1}^{n} a_i b_i^* : -\tfrac{d_i}{2} < a_i \le \tfrac{d_i}{2}, i = 1, \ldots, n \Big\} = \bigcup_{\{z : z_i < d_i\}} \mathcal{C}_{t,\mathbb{N}}(z)$$
we can see that the output set of NearestPlanes($B$, $t$, $d$) equals the set of vectors output by calling NaturalDecode($t$, $B$, $z$) on all tags $z$ with $z_i < d_i$ for all $i = 1, \ldots, n$.
From the perspective of the calculation process, we can also interpret the essential equivalence between the nearest plane(s) algorithms and lattice partitions in more depth. In NearestPlane($B$, $t$) (Algorithm 1), for $k = n$ we can easily verify that $x_n = \lceil \langle t, b_n^* \rangle / \langle b_n^*, b_n^* \rangle \rfloor = \lceil s_n \rfloor$. For iteration index $k$, assume $x_k = \lceil s_k - \sum_{i=k+1}^{n} x_i \mu_{i,k} \rfloor$; in line 5 of Algorithm 1, we then have $t' = t - \sum_{i=k}^{n} x_i b_i$. In the next loop, we actually compute $x_{k-1} = \lceil \langle t - \sum_{i=k}^{n} x_i b_i, b_{k-1}^* \rangle / \langle b_{k-1}^*, b_{k-1}^* \rangle \rfloor = \lceil s_{k-1} - \sum_{i=k}^{n} x_i \mu_{i,k-1} \rfloor$, which is equivalent to lines 7–8 of Algorithm 5 with $z_i = 0$.
As for NearestPlanes($B$, $t$, $d$) (Algorithm 2), the combination coefficients $x = (x_1, \ldots, x_n)$ of the output vectors $v = Bx$ form a recursion tree of depth $n$, and each parent node with path $x_n, x_{n-1}, \ldots, x_{k+1}$ in the $(n-k)$-th layer leads to $d_k$ child nodes $x_k^{(0)}, \ldots, x_k^{(d_k-1)}$ (line 4), where $x_k^{(z)}$ is the $(z+1)$-th closest integer to $\langle t - \sum_{i=k+1}^{n} x_i b_i, b_k^* \rangle / \langle b_k^*, b_k^* \rangle = s_k - \sum_{i=k+1}^{n} x_i \mu_{i,k} = y_k$, which is also calculated in line 7 of Algorithm 5. Now we explain how the "$(z+1)$-th closest integer to $y_k$" is calculated. Consider the quadratic function $(x - y_k)^2$ of the variable $x$: the integral value of $x$ closest to $y_k$ is $x_k^{(0)} = \lceil y_k \rfloor$. If the rounding goes in the $-$ direction, i.e., $\lceil y_k \rfloor \le y_k$, then the next closest integer to $y_k$ lies in the $+$ direction, $x_k^{(1)} = \lceil y_k \rfloor + 1$; the second next integer goes in the same direction as the rounding, $x_k^{(2)} = \lceil y_k \rfloor - 1$, and so on. It can easily be deduced that $x_k^{(z)} = \lceil y_k \rfloor - \delta_k \cdot (-1)^{z} \lceil z/2 \rceil$ (with $\delta_k$ as in the proof of Proposition 2), which is exactly what NaturalDecode (Algorithm 5) does. Therefore, each leaf node with coefficients $(x_1^{(z_1)}, \ldots, x_n^{(z_n)})$ reached by the NearestPlanes algorithm is equivalent to NaturalDecode with input tag $z = (z_1, \ldots, z_n)$.
The work of AN17 [25] gave experimental evidence showing that the natural partition is expected to contain a short vector with higher probability than Babai's partition. In this section, we further exhibit the explanatory power of the natural partition by using it to interpret the nearest plane(s) algorithms.

3.3. DP Enumeration for BDD

Babai's nearest plane algorithm and the nearest planes algorithm actually perform a single round of lattice decoding on a certain (but not optimal) set of cells. Inspired by discrete pruned enumeration for SVP, we propose here DP enumeration based on the translated lattice partition to solve the BDD problem. This algorithm includes the nearest plane(s) algorithms as special cases, while it is expected to achieve optimal performance when solving BDD.
To migrate DP enumeration from SVP to BDD, a peculiar case one has to consider in addition is that, in some cryptanalysis scenarios, the information an adversary can acquire is insufficient to construct a "gap" in the lattice, and therefore the lattice vector found by a BDD algorithm might not be the unique vector close to the target vector $t$. Examples include solving LWE with few samples [32] and the lattice attack on ECDSA with very few leaked nonce bits [12,14]. Assume that the BDD instance has a right solution $v_0$ such that $\|v_0 - t\| \le R$; by searching in an $n$-dimensional ball centered at $t$ with radius $R$, the enumeration algorithm might find more than one lattice vector $v_1, \ldots, v_K$ with $\|v_i - t\| \le R$. In such a case, each of them should be checked to see which is the right solution corresponding to the secret of the cryptosystem. Such an oracle related to a certain cryptographic algorithm is also called a "BDD predicate" by Albrecht and Heninger [14]; we call it a BDD oracle $\mathrm{Orc}(\cdot)$, to be consistent with the definition of generalized BDD.
Algorithm 6 gives the framework of DP enumeration with the translated natural partition $(\mathcal{C}_{t,\mathbb{N}}(\cdot), \mathbb{N}^n)$. We note that in a standard BDD instance, i.e., $\mathrm{dist}(t, \mathcal{L}) \le d < \lambda_1/2$, the lattice vector satisfying $\|v - t\| < R = d$ is unique and we can omit the BDD oracle $\mathrm{Orc}(\cdot)$. To solve BDD instances derived from different cryptosystems, the oracle $\mathrm{Orc}(\cdot)$ also differs; we give the details in Section 5.
Algorithm 6 DPenum4BDD
Require: BDD target vector $t$, lattice basis $B$, expected BDD error length $R$, BDD oracle $\mathrm{Orc}(\cdot)$, number of tags $M$, BKZ parameter $\beta$
Ensure: $v \in \mathcal{L}(B)$ such that $\|v - t\| < R$
1: Run BKZ reduction on $B$ with blocksize $\beta$
2: while true do
3:   $S \leftarrow \emptyset$
4:   $r \leftarrow$ BinSearch($B$, $t$, $M$)        // Find $r$ such that there are $M$ tags $z$ satisfying $E[\mathcal{C}_{t,\mathbb{N}}(z)] < r$
5:   $S \leftarrow$ CellEnum($B^*$, $t$, $r$)        // Enumerate these $M$ tags and save them in set $S$
6:   for $z \in S$ do
7:     $v \leftarrow$ NaturalDecode($t$, $B$, $z$)
8:     if $\|v - t\|^2 < R^2$ and $\mathrm{Orc}(v) = 1$ then
9:       return $v$        // Find the solution
10:    end if
11:  end for
12:  Reprocess($B$, $\beta$)        // generate a different reduced basis and repeat the enumeration
13: end while
This framework is inherited from the improved version of DP enumeration [26], and some details should be explained in depth in order to make our algorithm adapt to the BDD problem.
Binary search and cell enumeration.
The goal of lines 4 and 5 of Algorithm 6 is to generate a batch of cells $S = \{\mathcal{C}_{t,\mathbb{N}}(z)\}$ which are "most likely" to contain a lattice vector very close to $t$. The optimality of such a set is generally indicated by the expected distance $E[\mathcal{C}_{t,\mathbb{N}}(z)]$. However, the original definition of $E[\mathcal{C}_{t,\mathbb{N}}(z)]$ cannot guarantee polynomial time complexity of CellEnum, so the indicator must be slightly modified. The pivotal modification is removing the constant part of each term of $\frac{1}{12}\sum_{i=1}^{n}(3 z_i^2 + 3 z_i + 1)\|b_i^*\|^2$, and the indicator becomes
$$f(\mathcal{C}_{t,\mathbb{N}}(z)) \overset{def}{=} \sum_{i=1}^{n} \frac{1}{4}(z_i^2 + z_i) \|b_i^*\|^2 \qquad (10)$$
Since the bound $r$ for cell enumeration is hard to determine directly, one should instead specify the size of the set $S$; then $r$ can be computed by a binary search calling CellEnum (Algorithm 7) as a subalgorithm.
Algorithm 7 CellEnum
Require: $B$, $r$
Ensure: $S = \{ z \in \mathbb{N}^n : f(z) \le r \}$
1: Compute orthogonal basis $B^* = [b_1^*, \ldots, b_n^*]$ from $B$
2: $S \leftarrow \emptyset$
3: $z_1 = z_2 = \cdots = z_n \leftarrow 0$
4: $c_1 = c_2 = \cdots = c_{n+1} \leftarrow 0$
5: $k \leftarrow 1$
6: while true do
7:   $c_k \leftarrow c_{k+1} + \frac{1}{4}(z_k^2 + z_k)\|b_k^*\|^2$        // Equation (10)
8:   if $c_k < r$ then
9:     if $k = 1$ then
10:      $S \leftarrow S \cup \{ z = (z_1, z_2, \ldots, z_n) \}$
11:      $z_k \leftarrow z_k + 1$
12:    else
13:      $k \leftarrow k - 1$
14:      $z_k \leftarrow 0$
15:    end if
16:  else
17:    $k \leftarrow k + 1$
18:    if $k = n + 1$ then
19:      exit
20:    else
21:      $z_k \leftarrow z_k + 1$
22:    end if
23:  end if
24: end while
25: return $S$
Algorithm 8 gives a polynomial-time binary search method to calculate the cell enumeration radius r. If ϵ is small, say ϵ = 0.005 or smaller, the number of tags satisfying f ( z ) < r can be approximately counted as M. Note that in line 5, CellEnum can be terminated instantly as long as it outputs more than ( 1 + ϵ ) M tags.
Algorithm 8 BinSearch
Require: $B$, $t$, $M$, $\epsilon$
Ensure: $r \in \mathbb{R}$ such that $\#\{ z : f(z) < r \} \in [(1-\epsilon)M, (1+\epsilon)M]$
1: $R_l \leftarrow f(0)$
2: $R_r \leftarrow f([M^{1/n}, \ldots, M^{1/n}])$
3: while $R_l < R_r$ do
4:   $R_m \leftarrow (R_l + R_r)/2$
5:   if CellEnum($B^*$, $t$, $R_m$) returns more than $(1+\epsilon)M$ tags then
6:     $R_r \leftarrow R_m$        // $R_m$ is too large
7:   else if CellEnum($B^*$, $t$, $R_m$) returns fewer than $(1-\epsilon)M$ tags then
8:     $R_l \leftarrow R_m$        // $R_m$ is too small
9:   else
10:    return $r \leftarrow R_m$        // $R_m$ is acceptable
11:  end if
12: end while
According to [26], the time complexity of CellEnum is proved to be $O((2n-1) \cdot M/2)$, and BinSearch calls CellEnum $O(\log n + \log \frac{1}{\epsilon} + n \log n)$ times.
Lattice basis reprocessing.
When solving BDD, a single round of DP enumeration might not find the solution, since its searching space does not cover the whole ball $\mathrm{Ball}_n(t, R)$. A standard remedy is to re-randomize the lattice basis and repeat the procedure on the new basis. To improve efficiency and for the convenience of concrete running time analysis, Luan et al. [26] proposed a set of comprehensive strategies along with the preprocessing of DP enumeration, which can be instantiated as Reprocess (Algorithm 9).
At the very beginning of DPenum4BDD, the lattice basis is BKZ-$\beta$ reduced; in the following loops, Reprocess heuristically guarantees that the properties of the output basis are as good as those of a fully BKZ-$\beta$-reduced basis, at a very small and controllable cost, whereas full reduction usually has a rather long and unpredictable running time.
Algorithm 9 Reprocess
Require: $B$, $\beta$, $k$
Ensure: $B_{new}$
1: $Q \leftarrow I \in \mathbb{Z}^{n \times n}$        // $I$ is the identity matrix
2: Randomly select $n$ positions $(i, j)$ with $i < j$ and set $Q_{ij} \in \{\pm 1, \pm 2\}$
3: $B \leftarrow BQ$
4: Run $k$ tours of BKZ reduction on $B$        // Usually $k = 8$
5: return $B$

4. Complexity Analysis and Cost Estimation

We note that the algorithm NaturalDecode can be called in line 10 of algorithm CellEnum, and invalid tags can be directly discarded. Therefore, the whole DP enumeration does not need to store the output of CellEnum, and DPenum4BDD essentially has polynomial space complexity. In this section, we therefore focus only on the running time of DPenum4BDD. The cost estimation has the overall framework
$$T_{total} = T_{pre} + \frac{T_{repro} + T_{bin} + T_{cell} + M \cdot T_{decode} + K \cdot T_{orc}}{p_{succ}} \qquad (11)$$
  • $T_{pre}$: Cost of preprocessing, which is generally a full BKZ-$\beta$ reduction of an $n$-dimensional lattice. For preprocessing, the required blocksize $\beta$ is usually very small, the most practical algorithm is BKZ 2.0 [18], and the running time of such a preprocessing step is easy to estimate using a BKZ simulator. In particular, the fractional part of (11) is much larger than $T_{pre}$ when the scale of the BDD problem is very large, so $T_{pre}$ can be ignored in the asymptotic estimation.
  • $T_{repro}$, $T_{bin}$, $T_{cell}$, $T_{decode}$: The running times of Reprocess, BinSearch, CellEnum and NaturalDecode, respectively. These inherent subalgorithms of the original DP enumeration were studied in detail by [26] and have explicit cost estimation formulae.
  • $K$: The number of lattice vectors $v$ such that $\|v - t\| < R$, which must be verified by the BDD oracle in line 8 of DPenum4BDD.
  • $T_{orc}$: The cost of mapping a lattice vector back to the corresponding cryptographic secret and checking whether the BDD attack succeeded.
  • $p_{succ}$: The success probability of finding the right solution in a single round of DP enumeration.
Let $S$ be the output tag set of algorithm CellEnum in one round of DPenum4BDD; then the searching area of this round can be represented by
$$\mathrm{Ball}_n(t, R) \cap P \overset{def}{=} \mathrm{Ball}_n(t, R) \cap \bigcup_{z \in S} \mathcal{C}_{t,\mathbb{N}}(z)$$
where $\mathrm{Ball}_n(t, R)$ denotes the $n$-dimensional ball with radius $R$ centered at $t$.
Under the Gaussian heuristic, the number of lattice points contained in $\mathrm{Ball}_n(t, R) \cap P$ can be estimated by
$$K \approx \frac{\mathrm{vol}\big(\mathrm{Ball}_n(t, R) \cap \bigcup_{z \in S} \mathcal{C}_{t,\mathbb{N}}(z)\big)}{\mathrm{vol}(\mathcal{L})} = \frac{\sum_{z \in S} \mathrm{vol}\big(\mathrm{Ball}_n(t, R) \cap \mathcal{C}_{t,\mathbb{N}}(z)\big)}{\prod_{i=1}^{n} \|b_i^*\|} \qquad (12)$$
We note that if $K \le 1$ in Equation (12), which implies that the solution is unique and the BDD problem is the original version (Definition 2), then there is no need to call the BDD oracle. In this situation, we set $K = 0$ in the cost model of Equation (11).
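Since the ball–cell intersection volumes in Equation (12) have no simple closed form, one practical option is Monte Carlo estimation. The sketch below is our own helper (not the numerical method of [25]); it samples the natural cell directly and returns the fraction of its volume inside the ball, so multiplying by vol(C) = prod ||b_i*|| recovers the intersection volume.

```python
import numpy as np

def ball_cell_intersection_fraction(z, Bstar_norms, R, trials=100_000, seed=0):
    """Estimate vol(Ball_n(t,R) cap C_{t,N}(z)) / vol(C_{t,N}(z)) by Monte Carlo.

    By symmetry of the natural cell in each coordinate, it suffices to sample
    |u_i - s_i| uniformly from (z_i/2, (z_i+1)/2] and test the squared distance."""
    rng = np.random.default_rng(seed)
    z = np.asarray(z, dtype=float)
    b = np.asarray(Bstar_norms, dtype=float)
    u = rng.uniform(z / 2, (z + 1) / 2, size=(trials, len(z)))   # |u_i - s_i|
    dist_sq = np.sum((u * b) ** 2, axis=1)
    return float(np.mean(dist_sq <= R * R))
```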
Then, only the value of the success probability $p_{succ}$ remains to be determined. The discussion of $p_{succ}$ can be classified into the following three cases.
Case 1: Generalized model
In the generalized definition of BDD, the estimated bound $R$ cannot guarantee $R < \lambda_1/2$, which indicates that there might be more than one lattice vector lying in $\mathrm{Ball}_n(t, R)$. This case was recently discussed by [14] for solving the hidden number problem underlying ECDSA. Here, $K$ can be directly derived from Equation (12) under the Gaussian heuristic as above. However, there are about $N \approx \mathrm{vol}(\mathrm{Ball}_n(t, R))/\mathrm{vol}(\mathcal{L})$ lattice vectors contained in $\mathrm{Ball}_n(t, R)$, and only one of them corresponds to the right secret. Assume that all the lattice vectors in the ball are uniformly distributed and that, in each round of DP enumeration, we randomly find $K$ candidate vectors; then the probability that the solution is among them is
$$p_{succ} = \frac{\binom{N-1}{K-1}}{\binom{N}{K}} = \frac{K}{N} \approx \frac{\sum_{z \in S} \mathrm{vol}\big(\mathrm{Ball}_n(t, R) \cap \mathcal{C}_{t,\mathbb{N}}(z)\big)}{\mathrm{vol}\big(\mathrm{Ball}_n(t, R)\big)}$$
Case 2: Strict BDD with general distribution assumption
If the only lattice vector $v$ contained in $\mathrm{Ball}_n(t, R)$ is considered to be randomly distributed, then the probability that $v$ falls into a cell $\mathcal{C}_{t,\mathbb{N}}(z)$ can be derived from the geometric probability model
$$p_{succ}(z) = \Pr_{v \in \mathrm{Ball}_n(t, R)}\big[ v \in \mathcal{C}_{t,\mathbb{N}}(z) \big] = \frac{\mathrm{vol}\big(\mathrm{Ball}_n(t, R) \cap \mathcal{C}_{t,\mathbb{N}}(z)\big)}{\mathrm{vol}\big(\mathcal{C}_{t,\mathbb{N}}(z)\big)}$$
This is also the probability estimation method of [25] (see Heuristic 2 in [25]). Furthermore, if we assume that the lattice basis has a stable quality, especially the shape of the Gram–Schmidt sequence $\{\|b_i^*\|\}_{i=1}^{n}$, which is experimentally verified in [26], then the value of Equation (12) is also stable during the repetitions of DP enumeration. Therefore, we can use the output set $S$ of a single round of DP enumeration to estimate the average success probability of the algorithm:
$$p_{succ} = \sum_{z \in S} p_{succ}(z) = \frac{\sum_{z \in S} \mathrm{vol}\big(\mathrm{Ball}_n(t, R) \cap \mathcal{C}_{t,\mathbb{N}}(z)\big)}{\prod_{i=1}^{n} \|b_i^*\|}$$
Case 3: BDD with known error distribution
For a common case in lattice-based cryptography, the error vector $e = v_0 - t$ is pre-selected following a fixed distribution; typically this is the discrete Gaussian distribution $D_{\mathbb{Z}, \alpha q}$ with parameter $s = \alpha q$. Lindner and Peikert give the success probability of their nearest planes algorithm by approximating it with the continuous Gaussian distribution ([1], Equation (5)); [2] also adopts this method and gives the corresponding analysis for GNR enumeration. Following the same assumption, the success probability of finding $v_0$ in a given cell $\mathcal{C}_{t,\mathbb{N}}(z)$ can be estimated by
$$p_{succ}(z) = \Pr\{ v_0 \in \mathcal{C}_{t,\mathbb{N}}(z) \} = \prod_{i=1}^{n} \Pr\Big[ \Big| \frac{\langle e, b_i^* \rangle}{\langle b_i^*, b_i^* \rangle} \Big| \in \Big( \frac{z_i}{2}, \frac{z_i+1}{2} \Big] \Big] = \prod_{i=1}^{n} \Big[ \mathrm{erf}\Big( \frac{\sqrt{\pi}}{s} \cdot \frac{z_i+1}{2} \|b_i^*\| \Big) - \mathrm{erf}\Big( \frac{\sqrt{\pi}}{s} \cdot \frac{z_i}{2} \|b_i^*\| \Big) \Big]$$
and
$$p_{succ} = \sum_{z \in S} \Pr\{ v_0 \in \mathcal{C}_{t,\mathbb{N}}(z) \}$$
To accelerate the computation of Equations (12), (15) and (17), one can apply a stratified sampling method to calculate the sums over the tag set $S$. The work in [25] also provides a detailed numerical method for calculating the volume of the ball–box intersection $\mathrm{Ball}_n(t, R) \cap \mathcal{C}_{t,\mathbb{N}}(z)$.
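For Case 3, the per-cell probabilities are simply products of erf differences, which is straightforward to evaluate. The helper names below are our own; the function implements the erf-product formula above for a continuous Gaussian of parameter s.

```python
import math

def cell_success_prob_gaussian(z, Bstar_norms, s):
    """Probability that a BDD error with continuous Gaussian parameter s
    falls into the translated natural cell C_{t,N}(z)."""
    p = 1.0
    for zi, bi in zip(z, Bstar_norms):
        hi = math.erf(math.sqrt(math.pi) / s * (zi + 1) / 2 * bi)
        lo = math.erf(math.sqrt(math.pi) / s * zi / 2 * bi)
        p *= hi - lo
    return p

def success_prob(tags, Bstar_norms, s):
    """p_succ for a tag set S: sum of the per-cell probabilities."""
    return sum(cell_success_prob_gaussian(z, Bstar_norms, s) for z in tags)
```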

5. Experiments and Application

5.1. Solving LWE

We first compare the performance of DPenum4BDD with the previous work of [2], which includes the rerandomized nearest planes algorithm and classical enumeration with pruning. We take the LWE instances provided by the LWE challenge [34]. Since the original nearest planes algorithm only searches once without any repetition, the rerandomized nearest planes algorithm is the more reasonable reference. As explained in Section 3.2, the rerandomized nearest planes algorithm with parameter $(d_1, \ldots, d_n)$ is essentially discrete pruning on a designated set of cells of the natural partition, $P = \{\mathcal{C}_{t,\mathbb{N}}(z) : z_i < d_i, i = 1, \ldots, n\}$. Therefore, we run the nearest planes algorithm using the same implementation as our DPenum4BDD but with a different tag enumeration strategy: for the nearest planes algorithm we take $S = \{z : z_i < d_i, i = 1, \ldots, n\}$, and for discrete pruned enumeration we obtain $S$ by Algorithm 7.
For each LWE challenge instance with parameters $(n, q, s)$, we first determine the lattice dimension $m$ such that $\|e\| \approx \sqrt{m}\, s$ is slightly larger than $\frac{1}{2}\lambda_1(\mathcal{L})$. For both nearest planes and discrete pruned enumeration, we use the cost model defined by (11) along with a BKZ simulator [21], and the success probability $p_{succ}$ corresponds to Equation (17) in Case 3, since all the LWE instances in this subsection use the discrete Gaussian distribution. We calculate the optimal parameters of each algorithm by minimizing the estimated cost with the Nelder–Mead optimization method applied in DP enumeration for SVP [26]. For classical enumeration, we take the implementation in the fplll library [35] and compute the optimal pruning parameters using the Pruner module of fplll. The preprocessing algorithm is BKZ 2.0 in fplll.
For each LWE instance, we randomly choose $m$ samples to construct a lattice. On each lattice basis, we run the three algorithms with their respective optimal parameters; the average running time for finding the LWE error $e$ over 30 trials is shown in Table 1. The running time is in seconds, and the last column gives the parameters we used for DPenum4BDD as a reference.
We then give a numerical comparison of the cost estimation on LWE instances of the Lindner–Peikert cryptosystem [1]. The LWE instances are generated using the sage.crypto.lwe module of the Sage toolkit. In Table 2, the second and third columns are the numerical results from [2], and the last column gives the prediction for our algorithm. The cost is given as the base-2 logarithm of the running time in seconds. In experiments, we were even able to solve the LWE instance with $n = 128$ in several minutes, which is only about 9.03 in $\log_2$(seconds) and is considerably lower than our prediction.
Both the experiments and the simulated predictions show that lattice enumeration with the improved discrete pruning strategy performs better than the previous algorithms in the category of directly solving the BDD problem.

5.2. Solving HNP under Generalized BDD Model

In this part, we apply our algorithm to the attack on DSA described by Nguyen and Shparlinski [10], which also reduces to solving a hidden number problem.
In the Digital Signature Algorithm (DSA), given large primes $q, p$ such that $q \mid p - 1$, the signer uses a secret key $\alpha \in \mathbb{F}_q^*$ and a randomly chosen nonce $k \in \mathbb{F}_q^*$ to sign a message $\mu$, outputting two elements of $\mathbb{F}_q$:
$$r(k) = (g^k \bmod p) \bmod q, \qquad s(k, \mu) = k^{-1}(h(\mu) + \alpha\, r(k)) \bmod q$$
where $g \in \mathbb{F}_p$ has multiplicative order $q$ and $h(\cdot)$ is a random hash function. In the following, the signature $(r(k), s(k, \mu))$ is denoted by $(r, s)$, and $h(\mu)$ by $h$ for short.
The hidden number problem originating from DSA requires recovering the secret $\alpha \in \mathbb{F}_q^*$ from many known $t \in \mathbb{F}_q$ and the corresponding "approximations" of $\alpha t$, denoted $\mathrm{APP}_{\ell,q}(\alpha, t)$. $\mathrm{APP}_{\ell,q}(\alpha, t)$ is any rational number $r$ such that $|\alpha t - r|_q \le q/2^{\ell+1}$, where $|x|_q$ denotes the minimal absolute residue of $x$ modulo $q$. As a matter of fact, the leaked information of $k$ reveals an approximation of $\alpha t$. If the $\ell$ least significant bits of $k$ are known, i.e., we can obtain an $\ell$-bit number $a$ such that $2^{\ell} \mid k - a$, then we can define two elements
$$t = 2^{-\ell} \cdot r \cdot s^{-1} \bmod q, \qquad u = 2^{-\ell} \cdot (a - s^{-1} \cdot h) \bmod q + q/2^{\ell+1}$$
It is proved that $u$ satisfies the definition of $\mathrm{APP}_{\ell,q}(\alpha, t)$.
Given enough signatures with leaked $k$, one can easily compute the HNP pairs $(t_i, u_i)$ ($i = 1, \ldots, d$) and construct a $(d+1)$-dimensional lattice $\mathcal{L}(B)$ with column basis
$$B = \begin{pmatrix} q \cdot 2^{\ell+1} & & & & t_1 \cdot 2^{\ell+1} \\ & q \cdot 2^{\ell+1} & & & t_2 \cdot 2^{\ell+1} \\ & & \ddots & & \vdots \\ & & & q \cdot 2^{\ell+1} & t_d \cdot 2^{\ell+1} \\ 0 & 0 & \cdots & 0 & 1 \end{pmatrix} \qquad (18)$$
which has volume $\mathrm{vol}(\mathcal{L}(B)) = (q \cdot 2^{\ell+1})^d$.
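As an illustration, a minimal Python sketch of this construction is given below; the helper name and the interface are our own, and exact integer arithmetic is kept by using an object-dtype array.

```python
import numpy as np

def hnp_lattice(t_list, u_list, q, ell):
    """Build the (d+1)-dimensional HNP column basis of Equation (18) and the
    scaled target vector u from the pairs (t_i, u_i)."""
    d = len(t_list)
    scale = 2 ** (ell + 1)
    B = np.zeros((d + 1, d + 1), dtype=object)       # object dtype keeps exact big integers
    for i in range(d):
        B[i, i] = q * scale                          # diagonal entries q * 2^(ell+1)
        B[i, d] = t_list[i] * scale                  # last column carries the t_i values
    B[d, d] = 1                                      # bottom-right entry
    u = np.array([ui * scale for ui in u_list] + [0], dtype=object)
    return B, u
```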
Let $u = (u_1 \cdot 2^{\ell+1}, \ldots, u_d \cdot 2^{\ell+1}, 0)$ be the target vector. Then, there exists a lattice vector of the form $v_0 = ((\alpha t_1 + q x_1) \cdot 2^{\ell+1}, \ldots, (\alpha t_d + q x_d) \cdot 2^{\ell+1}, \alpha) \in \mathcal{L}(B)$ for some $x_1, \ldots, x_d \in \mathbb{Z}$. Nguyen and Shparlinski prove that $\|v_0 - u\|_{\infty} \le q$ [10], which naturally leads to the Euclidean norm bound $\|v_0 - u\| \le \sqrt{d+1} \cdot q$. From a practical point of view, the value of $\|v_0 - u\|$ is usually relatively small. Intuitively, if the number $\ell$ of leaked bits becomes very small, $\lambda_1(\mathcal{L}(B)) \approx \sqrt{\frac{d+1}{2\pi e}}\, (q \cdot 2^{\ell+1})^{d/(d+1)}$ also becomes small, and the corresponding BDD problem becomes harder. It may even happen that $\|v_0 - u\| \gtrsim \lambda_1(\mathcal{L}(B))$, which implies a generalized BDD problem with a BDD oracle as described in Algorithm 10. From another perspective, to mount a successful attack, a small $\ell$ requires more signatures with leaked nonces, which increases the workload of leakage detection.
Algorithm 10 OrcDSA-HNP
Require: $v \in \mathcal{L}(B)$ as defined in Equation (18)
1: Randomly choose a DSA signature $(r, s)$ and the corresponding message $\mu$
2: $\alpha' \leftarrow v_{d+1}$        // the last coordinate of $v$
3: $k' \leftarrow (\alpha' \cdot r + h(\mu)) \cdot s^{-1} \bmod q$
4: if $r \ne r(k')$ then
5:   return 0
6: else
7:   return 1
8: end if
We run DPenum4BDD with OrcDSA-HNP to solve the HNP problem stated above, on the lattice defined in Equation (18). For the DSA signatures, we randomly chose a 160-bit $q$ and a 512-bit $p$ following [2,10]. We then fix a secret $\alpha \in \mathbb{F}_q^*$ and generate $d$ signatures with different nonces $k$. We assume the last $\ell$ bits of each $k$ are known and construct the lattice as described above to solve the corresponding HNP instance. The searching bound is set to $R = \gamma \sqrt{d+1}\, q$ for some reasonable $\gamma$ such that $\|v_0 - u\| < R$ for most instances.
Table 3 presents the experimental results on the performance of discrete pruned enumeration, along with the predicted success probability and the parameters we heuristically used. The data in each line are the average (with rounding) of 10 trials. The column $p_{succ}$ is the success probability predicted by the models in Section 4. The distribution of the BDD error $v_0 - u$ is hard to deduce, so we simply assume it follows a uniform distribution over a ball. For $\ell = 3$, $d = 64$ and $\ell = 2$, $d = 96, 98$, the probability model refers to Case 1 in Section 4, since the searching bound of DP enumeration is slightly larger than $\lambda_1$ under these settings. For the remaining experiments, the bound is small enough compared with $\lambda_1$ and we use the model given in Case 2. The fourth column of Table 3 is the actual number of rounds that DP enumeration used to recover the right secret $\alpha$, which approximately matches the expectation given by $1/p_{succ}$.
For $\ell = 3$, it has already been verified that the attack is mostly successful and very efficient with $d = 100$ signatures using Babai's nearest plane algorithm, while our results show that discrete pruned enumeration can recover the secret in a lower dimension $d = 68$ within a few minutes, which is also an acceptable running time. For $\ell = 2$, we first compare the performance of DP enumeration with the result given by classical enumeration [2] under the same condition $d = 100$. In terms of the overall expected cost of recovering the secret, DP enumeration is slightly faster than classical enumeration. We note that the success probability of a single round of discrete pruning is much lower than that of the classical pruning method, but it searches far fewer vectors per round and can be repeated very quickly, achieving better performance globally. We also try to solve the HNP with fewer signatures, which is still feasible for $d = 98$, but the experiment is almost unreachable for $d = 96$ under the current parameters of DPenum4BDD, and the estimation shows that it might take several days to recover the secret.

6. Conclusions and Further Work

In this work, the SVP algorithm of discrete pruned enumeration is transplanted into an approximate algorithm for solving CVP. We reveal the internal connection between this emerging algorithm and some classical CVP algorithms, including Babai's nearest plane and its variant, the nearest planes algorithm. We propose several success probability models for its application in various cryptanalysis scenarios. The experimental results show that our model can approximately estimate the cost, and that the actual performance of DP enumeration exceeds previous work: for both the LWE and DSA-HNP cases, the efficiency of DP enumeration is higher than that of classical enumeration and the nearest planes algorithms, and for the DSA-HNP case, it requires less information about the leaked nonces.
For further research on the application of DP enumeration, it might be valuable to consider some more scenarios in cryptanalysis, such as solving the extended HNP problem for attacking DSA/ECDSA utilizing the non-consecutive leaked bits [13,36]. The parallelized implementation is also a practical issue to be considered.

Author Contributions

Conceptualization and methodology, L.L.; writing—original draft preparation, L.L.; writing—review and editing, Y.Z. and Y.S.; supervision, C.G. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

Publicly available data was analyzed in this study. These data can be found here: https://www.latticechallenge.org/lwe_challenge/challenge.php/ (accessed on 28 December 2022); https://www.sagemath.org/ (accessed on 28 December 2022).

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Lindner, R.; Peikert, C. Better Key Sizes (and Attacks) for LWE-Based Encryption. In Proceedings of the Topics in Cryptology—CT-RSA, San Francisco, CA, USA, 14–18 February 2011; Kiayias, A., Ed.; Springer: Berlin/Heidelberg, Germany, 2011; pp. 319–339. [Google Scholar]
  2. Liu, M.; Nguyen, P.Q. Solving BDD by Enumeration: An Update. In Proceedings of the Topics in Cryptology—CT-RSA, San Francisco, CA, USA, 25 February–1 March 2013; Dawson, E., Ed.; Springer: Berlin/Heidelberg, Germany, 2013; pp. 293–309. [Google Scholar]
  3. Albrecht, M.R.; Fitzpatrick, R.; Göpfert, F. On the Efficacy of Solving LWE by Reduction to Unique-SVP. In Proceedings of the International Conference on Information Security and Cryptology, Fuzhou, China, 5–8 May 2014. [Google Scholar]
  4. Herold, G.; Kirshanova, E.; May, A. On the asymptotic complexity of solving LWE. Des. Codes Cryptogr. 2015, 86, 55–83. [Google Scholar] [CrossRef]
  5. Albrecht, M.R.; Player, R.; Scott, S. On the concrete hardness of Learning with Errors. J. Math. Cryptol. 2015, 9, 169–203. [Google Scholar] [CrossRef] [Green Version]
  6. Albrecht, M.R.; Göpfert, F.; Virdia, F.; Wunderer, T. Revisiting the Expected Cost of Solving uSVP and Applications to LWE. In Proceedings of the Advances in Cryptology—ASIACRYPT, Hong Kong, China, 3–7 December 2017; pp. 297–322. [Google Scholar]
  7. Bai, S.; Miller, S.; Wen, W. A Refined Analysis of the Cost for Solving LWE via uSVP. In Proceedings of the Progress in Cryptology—AFRICACRYPT, Rabat, Morocco, 9–11 July 2019; pp. 181–205. [Google Scholar]
  8. Chen, H.; Chua, L.; Lauter, K.; Song, Y. On the Concrete Security of LWE with Small Secret. Cryptology ePrint Archive, Paper 2020/539. 2020. Available online: https://eprint.iacr.org/2020/539 (accessed on 26 December 2022).
  9. Boneh, D.; Venkatesan, R. Hardness of Computing the Most Significant Bits of Secret Keys in Diffie-Hellman and Related Schemes. In Proceedings of the Advances in Cryptology—CRYPTO ’96, Santa Barbara, CA, USA, 18–22 August 1996; pp. 129–142. [Google Scholar]
  10. Nguyen, P.Q.; Shparlinski, I.E. The Insecurity of the Digital Signature Algorithm with Partially Known Nonces. J. Cryptol. 2002, 15, 151–176. [Google Scholar] [CrossRef]
  11. Nguyen, P.Q.; Tibouchi, M. Lattice-Based Fault Attacks on Signatures. In Fault Analysis in Cryptography; Springer: Berlin/Heidelberg, Germany, 2012; pp. 201–220. [Google Scholar]
  12. Mulder, E.; Hutter, M.; Marson, M.; Pearson, P. Using Bleichenbacher’s Solution to the Hidden Number Problem to Attack Nonce Leaks in 384-Bit ECDSA. In Cryptographic Hardware and Embedded Systems-CHES 2013: 15th International Workshop, Santa Barbara, CA, USA, 20–23 August 2013; Springer: Berlin/Heidelberg, Germany, 2013; Volume 8086. [Google Scholar] [CrossRef] [Green Version]
  13. Li, S.; Fan, S.; Lu, X. Attacking ECDSA Leaking Discrete Bits with a More Efficient Lattice. In International Conference on Information Security and Cryptology; Springer: Cham, Switzerland, 2021; pp. 251–266. [Google Scholar] [CrossRef]
  14. Albrecht, M.R.; Heninger, N. On Bounded Distance Decoding with Predicate: Breaking the “Lattice Barrier” for the Hidden Number Problem. In Proceedings of the Advances in Cryptology—EUROCRYPT, Zagreb, Croatia, 17–21 October 2021; pp. 528–558. [Google Scholar]
  15. Babai, L. On Lovász’ lattice reduction and the nearest lattice point problem. Combinatorica 1986, 6, 1–13. [CrossRef]
  16. Gama, N.; Nguyen, P.Q.; Regev, O. Lattice Enumeration Using Extreme Pruning. In Proceedings of the Advances in Cryptology—EUROCRYPT, French Riviera, France, 30 May–3 June 2010; Gilbert, H., Ed.; Springer: Berlin/Heidelberg, Germany, 2010; pp. 257–278. [Google Scholar]
  17. Kannan, R. Minkowski’s Convex Body Theorem and Integer Programming. Math. Oper. Res. 1987, 12, 415–440. [Google Scholar] [CrossRef] [Green Version]
  18. Chen, Y.; Nguyen, P.Q. BKZ 2.0: Better Lattice Security Estimates. In International Conference on the Theory and Application of Cryptology and Information Security; Springer: Berlin/Heidelberg, Germany, 2011; Volume 7073, pp. 1–20. [Google Scholar] [CrossRef] [Green Version]
  19. Hanrot, G.; Pujol, X.; Stehlé, D. Terminating BKZ. IACR Cryptol. ePrint Arch. 2011, 2011, 198. [Google Scholar]
  20. Aono, Y.; Wang, Y.; Hayashi, T.; Takagi, T. Improved Progressive BKZ Algorithms and Their Precise Cost Estimation by Sharp Simulator. In Proceedings of the Advances in Cryptology—EUROCRYPT, Vienna, Austria, 8–12 May 2016; Springer: Berlin/Heidelberg, Germany, 2016; pp. 789–819. [Google Scholar] [CrossRef]
  21. Bai, S.; Stehlé, D.; Wen, W. Measuring, Simulating and Exploiting the Head Concavity Phenomenon in BKZ. In Proceedings of the Advances in Cryptology—ASIACRYPT, Brisbane, QLD, Australia, 2–6 December 2018; Peyrin, T., Galbraith, S., Eds.; Springer International Publishing: Berlin/Heidelberg, Germany, 2018; pp. 369–404. [Google Scholar]
  22. Schnorr, C.P. Lattice Reduction by Random Sampling and Birthday Methods. In Proceedings of the Annual Symposium on Theoretical Aspects of Computer Science (STACS 2003), Berlin, Germany, 27 February–1 March 2003; Alt, H., Habib, M., Eds.; Springer: Berlin/Heidelberg, Germany, 2003; pp. 145–156. [Google Scholar]
  23. Fukase, M.; Kashiwabara, K. An accelerated algorithm for solving SVP based on statistical analysis. J. Inf. Process. 2015, 23, 67–80. [Google Scholar] [CrossRef]
  24. Teruya, T.; Kashiwabara, K.; Hanaoka, G. Fast Lattice Basis Reduction Suitable for Massive Parallelization and Its Application to the Shortest Vector Problem. In Proceedings of the Public-Key Cryptography—PKC, Rio de Janeiro, Brazil, 25–29 March 2018; Abdalla, M., Dahab, R., Eds.; Springer International Publishing: Cham, Switzerland, 2018; pp. 437–460. [Google Scholar]
  25. Aono, Y.; Nguyen, P.Q. Random Sampling Revisited: Lattice Enumeration with Discrete Pruning. In Proceedings of the Advances in Cryptology—EUROCRYPT, Paris, France, 30 April–4 May 2017; Coron, J.S., Nielsen, J.B., Eds.; Springer International Publishing: Cham, Switzerland, 2017; pp. 65–102. [Google Scholar]
  26. Luan, L.; Gu, C.; Zheng, Y.; Shi, Y. Lattice Enumeration with Discrete Pruning: Improvement, Cost Estimation and Optimal Parameters. Cryptology ePrint Archive, Paper 2022/1067. 2022. Available online: https://eprint.iacr.org/2022/1067 (accessed on 20 December 2022).
  27. Kannan, R. Improved Algorithms for Integer Programming and Related Lattice Problems. In Proceedings of the Fifteenth Annual ACM Symposium on Theory of Computing, Boston, MA, USA, 25–27 April 1983; Association for Computing Machinery: New York, NY, USA, 1983; pp. 193–206. [Google Scholar] [CrossRef]
  28. Schnorr, C.P.; Euchner, M. Lattice Basis Reduction: Improved Practical Algorithms and Solving Subset Sum Problems. Math. Program. 1994, 66, 181–199. [Google Scholar] [CrossRef]
  29. Albrecht, M.; Bai, S.; Fouque, P.A.; Kirchner, P.; Stehlé, D.; Wen, W. Faster Enumeration-Based Lattice Reduction: Root Hermite Factor k^(1/(2k)) Time k^(k/8+o(k)). In Proceedings of the Advances in Cryptology—CRYPTO, Santa Barbara, CA, USA, 17–21 August 2020; Micciancio, D., Ristenpart, T., Eds.; Springer International Publishing: Berlin/Heidelberg, Germany, 2020; pp. 186–212. [Google Scholar] [CrossRef]
  30. Ducas, L. Shortest Vector from Lattice Sieving: A Few Dimensions for Free. In Proceedings of the Advances in Cryptology—EUROCRYPT, Tel Aviv, Israel, 29 April–3 May 2018; pp. 125–145. [Google Scholar]
  31. Albrecht, M.R.; Ducas, L.; Herold, G.; Kirshanova, E.; Postlethwaite, E.W.; Stevens, M. The General Sieve Kernel and New Records in Lattice Reduction. In Proceedings of the Advances in Cryptology—EUROCRYPT, Darmstadt, Germany, 19–23 May 2019; pp. 717–746. [Google Scholar]
  32. Alkim, E.; Ducas, L.; Pöppelmann, T.; Schwabe, P. Post-Quantum Key Exchange: A New Hope. In Proceedings of the 25th USENIX Conference on Security Symposium, Austin, TX, USA, 10–12 August 2016; pp. 327–343. [Google Scholar]
  33. Fincke, U.; Pohst, M. Improved methods for calculating vectors of short length in a lattice. Math. Comput. 1985, 44, 463–471. [Google Scholar] [CrossRef]
  34. TU Darmstadt, Lattice Challenge. Available online: https://www.latticechallenge.org/lwe_challenge/challenge.php/ (accessed on 18 December 2022).
  35. Lattice Algorithms Using Floating-Point Arithmetic (fplll). Available online: https://github.com/fplll/fplll (accessed on 26 December 2022).
  36. De Micheli, G.; Piau, R.; Pierrot, C. A Tale of Three Signatures: Practical Attack of ECDSA with wNAF. In Proceedings of the Progress in Cryptology—AFRICACRYPT 2020, Cairo, Egypt, 20–22 July 2020; pp. 361–381. [Google Scholar]
Table 1. The experimental performance for solving LWE challenge [34].
| LWE (n, q, s) | Nearest Planes (s) | Extreme Pruning (s) | DPenum4BDD (s) | (m, β, M) |
|---|---|---|---|---|
| (40, 1601, 8.005) | 15.4 | 3.2 | 5.2 | (80, 25, 30,000) |
| (45, 2027, 10.135) | 38.1 | 18.9 | 26.1 | (90, 36, 45,000) |
| (50, 2503, 12.515) | 153 | 103.2 | 57.3 | (120, 42, 80,000) |
| (55, 3037, 15.185) | 2868.3 | 2015.1 | 1566.5 | (141, 52, 120,000) |
| (60, 3607, 18.035) | 4910.1 | 4681.1 | 3664.5 | (160, 60, 200,000) |
Table 2. The numerical cost prediction for solving LWE with Lindner–Peikert parameter setting.
| LWE (n, q, s) | Nearest Planes (log(s)) | Linear Pruning (log(s)) | DPenum4BDD (log(s)) | Experiment |
|---|---|---|---|---|
| (128, 2053, 6.7) | 26.6 | 23.6 | 12.8 | 526.3 (s) |
| (192, 4093, 8.9) | 66.5 | 62.8 | 38.8 | - |
| (256, 4093, 8.3) | 111.4 | 105.5 | 68.5 | - |
Table 3. Results for solving HNP under DSA with partially known nonces.
| ℓ | d | p_succ (Estimated) | Actual Rounds | Total Cost (s) | (β, M) | Previous Work |
|---|---|---|---|---|---|---|
| 3 | 64 | 0.089 | 125 | 23.6 | (30, 150,000) | d = 100, mostly solvable using nearest plane [10] |
| 3 | 66 | 0.11 | 113 | 63.3 | | |
| 3 | 68 | 0.345 | 21 | 5.8 | | |
| 2 | 96 | 3.04 × 10⁻⁶ | - | - | (45, 300,000) | d = 100, success rate = 23%, expected cost = 4185/0.23 ≈ 18,195.7 s [2] |
| 2 | 98 | 6.39 × 10⁻⁵ | 22,895 | 26,578.2 | | |
| 2 | 100 | 1.60 × 10⁻⁴ | 8264 | 10,315.4 | | |
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
