Article

Random Integer Lattice Generation via the Hermite Normal Form

Gengran Hu, Lin You, Liang Li, Liqin Hu and Hui Wang
1 School of Cyberspace, Hangzhou Dianzi University, Hangzhou 310018, China
2 State Key Laboratory of Information Security, Institute of Information Engineering, Chinese Academy of Sciences, Beijing 100093, China
* Author to whom correspondence should be addressed.
Entropy 2021, 23(11), 1509; https://doi.org/10.3390/e23111509
Submission received: 27 August 2021 / Revised: 8 November 2021 / Accepted: 10 November 2021 / Published: 14 November 2021

Abstract:
Lattices used in cryptography are integer lattices. Defining and generating a "random integer lattice" are interesting topics. A generation algorithm for a random integer lattice can serve as a source of random inputs for all lattice algorithms. In this paper, we recall the definition of the random integer lattice given by G. Hu et al. and present an improved generation algorithm for it via the Hermite normal form. It can be proven that, with probability $\ge 0.99$, this algorithm outputs an n-dim random integer lattice within $O(n^{2})$ operations.

1. Introduction

Lattices are discrete subgroups of $\mathbb{R}^{n}$. Since Ajtai's discovery of the average-case/worst-case connection in lattice problems [1], lattice-based cryptography has attracted much attention [2,3,4,5]. Up to now, lattice-based cryptographic schemes have been considered a promising alternative to more traditional ones based on the factoring and discrete logarithm problems, since lattice-based schemes can resist efficient quantum algorithms [6]. Lattice algorithms such as LLL [7] and BKZ [8,9] are commonly used in analyzing the security of these lattice-based schemes. The lattices used in cryptography and in lattice algorithms are integer lattices (discrete subgroups of $\mathbb{Z}^{n}$). Thus, suitably defining and generating a random integer lattice is a meaningful topic. In [10], P. Q. Nguyen found that, for dimensions up to 50, LLL almost always outputs the shortest lattice vector, while in theory, LLL's output is only guaranteed to be an approximately short vector. Once we are able to generate a random integer lattice, such a generation algorithm can serve as a source of random inputs for all lattice algorithms, allowing their output quality to be measured on average.
In [1], M. Ajtai defined a family of "random integer lattices" in terms of the worst-case to average-case connection and showed how to generate one from this lattice family. For uniform $A\in\mathbb{Z}_q^{n\times m}$, the lattice family is defined as $\Lambda(A)=\{x\in\mathbb{Z}^{m} : Ax=0\in\mathbb{Z}_q^{n}\}$. In [10], P. Q. Nguyen and D. Stehlé gave a definition of the "random integer lattice" in the sense of the Haar measure, which was approximated by the Goldstein–Mayer method [11]. For a large number N, this "random integer lattice" is chosen uniformly from the set of all $n\times n$ Hermite normal forms with determinant equal to N. When N is prime, to generate such a random integer lattice, one only needs to set $h_{nn}=N$, choose $h_{in}\in[0,N)$ uniformly, and set $h_{ii}=1$ for $i<n$. This type of "random integer lattice" is used in many cryptographic applications. From a mathematical perspective, studying whether the requirement that N be prime can be removed is also a meaningful issue.
In [12], G. Maze studied the probabilistic distribution of the random HNF with a special diagonal structure, where the randomness was derived from a random square matrix whose entries were all chosen uniformly from $[-B,B]$ for large enough B. In [13], G. Hu et al. introduced a different definition of randomness, in which "random integer lattice" means that the lattice's HNF is chosen uniformly from all $n\times n$ HNFs whose determinants are upper bounded by a large number M. In the same paper [13], G. Hu et al. also presented a complete random integer lattice generation algorithm. In that algorithm, the first step is to generate a determinant. To make the final output uniform, it is necessary to compute the total number of HNFs with a fixed determinant N. Since this number can be computed only when the factorization of N is known, a subroutine for factoring integers is necessary in that algorithm. In this paper, we improve this algorithm with the help of the distribution of the diagonal elements in the random HNF. The improved algorithm first generates the diagonal elements $h_{11},\dots,h_{n-1,n-1}$ without computing the total number of HNFs with a fixed determinant, and then uses the inverse sampling method to generate the final diagonal element $h_{nn}$. Thus, the factorization subroutine is no longer needed, which makes the improved algorithm more efficient.
The remainder of the paper is organized as follows. In Section 2, we give some necessary preliminaries. In Section 3, we recall the definition of the random integer lattice given by G. Hu et al. and discuss the distribution of the diagonal elements in the random integer lattice's HNF. In Section 4, we present our improved algorithm for generating a random integer lattice via the HNF. Finally, we give our conclusion in Section 5.

2. Preliminaries

We denote by $\mathbb{Z}$ the ring of integers and by $\mathbb{R}$ the field of real numbers. We use $GL_n(\mathbb{Z})$ to denote the general linear group over $\mathbb{Z}$. For convenience, we denote the set of all $n\times n$ nonsingular integer matrices by $GL_n(\mathbb{R})\cap\mathbb{Z}^{n\times n}$.

Lattice and the HNF

Given a matrix $B=(b_{ij})\in\mathbb{R}^{n\times m}$ with rank n, the lattice $L(B)$ spanned by the rows of B is:
$$L(B)=\Big\{xB=\sum_{i=1}^{n}x_i b_i \;\Big|\; x_i\in\mathbb{Z}\Big\},$$
where $b_i$ is the i-th row of B. We call m the dimension of $L(B)$ and n its rank. The determinant of $L(B)$, denoted $\det(L(B))$, is defined as $\sqrt{\det(BB^{T})}$. It is easy to see that when B is full-rank ($n=m$), the determinant becomes $|\det(B)|$.
Two lattices $L(B_1)$ and $L(B_2)$ are exactly the same when there exists a matrix $U\in GL_n(\mathbb{Z})$ s.t. $B_1=UB_2$. Lattices used in cryptography are usually "integer lattices", whose basis matrices are over $\mathbb{Z}$ instead of $\mathbb{R}$. Thus, the space of all full-rank integer lattices is actually $(GL_n(\mathbb{R})\cap\mathbb{Z}^{n\times n})/GL_n(\mathbb{Z})$.
The Hermite Normal Form (HNF) is a useful tool to study integer matrices:
Definition 1.
A square nonsingular integer matrix $H\in\mathbb{Z}^{n\times n}$ is said to be in the HNF if:
• H is upper triangular, i.e., $h_{ij}=0$ for all $i>j$;
• All diagonal elements are positive, i.e., $h_{ii}>0$ for all i;
• All nondiagonal elements are reduced modulo the corresponding diagonal element in the same column, i.e., $0\le h_{ij}<h_{jj}$ for all $i<j$.
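To make the three conditions concrete, the following short Python sketch (the function name `is_hnf` and the example matrices are ours, not from the paper) checks whether a given square integer matrix satisfies Definition 1.

```python
def is_hnf(H):
    """Check the three HNF conditions of Definition 1 for a square integer matrix H
    (given as a list of rows)."""
    n = len(H)
    for i in range(n):
        # all diagonal elements must be positive
        if H[i][i] <= 0:
            return False
        for j in range(n):
            # upper triangular: entries below the diagonal vanish
            if i > j and H[i][j] != 0:
                return False
            # entries above the diagonal are reduced modulo the diagonal of their column
            if i < j and not (0 <= H[i][j] < H[j][j]):
                return False
    return True

# Example: a 3x3 matrix in HNF, and one violating the reduction condition
print(is_hnf([[2, 1, 3], [0, 3, 4], [0, 0, 5]]))  # True
print(is_hnf([[2, 5, 3], [0, 3, 4], [0, 0, 5]]))  # False: h_12 = 5 is not in [0, h_22) = [0, 3)
```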
There exists a famous result for the HNF [14] (Chapter 2, page 66):
Theorem 1.
For every $A\in GL_n(\mathbb{R})\cap\mathbb{Z}^{n\times n}$, there exists a unique $n\times n$ matrix B in the HNF of the form $B=UA$ with $U\in GL_n(\mathbb{Z})$.
By this theorem, an integer lattice corresponds to its unique HNF, implying that generating an integer lattice is actually equivalent to generating an HNF.

3. Random Integer Lattice

3.1. Definition

In this part, we refer to [13] to recall some results related to the random integer lattice.
First, for $M,N\in\mathbb{Z}^{+}$, define:
$$H_n(\le M)\triangleq\{H \text{ is an } n\text{-dim HNF}\mid \det(H)\le M\},$$
$$H_n(N)\triangleq\{H \text{ is an } n\text{-dim HNF}\mid \det(H)=N\}.$$
Gruber [15] counted the size of $H_n(N)$:
Theorem 2.
If N has prime decomposition $N=p_1^{r_1}\cdots p_t^{r_t}$, then:
$$|H_n(N)|=\prod_{i=1}^{t}\prod_{j=1}^{n-1}\frac{p_i^{\,r_i+j}-1}{p_i^{\,j}-1}.$$
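As an illustration, the following Python sketch (function name is ours) evaluates Gruber's formula from the prime factorization of N; `sympy.factorint` is only used to obtain that factorization, and exact rational arithmetic is used because the individual factors need not be integers even though the product always is.

```python
from fractions import Fraction
from sympy import factorint

def hnf_count(n, N):
    """Number of n-dim HNFs with determinant exactly N (Theorem 2, Gruber's formula)."""
    count = Fraction(1)
    for p, r in factorint(N).items():
        for j in range(1, n):
            count *= Fraction(p**(r + j) - 1, p**j - 1)
    return int(count)  # the overall product is always an integer

# Examples: for n = 2 and prime N = p there are p + 1 HNFs;
# for n = 3 and N = p the classical count 1 + p + p^2 is recovered.
print(hnf_count(2, 7))   # 8
print(hnf_count(3, 3))   # 13
print(hnf_count(3, 12))  # 455
```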
There exists an asymptotic estimation for $|H_n(\le M)|$ in [13]:
Theorem 3.
For a large positive integer M,
$$|H_n(\le M)|=\frac{\prod_{s=2}^{n}\zeta(s)}{n}\,M^{n}+O\!\left(M^{n-1}\log M\right).$$
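Under our own small-parameter choices (not from the paper), the following sketch compares the exact value of $|H_n(\le M)|$ with the leading term of Theorem 3. The exact value is obtained by enumerating diagonals $(a_1,\dots,a_n)$ with $\prod_i a_i\le M$ and weighting each by $\prod_j a_j^{\,j-1}$, which is the number of HNFs with that diagonal (column $j$ has $j-1$ entries reduced modulo $a_j$, by Definition 1); the two values should agree up to the $O(M^{n-1}\log M)$ error term.

```python
from math import prod
from mpmath import zeta

def exact_hnf_count(n, M):
    """|H_n(<= M)|: sum over diagonals (a_1, ..., a_n) with a_1 * ... * a_n <= M,
    each weighted by prod_j a_j^(j-1) choices of reduced off-diagonal entries."""
    def rec(j, remaining):
        if j == n:
            # last column: a_n ranges over [1, remaining] with a_n^(n-1) entries above it
            return sum(a ** (n - 1) for a in range(1, remaining + 1))
        return sum(a ** (j - 1) * rec(j + 1, remaining // a)
                   for a in range(1, remaining + 1))
    return rec(1, M)

n, M = 3, 60
leading_term = prod(float(zeta(s)) for s in range(2, n + 1)) * M ** n / n
print(exact_hnf_count(n, M), round(leading_term))  # exact count vs. leading term of Theorem 3
```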
H is called an n-dim random nonsingular HNF if, for a large integer $M>0$, H is chosen uniformly from $H_n(\le M)$; the lattice $L(H)$ generated by such an H is called a random integer lattice.

3.2. Diagonal Distribution

In [13], Hu et al. studied the expectation and variance of every entry and the probability distribution of every diagonal entry:
Theorem 4.
Let $H=(h_{ij})$ be an n-dim random nonsingular HNF with the determinant bounded by $M>0$ and let t be an integer in $[1,n-1]$. Given an increasing subset $\{i_1,\dots,i_t\}$ of $\{1,\dots,n\}$ with increasing complementary subset $\{j_1,\dots,j_{n-t}\}$, and positive integers $b_1,\dots,b_t$, when $M\to+\infty$ we have:
$$P(h_{i_k,i_k}=b_k \text{ for all } k)=\begin{cases}0, & i_t=n,\\[4pt] \dfrac{\prod_{k=1}^{n-t-1}\zeta(n+1-j_k)}{\prod_{s=2}^{n}\zeta(s)}\cdot\prod_{l=1}^{t}b_l^{\,i_l-n-1}, & i_t<n.\end{cases}$$
If we take $t=1$, a one-element set $T=\{i\}$ ($i\in[1,n-1]$), and a positive integer b, then the increasing complementary subset of T in $\{1,2,\dots,n\}$ is $\{1,\dots,i-1,i+1,\dots,n\}$. Applying the above theorem, we obtain the following corollary:
Corollary 1.
Let $H=(h_{ij})$ be an n-dim random nonsingular HNF with the determinant bounded by $M>0$; then, for $i\in[1,n-1]$ and a positive integer b, when $M\to+\infty$, we have:
$$P(h_{ii}=b)=\frac{1}{\zeta(n+1-i)\cdot b^{\,n+1-i}}\qquad(b=1,2,\dots).$$
We denote this distribution of $h_{ii}$ by $D(n,i)$.
Remark 1.
Notice that in Theorem 4, when $i_t<n$ and $M\to\infty$, both cases $t=1$ and $1<t<n$ are valid: the formula gives the joint distribution of $h_{i_k,i_k}$ ($k=1,\dots,t$) for $1<t<n$, and the marginal distribution of the single variable $h_{i_1,i_1}$ for $t=1$, as in Corollary 1. Combining Theorem 4 and Corollary 1, it can be deduced that when $M\to\infty$, the first $n-1$ diagonal elements $h_{11},\dots,h_{n-1,n-1}$ are independent variables.

4. Generating the Random Integer Lattice via the HNF

In this section, we present our random integer lattice generation algorithm via the HNF. Firstly, we introduce the inverse sampling method in probability theory to generate all the diagonal elements. Then, we generate all the nondiagonal elements accordingly.

4.1. Inverse Sampling Method

Given a distribution D over some ordered set A, we can use the inverse sampling method to generate a random variable according to the distribution D . We present two versions of the inverse sampling method: continuous-ISM and discrete-ISM.
Theorem 5.  
(Continuous-ISM) For a distribution $D$ over an interval $[a,b]$ with cumulative distribution function $F_X(x)$, choose $y$ uniformly at random from $[0,1]$ and compute $z$ s.t. $F_X(z)=y$; then the resulting variable $Z$ has distribution $D$.
Proof. 
Our goal is to prove that $Z$ has $F_X$ as its cumulative distribution function; namely, for any $x\in[a,b]$, we have to prove $P(Z\le x)=F_X(x)$. Since $F_X$ is a monotonically increasing function, we have:
$$P(Z\le x)=P\big(F_X(Z)\le F_X(x)\big)=P\big(y\le F_X(x)\big)=F_X(x),$$
where the second equality comes from $F_X(Z)=y$ and the last one is a direct result of the uniformity of y in $[0,1]$. Thus, the cumulative distribution function of Z is actually $F_X$, which completes the proof.    □
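A minimal Python illustration of the continuous-ISM (a toy example of ours, not from the paper): to sample from the distribution on $[0,1]$ with CDF $F(x)=x^{n}$, draw y uniformly and return $z=y^{1/n}$; this is the same inversion used later for $h_{nn}$.

```python
import random

def sample_power_cdf(n):
    """Continuous-ISM for the CDF F(x) = x^n on [0, 1]: solve F(z) = y for uniform y."""
    y = random.random()
    return y ** (1.0 / n)

# The CDF x^n on [0,1] is also the CDF of the maximum of n independent uniforms,
# which gives an easy empirical check of the sampler.
samples = [sample_power_cdf(5) for _ in range(100_000)]
print(sum(s <= 0.8 for s in samples) / len(samples), 0.8 ** 5)  # both close to 0.328
```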
Theorem 6. 
(Discrete-ISM) For a distribution $D$ over a finite ordered set $A=\{a_k\}_{k=1}^{n}\subset\mathbb{Z}$ with corresponding densities $f_k=P(X=a_k)$, choose $y$ uniformly at random from $[0,1]$ and compute the minimum j s.t. $\sum_{k=1}^{j}f_k\ge y$; then, we let $Z=a_j$, and Z will have distribution $D$.
Proof. 
For any $a_j\in A$, we need to prove $P(Z=a_j)=f_j$. Since j is the minimum index s.t. $\sum_{k=1}^{j}f_k\ge y$, we know that $\sum_{k=1}^{j-1}f_k<y$. Then, we have:
$$P(Z=a_j)=P\Big(\sum_{k=1}^{j}f_k\ge y,\ \sum_{k=1}^{j-1}f_k<y\Big)=P\Big(\sum_{k=1}^{j}f_k\ge y\Big)-P\Big(\sum_{k=1}^{j-1}f_k\ge y\Big)=\sum_{k=1}^{j}f_k-\sum_{k=1}^{j-1}f_k=f_j\quad(\text{since } y \text{ is uniform in } [0,1]),$$
which completes the proof.    □
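The discrete-ISM of Theorem 6 translates directly into code. The sketch below (names are ours) returns the smallest element whose cumulative density reaches the uniform draw y.

```python
import random

def discrete_ism(values, densities):
    """Discrete-ISM (Theorem 6): sample from the finite ordered set `values`
    with probabilities `densities` by inverting the cumulative sums."""
    y = random.random()
    cumulative = 0.0
    for a, f in zip(values, densities):
        cumulative += f
        if cumulative >= y:
            return a
    return values[-1]  # guard against floating-point round-off

# Example: a small three-point distribution
print(discrete_ism([1, 2, 3], [0.5, 0.3, 0.2]))
```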

4.2. Generating the Random Integer Lattice via the HNF

From Section 3.1, we can generate a random integer lattice by equivalently generating a random nonsingular HNF. To begin with, we generate the first $n-1$ diagonal elements $h_{11},h_{22},\dots,h_{n-1,n-1}$. Then, we generate the last diagonal element $h_{nn}$. Finally, all the nondiagonal elements are generated, and we output the matrix H as a lattice basis for our random integer lattice.

4.2.1. Generating $h_{11},h_{22},\dots,h_{n-1,n-1}$

From Corollary 1, we know that for an n-dim random nonsingular HNF, when $i\in[1,n-1]$, the distribution of $h_{ii}$ is:
$$D(n,i):\ P(X=x)=\frac{1}{\zeta(n+1-i)\cdot x^{\,n+1-i}}\qquad(x=1,2,\dots).$$
Therefore, we generate the diagonal elements $h_{11},h_{22},\dots,h_{n-1,n-1}$ according to $D(n,i)$ by discrete-ISM (Theorem 6).
For $i\in[1,n-1]$, we choose y uniformly at random from $[0,1]$ and increase $j_i$, starting from 1, until it satisfies $\frac{1}{\zeta(n+1-i)}\sum_{k=1}^{j_i}k^{-(n+1-i)}\ge y$. Then, we set $h_{ii}=j_i$. By Theorem 6, each diagonal element $h_{ii}$ has distribution $D(n,i)$, which is what we need.
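A short Python sketch of this step (our own naming; mpmath supplies $\zeta$ and the high-precision floating point suggested in Remark 2 of Section 4.4):

```python
from mpmath import mp, zeta, mpf, rand

mp.prec = 150  # working precision in bits (cf. Remark 2)

def sample_diagonal(n, i):
    """Sample h_ii ~ D(n, i) for i in [1, n-1] via discrete-ISM:
    the smallest j with (1/zeta(n+1-i)) * sum_{k<=j} k^{-(n+1-i)} >= y."""
    s_exp = n + 1 - i
    y = rand() * zeta(s_exp)   # compare partial sums against zeta(s)*y instead of dividing
    j, partial = 1, mpf(1)
    while partial < y:
        j += 1
        partial += mpf(j) ** (-s_exp)
    return j

print([sample_diagonal(20, i) for i in range(1, 20)])  # mostly 1's, as Corollary 1 predicts
```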

4.2.2. Generating $h_{nn}$

After generating the first $n-1$ diagonal elements $h_{ii}$, we set $D_{n-1}\triangleq\prod_{i=1}^{n-1}h_{ii}$. Since the determinant upper bound is M, the last diagonal element $h_{nn}$ should lie in $[1,M/D_{n-1}]$. We point out that $D_{n-1}$ is a small number compared to M with high probability. More specifically, the following theorem can be proven.
Theorem 7.
Let $H=(h_{ij})$ be an n-dim random nonsingular HNF with the determinant bounded by $M>0$; for $D_{n-1}\triangleq\prod_{i=1}^{n-1}h_{ii}$, we have:
$$E(D_{n-1})=\frac{1}{\zeta(n)}\log M+O(1).$$
Moreover, by Markov's inequality, we find that:
$$P\big(D_{n-1}\ge(\log M)^{2}\big)\le\frac{1}{\log M}.$$
To prove Theorem 7, the following lemma from [13] is needed.
Lemma 1.
Given an integer $n\ge 4$ and a large integer $M>0$, for any non-negative increasing sequence $(s_i)_{1\le i\le n}$ s.t. $s_n-s_{n-3}\ge 2$ and $s_n-s_{n-2}\ge 1$, and the corresponding summation:
$$S(M,s_1\dots s_n)\triangleq\sum_{a_i\in\mathbb{Z}^{+},\ \prod_{i=1}^{n}a_i\le M}a_1^{\,s_1}\cdots a_n^{\,s_n},$$
we have the asymptotic formulas for $S(M,s_1\dots s_n)$ given in Table 1, where $\zeta(s)=\sum_{i=1}^{\infty}i^{-s}$ is the well-known Riemann zeta function and the constant in the O notation depends only on n.
Now, we start to prove Theorem 7.
Proof. 
For the expectation of $D_{n-1}=\prod_{i=1}^{n-1}h_{ii}$, we find that:
$$\begin{aligned}
E(D_{n-1}) &= \sum_{k\le M} k\cdot P(D_{n-1}=k)
 = \sum_{k\le M}\frac{k\cdot|H_{n-1}(k)|\cdot\sum_{a_n\le M/k}a_n^{\,n-1}}{|H_n(\le M)|}\\
&= \sum_{k\le M}\frac{\Big(\sum_{\prod_{j=1}^{n-1}a_j=k}\prod_{j=1}^{n-1}a_j\cdot\prod_{j=1}^{n-1}a_j^{\,j-1}\Big)\cdot\sum_{a_n\le M/k}a_n^{\,n-1}}{|H_n(\le M)|}
 = \sum_{k\le M}\frac{\Big(\sum_{\prod_{j=1}^{n-1}a_j=k}\prod_{j=1}^{n-1}a_j^{\,j}\Big)\cdot\sum_{a_n\le M/k}a_n^{\,n-1}}{|H_n(\le M)|}\\
&= \frac{\sum_{a_j\in\mathbb{Z}^{+},\ \prod_{j=1}^{n}a_j\le M}\ \prod_{j=1}^{n-1}a_j^{\,j}\cdot a_n^{\,n-1}}{\sum_{a_j\in\mathbb{Z}^{+},\ \prod_{j=1}^{n}a_j\le M}\ \prod_{j=1}^{n}a_j^{\,j-1}}
 = \frac{S(M,1,2,\dots,n-2,n-1,n-1)}{S(M,0,1,\dots,n-2,n-1)}\quad(\text{as in Lemma 1})\\
&= \frac{\frac{\prod_{s=2}^{n-1}\zeta(s)}{n}\,M^{n}\log M+O(M^{n})}{\frac{\prod_{s=2}^{n}\zeta(s)}{n}\,M^{n}+O(M^{n-1}\log M)}\quad(\text{by Lemma 1})
 = \frac{\frac{\prod_{s=2}^{n-1}\zeta(s)}{n}\log M+O(1)}{\frac{\prod_{s=2}^{n}\zeta(s)}{n}+O(\log M/M)}
 = \frac{1}{\zeta(n)}\log M+O(1),
\end{aligned}$$
which completes the first part of Theorem 7.
For the second part, recall that for any non-negative random variable X, Markov’s inequality tells us that:
$$P(X\ge a)\le\frac{E(X)}{a}.$$
Since $D_{n-1}$ is non-negative, we apply Markov's inequality to it by setting $a=(\log M)^{2}$ and obtain:
$$P\big(D_{n-1}\ge(\log M)^{2}\big)\le\frac{\frac{1}{\zeta(n)}\log M+O(1)}{(\log M)^{2}}\le\frac{1}{\log M},$$
which completes the second part of the proof.    □
From Theorem 7, we know that $D_{n-1}$ is small compared to M with high probability; thus, $M/D_{n-1}$ is still large enough for us to obtain a similar result for $h_{nn}$. We think this is a relatively reasonable way to describe the distribution of $h_{nn}$. Thus, for the random nonsingular HNF with the determinant bounded by M, conditioned on $\prod_{i=1}^{n-1}h_{ii}=D_{n-1}$, the distribution of $h_{nn}$ is the following:
$$\tilde{D}(n,M,D_{n-1}):\ P(X=x)=\frac{x^{\,n-1}}{\sum_{k=1}^{\lfloor M/D_{n-1}\rfloor}k^{\,n-1}}=\frac{x^{\,n-1}}{\frac{1}{n}\big(\frac{M}{D_{n-1}}\big)^{n}+O\big(\big(\frac{M}{D_{n-1}}\big)^{n-1}\big)}\qquad\Big(x=1,2,\dots,\Big\lfloor\frac{M}{D_{n-1}}\Big\rfloor\Big).$$
Moreover, the corresponding cumulative distribution function is:
$$F_X(x)=P(X\le x)=\frac{\sum_{k=1}^{x}k^{\,n-1}}{\frac{1}{n}\big(\frac{M}{D_{n-1}}\big)^{n}+O\big(\big(\frac{M}{D_{n-1}}\big)^{n-1}\big)}=\frac{\frac{1}{n}x^{n}+O(x^{n-1})}{\frac{1}{n}\big(\frac{M}{D_{n-1}}\big)^{n}+O\big(\big(\frac{M}{D_{n-1}}\big)^{n-1}\big)}\qquad\Big(x=1,2,\dots,\Big\lfloor\frac{M}{D_{n-1}}\Big\rfloor\Big).$$
Since $M/D_{n-1}$ is still very large, we know that:
$$F_X(x)\approx\frac{x^{n}/n}{(M/D_{n-1})^{n}/n}=\left(\frac{x}{M/D_{n-1}}\right)^{n}\triangleq G_X(x).$$
As a result, $G_X(x)$ is a rather good estimation for $F_X(x)$. In fact, if we define the distribution $\tilde{D}_0(n,M,D_{n-1})$ by the cumulative distribution function $G_X(x)$ as follows:
$$\tilde{D}_0(n,M,D_{n-1}):\ P(X\le x)=\left(\frac{x}{M/D_{n-1}}\right)^{n}\qquad\Big(x=1,2,\dots,\Big\lfloor\frac{M}{D_{n-1}}\Big\rfloor\Big),$$
then we have the following theorem.
Theorem 8.
For large enough $M\in\mathbb{Z}^{+}$ and a positive integer $D_{n-1}=o(M)$, the statistical distance between $\tilde{D}(n,M,D_{n-1})$ and $\tilde{D}_0(n,M,D_{n-1})$ is at most $n\cdot O\big(\frac{D_{n-1}}{M}\big)$.
Proof. 
Denote $M/D_{n-1}$ by $\tilde{M}$. Recall from Section 4.2.2 that the cumulative distribution function of $\tilde{D}(n,M,D_{n-1})$ is $F_X(x)=\frac{\frac{1}{n}x^{n}+O(x^{n-1})}{\frac{1}{n}\tilde{M}^{n}+O(\tilde{M}^{n-1})}$, while the cumulative distribution function of $\tilde{D}_0(n,M,D_{n-1})$ is $G_X(x)=\big(\frac{x}{\tilde{M}}\big)^{n}$. Then $x\le\tilde{M}$, and for every $x\in[1,\tilde{M}]$, we have:
$$\begin{aligned}
|F_X(x)-G_X(x)| &= \left|\frac{\frac{1}{n}x^{n}+O(x^{n-1})}{\frac{1}{n}\tilde{M}^{n}+O(\tilde{M}^{n-1})}-\Big(\frac{x}{\tilde{M}}\Big)^{n}\right|
 = \left|\frac{x^{n}+n\cdot O(x^{n-1})}{\tilde{M}^{n}+n\cdot O(\tilde{M}^{n-1})}-\Big(\frac{x}{\tilde{M}}\Big)^{n}\right|\\
&= \left|\frac{\big(x^{n}+n\cdot O(x^{n-1})\big)\tilde{M}^{n}-\big(\tilde{M}^{n}+n\cdot O(\tilde{M}^{n-1})\big)x^{n}}{\tilde{M}^{2n}+n\cdot O(\tilde{M}^{2n-1})}\right|
 = \left|\frac{n\cdot O(x^{n-1})\,\tilde{M}^{n}-n\cdot O(\tilde{M}^{n-1})\,x^{n}}{\tilde{M}^{2n}+n\cdot O(\tilde{M}^{2n-1})}\right|\\
&= \left|\frac{n\cdot O(\tilde{M}^{n-1})\,\tilde{M}^{n}-n\cdot O(\tilde{M}^{n-1})\,\tilde{M}^{n}}{\tilde{M}^{2n}+n\cdot O(\tilde{M}^{2n-1})}\right|\quad(\text{since } x\le\tilde{M})
 = \left|\frac{n\cdot O(\tilde{M}^{2n-1})}{\tilde{M}^{2n}+n\cdot O(\tilde{M}^{2n-1})}\right|\\
&= \left|\frac{n\cdot O\big(\tfrac{1}{\tilde{M}}\big)}{1+n\cdot O\big(\tfrac{1}{\tilde{M}}\big)}\right|
 = n\cdot O\Big(\frac{1}{\tilde{M}}\Big)
 = n\cdot O\Big(\frac{D_{n-1}}{M}\Big),
\end{aligned}$$
which implies that the statistical distance between $\tilde{D}(n,M,D_{n-1})$ and $\tilde{D}_0(n,M,D_{n-1})$ is bounded by $n\cdot O\big(\frac{D_{n-1}}{M}\big)$.    □
Since $M/D_{n-1}$ is still very large, we can generate $h_{nn}$ according to $\tilde{D}_0(n,M,D_{n-1})$ (which is close enough to $\tilde{D}(n,M,D_{n-1})$) by continuous-ISM (Theorem 5).
We choose y uniformly at random from $[0,1]$ and compute $z\in\mathbb{R}^{+}$ s.t.:
$$\left(\frac{z}{M/D_{n-1}}\right)^{n}=y.$$
Then, we set $h_{nn}=\lceil z\rceil$. By Theorems 5 and 8, the diagonal element $h_{nn}$ has distribution $\tilde{D}_0(n,M,D_{n-1})$, which is close enough to $\tilde{D}(n,M,D_{n-1})$.
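In code, this step is essentially a one-liner (a sketch with our own names; the ceiling keeps the output an integer in $[1, M/D_{n-1}]$):

```python
import random
from math import ceil

def sample_last_diagonal(n, M, D_prev):
    """Sample h_nn approximately from D~_0(n, M, D_{n-1}) by inverting
    G_X(x) = (x / (M / D_{n-1}))^n at a uniform y."""
    y = random.random()
    z = (M / D_prev) * y ** (1.0 / n)
    return max(1, ceil(z))  # guard the (measure-zero) case z = 0

print(sample_last_diagonal(5, 2**60, 3))
```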

4.2.3. Generating $h_{ij}$ ($i\ne j$)

This part is relatively easy. For $i,j=1,\dots,n$, let $h_{ij}$ be chosen uniformly at random from $[0,h_{jj})$ if $i<j$, and let $h_{ij}=0$ if $i>j$.

4.2.4. Correctness

By the discussion above, for large enough $M>0$, the distribution of the diagonal elements $h_{11},\dots,h_{nn}$ generated by this algorithm is close enough to their distribution in a random nonsingular HNF. For $i<j$ in $[1,n]$, since a random nonsingular HNF's $h_{ij}$ is uniform in $[0,h_{jj})$ and our $h_{ij}$ is generated in the same way, the output of this algorithm is also close enough to a real random nonsingular HNF, which implies the correctness of the algorithm.

4.3. Algorithm 1: Generate Random Integer Lattice

Now, we present Algorithm 1 for generating a random integer lattice.
Algorithm 1: Random Integer Lattice Generation
Require: Dimension n, large integer M
Ensure: n-dim random integer lattice L with $\det(L)\le M$
 Step 1: Generate $h_{11},\dots,h_{n-1,n-1}$
  $D_0=1$
  for $i=1$ to $n-1$ do
      $j_i=1$, $s_i=1$
      choose $y_i\in[0,1]$ uniformly
      while $s_i<\zeta(n+1-i)\cdot y_i$ do
          $j_i=j_i+1$
          $s_i=s_i+j_i^{-(n+1-i)}$
      end while
      $D_i=D_{i-1}\cdot j_i$
      set $h_{ii}=j_i$
  end for
 Step 2: Generate $h_{nn}$
  choose $y\in[0,1]$ uniformly
  $z=y^{1/n}$
  $z=z\cdot M/D_{n-1}$
  set $h_{nn}=\lceil z\rceil$
 Step 3: Generate $h_{ij}$ ($i\ne j$)
  for $j=1$ to n do
      for $i=1$ to $j-1$ do
          choose $h_{ij}\in[0,h_{jj})$ uniformly
      end for
      for $i=j+1$ to n do
          set $h_{ij}=0$
      end for
  end for
 Step 4: Set $H=(h_{ij})$ and output $L(H)$
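The pseudocode above translates line by line into the following Python sketch (our own transcription, not the authors' reference code; mpmath supplies $\zeta$ and the high-precision floating point suggested in Remark 2, and the ceiling in Step 2 keeps $h_{nn}$ integral):

```python
import random
from math import ceil
from mpmath import mp, zeta, mpf, rand

mp.prec = 150  # floating-point precision in bits (Remark 2)

def random_integer_lattice(n, M):
    """Algorithm 1: return the HNF basis H of an n-dim random integer lattice with det <= M."""
    H = [[0] * n for _ in range(n)]

    # Step 1: generate h_11, ..., h_{n-1,n-1} by discrete-ISM for D(n, i)
    D = 1
    for i in range(1, n):
        s_exp = n + 1 - i
        y = rand() * zeta(s_exp)
        j, partial = 1, mpf(1)
        while partial < y:
            j += 1
            partial += mpf(j) ** (-s_exp)
        H[i - 1][i - 1] = j
        D *= j

    # Step 2: generate h_nn by inverse sampling of G_X(x) = (x / (M/D))^n
    y = random.random()
    H[n - 1][n - 1] = max(1, ceil((M / D) * y ** (1.0 / n)))

    # Step 3: generate the off-diagonal entries
    for j in range(n):
        for i in range(j):
            H[i][j] = random.randrange(H[j][j])  # uniform in [0, h_jj)
        # entries below the diagonal stay 0

    # Step 4: output the basis of L(H)
    return H

for row in random_integer_lattice(6, 10**12):
    print(row)
```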

4.4. Time Complexity of Algorithm 1

Now, we analyze the time complexity of Algorithm 1. Obviously, the most time-consuming part of Algorithm 1 is the floating-point update $s_i=s_i+j_i^{-(n+1-i)}$ inside the while loop for each i in Step 1. Denote by $T(i)$ the number of times $s_i=s_i+j_i^{-(n+1-i)}$ is computed in the i-th while loop. Notice that:
$$P(h_{ii}=1)=\frac{1}{\zeta(n+1-i)};$$
since $\zeta(s)$ converges to one quite fast as s grows, the majority of the $h_{ii}$ will be set to one. In fact, by numerical results, we have the following:
Fact 1: For any integer $n\ge 10$,
$$\frac{1}{\prod_{s=10}^{n}\zeta(s)}\ge 0.999.$$
By this fact, for $i\le n-10$, all the $h_{ii}$ are very likely to be set to one, implying that $T(1),T(2),\dots,T(n-10)=0$ with probability $\ge 0.999$. Then, we consider $T(n-9),T(n-8),\dots,T(n-1)$. If we set the probability bound for each $T(i)$ to be 0.999, then, by accurate numerical results, we have the following Table 2:
Thus, we have the following theorem:
Theorem 9.
The number of floating-point operations performed in Algorithm 1 is bounded by 1300 with probability $\ge 0.99$.
Proof. 
By the above table, $\sum_{i=n-9}^{n-1}T(i)$ is bounded by 640 with probability $\ge 0.999^{9}$. Since $T(1),T(2),\dots,T(n-10)=0$ with probability $\ge 0.999$, we know that $\sum_{i=1}^{n-1}T(i)$ is bounded by 640 with probability $\ge 0.999^{10}\ge 0.99$. Notice that each update $s_i=s_i+j_i^{-(n+1-i)}$ needs two floating-point operations, and another four floating-point operations are needed to generate $h_{nn}$ in Step 2; thus, with probability $\ge 0.99$, the total number of floating-point operations performed in Algorithm 1 is bounded by $640\times 2+4=1284<1300$, which completes the proof. □
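The entries of Table 2 can be reproduced numerically: for the exponent $s=n+1-i$ governing $h_{ii}$, the 0.999-quantile of $D(n,i)$ is the smallest m with $\big(\sum_{k\le m}k^{-s}\big)/\zeta(s)\ge 0.999$, and then $T(i)\le m-1$ with probability $\ge 0.999$. A short sketch of ours, using mpmath for $\zeta$:

```python
from mpmath import zeta, mpf

def t_bound(s_exp, prob=0.999):
    """Smallest m with (sum_{k<=m} k^-s) / zeta(s) >= prob, minus 1:
    an upper bound on T(i) holding with probability >= prob, where s = n + 1 - i."""
    target = prob * zeta(s_exp)
    m, partial = 1, mpf(1)
    while partial < target:
        m += 1
        partial += mpf(m) ** (-s_exp)
    return m - 1

# Exponents s = 10, 9, ..., 2 correspond to i = n-9, n-8, ..., n-1 in Table 2
print([t_bound(s) for s in range(10, 1, -1)])  # expected: [0, 1, 1, 1, 2, 3, 6, 19, 607]
```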
Remark 2.
We point out that the floating-point precision affects the actual running time of Algorithm 1. In our experiments, 150 bits of precision proved to be a suitable choice.
It is not hard to see that in Algorithm 1, besides the floating-point operations, the remaining parts of Step 1, Step 2, and Step 3 take $O(n^{2})$, $O(1)$, and $O(n^{2})$ operations, respectively. Combining this with Theorem 9, we have the following result:
Theorem 10.
Algorithm 1 outputs a random integer lattice within $O(n^{2})$ operations with probability $\ge 0.99$.

5. Conclusions

In this paper, we presented an improved algorithm for generating random integer lattices and analyzed its time complexity. We proved that, with probability $\ge 0.99$, this algorithm outputs an n-dim random integer lattice within $O(n^{2})$ operations. There is still room for improvement of our algorithm, which we leave as an open problem.

Author Contributions

Conceptualization, G.H.; formal analysis, L.Y.; funding acquisition, G.H. and L.Y.; investigation, G.H.; methodology, G.H.; validation, L.L.; writing—original draft, G.H.; writing—review and editing, L.L., L.H., and H.W. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded in part by the National Natural Science Foundation of China (No. 61602143, No. 61772166) and in part by the Natural Science Foundation of Zhejiang Province of China (No. LZ17F020002).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Acknowledgments

We thank Yanbin Pan for his wonderful suggestions about this paper, and we thank the referees for putting forward their excellent advice on how to improve the presentation of this paper.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Ajtai, M. Generating hard instances of lattice problems. In Proceedings of the STOC ’96 Twenty-Eighth Annual ACM Symposium on Theory of Computing, Philadelphia, PA, USA, 22–24 May 1996; Miller, G., Ed.; ACM Press: New York, NY, USA, 1996; pp. 99–108. [Google Scholar]
  2. Ajtai, M.; Dwork, C. A public-key cryptosystem with worst-case/average-case equivalence. In Proceedings of the STOC ’97 Twenty-Ninth Annual ACM Symposium on Theory of Computing, El Paso, TX, USA, 4–6 May 1997; Leighton, F.T., Shor, P., Eds.; ACM Press: New York, NY, USA, 1997; pp. 284–293. [Google Scholar]
  3. Hoffstein, J.; Pipher, J.; Silverman, J.H. NTRU: A Ring-Based Public Key Cryptosystem. In Proceedings of the ANTS-III Third International Symposium on Algorithmic Number Theory, Portland, OR, USA, 21–25 June 1998; Buhler, J.P., Ed.; Springer: Heidelberg, Germany, 1998; Volume 1423, pp. 267–288. [Google Scholar]
  4. Regev, O. On lattices, learning with errors, random linear codes, and cryptography. In Proceedings of the STOC ’05 Thirty-Seventh Annual ACM Symposium on Theory of Computing, Baltimore, MD, USA, 22–24 May 2005; Gabow, H.N., Fagin, R., Eds.; ACM Press: New York, NY, USA, 2005; pp. 84–93. [Google Scholar]
  5. Gentry, C.; Peikert, C.; Vaikuntanathan, V. Trapdoors for hard lattices and new cryptographic constructions. In Proceedings of the STOC ’08 Fortieth Annual ACM Symposium on Theory of Computing, Victoria, BC, Canada, 17–20 May 2008; Ladner, R., Dwork, C., Eds.; ACM Press: New York, NY, USA, 2008; pp. 197–206. [Google Scholar]
  6. Shor, P.W. Polynomial-time algorithms for prime factorization and discrete logarithms on a quantum computer. SIAM J. Comput. 1997, 26, 1484–1509. [Google Scholar] [CrossRef] [Green Version]
  7. Lenstra, A.K.; Lenstra, H.W., Jr.; Lovasz, L. Factoring polynomials with rational coefficients. Math. Ann. 1982, 261, 513–534. [Google Scholar] [CrossRef]
  8. Schnorr, C.P.; Euchner, M. Lattice basis reduction: Improved practical algorithms and solving subset sum problems. Math. Program. 1994, 66, 181–199. [Google Scholar] [CrossRef]
  9. Chen, Y.; Nguyen, P.Q. BKZ 2.0: Better Lattice Security Estimates. In Proceedings of the ASIACRYPT 2011 17th International Conference on the Theory and Application of Cryptology and Information Security, Seoul, Korea, 4–8 December 2011; Lee, D.H., Wang, X., Eds.; Springer: Heidelberg, Germany, 2011; Volume 7073, pp. 1–20. [Google Scholar]
  10. Nguyen, P.Q.; Stehle, D. LLL on the average. In Proceedings of the ANTS-XII 7th International Symposium on Algorithmic Number Theory, Berlin, Germany, 23–28 July 2006; Hess, F., Pauli, S., Pohst, M.E., Eds.; Springer: Heidelberg, Germany, 2006; Volume 4076, pp. 238–256. [Google Scholar]
  11. Goldstein, D.; Mayer, A. On the equidistribution of Hecke points. Forum Math. 2003, 15, 165–189. [Google Scholar] [CrossRef]
  12. Maze, G. Natural density distribution of Hermite normal forms of integer matrices. J. Number Theory 2011, 131, 2398–2408. [Google Scholar] [CrossRef] [Green Version]
  13. Hu, G.; Pan, Y.; Liu, R.; Chen, Y. On Random Nonsingular Hermite Normal Form. J. Number Theory 2016, 164, 66–86. [Google Scholar] [CrossRef]
  14. Cohen, H. A Course in Computational Algebraic Number Theory; Springer-Verlag: Berlin/Heidelberg, Germany, 1993; Volume 138, p. 66. [Google Scholar]
  15. Gruber, B. Alternative formulae for the number of sublattices. Acta Cryst. 1997, A53, 807–808. [Google Scholar] [CrossRef]
Table 1. Asymptotic formulas of $S(M,s_1\dots s_n)$ in different cases.

$S(M,s_1\dots s_n)$ | If
$\frac{\prod_{j=1}^{n-1}\zeta(s_n+1-s_j)}{s_n+1}\,M^{s_n+1}+O(M^{s_n}\log M)$ | $s_{n-3}\le s_{n-2}<s_{n-1}<s_n$
$\frac{\prod_{j=1}^{n-1}\zeta(s_n+1-s_j)}{s_n+1}\,M^{s_n+1}+O\big(M^{s_n}(\log M)^{2}\big)$ | $s_{n-3}<s_{n-2}=s_{n-1}<s_n$
$\frac{\prod_{j=1}^{n-2}\zeta(s_n+1-s_j)}{s_n+1}\,M^{s_n+1}\log M+O(M^{s_n+1})$ | $s_{n-3}\le s_{n-2}<s_{n-1}=s_n$
Table 2. Upper bound for $T(i)$ with probability 0.999.

$T(i)$ | Upper Bound
$T(n-9)$ | 0
$T(n-8)$ | 1
$T(n-7)$ | 1
$T(n-6)$ | 1
$T(n-5)$ | 2
$T(n-4)$ | 3
$T(n-3)$ | 6
$T(n-2)$ | 19
$T(n-1)$ | 607
