Article

Normalized Unconditional ϵ-Security of Private-Key Encryption

1 School of Electronics and Communication Engineering, Yulin Normal University, Yulin 537000, China
2 School of Information Science and Engineering, Xiamen University, Xiamen 361005, China
3 School of Mechanical and Electrical Engineering, Guizhou Normal University, Guiyang 550025, China
* Author to whom correspondence should be addressed.
Entropy 2017, 19(3), 100; https://doi.org/10.3390/e19030100
Submission received: 12 January 2017 / Revised: 16 February 2017 / Accepted: 1 March 2017 / Published: 7 March 2017
(This article belongs to the Special Issue Information-Theoretic Security)

Abstract: In this paper, we introduce two normalized versions of non-perfect security for private-key encryption: one in the framework of Shannon entropy and one in the framework of Kolmogorov complexity. We prove lower bounds on the key entropy or key size for these models and study the relations between these normalized security notions.

1. Introduction

Shannon entropy H(X) [1], the central quantity of information theory, measures the average uncertainty in a random variable X. Perfect secrecy [2] is a strong security notion meaning that the ciphertext leaks no information about the plaintext. Shannon first formally studied this notion using information theory and defined perfect secrecy as the mutual information between the plaintext X and the ciphertext Y being zero; i.e., I(X;Y) = 0. The mutual information I(X;Y) measures the dependence between two random variables, so perfect secrecy requires the plaintext X and the ciphertext Y to be independent. Perfect secrecy can be achieved by the one-time pad (Vernam) cipher [3], but it requires a key space at least as large as the message space; i.e., H(K) ≥ H(X). A number of authors have considered alternative definitions of secrecy to obtain more practical cryptosystems. Several papers [4,5,6] considered the notion of non-perfect security, ϵ-secrecy, which allows a small amount of information about the plaintext to leak after viewing the ciphertext; i.e., I(X;Y) ≤ ϵ. Dodis [7] considered ϵ-security with a completeness error, which allows decryption to fail with some small probability. Several other notions of secrecy [5,6,8] have been proposed using other entropy measures, such as min-entropy, conditional min-entropy, Rényi entropy, and conditional Rényi entropy, instead of Shannon entropy. However, the security parameter ϵ in [7] is not an appropriate measure of the security level of a private-key encryption scheme. As we know, 10 leaked bits is a large amount of information leakage for a 100-bit message but a small one for a 10,000-bit message; yet under the security notion of [7], both have the same level of security. The problem is that the notion of [7] deals with the absolute rather than the relative amount of information leakage.
In this paper, we therefore propose the notion of normalized ϵ-Shannon security, a normalized version of Shannon entropy-based security for a private-key encryption scheme. The new security parameter ϵ is the information leak ratio, i.e., I(X;Y) divided by H(X), which captures the relative rather than the absolute amount of information leakage.
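As a quick illustration, the leak ratio can be computed exactly for a small toy cipher. The scheme below (a 2-bit message with a 1-bit key XORed into the first bit, the second bit sent in the clear) is a hypothetical example constructed for this sketch, not a scheme from the literature:

```python
import math
from collections import Counter
from itertools import product

def H(counts):
    """Shannon entropy (bits) of an empirical distribution given as a Counter."""
    total = sum(counts.values())
    return -sum(c / total * math.log2(c / total) for c in counts.values() if c)

# Hypothetical toy cipher: uniform 2-bit message, uniform 1-bit key,
# key XORed into the first bit; the second bit is sent in the clear.
pairs = [((x1, x2), (x1 ^ k, x2))
         for x1, x2, k in product((0, 1), repeat=3)]

px  = Counter(x for x, _ in pairs)
py  = Counter(y for _, y in pairs)
pxy = Counter(pairs)

I   = H(px) + H(py) - H(pxy)   # I(X;Y) = 1 bit: the clear bit leaks entirely
eps = I / H(px)                # information leak ratio = 1/2
```

One bit of the 2-bit message leaks, so this scheme is normalized 0.5-Shannon secure, even though in absolute terms it is only 1-Shannon secure.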
Kolmogorov complexity K(x) [9,10,11], the central quantity of algorithmic information theory [12,13], measures the quantity of information in a single string x by the size of the smallest program that generates it.
Antunes et al. [14] considered a notion of individual security for cryptographic systems by using Kolmogorov complexity instead of Shannon entropy. Kaced [15] and Dai et al. [16] considered Kolmogorov complexity-based security for secret sharing schemes. In this paper, we also propose a normalized version of Kolmogorov complexity-based security for private-key encryption. Kolmogorov complexity and entropy are different information measures, so a private-key encryption scheme may be secure under one measure but not under the other [5]. Finally, we show the relationship between entropy-based security and Kolmogorov complexity-based security.
This paper is organized as follows: in Section 2, we review some definitions of entropy measures, Kolmogorov complexity, and private-key encryption. In Section 3, we propose several normalized versions of security notions based on Shannon entropy, and derive some lower bounds on the key size for private-key encryption under these security models. In Section 4, several Kolmogorov complexity-based security notions of private-key encryption are given and are compared to Shannon entropy-based security in Section 5. Conclusions are presented in Section 6.

2. Preliminaries

2.1. Entropy

Let 𝒳 be a finite set and let X be a random variable over 𝒳. We denote the probability that X equals x by p_X(x); when no confusion may arise, we write simply p(x).
Let 𝒳, 𝒴, 𝒵 be three finite sets and let X, Y, Z be random variables over 𝒳, 𝒴, 𝒵, respectively. Some basic concepts of information theory are defined as follows.
The Shannon entropy [1] of a random variable X is defined by
H(X) = −∑_{x∈𝒳} p(x) log p(x).
The conditional Shannon entropy of X given Y is defined as
H(X|Y) = ∑_{y∈𝒴} p(y) H(X|Y = y).
The joint Shannon entropy of X and Y is
H ( X , Y ) = H ( X ) + H ( Y | X ) = H ( Y ) + H ( X | Y ) .
The mutual information between X and Y is
I ( X ; Y ) = H ( X ) - H ( X | Y ) .
The conditional mutual information between X and Y given Z is defined as
I ( X ; Y | Z ) = H ( X | Z ) - H ( X | ( Y , Z ) ) .
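The quantities above can be computed directly from a joint distribution. The 2 × 2 distribution below is an arbitrary illustrative choice:

```python
import math

def H(dist):
    """Shannon entropy (bits) of a list of probabilities."""
    return -sum(p * math.log2(p) for p in dist if p > 0)

# Hypothetical joint distribution p(x, y) over a 2x2 alphabet.
joint = {(0, 0): 0.4, (0, 1): 0.1, (1, 0): 0.1, (1, 1): 0.4}

px = [sum(v for (x, _), v in joint.items() if x == a) for a in (0, 1)]
py = [sum(v for (_, y), v in joint.items() if y == b) for b in (0, 1)]

H_joint     = H(list(joint.values()))   # joint entropy H(X, Y)
H_X_given_Y = H_joint - H(py)           # chain rule: H(X|Y) = H(X,Y) - H(Y)
I           = H(px) - H_X_given_Y       # mutual information I(X;Y) = H(X) - H(X|Y)
```

The same value of I(X;Y) is obtained from the symmetric form H(X) + H(Y) − H(X,Y), consistent with the chain rule above.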
We recall a few properties of Shannon entropy.
Lemma 1.
References [7,12]. Let X, Y, Z be random variables over 𝒳, 𝒴, 𝒵, respectively.
(i) H(X) ≥ 0, with equality if and only if there exists x_0 ∈ 𝒳 such that p(x_0) = 1;
(ii) H(X) ≤ log|𝒳|, with equality if and only if p(x) = 1/|𝒳| for all x ∈ 𝒳;
(iii) H(X) ≥ H(X|Y);
(iv) H(X,Y) ≤ H(X) + H(Y);
(v) I(X;Y) = I(Y;X);
(vi) H(X,Y,Z) = H(X) + H(Y|X) + H(Z|(X,Y));
(vii) H(Y) ≥ H(X) − H(X|(Y,Z)) − I(X;Z);
(viii) (Fano inequality) H(X|Y) ≤ H(X|X′) ≤ H(p_e) + p_e log|𝒳|, where X′ = f(Y) is a function of Y and p_e = p(X ≠ X′) is the probability of error.
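As a numerical sanity check of item (viii), consider a hypothetical symmetric noisy channel with the guess X′ = Y; the channel parameters below are an illustrative assumption:

```python
import math

def H(dist):
    """Shannon entropy (bits) of a list of probabilities."""
    return -sum(p * math.log2(p) for p in dist if p > 0)

# Hypothetical symmetric channel: X uniform on 4 symbols; Y equals X with
# probability 0.9, otherwise one of the other 3 symbols uniformly at random.
n, correct = 4, 0.9
wrong = (1 - correct) / (n - 1)

# Guess X' = f(Y) = Y, so the error probability is p_e = 1 - correct.
p_e = 1 - correct

# By symmetry, every y induces the same conditional distribution of X.
H_X_given_Y = H([correct] + [wrong] * (n - 1))

fano_bound = H([p_e, 1 - p_e]) + p_e * math.log2(n)
assert H_X_given_Y <= fano_bound   # Fano: H(X|Y) <= H(p_e) + p_e log|X|
```

Here H(X|Y) ≈ 0.63 bits while the Fano bound is ≈ 0.67 bits, so the inequality holds with some slack.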

2.2. Kolmogorov Complexity

Some definitions and basic properties of Kolmogorov complexity are recalled below. For more details, see [12,13].
Here, we use the prefix-free definition of Kolmogorov complexity. A set of strings A is prefix-free if there are no two strings x and y in A such that x is a proper prefix of y.
The conditional Kolmogorov complexity K ( y | x ) of y given x, with respect to a universal prefix-free Turing machine U, is defined by
K U ( y | x ) = min { | p | : U ( p , x ) = y } ,
where U ( p , x ) is the output of the program p with auxiliary input x when it is run in the machine U.
Let U be a universal prefix-free Turing machine. Then, for any other Turing machine F,
K_U(y|x) ≤ K_F(y|x) + c_F
for all x , y , where the constant c F depends on F but not on x , y .
The (unconditional) Kolmogorov complexity K_U(y) of y is defined as K_U(y|ε), where ε is the empty string. We fix a universal prefix-free Turing machine once and for all; for convenience, K_U(y|x) and K_U(y) are denoted by K(y|x) and K(y), respectively.
The algorithmic mutual information between x and y is the quantity
I ( x : y ) = K ( x ) - K ( x | y ) .
We consider x and y to be algorithmically independent whenever I ( x : y ) is approximately zero.
Lemma 2.
References [12,13]. For any finite strings x, y, we have
(i) K(x) ≤ |x| + O(1);
(ii) K(x|y) ≤ K(x) + O(1);
(iii) K(x,y) ≤ K(x) + K(y|x) + O(1),
where O ( 1 ) is a constant term.
Shannon entropy and Kolmogorov complexity are different information measures, as the former is based on probability distributions and the latter on the length of programs. However, for any computable probability distributions, up to a constant term, the expected value of the (conditional) Kolmogorov complexity equals the (conditional) Shannon entropy [17,18]. Shannon mutual information and algorithmic mutual information are also related.
Lemma 3.
References [17,18]. Let X, Y be two random variables over 𝒳, 𝒴, respectively. For any computable probability distribution u(x,y) over 𝒳 × 𝒴,
(i) 0 ≤ ∑_{x,y} u(x,y) K(x|y) − H(X|Y) ≤ K(u) + O(1);
(ii) I(X;Y) − K(u) ≤ ∑_{x,y} u(x,y) I(x:y) ≤ I(X;Y) + 2K(u). When u is given, then I(X;Y) = ∑_{x,y} u(x,y) I(x:y|u) + O(1).

2.3. Private-Key Encryption

A private-key encryption ∏ consists of a message space 𝒳, a ciphertext space 𝒴, a key space 𝒦, and a pair of an encryption function Enc: 𝒳 × 𝒦 → 𝒴 and a decryption function Dec: 𝒴 × 𝒦 → 𝒳; i.e., ∏ =def (𝒳, 𝒴, 𝒦, Enc, Dec).
A private-key encryption ∏ is said to have perfect correctness if
Dec_k(Enc_k(x)) = x
for any key k ∈ 𝒦 and plaintext x ∈ 𝒳.
Dodis [7] relaxed the correctness guarantee to allow for some small decryption error γ. A private-key encryption ∏ is said to be (1 − γ)-correct on X if
Pr_{X,K}(Dec_K(Enc_K(X)) = X) > 1 − γ.
Dodis [7] also gave the following natural definition of imperfect correctness based on Shannon entropy. A private-key encryption ∏ is said to be (1 − γ)-Shannon correct on X if
H(X | Dec_K(Y)) ≤ γ.
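The one-time pad over 2-bit messages makes these definitions concrete: exhaustive enumeration confirms perfect correctness and perfect secrecy. A minimal sketch:

```python
import math
from collections import Counter
from itertools import product

def H(counts):
    """Shannon entropy (bits) of an empirical distribution given as a Counter."""
    total = sum(counts.values())
    return -sum(c / total * math.log2(c / total) for c in counts.values() if c)

# One-time pad on 2-bit messages: Enc_k(x) = x XOR k, Dec_k(y) = y XOR k.
msgs = keys = list(range(4))
enc = lambda x, k: x ^ k
dec = lambda y, k: y ^ k

# Perfect correctness: Dec_k(Enc_k(x)) = x for every (x, k).
assert all(dec(enc(x, k), k) == x for x, k in product(msgs, keys))

# Perfect secrecy: with X and K uniform, I(X;Y) = H(X) + H(Y) - H(X,Y) = 0.
pairs = [(x, enc(x, k)) for x, k in product(msgs, keys)]
px  = Counter(p[0] for p in pairs)
py  = Counter(p[1] for p in pairs)
pxy = Counter(pairs)
I = H(px) + H(py) - H(pxy)
```

Every (plaintext, ciphertext) pair occurs equally often, so the ciphertext carries no information about the plaintext.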

3. Normalized ϵ-Shannon Security

Now we are ready to propose a new relaxation of non-perfect security for private-key encryption. In the rest of this paper, X, Y, K always denote the random variables over the plaintext space 𝒳, the ciphertext space 𝒴, and the key space 𝒦, respectively.
Definition 1.
Let ∏ be a private-key encryption. We say ∏ is normalized ϵ-Shannon secure if
I(X;Y) ≤ ϵH(X); i.e., I(X;Y)/H(X) ≤ ϵ.
In the above notion, we use the information leak ratio instead of the absolute amount of information leakage to measure the security degree of private-key encryption. It can be understood simply as a normalized version of ϵ-Shannon security.
The new security parameter ϵ takes values in the range [0, 1]. When ϵ = 0, ∏ has perfect secrecy. When ϵ = 1, ∏ is maximally insecure.
In [7], a private-key encryption ∏ is called ϵ-Shannon secure if I(X;Y) ≤ ϵ. It is easy to see that if a private-key encryption ∏ is normalized ϵ-Shannon secure (i.e., its information leak ratio is at most ϵ), then the absolute amount of information leakage is at most ϵH(X), and ∏ is ϵH(X)-Shannon secure. Moreover, since H(X) ≤ log|𝒳| by Lemma 1(ii), ∏ is ϵ log|𝒳|-Shannon secure.
Our notion can be used to compare the security of private-key encryption schemes that have different message sizes. For example, consider a 2-Shannon secure encryption ∏₁ with |𝒳₁| = |𝒴₁| = |𝒦₁| = 2^100 and a 1-Shannon secure encryption ∏₂ with |𝒳₂| = |𝒴₂| = |𝒦₂| = 2^10. We cannot conclude from 1 < 2 that ∏₂ is more secure than ∏₁, because the message sizes differ. Using our notion, however, and assuming that X₁ and X₂ are uniformly random, ∏₁ is normalized 2^−99-Shannon secure and ∏₂ is normalized 2^−10-Shannon secure, so we can say that ∏₁ is more secure than ∏₂. Thus, there is good reason to study the above definition of security.

Lower Bounds

In this subsection, we derive some lower bounds on the key size for private-key encryption under various correctness and security models.
Theorem 1.
Let ∏ be a private-key encryption. If ∏ has normalized ϵ-Shannon security and perfect correctness, then
H(K) ≥ (1 − ϵ)H(X).
Proof. 
From Lemma 1(vii), we have
H(K) ≥ H(X) − H(X|(K,Y)) − I(X;Y).
From the definition of normalized ϵ-Shannon security, we have
I(X;Y) ≤ ϵH(X).
Let X′ = Dec_K(Y). Since ∏ has perfect correctness, p(X ≠ X′) = 0. So, by Lemma 1(i) and the Fano inequality of Lemma 1(viii), 0 ≤ H(X|(K,Y)) ≤ H(X|X′) = 0; hence we have
H ( X | ( K , Y ) ) = 0 .
Then, by Equations (14)–(16), we have
H(K) ≥ H(X) − I(X;Y) − H(X|(K,Y)) ≥ H(X) − ϵH(X) = (1 − ϵ)H(X).
From the above result, we know that normalized ϵ-Shannon security together with perfect correctness requires H(K) ≥ (1 − ϵ)H(X); this differs from ϵ-Shannon security with perfect correctness, which requires H(K) ≥ H(X) − ϵ.
Next, we consider the normalized ϵ-Shannon security with a completeness error γ.
Theorem 2.
Let ∏ be a private-key encryption. If ∏ is normalized ϵ-Shannon secure and ( 1 - γ ) -Shannon correct, then
H(K) ≥ (1 − ϵ)H(X) − γ.
Proof. 
From Lemma 1(vii), we have
H(K) ≥ H(X) − I(X;Y) − H(X|(K,Y)).
From the definition of normalized ϵ-Shannon security, we have
I(X;Y) ≤ ϵH(X).
From the definition of ( 1 - γ ) -Shannon correctness and Lemma 1(viii) Fano inequality, we have
H(X|(K,Y)) ≤ H(X|Dec_K(Y)) ≤ γ.
Then, by using Equations (18)–(20), we have
H(K) ≥ H(X) − H(X|(K,Y)) − I(X;Y) ≥ H(X) − γ − ϵH(X) = (1 − ϵ)H(X) − γ.
From Lemma 1(ii) and Theorem 2, we conclude the following.
Corollary 3.
Let X be uniformly distributed over 𝒳 and let ∏ be a private-key encryption. If ∏ is normalized ϵ-Shannon secure and (1 − γ)-Shannon correct, then
H(K) ≥ (1 − ϵ) log|𝒳| − γ.
Consequently,
|𝒦| ≥ 2^{−γ} |𝒳|^{1−ϵ}.
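For concreteness, Corollary 3 can be evaluated numerically; the parameter values below are hypothetical:

```python
# Corollary 3's bound evaluated for hypothetical parameters: a uniform 128-bit
# message space, leak ratio eps = 0.01, and Shannon-correctness slack gamma = 0.1.
n_bits = 128                 # log2 |X|
eps, gamma = 0.01, 0.1

min_key_entropy = (1 - eps) * n_bits - gamma   # H(K) >= (1 - eps) log|X| - gamma
min_key_space   = 2 ** min_key_entropy         # |K| >= 2^{-gamma} |X|^{1-eps}
```

Even a 1% leak ratio saves only about 1.3 bits of key entropy relative to the perfect-secrecy requirement of 128 bits.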

4. Normalized ϵ-Kolmogorov Security

We now give a security notion for private-key encryption based on Kolmogorov complexity. The idea is that a private-key encryption is viewed not as a distribution on binary strings, but as an individual tuple of binary strings with corresponding secrecy properties (see [14]). We use Kolmogorov complexity instead of Shannon entropy to measure the security degree of an individual tuple of strings of a private-key encryption. Let x ∈ 𝒳, y ∈ 𝒴, and k ∈ 𝒦 be a plaintext, a ciphertext, and a key, respectively. A pair of strings (x,y) denotes an instance of a private-key encryption satisfying Enc_k(x) = y and Dec_k(y) = x for a key k ∈ 𝒦.
Definition 2.
Let ∏ be a private-key encryption, ( x , y ) be an instance of ∏. An instance ( x , y ) is normalized ϵ-Kolmogorov secure if
I(x:y) ≤ ϵK(x); i.e., I(x:y)/K(x) ≤ ϵ.
The security parameter ϵ can be seen as the information leak ratio in the sense of Kolmogorov complexity. For Kolmogorov complexity, there is no natural way to define an “absolutely” perfect version of a private-key encryption scheme. We consider a private-key encryption to be approximately-perfect secure in the sense of Kolmogorov complexity whenever ϵ is approximately zero, which means that the plaintext x and the ciphertext y are algorithmically independent.
The notion of normalized ϵ-Kolmogorov security can be simply understood as a normalized version of individual security [14], which is a formal definition of what it means for an individual instance to be secure.
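Since K is uncomputable, the leak ratio I(x:y)/K(x) can only be estimated in practice. The sketch below uses compressed length as a crude stand-in for K(x), and the difference len(C(y·x)) − len(C(y)) as a stand-in for K(x|y); this is a heuristic illustration only, not part of the paper's formalism:

```python
import os, zlib

# Crude proxies for the (uncomputable) Kolmogorov quantities: compressed length
# for K(x), and len(C(y + x)) - len(C(y)) for K(x|y). Purely illustrative.
C = lambda b: len(zlib.compress(b, 9)) * 8   # bits

def leak_ratio(x: bytes, y: bytes) -> float:
    K_x = C(x)
    K_x_given_y = max(0, C(y + x) - C(y))
    return (K_x - K_x_given_y) / K_x

x = os.urandom(512)   # a high-complexity "plaintext"
```

With y = x (the ciphertext equals the plaintext) the estimated ratio is close to 1: everything leaks. With an independent random y, the estimated ratio is close to 0, matching the intuition of algorithmic independence.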
Now, we show a lower bound on the key size for an instance of private-key encryption.
Theorem 4.
Let ∏ be a private-key encryption, and (x,y) be an instance of ∏. Suppose the decryption function Dec is given, and that the length of Dec does not depend on the length of the key k. If the instance (x,y) is normalized ϵ-Kolmogorov secure, then
|k| ≥ (1 − ϵ)K(x) − O(1)
for any key k with D e c k ( y ) = x .
Proof. 
Let p be a shortest binary program that computes x from y. Since Dec_k(y) = x, the length of p is at most the length of the decryption function Dec plus the length of the key k: |p| ≤ |Dec| + |k|. Because Dec is given and its length does not depend on the length of k, we have K(x|y) ≤ |k| + O(1).
If (x,y) is normalized ϵ-Kolmogorov secure, then I(x:y) ≤ ϵK(x); i.e., K(x) − K(x|y) ≤ ϵK(x), so (1 − ϵ)K(x) ≤ K(x|y). Then,
K(x) − ϵK(x) ≤ K(x|y) ≤ |k| + O(1).
Thus, |k| ≥ (1 − ϵ)K(x) − O(1). ☐
From the above theorem, we know that a message with high Kolmogorov complexity, i.e., a nearly Kolmogorov-random string, cannot be encrypted with a small key at a high security level.
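This observation can be illustrated heuristically with a compression proxy for K(x). K itself is uncomputable and zlib only gives an upper bound, so the numbers below are estimates under that assumption, not a proof:

```python
import os, zlib

# Compression as a stand-in for K(x), in bits.
def k_proxy_bits(x: bytes) -> int:
    return len(zlib.compress(x, 9)) * 8

eps = 0.1                       # hypothetical security parameter
x_random  = os.urandom(1024)    # nearly Kolmogorov-random message
x_regular = b"ab" * 512         # highly regular message of the same length

# Estimated key-length lower bounds (1 - eps) * K(x) - O(1), ignoring the O(1):
bound_random  = (1 - eps) * k_proxy_bits(x_random)
bound_regular = (1 - eps) * k_proxy_bits(x_regular)
```

The incompressible message forces a key of thousands of bits, while the regular message of the same length tolerates a key of only a few hundred bits.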

5. Normalized ϵ-Kolmogorov Security versus Normalized ϵ-Shannon Security

In this section, we establish some relations between Shannon entropy-based security and Kolmogorov complexity-based security for private-key encryption.
First, in the notion of normalized ϵ-Kolmogorov security for a private-key encryption, ϵ is a security parameter defined for one instance, not for all instances. Many x's have small Kolmogorov complexity even if the adversary does not know the ciphertext y. For example, the Kolmogorov complexity of x = 111...11 ∈ {0,1}^n is almost vanishing. So, we give the following definition covering all instances of a private-key encryption.
Definition 3.
Let ∏ be a private-key encryption and u a computable distribution over 𝒳 × 𝒴. ∏ is normalized (ϵ,δ)-Kolmogorov secure if
Pr_{(x,y)∈𝒳×𝒴}[I(x:y|u) ≤ ϵK(x)] ≥ δ.
The above security notion means that, under a computable distribution, the probability that an instance is normalized ϵ-Kolmogorov secure is at least δ.
For a private-key encryption, when the probability of drawing an instance with a low security parameter is high, most instances are secure.
We now give the following relations between normalized (ϵ,δ)-Kolmogorov security and normalized ϵ-Shannon security.
Theorem 5.
Let ∏ be a private-key encryption. If, for any independent variables X, Y over 𝒳, 𝒴 with a computable joint distribution u, ∏ is normalized (ϵ,δ)-Kolmogorov secure, then ∏ is normalized (1 + ϵ − δ)-Shannon secure.
Proof. 
Since ∏ is normalized ( ϵ , δ ) -Kolmogorov secure, then the probability that an instance is normalized ϵ-Kolmogorov secure is at least δ; i.e.,
Pr_{(x,y)∈𝒳×𝒴}[I(x:y|u) ≤ ϵK(x)] ≥ δ.
Let Q be the set of normalized ϵ-Kolmogorov secure instances of the private-key encryption ∏; i.e., Q = {(x,y) : I(x:y|u) ≤ ϵK(x)}. Then, we have
Pr_{(x,y)∈𝒳×𝒴}[(x,y) ∉ Q] ≤ 1 − δ.
By using Lemmas 2 and 3 and Equations (27) and (28), up to a constant, we have
I(X;Y) ≤ ∑_{(x,y)∈Q} u(x,y) I(x:y|u) + ∑_{(x,y)∉Q} u(x,y) I(x:y|u) ≤ ϵ ∑_{(x,y)∈Q} u(x,y) K(x|u) + ∑_{(x,y)∉Q} u(x,y) [K(x|u) − K(x|y,u)] ≤ ϵH(X) + (1 − δ)H(X) = (1 + ϵ − δ)H(X).
By using the above result and Lemma 1(ii), H(X) ≤ log|𝒳|, we obtain (1 + ϵ − δ) log|𝒳|-Shannon security from normalized (ϵ,δ)-Kolmogorov security.
The result of Theorem 5 differs from that of Reference [14], where one obtains (ϵ + (1 − δ) log|𝒳|)-Shannon security from non-normalized (ϵ,δ)-Kolmogorov security.
The result of Theorem 5 shows that if the probability of approximately-perfect secure instances (ϵ is approximately zero) is approximately 1, then the private-key encryption is approximately-perfect secure in the Shannon entropy sense.

6. Conclusions

In this paper, we presented two notions, normalized ϵ-Shannon security and normalized ϵ-Kolmogorov security, for private-key encryption. The new security parameters make the evaluation of the security of private-key encryption normalized, so that schemes with different message sizes can be compared on a common scale. We then established the relation between the two normalized security notions.
As future work, we can consider a special case of ϵ that is a negligible function of the message length. In this case, the amount of information leak can be ignored because negligible functions tend to zero very fast as their input (the message length) grows.
We considered the normalized security of private-key encryption schemes based only on Shannon entropy and Kolmogorov complexity. Normalized security notions based on other information measures are possible topics for future consideration.

Acknowledgments

Project supported by the Guangxi University Science and Technology Research Project (Grant No. 2013YB193) and the Science and Technology Foundation of Guizhou Province, China (LKS [2013] 35).

Author Contributions

Conceptualization, formal analysis, investigation and writing the original draft is done by Songsong Dai. Validation, review and editing is done by Lvqing Bi and Bo Hu. Project administration and Resources are provided by Lvqing Bi and Bo Hu. All authors have read and approved the final manuscript.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Shannon, C.E. A mathematical theory of communication. Bell Syst. Tech. J. 1948, 27, 379–423, 623–656. [Google Scholar] [CrossRef]
  2. Shannon, C.E. Communication theory of secrecy systems. Bell Syst. Tech. J. 1949, 28, 656–715. [Google Scholar] [CrossRef]
  3. Vernam, G.S. Cipher printing telegraph systems for secret wire and radio telegraphic communications. J. Am. Inst. Electr. Eng. 1926, 45, 109–115. [Google Scholar]
  4. Iwamoto, M.; Ohta, K. Security Notions for Information Theoretically Secure Encryptions. In Proceedings of the 2011 IEEE Symposium on Information Theory, Saint Petersburg, Russia, 31 July–5 August 2011; pp. 1777–1781.
  5. Jiang, S. On Unconditional ϵ-Security of Private Key Encryption. Comput. J. 2013, 57, 1570–1579. [Google Scholar] [CrossRef]
  6. Alimomeni, M.; Safavi-Naini, R. Guessing Secrecy. In Proceedings of the 6th International Conference on Information Theoretical Security (ICITS 12), Montreal, QC, Canada, 15–17 August 2012; pp. 1–13.
  7. Dodis, Y. Shannon Impossibility, Revisited. In Proceedings of the 6th International Conference on Information Theoretical Security (ICITS 12), Montreal, QC, Canada, 15–17 August 2012; pp. 100–110.
  8. Iwamoto, M.; Shikata, J. Information theoretic security for encryption based on conditional Rényi entropies. In Proceedings of the International Conference on Information Theoretic Security (ICITS); Springer: Berlin/Heidelberg, Germany, 2013; pp. 103–121. [Google Scholar]
  9. Chaitin, G. On the length of programs for computing finite binary sequences. J. ACM 1966, 13, 547–569. [Google Scholar] [CrossRef]
  10. Kolmogorov, A. Three approaches to the quantitative definition of information. Probl. Inf. Transm. 1965, 1, 1–7. [Google Scholar] [CrossRef]
  11. Solomonoff, R. A formal theory of inductive inference, part I. Inf. Control 1964, 7, 1–22. [Google Scholar] [CrossRef]
  12. Cover, T.M.; Thomas, J.A. Elements of Information Theory; Wiley: Hoboken, NJ, USA, 2006. [Google Scholar]
  13. Li, M.; Vitányi, P.M.B. An Introduction to Kolmogorov Complexity and Its Applications, 3rd ed.; Springer: New York, NY, USA, 2008. [Google Scholar]
  14. Antunes, L.; Laplante, S.; Pinto, A.; Salvador, L. Cryptographic security of individual instances. In Information Theoretic Security; Springer: Berlin/Heidelberg, Germany, 2009; pp. 195–210. [Google Scholar]
  15. Kaced, T. Almost-perfect secret sharing. In Proceedings of the 2011 IEEE International Symposium on Information Theory Proceedings (ISIT), Saint Petersburg, Russia, 31 July–5 August 2011; pp. 1603–1607.
  16. Dai, S.; Guo, D. Comparing security notions of secret sharing schemes. Entropy 2015, 17, 1135–1145. [Google Scholar] [CrossRef]
  17. Grünwald, P.; Vitányi, P. Shannon Information and Kolmogorov Complexity. arXiv 2008. [Google Scholar]
  18. Teixeira, A.; Matos, A.; Souto, A.; Antunes, L. Entropy measures vs. Kolmogorov complexity. Entropy 2011, 13, 595–611. [Google Scholar] [CrossRef]

Share and Cite

MDPI and ACS Style

Bi, L.; Dai, S.; Hu, B. Normalized Unconditional ϵ-Security of Private-Key Encryption. Entropy 2017, 19, 100. https://doi.org/10.3390/e19030100
