1. Introduction
A secret sharing scheme [1,2] is a protocol for sharing a secret among participants such that only specified subsets of participants can recover it. In studying the security of secret sharing schemes, several authors have introduced security concepts based on different information measures [3–7]. These include four very important information measures: Shannon entropy, min-entropy, Rényi entropy and Kolmogorov complexity. Shannon entropy is the most widely used information measure; it has been used to prove bounds on the share size and on the information rate of secret sharing schemes [3–5]. Recently, min-entropy and Rényi entropy have also been used to study the security of secret sharing schemes [6,7].
Kolmogorov complexity K(x) [8–10], the central notion of algorithmic information theory [11,12], measures the quantity of information in a single string x by the size of the smallest program that generates it. It is well known that Kolmogorov complexity and entropy are different but related measures [13–15]. Measuring security by Kolmogorov complexity offers some new security criteria. Antunes et al. [16] gave a notion of individual security for cryptographic systems using Kolmogorov complexity, and Kaced [17] defined a normalized version of individual security for secret sharing schemes.
However, these information measures differ: a scheme may be secure under one information measure but insecure under another [18]. Recently, several relations between security notions in cryptography have been studied. Iwamoto et al. [6] and Jiang [18] studied relations between security notions for symmetric-key cryptography. In this paper, we are interested in relations between security notions for secret sharing schemes. Antunes et al. [16] and Kaced [17] also studied such relations, but only between security notions based on Shannon entropy and Kolmogorov complexity. We study the relationships between different security notions for secret sharing schemes under various information measures, including Shannon entropy, guessing probability, min-entropy and Kolmogorov complexity.
This paper is organized as follows. In Section 2, we review definitions of entropy measures, Kolmogorov complexity and secret sharing schemes. In Section 3, we propose several entropy-based security notions and study their relations. In Section 4, security notions for secret sharing schemes are given using Kolmogorov complexity; they are compared to the entropy-based notions in Section 5. Conclusions are presented in Section 6.
2. Preliminaries
In this paper, a string means a finite binary string in Σ* := {0, 1}*. |x| denotes the length of a string x; for the cardinality of a set A we also write |A|. The function log means log2, and ln(·) denotes the logarithm with natural base e = 2.71828….
Let [n] := {1, 2, …, n} be the finite set of IDs of n users. For every i ∈ [n], let 𝒱i be a finite set of shares of user i. Similarly, let 𝒮 be a finite set of secret information. In the following, for any subset U := {i1, i2, …, iu} ⊂ [n], we use the notation 𝒱U := 𝒱i1 × 𝒱i2 × … × 𝒱iu and vU := (vi1, vi2, …, viu).
2.1. Entropy
Let 𝒳 and 𝒴 be two finite sets, and let X and Y be two random variables over 𝒳 and 𝒴, respectively. The probability that X takes the value x ∈ 𝒳 is denoted by pX(x); the joint probability that both x and y occur by pXY(x, y); and the conditional probability that x occurs given that y has occurred by pX|Y(x|y). For convenience, pX(x), pXY(x, y) and pX|Y(x|y) are denoted by p(x), p(x, y) and p(x|y), respectively. Two random variables X and Y are independent if and only if p(x, y) = p(x) × p(y) for all x ∈ 𝒳 and y ∈ 𝒴.
The Shannon entropy [19] of a random variable X, defined by H(X) = −∑x∈𝒳 p(x) log p(x), is a measure of its average uncertainty. The conditional Shannon entropy of X given Y is defined as

H(X|Y) = −∑x∈𝒳,y∈𝒴 p(x, y) log p(x|y).

The mutual information between X and Y is

I(X; Y) = H(X) − H(X|Y).
The guessing probability [20] of X, defined by G(X) = maxx∈𝒳 p(x), is the success probability of correctly guessing the value of a realization of X when using the best guessing strategy (guessing the most probable value of the range). The conditional guessing probability of X given Y is defined as

G(X|Y) = ∑y∈𝒴 p(y) maxx∈𝒳 p(x|y).

Min-entropy [6,18,20] is a measure of the chance of successfully guessing X, i.e.,

H∞(X) = −log G(X).

It can be viewed as a worst-case entropy, in contrast to Shannon entropy, which is an average entropy. The conditional min-entropy of X given Y is defined as

H∞(X|Y) = −log G(X|Y).
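For small distributions, all four quantities are easy to compute directly from the definitions above. The following sketch (helper names are our own; a distribution is a dict mapping outcomes to probabilities, a joint distribution maps (x, y) pairs) implements H, G, H∞ and their conditional versions:

```python
import math

def shannon_entropy(p):
    """H(X) = -sum_x p(x) log2 p(x)."""
    return -sum(px * math.log2(px) for px in p.values() if px > 0)

def guessing_prob(p):
    """G(X) = max_x p(x)."""
    return max(p.values())

def min_entropy(p):
    """H_inf(X) = -log2 G(X)."""
    return -math.log2(guessing_prob(p))

def cond_guessing_prob(pxy):
    """G(X|Y) = sum_y p(y) max_x p(x|y) = sum_y max_x p(x, y)."""
    ymax = {}
    for (x, y), v in pxy.items():
        ymax[y] = max(ymax.get(y, 0.0), v)
    return sum(ymax.values())

def cond_min_entropy(pxy):
    """H_inf(X|Y) = -log2 G(X|Y) (average-case conditional min-entropy)."""
    return -math.log2(cond_guessing_prob(pxy))
```

For a fair bit, H = H∞ = 1 and G = 1/2, and conditioning on an independent Y changes none of these values.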
2.2. Kolmogorov Complexity
In this subsection, some definitions and basic properties of Kolmogorov complexity are recalled. We use the prefix-free definition of Kolmogorov complexity. A set of strings A is prefix-free if there are no two strings x and y in A such that x is a proper prefix of y. For more details and attributions we refer to [11,12].
The conditional Kolmogorov complexity K(y|x) of y given x, with respect to a universal prefix-free Turing machine U, is defined by

KU(y|x) = min{|p| : U(p, x) = y}.

Let U be a universal prefix-free computer; then for any other computer F,

KU(y|x) ≤ KF(y|x) + cF

for all x, y, where cF depends on F but not on x, y. The (unconditional) Kolmogorov complexity KU(y) of y is defined as KU(y|Λ), where Λ is the empty string. For convenience, KU(y|x) and KU(y) are denoted by K(y|x) and K(y), respectively.
The mutual algorithmic information between x and y is the quantity

I(x : y) := K(x) − K(x|y).

We consider x and y to be algorithmically independent whenever I(x : y) is zero.
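Kolmogorov complexity is uncomputable, but the length of a compressed encoding is a computable upper bound on K(x), and compression-based proxies for I(x : y) are a standard practical device. The sketch below is our own illustration (not a construction from this paper), using zlib; the conditional proxy in particular is only a crude heuristic:

```python
import zlib

def kc_upper(x: bytes) -> int:
    """Computable upper-bound proxy for K(x): bit length of a
    zlib-compressed description of x (K itself is uncomputable)."""
    return 8 * len(zlib.compress(x, 9))

def kc_cond_upper(x: bytes, y: bytes) -> int:
    """Crude proxy for K(x|y): the extra cost of compressing x
    after y, beyond compressing y alone."""
    return max(0, 8 * (len(zlib.compress(y + x, 9)) - len(zlib.compress(y, 9))))

def mutual_info_proxy(x: bytes, y: bytes) -> int:
    """Proxy for I(x : y) = K(x) - K(x|y)."""
    return kc_upper(x) - kc_cond_upper(x, y)
```

On this proxy, a string shares much information with itself, while two unrelated random-looking strings share little.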
2.3. Secret Sharing Schemes
We now recall secret sharing schemes for general access structures. For more details refer to [1,3,7,21,22].
Each set of shares is classified as either a qualified set or a forbidden set. A qualified set is a set of shares that can recover the secret. Let 𝒬 and ℱ be the families of qualified and forbidden sets, respectively. Then Γ := (𝒬, ℱ) is an access structure. An access structure is monotone if for all Q ∈ 𝒬, every Q′ with Q ⊂ Q′ satisfies Q′ ∈ 𝒬; and for all F′ ∈ ℱ, every F ⊂ F′ satisfies F ∈ ℱ.
In particular, an access structure is called a (t, n)-threshold access structure if 𝒬 = {Q ⊆ [n] : |Q| ≥ t} and ℱ = {F ⊆ [n] : |F| ≤ t − 1}. In this paper, the access structure is a partition of 2[n], namely 𝒬 ∪ ℱ = 2[n] and 𝒬 ∩ ℱ = ∅.
Let ∏ = (𝒮, 𝒱[n], ∏share, ∏comb) be a secret sharing scheme for an access structure Γ, defined as follows:
𝒮 is the set of secret information;
𝒱[n] is the set of shares for all users;
∏share is an algorithm for generating shares for all users. It takes a secret s ∈ 𝒮 as input and outputs (v1, v2, …, vn) ∈ 𝒱[n];
∏comb is an algorithm for recovering a secret. It takes a set of shares vQ, Q ∈ 𝒬, as input and outputs a secret s ∈ 𝒮.
In this paper, we assume that ∏ meets perfect correctness: for any secret s ∈ 𝒮 and for all shares ∏share(s) = (v1, v2, …, vn), it holds that ∏comb(vQ) = s for any qualified set Q ∈ 𝒬.
3. Information Theoretic Security of Secret Sharing Schemes
In this section, we first give the security notions of information theoretic security for secret sharing schemes based on Shannon entropy, guessing probability and min entropy, respectively, and then we discuss the relations between these security notions.
Definition 1. Let ∏ be a secret sharing scheme for an access structure Γ. We say ∏ is
ε-Shannon secure, if I(S; VF) ≤ ε;
ε-guess secure, if G(S|VF) − G(S) ≤ ε;
ε-min secure, if H∞(S) − H∞(S|VF) ≤ ε
for every forbidden set F ∈ ℱ.
Now, we discuss the relations between the above three security notions for secret sharing schemes. The following relations are important for the present paper.
Lemma 1 ([11,18,20]). Let X and Y be two random variables over 𝒳 and 𝒴, respectively. Then
(i) G(X|Y) ≥ G(X);
(ii) H(X|Y) ≤ H(X);
(iii) H∞(X|Y) ≤ H∞(X);
(iv) I(X; Y) ≥ (2/ln 2)[G(X|Y) − G(X)]²;
(v) |H∞(X) − H∞(X|Y)| ≥ (1/ln 2)|G(X) − G(X|Y)|;
(vi) H∞(X) − H∞(X|Y) ≥ I(X; Y), where X is uniformly random over 𝒳.
From the above lemma, several relations between security notions for symmetric-key cryptography were derived in [18]. Similarly, from the above lemma we obtain the following.
Theorem 1. Let ∏ be a secret sharing scheme for an access structure Γ.
(i) If ∏ is ε-Shannon secure, then it is √(ε ln 2 / 2)-guess secure.
(ii) If ∏ is ε-min secure, then it is ε ln 2-guess secure.
(iii) If ∏ is ε-min secure and S is uniformly random over 𝒮, then ∏ is ε-Shannon secure.
From this result, we can see that, for a secret sharing scheme, ε-Shannon security and ε-min security are both stronger than guess security. If we additionally assume S is uniformly random, then ε-min security is stronger than ε-Shannon security.
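The Shannon-to-guess implication rests on the Pinsker-type inequality I(X; Y) ≥ (2/ln 2)[G(X|Y) − G(X)]² of Lemma 1, which can be spot-checked numerically on random joint distributions. A minimal test harness of our own (not from the paper):

```python
import math, random

def check_guess_vs_shannon(nx=4, ny=4, trials=200, seed=1):
    """Spot-check I(X;Y) >= (2/ln 2)*(G(X|Y) - G(X))^2 on random
    joint distributions p(x, y) over an nx-by-ny alphabet."""
    rng = random.Random(seed)
    for _ in range(trials):
        w = [[rng.random() for _ in range(ny)] for _ in range(nx)]
        tot = sum(map(sum, w))
        p = [[v / tot for v in row] for row in w]                  # joint p(x, y)
        px = [sum(row) for row in p]                               # marginal of X
        py = [sum(p[i][j] for i in range(nx)) for j in range(ny)]  # marginal of Y
        mi = sum(p[i][j] * math.log2(p[i][j] / (px[i] * py[j]))
                 for i in range(nx) for j in range(ny) if p[i][j] > 0)
        g = max(px)                                                # G(X)
        g_cond = sum(max(p[i][j] for i in range(nx))
                     for j in range(ny))                           # G(X|Y)
        assert mi + 1e-9 >= (2 / math.log(2)) * (g_cond - g) ** 2
    return True
```

Rearranging the inequality gives exactly Theorem 1(i): if I(S; VF) ≤ ε, then G(S|VF) − G(S) ≤ √(ε ln 2 / 2).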
In the following, using a modified threshold secret sharing scheme, we show that ε-guess security does not imply ε-Shannon security.
Example 1. Let s and v1, v2, …, vn be binary strings of the same length k. Assume that s and v1, v2, …, vn−1 are independent and uniformly random. We generate vn by vn = s ⊕ v1 ⊕ v2 ⊕ … ⊕ vn−1, where ⊕ denotes the bitwise exclusive OR. This scheme is an (n, n)-threshold secret sharing scheme, called the Karnin–Greene–Hellman scheme [5].

Now modify this scheme as follows. Let 𝒮 = {0, 1}k, with S uniformly random over 𝒮 and V1 × V2 uniformly random over {0, 1}k−1 × {0, 1}k. To share s = s′|s″ with s′ ∈ {0, 1}k−1 and s″ ∈ {0, 1}, write v2 = v′|v″ with v′ ∈ {0, 1}k−1 and v″ = s″, where s′, v1 and v′ are independent, and let v3 = (s′ ⊕ v1 ⊕ v′)|s″. The secret is recovered as s = s′|s″, where s′ = v1 ⊕ v′ ⊕ v3′ and s″ = v2″. This is a (3, 3)-threshold secret sharing scheme. It is easy to see that G(S|V2, V3) = G(S|V1, V3) = G(S|V1, V2) = 2−(k−1), and hence |G(S|Vi, Vj) − G(S)| = 2−(k−1) − 2−k = 2−k for 1 ≤ i < j ≤ 3. However, I(S; (V2, V3)) = H(S) − H(S|V2, V3) = k − (k − 1) = 1.
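The unmodified (n, n) XOR scheme at the start of Example 1 is straightforward to implement; a minimal sketch (function names are ours):

```python
import secrets

def share(secret: bytes, n: int) -> list[bytes]:
    """Karnin-Greene-Hellman (n, n) scheme: v1..v_{n-1} are uniformly
    random, and v_n = secret XOR v1 XOR ... XOR v_{n-1}."""
    k = len(secret)
    shares = [secrets.token_bytes(k) for _ in range(n - 1)]
    last = bytes(secret)
    for v in shares:
        last = bytes(a ^ b for a, b in zip(last, v))
    shares.append(last)
    return shares

def combine(shares: list[bytes]) -> bytes:
    """Recover the secret as the XOR of all n shares."""
    out = bytes(len(shares[0]))  # all-zero string
    for v in shares:
        out = bytes(a ^ b for a, b in zip(out, v))
    return out
```

Any n − 1 shares are jointly uniform and independent of the secret, so every forbidden set learns nothing; in the terminology of Definition 1, the scheme is 0-Shannon, 0-guess and 0-min secure.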
Next, we discuss the relationship between these security notions when ε = 0.
Theorem 2. If a secret sharing scheme is 0-Shannon secure, then it is 0-min secure. Moreover, if S is uniformly random over 𝒮, then 0-min security, 0-guess security and 0-Shannon security are all equivalent.
However, 0-min security does not in general imply 0-Shannon security.
Example 2 ([18]). Let 𝒮 = 𝒱1 = 𝒱2 = {0, 1, …, k − 1} for k ≥ 4. Let pS(1) = … = pS(k − 1) = 1/(2k − 2) and pS(0) = 1/2, and let s and v1 be independent. We generate v2 by v2 = v1 + s (mod k). This is a (2, 2)-threshold secret sharing scheme. Since maxs∈𝒮 pS(s) = 1/2, we have H∞(S) = 1. By p(s|v2) = pS(s)pV1(v2 − s)/pV2(v2), and as k ≥ 4, for any v2 we have p(s|v2) ≤ p(0|v2) for s ≠ 0. Hence G(S|V2) = ∑v2 p(v2)p(0|v2) = pS(0) = 1/2, so H∞(S|V2) = 1. Also H∞(S|V1) = H∞(S) = 1, since s and v1 are independent. So this scheme is 0-min secure. But this scheme is not 0-Shannon secure, since S and V2 are not independent.

Some implications thus do not hold in general but do hold when S is uniformly distributed. From the above results, if S is uniformly random over 𝒮, then for a secret sharing scheme, ε-min security is stronger than ε-Shannon security, ε-Shannon security is stronger than ε-guess security, and these three security notions coincide when ε = 0.
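The claim of Example 2 can be checked numerically. Since the distribution of v1 is not fully specified above, the sketch below assumes, for concreteness, that v1 is distributed identically to s (an assumption of ours that satisfies the example's requirements); the scheme then has G(S|V2) = G(S) = 1/2, i.e., H∞(S|V2) = H∞(S) = 1, yet I(S; V2) > 0:

```python
import math
from fractions import Fraction

def example2_check(k=4):
    """Compute G(S), G(S|V2) and I(S; V2) for the (2, 2) scheme
    v2 = v1 + s mod k, with pS(0) = 1/2, pS(j) = 1/(2k-2) for j != 0,
    and v1 independent of s and (our assumption) distributed like s."""
    pS = {s: Fraction(1, 2) if s == 0 else Fraction(1, 2 * k - 2)
          for s in range(k)}
    pV1 = dict(pS)
    # joint distribution of (s, v2) with v2 = v1 + s mod k
    joint = {}
    for s, ps in pS.items():
        for v1, pv in pV1.items():
            v2 = (v1 + s) % k
            joint[(s, v2)] = joint.get((s, v2), Fraction(0)) + ps * pv
    pV2 = {}
    for (s, v2), pr in joint.items():
        pV2[v2] = pV2.get(v2, Fraction(0)) + pr
    g = max(pS.values())                                        # G(S)
    g_cond = sum(max(joint.get((s, v2), Fraction(0)) for s in range(k))
                 for v2 in range(k))                            # G(S|V2)
    mi = sum(float(pr) * math.log2(pr / (pS[s] * pV2[v2]))
             for (s, v2), pr in joint.items() if pr > 0)        # I(S; V2)
    return g, g_cond, mi
```

Under this assumption the scheme is 0-min secure (and 0-guess secure) but not 0-Shannon secure, exactly the gap Example 2 exhibits.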
4. Individual Security of Secret Sharing Schemes
In this section, we first give the security notions of individual security for secret sharing schemes based on Kolmogorov complexity, and then we consider the size of the shares based on the new concept of security in secret sharing schemes.
Definition 2. Let ∏ be a secret sharing scheme for an access structure Γ. An instance (s, v1, v2, …, vn) is
Kolmogorov ε-secure, if for any forbidden set F ∈ ℱ it satisfies I(s; vF) ≤ ε;
normalized Kolmogorov ε-secure, if for any forbidden set F ∈ ℱ it satisfies I(s; vF)/K(s) ≤ ε.

In the notion of Kolmogorov ε-security, the security parameter ε of an instance is the amount of information leakage: the maximal value of I(s; vF) over all forbidden sets F. However, 50 leaked bits, for example, is large for a 100-bit secret but small for a 1000-bit secret. So we also give the notion of normalized Kolmogorov ε-security, in which the parameter ε is the information leakage ratio: the maximal value of I(s; vF) over all forbidden sets F, divided by K(s). The notion of normalized Kolmogorov ε-security can simply be understood as a normalized version of individual security.
In fact, for the same instance (s, v1, v2, …, vn), I(s; vF) may be small for one forbidden set F but large for another forbidden set F′. It is worth noting that in Definition 2, for Kolmogorov ε-security, ε is the maximum of {I(s; vF) : F ∈ ℱ}, more precisely ε = maxF∈ℱ I(s; vF); and for normalized Kolmogorov ε-security, ε = maxF∈ℱ I(s; vF)/K(s).
Now we discuss some results for Kolmogorov ε-security. Since K(x|y, z) ≤ K(x|y) + O(1), we have I(s; vF) ≤ I(s; vF′) + O(1) whenever F ⊆ F′. Thus, up to a constant, the mutual algorithmic information between s and each single share vi is smaller than ε, because for any i ∈ F we have

I(s; vi) ≤ I(s; vF) + O(1) ≤ ε + O(1).

Moreover, if the access structure Γ is a (t, n)-threshold access structure, then in Definition 2(i), up to a constant, ε is the maximum of {I(s; vF) : |F| = t − 1}, or equivalently ε = max|F|=t−1 I(s; vF).
We now show some lower bounds on the share sizes of secret sharing schemes.
Theorem 3. Let ∏ be a secret sharing scheme for an access structure Γ.
(i) If an instance (s, v1, v2, …, vn) is Kolmogorov ε-secure, then |vi| ≥ K(s) − ε − O(1) for every i ∈ [n].
(ii) If an instance (s, v1, v2, …, vn) is normalized Kolmogorov ε-secure, then |vi| ≥ (1 − ε)K(s) − O(1) for every i ∈ [n].
Proof. For any i ∈ [n], there exists a forbidden set F ∈ ℱ such that i ∉ F and F ∪ {i} ∈ 𝒬. Let p be a shortest binary program that computes s from vF, so |p| = K(s|vF). Since ∏comb(vF, vi) = s, we have

K(s|vF) = |p| ≤ |∏comb| + |vi| + O(1).

(i) If the instance is Kolmogorov ε-secure, then K(s) − K(s|vF) ≤ ε, and we have

K(s) − ε ≤ K(s|vF) ≤ |vi| + O(1).

Thus |vi| ≥ K(s) − ε − O(1).

(ii) If the instance is normalized Kolmogorov ε-secure, then K(s) − K(s|vF) ≤ εK(s), and we have

(1 − ε)K(s) ≤ K(s|vF) ≤ |vi| + O(1).

Thus |vi| ≥ (1 − ε)K(s) − O(1). □
From the above theorem, we know that a string with high Kolmogorov complexity, i.e., a nearly Kolmogorov-random string, cannot be split among participants with both small share sizes and a small security parameter ε (i.e., high security).
5. Information Theoretic Security Versus Individual Security
In this section, we establish some relations between information theoretic security and individual security for secret sharing schemes.
First, note that in a secret sharing scheme the security parameter ε may be small for some instances but large for others; it is therefore difficult for every instance to be (normalized) Kolmogorov ε-secure with a small ε. So we consider secret sharing schemes in which the probability of an instance having a low security parameter is high, i.e., most instances are (normalized) Kolmogorov ε-secure for a small ε.
Definition 3. Let ∏ be a secret sharing scheme for an access structure Γ, and let u be a distribution over 𝒮 × 𝒱[n]. ∏ is
Kolmogorov (ε, δ)-secure, if for any forbidden set F ∈ ℱ it satisfies Pru[I(s; vF) ≤ ε] ≥ δ;
normalized Kolmogorov (ε, δ)-secure, if for any forbidden set F ∈ ℱ it satisfies Pru[I(s; vF)/K(s) ≤ ε] ≥ δ.
The following relations between Kolmogorov complexity, entropy and mutual information are important for the present paper.
Lemma 2 ([11,16]). Let X, Y be random variables over 𝒳, 𝒴. For any computable probability distribution u(x, y) over 𝒳 × 𝒴,
(i) 0 ≤ ∑x,y u(x, y)K(x|y) − H(X|Y) ≤ K(u) + O(1);
(ii) I(X; Y) − K(u) ≤ ∑x,y u(x, y)I(x : y) ≤ I(X; Y) + 2K(u). When u is given, I(X; Y) = ∑x,y u(x, y)I(x : y|u) + O(1).
Here we give the following relations between information theoretic security and the individual security of Definition 3(i).
Theorem 4. Let ∏ be a (t, n)-threshold scheme, where 𝒮 is the set of secrets and 𝒱[n] the set of all shares for all users, and let the variables S, V[n] over 𝒮 × 𝒱[n] have joint distribution u. If ∏ is Kolmogorov (ε, δ)-secure, then, up to a constant, it is (ε + (1 − δ) log |𝒮|)-Shannon secure and √((ε + (1 − δ) log |𝒮|) ln 2 / 2)-guess secure.

Proof. For any forbidden set F, let Q be the set of Kolmogorov ε-secure instances, i.e., Q = {(s, v[n]) : I(s; vF) ≤ ε}, so that u(Q) ≥ δ. Then by Lemma 2, up to a constant,

I(S; VF) ≤ ∑s,vF u(s, vF)I(s : vF) ≤ δε + (1 − δ) log |𝒮| ≤ ε + (1 − δ) log |𝒮|,

since I(s : vF) ≤ K(s) ≤ log |𝒮| up to a constant for every instance. Then by Theorem 1, up to a constant, we have G(S|VF) − G(S) ≤ √((ε + (1 − δ) log |𝒮|) ln 2 / 2). □
Then we establish relations between information theoretic security and the normalized individual security of Definition 3(ii).
Theorem 5. Let ∏ be a (t, n)-threshold scheme, where 𝒮 is the set of secrets and 𝒱[n] the set of all shares for all users, and let the variables S, V[n] over 𝒮 × 𝒱[n] have joint distribution u. If ∏ is normalized Kolmogorov (ε, δ)-secure, then, up to a constant, it is (1 + ε − δ) log |𝒮|-Shannon secure and √((1 + ε − δ) log |𝒮| ln 2 / 2)-guess secure.

Proof. Since ∏ is normalized Kolmogorov (ε, δ)-secure, the probability that an instance is normalized Kolmogorov ε-secure is at least δ, i.e., for any forbidden set F, Pru[I(s; vF)/K(s) ≤ ε] ≥ δ. Let Q be the set of normalized Kolmogorov ε-secure instances, i.e., Q = {(s, v[n]) : I(s; vF) ≤ εK(s)}. Then by Lemma 2, up to a constant,

I(S; VF) ≤ ∑s,vF u(s, vF)I(s : vF) ≤ δε log |𝒮| + (1 − δ) log |𝒮| ≤ (1 + ε − δ) log |𝒮|,

since, up to a constant, I(s : vF) ≤ εK(s) ≤ ε log |𝒮| on Q and I(s : vF) ≤ K(s) ≤ log |𝒮| otherwise. Then by Theorem 1, up to a constant, we have G(S|VF) − G(S) ≤ √((1 + ε − δ) log |𝒮| ln 2 / 2). □
Comparing Theorem 4 with Theorem 5, we obtain different relations between the entropy-based security notions and the two versions of individual security for secret sharing schemes.
6. Conclusions
Kolmogorov complexity and entropy are fundamentally different measures, and both are used to measure the security of secret sharing schemes. In this paper, we studied the relations between several security notions for secret sharing schemes. First, we considered three notions of information theoretic security: ε-Shannon security and ε-min security are both stronger than ε-guess security, and ε-min security is stronger than ε-Shannon security when S is uniformly random; moreover, 0-min security, 0-guess security and 0-Shannon security are all equivalent when S is uniformly random. Then, after giving notions of individual security for secret sharing schemes in the framework of Kolmogorov complexity, we established relations between information theoretic security and the two versions of individual security for secret sharing schemes, respectively.
In this paper, we only considered relations between several security notions for secret sharing schemes. Naturally, a more detailed discussion of connections with security notions in other areas of cryptography, such as those based on conditional Rényi entropies [6,7], will be both necessary and interesting.