Article

Entropy of Closure Operators and Network Coding Solvability

by
Maximilien Gadouleau
School of Engineering and Computing Sciences, Durham University, South Road, DH1 3LE, Durham, UK
Entropy 2014, 16(9), 5122-5143; https://doi.org/10.3390/e16095122
Submission received: 17 June 2014 / Revised: 16 August 2014 / Accepted: 11 September 2014 / Published: 25 September 2014
(This article belongs to the Section Information Theory, Probability and Statistics)

Abstract

The entropy of a closure operator has recently been proposed for the study of network coding and secret sharing. In this paper, we study closure operators in relation to their entropy. We first introduce four different kinds of rank functions for a given closure operator, which determine bounds on the entropy of that operator. This yields new axioms for matroids based on their closure operators. We also determine necessary conditions for a large class of closure operators to be solvable. We then define the Shannon entropy of a closure operator and use it to prove that the set of closure entropies is dense. Finally, we justify why we focus on the solvability of closure operators only.

1. Introduction

Network coding is a novel means to transmit data through a network, where each intermediate node determines its output packet from all of the packets it receives [1]. Network coding problems are further defined by restrictions on the alphabet of packets and sometimes on what computations the intermediate nodes may do. In particular, linear network coding [2] is optimal in the case of one source; however, this is not the case for multiple sources and destinations [3,4]. Although for large dynamic networks, good heuristics, such as random linear network coding [5,6], can be used, maximizing the amount of information that can be transmitted over a static network is fundamental, but very hard in practice. Solving this problem by brute force, i.e., considering all possible operations at all nodes, is computationally prohibitive. The network coding solvability problem is given as follows: given a network (with corresponding graph, sources, destinations and messages), can all of the messages be transmitted? This problem is very difficult; for instance, for some networks it is as hard as determining whether k mutually orthogonal Latin squares of order A exist [7,8].
Several major advances have been made on this problem. First of all, it can always be reduced to a multiple unicast instance, where each source sends a different message that is requested by a corresponding unique destination. In [9], the network coding solvability problem is reduced to the guessing game (described in Section 2.4 below), a simple cooperative problem on arbitrary directed graphs, thus removing the asymmetry between sources, intermediate nodes and destinations. Notably, [10] introduces the entropy of a directed graph (not to be mistaken with Körner’s graph entropy in [11] or with the Shannon capacity of a graph [12]); calculating this entropy solves the network coding solvability problem. This problem can be tackled by a more combinatorial approach, based on the so-called guessing number of a graph, which is closely related to the entropy [10]. The guessing number of graphs is studied further in [13], where it is proved that the guessing number of a directed graph is equal to the independence number of a related undirected graph. The guessing number of undirected graphs is further explored in [14].
A closure operator on the vertex set of a digraph is introduced in [8]. Network coding solvability is then proven to be a special case of a more general problem, called the closure solvability problem, for the closure operator defined on a digraph related to the network coding instance. The latter problem also generalises the search for ideal secret sharing schemes [15]. The main interest of closure solvability is that it allows us to use closure operators, which do not arise from digraphs (notably the uniform matroids), but which have been proven to be solvable over many alphabets. In this paper, we introduce the concept of the entropy of an arbitrary closure operator. Again, calculating the entropy of a closure operator determines whether this closure operator is solvable or not. Therefore, this paper aims at studying this quantity in detail.
The closure solvability problem generalises different problems in coding theory, cryptology or combinatorics.
  • As mentioned above, closure operators associated with digraphs are particularly relevant for network coding. Indeed, a network coding instance is solvable if and only if clD is solvable for some digraph D related to the network coding instance [8].
  • When reduced to matroids, this is the problem of representation by partitions in [16], which is equivalent to determining secret-sharing matroids [15].
  • When further reduced to the uniform matroid, this is exactly the problem of finding maximum distance separable (MDS) codes, which is arguably the most important open problem in coding theory (see [17]). Special cases include the famous combinatorial problem of the existence of mutually orthogonal Latin squares.
The rest of the paper is organised as follows. We review the closure operator associated with a digraph and the general closure solvability problem in Section 2. In Section 3, we introduce four kinds of rank functions for a given closure operator. This not only helps us derive bounds on the entropy, but we are also able to provide axioms for matroids that are, to the author’s knowledge, new. Section 4 then studies a natural upper bound on the entropy, based on polymatroids. This helps us prove that the set of closure entropies contains all rational numbers above one. Finally, Section 5 investigates the solvability problem beyond closure operators.

2. Preliminaries

2.1. Closure Operators

Throughout this paper, V is a set of n elements. A closure operator on V is a mapping cl : 2^V → 2^V, which satisfies the following properties (see Chapter IV in [18]). For any X, Y ⊆ V,
(1)
X ⊆ cl(X) (extensive);
(2)
if X ⊆ Y, then cl(X) ⊆ cl(Y) (isotone);
(3)
cl(cl(X)) = cl(X) (idempotent).
A closed set is a set equal to its closure. For instance, in a group, one may define the closure of a set as the subgroup generated by the elements of the set; the family of closed sets is simply the family of all subgroups of the group. Another example is given by linear spaces, where the closure of a set of vectors is the subspace they span. Closure operators are central in lattice theory and in universal algebra; moreover, any Galois connection between subset lattices is equivalent to a closure operator on subsets.
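As an illustration, the three axioms can be checked mechanically on a small ground set. The following is a minimal Python sketch (the helper names `subsets` and `is_closure_operator` are ours, not from the paper) that verifies them by brute force for the operator cl(X) = {1, …, max(X)}, which reappears as an example later in the paper:

```python
from itertools import chain, combinations

def subsets(V):
    """All subsets of V, as frozensets."""
    V = list(V)
    return [frozenset(c) for c in chain.from_iterable(
        combinations(V, k) for k in range(len(V) + 1))]

def is_closure_operator(cl, V):
    """Brute-force check of the extensive, isotone and idempotent axioms."""
    S = subsets(V)
    for X in S:
        if not X <= cl(X):                     # (1) extensive
            return False
        if cl(cl(X)) != cl(X):                 # (3) idempotent
            return False
        for Y in S:
            if X <= Y and not cl(X) <= cl(Y):  # (2) isotone
                return False
    return True

# cl(X) = {1, ..., max(X)}: a closure operator on V = {1, 2, 3}
cl = lambda X: frozenset(range(1, max(X) + 1)) if X else frozenset()
print(is_closure_operator(cl, {1, 2, 3}))  # True
```

The check is exponential in |V|, so it is only meant for experimenting with small examples.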
We refer to:
r := min{|b| : cl(b) = V}
as the rank of the closure operator. Any set b ⊆ V of size r and whose closure is V is referred to as a basis of cl. There is a natural partial order on closure operators of the same set. We denote cl1 ≤ cl2 if for all X ⊆ V, cl1(X) ⊆ cl2(X); then r(cl1) ≥ r(cl2).
We shall focus on two families of closure operators. Firstly, a matroid closure is a closure operator satisfying the Steinitz–Mac Lane exchange property (in order to simplify notation, we shall identify a singleton {v} with its element v): if X ⊆ V, v ∈ V and u ∈ cl(X ∪ v)\cl(X), then v ∈ cl(X ∪ u) [19]. In particular, the uniform matroid Ur,n of rank r over n vertices is defined by:
Ur,n(X) = V if |X| ≥ r, and Ur,n(X) = X otherwise.
It is worth noting that matroids constitute a much richer variety than uniform matroids.
Secondly, let D = (V, E) be a digraph on n vertices (possibly with loops, but without any repeated arcs). The in-neighborhood of a vertex v is denoted as v⁻ = {u : (u, v) ∈ E}; we extend this definition to subsets of vertices: Y⁻ = ∪v∈Y v⁻. The D-closure of any X ⊆ V is defined as follows [8]. We let cD(X) := X ∪ {v ∈ V : v⁻ ⊆ X}, and the D-closure of X is obtained by applying cD repeatedly n times: clD(X) := cD^n(X).
This definition can be intuitively explained as follows. Suppose we assign a function to each vertex of D, which only depends on its in-neighbourhood (the function that decides which message the vertex will transmit). If we know the messages sent by the vertices of X, we also know the messages that will be sent by any vertex in cD(X). By applying this iteratively, we can determine all messages sent by the vertices in clD(X). Therefore, clD(X) represents everything that is determined by X.
Alternatively, we have clD(X) := X ∪ Y, where Y is the largest acyclic set of vertices such that Y⁻ ⊆ X ∪ Y (see Lemma 1 in [8]). Recall that a feedback vertex set is a set of vertices X such that V\X induces an acyclic subgraph. The rank of clD is therefore the minimum size of a feedback vertex set of D.
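The iterative definition of the D-closure translates directly into code. The sketch below (the function name `d_closure` is ours; arcs are represented as pairs) applies cD n times; on a directed cycle, any single vertex already closes to the whole vertex set:

```python
def d_closure(X, V, E):
    """cl_D(X): apply c_D(X) = X ∪ {v : v's in-neighbourhood lies in X}
    n = |V| times. E is a set of arcs (u, v); loops are allowed."""
    in_nbrs = {v: {u for (u, w) in E if w == v} for v in V}
    X = set(X)
    for _ in range(len(V)):
        X |= {v for v in V if in_nbrs[v] <= X}
    return X

# Directed 3-cycle: cl_D behaves like the uniform matroid U_{1,3}
V, E = {1, 2, 3}, {(1, 2), (2, 3), (3, 1)}
print(d_closure(set(), V, E))  # the closure of the empty set stays empty
print(d_closure({1}, V, E))    # a single vertex closes to all of V
```

This matches the cases of Example 1 below: on an acyclic digraph every set closes to V, while on the directed cycle Cn the closure is U1,n.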

Example 1

The D-closure of some classes of graphs can be readily determined.
(1)
If D is acyclic, then clD = U0,n.
(2)
If D = Cn, the directed cycle, then clD = U1,n.
(3)
If D = Kn, the complete graph, then clD = Un−1,n.
(4)
If D has a loop on each vertex, then clD = Un,n.
Conversely, no other uniform matroid can be viewed as a D-closure.
However, the following two questions are still open. For which digraphs are the D-closures matroids? What matroids are represented by D-closures of digraphs?

2.2. Partitions

A partition of a finite set B is a collection of subsets, called parts, which are pairwise disjoint and whose union is the whole of B. We denote the parts of a partition f as Pi(f). If every part of f is contained in a unique part of g, we say f refines g. The equality partition EB with |B| parts refines any other partition, while the universal partition (with only one part) is refined by any other partition of B. The common refinement of two partitions f, g of B is given by h := f ∨ g with parts:
Pi,j(h) = Pi(f) ∩ Pj(g), for all i, j such that Pi(f) ∩ Pj(g) ≠ ∅.
We shall usually consider a tuple of n partitions f = (f1, …, fn) of the same set assigned to elements of a finite set V with n elements. In that case, for any X ⊆ V, we denote the common refinement of all fv, v ∈ X, as fX := ∨v∈X fv. For any X, Y ⊆ V, we then have fX∪Y = fX ∨ fY.
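The common refinement is straightforward to compute: intersect every pair of parts and keep the nonempty intersections. A minimal Python sketch (the helper name `refine` is ours):

```python
def refine(f, g):
    """Common refinement f ∨ g: all nonempty intersections of a part of f
    with a part of g."""
    return [p & q for p in f for q in g if p & q]

# Two partitions of {0, 1, 2, 3} whose common refinement is the
# equality partition (four singleton parts)
f = [{0, 1}, {2, 3}]
g = [{0, 2}, {1, 3}]
print(refine(f, g))
```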

2.3. Closure Solvability and Entropy

We now review the closure solvability problem [8]. The instance of the problem consists of a closure operator cl on V with rank r and a finite set A with |A| ≥ 2, referred to as the alphabet.

Definition 1

A coding function for cl over A is a family f of n partitions of A^r (the set of strings of length r over A) into at most |A| parts, such that fX = fcl(X) for all X ⊆ V.
We remark that the family of partitions f = (f1, …, fn), where fi is the partition of A^r with only one part, is a coding function for any closure operator of rank r over A.
The problem is to determine whether there exists a coding function for cl over A, such that fV has |A|^r parts. That is, we want to determine whether there exists an n-tuple f = (f1, …, fn) of partitions of A^r into at most |A| parts, such that:
fX = fcl(X) for all X ⊆ V,  fV = EA^r.
We make several remarks concerning the closure solvability problem.
(1)
The solvability problem could be defined as searching for families of partitions of any set B with |B| ≥ |A|^r, such that fV = EB. However, this can only occur if |B| = |A|^r; moreover, since only the cardinality of B matters, we can assume without loss of generality that B = A^r.
(2)
A coding function f naturally yields a closure operator clf on V, where clf(X) = {v ∈ V : fX∪v = fX} = ∪{Y : fY = fX}; we then have cl ≤ clf. Therefore, if cl2 is solvable, then any cl1 with the same rank and cl1 ≤ cl2 is also solvable.
For any partition g of A^r, the entropy of g is defined as the traditional entropy of the probability distribution where each event is a part with probability proportional to its size, scaled by the log of the alphabet size, i.e.,
H(g) := r − |A|^(−r) Σi |Pi(g)| log|A| |Pi(g)|.
The equality partition on A^r is the only partition with full entropy r. Denoting Hf(X) := H(fX), we can recast the conditions above as:
Hf(v) ≤ 1 for all v ∈ V,
Hf(X) = Hf(cl(X)) for all X ⊆ V,
Hf(V) = r.
The first two conditions are equivalent to f being a coding function. In general, the rank cannot always be attained; hence, we define the entropy of a closure operator cl over A as the maximum entropy of any coding function for it:
H(cl, A) := max{Hf(V) : f a coding function for cl over A}.
The entropy of cl is defined to be the supremum of all H(cl, A).
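For concreteness, the entropy H(g) of a partition of A^r depends only on the sizes of its parts, so it can be computed as follows (the function name and signature are ours):

```python
import math

def partition_entropy(part_sizes, q, r):
    """H(g) = r - q^(-r) * sum_i |P_i| * log_q |P_i| for a partition of A^r
    with the given part sizes, where q = |A|."""
    N = q ** r
    assert sum(part_sizes) == N, "sizes must partition A^r"
    return r - sum(s * math.log(s, q) for s in part_sizes) / N

# On A^2 with |A| = 2:
print(partition_entropy([1, 1, 1, 1], 2, 2))  # equality partition: full entropy 2.0
print(partition_entropy([2, 2], 2, 2))        # one bit of information: 1.0
print(partition_entropy([4], 2, 2))           # universal partition: 0.0
```

Only the equality partition (all parts singletons) attains the full entropy r, in line with the remark above.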

2.4. Guessing Game and Closure Solvability

The guessing game was first proposed for the study of network coding solvability by Riis. It is a cooperative game with n players, where each can only see a number of hats of the other players, but not their own. All of the players must guess the color of their own hat at the same time; the team wins if everyone guesses correctly. The aim of the guessing game is to devise a guessing strategy (a protocol), which maximises the number of winning configurations.
More formally, a configuration on a digraph D on V over a finite alphabet A is simply an n-tuple x = (x1, …, xn) ∈ A^n. A protocol f = (f1, …, fn) of D is a mapping f : A^n → A^n, such that f(x) is locally defined, i.e., fv(x) = fv(xv⁻) for all v. The fixed configurations of f are all of the configurations x ∈ A^n, such that f(x) = x: Fix(f) = {x ∈ A^n : f(x) = x}. The guessing number of D is then defined as the logarithm of the maximum number of configurations fixed by a protocol of D:
g(D, A) = maxf {log|A| |Fix(f)|}.
The guessing game on D is equivalent to the solvability problem for clD [8].
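For very small digraphs and alphabets, the guessing number can be computed by exhaustively enumerating all protocols. The following brute-force sketch (all names are ours; the search is exponential in every parameter, so this is only a toy) confirms that the directed 3-cycle has guessing number one over a binary alphabet, in line with its rank-one closure clC3 = U1,3:

```python
import math
from itertools import product

def guessing_number(V, E, q):
    """Brute-force g(D, A) = max_f log_q |Fix(f)| over all protocols f,
    where |A| = q. Feasible only for tiny digraphs and alphabets."""
    V = sorted(V)
    idx = {v: i for i, v in enumerate(V)}
    in_nbrs = {v: sorted(u for (u, w) in E if w == v) for v in V}
    A = list(range(q))

    def local_fns(v):
        # every function from A^{|v's in-neighbourhood|} to A
        keys = list(product(A, repeat=len(in_nbrs[v])))
        return [dict(zip(keys, vals)) for vals in product(A, repeat=len(keys))]

    best = 0
    for fs in product(*(local_fns(v) for v in V)):
        f = dict(zip(V, fs))
        fixed = sum(
            all(f[v][tuple(x[idx[u]] for u in in_nbrs[v])] == x[idx[v]]
                for v in V)
            for x in product(A, repeat=len(V)))
        best = max(best, fixed)
    return math.log(best, q)

# Directed 3-cycle over a binary alphabet: guessing number 1
print(guessing_number({1, 2, 3}, {(1, 2), (2, 3), (3, 1)}, 2))  # 1.0
```

The maximising protocol on the cycle is "repeat your in-neighbour's value", which fixes exactly the all-equal configurations.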

3. Rank Functions of Closure Operators

In this section, we investigate the properties of closure operators in general and we derive bounds on the entropy of their coding functions. We shall introduce four kinds of ranks for any closure operator. It is worth noting that they are all distinct from the so-called rank function of a closure operator studied in [20].

3.1. Inner and Outer Ranks

First of all, we are interested in upper bounds on the entropy of coding functions.

Definition 2

The inner rank and outer rank of a subset X of vertices are respectively given by:
ir(X) := min{|b| : cl(X) = cl(b)},
or(X) := min{|b| : X ⊆ cl(b)} = min{|b| : cl(X) ⊆ cl(b)}.
Although the notations should reflect which closure operator is used in order to be rigorous, we shall usually omit this dependence for the sake of clarity. Instead, if the closure operator is “decorated” by subscripts or superscripts, then the corresponding parameters will be decorated in the same fashion.
A set i with |i| = ir(X) and cl(i) = cl(X) is called an inner basis of X; similarly a set o with |o| = or(X) and cl(X) ⊆ cl(o) is called an outer basis of X.
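Both ranks can be computed by brute force over all candidate sets b, which is a convenient way to experiment with small operators (all names here are ours):

```python
from itertools import chain, combinations

def subsets(V):
    V = list(V)
    return [frozenset(c) for c in chain.from_iterable(
        combinations(V, k) for k in range(len(V) + 1))]

def inner_rank(cl, V, X):
    """ir(X) = min |b| with cl(b) = cl(X)."""
    return min(len(b) for b in subsets(V) if cl(b) == cl(frozenset(X)))

def outer_rank(cl, V, X):
    """or(X) = min |b| with X ⊆ cl(b)."""
    return min(len(b) for b in subsets(V) if frozenset(X) <= cl(b))

# cl(X) = {1, ..., max(X)}: cl({3}) = V, so V has inner and outer rank one
cl = lambda X: frozenset(range(1, max(X) + 1)) if X else frozenset()
V = {1, 2, 3}
print(outer_rank(cl, V, V), inner_rank(cl, V, V))  # 1 1
```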
The following properties are an easy exercise.

Proposition 1

For any X, Y ⊆ V,
(1)
or(cl(X)) = or(X) and ir(cl(X)) = ir(X);
(2)
or(X) ≤ ir(X) ≤ |X|;
(3)
or(X ∪ Y) ≤ or(X) + or(Y) and ir(X ∪ Y) ≤ ir(X) + ir(Y);
(4)
or(∅︀) = ir(∅︀) = 0 and or(V ) = ir(V ) = r;
(5)
if X ⊆ Y, then or(X) ≤ or(Y).
The closure of the empty set is the only closed set of (inner and outer) rank zero, while V is not necessarily the unique closed set of (inner or outer) rank r.
Note that the inner rank is not monotonic, as seen in the example in Figure 1. We have clD(4) = V and, hence, irD(V) = 1, while irD(123) = 2, since clD(12) = 123 and clD(v) = v for any v ∈ 123.
If cl1(X) ⊆ cl2(X) for some X, then or1(X) ≥ or2(X). Indeed, any outer basis of X with respect to cl1 is also an outer basis of X with respect to cl2. In particular, if cl1 ≤ cl2, then or1(X) ≥ or2(X) for all X.

Lemma 1

Let G : 2^V → ℝ satisfy 0 ≤ G(X) ≤ |X| and G(cl(X)) = G(X) for all X ⊆ V. Then, G(X) ≤ ir(X) for all X. Furthermore, if X ⊆ Y implies G(X) ≤ G(Y), then G(X) ≤ or(X) for all X.

Proof

First, if i is an inner basis of X, then G(X) = G(cl(i)) = G(i) ≤ |i| = ir(X). Second, if o is an outer basis of X, G(X) ≤ G(cl(o)) = G(o) ≤ |o| = or(X).
This Lemma proves that we get subadditivity for free. Since the entropy satisfies all of the conditions of Lemma 1, we obtain an upper bound on the entropy.

Corollary 1

For any coding function f and any XV, Hf (X) ≤ or(X).

3.2. Flats and Span

Before we move on to lower bounds on the entropy, we define two fundamental concepts.

Definition 3

A flat is a subset F of vertices for which there is no X ⊋ F with or(X) = or(F).

Proposition 2

Flats satisfy the following properties.
(1)
cl(∅︀) is the only flat with rank zero, and V is the only flat with rank r.
(2)
any flat F is a closed set;
(3)
or(F) = ir(F);
(4)
for any X, there exists a flat F ⊇ X with or(F) = or(X).

Proof

(1)
is trivial.
(2)
Since cl(F) contains F while having the same outer rank as F, it cannot properly contain F; hence, cl(F) = F.
(3)
Let o be an outer basis of F. Since F ⊆ cl(o) while or(F) = or(cl(o)), we obtain F = cl(o) and o is an inner basis of F.
(4)
For any X, let C be a set of largest cardinality that contains X and has outer rank or(X). Then, there exists no G such that C ⊂ G and or(G) = or(X) = or(C), i.e., C is a flat.
It is worth noting that there are closed sets that are not flats. For example, consider the following closure operator on V = {1, …, n}, where cl(X) = {1, …, max(X)}. Then, it has rank one and, hence, only two flats (the empty set and V), while it has n + 1 closed sets (the empty set and cl(i) for all i). We shall clarify the relationship between closed sets and flats below.
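This example is small enough to check exhaustively. The sketch below (all names ours) enumerates the closed sets and the flats of cl(X) = {1, …, max(X)} on three vertices, recovering the counts above:

```python
from itertools import chain, combinations

def subsets(V):
    V = list(V)
    return [frozenset(c) for c in chain.from_iterable(
        combinations(V, k) for k in range(len(V) + 1))]

def closed_sets(cl, V):
    """Sets equal to their closure."""
    return [X for X in subsets(V) if cl(X) == X]

def flats(cl, V):
    """Sets F with no proper superset of the same outer rank."""
    S = subsets(V)
    orank = lambda X: min(len(b) for b in S if X <= cl(b))
    return [F for F in S if not any(F < X and orank(X) == orank(F) for X in S)]

cl = lambda X: frozenset(range(1, max(X) + 1)) if X else frozenset()
V = {1, 2, 3}
print(len(closed_sets(cl, V)))  # 4 closed sets (n + 1 with n = 3)
print(len(flats(cl, V)))        # 2 flats: the empty set and V
```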

Definition 4

For any XV, the union of all flats containing X with outer rank equal to that of X is referred to as the span of X, i.e.,
span(X) := ∪{F : F flat, X ⊆ F, or(F) = or(X)}.

Proposition 3

For any X,
(1)
cl(X) ⊆ span(X) with equality if and only if cl(X) is a flat;
(2)
span(cl(X)) = span(X);
(3)
span(X) = {v ∈ V : or(X ∪ v) = or(X)}.

Proof

The first two properties follow directly from the definition. Suppose v ∈ F, a flat containing X with or(F) = or(X); then or(X ∪ v) ≤ or(F) = or(X). Conversely, if or(X ∪ w) = or(X), then X ∪ w is contained in a flat with the same outer rank as X and, hence, in span(X).
Flats and spans provide two alternate axioms for matroids.

Theorem 1

The following are equivalent:
(1)
cl is a matroid;
(2)
all closed sets are flats, i.e., cl(X) = span(X) for all X ⊆ V;
(3)
all closed sets are spans, i.e., for all X ⊆ V, there exists Y ⊆ V, such that cl(X) = span(Y).

Proof

The first property clearly implies the third one. Let us now prove that the second property implies the first one. Let X ⊆ V, v ∈ V and u ∈ cl(X ∪ v)\cl(X); then or(X ∪ u) = or(X) + 1 = or(X ∪ v) and, hence, cl(X ∪ u) = cl(X ∪ v). Thus, cl satisfies the Steinitz–Mac Lane exchange axiom.
We now prove that the third property implies the second. Suppose all closed sets are spans; we shall prove that all closed sets of outer rank k are flats, by induction on 0 ≤ k ≤ r. This is clear for k = 0; hence, suppose it holds up to k − 1. Consider a minimal closed set c of outer rank k, i.e., or(c) = k and or(c′) ≤ k − 1 for any closed set c′ ⊂ c. By hypothesis, we have c = span(Y) for some Y ⊆ c; we now prove that c = cl(Y). Suppose that cl(Y) ⊂ c; then cl(Y) = c′, a closed set of outer rank at most k − 1. Then, c′ is a flat, i.e., c′ = span(c′) = span(cl(Y)) = span(Y) = c, a contradiction. Thus, c = span(c) and c is a flat.
There are solvable closure operators that are not matroids, e.g., the undirected graph C̄4 displayed in Figure 2. It is solvable because it has rank two and contains K2 ∪ K2. More explicitly, the following is a solution for it over any alphabet:
Pi(y) := {x ∈ A^2 : xi = y}, i ∈ {1, 2},
f1 = f3 = {P1(y) : y ∈ A},
f2 = f4 = {P2(y) : y ∈ A}.
In that case, note that the outer rank is submodular, and hence, the span operator of C̄4 is the matroid U2,4; however, clC̄4 is not a matroid itself.
We would like to explain the significance of flats in matroids for random network coding. A model for noncoherent random network coding based on matroids is proposed in [21], which generalises routing (a special case for the uniform matroid), linear network coding (the projective geometry) and affine network coding (the affine geometry). In order to combine the messages they receive, the intermediate nodes select a random element from the closure of the received messages. The model is based on matroids, because all closed sets are flats, hence a new message is either in the closure of all of the previously received messages (and is not informative) or it increases the outer rank (and is fully informative).

3.3. Upper and Lower Ranks

We are now interested in lower bounds on the entropy of coding functions. Since any closure operator has a trivial coding function with entropy zero (where the universal partition is placed on every vertex), the entropy of any coding function cannot be bounded below. Therefore, most of our bounds will apply to solutions only.

Definition 5

The lower rank and upper rank of X are respectively defined as:
lr(X) := min{|Y| : cl(Y ∪ (V\X)) = V},
ur(X) := r − lr(V\X).
A few elementary properties of the lower and upper ranks are listed below. Again, if cl1 ≤ cl2, then lr1(X) ≥ lr2(X) and ur1(X) ≥ ur2(X) for all XV.
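As with the inner and outer ranks, the lower and upper ranks are easy to compute by exhaustive search on small examples (all names are ours; the rank r must be supplied by the caller):

```python
from itertools import chain, combinations

def subsets(V):
    V = list(V)
    return [frozenset(c) for c in chain.from_iterable(
        combinations(V, k) for k in range(len(V) + 1))]

def lower_rank(cl, V, X):
    """lr(X) = min |Y| such that cl(Y ∪ (V \\ X)) = V."""
    Vf, rest = frozenset(V), frozenset(V) - frozenset(X)
    return min(len(Y) for Y in subsets(V) if cl(Y | rest) == Vf)

def upper_rank(cl, V, X, r):
    """ur(X) = r - lr(V \\ X)."""
    return r - lower_rank(cl, V, frozenset(V) - frozenset(X))

# cl(X) = {1, ..., max(X)} has rank one; on X = {3} the chain
# lr(X) <= ur(X) <= or(X) collapses to 1 <= 1 <= 1
cl = lambda X: frozenset(range(1, max(X) + 1)) if X else frozenset()
V = {1, 2, 3}
print(lower_rank(cl, V, {3}), upper_rank(cl, V, {3}, 1))  # 1 1
```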

Lemma 2

The following hold:
(1)
lr(V ) = ur(V ) = r and lr(∅︀) = ur(∅︀) = 0.
(2)
For any X ⊆ V, lr(X) = 0 if and only if cl(V\X) = V. Hence, ur(X) = r if and only if cl(X) = V.
(3)
For any X ⊆ V,
ur(X) = r − min{or(Y) : cl(X ∪ Y) = V} = r − min{or(F) : F flat and cl(X ∪ F) = V}.
(4)
If X ⊆ Z, then ur(X) ≤ ur(Z) and lr(X) ≤ lr(Z).
(5)
ur(X) = ur(cl(X)).
(6)
lr(X) ≤ ur(X) ≤ or(X).

Proof

The first three properties are easily proven. Property (4) for the upper rank follows from Property (3); the result for the lower rank follows from lr(X) = r − ur(V\X). For Property (5), Property (4) yields ur(X) ≤ ur(cl(X)), while cl(X ∪ Y) = cl(cl(X) ∪ Y) yields the reverse inequality. We now prove Property (6). The inequality ur(X) ≤ or(X) follows from the subadditivity of the outer rank. To prove that lr(X) ≤ ur(X), let b be a basis for cl. Then:
V = cl(b) = cl((b ∩ X) ∪ (b ∩ (V\X))) ⊆ cl((b ∩ X) ∪ (V\X)),
and hence, cl((b ∩ X) ∪ (V\X)) = V; thus, |b ∩ X| ≥ lr(X). Similarly, |b ∩ (V\X)| ≥ lr(V\X), and hence, r = |b| ≥ lr(X) + lr(V\X).
We remark that for any solution f, we have r = Hf(V) ≤ Hf(X) + or(Y) for any X, Y, such that cl(X ∪ Y) = V. Therefore, we obtain:
Hf(X) ≥ ur(X)
for all X ⊆ V.

Corollary 2

For any solution f of cl and any XV,
r − Hf(V\cl(X)) ≤ lr(cl(X)) ≤ ur(X) ≤ Hf(X) ≤ or(X).
Note that a trivial lower bound on Hf (X) (where f is a solution) is given by rHf (V\X). Therefore, the intermediate bounds on Hf (X) in Corollary 2 refine this trivial bound.
Some of the results above can be generalised for any coding function f: denoting:
lrf(X) = min{Hf(Y) : cl(Y ∪ (V\X)) = V},
urf(X) = Hf(V) − lrf(V\X),
we obtain:
Hf(V) − Hf(V\cl(X)) ≤ lrf(cl(X)) ≤ urf(X) ≤ Hf(X) ≤ or(X).
We finish this subsection by remarking that Theorem 1 has an analogue for the upper rank. Namely, define an upper flat as a set F, such that F ⊊ X implies ur(X) > ur(F); define also the upper span of X as:
uspan(X) := ∪{F : F upper flat, X ⊆ F, ur(F) = ur(X)} = {v ∈ V : ur(X ∪ v) = ur(X)}.

Theorem 2

The following are equivalent:
(1)
cl is a matroid;
(2)
all closed sets are upper flats, i.e., cl(X) = uspan(X) for all X ⊆ V;
(3)
all closed sets are upper spans, i.e., for all X ⊆ V, there exists Y ⊆ V, such that cl(X) = uspan(Y).

3.4. Inner and Outer Complemented Sets

We are now interested in a case where the bounds on the entropy are tight.

Definition 6

We say a set X is outer complemented if or(X) = ur(X). Moreover, we say it is inner complemented if ir(X) = ur(X).
Therefore, if X is outer complemented, then Hf (X) = or(X) = ur(X) for any solution f.
Remark that X is outer (inner) complemented if and only if cl(X) is outer (inner) complemented.

Proposition 4

The following are equivalent:
(1)
X is outer complemented;
(2)
there exists Z, such that or(X) + or(Z) = r, cl(X ∪ Z) = V and X ∩ Z = ∅;
(3)
any outer basis of X is contained in a basis of V.
Similar results hold for inner complemented sets. The following are equivalent:
(1)
X is inner complemented;
(2)
X is outer complemented and ir(X) = or(X);
(3)
any inner basis of X is contained in a basis of V.

Proof

The equivalence of the first two properties is easily shown. If X is outer complemented, let o be an outer basis of X, and let Z satisfy cl(X ∪ Z) = V and |Z| = r − or(X). Then, o ∪ Z is a basis of V. Conversely, if any outer basis can be extended to a basis, then any such extension is a valid Z for Property (2). The properties for an inner complemented set are easy to prove.
We saw earlier that cl(X) ⊆ clf (X) for any coding function f and any X. This can be refined when f is a solution and X is outer complemented.

Lemma 3

If f is a solution of cl, then cl(span(X)) ⊆ clf (X) for any outer complemented X.

Proof

For any outer complemented X, we have Hf(X) = or(X). Suppose v ∈ span(X); then or(X) = or(X ∪ v) ≥ Hf(X ∪ v) ≥ Hf(X) = or(X) and, hence, v ∈ clf(X). Since clf(X) is a closed set of cl, we easily obtain that cl(span(X)) ⊆ clf(X).

Corollary 3

If there exists an outer complemented set X, such that its span has higher outer rank and is also outer complemented, then cl is not solvable over any alphabet.
By extension, we say that cl is outer complemented if all sets are outer complemented. We can characterise the solvable outer complemented closure operators.

Theorem 3

Suppose that cl has rank r and is outer complemented. Then, cl is solvable if and only if span is a solvable matroid with rank r.

Proof

If all sets are outer complemented, then any solution f of cl is also a coding function of span, since span(X) = {v ∈ V : Hf(X ∪ v) = Hf(X)}. Since the outer rank is equal to the entropy Hf, it is submodular, and hence, span is a matroid whose rank function is given by the outer rank. Thus, span has rank r and f is a solution for it. Conversely, if span is a solvable matroid with rank r, then we have cl ≤ span and cl is solvable.
For instance, for the undirected cycle C̄5 in Figure 3, clC̄5 is outer complemented, though the outer rank is not submodular; hence, span is not a matroid. As such, C̄5 is not solvable (its entropy is actually 2.5 [10]).
We would like to emphasize that if cl is solvable and all sets are outer complemented, then the outer rank must be submodular, i.e., the rank function of a matroid. However, this does not imply that cl should be a matroid itself. For instance, consider cl defined on {1, 2, 3} as follows: cl(1) = 12, cl(2) = 2, cl(3) = 3, cl(13) = cl(23) = 123. Then, any set is inner complemented, cl is solvable (by letting f1 = f2 and choosing f3 such that f1 ∨ f3 = EA^2), but cl is not a matroid.
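The Steinitz–Mac Lane exchange property can also be tested mechanically. The sketch below (all names ours) encodes the three-vertex example above as a lookup table and confirms that it fails the exchange axiom:

```python
from itertools import chain, combinations

def subsets(V):
    V = list(V)
    return [frozenset(c) for c in chain.from_iterable(
        combinations(V, k) for k in range(len(V) + 1))]

def is_matroid_closure(cl, V):
    """Steinitz-Mac Lane exchange: u in cl(X ∪ v)\\cl(X) implies v in cl(X ∪ u)."""
    for X in subsets(V):
        for v in V:
            for u in cl(X | {v}) - cl(X):
                if v not in cl(X | {u}):
                    return False
    return True

# The three-vertex example: cl(1) = 12, cl(2) = 2, cl(3) = 3, cl(13) = cl(23) = 123
table = {
    frozenset(): frozenset(),
    frozenset({1}): frozenset({1, 2}), frozenset({2}): frozenset({2}),
    frozenset({3}): frozenset({3}), frozenset({1, 2}): frozenset({1, 2}),
    frozenset({1, 3}): frozenset({1, 2, 3}), frozenset({2, 3}): frozenset({1, 2, 3}),
    frozenset({1, 2, 3}): frozenset({1, 2, 3}),
}
cl = lambda X: table[frozenset(X)]
print(is_matroid_closure(cl, frozenset({1, 2, 3})))  # False
```

Exchange fails already at X = ∅, v = 1, u = 2: we have 2 ∈ cl(1)\cl(∅), yet 1 ∉ cl(2).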

3.5. Combining Closure Operators

In this subsection, V1 and V2 are disjoint sets of respective cardinalities n1 and n2; cl1 and cl2 are closure operators on V1 with rank r1 and on V2 with rank r2, respectively. We further let V = V1 ∪ V2, and for any X ⊆ V, we shall denote X1 = X ∩ V1 and X2 = X ∩ V2. Different ways of combining closure operators have been proposed in [8].

Definition 7

The disjoint, unidirectional and bidirectional unions of cl1 and cl2 are, respectively:
cl1 ∪ cl2(X) := cl1(X1) ∪ cl2(X2),
cl1 ∪⃗ cl2(X) := V1 ∪ cl2(X2) if cl1(X1) = V1, and cl1(X1) ∪ X2 otherwise,
cl1 ∪̄ cl2(X) := V1 ∪ cl2(X2) if X1 = V1; V2 ∪ cl1(X1) if X2 = V2; and X otherwise.
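These three unions can be sketched directly from Definition 7 (all function names are ours; closure operators are passed as functions on frozensets):

```python
def uniform(rk, Vf):
    """Uniform matroid closure U_{rk, |Vf|} on ground set Vf."""
    return lambda X: Vf if len(X) >= rk else frozenset(X)

def disjoint_union(cl1, cl2, V1, V2):
    return lambda X: cl1(frozenset(X) & V1) | cl2(frozenset(X) & V2)

def unidirectional_union(cl1, cl2, V1, V2):
    def cl(X):
        X1, X2 = frozenset(X) & V1, frozenset(X) & V2
        return (V1 | cl2(X2)) if cl1(X1) == V1 else (cl1(X1) | X2)
    return cl

def bidirectional_union(cl1, cl2, V1, V2):
    def cl(X):
        X1, X2 = frozenset(X) & V1, frozenset(X) & V2
        if X1 == V1:
            return V1 | cl2(X2)
        if X2 == V2:
            return V2 | cl1(X1)
        return X1 | X2
    return cl

V1, V2 = frozenset({1, 2}), frozenset({3, 4})
cl1, cl2 = uniform(1, V1), uniform(1, V2)
print(sorted(disjoint_union(cl1, cl2, V1, V2)({1, 3})))       # [1, 2, 3, 4]
print(sorted(unidirectional_union(cl1, cl2, V1, V2)({3})))    # [3]
print(sorted(bidirectional_union(cl1, cl2, V1, V2)({1, 3})))  # [1, 3]
```

With two copies of U1,2, the disjoint union closes {1, 3} to everything, while the bidirectional union leaves it untouched since neither side is full.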
If cl is a closure operator on V satisfying cl1∪⃗cl2 ≤ cl ≤ cl1 ∪ cl2, it has rank r1 + r2 and entropy H(cl1) + H(cl2). We can then split the problem into two parts. In that case, we also have:
cl1(X1) = cl(X) ∩ V1 = cl(X1 ∪ V2) ∩ V1 = cl(X1) ∩ V1
for all X ⊆ V, i.e., V2 has no influence on cl(X) ∩ V1.
The rank of the bidirectional union is given by:
r(cl1 ∪̄ cl2) = min{n1 + r2, n2 + r1},
while its entropy only satisfies the inequality:
H(cl1 ∪̄ cl2) ≤ min{n1 + H(cl2), n2 + H(cl1)}.
We can determine how the four rank functions given above behave with regards to the three types of union.

Proposition 5

For the disjoint union, let cl := cl1 ∪ cl2, then:
r = r1 + r2,
or(X) = or1(X1) + or2(X2),
ir(X) = ir1(X1) + ir2(X2),
ur(X) = ur1(X1) + ur2(X2),
lr(X) = lr1(X1) + lr2(X2).
For the unidirectional union, let cl∪⃗ := cl1 ∪⃗cl2, then:
r∪⃗ = r1 + r2,
or∪⃗(X) = min{r1 + or2(X2), or1(X1) + |X2|},
ir∪⃗(X) = min{r1 + ir2(X2), ir1(X1) + |X2|},
ur∪⃗(X) = ur1(X1) + ur2(X2),
lr∪⃗(X) = lr1(X1) + lr2(X2).
For the bidirectional union, let cl∪̄ := cl1 ∪̄cl2, then:
r∪̄ = min{n1 + r2, n2 + r1},
or∪̄(X) = min{n1 + or2(X2), n2 + or1(X1), |X|},
ir∪̄(X) = min{n1 + r2, n2 + r1} if cl∪̄(X) = V; n1 + ir2(X2) if X1 = V1 and cl2(X2) ≠ V2; n2 + ir1(X1) if X2 = V2 and cl1(X1) ≠ V1; |X| otherwise,
ur∪̄(X) = r∪̄ − min{n1 − |X1| + r2 − ur2(X2), n2 − |X2| + r1 − ur1(X1)},
lr∪̄(X) = min{|X1| + lr2(X2), |X2| + lr1(X1)}.

Proof

The results for the disjoint union easily follow from the definitions. We then turn to the unidirectional union. Again, we remark that cl∪⃗(X) = V if and only if cl1(X1) = V1 and cl2(X2) = V2; this gives the rank, the upper rank of X and then its lower rank. For the outer rank, we have X ⊆ cl∪⃗(o) if and only if either cl1(o1) = V1 and X2 ⊆ cl2(o2), or X1 ⊆ cl1(o1) and X2 ⊆ o2. The proof for the inner rank is similar.
For the bidirectional union, we have X ⊆ cl∪̄(o) if and only if o1 = V1 and X2 ⊆ cl2(o2), or o2 = V2 and X1 ⊆ cl1(o1), or X ⊆ o; this yields the outer rank. The inner rank is obtained by considering each case separately. If cl∪̄(X) = V, then cl∪̄(i) = cl∪̄(X) = V if and only if i1 = V1 and cl2(i2) = V2, or i2 = V2 and cl1(i1) = V1. If X1 = V1, but cl2(X2) ≠ V2, then cl∪̄(i) = cl∪̄(X) if and only if i1 = V1 and cl2(i2) = cl2(X2). The third case comes from symmetry. Finally, if X1 ≠ V1 and X2 ≠ V2, then cl∪̄(i) = cl∪̄(X) = X if and only if i = X. For the upper rank, we remark that cl∪̄(X ∪ Y) = V if and only if either X1 ∪ Y1 = V1 and cl2(X2 ∪ Y2) = V2, or X2 ∪ Y2 = V2 and cl1(X1 ∪ Y1) = V1. The lower rank follows from the upper rank.

4. Shannon Entropy

Since finding the entropy of a digraph is difficult in general, [9,10] developed the idea of Shannon entropy of a graph. The main idea is to maximise over all functions that satisfy some of the properties of an entropic function, notably submodularity. This idea can be adapted to general closure operators. For any closure operator cl on V, a Shannon function for cl can be viewed as a cl-compatible polymatroid.

Definition 8

For any closure operator cl on V, a Shannon function for cl is a function r : 2^V → ℝ, such that:
(1)
if X ⊆ V, then
0 ≤ r(X) ≤ |X|;
(2)
r is increasing, i.e., if X ⊆ Y ⊆ V, then
r(X) ≤ r(Y);
(3)
r is submodular, i.e., if X, Y ⊆ V, then:
r(X) + r(Y) ≥ r(X ∪ Y) + r(X ∩ Y);
(4)
for all X ⊆ V,
r(X) = r(cl(X)).
The maximum value of r(V ) over all Shannon functions for cl is called the Shannon entropy of cl and is denoted by SE(cl).
Any Shannon function also satisfies the conditions of Lemma 1, hence SE(cl) ≤ or(V ) = r. Moreover, it is clear that if cl1 ≤ cl2, then SE(cl1) ≥ SE(cl2).
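Whether a given function is a Shannon function can be verified axiom by axiom (all names are ours; small tolerances guard against floating point). As a sanity check, the rank function min(|X|, 1) of the uniform matroid U1,3 is a Shannon function for the corresponding closure operator:

```python
from itertools import chain, combinations

def subsets(V):
    V = list(V)
    return [frozenset(c) for c in chain.from_iterable(
        combinations(V, k) for k in range(len(V) + 1))]

def is_shannon_function(r, cl, V, eps=1e-9):
    """Check axioms (1)-(4) of Definition 8 for r given as a dict on subsets."""
    S = subsets(V)
    for X in S:
        if not (-eps <= r[X] <= len(X) + eps):           # (1) bounded by |X|
            return False
        if abs(r[X] - r[cl(X)]) > eps:                   # (4) closure-invariant
            return False
        for Y in S:
            if X <= Y and r[X] > r[Y] + eps:             # (2) increasing
                return False
            if r[X] + r[Y] < r[X | Y] + r[X & Y] - eps:  # (3) submodular
                return False
    return True

# Rank function of the uniform matroid U_{1,3} and its closure operator
V = frozenset({1, 2, 3})
cl = lambda X: V if len(X) >= 1 else frozenset()
rank = {X: min(len(X), 1) for X in subsets(V)}
print(is_shannon_function(rank, cl, V))  # True
```

The cardinality function r(X) = |X| fails axiom (4) here, since r(cl({1})) = 3 ≠ 1, illustrating how the closure constraint cuts down the polymatroid.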

4.1. Shannon Entropy and Combining Closure Operators

Lemma 4

If cl1∪⃗cl2 ≤ cl ≤ cl1 ∪ cl2, then for any Shannon function r for cl, the function:
r′(X) := r(X ∩ V1) + r(X ∪ V1) − r(V1)
is a Shannon function for cl, such that r′(X) = r′(X1) + r′(X2) and r′(V) = r(V).

Proof

Only the closure property is nontrivial to verify. Since cl(X) ∩ V1 = cl(X1) ∩ V1, we obtain:
r′(cl(X)) = r(cl(X) ∩ V1) + r(cl(X) ∪ V1) − r(V1) ≤ r(cl(X1)) + r(cl(X ∪ V1)) − r(V1) = r′(X).

Proposition 6

If cl1∪⃗cl2 ≤ cl ≤ cl1 ∪ cl2, then:
SE ( cl ) = SE ( cl 1 ) + SE ( cl 2 ) .

Proof

First of all, it is clear that SE(cl) ≥ SE(cl1) + SE(cl2). Indeed, let ri be a Shannon function for cli; then r defined by r(X) := r1(X1) + r2(X2) is a Shannon function for cl1 ∪ cl2, and hence for cl, since cl ≤ cl1 ∪ cl2.
We now show the reverse inequality. By Lemma 4, there exists a Shannon function r for cl with r(X) = r(X1) + r(X2) and r(V) = SE(cl). It is easily seen that the restriction r2 of r to V2 is a Shannon function for cl2; hence, r2(V2) = r(V2) ≤ SE(cl2). Furthermore, define the function r1 : 2^{V1} → ℝ as:
r1(X) := r(X ∪ V2) − r(V2).
We check that r1 is indeed a Shannon function for cl1. The first two properties are straightforward, while submodularity comes from:
r1(X) + r1(Y) = r(X ∪ V2) + r(Y ∪ V2) − 2r(V2) ≥ r((X ∪ Y) ∪ V2) + r((X ∩ Y) ∪ V2) − 2r(V2) = r1(X ∪ Y) + r1(X ∩ Y)
and the closure property comes from the fact that cl1(X) = cl(X ∪ V2) \ V2:
r1(cl1(X)) = r(cl1(X) ∪ V2) − r(V2) = r(cl(X ∪ V2)) − r(V2) = r(X ∪ V2) − r(V2) = r1(X).
Thus,
r(V) = r1(V1) + r2(V2) ≤ SE(cl1) + SE(cl2).
The Shannon entropy of the bidirectional union satisfies an inequality similar to the one satisfied by the entropy.

Proposition 7

For any cl1 and cl2, we have:
SE(cl1∪̄cl2) ≤ min{SE(cl1) + n2, SE(cl2) + n1}.

Proof

We say a function r : 2^V → ℝ is a V1-function if it satisfies all of the properties of a Shannon function, but only for all X, Y containing V1. The maximum value of r(V) over all V1-functions, denoted as S, is greater than or equal to SE(cl1∪̄cl2). Let r be a V1-function and consider:
r2(X) := r(X ∪ V1) − r(V1)
for all XV2. We then prove that r2 is a Shannon function for cl2. Only Property (4) is nontrivial to check; we have:
r2(cl2(X)) = r(cl2(X) ∪ V1) − r(V1) = r(cl1∪̄cl2(X ∪ V1)) − r(V1) = r(X ∪ V1) − r(V1) = r2(X).
If r achieves r(V) = S, we obtain S = r2(V2) + r(V1) ≤ SE(cl2) + n1. Thus, SE(cl1∪̄cl2) ≤ S ≤ SE(cl2) + n1. Symmetry finishes the proof.

4.2. Density of Closure Entropies

We remark that any closure operator of rank at least one has entropy at least one (assign the universal partition to every vertex in cl(∅) and the same partition g of A^r into |A| parts to every other vertex). Moreover, any D-closure for a digraph D with rank (i.e., minimum feedback vertex set size) two has entropy two; in fact, such closure operators are solvable over any sufficiently large alphabet [8]. This shows that multiple unicast instances with two source-destination pairs are solvable over all sufficiently large alphabets. This proof technique cannot be generalised to arbitrary digraphs, for the graph of Figure 3 has rank three, but entropy only 2.5. Another direction could then be to consider other families of closure operators and to look for “gaps” in the entropy distribution; in particular, we may ask whether all closure operators of rank two are solvable. Theorem 4 gives an emphatic negative answer to the last question: the set of all possible closure entropies is dense above one.

Theorem 4

For any r ≥ 2 and any rational number H in (1, r], there exists a closure operator of rank r with entropy equal to H.
The proof is constructive: for any such H, we give a closure operator with entropy equal to H, together with a coding function achieving entropy H.
First of all, we introduce some notation regarding rooted trees. A rooted tree is a tree with a specific vertex, called the root, denoted as R. The vertices at distance k from the root form level k of the tree (hence, the root is the only vertex on level 0), denoted as lk. For any vertex v on level k, its parent is the only vertex adjacent to v on level k − 1; we denote it as p(v). Moreover, we denote its ancestry as a(v) := {v, p(v), p^2(v), . . . , R} (remark that we include v in its ancestry). Conversely, a child of v is any vertex on level k + 1 adjacent to v, and any vertex without any children is a leaf of the tree. We denote the set of children of v as c(v). We extend the definitions above to any set of vertices X, e.g., p(X) = ∪_{v∈X} p(v). The following properties easily follow.
(1)
If u ∈ a(v), then a(u) ⊆ a(v). Therefore, a(a(X)) = a(X) for all X.
(2)
If X ⊆ Y, then a(X) ⊆ a(Y).
(3)
c(v) ⊆ a(X) only if |X| ≥ |c(v)|.

Definition 9

An (L, C)-tree, with 0 ≤ L ≤ C, is a rooted tree with root R and L + 1 levels, such that any vertex of level k has C − k children for 0 ≤ k ≤ L − 1. If L = 0, this tree reduces to a single vertex.
We then have:
(4)
Each vertex of level L − 1 has C − L + 1 children (which are leaves).
(5)
For all k, |lk| = C!/(C − k)!.
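Properties (4) and (5) can be confirmed by direct construction. The following sketch (Python; the construction routine and variable names are illustrative assumptions, not from the paper) builds an (L, C)-tree by giving every vertex on level k exactly C − k children, and checks the level sizes:

```python
from math import factorial

def build_tree(L, C):
    """Build an (L, C)-tree: the root is vertex 0 and every vertex
    on level k (0 <= k <= L - 1) gets C - k children."""
    levels = [[0]]          # levels[k] = list of vertices on level k
    children = {0: []}
    next_id = 1
    for k in range(L):
        new_level = []
        for v in levels[k]:
            for _ in range(C - k):      # C - k children per vertex on level k
                children[v].append(next_id)
                children[next_id] = []
                new_level.append(next_id)
                next_id += 1
        levels.append(new_level)
    return levels, children

L, C = 3, 5
levels, children = build_tree(L, C)

# Property (5): |l_k| = C! / (C - k)!
for k in range(L + 1):
    assert len(levels[k]) == factorial(C) // factorial(C - k)

# Property (4): vertices on level L - 1 have C - L + 1 children, all leaves
for v in levels[L - 1]:
    assert len(children[v]) == C - L + 1
    assert all(children[u] == [] for u in children[v])
```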
We can express the rational number H as:
H = (1/D) ∑_{t=1}^{r} Nt,
where D ≥ (r − 1)/(H − 1), 0 < Nt < D for all 1 ≤ t ≤ r − 1, and Nr = D. We then introduce the following closure operator. Consider r disjoint trees T1, . . . , Tr, where Tt is an (Lt := D − Nt, Ct := DH − Nt)-tree with root Rt for all 1 ≤ t ≤ r. Then, V is the set of all vertices of the r trees and:
cl(X) := V if there exists v such that c(v) ⊆ a(X), or if X ∩ Tt ≠ ∅ for all 1 ≤ t ≤ r; otherwise, cl(X) := a(X).

Lemma 5

The operator cl is a closure operator of rank r.

Proof

We first prove that this is indeed a closure operator. First of all, it is trivial to check that X ⊆ cl(X). Secondly, if X ⊆ Y, then we need to check that cl(X) ⊆ cl(Y). If cl(Y) = V, this is trivial; otherwise, cl(Y) = a(Y) ≠ V, and hence cl(X) = a(X) ⊆ cl(Y) by Property (2). Thirdly, we need to prove that cl is idempotent. Again, this is trivial if cl(X) = V; hence, let cl(X) = a(X) ≠ V. By definition, there exists t such that X ∩ Tt = ∅, and hence a(X) ∩ Tt = ∅. Furthermore, for any non-leaf v, there exists a child u of v that does not belong to a(X); then u does not belong to a(a(X)) = a(X), either. Therefore, we have cl(a(X)) = a(a(X)) = a(X) by Property (1), and hence cl(cl(X)) = cl(a(X)) = a(X) = cl(X).
We now prove that it has rank r. Since the set of roots has cardinality r and intersects all trees, the rank is at most r. Conversely, suppose cl(X) = V. Firstly, if there exists v such that c(v) ⊆ a(X), then |X| ≥ |c(v)| by Property (3); thus, |X| ≥ Ct − Lt + 1 = D(H − 1) + 1 ≥ r. Secondly, if X intersects all trees, then |X| ≥ r. Thirdly, if cl(X) = a(X), then a(X) = V and X intersects all trees; thus, |X| ≥ r.
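To make Lemma 5 concrete, the sketch below (Python; the instance r = 2, H = 3/2, D = 2 and all identifiers are our own illustrative choices) instantiates the construction with N1 = 1 and N2 = D = 2, so that T1 is a (1, 2)-tree and T2 a single vertex, and verifies the closure axioms and the rank exhaustively. As in the proof, the condition c(v) ⊆ a(X) is only applied to non-leaves, since leaves have empty child sets.

```python
from itertools import chain, combinations

# Instance with r = 2, H = 3/2, D = 2, hence N_1 = 1 and N_2 = D = 2:
# T_1 is an (L_1, C_1) = (1, 2)-tree and T_2 an (L_2, C_2) = (0, 1)-tree.
parent = {0: None, 1: 0, 2: 0, 3: None}   # T_1 = {0, 1, 2} rooted at 0; T_2 = {3}
trees = [frozenset({0, 1, 2}), frozenset({3})]
V = frozenset(parent)
children = {v: frozenset(u for u in V if parent[u] == v) for v in V}

def a(X):
    # Ancestry of X: every vertex of X together with all of its ancestors.
    anc = set()
    for v in X:
        while v is not None:
            anc.add(v)
            v = parent[v]
    return frozenset(anc)

def cl(X):
    A = a(X)
    # First case: some non-leaf v has all of its children inside a(X)
    # (leaves excluded: their empty child sets are trivially included).
    if any(children[v] and children[v] <= A for v in V):
        return V
    # Second case: X intersects every tree.
    if all(X & T for T in trees):
        return V
    return A

def subsets(s):
    s = sorted(s)
    return [frozenset(c) for c in chain.from_iterable(
        combinations(s, k) for k in range(len(s) + 1))]

P = subsets(V)
for X in P:
    assert X <= cl(X)                     # extensive
    assert cl(cl(X)) == cl(X)             # idempotent
    for Y in P:
        if X <= Y:
            assert cl(X) <= cl(Y)         # isotone

rank = min(len(X) for X in P if cl(X) == V)
assert rank == 2                          # rank r = 2, as Lemma 5 predicts
```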

Lemma 6

The entropy of cl is at most H.

Proof

The proof uses the submodular inequality recursively on all levels of a tree and then successively for all trees. More precisely, we shall use the following consequence of the submodular inequality: if r : 2^V → ℝ is submodular and X1, . . . , Xk are subsets of V, such that Xi ∩ Xj = X for all i ≠ j and ∪i Xi = Y, then:
r(Y) ≤ ∑_{i=1}^{k} r(Xi) − (k − 1) r(X).
Fix a coding function f for cl. For any non-leaf v, the submodular inequality above (with the sets X1, . . . , Xk being {u, v} for u ∈ c(v), so that X = {v} and Y = {v} ∪ c(v), and noting that Hf({u, v}) = Hf(u) since v ∈ a(u)) gives:
Hf(V) = Hf({v} ∪ c(v)) ≤ ∑_{u∈c(v)} Hf(u) − (|c(v)| − 1) Hf(v).
We first add up by level; for level k (0 ≤ k ≤ Lt) of tree Tt, we denote H_k := ∑_{v∈lk} Hf(v), and we obtain:
Ct!/(Ct − k)! · Hf(V) ≤ H_{k+1} − (Ct − k − 1) H_k,
Ct! (Ct − k − 2)!/(Ct − k)! · Hf(V) ≤ (Ct − k − 2)! H_{k+1} − (Ct − k − 1)! H_k,
where the second inequality follows from multiplying the first by (Ct − k − 2)!.
Let us now add up for all levels:
[∑_{k=0}^{Lt−1} Ct!/((Ct − k)(Ct − k − 1))] Hf(V) ≤ (Ct − Lt − 1)! H_{Lt} − (Ct − 1)! H_0 ≤ Ct!/(Ct − Lt) − (Ct − 1)! H_0,
where we used H_{Lt} ≤ |l_{Lt}| = Ct!/(Ct − Lt)!. Simplifying, we obtain:
Lt Hf(V) ≤ Ct − (Ct − Lt) H_0 = Ct − D(H − 1) Hf(Rt),
since, by definition, H_0 = Hf(Rt). We now add up over the trees T1 up to Tr−1 and obtain:
[D(r − 1) − D(H − 1)] Hf(V) ≤ DH(r − 1) − D(H − 1) − D(H − 1) ∑_{t=1}^{r−1} Hf(Rt),   (1)
where we used the following relations:
∑_{t=1}^{r−1} Nt = D(H − 1),   ∑_{t=1}^{r−1} Lt = D(r − 1) − D(H − 1),   ∑_{t=1}^{r−1} Ct = DH(r − 1) − D(H − 1).
Moreover, the set of all roots is a basis for the closure operator and Hf(Rr) ≤ 1; hence:
Hf(V) = Hf({R1, . . . , Rr}) ≤ 1 + ∑_{t=1}^{r−1} Hf(Rt).   (2)
Multiplying (2) by D(H − 1) and adding it with (1) eventually yields:
Hf(V) ≤ DH(r − 1)/(D(r − 1)) = H.
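The telescoping and simplification steps above can be sanity-checked numerically. The following sketch (Python; the specific values Ct = 6, Lt = 3 and the arbitrary test values h_k are our own) verifies that the level-sum coefficient equals (Ct − 1)! Lt/(Ct − Lt) and that the factorial-weighted sum telescopes as claimed:

```python
from math import factorial
from fractions import Fraction

C, L = 6, 3

# Coefficient in front of H_f(V): sum of C!/((C - k)(C - k - 1)) over k = 0..L-1.
S = sum(Fraction(factorial(C), (C - k) * (C - k - 1)) for k in range(L))
# Simplified closed form used in the proof: (C - 1)! L / (C - L).
assert S == Fraction(factorial(C - 1) * L, C - L)

# Telescoping: summing (C-k-2)! h_{k+1} - (C-k-1)! h_k over k = 0..L-1
# collapses to (C-L-1)! h_L - (C-1)! h_0, for any values h_0, ..., h_L.
h = [Fraction(3, 7), Fraction(1, 2), Fraction(5, 3), Fraction(2, 9)]  # arbitrary
total = sum(factorial(C - k - 2) * h[k + 1] - factorial(C - k - 1) * h[k]
            for k in range(L))
assert total == factorial(C - L - 1) * h[L] - factorial(C - 1) * h[0]
```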
We now construct the coding function with entropy H. Consider A = B^D, where B is any finite set of cardinality at least two. Any x ∈ A^r can be expressed as x = (x1, . . . , xrD) ∈ B^{rD}. For any subset S ⊆ {1, . . . , rD}, say S = {i1, . . . , i|S|} once sorted in increasing order, we define the partition gS of B^{rD} into exactly |B|^{|S|} parts of equal size as:
Py(gS) := {x ∈ B^{rD} : (x_{i1}, . . . , x_{i|S|}) = y},   y ∈ B^{|S|}.
We remark that H(gS) = |S|/D. We shall assign a partition fv := gS(v) to each vertex v; we only need to specify S(v) for all v. Denoting S(X) = ∪_{v∈X} S(v) for all X ⊆ V, we have Hf(X) = |S(X)|/D.
S(v) is defined recursively for all trees, level by level. The set S(Rt) for the root of tree Tt (1 ≤ t ≤ r) is given by:
S(Rt) = {1 + ∑_{s=1}^{t−1} Ns, . . . , ∑_{s=1}^{t} Ns}.
We denote Σ := {1, . . . , DH}. Then, for any non-leaf v, the sets corresponding to its children are obtained by adding one element of Σ to S(v), all added elements being distinct. That is, for all distinct u, u′ ∈ c(v), we have:
S(u) ⊆ Σ,   |S(u)| = |S(v)| + 1,   S(u) ∩ S(u′) = S(v).
Let v be a non-leaf on level k. Since |S(v)| = Nt + k and |c(v)| = Ct − k = DH − Nt − k = |Σ| − |S(v)|, we obtain S(c(v)) = Σ for all non-leaves v.

Lemma 7

The partitions f form a coding function for cl with entropy H.

Proof

Let us prove that it is indeed a coding function for cl. Since |S(v)| = Nt + k ≤ Nt + Lt = D for any v in level k of tree Tt, we obtain Hf(v) ≤ 1 for any v ∈ V. We then need to check that S(X) = S(cl(X)) for any subset X of vertices. We first remark that if v ∈ a(u), then S(v) ⊆ S(u); hence, S(X) = S(a(X)). This proves the claim when cl(X) = a(X). Otherwise, if c(v) ⊆ a(X), then Σ = S(c(v)) ⊆ S(a(X)) = S(X); if X intersects all trees, then Σ = S({R1, . . . , Rr}) ⊆ S(X). Finally, S(V) = Σ; hence, the entropy of f is equal to |Σ|/D = H.
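The labelling S(v) and the entropy computation can be replayed on a toy instance (r = 2, H = 3/2, D = 2; the concrete labels below follow the recursive rule but are otherwise our own illustrative choice), checking that S(X) = S(cl(X)) for every subset X and that the entropy equals H = 3/2:

```python
from itertools import chain, combinations
from fractions import Fraction

# Toy instance of the construction: r = 2, H = 3/2, D = 2, N_1 = 1, N_2 = 2,
# so T_1 is a (1, 2)-tree ({0} with children {1, 2}) and T_2 = {3}.
parent = {0: None, 1: 0, 2: 0, 3: None}
trees = [frozenset({0, 1, 2}), frozenset({3})]
V = frozenset(parent)
children = {v: frozenset(u for u in V if parent[u] == v) for v in V}
D = 2

# Labels S(v): roots get consecutive blocks of Sigma = {1, ..., DH} = {1, 2, 3};
# each child extends its parent's set by one fresh element, all distinct.
S = {0: {1}, 3: {2, 3}, 1: {1, 2}, 2: {1, 3}}

def a(X):
    # Ancestry of X: every vertex of X together with all of its ancestors.
    anc = set()
    for v in X:
        while v is not None:
            anc.add(v)
            v = parent[v]
    return frozenset(anc)

def cl(X):
    A = a(X)
    if any(children[v] and children[v] <= A for v in V):
        return V
    if all(X & T for T in trees):
        return V
    return A

def S_of(X):
    return set().union(*(S[v] for v in X)) if X else set()

def H_f(X):
    # Entropy of the joint partition, in units of log|A| with A = B^D.
    return Fraction(len(S_of(X)), D)

P = [frozenset(c) for c in chain.from_iterable(
    combinations(sorted(V), k) for k in range(len(V) + 1))]
for X in P:
    assert S_of(X) == S_of(cl(X))      # f is a coding function for cl
assert all(H_f({v}) <= 1 for v in V)   # each vertex carries at most one symbol of A
assert H_f(V) == Fraction(3, 2)        # entropy equals H = 3/2
```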

5. Solvability of Operators

Since the closure solvability problem generalises several problems, we may be tempted to generalise it even further by considering operators that are not necessarily closures. In this section, we justify why we only need to focus on the closure solvability problem. Let us consider the most general way of defining the solvability problem.

Definition 10

Let V be a finite set of n elements, a : 2^V → 2^V, and let A, B be finite sets (A is referred to as the alphabet, |A| ≥ 2). A coding function for (a, A, B) is a tuple f = (f1, . . . , fn) of n partitions of B, each into at most |A| parts, such that fa(X) = fX for all X ⊆ V.
We say that a, a′ : 2^V → 2^V are equivalent if any tuple of partitions f is a coding function for a if and only if it is a coding function for a′.

Theorem 5

Let a : 2^V → 2^V; then, there exists a closure operator on V that is equivalent to a. Therefore, the solvability problem for a can be reduced to the solvability problem of some closure operator.

Proof

We take three steps. Firstly, construct the digraph on 2^V with arcs (Y, a(Y)) for all Y ⊆ V. For any X ⊆ V, denote the connected component containing X as C(X). Then, we claim that b(X) := ∪_{Y∈C(X)} a(Y) is equivalent to a (we note that b is extensive). Indeed, if f is a coding function for a, then fX = fa(X). Hence, for any Y ∈ C(X), fY = fX, and we obtain fb(X) = fX. Conversely, we have b(X) = b(a(X)); hence, if f is a coding function for b, then fX = fb(X) = fb(a(X)) = fa(X) for all X.
Secondly, we claim that c(X) := ∪_{Y⊆X} b(Y) is equivalent to b (we note that c is extensive and isotone). Indeed, if f is a coding function for b and Y ⊆ X, then fX refines fY = fb(Y); thus, fX refines fc(X). The converse refinement is immediate; hence, fX = fc(X), and f is a coding function for c. Conversely, if f is a coding function for c, then fX = fc(X) refines fb(X) and, hence, is equal to fb(X) for all X.
Thirdly, we claim that cl(X) := c^n(X) is equivalent to c (we note that cl is a closure operator). Indeed, if f is a coding function for c, then fX = fc(X) = · · · = fc^n(X). Conversely, fX = fcl(X) refines fc(X).
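The three-step construction of the proof can be exercised on a small example. The sketch below (Python; the random map a and the union-find representation are illustrative, and we add X explicitly into c(X) so that the operator is extensive even for an arbitrary a) builds b, c and cl = c^n and checks the closure axioms:

```python
from itertools import chain, combinations
import random

random.seed(1)
n = 3
V = frozenset(range(n))
P = [frozenset(c) for c in chain.from_iterable(
    combinations(range(n), k) for k in range(n + 1))]

# An arbitrary map a : 2^V -> 2^V (not necessarily extensive or isotone).
a = {X: random.choice(P) for X in P}

# Step 1: components of the digraph on 2^V with arcs (Y, a(Y)),
# computed with a small union-find over the underlying undirected graph.
comp = {X: X for X in P}
def find(X):
    while comp[X] != X:
        X = comp[X]
    return X
for Y in P:
    comp[find(Y)] = find(a[Y])

def b(X):
    # Union of a(Y) over the component C(X).
    members = [Y for Y in P if find(Y) == find(X)]
    return frozenset().union(*(a[Y] for Y in members))

def c(X):
    # Union of b(Y) over all Y contained in X; X itself is included
    # to keep the operator extensive for an arbitrary map a.
    return X | frozenset().union(*(b(Y) for Y in P if Y <= X))

def cl(X):
    # Iterating the extensive, isotone map c at most n times
    # forces idempotence, yielding a closure operator.
    for _ in range(n):
        X = c(X)
    return X

for X in P:
    assert X <= cl(X)                 # extensive
    assert cl(cl(X)) == cl(X)         # idempotent
    for Y in P:
        if X <= Y:
            assert cl(X) <= cl(Y)     # isotone
```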

6. Conclusions

In this paper, we pursued the study of closure solvability introduced in [8] for network coding and secret sharing.
We first investigated the nature of solvable closure operators. This yielded numerous new definitions (four different kinds of ranks) and a new criterion for non-solvability (Theorem 3). In passing, we gave two new axioms for matroids in Theorems 1 and 2.
We then introduced the entropy of a closure operator and thoroughly investigated its properties. We were able to define the equivalent of the Shannon entropy of a graph. This yielded Theorem 4, which shows that the set of entropy values of closure operators contains all rational numbers above one. However, it is easy to show that there are gaps between the entropy values of undirected graphs; for instance, the only possible value between two and three is 2.5. For directed graphs, we still do not know whether the set of entropy values is dense in [1, ∞).

Acknowledgments

The author would like to thank the anonymous reviewers for their interesting comments and suggestions.
This work is partially supported by EPSRC grant EP/K033956/1.

Conflicts of Interest

The author declares no conflict of interest.

References

  1. Ahlswede, R.; Cai, N.; Li, S.Y.R.; Yeung, R.W. Network information flow. IEEE Trans. Inf. Theory 2000, 46, 1204–1216. [Google Scholar]
  2. Li, S.Y.R.; Yeung, R.W.; Cai, N. Linear Network Coding. IEEE Trans. Inf. Theory 2003, 49, 371–381. [Google Scholar]
  3. Riis, S. Linear versus non-linear Boolean functions in Network Flow. Proceedings of 38th Annual Conference on Information Science and Systems (CISS), Princeton, NJ, USA, 17–19 March 2004.
  4. Dougherty, R.; Freiling, C.; Zeger, K. Insufficiency of linear coding in network information flow. IEEE Trans. Inf. Theory 2005, 51, 2745–2759. [Google Scholar]
  5. Kötter, R.; Médard, M. An Algebraic Approach to Network Coding. IEEE/ACM Trans. Netw 2003, 11, 782–795. [Google Scholar]
  6. Ho, T.; Médard, M.; Kötter, R.; Karger, D.R.; Effros, M.; Shi, J.; Leong, B. A random linear network coding approach to multicast. IEEE Trans. Inf. Theory 2006, 52, 4413–4430. [Google Scholar]
  7. Riis, S.; Ahlswede, R. Problems in network coding and error-correcting codes. Proceedings of the First Workshop on Network Coding, Theory, and Applications, Riva del Garda, Italy, 7 April 2005.
  8. Gadouleau, M. Closure solvability for network coding and secret sharing. IEEE Trans. Inf. Theory 2013, 59, 7858–7869. [Google Scholar]
  9. Riis, S. Utilising public information in network coding. In General Theory of Information Transfer and Combinatorics; Ahlswede, R., Baumer, L., Cai, N., Aydinian, H., Blinovsky, V., Deppe, C., Mashurian, H., Eds.; Lecture Notes in Computer Science, Volume 4123; Springer: Berlin/Heidelberg, Germany, 2006; pp. 866–897. [Google Scholar]
  10. Riis, S. Information flows, graphs and their guessing numbers. Electron. J. Comb 2007, 14, 1–17. [Google Scholar]
  11. Körner, J. Coding of an information source having ambiguous alphabet and the entropy of graphs. Proceedings of Transactions of the 6th Prague Conference on Information Theory, Statistical Decision Function, Random Processes, Prague, Czech Republic, 19–25 September 1971; pp. 411–425.
  12. Lovász, L. On the Shannon Capacity of a Graph. IEEE Trans. Inf. Theory 1979, 25, 1–7. [Google Scholar]
  13. Gadouleau, M.; Riis, S. Graph-Theoretical Constructions for Graph Entropy and Network Coding Based Communications. IEEE Trans. Inf. Theory 2011, 57, 6703–6717. [Google Scholar]
  14. Christofides, D.; Markström, K. The Guessing Number of Undirected Graphs. Electron. J. Comb 2011, 18, 1–19. [Google Scholar]
  15. Brickell, E.F.; Davenport, D.M. On the Classification of Ideal Secret Sharing Schemes. J. Cryptol 1991, 4, 123–134. [Google Scholar]
  16. Matúš, F. Matroid representations by partitions. Discret. Math 1999, 203, 169–194. [Google Scholar]
  17. MacWilliams, F.J.; Sloane, N.J.A. The Theory of Error-Correcting Codes; North-Holland: Amsterdam, The Netherlands, 1977. [Google Scholar]
  18. Birkhoff, G. Lattice Theory; American Mathematical Society: Providence, RI, USA, 1948. [Google Scholar]
  19. Oxley, J.G. Matroid Theory; Oxford University Press: Oxford, UK, 2006. [Google Scholar]
  20. Batten, L.M. Rank functions of closure spaces of finite rank. Discret. Math 1984, 49, 113–116. [Google Scholar]
  21. Gadouleau, M.; Goupil, A. A Matroid Framework for Noncoherent Random Network Communications. IEEE Trans. Inf. Theory 2011, 57, 1031–1045. [Google Scholar]
Figure 1. Example where the inner rank is not monotonic.
Figure 2. The graph 4 whose closure operator is solvable, but not a matroid.
Figure 3. The graph 5 whose closure operator is outer complemented and not solvable.
