Article

An Algebraic Theory of Information: An Introduction and Survey

1 Department of Informatics, University of Fribourg, Bvd. de Perolles 90, CH-1700 Fribourg, Switzerland
2 Mathematical Institute, University of Bern, Sidlerstrasse 5, CH-3012 Bern, Switzerland
* Author to whom correspondence should be addressed.
Information 2014, 5(2), 219-254; https://doi.org/10.3390/info5020219
Submission received: 22 November 2013 / Revised: 8 April 2014 / Accepted: 8 April 2014 / Published: 10 April 2014
(This article belongs to the Section Information Theory and Methodology)

Abstract: This review examines some particular, but important and basic aspects of information: Information is related to questions and should provide at least partial answers. Information comes in pieces, and it should be possible to aggregate these pieces. Finally, it should be possible to extract that part of a piece of information which relates to a given question. Modeling these concepts leads to an algebraic theory of information. This theory centers around two different but closely related types of information algebras, each containing operations for aggregation or combination of information and for extracting information relevant to a given question. Generic constructions of instances of such algebras are presented. In particular, the close connection of information algebras to logic and domain theory will be exhibited.

1. Introduction: Modeling Information

The approach to the concept of information proposed here is based on the view that information comes in pieces, that information relates to questions and that it provides at least partial answers to questions. Since information comes in pieces, it should be aggregated or combined. It must be possible to extract from any piece of information the part pertaining to a given question, to focus the information on a given question. This gives rise to algebraic structures capturing these aspects of information. The presentation and discussion of these structures is the aim of this survey.
In fact, we propose two different types of algebraic structures, each one emphasizing a different point of view, but shown to be essentially equivalent. So each of these two types may be used at convenience. The first type starts from the idea that each piece of information is associated with a specified question, a question it answers at least partially. The piece of information refers to the associated question by its label. Two or more pieces of information can be combined into a new piece of information whose label is determined by the labels of the pieces to be combined. This represents aggregation of information. Further, a piece of information can be transported or projected to another question. This models extraction of information. These ideas are captured by algebraic structures with the two basic operations of combination and transport (or projection). Such structures are called labeled information algebras. Section 2 is devoted to an introduction and a first discussion of such algebras.
Alternatively, one may also consider pieces of information as abstract entities, unrelated to particular questions. Again such pieces of information can be combined to aggregate them into a new piece of information. Further, it must be possible to extract the part relevant to any given question by extraction operations which associate with a piece of information other pieces representing its extracted parts. Again, we obtain algebraic structures with two basic operations, namely combination and extraction. Such structures are called domain-free information algebras. They are presented in Section 3.
Both types of algebraic structures are intimately related. They represent essentially two different views of the same subject, namely the concept of information, and we shall show that from each of the two types it is possible to pass to the other one and back: They represent two sides of the same coin. Each point of view has its own merits: The labeled algebras originated as a structure supporting a certain computational approach and are better suited for modeling and computational purposes, while the domain-free algebras are better suited for theoretical investigations. According to our purpose, we may thus work with one or the other type at our convenience.
There are many instances or models of information algebras. Some of them will be presented in this paper. Furthermore there are different general, generic ways to construct whole classes of such algebras. In particular, systems of mathematical logic provide important sources of information algebras, illustrating the close relationship between logic and information. Information algebras appear in this respect as objects of algebraic logic, adding some new features to this field. This is the subject of Section 4.
The algebraic structures presented here, especially the labeled ones, are based on a proposal for certain axiomatic algebraic structures in [1,2]. These structures, in turn, were proposed with the goal of generalizing a successful computation scheme, presented in [3] for Bayesian networks. The algebraic structures proposed in [2] are now called valuation algebras [2,4,5,6]. So, valuation algebras provide a basis for generic, so-called local computation schemes [7]. Information algebras, in our sense, add however one important concept to valuation algebras, namely idempotency: The combination of a piece of information with itself or part of itself gives nothing new. This allows us to introduce an order between pieces of information, reflecting their information content. Order is an essential element of the present theory of information algebras, representing a very important feature of any concept of “information”.
With this order structure, information algebras become related to domain theory. Domain theory essentially studies information order with the goal of capturing some aspects of computation and of the semantics of programming languages. By introducing the concept of finite information pieces, information algebras as ordered structures become in fact domains. They are richer, however, because of the operations of combination and especially information extraction. It turns out that many results and concepts of domain theory carry over to information algebras. This is discussed in Section 5.
In the concluding Section 6, an outlook on subjects not treated in detail in this survey and on open questions in the theory of information algebras is given. So far, only a few publications on information algebras are available. Most of this survey is based on Chapter 6 of the book [6] and on [8]. A related paper is [9]. Our hope is that this survey may attract some interest in this new and promising theory of important aspects of information.

2. Labeled Algebras

2.1. Modeling Questions

According to the introductory section, we need to consider two types of elements in a labeled information algebra: pieces of information and questions. We start by addressing the concept of questions. First, there are, in general, many questions of interest in a given situation. Further, questions have a certain granularity. A question may be refined, made more precise, or on the contrary coarsened, made less precise. In addition, a question may be the refinement respectively the coarsening of several other questions. This can be captured by an ordering of questions where x ≤ y means that question x is coarser than y or y finer than x. So, we consider a family Q of elements thought to represent questions, and we assume that (Q, ≤) is a partially ordered set, that is, ≤ satisfies:
(1)
Reflexivity: x ≤ x for all x ∈ Q.
(2)
Antisymmetry: x ≤ y and y ≤ x imply x = y.
(3)
Transitivity: x ≤ y and y ≤ z imply x ≤ z.
The following example is based on the idea that a question may be identified with the set of all its possible answers. Let {X_i : i ∈ N} be a countable set of variables, and fix, for all i ∈ N, a set of values V_i which X_i may take (the sets V_i need not be distinct). For any I ⊆ N the set ∏_{i∈I} V_i represents all possible answers to the question “which combinations of values of the variables X_i with i ∈ I occur?” The elements of ∏_{i∈I} V_i, or more precisely, the maps s from I to ⋃_{i∈I} V_i satisfying s(i) ∈ V_i for all i ∈ I, are called I-tuples with components in V_i.
Write Q for the collection of all such questions ∏_{i∈I} V_i, I ⊆ N. It is natural to call a question I coarser than a question J if I ⊆ J. This imposes a partial order ≤ on Q. In this way, (Q, ≤) becomes a distributive lattice, where the lattice operations of meet (∧) and join (∨) are given by
∏_{i∈I} V_i ∧ ∏_{i∈J} V_i = ∏_{i∈I∩J} V_i,  ∏_{i∈I} V_i ∨ ∏_{i∈J} V_i = ∏_{i∈I∪J} V_i
So (Q, ≤) is isomorphic to the power set P(N), ordered by set inclusion. A variant of this example is obtained by admitting only finite sets I ⊆ N. The result is again a distributive lattice, although without a greatest element. We call this important model for a system of questions a multivariate system for further reference.
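To make the multivariate system concrete, here is a minimal Python sketch (all names are illustrative, not from the paper) representing questions by their index sets, with meet and join given by intersection and union:

from itertools import product

# A question in the multivariate system is identified by its index set I;
# its possible answers are the I-tuples over the frames V_i.
def meet(I, J):
    return I & J          # common coarsening: intersection of index sets

def join(I, J):
    return I | J          # combined question: union of index sets

def answers(I, V):
    """Enumerate all I-tuples (possible answers) as dicts i -> value."""
    idx = sorted(I)
    return [dict(zip(idx, vals)) for vals in product(*(V[i] for i in idx))]

V = {1: {'a', 'b'}, 2: {0, 1}, 3: {'x'}}
I, J = frozenset({1, 2}), frozenset({2, 3})
print(meet(I, J), join(I, J))        # frozenset({2}) frozenset({1, 2, 3})
print(len(answers(join(I, J), V)))   # 2 * 2 * 1 = 4 possible answers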
It is convenient to assume generally that (Q, ≤) is a lattice, although not necessarily a distributive one. The join of two questions x, y ∈ Q, written x ∨ y, is the coarsest question finer than both x and y. It represents the combined question of x and y. The meet, written x ∧ y, is the finest question coarser than both x and y. It represents the common part of the questions x and y. A nondistributive model for a set of questions is given by the partitions of some universe U. The blocks of a partition 𝒫 are the possible answers to the question represented by the partition. A partition 𝒫₁ is finer than a partition 𝒫₂ if every block of 𝒫₁ is contained in some block of 𝒫₂. We write 𝒫₂ ≤ 𝒫₁ in this case. Note that this is the reverse of the order usually considered when comparing partitions [10]. However, in our case this order makes sense semantically, since 𝒫₁ has more possible answers than 𝒫₂, and any answer to 𝒫₁ determines also an answer to 𝒫₂, but not vice versa. The join of two partitions in this order is defined by
𝒫₁ ∨ 𝒫₂ = {P₁ ∩ P₂ : P₁ ∈ 𝒫₁, P₂ ∈ 𝒫₂}
The meet is a bit more difficult to define; we abstain from giving a definition here, see [10]. The lattice Part(U) of partitions of a universe U is not distributive except in trivial cases. In most cases sublattices of the lattice Part(U) are considered, which generally are nondistributive too.
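For concreteness, the join of two partitions (blockwise intersection, as in the formula above) is easy to compute; a small Python sketch with illustrative names, keeping only the nonempty intersections as blocks:

def partition_join(p1, p2):
    """Common refinement of two partitions of the same universe:
    all nonempty pairwise intersections of their blocks."""
    return {b1 & b2 for b1 in p1 for b2 in p2 if b1 & b2}

p1 = {frozenset({0, 1, 2}), frozenset({3, 4, 5})}   # one question about U
p2 = {frozenset({0, 3}), frozenset({1, 4}), frozenset({2, 5})}
print(partition_join(p1, p2))   # six singleton blocks: the combined question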
Therefore, in the following, we assume (Q, ≤) to be a lattice. According to the examples above, questions are often represented by the set of their possible answers, which constitutes the domain of the information pertaining to them. Therefore we sometimes also speak of domains instead of questions. Next we consider pieces of information in a labeled view, that is, pieces of information referring to specified questions, hence to elements of Q.

2.2. Labeled Information

Let Ψ be a set of elements, thought to represent labeled pieces of information, with generic elements denoted by small Greek letters ψ, ϕ, …. We assume that each of these pieces of information pertains to a unique question in a lattice (Q, ≤) whose elements will be denoted by lower case letters like x, y, …. That is, each element of Ψ is labeled with the question it refers to. Pieces of information can be combined to give new aggregated pieces of information. Further, any piece of information can be projected to a question coarser than the question it refers to. More formally, three operations are defined on the pair (Ψ, Q):
(1)
Labeling: d : Ψ → Q; ψ ↦ d(ψ).
(2)
Combination: · : Ψ × Ψ → Ψ; ψ, ϕ ↦ ψ · ϕ.
(3)
Projection: π : Ψ × Q → Ψ; ψ, x ↦ π_x(ψ), defined for x ≤ d(ψ).
The labeling operation retrieves the question d(ψ) ∈ Q the information ψ pertains to. The combination operation returns the aggregated piece of information ψ · ϕ of the two elements ψ and ϕ. Projection finally represents the operation of extracting the part of the information which pertains to a coarser question than the original one. It is a partial operation, defined only for pairs ψ, x such that x ≤ d(ψ). Thus, we have a two-sorted algebra (Ψ, Q; d, ·, π) with a unary and two binary operations, one of them partial. We collect the elements of Ψ having the same label into sets Ψ_x = {ψ ∈ Ψ : d(ψ) = x} for every element x ∈ Q. Thus
Ψ = ⋃_{x∈Q} Ψ_x
We impose the following axioms:
(1)
Semigroup: (Ψ; ·) is a commutative semigroup.
(2)
Labeling: For all ψ, ϕ ∈ Ψ and x ∈ Q such that x ≤ d(ψ),
d(ψ · ϕ) = d(ψ) ∨ d(ϕ),  d(π_x(ψ)) = x
(3)
Unit and Null: For all x ∈ Q there exist unit and null elements 1_x and 0_x in Ψ_x such that 1_x · ψ = ψ · 1_x = ψ and 0_x · ψ = ψ · 0_x = 0_x for all ψ with d(ψ) = x. Further, for all y ≤ x, π_y(1_x) = 1_y and 0_y · 1_x = 0_x.
(4)
Projection: For all ψ ∈ Ψ and x, y ∈ Q such that x ≤ y ≤ d(ψ),
π_x(π_y(ψ)) = π_x(ψ)
(5)
Combination: For all ψ, ϕ ∈ Ψ, if d(ψ) = x and d(ϕ) = y, then
π_x(ψ · ϕ) = ψ · π_{x∧y}(ϕ)
(6)
Idempotency: For all ψ ∈ Ψ and x ∈ Q such that x ≤ d(ψ),
ψ · π_x(ψ) = ψ
An algebra (Ψ, Q; d, ·, π) satisfying these axioms is called a labeled information algebra.
The Semigroup axiom states that pieces of information can be combined in any order without changing the resulting information. As a consequence, we do not need to write parentheses when combining several pieces of information. The Labeling axiom says that the combined information refers to the combined question of the questions of the factors to be combined. Similarly, if a piece of information is projected to a coarser question, the result refers to this coarser question. The axiom of Unit and Null means, together with the Semigroup and Labeling axioms, that all Ψ_x are subsemigroups of Ψ, each having unit and null elements. Unit elements represent vacuous information with respect to the corresponding questions: Combining a unit with any other information relating to the same question does not change this latter information. Unit elements project to unit elements. Null elements represent contradiction, destroying any other information. According to the Projection axiom, a projection may be carried out in two (or several) steps. The most powerful axiom is Combination. It makes it possible to avoid large domains when a combination is followed by a projection. This is important for computations, but not exclusively. Idempotency finally tells us that combining a piece of information with a part (a projection) of itself gives nothing new. This important property of information allows us to order pieces of information and permits the development of an order theory for information algebras, as we shall see later.
As an example, let us consider the multivariate model of questions introduced above. The lattice of questions is given by the lattice of subsets of variables. A piece of information R with label d(R) = I is in this model an arbitrary set of I-tuples, with I finite (see Section 2.1). Note the similarity to the relational algebra of relational database theory, where a subset of I-tuples is called a relation over I. So the elements of Ψ are relations over sets of variables I, or more precisely, pairs ψ = (R, I), where R is a relation over I. The label of a relation ψ over I is just I, d(ψ) = I. Combination is essentially relational join. In order to define this operation, define the label of an I-tuple s to be d(s) = I and the projection π_J of I-tuples to J-tuples with J ⊆ I to be the restriction of the map s : I → ⋃_{i∈I} V_i to J, that is, π_J(s)(j) = s(j) for j ∈ J. The relational join of two relations R and S over I and J respectively is then defined as
R ⋈ S = {s : d(s) = I ∪ J, π_I(s) ∈ R, π_J(s) ∈ S}
Then, if ψ = (R, I) and ϕ = (S, J), we define ψ · ϕ = (R ⋈ S, I ∪ J). This clearly defines a commutative semigroup; the null element on domain I is the pair (∅, I), and the unit element is the pair (T_I, I), where T_I denotes the set of all I-tuples. Projection is based on relational projection. If R is a relation over I and J ⊆ I, then relational projection is defined as
π_J(R) = {π_J(s) : s ∈ R}
Projection in Ψ is then defined for ψ = (R, I) by π_J(ψ) = (π_J(R), J). It can be verified that (Ψ, Q; d, ·, π) with labeling, combination and projection as just defined satisfies the axioms of a labeled information algebra. If we restrict ourselves to finite subsets of variables, then we still obtain a labeled information algebra, in fact, a subalgebra of the original one.
The pair notation is cumbersome; it is only needed to distinguish the empty relations containing no I-tuples on different domains. Once this is understood, we may drop the pair notation and just work with relations R, S, …, their domains being implicitly defined. In this case we denote the pair (∅, I) simply by ∅_I and call it the empty set on domain I. We refer to this example as a relational information algebra. Many labeled information algebras are subalgebras of a relational information algebra, especially with variables taking real numbers as values, see [6], as for instance the information algebras of convex sets, convex polyhedra or affine subspaces of linear spaces.
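The following Python sketch (illustrative names, not the paper's notation) implements this relational information algebra on labeled pairs, with tuples represented as frozen sets of (variable, value) items:

def rel(dicts):
    """Freeze dict-tuples into a hashable relation."""
    return {frozenset(d.items()) for d in dicts}

def combine(psi, phi):
    """Combination = natural join; the label is the union of the labels."""
    (R, I), (S, J) = psi, phi
    out = set()
    for s in R:
        for t in S:
            ds, dt = dict(s), dict(t)
            if all(ds[v] == dt[v] for v in ds.keys() & dt.keys()):
                out.add(frozenset({**ds, **dt}.items()))
    return out, I | J

def project(psi, J):
    """Projection of (R, I) to a coarser domain J <= I."""
    R, I = psi
    assert J <= I
    return {frozenset((v, x) for v, x in t if v in J) for t in R}, J

# Combine information about (x, y) with information about (y, z):
psi = (rel([{'x': 1, 'y': 2}, {'x': 3, 'y': 4}]), frozenset('xy'))
phi = (rel([{'y': 2, 'z': 5}]), frozenset('yz'))
chi = combine(psi, phi)              # single joint tuple x=1, y=2, z=5
print(project(chi, frozenset('x')))  # ({frozenset({('x', 1)})}, frozenset({'x'}))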
As a second example, let us consider the lattice (Q, ≤) of partitions of a universe U, discussed in the previous subsection. Pieces of information there refer to a partition 𝒫 as label. Similarly as for relational information algebras, we take as elements of Ψ pairs ψ = (S, 𝒫), where S is a subset of blocks of the partition 𝒫. The label d(ψ) of the piece of information ψ is 𝒫. The idea here is that an unknown element of the universe U represents the state of a system or the actual world out of a set of possible worlds. A partition 𝒫 represents a question about the unknown state. A set of blocks S is a (partial) answer to the question which block P of the partition 𝒫 contains the unknown state. In this sense ψ is a piece of information about the question represented by the partition 𝒫. Such situations arise often in expert systems, especially diagnostic systems, where faults for instance may be decomposed or refined into finer descriptions. The partitions of a universe U form a lattice Part(U) as we have seen in Section 2.1. However, we consider only sublattices 𝒮 of Part(U) satisfying an additional condition. For two elements m, n ∈ U we write m ≡_𝒫 n if they belong to the same block of the partition 𝒫. Then we require, for any pair of partitions 𝒫 and 𝒬 in the sublattice 𝒮, that
m ≡_{𝒫∧𝒬} n  ⇔  ∃ l : m ≡_𝒫 l, l ≡_𝒬 n
This condition is necessary and sufficient for the subset algebra defined below to become an information algebra; a general partition lattice does not satisfy it.
Let now Φ denote the family of subsets of blocks of partitions of a sublattice 𝒮 satisfying Equation (6). Then we define combination and projection similarly as in the case of a relational algebra: If 𝒬 is a partition coarser than 𝒫, and P a block of 𝒫, then define π_𝒬(P) = Q, where Q is the block of 𝒬 which contains P. The label d(P) of a block of a partition 𝒫 is just 𝒫. The combination of two pieces of information ψ = (S, 𝒫) and ϕ = (T, 𝒬) is then based on a kind of generalized relational join,
S ⋈ T = {P : d(P) = 𝒫 ∨ 𝒬, π_𝒫(P) ∈ S, π_𝒬(P) ∈ T}
by ψ · ϕ = (S ⋈ T, 𝒫 ∨ 𝒬). Projection of a piece of information ψ = (S, 𝒫) to a partition 𝒬 coarser than 𝒫 is based on a generalization of relational projection:
π_𝒬(S) = {π_𝒬(P) : P ∈ S}
by π_𝒬(ψ) = (π_𝒬(S), 𝒬). The condition in Equation (6) is sufficient, and in fact also necessary, for (Φ, 𝒮; d, ·, π) to satisfy the axioms of a labeled information algebra; the resulting algebra is called a subset information algebra. We shall also see later that it is no coincidence that this example resembles relational information algebras so much (see Section 4.1).
To complete the picture let’s consider a distributive lattice (L, ≤_L) with least element 0_L and greatest element 1_L. Meet and join in L are denoted by ∧_L and ∨_L. Further consider any index set I and an associated indexed family of finite sets Ω_i for i ∈ I. For any finite subset s of I form the Cartesian product Ω_s = ∏_{i∈s} Ω_i. The generic elements of Ω_s are denoted by x, y, …. These elements can also be considered as maps x from s into ⋃_{i∈s} Ω_i such that x(i) ∈ Ω_i, that is, as s-tuples. If t ⊆ s, then let π_t(x) denote the restriction of x to t. Let (Q, ≤) = (P(I), ⊆) be the lattice of all subsets of I ordered under inclusion.
Lattice valuations on s are maps ψ : Ω_s → L. Denote the set of all such lattice valuations on s by Ψ_s and define
Ψ = ⋃_{s⊆I} Ψ_s
Consider the following operations:
(1)
Labeling: For ψ ∈ Ψ_s, let d(ψ) = s.
(2)
Combination: For ψ, ϕ ∈ Ψ with d(ψ) = s, d(ϕ) = t, the combination ψ · ϕ is defined pointwise for all x ∈ Ω_{s∪t} as
(ψ · ϕ)(x) = ψ(π_s(x)) ∧_L ϕ(π_t(x))
(3)
Projection: For ψ ∈ Ψ with d(ψ) = s and t ⊆ s, the projection π_t(ψ) is defined for all x ∈ Ω_t as
π_t(ψ)(x) = ∨_L {ψ(z) : z ∈ Ω_s, π_t(z) = x}
This defines a labeled information algebra, see [11]. The vacuous information on domain s is the lattice valuation 1_s(x) = 1_L and the null element is 0_s(x) = 0_L for all x ∈ Ω_s. This is called a Lattice Induced Information Algebra. For the choice of L we have many possibilities. For instance we may take the Boolean lattice {0, 1} with the natural order 0 ≤ 1. Here lattice valuations represent indicator functions, so this gives an algebra similar to a subset information algebra. Secondly, we may take for L the real interval [0, 1] with its natural order. Meet is then minimum and join maximum. Lattice valuations here can be thought of as representing fuzzy sets or possibility distributions. Combination becomes
(ψ · ϕ)(x) = min{ψ(π_s(x)), ϕ(π_t(x))}
and projection is defined as
π_t(ψ)(x) = max{ψ(z) : z ∈ Ω_s, π_t(z) = x}
This corresponds to widely used definitions of fuzzy set intersection and fuzzy set projection. These are examples of generic constructions of labeled information algebras. We shall see more such generic constructions later (Section 4).
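As a sketch of this fuzzy instance (illustrative names and frames, finite frames assumed), valuations map tuples to degrees in [0, 1], with combination by min and projection by max:

from itertools import product

OMEGA = {'x': ('a', 'b'), 'y': (0, 1)}    # assumed finite frames Omega_i

def tuples(dom):
    ks = sorted(dom)
    return [dict(zip(ks, vs)) for vs in product(*(OMEGA[k] for k in ks))]

def key(z, dom):
    return frozenset((k, z[k]) for k in dom)

def combine(psi, s, phi, t):
    """(psi . phi)(z) = min(psi(z restricted to s), phi(z restricted to t))."""
    u = s | t
    return {key(z, u): min(psi[key(z, s)], phi[key(z, t)]) for z in tuples(u)}, u

def project(psi, s, t):
    """pi_t(psi)(x) = max of psi(z) over all z extending x, for t <= s."""
    out = {}
    for z in tuples(s):
        out[key(z, t)] = max(out.get(key(z, t), 0.0), psi[key(z, s)])
    return out, t

psi = {key({'x': 'a'}, {'x'}): 0.9, key({'x': 'b'}, {'x'}): 0.3}
phi = {key({'y': 0}, {'y'}): 1.0, key({'y': 1}, {'y'}): 0.5}
chi, dom = combine(psi, {'x'}, phi, {'y'})
print(project(chi, dom, {'y'})[0])   # y=0 -> 0.9, y=1 -> 0.5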

2.3. Transport of Information

The projection or extraction operator of a labeled information algebra is only defined as a partial map. We shall now extend it to a total map in two steps. So let (Ψ, Q; d, ·, π) be a labeled information algebra. Consider an element ψ ∈ Ψ and a question x ≥ d(ψ) = y. Define e_x(ψ) = ψ · 1_x. This is an information element on domain x (according to the Labeling axiom). If we project it back to the original domain y of ψ, we obtain by the Combination and Unit axioms that π_y(ψ · 1_x) = ψ · π_y(1_x) = ψ · 1_y = ψ. So the extension of ψ to domain x by e_x(ψ) can be inverted without loss of information. Also, the combination of ψ with the vacuous information 1_x only changes the domain, but does not add information. Therefore the extension operation e_x is called vacuous extension. It is still a partial operation.
Still assume d(ψ) = y, but consider now an arbitrary element x ∈ Q. Then we define a transport operation t_x by
t_x(ψ) = π_x(e_{x∨y}(ψ))
That is, we extend ψ to the larger domain x ∨ y and then project the result back to domain x. Clearly this operation extracts the information pertaining to question x from ψ. The map t : Ψ × Q → Ψ defined by ψ, x ↦ t_x(ψ) is called the transport operation, since it transports ψ to domain x. It is a total map. Evidently, if x ≤ y = d(ψ), then t_x reduces to the projection π, so it is indeed an extension of the partial map π. If x ≥ d(ψ) = y, then it reduces to vacuous extension; it is therefore also an extension of vacuous extension.
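In a relational instance, transport is literally "extend vacuously to the join of the domains, then project back". A small self-contained Python sketch (illustrative names; finite frames assumed so that vacuous extension can be enumerated):

from itertools import product

FRAMES = {'x': (0, 1), 'y': (0, 1), 'z': (0, 1)}   # assumed finite frames

def extend(R, I, J):
    """Vacuous extension e_J for I <= J: every value allowed on J - I."""
    extra = sorted(J - I)
    return {t | frozenset(zip(extra, vs))
            for t in R for vs in product(*(FRAMES[v] for v in extra))}

def project(R, J):
    return {frozenset((v, x) for v, x in t if v in J) for t in R}

def transport(R, I, x):
    """t_x(R) = pi_x(e_{x join I}(R)): extend to the join, project back to x."""
    return project(extend(R, I, I | x), x)

R = {frozenset({('x', 0), ('y', 0)}), frozenset({('x', 1), ('y', 1)})}
print(len(transport(R, {'x', 'y'}, {'y', 'z'})))   # 4: y constrained, z free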
There are alternative, equivalent ways to define transport. Using the axioms of a labeled information algebra, it follows (see [6]) that for any z ≥ x ∨ y,
t_x(ψ) = π_x(e_z(ψ))
and also
t_x(ψ) = e_x(π_{x∧y}(ψ))
For the sake of completeness we mention a few other results on transport, which are easy to derive [6]:
(1)
If d(ψ) = x, then t_x(ψ) = ψ,
(2)
If y ≤ z, then t_y(ψ) = t_y(t_z(ψ)),
(3)
t_y(t_x(ψ)) = t_y(t_{x∧y}(ψ)),
(4)
t_x(t_x(ψ) · ϕ) = t_x(ψ) · t_x(ϕ).
A piece of information ψ and its vacuous extension e_x(ψ) represent in some way the same information, although with reference to different questions. More generally, if ψ and ϕ are two elements of Ψ with domains d(ψ) = x and d(ϕ) = y such that t_y(ψ) = ϕ and t_x(ϕ) = ψ, then we may shuffle these elements around between the domains x and y without any loss of information. Again, ψ and ϕ represent the same information, although each with reference to a different question. This observation can be captured by defining the following relation σ:
ψ σ ϕ, if d(ψ) = x and d(ϕ) = y together imply t_y(ψ) = ϕ and t_x(ϕ) = ψ.
This is an equivalence relation on Ψ. It is even more, namely a congruence with respect to combination and transport (see [6]). That means that if ψ₁ σ ϕ₁ and ψ₂ σ ϕ₂, then ψ₁ · ψ₂ σ ϕ₁ · ϕ₂, and if ψ σ ϕ, then t_x(ψ) σ t_x(ϕ) for any x ∈ Q. Furthermore 0_x σ 0_y and 1_x σ 1_y for all x, y ∈ Q. This allows us to consider the quotient algebra Ψ/σ, consisting of the equivalence classes [ψ]_σ for all ψ ∈ Ψ. Then, since σ is a congruence, the following operations between equivalence classes are well-defined:
[ψ]_σ · [ϕ]_σ = [ψ · ϕ]_σ
ϵ_x([ψ]_σ) = [t_x(ψ)]_σ
So, Ψ/σ becomes a new type of algebra, again with a combination and an extraction operation, but this time applied to equivalence classes and not to individual elements of Ψ. We note that Ψ/σ is a commutative and idempotent semigroup under combination. The ϵ_x are mappings of Ψ/σ into itself for every x ∈ Q. Under composition these maps also form a commutative and idempotent semigroup: ϵ_x ∘ ϵ_y = ϵ_y ∘ ϵ_x and ϵ_x ∘ ϵ_x = ϵ_x. Further, the maps ϵ_x have the interesting properties that (1) ϵ_x([1_y]_σ) = [1_y]_σ, (2) [ψ]_σ · ϵ_x([ψ]_σ) = [ψ]_σ and finally (3) ϵ_x(ϵ_x([ψ]_σ) · [ϕ]_σ) = ϵ_x([ψ]_σ) · ϵ_x([ϕ]_σ). This indicates an algebraic structure which is of interest in itself. It will be formalized and studied in the next section.

3. Domain-free Algebras

3.1. Another View

In this section a quite different view of an information algebra will be presented. It is motivated by the remarks at the end of the previous section. This time we start from a set Φ whose elements are thought of as pieces of information, considered as abstract entities independent of any domain or question. That is, the elements of Φ represent domain-free information. The generic elements of Φ are again denoted by lower case Greek letters like ϕ, ψ, …. And again an operation of combination is defined between elements of Φ:
Combination: · : Φ × Φ → Φ; ϕ, ψ ↦ ϕ · ψ.
As before, combination expresses the idea of aggregation of pieces of information, resulting in a new piece of information. Also, there is a unit or neutral element 1 for combination such that 1 · ϕ = ϕ · 1 = ϕ for all ϕ ∈ Φ. It represents vacuous information. Further there is a null element 0 such that 0 · ϕ = ϕ · 0 = 0. It represents contradiction, which destroys any information. Strictly speaking it does not really represent information as the other elements of Φ do, but it stands for the possibility that pieces of information may be contradictory.
Further we assume the existence of a family E of extraction operators, represented by mappings of Φ into itself:
Extraction: ϵ : Φ → Φ.
We may think of each element of E as an operation to extract information from the elements of Φ with respect to some question. This point of view will be clarified below. Guided by the algebraic structure derived from a labeled information algebra at the end of the previous section, we impose the following conditions on Φ, E and the interaction of combination and extraction:
  • Combination: Φ is a commutative and idempotent semigroup under the combination operation ·, with unit 1 and null element 0.
  • Extraction: Each extraction operator ϵ ∈ E satisfies the following conditions:
    (a)
    ϵ(0) = 0,
    (b)
    ϕ · ϵ(ϕ) = ϕ for all ϕ ∈ Φ,
    (c)
    ϵ(ϵ(ϕ) · ψ) = ϵ(ϕ) · ϵ(ψ) for all ϕ, ψ ∈ Φ.
  • Composition: E is a commutative and idempotent semigroup under ordinary composition of mappings.
An algebra (Φ, E; ·, ∘) satisfying these conditions is called a Domain-Free Information Algebra.
Item 1 states that combination of pieces of information can be done in any order with the same result. As stated above, the unit element 1 represents vacuous information and the null element 0 contradiction. Item 2 (a) says that extraction from the null element again yields the null element: no proper information can be extracted from the contradiction. Further, item 2 (b) requires that combining a piece of information with a part extracted from it gives nothing new, preserving the original piece of information. This corresponds to the Idempotency axiom of labeled algebras. Item 2 (c) corresponds to the Combination axiom of labeled information algebras. Finally, item 3 says that if information is extracted with respect to two questions, the order in which this is done is irrelevant; and once information is extracted with respect to a fixed question, extracting information with respect to the same question again gives nothing new.
As discussed at the end of the previous section, any labeled information algebra gives rise to a domain-free information algebra, if equivalent labeled information elements representing the same information are collected into equivalence classes. Combination is inherited from labeled combination and extraction from the transport operation. This hints already at a close connection between labeled and domain-free algebras. We shall in fact show that they are essentially two sides of the same coin and that we may switch between the two types of algebraic structures at our convenience.
If (Ψ, Q; d, ·, π) is a labeled information algebra and the lattice of questions has a top element ⊤, that is, a finest, universal question, then any element ψ ∈ Ψ is equivalent to its vacuous extension to the top domain, ψ σ e_⊤(ψ). In the quotient algebra Ψ/σ, we may therefore always take e_⊤(ψ) as a representative of the class [ψ]_σ. Since we then have
[e_⊤(ψ)]_σ · [e_⊤(ϕ)]_σ = [e_⊤(ψ) · e_⊤(ϕ)]_σ
ϵ_x([e_⊤(ψ)]_σ) = [e_⊤(π_x(e_⊤(ψ)))]_σ
we see that we may represent the domain-free version of the labeled algebra (Ψ, Q; d, ·, π) essentially by the elements e_⊤(ψ) of the labeled algebra. So, e.g., in the case of a relational information algebra (see the previous section), we may adjoin to the lattice Q of the finite subsets of N the whole index set N as top element. Let f : {1, 2, …} → ⋃_{i∈N} V_i be sequences of elements f(i) with f(i) ∈ V_i and define π_I(f) to be the restriction of f to a subset I of variables. Then the vacuous extension of a relation R on I is its so-called cylindrical extension or saturation, defined by
e_⊤(R) = {f : π_I(f) ∈ R}
Thus, the sets e_⊤(R) are subsets of ∏_{i∈N} V_i, the set of all infinite sequences f. Hence, the domain-free version of a relational information algebra is essentially the algebra of subsets e_⊤(R) with intersection as combination and ϵ_I(e_⊤(R)) = e_⊤(π_I(e_⊤(R))) as extraction.
To complete the picture let us present two further, simple examples of domain-free information algebras. Consider a finite alphabet Σ, the set Σ* of finite strings over Σ, including the empty string ϵ, and the set Σ^ω of countably infinite strings over Σ. Let Σ** = Σ* ∪ Σ^ω ∪ {0}, where 0 is a symbol not contained in Σ. For two strings r, s ∈ Σ**, define r ≤ s if r is a prefix of s or if s = 0. Then, we define a combination operation in Σ** as follows:
r · s = s if r ≤ s, r if s ≤ r, and 0 otherwise.
Clearly, this is a commutative, idempotent semigroup. The empty string ϵ is the unit element, and the adjoined element 0 is the null element of combination. For extraction, we define operators ϵ_n for any n ∈ N and also for n = ∞. Let ϵ_n(s) be the prefix of length n of the string s if the length of s is at least n, and let ϵ_n(s) = s otherwise. In particular, define ϵ_∞(s) = s for any string s and ϵ_n(0) = 0 for any n. So, any ϵ_n maps Σ** into itself. It is easy to verify that each ϵ_n satisfies conditions (a) to (c) of item 2 above in the definition of a domain-free information algebra. Further, it can also easily be verified that E = {ϵ_n : n ∈ N ∪ {∞}} is a commutative and idempotent semigroup under composition of maps. So, the string algebra (Σ**, E; ·, ∘) is an instance of a domain-free information algebra.
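A minimal Python sketch of this string algebra (None plays the role of the adjoined null element 0; names are illustrative):

def leq(r, s):
    """r <= s iff r is a prefix of s, or s is the null element (None)."""
    return s is None or (r is not None and s.startswith(r))

def combine(r, s):
    """Two prefixes are consistent only if one extends the other."""
    if r is None or s is None:
        return None
    if leq(r, s):
        return s
    if leq(s, r):
        return r
    return None                          # incompatible prefixes: contradiction

def extract(n, s):
    """epsilon_n keeps the first n symbols of what is known so far."""
    return None if s is None else s[:n]

print(combine('ab', 'abba'))             # 'abba': the finer information wins
print(combine('ab', 'ba'))               # None: contradictory information
print(extract(1, 'abba'))                # 'a'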
As another example consider any set X. A partial map on X is a map σ : S → X, where S is a subset of X. We denote the domain of σ by d(σ). The set of all partial maps on X, including an adjoined new element 0, is denoted by X^X. Partial maps are ordered by σ ≤ τ if d(σ) ⊆ d(τ) and σ(x) = τ(x) for all x ∈ d(σ), or if τ = 0. We define combination in X^X by
σ · τ = σ if τ ≤ σ, τ if σ ≤ τ, and 0 otherwise.
Again, under this operation, X^X is a commutative and idempotent semigroup, with the empty function ϵ, defined on the empty set, as unit, and 0 as null element. For any subset S of X, we define an extraction operation ϵ_S by restricting a map σ to S; more precisely,
ϵ_S(σ) = σ|_{d(σ)∩S}
and ϵ_S(0) = 0. Each of these maps satisfies conditions (a) to (c) in item 2 of the definition of a domain-free information algebra. Further, E = {ϵ_S : S ⊆ X} is a commutative and idempotent semigroup under composition. Thus the algebra of partial maps (X^X, E; ·, ∘) is a domain-free information algebra.
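And a matching sketch of the algebra of partial maps, with a partial map represented as a Python dict and None again standing for the null element:

def leq(s, t):
    """s <= t iff t extends s as a partial map, or t is the null element."""
    return t is None or (s is not None and
                         all(k in t and t[k] == v for k, v in s.items()))

def combine(s, t):
    if leq(s, t):
        return t
    if leq(t, s):
        return s
    return None                          # incomparable maps: contradiction

def extract(S, s):
    """epsilon_S restricts a partial map to the subset S of X."""
    return None if s is None else {k: v for k, v in s.items() if k in S}

sigma, tau = {1: 'a'}, {1: 'a', 2: 'b'}
print(combine(sigma, tau))               # {1: 'a', 2: 'b'}
print(extract({2}, combine(sigma, tau))) # {2: 'b'}
print(combine({1: 'a'}, {1: 'c'}))       # None: contradictory maps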
Further examples will be given later, in particular in Section 4. Here we shall first show that the idempotency of Φ and E makes it possible to define an order on each of these two sets. Especially important is the order of information in Φ. Many interesting features of an information algebra are in fact order-theoretic, as we shall see subsequently and in particular in Section 5. Order will be introduced in Section 3.2 and exploited in Section 3.3. Then, just as labeled information algebras induce associated domain-free algebras, we shall show in Section 3.4 that from a domain-free algebra we may also construct a labeled one. Finally, a further important structural property will be introduced and discussed in Section 3.5.

3.2. Order

Can pieces of information be compared as to their information content? Suppose (Φ, E; ·, ∘) to be a domain-free information algebra. If ϕ, ψ are two pieces of information such that ϕ · ψ = ϕ, then we may say that ψ contains less information than ϕ, since knowing ϕ and ψ gives nothing more than ϕ. The piece of information ψ adds nothing to ϕ. So, in this case we write ψ ≤ ϕ, meaning that ψ contains less information than ϕ, is less informative than ϕ. This relation is indeed a partial order on Φ. This order has a number of simple consequences. So, ϕ, ψ ≤ ϕ · ψ. In fact, the combination of ϕ and ψ is the least upper bound, the supremum, of the two elements: ϕ · ψ = sup{ϕ, ψ}. Further, 0 is an upper bound and 1 a lower bound for any element of Φ: 1 ≤ ϕ ≤ 0. This means that (Φ, ≤) is a join-semilattice with minimal and maximal elements 1 and 0. We often write combination as join when the order-theoretic point of view is to be stressed: ϕ · ψ = ϕ ∨ ψ. We mention also that in many cases, as for example in the string algebra and the algebra of partial maps, combination is defined primarily by order.
In any commutative and idempotent semigroup there exist exactly two orders compatible with the semigroup operation. Here, we have selected the one which represents the information order. This choice implies that 1 ≤ ϕ ≤ 0: the vacuous information is less informative than any other information, and the contradiction 0, which is not really an information, dominates any piece of information in this order.
In the same way we may define an order on the commutative, idempotent semigroup E of extraction operators: Here we define ϵ ≤ η if η ∘ ϵ = ϵ. So, ϵ ≤ η means that once information is extracted according to ϵ, another extraction according to η no longer changes anything. Under this order, composition of extractions corresponds to the infimum: η ∘ ϵ = inf{η, ϵ}. In this way, E becomes a meet-semilattice, and composition can be written as meet: η ∘ ϵ = η ∧ ϵ. In the examples presented above, we have seen that the elements of E are often indexed by another set Q, E = {ϵ_x : x ∈ Q}. We may consider the elements x of Q as representing questions, and the maps ϵ_x as extraction operators, which extract the information pertaining to question x from the pieces of information. The order in E induces an order in Q: x ≤ y in Q if ϵ_x ≤ ϵ_y in E. Here x ≤ y means that the question represented by x is coarser than the one represented by y; once the information relating to x is extracted, the information relating to the finer question y is lost and cannot be retrieved.
In this view, in domain-free information algebras, the system Q of questions is only assumed to be a meet-semilattice, whereas in labeled information algebras Q is assumed to be a lattice. This is a difference between the domain-free and the labeled view of information. In many cases however, even in domain-free information algebras, Q is in fact also a lattice, see the examples above.
We adopt in the sequel the view that the family of extraction operators E is indexed by the elements of a question semilattice Q. Here, we give a few elementary results on the interplay between the order in Φ and the one in Q in a domain-free information algebra:
(1)
ϵ_x(1) = 1 for all x ∈ Q,
(2)
x ≤ y implies ϵ_x(ϕ) ≤ ϵ_y(ϕ) for all ϕ ∈ Φ,
(3)
ϕ ≤ ψ implies ϵ_x(ϕ) ≤ ϵ_x(ψ) for all x ∈ Q,
(4)
ϵ_x(ϕ) ≤ ϕ for all ϕ ∈ Φ and all x ∈ Q,
(5)
ϵ_x(ϕ) · ϵ_x(ψ) ≤ ϵ_x(ϕ · ψ).
In particular, conditions (a) to (c) in item 2 of the definition of a domain-free information algebra above may be rewritten as follows:
(a)
ϵ_x(0) = 0,
(b)
ϵ_x(ϕ) ≤ ϕ for all ϕ ∈ Φ,
(c)
ϵ_x(ϵ_x(ϕ) ∨ ψ) = ϵ_x(ϕ) ∨ ϵ_x(ψ).
An operator on a semilattice, satisfying these conditions, is also called an existential quantifier in algebraic logic, see [12,13], although often the reversed order is used. This difference is explained by our choice of the order, dictated by the semantics of information. This is a first reference to the close connection between information algebras and logic, in particular algebraic logic. This subject will be taken up in the next subsection and in particular in Section 4.
To conclude our discussion of order, let’s mention that in just the same way an order may also be introduced between pieces of information in labeled information algebras. We take this for granted; this section can serve as a model of how to do it. Many other concepts related to order, for instance ideal completion (Section 3.3) or compactness (Section 5), have their counterparts in the labeled version.

3.3. Ideal Completion

If an agent asserts a piece of information ϕ, then he should consistently also assert any less informative piece of information ψ ≤ ϕ. So, the down set ↓ϕ = {ψ : ψ ≤ ϕ} is then a consistent body of information deduced from ϕ. More generally, if an agent asserts two pieces of information ϕ and ψ, then he should also consistently assert their combination ϕ · ψ. In other words, a consistent body of information should form an ideal in Φ. Assume (Φ, E; ·, ∘) is a domain-free information algebra. An ideal in Φ is then defined as follows:
A non-empty set I ⊆ Φ is called an ideal in Φ, if
(1)
ϕ ∈ I, ψ ∈ Φ and ψ ≤ ϕ jointly imply ψ ∈ I,
(2)
ϕ, ψ ∈ I imply ϕ · ψ ∈ I.
This corresponds to the usual concept of an ideal in semilattices or lattices, see [14]. In our context, an ideal represents a coherent and consistent theory. The down set ↓ϕ of an element ϕ of Φ is called a principal ideal, and if an ideal I is different from Φ it is called proper. Let I_Φ denote the set of all ideals of Φ. We show that these ideals form an information algebra.
For this purpose we consider ideals I of Φ as pieces of information and define the combination of two ideals I₁ and I₂ by
I₁ · I₂ = {ϕ ∈ Φ : ϕ ≤ ϕ₁ · ϕ₂ for some ϕ₁ ∈ I₁, ϕ₂ ∈ I₂}
Clearly, I₁ · I₂ is an ideal in Φ, so combination is well defined. It is commutative and idempotent. The ideal {1} is the unit element and the ideal Φ the null element of this combination operation.
Next, assume the extraction operators in E to be indexed by elements x ∈ Q, where Q is a meet-semilattice. Define for x ∈ Q and an ideal I of Φ
ϵ̄_x(I) = {ϕ ∈ Φ : ϕ ≤ ϵ_x(ψ) for some ψ ∈ I}
Again, ϵ̄_x(I) is an ideal in Φ, so ϵ̄_x is well defined. It can be shown that Ē = {ϵ̄_x : x ∈ Q} is a commutative and idempotent semigroup under composition. It can also be shown that ϵ̄_x as a map from the set of ideals I_Φ into itself satisfies conditions (a) to (c) of item 2 in the definition of a domain-free information algebra, that is, it is an existential quantifier, see [6,8]. Therefore, (I_Φ, Ē; ·, ∘) is a domain-free information algebra. It is called the ideal extension of the information algebra (Φ, E; ·, ∘). Note that in the information order we have I₁ ≤ I₂ exactly if I₁ ⊆ I₂. Thus I₂ is the larger theory of the two, hence the more informative!
The original algebra (Φ, E; ·, ∘) is embedded into its ideal extension by the map ϕ ↦ ↓ϕ, because ↓(ϕ · ψ) = ↓ϕ · ↓ψ, ↓ϵ_x(ϕ) = ϵ̄_x(↓ϕ) and the map is one-to-one. To simplify notation we identify henceforth Φ with its image under this map and also write ϵ_x instead of ϵ̄_x. Also ϕ ∈ I is often written as ϕ ≤ I, referring to the order in I_Φ.
The intersection of any family of ideals is again an ideal. So the collection of all ideals is a ∩-system, and any ∩-system is a complete lattice under set inclusion [14]. Meet is just set intersection, and join is defined by ideal combination. So, I_Φ is not just a join-semilattice under the information order, but a complete lattice, where meet and join are defined for any family of ideals. The role of ideal completion will become clearer in Section 5.
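For a finite algebra the ideals can be enumerated directly. Below is a hedged Python sketch using the subset algebra on a two-element universe (combination = intersection; in the information order ψ ≤ ϕ iff ϕ ⊆ ψ); it confirms that every ideal of this small algebra is principal:

from itertools import chain, combinations

U = (0, 1)
PHI = [frozenset(s) for s in chain.from_iterable(
    combinations(U, k) for k in range(len(U) + 1))]

def leq(psi, phi):
    """Information order: psi <= phi iff phi . psi = phi, i.e. phi subset psi."""
    return phi <= psi

def is_ideal(I):
    down = all(psi in I for phi in I for psi in PHI if leq(psi, phi))
    comb = all((phi & psi) in I for phi in I for psi in I)
    return bool(I) and down and comb

def principal(phi):
    return frozenset(psi for psi in PHI if leq(psi, phi))

all_ideals = {frozenset(s) for s in chain.from_iterable(
    combinations(PHI, k) for k in range(1, len(PHI) + 1)) if is_ideal(set(s))}
print(all_ideals == {principal(phi) for phi in PHI})   # True: all principal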

3.4. Support

We claimed that an extraction operator ϵ_x extracts exactly the information pertaining to a question x ∈ Q from any piece of information ϕ ∈ Φ. Now, it may be that a piece of information ϕ ∈ Φ pertains fully to a question x, that is, extracting information relative to x does not change ϕ: ϵ_x(ϕ) = ϕ. Then x ∈ Q is called a support of ϕ. Note that, in general, not every element ϕ ∈ Φ has a support. However, the identity map id of Φ into itself satisfies conditions (a) to (c) of item 2 in the definition of a domain-free information algebra, so E ∪ {id} is still a commutative, idempotent semigroup, containing E as a sub-semigroup. So, if (Φ, E; ·, ∘) is a domain-free information algebra, then so is (Φ, E ∪ {id}; ·, ∘). However, in this algebra every element of Φ has a support, namely id. To the extraction operator id we may adjoin a new top element ⊤ in Q, satisfying id = ϵ_⊤. We mentioned already above that such a top element represents a finest question, which contains all possible answers. So, in this way, we may easily extend any domain-free information algebra in such a way that any element of Φ has a support. An algebra where each element has a support is called supported.
We list a few results concerning support:
(1)
ϵ_x(ϵ_x(ϕ)) = ϵ_x(ϕ), that is, x is a support of ϵ_x(ϕ),
(2)
ϵ_x(ϕ) = ϕ implies ϵ_x(ϵ_y(ϕ)) = ϵ_y(ϕ), that is, if x is a support of ϕ, then x is also a support of any ϵ_y(ϕ),
(3)
ϵ_x(ϕ) = ϵ_y(ϕ) = ϕ implies ϵ_{x∧y}(ϕ) = ϕ, that is, if x and y are supports of ϕ, then so is x ∧ y,
(4)
ϵ_x(ϕ) = ϕ implies ϵ_{x∧y}(ϕ) = ϵ_y(ϕ), that is, if x is a support of ϕ, then x ∧ y is a support of ϵ_y(ϕ),
(5)
ϵ_x(ϕ) = ϕ and x ≤ y imply ϵ_y(ϕ) = ϕ, that is, if x is a support of ϕ, then any finer question y is a support of ϕ too,
(6)
ϵ_x(ϕ) = ϕ and ϵ_x(ψ) = ψ imply ϵ_x(ϕ · ψ) = ϕ · ψ, that is, if x is a support of ϕ and ψ, then it is a support of ϕ · ψ too,
(7)
If Q is a lattice, then ϵ_x(ϕ) = ϕ and ϵ_y(ψ) = ψ imply ϵ_{x∨y}(ϕ · ψ) = ϕ · ψ, that is, if x is a support of ϕ, and y a support of ψ, then x ∨ y is a support of ϕ · ψ.
The last item is reminiscent of the Labeling axiom of a labeled information algebra. In fact, if Q is a lattice, then we may associate a labeled information algebra with a supported domain-free information algebra (Φ, E; ·, ∘). Here we assume as usual that E is labeled by the lattice Q. Consider the set of all pairs (ϕ, x), where ϕ ∈ Φ and x is a support of ϕ. That is, we label all elements of Φ with all of their supports. Let then
Ψ = {(ϕ, x) : ϕ ∈ Φ, ϵ_x(ϕ) = ϕ}
Then we define a labeling operation on Ψ by
d(ϕ, x) = x
Further, within Ψ, we define a combination operation by
(ϕ, x) · (ψ, y) = (ϕ · ψ, x ∨ y)
Note that this operation is well defined by property (7) in the list above. Clearly, Ψ is a commutative and idempotent semigroup under this combination operation. It has (1, x) and (0, x) as unit and null elements on domains x ∈ Q. Finally, we define a projection operation for any x ≤ y by
π_x(ϕ, y) = (ϵ_x(ϕ), x)
It can be verified that the algebra (Ψ, Q; d, ·, π) indeed satisfies all axioms of a labeled information algebra [6]. If we go back from this labeled information algebra to the domain-free algebra by forming the quotient algebra Ψ/σ, then Ψ/σ will be isomorphic to the initial one, namely (Φ, E; ·, ∘). A formal definition of a morphism between information algebras will be given in Section 4.5; for the moment an intuitive understanding is sufficient. Similarly, if we start with a labeled information algebra (Ψ, Q; d, ·, π), construct the domain-free algebra on Ψ/σ and take the associated labeled algebra using the pair construction above, then this labeled algebra will be isomorphic to the initial one. In this sense, labeled and domain-free algebras are closely associated. Therefore, in the sequel, we shall freely select at convenience either the labeled or the domain-free version of an information algebra, and we take it for granted that any concepts and constructions may be developed in parallel in both versions of an information algebra, although typically one of the two settings will be preferable in a given case.

3.5. Atoms

In many cases there are maximal, most informative information elements, and often any piece of information is somehow composed of such elements. This issue will be discussed in this section. Here we consider a labeled information algebra (Ψ, Q; d, ·, π), although a similar development would also be possible for domain-free algebras.
An element α ∈ Ψ with label d(α) = x is called an atom on domain x, if
(1)
α ≠ 0_x,
(2)
For all ψ ∈ Ψ, d(ψ) = x and α ≤ ψ jointly imply either ψ = α or ψ = 0_x.
Therefore, atoms are the most informative elements on domain x (with the exception of the non-information 0_x, the contradiction). In a purely order-theoretic view atoms would be called co-atoms, since atoms there are the smallest nonzero elements. However, in the view of information order, the present notion of an atom is justified, since it turns out that in many important cases a piece of information may be identified with a set of atoms implying the information. In relational information algebras (see Section 2.2) the one-element relations, the relations containing exactly one tuple, are atoms on the domain of the relation. Of course, information algebras may have no atoms. Note also that in a subset information algebra (see Section 2.2) the individual blocks of a partition 𝒫 are atoms on the domain represented by this partition.
Here are a few properties of atoms:
(1)
If α is an atom on domain x, ψ ∈ Ψ and d(ψ) = x, then either ψ ≤ α or ψ · α = 0_x.
(2)
If α is an atom on domain x, and y ≤ x, then π_y(α) is an atom on domain y.
(3)
If α and β are atoms on domain x, then either α = β or α · β = 0_x.
For a proof we refer to [6]. Two pieces of information ϕ and ψ on the same domain x are called contradictory (or disjoint) if ϕ · ψ = 0_x. So atoms are mutually contradictory, and an atom and a piece of information on the same domain are either contradictory, or the information is smaller than the atom.
Let At(Ψ_x) denote the set of all atoms with domain x, At(Ψ) the set of all atoms, and for ψ ∈ Ψ with label d(ψ) = x, let At(ψ) be the set of all atoms implying ψ,
At(ψ) = {α ∈ At(Ψ_x) : ψ ≤ α}
If α ∈ At(ψ) we say that α is contained in ψ. For instance, in relational algebras, the atoms are the one-tuple relations: {α} is an atom contained in R exactly if α ∈ R.
We introduce a few notions with respect to atoms and labeled information algebras:
(1)
A labeled information algebra is called atomic if for all ψ ∈ Ψ different from 0_x the set At(ψ) is not empty, i.e., if every piece of information contains an atom.
(2)
A labeled information algebra is called atomistic, if for all ψ ∈ Ψ different from 0_x, ψ is the infimum of the atoms it contains,
ψ = ⋀ At(ψ)
(3)
A labeled information algebra is called completely atomistic, if it is atomistic and if for all x ∈ Q and for every subset of atoms A ⊆ At(Ψ_x), the infimum ⋀ A exists and belongs to Ψ.
A relational information algebra is completely atomistic. If in a relational information algebra V is the set of real numbers and only convex subsets of each domain are considered, then we still have a labeled information algebra (a subalgebra of the relational one) which is atomistic but not completely atomistic; and this is the case for many other subalgebras of a relational information algebra.
For atoms we have essentially two basic operations, namely labeling, α ↦ d(α), and projection, α ↦ π_x(α). In an atomic information algebra the system (At(Ψ), Q; d, π) has the following properties, inherited from the corresponding labeled information algebra (see [6]):
(1)
If x ≤ d(α), then d(π_x(α)) = x.
(2)
If x ≤ y ≤ d(α), then π_x(π_y(α)) = π_x(α).
(3)
If d(α) = x, then π_x(α) = α.
(4)
If d(α) = x, d(β) = y and π_{x∧y}(α) = π_{x∧y}(β), then there exists a γ such that d(γ) = x ∨ y and π_x(γ) = α, π_y(γ) = β.
(5)
If d(α) = x and x ≤ y, then there exists a γ such that d(γ) = y and π_x(γ) = α.
Note that the ordinary tuples of a relational algebra also satisfy these conditions. Therefore we call a system like (At(Ψ), Q; d, π) a tuple system. More on such systems will be said in Section 4.1.
Consider still an atomic labeled information algebra (Ψ, Q; d, ·, π). We call subsets of At(Ψ_x) generalized relations on domain x and denote the family of all such relations by R(At(Ψ)).
In R(At(Ψ)), we may define an operation of combination among generalized relations R and S on x and y respectively, according to the model of relational join:
R ⋈ S = {α ∈ At(Ψ) : d(α) = x ∨ y, π_x(α) ∈ R, π_y(α) ∈ S}
We may also define projection of a generalized relation R on domain x to some domain y ≤ x as in relational algebra:
π_y(R) = {π_y(α) : α ∈ R}
Then, exactly as in relational information algebras (see Section 2.2), we may define a generalized relational information algebra of pairs (R, x) of generalized relations in At(Ψ) with domain x ∈ Q. We call this algebra (R(At(Ψ)), Q; d, ·, π) (see [6,8]). As before, we may drop the pair notation, and denote the pair (∅, x) by ∅_x.
The map ψ ↦ At(ψ) between Ψ and R(At(Ψ)) is a homomorphism (see Section 4.5 and [6]) between the atomic information algebra (Ψ, Q; d, ·, π) and the generalized relational algebra (R(At(Ψ)), Q; d, ·, π), that is,
At(ψ · ϕ) = At(ψ) ⋈ At(ϕ)
At(π_x(ψ)) = π_x(At(ψ))
At(1_x) = At(Ψ_x)
At(0_x) = ∅_x
If the labeled information algebra (Ψ, Q; d, ·, π) is atomistic, then this map is one-to-one, hence an embedding of (Ψ, Q; d, ·, π) into the relational algebra (R(At(Ψ)), Q; d, ·, π). If the algebra (Ψ, Q; d, ·, π) is completely atomistic, then the map is even onto R(At(Ψ)), and hence an isomorphism [6].
A similar analysis can be carried out in the domain-free version of an information algebra, which links atomic algebras to information algebras of subsets. These results are partial answers to the representation problem: Is every information algebra (labeled or domain-free) isomorphic to an information algebra of sets, and if so, how? More on this question will be said in Section 6.

4. Generic Constructions

Where do information algebras come from? Are there general ways to construct information algebras? There are! In this Section we describe several generic ways to construct information algebras. There are at least three different approaches: First, similarly to atomic algebras, information algebras may be obtained from abstract tuple systems as generalized relational algebras. This is presented in Section 4.1. Secondly, logic provides a rich source for information algebras. There are several ways information algebras are related to logic, especially algebraic logic. Section 4.2 to Section 4.4 are devoted to this important subject. Finally, new information algebras may be obtained from given ones by constructions used in Universal Algebra and Category Theory. This will be discussed in Section 4.5.

4.1. Generalized Relational Algebras

Regarding atoms of an atomic information algebra, we have said that atoms form a tuple system. Here we give a formal definition of the concept of an abstract tuple system. Let Q be a lattice and T a set, whose generic elements will be denoted by t, s, …. Suppose that two operations representing labeling and projection are defined:
(1)
Labeling: d : T → Q, t ↦ d(t),
(2)
Projection: π : T × Q → T, t, x ↦ π_x(t), defined for x ≤ d(t).
The system (T, Q; d, π) is called a tuple system, if the following conditions are satisfied for t, s ∈ T and x, y ∈ Q:
(1)
If x ≤ d(t), then d(π_x(t)) = x,
(2)
If x ≤ y ≤ d(t), then π_x(π_y(t)) = π_x(t),
(3)
If d(t) = x, then π_x(t) = t,
(4)
If d(t) = x, d(s) = y and π_{x∧y}(t) = π_{x∧y}(s), then there exists r ∈ T such that d(r) = x ∨ y and π_x(r) = t, π_y(r) = s,
(5)
If d(t) = x and x ≤ y, then there exists s ∈ T such that d(s) = y and π_x(s) = t.
The elements of T are called generalized tuples, since ordinary tuples evidently satisfy the conditions of a tuple system. The label d ( t ) of a tuple t is called its domain.
As usual, the elements of Q may be thought of as representing questions. The tuples with domain x are the possible answers to the question represented by x ∈ Q. Projection of a tuple t to some coarser domain y links the answers to a question with the answers to a coarser question. If π_y(t) = π_y(s), then s and t belong to the same answer to the coarser question. As we have seen, the atoms of an atomic labeled information algebra form a tuple system. So do the blocks of a partition lattice satisfying the condition in Equation (6). In fact, the elements of any labeled information algebra (Ψ, Q; d, ·, π) form a tuple system (Ψ, Q; d, π) (see [6]).
If we fix x ∈ Q, then any subset R of tuples with domain x represents a partial answer to the question x. Such a subset R is called a generalized relation with domain x. Denote by R(T) the family of all such generalized relations. In the case of the tuple system of the blocks of a partition lattice, generalized relations are simply sets of blocks of the same partition. As for atomic labeled information algebras in Section 3.5, we may define the generalized relational join of generalized relations R and S with domains x and y respectively, as well as the relational projection to y ≤ x of a generalized relation R with domain x:
R ⋈ S = {t ∈ T : d(t) = x ∨ y, π_x(t) ∈ R, π_y(t) ∈ S}
π_y(R) = {π_y(t) : t ∈ R}
As in the atomic case (Section 3.5) it can be shown that the pairs (R, x), where R is a generalized relation with domain x, form a labeled information algebra, where the operations are defined as follows:
(1)
Labeling: d(R, x) = x,
(2)
Combination: (R, x) · (S, y) = (R ⋈ S, x ∨ y),
(3)
Projection: π_x(R, y) = (π_x(R), x).
We denote this algebra by (R(T), Q; d, ·, π) and call it the generalized relational algebra over the tuple system (T, Q; d, π). Note that this is an information algebra whose elements are sets of tuples; it is a set algebra.
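To fix ideas, here is a small Python sketch (our illustration only; the variables and data are invented) of the generalized relational algebra for the most familiar tuple system, that of a multivariate setting: tuples are dictionaries assigning values to variables, the label d(t) is the set of assigned variables, and projection restricts a tuple to a coarser domain.

```python
from itertools import product

# A minimal sketch, assuming the multivariate tuple system: a tuple is a dict
# from variable names to values, its label d(t) is the set of variables it
# assigns, and projection restricts it to a subset of those variables.

def d(t):
    return frozenset(t)                          # label (domain) of a tuple

def proj(t, x):
    assert x <= d(t)                             # defined only for x <= d(t)
    return {v: t[v] for v in x}

def join(R, x, S, y):
    """All tuples on x | y whose projections lie in R and S, respectively."""
    return [dict(r, **s) for r, s in product(R, S)
            if all(r[v] == s[v] for v in x & y)]

def proj_rel(R, y):
    """Relational projection pi_y(R) = {pi_y(t) : t in R}, duplicates removed."""
    return [dict(p) for p in {tuple(sorted(proj(t, y).items())) for t in R}]

# Combining two labeled pieces of information and focusing on one variable:
x, y = frozenset({"a", "b"}), frozenset({"b", "c"})
R = [{"a": 1, "b": 2}, {"a": 3, "b": 4}]
S = [{"b": 2, "c": 5}]
combined = (join(R, x, S, y), x | y)             # (R, x) . (S, y)
print(proj_rel(combined[0], frozenset({"c"})))   # -> [{'c': 5}]
```

Idempotency of combination is immediate in this representation: joining a relation with itself on the same domain returns the relation unchanged.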

4.2. Quantifier Algebras

A monadic algebra [15,16] is a Boolean algebra Φ together with a map ∃: Φ → Φ which is an existential quantifier (see Section 3.2), that is,
(1)
∃(1) = 1,
(2)
∃(φ) ≤ φ,
(3)
∃(∃(φ) ∨ ψ) = ∃(φ) ∨ ∃(ψ).
Now, since only the join operation is involved in the definition of an existential quantifier, this concept applies to any join-semilattice, see [12]. Therefore, let Φ be a commutative, idempotent semigroup, hence a semilattice. We add a family of extraction operators in the following way: let {∃_i : i ∈ V} be a finite set of commuting existential quantifiers, that is,
∃_i(∃_j(φ)) = ∃_j(∃_i(φ))
for all i, j ∈ V and all φ ∈ Φ. Then we may define a map ∃_s for any subset s = {i_1, …, i_n} of V by
∃_s(φ) = ∃_{i_n}(∃_{i_{n−1}}(⋯ ∃_{i_1}(φ) ⋯))
Clearly, ∃_s is an existential quantifier, and for subsets s and t of V,
∃_{s∪t}(φ) = ∃_s(∃_t(φ))
In this way, we obtain a quantifier algebra (Φ, P(V); ·, ∃), where ∃: Φ × P(V) → Φ and P(V) is the power set of V. This is closely related to an information algebra: we may define extraction operators ε_s for subsets s of V by
ε_s(φ) = ∃_{V∖s}(φ)
Then (Φ, P(V); ·, ε) is a domain-free information algebra. In specific applications of this construction, existential quantification usually corresponds to the elimination of variables, which amounts to extracting the information pertaining to the remaining variables (see [6]). Algebras of this type are closely related to cylindric algebras [17], an important concept in algebraic logic.
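As an illustration, here is a toy sketch in Python (ours, not part of the formal development): Φ is the semilattice of sets of assignments of finitely many variables, combination is intersection, and ∃_i eliminates the variable i by saturation. The variable names and values are invented.

```python
from itertools import product

# A toy sketch, assuming Phi is the semilattice of sets of assignments
# V -> {0, 1} with intersection as combination; E_i "forgets" the value of
# variable i (cylindrification), a standard existential quantifier.

V = ("p", "q", "r")
VALUES = (0, 1)

def E(i, phi):
    """Existential quantifier over variable i: saturate phi in direction i."""
    k = V.index(i)
    return frozenset(m[:k] + (v,) + m[k + 1:] for m in phi for v in VALUES)

def eliminate(s, phi):
    """E_s = E_{i_n} o ... o E_{i_1}; the order is irrelevant: the E_i commute."""
    for i in s:
        phi = E(i, phi)
    return phi

def extract(s, phi):
    """epsilon_s(phi) = E_{V \\ s}(phi): keep only what phi says about s."""
    return eliminate([i for i in V if i not in s], phi)

# phi asserts p = q = 1 and leaves r open.
phi = frozenset(m for m in product(VALUES, repeat=3) if m[0] == m[1] == 1)
assert E("p", E("q", phi)) == E("q", E("p", phi))   # commuting quantifiers
print(len(extract({"p"}, phi)))   # 4 assignments: only p = 1 is retained
```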

4.3. Consequence Operators

How can we express information? One possibility is to do so with propositions expressed by sentences of some appropriate formal language. The internal syntactic structure of this language is of no concern here. The essential point is that there is an entailment relation which permits further sentences to be deduced from a set of given sentences. This captures the essence of a logic. The present section is essentially based on [18].
Let then L be a set of sentences, representing the language to be considered. An entailment relation ⊢ relates sets of sentences X ⊆ L to single sentences s ∈ L such that the following conditions are satisfied:
(1)
X ⊢ s for all s ∈ X,
(2)
If X ⊢ s for all s ∈ Y and Y ⊢ t, then X ⊢ t.
Such entailment relations are used, for instance, in the information systems introduced by Scott [14]. The relation X ⊢ s means that the sentence s can be deduced from X, or is entailed by X.
We may ask for the set of all sentences entailed by a set X. This leads to an operator C mapping the power set of L into itself,
C(X) = {s ∈ L : X ⊢ s}
This mapping is a consequence operator (also called a closure operator); that is, it satisfies the following three conditions [6]:
(1)
X ⊆ C(X),
(2)
C(C(X)) = C(X),
(3)
if X ⊆ Y, then C(X) ⊆ C(Y).
Sets X such that X = C(X) are called closed; in particular, any set of the form C(X) is closed. We consider the closed sets as our pieces of information. Thus, let Φ denote the family of all closed sets. Note that, in general, a closed set A can be written as A = C(X) with X ⊆ A in many different ways. Our interest usually is in finding small, ideally finite, sets X generating A in this way. So a piece of information A is coded, nonuniquely, by a set of sentences X contained in A. If we have two pieces of information coded by X and Y respectively, then the combined information is the closure of the set X ∪ Y. So, we define a combination operation within Φ by
C(X) · C(Y) = C(X ∪ Y)
Since C(X ∪ Y) = C(C(X) ∪ C(Y)), this operation is well defined among closed sets. Clearly, under this operation Φ is a commutative, idempotent semigroup. The set C(∅) of all tautologies is the unit element of this combination, and L is its null element. In fact, the closed sets form a ∩-system, since the intersection of any family of closed sets is closed [14]. So Φ is a complete lattice under inclusion as order; intersection is the meet, and the join is given by our combination operation.
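The following toy sketch in Python (ours; the rule set is invented) illustrates a consequence operator generated by forward chaining over Horn-style rules, together with the combination of closed sets just defined.

```python
# A toy sketch, assuming a finite language and an entailment relation given
# by Horn-style rules (body |- head). C(X) is computed by forward chaining;
# it clearly satisfies X <= C(X), idempotency and monotonicity.

RULES = [({"a"}, "b"), ({"b"}, "c"), ({"a", "d"}, "e")]

def C(X):
    closed = set(X)
    changed = True
    while changed:
        changed = False
        for body, head in RULES:
            if body <= closed and head not in closed:
                closed.add(head)
                changed = True
    return frozenset(closed)

def combine(A, B):
    """Combination of closed sets: C(X) . C(Y) = C(X | Y)."""
    return C(A | B)

print(sorted(C({"a"})))                     # ['a', 'b', 'c']
print(sorted(combine(C({"a"}), C({"d"}))))  # ['a', 'b', 'c', 'd', 'e']
```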
In order to obtain an information algebra of closed sets, we need to introduce extraction operators into the semigroup Φ. For this purpose, we consider sublanguages M ⊆ L of the original language. The information C(X) restricted to M is simply C(X) ∩ M. However, this set is not necessarily closed. Thus, a candidate for an extraction operator relative to the sublanguage M is obtained by taking the closure of this set,
C_M(C(X)) = C(C(X) ∩ M)
This is a map C_M: Φ → Φ.
We want this map to be an existential quantifier on the semilattice Φ of closed sets (see Section 3.2) in order to obtain an information algebra. This means that conditions (a), (b) and (c) in Section 3.2 must be satisfied, which imposes additional conditions on the consequence operator C.
Condition (a) requires that C(M) = L; that is, the sublanguage M taken as a whole must already entail every sentence of L. Then C_M(L) = C(L ∩ M) = C(M) = L, so condition (a) of an existential quantifier is satisfied. Condition (b) is also satisfied, since C_M(C(X)) = C(C(X) ∩ M) ⊆ C(X).
Condition (c) requires that, for any two closed sets E, F ∈ Φ,
C_M(C(C_M(E) ∪ F)) = C(C_M(E) ∪ C_M(F))
This is equivalent to the following condition on the consequence operator C, for any subsets E and F of L [6,18]:
C(C((E ∩ M) ∪ F) ∩ M) = C((E ∩ M) ∪ (F ∩ M))    (46)
Next, we consider a family S of sublanguages M of L which is closed under finite intersections and such that each operator C_M is an existential quantifier, i.e., satisfies in particular Equation (46) for all M ∈ S. These sublanguages M may be thought of as representing questions, namely, which propositions in M are asserted by a piece of information C(X) ∈ Φ. Further, we require that E = {C_M : M ∈ S} becomes a commutative, idempotent semigroup of extraction operators C_M: Φ → Φ. A sufficient condition for this is
C_M ∘ C_N = C_{M∩N}    (47)
for any pair M, N ∈ S. In this way, (Φ, E; ·, ∘) becomes a domain-free information algebra, an algebra associated with an entailment system (L, ⊢) and an intersection-semilattice S of sublanguages of L. This shows one way in which a system of logic may induce an information algebra: an information algebra is obtained from any consequence operator on a language L satisfying the additional conditions in Equations (46) and (47).
The condition in Equation (47) means, in terms of the consequence operator C, that for any closed set E,
C(C(E ∩ N) ∩ M) = C(C(E ∩ M) ∩ N) = C(E ∩ M ∩ N)    (48)
This last condition has a natural characterization in terms of interpolation [18]: a consequence operator C has the interpolation property with respect to a class of sublanguages S if, for all sublanguages M and N in S, from s ∈ M, X ⊆ N and s ∈ C(X) it follows that there exists a set of sentences Y ⊆ M ∩ N such that Y ⊆ C(X) and s ∈ C(Y). The set Y is called the interpolant between s and X. It can be shown [18] that a consequence operator C satisfies Equation (48) if and only if it has the interpolation property with respect to the family of sublanguages S. A similar characterization of the condition in Equation (46) is not known so far; in [18] a sufficient condition implying Equation (46) is given. Further, we refer to [19] for a discussion of information algebras related to various systems of logic.

4.4. Language and Models

In the previous section, information was defined on a purely syntactic level, as closed theories; no semantics was considered. In this section we add semantics to language. As before, consider a set L of sentences. Add now another set M, whose elements are considered to be models or possible worlds. We consider triples (L, M, ⊧), where ⊧ ⊆ L × M is a binary relation between sentences and models. We write m ⊧ s (m models s) instead of (s, m) ∈ ⊧. Such triples are called classifications in the book [20] on information flow, where the terms types and tokens are used instead of our terms sentences and models. They also occur in formal concept analysis [14,21], where the elements of L and M are called attributes and objects.
In our discussion, which is based on [9], we assume that the relation m ⊧ s means that the model m satisfies the sentence s, or that m is a model, or a possible interpretation, of s. An example is provided by propositional logic, where L is a propositional language, the elements of M are valuations, and m ⊧ s means that m satisfies s in the usual sense of propositional logic. Another example is provided by predicate logic: the language L is given by the formulae of predicate logic, and the models M are the structures used to interpret formulae. We refer to [6,22] for a discussion of information algebras relating to propositional and predicate logic, which are special cases of the structures considered here.
In a triple (L, M, ⊧), a set X of sentences determines a set of models by
r̂(X) = {m ∈ M : m ⊧ s for all s ∈ X}
We call r̂(X) the models of X. We assume here that there is no model which satisfies all sentences of L, that is, r̂(L) = ∅. Similarly, a set A of models determines a set of sentences by
ř(A) = {s ∈ L : m ⊧ s for all m ∈ A}
The set ř(A) is called the theory of A. Note that ř(∅) = L. The significance of r̂ and ř will become clear in a moment. We note that
A ⊆ r̂(X) ⟺ X ⊆ ř(A)
This means that r̂ and ř form a Galois connection [14]. We define further, for X ⊆ L and A ⊆ M,
C(X) = ř(r̂(X)),  C′(A) = r̂(ř(A))
It is well known that C and C′ are consequence operators on L and M, respectively [14]. We consider the family Φ of C-closed subsets of L and the family Φ′ of C′-closed subsets of M. Again, it is well known that Φ and Φ′ are complete lattices under inclusion [14].
We now construct two closely related information algebras based on the lattices Φ and Φ′. Note that the models r̂(X) of a set of sentences X are C′-closed sets in M [14]. We consider the models of X as the information about possible worlds expressed by the set of sentences X. Similarly, the theory ř(A) of a set of models A is C-closed. In Φ we define combination, as in the preceding Section 4.3, by
C(X) · C(Y) = C(X ∪ Y)
This corresponds to the information order C(X) ≤ C(Y) if C(X) ⊆ C(Y): the larger a theory, the more informative it is. In Φ′, however, we define
C′(A) · C′(B) = C′(A) ∩ C′(B)
This corresponds to the information order C′(A) ≤ C′(B) if C′(B) ⊆ C′(A): the smaller a model set is, the more precise and the more informative it is. This makes sense if the questions we consider search for an unknown possible world, an unknown model. In both cases, Φ and Φ′ become commutative, idempotent semigroups. In the first case the vacuous information is the set of tautologies C(∅); in the second case it is the whole model set M. The null element is the whole language L in the first case and the empty set in the second.
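A small Python sketch (ours, with an invented satisfaction relation ⊧) may help to make the Galois connection and the two closure operators concrete.

```python
# A minimal sketch of the Galois connection, assuming a finite classification:
# SAT records which model satisfies which sentence (a toy relation |=).

SENTENCES = ("s1", "s2", "s3")
MODELS = ("m1", "m2", "m3")
SAT = {("m1", "s1"), ("m1", "s2"), ("m2", "s1"), ("m3", "s3")}

def models_of(X):          # r^(X): models satisfying every sentence of X
    return frozenset(m for m in MODELS if all((m, s) in SAT for s in X))

def theory_of(A):          # r_v(A): sentences satisfied by every model of A
    return frozenset(s for s in SENTENCES if all((m, s) in SAT for m in A))

def C(X):                  # closure on sentences: C = r_v o r^
    return theory_of(models_of(X))

def C_prime(A):            # closure on models:   C' = r^ o r_v
    return models_of(theory_of(A))

print(sorted(models_of({"s1"})))        # ['m1', 'm2']
print(sorted(C({"s1"})))                # ['s1']: nothing more is entailed
print(sorted(C_prime({"m3"})))          # ['m3']
# Combination on the model side is intersection of C'-closed sets:
print(sorted(models_of({"s1"}) & models_of({"s2"})))   # ['m1']
```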
Next, we introduce extraction operators. We do this first with respect to Φ′; the corresponding extraction operators for Φ will be derived afterwards. For this purpose we consider a partition of M and use the associated equivalence relation, defined by m ≡ n iff m and n belong to the same block of the partition. Extraction then means to restrict an element φ ∈ Φ′ to this partition. This is expressed by the saturation operator ε with respect to the partition. If A is any subset of M, then
ε(A) = {m ∈ M : ∃ n ∈ A such that m ≡ n}
We require that the saturation of a C′-closed set is still C′-closed. This requirement is fulfilled if and only if closure and saturation commute, C′(ε(A)) = ε(C′(A)). Under this assumption, ε automatically satisfies conditions (a) to (c) of item 2 of the definition of a domain-free algebra; in other words, ε is an existential quantifier with respect to the intersection semilattice of C′-closed sets (see Section 3.2):
(a)
ε(∅) = ∅,
(b)
ε(φ) ≤ φ,
(c)
ε(ε(φ) · ψ) = ε(φ) · ε(ψ).
Note that here the information order is the reverse of set inclusion; the join in the information order is therefore set intersection.
Consider now a lattice Q of partitions of M, more precisely, a sublattice of the lattice part(M) of all partitions of M. We require that the following independence condition is satisfied (see Section 2.2, Equation (6)): if P, Q are two partitions from the lattice Q, then
m ≡_{P∧Q} n ⟺ ∃ l ∈ M : m ≡_P l, n ≡_Q l
Let ε_P be the saturation operator relative to the partition P and E the family of all operators ε_P for P ∈ Q. Then the independence condition above is sufficient for E to be a commutative, idempotent semigroup under composition, or, what is the same,
ε_P ∘ ε_Q = ε_{P∧Q}
So, (Φ′, E; ·, ∘) defined in this way is a domain-free information algebra.
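The following sketch (again a toy example of ours) shows saturation operators induced by coordinate partitions of a small multivariate model set; such coordinate partitions satisfy the independence condition above, and composing saturations corresponds to the meet of the partitions.

```python
from itertools import product

# A sketch assuming the multivariate case: models are pairs (u, v), and the
# partition P_x groups models agreeing on the coordinates in x, a subset of
# {0, 1}. Coordinate partitions satisfy the independence condition above.

M = tuple(product((0, 1), (0, 1, 2)))     # all models (u, v)

def saturate(A, x):
    """epsilon_{P_x}(A): all models equivalent to some member of A, where
    m == n iff m and n agree on the coordinates in x."""
    return frozenset(m for m in M
                     if any(all(m[i] == n[i] for i in x) for n in A))

A = frozenset({(0, 0), (0, 1)})
print(sorted(saturate(A, {0})))           # everything with u = 0

# epsilon_P o epsilon_Q = epsilon_{P meet Q}: the meet of the two coordinate
# partitions here is the one-block partition, so saturating over {0} and then
# {1} saturates to all of M, just as the trivial partition does.
assert saturate(saturate(A, {0}), {1}) == saturate(A, set()) == frozenset(M)
```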
To any φ ∈ Φ′ we may associate its theory ř(φ) ∈ Φ. We have already defined combination in Φ. Now, we also define extraction operators for P ∈ Q by
τ_P(ψ) = ř(ε_P(r̂(ψ)))
for ψ ∈ Φ. Then, with T = {τ_P : P ∈ Q}, the system (Φ, T; ·, ∘) also becomes a domain-free information algebra. In fact, the maps φ ↦ ř(φ) and ε_P ↦ τ_P define an isomorphism from (Φ′, E; ·, ∘) onto (Φ, T; ·, ∘), see [9].
We refer to [9] for a discussion of the associated labeled construction and of the relation to the notions of infomorphisms and channels in [20].
This is a generic way to construct information algebras from the models of propositional and predicate logic, of systems of linear equations and inequalities, etc. It extends and generalizes semantic information theory [23,24,25,26,27,28,29]. In all the cases mentioned, the lattice Q of partitions of M is usually obtained from a multivariate system (see Section 2.1), since these systems are based on sets of variables and their models are either vectors or sequences, see [6,9]. We refer also to [19], where structures called similarity models are introduced, using an approach similar to the one presented here to construct commutative and idempotent semigroups of extraction operators.

4.5. Morphisms

Usually, new algebraic structures are constructed from given algebras of the same type. For example, the Cartesian product of information algebras should again be an information algebra; this will be shown in Section 5.3. Here we consider maps between information algebras. This is also a step towards a category-theoretical study of information algebras. More on this subject is to be found in Section 5.3.
Consider two domain-free information algebras (Φ_1, E_1; ·, ∘) and (Φ_2, E_2; ·, ∘). To simplify notation we use the same symbols for combination and composition in both algebras; it will always be clear from the context which operation is meant. In the same way, ≤ denotes the order either in Φ_1 or in Φ_2. A map f: Φ_1 → Φ_2 is called order preserving if φ ≤ ψ in Φ_1 implies f(φ) ≤ f(ψ) in Φ_2. Let [Φ_1 → Φ_2] denote the set of order-preserving maps from Φ_1 into Φ_2. We define on [Φ_1 → Φ_2] a combination operation pointwise by the combination in Φ_2,
(f · g)(φ) = f(φ) · g(φ)
Clearly, f · g is still an order-preserving map, and this combination of order-preserving maps is associative, commutative and idempotent. So [Φ_1 → Φ_2] becomes a commutative and idempotent semigroup under combination. The unit element of combination is given by the map φ ↦ 1, where 1 is the unit in Φ_2, and the null element is given by the map φ ↦ 0, where 0 is the null element in Φ_2. The associated order is defined by f ≤ g if and only if f(φ) ≤ g(φ) for all φ ∈ Φ_1.
In order to introduce extraction operations on [Φ_1 → Φ_2], we consider the Cartesian product E_1 × E_2 and define maps (ε_1, ε_2)f: Φ_1 → Φ_2 by
((ε_1, ε_2)f)(φ) = ε_2(f(ε_1(φ)))
Clearly, the map (ε_1, ε_2)f is still order preserving. It further satisfies conditions (a) to (c) of item 2 in the definition of a domain-free information algebra. The family E_1 × E_2 of these operators is, moreover, commutative and idempotent under composition. So ([Φ_1 → Φ_2], E_1 × E_2; ·, ∘), with combination and extraction as defined above, is a domain-free information algebra. We shall argue in Section 5.3 that these maps are the correct choice of morphisms for making the class of domain-free information algebras a category.
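A brief sketch (ours, reusing small set algebras as stand-ins for Φ_1 and Φ_2) of pointwise combination and of the extraction of order-preserving maps:

```python
# A toy sketch: pieces of information are frozensets of a finite frame OMEGA,
# combination is intersection (information order: reverse inclusion), and a
# fixed extraction eps forgets the second coordinate by saturation.

OMEGA = frozenset({(u, v) for u in (0, 1) for v in (0, 1)})

def combine(a, b):
    return a & b

def eps(a):                       # saturation over the first coordinate
    return frozenset(m for m in OMEGA if any(m[0] == n[0] for n in a))

# Two order-preserving maps and their pointwise combination (f . g):
f = lambda a: a                   # the identity map
g = lambda a: eps(a)              # extraction itself is order preserving
fg = lambda a: combine(f(a), g(a))

def extract_map(h):               # ((eps, eps) h)(phi) = eps(h(eps(phi)))
    return lambda a: eps(h(eps(a)))

a = frozenset({(0, 0)})
print(sorted(fg(a)))              # [(0, 0)]: f . g is the finer of the two
print(sorted(extract_map(f)(a)))  # [(0, 0), (0, 1)]: only u = 0 is retained
```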

5. Compact Algebras

5.1. Finiteness

Computers can only treat “finite” information, and “infinite” information can often be approximated by “finite” elements. This point of view is developed in particular in domain theory, a subject at the interface between order theory and computer science [14,30,31,32,33]. Here, we treat the problem of finiteness in a similar way and show that in this respect information algebras extend domain theory and that many results of this theory can be extended to information algebras.
We discuss finiteness in the context of domain-free information algebras, although it could also be studied in the context of labeled algebras. The relation between finiteness in domain-free and labeled information algebras is interesting and not yet fully understood, see [8].
Consider then a domain-free information algebra (Φ, E; ·, ∘). In the set Φ of information elements we single out a subset Φ_f of elements which we consider to be finite. To qualify, they must of course satisfy a number of requirements, which we enumerate below. First, we need a new concept, which serves to express the idea that any element of Φ should be approximated by finite elements, namely that of a directed set: a nonempty subset X ⊆ Φ is called directed if for every pair φ, ψ ∈ X there is an element χ ∈ X such that φ, ψ ≤ χ.
Here are now the conditions we impose on Φ f :
(1)
Subalgebra: The system (Φ_f, E; ·, ∘) is a subalgebra of (Φ, E; ·, ∘); that is, it is closed under combination and extraction, and the unit 1 and the null element 0 belong to Φ_f.
(2)
Convergence: If X ⊆ Φ_f is directed, then the supremum ∨X exists in Φ.
(3)
Density: For all φ ∈ Φ and ε ∈ E,
ε(φ) = ∨{ψ ∈ Φ_f : ψ = ε(ψ) ≤ φ}
(4)
Compactness: If X ⊆ Φ_f is directed and ψ ∈ Φ_f satisfies ψ ≤ ∨X, then there is a φ ∈ X such that ψ ≤ φ.
If these conditions are satisfied, we call the system (Φ, Φ_f, E; ·, ∘) a compact information algebra. Sometimes weaker conditions are sufficient [6,8]: for instance, it is often sufficient that Φ_f is closed under combination, but not under extraction. Further, if the information algebra is supported, then the density condition also implies
φ = ∨{ψ ∈ Φ_f : ψ ≤ φ}
This says that any element of Φ can be approximated by the finite elements below it. Conversely, this is not sufficient for the density condition above to hold, which says that any element supported by some domain can be approximated by finite elements supported by the same domain; therefore, it is called weak density. We shall not discuss these refinements here, see [8].
As examples of compact information algebras we cite the algebra of strings, where the finite strings are the finite elements. In the information algebra of convex sets, the convex polyhedra are the finite elements, since any convex set can be approximated by the convex polyhedra it is contained in. Further, a domain-free information algebra (Φ, E; ·, ∘) where Φ is a finite set is always compact. However, in a set algebra, the cofinite sets (the complements of finite sets) do not qualify as finite elements, since only weak density is satisfied.
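For the string algebra, the approximation of an element by the directed set of its prefixes is easily sketched (our illustration; the particular string is arbitrary):

```python
# A sketch of the string algebra: phi <= psi iff phi is a prefix of psi, and
# the combination of consistent strings (one a prefix of the other) is the
# longer one; inconsistent pairs combine to the null element.

NULL = None                                   # stands for the null element 0

def combine(phi, psi):
    if phi is NULL or psi is NULL:
        return NULL
    if phi.startswith(psi):
        return phi
    if psi.startswith(phi):
        return psi
    return NULL                               # contradictory strings

# A long element as the supremum of the directed set of its finite prefixes:
target = "0110" * 6                           # stands in for an "infinite" string
prefixes = [target[:k] for k in range(len(target) + 1)]   # a directed set
sup = prefixes[0]
for p in prefixes:
    sup = combine(sup, p)
print(sup == target)                          # True: the supremum is recovered
```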
Important examples of compact information algebras are provided by consequence operators. Consider an entailment system (L, ⊢) and its associated consequence operator C (see Section 4.3). Assume that C satisfies the conditions in Equations (46) and (48) specified there, so that the system (Φ, E; ·, ∘) associated with the entailment system (L, ⊢) and an intersection-semilattice S of sublanguages is an information algebra. Assume now further that C satisfies the following condition:
C(X) = ∪{C(Y) : Y ⊆ X, Y finite}    (62)
This is equivalent to the condition that if X ⊢ s, then there is a finite subset Y of X such that Y ⊢ s. With this additional condition on C, the information algebra (Φ, E; ·, ∘) of closed sets becomes a compact algebra whose finite elements are the sets C(X), where X is a finite set (see [6]).
Another example of a compact information algebra is the ideal completion of a domain-free information algebra, see Section 3.3. In fact, the elements of Φ, or rather the principal ideals generated by the elements φ ∈ Φ, are the finite elements in the compact algebra (I_Φ, E; ·, ∘), see [6]. So, embedding an information algebra into its ideal completion is a way to compactify the algebra. We refer to [34] for a variant of this compactification, yielding an analogous result. On the other hand, the following theorem is an extension of a well-known representation theorem of domain theory [35]: if (Φ, Φ_f, E; ·, ∘) is a compact information algebra, then the ideal completion (I_{Φ_f}, Φ_f, E; ·, ∘) is a compact information algebra isomorphic to (Φ, Φ_f, E; ·, ∘). For a proof we refer to [8].
The ideal completion can also be seen from another perspective. The set
I(X) = {φ ∈ Φ : φ ≤ φ_1 · ⋯ · φ_m for some finite set {φ_1, …, φ_m} ⊆ X}
is the ideal generated by the set X ⊆ Φ. This I is a consequence operator on Φ. If we take Φ to be a language and the information order as an entailment relation, that is, X ⊢ φ if φ ∈ I(X), then I is the consequence operator associated with this entailment relation. Further, for any ε ∈ E, let Φ_ε be the set of all elements of Φ supported by ε, that is, Φ_ε = {φ ∈ Φ : φ = ε(φ)}. The family of the Φ_ε for ε ∈ E is closed under finite intersections. It turns out that the family of I-closed sets is just the set I_Φ of ideals of Φ. Moreover, the consequence operator I satisfies the conditions in Equations (46) and (48) and also Equation (62) above, see [6]. Therefore, the information algebra generated by the consequence operator I together with the family of sublanguages {Φ_ε : ε ∈ E} is just the ideal completion of the domain-free information algebra (Φ, E; ·, ∘). In this way, any compact information algebra is generated by a consequence operator, as described in Section 4.3.
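The ideal I(X) can be computed directly in a small example; the following sketch (ours) uses an invented three-element frame with intersection as combination, so that the information order is reverse inclusion.

```python
from itertools import combinations

# A sketch of the ideal I(X) in a tiny set algebra: Phi consists of all
# subsets of FRAME, combination is intersection, and phi <= psi iff psi is a
# subset of phi (smaller sets are more informative).

FRAME = frozenset({1, 2, 3})
PHI = [frozenset(s) for r in range(4) for s in combinations(sorted(FRAME), r)]

def leq(phi, psi):
    return psi <= phi

def ideal(X):
    """I(X): all phi below the combination of some finite subset of X."""
    out = set()
    for r in range(1, len(X) + 1):
        for fin in combinations(X, r):
            top = FRAME
            for phi in fin:                 # combination phi_1 . ... . phi_m
                top = top & phi
            out.update(p for p in PHI if leq(p, top))
    return out

X = [frozenset({1, 2}), frozenset({2, 3})]
print(sorted(map(sorted, ideal(X))))
# -> [[1, 2], [1, 2, 3], [2], [2, 3]]: the ideal consists of all elements
# below the combination {1,2} . {2,3} = {2}, including the vacuous {1,2,3}.
```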
If (Φ, Φ_f, E; ·, ∘) is a compact information algebra, then Φ is a complete lattice under the information order, see [6]. Further, ψ ∈ Φ_f if and only if, for every directed set X contained in Φ, ψ ≤ ∨X implies that there is a φ ∈ X such that ψ ≤ φ. In order theory, elements of a complete partially ordered set (cpo) satisfying this condition are called finite elements [14]; thus, the order-theoretic concept of finiteness corresponds to our concept. Also, ψ ∈ Φ_f if and only if, for every set X ⊆ Φ, ψ ≤ ∨X implies that there is a finite subset F of X such that ψ ≤ ∨F. In order theory, elements of a cpo satisfying this condition are called compact elements [14]; thus, the order-theoretic concept of compactness also corresponds to our concept of finite elements. Finally, all of this together means that Φ is an algebraic lattice, see [14], and Φ∖{0} is an algebraic cpo in which φ ∨ ψ exists in Φ∖{0} whenever φ · ψ ≠ 0. Such a structure is called a Scott-Ershov domain. This links domain-free information algebras to domain theory [35]. Compact information algebras are domains with an additional operation, namely extraction. Many results from domain theory extend to compact information algebras. In particular, compactness can be generalized to continuity by use of the way-below relation, see [8,34].

5.2. Continuous Maps

Maps between sets of pieces of information also represent information. In fact, we have seen in Section 4.5 that the order-preserving maps between domain-free information algebras themselves form an information algebra. In the case of compact information algebras the associated morphisms are not simply order-preserving maps, but continuous maps. The concept of a continuous map is inherited from cpos: let (Φ_1, Φ_{1,f}, E_1; ·, ∘) and (Φ_2, Φ_{2,f}, E_2; ·, ∘) be two compact information algebras. A map f: Φ_1 → Φ_2 is called continuous if, for all φ ∈ Φ_1,
f(φ) = ∨{f(ψ) : ψ ∈ Φ_{1,f}, ψ ≤ φ}
This is equivalent to the condition that f(∨X) = ∨f(X) for every directed set X ⊆ Φ_1. The extraction operators of a compact information algebra (Φ, Φ_f, E; ·, ∘) are continuous, that is, ε(∨X) = ∨ε(X) for any directed set X and any ε ∈ E, see [6]. Further, every continuous map is order preserving.
In just the same way as we defined an information algebra of order-preserving maps, we can also obtain an information algebra of continuous maps. Let [Φ_1 → Φ_2]_c be the set of continuous maps between the two compact algebras (Φ_1, Φ_{1,f}, E_1; ·, ∘) and (Φ_2, Φ_{2,f}, E_2; ·, ∘). Define combination and extraction as in Section 4.5 for the case of order-preserving maps: for f, g ∈ [Φ_1 → Φ_2]_c and ε_1 ∈ E_1, ε_2 ∈ E_2,
(1)
Combination: (f · g)(φ) = f(φ) · g(φ),
(2)
Extraction: ((ε_1, ε_2)f)(φ) = ε_2(f(ε_1(φ))).
With these operations, ([Φ_1 → Φ_2]_c, E_1 × E_2; ·, ∘) becomes a domain-free information algebra, a subalgebra of the algebra of order-preserving maps of Section 4.5. In fact, this is a compact algebra. Its finite elements are defined using maps s: Y → Φ_{2,f}, where Y is a finite subset of Φ_{1,f}. These maps are called simple. Let S be the set of simple maps. For any s ∈ S, let Y(s) be its domain and define
ŝ(φ) = ∨{s(ψ) : ψ ∈ Y(s), ψ ≤ φ}
If the set on the right-hand side is empty, then ŝ(φ) = 1. It turns out that these maps ŝ are the finite elements of the information algebra of continuous maps, see [6].
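The lift ŝ of a simple map can be sketched as follows (our toy example, in the set algebra used in the earlier sketches; the sample points of s are invented):

```python
# A sketch of a simple map and its lift s^, assuming a toy set algebra for
# Phi_2: elements are subsets of FRAME ordered by reverse inclusion, so the
# supremum is intersection and the unit 1 is the whole frame.

FRAME = frozenset({1, 2, 3, 4})

def leq(phi, psi):                 # phi <= psi iff psi is a subset of phi
    return psi <= phi

# A simple map: finitely many sample points Y(s) with prescribed finite values.
s = {frozenset({1, 2}): frozenset({1, 2, 3}),
     frozenset({1}): frozenset({1, 3})}

def s_hat(phi):
    """s^(phi) = sup { s(psi) : psi in Y(s), psi <= phi }; the unit if empty."""
    out = FRAME                    # the unit element 1
    for psi, value in s.items():
        if leq(psi, phi):
            out = out & value      # supremum in reverse-inclusion order
    return out

print(sorted(s_hat(frozenset({1}))))  # both sample points lie below phi: [1, 3]
print(sorted(s_hat(FRAME)))           # no sample point below 1: the unit FRAME
```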

5.3. Categories of Information Algebras

To conclude this review, we consider the following two categories of information algebras:
(1)
The category IA has as its objects domain-free information algebras (Φ, E; ·, ∘) and as its morphisms order-preserving maps.
(2)
The category CompIA has as its objects compact information algebras (Φ, Φ_f, E; ·, ∘) and as its morphisms continuous maps.
The category CompIA is a subcategory of IA. It turns out that these categories are both Cartesian closed.
This means that these categories satisfy the following three conditions:
(1)
both have terminal objects,
(2)
both have direct products,
(3)
both have exponentials.
An object T is a terminal object if there is exactly one morphism from any other object to T. In our case this is simply the information algebra ({1}, {id}; ·, ∘), where Φ and E both consist of only one element.
Secondly, the Cartesian product (Φ_1 × Φ_2, E_1 × E_2; ·, ∘) of two information algebras, where both combination and extraction are defined component-wise, belongs to IA. If the two algebras are compact, then (Φ_1 × Φ_2, Φ_{1,f} × Φ_{2,f}, E_1 × E_2; ·, ∘) is compact too, hence belongs to CompIA. In fact, these Cartesian products are the categorical direct products. That is, if the projections p_1 and p_2 are defined in the usual way, p_1(φ_1, φ_2) = φ_1 and p_2(φ_1, φ_2) = φ_2, then, for any pair of morphisms (order-preserving or continuous maps) f_1 and f_2 from a third information algebra (Φ, E; ·, ∘) into (Φ_1, E_1; ·, ∘) and (Φ_2, E_2; ·, ∘), respectively, there is a unique morphism f from (Φ, E; ·, ∘) into (Φ_1 × Φ_2, E_1 × E_2; ·, ∘) such that p_1 ∘ f = f_1 and p_2 ∘ f = f_2.
Finally, both categories IA and CompIA have exponentials. In fact, the information algebras ([Φ_1 → Φ_2], E_1 × E_2; ·, ∘) and ([Φ_1 → Φ_2]_c, E_1 × E_2; ·, ∘) of order-preserving and continuous maps, respectively, are the exponentials in their respective categories. Define the map eval: [Φ_1 → Φ_2]_c × Φ_1 → Φ_2 by
eval(f, φ) = f(φ).
This map is continuous, hence a morphism in CompIA. Next, let f be a morphism f: Φ × Φ_1 → Φ_2 for a third compact information algebra (Φ, Φ_f, E; ·, ∘). Then define a map λf: Φ → [Φ_1 → Φ_2]_c by
(λf(χ))(φ) = f(χ, φ)
It turns out that the map λf is continuous, hence a morphism of the category CompIA, and that eval ∘ (λf × id_{Φ_1}) = f. The same holds with respect to ordinary information algebras. This shows that the information algebras ([Φ_1 → Φ_2], E_1 × E_2; ·, ∘) and ([Φ_1 → Φ_2]_c, E_1 × E_2; ·, ∘) are indeed the exponentials in their respective categories.
So, indeed both categories IA and CompIA are Cartesian closed. For proofs we refer to the manuscript [8].
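At bottom, the exponential structure amounts to evaluation and currying. The following generic sketch (ours) checks the defining equation pointwise on a toy map; it is a plain functional rendering, not a verification of continuity.

```python
from typing import Callable, TypeVar

# A sketch of the exponential structure: eval and currying, written
# generically over the element types of the algebras involved.

Phi = TypeVar("Phi")
Phi1 = TypeVar("Phi1")
Phi2 = TypeVar("Phi2")

def ev(f: Callable[[Phi1], Phi2], phi: Phi1) -> Phi2:
    """eval(f, phi) = f(phi); named ev to avoid the Python builtin."""
    return f(phi)

def curry(f: Callable[[Phi, Phi1], Phi2]) -> Callable[[Phi], Callable[[Phi1], Phi2]]:
    """lambda f: chi |-> (phi |-> f(chi, phi))."""
    return lambda chi: (lambda phi: f(chi, phi))

# eval o (curry(f) x id) = f, checked pointwise on a toy combination map:
f = lambda chi, phi: chi & phi
chi, phi = frozenset({1, 2}), frozenset({2, 3})
assert ev(curry(f)(chi), phi) == f(chi, phi)
```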

6. Outlook

In this section we address a few issues not discussed above. For a start, we consider the question of what role information algebras could play in a general theory of information as sketched, for instance, in [36]. It seems that the basic pragmatic purpose of information is to answer questions. This is already implicit in Shannon’s theory, where the question is which symbol or sequence of symbols is transmitted over a channel. This implies that one should be concerned with extracting the part of the information relevant to the question one is interested in. Information extraction is indeed a common activity in internet search, query answering in databases, constraint solving, etc. Information also usually comes in pieces, from different sources and sensors, from varying files or databases. Therefore, aggregation is an unavoidable operation in processing information. Any theory of information must in one way or another cover these issues; whether it does so in the way information algebras propose is another question. The fact that classical information structures of Computer Science like relational databases, different systems of logic, systems of equations, constraints, etc., can be modeled by information algebras is a strong argument for their relevance. Also, many studies of semantic information revolve around the concept of questions [26,27,28,29,37,38,39,40,41,42], although the algebraic structure of extraction is not explicitly discussed there. These studies are also closely connected to semantic information theory, as discussed for instance in [23,24,25,43]. These last papers study language and information in a way similar to Section 4.4, where it is shown how language expresses information and how information algebras arise from language and its semantics. Finally, we have shown that information algebras are also closely related to domain theory, another important part of Computer Science modeling information and computation. So, information algebras are very tightly linked to important pragmatic information concepts and information theories of Computer Science and Philosophy. This will be underlined even more strongly below.
The classical information theories are those of Shannon [44] and of Solomonoff, Kolmogorov and Chaitin [45,46,47,48]. All these theories address the question of how to measure the information content of a message or an object (both seen as sequences of symbols). In this sense, they are very limited theories of information. In information algebras we have an information order, that is, a qualitative comparison of information content. We may wonder whether we can associate a quantitative measure of information content with the elements of an information algebra. This measure should respect the information order. This seems only possible if the information considered is in some sense “finite”, that is, can be represented by a finite number of bits. In [9] it is proposed to use the Hartley measure to measure the uncertainty of a partial answer to a given question in an atomistic information algebra, and then to measure information content by the reduction of uncertainty. As an illustration, consider the very particular case of a string algebra, see Section 3.1; to simplify matters, we consider only binary strings. Suppose we look for a string of length n. There are 2^n possible strings of this length. If we know nothing about our string, then the measure of uncertainty according to Hartley [49] is the logarithm of the number of possible cases, that is, n. If as information we are given the prefix of length m of the string, there remain only 2^(n−m) possible strings, that is, the remaining uncertainty is n − m. The reduction of uncertainty is then n − (n − m) = m. So the information content of the prefix of length m about the string of length n is exactly m bits. This makes sense in this particular case, as it does in more general situations [9]. Shannon’s theory can be seen as an extension of Hartley’s approach to a probabilistic situation. Probabilistic information will be touched on briefly below, although we claim that information is primarily not probabilistic. It turns out that Shannon’s way of measuring information content can consistently be applied to information algebras if some finiteness conditions are satisfied. It is an open question whether and how information algebras can be related to algorithmic information theory.
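The Hartley calculation for the binary string example can be written out in a few lines (a trivial sketch, with invented values for n and m):

```python
from math import log2

# A sketch of the Hartley calculation in the binary string algebra: a known
# prefix of length m of an unknown string of length n reduces the uncertainty
# from n bits to n - m bits, an information content of exactly m bits.

def hartley(num_possibilities: int) -> float:
    return log2(num_possibilities)

n, m = 16, 5
before = hartley(2 ** n)            # n bits of uncertainty
after = hartley(2 ** (n - m))       # n - m bits remain, given the prefix
print(before - after)               # 5.0 bits of information in the prefix
```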
It has been mentioned at the beginning that the axiomatics of information algebras were originally motivated by local computation schemes [1,2]. The inference problem to be solved in a labeled information algebra (Ψ, Q; d, ·, π) can be formulated as follows: given φ_1, …, φ_n ∈ Ψ and x ∈ Q, compute
π_x(φ_1 · ⋯ · φ_n)
Query answering in relational databases, constraint solving, solving systems of linear equations, computing Fourier transforms and many other problems can be put into this form, see [7] for many further examples. The factors of this combination have domains x_i = d(φ_i). Naive combination leads to an element on the domain
d(φ_1 · ⋯ · φ_n) = x_1 ∨ ⋯ ∨ x_n
This is usually computationally infeasible, since the complexity of extraction and combination often (although not always) grows exponentially with the size of the domain [7]. Local computation schemes organize the computation in such a way that all operations can be executed “locally” on the domains x_1, …, x_n of the factors of the combination. This is already possible in valuation algebras, where the idempotency property is missing [5,6,7]; idempotency, however, allows the computational architectures to be simplified further. Local computation is also related to the idea of (conditional) independence of information, a concept known especially from probability theory [50,51], but also from relational database theory [52,53]. This indicates that the concept can be generalized to valuation or information algebras [6,54,55]. In general, this subject is treated in the framework of multivariate systems; the general framework, however, would be that of a lattice of questions or domains. In this respect not much has been done so far, indicating a fruitful topic of research.
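The following rough sketch (ours; a naive variable-elimination scheme, not one of the local computation architectures of [1,2,5,7]) conveys the idea for the relational instance used earlier: variables outside the query domain are eliminated one at a time, so every intermediate combination stays local.

```python
from itertools import product

# A rough sketch of the local-computation idea for relations: instead of
# joining all factors on the full domain x1 | ... | xn, variables outside the
# query are eliminated one by one, keeping every intermediate join small.

def join(R, S):
    return [dict(r, **s) for r, s in product(R, S)
            if all(r[v] == s[v] for v in r.keys() & s.keys())]

def project(R, x):
    return [dict(p) for p in {tuple(sorted((v, t[v]) for v in x)) for t in R}]

def eliminate(factors, query):
    all_vars = set().union(*(set(next(iter(F), {})) for F in factors))
    for v in all_vars - set(query):
        bucket = [F for F in factors if any(v in t for t in F)]
        rest = [F for F in factors if F not in bucket]
        combined = [{}]
        for F in bucket:                       # join only the local bucket
            combined = join(combined, F)
        keep = {u for t in combined for u in t} - {v}
        factors = rest + [project(combined, keep)]
    result = [{}]
    for F in factors:
        result = join(result, F)
    return project(result, set(query))

F1 = [{"a": 1, "b": 2}, {"a": 2, "b": 2}]
F2 = [{"b": 2, "c": 3}]
print(eliminate([F1, F2], {"c"}))              # -> [{'c': 3}]
```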
In Section 3.5 we have seen that atomistic labeled information algebras can be embedded into a generalized relational information algebra, that is, an algebra of subsets. The question arises whether any information algebra (labeled or domain-free) can be represented by an information algebra of sets, more particularly by a generalized relational algebra in the labeled case, and how such a set algebra can be characterized. This is the subject of representation theory. In lattice theory, the well-known Stone and Priestley duality theories solve exactly this problem by giving precise topological characterizations of the representing set algebras for Boolean algebras and distributive lattices, respectively (see [14]). Is it possible to extend these theories to distributive-lattice or Boolean information algebras, i.e., algebras where Φ is a distributive lattice or a Boolean algebra? These questions remain open to a large extent. Similar questions have been treated for special information algebras like cylindric algebras, see [17].
Finally, information, undoubtedly, is often uncertain. In the framework of information algebras, uncertain information can be modeled by random maps with values in an information algebra, see [6,56]. These random maps themselves again form information algebras. A possible interpretation of a random map is that a piece of information may depend on some unknown assumptions. One may then formulate hypotheses and study, given a random map (an uncertain piece of information), under which assumptions, and hence how likely it is, that the hypothesis holds. This induces a kind of distribution function for the random map; it is a monotone function of order ∞. Such functions are studied in capacity theory [57,58], and many results from capacity theory can be used to study random maps and uncertain information [6,8,59]. The theory of uncertain information as understood here is a generalization of what is known as Dempster-Shafer theory [60,61]; in fact, information algebras provide the natural general framework for this theory. Much of this theory, especially the beautiful theory of allocations of probability (see [62]), can be extended to information algebras. Random maps are also related to the Theory of Hints [63] and to Probabilistic Argumentation Systems [64]. Finally, from the point of view of measuring information content, this probabilistic approach also puts Shannon’s theory of information in a more general perspective, see [65] for a preliminary discussion. Much remains to be done here.
To conclude, we mention a few other issues where, to the best of our knowledge, little or nothing is known so far. One is the introduction of topology into information algebras, much as is done for domains (see [66]). Another is the question of effective information algebras, where combination and extraction are effectively computable; this should bring information algebras into the focus of computability theory. The theory is still relatively new, and much remains to be done.

Acknowledgments

We thank the anonymous referees for their important and valuable comments.

Author Contributions

Juerg Kohlas is responsible for the concept of the paper, a large part of the results presented and the writing. Juerg Schmid has contributed to the mathematical soundness of the theory. Both authors have read and approved the final published manuscript.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Shenoy, P.P.; Shafer, G. Propagating Belief Functions Using Local Computation. IEEE Expert 1986, 1, 43–52.
2. Shenoy, P.P.; Shafer, G. Axioms for Probability and Belief Function Propagation. In Classic Works of the Dempster-Shafer Theory of Belief Functions; Yager, R.R., Liu, L., Eds.; Springer: Berlin/Heidelberg, Germany, 2008; pp. 499–528.
3. Lauritzen, S.L.; Spiegelhalter, D.J. Local Computations with Probabilities on Graphical Structures and their Application to Expert Systems. J. R. Stat. Soc. Ser. B 1988, 50, 157–224.
4. Shenoy, P.P. A Valuation-Based Language for Expert Systems. Int. J. Approx. Reason. 1989, 3, 383–411.
5. Kohlas, J.; Shenoy, P.P. Computation in Valuation Algebras. In Handbook of Defeasible Reasoning and Uncertainty Management Systems, Volume 5: Algorithms for Uncertainty and Defeasible Reasoning; Kohlas, J., Moral, S., Eds.; Springer: Dordrecht, The Netherlands, 2000; pp. 5–39.
6. Kohlas, J. Information Algebras: Generic Structures for Inference; Springer: Berlin, Germany, 2003.
7. Pouly, M.; Kohlas, J. Generic Inference. A Unified Theory for Automated Reasoning; Wiley: Hoboken, NJ, USA, 2011.
8. Kohlas, J.; Schmid, J. Research Notes: An Algebraic Theory of Information; Technical Report 06-03; Department of Informatics, University of Fribourg: Fribourg, Switzerland, 2013; Available online: http://diuf.unifr.ch/drupal/tns/sites/diuf.unifr.ch.drupal.tns/files/file/main_0.pdf (accessed on 9 April 2014).
9. Kohlas, J.; Schneuwly, C. Information Algebra. In Formal Theories of Information: From Shannon to Semantic Information Theory and General Concepts of Information; Sommaruga, G., Ed.; Lecture Notes in Computer Science, Volume 5363; Springer: Berlin, Germany, 2009; pp. 95–127.
10. Grätzer, G. General Lattice Theory; Academic Press: London, UK, 1978.
11. Kohlas, J.; Wilson, N. Semiring Induced Valuation Algebras: Exact and Approximate Local Computation Algorithms. Artif. Intell. 2008, 172, 1360–1399.
12. Cignoli, R. Quantifiers on Distributive Lattices. Discret. Math. 1991, 96, 183–197.
13. Halmos, P.R.; Givant, S. Logic as Algebra; Dolciani Mathematical Expositions Book 21; The Mathematical Association of America: Washington, DC, USA, 1998.
14. Davey, B.; Priestley, H. Introduction to Lattices and Order; Cambridge University Press: Cambridge, UK, 1990.
15. Halmos, P.R. Algebraic Logic; Chelsea: New York, NY, USA, 1962.
16. Plotkin, B. Universal Algebra, Algebraic Logic, and Databases; Mathematics and Its Applications, Volume 272; Springer: Dordrecht, The Netherlands, 1994.
17. Henkin, L.; Monk, J.D.; Tarski, A. Cylindric Algebras; North-Holland: Amsterdam, The Netherlands, 1971.
18. Kohlas, J.; Staerk, R. Information Algebras and Consequence Operators. Log. Universalis 2007, 1, 139–165.
19. Wilson, N.; Mengin, J. Logical Deduction Using the Local Computation Framework. In Proceedings of the European Conference on Symbolic and Quantitative Approaches to Reasoning and Uncertainty, ECSQARU’99, London, UK, 5–9 July 1999; pp. 386–396.
20. Barwise, J.; Seligman, J. Information Flow: The Logic of Distributed Systems; Number 44 in Cambridge Tracts in Theoretical Computer Science; Cambridge University Press: Cambridge, UK, 1997.
21. Ganter, B.; Wille, R. Formal Concept Analysis; Translated to English by C. Franzke; Springer: Berlin, Germany, 1999.
22. Langel, J. Logic and Information: A Unifying Approach to Semantic Information Theory. Ph.D. Thesis, University of Fribourg, Fribourg, Switzerland, 2010.
23. Bar-Hillel, Y.; Carnap, R. An Outline of a Theory of Semantic Information; Technical Report 247; Research Laboratory of Electronics, Massachusetts Institute of Technology: Cambridge, MA, USA, 1952.
24. Bar-Hillel, Y.; Carnap, R. Semantic Information. Br. J. Philos. Sci. 1953, 4, 147–157.
25. Bar-Hillel, Y. Language and Information: Selected Essays on Their Theory and Application; Addison-Wesley: Boston, MA, USA, 1964.
26. Hintikka, J. On Semantic Information. In Physics, Logic, and History, Proceedings of the First International Colloquium, Denver, CO, USA, 16–20 May 1966; Hintikka, J., Suppes, P., Eds.; Springer: New York, NY, USA, 1970; pp. 3–27.
27. Hintikka, J. Surface Information and Depth Information. In Information and Inference; Hintikka, J., Suppes, P., Eds.; Springer: Dordrecht, The Netherlands, 1970; pp. 263–297.
28. Hintikka, J. The Semantics of Questions and the Questions of Semantics; Acta Philosophica Fennica, Volume 28; North-Holland: Amsterdam, The Netherlands, 1976.
29. Hintikka, J. Answers to Questions. In Questions; Hiz, H., Ed.; D. Reidel: Dordrecht, The Netherlands, 1978; pp. 279–300.
30. Scott, D.S. Outline of a Mathematical Theory of Computation; Technical Monograph PRG-2; Oxford University Computing Laboratory, Programming Research Group: Oxford, UK, 1970.
31. Scott, D.S. Continuous Lattices. In Toposes, Algebraic Geometry and Logic; Lecture Notes in Mathematics; Springer: Berlin/Heidelberg, Germany, 1972; pp. 97–136.
32. Scott, D.S. Domains for Denotational Semantics. In Automata, Languages and Programming; Nielsen, M., Schmitt, E.M., Eds.; Springer: Berlin/Heidelberg, Germany, 1982; pp. 577–610.
33. Scott, D.S. A New Category? Domains, Spaces and Equivalence Relations. Unpublished manuscript, Computer Science Department, Carnegie Mellon University, PA, USA, 1996.
34. Guan, X.; Li, Y. On Two Types of Continuous Information Algebras. Int. J. Uncertain. Fuzziness Knowl.-Based Syst. 2012, 20, 655–671.
35. Stoltenberg-Hansen, V.; Lindström, I.; Griffor, E. Mathematical Theory of Domains; Cambridge University Press: Cambridge, UK, 1994.
36. Burgin, M. Theory of Information: Fundamentality, Diversity and Unification; World Scientific: Hackensack, NJ, USA, 2010.
37. Groenendijk, J.; Stokhof, M. Studies on the Semantics of Questions and the Pragmatics of Answers. Ph.D. Thesis, Universiteit van Amsterdam, Amsterdam, The Netherlands, 1984.
38. Groenendijk, J.; Stokhof, M. Questions. In Handbook of Logic and Language, 2nd ed.; van Benthem, J., ter Meulen, A., Eds.; Elsevier: London, UK, 2010; Chapter 25; pp. 1059–1131.
39. Groenendijk, J. Questions and Answers: Semantics and Logic. In Proceedings of the 2nd CologNET-ElsET Symposium, Questions and Answers: Theoretical and Applied Perspectives, Amsterdam, The Netherlands, 18 December 2003; pp. 12–23.
40. Groenendijk, J. The Logic of Interrogation: Classical Version. In Proceedings of the Ninth Conference on Semantics and Linguistic Theory, Santa Cruz, CA, USA, 19–21 February 1999; pp. 109–126.
41. Hintikka, J. Questions about Questions. In Semantics and Philosophy; Munitz, M., Unger, P., Eds.; New York University Press: New York, NY, USA, 1974; pp. 103–158.
42. Van Rooij, R. Comparing Questions and Answers: A Bit of Logic, a Bit of Language and Some Bits of Information. Available online: http://staff.science.uva.nl/ vanrooy/Sources.pdf (accessed on 8 April 2014).
43. Barwise, J.; Seligman, J. Information Flow: The Logic of Distributed Systems; Cambridge University Press: Cambridge, UK, 1997.
44. Shannon, C. A Mathematical Theory of Communication. Bell Syst. Tech. J. 1948, 27, 379–432.
45. Kolmogorov, A.N. Three Approaches to the Quantitative Definition of Information. Int. J. Comput. Math. 1968, 2, 1–4.
46. Kolmogorov, A.N. Logical Basis for Information Theory and Probability. IEEE Trans. Inf. Theory 1968, 14, 662–664.
47. Chaitin, G. Algorithmic Information Theory; Cambridge University Press: Cambridge, UK, 1987.
48. Li, M.; Vitanyi, P. An Introduction to Kolmogorov Complexity and Its Applications; Springer: New York, NY, USA, 1993.
49. Hartley, R. Transmission of Information. Bell Syst. Tech. J. 1928, 535–563.
50. Cowell, R.G.; Dawid, A.P.; Lauritzen, S.L.; Spiegelhalter, D.J. Probabilistic Networks and Expert Systems: Exact Computational Methods for Bayesian Networks; Springer: Berlin/Heidelberg, Germany, 1999.
51. Dawid, A.P. Separoids: A Mathematical Framework for Conditional Independence and Irrelevance. Ann. Math. Artif. Intell. 2001, 32, 335–372.
52. Maier, D. The Theory of Relational Databases; Pitman: London, UK, 1983.
53. Beeri, C.; Fagin, R.; Maier, D.; Yannakakis, M. On the Desirability of Acyclic Database Schemes. J. ACM 1983, 30, 479–513.
54. Studeny, M. Formal Properties of Conditional Independence in Different Calculi of AI. In Symbolic and Quantitative Approaches to Reasoning and Uncertainty; Clarke, M., Kruse, R., Moral, S., Eds.; Lecture Notes in Computer Science, Volume 747; Springer: Berlin, Germany, 1993; pp. 341–348.
55. Shenoy, P. Conditional Independence in Valuation-Based Systems. Int. J. Approx. Reason. 1994, 10, 203–234.
56. Kohlas, J.; Eichenberger, C. Uncertain Information. In Formal Theories of Information: From Shannon to Semantic Information Theory and General Concepts of Information; Sommaruga, G., Ed.; Lecture Notes in Computer Science, Volume 5363; Springer: Berlin, Germany, 2009; pp. 128–160.
57. Choquet, G. Theory of Capacities. Annales de l’Institut Fourier 1953–1954, 5, 131–295.
58. Choquet, G. Lectures on Analysis; Benjamin: New York, NY, USA, 1969.
59. Kohlas, J. Support- and Plausibility Functions Induced by Filter-Valued Mappings. Int. J. Gen. Syst. 1993, 21, 343–363.
60. Dempster, A. Upper and Lower Probabilities Induced by a Multivalued Mapping. Ann. Math. Stat. 1967, 38, 325–339.
61. Shafer, G. A Mathematical Theory of Evidence; Princeton University Press: Princeton, NJ, USA, 1976.
62. Shafer, G. Allocations of Probability. Ann. Prob. 1979, 7, 827–839.
63. Kohlas, J.; Monney, P. A Mathematical Theory of Hints: An Approach to the Dempster-Shafer Theory of Evidence; Lecture Notes in Economics and Mathematical Systems, Volume 425; Springer: Berlin/Heidelberg, Germany, 1995.
64. Haenni, R.; Kohlas, J.; Lehmann, N. Probabilistic Argumentation Systems. In Handbook of Defeasible Reasoning and Uncertainty Management Systems, Volume 5: Algorithms for Uncertainty and Defeasible Reasoning; Kohlas, J., Moral, S., Eds.; Springer: Dordrecht, The Netherlands, 2000; pp. 221–287.
65. Pouly, M.; Kohlas, J.; Ryan, P. Generalized Information Theory for Hints. Int. J. Approx. Reason. 2013, 54, 17–34.
66. Gierz, G. Continuous Lattices and Domains; Cambridge University Press: Cambridge, UK, 2003.
