Article

The Singular Value Decomposition over Completed Idempotent Semifields

by
Francisco J. Valverde-Albacete
*,† and
Carmen Peláez-Moreno
Department of Signal Theory and Communications, Universidad Carlos III de Madrid, 28911 Leganés, Spain
*
Author to whom correspondence should be addressed.
These authors contributed equally to this work.
Mathematics 2020, 8(9), 1577; https://doi.org/10.3390/math8091577
Submission received: 19 July 2020 / Revised: 2 September 2020 / Accepted: 4 September 2020 / Published: 12 September 2020

Abstract

In this paper, we provide a basic technique for Lattice Computing: an analogue of the Singular Value Decomposition for rectangular matrices over complete idempotent semifields (i-SVD). These algebras are already complete lattices and many of their instances—the complete schedule algebra or completed max-plus semifield, the tropical algebra, and the max-times algebra—are useful in a range of applications, e.g., morphological processing. We further the task of eliciting the relation between the i-SVD and the extension of Formal Concept Analysis to complete idempotent semifields (K-FCA) started in a prior work. We find that, for a matrix with entries in a complete idempotent semifield, the Galois connection at the heart of K-FCA provides two bases of left- and right-singular vectors to choose from for reconstructing the matrix. These are join-dense or meet-dense sets of object or attribute concepts of the concept lattice created by the connection, and they are almost surely not pairwise orthogonal. We conclude with an attempted analogue of the fundamental theorem of linear algebra that gathers all these results, and discuss it in the wider setting of matrix factorization.

1. Introduction

Lattice Computing (LC) [1] intends to provide “an evolving collection of tools and methodologies that process lattice-ordered data”, as a means of establishing an information processing paradigm belonging to the wider field of Computational Intelligence [2], with an explicit aim at modelling Cyber Physical Systems [3].
In this paper, we present a fundamental tool for the factorization of matrices by developing the Singular Value Decomposition (SVD), a staple technique of data analysis [4,5], for matrices over idempotent semifields [6,7]. In it we both use computing in lattices—since the matrices take values in an idempotent semifield, an algebraic structure which is similar to, but distinct from, a fuzzy semiring—and computing with lattices—since our results highlight the role of the lattices of Formal Concept Analysis (FCA) [8] in reconstructing matrices.

1.1. Motivation

For that purpose, consider a linear form between spaces $R : Y \to X$ and its adjoint $R^* : X \to Y$. It is extremely interesting to be able to describe these transformations as simply as possible—e.g., ideally with diagonal matrices—but to do so we have to specify further the algebras in which such linear forms can be represented and computed with.
A generalization of the properties of standard matrices over fields shows that a sensible point to start from is the class of commutative semirings, which already have the addition and multiplication operators needed to carry out matrix–vector operations (Section 2.1). The diagram of Figure 1 sketches the organization of commutative semirings (cf. [9] (Figure 2); see also Section 2.1.1). However, generic semirings are too abstract and do not offer many tools to work with. One thing we can do, though, is to suppose that the spaces are free, so that the domain and range spaces are products of copies of the basic semifield, $Y \cong K^m$ and $X \cong K^g$. We treat all vectors as columns (that is, right semi-vector spaces), so that the linear forms above can be represented as matrices $R \in K^{g \times m}$ and $R^* \in K^{m \times g}$.
To go on specifying these forms, it is well known that the most important distinction between semirings is whether the semiring has an additive group structure or not [10,11]: additively cancellative semirings and zerosumfree semirings are as far apart as algebras can be in this respect.
Figure 1 actually shows a concept lattice of the position of positive semifields within the commutative semirings, as well as the better-known collateral families of fields, such as $\mathbb{R}$, and distributive lattices, such as $\langle [0,1], \max, \min \rangle$.
  • If we can guarantee the existence of additive inverses, several further observations might eventually lead us to work in the familiar landscape of groups such as $\mathbb{Z}$ and modules.
  • On the other hand, if we cannot guarantee the existence of additive inverses, the toolset is quite scarce. It is first improved by demanding a natural order compatible with the operations in the semiring, leading to dioids (for double monoids), or entireness, leading to so-called information algebras.
However, the step that enormously increases the toolset is to accept the existence of the multiplicative inverse that transforms the semiring into a semifield (see Section 2.1).
  • Adding the semifield structure to a group eventually leads us to fields such as $\mathbb{R}$ or $\mathbb{C}$ and the vector spaces of standard algebra.
    In this case, the SVD is a well-known decomposition scheme for matrices over a standard field, e.g., real or complex numbers [4] which clarifies enormously the action of the linear form (Section 1.2).
  • On the other hand, adding the semifield structure to information algebras that are dioids leads us to positive semifields and their positive semimodules, which are quite varied.
To the extent of our knowledge, the only well-developed theory concerns a subset of the positive semifields, the complete idempotent semifields and their semimodules, the ones we will be using in this paper (Section 2.1.2, Section 2.1.3, Section 2.1.4, Section 2.1.5 and Section 2.1.6). Idempotent semifields are important for us because we are putting forth the idea that to understand the spaces of linear transformations in idempotent semifields, FCA ([8], and Section 1.4 below) and in general Galois connections [12] (see also Appendix B) are needed.

1.2. The Singular Value Decomposition

Readers familiar with the SVD may skip this section.
Before turning to idempotent semifields and their lattices, let us recall that if the semifields are actually “complete” fields, e.g., $\overline{K} \equiv \mathbb{R}$ or $\overline{K} \equiv \mathbb{C}$, Lanczos [13] shows how to aggregate the two linear forms into a single eigenvalue problem to carry out the (standard) SVD of $R$ (and $R^*$).
Theorem 1.
Let $\overline{K}$ be $\mathbb{R}$ or $\mathbb{C}$. Given a matrix $A \in M_{m \times n}(\overline{K})$ representing a linear form between the spaces $X \cong \overline{K}^n$ and $Y \cong \overline{K}^m$, and $A^*$ its adjoint, we define the following subspaces of $X$ and $Y$:
  • The image, range or column space of $A$, $\mathrm{IM}\,A \subseteq Y$, as $\mathrm{IM}\,A = \{ A x \mid x \in X \}$,
  • The kernel or null-space of $A$, $\mathrm{KER}(A) \subseteq X$, as $\mathrm{KER}(A) = \{ x \in X \mid A x = 0_Y \}$,
  • The co-image or row space of $A$, $\mathrm{IM}\,A^* \subseteq X$, as $\mathrm{IM}\,A^* = \{ A^* y \mid y \in Y \}$,
  • The co-kernel or left null-space of $A$, $\mathrm{KER}(A^*) \subseteq Y$, as $\mathrm{KER}(A^*) = \{ y \in Y \mid A^* y = 0_X \}$.
Then:
1.
From $\mathrm{IM}\,A^*$ to $\mathrm{IM}\,A$, the matrix yields an invertible transformation, wherefore they have the same dimension, the rank $r = \dim \mathrm{IM}\,A = \dim \mathrm{IM}\,A^*$.
2.
In $X$, the kernel is the orthogonal complement of the co-image, $\mathrm{KER}(A) = (\mathrm{IM}\,A^*)^{\perp}$.
3.
In $Y$, the co-kernel is the orthogonal complement of the image, $\mathrm{KER}(A^*) = (\mathrm{IM}\,A)^{\perp}$.
4.
(Rank-nullity theorem) The dimension of the domain space decomposes as $\dim \mathrm{IM}\,A + \dim \mathrm{KER}(A) = \dim X$.
5.
(Rank-co-nullity) The dimension of the range space decomposes as $\dim \mathrm{IM}\,A^* + \dim \mathrm{KER}(A^*) = \dim Y$.
6.
There is a factorization $A = U \Sigma V^*$ given in terms of three matrices:
(a)
$U \in M_{m \times m}(\overline{K})$ is a unitary matrix of left singular vectors.
(b)
$\Sigma \in M_{m \times n}(\overline{K})$ is a diagonal matrix of non-negative real values called the singular values.
(c)
$V \in M_{n \times n}(\overline{K})$ is a unitary matrix of right singular vectors, and they can be structured as:
$$U = \begin{bmatrix} U_1 & U_0 \end{bmatrix} \qquad \Sigma = \begin{bmatrix} \Sigma_1 & 0 \\ 0 & \Sigma_0 \end{bmatrix} \qquad V = \begin{bmatrix} V_1 & V_0 \end{bmatrix}$$
where $\Sigma_0$ is a zero matrix and $U_0$ and $V_0$ are the left and right singular vectors, respectively, of the null singular value, if it exists.
7.
And conjugate-dually, there is a factorization of $A^*$ in terms of the same three matrices,
$$A^* = V \Sigma^* U^*,$$
where image and co-image, kernel and co-kernel swap roles.
An excellent but very brief summary of the affordances of SVD, integrated with the fundamental theorem of linear algebra between vector spaces, without the technical encumbrance, is [14] from which Theorem 1 is synthesized—see the figures therein for abstract representations of spaces. A very interesting approach tying the linear transformations and spectral considerations for understanding their associated subspaces is [13]: both have inspired material in this paper. A modern, computer-oriented exposition of the SVD can be found in [4].
Note that $A$ can also be written using outer products, in terms of the singular values $\sigma_i$ and their corresponding singular vectors $(u_i, v_i)$, as:
$$A = \sum_{i=1}^{\min(m,n)} \sigma_i\, u_i v_i^*$$
hence, since the SVD is a costly procedure, it is also interesting to find $k \ll \min(m, n)$ such that, using the $k$ greatest singular values, we may approximate:
$$A \approx \sum_{i=1}^{k} \sigma_i\, u_i v_i^*.$$
In particular, singular vectors of the null singular value never contribute to the reconstruction, so they may be discarded.
Note also that this tool is particularly useful in applications in data analysis such as Latent Semantic Analysis [15] or Principal Component Analysis [16]. The book [17] explains some of its applications.
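As a concrete illustration of the truncated reconstruction above, the following minimal sketch (standard field case, using NumPy; the matrix and the rank k are arbitrary choices, not taken from this paper) computes the thin SVD and the rank-k approximation built from the k greatest singular values:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((6, 4))                      # any real matrix

U, s, Vh = np.linalg.svd(A, full_matrices=False)     # thin SVD: A = U @ diag(s) @ Vh

k = 2                                                # keep the k greatest singular values
A_k = U[:, :k] @ np.diag(s[:k]) @ Vh[:k, :]          # sum of k rank-one outer products

# For the spectral norm, the approximation error equals the first discarded singular value.
print(np.linalg.norm(A - A_k, 2), s[k])
```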

1.3. Formal Concept Analysis and Boolean Matrix Factorization

Consider a set of formal objects $G$ and a (disjoint) set of formal attributes $M$. A matrix $I \in K^{G \times M}$ with entries in a set $K$, with object-indexed rows and attribute-indexed columns, is a ubiquitous object in mathematical applications ranging from the theory of vector spaces in pure Mathematics [14], to datasets in statistics [18], to measurement theory in general Science.
When the support set is $2 = \{0, 1\}$ and carries a Boolean algebra, Wille [19] had the intuition to coalesce these objects, attributes and relationship in the guise of a “formal context” $(G, M, I)$, with $I \in 2^{G \times M}$, and proposed Formal Concept Analysis (FCA) against the backdrop of Order Theory [8]. This is about describing special pairs of sets of objects and attributes, called the formal concepts, and their properties as defined by the formal context. We next state an extended form of the basic theorem.
Theorem 2
(Fundamental theorem of FCA). Let $G$ be a set of formal objects, $M$ a set of formal attributes and $(G, M, I)$ be a formal context with $I \subseteq G \times M$. Then:
1.
The context analysis phase.
(a)
The polar operators of the context, $\cdot' : 2^G \to 2^M$ and $\cdot' : 2^M \to 2^G$,
$$A' = \{ m \in M \mid \forall g \in A,\ g I m \} \qquad B' = \{ g \in G \mid \forall m \in B,\ g I m \}$$
form a Galois connection $(\cdot', \cdot') : 2^G \leftrightarrows 2^M$ whose formal concepts are the pairs $(A, B)$ of closed elements such that $A' = B$ and $B' = A$, whence the set of formal concepts is
$$\mathfrak{B}(G, M, I) = \{ (A, B) \in 2^G \times 2^M \mid A' = B,\ B' = A \}.$$
(b)
Formal concepts are partially ordered by the hierarchical order
$$(A_1, B_1) \le (A_2, B_2) \iff A_1 \subseteq A_2 \iff B_2 \subseteq B_1,$$
and the set of formal concepts with this order, $\langle \mathfrak{B}(G, M, I), \le \rangle$, is a complete lattice $\underline{\mathfrak{B}}(G, M, I)$ called the concept lattice of $(G, M, I)$.
(c)
In $\underline{\mathfrak{B}}(G, M, I)$ infima and suprema are given by:
$$\bigwedge_{t \in T} (A_t, B_t) = \Big( \bigcap_{t \in T} A_t, \big( \bigcap_{t \in T} A_t \big)' \Big) \qquad \bigvee_{t \in T} (A_t, B_t) = \Big( \big( \bigcap_{t \in T} B_t \big)', \bigcap_{t \in T} B_t \Big)$$
(d)
The basic functions $\bar{\gamma} : G \to \mathfrak{B}(G, M, I)$ and $\bar{\mu} : M \to \mathfrak{B}(G, M, I)$,
$$g \mapsto \bar{\gamma}(g) = (\{g\}'', \{g\}') \qquad m \mapsto \bar{\mu}(m) = (\{m\}', \{m\}''),$$
are mappings such that $\bar{\gamma}(G)$ is supremum-dense in $\underline{\mathfrak{B}}(G, M, I)$ and $\bar{\mu}(M)$ is infimum-dense in $\underline{\mathfrak{B}}(G, M, I)$.
2.
The context synthesis phase.
(a)
A complete lattice $\mathcal{L} = \langle L, \le \rangle$ is isomorphic to (read: can be built as) the concept lattice $\underline{\mathfrak{B}}(G, M, I)$ if and only if there are mappings $\bar{\gamma} : G \to L$ and $\bar{\mu} : M \to L$ such that
  • $\bar{\gamma}(G)$ is supremum-dense in $\mathcal{L}$, $\bar{\mu}(M)$ is infimum-dense in $\mathcal{L}$, and
  • $g I m$ is equivalent to $\bar{\gamma}(g) \le \bar{\mu}(m)$ for all $g \in G$ and all $m \in M$.
(b)
In particular, consider the doubling context of $\mathcal{L}$, $K(\mathcal{L}) = (L, L, \le)$, and the standard context of $\mathcal{L}$, $S(\mathcal{L}) = (J(\mathcal{L}), M(\mathcal{L}), \le)$, where $J(\mathcal{L})$ and $M(\mathcal{L})$ are the sets of join- and meet-irreducibles, respectively, of $\mathcal{L}$; then
$$\underline{\mathfrak{B}}(K(\mathcal{L})) \cong \mathcal{L} \cong \underline{\mathfrak{B}}(S(\mathcal{L})).$$
Proof. 
See Chapter 1 of [8], or the slow buildup of results in Chapter 7 of [20]. □
Previously, Birkhoff [21] and Ore [22] investigated the notion of Galois connection, and Barbut and Monjardet [23,24] came up with the notion of the “Galois lattice”—essentially the concept lattice—of a binary relation.
Note that FCA can be understood both as a data analysis technique for Boolean tables—the analysis phase—and as a synthesis technique for a Boolean matrix that “respects” the order of some lattice—the synthesis phase. For the purpose of developing an SVD, the synthesis phase is the more important step. However, since the core of this phase is to conform the Boolean matrix so that its concept lattice—issued from the analysis phase—is isomorphic to the lattice being used as a guide, both phases seem to be equally important.
If a pair $(A, B) \in \mathfrak{B}(G, M, I)$ is a formal concept, then we say $A$ is its extent and $B$ is its intent. An early result on the relevance of FCA for Boolean Matrix Factorization (BMF) was first described in [25]:
Theorem 3
(Theorem 1 of [25], adapted). Let $G$ be a set of formal objects with $|G| = g$, $M$ a set of formal attributes with $|M| = m$, and $(G, M, I)$ be a formal context with $I \subseteq G \times M$. If $I = A \circ B$ for $A \in 2^{g \times k}$ and $B \in 2^{k \times m}$, where the multiplication is that of Boolean matrices, then there exists $\mathcal{F} \subseteq \mathfrak{B}(G, M, I)$ with $|\mathcal{F}| \le k$ such that for the Boolean matrices $A_{\mathcal{F}} \in 2^{g \times |\mathcal{F}|}$ and $B_{\mathcal{F}} \in 2^{|\mathcal{F}| \times m}$ we have $I = A_{\mathcal{F}} \circ B_{\mathcal{F}}$.
So if a Boolean matrix accepts a BMF, it accepts one in terms of formal concepts with, possibly, even fewer factors. The expansion of the matrix operation leads to further understanding of this issue:
$$I = \bigvee_{f \in \mathcal{F}} A_{\cdot f} \circ B_{f \cdot}$$
where we use the dot to range over all the indices in a line (row, column) of the matrix. This induces two observations:
  • The operations invoked on the matrix depend on the underlying algebra carried by $2 = \{0, 1\}$, e.g., a Boolean algebra, which suggests there might be a reconstruction for every algebra definable on that carrier set. Since it is well known that the most abstract structure to support matrix algebra is that of a semiring, we review in Section 2.1 several concepts around semirings.
    The Boolean algebra is embedded in every idempotent semifield, a special kind of semiring with an idempotent addition (Section 2.1 and Section 2.1.2), and similar results for the reconstruction of matrices have been proven [26]. This paper is a natural extension of those results.
  • Clearly, $A_{\cdot f}$ and $B_{f \cdot}$ are the extent and intent of formal concepts encoded in correlative columns of $A_{\mathcal{F}}$ and rows of $B_{\mathcal{F}}$, and the $\circ$ operation is an external product, which suggests that extents and intents are being used as column and row vectors, respectively; if both were used as column vectors, the factorization would have to be written as $I = A_{\mathcal{F}} \circ B_{\mathcal{F}}^{T}$. This points to the fact that the concepts of linear algebra over general semirings are important for Theorem 3 (a toy illustration follows this list).
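The following toy sketch (a hypothetical 3×3 Boolean context of our own, not from the paper) illustrates the observation above: the incidence is recovered exactly as the Boolean join of the rank-one blocks extent × intent over the formal concepts of the context.

```python
import itertools
import numpy as np

I = np.array([[1, 1, 0],
              [1, 0, 1],
              [0, 0, 1]], dtype=bool)      # toy incidence: objects x attributes
G, M = range(I.shape[0]), range(I.shape[1])

def intent(A):                              # A': attributes shared by all objects in A
    return frozenset(m for m in M if all(I[g, m] for g in A))

def extent(B):                              # B': objects having all attributes in B
    return frozenset(g for g in G if all(I[g, m] for m in B))

# Formal concepts (A'', A'): close every subset of objects.
subsets = itertools.chain.from_iterable(itertools.combinations(G, r) for r in range(len(G) + 1))
concepts = {(extent(intent(frozenset(A))), intent(frozenset(A))) for A in subsets}

# Reconstruct I as the Boolean sum of the blocks extent x intent.
R = np.zeros_like(I)
for A, B in concepts:
    for g in A:
        for m in B:
            R[g, m] = True
assert (R == I).all()
```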

1.4. Formal Concept Analysis as Linear Algebra over Idempotent Semifields

There is another extension to FCA called K-FCA where the entries in the incidence belong to a complete idempotent semifield $\overline{\mathcal{K}}$ [27,28,29]. An idempotent semifield is an algebra that resembles a standard field but whose additive structure is idempotent and thus has no inverses [10]; instead it exhibits strong order-related properties [11]. One of the best known examples of this structure is the completed max-plus algebra [6], known under other names as the minimax algebra [30], morphological algebra [31] or lattice algebra [32].
In [33] it was proven that K -FCA is best understood in terms of the linear algebra of the semivector spaces over K ¯ (see Section 2.1). This was later extended in [34] in the sense of better interpreting the analogues of the kernels of the linear transformation R and properly describing FCA itself as a special case of K -FCA.
Unfortunately, “natural” positive semifields are incomplete—barring the Boolean semifield $\mathbb{B}$—lacking an inverse for the bottom element in the order, and the top completion provides not one but a pair of dually ordered semifields $(\overline{\mathcal{K}}, \underline{\mathcal{K}})$ with the inversion in the multiplicative group acting as the duality-inducing operator. This entails that expressions will have two (differently completed) multiplications, additions, etc., causing a notational problem on which the community has not yet agreed (see Appendix A). This is precisely the notational problem made evident by Moreau [35] for convex analysis, whose dotted notation we adopt throughout (see Section 2.1.3).
Once the notational problem is solved, this investigation leads naturally to highlight not only the Galois connection of (standard) FCA as expressed in Theorem 2, but three other types of Galois connections established by matrices (see Appendix B and [12]), which can also be understood as Galois connections between semimodules over idempotent semifields, the analogues of vector spaces over fields (Section 2.1.3, Section 2.1.4, Section 2.1.5 and Section 2.1.6). We call this construction the 4-fold K-FCA [36], and this is the core of our approach to integrating the i-SVD into K-FCA (see Section 2.2).
Furthermore, we have not specified which of the two semifields—that with the above-dotted or the one with the below-dotted operations—is “the reference”. We typically consider that the below-dotted reference is for semifields whose order aligns with the usual order in $\mathbb{R}$, and this is the one we call $\overline{\mathcal{K}}$, because it is completed with a top element, so if $K = \mathbb{R} \cup \{-\infty\}$, then $\overline{K} = \mathbb{R} \cup \{\pm\infty\}$. Changing this semifield to its order-dual $\underline{\mathcal{K}}$ is what we call changing the bias of the analysis (see results in Section 2.2.3). It has important repercussions because it changes the way the matrix is approximated in the i-SVD (see Section 3).

1.5. Previous Results for the SVD over Idempotent Semifields

To the best of our knowledge, the first and foremost attempt at defining an i-SVD for rectangular matrices over the max-plus algebra is [37]. The authors have a long tradition of using spectral results in their papers [31,38,39], but we know of no direct application of the SVD after their initial paper, nor of the embedding of the construction in the wider setting of Galois connections.
A related result was obtained over the symmetrized max-plus semiring in [40] and later revisited in [41], but it is not directly applicable to our problem. They were, however, used to solve the related problem of finding the singular values of a matrix over the (incomplete) max-plus semiring, using the technique of logarithmic-exponential transformation between the standard eigenvalue and max-plus eigenvalue problems [42].

1.6. Reading Guide

This paper is an updated and completed version of [26] using ideas from [36]. In it, we investigate:
(1)
An idempotent analogue of the SVD based on the irreducibles of a Galois connection,
(2)
What are the implications of a change of bias, i.e., using K ̲ for the analysis, and
(3)
Whether there is an analogue of a Fundamental Theorem of Linear Algebra for matrices over idempotent semifields.
For that purpose, we first review K-FCA in Section 2, including the notation and mathematics of completed idempotent semifields that allow us to define the different connections and, in particular, the i-SVD on the Galois connection. Then we present our results in Section 3, prior to discussing them vis-à-vis similar solutions in Section 4.

2. Preliminaries

2.1. Linear Algebra over Complete Idempotent and Positive Semifields

2.1.1. A Short Systemization

A semiring is an algebra $\mathcal{S} = \langle S, \oplus, \otimes, \epsilon, e \rangle$ whose additive structure, $\langle S, \oplus, \epsilon \rangle$, is a commutative monoid and whose multiplicative structure, $\langle S \setminus \{\epsilon\}, \otimes, e \rangle$, is a monoid, with multiplication distributing over addition from right and left and with the additive neutral element absorbing for $\otimes$, i.e., $\forall a \in S,\ \epsilon \otimes a = \epsilon$.
A commutative semiring is one whose multiplicative law is commutative. Note that semirings can be primitive or constructed from other algebras: although the base semiring may be commutative, the construction need not be, as in the example below.
Example 1
(Square matrix semirings). Given a semiring $\mathcal{S}$, the semiring of square matrices of order $n$ over $\mathcal{S}$ is the structure $\mathcal{M}_n(\mathcal{S}) = \langle S^{n \times n}, +, \times, 0, E \rangle$ where $+$ is the usual entry-wise addition, $\times$ is the usual matrix multiplication, $0$ is the matrix whose entries are all the zero of $\mathcal{S}$, and $E$ is the matrix whose entries are all zero except on the diagonal, where they are the unit of $\mathcal{S}$. Note that the operations for matrix addition and multiplication are those of the underlying semiring $\mathcal{S}$, whence this construction is completely generic.
All basic semirings in this paper are commutative, except for the example above, an important example of a non-commutative semiring. Many more examples can be found in [10].
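A minimal sketch of the genericity of this construction (the helper functions below are ours, not from the paper): the same matrix product routine works for any semiring once its addition, multiplication and additive zero are supplied, here instantiated for the plus-times semiring of the reals and for the max-plus (schedule) semiring.

```python
def fold(add, xs, zero):
    """Sum an iterable with the semiring addition, starting from the semiring zero."""
    acc = zero
    for x in xs:
        acc = add(acc, x)
    return acc

def mat_mul(A, B, add, mul, zero):
    """Matrix product over an arbitrary semiring given by (add, mul, zero)."""
    return [[fold(add, (mul(A[i][t], B[t][j]) for t in range(len(B))), zero)
             for j in range(len(B[0]))] for i in range(len(A))]

A = [[0.0, 3.0],
     [2.0, -1.0]]

# Plus-times semiring of the reals.
print(mat_mul(A, A, lambda a, b: a + b, lambda a, b: a * b, 0.0))

# Max-plus semiring: its unit matrix E has e = 0 on the diagonal and the
# zero -inf elsewhere, and indeed A "times" E returns A.
E = [[0.0, float('-inf')], [float('-inf'), 0.0]]
print(mat_mul(A, E, max, lambda a, b: a + b, float('-inf')))
```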
For the systematization of semirings the important operation seems to be addition [10,11]. In particular, every semiring accepts a canonical preorder: $a \preceq b$ if and only if there exists $c \in S$ with $a \oplus c = b$. A dioid is a semiring $\mathcal{D}$ where this relation is actually an order that is compatible with multiplication and addition, e.g., if $a \preceq b$ then for all $c \in S$ we still have $a \oplus c \preceq b \oplus c$ and $a \otimes c \preceq b \otimes c$. Dioids are zerosumfree, i.e., they have no non-null additive factors of zero. In this, dioids are as different as a semiring can be from a ring, a quality suggested by their distance in the lattice of semirings and their properties in Figure 1.
A semiring $\mathcal{S}$ is complete if, for any index set $I$, including the empty set, and any $\{a_i\}_{i \in I} \subseteq S$, the (possibly infinite) summations $\bigoplus_{i \in I} a_i$ are defined and the distributivity conditions $\big( \bigoplus_{i \in I} a_i \big) \otimes c = \bigoplus_{i \in I} (a_i \otimes c)$ and $c \otimes \big( \bigoplus_{i \in I} a_i \big) = \bigoplus_{i \in I} (c \otimes a_i)$ are satisfied. Note that for $c = e$ the above demands that infinite sums have a result. Commutative complete dioids are already complete residuated lattices.
An idempotent semiring is a dioid whose addition is idempotent, examples of which are:
Example 2
(Idempotent semirings).
  • The Boolean lattice $\mathbb{B} = \langle \{0, 1\}, \vee, \wedge, 0, 1 \rangle$
  • All fuzzy semirings, e.g., $\langle [0, 1], \max, \min, 0, 1 \rangle$
  • The min-plus semiring or tropical algebra $\mathbb{R}_{\min,+} = \langle \mathbb{R} \cup \{\infty\}, \min, +, \infty, 0 \rangle$, also called the optimization algebra.
  • The max-plus semiring or polar algebra $\mathbb{R}_{\max,+} = \langle \mathbb{R} \cup \{-\infty\}, \max, +, -\infty, 0 \rangle$, also called the schedule algebra.
Of the semirings above, only the Boolean lattice and the fuzzy semirings are complete dioids, since the rest lack the greatest element in the order, the top $\top$.
A semiring is a semifield if there exists a multiplicative inverse for every element $a \in S$, notated $a^{-1}$, except, possibly, for the null element. Semifields are all entire, i.e., they have no non-null factors of the zero element. Note that there are infinitely many semifields (see Chapter 8 of [11], Sections 4.4.3 and 4.4.4).
A semifield that is also a dioid is called a positive semifield, so these all have a natural order compatible with addition and multiplication (see Figure 1). If its addition is furthermore idempotent, it is called an idempotent semifield.
Example 3
(Positive and idempotent semifields).
1.
The non-negative rationals $\mathbb{Q}_{\ge 0}$ and the non-negative reals $\mathbb{R}_{\ge 0}$ are two positive semifields.
2.
The min-plus and max-plus algebras of Example 2 are idempotent semifields.
3.
The min-times algebra $\mathbb{R}_{\min,\times} = \langle \mathbb{R}_{\ge 0} \cup \{\infty\}, \min, \times, \infty, 1 \rangle$ and the max-times algebra $\mathbb{R}_{\max,\times} = \langle \mathbb{R}_{\ge 0}, \max, \times, 0, 1 \rangle$ are also idempotent semifields.

2.1.2. Semifield Completions

As noted above, positive semifields are incomplete in their natural order, lacking an adequate inverse for the bottom in the order, but there are procedures for completing such structures [29] (Construction 1) and we will not differentiate between complete or completed structures. This construction actually obtains a pair of dually ordered complete positive semifields from a single (incomplete) positive semifield. Its results are collected below as a Theorem.
Theorem 4.
For every (incomplete) positive semifield $\mathcal{K} = \langle K, \oplus, \otimes, \cdot^{-1}, \bot, e \rangle$:
1.
There is a pair of completed semifields over the carrier set $\overline{K} = K \cup \{\top\}$,
$$\overline{\mathcal{K}} = \langle \overline{K}, \underset{\cdot}{\oplus}, \underset{\cdot}{\otimes}, \cdot^{-1}, \bot, e \rangle \qquad \overline{\mathcal{K}}^{-1} = \langle \overline{K}, \dot{\oplus}, \dot{\otimes}, \cdot^{-1}, \top, e \rangle,$$
where $\bot^{-1} = \top$ and $\top^{-1} = \bot$ by definition,
2.
In addition to the individual laws as positive semifields, we have the modular laws:
( u ˙ v ) ˙ ( u ˙ v ) = u ˙ v ( u ˙ v ) ˙ ( u ˙ v ) = u ˙ v
the analogues of the De Morgan laws:
$$u \dot{\oplus} v = (u^{-1} \underset{\cdot}{\oplus} v^{-1})^{-1} \qquad u \underset{\cdot}{\oplus} v = (u^{-1} \dot{\oplus} v^{-1})^{-1} \qquad u \dot{\otimes} v = (u^{-1} \underset{\cdot}{\otimes} v^{-1})^{-1} \qquad u \underset{\cdot}{\otimes} v = (u^{-1} \dot{\otimes} v^{-1})^{-1}$$
and the self-dual inequality in the natural order
$$u \underset{\cdot}{\otimes} (v \dot{\otimes} w) \le (u \underset{\cdot}{\otimes} v) \dot{\otimes} w.$$
3.
Furthermore, if $\mathcal{K}$ is a positive dioid, then the inversion operation is a dual order isomorphism between the dual order structures $\overline{\mathcal{K}} = \langle \overline{K}, \preceq \rangle$ and $(\overline{\mathcal{K}})^{-1} = \langle \overline{K}, \preceq^{\delta} \rangle$, with the natural order of the original semifield a suborder of the first structure.
Several points need to be made explicit regarding this theorem:
  • The issue of notation is contentious: so as not to derail the exposition, in Appendix A we discuss the problem and our decisions, which essentially amount to using a notation that maximally resembles that of linear algebra but also accommodates Boolean algebra.
  • Please also note that in complete semifields $e \neq \top$, which distinguishes them from inclines, and also that the inverse of the null is prescribed as $\bot^{-1} = \top$. On a practical note, residuation in complete commutative idempotent semifields can be expressed in terms of inverses, and this extends to eigenspaces, examples of which will be given in the following sections.
  • In fact, complete idempotent semifields $\overline{\mathcal{K}} = \langle \overline{K}, \underset{\cdot}{\oplus}, \dot{\oplus}, \underset{\cdot}{\otimes}, \dot{\otimes}, \cdot^{-1}, \bot, e, \top \rangle$ appear as enriched structures, the advantage of working with them being that meets can be expressed by means of joins and inversion as $a \dot{\oplus} b = (a^{-1} \underset{\cdot}{\oplus} b^{-1})^{-1}$.
They will be the main algebras used in this paper, so we next present their best known instances.
Example 4
(Complete idempotent semifields). This example gathers the most important subjects of investigation. These are all completed idempotent semifields that come in dual pairs.
1.
The “smallest” example of complete idempotent semifield is B 2 2 , but it lacks a common neutral element for multiplications ˙ and ˙ .
2.
The next smallest is 3 3 = { , e , } , ˙ , ˙ , ˙ , ˙ , , e , . 2 2 is embedded in 3 3 , and 3 3 is embedded in any bigger complete semifield.
3.
The complete max-times semifield $\overline{\mathbb{R}}_{\max,\times} = \langle [0, \infty], \max, \times, \cdot^{-1}, \bot = 0, e = 1, \top = \infty \rangle$.
4.
The complete min-times semifield $\overline{\mathbb{R}}_{\min,\times} = \langle [0, \infty], \min, \times, \cdot^{-1}, \bot = \infty, e = 1, \top = 0 \rangle$.
5.
The complete min-plus semifield $\overline{\mathbb{R}}_{\min,+} = \langle \mathbb{R} \cup \{-\infty, \infty\}, \min, +, -\cdot, \bot = \infty, e = 0, \top = -\infty \rangle$.
6.
The complete max-plus semifield $\overline{\mathbb{R}}_{\max,+} = \langle \mathbb{R} \cup \{-\infty, \infty\}, \max, +, -\cdot, \bot = -\infty, e = 0, \top = \infty \rangle$.
Note that the completed tropical algebra $\overline{\mathbb{R}}_{\min,+}$ and the completed schedule algebra $\overline{\mathbb{R}}_{\max,+}$ are the algebras of Convex Analysis [35], Morphological Analysis [43], and Idempotent Analysis [11]. In them we have, for all $c \in [-\infty, \infty]$, $-\infty \underset{\cdot}{+} c = -\infty$ and $\infty \dot{+} c = \infty$, which solves several issues in simultaneously handling both semifields. These are the concrete algebras that we use as prototypes of completed idempotent semifields.
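As a concrete sketch of how the two completed “multiplications” of the max-plus/min-plus pair differ only at the extremes (our own helper names; a simplification under the convention sketched above, not the paper's dotted notation):

```python
import math

BOT, TOP = -math.inf, math.inf   # bottom and top of the completed max-plus semifield

def otimes_lower(a, b):
    """Lower multiplication: the bottom -inf is absorbing, so (-inf) . c = -inf."""
    if BOT in (a, b):
        return BOT
    return a + b

def otimes_upper(a, b):
    """Upper multiplication: the top +inf is absorbing, so (+inf) . c = +inf."""
    if TOP in (a, b):
        return TOP
    return a + b

# They agree on finite (invertible) elements and differ only at (+inf) . (-inf).
assert otimes_lower(2.0, 3.0) == otimes_upper(2.0, 3.0) == 5.0
assert otimes_lower(TOP, BOT) == BOT
assert otimes_upper(TOP, BOT) == TOP
```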
Although the catalogue of complete idempotent semifields may look very restricted, there are infinitely many positive complete semifields [9].

2.1.3. Idempotent Semimodules

Let $\mathcal{D} = \langle D, +, \times, \epsilon_D, e_D \rangle$ be a commutative semiring. A $\mathcal{D}$-semimodule $\mathcal{X} = \langle X, \oplus, \odot, \epsilon_X \rangle$ is a commutative monoid $\langle X, \oplus, \epsilon_X \rangle$ endowed with a scalar action $(\lambda, x) \mapsto \lambda \odot x$ satisfying the following conditions for all $\lambda, \mu \in D$ and $x, x' \in X$:
$$(\lambda \times \mu) \odot x = \lambda \odot (\mu \odot x) \quad \lambda \odot (x \oplus x') = \lambda \odot x \oplus \lambda \odot x' \quad (\lambda + \mu) \odot x = \lambda \odot x \oplus \mu \odot x \quad \lambda \odot \epsilon_X = \epsilon_X = \epsilon_D \odot x \quad e_D \odot x = x$$
If D is commutative, idempotent or complete, then X is also commutative, idempotent or complete.
Recall the semiring of square matrices of Example 1. The following defines the matrix semimodules.
Example 5
(Finite matrix semimodules [29]). For n , p N , the semimodule of finite matrices M n × p ( S ) = S n × p , , E is a ( M n ( S ) , M p ( S ) ) -bisemimodule [10], with matrix multiplication-like left and right actions and entry-wise addition. Note that E n and E p are the left and right identity matrices for this semimodule (see (13) for their dotted extensions).
Special cases of these semimodules are those of column vectors M p × 1 ( S ) and row vectors M 1 × n ( S ) , so we systematically equate left (right) semimodules with semimodules of rows (columns) over S .
Dualities in complete idempotent semimodules. If X K ¯ n × 1 is a right semimodule over a complete idempotent semifield K , three notions of “duals” may be distinguished:
  • The (pointwise) inverse [30], X 1 ( K ¯ 1 ) n × 1 is a right sub-semimodule of the inverse semifield K ¯ 1 such that if x X , then ( x 1 ) i = ( x i ) 1 . This duality is the order duality in partial orders and inverts the natural order: if x ˙ λ z then x 1 ˙ λ 1 z 1 .
    ( x ˙ λ ) 1 = x 1 ˙ λ 1 ( x 1 ˙ x 2 ) 1 = x 1 1 ˙ x 2 1
  • The transpose, X T K ¯ 1 × n is a left sub-semimodule of the same semifield such that if x is a right (column) vector, then x T is a left (row) vector.
    ( x ˙ λ ) T = λ ˙ x T ( x 1 ˙ x 2 ) T = x 1 T ˙ x 2 T
    These two dualities commute and allow us the definition of the conjugate [30] (When applied on R ¯ max , + or R ¯ min , + we call it the Cuninghame-Green conjugate.), X ( K ¯ 1 ) 1 × n is a left sub-semimodule of the inverse semifield K ¯ 1 such that if x X , then ( x 1 ) i = ( x i ) 1 , and x is a row vector. This duality at the same time inverts the natural order and the row-column nature of the semimodule.
    ( x ˙ λ ) = λ 1 ˙ x ( x 1 ˙ x 2 ) = x 1 ˙ x 2
    In fact, we can choose any two of the previous three dualities as independent and have the other defined from it:
    x 1 = ( x ) T = ( x T ) x T = ( x ) 1 = ( x 1 ) x = ( x 1 ) T = ( x T ) 1
  • Using residuation [44], another dual can be defined [45] (The original name for this dual was “opposite” in [45], and so it was adopted in [29]. The authors in [45] now prefer to call this concept the “dual” [46] due to the order dualization of the construction, we surmise.): given a complete idempotent semifield K ¯ and a right semimodule X over it define the (residuation) dual of X as the left K -semimodule X d with addition x 1 ˙ d x 2 = x 1 x 2 related to the original addition ˙ (which is the join in the natural order) and action ( λ , x ) λ ˙ d x = x / · λ . Note that ( X d ) d = X and that this dual commutes with inversion. One of the advantages of operating on completed idempotent semifields is that residuation can be expressed in terms of the original operations of the semimodule:
    λ ˙ d x = x ˙ λ 1 x 1 ˙ d x 2 = x 1 ˙ x 2
Example 6
(Dual idempotent semifields). The completions of the max-plus $\mathbb{R}_{\max,+}$ and min-plus $\mathbb{R}_{\min,+}$ semifields are dually isomorphic semifields. These two completions are inverses as semimodules, $\overline{\mathbb{R}}_{\min,+} = (\overline{\mathbb{R}}_{\max,+})^{-1}$, hence order-dual lattices.
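A small numerical check of this inversion duality (a sketch with our own helpers; entries are kept finite to stay away from the extreme elements): in additive notation the multiplicative inverse is negation, and it swaps max-plus matrix products for min-plus matrix products.

```python
import numpy as np

def maxplus(A, B):                 # (A (x) B)_ij = max_k (A_ik + B_kj)
    return np.max(A[:, :, None] + B[None, :, :], axis=1)

def minplus(A, B):                 # (A (x)' B)_ij = min_k (A_ik + B_kj)
    return np.min(A[:, :, None] + B[None, :, :], axis=1)

rng = np.random.default_rng(1)
A = rng.integers(-5, 5, (3, 3)).astype(float)
B = rng.integers(-5, 5, (3, 3)).astype(float)

# Pointwise inversion x -> -x is a dual isomorphism: -(A (x)max B) = (-A) (x)min (-B).
assert np.allclose(-maxplus(A, B), minplus(-A, -B))
```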
Conjugation and residuation in matrix semimodules over completed idempotent semifields. In matrix semimodules over completed idempotent semifields the issue of notation arises for the matrix semimodules of Example 5. Dotted matrix additions and multiplications for conformant A , B are:
( A ˙ B ) i j = A i j ˙ B i j ( A ˙ B ) i j = A i j ˙ B i j
( A ˙ B ) i j = k = 1 n A i k ˙ B k j ( A ˙ B ) i j = k = 1 n A i k ˙ B k j
Note that the (symmetrical) additive and multiplicative neutral elements are also different:
( E ) i j = ( E ) i j = ( E ) i j = e i = j i j ( E ) i j = e i = j i j .
We define involutive entry-wise inverses ( A 1 ) i j = ( A i j ) 1 , transposes ( A T ) i j = A j i and conjugates A = ( A T ) 1 = ( A 1 ) T . It is easy to see that E = E 1 = E and E = E 1 = E .
Then the following De Morgan-like laws hold for conjugates of matrices with the appropriate dimensions:
$$(A \dot{\oplus} B)^{*} = A^{*} \underset{\cdot}{\oplus} B^{*} \qquad (A \underset{\cdot}{\oplus} B)^{*} = A^{*} \dot{\oplus} B^{*} \qquad (A \dot{\otimes} B)^{*} = B^{*} \underset{\cdot}{\otimes} A^{*} \qquad (A \underset{\cdot}{\otimes} B)^{*} = B^{*} \dot{\otimes} A^{*}$$
as well as the following residuation laws:
A \ · C = A ˙ C = ( C ˙ A ) A \ · C = A ˙ C = ( C ˙ A ) C / · A = C ˙ A = ( A ˙ C ) C / · A = C ˙ A = ( A ˙ C )
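A quick numerical check of the De Morgan-like laws for sums over max-plus (a sketch with our own helpers, finite entries only; the conjugate is A* = −Aᵀ, so the dual addition is the entry-wise min):

```python
import numpy as np

def conj(A):                        # conjugate over max-plus: A* = (A^T)^{-1} = -A^T
    return -A.T

rng = np.random.default_rng(2)
A = rng.integers(-5, 5, (3, 4)).astype(float)
B = rng.integers(-5, 5, (3, 4)).astype(float)

# The conjugate of an entry-wise max is the entry-wise min of the conjugates,
# and dually with the roles of max and min exchanged.
assert np.allclose(conj(np.maximum(A, B)), np.minimum(conj(A), conj(B)))
assert np.allclose(conj(np.minimum(A, B)), np.maximum(conj(A), conj(B)))
```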
The following matrix algebra equations are proven in Chapter 8 of [30]:
Proposition 1.
Let K be an idempotent semifield, and A K m × n . Then:
1.
Alternating A- A products of 3 matrices can be shortened as in:
A ˙ ( A ˙ A ) = A ˙ ( A ˙ A ) = ( A ˙ A ) ˙ A = ( A ˙ A ) ˙ A = A A ˙ ( A ˙ A ) = A ˙ ( A ˙ A ) = ( A ˙ A ) ˙ A = ( A ˙ A ) ˙ A = A .
2.
Alternating A- A products of 4 matrices can be shortened as in:
A ˙ ( A ˙ ( A ˙ A ) ) = A ˙ A = ( A ˙ A ) ˙ ( A ˙ A )
3.
Alternating A- A products of 3 matrices and another terminal, arbitrary matrix can be shortened as in:
A ˙ ( A ˙ ( A ˙ M ) ) = A ˙ M = ( A ˙ A ) ˙ ( A ˙ M )
4.
The following inequalities apply:
A ˙ ( A ˙ M ) M A ˙ ( A ˙ M ) M A ˙ ( A ˙ M ) M A ˙ ( A ˙ M ) M
Generators and bases of complete idempotent semimodules. Consider a set of vectors S X , then the span of S (with respect to K an idempotent semifield) V = S K X is the subsemimodule of X generated by linear combinations of finitely many such vectors. In such case, we say that S spans V or that S is a set of generators for V . A semimodule V is finitely generated if there exists a finite set of vectors S X such that V = S K . All such finitely generated subsemimodules of a K n are closed, both in the convex sense, i.e., they include all of their limit points [47], and in the topological sense, in the Scott topology issued from the order properties of the idempotent semifield K [20].
A subset S X is called dependent if there exists a vector v S such that v is a linear combination of S \ { v } , otherwise S is called independent. There is a straightforward procedure to find a subset of independent vectors:
Proposition 2
(The $A^{\sharp}$-test to find dependences between $\overline{\mathcal{K}}$-vectors in a set). Let $S = \{ v_i \in K^g \mid 1 \le i \le m \}$ be a set of $m$ column $g$-dimensional vectors, and:
1.
Set the vectors up as a matrix $A \in K^{g \times m}$ with $A_{\cdot i} = v_i$.
2.
Find the matrix $A^{\sharp} = (A^{*} \dot{\otimes} A) \dot{\oplus} (\bot \dot{\otimes} \dot{E}_m)$. This annihilates the diagonal components of $A^{*} \dot{\otimes} A$.
3.
Lastly, form the product $A \underset{\cdot}{\otimes} A^{\sharp}$.
Then,
  • For each $j \in \{1, \dots, m\}$, the vector $A_{\cdot j} = (A \underset{\cdot}{\otimes} A^{\sharp})_{\cdot j}$ if and only if $A_{\cdot j}$ is a $\overline{\mathcal{K}}$-linear combination of the other vectors of $A$.
  • In such a case, the elements of the $j$-th column of $A^{\sharp}$ provide the coefficients of the combination.
In addition, proceed order-dually to find dependences with respect to the dual $\underline{\mathcal{K}}$.
Proof. 
The original proof is in Chapter 16, Section 2 of [30], including the order-dual. See also [7] (Theorem 3.4.2). □
Notice that this solves the question of finding an independent subset from a finite set of vectors.
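The following sketch implements the test over the completed max-plus semifield for matrices with finite entries (the helper names and the example matrix are ours): the conjugate product is computed as the residuated min-plus product, the diagonal is annihilated with −∞, and a column is dependent exactly when the product with the masked matrix reproduces it.

```python
import numpy as np

def maxplus(X, Y):                           # (X (x) Y)_kj = max_i (X_ki + Y_ij)
    return np.max(X[:, :, None] + Y[None, :, :], axis=1)

def a_sharp_test(A):
    """For each column of A (finite entries), is it a max-plus combination of the others?"""
    B = np.min(A[:, None, :] - A[:, :, None], axis=0)   # B_ij = min_k (A_kj - A_ki)
    np.fill_diagonal(B, -np.inf)                         # annihilate the diagonal
    P = maxplus(A, B)                                    # always P <= A entry-wise
    return [bool(np.allclose(P[:, j], A[:, j])) for j in range(A.shape[1])], B

# Column 2 is max(col0 + 1, col1 + 0); columns 0 and 1 are independent.
A = np.array([[0., 2., 2.],
              [0., 1., 1.],
              [0., 0., 1.]])
dependent, A_sharp = a_sharp_test(A)
assert dependent == [False, False, True]
```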
A vector $x$ in an idempotent semimodule $X$ is called extreme if $x = u \oplus w$ implies either $x = u$ or $x = w$. Typically an idempotent semimodule is ordered along the semifield it is generated over. In this order, call the norm of $x \in X$ the sum of its components, $\lVert x \rVert = \bigoplus_i x_i$; e.g., for $\overline{\mathbb{R}}_{\max,+}$ this is the maximum, while for $\overline{\mathbb{R}}_{\min,+}$ it is the minimum. Then a vector is scaled if its norm is the unit, $\lVert x \rVert = e$.
A set B is called a basis of an idempotent subsemimodule V if it is independent and spans V . Bases in finitely generated idempotent semimodules are essentially unique, and defined by scaled extremals:
Proposition 3
([7] Theorem 3.3.9). Let B be the set of scaled extremals of V , and S V consist of scaled vectors. Then the following are equivalent:
  • S is a minimal set of generators of V .
  • S B and V = S K .
  • S is a basis for V .
Definition 1
(Dimension of finitely generated semimodules). If V is a finitely generated idempotent semimodule, then the number of vectors in any of its bases is called the dimension of the semimodule, dim ( V ) .
Example 7
(Standard bases in free idempotent semimodules). The standard basis of $K^g$ is the set of columns of the unitary matrix $E_g$, $B = \{ (E_g)_{\cdot i} \mid i \in \{1, \dots, g\} \}$; in matrix notation, $K^g = \langle E_g \rangle_K$. For dual pairs of complete idempotent semifields this becomes $\overline{K}^g = \langle E_g \rangle_{\overline{K}}$ and $\underline{K}^g = \langle E_g \rangle_{\underline{K}}$. Therefore, the dimension of these freely generated complete idempotent semimodules is $\dim(K^g) = g$.

2.1.4. Matrices as Linear Transformations between Idempotent Semimodules

Definition 2
(Column and row spaces, nullspaces of a matrix). Let $K$ be the carrier set of an idempotent semifield $\mathcal{K}$. Given $A \in K^{m \times g}$, we call the column- and row-spaces of $A$ the semimodules:
$$\mathrm{IM}\,A = \{ A z \mid z \in K^g \} \qquad \mathrm{IM}\,A^T = \{ A^T u \mid u \in K^m \}$$
Furthermore, we call the nullspaces of $A$ the semimodules:
$$\mathcal{N}_A = \{ x \in K^g \mid A x = \bot_m \} \qquad \mathcal{N}_{A^T} = \{ y \in K^m \mid A^T y = \bot_g \}$$
In particular, we do not use the term “kernel” for the nullspaces, since it is taken to describe decreasing, interior operators (see Appendix B), and the concept of “bikernel” is much more relevant for positive semirings (see Section 2.1.6).
These spaces are rather easy to describe by combinatorial means. For that purpose, recall that A is column (row) G-astic if every column (row) has an entry from G K . It is G-astic if it is both row- and column G-astic [30].
Lemma 1.
Let A K m × g where K is the carrier set of an idempotent semifield K . Then
1.
$\mathrm{IM}\,A$ is an idempotent semimodule generated by the columns of $A$, and row-dually for $\mathrm{IM}\,A^T$:
$$\mathrm{IM}\,A = \langle A \rangle_{K} \qquad \mathrm{IM}\,A^T = \langle A^T \rangle_{K}$$
2.
The nullspace of $A$ is an idempotent semimodule generated by the columns of the identity matrix $E_g \in K^{g \times g}$ that are co-indexed with the empty columns of $A$, and row-column-dually for the nullspace of $A^T$:
$$\mathcal{N}_A = \langle \{ (E_g)_{\cdot j} \mid A_{\cdot j} = \bot_m \} \rangle_{K} \qquad \mathcal{N}_{A^T} = \langle \{ (E_m)_{\cdot i} \mid (A^T)_{\cdot i} = \bot_g \} \rangle_{K}$$
Furthermore, if $A$ is column-$K \setminus \{\bot, \top\}$-astic then its nullspace is reduced to $\mathcal{N}_A = \{ \bot_g \}$, and row-column dually for $\mathcal{N}_{A^T} = \{ \bot_m \}$.
Proof. 
For claim 1, the result is directly read from (17) by considering the vectors $z$ (respectively, $u$) as the coefficients of the combination and the set of column vectors of $A$ (respectively, $A^T$) as the set of generators.
For claim 2, let $x_1, x_2 \in \mathcal{N}_A$ and $\lambda_1, \lambda_2 \in K \setminus \{\bot\}$; then $A (x_1 \lambda_1 \oplus x_2 \lambda_2) = A x_1 \lambda_1 \oplus A x_2 \lambda_2 = \bot_m \oplus \bot_m = \bot_m$, proving it is a semimodule. For the rest of the claim, see Chapter 3 of [7]. □
In complete idempotent semifields the issue is a bit more complicated due to the colum-row and the order dualities, so A K m × g with entries in the carrier set of a complete idempotent semifield actually has the following notorious subspaces—where we have written g instead of ( d ) g :
I M A = A K ¯ N A = { x K ¯ g A ˙ x = m }
I M A = A K ̲ N A = { x K ̲ g A ˙ x = m }
and row-column dually for $A^T$. Please note that Lemma 1 holds both for $\overline{\mathcal{K}}$ and $\underline{\mathcal{K}}$, so all these semimodules are complete.
Proposition 4 and its row-column and order duals show that the dimension of these subspaces is unrelated to the number of components of the vectors.
Proposition 4
([30]). Let K ¯ be an idempotent semifield other than 3 3 , and A K g × m with m 2 . Then:
1.
If g = 2 , then A has two columns such that every other column is a linear combination of them.
2.
If g > 2 , there exist m vectors in K g none of which is a linear combination of the others.
3.
If g > 2 and K 3 3 , we can find (at least) a set of g 2 g linearly independent vectors.
Proof. 
See Chapter 16, Section 3 of [30]. Also Theorems 3.4.4–5 of [7]. □
Therefore we should not expect to reproduce claims 1, 4 and 5 of the fundamental theorem of Linear Algebra, Theorem 1, in the idempotent semifield setting.

2.1.5. One-Sided Systems of Equations over Idempotent Semifields

This section highlights the difference in solving the equation systems underlying matrices in the idempotent semifield setting. Most of the material in here is from [7,30].
Definition 3.
Let K be the carrier set of a complete idempotent semifield K ¯ . Given A K g × m , and a X K ¯ g , b Y K ¯ m , the one-sided systems of equations over the complete idempotent semifield K ¯ for unknown x X and y Y are:
A ˙ x = b A T ˙ y = a
and likewise for its dual semifield K ̲ ,
A ˙ x = b A T ˙ y = a
Their solutions, naturally, manifest several dualities based on the underlying duality between rows and columns and that of the semifields. Among the more fruitful approaches to finding these solutions are:
  • The algebraic approach, based on relaxing the equations to inequalities and studying lower or upper bounds to the solution by means of residuation, available in every complete, naturally ordered semiring.
  • The combinatorial approach, which involves considering a trisection or partition of the values in the carrier set ( { } , K \ { , } , { } ) and studying the cases made evident by the consideration of such trisection both in the matrix and the given vectors a X and b Y [30]. Since the trisection can easily be seen to contain an embedded copy of 3 3 { , e , } this approach extends, at least partially, to every positive semifield.
The algebraic approach leads to the important notion of the principal solution to each system, found simply by using algebraic manipulation and residuation. Consider the relaxation of the one-sided systems of Definition 3, and let
$$s(A, b) = \{ x \in X \mid A \underset{\cdot}{\otimes} x \le b \} \qquad s(A^T, a) = \{ y \in Y \mid A^T \underset{\cdot}{\otimes} y \le a \}$$
be, respectively, the sets of sub-solutions to (23), and
$$S(A, b) = \{ x \in X \mid A \dot{\otimes} x \ge b \} \qquad S(A^T, a) = \{ y \in Y \mid A^T \dot{\otimes} y \ge a \}$$
be the sets of super-solutions to (24).
Lemma 2
(Principal solutions to one-sided K ¯ and K ̲ system of equations).
$$\overline{x} = A^{*} \dot{\otimes} b = \bigvee s(A, b) \qquad \overline{y} = A^{-1} \dot{\otimes} a = \bigvee s(A^T, a)$$
and row-column dually for:
$$\underline{x} = A^{*} \underset{\cdot}{\otimes} b = \bigwedge S(A, b) \qquad \underline{y} = A^{-1} \underset{\cdot}{\otimes} a = \bigwedge S(A^T, a)$$
where $\overline{x}$ reads “highest sub-solution” and $\underline{x}$ reads “lowest super-solution”, and likewise for the solutions in $Y$.
Proof. 
For x ¯ = A ˙ b , see Theorem 3.1.1 of [7]. The rest follow by the row-column and order dualities. □
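A minimal numerical sketch of the principal solution over max-plus with finite data (our own helpers; the system below is a made-up example): the greatest sub-solution of A ⊗ x ≤ b is obtained by residuation, written here directly as a minimum over rows, and it solves the system exactly whenever the system is solvable.

```python
import numpy as np

def maxplus(A, x):                           # (A (x) x)_i = max_j (A_ij + x_j)
    return np.max(A + x[None, :], axis=1)

def greatest_subsolution(A, b):              # x̄_j = min_i (b_i - A_ij)
    return np.min(b[:, None] - A, axis=0)

A = np.array([[0., 2.],
              [3., 1.]])
b = np.array([1., 2.])

x_bar = greatest_subsolution(A, b)           # here x̄ = [-1, -1]
assert np.all(maxplus(A, x_bar) <= b + 1e-12)   # it is a sub-solution...
assert np.allclose(maxplus(A, x_bar), b)        # ...and this system happens to be solvable exactly
```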

2.1.6. Complete Congruences of Idempotent Semimodules

This section—borrowed and adapted from [47,48,49]—shows that the concept of “kernel (of a matrix)” also needs to be changed in the idempotent semifield setting.
Given a right K -semimodule X a subset W X 2 is called a pre-congruence (of semimodules), if it is a subsemimodule W , , such that ( x , x ) W , x X , and if ( x 1 , x 2 ) W and ( x 2 , x 3 ) W , then ( x 1 , x 3 ) W . Furthermore, it is a congruence (of semimodules) whenever ( x 1 , x 2 ) W implies ( x 2 , x 1 ) W . Therefore, in the present case, congruences are equivalence relations closed with respect to the semimodule operation and have, in turn, a semimodule structure when thought of as a subsemimodule of X 2 [48] (§ 2).
A natural way to define a congruence is to consider a continuous, right linear functional F : X Y to a complete right K -semimodule Y , and define its image I M F and bikernel B IKER F as follows.
$$\mathrm{IM}\,F = \{ y \in Y \mid \exists x \in X,\ y = F(x) \} = \{ F(x) \mid x \in X \} = F(X) \qquad \mathrm{BIKER}\,F = \{ (x_1, x_2) \in X^2 \mid F(x_1) = F(x_2) \}.$$
Conversely, every complete congruence W arises in this way. The bikernel is an analogue to the “kernel of a linear function” in the setting of idempotent semimodules (Please note that in this paper we use kernel to refer to monotone, contractive idempotent endomorphisms of semimodules, e.g., in Appendix B.).
Definition 4.
Let K be an idempotent semifield and X K n a semi-vector space over K . Then,
  • The orthogonal of a semimodule V X is the congruence
    V = { ( x 1 , x 2 ) X 2 x V , x 1 x = x 2 x }
  • The orthogonal of a congruence (as a semimodule) W X 2 is the semimodule
    W = { x X ( x 1 , x 2 ) W , x 1 x = x 2 x }
When two linear functions $F : Z \to X$ and $G : X \to Y$ compose, the property that $\mathrm{KER}(G)$ intersects $\mathrm{IM}(F)$ at a single point is often called a transversality condition, which provides the basis for several results in max-min-plus Control and Decision Theory [46]. Previously it has been used as a reason to call the relation between bikernels and images an orthogonality, whence the notation for bikernels and their orthogonals in Definition 4 [50]. However, for Galois connections as first described in idempotent semimodules by [45], a better way may be the following:
Definition 5
(Orthogonality between semimodules and bikernels). Given a pre-dual pair X , Y and a dot product X Y , we define the following correspondences between semimodules of X 2 and Y :
W X 2 W = { y Y x 1 y = x 2 y , ( x 1 , x 2 ) W } V Y V = { ( x 1 , x 2 ) X 2 x 1 y = x 2 y , y V }
Please note that V is a complete subsemimodule of Y and V = V [47], and
Proposition 5.
Let ( X , Y ) be a pre-dual pair satisfying the property that if W X 2 is a complete congruence and ( s , t ) W , then there exists a y Y such that if ( x 1 , x 2 ) W then x 1 y = x 2 y and s y t y . Then a subsemimodule W X 2 is a complete congruence if and only if W = W .
To investigate the structure of the equivalence classes of congruences later on, for a complete (as a semimodule) pre-congruence $W \subseteq X^2$ and $x \in X$ define:
$$\check{x} = \bigvee \{ x' \in X \mid (x, x') \in W \} \qquad \hat{x} = \bigwedge \{ x' \in X \mid (x, x') \in W \}.$$
Recall that $\check{x}$ (respectively, $\hat{x}$) is just the supremum (infimum) of the equivalence class of $x \in X$ and that it is a closure (interior) operator: $x \le \check{x} = \check{\check{x}}$, whence $x_1 \le x_2$ implies $\check{x}_1 \le \check{x}_2$, and in particular $\check{x}_1 = \check{x}_2$ if $(x_1, x_2) \in W$ (and dually for $\hat{x}$). This will be expanded below.

2.2. Generalized K -Formal Concept Analysis

The following section is a very brief summary of [27,28,29,33,34,36,51] with new results woven into the argumentation and proofs of results left implicit.

2.2.1. Galois Connections between Idempotent Semivector Spaces

Galois connections are ubiquitous in order theory (see Appendix B) and are the cornerstone of FCA (see Chapters 1 and 2 of [8]). In idempotent semifields they generalize, to a certain extent, the concept of a linear transformation between spaces and its transpose—crucial for the SVD—since, given a matrix $R \in K^{g \times m}$, there are four possible types of Galois connections between the spaces associated to the matrix due to scalar products in the semifield $\overline{\mathcal{K}}$ [29]. The next proposition, coming from [45] for finite values of the threshold parameter $\varphi$, was extended to deal with the non-finite cases in [34].
Proposition 6
(Galois connections between idempotent semimodules). Let K ¯ be a complete idempotent semifield and R K g × m a matrix with values in its carrier set. Consider the vector spaces X = K ¯ g and Y = K ¯ m , and an element φ K ¯ . Then the bracket x R y K ¯ OI = x ˙ R ˙ y 1 induces a Galois connection ( · R , φ , · R , φ ) : X Y between the spaces through the polars
x R , φ = R T ˙ x 1 ˙ φ 1 y R , φ = R ˙ y 1 ˙ φ 1
whose composition generate closure operators:
π R , φ ( x ) = ( x R , φ ) R , φ = R ˙ ( R ˙ x ˙ φ ) ˙ φ 1 π R T , φ ( y ) = ( y R , φ ) R , φ = R T ˙ ( R 1 ˙ y ˙ φ ) ˙ φ 1
which define two bijective sets, B G φ and B M φ over which the polars are dual order bijections and the closures are identities, respectively.
B G φ = ( Y ) R , φ = π R , φ ( X ) B M φ = ( X ) R = π R T , φ ( Y )
Furthermore, the bikernels of the polars are those induced by the closures:
B IKER ( · ) R , φ = B IKER π R , φ ( · ) B IKER ( · ) R , φ = B IKER π R T , φ ( · )
Proof. 
By definition, x R , φ = { y Y x R y K ¯ OI φ } . We solve for y in the inequality:
x R y K ¯ OI = x ˙ R ˙ y 1 φ y 1 x ˙ R \ · φ = R ˙ x ˙ φ
whence, inverting, y R T ˙ x 1 ˙ φ 1 . By Lemma 2, the right-hand side term is known to be the greatest solution to the equation R 1 ˙ y = x 1 ˙ φ 1 , whence it is also the looked-for join. Row-column dually on y R = { x X x R y K ¯ OI φ } , the rest of (31) follows.
Clearly, x R = R T ˙ x 1 ˙ φ 1 Y y x X R ˙ y 1 ˙ φ 1 = y R where the double implication was obtained by residuating the original inequality and inverting afterwards. This proves that the dual adjuncts form a Galois connection: By the results in Appendix B, (32) and (33) follow.
Finally, recall the definition of the bikernels of the polars
B IKER ( · ) R , φ = { ( x 1 , x 2 ) X x 1 R , φ = x 2 R , φ } B IKER ( · ) R , φ = { ( y 1 , y 2 ) Y y 1 R , φ = y 2 R , φ }
For ( x 1 , x 2 ) B IKER ( · ) R , φ call x 1 R , φ = x 2 R , φ = b , whence ( x 1 R , φ ) R , φ = ( x 2 R , φ ) R , φ = b R , φ = a which proves the forward inclusion. Likewise, for ( x 1 , x 2 ) B IKER π R , φ ( · ) call ( x 1 R , φ ) R , φ = ( x 2 R , φ ) R , φ = a then applying again the polars ( ( x 1 R , φ ) R , φ ) R , φ = ( ( x 2 R , φ ) R , φ ) R , φ = a R , φ = b wherefrom, considering that ( · ) R , φ and ( ) ˙ R , φ are mutually inverses on B G φ and B M φ , we have x 1 R , φ = x 2 R , φ = b . □
Notice that:
  • The diagram in Figure 2a summarizes this Galois connection [34].
  • Several results stem from using one of the polars or projectors in an argument: when a similar result follows from using the alternate polar, projector or bikernel, we will say that it holds GC-dually. This is licensed by the combination of row-column, and domain-range duality, between the polars.

2.2.2. K ¯ -Formal Concept Analysis

Proposition 6 is the crucial result that allows us to establish an analogue of FCA over complete idempotent semifields in the following manner, starting with the extension of formal contexts and concepts:
Definition 6
(Formal K ¯ -contexts and φ -concepts). Let G and M be sets of (formal) objects and (formal) attributes, respectively, with | G | = g and | M | = m , and a matrix R K g × m with values in the carrier set of a complete idempotent semifield K ¯ . Then
  • the triple ( G , M , R ) K ¯ is a (formal) K ¯ -context, and
  • the isomorphic pairs of elements ( a , b ) B G φ × B M φ of Proposition 6 such that a = b R , φ and b = a R , φ are (formal) φ-concepts of the K ¯ -context ( G , M , R ) K ¯ , in which case a is called its φ-extent, and b its φ-intent. When the value of φ is clear we will omit it. The concept generating functions are:
    γ φ ( x ) = ( ( x R , φ ) R , φ , x R , φ ) μ φ ( y ) = ( y R , φ , ( y R , φ ) R , φ )
  • It is customary to use the notation B φ ( G , M , R ) K ¯ to refer to the set of formal φ -concepts of context ( G , M , R ) K ¯ , and we will also be using the notation B G φ ( G , M , R ) K ¯ and B M φ ( G , M , R ) K ¯ to refer to the sets of φ-extents and φ-intents, respectively, sometimes even without explicit mention of the context itself as B G φ and B M φ .
The parameter $\varphi$ is crucial for the sets of fixpoints of the closure operators, i.e., the extents and intents. Recall that in a complete positive semifield $\overline{\mathcal{K}}$, an element $\varphi$ is called extremal if $\varphi \in \{\bot, \top\}$, and invertible [45] or finite [30] if and only if it is not extremal, $\varphi \in \overline{K} \setminus \{\bot, \top\}$. We can use invertible elements to scale vector spaces as follows:
( · ) ˜ φ : X X ˜ φ x x ˜ φ = x φ
Several points are worth mentioning:
  • Please note that since the upper and lower multiplication only differ in the behaviour of the extreme (non-invertible) elements, it is not necessary to use the dotted convention in scalings.
  • Using ⊥ in a semifield for scaling would annul every coordinate, so ˙ X { X } .
  • Using ⊤ would render any non-zero coordinate of x to ⊤ and we say that x ˜ is saturated. This is the way Boolean spaces are embedded into semivector spaces. ˙ K n 2 2 n [34] (§ 3.8, for an example).
  • Since most of the calculations below are already scaled, we will not usually use this cumbersome notation, but only when we have to mix scaled and un-scaled magnitudes in expressions or procedures.
Invertible elements associated with a scalar product allow us to define Galois connections using residuation in an idempotent semifield between scaled spaces [45].
Theorem 5
( K ¯ -Formal Concept Analysis, analysis phase). Let G and M be sets of formal objects and formal attributes, respectively, with | G | = g and | M | = m , and let ( G , M , R ) K ¯ be a formal context whose incidence R K g × m takes values in the carrier set of a complete idempotent semifield K ¯ . Consider the vector spaces X = K ¯ g and Y = K ¯ m , an invertible element φ = γ μ K and the scaled spaces X ˜ γ = γ X and Y ˜ μ = μ Y . Then the bracket x R y K ¯ OI = x ˙ R ˙ y 1 induces a Galois connection ( · R , · R ) : X ˜ γ Y ˜ μ between the scaled spaces through the polars
x R = R T ˙ x 1 y R = R ˙ y 1
whose composition generate closure operators:
π R ( x ) = ( x R ) R = R ˙ ( R ˙ x ) π R T ( y ) = ( y R ) R = R T ˙ ( R 1 ˙ y )
which define two bijective sets, the system of extents $\mathcal{B}_G^{\gamma}$ and the system of intents $\mathcal{B}_M^{\mu}$, over which the polars and the projectors are dual order isomorphisms and identities, respectively,
B G γ = ( Y ˜ μ ) R = π R ( X ) = ( B M μ ) R B M μ = ( X ˜ γ ) R = π R T ( Y ) = ( B G γ ) R
Moreover, they are complete subsemimodules of the dual semifield K ̲ generated by the rows (respectively, columns) of R and indeed complete dually isomorphic lattices.
$$\mathcal{B}_G^{\gamma} = \langle R \rangle_{\underline{\mathcal{K}}} \qquad \mathcal{B}_M^{\mu} = \langle R^T \rangle_{\underline{\mathcal{K}}}$$
Proof. 
Consider an invertible φ = γ μ K in Proposition 6. Then the bracket x R y K ¯ OI = x ˙ R ˙ y 1 φ can be rewritten in terms of the scaled spaces using residuation as
x ˙ R ˙ y 1 γ ˙ μ γ \ · ( x ˙ R ˙ y 1 ) / · μ e γ 1 ˙ x ˙ R ˙ y 1 ˙ μ 1 e ( x γ ) ˙ R ˙ ( y μ ) 1 e ( x ˜ γ ) ˙ R ˙ ( ( y ˜ μ ) ) 1 e
whence results (37)–(39) follow from to Proposition 6 between the scaled spaces using φ = e .
To prove (40), consider a generic z X ˜ γ and the K ̲ -combination of the columns of R, R ˙ z : We have: π R ( R ˙ z ) = R ˙ ( R ˙ ( R ˙ z ) ) = R ˙ z by the properties of the matrix products, whence R K ̲ B G γ ( G , M , R ) K ¯ . Now consider a B G γ ( G , M , R ) K ¯ whence R ˙ ( R ˙ a ) = a , but this precisely means that a is the combination of columns of R, with coefficients z = R ˙ a , whence B G γ ( G , M , R ) K ¯ R K ̲ , as desired. The proof for the system of intents is GC-dual.
Finally, since both are K ̲ -generated, they are at least, meet-semilattices. Since they are complete with a top, they are also lattices. □
The situation described in this theorem, barring the generation of the semimodule, is depicted to the left of Figure 2b. We will introduce a more detailed diagram when we clarify the role of the bikernel in the results Section.
Please note that the structure of the joins of the lattices defined above is complicated and only found by means of the projectors. Details on how to obtain them can be found in Section 3.3 of [34].
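Concretely, over max-plus the prototypical projector is the residuated projection onto the span of the columns of R; the following minimal sketch (our own helpers, finite entries, plain max-plus rather than the paper's dotted bookkeeping) checks its defining properties: it is decreasing, idempotent, and leaves every max-plus combination of the columns of R fixed (in the order-dual reading these are the closures discussed above).

```python
import numpy as np

def maxplus(A, x):                              # (A (x) x)_i = max_j (A_ij + x_j)
    return np.max(A + x[None, :], axis=1)

def residual(A, x):                             # greatest z with A (x) z <= x
    return np.min(x[:, None] - A, axis=0)

def project(A, x):                              # greatest vector of span(A) below x
    return maxplus(A, residual(A, x))

rng = np.random.default_rng(3)
R = rng.integers(-3, 4, (4, 3)).astype(float)
x = rng.integers(-3, 4, 4).astype(float)

p = project(R, x)
assert np.all(p <= x + 1e-12)                                   # decreasing
assert np.allclose(project(R, p), p)                            # idempotent
z = np.array([1., -2., 0.])
assert np.allclose(project(R, maxplus(R, z)), maxplus(R, z))    # fixes the span of R
```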
For concept lattices it is always important to obtain the object- and attribute-concepts, since they have special properties.
Lemma 3.
For a formal context ( G , M , R ) K ¯ , the object-concepts γ ¯ ( E g ) and attribute-concepts μ ¯ ( E m ) are:
γ ( E g ) = ( R ˙ R , R T ) μ ( E m ) = ( R , R T ˙ R 1 )
Proof. 
Since the columns of E g are the singleton sets of objects, we have:
B = R T ˙ ( E g ) 1 = R T A = R ˙ R
Similarly, the columns of E m are the singleton sets of attributes, whence:
A = R ˙ ( E m ) 1 = R B = R T ˙ R 1
 □
The following is just a rephrasing of (40), in terms of Lemma 3. It is, somehow, the idempotent semifield analogue of the generation of the lattices by intersection [8].
Corollary 1.
For a formal context ( G , M , R ) K ¯ , the attribute extents K ̲ -generate the system of extents, and GC-dually, the object intents K ̲ -generate the system of intents.
It is always interesting to know the top and bottom elements of the lattices. They can simply be obtained as:
= γ ( g ) = ( g , R T ˙ g ) = μ ( m ) = ( R ˙ m , m )
These four special elements are captured in Table 1, on the left, for future reference.
Extremal exploration. When φ is not invertible, the next proposition from [52] takes care of the corner cases.
Proposition 7
(Extremal exploration of the Galois connection). Let K ¯ be a complete idempotent semifield and R K g × m a matrix with values in its carrier set. Consider the vector spaces X = K ¯ g and Y = K ¯ m , an element φ K { , } , and consider the Galois connection ( · R , φ , · R , φ ) : X Y induced by the bracket x R y K ¯ OI = x ˙ R ˙ y 1 between the spaces. Then:
1.
If φ = , then:
x X , x R , = m y Y , y R , = g
B IKER π R ( · ) = { X } B IKER π R T ( · ) = { Y }
B G ( G , M , R ) K ¯ 1 1 B M ( G , M , R ) K ¯
2.
If φ , then:
x N R X , x R , φ = m y N R 1 Y , y R , φ = g
N R B IKER π R ( · ) N R 1 B IKER π R T ( · )
{ ( g , R T ˙ g ) , ( R ˙ m , m ) } B φ ( G , M , R ) K ¯
3.
Furthermore, if $\varphi = \top$, then:
B IKER π R ( · ) = { N R , X \ N R } B IKER π R T ( · ) = { N R 1 , Y \ N R 1 }
B G ( G , M , R ) K ¯ 2 2 B M ( G , M , R ) K ¯
Proof. 
See [52]. □

2.2.3. Dual K ¯ -Formal Concept Analysis or K ̲ -Formal Concept Analysis

It may seem strange that in Proposition 6 we started with K ¯ but the scalar products leading to the definition of the different Galois connections operated in K ̲ . This is actually a requisite of the original work discovering the connections [45], but it also dovetails with previous, unrelated work. We can do the opposite: start with K ̲ , which leads to operations in K ¯ .
Before stating the theorem we introduce yet another duality: those formulae that are correct for K ¯ are order-dually correct for K ̲ whenever all operations and special elements of K ¯ are substituted for their analogues in K ̲ . This is a consequence of lattice duality for the pair of idempotent semifields. Instead of $\bot^{d} = \top$ and $\top^{d} = \bot$ we will use directly $\top$ and $\bot$, exactly as we use the constants 0 and 1 in negated Boolean algebra. Recall that the unit $e$ is its own dual. We write only the dual of Theorem 5.
Theorem 6
( K ̲ -Formal Concept Analysis, analysis phase). Consider the context ( G , M , R ) . The polars for the Galois connection between X ( K ̲ ) g and Y ( K ̲ ) m are:
x R = R T ˙ x 1 y R = R ˙ y 1
whence the closures (in K ̲ ) are:
π R d ( x ) = R ˙ ( R ˙ x ) x π R r d ( y ) = R T ˙ ( R 1 ˙ y ) y
Proof. 
Consider the bracket x R y K ¯ OI = x \ · R / · y T = x ˙ R ˙ y 1 using residuation in K ̲ . This theorem is then order-dual to Theorem 5. A full proof can be found in [33]. □
Notice that
  • Equation (51) describes closure operators for K ̲ , but not those found previously for K ¯ . In fact, since the semifield K ̲ is order-dual, they are interior operators in the original semifield K ¯ .
  • The dual special elements whose direct forms were found at the end of Section 2.2.2, can also be worked out. They appear in the right column of Table 1.
  • We can also invoke the dual of Proposition 6.
  • Since we will need results from the dual Theorems 5 and 6 to coexist, we must supply more notation. For the purpose of simplifying glyphs and as an easy mnemonic, we propose to mark the formal extents and intents of K ̲ -FCA with underlines as a ̲ , b ̲ and those of K ¯ -FCA with overlines as a ¯ , b ¯ .
To gather the results of this and the previous section, we propose a new, more nuanced diagram when φ is invertible, as depicted in Figure 3, to include the information about the generation of the semimodules, the nullspaces and the bikernels for the direct and inverse semifields.

3. Results: The SVD Based in K -FCA

3.1. Approximating the Incidence with Formal Concepts

Definition 7
(Conceptual factors). Consider the context ( G , M , R ) .
1.
For c ¯ = ( a ¯ , b ¯ ) B φ ( G , M , R ) K ¯ , define the lower conceptual φ -factor as R ¯ ( c ¯ ) = a ¯ ˙ φ ˙ b ¯ T , and
2.
For c ̲ = ( a ̲ , b ̲ ) B φ ( G , M , R ) K ¯ , define the upper conceptual φ -factor , as R ̲ ( c ̲ ) = a ̲ ˙ φ ˙ b ̲ T .
The following explains the pertinence of K -FCA to i-SVD and the names of the factors:
Lemma 4.
Let R K g × m be a matrix over a completed idempotent semifield K ¯ , with G a set indexing the rows | G | = g and M a set indexing the columns | M | = m of R. Let φ K so that: c ¯ B φ ( G , M , R ) K ¯ and c ̲ B φ ( G , M , R ) K ¯ are φ-concepts of the K ¯ - and K ̲ -concept lattices of the context ( G , M , R ) , respectively. Then,
R ¯ ( c ¯ ) R R ̲ ( c ̲ )
Proof. 
Since a ¯ R = b ¯ and b ¯ R = a ¯ we have that a ¯ ˙ R ˙ b ¯ 1 φ , whence: R a ¯ \ · φ / · b ¯ = a ¯ ˙ φ ˙ b ¯ T . The other bound is K ̲ -dual. □
Note that the approximation is in the entry-wise order between matrices, so R ¯ ( c ¯ ) is actually a lower approximation to matrix R using one of the φ -concepts of its induced context. To write tighter bounds we develop the following notation: let C ¯ B φ ( G , M , R ) K ¯ be a set of | C ¯ | = k φ -concepts, call A C ¯ the matrix whose columns are, in a particular order, the extents of the concepts of C ¯ and, B C ¯ the matrix whose columns are, in the same order, the intents of the concepts of C ¯ . Also, call Φ k = E k φ and Φ k = E k φ Then we may write:
R ¯ ( C ¯ ) = c ¯ C ¯ R ¯ ( c ¯ ) = A C ¯ ˙ Φ k ˙ B C ¯ T R ̲ ( C ̲ ) = c ̲ C ̲ R ̲ ( c ̲ ) = A C ̲ ˙ Φ k ˙ B C ̲ T
where the definition for R ̲ ( C ̲ ) follows by order-duality. Now we can tighten bounds easily:
Corollary 2.
In the conditions of Lemma 4, with C ¯ B φ ( G , M , R ) K ¯ and C ̲ B φ ( G , M , R ) K ¯ , we have
R ¯ ( C ¯ ) R R ̲ ( C ̲ )
Proof. 
For each c ¯ C ¯ B φ ( G , M , R ) K ¯ notice that R ¯ ( c ¯ ) K g × m is just an element of the idempotent semimodule of matrices over K ¯ for which R ¯ ( c ¯ ) R holds. For c ¯ 1 , c ¯ 2 B φ ( G , M , R ) K ¯ we have R ¯ ( c ¯ 1 ) ˙ R ¯ ( c ¯ 2 ) R ˙ R = R . Since the semimodule of matrices is a complete K ¯ -semimodule, the summation over the whole set of concepts exists and the lower bound holds. And K ̲ -dually for the upper bound. □
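As a minimal numerical illustration in the completed max-plus semifield with φ = e = 0 (a sketch of this reading, not the authors' code; mp_mul and mp_ldiv are ad hoc helpers), any pair formed by an extent candidate x and the residual intent x \ R yields a rank-one factor below R, and accumulating such factors with the lower addition (entrywise maximum) never overshoots R, as the corollary states for φ-concepts.

```python
import numpy as np

def mp_mul(A, B):
    """Max-plus product: (A (x) B)_ij = max_k (A_ik + B_kj)."""
    return np.max(A[:, :, None] + B[None, :, :], axis=1)

def mp_ldiv(A, B):
    """Residual A \\ B: the largest X with A (x) X <= B."""
    return np.min(B[None, :, :] - A.T[:, :, None], axis=1)

rng = np.random.default_rng(1)
R = rng.integers(0, 6, size=(4, 5)).astype(float)

# Rank-one lower factors x (x) (x \ R) <= R built from a few extent candidates.
factors = []
for x in (R[:, [0]], R[:, [2]], rng.integers(0, 6, size=(4, 1)).astype(float)):
    b = mp_ldiv(x, R)                 # intent-like row vector, b_j = min_i (R_ij - x_i)
    factors.append(mp_mul(x, b))      # rank-one lower approximation of R

approx = np.maximum.reduce(factors)   # lower addition = entrywise maximum
assert np.all(approx <= R)            # accumulating factors never overshoots R
print(np.round(R - approx, 2))        # remaining gap of this partial reconstruction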
Our next aim is to choose suitable subsets of concepts with which to synthesize the matrix. The best we can do is to use a finite φ , as the next lemma suggests:
Lemma 5.
In the conditions of Lemma 4,
1.
If φ = (dually φ = ) the lower (upper) bound is the worst possible.
2.
If φ = (dually φ = ) the upper (lower) bound is only tight in the saturated (empty) lines.
Proof. 
For statement 1, notice that for every concept c ¯ = ( a ¯ , b ¯ ) B ( G , M , R ) K ¯ , we have R ¯ ( c ¯ ) = a ¯ ˙ ˙ b ¯ T = g × m . And order-dually for φ = and the upper bound.
Regarding 2, if φ = by Proposition 7 B ( G , M , R ) K ¯ = { ( g , R T ˙ g ) , ( R ˙ m , m ) } , so R ¯ ( B ( G , M , R ) K ¯ ) R with:
R ¯ ( B ( G , M , R ) K ¯ ) = g ˙ ( R T ˙ g ) T ˙ ( R ˙ m ) ˙ m
Consider the first term: if R has no saturated columns R T ˙ g = m , and g ˙ ( R T ˙ g ) T = g ˙ g T = g × m . The second term is row-column dual: if R has no saturated rows R ˙ m = g , and ( R ˙ m ) ˙ ˙ m = g × m . Neither of these cases tightens the bound.
However, if R has saturated columns then ( R T ˙ g ) j = if R j · T is saturated and null otherwise. Then g ˙ ( R T ˙ g ) T has saturated rows where R has saturated rows and is null everywhere else, hence a lower bound on R. Row-column dually, if R has saturated rows then ( R ˙ m ) ˙ m has precisely the same saturated rows and is otherwise empty. Therefore this bound is tight, in general, only in the saturated rows.
Order-dually for the upper bounds regarding the empty lines R R ̲ ( B ( G , M , R ) K ¯ ) with:
R ̲ ( B ( G , M , R ) K ¯ ) = g ˙ ( R T ˙ g ) T ˙ ( R ˙ m ) ˙ m
 □
Notice that the corner case R = g × m (and order-dually R = g × m ) is trivially solved by the single concept in B ( G , M , g × m ) K ¯ = { ( g , m ) } ( B ( G , M , g × m ) K ¯ = { ( g , m ) } ) using statement 1 of Lemma 5. The next corollary shows that all other φ improve the approximations.
Corollary 3.
In the conditions of Lemma 4, for all φ K \ { , }
R ¯ ( B ( G , M , R ) K ¯ ) R ¯ ( B φ ( G , M , R ) K ¯ ) R ̲ ( B φ ( G , M , R ) K ¯ ) R ̲ ( B ( G , M , R ) K ¯ )
Proof. 
By Proposition 7, Case 2, the two concepts of the concept lattice for φ = are also included in the concept lattice for every finite φ { , } (and order-dually for B ( G , M , R ) K ¯ )
B ( G , M , R ) K ¯ B φ ( G , M , R ) K ¯ B ( G , M , R ) K ¯ B φ ( G , M , R ) K ¯ .
The result follows from the monotonicity of addition. □
Corollary 3 shows that for φ { , } , R ¯ ( B ( G , M , R ) K ¯ ) imposes a sort of “mask” of saturated rows and columns preserved by the lower addition in any other reconstruction. And order-dually for R ̲ ( B ( G , M , R ) K ¯ ) and the empty rows and columns with respect to upper bound accumulation. Hence our hope for better reconstruction bounds lies in using a finite φ , in which case Theorem 5 applies rather than the more general Proposition 6. In fact, in this case we can improve on Corollary 2.
Proposition 8.
Let R K g × m be a matrix over a completed idempotent semifield K ¯ , with G a set indexing the rows | G | = g and M a set indexing the columns | M | = m of R. Let φ K so that ( a ¯ , b ¯ ) B φ ( G , M , R ) K ¯ and ( a ̲ , b ̲ ) B φ ( G , M , R ) K ¯ are φ-concepts of the K ¯ - and K ̲ -concept lattices of the context ( G , M , R ) , respectively. Then, the object- and attribute-concepts of either lattice reconstruct the original matrix perfectly.
R ¯ ( γ K ¯ ( E g ) ) = R = R ̲ ( γ K ̲ ( E g ) ) R ¯ ( μ K ¯ ( E m ) ) = R = R ̲ ( μ K ̲ ( E m ) )
Proof. 
Since φ is invertible, we are in the conditions of Theorem 5 where we use normalized spaces and φ = e : Recall, from Table 1 that the object-concepts are γ K ¯ ( E g ) = ( R ˙ R , R T ) and the attribute-concepts are μ K ¯ ( E m ) = ( R , R T ˙ R 1 ) . Then the reconstructions are R ¯ ( γ K ¯ ( E g ) ) = ( R ˙ R ) ˙ E g ˙ R = ( R ˙ R ) ˙ R = R where the last steps follows from the triple products in (16). Likewise for the reconstruction from the attribute-concepts: μ K ¯ ( E m ) = R ˙ E m ˙ ( R T ˙ R 1 ) T = R ˙ ( R ˙ R ) = R . The other approximations are order-dual with γ K ̲ ( E g ) = ( R ˙ R , R T ) and μ K ̲ ( E m ) = ( R , R T ˙ R 1 )  □
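In ordinary max-plus residual notation, and assuming the φ = e = 0 reading, the computational content of Proposition 8 is the pair of triple-product identities R ⊗ (R \ R) = R = (R / R) ⊗ R: generating from all columns (object side) or from all rows (attribute side) recovers the matrix exactly. A quick numerical check of these identities, with ad hoc helper names (a sketch, not the authors' operators):

```python
import numpy as np

def mp_mul(A, B):
    """Max-plus product: (A (x) B)_ij = max_k (A_ik + B_kj)."""
    return np.max(A[:, :, None] + B[None, :, :], axis=1)

def mp_ldiv(A, B):
    """Left residual A \\ B: largest X with A (x) X <= B."""
    return np.min(B[None, :, :] - A.T[:, :, None], axis=1)

def mp_rdiv(B, A):
    """Right residual B / A: largest X with X (x) A <= B."""
    return np.min(B[:, None, :] - A[None, :, :], axis=2)

rng = np.random.default_rng(2)
R = rng.integers(-3, 4, size=(5, 4)).astype(float)

assert np.allclose(mp_mul(R, mp_ldiv(R, R)), R)   # object-side reconstruction
assert np.allclose(mp_mul(mp_rdiv(R, R), R), R)   # attribute-side reconstruction
```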

3.2. Finding Minimal Sets of Factors

The preceding lemmas allow us to concentrate on using either attribute- or object- φ -concepts depending on our preferences, but they provide no real insight into the kind of factors that make up the structure of the data in the context. This situation is analogous to the generation of the binary concept lattice using the attribute- and object-concepts: these are dense sets of generators but may not be minimal. This is the question that we address next.
Let v ¯ K ¯ g and λ K ¯ ; then v ¯ ˙ λ is the ray generated by v ¯ , and we say that two vectors are collinear if they belong to the same ray. This notion can be extended to φ -concepts:
Definition 8
(Concept scaling [53]). Consider the context ( G , M , R ) and finite φ ( , ) . For λ [ , ] define λ scalings of φ -concepts in the following way:
λ ˙ ( a ¯ , b ¯ ) = ( a ¯ ˙ λ , R T ˙ ( a ¯ 1 ˙ λ 1 ) ) f o r ( a ¯ , b ¯ ) B φ ( G , M , R ) K ¯ λ ˙ ( a ̲ , b ̲ ) = ( a ̲ ˙ λ , R T ˙ ( a ̲ 1 ˙ λ 1 ) f o r ( a ¯ , b ¯ ) B φ ( G , M , R ) K ¯
and we say that λ ˙ ( a ¯ , b ¯ ) and ( a ¯ , b ¯ ) (dually, λ ˙ ( a ̲ , b ̲ ) and ( a ̲ , b ̲ ) ) are upper (dually, lower) collinear concepts.
It is illustrative to find the ray of collinear concepts of a particular φ -concept. For that purpose, let ( a ¯ , b ¯ ) B φ ( G , M , R ) K ¯ , and consider upper collinear φ -concepts:
  • If a ¯ K g with finite components, then the ray has as many elements as the cardinality of k. Let λ take the following values < λ < e < μ < . Since multiplication is compatible with the order, we have a ¯ ˙ < a ¯ ˙ λ < a ¯ ˙ e < a ¯ ˙ μ < a ¯ ˙ = g . Since a ¯ B G φ = R K ¯ we have a ¯ = R ˙ u where u K ̲ m , whence a ¯ ˙ = ( R ˙ u ) ˙ = R ˙ ( u ˙ ) R ˙ m g , and so:
    g R ˙ m a ¯ ˙ < a ¯ ˙ λ < a ¯ ˙ e < a ¯ ˙ μ < g .
    For finite λ we have ( a ¯ ˙ λ ) R , φ = R ˙ ( a ¯ 1 λ 1 ) = ( R ˙ a ¯ 1 ) λ 1 = b ¯ λ 1 , wherefore applying the polar of intents we obtain:
    m ( a ¯ ˙ ) R , φ > b ¯ λ 1 > b ¯ > b ¯ μ 1 > R T ˙ g .
    By applying the polar of extents and pairing up we finally obtain the following structure for the ray:
    ( R ˙ m , m ) ( ( ( a ¯ ˙ ) R , φ ) R , φ , ( a ¯ ˙ ) R , φ ) < ( a ¯ ˙ λ , b ¯ ˙ λ 1 ) < ( a ¯ , b ¯ ) < < ( a ¯ ˙ μ , b ¯ ˙ μ 1 ) < ( g , R T ˙ g ) .
  • When a ¯ { , } g , since a ¯ ˙ λ = a for λ , the ray has a maximum of three elements:
    ( R T ˙ m , m ) ( a ¯ , b ¯ ) ( g , R ˙ g ) .
We use the cases of (61) to study collinear φ -concepts, since they include those of (62).
The relevance of scaled φ -concepts for our problem comes from the following fact:
Lemma 6.
Upper (lower) collinear φ-concepts with finite scales do not improve on each other’s lower (upper) conceptual factor when reconstructing the matrix.
Proof. 
Let ( a ¯ , b ¯ ) B φ ( G , M , R ) K ¯ be a direct-semifield φ -concept with conceptual factor R ¯ ( a ¯ , b ¯ ) = a ¯ ˙ ( b ¯ ) T . For finite λ , we have
R ¯ ( ( a ¯ , b ¯ ) ˙ λ ) = R ¯ ( ( a ¯ λ , b ¯ λ 1 ) = ( a ¯ λ ) φ ( b ¯ λ 1 ) T = a ¯ φ b ¯ T = R ¯ ( a ¯ , b ¯ ) .
The absence of dots in the notation is due to the fact that, for finite λ , both operations coincide. And order-dually for upper conceptual φ -factors. □
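In max-plus terms Lemma 6 reduces to the cancellation of a finite scale in an outer sum: the scaled pair contributes a_i + λ + b_j − λ = a_i + b_j in every entry. A one-line numerical check (NumPy broadcasting plays the role of the rank-one outer product; a sketch only):

```python
import numpy as np

a = np.array([[1.0], [3.0], [0.0]])      # extent-like column vector
b = np.array([[2.0, -1.0, 4.0, 0.0]])    # intent-like row vector
lam = 2.5                                # any finite scale

factor = a + b                           # max-plus rank-one factor: entry (i, j) is a_i + b_j
scaled = (a + lam) + (b - lam)           # the same factor built from the scaled concept
assert np.allclose(factor, scaled)
```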
Notice that:
  • For λ = we have the concept ( g , R T ˙ g ) which reconstructs the saturated columns of R as R ¯ ( g , R T ˙ g ) = g ˙ ( g ˙ R ) , from (55). Recall for later that similarly R ¯ ( R ˙ m , m ) = ( R ˙ m ) ˙ ( m ) T reconstructs the saturated rows, and they are both lower approximations to R, by Proposition 1.4.
  • It is easy to see what concept is ( a ¯ , b ¯ ) ˙ . Recall that, since a ¯ is an intent it rewrites as a ¯ = R ˙ u for some u K ¯ m , but since it is the extent due to b ¯ , we must have a ¯ = R ˙ b ¯ 1 , whence we can choose u = b ¯ 1 . We have ( a ¯ ˙ ) R , φ = R T ˙ ( R ˙ ( b ¯ 1 ˙ ) ) 1 ) = R T ˙ ( R 1 ˙ ( b ¯ ˙ ) ) = π R , φ ( b ¯ ˙ ) , a closure, so that ( ( a ¯ ˙ ) R , φ ) R , φ = R ˙ ( R ˙ ( R ˙ ( u ˙ ) ) ) = R ˙ ( u ˙ ) = a ¯ ˙ which shows that it is already an extent. So ( a ¯ , b ¯ ) ˙ = ( a ¯ ˙ , π R , φ ( b ¯ ˙ ) ) .
    From (61), R ˙ m a ¯ ˙ = a , where the last inequality turns into an equality if a ¯ only has empty and saturated coordinates. Also, from (61) R T ˙ g b ¯ b ¯ ˙ π R , φ ( b ¯ ˙ ) .
This suggests that we need not concern ourselves with scaled repetitions of vectors, but concentrate our efforts on sets of independent vectors (a fortiori, on bases). For that purpose, consider the next construction.
Definition 9
(Block forms involving sets of independent vectors). Let R K ¯ g × m . Suppose that the K ̲ version of the A -test detects a set of dependent columns with indices J ¯ so that the independent indices—equivalently the formal attributes—are J = M \ J ¯ , and likewise for the indices of the dependent rows I ¯ and the independent ones I = G \ I ¯ . Recall that repetitions and empty lines—in either algebra—are always dependent. From now on consider their cardinalities fixed as | J | = k m and | I | = l g .
Using these indices we build a permutation matrix for rows P and another for columns Q as customary. By applying these to R we obtain the following block structure on our target matrix R = P T ˙ R ˙ Q
R = V ̲ V ̲ ˙ W ̲ R = U ̲ T Z ̲ T ˙ U ̲ T T
where V ̲ and U ̲ T are sets of K ̲ -independent columns and rows of R, respectively, and W ̲ and Z ̲ are the combinations used to obtain the dependent columns and rows from them, respectively.
We first record a technical lemma.
Lemma 7.
Let R K ¯ g × m . Then:
R ˙ R = V ̲ ˙ V ̲ R T ˙ R 1 = U ̲ ˙ U ̲
Proof. 
R ˙ R = V ̲ V ̲ ˙ W ̲ ˙ V ̲ W ̲ ˙ V ̲ = V ̲ ˙ V ̲ ˙ V ̲ ˙ W ̲ ˙ ( W ̲ ˙ V ̲ ) = V ̲ ˙ V ̲
where the last step follows because W ̲ ˙ ( W ̲ ˙ V ̲ ) V ̲ , since it is a closure operator, whence V ̲ ˙ W ̲ ˙ ( W ̲ ˙ V ̲ ) V ̲ ˙ V ̲ , thus being overlooked by the lower addition. And row-column dually. □
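One concrete instance of the invariance expressed by Lemma 7, in the max-plus reading and with ad hoc helper names: the self-residual of R computed over all columns coincides with the one computed over a spanning subset of columns, because the appended dependent columns never attain a smaller minimum. A minimal sketch:

```python
import numpy as np

def mp_mul(A, B):
    """Max-plus product: (A (x) B)_ij = max_k (A_ik + B_kj)."""
    return np.max(A[:, :, None] + B[None, :, :], axis=1)

def mp_rdiv(B, A):
    """Right residual B / A: largest X with X (x) A <= B."""
    return np.min(B[:, None, :] - A[None, :, :], axis=2)

rng = np.random.default_rng(5)
V = rng.integers(0, 5, size=(4, 2)).astype(float)   # a spanning set of columns
W = rng.integers(0, 5, size=(2, 3)).astype(float)
R = np.hstack([V, mp_mul(V, W)])                    # dependent columns appended

# The self-residual is insensitive to the dependent columns.
assert np.allclose(mp_rdiv(R, R), mp_rdiv(V, V))
```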
Lemma 7 allows us to use the sets of join- and meet-irreducibles for reconstruction:
Lemma 8.
R K ¯ g × m can be reconstructed from its sets of K ̲ -independent columns or rows.
( V ̲ ˙ V ̲ ) ˙ ( V ̲ ˙ V ̲ ) R T = R = ( U ̲ ˙ U ̲ ) R ˙ ( U ̲ ˙ U ̲ ) T
Proof. 
Recall from Proposition 8 that the attribute- and object-concepts—meet- and join-irreducibles of the φ -concept lattice, respectively—reconstruct the incidence perfectly. Since the extent of the object concepts is R ˙ R = V ̲ ˙ V ̲ , we find:
( V ̲ ˙ V ̲ ) R = R T ˙ ( V ̲ ˙ V ̲ ) 1 = R T ˙ ( R ˙ R ) 1 = R T ˙ ( R 1 ˙ R T ) = R T
whence
R ¯ ( ( V ̲ ˙ V ̲ ) , ( V ̲ ˙ V ̲ ) R ) = ( R ˙ R ) ˙ ( R T ) T = R
and row-column dually for the independent rows using the attribute extents:
R ¯ ( ( U ̲ ˙ U ̲ ) R , U ̲ ˙ U ̲ ) = R ˙ ( R T ˙ R 1 ) T = R
 □
These representations are not altogether satisfactory, since they involve the dependent part of R itself. A final result uses K ¯ -independent vectors instead of the K ̲ -independent ones used above, and attains a perfect reconstruction. This is puzzling, inasmuch as we can use the A -test to prove that the K ¯ - and K ̲ -independent sets of vectors are also independent of each other (see Section 16.2 of [30]).
Proposition 9.
Consider the context ( G , M , R ) and a finite φ K .
  • If V ¯ is a set of K ¯ -independent columns and U ¯ T is a set of K ¯ -independent rows of R, then
    R ¯ ( V ¯ , V ¯ R ) = R R ¯ ( U ¯ R , U ¯ ) = R T
    where the operators are those of K ¯ -Formal Concept Analysis.
  • Order-dually, if V ̲ is a set of K ̲ -independent columns and U ̲ T is a set of K ̲ -independent rows of R, then
    R ̲ ( V ̲ , V ̲ R ) = R R ̲ ( U ̲ R , U ̲ ) = R T
    where the operators are those of K ̲ -Formal Concept Analysis.
Proof. 
For R = V ¯ V ¯ ˙ W ¯ we have V ¯ R = R T ˙ V ¯ 1 , whence
R ¯ ( V ¯ , V ¯ R ) = V ¯ ˙ ( V ¯ R ) T = V ¯ ˙ ( R T ˙ V ¯ 1 ) T = V ¯ ˙ ( V ¯ ˙ R ) = V ¯ ˙ ( V ¯ ˙ V ¯ V ¯ ˙ W ¯ ) = V ¯ ˙ ( V ¯ ˙ ( V ¯ ˙ E k ) ) V ¯ ˙ ( V ¯ ˙ ( V ¯ ˙ W ¯ ) ) = V ¯ V ¯ ˙ W ¯ = R
where the last but one step comes from the 3- and 4-products of matrices in Proposition 1.4. Row-column dually, we can partition R T = U ¯ U ¯ ˙ Z ¯ to prove:
R ¯ ( U ¯ R , U ¯ ) = ( R T ˙ U ¯ 1 ) ˙ U ¯ T = ( U ¯ ˙ ( U ¯ ˙ R ) ) T = R T
where the last step uses the second line of the previous demonstration (71). And order-dually for the lower approximation results. □
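A sketch of the reconstruction claimed in Proposition 9 for the max-plus case, with ad hoc helper names; the independence of the chosen columns required by the proposition is not tested here, since only the spanning condition matters for the identity checked: if every column of R is a max-plus combination of the columns of V, then the factor built on V and its polar V \ R already recovers R.

```python
import numpy as np

def mp_mul(A, B):
    """Max-plus product: (A (x) B)_ij = max_k (A_ik + B_kj)."""
    return np.max(A[:, :, None] + B[None, :, :], axis=1)

def mp_ldiv(A, B):
    """Left residual A \\ B: largest X with A (x) X <= B."""
    return np.min(B[None, :, :] - A.T[:, :, None], axis=1)

rng = np.random.default_rng(3)
V = rng.integers(0, 5, size=(4, 2)).astype(float)   # columns chosen as generators
W = rng.integers(0, 5, size=(2, 3)).astype(float)   # coefficients of the remaining columns
R = np.hstack([V, mp_mul(V, W)])                    # R = [ V | V (x) W ]

# The factor built on V and its polar V \ R recovers R exactly.
assert np.allclose(mp_mul(V, mp_ldiv(V, R)), R)
```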

3.3. Towards a Fundamental Theorem of Linear Algebra over Idempotent Semifields

We set out to clarify the relationship between the SVD of rectangular matrices over complete idempotent semifields K ¯ and the FCA of said matrices when considered as the incidences of semifield-valued formal contexts. We have completed previous work pointing out that the sets of object concepts and attribute concepts of the K ¯ -valued concept lattices provide two independent bases with which to carry out the SVD and reconstruction.
We can collect all the previous results in a single theorem that tries to emulate Theorem 1:
Theorem 7
(Fundamental Theorem of Linear Algebra over an Idempotent Semifield K ¯ , interim version). Let K ¯ be a complete idempotent semifield with dual K ̲ and R K g × m a matrix with values in its carrier set. Consider the vector spaces X = K ¯ g and Y = K ¯ m , an element φ K ¯ , and the Galois connection ( · R , φ , · R , φ ) : X Y between the spaces induced by the polars of the matrix. We define the following subspaces of X and Y induced by the polars:
  • The set of extents: I M ( · ) R , φ = ( Y ) R , φ = B G φ
  • The set of intents: I M ( · ) R , φ = ( X ) R , φ = B M φ
  • The bikernel of extents: B IKER ( · ) R , φ = { ( x 1 , x 2 ) X 2 x 1 R , φ = x 2 R , φ }
  • The bikernel of intents: B IKER ( · ) R , φ = { ( y 1 , y 2 ) Y 2 y 1 R , φ = y 2 R , φ }
Then:
1.
From B G φ to B M φ the polars yield an invertible transformation, a ¯ R , φ = b ¯ b ¯ R , φ = a ¯ .
2.
The set of extents: I M ( · ) R , φ = B G φ is K ̲ generated by the columns of R, and the set of intents is K ̲ generated by the columns of R T ,
B G φ = R K ̲ B M φ = R T K ̲
3.
In X K ¯ g the bikernel of extents is the “orthogonal” of the system of extents, B IKER ( · ) R , φ = B G φ , that is the blocks of B IKER ( · ) R , φ intersect B G φ at precisely one point, | B IKER ( · ) R , φ | = | B G φ | .
4.
In Y K ¯ m the bikernel of intents is the “orthogonal” of the system of intents, B IKER ( · ) R , φ = B M φ , that is the blocks of B IKER ( · ) R , φ intersect B M φ at precisely one point, | B IKER ( · ) R , φ | = | B M φ | .
5.
If V ̲ is a K ̲ -independent set of columns with dim V ̲ = k m vectors and U ̲ T a K ̲ -independent set of rows with dim U ̲ = l g vectors, then
B G φ = V ̲ K ̲ B M φ = U ̲ K ̲
6.
There are two “minimal” factorizations of the matrix, one using the join-irreducible and another the meet-irreducible ( K ¯ , φ ) -concepts:
( V ̲ ˙ V ̲ ) ˙ Φ k ˙ ( V ̲ ˙ V ̲ ) R T = R = ( U ̲ ˙ U ̲ ) R ˙ Φ l ˙ ( U ̲ ˙ U ̲ ) T
Proof. 
1 and 2 issue from Proposition 6 and Theorem 5 (Theorem 6 for their order duals). Likewise, 3 and 4 come from Theorem 5 (Theorem 6 for their order duals) and the general properties of the bikernels by Definition 5 and Proposition 5. For 5, clearly since V ̲ R , we have V ̲ K ̲ R K ̲ . On the other hand, if a R K ̲ then a = R ˙ b 1 but since R = V ̲ V ̲ ˙ W ̲ , then clearly a = V ̲ ˙ E k W ̲ ˙ b 1 V ̲ K ̲ . 6 is just Lemma 8, specifically (66) with explicit φ in each of the concepts. □
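Items 3 and 4 can be glimpsed numerically: a vector z and its projection onto the max-plus image of R (the stand-in here for the system of extents) have the same polar image, so they belong to the same bikernel block, and that block meets the image exactly at the projection. A minimal sketch with ad hoc helper names:

```python
import numpy as np

def mp_mul(A, B):
    """Max-plus product: (A (x) B)_ij = max_k (A_ik + B_kj)."""
    return np.max(A[:, :, None] + B[None, :, :], axis=1)

def mp_ldiv(A, B):
    """Left residual A \\ B: largest X with A (x) X <= B."""
    return np.min(B[None, :, :] - A.T[:, :, None], axis=1)

rng = np.random.default_rng(4)
R = rng.integers(0, 5, size=(4, 3)).astype(float)
z = rng.integers(0, 5, size=(4, 1)).astype(float)

p = mp_mul(R, mp_ldiv(R, z))                        # projection of z onto the image of R
assert np.allclose(mp_ldiv(R, z), mp_ldiv(R, p))    # z and p have the same polar image
assert np.allclose(mp_mul(R, mp_ldiv(R, p)), p)     # p is the fixpoint representing the block
```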

4. Discussion

4.1. Contributions

The main contribution of this paper is Theorem 7, which tries to provide, for complete idempotent semifields, one of the basic results for applications of standard algebra: a fundamental theorem of linear forms. This interim, basic theorem comprises several results relevant to LC, to wit:
  • The ranges of linear forms and of the other related types of Galois connections between idempotent spaces are complete lattices; that is, the theorem computes lattices.
  • These spaces, their fixpoints, and pairs thereof—the φ -concepts—can all be computed algebraically via the entwined operators of a complete idempotent semifield and its dual, and these operators are reminiscent of and generalize Boolean algebra, which lends a familiarity to computing in idempotent semifields. Since complete idempotent semifields are complete lattices, we say that in using the theorem we are computing in lattices.
  • φ -formal concepts in these different types of Galois connections allow us to reconstruct the initial matrix of the linear form (Lemma 8). Since the objects and notions of (Concept) Lattice Theory are relevant for this application, we say that we are computing with lattices.
  • Finally, independent subsets of join- and meet-irreducibles of the range lattices of the connection provide two perfect reconstructions and analogues of the SVD for matrices with values in an idempotent semifield (Proposition 9), also an instance of computing in lattices with lattices.
We next try to discuss these contributions from other viewpoints.

4.2. On the Possibility of a Fundamental Theorem of Idempotent Algebra

The comparison of Theorems 1 and 7 shows several issues solved—the dimension of the image lattices, their generation, and the decomposition of the matrix—and others that need more work. Prominent among the latter are the analogues of the rank-nullity and rank-co-nullity theorems contained in Theorem 1. The anomalies detected by decades of research into this question and the abundance of “rank” definitions do not seem to bode well for this issue [7,30].
Since idempotent semirings, and a fortiori idempotent semifields, are zerosumfree (a sum of elements is zero only when every element in the sum is zero) and entire (they have no zero divisors apart from zero itself), the dot product of two vectors is null only when they have non-intersecting supports; this reasoning carries over straightforwardly from the natural numbers. This means that vectors are orthogonal only when they have no non-zero coordinates in common, a very stringent requirement. In fact, the sets of meet- and join-dense elements are, in general, not orthogonal bases, nor is there hope of finding one from them, a situation profoundly at odds with the standard case.
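For instance, in the completed max-plus semifield the dot product max_i (x_i + y_i) equals the zero of the semifield (namely −∞) exactly when the finite supports of the two vectors are disjoint, which is what makes orthogonality so rare; a tiny check (a sketch, with an ad hoc helper name):

```python
import numpy as np

BOT = -np.inf                      # the zero of the max-plus semifield

def mp_dot(u, v):
    """Max-plus dot product: max_i (u_i + v_i)."""
    return np.max(u + v)

x = np.array([1.0, BOT, 3.0, BOT])
y = np.array([BOT, 2.0, BOT, 0.0])     # support disjoint from x's support
z = np.array([BOT, 2.0, 5.0, BOT])     # support overlaps x's at index 2

print(mp_dot(x, y))   # -inf: the vectors are "orthogonal"
print(mp_dot(x, z))   # 8.0: one shared coordinate already breaks orthogonality
```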
A more promising line of work consists of describing the structure of the bikernels. We foresee that this important practical question will have an impact in modelling “noise” in matrices over idempotent semifields, since the partition of the input space underlying the congruence that is a bikernel defines an “approximation” space reminiscent of Rough Set analysis [54] on the inputs that map to a particular image, e.g., form part of a φ -formal concept. In this sense, from Proposition 7 we hypothesise that the influence of the φ parameter is to make the underlying partition of the congruence finer or coarser. We refer to the forthcoming [52] for a discussion of this issue.

4.3. On the Relationship of the i-SVD with the Spectral Theorem

The relationship of the standard SVD with the Spectral Theorem for square matrices is well known [13], and we would expect something similar to appear in the idempotent case if the relationship is structural. Indeed, it would seem that (73) does not resemble an SVD because the singular values are missing.
To this one may reply that the spectral theory in idempotent semifields is, in general, more complicated than the analogue in standard algebra [11]. In fact, the work in [55,56] suggests that we might better think of these sets as skeleton lattices with which to build the subspaces B G γ , and B M μ . However, this subject needs further clarification.

4.4. Using Other Types of Galois Connections to Base the i-SVD

Crucially, the procedure in [13] makes it clear that the SVD in standard algebra is defined in terms of an adjunction, but the procedure described in [53] takes place in a Galois connection, as all standard FCA does. In K ¯ -FCA there is a (left) adjunction available induced by an adjoint pair of matrices
R : Y X R : X Y y x = R ˙ y x y = R ˙ x
where we have used an analogue of the familiar concept of “conjugate” to define the adjunct. Note that apart from basing the SVD procedure on a Galois connection and a left adjunction we could have used the co-Galois connection or the right adjunction (Appendix B). That is to say, for each of the four possible types of connection we have two different i-SVD decompositions, depending on whether we use a set of join- or meet-dense elements as vectors generating the lattice. This situation is at odds with the standard one, where we have essentially one “canonical” basis of vectors prescribed by the SVD procedure (modulo permutation). However, we have verified that these new Galois connections do not provide any more information about the matrix reconstruction, and we will publish these results in later work [52].
Note also that a procedure reminiscent of those above, but leading to an idempotent analogue of the HITS algorithm [57], was already introduced in [58]. This, however, leads to functions that do not form a single type of Galois connection, but rather pairs of mappings each belonging to a different type of connection. In the light of the results of this paper, the reconstruction properties of such pairs of fixpoints are not clear, and such a procedure cannot, in our opinion, be considered an adequate analogue of the SVD in the idempotent setting.

4.5. The i-SVD and Matrix Factorization

A prior investigation that inspired us to look into these issues is [25,59], with an emphasis first on BMF and then on NMF. Indeed, these results were extended for FCA in the fuzzy setting on the basis that the Boolean semiring is always embedded in a fuzzy semiring (bottom left of Figure 1). Later results have emphasized generalizability to residuated lattices [60] and approximations from below [61], and how to measure the quality of approximations, e.g., in the Boolean case [62].
However, that approach is based on logics in the fuzzy setting, rather than the linear algebra that we follow here. In the understanding that K -FCA is just linear algebra over idempotent semifields, the two dual reconstructions from above and from below appear as natural. Note that the approximation from below in the fuzzy setting was already tackled in [61], but we also integrate seamlessly the approximation from above as a result of the order-duality of complete idempotent semifields, which is, in general, not considered in fuzzy semirings as the concept of an “inverted (fuzzy) logic” is less common.
Another difference with previous work in reconstruction using concepts is that we focus specifically on the SVD of a transformation between (semivector) spaces, rather than in matrix factorizations that come only later as an application.
Since both the join- and the meet-irreducibles of the concept lattice are enough for perfect reconstruction, an interesting future investigation suggests itself: is there an added value in choosing either? Notice that, typically, the join-irreducibles are object concepts—with extents closer to object singletons and complex intents—and vice versa for the meet-irreducibles—with intents closer to attribute singletons, but complex extents. These may cognitively be labelled as “prototypes” for the object-concepts and “properties” for the attribute-concepts. Choosing either set of concepts preferentially would be akin to a cognitive strategy for reconstructing matrices.
Example. There is a fully woven R Markdown script—presented in the Supplementary Materials to this paper—carrying out an example of BMF of the incidence matrix of an easy, well-known binary context from [8] as an illustration of these concepts and techniques. The script is part of the R package for computing with R ¯ max , + and R ¯ min , + , available from the authors on request.

4.6. The i-SVD as a Lattice Computing Technique

We have chosen to address the problem of matrix reconstruction in the setting of complete idempotent semifields, which are already complete lattices. This has several important consequences for the LC paradigm:
  • Since there is a proper embedding of the Boolean semiring into complete idempotent semifields as abstract algebras, while at the same time there are concrete instances of those algebras supported on the extended reals and even the extended positive reals, the tools we present conform to the LC tenet of homogeneously processing discrete (Boolean) and continuous data with the same tools. This is what we called in the introduction computing in lattices.
  • Furthermore, this procedure extends through products and sums to several constructions for weighted sequences, formal series, etc. [10] that strictly formalize many objects relevant to Computer Science, which can themselves be partially ordered. As an instance of this, matrices being partially ordered with the entrywise order, NMF and BMF are seamlessly integrated in the theory we develop here.
  • As a consequence of matrices over idempotent semifields being considered linear forms between idempotent spaces, and since the images of these linear forms are ( K -FCA) lattices, a shift in concern is introduced from orthonormal bases to dense sets of meet and join-generators, from vectors and their images to formal concepts, etc., which we called computing with lattices above, since the concerns of the latter seem more relevant.
  • Finally, the programme of LC to emerge as an “information processing paradigm” is fostered since complete idempotent semifields—in particular the completed max-plus and min-plus semifields—are the limits of algebras that “process information” in the Information Theory meaning of the expression [9].
We are committed to provide more tools for the LC programme in this spirit in future contributions [52].

Supplementary Materials

The supplementary materials are available online at https://www.mdpi.com/2227-7390/8/9/1577/s1.

Author Contributions

Conceptualization, F.J.V.-A. and C.P.-M.; Formal analysis, F.J.V.-A. and C.P.-M.; Funding acquisition, C.P.-M.; Investigation, F.J.V.-A. and C.P.-M.; Methodology, F.J.V.-A. and C.P.-M.; Writing—original draft, F.J.V.-A.; Writing—review & editing, F.J.V.-A. and C.P.-M. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Spanish Government-MinECo project TEC2017-84395-P and the Dept. of Research and Innovation of Madrid Regional Authority project EMPATIA-CM (Y2018/TCS-5046).

Acknowledgments

This paper evolved from a conference paper presented by the authors at FUZZ-IEEE 2018 [26]. We would like to acknowledge the reviewers of previous versions of this paper for their timely criticism and suggestions.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
BMF   Boolean Matrix Factorization
FCA   Formal Concept Analysis
i-SVD   Idempotent Singular Value Decomposition
K -FCA   Semifield-Valued Formal Concept Analysis
LC   Lattice Computing
NMF   Non-negative Matrix Factorization
SVD   Singular Value Decomposition

Appendix A. The Problem with Notation of Completed Semifields and Their Semimodules

Complete, naturally ordered idempotent semifields come in dually ordered pairs, e.g., K ¯ and K ̲ and their operators will often appear together in expressions, that is there are two “products” and two “additions” that must coexist [30,35]. The problem really comes to the forefront in multiplications—because in K ¯ we have = but at the same time in K ̲ we have = . However, the operations in complete idempotent semifields resemble so much those of fields that it is interesting, intuitive and efficient to maintain a notation suggestive of the more usual one for fields (The well-known notation of using an apostrophe for the min-related operations in [30] follows this idea but downplays the min-plus semifield, and seems to have been prompted by obsolete typesetting technology.).
Figure A1 suggests one way to obtain such a notation for a dual pair of semifields: maintain the carrier set and use decorations—dots, by Moreau’s suggestion [35]—to distinguish the operations coming from K ¯ (lower dots) and its dual K ̲ (upper dots).
Figure A1. Construction of the aggregated structure of a completed naturally ordered semifield with its inverse, e.g., max-min-plus. The name refers to the original basis for the construction, e.g., K ¯ , since they share the natural order.
In these paired structures, meets can be expressed by means of joins and inversion as a ˙ b = ( a 1 ˙ b 1 ) 1 , and vice-versa and the dotted notation is used to suggest how multiplication works for extremal elements ˙ = and ˙ = . In the case of the max-min-plus semifield, this translates into + ˙ = and + ˙ = . One further advantage is that residuation can be expressed in terms of inverses, and this extends to semimodules:
( x ˙ λ ) 1 = x 1 ˙ λ 1 ( x 1 ˙ x 2 ) 1 = x 1 1 ˙ x 2 1
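A minimal computational sketch of these conventions over the completed max-plus semifield, with ad hoc function names (not the authors' package): the lower product resolves the product of ⊥ and ⊤ to ⊥, the upper product resolves it to ⊤, and inversion (negation in max-plus) swaps one product for the other, as in the identities above.

```python
import math

BOT, TOP = -math.inf, math.inf   # bottom and top of the completed max-plus semifield

def lower_mul(a, b):
    """Lower product (natural-order semifield): BOT absorbs, even against TOP."""
    return BOT if BOT in (a, b) else a + b

def upper_mul(a, b):
    """Upper product (dual semifield): TOP absorbs, even against BOT."""
    return TOP if TOP in (a, b) else a + b

# The two products only differ on the indeterminate case BOT * TOP.
assert lower_mul(BOT, TOP) == BOT and upper_mul(BOT, TOP) == TOP
assert lower_mul(2.0, 3.0) == upper_mul(2.0, 3.0) == 5.0

# Inversion (negation in max-plus) exchanges the two products, as in the identity above.
for a, b in [(3.0, BOT), (TOP, BOT), (2.0, 5.0), (TOP, 1.5)]:
    assert -lower_mul(a, b) == upper_mul(-a, -b)
```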
This notation was introduced in [29]—but also used in [33,34,36,53,55,56]—and has two main advantages: first, it is coherent with Moreau’s notation for convex addition [35], which has precedence, and is reminiscent of [63] for max-times, where conjugation just flips dots up and down. Second, any expression written in it also holds for any other idempotent semifield, e.g., the max-min-times semifield. Note furthermore that this notation is the same for scalars and matrices. In Table A1 it is compared to other notations currently in use.
Table A1. Different notations used for the completed max-min-plus semifield in the literature.
Origin/OperationMax AdditionMin AdditionMax ProductMin Product ϵ e ϵ 1 Conjugate
standard algebra max , sup min , inf + ˙ [35] + ˙ [35] 0 ·
this paper [29] ˙ ˙ ˙ ˙ e ·
Cuninghame-Green [7,30] 0 · *
lattice algebra (scalar) [38]
(matricial)
+ + 0 · *
· *
Maragos (scalar) [32]
(matricial)
+ +

Appendix B. Residuated Maps, Adjunctions and Galois Connections

This section follows [20]. Let P = P , P and Q = Q , Q be partially ordered sets. We have:
  • A map f : P Q is residuated if inverse images of principal (order) ideals of Q under f are again principal ideals. Its residual map or simply residual, f # : Q P is
    f # ( q ) = max { p P f ( p ) Q q } .
  • A map g : Q P is dually residuated if the inverse images of principal dual (order) ideals under g are again dual ideals. Its dual residual map or simply dual residual, g : P Q is
    g ( p ) = min { q Q p P g ( q ) } .
This duality of concepts is fortunately simplified by a well-known theorem stating that residual maps are dually residuated, while dual residual maps are residuated, hence we may maintain only the two notions of residuated maps and their residuals.
In fact, the two notions are so entwined that we give a name to them: an adjoint pair of maps ( λ , ρ ) is a pair ( λ : P Q , ρ : Q P ) between two ordered sets such that p P , q Q , p P ρ ( q ) λ ( p ) Q q , equivalently, p P ρ ( λ ( p ) ) and λ ( ρ ( q ) ) Q q .
If the order relation is actually a partial order, the lower or left adjoint λ is uniquely determined by its right or upper adjoint ρ , and conversely. The characterization theorem for adjoint maps states that ( λ , ρ ) are adjoint if and only if λ is residuated with residual ρ , or equivalently, ρ is dually residuated with λ its dual residual.
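As a numerical illustration of an adjoint (residuated) pair in the setting of the main text, assume the max-plus linear map λ(x) = R ⊗ x with residual ρ(y) = R \ y (helper names are ad hoc); the defining equivalence λ(x) ≤ y ⟺ x ≤ ρ(y) can be checked by brute force on a small grid.

```python
import itertools
import numpy as np

def mp_mul(A, B):
    """Max-plus product: (A (x) B)_ij = max_k (A_ik + B_kj)."""
    return np.max(A[:, :, None] + B[None, :, :], axis=1)

def mp_ldiv(A, B):
    """Residual A \\ B: the largest X with A (x) X <= B (entrywise)."""
    return np.min(B[None, :, :] - A.T[:, :, None], axis=1)

R = np.array([[0.0, 2.0], [1.0, 0.0], [3.0, 1.0]])

def lam(x):            # residuated (lower/left adjoint) map
    return mp_mul(R, x)

def rho(y):            # its residual (upper/right adjoint)
    return mp_ldiv(R, y)

grid = [-1.0, 0.0, 1.0, 2.0]
for xs in itertools.product(grid, repeat=2):
    for ys in itertools.product(grid, repeat=3):
        x = np.array(xs).reshape(2, 1)
        y = np.array(ys).reshape(3, 1)
        # The adjunction property: lam(x) <= y  iff  x <= rho(y).
        assert np.all(lam(x) <= y) == np.all(x <= rho(y))
```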
Now consider the orders P = P , P and Q = Q , Q and their order duals P d = P , P and Q d = Q , Q , to obtain two adjoint and two dually adjoint pairs:
Definition A1
(Four different types of Galois connections and adjunctions).
1.
( λ , ρ ) is an adjunction on the left or simply a left adjunction, and we write ( λ , ρ ) : P Q iff: p P , q Q λ ( p ) Q q p P ρ ( q ) , that is, the functions are covariant, and we say that λ is the lower or left adjoint while ρ is the upper or right adjoint.
2.
( ρ , λ ) : P Q is an adjunction on the right or simply a right adjunction iff: p P , q Q ρ ( p ) Q q p P λ ( q ) , both functions are covariant, ρ is the upper adjoint, and λ the lower adjoint.
3.
( φ , ψ ) is a Galois Connection (proper), of two dual adjoints ( φ , ψ ) : P Q iff: p P , q Q φ ( p ) Q q p P ψ ( q ) , that is, both functions are contravariant. For that reason they are sometimes named contravariant or symmetric adjunctions on the right.
4.
( , ) is a co-Galois connection, of dual adjoints ( , ) : P Q if: p P , q Q ( p ) Q q p P ( q ) , that is, both functions are contravariant. For that reason they are sometimes named contravariant or symmetric adjunctions on the left. ( , ) is also a co-Galois connection.
Furthermore, as a sort of graphical summary, we introduce the diagram in the upper left-hand corner of Figure A2 as the pattern that carries the structures described in [29].
Figure A2. Diagrams visually depicting the maps and structures involved in the adjunction on the left ( λ , ρ ) : P Q (a), Galois connection ( φ , ψ ) : P Q (b), the co-Galois connection ( , ) : P Q (c) and the adjunction on the right ( ρ , λ ) : P Q (d) between two partially ordered sets (adapted from [29]). Closure operators are denoted by γ P , γ Q , interior (kernel) operators by κ P , κ Q , closure systems by P ¯ , Q ¯ and interior (kernel) systems by P ̲ , Q ̲ .
We illustrate how to read it with the diagram at the top left, which has:
  • A closure system, ρ ( Q ) = P ¯ , the closure range of the right adjoint (see below).
  • An interior system, λ ( P ) = Q ̲ , the kernel range of the left adjoint (see below).
  • A closure function [64] (also “closure operator”) γ P = ρ λ P I P , from P to the closure range P ¯ = ρ ( Q ) , with adjoint inclusion map P , where I P denotes the identity over P.
  • A kernel function [64] (also “interior operator”, “kernel operator”) κ P = λ ρ Q I Q , from Q to the range of Q ̲ = λ ( P ) , with adjoint inclusion map Q , where I Q denotes the identity over Q.
  • a perfect adjunction ( λ ˜ , ρ ˜ ) : P ¯ Q ̲ , i.e., a dual order isomorphism between the closure and kernel ranges P ¯ and Q ̲ .
Compare the mathematical objects above with those in a Galois connection proper seen in the top right of Figure A2: the ranges are both closure systems and both compositions closure operators due to the dualisation of the second set (we write γ Q for the new closure operator), resulting in the well-known perfect Galois connection, ( φ ˜ , ψ ˜ ) : P ¯ Q ¯ , the pair of dual order-isomorphic closure ranges lying at the heart of Formal Concept Analysis. The diagrams in the bottom left and right of Figure A2 show analogue structures for co-Galois connections and right adjunctions respectively.
The different monotonicity conditions account for different properties of the adjoint maps [20]:
  • if ( λ , ρ ) form a left adjunction, then λ is residuated, preserves existing least upper bounds (for lattices, joins) and ρ preserves existing greatest lower bounds (for lattices, meets).
  • if ( φ , ψ ) form a Galois connection, then both φ and ψ invert existing least upper bounds (for lattices, they transform joins into meets).
  • if ( ρ , λ ) form a right Galois connection, then ρ preserves existing greatest lower bounds (meets for lattices) and λ is residuated, preserves existing least upper bounds (joins for lattices).
  • if ( , ) form a co-Galois connection, then both maps invert existing greatest lower bounds (for lattices, they transform meets into joins).
Table A2 summarises the main properties of all types of Galois connections.
Table A2. Summary of Galois connections and their properties, for P , Q posets.
Left Adjunction (type oo): ( λ , ρ ) : P Q Galois Connection (type oi): ( φ , ψ ) : P Q
p P , q Q λ ( p ) Q q p P ρ ( q ) p P , q Q φ ( p ) Q q p P ψ ( q )
I P ρ λ and I Q λ ρ I P ψ φ and I Q φ ψ
λ = λ ρ λ and ρ = ρ λ ρ φ = φ ψ φ and ψ = ψ φ ψ
λ monotone, residuated φ antitone
ρ monotone, residual ψ antitone
λ join-preserving, ρ meet-preserving φ join-inverting, ψ join-inverting
co-Galois connection (type io): ( , ) : P Q Right Adjunction (type ii): ( ρ , λ ) : P Q
p P , q Q ( p ) Q q p P ( q ) p P , q Q ρ ( p ) Q q p P λ ( q )
I P and I Q I P λ ρ and I Q ρ λ
= and = ρ = ρ λ ρ and λ = λ ρ λ
antitone ρ monotone, residual
antitone λ monotone, residuated
meet-inverting, meet-inverting ρ meet-preserving, λ join-preserving
See [65] for a review of the genesis and importance of Galois Connections and adjunctions, as well as a discussion of the different notations and nomenclatures for these concepts. Ref. [66] is an early tutorial with mathematical applications in mind.

A Naming Convention for Galois Connections

The following naming convention was put forward in [29]. It stresses the composition with order- and dual order-isomorphisms and relates to the original names as annotated in Figure A2.
  • We take the type oo Galois connection to be a basic adjunction composed with an even number of anti-isomorphisms on the domain and range orders.
  • To obtain a type oi Galois connection, compose a basic adjunction with an odd number of anti-isomorphisms on the range.
  • To get a type io Galois connection, we compose a basic adjunction with an odd number of anti-isomorphisms on the domain.
  • Finally, a type ii Galois connection is a basic adjunction with an odd number of anti-isomorphisms composed on both the domain and the range.

References

  1. Kaburlasos, V.G. The Lattice Computing (LC) Paradigm. In Proceedings of the 15th International Conference on Concept Lattices and Their Applications CLA, Tallinn, Estonia, 29 June–1 July 2020; Valverde-Albacete, F.J., Trnecka, M., Eds.; Tallinn University of Technology: Tallinn, Estonia, 2020; pp. 1–7. [Google Scholar]
  2. Kaburlasos, V.G.; Ritter, G.X. (Eds.) Computational Intelligence Based on Lattice Theory; Springer: Berlin/Heidelberg, Germany, 2010. [Google Scholar]
  3. Platzer, A. Logical Foundations of Cyber-Physical Systems; Springer: Berlin/Heidelberg, Germany, 2018. [Google Scholar]
  4. Golub, G.H.; Van Loan, C.F. Matrix Computations, 3rd ed.; JHU Press: Baltimore, MD, USA, 2012. [Google Scholar]
  5. Mirkin, B. Mathematical Classification and Clustering; Nonconvex Optimization and its Applications; Kluwer Academic Publishers: Dordrecht, The Netherlands; Springer: Berlin/Heidelberg, Germany, 1996. [Google Scholar]
  6. Baccelli, F.; Cohen, G.; Olsder, G.; Quadrat, J. Synchronization and Linearity; Wiley: Hoboken, NJ, USA, 1992. [Google Scholar]
  7. Butkovič, P. Max-Linear Systems. Theory and Algorithms; Monographs in Mathematics; Springer: Berlin/Heidelberg, Germany, 2010. [Google Scholar]
  8. Ganter, B.; Wille, R. Formal Concept Analysis: Mathematical Foundations; Springer: Berlin/Heidelberg, Germany, 1999. [Google Scholar]
  9. Valverde-Albacete, J.F.; Peláez-Moreno, C. The Rényi Entropies Operate in Positive Semifields. Entropy 2019, 21, 780. [Google Scholar] [CrossRef] [Green Version]
  10. Golan, J.S. Semirings and Their Applications; Springer: Berlin/Heidelberg, Germany, 1999. [Google Scholar]
  11. Gondran, M.; Minoux, M. Graphs, Dioids and Semirings. New Models and Algorithms; Operations Research Computer Science Interfaces Series; Springer: Berlin/Heidelberg, Germany, 2008. [Google Scholar]
  12. Denecke, K.; Erné, M.; Wismath, S. (Eds.) Galois Connections and Applications; Number 565 in Mathematics and Its Applications; Kluwer Academic: Dordrecht, The Netherlands; Boston, MA, USA; London, UK, 2004. [Google Scholar]
  13. Lanczos, C. Linear Differential Operators; Dover Publications: Mineola, NK, USA, 1997. [Google Scholar]
  14. Strang, G. The fundamental theorem of linear algebra. Am. Math. Mon. 1993, 100, 848–855. [Google Scholar] [CrossRef]
  15. Deerwester, S.; Dumais, S.; Furnas, G.W.; Landauer, T.K.; Harshman, R. Indexing by latent semantic analysis. J. Am. Soc. Inf. Sci. 1990, 41, 391–407. [Google Scholar] [CrossRef]
  16. Pearson, K. On Lines and Planes of Closest Fit to Systems of Points in Space. Philos. Mag. 1901, 2, 559–572. [Google Scholar] [CrossRef] [Green Version]
  17. Landauer, T.K.; McNamara, D.S.; Dennis, S.; Kintsch, W. Handbook of Latent Semantic Analysis; Lawrence Erlbaum Associates: Mahwah, NJ, USA, 2007. [Google Scholar]
  18. Hastie, T.; Tibshirani, R.; Friedman, J. The Elements of Statistical Learning. Data Mining, Inference and Prediction, 2nd ed.; Springer Series in Statistics; Springer: Berlin/Heidelberg, Germany, 2008; pp. 1–764. [Google Scholar]
  19. Wille, R. Finite distributive lattices as concept lattices. Log. Math. 1985, 2, 635–648. [Google Scholar]
  20. Davey, B.; Priestley, H. Introduction to Lattices and Order, 2nd ed.; Cambridge University Press: Cambridge, UK, 2002. [Google Scholar]
  21. Birkhoff, G. Lattice Theory, 3rd ed.; American Mathematical Society: Providence, RI, USA, 1967. [Google Scholar]
  22. Ore, O. Theory of Graphs. American Mathematical Society Colloquiun Publications; American Mathematical Society: Providence, RI, USA, 1962; Volume XXXVIII. [Google Scholar]
  23. Barbut, M.; Monjardet, B. Ordre et Classification. Algèbre et Combinatoire, Tome I; Méthodes Mathématiques des Sciences de l’Homme; Hachette: New York, NY, USA, 1970. [Google Scholar]
  24. Barbut, M.; Monjardet, B. Ordre et Classification. Algèbre et Combinatoire, Tome II; Méthodes Mathématiques des Sciences de l’Homme; Hachette: New York, NY, USA, 1970. [Google Scholar]
  25. Belohlavek, R.; Vychodil, V. Formal concepts as optimal factors in Boolean factor analysis: Implications and experiments. In Proceedings of the 5th International Conference on Concept Lattices and their Applications, (CLA07), Montpellier, France, 24–26 October 2007. [Google Scholar]
  26. Valverde-Albacete, F.J.; Pelaez-Moreno, C. On the Relation between Semifield-Valued FCA and the Idempotent Singular Value Decomposition. In Proceedings of the IEEE International Conference on Fuzzy Systems (FUZZ IEEE 2018), Rio de Janeiro, Brazil, 8–13 July 2018; pp. 1–8. [Google Scholar]
  27. Valverde-Albacete, F.J.; Peláez-Moreno, C. Towards a Generalisation of Formal Concept Analysis for Data Mining Purposes. In Formal Concept Analysis; Springer: Berlin/Heidelberg, Germany, 2006; Volume LNAI 3874, pp. 161–176. [Google Scholar]
  28. Valverde-Albacete, F.J.; Peláez-Moreno, C. Further Galois connections between Semimodules over Idempotent Semirings. In Proceedings of the Fifth International Conference on Concept Lattices and Their Applications, CLA 2007, Montpellier, France, 24–26 October 2007; pp. 199–212. [Google Scholar]
  29. Valverde-Albacete, F.J.; Peláez-Moreno, C. Extending conceptualisation modes for generalised Formal Concept Analysis. Inf. Sci. 2011, 181, 1888–1909. [Google Scholar] [CrossRef] [Green Version]
  30. Cuninghame-Green, R. Minimax Algebra; Number 166 in Lecture notes in Economics and Mathematical Systems; Springer: Berlin/Heidelberg, Germany, 1979. [Google Scholar]
  31. Ritter, G.X.; Sussner, P. An Introduction to Morphological Neural Networks. In Proceedings of the International Conference on Pattern Recognition (ICPR’96), Vienna, Austria, 25–29 August 1996; pp. 709–717. [Google Scholar]
  32. Maragos, P. Dynamical systems on weighted lattices: General theory. Math. Control Signals Syst. 2017, 29, 1–49. [Google Scholar] [CrossRef] [Green Version]
  33. Valverde-Albacete, F.J.; Peláez-Moreno, C. The Linear Algebra in Formal Concept Analysis over Idempotent Semifields. In Formal Concept Analysis; Number 9113 in LNAI; Springer: Berlin/Heidelberg, Germany, 2015; pp. 97–113. [Google Scholar]
  34. Valverde-Albacete, F.J.; Peláez-Moreno, C. K-Formal Concept Analysis as linear algebra over idempotent semifields. Inf. Sci. 2018, 467, 579–603. [Google Scholar] [CrossRef]
  35. Moreau, J.J. Inf-convolution, Sous-additivité, convexité des fonctions Numériques. J. Math. Pures Appl. 1970, 49, 109–154. [Google Scholar]
  36. Valverde-Albacete, F.J.; Peláez-Moreno, C. The Linear Algebra in Extended Formal Concept Analysis Over Idempotent Semifields. In Formal Concept Analysis; Bertet, K., Borchmann, D., Cellier, P., Ferré, S., Eds.; Springer: Berlin/Heidelberg, Germany, 2017; pp. 211–227. [Google Scholar]
  37. Ritter, G.X.; Sussner, P. The minimax eigenvalue transform. In Image Algebra and Morphological Image Processing, III (San Diego, CA, 1992); Gader, P.D., Dougherty, E.R., Serra, J.C., Eds.; SPIE: Bellingham, WA, USA, 1992; pp. 276–282. [Google Scholar]
  38. Ritter, G.X.; Diaz-de Leon, J.; Sussner, P. Morphological bidirectional associative memories. Neural Netw. 1999, 12, 851–867. [Google Scholar] [CrossRef]
  39. Ritter, G.X.; Sussner, P.; Diaz-de Leon, J. Morphological Associative Memories. IEEE Trans. Neural Netw. 1998, 9, 281–293. [Google Scholar] [CrossRef] [PubMed]
  40. Schutter, B.D.; Moor, B.D. The Singular-Value Decomposition in the Extended Max Algebra. Linear Algebra Its Appl. 1997, 250, 143–176. [Google Scholar] [CrossRef] [Green Version]
  41. Schutter, B.D.; Moor, B.D. The QR Decomposition and the Singular Value Decomposition in the Symmetrized Max-Plus Algebra Revisited. SIAM Rev. 2002, 44, 417–454. [Google Scholar] [CrossRef]
  42. Hook, J. Max-plus singular values. Linear Algebra Its Appl. 2015, 486, 419–442. [Google Scholar] [CrossRef]
  43. Ronse, C. Why mathematical morphology needs complete lattices. Signal Process. 1990, 21, 129–154. [Google Scholar] [CrossRef]
  44. Blyth, T.; Janowitz, M. Residuation Theory; Pergamon Press: Oxford, UK, 1972. [Google Scholar]
  45. Cohen, G.; Gaubert, S.; Quadrat, J.P. Duality and separation theorems in idempotent semimodules. Linear Algebra Its Appl. 2004, 379, 395–422. [Google Scholar] [CrossRef] [Green Version]
  46. Cohen, G.; Gaubert, S.; Quadrat, J.P. Projection and aggregation in maxplus algebra. In Current Trends in Nonlinear Systems and Control; Birkhäuser Boston: Boston, MA, USA, 2006; pp. 443–454. [Google Scholar]
  47. Gaubert, S.; Katz, R.D. The tropical analogue of polar cones. Linear Algebra Its Appl. 2009, 431, 608–625. [Google Scholar] [CrossRef] [Green Version]
  48. Di Loreto, M.; Gaubert, S.; Katz, R.D.; Loiseau, J.J. Duality Between Invariant Spaces for Max-Plus Linear Discrete Event Systems. SIAM J. Control Optim. 2010, 48, 5606–5628. [Google Scholar] [CrossRef] [Green Version]
  49. Gaubert, S. Théorie des Systèmes Linéaires Dans les Dioïdes. Ph.D. Thesis, École des Mines de Paris, Paris, France, 1992. [Google Scholar]
  50. Cohen, G.; Gaubert, S.; Quadrat, J. Kernels, images and projections in dioids. In Proceedings of the Workshop on Discrete Event Systems (WODES), Edinburgh, Scotland, UK, 19–21 August 1996; pp. 1–8. [Google Scholar]
  51. Valverde-Albacete, F.J.; Peláez-Moreno, C. Galois Connections between Semimodules and Applications in Data Mining. In Formal Concept Analysis, Proceedings of the 5th International Conference on Formal Concept Analysis, ICFCA 2007, Clermont-Ferrand, France, 12–16 February 2007; Number 4390 in LNAI; Kusnetzov, S., Schmidt, S., Eds.; Springer: Berlin/Heidelberg, Germany, 2007; pp. 181–196. [Google Scholar]
  52. Valverde-Albacete, F.J.; Peláez-Moreno, C. Idempotent Semifield-Valued Formal Concept Analysis and the 4-fold Galois Connection. 2020; in preparation. [Google Scholar]
  53. Valverde-Albacete, F.J.; Peláez-Moreno, C. Towards Galois Connections over Positive Semifields. In Information Processing and Management of Uncertainty in Knowledge-Based Systems; Springer: Berlin/Heidelberg, Germany, 2016; Volume 611 CCIS, pp. 81–92. [Google Scholar]
  54. Bloch, I. On links between mathematical morphology and rough sets. Pattern Recognit. 2000, 33, 1487–1496. [Google Scholar] [CrossRef]
  55. Valverde-Albacete, F.J.; Peláez-Moreno, C. The Spectra of irreducible matrices over completed idempotent semifields. Fuzzy Sets Syst. 2015, 271, 46–69. [Google Scholar] [CrossRef] [Green Version]
  56. Valverde-Albacete, F.J.; Peláez-Moreno, C. The spectra of reducible matrices over complete commutative idempotent semifields and their spectral lattices. Int. J. Gen. Syst. 2016, 45, 86–115. [Google Scholar] [CrossRef] [Green Version]
  57. Kleinberg, J.M. Authoritative sources in a hyperlinked environment. J. ACM 1999, 46, 604–632. [Google Scholar] [CrossRef]
  58. Valverde-Albacete, F.J.; Peláez-Moreno, C. A Formal Concept Analysis Look at the Analysis of Affiliation Networks. In Formal Concept Analysis of Social Networks; Missaoui, R., Kuznetsov, S.O., Obiedkov, S., Eds.; Springer International Publishing: Cham, Switzerland, 2017; pp. 171–195. [Google Scholar]
  59. Belohlavek, R.; Vychodil, V. Factor Analysis of Incidence Data via Novel Decomposition of Matrices. In Proceedings of the International Conference on Formal Concept Analysis (ICFCA09), Darmstadt, Germany, 12 May 2009; Springer: Berlin/Heidelberg, Germany, 2009; pp. 83–97. [Google Scholar]
  60. Belohlavek, R. Optimal decompositions of matrices with entries from residuated lattices. J. Log. Comput. 2012, 22, 1405–1425. [Google Scholar] [CrossRef]
  61. Belohlavek, R.; Trnecka, M. From-below approximations in Boolean matrix factorization: Geometry and new algorithm. J. Comput. Syst. Sci. 2015, 81, 1678–1697. [Google Scholar] [CrossRef] [Green Version]
  62. Belohlavek, R.; Outrata, J.; Trnecka, M. Toward quality assessment of Boolean matrix factorizations. Inf. Sci. 2018, 459, 71–85. [Google Scholar] [CrossRef]
  63. Vorobjev, N. The extremal matrix algebra. Dokl. Akad. Nauk SSSR 1963, 4, 1220–1223. [Google Scholar]
  64. Düntsch, I.; Gediga, G. Approximation operators in qualitative data analysis. In Theory and Applications of Relational Structures as Knowledge Instruments; Lecture Notes in Computer Science; de Swart, H., Orłowska, E., Schmidt, G., Roubens, M., Eds.; Springer: Berlin/Heidelberg, Germany, 2003; Volume 2929, pp. 214–230. [Google Scholar]
  65. Erné, M. Adjunctions and Galois connections: Origins, History and Development. In Galois Connections and Applications; Springer: Amsterdam, The Netherlands, 2004; pp. 1–138. [Google Scholar]
  66. Erné, M.; Koslowski, J.; Melton, A.; Strecker, G. A primer on Galois Connections. Ann. N. Y. Acad. Sci. 1993, 704, 103–125. [Google Scholar] [CrossRef]
Figure 1. Lattice of a selection of abstract (leading asterisk, white label) and concrete (white label) commutative semirings and their properties (grey label) mentioned in the text, adapted from [9]. Each node is a concept of Abstract Algebra: its properties are obtained from the gray labels in nodes upwards, and its structures from the white labels in nodes downwards. The picture is related to the chosen sets of properties and algebras and does not fully reflect the structure of the class of semirings. We have chosen to highlight idempotent and non-idempotent positive semifields, such as R 0 + and Q 0 + .
Figure 2. The Galois connection induced by a formal context between spaces ( · R , φ , · R , φ ) : X Y : (a) unnormalized version, (b) normalized in φ = γ μ (from [34]). Refer to the text for the notation.
Figure 3. Schematics of the Galois connection in the formal concept lattices of (G, M, R) for the direct and inverse semifields (see Section 2.2.3) for an invertible φ. White ovals: concept lattices. Grey shaded areas: bikernels. Grey ovals: bikernel blocks. The increasing sense of the orders is schematised with arrows. The dashed slanted lines connect the extent and intent of some φ-concepts. (improved, enriched from [36]).
Table 1. Special elements of the concept lattices with biases in K ¯ (left column) and K ̲ (right column).
Special Element   Bias in K ¯    Bias in K ̲
top γ K ¯ ( g ) = ( g , R T ˙ g ) γ K ̲ ( g ) = ( g , R T ˙ g )
attribute-concepts μ K ¯ ( E m ) = ( R , R T ˙ R 1 ) μ K ̲ ( E m ) = ( R , R T ˙ R 1 )
object-concepts γ K ¯ ( E g ) = ( R ˙ R , R T ) γ K ̲ ( E g ) = ( R ˙ R , R T )
bottom μ K ¯ ( m ) = ( R ˙ m , m ) μ K ̲ ( m ) = ( R ˙ m , m )
