Article

The Siegel–Klein Disk: Hilbert Geometry of the Siegel Disk Domain

Sony Computer Science Laboratories, Tokyo 141-0022, Japan
Entropy 2020, 22(9), 1019; https://doi.org/10.3390/e22091019
Submission received: 13 August 2020 / Revised: 10 September 2020 / Accepted: 10 September 2020 / Published: 12 September 2020
(This article belongs to the Special Issue Information Geometry III)

Abstract
We study the Hilbert geometry induced by the Siegel disk domain, an open-bounded convex set of complex square matrices of operator norm strictly less than one. This Hilbert geometry yields a generalization of the Klein disk model of hyperbolic geometry, henceforth called the Siegel–Klein disk model to differentiate it from the classical Siegel upper plane and disk domains. In the Siegel–Klein disk, geodesics are by construction always unique and Euclidean straight, allowing one to design efficient geometric algorithms and data structures from computational geometry. For example, we show how to approximate the smallest enclosing ball of a set of complex square matrices in the Siegel disk domain. We compare two generalizations of the iterative core-set algorithm of Badoiu and Clarkson (BC), one in the Siegel–Poincaré disk and one in the Siegel–Klein disk, and demonstrate that geometric computing in the Siegel–Klein disk allows one (i) to bypass the time-costly recentering operations to the disk origin required at each iteration of the BC algorithm in the Siegel–Poincaré disk model, and (ii) to quickly approximate the Siegel–Klein distance numerically, with guaranteed lower and upper bounds derived from nested Hilbert geometries.

1. Introduction

German mathematician Carl Ludwig Siegel [1] (1896–1981) and Chinese mathematician Loo-Keng Hua [2] (1910–1985) independently introduced symplectic geometry in the 1940s (with a preliminary work of Siegel [3] released in German in 1939). The adjective symplectic stems from the Greek and means "complex": That is, mathematically, the number field C instead of the ordinary real field R. Symplectic geometry was originally motivated by the study of complex multivariate functions in the two landmark papers of Siegel [1] and Hua [2]. As we shall soon see, the name "symplectic geometry" for the geometry of complex matrices originally stems from its relationships with the symplectic groups (and their matrix representations). Presently, symplectic geometry is mainly understood as the study of symplectic manifolds [4], which are even-dimensional differentiable manifolds equipped with a closed and nondegenerate differential 2-form ω, called the symplectic form, studied in geometric mechanics.
We refer the reader to the PhD theses [5,6] for an overview of Siegel bounded domains. More generally, Siegel-like bounded domains were studied and classified into six types in the most general setting of bounded symmetric irreducible homogeneous domains by Élie Cartan [7] in 1935 (see also [8,9]).
The Siegel upper space and the Siegel disk domains provide generalizations of the complex Poincaré upper plane and the complex Poincaré disk to spaces of symmetric square complex matrices. In the remainder, we shall term them the Siegel–Poincaré upper plane and the Siegel–Poincaré disk. The Siegel upper space includes the well-studied cone of real symmetric positive-definite (SPD) matrices [10] (SPD manifold). The celebrated affine-invariant SPD Riemannian metric [11] can be recovered as a special case of the Siegel metric.
Applications of the geometry of Siegel upper/disk domains are found in radar processing [12,13,14,15] especially for dealing with Toepliz matrices [16,17], probability density estimations [18] and probability metric distances [19,20,21,22], information fusion [23], neural networks [24], theoretical physics [25,26,27], and image morphology operators [28], just to cite a few.
In this paper, we extend the Klein disk model [29] of hyperbolic geometry to the Siegel disk domain by considering the Hilbert geometry [30] induced by the open-bounded convex Siegel disk [31,32]. We call the Hilbert metric distance of the Siegel disk the Siegel–Klein distance, and we term this model the Siegel–Klein disk model to contrast it with the Siegel–Poincaré upper plane model and the Siegel–Poincaré disk model. The main advantage of using the Siegel–Klein disk model instead of the usual Siegel–Poincaré upper plane or Siegel–Poincaré disk is that the geodesics are unique and always straight by construction. Thus, the Siegel–Klein disk model is very well-suited for designing efficient algorithms and data structures by borrowing techniques of Euclidean computational geometry [33]. Moreover, in the Siegel–Klein disk model, we have an efficient and robust method to approximate the Siegel–Klein distance with guarantees: This is especially useful when handling high-dimensional square complex matrices. The algorithmic advantage of the Hilbert geometry was already observed for real hyperbolic geometry (included as a special case of the Siegel–Klein model): For example, hyperbolic Voronoi diagrams can be efficiently computed as affine power diagrams clipped to the boundary circle [34,35,36,37]. To demonstrate the advantage of the Siegel–Klein disk model (Hilbert distance) over the Siegel–Poincaré disk model (Kobayashi distance), we consider approximating the Smallest Enclosing Ball (SEB) of a set of square complex matrices in the Siegel disk domain. This problem finds potential applications in image morphology [28,38] or anomaly detection of covariance matrices [39,40]. Let us state the problem as follows:
Problem 1 (Smallest Enclosing Ball (SEB)). 
Given a metric space ( X , ρ ) and a finite set { p 1 , , p n } of n points in X, find the smallest-radius enclosing ball with circumcenter c * minimizing the following objective function:
\min_{c \in X} \max_{i \in \{1, \ldots, n\}} \rho(c, p_i).
In general, SEBs may not be unique in a metric space: For example, SEBs are not unique in a discrete Hamming metric space [41], where computing an SEB is notably NP-hard. We note in passing that the set-complement of a Hamming ball is itself a Hamming ball in a Hamming metric space. However, the SEB is provably unique in Euclidean geometry [42], in hyperbolic geometry [43], on the Riemannian positive-definite matrix manifold [44,45], and more generally in any Cartan–Hadamard manifold [46] (a Riemannian manifold that is complete and simply connected with non-positive sectional curvatures). The SEB is also guaranteed to be unique in any Bruhat–Tits space [44] (i.e., a complete metric space with a semi-parallelogram law), which includes the Riemannian SPD manifold.
A fast (1 + ϵ)-approximation algorithm requiring ⌈1/ϵ²⌉ iterations was reported in [46,47] to approximate the SEB in the Euclidean space: That is, it returns a covering ball of radius (1 + ϵ) r*, where r* = max_{i ∈ {1,…,n}} ρ(c*, p_i) for c* = argmin_{c ∈ X} max_{i ∈ {1,…,n}} ρ(c, p_i). Since the approximation factor does not depend on the dimension, this SEB approximation algorithm has found many applications in machine learning [48] (e.g., in Reproducing Kernel Hilbert Spaces [49], RKHS).
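To make the core-set iteration concrete, here is a minimal sketch of the Badoiu–Clarkson scheme in the Euclidean setting (the function name and the fixed iteration count are ours; the algorithm is the cited one: step a fraction 1/(t+1) toward the current farthest point):

```python
import numpy as np

def bc_seb_center(points, iterations):
    """Badoiu-Clarkson iteration: start at an arbitrary input point, then
    repeatedly walk a fraction 1/(t+1) along the geodesic (here: the
    straight line) toward the farthest point. After O(1/eps^2) iterations
    the center is a (1+eps)-approximation of the SEB circumcenter."""
    c = points[0].astype(float)
    for t in range(1, iterations + 1):
        # Farthest point from the current center candidate.
        farthest = max(points, key=lambda p: np.linalg.norm(p - c))
        c = c + (farthest - c) / (t + 1)
    return c
```

In the Siegel–Poincaré and Siegel–Klein disks discussed later, only the notion of geodesic cut changes; the overall iteration structure stays the same.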

1.1. Paper Outline and Contributions

In Section 2, we concisely recall the usual models of the hyperbolic complex plane: The Poincaré upper plane model, the Poincaré disk model, and the Klein disk model. We then briefly review the geometry of the Siegel upper plane domain in Section 3 and the Siegel disk domain in Section 4. Section 5 introduces the novel Siegel–Klein model using the Hilbert geometry and its Siegel–Klein distance. To demonstrate the algorithmic advantage of using the Siegel–Klein disk model over the Siegel–Poincaré disk model in practice, we compare in Section 6 the two implementations of the Badoiu and Clarkson SEB approximation algorithm [47] in these models. Finally, we conclude this work in Section 7. In the Appendix, we first list the notations used in this work, recall the deflation method for calculating numerically the eigenvalues of a Hermitian matrix (Appendix A), and provide a basic code snippet for calculating the Siegel distance (Appendix A).
Our main contributions are summarized as follows:
  • First, we formulate a generalization of the Klein disk model of hyperbolic geometry to the Siegel disk domain in Definition 2 using the framework of Hilbert geometry. We report the formula of the Siegel–Klein distance to the origin in Theorem 1 (and more generally a closed-form expression for the Siegel–Klein distance between two points whose supporting line passes through the origin), describe how to convert the Siegel–Poincaré disk to the Siegel–Klein disk and vice versa in Proposition 2, report an exact algorithm to calculate the Siegel–Klein distance for diagonal matrices in Theorem 4. In practice, we show how to obtain a fast guaranteed approximation of the Siegel–Klein distance using geodesic bisection searches with guaranteed lower and upper bounds (Theorem 5 whose proof is obtained by considering nested Hilbert geometries).
  • Second, we report the exact solution to a geodesic cut problem in the Siegel–Poincaré/Siegel–Klein disks in Proposition 3. This result yields an explicit equation for the geodesic linking the origin of the Siegel disk domain to any other matrix point of the Siegel disk domain (Propositions 3 and 4). We then report an implementation of the Badoiu and Clarkson’s iterative algorithm [47] for approximating the smallest enclosing ball tailored to the Siegel–Poincaré and Siegel–Klein disk domains. In particular, we show in §6 that the implementation in the Siegel–Klein model yields a fast algorithm which bypasses the costly operations of recentering to the origin required in the Siegel–Poincaré disk model.
Let us now introduce a few notations on matrices and their norms.

1.2. Matrix Spaces and Matrix Norms

Let F be a number field, considered in the remainder to be either the real number field R or the complex number field C. For a complex number z = a + ib ∈ C (with imaginary unit i² = −1), we denote by z̄ = a − ib its complex conjugate, and by |z| = √(z z̄) = √(a² + b²) its modulus. Let Re(z) = a and Im(z) = b denote the real part and the imaginary part of the complex number z = a + ib, respectively.
Let M(d, F) be the space of d × d square matrices with coefficients in F, and let GL(d, F) denote its subspace of invertible matrices. Let Sym(d, F) denote the vector space of d × d symmetric matrices with coefficients in F. The identity matrix is denoted by I (or I_d when we want to emphasize its d × d dimension). The conjugate of a matrix M = [M_{i,j}]_{i,j} is the matrix of complex conjugates: M̄ := [M̄_{i,j}]_{i,j}. The conjugate transpose of a matrix M is M^H = (M̄)^⊤ = (M^⊤)‾, the adjoint matrix. Conjugate transposition is also denoted by the star operator (i.e., M*) or the dagger symbol (i.e., M†) in the literature. A complex matrix is said to be Hermitian when M^H = M (hence M has real diagonal elements). For any M ∈ M(d, C), the matrix M M^H is Hermitian: (M M^H)^H = (M^H)^H M^H = M M^H.
A real symmetric matrix M ∈ M(d, R) is said to be symmetric positive-definite (SPD) if and only if x^⊤ M x > 0 for all x ∈ R^d with x ≠ 0. This positive-definiteness property is written M ≻ 0, where ≻ denotes the partial Löwner ordering [50]. Let PD(d, R) = {P ≻ 0 : P ∈ Sym(d, R)} be the space of real symmetric positive-definite matrices [10,44,51,52] of dimension d × d. This space is not a vector space but a cone, i.e., if P₁, P₂ ∈ PD(d, R) then P₁ + λ P₂ ∈ PD(d, R) for all λ > 0. The boundary of the cone consists of rank-deficient symmetric positive semi-definite matrices.
The (complex/real) eigenvalues of a square complex matrix M are ordered such that |λ₁(M)| ≥ ⋯ ≥ |λ_d(M)|, where |·| denotes the complex modulus. The spectrum λ(M) of a matrix M is its set of eigenvalues: λ(M) = {λ₁(M), …, λ_d(M)}. In general, real matrices may have complex eigenvalues, but symmetric matrices (including SPD matrices) always have real eigenvalues. The singular values σ_i(M) of M are always real:
\sigma_i(M) = \sqrt{\lambda_i(M M^H)} = \sqrt{\lambda_i(M^H M)},
and ordered as follows: σ₁(M) ≥ ⋯ ≥ σ_d(M), with σ_max(M) = σ₁(M) and σ_min(M) = σ_d(M). We have σ_{d−i+1}(M^{−1}) = 1/σ_i(M), and in particular σ_d(M^{−1}) = 1/σ₁(M).
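These relations are easy to verify numerically; the following sketch (NumPy, with variable names of our choosing) checks the singular values of a random complex matrix against the eigenvalues of M^H M and against the inversion rule:

```python
import numpy as np

rng = np.random.default_rng(0)
M = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))

# Singular values as square roots of the eigenvalues of the Hermitian
# matrix M^H M, sorted descending to match sigma_1 >= ... >= sigma_d.
sigma = np.sqrt(np.linalg.eigvalsh(M.conj().T @ M))[::-1]
sigma_svd = np.linalg.svd(M, compute_uv=False)

# Singular values of the inverse: sigma_{d-i+1}(M^{-1}) = 1/sigma_i(M).
sigma_inv = np.linalg.svd(np.linalg.inv(M), compute_uv=False)
```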
Any matrix norm ‖·‖ (including the operator norm) satisfies:
  • ‖M‖ ≥ 0, with equality if and only if M = 0 (where 0 denotes the matrix with all entries equal to zero),
  • ‖αM‖ = |α| ‖M‖,
  • ‖M₁ + M₂‖ ≤ ‖M₁‖ + ‖M₂‖, and
  • ‖M₁ M₂‖ ≤ ‖M₁‖ ‖M₂‖.
Let us define two usual matrix norms: The Fröbenius norm and the operator norm. The Fröbenius norm of M is:
\|M\|_F := \sqrt{\sum_{i,j} |M_{i,j}|^2}
= \sqrt{\mathrm{tr}(M M^H)} = \sqrt{\mathrm{tr}(M^H M)}.
The induced Fröbenius distance between two complex matrices C 1 and C 2 is ρ E ( C 1 , C 2 ) = C 1 C 2 F .
The operator norm or spectral norm of a matrix M is:
\|M\|_O = \max_{x \neq 0} \frac{\|M x\|_2}{\|x\|_2},
= \sqrt{\lambda_{\max}(M^H M)},
= \sigma_{\max}(M).
Notice that M^H M is a Hermitian positive semi-definite matrix. For a normal matrix M (e.g., a Hermitian matrix), the operator norm coincides with the spectral radius ρ(M) = max_i {|λ_i(M)|} of the matrix M; in general, ρ(M) ≤ ‖M‖_O. The operator norm is upper bounded by the Fröbenius norm, ‖M‖_O ≤ ‖M‖_F, and we have ‖M‖_O ≥ max_{i,j} |M_{i,j}|. When the dimension d = 1, the operator norm of [M] coincides with the complex modulus: ‖M‖_O = |M|.
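A quick numerical check of these norm relations (a sketch; the variable names are ours):

```python
import numpy as np

rng = np.random.default_rng(1)
M = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))

fro = np.sqrt((np.abs(M) ** 2).sum())            # Frobenius norm from the entries
fro_tr = np.sqrt(np.trace(M.conj().T @ M).real)  # equivalent trace formula
op = np.linalg.svd(M, compute_uv=False)[0]       # operator norm = largest singular value
```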
To calculate the largest singular value σ_max, we may use the (normalized) power method [53,54], which has quadratic convergence for Hermitian matrices (see Appendix A). We can also use the more costly Singular Value Decomposition (SVD) of M, which requires cubic time: M = U D V^H, where D = Diag(σ₁, …, σ_d) is the diagonal matrix whose coefficients are the singular values of M.
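The power iteration mentioned above can be sketched as follows (function name ours; we iterate on the Hermitian matrix M^H M and recover σ_max as the square root of its dominant eigenvalue):

```python
import numpy as np

def sigma_max_power(M, iterations=500):
    """Approximate the largest singular value of M by normalized power
    iteration on the Hermitian PSD matrix A = M^H M (a sketch of the
    power method; deflation and the full SVD are alternatives)."""
    A = M.conj().T @ M
    x = np.ones(A.shape[0], dtype=complex)
    for _ in range(iterations):
        x = A @ x
        x = x / np.linalg.norm(x)  # renormalize to prevent overflow
    # Rayleigh quotient approximates lambda_max(M^H M) = sigma_max(M)^2.
    return float(np.sqrt((x.conj() @ A @ x).real))
```

The iteration converges for a generic starting vector whenever the dominant eigenvalue of M^H M is separated from the rest of the spectrum.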

2. Hyperbolic Geometry in the Complex Plane: The Poincaré Upper Plane and Disk Models and the Klein Disk Model

We concisely review the three usual models of the hyperbolic plane [55,56]: Poincaré upper plane model in Section 2.1, the Poincaré disk model in Section 2.2, and the Klein disk model in Section 2.2.1. We then report distance expressions in these models and conversions between these three usual models in Section 2.3. Finally, in Section 2.4, we recall the important role of hyperbolic geometry in the Fisher–Rao geometry in information geometry [57,58].

2.1. Poincaré Complex Upper Plane

The Poincaré upper plane domain is defined by
H = \{ z = a + ib \in \mathbb{C} : b = \mathrm{Im}(z) > 0 \}.
The Hermitian metric tensor is:
ds_U^2 = \frac{dz\, d\bar{z}}{\mathrm{Im}(z)^2},
or equivalently the Riemannian line element is:
ds_U^2 = \frac{dx^2 + dy^2}{y^2}.
Geodesics between z₁ and z₂ are either arcs of semi-circles centered on the real axis (and thus orthogonal to it), or vertical line segments when Re(z₁) = Re(z₂).
The geodesic length distance is
\rho_U(z_1, z_2) := \log \frac{|z_1 - \bar{z}_2| + |z_1 - z_2|}{|z_1 - \bar{z}_2| - |z_1 - z_2|},
or equivalently
\rho_U(z_1, z_2) = \mathrm{arccosh}\left( \frac{|z_1 - \bar{z}_2|^2}{2\, \mathrm{Im}(z_1)\, \mathrm{Im}(z_2)} - 1 \right),
where
\mathrm{arccosh}(x) = \log\left( x + \sqrt{x^2 - 1} \right), \quad x \geq 1.
Equivalent formulas can be obtained by using the following identity:
\log(x) = \mathrm{arccosh}\left( \frac{x^2 + 1}{2x} \right) = \mathrm{arctanh}\left( \frac{x^2 - 1}{x^2 + 1} \right),
where
\mathrm{arctanh}(x) = \frac{1}{2} \log \frac{1 + x}{1 - x}, \quad |x| < 1.
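The two closed forms of the upper-plane distance can be cross-checked numerically; the sketch below (function names ours) uses the arccosh form with the −1 term inside the arccosh:

```python
import math

def rho_upper_log(z1, z2):
    """Log-ratio form of the Poincare upper-plane distance."""
    A = abs(z1 - z2.conjugate())
    B = abs(z1 - z2)
    return math.log((A + B) / (A - B))

def rho_upper_acosh(z1, z2):
    """Equivalent arccosh form of the same distance."""
    return math.acosh(abs(z1 - z2.conjugate()) ** 2
                      / (2 * z1.imag * z2.imag) - 1)
```

Along the vertical geodesic through i, the distance ρ_U(i, i e^t) equals t, which gives a convenient sanity check.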
By interpreting a complex number z = x + i y as a 2D point with Cartesian coordinates ( x , y ) , the metric can be rewritten as
ds_U^2 = \frac{dx^2 + dy^2}{y^2} = \frac{1}{y^2}\, ds_E^2,
where ds_E² = dx² + dy² is the Euclidean (flat) metric. That is, the Poincaré upper plane length element ds_U can be rewritten as a conformal factor 1/y times the Euclidean length element ds_E. Thus, the metric of Equation (16) shows that the Poincaré upper plane model is a conformal model of hyperbolic geometry: That is, the Euclidean angle measurements in the (x, y) chart coincide with the underlying hyperbolic angles.
The group of orientation-preserving isometries (i.e., without reflections) is the real projective special group PSL ( 2 , R ) = SL ( 2 , R ) / { ± I } (quotient group), where SL ( 2 , R ) denotes the special linear group of matrices with unit determinant:
Isom + ( H ) PSL ( 2 , R ) .
The left group action is a fractional linear transformation (also called a Möbius transformation):
g.z = \frac{a z + b}{c z + d}, \quad g = \begin{pmatrix} a & b \\ c & d \end{pmatrix}, \quad ad - bc \neq 0.
The condition ad − bc ≠ 0 ensures that the Möbius transformation is not constant. The set of Möbius transformations forms a group Moeb(R, 2). The elements of the Möbius group can be represented by corresponding 2 × 2 matrices of PSL(2, R):
\begin{pmatrix} a & b \\ c & d \end{pmatrix}, \quad ad - bc \neq 0.
The neutral element e is encoded by the identity matrix I.
The fractional linear transformations
w(z) = \frac{a z + b}{c z + d}, \quad a, b, c, d \in \mathbb{R}, \quad ad - bc \neq 0
are the analytic mappings C ∪ {∞} → C ∪ {∞} of the Poincaré upper plane onto itself.
The group action is transitive (i.e., for all z₁, z₂ ∈ H, there exists g such that g.z₁ = z₂) and faithful (i.e., if g.z = z for all z, then g = e). The stabilizer of i is the rotation group:
\mathrm{SO}(2) = \left\{ \begin{pmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{pmatrix} : \theta \in \mathbb{R} \right\}.
The unit speed geodesic anchored at i and going upward (i.e., the geodesic with this initial condition) is:
\gamma(t) = \begin{pmatrix} e^{t/2} & 0 \\ 0 & e^{-t/2} \end{pmatrix} . i = i e^t.
Since the other geodesics can be obtained by the action of PSL ( 2 , R ) , it follows that the geodesics in H are parameterized by:
\gamma(t) = \frac{a\, i e^t + b}{c\, i e^t + d}.

2.2. Poincaré Disk

The Poincaré unit disk is
D = \{ w \in \mathbb{C} : \bar{w} w < 1 \}.
The Riemannian Poincaré line element (also called Poincaré-Bergman line element) is
ds_D^2 = \frac{4\, dw\, d\bar{w}}{(1 - |w|^2)^2}.
Since ds_D² = (2/(1 − |w|²))² ds_E², we deduce that the metric is conformal: The Poincaré disk is a conformal model of hyperbolic geometry. The geodesics between points w₁ and w₂ are either arcs of circles intersecting the disk boundary ∂D orthogonally, or straight lines passing through the origin 0 of the disk and clipped to the disk domain.
The geodesic distance in the Poincaré disk is
\rho_D(w_1, w_2) = \mathrm{arccosh}\left( \frac{2 |w_1 \bar{w}_2 - 1|^2}{(1 - |w_1|^2)(1 - |w_2|^2)} - 1 \right),
= 2\, \mathrm{arctanh}\left| \frac{w_2 - w_1}{1 - \bar{w}_1 w_2} \right|.
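The equivalence of the two expressions can be verified numerically; the following sketch (function names ours) implements both, using the arccosh form with the −1 term:

```python
import math

def rho_disk_atanh(w1, w2):
    """2*arctanh form of the Poincare disk distance."""
    return 2 * math.atanh(abs((w2 - w1) / (1 - w1.conjugate() * w2)))

def rho_disk_acosh(w1, w2):
    """Equivalent arccosh form of the same distance."""
    num = 2 * abs(w1 * w2.conjugate() - 1) ** 2
    den = (1 - abs(w1) ** 2) * (1 - abs(w2) ** 2)
    return math.acosh(num / den - 1)
```

From the origin, both reduce to ρ_D(0, w) = log((1 + |w|)/(1 − |w|)).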
The group of orientation-preserving isometry is the complex projective special group PSL ( 2 , C ) = SL ( 2 , C ) / { ± I } where SL ( 2 , C ) denotes the special group of 2 × 2 complex matrices with unit determinant.
In the Poincaré disk model, the transformation
T_{z_0, \theta}(z) = e^{i\theta} \frac{z - z_0}{1 - \bar{z}_0 z}
corresponds to a hyperbolic motion (a Möbius transformation [59]) which moves point z 0 to the origin 0, and then makes a rotation of angle θ . The group of such transformations is the automorphism group of the disk, Aut ( D ) , and the transformation T z 0 , θ is called a biholomorphic automorphism (i.e., a one-to-one conformal mapping of the disk onto itself).
The Poincaré distance is invariant under automorphisms of the disk, and more generally the Poincaré distance decreases under holomorphic mappings (Schwarz–Pick theorem): That is, the Poincaré distance is contractible under holomorphic mappings f: ρ D ( f ( w 1 ) , f ( w 2 ) ) ρ D ( w 1 , w 2 ) .

2.2.1. Klein Disk

The Klein disk model [29,55] (also called the Klein-Beltrami model) is defined on the unit disk domain as the Poincaré disk model. The Klein metric is
ds_K^2 = \frac{ds_E^2}{1 - \|x\|_E^2} + \frac{\langle x, dx \rangle_E^2}{(1 - \|x\|_E^2)^2}.
It is not a conformal metric (except at the disk origin), and therefore the Euclidean angles in the ( x , y ) chart do not correspond to the underlying hyperbolic angles.
The Klein distance between two points k 1 = ( x 1 , y 1 ) and k 2 = ( x 2 , y 2 ) is
\rho_K(k_1, k_2) = \mathrm{arccosh}\left( \frac{1 - (x_1 x_2 + y_1 y_2)}{\sqrt{(1 - \|k_1\|^2)(1 - \|k_2\|^2)}} \right).
An equivalent formula shall be reported later (on page 24) in the more general setting of Theorem 4.
The advantage of the Klein disk over the Poincaré disk is that geodesics are straight Euclidean lines clipped to the unit disk domain. Therefore, this model is well-suited to implement computational geometric algorithms and data structures, see for example [34,60]. The group of isometries in the Klein model are projective maps RP 2 preserving the disk. We shall see that the Klein disk model corresponds to the Hilbert geometry of the unit disk.

2.3. Poincaré and Klein Distances to the Disk Origin and Conversions

In the Poincaré disk, the distance of a point w to the origin 0 is
\rho_D(0, w) = \log \frac{1 + |w|}{1 - |w|}.
Since the Poincaré disk model is conformal (and Möbius transformations are conformal maps), Equation (31) shows that Poincaré disks have Euclidean disk shapes (however with displaced centers).
In the Klein disk, the distance of a point k to the origin is
\rho_K(0, k) = \frac{1}{2} \log \frac{1 + |k|}{1 - |k|} = \frac{1}{2} \rho_D(0, k).
Observe the multiplicative factor of 1 2 in Equation (32).
Thus, we can easily convert a point w ∈ C in the Poincaré disk to a point k ∈ C in the Klein disk, and vice versa, as follows:
w = \frac{k}{1 + \sqrt{1 - |k|^2}},
k = \frac{2 w}{1 + |w|^2}.
Let C_{K→D}(k) and C_{D→K}(w) denote these conversion functions, with
C_{K \to D}(k) = \frac{k}{1 + \sqrt{1 - |k|^2}},
C_{D \to K}(w) = \frac{2 w}{1 + |w|^2}.
We can write C_{K→D}(k) = α(k) k and C_{D→K}(w) = β(w) w, where α(k) = 1/(1 + √(1 − |k|²)) ≤ 1 is a contraction factor and β(w) = 2/(1 + |w|²) ≥ 1 is an expansion factor (Klein points lie farther from the origin than their Poincaré counterparts).
The conversion functions are Möbius transformations represented by the following matrices:
M K D ( k ) = α ( k ) 0 0 1 ,
M D K ( w ) = β ( w ) 0 0 1 .
For a sanity check, let w = r + 0i be a point in the Poincaré disk, with equivalent point k = \frac{2r}{1 + r^2} + 0i in the Klein disk. Then we have:
\rho_K(0, k) = \frac{1}{2} \log \frac{1 + |k|}{1 - |k|},
= \frac{1}{2} \log \frac{1 + \frac{2r}{1 + r^2}}{1 - \frac{2r}{1 + r^2}},
= \frac{1}{2} \log \frac{1 + r^2 + 2r}{1 + r^2 - 2r},
= \frac{1}{2} \log \frac{(1 + r)^2}{(1 - r)^2},
= \log \frac{1 + r}{1 - r} = \rho_D(0, w).
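The conversions and the distance identity above can be checked for any point, not only real ones; here is a minimal sketch (function names ours):

```python
import math

def klein_to_poincare(k):
    """C_{K->D}: contract a Klein-disk point toward the origin."""
    return k / (1 + math.sqrt(1 - abs(k) ** 2))

def poincare_to_klein(w):
    """C_{D->K}: expand a Poincare-disk point away from the origin."""
    return 2 * w / (1 + abs(w) ** 2)

def rho_disk_origin(w):
    """Poincare distance from the disk origin."""
    return math.log((1 + abs(w)) / (1 - abs(w)))

def rho_klein_origin(k):
    """Klein distance from the disk origin (note the 1/2 factor)."""
    return 0.5 * math.log((1 + abs(k)) / (1 - abs(k)))
```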
We can convert a point z in the Poincaré upper plane to a corresponding point w in the Poincaré disk, and vice versa, using the following Möbius transformations:
w = \frac{z - i}{z + i},
z = i \frac{1 + w}{1 - w}.
Notice that we compose Möbius transformations by multiplying their matrix representations.

2.4. Hyperbolic Fisher–Rao Geometry of Location-Scale Families

Consider a parametric family P = { p θ ( x ) } θ Θ of probability densities dominated by a positive measure μ (usually, the Lebesgue measure or the counting measure) defined on a measurable space ( X , Σ ) , where X denotes the support of the densities and Σ is a finite σ -algebra [57]. Hotelling [61] and Rao [62] independently considered the Riemannian geometry of P by using the Fisher Information Matrix (FIM) to define the Riemannian metric tensor [63] expressed in the (local) coordinates θ Θ , where Θ denotes the parameter space. The FIM is defined by the following symmetric positive semi-definite matrix [57,64]:
I(\theta) = E_{p_\theta}\left[ \nabla_\theta \log p_\theta(x)\, \left( \nabla_\theta \log p_\theta(x) \right)^\top \right].
When P is regular [57], the FIM is guaranteed to be positive-definite, and can thus play the role of a metric tensor field: The so-called Fisher metric.
Consider the location-scale family induced by a density f(x) symmetric with respect to 0, such that ∫_X f(x) dμ(x) = 1, ∫_X x f(x) dμ(x) = 0 and ∫_X x² f(x) dμ(x) = 1 (with X = R):
P = \left\{ p_\theta(x) = \frac{1}{\theta_2} f\left( \frac{x - \theta_1}{\theta_2} \right) : \theta = (\theta_1, \theta_2) \in \mathbb{R} \times \mathbb{R}_{++} \right\}.
The density f ( x ) is called the standard density, and corresponds to the parameter ( 0 , 1 ) : p ( 0 , 1 ) ( x ) = f ( x ) . The parameter space Θ = R × R + + is the upper plane, and the FIM can be structurally calculated [65] as the following diagonal matrix:
I(\theta) = \frac{1}{\theta_2^2} \begin{pmatrix} a^2 & 0 \\ 0 & b^2 \end{pmatrix},
with
a^2 := \int \left( \frac{f'(x)}{f(x)} \right)^2 f(x)\, d\mu(x),
b^2 := \int \left( x \frac{f'(x)}{f(x)} + 1 \right)^2 f(x)\, d\mu(x).
By rescaling θ = (θ₁, θ₂) as θ′ = (θ₁′, θ₂′) with θ₁′ = (a/b) θ₁ and θ₂′ = θ₂, we get the FIM with respect to θ′ expressed as:
I(\theta') = \frac{b^2}{(\theta_2')^2} \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix},
a constant times the Poincaré metric in the upper plane. Thus, the Fisher–Rao manifold of a location-scale family (with symmetric standard density f) is isometric to the planar hyperbolic space of negative curvature κ = −1/b².

3. The Siegel Upper Space and the Siegel Distance

The Siegel upper space [1,3,5,66] SH ( d ) is defined as the space of symmetric complex square matrices of size d × d which have positive-definite imaginary part:
\mathrm{SH}(d) := \{ Z = X + iY : X \in \mathrm{Sym}(d, \mathbb{R}),\ Y \in \mathrm{PD}(d, \mathbb{R}) \}.
The space SH ( d ) is a tube domain of dimension d ( d + 1 ) since
dim ( SH ( d ) ) = dim ( Sym ( d , R ) ) + dim ( PD ( d , R ) ) ,
with dim(Sym(d, R)) = d(d + 1)/2 and dim(PD(d, R)) = d(d + 1)/2. We can extract the components X and Y from Z as X = (1/2)(Z + Z̄) and Y = (1/(2i))(Z − Z̄) = −(i/2)(Z − Z̄). The matrix pair (X, Y) belongs to the Cartesian product of a matrix-vector space with the symmetric positive-definite (SPD) matrix cone: (X, Y) ∈ Sym(d, R) × PD(d, R). When d = 1, the Siegel upper space coincides with the Poincaré upper plane: SH(1) = H. The geometry of the Siegel upper space was studied independently by Siegel [1] and Hua [2] from different viewpoints in the late 1930s–1940s. Historically, these classes of complex matrices Z ∈ SH(d) were first studied by Riemann [67], and later eponymously called Riemann matrices. Riemann matrices are used to define Riemann theta functions [68,69,70,71].
The Siegel distance in the upper plane is induced by the following line element:
ds_U^2(Z) = 2\, \mathrm{tr}\left( Y^{-1}\, dZ\, Y^{-1}\, d\bar{Z} \right).
The formula for the Siegel upper distance between Z 1 and Z 2 SH ( d ) was calculated in Siegel’s masterpiece paper [1] as follows:
\rho_U(Z_1, Z_2) = \sqrt{ \sum_{i=1}^d \log^2 \frac{1 + r_i}{1 - r_i} },
where
r_i = \sqrt{ \lambda_i\left( R(Z_1, Z_2) \right) },
with R(Z₁, Z₂) denoting the matrix generalization [72] of the cross-ratio:
R(Z_1, Z_2) := (Z_1 - Z_2)(Z_1 - \bar{Z}_2)^{-1} (\bar{Z}_1 - \bar{Z}_2)(\bar{Z}_1 - Z_2)^{-1},
and λ i ( M ) denotes the i-th largest (real) eigenvalue of (complex) matrix M. The letter notation ‘R’ in R ( Z 1 , Z 2 ) is a mnemonic which stands for ‘r’atio.
The Siegel distance can also be expressed without explicitly using the eigenvalues as:
\rho_U(Z_1, Z_2) = 2 \sqrt{ \mathrm{tr}\left( R_{12} \left( \sum_{i=0}^{\infty} \frac{R_{12}^i}{2i + 1} \right)^2 \right) },
where R 12 = R ( Z 1 , Z 2 ) . In particular, we can truncate the matrix power series of Equation (58) to get an approximation of the Siegel distance:
\tilde{\rho}_{U,l}(Z_1, Z_2) = 2 \sqrt{ \mathrm{tr}\left( R_{12} \left( \sum_{i=0}^{l} \frac{R_{12}^i}{2i + 1} \right)^2 \right) }.
It costs O(Spectrum(d)) = O(d³) to calculate the Siegel distance using Equation (55), and O(l Mult(d)) = O(l d^{2.3737}) to approximate it using the truncated series formula of Equation (59), where Spectrum(d) denotes the cost of performing the spectral decomposition of a d × d complex matrix and Mult(d) denotes the cost of multiplying two d × d square complex matrices. For example, choosing the Coppersmith–Winograd algorithm for d × d matrix multiplications, we have Mult(d) = O(d^{2.3737}). Although the Siegel distance formula of Equation (59) is attractive, the number of iterations l needed to get an ϵ-approximation of the Siegel distance depends on the dimension d. In practice, we can define a threshold δ > 0 and, as a rule of thumb, iterate on the truncated sum until tr(R₁₂^i)/(2i + 1) < δ.
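A minimal numerical sketch of the spectral formula (function names ours; this is the O(d³) route via an eigendecomposition of the matrix cross-ratio):

```python
import math
import numpy as np

def siegel_cross_ratio(Z1, Z2):
    """Matrix cross-ratio R(Z1, Z2) of two points of the Siegel upper space."""
    return (Z1 - Z2) @ np.linalg.inv(Z1 - Z2.conj()) \
         @ (Z1.conj() - Z2.conj()) @ np.linalg.inv(Z1.conj() - Z2)

def siegel_distance(Z1, Z2):
    """Spectral formula: rho^2 = sum_i log^2((1 + r_i)/(1 - r_i)) with
    r_i = sqrt(lambda_i(R)). The eigenvalues of R are real and nonnegative
    in theory; tiny imaginary parts from floating point are discarded."""
    lam = np.linalg.eigvals(siegel_cross_ratio(Z1, Z2)).real
    r = np.sqrt(np.clip(lam, 0.0, None))
    return math.sqrt(float((np.log((1 + r) / (1 - r)) ** 2).sum()))
```

For diagonal matrices, the result agrees with aggregating the scalar upper-plane distances, which gives a convenient sanity check.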
A spectral function [52] of a matrix M is a function F which is the composition of a symmetric function f with the eigenvalue map Λ : F ( M ) : = ( f Λ ) ( M ) = f ( Λ ( M ) ) . For example, the Kullback–Leibler divergence between two zero-centered Gaussian distributions is a spectral function distance since we have:
D_{\mathrm{KL}}(p_{\Sigma_1} : p_{\Sigma_2}) = \int p_{\Sigma_1}(x) \log \frac{p_{\Sigma_1}(x)}{p_{\Sigma_2}(x)}\, dx,
= \frac{1}{2} \left( \log \frac{|\Sigma_2|}{|\Sigma_1|} + \mathrm{tr}(\Sigma_2^{-1} \Sigma_1) - d \right),
= \frac{1}{2} \sum_{i=1}^d \left( \log \frac{\lambda_i(\Sigma_2)}{\lambda_i(\Sigma_1)} + \lambda_i(\Sigma_2^{-1} \Sigma_1) - 1 \right),
= \frac{1}{2} \sum_{i=1}^d \left( \lambda_i(\Sigma_2^{-1} \Sigma_1) - \log \lambda_i(\Sigma_2^{-1} \Sigma_1) - 1 \right),
= (f_{\mathrm{KL}} \circ \Lambda)(\Sigma_2^{-1} \Sigma_1),
where |Σ| and λ_i(Σ) denote, respectively, the determinant of a positive-definite matrix Σ ≻ 0 and the i-th largest real eigenvalue of Σ, and
p_\Sigma(x) = \frac{1}{\sqrt{(2\pi)^d |\Sigma|}} \exp\left( -\frac{1}{2} x^\top \Sigma^{-1} x \right)
is the density of the multivariate zero-centered Gaussian of covariance matrix Σ ,
f_{\mathrm{KL}}(u_1, \ldots, u_d) = \frac{1}{2} \sum_{i=1}^d (u_i - 1 - \log u_i),
is a symmetric function invariant under parameter permutations, and Λ ( · ) denotes the eigenvalue map.
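This spectral-function viewpoint is easy to check numerically: the closed-form Gaussian KL divergence and the symmetric function f_KL applied to the eigenvalues of Σ₂^{-1}Σ₁ agree (a sketch; function names ours):

```python
import numpy as np

def kl_gaussians(S1, S2):
    """KL divergence between zero-centered Gaussians N(0, S1) and N(0, S2):
    (1/2)(log(|S2|/|S1|) + tr(S2^{-1} S1) - d)."""
    d = S1.shape[0]
    M = np.linalg.inv(S2) @ S1
    return 0.5 * (np.trace(M) - np.log(np.linalg.det(M)) - d)

def kl_spectral(S1, S2):
    """Same divergence as the spectral function f_KL of S2^{-1} S1."""
    u = np.linalg.eigvals(np.linalg.inv(S2) @ S1).real
    return 0.5 * float((u - 1.0 - np.log(u)).sum())
```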
This Siegel distance in the upper plane is also a smooth spectral distance function since we have
ρ U ( Z 1 , Z 2 ) = f Λ ( R ( Z 1 , Z 2 ) ) ,
where f is the following symmetric function:
f(x_1, \ldots, x_d) = \sqrt{ \sum_{i=1}^d \log^2 \frac{1 + \sqrt{x_i}}{1 - \sqrt{x_i}} }.
A remarkable property is that all eigenvalues of R ( Z 1 , Z 2 ) are positive (see [1]) although R may not necessarily be a Hermitian matrix. In practice, when calculating numerically the eigenvalues of the complex matrix R ( Z 1 , Z 2 ) , we  obtain very small imaginary parts which shall be rounded to zero. Thus, calculating the Siegel distance on the upper plane requires cubic time, i.e., the cost of computing the eigenvalue decomposition.
This Siegel distance in the upper plane SH ( d ) generalizes several well-known distances:
  • When Z 1 = i Y 1 and Z 2 = i Y 2 , we have
    ρ U ( Z 1 , Z 2 ) = ρ PD ( Y 1 , Y 2 ) ,
    the Riemannian distance between Y 1 and Y 2 on the symmetric positive-definite manifold [10,51]:
    \rho_{\mathrm{PD}}(Y_1, Y_2) = \| \mathrm{Log}(Y_1 Y_2^{-1}) \|_F
    = \sqrt{ \sum_{i=1}^d \log^2 \lambda_i(Y_1 Y_2^{-1}) }.
    In that case, the Siegel upper metric for Z = i Y becomes the affine-invariant metric:
    ds_U^2(Z) = \mathrm{tr}\left( (Y^{-1}\, dY)^2 \right) = ds_{\mathrm{PD}}^2(Y).
    Indeed, we have ρ_PD(C^⊤ Y₁ C, C^⊤ Y₂ C) = ρ_PD(Y₁, Y₂) for any C ∈ GL(d, R) and
    \rho_{\mathrm{PD}}(Y_1^{-1}, Y_2^{-1}) = \rho_{\mathrm{PD}}(Y_1, Y_2).
  • In 1D, the Siegel upper distance ρ U ( Z 1 , Z 2 ) between Z 1 = [ z 1 ] and Z 2 = [ z 2 ] (with z 1 and z 2 in C ) amounts to the hyperbolic distance on the Poincaré upper plane H :
    ρ U ( Z 1 , Z 2 ) = ρ U ( z 1 , z 2 ) ,
    where
    \rho_U(z_1, z_2) := \log \frac{|z_1 - \bar{z}_2| + |z_1 - z_2|}{|z_1 - \bar{z}_2| - |z_1 - z_2|}.
  • The Siegel distance between two diagonal matrices Z = diag(z₁, …, z_d) and Z′ = diag(z₁′, …, z_d′) is
    \rho_U(Z, Z') = \sqrt{ \sum_{i=1}^d \rho_U^2(z_i, z_i') }.
    Observe that the Siegel distance is a non-separable metric distance, but its squared distance is separable when the matrices are diagonal:
    \rho_U^2(Z, Z') = \sum_{i=1}^d \rho_U^2(z_i, z_i').
The Siegel metric in the upper plane is invariant by generalized matrix Möbius transformations (linear fractional transformations or rational transformations):
\phi_S(Z) := (A Z + B)(C Z + D)^{-1},
where S ∈ M(2d, R) is the following 2d × 2d block matrix:
S = \begin{pmatrix} A & B \\ C & D \end{pmatrix},
which satisfies
A B^\top = B A^\top, \quad C D^\top = D C^\top, \quad A D^\top - B C^\top = I.
The map ϕ S ( · ) = ϕ ( S , · ) is called a symplectic map.
The set of matrices S encoding the symplectic maps forms a group called the real symplectic group Sp ( d , R )  [5] (informally, the group of Siegel motions):
\mathrm{Sp}(d, \mathbb{R}) = \left\{ \begin{pmatrix} A & B \\ C & D \end{pmatrix} : A, B, C, D \in M(d, \mathbb{R}),\ A B^\top = B A^\top,\ C D^\top = D C^\top,\ A D^\top - B C^\top = I \right\}.
It can be shown that symplectic matrices have unit determinant [73,74], and therefore Sp(d, R) is a subgroup of SL(2d, R), the special group of real invertible matrices with unit determinant. We also check that if M ∈ Sp(d, R), then M^⊤ ∈ Sp(d, R).
Matrix S denotes the representation of the group element g_S. The symplectic group operation corresponds to matrix multiplications of their representations, the neutral element is encoded by E = \begin{pmatrix} I & 0 \\ 0 & I \end{pmatrix}, and the group inverse of g_S with S = \begin{pmatrix} A & B \\ C & D \end{pmatrix} is encoded by the matrix:
S^{(-1)} =: \begin{pmatrix} D^\top & -B^\top \\ -C^\top & A^\top \end{pmatrix}.
Here, we use the parenthesis notation S ( 1 ) to indicate that it is the group inverse and not the usual matrix inverse S 1 . The symplectic group is a Lie group of dimension d ( 2 d + 1 ) . Indeed, a symplectic matrix of Sp ( d , R ) has 2 d × 2 d = 4 d 2 elements which are constrained from the block matrices as follows:
A B^⊤ = B A^⊤,
C D^⊤ = D C^⊤,
A D^⊤ − B C^⊤ = I.
The first two constraints are independent and of the form M = M^⊤, each yielding (d² − d)/2 elementary constraints. The third constraint is of the form M1 M2^⊤ = I, and independent of the other constraints, yielding d² elementary constraints. Thus, the dimension of the symplectic group is
dim(Sp(d, R)) = 4d² − (d² − d) − d² = 2d² + d = d(2d + 1).
The action of the group is transitive: That is, for any Z = A + iB, the matrix S(Z) = [ B^{1/2} A B^{−1/2} ; 0 B^{−1/2} ] satisfies ϕ_{S(Z)}(iI) = Z. Therefore, by taking the group inverse
S(Z)^{(−1)} = [ B^{−1/2} −B^{−1/2} A ; 0 B^{1/2} ],
we get
ϕ_{S(Z)^{(−1)}}(Z) = iI.
The action ϕ S ( Z ) can be interpreted as a “Siegel translation” moving matrix i I to matrix Z, and conversely the action ϕ S ( 1 ) ( Z ) as moving matrix Z to matrix i I .
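The translation matrix S(Z) above is easy to check numerically. Below is a minimal NumPy sketch (the helper names mobius, spd_power, and translation_to are ours, not from the paper); it builds a random point Z = A + iB of the upper space and verifies that the associated symplectic map sends iI to Z:

```python
import numpy as np

def mobius(S, Z):
    """Apply the symplectic map phi_S(Z) = (A Z + B)(C Z + D)^{-1}."""
    d = Z.shape[0]
    A, B = S[:d, :d], S[:d, d:]
    C, D = S[d:, :d], S[d:, d:]
    return (A @ Z + B) @ np.linalg.inv(C @ Z + D)

def spd_power(P, alpha):
    """Matrix power of a symmetric positive-definite matrix via eigendecomposition."""
    w, V = np.linalg.eigh(P)
    return V @ np.diag(w ** alpha) @ V.T

def translation_to(Z):
    """Block matrix S(Z) = [B^{1/2}, A B^{-1/2}; 0, B^{-1/2}] moving iI to Z = A + iB."""
    A, B = Z.real, Z.imag
    d = A.shape[0]
    return np.block([[spd_power(B, 0.5), A @ spd_power(B, -0.5)],
                     [np.zeros((d, d)), spd_power(B, -0.5)]])

rng = np.random.default_rng(0)
d = 3
A = rng.standard_normal((d, d)); A = (A + A.T) / 2        # random symmetric part
M = rng.standard_normal((d, d)); B = M @ M.T + np.eye(d)  # random SPD part
Z = A + 1j * B
S = translation_to(Z)
print(np.allclose(mobius(S, 1j * np.eye(d)), Z))          # True: S(Z) moves iI to Z
```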
The stabilizer group of Z = i I (also called isotropy group, the set of group elements S Sp ( d , R ) whose action fixes Z) is the subgroup of symplectic orthogonal matrices SpO ( 2 d , R ) :
SpO(2d, R) = { [ A B ; −B A ] : A A^⊤ + B B^⊤ = I, A B^⊤ ∈ Sym(d, R) }.
We have SpO(2d, R) = Sp(d, R) ∩ O(2d), where O(2d) is the group of orthogonal matrices of dimension 2d × 2d:
O(2d) := { R ∈ M(2d, R) : R^⊤R = R R^⊤ = I }.
Informally speaking, the elements of SpO(2d, R) represent the “Siegel rotations” in the upper plane. The Siegel upper plane is isomorphic to the quotient Sp(d, R)/SpO(2d, R).
A pair of matrices (Z1, Z2) can be transformed into another pair of matrices (Z1′, Z2′) of SH(d) if and only if λ(R(Z1, Z2)) = λ(R(Z1′, Z2′)), where λ(M) := {λ1(M), …, λd(M)} denotes the spectrum of matrix M.
By noticing that the symplectic group elements M and −M yield the same symplectic map, we define the orientation-preserving isometry group of the Siegel upper plane as the real projective symplectic group PSp(d, R) = Sp(d, R)/{±I_{2d}} (generalizing the group PSL(2, R) obtained when d = 1).
The geodesics in the Siegel upper space can be obtained by applying symplectic transformations to the geodesics of the positive-definite manifold (geodesics on the SPD manifold), which is a totally geodesic submanifold of SH(d). Let Z1 = iP1 and Z2 = iP2. Then the geodesic Z12(t) with Z12(0) = Z1 and Z12(1) = Z2 is expressed as:
Z12(t) = i P1^{1/2} Exp( t Log( P1^{−1/2} P2 P1^{−1/2} ) ) P1^{1/2},
where Exp ( M ) denotes the matrix exponential:
Exp(M) = Σ_{i=0}^∞ (1/i!) M^i,
and Log ( M ) is the principal matrix logarithm, unique when matrix M has all positive eigenvalues.
The equation of the geodesic emanating from P with tangent vector S ∈ T_P (a symmetric matrix) on the SPD manifold is:
γ_{P,S}(t) = P^{1/2} Exp( t P^{−1/2} S P^{−1/2} ) P^{1/2}.
Both the exponential and the principal logarithm of a matrix M can be calculated in cubic time when the matrices are diagonalizable: Let V denote the matrix of eigenvectors so that we have the following decomposition:
M = V diag(λ1, …, λd) V^{−1},
where λ1, …, λd are the eigenvalues corresponding to the eigenvectors (the columns of V). Then for a scalar function f (e.g., f(u) = exp(u) or f(u) = log u), we define the corresponding matrix function f(M) as
f(M) := V diag(f(λ1), …, f(λd)) V^{−1}.
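This spectral recipe can be sketched in a few lines of NumPy for the Hermitian case, where the eigendecomposition is orthonormal (the helper name funm_sym is ours):

```python
import numpy as np

def funm_sym(M, f):
    """Apply a scalar function f spectrally to a Hermitian matrix M."""
    w, V = np.linalg.eigh(M)
    return V @ np.diag(f(w)) @ V.conj().T

rng = np.random.default_rng(1)
X = rng.standard_normal((4, 4))
P = X @ X.T + 4 * np.eye(4)                  # SPD: all eigenvalues positive
L = funm_sym(P, np.log)                      # principal matrix logarithm
print(np.allclose(funm_sym(L, np.exp), P))   # True: Exp(Log(P)) = P
```

For a general diagonalizable (non-Hermitian) matrix, one would use np.linalg.eig and multiply by the inverse of the eigenvector matrix instead.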
The volume element of the Siegel upper plane is 2^{d(d−1)/2} dv, where dv is the volume element of the d(d + 1)-dimensional Euclidean space expressed in the Cartesian coordinate system.

4. The Siegel Disk Domain and the Kobayashi Distance

The Siegel disk [1] is an open convex complex matrix domain defined by
SD(d) := { W ∈ Sym(d, C) : I − W̄W ≻ 0 }.
The Siegel disk can be written equivalently as SD(d) = { W ∈ Sym(d, C) : I − WW̄ ≻ 0 } or SD(d) = { W ∈ Sym(d, C) : ‖W‖_O < 1 }. In the Cartan classification [7], the Siegel disk is a Siegel domain of type III.
When d = 1, the Siegel disk SD(1) coincides with the Poincaré disk: SD(1) = D. The Siegel disk was described by Siegel [1] (page 2, called domain E to contrast with domain H of the upper space) and by Hua in his 1948 paper [75] (page 205) on the geometries of matrices [76]. Siegel’s paper [1] in 1943 only considered the Siegel upper plane. Here, the Siegel (complex matrix) disk is not to be confused with the other notion of Siegel disk in complex dynamics, which is a connected component in the Fatou set.
The boundary ∂SD(d) of the Siegel disk is called the Shilov boundary [5,77,78]: ∂SD(d) := { W ∈ Sym(d, C) : ‖W‖_O = 1 }. We have ∂SD(d) ⊇ Sym(d, C) ∩ U(d, C), where
U(d, C) = { U ∈ M(d, C) : U U* = U* U = I }
is the group of d × d unitary matrices; the symmetric d × d unitary matrices (with determinant of unit modulus) thus belong to the boundary. The Shilov boundary is a stratified manifold where each stratum is defined as a space of constant rank-deficient matrices [79].
The metric in the Siegel disk is:
ds_D² = tr( (I − WW̄)^{−1} dW (I − W̄W)^{−1} dW̄ ).
When d = 1, we recover ds_D² = dw dw̄ / (1 − |w|²)², which is the usual metric in the Poincaré disk (up to a missing factor of 4, see Equation (25)).
This Siegel metric induces a Kähler geometry [13] with the following Kähler potential:
K(W) = −tr( Log( I − W^H W ) ).
The Kobayashi distance [80] between W 1 and W 2 in SD ( d ) is calculated [79] as follows:
ρ_D(W1, W2) = log( (1 + ‖Φ_{W1}(W2)‖_O) / (1 − ‖Φ_{W1}(W2)‖_O) ),
where
Φ_{W1}(W2) = (I − W1W̄1)^{−1/2} (W2 − W1) (I − W̄1W2)^{−1} (I − W̄1W1)^{1/2},
is a Siegel translation which moves W1 to the origin O (the matrix with all entries set to zero) of the disk: We have Φ_W(W) = 0. In the Siegel disk domain, the Kobayashi distance [80] coincides with the Carathéodory distance [81] and yields a metric distance. Notice that the Siegel disk distance, although a spectral distance function via the operator norm, is not smooth because it uses the maximum singular value. Recall that the Siegel upper plane distance uses all eigenvalues of a matrix cross-ratio R.
It follows that the cost of calculating a Kobayashi distance in the Siegel disk is cubic: We require the computation of a symmetric matrix square root [82] in Equation (101), and  then compute the largest singular value for the operator norm in Equation (100).
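As an illustration, here is a hedged NumPy sketch of this computation (the helper names spd_sqrt, phi, and rho_D are ours); it implements the translation Φ and the Kobayashi distance, and checks the d = 1 reduction against the scalar Möbius formula:

```python
import numpy as np

def spd_sqrt(M):
    """Principal square root of a Hermitian positive-definite matrix."""
    w, V = np.linalg.eigh(M)
    return V @ np.diag(np.sqrt(w)) @ V.conj().T

def phi(W1, W2):
    """Siegel translation moving W1 to the origin, evaluated at W2."""
    I = np.eye(W1.shape[0])
    return (spd_sqrt(np.linalg.inv(I - W1 @ W1.conj()))
            @ (W2 - W1)
            @ np.linalg.inv(I - W1.conj() @ W2)
            @ spd_sqrt(I - W1.conj() @ W1))

def rho_D(W1, W2):
    """Kobayashi distance in the Siegel disk via the operator norm of phi."""
    t = np.linalg.norm(phi(W1, W2), 2)   # largest singular value
    return np.log((1 + t) / (1 - t))

# d = 1 sanity check against the scalar Moebius translation
w1, w2 = np.array([[0.3 + 0.1j]]), np.array([[-0.2 + 0.4j]])
m = abs((w2 - w1) / (1 - np.conj(w1) * w2))[0, 0]
print(np.isclose(rho_D(w1, w2), np.log((1 + m) / (1 - m))))   # True
```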
Notice that when d = 1 , the “1d” scalar matrices commute, and we have:
Φ_{w1}(w2) = (1 − w1w̄1)^{−1/2} (w2 − w1) (1 − w̄1w2)^{−1} (1 − w̄1w1)^{1/2},
= (w2 − w1) / (1 − w̄1w2).
This corresponds to a hyperbolic translation of w 1 to 0 (see Equation (28)). Let us call the geometry of the Siegel disk the Siegel–Poincaré geometry.
We observe the following special cases of the Siegel–Poincaré distance:
  • Distance to the origin: When W 1 = 0 and W 2 = W , we have Φ 0 ( W ) = W , and therefore the distance in the disk between a matrix W and the origin 0 is:
    ρ_D(0, W) = log( (1 + ‖W‖_O) / (1 − ‖W‖_O) ).
    In particular, when d = 1, we recover the formula of Equation (31): ρ_D(0, w) = log( (1 + |w|) / (1 − |w|) ).
  • When d = 1 , we have W 1 = [ w 1 ] and W 2 = [ w 2 ] , and 
    ρ D ( W 1 , W 2 ) = ρ D ( w 1 , w 2 ) .
  • Consider diagonal matrices W = diag(w1, …, wd) ∈ SD(d) and W′ = diag(w1′, …, wd′) ∈ SD(d). We have |wi| < 1 for i ∈ {1, …, d}. Thus, the diagonal matrices belong to the polydisk domain. Then we have
    ρ_D(W, W′) = max_{i ∈ {1,…,d}} ρ_D(wi, wi′),
    since the operator norm of the diagonal matrix Φ_W(W′) is the maximum modulus of its diagonal entries.
    Notice that the polydisk domain is a Cartesian product of 1D complex disk domains, but it is not the unit d-dimensional complex ball { z ∈ C^d : Σ_{i=1}^d zi z̄i < 1 }.
We can convert a matrix Z in the Siegel upper space to an equivalent matrix W in the Siegel disk by using the following matrix Cayley transformation for Z SH d :
W_{U→D}(Z) := (Z − iI)(Z + iI)^{−1} ∈ SD(d).
Notice that the imaginary positive-definite matrices i P of the upper plane (vertical axis) are mapped to
W_{U→D}(iP) = (P − I)(P + I)^{−1} ∈ SD(d),
i.e., the real symmetric matrices belonging to the horizontal axis of the disk.
The inverse transformation for a matrix W in the Siegel disk is
Z_{D→U}(W) = i (I + W)(I − W)^{−1} ∈ SH(d),
a matrix in the Siegel upper space. With those mappings, the origin of the disk 0 SD ( d ) coincides with matrix i I SH ( d ) in the upper space.
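These Cayley conversions are straightforward to implement; the following NumPy sketch (function names are ours) checks the round trip and the correspondence 0 ↔ iI on a random upper-space matrix:

```python
import numpy as np

def upper_to_disk(Z):
    """Cayley transform W = (Z - iI)(Z + iI)^{-1}."""
    I = np.eye(Z.shape[0])
    return (Z - 1j * I) @ np.linalg.inv(Z + 1j * I)

def disk_to_upper(W):
    """Inverse Cayley transform Z = i(I + W)(I - W)^{-1}."""
    I = np.eye(W.shape[0])
    return 1j * (I + W) @ np.linalg.inv(I - W)

rng = np.random.default_rng(2)
d = 3
A = rng.standard_normal((d, d)); A = (A + A.T) / 2
X = rng.standard_normal((d, d)); B = X @ X.T + np.eye(d)
Z = A + 1j * B                                   # a point of the Siegel upper space
W = upper_to_disk(Z)
print(np.linalg.norm(W, 2) < 1)                  # True: W lies in the Siegel disk
print(np.allclose(disk_to_upper(W), Z))          # True: the maps are mutually inverse
print(np.allclose(upper_to_disk(1j * np.eye(d)), np.zeros((d, d))))  # True: iI -> 0
```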
A key property is that the geodesics passing through the matrix origin 0 are expressed by straight line segments in the Siegel disk. We can check that
ρ D ( 0 , W ) = ρ D ( 0 , α W ) + ρ D ( α W , W ) ,
for any α [ 0 , 1 ] .
To describe the geodesics between W1 and W2, we first move W1 to 0 and W2 to Φ_{W1}(W2). Then the geodesic between 0 and Φ_{W1}(W2) is a straight line segment, and we map back this geodesic via Φ_{W1}^{−1}(·). The inverse of a symplectic map is a symplectic map which corresponds to the action of an element of the complex symplectic group.
The complex symplectic group is
Sp(d, C) = { M = [ A B ; C D ] ∈ M(2d, C) : M J M^⊤ = J },
with
J = [ 0 I ; −I 0 ],
for the d × d identity matrix I. Notice that the condition M J M^⊤ = J amounts to checking that
A B^⊤ = B A^⊤,  C D^⊤ = D C^⊤,  A D^⊤ − B C^⊤ = I.
The conversions between the Siegel upper plane and the Siegel disk (and vice versa) can be expressed using complex symplectic transformations associated with the matrices:
W(Z) = [ I −iI ; I iI ] . Z = (Z − iI)(Z + iI)^{−1},
Z(W) = [ iI iI ; −I I ] . W = i (I + W)(I − W)^{−1}.
Figure 1 depicts the conversion of the upper plane to the disk, and vice versa.
The orientation-preserving isometries of the Siegel disk form the projective complex symplectic group PSp(d, C) = Sp(d, C)/{±I_{2d}}.
It can be shown that the subgroup of Sp(d, C) preserving the Siegel disk consists of the matrices
M = [ A B ; B̄ Ā ] ∈ M(2d, C),
with
A^⊤ B̄ − B^H A = 0,
A^⊤ Ā − B^H B = I,
and the left action of such an element g is
g.W = (A W + B)(B̄ W + Ā)^{−1}.
The isotropy group at the origin 0 is
{ [ A 0 ; 0 Ā ] : A ∈ U(d) },
where U ( d ) is the unitary group: U ( d ) = { U GL ( d , C ) : U H U = U U H = I } .
Thus, we can “rotate” a matrix W with respect to the origin so that its imaginary part becomes 0: There exists A ∈ U(d) such that Im(A W A^⊤) = 0 (the action of [ A 0 ; 0 Ā ] on W is W ↦ A W A^⊤ since Ā^{−1} = A^⊤ for unitary A).
More generally, we can define a Siegel rotation [83] in the disk with respect to a center W 0 SD ( d ) as follows:
R_{W0}(W) = (A W − A W0)(B − B W̄0 W)^{−1},
where
Ā^⊤ A = (I − W0W̄0)^{−1},
B̄^⊤ B = (I − W̄0W0)^{−1},
Ā^⊤ A W0 = W0 B̄^⊤ B.
Interestingly, the Poincaré disk can be embedded non-diagonally onto the Siegel upper plane [84].
In complex dimension d = 1, the Kobayashi distance ρ_D coincides with the Siegel distance ρ_U. Otherwise, we calculate the Siegel distance in the Siegel disk as
ρ_U(W1, W2) := ρ_U( Z_{D→U}(W1), Z_{D→U}(W2) ).

5. The Siegel–Klein Geometry: Distance and Geodesics

We define the Siegel–Klein geometry as the Hilbert geometry for the Siegel disk model. Section 5.1 concisely explains the Hilbert geometry induced by an open-bounded convex domain. In Section 5.2, we study the Hilbert geometry of the Siegel disk domain. Then we report the Siegel–Klein distance in Section 5.3 and study some of its particular cases: the distance to the origin (Section 5.4) and the distance between diagonal matrices (Section 5.6). Section 5.5 presents the conversion procedures between the Siegel–Poincaré disk and the Siegel–Klein disk. In Section 5.7, we design a fast guaranteed method to approximate the Siegel–Klein distance. Finally, we introduce the Hilbert–Fröbenius distances to get simple bounds on the Siegel–Klein distance in Section 5.8.

5.1. Background on Hilbert Geometry

Consider a normed vector space ( V , · ) , and define the Hilbert distance [30,85] for an open-bounded convex domain Ω as follows:
Definition 1 (Hilbert distance). 
The Hilbert distance is defined for any open-bounded convex domain Ω and a prescribed positive factor κ > 0 by
H_{Ω,κ}(p, q) := κ log CR(p̄, p; q, q̄) if p ≠ q, and 0 if p = q,
where p ¯ and q ¯ are the unique two intersection points of the line ( p q ) with the boundary Ω of the domain Ω as depicted in Figure 2, and  CR denotes the cross-ratio of four points (a projective invariant):
CR(a, b; c, d) = (‖a − c‖ ‖b − d‖) / (‖a − d‖ ‖b − c‖).
When p ≠ q, we have:
H_{Ω,κ}(p, q) := κ log( (‖q̄ − p‖ ‖p̄ − q‖) / (‖q̄ − q‖ ‖p̄ − p‖) ).
The Hilbert distance is a metric distance which does not depend on the underlying norm of the vector space:
Proposition 1 (Formula of Hilbert distance). 
The Hilbert distance between two points p and q of an open-bounded convex domain Ω is
H_{Ω,κ}(p, q) = κ log( (α_+ (1 − α_−)) / (|α_−| (α_+ − 1)) ) if p ≠ q, and 0 if p = q,
where p̄ = p + α_−(q − p) and q̄ = p + α_+(q − p) are the two intersection points of the line (pq) with the boundary ∂Ω of the domain Ω.
Proof. 
For distinct points p and q of Ω, let α_+ > 1 be such that q̄ = p + α_+(q − p), and α_− < 0 such that p̄ = p + α_−(q − p). Then we have ‖q̄ − p‖ = α_+ ‖q − p‖, ‖p̄ − p‖ = |α_−| ‖q − p‖, ‖q̄ − q‖ = (α_+ − 1) ‖p − q‖ and ‖p̄ − q‖ = (1 − α_−) ‖p − q‖. Thus, we get
H_{Ω,κ}(p, q) = κ log( (‖q̄ − p‖ ‖p̄ − q‖) / (‖q̄ − q‖ ‖p̄ − p‖) ),
= κ log( (α_+ (1 − α_−)) / (|α_−| (α_+ − 1)) ),
and H_{Ω,κ}(p, q) = 0 if and only if p = q. ☐
We may also write the source points p and q as linear interpolations of the extremal points p ¯ and q ¯ on the boundary: p = ( 1 β p ) p ¯ + β p q ¯ and q = ( 1 β q ) p ¯ + β q q ¯ with 0 < β p < β q < 1 for distinct points p and q. In that case, the Hilbert distance can be written as
H_{Ω,κ}(p, q) = κ log( ((1 − β_p) β_q) / (β_p (1 − β_q)) ) if β_p ≠ β_q, and 0 if β_p = β_q.
The projective Hilbert space ( Ω , H Ω ) is a metric space. Notice that the above formula has demonstrated that
H_{Ω,κ}(p, q) = H_{Ω∩(pq),κ}(p, q).
That is, the Hilbert distance between two points of a d-dimensional domain Ω is equivalent to the Hilbert distance between the two points on the 1D domain Ω ( p q ) defined by Ω restricted to the line ( p q ) passing through the points p and q.
Notice that the boundary Ω of the domain may not be smooth (e.g., Ω may be a simplex [86] or a polytope [87]). The Hilbert geometry for the unit disk centered at the origin with κ = 1 2 yields the Klein model [88] (or Klein-Beltrami model [89]) of hyperbolic geometry. The Hilbert geometry for an ellipsoid yields the Cayley-Klein hyperbolic model [29,35,90] generalizing the Klein model. The Hilbert geometry for a simplicial polytope is isometric to a normed vector space [86,91]. We refer to the handbook [31] for a survey of recent results on Hilbert geometry. The Hilbert geometry of the elliptope (i.e., space of correlation matrices) was studied in [86]. Hilbert geometry may be studied from the viewpoint of Finslerian geometry which is Riemannian if and only if the domain Ω is an ellipsoid (i.e., Klein or Cayley-Klein hyperbolic geometries). Finally, it is interesting to observe the similarity of the Hilbert distance which relies on a geometric cross-ratio with the Siegel distance (Equation (55)) in the upper space which relies on a matrix generalization of the cross-ratio (Equation (57)).

5.2. Hilbert Geometry of the Siegel Disk Domain

Let us consider the Siegel–Klein disk model which is defined as the Hilbert geometry for the Siegel disk domain Ω = SD ( d ) as depicted in Figure 3 with κ = 1 2 .
Definition 2 (Siegel–Klein geometry). 
The Siegel–Klein disk model is the Hilbert geometry for the open-bounded convex domain Ω = SD ( d ) with prescribed constant κ = 1 2 . The Siegel–Klein distance is
ρ_K(K1, K2) := H_{SD(d), 1/2}(K1, K2).
When d = 1, the Siegel–Klein disk is the Klein disk model of hyperbolic geometry, and the Klein distance [34] between any two points k1 ∈ C and k2 ∈ C of the unit disk is
ρ_K(k1, k2) = arccosh( (1 − Re(k1)Re(k2) − Im(k1)Im(k2)) / √((1 − |k1|²)(1 − |k2|²)) ),
where
arccosh(x) = log( x + √(x² − 1) ), x ≥ 1.
This formula can be retrieved from the Hilbert distance induced by the Klein unit disk [29].

5.3. Calculating and Approximating the Siegel–Klein Distance

The Siegel disk domain SD(d) = { W ∈ Sym(d, C) : I − W̄W ≻ 0 } can be rewritten using the operator norm as
SD(d) = { W ∈ Sym(d, C) : ‖W‖_O < 1 }.
Let { K1 + α(K2 − K1), α ∈ R } denote the line passing through (matrix) points K1 and K2. That line intersects the Shilov boundary when
‖K1 + α(K2 − K1)‖_O = 1.
When K1 ≠ K2, there are exactly two solutions since a line intersects the boundary of a bounded open convex domain in at most two points: Let one solution be α_+ with α_+ > 1, and the other solution be α_− with α_− < 0. The Siegel–Klein distance is then defined as
ρ_K(K1, K2) = (1/2) log( (α_+ (1 − α_−)) / (|α_−| (α_+ − 1)) ),
where K̄1 = K1 + α_−(K2 − K1) and K̄2 = K1 + α_+(K2 − K1) are the extremal matrices belonging to the Shilov boundary ∂SD(d).
Notice that matrices K1 and/or K2 may be rank-deficient. We have rank(K1 + λ(K2 − K1)) ≤ min(d, rank(K1) + rank(K2)), see [92].
In practice, we may perform a bisection search on the matrix line (K1K2) to approximate these two extremal points K̄1 and K̄2 (such that these matrices are ordered along the line as follows: K̄1, K1, K2, K̄2). To initialize the bisection, we may bound the boundary parameters as follows: We seek α on the line (K1K2) such that K1 + α(K2 − K1) falls outside the Siegel disk domain:
1 < ‖K1 + α(K2 − K1)‖_O.
Since ‖·‖_O is a matrix norm, we have
1 < ‖K1 + α(K2 − K1)‖_O ≤ ‖K1‖_O + |α| ‖K2 − K1‖_O.
Thus, we deduce that any such α satisfies
|α| > (1 − ‖K1‖_O) / ‖K2 − K1‖_O.

5.4. Siegel–Klein Distance to the Origin

When K1 = 0 (the zero matrix denoting the origin of the Siegel disk) and K2 = K ∈ SD(d), it is easy to solve the equation:
‖α K‖_O = 1.
We have |α| = 1/‖K‖_O, i.e.,
α_+ = 1/‖K‖_O > 1,
α_− = −1/‖K‖_O < 0.
In that case, the Siegel–Klein distance of Equation (139) is expressed as:
ρ_K(0, K) = (1/2) log( (1 + 1/‖K‖_O) / (1/‖K‖_O − 1) ),
= (1/2) log( (1 + ‖K‖_O) / (1 − ‖K‖_O) ),
= (1/2) ρ_D(0, K),
where ρ D ( 0 , W ) is defined in Equation (104).
Theorem 1 (Siegel–Klein distance to the origin). 
The Siegel–Klein distance of matrix K SD ( d ) to the origin O is
ρ_K(0, K) = (1/2) log( (1 + ‖K‖_O) / (1 − ‖K‖_O) ).
The constant κ = 1/2 is chosen in order to ensure that when d = 1 the corresponding Klein disk has negative unit curvature. The result can be easily extended to the case of the Siegel–Klein distance between K1 and K2 where the origin O belongs to the line (K1K2). In that case, K2 = λK1 for some λ ∈ R (e.g., λ = tr(K2)/tr(K1) when tr(K1) ≠ 0, where tr denotes the matrix trace operator). It follows that
‖K1 + α(K2 − K1)‖_O = 1,
|1 + α(λ − 1)| = 1/‖K1‖_O.
Thus, we get the two values defining the intersection of (K1K2) with the Shilov boundary:
α = (1/(λ − 1)) (1/‖K1‖_O − 1),
α′ = (1/(1 − λ)) (1 + 1/‖K1‖_O).
We then apply the formula of Equation (139), with {α_−, α_+} = {α, α′} ordered so that α_− < 0 < 1 < α_+:
ρ_K(K1, K2) = (1/2) log( (α_+ (1 − α_−)) / (|α_−| (α_+ − 1)) ),
= (1/2) |log( ((1 − ‖K1‖_O)(1 + λ‖K1‖_O)) / ((1 + ‖K1‖_O)(1 − λ‖K1‖_O)) )|.
Theorem 2.
The Siegel–Klein distance between two points K 1 0 and K 2 on a line ( K 1 K 2 ) passing through the origin is
ρ_K(K1, K2) = (1/2) |log( ((1 − ‖K1‖_O)(1 + λ‖K1‖_O)) / ((1 + ‖K1‖_O)(1 − λ‖K1‖_O)) )|,
where λ = tr(K2)/tr(K1).
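A small numerical sanity check of this line-through-origin case can be written in NumPy (assuming 0 < λ < 1; the function names are ours): we recover the two boundary parameters from ‖K1 + α(K2 − K1)‖_O = 1 and compare against the closed form (1/2)|log(((1 − ‖K1‖_O)(1 + λ‖K1‖_O))/((1 + ‖K1‖_O)(1 − λ‖K1‖_O)))|, an algebraically equivalent rewriting:

```python
import numpy as np

def klein_from_alphas(a_minus, a_plus):
    """1D Hilbert distance formula (kappa = 1/2) from boundary parameters."""
    return 0.5 * np.log((a_plus * (1 - a_minus)) / (abs(a_minus) * (a_plus - 1)))

def rho_K_through_origin(K1, lam):
    """Siegel-Klein distance between K1 and K2 = lam * K1 (assuming 0 < lam < 1)."""
    n = np.linalg.norm(K1, 2)                        # operator norm of K1
    a_minus = (1.0 / (lam - 1.0)) * (1.0 / n - 1.0)  # solves 1 + alpha*(lam-1) =  1/n
    a_plus = (1.0 / (1.0 - lam)) * (1.0 / n + 1.0)   # solves 1 + alpha*(lam-1) = -1/n
    return klein_from_alphas(a_minus, a_plus)

K1 = np.array([[0.5, 0.1], [0.1, 0.3]])              # real symmetric with ||K1||_O < 1
lam = 0.4
n = np.linalg.norm(K1, 2)
closed_form = 0.5 * abs(np.log(((1 - n) * (1 + lam * n)) / ((1 + n) * (1 - lam * n))))
print(np.isclose(rho_K_through_origin(K1, lam), closed_form))    # True
```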

5.5. Converting Siegel–Poincaré Matrices from/to Siegel–Klein Matrices

From Equation (149), we deduce that we can convert a matrix K in the Siegel–Klein disk to a corresponding matrix W in the Siegel–Poincaré disk, and vice versa, as follows:
  • Converting K to W: We convert a matrix K in the Siegel–Klein model to an equivalent matrix W in the Siegel–Poincaré model as follows:
    C_{K→D}(K) = ( 1 / (1 + √(1 − ‖K‖_O²)) ) K.
    This conversion corresponds to a radial contraction with respect to the origin 0 since 1/(1 + √(1 − ‖K‖_O²)) ≤ 1 (with equality for matrices belonging to the Shilov boundary).
  • Converting W to K: We convert a matrix W in the Siegel–Poincaré model to an equivalent matrix K in the Siegel–Klein model as follows:
    C_{D→K}(W) = ( 2 / (1 + ‖W‖_O²) ) W.
    This conversion corresponds to a radial expansion with respect to the origin 0 since 2/(1 + ‖W‖_O²) ≥ 1 (with equality for matrices on the Shilov boundary).
Proposition 2 (Conversions Siegel–Poincaré⇔Siegel–Klein disk). 
The conversion of a matrix K of the Siegel–Klein model to its equivalent matrix W in the Siegel–Poincaré model, and vice versa, is done by the following radial contraction and expansion functions: C_{K→D}(K) = (1/(1 + √(1 − ‖K‖_O²))) K and C_{D→K}(W) = (2/(1 + ‖W‖_O²)) W.
Figure 4 illustrates the radial expansion/contraction conversions between the Siegel–Poincaré and Siegel–Klein matrices.
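The two radial conversion maps are one-liners; this NumPy sketch (our naming) verifies that they are mutually inverse and that the Klein-to-Poincaré map is a contraction:

```python
import numpy as np

def klein_to_poincare(K):
    """Radial contraction C_{K->D}."""
    return K / (1 + np.sqrt(1 - np.linalg.norm(K, 2) ** 2))

def poincare_to_klein(W):
    """Radial expansion C_{D->K}."""
    return 2 * W / (1 + np.linalg.norm(W, 2) ** 2)

K = np.array([[0.6, 0.2j], [0.2j, -0.1]])            # symmetric complex, ||K||_O < 1
W = klein_to_poincare(K)
print(np.allclose(poincare_to_klein(W), K))          # True: round trip
print(np.linalg.norm(W, 2) <= np.linalg.norm(K, 2))  # True: radial contraction
```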
The cross-ratio (p, q; P, Q) = (‖p − P‖ ‖q − Q‖) / (‖p − Q‖ ‖q − P‖) of four collinear points on a line is such that (p, q; P, Q) = (p, r; P, Q) × (r, q; P, Q) whenever r belongs to that line. By virtue of this cross-ratio property, the (pre)geodesics in the Hilbert–Klein disk are Euclidean straight. Thus, we can write the pregeodesics as:
γ K 1 , K 2 ( α ) = ( 1 α ) K 1 + α K 2 = K 1 + α ( K 2 K 1 ) .
Riemannian geodesics are paths which minimize locally the distance and are parameterized proportionally to the arc-length. A pregeodesic is a path which minimizes locally the distance but is not necessarily parameterized proportionally to the arc-length. For implementing geometric intersection algorithms (e.g., a geodesic with a ball), it is enough to consider pregeodesics.
Another way to get a generic closed-form formula for the Siegel–Klein distance is by using the formula for the Siegel–Poincaré disk after converting the matrices to their equivalent matrices in the Siegel–Poincaré disk. We get the following expression:
ρ_K(K1, K2) = ρ_D( C_{K→D}(K1), C_{K→D}(K2) ),
= log( (1 + ‖Φ_{C_{K→D}(K1)}(C_{K→D}(K2))‖_O) / (1 − ‖Φ_{C_{K→D}(K1)}(C_{K→D}(K2))‖_O) ).
Theorem 3 (Formula for the Siegel–Klein distance). 
The Siegel–Klein distance between K1 and K2 in the Siegel disk is ρ_K(K1, K2) = log( (1 + ‖Φ_{C_{K→D}(K1)}(C_{K→D}(K2))‖_O) / (1 − ‖Φ_{C_{K→D}(K1)}(C_{K→D}(K2))‖_O) ).
The isometries in Hilbert geometry have been studied in [93].
We now turn our attention to a special case where we can report an efficient and exact linear-time algorithm for calculating the Siegel–Klein distance.

5.6. Siegel–Klein Distance between Diagonal Matrices

Let K_α = K1 + αK21 with K21 = K2 − K1. When solving the general case, we seek the extremal values of α such that:
I − K̄_α K_α ⪰ 0,
I − (K̄1 + αK̄21)(K1 + αK21) ⪰ 0,
I − ( K̄1K1 + α(K̄1K21 + K̄21K1) + α²K̄21K21 ) ⪰ 0,
K̄1K1 + α(K̄1K21 + K̄21K1) + α²K̄21K21 ⪯ I.
This last equation is reminiscent of a Linear Matrix Inequality [94] (LMI, i.e., Σ_i y_i S_i ⪰ 0 with y_i ∈ R and S_i ∈ Sym(d, R), where here the coefficients y_i are linked to each other).
Let us consider the special case of diagonal matrices corresponding to the polydisk domain: K = diag(k1, …, kd) and K′ = diag(k1′, …, kd′) of the Siegel disk domain.
First, let us start with the simple case d = 1, i.e., the Siegel disk SD(1) which is the complex open unit disk { k ∈ C : k̄k < 1 }. Let k_α = (1 − α)k1 + αk2 = k1 + αk21 with k21 = k2 − k1. We have k̄_α k_α = aα² + bα + c with a = k̄21k21, b = k̄1k21 + k̄21k1 and c = k̄1k1. To find the two intersection points of the line (k1k2) with the boundary of SD(1), we need to solve k̄_α k_α = 1. This amounts to solving the ordinary quadratic equation aα² + bα + (c − 1) = 0 since all coefficients a, b, and c are provably real. Let Δ = b² − 4a(c − 1) be the discriminant (Δ > 0 when k1 ≠ k2). We get the two solutions α_m = (−b − √Δ)/(2a) and α_M = (−b + √Δ)/(2a), and apply the 1D formula for the Hilbert distance:
ρ_K(k1, k2) = (1/2) log( (α_M (1 − α_m)) / (|α_m| (α_M − 1)) ).
Doing so, we obtain a formula equivalent to Equation (30).
For diagonal matrices with d > 1, we get the following system of d inequalities:
α² (k̄i′ − k̄i)(ki′ − ki) + α ( k̄i(ki′ − ki) + ki(k̄i′ − k̄i) ) + k̄iki − 1 ≤ 0, i ∈ {1, …, d}.
For each inequality, we solve the quadratic equation as in the 1d case above, yielding two solutions α_i^− and α_i^+. Then we satisfy all those constraints by setting
α_− = max_{i ∈ {1,…,d}} α_i^−,
α_+ = min_{i ∈ {1,…,d}} α_i^+,
and we compute the Hilbert distance:
ρ_K(K1, K2) = (1/2) log( (α_+ (1 − α_−)) / (|α_−| (α_+ − 1)) ).
Theorem 4 (Siegel–Klein distance for diagonal matrices). 
The Siegel–Klein distance between two diagonal matrices in the Siegel–Klein disk can be calculated exactly in linear time.
Notice that the proof extends to triangular matrices as well.
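A possible linear-time implementation of Theorem 4 in NumPy (the function name is ours; coordinates with k_i = k_i′ contribute no constraint and are mapped to an infinite interval):

```python
import numpy as np

def rho_K_diag(k1, k2):
    """Siegel-Klein distance between diag(k1) and diag(k2), in O(d) time."""
    k1, k2 = np.asarray(k1, complex), np.asarray(k2, complex)
    k21 = k2 - k1
    a = (k21.conj() * k21).real                      # per-coordinate quadratic coefficients
    b = (k1.conj() * k21 + k21.conj() * k1).real
    c = (k1.conj() * k1).real - 1.0
    disc = np.sqrt(b * b - 4 * a * c)
    with np.errstate(divide="ignore", invalid="ignore"):
        lo = np.where(a > 0, (-b - disc) / (2 * a), -np.inf)
        hi = np.where(a > 0, (-b + disc) / (2 * a), np.inf)
    a_minus, a_plus = lo.max(), hi.min()             # intersect the d feasible intervals
    return 0.5 * np.log((a_plus * (1 - a_minus)) / (abs(a_minus) * (a_plus - 1)))

# d = 1 check: distance between 0.5 and -0.2 in the Klein disk is (1/2) log(4.5)
print(np.isclose(rho_K_diag([0.5], [-0.2]), 0.5 * np.log(4.5)))   # True
```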
When the matrices are non-diagonal, we must solve analytically the optimization problem:
max |α|,
such that α²S2 + αS1 + S0 ⪯ 0,
with the following Hermitian matrices (with all real eigenvalues):
S2 = K̄21K21 = S2^H,
S1 = K̄1K21 + K̄21K1 = S1^H,
S0 = K̄1K1 − I = S0^H.
Although S 0 and S 2 commute, it is not necessarily the case for S 0 and S 1 , or  S 1 and S 2 .
When S 0 , S 1 and S 2 are simultaneously diagonalizable via congruence [95], the optimization problem becomes:
max | α | ,
such that α²D2 + αD1 + D0 ⪯ 0,
where Di = P^H Si P for some P ∈ GL(d, C), and we apply Theorem 4. The same result applies for simultaneously diagonalizable matrices S0, S1 and S2 via similarity: Di = P^{−1} Si P with P ∈ GL(d, C).
Notice that the Hilbert distance (or its squared distance) is not a separable distance, even in the case of diagonal matrices. (However, recall that the squared Siegel–Poincaré distance in the upper plane is separable for diagonal matrices.)
When d = 1 , we have
ρ U ( z 1 , z 2 ) = ρ D ( w 1 , w 2 ) = ρ K ( k 1 , k 2 ) .
We now investigate a guaranteed fast scheme for approximating the Siegel–Klein distance in the general case.

5.7. A Fast Guaranteed Approximation of the Siegel–Klein Distance

In the general case, we use the bisection approximation algorithm which is a geometric approximation technique that only requires the calculation of operator norms (and not the square root matrices required in the functions Φ · ( · ) for calculating the Siegel distance in the disk domain).
We have the following key property of the Hilbert distance:
Property 1 (Bounding Hilbert distance). 
Let Ω_− ⊂ Ω ⊂ Ω_+ be strictly nested open-bounded convex domains. Then we have the following inequalities for the corresponding Hilbert distances (for p, q ∈ Ω_−):
H_{Ω_+,κ}(p, q) ≤ H_{Ω,κ}(p, q) ≤ H_{Ω_−,κ}(p, q).
Figure 5 illustrates Property 1 of Hilbert distances corresponding to nested domains. Notice that when Ω_+ is a large enclosing ball of Ω with radius increasing to infinity, we have α_− → −∞ and α_+ → +∞, and therefore the Hilbert distance H_{Ω_+,κ} tends to zero.
Proof. 
Recall that H_{Ω,κ}(p, q) = H_{Ω∩(pq),κ}(p, q), i.e., the Hilbert distance with respect to domain Ω can be calculated as an equivalent 1-dimensional Hilbert distance by considering the open-bounded (convex) interval Ω ∩ (pq) = [p̄q̄]. Furthermore, for Ω ⊆ Ω′ we have [p̄q̄] ⊆ [p̄′q̄′] = Ω′ ∩ (pq). Therefore let us consider the 1D case as depicted in Figure 6. Let us choose p < q so that we have p̄′ ≤ p̄ < p < q < q̄ ≤ q̄′. In 1D, the Hilbert distance is expressed as
H_{Ω,κ}(p, q) := κ log( (|q̄ − p| |p̄ − q|) / (|q̄ − q| |p̄ − p|) ),
for a prescribed constant κ > 0. Therefore it follows that
H_{Ω,κ}(p, q) − H_{Ω′,κ}(p, q) = κ log( ( (|q̄ − p| |p̄ − q|) / (|q̄ − q| |p̄ − p|) ) × ( (|q̄′ − q| |p̄′ − p|) / (|q̄′ − p| |p̄′ − q|) ) ).
We can rewrite the argument of the logarithm as follows:
( (|q̄ − p| |p̄ − q|) / (|q̄ − q| |p̄ − p|) ) × ( (|q̄′ − q| |p̄′ − p|) / (|q̄′ − p| |p̄′ − q|) ) = ( (|q̄′ − q| |q̄ − p|) / (|q̄′ − p| |q̄ − q|) ) × ( (|p̄′ − p| |p̄ − q|) / (|p̄′ − q| |p̄ − p|) ),
= CR(q̄′, q̄; q, p) × CR(p̄′, p̄; p, q),
with
CR(a, b; c, d) = (|a − c| |b − d|) / (|a − d| |b − c|).
Since p̄′ ≤ p̄ < p < q < q̄ ≤ q̄′, we have CR(q̄′, q̄; q, p) ≥ 1 and CR(p̄′, p̄; p, q) ≥ 1, see [29]. Therefore we deduce that H_{Ω′,κ}(p, q) ≤ H_{Ω,κ}(p, q) when Ω ⊆ Ω′. ☐
Therefore the bisection search for finding the values of α_− and α_+ yields both lower and upper bounds on the exact Siegel–Klein distance as follows: Let α_− ∈ (l_−, u_−) and α_+ ∈ (l_+, u_+), where l_−, u_−, l_+, u_+ are real values defining the extremities of the intervals. Using Property 1, we get the following theorem:
Theorem 5 (Lower and upper bounds on the Siegel–Klein distance). 
The Siegel–Klein distance between two matrices K 1 and K 2 of the Siegel disk is bounded as follows:
ρ_K(l_−, u_+) ≤ ρ_K(K1, K2) ≤ ρ_K(u_−, l_+),
where
ρ_K(α_m, α_M) := (1/2) log( (α_M (1 − α_m)) / (|α_m| (α_M − 1)) ).
Figure 7 depicts the guaranteed lower and upper bounds obtained by performing the bisection search: The boundary point K̄1 is bracketed between an inner approximation K̄1^in ∈ SD(d) and an outer approximation K̄1^out ∉ SD(d), and similarly K̄2 is bracketed between K̄2^in and K̄2^out.
We have:
CR(K̄1^out, K1; K2, K̄2^out) ≤ CR(K̄1, K1; K2, K̄2) ≤ CR(K̄1^in, K1; K2, K̄2^in),
where CR(a, b; c, d) = (‖a − c‖ ‖b − d‖) / (‖a − d‖ ‖b − c‖) denotes the cross-ratio. Hence we have
H_{Ω_+,1/2}(K1, K2) ≤ ρ_K(K1, K2) ≤ H_{Ω_−,1/2}(K1, K2),
where Ω_− and Ω_+ denote the nested domains induced by the inner and outer bracketing points, respectively.
Notice that the approximation of the Siegel–Klein distance by line bisection only requires the calculation of an operator norm ‖M‖_O at each step: This involves computing the largest singular value of M, or equivalently the largest eigenvalue of M̄M (since M is symmetric, M^H M = M̄M). To get a (1 + ε)-approximation, we need to perform O(log(1/ε)) dichotomic steps. This yields a fast method to approximate the Siegel–Klein distance compared with the costly exact calculation of the Siegel–Klein distance of Equation (159), which requires the calculation of the Φ·(·) functions: This involves the calculation of a square root of a complex matrix. Furthermore, notice that the operator norm can be numerically approximated using Lanczos’ power iteration scheme [96,97] (see also [98]).
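The bisection scheme can be sketched as follows (a minimal NumPy implementation with our naming; it returns guaranteed lower and upper bounds in the spirit of Theorem 5):

```python
import numpy as np

def klein_from_alphas(a_minus, a_plus):
    """1D Hilbert distance formula (kappa = 1/2) from boundary parameters."""
    return 0.5 * np.log((a_plus * (1 - a_minus)) / (abs(a_minus) * (a_plus - 1)))

def bracket(K1, K21, inside, outside, iters=60):
    """Bisect between a parameter inside the disk and one outside;
    returns (inside, outside) values bracketing the boundary parameter."""
    for _ in range(iters):
        mid = 0.5 * (inside + outside)
        if np.linalg.norm(K1 + mid * K21, 2) < 1.0:   # one operator norm per step
            inside = mid
        else:
            outside = mid
    return inside, outside

def rho_K_bounds(K1, K2, iters=60):
    """Guaranteed lower and upper bounds on the Siegel-Klein distance."""
    K21 = K2 - K1
    out_plus, out_minus = 2.0, -1.0
    while np.linalg.norm(K1 + out_plus * K21, 2) < 1.0:    # grow until outside
        out_plus *= 2.0
    while np.linalg.norm(K1 + out_minus * K21, 2) < 1.0:
        out_minus *= 2.0
    l_plus, u_plus = bracket(K1, K21, 1.0, out_plus, iters)     # alpha_+ in (l_+, u_+)
    u_minus, l_minus = bracket(K1, K21, 0.0, out_minus, iters)  # alpha_- in (l_-, u_-)
    return klein_from_alphas(l_minus, u_plus), klein_from_alphas(u_minus, l_plus)

lo, hi = rho_K_bounds(np.array([[0.5]]), np.array([[-0.2]]))
print(np.isclose(0.5 * (lo + hi), 0.5 * np.log(4.5)), hi - lo < 1e-6)   # True True
```

Each bisection step costs exactly one operator-norm evaluation, so no matrix square root is ever needed.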

5.8. Hilbert-Fröbenius Distances and Fast Simple Bounds on the Siegel–Klein Distance

Let us notice that although the Hilbert distance does not depend on the chosen norm in the vector space, the Siegel complex ball SD(d) is defined according to the operator norm. In a finite-dimensional vector space, all norms are said to be “equivalent”: That is, given two norms ‖·‖_a and ‖·‖_b of vector space X, there exist positive constants c1 and c2 such that
c1 ‖x‖_b ≤ ‖x‖_a ≤ c2 ‖x‖_b, ∀x ∈ X.
In particular, this property holds for the operator norm and Fröbenius norm of finite-dimensional complex matrices, with positive constants c_d, C_d, c_d′ and C_d′ depending on the dimension d of the square matrices:
c_d ‖M‖_O ≤ ‖M‖_F ≤ C_d ‖M‖_O, ∀M ∈ M(d, C),
c_d′ ‖M‖_F ≤ ‖M‖_O ≤ C_d′ ‖M‖_F, ∀M ∈ M(d, C).
As mentioned in the introduction, we have M O M F .
Thus, the Siegel ball domain SD(d) may be strictly enclosed in an open Fröbenius ball FD(d, (1 + ε)/c_d′) (for any ε > 0), with
FD(d, r) := { M ∈ M(d, C) : ‖M‖_F < r }.
Therefore we have
H_{FD(d, 1/c_d′), 1/2}(K1, K2) ≤ ρ_K(K1, K2),
where H_{FD(d,r), 1/2} denotes the Fröbenius–Klein distance, i.e., the Hilbert distance induced by the Fröbenius ball FD(d, r) with constant κ = 1/2.
Now, we can calculate in closed form the Fröbenius–Klein distance by computing the two intersection points of the line (K1K2) with the Fröbenius ball FD(d, r). This amounts to solving the ordinary quadratic equation ‖K1 + α(K2 − K1)‖_F² = r² for parameter α:
‖K21‖_F² α² + ( Σ_{i,j} ( K21^{i,j} K̄1^{i,j} + K1^{i,j} K̄21^{i,j} ) ) α + ( ‖K1‖_F² − r² ) = 0,
where K^{i,j} denotes the coefficient of matrix K at row i and column j. Notice that Σ_{i,j} ( K21^{i,j} K̄1^{i,j} + K1^{i,j} K̄21^{i,j} ) is real. Once α_− and α_+ are found, we apply the 1D formula of the Hilbert distance of Equation (132).
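This quadratic can be solved directly; here is a short NumPy sketch (our naming) of the Fröbenius–Klein distance, checked in the d = 1 case where the Fröbenius ball of radius 1 is the unit disk itself:

```python
import numpy as np

def frobenius_klein(K1, K2, r):
    """Hilbert distance (kappa = 1/2) induced by the Froebenius ball of radius r."""
    K21 = K2 - K1
    a = np.linalg.norm(K21, 'fro') ** 2
    b = np.sum(K21 * K1.conj() + K1 * K21.conj()).real
    c = np.linalg.norm(K1, 'fro') ** 2 - r ** 2
    disc = np.sqrt(b * b - 4 * a * c)
    a_minus, a_plus = (-b - disc) / (2 * a), (-b + disc) / (2 * a)
    return 0.5 * np.log((a_plus * (1 - a_minus)) / (abs(a_minus) * (a_plus - 1)))

K1, K2 = np.array([[0.5]]), np.array([[-0.2]])
print(np.isclose(frobenius_klein(K1, K2, 1.0), 0.5 * np.log(4.5)))   # True
```

For the lower bound of Theorem 6, one may take, e.g., r = √d, which is a valid enclosing radius since ‖M‖_F ≤ √d ‖M‖_O.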
We summarize the result as follows:
Theorem 6 (Lower bound on Siegel–Klein distance). 
The Siegel–Klein distance is lower bounded by the Fröbenius–Klein distance induced by an enclosing complex Fröbenius ball (e.g., FD(d, 1/c_d′)), and this lower bound can be calculated in O(d²) time.

6. The Smallest Enclosing Ball in the SPD Manifold and in the Siegel Spaces

The goal of this section is to compare two implementations of a generalization of the Badoiu and Clarkson’s algorithm [47] to approximate the Smallest Enclosing Ball (SEB) of a set of complex matrices: The implementation using the Siegel–Poincaré disk (with respect to the Kobayashi distance ρ D ), and the implementation using the Siegel–Klein disk (with respect to the Siegel–Klein distance ρ K ).
In general, we may encode a pair of features (S, P) ∈ Sym(d, R) × P₊₊(d, R) in applications as a Riemann matrix Z(S, P) := S + iP, and consider the underlying geometry of the Siegel upper space. For example, anomaly detection in time series may be performed by considering (Σ̇(t), Σ(t)), where Σ(t) is the covariance matrix at time t and Σ̇(t) ≈ (1/dt)(Σ(t + dt) − Σ(t)) is the approximation of the derivative of the covariance matrix (a symmetric matrix) for a small prescribed value of dt.
The generic Badoiu and Clarkson’s algorithm [47] (BC algorithm) for a set { p 1 , , p n } of n points in a metric space ( X , ρ ) is described as follows:
  • Initialization: Let c 1 = p 1 and l = 1
  • Repeat L times:
    -
    Calculate the farthest point: f l = arg max i ∈ { 1 , … , n } ρ ( c l , p i ) .
    -
    Geodesic cut: Let c l + 1 = c l # t l f l , where p # t l q is the point which satisfies
    ρ ( p , p # t l X q ) = t l ρ ( p , q ) .
    -
    l l + 1 .
This elementary SEB approximation algorithm has been instantiated in various metric spaces with proofs of convergence according to the sequence { t l } l : see [43] for the case of hyperbolic geometry, ref. [46] for Riemannian geometry with bounded sectional curvatures, refs. [99,100] for dually flat spaces (non-metric spaces equipped with Bregman divergences [101,102]), etc. In Cartan-Hadamard manifolds [46], we require the series ∑ i t i to diverge and the series ∑ i t i 2 to converge. The number of iterations L needed to get a ( 1 + ϵ ) -approximation of the SEB depends on the underlying geometry and on the sequence { t l } l . For example, in Euclidean geometry, setting t l = 1 l + 1 with L = 1 ϵ 2 steps yields a ( 1 + ϵ ) -approximation of the SEB [47].
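The generic BC iteration can be sketched as follows, here instantiated with the Euclidean metric and straight-line geodesics (a minimal sketch; the helper names are ours):

```python
import numpy as np

def bc_seb_center(points, dist, geodesic_cut, num_iters):
    """Generic Badoiu-Clarkson iteration in a metric space.

    dist(p, q) is the metric; geodesic_cut(p, q, t) returns the point
    p #_t q satisfying dist(p, p #_t q) = t * dist(p, q)."""
    c = points[0]
    for l in range(1, num_iters + 1):
        farthest = max(points, key=lambda p: dist(c, p))  # farthest point from c
        c = geodesic_cut(c, farthest, 1.0 / (l + 1))      # step size t_l = 1/(l+1)
    return c

# Euclidean instance: geodesics are straight line segments.
def euclidean_dist(p, q):
    return float(np.linalg.norm(p - q))

def euclidean_cut(p, q, t):
    return (1.0 - t) * p + t * q
```

In the Euclidean case, the returned center converges to the circumcenter of the smallest enclosing ball of the input points.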
We start by recalling the Riemannian generalization of the BC algorithm, and then consider the Siegel spaces.

6.1. Approximating the Smallest Enclosing Ball in Riemannian Spaces

We first instantiate a particular example of Riemannian space, the space of Symmetric Positive-Definite matrix manifold (PD or SPD manifold for short), and then consider the general case on a Riemannian manifold ( M , g ) .

Approximating the SEB on the SPD Manifold

Given n positive-definite matrices [103,104] P 1 , … , P n of size d × d , we seek to calculate the SEB with circumcenter P * minimizing the following objective function:
min P ∈ PD ( d ) max i ∈ { 1 , … , n } ρ PD ( P , P i ) .
This is a minimax optimization problem. The SPD cone is not a complete metric space with respect to the Frobenius distance, but it is a complete metric space with respect to its natural Riemannian distance.
When the minimization is performed with respect to the Frobenius distance, we can solve this problem using techniques of Euclidean computational geometry [33,47] by vectorizing the SPD matrices P i into corresponding vectors v i = vec ( P i ) of R d × d such that ∥ P − P ′ ∥ F = ∥ vec ( P ) − vec ( P ′ ) ∥ 2 , where vec ( · ) : Sym ( d , R ) → R d × d vectorizes a matrix by stacking its column vectors. In fact, since the matrices are symmetric, it is enough to half-vectorize them: ∥ P − P ′ ∥ F = ∥ vec + ( P ) − vec + ( P ′ ) ∥ 2 , where vec + ( · ) : Sym + + ( d , R ) → R d ( d + 1 ) / 2 ; see [50].
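Such a norm-preserving half-vectorization can be sketched as follows (a sketch assuming, as is standard, that off-diagonal entries are weighted by √2 so that the ℓ2 distance between half-vectorizations matches the Frobenius matrix distance; the name vech is ours):

```python
import numpy as np

def vech(P):
    """Half-vectorize a symmetric d x d matrix into R^{d(d+1)/2},
    weighting off-diagonal entries by sqrt(2) so that the Euclidean
    distance between half-vectorizations equals the Frobenius distance."""
    i, j = np.tril_indices(P.shape[0])
    weights = np.where(i == j, 1.0, np.sqrt(2.0))
    return weights * P[i, j]
```

The √2 weights account for the fact that each off-diagonal entry appears twice in a symmetric matrix but only once in its half-vectorization.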
Property 2.
The smallest enclosing ball of a finite set of positive-definite matrices is unique.
Let us mention the two following proofs:
  • The SEB is well-defined and unique since the SPD manifold is a Bruhat–Tits space: That is, a complete metric space enjoying a semi-parallelogram law: For any P 1 , P 2 ∈ PD ( d ) and geodesic midpoint P 12 = P 1 ( P 1 − 1 P 2 ) 1 2 (see below), we have:
    ρ PD 2 ( P 1 , P 2 ) + 4 ρ PD 2 ( P , P 12 ) ≤ 2 ρ PD 2 ( P , P 1 ) + 2 ρ PD 2 ( P , P 2 ) , ∀ P ∈ PD ( d ) .
    (See [44], page 83, or [105], Chapter 6.) In a Bruhat–Tits space, the SEB is guaranteed to be unique [44,106].
  • Another proof of the uniqueness of the SEB on the SPD manifold consists of noticing that the SPD manifold is a Cartan-Hadamard manifold [46], and the SEB on a Cartan-Hadamard manifold is guaranteed to be unique.
We shall use the invariance of the Riemannian distance under congruence transformations:
ρ PD ( C ⊤ P 1 C , C ⊤ P 2 C ) = ρ PD ( P 1 , P 2 ) , ∀ C ∈ GL ( d , R ) .
In particular, choosing C = P 1 − 1 2 , we get
ρ PD ( P 1 , P 2 ) = ρ PD ( I , P 1 − 1 2 P 2 P 1 − 1 2 ) .
The geodesic from I to P is γ I , P ( α ) = Exp ( α Log P ) = P α . The set { λ i ( P α ) } of the d eigenvalues of P α coincides with the set { λ i ( P ) α } of the eigenvalues of P raised to the power α (up to a permutation).
Thus, to cut the geodesic I # t PD P , we must solve the following problem:
ρ PD ( I , P α ) = t × ρ PD ( I , P ) .
That is
√ ( ∑ i log 2 ( λ i ( P ) α ) ) = t × √ ( ∑ i log 2 λ i ( P ) ) ,
α × √ ( ∑ i log 2 λ i ( P ) ) = t × √ ( ∑ i log 2 λ i ( P ) ) .
The solution is α = t . Thus, I # t PD P = P t . For arbitrary P 1 and P 2 , we first apply the congruence transformation with C = P 1 − 1 2 , use the solution I # t PD ( C P 2 C ) = ( C P 2 C ) t , and apply the inverse congruence transformation with C − 1 = P 1 1 2 . The following theorem follows:
Theorem 7 (Geodesic cut on the SPD manifold). 
For any t ( 0 , 1 ) , we have the closed-form expression of the geodesic cut on the manifold of positive-definite matrices:
P 1 # t PD P 2 = P 1 1 2 Exp ( t Log ( P 1 − 1 2 P 2 P 1 − 1 2 ) ) P 1 1 2 ,
= P 1 1 2 ( P 1 − 1 2 P 2 P 1 − 1 2 ) t P 1 1 2 ,
= P 1 ( P 1 − 1 P 2 ) t ,
= P 2 ( P 2 − 1 P 1 ) 1 − t .
The matrix P 1 − 1 2 P 2 P 1 − 1 2 can be rewritten using the orthogonal eigendecomposition as U D U ⊤ , where D is the diagonal matrix of generalized eigenvalues. Thus, the PD geodesic can be rewritten as
P 1 # t PD P 2 = P 1 1 2 U D t U ⊤ P 1 1 2 .
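The geodesic cut and the affine-invariant distance can be sketched with the symmetric eigendecomposition as follows (a NumPy illustration; the function names are ours):

```python
import numpy as np

def spd_power(P, t):
    """Matrix power P^t of a symmetric positive-definite matrix P."""
    w, V = np.linalg.eigh(P)
    return (V * w ** t) @ V.T          # V diag(w^t) V^T

def spd_dist(P1, P2):
    """Affine-invariant distance || Log(P1^{-1/2} P2 P1^{-1/2}) ||_F."""
    Si = spd_power(P1, -0.5)
    return float(np.sqrt(np.sum(np.log(np.linalg.eigvalsh(Si @ P2 @ Si)) ** 2)))

def spd_cut(P1, P2, t):
    """Geodesic cut P1 #_t P2 = P1^{1/2} (P1^{-1/2} P2 P1^{-1/2})^t P1^{1/2}."""
    S, Si = spd_power(P1, 0.5), spd_power(P1, -0.5)
    return S @ spd_power(Si @ P2 @ Si, t) @ S
```

By construction, spd_cut satisfies ρ PD ( P 1 , P 1 # t P 2 ) = t ρ PD ( P 1 , P 2 ) , the defining property of the geodesic cut.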
We instantiate the algorithm of Badoiu and Clarkson [47] to a finite set P = { P 1 , , P n } of n positive-definite matrices in Algorithm 1.
Algorithm 1: Algorithm to approximate the circumcenter of a set of positive-definite matrices.
Entropy 22 01019 i001
The complexity of the algorithm is in O ( d 3 n T ) where T is the number of iterations, d the row dimension of the square matrices P i and n the number of matrices.
Observe that the solution corresponds to the arc-length parameterization of the geodesic with boundary values on the SPD manifold:
γ P 1 , P 2 ( t ) = P 1 1 2 Exp ( t Log ( P 1 − 1 2 P 2 P 1 − 1 2 ) ) P 1 1 2 .
The curve γ P 1 , P 2 ( t ) is a geodesic for any affine-invariant metric distance ρ ψ ( P 1 , P 2 ) = ∥ Log ( P 1 − 1 2 P 2 P 1 − 1 2 ) ∥ ψ , where ∥ M ∥ ψ = ψ ( λ 1 ( M ) , … , λ d ( M ) ) is a symmetric gauge norm [107].
In fact, we have shown the following property:
Property 3 (Riemannian geodesic cut). 
Let γ p , q ( t ) denote the Riemannian geodesic linking p and q on a Riemannian manifold ( M , g ) (i.e., parameterized proportionally to the arc-length and with respect to the Levi–Civita connection induced by the metric tensor g). Then we have
p 1 # t g p 2 = γ p 1 , p 2 ( t ) = γ p 2 , p 1 ( 1 t ) .
We report the generic Riemannian approximation algorithm in Algorithm 2.
Algorithm 2: Algorithm to approximate the Riemannian circumcenter of a set of points.
Entropy 22 01019 i002
Theorem 1 of [46] guarantees the convergence of Algorithm 2 provided that we have a lower bound and an upper bound on the sectional curvatures of the manifold ( M , g ) . The sectional curvatures of the SPD manifold have been proven to be non-positive [108]. The SPD manifold is a Cartan-Hadamard manifold with scalar curvature − 1 8 d ( d + 1 ) ( d + 2 )  [109], depending on the dimension d of the matrices. Notice that we can identify P ∈ PD ( d ) with an element of the quotient space GL ( d , R ) / O ( d ) since O ( d ) is the isotropy subgroup of GL ( d , R ) for the action P ↦ C ⊤ P C (i.e., I ↦ C ⊤ I C = I when C ∈ O ( d ) ). Thus, we have PD ( d ) ≅ GL ( d , R ) / O ( d ) . The SEB with respect to the Thompson metric
ρ T ( P 1 , P 2 ) : = max { log λ max ( P 2 P 1 − 1 ) , log λ max ( P 1 P 2 − 1 ) }
has been studied in [107].
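The Thompson metric admits a direct numerical sketch (using the facts that the eigenvalues of P 2 P 1 − 1 are real and positive for SPD inputs, and that λ max ( P 1 P 2 − 1 ) = 1 / λ min ( P 2 P 1 − 1 ) ; the function name is ours):

```python
import numpy as np

def thompson_dist(P1, P2):
    """Thompson metric max(log lmax(P2 P1^{-1}), log lmax(P1 P2^{-1})).

    For SPD inputs, the eigenvalues of P2 P1^{-1} are real and positive,
    and lmax(P1 P2^{-1}) = 1 / lmin(P2 P1^{-1})."""
    w = np.linalg.eigvals(P2 @ np.linalg.inv(P1)).real
    return float(max(np.log(w.max()), -np.log(w.min())))
```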

6.2. Implementation in the Siegel–Poincaré Disk

Given n complex matrices W 1 , … , W n ∈ SD ( d ) of size d × d , we seek the smallest-radius enclosing ball with center W * minimizing the following objective function:
min W ∈ SD ( d ) max i ∈ { 1 , … , n } ρ D ( W , W i ) .
This problem may have potential applications in image morphology [110] or anomaly detection of covariance matrices [39]. We may model the dynamics of a covariance matrix time series Σ ( t ) by the representation ( Σ ( t ) , Σ ˙ ( t ) ) , where Σ ˙ ( t ) = d d t Σ ( t ) ∈ Sym ( d , R ) , and use the Siegel SEB to detect anomalies; see [40] for anomaly detection based on Bregman SEBs.
The Siegel–Poincaré upper plane and disk are not Bruhat–Tits spaces, but they are spaces of non-positive curvature [111]. Indeed, already when d = 1 , the Poincaré disk is not a Bruhat–Tits space.
Notice that when d = 1 , hyperbolic balls in the Poincaré disk have Euclidean shapes. This is no longer true when d > 1 : Indeed, the equation of the ball centered at the origin 0,
Ball ( 0 , r ) = W ∈ SD ( d ) : log ( ( 1 + ∥ W ∥ O ) / ( 1 − ∥ W ∥ O ) ) ≤ r ,
amounts to
Ball ( 0 , r ) = W ∈ SD ( d ) : ∥ W ∥ O ≤ ( e r − 1 ) / ( e r + 1 ) .
When d = 1 , ∥ W ∥ O = | w | = ∥ ( Re ( w ) , Im ( w ) ) ∥ 2 , and Poincaré balls have Euclidean shapes. Otherwise, when d > 1 , ∥ W ∥ O = σ max ( W ) , and the constraint σ max ( W ) ≤ ( e r − 1 ) / ( e r + 1 ) does not define a complex Frobenius ball.
To apply the generic algorithm, we need to implement the geodesic cut operation W 1 # t W 2 . We consider the complex symplectic map Φ W 1 ( W ) in the Siegel disk that maps W 1 to 0 and W 2 to W 2 ′ = Φ W 1 ( W 2 ) . Then the geodesic between 0 and W 2 ′ is a straight line.
We need to find α ( t ) W = 0 # t SD W (with α ( t ) > 0 ) such that ρ D ( 0 , α ( t ) W ) = t ρ D ( 0 , W ) . That is, we shall solve the following equation:
log ( ( 1 + α ( t ) ∥ W ∥ O ) / ( 1 − α ( t ) ∥ W ∥ O ) ) = t × log ( ( 1 + ∥ W ∥ O ) / ( 1 − ∥ W ∥ O ) ) .
We find the exact solution as
α ( t ) = 1 ∥ W ∥ O × ( ( 1 + ∥ W ∥ O ) t − ( 1 − ∥ W ∥ O ) t ) / ( ( 1 + ∥ W ∥ O ) t + ( 1 − ∥ W ∥ O ) t ) .
Proposition 3 (Siegel–Poincaré geodesics from the origin). 
The geodesic in the Siegel disk is
γ 0 , W SD ( t ) = α ( t ) W
with
α ( t ) = 1 ∥ W ∥ O × ( ( 1 + ∥ W ∥ O ) t − ( 1 − ∥ W ∥ O ) t ) / ( ( 1 + ∥ W ∥ O ) t + ( 1 − ∥ W ∥ O ) t ) .
Thus, the midpoint W 1 # SD W 2 : = W 1 # 1 2 SD W 2 of W 1 and W 2 can be found as follows:
W 1 # SD W 2 = Φ W 1 1 0 # SD Φ W 1 ( W 2 ) ,
where
0 # SD W = α ( 1 2 ) W ,
= 1 ∥ W ∥ O × ( √ ( 1 + ∥ W ∥ O ) − √ ( 1 − ∥ W ∥ O ) ) / ( √ ( 1 + ∥ W ∥ O ) + √ ( 1 − ∥ W ∥ O ) ) W .
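A quick numerical sketch of this geodesic cut from the origin (NumPy's matrix 2-norm is the largest singular value, i.e., the operator norm; the function names are ours):

```python
import numpy as np

def op_norm(W):
    """Operator norm: the largest singular value of W."""
    return float(np.linalg.norm(W, 2))

def rho_disk_origin(W):
    """Kobayashi distance from the origin: log((1 + ||W||_O) / (1 - ||W||_O))."""
    s = op_norm(W)
    return np.log((1.0 + s) / (1.0 - s))

def cut_from_origin(W, t):
    """Geodesic cut 0 #_t W = alpha(t) W in the Siegel-Poincare disk."""
    s = op_norm(W)
    alpha = ((1.0 + s) ** t - (1.0 - s) ** t) / (s * ((1.0 + s) ** t + (1.0 - s) ** t))
    return alpha * W
```

One can check numerically that ρ D ( 0 , α ( t ) W ) = t ρ D ( 0 , W ) , the defining property of the cut.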
To summarize, Algorithm 3 recenters at every step the current center C t to the Siegel disk origin 0.
Algorithm 3: Algorithm to approximate the circumcenter of a set of matrices in the Siegel disk.
Entropy 22 01019 i003
The farthest point to the current approximation of the circumcenter can be calculated using the data-structure of the Vantage Point Tree (VPT), see [112].
The Riemannian curvature tensor of the Siegel space is non-positive [1,113] and the sectional curvatures are non-positive [111] and bounded above by a negative constant. In our implementation, we chose the step sizes t l = 1 l + 1 . Barbaresco [14] also adopted this iterative recentering operation for calculating the median in the Siegel disk; however, at the end of his algorithm, the median is not mapped back among the source matrix set. Recentering is costly because we need to calculate a square root matrix to evaluate Φ C ( W ) . A great advantage of the Siegel–Klein space is that geodesics are straight anywhere in the disk, so we do not need to perform recentering.

6.3. Fast Implementation in the Siegel–Klein Disk

The main advantage of implementing the Badoiu and Clarkson algorithm [47] in the Siegel–Klein disk is to avoid performing the costly recentering operations (which require the calculation of square root matrices). Moreover, we do not have to map the approximate circumcenter back at the end of the algorithm.
First, we state the following expression of the geodesics in the Siegel disk:
Proposition 4 (Siegel–Klein geodesics from the origin). 
The geodesic from the origin in the Siegel–Klein disk is expressed
γ 0 , K SK ( t ) = α ( t ) K
with
α ( t ) = 1 ∥ K ∥ O × ( ( 1 + ∥ K ∥ O ) t − ( 1 − ∥ K ∥ O ) t ) / ( ( 1 + ∥ K ∥ O ) t + ( 1 − ∥ K ∥ O ) t ) .
The proof follows straightforwardly from Proposition 3 because we have ρ K ( 0 , K ) = 1 2 ρ D ( 0 , K ) .

7. Conclusions and Perspectives

In this work, we have generalized the Klein model of hyperbolic geometry to the Siegel disk domain of complex matrices by considering the Hilbert geometry induced by the Siegel disk, an open-bounded convex complex matrix domain. We compared this Siegel–Klein disk model, equipped with its Hilbert distance called the Siegel–Klein distance ρ K , to both the Siegel–Poincaré disk model (Kobayashi distance ρ D ) and the Siegel–Poincaré upper plane (Siegel distance ρ U ). We showed how to convert matrices W of the Siegel–Poincaré disk model into equivalent matrices K of the Siegel–Klein disk model and matrices Z in the Siegel–Poincaré upper plane via symplectic maps. When the dimension d = 1 , we have the following equivalent hyperbolic distances:
ρ D ( w 1 , w 2 ) = ρ K ( k 1 , k 2 ) = ρ U ( z 1 , z 2 ) .
Since the geodesics in the Siegel–Klein disk are by construction straight, this model is well-suited to implement techniques of computational geometry [33]. Furthermore, the calculation of the Siegel–Klein distance does not require the recentering of one of its arguments to the disk origin, a computationally costly Siegel translation operation. We reported a linear-time algorithm for computing the exact Siegel–Klein distance ρ K between diagonal matrices of the disk (Theorem 4), and a fast way to numerically approximate the Siegel distance by bisection searches with guaranteed lower and upper bounds (Theorem 5). Finally, we demonstrated the algorithmic advantage of using the Siegel–Klein disk model instead of the Siegel–Poincaré disk model for approximating the smallest-radius enclosing ball of a finite set of complex matrices in the Siegel disk. In future work, we shall consider more generally the Hilbert geometry of homogeneous complex domains and investigate quantitatively the Siegel–Klein geometry in applications ranging from radar processing [14], image morphology [28], and computer vision to machine learning [24]. For example, the fast and robust guaranteed approximation of the Siegel–Klein distance may prove useful for performing clustering analysis in image morphology [28,38,110].

Supplementary Materials

The following are available online at https://franknielsen.github.io/SiegelKlein/.

Funding

This research received no external funding.

Acknowledgments

The author would like to thank Marc Arnaudon, Frédéric Barbaresco, Yann Cabanes, and Gaëtan Hadjeres for fruitful discussions, pointing out several relevant references, and feedback related to the Siegel domains.

Conflicts of Interest

The author declares no conflict of interest.

Abbreviations

The following notations and main formulas are used in this manuscript:
Complex matrices:
Number field F Real R or complex C
M ( d , F ) Space of square d × d matrices in F
Sym ( d , R ) Space of real symmetric matrices
0 matrix with all coefficients equal to zero (disk origin)
Frobenius norm ∥ M ∥ F = √ ( ∑ i , j | M i , j | 2 )
Operator norm M O = σ max ( M ) = max i { | λ i ( M ) | }
Domains:
Cone of SPD matrices PD ( d , R ) = { P ≻ 0 : P ∈ Sym ( d , R ) }
Siegel–Poincaré upper plane SH ( d ) = { Z = X + i Y : X ∈ Sym ( d , R ) , Y ∈ PD ( d , R ) }
Siegel–Poincaré disk SD ( d ) = { W ∈ Sym ( d , C ) : I − W ¯ W ≻ 0 }
Distances:
Siegel distance ρ U ( Z 1 , Z 2 ) = √ ( ∑ i = 1 d log 2 ( ( 1 + r i ) / ( 1 − r i ) ) )
r i = √ ( λ i ( R ( Z 1 , Z 2 ) ) )
R ( Z 1 , Z 2 ) : = ( Z 1 − Z 2 ) ( Z 1 − Z ¯ 2 ) − 1 ( Z ¯ 1 − Z ¯ 2 ) ( Z ¯ 1 − Z 2 ) − 1
Upper plane metric d s U ( Z ) = √ ( 2 tr ( Y − 1 d Z Y − 1 d Z ¯ ) )
PD distance ρ PD ( P 1 , P 2 ) = ∥ Log ( P 1 − 1 P 2 ) ∥ F = √ ( ∑ i = 1 d log 2 λ i ( P 1 − 1 P 2 ) )
PD metric d s PD ( P ) = √ ( tr ( ( P − 1 d P ) 2 ) )
Kobayashi distance ρ D ( W 1 , W 2 ) = log ( ( 1 + ∥ Φ W 1 ( W 2 ) ∥ O ) / ( 1 − ∥ Φ W 1 ( W 2 ) ∥ O ) )
Translation in the disk Φ W 1 ( W 2 ) = ( I − W 1 W ¯ 1 ) − 1 2 ( W 2 − W 1 ) ( I − W ¯ 1 W 2 ) − 1 ( I − W ¯ 1 W 1 ) 1 2
Disk distance to origin ρ D ( 0 , W ) = log ( ( 1 + ∥ W ∥ O ) / ( 1 − ∥ W ∥ O ) )
Siegel–Klein distance ρ K ( K 1 , K 2 ) = 1 2 log ( ( α + ( 1 − α − ) ) / ( ( − α − ) ( α + − 1 ) ) ) if K 1 ≠ K 2 , and 0 if K 1 = K 2
with ∥ ( 1 − α − ) K 1 + α − K 2 ∥ O = 1 ( α − < 0 ) and ∥ ( 1 − α + ) K 1 + α + K 2 ∥ O = 1 ( α + > 1 )
Siegel–Klein distance to 0 ρ K ( 0 , K ) = 1 2 log ( ( 1 + ∥ K ∥ O ) / ( 1 − ∥ K ∥ O ) )
Symplectic maps and groups:
Symplectic map ϕ S ( Z ) = ( A Z + B ) ( C Z + D ) 1 with S Sp ( d , R ) (upper plane)
ϕ S ( W ) with S Sp ( d , C ) (disk)
Symplectic group Sp ( d , F ) = A B C D : A B ⊤ = B A ⊤ , C D ⊤ = D C ⊤ , A D ⊤ − B C ⊤ = I
A , B , C , D M ( d , F )
group composition law matrix multiplication
group inverse law S ( − 1 ) : = D ⊤ − B ⊤ − C ⊤ A ⊤
Translation in H ( d ) of Z = A + i B to i I T U ( Z ) = ( B 1 2 ) 0 ( A B 1 2 ) ( B 1 2 )
symplectic orthogonal matrices SpO ( 2 d , R ) = A B − B A : A ⊤ A + B ⊤ B = I , A ⊤ B ∈ Sym ( d , R )
(rotations in SH ( d ) )
Translation to 0 in SD ( d ) Φ W 1 ( W 2 ) = ( I − W 1 W ¯ 1 ) − 1 2 ( W 2 − W 1 ) ( I − W ¯ 1 W 2 ) − 1 ( I − W ¯ 1 W 1 ) 1 2
Isom + ( S ) group of orientation-preserving isometries of a generic space S
Moeb ( d ) group of Möbius transformations

Appendix A. The Deflation Method: Approximating the Eigenvalues

A matrix M ∈ M ( d , C ) is diagonalizable if there exists a non-singular matrix P and a diagonal matrix Λ = diag ( λ 1 , … , λ d ) such that M = P Λ P − 1 . A Hermitian matrix (i.e., M = M * : = M ¯ ⊤ ) is a diagonalizable self-adjoint matrix which has all real eigenvalues and admits a basis of orthogonal eigenvectors. We can compute all the eigenvalues of a Hermitian matrix M by repeatedly applying the (normalized) power method, using the so-called deflation method (see [114], Chapter 10, and [115], Chapter 7). The deflation method proceeds iteratively to calculate numerically the eigenvalues λ i ’s and (normalized) eigenvectors v i ’s as follows:
  1. Let l = 1 and M 1 = M .
  2. Initialize at random a normalized vector x 0 ∈ C d (i.e., x 0 on the unit sphere with x 0 * x 0 = 1 ).
  3. For j in ( 0 , … , L l − 1 ) : x j + 1 ← M l x j , then x j + 1 ← x j + 1 / √ ( x j + 1 * x j + 1 ) .
  4. Let v l = x L l and λ l = x L l * M l x L l .
  5. Let l ← l + 1 . If l ≤ d then let M l = M l − 1 − λ l − 1 v l − 1 v l − 1 * and go to step 2.
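The deflation steps can be sketched as follows (a sketch assuming eigenvalues with distinct moduli and using a fixed iteration budget per stage rather than the ϵ-stopping rule, for brevity; the function name is ours):

```python
import numpy as np

def deflation_eigs(M, iters=1000, seed=0):
    """All eigenpairs of a Hermitian matrix M by the deflation method:
    normalized power iterations followed by rank-one deflation updates."""
    rng = np.random.default_rng(seed)
    d = M.shape[0]
    Ml = np.array(M, dtype=complex)
    values, vectors = [], []
    for _ in range(d):
        x = rng.normal(size=d) + 1j * rng.normal(size=d)
        x /= np.linalg.norm(x)
        for _ in range(iters):                   # normalized power iterations
            x = Ml @ x
            x /= np.linalg.norm(x)
        lam = float((np.conj(x) @ Ml @ x).real)  # Rayleigh quotient
        values.append(lam)
        vectors.append(x)
        Ml = Ml - lam * np.outer(x, np.conj(x))  # deflate the found eigenpair
    return np.array(values), np.column_stack(vectors)
```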
The deflation method reports the eigenvalues λ i ’s such that
| λ 1 | > | λ 2 | ≥ ⋯ ≥ | λ d | ,
where λ 1 is the dominant eigenvalue.
The overall number of normalized power iterations is L = ∑ i = 1 d L i (matrix–vector multiplications), where the number L l of iterations of the normalized power method at stage l can be defined such that ∥ x L l − x L l − 1 ∥ ≤ ϵ , for a prescribed value of ϵ > 0 . Notice that the numerical errors of the eigenpairs ( λ i , v i ) ’s propagate and accumulate at each stage. That is, at stage l, the deflation method calculates the dominant eigenvector of a residual perturbed matrix M l . The overall quality of the approximate eigendecomposition can be assessed by calculating the norm of the last residual matrix:
∥ M − ∑ i = 1 d λ i v i v i * ∥ F .
The normalized power method exhibits linear convergence for diagonalizable matrices and quadratic convergence for Hermitian matrices. Other numerical methods for numerically calculating the eigenvalues include the Krylov subspace techniques [114,115,116].
  • Snippet Code
We implemented our software library and smallest enclosing ball algorithms in Java™.
The code below is a snippet written in Maxima (available in the Supplementary Material), a computer algebra system freely downloadable at http://maxima.sourceforge.net/
  • /* Code in Maxima */
  • /* Calculate the Siegel metric distance in the Siegel upper space */
  • load(eigen);
  • /* symmetric */
  • S1: matrix( [0.265,   0.5],
  •     [0.5 , -0.085]);
  • /* positive-definite */
  • P1: matrix( [0.235,   0.048],
  •     [0.048 ,  0.792]);
  • /* Matrix in the Siegel upper space */
  • Z1: S1+%i*P1;
  • S2:  matrix( [-0.329,  -0.2],
  •    [-0.2 , -0.382]);
  • P2: matrix([0.464,   0.289],
  •     [0.289  , 0.431]);
  • Z2: S2+%i*P2;
  • /* Generalized Moebius transformation */
  • R(Z1,Z2) :=
  • ((Z1-Z2).invert(Z1-conjugate(Z2))).((conjugate(Z1)-conjugate(Z2)).invert(conjugate(Z1)-Z2));
  • R12: ratsimp(R(Z1,Z2));
  • ratsimp(R12[2][1]-conjugate(R12[1][2]));
  • /* Retrieve the eigenvalues: They are all reals */
  • r: float(eivals(R12))[1];
  • /* Calculate the Siegel distance */
  • distSiegel: sum(log( (1+sqrt(r[i]))/(1-sqrt(r[i]))  )**2, i, 1, 2);

References

  1. Siegel, C.L. Symplectic geometry. Am. J. Math. 1943, 65, 1–86. [Google Scholar] [CrossRef]
  2. Hua, L.K. On the theory of automorphic functions of a matrix variable I: Geometrical basis. Am. J. Math. 1944, 66, 470–488. [Google Scholar] [CrossRef]
  3. Siegel, C.L. Einführung in die Theorie der Modulfunktionen n-ten Grades. Math. Ann. 1939, 116, 617–657. [Google Scholar] [CrossRef]
  4. Blair, D.E. Riemannian Geometry of Contact and Symplectic Manifolds; Springer Science & Business Media: Berlin/Heidelberg, Germany, 2010. [Google Scholar]
  5. Freitas, P.J. On the Action of the Symplectic Group on the Siegel upper Half Plane. Ph.D. Thesis, University of Illinois at Chicago, Chicago, IL, USA, 1999. [Google Scholar]
  6. Koufany, K. Analyse et Géométrie des Domaines Bornés Symétriques. Ph.D. Thesis, Université Henri Poincaré, Nancy, France, 2006. [Google Scholar]
  7. Cartan, É. Sur les domaines bornés homogènes de l’espace de n variables complexes. In Abhandlungen aus dem Mathematischen Seminar der Universität Hamburg; Springer: Berlin/Heidelberg, Germany, 1935; Volume 11, pp. 116–162. [Google Scholar]
  8. Koszul, J.L. Exposés sur les Espaces Homogènes Symétriques; Sociedade de Matematica: Rio de Janeiro, Brazil, 1959. [Google Scholar]
  9. Berezin, F.A. Quantization in complex symmetric spaces. Math. USSR-Izv. 1975, 9, 341. [Google Scholar] [CrossRef]
  10. Förstner, W.; Moonen, B. A metric for covariance matrices. In Geodesy-the Challenge of the 3rd Millennium; Springer: Berlin/Heidelberg, Germany, 2003; pp. 299–309. [Google Scholar]
  11. Harandi, M.T.; Salzmann, M.; Hartley, R. From manifold to manifold: Geometry-aware dimensionality reduction for SPD matrices. In European Conference on Computer Vision; Springer: Cham, Switzerland, 2014; pp. 17–32. [Google Scholar]
  12. Barbaresco, F. Innovative tools for radar signal processing based on Cartan’s geometry of SPD matrices & information geometry. In Proceedings of the 2008 IEEE Radar Conference, Rome, Italy, 26–30 May 2008; pp. 1–6. [Google Scholar]
  13. Barbaresco, F. Robust statistical radar processing in Fréchet metric space: OS-HDR-CFAR and OS-STAP processing in Siegel homogeneous bounded domains. In Proceedings of the 12th IEEE International Radar Symposium (IRS), Leipzig, Germany, 7–9 September 2011; pp. 639–644. [Google Scholar]
  14. Barbaresco, F. Information geometry of covariance matrix: Cartan-Siegel homogeneous bounded domains, Mostow/Berger fibration and Fréchet median. In Matrix Information Geometry; Springer: Berlin/Heidelberg, Germany, 2013; pp. 199–255. [Google Scholar]
  15. Barbaresco, F. Information geometry manifold of Toeplitz Hermitian positive definite covariance matrices: Mostow/Berger fibration and Berezin quantization of Cartan-Siegel domains. Int. J. Emerg. Trends Signal Process. 2013, 1, 1–11. [Google Scholar]
  16. Jeuris, B.; Vandebril, R. The Kähler mean of block-Toeplitz matrices with Toeplitz structured blocks. SIAM J. Matrix Anal. Appl. 2016, 37, 1151–1175. [Google Scholar] [CrossRef] [Green Version]
  17. Liu, C.; Si, J. Positive Toeplitz operators on the Bergman spaces of the Siegel upper half-space. Commun. Math. Stat. 2019, 8, 113–134. [Google Scholar] [CrossRef] [Green Version]
  18. Chevallier, E.; Forget, T.; Barbaresco, F.; Angulo, J. Kernel density estimation on the Siegel space with an application to radar processing. Entropy 2016, 18, 396. [Google Scholar] [CrossRef] [Green Version]
  19. Burbea, J. Informative Geometry of Probability Spaces; Technical report; Pittsburgh Univ. PA Center: Pittsburgh, PA, USA, 1984. [Google Scholar]
  20. Calvo, M.; Oller, J.M. A distance between multivariate normal distributions based in an embedding into the Siegel group. J. Multivar. Anal. 1990, 35, 223–242. [Google Scholar] [CrossRef] [Green Version]
  21. Calvo, M.; Oller, J.M. A distance between elliptical distributions based in an embedding into the Siegel group. J. Comput. Appl. Math. 2002, 145, 319–334. [Google Scholar] [CrossRef] [Green Version]
  22. Tang, M.; Rong, Y.; Zhou, J. An information geometric viewpoint on the detection of range distributed targets. Math. Probl. Eng. 2015, 2015. [Google Scholar] [CrossRef]
  23. Tang, M.; Rong, Y.; Zhou, J.; Li, X.R. Information geometric approach to multisensor estimation fusion. IEEE Trans. Signal Process. 2018, 67, 279–292. [Google Scholar] [CrossRef]
  24. Krefl, D.; Carrazza, S.; Haghighat, B.; Kahlen, J. Riemann-Theta Boltzmann machine. Neurocomputing 2020, 388, 334–345. [Google Scholar] [CrossRef]
  25. Ohsawa, T. The Siegel upper half space is a Marsden–Weinstein quotient: Symplectic reduction and Gaussian wave packets. Lett. Math. Phys. 2015, 105, 1301–1320. [Google Scholar] [CrossRef] [Green Version]
  26. Froese, R.; Hasler, D.; Spitzer, W. Transfer matrices, hyperbolic geometry and absolutely continuous spectrum for some discrete Schrödinger operators on graphs. J. Funct. Anal. 2006, 230, 184–221. [Google Scholar] [CrossRef] [Green Version]
  27. Ohsawa, T.; Tronci, C. Geometry and dynamics of Gaussian wave packets and their Wigner transforms. J. Math. Phys. 2017, 58, 092105. [Google Scholar] [CrossRef]
  28. Lenz, R. Siegel Descriptors for Image Processing. IEEE Signal Process. Lett. 2016, 23, 625–628. [Google Scholar] [CrossRef] [Green Version]
  29. Richter-Gebert, J. Perspectives on Projective Geometry: A Guided Tour through Real and Complex Geometry; Springer Science & Business Media: Berlin/Heidelberg, Germany, 2011. [Google Scholar]
  30. Hilbert, D. Über die gerade Linie als kürzeste Verbindung zweier Punkte (About the straight line as the shortest connection between two points). Math. Ann. 1895, 46, 91–96. [Google Scholar] [CrossRef] [Green Version]
  31. Papadopoulos, A.; Troyanov, M. Handbook of Hilbert Geometry, Volume 22 of IRMA Lectures in Mathematics and Theoretical Physics; European Mathematical Society (EMS): Zürich, Switzerland, 2014. [Google Scholar]
  32. Liverani, C.; Wojtkowski, M.P. Generalization of the Hilbert metric to the space of positive definite matrices. Pac. J. Math. 1994, 166, 339–355. [Google Scholar] [CrossRef] [Green Version]
  33. Boissonnat, J.D.; Yvinec, M. Algorithmic Geometry; Cambridge University Press: Cambridge, UK, 1998. [Google Scholar]
  34. Nielsen, F.; Nock, R. Hyperbolic Voronoi diagrams made easy. In Proceedings of the 2010 IEEE International Conference on Computational Science and Its Applications, Fukuoka, Japan, 23–26 March 2010; pp. 74–80. [Google Scholar]
  35. Nielsen, F.; Muzellec, B.; Nock, R. Classification with mixtures of curved Mahalanobis metrics. In Proceedings of the 2016 IEEE International Conference on Image Processing (ICIP), Phoenix, AZ, USA, 25–28 September 2016; pp. 241–245. [Google Scholar]
  36. Nielsen, F.; Nock, R. Visualizing hyperbolic Voronoi diagrams. In Proceedings of the Thirtieth Annual Symposium on Computational Geometry, Kyoto, Japan, 8–11 June 2014; pp. 90–91. [Google Scholar]
  37. Nielsen, F.; Nock, R. The hyperbolic Voronoi diagram in arbitrary dimension. arXiv 2012, arXiv:1210.8234. [Google Scholar]
  38. Angulo, J.; Velasco-Forero, S. Morphological processing of univariate Gaussian distribution-valued images based on Poincaré upper-half plane representation. In Geometric Theory of Information; Springer: Berlin/Heidelberg, Germany, 2014; pp. 331–366. [Google Scholar]
  39. Tavallaee, M.; Lu, W.; Iqbal, S.A.; Ghorbani, A.A. A novel covariance matrix based approach for detecting network anomalies. In Proceedings of the 6th IEEE Annual Communication Networks and Services Research Conference (cnsr 2008), Halifax, NS, Canada, 5–8 May 2008; pp. 75–81. [Google Scholar]
  40. Cont, A.; Dubnov, S.; Assayag, G. On the information geometry of audio streams with applications to similarity computing. IEEE Trans. Audio Speech Lang. Process. 2010, 19, 837–846. [Google Scholar] [CrossRef]
  41. Mazumdar, A.; Polyanskiy, Y.; Saha, B. On Chebyshev radius of a set in hamming space and the closest string problem. In Proceedings of the 2013 IEEE International Symposium on Information Theory, Istanbul, Turkey, 7–12 July 2013; pp. 1401–1405. [Google Scholar]
  42. Welzl, E. Smallest enclosing disks (balls and ellipsoids). In New Results and New Trends in Computer Science; Springer: Berlin/Heidelberg, Germany, 1991; pp. 359–370. [Google Scholar]
  43. Nielsen, F.; Hadjeres, G. Approximating covering and minimum enclosing balls in hyperbolic geometry. In Proceedings of the International Conference on Geometric Science of Information, Palaiseau, France, 28–30 October 2015; Springer: Berlin/Heidelberg, Germany, 2015; pp. 586–594. [Google Scholar]
  44. Lang, S. Math Talks for Undergraduates; Springer Science & Business Media: Berlin/Heidelberg, Germany, 2012. [Google Scholar]
  45. Nielsen, F.; Bhatia, R. Matrix Information Geometry; Springer: Berlin/Heidelberg, Germany, 2013. [Google Scholar]
  46. Arnaudon, M.; Nielsen, F. On approximating the Riemannian 1-center. Comput. Geom. 2013, 46, 93–104. [Google Scholar] [CrossRef]
  47. Badoiu, M.; Clarkson, K.L. Smaller core-sets for balls. In Proceedings of the Fourteenth Annual ACM-SIAM Symposium on Discrete Algorithms, Baltimore, MD, USA, 12–14 January 2003; Society for Industrial and Applied Mathematics: Philadelphia, PA, USA, 2003; pp. 801–802. [Google Scholar]
  48. Tsang, I.H.; Kwok, J.Y.; Zurada, J.M. Generalized core vector machines. IEEE Trans. Neural Netw. 2006, 17, 1126–1140. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  49. Paulsen, V.I.; Raghupathi, M. An Introduction to the Theory of Reproducing Kernel Hilbert Spaces; Cambridge University Press: Cambridge, UK, 2016; Volume 152. [Google Scholar]
  50. Nielsen, F.; Nock, R. Fast (1+ϵ)-Approximation of the Löwner Extremal Matrices of High-Dimensional Symmetric Matrices. In Computational Information Geometry; Springer: Berlin/Heidelberg, Germany, 2017; pp. 121–132. [Google Scholar]
  51. Moakher, M. A differential geometric approach to the geometric mean of symmetric positive-definite matrices. SIAM J. Matrix Anal. Appl. 2005, 26, 735–747. [Google Scholar] [CrossRef]
  52. Niculescu, C.; Persson, L.E. Convex Functions and Their Applications: A Contemporary Approach, 2nd ed.; Springer Science & Business Media: Berlin/Heidelberg, Germany, 2018. [Google Scholar]
  53. Lanczos, C. An Iteration Method for the Solution of the Eigenvalue Problem of Linear Differential and Integral Operators1. J. Res. Natl. Bur. Stand. 1950, 45, 255–282. [Google Scholar] [CrossRef]
  54. Cullum, J.K.; Willoughby, R.A. Lanczos Algorithms for Large Symmetric Eigenvalue Computations, Vol. 1: Theory; SIAM: Philadelphia, PA, USA, 2002; Volume 41.
  55. Cannon, J.W.; Floyd, W.J.; Kenyon, R.; Parry, W.R. Hyperbolic geometry. Flavors Geom. 1997, 31, 59–115.
  56. Goldman, W.M. Complex Hyperbolic Geometry; Oxford University Press: Oxford, UK, 1999.
  57. Amari, S.I. Information Geometry and Its Applications; Springer: Berlin/Heidelberg, Germany, 2016; Volume 194.
  58. Nielsen, F. An elementary introduction to information geometry. arXiv 2018, arXiv:1808.08271.
  59. Ratcliffe, J.G. Foundations of Hyperbolic Manifolds; Springer: Berlin/Heidelberg, Germany, 1994; Volume 3.
  60. Jin, M.; Gu, X.; He, Y.; Wang, Y. Conformal Geometry: Computational Algorithms and Engineering Applications; Springer: Berlin/Heidelberg, Germany, 2018.
  61. Hotelling, H. Spaces of statistical parameters. Bull. Am. Math. Soc. 1930, 36, 191.
  62. Rao, C.R. Information and accuracy attainable in the estimation of statistical parameters. Bull. Calcutta Math. Soc. 1945, 37, 81–91.
  63. Nielsen, F. Cramér–Rao lower bound and information geometry. In Connected at Infinity II; Springer: Berlin/Heidelberg, Germany, 2013; pp. 18–37.
  64. Sun, K.; Nielsen, F. Relative Fisher information and natural gradient for learning large modular models. In Proceedings of the International Conference on Machine Learning, Sydney, Australia, 6–11 August 2017; pp. 3289–3298.
  65. Komaki, F. Bayesian prediction based on a class of shrinkage priors for location-scale models. Ann. Inst. Stat. Math. 2007, 59, 135–146.
  66. Namikawa, Y. The Siegel upper half plane and the symplectic group. In Toroidal Compactification of Siegel Spaces; Springer: Berlin/Heidelberg, Germany, 1980; pp. 1–6.
  67. Riemann, B. Theorie der Abel'schen Functionen. J. Reine Angew. Math. 1857, 54, 101–155.
  68. Riemann, B. Über das Verschwinden der Theta-Functionen. J. Reine Angew. Math. 1865, 65, 214–224.
  69. Swierczewski, C.; Deconinck, B. Computing Riemann theta functions in Sage with applications. Math. Comput. Simul. 2016, 127, 263–272.
  70. Agostini, D.; Chua, L. Computing theta functions with Julia. arXiv 2019, arXiv:1906.06507.
  71. Agostini, D.; Améndola, C. Discrete Gaussian distributions via theta functions. SIAM J. Appl. Algebra Geom. 2019, 3, 1–30.
  72. Bucy, R.; Williams, B. A matrix cross ratio theorem for the Riccati equation. Comput. Math. Appl. 1993, 26, 9–20.
  73. Mackey, D.S.; Mackey, N. On the Determinant of Symplectic Matrices; Manchester Centre for Computational Mathematics: Manchester, UK, 2003.
  74. Rim, D. An elementary proof that symplectic matrices have determinant one. Adv. Dyn. Syst. Appl. 2017, 12, 15–20.
  75. Hua, L.K. Geometries of matrices. II. Study of involutions in the geometry of symmetric matrices. Trans. Am. Math. Soc. 1947, 61, 193–228.
  76. Wan, Z.; Hua, L. Geometry of Matrices; World Scientific: Singapore, 1996.
  77. Clerc, J.L. Geometry of the Shilov boundary of a bounded symmetric domain. In Proceedings of the Tenth International Conference on Geometry, Integrability and Quantization, Varna, Bulgaria, 6–11 June 2008; Institute of Biophysics and Biomedical Engineering, Bulgarian Academy of Sciences: Varna, Bulgaria, 2009; pp. 11–55.
  78. Freitas, P.J.; Friedland, S. Revisiting the Siegel upper half plane II. Linear Algebra Appl. 2004, 376, 45–67.
  79. Bassanelli, G. On horospheres and holomorphic endomorfisms of the Siegel disc. Rend. Semin. Mat. Univ. Padova 1983, 70, 147–165.
  80. Kobayashi, S. Invariant distances on complex manifolds and holomorphic mappings. J. Math. Soc. Jpn. 1967, 19, 460–480.
  81. Carathéodory, C. Über eine spezielle Metrik, die in der Theorie der analytischen Funktionen auftritt. Atti Pontif. Acad. Sc. Nuovi Lincei 1927, 80, 135–141.
  82. Sra, S. On the matrix square root via geometric optimization. arXiv 2015, arXiv:1507.08366.
  83. Mitchell, J. Potential theory in the geometry of matrices. Trans. Am. Math. Soc. 1955, 79, 401–422.
  84. Rong, F. Non-diagonal holomorphic isometric embeddings of the Poincaré disk into the Siegel upper half-plane. Asian J. Math. 2018, 22, 665–672.
  85. Beardon, A. The Klein, Hilbert and Poincaré metrics of a domain. J. Comput. Appl. Math. 1999, 105, 155–162.
  86. Nielsen, F.; Sun, K. Clustering in Hilbert's projective geometry: The case studies of the probability simplex. Geom. Struct. Inf. 2018, 297–331.
  87. Nielsen, F.; Shao, L. On balls in a Hilbert polygonal geometry (multimedia contribution). In Proceedings of the 33rd International Symposium on Computational Geometry (SoCG 2017), Brisbane, Australia, 4–7 July 2017; Schloss Dagstuhl–Leibniz-Zentrum für Informatik: Wadern, Germany, 2017.
  88. Klein, F. Über die sogenannte nicht-Euklidische Geometrie. Math. Ann. 1873, 6, 112–145.
  89. Beltrami, E. Saggio di interpretazione della geometria non-euclidea. G. Mat. 1868, IV, 284.
  90. Cayley, A. A sixth memoir upon quantics. Philos. Trans. R. Soc. Lond. 1859, 149, 61–90.
  91. De La Harpe, P. On Hilbert's metric for simplices. Geom. Group Theory 1993, 1, 97–119.
  92. Marsaglia, G. Bounds for the Rank of the Sum of Two Matrices; Technical Report; Boeing Scientific Research Labs: Seattle, WA, USA, 1964.
  93. Speer, T. Isometries of the Hilbert metric. arXiv 2014, arXiv:1411.1826.
  94. El Ghaoui, L.; Niculescu, S.I. Advances in Linear Matrix Inequality Methods in Control; SIAM: Philadelphia, PA, USA, 2000.
  95. Bustamante, M.D.; Mellon, P.; Velasco, M. Solving the problem of simultaneous diagonalisation via congruence. arXiv 2019, arXiv:1908.04228.
  96. Kuczyński, J.; Woźniakowski, H. Estimating the largest eigenvalue by the power and Lanczos algorithms with a random start. SIAM J. Matrix Anal. Appl. 1992, 13, 1094–1122.
  97. Higham, N.J.; Al-Mohy, A.H. Computing matrix functions. Acta Numer. 2010, 19, 159–208.
  98. Li, Y.; Woodruff, D.P. Tight bounds for sketching the operator norm, Schatten norms, and subspace embeddings. In Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM); Schloss Dagstuhl–Leibniz-Zentrum für Informatik: Wadern, Germany, 2016.
  99. Nock, R.; Nielsen, F. Fitting the smallest enclosing Bregman ball. In Proceedings of the European Conference on Machine Learning, Porto, Portugal, 3–7 October 2005; Springer: Berlin/Heidelberg, Germany, 2005; pp. 649–656.
  100. Nielsen, F.; Nock, R. On approximating the smallest enclosing Bregman balls. In Proceedings of the Twenty-Second Annual Symposium on Computational Geometry, Sedona, AZ, USA, 5–7 June 2006; pp. 485–486.
  101. Nielsen, F.; Boissonnat, J.D.; Nock, R. Visualizing Bregman Voronoi diagrams. In Proceedings of the Twenty-Third Annual Symposium on Computational Geometry, Gyeongju, Korea, 6–8 June 2007; pp. 121–122.
  102. Boissonnat, J.D.; Nielsen, F.; Nock, R. Bregman Voronoi diagrams. Discret. Comput. Geom. 2010, 44, 281–307.
  103. Bougerol, P. Kalman filtering with random coefficients and contractions. SIAM J. Control Optim. 1993, 31, 942–959.
  104. Fletcher, P.T.; Moeller, J.; Phillips, J.M.; Venkatasubramanian, S. Horoball hulls and extents in positive definite space. In Workshop on Algorithms and Data Structures; Springer: Berlin/Heidelberg, Germany, 2011; pp. 386–398.
  105. Bhatia, R. Positive Definite Matrices; Princeton University Press: Princeton, NJ, USA, 2009; Volume 24.
  106. Bruhat, F.; Tits, J. Groupes réductifs sur un corps local: I. Données radicielles valuées. Publ. Math. IHÉS 1972, 41, 5–251.
  107. Mostajeran, C.; Grussler, C.; Sepulchre, R. Affine-invariant midrange statistics. In Proceedings of the International Conference on Geometric Science of Information, Toulouse, France, 27–29 August 2019; Springer: Berlin/Heidelberg, Germany, 2019; pp. 494–501.
  108. Helgason, S. Differential Geometry, Lie Groups, and Symmetric Spaces; Academic Press: Cambridge, MA, USA, 1979.
  109. Andai, A. Information geometry in quantum mechanics. Ph.D. Thesis, Budapest University of Technology and Economics, Budapest, Hungary, 2004. (In Hungarian)
  110. Angulo, J. Structure tensor image filtering using Riemannian L1 and L∞ center-of-mass. Image Anal. Stereol. 2014, 33, 95–105.
  111. D'Atri, J.; Dotti Miatello, I. A characterization of bounded symmetric domains by curvature. Trans. Am. Math. Soc. 1983, 276, 531–540.
  112. Nielsen, F.; Piro, P.; Barlaud, M. Bregman vantage point trees for efficient nearest neighbor queries. In Proceedings of the 2009 IEEE International Conference on Multimedia and Expo, New York, NY, USA, 28 June–3 July 2009; pp. 878–881.
  113. Hua, L.K. The estimation of the Riemann curvature in several complex variables. Acta Math. Sin. Chin. Ser. 1954, 4, 143–170.
  114. Loehr, N. Advanced Linear Algebra; CRC Press: Boca Raton, FL, USA, 2014.
  115. Izaac, J.; Wang, J. Computational Quantum Mechanics; Springer: Berlin/Heidelberg, Germany, 2018.
  116. Sun, J.; Zhou, A. Finite Element Methods for Eigenvalue Problems; CRC Press: Boca Raton, FL, USA, 2016.
Figure 1. Illustrating the properties and conversion between the Siegel upper plane and the Siegel disk.
Figure 2. Hilbert distance induced by a bounded open convex domain Ω.
Figure 3. Hilbert geometry for the Siegel disk: The Siegel–Klein disk model.
Figure 4. Conversions in the Siegel disk domain: Poincaré to/from Klein matrices.
Figure 5. Inequalities of the Hilbert distances induced by nested bounded open convex domains.
Figure 6. Comparison of the Hilbert distances H_{Ω,κ}(p, q) and H_{Ω′,κ′}(p, q) induced by nested open interval domains Ω′ ⊂ Ω: H_{Ω,κ}(p, q) ≤ H_{Ω′,κ′}(p, q).
Figure 7. Guaranteed lower and upper bounds for the Siegel–Klein distance by considering nested open matrix balls.

Share and Cite

Nielsen, F. The Siegel–Klein Disk: Hilbert Geometry of the Siegel Disk Domain. Entropy 2020, 22, 1019. https://doi.org/10.3390/e22091019
