Abstract
Explainable Artificial Intelligence (XAI) and acceptable artificial intelligence are active topics of research in machine learning. For critical applications, being able to prove, or at least to ensure with high probability, the correctness of algorithms is of utmost importance. In practice, however, few theoretical tools are known that can be used for this purpose. Using the Fisher Information Metric (FIM) on the output space yields interesting indicators in both the input and parameter spaces, but the underlying geometry is not yet fully understood. In this work, an approach based on the pullback bundle, a well-known construction for describing bundle morphisms, is introduced and applied to the encoder–decoder block. Under a constant rank hypothesis on the derivative of the network with respect to its inputs, a description of its behavior is obtained. Further generalization is gained through the introduction of the pullback generalized bundle, which takes into account the sensitivity with respect to the weights.
1. Introduction
Explainable Artificial Intelligence (XAI) is generally described as a collection of methods allowing humans to understand how an algorithm is able to learn from a database, reproduce and generalize. It is currently an active, multidisciplinary area of research [1,2] that relies on several theoretical or heuristic tools to identify salient features and indicators explaining the surprisingly good performance of machine learning algorithms, especially deep neural networks. From a statistical point of view, a neural network is nothing but a parameterized regression or classification model that can be described as a random variable whose probability distribution is known conditionally on external inputs and internal parameters [3]. Unfortunately, even if this approach seems the most natural one, it is not adapted to XAI, as no insight is gained into the learning and inference process. Furthermore, there seems to be a contradiction between the statistical procedure, which calls for models with the smallest possible number of free parameters, and the performance of deep learning, which relies on thousands to millions of weights. On the other hand, attempts have been made to design numerical [4] or visual [5] indicators aiming at producing a summary of salient features.
XAI is also related to acceptable AI, that is, proving, or at least ensuring with high probability, that the model will produce the intended result and is robust to perturbations, whether inherent to the data acquisition process or intentional. In both cases, it is mandatory to be able to perform a sensitivity analysis on a trained network. In [6], an approach based on geometry was taken and the need for a metric on the set of admissible perturbations was emphasized. The problem of so-called adversarial attacks is treated in several papers [7,8,9] where mitigating procedures are proposed. Adversarial attacks are a major concern for acceptable AI, especially in critical applications like autonomous vehicles or air traffic control. So far, most of the research effort has been dedicated to the design of such attacks, with the idea of incorporating the fooling inputs in the learning database in order to increase robustness. The reader can refer, for example, to Fast Gradient Sign methods [10], robust optimization methods [11] or DeepFool [12,13]. Unfortunately, while these approaches are relevant to acceptable AI, they do not provide XAI with usable tools. Furthermore, they rely on inputs in R^n, or more generally in a finite-dimensional Euclidean space, which is not always a valid hypothesis.
There is also the question of why learning from a high-dimensional data space is possible, and a possible answer is that data effectively lie on a low-dimensional manifold [14,15]. As a consequence, most of the directions in the input space will have a very small impact on the output, while only a small number of them, namely those that are tangent to the data manifold, are going to be of great influence [16]. The manifold hypothesis also justifies the introduction of the encoder–decoder architecture [17] that is of wide use in the fields of natural language processing [18] and time-series prediction [19]. The true underlying data manifold, if it exists, is most of the time not accessible, although some of its characteristics may be known and incorporated in the model. In particular, it may be subject to the action of a Lie group or possess extra geometric properties, like the existence of a symplectic structure. Specific networks have been designed to cope with such situations [20,21].
In a general setting, little is known about the data manifold and its geometric features, like the metric, the Levi-Civita connection and the curvature. However, Riemannian properties are the most important ones, as they dictate the behavior of the network under moves in the input space. Recalling the statistical approach invoked before, it makes sense to model the output of the network as a probability density parameterized by inputs and weights. Within this framework, there exists a well-defined Riemannian metric on the output space known as the Fisher Information Metric (FIM), originating from a second-order expansion of the Kullback–Leibler divergence. The importance of this metric has already been pointed out in several past works [22,23]. The FIM can be pulled back to the input space, yielding, in most cases, a degenerate metric that can nevertheless be exploited to better understand the effect of perturbations [16], or to the parameter space to improve gradient-based learning algorithms [24]. In this last case, however, things tend to be less natural than for the input space.
In this work, a unifying framework for studying the geometry of deep networks is introduced, allowing a description of encoder–decoder blocks from the FIM perspective. The pullback bundle is a key ingredient in our approach.
In the sequel, features and outputs are random variables, thus characterized by their distribution functions, or their densities in the absolutely continuous case. Within this framework, a neural network is a random variable:
where is an underlying probability space and are, respectively, the input and weight measure spaces. Finally, Y is assumed to take its values in the output measure space. Most of the time, the network has a layered structure, so that the expression of can be factored as:
In many practical implementations, the weights W are deterministic, which is equivalent to saying that their probability distribution is a Dirac distribution. In this case, a neural network can be described as a parameterized family of random variables . A special case occurs when a single decoder is considered [25], that is, a measurable function:
where f is a smooth mapping, assumed in [25] to be an immersion; that is, for any x, has maximal rank. Conversely, one may consider an encoder:
and assume f to be a submersion. In this paper, the geometry of the complete encoder–decoder network
will be considered, as well as the case
The article is structured as follows: In Section 2, the Fisher information metric is introduced and some formulas, valid when the parameter space is a smooth manifold, are given. In Section 3, the pullback bundle is defined and applied to the encoder–decoder case. A numerical example is presented in Section 4. Finally, a conclusion is drawn in Section 5. The convention of summation on repeated indices applies throughout this manuscript.
2. The Fisher Information Metric
In this section, we recall some basic definitions and properties in information geometry. The foundational ideas can be traced back to [26], but the main developments occurred quite recently. The reader is referred to [27] for a comprehensive introduction. The exposition below assumes a rather high degree of regularity for the parameterized density families, which is nevertheless a common situation in practice, especially in the field of machine learning we are interested in.
2.1. Definitions and Properties
Definition 1.
A statistical model is a pair where is an oriented n-dimensional smooth manifold and is a parameterized family of probability densities on a measure space such that, putting :
- For μ-almost all , the mapping is smooth;
- For any , there exists an open neighborhood of θ and an integrable mapping such that, for any , ;
- The mapping is one-to-one;
- The support of does not depend on θ.
Assuming p never vanishes, one can define the score as:
For any :
Thus, using the fact that the assumptions made on the family allow swapping derivatives and integrals, it becomes:
where denotes the derivative with respect to the i-th component of in local coordinates. The score thus satisfies, by (8):
A simple computation shows that:
proving that:
Let g be the section of defined by:
Now, given any tangent vector :
with . Given the assumptions made on the family , g is thus a positive definite symmetric section of , hence a Riemannian metric on , called the Fisher Information Metric (FIM).
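As a concrete illustration, the FIM can be computed directly from the definition as the expectation of the squared score. The following sketch (purely illustrative, not part of the original derivation) checks the classical closed form for the one-parameter Bernoulli family:

```python
import numpy as np

def fisher_bernoulli(theta):
    # Score of the Bernoulli family p_theta(x) = theta^x (1 - theta)^(1 - x):
    # d/dtheta log p_theta(x) = x/theta - (1 - x)/(1 - theta).
    scores = np.array([-1.0 / (1.0 - theta), 1.0 / theta])  # scores at x = 0, 1
    probs = np.array([1.0 - theta, theta])
    # Fisher information = expectation of the squared score.
    return float(np.sum(probs * scores**2))

theta = 0.3
g = fisher_bernoulli(theta)
assert abs(g - 1.0 / (theta * (1.0 - theta))) < 1e-12  # classical closed form
```

The same expectation-of-squared-score computation applies verbatim to any family satisfying the regularity conditions of Definition 1.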
Remark 1.
The mapping embeds as a submanifold of the unit sphere in and the Fisher information metric is just the pullback of the ambient metric in with respect to . However, in machine learning applications, it is common to consider parameter spaces for which the one-to-one assumption on does not hold, so that g is only positive semidefinite. The study of the rank of the metric in this case is an important research topic.
It is quite fruitful to consider differential forms on parameterized by . The starting point is the definition of parameterized degree 0 forms.
Definition 2.
A parameterized 0-form is a mapping satisfying:
- For almost all , the mapping is smooth;
- For all , and all integers n, there exists a neighborhood and an integrable positive mapping such that for all and almost all :
Proposition 1.
Let X be a vector field on and f a parameterized 0-form in the previous sense. Then:
with
Proof.
is a degree 0 form on . If is the flow of X, then:
The assumptions made on f allow the swapping of derivatives and integrals, so:
□
Remark 2.
Applying Proposition 1 to the constant function yields , a result already known from Equation (9).
A parameterized degree k differential form on can be defined readily by requiring that the coefficients of the elementary forms be parameterized differential forms of degree 0.
Proposition 2.
Let α be a degree k parameterized differential form on . Then:
Proof.
It is enough to consider an elementary form. Then:
since:
the claim follows. □
Proceeding the same way as in Proposition 1, and using Cartan’s homotopy formula, we obtain:
Proposition 3.
Let X be a vector field on and α a degree k parameterized differential form. Then
When , Equation (21) reads as:
Since , it becomes:
Given two vector fields :
with g the Fisher metric. Thus:
Proposition 4.
Remark 3.
In coordinates, , and after taking the expectation:
This is a well-known result in the case.
Let ∇ be an affine connection on . The same computation as above yields:
Proposition 5.
Let X be a vector field on and α a degree k parameterized differential form. Then:
When , we recover , showing that while the parameterized Hessian depends on the connection ∇, this is not the case for its expectation. When , the Fisher metric is known to be twice the second-order term in the Taylor expansion of the Kullback–Leibler divergence, which can be proved easily by iterating derivatives. More generally, let ∇ be a connection and let be a smooth curve with . Recall that the Kullback–Leibler divergence between two probability densities is defined as:
The mapping:
is smooth, so Taylor's formula applies for t close enough to 0:
With:
If the curve is a geodesic for ∇, then:
and, by induction:
The first derivative is readily computed as:
The second derivative can be obtained using ∇ as:
Since g is symmetric, , thus (35) characterizes g as . Higher-order terms can be computed by repeatedly applying Proposition 5 and are expressed in terms of the quantities:
An interesting case occurs when the Fisher metric is non-degenerate and is its associated Levi-Civita connection. Normal coordinates at , denoted by , are given by taking an orthonormal basis, with respect to the Fisher metric, and letting [28] (p. 72):
Using the system of coordinates in place of , and noting that corresponds to the origin in normal coordinates, the KL divergence can be approximated at order 2 by:
where
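The second-order relation between the Kullback–Leibler divergence and the Fisher metric can be checked numerically on a simple family. The sketch below (an illustrative check on the Bernoulli family, not tied to the normal-coordinate computation above) verifies that the divergence between nearby parameters is half the metric times the squared displacement:

```python
import numpy as np

def kl_bernoulli(p, q):
    # Kullback-Leibler divergence between Bernoulli(p) and Bernoulli(q).
    return p * np.log(p / q) + (1 - p) * np.log((1 - p) / (1 - q))

theta, dt = 0.3, 1e-4
g = 1.0 / (theta * (1.0 - theta))      # Fisher metric of the Bernoulli family
kl = kl_bernoulli(theta, theta + dt)
# Second-order Taylor expansion: KL(p_theta || p_{theta+dt}) = (1/2) g dt^2 + O(dt^3)
assert abs(kl - 0.5 * g * dt**2) < 1e-9
```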
2.2. The Fisher Information in Machine Learning
In machine learning applications, when the output is a probability distribution, the Kullback–Leibler divergence is a natural measure of goodness-of-fit. Assuming that the database is given in the form of an iid sample of pairs , one can introduce the error function:
It may be approximated by:
where the notation stands for the tangent vector at P such that a geodesic (for ) with is such that . Taking the derivative with respect to W yields:
with being a tangent vector at .
In this form, having a critical point of the energy with respect to W is equivalent to the vanishing of a totally symmetric multilinear form on , the generalized tangent bundle of .
Finally, if is a smooth mapping, one can take the pullback of the Fisher metric on to obtain a semi-definite symmetric bilinear form on :
When is an embedding, is a Fisher metric on with as underlying densities. This is the case considered in [25].
As an example of a pullback metric, we are going to investigate the case of the von Mises–Fisher distribution (VMF) on with density:
where is the concentration parameter, is the location parameter and is the modified Bessel function of the first kind of order k. The Fisher metric in the embedding space can be deduced from the second moment since . If is assumed to be constant, then:
Although the expression for has been given in [29], we present here an alternative proof based on the fact that for any integer n, is a suspension of . If , then is a matrix whose entry is . By the rotation invariance of the VMF, can be selected as the first vector of an orthonormal basis, with respect to which x is expressed in components as . If we specialize the first component, then, if :
with and the Lebesgue measure on . If , then the integral vanishes by symmetry, otherwise:
with the area of the -sphere, which is given by the general relation:
Now, observing that [30]:
with B the beta function, the overall expression becomes, after using (49):
When , then the expression for the second moment becomes:
The integral is a difference of two terms, each of which can be simplified as before to yield:
This procedure can easily be applied to an arbitrary moment, each of the integrals involved being expressible using and the Beta function.
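The sphere-area relation used in the computation above is the classical formula stating that the unit (n-1)-sphere in R^n has area 2·pi^(n/2)/Gamma(n/2). A quick numerical sanity check of this relation, using only the standard library:

```python
import math

def sphere_area(n):
    # Surface area of the unit (n-1)-sphere S^{n-1} embedded in R^n:
    # A_{n-1} = 2 * pi^(n/2) / Gamma(n/2).
    return 2.0 * math.pi ** (n / 2) / math.gamma(n / 2)

assert abs(sphere_area(2) - 2 * math.pi) < 1e-12   # circle in R^2
assert abs(sphere_area(3) - 4 * math.pi) < 1e-12   # ordinary sphere in R^3
```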
Remark 4.
Since μ is not a parameterization of the unit sphere, the Fisher metric defined that way is related to an ambient metric in , defined only on the unit sphere.
An obvious embedded dimension submanifold of is obtained by taking a unit vector and computing the intersection of with a hyperplane defined by:
An elementary computation proves that the intersection locus is a sphere contained in :
Without loss of generality, can be taken as and the embedding can be written easily as:
The pullback metric is just the original one scaled by . The loss functions related to the VMF distribution are discussed in [31].
3. Pullback Bundles
In this section, a neural network with weights W is a mapping , where (resp. ) is the input (resp. output) manifold of dimension n (resp. m). Both manifolds are assumed to be smooth, as is the mapping . This last assumption is valid when the activation functions are smooth, which is the case for sigmoid functions, but not for the commonly used ReLU function. However, smooth approximations to the ReLU are easy to construct with an arbitrary degree of accuracy, so the framework introduced below can still be applied.
As mentioned in the introduction, is further assumed to be a statistical model in the sense of Definition 1, with Fisher metric . This setting is that of a neural network whose output is a random variable with conditional density in a family .
When the weights are kept fixed, the only free parameters are the inputs and the network is fully described by the mapping:
For ease of notation, the mapping will be abbreviated as . When the activation functions in the network are smooth, is a smooth mapping and its derivative will be denoted by . With this convention, the pullback metric of g by , denoted , is defined by:
Unless the network is a decoder, is generally degenerate and does not endow with a Riemannian structure, so an ambient metric h on is assumed to exist. The triple is called the data manifold of the network. The kernel of , denoted , is the distribution in consisting of vectors X such that is the zero mapping. At a point , the vectors in belonging to give directions in which the output of the network will not change up to first order. Figure 1 represents the case of a one-dimensional output space and a 2-sphere input space. Since the dimension of the output is less than that of the input, some moves in the data manifold will not induce any change at the output.
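To make the pullback construction concrete, the following sketch computes the pullback metric from a finite-difference Jacobian and exhibits its degeneracy. The network phi and the Euclidean stand-in for the output Fisher metric are hypothetical, chosen only to illustrate the rank drop:

```python
import numpy as np

def phi(x):
    # Hypothetical smooth "network" from R^3 to R^2 with frozen weights;
    # purely illustrative, not an architecture from the paper.
    return np.array([np.tanh(x[0] + x[1]), np.tanh(x[1] - x[2])])

def jacobian(f, x, eps=1e-6):
    # Central finite-difference Jacobian of f at x.
    fx = f(x)
    J = np.zeros((fx.size, x.size))
    for i in range(x.size):
        e = np.zeros(x.size)
        e[i] = eps
        J[:, i] = (f(x + e) - f(x - e)) / (2 * eps)
    return J

x = np.array([0.1, -0.2, 0.3])
g_out = np.eye(2)                  # stand-in for the output Fisher metric
J = jacobian(phi, x)
G = J.T @ g_out @ J                # pullback metric at x
# With 3 inputs and 2 outputs the pullback metric is necessarily degenerate;
# its kernel contains the directions invisible to the network at first order.
assert np.linalg.matrix_rank(G, tol=1e-6) == 2
```

In practice the Jacobian would be obtained by backpropagation rather than finite differences; the algebra of the pullback is unchanged.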
Figure 1.
Kernel of the pullback metric.
Unless the dimension of is constant, this distribution does not define a foliation. However, this is true locally in the neighborhood of points in such that has maximal rank. Finally, if is an r-vector bundle on , then its pullback by will be denoted in short by . We recall that if E has local charts:
and has local charts , then has local charts:
The pullback bundle enjoys a universal property that is in fact the main reason for introducing it in our context.
Proposition 6.
Let (resp. ) be a vector bundle on (resp. ). For any bundle morphism , there exists a unique bundle morphism such that the following diagram commutes:
where and
This proposition is a classical one and its proof can be found in many textbooks. The one we give below is very simple, using only local charts.
The above proof is constructive and thus gives a practical means of computation. For a network with fixed weights, e.g., a trained one, the derivative can be efficiently computed by backpropagation, so the bundle morphism:
has a practical meaning.
Introducing the pullback bundle gives the diagram:
The bundle mapping to is then the association:
The pullback bundle is thus a means of representing the action of the network on tangent vectors to the data manifold. As an example, the construction of adversarial attacks given in [32,33] can be revisited in this context, extending it to the general setting of networks with manifold inputs.
The general problem of building an adversarial attack is, informally, to find, for an input point in the data manifold, a direction in which a perturbation will have the greatest effect on the output, hopefully fooling the network. Following [33], we define:
Definition 3.
Let h be a Riemannian metric on the input space. An optimal adversarial attack at with budget is a solution to:
Using (38), this optimization program can be viewed as a local approximation to the one based on the Kullback–Leibler divergence:
Definition 4.
A Kullback–Leibler optimal adversarial attack at with budget is a solution to:
The metric g on can be pulled back to by letting:
Due to the special form of the criterion, the optimal point is on the boundary, so that finally, the optimal adversarial attack problem may be formulated as:
Definition 5.
An optimal adversarial attack at with budget is a solution to:
where stands for the unit sphere bundle with respect to the metric . Please note that, due to bilinearity, the problem can be solved for , and the optimal vector then scaled by the original budget. From standard linear algebra, if is the matrix of the bilinear form at x and the one of h, then one can find unitary matrices and diagonal matrices such that:
Any vector v in can be written as:
So that, finally, the original problem can be rewritten as:
which is solved readily by taking w to be the unit eigenvector of M associated with the largest eigenvalue. This is the solution found in [33] when
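The reduction just described can be sketched numerically. The version below whitens by the inverse square root of the ambient metric instead of the unitary-diagonal factorization of the text; the two are equivalent, and the matrices G and H are illustrative stand-ins for the pullback Fisher metric and the input metric:

```python
import numpy as np

def optimal_attack_direction(G, H):
    # Maximize v^T G v subject to v^T H v = 1 (G: pullback Fisher metric,
    # H: ambient input metric). Whitening by H^{-1/2} turns this into an
    # ordinary symmetric eigenproblem.
    w_h, V_h = np.linalg.eigh(H)
    H_inv_sqrt = V_h @ np.diag(w_h ** -0.5) @ V_h.T
    M = H_inv_sqrt @ G @ H_inv_sqrt
    w, V = np.linalg.eigh(M)
    v = H_inv_sqrt @ V[:, -1]        # eigenvector of the largest eigenvalue
    return v / np.sqrt(v @ H @ v)    # normalize to unit h-length

G = np.array([[2.0, 0.0], [0.0, 0.5]])
H = np.eye(2)
v = optimal_attack_direction(G, H)
assert abs(abs(v[0]) - 1.0) < 1e-10  # the most damaging direction is the first axis
```

Scaling the returned unit vector by the budget gives the attack itself, as noted above.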
In many cases, as the above example indicates, it is more convenient to work solely in the input space, thus justifying the introduction of the pullback bundle . From now on, we adopt this point of view.
Remark 5.
Please note that a section of is generally not related to a section of the form (63) in either or , due to the fact that may not be a monomorphism or an epimorphism. The next proposition gives conditions for the existence of global sections in associated with global sections in .
Proposition 7.
In the case of a decoding network, when is an embedding, there is a natural embedding of bundles such that the image of is . The pullback bundle then splits as:
where F has rank .
Note that, in this case, a section of the pullback bundle will not define a global section in , since some points of the output space may have no preimage by . However, by the extension lemma [34] (Lemma 5.34, p. 115), local (global if is closed) smooth vector fields on exist, extending it.
Proof.
If is an embedding, is a submanifold of and in an adapted chart, a vector field in can be written as , where the are the first n coordinate vector fields. It thus pulls back to a section of the same form in . Now, since is injective, is the image of a unique section in , hence the claim. □
Proposition 8.
If has constant rank r, then there exists a splitting and a bundle isomorphism that coincides with on the fibers.
Proof.
By Theorem 10.34, [34] (p. 266), is a subbundle of and a subbundle of . In local charts, the morphism gives rise to the decomposition:
with an isomorphism when restricted to . Passing to local sections yields the result. □
An important case is that of submersions, corresponding to encoders in machine learning. In this case, and establishes a bundle isomorphism between F and . The pullback of the Fisher–Rao metric g on gives rise to a metric on , but only to a degenerate metric on , which can, nevertheless, be quite well understood, as indicated below.
Definition 6.
On the input bundle , the symmetric tensor is defined using the splitting , by:
Proposition 9.
There exists a symmetric -tensor on , denoted by Θ, such that, for any tangent vectors :
Proof.
From standard linear algebra, there exists an adjoint to , defined by:
with, in local coordinates:
where N (resp. ) is the matrix associated with (resp. ) and, as usual, . The -tensor is then the product . □
Remark 6.
Θ is defined even if is not of full rank.
Remark 7.
All the relevant information concerning is encoded in . As a consequence, the geometry of an encoder is described by this tensor, hence also that of an encoder–decoder block.
Remark 8.
The tensor Θ has expression in a local orthonormal frame, hence is symmetric.
Definition 7.
Let ∇ be a connection on . Its dual connection is defined by the following equation:
where Z is any tangent vector in and are vector fields.
Definition 8.
A -tensor Θ is said to satisfy the gauge equation [35] if, for all tangent vectors Z:
Proposition 10.
Proof.
For any vector fields , and any tangent vector Z:
hence the claim. □
, being symmetric, admits a diagonal expression in a local orthonormal frame . When there exists a connection ∇ such that for any vector fields , parallel transport of the shows that the eigenvalues are constant and the eigenspaces preserved. The existence of a solution to the gauge equation thus greatly simplifies the study of an encoder, as a local splitting of the input manifold exists. The reader is referred to [35] for more details. In fact, the tensor is defined even for general networks, and the splitting may exist in this setting. This is the case when the rank of is locally constant, hence when it is maximal. A practical computation of can be obtained through the singular value decomposition, as Equation (74) indicates. A numerical integration of the distribution given by the first singular vectors gives rise to a local system of coordinates, defining in turn a connection satisfying the gauge equation (the existence of a global solution has a cohomological obstruction that is outside the scope of this paper).
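The adjoint construction of Proposition 9 can be sketched in matrix form. Below, N stands in for the matrix of the derivative, G for the output Fisher metric and H for the ambient input metric; all three are illustrative values, not quantities from the paper's experiment:

```python
import numpy as np

# Illustrative matrices: N plays the role of the network derivative,
# G the output Fisher metric, H the ambient input metric.
rng = np.random.default_rng(0)
N = rng.normal(size=(2, 4))                 # encoder-like: 4 inputs, 2 outputs
G = np.eye(2)
H = np.diag([1.0, 2.0, 1.0, 0.5])

# Theta = H^{-1} N^T G N: the h-adjoint of the derivative composed with it,
# so that h(Theta X, Y) equals the pullback metric applied to (X, Y).
Theta = np.linalg.solve(H, N.T @ G @ N)

# Theta is self-adjoint with respect to h: the matrix H @ Theta is symmetric,
lhs = H @ Theta
assert np.allclose(lhs, lhs.T)
# and its rank is bounded by the rank of the derivative (here at most 2),
# consistent with Theta being defined even in the rank-deficient case.
assert np.linalg.matrix_rank(Theta) <= 2
```

Diagonalizing Theta (e.g., through the SVD of the whitened derivative) then exposes the eigenvalues and eigenspaces discussed above.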
Finally, we introduce below a construction that takes into account the influence of the weights. As mentioned in Section 2, the derivative of the network with respect to its weights is adequately described as a 1-form, thus a section of . In fact, when the inner layers of the network are manifolds, the parameters are no longer real values and a suitable extension has to be introduced. One possible approach is to take a connection ∇ on the layer manifold . Considering a point , the exponential defines a local chart centered at . Given a point q in the injectivity domain of , one can obtain its coordinates as and the activation of a neuron with input q as , with a 1-form in . In this general setting, a manifold neuron will be defined by its input in an exponential chart, a 1-form corresponding to the weights in the Euclidean setting, and an activation function. Its free parameters are thus a couple . This particular vector bundle is known as the generalized tangent bundle.
Recalling (43), it is worthwhile studying the pullback of the generalized bundle . The generalized pullback bundle is then , whose local sections are generated by pullback local sections of the form:
Please note that the pullback can be performed on any layer, internal or input. Most of the previous derivations can be carried out on the generalized bundle, which must thus be considered as a general, yet tractable, framework for XAI.
4. A Numerical Example
In this example, the input data are the handwritten digits from the MNIST database. A neural network with the following architecture was coded in torch 2 and trained on the dataset:
- First layer: convolutional, kernel size of 3, nonlinearity sigmoid;
- Second layer: convolutional, kernel size of 3, nonlinearity sigmoid;
- Pooling layer;
- Two linear layers;
- Softmax layer.
The input metric is Euclidean; the output one is the Fisher metric of the multinomial distribution with ten classes, which is given by the matrix:
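For a categorical/multinomial family with m classes, the Fisher metric in the coordinates (p_1, ..., p_{m-1}) has the classical closed form g_ij = δ_ij/p_i + 1/p_m. A sketch constructing this matrix for the ten-class case used in the experiment (presented as the standard formula, under the assumption that this is the matrix referred to above):

```python
import numpy as np

def multinomial_fim(p):
    # Fisher metric of the categorical family with m classes, expressed in
    # the coordinates (p_1, ..., p_{m-1}), with p_m = 1 - sum(p_1..p_{m-1}):
    # g_ij = delta_ij / p_i + 1 / p_m.
    p = np.asarray(p, dtype=float)
    p_m = 1.0 - p.sum()
    return np.diag(1.0 / p) + 1.0 / p_m

p = np.full(9, 0.1)                 # ten equiprobable classes
g = multinomial_fim(p)
assert g.shape == (9, 9)            # 9-dimensional parameter space
assert abs(g[0, 0] - 20.0) < 1e-12  # diagonal: 1/0.1 + 1/0.1
assert abs(g[0, 1] - 10.0) < 1e-12  # off-diagonal: 1/0.1
```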
Since the output space has dimension 9, the pullback bundle also has dimension 9. At an input point x, a point in the pullback bundle is a couple with v a vector from at output point . On the other hand, the image of the input tangent bundle (simply a vector space in our case) has points with u an input vector. We are thus considering a bundle mapping whose right-hand term takes values in the pullback bundle, equipped with the output Fisher metric. The tensor is computed via the singular value decomposition, already implemented in torch. We selected the rotation rate of the singular vector associated with the largest singular value as an indicator of the complexity of the decision process in the neighborhood of an input point. The code was adapted from https://github.com/eliot-tron/CurvNetAttack (accessed on 12 September 2023). A detection of outliers from a sample of 1000 points was performed. A visual analysis reveals that they correspond to poorly drawn digits, as indicated in Figure 2, where the two digits with the highest curvature indicator are plotted:
Figure 2.
Samples with the highest rotation rate.
The first one is labeled “9”, which is quite obvious to a human operator, although the final stroke is vertical, while the second is labeled “7”, easily confused with a “1”.
5. Conclusions and Future Work
In this paper, several important constructions originating from information geometry were surveyed and some new ones introduced. The pullback bundle on a layer makes it possible to describe the behavior of a network with respect to the Fisher information metric, and a simple description can be obtained when a gauge equation is satisfied. One important feature of this construction is its ability to fit in a general framework where layers take their inputs on a manifold.
Future work involves a companion paper describing computational procedures and examples from real case studies. A study of the properties of the pullback generalized bundle is also in progress. Finally, the case of networks with non-constant rank must be considered. It is believed that they give rise to singular foliations.
Funding
This research received no external funding.
Institutional Review Board Statement
Not applicable.
Data Availability Statement
Not applicable.
Conflicts of Interest
The author declares no conflict of interest.
References
- Linardatos, P.; Papastefanopoulos, V.; Kotsiantis, S. Explainable AI: A Review of Machine Learning Interpretability Methods. Entropy 2021, 23, 18. [Google Scholar] [CrossRef]
- Chamola, V.; Hassija, V.; Sulthana, A.R.; Ghosh, D.; Dhingra, D.; Sikdar, B. A Review of Trustworthy and Explainable Artificial Intelligence (XAI). IEEE Access 2023, 11, 78994–79015. [Google Scholar] [CrossRef]
- Chang, D.T. Probabilistic Deep Learning with Probabilistic Neural Networks and Deep Probabilistic Models. arXiv 2021, arXiv:cs.LG/2106.00120. [Google Scholar]
- Simonyan, K.; Vedaldi, A.; Zisserman, A. Deep inside convolutional networks: Visualising image classification models and saliency maps. arXiv 2013, arXiv:1312.6034. [Google Scholar]
- Alicioglu, G.; Sun, B. A survey of visual analytics for Explainable Artificial Intelligence methods. Comput. Graph. 2022, 102, 502–520. [Google Scholar] [CrossRef]
- Fawzi, A.; Moosavi-Dezfooli, S.M.; Frossard, P. The Robustness of Deep Networks: A Geometrical Perspective. IEEE Signal Process. Mag. 2017, 34, 50–62. [Google Scholar] [CrossRef]
- Fawzi, A.; Fawzi, O.; Frossard, P. Analysis of classifiers’ robustness to adversarial perturbations. Mach. Learn. 2015, 107, 481–508. [Google Scholar] [CrossRef]
- Wong, E.; Kolter, Z. Provable Defenses against Adversarial Examples via the Convex Outer Adversarial Polytope. In Proceedings of the 35th International Conference on Machine Learning, Stockholm, Sweden, 10–15 July 2018; Dy, J., Krause, A., Eds.; Proceedings of Machine Learning Research. PMLR: London, UK, 2018; Volume 80, pp. 5286–5295. [Google Scholar]
- Raghunathan, A.; Steinhardt, J.; Liang, P. Certified Defenses against Adversarial Examples. arXiv 2018, arXiv:1801.09344. [Google Scholar]
- Goodfellow, I.J.; Shlens, J.; Szegedy, C. Explaining and Harnessing Adversarial Examples. arXiv 2014, arXiv:1412.6572. [Google Scholar]
- Madry, A.; Makelov, A.; Schmidt, L.; Tsipras, D.; Vladu, A. Towards Deep Learning Models Resistant to Adversarial Attacks. arXiv 2017, arXiv:1706.06083. [Google Scholar]
- Moosavi-Dezfooli, S.M.; Fawzi, A.; Frossard, P. DeepFool: A Simple and Accurate Method to Fool Deep Neural Networks. In Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016; pp. 2574–2582. [Google Scholar]
- Abdollahpourrostam, A.; Abroshan, M.; Moosavi-Dezfooli, S.M. Revisiting DeepFool: Generalization and improvement. arXiv 2023, arXiv:2303.12481. [Google Scholar]
- Fefferman, C.; Mitter, S.K.; Narayanan, H. Testing the Manifold Hypothesis. arXiv 2013, arXiv:1310.0425. [Google Scholar] [CrossRef]
- Narayanan, H.; Mitter, S.K. Sample Complexity of Testing the Manifold Hypothesis. In Proceedings of the NIPS, Vancouver, BC, Canada, 6–9 December 2010. [Google Scholar]
- Grementieri, L.; Fioresi, R. Model-centric Data Manifold: The Data Through the Eyes of the Model. arXiv 2021, arXiv:2104.13289. [Google Scholar] [CrossRef]
- Ye, J.C.; Sung, W.K. Understanding Geometry of Encoder-Decoder CNNs. arXiv 2019, arXiv:1901.07647. [Google Scholar]
- Zhang, Z.; Yu, W.; Zhu, C.; Jiang, M. A Unified Encoder-Decoder Framework with Entity Memory. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, Abu Dhabi, United Arab Emirates, 7–11 December 2022. [Google Scholar]
- Zhang, H.; Li, S.; Chen, Y.; Dai, J.; Yi, Y. A Novel Encoder-Decoder Model for Multivariate Time Series Forecasting. Comput. Intell. Neurosci. 2022, 2022, 5596676. [Google Scholar] [CrossRef]
- Ju, C.; Guan, C. Deep Optimal Transport for Domain Adaptation on SPD Manifolds. arXiv 2022, arXiv:2201.05745. [Google Scholar]
- Santos, S.; Ekal, M.; Ventura, R. Symplectic Momentum Neural Networks—Using Discrete Variational Mechanics as a prior in Deep Learning. In Proceedings of the Conference on Learning for Dynamics & Control, Stanford, CA, USA, 23–24 June 2022. [Google Scholar]
- Karakida, R.; Okada, M.; Amari, S. Adaptive Natural Gradient Learning Based on Riemannian Metric of Score Matching. 2016. Available online: https://openreview.net/pdf?id=lx9lNjDDvU2OVPy8CvGJ (accessed on 27 July 2023).
- Amari, S.; Karakida, R.; Oizumi, M. Fisher Information and Natural Gradient Learning of Random Deep Networks. In Proceedings of the International Conference on Artificial Intelligence and Statistics, Canary Islands, Spain, 9–11 April 2018. [Google Scholar]
- Karakida, R.; Akaho, S.; Amari, S. Universal statistics of Fisher information in deep neural networks: Mean field approach. J. Stat. Mech. Theory Exp. 2020, 2020, 124005. [Google Scholar] [CrossRef]
- Arvanitidis, G.; González-Duque, M.; Pouplin, A.; Kalatzis, D.; Hauberg, S. Pulling back information geometry. In Proceedings of the 25th International Conference on Artificial Intelligence and Statistics, Virtual, 28–30 March 2022; Camps-Valls, G., Ruiz, F.J.R., Valera, I., Eds.; Proceedings of Machine Learning Research. PMLR: London, UK, 2022; Volume 151, pp. 4872–4894. [Google Scholar]
- Rao, C.R. Information and the Accuracy Attainable in the Estimation of Statistical Parameters. In Breakthroughs in Statistics: Foundations and Basic Theory; Springer: New York, NY, USA, 1992; pp. 235–247. [Google Scholar] [CrossRef]
- Amari, S.; Nagaoka, H. Methods of Information Geometry; Fields Institute Communications, American Mathematical Society: Toronto, ON, Canada, 2000. [Google Scholar]
- Willmore, T. Riemannian Geometry; Oxford Science Publications, Oxford University Press: Oxford, UK, 1996. [Google Scholar]
- Kitagawa, T.; Rowley, J. von Mises-Fisher distributions and their statistical divergence. arXiv 2022, arXiv:2202.05192. [Google Scholar]
- Olver, F.W.J.; Olde Daalhuis, A.B.; Lozier, D.W.; Schneider, B.I.; Boisvert, R.F.; Clark, C.W.; Miller, B.R.; Saunders, B.V.; Cohl, H.S.; McClain, M.A. (Eds.) NIST Digital Library of Mathematical Functions; Release 1.1.10 of 2023-06-15. Available online: https://dlmf.nist.gov/ (accessed on 27 July 2023).
- Scott, T.R.; Gallagher, A.C.; Mozer, M.C. von Mises-Fisher Loss: An Exploration of Embedding Geometries for Supervised Learning. arXiv 2021, arXiv:cs.LG/2103.15718. [Google Scholar]
- Martin, J.; Elster, C. Inspecting adversarial examples using the fisher information. Neurocomputing 2020, 382, 80–86. [Google Scholar] [CrossRef]
- Zhao, C.; Fletcher, P.T.; Yu, M.; Peng, Y.; Zhang, G.; Shen, C. The adversarial attack and detection under the fisher information metric. In Proceedings of the AAAI Conference on Artificial Intelligence, Honolulu, HI, USA, 27 January–1 February 2019; Volume 33, pp. 5869–5876. [Google Scholar]
- Lee, J. Introduction to Smooth Manifolds; Graduate Texts in Mathematics; Springer: New York, NY, USA, 2012. [Google Scholar]
- Boyom, M.N. Foliations-Webs-Hessian Geometry-Information Geometry-Entropy and Cohomology. Entropy 2016, 18, 433. [Google Scholar] [CrossRef]
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2023 by the author. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).

