1. Introduction
An important task in signal processing is the decomposition of a multivariate signal for the analysis of measured or simulated data, leading to the possible detection of the relevant subspace or the sources of the signal. Recently, a new method—dynamical component analysis (DyCA)—based on modeling the signal via two coupled systems of ordinary differential equations (ODEs) was introduced. One system is governed by time-invariant linear dynamics, whereas the second one is defined by an unknown non-linear vector field, assumed to be smooth. Its derivation and its features have been presented in depth (see [1,2]). The presented algorithm is nearly as simple as principal component analysis (PCA) or certain independent component analysis (ICA) approaches. The results obtained via DyCA, however, yield deeper insight into the underlying dynamics of the data. Moreover, as demonstrated by several examples in [2], typically neither ICA nor PCA approaches are able to capture the linear/non-linear character of the underlying dynamics.
The present work, in particular, is partially based on two conference papers [3,4]. Moreover, our objective is to reformulate the original cost function approach for DyCA—formerly leading to a generalized eigenvalue or, more generally, an invariant eigenspace problem—into an inverse-problem-type formulation, which allows for the recovery of the state and mixing matrices from high-dimensional matrix-valued time series.
This paper is organized as follows. First, the general problem is briefly reviewed; the cost function is discussed in detail; and, in particular, the critical points are analyzed. Second, we formulate an optimization-free algorithm, mainly based on solving coupled singular-value decompositions. Finally, we present numerical experiments to support our approach.
2. Problem Formulation
Consider a signal Q and its derivative with respect to time. Assume that Q and its derivative are of the form of Equation (1), where W is a constant mixing matrix of given rank and the columns of X are samples of a low-dimensional trajectory fulfilling the ODE of Equation (2). Here, A is some constant matrix governing the linear part of the dynamics, and the remaining part of the vector field is an unknown smooth function. Under these assumptions, we formulate the problem that will be addressed in the sequel.
Problem 1 (DyCA).
Given a signal Q and its derivative with respect to time, find estimates, in a least-squares sense, for W, A, and X according to the above assumptions.
With an appropriate definition, we obtain Equation (3) via Equations (1) and (2). We will propose a method by which to solve Problem 1, assuming exact data.
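Since the displays of Equations (1) and (2) are not reproduced here, the following minimal sketch illustrates the assumed data model; the names W, X, A, and Q mirror the text, the low-dimensional dynamics is a plain linear oscillator chosen purely for illustration, and the non-linear part of the vector field is omitted for brevity:

```python
import numpy as np

rng = np.random.default_rng(0)

# Low-dimensional trajectory: samples of a harmonic oscillator xdot = A x,
# integrated with explicit Euler (purely illustrative dynamics).
A = np.array([[0.0, 1.0],
              [-1.0, 0.0]])
dt, T = 0.01, 1000
X = np.empty((2, T))
X[:, 0] = [1.0, 0.0]
for k in range(T - 1):
    X[:, k + 1] = X[:, k] + dt * (A @ X[:, k])
Xdot = A @ X                      # exact derivatives of the sampled points

# Mixing into a high-dimensional signal by a constant full-rank matrix W:
# Q = W X and, consequently, the derivative data share the same subspace.
W = rng.standard_normal((10, 2))
Q, Qdot = W @ X, W @ Xdot

print(np.linalg.matrix_rank(Q))   # the signal spans a two-dimensional subspace
```

Detecting this low-dimensional subspace and estimating W and A from Q and its derivative alone is exactly the task of Problem 1.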
3. Cost Function
We approach the DyCA problem by minimizing a suitable cost function. Similar to [5], we fit the part of the data corresponding to the linear part of the ODE by minimizing the cost of Equation (4). Here, X and its derivative depend implicitly on W.
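The display of Equation (4) is not reproduced above. A least-squares cost of the described type, written here purely illustratively in assumed notation (the cost of Equation (12) later expresses its arguments via SVD factors), could read:

```latex
f(W, A) \;=\; \bigl\|\, \dot{X}(W) - A\, X(W) \,\bigr\|_F^{2},
```

where the Frobenius norm measures the misfit of the linear part of the ODE on the demixed data.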
To derive a more explicit expression for Equation (4), we rewrite Equation (3) by considering thin singular-value decompositions (SVDs) of Q and its derivative, respectively. In more detail, let Equation (5) be a thin SVD of Q, with orthonormal factors and a diagonal matrix of singular values. Analogously, let Equation (6) be a thin SVD of the derivative data. As usual (cf., e.g., [4]), the Stiefel manifold, i.e., the differentiable submanifold of the vector space of rectangular matrices consisting of those matrices with orthonormal columns, is used to describe the orthonormal SVD factors. Exploiting their orthonormality, we obtain explicit expressions via Equations (5) and (6).
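For completeness, the standard definition of the Stiefel manifold referred to above can be stated as follows (the notation St(k, n) is an assumption of this sketch; cf. [4]):

```latex
\mathrm{St}(k,n) \;=\; \bigl\{\, M \in \mathbb{R}^{n \times k} \;:\; M^{\top} M = I_{k} \,\bigr\},
```

i.e., the set of n × k matrices with orthonormal columns, which is a differentiable submanifold of the vector space of all n × k matrices.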
Furthermore, we introduce suitable shorthand notation. Via the assumptions imposed in the formulation of the DyCA problem, the relevant matrix is invertible. Hence, solving Equation (5), as well as Equation (6), for X and its derivative, respectively, and putting the result into Equation (4) yields the smooth cost of Equation (12), where the involved quantities are given via Equation (10).
Remark 1. Essentially, the cost function of Equation (12) considered here is a reformulation of the one considered earlier (see [2,5] and several follow-up papers, in particular, e.g., [3,6]). There is, however, an important difference: the reformulation here takes, in some sense, the inverse-problem character explicitly into consideration.

Remark 2. Strictly speaking, the cost f defined in Equation (12) is not defined on the whole space but only on the subset characterized in Equation (13) in terms of certain continuous functions. Note that this subset is open by the second equality in Equation (13) and the continuity of these functions. Thus, the domain of f is an open subset.

Notation 1. From now on, if not indicated otherwise, the domain of f is the one characterized in Remark 2.
In the sequel, through abuse of notation, we sometimes write instead of .
4. Analysis of the Cost
4.1. Derivatives
Obviously, f is a smooth function. To obtain candidates for points where f attains a minimum, we search for critical points of f.
As a preparation, to compute the derivative of f, we recall the following well-known lemma.
Lemma 1. The derivative of the map under consideration, evaluated at a given point in a given direction, is given by the displayed formula.

Lemma 2. Let f be defined via Equation (12), and introduce suitable shorthand notation. Then, the derivative of f with respect to the first argument is given by Equation (18), and the derivative of f with respect to the second argument reads as in Equation (19).

Proof. Expanding the cost and using Lemma 1, we obtain Equations (18) and (19) via a tedious but straightforward calculation. □
Using Lemma 2, we search for critical points of f. Obviously, a point is a critical point of f iff the two conditions of Equations (20) and (21) hold for all admissible directions. Via Equation (18), the first condition holds for all admissible directions iff Equation (22) is satisfied. Clearly, Equation (22) is equivalent to Equation (23). Similarly, we obtain via Equation (19) that the second condition is fulfilled for all admissible directions iff Equation (24) holds. Because the matrices involved are invertible, Equation (24) is equivalent to Equation (25). Moreover, Equation (25) implies Equation (26). Plugging Equation (26) into Equation (23) yields Equation (27), which is equivalent to Equation (28). Multiplying Equation (28) from the left, as well as from the right, by the corresponding orthonormal factors and using their orthonormality yields Equation (29). Conversely, assume that Equation (29) is satisfied; then, Equation (28) holds. Thus, Equation (28) is equivalent to Equation (29).
The above discussion is summarized in the next theorem.
Theorem 1. Let the above assumptions hold. Then, a point is a critical point of f, defined in Equation (12), iff the two equalities of Equations (30) and (31) hold. Here, for convenience, we used the shorthand notation introduced above.

4.2. Critical Points
In this section, the critical points of the DyCA cost are determined by using the characterization of Theorem 1; i.e., we solve Equation (32) for the unknown factor. To this end, we define the quantities of Equation (33); then, Equation (32) is equivalent to Equation (34). Partitioning the matrices involved into suitable blocks, Equation (34) yields Equation (35); i.e., Equation (34) holds iff Equation (36) is fulfilled. Next, partition further as in Equation (37) and consider Equation (38). Clearly, through Equation (38), the first two conditions are fulfilled iff Equation (39) is satisfied. From the equivalence of the remaining conditions, we obtain the final characterization. Thus, Equation (32) admits a solution if the corresponding rank condition is met.
4.3. Construction of a Critical Point
Assuming the rank condition holds, we construct a solution of Equation (32). Let an SVD of U be given, with orthonormal factors and diagonal matrices of singular values fulfilling the stated properties. Next, define a candidate from these SVD factors. Then, the required identity is satisfied because of Equation (42). Now, set the remaining factor as in Equation (43). Then, the constructed point is a solution of Equation (32) via Equation (43) combined with Equation (39).
Remark 3.
Equation (32) has a solution iff the rank condition above is satisfied. In this case, let the solution constructed above be given; then, every other solution of Equation (32) is obtained from the constructed one via a suitable parametrization.
4.4. Recovering Mixing Matrix W and State Matrix A
Next, we show how W can be recovered, assuming a solution of Equation (32) is given. Recall that the data fulfill the identity of Equation (46). Thus, given the solution, the stated matrix is a solution of Equation (46), as is verified by direct computation.

Remark 4. W is not unique, since the underlying solution of Equation (32) is not unique; moreover, suitable modifications of W also satisfy the defining identity.

Once W is determined, we also obtain A via Equation (31), namely, by the formula stated there, where the involved quantities are given according to Equation (10).
5. Algorithm
The analysis of the cost function above leads to the following Algorithm 1 for solving Problem 1.
Algorithm 1 DyCA
Input: the signal Q and its derivative with respect to time.
1. Compute thin SVDs of Q and its derivative as in Equations (5) and (6).
2. Form the matrix U from the SVD factors.
3. Compute an SVD of U.
4. Construct a solution of Equation (32) as in Section 4.3.
5. Define the mixing matrix W as in Section 4.4.
6. Define the state matrix A via Equation (31).
Output: estimates of W and A.
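Since the display content of Algorithm 1 is not reproduced above, the following sketch illustrates only its unambiguous first ingredient—the thin SVDs of the data matrices—on synthetic exact data; all variable names are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic rank-2 data: a two-dimensional trajectory mixed into 8 channels.
W_true = rng.standard_normal((8, 2))
t = np.linspace(0.0, 10.0, 500)
X = np.vstack([np.sin(t), np.cos(t)])        # samples of the trajectory
Xdot = np.vstack([np.cos(t), -np.sin(t)])    # exact derivatives
Q, Qdot = W_true @ X, W_true @ Xdot

# First step of Algorithm 1: thin SVDs of Q and its derivative
# (full_matrices=False yields the thin factors).
U1, s1, V1t = np.linalg.svd(Q, full_matrices=False)
U2, s2, V2t = np.linalg.svd(Qdot, full_matrices=False)

# For exact rank-2 data, only two singular values are numerically nonzero,
# so the relevant subspace is spanned by the first two left singular vectors.
r = int(np.sum(s1 > 1e-8 * s1[0]))

# The span of U1[:, :2] coincides with the span of W_true (equal projectors).
P_svd = U1[:, :2] @ U1[:, :2].T
P_W = W_true @ np.linalg.pinv(W_true)
print(r, np.allclose(P_svd, P_W))
```

The subsequent steps of Algorithm 1 operate on these thin factors; their exact formulas are given in the displays of Sections 3 and 4.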
6. Applications
We now apply the proposed method to the Rössler attractor and the Lorenz system.
6.1. Rössler Attractor
We consider the Rössler attractor introduced in [7]. Consider the ODE of Equation (50), with the usual state variables and parameters. Accordingly, by a suitable choice of the matrices involved, we rewrite Equation (50) as Equation (51) or, equivalently, Equation (52). Thus, Equation (50) is of the form of Equation (2). Hence, we may apply Algorithm 1 to solve Problem 1 if the low-dimensional dynamics of the signal satisfies the ODE of Equation (50).
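For reference, the Rössler system introduced in [7] reads, in its standard form (the parameter values used in the experiment are not reproduced above):

```latex
\dot{x} = -y - z, \qquad
\dot{y} = x + a\,y, \qquad
\dot{z} = b + z\,(x - c).
```

Only the third equation contains a non-linear (quadratic) term, which matches the assumed splitting of the vector field into a time-invariant linear part and a smooth non-linear part in Equation (2).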
To illustrate the application of Algorithm 1, we perform a numerical experiment using MATLAB 2024a™. Using the notation from Problem 1, a three-dimensional signal is generated by integrating Equation (50) using the MATLAB function ode45. By evaluating the right-hand side of Equation (50) at the time steps, the derivative is computed. The mixing matrix W is generated by uniformly distributed random numbers in a fixed interval. We then define Q and its derivative accordingly and apply Algorithm 1 to the signal and its derivative.
Our results are illustrated in Figure 1, Figure 2 and Figure 3 below. Alongside the original trajectory, we also plot the reconstructed trajectory obtained via DyCA, as well as a reconstruction of the signal by means of a thin SVD of Q.
6.2. Lorenz System
We also apply DyCA to the Lorenz system, given by the ODE of Equation (55) with its standard parameters. Accordingly, by defining the matrices involved appropriately, we rewrite Equation (55) in the form of Equation (2). Hence, we may apply Algorithm 1 to solve Problem 1, where the low-dimensional dynamics of the signal satisfies the ODE of Equation (55).
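For reference, the Lorenz system reads, in its standard form with parameters σ, ρ, and β (the values used in the experiment are not reproduced above):

```latex
\dot{x} = \sigma\,(y - x), \qquad
\dot{y} = x\,(\rho - z) - y, \qquad
\dot{z} = x\,y - \beta\,z.
```

Here, the quadratic terms xz and xy constitute the non-linear part, so the system again fits the form of Equation (2).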
We illustrate this via another numerical experiment. Analogously to the Rössler system discussed above, we create a random mixing matrix W, and we generate the signal Q and its time derivative by integrating Equation (55) using the MATLAB function ode45. Similar to Figure 1, Figure 2 and Figure 3, we present the results obtained for the Lorenz system in Figure 4, Figure 5 and Figure 6 below.
7. Outlook and Discussion
In this paper, we have discussed a reformulation of the so-called DyCA problem, putting the original cost function approach into perspective with respect to an inverse problem formulation. It is certainly out of the scope of this paper to discuss more advanced techniques from the vast area of numerics for inverse problems; in particular, techniques acknowledging that one is ultimately interested in the inverse of the mixing matrix, which, in the presence of noise, is a possibly ill-posed problem. For results in this direction, we refer the reader to forthcoming papers including real-world data, e.g., analyses of EEG data. So far, however, we have shown that for the two examples (Lorenz and Rössler), where the data were generated artificially, our results are promising; in particular, for data corrupted only by a reasonable amount of noise, the algorithm works well.
Questions related to scalability, usability, or complexity in the above context can be addressed via the vast body of existing literature on singular-value-decomposition-based algorithms from the last 30 years, from both the numerical linear algebra community and the pertinent signal processing literature.
Author Contributions
Conceptualization, K.H., M.S. and C.U.; Writing–original draft, K.H., M.S. and C.U.; Writing–review & editing, K.H., M.S. and C.U. All authors have read and agreed to the published version of the manuscript.
Funding
This work has been supported by the German Federal Ministry of Education and Research (BMBF-Projekt, funding numbers: 05M20WWA and 05M20WBA Verbundprojekt 05M2020—DyCA).
Institutional Review Board Statement
Not applicable.
Data Availability Statement
No data available.
Conflicts of Interest
The authors declare no conflicts of interest.
Abbreviations
The following abbreviations are used in this manuscript:
DyCA | Dynamical component analysis |
ODE | Ordinary differential equation |
SVD | Singular-value decomposition |
References
1. Korn, K.; Seifert, B.; Uhl, C. Dynamical Component Analysis (DYCA) and Its Application on Epileptic EEG. In Proceedings of the ICASSP 2019—2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Brighton, UK, 12–17 May 2019; pp. 1100–1104.
2. Uhl, C.; Kern, M.; Warmuth, M.; Seifert, B. Subspace Detection and Blind Source Separation of Multivariate Signals by Dynamical Component Analysis (DyCA). IEEE Open J. Signal Process. 2020, 1, 230–241.
3. Romberger, P.; Warmuth, M.; Uhl, C.; Hüper, K. Dynamical Component Analysis: Matrix Case and Differential Geometric Point of View. In Proceedings of CONTROLO 2022, Caparica, Portugal, 6–8 July 2022; Brito Palma, L., Neves-Silva, R., Gomes, L., Eds.; Springer: Cham, Switzerland, 2022; pp. 385–394.
4. Schlarb, M.; Hüper, K. Optimization on Stiefel Manifolds. In Proceedings of CONTROLO 2022, Caparica, Portugal, 6–8 July 2022; Brito Palma, L., Neves-Silva, R., Gomes, L., Eds.; Springer: Cham, Switzerland, 2022; pp. 363–374.
5. Seifert, B.; Korn, K.; Hartmann, S.; Uhl, C. Dynamical Component Analysis (DYCA): Dimensionality Reduction for High-Dimensional Deterministic Time-Series. In Proceedings of the 2018 IEEE 28th International Workshop on Machine Learning for Signal Processing (MLSP), Aalborg, Denmark, 17–20 September 2018; pp. 1–6.
6. Paglia, C.; Stiehl, A.; Uhl, C. Identification of Low-Dimensional Nonlinear Dynamics from High-Dimensional Simulated and Real-World Data. In Proceedings of CONTROLO 2022, Caparica, Portugal, 6–8 July 2022; Brito Palma, L., Neves-Silva, R., Gomes, L., Eds.; Springer: Cham, Switzerland, 2022; pp. 205–213.
7. Rössler, O. An equation for continuous chaos. Phys. Lett. A 1976, 57, 397–398.
© 2024 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).