*2.4. Krylov Subspace Method*

As seen in Figure 2, modeling the thermal characteristics of an electronic structure leads to the construction of a system containing a large number of differential equations. The larger the system, the more accurate the results; however, the numerical solution of such a system can be very time-consuming and can require significant computational power. One way to deal with this problem is to reduce the order of the constructed system of equations.

The order reduction process can be based on the moment matching method [26]. This method operates on the moments, which are the negated coefficients of the Taylor series expansion of the system's transfer function *FT* around the point 0 [26,27]. The expansion has the following form [25]:

$$F\_T(l) = -\sum\_{i=0}^{\infty} m\_i \cdot l^i \tag{18}$$

where *mi* denotes the *i*th moment. For example, for the system of Equation (16), the moments are determined according to the following Equation:

$$m\_i = \mathcal{C}^T \left(\mathbf{A}^{-1} \cdot \mathbf{E}\right)^i \cdot \mathbf{A}^{-1} \cdot \mathbf{B}, \quad i \in \mathbb{N} \cup \{0\} \tag{19}$$
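For a small system, Equation (19) can be evaluated directly. The sketch below does so for hypothetical stand-in matrices (a real thermal model would have thousands of states, and the inverse-based products would never be formed explicitly):

```python
import numpy as np

# Hypothetical small stand-ins for the matrices E, A, B, C of Equation (16).
n = 4
rng = np.random.default_rng(0)
A = -np.diag(np.arange(1.0, n + 1))    # stable diagonal stand-in
E = np.eye(n)
B = rng.standard_normal((n, 1))
C = rng.standard_normal((n, 1))

def moment(i):
    """i-th moment m_i = C^T (A^{-1}·E)^i · A^{-1}·B from Equation (19)."""
    M = np.linalg.solve(A, E)          # A^{-1}·E without forming the inverse
    v = np.linalg.solve(A, B)          # A^{-1}·B
    return (C.T @ np.linalg.matrix_power(M, i) @ v).item()

moments = [moment(i) for i in range(4)]
```

Solving with `A` instead of inverting it is the usual numerically safer choice; the matrix power is kept only because the example is tiny.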

The main idea of model order reduction is to find a new system of equations with a significantly lower order than the original one. In this process, it is essential that both systems have the same transfer function *FT*, which ensures that the initial moments of the original and reduced systems are identical. However, due to the form of Equation (18), a direct numerical determination of such a solution is impossible. To resolve this problem, the Krylov subspace method can be used.

In linear algebra, the Krylov space (or subspace) *K* of order *r* for a given *n* × *n* square matrix **U** and an *n*-element vector **s** is the linear subspace of *R<sup>n</sup>* generated by the vectors **s**, **U**·**s**, **U**<sup>2</sup>·**s**, ..., **U**<sup>*r*−1</sup>·**s**. Thus, the following Equation holds [11,28,29]:

$$K\_r(\mathbf{U}, \mathbf{s}) = \text{span}\left\{ \mathbf{s}, \mathbf{U} \cdot \mathbf{s}, \mathbf{U}^2 \cdot \mathbf{s}, \dots, \mathbf{U}^{r-1} \cdot \mathbf{s} \right\} \tag{20}$$

In the case of the previously analyzed system of Equation (16), the matrix **U** is the same as **A**, while the vector **s** corresponds to *B*.

A Krylov subspace-based method is a numerical method for finding eigenvalues of large sparse matrices or for solving systems with a high number of linear equations. It relies only on matrix–vector multiplications and operates on the resulting vectors, without the need for additional operations on many matrices at the same time. Thus, starting from the vector **s**, the subsequent vectors **U**·**s**, **U**<sup>2</sup>·**s**, etc., are determined.
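How quickly the raw vectors become linearly dependent can be seen in a small numerical experiment. The matrix below is a hypothetical stand-in with only three distinct eigenvalues, so its minimal polynomial has degree 3 and every Krylov vector beyond the third is exactly a linear combination of its predecessors:

```python
import numpy as np

# Hypothetical stand-in: a diagonal U with only three distinct eigenvalues,
# so the Krylov vectors s, U·s, U²·s, ... span at most a 3-dimensional space.
n = 8
U = np.diag([1.0, 2.0, 3.0, 1.0, 2.0, 3.0, 1.0, 2.0])
s = np.ones(n)

vectors = [s]
for _ in range(n - 1):
    vectors.append(U @ vectors[-1])   # next raw Krylov vector U^k · s
K = np.column_stack(vectors)          # n vectors collected ...

rank = np.linalg.matrix_rank(K)       # ... but only 3 are linearly independent
```

Even when the eigenvalues are all distinct, the raw vectors align with the dominant eigenvector and lose independence in floating-point arithmetic, which is why an orthogonalization step is needed in practice.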

Since the consecutively determined vectors become linearly dependent relatively quickly, Krylov subspace-based methods require an additional orthogonalization scheme. The most common ones are the algorithms of Lanczos [30–34] and Arnoldi [35–41]. The latter was used in the considerations presented in this paper. It generates a set of orthonormal vectors determined using the modified Gram–Schmidt process (MGS) [42,43]. These vectors form the basis, Equation (20), of the Krylov subspace. Moreover, each vector of this basis, starting with **s**, constitutes a consecutive column of the so-called transfer matrices V and W, which are used to determine the coefficient matrices of the reduced systems of linear equations. The algorithm stops after generating the first zero vector, i.e., when the aggregated sum of the absolute values of all the vector's coordinates does not exceed the tolerance value of 0.0001. The number of the algorithm's iterations (i.e., the number of non-zero vectors) equals the reduced model order *r*. It is worth highlighting that the zero vectors are not included in the constructed V and W matrices.
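The iteration described above can be sketched as follows. The function below is a minimal Arnoldi-style loop with a modified Gram–Schmidt sweep and the 1-norm stopping test with tolerance 0.0001 quoted in the text; the matrix `U` and starting vector are illustrative stand-ins, not the paper's actual thermal model:

```python
import numpy as np

def arnoldi_mgs(U, s, r_max, tol=1e-4):
    """Build orthonormal Krylov basis vectors for (U, s); stop at the first
    vector whose aggregated sum of absolute coordinates falls below tol."""
    V = [s / np.linalg.norm(s)]
    for _ in range(r_max - 1):
        w = U @ V[-1]
        for v in V:                       # modified Gram-Schmidt sweep
            w = w - (v @ w) * v
        if np.sum(np.abs(w)) <= tol:      # first (near-)zero vector found
            break                         # zero vectors are not added to V
        V.append(w / np.linalg.norm(w))
    return np.column_stack(V)             # columns form the transfer matrix V

# Usage: with only three distinct eigenvalues the basis stops at order r = 3.
U = np.diag([1.0, 2.0, 3.0, 1.0, 2.0, 3.0])
V = arnoldi_mgs(U, np.ones(6), r_max=6)
```

The number of columns returned is the reduced model order *r*, mirroring the stopping rule described in the text.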

In the investigated case, the V and W matrices are identical. Moreover, the number of iterations of the considered algorithm determines the order of the Krylov subspace and, at the same time, the order of the newly generated, reduced system of equations. For example, the reduced version of the system of Equation (16) is as follows [25]:

$$\begin{cases} \ E\_{\mathbf{r}} \cdot \dot{\overline{T}}\_{\mathbf{r}}(t) = A\_{\mathbf{r}} \cdot \overline{T}\_{\mathbf{r}}(t) + B\_{\mathbf{r}} \cdot \boldsymbol{\mu}(t) \\ \qquad y(t) = \mathcal{C}\_{\mathbf{r}} \cdot \overline{T}\_{\mathbf{r}}(t) \end{cases} \quad t \in \mathbb{R}\_{+} \cup \{0\} \tag{21}$$

The system of Equation (21) is solved using backward differentiation formulas (BDFs) of variable order between 1 and 5. Previous research has shown that this is one of the most effective methods for solving these types of equation systems [3].
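Putting the pieces together, the sketch below builds a hypothetical reduced system by projecting stand-in matrices E, A, B, C onto an orthonormal basis V (a common Galerkin-type construction with W = V; the exact construction used in [25] may differ in detail) and integrates it with SciPy's `BDF` solver, which, like the solver named above, is a variable-order (1–5) backward differentiation scheme. All matrices and the unit-step input μ(t) ≡ 1 are illustrative assumptions:

```python
import numpy as np
from scipy.integrate import solve_ivp

n, r = 6, 3
rng = np.random.default_rng(1)

# Stand-in full-order matrices of a stable system in the style of Equation (16).
E = np.eye(n)
A = -np.diag(np.arange(1.0, n + 1))
B = rng.standard_normal((n, 1))
C = rng.standard_normal((n, 1))
V = np.linalg.qr(rng.standard_normal((n, r)))[0]   # orthonormal stand-in basis

# Galerkin-type projection (W = V) onto the r-dimensional subspace.
E_r = V.T @ E @ V
A_r = V.T @ A @ V
B_r = V.T @ B
C_r = C.T @ V                                      # 1 × r, so y = C_r · T_r

def rhs(t, T_r):
    # E_r · T_r'(t) = A_r · T_r(t) + B_r · μ(t), with unit-step input μ(t) ≡ 1.
    return np.linalg.solve(E_r, A_r @ T_r + B_r.ravel())

sol = solve_ivp(rhs, (0.0, 10.0), y0=np.zeros(r), method="BDF")
y = (C_r @ sol.y).ravel()                          # reduced-model output y(t)
```

The reduced system has only *r* = 3 states instead of *n* = 6, yet is integrated with exactly the same solver interface as the full model would be, which is where the computational savings come from.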
