### 2.2.3. Multi-Machine Synchronous Systems

As mentioned in the previous sections, the eigenvalues of the state matrix play an important role in system stability. The real parts of the eigenvalues govern an exponential component of the response, while the imaginary parts govern a sinusoidal, oscillatory component. If the real part is negative, the exponential factor of the response decays to zero; as a result, the oscillatory component is damped out with it [31].

$$\mathbf{e}^{\mathbf{a}+\mathbf{i}\mathbf{b}} = \mathbf{e}^{\mathbf{a}}\mathbf{e}^{\mathbf{i}\mathbf{b}} = \mathbf{e}^{\mathbf{a}}(\cos(\mathbf{b}) + \mathbf{i}\sin(\mathbf{b})) \tag{26}$$
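The effect of the sign of the real part can be seen in a short numerical sketch of Equation (26). The eigenvalue values below are illustrative assumptions, not data from the paper:

```python
# Response contribution of a single eigenvalue a + ib, per Equation (26):
# e^{(a+ib)t} = e^{at} (cos(bt) + i sin(bt)).
import numpy as np

t = np.linspace(0.0, 10.0, 500)

for a, b in [(-0.5, 5.0),   # negative real part: damped oscillation
             ( 0.0, 5.0),   # zero real part: sustained oscillation
             ( 0.5, 5.0)]:  # positive real part: growing oscillation
    mode = np.exp(a * t) * (np.cos(b * t) + 1j * np.sin(b * t))
    print(f"a = {a:+.1f}: |mode| at t = 10 -> {abs(mode[-1]):.4f}")
```

Only the mode with a negative real part decays; the envelope $e^{at}$ dominates the long-term behavior regardless of the oscillation frequency b.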

However, if some of the eigenvalues have non-negative real parts, the corresponding responses do not decay over time: the response either grows without bound with increasing oscillations (if the real part is positive) or persists as a sustained oscillation (if the real part is zero). In that case, even a small disturbance is enough to upset the equilibrium of the system. As these explanations show, for the analysis of linear systems it is sufficient to know the location of the eigenvalues in the complex plane; in fact, knowing only the sign of the real part of each eigenvalue is enough. In the following sections, methods for finding the eigenvalues of large systems are examined. The linearized dynamics of each machine can be written as in Equation (27) below [32].

$$
\dot{\mathbf{x}} = \mathbf{A}\mathbf{x} \tag{27}
$$

$$\mathbf{A} = \begin{bmatrix} \mathbf{a}\_{11} & \mathbf{a}\_{12} & \mathbf{a}\_{13} & 0 & 0 & 0\\ \mathbf{a}\_{21} & 0 & 0 & 0 & 0 & 0\\ 0 & \mathbf{a}\_{32} & \mathbf{a}\_{33} & \mathbf{a}\_{34} & 0 & 0\\ 0 & \mathbf{a}\_{42} & \mathbf{a}\_{43} & \mathbf{a}\_{44} & 0 & 0\\ \mathbf{a}\_{51} & \mathbf{a}\_{52} & \mathbf{a}\_{53} & 0 & \mathbf{a}\_{55} & 0\\ \mathbf{a}\_{61} & \mathbf{a}\_{62} & \mathbf{a}\_{63} & 0 & \mathbf{a}\_{65} & \mathbf{a}\_{66} \end{bmatrix}, \quad \mathbf{x} = \begin{bmatrix} \Delta \omega \\ \Delta \delta \\ \Delta \psi\_{\rm fd} \\ \Delta \mathbf{v}\_{1} \\ \Delta \mathbf{v}\_{2} \\ \Delta \mathbf{v}\_{\rm s} \end{bmatrix} \tag{28}$$
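As a minimal illustration of Equations (27) and (28), the sketch below fills the sparsity pattern of the state matrix with random placeholder entries (not machine data) and checks the sign of the real parts of the eigenvalues:

```python
# Eigenvalue-based stability check for the state matrix of Equation (28).
# The nonzero pattern follows Equation (28); the values are placeholders.
import numpy as np

rng = np.random.default_rng(0)
A = np.zeros((6, 6))
pattern = [(0, 0), (0, 1), (0, 2), (1, 0),
           (2, 1), (2, 2), (2, 3),
           (3, 1), (3, 2), (3, 3),
           (4, 0), (4, 1), (4, 2), (4, 4),
           (5, 0), (5, 1), (5, 2), (5, 4), (5, 5)]
for i, j in pattern:
    A[i, j] = rng.normal()

eigvals = np.linalg.eigvals(A)
print("eigenvalues:", np.round(eigvals, 3))
print("small-signal stable:", bool(np.all(eigvals.real < 0)))
```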

When the machine models are combined through the network equations, the overall linearized system takes the form of Equation (29), so that the overall state matrix is as given in Equation (30), where $\mathbf{Y}\_{\rm N}$ is the network admittance matrix and $\mathbf{Y}\_{\rm D}$ collects the device admittances.

$$
\Delta \dot{\mathbf{x}} = \left\{ \mathbf{A}\_{\rm D} + \mathbf{B}\_{\rm D} (\mathbf{Y}\_{\rm N} + \mathbf{Y}\_{\rm D})^{-1} \mathbf{C}\_{\rm D} \right\} \Delta \mathbf{x} \tag{29}
$$

$$\mathbf{A} = \left\{ \mathbf{A}\_{\mathrm{D}} + \mathbf{B}\_{\mathrm{D}} (\mathbf{Y}\_{\mathrm{N}} + \mathbf{Y}\_{\mathrm{D}})^{-1} \mathbf{C}\_{\mathrm{D}} \right\} \tag{30}$$
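A hedged numerical sketch of Equation (30) is given below; the dimensions, the random matrices, and the identity used for $\mathbf{Y}\_{\mathrm{D}}$ are all illustrative assumptions, and a linear solve replaces the explicit inverse:

```python
# Assemble the overall state matrix A = A_D + B_D (Y_N + Y_D)^{-1} C_D
# of Equation (30) from placeholder blocks.
import numpy as np

rng = np.random.default_rng(1)
n_states, n_bus = 12, 4                       # assumed sizes
A_D = rng.normal(size=(n_states, n_states))   # device dynamics (placeholder)
B_D = rng.normal(size=(n_states, n_bus))
C_D = rng.normal(size=(n_bus, n_states))
Y_N = rng.normal(size=(n_bus, n_bus))         # network admittance (placeholder)
Y_D = np.eye(n_bus)                           # device admittances (placeholder)

# Solve (Y_N + Y_D) X = C_D instead of forming the inverse explicitly.
A = A_D + B_D @ np.linalg.solve(Y_N + Y_D, C_D)
print("largest real part of any eigenvalue:",
      np.max(np.linalg.eigvals(A).real))
```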

Thus, the eigenvalues of the state matrix above determine the stability of the system. In general, this matrix is large and sparse, and finding its eigenvalues is difficult [33].

### 2.2.4. Householder Method for Small-Signal Stability

Many eigenvalue methods are based on orthogonal similarity. If two matrices are similar, they have the same characteristic polynomial and hence the same eigenvalues, as in Equation (32). The QR algorithm rests on decomposing a matrix into the product of an orthogonal matrix Q and an upper-triangular matrix R; because Q is orthogonal, its inverse is simply its transpose, as in Equation (33). The main advantage of the method is obtained by iterating this transformation on the resulting matrix.

$$\mathbf{A} = \mathbf{P}^{-1} \mathbf{B} \mathbf{P} \to \mathbf{A} \sim \mathbf{B} \tag{32}$$

$$\mathbf{A} = \mathbf{Q}^{\mathrm{T}} \mathbf{B} \mathbf{Q} \to \mathbf{A} \sim \mathbf{B}, \quad \mathbf{Q}^{-1} = \mathbf{Q}^{\mathrm{T}} \tag{33}$$

$$\mathbf{A}\_0 = \mathbf{A} \tag{34}$$

$$\begin{array}{l} \mathbf{A}\_{\mathbf{k}} = \mathbf{Q}\_{\mathbf{k}} \mathbf{R}\_{\mathbf{k}} \;\Rightarrow\; \mathbf{A}\_{\mathbf{k}+1} = \mathbf{R}\_{\mathbf{k}} \mathbf{Q}\_{\mathbf{k}} = \mathbf{Q}\_{\mathbf{k}}^{\mathrm{T}} (\mathbf{Q}\_{\mathbf{k}} \mathbf{R}\_{\mathbf{k}}) \mathbf{Q}\_{\mathbf{k}} = \mathbf{Q}\_{\mathbf{k}}^{\mathrm{T}} \mathbf{A}\_{\mathbf{k}} \mathbf{Q}\_{\mathbf{k}} \;\Rightarrow\; \mathbf{A}\_{\mathbf{k}} \sim \mathbf{A}\_{\mathbf{k}+1} \\ \mathbf{A}\_{\mathbf{k}+1} = \mathbf{Q}\_{\mathbf{k}}^{\mathrm{T}} \dots \mathbf{Q}\_{1}^{\mathrm{T}} \mathbf{Q}\_{0}^{\mathrm{T}} \, \mathbf{A} \, \mathbf{Q}\_{0} \mathbf{Q}\_{1} \dots \mathbf{Q}\_{\mathbf{k}} \to \mathbf{V} \boldsymbol{\Lambda} \mathbf{V}^{-1} \end{array} \tag{35}$$
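The iteration in Equation (35) can be sketched directly with a library QR factorization. The symmetric test matrix below is an arbitrary assumption chosen so that all eigenvalues are real:

```python
# QR iteration: A_{k+1} = R_k Q_k = Q_k^T A_k Q_k, per Equation (35).
import numpy as np

A = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])

Ak = A.copy()
for _ in range(100):          # fixed iteration count; no convergence test
    Q, R = np.linalg.qr(Ak)
    Ak = R @ Q                # orthogonal similarity transform of A_k

print("QR-iteration diagonal:", np.round(np.diag(Ak), 6))
print("reference eigenvalues:", np.round(np.sort(np.linalg.eigvals(A))[::-1], 6))
```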

If u is a unit vector, the Householder matrix $\mathbf{H}\_{\mathbf{u}}$ is defined as in Equation (36):

$$\mathbf{H}\_{\mathbf{u}} = \mathbf{I} - 2\mathbf{u}\mathbf{u}^{\mathrm{T}}, \quad \mathbf{u}^{\mathrm{T}}\mathbf{u} = 1 \tag{36}$$

$$\begin{array}{rcl} \mathbf{H}\_{\mathbf{u}}^{\mathrm{T}} &= \left(\mathbf{I} - 2\mathbf{u}\mathbf{u}^{\mathrm{T}}\right)^{\mathrm{T}} \\ &= \left(\mathbf{I} - 2\mathbf{u}\mathbf{u}^{\mathrm{T}}\right) \\ &= \mathbf{H}\_{\mathbf{u}} \end{array} \tag{37}$$

$$\begin{array}{ll} \mathbf{H}\_{\mathbf{u}}^{2} &= \left(\mathbf{I} - 2\mathbf{u}\mathbf{u}^{\mathsf{T}}\right)\left(\mathbf{I} - 2\mathbf{u}\mathbf{u}^{\mathsf{T}}\right) \\ &= \mathbf{I} - 2\mathbf{u}\mathbf{u}^{\mathsf{T}} - 2\mathbf{u}\mathbf{u}^{\mathsf{T}} + 4\mathbf{u}(\mathbf{u}^{\mathsf{T}}\mathbf{u})\mathbf{u}^{\mathsf{T}} \\ &= \mathbf{I} \end{array} \tag{38}$$
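The properties in Equations (37) and (38) are easy to verify numerically; the vector used below is an arbitrary example:

```python
# Check that H_u = I - 2 u u^T is symmetric and involutory (Equations (37)-(38)).
import numpy as np

u = np.array([3.0, 1.0, 2.0])
u = u / np.linalg.norm(u)     # enforce u^T u = 1, as in Equation (36)

H = np.eye(3) - 2.0 * np.outer(u, u)

print("H^T = H:", np.allclose(H.T, H))
print("H^2 = I:", np.allclose(H @ H, np.eye(3)))
```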

If A is an n × n matrix, then B, the Householder transformation of A, is given by Equation (39):

$$\mathbf{B} = \mathbf{H}\_{\mathbf{u}} \mathbf{A} \mathbf{H}\_{\mathbf{u}} \tag{39}$$

If the Householder matrix generated at the j-th step is denoted by $\mathbf{H}\_{\mathbf{u}\_j}$, applying these transformations successively yields Equations (40) and (41):

$$\mathbf{R} = \mathbf{H}\_{\mathbf{u}\_n} (\mathbf{H}\_{\mathbf{u}\_{n-1}} \dots (\mathbf{H}\_{\mathbf{u}\_1} \mathbf{A})) = \mathbf{Q}^{\mathrm{T}} \mathbf{A} \Rightarrow \mathbf{A} = \mathbf{Q} \mathbf{R} \tag{40}$$

$$\mathbf{Q} = \mathbf{H}\_{\mathbf{u}\_{1}} \mathbf{H}\_{\mathbf{u}\_{2}} \dots \mathbf{H}\_{\mathbf{u}\_{\mathrm{n}}} \tag{41}$$
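A compact sketch of the factorization in Equations (40) and (41) follows, written for clarity rather than efficiency. The sign choice in the reflector is a standard guard against cancellation and is an implementation assumption, not part of the paper:

```python
# Householder QR: zero the subdiagonal of each column with a reflection
# H_{u_j}; accumulating the reflections gives Q with A = Q R.
import numpy as np

def householder_qr(A):
    m, n = A.shape
    R = A.astype(float).copy()
    Q = np.eye(m)
    for j in range(n):
        x = R[j:, j]
        v = x.copy()
        v[0] += np.copysign(np.linalg.norm(x), x[0])  # avoid cancellation
        if np.linalg.norm(v) == 0.0:                  # breakdown: u undefined
            continue
        u = v / np.linalg.norm(v)
        H = np.eye(m)
        H[j:, j:] -= 2.0 * np.outer(u, u)             # embedded H_{u_j}
        R = H @ R                                     # apply on the left
        Q = Q @ H                                     # Q = H_{u_1} ... H_{u_n}
    return Q, R

A = np.array([[2.0, 1.0], [1.0, 3.0], [0.0, 1.0]])
Q, R = householder_qr(A)
print("A = QR:", np.allclose(A, Q @ R))
```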

Thus, the matrix A is decomposed into the product QR. If the normalizing scalar used in constructing $\mathbf{u}$, of the form $2\eta(\mathbf{a}\_{ij} - \eta)$, is zero, $\mathbf{u}$ cannot be defined. In that case the matrix A cannot be made orthogonally similar to an upper-triangular matrix by the QR algorithm alone. Instead, the matrix can first be made orthogonally similar to an upper Hessenberg matrix, and the eigenvalues of the Hessenberg matrix are then found in place of the eigenvalues of A directly. There are two well-known theorems on this topic.
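The Hessenberg reduction mentioned above can be sketched with `scipy.linalg.hessenberg`, used here as a stand-in for the paper's procedure; it confirms that the reduction is an orthogonal similarity and therefore preserves the eigenvalues:

```python
# Reduce A to upper Hessenberg form H = Q^T A Q and compare eigenvalues.
import numpy as np
from scipy.linalg import hessenberg

rng = np.random.default_rng(2)
A = rng.normal(size=(5, 5))

H, Q = hessenberg(A, calc_q=True)
print("orthogonal similarity:", np.allclose(Q @ H @ Q.T, A))
print("eig(A):", np.round(np.sort_complex(np.linalg.eigvals(A)), 4))
print("eig(H):", np.round(np.sort_complex(np.linalg.eigvals(H)), 4))
```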
