*Article* **Design and Implementation of Novel Efficient Full Adder/Subtractor Circuits Based on Quantum-Dot Cellular Automata Technology**

**Mohsen Vahabi 1, Pavel Lyakhov 2,\* and Ali Newaz Bahar <sup>3</sup>**


**Abstract:** One of the emerging technologies at the nanoscale is Quantum-Dot Cellular Automata (QCA), a potential alternative to conventional CMOS technology due to its high speed, low power consumption, low latency, and possible implementation at the atomic and molecular levels. Adders are among the most basic digital computing circuits and one of the main building blocks of VLSI systems, such as various microprocessors and processors; accordingly, many research studies have focused on efficient digital computing circuits. The design of a Full Adder/Subtractor (FA/S), a composite computing circuit performing both addition and subtraction, is of particular importance. This paper presents three new Full Adder/Subtractor circuits with the lowest number of cells, the smallest area, and the lowest latency among comparable designs, implemented as coplanar (single-layer) circuits, as shown by comparing the obtained results with those of the best previous works on this topic.

**Keywords:** Quantum-Dot Cellular Automata (QCA); Full Adder/Subtractor (FA/S); coplanar

#### **1. Introduction**

The QCA technology, with its unique features such as minimal dimensions, high speed, very low latency, low power consumption, and high operating frequency [1], has attracted the attention of many researchers and scientists as a new method of communication and computation, and it has introduced significant novelties in the fields of computer science and logic circuits. Adders are among the most fundamental computational circuits of digital logic and one of the main building blocks of many VLSI systems, such as various microprocessors and processors; new designs therefore aim at optimizing these blocks in step with the development of this technology. A Full Adder/Subtractor with a simple structure and low power consumption can significantly simplify digital circuits: it is a composite computing circuit that performs both the addition and the subtraction operations. One of the problems in creating such hybrid circuits is the appropriate arrangement of wire crossovers to reduce costs.

Due to their high cost and increased circuit complexity, multilayer crossovers are not desirable in the implementation of QCA circuits [2,3]. To achieve coplanar crossovers, rotating the QCA cells was suggested; however, the coexistence of two types of QCA cells causes problems such as low stability and high implementation cost, so designs including this type of cell are also undesirable [2,4]. The best method for designing QCA circuits is based on the use of 90-degree cells with non-adjacent clock

**Citation:** Vahabi, M.; Lyakhov, P.; Bahar, A.N. Design and Implementation of Novel Efficient Full Adder/Subtractor Circuits Based on Quantum-Dot Cellular Automata Technology. *Appl. Sci.* **2021**, *11*, 8717. https://doi.org/10.3390/app11188717

Academic Editor: Vladimir M. Fomin

Received: 19 July 2021 Accepted: 15 September 2021 Published: 18 September 2021

**Publisher's Note:** MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

**Copyright:** © 2021 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https:// creativecommons.org/licenses/by/ 4.0/).

phases (four clock phases) to develop the crossover wires in a single layer [2,5]. Therefore, we used these two types of crossover in this research.

For this reason, the proposed designs, owing to their coplanar structure and the fact that they require no additional layers, achieve a reduced cell count, occupied area, and delay. The remainder of this article is organized as follows. Section 2 (Background) provides an overview of QCA and the previous literature. Section 3 (The Proposed Circuits) presents the proposed Full Adder/Subtractor architectures. Section 4 (Guidelines of Performance Evaluation) compares the proposed designs with previous architectures. Section 5 (Conclusions) presents our conclusions.

#### **2. Background**

#### *2.1. The Basis of Quantum-Dot Cellular Automata (QCA) Technology*

This technology is based on QCA cells; a single QCA cell can represent one logical bit within a nanoscale footprint. A QCA cell contains two electrons, and, owing to the Coulombic repulsion between them, two logical values, "0" and "1", are possible. QCA cells are square, as shown in Figure 1. Each cell contains four quantum dots (holes), and the two electrons trapped inside can tunnel freely between them. Placing two electrons in four dots allows six different arrangements, but the Coulombic repulsion between the electrons makes most of them energetically unfavorable: to minimize the repulsion, the electrons settle as far apart as possible, i.e., in diagonally opposite dots. This yields two stable diagonal configurations and hence two distinguishable polarization states, from which the two logical values can be obtained. We attribute these two polarizations, +1 and −1, to the logical values "1" and "0", respectively, as shown for the square cells in Figure 1 [6–8].

**Figure 1.** (**a**) Normal QCA cells' structure, (**b**) Normal QCA wire's structure, (**c**) Rotated QCA cells' structure and (**d**) Rotated QCA wire's structure.

When the electrons move inside a cell, they tunnel between the dots. The Coulombic repulsion force, however, is not exerted only between the electrons within a single cell: as shown in Figure 2, a cell holding a logical value influences its adjacent cell, and an adjacent cell that holds no value is driven to the same polarization, so the value propagates from cell to cell [6–8].

**Figure 2.** Behavior of adjacent cells.

#### *2.2. QCA Four-Phase Clock*

The four-phase clocking scheme of QCA is shown in Figure 3. As shown in the figure, the potential barriers rise during the first clock phase (switch). At the beginning of this phase, the barriers are low and the QCA cell is unpolarized; in this state, under the effect of Coulombic repulsion, the cell receives data from its adjacent cells. Then, as the barriers rise, the QCA cells become polarized according to their input drivers, and at the end of this phase the barriers are high enough to prevent electron tunneling; as a result, the cell is locked. The actual switching happens in this phase. During the second clock phase (hold), the barriers remain high; the cell is relatively stable and transmits its data to the adjacent cells. During the third clock phase (release), the barriers gradually fall and the cell becomes unstable; in this phase, the cell is allowed to lose its polarization. During the fourth clock phase (relax), the barriers are at their lowest and the cell remains unpolarized; the cell is not used in this phase. After this phase ends, the cell enters the switch phase again [3,9].

**Figure 3.** (**a**) Clock phases and (**b**) QCA four-phase clock mechanism.
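The phase-by-phase data transfer described above can be illustrated with a toy model (not a physical simulation): a value on a QCA wire advances one clock zone per phase, so a wire spanning four zones delivers its input after one full clock cycle. The zone numbering and the simplified propagation rule below are assumptions made for illustration only.

```python
# Illustrative sketch: data on a QCA wire advances one clock zone per
# phase. A wire crossing all four zones therefore has a latency of one
# full clock (four phases). This toy model only tracks which zones have
# latched the driven bit; it does not model cell physics.

PHASES = ["switch", "hold", "release", "relax"]

def propagate(wire_zones, input_bit, n_phases):
    """Return {zone: bit} for every zone the input has reached."""
    held = {}       # zone -> bit currently latched there
    frontier = -1   # last zone the bit has reached
    for step in range(n_phases):
        switching_zone = step % 4  # zone whose cells are in 'switch'
        if switching_zone == frontier + 1 and switching_zone < wire_zones:
            # cells in 'switch' copy the value from the neighboring
            # 'hold'-locked zone (or from the input driver for zone 0)
            held[switching_zone] = input_bit
            frontier = switching_zone
    return held

# A wire crossing zones 0..3 delivers the bit after 4 phases (1 clock).
print(propagate(4, 1, 4))
```

After two phases, only zones 0 and 1 hold the value, matching the 0.5-clock delay quoted later for the shorter design (C).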

#### *2.3. QCA Logic Gates*

One of the gates used in logic circuits is the inverter gate (Not gate). A type of inverter gate used in QCA technology is shown in Figure 4. It is used for inverting the desired signal as required [10,11].

**Figure 4.** Two different Not-gate in QCA based on 90◦ cells.

One of the most usable logic gates in QCA technology is the majority gate. This gate has an odd number of inputs and one output. In other words, the output cell value (output cell polarization) is determined according to the logical value of the majority inputs. As a result, the output cell value is determined based on the majority of inputs [8,10]. Figure 5a shows an example of this gate.

**Figure 5.** (**a**) QCA with implementation of the majority gate, (**b**) QCA with implementation of the AND Gate with two inputs and (**c**) QCA with implementation of the OR Gate with two inputs.

By fixing one of the inputs of the majority gate to the logical value "0" (polarization −1), the AND gate is obtained [10,12]. Figure 5b shows a two-input AND gate.

The OR gate is generated by fixing one of the majority gate inputs and considering a logical value "1" (polarization +1) [10,12]. Figure 5c shows a two-input OR gate.
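The relationship between the majority gate and the derived AND/OR gates can be checked with a minimal Boolean model; the function names below are illustrative, not from the paper.

```python
# Boolean model of the three-input QCA majority gate and the AND/OR
# gates obtained by fixing one of its inputs, as described above.

def majority(a, b, c):
    """Output follows the value held by the majority of the inputs."""
    return 1 if a + b + c >= 2 else 0

def and2(a, b):
    return majority(a, b, 0)   # one input fixed at "0" (polarization -1)

def or2(a, b):
    return majority(a, b, 1)   # one input fixed at "1" (polarization +1)

# Exhaustive check over all two-bit input combinations.
for a in (0, 1):
    for b in (0, 1):
        assert and2(a, b) == (a & b)
        assert or2(a, b) == (a | b)
print("majority-derived AND/OR verified")
```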

#### *2.4. Related Work*

In a previous paper [13], the architecture of a Full Adder/Full Subtractor with a multilayer crossover design was described (Figure 6). This multilayer design occupies a larger area than our designs and also requires more cells and a longer delay, so its cost is very high. In contrast, our proposed circuits were implemented using the coplanar method and have significant advantages over this type of architecture in cell number, delay, and area consumption.

**Figure 6.** Full Adder/Full Subtractor circuit [13].

In another paper [14], the architecture of a Full Adder/Full Subtractor was also presented (Figure 7). This type of design, due to the high delay and unsuitable carry output, requires another crossover. That leads to an increase in the delay and number of cells. As a result, this architecture is also not suitable. Our proposed design has significant advantages relative to this design [14], such as the number of cells, delay, area of consumption, and therefore cost function.

**Figure 7.** Full Adder/Full Subtractor circuit [14].

Another paper [15] described a Full Adder/Full Subtractor architecture using a coplanar design (Figure 8). This type of design is not very favorable, because the rotated cells (45◦ cells) make it more vulnerable and increase the implementation costs. Our method, which uses normal cells, also has significant advantages relative to this design [15] in terms of the number of cells, delay, and area; in particular, the delay of our design (C) is 50% lower than that of this design.

**Figure 8.** Full Adder/Full Subtractor circuit [15].

In another paper [16], a Full Adder/Full Subtractor circuit architecture implemented with a coplanar design was presented (Figure 9). Our proposed designs (A and B) improve on this coplanar design in terms of cell number, area and, therefore, implementation cost. Despite not using rotated cells, designs (A and B) match the latency reported in that article, while the delay of our third design (C) is 33.34% lower; design (C) is also superior in terms of cell number and area.

**Figure 9.** Full Adder/Full Subtractor circuit [16].

Another paper [17] also presented two Full Adder/Full Subtractor circuit designs, as shown in Figures 10 and 11; these designs are also coplanar, but our designs have significant advantages in terms of number of cells, delay, and area.

**Figure 10.** Full Adder/Full Subtractor circuit [17]-a.

**Figure 11.** Full Adder/Full Subtractor circuit [17]-b.

#### **3. The Proposed Circuits**

In this paper, we designed three new Full Adder/Subtractor (FA/S) circuits based on the XOR gate [18] with the lowest number of cells, smallest circuit area, and lowest latency (delay) relative to the best previous circuits. In all cases, a single-layer (coplanar) design was used; the resulting circuits compare favorably with all designs reported so far. Designs (A and B) are coplanar and use only standard cells (90◦ cells). The third design (C) is also coplanar but uses rotated cells (45◦ cells). This third design shows that the use of rotated cells may reduce the output delay of the circuit, but it also reduces the stability and robustness of the circuit compared with circuits built from standard cells.

#### *3.1. FA/S Circuits Design*

A Full Adder/Subtractor circuit is a combinational circuit in which both addition and subtraction operations are performed. This circuit has three inputs (A, B, Cin) and three outputs (S\D, Cout, Bout) [19,20]. Equation (1) gives the output S\D, Equation (2) the output Cout, and Equation (3) the output Bout. Figure 12 shows the block diagram, and Table 1 shows the truth table of this circuit.

$$\mathbf{S} \backslash \mathbf{D} = \mathbf{A} \oplus \mathbf{B} \oplus \mathbf{C} \text{in} \tag{1}$$

$$\text{Cout} = \text{M(A,B,Cin)} = \text{A.B} + \text{A.Cin} + \text{B.Cin} \tag{2}$$

$$\text{Bout} = \text{M(A',B,Cin)} = \text{A'.B} + \text{A'.Cin} + \text{B.Cin} \tag{3}$$
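Equations (1)–(3) can be verified exhaustively over the eight input combinations; the sketch below models the gates in plain Boolean logic (function names are illustrative, not from the paper).

```python
# Truth-table check of Equations (1)-(3): a single circuit produces the
# shared sum/difference bit S\D plus the majority-based Cout and Bout.

def majority(a, b, c):
    return 1 if a + b + c >= 2 else 0

def fa_s(a, b, cin):
    s_d  = a ^ b ^ cin              # Eq. (1): sum/difference bit
    cout = majority(a, b, cin)      # Eq. (2): carry of A + B + Cin
    bout = majority(1 - a, b, cin)  # Eq. (3): borrow of A - B - Cin
    return s_d, cout, bout

for a in (0, 1):
    for b in (0, 1):
        for cin in (0, 1):
            s_d, cout, bout = fa_s(a, b, cin)
            total = a + b + cin
            # adder semantics: sum bit and carry-out
            assert s_d == total % 2 and cout == total // 2
            # subtractor semantics: same parity bit, borrow when negative
            assert (a - b - cin) % 2 == s_d
            assert bout == (1 if a - b - cin < 0 else 0)
print("FA/S equations verified for all 8 input combinations")
```

The check also confirms why one XOR suffices for both modes: the sum and the difference share the same parity bit.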

We designed FA/S circuits with the lowest number of cells, smallest area, and lowest latency (delay) compared with the best previous examples, using a single-layer (coplanar) design. The designs improve on previous ones not only in cell number, area, and delay but also by being single-layer. Figure 13 presents a block diagram of these circuits, and Figures 14–16 show the implementations of the proposed Full Adder/Subtractor (FA/S) circuit designs.

**Figure 12.** Block diagram of the Full Adder/Subtractor circuit.



**Figure 13.** Simulation results of the proposed XOR-gate.

**Figure 14.** The proposed (A) Full Adder/Full Subtractor circuit.

**Figure 15.** The proposed (B) Full Adder/Full Subtractor circuit.

**Figure 16.** The proposed (C) Full Adder/Full Subtractor circuit.

#### *3.2. Simulation Results*

In this section, the simulation outputs of the proposed circuits are shown in Figures 17–19. The output latency of the proposed circuits (A and B) is the same, and they produce identical simulation outputs. As can be seen, in both circuits the delay is one clock cycle (four phases); the delay of the third circuit (C) is 0.5 clock cycle (two phases). The proposed Full Adder/Subtractor hybrid circuits combine the addition and subtraction circuits and allow both operations to be performed concurrently.

**Figure 17.** Simulation results for the proposed (A) Full Adder/Full Subtractor circuit.

**Figure 18.** Simulation results for the proposed (B) Full Adder/Full Subtractor circuit.

**Figure 19.** Simulation results for the proposed (C) Full Adder/Full Subtractor circuit.

#### **4. Guidelines of Performance Evaluation**

The simulation results were obtained with QCADesigner; the simulation parameters are presented in Table 2. The proposed designs were compared with designs described in previous works. For all the circuits, the area, delay, and cell number are provided, and the type of crossover is also indicated for a more accurate comparison.

**Table 2.** Simulation parameters for the QCA Designer.


The simulation results are given in Table 3. As can be seen, the proposed circuits were compared with the best circuits previously described. In Table 3, consumption area, delay, and cell number of the proposed Full Adder/Subtractor circuits are compared to those of previous designs.


**Table 3.** Comparing the Full Adder/Subtractor (FA/S) of this study with those of previous works.

As shown in Table 3, our designs (A) and (B) reduce the area and power consumption by up to 39.1% with respect to the previous circuits described in [14,17]. The delay of the proposed designs also improved significantly with respect to previous works: designs (A) and (B) reduce the delay by 50% in comparison with the designs in [13,14] and by 30% with respect to those in [15,17]. The delay reduction of the proposed design (C) is 66.66% relative to the designs in [13,14], 50% in comparison with those in [15], [17]-a, and [17]-b, and 33.33% in comparison with that in [16]. The proposed designs also have the lowest cell numbers. The cell-number improvement of design (A) relative to the designs in [13], [14], [15], [16], [17]-a, and [17]-b is about 24.45%, 18.07%, 17.07%, 9.34%, 26.09%, and 19.05%, respectively; for design (B), about 25.56%, 19.28%, 18.29%, 10.67%, 27.17%, and 20.24%; and for design (C), about 27.78%, 21.69%, 20.73%, 13.34%, 29.35%, and 22.62%.

#### **5. Conclusions**

Previous FA/S designs using the QCA technology require at least three layers for the crossover, while several techniques use 45◦ cells. In fact, only non-adjacent clock phases (four clock phases) are required to design the crossover in a single layer, which is more robust. However, a coplanar crossover design using rotated cells can reduce the circuit delay, so, depending on the application, either of these two design types can be used. The circuit designs proposed in this study are preferable to previous designs in terms of cell number, circuit area, delay, and cost. As a result, the three proposed designs can be used in larger and more complex circuits.

**Author Contributions:** Conceptualization, M.V.; methodology, M.V.; validation, M.V., P.L. and A.N.B.; formal analysis, M.V.; investigation, M.V., P.L. and A.N.B.; writing—original draft preparation, M.V.; writing—review and editing, P.L. and A.N.B.; supervision, P.L. and A.N.B. All authors have read and agreed to the published version of the manuscript.

**Funding:** This research received no external funding.

**Data Availability Statement:** Data are contained within the article.

**Conflicts of Interest:** The authors declare no conflict of interest.

#### **References**


### *Article* **An Algorithm for Fast Multiplication of Kaluza Numbers**

**Aleksandr Cariow †, Galina Cariowa † and Janusz P. Paplinski \*,†**

Faculty of Computer Science and Information Technology, West Pomeranian University of Technology, Szczecin, Żołnierska 49, 71-210 Szczecin, Poland; acariow@wi.zut.edu.pl (A.C.); gcariowa@wi.zut.edu.pl (G.C.)
**\*** Correspondence: janusz.paplinski@zut.edu.pl

† These authors contributed equally to this work.

**Abstract:** This paper presents a new algorithm for multiplying two Kaluza numbers. Performing this operation directly requires 1024 real multiplications and 992 real additions. We presented in a previous paper an effective algorithm that can compute the same result with only 512 real multiplications and 576 real additions. More effective solutions have not yet been proposed. Nevertheless, it turned out that an even more interesting solution could be found that would further reduce the computational complexity of this operation. In this article, we propose a new algorithm that allows one to calculate the product of two Kaluza numbers using only 192 multiplications and 384 additions of real numbers.

**Keywords:** convolutional neural networks; fast algorithms; hypercomplex number multiplication; Kaluza numbers

#### **1. Introduction**

The permanent development of the theory and practice of data processing, as well as the need to solve increasingly complex problems of computational intelligence, inspires the use of advanced mathematical methods and formalisms to represent and process big multidimensional data arrays. A convenient formalism for representing such arrays is a high-dimensional number system. For a long time, high-dimensional number systems have been used in physics and mathematics for modeling complex systems and physical phenomena. Today, hypercomplex numbers [1] are also used in various fields of data processing, including digital signal and image processing, machine graphics, telecommunications, and cryptography [2–10]. However, their use in brain-inspired computation and neural networks has been largely limited due to the lack of comprehensive information processing and deep learning techniques. Although there have been a number of research articles addressing the use of quaternions and octonions, higher-dimensional numbers remain a largely open problem [11–22]. Recently, new open-access articles presented a sedenion-based neural network [23,24], and the expediency of using numerical systems of higher dimensions was also noted. Thus, the object of our research is hypercomplex-valued convolutional neural networks using 32-dimensional Kaluza numbers.

In advanced hypercomplex-valued convolutional neural networks, multiplying hypercomplex numbers is the most time-consuming arithmetic operation. The reason is that the addition of *N*-dimensional hypercomplex numbers requires *N* real additions, while the multiplication of these numbers requires *N*(*N* − 1) real additions and *N*<sup>2</sup> real multiplications. It is easy to see that increasing the dimension of hypercomplex numbers increases the computational complexity of the multiplication. Therefore, reducing the computational complexity of the multiplication of hypercomplex numbers is an important scientific and engineering problem. The original algorithm for computing the product of Kaluza numbers was described in [25], but we found a more efficient solution. The purpose of this article is to present our new solution.
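The operation counts above follow directly from schoolbook multiplication: each of the *N* output components is a sum of *N* products. A small sketch (helper name is illustrative) evaluates these formulas:

```python
# Cost of multiplying two N-dimensional hypercomplex numbers directly:
# each of the N output components sums N real products, giving N*N real
# multiplications and N*(N-1) real additions in total.

def direct_cost(n):
    return {"mults": n * n, "adds": n * (n - 1)}

print(direct_cost(4))    # quaternions
print(direct_cost(32))   # 32-dimensional Kaluza numbers
```

For *N* = 32 this yields the 1024 multiplications and 992 additions quoted in the abstract.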

**Citation:** Cariow, A.; Cariowa, G.; Paplinski, J.P. An Algorithm for Fast Multiplication of Kaluza Numbers. *Appl. Sci.* **2021**, *11*, 8203. https:// doi.org/10.3390/app11178203

Academic Editor: Pavel Lyakhov

Received: 6 August 2021 Accepted: 1 September 2021 Published: 3 September 2021

**Publisher's Note:** MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

**Copyright:** © 2021 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https:// creativecommons.org/licenses/by/ 4.0/).

#### **2. Preliminary Remarks**

In all likelihood, the rules for constructing Kaluza numbers were first described in [26]. In article [25], based on these rules, a multiplication table for the imaginary units of the Kaluza number was constructed. A Kaluza number is defined as follows:

$$d = d\_0 + \sum\_{n=1}^{31} d\_n e\_n$$

where *N* = 2*<sup>m</sup>* − 1, {*dn*} for *n* = 1, 2, . . . , 31 are real numbers, and {*en*} for *n* = 1, 2, . . . , 31 are the imaginary units.

Imaginary units *e*1, *e*2, ... , *em* are called principal, and the remaining imaginary units are expressed through them using the formula:

$$e\_s = e\_p e\_q \cdots e\_r,$$

where 1 ≤ *p* < *q* < ··· < *r* ≤ *m*.

All possible products of imaginary units are determined by the following rules:

$$
e\_p^2 = \epsilon\_p; \quad e\_q e\_p = \alpha\_{pq} e\_p e\_q; \quad p < q; \quad p, q = 1, 2, \dots, m
$$

For Kaluza numbers [26]:

$$m = 5, \quad \epsilon\_1 = \epsilon\_2 = 1, \quad \epsilon\_3 = \epsilon\_4 = \epsilon\_5 = -1, \quad \alpha\_{pq} = -1$$

Using the above rules, the results of all possible products of imaginary units of Kaluza numbers can be summarized in Tables 1–4 [25]. For convenience of notation, we represent each element *ei* in the tables by its subscript *i*, i.e., we set *i* = *ei*.

**Table 1.** Multiplication rules of Kaluza numbers for *e*0, *e*1, ... , *e*<sup>15</sup> and *e*0, *e*1, ... , *e*<sup>15</sup> (elements *ei* denoted by their subscripts, i.e., *i* = *ei*).



**Table 2.** Multiplication rules of Kaluza numbers for *e*0, *e*1, ... , *e*<sup>15</sup> and *e*16, *e*17, ... , *e*<sup>31</sup> (elements *ei* denoted by their subscripts, i.e., *i* = *ei*).

**Table 3.** Multiplication rules of Kaluza numbers for *e*16, *e*17, ... , *e*<sup>31</sup> and *e*0, *e*1, ... , *e*<sup>15</sup> (elements *ei* denoted by their subscripts, i.e., *i* = *ei*).


**Table 4.** Multiplication rules of Kaluza numbers for *e*16, *e*17, ... , *e*<sup>31</sup> and *e*16, *e*17, ... , *e*<sup>31</sup> (elements *ei* denoted by their subscripts, i.e., *i* = *ei*).


Suppose we want to compute the product of two Kaluza numbers:

$$d = d^{(1)}d^{(2)} = d\_0 + \sum\_{n=1}^{31} d\_n e\_n,$$

where

$$d^{(1)} = a\_0 + \sum\_{n=1}^{31} a\_n e\_n \quad \text{and} \quad d^{(2)} = b\_0 + \sum\_{n=1}^{31} b\_n e\_n.$$

The operation of the multiplication of Kaluza numbers can be represented more compactly in the form of a matrix-vector product:

$$\mathbf{Y}\_{32 \times 1} = \mathbf{B}\_{32} \mathbf{X}\_{32 \times 1} \tag{1}$$


where  $\mathbf{Y}\_{32 \times 1} = [d\_0, d\_1, \dots, d\_{31}]^\mathrm{T}$ ,  $\mathbf{X}\_{32 \times 1} = [a\_0, a\_1, \dots, a\_{31}]^\mathrm{T}$ , 
$$\mathbf{B}\_{32} = \begin{bmatrix} \mathbf{B}\_{16}^{(0,0)} & \mathbf{B}\_{16}^{(1,0)} \\ \mathbf{B}\_{16}^{(0,1)} & \mathbf{B}\_{16}^{(1,1)} \end{bmatrix}^\mathrm{T}$$



and the blocks $\mathbf{B}\_{16}^{(0,1)}$ and $\mathbf{B}\_{16}^{(1,1)}$ are 16 × 16 matrices whose entries are the signed coefficients ±*b*0, ±*b*1, ... , ±*b*31, arranged according to the multiplication rules in Tables 1–4.

The direct computation of the matrix-vector product in Equation (1) requires 1024 real multiplications and 992 real additions. Below we present an algorithm that reduces the computational complexity to 192 multiplications and 384 additions of real numbers.
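For reference, a direct (schoolbook) product of an $N \times N$ matrix with a vector takes $N^2$ real multiplications and $N(N-1)$ real additions. A tiny sketch of this count (the helper name is ours, for illustration only):

```python
# Count real operations in a naive N x N matrix-vector product y = M x.
def naive_matvec_ops(N):
    mults = N * N          # one multiplication per matrix entry
    adds = N * (N - 1)     # N - 1 additions to accumulate each output entry
    return mults, adds

# For Kaluza numbers the matrix is 32 x 32:
print(naive_matvec_ops(32))  # -> (1024, 992)
```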

#### **3. Synthesis of a Rationalized Algorithm for Computing the Product of Kaluza Numbers**

We first rearrange the rows and columns of the matrix using, respectively, the permutations *π<sup>r</sup>* = (11, 17, 2, 6, 13, 19, 3, 7, 0, 1, 4, 8, 10, 16, 22, 26, 30, 31, 23, 27, 15, 21, 5, 9, 14, 20, 25, 29, 12, 18, 24, 28) and *π<sup>c</sup>* = (10, 16, 3, 7, 0, 1, 2, 6, 13, 19, 22, 26, 11, 17, 4, 8, 12, 18, 5, 9, 14, 20, 23, 27, 15, 21, 24, 28, 30, 31, 25, 29). Next, we change the signs of the selected rows {8, 9, 12, 13, 14, 15, 26, 27} and columns {2, 3, 6, 7, 8, 9, 12, 13} by multiplying them by −1. It is easy to see that this transformation ultimately minimizes the computational complexity of the final algorithm. Then we can write:
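In matrix terms this preprocessing is only index shuffling and sign flips, so it costs no multiplications. A small illustrative sketch (the function name and the 4×4 example are ours, not the actual $\mathbf{B}\_{32}$):

```python
# Permute rows/columns of a matrix and flip the signs of selected rows and
# columns -- the preprocessing step described above (illustrative 4x4 example).
def rearrange(M, pi_r, pi_c, neg_rows, neg_cols):
    # Permute rows, then columns.
    M = [[M[r][c] for c in pi_c] for r in pi_r]
    # An entry is negated iff exactly one of its row/column is sign-flipped
    # (flipping both cancels out).
    return [[-v if (i in neg_rows) != (j in neg_cols) else v
             for j, v in enumerate(row)]
            for i, row in enumerate(M)]

A = [[1, 2, 3, 4],
     [5, 6, 7, 8],
     [9, 10, 11, 12],
     [13, 14, 15, 16]]
R = rearrange(A, [2, 0, 3, 1], [1, 3, 0, 2], {0}, {2})
assert R[0][0] == -A[2][1]   # row 0 of R is (negated) row 2 of A
assert R[0][2] == A[2][0]    # negated twice: once as a row, once as a column
```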

$$\mathbf{Y}\_{32 \times 1} = \mathbf{M}\_{32}^{(r)} \breve{\mathbf{B}}\_{32} \mathbf{M}\_{32}^{(c)} \mathbf{X}\_{32 \times 1} \tag{2}$$


where the monomial matrices $\mathbf{M}\_{32}^{(r)}$, $\mathbf{M}\_{32}^{(c)}$ are products of the appropriate sign-changing matrices $\mathbf{S}\_{32}^{(r)}$, $\mathbf{S}\_{32}^{(c)}$ and permutation matrices $\mathbf{P}\_{32}^{(r)}$, $\mathbf{P}\_{32}^{(c)}$:

$$\mathbf{M}\_{32}^{(r)} = \mathbf{S}\_{32}^{(r)} \mathbf{P}\_{32}^{(r)},$$

$$\mathbf{M}\_{32}^{(c)} = \mathbf{P}\_{32}^{(c)} \mathbf{S}\_{32}^{(c)},$$

where:



$\mathbf{P}\_{16}^{(r,(1,0))}$ is the $16 \times 16$ zero–one matrix with ones at the positions (0, 1), (1, 13), (3, 9), (6, 10), (10, 11) (row, column, counting from zero) and zeros elsewhere; $\mathbf{P}\_{16}^{(r,(1,1))}$ is the $16 \times 16$ zero–one matrix with ones at the positions (2, 1), (4, 5), (5, 9), (7, 6), (8, 10), (9, 14), (11, 7), (12, 11), (13, 15), (14, 12), (15, 13) and zeros elsewhere; and the sign-changing matrix is

$$\mathbf{S}\_{32}^{(r)} = \text{diag}\left(1, 1, 1, 1, -1, 1, 1, 1, -1, 1, 1, -1, 1, -1, 1, 1, 1, -1, 1, -1, 1, 1, 1, 1, -1, 1, 1, 1, -1, 1, 1, 1\right).$$

The matrix $\breve{\mathbf{B}}\_{32}$ is calculated from:

$$\breve{\mathbf{B}}\_{32} = \left(\mathbf{M}\_{32}^{(r)}\right)^{-1} \mathbf{B}\_{32} \left(\mathbf{M}\_{32}^{(c)}\right)^{-1} \text{ .}$$

If we interpret the matrix $\breve{\mathbf{B}}\_{32}$ as a block matrix, it is easy to see that it has a bisymmetric structure:

$$
\breve{\mathbf{B}}\_{32} = \begin{bmatrix}
\breve{\mathbf{B}}\_{16}^{(0)} & \breve{\mathbf{B}}\_{16}^{(1)} \\
\breve{\mathbf{B}}\_{16}^{(1)} & -\breve{\mathbf{B}}\_{16}^{(0)}
\end{bmatrix},
$$

where


$$\breve{\mathbf{B}}\_{16}^{(0)} = \begin{bmatrix}
b\_{13} & b\_{19} & -b\_{3} & b\_{7} & -b\_{11} & -b\_{17} & b\_{2} & -b\_{6} & -b\_{10} & -b\_{16} & -b\_{22} & b\_{26} & -b\_{0} & -b\_{1} & -b\_{4} & b\_{8} \\
b\_{19} & b\_{13} & b\_{7} & -b\_{3} & -b\_{17} & -b\_{11} & -b\_{6} & b\_{2} & -b\_{16} & -b\_{10} & b\_{26} & -b\_{22} & -b\_{1} & -b\_{0} & b\_{8} & -b\_{4} \\
-b\_{22} & -b\_{26} & -b\_{10} & b\_{16} & -b\_{4} & -b\_{8} & -b\_{0} & b\_{1} & -b\_{3} & -b\_{7} & b\_{13} & -b\_{19} & b\_{2} & b\_{6} & -b\_{11} & b\_{17} \\
-b\_{26} & -b\_{22} & b\_{16} & -b\_{10} & -b\_{8} & -b\_{4} & b\_{1} & -b\_{0} & -b\_{7} & -b\_{3} & -b\_{19} & b\_{13} & b\_{6} & b\_{2} & b\_{17} & -b\_{11} \\
b\_{11} & b\_{17} & -b\_{2} & b\_{6} & -b\_{13} & -b\_{19} & b\_{3} & -b\_{7} & -b\_{0} & -b\_{1} & -b\_{4} & b\_{8} & -b\_{10} & -b\_{16} & -b\_{22} & b\_{26} \\
b\_{17} & b\_{11} & b\_{6} & -b\_{2} & -b\_{19} & -b\_{13} & -b\_{7} & b\_{3} & -b\_{1} & -b\_{0} & b\_{8} & -b\_{4} & -b\_{16} & -b\_{10} & b\_{26} & -b\_{22} \\
-b\_{4} & -b\_{8} & -b\_{0} & b\_{1} & -b\_{22} & -b\_{26} & -b\_{10} & b\_{16} & -b\_{2} & -b\_{6} & b\_{11} & -b\_{17} & b\_{3} & b\_{7} & -b\_{13} & b\_{19} \\
-b\_{8} & -b\_{4} & b\_{1} & -b\_{0} & -b\_{26} & -b\_{22} & b\_{16} & -b\_{10} & -b\_{6} & -b\_{2} & -b\_{17} & b\_{11} & b\_{7} & b\_{3} & b\_{19} & -b\_{13} \\
-b\_{10} & -b\_{16} & b\_{22} & -b\_{26} & -b\_{0} & -b\_{1} & b\_{4} & -b\_{8} & b\_{13} & b\_{19} & b\_{3} & -b\_{7} & -b\_{11} & -b\_{17} & -b\_{2} & b\_{6} \\
-b\_{16} & -b\_{10} & -b\_{26} & b\_{22} & -b\_{1} & -b\_{0} & -b\_{8} & b\_{4} & b\_{19} & b\_{13} & -b\_{7} & b\_{3} & -b\_{17} & -b\_{11} & b\_{6} & -b\_{2} \\
-b\_{3} & -b\_{7} & -b\_{13} & b\_{19} & b\_{2} & b\_{6} & b\_{11} & -b\_{17} & -b\_{22} & -b\_{26} & b\_{10} & -b\_{16} & -b\_{4} & -b\_{8} & b\_{0} & -b\_{1} \\
-b\_{7} & -b\_{3} & b\_{19} & -b\_{13} & b\_{6} & b\_{2} & -b\_{17} & b\_{11} & -b\_{26} & -b\_{22} & -b\_{16} & b\_{10} & -b\_{8} & -b\_{4} & -b\_{1} & b\_{0} \\
-b\_{0} & -b\_{1} & b\_{4} & -b\_{8} & -b\_{10} & -b\_{16} & b\_{22} & -b\_{26} & b\_{11} & b\_{17} & b\_{2} & -b\_{6} & -b\_{13} & -b\_{19} & -b\_{3} & b\_{7} \\
-b\_{1} & -b\_{0} & -b\_{8} & b\_{4} & -b\_{16} & -b\_{10} & -b\_{26} & b\_{22} & b\_{17} & b\_{11} & -b\_{6} & b\_{2} & -b\_{19} & -b\_{13} & b\_{7} & -b\_{3} \\
b\_{2} & b\_{6} & b\_{11} & -b\_{17} & -b\_{3} & -b\_{7} & -b\_{13} & b\_{19} & b\_{4} & b\_{8} & -b\_{0} & b\_{1} & b\_{22} & b\_{26} & -b\_{10} & b\_{16} \\
b\_{6} & b\_{2} & -b\_{17} & b\_{11} & -b\_{7} & -b\_{3} & b\_{19} & -b\_{13} & b\_{8} & b\_{4} & b\_{1} & -b\_{0} & b\_{26} & b\_{22} & b\_{16} & -b\_{10}
\end{bmatrix},$$

$$\breve{\mathbf{B}}\_{16}^{(1)} = \begin{bmatrix}
-b\_{15} & -b\_{21} & -b\_{5} & b\_{9} & -b\_{30} & -b\_{31} & -b\_{23} & b\_{27} & -b\_{12} & -b\_{18} & b\_{24} & -b\_{28} & b\_{14} & b\_{20} & -b\_{25} & b\_{29} \\
-b\_{21} & -b\_{15} & b\_{9} & -b\_{5} & -b\_{31} & -b\_{30} & b\_{27} & -b\_{23} & -b\_{18} & -b\_{12} & -b\_{28} & b\_{24} & b\_{20} & b\_{14} & b\_{29} & -b\_{25} \\
b\_{24} & b\_{28} & -b\_{12} & b\_{18} & -b\_{25} & -b\_{29} & b\_{14} & -b\_{20} & -b\_{5} & -b\_{9} & -b\_{15} & b\_{21} & -b\_{23} & -b\_{27} & -b\_{30} & b\_{31} \\
b\_{28} & b\_{24} & b\_{18} & -b\_{12} & -b\_{29} & -b\_{25} & -b\_{20} & b\_{14} & -b\_{9} & -b\_{5} & b\_{21} & -b\_{15} & -b\_{27} & -b\_{23} & b\_{31} & -b\_{30} \\
-b\_{30} & -b\_{31} & -b\_{23} & b\_{27} & -b\_{15} & -b\_{21} & -b\_{5} & b\_{9} & -b\_{14} & -b\_{20} & b\_{25} & -b\_{29} & b\_{12} & b\_{18} & -b\_{24} & b\_{28} \\
-b\_{31} & -b\_{30} & b\_{27} & -b\_{23} & -b\_{21} & -b\_{15} & b\_{9} & -b\_{5} & -b\_{20} & -b\_{14} & -b\_{29} & b\_{25} & b\_{18} & b\_{12} & b\_{28} & -b\_{24} \\
b\_{25} & b\_{29} & -b\_{14} & b\_{20} & -b\_{24} & -b\_{28} & b\_{12} & -b\_{18} & -b\_{23} & -b\_{27} & -b\_{30} & b\_{31} & -b\_{5} & -b\_{9} & -b\_{15} & b\_{21} \\
b\_{29} & b\_{25} & b\_{20} & -b\_{14} & -b\_{28} & -b\_{24} & -b\_{18} & b\_{12} & -b\_{27} & -b\_{23} & b\_{31} & -b\_{30} & -b\_{9} & -b\_{5} & b\_{21} & -b\_{15} \\
-b\_{12} & -b\_{18} & -b\_{24} & b\_{28} & b\_{14} & b\_{20} & b\_{25} & -b\_{29} & -b\_{15} & -b\_{21} & b\_{5} & -b\_{9} & -b\_{30} & -b\_{31} & b\_{23} & -b\_{27} \\
-b\_{18} & -b\_{12} & b\_{28} & -b\_{24} & b\_{20} & b\_{14} & -b\_{29} & b\_{25} & -b\_{21} & -b\_{15} & -b\_{9} & b\_{5} & -b\_{31} & -b\_{30} & -b\_{27} & b\_{23} \\
-b\_{5} & -b\_{9} & b\_{15} & -b\_{21} & -b\_{23} & -b\_{27} & b\_{30} & -b\_{31} & b\_{24} & b\_{28} & b\_{12} & -b\_{18} & -b\_{25} & -b\_{29} & -b\_{14} & b\_{20} \\
-b\_{9} & -b\_{5} & -b\_{21} & b\_{15} & -b\_{27} & -b\_{23} & -b\_{31} & b\_{30} & b\_{28} & b\_{24} & -b\_{18} & b\_{12} & -b\_{29} & -b\_{25} & b\_{20} & -b\_{14} \\
-b\_{14} & -b\_{20} & -b\_{25} & b\_{29} & b\_{12} & b\_{18} & b\_{24} & -b\_{28} & -b\_{30} & -b\_{31} & b\_{23} & -b\_{27} & -b\_{15} & -b\_{21} & b\_{5} & -b\_{9} \\
-b\_{20} & -b\_{14} & b\_{29} & -b\_{25} & b\_{18} & b\_{12} & -b\_{28} & b\_{24} & -b\_{31} & -b\_{30} & -b\_{27} & b\_{23} & -b\_{21} & -b\_{15} & -b\_{9} & b\_{5} \\
b\_{23} & b\_{27} & -b\_{30} & b\_{31} & b\_{5} & b\_{9} & -b\_{15} & b\_{21} & -b\_{25} & -b\_{29} & -b\_{14} & b\_{20} & b\_{24} & b\_{28} & b\_{12} & -b\_{18} \\
b\_{27} & b\_{23} & b\_{31} & -b\_{30} & b\_{9} & b\_{5} & b\_{21} & -b\_{15} & -b\_{29} & -b\_{25} & b\_{20} & -b\_{14} & b\_{28} & b\_{24} & -b\_{18} & b\_{12}
\end{bmatrix}.$$

There is an effective method of factorization for matrices of this type which, during the calculation of matrix-vector products, allows reducing the number of multiplications from $32^2$ to $\frac{3}{4} \cdot 32^2$ at the expense of increasing the number of additions from $32 \cdot 31$ to $\frac{5}{4} \cdot 32 \cdot 31$ [27]. The matrix $\breve{\mathbf{B}}\_{32}$ used in the multiplication procedure (2) can be described as:

$$
\breve{\mathbf{B}}\_{32} = \begin{bmatrix}
\mathbf{I}\_{16} & \mathbf{0}\_{16} & \mathbf{I}\_{16} \\
\mathbf{0}\_{16} & \mathbf{I}\_{16} & \mathbf{I}\_{16}
\end{bmatrix} \begin{bmatrix}
(\breve{\mathbf{B}}\_{16}^{(0)} - \breve{\mathbf{B}}\_{16}^{(1)}) & \mathbf{0}\_{16} & \mathbf{0}\_{16} \\
\mathbf{0}\_{16} & -(\breve{\mathbf{B}}\_{16}^{(0)} + \breve{\mathbf{B}}\_{16}^{(1)}) & \mathbf{0}\_{16} \\
\mathbf{0}\_{16} & \mathbf{0}\_{16} & \breve{\mathbf{B}}\_{16}^{(1)}
\end{bmatrix} \begin{bmatrix}
\mathbf{I}\_{16} & \mathbf{0}\_{16} \\
\mathbf{0}\_{16} & \mathbf{I}\_{16} \\
\mathbf{I}\_{16} & \mathbf{I}\_{16}
\end{bmatrix},
$$

where $\mathbf{I}\_{16}$ is the $16 \times 16$ identity matrix and $\mathbf{0}\_{16}$ is the $16 \times 16$ null matrix. Thus, we can write a new procedure for calculating the product of Kaluza numbers in the following form:

$$\mathbf{Y}\_{32 \times 1} = \mathbf{M}\_{32}^{(r)} \mathbf{T}\_{32 \times 48} \mathbf{B}\_{48} \mathbf{T}\_{48 \times 32} \mathbf{M}\_{32}^{(c)} \mathbf{X}\_{32 \times 1} \tag{3}$$

where

$$\mathbf{T}\_{32 \times 48} = \begin{bmatrix} 1 & 0 & 1 \\ 0 & 1 & 1 \end{bmatrix} \otimes \mathbf{I}\_{16},$$

$$\mathbf{B}\_{48} = \text{quasidiag}\left(\mathbf{B}\_{16}^{(-)},\ \mathbf{B}\_{16}^{(+)},\ \mathbf{B}\_{16}^{(1)}\right),$$

$$\mathbf{B}\_{16}^{(-)} = \breve{\mathbf{B}}\_{16}^{(0)} - \breve{\mathbf{B}}\_{16}^{(1)},\tag{4}$$

$$\mathbf{B}\_{16}^{(+)} = -\left(\breve{\mathbf{B}}\_{16}^{(0)} + \breve{\mathbf{B}}\_{16}^{(1)}\right),\tag{5}$$


$$\mathbf{T}\_{48 \times 32} = \begin{bmatrix} 1 & 0\\ 0 & 1\\ 1 & 1 \end{bmatrix} \otimes \mathbf{I}\_{16},$$

where the symbol "⊗" denotes the Kronecker (tensor) product of two matrices and quasidiag(·) denotes a block-diagonal matrix.
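The saving in (3) comes from the bisymmetric block structure: a product with a matrix of the form $\begin{bmatrix} \mathbf{B}^{(0)} & \mathbf{B}^{(1)} \\ \mathbf{B}^{(1)} & -\mathbf{B}^{(0)} \end{bmatrix}$ needs only three half-size products, with $\mathbf{B}^{(0)} - \mathbf{B}^{(1)}$, $-(\mathbf{B}^{(0)} + \mathbf{B}^{(1)})$ and $\mathbf{B}^{(1)}$. A minimal numerical check of this identity (pure Python, with 2×2 blocks standing in for the 16×16 ones; all helper names are ours):

```python
# Verify the three-product factorization of a bisymmetric block matrix
# [[B0, B1], [B1, -B0]] on a tiny 2x2-block example.
def matvec(M, x):
    return [sum(m * v for m, v in zip(row, x)) for row in M]

def add(A, B): return [[a + b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]
def sub(A, B): return [[a - b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]
def neg(A):    return [[-a for a in row] for row in A]

B0 = [[1.0, 2.0], [3.0, 4.0]]
B1 = [[0.0, 1.0], [-1.0, 2.0]]
x1, x2 = [1.0, -2.0], [3.0, 0.5]

# Direct product with [[B0, B1], [B1, -B0]].
top = [r0 + r1 for r0, r1 in zip(B0, B1)]
bot = [r1 + rn for r1, rn in zip(B1, neg(B0))]
y_direct = matvec(top + bot, x1 + x2)

# Factorized form: only three half-size products are needed.
t = matvec(B1, [a + b for a, b in zip(x1, x2)])                # B1 (x1 + x2)
y1 = [a + b for a, b in zip(matvec(sub(B0, B1), x1), t)]       # = B0 x1 + B1 x2
y2 = [a + b for a, b in zip(matvec(neg(add(B0, B1)), x2), t)]  # = B1 x1 - B0 x2
assert y_direct == y1 + y2
```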

We introduce the notation $c\_0, c\_1, \dots, c\_{31}$ for the entries of the matrices defined in (4) and (5); then we obtain:

$$\mathbf{B}\_{16}^{(-)} = \begin{bmatrix}
c\_{0} & c\_{1} & c\_{2} & c\_{3} & c\_{4} & c\_{5} & c\_{6} & c\_{7} & c\_{8} & c\_{9} & c\_{10} & c\_{11} & c\_{12} & c\_{13} & c\_{14} & c\_{15} \\
c\_{1} & c\_{0} & c\_{3} & c\_{2} & c\_{5} & c\_{4} & c\_{7} & c\_{6} & c\_{9} & c\_{8} & c\_{11} & c\_{10} & c\_{13} & c\_{12} & c\_{15} & c\_{14} \\
-c\_{10} & c\_{11} & c\_{8} & c\_{9} & c\_{14} & c\_{15} & c\_{12} & c\_{13} & c\_{2} & c\_{3} & c\_{0} & c\_{1} & c\_{6} & c\_{7} & c\_{4} & c\_{5} \\
-c\_{11} & c\_{10} & c\_{9} & c\_{8} & c\_{15} & c\_{14} & c\_{13} & c\_{12} & c\_{3} & c\_{2} & c\_{1} & c\_{0} & c\_{7} & c\_{6} & c\_{5} & c\_{4} \\
c\_{16} & c\_{17} & c\_{18} & c\_{19} & c\_{20} & c\_{21} & c\_{22} & c\_{23} & c\_{24} & c\_{25} & c\_{26} & c\_{27} & c\_{28} & c\_{29} & c\_{30} & c\_{31} \\
c\_{17} & c\_{16} & c\_{19} & c\_{18} & c\_{21} & c\_{20} & c\_{23} & c\_{22} & c\_{25} & c\_{24} & c\_{27} & c\_{26} & c\_{29} & c\_{28} & c\_{31} & c\_{30} \\
c\_{26} & c\_{27} & c\_{24} & c\_{25} & c\_{30} & c\_{31} & c\_{28} & c\_{29} & c\_{18} & c\_{19} & c\_{16} & c\_{17} & c\_{22} & c\_{23} & c\_{20} & c\_{21} \\
c\_{27} & c\_{26} & c\_{25} & c\_{24} & c\_{31} & c\_{30} & c\_{29} & c\_{28} & c\_{19} & c\_{18} & c\_{17} & c\_{16} & c\_{23} & c\_{22} & c\_{21} & c\_{20} \\
c\_{8} & c\_{9} & c\_{10} & c\_{11} & c\_{12} & c\_{13} & c\_{14} & c\_{15} & c\_{0} & c\_{1} & c\_{2} & c\_{3} & c\_{4} & c\_{5} & c\_{6} & c\_{7} \\
c\_{9} & c\_{8} & c\_{11} & c\_{10} & c\_{13} & c\_{12} & c\_{15} & c\_{14} & c\_{1} & c\_{0} & c\_{3} & c\_{2} & c\_{5} & c\_{4} & c\_{7} & c\_{6} \\
c\_{2} & c\_{3} & c\_{0} & c\_{1} & c\_{6} & c\_{7} & c\_{4} & c\_{5} & c\_{10} & c\_{11} & c\_{8} & c\_{9} & c\_{14} & c\_{15} & c\_{12} & c\_{13} \\
-c\_{3} & c\_{2} & c\_{1} & c\_{0} & c\_{7} & c\_{6} & c\_{5} & c\_{4} & c\_{11} & c\_{10} & c\_{9} & c\_{8} & c\_{15} & c\_{14} & c\_{13} & c\_{12} \\
c\_{24} & c\_{25} & c\_{26} & c\_{27} & c\_{28} & c\_{29} & c\_{30} & c\_{31} & c\_{16} & c\_{17} & c\_{18} & c\_{19} & c\_{20} & c\_{21} & c\_{22} & c\_{23} \\
c\_{25} & c\_{24} & c\_{27} & c\_{26} & c\_{29} & c\_{28} & c\_{31} & c\_{30} & c\_{17} & c\_{16} & c\_{19} & c\_{18} & c\_{21} & c\_{20} & c\_{23} & c\_{22} \\
-c\_{18} & c\_{19} & c\_{16} & c\_{17} & c\_{22} & c\_{23} & c\_{20} & c\_{21} & c\_{26} & c\_{27} & c\_{24} & c\_{25} & c\_{30} & c\_{31} & c\_{28} & c\_{29} \\
c\_{19} & c\_{18} & c\_{17} & c\_{16} & c\_{23} & c\_{22} & c\_{21} & c\_{20} & c\_{27} & c\_{26} & c\_{25} & c\_{24} & c\_{31} & c\_{30} & c\_{29} & c\_{28}
\end{bmatrix},$$

$$\mathbf{B}\_{16}^{(+)} = \begin{bmatrix}
c\_{20} & c\_{21} & c\_{22} & c\_{23} & c\_{16} & c\_{17} & c\_{18} & c\_{19} & c\_{28} & c\_{29} & c\_{30} & c\_{31} & c\_{24} & c\_{25} & c\_{26} & c\_{27} \\
c\_{21} & c\_{20} & c\_{23} & c\_{22} & c\_{17} & c\_{16} & c\_{19} & c\_{18} & c\_{29} & c\_{28} & c\_{31} & c\_{30} & c\_{25} & c\_{24} & c\_{27} & c\_{26} \\
-c\_{30} & c\_{31} & c\_{28} & c\_{29} & c\_{26} & c\_{27} & c\_{24} & c\_{25} & c\_{22} & c\_{23} & c\_{20} & c\_{21} & c\_{18} & c\_{19} & c\_{16} & c\_{17} \\
c\_{31} & c\_{30} & c\_{29} & c\_{28} & c\_{27} & c\_{26} & c\_{25} & c\_{24} & c\_{23} & c\_{22} & c\_{21} & c\_{20} & c\_{19} & c\_{18} & c\_{17} & c\_{16} \\
c\_{4} & c\_{5} & c\_{6} & c\_{7} & c\_{0} & c\_{1} & c\_{2} & c\_{3} & c\_{12} & c\_{13} & c\_{14} & c\_{15} & c\_{8} & c\_{9} & c\_{10} & c\_{11} \\
c\_{5} & c\_{4} & c\_{7} & c\_{6} & c\_{1} & c\_{0} & c\_{3} & c\_{2} & c\_{13} & c\_{12} & c\_{15} & c\_{14} & c\_{9} & c\_{8} & c\_{11} & c\_{10} \\
c\_{14} & c\_{15} & c\_{12} & c\_{13} & c\_{10} & c\_{11} & c\_{8} & c\_{9} & c\_{6} & c\_{7} & c\_{4} & c\_{5} & c\_{2} & c\_{3} & c\_{0} & c\_{1} \\
-c\_{15} & c\_{14} & c\_{13} & c\_{12} & c\_{11} & c\_{10} & c\_{9} & c\_{8} & c\_{7} & c\_{6} & c\_{5} & c\_{4} & c\_{3} & c\_{2} & c\_{1} & c\_{0} \\
c\_{28} & c\_{29} & c\_{30} & c\_{31} & c\_{24} & c\_{25} & c\_{26} & c\_{27} & c\_{20} & c\_{21} & c\_{22} & c\_{23} & c\_{16} & c\_{17} & c\_{18} & c\_{19} \\
c\_{29} & c\_{28} & c\_{31} & c\_{30} & c\_{25} & c\_{24} & c\_{27} & c\_{26} & c\_{21} & c\_{20} & c\_{23} & c\_{22} & c\_{17} & c\_{16} & c\_{19} & c\_{18} \\
c\_{22} & c\_{23} & c\_{20} & c\_{21} & c\_{18} & c\_{19} & c\_{16} & c\_{17} & c\_{30} & c\_{31} & c\_{28} & c\_{29} & c\_{26} & c\_{27} & c\_{24} & c\_{25} \\
c\_{23} & c\_{22} & c\_{21} & c\_{20} & c\_{19} & c\_{18} & c\_{17} & c\_{16} & c\_{31} & c\_{30} & c\_{29} & c\_{28} & c\_{27} & c\_{26} & c\_{25} & c\_{24} \\
c\_{12} & c\_{13} & c\_{14} & c\_{15} & c\_{8} & c\_{9} & c\_{10} & c\_{11} & c\_{4} & c\_{5} & c\_{6} & c\_{7} & c\_{0} & c\_{1} & c\_{2} & c\_{3} \\
c\_{13} & c\_{12} & c\_{15} & c\_{14} & c\_{9} & c\_{8} & c\_{11} & c\_{10} & c\_{5} & c\_{4} & c\_{7} & c\_{6} & c\_{1} & c\_{0} & c\_{3} & c\_{2} \\
-c\_{6} & c\_{7} & c\_{4} & c\_{5} & c\_{2} & c\_{3} & c\_{0} & c\_{1} & c\_{14} & c\_{15} & c\_{12} & c\_{13} & c\_{10} & c\_{11} & c\_{8} & c\_{9} \\
-c\_{7} & c\_{6} & c\_{5} & c\_{4} & c\_{3} & c\_{2} & c\_{1} & c\_{0} & c\_{15} & c\_{14} & c\_{13} & c\_{12} & c\_{11} & c\_{10} & c\_{9} & c\_{8}
\end{bmatrix}.$$

The matrices $\mathbf{B}\_{16}^{(-)}$, $\mathbf{B}\_{16}^{(+)}$ and $\mathbf{B}\_{16}^{(1)}$ have similar structures. If we now change the signs of all of the elements of the sixth and seventh rows, as well as of the second, third, sixth and seventh columns, to the opposite, then the matrices $\mathbf{B}\_{16}^{(-)}$, $\mathbf{B}\_{16}^{(+)}$ and $\mathbf{B}\_{16}^{(1)}$ will have block structures of the type $\begin{bmatrix} \mathbf{A}\_{N/2} & \mathbf{B}\_{N/2} \\ \mathbf{B}\_{N/2} & \mathbf{A}\_{N/2} \end{bmatrix}$, which leads to reducing the number of real multiplications during matrix-vector product calculation. We can write the sign transformation matrices for the rows, $\mathbf{S}\_{16}^{(r)}$, and for the columns, $\mathbf{S}\_{16}^{(c)}$, as:

$$\mathbf{S}\_{16}^{(r)} = \text{diag}\left(1, 1, 1, 1, 1, 1, -1, -1, 1, 1, 1, 1, 1, 1, 1, 1\right),$$

$$\mathbf{S}\_{16}^{(c)} = \text{diag}\left(1, 1, -1, -1, 1, 1, -1, -1, 1, 1, 1, 1, 1, 1, 1, 1\right).$$

Then, we obtain new standardized matrices:

$$\mathbf{B}\_{16}^{(-)} = \mathbf{S}\_{16}^{(r)} \mathbf{B}\_{16}^{(-)} \mathbf{S}\_{16}^{(c)} = \begin{bmatrix} \mathbf{B}\_8^{(0)} & \mathbf{B}\_8^{(1)} \\ \mathbf{B}\_8^{(1)} & \mathbf{B}\_8^{(0)} \end{bmatrix} \tag{6}$$

$$\mathbf{B}\_{16}^{(+)} = \mathbf{S}\_{16}^{(r)} \mathbf{B}\_{16}^{(+)} \mathbf{S}\_{16}^{(c)} = \begin{bmatrix} \mathbf{B}\_8^{(2)} & \mathbf{B}\_8^{(3)} \\ \mathbf{B}\_8^{(3)} & \mathbf{B}\_8^{(2)} \end{bmatrix} \tag{7}$$

$$\mathbf{B}\_{16}^{(1)} = \mathbf{S}\_{16}^{(r)} \mathbf{B}\_{16}^{(1)} \mathbf{S}\_{16}^{(c)} = \begin{bmatrix} \mathbf{B}\_8^{(4)} & \mathbf{B}\_8^{(5)} \\ \mathbf{B}\_8^{(5)} & \mathbf{B}\_8^{(4)} \end{bmatrix} \tag{8}$$

where:

$$\mathbf{B}\_{8}^{(0)} = \begin{bmatrix}
c\_{0} & c\_{1} & c\_{2} & c\_{3} & c\_{4} & c\_{5} & c\_{6} & c\_{7} \\
c\_{1} & c\_{0} & c\_{3} & c\_{2} & c\_{5} & c\_{4} & c\_{7} & c\_{6} \\
-c\_{10} & c\_{11} & c\_{8} & c\_{9} & c\_{14} & c\_{15} & c\_{12} & c\_{13} \\
-c\_{11} & c\_{10} & c\_{9} & c\_{8} & c\_{15} & c\_{14} & c\_{13} & c\_{12} \\
c\_{16} & c\_{17} & c\_{18} & c\_{19} & c\_{20} & c\_{21} & c\_{22} & c\_{23} \\
c\_{17} & c\_{16} & c\_{19} & c\_{18} & c\_{21} & c\_{20} & c\_{23} & c\_{22} \\
c\_{26} & c\_{27} & c\_{24} & c\_{25} & c\_{30} & c\_{31} & c\_{28} & c\_{29} \\
c\_{27} & c\_{26} & c\_{25} & c\_{24} & c\_{31} & c\_{30} & c\_{29} & c\_{28}
\end{bmatrix}, \quad
\mathbf{B}\_{8}^{(1)} = \begin{bmatrix}
c\_{8} & c\_{9} & c\_{10} & c\_{11} & c\_{12} & c\_{13} & c\_{14} & c\_{15} \\
c\_{9} & c\_{8} & c\_{11} & c\_{10} & c\_{13} & c\_{12} & c\_{15} & c\_{14} \\
c\_{2} & c\_{3} & c\_{0} & c\_{1} & c\_{6} & c\_{7} & c\_{4} & c\_{5} \\
-c\_{3} & c\_{2} & c\_{1} & c\_{0} & c\_{7} & c\_{6} & c\_{5} & c\_{4} \\
c\_{24} & c\_{25} & c\_{26} & c\_{27} & c\_{28} & c\_{29} & c\_{30} & c\_{31} \\
c\_{25} & c\_{24} & c\_{27} & c\_{26} & c\_{29} & c\_{28} & c\_{31} & c\_{30} \\
-c\_{18} & c\_{19} & c\_{16} & c\_{17} & c\_{22} & c\_{23} & c\_{20} & c\_{21} \\
c\_{19} & c\_{18} & c\_{17} & c\_{16} & c\_{23} & c\_{22} & c\_{21} & c\_{20}
\end{bmatrix},$$

$$\mathbf{B}\_{8}^{(2)} = \begin{bmatrix}
c\_{20} & c\_{21} & c\_{22} & c\_{23} & c\_{16} & c\_{17} & c\_{18} & c\_{19} \\
c\_{21} & c\_{20} & c\_{23} & c\_{22} & c\_{17} & c\_{16} & c\_{19} & c\_{18} \\
-c\_{30} & c\_{31} & c\_{28} & c\_{29} & c\_{26} & c\_{27} & c\_{24} & c\_{25} \\
c\_{31} & c\_{30} & c\_{29} & c\_{28} & c\_{27} & c\_{26} & c\_{25} & c\_{24} \\
c\_{4} & c\_{5} & c\_{6} & c\_{7} & c\_{0} & c\_{1} & c\_{2} & c\_{3} \\
c\_{5} & c\_{4} & c\_{7} & c\_{6} & c\_{1} & c\_{0} & c\_{3} & c\_{2} \\
c\_{14} & c\_{15} & c\_{12} & c\_{13} & c\_{10} & c\_{11} & c\_{8} & c\_{9} \\
-c\_{15} & c\_{14} & c\_{13} & c\_{12} & c\_{11} & c\_{10} & c\_{9} & c\_{8}
\end{bmatrix}, \quad
\mathbf{B}\_{8}^{(3)} = \begin{bmatrix}
c\_{28} & c\_{29} & c\_{30} & c\_{31} & c\_{24} & c\_{25} & c\_{26} & c\_{27} \\
c\_{29} & c\_{28} & c\_{31} & c\_{30} & c\_{25} & c\_{24} & c\_{27} & c\_{26} \\
c\_{22} & c\_{23} & c\_{20} & c\_{21} & c\_{18} & c\_{19} & c\_{16} & c\_{17} \\
c\_{23} & c\_{22} & c\_{21} & c\_{20} & c\_{19} & c\_{18} & c\_{17} & c\_{16} \\
c\_{12} & c\_{13} & c\_{14} & c\_{15} & c\_{8} & c\_{9} & c\_{10} & c\_{11} \\
c\_{13} & c\_{12} & c\_{15} & c\_{14} & c\_{9} & c\_{8} & c\_{11} & c\_{10} \\
-c\_{6} & c\_{7} & c\_{4} & c\_{5} & c\_{2} & c\_{3} & c\_{0} & c\_{1} \\
-c\_{7} & c\_{6} & c\_{5} & c\_{4} & c\_{3} & c\_{2} & c\_{1} & c\_{0}
\end{bmatrix},$$

$$\mathbf{B}\_{8}^{(4)} = \begin{bmatrix}
-b\_{15} & -b\_{21} & b\_{5} & -b\_{9} & -b\_{30} & -b\_{31} & b\_{23} & -b\_{27} \\
-b\_{21} & -b\_{15} & -b\_{9} & b\_{5} & -b\_{31} & -b\_{30} & -b\_{27} & b\_{23} \\
b\_{24} & b\_{28} & b\_{12} & -b\_{18} & -b\_{25} & -b\_{29} & -b\_{14} & b\_{20} \\
b\_{28} & b\_{24} & -b\_{18} & b\_{12} & -b\_{29} & -b\_{25} & b\_{20} & -b\_{14} \\
-b\_{30} & -b\_{31} & b\_{23} & -b\_{27} & -b\_{15} & -b\_{21} & b\_{5} & -b\_{9} \\
-b\_{31} & -b\_{30} & -b\_{27} & b\_{23} & -b\_{21} & -b\_{15} & -b\_{9} & b\_{5} \\
-b\_{25} & -b\_{29} & -b\_{14} & b\_{20} & b\_{24} & b\_{28} & b\_{12} & -b\_{18} \\
-b\_{29} & -b\_{25} & b\_{20} & -b\_{14} & b\_{28} & b\_{24} & -b\_{18} & b\_{12}
\end{bmatrix}, \quad
\mathbf{B}\_{8}^{(5)} = \begin{bmatrix}
-b\_{12} & -b\_{18} & b\_{24} & -b\_{28} & b\_{14} & b\_{20} & -b\_{25} & b\_{29} \\
-b\_{18} & -b\_{12} & -b\_{28} & b\_{24} & b\_{20} & b\_{14} & b\_{29} & -b\_{25} \\
-b\_{5} & -b\_{9} & -b\_{15} & b\_{21} & -b\_{23} & -b\_{27} & -b\_{30} & b\_{31} \\
-b\_{9} & -b\_{5} & b\_{21} & -b\_{15} & -b\_{27} & -b\_{23} & b\_{31} & -b\_{30} \\
-b\_{14} & -b\_{20} & b\_{25} & -b\_{29} & b\_{12} & b\_{18} & -b\_{24} & b\_{28} \\
-b\_{20} & -b\_{14} & -b\_{29} & b\_{25} & b\_{18} & b\_{12} & b\_{28} & -b\_{24} \\
b\_{23} & b\_{27} & b\_{30} & -b\_{31} & b\_{5} & b\_{9} & b\_{15} & -b\_{21} \\
b\_{27} & b\_{23} & -b\_{31} & b\_{30} & b\_{9} & b\_{5} & -b\_{21} & b\_{15}
\end{bmatrix}.$$
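The symmetric two-block structure in (6)–(8) is what enables the next saving: a product with a matrix of the form $\begin{bmatrix} \mathbf{A} & \mathbf{B} \\ \mathbf{B} & \mathbf{A} \end{bmatrix}$ reduces to two half-size products, with $\frac{1}{2}(\mathbf{A} + \mathbf{B})$ and $\frac{1}{2}(\mathbf{A} - \mathbf{B})$. A minimal numerical check of this identity (pure Python, illustrative 2×2 blocks; all helper names are ours):

```python
# Verify the two-product "butterfly" for a block matrix [[A, B], [B, A]]
# on a tiny 2x2-block example.
def matvec(M, x):
    return [sum(m * v for m, v in zip(row, x)) for row in M]

A = [[2.0, 1.0], [0.0, 3.0]]
B = [[1.0, -1.0], [2.0, 1.0]]
x1, x2 = [1.0, 2.0], [-1.0, 0.5]

# Direct product with [[A, B], [B, A]].
top = [ra + rb for ra, rb in zip(A, B)]
bot = [rb + ra for ra, rb in zip(A, B)]
y_direct = matvec(top + bot, x1 + x2)

# Butterfly: two half-size products with (A + B)/2 and (A - B)/2.
s = [a + b for a, b in zip(x1, x2)]
d = [a - b for a, b in zip(x1, x2)]
u = matvec([[0.5 * (a + b) for a, b in zip(ra, rb)] for ra, rb in zip(A, B)], s)
v = matvec([[0.5 * (a - b) for a, b in zip(ra, rb)] for ra, rb in zip(A, B)], d)
y_fast = [ui + vi for ui, vi in zip(u, v)] + [ui - vi for ui, vi in zip(u, v)]
assert y_direct == y_fast
```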

It is possible to use the factorization method for the standardized matrices (6)–(8). This allows us to reduce the number of multiplications to $8^2/2$, using $8(8+1)$ additions, for each of the above matrices. Therefore, similarly to the previous case, we can write [27,28]:

$$
\begin{bmatrix}
\mathbf{A}\_{N/2} & \mathbf{B}\_{N/2} \\
\mathbf{B}\_{N/2} & \mathbf{A}\_{N/2}
\end{bmatrix} = \begin{bmatrix}
\mathbf{I}\_{N/2} & \mathbf{I}\_{N/2} \\
\mathbf{I}\_{N/2} & -\mathbf{I}\_{N/2}
\end{bmatrix} \begin{bmatrix}
\frac{1}{2}(\mathbf{A}\_{N/2} + \mathbf{B}\_{N/2}) & \mathbf{0}\_{N/2} \\
\mathbf{0}\_{N/2} & \frac{1}{2}(\mathbf{A}\_{N/2} - \mathbf{B}\_{N/2})
\end{bmatrix} \begin{bmatrix}
\mathbf{I}\_{N/2} & \mathbf{I}\_{N/2} \\
\mathbf{I}\_{N/2} & -\mathbf{I}\_{N/2}
\end{bmatrix},\tag{9}
$$
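As a quick numerical illustration, the block factorization (9) can be checked for arbitrary square blocks. The following NumPy sketch is our own illustrative code (not part of the original derivation); it verifies the identity for random 4×4 blocks:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4  # block size N/2
A = rng.standard_normal((n, n))
B = rng.standard_normal((n, n))

# Left-hand side: the 2n x 2n block-symmetric matrix [[A, B], [B, A]]
lhs = np.block([[A, B], [B, A]])

# Right-hand side: butterfly * block-diagonal * butterfly, as in (9)
I = np.eye(n)
butterfly = np.block([[I, I], [I, -I]])
D = np.block([[0.5 * (A + B), np.zeros((n, n))],
              [np.zeros((n, n)), 0.5 * (A - B)]])
rhs = butterfly @ D @ butterfly

assert np.allclose(lhs, rhs)
```

The factorization replaces one 2n×2n product with two n×n products on the sum and difference blocks, which is where the savings in multiplications come from.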

where **A***N*/2 , **B***N*/2 are some matrices. Therefore, we can rewrite (6)–(8) as:

$$
\begin{split}
\mathbf{B}\_{16}^{(0)} &= \begin{bmatrix} \mathbf{I}\_{8} & \mathbf{I}\_{8} \\ \mathbf{I}\_{8} & -\mathbf{I}\_{8} \end{bmatrix} \begin{bmatrix} \frac{1}{2} \mathbf{B}\_{8}^{(0+)} & \mathbf{0}\_{8} \\ \mathbf{0}\_{8} & \frac{1}{2} \mathbf{B}\_{8}^{(0-)} \end{bmatrix} \begin{bmatrix} \mathbf{I}\_{8} & \mathbf{I}\_{8} \\ \mathbf{I}\_{8} & -\mathbf{I}\_{8} \end{bmatrix}, \\
\mathbf{B}\_{16}^{(1)} &= \begin{bmatrix} \mathbf{I}\_{8} & \mathbf{I}\_{8} \\ \mathbf{I}\_{8} & -\mathbf{I}\_{8} \end{bmatrix} \begin{bmatrix} \frac{1}{2} \mathbf{B}\_{8}^{(1+)} & \mathbf{0}\_{8} \\ \mathbf{0}\_{8} & \frac{1}{2} \mathbf{B}\_{8}^{(1-)} \end{bmatrix} \begin{bmatrix} \mathbf{I}\_{8} & \mathbf{I}\_{8} \\ \mathbf{I}\_{8} & -\mathbf{I}\_{8} \end{bmatrix}, \\
\mathbf{B}\_{16}^{(2)} &= \begin{bmatrix} \mathbf{I}\_{8} & \mathbf{I}\_{8} \\ \mathbf{I}\_{8} & -\mathbf{I}\_{8} \end{bmatrix} \begin{bmatrix} \frac{1}{2} \mathbf{B}\_{8}^{(2+)} & \mathbf{0}\_{8} \\ \mathbf{0}\_{8} & \frac{1}{2} \mathbf{B}\_{8}^{(2-)} \end{bmatrix} \begin{bmatrix} \mathbf{I}\_{8} & \mathbf{I}\_{8} \\ \mathbf{I}\_{8} & -\mathbf{I}\_{8} \end{bmatrix},
\end{split}
$$

where:

$$\mathbf{B}\_8^{(0+)} = \mathbf{B}\_8^{(0)} + \mathbf{B}\_8^{(1)},\tag{10}$$

$$\mathbf{B}\_8^{(0-)} = \mathbf{B}\_8^{(0)} - \mathbf{B}\_8^{(1)},\tag{11}$$

$$\mathbf{B}\_8^{(1+)} = \mathbf{B}\_8^{(2)} + \mathbf{B}\_8^{(3)},\tag{12}$$

$$\mathbf{B}\_8^{(1-)} = \mathbf{B}\_8^{(2)} - \mathbf{B}\_8^{(3)},\tag{13}$$

$$\mathbf{B}\_8^{(2+)} = \mathbf{B}\_8^{(4)} + \mathbf{B}\_8^{(5)},\tag{14}$$

$$\mathbf{B}\_8^{(2-)} = \mathbf{B}\_8^{(4)} - \mathbf{B}\_8^{(5)}.\tag{15}$$

Combining the partial decompositions into a single procedure, we can rewrite (3) as follows:

$$\mathbf{Y}\_{32\times 1} = \mathbf{M}\_{32}^{(r)} \mathbf{T}\_{32\times 48} \mathbf{S}\_{48}^{(r)} \mathbf{W}\_{48}^{(1)} \mathbf{B}\_{48} \mathbf{W}\_{48}^{(1)} \mathbf{S}\_{48}^{(c)} \mathbf{T}\_{48\times 32} \mathbf{M}\_{32}^{(c)} \mathbf{X}\_{32\times 1},$$

where

$$\mathbf{B}\_{48} = \text{quasidiag}\left(\frac{1}{2}\mathbf{B}\_8^{(0+)}, \frac{1}{2}\mathbf{B}\_8^{(0-)}, \frac{1}{2}\mathbf{B}\_8^{(1+)}, \frac{1}{2}\mathbf{B}\_8^{(1-)}, \frac{1}{2}\mathbf{B}\_8^{(2+)}, \frac{1}{2}\mathbf{B}\_8^{(2-)}\right),$$

$$\mathbf{S}\_{48}^{(r)} = \mathbf{I}\_3 \otimes \mathbf{S}\_{16}^{(r)},$$

$$\mathbf{S}\_{48}^{(c)} = \mathbf{I}\_3 \otimes \mathbf{S}\_{16}^{(c)},$$

$$\mathbf{W}\_{48}^{(1)} = \mathbf{I}\_3 \otimes \mathbf{H}\_2 \otimes \mathbf{I}\_{8},$$

where $\mathbf{H}\_2$ is the order-2 Hadamard matrix, i.e.:

$$\mathbf{H}\_2 = \begin{bmatrix} 1 & 1\\ 1 & -1 \end{bmatrix}.$$
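Since $\mathbf{W}\_{48}^{(1)}$ is defined through Kronecker products, it can be built and sanity-checked directly. The sketch below is our own illustrative NumPy code (not part of the original algorithm):

```python
import numpy as np

H2 = np.array([[1, 1], [1, -1]])  # order-2 Hadamard matrix

# W48 = I3 (x) H2 (x) I8, the 48x48 butterfly stage
W48 = np.kron(np.kron(np.eye(3), H2), np.eye(8))

assert W48.shape == (48, 48)
# H2 squared is 2*I2, and the Kronecker product preserves this,
# so the butterfly stage is self-inverse up to a factor of 2.
assert np.allclose(W48 @ W48, 2 * np.eye(48))
```

This self-inverse property is why the same butterfly factor can appear on both sides of the block-diagonal core in the factorizations above.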

Introducing the following notation:


to (10)–(13), we obtain:

$$
\mathbf{B}\_8^{(0+)} = \begin{bmatrix}
d\_{0} & d\_{1} & d\_{2} & d\_{3} & d\_{4} & d\_{5} & d\_{6} & d\_{7} \\
d\_{1} & d\_{0} & d\_{3} & d\_{2} & d\_{5} & d\_{4} & d\_{7} & d\_{6} \\
d\_{8} & d\_{9} & d\_{10} & d\_{11} & d\_{12} & d\_{13} & d\_{14} & d\_{15} \\
-d\_{9} & d\_{8} & d\_{11} & d\_{10} & d\_{13} & d\_{12} & d\_{15} & d\_{14} \\
d\_{16} & d\_{17} & d\_{18} & d\_{19} & d\_{20} & d\_{21} & d\_{22} & d\_{23} \\
d\_{17} & d\_{16} & d\_{19} & d\_{18} & d\_{21} & d\_{20} & d\_{23} & d\_{22} \\
d\_{24} & d\_{25} & d\_{26} & d\_{27} & d\_{28} & d\_{29} & d\_{30} & d\_{31} \\
d\_{25} & d\_{24} & d\_{27} & d\_{26} & d\_{29} & d\_{28} & d\_{31} & d\_{30}
\end{bmatrix},
$$

$$
\mathbf{B}\_8^{(0-)} = \begin{bmatrix}
d\_{10} & d\_{11} & d\_{8} & d\_{9} & d\_{14} & d\_{15} & d\_{12} & d\_{13} \\
-d\_{11} & d\_{10} & d\_{9} & d\_{8} & d\_{15} & d\_{14} & d\_{13} & d\_{12} \\
-d\_{2} & d\_{3} & d\_{0} & d\_{1} & d\_{6} & d\_{7} & d\_{4} & d\_{5} \\
-d\_{3} & d\_{2} & d\_{1} & d\_{0} & d\_{7} & d\_{6} & d\_{5} & d\_{4} \\
-d\_{26} & d\_{27} & d\_{24} & d\_{25} & d\_{30} & d\_{31} & d\_{28} & d\_{29} \\
d\_{27} & d\_{26} & d\_{25} & d\_{24} & d\_{31} & d\_{30} & d\_{29} & d\_{28} \\
d\_{18} & d\_{19} & d\_{16} & d\_{17} & d\_{22} & d\_{23} & d\_{20} & d\_{21} \\
d\_{19} & d\_{18} & d\_{17} & d\_{16} & d\_{23} & d\_{22} & d\_{21} & d\_{20}
\end{bmatrix},
$$

$$
\mathbf{B}\_8^{(1+)} = \begin{bmatrix}
d\_{30} & d\_{31} & d\_{28} & d\_{29} & d\_{26} & d\_{27} & d\_{24} & d\_{25} \\
d\_{31} & d\_{30} & d\_{29} & d\_{28} & d\_{27} & d\_{26} & d\_{25} & d\_{24} \\
-d\_{22} & d\_{23} & d\_{20} & d\_{21} & d\_{18} & d\_{19} & d\_{16} & d\_{17} \\
d\_{23} & d\_{22} & d\_{21} & d\_{20} & d\_{19} & d\_{18} & d\_{17} & d\_{16} \\
d\_{14} & d\_{15} & d\_{12} & d\_{13} & d\_{10} & d\_{11} & d\_{8} & d\_{9} \\
d\_{15} & d\_{14} & d\_{13} & d\_{12} & d\_{11} & d\_{10} & d\_{9} & d\_{8} \\
d\_{6} & d\_{7} & d\_{4} & d\_{5} & d\_{2} & d\_{3} & d\_{0} & d\_{1} \\
-d\_{7} & d\_{6} & d\_{5} & d\_{4} & d\_{3} & d\_{2} & d\_{1} & d\_{0}
\end{bmatrix},
$$

$$
\mathbf{B}\_8^{(1-)} = \begin{bmatrix}
d\_{20} & d\_{21} & d\_{22} & d\_{23} & d\_{16} & d\_{17} & d\_{18} & d\_{19} \\
d\_{21} & d\_{20} & d\_{23} & d\_{22} & d\_{17} & d\_{16} & d\_{19} & d\_{18} \\
-d\_{28} & d\_{29} & d\_{30} & d\_{31} & d\_{24} & d\_{25} & d\_{26} & d\_{27} \\
d\_{29} & d\_{28} & d\_{31} & d\_{30} & d\_{25} & d\_{24} & d\_{27} & d\_{26} \\
d\_{4} & d\_{5} & d\_{6} & d\_{7} & d\_{0} & d\_{1} & d\_{2} & d\_{3} \\
d\_{5} & d\_{4} & d\_{7} & d\_{6} & d\_{1} & d\_{0} & d\_{3} & d\_{2} \\
d\_{12} & d\_{13} & d\_{14} & d\_{15} & d\_{8} & d\_{9} & d\_{10} & d\_{11} \\
d\_{13} & d\_{12} & d\_{15} & d\_{14} & d\_{9} & d\_{8} & d\_{11} & d\_{10}
\end{bmatrix}.
$$

In order to simplify, we introduce the following notation for the elements of matrix $\mathbf{B}\_8^{(2+)}$ in (14):

$$\begin{aligned} c\_{32} &= b\_{12} + b\_{15}, & c\_{33} &= b\_{18} + b\_{21}, & c\_{34} &= b\_{5} + b\_{24}, & c\_{35} &= b\_{9} + b\_{28}, \\ c\_{36} &= b\_{14} - b\_{30}, & c\_{37} &= b\_{20} - b\_{31}, & c\_{38} &= b\_{23} - b\_{25}, & c\_{39} &= b\_{29} - b\_{27}, \\ c\_{40} &= b\_{24} - b\_{5}, & c\_{41} &= b\_{28} - b\_{9}, & c\_{42} &= b\_{12} - b\_{15}, & c\_{43} &= b\_{21} - b\_{18}, \\ c\_{44} &= b\_{23} + b\_{25}, & c\_{45} &= b\_{27} + b\_{29}, & c\_{46} &= b\_{14} + b\_{30}, & c\_{47} &= b\_{20} + b\_{31}. \end{aligned}$$

We obtain:

$$
\mathbf{B}\_8^{(2+)} = \begin{bmatrix} -c\_{32} & c\_{33} & c\_{34} & c\_{35} & c\_{36} & c\_{37} & c\_{38} & c\_{39} \\ -c\_{33} & c\_{32} & c\_{35} & c\_{34} & c\_{37} & c\_{36} & c\_{39} & c\_{38} \\ c\_{40} & c\_{41} & c\_{42} & c\_{43} & c\_{44} & c\_{45} & c\_{46} & c\_{47} \\ c\_{41} & c\_{40} & c\_{43} & c\_{42} & c\_{45} & c\_{44} & c\_{47} & c\_{46} \\ -c\_{46} & c\_{47} & c\_{44} & c\_{45} & c\_{42} & c\_{43} & c\_{40} & c\_{41} \\ -c\_{47} & c\_{46} & c\_{45} & c\_{44} & c\_{43} & c\_{42} & c\_{41} & c\_{40} \\ c\_{38} & c\_{39} & c\_{36} & c\_{37} & c\_{34} & c\_{35} & c\_{32} & c\_{33} \\ -c\_{39} & c\_{38} & c\_{37} & c\_{36} & c\_{35} & c\_{34} & c\_{33} & c\_{32} \end{bmatrix}.$$

Now, we introduce the following notation for the elements of matrix $\mathbf{B}\_8^{(2-)}$ in (15):


We obtain:

$$
\mathbf{B}\_8^{(2-)} = \begin{bmatrix}
c\_{48} & c\_{49} & c\_{50} & c\_{51} & c\_{52} & c\_{53} & c\_{54} & c\_{55} \\
c\_{49} & c\_{48} & c\_{51} & c\_{50} & c\_{53} & c\_{52} & c\_{55} & c\_{54} \\
c\_{56} & c\_{57} & c\_{58} & c\_{59} & c\_{60} & c\_{61} & c\_{62} & c\_{63} \\
c\_{57} & c\_{56} & c\_{59} & c\_{58} & c\_{61} & c\_{60} & c\_{63} & c\_{62} \\
-c\_{62} & c\_{63} & c\_{60} & c\_{61} & c\_{58} & c\_{59} & c\_{56} & c\_{57} \\
c\_{63} & c\_{62} & c\_{61} & c\_{60} & c\_{59} & c\_{58} & c\_{57} & c\_{56} \\
-c\_{54} & c\_{55} & c\_{52} & c\_{53} & c\_{50} & c\_{51} & c\_{48} & c\_{49} \\
-c\_{55} & c\_{54} & c\_{53} & c\_{52} & c\_{51} & c\_{50} & c\_{49} & c\_{48}
\end{bmatrix}.
$$

All of the above matrices have the same internal structure. We can permute rows and columns using the *π<sup>r</sup>* = (5 1 2 7 4 0 3 6) and *π<sup>c</sup>* = (5 1 2 6 4 0 3 7) permutation rules, respectively. We obtain the following form:

$$\mathbf{B}\_8^{(\gamma)} = \mathbf{P}\_8^{(r)} \hat{\mathbf{B}}\_8^{(\gamma)} \mathbf{P}\_8^{(c)},\tag{16}$$

where $\mathbf{B}\_8^{(\gamma)}$ and $\hat{\mathbf{B}}\_8^{(\gamma)}$ are the corresponding items in the sets:

$$\begin{aligned} \mathbf{B}\_8^{(\gamma)} &\in \left\{ \mathbf{B}\_8^{(0+)}, \mathbf{B}\_8^{(0-)}, \mathbf{B}\_8^{(1+)}, \mathbf{B}\_8^{(1-)}, \mathbf{B}\_8^{(2+)}, \mathbf{B}\_8^{(2-)} \right\}, \\ \hat{\mathbf{B}}\_8^{(\gamma)} &\in \left\{ \hat{\mathbf{B}}\_8^{(0+)}, \hat{\mathbf{B}}\_8^{(0-)}, \hat{\mathbf{B}}\_8^{(1+)}, \hat{\mathbf{B}}\_8^{(1-)}, \hat{\mathbf{B}}\_8^{(2+)}, \hat{\mathbf{B}}\_8^{(2-)} \right\} \end{aligned}$$

and

$$\mathbf{P}\_8^{(r)} = \begin{bmatrix} 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 \\ 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 \\ 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 \\ 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 \\ \end{bmatrix}, \qquad \mathbf{P}\_8^{(c)} = \begin{bmatrix} 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 \\ 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 \\ 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 \\ \end{bmatrix}.$$
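Because $\mathbf{P}\_8^{(r)}$ and $\mathbf{P}\_8^{(c)}$ are permutation matrices, the inverses used below are simply their transposes. A small sketch in our own illustrative NumPy code (the helper name `perm_matrix` is ours; we assume the rule gives, for each row, the column holding the 1):

```python
import numpy as np

def perm_matrix(pi):
    """Row-permutation matrix: row i has a single 1 in column pi[i]."""
    n = len(pi)
    P = np.zeros((n, n))
    P[np.arange(n), pi] = 1
    return P

pi_r = [5, 1, 2, 7, 4, 0, 3, 6]  # the permutation rule from the text
P = perm_matrix(pi_r)

# A permutation matrix is orthogonal: its inverse equals its transpose
assert np.allclose(np.linalg.inv(P), P.T)
assert np.allclose(P @ P.T, np.eye(8))
```

This is why applying the inverse permutations in (16) costs no arithmetic operations: they only reorder data.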

The matrices $\hat{\mathbf{B}}\_8^{(\gamma)}$ in (16) are calculated via the following equation:

$$\hat{\mathbf{B}}\_8^{(\gamma)} = \left(\mathbf{P}\_8^{(r)}\right)^{-1} \mathbf{B}\_8^{(\gamma)} \left(\mathbf{P}\_8^{(c)}\right)^{-1}$$

and have a standardized form (9) that reduces the number of multiplications. Thus, we can write: 

$$
\begin{aligned}
\hat{\mathbf{B}}\_8^{(0+)} &= \left(\mathbf{P}\_8^{(r)}\right)^{-1} \mathbf{B}\_8^{(0+)} \left(\mathbf{P}\_8^{(c)}\right)^{-1} = \begin{bmatrix} \mathbf{B}\_4^{(0)} & \mathbf{B}\_4^{(1)} \\ \mathbf{B}\_4^{(1)} & \mathbf{B}\_4^{(0)} \end{bmatrix}, &
\hat{\mathbf{B}}\_8^{(0-)} &= \left(\mathbf{P}\_8^{(r)}\right)^{-1} \mathbf{B}\_8^{(0-)} \left(\mathbf{P}\_8^{(c)}\right)^{-1} = \begin{bmatrix} \mathbf{B}\_4^{(2)} & \mathbf{B}\_4^{(3)} \\ \mathbf{B}\_4^{(3)} & \mathbf{B}\_4^{(2)} \end{bmatrix}, \\
\hat{\mathbf{B}}\_8^{(1+)} &= \left(\mathbf{P}\_8^{(r)}\right)^{-1} \mathbf{B}\_8^{(1+)} \left(\mathbf{P}\_8^{(c)}\right)^{-1} = \begin{bmatrix} \mathbf{B}\_4^{(4)} & \mathbf{B}\_4^{(5)} \\ \mathbf{B}\_4^{(5)} & \mathbf{B}\_4^{(4)} \end{bmatrix}, &
\hat{\mathbf{B}}\_8^{(1-)} &= \left(\mathbf{P}\_8^{(r)}\right)^{-1} \mathbf{B}\_8^{(1-)} \left(\mathbf{P}\_8^{(c)}\right)^{-1} = \begin{bmatrix} \mathbf{B}\_4^{(6)} & \mathbf{B}\_4^{(7)} \\ \mathbf{B}\_4^{(7)} & \mathbf{B}\_4^{(6)} \end{bmatrix}, \\
\hat{\mathbf{B}}\_8^{(2+)} &= \left(\mathbf{P}\_8^{(r)}\right)^{-1} \mathbf{B}\_8^{(2+)} \left(\mathbf{P}\_8^{(c)}\right)^{-1} = \begin{bmatrix} \mathbf{B}\_4^{(8)} & \mathbf{B}\_4^{(9)} \\ \mathbf{B}\_4^{(9)} & \mathbf{B}\_4^{(8)} \end{bmatrix}, &
\hat{\mathbf{B}}\_8^{(2-)} &= \left(\mathbf{P}\_8^{(r)}\right)^{-1} \mathbf{B}\_8^{(2-)} \left(\mathbf{P}\_8^{(c)}\right)^{-1} = \begin{bmatrix} \mathbf{B}\_4^{(10)} & \mathbf{B}\_4^{(11)} \\ \mathbf{B}\_4^{(11)} & \mathbf{B}\_4^{(10)} \end{bmatrix},
\end{aligned}
$$

where:

$$\mathbf{B}\_{4}^{(0)} = \begin{bmatrix} d\_{20} & d\_{16} & d\_{19} & d\_{23} \\ d\_{4} & d\_{0} & d\_{3} & d\_{7} \\ d\_{13} & d\_{9} & d\_{10} & d\_{14} \\ -d\_{28} & d\_{24} & d\_{27} & d\_{31} \end{bmatrix}, \qquad \mathbf{B}\_{4}^{(1)} = \begin{bmatrix} d\_{21} & d\_{17} & d\_{18} & d\_{22} \\ d\_{5} & d\_{1} & d\_{2} & d\_{6} \\ d\_{12} & d\_{8} & d\_{11} & d\_{15} \\ d\_{29} & d\_{25} & d\_{26} & d\_{30} \end{bmatrix}.$$

$$\mathbf{B}\_4^{(2)} = \begin{bmatrix} d\_{30} & d\_{26} & d\_{25} & d\_{29} \\ d\_{14} & d\_{10} & d\_{9} & d\_{13} \\ -d\_{7} & d\_{3} & d\_{0} & d\_{4} \\ -d\_{22} & d\_{18} & d\_{17} & d\_{21} \end{bmatrix}, \qquad \mathbf{B}\_4^{(3)} = \begin{bmatrix} d\_{31} & d\_{27} & d\_{24} & d\_{28} \\ d\_{15} & d\_{11} & d\_{8} & d\_{12} \\ d\_{6} & d\_{2} & d\_{1} & d\_{5} \\ d\_{23} & d\_{19} & d\_{16} & d\_{20} \end{bmatrix},$$

$$\mathbf{B}\_4^{(4)} = \begin{bmatrix} d\_{10} & d\_{14} & d\_{13} & d\_{9} \\ -d\_{26} & d\_{30} & d\_{29} & d\_{25} \\ d\_{19} & d\_{23} & d\_{20} & d\_{16} \\ -d\_{2} & d\_{6} & d\_{5} & d\_{1} \end{bmatrix}, \qquad \mathbf{B}\_4^{(5)} = \begin{bmatrix} -d\_{11} & d\_{15} & d\_{12} & d\_{8} \\ d\_{27} & d\_{31} & d\_{28} & d\_{24} \\ d\_{18} & d\_{22} & d\_{21} & d\_{17} \\ -d\_{3} & d\_{7} & d\_{4} & d\_{0} \end{bmatrix},$$

$$\mathbf{B}\_4^{(6)} = \begin{bmatrix} d\_{0} & d\_{4} & d\_{7} & d\_{3} \\ d\_{16} & d\_{20} & d\_{23} & d\_{19} \\ d\_{25} & d\_{29} & d\_{30} & d\_{26} \\ d\_{8} & d\_{12} & d\_{15} & d\_{11} \end{bmatrix}, \qquad \mathbf{B}\_4^{(7)} = \begin{bmatrix} d\_{1} & d\_{5} & d\_{6} & d\_{2} \\ d\_{17} & d\_{21} & d\_{22} & d\_{18} \\ d\_{24} & d\_{28} & d\_{31} & d\_{27} \\ -d\_{9} & d\_{13} & d\_{14} & d\_{10} \end{bmatrix},$$

$$\mathbf{B}\_4^{(8)} = \begin{bmatrix} c\_{42} & c\_{46} & c\_{45} & c\_{41} \\ c\_{36} & c\_{32} & c\_{35} & c\_{39} \\ -c\_{45} & c\_{41} & c\_{42} & c\_{46} \\ c\_{34} & c\_{38} & c\_{37} & c\_{33} \end{bmatrix}, \qquad \mathbf{B}\_4^{(9)} = \begin{bmatrix} -c\_{43} & c\_{47} & c\_{44} & c\_{40} \\ c\_{37} & c\_{33} & c\_{34} & c\_{38} \\ -c\_{44} & c\_{40} & c\_{43} & c\_{47} \\ c\_{35} & c\_{39} & c\_{36} & c\_{32} \end{bmatrix},$$

$$\mathbf{B}\_4^{(10)} = \begin{bmatrix} -c\_{58} & c\_{62} & c\_{61} & c\_{57} \\ -c\_{52} & c\_{48} & c\_{51} & c\_{55} \\ c\_{61} & c\_{57} & c\_{58} & c\_{62} \\ -c\_{50} & c\_{54} & c\_{53} & c\_{49} \end{bmatrix}, \qquad \mathbf{B}\_4^{(11)} = \begin{bmatrix} -c\_{59} & c\_{63} & c\_{60} & c\_{56} \\ -c\_{53} & c\_{49} & c\_{50} & c\_{54} \\ c\_{60} & c\_{56} & c\_{59} & c\_{63} \\ c\_{51} & c\_{55} & c\_{52} & c\_{48} \end{bmatrix}.$$

We can use the multiplication procedure (9) and represent the above matrices in the form:

$$
\begin{aligned}
\hat{\mathbf{B}}\_8^{(0+)} &= \begin{bmatrix} \mathbf{I}\_4 & \mathbf{I}\_4 \\ \mathbf{I}\_4 & -\mathbf{I}\_4 \end{bmatrix} \begin{bmatrix} \frac{1}{2}\left(\mathbf{B}\_4^{(0)} + \mathbf{B}\_4^{(1)}\right) & \mathbf{0}\_4 \\ \mathbf{0}\_4 & \frac{1}{2}\left(\mathbf{B}\_4^{(0)} - \mathbf{B}\_4^{(1)}\right) \end{bmatrix} \begin{bmatrix} \mathbf{I}\_4 & \mathbf{I}\_4 \\ \mathbf{I}\_4 & -\mathbf{I}\_4 \end{bmatrix}, \\
\hat{\mathbf{B}}\_8^{(0-)} &= \begin{bmatrix} \mathbf{I}\_4 & \mathbf{I}\_4 \\ \mathbf{I}\_4 & -\mathbf{I}\_4 \end{bmatrix} \begin{bmatrix} \frac{1}{2}\left(\mathbf{B}\_4^{(2)} + \mathbf{B}\_4^{(3)}\right) & \mathbf{0}\_4 \\ \mathbf{0}\_4 & \frac{1}{2}\left(\mathbf{B}\_4^{(2)} - \mathbf{B}\_4^{(3)}\right) \end{bmatrix} \begin{bmatrix} \mathbf{I}\_4 & \mathbf{I}\_4 \\ \mathbf{I}\_4 & -\mathbf{I}\_4 \end{bmatrix}, \\
\hat{\mathbf{B}}\_8^{(1+)} &= \begin{bmatrix} \mathbf{I}\_4 & \mathbf{I}\_4 \\ \mathbf{I}\_4 & -\mathbf{I}\_4 \end{bmatrix} \begin{bmatrix} \frac{1}{2}\left(\mathbf{B}\_4^{(4)} + \mathbf{B}\_4^{(5)}\right) & \mathbf{0}\_4 \\ \mathbf{0}\_4 & \frac{1}{2}\left(\mathbf{B}\_4^{(4)} - \mathbf{B}\_4^{(5)}\right) \end{bmatrix} \begin{bmatrix} \mathbf{I}\_4 & \mathbf{I}\_4 \\ \mathbf{I}\_4 & -\mathbf{I}\_4 \end{bmatrix}, \\
\hat{\mathbf{B}}\_8^{(1-)} &= \begin{bmatrix} \mathbf{I}\_4 & \mathbf{I}\_4 \\ \mathbf{I}\_4 & -\mathbf{I}\_4 \end{bmatrix} \begin{bmatrix} \frac{1}{2}\left(\mathbf{B}\_4^{(6)} + \mathbf{B}\_4^{(7)}\right) & \mathbf{0}\_4 \\ \mathbf{0}\_4 & \frac{1}{2}\left(\mathbf{B}\_4^{(6)} - \mathbf{B}\_4^{(7)}\right) \end{bmatrix} \begin{bmatrix} \mathbf{I}\_4 & \mathbf{I}\_4 \\ \mathbf{I}\_4 & -\mathbf{I}\_4 \end{bmatrix}, \\
\hat{\mathbf{B}}\_8^{(2+)} &= \begin{bmatrix} \mathbf{I}\_4 & \mathbf{I}\_4 \\ \mathbf{I}\_4 & -\mathbf{I}\_4 \end{bmatrix} \begin{bmatrix} \frac{1}{2}\left(\mathbf{B}\_4^{(8)} + \mathbf{B}\_4^{(9)}\right) & \mathbf{0}\_4 \\ \mathbf{0}\_4 & \frac{1}{2}\left(\mathbf{B}\_4^{(8)} - \mathbf{B}\_4^{(9)}\right) \end{bmatrix} \begin{bmatrix} \mathbf{I}\_4 & \mathbf{I}\_4 \\ \mathbf{I}\_4 & -\mathbf{I}\_4 \end{bmatrix}, \\
\hat{\mathbf{B}}\_8^{(2-)} &= \begin{bmatrix} \mathbf{I}\_4 & \mathbf{I}\_4 \\ \mathbf{I}\_4 & -\mathbf{I}\_4 \end{bmatrix} \begin{bmatrix} \frac{1}{2}\left(\mathbf{B}\_4^{(10)} + \mathbf{B}\_4^{(11)}\right) & \mathbf{0}\_4 \\ \mathbf{0}\_4 & \frac{1}{2}\left(\mathbf{B}\_4^{(10)} - \mathbf{B}\_4^{(11)}\right) \end{bmatrix} \begin{bmatrix} \mathbf{I}\_4 & \mathbf{I}\_4 \\ \mathbf{I}\_4 & -\mathbf{I}\_4 \end{bmatrix},
\end{aligned}
$$

where

$$\mathbf{B}\_{4}^{(0)} + \mathbf{B}\_{4}^{(1)} = \begin{bmatrix} d\_{20} + d\_{21} & d\_{16} + d\_{17} & d\_{19} - d\_{18} & d\_{22} + d\_{23} \\ d\_{4} + d\_{5} & d\_{0} + d\_{1} & d\_{3} - d\_{2} & d\_{6} + d\_{7} \\ d\_{12} + d\_{13} & d\_{8} - d\_{9} & d\_{10} + d\_{11} & d\_{14} - d\_{15} \\ d\_{29} - d\_{28} & d\_{24} + d\_{25} & d\_{26} + d\_{27} & d\_{31} - d\_{30} \end{bmatrix},$$

$$\mathbf{B}\_{4}^{(0)} - \mathbf{B}\_{4}^{(1)} = \begin{bmatrix} d\_{20} - d\_{21} & d\_{16} - d\_{17} & d\_{18} + d\_{19} & d\_{23} - d\_{22} \\ d\_{4} - d\_{5} & d\_{0} - d\_{1} & d\_{2} + d\_{3} & d\_{7} - d\_{6} \\ d\_{13} - d\_{12} & d\_{8} - d\_{9} & d\_{10} - d\_{11} & d\_{14} + d\_{15} \\ -d\_{28} - d\_{29} & d\_{24} - d\_{25} & d\_{27} - d\_{26} & d\_{30} + d\_{31} \end{bmatrix},$$

$$\mathbf{B}\_4^{(2)} + \mathbf{B}\_4^{(3)} = \begin{bmatrix} d\_{30} + d\_{31} & d\_{27} - d\_{26} & d\_{24} - d\_{25} & d\_{28} - d\_{29} \\ d\_{14} + d\_{15} & d\_{10} - d\_{11} & d\_{8} - d\_{9} & d\_{13} - d\_{12} \\ d\_{6} - d\_{7} & d\_{2} - d\_{3} & d\_{1} - d\_{0} & d\_{5} - d\_{4} \\ d\_{23} - d\_{22} & d\_{18} + d\_{19} & d\_{16} - d\_{17} & d\_{20} - d\_{21} \end{bmatrix},$$

$$\mathbf{B}\_4^{(2)} - \mathbf{B}\_4^{(3)} = \begin{bmatrix} d\_{30} - d\_{31} & d\_{26} - d\_{27} & d\_{24} - d\_{25} & d\_{28} - d\_{29} \\ d\_{14} - d\_{15} & d\_{10} + d\_{11} & d\_{8} - d\_{9} & d\_{12} + d\_{13} \\ -d\_{6} - d\_{7} & d\_{2} - d\_{3} & -d\_{0} - d\_{1} & -d\_{4} - d\_{5} \\ -d\_{22} - d\_{23} & d\_{18} - d\_{19} & d\_{16} - d\_{17} & d\_{20} - d\_{21} \end{bmatrix},$$

$$\mathbf{B}\_4^{(4)} + \mathbf{B}\_4^{(5)} = \begin{bmatrix} d\_{10} - d\_{11} & d\_{14} + d\_{15} & d\_{13} - d\_{12} & d\_{8} - d\_{9} \\ d\_{27} - d\_{26} & d\_{30} + d\_{31} & d\_{28} - d\_{29} & d\_{24} - d\_{25} \\ d\_{18} + d\_{19} & d\_{23} - d\_{22} & d\_{20} - d\_{21} & d\_{16} - d\_{17} \\ -d\_{2} - d\_{3} & d\_{6} - d\_{7} & d\_{5} - d\_{4} & d\_{1} - d\_{0} \end{bmatrix},$$

$$\mathbf{B}\_4^{(4)} - \mathbf{B}\_4^{(5)} = \begin{bmatrix} d\_{10} + d\_{11} & d\_{14} - d\_{15} & d\_{12} + d\_{13} & d\_{8} - d\_{9} \\ -d\_{26} - d\_{27} & d\_{30} - d\_{31} & d\_{28} - d\_{29} & d\_{24} - d\_{25} \\ d\_{19} - d\_{18} & d\_{22} + d\_{23} & d\_{20} + d\_{21} & d\_{16} + d\_{17} \\ d\_{3} - d\_{2} & d\_{6} + d\_{7} & d\_{4} + d\_{5} & d\_{0} + d\_{1} \end{bmatrix},$$

$$\mathbf{B}\_4^{(6)} + \mathbf{B}\_4^{(7)} = \begin{bmatrix} d\_{0} + d\_{1} & d\_{4} + d\_{5} & d\_{6} + d\_{7} & d\_{3} - d\_{2} \\ d\_{16} + d\_{17} & d\_{20} + d\_{21} & d\_{22} + d\_{23} & d\_{19} - d\_{18} \\ d\_{24} + d\_{25} & d\_{29} - d\_{28} & d\_{31} - d\_{30} & d\_{26} + d\_{27} \\ d\_{8} - d\_{9} & d\_{12} + d\_{13} & d\_{14} - d\_{15} & d\_{10} + d\_{11} \end{bmatrix},$$

$$\mathbf{B}\_4^{(6)} - \mathbf{B}\_4^{(7)} = \begin{bmatrix} d\_{0} - d\_{1} & d\_{4} - d\_{5} & d\_{7} - d\_{6} & d\_{2} + d\_{3} \\ d\_{16} - d\_{17} & d\_{20} - d\_{21} & d\_{23} - d\_{22} & d\_{18} + d\_{19} \\ d\_{25} - d\_{24} & d\_{28} + d\_{29} & d\_{30} - d\_{31} & d\_{26} - d\_{27} \\ d\_{8} + d\_{9} & d\_{12} - d\_{13} & d\_{14} - d\_{15} & d\_{11} - d\_{10} \end{bmatrix},$$

$$\mathbf{B}\_4^{(8)} + \mathbf{B}\_4^{(9)} = \begin{bmatrix} c\_{42} - c\_{43} & c\_{46} - c\_{47} & c\_{44} - c\_{45} & c\_{41} - c\_{40} \\ c\_{36} + c\_{37} & c\_{32} - c\_{33} & c\_{34} - c\_{35} & c\_{38} + c\_{39} \\ -c\_{44} - c\_{45} & c\_{40} + c\_{41} & c\_{42} + c\_{43} & c\_{47} - c\_{46} \\ c\_{34} + c\_{35} & c\_{38} - c\_{39} & c\_{37} - c\_{36} & c\_{32} - c\_{33} \end{bmatrix},$$

$$\mathbf{B}\_4^{(8)} - \mathbf{B}\_4^{(9)} = \begin{bmatrix} c\_{42} + c\_{43} & c\_{47} - c\_{46} & c\_{44} - c\_{45} & c\_{40} + c\_{41} \\ c\_{36} - c\_{37} & c\_{33} - c\_{32} & c\_{34} - c\_{35} & c\_{39} - c\_{38} \\ c\_{44} - c\_{45} & c\_{41} - c\_{40} & c\_{42} - c\_{43} & c\_{46} - c\_{47} \\ c\_{34} - c\_{35} & c\_{38} + c\_{39} & c\_{36} + c\_{37} & c\_{32} - c\_{33} \end{bmatrix},$$

$$\mathbf{B}\_4^{(10)} + \mathbf{B}\_4^{(11)} = \begin{bmatrix} -c\_{58} - c\_{59} & c\_{63} - c\_{62} & c\_{60} - c\_{61} & c\_{56} - c\_{57} \\ -c\_{52} - c\_{53} & c\_{48} + c\_{49} & c\_{50} + c\_{51} & c\_{54} - c\_{55} \\ c\_{60} + c\_{61} & c\_{56} + c\_{57} & c\_{58} - c\_{59} & c\_{62} + c\_{63} \\ c\_{51} - c\_{50} & c\_{54} - c\_{55} & c\_{53} - c\_{52} & c\_{48} - c\_{49} \end{bmatrix},$$

$$\mathbf{B}\_4^{(10)} - \mathbf{B}\_4^{(11)} = \begin{bmatrix} c\_{59} - c\_{58} & c\_{62} - c\_{63} & c\_{60} - c\_{61} & c\_{56} - c\_{57} \\ c\_{53} - c\_{52} & c\_{48} - c\_{49} & c\_{51} - c\_{50} & c\_{54} - c\_{55} \\ c\_{61} - c\_{60} & c\_{57} - c\_{56} & c\_{58} + c\_{59} & c\_{62} - c\_{63} \\ -c\_{50} - c\_{51} & c\_{55} - c\_{54} & c\_{52} + c\_{53} & c\_{48} - c\_{49} \end{bmatrix}.$$

Combining the calculations for all of the above matrices in a single procedure, we finally obtain:

$$\mathbf{Y}\_{32\times1} = \mathbf{M}\_{32}^{(r)} \mathbf{T}\_{32\times48} \mathbf{S}\_{48}^{(r)} \mathbf{W}\_{48}^{(1)} \mathbf{P}\_{48}^{(r)} \mathbf{W}\_{48}^{(2)} \hat{\mathbf{B}}\_{48} \mathbf{W}\_{48}^{(2)} \mathbf{P}\_{48}^{(c)} \mathbf{W}\_{48}^{(1)} \mathbf{S}\_{48}^{(c)} \mathbf{T}\_{48\times32} \mathbf{M}\_{32}^{(c)} \mathbf{X}\_{32\times1} \tag{17}$$

where:

$$
\hat{\mathbf{B}}\_{48} = \text{quasidiag}\left(\tfrac{1}{4}\left(\mathbf{B}\_4^{(0)} + \mathbf{B}\_4^{(1)}\right), \tfrac{1}{4}\left(\mathbf{B}\_4^{(0)} - \mathbf{B}\_4^{(1)}\right), \tfrac{1}{4}\left(\mathbf{B}\_4^{(2)} + \mathbf{B}\_4^{(3)}\right), \tfrac{1}{4}\left(\mathbf{B}\_4^{(2)} - \mathbf{B}\_4^{(3)}\right), \tfrac{1}{4}\left(\mathbf{B}\_4^{(4)} + \mathbf{B}\_4^{(5)}\right), \tfrac{1}{4}\left(\mathbf{B}\_4^{(4)} - \mathbf{B}\_4^{(5)}\right), \tfrac{1}{4}\left(\mathbf{B}\_4^{(6)} + \mathbf{B}\_4^{(7)}\right), \tfrac{1}{4}\left(\mathbf{B}\_4^{(6)} - \mathbf{B}\_4^{(7)}\right), \tfrac{1}{4}\left(\mathbf{B}\_4^{(8)} + \mathbf{B}\_4^{(9)}\right), \tfrac{1}{4}\left(\mathbf{B}\_4^{(8)} - \mathbf{B}\_4^{(9)}\right), \tfrac{1}{4}\left(\mathbf{B}\_4^{(10)} + \mathbf{B}\_4^{(11)}\right), \tfrac{1}{4}\left(\mathbf{B}\_4^{(10)} - \mathbf{B}\_4^{(11)}\right)\right),
$$

$$\mathbf{P}\_{48}^{(r)} = \mathbf{I}\_6 \otimes \mathbf{P}\_8^{(r)}, \qquad \mathbf{P}\_{48}^{(c)} = \mathbf{I}\_6 \otimes \mathbf{P}\_8^{(c)}, \qquad \mathbf{W}\_{48}^{(2)} = \mathbf{I}\_6 \otimes \mathbf{H}\_2 \otimes \mathbf{I}\_4.$$

Figure 1 shows a data flow diagram of the new algorithm for computing the product of Kaluza numbers (17). In this paper, data flow diagrams are oriented from left to right. Straight lines denote data transfer operations; points where lines converge denote summation, and dotted lines indicate subtraction. We deliberately use plain lines without arrows so as not to clutter the picture. Rectangles denote matrix-vector multiplications, with the matrices inscribed inside.

**Figure 1.** A data flow diagram for the proposed algorithm.

#### **4. Evaluation of Computational Complexity**

We will now calculate how many multiplications and additions of real numbers are required to implement the new algorithm, and compare this with the number of operations required both for the direct computation of the matrix-vector products in Equation (1) and for our previous algorithm [25]. The new algorithm requires 192 real multiplications; thus, the number of real multiplications needed to calculate the Kaluza number product is significantly reduced. The number of real additions required by our algorithm is 384, whereas the direct computation of the Kaluza number product requires 608 additions more. In total, the proposed algorithm saves 832 multiplications and 608 additions of real numbers compared with the direct method, so its total number of arithmetic operations is approximately 71.4% less than that of the direct computation. The previously proposed algorithm [25] calculates the same result using 512 multiplications and 576 additions of real numbers. Thus, our proposed algorithm saves 62.5% of the multiplications and 33.3% of the additions of real numbers compared with our previous algorithm. Hence, the total number of arithmetic operations in the new algorithm is approximately 47% less than in our previous algorithm.
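The operation counts above can be reproduced by elementary arithmetic. The sketch below is our own code; the addition count of 384 is taken directly from the text rather than rederived:

```python
# Direct 32x32 matrix-vector product over the reals
direct_mul = 32 * 32   # 1024 multiplications
direct_add = 32 * 31   # 992 additions

# Proposed algorithm: twelve 4x4 block products form the core
proposed_mul = 12 * 4 * 4  # 192 multiplications
proposed_add = 384         # additions, as stated in the text

total_direct = direct_mul + direct_add          # 2016 operations
total_proposed = proposed_mul + proposed_add    # 576 operations
saving = 1 - total_proposed / total_direct

print(f"{saving:.1%}")  # about 71.4% fewer operations than the direct method
```

The same arithmetic confirms the savings of 832 multiplications (1024 − 192) and 608 additions (992 − 384) over the direct method.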

#### **5. Conclusions**

We presented a new effective algorithm for calculating the product of two Kaluza numbers. The use of this algorithm reduces the computational complexity of multiplications of Kaluza numbers, thus reducing implementation complexity and leading to a high-speed resource-effective architecture suitable for parallel implementation on VLSI platforms. Additionally, we note that the total number of arithmetic operations in the new algorithm is less than the total number of operations in the compared algorithms. Therefore, the proposed algorithm is better than the compared algorithms, even in terms of its software implementation on a general-purpose computer.

The proposed algorithm can be used in metacognitive neural networks that employ Kaluza numbers for data representation and processing. The effect in this case is achieved by using non-commutative finite groups based on the properties of hypercomplex algebra [24]. When Kaluza numbers are used, the rule for generating the elements of the group is set, as well as the rule for performing the group operation of multiplication. Such a system can contain two components: a neural network based on Kaluza numbers, which represents the cognitive component, and a metacognitive component, which serves to self-regulate the learning algorithm. At each stage, the metacognitive component decides how and when learning takes place. The algorithm removes unnecessary samples and keeps only those that are used; this decision is determined by the magnitude and the 31 phases of the Kaluza number. However, these matters are beyond the scope of this article and require more detailed research.

**Author Contributions:** Conceptualization, A.C.; methodology, A.C., G.C. and J.P.P.; validation, J.P.P.; formal analysis, A.C. and J.P.P.; writing—original draft preparation, A.C. and J.P.P.; writing—review and editing, A.C. and J.P.P.; visualization, A.C. and J.P.P.; supervision, A.C., G.C. and J.P.P. All authors have read and agreed to the published version of the manuscript.

**Funding:** This research received no external funding.

**Conflicts of Interest:** The authors declare no conflict of interest.

#### **References**


## *Article* **System for Neural Network Determination of Atrial Fibrillation on ECG Signals with Wavelet-Based Preprocessing**

**Pavel Lyakhov 1,2,\*, Mariya Kiladze <sup>1</sup> and Ulyana Lyakhova <sup>1</sup>**


**Featured Application: The use by medical specialists of a neural network classification system for ECG signals with preprocessing stages to recognize atrial fibrillation.**

**Abstract:** Today, cardiovascular disease is the leading cause of death in developed countries. The most common arrhythmia is atrial fibrillation, which increases the risk of ischemic stroke. An electrocardiogram is one of the best methods for diagnosing cardiac arrhythmias. Often, the signals of the electrocardiogram are distorted by noises of varying nature. In this paper, we propose a neural network classification system for electrocardiogram signals based on the Long Short-Term Memory neural network architecture with a preprocessing stage. Signal preprocessing was carried out using a symlet wavelet filter with further application of the instantaneous frequency and spectral entropy functions. For the experimental part of the article, electrocardiogram signals were selected from the open database PhysioNet Computing in Cardiology Challenge 2017 (CinC Challenge). The simulation was carried out using the MatLab 2020b software package for solving technical calculations. The best simulation result was obtained using a symlet with five coefficients and made it possible to achieve an accuracy of 87.5% in recognizing electrocardiogram signals.

**Keywords:** digital filter; electrocardiogram; instantaneous frequency; symlet wavelet; spectral entropy; signal denoising; LSTM

#### **1. Introduction**

The number of people who suffer from cardiac diseases is increasing every day. These diseases are the leading cause of death in developed countries [1–3]. Electrocardiography is a method of recording and studying the electric fields that are generated during the work of the heart. An ECG is the result of electrocardiography [4,5] and is a graphical record of the electrical activity of the heart produced by depolarization and repolarization of the atria and ventricles. The electrocardiogram (ECG) is a non-invasive technique used to detect cardiovascular disease. The ECG is described by the waveforms of the P, QRS, and T waves, which are associated with each heart-rate function. The P wave displays the process of depolarization of the atrial myocardium; the QRS complex displays depolarization of the ventricles; the ST segment and the T wave display the processes of repolarization of the ventricular myocardium [6]. Figure 1 shows an example of an electrocardiogram waveform with the P, Q, R, S, and T characteristics, as well as the standard electrocardiogram intervals: the PQ interval, the ST interval, and the QRS complex.

Atrial fibrillation is a major risk factor for ischemic stroke [7]. The main criteria for the presence of atrial fibrillation are the absence of P waves, the presence of atrial fibrillation waves, varying R-R intervals, a constant or accelerated heart rate (HR), and a QRS complex shorter than 0.12 s [8]. Since the P wave is not detected during atrial fibrillation, the interval between QRS complexes increases and it is impossible to calculate the PQ and QT intervals. Calculating the HR or QRS time from digital ECG signals is problematic. Therefore, it is necessary to pay attention to the absence of the P-peak and the presence of varying intervals between R-R peaks when determining atrial fibrillation.

**Citation:** Lyakhov, P.; Kiladze, M.; Lyakhova, U. System for Neural Network Determination of Atrial Fibrillation on ECG Signals with Wavelet-Based Preprocessing. *Appl. Sci.* **2021**, *11*, 7213. https://doi.org/10.3390/app11167213

Academic Editor: Fabio La Foresta

Received: 11 June 2021; Accepted: 2 August 2021; Published: 5 August 2021

**Publisher's Note:** MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

**Copyright:** © 2021 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).

**Figure 1.** Example of an electrocardiogram signal.

In some modern electrocardiographs, various signal filters are used, which make it possible to obtain a higher-quality electrocardiogram while introducing some distortions into the shape of the received signal. Low-pass filters in the range of 0.5–1 Hz can reduce the effect of baseline wander while distorting the shape of the ST segment [4]. A low-frequency anti-tremor filter around 35 Hz suppresses artifacts associated with muscle activity. A notch filter in the range of 50–60 Hz neutralizes power-line pickup [5].

Today, learning algorithms are becoming more accurate, but recognition systems based on artificial intelligence in medicine, and in cardiology in particular, are not able to achieve 100% accuracy [9]. For this reason, searching for ways to increase this indicator is relevant. One possible way is the preliminary processing of ECG signals. Finding peaks in signals is an important step in many automatic ECG processing systems. In this work, a neural network classification system with a preprocessing stage is proposed for determining atrial fibrillation from ECG signals. At the preprocessing stage, signals are selected by the number of heartbeat counts, wavelet analysis is applied to remove noise and isolate the R-peak, and spectral analysis is used to isolate the P-peak. Automatic detection of atrial fibrillation from ECG signals will allow doctors to determine whether a patient needs cardiac care.

#### **2. Related Research**

Today, medicine is considered one of the promising areas for the introduction of artificial intelligence, because the analysis of medical signals is the most common research method in this area. An example is an algorithm for identifying patients with atrial fibrillation in sinus rhythm based on convolutional neural networks (CNN) in [10]. The work [11] presents a hybrid approach that uses Empirical Mode Decomposition and a CNN to classify ECG signals. In [12], the author implemented the extraction of ECG signal features based on the wavelet transform with further classification using Long Short-Term Memory (LSTM). The work [13] presents a method for detecting atrial fibrillation using LSTM. The method achieved an accuracy of 98.51%. The essence of the approach proposed in that work is the separation of ECG signals with a sliding window and the loading of the obtained blocks into the decision-making system. The ECG signals were taken from the MIT-BIH Atrial Fibrillation Database. The dataset was 10 h long, and each signal contained 100 heartbeats (R-R peaks). The authors point out the slow learning speed and high requirements for computing resources. For the simulation, the authors used a Quadro M5000 GPU (Nvidia, Santa Clara, CA, USA).

This work is of scientific interest; however, due to the different methodology and resources used, it cannot be directly compared with the proposed system for neural network determination of atrial fibrillation on ECG signals with wavelet-based preprocessing.

Finding peaks in signals is an important step in many signal processing applications. Automatic peak detection using neural network classification systems is difficult due to the physiological variability of P waves and QRS complexes, as well as the presence of various types of noise, including muscle noise, artifacts due to electrode movement, power-line noise, and baseline deviations.

Software QRS complex recognition is an integral part of modern computerized ECG monitoring systems. An algorithm for their recognition is presented in [14]; it is based on optimized filtering and simple threshold setting, since optimized filtering is considered a factor in achieving good timing accuracy. The most widely known method for detecting a single R-peak is the Pan–Tompkins method, which uses three types of processing steps: linear digital filtering, nonlinear transformation, and decision-rule algorithms [15]. The work [16] presents an algorithm for detecting the QRS complex using multi-stage morphological filtering to suppress impulse noise. In [17], to determine the QRS complex, it is proposed to create an estimated QRS signal using parameters extracted from the original ECG signal.

For automatic disease diagnosis systems based on ECG signals, accurate determination of the P wave is critical. The work [18] describes one of the first methods of processing ECG signals to isolate the P-peak using the wavelet transform and subsequent training of a neural network. In [3], a system for determining the P and T waves based on the wavelet transform is presented.

The main methods of processing ECG signals for noise reduction are digital filtering, adaptive filtering, and wavelet filtering. The paper [19] describes a method for processing ECG signals using an adaptive wavelet transform based on the Poincaré section and the Shannon method. There are also methods for processing signals online in real time for use in pacemakers. The paper [20] describes a processing method using a biorthogonal wavelet transform based on a linear-phase structure for noise removal, feature extraction, and compression of the ECG signal. There are also methods for the computerized detection of fibrillation. The work [21] describes a method for detecting fibrillation using an eight-layer convolutional neural network, which requires only basic data normalization, without preliminary processing and feature extraction from raw ECG samples. The work [7] describes a method for constructing classifiers based on several sets of functions (a set of Andreotti functions, a set of Zabikhi functions, an aggregated set of functions, and a set of Dutt functions) and using a random forest classifier.

#### **3. Materials and Methods**

#### *3.1. Neural Network System for Atrial Fibrillation Recognition by ECG Signal*

Any digital signal is distorted by noises of varying nature. Noise on ECG signals makes it difficult to analyze the data, both for a specialist and for systems based on artificial intelligence. Signal preprocessing allows the ECG to be prepared for further classification. This paper proposes a system for determining atrial fibrillation from ECG signals, which includes the four stages presented in Figure 2. The first stage consists of preprocessing the signals to isolate signals with the same number of heartbeats. At the second stage, noise is removed from the ECG signals using a discrete wavelet transform. At the third stage, the P-peak is isolated using spectral analysis. At the fourth stage, the signals are classified using the LSTM network.

**Figure 2.** Neural network classification system with the pre-processing stage for ECG signals.

#### *3.2. Method for Pre-Processing of ECG Signals*

As part of the preprocessing of the ECG signal, its length is checked. Each ECG signal has a certain number of heartbeats and a certain number of samples. The length of an ECG signal is measured by the number of samples that make up the signal. An ECG signal database can contain signals with different numbers of samples. For the correct operation of the neural network classification system, the number of samples must be the same for all signals. To select ECG signals of one length, the following steps are necessary. In the first step, the ECG signals are divided into groups with the same number of samples. In the second step, the group consisting of the largest number of signals of the same length is selected. In the third step, signals consisting of fewer or more samples than those in the selected group are removed.
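The three selection steps above can be sketched as follows (a minimal Python sketch; the original simulation used MatLab, and the `signals` list here is a hypothetical stand-in for the database records):

```python
from collections import defaultdict

import numpy as np

def select_uniform_length(signals):
    """Keep only the signals belonging to the most populous length group.

    Step 1: group the signals by their number of samples.
    Step 2: pick the group containing the largest number of signals.
    Step 3: discard signals whose length differs from that group's length.
    """
    groups = defaultdict(list)
    for s in signals:
        groups[len(s)].append(s)
    best_length = max(groups, key=lambda length: len(groups[length]))
    return best_length, groups[best_length]

# Toy example: three signals of length 4 and one of length 3;
# the length-4 group wins and the length-3 signal is removed.
signals = [np.zeros(4), np.ones(4), np.zeros(3), np.ones(4)]
length, kept = select_uniform_length(signals)
```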

#### *3.3. Removing Noise from ECG Signals Using a Discrete Wavelet Transform*

Any digital signal is distorted by noises of varying nature. To isolate signal features, the noise must be removed. An ECG signal distorted by noise can be written as:

$$\mathcal{W}(t) = \mathcal{S}(t) + \mathcal{N}(t),\tag{1}$$

where *W*(*t*) is the ECG signal, *S*(*t*) is the ECG signal without noise distortion, *N*(*t*) is the noise on the ECG signal.

The wavelet transform is a common way to remove noise from a signal [22]. To remove the noise from the ECG signals, a discrete wavelet transform (DWT) of the symlet family was used; the symlet is a Daubechies wavelet with the least asymmetry and compact support. The detail factor for each case is set empirically.

There are three stages in using the wavelet transform to remove noise from the ECG signal. The first stage is to obtain the noisy wavelet coefficients using the DWT of the noisy signal. The second stage is the choice of thresholding. The third stage is an inverse wavelet transform to obtain the denoised signal [23]. The DWT of the ECG signal is:

$$DWT(a,b) = \frac{1}{\sqrt{2}} \sum\_{j=0}^{N} W\_j \int\_j^{j+1} \Psi\left(\frac{t-b}{a}\right) dt,\tag{2}$$

where *N* is the number of samples of the ECG signal, *W* is the ECG signal distorted by noise, *Ψ* is a symlet, and the variables *a* and *b* can take the values *a* = 1, . . . , *N*, *b* = 1, . . . , *N* − 1.

To obtain DWT, a low-pass analysis filter with an *g* impulse response and a high-pass analysis filter with an *h* impulse response are used. As a result of filtering, approximating and detailing coefficients are obtained [24].

$$y\_{low}(t) = \sum\_{k=-\infty}^{\infty} W(k)g(2t - k),\tag{3}$$

$$y\_{high}(t) = \sum\_{k=-\infty}^{\infty} W(k)h(2t - k). \tag{4}$$

With each application of the DWT, the number of samples of the ECG signal is halved, as shown in Table 1 [24,25]. Each subsequent decomposition is carried out on the low-frequency component, as shown in Figure 3.

**Table 1.** Length of ECG signals in DWT, where *N* is the initial number of samples on the ECG signal.


**Figure 3.** DWT of the ECG signal, where *W*(*t*) is the ECG signal distorted by noise, 2↓ denotes decimation, *g* is a low-pass analysis filter, and *h* is a high-pass analysis filter.
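A single analysis level of Equations (3) and (4), including the 2↓ decimation of Figure 3, can be sketched as follows (Python is used instead of the authors' MatLab, and the short Haar filter pair stands in for the longer symlet filters purely for illustration):

```python
import numpy as np

def dwt_level(W, g, h):
    """One analysis level per Equations (3)-(4): convolve the signal with
    the low-pass filter g and the high-pass filter h, then keep every
    second sample (the 2-fold decimation), halving the number of samples
    as described in Table 1."""
    full_low = np.convolve(W, g)
    full_high = np.convolve(W, h)
    y_low = full_low[1::2]    # approximating coefficients
    y_high = full_high[1::2]  # detailing coefficients
    return y_low, y_high

# Haar analysis filters (illustrative; the paper uses symlet filters)
s = 1.0 / np.sqrt(2.0)
g = np.array([s, s])   # low-pass
h = np.array([s, -s])  # high-pass

# Pairwise-constant toy signal: the detail branch comes out all zero
W = np.array([1.0, 1.0, 2.0, 2.0, 3.0, 3.0, 4.0, 4.0])
approx, detail = dwt_level(W, g, h)
```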

The next step in removing noise from the ECG signal is to select a threshold function and perform thresholding. The threshold is a value that determines whether there is noise in the signal: if the value of a wavelet coefficient at a certain moment is greater than the threshold, it is considered signal; if it is less, it is considered noise [24]. To determine the threshold limit of the ECG signal, the minimax threshold was used [26]:

$$T \le \sqrt{2\ln N}, \qquad T^2 = 2\ln(N+1) - 4\ln(\ln(N+1)) - \ln 2\pi.\tag{5}$$

The wavelet coefficients are transformed by thresholding using threshold functions. Hard- and soft-thresholding functions are most often used [24]. To determine the noise on the ECG signal, the soft-threshold function was used [26]:

$$0 \le \max\left(1 - \frac{T}{|\mathbf{x}|}, 0\right) \le 1,\tag{6}$$

where *x* is the value of the wavelet coefficient.
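The minimax bound from Equation (5) and the soft-threshold factor of Equation (6) can be sketched as follows (a Python illustration; the threshold value in the usage example is arbitrary):

```python
import numpy as np

def minimax_threshold(N):
    """Upper bound on the threshold from Equation (5): T <= sqrt(2 ln N)."""
    return np.sqrt(2.0 * np.log(N))

def soft_threshold(x, T):
    """Soft thresholding per Equation (6): each coefficient is scaled by
    max(1 - T/|x|, 0), so coefficients whose magnitude is below T are
    treated as noise and set to zero."""
    factor = np.maximum(1.0 - T / np.abs(x), 0.0)
    return factor * x

coeffs = np.array([0.2, -0.5, 3.0, -4.0])
T = 1.0  # illustrative threshold
cleaned = soft_threshold(coeffs, T)
# The two small coefficients are zeroed; the large ones shrink toward 0.
```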

The inverse DWT for obtaining the denoised ECG signal is as follows [24]:

$$S(t) = \sum\_{k=-\infty}^{\infty} \tilde{g}\_k y\_{low\_k}(t) + \sum\_{m=-\infty}^{\infty} \sum\_{k=-\infty}^{\infty} \tilde{h}\_{m\_k} y\_{high\_k}(t),\tag{7}$$

where *S*(*t*) is the ECG signal without noise distortion, and *g̃<sub>k</sub>* and *h̃<sub>mk</sub>* are the approximating and detailing coefficients after processing by the threshold function. The inverse DWT of the ECG signal is performed according to Figure 4.

#### *3.4. Isolation of the P-Peak Feature Using Spectral Analysis*

The instantaneous frequency and spectral entropy functions were selected to isolate the P-peak. To calculate the spectral entropy, it is necessary to compute the amplitude spectrum of the process using the Fourier transform, normalize the amplitude spectrum so that the sum of its samples equals 1, and calculate the entropy using Shannon's formula. Changes in spectral entropy over time are associated with changes in the waveform, which allows it to be used to distinguish features on the ECG signal. Since the ECG signal consists of a finite number of samples, Shannon's formula for the spectral entropy of the ECG signal is:

$$S = -\sum\_{i=1}^{N} n\_i \log n\_i,\tag{8}$$

where *S* is the amount of information, *N* is the number of possible events, and *ni* is the value of the *i*-th sample of the normalized spectrum. However, the Fourier transform is more correctly applied to stationary signals. Therefore, for more accurate identification of the P-peak on the ECG signal, several signal-processing methods are required. With deviations in the work of the heart, changes in the frequency of the ECG signals occur. To detect such changes, the instantaneous frequency is used, since this method allows the researcher to take into account the nature of a process that changes over time [27].
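The entropy computation described above (FFT amplitude spectrum, normalization to unit sum, Shannon's formula) can be sketched as follows (Python used for illustration; the tone and noise signals are synthetic examples):

```python
import numpy as np

def spectral_entropy(signal):
    """Shannon spectral entropy: take the amplitude spectrum via the FFT,
    normalize it so its samples sum to 1, then apply S = -sum(n_i log n_i)."""
    amp = np.abs(np.fft.fft(signal))
    p = amp / amp.sum()
    p = p[p > 0]  # skip zero bins to avoid log(0)
    return -np.sum(p * np.log(p))

# A pure tone concentrates the spectrum in two bins (low entropy),
# while white noise spreads energy across all bins (high entropy).
t = np.arange(1024)
tone = np.sin(2 * np.pi * 50 * t / 1024)
rng = np.random.default_rng(0)
noise = rng.standard_normal(1024)
e_tone = spectral_entropy(tone)
e_noise = spectral_entropy(noise)
```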

**Figure 4.** Inverse DWT of the ECG signal, where *S*(*t*) is the noise-free ECG signal, 2↑ denotes interpolation, *g* is a low-pass synthesis filter, and *h* is a high-pass synthesis filter.

As the ECG signal is non-stationary, in order to calculate the instantaneous frequency, the researcher can refer to the works of Carson and Fry [28] and Van der Pol [29]:

$$f\_i(t) = \frac{1}{2\pi} \frac{d\varphi(t)}{dt},\tag{9}$$

where *φ*(*t*) is the phase of the ECG signal *W*(*t*). Formula (9) describes the rate of change of the phase of the ECG signal, i.e., the instantaneous frequency shows how often peaks appear and disappear.
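A sketch of Formula (9) is given below, under the assumption that the phase is extracted via the analytic signal, which is a standard route for computing the instantaneous frequency of a sampled signal (Python; the sampling rate and test tone are illustrative, not from the paper):

```python
import numpy as np

def analytic_signal(x):
    """Analytic signal via the frequency-domain Hilbert construction:
    zero the negative frequencies and double the positive ones."""
    N = len(x)
    X = np.fft.fft(x)
    H = np.zeros(N)
    H[0] = 1.0
    if N % 2 == 0:
        H[N // 2] = 1.0
        H[1:N // 2] = 2.0
    else:
        H[1:(N + 1) // 2] = 2.0
    return np.fft.ifft(X * H)

def instantaneous_frequency(x, fs):
    """Formula (9): f_i(t) = (1/2*pi) * d(phi)/dt, with phi the unwrapped
    phase of the analytic signal; the derivative is a finite difference."""
    phase = np.unwrap(np.angle(analytic_signal(x)))
    return np.diff(phase) * fs / (2 * np.pi)

fs = 1000.0                     # assumed sampling rate, Hz
t = np.arange(0, 1, 1 / fs)
x = np.sin(2 * np.pi * 10 * t)  # a 10 Hz test tone
f_inst = instantaneous_frequency(x, fs)
```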

#### *3.5. LSTM Processing of ECG Data*

LSTM networks have been specifically designed to find patterns over time [30]. Since ECG signals are sequences of peaks, the ability to memorize characteristic fragments of time series is critical when using LSTM in this area. Long-term and short-term memory is the main reason that the LSTM network is used as the basic structure of ECG signal recognition systems. In the present study, the deep LSTM network was used to classify ECG signals.

Passing a signal through a standard LSTM network structure involves four stages. The first stage is the passage of the signal through a sigmoidal layer, which is designed to determine the desired information. The second stage consists of passing through a sigmoidal layer, which determines whether particular information is relevant, and transmits a signal to the hyperbolic-tangent layer, which defines a new vector of candidates. The third stage is to save the new vector of candidates. The fourth stage consists of passing through a sigmoidal layer, which is designed to determine the required information, and the hyperbolic tangent, which maps information into the range [−1; 1]. Figure 5 shows a typical LSTM structure.

**Figure 5.** Standard LSTM structure.

To classify ECG signals, it is proposed to use an LSTM network with two input layers *y*<sup>1</sup> and *y*<sup>2</sup>, which are combined into a two-dimensional vector *x*. All calculations are performed according to the standard LSTM structure, which is shown in Figure 5, where *x* is the input two-dimensional vector, *ht*−<sup>1</sup> is the output vector from the previous LSTM block (*h*<sup>0</sup> = 0), *ht* is the output vector of the LSTM block, *Ct*−<sup>1</sup> is the state vector from the previous LSTM block (*C*<sup>0</sup> = 0), *Ct* is the state vector of the LSTM block, *σ* is the sigmoid activation function, *tanh* is the hyperbolic tangent activation function, × is the multiplication operator, and + is the addition operator.

The forget gate vector *ft* is the result of a computational step through the sigmoidal layer, which is intended to determine the desired information. The resulting vector determines what information needs to be "memorized" and is calculated by the formula:

$$f\_t = \sigma\left(M\_f[h\_{t-1}, x] + b\_f\right),\tag{10}$$

where *Mf* is a matrix of parameters, *bf* is a vector of parameters. Update gate vector *it* checks the relevance of the information using the sigmoidal activation function:

$$i\_t = \sigma(M\_i[h\_{t-1}, \mathbf{x}] + b\_i),\tag{11}$$

where *Mi* is a matrix of parameters, *bi* is a vector of parameters. The next step is to define a new state vector.

$$\mathbb{C}\_{t} = f\_{t} \times \mathbb{C}\_{t-1} + i\_{t} \times \tanh(M\_{\mathbb{C}}[h\_{t-1}, \mathbf{x}] + b\_{\mathbb{C}}),\tag{12}$$

where *MC* is a matrix of parameters, *bC* is a vector of parameters, and × is the multiplication operator.

The output gate vector *Ot* that is a candidate for leaving the LSTM network is calculated by the formula:

$$\mathbf{O}\_{t} = \sigma(\mathbf{M}\_{\rm O}[\mathbf{h}\_{t-1}, \mathbf{x}] + \mathbf{b}\_{\rm O}),\tag{13}$$

where *MO* is a matrix of parameters, *bO* is a vector of parameters. The definition of the output vector *ht* is made according to the formula:

$$h\_t = O\_t \times \tanh(C\_t). \tag{14}$$

The rest of the LSTM network layers are standard and are used in neural networks to classify signals.
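A single forward step through Equations (10)–(14) can be sketched as follows (a Python illustration with toy dimensions and random parameters; the output gate uses the hyperbolic tangent of the cell state, as in the standard LSTM):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x, h_prev, C_prev, M, b):
    """One LSTM step following Equations (10)-(14). M and b hold the
    parameter matrices/vectors for the f, i, C, and O gates; [h, x] is
    the concatenation of the previous output and the current input."""
    hx = np.concatenate([h_prev, x])
    f = sigmoid(M['f'] @ hx + b['f'])                   # forget gate, Eq. (10)
    i = sigmoid(M['i'] @ hx + b['i'])                   # update gate, Eq. (11)
    C = f * C_prev + i * np.tanh(M['C'] @ hx + b['C'])  # new state,  Eq. (12)
    O = sigmoid(M['O'] @ hx + b['O'])                   # output gate, Eq. (13)
    h = O * np.tanh(C)                                  # output vector
    return h, C

# Toy dimensions: 2-D input (the two preprocessed feature channels),
# 3-D hidden state, randomly initialized parameters.
rng = np.random.default_rng(0)
n_in, n_hid = 2, 3
M = {k: rng.standard_normal((n_hid, n_hid + n_in)) for k in 'fiCO'}
b = {k: np.zeros(n_hid) for k in 'fiCO'}
h, C = np.zeros(n_hid), np.zeros(n_hid)  # h0 = 0, C0 = 0 as in the text
h, C = lstm_step(np.array([0.5, -0.2]), h, C, M, b)
```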

#### **4. Results**

For the modeling, ECG signals were selected from the international open database PhysioNet Computing in Cardiology Challenge 2017 (CinC Challenge) [31]. This database contains over 10,000 ECG records; it is freely available from AliveCor and is a random sample of patient records of no more than one minute in duration. The PhysioNet Computing in Cardiology Challenge 2017 database consists of 8528 ECG signals for training and 3658 ECG signals for validation. The database consists of four types of single-channel signals: 5152 normal signals (N), 771 signals with cardiac fibrillation (A), 46 noisy signals (~), and 2557 other signals (O). The simulations were performed using two categories of signals, namely signals without heart defects (N) and signals with atrial fibrillation (A). These signal categories were selected to study the signs of atrial fibrillation on ECG signals for more correct LSTM learning. A total of 1000 signals were selected from the CinC Challenge database for the first modeling. For the second experimental simulation, 5925 ECG signals were selected, namely 5152 normal signals (N) and 771 signals with cardiac fibrillation (A). Examples of ECG signals selected from the CinC Challenge database are shown in Figure 6.

**Figure 6.** An example of ECG signals without heart defects (Normal Signal) and signals with atrial fibrillation (AFib signal) from the CinC Challenge database [31].

The simulation was carried out using the MatLab 2020b software package for technical computing. The calculations were performed on a PC with an Intel(R) Core(TM) i5-10210U CPU @ 1.60 GHz (8 CPUs), 2.1 GHz.

The PhysioNet Computing in Cardiology Challenge 2017 (CinC Challenge) database consists of ECG signals with different numbers of samples. Figure 7 shows that signals with 9000 samples are significantly more numerous than signals of any other length. For correct training of the neural network, the input ECG signals must contain the same number of samples. Therefore, at the stage of preliminary data processing for the first training, 976 ECG signals with a number of samples equal to 9000 were selected. For the second simulation, 5754 ECG signals with 9000 samples were taken at the preprocessing stage.

**Figure 7.** Histogram of the ratio of the number of samples to the number of signals.

The DWT allows splitting the signal into high and low frequencies. Analysis of the high frequencies of the ECG signal makes it possible to determine the presence of peaks. Analysis of the low frequencies of the ECG signal makes it possible to determine the presence of noise of varying nature [22]. The symlet is an orthogonal wavelet and can be used to reconstruct the signal [23] or to find R-peaks on ECG signals [24,32]. To remove noise from the ECG signals and isolate the R-peaks, the symlet wavelet filter was chosen. The symlet is similar in shape to the QRS complex of the ECG signal, which means that the decomposition coefficients of the symlet-based DWT of the ECG signal have a high correlation with the locations of the R-peaks; their number can be determined by analyzing the wavelet decomposition coefficients of the ECG signal. Figure 8 shows a graphical display of the symlet and the QRS complex. Figure 9 shows an example of the five-level decomposition of an ECG signal based on a symlet. The spikes in coefficients D2, D3, and D4 correspond to heartbeats, from which it is possible to determine the locations of the R-peaks of the cardiogram.

**Figure 8.** Symlet repeating the shape of the R-peak on the ECG signal.

**Figure 9.** An example of a five-level decomposition of an ECG signal based on a symlet.

The results in Table 2 were obtained by simulating the proposed system using various symlets. Each value in the table is the result of training the proposed system for neural network determination of atrial fibrillation based on LSTM with signal preprocessing using symlets with different numbers of coefficients. The best learning result was obtained using a five-coefficient symlet.

**Table 2.** Simulation of the proposed method using a wavelet symlet with different coefficients.


For the correct selection of various features on the ECG signals, one-dimensional functions must be used. The instantaneous frequency and spectral entropy functions were selected to isolate the P-peak. The Fourier series and the integral Fourier transform are the basis of harmonic signal analysis. However, when analyzing ECG signals, these transformations do not make it possible to analyze peaks or to understand the local properties of the signal and its frequency characteristics. Therefore, for these purposes, we used characteristics derived from the Fourier spectrum. The instantaneous frequency is computed from a spectrogram obtained using short-time Fourier transforms over sliding time windows. The spectral entropy estimates entropy based on the power spectrogram. The time output of each function corresponds to the center of the time windows.

The selected ECG signals from the CinC Challenge database were divided into signals for training and signals for testing in a ratio of 90:10. For training, the architecture of the LSTM neural network was assembled. The network consisted of two input layers, to which the preprocessed signals were applied, and one hundred hidden recurrent layers. Preprocessing of the signals using a symlet and subsequent application of the spectral analysis functions made it possible to reduce the length of the ECG signals to 255 HC. Table 3 presents the results of modeling various methods for detecting atrial fibrillation. The confusion matrices resulting from training the proposed system for neural network determination of atrial fibrillation from ECG signals are presented in Figures 10 and 11.

The best ECG signal recognition accuracy, 87.5%, was obtained using the method proposed in this work. This accuracy was obtained in the first simulation using 976 ECG signals. The recognition accuracy in the second simulation was 87.4%, nearly identical to the result of the first simulation. The lowest recognition accuracy was obtained using a one-dimensional LSTM without preliminary signal processing. The results obtained indicate that preprocessing of ECG signals can significantly increase the recognition accuracy of neural network classification systems.


**Table 3.** Simulation results of various methods for detecting atrial fibrillation.

**Figure 10.** A confusion matrix is a result of training a system for neural network determination of atrial fibrillation on ECG signals with wavelet-based preprocessing.

**Figure 11.** A confusion matrix is a result of the validation of a system for neural network determination of atrial fibrillation on ECG signals with wavelet-based preprocessing.

#### **5. Discussion**

The article presents a system for neural network determination of atrial fibrillation on ECG signals with wavelet-based preprocessing. Simulation of the proposed system made it possible to achieve a recognition accuracy of 87.5%, which is significantly higher than the 79.0–82.0% level that can be achieved using known systems [33–35].

The work [33] is devoted to the development of a smartphone application for the detection of atrial fibrillation. The method proposed in [33] uses preliminary preprocessing of the ECG signal, which includes the analysis of the R-R intervals and the P-peak. Despite the similarity of the methods used, the authors report an accuracy of 79.0%, which is significantly lower than the result obtained by modeling the system proposed in this work.

The authors of [34] trained a recurrent neural network without preliminary signal processing and achieved an accuracy of 82.0%. The recurrent network is the basis of the LSTM network used in our method, which makes a comparison possible.

In [35], the authors used a part of the MIT-BH 2017 database consisting of normal ECG signals, signals with fibrillation, and "others"; the group of noisy signals was not used. For the experiment, the authors used signals with a length of 4 heart counts (4 R-R intervals). The method proposed in [35] did not give a positive result for signals of the "other" category. For determining the presence of fibrillation from the groups with normal signals and signals with fibrillation, the reported accuracy was 82.0%, which is also lower than the accuracy of the proposed system for neural network determination of atrial fibrillation on ECG signals.

There are other ways to measure atrial fibrillation. In [36], a more complex methodology is used that requires more computing power. Using multiple convolutional neural networks and LSTM networks is a resource-intensive method. At the same time, the use of pre-processing of the signal before training the neural network can reduce resource costs.

The proposed stages of preliminary processing of ECG signals made it possible to prepare data for further analysis to conduct an automated determination of atrial fibrillation. The average accuracy of a cardiologist's diagnosis by visual analysis of ECG signals is 65.0–70.0% [37]. The use of the proposed neural network system for determining atrial fibrillation from ECG signals by specialists will make it possible to increase the efficiency of diagnostics in comparison with methods of visual diagnosis. The proposed system for neural network determination of atrial fibrillation on ECG signals can only be used as an additional diagnostic tool by specialists. This system is not a medical device and cannot independently diagnose patients.

A promising direction for further research is the construction of more complex systems for the neural network classification of ECG signals, using, together with the analysis of signals, various metadata about patients, such as age, gender, race, genetic predisposition, and other descriptors. One of the next steps for further research is the creation of a mobile application for real-time fibrillation detection. Additionally, further research plans include the complication of the proposed system by using convolutional neural networks to improve the accuracy of determining atrial fibrillation.

**Author Contributions:** Conceptualization, P.L.; methodology, P.L.; software, M.K.; validation, M.K.; formal analysis, P.L.; investigation, M.K.; resources, U.L.; data curation, P.L.; writing—original draft preparation, M.K.; writing—review and editing, U.L.; visualization, M.K.; supervision, U.L.; project administration, P.L.; funding acquisition, P.L. All authors have read and agreed to the published version of the manuscript.

**Funding:** This research was funded by the Russian Foundation for Basic Research (project no. 19-07-00130 A) and by the Presidential Council for grants (project no. MK-3918.2021.1.6).

**Institutional Review Board Statement:** Not applicable.

**Informed Consent Statement:** Not applicable.

**Acknowledgments:** The authors are grateful to the North Caucasus Federal University for supporting the competition of scientific groups and individual scientists of the North Caucasus Federal University.

**Conflicts of Interest:** The authors declare no conflict of interest.

#### **References**


MDPI St. Alban-Anlage 66 4052 Basel Switzerland Tel. +41 61 683 77 34 Fax +41 61 302 89 18 www.mdpi.com

*Applied Sciences* Editorial Office E-mail: applsci@mdpi.com www.mdpi.com/journal/applsci


ISBN 978-3-0365-5516-4