Structural dynamic modeling of buildings has come a long way in the last 50 years, driven by competition among software development companies and the increased availability of computational resources. These technologies have evolved from simulating only prismatic beams to including geometric and material nonlinearities [1]. From a control engineering perspective, such models are particularly important when designing control systems for earthquake hazard mitigation. Having a good model dedicated to this purpose can significantly improve the behavior of these systems [2].
The civil engineering field is diverse, ranging from water resources to structural design and analysis. Generally speaking, the problems in this field are unstructured and imprecise, influenced by a designer’s intuitions and past experiences. Conventional computing methods based on analytic or empirical relationships are time consuming and labor intensive when applied to real-life problems. In contrast, Soft Computing (SC) techniques, based on the reasoning, intuition, conscience, and knowledge of an individual, can readily be employed to study, model, and analyze such problems [3,4].
Unlike conventional computing, which is based on exact solutions, SC techniques work either independently or in a mutually complementary manner to support engineering activities: they emulate the cognitive behavior of the human mind and exploit the imprecise and uncertain nature of a problem, within a given tolerance of imprecision, to reach a cost-effective solution quickly [5]. As a multidisciplinary field, SC employs a variety of complementary tools, including statistical, probabilistic, and optimization tools.
According to Falcone et al., SC can be divided into two main domains [5]. The first, approximate reasoning, collects a set of knowledge-driven methods that sacrifice soundness or completeness in order to achieve a substantial gain in speed. The second, randomized search, is a family of numerical optimization techniques, such as direct search, derivative-free search, or black-box search, which work by moving iteratively to better positions in the search space, sampled from a surrounding hypersphere.
Earthquake engineering can be described as the branch of civil engineering devoted to reducing earthquake risks. An earthquake is a shaking of the Earth’s surface caused by interactions along plate boundaries [6]. The sudden release of energy, in the form of seismic waves, can kill thousands of people and destroy many buildings. In this narrow context, earthquake engineering examines the problems that arise when an earthquake occurs and seeks methods that minimize the resulting damage. The first concern leads to earthquake prediction, while the second leads to the optimal design of structures for seismic performance. A whole range of earthquake engineering problems has arisen that are suitable for solving with SC [7,8]. The focus of SC in earthquake engineering is on two types of problems: the search for the best seismic structural design (system analysis) and data analysis for earthquake prediction (modeling and simulation). In order to achieve earthquake safety, seismic design optimization addresses both passive and active structures [9].
The goal of this project is to measure the performance of different variants of DE and PSO in optimizing the parameters of a proposed seismic model for building structures, a model lightweight enough to be used in applications that require easy computation and reliability. The authors will analyze the convergence speed and other indicators of the algorithms’ performance and will compare the two algorithms in this use case. The resulting model will also be tested in two scenarios: simulation and prediction. The prediction will be performed using an extended Kalman filter.
The bibliographic search for this project can be divided into the following sections, according to the field in which it was performed:
1.2. System Identification and Parameter Estimation
Due to uncertainty, time lags, multi-variable couplings, and the limitations between input and output, traditional model-based control methods find it increasingly difficult to control complex processes correctly amid the rapid development of modern industry. The complex structure, differing parameters, and time variations of industrial applications pose a challenge for traditional identification methods, particularly in multivariate systems. Methods for identifying multi-variable systems date back to the 1960s, but the majority of them require noise-free observations. Together with their high computational cost, this makes them difficult to apply in practice [12]. In view of these problems, many scientists proposed that a polynomial matrix be substituted for the state-space model to describe the multi-variable system [13].
Some researchers then proposed Hankel matrix-based methods for subspace identification. The first step in these methods is to obtain the system’s extended observability matrix (or state sequence) and then calculate the parameter matrices of each subspace. Multivariable Output-Error State-sPace identification (MOESP) [13], Numerical algorithms for Subspace State-Space System IDentification (N4SID) [14], and Canonical Variate Analysis (CVA) [15] are the main representative techniques.
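To make the first step concrete, the sketch below shows how a block Hankel matrix can be assembled from measured output data; this is only an illustrative fragment of the subspace pipeline, and the function name and array shapes are our own choices rather than anything prescribed in [13,14,15].

```python
import numpy as np

def block_hankel(y, block_rows):
    """Stack a multivariate signal y of shape (N, ny) into a block Hankel
    matrix with `block_rows` block rows; subspace methods such as N4SID
    start from matrices of this form."""
    N, ny = y.shape
    cols = N - block_rows + 1
    H = np.zeros((block_rows * ny, cols))
    for i in range(block_rows):
        H[i * ny:(i + 1) * ny, :] = y[i:i + cols, :].T
    return H

# Example: scalar signal of length 10, 3 block rows -> a 3 x 8 Hankel matrix.
y = np.arange(10.0).reshape(-1, 1)
H = block_hankel(y, block_rows=3)
```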
Input signal selection is an important factor in system identification, as stated in [16], where the authors discussed the importance of input signal selection and briefly explained a few types of input signals for system identification: the step, the pseudo-random binary sequence (PRBS), the auto-regressive moving average process, and the sum of sinusoids. Based on this information and on prior knowledge, our choices for identification signals were the step and PRBS signals. In the same book, in Chapter 1, the authors discussed the influence of feedback in the data on identification performance. This is of great importance, because our system has strong feedback. They concluded that feedback in a system can make it unidentifiable; however, with a reference signal present, the previously mentioned problem disappears, restoring the identification performance.
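As an illustration of the excitation signals named above, the following sketch generates a PRBS with a linear feedback shift register; the register length and tap positions are our assumptions, not values taken from [16].

```python
import numpy as np

def prbs(length=500, levels=(-1.0, 1.0)):
    """Pseudo-random binary sequence from a 7-stage Fibonacci LFSR.
    Taps at stages 7 and 6 realize the primitive polynomial
    x^7 + x^6 + 1, giving a maximal period of 2^7 - 1 = 127 samples."""
    state = [1] * 7                       # any nonzero seed works
    out = np.empty(length)
    for k in range(length):
        out[k] = levels[state[-1]]        # map bit {0, 1} to {-1, +1}
        feedback = state[6] ^ state[5]    # XOR of the two tap stages
        state = [feedback] + state[:-1]   # shift the register one step
    return out

u = prbs()   # excitation signal for an identification experiment
```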
Extensive research has also been conducted on PSO’s performance compared to that of Genetic Algorithms (GA). One example is [17], where the authors compared the PSO algorithm with GA for system identification. Their case was a nonlinear model, and the experiment was performed online. They concluded that PSO is an efficient tool for nonlinear system identification, producing results similar to or better than those of GA, with the advantages of low computational cost and faster convergence. Worden et al. also recently arrived at the same conclusion about nonlinear system identification [18]. The identification of nonlinear systems involves much more than linear identification. The following aspects contribute to this observation: nonlinear models live in a much larger and more complex space, while linear models live in simple, easier to characterize hyperplanes; in nonlinear system identification, structural model errors are frequently inevitable, and this affects the three main choices of experiment design, model selection, and cost-function selection; noise entering before the nonlinearity requires new numerical tools to solve the optimization problem [19]. Moreover, extensive research has been done to compare the parameter estimation capabilities of PSO variants such as standard PSO, APSO, and Quantum-behaved PSO (QPSO) [20,21]. The nonlinear model types on which those experiments were performed are the Hammerstein and Wiener models. The conclusion was that modifying the original swarm intelligence algorithm improved parameter estimation performance. Other variants of the PSO algorithm have also been studied for system identification; for example, the PSO-QI algorithm was discussed in [22], where the authors compared it with classic PSO and DE. They concluded that, for system identification, the modified algorithm was the best of the three because of its fast convergence.
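For reference, a minimal global-best PSO in the spirit of the algorithms surveyed above is sketched below; the hyperparameter values and the toy exponential-decay fitting example are our assumptions, not settings reported in [17,20,21,22].

```python
import numpy as np

def pso(cost, bounds, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal global-best PSO. `cost` maps a parameter vector to a scalar
    (e.g., the squared error between measured and simulated responses);
    `bounds` is a sequence of (low, high) pairs, one per parameter."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds, dtype=float).T
    dim = lo.size
    x = rng.uniform(lo, hi, (n_particles, dim))          # positions
    v = np.zeros_like(x)                                 # velocities
    pbest, pcost = x.copy(), np.array([cost(p) for p in x])
    g = pbest[pcost.argmin()].copy()                     # global best
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        c = np.array([cost(p) for p in x])
        better = c < pcost
        pbest[better], pcost[better] = x[better], c[better]
        g = pbest[pcost.argmin()].copy()
    return g, pcost.min()

# Toy usage: fit theta so that y ~ theta[0] * exp(-theta[1] * t).
t = np.linspace(0, 5, 100)
y = 2.0 * np.exp(-0.8 * t)
err = lambda th: np.sum((y - th[0] * np.exp(-th[1] * t)) ** 2)
theta, best_cost = pso(err, [(0, 5), (0, 5)])
```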
Research has also been conducted on using DE for system identification and parameter estimation. The work in [23] discussed the optimal approximation of linear systems using a Differential Evolution (DE) algorithm. The authors incorporated a search space expansion scheme in order to overcome the difficulty of specifying proper intervals for initializing the DE search. Besides PSO variants, DE variants have also been studied for these tasks, for example in [24], where the authors discussed a hybrid DE algorithm for nonlinear parameter estimation of kinetic systems. In that article, the authors combined the DE algorithm with the Gauss–Newton method: DE was used to provide a good initial point for the Gauss–Newton algorithm, which then found the absolute minimum. Their conclusion was that this approach was effective for this kind of task.
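A minimal DE/rand/1/bin loop of the kind used in these studies is sketched below; it omits the search space expansion scheme of [23] and the Gauss–Newton refinement of [24], and the control parameters F and CR are typical textbook values rather than those of the cited works.

```python
import numpy as np

def de(cost, bounds, pop_size=40, iters=300, F=0.8, CR=0.9, seed=0):
    """Minimal DE/rand/1/bin for parameter estimation: mutation from three
    distinct population members, binomial crossover, greedy selection."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds, dtype=float).T
    dim = lo.size
    pop = rng.uniform(lo, hi, (pop_size, dim))
    fit = np.array([cost(p) for p in pop])
    for _ in range(iters):
        for i in range(pop_size):
            others = [j for j in range(pop_size) if j != i]
            a, b, c = pop[rng.choice(others, 3, replace=False)]
            mutant = np.clip(a + F * (b - c), lo, hi)     # mutation
            cross = rng.random(dim) < CR                  # binomial crossover
            cross[rng.integers(dim)] = True               # keep at least one gene
            trial = np.where(cross, mutant, pop[i])
            f = cost(trial)
            if f < fit[i]:                                # greedy selection
                pop[i], fit[i] = trial, f
    best = fit.argmin()
    return pop[best], fit[best]
```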
1.3. Kinematic and Kinetic Modeling
Kinematics refers to the study of an object’s movement without taking into account the forces acting on it. An n-segment inverted pendulum can be considered a kinematic chain (parts serially connected by joints). Each element can be defined as a rigid body establishing a geometric relationship between two joints [25]. Based on these assumptions and on Natsakis’ course [26], an n-segment inverted pendulum can be modeled kinematically as a series of n elements connected by one degree of freedom joints, with the connecting links considered to be of zero length. The axis of a joint is determined by the rotation of link i in relation to link i-1. The distance between two different axes can be measured by determining their common perpendicular. If two axes are parallel, they admit an infinite number of common perpendiculars, all with the same length.
The forward kinematics model describes the relation between the variable orientation or displacement inputs of each joint and the position and orientation of the end effector, represented as a 4 × 4 homogeneous transformation matrix. There are several approaches for computing the forward kinematics model, but in this paper, we discuss the Denavit–Hartenberg convention [27,28]. The coordinate system of each link is defined by the following rules: the rotation axis of the joint represents the Z axis; the perpendicular to the plane formed by the current joint’s Z axis and the following joint’s Z axis represents the current joint’s X axis. After the coordinate systems are defined, the convention is based on the four parameters set out in Table 1.
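A small sketch of the resulting per-link homogeneous transform is given below, using the standard DH parameter names (theta, d, a, alpha); since Table 1 is not reproduced here, the parameter ordering follows the common textbook convention rather than quoting this paper’s table.

```python
import numpy as np

def dh_transform(theta, d, a, alpha):
    """Homogeneous transform between consecutive links from the four
    Denavit-Hartenberg parameters (joint angle theta, link offset d,
    link length a, link twist alpha), in the sense of [27,28]."""
    ct, st = np.cos(theta), np.sin(theta)
    ca, sa = np.cos(alpha), np.sin(alpha)
    return np.array([
        [ct, -st * ca,  st * sa, a * ct],
        [st,  ct * ca, -ct * sa, a * st],
        [0.,       sa,       ca,      d],
        [0.,       0.,       0.,     1.]])

def forward_kinematics(dh_rows):
    """Chain the per-link transforms to get the 4x4 end-effector pose."""
    T = np.eye(4)
    for theta, d, a, alpha in dh_rows:
        T = T @ dh_transform(theta, d, a, alpha)
    return T
```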
Olav et al. proposed the Lagrangian approach to determine the kinetic model [29]; both the advantages [30] and the limitations [31] of this type of model can easily be found in the literature.
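For completeness, the Lagrangian approach derives the equations of motion from the kinetic energy T and the potential energy V; the standard statement is given below, though the notation of [29] may differ.

```latex
L(q,\dot{q}) = T(q,\dot{q}) - V(q), \qquad
\frac{d}{dt}\!\left(\frac{\partial L}{\partial \dot{q}_i}\right)
- \frac{\partial L}{\partial q_i} = Q_i ,
```

where q collects the joint coordinates and Q_i are the generalized forces acting on the chain.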
1.4. Kalman Filter
In 1960, R. E. Kalman introduced his famous discrete-data filtering technique [32]. The Kalman filter is essentially a set of mathematical equations that provides an efficient, recursive way of solving the least-squares problem. It is very powerful because it can estimate the past, present, and future states of a system even when the precise nature of the system is unknown. The original algorithm is suitable for linear state-space models; for nonlinear state-space models, the extended Kalman filter was introduced. This algorithm linearizes the model around the current estimate with the help of partial derivatives [33].
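A minimal sketch of one extended Kalman filter predict/update cycle is given below; the state-transition and measurement functions and their Jacobians are passed in as placeholders, since the concrete model is developed later in this work.

```python
import numpy as np

def ekf_step(x, P, u, z, f, h, F_jac, H_jac, Q, R):
    """One predict/update cycle of a generic extended Kalman filter.
    f, h are the nonlinear state-transition and measurement functions;
    F_jac, H_jac return their Jacobians evaluated at the current estimate;
    Q, R are the process and measurement noise covariances."""
    # Predict: propagate the estimate through the nonlinear model.
    x_pred = f(x, u)
    F = F_jac(x, u)
    P_pred = F @ P @ F.T + Q
    # Update: correct with the measurement via the linearized model.
    H = H_jac(x_pred)
    y = z - h(x_pred)                       # innovation
    S = H @ P_pred @ H.T + R                # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)     # Kalman gain
    x_new = x_pred + K @ y
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new
```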
The Kalman filter, even though it was introduced in 1960, is still widely used and has lately provided one of the most common ways to mitigate the disadvantages [34] associated with strap-down inertial navigation systems [35]. The filtering method requires an accurate dynamic model [36] and an observation integration model, including a stochastic model of the inertial sensor errors and a priori information on the regression coefficients between the two systems [37]. However, there are several inconsistencies, as follows: differences in the linearization approach; stochastic models that cannot accurately capture sensor behavior; and the need for stochastic parameters to be adjusted, each requiring new a priori sensor information [38]. In addition, some other filtering methods [39,40] have been applied successfully.
In the fine alignment process, the error of the inertial sensors is estimated and compensated using an optimal estimation algorithm in order to improve the accuracy of the initial attitude matrix. The most frequently used estimators are based on a Kalman filter, which can handle only linear systems and requires accurate information about the noise statistics [41].
Petritoli et al. concentrated on the well-known problem of data fusion with integrity monitoring, low cost sensors, and a low energy consumption computer; however, they did not consider in depth the aging effects of such low cost sensors [42].
The Kalman filter is less mathematically complicated and easier to deploy than other filters, such as the particle filter. However, the capacity of the Kalman filter to accurately position nonlinear integrated systems is limited [42,43]. Nevertheless, this approach also has certain advantages [44]. Eom et al. established a method for improving physical estimates using multiphysics models and Kalman data fusion filters by processing raw measurements within a sensor [45].