2. Literature Review
Numerous researchers have explored control architectures for civil structures that can be embedded on low-power computing cores, with specific attention to the communication constraints of WSUs. These efforts often focus on distributing traditional optimal control algorithms across a network of computing nodes and typically depend on complex state estimators. In [8], a partially decentralized linear quadratic regulator (LQR) control scheme that leverages a Kalman filter to estimate unknown system states was validated on a full-scale structure. In [9], a sparse representation of the LQR was proposed, which requires less information for decision-making than a traditional centralized approach, thereby reducing information flow requirements. In [10], the authors proposed a distributed H∞ algorithm for civil infrastructure, and in [11], the authors explored distributing the H∞ algorithm across multiple communication subnets of wireless sensing nodes. While these studies demonstrate the successful use of low-power computing cores in control architectures, they also highlight several challenges associated with this technology, including increased computational requirements at the already resource-constrained sensing node and decision-making based on reduced information, which results in some degradation of control effectiveness.
The proportional-integral-derivative (PID) controller has also been considered in numerous studies due to its ease of implementation, which often makes it less computationally expensive. Deriving the PID control parameters for seismic control of structures is challenging because the off-diagonal terms in the damping and stiffness matrices of the structure cause cross-coupling in the system. As a result, traditional empirical tuning methods, such as the Ziegler–Nichols method, are often ineffective. However, numerous researchers have used metaheuristic algorithms, such as particle swarm optimization (PSO) [12,13,14,15] and the genetic algorithm [16,17,18,19], to determine PID control parameters for a variety of applications, and these can be extended to the control of civil structures. In [20], the PSO algorithm was used to derive the PID coefficients for mitigating the effect of high-impact loads on a highway bridge, modeled as a single input–single output system. In [21], a genetic algorithm was used to derive the control parameters of a PID controller for a single active tuned mass damper (ATMD), applied to numerically control an 11-story building. In [22], the authors combined the LQR control scheme with a PID controller using the cuckoo search algorithm and validated the controller in simulation on a seismically excited 10-story structure equipped with a single ATMD. In [23], a new evolutionary algorithm was proposed to derive the variables for a combined PID-LQR controller, which was numerically validated on a four-degree-of-freedom building equipped with an active control device on each floor. While all of these studies demonstrate the effectiveness of these methods, all but one are limited to a single output mechanism, and they did not discuss the communication constraints that would be encountered in real-world implementations. In [24], the authors did account for these communication and computational constraints by including time delays while exploring various metaheuristic methods for developing PID control parameters, and found that such delays could severely inhibit control effectiveness.
In this study, the authors seek to address the real-world challenges of implementing seismic control of structures by considering communication constraints and the computational complexity of the control algorithm. In particular, we propose to use a novel sensing node that is capable of real-time spectral decomposition, thereby alleviating computations at the controlling node. This node also deviates from traditional Nyquist data acquisition rates and, as a result, minimizes the amount of data transmitted across the network. By leveraging the front-end signal processing of the sensing node, the control algorithm reduces to a simple weighted sum of inputs that is easily executed. Other metaheuristic methods were previously explored in [25], where the PSO method was shown to be effective for determining the parameters of this weighted-sum algorithm. This study continues to use the PSO method and focuses on further streamlining the weighted-control parameters to enhance real-time control capabilities.
3. Adaptation of Particle Swarm Optimization and the Proposed Weighted Control Algorithm
Particle swarm optimization (PSO) is an iterative, population-based learning technique that is derived from the idea of swarm intelligence [26]. In PSO, a number of particles are dispersed randomly in a search space and each particle location is evaluated according to a specified fitness function. With each iteration of the algorithm, every particle moves to a new location in the search space based on its own history as well as on the behavior of other nearby particles. The overarching goal of each particle is to move closer to the optimum of the fitness function. To achieve this, each particle in the swarm tracks three vectors: x, its current position; v, its current velocity; and xb, its previous best position. Each particle also interacts with neighboring particles and stores the best position found by all neighbors, g, in order to leverage the benefits of the swarm.
Each particle updates its three vectors every iteration through the equations:

xi(k+1) = xi(k) + vi(k+1), (1)

xbi(k+1) = xi(k+1) if f(xi(k+1)) < f(xbi(k)), and xbi(k+1) = xbi(k) otherwise, (2)

and

vi(k+1) = λvi(k) + γ1ρ1(xbi(k) − xi(k)) + γ2ρ2(g(k) − xi(k)), (3)

where i is the particle number, k is the iteration number, ρ1 and ρ2 are random numbers between 0 and 1, and γ1 and γ2 are the acceleration coefficients, which are both assigned to be 2 as recommended in [26]. Equation (3) also includes an inertia weight, λ, which affects the convergence and plays a role in balancing the desire of the particle to search locally versus globally [27]. An inertia damping constant, τ, is used to gradually modify this balance. λ is initially assigned to be 1 and is decreased using τ equal to 0.99 [28], which results in a global search that gradually becomes more localized. At the end of each iteration, the position of each particle is evaluated according to the fitness function and the best position is updated if applicable.
As will be discussed, the search space in this particular application has high dimensionality (x ∈ ℝ275×1) and each element of the vector can assume a large range of values, from −1 × 10^8 to 1 × 10^8. As a result, it is easy for a particle to diverge from a localized optimal solution, resulting in an extremely large cost function. To alleviate this challenge, the algorithm was modified to include a homing mechanism for the particle. If the fitness function of the particle significantly exceeds the fitness found from the best position of all of the neighbors, the particle returns to its previously identified local best position. The particle also resets its velocity to zero to prevent it from quickly diverging again. This modified PSO algorithm is termed PSO-H. The algorithm is depicted in the block diagram shown in Figure 1.
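To make the preceding update rules concrete, the following is a minimal Python sketch of one PSO-H iteration; it is not the authors' implementation, and the helper names (fitness) and the divergence threshold (home_factor) are illustrative assumptions.

```python
import numpy as np

def pso_h_step(x, v, xb, fb, g, fg, fitness, lam, gamma1=2.0, gamma2=2.0,
               home_factor=10.0):
    """One iteration of PSO with the homing modification (PSO-H).

    x, v, xb : (n_particles, dim) positions, velocities, personal bests
    fb, fg   : personal-best fitness values and the swarm-best fitness
    lam      : current inertia weight (decayed by tau = 0.99 each iteration)
    """
    n, dim = x.shape
    rho1 = np.random.rand(n, dim)          # random numbers in [0, 1]
    rho2 = np.random.rand(n, dim)

    # Velocity update (Equation (3)) and position update (Equation (1))
    v = lam * v + gamma1 * rho1 * (xb - x) + gamma2 * rho2 * (g - x)
    x = x + v

    f = np.array([fitness(p) for p in x])  # evaluate every particle

    # Homing: a strongly diverging particle returns to its personal best
    # with zero velocity (the threshold is an assumed heuristic).
    diverged = f > home_factor * fg
    x[diverged] = xb[diverged]
    v[diverged] = 0.0
    f[diverged] = fb[diverged]

    # Personal-best and global-best updates (Equation (2))
    improved = f < fb
    xb[improved] = x[improved]
    fb[improved] = f[improved]
    if fb.min() < fg:
        fg = fb.min()
        g = xb[fb.argmin()].copy()
    return x, v, xb, fb, g, fg
```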
The PSO-H algorithm is used to optimize the necessary control parameters for executing a control algorithm that was originally proposed in [25]. At the core of this algorithm is a novel sensing paradigm that is based on the mechanisms employed by the mammalian cochlea, first implemented by Peckens and Lynch for structural monitoring purposes [29,30]. In this paradigm, the response of the structure is decomposed into its frequency components in real time using a series of overlapping bandpass filters. As discussed in [11,12], this bank of bandpass filters is optimized to fit the input signal by modifying the number of filters in the bank as well as their center frequency increments and passband frequency. The ability of this filter bank to decompose a signal into frequency components is attractive for the purposes of control, as it allows for instantaneous feature extraction, resulting in minimal signal processing at the actuation node.
The frequency selectivity of the jth filter is defined by the second-order transfer function for a bandpass filter [31], expressed as

Hj(s) = H0(ωj/Q0,j)s / (s² + (ωj/Q0,j)s + ωj²), (4)

where H0 is set to 1.0 to ensure a unit gain in the filter, Q0,j is related to the frequency selectivity of the filter, defined as (2ξj)⁻¹, such that ξj is the damping ratio of the filter, and ωj is the center frequency of the filter (rad/s). By making these substitutions for H0 and Q0,j in Equation (4), the impulse response of that filter is determined through the inverse Laplace transform, yielding

hj(t) = 2ξjωj e^(−ξjωjt) [cos(ωd,jt) − (ξj/√(1 − ξj²)) sin(ωd,jt)], (5)

with

ωd,j = ωj√(1 − ξj²). (6)
The convolution integral can be used with the input signal, y(t), and the impulse response of that filter, hj(t), to obtain the continuous time output of each jth bandpass filter, zj(t):

zj(t) = ∫₀ᵗ hj(t − τ)y(τ)dτ. (7)
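As an illustration of Equations (5)–(7), the sketch below discretizes each impulse response and applies the convolution numerically. The filter-bank parameters in the usage comment (11 filters, 0.7 Hz center-frequency spacing, 0.5 Hz passband) follow the values reported later in this paper, while the function names and the truncated impulse-response support are our own assumptions.

```python
import numpy as np

def impulse_response(t, f_center, bandwidth):
    """Discretized impulse response of one bandpass filter (Equations (5)-(6))."""
    omega = 2 * np.pi * f_center              # center frequency (rad/s)
    xi = bandwidth / (2.0 * f_center)         # damping ratio: Q0 = f_c/BW = (2*xi)^-1
    omega_d = omega * np.sqrt(1 - xi ** 2)    # damped frequency (Equation (6))
    return 2 * xi * omega * np.exp(-xi * omega * t) * (
        np.cos(omega_d * t) - xi / np.sqrt(1 - xi ** 2) * np.sin(omega_d * t))

def filter_bank(y, dt, f_centers, bandwidth=0.5, t_ir=2.0):
    """Decompose signal y into one band per filter via discrete convolution (Equation (7))."""
    t = np.arange(0.0, t_ir, dt)              # truncated impulse-response support
    bands = []
    for fc in f_centers:
        h = impulse_response(t, fc, bandwidth)
        bands.append(np.convolve(y, h)[:len(y)] * dt)  # Riemann-sum convolution
    return np.array(bands)                    # shape: (n_filters, len(y))

# Example: an 11-filter bank with 0.7 Hz center-frequency increments
# z = filter_bank(y, dt=488e-6, f_centers=0.7 * np.arange(1, 12))
```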
Decomposing the signal into numerous components via bandpass filters increases the amount of information to manage, which is counterproductive to the goal of streamlining the control algorithm and communication. As such, the filtered signals, zj(t), are passed through a simple peak-picking algorithm and only these peak values are transmitted to the controller in an asynchronous sampling scheme. This allows the control law to become a weighted sum of these peaks, expressed as

u(tc) = wᵀzp(tc), (8)

where u(tc) is the calculated control value at control time increment tc, w ∈ ℝn×1 is a control weighting vector, and zp(tc) ∈ ℝn×1 is the vector of the detected peak values for all filters, given that n is the total number of filters. This complete process is depicted in Figure 2. For simplicity, the architecture shown in Figure 2 is a single input–single output system, but it could easily be extended to a multiple input–multiple output system.
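A minimal sketch of the asynchronous peak-picking and the weighted-sum control law of Equation (8) is given below; the peak-detection rule (a local extremum between neighboring samples) and all names are illustrative assumptions rather than the authors' design.

```python
import numpy as np

def update_peaks(z_prev, z_curr, z_next, z_peaks):
    """Asynchronously latch a new peak for each filter channel.

    A channel transmits only when its filtered signal passes a local
    extremum; otherwise the controller retains the last received peak.
    """
    is_peak = ((z_curr - z_prev) * (z_next - z_curr)) < 0  # slope sign change
    z_peaks[is_peak] = z_curr[is_peak]
    return z_peaks, int(is_peak.sum())      # count of transmitted values

def control_force(w, z_peaks):
    """Weighted-sum control law (Equation (8)): u(t_c) = w^T z_p(t_c)."""
    return float(w @ z_peaks)
```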
There is no empirical method for determining the parameters in the control weighting vector, w, as shown in [25], and the PSO is one option for determining these control parameters. In the PSO optimization technique, each particle represents a potential weighting vector, w, where w ∈ ℝ275×1. After completing the optimization, the particle with the smallest fitness function is chosen as the resulting weighting vector. This optimization is completed offline, prior to the control event, thereby minimizing the computational demand on the actuating node during the actual seismic event.
4. Application of Control Parameter Optimization on Five-Story Benchmark Structure
To validate the PSO-H optimization of the control weighting vector, the five-story Kajima Shizuoka building was used as a benchmark structure (Figure 3). All simulations were executed using MATLAB. The structure was modeled as a lumped mass system, similar to that proposed in [32], which is based on the actual structure used in the study conducted in [33]. The structural properties are given in Table 1, yielding five natural frequencies of 1.00, 2.82, 4.49, 5.80, and 6.77 Hz. The damping in the structure was modeled with a 5% damping ratio using Rayleigh damping, which is both mass-proportional and stiffness-proportional [34]. It is assumed that only the horizontal degrees-of-freedom are measurable and controllable. Each floor is assumed to include an installed transducer, which measures inter-story displacement, and an ideal actuator, which is capable of supplying the demanded control force. In other words, the actuator block in Figure 2 is set to a value of one and u(tc) equals f(tc).
The base-excited structural system is modeled in continuous time as an m degree-of-freedom, linear time-invariant, lumped mass shear structure. This can be generalized through m equations of motion [34]:

Md̈(t) + Cdḋ(t) + Kd(t) = Λf(t) − Mιẍg(t), (9)

where M, Cd, and K are the mass, damping, and stiffness matrices, respectively. The displacement vector relative to the base of the structure is d(t) ∈ ℝm×1, ẍg(t) is the ground acceleration, and ι ∈ ℝm×1 is the ground acceleration influence vector, where each term takes a value of one. The locations of the actuators are described by the matrix Λ ∈ ℝm×p, and f(t) ∈ ℝp×1 is a vector of control forces, given that p is the number of input control forces. The variable t represents continuous time. By setting the control force, f, equal to zero, the uncontrolled response of the structure to the base excitation is approximated using the integration method proposed by Newmark [34]. In this simulation, the control frequency was set to 100 Hz and the simulation time-step was 488 μs. It is assumed that the control force is held constant between control time-steps.
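For reference, the sketch below steps Equation (9) forward in time with the constant-average-acceleration Newmark scheme. The parameter choices (β = 1/4, γ = 1/2) are standard, and the code is a simplified stand-in for the authors' MATLAB simulation rather than a reproduction of it.

```python
import numpy as np

def newmark_response(M, Cd, K, Lam, f, xg_ddot, dt, beta=0.25, gamma=0.5):
    """Integrate M*d'' + Cd*d' + K*d = Lam*f - M*iota*xg'' (Equation (9))."""
    m = M.shape[0]
    iota = np.ones(m)                                   # influence vector
    n_steps = len(xg_ddot)
    d = np.zeros((n_steps, m)); v = np.zeros((n_steps, m)); a = np.zeros((n_steps, m))

    p0 = Lam @ f[0] - M @ iota * xg_ddot[0]
    a[0] = np.linalg.solve(M, p0)                       # initial acceleration
    K_eff = M / (beta * dt**2) + Cd * (gamma / (beta * dt)) + K

    for i in range(n_steps - 1):
        p = Lam @ f[i + 1] - M @ iota * xg_ddot[i + 1]  # zero-order hold on f
        p_eff = (p
                 + M @ (d[i] / (beta * dt**2) + v[i] / (beta * dt)
                        + (1 / (2 * beta) - 1) * a[i])
                 + Cd @ (gamma / (beta * dt) * d[i]
                         + (gamma / beta - 1) * v[i]
                         + dt * (gamma / (2 * beta) - 1) * a[i]))
        d[i + 1] = np.linalg.solve(K_eff, p_eff)
        a[i + 1] = ((d[i + 1] - d[i]) / (beta * dt**2)
                    - v[i] / (beta * dt) - (1 / (2 * beta) - 1) * a[i])
        v[i + 1] = v[i] + dt * ((1 - gamma) * a[i] + gamma * a[i + 1])
    return d, v, a
```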
To introduce the weighted-sum control algorithm, it is assumed that each floor is outfitted with a bank of bandpass filters that are optimized to fully capture the displacement signal of that floor. Previous parametric studies have shown that a sensor with 11 filters, each having a center frequency every 0.7 Hz and a passband of 0.5 Hz, is optimal for representing the response of the structure to seismic base excitation [30]. The dynamics of each filter are modeled using Equations (5) and (6). It is assumed that every unit in the filter bank is able to broadcast the peaks from its filtered signal to the controller on each floor; therefore, every filter on every floor has a weighted connection with every controller on every floor. For the five-story structure, this results in a weight matrix, W, of 55 × 5 elements. The 55 rows in this matrix derive from five sensors, each having eleven filters, and the five columns are due to the connections with the five actuators (Figure 4). For the PSO algorithm, this matrix was rearranged into a 275-element particle, which, when represented in vector format, is termed w.
In order to optimize the weighting vector for the purposes of control using PSO, a viable fitness function is needed. Several cost functions that were proposed in [35] were used as a measure for quantifying the effectiveness of control for a given weighting vector. These metrics are chosen as they offer a dimensionless quantification, which makes them easily adaptable to a fitness function for the PSO algorithm, and they have been used in numerous other studies, for example, in [8,25], and [36]. Two of these cost functions focus on minimizing the inter-story drift of the structure, which reduces the likelihood of damage to the building system. One of these cost functions quantifies the reduction of the absolute maximum drift, expressed as

J1 = max|d(t)controlled| / max|d(t)uncontrolled|, (10)

and the other cost function quantifies the overall time history reduction of the inter-story drift, expressed as

J2 = ‖d(t)controlled‖₂ / ‖d(t)uncontrolled‖₂. (11)

In these equations, d(t)uncontrolled is the time history of the inter-story drift for all floors without any implementation of control, while d(t)controlled represents the response of the structure when subject to a control scenario. Furthermore, |·| is the absolute value function and ‖·‖₂ denotes the l2-norm function. The other two cost functions focus on minimizing the acceleration of the structure, which is related to occupant comfort during the event. Similar to the displacement cost functions, the acceleration cost functions quantify the reduction of the absolute maximum acceleration, a(t), expressed as

J3 = max|a(t)controlled| / max|a(t)uncontrolled|, (12)

and the time history response, expressed as

J4 = ‖a(t)controlled‖₂ / ‖a(t)uncontrolled‖₂. (13)
The demanded control force is also quantified through a cost function provided in [35], which is defined as

J5 = max|f(t)| / Ws, (14)

where f(t) is the time history of the control force for each floor and Ws is the seismic weight of the building based on the above-ground mass of the structure.
Each cost function is an m-dimensional vector, where m is the number of floors in the structure, thereby providing quantification for each floor. As the PSO algorithm, in general, requires a single fitness function, these five resulting vectors are summed together across all floors, yielding

O1 = Σfloors(J1 + J2 + J3 + J4 + 5.0J5). (15)

In an uncontrolled scenario, the first four cost functions, J1 to J4, each take on a value of 1.0 for all floors, which sums to 20 for the five-story structure. It is observed that J5 typically takes on values of 0.2 or less for each floor in this simulation; thus, for it to be an equally weighted objective, a scalar of 5.0 is included with this cost function. Therefore, if the fitness function is less than 25, it can be concluded that the weight matrix is effective, as the controlled response is likely less than the uncontrolled response.
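The sketch below evaluates Equations (10)–(15) from simulated controlled and uncontrolled response histories; the array shapes and the function name are assumptions made for illustration.

```python
import numpy as np

def fitness(d_c, d_u, a_c, a_u, f, Ws):
    """Fitness of Equation (15) from per-floor response histories.

    d_c, d_u : controlled/uncontrolled inter-story drift, shape (n_steps, n_floors)
    a_c, a_u : controlled/uncontrolled acceleration, same shape
    f        : control force history, shape (n_steps, n_floors)
    Ws       : seismic weight of the building
    """
    J1 = np.abs(d_c).max(axis=0) / np.abs(d_u).max(axis=0)          # Equation (10)
    J2 = np.linalg.norm(d_c, axis=0) / np.linalg.norm(d_u, axis=0)  # Equation (11)
    J3 = np.abs(a_c).max(axis=0) / np.abs(a_u).max(axis=0)          # Equation (12)
    J4 = np.linalg.norm(a_c, axis=0) / np.linalg.norm(a_u, axis=0)  # Equation (13)
    J5 = np.abs(f).max(axis=0) / Ws                                 # Equation (14)
    return float((J1 + J2 + J3 + J4 + 5.0 * J5).sum())              # Equation (15)
```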
To determine the optimal weight matrix, W, the five-story structure was subjected to seismic base excitation in simulation using the 1940 El Centro (SE) earthquake ground acceleration record (Figure 5). The dynamics of the structure and the associated sensing nodes were modeled using Equations (4)–(7) and (9), and the response of the structure to the excitation was approximated using the Newmark integration method [34]. To cover an adequate search space, the algorithm uses 50 particles. To confirm convergence of the algorithm, the particles are trained until the best solution, g, does not change for 50 consecutive iterations. This does not ensure locating a global minimum, but with an adequate number of particles and with the homing feature executed, the algorithm does locate a competitive local minimum. Using these metrics, the weight matrix was trained using the PSO-H algorithm, resulting in the cost functions shown in Table 2, denoted as Weighted-sum in the table. A sample time history response for the drift is shown in Figure 6. To ensure that the weight matrix was not over-trained to the El Centro record, it was also validated using the 1995 Kobe (NS JMA) and 1989 Loma Prieta (CORRALITOS) earthquake ground acceleration records (Figure 5). These results are shown in Table 3 and Table 4.
As a comparison to the weighted-control algorithm, a traditional full-state feedback linear quadratic regulator (LQR) [37] was also considered. This controller assumes that all necessary states (i.e., displacement and velocity) of all floors are measurable or estimates them using techniques such as the Kalman filter. The LQR uses the algebraic Riccati equation to minimize the cost function

J = ∫₀^∞ (x(t)ᵀQx(t) + u(t)ᵀRu(t)) dt, (16)

subject to the full-state feedback control law, expressed as u = −Kx, where x ∈ ℝ2m×1 is a vector of the inter-story displacement and velocity of all the floors of the structure and K ∈ ℝp×2m is the resulting constant feedback gain matrix, given m states (or floors) and p control forces. This minimization is subject to two parameters: Q, which applies a weight to the cost of the structural response, and R, which applies a weight to the cost of the control effort. In the algorithm, Q and R are chosen using the commonly accepted Bryson's rule [38], which establishes these values as proportional to the inverse of the square of the maximum acceptable displacement and control force, respectively; the non-zero diagonal entries of Q and R were assigned accordingly. Based on limitations described in [7], it is assumed that the control sampling frequency is limited to 40 Hz, which subjects the control effort to realistic experimental constraints. The results from this controller are shown in Table 2, Table 3 and Table 4 for the El Centro, Kobe, and Loma Prieta earthquakes.
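As a sketch of this baseline (not the authors' exact MATLAB implementation), the continuous-time LQR gain can be computed with SciPy, with Bryson's rule supplying the Q and R diagonals; the limit vectors d_max, v_max, and f_max are assumed placeholders.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

def lqr_gain(M, Cd, K, Lam, d_max, v_max, f_max):
    """Full-state LQR gain minimizing Equation (16) with Bryson's-rule weights."""
    m = M.shape[0]
    Minv = np.linalg.inv(M)
    # State-space form of Equation (9): x = [d; d_dot], x_dot = A x + B u
    A = np.block([[np.zeros((m, m)), np.eye(m)],
                  [-Minv @ K,        -Minv @ Cd]])
    B = np.vstack([np.zeros((m, Lam.shape[1])), Minv @ Lam])

    Q = np.diag(1.0 / np.concatenate([d_max, v_max]) ** 2)  # Bryson's rule (states)
    R = np.diag(1.0 / f_max ** 2)                           # Bryson's rule (inputs)

    P = solve_continuous_are(A, B, Q, R)       # algebraic Riccati equation
    return np.linalg.solve(R, B.T @ P)         # gain K_lqr: u = -K_lqr @ x
```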
The weighted-sum control algorithm has similar control effectiveness to the LQR algorithm when considering the El Centro earthquake for all cost functions (Table 2). However, the LQR controller is much more adaptable to other earthquakes, as demonstrated by its continued effectiveness at mitigating inter-story displacement and acceleration for both the Kobe and Loma Prieta earthquakes. The weighted-sum control algorithm is able to reduce the displacement measures (i.e., J1 and J2) when the structure is subject to the Kobe and Loma Prieta earthquakes, but it is significantly less effective at reducing the acceleration measures (i.e., J3 and J4) when compared to the LQR controller (Table 3 and Table 4). In application, retraining the weight matrix for the characteristics of the anticipated seismic response could improve the performance of the control algorithm. For all control scenarios, the controllers were more effective at reducing displacement than acceleration, which is a typical trade-off when controlling structures during seismic events. For all earthquakes, the weighted-sum control algorithm placed minimal demand on the actuators, as denoted by the J5 cost function, and this likely contributes to its decrease in control effectiveness.
As communication overhead is a common challenge associated with WSUs, particularly in control applications, an additional cost function,

J6 = NP1 / NP2, (17)

is introduced that compares the amount of data transmitted during the execution of the weighted-sum control algorithm with the amount of data transmitted during the execution of the traditional LQR algorithm. In this cost index, NP1 is the number of peaks detected from all filters across all floors, and NP2 is the number of data points obtained via traditional Nyquist sampling rates, combined across all floors. While the weighted-sum method does use asynchronous sampling to generate the peaks, the bandpass filters decompose the displacement of each floor into eleven signals that must each be transmitted, albeit as peaks, to the controller nodes. For all three earthquakes, this resulted in an undesirable increase in information flow, as indicated by J6 being greater than 1.0 (Table 3, Table 4 and Table 5). It is hypothesized, however, that not all connections in the weight matrix are needed and some filters can be eliminated from the algorithm. This in turn would reduce the information flow and address one of the common challenges associated with control using WSUs.
5. Integration of Pruning for Streamlined Control
As previously noted, the weighted control matrix has high dimensionality (W ∈ ℝ55×5) and can assume a large range of values (Figure 7). It is hypothesized that, due to the high connectivity, not all weight values within the matrix are necessary and some can be removed. This hypothesis is based on experiences with artificial neural networks (ANNs), which share a similar architecture with the proposed weighted-control algorithm. Similar to the structure of the weighted-control algorithm, ANNs typically need a predefined network architecture, which can cause over-fitting if too many weights are defined or under-fitting if not enough weights are defined. Numerous methods exist for eliminating, or pruning, the weights in ANNs, such as minimal-value deletion [39,40], which focuses only on the magnitude, or other methods that develop criteria to assess the importance of the weight [41,42,43,44]. The architecture of the proposed control method is more basic than a traditional ANN, having only a single layer of weights; as a result, some of the elimination criteria in these studies are overly complex. As such, three methods are explored: (1) minimal-value deletion [39], (2) optimal brain surgeon (OBS) [41], and (3) a brute force method, termed minimum error pruning.
With each method, the effect of the minimized weighted control matrix is evaluated using the percent change in the fitness function,

C = 100% × (O2,1 − O2,0) / O2,0, (18)

where C is the percent difference, O2,1 is the fitness function after implementing a pruning method, and O2,0 is the original fitness function from the PSO-H optimization using the El Centro earthquake record, both of which use a modified fitness function (Equation (19)). As the weighted control matrix becomes sparse, it is possible for all connections to an actuator to be eliminated. When this occurs, elements of the J5 cost function become zero while the displacement or acceleration of the structure may be undesirably increasing. As a result, a modified fitness function that omits the control force cost function (i.e., J5) is used for the assessment of the effectiveness of the pruning method:

O2 = Σfloors(J1 + J2 + J3 + J4). (19)
The effectiveness of the pruning algorithms was also evaluated using the number of removed weights and the number of removed filters. A filter is removed when it is no longer connected to any of the five controllers, indicating that its information is not required for any of the control force calculations. It is the removal of a filter, rather than just a weight, that is most beneficial to the network as it results in less information exchange across the network, which translates to power savings and also a more streamlined control-force calculation at the controller.
The first pruning method that was considered is a minimal-value deletion algorithm that eliminates a single weight with each iteration of the algorithm based on its magnitude [39]. For this algorithm, the weight with the smallest absolute magnitude is eliminated and the fitness function is evaluated for the reduced weighted control matrix. The process is then repeated for the weight with the next smallest absolute magnitude. This elimination of weights continues until 270 weights have been eliminated and only five weights remain. The resulting percent change in the fitness function as a function of the removed weights is shown in Figure 8a. The resulting number of removed filters as a function of removed weights is shown in Figure 8b.
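A compact sketch of this magnitude-based pruning loop is shown below; evaluate_fitness stands in for the modified fitness of Equation (19) and is an assumed helper.

```python
import numpy as np

def minimal_value_deletion(W, evaluate_fitness, n_remove=270):
    """Iteratively zero out the smallest-magnitude surviving weight [39]."""
    W = W.copy()
    alive = np.ones(W.shape, dtype=bool)       # mask of surviving connections
    history = []
    for _ in range(n_remove):
        mags = np.where(alive, np.abs(W), np.inf)
        q = np.unravel_index(np.argmin(mags), W.shape)
        W[q] = 0.0                             # permanently remove the weight
        alive[q] = False
        history.append(evaluate_fitness(W))    # track Equation (19) after removal
    return W, history
```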
Next, the optimal brain surgeon (OBS) algorithm was considered [41]. OBS uses the Hessian matrix, H, to create a vector of saliencies, L, where each qth entry in the vector is defined as

Lq = wq² / (2[H⁻¹]qq), (20)

given that wq is the qth weight in the weight vector, w. The inverse Hessian matrix is calculated recursively as

H⁻¹(r+1) = H⁻¹(r) − H⁻¹(r)X(r+1)X(r+1)ᵀH⁻¹(r) / (P + X(r+1)ᵀH⁻¹(r)X(r+1)), (21)

such that (·)⁻¹ is the inverse function, r is the iteration number, P is the total number of training samples, and X is a vector of partial derivatives. As the standard application of OBS is for multi-layer networks, X is truncated to only include derivatives with respect to the input-to-output layer, similar to the hidden-to-output weights discussed in [41]. For the weighted control algorithm, the output layer uses a linear activation function, which reduces X down to z(r), where z(r) is the output of the bandpass filters on the rth iteration. In this instance, the number of iterations equals the number of time-steps in the simulation.
As specified in [41], the weight with the smallest saliency is used to update all other weights by deriving a weight change vector,

δw = −(wq / [H⁻¹]qq) H⁻¹eq, (22)

and adding this to the weight vector, w. In Equation (22), eq is the unit vector in weight space corresponding to the qth weight, wq. After updating all weights, the identified weight is removed from the matrix. The algorithm continues to eliminate weights in this manner until there is an unacceptable amount of incurred error, at which point the network could be retrained. To understand the effect of removing weights, the stopping error for the algorithm was incrementally increased without implementing any retraining until 270 weights had been removed. The resulting percent change in the objective function as a function of the removed weights is shown in Figure 8a. The resulting number of removed filters as a function of the removed weights is shown in Figure 8b.
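The following sketch implements one OBS pruning step, Equations (20)–(22), under the linear-output simplification X(r) = z(r); the small-diagonal initialization of the inverse Hessian is a common regularization for OBS, and all names are illustrative assumptions.

```python
import numpy as np

def obs_prune_step(w, Z, alpha=1e-4):
    """One OBS step: build H^-1 from filter outputs, then remove the
    weight with the smallest saliency (Equations (20)-(22)).

    w : (n,) weight vector for one controller output
    Z : (P, n) bandpass-filter outputs over P simulation time-steps
    """
    P, n = Z.shape
    H_inv = np.eye(n) / alpha                  # regularized initial inverse Hessian
    for r in range(P):                         # recursion of Equation (21)
        X = Z[r]
        HX = H_inv @ X
        H_inv -= np.outer(HX, HX) / (P + X @ HX)

    saliency = w ** 2 / (2.0 * np.diag(H_inv))         # Equation (20)
    q = int(np.argmin(saliency))
    delta_w = -(w[q] / H_inv[q, q]) * H_inv[:, q]      # Equation (22)
    w = w + delta_w                            # drives w[q] exactly to zero
    return w, q
```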
Finally, the third method that was considered for eliminating extraneous weights is the minimum error method. In this method, one weight is temporarily removed and the reduced weighted control matrix is evaluated using the fitness function (Equation (19)). This weight is then inserted back into the matrix, the next weight is removed, and the fitness function is evaluated again. After all weights are considered, the weight whose removal results in the smallest fitness function is permanently removed. This is repeated for all remaining weights until 270 weights have been removed. The resulting percent change in the fitness function as a function of the removed weights is shown in Figure 8a. The resulting number of removed filters as a function of the removed weights is shown in Figure 8b.
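This brute-force search can be sketched as a nested loop; the quadratic cost in fitness evaluations (via the assumed evaluate_fitness helper) is what makes this method expensive relative to the other two.

```python
import numpy as np

def minimum_error_pruning(W, evaluate_fitness, n_remove=270):
    """Greedily remove the weight whose removal hurts Equation (19) the least."""
    W = W.copy()
    alive = np.ones(W.shape, dtype=bool)
    for _ in range(n_remove):
        best_q, best_fit = None, np.inf
        for q in zip(*np.nonzero(alive)):      # try every surviving weight
            saved = W[q]
            W[q] = 0.0                         # temporarily remove it
            fit = evaluate_fitness(W)
            W[q] = saved                       # restore before the next trial
            if fit < best_fit:
                best_q, best_fit = q, fit
        W[best_q] = 0.0                        # permanently remove the best choice
        alive[best_q] = False
    return W
```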
As can be seen from Figure 8a, there is a non-linear relationship between the number of removed weights and the fitness function for all three methods. In particular, up to 223 weights can be removed by any of the three methods with less than a 5% degradation in control effectiveness when comparing the full state to the pruned state. After that, the percent difference rapidly increases for the minimal-value deletion and OBS methods. The minimum error pruning method, however, incurs only 1.03% error with 223 removed weights, and 257 weights can be removed while still incurring less than 5% error. Another interesting trend is that the percent change for all three methods did not always incrementally increase. In some instances, removing additional weights actually improved the performance of the network. This was particularly true for minimal-value deletion and OBS, but was still evident to a smaller extent in the minimum error method. The minimum error method significantly outperformed the other two methods and, in some cases, a negative percent change was observed, indicating that the pruned network performed better than the full-state network. The pruned states resulting from minimal-value deletion and OBS never performed better than the full-state network.
When considering the number of removed filters versus the number of removed weights (Figure 8b), all three methods exhibited very similar trends. Initially, the number of removed filters remained at zero, even as the number of removed weights increased, because a filter is only removed when it is no longer connected to any controller unit. However, once a significant portion of the weights has been removed, the number of removed filters rapidly increases as the remaining connections are removed. Removing filters is a metric that competes with the incurred error shown in Figure 8a. It is desirable to remove more filters, as this results in a more streamlined control algorithm, but as more filters are removed, the control effectiveness degrades, as indicated by an increase in percent error. Therefore, determining the optimal number of weights to remove is a balance between these two metrics and requires some subjectivity.
Pruning Methods with Periodic PSO-H Retraining
While the error incurred through the removal of weights was relatively small, it was next considered whether the control effectiveness could be improved by periodically retraining the weighted control matrix using the PSO-H algorithm during the three pruning processes. Each pruning method was re-executed with the modification that once a group of 10 weights had been removed, the weighted control matrix was retrained, as sketched below. The PSO-H retraining continued to use 50 particles, with each particle also eliminating the weights that were identified through the pruning mechanism. At the start of each retraining session, the inertia weight λ from Equation (3) was reset to 1.0 to allow the search to start globally and move locally. The PSO-H algorithm iterates until the best fitness function does not change for 50 iterations. The retrained pruning algorithms were compared against the control effectiveness of the full-state network, each using the El Centro earthquake for seismic excitation. The resulting percent change from the full-state network is shown in Figure 9a for the three different pruning methods. For comparison, the pruning methods without retraining, extracted from Figure 8a, are also overlain on the figure.
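The prune-retrain schedule can be summarized by the loop below, where prune_10_weights and pso_h_train are stand-ins for one of the pruning methods and the PSO-H optimizer described in Section 3.

```python
def prune_with_retraining(W, prune_10_weights, pso_h_train, n_groups=27):
    """Alternate between removing a group of 10 weights and PSO-H retraining."""
    mask = (W != 0)                     # mask of surviving connections
    for _ in range(n_groups):
        W, mask = prune_10_weights(W, mask)
        # Retrain only the surviving weights; the inertia weight is reset
        # to 1.0 inside pso_h_train so each session starts globally.
        W = pso_h_train(W, mask)
    return W
```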
In general, all three pruning methods showed improvement over the full-state matrix when periodic retraining was included. In contrast to pruning without retraining, the OBS and minimal-value deletion methods now outperformed the minimum error method and were able to improve the control effectiveness, when compared to the full-state matrix, by over 6% in some cases. The pruned network produced the best results when using minimal-value deletion to eliminate 170 weights, with a 6.28% improvement in control effectiveness. However, this only resulted in eliminating six filters, which did not streamline data communication. When 220 weights were removed using minimal-value deletion, the pruned network still maintained an improved control effectiveness, with a 5.27% difference, and also eliminated 20 filters, an approximate reduction of 36.4% in the data transmission (= 20 removed filters/55 total filters). Removing 230 weights is also attractive, as it improves control effectiveness, though less so with a 1.93% difference, and eliminates 28 filters, an approximate reduction of 50.9% in the data transmission (Table 5). The OBS method performed similarly to the minimal-value deletion method but was not able to achieve the same improvement in control effectiveness and was therefore not considered in more detail. The minimum error method also improved when integrated with periodic retraining but, contrary to the results without retraining, its performance lagged behind the minimal-value deletion and OBS methods.
To once again ensure that the pruned weighted control matrix was not over-trained to the El Centro earthquake, the matrices that were pruned using minimal-value deletion for 220 weights and 230 weights were applied to control the structure when subject to the 1995 Kobe (NS JMA) and 1989 Loma Prieta (CORRALITOS) earthquake ground acceleration records (Table 5). When 220 weights were removed, the remaining weights allowed for enough generality that effective control could still be achieved for these two earthquakes, as indicated by the negative percent change in the fitness function. When 230 weights were removed, however, the remaining weights were less effective at controlling the structure and some degradation in the control became apparent, perhaps indicating overfitting. Therefore, removing 220 weights using the minimal-value deletion method is chosen as the optimal scenario, as it is effective at controlling the structure while remaining generalizable to multiple input earthquakes.
The full results for all cost functions using minimal-value deletion pruning of 220 weights are shown in Table 2, Table 3 and Table 4. For almost all cost functions and all earthquakes, the pruned matrix outperformed the full-state matrix and was closer to the performance of the LQR controller. Similar to the full-state controller, the pruned control scheme is not as adaptable to the Kobe and Loma Prieta earthquakes. However, across all earthquakes, this control scheme places more demand on the actuators, which likely contributes to its increase in control effectiveness. The pruned matrix showed the most promise, though, in addressing the communication challenges of the network. For the El Centro earthquake, the pruned method transmitted 19.4% less data than the LQR; for the Kobe earthquake, it transmitted 47.5% less data. The two methods transmitted equal amounts of data for the Loma Prieta earthquake.