The damage dispersion observed in buildings affected by past earthquakes can be related to the characteristics of the seismic vibrations (influenced by source characteristics and by geological site conditions) and to the differences in the seismic vulnerability of each building [24]. For this reason, an adequate assessment of this vulnerability is very important, namely by means of nonlinear seismic analysis; incremental nonlinear analysis [25] and static (pushover) nonlinear analysis [26] are commonly used for vulnerability assessment purposes. When assessing an individual building and when the knowledge level is low, Part 3 of Eurocode 8 (EC8-3) proposes an assessment based on simulated design in accordance with the usual practice at the time of construction, which was the strategy adopted in this case study, but for a more general purpose.
The present case study aims to evaluate how feasible it is to use an ANN (trained with a small number of training vectors) for vulnerability assessment, in terms of the accuracy of the results.
4.1. Studied Structural Typology
The concrete buildings with the highest seismic vulnerability are probably those built prior to modern seismic codes. Between the 1930s and the 1950s, in countries like Portugal, buildings were designed without considering any seismic action, which is why the proposed approach was tested on this specific typology.
As can be observed in old structural designs, and according to the codes of that period [27], the area of the reinforcing steel bars (As) was determined considering an equivalent concrete area of the homogenized cross-section (usually using a homogenization factor of 10 for beams and 15 for columns).
Due to the lack of computational resources, the axial force (Nc) in each column was normally determined by multiplying its influence area by the weight of the floor, and the compression stresses were determined assuming elastic behaviour of the homogenized cross-section. These simplified assumptions meant that only a very small amount of reinforcement was required in low-rise buildings, and As was normally just a minimum percentage of the concrete cross-section, which results in buildings that are very vulnerable in terms of their seismic behaviour.
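As an illustration, the simplified allowable-stress column check described above can be sketched as follows. The allowable stress of 40 kgf/cm², the homogenization factor of 15 for columns and the minimum ratio of 0.005⋅bc⋅hc are the values referred to in this section; the axial force and cross-section dimensions in the example are illustrative assumptions only.

```python
# Sketch of the simplified 1930s-1950s column design: As is obtained from
# the homogenized cross-section under an allowable compression stress,
# with a minimum of 0.005 * bc * hc.  Example values are assumptions.

def old_column_design(N_c, b_c, h_c, sigma_adm=40.0, n=15):
    """Return the required steel area As (cm^2).

    N_c       -- axial force from the influence area (kgf)
    b_c, h_c  -- column cross-section dimensions (cm)
    sigma_adm -- allowable compression stress (kgf/cm^2)
    n         -- homogenization factor for columns
    """
    A_c = b_c * h_c                        # gross concrete area (cm^2)
    # homogenized-area condition: A_c + n * As >= N_c / sigma_adm
    A_s_stress = (N_c / sigma_adm - A_c) / n
    A_s_min = 0.005 * A_c                  # minimum reinforcement
    return max(A_s_stress, A_s_min)

# Low-rise example: the axial force is small, so the minimum governs.
print(old_column_design(20000.0, 25.0, 25.0))  # 3.125 cm^2
```

For the low axial forces typical of low-rise buildings the minimum percentage governs, which is precisely the situation described in the text.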
In this work, this old simplified procedure was used to design some reinforced concrete frames (Figure 5) in order to simulate the design solutions usually adopted in Portugal in that period (using the minimum number of rebars that leads to As ≥ 0.005⋅bc⋅hc), which were used as the training set of a MFFNN. The adopted input variables were the number of beam spans (I1 = nb), the mean beam span dimension (I2 = Lb) and the mean cross-section column height (I3 = hc).
A design value of 40 kgf/cm² was used for concrete in pure compression, a value of 45 kgf/cm² was used in flexure and a design value of 1200 kgf/cm² was used for the reinforcement [27].
The slab thickness and the beam height were defined as functions of Lb, so the mass per unit area was also a function of Lb. As a simplification, and just for the purpose of this study, a constant value of 0.25 m was adopted for the width of all beams (bb) and columns (bc). A T-section was adopted for all concrete beams, as proposed in EC8.
The mass adopted for each dynamic structural system was computed considering a transversal influence area equal to Lb.
At first, 125 nonlinear static analyses were carried out, forming training set no. 1 (TS1), considering frames with the following values: nb = 1, 2, 3, 4 and 5 spans; Lb = 2, 3, 4, 5 and 6 m; hc = 0.25, 0.325, 0.4, 0.475 and 0.55 m.
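The 125 training vectors correspond to the full factorial combination of the three input variables, which can be generated directly:

```python
# Generation of the 125 TS1 training vectors (nb, Lb, hc) as the full
# factorial combination of the values listed above.
from itertools import product

nb_values = [1, 2, 3, 4, 5]                  # number of beam spans
Lb_values = [2.0, 3.0, 4.0, 5.0, 6.0]        # mean beam span (m)
hc_values = [0.25, 0.325, 0.4, 0.475, 0.55]  # column cross-section height (m)

TS1 = list(product(nb_values, Lb_values, hc_values))
print(len(TS1))  # 125
```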
The capacity curves were obtained using the SeismoStruct software [28], adopting a triangular force pattern. Figure 6a presents all 125 original capacity curves of the single-degree-of-freedom systems, and Figure 6b presents the corresponding simplified equivalent elastic-perfectly plastic capacity curves.
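The elastic-perfectly plastic idealization of Figure 6b can be sketched as below. The exact idealization rule used in the study is not reproduced here; this sketch assumes the equal-energy rule of EC8 Annex B, with the plateau force Fy taken as the peak force of the original curve, both of which are assumptions.

```python
# Equal-energy elastic-perfectly plastic idealization of a capacity curve
# (as in EC8 Annex B): dy = 2 * (du - Em / Fy), where Em is the deformation
# energy up to du.  Taking Fy as the peak force is an assumption here.

def bilinearize(d, F):
    """Return (dy, Fy, du) of the idealized curve, given displacements d
    and forces F sampled along the capacity curve (d increasing)."""
    Fy = max(F)
    du = d[-1]
    # deformation energy up to du (trapezoidal rule)
    Em = sum(0.5 * (F[i] + F[i + 1]) * (d[i + 1] - d[i])
             for i in range(len(d) - 1))
    dy = 2.0 * (du - Em / Fy)  # equal-energy yield displacement
    return dy, Fy, du

# A curve that is already elastic-perfectly plastic is recovered exactly:
print(bilinearize([0.0, 1.0, 2.0, 3.0], [0.0, 10.0, 10.0, 10.0]))  # (1.0, 10.0, 3.0)
```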
Each incremental static nonlinear structural analysis was carried out until the EC8-3 near collapse (NC) limit state was reached for the chord rotation capacity (Equation (A.1) of EC8-3), or until it was impossible to reach convergence of the iterative process used in the nonlinear structural analysis. When the NC shear capacity limit was reached (Equation (A.12) of EC8-3), the shear strength was reduced to only 20% of the original strength (this is the SeismoStruct default option, which seems acceptable in view of some laboratory test results [29]).
Another training set with 27 capacity curves, named training set no. 2 (TS2), was also considered; it is a subset of the first one, with: nb = 1, 3 and 5 spans; Lb = 2, 4 and 6 m; hc = 0.25, 0.4 and 0.55 m.
Additionally, three control cases (not belonging to either training set, TS1 or TS2) were considered: control case no. 1 (CC1), with nb = 1, Lb = 5.5 m and hc = 0.5 m; control case no. 2 (CC2), with nb = 3, Lb = 4.2 m and hc = 0.35 m; and control case no. 3 (CC3), with nb = 4, Lb = 3.5 m and hc = 0.28 m.
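The relationship between the two training sets and the control cases can be verified programmatically: TS2 is a 27-vector subset of the TS1 grid, and the three control cases lie outside both.

```python
# TS2 as a subset of the TS1 grid, and a check that the three control
# cases fall outside both training sets.
from itertools import product

TS1 = set(product([1, 2, 3, 4, 5],
                  [2.0, 3.0, 4.0, 5.0, 6.0],
                  [0.25, 0.325, 0.4, 0.475, 0.55]))
TS2 = set(product([1, 3, 5], [2.0, 4.0, 6.0], [0.25, 0.4, 0.55]))

controls = [(1, 5.5, 0.5), (3, 4.2, 0.35), (4, 3.5, 0.28)]

assert TS2 <= TS1 and len(TS2) == 27
assert all(cc not in TS1 for cc in controls)
```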
To process this amount of data, computer procedures were developed to automatically create the files containing all the training set values.
4.3. Results and Discussion
The results obtained with the two neural networks (ANN1 and ANN2) are compared in Figure 8, Figure 9 and Figure 10. The blue dots are the results obtained from the 125 nonlinear structural analyses (TS1), and the red and green lines are the corresponding results obtained with ANN1 and ANN2, respectively.
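The training of a multilayer feed-forward network on the (nb, Lb, hc) → capacity-curve-parameter mapping can be sketched in plain NumPy. The architecture, activation functions and training settings below are assumptions for illustration only and do not reproduce those of ANN1 or ANN2; the toy target used in the example stands in for the nonlinear-analysis outputs.

```python
# Minimal one-hidden-layer MFFNN (tanh hidden units, linear output)
# trained by batch gradient descent.  Architecture and hyperparameters
# are illustrative assumptions, not the settings of ANN1/ANN2.
import numpy as np

def train_mffnn(X, y, hidden=8, lr=0.05, epochs=2000, seed=0):
    """Train on inputs X (N, n_in) and targets y (N, 1); return a predict fn."""
    rng = np.random.default_rng(seed)
    W1 = rng.normal(0.0, 0.5, (X.shape[1], hidden)); b1 = np.zeros(hidden)
    W2 = rng.normal(0.0, 0.5, (hidden, 1)); b2 = np.zeros(1)
    for _ in range(epochs):
        H = np.tanh(X @ W1 + b1)            # hidden activations
        err = (H @ W2 + b2) - y             # output error (N, 1)
        gW2 = H.T @ err / len(X); gb2 = err.mean(0)
        dH = (err @ W2.T) * (1.0 - H ** 2)  # backprop through tanh
        gW1 = X.T @ dH / len(X); gb1 = dH.mean(0)
        W1 -= lr * gW1; b1 -= lr * gb1; W2 -= lr * gW2; b2 -= lr * gb2
    return lambda Xq: np.tanh(Xq @ W1 + b1) @ W2 + b2

# Toy usage: fit a smooth 1D function in place of a capacity-curve parameter.
X = np.linspace(-1.0, 1.0, 20).reshape(-1, 1)
y = X ** 2
predict = train_mffnn(X, y)
```

In the study, X would hold the normalized (nb, Lb, hc) training vectors and y the corresponding idealized capacity-curve parameters.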
It is evident that the results presented in Figure 6 are highly variable, due to the highly nonlinear structural behaviour of the studied buildings, so the use of a mean capacity curve does not seem to be the best approach to assess the seismic vulnerability of this typology.
This is probably why it is so difficult to predict the seismic response of a building using the much more simplified approaches normally adopted in large-scale studies. The use of an ANN may be a valid alternative, provided the training sets are representative enough of the problem domain.
Observing the results, it can be seen that ANN1 reproduces outputs in good agreement with the structural analysis results. However, that is not the case for ANN2, which is only able to match the results at the 27 training set points (TS2).
To better compare the performance of each neural network, a one-way classical ANOVA F-test [31] was used, and the results are presented in Table 1, Table 2 and Table 3. The lower the F-test results, the higher the confidence in the ANN results.
The obtained F-test results indicate that ANN2 performs better than ANN1 when only the TS2 capacity curves are considered (the 27 capacity curves used to train ANN2), although ANN1 still presents very good results, with F-test values almost equal to zero. Comparing ANN1 and ANN2 only for the TS2 case would therefore lead to the false conclusion that ANN2 performs better than ANN1. However, when all 125 capacity curves of TS1 are considered, it is evident that ANN2 is unable to reproduce the entire domain of the problem, because its F-test results are much higher than zero, even exceeding one. Once again, ANN1 presents very good results, with F-test values still almost zero.
The maximum percentage error of each ANN was also determined for the same input data, corresponding to each of the 125 analysis cases of TS1, and is presented in Table 4.
The highest maximum error was obtained for Fy*, probably because this variable presents a wider range of values. Again, it can be concluded that ANN1 exhibits the lowest errors, which seem acceptable for use in large-scale studies, namely bearing in mind the error that should be expected in this type of study. On the other hand, the maximum errors of ANN2 seem totally unacceptable, as they are much higher than 100%.
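The maximum percentage error reported in Table 4 is simply the largest relative deviation between the ANN outputs and the nonlinear-analysis results; the arrays below are illustrative values only.

```python
# Maximum percentage error of an ANN output over a set of analysis cases
# (illustrative arrays, not the actual study data).
import numpy as np

def max_percentage_error(reference, predicted):
    reference = np.asarray(reference, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    return float(np.max(np.abs(predicted - reference) / np.abs(reference)) * 100.0)

print(max_percentage_error([100.0, 200.0, 400.0], [105.0, 190.0, 400.0]))  # 5.0
```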
It is important to highlight that the only way to significantly reduce the ANN1 and ANN2 errors seems to be to increase the number of training vectors, so as to better cover the whole domain of the studied problem. Overall, ANN1 presents better results than ANN2 when all the ANN solutions obtained for the 125 cases (TS1) are compared against the nonlinear analysis results.
Finally, the results obtained for control cases 1 to 3 were compared to those obtained with the previously trained neural networks (Table 5, Table 6 and Table 7).
ANN1 presents higher errors for the control cases (which do not belong to the training vectors) than for the TS1 results. However, the results still seem acceptable in terms of Fy* (maximum error of 5.2%) and dy* (maximum error of 12.9%), namely in the context of the errors usually associated with large-scale studies using more simplified empirical methods. The worst result was obtained for du* (maximum error of 35.6%). These errors would probably be reduced if a larger number of training vectors were used.