Article

An Interval Neural Network Method for Identifying Static Concentrated Loads in a Population of Structures

by Yang Cao, Xiaojun Wang *, Yi Wang, Lianming Xu and Yifei Wang
National Key Laboratory of Strength and Structural Integrity, Institute of Solid Mechanics, School of Aeronautic Science and Engineering, Beihang University, Beijing 100191, China
* Author to whom correspondence should be addressed.
Aerospace 2024, 11(9), 770; https://doi.org/10.3390/aerospace11090770
Submission received: 5 June 2024 / Revised: 11 September 2024 / Accepted: 13 September 2024 / Published: 19 September 2024
(This article belongs to the Special Issue Aircraft Structural Health Monitoring and Digital Twin)

Abstract

During the design and validation of structural engineering, the focus is on a population of similar structures, not just one. These structures face uncertainties from external environments and internal configurations, causing variability in their responses under the same load. Identifying the real load from these dispersed responses is a significant challenge. This paper proposes an interval neural network (INN) method for identifying static concentrated loads, in which the network parameters are treated as interval values to create a new INN architecture. Additionally, the paper introduces an improved interval prediction quality loss function named the coverage and mean square criterion (CMSC), which balances the interval coverage rate and interval width of the identified load, ensuring that the median of the identified interval is closer to the real load. The efficiency of the proposed method is assessed through three examples and validated through comparative studies against other loss functions. Our findings indicate that this approach improves the interval accuracy, robustness, and generalization of load identification. This improvement is evident even when faced with challenges such as limited training data and significant noise interference.

1. Introduction

Modern engineering structures such as aircraft, high-rise buildings, and new bridges operate in complex environments and are subjected to various external loads, leading to potential structural damage and failure [1]. In particular, aircraft require precise load assessments to balance safety and design costs. Inadequate load assessments can compromise safety margins, causing premature failures, while overly conservative assessments increase design costs due to excess weight [2]. Accurate load information is essential for ensuring structural safety and minimizing damage during normal operations. External load information can be obtained in two main ways: direct measurement [2] and indirect measurement. Direct measurement involves using multiple sensors to measure load information directly when installation is feasible. However, due to harsh conditions, functional requirements, and spatial or temporal constraints, direct measurement is often impractical. Therefore, the common approach in engineering is to indirectly determine structural loads by measuring more accessible structural responses, such as strain, displacement, velocity, and acceleration. This method involves back-calculating external excitations from known structural responses and system characteristics, which is a mechanical inverse problem in structural dynamics [3].
Load identification, originating in aviation in the 1970s [4], remains in its early stages despite extensive research. Typically, load identification involves three steps: establishing a mechanical system model, measuring structural responses, and selecting an identification method. This process spans multiple disciplines, including modeling, structural vibration analysis, and computational inverse methods. Recently, neural networks [2,5,6,7,8,9] have gained attention for rapid load identification, addressing the inefficiencies of traditional methods. Neural networks do not require theoretical expressions between vibration responses and loads. With sufficient data, the network learns the relationship between structural response and load. This is particularly useful for complex nonlinear systems. In 2000, Sofyan and Trivailo [10] used back propagation neural networks (BPNN) to identify aerodynamic loads on thin plate structures and developed general-purpose software for multi-type dynamic load identification problems. In 2006, Trivailo [11] introduced Elman networks to solve the problem of identifying buffeting loads and maneuvering loads on aircraft tail wings, and the identification results showed that Elman networks have good identification performance for aerodynamic load identification problems. In 2015, Samagassi et al. [12] used wavelet vector machines to identify multiple impact loads on linear elastic structures and achieved the identification of multiple impact loads on a simply supported beam model under a single response. In 2018, Cooper and Dimaio [13] achieved the identification of static loads on large wing ribs of an aircraft using feedforward neural networks (FNNs). In 2019, Chen et al. [14] used deep neural networks (DNNs) to achieve impact load identification, where they used the damage deformation characteristic parameters of a hemispherical shell structure and various characteristic parameters of impact loads as inputs and outputs of the DNN network for load identification. In 2022, Yang [8] proposed a dynamic load identification method based on deep dilated convolutional neural networks (DCNN). This method can be used as a filter in dynamic load identification due to its strong noise immunity in convolutional layers. To address the issue of rapid fatigue life depletion caused by airframe flutter, Candon [15] focused on a single-reference aircraft under a multi-input single-output (MISO) load monitoring scenario. Using strain as the input, the goal was to predict the representative bending and torsional dynamics as well as the quasi-static load spectra on the aircraft wings during transonic flutter maneuvers. A suite of machine learning models was consolidated, including linear regression models, traditional artificial neural networks, and deep learning strategies. In 2024, Liu [7] proposed a new hybrid model-data-driven interval structure dynamic load identification framework, which seamlessly combines finite element modeling with machine learning techniques. When facing challenges such as limited training data, significant noise interference, and non-zero initial conditions, this method can also improve the accuracy, robustness, and generalization of dynamic load identification.
Traditional neural network methods can establish an accurate mapping between structural responses and real loads, but however high that mapping accuracy is, the result is not credible when the data deviate from the true values. Because traditional architectures learn a point-to-point input–output mapping, slight variations in the structure itself or minor disturbances in the measurements can cause significant deviations in the performance of a well-trained network, potentially rendering it completely ineffective. Dispersed data may all be valid, yet none of them are exactly accurate. The fixed architecture of traditional neural networks struggles to represent the variability of a population of structures and can become unreliable when faced with such diversity. In such cases, INNs [16,17,18,19] can effectively solve the modeling problem of uncertain data. INNs combine interval analysis with traditional point-value neural networks: interval numbers characterize the subjective and objective uncertainties, while the point-value network models the physical process, so that imprecise data can be modeled and quantified within one framework. Using such modeling methods in system design helps avoid model structural errors and relaxes other modeling requirements. Some scholars [16,20] have studied interval prediction models built on neural networks, but the existing studies all rely on point-value neural networks to produce interval-valued predictions, which is essentially still statistical regression. Interval prediction based on point-value networks is difficult to interpret during training, and the uncertainty of the system inputs and parameters is difficult or impossible to quantify. INNs are a generalization of interval prediction models and can provide a comprehensive prediction result, not just a point value. In 1991, Ishibuchi et al. [21] first proposed a backpropagation neural network (BPNN) model with interval inputs and point-value network parameters, together with forward calculation and backward learning algorithms based on interval analysis theory. Since then, INNs have entered the academic research field. In 1998, Beheshti et al. [19] defined the concept of the INN for the first time and divided the training problem of INNs into two categories: directly obtaining numerical solutions and solving nonlinear optimization problems. In 2000, Garczarczyk [22] proposed a gradient descent algorithm for a four-layer INN with interval weights and thresholds. Yao [23] proposed an improved INN based on the Widrow–Hoff learning rule in 2004 and combined it with a time-delay neural network to construct a dynamic INN model. In 2009, Campi, Calafiore, and Garatti [24] developed an interval predictor model for supervised learning that can perform interval predictions with guaranteed accuracy. In 2019, Sadeghi et al. [25] proposed a backpropagation algorithm with interval predictions, which uses mini-batch stochastic gradient descent to train the network and quantify its uncertainty, thereby achieving constant-width interval predictions. In 2022, Saeed [26] proposed two new interval prediction frameworks using independently recurrent neural networks.
This method employs Gaussian functions centered around the predictions of point forecast models and the estimation errors of error forecast models to estimate the prediction intervals. The average coverage width index improved by 43% and 12%, respectively, compared to traditional models and models based on Long Short-Term Memory (LSTM). Shao [27] introduced a new approach to obtain reliable predictions from the perspective of pattern classification. A novel hybrid framework was established, composed of nested LSTM, Multi-Head Self-Attention (MHSA) mechanisms, CNNs, and a feature space identification method, aimed at robust interval forecasting.
However, subsequent research on INNs has not developed or flourished as expected. One reason is the difficulty of expanding the application scenarios of INNs in practice, which significantly diminishes the motivation for their research. Nonetheless, as structural design increasingly emphasizes robustness alongside precision, the powerful capability of INNs to handle uncertain data positions them as a potential breakthrough technology. The significant uncertainty inherent in load identification for a population of structures aligns well with the capabilities of INNs, yet this area remains largely unexplored. To fill this gap, this paper proposes an INN-based method for uncertain load identification that accounts for these potential deviations and quantifies the uncertainties involved, making the data model more robust. Even in the face of significant uncertainties within a population of structures, the method can still grasp the main contradictions, establish interval mapping relationships, and address the credibility of the data and of the identification results.
The main contributions and innovations of this paper are summarized as follows:
(1) The INN framework is implemented for load identification in a population of structures for the first time;
(2) A global optimization algorithm is proposed to solve the problem of interval weight and threshold in INN;
(3) To reasonably evaluate the accuracy and credibility of the identification results, an improved loss function metric CMSC combining interval coverage rate and interval width is established.

2. Problem Description

In the aerospace industry, static and quasi-static concentrated loads are common types of loads encountered. During aircraft design and validation, static and quasi-static load tests are indispensable for simulating the loads that aircraft may encounter under normal flight or extreme conditions. During landing, the landing gear and aircraft structure are subjected to impact loads. Although these loads occur over a short duration, their nature is more quasi-static because they are predictable and repetitive. When weapons, fuel tanks, or other equipment are mounted under the wings or on the fuselage of an aircraft, the additional loads generated also constitute concentrated loads. Additionally, while parked, aircraft are subject to wind forces. Although these forces are dynamically changing, they can be approximated as quasi-static loads in conditions of low or slowly changing wind speeds.
The study of these loads is crucial for the design, safety, and performance of aircraft. By understanding and analyzing these static and quasi-static loads, engineers can better design aircraft structures to ensure their safe operation in all anticipated operating environments. The mathematical description of static concentrated load identification is shown in Equation (1).
$$y = F(x, b)$$
In this equation, y represents the concentrated load, x represents the measured response, and b represents the physical parameters of the structural system. It is evident that the mapping relationship between the structural response and the applied load is influenced by the parameters of the structural system.
Structures of similar design are considered as a population, in which variations in the parameters b (geometry, materials, and boundary conditions) cause variable responses under identical loads, appearing as scatter points (Figure 1). Additionally, experimental measurements introduce errors.
Identifying the real load from scattered experimental responses under uncertain conditions is a significant and challenging problem. Load identification methods under uncertainty aim to deduce the range of potential loads from the input scatter of experimental responses, ensuring the real load falls within this identified range. By employing a data-driven approach, neural networks offer an alternative to traditional mechanistic models. These networks, which mimic the human brain’s structure and information processing, are capable of self-learning, organizing, and adapting. Unlike mechanistic models, neural network-based models simplify the structure and bypass the need to understand the physical significance, directly mapping the relationships between sensor responses and load data.
This paper introduces an INN approach for structural load identification that leverages its ability to quantify uncertainty. The INN model uses dispersed measurement data from similar structures to provide a range of potential load values, accommodating uncertainties in geometry, materials, and measurement errors. The model makes no restrictive assumptions about the data, which enhances its generalization, robustness, and credibility. The neural network parameters are set as interval values, allowing the output to also be intervals based on interval arithmetic. This INN framework offers reliable prediction intervals even with limited data samples, effectively addressing load identification for a population of structures, as follows:
$$y^I = F(x, b^I) = \mathrm{INN}(x, W^I, \Theta^I)$$
In the INN model for load identification, $y^I$ represents the identified load interval, $x$ represents the measured responses, and $b^I$ represents the interval physical parameters of the structural system, which correspond to the interval weights $W^I$ and interval thresholds $\Theta^I$ within the INN. The learning process of the network parameters corresponds to the updating process of the uncertain physical model. This substitution allows the INN to adaptively refine its estimates based on the variability and uncertainty inherent in the input data, thereby improving the credibility and accuracy of the load identification under uncertain conditions.
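To make the interval parameterization concrete, the following is a minimal Python/NumPy sketch (not from the paper) of one way to store an interval weight matrix $W^I$ or threshold vector $\Theta^I$ as a pair of lower/upper bound arrays; the class name and initialization scheme are illustrative assumptions.

```python
import numpy as np

class IntervalParam:
    """An interval-valued network parameter stored as lower/upper bound arrays."""

    def __init__(self, lower, upper):
        self.lo = np.asarray(lower, dtype=float)
        self.hi = np.asarray(upper, dtype=float)
        if self.lo.shape != self.hi.shape:
            raise ValueError("bounds must have the same shape")
        if np.any(self.lo > self.hi):
            raise ValueError("lower bound must not exceed upper bound")

    @classmethod
    def around(cls, center, radius):
        """Build an interval parameter as center +/- radius (radius >= 0)."""
        center = np.asarray(center, dtype=float)
        return cls(center - radius, center + radius)

# Example: interval weights W^I for a layer with 4 inputs and 6 hidden neurons
rng = np.random.default_rng(0)
W = IntervalParam.around(rng.uniform(-0.5, 0.5, size=(4, 6)), radius=0.1)
```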

3. The Research Framework

The procedure for the INN method applied to load identification problems is depicted in Figure 2. The introduced INN approach enables modeling, analysis, and quantification of the imprecise data obtained, specifically for typical structural components or parts facing load identification challenges. The process for employing the INN approach to address the identification of loads under conditions of uncertainty involves several key steps, which are detailed below:
Data Preparation: Data for training the INN are obtained either through finite element simulation or experimental methods. Each training sample’s input–output relationship is constructed according to the specific type of load identification problem. The dataset is then divided into training and testing sets.
Construction of INN: This involves the selection of the INN type, determination of the network structure, and training of the network. The specific load identification problem dictates the choice of the INN model, while the characteristics of the sample data determine the number of neurons in the input and output layers of the load identification interval network model. Finally, an appropriate loss function for the network is selected, and suitable algorithms are used to train and adjust the network parameters.
Load Identification Testing and Performance Evaluation: The INN model that meets the assessment criteria is used for the identification/prediction of unknown loads. The network’s performance is evaluated by analyzing the results of the load identification.

4. Interval Back Propagation Neural Network Architecture for Load Identification

To ensure the generality and adaptability of the proposed INN architecture in practical applications, this paper does not introduce additional interval quantification methods. Instead, it maintains the original point-value form of the structure response–load data pairs during load identification. The load intervals identified by the interval back propagation neural networks (IBPNN) should contain the real load point values. The structure and learning algorithm of the IBPNN are as follows, with Figure 3 illustrating a typical three-layer IBPNN.
In this context, $x = (x_1, x_2, \ldots, x_l)$ represents the input measured response data, $w^I$ and $v^I$ denote the interval weights, and $\theta^I$ and $\lambda^I$ represent the interval thresholds. The output $y^I$ describes the identified load interval. The subscripts $i$, $j$, and $k$ index the input, hidden, and output layers, respectively, where $i \in [1, l]$, $j \in [1, n]$, and $k \in [1, m]$. At this stage, the network's weights and thresholds are set as interval values, while the input data to the network are in point-value form. The output of the network is generated according to Equation (3).
$$y_k^I = f(x_1, x_2, \ldots, x_l) = [\underline{y}_k, \overline{y}_k] = f\left( \sum_{j=1}^{n} [\underline{v}_{jk}, \overline{v}_{jk}]\, u_j^I - [\underline{\lambda}_k, \overline{\lambda}_k] \right)$$
In this setting, $\overline{(\cdot)}$ and $\underline{(\cdot)}$ denote the upper and lower bounds of an interval quantity, respectively. The output of the hidden layer is $u_j^I = [\underline{u}_j, \overline{u}_j] = f\left( \sum_{i=1}^{l} [\underline{w}_{ij}, \overline{w}_{ij}]\, x_i - [\underline{\theta}_j, \overline{\theta}_j] \right)$, whose lower and upper bounds $\underline{u}_j$ and $\overline{u}_j$ are given by Equation (4).
$$\overline{u}_j = f\left( \sum_{i:\, x_i \ge 0} \overline{w}_{ij}\, x_i + \sum_{i:\, x_i < 0} \underline{w}_{ij}\, x_i - \underline{\theta}_j \right), \qquad \underline{u}_j = f\left( \sum_{i:\, x_i \ge 0} \underline{w}_{ij}\, x_i + \sum_{i:\, x_i < 0} \overline{w}_{ij}\, x_i - \overline{\theta}_j \right)$$
By selecting the unipolar sigmoid function $f(x) = \frac{1}{1 + e^{-x}}$ as the activation function, whose range is $(0, 1)$, it follows that $u_j > 0$. Combining Equations (3) and (4) yields the specific form of the interval-identified load $y_k^I$, whose lower and upper bounds can be expressed as
$$\underline{y}_k = f\left( \sum_{j:\, \underline{v}_{jk} \ge 0} \underline{v}_{jk}\, \underline{u}_j + \sum_{j:\, \underline{v}_{jk} < 0} \underline{v}_{jk}\, \overline{u}_j - \overline{\lambda}_k \right), \qquad \overline{y}_k = f\left( \sum_{j:\, \overline{v}_{jk} \ge 0} \overline{v}_{jk}\, \overline{u}_j + \sum_{j:\, \overline{v}_{jk} < 0} \overline{v}_{jk}\, \underline{u}_j - \underline{\lambda}_k \right)$$
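As an illustration of the forward pass implied by Equations (3)–(5), the following Python/NumPy sketch propagates a point-valued input through one interval-weighted hidden layer and one interval-weighted output layer. It is a minimal reading of the interval arithmetic above; the array names, shapes, and initialization are assumptions, not the authors' code.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def hidden_interval(x, W_lo, W_hi, theta_lo, theta_hi):
    """Hidden-layer interval output for a point-valued input x, as in Eq. (4).

    For x_i >= 0 the upper weight bound maximizes the weighted sum and the
    lower bound minimizes it; for x_i < 0 the roles swap. The threshold is
    subtracted with the opposite bound."""
    x = np.asarray(x, dtype=float)
    pos, neg = np.maximum(x, 0.0), np.minimum(x, 0.0)
    s_hi = W_hi.T @ pos + W_lo.T @ neg
    s_lo = W_lo.T @ pos + W_hi.T @ neg
    return sigmoid(s_lo - theta_hi), sigmoid(s_hi - theta_lo)   # (u_lo, u_hi)

def output_interval(u_lo, u_hi, V_lo, V_hi, lam_lo, lam_hi):
    """Output-layer interval, as in Eq. (5); valid because u > 0 under the sigmoid."""
    s_hi = np.where(V_hi >= 0, V_hi * u_hi[:, None], V_hi * u_lo[:, None]).sum(axis=0)
    s_lo = np.where(V_lo >= 0, V_lo * u_lo[:, None], V_lo * u_hi[:, None]).sum(axis=0)
    return sigmoid(s_lo - lam_hi), sigmoid(s_hi - lam_lo)       # (y_lo, y_hi)

# Tiny example: l = 4 responses, n = 6 hidden neurons, m = 1 output load
rng = np.random.default_rng(0)
W_lo = rng.uniform(-0.5, 0.5, (4, 6)); W_hi = W_lo + 0.1
V_lo = rng.uniform(-0.5, 0.5, (6, 1)); V_hi = V_lo + 0.1
theta_lo = np.zeros(6); theta_hi = theta_lo + 0.05
lam_lo = np.zeros(1);   lam_hi = lam_lo + 0.05
u_lo, u_hi = hidden_interval([0.2, -0.1, 0.4, 0.3], W_lo, W_hi, theta_lo, theta_hi)
y_lo, y_hi = output_interval(u_lo, u_hi, V_lo, V_hi, lam_lo, lam_hi)
```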

5. Solving INN: Loss Function Construction and Interval Parameter Optimization

In addition to the architecture of INN, the effectiveness of load identification is also influenced by the construction of the loss function and the optimization of network parameters. This chapter integrates the strengths of traditional INN and point value neural networks, proposing an improved loss function indicator. A global optimization algorithm is employed for training the network parameters, ultimately achieving efficient and credible interval load identification results.

5.1. Loss Function Based on Quality Assessment Metrics for Predictive Intervals

Traditional neural networks typically use metrics such as Mean Squared Error (MSE) as the loss function for network training. The original loss function can be constructed as follows:
$$\mathrm{MSE} = \frac{1}{2} \sum_{k=1}^{n} \left( d_k^{(p)} - y_k^{(p)} \right)^2$$
Here, $p \in [1, P]$ indexes the training samples, with $P$ the sample size, and $k \in [1, n]$ indexes the network outputs. $d$ represents the real load in the samples. However, for an INN the identified results are interval values while the real outputs are point values. Therefore, when constructing the loss function for the INN, the metric needs to be adapted to account for the interval nature of the predictions. The loss function Interval Mean Squared Error (IMSE) for the IBPNN is defined as follows:
$$\mathrm{IMSE} = \frac{1}{4} \sum_{k=1}^{n} \left[ \left( d_k^{(p)} - \underline{y}_k^{(p)} \right)^2 + \left( d_k^{(p)} - \overline{y}_k^{(p)} \right)^2 \right]$$
In this context, $\underline{y}_k^{(p)}$ and $\overline{y}_k^{(p)}$ represent the lower and upper bounds of the identified load, respectively. Since the real load is in point-value form, we have $\underline{d}_k^{(p)} = \overline{d}_k^{(p)} = d_k^{(p)}$.
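For illustration, a minimal NumPy sketch of the IMSE in Equation (7) might look as follows; averaging over samples and outputs (rather than summing) is an implementation choice assumed here for scale convenience.

```python
import numpy as np

def imse(d, y_lo, y_hi):
    """Interval mean squared error of Eq. (7): squared distances of both interval
    bounds from the point-valued target d, with the 1/4 factor, averaged over samples."""
    d, y_lo, y_hi = (np.asarray(a, dtype=float) for a in (d, y_lo, y_hi))
    return 0.25 * np.mean((d - y_lo) ** 2 + (d - y_hi) ** 2)
```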
However, simply recasting the traditional loss function in interval form and applying it directly to the INN is still inappropriate. The loss function based on IMSE does not reflect how the interval envelops the target values, as illustrated in Figure 4. In this figure, the upper bound of the load identification interval is at a distance of $r_1$ from the real load sample $d_k$, while the lower bound is at a distance of $r_2$ from $d_k$. In the two different interval scenarios, both $r_1$ and $r_2$ are the same, resulting in identical IMSE values. However, the coverage of the intervals differs significantly; clearly, scenario 1 is more desirable.
The researchers [28] introduce the Coverage Width Criterion (CWC) as an evaluation metric for the quality of prediction intervals, used to construct a loss function for INN learning. CWC is related to two other metrics, Prediction Interval Coverage Probability (PICP) and the mean prediction interval width (MPIW). PICP is the most commonly used metric for assessing interval quality. It describes the extent to which the target value is contained within the prediction interval and is the primary indicator for evaluating the quality of prediction intervals. The PICP value directly reflects the accuracy of the prediction interval and represents the percentage of sample target values included in the prediction interval.
Mathematically, PICP is defined as
$$\mathrm{PICP} = \frac{1}{n} \sum_{i=1}^{n} c_i, \qquad c_i = \begin{cases} 1, & L_i \le y_i \le U_i \\ 0, & \text{otherwise} \end{cases}$$
where n is the number of samples and L i and U i represent the lower and upper bounds of the i-th prediction interval. If the i-th prediction interval encloses the corresponding sample value, then c i = 1 ; otherwise, c i = 0 as shown in Figure 5. The PICP value of the identified interval is closely related to the interval width. When using the PICP metric to evaluate the quality of the prediction interval, it is evident that high-quality intervals can be easily obtained by simply widening the interval. However, overly wide prediction intervals have limited practical value and may not meet engineering requirements.
Therefore, in addition to using PICP as an interval quality evaluation metric, it is necessary to introduce a metric related to interval width, namely MPIW, to characterize the quality of the prediction interval. Its mathematical definition is as follows:
$$\mathrm{MPIW} = \frac{1}{n} \sum_{i=1}^{n} \left( U_i - L_i \right)$$
Here, $U_i - L_i$ represents the prediction interval width for the i-th sample. The MPIW provides sensitive information about the prediction interval's responsiveness to changes in the real target value. Specifically, under the same PICP, a smaller MPIW indicates a higher-quality prediction interval.
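A minimal sketch of the two interval-quality metrics of Equations (8) and (9) in Python/NumPy, assuming the identified intervals are given as lower/upper bound arrays:

```python
import numpy as np

def picp(y, lower, upper):
    """Prediction interval coverage probability, Eq. (8): fraction of targets
    y_i that fall inside [L_i, U_i]."""
    y, lower, upper = (np.asarray(a, dtype=float) for a in (y, lower, upper))
    return float(np.mean((lower <= y) & (y <= upper)))

def mpiw(lower, upper):
    """Mean prediction interval width, Eq. (9)."""
    return float(np.mean(np.asarray(upper) - np.asarray(lower)))
```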
From Equations (8) and (9), it is evident that an interval with a high PICP value and a small MPIW value represents a high-quality prediction interval. The metrics PICP and MPIW can describe and measure the quality of the prediction interval from different perspectives. However, increasing PICP typically results in a wider MPIW, while narrowing MPIW often leads to a lower PICP. Our goal is to construct a prediction interval with high coverage (larger PICP) and sufficient narrowness (smaller MPIW), ensuring its ability to quantify and express data errors effectively. Therefore, when using prediction interval quality metrics to construct the INN loss function, these two metrics should be integrated into a comprehensive measure. Based on the above analysis, the INN loss function integrating both quality evaluation metrics [28] is constructed as follows:
$$\mathrm{CWC} = \mathrm{MPIW} \left( 1 + e^{-\eta \left( \mathrm{PICP} - \mu \right)} \right)$$
An analysis of Equation (10) reveals that when the PICP is less than the predefined credibility level $\mu$, the exponential term exceeds 1, causing the CWC to increase sharply as the PICP decreases. In this scenario, PICP serves as the primary evaluation metric for prediction intervals, in line with giving PICP priority weight. Conversely, when the PICP exceeds $\mu$, the exponential term gradually flattens as the PICP value increases, its weight gradually decreases, and MPIW becomes the primary optimization target. $\eta$ represents the weight amplification coefficient for the PICP.
However, the current construction of CWC still has issues. When the upper and lower bounds of the interval are the same, the MPIW remains constant at 0, resulting in a loss function of 0 and rendering the training ineffective. Even if MPIW is not 0, there is no clear rule for selecting intervals with the same interval width, which does not align with practical physical situations. In real-world engineering problems, the sample points are densely distributed around the real value and sparsely distributed far from it. Therefore, this paper proposes an improved coverage and mean square criterion (CMSC), where the MPIW in the formula is replaced by IMSE. This modification aims to have the real load values as close as possible to the median of the identification intervals while fulfilling the requirements of interval coverage rate and interval width. The improved form of CMSC is presented below, as follows:
$$\mathrm{CMSC} = \mathrm{IMSE} \left( 1 + e^{-\eta \left( \mathrm{PICP} - \mu \right)} \right)$$
During the INN learning process, if the obtained PICP value is lower than the set threshold value μ , the resulting prediction intervals may be misleading, emphasizing the optimization of the prediction interval coverage rate. Conversely, when the obtained PICP reaches the specified level, the INN based on the CMSC criterion optimizes IMSE to reduce the uncertainty in the network’s interval outputs.
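Putting the pieces together, a minimal sketch of the CMSC loss of Equation (11) is shown below; the defaults μ = 0.95 and η = 50 follow the settings later reported in Table 2, and this is an illustrative reading rather than the authors' code.

```python
import numpy as np

def cmsc(d, y_lo, y_hi, mu=0.95, eta=50.0):
    """Coverage and mean square criterion, Eq. (11).

    While PICP < mu the exponential penalty dominates, so coverage is improved
    first; once PICP >= mu the IMSE term dominates, tightening the intervals
    around the real load."""
    d, y_lo, y_hi = (np.asarray(a, dtype=float) for a in (d, y_lo, y_hi))
    coverage = np.mean((y_lo <= d) & (d <= y_hi))                      # PICP, Eq. (8)
    interval_mse = 0.25 * np.mean((d - y_lo) ** 2 + (d - y_hi) ** 2)   # IMSE, Eq. (7)
    return float(interval_mse * (1.0 + np.exp(-eta * (coverage - mu))))
```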

5.2. Optimization of Parameters in INN

The fundamental concept of INN involves generating interval output results through neural networks. For interval prediction, algorithms like the LUBE (Lower Upper Bound Estimation) algorithm [28] construct a neural network model with dual output neurons. In this model, the two outputs of the neural network represent the upper and lower bounds of the prediction interval, thereby forming the output interval. The INN model proposed in this chapter achieves interval results by utilizing interval weights and thresholds within the network. Figure 6 illustrates the distinctions between these two methods. Compared to the LUBE algorithm, the INN architecture used in this study offers greater fitting flexibility due to the interval form of weights and thresholds, even with the same number of layers and neurons.
In this chapter, the CMSC is used as the objective function of the IBPNN model. Considering the extensive interval calculations and the abrupt, step-like nature of CMSC, gradient-based algorithms are not particularly suitable for optimizing the interval weights and thresholds. The genetic algorithm (GA) originates from computer simulations of biological systems. It is a stochastic global search and optimization method that mimics the evolutionary mechanisms of nature, drawing inspiration from Darwin's theory of evolution and Mendel's principles of genetics. Essentially, the GA is an efficient parallel global search method that can automatically acquire and accumulate knowledge about the search space during the search and adaptively control the search process to find the optimal solution.
Given the advantages of the GA in discovering optimal solutions, avoiding local optima, and ensuring convergence, it is chosen for network optimization. Let the interval weight matrix be $W^I$ and the threshold vector be $\Theta^I$. The optimization goal for the INN is to determine the optimal network $P = (W^I, \Theta^I)$ such that the value of its loss function CMSC is minimized.
The steps for the genetic optimization of optimal interval network parameters are as follows:
(1) Encode the interval network parameters to be optimized into chromosomes, and set the loss function CMSC on the training data as the fitness function.
(2) Evaluate the fitness of the individual corresponding to each chromosome.
(3) Following the principle that the higher the fitness, the greater the probability of selection, select two individuals from the population as parents.
(4) Extract the chromosomes of the parents and perform crossover to produce offspring.
(5) Mutate the chromosomes of the offspring.
(6) Repeat steps (2)–(5) until the optimal population is produced, thereby obtaining the optimized network parameters and ultimately achieving high-quality prediction intervals.
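The following Python sketch shows one way the selection/crossover/mutation cycle above could be organized; it is an illustrative, simplified GA (not the authors' implementation), and a practical encoding would also guarantee that every decoded lower bound stays below its upper bound, for example by encoding midpoints and non-negative radii.

```python
import numpy as np

def train_inn_ga(x_train, d_train, forward, n_params, loss_fn,
                 pop_size=50, n_gen=200, crossover_frac=0.8,
                 mutation_rate=0.1, rng=None):
    """Minimal GA loop for the interval weights/thresholds.

    `forward(chromosome, x)` is assumed to decode a flat parameter vector into
    interval weights/thresholds and return (y_lo, y_hi); `loss_fn` is the CMSC
    of Eq. (11), so a lower value means a fitter individual."""
    rng = np.random.default_rng(rng)
    pop = rng.uniform(-1.0, 1.0, size=(pop_size, n_params))      # step (1): chromosomes

    def loss(chrom):
        y_lo, y_hi = forward(chrom, x_train)
        return loss_fn(d_train, y_lo, y_hi)                      # step (2): fitness

    for _ in range(n_gen):
        scores = np.array([loss(c) for c in pop])
        pop = pop[np.argsort(scores)]                            # rank by fitness
        children = [pop[0].copy(), pop[1].copy()]                # keep two elites
        while len(children) < pop_size:
            i, j = rng.choice(pop_size // 2, size=2, replace=False)  # step (3): parents from fitter half
            a, b = pop[i], pop[j]
            if rng.random() < crossover_frac:                    # step (4): uniform crossover
                mask = rng.random(n_params) < 0.5
                child = np.where(mask, a, b).astype(float)
            else:
                child = a.copy()
            mutate = rng.random(n_params) < mutation_rate        # step (5): mutation
            child[mutate] += rng.normal(0.0, 0.1, size=int(mutate.sum()))
            children.append(child)
        pop = np.array(children)                                 # step (6): next generation

    scores = np.array([loss(c) for c in pop])
    return pop[int(np.argmin(scores))]                           # best chromosome found
```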
Figure 7 illustrates the overall flowchart of the INN model for centralized load identification based on the CMSC function. The detailed steps are described as follows.
  • Data Partitioning
Structural responses and load data measured under different loading conditions from multiple individuals of the same structural population serve as the data source for the INN model. The data are partitioned into training and testing sets.
  • Parameter Initialization
Determine the input and output dimensions of the INN model based on the dimensionality of the load identification problem to be solved. Set algorithm-related parameters such as the maximum number of iterations and population size for GA. Initialize the interval weights and thresholds of the INN model. Set the credibility level and amplification factor in the CMSC-based network loss function.
  • IBPNN Network Training
Determine the optimal number of neurons in the hidden layer using empirical formulas. Set the fitness function of the GA to be the normalized CMSC index of the INN prediction interval. Once the GA meets the termination conditions, obtain the optimal interval weights and thresholds of the INN model, complete the training, and output the optimal network.
  • Construction of Identified Load Intervals
Based on the optimized IBPNN model obtained in step 3, input the structural response data from the test set. Predict and identify the load intervals. Evaluate the identification performance using PICP, IMSE, and CMSC metrics.
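Stitching the earlier sketches together, a hypothetical end-to-end run of the four steps above could look like this; the data arrays, the `decode_forward` mapping, and `n_params` are assumed to exist, and all names are illustrative.

```python
# Steps 1-3: train the IBPNN on the measured response/load pairs
best = train_inn_ga(x_train, d_train, forward=decode_forward,
                    n_params=n_params, loss_fn=cmsc,
                    pop_size=50, n_gen=200)

# Step 4: identify load intervals for the test set and evaluate them
y_lo, y_hi = decode_forward(best, x_test)
print("PICP:", picp(d_test, y_lo, y_hi))
print("IMSE:", imse(d_test, y_lo, y_hi))
print("CMSC:", cmsc(d_test, y_lo, y_hi))
```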

6. Numerical Case and Engineering Application

The paper conducts validation through both numerical and experimental case studies. In Case 1, the improved CMSC INN loss function indicator is compared with the traditional MSE loss function of deterministic neural networks and the traditional CWC loss function indicator of INN. The results demonstrate that the CMSC indicator can provide superior load prediction intervals. In Case 2, the study employs random sampling and finite element simulation to model the influence of uncertainties in material properties and dimensions on samples. This approach proves that the proposed method can still achieve accurate interval load identification results despite the presence of uncertainties. In Case 3, the method is applied to engineering experiments. The findings show that with minimal training from a few test results, interval load identification for all samples can be achieved with excellent accuracy. This demonstrates the method’s robustness and effectiveness in practical engineering scenarios.

6.1. Case 1: Identification of Polynomial Response

Assuming the structure bears a load denoted as y and generates a response x, when there is a single measurement point the response x corresponds directly to the load y, and x can be represented as a function of y, such as x = f(y). The unit of y is kilonewtons (kN), and x represents the structural strain. Similarly, from the perspective of the inverse identification problem, the load y can also be expressed as a function of the response x, y = g(x). In this example, the form of g(x) is assumed as shown in Equation (12).
$$y = g(x) = 1 + \left[ \frac{1}{1 + e^{-12(x - 0.5)}} + 0.1\, x\, (1 - x) \right] \left( 1 + 0.05\, \mathrm{rnd}[-1, 1] \right)$$
Here, when uncertainty is not considered, the load and response are assumed to be unbiased ideal data, and the functional relationship between load and response is as shown in Figure 8. rnd[−1,1] represents a random number within the interval [−1,1], following a normal distribution, and symbolizes the internal and external uncertainties in the identification process. Aside from the translation shift of 1, the uncertainty is approximately 5% of the real value. The normalized x is randomly sampled from the range [0, 1]. There are 100 samples of x representing 100 different operating conditions, with 70 sets allocated to the training set and 30 sets to the test set. The INN is composed of three hidden layers, each containing four neuron nodes.
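For reproducibility of the setup described above, the following sketch generates such a dataset; note that the grouping of the noise term in Equation (12) is reconstructed from the surrounding text (roughly 5% multiplicative noise on the value above the shift of 1), and the uniform draw for rnd[−1, 1] is a simplifying assumption.

```python
import numpy as np

rng = np.random.default_rng(0)

def g(x, rnd):
    """Assumed form of Eq. (12): shifted sigmoid plus a small quadratic term,
    with ~5% multiplicative disturbance on everything above the shift of 1."""
    clean = 1.0 / (1.0 + np.exp(-12.0 * (x - 0.5))) + 0.1 * x * (1.0 - x)
    return 1.0 + clean * (1.0 + 0.05 * rnd)

x = rng.uniform(0.0, 1.0, size=100)        # 100 operating conditions, normalized response
rnd = rng.uniform(-1.0, 1.0, size=100)     # disturbance drawn from [-1, 1]
y = g(x, rnd)                              # corresponding loads (kN)

x_train, y_train = x[:70], y[:70]          # 70 training sets
x_test,  y_test  = x[70:], y[70:]          # 30 test sets
```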
The training process of the network parameters and the fitness function are illustrated in Figure 9. To illustrate the necessity of interval neural networks, a deterministic neural network, shown in Figure 10, was established for comparative validation. As seen in Figure 10a,b, if the uncertainties within the structural group are not considered, the identification results of the traditional neural network fit well on both the training and testing sets. However, as shown in Figure 10c,d, traditional neural networks fail to quantify biases in the face of uncertain disturbances. When a trained deterministic neural network is applied to load identification, it provides a single fixed load value; we can only infer that the real load lies near the identified value, but whether it is slightly higher or lower, and by how much, cannot be predicted in advance. Consequently, in practical engineering applications, researchers find it challenging to make confident decisions based on such fixed-load results. In addition, a well-trained deterministic network may become inapplicable or even fail completely under slight changes in the data model. Situations where the identified load is less than the real load are extremely dangerous in engineering applications.
Therefore, interval load identification methods are necessary. These methods account for potential deviations and quantify uncertainties, providing a more robust data model that can address significant uncertainties even within structural groups. They establish interval mapping relationships and resolve issues related to data and result credibility. Furthermore, the application prospects of interval neural networks can be discussed from two aspects. First, the midpoint of the identified interval can directly serve as an estimate of the real load. Second, the range of the identified interval can be used to estimate the error bounds of the real load, with the upper bound providing a conservative estimate for engineering applications. This approach not only matches the performance of traditional deterministic neural networks but also accounts for uncertainties and disturbances encountered in practical engineering applications.
Furthermore, to validate the performance of interval neural network load identification under different loss functions, a comparative analysis was conducted. As observed in Figure 11, while the identification accuracy based on IMSE is relatively high, the intervals do not encompass the sample points, which makes the identified intervals of little reference value. Conversely, the intervals identified based on CWC are sufficiently wide to cover the real load samples, as shown in Figure 12. However, the CWC loss function only imposes constraints on interval coverage and width while neglecting the correspondence between the interval medians and the real load. As a result, it largely loses the mapping relationship between response and load, as can be seen by comparison with Figure 8. In contrast, the intervals identified using CMSC in Figure 13 not only ensure full coverage of the training loads but also better reflect the functional mapping between response and load. In the test set, only one sample is not covered by the identified interval, demonstrating a more effective and practical interval estimation. The values of the test set metrics are as follows: PICP = 96.67%, IMSE = 0.0072, and CMSC = 0.0079.

6.2. Case 2: Identification of Concentrated Static Loads on Cantilever Beams

A cantilever beam model as shown in Figure 14 was constructed, with its far-left node fixed. The cantilever beam was divided into 10 elements with nodes numbered from left to right. The beam has a length of l , width of b , and height of h . The elastic modulus E , density ρ , and Poisson’s ratio ν of the metal material applied to the beam were considered. To simulate the variability between individuals of the same type, the material and dimensional parameters of the cantilever beam in each sample were randomly selected within the corresponding ranges shown in Table 1.
This section includes two loading schemes: (1) applying load F1 at node 4 and (2) applying load F2 at node 9. The magnitudes of F1 and F2 are randomly selected within the range of 0 to 1000 N. Using the displacement responses at nodes 1, 3, 5, and 8 as network inputs, a trained INN is employed to identify the values of the loads F1 and F2.

6.2.1. Data Preparation for F1 and F2

The first 30 sets of data from 40 different operating conditions are selected for training the INN, while the remaining 10 sets are used for testing, as shown in Figure 15. The INN employed here uses the normalized interval quality assessment index (CMSC) as the loss function for training. The globally optimal values of the network parameters (i.e., the interval weights and thresholds) are obtained using the GA. The parameter settings for the INN algorithm are detailed in Table 2.
After normalizing the displacement response and force load data at the measurement points, training samples for the INN can be obtained. Based on the number of structural displacement responses and force loads, the network structure can be determined, specifically the number of input neurons, which is 4, and the number of output neurons, which is 1. Considering the complexity of interval calculations, we select a 5-layer INN model, which means the network has three hidden layers. Testing has shown that when the number of hidden layer nodes is set to six, the INN achieves a good training effect.

6.2.2. Load Identification Results Evaluation for F1 and F2

When the network learning reaches the maximum number of iterations, the training stops and the network obtains the optimal structural parameters. After sufficient training, the load identification results for the 30 training samples and 10 test samples can be obtained using the constructed INN model.
Figure 16 and Figure 17 show the fitness values of the loss function CMSC during the training process and the load identification results under two different loading forms. From the figures and Table 3, it can be observed that the load intervals identified by the INN model completely envelop the real values of the loads to be identified under various operating conditions and loading modes. In the figure, the x-axis represents the sample numbers, while the y-axis represents the load in units of kN.

6.3. Case 3: Identification of Shear Loads on Perforated Stiffened Plates

6.3.1. Experimental System Setup Plan

To verify the effectiveness of the proposed methodology, an experimental study of stiffened plates under shear loads was carried out. The experimental setup utilized in this study primarily comprises a load application system and a data acquisition system.
Specifically, the equipment includes an MTS mechanical testing machine, a resistance strain gauge rosette, and a data acquisition system. The MTS mechanical testing machine is used to apply loads to the test specimen, primarily through a stepwise loading process. The strain gauges measure the structural strain, capturing local response information of the structure under load. To establish the load identification experimental system, it is crucial to define the transmission paths of both the load and response signals and subsequently develop the experimental procedure. Initially, the loading scheme is set up using an MTS mechanical testing machine to apply the load to the test piece and simultaneously collect load information. Additionally, strain gauges are used to measure the response signals during the process and input them into the load identification algorithms for comparison with the collected real load information. The detailed process is illustrated in Figure 18.

6.3.2. Experiment Conducted

In this experiment, composite perforated stiffened plates made of T300/901 are selected as the experimental subjects. The T300/901 carbon/epoxy composite laminates consist of bidirectional carbon fiber-impregnated material (woven composite material) with a single-layer thickness of 0.22 mm. The stiffened plate includes a panel, reinforced pieces, ribs, a cap ridge, and a fixture. The panel is the main load-bearing component, but its stability is poor. The ribs are arranged to increase its stability, and the fixture and cap ridge are used to connect the panel and ribs.
The reinforced pieces are made of aluminum alloy, while the other parts are made of fabric composite materials. The dimensions of the stiffened plate are shown in Figure 19a. The stacking sequence of all composite parts is $[45/0_2/45/0/\overline{45}]_s$. The test setup and loading schematic of the shear loading condition (SLC) are shown in Figure 19b. In the load diagram of the SLC, the tensile force is transmitted to the fixture through a planar hinge, causing the structure to be subjected to shear load.
In this test, each test piece was loaded from 0 to 120 kN in 5 kN steps. There are four test samples for the shear test. Structural strain is a local response sensitive to the applied load, making it an important reference for load identification. Shear loads were applied to the stiffened plate, and the structural in-plane shear strains were measured. The distribution of the strain measurement points used for model updating is shown in Figure 19c.

6.3.3. Data Preparation for Training

Four test specimens were incrementally loaded to measure the corresponding load and structural strain response data, and an INN database was established. Twenty sets of data pairs were randomly selected from the database as the training set, as shown in Table 4.
As shown in Figure 19b,c, the stiffened plate was loaded at the lower end and fixed at the upper end. There are three structural measurement points; thus, the neural network has three input neurons and one output neuron. After testing, a three-layer hidden layer with four neurons yielded satisfactory recognition results. In addition, the genetic algorithm was configured with a population size of 50, a maximum iteration number of 20,000, a credibility level μ of 95% in the CMSC function, and an amplification factor η of 50.

6.3.4. Load Identification Results and Performance Evaluation

The recognition effectiveness on the training dataset loadings is illustrated in Figure 20. It can be observed that the PICP is 100%, indicating that the identified intervals effectively envelop the real loadings. The CMSC index is 0.04, demonstrating that the recognition results adequately balance interval coverage and interval width. The optimally trained INN was applied to predict four additional operating conditions for the test specimens, all yielding excellent load-interval recognition results, as shown in Figure 21 and Table 5.
In experiments, the sources of uncertainty are extensive. In addition to uncertainties related to structural geometry, material parameters, and connection conditions, there are also measurement errors due to factors such as circuit noise, environmental interference, and operational deviations during the assembly and disassembly of test specimens. Each load condition sample is independent and exhibits varying degrees of uncertainty. These uncertainties lead to significant dispersion in the results measured for each test specimen. The differing manifestations of uncertainty between the training and test samples contribute to the dispersion observed in the recognition results of the test specimens shown in Figure 21. Despite these challenges, the INN method outlined in the paper can accurately predict the load range and envelop the real loadings. Moreover, compared to other neural network methods, INN can achieve accurate prediction results with a smaller training dataset, which is particularly advantageous for engineering applications.

7. Conclusions

In engineering practice, similar structures exhibit uncertainties in material properties and geometric dimensions. Additionally, observational errors introduced during experimental measurements lead to dispersed responses among these structures under the same load, making accurate load identification challenging. Traditional point-value-based neural networks, constrained by a fixed data structure, struggle to address these uncertainties.
Therefore, this paper introduces the interval neural network method, in which the weights and thresholds of the network are treated as interval values, enhancing its robustness in dealing with model and data uncertainties while maintaining recognition accuracy. Furthermore, the paper presents an improved CMSC loss function metric. This function is shown to balance the interval coverage rate of the real load and the interval width, thereby bringing the median of the recognized interval closer to the real load and outperforming traditional point-value neural network loss functions such as MSE and conventional interval neural network loss functions such as CWC. Considering the characteristics of interval operations and the step-like nature of the loss function, a genetic algorithm is employed for the global optimization of the interval network parameters. This ultimately achieves credible interval load identification under uncertain influences.
In summary, through numerical and experimental case studies, the paper achieves uncertainty modeling and quantification in load identification problems, with an interval coverage rate greater than 95%. This demonstrates the method's accuracy, credibility, and strong interference resistance when studying the performance of a population of structures, thereby expanding its application prospects in engineering.

Author Contributions

Conceptualization, Y.W. (Yi Wang) and X.W.; methodology, Y.C.; software, L.X.; validation, Y.W. (Yifei Wang); writing—original draft preparation, Y.C. and Y.W. (Yi Wang); writing—review and editing, X.W.; visualization, L.X.; supervision, Y.W. (Yifei Wang). All authors have read and agreed to the published version of the manuscript.

Funding

This research is supported by the National Natural Science Foundation of China (NSFC Nos. 12472193, 12072006, 52192632, and 12132001).

Data Availability Statement

The data presented in this study are available on request from the corresponding author.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Zhang, Y.F.; Zhu, J.S. Damage identification for bridge structures based on correlation of the bridge dynamic responses under vehicle load. Structures 2021, 33, 68–76.
  2. Liu, R.; Dobriban, E.; Hou, Z.; Qian, K. Dynamic Load Identification for Mechanical Systems: A Review. Arch. Comput. Methods Eng. 2022, 29, 831–863.
  3. Gladwell, G.M.L. Inverse Problems in Vibration. Appl. Mech. Rev. 1986, 39, 1013–1018.
  4. Bartlett, F.; Flannelly, W. Model verification of force determination for measuring vibratory loads. J. Am. Helicopter Soc. 1979, 24, 10–18.
  5. Zhao, L.W.; Yin, B. The Study of Load Identification Based on Raw Current Data. In Proceedings of the 2018 IEEE 4th Information Technology and Mechatronics Engineering Conference (ITOEC 2018), Chongqing, China, 14–16 December 2018; pp. 528–534.
  6. Yang, X.L.; Guo, Y.F.; Chen, Y.W.; Zhao, J.W.; Dong, L.L.; Lü, Y.J. Random load identification of cylindrical shell structure based on multi-layer neural network and support vector regression. J. Strain Anal. Eng. Des. 2024, 59, 03093247241245185.
  7. Liu, Y.R.; Wang, L.; Ng, B.F. A hybrid model-data-driven framework for inverse load identification of interval structures based on physics-informed neural network and improved Kalman filter algorithm. Appl. Energy 2024, 359, 122740.
  8. Yang, H.; Jiang, J.; Chen, G.; Zhao, J. Dynamic load identification based on deep convolution neural network. Mech. Syst. Signal Process. 2023, 185, 109757.
  9. Miller, B.; Piątkowski, G.; Ziemiański, L. Beam yielding load identification by neural networks. Comput. Assist. Methods Eng. Sci. 2023, 6, 449–467.
  10. Sofyan, E.; Trivailo, P. Solving Aerodynamic Load Inverse Problems Using a Hybrid FEM-Artificial Intelligence. In Proceedings of the Australasian MATLAB Users Conference, Melbourne, Australia, 9–10 November 2000.
  11. Trivailo, P.M.; Carn, C.L. The inverse determination of aerodynamic loading from structural response data using neural networks. Inverse Probl. Sci. Eng. 2006, 14, 379–395.
  12. Samagassi, S.; Khamlichi, A.; Driouach, A.; Jacquelin, E. Reconstruction of multiple impact forces by wavelet relevance vector machine approach. J. Sound Vib. 2015, 359, 56–67.
  13. Cooper, S.B.; Dimaio, D. Static load estimation using artificial neural network: Application on a wing rib. Adv. Eng. Softw. 2018, 125, 113–125.
  14. Chen, G.; Li, T.; Chen, Q.; Ren, S.; Wang, C.; Li, S. Application of deep learning neural network to identify collision load conditions based on permanent plastic deformation of shell structures. Comput. Mech. 2019, 64, 435–449.
  15. Candon, M.; Esposito, M.; Fayek, H.; Levinski, O.; Koschel, S.; Joseph, N.; Carrese, R.; Marzocca, P. Advanced multi-input system identification for next generation aircraft loads monitoring using linear regression, neural networks and deep learning. Mech. Syst. Signal Process. 2022, 171, 108809.
  16. Khosravi, A.; Nahavandi, S.; Creighton, D.; Atiya, A.F. Comprehensive Review of Neural Network-Based Prediction Intervals and New Advances. IEEE Trans. Neural Netw. 2011, 22, 1341–1356.
  17. Weerdt, E.D.; Chu, Q.P.; Mulder, J.A. Neural Network Output Optimization Using Interval Analysis. IEEE Trans. Neural Netw. 2009, 20, 638–653.
  18. Hwang, J.G.; Ding, A.A. Prediction intervals for artificial neural networks. J. Am. Stat. Assoc. 1997, 92, 748–757.
  19. Beheshti, M.; Berrached, A.; de Korvin, A.; Hu, C.; Sirisaengtaksin, O. On interval weighted three-layer neural networks. In Proceedings of the 31st Annual Simulation Symposium, Boston, MA, USA, 5–9 April 1998; IEEE: New York, NY, USA, 1998; pp. 188–194.
  20. Dipu, K.H.M.; Abbas, K.; Anwar, H.M.; Saeid, N. Neural Network-Based Uncertainty Quantification: A Survey of Methodologies and Applications. IEEE Access 2018, 6, 36218–36234.
  21. Ishibuchi, H.; Tanaka, H. An extension of the BP-algorithm to interval input vectors-learning from numerical data and expert's knowledge. In Proceedings of the 1991 IEEE International Joint Conference on Neural Networks, Seattle, WA, USA, 18–21 November 1991; Volume 1582, pp. 1588–1593.
  22. Garczarczyk, Z.A. Interval neural networks. In Proceedings of the International Symposium on Circuits and Systems, Geneva, Switzerland, 28–31 May 2000.
  23. Xifan, Y.; Shengda, W.; Shaoqiang, D. Approximation of interval models by neural networks. In Proceedings of the 2004 IEEE International Joint Conference on Neural Networks (IEEE Cat. No.04CH37541), Budapest, Hungary, 25–29 July 2004; Volume 1022, pp. 1027–1032.
  24. Campi, M.C.; Calafiore, G.; Garatti, S. Interval predictor models: Identification and reliability. Automatica 2009, 45, 382–392.
  25. Sadeghi, J.; de Angelis, M.; Patelli, E. Efficient training of interval Neural Networks for imprecise training data. Neural Netw. 2019, 118, 338–351.
  26. Saeed, A.; Li, C.; Gan, Z.; Xie, Y.; Liu, F. A simple approach for short-term wind speed interval prediction based on independently recurrent neural networks and error probability distribution. Energy 2022, 238, 122012.
  27. Shao, Z.; Yang, Y.; Zheng, Q.; Zhou, K.; Liu, C.; Yang, S. A pattern classification methodology for interval forecasts of short-term electricity prices based on hybrid deep neural networks: A comparative analysis. Appl. Energy 2022, 327, 120115.
  28. Lian, C.; Zeng, Z.G.; Yao, W.; Tang, H.M.; Chen, C.L.P. Landslide Displacement Prediction With Uncertainty Based on Neural Networks with Random Hidden Weights. IEEE Trans. Neural Netw. Learn. Syst. 2016, 27, 2683–2695.
Figure 1. Uncertainty in a population of structures.
Figure 2. The modeling process of INN for load identification issues.
Figure 3. Topology of a three-layer IBPNN.
Figure 4. The identified interval values of two types at the same objective function value (IMSE).
Figure 5. Graph demonstrating the concept of PICP.
Figure 6. Comparison of two types of interval prediction models.
Figure 7. Flowchart of the centralized load identification algorithm based on IBPNN.
Figure 8. The functional relationship between response and load.
Figure 9. Iterative optimization process using a genetic algorithm.
Figure 10. The results of the load identification based on traditional BPNN.
Figure 11. The results of the load identification based on the loss function IMSE.
Figure 12. The results of the load identification based on the loss function CWC.
Figure 13. The results of the load identification based on the loss function CMSC.
Figure 14. Illustration of a cantilever beam structure subjected to a concentrated static load.
Figure 15. The displacement response of nodes under various loading scenarios.
Figure 16. The load identification results when loading F1.
Figure 17. The load identification results when loading F2.
Figure 18. Experimental procedure schematic.
Figure 19. Experimental procedure.
Figure 20. Recognition results of the training dataset.
Figure 21. Load identification results for four test samples.
Table 1. Material and geometric parameters of the cantilever beam (material properties: E, ρ, ν; geometric dimensions: l, b, h).

| E (GPa) | ρ (kg/m³) | ν | l (mm) | b (mm) | h (mm) |
| [196, 204] | [7760, 7840] | [0.297, 0.303] | [998, 1002] | [9.9, 10.1] | [9.9, 10.1] |
Table 2. Parameter settings of the INN algorithm.

| Algorithm Functions | Parameter | Value |
| GA Algorithm | Population Size | 50 |
| | Elite Count | 2 |
| | Crossover Fraction | 0.8 |
| | Mutation (uniform) | 0.1 |
| CMSC Algorithm | μ | 0.95 |
| | η | 50 |
Table 3. Identifying and evaluating performance indicators for F1 and F2.

| | | PICP | IMSE | CMSC |
| Train | F1 | 100% | 0.0034 | 0.0037 |
| | F2 | 100% | 0.0033 | 0.0035 |
| Test | F1 | 100% | 0.0037 | 0.0039 |
| | F2 | 100% | 0.0025 | 0.0027 |
Table 4. Training data set.

| Test Specimen Number | Loading Magnitude (kN) | Test Specimen Number | Loading Magnitude (kN) |
| 1 | 5 | 2 | 50 |
| 1 | 15 | 2 | 75 |
| 1 | 20 | 2 | 80 |
| 1 | 45 | 3 | 5 |
| 1 | 65 | 3 | 45 |
| 1 | 95 | 3 | 75 |
| 1 | 105 | 3 | 80 |
| 2 | 5 | 3 | 115 |
| 2 | 10 | 4 | 10 |
| 2 | 45 | 4 | 30 |
Table 5. Identifying and evaluating performance indicators.

| Samples | | PICP | IMSE | CMSC |
| Train | specimens | 100% | 0.0150 | 0.0162 |
| Test | specimen 1 | 100% | 0.0146 | 0.0158 |
| | specimen 2 | 100% | 0.0160 | 0.0173 |
| | specimen 3 | 100% | 0.0194 | 0.0210 |
| | specimen 4 | 100% | 0.0092 | 0.0099 |