Article

Research and Application of ROM Based on Res-PINNs Neural Network in Fluid System

1
China Aerospace Academy of Systems Science and Engineering, Beijing 100089, China
2
China Academy of Space Technology, Beijing 100089, China
*
Author to whom correspondence should be addressed.
Symmetry 2025, 17(2), 163; https://doi.org/10.3390/sym17020163
Submission received: 31 December 2024 / Revised: 19 January 2025 / Accepted: 20 January 2025 / Published: 22 January 2025

Abstract

In the design of fluid systems, rapid iteration and simulation verification are essential, and reduced-order modeling techniques can significantly improve computational efficiency and accuracy. However, traditional Physics-Informed Neural Networks (PINNs) often face challenges such as vanishing or exploding gradients when learning flow field characteristics, limiting their ability to capture complex fluid dynamics. This study presents an enhanced reduced-order model (ROM): Physics-Informed Neural Networks based on Residual Networks (Res-PINNs). By integrating a Residual Network (ResNet) module into the PINN architecture, the proposed model improves training stability while preserving physical constraints. Additionally, the model’s ability to capture and learn flow field states is further enhanced by the design of a symmetric parallel neural network structure. To evaluate the effectiveness of the Res-PINNs model, two classic fluid dynamics problems—flow around a cylinder and Vortex-Induced Vibration (VIV)—were selected for comparative testing. The results demonstrate that the Res-PINNs model not only reconstructs flow field states with higher accuracy but also effectively addresses limitations of traditional PINN methods, such as vanishing gradients, exploding gradients, and insufficient learning capacity. Compared to existing approaches, the proposed Res-PINNs provide a more stable and efficient solution for deep learning-based reduced-order modeling in fluid system design.

1. Introduction

Modeling and simulation (M&S) of fluid systems involve using computational techniques to represent the varying states and key behavioral characteristics of fluids. By solving system model equations through computer simulations, detailed information such as velocity fields, pressure distributions, and temperature distributions can be obtained. This approach enables iterative validation, performance testing, and identification of potential issues in the designed system without the need for physical experiments, thereby enhancing safety and reliability. However, in computational simulations, spatial flow field regions are typically discretized using grids, and increasing the number of grid cells significantly raises computational costs. Moreover, due to the complexity of fluid dynamics equations, especially in cases involving complex fluid behaviors such as multiphase flows and turbulence, more computational resources are required, making it challenging to achieve rapid solutions for fluid problems [1,2]. Consequently, ROM methods are necessary to accelerate simulation speed and reduce the complexity of solving fluid dynamics problems.
ROM is a well-established method for reducing computational time, proposed several years ago [3]. ROM simplifies the original system into a lower-order system that retains key behaviors and characteristics, thereby decreasing computational resources and increasing efficiency while accurately capturing the system’s dynamic behavior. Proper Orthogonal Decomposition (POD) and Dynamic Mode Decomposition (DMD) are the most conventional ROM methods that help to lower computational costs while maintaining sufficient accuracy [4] and have been widely applied in related fields. In recent years, with the significant improvements in computational power, Artificial Intelligence (AI) techniques such as deep learning (DL) have been extensively applied to surrogate modeling and ROM [5]. Deep-learning methods effectively reduce the dimensionality of system models, reconstructing high-dimensional model characteristics into lower-dimensional features, thus enabling more efficient system simulation and analysis. These methods are now widely used across multiple engineering fields, such as aerospace [6], materials engineering [7], and structural engineering [8], demonstrating substantial advantages. For example, Halder [9] successfully used a Recurrent Neural Network (RNN) model to predict the force at the next time step based on the current aileron motion state in a wing-gust and aeroelastic interaction system, proving the effectiveness of DL in dynamic system modeling. Similarly, Zhang [10] utilized a Deep Neural Network (DNN) to predict force variations in a vortex-induced vibration system given fluid parameters and boundary conditions, further validating the potential of DL in complex fluid dynamics problems. Additionally, Li [11] employed a Long Short-Term Memory (LSTM) network to capture the dynamic characteristics of airfoil pitching and plunging systems in transonic flow across multiple Mach numbers, enabling rapid acquisition of force and motion responses, and demonstrating its value in various flow states. Despite the significant progress achieved by DL methods in surrogate modeling for fluid dynamics and structural engineering, these approaches typically reconstruct only partial information about the flow field state. For more complex simulation problems, such as fluid control or fluid–structure interactions, it is necessary to fully reconstruct the entire flow field state, as it represents essential parameters for the system’s response. Furthermore, due to the inherently “black-box” nature of neural networks, which lack explicit constraints based on physical laws, there are limitations in objectively describing physical phenomena.
To enhance the interpretability of DL models, Raissi et al. [12] proposed a PINN framework that embeds Partial Differential Equations (PDEs) or other differential equations directly into the neural network. PINNs integrate physical equations into the neural network, ensuring that the model learns not only from data but also remains consistent with the underlying physical laws, driven by both data and physical equations [13]. When simulating fluid system problems, the governing equations of fluid dynamics can be embedded into the neural network as prior knowledge. The model can then utilize spatiotemporal discrete data of flow field states and structural movements to reconstruct the changes in flow fields and structural responses [14]. Currently, substantial research has been conducted worldwide using PINN models to address fluid dynamics problems. For instance, Bararnia [15] extended the application of PINN to solve problems involving viscous and thermal boundary layers. They selected three benchmark problems for thermal boundary layers and explored the impact of equation nonlinearity and unbounded boundary conditions on adjusting the network’s width and depth to obtain reasonable solutions. Tang et al. [16] proposed a transfer learning-enhanced PINN that reconstructs flow field and structural information using varying numbers of training sets. They also introduced a stepwise iterative training strategy to train the PINN model, effectively reducing the model’s dependence on large datasets and lowering training costs. Similarly, Arzani et al. [17] combined mathematical equations of blood flow with measurement data, using PINN to reconstruct near-wall blood flow and wall shear stress in arterial flow. However, when applying PINN to ROM, gradient vanishing or exploding problems frequently occur. These issues often arise because the physical constraints—such as PDEs—embedded into the loss function during network training can introduce diversity and complex nonlinearities. This causes the gradient of the loss function to exhibit extremely large or small variations in certain regions. Additionally, when dealing with high-dimensional physical problems, the network may need to learn complex nonlinear mappings. The characteristics of these high-dimensional problems make neural networks prone to falling into “flat regions” or “saddle points”, leading to vanishing gradients, or encountering regions with extreme variations, resulting in sudden large increases in gradients. Consequently, the network fails to accurately learn from the data, leading to unsatisfactory results. Moreover, this increases both the training time and computational costs.
To address the aforementioned challenges, this study proposes PINNs based on residual modules, named Res-PINNs. The approach begins by designing a parallel neural network grounded in the concept of symmetry, which transmits sparse spatiotemporal state information of the flow field—divided across different dimensions—into the network. This improves both the accuracy and stability of the model when tackling complex problems. Additionally, residual neural network modules are incorporated into the model to mitigate gradient vanishing or explosion issues during training, thereby enhancing the reconstruction accuracy of high-dimensional flow field states and enabling more precise capture of the flow dynamics. To assess the effectiveness of the proposed model, it was applied to classical fluid dynamics problems, including flow around a cylinder and VIV. The results demonstrate that the model not only alleviates the gradient vanishing and explosion problems commonly encountered in traditional PINN training, but also achieves a more accurate flow field reconstruction. This substantially improves the model’s overall performance and generalization capability. The proposed approach enriches ROM methods and provides a fresh perspective on fluid system modeling and simulation. In comparison to existing research, this study makes three key contributions: (1) Embedding ResNet modules into the PINN framework: The study introduces ResNet modules into the PINN framework to address gradient vanishing and explosion issues during training, thereby enhancing the model’s stability and reliability. (2) Utilizing a parallel neural network architecture: This study adopts a symmetric parallel neural network architecture, which comprehensively learns data across multiple dimensions, improving the accuracy of the Res-PINNs model’s reconstruction and reducing errors. (3) Application to classical fluid dynamics problems: The Res-PINNs model is applied to classical fluid dynamics problems, such as flow around a cylinder and vortex-induced vibration, in the development of ROM for various scenarios. This application not only validates the model’s applicability and effectiveness but also provides an innovative solution for spacecraft system design, simulation validation, and iterative optimization, with significant practical implications.

2. Materials and Methods

This section provides an overview of the fundamental concepts underlying the PINN model, followed by the model proposed in this study and its improvements.

2.1. Neural Network Model

Neural network models, with their powerful nonlinear mapping and feature extraction capabilities, can approximate any bounded continuous function to arbitrary precision, as stated by the universal approximation theorem. This capability reduces the complexity of the original function, providing an effective approach for establishing ROM of systems. Among these models, the Fully Connected Neural Network (FCNN) is the most classical structure. An FCNN primarily consists of an input layer, an output layer, and one or more hidden layers. The neurons in these layers, along with their interconnections, form a directed graph representing the neural network:
$$ z_l = f_l\left(W_l^{T} z_{l-1} + b_l\right) \tag{1} $$
In this context, $z_l$ denotes the output of the $l$-th hidden layer, while $W_l$ and $b_l$ denote the weight matrix and bias vector of the $l$-th layer, respectively. The function $f_l$ is the nonlinear activation function; common choices include the Rectified Linear Unit (ReLU) and the sigmoid function. The activation function transforms a linear mapping into a nonlinear one, thereby enhancing the expressiveness of the neural network and improving its ability to handle nonlinear problems. A neural network model requires training only the weight and bias parameters, which reduces computational cost while maintaining the accuracy of the results, thus providing an approximate solution for complex problems.
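To make Equation (1) concrete, the following minimal sketch (in Python with NumPy) evaluates a single fully connected layer; the layer width, the tanh activation, and the random parameters are placeholder assumptions, not the settings used in this paper.

```python
import numpy as np

def dense_layer(z_prev, W, b, activation=np.tanh):
    """One fully connected layer: z_l = f_l(W_l^T z_{l-1} + b_l)."""
    return activation(W.T @ z_prev + b)

# Hypothetical sizes: 3 inputs (x, y, t), 20 hidden units.
rng = np.random.default_rng(0)
W1 = rng.standard_normal((3, 20))   # weight matrix of the first hidden layer
b1 = rng.standard_normal(20)        # bias vector of the first hidden layer
z0 = np.array([0.5, 0.1, 2.0])      # a single (x, y, t) input point
z1 = dense_layer(z0, W1, b1)        # activations of the first hidden layer
```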
When solving problems using a neural network, a black-box model is typically constructed to map the input-output relationship. The solving process can be represented as follows:
$$ U(x, t, \theta) \approx \hat{U}(x, t, \theta) = z_l(x, t, \theta, W, b) \tag{2} $$
Here, $\theta$ represents the parameters in the equation that need to be trained, $U(x, t, \theta)$ denotes the exact or true outcome, and $\hat{U}(x, t, \theta)$ represents the result computed by the neural network. This result is obtained by minimizing the loss function, which can be expressed as follows:
$$ L(W, b) = \frac{1}{N}\sum_{n=1}^{N}\left\lVert U(x_n, t_n, \theta) - z_l(x_n, t_n, \theta, W, b)\right\rVert^2, \qquad (W^*, b^*) = \underset{W,\, b}{\operatorname{argmin}}\; L(W, b) \tag{3} $$
Here, $L(W, b)$ represents the loss over the training data, $(W^*, b^*)$ denotes the weights and biases to be optimized during neural network training, and $N$ indicates the number of training data samples.
Although FCNNs have strong nonlinear representation capabilities and can learn complex nonlinear problems, increasing the model’s depth often leads to issues such as vanishing or exploding gradients. These problems cause network degradation during the later stages of training, making it difficult for the network to converge to an optimal solution. To address these challenges, He et al. [18] proposed the ResNet model, which achieved a breakthrough in training a very deep neural network, with hundreds of layers, by embedding shortcut connections into the FCNN structure. These shortcut connections allow the model to bypass one or more layers, enabling gradients to flow more easily through the network and mitigating the vanishing or exploding gradient problem. As a result, ResNet models can maintain training stability and improve performance even at significant depths.
The core of the ResNet model is its residual structure, as illustrated in Figure 1, which shows the output after passing through two hidden layers. In this structure, the input is defined as x and the output as H(x). The shortcut connection, a key component of the residual structure, adds the input of the network directly to the output obtained after passing through the hidden layers; this sum is then processed by an activation function. This design significantly alleviates the issues of vanishing and exploding gradients during training, providing a more robust foundation for the model's learning accuracy and convergence. By allowing gradients to flow more easily through the network, the shortcut connections enable deeper networks to be trained effectively. The introduction of ResNet has opened new directions for the development of deep neural networks, providing a powerful solution to the challenges encountered when training very deep networks.
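The residual structure described above can be sketched as follows; this is a simplified two-layer block under the assumption that the input and output widths match, so the identity shortcut can be added directly (the paper does not specify the exact block layout).

```python
import numpy as np

def residual_block(x, W1, b1, W2, b2, activation=np.tanh):
    """Two hidden layers plus a shortcut: H(x) = f(F(x) + x).
    The identity path lets gradients bypass the stacked layers,
    which is the mechanism ResNet uses against vanishing gradients."""
    h = activation(W1.T @ x + b1)   # first hidden layer
    f = W2.T @ h + b2               # second hidden layer (pre-activation)
    return activation(f + x)        # add the shortcut, then activate

# Hypothetical width of 20 so that the shortcut addition is well defined.
rng = np.random.default_rng(1)
x = rng.standard_normal(20)
W1, b1 = rng.standard_normal((20, 20)), rng.standard_normal(20)
W2, b2 = rng.standard_normal((20, 20)), rng.standard_normal(20)
out = residual_block(x, W1, b1, W2, b2)
```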

2.2. PINN Model

The PINN model is a type of neural network that incorporates physical knowledge into its framework, as shown in Figure 2. It enhances problem-solving capabilities by adding a physics-based loss term to the data loss function of an FCNN. This additional loss term is derived from physical equations, ensuring that the predictions made by the neural network conform to the laws of physics during training. In the study of transient fluid flow characteristics, the PINN can reconstruct changes in the flow field state by updating the neural network with additional fluid governing equations. This approach enables the network to account for the underlying physics governing fluid dynamics, leading to more accurate and physically consistent predictions. Its loss function can be defined as follows:
$$
\begin{aligned}
L_{PINN}(W, b) &= \lambda_{DATA} L_{DATA} + \lambda_{IC} L_{IC} + \lambda_{BC} L_{BC} + \lambda_{EQ} L_{EQ} \\
L_{IC} &= \frac{1}{N_{IC}} \sum_{n=1}^{N_{IC}} \left\lVert U_n - \hat{U}_n \right\rVert^2, \qquad
L_{BC} = \frac{1}{N_{BC}} \sum_{n=1}^{N_{BC}} \left\lVert U_n - \hat{U}_n \right\rVert^2 \\
L_{DATA} &= \frac{1}{N_{DATA}} \sum_{n=1}^{N_{DATA}} \left\lVert U_n - \hat{U}_n \right\rVert^2, \qquad
L_{EQ} = \frac{1}{N_r} \sum_{n=1}^{N_r} \left\lVert r_n \right\rVert^2
\end{aligned} \tag{4}
$$
Here, $\lambda_{DATA}$, $\lambda_{IC}$, $\lambda_{BC}$, and $\lambda_{EQ}$ are the weighting coefficients of the loss terms for the training data, initial conditions, boundary conditions, and the physical governing equations; these coefficients determine the relative importance of each component in the total loss function. $N_r$, $N_{IC}$, $N_{BC}$, and $N_{DATA}$ denote the numbers of training points for the corresponding terms, which may include observed values or measurements that the neural network learns from. $U_n$ and $\hat{U}_n$ represent the exact (reference) result and the result predicted by the neural network, respectively; the difference between them is minimized during training to improve the model's accuracy. $r_n$ denotes the residual of the physical governing equations evaluated at the n-th data point.
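A minimal sketch of how the composite loss in Equation (4) can be assembled is given below; the function signature and the equal default weights are illustrative assumptions rather than the paper's implementation.

```python
import numpy as np

def pinn_loss(u_pred_data, u_true_data, u_pred_ic, u_true_ic,
              u_pred_bc, u_true_bc, residuals,
              lam_data=1.0, lam_ic=1.0, lam_bc=1.0, lam_eq=1.0):
    """Weighted sum of data, initial-condition, boundary-condition,
    and PDE-residual mean-squared errors, mirroring Equation (4)."""
    mse = lambda a, b: np.mean((a - b) ** 2)
    L_data = mse(u_pred_data, u_true_data)   # supervised data term
    L_ic = mse(u_pred_ic, u_true_ic)         # initial-condition term
    L_bc = mse(u_pred_bc, u_true_bc)         # boundary-condition term
    L_eq = np.mean(residuals ** 2)           # PDE residuals r_n at collocation points
    return lam_data * L_data + lam_ic * L_ic + lam_bc * L_bc + lam_eq * L_eq
```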
While the PINN offers a promising approach for constructing an ROM of fluid systems, it often encounters network “degradation” during training [19]. This degradation occurs because the physical constraints involved typically exhibit high diversity and complex nonlinear characteristics, which can cause sharp fluctuations in the gradients of the loss function in certain regions, resulting in extreme maxima or minima. Such fluctuations can destabilize the gradients, leading to issues like vanishing or exploding gradients during the training process. Moreover, when dealing with high-dimensional physical problems, the PINN must learn complex nonlinear mappings, further increasing the difficulty of training. The nature of these high-dimensional problems makes the neural network prone to “flat regions” or “saddle points”, where the gradient values are very small or nearly zero, preventing the model from effectively learning the critical features of the input data. Conversely, in regions where the gradients suddenly increase, the model may over-adjust its parameters, leading to inaccurate learning outcomes. These phenomena not only reduce the model’s accuracy and stability but also significantly increase training time and computational costs. Therefore, to enhance the performance of PINNs, effective strategies must be employed to mitigate these challenges.

2.3. Res-PINNs Model

To effectively address the problem of performance degradation during neural network training and to achieve comprehensive and accurate extraction of flow field state information, this study proposes a multidimensional parallel PINN neural network, termed Res-PINNs. This model is based on the integration of the PINN and the ResNet model. The overall framework of the network structure is illustrated in Figure 3.
According to the findings of Jagtap [20], segmenting the training data can reduce the overall loss function value, allowing for better extraction of flow field features and more accurate predictions of system changes. Inspired by previous studies, this study proposes an innovative architecture based on a symmetric parallel neural network to address the challenges posed by the complexity of flow field data. The network structure consists of three symmetric parallel neural networks, each responsible for processing flow field data at a different dimensional level: low, middle, and high. This symmetric design avoids the computational cost and time overhead associated with training large-scale, high-dimensional data simultaneously, while significantly enhancing the stability and robustness of the model during training. During training for each dimension, the individual neural networks operate independently; thus, if one network encounters issues such as gradient vanishing or explosion, the other networks can continue training normally, preventing problems in one dimension from negatively affecting the entire model.

To further optimize data processing, this study employs a Random Split (RS) algorithm to partition the spatial grid coordinates (a simplified sketch is given below). The RS algorithm introduces randomness in partitioning by generating random indices based on a specified seed, which effectively reduces potential patterns or sequences in the data that could interfere with model training. In this study, the inflow velocity of the velocity field serves as a reference for partitioning, dividing the spatial grid coordinates into low-dimensional, middle-dimensional, and high-dimensional groups. During training, flow field state information is fed into the corresponding neural network models, which process the data according to their designated dimensional levels. To enhance the model's expressiveness and accuracy, this study incorporates a ResNet structure into the FCNN, which enables the network to capture the complex features of the flow field across different dimensions more effectively. The outputs from the networks at each dimensional level are integrated to represent the overall flow field state. This multidimensional training and output-integration strategy allows the model to capture the intricate features of the flow field more comprehensively and accurately, thus significantly improving its performance. The proposed approach effectively addresses the challenges of processing high-dimensional flow field data, demonstrating superior accuracy and stability compared to traditional methods.

The limited-memory Broyden–Fletcher–Goldfarb–Shanno (L-BFGS) optimization algorithm, a quasi-Newton method, is designed for solving large-scale unconstrained optimization problems. Unlike gradient descent algorithms, L-BFGS does not require a global learning rate; it automatically adjusts the step size at each iteration using second-order information, making the optimization process more robust and automated. In this study, the loss function, established using both training data and physical equations, is employed to compute the loss [21]. The Adam optimizer and the L-BFGS algorithm are used to perform backpropagation and parameter optimization, completing a single training cycle of the neural network. Notably, the loss values at the flow field boundary points are only trained in the low-dimensional neural network, and the initial and training data input into the three neural networks do not overlap.
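The following sketch illustrates one plausible reading of the Random Split partitioning described above: the grid points are shuffled with a fixed seed and then grouped by comparing the local streamwise velocity with the inflow velocity u0. The thresholds and the grouping rule are assumptions made for illustration; the paper does not state the exact criterion.

```python
import numpy as np

def random_split(coords, u, u0, seed=42):
    """Seeded random shuffle of the grid points, followed by bucketing
    into low/middle/high groups based on the streamwise velocity u
    relative to the inflow velocity u0 (assumed, simplified criterion)."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(coords))          # random indices from the seed
    coords, u = coords[idx], u[idx]
    low = coords[u < 0.5 * u0]                  # low-dimensional group
    mid = coords[(u >= 0.5 * u0) & (u < u0)]    # middle-dimensional group
    high = coords[u >= u0]                      # high-dimensional group
    return low, mid, high
```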
This neural network framework not only enables better extraction and learning of flow field feature information but also improves the model’s performance during loss computation, reducing occurrences of gradient vanishing and explosion during training. It offers a useful approach and perspective for developing reduced-order models and applications in fluid systems.

3. Application of Res-PINNs

This section applies the Res-PINNs model to two benchmark fluid problems and presents the experimental results, their interpretation, and the conclusions that can be drawn from them.

3.1. Application of Res-PINNs to the Flow Around a Cylinder Problem

The flow around a cylinder problem is a classic case in Computational Fluid Dynamics (CFD), where different physical phenomena emerge in the flow field at various Reynolds numbers. In this case study, the objective is to reconstruct the flow field state of a two-dimensional flow around a cylinder using limited, discrete flow field data and a neural network model embedded with physical knowledge. A ROM for the flow around the cylinder is established, and the reconstruction results of the proposed Res-PINNs model are compared with those of the standard PINN model to validate the effectiveness of the network structure. In the flow around a cylinder problem, the Navier–Stokes (N-S) equations and the continuity equation typically serve as constraints. For two-dimensional, unsteady, incompressible fluid flow, the governing equations are generally expressed as follows:
$$ \frac{\partial u}{\partial x} + \frac{\partial v}{\partial y} = 0 \tag{5} $$
$$ \frac{\partial u}{\partial t} + u\frac{\partial u}{\partial x} + v\frac{\partial u}{\partial y} + \frac{\partial p}{\partial x} - \frac{1}{Re}\left(\frac{\partial^2 u}{\partial x^2} + \frac{\partial^2 u}{\partial y^2}\right) = 0 \tag{6} $$
$$ \frac{\partial v}{\partial t} + u\frac{\partial v}{\partial x} + v\frac{\partial v}{\partial y} + \frac{\partial p}{\partial y} - \frac{1}{Re}\left(\frac{\partial^2 v}{\partial x^2} + \frac{\partial^2 v}{\partial y^2}\right) = 0 \tag{7} $$
Here, (x, y) are the spatial coordinates of points in the flow field, and t represents time. The variables u and v denote the velocity components of the fluid: u(x, y, t) is the velocity in the direction of the fluid inflow (denoted as IL), and v(x, y, t) is the velocity perpendicular to the inflow direction (denoted as CF). The variable p represents the absolute pressure at each spatial point in the flow field, p(x, y, t). The term Re stands for the Reynolds number, a dimensionless parameter characterizing the flow regime that indicates the ratio of inertial to viscous forces in the fluid.
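For reference, the PDE residuals of Equations (5)-(7) can be evaluated with automatic differentiation; the sketch below (TensorFlow, assuming a model that maps (x, y, t) to (u, v, p)) shows one common way to do this, not the authors' exact implementation.

```python
import tensorflow as tf

def ns_residuals(model, x, y, t, Re):
    """Residuals of the incompressible 2-D Navier-Stokes equations,
    computed with nested gradient tapes; x, y, t are 1-D tensors."""
    with tf.GradientTape(persistent=True) as g2:
        g2.watch([x, y, t])
        with tf.GradientTape(persistent=True) as g1:
            g1.watch([x, y, t])
            uvp = model(tf.stack([x, y, t], axis=1))
            u, v, p = uvp[:, 0], uvp[:, 1], uvp[:, 2]
        u_x, u_y, u_t = g1.gradient(u, x), g1.gradient(u, y), g1.gradient(u, t)
        v_x, v_y, v_t = g1.gradient(v, x), g1.gradient(v, y), g1.gradient(v, t)
        p_x, p_y = g1.gradient(p, x), g1.gradient(p, y)
    u_xx, u_yy = g2.gradient(u_x, x), g2.gradient(u_y, y)
    v_xx, v_yy = g2.gradient(v_x, x), g2.gradient(v_y, y)
    e1 = u_x + v_y                                            # continuity
    e2 = u_t + u * u_x + v * u_y + p_x - (u_xx + u_yy) / Re   # x-momentum
    e3 = v_t + u * v_x + v * v_y + p_y - (v_xx + v_yy) / Re   # y-momentum
    return e1, e2, e3
```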
In this case study, the numerical simulation method from CFD is used to obtain training data samples. The computational domain is illustrated in Figure 4, where the overall flow field region is a rectangular area. The left side serves as the velocity inlet, while the right side is the pressure outlet. The upper and lower boundaries are set with no-slip boundary conditions. The inlet velocity on the left is set to u0 = 0.09 m/s, and the absolute pressure at the right outlet is 0 Pa. The dynamic viscosity of the fluid is μ = 1.5 × 10−5 kg/(m·s), the density is set to 1 kg/m3, and the cylinder diameter is D = 0.05 m, resulting in a Reynolds number of Re = 300. The laminar flow model in Ansys Fluent was selected for the computational analysis.
An O-type grid method was employed for the grid partitioning of the flow field region, as shown in Figure 5. The grid cell size was specified as 7.5 mm, and the height of the first layer in the boundary layer grid was set to 2 mm, with a total of seven layers and a growth rate of 1.15. The computational domain was divided into 40,064 cells in total. According to the grid quality evaluation results, 98.08% of the grids were classified as high quality. This grid partitioning approach ensures both the accuracy and stability of the calculations, providing a solid foundation for subsequent numerical simulations. The simulation time step is set to 0.005 s, and the PISO algorithm is used for transient solution calculations.
After 50 s from the start of the simulation, a stable vortex shedding phenomenon is observed. The flow field state information between 60 and 65 s is selected as the data collection period for training purposes. A training dataset is constructed using time slices of 0.1 s each. The spatial coordinates at each time instant are discretized, and the Latin hypercube sampling (LHS) algorithm is used to sample the spatial coordinates of the flow field region. During random sampling of the data, each time slice is processed independently. For each time slice, 1000 coordinate points are randomly sampled, and the corresponding flow field state information is used as training data for input into the training model. The data are then reset to a time frame of 0 to 5 s. The sampling results are shown in Figure 6.
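The per-slice Latin hypercube sampling can be sketched as follows; the domain bounds below are placeholders, and SciPy's quasi-Monte Carlo sampler is used for illustration rather than the authors' own routine.

```python
import numpy as np
from scipy.stats import qmc

def sample_time_slice(x_range, y_range, n_points=1000, seed=0):
    """Latin hypercube sample of (x, y) points for one 0.1 s time slice."""
    sampler = qmc.LatinHypercube(d=2, seed=seed)
    unit = sampler.random(n=n_points)                # samples in [0, 1]^2
    return qmc.scale(unit, [x_range[0], y_range[0]],
                     [x_range[1], y_range[1]])       # map to the flow domain

# One independent sample per time slice over the reset 0-5 s window
# (hypothetical domain bounds in metres).
slices = {round(t, 1): sample_time_slice((-0.2, 0.8), (-0.25, 0.25), seed=i)
          for i, t in enumerate(np.arange(0.0, 5.01, 0.1))}
```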
For both the PINN model and the Res-PINNs model, the input consists of the spatiotemporal coordinates $(t_n, x_n, y_n)$ of the sparse flow field, and the output includes the velocity and pressure information $(p_n, u_n, v_n)$ at each coordinate point within the flow field. After transformation, the residual terms used in the loss function are expressed as follows:
$$
\begin{aligned}
e_1 &= \frac{\partial u}{\partial x} + \frac{\partial v}{\partial y} \\
e_2 &= \frac{\partial u}{\partial t} + u\frac{\partial u}{\partial x} + v\frac{\partial u}{\partial y} + \frac{\partial p}{\partial x} - \frac{1}{Re}\left(\frac{\partial^2 u}{\partial x^2} + \frac{\partial^2 u}{\partial y^2}\right) \\
e_3 &= \frac{\partial v}{\partial t} + u\frac{\partial v}{\partial x} + v\frac{\partial v}{\partial y} + \frac{\partial p}{\partial y} - \frac{1}{Re}\left(\frac{\partial^2 v}{\partial x^2} + \frac{\partial^2 v}{\partial y^2}\right)
\end{aligned} \tag{8}
$$
The total loss function is composed of two parts: the loss for the flow field state information and the loss for the governing control equations. The flow field state information includes the velocity components in the IL direction and CF direction, as well as the pressure information. The total loss function can be expressed as follows:
$$
\begin{aligned}
L_{data} &= \lambda_{IN}\frac{1}{N_{IN}}\sum_{n=1}^{N_{IN}} \left\lVert u(x_n, y_n, t_n) - u_n \right\rVert^2
+ \lambda_{IC}\frac{1}{N_{IC}}\sum_{n=1}^{N_{IC}} \left\lVert u(x_n, y_n, t_n) - u_n \right\rVert^2
+ \lambda_{BC}\frac{1}{N_{BC}}\sum_{n=1}^{N_{BC}} \left\lVert u(x_n, y_n, t_n) - u_n \right\rVert^2 \\
L_{equ} &= \lambda_{equ}\sum_{i=1}^{3}\sum_{n=1}^{N} \left\lVert e_i \right\rVert^2
\end{aligned} \tag{9}
$$
In the above expressions, $N_{IN}$, $N_{IC}$, and $N_{BC}$ denote the numbers of grid points within the flow field interior, at the initial time, and along the boundary, respectively. $u(x_n, y_n, t_n)$ and $u_n$ represent the result obtained from neural network training and the corresponding reference data. $L_{equ}$ denotes the residual loss of the governing equations. $\lambda_{IN}$, $\lambda_{IC}$, $\lambda_{BC}$, and $\lambda_{equ}$ are the loss weights for the interior-point data, the initial-time data, the boundary-point data, and the governing-equation residuals, respectively. Based on the analysis results from reference [22], the loss function weights for the Res-PINNs neural network are set to $\lambda_{IN} = 1$, $\lambda_{IC} = 1$, $\lambda_{BC} = 1$, $\lambda_{equ} = 75$, while the loss function weights for the PINN neural network are set to $\lambda_{IN} = 1$, $\lambda_{IC} = 1$, $\lambda_{BC} = 1$, $\lambda_{equ} = 150$. These loss functions are then incorporated into the neural networks for model training. To avoid overfitting during network training, a dropout layer was added before the network output to reduce the interdependence between neurons, with the dropout rate set to 0.3. The networks are trained using the machine learning framework TensorFlow on an NVIDIA GeForce RTX 3080 Ti GPU. After approximately 11 h of training, the model converged. Figure 7 shows the training loss as a function of the number of training steps. After 14,000 iterations, the training error converged to less than 0.01.
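A hedged sketch of one network branch and the Adam training step is shown below: the width, depth, and learning rate are assumed values, the ResNet blocks and the governing-equation loss are abbreviated, and the subsequent L-BFGS refinement stage is omitted.

```python
import tensorflow as tf

def build_branch(width=64, depth=6, n_out=3, dropout=0.3):
    """One branch of the parallel network: stacked dense layers with a
    dropout layer before the output, as described in the text."""
    inputs = tf.keras.Input(shape=(3,))          # (x, y, t)
    h = inputs
    for _ in range(depth):
        h = tf.keras.layers.Dense(width, activation="tanh")(h)
    h = tf.keras.layers.Dropout(dropout)(h)      # reduces neuron co-adaptation
    outputs = tf.keras.layers.Dense(n_out)(h)    # (p, u, v)
    return tf.keras.Model(inputs, outputs)

model = build_branch()
optimizer = tf.keras.optimizers.Adam(learning_rate=1e-3)

@tf.function
def train_step(xyt, target, lam_equ=75.0):
    with tf.GradientTape() as tape:
        pred = model(xyt, training=True)
        data_loss = tf.reduce_mean(tf.square(pred - target))
        # The governing-equation loss (sum of e_i^2 from Equation (8),
        # weighted by lam_equ) would be added here; omitted for brevity.
        loss = data_loss
    grads = tape.gradient(loss, model.trainable_variables)
    optimizer.apply_gradients(zip(grads, model.trainable_variables))
    return loss
```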
In this case study, the model's effectiveness is evaluated by reconstructing the flow field state at 5.5 s using sparse flow field information learned from 0 to 5 s. The reconstruction results are shown in Figure 8, where Figure 8a presents the reconstruction results using the PINN model and Figure 8b shows the reconstruction results using the Res-PINNs model. In both Figure 8a and Figure 8b, the first column displays the simulation results obtained from Ansys Fluent, which serve as a reference for evaluating the model's accuracy. The second column shows the reconstructed flow field results from the ROM. The third column illustrates the error between the reference simulation results and the ROM reconstructions. From top to bottom, each row represents the pressure field contour plot, the velocity field contour plot in the IL direction, and the velocity field contour plot in the CF direction.
Based on the computational analysis of the results, for the PINN model, the relative error in the pressure field reconstruction is 10.9%, the relative errors in the velocity fields along the IL and CF directions are 38.8% and 39.2%, respectively, and the larger velocity-field errors are mainly concentrated around the cylinder wall region and the wake region. This discrepancy may be due to gradient vanishing or exploding during training, causing minimal changes in the loss function value as the number of iterations increases; consequently, the network becomes less sensitive to learning the flow field states. In contrast, for the Res-PINNs model proposed in this study, the relative error in the pressure field is reduced to 7.8%, and the relative errors in the velocity fields along the IL and CF directions are significantly reduced to 4.1% and 6.3%, respectively. The Res-PINNs model demonstrates a more accurate reconstruction of the flow field state, particularly around the cylinder and in capturing vortex shedding features. This indicates that the model achieves more comprehensive learning of the flow characteristics, offering more reliable results for flow field modeling and reconstruction.
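The relative errors quoted above are consistent with a field-wise relative L2 norm; a small sketch of this assumed metric is given below.

```python
import numpy as np

def relative_l2_error(pred, ref):
    """Relative L2 error between a reconstructed field and the CFD reference."""
    return np.linalg.norm(pred - ref) / np.linalg.norm(ref)

# Example: relative_l2_error(p_rom, p_cfd) * 100 gives the percentage error.
```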
Overall, the results show that the Res-PINNs model significantly outperforms the standard PINN model in terms of accuracy and error reduction, particularly in regions with complex flow features. This highlights the effectiveness of the Res-PINNs framework in providing more accurate and stable solutions for fluid dynamics problems.
The analysis above indicates that the Res-PINNs model performs significantly better than the standard PINN model when dealing with complex flow fields. The Res-PINNs model demonstrates a more sensitive and comprehensive ability to capture state changes within the flow field. Although there are still some errors in certain regions of the flow field, overall, the Res-PINNs model exhibits a superior performance in flow field reconstruction compared to the standard PINN model.
Based on the reconstructed flow field data, the pressure and velocity gradient distributions on the cylinder surface can be integrated to compute the lift coefficient and drag coefficient of the cylinder. These coefficients are key parameters in characterizing the aerodynamic forces acting on the cylinder. The formulas for calculating these coefficients are as follows:
$$
\begin{aligned}
F_L &= \oint_{\Gamma} \left[ \frac{1}{Re}\left(\frac{\partial u}{\partial y} + \frac{\partial v}{\partial x}\right) n_x + \frac{2}{Re}\frac{\partial v}{\partial y}\, n_y - p\, n_y \right] \mathrm{d}s \\
F_D &= \oint_{\Gamma} \left[ \frac{1}{Re}\left(\frac{\partial u}{\partial y} + \frac{\partial v}{\partial x}\right) n_y + \frac{2}{Re}\frac{\partial u}{\partial x}\, n_x - p\, n_x \right] \mathrm{d}s
\end{aligned} \tag{10}
$$
In the equations, $\Gamma$ represents the boundary of the cylinder surface, and $n_x$ and $n_y$ are the components of the unit normal vector on the cylinder surface. The dimensionless formulas for calculating the lift coefficient and drag coefficient are as follows:
$$ C_{L/D} = \frac{F_{L/D}}{0.5\, \rho\, u_0^2\, D} \tag{11} $$
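As a quick numerical illustration of Equation (11) with the parameters of this case (rho = 1 kg/m3, u0 = 0.09 m/s, D = 0.05 m), the normalization can be written as follows; the example force value is hypothetical.

```python
rho, u0, D = 1.0, 0.09, 0.05      # density, inlet velocity, cylinder diameter

def force_coefficient(force_per_span):
    """Normalize a per-unit-span force into a lift or drag coefficient."""
    return force_per_span / (0.5 * rho * u0**2 * D)

# A hypothetical drag force of 2.0e-4 N/m gives C_D close to 0.99.
```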
Figure 9 illustrates the reconstruction results of the drag and lift coefficients based on the flow field data. In the figure, the black line represents the results obtained from CFD simulation, the blue line indicates the predicted values from the Res-PINNs model, and the red line shows the predicted results from the PINN model. For the prediction of CD, the Res-PINNs ROM results show a smaller error relative to the CFD simulation results, whereas the PINN model shows a larger amplitude difference. This may be due to the gradient vanishing problem in the PINN model during the flow field reconstruction process, making it less sensitive to changes in the flow field state around the cylinder wall and preventing it from learning the relevant flow characteristics, which leads to biased predictions in subsequent steps; the Res-PINNs predictions are closer to the CFD simulation results. For the prediction of CL, both the PINN and Res-PINNs models show some phase differences compared to the CFD simulation results; however, the Res-PINNs model, being more sensitive to changes in the flow field gradients, yields predictions that are closer to the CFD simulation results than the PINN model. The above analysis leads to the conclusion that, compared to the PINN model, the flow field state reconstructed by the Res-PINNs model is in better agreement with the CFD simulation results.

3.2. Application of Res-PINNs to the Vortex-Induced Vibration Problem

VIV is a coupled fluid–structure interaction problem that occurs when a structure is placed in a flow field at a certain velocity. Periodic pulsating fluid forces along the flow direction and perpendicular to it cause the elastic structure to undergo periodic oscillations. This interaction alters the vortex shedding pattern of the fluid, resulting in coupled variations in fluid flow, structural displacement, and forces acting on the structure.
In this case study, the Reynolds averaged Navier–Stokes (RANS) equations are embedded into the Res-PINNs framework to solve high Reynolds number VIV problems, validating the model’s effectiveness. An incompressible fluid is governed by the N-S equations and the mass conservation equation, as detailed in Section 3.1. However, as the Reynolds number increases, turbulence becomes significant, which not only raises the computational cost but can also lead to ill-conditioned matrices during numerical solutions. To address the challenges posed by turbulence, the RANS equations are introduced. The RANS equations provide a time-averaged approach to deal with turbulent flow, reducing the complexity of modeling turbulence while retaining the essential flow characteristics. The equations are expressed as follows:
$$
\begin{aligned}
\frac{\partial \bar{u}}{\partial t} + \bar{u}\frac{\partial \bar{u}}{\partial x} + \bar{v}\frac{\partial \bar{u}}{\partial y} &= -\frac{\partial \bar{p}}{\partial x} + (\nu + \nu_t)\left(\frac{\partial^2 \bar{u}}{\partial x^2} + \frac{\partial^2 \bar{u}}{\partial y^2}\right) \\
\frac{\partial \bar{v}}{\partial t} + \bar{u}\frac{\partial \bar{v}}{\partial x} + \bar{v}\frac{\partial \bar{v}}{\partial y} &= -\frac{\partial \bar{p}}{\partial y} + (\nu + \nu_t)\left(\frac{\partial^2 \bar{v}}{\partial x^2} + \frac{\partial^2 \bar{v}}{\partial y^2}\right) \\
\frac{\partial \bar{u}}{\partial x} + \frac{\partial \bar{v}}{\partial y} &= 0
\end{aligned} \tag{12}
$$
Here, $\nu_t$ represents the turbulent eddy viscosity coefficient. The RANS equations simulate turbulence by establishing a relationship between the eddy viscosity coefficient and the time-averaged parameters of the turbulence. The structural vibration can be modeled using a typical mass-spring-damper system. The equations of motion for the cylinder in the IL direction and the CF direction can be expressed as follows:
$$ m\ddot{\varsigma} + c\dot{\varsigma} + k\varsigma = F_D(t), \qquad m\ddot{\eta} + c\dot{\eta} + k\eta = F_L(t) \tag{13} $$
In these equations, m represents the mass of the cylinder, and c and k denote its damping coefficient and stiffness coefficient, respectively. $F_L(t)$ and $F_D(t)$ represent the lift and drag forces, and $\varsigma$ and $\eta$ indicate the displacements along the IL and CF directions, respectively. At the initial state, the initial conditions for the cylinder are as follows:
$$ \dot{\varsigma}(0) = \dot{\eta}(0) = 0, \qquad \varsigma(0) = \eta(0) = 0 \tag{14} $$
By discretizing the structural equations using the fourth-order Runge–Kutta method, the velocity and displacement responses of the cylinder can be solved. For solving the velocity and displacement in the IL direction, the equations can be expressed as follows:
$$
\begin{aligned}
\dot{\varsigma}(t_{n+1}) &= \dot{\varsigma}(t_n) + \frac{\Delta t}{6}\left(k_1 + 2k_2 + 2k_3 + k_4\right) \\
\varsigma(t_{n+1}) &= \varsigma(t_n) + \dot{\varsigma}(t_n)\,\Delta t + \frac{\Delta t^2}{6}\left(k_1 + k_2 + k_3\right)
\end{aligned} \tag{15}
$$
Among them,
$$
\begin{aligned}
k_1 &= \frac{F_D(t_n)}{m} - \frac{c}{m}\dot{\varsigma}(t_n) - \frac{k}{m}\varsigma(t_n) \\
k_2 &= \frac{F_D(t_n)}{m} - \frac{c}{m}\left[\dot{\varsigma}(t_n) + \frac{\Delta t}{2}k_1\right] - \frac{k}{m}\left[\varsigma(t_n) + \frac{\Delta t}{2}\dot{\varsigma}(t_n)\right] \\
k_3 &= \frac{F_D(t_n)}{m} - \frac{c}{m}\left[\dot{\varsigma}(t_n) + \frac{\Delta t}{2}k_2\right] - \frac{k}{m}\left[\varsigma(t_n) + \frac{\Delta t}{2}\dot{\varsigma}(t_n) + \frac{\Delta t^2}{4}k_1\right] \\
k_4 &= \frac{F_D(t_n)}{m} - \frac{c}{m}\left[\dot{\varsigma}(t_n) + \frac{\Delta t}{2}k_3\right] - \frac{k}{m}\left[\varsigma(t_n) + \frac{\Delta t}{2}\dot{\varsigma}(t_n) + \frac{\Delta t^2}{2}k_2\right]
\end{aligned} \tag{16}
$$
In these formulas, $k_1$, $k_2$, $k_3$, and $k_4$ are the coefficients used in the Runge–Kutta method, and $\Delta t$ represents the simulation time step. The displacement and velocity responses along the CF direction are solved in a manner analogous to the equations for the IL direction.
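For comparison, the structural update can also be written in the standard state-space form of the classical fourth-order Runge-Kutta method; the sketch below integrates m x'' + c x' + k x = F over one step with the force held constant, which differs slightly in bookkeeping from the expanded coefficients of Equation (16).

```python
def rk4_step(disp, vel, force, m, c, k, dt):
    """One RK4 step for the cylinder's IL (or CF) motion,
    m*x'' + c*x' + k*x = F, with F held constant over dt."""
    def accel(x, v):
        return (force - c * v - k * x) / m

    k1x, k1v = vel,                accel(disp, vel)
    k2x, k2v = vel + 0.5*dt*k1v,   accel(disp + 0.5*dt*k1x, vel + 0.5*dt*k1v)
    k3x, k3v = vel + 0.5*dt*k2v,   accel(disp + 0.5*dt*k2x, vel + 0.5*dt*k2v)
    k4x, k4v = vel + dt*k3v,       accel(disp + dt*k3x, vel + dt*k3v)

    disp_next = disp + dt / 6.0 * (k1x + 2*k2x + 2*k3x + k4x)
    vel_next = vel + dt / 6.0 * (k1v + 2*k2v + 2*k3v + k4v)
    return disp_next, vel_next

# Parameters from Section 3.2: m = 2 kg, c = 0.084, k = 2.2020, dt = 0.005 s.
```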
In the VIV case, CFD simulation is used to obtain the training data. The overall flow field is modeled and simulated in Ansys Fluent, and a schematic diagram of the entire flow field region is shown in Figure 10. The structure is a two-dimensional, 2-DOF cylindrical body supported by a spring. The left side is a velocity inlet with a magnitude of u0 = 1 m/s, the right side is a pressure outlet at 0 MPa absolute pressure, the upper and lower boundaries are no-slip boundaries, and the cylinder surface is a no-slip wall. The overset mesh region is set to a size of 3D, and an unstructured O-type mesh is used to divide the refined region. When the mesh is updated, the internal region of the O-type mesh remains unchanged during the movement of the cylinder, while the area outside the O-type mesh changes shape. The mesh division results are shown in Figure 11. The size of the entire computational domain is 18D × 12D. The grid cell size was set to 3 mm, with the height of the first layer set to 1 mm and a growth rate of 1.3. The grid consisted of 60,040 cells in total, with 97.32% of the grids categorized as high quality according to the grid quality evaluation results. The Reynolds number is set to Re = 1000, with a fluid density of 1 kg/m3 and a dynamic viscosity of μ = 0.5 × 10−4 kg/(m·s). The cylinder's mass is set to m = 2 kg, the damping coefficient to c = 0.084, and the stiffness to k = 2.2020.

During the simulation, the velocity, pressure, and other information at each grid point in the flow field are first computed, and the forces acting on the cylinder surface are calculated. This force information is then transferred to a compiled UDF, where the fourth-order Runge–Kutta method is used to solve the equations of motion for the structure. This step determines the displacement and velocity of the cylinder at the next time step, and the results are fed back into Fluent to update the flow field state for the subsequent time-step calculations; the simulation time step is set to 0.005 s. The simulation uses the SST k−ω turbulence model, primarily because this model provides relatively accurate turbulence predictions in complex flow problems, especially in the near-wall region and in capturing flow separation, and, compared with DNS and experiment, it offers a reasonable trade-off between computational efficiency and accuracy.
Once the VIV motion of the cylinder reaches a steady state, the velocity field, pressure field, and cylinder motion information are extracted from the CFD simulation results as training data. The period from 20 to 25 s, after the motion has stabilized, is selected as the training dataset, with a time sampling interval of 0.1 s. To ensure that the data are sufficient to support model training, we focused on their diversity and coverage during data collection: a Latin hypercube sampling method is employed to sample the flow field region, and 7000 data points are chosen as training data for each time step [1].
In this case, the ROM based on Res-PINNs is used to reconstruct the entire flow field state from sparse flow field information. The neural network adopts the same architecture as in the previous case, with the network input being the spatiotemporal data $(t_n, x_n, y_n)$ of the sparse flow field. When solving for VIV, due to the coupling between the structure and the fluid, the structural vibration is also considered as a feature for both training and prediction. Additionally, the turbulent eddy viscosity coefficient is treated as an unknown parameter to be solved, making the network output $(p_n, u_n, v_n, \varsigma_n, \eta_n, \nu_t)$. For simplicity, the loss function of this network omits the overline notation from Equation (12). By embedding the horizontal and vertical displacements of the cylinder into Equation (12), the equation loss terms are formulated as shown in Equation (17).
$$
\begin{aligned}
e_1 &= \frac{\partial u}{\partial t} + u\frac{\partial u}{\partial x} + v\frac{\partial u}{\partial y} + \frac{\partial p}{\partial x} - (\nu + \nu_t)\left(\frac{\partial^2 u}{\partial x^2} + \frac{\partial^2 u}{\partial y^2}\right) + \frac{\partial^2 \varsigma}{\partial t^2} \\
e_2 &= \frac{\partial v}{\partial t} + u\frac{\partial v}{\partial x} + v\frac{\partial v}{\partial y} + \frac{\partial p}{\partial y} - (\nu + \nu_t)\left(\frac{\partial^2 v}{\partial x^2} + \frac{\partial^2 v}{\partial y^2}\right) + \frac{\partial^2 \eta}{\partial t^2} \\
e_3 &= \frac{\partial u}{\partial x} + \frac{\partial v}{\partial y}
\end{aligned} \tag{17}
$$
The overall loss function of the neural network model is the sum of the training data loss and the equation loss, as shown in Equation (18).
$$
\begin{aligned}
L_{data} &= \sum_{n=1}^{N}\left\lVert u(x_n, y_n, t_n) - u_n\right\rVert^2 + \sum_{n=1}^{N}\left\lVert v(x_n, y_n, t_n) - v_n\right\rVert^2 + \sum_{n=1}^{N}\left\lVert p(x_n, y_n, t_n) - p_n\right\rVert^2 \\
&\quad + \sum_{n=1}^{N}\left\lVert \nu_t(x_n, y_n, t_n) - \nu_{t,n}\right\rVert^2 + \sum_{n=1}^{N}\left\lVert \eta(t_n) - \eta_n\right\rVert^2 + \sum_{n=1}^{N}\left\lVert \varsigma(t_n) - \varsigma_n\right\rVert^2 \\
L_{equ} &= \sum_{i=1}^{3}\sum_{n=1}^{N}\left\lVert e_i\right\rVert^2
\end{aligned} \tag{18}
$$
where $u(x_n, y_n, t_n)$, $v(x_n, y_n, t_n)$, $p(x_n, y_n, t_n)$, $\nu_t(x_n, y_n, t_n)$, $\varsigma(t_n)$, and $\eta(t_n)$ represent the outputs of the trained neural network. The loss is calculated, and parameter updates are performed through backpropagation using the Adam optimizer and the L-BFGS optimization method; this completes one training iteration of the neural network. Similarly, to prevent overfitting, a dropout layer was added before the network output, with the dropout rate set to 0.5. The training is conducted using the same simulation environment and configuration as in Section 3.1. After 150,000 iterations, the network converges, and the loss calculation results are shown in Figure 12.
The effectiveness of the model is evaluated by comparing the flow field reconstruction results at the 30 s mark using the ROM of Res-PINNs and PINN. The results are presented in Figure 13, where Figure 13a shows the reconstruction from the PINN model, and Figure 13b displays the reconstruction from the Res-PINNs model. In both figures, the first column provides the Ansys Fluent simulation results as a reference, the second column shows the reconstructed results from the ROM, and the third column illustrates the error between the reference simulation and the reconstructed results. From top to bottom, the rows represent the pressure field contour plot, the velocity field contour plot along the IL direction, and the velocity field contour plot along the CF direction. Calculations of the prediction results for the two models reveal that the velocity field reconstructed by the Res-PINNs model shows errors of 17.7% and 17.2% relative to the CFD simulation results, while the errors for the PINN model are 40.3% and 30.8%, respectively. For the pressure field, the reconstruction errors of the Res-PINNs and PINN models are 14% and 35.1%, respectively. The areas with significant errors are primarily located in the O-type mesh and its vicinity. This discrepancy is likely due to the limitations of the PINN model’s network architecture, which cannot fully analyze high-dimensional and nonlinear problems, resulting in a reduced ability to learn the flow field state features. In contrast, the Res-PINNs model, with its more complex network structure, can more accurately capture the velocity distribution of the flow field. The reconstruction results indicate that the Res-PINNs model outperforms the PINN model in the wake region, mitigating the shortcomings found in the PINN model.
Figure 14 shows the reconstruction results for the lift coefficient, drag coefficient, and structural displacement of the flow field from 25 to 28 s. In the figure, the black line represents the exact values obtained from the CFD simulation, the blue line represents the reconstruction results from the Res-PINNs model, and the red line represents the reconstruction results from the PINN model. Analysis of the reconstruction results reveals significant differences between the reconstructed structural vibration responses and the CFD simulation results. These discrepancies are more noticeable at the peaks and troughs of the response states, and there is a clear phase difference when compared to the CFD results. However, the overall trends remain consistent with the CFD results. A comparison with other studies [23,24] suggests that this discrepancy may result from the relatively low resolution of the data slicing, which impedes the model’s ability to capture the dynamic coupling between the flow field and the structure. Consequently, the model fails to accurately capture the transient behavior of the flow field and structural dynamics, particularly during more intense dynamic responses. The insufficient temporal information further causes phase deviations in the prediction results along the time axis, impairing the model’s ability to accurately track changes in the flow field. This discrepancy suggests that a greater amount of time-sliced data might be required for the model to learn and compute more accurately when constructing the training dataset. For the reconstruction of the lift and drag coefficients, the Res-PINNs model provides results that are more accurate and closer to the CFD simulation outcomes compared to the PINN model.
Based on the above analysis, it can be concluded that both the Res-PINNs and PINN models exhibit significant differences from the CFD simulation results when reconstructing the structural response in VIV problems. However, for the reconstruction of the lift and drag coefficients, the Res-PINNs model demonstrates superior performance.

4. Conclusions

This paper proposes a parallel Res-PINNs neural network, building on the PINN-based ROM, to achieve fast simulation and flow field reconstruction for fluid systems. Compared to traditional PINN models, the proposed Res-PINNs model integrates ResNet modules to address the issues of gradient vanishing or exploding that can occur during the training of FCNNs. Additionally, the network structure has been optimized with a parallel network training strategy to better extract and learn flow field state information, thereby enhancing the quality of flow field learning and reconstruction. To validate the performance of the proposed Res-PINNs neural network, two cases were used for verification: a two-dimensional cylinder flow and a 2-DOF VIV case at a high Reynolds number. The results were compared with predictions from the traditional PINN neural network. The findings indicate the following:
(1)
For the two-dimensional cylinder flow problem, the Res-PINNs model predicted flow field velocity and pressure errors of 4.1%, 6.3%, and 7.8%, respectively, compared to 38.8%, 39.2%, and 10.9% for the PINN model. Additionally, the Res-PINNs model provided more accurate reconstruction results for the lift and drag coefficients, demonstrating its superior ability to handle complex flow fields and capture variations in flow field states more comprehensively and sensitively.
(2)
For the 2-DOF VIV case at high Reynolds numbers, the Res-PINNs model, which incorporates the RANS equations, the eddy viscosity coefficient, and structural motion equations, accurately predicted the VIV state information. The Res-PINNs model showed flow field velocity and pressure errors of 17.7%, 17.2%, and 14%, respectively, compared to 40.3%, 30.8%, and 35.1% for the PINN model. These results validate the effectiveness of the Res-PINNs model, although there remains room for improvement in reconstructing structural displacement responses accurately.
(3)
Moreover, for the same number of time steps, the Res-PINNs model requires only one-twentieth of the simulation time of numerical simulations, significantly enhancing computational efficiency.

5. Discussion

The proposed Res-PINNs model demonstrates significant potential in tackling challenges in fluid system modeling and simulation. However, the flow field reconstruction problem itself exhibits strong nonlinear characteristics, especially when the coupling effects between the fluid and structure are complex. Simplified ROMs may fail to capture all key dynamics, leading to an increase in error. Additionally, the quality and coverage of the training data are crucial factors influencing the model’s accuracy. The sparsity of data slices and the insufficient number of data points may prevent the network from accurately capturing the transient behaviors of complex flow fields, particularly in cases with significant nonlinear dynamic responses. Thus, the prediction accuracy and generalization capability of the model still need further refinement. In future work, we will focus on optimizing the model structure to achieve more accurate flow field state distributions. While this study is limited to cylinder flow and VIV cases, future research should explore more complex problems to enhance the model’s robustness and better address the practical demands of engineering applications.

Author Contributions

Conceptualization, Y.L.; Data curation, R.Z.; Formal analysis, Y.L.; Investigation, P.W.; Methodology, Y.L.; Resources, J.J.; Software, Y.L.; Supervision, R.Z.; Validation, J.H.; Visualization, R.Z.; Writing—original draft, Y.L.; Writing—review and editing, Y.L. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by China State Administration of Science, Technology, and Industry for National Defense Civil Aerospace Project, grant number D020101.

Data Availability Statement

The data presented in this study are available on request from the corresponding author. The data involve sensitive information related to confidential aspects of the research project and therefore cannot be shared at this stage.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

PINNs: Physics-Informed Neural Networks
ROM: Reduced-Order Model
Res-PINNs: Physics-Informed Neural Networks based on Residual Networks
ResNet: Residual Network
VIV: Vortex-Induced Vibration
M&S: Modeling and Simulation
POD: Proper Orthogonal Decomposition
DMD: Dynamic Mode Decomposition
AI: Artificial Intelligence
DL: Deep Learning
RNN: Recurrent Neural Network
DNN: Deep Neural Network
LSTM: Long Short-Term Memory
PDEs: Partial Differential Equations
FCNN: Fully Connected Neural Network
ReLU: Rectified Linear Unit
RS: Random Split
L-BFGS: Limited-memory Broyden–Fletcher–Goldfarb–Shanno
CFD: Computational Fluid Dynamics
N-S: Navier–Stokes
LHS: Latin Hypercube Sampling
RANS: Reynolds-Averaged Navier–Stokes

References

  1. Ding, H.; Shu, C.; Yeo, K.; Xu, D. Simulation of incompressible viscous flows past a circular cylinder by hybrid FD scheme and meshless least square-based finite difference method. Comput. Methods Appl. Mech. Eng. 2004, 193, 727–744. [Google Scholar] [CrossRef]
  2. Liu, F.; Zheng, X. A Strongly Coupled Time-Marching Method for Solving the Navier–Stokes and k-ω Turbulence Model Equations with Multigrid. J. Comput. Phys. 1996, 128, 289–300. [Google Scholar] [CrossRef]
  3. Lucia, D.J.; Beran, P.S.; Silva, W.A. Reduced-order modeling: New approaches for computational physics. Prog. Aerosp. Sci. 2004, 40, 51–117. [Google Scholar] [CrossRef]
  4. Li, K.; Kou, J.; Zhang, W. Unsteady aerodynamic reduced-order modeling based on machine learning across multiple airfoils. Aerosp. Sci. Technol. 2021, 119, 107173. [Google Scholar] [CrossRef]
  5. Fresca, S.; Dede’, L.; Manzoni, A. A comprehensive deep learning-based approach to reduced order modeling of nonlinear time-dependent parametrized PDEs. J. Sci. Comput. 2021, 87, 61. [Google Scholar] [CrossRef]
  6. Brahmachary, S.; Bhagyarajan, A.; Ogawa, H. Fast estimation of internal flowfields in scramjet intakes via reduced-order modeling and machine learning. Phys. Fluids 2021, 33, 106110. [Google Scholar] [CrossRef]
  7. Jana, A.; Mitra, A.S.; Das, S.; Chueh, W.C.; Bazant, M.Z.; García, R.E. Physics-based, reduced order degradation model of lithium-ion batteries. J. Power Sources 2022, 545, 231900. [Google Scholar] [CrossRef]
  8. Janda, T.; Schmidt, J.; Hála, P.; Konrád, P.; Zemanová, A.; Sovják, R.; Zeman, J.; Šejnoha, M. Reduced order models of elastic glass plate under low velocity impact. Comput. Struct. 2021, 244, 106430. [Google Scholar] [CrossRef]
  9. Halder, R.; Damodaran, M.; Khoo, B. Deep learning based reduced order model for airfoil-gust and aeroelastic interaction. AIAA J. 2020, 58, 4304–4321. [Google Scholar] [CrossRef]
  10. Zhang, M.; Fu, S.; Ren, H.; Ma, L.; Xu, Y. A hybrid FEM-DNN-based vortex-induced Vibration Prediction Method for Flexible Pipes under oscillatory flow in the time domain. Ocean Eng. 2022, 246, 110488. [Google Scholar] [CrossRef]
  11. Li, K.; Kou, J.; Zhang, W. Deep neural network for unsteady aerodynamic and aeroelastic modeling across multiple Mach numbers. Nonlinear Dyn. 2019, 96, 2157–2177. [Google Scholar] [CrossRef]
  12. Raissi, M.; Perdikaris, P.; Karniadakis, G.E. Physics-informed neural networks: A deep learning framework for solving forward and inverse problems involving nonlinear partial differential equations. J. Comput. Phys. 2019, 378, 686–707. [Google Scholar] [CrossRef]
  13. Raissi, M.; Wang, Z.; Triantafyllou, M.S.; Karniadakis, G.E. Deep learning of vortex-induced vibrations. J. Fluid Mech. 2019, 861, 119–137. [Google Scholar] [CrossRef]
  14. Wang, J.-X.; Wu, J.; Ling, J.; Iaccarino, G.; Xiao, H. A comprehensive physics-informed machine learning framework for predictive turbulence modeling. arXiv 2017, arXiv:1701.07102. [Google Scholar]
  15. Bararnia, H.; Esmaeilpour, M. On the application of physics informed neural networks (PINN) to solve boundary layer thermal-fluid problems. Int. Commun. Heat Mass Transf. 2022, 132, 105890. [Google Scholar] [CrossRef]
  16. Tang, H.; Liao, Y.; Yang, H.; Xie, L. A transfer learning-physics informed neural network (TL-PINN) for vortex-induced vibration. Ocean Eng. 2022, 266, 113101. [Google Scholar] [CrossRef]
  17. Arzani, A.; Wang, J.-X.; D’Souza, R.M. Uncovering near-wall blood flow from sparse data with physics-informed neural networks. Phys. Fluids 2021, 33, 071905. [Google Scholar] [CrossRef]
  18. He, K.; Zhang, X.; Ren, S.; Sun, J. Deep residual learning for image recognition. In Proceedings of the IEEE conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 770–778. [Google Scholar]
  19. Zhang, G.; Yang, H.; Pan, G.; Duan, Y.; Zhu, F.; Chen, Y. Constrained Self-Adaptive Physics-Informed Neural Networks with ResNet Block-Enhanced Network Architecture. Mathematics 2023, 11, 1109. [Google Scholar] [CrossRef]
  20. Jagtap, A.D.; Karniadakis, G.E. Extended physics-informed neural networks (XPINNs): A generalized space-time domain decomposition based deep learning framework for nonlinear partial differential equations. Commun. Comput. Phys. 2020, 28, 2002–2041. [Google Scholar] [CrossRef]
  21. Pratama, D.A.; Abo-Alsabeh, R.R.; Bakar, M.A.; Salhi, A.; Ibrahim, N.F. Solving partial differential equations with hybridized physic-informed neural network and optimization approach: Incorporating genetic algorithms and L-BFGS for improved accuracy. Alex. Eng. J. 2023, 77, 205–226. [Google Scholar] [CrossRef]
  22. Kag, V.; Seshasayanan, K.; Gopinath, V. Physics-informed data based neural networks for two-dimensional turbulence. Phys. Fluids 2022, 34, 055130. [Google Scholar] [CrossRef]
  23. Hijazi, S.; Freitag, M.; Landwehr, N. POD-Galerkin reduced order models and physics-informed neural networks for solving inverse problems for the Navier–Stokes equations. Adv. Model. Simul. Eng. Sci. 2023, 10, 5. [Google Scholar] [CrossRef]
  24. Xiao, Y.; Yang, L.; Du, Y.; Song, Y.; Shu, C. Radial basis function-differential quadrature-based physics-informed neural network for steady incompressible flows. Phys. Fluids 2023, 35, 073607. [Google Scholar] [CrossRef]
Figure 1. Structure of the ResNet neural network.
Figure 2. Structure of the PINN neural network model.
Figure 3. Structure of the Res-PINNs neural network.
Figure 4. Computational domain for the flow around a cylinder problem.
Figure 5. Mesh generation results for the flow field.
Figure 6. Results of LHS.
Figure 7. Results of the model loss function.
Figure 8. Flow field reconstruction results of PINN and Res-PINNs model. (a) PINN reconstruction results; (b) Res-PINNs reconstruction results.
Figure 9. Reconstruction results of flow field lift and drag: (a) CD reconstruction results; (b) CL reconstruction results.
Figure 10. Computational domain for the VIV problem.
Figure 11. Mesh division results of the VIV flow field.
Figure 12. Model loss calculation and results.
Figure 13. Flow field reconstruction results. (a) Reconstruction results using PINN; (b) reconstruction results using Res-PINNs.
Figure 14. Displacement response and reconstruction results.