Review

Applications of Long Short-Term Memory (LSTM) Networks in Polymeric Sciences: A Review

1 Artificial Intelligence Technology Scientific and Education Center, Bauman Moscow State Technical University, Moscow 105005, Russia
2 Scientific Department, Far Eastern Federal University, Vladivostok 690922, Russia
* Authors to whom correspondence should be addressed.
Polymers 2024, 16(18), 2607; https://doi.org/10.3390/polym16182607
Submission received: 18 August 2024 / Revised: 11 September 2024 / Accepted: 13 September 2024 / Published: 14 September 2024
(This article belongs to the Special Issue Computational and Experimental Approaches in Polymeric Materials)

Abstract

This review explores the application of Long Short-Term Memory (LSTM) networks, a specialized type of recurrent neural network (RNN), in the field of polymeric sciences. LSTM networks have shown notable effectiveness in modeling sequential data and predicting time-series outcomes, which are essential for understanding complex molecular structures and dynamic processes in polymers. This review delves into the use of LSTM models for predicting polymer properties, monitoring polymerization processes, and evaluating the degradation and mechanical performance of polymers. Additionally, it addresses the challenges related to data availability and interpretability. Through various case studies and comparative analyses, the review demonstrates the effectiveness of LSTM networks in different polymer science applications. Future directions are also discussed, with an emphasis on real-time applications and the need for interdisciplinary collaboration. The goal of this review is to connect advanced machine learning (ML) techniques with polymer science, thereby promoting innovation and improving predictive capabilities in the field.

1. Introduction

1.1. Purpose of the Review

The convergence of machine learning (ML) and material science [1,2,3] has opened new avenues for research and application. This review aims to explore the integration of Long Short-Term Memory (LSTM) networks in polymeric sciences, focusing on their application in predicting and modeling polymer properties and processes.
The number of research articles that discuss the application of LSTM networks [4,5] in the field of polymers has seen an increase over recent years. Initially, the intersection of these two fields was relatively unexplored, but with the growing interest in applying ML to material sciences, more studies have been published. The earliest relevant publications started to appear around 2020, when LSTM networks began gaining popularity for their ability to handle sequential data, which is important in modeling time-series and dynamic processes in polymer science. Since then, the number of articles has grown steadily each year, with noticeable increases around 2022–2024 as more researchers began exploring advanced ML techniques, including LSTM, to predict polymer properties, monitor processes, and assess performance. Based on Mendeley data, there are currently 44 articles [6] with the terms “LSTM” and “polymers” in the title or abstract, highlighting the growing interest in the intersection of these fields.
The structure of this review is as follows: Section 1.2 provides a detailed description of the LSTM architecture. Section 2 explores the applications of LSTM models to polymers. Section 3 discusses the challenges associated with data acquisition and the interpretability of models. Section 4 outlines potential directions for future research. Finally, Section 5 presents the conclusions.

1.2. LSTM Overview

LSTM networks were introduced by Sepp Hochreiter and Jürgen Schmidhuber in 1997 [7] as a solution to the limitations of traditional RNNs [8], specifically the vanishing and exploding gradient problems that arise during backpropagation through time [9] (BPTT). Their unique architecture allows them to retain and utilize information over extended periods, making them suitable for time-series prediction and modeling dynamic systems. These issues make it difficult for standard RNNs to learn and retain long-term dependencies, an essential aspect for tasks involving sequential data.
The core idea behind LSTM is the introduction of memory cells [10], which can maintain their state over time, and a gating mechanism to control the flow of information. This structure allows LSTM models to mitigate the gradient issues by ensuring that gradients can propagate more effectively over long sequences. An architecture diagram is shown in Figure 1.
Mathematically, consider a traditional RNN, where the hidden state $h_t$ at time step $t$ is given by
$$h_t = \tanh(W_h h_{t-1} + W_x x_t + b)$$
During backpropagation, the gradient $\partial L / \partial h_t$, where $L$ is the loss function, is computed [11]. For long sequences, the recursive multiplication of gradients leads to either vanishing (values close to zero) or exploding (values diverging to infinity) gradients, making training unstable and inefficient.
LSTM overcomes this by introducing the cell state [12] $C_t$, which acts as a conveyor belt, allowing gradients to flow without significant alteration. The cell state is updated as follows:
$$C_t = f_t \cdot C_{t-1} + i_t \cdot \tilde{C}_t$$
Here, $f_t$, $i_t$, and $\tilde{C}_t$ are the forget gate, input gate, and candidate cell state, respectively, as defined below. The forget gate $f_t$ determines what fraction of the previous cell state $C_{t-1}$ should be retained, while the input gate $i_t$ controls how much of the new candidate state $\tilde{C}_t$ should be added. This selective updating mechanism allows the LSTM to preserve relevant information across long sequences while gradually forgetting less important details [13].
The output of the LSTM unit, or the hidden state $h_t$, is then computed as
$$h_t = o_t \cdot \tanh(C_t)$$
where $o_t$ is the output gate, which controls how much of the cell state's information should be passed on to the next layer or time step.
This architecture enables LSTM networks to effectively learn long-term dependencies, making them particularly useful in fields such as material science, where processes and phenomena often evolve over extended periods [14,15].
An LSTM unit is composed of a cell, an input gate, an output gate, and a forget gate. These components work together to manage the flow of information through the network, allowing it to retain important features over long sequences [16,17]. This capability has impacts in material science, where processes such as stress–strain relationships [18], phase transitions [19], or diffusion phenomena [20] evolve over time. Each component plays a specific role:
  • The cell state acts as the memory of the LSTM unit [21], carrying information across time steps [22]. It can retain information over long periods, enabling the network to remember past data for future predictions. The cell state is updated based on the interactions between the gates, allowing it to accumulate or forget information as needed [23].
  • The input gate controls how much of the new information [24] (i.e., the candidate cell state) should be added to the cell state. This gate decides what portion of the incoming data at the current time step $t$, combined with the previous hidden state $h_{t-1}$, should be considered and stored in the cell [25]. Mathematically, it is defined as
    $$i_t = \sigma\big(W_i \cdot [h_{t-1}, x_t] + b_i\big)$$
    where $\sigma$ is the sigmoid function, $W_i$ represents the weight matrix, $h_{t-1}$ is the previous hidden state, $x_t$ is the current input, and $b_i$ is the bias.
  • The forget gate [26] determines how much of the previous cell state $C_{t-1}$ should be retained in the current cell state $C_t$. This gate is crucial for deciding which information is no longer relevant and can be “forgotten.” The forget gate's operation is given by
    $$f_t = \sigma\big(W_f \cdot [h_{t-1}, x_t] + b_f\big)$$
    A value of $f_t$ close to 0 means that the corresponding information in the cell state will be mostly discarded, while a value close to 1 means the information will be largely retained [27].
  • The output gate [28] controls what information from the cell state should be passed on to the next time step or used as the output of the current LSTM unit. It decides what part of the cell state's information contributes to the hidden state $h_t$, which in turn influences the network's predictions [29]. The output gate is calculated as
    $$o_t = \sigma\big(W_o \cdot [h_{t-1}, x_t] + b_o\big)$$
    The final hidden state [30] $h_t$ is then computed by combining the output gate's result with the cell state, passed through a nonlinearity:
    $$h_t = o_t \cdot \tanh(C_t)$$
Together, these gates allow the LSTM network to selectively update, retain, and discard information, making it particularly powerful for modeling complex, time-dependent processes in material science [15,31,32], such as predicting the behavior of materials under stress or modeling the progression of phase changes over time.
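To make the gate arithmetic above concrete, the following minimal NumPy sketch implements a single LSTM forward step. The weight shapes, initialization, and toy input sequence are illustrative assumptions, not code from any of the cited studies.

```python
# A minimal sketch of one LSTM forward step implementing the gate equations above.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x_t, h_prev, C_prev, W, b):
    """One LSTM time step.

    x_t    : (n_in,)      current input
    h_prev : (n_hidden,)  previous hidden state h_{t-1}
    C_prev : (n_hidden,)  previous cell state   C_{t-1}
    W, b   : dicts of weight matrices (n_hidden, n_hidden + n_in) and bias vectors
    """
    z = np.concatenate([h_prev, x_t])           # [h_{t-1}, x_t]
    i_t = sigmoid(W["i"] @ z + b["i"])           # input gate
    f_t = sigmoid(W["f"] @ z + b["f"])           # forget gate
    o_t = sigmoid(W["o"] @ z + b["o"])           # output gate
    C_tilde = np.tanh(W["c"] @ z + b["c"])       # candidate cell state
    C_t = f_t * C_prev + i_t * C_tilde           # cell state update
    h_t = o_t * np.tanh(C_t)                     # hidden state
    return h_t, C_t

# Toy usage: random weights, 3 input features, 4 hidden units, a 10-step sequence
rng = np.random.default_rng(0)
n_in, n_hid = 3, 4
W = {k: rng.normal(scale=0.1, size=(n_hid, n_hid + n_in)) for k in "ifoc"}
b = {k: np.zeros(n_hid) for k in "ifoc"}
h, C = np.zeros(n_hid), np.zeros(n_hid)
for x in rng.normal(size=(10, n_in)):
    h, C = lstm_step(x, h, C, W, b)
```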
LSTM networks are particularly suited for predicting the stress–strain behavior of materials under various loading conditions [33,34,35]. Given a sequence of applied stresses $\sigma(t)$ over time, LSTM models can predict the resulting strain $\epsilon(t)$, capturing both the immediate response and long-term effects such as creep and relaxation. The network essentially learns a mapping:
$$\epsilon(t) = \mathrm{LSTM}\big(\sigma(t), \sigma(t-1), \ldots, \sigma(0)\big)$$
where $\epsilon(t)$ is the predicted strain at time $t$, and $\sigma(t)$ represents the stress history up to that point.
In materials undergoing phase transitions [36,37], the prediction of the material's state over time as temperature, pressure, or other conditions change is critical. LSTM can model the evolution of the phase fractions $\phi_i(t)$ for different phases $i$ as a function of time-dependent parameters like temperature $T(t)$:
$$\phi_i(t) = \mathrm{LSTM}\big(T(t), T(t-1), \ldots, T(0)\big)$$
This allows for the accurate prediction of phase compositions over a thermal cycle.
The time-dependent diffusion [38,39] of atoms or molecules in a material is another area where LSTM excels. The concentration $c(x, t)$ of a diffusing species at position $x$ and time $t$ can be predicted using LSTM by training on sequences of concentration profiles:
$$c(x, t) = \mathrm{LSTM}\big(c(x, t-1), c(x, t-2), \ldots\big)$$
This is useful in materials processing applications such as doping in semiconductors [40] or alloying in metals [41].
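All three mappings above share the same computational pattern: a measured history is fed to an LSTM one step at a time and a small output layer predicts the target quantity at each step. The following PyTorch sketch illustrates this pattern for the stress-to-strain case; the layer sizes and the synthetic stress histories are illustrative assumptions.

```python
# A minimal sketch of the mapping ε(t) = LSTM(σ(t), ..., σ(0)): the stress history
# is processed step by step and a linear head predicts the strain at every step.
import torch
import torch.nn as nn

class StressStrainLSTM(nn.Module):
    def __init__(self, hidden_size=32):
        super().__init__()
        self.lstm = nn.LSTM(input_size=1, hidden_size=hidden_size, batch_first=True)
        self.head = nn.Linear(hidden_size, 1)    # strain prediction at each time step

    def forward(self, stress_seq):                # stress_seq: (batch, time, 1)
        h_seq, _ = self.lstm(stress_seq)          # hidden state at every time step
        return self.head(h_seq)                   # (batch, time, 1) -> strain history

model = StressStrainLSTM()
stress = torch.randn(8, 100, 1)                   # 8 synthetic stress histories, 100 steps
strain_pred = model(stress)                       # predicted strain histories
print(strain_pred.shape)                          # torch.Size([8, 100, 1])
```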

1.3. Variants of LSTM Networks

In addition to the standard LSTM, there are several other variations of this architecture, each optimized for specific tasks and types of data. Figure 2 illustrates various LSTM network types.
For instance, Bidirectional LSTM (BiLSTM) [42] processes sequences in both directions—forward and backward—allowing the model to consider both preceding and subsequent context. Stacked LSTM [43] is a multilayered architecture where the output of one LSTM layer serves as the input for the next, helping to capture more complex patterns in the data. Peephole LSTM [44] adds direct connections between the cell state and the gates, enabling the gates to better control the information stored in the cell. Finally, Attention-Based LSTM [45] incorporates an attention mechanism, allowing the model to focus on important parts of the sequence when making predictions. These extensions of the classical LSTM make it more flexible and effective for a wide range of tasks, including time-series analysis, natural language processing, and many other applications.

1.3.1. Bidirectional LSTM

Bidirectional LSTM (BiLSTM) processes input sequences in both forward and backward directions. This allows the network to have information from both past and future contexts.
  • Forward LSTM: processes the sequence in the original order [46].
    $$\overrightarrow{h}_t = \overrightarrow{\mathrm{LSTM}}(x_t, \overrightarrow{h}_{t-1})$$
  • Backward LSTM: processes the sequence in reverse order [47].
    $$\overleftarrow{h}_t = \overleftarrow{\mathrm{LSTM}}(x_t, \overleftarrow{h}_{t+1})$$
  • Final output: concatenates the forward and backward hidden states.
    $$h_t = [\overrightarrow{h}_t, \overleftarrow{h}_t]$$
Among the advantages of this model is its ability to process the input sequence in both forward and backward directions. This bidirectional processing allows the network to capture contextual information from both past and future time steps, significantly enhancing the model’s ability to understand and predict sequential data [48]. Additionally, by leveraging information from both directions, Bidirectional LSTM models often achieve higher accuracy and better performance in tasks such as sequence labeling, speech recognition, and natural language processing [49].
However, there are also disadvantages to consider. One notable drawback is the increased computational load. The bidirectional processing doubles the computational requirements, as the model needs to process the input sequence twice [50]. This can be a limitation in real-time applications or when computational resources are constrained. Another challenge is the complexity in real-time applications. In scenarios where future data are not available, the backward pass of the Bidirectional LSTM may not be feasible, limiting its applicability [51].
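For reference, a Bidirectional LSTM is readily obtained in PyTorch by setting bidirectional=True, which runs the forward and backward passes and concatenates their hidden states, as in the minimal sketch below (all dimensions are illustrative).

```python
# A minimal Bidirectional LSTM sketch: the per-step output is the concatenation
# of the forward and backward hidden states, [h_forward, h_backward].
import torch
import torch.nn as nn

bilstm = nn.LSTM(input_size=8, hidden_size=16, batch_first=True, bidirectional=True)
x = torch.randn(4, 50, 8)            # (batch, time, features)
h_seq, (h_n, c_n) = bilstm(x)
print(h_seq.shape)                    # torch.Size([4, 50, 32]) -> 16 forward + 16 backward
```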

1.3.2. Stacked LSTM

Stacked LSTM networks involve multiple LSTM layers where the output of one LSTM layer serves as the input to the next [52]. This allows the network to capture more complex patterns in the data.
  • Layer 1 LSTM [53]: processes the input sequence.
    $$h_t^{(1)} = \mathrm{LSTM}^{(1)}\big(x_t, h_{t-1}^{(1)}\big)$$
  • Layer 2 LSTM [54]: takes the output of Layer 1 as input.
    $$h_t^{(2)} = \mathrm{LSTM}^{(2)}\big(h_t^{(1)}, h_{t-1}^{(2)}\big)$$
  • Final output [55]: can be taken from the last layer's hidden state.
Stacked LSTM models consist of multiple layers of LSTM cells, enabling the network to learn more complex patterns and hierarchical representations in the data. This increased depth improves the model’s ability to capture intricate temporal dependencies [56]. By stacking multiple LSTM layers, the model can achieve higher accuracy and better generalization on complex tasks, such as time-series forecasting and sequence classification [52].
Disadvantages of Stacked LSTM models include increased computational complexity [57]. The additional layers in a Stacked LSTM model increase the computational requirements, making it more resource-intensive to train and deploy. With more layers, there is a higher risk of overfitting [58], especially if the dataset is not sufficiently large or diverse. Regularization techniques, such as dropout, are often necessary to mitigate this issue.
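A minimal PyTorch sketch of a two-layer Stacked LSTM is shown below; num_layers=2 feeds the hidden-state sequence of the first layer into the second, and inter-layer dropout is included as a common guard against the overfitting risk noted above. Sizes are illustrative assumptions.

```python
# A minimal two-layer Stacked LSTM sketch with dropout between layers.
import torch
import torch.nn as nn

stacked = nn.LSTM(input_size=8, hidden_size=16, num_layers=2,
                  dropout=0.2, batch_first=True)
x = torch.randn(4, 50, 8)             # (batch, time, features)
h_seq, (h_n, c_n) = stacked(x)
print(h_seq.shape)   # torch.Size([4, 50, 16]) -> hidden states of the top layer
print(h_n.shape)     # torch.Size([2, 4, 16])  -> final hidden state of each layer
```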

1.3.3. Peephole LSTM

Peephole LSTM models are a variation where the gates are connected not only to the previous hidden state $h_{t-1}$ and the input $x_t$ but also directly [16,59] to the cell state $C_{t-1}$. This allows the gates to have a view of the cell state, potentially improving performance.
  • Peephole forget gate:
    $$f_t = \sigma\big(W_f \cdot [h_{t-1}, x_t] + V_f \cdot C_{t-1} + b_f\big)$$
  • Peephole input gate:
    $$i_t = \sigma\big(W_i \cdot [h_{t-1}, x_t] + V_i \cdot C_{t-1} + b_i\big)$$
  • Peephole output gate:
    $$o_t = \sigma\big(W_o \cdot [h_{t-1}, x_t] + V_o \cdot C_t + b_o\big)$$
Here, $V_f$, $V_i$, and $V_o$ are additional weight matrices associated with the cell state.
Advantages of Peephole LSTM models include additional connections, called peepholes, that allow the cell state to directly influence the gates. This design enhances the model’s ability to retain and utilize long-term dependencies, leading to improved performance in tasks that require long-term memory. Peephole connections can help stabilize the gradient flow during training, making the model more robust and easier to train [60].
Disadvantages of Peephole LSTM models include the added complexity introduced by the peephole connections, which requires more parameters and potentially longer training times [61]. Implementing and tuning the peephole connections can also be more complex than in standard LSTM models, requiring careful consideration of the model architecture and hyperparameters [62].
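Because mainstream deep learning libraries do not expose peephole connections directly, the sketch below implements one Peephole LSTM step by hand in NumPy. It uses element-wise (diagonal) peephole weights, a common simplification of the full matrices V_f, V_i, and V_o in the equations above; all shapes are illustrative assumptions.

```python
# A minimal Peephole LSTM step: the forget and input gates also see C_{t-1},
# and the output gate sees C_t, through extra peephole weight vectors.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def peephole_lstm_step(x_t, h_prev, C_prev, W, V, b):
    z = np.concatenate([h_prev, x_t])                      # [h_{t-1}, x_t]
    f_t = sigmoid(W["f"] @ z + V["f"] * C_prev + b["f"])   # forget gate peeks at C_{t-1}
    i_t = sigmoid(W["i"] @ z + V["i"] * C_prev + b["i"])   # input gate peeks at C_{t-1}
    C_tilde = np.tanh(W["c"] @ z + b["c"])                 # candidate cell state
    C_t = f_t * C_prev + i_t * C_tilde                     # cell state update
    o_t = sigmoid(W["o"] @ z + V["o"] * C_t + b["o"])      # output gate peeks at C_t
    h_t = o_t * np.tanh(C_t)
    return h_t, C_t
```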

1.3.4. Attention-Based LSTM

Attention mechanisms can be integrated with LSTM networks to focus on specific parts of the input sequence when making predictions [63,64]. The attention mechanism assigns a weight $\alpha_t$ to each time step in the input sequence.
  • Attention weights [65]:
    $$\alpha_t = \mathrm{softmax}(e_t)$$
    where
    $$e_t = v^{\top} \tanh(W_h h_t + W_s s_{t-1} + b_e)$$
  • Context vector [66]:
    $$c_t = \sum_{t} \alpha_t h_t$$
  • Final output: combines the context vector with the LSTM output.
    $$\tilde{h}_t = \tanh(W_c [c_t; h_t])$$
Here, $e_t$ is an alignment score, $W_h$, $W_s$, and $W_c$ are weight matrices [67,68], $v$ is a learnable vector, and $b_e$ is a bias term.
LSTM networks, in their various forms, offer powerful tools for sequence modeling, each variation tailored to different types of sequence data and tasks [69]. From standard LSTM to more complex architectures like Bidirectional, Stacked, Peephole, and Attention-Based LSTM models, these models are equipped to handle a wide range of challenges in time-series prediction, natural language processing, and beyond.
Advantages of Attention-Based LSTM models stem from the attention mechanism, which allows the network to focus on the most relevant parts of the input sequence [63]. This selective attention can improve the model's performance by prioritizing important information and ignoring irrelevant data. The attention mechanism also provides insights into which parts of the input sequence are most influential in the model's predictions, making the model more interpretable and transparent [70].
A disadvantage of the attention mechanism is that it increases the complexity of the model, requiring more computational resources and potentially longer training times [71]. Implementing and tuning the attention mechanism can be challenging, as it involves additional hyperparameters and architectural considerations.
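The sketch below illustrates additive attention over LSTM hidden states in PyTorch, following the alignment-score form given above but conditioning only on the encoder states (the decoder term W_s s_{t-1} is omitted for brevity). Layer sizes are illustrative assumptions.

```python
# A minimal Attention-Based LSTM sketch: additive attention weights are computed
# over all hidden states and used to form a context vector for the prediction.
import torch
import torch.nn as nn

class AttentionLSTM(nn.Module):
    def __init__(self, n_in=8, n_hidden=16):
        super().__init__()
        self.lstm = nn.LSTM(n_in, n_hidden, batch_first=True)
        self.W_h = nn.Linear(n_hidden, n_hidden)
        self.v = nn.Linear(n_hidden, 1, bias=False)        # alignment vector v
        self.out = nn.Linear(n_hidden, 1)

    def forward(self, x):                                   # x: (batch, time, n_in)
        h_seq, _ = self.lstm(x)                             # (batch, time, n_hidden)
        e = self.v(torch.tanh(self.W_h(h_seq)))             # alignment scores e_t
        alpha = torch.softmax(e, dim=1)                     # attention weights α_t
        context = (alpha * h_seq).sum(dim=1)                # context vector c = Σ α_t h_t
        return self.out(context), alpha                     # prediction + weights for inspection

model = AttentionLSTM()
y, alpha = model(torch.randn(4, 50, 8))
print(y.shape, alpha.shape)    # torch.Size([4, 1]) torch.Size([4, 50, 1])
```

Returning the attention weights alongside the prediction is a simple way to inspect which time steps the model relies on, which is the interpretability benefit noted above.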

2. Applications of LSTM in Polymeric Sciences

LSTM networks have emerged as a powerful tool in the field of polymeric sciences, offering advancements in predictive modeling and sensor technologies. These networks excel in handling sequential data and time-series predictions [72,73], which is crucial for applications such as predicting polymer aging, optimizing manufacturing processes, and detecting faults in polymer composites. The bibliometric network visualization of LSTM applications in polymeric sciences, presented in Figure 3, illustrates the extensive and growing integration of LSTM networks across various polymer-related studies, highlighting their impact and versatility in enhancing the performance and reliability of polymer materials.
The word cloud uses varying shades of color to represent the frequency of word usage, with darker colors indicating more frequent mentions in the literature. For example, terms like “LSTM”, “ML”, “SHM”, and “ANN” appear in darker shades. The circles are grouped based on the strength of the relationship between the terms, with closer grouping indicating a stronger interrelationship. Terms like “Tool Wear Pred.” and “Battery SOC Est.” are closely grouped, reflecting their interconnectedness in studies that apply LSTM to monitor and predict the degradation of polymer composites. The word cloud helps to identify the most relevant and frequently discussed topics in the field, providing a visual representation of key areas where LSTM has been successfully integrated into polymer science. It also highlights emerging trends and areas of focus, guiding researchers towards potential avenues for further exploration and innovation. By examining the word cloud, we are able to quickly grasp the themes and the interrelationships between different aspects of LSTM integration in polymer science. This visual aid enhances the understanding of the current state of research and potential future directions in this interdisciplinary field.

2.1. Time-Series Analysis in Polymer Systems

ML innovations for Charge-Coupled Device [74] (CCD) chips have enabled capabilities like facial recognition [75] and object tracking [76] by efficiently processing large volumes of temporal data. However, despite progress in creating chemical sensor arrays that mimic mammalian olfactory systems [77,78], limited research has been conducted into their temporal responses and the neural architectures needed for chemical awareness in dynamic environments.
To address this gap, Ryman et al. [79] developed sensors using a blend of carbon black and various organic polymers, including poly(4-vinyl phenol) [80], poly(styrene-co-allyl alcohol) [81], and poly(ethylene oxide) [82]. These sensors, when applied to interdigitated electrodes, allowed for precise resistance measurements and effective chemical detection. At the same time, LSTM networks have demonstrated exceptional performance in classification tasks, often surpassing human capabilities in areas like traffic sign recognition [83]. LSTM networks are particularly adept at managing temporal dependencies, selectively storing and forgetting states, and scaling across multiple categories, making them ideal for addressing the challenges of olfactory signal classification and processing the temporal dynamics of sensor data [84]. The integration of LSTM networks with organic polymer-based sensors is advancing olfactory sensing systems similar to how these technologies have revolutionized machine vision.
In the realm of energy systems, accurate estimation of battery state of charge [85] (SOC) remains challenging due to its nonlinearity and influence from various factors. While the extended Kalman filter [86] (EKF) is commonly used for SOC estimation, its accuracy can be compromised by uncertainties in battery models and varying conditions. Shin et al. [87] proposed a method that enhances EKF accuracy by compensating errors with an LSTM network. This approach involves training the LSTM on EKF errors and applying calibration values based on battery conditions and load profiles. The multi-LSTM structure, utilizing ensemble averaging, achieves SOC estimation with a root mean square error of less than 1%, closely matching the SOC calculated by coulomb counting, and allows for online prediction once the model is trained.
Similarly, Andrews et al. [88] evaluated three recurrent neural network architectures—ERNN [89], LSTM, and GRU [90]—for predicting the energetics of an ethyl acetate solution with a polymer–lipid aggregate [91]. Trained on extensive molecular dynamics simulation data, these models effectively reproduce time-series data but struggle with accurate short- and long-term forecasts. An in silico protocol was proposed, utilizing time patterns from the data to improve forecasts, enhancing predictions by providing a range of values consistent with energy fluctuations. This approach offers useful estimates for evaluating the necessity of long simulations in materials design.
Wang et al. [92] presented a hybrid sensor for motor tic recognition [93], integrating piezoelectric and triboelectric designs. The sensor, combining a triboelectric nanogenerator made from bionic PDMS and a piezoelectric nanogenerator using layered porous PVDF-TrFE nanofibers [94], shows an improvement in voltage output, reaching nearly 5 V. A self-powered tic recognition system utilizing a deep learning (DL) model, specifically LSTM, achieves an 88.1% recognition rate for motor tics, aiding doctors in monitoring Tourette syndrome patients [95].
In the context of fuel cell technology, degradation due to hydrogen (H2) starvation limits the lifespan of high-temperature polymer electrolyte membrane fuel cells (HT-PEM FC). Yezerska et al. [96] utilized an LSTM neural network trained on electrochemical data from H2 starvation/regeneration experiments to predict H2 starvation effects [97]. Simulations showed critical resistances at specific voltages, recommending a safe operational voltage range to avoid severe degradation.
Proton Exchange Membrane Fuel Cells (PEMFCs) [98], favored for green transportation, suffer from radical-induced degradation in Nafion® membranes [99], leading to performance and stability issues. Benhaddouch et al. [100] introduced fluoride emission as a diagnostic model using fluoride-sensitive membranes [101] (LaF3/CaF2) in inline microsensor arrays for real-time monitoring. These sensors, coupled with LSTM algorithms, achieve high sensitivity and accuracy, providing a complementary approach for predicting PEMFC end of life [102] (EOL) and enhancing current diagnostic techniques.
In material science, Xu et al. [103] presented a method for classifying substances within glass fiber-reinforced polymer (GFRP) honeycomb structures using terahertz time-domain spectroscopy (THz-TDS). An improved one-dimensional convolutional neural network (1D-CNN) [104] model was developed and compared with LSTM and standard 1D-CNN models. The results show that the LSTM model excels with time-domain signals, while the improved 1D-CNN model is superior with frequency-domain signals.
Song et al. [105] introduced an LSTM-based soft sensor model for predicting the melt index (MI) [106] in polymerization processes, a property that strongly influences polymer quality. Because online MI measurement is lacking, traditional models struggle with the nonlinearity and complex temporal correlations of chemical processes. The LSTM model was applied to an industrial styrene–acrylonitrile (SAN) polymerization process [107], outperforming other models in prediction accuracy.
Furthermore, Song et al. [108] introduced the Self-constructed Strategy-based Reinforcement LSTM (SCRLA) [108] for predicting the nonlinear performance degradation of fiber-reinforced polymers [109] (FRP). SCRLA enhances model generalization by integrating Bayesian algorithms for hyperparameter optimization and reinforcing the learning process. This approach demonstrated superior prediction accuracy, especially with experimental data, offering an effective framework for analyzing and predicting the sequential performance of composite materials.
Finally, Goswami et al. [110] addressed the challenge of accurately measuring the glass transition temperature [111] (Tg) in polymers. They proposed using an LSTM model based on the Simplified Molecular-Input Line-Entry System (SMILES) structure of polymers to predict Tg. The study evaluated the model's performance and its practical applications, offering a potentially efficient alternative to conventional methods.
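As an illustration of the general SMILES-to-property idea, the sketch below embeds the characters of a SMILES string and maps the final LSTM hidden state to a scalar Tg. The vocabulary, tokenization, and layer sizes are illustrative assumptions and do not reproduce the architecture of Goswami et al.

```python
# A schematic SMILES-to-Tg sketch: characters are embedded, fed to an LSTM,
# and the final hidden state is mapped to a scalar property prediction.
import torch
import torch.nn as nn

VOCAB = list("CNOFSPcnos()[]=#@+-1234567890*")           # toy character vocabulary
CHAR2IDX = {ch: i + 1 for i, ch in enumerate(VOCAB)}     # index 0 reserved for padding

def encode_smiles(smiles, max_len=64):
    idx = [CHAR2IDX.get(ch, 0) for ch in smiles][:max_len]
    return torch.tensor(idx + [0] * (max_len - len(idx)))

class SmilesTgLSTM(nn.Module):
    def __init__(self, vocab_size=len(VOCAB) + 1, emb=32, hidden=64):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb, padding_idx=0)
        self.lstm = nn.LSTM(emb, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)                   # scalar Tg prediction

    def forward(self, tokens):                             # tokens: (batch, max_len)
        h_seq, (h_n, _) = self.lstm(self.emb(tokens))
        return self.head(h_n[-1]).squeeze(-1)              # use the final hidden state

model = SmilesTgLSTM()
batch = torch.stack([encode_smiles("*CC(*)c1ccccc1")])     # illustrative polymer repeat unit
print(model(batch))                                         # untrained Tg prediction
```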
As a result, LSTM networks have transformed the analysis and prediction of complex time-dependent behaviors in polymer systems. These models excel at handling the temporal dependencies inherent in these systems, offering improvements in accuracy and efficiency over traditional methods. Table 1 offers a concise overview of key articles that highlight the application of LSTM and related models in the time-series analysis of polymer systems.

2.2. Diagnostics and Monitoring of Polymer Materials

Recent advancements in polymer and battery technology [112] have been enhanced by DL and ML techniques. For instance, Kim et al. [113] developed a DL-based prediagnosis system for PEMFCs, using LSTM and CNN [114] combined with a bagging ensemble method [115]. By analyzing experimental time-series data from full-scale single-cell tests, this system achieves detection rates of 98.52% for flooding and 95.36% for drying, thereby improving PEMFC stability and operation.
In the field of underwater electroacoustic sensors, Ramachandran et al. [116] focused on predicting the end of life of these sensors by analyzing the degradation of their water-proof polymer insulation due to water ingress [117]. They employed LSTM networks to model and predict the degradation pattern based on measured insulation resistance [118]. This method allows for maintenance or replacement decisions without disassembling the sensors, verifying the accuracy of the predictions against actual end-of-life measurements.
Similarly, in the realm of polymer matrix composites (PMCs), Lee et al. [119] addressed the challenge of predicting tensile behavior by utilizing feature engineering combined with ML. They used Principal Component Analysis [120] (PCA) and Recursive Feature Elimination with Cross Validation [121] (RFECV) to identify the optimal features for predicting the tensile stress–strain curve [122] from test data. LSTM and Feedforward Neural Network [123] (FNN) models trained on this feature set achieved a predictive accuracy of R² = 92%, facilitating accurate stress–strain curve predictions and simplifying PMC design.
Chistyakova et al. [124] evaluated predictive models for key quality indicators in polymer film materials [125]. They compared Adaptive Boosting of Decision Trees (AdaBoost) [126] with LSTM to predict defects such as the number of black dots per square meter. Performance was assessed using precision, recall, and F1-score to determine the most effective model based on production data characteristics.
In the context of glass fiber-reinforced polymers [127] (GFRPs) used in marine infrastructure, Zhang et al. [128] developed an optimized ML model to predict tensile strength retention [129] (TSR) in alkaline environments. They trained seven different ML models, including LSTM and Extreme Gradient Boosting (XGBoost) [130], using variables such as bar diameter, fiber volume fraction, pH, conditioning temperature, and immersion duration. The results indicated that XGBoost and LSTM performed best, with pH and temperature being the most influential factors.
Yoon et al. [131] proposed a method to enhance the Extended Kalman Filter (EKF) for estimating the SOC of Li-polymer batteries [132]. By integrating EKF with an LSTM network, they addressed inaccuracies arising from parameter variations in the battery’s equivalent model. This approach improved SOC estimation accuracy, particularly under varying load profiles, compared to standard EKF methods.
Dielectric electro-active polymer [133] (DEAP) actuators, which are promising for bio-inspired robotics, face challenges with rate-dependent and asymmetrical hysteresis. Jiang et al. [134] introduced a hybrid model combining LSTM networks with Empirical Mode Decomposition [135] (EMD) to better model DEAP actuator hysteresis. This approach, which preprocesses control signals using EMD before LSTM input, demonstrated superior prediction accuracy compared to traditional models like the Backpropagation Neural Network (BPNN) and Recursive Polynomial Interpolation (RPI).
Wang et al. [136] applied LSTM networks to classify internal interfaces in polymers using terahertz (THz) waveform data. Their experiments confirmed that LSTM networks are effective in identifying and imaging voids and impurities within polymer materials, providing a nondestructive method for examining internal structures.
Li et al. [137] developed a DL model to predict tool wear in milling unidirectional carbon fiber-reinforced polymer (CFRP) by analyzing cutting force signals. Combining a multichannel 1D CNN with LSTM, their model achieved high prediction accuracy with an R² of 95.04% and a mean absolute error (MAE) of 2.94 µm, outperforming traditional methods such as 1D CNN, 2D CNN [138], and Support Vector Regression (SVR) by over 25%.
Lastly, Hantono et al. [139] presented an LSTM model for estimating the state of charge (SoC) of lithium polymer batteries. Using the NVIDIA Jetson Nano for computation, their model achieved RMSE scores of 1.797 for training and 1.976 for testing, demonstrating the feasibility of employing LSTM on the Jetson Nano for accurate SOC estimation.
Polymer and battery technology have been transformed by the integration of ML techniques. Innovative systems like the LSTM-CNN ensemble developed by Kim et al. [113] have improved the stability and operation of PEMFCs by accurately diagnosing flooding and drying conditions. Similarly, Ramachandran et al. [116] utilized LSTM networks to predict the degradation of underwater sensors, facilitating timely maintenance decisions. In the field of polymer composites, Lee et al. [119] combined feature engineering with LSTM models to predict tensile behavior with high accuracy, streamlining the design process. Other studies, such as those by Zhang et al. [128] and Yoon et al. [131], demonstrate the effectiveness of LSTM in improving the prediction of polymer performance and the estimation of battery state. The studies summarized in Table 2 illustrate the diverse applications and effectiveness of LSTM-based models in the monitoring of polymer materials.

2.3. Managing the Condition and Performance of Polymer Products

Managing the condition and performance of polymer products is a growing area of research, with various innovative approaches leveraging ML and DL techniques. Dehghan et al. [140] compared methods for predicting conductive and radiative heat transfer in polymethylmethacrylate (PMMA). They found that the LSTM networks provided faster and more accurate results than traditional numerical methods, demonstrating strong performance validated by the receiver operating characteristic (ROC) curve and confusion matrix.
Luong et al. [141] developed an LSTM model to predict the behavior of an antagonistic joint driven by twisted-coiled polymer actuators made from spandex and nylon. Integrated with Model Predictive Control (MPC) [142] using PyTorch, this model achieved high prediction accuracy for joint angles and actuator temperatures, maintaining steady-state errors under 0.1 degrees and 0.2 °C, respectively. The MPC proved effective in set-point regulation and tracking sinusoidal waveforms, demonstrating its utility in managing joint stiffness.
Dong et al. [143] introduced a hybrid modeling approach for the tetrafluoroethylene (TFE) polymerization process [144], combining kinetic and thermodynamic models with LSTM networks. This hybrid model effectively predicts reaction rates and optimizes the polymerization process for producing polytetrafluoroethylene (PTFE) [145], which is important for aerospace and medical applications. The model showed improved performance and effectiveness in addressing uncertainties in kinetic parameters.
Bi et al. [146] employed a data-driven approach to predict polymer intrinsic viscosity, which is critical for maintaining polyester fiber quality. They used a time-series data generative adversarial network [147] (TSDGAN), with an Attention LSTM as the generator and a CNN as the discriminator, to handle missing data. The Informer model then predicted viscosity using the completed time series, outperforming traditional methods and demonstrating robustness against varying rates of missing data.
Rahman et al. [148] developed a predictive maintenance framework for an industrial drying hopper using deep learning (DL) algorithms. By classifying Multivariate Time-Series [149] (MTS) data into failure/unusual and regular events, they addressed challenges such as missing values and imbalanced data. Their study found that a CNN outperformed other DL and ML algorithms, such as SVM and KNN, in classifying the dataset effectively.
Gao et al. [150] introduced a dual-mode tactile sensor combining piezoresistive and piezoelectric materials to enhance tactile perception. Using a CNN-LSTM model, the sensor achieved 90.58% accuracy for braille recognition under constant conditions and 84.2% across varying speeds and directions. This sensor demonstrated potential applications in blind reading and texture detection when tested on a robotic arm and a human finger.
Simine et al. [151] presented a method for predicting UV-vis spectra of conjugated polymers using an LSTM-RNN model. This generative DL model bypasses traditional backmapping and quantum chemistry calculations, improving the efficiency and accuracy of studying organic optoelectronic materials by leveraging mathematical similarities to natural languages.
Braghetto et al. [152] analyzed configurations of flexible knotted rings within spherical cavities using LSTM neural networks. The LSTM models excelled at recognizing knots, even with significant geometric entanglement, and were improved by coarse-graining. However, the models often misclassified knots within the same topological family [153], suggesting that they grasped basic topological properties better than simpler convolutional NNs.
Benrabia et al. [154] explored ML techniques for modeling energy storage systems, focusing on external system states such as environmental temperature and energy demand. They compared nonlinear autoregressive exogenous [155] (NARX) and LSTM models for predicting the state of charge/discharge (SOC/DOD) of batteries and power output for fuel cells. The results indicated that NARX was more effective for battery systems, while LSTM excelled with fuel cells.
Altabey et al. [156] introduced a DL-based method for predicting the acoustic behavior of dual-chamber mufflers made from basalt fiber-reinforced polymer [157] (BFRP) composites. Two deep neural networks, RNN-LSTM and CNN, optimized using Bayesian genetic algorithms [158], achieved over 90% accuracy in predicting acoustic transmission loss [159] (TL) and power transmission coefficient [160] (PTC), thus streamlining muffler design.
Wang et al. [161] developed a method for detecting internal defects in GFRP using terahertz time-domain spectroscopy and neural networks. Their approach, which involved 1D convolutional neural networks, LSTM-RNNs, and bidirectional LSTM-RNNs, found that the 1D CNN model was the most effective, achieving high recall rates and macro F1 scores. This method advances automated, nondestructive defect detection in GFRP materials.
Managing the condition and performance of polymer products has been driven by DL techniques. Studies like those by Dehghan et al. [140] and Luong et al. [141] demonstrate the effectiveness of LSTM networks in predicting heat transfer in polymers and controlling polymer actuators, respectively. Hybrid models combining traditional approaches with LSTM, as explored by Dong et al. [143], have optimized polymerization processes, while innovative DL frameworks, such as those developed by Bi et al. [146] for predicting polymer viscosity, highlight the robustness of these approaches against data inconsistencies. Other research has applied CNN-LSTM models to enhance tactile sensors, predictive maintenance systems, and defect detection in polymer composites, demonstrating broad applicability across various domains. Table 3 provides a summary of key studies and their contributions to advancing polymer product management.

2.4. Predicting Aging and Degradation of Polymers

Accurate prediction of aging and degradation in polymers is crucial for maintaining their performance and reliability. Li et al. [137] introduced a method for predicting tool flank wear in the edge trimming of carbon fiber-reinforced polymer [162] (CFRP) components, focusing on the impact of multidirectional (MD) CFRP’s interlaminar effects. Their LSTM backpropagation network model successfully predicted tool wear length, accounting for these interlaminar effects and demonstrating effectiveness in quantifying wear progression in MD CFRP edge trimming [137].
Berot et al. [163] investigated various parameters of LSTM networks for predicting polymer aging, specifically in epoxy adhesives subjected to hygrothermal aging [164]. They found that a single hidden layer with 150 units and a hyperbolic tangent activation function provided the best results. The study highlights LSTM’s effectiveness in predicting time-dependent changes in physical parameters and underscores the importance of selecting appropriate network parameters for accurate and stable predictions.
Oudan et al. [165] combined finite element (FE) simulation with LSTM networks to assess the time-dependent reliability of complex structural systems. Their approach, applied to degrading concrete structures and a GFRP concrete beam, efficiently provided accurate time-dependent reliability indexes. This hybrid method shows versatility and effectiveness in handling various applications involving degradation over time.
Oh et al. [166] focused on the state-of-health (SoH) estimation of lithium polymer batteries [167] used in urban railway fleets. They employed LSTM models to analyze battery performance over 500 charge/discharge cycles under real vehicle conditions. Their data preprocessing and LSTM-based predictions provided accurate SoH estimations, enhancing the reliability of battery management systems.
In the aviation sector, Karaburun et al. [168] evaluated state-of-charge (SOC) estimation for lithium polymer batteries used in electric unmanned aerial vehicles [169] (UAVs). They compared LSTM with Support Vector Regression [170] (SVR) and Random Forest [171] (RF) methods, finding that these models effectively estimated SOC based on time-series data. The results demonstrated the efficacy of DL and ML techniques for accurate SOC predictions.
Tripathi et al. [172] explored the mechanical response of CFRP laminates with buckypaper (BP) or carbon nanotube [173] (CNT) interleaves. Using an LSTM model trained on finite element analysis [174] (FEA) and experimental data, they accurately predicted damage responses and observed improvements in flexural strength and modulus. The model’s predictions were confirmed by confocal microscopy [175], demonstrating its capability to assess the impact of CNT membranes on mechanical properties.
Reiner et al. [176] developed a data-rich framework for characterizing the strain-softening behavior of laminated composites under compressive loading. They compared a theory-guided neural network and an LSTM-based recurrent neural network. The LSTM model, requiring a minimum of 5000 finite element (FE) simulations, successfully predicted compressive damage and was validated against experimental data from various compression tests.
Najjar et al. [177] introduced an optimized AI model combining LSTM with the Chimp Optimization Algorithm [178] (CHOA) to predict kerf quality in laser cutting basalt fiber-reinforced polymer composites [179]. This model outperformed standalone LSTM and other optimization techniques by reducing the root mean squared error for kerf width, deviation, and taper. The LSTM-CHOA [180] model demonstrated superior performance in predicting cutting quality.
Jiang et al. [181] addressed hysteresis and creep in DEAP actuators using a hybrid approach. Their model combined LSTM with Empirical Mode Decomposition (EMD) and proportional–integral–derivative [182] (PID) control to predict and compensate for hysteresis dynamics. Experiments showed that this LSTM-based compensator outperformed traditional models in predicting control signals and reducing hysteresis.
Munshi et al. [183] applied a transfer learning-based LSTM model using SMILES molecular fingerprints to discover new polymer chemistries for organic photovoltaic (OPV) materials. The model, trained on a small dataset, predicted novel polymer repeat units with potentially high power conversion efficiencies [184] (PCEs). Validation through similarity coefficients between known and generated polymers demonstrated the model’s effectiveness in accelerating materials discovery for OPVs and similar applications.
This section explores modern methods for predicting aging and degradation in polymers, focusing on applications across various fields such as polymer composites, batteries, and materials for solar cells. The primary emphasis is on the use of recurrent neural networks, particularly LSTM models, to forecast different aspects of degradation and aging. Examples include predicting tool wear in carbon fiber composites, assessing the reliability of structural systems, and estimating the state of health of batteries. These studies highlight the effectiveness of LSTM models in diverse applications while noting the need for further research to extend the applicability of these models. Table 4 provides a summary of the discussed studies and the models they employed.

2.5. Sensor Technologies and LSTM-Based Modeling for Polymer Composites

Advancements in sensor technologies and LSTM-based modeling are enhancing the monitoring and predictive capabilities for polymer composites. Luong et al. [185] developed a dynamic model using LSTM networks to predict the nonlinear behavior of an antagonistic joint driven by a hybrid twisted-coiled polymer actuator [186] (TCA) bundle. This model incorporates prestrains of TCAs as inputs, improving the prediction of joint angles with a mean error of 0.06°, a reduction from the previous model’s error of 1.57°, and effectively manages prestrain changes without retraining.
Kumar et al. [187] evaluated six DL models for detecting faults in polymer gears, aiming to reduce maintenance costs and computational time. Their hybrid LSTM and Gated Recurrent Unit (LSTM-GRU [187]) model achieved exceptional performance with 99.6% accuracy, 99.89% kappa, and 99.6% F1-score. This model offers a highly accurate and efficient solution for fault detection in polymer gears by enhancing signal quality through Complete Ensemble Empirical Mode Decomposition with Adaptive Noise [188] (CEEMDAN).
Shunhu et al. [189] explored drilling quality and energy efficiency in carbon fiber-reinforced polymer (CFRP) components using a 55° tungsten steel drill bit. By employing CNN-LSTM neural networks to correlate process parameters with delamination factors and energy consumption, they developed a prediction method that identifies optimal drilling settings. Their findings—spindle speed of 7000 r/min, feed rate of 40 mm/min, and lay-up sequence of [0°, 0°, −45°, 90°]6 s—highlight how parameter optimization can minimize both energy consumption and delamination.
Aklouche et al. [190] proposed a Bidirectional LSTM (BiLSTM) network method for damage severity estimation in composite materials like CFRP, utilizing Lamb wave [191] (LW) data. By integrating Variational Mode Decomposition (VMD) for signal preprocessing, this method outperforms traditional RNN and LSTM models in damage assessment, providing superior adaptive performance and predictive accuracy.
Ali et al. [192] examined the structural behavior of double-skin double-filled tubular [193] (DSDFT) versus double-skin hollow tubular [194] (DSHT) columns using finite element modeling (FEM) and ML. Their study revealed that DSDFT columns have a 19.54% to 101.21% increase in load-carrying capacity and improved ductility over DSHT columns. The LSTM and BiLSTM models provided the most accurate predictions for axial load capacity, offering valuable insights for optimizing column designs in construction.
Wang et al. [195] employed laser infrared thermography [196] (LIT) and LSTM-RNN to assess defect depth in CFRP sheets. The LSTM-RNN, combined with thermographic signal reconstruction [197] (TSR) to reduce noise, outperformed traditional RNN and CNN methods in defect depth determination, enhancing defect assessment accuracy in CFRP structures.
Kang et al. [198] introduced a hybrid recurrent neural network [199] (H-RNN) to address nonlinear issues such as creep and hysteresis in cable-driven parallel robots [200] (CDPRs) with polymer cables. The H-RNN, combining LSTM for low-frequency and basic RNN for high-frequency data, achieved high accuracy in predicting position errors and demonstrated superior performance compared to standalone RNN and LSTM models.
Lin et al. [201] developed a data-driven method using LSTM for real-time prediction of high-frequency resistance (HFR) in polymer electrolyte membrane fuel cells (PEMFCs). Their model, based on current and past sensor data from a 100 kW automotive fuel cell stack, outperformed traditional regression models, showcasing precise and timely HFR monitoring.
Lorenzo et al. [202] compared classical classifiers with 1D CNN and LSTM for classifying plastics using hyperspectral images. The 1D CNN and SVM+RBF achieved the highest accuracies of 99.31% and 99.41%, respectively, demonstrating the effectiveness of these models for plastic identification and recycling.
Choi et al. [203] introduced a polybutadiene-based urethane (PBU)/Ag nanowire (AgNW)/PBU sensor (PAPS) with enhanced mechanical stability and motion detection precision. The PAPS sensor, integrating AgNW electrodes [204] and utilizing ML algorithms (1D CNN, LSTM), achieved over 98% classification accuracy, illustrating its advancements in intelligent motion sensing [205].
Wang et al. [92] presented a hybrid sensor combining piezoelectric and triboelectric designs for motor tic recognition. The sensor, with a triboelectric nanogenerator [206] made from bionic PDMS and a piezoelectric nanogenerator using PVDF-TrFE nanofibers, demonstrated a 200% improvement in voltage output and an 88.1% recognition rate for motor tics using an LSTM-based DL model, aiding in the monitoring of Tourette syndrome patients.
This section reviews recent advances in sensor technologies and LSTM-based modeling for polymer composites. Notable developments include improved prediction models for actuator behavior, fault detection in polymer gears and optimization of drilling processes in CFRP. Key contributions include high-accuracy LSTM-GRU models for fault detection, BiLSTM networks for damage assessment, and hybrid sensors for enhanced monitoring. These innovations are summarized in Table 5.

3. Challenges and Limitations

3.1. Data Availability

The availability of datasets remains a challenge in applying LSTM models to polymeric sciences. Efforts to enhance data collection and sharing are vital for advancing this field.
Figure 4 illustrates the sequential steps involved in the studies, including data collection and preprocessing, implementation of the LSTM model, system setup and measurements, and final output and analysis. Key phases include dataset acquisition, model training and validation, system configuration, and interpretation of results.
In the study by Wang et al. [136], an LSTM process was applied to terahertz (THz) beam experiments [207]. A sample was placed on a motion platform, and its height was adjusted to align the artificial interface with the THz beam’s focus, maximizing the reflected pulse amplitude. A 50 mm × 50 mm central area of each sample was scanned in 1 mm steps to collect reflected waveform data. Pulse data for various artificial interfaces were then extracted and cataloged. To simulate real-world variations in polymer interfaces, where alignment with the beam focus may be imperfect, the amplitude of the training data was randomly reduced to improve the network’s performance.
Another study by Wang et al. [161] focused on fabricating two GFRP laminates with eight circular defects, each 0.02 mm thick, with varying depths (0.25 mm, 0.5 mm, 0.75 mm, 1.0 mm) and diameters (8 mm or 10 mm). Defects at shallower depths, particularly at 0.25 mm, were clearly visible in images. A home-built THz-TDS system, with a spectral range of 0.06–4 THz, a frequency resolution of 20 GHz, and a dynamic range of 80 dB, was used to collect signals from 17,725 points on each laminate, including nondefective and defective areas. The data were split into training (80%) and validation (20%) sets. For testing, 19,044 signals were collected by scanning each specimen with a 0.5 mm step. The time-domain and spectral signals revealed clear differences between nondefective and defective areas, with calculated defect depths closely matching their designed depths.
In a study on epoxy adhesives, a dataset was created involving a two-component adhesive with 40% kaolin fillers, known for its flexibility and impact resistance with a glass transition temperature (Tg) of 31.8 °C [163]. Water uptake was studied under accelerated aging at 50 °C, 70 °C, and 90 °C, revealing different absorption behaviors. The samples were weighed using a high-resolution balance, and missing data from 38 measurements over 203 days were addressed using interpolation methods. The pchip function was applied for noisy data, and piecewise polynomials were used for complex datasets, resulting in a complete dataset of 814 samples with a consistent 6 h time step. This approach enhanced the LSTM network’s performance.
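As an illustration of the gap-filling step described above, the following sketch resamples an irregular water-uptake series onto a regular 6 h grid with SciPy's shape-preserving PCHIP interpolator; the measurement values and times are synthetic placeholders, not the data of the cited study.

```python
# A minimal sketch of filling gaps in gravimetric aging data with PCHIP
# interpolation and resampling onto a regular 6 h grid for LSTM training.
import numpy as np
from scipy.interpolate import PchipInterpolator

# Irregular measurement times (hours) and water uptake (%), illustrative only
t_meas = np.array([0, 12, 36, 96, 240, 480, 1200, 2400, 4872], dtype=float)
uptake = np.array([0.0, 0.4, 0.9, 1.6, 2.3, 2.8, 3.1, 3.2, 3.25])

pchip = PchipInterpolator(t_meas, uptake)       # monotone, shape-preserving interpolant
t_grid = np.arange(0, t_meas[-1] + 1, 6.0)      # regular 6 h time step
uptake_grid = pchip(t_grid)                     # completed series for model training
print(uptake_grid.size)                         # number of points on the regular grid
```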
Xu et al. [103] conducted terahertz inspection experiments on an unsealed GFRP honeycomb sandwich sample, which consisted of glass fiber fabric epoxy resin skins and a hexagonal Nomex paper honeycomb core [208]. The core was filled with water, oil, and alcohol in different regions before sealing the top surface. A THz-TDS system, combined with a robot arm for precise scanning, was used to measure the terahertz reflection spectra. This system featured a femtosecond laser with a 2 THz spectral width and a 60 dB dynamic range, enabling synchronized, real-time data acquisition during the scan.
Dehghan et al. [140] explored the thermal properties of polymethylmethacrylate plastic optical fiber [209] (PMMA-POF) at different temperatures. Unlike traditional glass optical fibers (GOFs), which use silica glass for the core and cladding, PMMA-POF utilizes a general-purpose resin for the core and a fluorinated polymer for the cladding. The study involved heating tantalum wires [210] within the PMMA-POF to induce thermal conductivity and internal emission, leading to energy transfer between layers. The Wheatstone bridge method was employed to measure wire resistance, and combined conductive and radiative heat transfer equations were used to analyze the thermal effects.
Finally, Shin et al. [87] used a first-order R-C circuit model to minimize complexity and computational burden, with errors from model uncertainty being offset by an LSTM neural network. The circuit comprises the internal resistance R0, polarization resistance Rp, and polarization capacitance Cp. Factors like discharge profiles, SOC state, temperature, and aging can affect these parameters, but real-time monitoring was unnecessary as the LSTM compensates for errors. Only one parameter identification was performed per experiment, and the average values were used in the EKF. Step Response Analysis [211] (SRA) was employed to estimate the internal parameters.
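For orientation, the sketch below simulates one discharge with a generic first-order R-C equivalent-circuit model of the kind described above: the terminal voltage is the open-circuit voltage minus the ohmic drop over R0 and the polarization voltage governed by Rp and Cp. The OCV curve and all parameter values are illustrative assumptions, not those identified by Shin et al.

```python
# A minimal first-order R-C equivalent-circuit sketch:
# V_terminal = OCV(SOC) - I*R0 - V_p, with V_p following first-order dynamics.
import numpy as np

def rc_model_step(soc, v_p, current, dt, capacity_ah=3.0,
                  r0=0.05, r_p=0.03, c_p=2000.0):
    """Advance SOC and polarization voltage by one time step; return terminal voltage."""
    soc = soc - current * dt / (capacity_ah * 3600.0)           # coulomb counting
    tau = r_p * c_p                                              # polarization time constant
    v_p = v_p * np.exp(-dt / tau) + r_p * (1 - np.exp(-dt / tau)) * current
    ocv = 3.4 + 0.8 * soc                                        # crude linear OCV(SOC) placeholder
    v_term = ocv - current * r0 - v_p
    return soc, v_p, v_term

soc, v_p = 1.0, 0.0
for _ in range(600):                                             # 600 s of a 1 A discharge
    soc, v_p, v_term = rc_model_step(soc, v_p, current=1.0, dt=1.0)
print(round(soc, 4), round(v_term, 3))
```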
The research highlights the importance of precise experimental setups, such as terahertz inspection and thermal conductivity measurement, in generating high-quality datasets that can effectively train neural networks. Additionally, the use of interpolation methods and simplified circuit models underscores the potential for overcoming data limitations and computational challenges.

3.2. Interpretability

The black-box nature of LSTM models [212] poses challenges in interpreting their outputs. Developing methods to enhance model transparency and interpretability is important for their broader acceptance and application.
Guo et al. [213] explore enhancing LSTM recurrent neural networks for time-series data by making their predictions more interpretable. The study introduces a method to learn variable-wise hidden states within the LSTM to capture individual variable dynamics and their contributions to predictions [214]. A mixture attention mechanism is developed to model the generative process of the target variable, allowing for joint learning of network parameters, variable importance, and temporal importance. The approach improves prediction performance and provides insights into variable contributions. The method supports multistep predictions and evaluates results both qualitatively and quantitatively, aiming to offer an end-to-end framework for forecasting and knowledge extraction in multivariable contexts.
Liang et al. [215] introduce Structure-Evolving LSTM, a framework for learning interpretable data representations using LSTM networks with hierarchical graph structures. Unlike fixed-structure LSTM models, this approach dynamically learns intermediate graph representations.
Framework overview:
  • Initial graph [216]: Start with an element-level graph $G^{(0)} = \langle V^{(0)}, E^{(0)} \rangle$, where the nodes $v_i^{(0)}$ are data elements represented by features $f_i^{(0)}$.
  • Graph evolution [217]: In each LSTM layer, nodes are merged based on compatibility, estimated using LSTM gate outputs, and guided by a Metropolis–Hastings algorithm to avoid local optima.
For the $t$-th LSTM layer with graph $G^{(t)} = \langle V^{(t)}, E^{(t)} \rangle$, the updates are defined as follows:
  • Hidden and memory states:
    $$h_i^{(t)} = \tanh\big(g_o^{(t)} \odot m_i^{(t)}\big)$$
    $$m_i^{(t)} = \frac{1}{|\mathcal{N}_{G^{(t)}}(i)|} \sum_{j \in \mathcal{N}_{G^{(t)}}(i)} \Big[\mathbf{1}(q_j = 1)\,\bar{g}_{ij}^{(t)} \odot m_j^{(t)} + \mathbf{1}(q_j = 0)\,\bar{g}_{ij}^{(t)} \odot m_j^{(t-1)}\Big] + g_f^{(t)} \odot m_i^{(t-1)} + g_u^{(t)} \odot g_c^{(t)}$$
  • Gates:
    $$g_u^{(t)} = \sigma\big(W_u f_i^{(t)} + U_u h_i^{(t-1)} + U_{un} \bar{h}_i^{(t-1)} + b_u\big)$$
    $$g_f^{(t)} = \sigma\big(W_f f_i^{(t)} + U_f h_i^{(t-1)} + b_f\big)$$
    $$g_c^{(t)} = \tanh\big(W_c f_i^{(t)} + U_c h_i^{(t-1)} + U_{cn} \bar{h}_i^{(t-1)} + b_c\big)$$
    $$g_o^{(t)} = \sigma\big(W_o f_i^{(t)} + U_o h_i^{(t-1)} + U_{on} \bar{h}_i^{(t-1)} + b_o\big)$$
    $$\bar{g}_{ij}^{(t)} = \sigma\big(W_f f_i^{(t)} + U_{fn} h_j^{(t-1)} + b_f\big)$$
The merging probability [218] $p_{ij}^{(t)}$ is used to evaluate the likelihood of merging two nodes $i$ and $j$ in the higher-level graph structure at time step $t$. It is calculated using the sigmoid function applied to a linear combination of adaptive gate outputs:
$p_{ij}^{(t)} = \sigma\big( W^e\, \bar{g}_{ij}^{(t)} \big)$
where
  • $\sigma$ is the sigmoid function.
  • $W^e$ are the weights for the merging probability.
  • $\bar{g}_{ij}^{(t)}$ are the adaptive gates that measure the influence of nodes $i$ and $j$ based on their states.
The transition probability [219,220] $\alpha\big( G^{(t)} \rightarrow G^{(t+1)} \big)$ is used in the Metropolis–Hastings [221] algorithm to decide whether to accept the new graph $G^{(t+1)}$. It is given by
$\alpha\big( G^{(t)} \rightarrow G^{(t+1)} \big) = \min\left( 1,\; \dfrac{q\big( G^{(t+1)} \rightarrow G^{(t)} \big)}{q\big( G^{(t)} \rightarrow G^{(t+1)} \big)} \cdot \dfrac{P\big( G^{(t+1)} \mid I; W, U \big)}{P\big( G^{(t)} \mid I; W, U \big)} \right)$
where
  • $q\big( G^{(t+1)} \rightarrow G^{(t)} \big)$ is the probability of transitioning from graph $G^{(t+1)}$ back to $G^{(t)}$.
  • $q\big( G^{(t)} \rightarrow G^{(t+1)} \big)$ is the probability of transitioning from graph $G^{(t)}$ to $G^{(t+1)}$.
  • $P\big( G^{(t+1)} \mid I; W, U \big)$ is the posterior probability of graph $G^{(t+1)}$ given the model parameters and input data.
  • $P\big( G^{(t)} \mid I; W, U \big)$ is the posterior probability of graph $G^{(t)}$ given the model parameters and input data.
The acceptance probability ratio is used to determine the likelihood of accepting the new graph $G^{(t+1)}$ in the Metropolis–Hastings algorithm. It is given by
$\dfrac{q\big( G^{(t+1)} \rightarrow G^{(t)} \big)}{q\big( G^{(t)} \rightarrow G^{(t+1)} \big)} \propto \prod_{(i,j) \in E^{(t)} \setminus E^{(t+1)}} p_{ij}^{(t)}$
where
  • $q\big( G^{(t+1)} \rightarrow G^{(t)} \big)$ and $q\big( G^{(t)} \rightarrow G^{(t+1)} \big)$ are the transition probabilities between graphs.
  • $\prod_{(i,j) \in E^{(t)} \setminus E^{(t+1)}} p_{ij}^{(t)}$ is the product of merging probabilities for all edges that are removed in $G^{(t+1)}$.
In summary, the probabilities used in this framework serve the following roles:
  • The merging probability $p_{ij}^{(t)}$ helps decide whether to merge two nodes based on their mutual influence.
  • The transition probability $\alpha\big( G^{(t)} \rightarrow G^{(t+1)} \big)$ is used to accept or reject the proposed graph, taking structural improvements into account.
  • The acceptance probability determines the likelihood of accepting the new graph based on changes in the graph structure and the merging probabilities.
These probabilities evolve the graph structure and adapt the model to better represent and process the data.
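As a rough illustration of how these quantities interact, the sketch below (written for this review rather than taken from [215]) computes a merging probability from an adaptive gate vector and applies the Metropolis–Hastings acceptance test; the variable names, the weight vector `w_e`, and the use of log-posteriors are illustrative assumptions.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def merging_probability(g_bar_ij, w_e):
    """p_ij = sigma(W_e . g_bar_ij): likelihood of merging nodes i and j."""
    return sigmoid(np.dot(w_e, g_bar_ij))

def mh_accept(log_post_new, log_post_old, removed_edge_probs, rng):
    """Metropolis-Hastings acceptance test for a proposed merged graph.

    The proposal ratio q(new->old)/q(old->new) is approximated by the product
    of merging probabilities of the edges removed by the merge (see text).
    """
    proposal_ratio = np.prod(removed_edge_probs)
    alpha = min(1.0, proposal_ratio * np.exp(log_post_new - log_post_old))
    return rng.random() < alpha

# Hypothetical usage with placeholder gate outputs and posteriors
rng = np.random.default_rng(0)
p_ij = merging_probability(g_bar_ij=np.array([0.3, -0.1, 0.7]),
                           w_e=np.array([0.5, 0.2, 0.1]))
accepted = mh_accept(log_post_new=-10.2, log_post_old=-11.0,
                     removed_edge_probs=[p_ij], rng=rng)
```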
The Structure-Evolving LSTM is tested on semantic object parsing tasks, demonstrating improved performance over traditional LSTM models by efficiently capturing multilevel semantic abstractions.

4. Future Directions

4.1. Integration with Reinforcement Learning (RL)

The integration of LSTM networks with other advanced technologies, such as reinforcement learning [222] (RL) and hybrid models, holds promise for further enhancing predictive capabilities in polymeric sciences. Figure 5 provides a conceptual overview of how LSTM networks can be integrated with RL and their applications in dialog systems and materials science.
Williams et al. [223] introduce an end-to-end model for task-oriented dialog systems using LSTM networks. The model’s core is an LSTM that maps raw dialog history directly to a distribution over system actions. This design automates the feature engineering of the dialog state, allowing developers to focus on implementing business rules and APIs for real-world actions. The LSTM can be trained using supervised learning [224] (SL), where it mimics example dialogs, or RL, where it learns through user interaction. Experiments reveal that SL and RL are complementary: SL initializes a reasonable policy from a few dialogs, and RL further refines this policy, accelerating learning.
SL trains the LSTM to replicate dialogs provided by developers. For large-scale deployment, RL is employed, where the system receives a reward (1 for task completion, 0 otherwise) and aims to maximize the expected return. A discount factor of 0.95 encourages faster dialog completion.
The policy gradient approach updates weights w as follows:
$w \leftarrow w + \alpha \sum_t \nabla_w \log \pi\big( a_t \mid h_t; w \big)\,(R - b)$
where $\alpha$ is the learning rate, $a_t$ is the action at time $t$, $h_t$ is the dialog history, $R$ is the dialog return, $b$ is a baseline, and $\pi(a \mid h; w)$ is the policy distribution parameterized by $w$. The baseline $b$ estimates the average return from the last 100 dialogs.
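A compact sketch of this policy-gradient update for an LSTM dialog policy is given below; PyTorch is assumed, and the layer sizes, feature dimensionality, and running-average baseline are illustrative choices rather than details from [223], which estimates the baseline from the last 100 dialogs.

```python
import torch
import torch.nn as nn

class LSTMPolicy(nn.Module):
    """Maps a dialog-history feature sequence to a distribution over system actions."""
    def __init__(self, n_features, n_actions, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_actions)

    def forward(self, history):                        # history: (1, T, n_features)
        out, _ = self.lstm(history)
        return torch.distributions.Categorical(logits=self.head(out[:, -1]))

policy = LSTMPolicy(n_features=20, n_actions=10)       # sizes are placeholders
optimizer = torch.optim.Adadelta(policy.parameters())  # AdaDelta, as in the cited setup
baseline = 0.0                                          # crude stand-in for the averaged return b

def reinforce_update(log_probs, dialog_return):
    """Apply w <- w + alpha * sum_t grad_w log pi(a_t | h_t; w) * (R - b)."""
    global baseline
    loss = -(dialog_return - baseline) * torch.stack(log_probs).sum()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    baseline = 0.99 * baseline + 0.01 * dialog_return   # running average instead of last-100 mean

# During a dialog: at each turn, sample an action and keep its log-probability, e.g.
# dist = policy(history); action = dist.sample(); log_probs.append(dist.log_prob(action))
```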
To improve convergence, the following modifications are made:
  • Action Mask [225]: A small constant is added to action probabilities to avoid undefined logarithms.
  • Momentum [226]: AdaDelta optimization accelerates convergence.
  • Policy Reconstruction [227]: After each RL update, the policy is checked against the training set, with SL applied if necessary to ensure it reconstructs the training dialogs.
The RL optimization is evaluated with and without initial SL. Results show that RL alone may struggle without SL pretraining. Adding a few SL dialogs accelerates learning and improves policy performance.
In materials science, especially with complex polymers, understanding and interpreting experimental data can be challenging due to the high dimensionality and variability of the data. The LSTM-based dialog system’s ability to automate the interpretation of dialog history can be analogous to automating the analysis of experimental data [228]. By training LSTM models to predict material properties or behaviors based on historical experimental data, researchers can streamline the process of identifying patterns and insights [229].
The RL component of the model can be adapted to optimize experimental procedures. Just as RL refines dialog policies based on user interactions, it can refine experimental protocols by learning from past experiments [230]. For example, RL can be used to optimize polymer synthesis conditions, adjusting parameters like temperature, time, and concentrations to maximize desired properties such as tensile strength or elasticity [231].
The combination of SL and RL can be leveraged to discover new materials [232]. SL can provide an initial model based on known data, while RL can explore new experimental conditions or material combinations to discover promising new polymers. For instance, SL could be used to learn from existing polymer databases, and RL could be used to explore new chemical formulations or processing conditions.
In the design of advanced polymers, dialog systems can be replaced by optimization systems that suggest material formulations or processing conditions based on input criteria [233]. By using LSTM networks to infer material design requirements and RL to iteratively improve the design, researchers can develop polymers more efficiently and effectively.

4.2. Integration with Heuristic Algorithms

The integration of heuristic algorithms, particularly genetic algorithms, with LSTM [234] models can also enhance the performance of predictive models. This combination leverages the LSTM’s ability to capture complex temporal dependencies, leading to improved accuracy and efficiency in predictions. Figure 6 provides a clear overview of how genetic algorithms can enhance LSTM networks and their applications in various domains, such as predictive maintenance, quality analysis, and optimization.
Understanding the remaining useful life [235] (RUL) of equipment is essential for effective predictive maintenance (PdM), addressing issues such as equipment downtime and unnecessary maintenance. Chui et al. [236] introduce a hybrid approach combining CEEMD and Wavelet Packet Transform [237] (WPT) for feature extraction, and RNN with LSTM for prediction.
The CEEMD-WPT method improves feature extraction by reducing noise and capturing both time and frequency information. The steps are as follows:
Decomposition with CEEMD:
$\bar{x}_i(t) = x(t) + \sigma\, w_i(t)$
$IMF_1^{\,i}(t) = \mathrm{EMD}\big( \bar{x}_i(t) \big)$
$IMF_1(t) = \dfrac{1}{L} \sum_{i=1}^{L} IMF_1^{\,i}(t)$
$r_1(t) = x(t) - IMF_1(t)$
Further decomposition with WPT:
$C_{L}^{\,j,k} = \sum_{l=0}^{M-1} IMF_{j,\,2k+l} \cdot h_{\mathrm{low},\,l}$
$C_{H}^{\,j,k} = \sum_{l=0}^{M-1} IMF_{j,\,2k+l} \cdot h_{\mathrm{high},\,l}$
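The ensemble-averaging step can be approximated with a plain EEMD-style loop, as in the sketch below: add white noise, take the first IMF from EMD, and average over trials. It relies on the third-party PyEMD package for the EMD itself and a single PyWavelets call to illustrate the subsequent low-/high-pass split; the noise amplitude, trial count, and wavelet are assumed values, and the full CEEMD noise pairing and WPT tree of [236] are not reproduced.

```python
import numpy as np
import pywt                      # PyWavelets, used for the low/high-pass split
from PyEMD import EMD            # third-party EMD implementation (EMD-signal package)

def ceemd_first_imf(x, n_trials=50, noise_std=0.2, seed=0):
    """Ensemble-averaged first IMF: add white noise, run EMD, average IMF_1 over trials."""
    rng = np.random.default_rng(seed)
    emd = EMD()
    imf1_sum = np.zeros_like(x, dtype=float)
    for _ in range(n_trials):
        noisy = x + noise_std * np.std(x) * rng.standard_normal(len(x))
        imfs = emd.emd(noisy)            # rows: IMF_1, IMF_2, ..., residue
        imf1_sum += imfs[0]
    imf1 = imf1_sum / n_trials           # IMF_1(t) = (1/L) sum_i IMF_1^i(t)
    residue = x - imf1                   # r_1(t) = x(t) - IMF_1(t)
    return imf1, residue

# One level of wavelet filtering of the extracted IMF; a WPT tree would repeat this recursively:
# imf1, _ = ceemd_first_imf(vibration_signal)
# c_low, c_high = pywt.dwt(imf1, 'db4')
```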
One of the key benefits of integrating genetic algorithms (GA) with LSTM models lies in hyperparameter optimization. Tuning the hyperparameters for LSTM models—such as the number of LSTM layers and the sizes of hidden layers—can be both time-consuming and computationally intensive. GA offers an efficient method to search for the optimal set of hyperparameters [238]. By optimizing these parameters, GA can improve the performance and accuracy of LSTM models used to analyze materials data, such as predicting material strength or lifespan. A minimal sketch of such a search is given below.
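The following minimal GA loop illustrates this kind of hyperparameter search; the search space, truncation selection, and mutation rate are illustrative, and `fitness` is assumed to be a user-supplied function that trains an LSTM with the given genome and returns a validation error (for example, the RMSE of an RUL prediction).

```python
import random

SEARCH_SPACE = {
    "n_layers":    [1, 2, 3],
    "hidden_size": [32, 64, 128, 256],
    "dropout":     [0.0, 0.2, 0.4],
}

def random_genome():
    return {k: random.choice(v) for k, v in SEARCH_SPACE.items()}

def crossover(a, b):
    # Uniform crossover: each hyperparameter is inherited from one of the two parents
    return {k: random.choice([a[k], b[k]]) for k in SEARCH_SPACE}

def mutate(genome, rate=0.2):
    # With probability `rate`, resample a hyperparameter from its allowed values
    return {k: (random.choice(v) if random.random() < rate else genome[k])
            for k, v in SEARCH_SPACE.items()}

def ga_search(fitness, pop_size=10, generations=5):
    """fitness(genome) -> validation error of an LSTM trained with that genome (lower is better)."""
    population = [random_genome() for _ in range(pop_size)]
    for _ in range(generations):
        ranked = sorted(population, key=fitness)
        parents = ranked[: pop_size // 2]                     # truncation selection
        children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                    for _ in range(pop_size - len(parents))]
        population = parents + children
    return min(population, key=fitness)
```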
Another application is in predicting material lifespan [239]. LSTM networks are adept at capturing temporal dependencies in data for predicting the remaining useful life of materials. When combined with GA, which can fine-tune model architecture and parameters, LSTM models become more accurate in predicting material lifespan. This integration helps in proactive maintenance and prevents material failures.
In the realm of quality analysis, materials science often involves complex data analysis to assess the quality of materials based on various tests and properties. GA can optimize feature selection [240] and parameters for LSTM models, enabling more accurate analysis of material quality. This assists in developing new materials with desired properties and ensures quality control.
The integration also proves beneficial in optimizing production processes. Managing production processes, such as controlling temperature and pressure, requires precise data analysis to ensure optimal conditions. By optimizing LSTM models with GA [241], predictions and controls for production processes become more accurate. This results in improved efficiency and reduced production costs.
Finally, in enhancing simulation models, materials science often relies on simulations to understand material behavior under different conditions. GA can be employed to optimize the parameters of LSTM-based simulation models, thereby improving the accuracy of simulations. This leads to better predictions of material behavior in real-world scenarios [242].
In summary, the combination of GA with LSTM models offers substantial improvements in materials science by optimizing model accuracy, simplifying hyperparameter tuning, and enhancing data analysis processes. This integration leads to more precise predictions of material properties and behaviors, improved quality control, and more efficient production processes.

4.3. Real-Time Applications

Accurate and prompt damage detection in Structural Health Monitoring (SHM) is crucial, especially under varying ambient temperatures. The same approach can also be highly beneficial in polymer science, particularly for real-time applications: in both domains, the material’s response to environmental conditions impacts its performance and longevity. Figure 7 provides a visual overview of how LSTM networks can be applied in real time to both structural health monitoring and polymer science, highlighting their roles in prediction, anomaly detection, and damage localization.
For example, Sharma et al. [243] introduce a real-time SHM approach based on an LSTM network. The approach consists of two key components: an unsupervised LSTM prediction network for anomaly detection and a supervised classifier network for damage localization.
The LSTM prediction network is trained on healthy (undamaged) structural response data to predict one-step-ahead responses under varying operational conditions. The prediction error $e_k$ at time $k$ is calculated as
$e_k = y_k - \hat{y}_k = y_k - \mathrm{LSTM}(y_{k-1})$
where $y_k$ is the actual response and $\hat{y}_k$ is the predicted response. The prediction error $e_k$ follows a Gaussian distribution:
$e_k \sim \mathcal{N}(\mu_e, \sigma_e^2)$
The likelihood $L_k$ of the prediction error is computed as
$L_k = \dfrac{1}{\sqrt{2\pi\sigma_e^2}} \exp\left( -\dfrac{(e_k - \mu_e)^2}{2\sigma_e^2} \right)$
A significant drop in $L_k$ indicates potential structural damage.
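In practice, the likelihood test reduces to a few lines once the one-step-ahead LSTM predictor is available. The sketch below (not the authors’ code) fits the Gaussian error statistics on healthy data and flags time steps whose log-likelihood falls below a threshold chosen from healthy validation data.

```python
import numpy as np

def fit_error_stats(errors_healthy):
    """Estimate mu_e and sigma_e^2 from one-step-ahead prediction errors on healthy data."""
    return float(np.mean(errors_healthy)), float(np.var(errors_healthy))

def log_likelihood(e_k, mu_e, var_e):
    """Log of the Gaussian likelihood L_k of a prediction error e_k."""
    return -0.5 * np.log(2.0 * np.pi * var_e) - (e_k - mu_e) ** 2 / (2.0 * var_e)

def damage_flag(y_k, y_pred_k, mu_e, var_e, log_l_threshold):
    """Flag potential damage when the likelihood of the prediction error drops sharply."""
    e_k = y_k - y_pred_k                  # e_k = y_k - LSTM(y_{k-1})
    return log_likelihood(e_k, mu_e, var_e) < log_l_threshold
```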
Upon detecting damage, a supervised classifier network is activated to localize the damage. The classifier network is trained on simulated damaged responses generated from a high-fidelity finite element model of the structure. The model is updated to match the dynamic properties of the real structure, and damage is simulated by reducing elasticity, generating the training data for localization. This approach was tested on a real bridge subjected to significant thermal variations, demonstrating reliable and prompt damage detection and localization across different operating conditions.
Another case of a real-time LSTM application is presented by Gu et al. [244], who introduce a real-time dynamic prediction model for carbon content during the second-blowing stage of steelmaking. Accurate prediction of the endpoint carbon content is essential for controlling the converter steelmaking process. The approach integrates a Case-Based Reasoning [245] (CBR) algorithm to retrieve similar historical cases and their process parameters, followed by training an LSTM model with these parameters to forecast the carbon content at the next time step. The model’s predictions were validated using actual production data, demonstrating improved accuracy.
Just as the SHM approach utilizes LSTM [246,247] for detecting and localizing structural damage under varying ambient temperatures in the example above, similar techniques can be applied to predict and monitor the behavior of polymers in real time.
Polymers are often subjected to dynamic environments where factors such as temperature, humidity, and mechanical stress can affect their structural integrity [248]. Real-time monitoring of these changes may help to predict failures and ensure material reliability. An LSTM-based approach, akin to the one used in SHM, can be implemented to model the time-dependent behavior of polymers, particularly their viscoelastic properties, under different operational conditions [249,250]. An LSTM network would be trained on historical data representing the polymer’s response to various stimuli, allowing it to predict future behavior. For example, the network could predict the degradation of a polymer’s mechanical properties over time, similar to how it predicts structural responses in SHM.
Just as in SHM, where a drop in the prediction likelihood L k signals potential structural damage, a similar approach can be used in polymers to detect anomalies such as the onset of cracking [251], crazing [252], or other forms of material degradation [253]. By setting a threshold for the prediction error or likelihood, the system can trigger an alert when the polymer’s behavior deviates significantly from the expected norm, enabling real-time intervention.
For damage localization in polymers [254], a supervised classifier network could be employed to identify the specific type or location of damage within a polymeric structure. This could involve training the network on simulated data, similar to how it is done in SHM with finite element models but tailored to the characteristics of polymers, such as variations in molecular weight, cross-linking density, or filler distribution.
Consider the real-time monitoring of a polymer coating subjected to fluctuating temperatures [255]. An LSTM network could be trained on data reflecting the coating’s response to temperature changes. Over time, if the coating begins to deteriorate—manifesting as microcracks [256] or changes in elasticity—the LSTM model would detect these anomalies, and the classifier network could pinpoint the affected areas, allowing for targeted maintenance before failure occurs.
Integrating the LSTM-based real-time monitoring and anomaly detection approach from SHM into polymer science could be used to predict, detect, and localize damage in polymeric materials [257] under dynamic conditions. This connection opens up new possibilities for ensuring the reliability and safety of polymers in various applications, from coatings and composites to biomedical devices and packaging materials.

5. Conclusions

This review explored the application of LSTM networks in the field of polymer science. The integration of LSTM networks has transformed the performance and efficiency of various applications in polymer science and engineering. LSTM models, with their ability to capture temporal dependencies and long-term patterns in sequential data, have proven to be highly effective in improving the accuracy and reliability of predictions and classifications. This section discusses the specific improvements observed when LSTM was integrated into different studies.

5.1. Improvement in Performance and Efficiency with LSTM Integration

One of the most notable improvements when LSTM was integrated is the increase in predictive accuracy. For instance, in the study by Luong et al. [185], the LSTM network was used to predict the nonlinear behavior of an antagonistic joint driven by a hybrid TCA bundle. The model demonstrated a significant reduction in prediction errors, with an RMSE of 0.05 and an MAE of 0.04. This improvement highlights the capability of LSTM to handle complex, nonlinear relationships in time-series data.
Similarly, in the work by Kumar et al. [187], a hybrid LSTM-GRU model with CEEMDAN preprocessing was employed to detect faults in polymer gears. The model achieved an accuracy of 85% and a precision of 80%, showcasing the effectiveness of LSTM in fault detection applications. The integration of LSTM allowed for more accurate and reliable identification of faults, which is crucial for maintaining the operational integrity of polymer gears.
LSTM models have also been instrumental in optimizing various industrial processes. Shunhu et al. [189] utilized a CNN-LSTM network to correlate process parameters with outcomes in the drilling of CFRP components. The model exhibited a mean squared error (MSE) of 0.03 and an R-squared value of 0.92, indicating a high degree of correlation and predictive power. This integration of LSTM led to more efficient drilling processes, with improved quality and energy efficiency.
In another study by Aklouche et al. [190], a Bidirectional LSTM (BiLSTM) model with VMD preprocessing was used to estimate damage severity in CFRP using LW data. The model achieved an RMSE of 0.06 and an MAE of 0.05, demonstrating its ability to accurately predict damage severity. This improvement in predictive capability can lead to more efficient maintenance and repair strategies, thereby enhancing the overall efficiency of the system.
LSTM models have also shown promise in real-time applications, where quick and accurate predictions are essential. Lin et al. [201] developed an LSTM model for the real-time prediction of hydrogen fuel rejection (HFR) in PEMFCs. The model achieved an accuracy of 80% and a precision of 75%, highlighting its effectiveness in real-time monitoring and control applications. The integration of LSTM allowed for the more efficient operation of PEMFCs, with improved performance and reduced downtime.
In the field of sensor technologies, LSTM models have been used to enhance classification and detection capabilities. Lorenzo et al. [202] employed a 1D CNN and SVM+RBF model to classify plastics using hyperspectral images. The model achieved an accuracy of 75% and an F1 score of 0.70, demonstrating its effectiveness in plastic classification. The integration of LSTM in this context allowed for more accurate and reliable classification, which is crucial for recycling and waste management applications.
Similarly, Choi et al. [203] used 1D CNN and LSTM models to enhance mechanical stability and motion detection in PBU/AgNW/PBU sensors. The model achieved a precision of 82% and a recall of 78%, showcasing its ability to accurately detect motion. This improvement in detection capability can lead to more efficient and reliable sensor systems, with applications in various fields such as robotics and healthcare.

5.2. Elementary Data Components for Effective LSTM Analysis

The successful application of LSTM networks in delivering reliable new insights and enhancing the understanding of known problems hinges on the quality and structure of the input data. LSTM models are particularly effective in handling sequential data, where temporal dependencies and long-term patterns are crucial. This section explores the elementary data components that are essential for performing LSTM analyses effectively.
One of the fundamental requirements for LSTM models is the presence of sequential data. These data should be structured in a way that captures the temporal dynamics of the phenomenon being studied. For instance, time-series data, such as sensor readings, financial market trends, or polymer degradation measurements over time, are ideal for LSTM applications. The sequential nature of the data allows LSTM models to learn from past observations and make predictions about future states [258].
Effective feature engineering impacts the performance of LSTM models. Features should be carefully selected and engineered to capture the most relevant aspects of the data. In the context of polymer science, features might include physical properties, chemical compositions, environmental conditions, and operational parameters. For example, in predicting the mechanical response of CFRP laminates, features such as fiber orientation, matrix properties, and loading conditions are essential. Proper feature engineering ensures that the LSTM model can learn meaningful patterns and relationships in the data.
Preprocessing the data includes normalization, scaling, and handling missing values. Normalization ensures that all features are on a similar scale, which is important for the stability and convergence of the LSTM model [259]. Scaling techniques, such as Min-Max scaling or Z-score normalization, are commonly used. Additionally, handling missing values through imputation or interpolation is necessary to maintain the integrity of the sequential data [260].
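A minimal preprocessing sketch combining these steps is shown below; the pandas DataFrame layout, the hypothetical column names, the linear interpolation of gaps, the Min-Max scaling, and the window length of 50 steps are all illustrative assumptions. The resulting arrays have the (samples, time steps, features) shape that LSTM layers typically expect.

```python
import numpy as np
import pandas as pd

def prepare_sequences(df, feature_cols, target_col, window=50):
    """Interpolate gaps, Min-Max scale each feature, and cut sliding windows for an LSTM.

    `df` is assumed to be indexed by time with a consistent sampling interval.
    Returns X with shape (samples, window, features) and y with shape (samples,).
    """
    df = df.sort_index().interpolate(method="linear")                    # fill missing values
    x = df[feature_cols].to_numpy(dtype=float)
    x = (x - x.min(axis=0)) / (x.max(axis=0) - x.min(axis=0) + 1e-12)    # Min-Max scaling
    y = df[target_col].to_numpy(dtype=float)

    X, Y = [], []
    for t in range(window, len(df)):
        X.append(x[t - window:t])        # the past `window` steps form one input sequence
        Y.append(y[t])                   # the value to predict at step t
    return np.asarray(X), np.asarray(Y)

# Hypothetical usage with time-stamped sensor data on a polymer sample:
# X, y = prepare_sequences(df, ["temperature", "humidity", "stress"], "modulus", window=50)
```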
LSTM models excel at capturing temporal dependencies in the data. Therefore, it is essential to ensure that the data contain sufficient temporal information. This can be achieved by including time-stamped records, ensuring consistent sampling intervals, and maintaining the chronological order of the data. For instance, in predicting the state of health (SoH) of lithium polymer batteries, the data should include time-stamped measurements of charge/discharge cycles, voltage, and current.
Including contextual information can significantly enhance the performance of LSTM models. Contextual information provides additional insights into the data, such as environmental conditions, operational settings, or external factors that may influence the phenomenon being studied. For example, in predicting the degradation of polymer composites, contextual information might include temperature, humidity, and mechanical stress. This information helps the LSTM model to understand the underlying mechanisms and make more accurate predictions.
For supervised learning tasks, labeled data provide the ground truth against which the LSTM model can be trained and evaluated. In the context of polymer science, labels might include classifications of material states, performance metrics, or degradation levels. For instance, in classifying substances within GFRP structures using THz-TDS, the data should include labeled examples of different substances. Proper labeling ensures that the LSTM model can learn to accurately classify and predict the desired outcomes.

5.3. Challenges in LSTM Application

However, several challenges remain that hinder the full potential of LSTM networks in the field of polymer science. This section discusses these challenges and provides suggestions on what polymer scientists can do to improve the efficiency of LSTM models, including potential areas for further research and development.
The performance of LSTM models heavily relies on the quality and availability of data. In polymer science, obtaining high-quality time-series data can be challenging due to the complexity of experimental setups and the variability of material properties. Polymer scientists should focus on developing standardized protocols for data collection and preprocessing. Collaboration with data scientists can help in designing robust data pipelines that ensure the integrity and consistency of the data.

5.3.1. Feature Engineering

Effective feature engineering strongly influences the performance of LSTM models. However, identifying the most relevant features in polymer data can be complex due to the multitude of influencing factors such as chemical composition, environmental conditions, and mechanical properties. Researchers should explore automated feature selection techniques and domain-specific feature engineering methods. Advanced ML algorithms, such as genetic algorithms and feature importance analysis, can be employed to identify the most impactful features.

5.3.2. Model Complexity and Computational Cost

LSTM models can be computationally intensive, especially when dealing with large datasets and complex polymer systems. This can limit their practical application in real-time monitoring and control systems. Investigating model simplification techniques and efficient training algorithms can help reduce computational costs. Techniques such as model pruning, quantization, and the use of lightweight LSTM variants can be explored to make the models more computationally efficient.
LSTM models trained on specific datasets may not generalize well to other polymer systems or conditions. This can limit their applicability in diverse and dynamic environments. Polymer scientists should focus on developing transfer learning approaches that allow models to adapt to new datasets and conditions. Techniques such as domain adaptation and meta-learning can be employed to improve the generalization and transferability of LSTM models.

5.3.3. Interpretability and Explainability

The black-box nature of LSTM models can make it difficult to interpret their predictions and understand the underlying mechanisms. This can be a barrier to their adoption in critical applications where transparency is essential. Researchers should explore interpretable ML techniques, such as SHAP (SHapley Additive exPlanations) and LIME (Local Interpretable Model-agnostic Explanations), to provide insights into models’ decision-making processes. Additionally, developing hybrid models that combine LSTM with interpretable models can enhance explainability.
Combining LSTM models with other ML techniques, such as reinforcement learning, can lead to more robust and adaptive systems. Hybrid models can leverage the strengths of different approaches to improve predictive accuracy and efficiency. Developing preprocessing techniques, such as data augmentation and noise reduction, can enhance the quality of the input data and improve the performance of LSTM models. Techniques like variational mode decomposition (VMD) and empirical mode decomposition (EMD) can be particularly useful in this regard.
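Beyond dedicated packages such as SHAP and LIME, a simpler model-agnostic check is permutation importance, sketched below for a trained sequence model; the `predict` callable, the MSE criterion, and the per-feature shuffling across samples are assumptions made for illustration rather than part of any cited study.

```python
import numpy as np

def permutation_importance(predict, X_val, y_val, n_repeats=5, seed=0):
    """Model-agnostic feature importance for a trained sequence model.

    predict : callable mapping sequences of shape (N, T, F) to predictions of shape (N,)
    Importance of feature f = average increase in MSE after shuffling feature f across samples.
    """
    rng = np.random.default_rng(seed)
    base_mse = np.mean((predict(X_val) - y_val) ** 2)
    n_features = X_val.shape[2]
    importance = np.zeros(n_features)
    for f in range(n_features):
        for _ in range(n_repeats):
            X_perm = X_val.copy()
            order = rng.permutation(len(X_val))
            X_perm[:, :, f] = X_val[order, :, f]   # break the link between feature f and the target
            importance[f] += np.mean((predict(X_perm) - y_val) ** 2) - base_mse
    return importance / n_repeats
```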

5.3.4. Real-Time Monitoring and Control Systems

Further research is needed to develop real-time monitoring and control systems that can effectively utilize LSTM models. This includes optimizing model inference times and integrating LSTM models with sensor networks and control algorithms.
Integrating data from multiple modalities, such as sensor data, spectroscopic data, and environmental data, can provide a more comprehensive view of polymer systems. Multimodal LSTM models can be developed to leverage these diverse data and improve predictive accuracy.
Collaborative research initiatives between polymer scientists, data scientists, and engineers can drive innovation in the application of LSTM models. Interdisciplinary collaborations can lead to the development of novel approaches and the sharing of best practices.

5.4. Itemized Key Findings

LSTM models, known for their capability to capture complex temporal dependencies and nonlinear relationships in data, have shown considerable promise in advancing polymer research and applications. The key findings are summarized as follows:
  • LSTM networks have been effectively utilized to predict various properties of polymers, such as mechanical strength, degradation rates, and thermal behavior. Their ability to analyze time-series data and discern historical trends enables accurate and robust predictions, crucial for the design and optimization of polymer materials.
  • LSTM models have demonstrated improvements in extracting meaningful features from complex polymer datasets. This ability is essential for reducing dimensionality and focusing on the most relevant variables, thereby enhancing the performance of predictive models and facilitating better material characterization.
  • The combination of LSTM models with other ML methods, such as genetic algorithms (GAs) and ensemble techniques, has proven beneficial in optimizing hyperparameters and improving prediction accuracy. These integrations help handle large and complex datasets more effectively.
  • Despite their advantages, the application of LSTM models in polymer science presents challenges, including the need for extensive computational resources, the complexity of model training, and the requirement for high-quality data. Addressing these issues through advanced optimization techniques and improved data acquisition methods is essential for further progress.
  • There is a potential for future research in the application of LSTM to polymers. Further studies could focus on enhancing model interpretability, integrating real-time data for dynamic predictions, and exploring novel polymer applications. Advances in computational power and algorithm efficiency are expected to facilitate more widespread adoption and refinement of LSTM-based models.

Author Contributions

Conceptualization, I.M., V.T. and A.G.; data curation, A.G. and V.N.; funding acquisition, A.G., V.N. and A.B.; investigation, V.T. and V.N.; methodology, I.M.; project administration, A.G., V.N. and A.B.; resources, I.M.; software, I.M., V.T., V.N. and A.B.; supervision, V.T., A.G., V.N. and A.B.; validation, A.B.; visualization, I.M.; writing—original draft, I.M.; writing—review and editing, V.T. and A.B. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
Machine learning (ML): A field of artificial intelligence focused on developing algorithms that enable computers to learn from data.
Long Short-Term Memory (LSTM): A type of recurrent neural network capable of remembering long-term dependencies in data.
Artificial neural network (ANN): A mathematical model inspired by the neural network of the brain, used for data processing and decision making.
Charge-Coupled Device (CCD): An electronic device used for capturing images in digital cameras and telescopes.
Facial recognition (Facial Recog.): Technology for identifying or verifying a person’s identity based on their facial image.
Object tracking (Obj. Tracking): The process of following the movement of an object in a sequence of images or video.
Chemical sensor arrays (Chem. Sensor Arrays): A system of multiple sensors used for detecting and analyzing chemical substances.
Temperature response (Temp. Resp.): The change in system parameters in response to a change in temperature.
Neural architecture (Neural Arch.): The structure and organization of a neural network.
Chemical awareness (Chem. Awareness): The ability of a system to detect and identify chemical substances.
Dynamic environments (Dyn. Envs.): Changing or unstable conditions in which a system operates.
Carbon black: A black carbon powder used as a filler in rubber and plastics.
Organic polymers (Org. Polymers): Polymers made of carbon compounds, widely used in various fields.
Poly(4-vinyl phenol) (P(4-vinyl phenol)): A polymer used in electronics manufacturing and coatings.
Poly(styrene-co-allyl alcohol) (P(styrene-co-allyl alcohol)): A copolymer used in plastics and coatings.
Poly(ethylene oxide) (P(ethylene oxide)): A polymer used in medicine, cosmetics, and the textile industry.
Classification tasks (Class. Tasks): Tasks related to categorizing data into classes or groups.
Traffic sign recognition (Traffic Sign Recog.): Technology for recognizing traffic signs for use in automated driving systems.
Olfactory signal classification (Olf. Signal Class.): The process of classifying smells based on signals obtained from olfactory sensors.
Temperature dynamics (Temp. Dyn.): The study of temperature change in a system over time.
Olfactory sensing systems (Olf. Sensing Sys.): Systems that use sensors to detect and analyze odors.
Extended Kalman Filter (EKF): A filtering algorithm used for state estimation in nonlinear dynamic systems.
State-of-charge estimation (SOC Est.): The estimation of a battery’s charge level based on measured data.
Lithium polymer batteries (Li-poly Batteries): A type of battery with a polymer electrolyte, known for high energy density.
Battery management system (BMS): A system that monitors and optimizes battery performance.
Carbon fiber-reinforced polymer (CFRP): A composite material made from carbon fiber, known for high strength and low weight.
Laser infrared thermography (Laser IR Thermography): A diagnostic method using infrared laser for temperature measurement in materials.
Defect depth assessment (Def. Depth Assess.): Determining the depth of defects in materials or structures.
Traffic sign recognition (TSR): The process of automatically recognizing traffic signs.
Generative deep learning (Generative DL): A branch of DL focused on generating new data based on a trained model.
Ultraviolet–visible spectra (UV-vis Spectra): Absorption and reflection spectra in the ultraviolet and visible range, used for substance analysis.
Coarse-grained models: Models that simplify complex systems while retaining essential characteristics.
Cable-driven robots: Robots controlled by a system of cables or wires.
Nonlinear characteristics (Nonlinear Char.): Properties of a system or material that do not follow linear laws.
Real-time control: The control of processes in real time.
Hierarchical recurrent neural network (H-RNN): A variant of recurrent neural network with a hierarchical structure.
Composite damage (Comp. Damage): Damage to composite materials under various factors.
Finite element model (FE Model): A numerical model used for solving problems in solid mechanics using the finite element method.
Twisted-coiled actuators: Actuators made of twisted and coiled fibers that change shape in response to temperature or electrical current.
Model Predictive Control (Model Predict. Control): A control algorithm that uses predictive models to optimize system performance.
Organic photovoltaic materials (OPV Materials): Organic materials used for making solar cells.
Simplified Molecular Input Line Entry System Fingerprints (SMILES Fingerprints): A string-based encoding of chemical structures used for molecular analysis and comparison.
Polymer repeat units: The basic structural elements that make up polymers.
Glass fiber-reinforced polymer (GFRP): A composite material reinforced with glass fiber, used in construction and engineering.
Terahertz time-domain spectroscopy (Terahertz Time-Domain Spec.): A method for studying materials using terahertz radiation.
Dielectric electroactive polymer actuators (DEAP Actuators): Actuators based on dielectric electroactive polymers that change shape when an electric field is applied.
Hysteresis: A phenomenon where the state of a system depends on its previous states despite identical current conditions.
Empirical mode decomposition (EMD): A method for signal analysis that decomposes signals into component frequencies.
Battery state-of-charge estimation (Battery SOC Est.): Estimation of a battery’s state of charge.
Plastic recycling: The process of recycling plastics for reuse.
Hyperspectral imaging: A method of acquiring and analyzing images that include spectral information across a wide range of wavelengths.
Polymer insulation resistance (Polymer Ins. Resist.): A polymer’s ability to resist electrical leakage.
Melt index: A measure of the flow rate of a polymer when melted under specific conditions.
Polymerization processes: Chemical processes in which monomers combine to form polymers.
Acoustic behavior: The characteristics of a system related to the generation, transmission, and absorption of sound waves.
Muffler design: The design and construction of mufflers to reduce noise.
Soft sensor: A software tool for estimating system parameters based on indirect measurements.
Dielectric electroactive polymer actuation (DEAP Act.): The actuation process of a device based on dielectric electroactive polymers.
Proportional–integral–derivative controller (PID Controller): A control algorithm using three components: proportional, integral, and derivative.
Ethyl acetate solution (Ethyl Acetate Sol.): A solution of ethyl acetate, used in various chemical processes.
Hybrid sensor: A sensor that combines multiple technologies to enhance accuracy and functionality.
Motor tics recognition (Motor Tics Recog.): A system for recognizing motor tics in individuals based on movement analysis.
Polymethyl methacrylate (PMMA): A transparent thermoplastic widely used in construction and medicine.
Heat transfer: The process of transferring heat from one object to another.
High-temperature proton exchange membrane fuel cell (High-Temp PEMFC): A fuel cell with a proton exchange membrane operating at high temperatures.
Hydrogen starvation (H2 Starvation): A condition where a fuel cell receives insufficient hydrogen.
Nafion membranes: Proton-conducting polymer membranes used in fuel cells.
Flooding and drying: Phenomena occurring in fuel cells due to excess moisture or drying of the membrane.
Tool wear prediction (Tool Wear Pred.): Predicting tool wear using data analysis and modeling.
Polybutadiene-urethane: A polymer used as an elastomer or coating.
Motion detection: Technology for detecting movement in space using sensors or cameras.
Knot identification (Knot Ident.): The process of recognizing knots in a rope or cord.
Structural health monitoring (SHM): Monitoring the condition of structures to detect defects or damage.
Lamb wave: A type of elastic wave that propagates in solid materials and is used for diagnostics.
Variational mode decomposition (VMD): A signal decomposition method for analyzing various modes of a signal.

References

  1. Pilania, G. Machine learning in materials science: From explainable predictions to autonomous design. Comput. Mater. Sci. 2021, 193, 110360. [Google Scholar] [CrossRef]
  2. Mishin, Y. Machine-learning interatomic potentials for materials science. Acta Mater. 2021, 214, 116980. [Google Scholar] [CrossRef]
  3. Morgan, D.; Jacobs, R. Opportunities and challenges for machine learning in materials science. Annu. Rev. Mater. Res. 2020, 50, 71–103. [Google Scholar] [CrossRef]
  4. Gogineni, A.; Rout, M.D.; Shubham, K. Evaluating machine learning algorithms for predicting compressive strength of concrete with mineral admixture using long short-term memory (LSTM) Technique. Asian J. Civ. Eng. 2024, 25, 1921–1933. [Google Scholar] [CrossRef]
  5. Dai, Y.; Wei, J.; Qin, F. Recurrent neural network (RNN) and long short-term memory neural network (LSTM) based data-driven methods for identifying cohesive zone law parameters of nickel-modified carbon nanotube reinforced sintered nano-silver adhesives. Mater. Today Commun. 2024, 39, 108991. [Google Scholar] [CrossRef]
  6. Mendeley. Search Results for “LSTM Polymers”. 2024. Available online: https://www.mendeley.com/search/?page=1&query=LSTM%20polymers&sortBy=relevance (accessed on 17 August 2024).
  7. Hochreiter, S.; Schmidhuber, J. Long short-term memory. Neural Comput. 1997, 9, 1735–1780. [Google Scholar] [CrossRef] [PubMed]
  8. Sherstinsky, A. Fundamentals of recurrent neural network (RNN) and long short-term memory (LSTM) network. Phys. D Nonlinear Phenom. 2020, 404, 132306. [Google Scholar] [CrossRef]
  9. Werbos, P.J. Backpropagation through time: What it does and how to do it. Proc. IEEE 1990, 78, 1550–1560. [Google Scholar] [CrossRef]
  10. Landi, F.; Baraldi, L.; Cornia, M.; Cucchiara, R. Working memory connections for LSTM. Neural Netw. 2021, 144, 334–341. [Google Scholar] [CrossRef]
  11. Candan, M.; Çubukçu, M. Implementation of Caputo type fractional derivative chain rule on back propagation algorithm. Appl. Soft Comput. 2024, 155, 111475. [Google Scholar] [CrossRef]
  12. Rahman, L.; Mohammed, N.; Al Azad, A.K. A new LSTM model by introducing biological cell state. In Proceedings of the 2016 3rd International Conference on Electrical Engineering and Information Communication Technology (ICEEICT), Dhaka, Bangladesh, 22–24 September 2016; IEEE: Piscataway, NJ, USA, 2016; pp. 1–6. [Google Scholar]
  13. Duan, J.; Zhang, P.F.; Qiu, R.; Huang, Z. Long short-term enhanced memory for sequential recommendation. World Wide Web 2023, 26, 561–583. [Google Scholar] [CrossRef]
  14. Choudhary, K.; DeCost, B.; Chen, C.; Jain, A.; Tavazza, F.; Cohn, R.; Park, C.W.; Choudhary, A.; Agrawal, A.; Billinge, S.J.; et al. Recent advances and applications of deep learning methods in materials science. NPJ Comput. Mater. 2022, 8, 59. [Google Scholar] [CrossRef]
  15. Danoun, A.; Prulière, E.; Chemisky, Y. FE-LSTM: A hybrid approach to accelerate multiscale simulations of architectured materials using Recurrent Neural Networks and Finite Element Analysis. Comput. Methods Appl. Mech. Eng. 2024, 429, 117192. [Google Scholar] [CrossRef]
  16. Yadav, H.; Thakkar, A. NOA-LSTM: An efficient LSTM cell architecture for time series forecasting. Expert Syst. Appl. 2024, 238, 122333. [Google Scholar] [CrossRef]
  17. Baruah, R.D.; Organero, M.M. Explicit Context Integrated Recurrent Neural Network for applications in smart environments. Expert Syst. Appl. 2024, 255, 124752. [Google Scholar] [CrossRef]
  18. Xu, S.; Xiong, J.; Zhang, T.Y. Translating strain to stress: A single-layer Bi-LSTM approach to predicting stress-strain curves in alloys during hot deformation. Mater. Res. Express 2024, 11, 076526. [Google Scholar] [CrossRef]
  19. Kamrava, S.; Tahmasebi, P.; Sahimi, M.; Arbabi, S. Phase transitions, percolation, fracture of materials, and deep learning. Phys. Rev. E 2020, 102, 011001. [Google Scholar] [CrossRef]
  20. Martinez, Q.; Chen, C.; Xia, J.; Bahai, H. Sequence-to-sequence change-point detection in single-particle trajectories via recurrent neural network for measuring self-diffusion. Transp. Porous Med. 2023, 147, 679–701. [Google Scholar] [CrossRef]
  21. Yu, Y.; Si, X.; Hu, C.; Zhang, J. A review of recurrent neural networks: LSTM cells and network architectures. Neural Comput. 2019, 31, 1235–1270. [Google Scholar] [CrossRef]
  22. Gunnarsson, B.R.; vanden Broucke, S.; De Weerdt, J. A direct data aware LSTM neural network architecture for complete remaining trace and runtime prediction. IEEE Trans. Serv. Comput. 2023, 16, 2330–2342. [Google Scholar] [CrossRef]
  23. Zhang, Q.; Zhang, Z.; Li, C.; Xu, R.; Yang, D.; Sun, L. Van der Waals materials-based floating gate memory for neuromorphic computing. Chip 2023, 2, 100059. [Google Scholar] [CrossRef]
  24. Li, H.; Shen, Y.; Zhu, Y. Stock price prediction using attention-based multi-input LSTM. In Proceedings of the Asian Conference on Machine Learning, PMLR, Beijing, China, 14–16 November 2018; pp. 454–469. [Google Scholar]
  25. Zaheer, S.; Anjum, N.; Hussain, S.; Algarni, A.D.; Iqbal, J.; Bourouis, S.; Ullah, S.S. A multi parameter forecasting for stock time series data using LSTM and deep learning model. Mathematics 2023, 11, 590. [Google Scholar] [CrossRef]
  26. Gers, F.A.; Schmidhuber, J.; Cummins, F. Learning to forget: Continual prediction with LSTM. Neural Comput. 2000, 12, 2451–2471. [Google Scholar] [CrossRef] [PubMed]
  27. Xue, K.; Yang, J.; Yang, M.; Wang, D. An improved generic hybrid prognostic method for RUL prediction based on PF-LSTM learning. IEEE Trans. Instrum. Meas. 2023, 72, 3509121. [Google Scholar] [CrossRef]
  28. Pulver, A.; Lyu, S. LSTM with working memory. In Proceedings of the 2017 International Joint Conference on Neural Networks (IJCNN), Anchorage, AK, USA, 14–19 May 2017; IEEE: Piscataway, NJ, USA, 2017; pp. 845–851. [Google Scholar]
  29. tu Zahra, F.; Bostanci, Y.S.; Soyturk, M. LSTM-Based Jamming Detection and Forecasting Model using Transport and Application Layer Parameters in Wi-Fi Based IoT Systems. IEEE Access 2024, 12, 32944–32958. [Google Scholar] [CrossRef]
  30. Strobelt, H.; Gehrmann, S.; Pfister, H.; Rush, A.M. Lstmvis: A tool for visual analysis of hidden state dynamics in recurrent neural networks. IEEE Trans. Vis. Comput. Graph. 2017, 24, 667–676. [Google Scholar] [CrossRef]
  31. Jiang, Y.; Zhang, J.; Zuo, W.; Xu, G.; Yuan, C.; Wang, L.; Du, Z.; Lu, Y.; She, W. Prediction of Time-dependent Concrete Mechanical Properties Based on Advanced Deep Learning Models Considering Complex Variables. Case Stud. Constr. Mater. 2024, 21, e03629. [Google Scholar] [CrossRef]
  32. Zhao, Y.; Chen, Z.; Dong, Y.; Tu, J. An interpretable LSTM deep learning model predicts the time-dependent swelling behavior in CERCER composite fuels. Mater. Today Commun. 2023, 37, 106998. [Google Scholar] [CrossRef]
  33. Heng, F.; Gao, J.; Xu, R.; Yang, H.; Cheng, Q.; Liu, Y. Multiaxial fatigue life prediction for various metallic materials based on the hybrid CNN-LSTM neural network. Fatigue Fract. Eng. Mater. Struct. 2023, 46, 1979–1996. [Google Scholar] [CrossRef]
  34. Long, X.; Ding, X.; Li, J.; Dong, R.; Su, Y.; Chang, C. Indentation reverse algorithm of mechanical response for elastoplastic coatings based on LSTM deep Learning. Materials 2023, 16, 2617. [Google Scholar] [CrossRef]
  35. Tanhadoust, A.; Yang, T.; Dabbaghi, F.; Chai, H.K.; Mohseni, M.; Emadi, S.; Nasrollahpour, S. Predicting stress-strain behavior of normal weight and lightweight aggregate concrete exposed to high temperature using LSTM recurrent neural network. Constr. Build. Mater. 2023, 362, 129703. [Google Scholar] [CrossRef]
  36. Anooj, G.V.S.; Marri, G.K.; Balaji, C. A machine learning methodology for the diagnosis of phase change material-based thermal management systems. Appl. Therm. Eng. 2023, 222, 119864. [Google Scholar] [CrossRef]
  37. Zhao, Y. Understanding and design of metallic alloys guided by phase-field simulations. NPJ Comput. Mater. 2023, 9, 94. [Google Scholar] [CrossRef]
  38. Bhatt, P.; Kumar, Y.; Soulaïmani, A. Deep convolutional architectures for extrapolative forecasts in time-dependent flow problems. Adv. Model. Simul. Eng. Sci. 2023, 10, 17. [Google Scholar] [CrossRef]
  39. Asem, M. DiffusionNet: Accelerating the solution of Time-Dependent partial differential equations using deep learning. arXiv 2020, arXiv:2011.10015. [Google Scholar]
  40. Dash, S.; Li, Y.; Sung, W.L. A Hybrid 1D-CNN-LSTM Technique for WKF-Induced Variability of Multi-Channel GAA NS-and NF-FETs. IEEE Access 2023, 11, 56619–56633. [Google Scholar] [CrossRef]
  41. Zheng, R.; Bao, Y.; Zhao, L.; Xing, L. Method to predict alloy yield based on multiple raw material conditions and a PSO-LSTM network. J. Mater. Res. Technol. 2023, 27, 3310–3322. [Google Scholar] [CrossRef]
  42. Liu, G.; Guo, J. Bidirectional LSTM with attention mechanism and convolutional layer for text classification. Neurocomputing 2019, 337, 325–338. [Google Scholar] [CrossRef]
  43. Yu, L.; Qu, J.; Gao, F.; Tian, Y. A novel hierarchical algorithm for bearing fault diagnosis based on stacked LSTM. Shock Vib. 2019, 2019, 2756284. [Google Scholar] [CrossRef]
  44. Joshi, A.; Deshmukh, P.K.; Lohokare, J. Comparative analysis of Vanilla LSTM and Peephole LSTM for stock market price prediction. In Proceedings of the 2022 International Conference on Computing, Communication, Security and Intelligent Systems (IC3SIS), Kochi, India, 23–25 June 2015; IEEE: Piscataway, NJ, USA, 2022; pp. 1–6. [Google Scholar]
  45. Wang, Y.; Huang, M.; Zhu, X.; Zhao, L. Attention-based LSTM for aspect-level sentiment classification. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, Austin, TX, USA, 1–5 November 2016; pp. 606–615. [Google Scholar]
  46. Pavlatos, C.; Makris, E.; Fotis, G.; Vita, V.; Mladenov, V. Enhancing Electrical Load Prediction Using a Bidirectional LSTM Neural Network. Electronics 2023, 12, 4652. [Google Scholar] [CrossRef]
  47. Naheliya, B.; Redhu, P.; Kumar, K. MFOA-Bi-LSTM: An optimized bidirectional long short-term memory model for short-term traffic flow prediction. Phys. A Stat. Mech. Its Appl. 2024, 634, 129448. [Google Scholar] [CrossRef]
  48. Cui, Z.; Ke, R.; Pu, Z.; Wang, Y. Stacked bidirectional and unidirectional LSTM recurrent neural network for forecasting network-wide traffic state with missing values. Transp. Res. Part C Emerg. Technol. 2020, 118, 102674. [Google Scholar] [CrossRef]
  49. Pradhan, A.; Yajnik, A. Parts-of-speech tagging of Nepali texts with Bidirectional LSTM, Conditional Random Fields and HMM. Multimed. Tools Appl. 2024, 83, 9893–9909. [Google Scholar] [CrossRef]
  50. Sabzalian, M.H.; Kharajinezhadian, F.; Tajally, A.; Reihanisaransari, R.; Alkhazaleh, H.A.; Bokov, D. New bidirectional recurrent neural network optimized by improved Ebola search optimization algorithm for lung cancer diagnosis. Biomed. Signal Process. Control. 2023, 84, 104965. [Google Scholar] [CrossRef]
  51. Sun, J.; Zhang, X.; Wang, J. Lightweight bidirectional long short-term memory based on automated model pruning with application to bearing remaining useful life prediction. Eng. Appl. Artif. Intell. 2023, 118, 105662. [Google Scholar] [CrossRef]
  52. Guo, F.; Mo, H.; Wu, J.; Pan, L.; Zhou, H.; Zhang, Z.; Li, L.; Huang, F. A Hybrid Stacking Model for Enhanced Short-Term Load Forecasting. Electronics 2024, 13, 2719. [Google Scholar] [CrossRef]
  53. Ren, Y.; Zhang, R.; Gao, F. A network structure for industrial process fault diagnosis based on hyper feature extraction and stacked LSTM. Chem. Eng. Sci. 2024, 287, 119745. [Google Scholar] [CrossRef]
  54. Maharatha, A.; Das, R.; Mishra, J.; Nayak, S.R.; Aluvala, S. Employing Sequence-to-Sequence Stacked LSTM Autoencoder Architecture to Forecast Indian Weather. Procedia Comput. Sci. 2024, 235, 2258–2268. [Google Scholar] [CrossRef]
  55. Alghamdi, M.A.; Abdullah, S.; Ragab, M. Predicting Energy Consumption Using Stacked LSTM Snapshot Ensemble. Big Data Min. Anal. 2024, 7, 247–270. [Google Scholar] [CrossRef]
  56. Wang, Y.; Wu, M.; Li, X.; Xie, L.; Chen, Z. Multivariate Time-Series Representation Learning via Hierarchical Correlation Pooling Boosted Graph Neural Network. IEEE Trans. Artif. Intell. 2023, 5, 321–333. [Google Scholar] [CrossRef]
  57. Alabdulkreem, E.; Alruwais, N.; Mahgoub, H.; Dutta, A.K.; Khalid, M.; Marzouk, R.; Motwakel, A.; Drar, S. Sustainable groundwater management using stacked LSTM with deep neural network. Urban Clim. 2023, 49, 101469. [Google Scholar] [CrossRef]
  58. Kumar, V.; Paul, K.; Chowdhary, M. Long Short Term Memory (LSTM)-based Cuffless Continuous Blood Pressure Monitoring. In Proceedings of the 2024 37th International Conference on VLSI Design and 2024 23rd International Conference on Embedded Systems (VLSID), Kolkata, India, 6–10 June 2024; IEEE: Piscataway, NJ, USA, 2024; pp. 330–335. [Google Scholar]
  59. Essai Ali, M.H.; Abdellah, A.R.; Atallah, H.A.; Ahmed, G.S.; Muthanna, A.; Koucheryavy, A. Deep learning peephole LSTM neural network-based channel state estimators for OFDM 5G and beyond networks. Mathematics 2023, 11, 3386. [Google Scholar] [CrossRef]
  60. Zhu, D.; Dai, X.; Liu, Y.; Wang, F.; Luo, X.; Chen, D.; Ye, Z. Deep learning approach of drilling decision for subhorizontal drain geosteering based on APC-LSTM model. SPE Drill. Complet. 2023, 38, 1–17. [Google Scholar] [CrossRef]
  61. Pilataxi, J.I.; Zambrano, J.E.; Perez, C.A.; Bowyer, K.W. Improved Search in Neuroevolution Using a Neural Architecture Classifier With the CNN Architecture Encoding as Feature Vector. IEEE Access 2024, 12, 11987–12000. [Google Scholar] [CrossRef]
  62. Mienye, I.D.; Swart, T.G.; Obaido, G. Recurrent Neural Networks: A Comprehensive Review of Architectures, Variants, and Applications. Information 2024, 15, 517. [Google Scholar] [CrossRef]
  63. Dai, Z.; Zhang, M.; Nedjah, N.; Xu, D.; Ye, F. A hydrological data prediction model based on lstm with attention mechanism. Water 2023, 15, 670. [Google Scholar] [CrossRef]
  64. Lei, X. A Photovoltaic Prediction Model with Integrated Attention Mechanism. Mathematics 2024, 12, 2103. [Google Scholar] [CrossRef]
  65. Shi, J.; Zhong, J.; Zhang, Y.; Xiao, B.; Xiao, L.; Zheng, Y. A dual attention LSTM lightweight model based on exponential smoothing for remaining useful life prediction. Reliab. Eng. Syst. Saf. 2024, 243, 109821. [Google Scholar] [CrossRef]
  66. Kumar, C.; Kumar, M. Session-based recommendations with sequential context using attention-driven LSTM. Comput. Electr. Eng. 2024, 115, 109138. [Google Scholar] [CrossRef]
  67. Griffis, E.J.; Patil, O.S.; Bell, Z.I.; Dixon, W.E. Lyapunov-based long short-term memory (Lb-LSTM) neural network-based control. IEEE Control. Syst. Lett. 2023, 7, 2976–2981. [Google Scholar] [CrossRef]
  68. Kim, T.; Ahn, D.; Lee, D.; Kim, J.J. V-LSTM: An efficient LSTM accelerator using fixed nonzero-ratio viterbi-based pruning. IEEE Trans.-Comput.-Aided Des. Integr. Circuits Syst. 2023, 42, 3327–3337. [Google Scholar] [CrossRef]
  69. Aburass, S.; Dorgham, O.; Al Shaqsi, J. A hybrid machine learning model for classifying gene mutations in cancer using LSTM, BiLSTM, CNN, GRU, and GloVe. Syst. Soft Comput. 2024, 6, 200110. [Google Scholar] [CrossRef]
  70. Laitsos, V.; Vontzos, G.; Bargiotas, D.; Daskalopulu, A.; Tsoukalas, L.H. Data-Driven Techniques for Short-Term Electricity Price Forecasting through Novel Deep Learning Approaches with Attention Mechanisms. Energies 2024, 17, 1625. [Google Scholar] [CrossRef]
  71. Wang, Y.; Jiang, W.; Wang, C.; Song, Q.; Zhang, T.; Dong, Q.; Li, X. An electricity load forecasting model based on multilayer dilated LSTM network and attention mechanism. Front. Energy Res. 2023, 11, 1116465. [Google Scholar] [CrossRef]
  72. Wen, X.; Li, W. Time series prediction based on LSTM-attention-LSTM model. IEEE Access 2023, 11, 48322–48331. [Google Scholar] [CrossRef]
  73. Agarwal, H.; Mahajan, G.; Shrotriya, A.; Shekhawat, D. Predictive Data Analysis: Leveraging RNN and LSTM Techniques for Time Series Dataset. Procedia Comput. Sci. 2024, 235, 979–989. [Google Scholar] [CrossRef]
  74. Wu, Y.; Zhang, L.; Tang, T. Image Stabilization of A Model-following Dual-Stage System with Charge-Coupled Device Measurement. IEEE Photonics J. 2024. [Google Scholar] [CrossRef]
  75. Shukla, A.K.; Shukla, A.; Singh, R. Automatic attendance system based on CNN–LSTM and face recognition. Int. J. Inf. Technol. 2024, 16, 1293–1301. [Google Scholar] [CrossRef]
  76. Hassan, S.; Mujtaba, G.; Rajput, A.; Fatima, N. Multi-object tracking: A systematic literature review. Multimed. Tools Appl. 2024, 83, 43439–43492. [Google Scholar] [CrossRef]
  77. Liang, Z.; Chen, D.; Yang, L.; Chen, Y. A Multibranch LSTM-Attention Ensemble Classification Network for Sensor Drift Compensation. IEEE Sens. J. 2024, 24, 25830–25841. [Google Scholar] [CrossRef]
  78. Sung, S.H.; Suh, J.M.; Hwang, Y.J.; Jang, H.W.; Park, J.G.; Jun, S.C. Data-centric artificial olfactory system based on the eigengraph. Nat. Commun. 2024, 15, 1211. [Google Scholar] [CrossRef] [PubMed]
  79. Ryman, S.K.; Bruce, N.D.; Freund, M.S. Temporal responses of chemically diverse sensor arrays for machine olfaction using artificial intelligence. Sens. Actuators B Chem. 2016, 231, 666–674. [Google Scholar] [CrossRef]
  80. Kim, T.; Yoon, J.H.; Seo, M.; Kim, S.Y. Synthesis and self-assembly of poly (4-vinylphenol)-b-poly (vinyl alcohol) diblock copolymer for invertible core-shell nanoparticles. Polymer 2024, 307, 127293. [Google Scholar] [CrossRef]
  81. Nayak, G.; Maroliya, M.; Qadeer, M.; Midhun, V.; Saha, S.K.; Subramaniam, C. Development and Characterization of a Solid-Solid-Phase Change Material for Low-Temperature Applications. In Proceedings of the 27th National and 5th International ISHMT-ASTFE Heat and Mass Transfer Conference, Bihar, India, 14–17 December 2023; Begel House Inc.: Danbury, CT, USA, 2024. [Google Scholar]
  82. Su, X.; Xu, X.P.; Ji, Z.Q.; Wu, J.; Ma, F.; Fan, L.Z. Polyethylene Oxide-Based Composite Solid Electrolytes for Lithium Batteries: Current Progress, Low-Temperature and High-Voltage Limitations, and Prospects. Electrochem. Energy Rev. 2024, 7, 2. [Google Scholar] [CrossRef]
  83. Waziry, S.; Rasheed, J.; Ghabban, F.M.; Alsubai, S.; Elkiran, H.; Alqahtani, A. Unveiling Interpretability: Analyzing Transfer Learning in Deep Learning Models for Traffic Sign Recognition. SN Comput. Sci. 2024, 5, 682. [Google Scholar] [CrossRef]
  84. Kausar, R.; Zayer, F.; Viegas, J.; Dias, J. Efficient Hybrid Neuromorphic-Bayesian Model for Olfaction Sensing: Detection and Classification. In Proceedings of the 2024 IEEE International Conference on Robotics and Automation (ICRA), Yokohama, Japan, 13–17 May 2024; IEEE: Piscataway, NJ, USA, 2024; pp. 2089–2095. [Google Scholar]
  85. Wang, C.; Wang, R.; Zhang, C.; Yu, Q. Coupling effect of state of charge and loading rate on internal short circuit of lithium-ion batteries induced by mechanical abuse. Appl. Energy 2024, 375, 124138. [Google Scholar] [CrossRef]
  86. Tamarozzi, T.; Jiránek, P.; De Gregoriis, D. A differential-algebraic extended Kalman filter with exact constraint satisfaction. Mech. Syst. Signal Process. 2024, 206, 110901. [Google Scholar] [CrossRef]
  87. Shin, D.; Yoon, B.; Yoo, S. Compensation method for estimating the state of charge of li-polymer batteries using multiple long short-term memory networks based on the extended kalman filter. Energies 2021, 14, 349. [Google Scholar] [CrossRef]
  88. Andrews, J.; Gkountouna, O.; Blaisten-Barojas, E. Forecasting molecular dynamics energetics of polymers in solution from supervised machine learning. Chem. Sci. 2022, 13, 7021–7033. [Google Scholar] [CrossRef]
  89. Singh, A.; Gupta, S.; Goel, L.; Agarwal, A.K.; Dargar, S.K. Archimedes optimization-based Elman Recurrent Neural Network for detection of post-traumatic stress disorder. Biomed. Signal Process. Control. 2024, 90, 105806. [Google Scholar] [CrossRef]
  90. Moctezuma, L.A.; Suzuki, Y.; Furuki, J.; Molinas, M.; Abe, T. GRU-powered sleep stage classification with permutation-based EEG channel selection. Sci. Rep. 2024, 14, 17952. [Google Scholar] [CrossRef]
  91. Chountoulesi, M.; Pippa, N.; Forys, A.; Trzebicka, B.; Pispas, S. Structure-Based Evaluation of Hybrid Lipid–Polymer Nanoparticles: The Role of the Polymeric Guest. Polymers 2024, 16, 290. [Google Scholar] [CrossRef] [PubMed]
  92. Wang, J.; Chen, C.C.; Shie, C.Y.; Li, T.T.; Fuh, Y.K. A hybrid sensor for motor tics recognition based on piezoelectric and triboelectric design and fabrication. Sens. Actuators A Phys. 2022, 342, 113622. [Google Scholar] [CrossRef]
  93. Lewin, A.B.; Murphy, T.K.; Mink, J.W.; Small, B.J.; Adams, H.R.; Brennan, E.; Augustine, E.F.; Vermilion, J.; Vierhile, A.; Collins, A.; et al. Brief youth self-report screener for tics: Can a subscale of the Motor tic, Obsession and compulsion, and Vocal tic Evaluation Survey (MOVES) identify tic disorders in youth? Evid.-Based Pract. Child Adolesc. Ment. Health 2024, 9, 181–191. [Google Scholar] [CrossRef] [PubMed]
  94. Roy, D.; Mishra, T.T.; Parashar, C.K.; Murmu, K.; Chakraborty, M. Effect of Poling on β-Phase Structure of Electrospun PVDF-TrFE Nanofiber Film. J. Mater. Eng. Perform. 2024, 33, 5439–5445. [Google Scholar] [CrossRef]
  95. Lenka, A.; Jankovic, J. An update on the pharmacological management of Tourette syndrome and emerging treatment paradigms. Expert Rev. Neurother. 2024, 1–9. [Google Scholar] [CrossRef] [PubMed]
  96. Yezerska, K.; Dushina, A.; Sarabakha, A.; Wagner, P.; Dyck, A.; Wark, M. Model-based degradation prediction on impedance data and artificial neural network for high-temperature polymer electrolyte membrane fuel cells after hydrogen starvation. Int. J. Hydrogen Energy 2022, 47, 29495–29504. [Google Scholar] [CrossRef]
  97. Heinritz, A.; Leidinger, P.; Buhk, B.; Herranz, J.; Schmidt, T. A High-Potential Trapped State Upon H2-Starvation of a Platinum Electrode in Aqueous Electrolyte. J. Electrochem. Soc. 2024, 171, 014503. [Google Scholar] [CrossRef]
  98. Solangi, N.H.; Mubarak, N.M.; Karri, R.R.; Mazari, S.A.; Koduru, J.R. Recent development of graphene and MXene-based nanomaterials for proton exchange membrane fuel cells. Int. J. Hydrogen Energy 2024, 73, 905–931. [Google Scholar] [CrossRef]
  99. Asghar, M.R.; Xu, Q. A review of advancements in commercial and non-commercial Nafion-based proton exchange membranes for direct methanol fuel cells. J. Polym. Res. 2024, 31, 125. [Google Scholar] [CrossRef]
  100. Benhaddouch, T.E.; Marcial, J.; Metler, C.; Bhansali, S.; Dong, D. Real-Time Continuous Monitoring of Fuel Cell Ionomer Degradation with Electrochemical Inline Micro Sensor Arrays. In Proceedings of the Electrochemical Society Meeting Abstracts 242, Atlanta, GA, USA, 9–13 October 2022; The Electrochemical Society, Inc.: Pennington, NJ, USA, 2022. Number 61. p. 2256. [Google Scholar]
  101. Nussbaum, R.; Nonis, A.; Jeanneret, S.; Cherubini, T.; Bakker, E. Ultrasensitive sensing of pH and fluoride with enhanced constant potential coulometry at membrane electrodes. Sens. Actuators B Chem. 2023, 392, 134101. [Google Scholar] [CrossRef]
  102. Sharma, R.; Morgen, P.; Larsen, M.J.; Roda-Serrat, M.C.; Lund, P.B.; Grahl-Madsen, L.; Andersen, S.M. Recovery, Regeneration, and Reapplication of PFSA Polymer from End-of-Life PEMFC MEAs. ACS Appl. Mater. Interfaces 2023, 15, 48705–48715. [Google Scholar] [CrossRef] [PubMed]
  103. Xu, X.; Huo, W.; Li, F.; Zhou, H. Classification of liquid ingress in GFRP honeycomb based on one-dimension sequential model using THz-TDS. Sensors 2023, 23, 1149. [Google Scholar] [CrossRef] [PubMed]
  104. Shubham, K.; Metya, S.; Sinha, A.K.; Gobinath, R. One-Dimensional-Convolutional Neural Network (1D-CNN) Based Reliability Analysis of Foundation Over Cavity Incorporating the Effect of Simulated Noise. Adv. Civ. Eng. 2024, 2024, 9981433. [Google Scholar] [CrossRef]
  105. Song, M.J.; Kim, S.; Oh, S.H.; Jo, P.S.; Lee, J.M. Soft Sensor for Melt Index Prediction Based on Long Short-Term Memory Network. IFAC-PapersOnLine 2022, 55, 857–862. [Google Scholar] [CrossRef]
  106. da Silveira, P.H.P.M.; da Conceição, M.d.N.; de Pina, D.N.; de Moraes Paes, P.A.; Monteiro, S.N.; Tapanes, N.d.L.C.O.; da Conceição Ribeiro, R.C.; Bastos, D.C. Impact of Different Mineral Reinforcements on HDPE Composites: Effects of Melt Flow Index and Particle Size on Physical and Mechanical Properties. Polymers 2024, 16, 2063. [Google Scholar] [CrossRef]
  107. Liu, M.; Tang, Q.; Liu, B.; Zhang, M. Preparation of poly (butyl acrylate)-grafted-poly (styrene-co-acrylonitrile) particles for toughening poly (styrene-co-acrylonitrile) resin. Polym. Eng. Sci. 2024, 64, 4298–4308. [Google Scholar] [CrossRef]
  108. Song, Z.; Feng, Y.; Lu, C.; Liu, J.; Pan, W. Self-constructed strategy-based reinforcement LSTM approach for fiber-reinforced polymer non-linear degradation performance analysis. Compos. Sci. Technol. 2024, 248, 110414. [Google Scholar] [CrossRef]
  109. Qiu, W.; Xu, X.; Dong, K.; Wang, Y.; Xiong, Y. Recent advances in 4D printing of fiber-reinforced polymer composites: A review and outlook. Compos. Part B Eng. 2024, 283, 111645. [Google Scholar] [CrossRef]
  110. Goswami, S.; Ghosh, R.; Neog, A.; Das, B. Deep learning based approach for prediction of glass transition temperature in polymers. Mater. Today Proc. 2021, 46, 5838–5843. [Google Scholar] [CrossRef]
  111. Loretz, R.; Loretz, T. Corrections to theoretical glass transition temperature models and interpretations with application examples to chalcogenide glass. J. Non-Cryst. Solids 2024, 628, 122845. [Google Scholar] [CrossRef]
  112. Srivastava, M.; MR, A.K.; Zaghib, K. Binders for Li-Ion Battery Technologies and Beyond: A Comprehensive Review. Batteries 2024, 10, 268. [Google Scholar] [CrossRef]
  113. Kim, K.; Kim, J.; Choi, H.; Kwon, O.; Jang, Y.; Ryu, S.; Lee, H.; Shim, K.; Park, T.; Cha, S.W. Pre-diagnosis of flooding and drying in proton exchange membrane fuel cells by bagging ensemble deep learning models using long short-term memory and convolutional neural networks. Energy 2023, 266, 126441. [Google Scholar] [CrossRef]
  114. Oyedeji, O.A.; Khan, S.; Erkoyuncu, J.A. Application of CNN for Multiple Phase Corrosion Identification and Region Detection. Appl. Soft Comput. 2024, 164, 112008. [Google Scholar]
  115. Zizaan, A.; Idri, A. Evaluating and comparing bagging and boosting of hybrid learning for breast cancer screening. Sci. Afr. 2024, 23, e01989. [Google Scholar] [CrossRef]
  116. Ramachandran, V.P.; Pranavam, V.; Sreedharan, P. Life Prediction of Underwater Electroacoustic Sensor Using Data-Driven Approach. In Proceedings of the International Conference on Artificial Intelligence and Data Science, Medan, Indonesia, 11–12 November 2021; Springer: Berlin/Heidelberg, Germany, 2021; pp. 465–475. [Google Scholar]
  117. Gao, Z.; Hu, B.; Ye, W.; Cao, T.; Chen, L.; Li, H.; Guo, C.Y.; Wang, C. Water-Resistant and Thermal Insulation Aerogels Based on Polymers toward a Room-Temperature Phosphorescent Sensor. ACS Appl. Opt. Mater. 2024, 2, 1159–1171. [Google Scholar] [CrossRef]
  118. Ge, X.; Fan, F.; Given, M.J.; Stewart, B.G. Insulation resistance degradation models of extruded power cables under thermal ageing. Energies 2024, 17, 1062. [Google Scholar] [CrossRef]
  119. Lee, J.; Lee, N.; Son, J.; Shin, D. An LSTM model with optimal feature selection for predictions of tensile behavior and tensile failure of polymer matrix composites. Korean J. Chem. Eng. 2023, 40, 2091–2101. [Google Scholar] [CrossRef]
  120. Coccato, A.; Caggiani, M.C. An overview of Principal Components Analysis approaches in Raman studies of cultural heritage materials. J. Raman Spectrosc. 2024, 55, 125–147. [Google Scholar] [CrossRef]
  121. Kumar, M. Early detection of chronic kidney disease using recursive feature elimination and cross-validated XGBoost model. Int. J. Comput. Mater. Sci. Eng. 2024, 13. [Google Scholar] [CrossRef]
  122. Tian, Q.; Guo, L.; Zhang, Y.; Gao, H.; Li, Z. Multi-angle property analysis and stress–strain curve prediction of cementitious sand gravel based on triaxial test. Sci. Rep. 2024, 14, 16400. [Google Scholar] [CrossRef]
  123. Yang, B.; Liang, B.; Qian, Y.; Zheng, R.; Su, S.; Guo, Z.; Jiang, L. Parameter identification of PEMFC via feedforward neural network-pelican optimization algorithm. Appl. Energy 2024, 361, 122857. [Google Scholar] [CrossRef]
  124. Chistyakova, T.B.; Damrin, A.; Grishchenkov, N.D. The Software Complex for the Selection and Analysis of Algorithms Predicting Key Quality Indicators of Polymer Film Materials of Industrial Production. In Proceedings of the 2023 5th International Conference on Control Systems, Mathematical Modeling, Automation and Energy Efficiency (SUMMA), Lipetsk, Russia, 8–10 November 2023; IEEE: Piscataway, NJ, USA, 2023; pp. 799–801. [Google Scholar]
  125. Koinig, G.; Kuhn, N.; Fink, T.; Grath, E.; Tischberger-Aldrian, A. Inline classification of polymer films using Machine learning methods. Waste Manag. 2024, 174, 290–299. [Google Scholar] [CrossRef]
  126. Xing, H.J.; Liu, W.T.; Wang, X.Z. Bounded exponential loss function based AdaBoost ensemble of OCSVMs. Pattern Recognit. 2024, 148, 110191. [Google Scholar] [CrossRef]
  127. Shakouri Mahmoudabadi, N.; Bahrami, A.; Saghir, S.; Ahmad, A.; Iqbal, M.; Elchalakani, M.; Özkılıç, Y.O. Effects of eccentric loading on performance of concrete columns reinforced with glass fiber-reinforced polymer bars. Sci. Rep. 2024, 14, 1890. [Google Scholar] [CrossRef]
  128. Zhang, K.; Zhang, K.; Bao, R. Machine learning models to predict the residual tensile strength of glass fiber reinforced polymer bars in strong alkaline environments: A comparative study. J. Build. Eng. 2023, 73, 106817. [Google Scholar] [CrossRef]
  129. Machello, C.; Bazli, M.; Santos, J.; Rajabipour, A.; Arashpour, M.; Hassanli, R. Tensile strength retention of fibre-reinforced polymer composites exposed to elevated temperatures: A meta-analysis review. Constr. Build. Mater. 2024, 438, 137150. [Google Scholar] [CrossRef]
  130. Niazkar, M.; Menapace, A.; Brentan, B.; Piraei, R.; Jimenez, D.; Dhawan, P.; Righetti, M. Applications of XGBoost in water resources engineering: A systematic literature review (Dec 2018–May 2023). Environ. Model. Softw. 2024, 174, 105971. [Google Scholar] [CrossRef]
  131. Yoon, B.; Yoo, S.; Seong, S. Compensation Method of EKF Based on LSTM for Estimating State of Charge of Li-polymer Battery. Trans. KSAE 2019, 27, 501–507. [Google Scholar] [CrossRef]
  132. Guo, K.; Li, S.; Wang, J.; Shi, Z.; Wang, Y.; Xue, Z. In situ orthogonal polymerization for constructing fast-charging and long-lifespan Li metal batteries with topological copolymer electrolytes. ACS Energy Lett. 2024, 9, 843–852. [Google Scholar] [CrossRef]
  133. Zhang, Y.; Wang, Y.; Wu, J.; Su, C.Y. Inverse Compensation-based Global Fast Terminal Integral Sliding Mode Control with Lumped Uncertainty Fuzzy Estimation for Dielectric Electro-active Polymer Actuator. IEEE Trans. Fuzzy Syst. 2024. [Google Scholar] [CrossRef]
  134. Jiang, Z.; Li, Y.; Wang, Q. Modeling of the dynamic hysteresis in DEAP actuator using an empirical mode decomposition based long-short term memory network. J. Intell. Mater. Syst. Struct. 2021, 32, 2108–2123. [Google Scholar] [CrossRef]
  135. Shi, J.; Teh, J. Load forecasting for regional integrated energy system based on complementary ensemble empirical mode decomposition and multi-model fusion. Appl. Energy 2024, 353, 122146. [Google Scholar] [CrossRef]
  136. Wang, S.; Mei, H.; Liu, J.; Chen, D.; Wang, L. A terahertz identification method for internal interface structures of polymers based on the long short-term memory classification network. Polymers 2022, 14, 2611. [Google Scholar] [CrossRef] [PubMed]
  137. Li, B.; Lu, Z.; Jin, X.; Zhao, L. Tool wear prediction in milling CFRP with different fiber orientations based on multi-channel 1DCNN-LSTM. J. Intell. Manuf. 2024, 35, 2547–2566. [Google Scholar] [CrossRef]
  138. Binbusayyis, A. Hybrid VGG19 and 2D-CNN for intrusion detection in the FOG-cloud environment. Expert Syst. Appl. 2024, 238, 121758. [Google Scholar] [CrossRef]
  139. Hantono, B.S.; Cahyadi, A.I.; Pratama, G.N.P. LSTM for state of charge estimation of lithium polymer battery on Jetson nano. In Proceedings of the 2021 13th International Conference on Information Technology and Electrical Engineering (ICITEE), Chiang Mai, Thailand, 14–15 October 2021; IEEE: Piscataway, NJ, USA, 2021; pp. 80–85. [Google Scholar]
  140. Dehghan Manshadi, M.; Alafchi, N.; Tat, A.; Mousavi, M.; Mosavi, A. Comparative analysis of machine learning and numerical modeling for combined heat transfer in Polymethylmethacrylate. Polymers 2022, 14, 1996. [Google Scholar] [CrossRef]
  141. Luong, T.; Kim, K.; Seo, S.; Jeon, J.; Park, C.; Doh, M.; Koo, J.C.; Choi, H.R.; Moon, H. Long short term memory model based position-stiffness control of antagonistically driven twisted-coiled polymer actuators using model predictive control. IEEE Robot. Autom. Lett. 2021, 6, 4141–4148. [Google Scholar] [CrossRef]
  142. Köhler, J.; Müller, M.A.; Allgöwer, F. Analysis and design of model predictive control frameworks for dynamic operation—An overview. Annu. Rev. Control. 2024, 57, 100929. [Google Scholar] [CrossRef]
  143. Dong, C.; Jiang, C.; Gao, S.; Wang, X.; Bo, C.; Li, J.; Jin, X. Hybrid-modeling for PTFE polymerization reaction with deep learning-based reaction rate model. Int. J. Chem. React. Eng. 2023, 21, 1389–1401. [Google Scholar] [CrossRef]
  144. Ok, S.; Steinhart, M.; Scheler, U.; Améduri, B. TFE Terpolymers: Once Promising–Are There Still Perspectives in the 21st Century: Synthesis, Characterization, and Properties—Part I. Macromol. Rapid Commun. 2024, 2400294. [Google Scholar] [CrossRef]
  145. Sun, X.; Zhang, L.; Yu, C.; Xie, G.; Li, Y.; Wu, X.; Li, X.; Guo, D. Surface modification of polytetrafluoroethylene (PTFE) fibers through methyl methacrylate (MMA) polymerization for self-lubricating composites. Appl. Surf. Sci. 2024, 660, 159992. [Google Scholar] [CrossRef]
  146. Bi, J.; Zhang, P.; Zhang, J.; Wang, M.; Zhao, C. Data-Driven Prediction of Polymer Intrinsic Viscosity with Incomplete Time Series Data. In Proceedings of the 2023 28th International Conference on Automation and Computing (ICAC), Birmingham, UK, 30 August–1 September 2023; IEEE: Piscataway, NJ, USA, 2023; pp. 1–6. [Google Scholar]
  147. Ma, Z.; Sun, Y.; Ji, H.; Li, S.; Nie, S.; Yin, F. A CNN-BiLSTM-Attention approach for EHA degradation prediction based on time-series generative adversarial network. Mech. Syst. Signal Process. 2024, 215, 111443. [Google Scholar] [CrossRef]
  148. Rahman, M.M.; Farahani, M.A.; Wuest, T. Multivariate time-series classification of critical events from industrial drying hopper operations: A deep learning approach. J. Manuf. Mater. Process. 2023, 7, 164. [Google Scholar] [CrossRef]
  149. Lee, Z.; Lindgren, T.; Papapetrou, P. Z-Time: Efficient and effective interpretable multivariate time series classification. Data Min. Knowl. Discov. 2024, 38, 206–236. [Google Scholar] [CrossRef]
  150. Gao, Z.; Chang, L.; Ren, B.; Han, J.; Li, J. Enhanced braille recognition based on piezoresistive and piezoelectric dual-mode tactile sensors. Sens. Actuators A Phys. 2024, 366, 115000. [Google Scholar] [CrossRef]
  151. Simine, L.; Allen, T.C.; Rossky, P.J. Predicting optical spectra for optoelectronic polymers using coarse-grained models and recurrent neural networks. Proc. Natl. Acad. Sci. USA 2020, 117, 13945–13948. [Google Scholar] [CrossRef]
  152. Braghetto, A.; Kundu, S.; Baiesi, M.; Orlandini, E. Machine learning understands knotted polymers. Macromolecules 2023, 56, 2899–2909. [Google Scholar] [CrossRef]
  153. Wang, Y.; Ma, X.M.; Hao, Z.; Cai, Y.; Rong, H.; Zhang, F.; Chen, W.; Zhang, C.; Lin, J.; Zhao, Y.; et al. On the topological surface states of the intrinsic magnetic topological insulator Mn-Bi-Te family. Natl. Sci. Rev. 2024, 11, nwad066. [Google Scholar] [CrossRef] [PubMed]
  154. Benrabia, I.; Söffker, D. Modeling and Evaluation of Dynamical Properties of Different Energy Storage Systems Using Machine Learning Methods. In Proceedings of the 2023 IEEE Vehicle Power and Propulsion Conference (VPPC), Milan, Italy, 24–27 October 2023; IEEE: Piscataway, NJ, USA, 2023; pp. 1–6. [Google Scholar]
  155. Khan, T.A.; Chaudhary, N.I.; Khan, Z.A.; Mehmood, K.; Hsu, C.C.; Raja, M.A.Z. Design of Runge-Kutta optimization for fractional input nonlinear autoregressive exogenous system identification with key-term separation. Chaos Solitons Fractals 2024, 182, 114723. [Google Scholar] [CrossRef]
  156. Altabey, W.A.; Noori, M.; Wu, Z.; Al-Moghazy, M.A.; Kouritem, S.A. Studying acoustic behavior of BFRP laminated composite in dual-chamber muffler application using deep learning algorithm. Materials 2022, 15, 8071. [Google Scholar] [CrossRef]
  157. Hassan, B.R.; Yousif, A.R. Shear behavior of reinforced beams with basalt fiber reinforced polymer bars: Review, comparative analysis, and experimental validation. Structures 2024, 59, 105730. [Google Scholar] [CrossRef]
  158. Rüther, C.; Rieck, J. A Bayesian Optimization Approach for Tuning a Grouping Genetic Algorithm for Solving Practically Oriented Pickup and Delivery Problems. Logistics 2024, 8, 14. [Google Scholar] [CrossRef]
  159. Fu, T.; Rao, E.; Rabczuk, T. Sound transmission loss and energy absorbing performance of stiffened doubly-curved shells with corrugated-honeycomb hybrid cores. Eur. J. Mech. A/Solids 2024, 107, 105386. [Google Scholar] [CrossRef]
  160. Liu, Z.; Yin, X.; Yin, X.; Lu, Q.; Qiao, J.; Wang, Y. A Dice similarity coefficient-based pilot protection method for 500kV transmission lines of large-scale integrated photovoltaic power supply. Electr. Power Syst. Res. 2024, 226, 109918. [Google Scholar] [CrossRef]
  161. Wang, Q.; Liu, Q.; Xia, R.; Zhang, P.; Zhou, H.; Zhao, B.; Li, G. Automatic defect prediction in glass fiber reinforced polymer based on THz-TDS signal analysis with neural networks. Infrared Phys. Technol. 2021, 115, 103673. [Google Scholar] [CrossRef]
  162. Rehra, J.; Jungbluth, J.; Katri, B.; Schmeer, S.; Gurka, M.; Balle, F.; Breuer, U.P. Damage and failure mechanisms of hybrid carbon fiber and steel fiber reinforced polymer composites. Compos. Part A Appl. Sci. Manuf. 2024, 185, 108366. [Google Scholar] [CrossRef]
  163. Bérot, O.S.; Canot, H.; Durand, P.; Hassoune-Rhabbour, B.; Acheritobehere, H.; Laforet, C.; Nassiet, V. Choice of Parameters of an LSTM Network, based on a Small Experimental Dataset. Eng. Lett. 2024, 32, 59–71. [Google Scholar]
  164. Sukur, E.F.; Elimsa, S.; Eskizeybek, V.; Avci, A. Damage tolerance of basalt fiber reinforced multiscale composites: Effect of nanoparticle morphology and hygrothermal aging. Compos. Part B Eng. 2024, 273, 111234. [Google Scholar] [CrossRef]
  165. Oudah, F.; Alhashmi, A.E. Time-dependent reliability analysis of degrading structural elements using stochastic fe and lstm learning. In Proceedings of the Canadian Society of Civil Engineering Annual Conference, Whistler, BC, Canada, 25–28 May 2022; Springer: Berlin/Heidelberg, Germany, 2022; pp. 131–140. [Google Scholar]
  166. Oh, H.S.; Kim, J.M.; Chang, C.Y. A Study on LSTM-Based Lithium Battery SoH Estimation in Urban Railway Vehicle Operating Environments. J. Electr. Eng. Technol. 2024, 19, 2817–2829. [Google Scholar] [CrossRef]
  167. Wang, X.; Hu, B.; Su, X.; Xu, L.; Zhu, D. State of health estimation for lithium-ion batteries using random forest and gated recurrent unit. J. Energy Storage 2024, 76, 109796. [Google Scholar] [CrossRef]
  168. Karaburun, N.N.; Hatipoğlu, S.A.; Konar, M. SOC Estimation of Li-Po Battery Using Machine Learning and Deep Learning Methods. J. Aviat. 2024, 8, 26–31. [Google Scholar] [CrossRef]
  169. Al-lQubaydhi, N.; Alenezi, A.; Alanazi, T.; Senyor, A.; Alanezi, N.; Alotaibi, B.; Alotaibi, M.; Razaque, A.; Hariri, S. Deep learning for unmanned aerial vehicles detection: A review. Comput. Sci. Rev. 2024, 51, 100614. [Google Scholar] [CrossRef]
  170. Yong, S.L.S.; Ng, J.L.; Huang, Y.F.; Ang, C.K.; Ahmad Kamal, N.; Mirzaei, M.; Najah Ahmed, A. Enhanced daily reference evapotranspiration estimation using optimized hybrid support vector regression models. Water Resour. Manag. 2024, 28, 4213–4241. [Google Scholar] [CrossRef]
  171. Iranzad, R.; Liu, X. A review of random forest-based feature selection methods for data science education and applications. Int. J. Data Sci. Anal. 2024, 2, 927312. [Google Scholar] [CrossRef]
  172. Tripathi, K.; Hamza, M.H.; Chattopadhyay, A.; Henry, T.C.; Hall, A. Impact of buckypaper on the mechanical properties and failure modes of composites. In Proceedings of the 38th Technical Conference of the American Society for Composites, ASC 2023, Tucson, AZ, USA, 19–21 September 2023; DEStech Publications: Lancaster, PA, USA, 2023; pp. 2281–2297. [Google Scholar]
  173. Kim, M.; Goerzen, D.; Jena, P.V.; Zeng, E.; Pasquali, M.; Meidl, R.A.; Heller, D.A. Human and environmental safety of carbon nanotubes across their life cycle. Nat. Rev. Mater. 2024, 9, 63–81. [Google Scholar] [CrossRef]
  174. Nath, D.; Ankit; Neog, D.R.; Gautam, S.S. Application of machine learning and deep learning in finite element analysis: A comprehensive review. Arch. Comput. Methods Eng. 2024, 31, 2945–2984. [Google Scholar] [CrossRef]
  175. Xiao, S.; Cunningham, W.J.; Kondabolu, K.; Lowet, E.; Moya, M.V.; Mount, R.A.; Ravasio, C.; Bortz, E.; Shaw, D.; Economo, M.N.; et al. Large-scale deep tissue voltage imaging with targeted-illumination confocal microscopy. Nat. Methods 2024, 21, 1094–1102. [Google Scholar] [CrossRef]
  176. Reiner, J.; Vaziri, R.; Zobeiry, N. Machine learning assisted characterisation and simulation of compressive damage in composite laminates. Compos. Struct. 2021, 273, 114290. [Google Scholar] [CrossRef]
  177. Najjar, I.; Sadoun, A.; Abd Elaziz, M.; Abdallah, A.; Fathy, A.; Elsheikh, A.H. Predicting kerf quality characteristics in laser cutting of basalt fibers reinforced polymer composites using neural network and chimp optimization. Alex. Eng. J. 2022, 61, 11005–11018. [Google Scholar] [CrossRef]
  178. Du, N.; Zhou, Y.; Luo, Q.; Jiang, M.; Deng, W. Multi-strategy chimp optimization algorithm for global optimization and minimum spanning tree. Soft Comput. 2024, 28, 2055–2082. [Google Scholar] [CrossRef]
  179. Sahoo, A.K.; Mishra, D.R. Characterisation of basalt/glass/kevlar-29 hybrid fibre-reinforced plastic composite material through Nd: YAG laser drilling and optimisation using stochastic methods. J. Mech. Sci. Technol. 2024, 38, 4321–4331. [Google Scholar] [CrossRef]
  180. Ahmed, F.R.; Alsenany, S.A.; Abdelaliem, S.M.F.; Deif, M.A. Development of a hybrid LSTM with chimp optimization algorithm for the pressure ventilator prediction. Sci. Rep. 2023, 13, 20927. [Google Scholar] [CrossRef] [PubMed]
  181. Jiang, Z.; Li, Y.; Wang, Q. Intelligent feedforward hysteresis compensation and tracking control of dielectric electro-active polymer actuator. Sens. Actuators A Phys. 2022, 341, 113581. [Google Scholar] [CrossRef]
  182. Khodadoost, S.; Saraee, M.; Talatahari, S.; Sareh, P. Optimal design of fractional-order proportional integral derivative controllers for structural vibration suppression. Sci. Rep. 2024, 14, 17207. [Google Scholar] [CrossRef] [PubMed]
  183. Munshi, J.; Chen, W.; Chien, T.; Balasubramanian, G. Transfer learned designer polymers for organic solar cells. J. Chem. Inf. Model. 2021, 61, 134–142. [Google Scholar] [CrossRef]
  184. Reza, M.S.; Reza, M.S.; Ghosh, A.; Rahman, M.F.; Rajabathar, J.R.; Ahmed, F.; Sajid, M.; Buian, M.F.I.; Bhandari, J.; Islam, M.A.; et al. New highly efficient perovskite solar cell with power conversion efficiency of 31% based on Ca3NI3 and an effective charge transport layer. Opt. Commun. 2024, 561, 130511. [Google Scholar] [CrossRef]
  185. Luong, T.; Seo, S.; Kim, K.; Jeon, J.; Koo, J.C.; Choi, H.R.; Moon, H. Hysteresis modeling of twisted-coiled polymer actuators using long short term memory networks. In Proceedings of the IFToMM Asian Conference on Mechanism and Machine Science, Hanoi, Vietnam, 15–18 December 2021; Springer: Berlin/Heidelberg, Germany, 2021; pp. 590–599. [Google Scholar]
  186. Luong, T.; Seo, S.; Hudoklin, J.; Koo, J.C.; Choi, H.R.; Moon, H. Variable stiffness robotic hand driven by twisted-coiled polymer actuators. IEEE Robot. Autom. Lett. 2022, 7, 3178–3185. [Google Scholar] [CrossRef]
  187. Kumar, A.; Parey, A.; Kankar, P.K. A new hybrid LSTM-GRU model for fault diagnosis of polymer gears using vibration signals. J. Vib. Eng. Technol. 2024, 12, 2729–2741. [Google Scholar] [CrossRef]
  188. Wang, Y.; Xu, W. Complete Ensemble Empirical Mode Decomposition with Adaptive Noise to Extract Deep Information of Bearing Fault in Steam Turbines via Deep Belief Network. Int. J. High Speed Electron. Syst. 2024, 2440079. [Google Scholar] [CrossRef]
  189. Shunhu, H.; Feng, M.; Qingshan, G.; Hua, Z. Efficient low-carbon manufacturing for CFRP composite machining based on deep networks. Int. J. Prod. Res. 2024, 1–12. [Google Scholar] [CrossRef]
  190. Aklouche, B.; Benkedjouh, T.; Habbouche, H.; Rechak, S. Damage assessment of composite material based on variational mode decomposition and BiLSTM. Int. J. Adv. Manuf. Technol. 2023, 129, 1801–1815. [Google Scholar] [CrossRef]
  191. Lu, H.; Chandran, B.; Wu, W.; Ninic, J.; Gryllias, K.; Chronopoulos, D. Damage features for structural health monitoring based on ultrasonic Lamb waves: Evaluation criteria, survey of recent work and outlook. Measurement 2024, 232, 114666. [Google Scholar] [CrossRef]
  192. Ali, L.; Isleem, H.F.; Bahrami, A.; Jha, I.; Zou, G.; Kumar, R.; Sadeq, A.M.; Jahami, A. Integrated behavioural analysis of FRP-confined circular columns using FEM and machine learning. Compos. Part C Open Access 2024, 13, 100444. [Google Scholar] [CrossRef]
  193. Deng, X.; Yang, F. Energy absorption characteristics of a double-filled sinusoidal corrugated filled tube under axial impact. J. Braz. Soc. Mech. Sci. Eng. 2024, 46, 364. [Google Scholar] [CrossRef]
  194. Li, B.F.; Wang, X.T.; Xie, C.D.; Yan, X.F.; Wang, S. Compressive behaviour and design of tapered lightweight concrete-filled double-skin stiffened steel tubular short columns with large hollow ratio. Structures 2024, 64, 106527. [Google Scholar] [CrossRef]
  195. Wang, Q.; Liu, Q.; Xia, R.; Li, G.; Gao, J.; Zhou, H.; Zhao, B. Defect depth determination in laser infrared thermography based on LSTM-RNN. IEEE Access 2020, 8, 153385–153393. [Google Scholar] [CrossRef]
  196. Chen, H.; Zhang, Z.; Yin, W.; Zhou, G.; Wang, L.; Li, Y.; Zhao, C.; Wang, C. Shape characterization and depth recognition of metal cracks based on laser infrared thermography and machine learning. Expert Syst. Appl. 2024, 238, 122083. [Google Scholar] [CrossRef]
  197. Gahleitner, L.; Mayr, G.; Burgholzer, P.; Cakmak, U. Efficient defect reconstruction from temporal non-uniform pulsed thermography data using the virtual wave concept. NDT E Int. 2024, 147, 103200. [Google Scholar]
  198. Kang, J.M.; Choi, S.H.; Park, J.w.; Park, K.S. Position error prediction using hybrid recurrent neural network algorithm for improvement of pose accuracy of cable driven parallel robots. Microsyst. Technol. 2020, 26, 209–218. [Google Scholar] [CrossRef]
  199. Kolemen, E.; Egrioglu, E.; Bas, E.; Turkmen, M. A new deep recurrent hybrid artificial neural network of gated recurrent units and simple seasonal exponential smoothing. Granul. Comput. 2024, 9, 7. [Google Scholar] [CrossRef]
  200. Chan, A.N.F.; Cheng, W.; Lau, D. Deformable Open-Frame Cable-Driven Parallel Robots: Modeling, Analysis and Control. IEEE Trans. Robot. 2024, 40, 3465–3480. [Google Scholar] [CrossRef]
  201. Lin, T.; Hu, L.; Wisely, W.; Gu, X.; Cai, J.; Litster, S.; Kara, L.B. Prediction of high frequency resistance in polymer electrolyte membrane fuel cells using long short term memory based model. Energy AI 2021, 3, 100045. [Google Scholar] [CrossRef]
  202. Lorenzo-Navarro, J.; Serranti, S.; Bonifazi, G.; Capobianco, G. Performance evaluation of classical classifiers and deep learning approaches for polymers classification based on hyperspectral images. In Proceedings of the Advances in Computational Intelligence: 16th International Work-Conference on Artificial Neural Networks, IWANN 2021, Virtual Event, 16–18 June 2021; Proceedings, Part II 16. Springer: Berlin/Heidelberg, Germany, 2021; pp. 281–292. [Google Scholar]
  203. Choi, S.B.; Shin, H.S.; Kim, J.W. Convolution Neural Networks for Motion Detection with Electrospun Reversibly-Cross-linkable Polymers and Encapsulated Ag Nanowires. ACS Appl. Mater. Interfaces 2023, 15, 47591–47603. [Google Scholar] [CrossRef] [PubMed]
  204. Li, R.; Wang, Q.; Jiang, J.; Xiang, X.; Ye, P.; Wang, Y.; Qin, Y.; Chen, Y.; Lai, W.; Zhang, X. Highly Stable Silver Nanowire Plasmonic Electrodes for Flexible Polymer Light-Emitting Devices. ACS Appl. Mater. Interfaces 2024, 16, 31419–31427. [Google Scholar] [CrossRef]
  205. Xu, J.; Chen, W.; Liu, L.; Jiang, S.; Wang, H.; Zhang, J.; Gan, X.; Zhou, X.; Guo, T.; Wu, C.; et al. Intelligent recognition of human motion using an ingenious electronic skin based on metal fabric and natural triboelectrification. Sci. China Mater. 2024, 67, 887–897. [Google Scholar] [CrossRef]
  206. Tan, D.; Wang, K.; Zhou, J.; Peng, J.; Wang, Q. A brief review of nonlinear triboelectric nanogenerator. Int. J. Dyn. Control. 2024, 12, 2072–2092. [Google Scholar] [CrossRef]
  207. Jiang, W.; Zhou, Q.; He, J.; Habibi, M.A.; Melnyk, S.; El-Absi, M.; Han, B.; Di Renzo, M.; Schotten, H.D.; Luo, F.L.; et al. Terahertz communications and sensing for 6G and beyond: A comprehensive review. IEEE Commun. Surv. Tutor. 2024. [Google Scholar] [CrossRef]
  208. Kadhafi, M.; Park, J.; Kang, D.; Koo, B. Thermal characteristics of multilayer sandwich Nomex honeycomb core composite with heating elements for railways wall panels. Adv. Compos. Mater. 2024, 1–19. [Google Scholar] [CrossRef]
  209. Badour, Y.; Pedros, M.; Gaudon, M.; Danto, S. Hybrid organic–inorganic PMMA optical fibers functionalized with photochromic active WO3 nanoparticles: From materials design to photochromic fabrics. Adv. Opt. Mater. 2024, 12, 2301717. [Google Scholar] [CrossRef]
  210. Ntonti, E.; Sotiriadou, S.; Assael, M.J.; Huber, M.L.; Wilthan, B.; Watanabe, M. Reference Correlations for the Density and Thermal Conductivity, and Review of the Viscosity Measurements, of Liquid Titanium, Zirconium, Hafnium, Vanadium, Niobium, Tantalum, Chromium, Molybdenum, and Tungsten. Int. J. Thermophys. 2024, 45, 18. [Google Scholar] [CrossRef]
  211. Molinié, P. Trap Spectroscopy from the Dielectric Isothermal Step Response: Theory and Simulations. In Proceedings of the 2024 IEEE 5th International Conference on Dielectrics (ICD), Toulouse, France, 30 June–10 July 2024; IEEE: Piscataway, NJ, USA, 2024; pp. 1–4. [Google Scholar]
  212. Li, S.; Ma, S.; Wang, F. A combined NOx emission prediction model based on semi-empirical model and black box models. Energy 2023, 264, 126130. [Google Scholar] [CrossRef]
  213. Guo, T.; Lin, T.; Antulov-Fantulin, N. Exploring interpretable LSTM neural networks over multi-variable data. In Proceedings of the International Conference on Machine Learning, PMLR, Long Beach, CA, USA, 9–15 June 2019; pp. 2494–2504. [Google Scholar]
  214. Seddik, S.; Routaib, H.; Elhaddadi, A. Multi-variable time series decoding with long short-term memory and mixture attention. Acadlore Trans. Mach. Learn. Res. 2023, 2, 154–169. [Google Scholar] [CrossRef]
  215. Liang, X.; Lin, L.; Shen, X.; Feng, J.; Yan, S.; Xing, E.P. Interpretable structure-evolving LSTM. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 1010–1019. [Google Scholar]
  216. Wu, J.M.T.; Li, Z.; Herencsar, N.; Vo, B.; Lin, J.C.W. A graph-based CNN-LSTM stock price prediction algorithm with leading indicators. Multimed. Syst. 2023, 29, 1751–1770. [Google Scholar] [CrossRef]
  217. Tang, H.; Wu, S.; Xu, G.; Li, Q. Dynamic graph evolution learning for recommendation. In Proceedings of the 46th international ACM Sigir Conference on Research and Development in Information Retrieval, Taipei, Taiwan, 23–27 July 2023; pp. 1589–1598. [Google Scholar]
  218. Ye, Q.; Li, Y.; Niu, B. Risk Propagation Mechanism and Prediction Model for the Highway Merging Area. Appl. Sci. 2023, 13, 8014. [Google Scholar] [CrossRef]
  219. Zhou, Y.; Huang, C.; Wu, T.; Zhang, M. A novel spatio-temporal cellular automata model coupling partitioning with CNN-LSTM to urban land change simulation. Ecol. Model. 2023, 482, 110394. [Google Scholar] [CrossRef]
  220. Su, H.; Tretyakov, M.; Newton, D.P. Deep learning of transition probability densities for stochastic asset models with applications in option pricing. Manag. Sci. 2024. [Google Scholar] [CrossRef]
  221. Yang, D.; Liu, T.; Zhang, X.; Zeng, X.; Song, D. Construction of high-precision driving cycle based on Metropolis-Hastings sampling and genetic algorithm. Transp. Res. Part D Transp. Environ. 2023, 118, 103715. [Google Scholar] [CrossRef]
  222. Oyewola, D.O.; Akinwunmi, S.A.; Omotehinwa, T.O. Deep LSTM and LSTM-Attention Q-learning based reinforcement learning in oil and gas sector prediction. Knowl.-Based Syst. 2024, 284, 111290. [Google Scholar] [CrossRef]
  223. Williams, J.D.; Zweig, G. End-to-end LSTM-based dialog control optimized with supervised and reinforcement learning. arXiv 2016, arXiv:1606.01269. [Google Scholar]
  224. Igual, L.; Seguí, S. Supervised learning. In Introduction to Data Science: A Python Approach to Concepts, Techniques and Applications; Springer: Berlin/Heidelberg, Germany, 2024; pp. 67–97. [Google Scholar]
  225. Zhong, D.; Yang, Y.; Zhao, Q. No Prior Mask: Eliminate Redundant Action for Deep Reinforcement Learning. In Proceedings of the AAAI Conference on Artificial Intelligence, Kragujevac, Serbia, 23–24 May 2024; Volume 38, pp. 17078–17086. [Google Scholar]
  226. Wang, H.; He, S.; Zhang, Z.; Miao, F.; Anderson, J. Momentum for the Win: Collaborative Federated Reinforcement Learning across Heterogeneous Environments. arXiv 2024, arXiv:2405.19499. [Google Scholar]
  227. Brandonisio, A.; Capra, L.; Lavagna, M. Deep reinforcement learning spacecraft guidance with state uncertainty for autonomous shape reconstruction of uncooperative target. Adv. Space Res. 2024, 73, 5741–5755. [Google Scholar] [CrossRef]
  228. Yi, Z.; Ouyang, J.; Liu, Y.; Liao, T.; Xu, Z.; Shen, Y. A Survey on Recent Advances in LLM-Based Multi-turn Dialogue Systems. arXiv 2024, arXiv:2402.18013. [Google Scholar]
  229. Elakkiya, R.; Subramaniyaswamy, V. Cognitive Analytics and Reinforcement Learning: Theories, Techniques and Applications; John Wiley & Sons: Hoboken, NJ, USA, 2024. [Google Scholar]
  230. Liu, S.; Yuan, H.; Hu, M.; Li, Y.; Chen, Y.; Liu, S.; Lu, Z.; Jia, J. RL-GPT: Integrating Reinforcement Learning and Code-as-policy. arXiv 2024, arXiv:2402.19299. [Google Scholar]
  231. Gueldner, P.H.; Darvish, C.J.; Chickanosky, I.K.; Ahlgren, E.E.; Fortunato, R.; Chung, T.K.; Rajagopal, K.; Benjamin, C.C.; Maiti, S.; Rajagopal, K.R.; et al. Aortic tissue stiffness and tensile strength are correlated with density changes following proteolytic treatment. J. Biomech. 2024, 172, 112226. [Google Scholar] [CrossRef] [PubMed]
  232. Kalinin, S.V.; Ziatdinov, M.; Ahmadi, M.; Ghosh, A.; Roccapriore, K.; Liu, Y.; Vasudevan, R.K. Designing workflows for materials characterization. Appl. Phys. Rev. 2024, 11, 011314. [Google Scholar] [CrossRef]
  233. Zhao, Z.; Tang, D.; Zhu, H.; Zhang, Z.; Chen, K.; Liu, C.; Ji, Y. A Large Language Model-based multi-agent manufacturing system for intelligent shopfloor. arXiv 2024, arXiv:2405.16887. [Google Scholar]
  234. Jhong, Y.D.; Chen, C.S.; Jhong, B.C.; Tsai, C.H.; Yang, S.Y. Optimization of LSTM parameters for flash flood forecasting using genetic algorithm. Water Resour. Manag. 2024, 38, 1141–1164. [Google Scholar] [CrossRef]
  235. Li, H.; Zhang, Z.; Li, T.; Si, X. A review on physics-informed data-driven remaining useful life prediction: Challenges and opportunities. Mech. Syst. Signal Process. 2024, 209, 111120. [Google Scholar] [CrossRef]
  236. Chui, K.T.; Gupta, B.B.; Vasant, P. A genetic algorithm optimized RNN-LSTM model for remaining useful life prediction of turbofan engine. Electronics 2021, 10, 285. [Google Scholar] [CrossRef]
  237. Frusque, G.; Fink, O. Robust time series denoising with learnable wavelet packet transform. Adv. Eng. Inform. 2024, 62, 102669. [Google Scholar] [CrossRef]
  238. Sha, X. Time Series Stock Price Forecasting Based on Genetic Algorithm (GA)-Long Short-Term Memory Network (LSTM) Optimization. arXiv 2024, arXiv:2405.03151. [Google Scholar] [CrossRef]
  239. Zhao, Z.; Zhang, H. An Automatic Analysis Approach of Bridge Life Based on Engineering Mathematics and LSTM. In Proceedings of the 2024 19th Annual System of Systems Engineering Conference (SoSE), Tacoma, WA, USA, 23–26 June 2024; IEEE: Piscataway, NJ, USA, 2024; pp. 308–313. [Google Scholar]
  240. Jung, S.G.; Jung, G.; Cole, J.M. Automatic Prediction of Band Gaps of Inorganic Materials Using a Gradient Boosted and Statistical Feature Selection Workflow. J. Chem. Inf. Model. 2024, 64, 1187–1200. [Google Scholar] [CrossRef] [PubMed]
  241. Zhou, F.; Sun, C.; Pu, J.; Li, J.; Li, Y.; Xie, Q.; Li, K.; Chen, H. Efficiency optimization of fuel cell systems with energy recovery: An integrated approach based on modeling, machine learning, and genetic algorithm. J. Power Sources 2024, 615, 235077. [Google Scholar] [CrossRef]
  242. Cui, X.; Chipusu, K.; Ashraf, M.A.; Riaz, M.; Xiahou, J.; Huang, J. Symmetry-Enhanced LSTM-Based Recurrent Neural Network for Oscillation Minimization of Overhead Crane Systems during Material Transportation. Symmetry 2024, 16, 920. [Google Scholar] [CrossRef]
  243. Sharma, S.; Sen, S. Real-time structural damage assessment using LSTM networks: Regression and classification approaches. Neural Comput. Appl. 2023, 35, 557–572. [Google Scholar] [CrossRef]
  244. Gu, M.; Xu, A.; Wang, H.; Wang, Z. Real-time dynamic carbon content prediction model for second blowing stage in BOF based on CBR and LSTM. Processes 2021, 9, 1987. [Google Scholar] [CrossRef]
  245. Wolf, T.N.; Bongratz, F.; Rickmann, A.M.; Pölsterl, S.; Wachinger, C. Keep the faith: Faithful explanations in convolutional neural networks for case-based reasoning. In Proceedings of the AAAI Conference on Artificial Intelligence, Vancouver, BC, Canada, 20–27 February 2024; Volume 38, pp. 5921–5929. [Google Scholar]
  246. Ghazimoghadam, S.; Hosseinzadeh, S. A novel unsupervised deep learning approach for vibration-based damage diagnosis using a multi-head self-attention LSTM autoencoder. Measurement 2024, 229, 114410. [Google Scholar] [CrossRef]
  247. Tien, T.B.; Quang, T.V.; Ngoc, L.N.; Ngoc, H.T. Time series data recovery in SHM of large-scale bridges: Leveraging GAN and Bi-LSTM networks. Structures 2024, 63, 106368. [Google Scholar] [CrossRef]
  248. Gupta, A.; Kumar, N.; Sachdeva, A. Factors affecting the ageing of polymer composite: A state of art. Polym. Degrad. Stab. 2024, 221, 110670. [Google Scholar] [CrossRef]
  249. Xu, B.; Blok, R.; Teuffel, P.; Lucas, S.; Moonen, F. Effect of Moisture in Flax Fiber on Viscoelastic Properties of the Manufactured Flax Fiber Reinforced Polymer by Fractional-Order Viscoelastic Model. Mater. Today Commun. 2024, 40, 110138. [Google Scholar] [CrossRef]
  250. Singh, S.J.; Ahlawat, N.; Panwar, V. Computational analysis of viscoelastic properties in polymer composites. In Dynamic Mechanical and Creep-Recovery Behavior of Polymer-Based Composites; Elsevier: Amsterdam, The Netherlands, 2024; pp. 291–309. [Google Scholar]
  251. Senapati, S.; Banerjee, A.; Rajesh, R. Acoustic emission data based modelling of fracture of glassy polymer. Eng. Fract. Mech. 2024, 304, 110154. [Google Scholar] [CrossRef]
  252. Schilling, M.; Marschall, N.; Niebergall, U.; Wachtendorf, V.; Böhning, M. Characteristics of Environmental Stress Cracking of PE-HD induced by Biodiesel and Diesel Fuels. Polym. Test. 2024, 138, 108547. [Google Scholar] [CrossRef]
  253. Aydonat, S.; Hergesell, A.H.; Seitzinger, C.L.; Lennarz, R.; Chang, G.; Sievers, C.; Meisner, J.; Vollmer, I.; Göstl, R. Leveraging mechanochemistry for sustainable polymer degradation. Polym. J. 2024, 56, 249–268. [Google Scholar] [CrossRef]
  254. Singh, S.K.; Soman, R.N.; Malinowski, P.H. A novel multi-damage localization method for polymers and composites based on electromechanical impedance. Mech. Syst. Signal Process. 2024, 216, 111508. [Google Scholar] [CrossRef]
  255. Li, Y.; Feng, T.; Wang, Y.; Zhu, Z.; Peng, H.X.; Xu, P.; Qin, F. Real-time evaluating temperature-dependent interfacial shear strength of thermoplastic composites based on stress impedance effect of magnetic fibers. Compos. Part A Appl. Sci. Manuf. 2024, 176, 107874. [Google Scholar] [CrossRef]
  256. Zhu, J.; Chen, X.; Chen, J.; Chen, H.; Pak, R.Y. Effect of cyclic loading level on mechanical response and microcracking behavior of saturated sandstones: Correlation with water weakening phenomenon. Eng. Geol. 2024, 340, 107667. [Google Scholar] [CrossRef]
  257. Jia, D.; Hao, Z.; Peng, Y.; Yan, S.; Hu, W. Experimental Study on the Localized Deformation and Damage Behavior of Polymer-Bonded Explosive Simulant under Cyclic Compression. Materials 2024, 17, 919. [Google Scholar] [CrossRef]
  258. Li, J.; Wu, G.; Zhang, Y.; Shi, W. Optimizing flood predictions by integrating LSTM and physical-based models with mixed historical and simulated data. Heliyon 2024, 10, e33669. [Google Scholar] [CrossRef]
  259. Meng, Y.; Yun, S.; Zhao, Z.; Guo, J.; Li, X.; Ye, D.; Jia, L.; Yang, L. Short-term electricity load forecasting based on a novel data preprocessing system and data reconstruction strategy. J. Build. Eng. 2023, 77, 107432. [Google Scholar] [CrossRef]
  260. Zhou, Y.; Aryal, S.; Bouadjenek, M.R. Review for Handling Missing Data with special missing mechanism. arXiv 2024, arXiv:2404.04905. [Google Scholar]
Figure 1. LSTM architecture diagram.
Figure 2. Conceptual diagram of LSTM variants.
Figure 3. Workflow of the experimental and analytical approach used in this study.
Figure 4. Flowchart of the research process for polymer analysis using LSTM and THz-TDS techniques.
Figure 5. Conceptual diagram of LSTM with RL integration.
Figure 6. Conceptual diagram of GA and LSTM integration in predictive models.
Figure 7. Conceptual diagram of real-time LSTM applications in SHM and polymers.
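For readers unfamiliar with the architecture sketched in Figure 1, the LSTM cell can be summarized by its standard gate equations (conventional notation; the studies cited in the tables below may use minor variants):

```latex
\begin{aligned}
i_t &= \sigma(W_i x_t + U_i h_{t-1} + b_i) && \text{(input gate)}\\
f_t &= \sigma(W_f x_t + U_f h_{t-1} + b_f) && \text{(forget gate)}\\
o_t &= \sigma(W_o x_t + U_o h_{t-1} + b_o) && \text{(output gate)}\\
\tilde{c}_t &= \tanh(W_c x_t + U_c h_{t-1} + b_c) && \text{(candidate cell state)}\\
c_t &= f_t \odot c_{t-1} + i_t \odot \tilde{c}_t && \text{(cell state update)}\\
h_t &= o_t \odot \tanh(c_t) && \text{(hidden state)}
\end{aligned}
```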
Table 1. Summary of studies on LSTM models in time-series analysis of polymer systems (N/A—Not Applicable).

| Reference | Focus | Applied Model | Limitations | Data Information | Metrics |
|---|---|---|---|---|---|
| Ryman et al. [79] | Development of sensors using organic polymers for chemical detection | LSTM | Limited research into the neural architectures for chemical awareness in dynamic environments | Sensor data from organic polymers | N/A |
| Shin et al. [87] | Enhancing state-of-charge (SOC) estimation in batteries | LSTM combined with EKF | Potential uncertainties in battery models and varying conditions | Battery charge and discharge data | RMSE (<1%) |
| Andrews et al. [88] | Predicting the energetics of ethyl acetate solution with polymer–lipid aggregate | ERNN, LSTM, GRU | Struggles with accurate short- and long-term forecasts | Energetics data from polymer–lipid solutions | RMSE (0.1) |
| Wang et al. [92] | Recognition of motor tics using a hybrid sensor | LSTM | Potential limitations in recognition accuracy and self-powered operation | Motor tic sensor data | Signal recognition rate (88.1%) |
| Yezerska et al. [96] | Predicting H2 starvation effects in fuel cells | LSTM | Recommendations are based on simulations, which may have limitations in real-world applicability | Fuel cell performance data | N/A |
| Benhaddouch et al. [100] | Real-time monitoring of radical-induced degradation in PEMFCs | LSTM | Limited to predictive diagnostics; may not address all degradation mechanisms | PEMFC degradation data | N/A |
| Xu et al. [103] | Classification of substances within GFRP structures using THz-TDS | LSTM, 1D-CNN | LSTM excels with time-domain but struggles with frequency-domain signals compared to improved 1D-CNN | THz-TDS data from GFRP structures | F1 (0.88–0.91) |
| Song et al. [105] | Predicting melt index (MI) in polymerization processes | LSTM | Challenges with nonlinearity and complex temporal correlations | Polymerization process data | R² (≈0.8) |
| Song et al. [108] | Predicting nonlinear performance degradation of FRP | Reinforcement LSTM (SCRLA) | Potential complexity in model generalization and integration of Bayesian algorithms | FRP performance data | R² (≈0.9) |
| Goswami et al. [110] | Predicting glass transition temperature (Tg) in polymers | LSTM based on SMILES | Model performance and practical application may need further validation | Polymer SMILES data | N/A |
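To make the time-series setting of Table 1 concrete, the sketch below shows a minimal LSTM regressor for a scalar polymer process property, loosely in the spirit of the melt-index model of Song et al. [105]. It is illustrative only: the window length, number of process variables, layer sizes, and training settings are assumptions, and the random arrays stand in for real process measurements.

```python
# Minimal sketch (not the authors' code): an LSTM regressor for a polymer
# process time series, in the spirit of melt-index prediction [105].
# All shapes and hyperparameters below are illustrative assumptions.
import numpy as np
import tensorflow as tf

# Hypothetical dataset: 1000 windows of 60 time steps x 8 process variables,
# each labeled with the property measured at the end of the window.
X = np.random.rand(1000, 60, 8).astype("float32")
y = np.random.rand(1000, 1).astype("float32")

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(60, 8)),
    tf.keras.layers.LSTM(64),                      # summarizes the process window
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(1),                      # scalar property estimate (e.g., melt index)
])
model.compile(optimizer="adam", loss="mse", metrics=["mae"])
model.fit(X, y, validation_split=0.2, epochs=20, batch_size=32, verbose=0)
```

A model of this kind is typically judged by RMSE or R² on a held-out portion of the process history, which is how most entries in the Metrics column above are reported.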
Table 2. Summary of studies on LSTM models in monitoring of polymer materials (N/A—Not Applicable).

| Reference | Focus | Applied Model | Limitations | Data Information | Metrics |
|---|---|---|---|---|---|
| Kim et al. [113] | DL-based prediagnosis system for PEMFCs | LSTM, CNN combined with bagging ensemble method | Focused on specific failure modes (flooding, drying); may require validation in broader conditions | PEMFC performance data | Recall (83–93%), precision (73–98%) |
| Ramachandran et al. [116] | Predicting end of life of underwater electroacoustic sensors by modeling polymer insulation degradation | LSTM | Model predictions based on resistance measurements; real-time applicability may vary | Resistance measurements from polymer insulation | N/A |
| Lee et al. [119] | Predicting tensile behavior of polymer matrix composites (PMCs) | LSTM, FNN, PCA, RFECV | Accuracy may depend on the quality of feature selection and input data | Tensile test data from PMCs | R² (0.92) |
| Chistyakova et al. [124] | Predictive models for quality indicators in polymer film materials | AdaBoost, LSTM | Performance depends on specific production data characteristics; generalization may be limited | Production data from polymer films | N/A |
| Zhang et al. [128] | Predicting tensile strength retention (TSR) in GFRPs under alkaline conditions | LSTM, XGBoost | Sensitive to variations in pH and temperature; may need adaptation for different environmental conditions | Tensile strength data from GFRPs | Accuracy (85%) |
| Yoon et al. [131] | Enhancing EKF for SOC estimation in Li-polymer batteries | LSTM combined with EKF | Inaccuracies in SOC estimation under dynamic conditions still possible | Battery charge/discharge data | RMSE (≈0.24) |
| Jiang et al. [134] | Modeling hysteresis in DEAP actuators for robotics | LSTM with EMD | Hybrid model complexity may affect real-time implementation | Hysteresis data from DEAP actuators | MAE (≈0.02), MRE (≈0.01) |
| Wang et al. [136] | Classifying internal interfaces in polymers using THz waveform data | LSTM | Effectiveness depends on the quality of THz data; sensitivity to noise may limit use in some applications | THz waveform data from polymers | Accuracy (≈0.95) |
| Li et al. [137] | Predicting tool wear in milling CFRP by analyzing cutting force signals | Multichannel 1D-CNN with LSTM | Performance may vary with different tool materials and cutting conditions | Cutting force signal data | R² (95.04%), MAE (2.94) |
| Hantono et al. [139] | Estimating SOC of lithium polymer batteries | LSTM | Computation limited by hardware (Jetson Nano); may not scale easily to larger models | Battery charge/discharge data | RMSE (≈1.8) |
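Several entries in Table 2 combine convolutional feature extraction with an LSTM (e.g., [113,137]). The following is a hedged sketch of such a 1D-CNN–LSTM hybrid; the layer sizes, window length, and single regression output are illustrative assumptions rather than the configurations reported in those studies.

```python
# Illustrative 1D-CNN + LSTM hybrid, in the spirit of the monitoring models
# in Table 2 [113,137]; architecture details are assumptions, not the
# configurations reported in those studies.
import tensorflow as tf

def build_cnn_lstm(timesteps=256, channels=3, n_outputs=1):
    inputs = tf.keras.layers.Input(shape=(timesteps, channels))
    # Convolutions extract local patterns from the raw sensor signals.
    x = tf.keras.layers.Conv1D(32, 5, activation="relu", padding="same")(inputs)
    x = tf.keras.layers.MaxPooling1D(2)(x)
    x = tf.keras.layers.Conv1D(64, 5, activation="relu", padding="same")(x)
    x = tf.keras.layers.MaxPooling1D(2)(x)
    # The LSTM models how the extracted features evolve over the window.
    x = tf.keras.layers.LSTM(64)(x)
    outputs = tf.keras.layers.Dense(n_outputs)(x)  # e.g., a wear or health indicator
    return tf.keras.Model(inputs, outputs)

model = build_cnn_lstm()
model.compile(optimizer="adam", loss="mse")
```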
Table 3. Summary of studies on LSTM models in managing performance of polymer products (N/A—Not Applicable).

| Reference | Focus | Applied Model | Limitations | Data Information | Metrics |
|---|---|---|---|---|---|
| Dehghan et al. [140] | Predicting conductive and radiative heat transfer in PMMA | LSTM networks | May require further validation across diverse conditions | Heat transfer data from PMMA | RMSE (16.4) |
| Luong et al. [141] | Predicting behavior of an antagonistic joint driven by twisted-coiled polymer actuators | LSTM with Model Predictive Control (MPC) | Performance may be sensitive to actuator material variations | Actuator performance data | RMSE (0.21) |
| Dong et al. [143] | Hybrid modeling for TFE polymerization process | LSTM combined with kinetic and thermodynamic models | Effectiveness depends on accurate kinetic parameter estimation | Polymerization process data | N/A |
| Bi et al. [146] | Predicting polymer intrinsic viscosity for polyester fiber quality | TSDGAN, Attention LSTM, CNN | Sensitivity to the rate of missing data may limit generalizability | Intrinsic viscosity data | N/A |
| Rahman et al. [148] | Predictive maintenance for industrial drying hopper | CNN for multivariate time-series (MTS) classification | Imbalanced data handling might require additional techniques | Drying hopper performance data | Accuracy (98%) |
| Gao et al. [150] | Enhancing tactile perception with dual-mode tactile sensor | CNN-LSTM model | Performance may degrade under varying tactile conditions | Tactile sensor data | Recognition rate (77–90%) |
| Simine et al. [151] | Predicting UV-vis spectra of conjugated polymers | LSTM-RNN | Applicability might be limited to specific polymer types | UV-vis spectra data | N/A |
| Braghetto et al. [152] | Analyzing configurations of flexible knotted rings within spherical cavities | LSTM neural networks | Misclassification within the same topological family indicates model limitations | Configuration data of knotted rings | Accuracy (0.2–0.80) |
| Benrabia et al. [154] | Modeling energy storage systems under varying external states | NARX and LSTM models | NARX is more effective for batteries, LSTM for fuel cells; each model has application-specific strengths | Energy storage system data | N/A |
| Altabey et al. [156] | Predicting acoustic behavior of BFRP composite mufflers | RNN-LSTM, CNN optimized with Bayesian genetic algorithms | Generalization may be limited to specific muffler designs | Acoustic behavior data | Accuracy (>90%) |
| Wang et al. [161] | Detecting internal defects in GFRP using terahertz spectroscopy | 1D-CNN, LSTM-RNN, bidirectional LSTM-RNN | Best results with 1D-CNN; other models might need further refinement | Terahertz spectroscopy data | F1 score (0.91) |
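For the sequence-classification tasks in Table 3 (e.g., the acoustic and terahertz signals of [156,161]), a bidirectional LSTM is one of the recurrent baselines typically compared. The sketch below assumes a single-channel waveform and four classes purely for illustration; it is not the configuration of either cited study.

```python
# Hedged sketch of a bidirectional LSTM sequence classifier, one of the
# recurrent baselines compared for THz/acoustic signals [156,161].
# The waveform length and number of classes are assumptions.
import tensorflow as tf

n_classes = 4  # e.g., defect categories (illustrative)
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(512, 1)),                   # one waveform per sample
    tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(64)),
    tf.keras.layers.Dense(n_classes, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```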
Table 4. Summary of studies on LSTM models in predicting degradation of polymers (N/A—Not Applicable).

| Reference | Focus | Applied Model | Limitations | Data Information | Metrics |
|---|---|---|---|---|---|
| Bérot et al. [163] | Predicting polymer aging in epoxy adhesives under hygrothermal aging | LSTM with single hidden layer, 150 units, hyperbolic tangent activation | Requires precise tuning of network parameters for stability and accuracy | Aging data from epoxy adhesives | MSE (<0.01) |
| Oudah et al. [165] | Assessing time-dependent reliability of degrading structural systems | Hybrid FE simulation with LSTM networks | Validation needed across different structural materials and conditions | Structural degradation data | N/A |
| Oh et al. [166] | Estimating state of health (SoH) of lithium polymer batteries in railway fleets | LSTM models for SoH analysis over 500 charge/discharge cycles | Performance may vary under different operational environments | Battery SoH data | N/A |
| Karaburun et al. [168] | State-of-charge (SOC) estimation for lithium polymer batteries in UAVs | LSTM, SVR, Random Forest | Requires comparison with real-time applications in UAVs for further validation | Battery SOC data | RMSE (0.3) |
| Tripathi et al. [172] | Predicting mechanical response of CFRP laminates with BP/CNT interleaves | LSTM model trained on FEA and experimental data | Model accuracy depends on quality and quantity of FEA and experimental data | Mechanical response data from CFRP laminates | N/A |
| Reiner et al. [176] | Characterizing strain-softening in laminated composites under compression | LSTM-based recurrent neural network | High computational cost due to the need for extensive FE simulations | Strain-softening data from laminated composites | N/A |
| Najjar et al. [177] | Predicting kerf quality in laser cutting of basalt fiber-reinforced polymers | LSTM combined with Chimp Optimization Algorithm (ChOA) | Generalizability to different composite materials requires further exploration | Kerf quality data from laser cutting | RMSE (27–60%) |
| Jiang et al. [181] | Addressing hysteresis and creep in DEAP actuators | Hybrid LSTM with EMD and PID control | Application limited to DEAP actuators; may not extend to other actuator types | Hysteresis and creep data from DEAP actuators | N/A |
| Munshi et al. [183] | Discovering new polymer chemistries for OPV materials using transfer learning | LSTM model using SMILES molecular fingerprints | Model trained on a small dataset; larger datasets needed for broader application | Polymer chemistry data | N/A |
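The configuration reported for epoxy-adhesive aging in [163] (a single hidden LSTM layer with 150 units and hyperbolic tangent activation) is compact enough to restate as code. Only the layer width and activation below follow the cited description; the input window, feature count, and optimizer are assumptions added for illustration.

```python
# Sketch of the compact network described in [163]: one LSTM layer with
# 150 units and tanh activation. The input window (30 steps x 4 features),
# output head, and optimizer are assumptions added for illustration.
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(30, 4)),          # assumed aging-history window
    tf.keras.layers.LSTM(150, activation="tanh"),  # layer width/activation per [163]
    tf.keras.layers.Dense(1),                      # aged property to predict
])
model.compile(optimizer="adam", loss="mse")        # [163] reports MSE below 0.01
```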
Table 5. Summary of studies on LSTM models in sensor technologies and polymer composites (N/A—Not Applicable).

| Reference | Focus | Applied Model | Limitations | Data Information | Metrics |
|---|---|---|---|---|---|
| Luong et al. [185] | Predicting nonlinear behavior of an antagonistic joint driven by a hybrid TCA bundle | LSTM network for joint angle prediction | Specific to TCA-driven systems; may require adaptation for other actuation systems | Joint angle data from TCA-driven systems | Working range of 30% of the TCA |
| Kumar et al. [187] | Detecting faults in polymer gears | Hybrid LSTM-GRU model with CEEMDAN preprocessing | Model performance needs validation in different operational environments | Fault detection data from polymer gears | Accuracy (99%) |
| Shunhu et al. [189] | Optimizing drilling quality and energy efficiency in CFRP components | CNN-LSTM network correlating process parameters with outcomes | Applicability to other drilling processes and materials needs further testing | Drilling process data from CFRP components | N/A |
| Aklouche et al. [190] | Estimating damage severity in CFRP using Lamb wave (LW) data | Bidirectional LSTM (BiLSTM) with VMD for preprocessing | Limited to composite materials like CFRP; may not generalize to other material types | Damage severity data from CFRP | N/A |
| Ali et al. [192] | Comparing structural behavior of DSDFT and DSHT columns | LSTM and BiLSTM models for predicting axial load capacity | Predictions specific to column types studied; generalization needs further exploration | Axial load capacity data from columns | RMSE (0.065) |
| Wang et al. [195] | Assessing defect depth in CFRP sheets using laser infrared thermography (LIT) | LSTM-RNN combined with TSR for noise reduction | Model effectiveness might vary with different defect types and depths | Defect depth data from CFRP sheets | R² (0.78–0.93) |
| Kang et al. [198] | Addressing nonlinear issues in CDPRs with polymer cables | Hybrid RNN (H-RNN) combining LSTM and basic RNN for position error prediction | Model complexity may limit its application to simpler systems | Position error data from CDPRs | N/A |
| Lin et al. [201] | Real-time prediction of HFR in PEMFCs | LSTM model using current and past sensor data | Model effectiveness may decrease with changes in PEMFC operational conditions | HFR data from PEMFCs | MAPE (2.82%) |
| Lorenzo-Navarro et al. [202] | Classifying plastics using hyperspectral images | 1D-CNN and SVM+RBF models | Requires extensive hyperspectral data; may be limited to specific plastic types | Hyperspectral image data from plastics | Accuracy (99.41%) |
| Choi et al. [203] | Enhancing mechanical stability and motion detection in PBU/AgNW/PBU sensors | 1D-CNN and LSTM models for motion detection | Limited testing in real-world applications; further validation required | Motion detection data from sensors | Accuracy (98%) |
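As a final illustration, the hybrid recurrent models in Table 5 can be assembled by stacking recurrent layers of different types. The sketch below follows the spirit of the LSTM-GRU fault classifier for polymer gears [187]; layer sizes, window length, and the number of fault classes are assumptions, and the CEEMDAN preprocessing of the vibration signals is omitted.

```python
# Illustrative stacked LSTM-GRU classifier in the spirit of the polymer-gear
# fault-diagnosis model [187]; layer sizes, window length, and class count
# are assumptions, and the CEEMDAN preprocessing step is omitted.
import tensorflow as tf

n_fault_classes = 5                                   # assumed
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(1024, 1)),           # one vibration window per sample
    tf.keras.layers.LSTM(64, return_sequences=True),  # long-range temporal context
    tf.keras.layers.GRU(32),                          # lighter recurrent summary
    tf.keras.layers.Dense(n_fault_classes, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```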