Review

Artificial Intelligence in Predicting Mechanical Properties of Composite Materials

by Fasikaw Kibrete 1,2,3, Tomasz Trzepieciński 4,*, Hailu Shimels Gebremedhen 2,3 and Dereje Engida Woldemichael 2,3

1 Department of Mechanical Engineering, University of Gondar, Gondar P.O. Box 196, Ethiopia
2 Artificial Intelligence and Robotic Center of Excellence, Addis Ababa Science and Technology University, Addis Ababa P.O. Box 16417, Ethiopia
3 Department of Mechanical Engineering, College of Engineering, Addis Ababa Science and Technology University, Addis Ababa P.O. Box 16417, Ethiopia
4 Department of Manufacturing Processes and Production Engineering, Rzeszow University of Technology, Al. Powst. Warszawy 8, 35-959 Rzeszow, Poland
* Author to whom correspondence should be addressed.
J. Compos. Sci. 2023, 7(9), 364; https://doi.org/10.3390/jcs7090364
Submission received: 17 July 2023 / Revised: 25 August 2023 / Accepted: 28 August 2023 / Published: 1 September 2023
(This article belongs to the Special Issue Machine Learning in Composites)

Abstract:
The determination of mechanical properties plays a crucial role in utilizing composite materials across multiple engineering disciplines. Recently, there has been substantial interest in employing artificial intelligence, particularly machine learning and deep learning, to accurately predict the mechanical properties of composite materials. This comprehensive review paper examines the applications of artificial intelligence in forecasting the mechanical properties of different types of composites. The review begins with an overview of artificial intelligence and then outlines the process of predicting material properties. The primary focus of this review lies in exploring various machine learning and deep learning techniques employed in predicting the mechanical properties of composites. Furthermore, the review highlights the theoretical foundations, strengths, and weaknesses of each method used for predicting different mechanical properties of composites. Finally, based on the findings, the review discusses key challenges and suggests future research directions in the field of material properties prediction, offering valuable insights for further exploration. This review is intended to serve as a significant reference for researchers engaging in future studies within this domain.

1. Introduction

Material scientists and engineers continuously aim to improve their ability to understand, predict, and enhance the desired properties in materials. These properties typically encompass chemical, thermal, mechanical, electrical, optical, and magnetic aspects, as depicted in Figure 1. Among these, accurately predicting the mechanical properties of materials, including strength, stiffness, elasticity, plasticity, ductility, brittleness, toughness, and hardness, holds significant importance for their application in various engineering fields [1]. Traditionally, determining the mechanical properties of materials has relied on laborious and expensive experimental tests such as tensile, compression, and impact testing. However, conducting a series of experiments can be time consuming, costly due to the required equipment, and prone to error arising from testing inaccuracies, machine issues, or variations among manufacturers [2]. Recognizing these limitations, researchers and professionals have increasingly turned to simulation-based approaches for predicting mechanical properties. In contrast to experimental measurements, numerical simulations offer advantages such as reduced material consumption and equipment requirements. However, the reliability of the data generated from simulations may vary, and the calculations often necessitate high-performance computing equipment.
The utilization of artificial intelligence (AI) methods for predicting material properties has experienced significant growth in recent decades. This expansion can be attributed to the increased availability of material data obtained from experimental measurements and numerical simulations. As a result, AI techniques, particularly machine learning (ML), a subfield of AI, have witnessed remarkable advancements and widespread application in the study of material properties. The fundamental concept behind using machine learning for the prediction of properties involves analyzing and determining the nonlinear relationships between properties and their related factors based on existing information. Researchers have leveraged machine learning methods to predict various properties of diverse materials, including metals [3,4], polymers [5], ceramics [6], and composites [7]. Machine learning’s capacity to learn the intricate nonlinearities of material properties from training data has been instrumental in making these predictions.
Machine learning has also been extensively utilized by researchers to explore the mechanical properties of composite materials with complex microstructures. The pioneering use of machine learning in predicting composite material properties dates back to 1995, when artificial neural networks were employed to forecast the mechanical behavior of metal matrix composites [8]. Building upon this early work, subsequent studies [8,9,10] utilized neural networks to predict various mechanical properties of composite materials.
In another study, Vinoth and Datta [11] introduced an artificial neural-network-based machine learning approach to predict the Young’s modulus and tensile strength of polyethylene nanocomposites based on the geometric parameters of nanometric filler. Furthermore, Daghigh et al. [12] employed decision trees and adaptive boosting machine learning methods to predict the fracture toughness of multi-scale bio-nanocomposites containing different particle fillers. Numerous studies have also demonstrated the superior performance of machine learning methods in predicting the mechanical properties of composites [13,14,15,16,17].
In recent times, there has been a growing prevalence of deep learning methods in the domain of material properties prediction. These methods have garnered attention due to their promising capabilities in extracting pertinent information from existing data and making accurate predictions of material properties. Gu et al. [18] employed linear and convolutional neural network models to forecast the toughness and strength of two-dimensional composites using images of their microstructure. This approach leveraged the power of deep learning to analyze the microstructural features and make predictions based on them. Stel’makh et al. [19] combined deep neural networks with ensemble regression trees to predict the mechanical properties of highly functional lightweight fiber-reinforced concrete. This integration of deep learning with ensemble techniques allowed for more accurate predictions of the mechanical properties of the concrete material. These recent studies exemplify the growing utilization of deep learning methods in the prediction of material properties, showcasing their effectiveness in capturing complex relationships within the data and providing valuable insights into material behavior.
Although the application of machine learning and deep learning methods for material properties prediction has been reported several times in the literature, the establishment and understanding of holistic insights in this field are nevertheless limited. A review of the recent research achievements in this field is lacking, and the future research directions for further development have not been clearly stated. Furthermore, the selection and adoption of the right artificial intelligence method for a specific domain is still largely dependent on the experience of the researchers.
Taking into consideration the aforementioned points, this research paper presents a comprehensive review that focuses on recent advancements in the utilization of artificial intelligence methods to predict the mechanical properties of composite materials. The primary objective of this review is to explore the commonly used traditional machine learning and deep learning techniques employed for predicting the mechanical properties of composites. Moreover, the study examines the fundamental principles, strengths, and weaknesses of each method used for predicting the various mechanical properties of composites. The aim is to provide valuable guidance to researchers and professionals in selecting suitable intelligent methods that are specifically tailored to their tasks, rather than relying on random selection. By consolidating the observations made, the study also identifies key challenges and proposes future research directions within this promising field. To achieve these objectives, a systematic analysis of the primary literature in this domain was conducted, and the findings are presented concisely to offer a comprehensive overview.
The rest of this paper is organized as follows. Section 2 introduces artificial intelligence and summarizes its material properties prediction process. Section 3 and Section 4 provide a concise review of the application of traditional machine learning methods and deep learning methods, respectively, in predicting the mechanical properties of various composite materials. Section 5 summarizes the observations, and presents the key challenges and future research directions in this field. Finally, the conclusions are presented in Section 6.

2. Overview of Artificial Intelligence in the Prediction of Material Properties

In recent times, artificial intelligence (AI) has emerged as a revolutionary force with transformative potential, impacting numerous fields worldwide. The broad range of applications of AI signifies its capacity to bring about significant changes in various domains. The term “artificial intelligence” (AI) was initially coined by John McCarthy in the year 1955 [20], and is defined as the field of computer science that enables computers to mimic human intelligence processes, such as learning, reasoning, and self-correction [21]. AI is employed to carry out complex tasks in the same way humans solve problems. It encompasses a wide array of methods, including machine learning, deep learning, and traditional rule-based programming. These approaches enable AI systems to learn and make decisions in order to perform specific tasks autonomously. Machine learning involves training AI models on large datasets to learn patterns and make predictions or decisions without explicit programming. Deep learning, which is a subset of machine learning, leverages artificial neural networks with multiple layers to effectively process intricate and complex data. Additionally, traditional rule-based programming involves explicitly defining a set of rules and conditions for AI systems to follow. By leveraging these various methods, AI systems can exhibit intelligent behavior and solve intricate problems.
Machine learning (ML) is a subfield of artificial intelligence that enables algorithms to learn from past data and experiences. By leveraging this capability, ML algorithms can develop models that encapsulate the knowledge gained from the data. These models are subsequently utilized to make predictions or decisions without explicit programming instructions. ML algorithms excel at identifying patterns and relationships within the data, enabling them to make accurate predictions or take appropriate actions when presented with new, previously unseen inputs. This ability to learn from data, generalize to new instances, and make informed decisions without explicit instructions is a key characteristic of machine learning and contributes to its wide range of applications in various domains [22,23]. As shown in Figure 2, machine learning algorithms can generally be classified into four categories: supervised learning, unsupervised learning, semi-supervised learning, and reinforcement learning [24]. In supervised learning, the algorithm learns from labeled training datasets to make predictions for new data. Supervised learning can be used for both regression and classification tasks. It is the most widely used learning approach in the field of material properties prediction. In unsupervised learning, the algorithm learns to recognize patterns in data without being explicitly trained using labeled data. Semi-supervised learning represents an intermediate approach between supervised and unsupervised learning, incorporating both labeled and unlabeled data (with a predominant focus on unlabeled data) in the training process. For a full understanding of these algorithms, we refer the reader to the excellent papers of Chibani et al. [25] and Chan et al. [26]. Reinforcement learning is a distinct type of machine learning algorithm that has the ability to interact with dynamic environments, learning through trial and error in order to make predictions [27]. However, reinforcement learning is not yet widely used in the prediction of material properties, and the focus of this review is confined to supervised and unsupervised learning models.
Machine learning models based on artificial neural networks with more than two hidden layers are known as deep learning (DL) models. By having multiple layers, deep learning models are capable of automatically extracting relevant information from raw input data, eliminating the need for human intervention in feature engineering. These models learn intricate patterns and representations directly from the data, allowing them to handle complex tasks such as speech recognition, natural language processing, and many others. Due to its capacity to capture intricate patterns and hierarchical representations within the data, deep learning has exhibited remarkable performance across a wide range of domains [28]. An example of a deep neural network architecture is displayed in Figure 3. Convolutional neural networks (CNNs), stacked auto-encoders (SAEs), deep belief networks (DBNs), recurrent neural networks (RNNs), and generative adversarial networks (GANs) are among the most widely recognized deep learning methods to have demonstrated successful applications in predicting material properties. The relationship between artificial intelligence, machine learning, and deep learning is shown in Figure 4.
In the context of artificial-intelligence-based prediction of material properties, the raw experimental or simulated material data are typically divided into three separate datasets: the training dataset, the validation dataset, and the testing dataset. The training dataset is utilized to train the artificial intelligence model. During the training process, the model learns from the patterns and relationships within this dataset. The validation dataset is employed to monitor the model’s performance and detect any signs of overfitting. Overfitting happens when a model focuses too much on the training data and fails to generalize well to new data. By assessing the model’s performance on the validation dataset, adjustments can be made to optimize the model’s hyperparameters. Hyperparameters are parameters that control the learning process and behavior of the model. After the model is trained and optimized, it can be utilized to make predictions on new input data, known as the testing dataset. The testing dataset acts as an independent collection of data that is used to assess the model’s predictive performance on unseen examples. This step helps assess the model’s generalization capabilities and provides an estimation of its performance in real-world scenarios. Figure 5 shows the typical prediction process of artificial-intelligence-based methods in predicting material properties.
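For illustration, the following minimal scikit-learn sketch reproduces this split–train–validate–test workflow; the composite descriptors, property values, split ratios, and the ridge regression model are placeholder assumptions rather than elements of any cited study.

```python
# Minimal sketch of the train/validation/test workflow described above,
# using scikit-learn. The feature matrix X (composite descriptors) and the
# target y (a mechanical property such as tensile strength) are synthetic
# placeholders, and ridge regression stands in for any predictive model.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.linear_model import Ridge
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)
X = rng.random((500, 8))                                  # e.g. fibre fraction, filler size, ...
y = X @ rng.random(8) + 0.1 * rng.standard_normal(500)    # synthetic property values

# 70% training, 15% validation, 15% testing
X_train, X_tmp, y_train, y_tmp = train_test_split(X, y, test_size=0.3, random_state=42)
X_val, X_test, y_val, y_test = train_test_split(X_tmp, y_tmp, test_size=0.5, random_state=42)

# The validation set guides the choice of a hyperparameter (here: regularization strength)
best_alpha, best_score = None, -np.inf
for alpha in (0.01, 0.1, 1.0, 10.0):
    model = Ridge(alpha=alpha).fit(X_train, y_train)
    score = r2_score(y_val, model.predict(X_val))
    if score > best_score:
        best_alpha, best_score = alpha, score

# The untouched test set gives an unbiased estimate of generalization performance
final_model = Ridge(alpha=best_alpha).fit(X_train, y_train)
print("Test R^2:", r2_score(y_test, final_model.predict(X_test)))
```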
In the following sections, applications of machine learning and deep learning methods in the prediction of the mechanical properties of composite materials are presented.

3. Traditional Machine Learning Methods for Predicting the Mechanical Properties of Composites

In this section, a brief overview of the application of traditional machine learning methods for predicting the mechanical properties of composite materials is presented.

3.1. Support Vector Machine (SVM)

The support vector machine (SVM) is a popular supervised learning method used for classification, regression, and outlier detection. SVM was initially proposed by Cortes and Vapnik in 1995 based on statistical learning theory [30]. As shown in Figure 6, SVM helps in determining the best line or decision boundary, called a hyperplane. The samples nearest to the decision boundary are called support vectors, which govern the decision boundary equation. The margin is the distance between the support vectors and the hyperplane. SVM aims to find the optimal hyperplane that has the maximal margin [31].
Support vector regression (SVR) is the method employed to solve regression problems using SVM. One notable advantage of SVM regression is that its computational complexity is independent of the dimensionality of the input space. Additionally, SVR exhibits excellent generalization capability and achieves high prediction accuracy. These characteristics have made SVR a successful tool for predicting the mechanical properties of various composite materials. For example, Tang et al. [32] used support vector regression (SVR) to predict the mechanical properties of composite materials. Bonifácio et al. [33] applied a support vector machine with regression to predict the mechanical properties, including compressive strength and static Young’s Modulus, of concrete, and the results showed the satisfactory capability of SVR for predicting properties. Similarly, Liu et al. [7] applied the SVM regression algorithm to predict the Young’s modulus and the ultimate tensile strength of graphene-reinforced aluminium nanocomposites based on molecular dynamics simulation. The results showed that SVM regression outperformed ANN and AdaBoost regression for the prediction of material properties, but at a higher computational cost. Hasanzadeh et al. [34] predicted the compressive, flexural, and tensile strengths of basalt-fiber-reinforced concrete using support vector regression (SVR). The modulus of elasticity and compressive stress–strain curves were also simulated using this traditional machine learning method. Similarly, many researchers have used support vector regression (SVR) to predict the mechanical properties of different types of composite materials [35,36,37,38]. The results have demonstrated that SVR can reasonably predict the evaluated mechanical properties with high accuracy.
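A minimal sketch of how an SVR model might be fitted to tabular composite data is given below; the descriptors, property values, and kernel settings are placeholder assumptions, not data from the studies cited above.

```python
# Hedged sketch: fitting a support vector regressor (SVR) to predict a
# mechanical property from tabular composite descriptors. The data are
# synthetic placeholders; the kernel, C, and epsilon values would need tuning.
import numpy as np
from sklearn.svm import SVR
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
X = rng.random((200, 5))                                        # e.g. fibre content, porosity, ...
y = 50 + 120 * X[:, 0] - 30 * X[:, 1] ** 2 + rng.normal(0, 2, 200)   # e.g. strength in MPa

# Feature scaling matters for kernel SVMs, hence the pipeline
svr = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.5))
print("5-fold CV R^2:", cross_val_score(svr, X, y, cv=5, scoring="r2").mean())
```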
In summary, SVM demonstrates excellent prediction performance for small datasets with a high number of input variables compared to artificial neural networks and decision trees, showcasing its superior accuracy and computational efficiency. This advantage arises from SVM’s ability to generalize well from smaller amounts of feature data and its avoidance of iterative processes for optimizing weights and bias parameters. However, SVM suffers from a lack of transparency in its results and from the complexity of selecting the appropriate kernel function and its parameters, and it also consumes significant memory resources, limiting its utilization in the prediction of material properties. Additionally, SVM’s performance diminishes when handling large datasets, especially in the presence of noise and outliers.

3.2. k-Nearest Neighbor (k-NN)

The k-nearest neighbor (k-NN) method is a machine learning method that falls under the category of non-parametric and instance-based approaches. It can be utilized for both classification and regression tasks. Unlike other methods that construct a general internal model, k-NN stores all instances associated with the training data in an n-dimensional space. In k-NN, the distances between the new data and training data in a descriptor hyperspace are measured using Euclidean distance; thus, the output is predicted based on the values of the nearest k instances. k-NN possesses several strengths, including its ease of understanding, simpler modeling approach, and reasonable performance with minimal parameter adjustments. Consequently, the predictive ability of k-NN has garnered significant interest among researchers in the field of predicting the mechanical behavior of composite materials. For instance, Sharma et al. [39] proposed a model using the k-nearest neighbor algorithm to predict the fracture toughness of silica-filled epoxy-reinforced composites. The results showed that the proposed model was able to achieve up to 96% prediction accuracy with limited experimentation. k-NN has also been successfully used to predict the tensile response of aluminum composites [40]. Thirumoorthy et al. [41] developed a unified prediction framework using the k-nearest neighbor and ant lion optimization algorithms to evaluate the tensile properties of reinforced Al6061 composites. The prediction results showed that the proposed KNN-ALO was able to accurately predict the tensile and hardness properties of stir-cast aluminium composites.
The aforementioned studies provide evidence that k-NN demonstrates reasonable accuracy for the prediction of the mechanical properties of composite materials. Its simplicity, versatility, and effectiveness are the major advantages of the k-NN approach. However, there are certain drawbacks to k-NN algorithms, including their limited ability to handle a large number of features, their weaker performance in capturing complex correlations, and their slower processing speed when dealing with large-volume datasets. Additionally, selecting the optimal neighborhood parameter, k, can be challenging, as it significantly impacts the predictive performance when determining material properties using k-NN.
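The sketch below illustrates k-NN regression and the selection of the neighborhood parameter k by cross-validation; all data and parameter ranges are synthetic placeholders introduced purely for illustration.

```python
# Rough illustration of k-NN regression and of choosing the neighborhood
# parameter k by cross-validation, as discussed above. The data are
# synthetic placeholders for composite descriptors and a measured property.
import numpy as np
from sklearn.neighbors import KNeighborsRegressor
from sklearn.model_selection import GridSearchCV
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import Pipeline

rng = np.random.default_rng(2)
X = rng.random((150, 4))                                   # e.g. filler fraction, particle size, ...
y = 3.0 * X[:, 0] + np.sin(4 * X[:, 1]) + rng.normal(0, 0.1, 150)

pipe = Pipeline([("scale", StandardScaler()), ("knn", KNeighborsRegressor())])
search = GridSearchCV(pipe, {"knn__n_neighbors": [1, 3, 5, 7, 9, 15]}, cv=5, scoring="r2")
search.fit(X, y)
print("best k:", search.best_params_["knn__n_neighbors"],
      "CV R^2:", round(search.best_score_, 3))
```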

3.3. Decision Tree

The decision tree is a widely recognized non-parametric supervised machine learning method employed for both classification and regression purposes. Decision tree models are composed of a root node, leaf nodes, and branches, as illustrated in Figure 7. The root of the tree represents the input data, and each branch represents a potential decision. The leaves of the tree correspond to the output generated by the algorithm.
Numerous researchers have used decision tree approaches to predict the mechanical properties of composite materials. Qi et al. [43] used the decision tree model to predict the mechanical properties of carbon-fiber-reinforced plastic. Kosicka et al. [44] also employed a decision tree model to predict the mechanical properties of polymer composites with alumina modifiers. However, the issue of overfitting greatly affects the prediction performance of decision trees in this research area. In order to enhance the prediction performance of the decision tree, the random forest concept was introduced. Random forest regression is one of the ensemble learning methods for regression. In the random forest, the ensemble consists of a number, k, of decision trees, and the trees are selected randomly for the maximum number of cycles to obtain the final prediction results from the given dataset. Hegde et al. [45] investigated the mechanical properties, including the hardness, of vacuum-sintered Ti-6Al-4V reinforced with SiCp composites using a random forest regression method. Zhang et al. [46] also predicted the mechanical properties of composite laminate using a random forest model, and the results showed that the model was able to arrive at an accurate prediction in a shorter time. Similarly, researchers have used random forest regression to predict the mechanical properties of composites [13,47].
In addition to random forest, adaptive boosting (AdaBoost) is widely used to predict the mechanical properties of composite materials. AdaBoost is an ensemble learning algorithm that employs an iterative approach in order to improve predictive accuracy by learning from the errors of previous trees. Liu et al. [7] utilized the AdaBoost regression algorithm to predict the Young’s modulus and ultimate tensile strength of graphene-reinforced aluminium nanocomposites based on molecular dynamics datasets. Karamov et al. [48] utilized an AdaBoost regression algorithm to predict the fracture toughness of pultruded composites using other material properties. Pathan et al. [49] employed a gradient-boosting regression algorithm to predict the macroscopic elastic stiffness and yield strengths of unidirectional fiber composites. Shang et al. [50] also proposed decision-tree- and AdaBoost-based models for predicting the compressive strength and splitting tensile strength of recycled coarse-aggregate-based concrete. The results showed that the AdaBoost regressor was able to make more accurate predictions than the decision tree.
Extreme gradient boosting (XGBoost) is another ensemble learning algorithm that constructs a final model by combining multiple individual decision trees. XGBoost utilizes gradient descent optimization techniques to minimize the loss function associated with the training process. This approach enhances the performance and predictive accuracy of the model. It has been shown by Guo et al. [51] that the XGBoost regression algorithm can more accurately predict the compressive strength, tensile strength, and ductility of high-performance fiber-reinforced cementitious composites than an artificial neural network, support vector regression, or classification and regression tree under the same conditions.
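For illustration, the sketch below compares the tree-based regressors discussed in this subsection on synthetic placeholder data; XGBoost would be used analogously through the separate xgboost package (XGBRegressor) and is therefore omitted here.

```python
# Illustrative comparison of the tree-based regressors mentioned above
# (single decision tree, random forest, AdaBoost, gradient boosting) on
# synthetic placeholder data representing composite descriptors and a
# mechanical property such as compressive strength.
import numpy as np
from sklearn.tree import DecisionTreeRegressor
from sklearn.ensemble import RandomForestRegressor, AdaBoostRegressor, GradientBoostingRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)
X = rng.random((300, 6))
y = 40 * X[:, 0] + 10 * X[:, 1] * X[:, 2] + rng.normal(0, 1, 300)

models = {
    "decision tree": DecisionTreeRegressor(max_depth=5, random_state=0),
    "random forest": RandomForestRegressor(n_estimators=200, random_state=0),
    "AdaBoost": AdaBoostRegressor(n_estimators=200, random_state=0),
    "gradient boosting": GradientBoostingRegressor(n_estimators=200, random_state=0),
}
for name, model in models.items():
    score = cross_val_score(model, X, y, cv=5, scoring="r2").mean()
    print(f"{name:18s} CV R^2 = {score:.3f}")
```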
In general, decision tree models are simpler to interpret than other machine learning methods, as their predictions do not depend on the tuning of weights and hyperparameters. Decision trees can also handle large datasets with low time consumption.

3.4. Artificial Neural Networks (ANNs)

Artificial neural networks (ANNs) are the most popular supervised machine learning method, and were inspired by the biological neural network. The typical architecture of an artificial neural network consists of an input layer, a hidden layer, and an output layer, as shown in Figure 8. The neurons of the input layer take input features, while output neurons give predictions. In the hidden layer, each neuron receives input data from input-layer neurons, integrates those data, and then uses the result in a straightforward computation.
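A minimal sketch of such a single-hidden-layer network, implemented with scikit-learn's MLPRegressor on placeholder data, is shown below; the layer size and activation function are illustrative assumptions.

```python
# Minimal sketch of the feedforward ANN architecture described above
# (input layer, one hidden layer, output layer) for property regression.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(4)
X = rng.random((400, 5))                                   # e.g. layup angles, fibre fraction, ...
y = 70 * X[:, 0] + 20 * np.tanh(3 * X[:, 1]) + rng.normal(0, 1, 400)   # e.g. flexural strength

ann = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(16,), activation="relu",
                 max_iter=2000, random_state=0),
)
ann.fit(X, y)
print("training R^2:", round(ann.score(X, y), 3))
```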
The ability of artificial neural networks (ANNs) to handle complex nonlinear data has made them one of the most common machine learning methods in the prediction of different mechanical properties of composite materials. For instance, Krishnan et al. [52] conducted a study on the application of artificial neural networks (ANNs) to predict various mechanical properties, including the flexural modulus, ultimate tensile strength, tensile modulus, flexural strength, hardness, and elongation of recycled low-density polyethylene composite reinforced with date palm wood fiber. The results demonstrated highly favorable correlation coefficients for all predictions, exceeding 0.99. In the study by Barbosa et al. [14], an artificial neural network (ANN) was employed to predict the mechanical properties of multi-layered laminate composites. Kabbani et al. [53] focused on predicting the stress–strain relationship of unidirectional fiberglass polypropylene at various fiber orientation angles and cooling rates using an ANN. Wang et al. [54] utilized a standard ANN to predict the fracture behavior of carbon-fiber-reinforced polymer laminates, achieving a prediction rate of 86%, thus outperforming the k-nearest neighbors and random forest models. In ref. [55], an ANN was used to predict the static strength properties of carbon-fiber-reinforced composites. Devadiga et al. [56] employed an ANN to predict the hardness of multiwall-carbon-nanotube–fly-ash-reinforced aluminum composites.
In addition to the standard ANN, researchers have proposed novel neural network methods for enhancing the prediction performance of composite materials’ mechanical behavior. Wang et al. [57] introduced a backpropagation neural network (BPNN) algorithm within the ANN framework to predict the mechanical properties of multi-layer braided-textile-reinforced tubular structures based on axial compression experiments. Li et al. [40] proposed a genetic algorithm (GA)-optimized backpropagation neural network model for predicting the transverse elastic modulus, transverse tensile strength, and transverse compressive strength of unidirectional carbon-fiber-reinforced polymer composites containing microvoids. Rajkumar et al. [58] developed a feedforward neural network (FFNN) model to predict the mechanical properties of giant reed-fiber-reinforced polyethylene terephthalate composites. Kumar et al. [59] utilized the radial basis function neural network and the generalized regression neural network models to predict the ultimate failure strength of glass/epoxy composite laminates using acoustic emission parameters.
The foregoing literature review indicates that artificial neural networks (ANNs) are the most frequently used approach for predicting the mechanical properties of composite materials, with the highest accuracy. However, the prediction process of ANNs is hard to explain, and a large amount of data is required to achieve accurate training. ANNs face problems with overfitting in the case of small amounts of training data. They also require a very large amount of computation time for complex problems, and require special hardware during training when compared to support vector machines, k-nearest neighbors, and extreme gradient boosting.

3.5. Other Machine Learning Methods

Besides the commonly used machine learning methods mentioned above, other machine learning methods have also been applied to predict the mechanical properties of composite materials, bringing in different characteristics and benefits, including linear regression [60], logistic regression [61], fuzzy logic [62,63], neuro-fuzzy inference systems [64], extreme learning machines [65,66], and graph neural networks [67,68,69].

4. Deep Learning Methods for Predicting Mechanical Properties of Composites

This section presents a concise review of the application of the most researched and most widely used deep learning methods for predicting the mechanical properties of composite materials.

4.1. Convolutional Neural Networks (CNNs)

Convolutional neural networks (CNNs) are biologically inspired feedforward neural networks that operate by extracting local features from raw input data in a layer-by-layer fashion to make predictions [70]. Their inception can be traced back to 1980 [71], and further advancements were made in 1998 [72]. The CNN architecture typically consists of multiple hidden layers, including the convolutional layer, the pooling layer, and the fully connected layer, along with an activation function, as depicted in Figure 9. Within the convolutional layer, there exist multiple learnable kernels or filters with trainable and shared weights, enabling them to extract high-level local features from the input data and generate new feature maps to be passed to the subsequent layer. The pooling layer, acting as a down-sampling layer, reduces the dimensionality of the feature maps, thereby reducing computational complexity and preventing overfitting. The features extracted from the convolutional layers are typically fed into fully connected layers to obtain the final labels or predictions. To enhance the nonlinear characteristics of the neural network, various activation functions, such as sigmoid, hyperbolic tangent, rectified linear unit (ReLU), and leaky rectified linear unit (Leaky ReLU), are employed.
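The following hedged PyTorch sketch arranges these building blocks (convolution, activation, pooling, and fully connected layers) into a small network that regresses a scalar property from a single-channel microstructure image; the image size, channel counts, and target property are assumptions made purely for illustration.

```python
# Sketch of a small CNN that maps a single-channel microstructure image to
# a scalar mechanical property (e.g. stiffness), using the layer types
# listed above: convolution, ReLU activation, pooling, fully connected head.
import torch
import torch.nn as nn

class MicrostructureCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=3, padding=1),    # convolutional layer
            nn.ReLU(),                                    # activation function
            nn.MaxPool2d(2),                              # pooling (down-sampling)
            nn.Conv2d(8, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.regressor = nn.Sequential(                   # fully connected head
            nn.Flatten(),
            nn.Linear(16 * 16 * 16, 64),
            nn.ReLU(),
            nn.Linear(64, 1),                             # predicted property
        )

    def forward(self, x):
        return self.regressor(self.features(x))

# One forward pass on a dummy batch of 64x64 microstructure images
model = MicrostructureCNN()
images = torch.rand(4, 1, 64, 64)
print(model(images).shape)    # torch.Size([4, 1])
```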
Convolutional neural networks (CNNs) were initially developed for tasks such as facial recognition and image classification due to their ability to handle large datasets and provide highly accurate predictions [74]. However, with the advent of extensive materials databases, the application of CNNs in the field of materials science has gained momentum. Numerous research groups have utilized CNNs to explore the mechanical properties and structural behavior of composite materials. In the work by Yang et al. [75], a combination of principal component analysis (PCA) and convolutional neural networks (CNNs) was employed to accurately predict the stress–strain curve of binary composites beyond their elastic limit. The study demonstrated that utilizing machine learning techniques had the potential to expedite the process of composite design optimization. Similarly, in reference [76], the authors utilized a CNN model to predict the mechanical properties of composite materials, including stiffness, strength, and toughness, beyond the elastic limit. The CNN model achieved satisfactory prediction performance in this regard. Abueidda et al. [77] developed a CNN model that accurately predicted the elastic modulus, strength, and toughness of 2D checkerboard composites across a range of volume fractions. They integrated a genetic algorithm optimizer with the CNN model to search for the optimal microstructural design. The results demonstrated the CNN’s high accuracy in predicting mechanical properties. Li et al. [78] utilized a CNN to predict the mechanical properties of multi-phase heterogeneous materials, providing a general approach for establishing structure–property relationships. Their study successfully predicted the properties of the tested materials, showcasing the effectiveness of the proposed CNN model. In [79], a ConvNet-based method was proposed to predict the effective elastic properties and Poisson’s ratio of composites using microstructural images. The study showed that the CNN-based approach was able to accurately predict these properties from a window of the microstructural image. Pakzad et al. [80] employed a CNN model to predict the compressive strength of steel-fiber-reinforced concrete. The results demonstrated the superior performance of the CNN model in predicting the compressive strength of the material. Ramkumar et al. [81] utilized a CNN to predict the mechanical properties of natural fiber composites. Their findings indicated the promising prediction accuracy and efficiency of the CNN-based approach. Kim et al. [82] proposed a CNN-based model for predicting the transverse mechanical behavior of unidirectional composites. The CNN model utilized microstructure images as input datasets for predicting the stress–strain curves of the composites in the transverse direction.
Indeed, Valishin and Beriachvili [83] employed a convolutional neural network (CNN) to predict the homogenized elastic properties of composite materials based on their representative volume elements. Rao and Liu [84] and Yang et al. [85] utilized three-dimensional convolutional neural networks (3D-CNNs) to determine the effective or homogenized mechanical properties of composites with complex and heterogeneous microstructures. Béji et al. [16] also predicted the effective elastic properties of heterogeneous composite materials using CNNs. Numerous studies have demonstrated the superior performance of CNNs in accurately predicting the mechanical properties of composites [86,87]. These works collectively highlight the rapid and accurate prediction capabilities of CNNs in the context of estimating the material properties of composites. However, it is important to note that CNNs may not be suitable for tasks that rely heavily on the sequential nature of the data.

4.2. Recurrent Neural Networks (RNNs)

Sequential deep learning models, including recurrent neural networks (RNNs) and their variants, such as long short-term memory (LSTM) and gated recurrent units (GRUs), have been extensively investigated for fault diagnosis in rotating machines [88,89]. A simple RNN unit is presented in Figure 10, in which x〈t〉 and y〈t〉 are the input and output at time step t, respectively, and a〈t − 1〉 is the hidden state vector at the previous time step t − 1.
Recurrent neural networks (RNNs) are useful for analyzing sequential data and describing the large deformation response of elastic–plastic solids. GRU-based RNNs have been used to study the effects of homogeneous anisotropic hardening [91] and to capture the path-dependent plasticity of composites subject to nonlinear loading [92]. Gorji et al. [91] modeled the plane stress plasticity for arbitrary loading paths to capture uniaxial stress–strain responses such as latent hardening, permanent softening, and the Bauschinger effect. They found that RNNs provided a data-driven modeling framework that could be successfully used in representative volume element (RVE) simulations to predict the plastic response of materials such as polycrystals, composites, metamaterials, architected materials, and ceramics [93].
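As a purely illustrative sketch, not a reproduction of any specific published model, the following PyTorch snippet shows a GRU-based surrogate that maps a strain history to the corresponding stress history, the kind of path-dependent mapping discussed above; the network size and the dummy loading paths are assumptions.

```python
# Illustrative GRU-based surrogate: strain history in, stress history out.
import torch
import torch.nn as nn

class StrainToStressGRU(nn.Module):
    def __init__(self, hidden_size=32):
        super().__init__()
        self.gru = nn.GRU(input_size=1, hidden_size=hidden_size, batch_first=True)
        self.head = nn.Linear(hidden_size, 1)    # stress at each time step

    def forward(self, strain):                   # strain: (batch, steps, 1)
        hidden_states, _ = self.gru(strain)
        return self.head(hidden_states)          # stress: (batch, steps, 1)

# Dummy loading paths: 20 strain increments for a batch of 8 histories
model = StrainToStressGRU()
strain_paths = torch.cumsum(0.001 * torch.randn(8, 20, 1), dim=1)
stress_paths = model(strain_paths)
print(stress_paths.shape)                        # torch.Size([8, 20, 1])
```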
Frankel et al. [94] used LSTM-based ANNs to represent the homogenized mechanical response of oligocrystals with deep learning. The predictions of ANNs trained on initial microstructures were in excellent agreement with the elastic regime; however, the plastic regime showed a worsening of the forecast accuracy. Wang and Sun [95] used an RNN approach to develop accurate recursive homogenization models of porous materials. The developed multi-scale, multi-permeability poroplasticity model enabled field-scale simulations in order to gather insights from grain-scale and meso-scale microstructural attributes. The authors analyzed an inverse problem to compute the effective permeability in the normal and tangential directions (Figure 11).
In the context of inverse design, two major modules are typically involved: the sampling module and the forward property prediction model. The sampling module guides the search in the design space and employs various optimization frameworks, including Bayesian optimization [96], the genetic algorithm [97], differential evolution [98], and particle swarm optimization [99]. On the other hand, the forward property prediction module assesses the performance of each design candidate. This module utilizes different methods, such as artificial neural networks [100], random forest (RF) [101], and support vector machines [102]. RF, shallow neural networks, and support vector machines have been applied for screening various materials, including metal hydrides for hydrogen storage [103], magneto-caloric materials [104], amorphous alloys [105,106,107], high-entropy alloys [108], and thermoelectric materials [109]. The prediction of the microstructure and the identification of potential issues prior to the data-driven manufacturing of composite materials are in alignment with the principles of the Industry 4.0 concept [110,111].
Chen et al. [112] utilized the long short-term memory recurrent neural network (LSTM RNN) in conjunction with finite-volume direct-averaging micromechanics (FVDAM) to generate the homogenized response (Figure 12) of unidirectional fiber-reinforced composites. The LSTM RNN, known for its ability to capture long-term dependencies, proved to be highly effective in this context. This approach offers a compelling alternative to the traditional computational frameworks used for advanced multi-phase materials. While RNNs have been widely employed to model elastic–plastic and viscoplastic materials, the analysis of viscoelasticity is less common. Chen [90] proposed a data-driven RNN approach for computing the response of viscoelastic and other history-dependent materials. The study revealed that the extrapolation ability of the RNN model exhibited better performance when using continuous strain inputs compared to when using jump strain inputs.
Wu et al. [113] proposed GRU-based recurrent artificial neural networks (NNWs) to serve as a surrogate of the meso-scale Boundary Value Problem (BVP) in the context of computational multi-scale analyses. The BVP was solved by considering the homogenized material properties extracted from coupled meso-scale-resolution BVPs (Figure 13a). This surrogate model (Figure 13b) was used as the constitutive law of a single-scale simulation. This model allows the computational time to be reduced by four orders of magnitude as compared to FE2 analysis. Ghavamian and Simone [114] proposed a strategy for efficiently collecting stress–strain data from a micro model based on non-physical data. Zhu et al. [115] used RNNs to predict the stress–strain relationship of sand subjected to monotonic triaxial compression loading. Graf et al. [116] performed model-free structural analysis, in which the fuzzy long-term deformation behavior of a textile-reinforced concrete was predicted on the basis of the time-dependent measurement results. RNN surrogate models are able to accelerate the simulation process when compared to FE2 approaches. However, the RNN model that was developed was physically reliable only as long as it was accurate.
Mozaffar et al. [92] developed GRU-based RNNs to represent the meso-scale response of 2D RVEs. The authors simulated the behavior of the RVEs under non-proportional loading conditions using high-fidelity finite element analysis. The results led to the conclusion that complex phenomena such as distortional hardening could be predicted with an error within 0.5%. Logarzo et al. [117] developed a machine learning algorithm and proposed a training scheme (Figure 14) to formulate smart constitutive laws (SCLs) for the homogenization of inelastic microstructures. They integrated the SCL into a finite element method model and performed stress analysis on a dog-bone tensile test specimen. The SCL captured complex loading histories at a very low computational cost when compared to concurrent multi-scale schemes.
Hearley et al. [118] used an RNN trained on virtual data from NASA’s Multiscale Analysis Tool (NASMAT) to predict the mechanical behavior of unreinforced fabric based on physics-based solutions. The heuristic-knowledge-based model was able to predict a variety of stress–strain curves for fabrics with different meso-scale geometries. Farizhandi and Mamivand [119] proposed a Predictive Recurrent Neural Network (PredRNN) model, trained to predict the microstructural evolution of the FeCrCo alloy during spinodal decomposition. It was found that the RNN was able to provide quantitatively accurate predictions of microstructure morphology, while also being several orders of magnitude faster than the phase-field method. Zhang et al. [120] developed LSTM and RNN models to predict the tensile strength of Fused Filament Fabrication (FFF) polylactic acid (PLA) thermoplastic. The vibration and temperature were used as layer-wise signals. It was shown that the LSTM-based predictive model outperformed the support vector regression and random forest machine learning techniques. Using context neurons in RNNs, Freitag et al. [121], Graf et al. [122] and Oeser and Freitag [123] predicted the viscoelastic and elastic–plastic behavior of materials with fuzzy parameters. Koeppe et al. [124] developed a novel RNN-based approach supported by a systematic hyperparameter search strategy that identified the best neural network architectures for the fundamental constitutive models (elastoplasticity, hyperelasticity, viscoelasticity). Their strategy made it possible to find parsimonious and interpretable models for hyperelastic material behavior [125]. Nascimento and Viana [126] used cumulative damage models with RNNs to predict fatigue crack length for a fleet of aircraft. The proposed novel physics-informed cell for cumulative damage modeling was found to be reliable when modeling the growth of fatigue cracks. Yang et al. [127] developed self-supervised learning with convolutional RNNs to exploit the processing–structure–property relationships of materials. The trained RNNs were capable of extrapolating beyond the training datasets in spatio-temporal domains. The long-term statistical properties of the microstructures were reproduced satisfactorily.

4.3. Auto-Encoders (AEs)

Auto-encoders are a specific type of feedforward neural network that is trained, in an unsupervised manner, to learn the identity function rather than a labeled mapping. An auto-encoder consists of two main components: the encoder, which transforms the input data into a compressed representation, and the decoder, which reconstructs the input data from this encoded representation. Auto-encoder ANNs require less complex computation compared to nonlinear kernel-based Principal Component Analyses (PCAs). Jung et al. [128] used a 3D convolutional auto-encoder (Figure 15) and Bayesian optimization to search for optimal dual-phase microstructures for a given uniaxial tensile strength. The auto-encoder consisted of convolutional layers for the encoder and decoder and fully connected layers. An open-source library, TensorFlow, was used to train the encoder. Optimization revealed a family of optimal microstructures with uniaxial tensile strength comparable to that of the optimal microstructure. Iraki et al. [129] developed an approach for the optimization of crystallographic textures with the desired properties of cold-rolled DC04 steel sheet. Their machine learning model combined a multi-task learning-based approach with Siamese multi-task learning (SMTL) ANNs. The optimization results demonstrated the capacity of the model to identify diverse sets of textures falling within the specified bounds of the desired properties.
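A minimal PyTorch sketch of this encoder–decoder structure is shown below: a convolutional auto-encoder that compresses a 2D microstructure image into a low-dimensional latent vector and reconstructs it. All layer sizes, the latent dimension, and the dummy images are illustrative assumptions.

```python
# Sketch of a convolutional auto-encoder for 2D microstructure images.
import torch
import torch.nn as nn

class MicrostructureAutoEncoder(nn.Module):
    def __init__(self, latent_dim=16):
        super().__init__()
        self.encoder = nn.Sequential(                      # image -> latent code
            nn.Conv2d(1, 8, 3, stride=2, padding=1),       # 32 -> 16
            nn.ReLU(),
            nn.Conv2d(8, 16, 3, stride=2, padding=1),      # 16 -> 8
            nn.ReLU(),
            nn.Flatten(),
            nn.Linear(16 * 8 * 8, latent_dim),
        )
        self.decoder = nn.Sequential(                      # latent code -> reconstruction
            nn.Linear(latent_dim, 16 * 8 * 8),
            nn.ReLU(),
            nn.Unflatten(1, (16, 8, 8)),
            nn.ConvTranspose2d(16, 8, 2, stride=2),        # 8 -> 16
            nn.ReLU(),
            nn.ConvTranspose2d(8, 1, 2, stride=2),         # 16 -> 32
            nn.Sigmoid(),
        )

    def forward(self, x):
        z = self.encoder(x)
        return self.decoder(z), z

model = MicrostructureAutoEncoder()
images = torch.rand(4, 1, 32, 32)                          # dummy microstructures
reconstruction, latent = model(images)
print(reconstruction.shape, latent.shape)                  # (4, 1, 32, 32) (4, 16)
```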
Zhao et al. [130] developed a Variational Auto-Encoder (VAE) ANN (Figure 16), and combined it with the Bayesian optimization method to characterize the 3D structural information of porous membranes and to represent them using low-dimensional latent variables. The VAE model, validated on several microstructures, demonstrated a robust ability to characterize the microstructures of porous materials. VAE ANNs have been successfully adopted in several studies addressing the machine learning of the optical properties of metal oxides [131]. The dataset analyzed contained 67,000 unique quaternary oxides and more than 80,000 unique quinary oxide compositions, making it the largest set of experimental materials utilized in machine learning investigations. Arumugam and Jiran [132] proposed the use of complex-step convolutional auto-encoders (Figure 17) to identify regions of importance in metal microstructures for secure sharing. The ANN developed was capable of reconstructing the original microstructural images from just 3.5% of the original data. The proposed approach would be useful for identifying microstructural regions in biomaterials, metals, and composites. Lee et al. [133] used a convolutional variational auto-encoder (CVAE) for the accurate prediction of the stress profiles of 316 stainless steel and AW-6061 aluminium alloys subjected to four-point bending. The source- and target-domain data were used to train a domain-adaptive CVAE. Kim et al. [134] employed a CVAE to generate a continuous dual-phase microstructure on the basis of the carbon content of the martensite phase and the ferrite grain size. The overall stress–strain behavior was analyzed using RVE with a microstructure-based constitutive model. The VAE with Gaussian process regression (GPR) was able to successfully predict the newly generated microstructures with high accuracy. Computational homogenization RVE models of composite materials with material periodic conditions were used to generate a 3D network of fillers within the RVE [135].
Morand and Helm [136] proposed a knowledge-based approach to optimally sample the parameters of the hardening model using the query-by-committee approach. It was found that by using this approach, automated sampling could be conducted in a goal-directed manner without the need for the additional incorporation of expert knowledge. In the query-by-committee approach, the learner consists of a committee of n regression models. To successfully apply supervised learning models, Morand et al. [137] introduced interactive learning techniques called active learning. The current research demonstrates that implementing active learning reduces the amount of data required to effectively cover microstructure–property spaces. To investigate the microstructure–behavior characteristics of the fiber–matrix interface in composite materials, Chen and Xu [138] developed a hybrid deep-learning-based method by combining an unsupervised auto-encoder with a feedforward ANN. By employing molecular dynamics simulations, a hybrid learning strategy was able to successfully identify the degeneracies present within the original microstructural space of the composite’s interface. Sardeshmukh et al. [139] used a variational auto-encoder to learn low-dimensional microstructure representations of cast iron. They showed that the trained auto-encoder explicitly encoded the factors of variation that were mainly responsible for the yield stress or the ultimate tensile strength. Oommen et al. [140] proposed a framework that integrated a convolutional auto-encoder with a deep neural operator (DeepONet) to shorten the time to solution when predicting meso-scale microstructural evolution. It was concluded that a convolutional auto-encoder–DeepONet approach offered accelerated prediction of the phase-field-based microstructural evolution as compared to other machine learning frameworks. The DeepONet architecture can replace the high-fidelity phase-field numerical solver in interpolation tasks. Pitz and Pochiraju [141] developed a transformer neural network architecture that captured the microstructure homogenization of composite microcells with varying fiber volume ratios. The authors combined an auto-encoder convolutional ANN and PCA for dimensionality reduction.
In the last decade, VAEs and machine learning approaches have been studied extensively in relation to the following topics: microstructure reconstruction of heterogeneous materials [142], stochastic reconstruction of 3D multi-phase microstructures with periodic boundaries, prediction of the effective thermal conductivities of composite materials and porous media [143], and the establishment of structure–property localization linkages [85,144] and the mining of structure–property linkages [145] in high-contrast composites. Chalapathy and Chawla [146] and Ruff et al. [147] presented a survey on recent deep-learning-based approaches used in neural-network-based auto-encoders. Bostanabad et al. [148] gave an overview of computational microstructure reconstruction and characterization. The majority of microstructure-related works in the field of machine learning have been focused on finding a relationship between the morphologies and properties of materials [85,149,150], the reconstruction [151,152] or detection [153] of microstructures, and microstructure classification [154].

4.4. Deep Belief Networks (DBNs)

Deep Belief Networks (DBNs) are advanced neural network architectures that utilize unsupervised learning methods to generate results. DBNs are composed of multiple layers, each containing smaller unsupervised neural networks. A distinctive characteristic of DBNs is that, while connections exist between the layers, there are typically no connections between individual nodes within the same layer. This hierarchical structure allows DBNs to capture intricate patterns and dependencies in the data, making them well suited for tasks such as feature learning, dimensionality reduction, and generative modeling.
Due to the nonlinear activation function and hidden neurons, DBNs are able to process complex data mappings [155]. DBNs can be trained in a greedy layer-wise fashion by stacking restricted Boltzmann machines on top of each other [156]. Deutsch et al. [157] developed a new method that integrated a particle filter and a deep belief network for the prediction of the remaining useful life (RUL) of hybrid ceramic bearings. The integrated method was validated based on the vibration data collected from hybrid ceramic bearing run-to-failure tests. The validation results showed promising RUL prediction performance. Cang et al. [142] introduced a feature learning mechanism that utilized a convolutional DBN (CDBN) to automate a bidirectional conversion between the microstructures of Pb63-Sn37 and Ti-6Al-4V alloys. The developed five-layer CDBN model achieved significant dimension reduction and was able to statistically preserve the lower-dimensional feature representations. Fu et al. [158] utilized a DBN to construct feature spaces for end milling, enabling cutting state monitoring based on vibration signals. On the other hand, Wang et al. [159] developed a data-driven prediction model for material removal rate during polishing using a DBM (Deep Boltzmann Machine), which was optimized using a particle swarm optimization algorithm (Figure 18). They also examined the impact of learning rate and network architecture on the accuracy of the predicted material removal rate.
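As a very rough, purely illustrative analogue of this greedy layer-wise idea, the following sketch stacks two restricted Boltzmann machines (scikit-learn's BernoulliRBM) as unsupervised feature layers and places a simple regressor on top; it omits the supervised fine-tuning of a full DBN, and the data and layer sizes are placeholder assumptions.

```python
# Rough, simplified analogue of greedy layer-wise DBN pretraining:
# two stacked RBMs learn features without labels, then a linear model
# maps the learned features to a placeholder property.
import numpy as np
from sklearn.neural_network import BernoulliRBM
from sklearn.linear_model import Ridge
from sklearn.pipeline import Pipeline

rng = np.random.default_rng(5)
X = rng.random((300, 64))                  # e.g. flattened 8x8 microstructure patches in [0, 1]
y = X[:, :8].sum(axis=1) + rng.normal(0, 0.1, 300)   # placeholder property

dbn_like = Pipeline([
    ("rbm1", BernoulliRBM(n_components=32, learning_rate=0.05, n_iter=20, random_state=0)),
    ("rbm2", BernoulliRBM(n_components=16, learning_rate=0.05, n_iter=20, random_state=0)),
    ("reg", Ridge(alpha=1.0)),
])
dbn_like.fit(X, y)
print("training R^2:", round(dbn_like.score(X, y), 3))
```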
Ye et al. [160] adapted a DBN framework that consisted of stacked restricted Boltzmann machines to the monitoring of selective laser melting. The proposed DBN model was demonstrated to be accurate and convenient for the monitoring of the selective laser melting process. DBNs are appropriate for performing monitoring and in-process high-defect-detection-rate diagnosis based on acoustic signals [161]. Bostanabad et al. [148] illustrated techniques for performing computational microstructure characterization. They demonstrated the important role played by microstructure characterization and reconstruction in materials science.

4.5. Generative Adversarial Networks (GANs)

Generative Adversarial Networks (GANs) are artificial intelligence models that generate new, unique data similar to existing data. GANs originated from game theory, and consist of two competing neural networks: a generator and a discriminator. The generator creates new data, and the discriminator decides whether the data produced by the generator are real (i.e., belong to an existing dataset) or fake (i.e., were created by the generator). In recent years, GANs have attracted interest for their potential in the generation of complex multi-phased microstructures [162,163,164,165].
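A minimal PyTorch sketch of this generator–discriminator interplay is given below, with a single adversarial training step on dummy flattened "microstructure" vectors; the network sizes and data are placeholder assumptions rather than any published architecture.

```python
# Minimal GAN sketch: one discriminator step and one generator step.
import torch
import torch.nn as nn

latent_dim, data_dim = 8, 64                       # e.g. flattened 8x8 microstructure patch
generator = nn.Sequential(nn.Linear(latent_dim, 32), nn.ReLU(),
                          nn.Linear(32, data_dim), nn.Sigmoid())
discriminator = nn.Sequential(nn.Linear(data_dim, 32), nn.ReLU(),
                              nn.Linear(32, 1), nn.Sigmoid())

opt_g = torch.optim.Adam(generator.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
bce = nn.BCELoss()

real = torch.rand(16, data_dim)                    # stand-in for real microstructure data
ones, zeros = torch.ones(16, 1), torch.zeros(16, 1)

# Discriminator step: real samples labelled 1, generated samples labelled 0
fake = generator(torch.randn(16, latent_dim)).detach()
loss_d = bce(discriminator(real), ones) + bce(discriminator(fake), zeros)
opt_d.zero_grad(); loss_d.backward(); opt_d.step()

# Generator step: try to make the discriminator label generated samples as real
fake = generator(torch.randn(16, latent_dim))
loss_g = bce(discriminator(fake), ones)
opt_g.zero_grad(); loss_g.backward(); opt_g.step()

print(f"d-loss {loss_d.item():.3f}, g-loss {loss_g.item():.3f}")
```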
Buehler [166] proposed a framework based on a cycle-consistent GAN model to achieve atomistic-level prediction of stress fields directly from an input atomic microstructure. He also indicated the possibility of using a GAN to predict stress fields based on experimentally acquired structural data. Chun et al. [167] applied a patch-based, fully convolutional GAN architecture to spawn ensembles of synthetic heterogeneous energetic material microstructures. The GANs successfully generated new morphologies of heterogeneous energetic materials, where the porosity distribution could be spatially manipulated. Mosser et al. [168] showed that GANs could be used to continuously parameterize the micromorphology of materials. Fokina et al. [169] further validated the effectiveness of GANs in generating synthetic microstructures for aluminum alloy foam (trade name Alporas). GANs are also capable of generating realistic synthetic microstructures [165]. Tang [170] applied a regression-based conditional Wasserstein GAN (RCWGAN) to regress microstructures against numerical processing parameters. He also proposed in situ microstructure monitoring using the RCWGAN.
Using various microstructural features as metrics, the accuracy in predicting the microstructure of laser-sintered alumina was over 92%. Pütz et al. [171] applied the Wasserstein GAN (WGAN) to generate input data representing the microstructure of dual-phase steel (DP800). It was found that the concept applied was effective even on small sample sizes, while the use of bigger datasets only improved the quality of the output. Singh et al. [164] developed a three-step WGAN-based approach for generating microstructures. The first step used a standard GAN architecture. In the second step, the traditional discriminator was replaced with a checker function that identified the features of the microstructures. In the third step, they combined the above-mentioned architectures to reconstruct the microstructure. The proposed model could be used to enforce user-defined physics constraints during microstructure synthesis. Yang et al. [165] proposed a deep GAN with a Bayesian optimization framework that was able to overcome the limitations of existing microstructure characterization and reconstruction techniques. It was found that the GAN reduced information loss, while GP-Hedge optimization improved the efficiency of microstructural material design. Hsu et al. [172] used a GAN to generate realistic 3D microstructures of solid oxide fuel cell electrodes. The proposed grain-based generation algorithm, DREAM.3D, was used to generate microstructures without requiring 3D image data. Gowtham et al. [173] indicated the capacity of different architectures consisting of GANs to learn the mapping between random latent vectors and microstructural images of Ti-6Al-4V titanium alloy. The proposed models were able to generate microstructures closely resembling the original microstructure dataset in terms of kernel inception distance, Fréchet inception distance, inception scores, and morphometric parameters. Mao et al. [174] combined GANs and mixture density networks (MDNs) to perform inverse modeling of the low-dimensional design representations of the microstructure images.
Tharke et al. [175] quantified the physical awareness of dual-phase steel microstructures generated using a GAN variant. The similarity between the original and the generated microstructures was quantified on the basis of the signal-to-noise ratio, similarity index, and peak signal-to-noise ratio, and a relationship was observed between the similarity metrics and the microstructure morphology. Henkes and Wessels [176] proposed a GAN tailored towards 3D microstructure generation based on a single computed tomography scan. The developed approach, based on a convolutional–residual generator and a convolutional–residual discriminator, made it possible to generate three-dimensional microstructures with the same properties as the original data. Lee et al. [177] used a conditional GAN, a convolutional GAN, and cycle-consistent GAN-based image-to-image translation to generate realistic scanning electron microscopy (SEM) micrographs of polished/etched steel surfaces. They concluded that the use of GANs to generate microstructures could be useful in achieving reliable and robust metallography-based theoretical modeling. Tang et al. [178] proposed an RCWGAN with the Wasserstein loss function and a gradient penalty (GP) for the prediction of SEM micrographs of laser-sintered alumina. The RCWGAN-GP was able to accurately predict the SEM micrographs in terms of the size, shape, and spatial distribution of the grains. Agrawal and Choudhary [179] presented an overview of deep learning frameworks in materials science applications, along with their challenges, advantages, and recent applications.
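Several of the Wasserstein-type GANs mentioned above (e.g., [171,178]) replace the binary real/fake discriminator with a critic trained under a gradient-penalty constraint. The sketch below shows only that penalty term, assuming a generic PyTorch critic network; the penalty weight of 10 is the commonly used default rather than a value reported in the cited papers.

```python
# Gradient-penalty term of a WGAN-GP critic (illustrative sketch only).
import torch

def gradient_penalty(critic, real, fake, lambda_gp=10.0):
    """Penalize deviations of the critic's gradient norm from 1, evaluated on
    random interpolations between real and generated samples."""
    alpha = torch.rand(real.size(0), *([1] * (real.dim() - 1)))   # per-sample mixing factor
    interpolated = (alpha * real + (1.0 - alpha) * fake).requires_grad_(True)
    scores = critic(interpolated)
    grads = torch.autograd.grad(outputs=scores, inputs=interpolated,
                                grad_outputs=torch.ones_like(scores),
                                create_graph=True)[0]
    grad_norm = grads.view(grads.size(0), -1).norm(2, dim=1)
    return lambda_gp * ((grad_norm - 1.0) ** 2).mean()

# Critic loss to minimize: E[critic(fake)] - E[critic(real)] + gradient_penalty(...)
```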

4.6. Deep Transfer Learning

Deep neural networks require very large numbers of samples and substantial computational resources for training [180,181]. In many fields, a sufficiently large collection of samples for the learning process is simply not available. Deep transfer learning (DTL) is often used to reduce the number of samples needed: a model pre-trained on a large, generic dataset (for example, of generic images) is reused as a feature detector in the target task. DTL is commonly applied to tackle limited-dataset problems by exploiting the rich features extracted from large source datasets [182,183,184,185].
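A minimal sketch of this reuse pattern is given below, assuming an ImageNet-pretrained ResNet-18 backbone from torchvision (version 0.13 or later for the weights API) and a hypothetical four-class target task; the cited studies use their own architectures, datasets, and fine-tuning schedules.

```python
# Deep transfer learning sketch (illustrative only): reuse a pre-trained backbone
# as a fixed feature detector and train only a small task-specific head.
import torch
import torch.nn as nn
from torchvision import models

num_target_classes = 4        # hypothetical number of target classes

backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for param in backbone.parameters():          # freeze the generic feature extractor
    param.requires_grad = False

# Replace the final classification layer with a head for the target task.
backbone.fc = nn.Linear(backbone.fc.in_features, num_target_classes)

# Only the new head is optimized on the (small) target dataset.
optimizer = torch.optim.Adam(backbone.fc.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()
```

Instead of freezing all pre-trained weights, part of the backbone can also be re-trained together with the new head (fine-tuning), as in some of the studies discussed below.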
Dong et al. [186] used a combination of a deep ANN model, a genetic algorithm, and Bayesian optimization for the design of composite metal oxide materials. A DTL strategy was applied to overcome the limitation of a small dataset when training the ANN predictor of the optical absorption spectrum, using the Materials Agnostic Platform for Informatics and Exploration (Magpie) [187] descriptor of material composition. Li et al. [152] proposed a DTL approach for identifying the γ′ phase in Ni-based superalloy datasets and also developed software for recognizing the γ′ phase. It was found that the framework needed only five (or fewer) labeled images to achieve state-of-the-art segmentation accuracy. Jia et al. [188] likewise developed an end-to-end network architecture that was able to accurately identify the γ′ phase in superalloys; deep learning has also been shown to be useful for microstructure recognition in ceramics [188]. Kondo et al. [189] adopted convolutional neural networks to link experimental microstructures with the corresponding ionic conductivities. Gupta et al. [190] introduced a cross-property DTL approach that utilized models trained on large datasets to construct models on smaller datasets from the Joint Automated Repository for Various Integrated Simulations (JARVIS) [191], which encompasses diverse properties. The developed approach could integrate various cutting-edge deep learning models, enhancing performance and offering potential applicability to diverse material properties. Li et al. [192] incorporated an encoder–decoder process and feature-matching optimization into a transfer-learning-based approach for making structure–property predictions and performing microstructure reconstruction; the proposed approach proved effective for structure–property modeling. DeCost et al. [193] employed a DTL convolutional network to capture hierarchical representations of microstructures and to study the correlation between the hierarchical layers and the microstructural features. They used SEM micrographs of ultra-high carbon steel (UHCS) to compare convolutional-neural-network-based image texture representations with the classic bag-of-visual-words representation. Feng et al. [194] used DTL and a convolutional neural network (CNN) to predict the crystal structures of inorganic substances, mapping the chemical formulae of inorganic materials into 2D representations with a periodic-table structure. Figure 19a presents an example of the 2D representation for 316L stainless steel, Figure 19b shows the VGG-like CNN trained to obtain the transferable feature extractor, and Figure 19c presents the workflow of transfer learning. The developed framework was successfully exploited to accurately discriminate 170 phase prototypes and the phases of high-entropy alloys based on their chemical compositions.
Pandiyan et al. [195] showed that the knowledge learned by two pre-trained networks, namely VGG and ResNet, regarding four laser powder bed fusion (LPBF) process mechanisms in 316L steel, including conduction mode, balling, keyhole pores, and lack-of-fusion pores, could be transferred to CuSn8 bronze. The proposed network (Figure 20) was trained in a supervised manner with labeled spectrogram images from 316L stainless steel; acoustic emission signals acquired during LPBF and their wavelet transforms were used to train the two DTL ANNs. Once reliable accuracy had been achieved, the pre-trained model was used to assess build quality in another material (CuSn8 bronze), with only some of the ANN weights being re-trained during training on the bronze material. Pandiyan et al. [196] distinguished defect-free regimes from anomalies in the LPBF process using variational autoencoders and GANs. Scime and Beuth [197] implemented unsupervised DTL to classify anomalies in the LPBF process. Finally, Caggiano et al. [198] developed a bi-stream deep convolutional ANN trained with images acquired during the LPBF process to identify defects in the workpiece material.
Yamada et al. [199] developed XenonPy.MDL, an ever-growing library of pre-trained models covering a wide variety of material properties. Farizhandi and Mamivand [200] developed a fused-data deep learning framework (Figure 21) that was able to predict the processing history (temperature, initial chemical composition, and heat treatment time) of a Fe-Cr-Co alloy microstructure. The authors attributed the inaccurate predictions to the fact that steady-state morphologies reached after long aging times can be produced by several different processing paths, so that identical microstructures correspond to more than one history.
Yang et al. [201] developed a deep learning model that was able to accurately predict the elastic strains of 3D composites whose components differ in elastic properties. Predicting failure-related properties is also crucial in the analysis of composite materials [43,202]. Altarazi et al. [203] predicted and optimized selected properties of PVC composites (ductility, density, tensile strength) based on different compositions; the ANN architectures proposed could be extended by considering more datasets and composite ingredients, or by combining the ANN with a GAN. Wei et al. [143] created a database containing the properties and structures of composites and applied the lattice Boltzmann method to analyze the capacity of machine learning methods for performing heat transfer analysis. Rong et al. [204] employed 2D cross-section images and 2D convolutional neural networks (CNNs) to predict the effective thermal conductivity of 3D composites; it was found that 2D CNNs can provide accurate predictions when multiple cross-section images are used. You and Arumugasamy [205] investigated the effects of reaction time and reaction temperature on the molecular weight of polycaprolactone (PCL) with respect to specified production goals, using a framework consisting of an adaptive neuro-fuzzy inference system (ANFIS) and feedforward neural networks (FFNNs); the authors predicted the molecular weight of the biopolymer produced by enzymatic polymerization. Tong et al. [206] characterized the drying shrinkage behavior and hydration of cement-emulsified asphalt composites using a framework consisting of GANs and deep ANNs; SEM images characterizing the hydration products and background in the microstructure of the asphalt composites were generated using the GANs. Tong et al. [207] applied a DL method to characterize the carbon fiber morphology distribution in carbon-fiber-reinforced cement-based composites, and a cascade deep learning algorithm together with the resulting morphology distributions was used to predict the properties of these composites.
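As an illustration of the image-to-property mapping used in several of these studies (e.g., [204]), the sketch below defines a small 2D CNN that regresses a single effective property from a one-channel cross-section image; the architecture, the assumed 64 × 64 input size, and the mean-squared-error loss are placeholders and do not reproduce any of the published networks.

```python
# Minimal 2D CNN regressor sketch (illustrative only): maps a single-channel
# cross-section image to one scalar effective property.
import torch
import torch.nn as nn

class PropertyCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.regressor = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * 16 * 16, 64), nn.ReLU(),   # assumes 64 x 64 input images
            nn.Linear(64, 1),                         # predicted effective property
        )

    def forward(self, x):
        return self.regressor(self.features(x))

model = PropertyCNN()
loss_fn = nn.MSELoss()   # regression against simulated or measured property values
```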
Choudhary et al. [208] presented a high-level overview of the challenges and prospective limitations of deep learning (DL) methods, followed by a detailed discussion of recent developments of DL in spectral analysis, natural language processing, and materials imaging. Wang et al. [209] explored and analyzed computational tools designed to address complex challenges in characterizing the behavior of composite materials and enhancing composite performance with reduced effort and cost.

5. Observations, Challenges, and Future Research Directions

In the previous sections, the published literature on the application of artificial intelligence (AI) algorithms, especially machine learning (ML) and deep learning (DL), to the prediction of the properties of composite materials was presented. In the field of predicting the mechanical properties of composite materials, artificial intelligence algorithms have shown promise and have yielded valuable insights. According to the analysis presented in the reviewed studies, traditional machine learning methods, such as support vector machines (SVMs) and artificial neural networks (ANNs), have demonstrated impressive prediction results in comparison to experimental measurements and numerical simulations. These machine learning methods can effectively learn from limited experimental data and identify nonlinear, multi-dimensional relationships without prior assumptions about their nature. However, it is important to note that while traditional machine learning methods have strengths, they also have weaknesses. Table 1 provides the strengths and weaknesses of common traditional machine learning methods applied in predicting material properties. Understanding these strengths and weaknesses aids in selecting the appropriate shallow learning method for predicting the mechanical properties of composite materials.
The use of artificial intelligence, particularly machine learning methods, can play a significant role in materials research by establishing relationships between a material’s microstructure and its mechanical properties. However, the success of traditional machine-learning-based methods in the prediction of the properties of composite materials is limited for the following reasons: (1) Traditional machine learning methods require manual feature extraction, which relies on prior knowledge and expert experience. This process can be time consuming and may not fully capture meaningful features from large datasets; (2) Traditional machine learning methods’ shallow structure limits their ability to effectively capture complex relationships and interactions in highly nonlinear and multi-dimensional material data, thereby restricting their applicability in handling such data; and (3) In traditional approaches, feature extraction and prediction are designed separately, which can lead to increased computational time and may restrict the overall prediction performance of the models. This separation can limit the ability to fully exploit the underlying relationships between the material’s features and its mechanical properties. In the context of materials science, the term “features” pertains to physical and chemical characteristics that are relevant to the properties and behavior of materials.
Deep learning methods, as a specific type of machine learning, have gained attention due to their capacity for automated feature extraction from nonlinear and multi-dimensional material data. These methods have been applied to predict the mechanical properties of composite materials. Table 2 reveals the strengths and weaknesses of deep learning methods applied in the prediction of material properties. Understanding these strengths and weaknesses enables researchers and practitioners to make informed decisions when selecting deep learning methods for predicting material properties.
Overall, artificial intelligence algorithms play a crucial role in predicting the mechanical properties of composite materials. Both machine learning and deep learning approaches aim to learn patterns and relationships from input material data to make accurate predictions. Machine learning algorithms perform well in situations with limited data and offer interpretable models. In contrast, deep learning algorithms are better suited for handling extensive and intricate datasets, automating feature extraction, and capturing complex relationships. It is important to note that understanding the strengths and weaknesses of traditional machine learning and deep learning methods in predicting material properties, as illustrated in Table 3, can provide additional valuable insights.
While the machine learning and deep learning methods reviewed herein have shown promise in predicting composite material properties, several challenges require further investigation in this emerging interdisciplinary field. Some of these challenges include the following:
  • The effectiveness of artificial intelligence methods, particularly deep learning, in predicting material properties relies heavily on the availability of high-quality and comprehensive datasets. The development of accurate artificial intelligence models requires large and diverse datasets that encompass a wide range of composite materials, manufacturing processes, and mechanical properties. However, such datasets may be limited or difficult to obtain due to various factors such as proprietary information, cost, and time constraints. Limited data availability can hinder the training and validation of artificial intelligence models, potentially leading to reduced performance and generalizability. Consequently, there is a substantial demand for novel approaches that can address the limitations of working with limited data.
  • The quality and consistency of the available data can also pose challenges. Composite materials encompass a wide range of compositions, structures, and manufacturing techniques, resulting in variations in data quality and format. Inconsistencies in experimental methodologies, measurement techniques, and reporting standards can introduce noise and biases into the datasets. Lack of standardized data collection procedures can make it challenging to compare and integrate different datasets, potentially affecting the accuracy and reliability of AI predictions.
  • Compared to traditional machine learning models, designing the architecture of deep learning models is still a challenging task. Deep learning models have numerous hyperparameters, and selecting appropriate values for them can significantly impact prediction accuracy and generalization ability. The absence of standardized rules for hyperparameter selection presents a challenge when utilizing deep learning for material property prediction (a minimal random-search sketch illustrating this tuning problem is given after this list). The development of automated methods or guidelines for more efficient and effective hyperparameter tuning in deep learning models would contribute greatly to addressing this challenge.
  • Deep learning models, although powerful in their predictive capabilities, often lack interpretability. The black-box nature of these models makes it difficult to understand the underlying features and mechanisms driving the predictions. This lack of interpretability can limit the trust and acceptance of artificial intelligence predictions in the prediction of material properties. Developing interpretable artificial intelligence models that provide insights into the relationship between input features and predicted mechanical properties is an ongoing research challenge.
  • Artificial intelligence models trained on specific datasets might struggle to generalize to unseen data or different composite material systems. The transferability of artificial intelligence models across different material compositions, fabrication techniques, and environmental conditions remains a challenge. Ensuring robust and reliable predictions across a wide range of composite materials requires careful consideration of model architecture, feature representation, and transfer learning techniques.
  • While artificial intelligence models can provide rapid predictions, it is essential to validate their accuracy and reliability through experimental verification. The reliance on experimental testing to validate artificial intelligence predictions introduces additional time, cost, and resource requirements. Ensuring a strong correlation between predicted and measured mechanical properties is crucial for establishing the trustworthiness and practical utility of artificial intelligence models.
  • Furthermore, many existing studies focus on the prediction of one or two mechanical properties rather than the overall mechanical properties of composite materials. While some studies have explored the prediction of multiple mechanical properties [210], there is still a significant research gap in this area. Therefore, a crucial research direction for the future is to design effective models that can accurately predict multiple mechanical properties in a simultaneous manner. Developing such models would provide a more comprehensive understanding of the material behavior and enable engineers and researchers to make informed decisions across a wide range of mechanical properties. This research direction holds great potential for advancing the field of material properties prediction and its practical applications in various industries.
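As a simple illustration of the hyperparameter-selection issue raised in the list above, the sketch below performs a random search over a small, assumed search space; train_and_score is a hypothetical placeholder for training a network with the sampled settings and returning a validation metric.

```python
# Random hyperparameter search sketch (illustrative only).
import random

search_space = {                          # assumed search space, not a recommendation
    "learning_rate": [1e-2, 1e-3, 1e-4],
    "hidden_units": [64, 128, 256],
    "dropout": [0.0, 0.2, 0.5],
}

def train_and_score(config):
    """Hypothetical placeholder: train a model with `config` and return a
    validation score; a real implementation would fit the network and
    evaluate it on held-out data."""
    return random.random()                # dummy score so the sketch runs end to end

best_config, best_score = None, float("-inf")
for _ in range(20):                       # fixed evaluation budget (assumption)
    config = {name: random.choice(values) for name, values in search_space.items()}
    score = train_and_score(config)
    if score > best_score:
        best_config, best_score = config, score

print(best_config, best_score)
```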
Given the challenges and limitations discussed earlier, there is a clear need for further research that specifically focuses on the application of artificial intelligence in predicting the mechanical properties of composite materials. This focused research could address the following areas:
  • Due to the limited availability of large datasets for composite materials, research could be conducted to explore data augmentation techniques specific to composite materials. This could involve generating artificial data using physics-based simulations or generative models such as generative adversarial networks (GANs), or incorporating domain knowledge. Data augmentation can help increase the diversity and size of the training datasets, improving the generalization and performance of artificial intelligence models.
  • Combining artificial intelligence techniques with physics-based models could be a promising research direction. Hybrid modeling approaches can leverage the strengths of both data-driven artificial intelligence models and mechanistic models in order to improve accuracy and interpretability. Integrating physics-based models with artificial intelligence models can provide a better understanding of the underlying mechanisms governing the mechanical behavior of composite materials.
  • Composite materials exhibit complex hierarchical structures, and their mechanical properties depend on interactions at multiple length scales. Future research can focus on developing artificial intelligence models that can capture and predict mechanical properties at different scales, from micro to macro levels. Multi-scale modeling approaches, such as coupling artificial intelligence models with finite element analysis or molecular dynamics simulations, can facilitate accurate predictions across different length scales.
  • Enhancing the interpretability of artificial intelligence models for predicting the mechanical properties of composites is an important research direction. Developing techniques to explain the underlying factors influencing predictions, such as feature importance analysis or attention mechanisms, can increase the trust and adoption of artificial intelligence models. Explainable artificial intelligence (XAI) can provide valuable insights into the structure–property relationships of composite materials and facilitate knowledge discovery.
  • Collaborative efforts between artificial intelligence researchers and experimentalists are essential to validate and refine artificial intelligence predictions. Integrating artificial intelligence predictions with experimental validation can help assess the accuracy and reliability of the models. Researchers can collaborate with experimentalists to design validation experiments, compare the predicted mechanical properties with actual measurements, and iteratively refine the artificial intelligence models.
  • Composite materials encompass a wide range of material systems, such as fiber-reinforced composites, polymer matrix composites, and ceramic matrix composites. Future research can focus on developing domain-specific artificial intelligence models tailored to the unique characteristics and challenges of each material system. This can involve designing specialized architectures, feature representations, and training strategies that are specific to the properties and behaviors of different composite materials.
In summary, specific recommendations include the selection of appropriate algorithms based on data availability and complexity, the integration of domain knowledge, and the exploration of interpretability techniques. Further research is needed to address challenges related to interpretability, data quality, transferability, and uncertainty quantification in order to advance the field of artificial intelligence in predicting mechanical properties of composite materials.

6. Conclusions

This paper provides a comprehensive review of recent advancements in the application of artificial intelligence (AI) methods, particularly traditional machine learning (ML) and deep learning (DL), to predicting the mechanical properties of composite materials. The review began with an introduction and proceeded to provide a concise examination and analysis of recent studies that have employed machine learning and deep learning methods for predicting the key mechanical properties of various engineering materials. The basic principles, strengths, and weaknesses of each method were also discussed. Based on the available literature, machine learning and deep learning methods have demonstrated accurate prediction of the key mechanical properties of composite materials; however, the effectiveness of these methods depends on the availability of data and the performance of the learning algorithms. Finally, the paper summarized and discussed the main challenges encountered in the field, along with potential research opportunities and future directions. Overall, this review serves as a valuable reference for individuals seeking to comprehend and advance the development of AI methods for predicting material properties.

Author Contributions

Conceptualization, F.K.; methodology, F.K.; validation, F.K., T.T. and H.S.G.; resources, F.K., T.T. and H.S.G.; data curation, F.K., T.T. and H.S.G.; writing—original draft preparation, F.K., T.T. and H.S.G.; writing—review and editing, F.K., T.T., H.S.G. and D.E.W.; supervision, T.T. and D.E.W. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

The data presented in this study are available on request from the corresponding author.

Acknowledgments

The authors wish to gratefully acknowledge the anonymous reviewers for their valuable comments, which significantly enhanced the quality of the paper.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Song, L.; Wang, D.; Liu, X.; Yin, A.; Long, Z. Prediction of mechanical properties of composite materials using multimodal fusion learning. Sens. Actuators A Phys. 2023, 358, 114433. [Google Scholar] [CrossRef]
  2. Yu, Z.; Ye, S.; Sun, Y.; Zhao, H.; Feng, X.Q. Deep learning method for predicting the mechanical properties of aluminum alloys with small data sets. Mater. Today Commun. 2021, 28, 102570. [Google Scholar] [CrossRef]
  3. Ghetiya, N.D.; Patel, K.M. Prediction of Tensile Strength in Friction Stir Welded Aluminium Alloy Using Artificial Neural Network. Procedia Technol. 2014, 14, 274–281. [Google Scholar] [CrossRef]
  4. Mishra, S.K.; Brahma, A.; Dutta, K. Prediction of mechanical properties of Al-Si-Mg alloy using artificial neural network. Sadhana-Acad. Proc. Eng. Sci. 2021, 46, 139. [Google Scholar] [CrossRef]
  5. Tran, H.D.; Kim, C.; Chen, L.; Chandrasekaran, A.; Batra, R.; Venkatram, S.; Kamal, D.; Lightstone, J.P.; Gurnani, R.; Shetty, P.; et al. Machine-learning predictions of polymer properties with Polymer Genome. J. Appl. Phys. 2020, 128, 171104. [Google Scholar] [CrossRef]
  6. Han, T.; Huang, J.; Sant, G.; Neithalath, N.; Kumar, A. Predicting mechanical properties of ultrahigh temperature ceramics using machine learning. J. Am. Ceram. Soc. 2022, 105, 6851–6863. [Google Scholar] [CrossRef]
  7. Liu, J.; Zhang, Y.; Zhang, Y.; Kitipornchai, S.; Yang, J. Machine learning assisted prediction of mechanical properties of graphene/aluminium nanocomposite based on molecular dynamics simulation. Mater. Des. 2022, 213, 110334. [Google Scholar] [CrossRef]
  8. Lee, J.A.; Almond, D.P.; Harris, B. Use of neural networks for the prediction of fatigue lives of composite materials. Compos. Part A Appl. Sci. Manuf. 1999, 30, 1159–1169. [Google Scholar] [CrossRef]
  9. Altinkok, N.; Koker, R. Neural network approach to prediction of bending strength and hardening behaviour of particulate reinforced (Al-Si-Mg)-aluminium matrix composites. Mater. Des. 2004, 25, 595–602. [Google Scholar] [CrossRef]
  10. Koker, R.; Altinkok, N.; Demir, A. Neural network based prediction of mechanical properties of particulate reinforced metal matrix composites using various training algorithms. Mater. Des. 2007, 28, 616–627. [Google Scholar] [CrossRef]
  11. Vinoth, A.; Datta, S. Design of the ultrahigh molecular weight polyethylene composites with multiple nanoparticles: An artificial intelligence approach. J. Compos. Mater. 2020, 54, 179–192. [Google Scholar] [CrossRef]
  12. Daghigh, V.; E Lacy, T.; Daghigh, H.; Gu, G.; Baghaei, K.T.; Horstemeyer, M.F.; Pittman, C.U. Machine learning predictions on fracture toughness of multiscale bio-nano-composites. J. Reinf. Plast. Compos. 2020, 39, 587–598. [Google Scholar] [CrossRef]
  13. Shah, V.; Zadourian, S.; Yang, C.; Zhang, Z.; Gu, G.X. Data-driven approach for the prediction of mechanical properties of carbon fiber reinforced composites. Mater. Adv. 2022, 3, 7319–7327. [Google Scholar] [CrossRef]
  14. Barbosa, A.; Upadhyaya, P.; Iype, E. Neural network for mechanical property estimation of multilayered laminate composite. Mater. Today Proc. 2020, 28, 982–985. [Google Scholar] [CrossRef]
  15. Al Hassan, M.; Derradji, M.; Ali, M.M.M.; Rawashdeh, A.; Wang, J.; Pan, Z.; Liu, W. Artificial neural network prediction of thermal and mechanical properties for Bi2O3-polybenzoxazine nanocomposites. J. Appl. Polym. Sci. 2022, 139, e52774. [Google Scholar] [CrossRef]
  16. Béji, H.; Kanit, T.; Messager, T. Prediction of Effective Elastic and Thermal Properties of Heterogeneous Materials Using Convolutional Neural Networks. Appl. Mech. 2023, 4, 287–303. [Google Scholar] [CrossRef]
  17. Balasundaram, R.; Devi, S.S.; Balan, G.S. Machine learning approaches for prediction of properties of natural fiber composites: Apriori algorithm. Aust. J. Mech. Eng. 2022, 20, 30091. [Google Scholar] [CrossRef]
  18. Gu, G.X.; Chen, C.T.; Buehler, M.J. De novo composite design based on machine learning algorithm. Extreme Mech. Lett. 2018, 18, 19–28. [Google Scholar] [CrossRef]
  19. Stel’makh, S.A.; Shcherban’, E.M.; Beskopylny, A.N.; Mailyan, L.R.; Meskhi, B.; Razveeva, I.; Kozhakin, A.; Beskopylny, N. Prediction of Mechanical Properties of Highly Functional Lightweight Fiber-Reinforced Concrete Based on Deep Neural Network and Ensemble Regression Trees Methods. Materials 2022, 15, 6740. [Google Scholar] [CrossRef]
  20. Turing, A.M. Computing Machinery and Intelligence. Mind 1950, 59, 433–460. [Google Scholar] [CrossRef]
  21. Helal, S. The Expanding Frontier of Artificial Intelligence. Computer 2018, 51, 14–17. [Google Scholar] [CrossRef]
  22. Ramprasad, R.; Batra, R.; Pilania, G.; Mannodi-Kanakkithodi, A.; Kim, C. Machine learning in materials informatics: Recent applications and prospects. NPJ Comput. Mater. 2017, 3, 54. [Google Scholar] [CrossRef]
  23. Jordan, M.I.; Mitchell, T.M. Machine learning: Trends, perspectives, and prospects. Science 2015, 349, 255–260. [Google Scholar] [CrossRef] [PubMed]
  24. Zhu, J.; Jia, Y.; Lei, J.; Liu, Z. Deep learning approach to mechanical property prediction of single-network hydrogel. Mathematics 2021, 9, 2804. [Google Scholar] [CrossRef]
  25. Chibani, S.; Coudert, F.X. Machine learning approaches for the prediction of materials properties. APL Mater. 2020, 8, 080701. [Google Scholar] [CrossRef]
  26. Chan, C.H.; Sun, M.; Huang, B. Application of machine learning for advanced material prediction and design. EcoMat 2022, 4, e12194. [Google Scholar] [CrossRef]
  27. Guo, K.; Yang, Z.; Yu, C.H.; Buehler, M.J. Artificial intelligence and machine learning in design of mechanical materials. Mater. Horizons 2021, 8, 1153–1172. [Google Scholar] [CrossRef] [PubMed]
  28. Lecun, Y.; Bengio, Y. Deep learning. Nature 2015, 521, 436–444. [Google Scholar] [CrossRef]
  29. Alzubaidi, L.; Zhang, J.; Humaidi, A.J.; Al-Dujaili, A.; Duan, Y.; Al-Shamma, O.; Santamaría, J.; Fadhel, M.A.; Al-Amidie, M.; Farhan, L. Review of deep learning: Concepts, CNN architectures, challenges, applications, future directions. J. Big Data 2021, 8, 1–74. [Google Scholar] [CrossRef]
  30. Cortes, C.; Vapnik, V. Support-vector networks. Mach. Learn. 1995, 20, 273–297. [Google Scholar] [CrossRef]
  31. Ampazis, N.; Alexopoulos, N.D. Prediction of aircraft aluminum alloys tensile mechanical properties degradation using Support Vector Machines. In Lecture Notes in Computer Science (Including Subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics); Springer: Berlin/Heidelberg, Germany, 2010; Volume 6040, pp. 9–18. [Google Scholar] [CrossRef]
  32. Tang, J.L.; Cai, Q.R.; Liu, Y.J. Prediction of material mechanical properties with Support Vector Machine. In Proceedings of the 2010 International Conference on Machine Vision and Human-Machine Interface, MVHI, Kaifeng, China, 24–25 April 2010; pp. 592–595. [Google Scholar] [CrossRef]
  33. Bonifácio, A.L.; Mendes, J.C.; Farage, M.C.R.; Barbosa, F.S.; Barbosa, C.B.; Beaucour, A.L. Application of support vector machine and finite element method to predict the mechanical properties of concrete. Lat. Am. J. Solids Struct. 2019, 16, e205. [Google Scholar] [CrossRef]
  34. Hasanzadeh, A.; Vatin, N.I.; Hematibahar, M.; Kharun, M.; Shooshpasha, I. Prediction of the Mechanical Properties of Basalt Fiber Reinforced High-Performance Concrete Using Machine Learning Techniques. Materials 2022, 15, 7165. [Google Scholar] [CrossRef] [PubMed]
  35. Cheng, W.D.; Cai, C.Z.; Luo, Y.; Li, Y.H.; Zhao, C.J. Mechanical properties prediction for carbon nanotubes/epoxy composites by using support vector regression. Mod. Phys. Lett. B 2015, 29, 1550016. [Google Scholar] [CrossRef]
  36. Bhattacharya, S.; Kalita, K.; Čep, R.; Chakraborty, S. A comparative analysis on prediction performance of regression models during machining of composite materials. Materials 2021, 14, 6689. [Google Scholar] [CrossRef]
  37. Lyu, F.; Fan, X.; Ding, F.; Chen, Z. Prediction of the axial compressive strength of circular concrete-filled steel tube columns using sine cosine algorithm-support vector regression. Compos. Struct. 2021, 273, 114282. [Google Scholar] [CrossRef]
  38. Mahajan, A.; Bajoliya, S.; Khandelwal, S.; Guntewar, R.; Ruchitha, A.; Singh, I.; Arora, N. Comparison of ML algorithms for prediction of tensile strength of polymer matrix composites. Mater. Today Proc. 2022, 12, 105. [Google Scholar] [CrossRef]
  39. Sharma, A.; Madhushri, P.; Kushvaha, V.; Kumar, A. Prediction of the Fracture Toughness of Silicafilled Epoxy Composites using K-Nearest Neighbor (KNN) Method. In Proceedings of the 2020 International Conference on Computational Performance Evaluation, ComPE 2020, Shillong, India, 2–4 July 2020; pp. 194–198. [Google Scholar] [CrossRef]
  40. Li, M.; Zhang, H.; Li, S.; Zhu, W.; Ke, Y. Machine learning and materials informatics approaches for predicting transverse mechanical properties of unidirectional CFRP composites with microvoids. Mater. Des. 2022, 224, 111340. [Google Scholar] [CrossRef]
  41. Thirumoorthy, A.; Arjunan, T.V.; Kumar, K.L.S. Experimental investigation on mechanical properties of reinforced Al6061 composites and its prediction using KNN-ALO algorithms. Int. J. Rapid Manuf. 2019, 8, 161. [Google Scholar] [CrossRef]
  42. Sarker, L.H. Machine Learning: Algorithms, Real-World Applications and Research Directions. SN Comput. Sci. 2021, 2, 160. [Google Scholar] [CrossRef]
  43. Qi, Z.; Zhang, N.; Liu, Y.; Chen, W. Prediction of mechanical properties of carbon fiber based on cross-scale FEM and machine learning. Compos. Struct. 2019, 212, 199–206. [Google Scholar] [CrossRef]
  44. Kosicka, E.; Krzyzak, A.; Dorobek, M.; Borowiec, M. Prediction of Selected Mechanical Properties of Polymer Composites with Alumina Modifiers. Materials 2022, 15, 882. [Google Scholar] [CrossRef] [PubMed]
  45. Hegde, A.L.; Shetty, R.; Chiniwar, D.S.; Naik, N.; Nayak, M. Optimization and Prediction of Mechanical Characteristics on Vacuum Sintered Ti-6Al-4V-SiCp Composites Using Taguchi’s Design of Experiments, Response Surface Methodology and Random Forest Regression. J. Compos. Sci. 2022, 6, 339. [Google Scholar] [CrossRef]
  46. Zhang, C.; Li, Y.; Jiang, B.; Wang, R.; Liu, Y.; Jia, L. Mechanical properties prediction of composite laminate with FEA and machine learning coupled method. Compos. Struct. 2022, 299, 116086. [Google Scholar] [CrossRef]
  47. Almohammed, F.; Soni, J. Using Random Forest and Random Tree model to Predict the splitting tensile strength for the concrete with basalt fiber reinforced concrete. IOP Conf. Ser. Earth Environ. Sci. 2023, 1110, 012072. [Google Scholar] [CrossRef]
  48. Karamov, R.; Akhatov, I.; Sergeichev, I.V. Prediction of Fracture Toughness of Pultruded Composites Based on Supervised Machine Learning. Polymers 2022, 14, 3619. [Google Scholar] [CrossRef]
  49. Pathan, M.V.; Ponnusami, S.A.; Pathan, J.; Pitisongsawat, R.; Erice, B.; Petrinic, N.; Tagarielli, V.L. Predictions of the mechanical properties of unidirectional fibre composites by supervised machine learning. Sci. Rep. 2019, 9, 13964. [Google Scholar] [CrossRef]
  50. Shang, M.; Li, H.; Ahmad, A.; Ahmad, W.; Ostrowski, K.A.; Aslam, F.; Joyklad, P.; Majka, T.M. Predicting the Mechanical Properties of RCA-Based Concrete Using Supervised Machine Learning Algorithms. Materials 2022, 15, 647. [Google Scholar] [CrossRef]
  51. Guo, P.; Meng, W.; Xu, M.; Li, V.C.; Bao, Y. Predicting mechanical properties of high-performance fiber-reinforced cementitious composites by integrating micromechanics and machine learning. Materials 2021, 14, 3143. [Google Scholar] [CrossRef]
  52. Krishnan, K.A.; Anjana, R.; George, K.E. Effect of alkali-resistant glass fiber on polypropylene/polystyrene blends: Modeling and characterization. Polym. Compos. 2016, 37, 398–406. [Google Scholar] [CrossRef]
  53. Kabbani, M.S.; El Kadi, H.A. Predicting the effect of cooling rate on the mechanical properties of glass fiber–polypropylene composites using artificial neural networks. J. Thermoplast. Compos. Mater. 2018, 32, 1268–1281. [Google Scholar] [CrossRef]
  54. Wang, J.; Lin, C.; Feng, G.; Li, B.; Wu, L.; Wei, C.; Lv, Y.; Cheng, J. Fracture prediction of CFRP laminates subjected to CW laser heating and pre-tensile loads based on ANN. AIP Adv. 2022, 12, 015010. [Google Scholar] [CrossRef]
  55. Sharan; Mitra, M. Prediction of static strength properties of carbon fiber-reinforced composite using artificial neural network. Model. Simul. Mater. Sci. Eng. 2022, 30, 075001. [Google Scholar] [CrossRef]
  56. Devadiga, U.; Poojary, R.K.R.; Fernandes, P. Artificial neural network technique to predict the properties of multiwall carbon nanotube-fly ash reinforced aluminium composite. J. Mater. Res. Technol. 2019, 8, 3970–3977. [Google Scholar] [CrossRef]
  57. Wang, W.; Wang, H.; Zhou, J.; Fan, H.; Liu, X. Machine learning prediction of mechanical properties of braided-textile reinforced tubular structures. Mater. Des. 2021, 212, 110181. [Google Scholar] [CrossRef]
  58. Rajkumar, A.G.; Hemath, M.; Nagaraja, B.K.; Neerakallu, S.; Thiagamani, S.M.K.; Asrofi, M. An artificial neural network prediction on physical, mechanical, and thermal characteristics of giant reed fiber reinforced polyethylene terephthalate composite. J. Ind. Text. 2022, 51, 769S–803S. [Google Scholar] [CrossRef]
  59. Kumar, C.S.; Arumugam, V.; Sengottuvelusamy, R.; Srinivasan, S.; Dhakal, H.N. Failure strength prediction of glass/epoxy composite laminates from acoustic emission parameters using artificial neural network. Appl. Acoust. 2017, 115, 32–41. [Google Scholar] [CrossRef]
  60. Khademi, F.; Akbari, M.; Jamal, S.M.; Nikoo, M. Multiple linear regression, artificial neural network, and fuzzy logic prediction of 28 days compressive strength of concrete. Front. Struct. Civ. Eng. 2017, 11, 90–99. [Google Scholar] [CrossRef]
  61. Shabley, A.; Nikolskaia, K.; Varkentin, V.; Peshkov, R.; Petrova, L. Predicting the Destruction of Composite Materials Using Machine Learning Methods. Transp. Res. Procedia 2023, 68, 191–196. [Google Scholar] [CrossRef]
  62. Tanyildizi, H. Fuzzy logic model for prediction of mechanical properties of lightweight concrete exposed to high temperature. Mater. Des. 2009, 30, 2205–2210. [Google Scholar] [CrossRef]
  63. Tarasov, V.; Tan, H.; Jarfors, A.E.W.; Seifeddine, S. Fuzzy logic-based modelling of yield strength of as-cast A356 alloy. Neural Comput. Appl. 2020, 32, 5833–5844. [Google Scholar] [CrossRef]
  64. Nawafleh, N.; Al-Oqla, F.M. Evaluation of mechanical properties of fiber-reinforced syntactic foam thermoset composites: A robust artificial intelligence modeling approach for improved accuracy with little datasets. J. Mech. Behav. Mater. 2023, 32, 0285. [Google Scholar] [CrossRef]
  65. Zhang, J.; Xu, J.; Liu, C.; Zheng, J. Prediction of Rubber Fiber Concrete Strength Using Extreme Learning Machine. Front. Mater. 2021, 7, 465. [Google Scholar] [CrossRef]
  66. Li, J.; Salim, R.D.; Aldlemy, M.S.; Abdullah, J.M.; Yaseen, Z.M. Fiberglass-Reinforced Polyester Composites Fatigue Prediction Using Novel Data-Intelligence Model. Arab. J. Sci. Eng. 2019, 44, 3343–3356. [Google Scholar] [CrossRef]
  67. Hestroffer, J.M.; Charpagne, M.A.; Latypov, M.I.; Beyerlein, I.J. Graph neural networks for efficient learning of mechanical properties of polycrystals. Comput. Mater. Sci. 2023, 217, 111894. [Google Scholar] [CrossRef]
  68. Lu, W.; Yang, Z.; Buehler, M.J. Rapid mechanical property prediction and de novo design of three-dimensional spider webs through graph and GraphPerceiver neural networks. J. Appl. Phys. 2022, 132, 074703. [Google Scholar] [CrossRef]
  69. Maurizi, M.; Gao, C.; Berto, F. Predicting stress, strain and deformation fields in materials and structures with graph neural networks. Sci. Rep. 2022, 12, 21834. [Google Scholar] [CrossRef] [PubMed]
  70. Kibrete, F.; Woldemichael, D.E. Applications of Artificial Intelligence for Fault Diagnosis of Rotating Machines: A Review. In Lecture Notes of the Institute for Computer Sciences, Social-Informatics and Telecommunications Engineering, LNICST; Springer: Berlin/Heidelberg, Germany, 2023; Volume 455, pp. 41–62. [Google Scholar] [CrossRef]
  71. Holden, A.V. Competition and cooperation in neural nets. Phys. D Nonlinear Phenom. 1983, 8, 284–285. [Google Scholar] [CrossRef]
  72. LeCun, Y.; Bottou, L.; Bengio, Y.; Haffner, P. Gradient-based learning applied to document recognition. Proc. IEEE 1998, 86, 2278–2324. [Google Scholar] [CrossRef]
  73. Lo, C.C.; Lee, C.H.; Huang, W.C. Prognosis of bearing and gear wears using convolutional neural network with hybrid loss function. Sensors 2020, 20, 3539. [Google Scholar] [CrossRef]
  74. Wu, C.; Jiang, P.; Ding, C.; Feng, F.; Chen, T. Intelligent fault diagnosis of rotating machinery based on one-dimensional convolutional neural network. Comput. Ind. 2019, 108, 53–61. [Google Scholar] [CrossRef]
  75. Yang, C.; Kim, Y.; Ryu, S.; Gu, G.X. Prediction of composite microstructure stress-strain curves using convolutional neural networks. Mater. Des. 2020, 189, 108509. [Google Scholar] [CrossRef]
  76. Yang, C.; Kim, Y.; Ryu, S.; Gu, G.X. Using convolutional neural networks to predict composite properties beyond the elastic limit. MRS Commun. 2019, 9, 609–617. [Google Scholar] [CrossRef]
  77. Abueidda, D.W.; Almasri, M.; Ammourah, R.; Ravaioli, U.; Jasiuk, I.M.; Sobh, N.A. Prediction and optimization of mechanical properties of composites using convolutional neural networks. Compos. Struct. 2019, 227, 111264. [Google Scholar] [CrossRef]
  78. Li, X.; Liu, Z.; Cui, S.; Luo, C.; Li, C.; Zhuang, Z. Predicting the effective mechanical property of heterogeneous materials by image based modeling and deep learning. Comput. Methods Appl. Mech. Eng. 2019, 347, 735–753. [Google Scholar] [CrossRef]
  79. Ye, S.; Li, B.; Li, Q.; Zhao, H.P.; Feng, X.Q. Deep neural network method for predicting the mechanical properties of composites. Appl. Phys. Lett. 2019, 115, 161901. [Google Scholar] [CrossRef]
  80. Pakzad, S.S.; Roshan, N.; Ghalehnovi, M. Comparison of various machine learning algorithms used for compressive strength prediction of steel fiber-reinforced concrete. Sci. Rep. 2023, 13, 3646. [Google Scholar] [CrossRef]
  81. Ramkumar, G.; Sahoo, S.; Anitha, G.; Ramesh, S.; Nirmala, P.; Tamilselvi, M.; Subbiah, R.; Rajkumar, S. An Unconventional Approach for Analyzing the Mechanical Properties of Natural Fiber Composite Using Convolutional Neural Network. Adv. Mater. Sci. Eng. 2021, 2021, 5450935. [Google Scholar] [CrossRef]
  82. Kim, D.-W.; Lim, J.H.; Lee, S. Prediction and validation of the transverse mechanical behavior of unidirectional composites considering interfacial debonding through convolutional neural networks. Compos. Part B Eng. 2021, 225, 109314. [Google Scholar] [CrossRef]
  83. Valishin, A.; Beriachvili, N. Applying neural networks to analyse the properties and structure of composite materials. E3S Web Conf. 2023, 376, 01041. [Google Scholar] [CrossRef]
  84. Rao, C.; Liu, Y. Three-dimensional convolutional neural network (3D-CNN) for heterogeneous material homogenization. Comput. Mater. Sci. 2020, 184, 109850. [Google Scholar] [CrossRef]
  85. Yang, Z.; Yabansu, Y.C.; Al-Bahrani, R.; Liao, W.-K.; Choudhary, A.N.; Kalidindi, S.R.; Agrawal, A. Deep learning approaches for mining structure-property linkages in high contrast composites from simulation datasets. Comput. Mater. Sci. 2018, 151, 278–287. [Google Scholar] [CrossRef]
  86. Hanakata, P.Z.; Cubuk, E.D.; Campbell, D.K.; Park, H.S. Accelerated Search and Design of Stretchable Graphene Kirigami Using Machine Learning. Phys. Rev. Lett. 2018, 121, 255304. [Google Scholar] [CrossRef] [PubMed]
  87. Gu, G.X.; Chen, C.T.; Richmond, D.J.; Buehler, M.J. Bioinspired hierarchical composite design using machine learning: Simulation, additive manufacturing, and experiment. Mater. Horizons 2018, 5, 939–945. [Google Scholar] [CrossRef]
  88. Hochreiter, S.; Schmidhuber, J. Long short-term memory. Neural Comput. 1997, 9, 1735–1780. [Google Scholar] [CrossRef] [PubMed]
  89. Cho, K.; van Merriënboer, B.; Gulcehre, C.; Bahdanau, D.; Bougares, F.; Schwenk, H.; Bengio, Y. Learning phrase representations using RNN encoder-decoder for statistical machine translation. In Proceedings of the EMNLP 2014—2014 Conference on Empirical Methods in Natural Language Processing, Doha, Qatar, 25–29 October 2014; pp. 1724–1734. [Google Scholar] [CrossRef]
  90. Chen, G. Recurrent neural networks (RNNs) learn the constitutive law of viscoelasticity. Comput. Mech. 2021, 67, 1009–1019. [Google Scholar] [CrossRef]
  91. Gorji, M.B.; Mozaffar, M.; Heidenreich, J.N.; Cao, J.; Mohr, D. On the potential of recurrent neural networks for modeling path dependent plasticity. J. Mech. Phys. Solids 2020, 143, 103972. [Google Scholar] [CrossRef]
  92. Mozaffar, M.; Bostanabad, R.; Chen, W.; Ehmann, K.; Cao, J.; Bessa, M.A. Deep learning predicts path-dependent plasticity. Proc. Natl. Acad. Sci. USA 2019, 116, 26414–26420. [Google Scholar] [CrossRef]
  93. Trzepieciński, T.; Ryzińska, G.; Biglar, M.; Gromada, M. Modelling of multilayer actuator layers by homogenisation technique using Digimat software. Ceram. Int. 2017, 43, 3259–3266. [Google Scholar] [CrossRef]
  94. Frankel, A.L.; Jones, R.E.; Alleman, C.; Templeton, J.A. Predicting the mechanical response of oligocrystals with deep learning. Comput. Mater. Sci. 2019, 169, 109099. [Google Scholar] [CrossRef]
  95. Wang, K.; Sun, W.C. A multiscale multi-permeability poroplasticity model linked by recursive homogenizations and deep learning. Comput. Methods Appl. Mech. Eng. 2018, 334, 337–380. [Google Scholar] [CrossRef]
  96. Li, X.; Hu, Y.; Zio, E.; Kang, R. A Bayesian Optimal Design for Accelerated Degradation Testing Based on the Inverse Gaussian Process. IEEE Access 2017, 5, 5690–5701. [Google Scholar] [CrossRef]
  97. Qin, L.; Huang, W.; Du, Y.; Zheng, L.; Jawed, M.K. Genetic algorithm-based inverse design of elastic gridshells. Struct. Multidiscip. Optim. 2020, 62, 2691–2707. [Google Scholar] [CrossRef]
  98. Bureerat, S.; Pholdee, N. Inverse problem based differential evolution for efficient structural health monitoring of trusses. Appl. Soft Comput. 2018, 66, 462–472. [Google Scholar] [CrossRef]
  99. Khadilkar, M.R.; Paradiso, S.; Delaney, K.T.; Fredrickson, G.H. Inverse Design of Bulk Morphologies in Multiblock Polymers Using Particle Swarm Optimization. Macromolecules 2017, 50, 6702–6709. [Google Scholar] [CrossRef]
  100. Sun, G.; Sun, Y.; Wang, S. Artificial neural network based inverse design: Airfoils and wings. Aerosp. Sci. Technol. 2015, 42, 415–428. [Google Scholar] [CrossRef]
  101. Wirkert, S.J.; Kenngott, H.; Mayer, B.; Mietkowski, P.; Wagner, M.; Sauer, P.; Clancy, N.T.; Elson, D.S.; Maier-Hein, L. Robust near real-time estimation of physiological parameters from megapixel multispectral images with inverse Monte Carlo and random forest regression. Int. J. Comput. Assist. Radiol. Surg. 2016, 11, 909–917. [Google Scholar] [CrossRef]
  102. Wu, Z.; Ding, C.; Li, G.; Han, X.; Li, J. Learning solutions to the source inverse problem of wave equations using LS-SVM. J. Inverse Ill-Posed Probl. 2019, 27, 657–669. [Google Scholar] [CrossRef]
  103. Rahnama, A.; Zepon, G.; Sridhar, S. Machine learning based prediction of metal hydrides for hydrogen storage, part II: Prediction of material class. Int. J. Hydrogen Energy 2019, 44, 7345–7353. [Google Scholar] [CrossRef]
  104. Zhang, Y.; Xu, X. Machine learning the magnetocaloric effect in manganites from lattice parameters. Appl. Phys. A Mater. Sci. Process. 2020, 126, 341. [Google Scholar] [CrossRef]
  105. Sun, Y.T.; Bai, H.Y.; Li, M.Z.; Wang, W.H. Machine Learning Approach for Prediction and Understanding of Glass-Forming Ability. J. Phys. Chem. Lett. 2017, 8, 3434–3439. [Google Scholar] [CrossRef]
  106. Xiong, J.; Shi, S.Q.; Zhang, T.Y. A machine-learning approach to predicting and understanding the properties of amorphous metallic alloys. Mater. Des. 2020, 187, 108378. [Google Scholar] [CrossRef]
  107. Ward, L.; O’Keeffe, S.C.; Stevick, J.; Jelbert, G.R.; Aykol, M.; Wolverton, C. A machine learning approach for engineering bulk metallic glass alloys. Acta Mater. 2018, 159, 102–111. [Google Scholar] [CrossRef]
  108. Zhang, Y.; Wen, C.; Wang, C.; Antonov, S.; Xue, D.; Bai, Y.; Su, Y. Phase prediction in high entropy alloys with a rational selection of materials descriptors and machine learning models. Acta Mater. 2020, 185, 528–539. [Google Scholar] [CrossRef]
  109. Gorai, P.; Stevanović, V.; Toberer, E.S. Computationally guided discovery of thermoelectric materials. Nat. Rev. Mater. 2017, 2, 17053. [Google Scholar] [CrossRef]
  110. Ratnayake, R.M.C.; Antosz, K. Risk-Based Maintenance Assessment in the Manufacturing Industry: Minimisation of Suboptimal Prioritisation. Manag. Prod. Eng. Rev. 2017, 8, 38–45. [Google Scholar] [CrossRef]
  111. Kozłowski, E.; Antosz, K.; Mazurkiewicz, D.; Sęp, J.; Żabiński, T. Integrating advanced measurement and signal processing for reliability decision-making. Eksploat. i Niezawodn.-Maint. Reliab. 2021, 23, 777–787. [Google Scholar] [CrossRef]
  112. Chen, Q.; Jia, R.; Pang, S. Deep long short-term memory neural network for accelerated elastoplastic analysis of heterogeneous materials: An integrated data-driven surrogate approach. Compos. Struct. 2021, 264, 113688. [Google Scholar] [CrossRef]
  113. Wu, L.; Nguyen, V.D.; Kilingar, N.G.; Noels, L. A recurrent neural network-accelerated multi-scale model for elasto-plastic heterogeneous materials subjected to random cyclic and non-proportional loading paths. Comput. Methods Appl. Mech. Eng. 2020, 369, 113234. [Google Scholar] [CrossRef]
  114. Ghavamian, F.; Simone, A. Accelerating multiscale finite element simulations of history-dependent materials using a recurrent neural network. Comput. Methods Appl. Mech. Eng. 2019, 357, 112594. [Google Scholar] [CrossRef]
  115. Zhu, J.H.; Zaman, M.M.; Anderson, S.A. Modeling of soil behavior with a recurrent neural network. Can. Geotech. J. 1998, 35, 858–872. [Google Scholar] [CrossRef]
  116. Graf, W.; Freitag, S.; Kaliske, M.; Sickert, J.U. Recurrent Neural Networks for Uncertain Time-Dependent Structural Behavior. Comput. Civ. Infrastruct. Eng. 2010, 25, 322–323. [Google Scholar] [CrossRef]
  117. Logarzo, H.J.; Capuano, G.; Rimoli, J.J. Smart constitutive laws: Inelastic homogenization through machine learning. Comput. Methods Appl. Mech. Eng. 2021, 373, 113482. [Google Scholar] [CrossRef]
  118. Hearley, B.; Park, B.; Stuckner, J.; Pineda, E.; Murman, S. Predicting Unreinforced Fabric Mechanical Behavior with Recurrent Neural Networks. 2022. Available online: https://ntrs.nasa.gov/citations/20210023708 (accessed on 10 June 2023).
  119. Farizhandi, A.A.K.; Mamivand, M. Spatiotemporal prediction of microstructure evolution with predictive recurrent neural network. Comput. Mater. Sci. 2023, 223, 112110. [Google Scholar] [CrossRef]
  120. Zhang, J.; Wang, P.; Gao, R.X. Deep learning-based tensile strength prediction in fused deposition modeling. Comput. Ind. 2019, 107, 11–21. [Google Scholar] [CrossRef]
  121. Freitag, S.; Graf, W.; Kaliske, M. A material description based on recurrent neural networks for fuzzy data and its application within the finite element method. Comput. Struct. 2013, 124, 29–37. [Google Scholar] [CrossRef]
  122. Graf, W.; Freitag, S.; Sickert, J.U.; Kaliske, M. Structural Analysis with Fuzzy Data and Neural Network Based Material Description. Comput. Civ. Infrastruct. Eng. 2012, 27, 640–654. [Google Scholar] [CrossRef]
  123. Oeser, M.; Freitag, S. Modeling of materials with fading memory using neural networks. Int. J. Numer. Methods Eng. 2009, 78, 843–862. [Google Scholar] [CrossRef]
  124. Koeppe, A.; Bamer, F.; Selzer, M.; Nestler, B.; Markert, B. Explainable Artificial Intelligence for Mechanics: Physics-Explaining Neural Networks for Constitutive Models. Front. Mater. 2022, 8, 824958. [Google Scholar] [CrossRef]
  125. Flaschel, M.; Kumar, S.; De Lorenzis, L. Unsupervised discovery of interpretable hyperelastic constitutive laws. Comput. Methods Appl. Mech. Eng. 2021, 381, 113852. [Google Scholar] [CrossRef]
  126. Nascimento, R.G.; Viana, F.A.C. Cumulative damage modeling with recurrent neural networks. AIAA J. 2020, 58, 5459–5471. [Google Scholar] [CrossRef]
  127. Yang, K.; Cao, Y.; Zhang, Y.; Fan, S.; Tang, M.; Aberg, D.; Sadigh, B.; Zhou, F. Self-supervised learning and prediction of microstructure evolution with convolutional recurrent neural networks. Patterns 2021, 2, 100243. [Google Scholar] [CrossRef] [PubMed]
  128. Jung, J.; Yoon, J.I.; Park, H.K.; Jo, H.; Kim, H.S. Microstructure design using machine learning generated low dimensional and continuous design space. Materialia 2020, 11, 100690. [Google Scholar] [CrossRef]
  129. Iraki, T.; Morand, L.; Dornheim, J.; Link, N.; Helm, D. A multi-task learning-based optimization approach for finding diverse sets of material microstructures with desired properties and its application to texture optimization. J. Intell. Manuf. 2023, 23, 1–17. [Google Scholar] [CrossRef]
  130. Zhao, Y.; Altschuh, P.; Santoki, J.; Griem, L.; Tosato, G.; Selzer, M.; Koeppe, A.; Nestler, B. Characterization of porous membranes using artificial neural networks. Acta Mater. 2023, 253, 118922. [Google Scholar] [CrossRef]
  131. Stein, H.S.; Guevarra, D.; Newhouse, P.F.; Soedarmadji, E.; Gregoire, J.M. Machine learning of optical properties of materials-predicting spectra from images and images from spectra. Chem. Sci. 2019, 10, 47–55. [Google Scholar] [CrossRef]
  132. Arumugam, D.; Kiran, R. Compact representation and identification of important regions of metal microstructures using complex-step convolutional autoencoders. Mater. Des. 2022, 223, 111236. [Google Scholar] [CrossRef]
  133. Lee, S.M.; Park, S.Y.; Choi, B.H. Application of domain-adaptive convolutional variational autoencoder for stress-state prediction. Knowl.-Based Syst. 2022, 248, 108827. [Google Scholar] [CrossRef]
  134. Kim, Y.; Park, H.K.; Jung, J.; Asghari-Rad, P.; Lee, S.; Kim, J.Y.; Jung, H.G.; Kim, H.S. Exploration of optimal microstructure and mechanical properties in continuous microstructure space using a variational autoencoder. Mater. Des. 2021, 202, 109544. [Google Scholar] [CrossRef]
  135. Frącz, W.; Janowski, G. Influence of homogenization methods in prediction of strength properties for wpc composites. Appl. Comput. Sci. 2017, 13, 77–89. [Google Scholar] [CrossRef]
  136. Morand, L.; Helm, D. A mixture of experts approach to handle ambiguities in parameter identification problems in material modeling. Comput. Mater. Sci. 2019, 167, 85–91. [Google Scholar] [CrossRef]
  137. Morand, L.; Link, N.; Iraki, T.; Dornheim, J.; Helm, D. Efficient Exploration of Microstructure-Property Spaces via Active Learning. Front. Mater. 2022, 8, 824441. [Google Scholar] [CrossRef]
  138. Chen, S.; Xu, N. Detecting Microstructural Criticality/Degeneracy through Hybrid Learning Strategies Trained by Molecular Dynamics Simulations. ACS Appl. Mater. Interfaces 2022, 15, 10193–10202. [Google Scholar] [CrossRef] [PubMed]
  139. Sardeshmukh, A.; Reddy, S.; GauthamB, P.; Bhattacharyya, P. TextureVAE: Learning interpretable representations of material microstructures using variational autoencoders. In CEUR Workshop Proceedings; RWTH Aachen University: Aachen, Germany, 2021; Volume 2964, ISSN 1613-0073. [Google Scholar]
  140. Oommen, V.; Shukla, K.; Goswami, S.; Dingreville, R.; Karniadakis, G.E. Learning two-phase microstructure evolution using neural operators and autoencoder architectures. npj Comput. Mater. 2022, 8, 190. [Google Scholar] [CrossRef]
  141. Pitz, E.; Pochiraju, K. A Neural Network Transformer Model for Composite Microstructure Homogenization. arXiv 2023, arXiv:2304.07877v1. [Google Scholar]
  142. Cang, R.; Xu, Y.; Chen, S.; Liu, Y.; Jiao, Y.; Ren, M.Y. Microstructure Representation and Reconstruction of Heterogeneous Materials Via Deep Belief Network for Computational Material Design. J. Mech. Des. 2017, 139, 071404. [Google Scholar] [CrossRef]
  143. Wei, H.; Zhao, S.; Rong, Q.; Bao, H. Predicting the effective thermal conductivities of composite materials and porous media by machine learning methods. Int. J. Heat Mass Transf. 2018, 127, 908–916. [Google Scholar] [CrossRef]
  144. Cecen, A.; Dai, H.; Yabansu, Y.C.; Kalidindi, S.R.; Song, L. Material structure-property linkages using three-dimensional convolutional neural networks. Acta Mater. 2018, 146, 76–84. [Google Scholar] [CrossRef]
  145. Yang, Z.; Yabansu, Y.C.; Jha, D.; Liao, W.-K.; Choudhary, A.N.; Kalidindi, S.R.; Agrawal, A. Establishing structure-property localization linkages for elastic deformation of three-dimensional high contrast composites using deep learning approaches. Acta Mater. 2018, 166, 335–345. [Google Scholar] [CrossRef]
  146. Chalapathy, R.; Chawla, S. Deep Learning for Anomaly Detection: A Survey. arXiv 2019, arXiv:1901.03407v2. [Google Scholar]
  147. Ruff, L.; Kauffmann, J.R.; Vandermeulen, R.A.; Montavon, G.; Samek, W.; Kloft, M.; Dietterich, T.G.; Muller, K.-R. A Unifying Review of Deep and Shallow Anomaly Detection. Proc. IEEE 2021, 109, 756–795. [Google Scholar] [CrossRef]
  148. Bostanabad, R.; Zhang, Y.; Li, X.; Kearney, T.; Brinson, L.C.; Apley, D.W.; Liu, W.K.; Chen, W. Computational microstructure characterization and reconstruction: Review of the state-of-the-art techniques. Prog. Mater. Sci. 2018, 95, 1–41. [Google Scholar] [CrossRef]
  149. Ma, W.; Kautz, E.J.; Baskaran, A.; Chowdhury, A.; Joshi, V.; Yener, B.; Lewis, D.J. Image-driven discriminative and generative machine learning algorithms for establishing microstructure-processing relationships. J. Appl. Phys. 2020, 128, 134901. [Google Scholar] [CrossRef]
  150. Kautz, E.; Ma, W.; Jana, S.; Devaraj, A.; Joshi, V.; Yener, B.; Lewis, D. An image-driven machine learning approach to kinetic modeling of a discontinuous precipitation reaction. Mater. Charact. 2020, 166, 110379. [Google Scholar] [CrossRef]
  151. Bostanabad, R. Reconstruction of 3D Microstructures from 2D Images via Transfer Learning. CAD Comput.-Aided Des. 2020, 128, 102906. [Google Scholar] [CrossRef]
  152. Li, W.; Li, W.; Qin, Z.; Tan, L.; Huang, L.; Liu, F.; Xiao, C. Deep Transfer Learning for Ni-Based Superalloys Microstructure Recognition on γ′ Phase. Materials 2022, 15, 4251. [Google Scholar] [CrossRef]
  153. Chowdhury, A.; Kautz, E.; Yener, B.; Lewis, D. Image driven machine learning methods for microstructure recognition. Comput. Mater. Sci. 2016, 123, 176–187. [Google Scholar] [CrossRef]
  154. Luo, Q.; Holm, E.A.; Wang, C. A transfer learning approach for improved classification of carbon nanomaterials from TEM images. Nanoscale Adv. 2021, 3, 206–213. [Google Scholar] [CrossRef]
  155. Hinton, G.E.; Salakhutdinov, R.R. Reducing the dimensionality of data with neural networks. Science 2006, 313, 504–507. [Google Scholar] [CrossRef]
  156. Bengio, Y.; Lamblin, P.; Popovici, D.; Larochelle, H. Greedy layer-wise training of deep networks. In Advances in Neural Information Processing Systems; MIT Press: Cambridge, MA, USA, 2007; Volume 19, pp. 153–160. [Google Scholar] [CrossRef]
  157. Deutsch, J.; He, M.; He, D. Remaining useful life prediction of hybrid ceramic bearings using an integrated deep learning and particle filter approach. Appl. Sci. 2017, 7, 649. [Google Scholar] [CrossRef]
  158. Fu, Y.; Zhang, Y.; Qiao, H.; Li, D.; Zhou, H.; Leopold, J. Analysis of feature extracting ability for cutting state monitoring using deep belief networks. Procedia CIRP 2015, 31, 29–34. [Google Scholar] [CrossRef]
  159. Wang, P.; Gao, R.X.; Yan, R. A deep learning-based approach to material removal rate prediction in polishing. CIRP Ann. 2017, 66, 429–432. [Google Scholar] [CrossRef]
  160. Ye, D.; Fuh, J.Y.H.; Zhang, Y.; Hong, G.S.; Zhu, K. In situ monitoring of selective laser melting using plume and spatter signatures by deep belief networks. ISA Trans. 2018, 81, 96–104. [Google Scholar] [CrossRef] [PubMed]
  161. Ye, D.; Hong, G.S.; Zhang, Y.; Zhu, K.; Fuh, J.Y.H. Defect detection in selective laser melting technology by acoustic signals with deep belief networks. Int. J. Adv. Manuf. Technol. 2018, 96, 2791–2801. [Google Scholar] [CrossRef]
  162. Iyer, A.; Dey, B.; Dasgupta, A.; Chen, W.; Chakraborty, A. A Conditional Generative Model for Predicting Material Microstructures from Processing Methods. arXiv 2019, arXiv:1910.02133v1. [Google Scholar]
  163. Cang, R.; Li, H.; Yao, H.; Jiao, Y.; Ren, Y. Improving direct physical properties prediction of heterogeneous materials from imaging data via convolutional neural network and a morphology-aware generative model. Comput. Mater. Sci. 2018, 150, 212–221. [Google Scholar] [CrossRef]
  164. Singh, R.; Shah, V.; Pokuri, B.; Sarkar, S.; Ganapathysubramanian, B.; Hegde, C. Physics-aware Deep Generative Models for Creating Synthetic Microstructures. arXiv 2018, arXiv:1811.09669v1. [Google Scholar]
  165. Yang, Z.; Li, X.; Brinson, L.C.; Choudhary, A.N.; Chen, W.; Agrawal, A. Microstructural materials design via deep adversarial learning methodology. J. Mech. Des. 2018, 140, 4041371. [Google Scholar] [CrossRef]
  166. Buehler, M.J. Prediction of atomic stress fields using cycle-consistent adversarial neural networks based on unpaired and unmatched sparse datasets. Mater. Adv. 2022, 3, 6280–6290. [Google Scholar] [CrossRef]
  167. Chun, S.; Roy, S.; Nguyen, Y.T.; Choi, J.B.; Udaykumar, H.S.; Baek, S.S. Deep learning for synthetic microstructure generation in a materials-by-design framework for heterogeneous energetic materials. Sci. Rep. 2020, 10, 13307. [Google Scholar] [CrossRef]
  168. Mosser, L.; Dubrule, O.; Blunt, M.J. Stochastic Reconstruction of an Oolitic Limestone by Generative Adversarial Networks. Transp. Porous Media 2018, 125, 81–103. [Google Scholar] [CrossRef]
  169. Fokina, D.; Muravleva, E.; Ovchinnikov, G.; Oseledets, I. Microstructure synthesis using style-based generative adversarial networks. Phys. Rev. E 2020, 101, 043308. [Google Scholar] [CrossRef] [PubMed]
  170. Tang, J. Deep Learning-Guided Prediction of Material’s Microstructures and Applications to Advanced Manufacturing. 2021. Available online: https://tigerprints.clemson.edu/all_dissertations/2936 (accessed on 10 June 2023).
  171. Pütz, F.; Henrich, M.; Fehlemann, N.; Roth, A.; Münstermann, S. Generating input data for microstructure modelling: A deep learning approach using generative adversarial networks. Materials 2020, 13, 4236. [Google Scholar] [CrossRef]
  172. Hsu, T.; Epting, W.K.; Kim, H.; Abernathy, H.W.; Hackett, G.A.; Rollett, A.D.; Salvador, P.A.; Holm, E.A. Microstructure Generation via Generative Adversarial Network for Heterogeneous, Topologically Complex 3D Materials. JOM 2021, 73, 90–102. [Google Scholar] [CrossRef]
  173. Gowtham, N.H.; Jegadeesan, J.T.; Bhattacharya, C.; Basu, B. A Deep Adversarial Approach for the Generation of Synthetic Titanium Alloy Microstructures with Limited Training Data. SSRN Electron. J. 2022, 4148217. [Google Scholar] [CrossRef]
  174. Mao, Y.; Yang, Z.; Jha, D.; Paul, A.; Liao, W.-K.; Choudhary, A.; Agrawal, A. Generative Adversarial Networks and Mixture Density Networks-Based Inverse Modeling for Microstructural Materials Design. Integr. Mater. Manuf. Innov. 2022, 11, 637–647. [Google Scholar] [CrossRef]
  175. Thakre, S.; Karan, V.; Kanjarla, A.K. Quantification of similarity and physical awareness of microstructures generated via generative models. Comput. Mater. Sci. 2023, 221, 112074. [Google Scholar] [CrossRef]
  176. Henkes, A.; Wessels, H. Three-dimensional microstructure generation using generative adversarial neural networks in the context of continuum micromechanics. Comput. Methods Appl. Mech. Eng. 2022, 400, 115497. [Google Scholar] [CrossRef]
  177. Lee, J.W.; Goo, N.H.; Park, W.B.; Pyo, M.; Sohn, K.S. Virtual microstructure design for steels using generative adversarial networks. Eng. Rep. 2021, 3, e12274. [Google Scholar] [CrossRef]
  178. Tang, J.; Geng, X.; Li, D.; Shi, Y.; Tong, J.; Xiao, H.; Peng, F. Machine learning-based microstructure prediction during laser sintering of alumina. Sci. Rep. 2021, 11, 10724. [Google Scholar] [CrossRef]
  179. Agrawal, A.; Choudhary, A. Deep materials informatics: Applications of deep learning in materials science. MRS Commun. 2019, 9, 779–792. [Google Scholar] [CrossRef]
  180. Suhartono, D.; Purwandari, K.; Jeremy, N.H.; Philip, S.; Arisaputra, P.; Parmonangan, I.H. Deep neural networks and weighted word embeddings for sentiment analysis of drug product reviews. Procedia Comput. Sci. 2023, 216, 664–671. [Google Scholar] [CrossRef]
  181. Chan, K.Y.; Abu-Salih, B.; Qaddoura, R.; Al-Zoubi, A.M.; Palade, V.; Pham, D.-S.; Del Ser, J.; Muhammad, K. Deep neural networks in the cloud: Review, applications, challenges and research directions. Neurocomputing 2023, 545, 126327. [Google Scholar] [CrossRef]
  182. Oda, H.; Kiyohara, S.; Tsuda, K.; Mizoguchi, T. Transfer learning to accelerate interface structure searches. J. Phys. Soc. Jpn. 2017, 86, 123601. [Google Scholar] [CrossRef]
  183. Kailkhura, B.; Gallagher, B.; Kim, S.; Hiszpanski, A.; Han, T.Y.J. Reliable and explainable machine-learning methods for accelerated material discovery. NPJ Comput. Mater. 2019, 5, 108. [Google Scholar] [CrossRef]
  184. Lee, J.; Asahi, R. Transfer learning for materials informatics using crystal graph convolutional neural network. Comput. Mater. Sci. 2021, 190, 110314. [Google Scholar] [CrossRef]
  185. McClure, Z.D.; Strachan, A. Expanding Materials Selection Via Transfer Learning for High-Temperature Oxide Selection. JOM 2021, 73, 103–115. [Google Scholar] [CrossRef]
  186. Dong, R.; Dan, Y.; Li, X.; Hu, J. Inverse design of composite metal oxide optical materials based on deep transfer learning and global optimization. Comput. Mater. Sci. 2021, 18, 110166. [Google Scholar] [CrossRef]
  187. Ward, L.; Agrawal, A.; Choudhary, A.; Wolverton, C. A general-purpose machine learning framework for predicting properties of inorganic materials. NPJ Comput. Mater. 2016, 2, 16028. [Google Scholar] [CrossRef]
  188. Jia, K.; Li, W.; Wang, Z.; Qin, Z. Accelerating Microstructure Recognition of Nickel-Based Superalloy Data by UNet++. In Lecture Notes on Data Engineering and Communications Technologies; Springer Science and Business Media Deutschland GmbH: Berlin, Germany, 2022; Volume 80, pp. 863–870. [Google Scholar] [CrossRef]
  189. Kondo, R.; Yamakawa, S.; Masuoka, Y.; Tajima, S.; Asahi, R. Microstructure recognition using convolutional neural networks for prediction of ionic conductivity in ceramics. Acta Mater. 2017, 141, 29–38. [Google Scholar] [CrossRef]
  190. Gupta, V.; Choudhary, K.; Tavazza, F.; Campbell, C.; Liao, W.-K.; Choudhary, A.; Agrawal, A. Cross-property deep transfer learning framework for enhanced predictive analytics on small materials data. Nat. Commun. 2021, 12, 6595. [Google Scholar] [CrossRef]
  191. Choudhary, K.; Garrity, K.F.; Reid, A.C.E.; DeCost, B.; Biacchi, A.J.; Walker, A.R.H.; Trautt, Z.; Hattrick-Simpers, J.; Kusne, A.G.; Centrone, A.; et al. The joint automated repository for various integrated simulations (JARVIS) for data-driven materials design. NPJ Comput. Mater. 2020, 6, 173. [Google Scholar] [CrossRef]
  192. Li, X.; Zhang, Y.; Zhao, H.; Burkhart, C.; Brinson, L.C.; Chen, W. A Transfer Learning Approach for Microstructure Reconstruction and Structure-property Predictions. Sci. Rep. 2018, 8, 13461. [Google Scholar] [CrossRef] [PubMed]
  193. DeCost, B.L.; Francis, T.; Holm, E.A. Exploring the microstructure manifold: Image texture representations applied to ultrahigh carbon steel microstructures. Acta Mater. 2017, 133, 30–40. [Google Scholar] [CrossRef]
  194. Feng, S.; Zhou, H.; Dong, H. Application of deep transfer learning to predicting crystal structures of inorganic substances. Comput. Mater. Sci. 2021, 195, 110476. [Google Scholar] [CrossRef]
  195. Pandiyan, V.; Drissi-Daoudi, R.; Shevchik, S.; Masinelli, G.; Le-Quang, T.; Logé, R.; Wasmer, K. Deep transfer learning of additive manufacturing mechanisms across materials in metal-based laser powder bed fusion process. J. Mater. Process. Technol. 2022, 303, 117531. [Google Scholar] [CrossRef]
  196. Pandiyan, V.; Drissi-Daoudi, R.; Shevchik, S.; Masinelli, G.; Logé, R.; Wasmer, K. Analysis of time, frequency and time-frequency domain features from acoustic emissions during Laser Powder-Bed fusion process. Procedia CIRP 2020, 94, 392–397. [Google Scholar] [CrossRef]
  197. Scime, L.; Beuth, J. Anomaly detection and classification in a laser powder bed additive manufacturing process using a trained computer vision algorithm. Addit. Manuf. 2018, 19, 114–126. [Google Scholar] [CrossRef]
  198. Caggiano, A.; Zhang, J.; Alfieri, V.; Caiazzo, F.; Gao, R.; Teti, R. Machine learning-based image processing for on-line defect recognition in additive manufacturing. CIRP Ann. 2019, 68, 451–454. [Google Scholar] [CrossRef]
  199. Yamada, H.; Liu, C.; Wu, S.; Koyama, Y.; Ju, S.; Shiomi, J.; Morikawa, J.; Yoshida, R. Predicting Materials Properties with Little Data Using Shotgun Transfer Learning. ACS Central Sci. 2019, 5, 1717–1730. [Google Scholar] [CrossRef]
  200. Farizhandi, A.A.K.; Mamivand, M. Processing time, temperature, and initial chemical composition prediction from materials microstructure by deep network for multiple inputs and fused data. Mater. Des. 2022, 219, 110799. [Google Scholar] [CrossRef]
  201. Yang, L.; Dai, W.; Rao, Y.; Chyu, M.K. Optimization of the hole distribution of an effusively cooled surface facing non-uniform incoming temperature using deep learning approaches. Int. J. Heat Mass Transf. 2019, 145, 118749. [Google Scholar] [CrossRef]
  202. Mendizabal, A.; Márquez-Neila, P.; Cotin, S. Simulation of hyperelastic materials in real-time using deep learning. Med. Image Anal. 2019, 59, 10156. [Google Scholar] [CrossRef] [PubMed]
  203. Altarazi, S.; Ammouri, M.; Hijazi, A. Artificial neural network modeling to evaluate polyvinylchloride composites’ properties. Comput. Mater. Sci. 2018, 153, 1–9. [Google Scholar] [CrossRef]
  204. Rong, Q.; Wei, H.; Huang, X.; Bao, H. Predicting the effective thermal conductivity of composites from cross sections images using deep learning methods. Compos. Sci. Technol. 2019, 184, 107861. [Google Scholar] [CrossRef]
  205. You, K.W.; Arumugasamy, S.K. Deep learning techniques for polycaprolactone molecular weight prediction via enzymatic polymerization process. J. Taiwan Inst. Chem. Eng. 2020, 116, 238–255. [Google Scholar] [CrossRef]
  206. Tong, Z.; Wang, Z.; Wang, X.; Ma, Y.; Guo, H.; Liu, C. Characterization of hydration and dry shrinkage behavior of cement emulsified asphalt composites using deep learning. Constr. Build. Mater. 2021, 274, 121898. [Google Scholar] [CrossRef]
  207. Tong, Z.; Gao, J.; Wang, Z.; Wei, Y.; Dou, H. A new method for CF morphology distribution evaluation and CFRC property prediction using cascade deep learning. Constr. Build. Mater. 2019, 222, 829–838. [Google Scholar] [CrossRef]
  208. Choudhary, K.; DeCost, B.; Chen, C.; Jain, A.; Tavazza, F.; Cohn, R.; Park, C.W.; Choudhary, A.; Agrawal, A.; Billinge, S.J.L.; et al. Recent advances and applications of deep learning methods in materials science. NPJ Comput. Mater. 2022, 8, 59. [Google Scholar] [CrossRef]
  209. Wang, Y.; Soutis, C.; Ando, D.; Sutou, Y.; Narita, F. Application of deep neural network learning in composites design. Eur. J. Mater. 2022, 2, 117–170. [Google Scholar] [CrossRef]
  210. Kong, S.; Guevarra, D.; Gomes, C.P.; Gregoire, J.M. Materials representation and transfer learning for multi-property prediction. Appl. Phys. Rev. 2021, 8, 021409. [Google Scholar] [CrossRef]
Figure 1. Basic properties of engineering materials.
Figure 2. Classification of machine learning algorithms.
Figure 3. Schematic representation of a deep neural network.
Figure 4. The relationship between artificial intelligence, machine learning, and deep learning (reproduced from Reference [29]—this is an open access article distributed under the terms of the Creative Commons CC-BY license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited).
Figure 5. Typical prediction process of artificial-intelligence-based methods.
Figure 6. General schematic of support vector machine.
Figure 7. General schematic of decision tree models (prepared on the basis of [42]).
Figure 8. A typical architecture of an ANN model.
Figure 9. A typical architecture of a CNN model [73].
Figure 10. A simple RNN unit (reproduced with permission from Reference [90]; copyright © 2021, The Author(s), under exclusive license to Springer-Verlag GmbH, Germany, part of Springer Nature).
Figure 11. (a) The initial configuration of micro-scale RVE; (b) the deformed configuration and (c) the flow network generated from the deformed configuration (reproduced with permission from Reference [95]; copyright © 2018 Elsevier B.V. All rights reserved).
Figure 12. LSTM with sequential data processing (reproduced with permission from Reference [112]; copyright © 2021 Elsevier Ltd. All rights reserved).
Figure 13. (a) FE2 multi-scale simulation and (b) the meso-scale BVP during multi-scale simulation (reproduced with permission from Reference [113]; copyright © 2020 Elsevier B.V. All rights reserved).
Figure 14. Approach for building a generic data-driven material model (reproduced with permission from Reference [117]; copyright © 2020 Elsevier B.V. All rights reserved).
Figure 15. Structure of a convolutional encoder (reproduced with permission from Reference [128]; copyright © 2020 Acta Materialia Inc. Published by Elsevier Ltd. All rights reserved).
Figure 16. Schematic diagram of the variational auto-encoder for the characterization of porous membranes (reproduced with permission from Reference [130]; copyright © 2023 The Author(s). Published by Elsevier Ltd. on behalf of Acta Materialia Inc.).
Figure 17. Schematic diagram of the auto-encoder structure (reproduced with permission from Reference [132]; copyright © 2022 The Author(s). Published by Elsevier Ltd.).
Figure 18. A DBN architecture with one multilayer perceptron and two restricted Boltzmann machines (reproduced with permission from Reference [159]; copyright © 2017. Published by Elsevier Ltd. on behalf of CIRP).
Figure 19. (a) Mapping the chemical formula of a material into a 2D representation with a periodic table structure; (b) structure of the VGG-like CNN; (c) the workflow of transfer learning (CONV—convolutional operation; CNN—convolutional neural network; FC—fully connected layer; OQMD—open quantum materials database; RF—random forest; SNN—shallow neural network; SVM—support vector machine) (reproduced with permission from Reference [194]; copyright © 2021 Elsevier B.V. All rights reserved).
Figure 20. Workflow for transferring knowledge acquired regarding different build qualities obtained using a CNN from one material to another material (reproduced from Reference [195]—this is an open-access article distributed under the terms of the Creative Commons CC-BY license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited).
Figure 21. Flowchart for the prediction of chemistry and processing history on the basis of microstructure images (reproduced with permission from Reference [200]; copyright © 2022 The Authors. Published by Elsevier Ltd.).
Table 1. Strengths and weaknesses of common shallow learning methods in the prediction of material properties.

Support Vector Machine (SVM)
Strengths: high prediction speed and accuracy for small datasets; ability to handle high-dimensional data; relatively memory efficient.
Weaknesses: inefficient for large datasets; not suitable for noisy data.

k-Nearest Neighbor (k-NN)
Strengths: simple structure and easy implementation; robust to noise; mature theory.
Weaknesses: slow with large-volume datasets; computationally expensive; poor performance with high-dimensional data; requires significant storage space; performance depends on the choice of k.

Decision Tree
Strengths: easy to understand and interpret; good visualization of results.
Weaknesses: prone to overfitting; longer training period; requires additional domain knowledge.

Random Forest
Strengths: easy to understand and interpret; low computational cost; good performance with high-dimensional data.
Weaknesses: prone to overfitting.

Artificial Neural Network (ANN)
Strengths: parallel information processing capability; high prediction accuracy and speed; effective approximation of complex nonlinear functions; suitable for relatively large datasets.
Weaknesses: computationally expensive; prone to overfitting with small datasets; lack of transparency due to the "black box" nature of training procedures.
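To make the trade-offs in Table 1 concrete, the minimal scikit-learn sketch below fits the five shallow learners to a small synthetic dataset. It is an illustrative example rather than a reproduction of any cited study: the input features (fibre volume fraction, fibre orientation, matrix modulus) and the target strength function are hypothetical stand-ins for measured composite data.

```python
# Illustrative comparison of the shallow learners in Table 1 on synthetic data.
# All feature names and the target function are hypothetical placeholders.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVR
from sklearn.neighbors import KNeighborsRegressor
from sklearn.tree import DecisionTreeRegressor
from sklearn.ensemble import RandomForestRegressor
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)
n = 500
# Hypothetical inputs: fibre volume fraction [-], fibre angle [deg], matrix modulus [GPa]
X = np.column_stack([rng.uniform(0.3, 0.7, n),
                     rng.uniform(0.0, 90.0, n),
                     rng.uniform(2.0, 4.0, n)])
# Hypothetical target: tensile strength [MPa] with a nonlinear angle dependence plus noise
y = 800 * X[:, 0] * np.cos(np.radians(X[:, 1])) ** 2 + 50 * X[:, 2] + rng.normal(0, 10, n)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

models = {
    "SVM (RBF kernel)": make_pipeline(StandardScaler(), SVR(C=100.0, gamma="scale")),
    "k-NN (k = 5)": make_pipeline(StandardScaler(), KNeighborsRegressor(n_neighbors=5)),
    "Decision tree": DecisionTreeRegressor(max_depth=6, random_state=0),
    "Random forest": RandomForestRegressor(n_estimators=200, random_state=0),
    "ANN (MLP)": make_pipeline(StandardScaler(),
                               MLPRegressor(hidden_layer_sizes=(64, 64),
                                            max_iter=2000, random_state=0)),
}

for name, model in models.items():
    model.fit(X_train, y_train)
    print(f"{name:18s} R^2 = {r2_score(y_test, model.predict(X_test)):.3f}")
```

On small, low-dimensional data of this kind, all five models train in seconds on a CPU and, apart from the single decision tree, usually reach comparable accuracy, which is consistent with the strengths listed in Table 1; the picture changes once the data become large, noisy, or high-dimensional.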
Table 2. Strengths and weaknesses of common deep learning methods in the prediction of material properties.

Convolutional Neural Network (CNN)
Strengths: well suited for multi-dimensional data, particularly images; effective for extracting relevant features; excellent performance in local feature extraction.
Weaknesses: complex architecture requiring longer training times; requires a sufficient amount of training data; prone to overfitting.

Recurrent Neural Network (RNN)
Strengths: suitable for sequential data analysis; can capture temporal changes and patterns effectively; well suited for time series data.
Weaknesses: difficult to train and implement due to complex architectures.

Auto-Encoder (AE)
Strengths: easy to implement; computationally efficient; can learn enriched representations.
Weaknesses: requires a large amount of training data; ineffective when relevant information is overshadowed by noise; performance can degrade if errors occur in the initial layers.

Deep Belief Network (DBN)
Strengths: well suited for one-dimensional data; extracts high-level features from input data; performs well with complex data without requiring extensive data preparation; pre-training stage removes the need for labeled data.
Weaknesses: training can be slow due to complex initialization and computational expense; inference and learning with multiple stochastic hidden layers can be challenging.

Generative Adversarial Network (GAN)
Strengths: efficient at generating synthetic data with limited training data.
Weaknesses: difficult to train and optimize; limited data generation ability when training data are extremely limited.
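As an illustration of the CNN entry in Table 2, the following PyTorch sketch maps a single-channel microstructure image to one scalar property (for example, a normalized effective modulus) and runs a few optimization steps on dummy tensors. The 64 × 64 image size, the layer sizes, and the data are assumptions made only for this example; the sketch is not the architecture of any work reviewed above.

```python
# Illustrative CNN regressor: one grayscale microstructure image -> one scalar property.
# Image size, architecture, and data are placeholders, not a model from the cited works.
import torch
import torch.nn as nn

class MicrostructureCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # 64 -> 32
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 32 -> 16
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),                                                  # global average pooling
        )
        self.regressor = nn.Sequential(nn.Flatten(), nn.Linear(64, 32), nn.ReLU(), nn.Linear(32, 1))

    def forward(self, x):
        return self.regressor(self.features(x))

model = MicrostructureCNN()
loss_fn = nn.MSELoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# Dummy batch standing in for (image, property) training pairs.
images = torch.randn(8, 1, 64, 64)   # 8 grayscale 64 x 64 microstructure images
targets = torch.randn(8, 1)          # 8 scalar property values (e.g., normalized modulus)

for epoch in range(5):               # a few illustrative optimization steps
    optimizer.zero_grad()
    loss = loss_fn(model(images), targets)
    loss.backward()
    optimizer.step()
    print(f"epoch {epoch}: MSE = {loss.item():.4f}")
```

Replacing the dummy tensors with a DataLoader over real image–property pairs, adding a validation split, and tuning depth and regularization would be the natural next steps; the overfitting risk noted in Table 2 makes such a split essential when the image set is small.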
Table 3. Strengths and weaknesses of traditional machine learning and deep learning methods.

Traditional machine learning
Strengths: accurate for small datasets; requires less training time; efficient CPU utilization.
Weaknesses: less accurate for high-dimensional data; preprocessing is necessary and must be highly accurate.

Deep learning
Strengths: accurate for big data; automatically extracts relevant features; preprocessing is not necessary.
Weaknesses: requires big data for optimal performance; computationally expensive and requires GPU acceleration; highly complex network architecture; not easily interpretable.
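The large data requirement listed for deep learning in Table 3 is often mitigated by transfer learning of the kind shown in Figures 19 and 20. The sketch below is a minimal example under stated assumptions (torchvision ≥ 0.13 is available and the ImageNet weights can be downloaded), not the workflow of any cited paper: a pretrained ResNet-18 backbone is frozen and only a small regression head is trained on a handful of labelled images.

```python
# Illustrative transfer learning: freeze a pretrained ResNet-18 backbone and train a
# small regression head on a small, hypothetical set of labelled microstructure images.
# Requires torchvision >= 0.13 and downloads ImageNet weights on first use.
import torch
import torch.nn as nn
from torchvision import models

backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for param in backbone.parameters():          # freeze all pretrained convolutional weights
    param.requires_grad = False

# Replace the 1000-class ImageNet head with a small head predicting one property.
backbone.fc = nn.Sequential(nn.Linear(backbone.fc.in_features, 64),
                            nn.ReLU(),
                            nn.Linear(64, 1))

optimizer = torch.optim.Adam(backbone.fc.parameters(), lr=1e-3)  # only the head is trained
loss_fn = nn.MSELoss()

# Dummy batch standing in for a small labelled dataset of RGB 224 x 224 images.
images = torch.randn(4, 3, 224, 224)
targets = torch.randn(4, 1)

backbone.train()
for step in range(3):                        # a few illustrative fine-tuning steps
    optimizer.zero_grad()
    loss = loss_fn(backbone(images), targets)
    loss.backward()
    optimizer.step()
    print(f"step {step}: MSE = {loss.item():.4f}")
```

Freezing the backbone keeps the number of trainable parameters small, so a few hundred labelled micrographs can suffice; unfreezing the last convolutional block for a second, low-learning-rate fine-tuning stage is a common refinement when somewhat more data are available.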
