Review

Practical Aspects of the Design and Use of the Artificial Neural Networks in Materials Engineering

Department of Engineering Materials and Biomaterials, Faculty of Mechanical Engineering, Silesian University of Technology, 44-100 Gliwice, Poland
*
Authors to whom correspondence should be addressed.
Metals 2021, 11(11), 1832; https://doi.org/10.3390/met11111832
Submission received: 14 October 2021 / Revised: 11 November 2021 / Accepted: 12 November 2021 / Published: 15 November 2021

Abstract:
Artificial neural networks are an effective and frequently used modelling method for regression and classification tasks in the area of steels and metal alloys. New publications showing examples of the use of artificial neural networks in this area appear regularly. This paper presents an overview of these publications. Attention was paid to critical issues related to the design of artificial neural networks. We present our suggestions regarding the individual stages of creating and evaluating neural models. Among other things, attention was paid to the vital role of the dataset used to train and test the neural network, and to its relationship with the topology of the artificial neural network. Examples of approaches to designing neural networks by other researchers in this area are presented.

1. Introduction

In recent years, there has been a dynamic development of methods and tools enabling modelling and simulation of technological processes of manufacturing, processing and shaping the structure and properties of steels and metal alloys. Computer-assisted modelling is used in scientific and industrial research. It is a relatively cheap and effective method of optimizing, among others, the chemical composition and conditions of technological processes, supporting the achievement of the desired properties of metal materials [1,2].
Computer-aided computation also helps to reduce costs by reducing the number of necessary experiments [3,4]. The increasing availability of material databases and progress in machine learning methods open up new opportunities to predict material properties and to design and implement next-generation materials [4,5,6]. There is also a growing interest in applying artificial intelligence and computational intelligence in materials engineering, as in many other fields of science and technology [2,7,8,9,10]. Artificial neural networks are one of the most popular methods of computational intelligence [11,12,13,14]. They are a valuable tool for implementing practical tasks, and their use is justified especially when it is difficult to create a mathematical model of the analyzed problem.
Artificial neural networks make it possible to build relationships between the studied quantities without defining a mathematical description of the analyzed problem. Artificial neural networks learn problem-solving based on patterns. Therefore, it is essential to prepare a representative set of data obtained experimentally.
Many authors have presented the excellent application potential of artificial neural networks in materials science in review papers [14,15,16,17,18,19]. Bhadeshia [15] was the first to review the applications of artificial neural networks in materials science; that paper served as an introduction to a special issue devoted to this topic. Sha and Edwards [16] presented the mistakes made when creating neural models. The authors noted, among other things, too few patterns in the data set, an excessively complex network structure, and incorrect interpretation of the results. Bhadeshia et al. [17] drew attention to the verification of neural models and their potential application by other researchers. Mukherjee et al. [18] presented examples of artificial neural networks used to model the properties and structure of steel with the TRIP (Transformation Induced Plasticity) effect. The authors emphasized the high efficiency of artificial neural networks in modelling practical problems with many variables. Dobrzański et al. [19] presented numerous examples of the application of artificial neural networks to steels and metal alloys based on their own research. Their paper focused on practical issues related to the design of MLP (Multilayer Perceptron) neural networks. The number of publications related to applying artificial neural networks in the research of steels and metal alloys is growing systematically (Figure 1a).
Machine learning, especially the more frequently used deep learning, is an effective method of extracting knowledge from large data sets and contributes to the development of materials science [4,20,21,22,23,24]. In the area of research of steels and metal alloys, the trend of using deep learning and Convolutional Neural Networks (CNN) to solve problems related to image classification is evident (Figure 1b).
The paper aims to review the applications of artificial neural networks in the study of steels and metal alloys. Special attention was paid to the multilayer perceptron, the most commonly used type of neural network in materials engineering. The authors attempt to draw attention to important, practical issues related to the design of neural models, based on their own experience and the latest works of other authors.
The paper focuses on the preparation of a data set for training and testing a neural network, including the relationship between the number of patterns and the topology of the neural network. The issues of assessing the significance of independent variables and of using qualitative variables in the neural model are discussed. The important problem of overfitting the neural network is presented, with reference to the assessment of the quality of the model and of the simulation results obtained with neural network models. Attention was paid to new trends in neural modelling, such as deep neural networks and hybrid systems. In the case of deep neural networks, consideration has been limited to convolutional neural networks, which are often used in materials engineering to solve image classification tasks.

2. Neural Networks Design

Artificial neural networks are a universal tool designed for numerical modelling. Like mathematical modelling, neural modelling comes down to searching for a functional form of an unknown transformation. An essential feature of artificial neural networks is the ability to learn from patterns. An artificial neural network is defined by a mathematical model of a neuron, a characteristic arrangement of neurons in the network, and a way of connecting neurons. Neural modelling can be divided into four stages: preparation of a representative set of data for network training and testing, selection of the type of neural network and determination of parameters characterizing the neural network, training of the neural network and assessment of the quality of the developed model.
The most significant number of applications of artificial neural networks in steels and metal alloys concerns solving classification and regression problems. MLP networks are the most commonly used. The neurons of an MLP network are arranged in regular structures in the form of layers: an input layer, an output layer and one or more hidden layers. Signals between neurons flow in one direction, towards subsequent layers of the network. The weighted sum of the input signals calculated in the neuron, minus the threshold value, is transformed by the neuron's activation function. In designing the structure of a multilayer perceptron, it is essential to determine the number of hidden layers and the number of neurons in these layers, and to define the activation function and the postsynaptic potential function in the individual layers of the neural network.
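The computation performed by an MLP neuron described above (weighted sum of inputs minus a threshold, passed through an activation function) can be sketched as follows. This is a minimal illustrative example, not the implementation used in any of the cited works; the layer sizes and the logistic activation function are assumptions.

```python
import numpy as np

def sigmoid(x):
    """Logistic activation function, a common choice for MLP neurons."""
    return 1.0 / (1.0 + np.exp(-x))

def mlp_forward(x, weights, biases):
    """Forward pass through an MLP: each layer computes the weighted sum
    of its inputs minus a threshold (folded here into the bias term)
    and transforms it with the activation function."""
    a = x
    for W, b in zip(weights, biases):
        a = sigmoid(a @ W + b)
    return a

# A toy 3-4-1 network: 3 inputs, 4 hidden neurons, 1 output neuron.
rng = np.random.default_rng(0)
weights = [rng.normal(size=(3, 4)), rng.normal(size=(4, 1))]
biases = [np.zeros(4), np.zeros(1)]
y = mlp_forward(np.array([0.2, 0.5, 0.1]), weights, biases)
```

Because the output neuron also uses the logistic function, the network response lies in the interval (0, 1), which is the property exploited later for class coding.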
Most of the publications related to artificial neural networks in steels and metal alloys concern supervised learning. Supervised learning of a neural network consists of forcing a specific reaction to the input signal. During training, the weights of the connections between processing elements (neurons) are calculated so as to map the input to the output with the smallest possible error. Weight tuning takes place in successive learning cycles (epochs).

2.1. Data Set and Neural Network Topology

The prerequisite for developing an adequate neural model is preparing a representative data set. This applies to both the required number of patterns and the appropriate distribution of the values of the variables in the data set. The choice of variables representing the model is determined by the knowledge of the modelled process and the accessibility of data. The accessibility of data often forces the introduction of simplifications to the model. The patterns presented to the network during training should contain the values of all variables. The analysis of the possibility of obtaining data and its cost is also critical. It is related to performing an appropriate number of experiments and acquiring information from other sources, such as literature.
It is also helpful to use different methods of assessing the significance of independent variables [25]. At this stage of modelling, it is also necessary, in many cases, to introduce simplifications to the model. An essential stage in preparing the data set is the statistical analysis of the model variables, including the identification of collinear variables and outliers. Collinearity occurs when there is a correlation between the independent variables, and it significantly hinders the assessment of the influence of the independent variables on the dependent variable: the influence of an independent variable is confounded with the influence of the variables correlated with it. The results of a collinearity analysis of independent variables are presented in a few publications on neural modelling in the area of steels and metal alloys [26,27,28,29].
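A simple first screening for collinearity is to inspect the pairwise Pearson correlations of the independent variables. The sketch below is illustrative only; the variable names, the 0.9 threshold and the toy data are assumptions, not values from the cited works.

```python
import numpy as np

def collinear_pairs(X, names, threshold=0.9):
    """Flag pairs of independent variables whose absolute Pearson
    correlation exceeds the threshold (candidate collinear variables)."""
    corr = np.corrcoef(X, rowvar=False)
    pairs = []
    for i in range(len(names)):
        for j in range(i + 1, len(names)):
            if abs(corr[i, j]) > threshold:
                pairs.append((names[i], names[j], float(corr[i, j])))
    return pairs

# Toy data: the hypothetical "Cr" column is almost a linear function of
# the "C" column, so that pair is flagged; "Mn" is independent noise.
rng = np.random.default_rng(1)
c = rng.uniform(0.1, 1.0, 200)
X = np.column_stack([c,
                     2.0 * c + rng.normal(0, 0.01, 200),
                     rng.uniform(size=200)])
flagged = collinear_pairs(X, ["C", "Cr", "Mn"])
```

Flagged pairs are candidates for removal or merging before training, since their individual influence on the dependent variable cannot be separated by the network.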
An important issue is determining the range of the values of the variables and the statistical evaluation of their distribution. The patterns presented to the neural network during training should evenly cover the entire domain. The results of the statistical distribution of the values of the model variables are usually limited to the minimum and maximum values, the mean value and the standard deviation. This is due to limitations resulting, in many cases, from the capacity of the publication. Distributions of the values of the model’s variables can be found in [26,28,30,31,32,33].
Based on this analysis, the independent variable value domain should be defined, in which the neural model can be used. Extrapolation outside the range of the training data usually leads to much larger errors than the estimated model error. In a multidimensional input space, there may be areas where the values of the independent variables are not represented. The presence of only single values in specific ranges of input variables does not allow for the assumption that the developed neural model will correctly predict the value of the dependent variable in the area defined by the minimum and maximum values of individual independent variables. In such a case, it is advantageous to limit the range of the model application and remove the mentioned values or define additional conditions that will limit the application of the model. These conditions may concern the mutual relations between the values of the independent variables. For example, there are combinations of mass concentrations of elements in steel or metal alloys that do not make sense for technological or other reasons. In such a case, the specialist knowledge of the neural network designer of the modelled problem is essential. Such conditions were presented in the works [26,27], in which the neural model of CCT (Continuous Cooling Transformation) diagrams was described. Similarly, the scope of the model’s application was limited in work devoted to the use of neural networks to calculate the hardness and fracture toughness of high-speed steels [34].
The dataset must contain an appropriate number of patterns used to train the neural network and test the model. The minimum number of patterns depends on the complexity of the modelled process and the structure of the neural network. Moreover, in classification problems, the class that occurs more frequently in reality should have proportionally more patterns in the data set. For example, in the case of two classes represented by 95% and 5% of the patterns, a classifier striving to minimize the error may ignore the second class entirely while still obtaining a correct classification coefficient of 0.95.
Increasing the number of model variables and the number of neurons in the hidden layer or layers increases the required number of training patterns. The number of hidden layers and the number of neurons in these layers determine the number of connections between neurons (Figure 2). Various equations have been proposed to estimate the number of neurons in the hidden layer. These equations often combine the number of neurons in the hidden layer with the number of neurons in the input and output layers and, optionally, with the number of patterns in the training data set [35,36,37]. In practice, the number of neurons in the hidden layer or layers is most often determined experimentally.
Each connection has an associated weight—a numerical value determined during training of the neural network. A consequence of increasing the number of neurons is the increase in the number of weight factors that must be calculated when training the neural network. There can be no fewer training patterns than model parameters, which are determined during the training process. In addition to the training set, a sufficient number of patterns must include the set used to test the model. The relationship between the topology of the neural network and the number of patterns necessary to develop the model is emphasized by many authors [19,30,37].
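The relationship between topology and required data can be made concrete by counting the trainable parameters of a fully connected MLP: each layer with n_in inputs and n_out neurons contributes n_in x n_out weights plus n_out biases. A minimal sketch (the 10-8-1 example topology is an assumption for illustration):

```python
def mlp_parameter_count(layer_sizes):
    """Number of trainable parameters (weights plus biases) of a fully
    connected MLP given its layer sizes, e.g. [n_in, n_hidden, n_out]."""
    total = 0
    for n_in, n_out in zip(layer_sizes, layer_sizes[1:]):
        total += n_in * n_out + n_out  # weights + biases of this layer
    return total

# A 10-8-1 network: 10*8 + 8 parameters in the hidden layer and
# 8*1 + 1 in the output layer, i.e. 97 in total, so the training set
# should contain (well) over 97 patterns.
params = mlp_parameter_count([10, 8, 1])
```

This count is the lower bound discussed above: there can be no fewer training patterns than parameters determined during training, and in practice considerably more are needed.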
The data set is divided, usually randomly, into two or three subsets: training and test or training, validation and test. Data from the training set is used to determine weight values during the training process. The other sets are used to verify the network during or after training. The proportions of the division of sets are a choice between providing the network with an appropriate number of training patterns and the reliability of assessing the correctness of the network operation. Cross-validation is another technique used in many studies, especially in the case of a relatively small dataset [30].
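A random three-way split of the kind described above can be sketched as follows; the 70/15/15 proportions and the fixed seed are assumptions chosen for illustration, not a recommendation from the cited works.

```python
import random

def split_dataset(patterns, fractions=(0.70, 0.15, 0.15), seed=42):
    """Randomly divide the patterns into training, validation and test
    subsets according to the given fractions."""
    items = list(patterns)
    random.Random(seed).shuffle(items)
    n_train = int(fractions[0] * len(items))
    n_val = int(fractions[1] * len(items))
    return (items[:n_train],
            items[n_train:n_train + n_val],
            items[n_train + n_val:])

train, val, test = split_dataset(range(100))
```

The split is exhaustive and disjoint: every pattern lands in exactly one subset, so the validation and test sets remain unseen during weight determination.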
Errors related to an insufficient number of patterns on which the neural model is based, combined with an excessively complex structure of the neural network, occur in many works related to the application of artificial neural networks in materials engineering. The lack of a sufficient number of patterns often results from the high cost of acquiring them, which stems from the necessity of performing experiments [18].
In the case of a limited number of learning patterns, the neural model should be simplified to one hidden layer, and the number of neurons in this layer should be as small as possible. In practice, most of the modelling results in steels and metal alloys presented in the literature concern neural networks with one hidden layer. There are fewer cases of using two [38,39,40,41,42,43] or more hidden layers [30,44].

2.2. Independent Variables and Assessment of Their Significance

The number of neurons in the neural network’s input layer is closely related to the number of independent variables of the model. Removing insignificant variables from the model may contribute to the improvement of the model’s efficiency. Assessing the significance of independent variables is an essential step in modelling. Various methods are used for artificial neural networks [45]. The results are usually presented in graphs showing the impact’s value and/or direction [25].
Reddy et al. [39] and Wang et al. [46] used a method consisting of preparing two new data sets in which the values of the analyzed variable differed by 5% [39] and 10% [46], respectively. In the next step, the authors performed simulation calculations and then calculated the average value of the obtained difference. In this way, they determined the positive or negative impact of the variable and its magnitude.
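The perturbation idea can be sketched generically: increase one input variable by a small fraction, run the model on both data sets, and average the output difference. This is a simplified illustration of the approach, not code from the cited works; the toy model and the 5% fraction are assumptions.

```python
import numpy as np

def perturbation_sensitivity(model, X, index, fraction=0.05):
    """Mean change of the model output when one input variable is
    increased by the given fraction; the sign of the result indicates
    the direction of the variable's influence."""
    X_up = X.copy()
    X_up[:, index] *= (1.0 + fraction)
    return float(np.mean(model(X_up) - model(X)))

# Hypothetical model: the output rises with variable 0 and falls with
# variable 1, so the two sensitivities have opposite signs.
model = lambda X: 3.0 * X[:, 0] - 2.0 * X[:, 1]
X = np.ones((10, 2))
s0 = perturbation_sensitivity(model, X, 0)
s1 = perturbation_sensitivity(model, X, 1)
```

In practice `model` would be the trained neural network's prediction function, and the procedure is repeated for each independent variable in turn.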
In the works [26,27,34], the significance of independent variables was assessed based on the quotient of the error made by the network without the influence of the analyzed variable and the error of the neural network. The error quotient was calculated independently for the training and test sets. In the simulation calculations, the mean value of the evaluated variable was assumed for all cases. The independent variable was considered significant if the value of the quotient was greater than 1. This method of assessing the significance of the independent variables of the model does not make it possible to determine the direction of the influence of the independent variables on the response of the neural network (Figure 3a).
A common solution is to calculate the relative importance of the input variables [35,47,48]. The coefficient value for each independent variable can be calculated based on the weights of connections of the input and hidden layer neurons. The sum of the coefficients calculated for all independent variables equals 1 (Figure 3b).
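A weight-based relative-importance coefficient of this kind can be computed, for example, in the spirit of Garson's algorithm: the contribution of each input to each hidden neuron is weighted by that hidden neuron's output connection and normalized so that the coefficients sum to 1. The sketch below is one common variant, offered as an illustration rather than the exact formula used in the cited works.

```python
import numpy as np

def relative_importance(W_hidden, w_output):
    """Relative importance of each input variable from the absolute
    input-to-hidden weights (W_hidden, shape n_in x n_hidden) and the
    hidden-to-output weights (w_output, length n_hidden), following a
    Garson-style scheme; the returned coefficients sum to 1."""
    c = np.abs(W_hidden) * np.abs(w_output)   # input i's contribution via hidden j
    r = c / c.sum(axis=0, keepdims=True)      # share of each input per hidden neuron
    importance = r.sum(axis=1)
    return importance / importance.sum()

# Random weights of a hypothetical 5-3-1 network, for demonstration.
rng = np.random.default_rng(2)
ri = relative_importance(rng.normal(size=(5, 3)), rng.normal(size=3))
```

Because absolute values are used, this variant (like the error-quotient method) reports the magnitude of influence but not its direction.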
The genetic algorithm can also be used to select the input variables of the artificial neural network. The essence of genetic algorithms, like that of the other evolutionary methods to which they belong, is the search for a solution based on the mechanisms of natural selection. Genetic algorithms [49,50] are stochastic methods and, thanks to operations modelled on natural evolution, they create better and better solutions in subsequent iterations.
They exploit the evolutionary rule according to which the best-fitted individuals have the greatest chance of survival. The method consists of searching for the optimal set of independent variables: in subsequent iterations of the algorithm, a different set of independent variables, selected by the genetic algorithm, is checked by training the neural network on it. Such a solution was described in [51], where artificial neural networks were used to model the mechanical properties of corrosion-resistant steels.

2.3. Dependent Variables in the Neural Model

In many regression tasks, there is more than one dependent variable. Neural modelling makes it possible to take many output variables into account using a single neural network. In such a case, each dependent variable usually corresponds to one neuron in the output layer of the artificial neural network.
Using several neurons in the output layer of the neural network allows the mutual relations between the dependent variables to be taken into account and, in many cases, is justified.
Reddy et al. [39] applied an artificial neural network with two neurons in the output layer to model the α and β phases in titanium alloys. Trzaska [27] used four neurons in the output layer to calculate the volume fractions of ferrite, pearlite, bainite and martensite in steel continuously cooled from the austenitizing temperature. Krajewski and Nowacki [29] proposed a network with two neurons in the output layer to calculate the tensile strength and yield strength of two-phase steels. Pawar and Date [52] used seven neurons in the output layer to calculate the mechanical properties and microstructure at selected points of a steel rotor shaft. Narayana et al. [40] presented a network with four neurons in the output layer to calculate the mechanical properties of corrosion-resistant steel. Chakraborty et al. [53] used six neurons in the hidden layer to calculate the phase transformation temperatures of supercooled austenite in steel. Smoljan et al. [54] applied an artificial neural network with two neurons in the output layer to calculate the volume fractions of microstructural constituents in low-alloyed steels.
However, there are often problems with training a neural network with many neurons in the output layer. These problems usually relate to the effective minimization of the error function for all outputs simultaneously. Therefore, a more common solution is to use several neural networks, each with one neuron in the output layer [27,30,31,34,46,55]. This ensures more specific and more effective training of the neural network.

2.4. Qualitative Variables in the Neural Model

Artificial neural networks enable the processing of both quantitative and qualitative variables. Qualitative, nominal, categorical or ordinal variables can have a finite number of values. These values are conventionally assigned a name (label). The use of qualitative variables in modelling requires appropriate coding of their values.
One possibility is to use one neuron and assign each value of a qualitative variable a numerical value. This solution works well for ordinal variables. One-of-N (one-hot) representation is a common encoding for nominal variables. The number of neurons that are used to code the variable is, in this case, equal to the number of values that the nominal variable can take (Figure 4). In neural modelling, in the area of steel and metal alloys research, nominal variables are used, for example, to describe the type of heat treatment, product form, surface quality, etc.
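One-of-N encoding of a nominal variable amounts to activating exactly one of the neurons assigned to its categories. A minimal sketch, using a hypothetical quenching-medium variable as the example:

```python
def one_hot(value, categories):
    """One-of-N (one-hot) encoding of a nominal variable: one neuron
    per category, with only the matching neuron activated."""
    return [1.0 if value == c else 0.0 for c in categories]

# Hypothetical nominal variable describing the cooling medium.
media = ["water", "oil", "air"]
code = one_hot("oil", media)   # three input neurons encode one variable
```

For an ordinal variable, by contrast, a single neuron with a numeric code per value suffices, as in the single-neuron examples cited below.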
In [46], a single neuron coded the cooling medium used during the heat treatment of corrosion-resistant steels. The authors adopted a value of 1 for water and a value of 0 for air. Dutta et al. [56] coded three variants of heat treatment with one neuron whose input signal values ranged from 1 to 3. Yetim et al. [57] encoded three different time values with three neurons when modelling the wear rate of the surface of nitrided corrosion-resistant steel. Trzaska [27] used four neurons to code information on the presence of ferrite, pearlite, bainite and martensite in the steel microstructure. Nominal variables describing the steel microstructure were used in the hardness model of steels cooled from the austenitizing temperature.
The number of neurons in the output layer of the artificial neural network solving classification problems depends on the number of classes and the type of response expected by the network designer. A popular way of coding the neural network response for two-class problems is to use one output neuron. The activation value of the output neuron varies from 0 to 1. In this case, a neural network response close to 0 corresponds to the first class, and activation close to 1 indicates the selection of the second class. This method of classification requires a limit value that separates the two classes. This value is determined experimentally so that the number of wrong classifications is as low as possible. The activation value of the output neuron also indicates the certainty of selecting a given class. This coding method was used in [27] in a neural model to predict the type of microstructural components present in the steel after cooling with a known cooling rate.
If the number of classes is greater than two, the artificial neural network response may be coded differently. One method is to use one dependent variable that can take one of n values, where n is the number of classes. With one-of-N encoding, the number of neurons in the output layer equals the number of classes. During network training, assigning the analyzed case to one of the classes requires activating one of the neurons while extinguishing the remaining neurons of the output layer. When testing and using the network, class membership is determined by the neuron with the highest output signal value. Dobrzański et al. [58] used five neurons in the output layer to automatically classify creep damage in steel. Another method of coding the network response for multi-class tasks is to take a number of dependent variables equal to the number of classes. In this case, each output variable can take two values (yes or no), which define class membership. The decision on membership is made independently for each variable; this may lead to the selection of several classes or to the case not being assigned to any class. Both methods allow the user to interpret the classification results of the artificial neural network based on the signal values of the output neurons. Both methods of coding dependent variables in classification problems were used in [59], where the authors presented a method of selecting a steel grade with the required course of the hardenability curve.
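The two multi-class decoding schemes described above can be sketched side by side. The class names and output activations below are hypothetical values chosen for illustration.

```python
def decode_one_of_n(outputs, classes):
    """One-of-N decoding: the class of the output neuron with the
    highest activation is selected (exactly one class per case)."""
    best = max(range(len(outputs)), key=lambda i: outputs[i])
    return classes[best]

def decode_independent(outputs, classes, threshold=0.5):
    """Independent yes/no decoding: every output above the threshold
    indicates class membership; zero or several classes may result."""
    return [c for o, c in zip(outputs, classes) if o > threshold]

classes = ["ferrite", "pearlite", "bainite", "martensite"]
single = decode_one_of_n([0.1, 0.7, 0.3, 0.2], classes)
multi = decode_independent([0.1, 0.7, 0.6, 0.2], classes)
```

The contrast is visible in the second call: with independent decoding, two activations exceed the threshold, so two classes are reported for the same case, an outcome that one-of-N decoding excludes by construction.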

2.5. Model Selection and Overfitting Problem

In practice, designing an artificial neural network involves training many neural networks with different topologies, using various methods under variable training conditions. Training must be accompanied by a critical evaluation of the results obtained. In a typical artificial neural network training process, after a certain number of training epochs the error for the test set begins to increase despite the decreasing error value for the training set. Further training of the neural network leads to an excessive adjustment of the model to the data from the training set. This adverse effect of overfitting the neural network occurs relatively frequently.
Overfitting of the neural network is favored by increasing the number of hidden layers, increasing the number of neurons in the hidden layer or layers, and excessively increasing the number of training epochs (Figure 5). By increasing the number of neurons in the hidden layer or layers and extending the training process, a training error close to 0 can in many cases be achieved for the training set. Such a procedure does not make sense: in this way, only a model of the training set can be obtained, not a model of the process.
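A common guard against training for too many epochs is early stopping: training is halted once the validation error stops improving for a number of consecutive epochs. The sketch below illustrates the stopping rule itself on a hypothetical error curve; the patience value of 3 is an assumption.

```python
def early_stopping_epoch(validation_errors, patience=3):
    """Return the epoch with the best validation error, stopping the
    scan once the error has failed to improve for `patience`
    consecutive epochs (a common guard against overfitting)."""
    best_err, best_epoch, waited = float("inf"), 0, 0
    for epoch, err in enumerate(validation_errors):
        if err < best_err:
            best_err, best_epoch, waited = err, epoch, 0
        else:
            waited += 1
            if waited >= patience:
                break
    return best_epoch

# Hypothetical validation error: it falls, then rises again as the
# network starts to fit the training set rather than the process.
errors = [0.9, 0.5, 0.3, 0.25, 0.27, 0.31, 0.4, 0.5]
stop_at = early_stopping_epoch(errors)
```

The weights saved at the returned epoch, rather than those from the final epoch, are then taken as the model.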
The problem of the proper selection of the number of neurons in the hidden layer of the MLP network can be presented as a compromise between the ability to approximate and generalize. In this case, the approximation is understood as the correctness of the operation for the data to which the neural network has had access during training. Generalization is understood as the proper operation of the neural network for data not presented during training. The problem of overfitting is discussed in many publications [13,15,19,26,28,30,33,46,60,61,62].
The final stage of modelling is usually selecting one neural network. The primary selection criterion used by many authors is the error value. Many publications present the dependence of the error obtained while training the artificial neural network on the number of neurons in the hidden layer. Such an analysis makes sense, assuming that the correct method was used and the training conditions are close to optimal. Most of the authors use the same training method.
Pawar and Date [52] proposed a different approach to this problem by comparing the error values obtained with different training methods for two sizes of the hidden layer. Razavi et al. [63], Kocaman et al. [48] and Murugesan et al. [37] included different activation functions in their analysis. Wang et al. [46] and Reddy et al. [64] presented the error values independently for the training and test set. Narayana et al. [40] presented the results for a different number of neurons in one and two hidden layers.
The result of this analysis is the basis for the selection of the number of neurons in the hidden layer. Often, the criterion of the lowest error value is adopted arbitrarily, even though similar error values occur for a less complex neural network. In the case of similar error values, it is better to choose a less complicated model. In such a case, the risk of overfitting the model to the training data is lower.
The test set plays an essential role in the assessment of the neural model. Similarly to the training set, the test set should be sufficiently numerous, and the values of the variables should evenly cover the domain of the model. The statistical values obtained for the data from the test set, such as the mean absolute error value, the value of the correlation coefficient and/or other, should be similar to the values obtained for the training set. The comparison of the error value for the training and test sets gives essential information about the quality of the model and is presented in many publications [27,34,46,55]. Such an assessment is facilitated by the scatter plots presented in many publications, where the measured and calculated values of the dependent variable are compared [28,32,33,36,41,46,48,62,65,66,67,68,69].
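The statistics compared between the training and test sets can be computed with a few lines of code. A minimal sketch, with hypothetical measured and predicted values:

```python
import numpy as np

def regression_metrics(y_true, y_pred):
    """Mean absolute error and Pearson correlation coefficient, the
    statistics typically reported for both the training and test sets."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    mae = float(np.mean(np.abs(y_true - y_pred)))
    r = float(np.corrcoef(y_true, y_pred)[0, 1])
    return mae, r

# Hypothetical measured vs. predicted values (e.g. hardness in HV).
mae, r = regression_metrics([100, 200, 300, 400], [110, 190, 310, 390])
```

A large gap between the training-set and test-set values of these metrics is the typical symptom of overfitting discussed in the previous subsection; similar values on both sets support the quality of the model.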
The results of neural modelling are compared with the results obtained using other modelling methods. Various methods of machine learning, including artificial neural networks, were used, among others, by Geng et al. [70] for modelling CCT diagrams of tool steels and by Sourmail and Garcia-Mateo [71] and Rahaman et al. [28] for modelling the start temperature of the martensitic phase transformation in steel. Sandhya et al. [44] used artificial neural networks and other regression methods, including Random Forest, Decision Tree, linear regression, and K-Nearest Neighbors, to model the tensile strength of corrosion-resistant steels.

3. Simulation Using Artificial Neural Network

The relationship between the independent and dependent variables, described by the neural model, is often used in simulations. The simulation results are presented in the form of graphs describing the influence of one or two independent variables on the dependent variable. When performing calculations, it is necessary to establish constant values of independent variables that are not shown in the diagram.
Examples of such simulations, including the analysis of the obtained results, can be found in many publications in the field of steels and metal alloys [29,32,40,42,46,55,60,61,64,65,68,69,72,73].
Krajewski and Nowacki [29] presented the effect of selected elements’ concentration on the tensile strength of two-phase steels. Narayana et al. [40] showed examples of the simulation of mass concentration of nickel and chromium on the mechanical properties of austenitic corrosion-resistant steel. Sun et al. [65] presented a simulation of the influence of thermomechanical treatment conditions on the mechanical properties of a titanium alloy. Reddy et al. [64] showed examples of simulations of the influence of alloying elements on the mechanical properties of medium carbon steels. Trzaska [60] presented examples of simulations of the influence of selected elements on the transformation temperatures of supercooled austenite, hardness and the volume fraction of structural constituents in steel, cooled from the austenitizing temperature. It is also essential to consider the model error when interpreting the results.
Changes in the values presented in the graph that are small compared with the error value may lead to erroneous conclusions about the influence of independent variables on the dependent variable. As already mentioned, when training artificial neural networks there is a risk of fitting the model too accurately to the data from the training set. Overfitting of neural networks is a phenomenon that occurs relatively frequently. In this case, the graphs presenting the simulation results may contain values that result from overfitting rather than from the actual impact of the analyzed independent variables.
When discussing the simulation results and presenting conclusions, one should also consider the quality of the data used for training and testing the network, including the accuracy of the measurement methods and simplifications made while selecting the features describing the model.

4. Deep Neural Networks

Image analysis is an essential element of research in the field of steels and metal alloys. An image is represented by a finite number of pixels, each assigned a value related to its color. Using fully connected neural networks such as the MLP for image analysis is theoretically possible; in practice, however, encoding the color of each pixel with one neuron of the input layer forces a vast number of connections in the network. As a result, networks in which all neurons are connected are not suitable for modelling such tasks efficiently. The solution to this problem is the use of deep neural networks. The most popular deep neural networks used in image analysis are Convolutional Neural Networks (CNN). The topology and tasks of the individual layers of convolutional neural networks are described, among others, in [74,75,76].
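The scale of the problem can be illustrated with a simple weight count comparing a fully connected first layer to a small convolutional layer. The image and layer sizes below are illustrative assumptions, not values from any cited network.

```python
# Back-of-the-envelope comparison (no deep learning library needed):
# weights of a dense first layer vs. a small convolutional layer
# for a 224x224 RGB image.
def dense_layer_weights(n_inputs: int, n_neurons: int) -> int:
    """Fully connected layer: every input feeds every neuron, plus biases."""
    return (n_inputs + 1) * n_neurons

def conv_layer_weights(kernel: int, in_channels: int, filters: int) -> int:
    """Convolutional layer: the same small kernels are shared
    across all positions of the image."""
    return (kernel * kernel * in_channels + 1) * filters

pixels = 224 * 224 * 3                        # input neurons for an MLP
mlp_first = dense_layer_weights(pixels, 128)  # 19,267,712 weights
cnn_first = conv_layer_weights(3, 3, 64)      # 1792 weights
print(mlp_first, cnn_first)
```

The dense layer needs millions of weights before any hidden processing has taken place, while weight sharing keeps the convolutional layer small regardless of image size.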
CNN networks perform two functions simultaneously: feature extraction and final classification or regression. Training deep neural networks requires high computing power and large sets of patterns and is time-consuming. Large databases, such as ImageNet, are available for research on visual object recognition. According to [76], training a CNN on ImageNet using multiple GPUs may take several weeks.
An essential feature of the CNN is that it learns general image features in the first layers and increasingly specific features in the following layers. As a result, the last convolutional layer generates feature maps of relatively small size, which represent the characteristics of the processed set. The final data-processing element of a CNN is a set of fully connected layers that act as a classifier or regression system. This role can be fulfilled by, e.g., an MLP or an SVM (Support Vector Machine) (Figure 6).
CNN networks can be pre-trained on any chosen data set and then further trained on the user's data; the user can also adapt the classifying layers to the needs of the model being created. This approach, called transfer learning, is often used in research on steels and metal alloys [75,77,78,79,80]. Numerous pre-trained models, including AlexNet [81], GoogLeNet, VGGNet, and ResNet, can be used with this technique.
Lenz et al. [77] applied transfer learning to the automatic evaluation of the adhesion of coatings applied to a steel substrate by the PVD method. The classifier assigned one of six adhesion classes based on the image of the indentation after a standard Rockwell hardness test, with its visible network of cracks and coating delamination. Wei et al. [79] presented a model for the automatic detection of surface defects of steel bars. Lee et al. [80] proposed a CNN model for the real-time detection of surface defects of steel products; they divided the surface defects into six classes and compared the calculation results with models developed by logistic regression and SVM. Further results of modelling aimed at detecting surface defects of products using deep neural networks can be found, among others, in References [82,83,84,85,86,87].
Another popular application of deep neural networks is the classification of the microstructure of steels and metal alloys [75,78,88]. Mulewicz et al. [78] developed a method for classifying steel microstructures; they included eight classes in the model and used light-microscope images to train the CNN. The classification of the microstructure of low-carbon steels using CNN was also the subject of [75].

5. Artificial Neural Networks in Hybrid Systems

A visible trend in modelling, including in the study of steels and metal alloys, is the use of hybrid methods. Combining different methods in one model allows one to cover a wider problem area and obtain a synergistic effect by exploiting the advantages of each method. Artificial neural networks are often combined with other modelling methods, including mathematical modelling and other methods of computational and artificial intelligence [89].
One example is the combination of artificial neural networks and genetic algorithms, which enables the solution of optimization tasks. The artificial neural network is used in this case to calculate the value of the fitness function of individual chromosomes. Chromosomes are encoded forms of the values of the decision variables and form a set of potential solutions. With a proper definition of the task conditions, the fitness value of a chromosome corresponds to the value of the optimized objective function. This allows the identification of values of the independent variables that meet specific criteria. Examples of such solutions can be found, among others, in the works [26,34,38,56,63,90,91,92,93].
Reddy et al. [38] applied artificial neural networks and a genetic algorithm to optimize the chemical composition and heat treatment conditions of medium carbon steels with the required mechanical properties: yield strength, ultimate tensile strength, elongation, reduction in area and impact strength. Dutta et al. [56] proposed a similar methodology for designing the chemical composition of dual-phase steels. Pattanayak et al. [94] designed the chemical composition and heat treatment conditions of micro-alloyed steels intended for pipe production. Sitek [34] presented a method supporting the chemical composition design of high-speed steels with the required hardness and fracture toughness. Sinha et al. [90] focused their research on shape memory Ni-Ti alloys; in this case, the optimization aimed to improve the shape recovery behavior while maintaining high mechanical properties. Trzaska [26] presented a method of multi-criteria optimization of the chemical composition of steel with the required hardness and the required temperatures of the phase transformations occurring during continuous cooling from the austenitizing temperature. Razavi et al. [63] described a method of optimizing the heat treatment conditions of a corrosion-resistant steel, focusing on maximizing hardness. The authors of these publications point out that the proposed methods limit the number of experiments needed when designing new steel grades with the required properties.
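The ANN+GA scheme described above can be sketched as a minimal genetic algorithm in which the fitness of each chromosome would be supplied by a trained neural model; here a simple analytic surrogate stands in for that model, and all parameter values are illustrative assumptions.

```python
# Minimal genetic-algorithm sketch: chromosomes encode 3 decision
# variables in [0, 1]; a stand-in function plays the role of the
# trained ANN that evaluates the fitness of each chromosome.
import random

random.seed(0)

def surrogate_model(x):
    """Stand-in for a trained ANN predicting a property to maximize."""
    return -((x[0] - 0.3) ** 2 + (x[1] - 0.7) ** 2 + (x[2] - 0.5) ** 2)

def evolve(fitness, n_vars=3, pop_size=30, generations=60):
    pop = [[random.random() for _ in range(n_vars)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)      # elitist selection
        survivors = pop[: pop_size // 2]
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, n_vars)    # one-point crossover
            child = a[:cut] + b[cut:]
            i = random.randrange(n_vars)         # small Gaussian mutation
            child[i] = min(1.0, max(0.0, child[i] + random.gauss(0, 0.05)))
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

best = evolve(surrogate_model)   # decision variables maximizing fitness
```

In a real design task, each chromosome would decode to a chemical composition or set of processing conditions, and the fitness call would be a forward pass through the trained network.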
Another noteworthy example is the combination of the finite element method and artificial neural networks in one model. Shahani et al. [95] used the results of finite element modelling of the hot rolling of an aluminum alloy to train a neural network; the purpose of this solution was to shorten the calculation time of the process parameters. Guo et al. [96] applied a similar approach to magnesium alloys, also focusing on reducing the computation time, and indicated the use of a neural model in the online control of the rolling process of magnesium alloy sheets. Powar and Date [52] proposed a method of predicting the mechanical properties and microstructure at critical locations of a steel rotor shaft; in their method, they used artificial neural networks and the ANSYS FLUENT software. Cai et al. [47] developed a neural model for calculating the grain size, stress and volume fraction of the recrystallized phase in a heat-resistant alloy steel. The neural model was then used in finite element modelling of the thermomechanical treatment process.
Artificial neural networks are also used to calculate the parameters of mathematical models. Sitek et al. [97] proposed a new method of estimating the parameters of the mathematical model used to calculate the steel hardenability curve. In the model developed by Tartaglia et al. [98], equations developed by the multiple regression method were used for this purpose; replacing those equations with artificial neural networks reduced the calculation errors of the Jominy curve. Artificial neural networks also perform well in classification problems. A model of steel CCT diagrams is presented in [99]: equations developed by the multiple regression method were used to calculate the phase transformation temperatures, while artificial neural networks played the role of classifiers answering the question of whether a given phase transformation takes place at a given cooling rate.
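The classifier role described above can be illustrated with a toy example: a model that answers whether a transformation occurs at a given cooling rate. The data, the 40 C/s threshold and the use of logistic regression here are illustrative assumptions, not details of the cited model [99].

```python
# Toy classifier: does a transformation occur at a given cooling rate?
# Synthetic data built around an assumed threshold of ~40 C/s.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
rate = rng.uniform(0.1, 100.0, size=(300, 1))    # cooling rate, C/s
occurs = (rate.ravel() < 40.0).astype(int)       # 1 = transformation occurs

# Log scale handles the wide range of realistic cooling rates.
clf = LogisticRegression().fit(np.log(rate), occurs)

query = np.log(np.array([[5.0], [80.0]]))
pred = clf.predict(query)   # transformation predicted at 5 C/s, not at 80 C/s
```

In the cited approach, several such binary decisions (one per transformation type) combined with regression models for the transformation temperatures would assemble the full CCT diagram.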

6. Summary

Artificial neural networks are an effective and frequently used modelling method in research on steels and metal alloys, as evidenced by the numerous publications presenting research results. Examples of such publications are presented in this paper.
However, it should be noted that, despite their advantages, the use of artificial neural networks is not always justified. In many cases, a model of similar quality can be obtained by, for example, multiple regression or logistic regression, with the additional advantage that the influence of the individual independent variables is represented explicitly by the regression coefficients.
Computer programs simulating the operation of artificial neural networks are equipped with tools supporting the design of neural models. A user-friendly interface, wizards and help systems allow artificial neural networks to be designed through an intuitive selection of the available options. Automatic neural network wizards implement universal algorithms for selecting the basic parameters that define the neural network and the learning process, concerning, among others, the number of neurons in the hidden layer, the number of these layers, the assessment of significance and selection of independent variables, the training method, the error function and the activation function. These universal solutions are not optimal, but they make the development of a neural model simple and relatively quick. This is one of the reasons why artificial neural networks are willingly used to solve regression and classification tasks, including in the field of steels and metal alloys.
Unfortunately, neural modelling is often not accompanied by the proper preparation of the empirical data set: the necessary statistical analysis, including the analysis of outliers and the study of collinearity of the independent variables. Another problem in the research area discussed in this paper is an insufficient number of training patterns, which is often combined with too complex a network structure. The literature contains many examples of neural networks trained on sets containing significantly fewer learning patterns than the number of weights of the network.
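The mismatch between network size and data set size discussed above is easy to quantify by counting the adjustable weights of an MLP; the layer sizes and pattern count below are illustrative assumptions.

```python
# Rule-of-thumb check: compare the number of adjustable weights in an
# MLP with the number of available training patterns.
def mlp_weight_count(layer_sizes):
    """Total weights (including biases) of a fully connected network,
    e.g. layer_sizes = [6, 10, 1] for 6 inputs, 10 hidden neurons,
    1 output."""
    return sum((n_in + 1) * n_out
               for n_in, n_out in zip(layer_sizes, layer_sizes[1:]))

weights = mlp_weight_count([6, 10, 1])   # (6+1)*10 + (10+1)*1 = 81
patterns = 50                            # a typical small data set
print(weights, patterns)                 # 81 weights vs. 50 patterns
```

Even this modest network has more free parameters than a 50-pattern data set can reasonably constrain, which is exactly the situation the paragraph above warns against.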
Neural modelling is always accompanied by the risk of overfitting, i.e., fitting the model too closely to the data from the training set. In this case, the neural network loses its ability to generalize the knowledge gained during learning and, as a result, becomes useless. In many cases, the range of values of the independent variables within which the neural model can be used is not defined; extrapolation outside the range of the training values usually leads to significant prediction errors.
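A simple safeguard against the extrapolation problem noted above is to record the training range of each independent variable and to flag queries that fall outside it. The helper class and data below are an illustrative sketch, not part of any cited model.

```python
# Guard against extrapolation: remember the per-variable training
# range and reject (or at least flag) out-of-range queries.
import numpy as np

class RangeChecker:
    def fit(self, X):
        self.low = X.min(axis=0)
        self.high = X.max(axis=0)
        return self

    def in_range(self, x):
        """True if every component lies within the training range."""
        return bool(np.all((x >= self.low) & (x <= self.high)))

# Hypothetical training inputs: two independent variables.
X_train = np.array([[0.2, 10.0], [0.8, 35.0], [0.5, 20.0]])
checker = RangeChecker().fit(X_train)

ok = checker.in_range(np.array([0.4, 15.0]))    # inside the range
bad = checker.in_range(np.array([1.2, 15.0]))   # outside the range
```

Publishing these per-variable ranges alongside the model, as discussed below in the context of model reporting, lets other users apply the same check.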
Data preparation, including the often-necessary transformations, the appropriate selection of the type and topology of the network, the definition of functions such as the error and activation functions, training, the proper assessment of model quality and the interpretation of the results all require knowledge and experience. To a large extent, the success of neural modelling depends on knowledge of the analyzed process, which allows the necessary simplifications to be adopted and the obtained results to be interpreted correctly.
An important issue is the publication of information about the neural model that allows verification of the calculations and application of the model. The results presented in papers usually include only general information about the network structure, the mathematical model of the neuron and the quality of the model, presented in the form of various statistics. Some publications provide the data used for training and testing the network, or a link to a website with detailed information about the neural model and the data. In the authors' opinion, this is an excellent trend that contributes to the popularization and development of this modelling method.
In this paper, an attempt was made to present practical tips on the design of artificial neural networks, resulting from many years of experience, while being aware that, given the scope of this study, it is impossible to describe the issue in full detail.

Author Contributions

This article was jointly written by all authors (W.S. and J.T.) who also served as Guest Editors for the Special Issue “Application of Artificial Neural Networks in Studies of Steels and Alloys”. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Rajan, K. Materials informatics. Mater. Today 2005, 8, 38–45. [Google Scholar] [CrossRef]
  2. Sha, W.; Guo, Y.; Yuan, Q.; Tang, S.; Zhang, X.; Lu, S.; Guo, X.; Cao, Y.C.; Cheng, S. Artificial Intelligence to Power the Future of Materials Science and Engineering. Adv. Intell. Syst. 2020, 2, 1900143. [Google Scholar] [CrossRef] [Green Version]
  3. Bhadeshia, H.K.D.H. Mathematical Models in Materials Science. Mater. Sci. Technol. 2008, 24, 128–136. [Google Scholar] [CrossRef]
  4. Liu, Y.; Zhao, T.; Ju, W.; Shi, S. Materials discovery and design using machine learning. J. Mater. 2017, 3, 159–177. [Google Scholar] [CrossRef]
  5. Mueller, T.; Kusne, A.G.; Ramprasad, R. Machine Learning in Materials Science: Recent Progress and Emerging Applications. Rev. Comp. Chem. 2016, 29, 186–273. [Google Scholar] [CrossRef]
  6. Wei, J.; Chu, X.; Sun, X.Y.; Xu, K.; Deng, H.X.; Chen, J.; Wei, Z.; Lei, M. Machine learning in materials science. InfoMat 2019, 1, 338–358. [Google Scholar] [CrossRef]
  7. Chakraborti, N. Genetic algorithms in materials design and processing. Int. Mater. Rev. 2004, 49, 246–260. [Google Scholar] [CrossRef]
  8. Datta, S. Materials Design Using Computational Intelligence Techniques; CRC Press: Boca Raton, FL, USA, 2016. [Google Scholar]
  9. Datta, S.; Chattopadhyay, P.P. Soft computing techniques in advancement of structural metals. Int. Mater. Rev. 2013, 58, 475–504. [Google Scholar] [CrossRef]
  10. Sitek, W.; Dobrzański, L.A. Application of genetic methods in materials’ design. J. Mater. Process. Technol. 2005, 164–165, 1607–1611. [Google Scholar] [CrossRef]
  11. Anderson, J.A. An Introduction to Neural Networks; MIT Press: Cambridge, MA, USA, 1995. [Google Scholar]
  12. Hassoun, M.H. Fundamentals of Artificial Neural Networks; MIT Press: Cambridge, MA, USA, 1995. [Google Scholar]
  13. Bishop, C.M. Neural Networks for Pattern Recognition, 1st ed.; Oxford University Press: Oxford, UK, 1995. [Google Scholar]
  14. Bhadeshia, H.K.D.H. Neural Networks and Information in Materials Science. Stat. Anal. Data Min. 2009, 1, 296–304. [Google Scholar] [CrossRef]
  15. Bhadeshia, H.K.D.H. Neural Networks in Materials Science. ISIJ Int. 1999, 39, 966–979. [Google Scholar] [CrossRef]
  16. Sha, W.; Edwards, K.L. The use of artificial neural networks in materials science based research. Mater. Des. 2007, 28, 1747–1752. [Google Scholar] [CrossRef]
  17. Bhadeshia, H.K.D.H.; Dimitriu, R.C.; Forsik, S.; Pak, J.H.; Ryu, J.H. Performance of neural networks in materials science. Mater. Sci. Technol. 2009, 25, 504–510. [Google Scholar] [CrossRef] [Green Version]
  18. Mukherjee, M.; Singh, S.B. Artificial Neural Network: Some Applications in Physical Metallurgy of Steels. Mater. Manuf. Process. 2009, 24, 198–208. [Google Scholar] [CrossRef]
  19. Dobrzański, L.A.; Trzaska, J.; Dobrzańska-Danikiewicz, A.D. Use of Neural Networks and Artificial Intelligence Tools for Modeling, Characterization, and Forecasting in Material Engineering, Comprehensive Materials Processing. In Materials Modelling and Characterization; Hashmi, S., Ed.; Elsevier Science: Amsterdam, The Netherlands, 2014; Volume 2, pp. 161–198. [Google Scholar]
  20. Agrawal, A.; Choudhary, A. Deep materials informatics: Applications of deep learning in materials science. MRS Commun. 2019, 9, 779–792. [Google Scholar] [CrossRef] [Green Version]
  21. Bock, F.E.; Aydin, R.C.; Cyron, C.J.; Huber, N.; Kalidindi, S.R.; Klusemann, B. A Review of the Application of Machine Learning and Data Mining Approaches in Continuum Materials Mechanics. Front. Mater. 2019, 6, 110. [Google Scholar] [CrossRef] [Green Version]
  22. Hong, Y.; Hou, B.; Jiang, H.; Zhang, J. Machine learning and artificial neural network accelerated computational discoveries in materials science. WIREs Comput. Mol. Sci. 2020, 10, e1450. [Google Scholar] [CrossRef]
  23. Schmidt, J.; Marques, M.R.G.; Botti, S.; Marques, M.A.L. Recent advances and applications of machine learning in solid-state materials science. Npj Comput. Mater. 2019, 5, 83. [Google Scholar] [CrossRef]
  24. Kalidindi, S.R.; Graef, M.D. Materials data science: Current status and future outlook. Ann. Rev. Mater. Res. 2015, 45, 171–193. [Google Scholar] [CrossRef]
  25. May, R.; Dandy, G.; Maier, H. Review of Input Variable Selection Methods for Artificial Neural Networks. In Artificial Neural Networks—Methodological Advances and Biomedical Applications; Suzuki, K., Ed.; InTech: Rijeka, Croatia, 2011; pp. 19–44. [Google Scholar]
  26. Trzaska, J. Prediction Methodology for the Anisothermal Phase Transformation Curves of the Structural and Engineering Steels; Silesian University of Technology Press: Gliwice, Poland, 2017. (In Polish) [Google Scholar]
  27. Trzaska, J. A new neural networks model for calculating the continuous cooling transformation diagrams. Arch. Metall. Mater. 2018, 63, 2009–2015. [Google Scholar] [CrossRef]
  28. Rahaman, M.; Mu, W.; Odqvist, J.; Hedstrom, P. Machine Learning to Predict the Martensite Start Temperature in Steels. Metall. Mater. Trans. A 2019, 50, 2081–2091. [Google Scholar] [CrossRef] [Green Version]
  29. Krajewski, S.; Nowacki, J. Dual-phase steels microstructure and properties consideration based on artificial intelligence techniques. Arch. Civ. Mech. Eng. 2014, 14, 278–286. [Google Scholar] [CrossRef]
  30. Merayo, D.; Rodríguez-Prieto, A.; Camacho, A.M. Prediction of Mechanical Properties by Artificial Neural Networks to Characterize the Plastic Behavior of Aluminum Alloys. Materials 2020, 13, 5227. [Google Scholar] [CrossRef] [PubMed]
  31. Kemp, R.; Cottrell, G.A.; Bhadeshia, H.K.D.H.; Odette, G.R.; Yamamoto, T.; Kishimoto, H. Neural-network analysis of irradiation hardening in low-activation steels. J. Nucl. Mater. 2006, 348, 311–328. [Google Scholar] [CrossRef] [Green Version]
  32. Yescas, M.A. Prediction of the Vickers hardness in austempered ductile irons using neural networks. Int. J. Cast Metals Res. 2003, 15, 513–521. [Google Scholar] [CrossRef]
  33. Sourmail, T.; Bhadeshia, H.K.D.H.; MacKay, D.J.C. Neural network model of creep strength of austenitic stainless steels. Mater. Sci. Technol. 2002, 18, 655–663. [Google Scholar] [CrossRef]
  34. Sitek, W. Methodology of High-Speed Steels Design Using the Artificial Intelligence Tools. J. Achiev. Mater. Manuf. Eng. 2010, 39, 115–160. [Google Scholar]
  35. Singh, K.; Rajput, S.K.; Mehta, Y. Modeling of the hot deformation behavior of a high phosphorus steel using artificial neural networks. Mater. Discov. 2016, 6, 1–8. [Google Scholar] [CrossRef]
  36. Kumar, S.; Karmakar, A.; Nath, S.K. Construction of hot deformation processing maps for 9Cr-1Mo steel through conventional and ANN approach. Mater. Today Commun. 2021, 26, 101903. [Google Scholar] [CrossRef]
  37. Murugesan, M.; Sajjad, M.; Jung, D.W. Hybrid Machine Learning Optimization Approach to Predict Hot Deformation Behavior of Medium Carbon Steel Material. Metals 2019, 9, 1315. [Google Scholar] [CrossRef] [Green Version]
  38. Reddy, N.S.; Krishnaiah, J.; Young, H.B.; Lee, J.S. Design of medium carbon steels by computational intelligence techniques. Comp. Mater. Sci. 2015, 101, 120–126. [Google Scholar] [CrossRef]
  39. Reddy, N.S.; Panigrahi, B.B.; Ho, C.M.; Kim, J.H.; Lee, C.S. Artificial neural network modeling on the relative importance of alloying elements and heat treatment temperature to the stability of α and β phase in titanium alloys. Comp. Mater. Sci. 2015, 107, 175–183. [Google Scholar] [CrossRef]
  40. Narayana, P.L.; Lee, S.W.; Park, C.H.; Yeom, J.T.; Hong, J.K.; Maurya, A.K.; Reddy, N.S. Modeling high-temperature mechanical properties of austenitic stainless steels by neural networks. Comp. Mater. Sci. 2020, 179, 109617. [Google Scholar] [CrossRef]
  41. Dehghani, K.; Nekahi, A. Artificial neural network to predict the effect of thermomechanical treatments on bake hardenability of low carbon steels. Mater. Des. 2010, 31, 2224–2229. [Google Scholar] [CrossRef]
  42. Liu, Y.; Zhu, J.; Cao, Y. Modeling effects of alloying elements and heat treatment parameters on mechanical properties of hot die steel with back-propagation artificial neural network. J. Iron Steel Res. Int. 2017, 24, 1254–1260. [Google Scholar] [CrossRef]
  43. Khalaj, G.; Nazariy, A.; Pouraliakbar, H. Prediction of martensite fraction of microalloyed steel by artificial neural networks. Neural Netw. World 2013, 2, 117–130. [Google Scholar] [CrossRef] [Green Version]
  44. Sandhya, N.; Sowmya, V.; Bandaru, C.R.; Raghu Babu, G. Prediction of Mechanical Properties of Steel using Data Science Techniques. Int. J. Recent Technol. Eng. 2019, 8, 235–241. [Google Scholar] [CrossRef]
  45. Olden, J.D.; Joy, M.K.; Death, R.G. An accurate comparison of methods for quantifying variable importance in artificial neural networks sing simulated data. Ecol. Model. 2004, 178, 389–397. [Google Scholar] [CrossRef]
  46. Wang, Y.; Wu, X.; Li, X.; Xie, Z.; Liu, R.; Liu, W.; Zhang, Y.; Xu, Y.; Liu, C. Prediction and Analysis of Tensile Properties of Austenitic Stainless Steel Using Artificial Neural Network. Metals 2020, 10, 234. [Google Scholar] [CrossRef] [Green Version]
  47. Cai, Z.; Ji, H.; Pei, W.; Tang, X.; Xin, L.; Lu, Y.; Li, W. An Investigation into the Dynamic Recrystallization (DRX) Behavior and Processing Map of 33Cr23Ni8Mn3N Based on an Artificial Neural Network (ANN). Materials 2020, 13, 1282. [Google Scholar] [CrossRef] [Green Version]
  48. Kocaman, E.; Sirin, S.; Dispinar, D. Artificial Neural Network Modeling of Grain Refinement Performance in AlSi10Mg Alloy. Inter. J. Metalcast. 2021, 15, 338–348. [Google Scholar] [CrossRef]
  49. Holland, J.H. Adaptation in Natural and Artificial Systems: An Introductory Analysis with Applications to Biology, Control, and Artificial Intelligence; U Michigan Press: Ann Arbor, MI, USA, 1975. [Google Scholar]
  50. Goldberg, D.E. Genetic Algorithms in Search, Optimization and Machine Learning, 1st ed.; Addison-Wesley Longman Publishing Co.: Boston, MA, USA, 1989. [Google Scholar]
  51. Honysz, R. Optimization of ferrite stainless steel mechanical properties prediction with artificial intelligence algorithms. Arch. Metall. Mater. 2020, 65, 749–753. [Google Scholar] [CrossRef]
  52. Powar, A.; Date, P. Modeling of microstructure and mechanical properties of heat treated components by using Artificial Neural Network. Mat. Sci. Eng. A-Struct. 2015, 628, 89–97. [Google Scholar] [CrossRef]
  53. Chakraborty, S.; Chattopadhyay, P.P.; Ghosh, S.K.; Datta, S. Incorporation of prior knowledge in neural network model for continuous cooling of steel using genetic algorithm. Appl. Soft. Comput. 2017, 58, 297–306. [Google Scholar] [CrossRef]
  54. Smoljan, B.; Smokvina Hanza, S.; Tomašić, N.; Iljkić, D. Computer simulation of microstructure transformation in heat treatment processes. J. Achiev. Mater. Manuf. Eng. 2007, 24, 275–282. [Google Scholar]
  55. Xia, X.; Nie, J.F.; Davies, C.H.J.; Tang, W.N.; Xu, S.W.; Birbilis, N. An artificial neural network for predicting corrosion rate and hardness of magnesium alloys. Mater. Design 2016, 90, 1034–1043. [Google Scholar] [CrossRef]
  56. Dutta, T.; Dey, S.; Datta, S.; Das, D. Designing dual-phase steels with improved performance using ANN and GA in tandem. Comp. Mater. Sci. 2019, 157, 6–16. [Google Scholar] [CrossRef]
  57. Yetim, A.F.; Codur, M.Y.; Yazici, M. Using of artificial neural network for the prediction of tribological properties of plasma nitride 316L stainless steel. Mater. Lett. 2015, 158, 170–173. [Google Scholar] [CrossRef]
  58. Dobrzański, J.; Sroka, M.; Zieliński, A. Methodology of classification of internal damage the steels during creep service. J. Achiev. Mater. Manuf. Eng. 2006, 18, 263–266. [Google Scholar]
  59. Trzaska, J.; Sitek, W.; Dobrzański, L.A. Application of neural networks for selection of steel grade with required hardenability. Int. J. Comput. Mater. Sci. Surf. Eng. 2007, 1, 336–382. [Google Scholar] [CrossRef]
  60. Trzaska, J. Examples of simulation of the alloying elements effect on austenite transformations during continuous cooling. Arch. Metall. Mater. 2021, 66, 331–337. [Google Scholar] [CrossRef]
  61. Sidhu, G.; Bhole, S.D.; Chen, D.L.; Essadiqi, E. Determination of volume fraction of bainite in low carbon steels using artificial neural networks. Comp. Mater. Sci. 2011, 50, 337–3384. [Google Scholar] [CrossRef]
  62. Garcia-Mateo, C.; Capdevila, C.; Garcia Caballero, F.; Garcia de Andres, C. Artificial neural network modeling for the prediction of critical transformation temperatures in steels. J. Mater. Sci. 2007, 42, 5391–5397. [Google Scholar] [CrossRef] [Green Version]
  63. Razavi, A.R.; Ashrafizadeh, F.; Fooladi, S. Prediction of age hardening parameters for 17-4PH stainless steel by artificial neural network and genetic algorithm. Mat. Sci. Eng. A-Struct. 2016, 675, 147–152. [Google Scholar] [CrossRef]
  64. Reddy, N.S.; Krishnaiah, J.; Hong, S.G.; Lee, J.S. Modeling medium carbon steels by using artificial neural networks. Mat. Sci. Eng. A-Struct. 2009, 508, 93–105. [Google Scholar] [CrossRef]
  65. Sun, Y.; Zeng, W.; Han, Y.; Ma, X.; Zhao, Y.; Guo, P.; Wang, G.; Dargusch, M.S. Determination of the influence of processing parameters on the mechanical properties of the Ti–6Al–4V alloy using an artificial neural network. Comp. Mater. Sci. 2012, 60, 239–244. [Google Scholar] [CrossRef]
  66. Bhattacharyya, T.; Singh, S.B.; Sikdar, S.; Bhattacharyya, S.; Bleck, W.; Bhattacharjee, D. Microstructural prediction through artificial neural network (ANN) for development of transformation induced plasticity (TRIP) aided steel. Mat. Sci. Eng. A-Struct. 2013, 565, 148–157. [Google Scholar] [CrossRef]
  67. Lin, Y.C.; Zhang, J.; Zhong, J. Application of neural networks to predict the elevated temperature flow behavior of a low alloy steel. Comp. Mater. Sci. 2008, 43, 752–758. [Google Scholar] [CrossRef]
  68. Lin, Y.C.; Liu, G.; Chen, M.S.; Zhong, J. Prediction of static recrystallization in a multi-pass hot deformed low-alloy steel using artificial neural network. J. Mater. Process. Technol. 2009, 209, 4611–4616. [Google Scholar] [CrossRef]
  69. Monajati, H.; Asefi, D.; Parsapour, A.; Abbasi, S. Analysis of the effects of processing parameters on mechanical properties and formability of cold rolled low carbon steel sheets using neural networks. Comp. Mater. Sci. 2010, 49, 876–881. [Google Scholar] [CrossRef]
  70. Geng, X.; Wang, H.; Xue, W.; Xiang, S.; Huang, H.; Meng, L.; Ma, G. Modeling of CCT diagrams for tool steels using different machine learning techniques. Comp. Mater. Sci. 2020, 171, 109235. [Google Scholar] [CrossRef]
  71. Sourmail, T.; Garcia-Mateo, C. Critical assessment of models for predicting the Ms temperature of steels. Comp. Mater. Sci. 2005, 34, 323–334. [Google Scholar] [CrossRef] [Green Version]
  72. Sitek, W.; Trzaska, J. Numerical Simulation of the Alloying Elements Effect on Steels’ Properties. J. Achiev. Mater. Manuf. Eng. 2011, 45, 71–78. [Google Scholar]
  73. Sidhu, G.; Bhole, S.D.; Chen, D.L.; Essadiqi, E. Development and experimental validation of a neural network model for prediction and analysis of the strength of bainitic steels. Mater. Des. 2012, 41, 99–107. [Google Scholar] [CrossRef]
  74. Goodfellow, I.; Bengio, Y.; Courville, A. Deep Learning; MIT Press: Cambridge, MA, USA, 2016. [Google Scholar]
  75. Azimi, S.M.; Britz, D.; Engstler, M.; Fritz, M.; Mücklich, F. Advanced Steel Microstructural Classification by Deep Learning Methods. Sci. Rep. 2018, 8, 2128. [Google Scholar] [CrossRef]
  76. Patterson, J.; Gibson, A. Deep Learning. A Practitioner’s Approach, 1st ed.; O’Reilly Media, Inc.: Sebastopol, MA, USA, 2017. [Google Scholar]
  77. Lenz, B.; Hasselbruch, H.; Mehner, A. Automated evaluation of Rockwell adhesion tests for PVD coatings using convolutional neural networks. Surf. Coat. Technol. 2020, 385, 125365. [Google Scholar] [CrossRef]
  78. Mulewicz, B.; Korpala, G.; Kusiak, J.; Prahl, U. Autonomous Interpretation of the Microstructure of Steels and Special Alloys. Mater. Sci. Forum 2019, 949, 24–31. [Google Scholar] [CrossRef] [Green Version]
  79. Wei, R.; Song, Y.; Zhang, Y. Enhanced Faster Region Convolutional Neural Networks for Steel Surface Defect Detection. ISIJ Int. 2020, 60, 539–545. [Google Scholar] [CrossRef] [Green Version]
  80. Lee, S.Y.; Tama, B.A.; Moon, S.J.; Lee, S. Steel Surface Defect Diagnostics Using Deep Convolutional Neural Network and Class Activation Map. Appl. Sci. 2019, 9, 5449. [Google Scholar] [CrossRef] [Green Version]
  81. Krizhevsky, A.; Sutskever, I.; Hinton, G. ImageNet classification with deep convolutional neural networks. Commun. ACM 2017, 60, 84–90. [Google Scholar] [CrossRef]
  82. Gao, Y.; Gao, L.; Li, X.; Yan, X. A semi-supervised convolutional neural network-based method for steel surface defect recognition. Robot. Comput.-Integr. Manuf. 2020, 61, 101825. [Google Scholar] [CrossRef]
  83. Yi, L.; Li, G.; Jiang, M. An End-to-End Steel Strip Surface Defects Recognition System Based on Convolutional Neural Networks. Steel Res. Int. 2017, 88, 1600068. [Google Scholar] [CrossRef]
  84. Konovalenko, I.; Maruschak, P.; Brezinová, J.; Viňáš, J.; Brezina, J. Steel Surface Defect Classification Using Deep Residual Neural Network. Metals 2020, 10, 846. [Google Scholar] [CrossRef]
  85. Wang, S.; Xia, X.; Ye, L.; Yang, B. Automatic Detection and Classification of Steel Surface Defect Using Deep Convolutional Neural Networks. Metals 2021, 11, 388. [Google Scholar] [CrossRef]
  86. He, D.; Xu, K.; Wang, D. Design of multi-scale receptive field convolutional neural network for surface inspection of hot rolled steels. Image Vision Comput. 2019, 89, 12–20. [Google Scholar] [CrossRef]
  87. Zhang, S.; Zhang, Q.; Gu, J.; Su, L.; Li, K.; Pecht, M. Visual inspection of steel surface defects based on domain adaptation and adaptive convolutional neural network. Mech. Syst. Signal Pract. 2021, 153, 107541. [Google Scholar] [CrossRef]
  88. Choudhury, A.; Pal, S.; Naskar, R.; Basumallick, A. Computer vision approach for phase identification from steel microstructure. Eng. Comput. 2019, 36, 1913–1933. [Google Scholar] [CrossRef]
  89. Sitek, W.; Trzaska, J. Hybrid Modelling Methods in Materials Science—Selected Examples. J. Achiev. Mater. Manuf. Eng. 2012, 54, 93–102. [Google Scholar]
  90. Sinha, A.; Sikdar, S.; Chattopadhyay, P.P.; Datta, S. Optimization of mechanical property and shape recovery behavior of Ti-(~49 at.%) Ni alloy using artificial neural network and genetic algorithm. Mater. Design. 2013, 46, 227–234. [Google Scholar] [CrossRef]
  91. Zhu, Z.; Liang, Y.; Zou, J. Modeling and Composition Design of Low-Alloy Steel’s Mechanical Properties Based on Neural Networks and Genetic Algorithms. Materials 2020, 13, 5316. [Google Scholar] [CrossRef]
  92. Song, R.G.; Zhang, Q.Z. Heat treatment optimization for 7175 aluminum alloy by genetic algorithm. Mater. Sci. Eng. C 2001, 17, 133–137. [Google Scholar] [CrossRef]
  93. Mousavi Anijdan, S.H.; Bahrami, A.; Madaah Hosseini, H.R.; Shafyei, A. Using genetic algorithm and artificial neural network analyses to design an Al–Si casting alloy of minimum porosity. Mater. Des. 2006, 27, 605–609. [Google Scholar] [CrossRef]
  94. Pattanayak, S.; Dey, S.; Chatterjee, S.; Chowdhury, S.G.; Datta, S. Computational intelligence based designing of microalloyed pipeline steel. Comput. Mater. Sci. 2015, 104, 60–68. [Google Scholar] [CrossRef]
  95. Shahani, A.R.; Setayeshi, S.; Nodamaie, S.A.; Asadi, M.A.; Rezaie, S. Prediction of influence parameters on the hot rolling process using finite element method and neural network. J. Mater. Process. Technol. 2009, 209, 1920–1935. [Google Scholar] [CrossRef]
  96. Guo, Z.Y.; Sun, J.N.; Du, F.S. Application of finite element method and artificial neural networks to predict the rolling force in hot rolling of Mg alloy plates. J. S. Afr. Inst. Min. Metall. 2016, 116, 43–48. [Google Scholar] [CrossRef]
  97. Sitek, W.; Trzaska, J.; Dobrzański, L.A. Modified Tartagli method for calculation of Jominy hardenability curve. Mater. Sci. Forum 2008, 575–578, 892–897. [Google Scholar] [CrossRef]
  98. Tartaglia, J.M.; Eldis, G.T.; Geissler, J.J. Hyperbolic secant method for predicting Jominy hardenability. Metall. Trans. 1984, 15, 1173–1183. [Google Scholar] [CrossRef]
  99. Trzaska, J.; Dobrzański, L.A. Modelling of CCT diagrams for engineering and constructional steels. J. Mater. Process. Technol. 2007, 192–193, 504–510. [Google Scholar] [CrossRef]
Figure 1. Number of publications indexed in the Web of Science database on research into steels and metal alloys that describe the application of (a) artificial neural networks and (b) deep neural networks.
Figure 2. The number of weights in the MLP neural network with one hidden layer and one neuron in the output layer.
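As a minimal illustration of the parameter count referred to in the caption, the sketch below computes the total number of trainable weights for an MLP with one hidden layer and a single output neuron, counting bias terms as weights (a hypothetical Python example, not code from the paper; the function name and layer sizes are our own):

```python
def mlp_weight_count(n_inputs: int, n_hidden: int) -> int:
    """Total trainable parameters (weights + biases) of an MLP with
    one hidden layer and a single output neuron."""
    hidden_layer = (n_inputs + 1) * n_hidden  # each hidden neuron: n_inputs weights + 1 bias
    output_layer = n_hidden + 1               # output neuron: n_hidden weights + 1 bias
    return hidden_layer + output_layer

# e.g. 10 inputs and 8 hidden neurons: (10 + 1) * 8 + (8 + 1) = 97
```

The count grows roughly with the product of input and hidden-layer sizes, which is why the network topology must be matched to the size of the available dataset.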
Figure 3. Examples of sensitivity analysis used to evaluate the effects of independent variables on dependent variables, based on (a) the error quotient and (b) the relative importance of the input variables.
Figure 4. An example of coding the value of a qualitative variable with the one-of-N method.
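The one-of-N (one-hot) coding referred to in the caption can be sketched as follows (a hypothetical Python example with made-up category names, not the paper's own data):

```python
def one_of_n(value, categories):
    """Encode a qualitative variable as a one-of-N (one-hot) vector:
    a 1 at the position of the matching category, 0 elsewhere."""
    return [1 if value == c else 0 for c in categories]

# Hypothetical heat-treatment states as the qualitative variable:
states = ["annealed", "normalized", "quenched"]
one_of_n("normalized", states)  # -> [0, 1, 0]
```

Each category value thus becomes a separate binary network input, which avoids imposing an artificial ordering on qualitative variables.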
Figure 5. Error value changes in a typical artificial neural network training process.
Figure 6. Typical topology of convolutional neural network.