Article
Peer-Review Record

Comparison of the Warm Deformation Constitutive Model of GH4169 Alloy Based on Neural Network and the Arrhenius Model

Metals 2022, 12(9), 1429; https://doi.org/10.3390/met12091429
by Peng Cheng 1,2, Decheng Wang 1,*, Junying Zhou 1, Shanchao Zuo 1,2 and Pengfei Zhang 1,2
Reviewer 1: Anonymous
Reviewer 2:
Reviewer 3: Anonymous
Submission received: 11 July 2022 / Revised: 11 August 2022 / Accepted: 13 August 2022 / Published: 29 August 2022

Round 1

Reviewer 1 Report

The paper presents a comparison of the warm deformation constitutive model of GH4169 alloy and the Arrhenius model. The article is interesting and written in good scientific language. I consider that the article corresponds to the scope of the journal and can be published in its present form.

Author Response

Please see the attachment.

Author Response File: Author Response.pdf

Reviewer 2 Report

The paper must be improved. It appears as a list of results, with a poor analysis of those results.

Analysis of results must be enhanced.

The description of some results is incomplete. For example, Figures 11 to 14 are poorly analyzed.

Throughout the paper, there are some mistakes, identified in the attached file, such as the following:

Line 102: sigma is the flow stress

Line 108: sigma is the peak stress

Really, what is sigma?

The English requires revision because some sentences contain grammatical errors.

Comments for author File: Comments.pdf

Author Response

Please see the attachment.

Author Response File: Author Response.pdf

Reviewer 3 Report

The Introduction section has to be enhanced with additional crucial sources offering various artificial neural network solutions to the flow stress forecasting problem. Various ANN architectures should be mentioned and cited: Feed-Forward and Cascade-Forward Multi-Layer Perceptrons, Radial Basis Neural Networks, or Generalized Regression Networks. Advanced ANN learning techniques should also be mentioned, such as deep learning via restricted Boltzmann machines or auto-encoders. You can find useful works e.g. in: https://www.webofscience.com/wos/woscc/summary/15b8389f-7082-4a9c-a9dd-f8cc3a1f93f2-4504cb88/relevance/1

https://www.sciencedirect.com/search?qs=hot%20flow%20stress%20and%20artificial%20neural%20networks

The dependency of the parameters of the Arrhenius equation on the strain value is presented only in table form (Table 2). It is common practice that the parameters of eq. (4) are obtained via eqs. (5)–(10) first and then described via polynomials with respect to the strain. In the submitted research, it is not entirely clear what the data in Table 2 represent. Do the data in Table 2 represent the results of the mentioned polynomial regression analysis, or only the rough estimates from eqs. (5)–(10)? In connection with this, there should be formulas expressing the polynomial fit between the Arrhenius parameters and the strain. In addition, eq. (13) should be rearranged to express the influence of flow stress on the Arrhenius parameters.
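As an illustration of the polynomial-fit practice the reviewer refers to, a minimal sketch in Python is given below. It is not the authors' code: the strain levels and parameter values are placeholders rather than data from the paper, and the polynomial order is only an example.

```python
# Minimal sketch: describing strain-dependent Arrhenius parameters with polynomials.
# All numbers below are hypothetical placeholders, not values from the manuscript.
import numpy as np

strain = np.array([0.05, 0.10, 0.15, 0.20, 0.25, 0.30])            # assumed strain levels
ln_A   = np.array([38.1, 38.6, 38.9, 39.2, 39.4, 39.5])            # assumed ln A values
alpha  = np.array([0.0042, 0.0041, 0.0040, 0.0039, 0.0039, 0.0038])  # assumed alpha, 1/MPa

# Fit low-order polynomials in strain (order limited by the number of strain levels here)
p_lnA   = np.polyfit(strain, ln_A, deg=3)
p_alpha = np.polyfit(strain, alpha, deg=3)

# Evaluate the fitted parameters at an arbitrary strain, e.g. 0.12
print(np.polyval(p_lnA, 0.12), np.polyval(p_alpha, 0.12))
```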

In Section 3.3, some crucial information is missing, as follows:

You only mention that a back-propagation (BP) ANN model is used. However, what ANN architecture was used: a feed-forward multi-layer network, a cascade-forward multi-layer network, or something different?

In addition, the BP algorithm is always paired with a nonlinear minimization algorithm to obtain the weights and biases that minimize the network error, e.g. the Levenberg-Marquardt algorithm, Gauss-Newton, or Cauchy's gradient descent. Which minimization algorithm was used in this study?

Hidden neurons have to be activated via activation functions. Since the flow stress dependency is a highly nonlinear problem, it is common practice to use a nonlinear function, e.g. the hyperbolic tangent or the logistic sigmoid. Which activation function was applied to the hidden neurons in this research?

ANNs usually suffer from overtraining (overfitting). What method was applied to prevent overfitting, e.g. Bayesian regularization or cross-validation?

The examined dataset has to be divided into at least two subsets: a training set to learn the network and a testing set to check the network's response to new data; this is the standard approach when applying the above-mentioned Bayesian regularization. When applying cross-validation, the dataset is divided into three subsets: training, testing, and cross-validation. How was the dataset divided in this study?
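For context, a minimal sketch of the kind of setup these questions point at is given below. It is not the authors' implementation: a feed-forward MLP with a tanh hidden activation, L2 regularization against overfitting, and a train/test split. The input variables (strain, log strain rate, temperature), the network size, and the synthetic data are assumptions; scikit-learn's L-BFGS solver stands in here for Levenberg-Marquardt, which that library does not provide.

```python
# Minimal sketch of a feed-forward BP-type ANN for flow-stress prediction.
# Data and hyperparameters are illustrative assumptions, not taken from the paper.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
# Placeholder inputs: strain, log10(strain rate), temperature (deg C)
X = rng.uniform([0.05, -3.0, 700.0], [0.5, 0.0, 1000.0], size=(200, 3))
# Placeholder flow stress (MPa) with noise, just to make the sketch runnable
y = 500.0 + 200.0 * X[:, 0] + 50.0 * X[:, 1] - 0.3 * X[:, 2] + rng.normal(0.0, 5.0, 200)

# Split into training and testing subsets (80/20)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

scaler = StandardScaler().fit(X_train)
model = MLPRegressor(hidden_layer_sizes=(12,),   # one hidden layer, assumed size
                     activation='tanh',          # nonlinear hidden activation
                     solver='lbfgs',             # quasi-Newton stand-in for LM
                     alpha=1e-3,                 # L2 regularization against overfitting
                     max_iter=5000, random_state=0)
model.fit(scaler.transform(X_train), y_train)
print("test R^2:", model.score(scaler.transform(X_test), y_test))
```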

Author Response

Please see the attachment.

Author Response File: Author Response.pdf

Round 2

Reviewer 2 Report

Beta (see Eq. (6)) has units of 1/MPa. Remember that beta multiplied by sigma must be dimensionless.
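For reference, the standard hyperbolic-sine (Sellars-Tegart) formulation, which the manuscript's eqs. (4)–(6) presumably follow, makes the point explicit: beta appears inside an exponential, so beta times sigma must be dimensionless, and beta therefore carries units of MPa^-1 when sigma is in MPa.

```latex
% Standard Sellars-Tegart form and its high-stress limit; the argument of the
% exponential must be dimensionless, hence \beta has units of MPa^{-1}.
\dot{\varepsilon} = A\,[\sinh(\alpha\sigma)]^{n}\exp\!\left(-\frac{Q}{RT}\right),
\qquad
\dot{\varepsilon} \approx A_{2}\exp(\beta\sigma)\exp\!\left(-\frac{Q}{RT}\right)
\ \ (\text{high stress}),
\qquad
\alpha = \beta / n'.
```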

 

In Table 2, third column, replace A with alpha.

Author Response

Please see the attachment.

Author Response File: Author Response.pdf
