
Real and Imaginary Impedance Prediction of Ni-P Composite Coating for Additive Manufacturing Steel via Multilayer Perceptron

by Mohammad Fakhratul Ridwan Zulkifli 1,*, Nur Faraadiena Roslan 1, Suriani Mat Jusoh 1, Mohd Sabri Mohd Ghazali 2, Samsuri Abdullah 1 and Wan Mohd Norsani Wan Nik 1

1 Marine Corrosion Research Group, Faculty of Ocean Engineering Technology and Informatics, Universiti Malaysia Terengganu, Kuala Nerus 21030, Terengganu, Malaysia
2 Marine Corrosion Research Group, Faculty of Science and Marine Environment, Universiti Malaysia Terengganu, Kuala Nerus 21030, Terengganu, Malaysia
* Author to whom correspondence should be addressed.
Metals 2022, 12(8), 1306; https://doi.org/10.3390/met12081306
Submission received: 4 July 2022 / Revised: 29 July 2022 / Accepted: 2 August 2022 / Published: 3 August 2022

Abstract

Mathematical models are beneficial in representing a given dataset, especially in engineering applications, and establishing a model makes it possible to visualise how well the model fits the dataset, as was done in this research. The Levenberg–Marquardt algorithm was proposed as the training algorithm and employed in a backpropagation, or multilayer perceptron, network. The dataset, obtained from a previous researcher, consists of electrochemical data of uncoated and Ni-P coated additive manufacturing steel at several testing periods. The model’s performance was assessed by the regression value (R) and the mean square error (MSE). The R values for non-coated additive manufacturing steel were 0.9999, 1, and 1, while the MSE values were 1.14 × 10−6, 2.99 × 10−7, and 5.10 × 10−7 for 0 h, 288 h, and 576 h, respectively. Meanwhile, the R values for the Ni-P coated additive manufacturing steel were 1, 1, and 1, while the MSE values were 1.06 × 10−7, 1.15 × 10−8, and 6.59 × 10−8 for 0 h, 288 h, and 576 h, respectively. The high R and low MSE values indicate that this training algorithm achieves good accuracy. The proposed training algorithm also offers an advantage in processing time owing to its ability to approach second-order training speed without having to compute the Hessian matrix.

1. Introduction

Metal additive manufacturing (AM) has enabled the creation of quick prototypes with lower manufacturing periods, producing increasingly complex geometric components [1,2]. Several techniques used during production may cause porosities and cracks, resulting in a deterioration in mechanical properties and corrosion resistance. These problems can be solved by depositing a coating on the metal surface [3,4]. A Ni-P coating deposited on the metal surface can be an adaptive solution to a vast range of environmental and working conditions [5,6,7]. The composition of the P element in the coating alters the crystalline structure into amorphous, crystalline, or both, which consequently modifies the corrosion resistance of the coatings [8,9].
Electrochemical Impedance Spectroscopy (EIS) is one of the techniques for monitoring and detecting corrosion, primarily focusing on the corrosion of metallic materials. However, corrosion detection and monitoring for field applications are more complicated compared with those in laboratory experiments. Hence, electrochemical instrumentation, probe/sensor design, and data processing are some aspects that must be continually developed [10].
Data processing is a rapidly growing approach that provides fast corrosion evaluation methods and effective corrosion control strategies, offering high efficiency, high accuracy, and low cost. Recently, data processing through machine learning and artificial intelligence has become vital because it provides fast, non-invasive, low-cost analysis of metallic material corrosion [11].
Due to a lack of understanding of the components influencing the corrosion process, researchers have yet to find the optimum model for predicting corrosion [12]. Many well-known factors causing corrosion of steel structures, such as the electrochemical calculation of pitting corrosion, are not incorporated in current prediction and forecasting models. As a result, with the help of Artificial Intelligence, a reliable model for forecasting corrosion based on electrochemical data is required [12,13]. Other alternatives must be developed to better understand corrosion phenomena, minimise the length of time and the number of experiments, and regulate the process [14].
Curve-fitting problems are commonly solved using Artificial Neural Networks (ANNs), which offer more capability than other alternatives in terms of performance [15,16]. ANNs are fast, can adapt to their surroundings, and learn to improve their performance [17]. They comprise various algorithms widely utilised across diverse fields to solve problems such as categorisation, prediction, and machine learning. Backpropagation is one method that calculates the error gradient for each weight in two steps: forward and backward [18,19]. In the forward step, the network calculates the output from the provided input using its weight and bias vectors, and a loss function is computed from the output values. In the backward step, the weights are modified using a gradient descent technique, or a similar method, until the desired result is obtained or the neural network reaches its maximum loop limit [20,21,22], as sketched below.
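To make the two-step procedure concrete, the following is a minimal NumPy sketch of a single-hidden-layer perceptron trained by plain gradient descent; the layer sizes, learning rate, and random data are illustrative assumptions, not taken from this study (which used MATLAB and the Levenberg–Marquardt training algorithm described in Section 2.1).

```python
import numpy as np

# Minimal sketch of the forward/backward steps for one hidden-layer perceptron
# trained by gradient descent (illustrative only; sizes and data are hypothetical).
rng = np.random.default_rng(0)
X = rng.random((30, 3))          # 30 samples, 3 input features
y = rng.random((30, 1))          # 30 target values

W1, b1 = rng.normal(size=(3, 4)), np.zeros(4)   # input -> hidden (4 neurons)
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)   # hidden -> output
lr = 0.1

for epoch in range(1000):
    # Forward step: compute the output from the current weights and biases.
    h = np.tanh(X @ W1 + b1)
    y_hat = h @ W2 + b2
    loss = np.mean((y_hat - y) ** 2)            # mean squared error loss

    # Backward step: propagate the error gradient back through each layer.
    d_out = 2 * (y_hat - y) / len(y)
    dW2, db2 = h.T @ d_out, d_out.sum(axis=0)
    d_h = (d_out @ W2.T) * (1 - h ** 2)         # derivative of tanh
    dW1, db1 = X.T @ d_h, d_h.sum(axis=0)

    # Gradient-descent update of weights and biases.
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2
```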
This work established a prediction model of real and imaginary impedance (Z’ and Z’’) using secondary data from another researcher [23]. The original work provides electrochemical characterisation data further used to feed the neural network’s program.

2. Methodology

2.1. Levenberg–Marquardt (LM) Algorithm

The Levenberg–Marquardt algorithm was designed to approach second-order training speed without having to compute the Hessian matrix [24]. When the performance function has the form of a sum of squares, as is typical when training feedforward networks, the Hessian matrix can be approximated as [25]:
H = JᵀJ
and the gradient can be computed as:
g = Jᵀe
where J is the Jacobian matrix that contains the first derivatives of the network errors concerning the weights and biases, and e is a vector of network errors. The Jacobian matrix can be computed through a standard backpropagation technique much less complex than computing the Hessian matrix.
The Levenberg–Marquardt algorithm uses this approximation to the Hessian matrix in the following Newton-like update [26]:
xₖ₊₁ = xₖ − [JᵀJ + μI]⁻¹ Jᵀe
  • x = Weight of neural networks
  • J = Jacobian matrix of the performance criteria to be minimised
  • μ = Scalar that controls the learning process
  • e = The residual error factor
When μ is large, this becomes gradient descent with a small step size. Newton’s method is faster and more accurate near an error minimum; as such, the aim is to shift toward Newton’s method as quickly as possible. Thus, μ is decreased after each successful step (reduction in the performance function) and increased only when a tentative step would increase the performance function. In this way, the performance function is always reduced at each iteration of the algorithm.
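As an illustration of this update rule and the adaptive μ, the following NumPy sketch applies the Levenberg–Marquardt step to a small synthetic curve-fitting problem with a finite-difference Jacobian; the toy model, step sizes, and iteration count are assumptions for demonstration, not the MATLAB toolbox implementation used later in this work.

```python
import numpy as np

# Sketch of the update x_{k+1} = x_k - (J^T J + mu*I)^-1 J^T e on a toy
# least-squares problem (fitting y = a*exp(b*t)); purely illustrative.
def residuals(x, t, y):
    a, b = x
    return a * np.exp(b * t) - y

def jacobian(x, t, y, h=1e-7):
    # Forward-difference Jacobian of the residual vector w.r.t. the parameters.
    J = np.zeros((len(t), len(x)))
    r0 = residuals(x, t, y)
    for j in range(len(x)):
        xp = x.copy(); xp[j] += h
        J[:, j] = (residuals(xp, t, y) - r0) / h
    return J

t = np.linspace(0, 1, 20)
y = 2.0 * np.exp(1.5 * t)
x = np.array([1.0, 1.0])          # initial parameter guess
mu = 1e-3

for k in range(50):
    e = residuals(x, t, y)
    J = jacobian(x, t, y)
    step = np.linalg.solve(J.T @ J + mu * np.eye(len(x)), J.T @ e)
    x_new = x - step
    if np.sum(residuals(x_new, t, y) ** 2) < np.sum(e ** 2):
        x, mu = x_new, mu * 0.1    # successful step: accept it and decrease mu
    else:
        mu *= 10                   # failed step: increase mu and retry
```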

2.2. Secondary Data Retrieved for Electrochemical Investigation

The dataset for this modelling study was obtained from a previous researcher who conducted the experiment to evaluate the corrosion resistance of a Ni-P composite coating deposited on additive manufacturing steel. In the corrosion investigation, a three-electrode electrochemical cell was used, with a saturated calomel electrode (SCE) as the reference electrode, a graphite plate as the counter electrode, and the sample as the working electrode. The electrochemical impedance spectroscopy measurements swept the frequency from 10⁴ to 10¹ Hz, acquiring 10 points/decade, and a sine wave with an amplitude of 10 mV was applied as the system perturbation [23].
For the modelling part, several parameters were extracted from the dataset. Phase angle (°), impedance (Ω·cm²), and frequency (Hz) were proposed as the inputs, while the real and imaginary impedance (Z′ and Z″) were proposed as the outputs.

2.3. The Prediction Model

The ANN prediction model was designed with three layers: one input layer, one hidden layer and one output layer, as shown in Figure 1. Before the execution of the prediction, the data were normalised using Equation 1 to obtain values ranging from 0 to 1 (unity-based normalisation) [27]:
x = (xi − xmin) / (xmax − xmin)
where xi is the data value, xmin is the minimum value of the data set, and xmax is the maximum value of the data set.
Normalising the dataset is essential because it usually accelerates learning and leads to faster convergence [28]. Unnormalised data can give the loss function an ill-conditioned topology that emphasises specific parameter gradients [29].
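A minimal sketch of this unity-based normalisation is given below; the example values and the column ordering (frequency, impedance modulus, phase angle) are hypothetical and serve only to show the column-wise min-max mapping.

```python
import numpy as np

# Unity-based (min-max) normalisation of each input column to the range [0, 1],
# following x = (xi - xmin) / (xmax - xmin).
def minmax_normalise(data):
    x_min = data.min(axis=0)
    x_max = data.max(axis=0)
    return (data - x_min) / (x_max - x_min), x_min, x_max

# Hypothetical rows: frequency (Hz), impedance modulus (ohm.cm2), phase angle (deg).
raw = np.array([[1.0e4,  120.0, -10.0],
                [1.0e2,  950.0, -55.0],
                [1.0e1, 4800.0, -78.0]])
norm, x_min, x_max = minmax_normalise(raw)
# Values can be mapped back later with: raw = norm * (x_max - x_min) + x_min
```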
The database input was divided into three sets in MATLAB 2020: training, validation, and testing, with ratios of 70%, 15%, and 15%, respectively. The data assigned to each division were selected at random to avoid bias in the model. The Levenberg–Marquardt (LM) model was proposed as the training algorithm in this study, as this widely used technique can solve non-linear optimisation problems [30].
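The division itself was performed with the MATLAB toolbox; the following NumPy sketch merely illustrates, under that assumption, a random 70/15/15 split of sample indices with the proportions stated above.

```python
import numpy as np

# Random 70/15/15 split of sample indices into training, validation, and test sets.
def split_indices(n_samples, seed=0):
    idx = np.random.default_rng(seed).permutation(n_samples)
    n_train = int(0.70 * n_samples)
    n_val = int(0.15 * n_samples)
    return (idx[:n_train],                    # 70% training set
            idx[n_train:n_train + n_val],     # 15% validation set
            idx[n_train + n_val:])            # remaining ~15% test set

train_idx, val_idx, test_idx = split_indices(100)
```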

3. Results

The predictions of Z′ and Z″ were conducted separately for non-coated and coated additive manufacturing steel at several testing periods, namely 0 h, 288 h, and 576 h. Table 1 shows the numbers of neurons proposed in this study, with the best neuron selected based on the performance indicators, namely the regression value (R) and the mean square error (MSE). The highest R and lowest MSE values determine the best neuron for each prediction model. For non-coated additive manufacturing steel, neuron 4 gave the best performance for all testing periods. For coated additive manufacturing steel, the best neurons were 6 for 0 h, 3 for 288 h, and 4 for 576 h.
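This selection logic can be illustrated with the short sketch below: for each candidate number of hidden neurons, R and MSE are computed between observed and predicted values, and the neuron count with the highest R and lowest MSE is retained. The prediction arrays here are synthetic placeholders, not the study’s data.

```python
import numpy as np

# Pick the "best neuron" (hidden-layer size) from the highest R and lowest MSE.
def r_and_mse(y_true, y_pred):
    r = np.corrcoef(y_true, y_pred)[0, 1]          # regression value R
    mse = np.mean((y_true - y_pred) ** 2)          # mean square error
    return r, mse

y_true = np.linspace(0.0, 1.0, 50)
# Placeholder predictions for neuron counts 1..6 (noise level varies per model).
predictions = {n: y_true + np.random.default_rng(n).normal(0, 0.001 * n, 50)
               for n in range(1, 7)}

scores = {n: r_and_mse(y_true, y_pred) for n, y_pred in predictions.items()}
best = max(scores, key=lambda n: (scores[n][0], -scores[n][1]))
print("best neuron:", best, "R, MSE:", scores[best])
```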
Figure 2 shows the regression plots of the non-coated and coated additive manufacturing steel at the selected testing periods. The regression value R for all prediction models is near unity. For the non-coated samples, the recorded R values were 0.9999, 1, and 1 for 0 h, 288 h, and 576 h, respectively, while for the coated samples, the recorded R values were 1, 1, and 1 for 0 h, 288 h, and 576 h, respectively. The MSE values for non-coated additive manufacturing steel were 1.14 × 10−6, 2.99 × 10−7, and 5.10 × 10−7 for 0 h, 288 h, and 576 h, respectively. Meanwhile, the MSE values for Ni-P coated additive manufacturing steel were 1.06 × 10−7, 1.15 × 10−8, and 6.59 × 10−8 for 0 h, 288 h, and 576 h, respectively.
Table 2 shows the prediction model obtained from the proposed network. The employment of the Levenberg–Marquardt algorithm produces a high R-value, as seen in the linear prediction model. In the equation, y denotes the predicted imaginary and real impedance, while x denotes the observed imaginary and real impedance. This equation can be applied to predict real and imaginary impedance values if the frequency, phase angle, and impedance data are available.

4. Discussion

The trained data in all prediction models lie mostly within the 90% confidence interval region, producing good regression values. The Levenberg–Marquardt model as a training algorithm helped establish a good prediction model, as this algorithm trains the model using a curve-fitting method [31]. Like Quasi-Newton methods, it works with loss functions that are sums of squared errors, but without the need to compute the actual Hessian matrix; it calculates the Jacobian matrix and gradient vector instead [24,26,32]. The Jacobian matrix is obtained through the standard backpropagation technique, resulting in a less complex calculation than computing the Hessian matrix, and it gives the best performance in the search for the weights of the neuron connectors. In addition, this algorithm is the fastest method for training a moderate-sized feedforward neural network and provides an efficient implementation. Moreover, the performance function is always reduced at each iteration of the algorithm, because the scalar that controls the learning process decreases after each successful step and increases only when a tentative step increases the performance function. This contributes to a faster learning process in training the dataset.
The results are also consistent with those of other researchers who proposed the Levenberg–Marquardt model as a training algorithm and reported outstanding regression values, as tabulated in Table 3. Providing a more well-defined dataset could further improve the training process and hence the performance indicators of the predicted dataset.

5. Conclusions

This research employed a three-layer neural network architecture using a multilayer perceptron, or backpropagation, algorithm for Ni-P coated additive manufacturing steel. The performance indicators, represented by the regression value (R) and the mean squared error (MSE), are the key factors in determining the reliability of the prediction model. The Levenberg–Marquardt algorithm showed strong performance indicators in this study, with high R values and low MSE values. The proposed training algorithm gives accurate results with a fast processing time owing to its ability to approach second-order training speed without having to compute the Hessian matrix.

Author Contributions

Conceptualisation, M.F.R.Z. and N.F.R.; methodology, S.A.; software, S.A.; validation, S.M.J. and M.S.M.G.; formal analysis, N.F.R.; investigation, N.F.R.; writing—original draft preparation, N.F.R.; writing—review and editing, M.F.R.Z.; supervision, W.M.N.W.N.; project administration, M.F.R.Z.; funding acquisition, M.F.R.Z. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Ministry of Higher Education Malaysia through Fundamental Research Grant Scheme (FRGS), grant number FRGS/1/2020/STG05/UMT/03/1 (VOT59608).

Data Availability Statement

Data was obtained from Diaz et al., 2020 and are available from https://doi.org/10.1016/j.dib.2020.106159 accessed on 12 November 2020. Data citation: Dayi Gilberto Agredo Diaz, Arturo Barba Pingarrón, Jhon Jairo Olaya Florez, Jesús Rafael González Parra, Javier Cervantes Cabello, Irma Angarita Moncaleano, Alba Covelo Villar, Miguel Ángel Hernández Gallegos, Evaluation of the corrosion resistance of a Ni-P coating deposited on additive manufacturing steel: A dataset, Data in Brief, Volume 32, 2020, 106159, ISSN 2352-3409, https://doi.org/10.1016/j.dib.2020.106159 accessed on 12 November 2020.

Acknowledgments

The authors would like to acknowledge the research grant funder and the postgraduate and undergraduate students for their support in completing this research.

Conflicts of Interest

The authors declare no conflict of interest. The data obtained from another researcher for this study had been clearly cited. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript, or in the decision to publish the results.

References

  1. Bajaj, P.; Hariharan, A.; Kini, A.; Kürnsteiner, P.; Raabe, D.; Jägle, E.A. Steels in Additive Manufacturing: A Review of Their Microstructure and Properties. Mater. Sci. Eng. A 2020, 772, 138633.
  2. Agredo Diaz, D.G.; Barba Pingarrón, A.; Olaya Florez, J.J.; González Parra, J.R.; Cervantes Cabello, J.; Angarita Moncaleano, I.; Covelo Villar, A.; Hernández Gallegos, M.Á. Effect of a Ni-P Coating on the Corrosion Resistance of an Additive Manufacturing Carbon Steel Immersed in a 0.1 M NaCl Solution. Mater. Lett. 2020, 275, 128159.
  3. Ron, T.; Levy, G.K.; Dolev, O.; Leon, A.; Shirizly, A.; Aghion, E. Environmental Behavior of Low Carbon Steel Produced by a Wire Arc Additive Manufacturing Process. Metals 2019, 9, 888.
  4. Zhang, J.; Wu, Y.; Cheng, X.; Zhang, S.; Wang, H. Study of Microstructure Evolution and Preference Growth Direction in a Fully Laminated Directional Micro-Columnar TiAl Fabricated Using Laser Additive Manufacturing Technique. Mater. Lett. 2019, 243, 62–65.
  5. Berkh, O.; Zahavi, J. Electrodeposition and Properties of NiP Alloys and Their Composites—A Literature Survey. Corros. Rev. 1996, 14, 323–341.
  6. Daly, B.P.; Barry, F.J. Electrochemical Nickel–Phosphorus Alloy Formation. Int. Mater. Rev. 2013, 48, 326–338.
  7. Lelevic, A.; Walsh, F.C. Electrodeposition of NiP Alloy Coatings: A Review. Surf. Coat. Technol. 2019, 369, 198–220.
  8. Narayanan, T.S.N.S.; Krishnaveni, K.; Seshadri, S.K. Electroless Ni–P/Ni–B Duplex Coatings: Preparation and Evaluation of Microhardness, Wear and Corrosion Resistance. Mater. Chem. Phys. 2003, 82, 771–779.
  9. Ilangovan, S. Investigation of The Mechanical, Corrosion Properties and Wear Behaviour of Electroless Ni-P Plated Mild Steel. IJRET Int. J. Res. Eng. Technol. 2014, 3, 151–155.
  10. Xia, D.H.; Deng, C.M.; Macdonald, D.; Jamali, S.; Mills, D.; Luo, J.L.; Strebl, M.G.; Amiri, M.; Jin, W.; Song, S.; et al. Electrochemical Measurements Used for Assessment of Corrosion and Protection of Metallic Materials in the Field: A Critical Review. J. Mater. Sci. Technol. 2022, 112, 151–183.
  11. Xia, D.H.; Song, S.; Tao, L.; Qin, Z.; Wu, Z.; Gao, Z.; Wang, J.; Hu, W.; Behnamian, Y.; Luo, J.L. Review-Material Degradation Assessed by Digital Image Processing: Fundamentals, Progresses, and Challenges. J. Mater. Sci. Technol. 2020, 53, 146–162.
  12. Chou, J.S.; Ngo, N.T.; Chong, W.K. The Use of Artificial Intelligence Combiners for Modeling Steel Pitting Risk and Corrosion Rate. Eng. Appl. Artif. Intell. 2017, 65, 471–483.
  13. Bhandari, J.; Khan, F.; Abbassi, R.; Garaniya, V.; Ojeda, R. Modelling of Pitting Corrosion in Marine and Offshore Steel Structures—A Technical Review. J. Loss Prev. Process Ind. 2015, 37, 39–62.
  14. Millán-Ocampo, D.E.; Parrales-Bahena, A.; González-Rodríguez, J.G.; Silva-Martínez, S.; Porcayo-Calderón, J.; Hernández-Pérez, J.A. Modelling of Behavior for Inhibition Corrosion of Bronze Using Artificial Neural Network (ANN). Entropy 2018, 20, 409.
  15. Wu, D.; Olson, D.L.; Dolgui, A. Decision Making in Enterprise Risk Management: A Review and Introduction to Special Issue. Omega 2015, 57, 1–4.
  16. Chou, J.S.; Ngo, N.T. Smart Grid Data Analytics Framework for Increasing Energy Savings in Residential Buildings. Autom. Constr. 2016, 72, 247–257.
  17. Yao, X. Evolving Artificial Neural Networks. Proc. IEEE 1999, 87, 1423–1447.
  18. Møller, M.F. A Scaled Conjugate Gradient Algorithm for Fast Supervised Learning. Neural Netw. 1993, 6, 525–533.
  19. Shrestha, A.; Fang, H.; Wu, Q.; Qiu, Q. Approximating Back-Propagation for a Biologically Plausible Local Learning Rule in Spiking Neural Networks. In Proceedings of the ICONS ’19 International Conference on Neuromorphic Systems, Knoxville, TN, USA, 23–25 July 2019; pp. 1–8.
  20. Battiti, R.; Tecchiolli, G. Training Neural Nets with the Reactive Tabu Search. IEEE Trans. Neural Netw. 1995, 6, 1185–1200.
  21. Mukherjee, I.; Routroy, S. Comparing the Performance of Neural Networks Developed by Using Levenberg–Marquardt and Quasi-Newton with the Gradient Descent Algorithm for Modelling a Multiple Response Grinding Process. Expert Syst. Appl. 2012, 39, 2397–2407.
  22. Ranganathan, V.; Natarajan, S. A New Backpropagation Algorithm without Gradient Descent. arXiv 2018, arXiv:1802.00027.
  23. Agredo Diaz, D.G.; Barba Pingarrón, A.; Olaya Florez, J.J.; González Parra, J.R.; Cervantes Cabello, J.; Angarita Moncaleano, I.; Covelo Villar, A.; Hernández Gallegos, M.Á. Evaluation of the Corrosion Resistance of a Ni-P Coating Deposited on Additive Manufacturing Steel: A Dataset. Data Brief 2020, 32, 106159.
  24. Kingma, D.P.; Ba, J.L. Adam: A Method for Stochastic Optimization. arXiv 2014, arXiv:1412.6980.
  25. Marquardt, D.W. An Algorithm for Least-Squares Estimation of Nonlinear Parameters. J. Soc. Ind. Appl. Math. 1963, 11, 431–441.
  26. Neural Network Toolbox for Use with MATLAB. Available online: https://www.researchgate.net/publication/2801351_Neural_Network_Toolbox_for_use_with_MATLAB (accessed on 9 June 2022).
  27. Kumari, S.; Tiyyagura, H.R.; Douglas, T.E.L.; Mohammed, E.A.A.; Adriaens, A.; Fuchs-Godec, R.; Mohan, M.K.; Skirtach, A.G. ANN Prediction of Corrosion Behaviour of Uncoated and Biopolymers Coated Cp-Titanium Substrates. Mater. Des. 2018, 157, 35–51.
  28. Cheng, Q.; Li, H.L.; Wu, Q.; Ma, L.; Ngan, K.N. Parametric Deformable Exponential Linear Units for Deep Neural Networks. Neural Netw. 2020, 125, 281–289.
  29. Normalizing Your Data (Specifically, Input and Batch Normalization). Available online: https://www.jeremyjordan.me/batch-normalization/ (accessed on 9 June 2022).
  30. Liu, H. On the Levenberg-Marquardt Training Method for Feed-Forward Neural Networks. In Proceedings of the 6th International Conference on Natural Computation, ICNC, Yantai, China, 10–12 August 2010; Volume 1, pp. 456–460.
  31. Al Bataineh, A.; Kaur, D. A Comparative Study of Different Curve Fitting Algorithms in Artificial Neural Network Using Housing Dataset. In Proceedings of the IEEE National Aerospace Electronics Conference, NAECON, Dayton, OH, USA, 23–26 July 2018; pp. 174–178.
  32. Al-Shehri, D.A. Oil and Gas Wells: Enhanced Wellbore Casing Integrity Management through Corrosion Rate Prediction Using an Augmented Intelligent Approach. Sustainability 2019, 11, 818.
  33. Shaik, N.B.; Pedapati, S.R.; Ammar Taqvi, S.A.; Othman, A.R.; Abd Dzubir, F.A. A Feed-Forward Back Propagation Neural Network Approach to Predict the Life Condition of Crude Oil Pipeline. Processes 2020, 8, 661.
  34. Rocabruno-Valdés, C.I.; González-Rodriguez, J.G.; Díaz-Blanco, Y.; Juantorena, A.U.; Muñoz-Ledo, J.A.; El-Hamzaoui, Y.; Hernández, J.A. Corrosion Rate Prediction for Metals in Biodiesel Using Artificial Neural Networks. Renew. Energy 2019, 140, 592–601.
  35. Zulkifli, F.; Abdullah, S.; Suriani, M.J.; Kamaludin, M.I.A.; Wan Nik, W.B. Multilayer Perceptron Model for the Prediction of Corrosion Rate of Aluminium Alloy 5083 in Seawater via Different Training Algorithms. IOP Conf. Ser. Earth Environ. Sci. 2021, 646, 012058.
  36. Tuntas, R.; Dikici, B. An Investigation on the Aging Responses and Corrosion Behaviour of A356/SiC Composites by Neural Network: The Effect of Cold Working Ratio. J. Compos. Mater. 2015, 50, 2323–2335.
  37. Colorado-Garrido, D.; Ortega-Toledo, D.M.; Hernández, J.A.; González-Rodríguez, J.G.; Uruchurtu, J. Neural Networks for Nyquist Plots Prediction during Corrosion Inhibition of a Pipeline Steel. J. Solid State Electrochem. 2009, 13, 1715–1722.
Figure 1. Artificial neural network architecture.
Figure 2. Regression plot of the proposed training network: (a) non-coated 0 h at neuron 4; (b) non-coated 288 h at neuron 4; (c) non-coated 576 h at neuron 4; (d) coated 0 h at neuron 6; (e) coated 288 h at neuron 3; (f) coated 576 h at neuron 4.
Table 1. Selection of the best neuron.

Non-coated additive manufacturing steel, 0 h
Neuron   1            2            3            4 *          5            6
R        0.99177      0.99981      0.99987      0.99999      0.99999      0.99996
MSE      1.69 × 10−3  3.76 × 10−5  2.63 × 10−5  1.14 × 10−6  1.48 × 10−6  9.11 × 10−6

Non-coated additive manufacturing steel, 288 h
Neuron   1            2            3            4 *          5            6
R        0.99578      0.99478      1            1            0.99998      0.99999
MSE      5.34 × 10−4  9.75 × 10−4  5.03 × 10−7  2.99 × 10−7  3.06 × 10−6  2.61 × 10−6

Non-coated additive manufacturing steel, 576 h
Neuron   1            2            3            4 *          5            6
R        0.99264      0.98665      0.98673      1            0.99894      0.99264
MSE      1.08 × 10−3  2.08 × 10−3  1.69 × 10−3  5.10 × 10−7  1.94 × 10−4  1.09 × 10−3

Coated additive manufacturing steel, 0 h
Neuron   1            2            3            4            5            6 *
R        0.99544      0.99999      0.99963      1            1            1
MSE      6.90 × 10−4  1.60 × 10−6  6.45 × 10−5  8.10 × 10−7  3.84 × 10−7  1.06 × 10−7

Coated additive manufacturing steel, 288 h
Neuron   1            2            3 *          4            5            6
R        0.99327      0.99994      1            0.99999      1            1
MSE      6.54 × 10−4  8.31 × 10−6  1.15 × 10−8  8.24 × 10−7  7.13 × 10−7  1.32 × 10−8

Coated additive manufacturing steel, 576 h
Neuron   1            2            3            4 *          5            6
R        0.99708      1            0.99943      1            1            1
MSE      3.37 × 10−4  7.71 × 10−8  7.67 × 10−5  6.59 × 10−8  7.62 × 10−8  6.44 × 10−7

* Indicates the best neuron in the proposed training network.
Table 2. Prediction model for the real and imaginary impedance of coated and non-coated manufacturing steel at several testing periods.

Steel Type                                 Testing Period (h)   Prediction Model
Non-coated additive manufacturing steel    0                    y = x + 0.00017
                                           288                  y = x + 2.4 × 10−6
                                           576                  y = x + 0.00015
Coated additive manufacturing steel        0                    y = x + 5.8 × 10−6
                                           288                  y = x + 2 × 10−6
                                           576                  y = x + 6.2 × 10−7

y = predicted value of imaginary and real impedance. x = observed value of imaginary and real impedance.
Table 3. Past research that employed the Levenberg–Marquardt model as a training algorithm in prediction studies.

Scope                                                                               Value of R   Authors
Life condition prediction of a crude oil pipeline                                   0.9998       [33]
Corrosion rate prediction for metals in biodiesel                                   0.93237      [34]
Prediction of corrosion rate of Aluminium Alloy 5083 in seawater                    0.99272      [35]
The effect of cold working ratio on the ageing responses and corrosion behaviour    0.99986      [36]
Nyquist plots prediction of corrosion inhibition in pipeline steel                  0.9840       [37]
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
