Article

MFO Tuned SVR Models for Analyzing Dimensional Characteristics of Cracks Developed on Steam Generator Tubes

by Mathias Vijay Albert William 1, Subramanian Ramesh 2, Robert Cep 3,*, Mahalingam Siva Kumar 4 and Muniyandy Elangovan 5,*

1 Department of Electronics and Communication Engineering, Vel Tech Rangarajan Dr. Sagunthala R&D Institute of Science and Technology, Avadi 600062, India
2 Department of Electrical and Electronics Engineering, Vel Tech Rangarajan Dr. Sagunthala R&D Institute of Science and Technology, Avadi 600062, India
3 Department of Machining, Assembly and Engineering Metrology, Faculty of Mechanical Engineering, VSB-Technical University of Ostrava, 708 00 Ostrava, Czech Republic
4 Department of Mechanical Engineering, Vel Tech Rangarajan Dr. Sagunthala R&D Institute of Science and Technology, Avadi 600062, India
5 Department of R&D, Bond Marine Consultancy, London EC1V 2NX, UK
* Authors to whom correspondence should be addressed.
Appl. Sci. 2022, 12(23), 12375; https://doi.org/10.3390/app122312375
Submission received: 7 November 2022 / Revised: 27 November 2022 / Accepted: 1 December 2022 / Published: 3 December 2022

Abstract

Accurate prediction of material defects from images helps avoid major failures in industrial applications. In this work, a Support Vector Regression (SVR) model has been developed from Gray Level Co-occurrence Matrix (GLCM) features extracted from Magnetic Flux Leakage (MFL) images, wherein the length, depth, and width of the cracks are taken as the response values and a percentage of the data is reserved for testing the SVR model. Four parameters that affect the SVR model's performance are considered in selecting the best model: the kernel function, the solver type, the validation scheme and its value, and the % of testing data. Six kernel functions, three solvers, and two validation schemes are considered, with the testing data set varied from 10% to 30%. The prediction accuracy of the SVR model is assessed by simultaneously minimizing the prediction measures Root Mean Square Error (RMSE) and Mean Absolute Error (MAE) and maximizing the R² value. The Moth Flame Optimization (MFO) algorithm has been implemented to select the best SVR model and its four parameters based on these three conflicting prediction measures, converted from multiple objectives into a single objective using the Technique for Order of Preference by Similarity to Ideal Solution (TOPSIS) method. The performance of the MFO algorithm is compared statistically with the Dragonfly Optimization (DFO) and Particle Swarm Optimization (PSO) algorithms.

1. Introduction

Non-Destructive Testing (NDT) is a breakthrough in industrial inspection. The different components used in industry need to be checked thoroughly for safety and operability. Critical components such as steam generator tubes (SGTs) and heater pressure valves are tested multiple times for their structural integrity to avoid damage during operation; electromagnetic testing is also very useful in identifying cracks in SGTs and pipes. Daniel et al. [1] developed an ANN model to predict SGT defects in terms of the length, width, and depth of a crack from the gray level co-occurrence matrix features of the Magnetic Flux Leakage (MFL) image. The structural integrity of engineering components is increasingly important in the modern scenario for safety reasons. Usually, defects in steel pipes are cracks on the surface or sub-surface; identifying such cracks and resolving the issues in time will avoid a major disaster [2]. In NDT, different testing methods are available for each type of component or specimen to be tested. The NDT method is normally selected based on the component or specimen's size, shape, material, and conductivity, and on the capabilities, benefits, and drawbacks of the available NDT techniques, including Visual Testing (VT), Ultrasonic Testing (UT), thermography, Radiographic Testing (RT), Electromagnetic Testing (ET), Acoustic Emission (AE), and shearography testing. Further approaches are categorized based on their inherent qualities and applicability. Most of the time, an NDT assessor employs only one non-destructive test technique. Basic NDT can be done with the expertise of individuals, but complex NDT testing requires experts with wide knowledge of equipment operation and computer skills to obtain accurate results [3].
Material defects play a vital role in industry, as they create major issues in the operation and safety of equipment and components. Defects in components and equipment can be inspected in two ways: quality checking after manufacturing and on-site inspection during operation. Most industries use steel of various compositions for manufacturing equipment and components, and steel will normally have defects such as porosity, corrosion, pits, surface cracks, and sub-surface cracks; defects like corrosion, porosity, pits, and surface cracks can usually be identified manually with general NDT methods. The evaluation of the fracture area and the early identification of cracks are particularly crucial, and components under stress or strain should be monitored regularly. Fatigue cracks or other pre-existing cracks may cause unexpected failure or disaster. Magnetic testing techniques produce good results in ferromagnetic materials and are extremely efficient for all such components [4]. However, sub-surface cracks are difficult to find with the naked eye, and complex NDT methods must be used.
As a powerful and highly efficient non-destructive testing (NDT) method, magnetic flux leakage (MFL) testing is based on the physical phenomenon that a ferromagnetic specimen in a certain magnetization state will produce magnetic flux leakage if any discontinuities are present in it. Among modern non-destructive testing methods, MFL testing has distinct benefits over conventional inspection techniques, including quicker inspection speeds, deeper examination depths, and simpler automated inspection. As a result, MFL testing is widely used in industry to evaluate ferromagnetic materials [5]. Specimens used in MFL experimental investigations of inner flaws or sub-surface fractures often cannot present inner flaws of the same size at differing buried depths; hence, a specimen with identically sized interior flaws is conceived, built, and evaluated to study the MFL course of these defects. Identifying sub-surface cracks involves simple magnetization of the material and measurement of the flux leakage output using a Hall sensor [6]. Periodic checking and verification of sub-surface cracks in critical areas may avoid drastic incidents. MFL techniques are required in the inspection of steel tubes and pipes in the petrochemical industry, rope cars, nuclear reactor tubes, etc.
Ege and Coramik [7] designed and produced two different PIGs (pipeline inspection gauges) to inspect pipelines using the flux leakage method and developed a new magnetic measurement system to investigate the effect of the speed variation of the produced PIGs. Shi et al. [8] introduced the principle, measuring methods, and quantitative analysis of the MFL method, using statistical identification methods to establish the relationship between the defect shape parameters and the magnetic flux leakage signals. Suresh et al. [9] developed a bobbin coil magnetizer arrangement to inspect defects in small-diameter tubes and used an ANSYS Maxwell EM V-16-based Finite Element Method (FEM) and an analytical model to support the experimental results.
To improve model accuracy and minimize prediction error, support vector regression (SVR), a machine learning tool, has recently gained attention among researchers in various fields. Jin et al. [10] proposed an internal crack-defect-detection method based on the relief algorithm and Adaboost-SVM to overcome the problems of poor generalization and low accuracy in the existing defect detection process. Zhang et al. [11] developed an SVR model to forecast stock prices by optimizing the SVR parameters using dynamic adjustment and the opposition-based chaotic strategy in the Firefly Algorithm (FFA), termed the Modified Firefly Algorithm (MFFA). Kari et al. [12] implemented the SVR model with a Genetic Algorithm (GA) technique to forecast the dissolved gas content in power transformers to maintain the safety of the power system. Houssein [13] used a twin support vector regression model to forecast wind speed, tuning the SVR parameters with the Particle Swarm Optimization (PSO) algorithm. Li et al. [14] presented a novel Sine Cosine Algorithm–SVR model to select the penalty and kernel function of SVR and validated its effectiveness on benchmark datasets. Yuan [15] introduced the GA–SVR model for forecasting sales volume, achieving better forecasting accuracy and performance than traditional SVR and Artificial Neural Network (ANN) prediction models. Papadimitriou et al. [16] investigated the efficiency of an SVM forecasting model for the next-day directional change of electricity prices and reported 76.12% forecast accuracy over a 200-day period. Several other applications of SVR models exist in the literature, ranging from process parameter prediction [17,18] to flow estimation [19] and from 3D-printing applications [20] to battery monitoring [21].
From the literature, it is understood that the SVR model, a machine learning tool, has been used by researchers in different areas such as stock-market prediction, electricity prices, dissolved gas content, wind speed, etc. In this work, an SVR model is developed to predict SGT defects in terms of the length, width, and depth of cracks from the given gray level co-occurrence matrix features of the MFL image. The selection of parameters such as the kernel function, solver type, and validation scheme, along with the % of test data, affects the performance of the SVR models. The root mean square error, mean absolute error, and R² values are considered to measure the performance of the SVR model. The multiple contradictory performance measures require the conversion of the multi-objectives into a single objective, which motivated the implementation of the TOPSIS method. The Moth Flame Optimization (MFO) algorithm is proposed to select the optimal parameters of SVR to minimize the prediction error. The effectiveness of the MFO algorithm is proven by comparing its performance with the Dragonfly Optimization (DFO) and Particle Swarm Optimization (PSO) algorithms.
The paper is organized as follows. Section 2 describes the proposed methodology, the SVR models, and the MFO algorithm, along with its pseudocode and implementation. Section 3 presents the results and discussion, with a quantitative comparison of the performance of the MFO algorithm against the DFO and PSO algorithms. Finally, conclusions are drawn and the future scope is outlined.

2. Machine Learning Methods

This paper proposes establishing a support vector regression model to predict the length, depth, and width of a crack from the features of its image. The feature data set developed by Daniel et al. [1] is used in this work to establish an SVR model that is more accurate than the neural network model. The data set has 22 features extracted from 105 crack images with different lengths, depths, and widths. The model's performance is affected by factors such as the kernel function, solver type, validation method and its parameter, and the % of the data set used for testing, all of which are considered in this work. The built-in functions 'fitrsvm' and 'predict', available in MATLAB 2022a, are used to fit the model and predict the response values for the given training data set, and the 'resume' function is used to train the SVR model further until it reaches convergence. Among the various performance measures available, RMSE, MAE, and R², the measures most often used in the literature, are considered in this work; they are expressed in Equations (1)–(3) and are calculated separately for the total, training, and testing data sets when developing an optimized SVR model. Simultaneously minimizing the RMSE and MAE values while maximizing the R² values are conflicting objectives, and a total of nine performance measures are involved; hence, the TOPSIS method is proposed in this work to convert these multiple objectives into a single objective using closeness values. Equations (4)–(9) are used to calculate the normalized value, the performance matrix, the positive and negative ideal solutions, the ideal and negative ideal separation values, and the closeness values, respectively. Figure 1 illustrates the proposed methodology of this work.
\[ RMSE = \sqrt{\frac{\sum_{i=1}^{n_s}\left(R_{ik} - r_{ik}\right)^2}{n_s}} \tag{1} \]
\[ MAE = \frac{\sum_{i=1}^{n_s}\left|R_{ik} - r_{ik}\right|}{n_s} \tag{2} \]
\[ R^2 = \frac{\left[\sum_{i=1}^{n_s}\left(R_{ik}-\bar{R}_k\right)\left(r_{ik}-\bar{r}_k\right)\right]^2}{\sum_{i=1}^{n_s}\left(R_{ik}-\bar{R}_k\right)^2 \sum_{i=1}^{n_s}\left(r_{ik}-\bar{r}_k\right)^2} \tag{3} \]
where \( \bar{R}_k = \frac{\sum_{i=1}^{n_s} R_{ik}}{n_s} \) and \( \bar{r}_k = \frac{\sum_{i=1}^{n_s} r_{ik}}{n_s} \)
\[ N_{rk} = \frac{O_{rk}}{\sqrt{\sum_{r=1}^{it} O_{rk}^2}} \tag{4} \]
\[ A_{rk} = N_{rk}\, W_k \tag{5} \]
For minimization objectives: \( P_k = \min_{r=1,\dots,it} A_{rk} \), \( M_k = \max_{r=1,\dots,it} A_{rk} \); for maximization objectives: \( P_k = \max_{r=1,\dots,it} A_{rk} \), \( M_k = \min_{r=1,\dots,it} A_{rk} \)  (6)
\[ SP_r = \sqrt{\sum_{k=1}^{n_r}\left(A_{rk}-P_k\right)^2} \tag{7} \]
\[ SM_r = \sqrt{\sum_{k=1}^{n_r}\left(A_{rk}-M_k\right)^2} \tag{8} \]
\[ R_r = \frac{SM_r}{SP_r + SM_r} \tag{9} \]
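As a concrete illustration of Equations (1)–(3), the following MATLAB sketch (an illustration of this edit, not the authors' code) computes the three performance measures for one response k from a vector of actual values R and predicted values r:

% Sketch of Equations (1)-(3): R and r are column vectors of actual and
% predicted values of one response for the chosen data set.
function [rmse, mae, r2] = perfMeasures(R, r)
    ns   = numel(R);
    rmse = sqrt(sum((R - r).^2) / ns);                % Equation (1)
    mae  = sum(abs(R - r)) / ns;                      % Equation (2)
    num  = sum((R - mean(R)) .* (r - mean(r)))^2;     % Equation (3): squared
    r2   = num / (sum((R - mean(R)).^2) * ...         % correlation coefficient
                  sum((r - mean(r)).^2));
end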
Figure 1. Proposed methodology.

2.1. Support Vector Regression Model

The SVR model is adopted to compute the functional relationship between independent variables (parameters, \( P_{ij} \)) and dependent variables (responses, \( R_{ik} \)) whose distributions are not known, while concurrently minimizing both model complexity and estimation error; implemented on this data set, its performance surpassed that of the neural network models. An SVR model uses a training data set to learn the underlying relationship, and its generalization error is estimated by testing the model on an unseen data set. The response value is a function of the multivariate parameters, as represented in Equation (10), which can be written as Equation (11) to represent a linear regression support vector in terms of a weight vector (W), a high-dimensional space \( \varphi \) related to the input parameter space, and a bias (b).
\[ R_{ik} = f\left(P_{ij}\right) \tag{10} \]
where,
\( P_{ij} \) — jth parameter of ith training data
\( R_{ik} \) — kth response of ith training data
i — index for training data (i = 1, 2, 3, … nt)
j — index for parameter (j = 1, 2, 3, … np)
k — index for response (k = 1, 2, 3, … nr)
\[ f\left(P_{ij}\right) = W \cdot \varphi\left(P_{ij}\right) + b \tag{11} \]
The values of the weight and bias are estimated by transforming the regression problem into the constrained minimization problem expressed in Equation (12).
\[ \text{minimize} \;\; \frac{1}{2}\left\|W\right\|^2 + C \sum_{i=1}^{n_t}\left(\xi_i + \xi_i^*\right) \tag{12} \]
where,
\( \xi_i, \xi_i^* \) — lower and upper positive slack variables
\( \frac{1}{2}\left\|W\right\|^2 \) — flatness value of the function
\( C \sum_{i=1}^{n_t}\left(\xi_i + \xi_i^*\right) \) — empirical error
C — box constraint or penalty parameter value
Constraints:
\[ R_{ik} - \left(W \cdot \varphi\left(P_{ij}\right) + b\right) \le \varepsilon + \xi_i^* \tag{13} \]
\[ \left(W \cdot \varphi\left(P_{ij}\right) + b\right) - R_{ik} \le \varepsilon + \xi_i \tag{14} \]
\[ \xi_i, \xi_i^* \ge 0 \tag{15} \]
where
\( \varepsilon \) — error tolerance
The constrained problem is resolved using the Lagrange multiplier function given in Equation (17), which is minimized subject to the conditions in the constraint set below; the resulting regression function takes the kernel form of Equation (16). The list of parameters considered to establish an SVR model is presented in Table 1. The kernel functions and their formulas are given in Table 2.
\[ f\left(P_{ij}\right) = \sum_{i=1}^{n_t}\left(\alpha_{ij} - \alpha_{ij}^*\right) K\left(P_{ij}, P\right) + b \tag{16} \]
\[ L(\alpha) = \frac{1}{2}\sum_{i,l=1}^{n_t}\left(\alpha_{ij} - \alpha_{ij}^*\right)\left(\alpha_{lj} - \alpha_{lj}^*\right) K\left(P_{ij}, P_{lj}\right) + \varepsilon \sum_{i=1}^{n_t}\left(\alpha_{ij}^* + \alpha_{ij}\right) - \sum_{i=1}^{n_t} R_{ik}\left(\alpha_{ij}^* - \alpha_{ij}\right) \tag{17} \]
where
\( \alpha_{ij}^*, \alpha_{ij} \) — non-negative Lagrange multipliers
\( K\left(P_{ij}, P\right) \) — kernel function
Constraints:
\[ \sum_{i=1}^{n_t}\left(\alpha_{ij}^* - \alpha_{ij}\right) = 0, \qquad 0 \le \alpha_{ij}^* \le C, \qquad 0 \le \alpha_{ij} \le C \]
Table 1. List of parameters considered in SVR.

Parameters | Values
Kernel function (kf) | Gaussian; Radial basis function; Polynomial; Linear; Quadratic; Cubic
Solver (sr) | SMO (Sequential Minimal Optimization); ISDA (Iterative Single Data Algorithm); L1QP (Quadratic Programming)
Validation scheme (vs) | Cross-validation; Holdout validation
Validation scheme's parameters | K-fold — number of folds used for cross-validation (5 to 15); Holdout — % of data held out for validation (10% to 40%)
% of training data | 10% to 30%
Table 2. Kernel functions and their formulas.

Kernel Function | Formula | Terms Used
Gaussian | \( e^{-\|P_{ij}-P_{lj}\|^2 / 2\sigma^2} \) | \( \|P_{ij}-P_{lj}\| \) — Euclidean distance; \( \sigma \) — variance
Radial basis function | \( e^{-\gamma \|P_{ij}-P_{lj}\|^2} \) | \( \gamma \) — scalar value
Polynomial | \( \left(A + P_{ij}^T P_{lj}\right)^n \) | A — free parameter; n — order of polynomial
Linear | \( P_{ij}^T P_{lj} \) |
Quadratic | \( \left(A + P_{ij}^T P_{lj}\right)^2 \) | A — free parameter
Cubic | \( \left(A + P_{ij}^T P_{lj}\right)^3 \) | A — free parameter
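For illustration, the following MATLAB sketch (not from the paper) evaluates each kernel of Table 2 for a pair of feature vectors; the hyperparameter values sigma, gamma, A, and n are arbitrary assumptions of this edit.

% Illustrative evaluation of the Table 2 kernels for two feature vectors.
x = rand(22,1); z = rand(22,1);              % 22 GLCM features per image
sigma = 1; gamma = 0.5; A = 1; n = 3;        % assumed hyperparameter values
kGauss = exp(-norm(x - z)^2 / (2*sigma^2));  % Gaussian
kRBF   = exp(-gamma * norm(x - z)^2);        % Radial basis function
kPoly  = (A + x'*z)^n;                       % Polynomial of order n
kLin   = x'*z;                               % Linear
kQuad  = (A + x'*z)^2;                       % Quadratic
kCub   = (A + x'*z)^3;                       % Cubic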
One SVR model is developed for a given kernel function, solver type, validation scheme and its parameter, and % of testing data using the 'fitrsvm' function. If the model does not converge, it is run for another set of iterations using the 'resume' function. Once convergence is reached, the response values are computed with the 'predict' function for the total data set, training data set, and testing data set. The performance measures of the model are calculated from the predicted ( \( r_{ik} \) ) and actual ( \( R_{ik} \) ) response values of the data set. A sketch of this workflow is given below.
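The following MATLAB sketch outlines the fit-resume-predict loop described above; it is a minimal illustration, assuming the feature matrix X and one response vector y (L, D, or W) are already loaded, and the name-value settings shown are placeholders for one candidate parameter set rather than the authors' tuned values.

% Minimal sketch of the fit-resume-predict workflow (assumed variable names).
cv  = cvpartition(size(X,1), 'HoldOut', 0.20);       % reserve 20% for testing
Xtr = X(training(cv),:);  ytr = y(training(cv));
Mdl = fitrsvm(Xtr, ytr, 'KernelFunction','linear', ...
              'Solver','SMO', 'Standardize',true);
if ~Mdl.ConvergenceInfo.Converged
    Mdl = resume(Mdl, 1000);                         % continue training to convergence
end
yhatTrain = predict(Mdl, Xtr);                       % training-set predictions
yhatTest  = predict(Mdl, X(test(cv),:));             % testing-set predictions
yhatTotal = predict(Mdl, X);                         % total-data-set predictions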

2.2. Moth Flame Optimization

Moth flame optimization is inspired by the transverse orientation navigation of moths, which fly long distances by keeping a fixed angle to the moon and which spiral toward artificial light sources [22]. The MFO algorithm provides very quick convergence at an early stage by switching from exploration to exploitation, which increases its efficiency. Apart from that, MFO is selected in this work for its simplicity, searching speed, easy hybridization with other algorithms, no need for derivative information in the starting phase, few parameters, scalability, and flexibility [23]. In this work, the position of a moth is represented in five dimensions, as shown in Table 3, and the lower and upper bound values are listed in Table 4. One moth represents one solution, producing one trained SVR model with its RMSE, MAE, and R² performance measures, by taking the moth's dimensions as the parameters of the SVR model: kernel function, name of the solver, validation scheme, validation scheme's parameter, and % of training data. Table 3 represents one such moth: an SVR model is generated with the 3rd kernel function, 1st solver type, and 2nd validation scheme, with 25% of the data for validation as the validation scheme's parameter and 20% of training data, using the 'fitrsvm' and 'resume' functions in MATLAB 2022a, and trained with the output of the performance measures. Simultaneously maximizing the R² value while minimizing the RMSE and MAE values are the objective functions considered in this work. The non-dominated sorting method is adopted to generate 100 Pareto-optimal moths as the archive size, with 100 moths as the population size and 100 iterations as the stopping criterion. After obtaining the trained SVR model, the three performance measures are calculated for the testing and total data sets in addition to the training data set. This procedure is followed for all three responses, L, D, and W, so the SVR models are available with their parameters and the values of the three performance measures for each of the total, trained, and tested data sets. The evaluation of one such moth is represented in Figure 2, and a decoding sketch is given below. The parameters of the MFO algorithm considered in this work are presented in Table 5, and its pseudo-code in Algorithm 1. Due to the conflicting objectives, the TOPSIS method has been implemented to convert the multi-objectives (three performance measures each for the trained, tested, and total data sets) available in the archive into a single objective; its pseudo-code is shown in Algorithm 4. The implementation of the MFO algorithm is represented as a flow diagram in Figure 3. The pseudo-codes of the DFO and PSO algorithms are presented in Algorithms 2 and 3.
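A moth's position can be decoded into SVR parameters as in the following MATLAB sketch; the variable names and the rounding scheme are assumptions of this edit, not the authors' code.

% Hedged sketch: decoding one moth's five dimensions (Table 3) into SVR
% parameters before training.
kernels = {'Gaussian','RBF','Polynomial','Linear','Quadratic','Cubic'};
solvers = {'SMO','ISDA','L1QP'};
schemes = {'KFold','Holdout'};
moth = [3 1 2 25 20];                      % example moth from Table 3
kf   = kernels{round(moth(1))};            % 3rd kernel function
sr   = solvers{round(moth(2))};            % 1st solver type
vs   = schemes{round(moth(3))};            % 2nd validation scheme (holdout)
vsp  = moth(4);                            % 25% of data held out for validation
td   = moth(5);                            % 20% of training data
% Note: in fitrsvm, the quadratic and cubic kernels would map to the
% 'polynomial' kernel with 'PolynomialOrder' 2 or 3, respectively.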
Figure 2. Evaluation of one moth.
Figure 3. Implementation of the MFO algorithm.
Algorithm 1: MFO Algorithm
Initialize the parameters for Moth-flame
Initialize Moth position Mi randomly
For each i = 1:n do
  Calculate the fitness function fi
End For
While (iteration ≤ max_iteration) do
 Update pareto optimal solution archive using non-dominated sorting method
 Update the position of Mi
 Calculate the no. of flames
 Evaluate the fitness function fi
  If (iteration==1) then
   F = sort (M)
   OF = sort (OM)
  Else
   F = sort (Mt-1, Mt)
    OF = sort (OMt-1, OMt)
  End if
  For each i = 1:n do
     For each j = 1:d do
    Update the values of r and t
    Calculate the value of D w.r.t. corresponding Moth
    Update M(i,j) w.r.t. corresponding Moth
     End For
  End For
End While
Using the TOPSIS method (Algorithm 4), convert the archived multi-objectives into a single objective (closeness value) and display the optimum SVR model based on the highest closeness value
Print the best solution
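The position update inside the double For loop of Algorithm 1 follows the logarithmic spiral of Mirjalili [22]; a minimal MATLAB sketch, with M as the moth matrix, F the sorted flame matrix, and b the spiral shape constant (all variable names assumed), is:

% Sketch of the Algorithm 1 position update (after Mirjalili [22]); n, d,
% iteration, and max_iteration are assumed variables of the surrounding loop.
b = 1;
a = -1 - iteration/max_iteration;          % convergence constant r: -1 -> -2
for i = 1:n
    for j = 1:d
        t = (a - 1)*rand + 1;              % random t in [a, 1]
        D = abs(F(i,j) - M(i,j));          % distance between moth and its flame
        M(i,j) = D*exp(b*t)*cos(2*pi*t) + F(i,j);   % logarithmic spiral update
    end
end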
Algorithm 2: DFO Algorithm
Define number of dragonflies (nd), number of iteration (nitr), and archive size
As initial population, initialize position of dragonflies Pij
Assign step vector (Vij) values as Pij
Do While i<=nitr
 Calculate the values of inertia, separation, alignment, and cohesion weights,
  and the food and enemy factor values
 Compute the objective value of each dragonfly (Fi)
 Determine the non-dominated objective values
 Update the no. of non-dominated solutions in archive
 Assume best solution as food source and worst solution as enemy
 Update the values of Vij and Pij
 Check that Pij values lie between the lower and upper limits of the process parameters
End
Using the TOPSIS method, convert the archived multi-objectives into a single objective (closeness value) and display the optimum SVR model based on the highest closeness value
Print the best solution.
Algorithm 3: PSO Algorithm
P = Particle Initialization ();
For i=1 to itrmax
 For each particle p in P do
  fp = f(p);
  If fp is better than f(pBest);
   pBest = p;
  end
 end
gBest = best p in P
 Determine the non-dominated objective values
 Update the no. of non-dominated solutions in archive
 For each particle p in P do
   v = ω*v + c1*rand*(pBest − p) + c2*rand*(gBest − p);
  p = p+v;
 end
end
Using the TOPSIS method, convert the archived multi-objectives into a single objective (closeness value) and display the optimum SVR model based on the highest closeness value
Print the best solution.
Algorithm 4: TOPSIS Method
Read objectives matrix—Ork with weights (Wk) and type of objectives (OT)
For each Alternate r = na
 For each Response k = nr
Compute Normalized value of Ork (Nrk)
Calculate Performance Matrix (Ark)
 End
End
For each Response k = nr
 Determine positive ideal (Pk) and negative ideal solution (Mk)
End
For each Alternate r = na
   Determine Ideal (SPr) and negative ideal separation (SMr)
Compute Relative Closeness (Rr)
End
Arrange alternatives in descending order based on Rr
Display the alternate which has highest Rr value
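A compact MATLAB sketch of Algorithm 4 and Equations (4)–(9) (an illustrative implementation of this edit, not the authors' code) is given below; O is an na-by-nk objectives matrix, W a 1-by-nk weight vector, and OT a logical row marking the maximization objectives.

% Sketch of the TOPSIS method per Equations (4)-(9).
function [Rr, best] = topsis(O, W, OT)
    N  = O ./ sqrt(sum(O.^2, 1));            % Equation (4): normalization
    A  = N .* W;                             % Equation (5): weighted matrix
    Pk = max(A, [], 1);  Mk = min(A, [], 1); % maximization objectives
    Pk(~OT) = min(A(:,~OT), [], 1);          % Equation (6): flip ideal and
    Mk(~OT) = max(A(:,~OT), [], 1);          % negative ideal for minimization
    SP = sqrt(sum((A - Pk).^2, 2));          % Equation (7): ideal separation
    SM = sqrt(sum((A - Mk).^2, 2));          % Equation (8): negative ideal sep.
    Rr = SM ./ (SP + SM);                    % Equation (9): relative closeness
    [~, best] = max(Rr);                     % alternative with the highest Rr
end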

3. Results and Discussions

MATLAB code was developed for the MFO algorithm and executed 30 times. Each run yields an archive of 100 solutions, from which one best solution is selected using the TOPSIS [25] method; the 30 best solutions obtained by the MFO algorithm are shown in Table 6 for response L (crack length), and similarly for responses D (crack depth) and W (crack width) in Table 7 and Table 8. Out of these 30 solutions, the best one is again selected using the TOPSIS method for each response. Figure 4 confirms that the highest closeness values are obtained with the MFO algorithm as compared to the DFO [26] and PSO [27] algorithms. The comparison of closeness values across runs is shown in Figure 5, and the convergence plots of the three performance measures are shown in Figure 9.
Figure 4a–c represent the statistical distribution of closeness values obtained over 30 runs (from Minitab 19 software) using the MFO, DFO, and PSO algorithms for response L; similar representations for responses D and W are illustrated in Figure 4d–f and Figure 4g–i, respectively. All of the p-values are greater than 0.05, which shows that the closeness values are normally distributed and confirms that the results obtained by the MFO, DFO, and PSO algorithms are acceptable. Figure 5a–c illustrate the closeness values obtained using the MFO, DFO, and PSO algorithms for responses L, D, and W, respectively. In most of the runs, the MFO has the highest closeness value compared to the DFO and PSO algorithms. Based on the highest closeness value, the best SVR model has been selected for each response, L, D, and W, and is presented in Table 9.
Using these models, the predicted response values are calculated and represented in Figure 6. The actual training data of responses L, D, and W with their predicted values are plotted in Figure 6a–c; a similar representation is shown for the testing data set in Figure 6d–f. The predicted values of L and W are closer to the actual values in the training data set than those of response D. In the testing data set, almost all of the 29 samples of response W are predicted close to the actual values, whereas for L, 14 out of 20 samples are close, and for D, only 7 out of 25. It is inferred that the SVR models developed to predict the response values of L and W are more accurate than that of D.
The probability plots shown in Figure 7a–c reveal that the results obtained using MFO over 30 runs are normally distributed for the training data set of responses L, D, and W; Figure 7d–f show the same for the testing data set; hence, the developed SVR models are appropriate. The three performance measures of the SVR models for responses L, D, and W are presented in Figure 8a–c. RMSE values of 0.071, 0.0767, and 0.0678 were obtained in the testing data set for responses L, D, and W, respectively, along with low MAE values of 0.0624, 0.0672, and 0.0583 and high R² values of 0.9996, 0.9909, and 0.9999.
Figure 9 illustrates the convergence plots of the performance measures for the three responses L, D, and W over the total, training, and testing data sets. It is observed from Figure 9a–c that convergence occurred at the 34th, 41st, and 34th iterations in the MFO algorithm, whereas in DFO it occurred at the 52nd, 52nd, and 55th iterations for response L. Figure 9d–f confirm that convergence occurred at the 40th, 54th, and 48th iterations in the MFO algorithm, whereas in DFO it occurred at the 61st, 43rd, and 63rd iterations for response D. Figure 9g–i show that convergence occurred at the 40th, 41st, and 45th iterations in the MFO algorithm, whereas in DFO it occurred at the 61st, 57th, and 67th iterations for response W. Figure 9a–c also confirm that convergence in PSO for response L occurred at the 89th, 63rd, and 87th iterations for the RMSE, MAE, and R² values, later than the convergence iterations of the MFO algorithm, and Figure 9d–i show that PSO likewise converged at higher iteration numbers than MFO for responses D and W. Hence, the MFO algorithm outperformed the DFO and PSO algorithms in the convergence of the objective values.

Quantitative Comparison of MFO with DFO and PSO Algorithms

The performance of the MFO algorithm is compared with the DFO and PSO algorithms using two quantitative metrics, namely diversity and spacing [28]. Diversity indicates the spread of the Pareto solutions produced by an algorithm; a higher diversity value indicates better performance. Its mathematical representation is given in Equation (18). Spacing is a straightforward measure of the distance between a point and its closest neighbor; a lower spacing value indicates better performance. The value of spacing is calculated using Equation (19). Table 10 presents the diversity and spacing values of the MFO, DFO, and PSO algorithms.
\[ Div = \sqrt{\sum_{i=1}^{n_p}\left(P_i^{max} - P_i^{min}\right)^2} \tag{18} \]
where
\( P_i^{max} \) — maximum value of the ith performance measure
\( P_i^{min} \) — minimum value of the ith performance measure
\( n_p \) — number of performance measures
\[ SP = \sqrt{\frac{1}{n_p - 1}\sum_{i=1}^{n_p}\left(\bar{D} - D_i\right)^2} \tag{19} \]
where
\( \bar{D} = \frac{\sum_{i=1}^{n_p} D_i}{n_p} \) and \( D_i = \min_{k=i+1,\dots,n_p} \sum_{j=1}^{n_r} \left| P_{ij} - P_{kj} \right| \)
where \( n_r \) — number of runs; i — index for performance measures; j — index for runs.
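The two metrics can be computed as in the following MATLAB sketch (an assumption of this edit), where P is an np-by-nr matrix holding the nr runs of the np performance measures for one algorithm and one response:

% Sketch of Equations (18) and (19) for one algorithm and response.
np  = size(P, 1);
Div = sqrt(sum((max(P,[],2) - min(P,[],2)).^2));    % Equation (18): diversity
Di  = zeros(np, 1);
for i = 1:np                                        % distance to nearest neighbour
    others = setdiff(1:np, i);
    Di(i)  = min(sum(abs(P(others,:) - P(i,:)), 2));
end
Dbar = mean(Di);
SP   = sqrt(sum((Dbar - Di).^2) / (np - 1));        % Equation (19): spacing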
Table 10. Comparison of performance of algorithms using performance indicators.

Response | Algorithm | Diversity | Spacing
L | MFO | 0.0652 | 0.0053
L | DFO | 0.0644 | 0.0054
L | PSO | 0.0648 | 0.0057
D | MFO | 0.0679 | 0.0026
D | DFO | 0.0664 | 0.0058
D | PSO | 0.0653 | 0.0062
W | MFO | 0.0685 | 0.0063
W | DFO | 0.0676 | 0.0066
W | PSO | 0.0680 | 0.0073
It is understood from Table 10 that the diversity value of the MFO algorithm is higher than that of both the DFO and PSO algorithms for all three response values, L, D, and W. Furthermore, the spacing value is lower for the MFO algorithm for all three responses. Hence, it is confirmed that the MFO algorithm outperformed both the DFO and PSO algorithms. The statistical analysis of the three performance measures for the training, testing, and total data sets for the MFO, DFO, and PSO algorithms is presented in Table 11, Table 12 and Table 13. The best values of the three performance measures RMSE, MAE, and R² are 0.0544, 0.0527, and 0.9999, respectively, for response L in the MFO algorithm, whereas in the DFO algorithm the values are 0.0626, 0.0539, and 0.9993, and in the PSO algorithm 0.0626, 0.0538, and 0.9993. This confirms that the MFO algorithm is better than the DFO and PSO algorithms in predicting response L. The same holds for response W against the DFO algorithm, although in the PSO algorithm both the RMSE and MAE values are lower than those of the MFO algorithm. In the case of response D, the RMSE and MAE values of the DFO and PSO algorithms are slightly lower than those of the MFO algorithm, but the R² value of the MFO algorithm is higher. In overall performance, the SVR models developed by the MFO algorithm outperformed those of the DFO and PSO algorithms.
The comparison of the performance measures of the proposed method using MFO (PM-MFO) and the existing method using Feed Forward Back Propagation (EX-FFBP) by Daniel et al. [1] is illustrated in Figure 10. For the tested data set, the proposed method performed well for response L in Figure 10a and response W in Figure 10c as compared to response D in Figure 10b.

4. Conclusions

In this work, SVR models have been developed to predict the length, depth, and width of defect images using the given GLCM features extracted from MFL images. Five parameters that decide the performance of SVR have been considered, and three performance measures, the RMSE, MAE, and R² values, are taken into account to evaluate the performance of the SVR models. The MFO algorithm is implemented to find the best parameters of the SVR models, and the DFO and PSO algorithms are used to benchmark the performance of the MFO algorithm. The normality test, probability, and residual plots ensured that the results obtained using these algorithms are normally distributed and hence acceptable. Convergence, diversity, and spacing of the performance measures are used to evaluate the MFO algorithm against the DFO and PSO algorithms: faster convergence (34 iterations), a higher diversity value (0.0685), and a lower spacing value (0.0026) obtained with the MFO algorithm confirmed that it outperformed the DFO and PSO algorithms. The reported values of the three performance measures of the SVR models, the RMSE, MAE, and R² values for responses L, D, and W using the MFO algorithm, are 0.0527, 0.0613, and 0.9999; these low RMSE and MAE values and high R² values confirm the performance of the MFO algorithm. Furthermore, in comparison with the existing FFBP method, the proposed MFO–SVR method proved its effectiveness for responses L and W. The proposed SVR model is not suitable for a very large dataset, and all features were considered without prioritizing them. As future work, this study can be extended by adding other GLCM features extracted from the MFL image with data augmentation for different L, D, and W; furthermore, the features can be prioritized using existing algorithms, and the performance of the SVR models can be studied further.

Author Contributions

Conceptualization, M.V.A.W., S.R., R.C., M.S.K. and M.E.; data curation, M.V.A.W.; formal analysis, M.V.A.W.; investigation, M.V.A.W.; methodology, S.R. and M.E.; resources, R.C. and M.S.K.; software, S.R., R.C., M.S.K. and M.E.; visualization, M.V.A.W. and S.R.; writing—original draft, M.V.A.W. and S.R.; writing—review and editing, R.C., M.S.K. and M.E. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data presented in this study are available upon request through email to the corresponding author.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Daniel, J.; Abudhahir, A.; Paulin, J.J. Magnetic Flux Leakage (MFL) Based Defect Characterization of Steam Generator Tubes Using Artificial Neural Networks. J. Magn. 2017, 22, 34–42.
  2. Raj, B.; Jayakumar, T.; Rao, B.P.C. Non-Destructive Testing and Evaluation for Structural Integrity. Sadhana 1995, 20, 5–38.
  3. Dwivedi, S.K.; Vishwakarma, M.; Soni, A. Advances and Researches on Non Destructive Testing: A Review. Mater. Today Proc. 2018, 5, 3690–3698.
  4. Göktepe, M. Non-Destructive Crack Detection by Capturing Local Flux Leakage Field. Sens. Actuators A Phys. 2001, 91, 70–72.
  5. Ling, Z.W.; Cai, W.Y.; Li, S.H.; Li, C. A Practical Signal Progressing Method for Magnetic Flux Leakage Testing. Appl. Mech. Mater. 2014, 599–601, 782–785.
  6. Wu, J.; Wu, W.; Li, E.; Kang, Y. Magnetic Flux Leakage Course of Inner Defects and Its Detectable Depth. Chin. J. Mech. Eng. 2021, 34, 63.
  7. Ege, Y.; Coramik, M. A New Measurement System Using Magnetic Flux Leakage Method in Pipeline Inspection. Measurement 2018, 123, 163–174.
  8. Shi, Y.; Zhang, C.; Li, R.; Cai, M.; Jia, G. Theory and Application of Magnetic Flux Leakage Pipeline Detection. Sensors 2015, 15, 31036–31055.
  9. Suresh, V.; Abudhahir, A.; Daniel, J. Development of Magnetic Flux Leakage Measuring System for Detection of Defect in Small Diameter Steam Generator Tube. Measurement 2017, 95, 273–279.
  10. Jin, C.; Kong, X.; Chang, J.; Cheng, H.; Liu, X. Internal Crack Detection of Castings: A Study Based on Relief Algorithm and Adaboost-SVM. Int. J. Adv. Manuf. Technol. 2020, 108, 3313–3322.
  11. Zhang, J.; Teng, Y.-F.; Chen, W. Support Vector Regression with Modified Firefly Algorithm for Stock Price Forecasting. Appl. Intell. 2018, 49, 1658–1674.
  12. Kari, T.; Gao, W.; Tuluhong, A.; Yaermaimaiti, Y.; Zhang, Z. Mixed Kernel Function Support Vector Regression with Genetic Algorithm for Forecasting Dissolved Gas Content in Power Transformers. Energies 2018, 11, 2437.
  13. Houssein, E.H. Particle Swarm Optimization-Enhanced Twin Support Vector Regression for Wind Speed Forecasting. J. Intell. Syst. 2017, 28, 905–914.
  14. Li, S.; Fang, H.; Liu, X. Parameter Optimization of Support Vector Regression Based on Sine Cosine Algorithm. Expert Syst. Appl. 2018, 91, 63–77.
  15. Yuan, F.-C. Parameters Optimization Using Genetic Algorithms in Support Vector Regression for Sales Volume Forecasting. Appl. Math. 2012, 3, 1480–1486.
  16. Papadimitriou, T.; Gogas, P.; Stathakis, E. Forecasting Energy Markets Using Support Vector Machines. Energy Econ. 2014, 44, 135–142.
  17. Gayathri, R.; Rani, S.U.; Cepova, L.; Rajesh, M.; Kalita, K. A Comparative Analysis of Machine Learning Models in Prediction of Mortar Compressive Strength. Processes 2022, 10, 1387.
  18. Gupta, K.K.; Kalita, K.; Ghadai, R.K.; Ramachandran, M.; Gao, X.Z. Machine Learning-Based Predictive Modelling of Biodiesel Production—A Comparative Perspective. Energies 2021, 14, 1122.
  19. Ganesh, N.; Joshi, M.; Dutta, P.; Kalita, K. PSO-Tuned Support Vector Machine Metamodels for Assessment of Turbulent Flows in Pipe Bends. Eng. Comput. 2019, 37, 981–1001.
  20. Jayasudha, M.; Elangovan, M.; Mahdal, M.; Priyadarshini, J. Accurate Estimation of Tensile Strength of 3D Printed Parts Using Machine Learning Algorithms. Processes 2022, 10, 1158.
  21. Priyadarshini, J.; Elangovan, M.; Mahdal, M.; Jayasudha, M. Machine-Learning-Assisted Prediction of Maximum Metal Recovery from Spent Zinc–Manganese Batteries. Processes 2022, 10, 1034.
  22. Mirjalili, S. Moth-Flame Optimization Algorithm: A Novel Nature-Inspired Heuristic Paradigm. Knowl.-Based Syst. 2015, 89, 228–249.
  23. Abdelazim, G.H.; Mohamed, A.; El Aziz, M.A. A Comprehensive Review of Moth-Flame Optimisation: Variants, Hybrids, and Applications. J. Exp. Theor. Artif. Intell. 2020, 32, 705–725.
  24. Karthick, M.; Anand, P.; Siva Kumar, M.; Meikandan, M. Exploration of MFOA in PAC Parameters on Machining Inconel 718. Mater. Manuf. Process. 2021, 37, 1433–1445.
  25. Ananthakumar, K.; Rajamani, D.; Balasubramanian, E.; Paulo Davim, J. Measurement and Optimization of Multi-Response Characteristics in Plasma Arc Cutting of Monel 400TM Using RSM and TOPSIS. Measurement 2019, 135, 725–737.
  26. Mirjalili, S. Dragonfly Algorithm: A New Meta-Heuristic Optimization Technique for Solving Single-Objective, Discrete, and Multi-Objective Problems. Neural Comput. Appl. 2015, 27, 1053–1073.
  27. Kumar, M.S.; Rajamani, D.; Nasr, E.A.; Balasubramanian, E.; Mohamed, H.; Astarita, A. A Hybrid Approach of ANFIS—Artificial Bee Colony Algorithm for Intelligent Modeling and Optimization of Plasma Arc Cutting on Monel™ 400 Alloy. Materials 2021, 14, 6373.
  28. Khalilpourazari, S.; Khalilpourazary, S. Optimization of Time, Cost and Surface Roughness in Grinding Process Using a Robust Multi-Objective Dragonfly Algorithm. Neural Comput. Appl. 2018, 32, 3987–3998.
Figure 4. Statistical analysis of closeness values for (a) Response L-MFO; (b) Response L-DFO; (c) Response L-PSO; (d) Response D-MFO; (e) Response D-DFO; (f) Response D-PSO; (g) Response W-MFO; (h) Response W-DFO; (i) Response W-PSO.
Figure 5. Comparison of closeness values for (a) Response L; (b) Response D; (c) Response W.
Figure 6. Comparison of actual and predicted values for (a) Training Dataset-Response L; (b) Training Dataset-Response D; (c) Training Dataset-Response W; (d) Testing Dataset-Response L; (e) Testing Dataset-Response D; (f) Testing Dataset-Response W.
Figure 7. Probability plot for (a) Training Dataset-Response L; (b) Training Dataset-Response D; (c) Training Dataset-Response W; (d) Testing Dataset-Response L; (e) Testing Dataset-Response D; (f) Testing Dataset-Response W.
Figure 8. Performance measures of SVR models: (a) RMSE; (b) MAE; (c) R².
Figure 9. Convergence plot of performance of SVR for (a) Response L—RMSE; (b) Response L—MAE; (c) Response L—R²; (d) Response D—RMSE; (e) Response D—MAE; (f) Response D—R²; (g) Response W—RMSE; (h) Response W—MAE; (i) Response W—R².
Figure 10. Comparison of performance measures: (a) RMSE; (b) MAE; (c) R².
Table 3. Moth's representation.

Kernel Function | Name of Solver | Validation Scheme | Validation Scheme's Value | % of Training Data
Dimension 1 | Dimension 2 | Dimension 3 | Dimension 4 | Dimension 5
3 | 1 | 2 | 25 | 20
Table 4. Lower and upper bound values of SVR parameters.

Dimension | Parameter Considered for Optimization | Lower Bound Value | Upper Bound Value
1 | Kernel function (kf) | 1 | 6
2 | Solver (sr) | 1 | 3
3 | Validation scheme (vs) | 1 | 2
4 | Validation scheme's parameters: K-fold — number of folds used for cross-validation (nf) | 5 | 10
4 | Validation scheme's parameters: Holdout — % of data holdout for validation (ho) | 10 | 40
5 | % of training data (td) | 10 | 35
Table 5. Parameters of the MFO, DFO and PSO algorithms [24].

MFO algorithm: No. of moths (N) = 100; No. of iterations (nitr) = 100; position of moth close to the flame (t) = −1 to −2; archive size = 100.
DFO algorithm: No. of dragonflies (nd) = 100; No. of iterations (nitr) = 100; minimum and maximum inertia weights wmin = 0.2 and wmax = 0.9; archive size = 100.
PSO algorithm: Particle size (N) = 100; No. of iterations (nitr) = 100; learning factors (c1 and c2) = 2 and 2; inertia weight (ω) = 0.6.
Table 6. Performance of SVR model for crack length (L) using MFO algorithm.

Sl. No. | RMSE (Total / Training / Testing) | MAE (Total / Training / Testing) | R² (Total / Training / Testing) | TOPSIS Value
1 | 0.0652 / 0.0613 / 0.0727 | 0.0564 / 0.0545 / 0.0629 | 0.9946 / 0.9914 / 0.9853 | 0.4874
2 | 0.0660 / 0.0619 / 0.0713 | 0.0544 / 0.0547 / 0.0631 | 0.9655 / 0.9778 / 0.9979 | 0.4966
3 | 0.0647 / 0.0627 / 0.0707 | 0.0547 / 0.0535 / 0.0635 | 0.9678 / 0.9965 / 0.9671 | 0.5314
4 | 0.0644 / 0.0631 / 0.0723 | 0.0569 / 0.0543 / 0.0620 | 0.9846 / 0.9634 / 0.9960 | 0.4418
5 | 0.0645 / 0.0625 / 0.0730 | 0.0558 / 0.0537 / 0.0634 | 0.9908 / 0.9999 / 0.9824 | 0.5024
6 | 0.0648 / 0.0633 / 0.0719 | 0.0558 / 0.0549 / 0.0634 | 0.9697 / 0.9891 / 0.9674 | 0.3462
7 | 0.0653 / 0.0631 / 0.0713 | 0.0568 / 0.0529 / 0.0627 | 0.9746 / 0.9823 / 0.9878 | 0.4752
8 | 0.0648 / 0.0637 / 0.0711 | 0.0545 / 0.0554 / 0.0640 | 0.9734 / 0.9657 / 0.9857 | 0.3977
9 | 0.0651 / 0.0619 / 0.0702 | 0.0566 / 0.0548 / 0.0636 | 0.9780 / 0.9935 / 0.9729 | 0.4535
10 | 0.0642 / 0.0629 / 0.0711 | 0.0567 / 0.0528 / 0.0627 | 0.9666 / 0.9890 / 0.9805 | 0.5199
11 | 0.0649 / 0.0625 / 0.0713 | 0.0548 / 0.0551 / 0.0636 | 0.9896 / 0.9767 / 0.9606 | 0.4327
12 | 0.0641 / 0.0626 / 0.0710 | 0.0553 / 0.0538 / 0.0624 | 0.9998 / 0.9999 / 0.9996 | 0.6882
13 | 0.0656 / 0.0642 / 0.0721 | 0.0560 / 0.0546 / 0.0642 | 0.9954 / 0.9726 / 0.9915 | 0.3502
14 | 0.0636 / 0.0641 / 0.0718 | 0.0547 / 0.0551 / 0.0615 | 0.9631 / 0.9833 / 0.9639 | 0.4548
15 | 0.0640 / 0.0627 / 0.0699 | 0.0545 / 0.0540 / 0.0639 | 0.9910 / 0.9876 / 0.9872 | 0.6152
16 | 0.0638 / 0.0637 / 0.0700 | 0.0552 / 0.0537 / 0.0629 | 0.9937 / 0.9945 / 0.9906 | 0.6295
17 | 0.0659 / 0.0642 / 0.0716 | 0.0553 / 0.0530 / 0.0616 | 0.9778 / 0.9684 / 0.9913 | 0.4954
18 | 0.0634 / 0.0625 / 0.0712 | 0.0545 / 0.0554 / 0.0642 | 0.9840 / 0.9803 / 0.9880 | 0.5066
19 | 0.0631 / 0.0617 / 0.0715 | 0.0561 / 0.0551 / 0.0638 | 0.9740 / 0.9770 / 0.9975 | 0.4963
20 | 0.0651 / 0.0630 / 0.0697 | 0.0557 / 0.0527 / 0.0614 | 0.9901 / 0.9696 / 0.9715 | 0.5951
21 | 0.0631 / 0.0624 / 0.0716 | 0.0569 / 0.0546 / 0.0620 | 0.9834 / 0.9903 / 0.9994 | 0.5545
22 | 0.0635 / 0.0640 / 0.0718 | 0.0548 / 0.0545 / 0.0625 | 0.9842 / 0.9917 / 0.9818 | 0.5318
23 | 0.0648 / 0.0619 / 0.0713 | 0.0567 / 0.0538 / 0.0631 | 0.9935 / 0.9754 / 0.9859 | 0.5037
24 | 0.0651 / 0.0620 / 0.0701 | 0.0560 / 0.0540 / 0.0640 | 0.9781 / 0.9727 / 0.9974 | 0.5030
25 | 0.0657 / 0.0630 / 0.0723 | 0.0563 / 0.0538 / 0.0625 | 0.9951 / 0.9792 / 0.9686 | 0.4266
26 | 0.0648 / 0.0616 / 0.0724 | 0.0552 / 0.0540 / 0.0634 | 0.9795 / 0.9711 / 0.9989 | 0.5173
27 | 0.0656 / 0.0621 / 0.0706 | 0.0555 / 0.0528 / 0.0642 | 0.9847 / 0.9663 / 0.9664 | 0.4822
28 | 0.0650 / 0.0633 / 0.0714 | 0.0545 / 0.0539 / 0.0625 | 0.9823 / 0.9758 / 0.9972 | 0.5544
29 | 0.0653 / 0.0617 / 0.0704 | 0.0566 / 0.0530 / 0.0632 | 0.9886 / 0.9714 / 0.9979 | 0.5570
30 | 0.0645 / 0.0632 / 0.0703 | 0.0554 / 0.0529 / 0.0640 | 0.9816 / 0.9697 / 0.9666 | 0.4931
Table 7. Performance of SVR model for crack depth (D) using MFO algorithm.

Run No. | RMSE (Total / Training / Testing) | MAE (Total / Training / Testing) | R² (Total / Training / Testing) | TOPSIS Value
1 | 0.0620 / 0.0599 / 0.0782 | 0.0528 / 0.0496 / 0.0660 | 0.9836 / 0.9744 / 0.9836 | 0.6137
2 | 0.0631 / 0.0574 / 0.0787 | 0.0540 / 0.0508 / 0.0691 | 0.9613 / 0.9871 / 0.9587 | 0.4171
3 | 0.0631 / 0.0582 / 0.0782 | 0.0527 / 0.0493 / 0.0692 | 0.9759 / 0.9542 / 0.9561 | 0.4690
4 | 0.0634 / 0.0578 / 0.0763 | 0.0547 / 0.0502 / 0.0663 | 0.9565 / 0.9884 / 0.9838 | 0.5581
5 | 0.0642 / 0.0600 / 0.0758 | 0.0544 / 0.0511 / 0.0675 | 0.9725 / 0.9577 / 0.9765 | 0.3767
6 | 0.0638 / 0.0575 / 0.0784 | 0.0549 / 0.0488 / 0.0681 | 0.9853 / 0.9856 / 0.9519 | 0.4897
7 | 0.0644 / 0.0590 / 0.0782 | 0.0538 / 0.0489 / 0.0669 | 0.9886 / 0.9843 / 0.9867 | 0.5677
8 | 0.0618 / 0.0592 / 0.0762 | 0.0542 / 0.0488 / 0.0684 | 0.9718 / 0.9890 / 0.9716 | 0.5800
9 | 0.0623 / 0.0592 / 0.0761 | 0.0548 / 0.0504 / 0.0678 | 0.9630 / 0.9580 / 0.9728 | 0.4251
10 | 0.0631 / 0.0598 / 0.0764 | 0.0539 / 0.0502 / 0.0673 | 0.9779 / 0.9726 / 0.9753 | 0.4878
11 | 0.0647 / 0.0575 / 0.0784 | 0.0532 / 0.0490 / 0.0668 | 0.9884 / 0.9686 / 0.9814 | 0.5825
12 | 0.0641 / 0.0597 / 0.0783 | 0.0551 / 0.0507 / 0.0684 | 0.9722 / 0.9594 / 0.9851 | 0.2967
13 | 0.0631 / 0.0589 / 0.0774 | 0.0526 / 0.0503 / 0.0689 | 0.9907 / 0.9754 / 0.9664 | 0.5169
14 | 0.0627 / 0.0593 / 0.0774 | 0.0533 / 0.0489 / 0.0684 | 0.9599 / 0.9766 / 0.9546 | 0.4781
15 | 0.0618 / 0.0579 / 0.0763 | 0.0551 / 0.0489 / 0.0673 | 0.9564 / 0.9610 / 0.9803 | 0.5524
16 | 0.0640 / 0.0589 / 0.0779 | 0.0534 / 0.0491 / 0.0691 | 0.9638 / 0.9732 / 0.9644 | 0.4242
17 | 0.0632 / 0.0594 / 0.0782 | 0.0533 / 0.0507 / 0.0668 | 0.9678 / 0.9918 / 0.9845 | 0.5149
18 | 0.0629 / 0.0585 / 0.0767 | 0.0535 / 0.0497 / 0.0672 | 0.9917 / 0.9922 / 0.9909 | 0.6852
19 | 0.0630 / 0.0586 / 0.0769 | 0.0549 / 0.0493 / 0.0691 | 0.9644 / 0.9801 / 0.9840 | 0.4726
20 | 0.0622 / 0.0576 / 0.0770 | 0.0528 / 0.0493 / 0.0668 | 0.9795 / 0.9689 / 0.9582 | 0.6559
21 | 0.0640 / 0.0575 / 0.0763 | 0.0533 / 0.0490 / 0.0667 | 0.9557 / 0.9539 / 0.9564 | 0.5229
22 | 0.0628 / 0.0592 / 0.0778 | 0.0549 / 0.0509 / 0.0690 | 0.9680 / 0.9641 / 0.9861 | 0.3616
23 | 0.0646 / 0.0596 / 0.0787 | 0.0538 / 0.0505 / 0.0661 | 0.9637 / 0.9843 / 0.9530 | 0.3937
24 | 0.0617 / 0.0593 / 0.0787 | 0.0541 / 0.0505 / 0.0669 | 0.9642 / 0.9663 / 0.9784 | 0.4637
25 | 0.0642 / 0.0583 / 0.0781 | 0.0540 / 0.0503 / 0.0679 | 0.9562 / 0.9558 / 0.9803 | 0.3697
26 | 0.0636 / 0.0601 / 0.0762 | 0.0543 / 0.0500 / 0.0666 | 0.9756 / 0.9728 / 0.9685 | 0.4663
27 | 0.0633 / 0.0588 / 0.0778 | 0.0525 / 0.0495 / 0.0680 | 0.9632 / 0.9671 / 0.9663 | 0.4892
28 | 0.0637 / 0.0601 / 0.0757 | 0.0539 / 0.0504 / 0.0686 | 0.9582 / 0.9819 / 0.9900 | 0.4453
29 | 0.0639 / 0.0575 / 0.0757 | 0.0525 / 0.0490 / 0.0676 | 0.9521 / 0.9734 / 0.9670 | 0.5811
30 | 0.0619 / 0.0579 / 0.0759 | 0.0547 / 0.0491 / 0.0681 | 0.9633 / 0.9845 / 0.9687 | 0.5781
Table 8. Performance of SVR model for crack width (W) using MFO algorithm.

Run No. | RMSE (Total / Training / Testing) | MAE (Total / Training / Testing) | R² (Total / Training / Testing) | TOPSIS Value
1 | 0.0775 / 0.0831 / 0.0680 | 0.0639 / 0.0667 / 0.0580 | 0.9840 / 0.9787 / 0.9609 | 0.6411
2 | 0.0817 / 0.0830 / 0.0668 | 0.0654 / 0.0688 / 0.0599 | 0.9604 / 0.9855 / 0.9827 | 0.4248
3 | 0.0787 / 0.0825 / 0.0672 | 0.0647 / 0.0668 / 0.0592 | 0.9957 / 0.9901 / 0.9730 | 0.6467
4 | 0.0810 / 0.0822 / 0.0666 | 0.0669 / 0.0696 / 0.0574 | 0.9622 / 0.9962 / 0.9992 | 0.5086
5 | 0.0812 / 0.0837 / 0.0683 | 0.0639 / 0.0692 / 0.0599 | 0.9675 / 0.9709 / 0.9874 | 0.4201
6 | 0.0796 / 0.0810 / 0.0702 | 0.0671 / 0.0680 / 0.0573 | 0.9694 / 0.9745 / 0.9689 | 0.4598
7 | 0.0802 / 0.0828 / 0.0686 | 0.0671 / 0.0678 / 0.0580 | 0.9625 / 0.9909 / 0.9727 | 0.4465
8 | 0.0817 / 0.0808 / 0.0665 | 0.0666 / 0.0698 / 0.0580 | 0.9661 / 0.9854 / 0.9742 | 0.4744
9 | 0.0788 / 0.0804 / 0.0690 | 0.0668 / 0.0673 / 0.0578 | 0.9753 / 0.9618 / 0.9930 | 0.5537
10 | 0.0810 / 0.0815 / 0.0691 | 0.0668 / 0.0680 / 0.0582 | 0.9672 / 0.9716 / 0.9719 | 0.4151
11 | 0.0803 / 0.0836 / 0.0688 | 0.0671 / 0.0695 / 0.0572 | 0.9614 / 0.9819 / 0.9837 | 0.4101
12 | 0.0783 / 0.0832 / 0.0685 | 0.0662 / 0.0668 / 0.0579 | 0.9666 / 0.9961 / 0.9806 | 0.5763
13 | 0.0772 / 0.0832 / 0.0667 | 0.0642 / 0.0676 / 0.0605 | 0.9631 / 0.9882 / 0.9814 | 0.5732
14 | 0.0777 / 0.0830 / 0.0670 | 0.0658 / 0.0664 / 0.0575 | 0.9927 / 0.9993 / 0.9744 | 0.6982
15 | 0.0796 / 0.0835 / 0.0700 | 0.0659 / 0.0696 / 0.0579 | 0.9672 / 0.9756 / 0.9802 | 0.3904
16 | 0.0792 / 0.0836 / 0.0684 | 0.0660 / 0.0665 / 0.0579 | 0.9939 / 0.9824 / 0.9805 | 0.5847
17 | 0.0782 / 0.0831 / 0.0689 | 0.0653 / 0.0682 / 0.0594 | 0.9694 / 0.9715 / 0.9875 | 0.4838
18 | 0.0808 / 0.0846 / 0.0685 | 0.0669 / 0.0682 / 0.0604 | 0.9701 / 0.9695 / 0.9994 | 0.3360
19 | 0.0797 / 0.0843 / 0.0677 | 0.0652 / 0.0683 / 0.0602 | 0.9936 / 0.9885 / 0.9893 | 0.4873
20 | 0.0818 / 0.0847 / 0.0690 | 0.0659 / 0.0681 / 0.0576 | 0.9698 / 0.9806 / 0.9882 | 0.4237
21 | 0.0794 / 0.0839 / 0.0665 | 0.0650 / 0.0683 / 0.0574 | 0.9683 / 0.9609 / 0.9849 | 0.5471
22 | 0.0790 / 0.0812 / 0.0667 | 0.0677 / 0.0669 / 0.0573 | 0.9909 / 0.9961 / 0.9777 | 0.6279
23 | 0.0787 / 0.0820 / 0.0678 | 0.0651 / 0.0675 / 0.0583 | 0.9999 / 0.9999 / 0.9999 | 0.7137
24 | 0.0817 / 0.0842 / 0.0680 | 0.0658 / 0.0669 / 0.0597 | 0.9912 / 0.9706 / 0.9794 | 0.4298
25 | 0.0785 / 0.0820 / 0.0669 | 0.0641 / 0.0668 / 0.0605 | 0.9958 / 0.9921 / 0.9778 | 0.6317
26 | 0.0792 / 0.0848 / 0.0696 | 0.0656 / 0.0688 / 0.0585 | 0.9674 / 0.9625 / 0.9847 | 0.3872
27 | 0.0813 / 0.0845 / 0.0697 | 0.0645 / 0.0686 / 0.0596 | 0.9934 / 0.9677 / 0.9944 | 0.4132
28 | 0.0809 / 0.0814 / 0.0674 | 0.0640 / 0.0663 / 0.0589 | 0.9947 / 0.9771 / 0.9864 | 0.6391
29 | 0.0771 / 0.0842 / 0.0683 | 0.0646 / 0.0701 / 0.0589 | 0.9893 / 0.9609 / 0.9864 | 0.5007
30 | 0.0773 / 0.0830 / 0.0669 | 0.0654 / 0.0671 / 0.0574 | 0.9766 / 0.9855 / 0.9850 | 0.6881
Table 9. Best parameters of the SVR model for responses L, D, and W.

Response | Kernel Function | Solver | Validation Scheme | Validation Scheme's Parameter | % of Test Data | No. of Test Data
L | Linear | SMO | Holdout | 14 | 17.1 | 20
D | Linear | SMO | Holdout | 16 | 21.7 | 25
W | Linear | SMO | Holdout | 15 | 25.2 | 29
Table 11. Statistical analysis of output obtained by the MFO algorithm.

Response | Statistic | RMSE (Total / Training / Testing) | MAE (Total / Training / Testing) | R² (Total / Training / Testing)
L | Mean | 0.0647 / 0.0628 / 0.0713 | 0.0556 / 0.0540 / 0.0631 | 0.9825 / 0.9807 / 0.9842
L | StDev | 0.0008 / 0.0008 / 0.0009 | 0.0009 / 0.0008 / 0.0008 | 0.0100 / 0.0108 / 0.0126
L | Minimum | 0.0631 / 0.0613 / 0.0697 | 0.0544 / 0.0527 / 0.0614 | 0.9631 / 0.9635 / 0.9606
L | Q1 | 0.0641 / 0.0620 / 0.0706 | 0.0548 / 0.0534 / 0.0625 | 0.9744 / 0.9713 / 0.9708
L | Median | 0.0648 / 0.0627 / 0.0713 | 0.0556 / 0.0540 / 0.0631 | 0.9837 / 0.9785 / 0.9865
L | Q3 | 0.0652 / 0.0633 / 0.0718 | 0.0565 / 0.0547 / 0.0638 | 0.9908 / 0.9906 / 0.9973
L | Maximum | 0.0660 / 0.0642 / 0.0730 | 0.0569 / 0.0554 / 0.0642 | 0.9998 / 0.9999 / 0.9996
L | Range | 0.0028 / 0.0029 / 0.0033 | 0.0025 / 0.0027 / 0.0029 | 0.0367 / 0.0365 / 0.0390
D | Mean | 0.0804 / 0.0843 / 0.0697 | 0.0668 / 0.0693 / 0.0597 | 0.9767 / 0.9771 / 0.9810
D | StDev | 0.0011 / 0.0012 / 0.0009 | 0.0010 / 0.0010 / 0.0008 | 0.0121 / 0.0128 / 0.0125
D | Minimum | 0.0787 / 0.0821 / 0.0678 | 0.0653 / 0.0675 / 0.0583 | 0.9603 / 0.9600 / 0.9600
D | Q1 | 0.0792 / 0.0834 / 0.0689 | 0.0659 / 0.0684 / 0.0590 | 0.9658 / 0.9667 / 0.9712
D | Median | 0.0804 / 0.0846 / 0.0699 | 0.0668 / 0.0695 / 0.0598 | 0.9743 / 0.9750 / 0.9809
D | Q3 | 0.0813 / 0.0853 / 0.0704 | 0.0675 / 0.0701 / 0.0603 | 0.9844 / 0.9927 / 0.9945
D | Maximum | 0.0824 / 0.0861 / 0.0711 | 0.0683 / 0.0707 / 0.0611 | 0.9997 / 0.9970 / 0.9997
D | Range | 0.0037 / 0.0039 / 0.0033 | 0.0031 / 0.0032 / 0.0028 | 0.0394 / 0.0371 / 0.0398
W | Mean | 0.0796 / 0.0830 / 0.0681 | 0.0657 / 0.0680 / 0.0586 | 0.9775 / 0.9804 / 0.9829
W | StDev | 0.0015 / 0.0012 / 0.0011 | 0.0011 / 0.0011 / 0.0011 | 0.0134 / 0.0120 / 0.0092
W | Minimum | 0.0771 / 0.0804 / 0.0665 | 0.0639 / 0.0663 / 0.0572 | 0.9604 / 0.9609 / 0.9609
W | Q1 | 0.0784 / 0.0820 / 0.0669 | 0.0646 / 0.0669 / 0.0575 | 0.9670 / 0.9708 / 0.9769
W | Median | 0.0795 / 0.0831 / 0.0681 | 0.0657 / 0.0680 / 0.0581 | 0.9699 / 0.9812 / 0.9832
W | Q3 | 0.0810 / 0.0840 / 0.0689 | 0.0668 / 0.0688 / 0.0596 | 0.9929 / 0.9903 / 0.9877
W | Maximum | 0.0818 / 0.0848 / 0.0702 | 0.0677 / 0.0701 / 0.0605 | 0.9999 / 0.9999 / 0.9999
W | Range | 0.0046 / 0.0044 / 0.0037 | 0.0039 / 0.0038 / 0.0033 | 0.0396 / 0.0390 / 0.0390
Table 12. Statistical analysis of output obtained by the DFO algorithm.

Response | Statistic | RMSE (Total / Training / Testing) | MAE (Total / Training / Testing) | R² (Total / Training / Testing)
L | Mean | 0.0656 / 0.0640 / 0.0729 | 0.0565 / 0.0549 / 0.0643 | 0.9806 / 0.9840 / 0.9850
L | StDev | 0.0010 / 0.0008 / 0.0009 | 0.0007 / 0.0008 / 0.0009 | 0.0125 / 0.0130 / 0.0101
L | Minimum | 0.0641 / 0.0626 / 0.0711 | 0.0554 / 0.0539 / 0.0625 | 0.9611 / 0.9613 / 0.9637
L | Q1 | 0.0647 / 0.0633 / 0.0725 | 0.0558 / 0.0542 / 0.0635 | 0.9709 / 0.9734 / 0.9776
L | Median | 0.0657 / 0.0640 / 0.0728 | 0.0566 / 0.0545 / 0.0644 | 0.9787 / 0.9865 / 0.9878
L | Q3 | 0.0666 / 0.0646 / 0.0738 | 0.0570 / 0.0555 / 0.0650 | 0.9931 / 0.9967 / 0.9930
L | Maximum | 0.0672 / 0.0655 / 0.0743 | 0.0580 / 0.0565 / 0.0654 | 0.9993 / 0.9993 / 0.9983
L | Range | 0.0031 / 0.0030 / 0.0033 | 0.0026 / 0.0026 / 0.0029 | 0.0382 / 0.0380 / 0.0347
D | Mean | 0.0644 / 0.0598 / 0.0787 | 0.0549 / 0.0510 / 0.0690 | 0.9737 / 0.9714 / 0.9741
D | StDev | 0.0008 / 0.0008 / 0.0012 | 0.0007 / 0.0008 / 0.0010 | 0.0120 / 0.0112 / 0.0113
D | Minimum | 0.0630 / 0.0585 / 0.0767 | 0.0537 / 0.0497 / 0.0674 | 0.9531 / 0.9535 / 0.9518
D | Q1 | 0.0637 / 0.0591 / 0.0778 | 0.0544 / 0.0503 / 0.0680 | 0.9629 / 0.9626 / 0.9664
D | Median | 0.0643 / 0.0599 / 0.0789 | 0.0548 / 0.0511 / 0.0693 | 0.9760 / 0.9699 / 0.9726
D | Q3 | 0.0651 / 0.0606 / 0.0797 | 0.0554 / 0.0519 / 0.0700 | 0.9846 / 0.9797 / 0.9842
D | Maximum | 0.0658 / 0.0612 / 0.0803 | 0.0561 / 0.0522 / 0.0705 | 0.9906 / 0.9917 / 0.9905
D | Range | 0.0028 / 0.0027 / 0.0036 | 0.0024 / 0.0024 / 0.0032 | 0.0375 / 0.0382 / 0.0387
W | Mean | 0.0804 / 0.0843 / 0.0697 | 0.0668 / 0.0693 / 0.0597 | 0.9767 / 0.9771 / 0.9810
W | StDev | 0.0011 / 0.0012 / 0.0009 | 0.0010 / 0.0010 / 0.0008 | 0.0121 / 0.0128 / 0.0125
W | Minimum | 0.0787 / 0.0821 / 0.0678 | 0.0653 / 0.0675 / 0.0583 | 0.9603 / 0.9600 / 0.9600
W | Q1 | 0.0792 / 0.0834 / 0.0689 | 0.0659 / 0.0684 / 0.0590 | 0.9658 / 0.9667 / 0.9712
W | Median | 0.0804 / 0.0846 / 0.0699 | 0.0668 / 0.0695 / 0.0598 | 0.9743 / 0.9750 / 0.9809
W | Q3 | 0.0813 / 0.0853 / 0.0704 | 0.0675 / 0.0701 / 0.0603 | 0.9844 / 0.9927 / 0.9945
W | Maximum | 0.0824 / 0.0861 / 0.0711 | 0.0683 / 0.0707 / 0.0611 | 0.9997 / 0.9970 / 0.9997
W | Range | 0.0037 / 0.0039 / 0.0033 | 0.0031 / 0.0032 / 0.0028 | 0.0394 / 0.0371 / 0.0398
Table 13. Statistical analysis of output obtained by the PSO algorithm.

Response | Statistic | RMSE (Total / Training / Testing) | MAE (Total / Training / Testing) | R² (Total / Training / Testing)
L | Mean | 0.0658 / 0.0640 / 0.0726 | 0.0567 / 0.0549 / 0.0639 | 0.9802 / 0.9800 / 0.9816
L | StDev | 0.0009 / 0.0009 / 0.0009 | 0.0008 / 0.0007 / 0.0009 | 0.0107 / 0.0104 / 0.0122
L | Minimum | 0.0642 / 0.0626 / 0.0710 | 0.0553 / 0.0538 / 0.0625 | 0.9600 / 0.9621 / 0.9609
L | Q1 | 0.0654 / 0.0634 / 0.0721 | 0.0561 / 0.0542 / 0.0631 | 0.9721 / 0.9699 / 0.9727
L | Median | 0.0660 / 0.0639 / 0.0726 | 0.0568 / 0.0547 / 0.0640 | 0.9809 / 0.9810 / 0.9842
L | Q3 | 0.0665 / 0.0647 / 0.0732 | 0.0573 / 0.0555 / 0.0646 | 0.9898 / 0.9883 / 0.9903
L | Maximum | 0.0672 / 0.0656 / 0.0743 | 0.0579 / 0.0562 / 0.0653 | 0.9980 / 0.9972 / 0.9993
L | Range | 0.0030 / 0.0031 / 0.0033 | 0.0026 / 0.0024 / 0.0029 | 0.0380 / 0.0351 / 0.0384
D | Mean | 0.0660 / 0.0640 / 0.0729 | 0.0568 / 0.0552 / 0.0642 | 0.9818 / 0.9836 / 0.9779
D | StDev | 0.0009 / 0.0009 / 0.0010 | 0.0008 / 0.0008 / 0.0007 | 0.0118 / 0.0120 / 0.0109
D | Minimum | 0.0643 / 0.0627 / 0.0710 | 0.0554 / 0.0538 / 0.0628 | 0.9632 / 0.9612 / 0.9610
D | Q1 | 0.0653 / 0.0633 / 0.0720 | 0.0563 / 0.0545 / 0.0637 | 0.9723 / 0.9769 / 0.9693
D | Median | 0.0663 / 0.0638 / 0.0729 | 0.0568 / 0.0553 / 0.0643 | 0.9802 / 0.9851 / 0.9775
D | Q3 | 0.0668 / 0.0648 / 0.0737 | 0.0574 / 0.0560 / 0.0649 | 0.9931 / 0.9941 / 0.9846
D | Maximum | 0.0672 / 0.0654 / 0.0745 | 0.0580 / 0.0565 / 0.0655 | 0.9993 / 0.9996 / 0.9987
D | Range | 0.0030 / 0.0027 / 0.0035 | 0.0026 / 0.0027 / 0.0028 | 0.0361 / 0.0386 / 0.0377
W | Mean | 0.0642 / 0.0626 / 0.0713 | 0.0554 / 0.0541 / 0.0626 | 0.9826 / 0.9793 / 0.9811
W | StDev | 0.0008 / 0.0008 / 0.0011 | 0.0008 / 0.0008 / 0.0010 | 0.0134 / 0.0111 / 0.0105
W | Minimum | 0.0629 / 0.0613 / 0.0696 | 0.0542 / 0.0528 / 0.0612 | 0.9612 / 0.9607 / 0.9603
W | Q1 | 0.0635 / 0.0621 / 0.0703 | 0.0549 / 0.0534 / 0.0619 | 0.9698 / 0.9698 / 0.9773
W | Median | 0.0641 / 0.0626 / 0.0711 | 0.0553 / 0.0541 / 0.0624 | 0.9860 / 0.9808 / 0.9813
W | Q3 | 0.0647 / 0.0630 / 0.0722 | 0.0560 / 0.0547 / 0.0636 | 0.9932 / 0.9871 / 0.9879
W | Maximum | 0.0656 / 0.0644 / 0.0730 | 0.0568 / 0.0554 / 0.0643 | 0.9998 / 0.9998 / 0.9996
W | Range | 0.0027 / 0.0030 / 0.0034 | 0.0026 / 0.0026 / 0.0030 | 0.0386 / 0.0393 / 0.0393