Article

Predictive Models for the Characterization of Internal Defects in Additive Materials from Active Thermography Sequences Supported by Machine Learning Methods

by Manuel Rodríguez-Martín 1,2, José G. Fueyo 1, Diego Gonzalez-Aguilera 3,*, Francisco J. Madruga 4, Roberto García-Martín 1, Ángel Luis Muñóz 3 and Javier Pisonero 3

1 Department of Mechanical Engineering, Universidad de Salamanca, 37008 Salamanca, Spain
2 Department of Technology, Universidad Católica de Ávila, 05005 Ávila, Spain
3 Department of Cartographic and Land Engineering, Universidad de Salamanca, 05003 Ávila, Spain
4 Photonics Engineering Group, CIBER-BBN and IDIVAL, Universidad de Cantabria, 39005 Santander, Cantabria, Spain
* Author to whom correspondence should be addressed.
Sensors 2020, 20(14), 3982; https://doi.org/10.3390/s20143982
Submission received: 14 June 2020 / Revised: 11 July 2020 / Accepted: 15 July 2020 / Published: 17 July 2020
(This article belongs to the Special Issue Thermography Sensing in Non-destructive Testing and Monitoring)

Abstract:
The present article addresses the generation of predictive models that assess the thickness and length of internal defects in additive manufacturing materials. These models use data from numerical simulation of active transient thermography. The proposed procedure is thus an ad hoc hybrid method that integrates finite element simulation and machine learning models of different types (i.e., linear regression, Gaussian process regression, support vector machines, multilayer perceptron, and random forest) trained with different predictive feature sets. The performance results for each model were statistically analyzed, evaluated, and compared in terms of predictive performance, processing time, and outlier sensitivity, to facilitate the choice of a predictive method for obtaining the thickness and length of an internal defect from thermographic monitoring. The best model to predict defect thickness with six thermal features was interaction linear regression. For predictive models of defect length and thickness, the best model was Gaussian process regression. However, models such as support vector machines also had significant advantages in terms of processing time and adequate performance for certain feature sets. In this way, the results showed that the predictive capability of some types of algorithms could allow for the detection and measurement of internal defects in materials produced by additive manufacturing, using active thermography as a non-destructive test.

1. Introduction

Every industrial manufacturing process aims for the highest possible quality. Generally speaking, decreases in quality standards are linked to a wide range of defects that are inherent to manufacturing processes. These defects may be internal and may lead to the failure and collapse of structures, devices, or machines with additive-manufactured functional parts. Dealing with defects implies prior actions to detect them and then either repair the parts in which they appear or discard those parts, especially if the repair cost exceeds the manufacturing cost of a new part. Quality requirements are even more critical in additive manufacturing (AM), which allows for the production of customized elements, even with complex geometries, without the restrictions imposed by traditional manufacturing processes [1].
There are different defect detection methods based on destructive or non-destructive testing (NDT). Thermography is an NDT technique that can be classified by the type of information obtained (qualitative or quantitative) and also by the method used: active thermography (AT) or passive thermography (PT). AT's working principle is that defects and other types of discontinuities in a material's structure can alter a specimen's diffusivity and cause heat flow alterations [2]. In this work, we focus on AT and the detection of internal defects [3], as well as their dimensional analysis [4].
Physical properties, such as the temperature [5] or the heating-cooling rate obtained from AT [6], can be used as predictive parameters when estimating the depth of cracks in steel. In fact, in Rodríguez-Martin et al. [6], a pixelwise algorithm for the time derivative of temperature (PATDT) was developed to predict geometric features of cracks in steel welds from a sequence of thermograms using AT.
Numerical methods are useful to complement AT and optimize its cost and efficiency [7]. Using them, the heat conduction equation [8] can be numerically solved via sophisticated simulation tools. 3D modelling is also a useful tool to provide information about the influence of different factors, such as changes in the dimensions of defects and their depths [9].
Different authors have applied the finite element method (FEM) to investigate the heat transfer phenomenon during thermographic inspections using different codes and solutions, such as Ansys [9,10] or Abaqus [11]. However, the application of numerical methods normally entails the assumption of simplifications that may impact the interpretation of the physical phenomena, such as the uniform nature of the heating applied or the variability of density, among others. In turn, 3D modelling and simulation methods allow for the generation of ideal surfaces whose geometrical conditions may differ from the real ones. Carvalho et al. [7] applied a model based on FEM simulation to solve the transient heat problem, while other authors applied a surface flux [10,12,13,14].
The datasets obtained using FEM can be useful to estimate parameters and thus to train predictive models, allowing for the estimation of a geometrical feature of the defect from thermal features. Within the machine learning (ML) approaches, regression learners study the relationship between one or more explanatory random variables and their responses [15]. Specifically, the artificial neural network (ANN) has been applied for regression in various investigations with thermography, to estimate the depth of defects [4,16] or for biomedical applications [17]. ANN can be applied using visualization approaches that provide information about its behavior and structure [18]. ANN and support vector machine (SVM) models have also been applied for coating thickness estimation [19]. A regression learner using the Gaussian process regression (GPR) model has been applied to the results of different computational fluid-dynamic simulations, interpolating at positions where experimental data were unknown [20]. Thermography has also been combined with deep learning (DL) strategies to detect cracks in steel [21].
Initially, AM was used to manufacture prototypes, but today the production of final parts for engineering applications is demanded and therefore technical plastics take on special relevance [22]. Nylon stands out with specific properties: a semi-crystal polyamide polymer; very low specific weight; excellent tensile strength and elastic recovery; toughness; resistance to bending and wear; and a good surface finish [23,24,25]. This material is usually processed using powder-bed laser sintering [26] but we can find more accurate and flexible processes such as inkjet-based manufacturing [22]. It should be highlighted that Nylon can be recycled for later use in the additive manufacturing process as a component for forming enhanced physical mechanical composites. This is helpful to minimize the environmental impact of non-biodegradable polymers [24].
In this article, a FEM simulation configuration is established using the physical properties of Nylon PA-12. Different types of models were designed and trained using different feature sets in order to establish the more efficient model and the thermal features needed to predict the geometrical features as response. Since different thermal features are relevant in the heat transfer process, it is convenient to know those that have a greater influence on the prediction of the defect geometry which can be used as input for the prediction model. These models could serve to design intelligent, automated, and non-destructive inspection protocols of additive-manufactured parts using active thermography.
Performance results for several generated multiparameter models are scientifically compared. In this way, a predictive technique based on the last advances in ML is proposed for the estimation of the geometric parameters of internal defects, using the thermal properties acquired with AT.

2. Materials and Methods

An ad-hoc hybrid strategy that integrates FEM and ML was designed to address this research. The two phases are described in the workflow outlined in Figure 1.

2.1. Numerical Model Design

Additive manufacturing (AM) procedures use different techniques for material deposition. Depending on these techniques, pores may appear that can be confused with small defects, causing variations in the thermomechanical behavior at different points and along different working directions. This problem was studied in laminated object manufacturing (LOM) [27] and fused filament fabrication (FFF) [28]. However, addressing this issue would imply a study of material properties after deposition at the mesostructural level and the preparation of a FEM model capable of simulating it. Although the study of this problem could be extremely interesting, it would enormously complicate this work. Thus, in this study, a simpler approach was carried out in which the macrostructural thermomechanical behavior of the material was considered as a continuum and isotropic. This approach was previously considered in several works [7,12,14].
A FEM model was designed to study the effect that an internal defect (e.g., a hole-like void) provokes in the heat flux and the temperature distribution. The geometry and the principal dimensions of the FEM model proposed in this work can be seen in Figure 2b. In addition, four points were located on the model: P1 and P2 on the upper surface and P3 and P4 on the lower surface, in order to study the evolution of temperatures through time. P1 and P3 are close to the defect, while P2 and P4 are far from it. The distance between P1 and P2, and between P3 and P4, is 0.025 mm. The comparison between these points allows us to see the effect of the defect together with its superficial temperature distribution, using different thermal loads applied to the model. Considering that the model was prepared with a small thickness, it was possible to study the effect of the defect on the upper surface (reflection case) and lower surface (transmission case). The reflection case studies the temperature trace on the upper surface, where the heat excitation is applied. For its part, the transmission case studies the temperature trace on the lower surface, i.e., on the side opposite to where the heat excitation is applied. This model was used to study the effect of the principal thermal properties (i.e., conductivity, specific heat, density, film coefficient, and emissivity coefficient) on both surfaces. The properties of the material were those corresponding to the polymeric material Nylon PA-12, widely used in 3D printing. All these material properties, the geometry, and the heating process were proposed following [7] and can be seen in Table 1.
The model was subjected to a heating process (heating-step) in its upper face from 24 °C to 120 °C through a linear ramp for 20 s. Once the highest temperature was reached, the heat source moved away and the model started to exchange heat with the external environment through convection and radiation heat transfer processes (cooling-step). The studied values of this interaction can be seen in Table 1.
Finally, the model was meshed with DC3D8 elements (8-node linear isoparametric 3D heat transfer elements) using the commercial FEM software Abaqus 2019® [11,12]. A biased, non-uniform mesh was defined to increase the density of elements in the defective areas, improving the precision of the data, and to reduce the density of elements in the background area. The number of elements was reduced to 25% of the number corresponding to a uniform mesh, maintaining the same precision in the areas close to the defect (Figure 2). To complete the mesh design, different convergence analyses were conducted in order to obtain a mesh size that gives accurate results in the defect areas without considerably penalizing the time needed to compute the models. An element size of 1 mm was considered precise enough without penalizing the computational cost. Finally, each model had 5020 elements.
Several command lines in the Python language were added to the file created by Abaqus. These command lines were programmed to obtain the temporal evolution of the temperatures at points P1 to P4. More command lines were used to plot contrast curves of temperature versus time between points P1–P2 and P3–P4. The higher this contrast, the easier it is to detect a defect, as well as its size and location.
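The contrast-curve computation described above can be sketched outside Abaqus as a small post-processing routine. The following Python/NumPy snippet is an illustrative reconstruction, not the authors' script: the function name `contrast_curves` and the synthetic temperature histories are hypothetical.

```python
import numpy as np

def contrast_curves(T1, T2, T3, T4, times):
    """Temperature contrasts between points near (P1, P4) and far
    (P2, P3) from the defect, as functions of time."""
    front = np.asarray(T1) - np.asarray(T2)  # reflection case (upper face)
    rear = np.asarray(T4) - np.asarray(T3)   # transmission case (lower face)
    # Instant of maximum front contrast: candidate point for defect detection
    t_max = times[int(np.argmax(front))]
    return front, rear, t_max

# Synthetic example: the front contrast peaks at t = 53 s, as in the results
times = np.linspace(0.0, 100.0, 201)
T_far = 24.0 + 10.0 * np.exp(-((times - 20.0) / 30.0) ** 2)
T_near = T_far + 0.5 * np.exp(-((times - 53.0) / 15.0) ** 2)
front, rear, t_max = contrast_curves(T_near, T_far, T_far, T_far, times)
```

With these synthetic histories, `t_max` recovers the 53 s maximum; on real simulation output, the same routine would be applied to the temperatures exported at P1 to P4.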
After all these steps were completed, the obtained results were used to apply ML techniques that allowed us to estimate the geometrical features of the defects using AT data.

2.2. Machine Learning Modelling

Different regression learners were applied and trained to compare their performances. The same model type was trained using different sets of features and different k-fold validations and/or hyperparameters in order to obtain the best performance setup.
MATLAB© [29] was used to train the following model types: linear regression, GPR, and SVM, while the open-source software Weka [30] was applied to train the random forest (RF) and multilayer perceptron (MLP) models. All the models were trained considering different feature sets and parameters. The results of unsuccessful models were not reported, although some of them are indicated in Appendix A. The different predictive model typologies used are widely defined in the literature, yet in order to contextualize the present research, a brief description of each of them is given below.

2.2.1. Linear Regression Model

Linear regression models are predictive algorithms that are easy to interpret and fast to predict. However, these models provide low flexibility, and their highly constrained form means that they usually have poor predictive accuracy compared to other, more complex models. In this case, three different linear regression models were applied: (i) linear regression, which uses constant and linear terms; (ii) interaction linear regression, which applies interactions between predictors; and (iii) stepwise linear regression, which analyses the significance of each variable [31]. In this work, we considered stepwise linear regression to prioritize the detection potential of the algorithm with respect to the physical significance of the statistical relationships between variables.
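As a sketch of how an interaction linear regression differs from a plain one, the following example builds pairwise interaction terms explicitly; it uses scikit-learn as a stand-in for the MATLAB Regression Learner used in the study, and the toy data are invented for illustration.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures

# Interaction terms (x_i * x_j) in addition to the linear terms,
# without pure squares: this is the "interaction" model variant
model = make_pipeline(
    PolynomialFeatures(degree=2, interaction_only=True, include_bias=False),
    LinearRegression(),
)

# Toy data: the response depends on an interaction between two features
rng = np.random.default_rng(0)
X = rng.uniform(size=(200, 2))
y = 1.0 + 2.0 * X[:, 0] + 3.0 * X[:, 0] * X[:, 1]
model.fit(X, y)
r2 = model.score(X, y)  # coefficient of determination on the training data
```

Because the target is exactly a linear-plus-interaction function, the fit is essentially perfect here; on real data the interaction model simply gains flexibility over the constant-and-linear variant.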

2.2.2. Gaussian Process Regression Model (GPR)

In the last decade, the GPR model has attracted considerable attention, especially in ML approaches [32]. These methods apply non-parametric kernel functions based on probabilistic models (Bayesian inference) [20]. These non-parametric methods are usually more rigorous than the standard regression methods described above, especially for the treatment of complex and noisy non-linear functions [33] and their cross-validation [34].
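As an illustration of this model family, the sketch below fits a GPR with a squared-exponential (RBF) kernel, again using scikit-learn as a stand-in for the MATLAB tooling of the study; the toy sine target and the hyperparameters are assumptions for illustration only.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

# Squared-exponential (RBF) kernel; a RationalQuadratic kernel could be
# swapped in to mimic the "rational quadratic GPR" variant used later
kernel = ConstantKernel(1.0) * RBF(length_scale=1.0)
gpr = GaussianProcessRegressor(kernel=kernel, alpha=1e-4, normalize_y=True)

# Toy non-linear target, standing in for the thermal-feature regression
X = np.linspace(0.0, 1.0, 50).reshape(-1, 1)
y = np.sin(4 * np.pi * X).ravel()
gpr.fit(X, y)

# The Bayesian posterior also yields a per-point predictive uncertainty
y_pred, y_std = gpr.predict(X, return_std=True)
```

The predictive standard deviation is what distinguishes GPR from the point-estimate regressors above: it quantifies where the model is confident and where it is extrapolating.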

2.2.3. Support Vector Machine

SVM are supervised learning models initially used for classification problems but also for robust regression solutions [31]. SVM are non-parametric techniques that can still be affected by outliers [35]. In SVM robust regression, it may be useful to add robust estimators based on variable weight functions [31]. The flexibility of SVM methods is due to the kernel functions (radial basis function (RBF), quadratic, cubic, or linear) [36]. In this research, the four kernel functions were used. Furthermore, for RBF, three different kernel scales were used: fine, medium, and coarse. Prediction errors smaller than the threshold (ε) were ignored and treated as equal to zero. The epsilon mode was automatically calculated using a heuristic procedure to select the kernel scale.
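The ε-insensitive loss described above can be sketched with scikit-learn's `SVR` (a stand-in for the MATLAB implementation used in the study); the toy data and the values of `C`, `epsilon`, and `gamma` are illustrative assumptions.

```python
import numpy as np
from sklearn.svm import SVR

# epsilon-SVR with an RBF kernel: residuals smaller than epsilon are
# ignored by the loss, which makes the fit insensitive to small noise
svr = SVR(kernel="rbf", C=10.0, epsilon=0.01, gamma="scale")

rng = np.random.default_rng(1)
X = rng.uniform(size=(300, 1))
y = np.sin(2 * np.pi * X).ravel() + rng.normal(scale=0.005, size=300)
svr.fit(X, y)
score = svr.score(X, y)  # R^2 on the training data
```

Swapping `kernel` to `"linear"` or `"poly"` (degree 2 or 3) reproduces the other kernel variants tested in the study, and varying `gamma` corresponds to the fine/medium/coarse RBF kernel scales.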

2.2.4. Random Forest

RF [37] is a well-known ensemble method that can be used for both classification and regression. It is built from trees, where each tree is generated from a different bootstrapped sample of the training data [38], enabling many weakly-correlated classifiers to form a strong classifier. RF is usually easy to implement and computationally fast, and it performs well in many real-world tasks.
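The bootstrap-and-average idea can be sketched with scikit-learn's `RandomForestRegressor` (the study itself used Weka's implementation); the toy data and hyperparameters are assumptions for illustration.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Each tree is fit on a bootstrap resample of the training data
# (bootstrap=True), and the forest averages the tree predictions
rf = RandomForestRegressor(n_estimators=100, bootstrap=True, random_state=0)

rng = np.random.default_rng(2)
X = rng.uniform(size=(400, 3))
y = X[:, 0] + 2.0 * X[:, 1] ** 2  # simple non-linear toy target
rf.fit(X, y)
train_r2 = rf.score(X, y)  # training R^2 (optimistic; illustration only)
```

Because the individual trees are high-variance but weakly correlated, averaging them reduces variance, which is why RF trains fast and performs well without much tuning.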

2.2.5. Multilayer Perceptron

MLP is an ANN method that uses backpropagation to learn a multilayer perceptron that classifies instances. The MLP allows the representation of smooth, measurable functional relationships between the inputs (predictor features) and the outputs (responses). MLP is a massively parallel, distributed information-processing system that has been successfully applied to the generation of models that solve non-linear problems [39,40]. The processing is based on three different layers of neurons: input layers (N neurons), hidden layers (S neurons), and output layers (L neurons), where each layer has a group of connected points (neurons). Each connection has a numerical weight, and each neuron of the network computes a weighted sum of its inputs and thresholds the result. The momentum rate for the backpropagation algorithm was set to 0.2 and the learning rate to 0.3, while a nominal-to-binary filter was applied. The number of hidden-layer neurons was established as (attributes + classes)/2 for each test.
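The setup above can be approximated with scikit-learn's `MLPRegressor` as a stand-in for Weka's MultilayerPerceptron; the logistic activation mirrors Weka's sigmoid units, the hidden-layer heuristic follows the (attributes + classes)/2 rule quoted in the text, and the toy data are invented. Weka's nominal-to-binary filter has no equivalent here, since all toy features are numeric.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

n_features, n_outputs = 6, 1
hidden = (n_features + n_outputs) // 2  # Weka-style heuristic from the text

# SGD backpropagation with momentum 0.2 and learning rate 0.3,
# mirroring the Weka configuration described above
mlp = MLPRegressor(hidden_layer_sizes=(hidden,), activation="logistic",
                   solver="sgd", learning_rate_init=0.3, momentum=0.2,
                   max_iter=2000, random_state=0)

rng = np.random.default_rng(3)
X = rng.uniform(size=(300, n_features))
y = X @ np.array([1.0, 0.5, 0.0, 0.0, 2.0, -1.0])  # toy linear response
mlp.fit(X, y)
pred = mlp.predict(X[:5])
```

With 6 features and 1 response, the heuristic gives a 3-neuron hidden layer; the network thus has the three-layer input/hidden/output structure described in the text.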

2.3. Evaluation of the Model Performance

The evaluation of the models can be implemented by assessing the difference between the observed values ( y_j ) and the predicted values ( \hat{y}_j ) [20]. The performance of the regression learning models can be evaluated using classical performance metrics [41]. In this research, the following statistical metrics were obtained for each model:
  • Coefficient of determination (R2) between observed and predicted values (1). The closer it is to 1, the stronger the correlation between observed and predicted values. A theoretical value of 1 means a perfect correlation between the observed and predicted values, which could be interpreted as a perfect prediction (graphically, this would mean that all points in the predicted vs. actual plot lie on the regression line).
  • Mean absolute error (MAE): this error describes the typical magnitude of the residuals and is robust to outliers (2). MAE was used to independently evaluate the accuracy of the model.
  • Mean square error (MSE): this error estimate is computed from the squares of the differences, being more sensitive to outliers than MAE.
  • Root mean square error (RMSE): calculated as the square root of the MSE (3). In this way, the error is converted to the units of the response variable, making interpretation of its magnitude more intuitive.
    R^2 = 1 - \frac{\sum_{j=1}^{n} \left( y_j - \hat{y}_j \right)^2}{\sum_{j=1}^{n} \left( y_j - \bar{y} \right)^2}   (1)
    MAE = \frac{1}{n} \sum_{j=1}^{n} \left| y_j - \hat{y}_j \right|   (2)
    RMSE = \sqrt{\frac{1}{n} \sum_{j=1}^{n} \left( y_j - \hat{y}_j \right)^2}   (3)
Finally, the training time is a parameter that was reported for each model in order to compare the response speed of each algorithm. To this end, all the training of the different models was implemented on an Intel Core i7-5700HQ, 2.7 GHz CPU without parallel computing. Additionally, the distribution and morphology of the residuals were evaluated as another model performance indicator.
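The metrics above can be computed directly from the observed and predicted vectors. The NumPy sketch below (the function name `performance` is ours, not from the paper) mirrors Equations (1)–(3):

```python
import numpy as np

def performance(y_obs, y_pred):
    """R^2, MAE, MSE, and RMSE from observed and predicted values."""
    y_obs = np.asarray(y_obs, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    resid = y_obs - y_pred
    ss_res = np.sum(resid ** 2)                     # residual sum of squares
    ss_tot = np.sum((y_obs - y_obs.mean()) ** 2)    # total sum of squares
    r2 = 1.0 - ss_res / ss_tot                      # Equation (1)
    mae = np.mean(np.abs(resid))                    # Equation (2)
    mse = np.mean(resid ** 2)
    rmse = np.sqrt(mse)                             # Equation (3)
    return r2, mae, mse, rmse

r2, mae, mse, rmse = performance([1.0, 2.0, 3.0, 4.0], [1.1, 1.9, 3.2, 3.8])
```

For the small example above, the residuals are ±0.1 and ±0.2, giving MAE = 0.15 and R² = 0.98; RMSE is larger than MAE whenever the residuals are unequal, which is exactly the outlier sensitivity discussed in the bullet list.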

3. Results

3.1. Simulation Results

Some of the calculations carried out and the results achieved in this study are shown below. With the initial values of the geometric variables, the thermal properties of the material and the thermal load curves applied, a calculation of the temperature distribution along the whole model was performed. Figure 3 shows the temperature distribution in the model at 53 s.
Using the script developed in Python, the variation of the temperatures through time at points P1 to P4 was recorded. Figure 4a shows the values of the temperatures through time at these points.
Also, the temperature contrast curves through time, P1 minus P2 and P4 minus P3, were calculated and plotted (Figure 4b); they show the difference in temperature between areas near and far from the defect. Contrast curve P1–P2 shows how the presence of the defect affects the upper surface, the so-called reflection case, while contrast curve P4–P3 exhibits the effect of the defect on the rear surface, that is, the difference in transmitted temperature between zones with defects and zones without them. It can also be observed that the maximum in the contrast curves appears at 53 s of the total time, that is, 33 s after the start of the cooling step.
The point of maximum contrast is of great interest since it would allow the presence of the defect and its characteristics to be determined. Therefore, these points were used to analyze the variation of the input variables (i.e., thermal properties, size, and thickness of the defect) over the upper face (reflection case, Contrast Front ( ΔT_F )) and over the lower face (transmission case, Contrast Rear ( ΔT_R )). The ranges of variation of the input variables established for this research are outlined in Table 1. The sets of values used in each simulation were automatically selected by the software within the thresholds indicated in this table. In this manner, two datasets were generated: the first repeated the simulation 100 times and the second repeated it 500 times.
A design of experiment (DOE) study was carried out using the Latin hypercube technique with 500 points. Figure 5 shows the Pareto plot for the responses "Contrast Front" and "Contrast Rear". The size of the bars indicates the proportion in which each of the input variables affects the variation of the output variables. The blue color indicates that the relationship is direct, while the red color indicates that it is inverse, i.e., if the value of the input variable increases, the value of the output decreases, and vice versa.
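A Latin hypercube design like the one described can be drawn, for instance, with SciPy's quasi-Monte Carlo module. The sketch below is illustrative only: the three variables and their bounds are placeholders, not the actual thresholds of Table 1.

```python
import numpy as np
from scipy.stats import qmc

# Illustrative bounds for three input variables (placeholder values;
# the real thresholds are those of Table 1)
lower = np.array([0.20, 0.10, 100.0])
upper = np.array([0.30, 0.30, 140.0])

# 500 Latin hypercube points in the unit cube, stratified per dimension
sampler = qmc.LatinHypercube(d=3, seed=0)
unit = sampler.random(n=500)

# Rescale the unit-cube sample to the physical variable ranges
samples = qmc.scale(unit, lower, upper)
```

Each of the 500 rows is one simulation setup; the Latin hypercube stratification guarantees that every variable's range is covered evenly, which is what makes the Pareto sensitivity analysis of Figure 5 meaningful with a limited number of runs.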
Figure 5 shows how the most influential variable in both cases was the maximum heating temperature ( T_H ). This indicates the need for a good design of the thermal loading process, adjusting this temperature as much as possible. Moreover, the size of the defect ( L_D ) had a high weight, which indicates that the magnitude of the contrast could be used to estimate the size of the defect. On the other hand, it is noteworthy that, in both cases, the thickness of the defect ( t_D ) had a low effect, especially in the "Contrast Rear" case.
Finally, Figure 6 shows two of the many possible approximated surfaces that can be prepared to study the variation of the output values as a function of the variation of the input values. Figure 6b shows the variation of "Contrast Front" with the maximum heating temperature and the size of the defect. Since both input variables have a high effect, the surface varies almost equally along both base coordinates. Instead, in Figure 6a, "Contrast Rear" is shown in relation to the length and thickness of the defect. Because the thickness of the defect has a smaller effect, the approximated surface changes more along the length-related coordinate.

3.2. Machine Learning

First, an exploratory data analysis was applied to both the 100-value and the 500-value datasets. This was implemented using scatter plots, which showed similar trends in the relationships between the features for the two data collections. An apparent collinearity was detected between the "Contrast Front" ( ΔT_F ) and "Contrast Rear" ( ΔT_R ) features, as both are independent with respect to the rest of the features. This phenomenon could be due to the heat transfer and the presence of the defect, which makes the difference in temperature between the defect and non-defect zones very similar on both sides of the model. However, the relationship between the two features is not rigorously linear because it has non-constant variability. The rest of the variables are independent since they are inputs for the simulation processes (Table 1). Therefore, in the following sections, different tests were implemented in order to find an adequately parsimonious model with the fewest assumptions. For the different trained models, MAE, R2, and RMSE were reported. The rest of the parameters analyzed for each model are reported in Appendix A in order to ease reading. Please note that in the finite element part, the geometric variables and the thermal properties are inputs, while the temperature contrasts ΔT_F and ΔT_R are outputs. On the other hand, this changes in the machine learning part of the study: the contrasts ΔT_F and ΔT_R, together with the thermal properties, become inputs, while the geometric variables (length and thickness of the defect) are outputs, these last being the variables to predict using the model.

3.2.1. Defect Thickness Predictor

Firstly, models were trained using only the 100 sets of values to analyze what happens with a small sample size, but the predictive capacity was low. The most accurate model yielded poor prediction results (e.g., the stepwise regression model was the most effective, yielding R2 = 0.45). Then, the predictor model was calculated using the 500 sets of values of the dataset. The results for the models trained using 500 sets of values are shown in Table 2. The "Contrast Rear" feature had to be included in the model to get suitable results; otherwise, the predictive performance decreased significantly and the models obtained were not adequate (maximum R2 = 0.35).
The best results were achieved by the following two models: the stepwise regression model and the interaction regression model, although the training time is much longer for the stepwise model, as shown in Appendix A. The best predictor was the one that used all features (MAE = 5.148 × 10−5, R2 = 0.79). The error obtained was acceptable considering the range and order of magnitude of the predicted variable (5 × 10−4 ± 50% mm in Table 1). However, when only five features ( k, h, T_H, ΔT_F, ΔT_R ) were considered (the rest excluded), MAE increased by 36.71% and R2 was 0.63. For the thickness predictor, it was always necessary to consider "Contrast Rear" ( ΔT_R ) to obtain suitable results.
Additionally, all the models were calculated using three different k-fold validation parameters (5, 10, and 15). The results for the 10-fold validation are reported here, and the MAE of the other two k-fold validations is indicated as a deviation in Appendix A; these deviation values for the MAE were not meaningfully high. Only the results for the regression models are reported in this section, because the rest of the models (i.e., GPR, SVM, RF, and MLP) did not provide suitable results due to the small size of the dataset.
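The k-fold comparison described above can be sketched as follows, using scikit-learn cross-validation in place of the MATLAB tooling; the synthetic dataset is an assumption for illustration.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import KFold, cross_val_score

# Synthetic stand-in for the 500-value dataset (5 predictive features)
rng = np.random.default_rng(4)
X = rng.uniform(size=(500, 5))
y = X @ np.array([1.0, -2.0, 0.5, 0.0, 3.0]) + rng.normal(scale=0.01, size=500)

# MAE under the three k-fold validations used in the study (5, 10, 15)
maes = {}
for k in (5, 10, 15):
    scores = cross_val_score(
        LinearRegression(), X, y,
        cv=KFold(n_splits=k, shuffle=True, random_state=0),
        scoring="neg_mean_absolute_error",
    )
    maes[k] = -scores.mean()
```

Reporting the 10-fold MAE and quoting the spread across `maes[5]` and `maes[15]` as a deviation reproduces the validation scheme described in the text.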
An appropriate linear relationship between the predicted and observed responses was observed for all the predictions. In Figure 7, this regression line is shown for the predictor model with the lowest MAE (Table 2). Residuals are close to a symmetrical distribution around zero.

3.2.2. Defect Length Predictor

Firstly, the experiment was implemented using 100 sets of values. Unlike the predictive thickness model, in this case significantly different results were obtained depending on whether or not the "Contrast Rear" feature ( ΔT_R ) was considered, so this aspect allowed us to compare model performance when ΔT_R was considered or excluded. Consequently, the results for the two configurations are reported (Table 3), and the different predictive features were removed in order to analyze model performance for each feature setup. The best MAE result (1.398 × 10−3) was obtained using the interaction linear model when the minimum number of features was included. This result could be considered adequate given the small size of the dataset and the order of magnitude of the response: defect length (0.01 ± 50% mm in Table 1).
The models that provided the best predictive potential were the stepwise and the interaction regression models. Even so, their performance results were not very suitable (maximum R2 of 0.65).
The decrease in error when "Contrast Rear" was excluded is shown in Table 4. In this case, when "Contrast Rear" ( ΔT_R ) was not considered, the model performance increased in terms of MAE and RMSE (both were reduced) (Table 4). Moreover, in this case, the deviation values for MAE when different k-fold parameters were applied could be higher in some cases (up to a 26.86% increase for the stepwise regression model).
Once the 100 sets of values were studied, the experiment was repeated considering 500 sets of values (Table 5, Table 6, Table 7 and Table 8). In this case, the model typologies that provided the least error were the GPR models, specifically the square exponential GPR (MAE = 6.665 × 10−4, R2 = 0.92) and the rational quadratic GPR (MAE = 6.666 × 10−4, R2 = 0.92), when the feature "Contrast Rear" ( ΔT_R ) was considered and the defect thickness and the emissivity coefficient were excluded (Table 5 and Table 6). Note that the training time when the rational quadratic kernel was chosen is three times higher than with the square exponential kernel, as shown in Appendix A. In this way, a model based on the rest of the features using GPR provided high performance.
However, the training time was also higher in comparison with the other methods, especially the SVM (considering that four kernel functions were tested and, for RBF, three kernel scales: fine, medium, and coarse), but these last models provided lower predictive performance for the same setup (e.g., quadratic SVM provided MAE = 1.302 × 10−3 and R2 = 0.66). RMSE results calculated using SVM demonstrated that outliers have an important effect (RMSE was significantly higher than MAE). In addition, the regression models were shown to be the least sensitive to sample size, because they were the only ones that provided at least acceptable results with 100 sets of values.
The interaction and stepwise regression models also provided adequate performance results, specifically the interaction regression when all features were considered (MAE = 9.588 × 10−4, R2 = 0.81) (Table 5). These results showed a higher error than the GPR models, which is consistent with complex, noisy, non-linear functions [33]. Nevertheless, interaction regression required significantly less computational time (unlike the stepwise regression model, which took much longer). The difference between the k-fold validations used was smaller than in the previous dataset for the same type of model, possibly due to the larger size of the dataset, as shown in Appendix A.
Once we observed that both the regression models and the GPR models provided the most adequate predictive results, the correlation between the observed and the predicted response was plotted. The two models of each type with the lowest error and the best fit are shown in Figure 8. Residuals were approximately symmetrically distributed for the regression model (a favorable aspect for the suitability of the model), but non-linearly distributed for the GPR.
Finally, the MLP and RF models were the fastest training algorithms (Table 6, Table 7 and Table 8), but their MAE was significantly higher than that of the other models, although they tend to improve when fewer predictive features are used. Moreover, the rest of their performance parameters were less suitable than those of other models for the chosen configuration and setup.
When the “Contrast Rear” feature was excluded (Table 7 and Table 8), the highest-performing model was the GPR (Table 7), especially for the square exponential (R2 = 0.86, MAE = 8.513 × 10−4) and rational quadratic (R2 = 0.86, MAE = 8.531 × 10−4) kernels. When the MAE results were compared with those of the models that included the “Contrast Rear” feature ( Δ T R ), an increase in MAE was detected for almost all trained models (Table 9). Some models suffered less from the loss of that feature: MAE increased most when GPR models were used, while the models where MAE increased least were the SVM, RF, and MLP. The GPR models were thus more sensitive to the absence of this feature than the regression models. In general terms, the “Contrast Rear” feature increased predictive model performance, but this increase was not always significant (Table 9).
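The feature-exclusion comparisons above follow a simple ablation pattern: retrain the same model with one feature removed and compare MAE. A sketch with synthetic data and ordinary least squares (NumPy only; the column names are illustrative, not the study's dataset):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic dataset: the target depends on both "contrast" columns.
n = 200
X = rng.normal(size=(n, 3))  # columns: contrast_front, contrast_rear, noise
y = 2.0 * X[:, 0] + 1.5 * X[:, 1] + rng.normal(scale=0.1, size=n)

def ols_mae(X, y):
    # Fit ordinary least squares (with intercept) and report in-sample MAE.
    A = np.column_stack([np.ones(len(X)), X])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return float(np.mean(np.abs(A @ coef - y)))

full = ols_mae(X, y)
without_rear = ols_mae(np.delete(X, 1, axis=1), y)  # ablate "contrast_rear"

# Dropping an informative feature increases MAE.
print(full, without_rear)
```

The size of the MAE increase under ablation is one way to rank how much each feature contributes to the model.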

4. Conclusions

Using Python, a parametric FEM model was prepared to study the effect that the presence of an internal defect has on the temperature distribution of a thermally loaded solid. To check that the model worked correctly, a first battery of tests was carried out using the same geometries and materials employed by other authors [7,9,14]. Once it was verified that the thermal distribution results of these tests coincided with those reported by these authors in both shape and value, the model was used to study how the thermal properties of the material ( c , k , ρ , T E ,   ε ,   h , T H ) and the geometric variables of the defect ( t D , L D ) affected the contrast values of interest ( Δ T F ,   Δ T R ), which were defined in Section 3.1. As a result, the influence of each geometric and thermal parameter (Table 1) on the contrast values was obtained (Figure 5).
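The physical effect exploited here can be sketched, under strong simplifications, with a one-dimensional explicit finite-difference model rather than the authors' Abaqus-based FEM: a low-conductivity band (the defect) delays heat transfer and lowers the front-face temperature relative to a sound region. Material constants below are the Nylon PA-12 values of Table 1; the geometry, defect placement, and boundary conditions are illustrative assumptions only.

```python
import numpy as np

def front_temperature(k_layers, n_steps=4000):
    """Explicit 1D transient conduction through a layered slab.

    k_layers: thermal conductivity per cell (W/mK). Density and specific
    heat are held constant; only conductivity varies. The rear face is
    held at the heating temperature, the front face is adiabatic.
    Returns the front-face temperature after n_steps time steps.
    """
    n = len(k_layers)
    dx = 0.005 / n                   # 5 mm slab split into n cells
    rho_c = 1100.0 * 1590.0          # density * specific heat (PA-12)
    T = np.full(n, 24.4)             # ambient start temperature (deg C)
    T[-1] = 120.0                    # rear face heated (fixed)
    alpha = np.max(k_layers) / rho_c
    dt = 0.4 * dx * dx / alpha       # below explicit stability limit
    for _ in range(n_steps):
        # Interface conductivities (harmonic mean) and explicit update.
        k_if = 2 * k_layers[:-1] * k_layers[1:] / (k_layers[:-1] + k_layers[1:])
        flux = k_if * (T[1:] - T[:-1]) / dx
        T[0] += dt / (rho_c * dx) * flux[0]          # adiabatic front face
        T[1:-1] += dt / (rho_c * dx) * (flux[1:] - flux[:-1])
    return float(T[0])

n = 50
sound = np.full(n, 0.22)             # homogeneous conductivity
defective = sound.copy()
defective[20:25] = 0.022             # air-like low-conductivity defect band

contrast_front = front_temperature(sound) - front_temperature(defective)
print(contrast_front)                # positive: the defect delays heat arrival
```

This is only a qualitative stand-in for the 2D FEM simulation, but it reproduces the sign of the front contrast used as a predictive feature.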
In a second step, the simulation output frames were used as input to train 474 different prediction models, in order to assess the possibility of using the thermal parameters ( c , k , ρ , T E ,   ε ,   h , T H ,   Δ T F ,   Δ T R ) to predict the geometric features of the defect ( t D , L D ). Different models based on different feature sets were established, trained, evaluated, and finally compared. The comparison of the different algorithms is the main contribution of this work.
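The train-and-compare protocol of this step can be sketched with scikit-learn (the study itself used MATLAB and Weka tooling [29,30]; the models, kernels, and synthetic dataset below are illustrative assumptions, not the study's data):

```python
import numpy as np
from sklearn.model_selection import cross_val_predict, KFold
from sklearn.linear_model import LinearRegression
from sklearn.svm import SVR
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(1)
# Stand-in "thermal features" and a smooth non-linear target.
X = rng.uniform(-1, 1, size=(200, 4))
y = np.sin(2 * X[:, 0]) + 0.5 * X[:, 1] ** 2 + rng.normal(scale=0.05, size=200)

models = {
    "linear": LinearRegression(),
    "svm_rbf": SVR(kernel="rbf"),
    "gpr_rbf": GaussianProcessRegressor(kernel=RBF(), alpha=1e-3),
    "random_forest": RandomForestRegressor(n_estimators=50, random_state=0),
}

# Same cross-validation split for every model, then compare MAE.
cv = KFold(n_splits=5, shuffle=True, random_state=0)
results = {}
for name, model in models.items():
    pred = cross_val_predict(model, X, y, cv=cv)
    results[name] = mean_absolute_error(y, pred)

for name, mae in sorted(results.items(), key=lambda kv: kv[1]):
    print(f"{name}: MAE = {mae:.4f}")
```

Holding the fold assignment fixed across models, as here, keeps the MAE comparison fair.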
Regarding defect thickness, it was possible to obtain predictive models with moderate predictive performance. In particular, the interaction linear regression and stepwise regression models provided adequate results, although the stepwise model was slower to train. The best model for defect thickness prediction using five features ( k ,   h , T H , Δ T F ,   Δ T R ) was interaction linear regression (MAE = 7.038 × 10−5, R2 = 0.63). Using all the features, this model gave an MAE of 5.148 × 10−5 (R2 = 0.79). In this case, the “Contrast Rear” feature ( Δ T R ) had to be included in the model to obtain adequate results.
It was also possible to build predictive models for the defect length. In this case, adequate results were reported for a larger number of model types. When 100 sets of values were used to train the models, only the regression models provided adequate results, whereas with 500 sets of values, several types of models gave adequate results. These models can be established both with and without the “Contrast Rear” feature ( Δ T R ); however, when it is considered, the error tends to decrease (Table 10) and, consequently, the model performance improves, despite the possible collinearity between the “Contrast Rear” and “Contrast Front” features. When the “Contrast Rear” feature was considered, the best model was a GPR based on a square exponential kernel, which provided an MAE of 6.665 × 10−4 when defect thickness and the emissivity coefficient were also excluded.
Regression models were also tested; these gave adequate performance results, although less favorable than those provided by the GPR models (the interaction regression model gave an MAE of 9.588 × 10−4 and an R2 of 0.81 when all features were used). The MAE increased slightly when the “Contrast Rear” feature was not considered (MAE = 1.183 × 10−3, R2 = 0.74) and increased further as more variables were excluded (for the minimum number of features: MAE = 1.628 × 10−3, R2 = 0.53). For this case, the stepwise regression model did not provide significantly better results than the interaction regression model but significantly increased the computational training time. However, the predicted versus actual plots showed adequate linearity and constant variability for the interaction and stepwise regression models.
The SVM models also allowed the prediction of the defect length, and their training times were very low, but their performance was lower than that obtained using GPR. However, a strong influence of outliers was detected for the SVM models, based on the RMSE and MAE results, on the predicted versus observed plots, and on the residual plots. If less weight is given to the more extreme residuals, these models can still be useful [31]. Additionally, the MLP and RF methods provided predictions very quickly, but their performance was significantly worse than that of the other methods. A qualitative comparison based on the information obtained in this research is outlined in Table 10.
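Down-weighting extreme residuals, as suggested in [31], is commonly expressed through a robust loss such as Huber's, which is quadratic near zero and only linear in the tails. The sketch below (illustrative residual values) compares an outlier's contribution under the squared and Huber losses:

```python
def squared_loss(r):
    # Conventional least-squares penalty (0.5 factor for comparability).
    return 0.5 * r * r

def huber_loss(r, delta=1.0):
    # Quadratic for |r| <= delta, linear beyond: extreme residuals
    # contribute far less than under the squared loss.
    a = abs(r)
    if a <= delta:
        return 0.5 * a * a
    return delta * (a - delta / 2)

residuals = [0.1, -0.2, 0.3, 5.0]   # last value is an outlier

sq = [squared_loss(r) for r in residuals]
hu = [huber_loss(r) for r in residuals]

# The outlier's penalty under each loss: Huber caps its influence.
print(sq[-1], hu[-1])
```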
The key variables for establishing an adequate predictive model in the different experiments performed were consistent with the weights given by the simulation results (Figure 5). The predictive performance improved when both the front and rear contrast data ( Δ T F ,   Δ T R ) were used. Monitoring both sides of the sample improved predictive performance but, in the case of defect length prediction, adequate results could also be obtained by monitoring only the front surface (reflection).
Future lines of work will address the testing of the calculated algorithms against experimental results and a deeper study of the regression models by modifying different parameters, especially in the case of the multilayer perceptron. Moreover, a mesostructural model should be proposed to take into account the presence of pores caused by the material deposition process, which can be confused with small defects and cause variations in the mechanical properties at different points and in different directions.

Author Contributions

Conceptualization, M.R.-M., J.G.F., and D.G.-A.; methodology, M.R.-M. and J.G.F.; software, M.R.-M. and J.G.F.; validation, M.R.-M., J.G.F., D.G.-A., and F.J.M.; formal analysis, M.R.-M. and J.G.F.; investigation, M.R.-M. and J.G.F.; resources, M.R.-M., J.G.F., D.G.-A., F.J.M., Á.L.M., and R.G.-M.; data curation, M.R.-M., J.G.F., D.G.-A., J.P., and R.G.-M.; writing—original draft preparation, M.R.-M., J.G.F., D.G.-A., Á.L.M., F.J.M., and R.G.-M.; writing—review and editing, M.R.-M., J.G.F., D.G.-A., Á.L.M., R.G.-M., F.J.M., and J.P.; project administration, D.G.-A. and M.R.-M. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Ministry of Science and Innovation, Government of Spain, through the research project titled Fusion of non-destructive technologies and numerical simulation methods for the inspection and monitoring of joints in new materials and additive manufacturing processes (FaTIMA), with code RTI2018-099850-B-I00.

Acknowledgments

The authors are grateful to the Fundación Universidad de Salamanca for the indirect support provided through the ITACA proof-of-concept project (PC_TCUE_18-20_047), which was helpful for some of the purposes of this article.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A

All the parameters analyzed for the different predictive models with the different configurations are outlined in this section (Table A1, Table A2, Table A3, Table A4, Table A5 and Table A6). Please note that only MAE, R2, and RMSE are reported in the manuscript; the rest of the parameters for each trained model are given here to facilitate reading.
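The Dev.MAE rows in the tables below appear to report the relative deviation, in percent, of the MAE obtained under 5-fold or 15-fold cross-validation from the reference MAE; this interpretation is an assumption, as the source does not define it explicitly. A minimal sketch, using the interaction-regression MAE from Table A1 as reference and a hypothetical 5-fold value:

```python
def dev_mae(mae_kfold, mae_reference):
    # Relative deviation of the k-fold MAE from the reference MAE, in
    # percent. Negative values mean the k-fold estimate was lower.
    return 100.0 * (mae_kfold - mae_reference) / mae_reference

ref = 5.148e-5          # reference MAE (Table A1, interaction, no exclusions)
five_fold = 5.159e-5    # hypothetical 5-fold cross-validated MAE

print(f"{dev_mae(five_fold, ref):+.2f}%")   # prints +0.21%
```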
Table A1. Performance results for thickness predictor models using dataset of 500 sets of values.
Excluding Features500 DataRegression
LinearInteractionStepwise
NoneRMSE1.046 × 10−46.560 × 10−58.234 × 10−5
R20.470.790.68
MAE8.690 × 10−55.148 × 10−56.612 × 10−5
Training time0.4990.610108.630
Dev.MAE 5 Folder−0.09%0.21%1.63%
Dev.MAE 15 Folder−0.60%−0.04%2.23%
L D RMSE1.056 × 10−47.388 × 10−58.153 × 10−5
R20.470.740.68
MAE8.720 × 10−55.884 × 10−56.492 × 10−5
Training time0.4540.49454.771
Dev.MAE 5 Folder−0.09%2.20%2.06%
Dev.MAE 15 Folder−1.44%−1.39%−2.45%
L D ,   ε RMSE1.070 × 10−47.286 × 10−58.320 × 10−5
R20.460.750.67
MAE8.710 × 10−55.777 × 10−56.610 × 10−5
Training time0.4300.43233.783
Dev.MAE 5 Folder−0.19%2.09%1.91%
Dev.MAE 15 Folder0.46%−1.58%0.37%
L D ,   ε ,   T E RMSE1.053 × 10−47.575 × 10−58.353 × 10−5
R20.470.730.67
MAE8.688 × 10−55.985 × 10−56.655 × 10−5
Training time0.4450.39722.331
Dev.MAE 5 Folder−0.51%−1.76%2.55%
Dev.MAE 15 Folder−0.09%1.26%−0.80%
L D ,   ε ,   T E , c,RMSE1.060 × 10−48.290 × 10−58.562 × 10−5
R20.460.670.65
MAE8.730 × 10−56.675 × 10−56.890 × 10−5
Training time0.4430.39411.998
Dev.MAE 5 Folder0.59%−0.87%−0.60%
Dev.MAE 15 Folder−0.37%1.87%2.01%
L D ,   ε ,   T E , c,   ρ RMSE1.080 × 10−48.773 × 10−58.886 × 10−5
R20.450.630.62
MAE8.910 × 10−57.038 × 10−57.134 × 10−5
Training time0.4900.4276.739
Dev.MAE 5 Folder−0.29%−3.04%0.56%
Dev.MAE 15 Folder0.38%−0.85%−1.71%
Table A2. Performance results for defect length predictor models using dataset of 100 sets of values.
Considering Contrast RearWithout Contrast Rear
RegressionRegression
Excluding Features LinearInteractionStepwiseLinearInteractionStepwise
NoneRMSE2.671 × 10−33.330 × 10−31.949 × 10−32.440 × 10−31.995 × 10−31.853 × 10−3
R20.190.250.570.320.540.61
MAE2.180 × 10−32.299 × 10−31.506 × 10−32.027 × 10−31.534 × 10−31.470 × 10−3
Training time0.3210.30332.7870.4160.47730.602
Dev.MAE 5 Folder−8.49%−16.74%20.36%−2.86%10.10%−3.21%
Dev.MAE 15 Folder−4.60%−27.24%−5.04%−1.14%10.58%−6.36%
t D RMSE2.633 × 10−32.382 × 10−31.864 × 10−32.423 × 10−32.008 × 10−31.736 × 10−3
R20.220.360.610.320.540.65
MAE2.164 × 10−31.719 × 10−31.439 × 10−32.033 × 10−31.624 × 10−31.400 × 10−3
Training time0.3040.28620.3970.5180.61829.631
Dev.MAE 5 Folder−8.50%−13.81%−3.51%0.22%6.47%2.94%
Dev.MAE 15 Folder−4.87%−15.70%−3.88%−8.06%−15.87%−1.49%
t D ,   ε RMSE2.613 × 10−32.144 × 10−31.854 × 10−32.405 × 10−31.821 × 10−31.782 × 10−3
R20.230.480.610.330.620.63
MAE2.156 × 10−31.527 × 10−31.456 × 10−32.015 × 10−31.507 × 10−31.509 × 10−3
Training time0.3200.26114.6400.5360.54820.132
Dev.MAE 5 Folder−9.18%−17.08%−10.89%−0.04%3.40%−5.42%
Dev.MAE 15 Folder−6.16%−14.08%−4.83%−7.52%−10.13%−8.80%
t D ,   ε ,   T E RMSE2.556 × 10−32.040 × 10−31.770 × 10−32.392 × 10−31.859 × 10−31.804 × 10−3
R20.260.530.650.340.60.63
MAE2.108 × 10−31.482 × 10−31.411 × 10−31.994 × 10−31.443 × 10−31.409 × 10−3
Training time0.3320.24910.1050.5280.50913.732
Dev.MAE 5 Folder−6.55%−5.71%−1.92%1.51%7.46%26.86%
Dev.MAE 15 Folder−4.30%0.76%5.47%−5.57%−0.15%3.52%
t D ,   ε ,   T E , kRMSE2.494 × 10−31.820 × 10−31.893 × 10−32.452 × 10−32.092 × 10−32.132 × 10−3
R20.30.630.60.310.50.48
MAE2.013 × 10−31.398 × 10−31.471 × 10−32.021 × 10−31.689 × 10−31.764 × 10−3
Training time0.3560.2697.2930.4820.3965.240
Dev.MAE 5 Folder−2.69%−7.08%4.25%−3.41%−15.14%−15.26%
Dev.MAE 15 Folder0.17%5.97%12.83%−5.51%−3.95%−7.81%
Table A3. Performance results for defect length predictor models using dataset of 500 sets of values. Regression and GPR when “Contrast Rear” is contemplated as feature.
RegressionGaussian Processes Regression
Excluding Features LinearInteractionStepwiseSquare ExpGPRMatern 5/2GPRRational Quadratic GPR
NoneRMSE2.273 × 10−31.276 × 10−31.278 × 10−39.247 × 10−49.460 × 10−49.252 × 10−4
R20.380.810.810.900.890.90
MAE1.805 × 10−39.588 × 10−49.594 × 10−47.256 × 10−47.505 × 10−47.267 × 10−4
Training time0.4950.575102.0105.7116.02116.218
Dev.MAE 5 Folder−0.37%2.98%3.29%3.78%4.33%3.69%
Dev.MAE 15 Folder−0.15%1.48%0.15%−0.50%0.02%−0.56%
t D RMSE2.292 × 10−31.367 × 10−31.489 × 10−39.963 × 10−48.720 × 10−49.975 × 10−4
R20.370.780.740.880.910.88
MAE1.800 × 10−39.663 × 10−41.017 × 10−36.967 × 10−46.940 × 10−46.976 × 10−4
Training time0.5010.56265.6406.7688.27817.785
Dev.MAE 5 Folder0.22%8.04%0.90%8.08%9.40%7.98%
Dev.MAE 15 Folder0.82%6.16%−0.46%2.96%6.27%2.97%
t D ,   ε RMSE2.259 × 10−31.341 × 10−31.321 × 10−38.221 × 10−48.546 × 10−48.221 × 10−4
R20.390.790.790.920.910.92
MAE1.804 × 10−31.027 × 10−31.031 × 10−36.665 × 10−46.967 × 10−46.666 × 10−4
Training time0.6280.71447.6155.3393.41815.990
Dev.MAE 5 Folder0.42%4.66%5.61%3.65%3.93%3.67%
Dev.MAE 15 Folder−0.22%−0.44%−1.52%−2.56%−2.02%−2.56%
t D ,   ε ,   T E RMSE2.277 × 10−31.485 × 10−31.478 × 10−31.172 × 10−31.173 × 10−31.172 × 10−3
R20.380.740.740.840.840.84
MAE1.819 × 10−31.156 × 10−31.152 × 10−39.310 × 10−49.374 × 10−49.310 × 10−4
Training time0.5610.47726.1483.5494.60212.395
Dev.MAE 5 Folder−0.42%−0.22%−0.35%−1.00%−0.84%−0.99%
Dev.MAE 15 Folder0.20%0.55%0.71%−1.11%−1.17%−1.11%
t D ,   ε ,   T E ,   k RMSE2.264 × 10−31.447 × 10−31.454 × 10−31.158 × 10−31.153 × 10−31.158 × 10−3
R20.390.750.750.840.840.84
MAE1.810 × 10−31.130 × 10−31.144 × 10−39.120 × 10−49.170 × 10−49.120 × 10−4
Training time0.6350.53216.6505.4536.15315.873
Dev.MAE 5 Folder0.72%1.58%1.66%1.32%5.02%1.32%
Dev.MAE 15 Folder0.50%0.28%0.05%−0.54%−0.65%−0.55%
Table A4. Performance results for defect length predictor models using dataset of 500 sets of values. SVM, MLP, and RF when “Contrast Rear” is contemplated as feature.
SVMMultilayer PerceptronRandom Forest
Excluding Features CubicQuadraticMedium Gaussian
NoneRMSE1.774 × 10−31.622 × 10−31.756 × 10−32.241 × 10−32.376 × 10−3
R20.630.690.630.550.36
MAE1.180 × 10−31.246 × 10−31.423 × 10−31.741 × 10−32.031 × 10−3
Training time0.2200.2250.2180.1770.116
Dev.MAE 5 Folder7.22%14.77%6.28%3.96%1.67%
Dev.MAE 15 Folder−3.83%1.07%−0.72%−1.78%−0.10%
t D RMSE2.784 × 10−31.713 × 10−31.737 × 10−32.080 × 10−32.340 × 10−3
R20.080.650.640.600.38
MAE1.210 × 10−31.298 × 10−31.383 × 10−31.650 × 10−31.990 × 10−3
Training time0.2550.2590.2220.1650.107
Dev.MAE 5 Folder0.68%7.22%8.79%3.03%2.51%
Dev.MAE 15 Folder−0.60%−5.78%3.58%−1.21%−0.50%
t D ,   ε RMSE1.961 × 10−31.684 × 10−31.744 × 10−32.070 × 10−32.280 × 10−3
R20.540.660.640.630.42
MAE1.204 × 10−31.302 × 10−31.373 × 10−31.640 × 10−31.940 × 10−3
Training time0.3400.2810.2760.1810.105
Dev.MAE 5 Folder1.15%5.61%2.78%3.05%2.06%
Dev.MAE 15 Folder−6.76%−6.60%1.26%−3.66%−0.52%
t D ,   ε ,   T E RMSE1.900 × 10−31.708 × 10−31.731 × 10−32.060 × 10−32.270 × 10−3
R20.570.650.640.600.42
MAE1.338 × 10−31.338 × 10−31.360 × 10−31.640 × 10−31.930 × 10−3
Training time0.2970.2780.2800.1180.085
Dev.MAE 5 Folder1.27%5.04%12.06%6.10%1.55%
Dev.MAE 15 Folder−6.52%4.62%6.00%2.44%−1.04%
t D ,   ε ,   T E ,   k RMSE1.708 × 10−31.747 × 10−31.720 × 10−32.130 × 10−32.200 × 10−3
R20.650.640.640.580.46
MAE1.284 × 10−31.358 × 10−31.379 × 10−31.690 × 10−31.870 × 10−3
Training time0.4190.4360.3260.0880.085
Dev.MAE 5 Folder15.26%10.90%7.29%3.55%1.60%
Dev.MAE 15 Folder−0.31%−0.37%−0.43%2.37%−0.53%
Table A5. Performance results for defect length predictor models using dataset of 500 sets of values. Regression and GPR when “Contrast Rear” is not contemplated as feature.
RegressionGaussian Processes Regression
Excluding Features LinearInteractionStepwiseSquare ExpGPRMatern 5/2GPRRational Quadratic GPR
NoneRMSE2.231 × 10−31.485 × 10−31.466 × 10−31.072 × 10−31.103 × 10−31.073 × 10−3
R20.410.740.750.860.860.86
MAE1.793 × 10−31.183 × 10−31.168 × 10−38.513 × 10−48.826 × 10−48.531 × 10−4
Training time0.4160.96266.5507.7327.20816.202
Dev.MAE 5 Folder1.10%1.85%2.58%3.84%3.08%4.13%
Dev.MAE 15 Folder1.02%0.89%0.13%−2.08%−2.59%−2.27%
t D RMSE2.285 × 10−31.727 × 10−31.724 × 10−31.501 × 10−31.505 × 10−31.499 × 10−3
R20.380.650.650.730.730.73
MAE1.863 × 10−31.383 × 10−31.379 × 10−31.208 × 10−31.215 × 10−31.209 × 10−3
Training time0.4530.45934.9593.9094.3468.089
Dev.MAE 5 Folder0.12%2.02%3.17%1.37%0.67%0.67%
Dev.MAE 15 Folder0.92%−1.02%0.04%−2.82%−3.29%−3.05%
t D ,   ε RMSE2.277 × 10−31.485 × 10−31.478 × 10−31.172 × 10−31.173 × 10−31.172 × 10−3
R20.380.740.740.840.840.84
MAE1.819 × 10−31.156 × 10−31.152 × 10−39.310 × 10−49.374 × 10−49.310 × 10−4
Training time0.5610.47726.1483.5494.60212.395
Dev.MAE 5 Folder−0.42%−0.22%−0.35%−1.00%−0.84%−0.99%
Dev.MAE 15 Folder0.20%0.55%0.71%−1.11%−1.17%−1.11%
t D ,   ε ,   T E RMSE2.301 × 10−31.809 × 10−31.837 × 10−31.624 × 10−31.631 × 10−31.625 × 10−3
R20.370.610.60.690.680.69
MAE1.881 × 10−31.453 × 10−31.473 × 10−31.301 × 10−31.309 × 10−31.302 × 10−3
Training time0.4810.39313.2912.9853.7598.552
Dev.MAE 5 Folder−0.56%1.18%1.18%−0.03%0.34%0.12%
Dev.MAE 15 Folder0.25%−0.25%−0.26%−0.22%−0.38%−0.31%
t D ,   ε ,   T E ,   k RMSE2.323 × 10−31.984 × 10−31.986 × 10−31.864 × 10−31.866 × 10−31.866 × 10−3
R20.360.530.530.590.590.59
MAE1.913 × 10−31.628 × 10−31.631 × 10−31.516 × 10−31.517 × 10−31.517 × 10−3
Training time0.4940.3686.0562.7623.1329.243
Dev.MAE 5 Folder−0.58%−0.09%−0.75%−14.21%−13.40%−14.07%
Dev.MAE 15 Folder0.22%−0.18%−0.57%0.12%0.13%0.05%
Table A6. Performance results for defect length predictor models using dataset of 500 sets of values. SVM, MLP and RF when “Contrast Rear” is not contemplated as feature.
SVMMultilayer PerceptronRandom Forest
CubicQuadraticMedium Gaussian
NoneRMSE1.677 × 10−31.829 × 10−32.000 × 10−32.290 × 10−32.490 × 10−3
R20.670.600.530.540.30
MAE1.276 × 10−31.440 × 10−31.614 × 10−31.810 × 10−32.140 × 10−3
Training time0.2210.2460.2200.1770.114
Dev.MAE 5 Folder4.73%0.20%2.51%−3.31%0.93%
Dev.MAE 15 Folder0.89%−3.82%1.77%−2.76%−0.47%
t D RMSE1.845 × 10−31.944 × 10−32.066 × 10−32.350 × 10−32.480 × 10−3
R20.600.550.490.520.29
MAE1.443 × 10−31.540 × 10−31.662 × 10−31.890 × 10−32.120 × 10−3
Training time0.2130.2100.2140.1350.113
Dev.MAE 5 Folder4.05%8.07%−3.49%−4.23%0.94%
Dev.MAE 15 Folder−0.63%0.14%−1.28%−2.65%−0.94%
t D ,   ε RMSE1.900 × 10−31.708 × 10−31.731 × 10−32.060 × 10−32.270 × 10−3
R20.570.650.640.600.42
MAE1.338 × 10−31.338 × 10−31.360 × 10−31.640 × 10−31.930 × 10−3
Training time0.2970.2780.2800.1180.085
Dev.MAE 5 Folder1.27%5.04%12.06%6.10%1.55%
Dev.MAE 15 Folder−6.52%4.62%6.00%2.44%−1.04%
t D ,   ε ,   T E RMSE1.991 × 10−32.029 × 10−32.073 × 10−32.400 × 10−32.400 × 10−3
R20.530.510.490.50.34
MAE1.557 × 10−31.606 × 10−31.650 × 10−31.930 × 10−32.050 × 10−3
Training time0.2360.2270.2190.0960.084
Dev.MAE 5 Folder5.04%3.57%1.25%−1.04%0.49%
Dev.MAE 15 Folder−3.19%0.60%−0.47%−2.59%−0.98%
t D ,   ε ,   T E ,   k RMSE2.296 × 10−32.333 × 10−32.151 × 10−32.520 × 10−32.340 × 10−3
R20.370.350.450.450.37
MAE1.793 × 10−31.834 × 10−31.722 × 10−32.040 × 10−31.980 × 10−3
Training time0.2780.2620.2420.0890.080
Dev.MAE 5 Folder−8.80%−9.32%−2.99%−4.41%0.51%
Dev.MAE 15 Folder0.86%−1.85%0.38%−2.45%−0.51%

References

  1. Holmström, J.; Partanen, J.; Tuomi, J.; Walter, M. Rapid manufacturing in the spare parts supply chain: Alternative approaches to capacity deployment. J. Manuf. Technol. Manag. 2010, 21, 687–697. [Google Scholar] [CrossRef]
  2. Maldague, X.P. Theory and Practice of Infrared Technology for Nondestructive Testing; John Wiley & Sons Interscience: New York, NY, USA, 2001. [Google Scholar]
  3. Madruga, F.J.; Sfarra, S.; Perilli, S.; Pivarčiová, E.; López-Higuera, J.M. Measuring the Water Content in Wood Using Step-Heating Thermography and Speckle Patterns-Preliminary Results. Sensors 2020, 20, 316. [Google Scholar] [CrossRef] [Green Version]
  4. Dudzik, S. Two-stage neural algorithm for defect detection and characterization uses an active thermography. Infrared Phys. Technol. 2015, 71, 187–197. [Google Scholar] [CrossRef]
  5. Rodríguez-Martín, M.; Lagüela, S.; González-Aguilera, D.; Martínez-Sánchez, J. Prediction of depth model for cracks in steel using infrared thermography. Infrared Phys. Technol. 2015, 71, 492–500. [Google Scholar] [CrossRef]
  6. Rodríguez-Martín, M.; Lagüela, S.; González-Aguilera, D.; Rodríguez-Gonzálvez, P. Crack-Depth Prediction in Steel Based on Cooling Rate. Adv. Mater. Sci. Eng. 2016, 2016, 1–9. [Google Scholar] [CrossRef] [Green Version]
  7. Carvalho, M.; Martins, A.; Santos, T.G. Simulation and validation of thermography inspection for components produced by additive manufacturing. Appl. Therm. Eng. 2019, 159, 113872. [Google Scholar] [CrossRef]
  8. Balageas, D. Thickness or diffusivity measurements from front-face flash experiments using the TSR (thermographic signal reconstruction) approach. In Proceedings of the 2010 International Conference on Quantitative InfraRed Thermography, Quebec, QC, Canada, 27–30 July 2010. [Google Scholar] [CrossRef]
  9. Grys, S.; Vokorokos, L.; Borowik, L. Size determination of subsurface defect by active thermography—Simulation research. Infrared Phys. Technol. 2014, 62, 147–153. [Google Scholar] [CrossRef]
  10. Pastuszak, P.D. Characterization of Defects in Curved Composite Structures Using Active Infrared Thermography. Procedia Eng. 2016, 157, 325–332. [Google Scholar] [CrossRef] [Green Version]
  11. Abaqus. Analysis User’s Manual Version 2019; Simulia: Cracow, Poland, 2019. [Google Scholar]
  12. Ghadermazi, K.; Khozeimeh, M.; Taheri-Behrooz, F.; Safizadeh, M. Delamination detection in glass–epoxy composites using step-phase thermography (SPT). Infrared Phys. Technol. 2015, 72, 204–209. [Google Scholar] [CrossRef]
  13. Mabrouki, F.; Genest, M.; Shi, G.; Fahr, A. Numerical modeling for thermographic inspection of fiber metal laminates. NDT E Int. 2009, 42, 581–588. [Google Scholar] [CrossRef]
  14. Grosso, M.; Lopez, J.E.C.; Silva, V.M.; Soares, S.D.; Rebello, J.M.; Pereira, G.R. Pulsed thermography inspection of adhesive composite joints: Computational simulation model and experimental validation. Compos. Part B Eng. 2016, 106, 1–9. [Google Scholar] [CrossRef]
  15. Hardle, W. Applied Nonparametric Regression; Cambridge University Press (CUP): Cambridge, UK, 1990. [Google Scholar] [CrossRef]
  16. Dudzik, S. Analysis of the accuracy of a neural algorithm for defect depth estimation using PCA processing from active thermography data. Infrared Phys. Technol. 2013, 56, 1–7. [Google Scholar] [CrossRef]
  17. Cruz-Vega, I.; Hernandez-Contreras, D.; Peregrina-Barreto, H.; Rangel-Magdaleno, J.D.J.; Ramirez-Cortes, J.M. Deep Learning Classification for Diabetic Foot Thermograms. Sensors 2020, 20, 1762. [Google Scholar] [CrossRef] [Green Version]
  18. Pardo, A.; Gutiérrez-Gutiérrez, J.A.; López-Higuera, J.; Pogue, B.W.; Conde, O.M. Coloring the Black Box: Visualizing neural network behavior with a self-introspective model (preprint). arXiv 2019, arXiv:1910.04903. [Google Scholar]
  19. Wang, H.; Hsieh, S.-J.; Peng, B.; Zhou, X. Non-metallic coating thickness prediction using artificial neural network and support vector machine with time resolved thermography. Infrared Phys. Technol. 2016, 77, 316–324. [Google Scholar] [CrossRef]
  20. Duan, Y.; Cooling, C.; Ahn, J.S.; Jackson, C.; Flint, A.; Eaton, M.D.; Bluck, M.J. Using a Gaussian process regression inspired method to measure agreement between the experiment and CFD simulations. Int. J. Heat Fluid Flow 2019, 80, 108497. [Google Scholar] [CrossRef]
  21. Yang, J.; Wang, W.; Lin, G.; Li, Q.; Sun, Y.; Sun, Y. Infrared Thermal Imaging-Based Crack Detection Using Deep Learning. IEEE Access 2019, 7, 182060–182077. [Google Scholar] [CrossRef]
  22. Fathi, S.; Dickens, P. Challenges in drop-on-drop deposition of reactive molten nylon materials for additive manufacturing. J. Mater. Process. Technol. 2013, 213, 84–93. [Google Scholar] [CrossRef]
  23. Conner, B.P.; Manogharan, G.P.; Martof, A.N.; Rodomsky, L.M.; Rodomsky, C.M.; Jordan, D.C.; Limperos, J.W. Making sense of 3-D printing: Creating a map of additive manufacturing products and services. Addit. Manuf. 2014, 1, 64–76. [Google Scholar] [CrossRef]
  24. Farina, I.; Singh, N.; Colangelo, F.; Luciano, R.; Bonazzi, G.; Fraternali, F. High-Performance Nylon-6 Sustainable Filaments for Additive Manufacturing. Materials 2019, 12, 3955. [Google Scholar] [CrossRef] [Green Version]
  25. Slotwinski, J.A.; Labarre, E.; Forrest, R.; Crane, E. Analysis of Glass-Filled Nylon in Laser Powder Bed Fusion Additive Manufacturing. JOM 2016, 68, 811–821. [Google Scholar] [CrossRef]
  26. Starr, T.L.; Gornet, T.J.; Usher, J.S. The effect of process conditions on mechanical properties of laser-sintered nylon. Rapid Prototyp. J. 2011, 17, 418–423. [Google Scholar] [CrossRef]
  27. Olivier, D.; Travieso-Rodriguez, J.A.; Borros, S.; Reyes, G.; Jerez-Mesa, R. Influence of building orientation on the flexural strength of laminated object manufacturing specimens. J. Mech. Sci. Technol. 2017, 31, 133–139. [Google Scholar] [CrossRef]
  28. Domingo-Espin, M.; Travieso-Rodriguez, J.A.; Jerez-Mesa, R.; Lluma-Fuentes, J.; Jerez-Mesa, R. Fatigue Performance of ABS Specimens Obtained by Fused Filament Fabrication. Materials 2018, 11, 2521. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  29. Mathworks. MATLAB. 2020. Available online: https://es.mathworks.com/?s_tid=gn_logo (accessed on 10 January 2020).
  30. Weka 3. Data Mining Software in Java. Available online: http://www.cs.waikato.ac.nz/ml/weka/ (accessed on 30 June 2012).
  31. Berk, R.A. Statistical Learning from a Regression Perspective; Springer International Publishing: Cham, Switzerland, 2016. [Google Scholar] [CrossRef]
  32. Alodat, M.; Shakhatreh, M.K. Gaussian process regression with skewed errors. J. Comput. Appl. Math. 2020, 370, 112665. [Google Scholar] [CrossRef]
  33. Chilenski, M.; Greenwald, M.; Marzouk, Y.; Howard, N.T.; White, A.E.; Rice, J.E.; Walk, J.R. Improved profile fitting and quantification of uncertainty in experimental measurements of impurity transport coefficients using Gaussian process regression. Nucl. Fusion 2015, 55, 23012. [Google Scholar] [CrossRef]
  34. Rasmussen, C.E.; Williams, C. Gaussian Processes for Machine Learning; MIT Press: Cambridge, MA, USA, 2006. [Google Scholar]
  35. Dhhan, W.; Midi, H.; Alameer, T. Robust Support Vector Regression Model in the Presence of Outliers and Leverage Points. Mod. Appl. Sci. 2017, 11, 92. [Google Scholar] [CrossRef] [Green Version]
  36. Piri, J.; Shamshirband, S.; Petković, D.; Tong, C.W.; Rehman, M.H.U. Prediction of the solar radiation on the Earth using support vector regression technique. Infrared Phys. Technol. 2015, 68, 179–185. [Google Scholar] [CrossRef]
  37. Breiman, L. Random Forests. Mach. Learn. 2001, 45, 5–32. [Google Scholar] [CrossRef] [Green Version]
  38. Rodríguez-Gonzálvez, P.; Rodríguez-Martín, M. Weld Bead Detection Based on 3D Geometric Features and Machine Learning Approaches. IEEE Access 2019, 7, 14714–14727. [Google Scholar] [CrossRef]
  39. Lam, J.C.; Wan, K.K.; Yang, L. Solar radiation modelling using ANNs for different climates in China. Energy Convers. Manag. 2008, 49, 1080–1090. [Google Scholar] [CrossRef]
  40. Dos Santos, C.M.; Escobedo, J.F.; Teramoto, E.T.; Modenese, S.H. Assessment of ANN and SVM models for estimating normal direct irradiation (Hb). Energy Convers. Manag. 2016, 126, 826–836. [Google Scholar] [CrossRef] [Green Version]
  41. Achieng, K. Modelling of soil moisture retention curve using machine learning techniques: Artificial and deep neural networks vs support vector regression models. Comput. Geosci. 2019, 133, 104320. [Google Scholar] [CrossRef]
Figure 1. Workflow for the hybrid methodology applied.
Figure 2. (a) Geometry of the model and the defect and location of the points used in the thermal study. (b) Biased mesh with greater mesh density close to the defect area. PL and DL refer to the length of the plate and defect, respectively, being PL = 0.1 m and DL = 0.01 m, whereas PT and DT refer to the thickness of the plate and defect, respectively, being PT = 0.005 m and DT = 0.0005 m.
Figure 3. Model temperature distribution at 53 s.
Figure 4. (a) Temperature vs. time at points P1 to P4. (b) Temperature contrast curves P1P2 and P4P3 vs. time.
Figure 5. Pareto plot indicating the influence weight of each input variable in responses (a) “Contrast Front” and (b) “Contrast Rear”.
Figure 6. Approximation surfaces for different input and output variables combinations. (a) “Contrast Front”; (b) “Contrast Rear”.
Figure 7. Interaction regression model for all the features (R2 = 0.79, MAE = 5.148 × 10−5).
Figure 8. (a) Interaction regression model for all the features (R2 = 0.81, MAE = 9.588 × 10−4). (b) Gaussian Regression model for 8 features (R2 = 0.92, MAE = 6.666 × 10−4), including “Contrast Rear” and excluding defect thickness and emissivity coefficient.
Table 1. Material properties correspond to Nylon PA-12 [7]. Ranges of variation of the input variables are in absolute values and %.
FeatureDescriptionInitial ValueLower Range (%)Upper Range (%)
Defect thickness t D (m)Thickness of the defect0.000500.00025 (−50%)0.00075 (+50%)
Defect length L D (m)Length of the quadrangular side of the defect0.0100.005 (−50%)0.015 (+50%)
Specific heat c (J/kgK)Capacity to absorb heat in the material1590795 (−50%)2385 (+50%)
Conductivity coef. k (W/m K)Capacity to transfer heat inside the material0.220.11 (−50%)0.33 (+50%)
Density ρ (kg/m3)Mass divided by volume in the material1100550 (−50%)1650 (+50%)
Environment temperature T E (°C)Temperature of the room air during the experiment24.412.2 (−50%)36.6 (+50%)
Emissivity coef. ε For radiation heat transfer between the material and the environment0.95000.9025 (−5%)0.9975 (+5%)
Film coef. h (W/m2/°C)For convection heat transfer between the material and the environment10.505.25 (−50%)15.75 (+50%)
Max. heating temperature T H (°C)Maximum temperature applied to the upper surface during the heating step12060 (−50%)180 (+50%)
Contrast front Δ T F (°C)Maximum difference between P1 and P2 temperaturesOutput of the FEM simulation
Contrast rear Δ T R (°C)Maximum difference between P4 and P3 temperaturesOutput of the FEM simulation
Table 2. Performance results for defect thickness predictor models using a dataset of 500 sets of values. The models with the best predictive performance are indicated in bold type.
| Excluding Features | Metric | Linear Regression | Interaction Regression | Stepwise Regression |
|---|---|---|---|---|
| None | RMSE | 1.046 × 10−4 | **6.560 × 10−5** | 8.234 × 10−5 |
| | R² | 0.47 | **0.79** | 0.68 |
| | MAE | 8.690 × 10−5 | **5.148 × 10−5** | 6.612 × 10−5 |
| L_D | RMSE | 1.056 × 10−4 | 7.388 × 10−5 | 8.153 × 10−5 |
| | R² | 0.47 | 0.74 | 0.68 |
| | MAE | 8.720 × 10−5 | 5.884 × 10−5 | 6.492 × 10−5 |
| L_D, ε | RMSE | 1.070 × 10−4 | 7.286 × 10−5 | 8.320 × 10−5 |
| | R² | 0.46 | 0.75 | 0.67 |
| | MAE | 8.710 × 10−5 | 5.777 × 10−5 | 6.610 × 10−5 |
| L_D, ε, T_E | RMSE | 1.053 × 10−4 | 7.575 × 10−5 | 8.353 × 10−5 |
| | R² | 0.47 | 0.73 | 0.67 |
| | MAE | 8.688 × 10−5 | 5.985 × 10−5 | 6.655 × 10−5 |
| L_D, ε, T_E, c | RMSE | 1.060 × 10−4 | 8.290 × 10−5 | 8.562 × 10−5 |
| | R² | 0.46 | 0.67 | 0.65 |
| | MAE | 8.730 × 10−5 | 6.675 × 10−5 | 6.890 × 10−5 |
| L_D, ε, T_E, c, ρ | RMSE | 1.080 × 10−4 | 8.773 × 10−5 | 8.886 × 10−5 |
| | R² | 0.45 | 0.63 | 0.62 |
| | MAE | 8.910 × 10−5 | 7.038 × 10−5 | 7.134 × 10−5 |
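The RMSE, R², and MAE values reported in Tables 2–9 follow their standard definitions. A minimal, library-free sketch of how they are computed (the thickness values below are hypothetical, chosen only to illustrate the calculation; they are not from the paper's dataset):

```python
import math

def rmse(y_true, y_pred):
    """Root-mean-square error."""
    return math.sqrt(sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true))

def mae(y_true, y_pred):
    """Mean absolute error."""
    return sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)

def r2(y_true, y_pred):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    mean_t = sum(y_true) / len(y_true)
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean_t) ** 2 for t in y_true)
    return 1.0 - ss_res / ss_tot

# Hypothetical defect thicknesses (m), spanning the ±50% range of Table 1
y_true = [0.00025, 0.00050, 0.00075]
y_pred = [0.00030, 0.00048, 0.00070]
print(rmse(y_true, y_pred), r2(y_true, y_pred), mae(y_true, y_pred))
```

With a thickness target on the order of 10−4 m, an MAE near 5 × 10−5 (as for the best interaction model) corresponds to roughly 10% relative error.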
Table 3. Performance results for defect length predictor models using a dataset of 100 sets of values. The models with the best predictive performance are indicated in bold type.
| Excluding Features | Metric | Linear (with ΔT_R) | Interaction (with ΔT_R) | Stepwise (with ΔT_R) | Linear (without ΔT_R) | Interaction (without ΔT_R) | Stepwise (without ΔT_R) |
|---|---|---|---|---|---|---|---|
| None | RMSE | 2.671 × 10−3 | 3.330 × 10−3 | 1.949 × 10−3 | 2.440 × 10−3 | 1.995 × 10−3 | 1.853 × 10−3 |
| | R² | 0.19 | 0.25 | 0.57 | 0.32 | 0.54 | 0.61 |
| | MAE | 2.180 × 10−3 | 2.299 × 10−3 | 1.506 × 10−3 | 2.027 × 10−3 | 1.534 × 10−3 | 1.470 × 10−3 |
| t_D | RMSE | 2.633 × 10−3 | 2.382 × 10−3 | 1.864 × 10−3 | 2.423 × 10−3 | 2.008 × 10−3 | **1.736 × 10−3** |
| | R² | 0.22 | 0.36 | 0.61 | 0.32 | 0.54 | **0.65** |
| | MAE | 2.164 × 10−3 | 1.719 × 10−3 | 1.439 × 10−3 | 2.033 × 10−3 | 1.624 × 10−3 | **1.400 × 10−3** |
| t_D, ε | RMSE | 2.613 × 10−3 | 2.144 × 10−3 | 1.854 × 10−3 | 2.405 × 10−3 | 1.821 × 10−3 | 1.782 × 10−3 |
| | R² | 0.23 | 0.48 | 0.61 | 0.33 | 0.62 | 0.63 |
| | MAE | 2.156 × 10−3 | 1.527 × 10−3 | 1.456 × 10−3 | 2.015 × 10−3 | 1.507 × 10−3 | 1.509 × 10−3 |
| t_D, ε, T_E | RMSE | 2.556 × 10−3 | 2.040 × 10−3 | 1.770 × 10−3 | 2.392 × 10−3 | 1.859 × 10−3 | 1.804 × 10−3 |
| | R² | 0.26 | 0.53 | 0.65 | 0.34 | 0.60 | 0.63 |
| | MAE | 2.108 × 10−3 | 1.482 × 10−3 | 1.411 × 10−3 | 1.994 × 10−3 | 1.443 × 10−3 | 1.409 × 10−3 |
| t_D, ε, T_E, k | RMSE | 2.494 × 10−3 | 1.820 × 10−3 | 1.893 × 10−3 | 2.452 × 10−3 | 2.092 × 10−3 | 2.132 × 10−3 |
| | R² | 0.30 | 0.63 | 0.60 | 0.31 | 0.50 | 0.48 |
| | MAE | 2.013 × 10−3 | 1.398 × 10−3 | 1.471 × 10−3 | 2.021 × 10−3 | 1.689 × 10−3 | 1.764 × 10−3 |
Table 4. Variation of MAE (100 sets of values) when "Contrast Rear" is excluded as predictive feature, calculated as: ((MAE without ΔT_R − MAE with ΔT_R) / MAE with ΔT_R) × 100.
| Excluding Features | Metric | Linear | Interaction | Stepwise |
|---|---|---|---|---|
| None | RMSE | −8.65% | −40.09% | −4.94% |
| | MAE | −7.03% | −33.28% | −2.42% |
| t_D | RMSE | −7.96% | −15.69% | −6.86% |
| | MAE | −6.07% | −5.50% | −2.71% |
| t_D, ε | RMSE | −7.98% | −15.07% | −3.87% |
| | MAE | −6.52% | −1.32% | 3.65% |
| t_D, ε, T_E | RMSE | −6.45% | −8.85% | 1.93% |
| | MAE | −5.40% | −2.64% | −0.11% |
| t_D, ε, T_E, k | RMSE | −1.70% | 14.96% | 12.65% |
| | MAE | 0.43% | 20.78% | 19.93% |
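The percentages in Table 4 can be reproduced directly from the caption's formula. A minimal Python sketch (the function name is ours, not the paper's), using the interaction-regression MAE values from the "None" row of Table 3 (2.299 × 10−3 with Contrast Rear, 1.534 × 10−3 without):

```python
def mae_variation_pct(mae_without, mae_with):
    """Relative MAE change (%) when "Contrast Rear" is excluded:
    (MAE_without - MAE_with) / MAE_with * 100."""
    return (mae_without - mae_with) / mae_with * 100.0

# Interaction regression, "None" row of Table 3
print(round(mae_variation_pct(1.534e-3, 2.299e-3), 2))  # prints -33.28, matching Table 4
```

A negative value therefore means the model improved (lower MAE) once "Contrast Rear" was removed.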
Table 5. Performance results for defect length predictor models using a dataset of 500 sets of values. Part 1: regression and Gaussian process regression (GPR) models when "Contrast Rear" is included as a feature. The models with the best predictive performance are indicated in bold type.
| Excluding Features | Metric | Linear | Interaction | Stepwise | Square Exp GPR | Matern 5/2 GPR | Rational Quadratic GPR |
|---|---|---|---|---|---|---|---|
| None | RMSE | 2.273 × 10−3 | 1.276 × 10−3 | 1.278 × 10−3 | 9.247 × 10−4 | 9.460 × 10−4 | 9.252 × 10−4 |
| | R² | 0.38 | 0.81 | 0.81 | 0.90 | 0.89 | 0.90 |
| | MAE | 1.805 × 10−3 | 9.588 × 10−4 | 9.594 × 10−4 | 7.256 × 10−4 | 7.505 × 10−4 | 7.267 × 10−4 |
| t_D | RMSE | 2.292 × 10−3 | 1.367 × 10−3 | 1.489 × 10−3 | 9.963 × 10−4 | 8.720 × 10−4 | 9.975 × 10−4 |
| | R² | 0.37 | 0.78 | 0.74 | 0.88 | 0.91 | 0.88 |
| | MAE | 1.800 × 10−3 | 9.663 × 10−4 | 1.017 × 10−3 | 6.967 × 10−4 | 6.940 × 10−4 | 6.976 × 10−4 |
| t_D, ε | RMSE | 2.259 × 10−3 | 1.341 × 10−3 | 1.321 × 10−3 | **8.221 × 10−4** | 8.546 × 10−4 | **8.221 × 10−4** |
| | R² | 0.39 | 0.79 | 0.79 | **0.92** | 0.91 | **0.92** |
| | MAE | 1.804 × 10−3 | 1.027 × 10−3 | 1.031 × 10−3 | **6.665 × 10−4** | 6.967 × 10−4 | **6.666 × 10−4** |
| t_D, ε, T_E | RMSE | 2.277 × 10−3 | 1.485 × 10−3 | 1.478 × 10−3 | 1.172 × 10−3 | 1.173 × 10−3 | 1.172 × 10−3 |
| | R² | 0.38 | 0.74 | 0.74 | 0.84 | 0.84 | 0.84 |
| | MAE | 1.819 × 10−3 | 1.156 × 10−3 | 1.152 × 10−3 | 9.310 × 10−4 | 9.374 × 10−4 | 9.310 × 10−4 |
| t_D, ε, T_E, k | RMSE | 2.264 × 10−3 | 1.447 × 10−3 | 1.454 × 10−3 | 1.158 × 10−3 | 1.153 × 10−3 | 1.158 × 10−3 |
| | R² | 0.39 | 0.75 | 0.75 | 0.84 | 0.84 | 0.84 |
| | MAE | 1.810 × 10−3 | 1.130 × 10−3 | 1.144 × 10−3 | 9.120 × 10−4 | 9.170 × 10−4 | 9.120 × 10−4 |
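For reference, the three GPR covariance functions compared in Table 5 are, in their standard textbook forms (with signal variance σ_f², length scale ℓ, shape parameter α, and r = ‖x − x′‖; these symbols are generic conventions, not taken from the paper):

```latex
\begin{aligned}
k_{\text{SE}}(r) &= \sigma_f^2 \exp\!\left(-\frac{r^2}{2\ell^2}\right) \\
k_{\text{Matérn 5/2}}(r) &= \sigma_f^2 \left(1 + \frac{\sqrt{5}\,r}{\ell} + \frac{5 r^2}{3\ell^2}\right) \exp\!\left(-\frac{\sqrt{5}\,r}{\ell}\right) \\
k_{\text{RQ}}(r) &= \sigma_f^2 \left(1 + \frac{r^2}{2\alpha\ell^2}\right)^{-\alpha}
\end{aligned}
```

The near-identical scores of the squared-exponential and rational-quadratic columns are consistent with the fact that the rational quadratic converges to the squared exponential as α → ∞.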
Table 6. Performance results for defect length predictor models using a dataset of 500 sets of values. Part 2: support vector machines (SVM), multilayer perceptron (MLP), and random forest (RF) when "Contrast Rear" is included as a feature.
| Excluding Features | Metric | Cubic SVM | Quadratic SVM | Medium Gaussian SVM | Multilayer Perceptron | Random Forest |
|---|---|---|---|---|---|---|
| None | RMSE | 1.774 × 10−3 | 1.622 × 10−3 | 1.756 × 10−3 | 2.241 × 10−3 | 2.376 × 10−3 |
| | R² | 0.63 | 0.69 | 0.63 | 0.55 | 0.36 |
| | MAE | 1.180 × 10−3 | 1.246 × 10−3 | 1.423 × 10−3 | 1.741 × 10−3 | 2.031 × 10−3 |
| t_D | RMSE | 2.784 × 10−3 | 1.713 × 10−3 | 1.737 × 10−3 | 2.080 × 10−3 | 2.340 × 10−3 |
| | R² | 0.08 | 0.65 | 0.64 | 0.60 | 0.38 |
| | MAE | 1.210 × 10−3 | 1.298 × 10−3 | 1.383 × 10−3 | 1.650 × 10−3 | 1.990 × 10−3 |
| t_D, ε | RMSE | 1.961 × 10−3 | 1.684 × 10−3 | 1.744 × 10−3 | 2.070 × 10−3 | 2.280 × 10−3 |
| | R² | 0.54 | 0.66 | 0.64 | 0.63 | 0.42 |
| | MAE | 1.204 × 10−3 | 1.302 × 10−3 | 1.373 × 10−3 | 1.640 × 10−3 | 1.940 × 10−3 |
| t_D, ε, T_E | RMSE | 1.900 × 10−3 | 1.708 × 10−3 | 1.731 × 10−3 | 2.060 × 10−3 | 2.270 × 10−3 |
| | R² | 0.57 | 0.65 | 0.64 | 0.60 | 0.42 |
| | MAE | 1.338 × 10−3 | 1.338 × 10−3 | 1.360 × 10−3 | 1.640 × 10−3 | 1.930 × 10−3 |
| t_D, ε, T_E, k | RMSE | 1.708 × 10−3 | 1.747 × 10−3 | 1.720 × 10−3 | 2.130 × 10−3 | 2.200 × 10−3 |
| | R² | 0.65 | 0.64 | 0.64 | 0.58 | 0.46 |
| | MAE | 1.284 × 10−3 | 1.358 × 10−3 | 1.379 × 10−3 | 1.690 × 10−3 | 1.870 × 10−3 |
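The SVM variants in Tables 6 and 8 differ only in their kernel. In standard form (the naming follows common toolbox convention, e.g. MATLAB's Regression Learner; the scale parameters γ and σ are generic assumptions, not values from the paper), "quadratic" and "cubic" are polynomial kernels of degree d = 2 and d = 3, and "medium Gaussian" is an RBF kernel with an intermediate kernel scale:

```latex
k_{\text{poly}}(\mathbf{x}, \mathbf{x}') = \left(1 + \frac{\mathbf{x}^{\top}\mathbf{x}'}{\gamma}\right)^{d},
\qquad
k_{\text{Gauss}}(\mathbf{x}, \mathbf{x}') = \exp\!\left(-\frac{\lVert \mathbf{x} - \mathbf{x}' \rVert^{2}}{2\sigma^{2}}\right)
```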
Table 7. Performance results for defect length predictor models using a dataset of 500 sets of values. Part 1: regression and GPR models when "Contrast Rear" is not included as a feature. The models with the best predictive performance are indicated in bold type.
| Excluding Features | Metric | Linear | Interaction | Stepwise | Square Exp GPR | Matern 5/2 GPR | Rational Quadratic GPR |
|---|---|---|---|---|---|---|---|
| None | RMSE | 2.231 × 10−3 | 1.485 × 10−3 | 1.466 × 10−3 | **1.072 × 10−3** | 1.103 × 10−3 | 1.073 × 10−3 |
| | R² | 0.41 | 0.74 | 0.75 | **0.86** | 0.86 | 0.86 |
| | MAE | 1.793 × 10−3 | 1.183 × 10−3 | 1.168 × 10−3 | **8.513 × 10−4** | 8.826 × 10−4 | 8.531 × 10−4 |
| t_D | RMSE | 2.285 × 10−3 | 1.727 × 10−3 | 1.724 × 10−3 | 1.501 × 10−3 | 1.505 × 10−3 | 1.499 × 10−3 |
| | R² | 0.38 | 0.65 | 0.65 | 0.73 | 0.73 | 0.73 |
| | MAE | 1.863 × 10−3 | 1.383 × 10−3 | 1.379 × 10−3 | 1.208 × 10−3 | 1.215 × 10−3 | 1.209 × 10−3 |
| t_D, ε | RMSE | 2.277 × 10−3 | 1.485 × 10−3 | 1.478 × 10−3 | 1.172 × 10−3 | 1.173 × 10−3 | 1.172 × 10−3 |
| | R² | 0.38 | 0.74 | 0.74 | 0.84 | 0.84 | 0.84 |
| | MAE | 1.819 × 10−3 | 1.156 × 10−3 | 1.152 × 10−3 | 9.310 × 10−4 | 9.374 × 10−4 | 9.310 × 10−4 |
| t_D, ε, T_E | RMSE | 2.301 × 10−3 | 1.809 × 10−3 | 1.837 × 10−3 | 1.624 × 10−3 | 1.631 × 10−3 | 1.625 × 10−3 |
| | R² | 0.37 | 0.61 | 0.60 | 0.69 | 0.68 | 0.69 |
| | MAE | 1.881 × 10−3 | 1.453 × 10−3 | 1.473 × 10−3 | 1.301 × 10−3 | 1.309 × 10−3 | 1.302 × 10−3 |
| t_D, ε, T_E, k | RMSE | 2.323 × 10−3 | 1.984 × 10−3 | 1.986 × 10−3 | 1.864 × 10−3 | 1.866 × 10−3 | 1.866 × 10−3 |
| | R² | 0.36 | 0.53 | 0.53 | 0.59 | 0.59 | 0.59 |
| | MAE | 1.913 × 10−3 | 1.628 × 10−3 | 1.631 × 10−3 | 1.516 × 10−3 | 1.517 × 10−3 | 1.517 × 10−3 |
Table 8. Performance results for defect length predictor models using a dataset of 500 sets of values. Part 2: SVM, MLP, and RF when "Contrast Rear" is not included as a feature.
| Excluding Features | Metric | Cubic SVM | Quadratic SVM | Medium Gaussian SVM | Multilayer Perceptron | Random Forest |
|---|---|---|---|---|---|---|
| None | RMSE | 1.677 × 10−3 | 1.829 × 10−3 | 2.000 × 10−3 | 2.290 × 10−3 | 2.490 × 10−3 |
| | R² | 0.67 | 0.60 | 0.53 | 0.54 | 0.30 |
| | MAE | 1.276 × 10−3 | 1.440 × 10−3 | 1.614 × 10−3 | 1.810 × 10−3 | 2.140 × 10−3 |
| t_D | RMSE | 1.845 × 10−3 | 1.944 × 10−3 | 2.066 × 10−3 | 2.350 × 10−3 | 2.480 × 10−3 |
| | R² | 0.60 | 0.55 | 0.49 | 0.52 | 0.29 |
| | MAE | 1.443 × 10−3 | 1.540 × 10−3 | 1.662 × 10−3 | 1.890 × 10−3 | 2.120 × 10−3 |
| t_D, ε | RMSE | 1.900 × 10−3 | 1.708 × 10−3 | 1.731 × 10−3 | 2.060 × 10−3 | 2.270 × 10−3 |
| | R² | 0.57 | 0.65 | 0.64 | 0.60 | 0.42 |
| | MAE | 1.338 × 10−3 | 1.338 × 10−3 | 1.360 × 10−3 | 1.640 × 10−3 | 1.930 × 10−3 |
| t_D, ε, T_E | RMSE | 1.991 × 10−3 | 2.029 × 10−3 | 2.073 × 10−3 | 2.400 × 10−3 | 2.400 × 10−3 |
| | R² | 0.53 | 0.51 | 0.49 | 0.50 | 0.34 |
| | MAE | 1.557 × 10−3 | 1.606 × 10−3 | 1.650 × 10−3 | 1.930 × 10−3 | 2.050 × 10−3 |
| t_D, ε, T_E, k | RMSE | 2.296 × 10−3 | 2.333 × 10−3 | 2.151 × 10−3 | 2.520 × 10−3 | 2.340 × 10−3 |
| | R² | 0.37 | 0.35 | 0.45 | 0.45 | 0.37 |
| | MAE | 1.793 × 10−3 | 1.834 × 10−3 | 1.722 × 10−3 | 2.040 × 10−3 | 1.980 × 10−3 |
Table 9. Variation of MAE (500 sets of values) when "Contrast Rear" is excluded as predictive feature, calculated as: ((MAE without ΔT_R − MAE with ΔT_R) / MAE with ΔT_R) × 100.
| Excluding Features | Metric | Linear | Interaction | Stepwise | Square Exp GPR | Matern 5/2 GPR | Rational Quadratic GPR | Cubic SVM | Quadratic SVM | Medium Gaussian SVM | MLP | RF |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| None | RMSE | −1.85% | 16.30% | 14.77% | 15.93% | 16.64% | 15.97% | −5.45% | 12.74% | 13.87% | 2.19% | 4.80% |
| | MAE | −0.68% | 23.33% | 21.76% | 17.33% | 17.60% | 17.39% | 8.13% | 15.57% | 13.41% | 3.96% | 5.37% |
| t_D | RMSE | −0.34% | 26.31% | 15.73% | 50.66% | 72.61% | 50.25% | −33.73% | 13.44% | 18.95% | 12.98% | 5.98% |
| | MAE | 3.51% | 43.12% | 35.64% | 73.39% | 75.01% | 73.34% | 19.21% | 18.59% | 20.16% | 14.55% | 6.53% |
| t_D, ε | RMSE | 0.79% | 10.72% | 11.89% | 42.53% | 37.30% | 42.52% | −3.10% | 1.44% | −0.74% | −0.48% | −0.44% |
| | MAE | 0.81% | 12.51% | 11.74% | 39.66% | 34.56% | 39.65% | 11.13% | 2.78% | −0.92% | 0.00% | −0.52% |
| t_D, ε, T_E | RMSE | 1.03% | 21.86% | 24.22% | 38.61% | 39.00% | 38.70% | 4.79% | 18.79% | 19.76% | 16.50% | 5.73% |
| | MAE | 3.41% | 25.70% | 27.84% | 39.71% | 39.62% | 39.87% | 16.33% | 19.99% | 21.30% | 17.68% | 6.22% |
| t_D, ε, T_E, k | RMSE | 2.61% | 37.08% | 36.58% | 61.02% | 61.86% | 61.20% | 34.44% | 33.54% | 25.03% | 18.31% | 6.36% |
| | MAE | 5.66% | 44.03% | 42.56% | 66.18% | 65.39% | 66.35% | 39.64% | 35.06% | 24.84% | 20.71% | 5.88% |
| Mean % | | 1.50% | 26.10% | 24.27% | 44.50% | 45.96% | 44.52% | 9.14% | 17.19% | 15.57% | 10.64% | 4.59% |
Table 10. Qualitative comparison of the predictive models based on results.
| Model | Variant | Performance (length) | Processing Time (length) | Outliers Influence (length) | Sensitive to Lack of Contrast Front (length) | Sample Size Sensitive * (length) | Performance (thickness) | Processing Time (thickness) | Outliers Sensitive (thickness) |
|---|---|---|---|---|---|---|---|---|---|
| Regression | Linear | Very low | Low | High | Low | Moderate | Low | Low | Low |
| Regression | Interaction | High | Low | Moderate | High | Moderate | Moderate | Low | Low |
| Regression | Stepwise | High | Very high | Low | High | Moderate | Moderate | Very high | Low |
| GPR | Square exp. | Very high | Moderate | Low | Very high | High | | | |
| GPR | Matern 5/2 | Very high | Moderate | Low | Very high | High | | | |
| GPR | Rational Quadratic | Very high | High | Low | Very high | High | | | |
| SVM | Cubic | Moderate | Very low | High | Low | High | | | |
| SVM | Quadratic | Moderate | Very low | Medium | Moderate | High | | | |
| SVM | Medium Gaussian | Moderate | Very low | Medium | Moderate | High | | | |
| Multilayer Perceptron | | Low | Very low | Moderate | Low | High | | | |
| Random Forest | | Very low | Very low | Low | Very low | High | | | |
* Only based on the experiments with two datasets (100 and 500 sets of values).

Rodríguez-Martín, M.; Fueyo, J.G.; Gonzalez-Aguilera, D.; Madruga, F.J.; García-Martín, R.; Muñóz, Á.L.; Pisonero, J. Predictive Models for the Characterization of Internal Defects in Additive Materials from Active Thermography Sequences Supported by Machine Learning Methods. Sensors 2020, 20, 3982. https://doi.org/10.3390/s20143982
