Article

Vacuum Thermoforming Process: An Approach to Modeling and Optimization Using Artificial Neural Networks

by
Wanderson De Oliveira Leite
1,*,
Juan Carlos Campos Rubio
2,
Francisco Mata Cabrera
3,
Angeles Carrasco
4 and
Issam Hanafi
5
1
Departamento de Mecânica, Instituto Federal de Educação, Ciência e Tecnologia de Minas Gerais—Campus Betim, Rua Itaguaçu, No. 595, São Caetano, 32677-780 Betim, Brazil
2
Escola de Engenharia, Departamento de Engenharia Mecânica, Universidade Federal de Minas Gerais, Av. Pres. Antônio Carlos, No. 6627, Pampulha, 31270-901 Belo Horizonte, Brazil
3
Escuela de Ingeniería Minera e Industrial de Almadén, Departamento Mecánica Aplicada e Ingeniería de Proyectos, Universidad de Castilla-La Mancha, Plaza Manuel Meca No. 1, 13400 Ciudad Real, Spain
4
Escuela de Ingeniería Minera e Industrial de Almadén, Departamento de Filología Moderna, Universidad de Castilla-La Mancha, Plaza Manuel Meca No. 1, 13400 Ciudad Real, Spain
5
Ecole Nationale des Sciences Appliquées d’Al Hoceima (ENSAH), Département of Civil and Environmental Engineering, 32000 Al Hoceima, Morocco
*
Author to whom correspondence should be addressed.
Polymers 2018, 10(2), 143; https://doi.org/10.3390/polym10020143
Submission received: 16 December 2017 / Revised: 23 January 2018 / Accepted: 31 January 2018 / Published: 2 February 2018
(This article belongs to the Special Issue Model-Based Polymer Processing)

Abstract:
In the vacuum thermoforming process, the combined effects of the processing parameters on the minimization of the set of product deviations are conflicting and non-linear, which makes their mathematical modelling complex and multi-objective. Therefore, this work developed prediction and optimization models using artificial neural networks (ANNs), with the set of processing parameters as the networks' inputs and the group of deviations as the outputs, together with an objective function for deviation minimization. To generate the ANN data, samples of a standard product were produced in polystyrene in experimental tests following a fractional factorial design (2^(k−p)). Preliminary computational studies were carried out with various ANN structures and configurations on the test data until satisfactory models were reached and, afterwards, multi-criteria optimization models were developed. Validation tests carried out on the models' predictions and solutions showed that their estimates have prediction errors within the range of values found in the samples produced. Thus, it was demonstrated that, within certain limits, ANN models are valid for modelling the vacuum thermoforming process with multiple input parameters and objectives, using a reduced quantity of data.

1. Introduction

Thermoforming of polymers is a generic term for a group of processes in which a preheated polymer sheet is formed or stretched over a mold to produce a specific shape. It is considered one of the oldest methods of processing plastic materials [1]. The process that uses negative vacuum pressure to stretch the heated polymer sheet over a mold is called vacuum forming or vacuum thermoforming [2]. Specifically, in this forming and/or stretching technique, a sheet of thermoplastic material is preheated by a heating system (Figure 1a,b) and forced against the mold surface (positive or negative) by the negative vacuum pressure produced in the space between the mold and the sheet (Figure 1c): through the mold suction holes, a vacuum pump "sucks" the air from this space and "pulls" the sheet against the surface of the mold, transferring the mold shape to the sheet, which takes its final form after cooling and removal of the excess material (Figure 1d) [3,4]. The typical sequence of this technique, after Ghobadnam et al. [5], is presented in Figure 1.
However, what is observed in practice is that relying on prior knowledge or trial-and-error methods to predict the final result of the process and the quality of the product can be far more difficult than expected. Thus, the evaluation of the final performance of the system is sometimes complex, due to various factors such as the raw material of the mold, the characteristics of the equipment, and the type and raw material of the sheet [6,7,8]. In addition, the process often highlights conflicts between aspects of quality and the adjustment of the process control variables [9,10]. In recent years, several authors have therefore worked on modelling and predicting the quality of the final product of the vacuum thermoforming process.
Thus, Engelmann and Salmang [6] presented a computational statistics and data analysis model, while Sala et al. [11] and Warby et al. [12], with a complementary focus, worked on the development of an elastic-plastic model for thickness analysis. Many studies concentrated on aspects of mold geometry and process parameters to verify their influence on the wall thickness distribution [5,13,14,15]. A hierarchically-ordered, multi-stage optimization strategy for solving complex engineering problems was also developed [3,16]. Martin et al. [17] presented a study of the instrumentation and control of thermoforming equipment and the real-time analysis and control of multiple variables; the accuracy of the developed controller and its prospective real-time application are evidenced by their results. Other studies focused on the modeling, simulation, and optimization of the heating system by different methods and techniques [18,19,20].
However, for complex manufacturing processes such as this, Meziane et al. [21], Tadeusiewicz [22] and Pham [23] suggest that traditional approaches to process control fail to capture all aspects of the process or of its subsystems. Sometimes the amount and type of variables involved make the computational and mathematical modelling of the system a complex, multi-variate, multi-objective problem with non-linear and conflicting objectives [9,10,24]. Accordingly, in the last few years, several studies have used computational intelligence (CI) techniques to model the non-linear characteristics and conflicting objectives of these processes. This research employs a series of computational tools to solve problems that would otherwise require human intelligence, with artificial neural networks (ANNs) being the most intensively investigated and studied [25,26].
ANNs are mathematical computational models inspired by biological neural structures, or biological neurons [27,28]. An artificial neuron, or perceptron, is constituted of three elements: an input vector "X", a weight vector "W", and a combiner or sum function, which may be linear or not; in some cases, a bias, θ, is included [29]. The response "Y" of the neuron is obtained by applying the activation function φ to the output of the combiner or sum function, Y = φ(W × X + θ) [30].
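As an illustration, the response of a single artificial neuron can be computed directly from this expression. The sketch below (Python/NumPy) uses the hyperbolic tangent as the activation function φ; the input, weight, and bias values are hypothetical:

```python
import numpy as np

def neuron(x, w, theta, phi=np.tanh):
    """Single artificial neuron: Y = phi(W . X + theta)."""
    return phi(np.dot(w, x) + theta)

# Hypothetical input vector, weights, and bias for illustration
x = np.array([0.5, -0.2, 0.8])
w = np.array([0.4, 0.1, -0.3])
y = neuron(x, w, theta=0.05)   # tanh of the weighted sum plus bias
```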
One basic ANN model is the multi-layer perceptron (MLP), which is typically composed of interconnected artificial neurons, usually arranged in a system of nodes or mesh. The MLP generally consists of "n" neurons interconnected in a mesh of nodes and divided into an input layer, an output layer, and one or more hidden layers; between layers, the neurons are connected by their respective weights (the biological synapses), which learn or record knowledge (through weight adjustment) between the input and output layers of the network. Furthermore, the network of layers is connected externally to its supervised training or learning algorithms [26,27].
In the MLP network, using the input and output data, or patterns, the network is trained in a cyclical process by its algorithms, and a performance index is calculated for the network in each training round, or epoch. This supervised training and learning process can continue until the ANN model "learns" to produce the desired outputs for the inputs of its patterns [27], until a performance index of the network, such as the mean squared error (MSE), reaches an error equal to or less than a specified value, or until the network reaches any other stop criterion specified when programming the model. For this purpose, the networks are implemented with training algorithms, the most commonly used being the back-propagation (BP) and Levenberg-Marquardt (LM) algorithms. The BP algorithm is a supervised (batch) learning method that seeks to minimize a global error function, or sum of squared errors (SSE), over the j neurons of the layer(s) at each epoch [31,32]. The LM algorithm, developed by Hagan and Menhaj [33] and implemented in MATLAB® software (MathWorks Inc., Natick, MA, USA) by Demuth and Beale [34], provides a solution to the problem of minimizing a non-linear function, based on the Gauss-Newton method and the gradient descent algorithm via the calculation of Jacobian matrices [35].
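The training cycle described above can be sketched as follows. This is a minimal gradient-descent back-propagation loop in Python/NumPy on synthetic patterns, not the MATLAB® implementation used in this work; the layer sizes, learning rate, stop criterion, and synthetic data are assumptions for the example:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic patterns: 5 inputs -> 4 outputs, mirroring the parameter/deviation setup
X = rng.uniform(-1, 1, (17, 5))
T = np.tanh(X @ rng.uniform(-1, 1, (5, 4)))

# One hidden layer of 8 "tansig" neurons and a linear ("purelin") output layer
W1, b1 = rng.normal(0, 0.5, (5, 8)), np.zeros(8)
W2, b2 = rng.normal(0, 0.5, (8, 4)), np.zeros(4)

lr, goal, history = 0.05, 1e-3, []
for epoch in range(10_000):
    H = np.tanh(X @ W1 + b1)            # forward pass, hidden layer
    Y = H @ W2 + b2                     # forward pass, output layer
    E = Y - T
    history.append(np.mean(E ** 2))     # performance index (MSE) for this epoch
    if history[-1] <= goal:             # stop criterion on the performance index
        break
    # Back-propagate the error gradient and update weights and biases
    dW2, db2 = H.T @ E / len(X), E.mean(0)
    dH = (E @ W2.T) * (1 - H ** 2)      # derivative of tanh
    dW1, db1 = X.T @ dH / len(X), dH.mean(0)
    W2 -= lr * dW2; b2 -= lr * db2
    W1 -= lr * dW1; b1 -= lr * db1
```

In MATLAB® terms this corresponds to batch BP training; the LM algorithm replaces the plain gradient step with a Jacobian-based update.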
The ability to work with complex or multi-dimensional and multi-criteria problems makes ANNs one of the main methods used in engineering for computational modeling [22]. A model with multi-criteria optimization is defined when it is desired simultaneously to optimize several objective functions and, in some cases, these functions are in conflict, or compete with, each other and, thus, the possible optimal solutions do not allow, for example, the maximization of all the objectives in a joint manner [36].
In this context, some authors have developed computational models based on computational intelligence (CI) techniques, associated or not with statistical optimization, for the analysis of the quality characteristics of parts produced by vacuum thermoforming, some of them described by Chang et al. [24]. Likewise, Yang and Hung [9,10] proposed an "inverse" neural network model used to predict the optimum processing conditions; the network inputs included the thickness distribution at various positions of the parts, and the optimal process parameters were obtained as the ANN outputs. Additionally, Küttner et al. [3] and Martin et al. [17] presented the development of a methodology that uses an ANN to optimize the production technologies together with the product design. Finally, Chang et al. [24] tested an inverse ANN model on a laboratory-scale machine, using the desired local thicknesses as inputs and the processing parameters as outputs, with process optimization as the aim.
Thus, the current work first studied the values of the manufacturing parameters and the quality of samples produced by the vacuum thermoforming process on a laboratory scale. These initial experimental results were then used to investigate the computational modeling of the process through several ANN models aimed at correctly predicting the deviation values for a given set of manufacturing parameters. This sequence enabled the study of multi-variable and multi-objective optimization algorithms that use the ANN models to obtain optimum values of the manufacturing parameters simultaneously with the predictions of the group of product deviations. Finally, validation and confirmation tests were carried out to evaluate the ability of each model to simulate the process under new experimental conditions, estimate deviations, verify the efficiency of the approach, and validate the proposed methodology.

2. Experimental Work

2.1. Material, Equipment, and System

For the three-dimensional (3D) design of the model and mold, aspects inherent to the manufacturing process and a contraction of 0.5% were considered [8,37], and computer-aided design (CAD) software integrated with computer-aided manufacturing (CAM) was used. The mold was machined on a computer numeric control (CNC) machine using plates of medium-density fiberboard (MDF) as the raw material. The mold has the dimensional and geometric characteristics of a product standard, and a 3D coordinate measuring machine (3D CMM) was used to determine the dimensional and geometric deviations present in it.
A semi-automated vacuum-forming machine was developed and automated by the researchers. This equipment has the capacity to work with sheets 0.1 to 3.0 mm thick, a useful area of 280 × 340 mm, a mold displacement (z axis) of 150 mm, a 160 mbar vacuum pump with a 1.0 CV motor, an infrared heating system composed of two resistors of 750 and 1000 W, movement by pneumatic systems, and acquisition of temperature data by type-K thermocouples and non-contact infrared sensors. The system is programmable and controlled by a commercial personal computer (PC) integrated with an Arduino microcontroller (Arduino Company Open Source Hardware, Somerville, MA, USA).
In this work, 2.0 × 2.5 m white laminated polystyrene (PS) sheets with a thickness of 1.0 mm were used to manufacture the parts. The sheets were cut to 300 × 360 mm (machine size), cleaned with water and pH-neutral liquid soap, and then dried and packaged in plastic film packages that had previously been heated at 50 °C for two hours.
The commercial equipment and software used in the development of this study included: a Micro-Hite 3D TESA™ 3D coordinate measuring machine (3D CMM, Hexagon AB, Stockholm, Sweden), a Discovery 560 ROMI™ machining center (CNC, INDÚSTRIAS ROMI S.A., São Paulo, Brazil), and an Arduino UNO Revision 3 microcontroller board (ATmega328, Arduino Company). A commercial personal computer (PC) with the Windows® 7 Home Premium 64-bit operating system (Microsoft Company, Redmond, WA, USA), an Intel® Core™ i3-2100 3.10 GHz processor (Intel Corporation, Santa Clara, CA, USA) and 6 GB of RAM was used to integrate the machine with the Arduino system's software and equipment. The software was chosen so that information could be shared, and the main packages used were: Arduino Software (IDE) Release 1.0.5 Revision 2 (Arduino Company) for the Arduino microcontroller board, SolidWorks® 2008 (SOLIDWORKS Corp., Waltham, MA, USA), EdgeCAM® 2010 by SolidWorks® (Vero Software, Brockworth, Gloucester, UK), Reflex software for the Micro-Hite 3D TESA™ (Hexagon AB, Stockholm, Sweden), MiniTab 16® (Minitab, Inc., State College, PA, USA), and MATLAB® 2011 version 7.12.0.635 (R2011a) 64-bit (MathWorks Inc.).

2.2. Parameters and Measurement Procedure

There is no consensus among authors about the measurement parameters and procedures. According to Küttner et al. [3], Muralisrinivasan [4], Yang and Hung [9,10] and Chang et al. [24], several control and quality parameters can be used in the vacuum thermoforming process, depending on the type of equipment, mold, and product geometry. Throne [2], Klein [7], Throne [8] and Chang et al. [24] explain that there is no specific measurement procedure or equipment to be used. Thus, the deviations to be controlled were defined as described in the following paragraphs, with the scales, measurement procedures, and tolerances presented.
For the measurement of the errors, the 3D CMM was used carrying a 4 mm diameter solid probe, calibrated with an error of ±0.004 mm, which has an accuracy of 0.003 mm, together with CAI software. The reference values for the dimensions were calculated based on the final dimensions of the mold. Additionally, according to Throne [2] and Klein [7], a deviation of ±1% for linear dimensions and ±50% for flatness on surfaces is acceptable and, as a reference, the values calculated for the dimensions were adopted as the general criteria for acceptance of the sample dimensions.
Figure 2 presents the geometry of the product standard, where dimensions and deviations to be measured in the samples are represented.
The dimensional deviation height (DDHi) or DEV 01 was defined as:
DDHi = (MHSi − TSH) ⇒ DEV 01i = (MHSi − 57.92)
where TSH is the theoretical sample height; a negative (−) mean value indicates that the height is less than the ideal, and a positive (+) mean value that it is greater than the ideal. For the calculation of DEV 01, eight (8) points were collected on each surface. Additionally, in all the equations in this section, the index i represents the i-th analyzed sample.
The deviation of the diagonal length (DDLi) or DEV 02 is calculated as the difference between the value of MLDSi and the value of TDL:
DDLi = (MLDSi − TDL)
where MLDSi is the measured length of the diagonal in the sample, defined in this work as the quadratic relation of the lateral distances of the upper end of the sample (length and width), and TDL is the theoretical diagonal length of the sample = 207.97 mm, so:
DDLi = DEV 02i = √((widthi)² + (lengthi)²) − 207.97
For the calculation of DEV 02, five points were collected along each lateral of the samples. A negative (−) mean value indicates that the length is smaller than the ideal and a positive mean value that it is greater than the ideal.
The geometric deviation of flatness (GDi) or DEV 03, which has a value of zero (0) for an ideal surface and a positive value otherwise, was calculated as:
GDi = (MGDSi − TGDS) ⇒ DEV 03i = (MGDSi − 0.11)
where MGDSi is the measured geometric deviation of flatness in the sample and TGDS is the theoretical geometric deviation of flatness of the sample, that is, the calculated deviation, which was 0.11 mm. For DEV 03, nine (9) points were collected on the lower/bottom surface of the samples.
The DEV 04, or geometric deviation of the side angles (GDSAi), is expressed in this study as:
GDSAi = (1/z) Σ_{s=1}^{z} GDLAs ⇒ DEV 04i = (1/4) Σ_{s=1}^{4} (LAMFs − TLAFs)
where z is the number of sides and s the evaluated face. GDLAs is the difference between the lateral angle measured on face s of sample i (LAMFs) and the theoretical lateral angle of that face (TLAFs), for s = 1, …, 4: respectively, 95.93°, 95.93°, 96.02°, and 96.06°. For DEV 04, nine (9) points were collected on each surface analyzed.
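The four deviation measures defined above can be summarized in code. The sketch below (Python) implements the deviation equations with the theoretical values given in the text; the sample measurements passed to the functions are hypothetical:

```python
import math

TSH, TDL, TGDS = 57.92, 207.97, 0.11            # theoretical height, diagonal, flatness
TLAF = (95.93, 95.93, 96.02, 96.06)             # theoretical lateral angle of each face

def dev01(mean_height):
    """DEV 01: dimensional deviation of height, MHS - TSH."""
    return mean_height - TSH

def dev02(width, length):
    """DEV 02: deviation of the diagonal length, sqrt(w^2 + l^2) - TDL."""
    return math.hypot(width, length) - TDL

def dev03(measured_flatness):
    """DEV 03: geometric deviation of flatness, MGDS - TGDS."""
    return measured_flatness - TGDS

def dev04(measured_angles):
    """DEV 04: mean difference between measured and theoretical side angles."""
    return sum(m - t for m, t in zip(measured_angles, TLAF)) / len(TLAF)

# Hypothetical sample measurements for illustration
d1 = dev01(57.50)                                # negative: shorter than ideal
d4 = dev04([96.10, 95.80, 96.20, 96.00])
```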

2.3. Experimental Study

In this research, we used the manufacturing parameters (factors) described by Throne [2] and compatible with the geometry of the sample and the equipment, namely: A, heating time (in seconds, s); B, electric heating power (in percentage, %); C, mold actuator power (in bar and cm/s); D, vacuum time (s); and E, vacuum pressure (in millibar, mbar). Table 1 shows the levels/values for each parameter.
The experiment was composed of 68 tests according to a 2^(5−1) resolution V fractional factorial design (Montgomery [38]) with 16 process parameter settings and one center point. For each setting and for the center point, two (2) replicates were performed in a random sequence; in addition, a sample and a repetition were manufactured in the same sequence, totaling 68 pieces (four samples per process parameter setting).
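The construction of such a design can be sketched as follows. This Python fragment generates the 16 coded runs of a 2^(5−1) resolution V half-fraction (defining relation I = ABCDE) plus the center point; the mapping of coded levels to physical values uses an assumed 85-95 s range for heating time, purely for illustration:

```python
from itertools import product

# Base 2^4 full factorial in coded units; the fifth factor is aliased as E = ABCD,
# giving the defining relation I = ABCDE (resolution V)
runs = [(a, b, c, d, a * b * c * d) for a, b, c, d in product((-1, 1), repeat=4)]
runs.append((0, 0, 0, 0, 0))                     # center point -> 17 settings in total

def decode(level, low, high):
    """Map a coded level (-1, 0, +1) to a physical value; bounds are illustrative."""
    return (low + high) / 2 + level * (high - low) / 2

# e.g. coded heating-time levels mapped to an assumed 85-95 s range
physical_A = [decode(r[0], 85, 95) for r in runs]
```

With two replicates of each of the 17 settings, plus the sample/repetition pairs, this yields the 68 pieces described above.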
The 68 PS samples were produced and then cooled completely in an air-conditioned room at 22 °C with 60% humidity. Afterwards, the inspection methods described in the previous section were applied to quantify the linear and geometric dimensions of the samples.
Table 2 shows the types of deviations and the respective values of the sample means (over four samples), the accuracy of the estimate of the sample mean (AE) and the standard deviation (S) of the estimate of the mean [38], for the 17 process parameter settings tested (center point: test No. 17). It can be observed that the data are well distributed for each type of deviation, with the exception of one (1) point for DEV 03, namely standard test 1 (samples 26 and 31 and their repetitions, an outlier).

2.4. Analysis of Data

First, an analysis of variance (ANOVA) was performed to test the factors and their first- and second-order effects and to evaluate whether each factor was significant or not. The ANOVA results for the deviations versus the factors studied are summarized in Table 3 (F-test table), with a confidence level of 95% (α = 0.05), where the critical value for the F distribution is F0.05;1;17 = 4.45.
In general, for the main effects, it can be seen from Table 3 that factors "A" and "B" are the most significant for all deviations and, in particular, for DEV 01. Additionally, for DEV 02, manufacturing parameter B stands out as significant; for DEV 03, all factors are significant; and for DEV 04, in order, the most significant parameters are B, A, and D. Furthermore, many interaction effects are significant for the deviations. It can be concluded that the critical manufacturing parameter for the deviations analyzed is the electric heating power (B), followed by the heating time (A), and also that, except for the vacuum pressure factor (E) in the case of the dimensional deviation of the diagonal length (DEV 02), every factor, or its interaction effect, is significant for at least one of the deviations.
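The significance test underlying Table 3 can be reproduced with standard tools. The sketch below (Python/SciPy) recovers the critical value F0.05;1;17 = 4.45 cited above and wraps the comparison in a small helper; the F statistics passed to it would come from the ANOVA table:

```python
from scipy.stats import f

# Critical value of the F distribution at alpha = 0.05 with (1, 17) degrees of freedom
f_crit = f.ppf(0.95, 1, 17)                      # ~4.45, as used for Table 3

def significant(f0, alpha=0.05, dfn=1, dfd=17):
    """An effect is significant when its F statistic exceeds the critical value."""
    return f0 > f.ppf(1 - alpha, dfn, dfd)
```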
Figure 3 presents the mean deviation values at each factor level, for all factors and each type of deviation. In the figure, we verify that the most relevant factors are those related to heating (A and B). Additionally, in general, there is no predominant relationship between factor levels and lower ranges of deviations, and the relationships between factors are not proportional. Furthermore, the variation of any input variable (+1 or −1) generates modifications in at least one type of deviation. It can be concluded from this analysis that the modification of factor levels cannot be studied in isolation for each type of deviation: the deviations must be evaluated simultaneously and, moreover, none of these factors, or their (second-order) interactions, can be eliminated from a study or computational model of the process, since each is significant for at least one type of deviation.

3. Development of Modeling and Optimization of Process Based on ANN Models

3.1. Modeling, Tests, and Selection of Artificial Neural Network Models

For the programming tests of the multi-layer ANN models, the sequences of factors (process parameter settings) and factor levels of the 2^(5−1) resolution V fractional factorial design with center points were used as the input data of the networks. The output data are the sample means of the deviation results (Table 2).
The networks were tested with the back-propagation and Levenberg-Marquardt training algorithms. The transfer function "tansig" was used in the first layer and, in the other layers, combinations of the functions "purelin" and "tansig" were tested. The various network architectures tested were composed of an input layer with five data (Xi), an output layer with four values (Ylj(p)) and l hidden layers with j neurons in each. Figure 4 presents the general architecture of the ANNs used.
As general training parameters of the ANNs, the following were used: learning rate = 0.001, ratio to decrease learning rate = 0.001, maximum error increment = 0.001 and network performance function = "mae". As general stop parameters of the network, the following were used: performance goal = 0, minimum performance gradient = 1 × 10^−25, maximum number of epochs to train = 10,000, maximum number of validation increases = 100, and maximum momentum constant = 1 × 10^308. Additionally, the mean absolute error (MAE) was adopted in substitution of the MSE as the performance parameter of the network, with MAE ≤ 0.145 (the general MAE of the mean deviation in the samples). Equation (6) describes the calculation of the MAE:
MAE = (1/k) Σ_{j=1}^{k} (1/n) Σ_{i=1}^{n} |e_{j,i}|
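A direct implementation of Equation (6) might look as follows (Python); the residual values shown are hypothetical:

```python
def mae(errors):
    """Equation (6): mean absolute error over k outputs (rows) and n patterns (columns)."""
    k = len(errors)
    return sum(sum(abs(e) for e in row) / len(row) for row in errors) / k

# Hypothetical residuals of a 2-output, 3-pattern network
value = mae([[0.1, -0.2, 0.3], [0.0, 0.1, -0.1]])
```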
For the development of the multi-criteria optimization algorithms based on the ANN models, script codes were implemented and processed using MATLAB® software. In each computational test of an optimization model, for the patterns shown to the ANN, the four initial solutions and the MAE values were recorded, and a new test of the algorithm was then recursively initialized. When the model reached an improved general MAE value in a new test run, the code recorded all the input and output data of the network and classified the run in a sequence of solutions; if the MAE did not improve, the algorithm continued the tests until reaching a network stop criterion and then initialized a new model. At each renewal of the network by a stop criterion, all weights and biases were updated with random values. Each model was tested for up to 2000 epochs or for a total simulation time of 1020 min.
Table 4 summarizes the performance and processing values of the main multi-criteria ANN models and the data of the ANNs tested. In this table, we observe the evolution of the models through the modification of their characteristics, where techniques to improve or simplify ANNs already discussed in other works were applied: the change of the training algorithms (models "D", "K", etc.), the modification of the network structure (models "H", "M", etc.), the modification of the transfer function of the layers (models "T", "W", etc.), the proportional adjustment between the number of patterns of the network and the number of neuron layers (models "P", "V", etc.), and the adjustment of the amount of training and test data of the models [39,40,41].
In Table 4, model "A" was the first satisfactory solution (MAE ≤ 0.145); however, it presents a network structure with many nodes, a considerable number of weights and biases and, in addition, a significant amount of processing time, which results in slow computing. Models D, H, K, M, O, P, and T are intermediate models whose problems evolved or were improved; models "D" and "K", for example, have MAE > 0.145, i.e., prediction errors higher than those found in the process (Table 2). The V, X, Y, and Z models generally achieved the best performances, with prediction errors considerably lower than the limits found in the process samples. These models are theoretically similar and present a network structure that is simplified and reduces the processing time, with differences in the training process, the functions used and the amount of data. Since the amount of data and the functions used can modify the ANN models generated, it cannot be said that the values of the weights and biases are the same and, consequently, the predicted values (for the 68 output data) and the general performances of the ANNs are not the same. Figure 5 presents the values predicted by these models and by model "A" for each type of deviation, together with the target values of each pattern.
As seen in Figure 5, model "A" has significant prediction errors for all deviations, most evident in DEV 02; for example, data point 5 = −0.222 ± 0.010 mm, while model "A" predicts −0.254 mm. Model "V" has several errors in its forecasts, notably data point 5 for DEV 01 and data point 5 for DEV 04. Of the other models, "X" presents, in general, the worst performance in the predictions and one significant prediction error, for test 9 of DEV 03, considering the sample variation with a value of 0.933° ± 0.132°. Models "Y" and "Z" have negligible errors which, within the ranges found in the samples, are considerably lower than those of the other models. The gain in performance is due to the increase in the amount of training and test data.
Figure 6 shows the response surfaces of the "V", "Y", and "Z" models for the temperature variables vs. the types of deviation. Comparing them, we observe that, although the "V" model has a network structure similar to that of the "Y" and "Z" models, the use of a linear transfer function (purelin) in the network contributed to a "linearization" of the surface and to generalization errors (Figure 6(C1–C4)); this was generally observed in the other models as well. In contrast, the "Y" and "Z" models use hyperbolic tangent sigmoid transfer functions (tansig), which contributed to the non-linear generalization of the models. However, as shown in Figure 6(B1–B4), the amount of data used in model "Y" was not yet adequate to generate an improved model, which was only achieved with the progressive increase in the amount of data of model "Z" (Figure 6(A1–A4)), making this model the most suitable for this work.

3.2. Modeling and Test of Multi-Criteria Optimization Algorithm Models

The multi-criteria optimization algorithms were developed based on the "Z" model (Table 4). The coefficient of performance, or objective function, of the algorithm for the simultaneous minimization of the responses [36] was defined by Equation (7):
O_j = (1/8) Σ_{i=1}^{4} {(Y_{i,j}(p) / admissible error_i) × weight_i}
where j represents the j-th coefficient of performance for one (1) solution vector and i the deviation type, with i = 1, 2, 3, and 4 for the deviations DEV 01, DEV 02, DEV 03, and DEV 04. The values of the admissible errors for i = 1, 2, …, 4 were defined as |0.6 mm|, |2.1 mm|, |1 mm| and |0.72°|, and the i-th weights adopted are 2, 2, 3, and 1, respectively.
With these data, new codes were programmed with two variations of the algorithm, each with its own domain, constraints, and discretization. The data used are described in Table 5.
The two variations of the algorithm were processed according to the same logic: the input values for the j-th possible solutions were generated in a data matrix, and then the matrix, the ANN model, and the sub-codes were used to find an initial solution. Next, the deviations of this solution were determined and the value of the coefficient of performance (Oj) calculated. Finally, the information and data of this possible solution were recorded in a control table in decreasing order. Once this part was processed, the algorithm returned to the first step (internal loop), repeating the process in search of an improved solution; when one was found, the data of the new solution were again written to the decreasing control table. The process was repeated until the model had run through the entire solution space and thus found the global minimum value of the solution vector Oj and the optimal manufacturing parameters. Table 6 and Table 7 present the best results.
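The exhaustive search over the discretized solution space can be sketched as below (Python). The `predict` function is a hypothetical stand-in for the trained ANN "Z" model, and the parameter grids are illustrative rather than the exact domains of Table 5; the `performance` function follows Equation (7), taking absolute deviations so that negative deviations are also penalized (an assumption of this sketch):

```python
import itertools

WEIGHTS = (2, 2, 3, 1)                           # weights for DEV 01..04
ADMISSIBLE = (0.6, 2.1, 1.0, 0.72)               # admissible errors for DEV 01..04

def performance(deviations):
    """Coefficient of performance O_j: weighted, normalized deviations (Equation (7))."""
    return sum(abs(d) / a * w for d, a, w in zip(deviations, ADMISSIBLE, WEIGHTS)) / 8

def predict(params):
    """Hypothetical stand-in for the trained ANN model; returns four deviations."""
    a, b, c, d, e = params
    return (a * 0.002 - 0.15, b * 0.004 - 0.35, 0.05 + c * 0.001, d * 0.01 - 0.06)

# Discretized solution space (illustrative grids, not the paper's exact domains)
grid = itertools.product(range(85, 96, 5),       # A: heating time (s)
                         range(90, 101, 5),      # B: heating power (%)
                         range(85, 101, 5),      # C: mold actuator power (%)
                         (6.3, 8.1),             # D: vacuum time (s)
                         (12.5, 15.0))           # E: vacuum pressure (mbar)

# Exhaustive scan of the whole space for the global minimum of O_j
best = min(grid, key=lambda p: performance(predict(p)))
```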
In Table 6 and Table 7 we see that several configurations have the same, or very close, values of Oj, which was expected when dealing with a problem with multiple solution spaces; all of them are possible optimal solutions to the problem. However, analyzing Figure 3, we see that, in general, for the set of deviations, factor "A" gives better results at levels ≥85, factor "B" at levels ≥95, factor "C" at levels ≤92.5, factor "D" at mean levels ≥8.1, and factor "E" close to levels ≥12.5. From this it follows that the first solution in Table 6 and the sixth solution in Table 7 are the most appropriate solutions to the problem.

3.3. Confirmation Experiment

To validate the multi-criteria optimization models developed, new experimental tests were performed with the selected factors and levels. For the processing of the samples, two test sequences were performed with the selected process parameter settings (solutions), where five (5) sequentially-manufactured repetitions were performed for each type of setting. The same experimental conditions were preserved, as well as the same raw material and infrastructure, and the same steps of the experimental tests were followed. Afterwards, the samples were inspected adopting the procedures already described, and the deviations were calculated as before.
Table 8 and Table 9 present the expected values of the means of the four deviations for the samples in the validation tests, with the 95% confidence interval (CI) on the mean (n = 5 and α = 0.05). The predictions, and the results of the best samples by Oj value in the main experimental tests (standard test number 5, Table 2), are also shown.
Table 8 and Table 9 show that the validation samples have mean deviations at lower levels than those of the main experimental tests, and their CI limits are also lower. In terms of average values, there is a significant improvement of 20% compared to the best samples of each test type (type “A” = 18% and type “B” = 22.5%). Regarding the multi-criteria optimization models, the deviations predicted by the models fall within the CI limits of the validation samples. Relative to the mean values of these samples, the predictions of the type “A” model have an average error of 13.2% and those of the type “B” model 15.5%, both inside the CI. Furthermore, the values of Oj are, on average, 76% below the tolerance limits defined in this work.
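The 95% confidence intervals on the means reported in Tables 8 and 9 follow the usual Student-t construction for small samples. A minimal sketch with illustrative numbers (not the measured data; for n = 5, the tabulated t(0.025, 4) = 2.776):

```python
import math
from statistics import mean, stdev

def ci95_mean(samples, t_crit):
    """Two-sided CI on the mean: x_bar +/- t * s / sqrt(n), where t_crit is
    the Student-t value for alpha = 0.05 and n - 1 degrees of freedom."""
    n = len(samples)
    half_width = t_crit * stdev(samples) / math.sqrt(n)
    return mean(samples) - half_width, mean(samples) + half_width

# Five hypothetical repetitions of one deviation measurement (n = 5).
devs = [-0.26, -0.24, -0.27, -0.25, -0.26]
lo, hi = ci95_mean(devs, t_crit=2.776)
print(round(lo, 3), round(hi, 3))  # -0.27 -0.242
```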

4. Conclusions

In general, it is concluded that the ANN models developed in this work were able to simultaneously and satisfactorily model the geometric deviations in the polymer vacuum thermoforming process, in which the quality parameters and the manufacturing variables have conflicting objectives, using a laboratory infrastructure and a small number of tests.
The tests showed that, to minimize deviations, factor “A” should be set between 85 and 95 s, “B” within the range of 87.5% to 100%, “C” in the range of 85% to 100%, “D” from 6.3 to 8.1 s, and “E” between 12.5 and 15 mbar. The main factors of the process are heating time (A) and heating electric power (B), and understanding their interaction is the critical point for minimizing the set of deviations. In addition, the analysis of the experimental results does not allow the selection of a single set of factors and levels that simultaneously optimizes all parameters, because different levels of the same factor can be optimal for different responses, e.g., factor “D” [9].
It was verified that gradually modifying the ANN architecture (functions, algorithms, and number of layers), together with progressively increasing the amount of data presented to the networks, significantly reduces the residuals and can improve the approximation of the network. It can also lead to ANN-based optimization models with a reduced number of neurons and satisfactory levels of generalization error.
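For reference, the networks compared in Table 4 are plain feedforward topologies (e.g., model “Z”: 10-8-4 with ‘tansig’ layers, trained by Levenberg-Marquardt in the MATLAB toolbox). A NumPy sketch of the forward pass only, with random placeholder weights rather than trained ones:

```python
import numpy as np

rng = np.random.default_rng(seed=0)

def tansig(x):
    # MATLAB's 'tansig' transfer function is the hyperbolic tangent.
    return np.tanh(x)

class FeedForward:
    """Forward pass of a fully connected network such as the 10-8-4
    topology of model "Z" in Table 4. Weights are random placeholders;
    in the paper they are fitted with the Levenberg-Marquardt algorithm."""

    def __init__(self, sizes):
        self.layers = [
            (rng.standard_normal((n_in, n_out)) * 0.1, np.zeros(n_out))
            for n_in, n_out in zip(sizes[:-1], sizes[1:])
        ]

    def __call__(self, x):
        for w, b in self.layers:
            x = tansig(x @ w + b)  # 'tansig' on every layer, as in model "Z"
        return x

net = FeedForward([10, 8, 4])   # 10 inputs -> 8 hidden -> 4 outputs (deviations)
devs = net(np.ones(10))         # one (already normalized) input vector
print(devs.shape)               # (4,)
```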
In the validation tests, an overall reduction in deviations of 20% and in the coefficient of performance (Oj) of 22.6% was obtained, with an average forecast efficiency of 84% relative to the target value. The CI limits confirmed that the values predicted by the two models are within the expected variability of the process. It is therefore concluded that ANN models are a viable option for developing prediction and optimization algorithms for the polymer vacuum thermoforming process with a moderate amount of data.
Finally, each solution presented by the optimization models represents one set of possible values of the manufacturing parameters within the established modeling criteria, and the choice among the solutions will depend on other technical or economic factors involved in the process, such as processing time, operating cost, and electric energy consumption.

Acknowledgments

The authors are grateful to “Instituto Federal de Educação, Ciência e Tecnologia de Minas Gerais—Campus Betim”, for supporting the development of this paper.

Author Contributions

Wanderson de Oliveira Leite, Juan Carlos Campos Rubio and Francisco Mata conceived and designed the experiments; Wanderson de Oliveira Leite and Juan Carlos Campos Rubio performed the experiments; Wanderson de Oliveira Leite, Juan Carlos Campos Rubio, Francisco Mata, Angeles Carrasco and Issam Hanafi analyzed the data; Juan Carlos Campos Rubio, Francisco Mata, Angeles Carrasco and Issam Hanafi contributed reagents/materials/analysis tools; Wanderson de Oliveira Leite, Juan Carlos Campos Rubio, Francisco Mata, Angeles Carrasco and Issam Hanafi wrote the paper.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Throne, J. Thermoforming (Part III, Chapter 19). In Applied Plastics Engineering Handbook: Processing and Materials, 1st ed.; Kutz, M., Ed.; William Andrew (Elsevier): Waltham, MA, USA, 2011; p. 784. ISBN 9781437735154. [Google Scholar]
  2. Throne, J.L. Technology of Thermoforming, 1st ed.; Carl Hanser Verlag GmbH & Co. KG: New York, NY, USA, 1996; Volume 1, p. 882. ISBN 978-1569901984. [Google Scholar]
  3. Küttner, R.; Karjust, K.; Ponlak, M. The design and production technology of large composite plastic products. J. Proc. Estonian Acad. Sci. Eng. 2007, 13, 117–128. [Google Scholar]
  4. Muralisrinivasan, N.S. Update on Troubleshooting in Thermoforming, 1st ed.; Smithers Rapra Technology: Shrewsbury, Shropshire, UK, 2010; p. 140. ISBN 978-1-84735-137-1. [Google Scholar]
  5. Ghobadnam, M.; Mosaddegh, P.; Rejani, M.R.; Amirabadi, H.; Ghaei, A. Numerical and experimental analysis of HIPS sheets in thermoforming process. Int. J. Adv. Manuf. Technol. 2015, 76, 1079–1089. [Google Scholar] [CrossRef]
  6. Engelmann, S.; Salmang, R. Optimizing a thermoforming process for packaging (chapter 21). In Advanced Thermoforming: Methods, Machines and Materials, Applications and Automation, 1st ed.; Engelmann, S., Ed.; Wiley & Sons: Hoboken, NJ, USA, 2012; Volume 1, pp. 125–136. [Google Scholar]
  7. Klein, P. Fundamentals of Plastics Thermoforming, 1st ed.; Editon Morgan & Claypool Publishers: Williston, WI, USA, 2009; p. 98. ISBN 9781598298840. [Google Scholar]
  8. Throne, J.L. Understanding Thermoforming, 2nd ed.; Hanser: New York, NY, USA, 2008; p. 279, ISBN 10:1569904286. [Google Scholar]
  9. Yang, C.; Hung, S.W. Modeling and Optimization of a Plastic Thermoforming Process. J. Reinf. Plast. Compos. 2004, 23, 109–121. [Google Scholar] [CrossRef]
  10. Yang, C.; Hung, S.W. Optimising the thermoforming process of polymeric foams: An approach by using the Taguchi method and the utility concept. Int. J. Adv. Manuf. Technol. 2004, 24, 353–360. [Google Scholar] [CrossRef]
  11. Sala, G.; Landro, L.D.; Cassago, D. A numerical and experimental approach to optimise sheet stamping technologies: Polymers thermoforming. J. Mater. Des. 2002, 23, 21–39. [Google Scholar] [CrossRef]
  12. Warby, M.K.; Whitemana, J.R.; Jiang, W.G.; Warwick, P.; Wright, T. Finite element simulation of thermoforming processes for polymer sheets. Math. Comput. Simul. 2003, 61, 209–218. [Google Scholar] [CrossRef]
  13. Ayhan, Z.; Zhang, H. Wall Thickness Distribution in Thermoformed Food Containers Produced by a Benco Aseptic. Polym. Eng. Sci. 2000, 40, 1–10. [Google Scholar] [CrossRef]
  14. Erdogan, E.S.; Eksi, O. Prediction of Wall Thickness Distribution in Simple Thermoforming Moulds. J. Mech. Eng. 2014, 60, 195–202. [Google Scholar] [CrossRef]
  15. Kommoji, S.; Banerjee, R.; Bhatnaga, N.; Ghosh, K.G. Studies on the stretching behaviour of medium gauge high impact polystyrene sheets during positive thermoforming. J. Plast. Film Sheet. 2015, 31, 96–112. [Google Scholar] [CrossRef]
  16. Velsker, T.; Eerme, M.; Majak, J.; Pohlak, M.; Karjust, K. Artificial neural networks and evolutionary algorithms in engineering design. J. Achiev. Mater. Manuf. Eng. 2011, 44, 88–95. [Google Scholar]
  17. Martin, P.J.; Keaney, T.; McCool, R. Development of a Multivariable Online Monitoring System for the Thermoforming Process. Polym. Eng. Sci. 2014, 54, 2815–2823. [Google Scholar] [CrossRef]
  18. Chy, M.M.I.; Boulet, B.; Haidar, A. A Model Predictive Controller of Plastic Sheet Temperature for a Thermoforming Process. In Proceedings of the American Control Conference, San Francisco, CA, USA, 29 June–1 July 2011; pp. 4410–4415. [Google Scholar] [CrossRef]
  19. Boutaous, M.; Bourgin, P.; Heng, D.; Garcia, D. Optimization of radiant heating using the ray tracing method: Application to thermoforming. J. Adv. Sci. 2005, 17, 1–2, 139–145. [Google Scholar] [CrossRef]
  20. Zhen-zhe, L.; Cheng, T.H.; Shen, Y.; Xuan, D.J. Optimal Heater Control with Technology of Fault Tolerance for Compensating Thermoforming Preheating System. Adv. Mater. Sci. Eng. 2015, 12, 1–5. [Google Scholar]
  21. Meziane, F.; Vadera, S.; Kobbacy, K.; Proudlove, N. Intelligent systems in manufacturing: Current developments and future prospects. Integr. Manuf. Syst. 2000, 11, 218–238. [Google Scholar] [CrossRef]
  22. Tadeusiewicz, R. Introduction to intelligent systems (Part I: Chapter 1). In Intelligent Control Systems (Neural Networks), 2nd ed.; Wilamowski, B.M., Irwin, J., Eds.; CRC Press: New York, NY, USA, 2011; Volume 1, pp. 1-1–1-12. ISBN 9781439802830. [Google Scholar]
  23. Pham, D.T.; Pham, P.T.N. Computational intelligence for manufacturing (Part I). In Computational Intelligence in Manufacturing Handbook, 1st ed.; Wang, J., Kusiak, A., Eds.; CRC Press LLC: Boca Raton, FL, USA, 2001; Volume 1, p. 560. ISBN 0-8493-0592-6. [Google Scholar]
  24. Chang, Y.Z.; Wen, Y.Z.; Liu, S.J. Derivation of optimal processing parameters of polypropylene foam thermoforming by an artificial neural network. J. Polym. Eng. Sci. 2005, 45, 375–384. [Google Scholar] [CrossRef]
  25. Efe, M.Ö. From Backpropagation to Neurocontrol (Part III: Chapter 1). In Intelligent Control Systems (Neural Networks), 2nd ed.; Wilamowski, B.M., Irwin, J.D., Eds.; CRC Press: New York, NY, USA, 2011; Volume 1, pp. 2-1–2-11. ISBN 9781439802830. [Google Scholar]
  26. Huang, S.H.; Zhang, H.C. Artificial neural networks in manufacturing: Concepts, applications, and perspectives. IEEE Comp. Packag. Manuf. Technol. (Part I) 1994, 17, 212–228. [Google Scholar] [CrossRef]
  27. Kumar, K.; Thakur, G.S.M. Advanced applications of neural networks and artificial intelligence: A review. Int. J. Inf. Technol. Comput. Sci. 2012, 6, 57–68. [Google Scholar] [CrossRef]
  28. Karnik, S.R.; Gaitonde, V.N.; Campos Rubio, J.; Esteves Correia, A.; Abrão, A.M.; Paulo Davim, J. Delamination analysis in high speed drilling of carbon fiber reinforced plastics (CFRP) using artificial neural network model. Mater. Des. 2008, 29, 1768–1776. [Google Scholar] [CrossRef]
  29. Mehta, H.; Meht, A.M.; Manjunath, T.C.; Ardil, C. Multi-layer Artificial Neural Network Architecture Design for Load Forecasting in Power Systems. Int. J. Appl. Math. Comput. Sci. 2008, 5, 207–220. [Google Scholar]
  30. Esteban, L.G.; García Fernández, F.; Palacios, P.; Conde, M. Artificial neural networks in variable process control: Application in particleboard manufacture. J. Investig. Agrar. Sist. Recur. For. 2009, 18, 92–100. [Google Scholar] [CrossRef]
  31. Kosko, B. Neural Networks and Fuzzy Systems: A Dynamical Systems Approach to Machine Intelligence, 1st ed.; Dskt Edition: New Delhi, India, 1994; p. 449. ISBN 978–0136114352. [Google Scholar]
  32. Schalkoff, R.J. Artificial Neural Networks, 1st ed.; McGraw-Hill Companies: New York, NY, USA, 1997; p. 422. [Google Scholar]
  33. Hagan, M.T.; Menhaj, M.B. Training feedforward networks with the Marquardt algorithm. IEEE Trans. Neural Netw. 1994, 5, 989–993. [Google Scholar] [CrossRef] [PubMed]
  34. Demuth, H.; Beale, M. Neural Network Toolbox, User’s Guide, version 4.0.4.; The MathWorks, Inc.: Natick, MA, USA, 2004; p. 840. [Google Scholar]
  35. Hao, Y.U.; Wilamowski, B.M. Levenberg-Marquardt training (Part II: Chapter 12). In Intelligent Control Systems (Neural Networks), 2nd ed.; Wilamowski, B.M., Irwin, J.D., Eds.; CRC Press: New York, NY, USA, 2011; Volume 1, pp. 12-1–12-16. ISBN 9781439802830. [Google Scholar]
  36. Eschenauer, H.; Koski, J.; Osyczka, A. Multicriteria Design Optimization: Procedures and Applications, 1st ed.; Springer: Berlin, Germany, 1990; p. 482. ISBN 978-3-642-48699-9. [Google Scholar]
  37. Rosen, S. Thermoforming: Improving Process Performance, 1st ed.; Society of Manufacturing Engineers (Plastics Molders & Manufacturers Association of SME): Dearborn, MI, USA, 2002; p. 344. ISBN 978-0872635821. [Google Scholar]
  38. Montgomery, D.C. Design and Analysis Of Experiments, 8th ed.; John Wiley & Sons: Hoboken, NJ, USA, 2013; p. 730. ISBN 978-1118146927. [Google Scholar]
  39. Zain, A.M.; Haron, H.; Sharif, S. Prediction of surface roughness in the end milling machining using Artificial Neural Network. Expert Syst. Appl. 2010, 37, 1755–1768. [Google Scholar] [CrossRef]
  40. Vongkunghae, A.; Chumthong, A. The performance comparisons of backpropagation algorithm’s family on a set of logical functions. ECTI Trans. Electr. Eng. Electron. Commun. 2007, 5, 114–118. [Google Scholar]
  41. Manjunath Patel, G.C.; Krishna, P. A review on application of artificial neural networks for injection moulding and casting processes. Int. J. Adv. Eng. Sci. 2013, 3, 1–12. [Google Scholar]
Figure 1. Schematic of basic vacuum thermoforming. (a) Heating; (b) sealing or pre-stretch; (c) forming and cooling; and (d) demolding and trimming.
Figure 2. Standard product: dimensions of the piece and dimensional deviation parameters.
Figure 3. Main experiment: (a) DEV 01 vs. variations of factor levels; (b) DEV 02 vs. variations of factor levels; (c) DEV 03 vs. variations of factor levels; and (d) DEV 04 vs. variations of factor levels.
Figure 4. Neural network structure model developed for the tests.
Figure 5. Performance analysis of multi-criteria ANN models—type of deviations vs. predicted values of models vs. target value: (a) predicted values of models vs. target values of dimensional deviation height; (b) predicted values of models vs. target values of dimensional deviation of the diagonal length; (c) predicted values of models vs. target values of geometric deviation of the flatness; (d) predicted values of models vs. target values of geometric deviation of the side angles.
Figure 6. Comparison of the response surfaces of the models for the variables heating time vs. electric heating power vs. type of deviation: (A) “Z” model; (B) “Y” model; and (C) “V” model, where DEV 01 is the dimensional deviation of the height, DEV 02 is the deviation of the diagonal length, DEV 03 is the geometric deviation of flatness (GDi), and DEV 04 is the geometric deviation of the side angles.
Table 1. Factors and levels selected for the main experiments.

| Level | A (s a) | B (% a) | C (bar and cm/s a) | D (s a) | E (mbar a) |
|---|---|---|---|---|---|
| 1 (−1) | 80 | 90 | 3.4 and 18.4 (100%) | 7.2 | 10 |
| 2 (+1) | 90 | 100 | 4.0 and 21.6 (85%) | 9.0 | 15 |

a Unit.
Table 2. Experimental main results. Each cell gives Mean b ± AE e, with S in parentheses.

| Standard order test | DEV 01 (mm a) | DEV 02 (mm a) | DEV 03 (° a) | DEV 04 (mm a) |
|---|---|---|---|---|
| 1 | −1.300 ± 0.040 (0.025) | −0.263 ± 0.039 (0.024) | 1.542 c ± 0.104 (0.065) | 0.635 ± 0.023 (0.015) |
| 2 | −0.871 ± 0.461 (0.290) | −0.308 ± 0.040 (0.025) | 0.411 ± 0.222 (0.139) | 0.455 ± 0.098 (0.062) |
| 3 | −0.408 ± 0.192 (0.121) | −0.335 ± 0.253 (0.159) | 0.349 ± 0.160 (0.100) | 0.351 ± 0.121 (0.076) |
| 4 | −0.293 ± 0.327 (0.206) | −0.310 ± 0.133 (0.084) | 0.323 ± 0.134 (0.084) | 0.188 ± 0.154 (0.097) |
| 5 | −0.596 ± 0.129 (0.081) | −0.222 ± 0.010 (0.006) | 1.100 ± 0.123 (0.077) | 0.476 ± 0.066 (0.041) |
| 6 | −0.971 ± 0.145 (0.091) | −0.259 ± 0.035 (0.022) | 0.366 ± 0.201 (0.126) | 0.407 ± 0.021 (0.013) |
| 7 | −0.618 ± 0.131 (0.082) | −0.395 ± 0.054 (0.034) | 0.321 ± 0.470 (0.296) | 0.239 ± 0.006 (0.004) |
| 8 | −0.576 ± 0.467 (0.293) | −0.416 ± 0.072 (0.045) | 0.164 ± 0.200 (0.125) | 0.230 ± 0.020 (0.013) |
| 9 | −1.498 ± 0.270 (0.170) | −0.207 ± 0.087 (0.054) | 0.933 ± 0.132 (0.083) | 0.501 ± 0.095 (0.060) |
| 10 | −0.611 ± 0.283 (0.178) | −0.301 ± 0.015 (0.010) | 0.234 ± 0.152 (0.096) | 0.078 ± 0.064 (0.040) |
| 11 | −0.625 ± 0.428 (0.269) | −0.394 ± 0.068 (0.043) | 0.500 ± 0.450 (0.283) | 0.227 ± 0.007 (0.005) |
| 12 | −0.476 ± 0.226 (0.142) | −0.268 ± 0.038 (0.024) | 0.208 ± 0.069 (0.043) | 0.253 ± 0.098 (0.061) |
| 13 | −1.128 ± 0.241 (0.152) | −0.278 ± 0.060 (0.038) | 0.955 ± 0.364 (0.229) | 0.442 ± 0.001 (0.000) |
| 14 | −0.728 ± 0.483 (0.303) | −0.224 ± 0.016 (0.010) | 0.297 ± 0.101 (0.063) | 0.105 ± 0.067 (0.042) |
| 15 | −0.684 ± 0.200 (0.126) | −0.463 ± 0.028 (0.018) | 0.214 ± 0.042 (0.027) | 0.198 ± 0.063 (0.039) |
| 16 | −0.461 ± 0.449 (0.282) | −0.350 ± 0.105 (0.066) | 0.254 ± 0.031 (0.020) | 0.200 ± 0.034 (0.021) |
| 17 d | −0.789 ± 0.079 (0.049) | −0.309 ± 0.019 (0.012) | 0.481 ± 0.276 (0.174) | 0.304 ± 0.045 (0.029) |

a Unit; b mean value for four (4) samplings; c outlier; d center point; e accuracy of estimate of sample mean (AE) with n = 4 and α = 0.05; DEV 01, DEV 02, and DEV 04 are in millimeters; DEV 03 is in decimal degrees.
Table 3. ANOVA summary table, results for the deviation analysis vs. factors in main experiments. Each cell gives F(0) with the p-value in parentheses.

| Factor | DEV 01 | DEV 02 | DEV 03 | DEV 04 |
|---|---|---|---|---|
| A | 10.2 a (0.005) | 0.42 (0.542) | 89.7 a (0.000) | 77.72 a (0.000) |
| B | 37.0 a (0.000) | 22.5 a (0.000) | 82.6 a (0.000) | 86.23 a (0.000) |
| C | 0.30 (0.592) | 1.44 (0.246) | 4.6 a (0.046) | 8.93 a (0.008) |
| D | 0.98 (0.336) | 0.02 (0.899) | 6.43 a (0.021) | 56.03 a (0.000) |
| E | 0.08 (0.776) | 0.34 (0.567) | 4.50 a (0.049) | 1.36 (0.259) |
| A*B | 1.92 (0.184) | 3.91 (0.065) | 52.1 a (0.000) | 43.81 a (0.000) |
| A*C | 4.86 a (0.042) | 0.27 (0.612) | 2.73 (0.117) | 6.24 a (0.023) |
| A*D | 6.13 a (0.024) | 2.27 (0.150) | 1.29 (0.271) | 5.58 a (0.030) |
| A*E | 1.87 (0.189) | 0.29 (0.596) | 2.63 (0.123) | 2.04 (0.171) |
| B*C | 5.66 a (0.029) | 5.04 a (0.038) | 0.01 (0.943) | 0.42 (0.525) |
| B*D | 0.05 (0.833) | 0.12 (0.739) | 6.98 a (0.017) | 30.14 a (0.000) |
| B*E | 0.63 (0.438) | 0.89 (0.359) | 0.08 (0.783) | 2.45 (0.136) |
| C*D | 0.03 (0.867) | 0.14 (0.709) | 1.81 (0.196) | 1.54 (0.232) |
| C*E | 3.02 (0.100) | 1.12 (0.305) | 2.23 (0.154) | 29.55 a (0.000) |
| D*E | 4.89 a (0.041) | 1.38 (0.257) | 0.37 (0.550) | 0.25 (0.817) |

S = 0.0648608; R² = 70.26%; R²(adj) = 42.28%. a Significant factor or interaction effect.
Table 4. Summary of the main characteristics and performance values of multi-criteria ANN models developed and tested.

| Model name | Error (MAE) | Error (MSE) | Processing time | No. training data | No. test data | ANN architecture | Network training function | Transfer function (1st layer) | Transfer function (hidden layers) | Best epoch |
|---|---|---|---|---|---|---|---|---|---|---|
| Z | 0.0001 | 0.0000001 | 5.347 | 14 | 6 | 10-8-4 | ‘trainlm’; mu_max = 1 × 10^308 | ‘tansig’ | ‘tansig’ | 461 |
| Y | 0.0002 | 0.0000003 | 6.728 | 12 | 4 | 10-8-4 | ‘trainlm’; mu_max = 1 × 10^308 | ‘tansig’ | ‘tansig’ | 873 |
| X | 0.0301 | 0.0000163 | 8.004 | 11 | 3 | 10-8-4 | ‘trainlm’; mu_max = 1 × 10^308 | ‘tansig’ | ‘tansig’ | 832 |
| W | 0.0877 | 0.072054 | 139.575 | 11 | 3 | 10-8-4 | ‘traingd’; η = 0.001; ρ = 0.001; τ = 0.001 | ‘tansig’ | ‘tansig’ | 10359 |
| V | 0.0303 | 0.000079 | 56.192 | 11 | 3 | 10-8-4 | ‘trainlm’; mu_max = 1 × 10^308 | ‘tansig’ | ‘purelin’, ‘tansig’ | 685 |
| T | 0.0164 | 0.0000976 | 220.040 | 11 | 3 | 16-8-4 | ‘trainlm’; mu_max = 1 × 10^308 | ‘tansig’ | ‘purelin’, ‘tansig’ | 19855 |
| P | 0.0319 | 0.00000005 | 8.800 | 11 | 3 | 5-4-8-4 | ‘trainlm’; mu_max = 1 × 10^308 | ‘tansig’ | ‘purelin’, ‘tansig’, ‘purelin’ | 762 |
| O | 0.0085 | 0.0000105 | 64.461 | 11 | 3 | 8-8-8-4 | ‘trainlm’; mu_max = 1 × 10^308 | ‘tansig’ | ‘purelin’, ‘tansig’, ‘purelin’ | 4482 |
| M | 0.0320 | 0.0000620 | 140.268 | 11 | 3 | 16-8-8-4 | ‘trainlm’; mu_max = 1 × 10^308 | ‘tansig’ | ‘purelin’, ‘tansig’, ‘purelin’ | 7444 |
| K | 0.1529 | 0.166991 | 274.772 | 11 | 3 | 24-12-8-4 | ‘traingd’; η = 0.001; ρ = 0.001; τ = 0.001 | ‘tansig’ | ‘purelin’, ‘tansig’, ‘purelin’ | 11882 |
| H | 0.0256 | 0.00000004 | 90.485 | 11 | 3 | 24-12-8-4 | ‘trainlm’; mu_max = 1 × 10^308 | ‘tansig’ | ‘purelin’, ‘tansig’, ‘purelin’ | 9340 |
| D | 0.1832 | 0.193831 | 47.900 | 11 | 3 | 32-16-8-4 | ‘traingd’; η = 0.001; ρ = 0.001; τ = 0.001 | ‘tansig’ | ‘purelin’, ‘tansig’, ‘purelin’ | 1656 |
| A | 0.02135 | 0.0005825 | 205.544 | 11 | 3 | 32-16-8-4 | ‘trainlm’; mu_max = 1 × 10^308 | ‘tansig’ | ‘purelin’, ‘tansig’, ‘purelin’ | 3507 |
Table 5. Restrictions domain used for optimization model “A” and model “B”.

| Optimization model | Factor | Domain (min ≤ Xi ≤ max) | Discretization unit | Generated points |
|---|---|---|---|---|
| Variation “A” | A | 80–90 | 5 | 3 |
| | B | 90–100 | 5 | 3 |
| | C | 85–100 | 7.5 | 3 |
| | D | 7.2–9.0 | 0.9 | 3 |
| | E | 10–15 | 2.5 | 3 |
| | Total | | | 243 |
| Variation “B” | A | 75–95 | 2.2 | 10 |
| | B | 85–105 | 2.5 | 9 |
| | C | 77.5–100 | 2.5 | 10 |
| | D | 6.3–9.9 | 0.9 | 5 |
| | E | 7.5–15 | 1.25 | 7 |
| | Total | | | 31500 |
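The “Generated points” totals in Table 5 are simply the products of the points generated per factor, which can be cross-checked directly:

```python
# Cross-check of the "Generated points" column of Table 5: the size of each
# discretized solution space is the product of the points per factor.
from math import prod

points_a = [3, 3, 3, 3, 3]     # variation "A": three levels for each factor
points_b = [10, 9, 10, 5, 7]   # variation "B": finer, uneven discretization
print(prod(points_a), prod(points_b))  # 243 31500
```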
Table 6. Summary of the 10 best results of the “A” variation of the optimization algorithm.

| Solution | A (s) | B (%) | C (%) | D (s) | E (mbar) | Oj(p) |
|---|---|---|---|---|---|---|
| 1st | 90 | 100 | 100 | 8.1 | 12.5 | 0.27 |
| 2nd | 90 | 100 | 92.5 | 7.2 | 12.5 | 0.27 |
| 3rd | 85 | 100 | 100 | 7.2 | 12.5 | 0.27 |
| 4th | 90 | 95 | 100 | 8.1 | 12.5 | 0.28 |
| 5th | 90 | 100 | 85 | 8.1 | 10 | 0.28 |
| 6th | 90 | 95 | 100 | 7.2 | 12.5 | 0.28 |
| 7th | 85 | 95 | 100 | 7.2 | 12.5 | 0.28 |
| 8th | 90 | 95 | 92.5 | 7.2 | 12.5 | 0.29 |
| 9th | 90 | 100 | 100 | 7.2 | 12.5 | 0.29 |
| 10th | 85 | 95 | 100 | 7.2 | 12.5 | 0.30 |
Table 7. Summary of the 10 best results of the “B” variation of the optimization algorithm.

| Solution | A (s) | B (%) | C (%) | D (s) | E (mbar) | Oj(p) |
|---|---|---|---|---|---|---|
| 1st | 92.6 | 90 | 100 | 7.2 | 12.5 | 0.24 |
| 2nd | 95 | 90 | 100 | 8.1 | 12.5 | 0.24 |
| 3rd | 95 | 87.5 | 100 | 7.2 | 12.5 | 0.24 |
| 4th | 95 | 90 | 100 | 7.2 | 12.5 | 0.24 |
| 5th | 95 | 87.5 | 100 | 6.3 | 10 | 0.24 |
| 6th | 95 | 90 | 96.25 | 8.1 | 12.5 | 0.24 |
| 7th | 95 | 87.5 | 96.25 | 6.3 | 10 | 0.24 |
| 8th | 92.6 | 90 | 96.25 | 7.2 | 12.5 | 0.24 |
| 9th | 92.6 | 87.5 | 100 | 7.2 | 12.5 | 0.24 |
| 10th | 95 | 87.5 | 100 | 8.1 | 12.5 | 0.24 |
Table 8. Comparative results of the multi-criteria optimization model type “A”.

| Response | Validation samples a: Mean | Validation samples a: 95% CI | Model type “A”: Predicted | Main experimental n° 04 b: Mean | Main experimental n° 04 b: 95% CI |
|---|---|---|---|---|---|
| DEV 01 | −0.255 | −0.298, −0.213 | −0.294 | −0.293 | −0.620, 0.034 |
| DEV 02 | −0.341 | −0.419, −0.263 | −0.376 | −0.310 | −0.444, −0.177 |
| DEV 03 | 0.193 | 0.156, 0.231 | 0.185 | 0.323 | 0.189, 0.456 |
| DEV 04 | 0.134 | 0.050, 0.218 | 0.188 | 0.188 | 0.034, 0.342 |
| Oj | 0.23 | 0.17, 0.30 | 0.27 | 0.31 | 0.39, 0.27 |

a For validation samples, n = 5 and α = 0.05; b for the main experiment, n = 4 and α = 0.05; DEV 01, DEV 02, and DEV 04 are in millimeters; DEV 03 is in decimal degrees.
Table 9. Comparative results of the multi-criteria optimization model type “B”.

| Response | Validation samples a: Mean | Validation samples a: 95% CI | Model type “B”: Predicted | Main experimental n° 04 b: Mean | Main experimental n° 04 b: 95% CI |
|---|---|---|---|---|---|
| DEV 01 | −0.366 | −0.480, −0.252 | −0.293 | −0.293 | −0.620, 0.034 |
| DEV 02 | −0.246 | −0.267, −0.225 | −0.242 | −0.310 | −0.444, −0.177 |
| DEV 03 | 0.108 | 0.078, 0.139 | 0.182 | 0.323 | 0.189, 0.456 |
| DEV 04 | 0.136 | 0.068, 0.204 | 0.099 | 0.188 | 0.034, 0.342 |
| Oj | 0.25 | 0.17, 0.33 | 0.24 | 0.31 | 0.39, 0.27 |

a For validation samples, n = 5 and α = 0.05; b for the main experiment, n = 4 and α = 0.05; DEV 01, DEV 02, and DEV 04 are in millimeters; DEV 03 is in decimal degrees.
