Figure 1.
Work-holding method for securing the low-rigidity shaft specimen in the turning machine. Notation: Fbe—bending force exerted by the cutting tool bit; Fx—tensile force along the x axis; x2, y1, y2—current coordinates at each section of the workpiece; a—distance from spindle to the tip of the cutting tool bit; L—length of shaft; M0, Q0—initial parameters (moment and transverse force at the holding point, respectively); M1—moment generated by the axial component of the cutting force; M2—moment generated at the holding point where the part is secured to the tailstock of the turning machine.
Figure 2.
(a) Roughness measuring instrument; tailstock collet assembly for machining elastic-deformable shafts: (b) idle position, tensile force of 2 kN; (c) view of the test stand with the shaft secured in the lathe (Ø6, L = 300 mm); (d) specimens.
Figure 3.
Curves of objective function y, tensile force Fx1, and eccentricity e for d = 6 mm, Fbe = 49 N, Fx10 = 980 N, L = 300 mm, Ff = 30 N.
Figure 4.
Curves of objective function y, tensile force Fx1, and eccentricity e for d = 6 mm, Fbe = 70 N, Fx10 = 980 N, L = 300 mm, Ff = 40 N.
Figure 5.
Curves of objective function y, tensile force Fx1, and eccentricity e for d = 8 mm, Fbe = 147 N, Fx10 = 980 N, L = 300 mm, Ff = 196 N.
Figure 6.
Curves of objective function y, tensile force Fx1, and eccentricity e for d = 8 mm, Fbe = 147 N, Fx10 = 980 N, L = 300 mm, Ff = 196 N.
Figure 7.
Structure of the shallow neural network.
Figure 8.
Best validation performance is 1.5775 × 10⁻⁵ at epoch 20: (a) general view, (b) enlarged view of the terminal part of the curve.
Figure 9.
(a) Error histogram with 20 bins, (b) gradient curve, (c) Mu curve.
Figure 10.
Structure of nonlinear autoregressive network with exogenous input (NARX) neural network: (a) open-loop architecture; (b) closed-loop architecture.
Figure 11.
Best validation performance is 3.9897 × 10⁻⁸ at epoch 20: (a) general view, (b) enlarged view of the terminal part of the curve.
Figure 12.
(a) Error histogram with 20 bins, (b) gradient curve, (c) Mu curve.
Figure 13.
Regression statistics for the closed-loop step-ahead NARX: (a) R ≈ 1 for the whole set of 5980 cases; (b) R = 0.99946 for the subset of 16 cases.
Figure 14.
Structure of a long short-term memory (LSTM) layer [26].
Figure 15.
Training performance for LSTM.
Figure 16.
Training loss for LSTM.
Figure 17.
Neural-genetic controller.
Figure 18.
Genetic algorithm—the best fitness plot.
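Figures 17 and 18 illustrate the neural-genetic controller and the best-fitness plot of its genetic algorithm. The sketch below is hypothetical, not the authors' implementation: the fitness function, the input ordering of the trained prediction network net, and the force bounds are all assumptions made only to show the idea of a GA searching for the corrective tensile force.

```matlab
% Hypothetical sketch of the neural-genetic idea (all specifics are assumptions):
% a genetic algorithm minimizes the deviation predicted by a previously trained
% network `net` as a function of the tensile force Fx1 at tool position a.
a   = 200;                                   % tool position, mm (placeholder)
Fbe = 147;                                   % bending force, N (placeholder)
fitness = @(Fx1) abs(net([a; Fbe; Fx1]));    % assumed input ordering of the network
lb = 0;  ub = 2000;                          % assumed bounds on Fx1, N
Fx1opt = ga(fitness, 1, [], [], [], [], lb, ub);   % Global Optimization Toolbox
```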
Figure 19.
Machining quality prediction using MLP ANN.
Figure 20.
Machining quality prediction using MLP ANN—detail of the process for L = 154 ÷ 165 mm in Figure 19.
Figure 21.
Machining quality prediction using NARX.
Figure 22.
Machining quality prediction using NARX—detail of the process for L = 154 ÷ 165 mm in Figure 21.
Figure 23.
Machining quality prediction using LSTM.
Figure 24.
Machining quality prediction using LSTM—detail of the process for L = 154 ÷ 165 mm in Figure 23.
Figure 25.
Neural-genetic controller for a = 100 mm and a = 200 mm.
Figure 26.
Neural-genetic controller for a = 250 mm.
Table 1.
Training results for the multilayer perceptron (MLP) artificial neural network (ANN) by data subset.
| Data Subset | Number of Cases in Set | Mean Square Error (MSE) | Regression (R) |
|---|---|---|---|
| Training set (70%) | 4187 | 1.5059 × 10⁻⁵ | 0.99886 |
| Validation set (15%) | 897 | 1.5775 × 10⁻⁵ | 0.99880 |
| Testing set (15%) | 897 | 1.4976 × 10⁻⁵ | 0.99878 |
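As a point of reference for Table 1, a shallow MLP with the 70/15/15 data division could be configured in MATLAB roughly as follows; the hidden-layer size, training function, and the input/target matrices are assumptions, not values taken from the paper.

```matlab
% Minimal sketch, assuming the shallow network toolbox and a 70/15/15 split.
% Hidden-layer size (10) and the matrices X (inputs), Y (targets) are placeholders.
net = feedforwardnet(10, 'trainlm');      % Levenberg-Marquardt training (assumed)
net.divideParam.trainRatio = 0.70;
net.divideParam.valRatio   = 0.15;
net.divideParam.testRatio  = 0.15;
[net, tr] = train(net, X, Y);             % tr records the per-subset indices
mseTest = perform(net, Y(:, tr.testInd), net(X(:, tr.testInd)));   % test-set MSE
```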
Table 2.
Training results for the open-loop NARX by data subset.
| Data Subset | Number of Cases in Set | Mean Square Error (MSE) | Regression (R) |
|---|---|---|---|
| Training set (70%) | 4187 | 3.7450 × 10⁻⁸ | 0.999 |
| Validation set (15%) | 897 | 3.9897 × 10⁻⁸ | 0.999 |
| Testing set (15%) | 897 | 3.8548 × 10⁻⁸ | 0.999 |
Table 3.
Closed-loop NARX training results.
| Closed-Loop NARX | Mean Square Error (MSE) | Regression (R) |
|---|---|---|
| Step-ahead prediction | 3.7982 × 10⁻⁸ | 0.9999 |
| Whole-sequence prediction | 9.7246 × 10⁻³ | 0.5506 |
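Tables 2 and 3 distinguish open-loop training from closed-loop (step-ahead and whole-sequence) prediction. A minimal sketch of this workflow, assuming MATLAB's narxnet with illustrative delay orders and hidden-layer size, and with X and T as placeholder cell-array time series:

```matlab
% Minimal sketch (assumed setup): delays 1:2 and hidden size 10 are illustrative.
net = narxnet(1:2, 1:2, 10);
[Xs, Xi, Ai, Ts] = preparets(net, X, {}, T);   % X: exogenous inputs, T: targets
net = train(net, Xs, Ts, Xi, Ai);              % open-loop (series-parallel) training
netc = closeloop(net);                         % closed-loop (parallel) architecture
[Xc, Xic, Aic, Tc] = preparets(netc, X, {}, T);
Yc = netc(Xc, Xic, Aic);                       % whole-sequence (free-running) prediction
```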
Table 4.
Layers of the LSTM after feature extraction.
| # | Layer Description | Activations | Learnable Parameters (Weights and Biases) |
|---|---|---|---|
| 1 | Sequence input with 3 dimensions | 3 | – |
| 2 | BiLSTM with 200 hidden units | 200 | Input weights: 800 × 2; recurrent weights: 800 × 200; bias: 800 × 1 |
| 3 | One fully connected layer | 1 | Weights: 6 × 200; bias: 1 × 1 |
| 4 | Regression output | – | – |
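The layer stack listed in Table 4 corresponds roughly to the following definition; the BiLSTM output mode is an assumption, not stated in the table.

```matlab
% Minimal sketch of the Table 4 architecture: sequence input -> BiLSTM -> FC -> regression.
layers = [
    sequenceInputLayer(3)                        % 3 input features
    bilstmLayer(200, 'OutputMode', 'sequence')   % 200 hidden units (output mode assumed)
    fullyConnectedLayer(1)                       % single regression output
    regressionLayer];
```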
Table 5.
LSTM training progress: mini-batch RMSE, mini-batch loss, and base learning rate by epoch and iteration.
| Epoch | Iteration | Mini-Batch RMSE | Mini-Batch Loss | Base Learning Rate |
|---|---|---|---|---|
| 1 | 1 | 1.06 | 0.6 | 0.05 |
| 1 | 50 | 0.07 | 2.1 × 10⁻³ | 0.05 |
| 2 | 100 | 0.01 | 1.0 × 10⁻⁴ | 0.05 |
| 2 | 150 | 0.01 | 8.9 × 10⁻⁵ | 0.05 |
| 3 | 200 | 0.01 | 7.9 × 10⁻⁵ | 0.05 |
| 3 | 250 | 0.01 | 1.1 × 10⁻⁴ | 0.05 |
| 4 | 300 | 0.01 | 8.7 × 10⁻⁵ | 0.05 |
| 4 | 350 | 0.01 | 8.7 × 10⁻⁵ | 0.05 |
| 5 | 400 | 0.02 | 1.1 × 10⁻⁴ | 0.05 |
| 5 | 450 | 0.01 | 7.0 × 10⁻⁵ | 0.05 |
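Training options consistent with the base learning rate of 0.05 shown in Table 5 might be specified as below; the solver, epoch count, and remaining options are assumptions, and only the initial learning rate is taken from the table.

```matlab
% Minimal sketch, assuming the 'adam' solver and 5 epochs (both assumptions).
options = trainingOptions('adam', ...
    'InitialLearnRate', 0.05, ...             % base learning rate from Table 5
    'MaxEpochs', 5, ...
    'Shuffle', 'every-epoch', ...
    'Plots', 'training-progress');
net = trainNetwork(XTrain, YTrain, layers, options);   % XTrain/YTrain are placeholders
```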
Table 6.
Training results for the closed-loop LSTM.
| Closed-Loop LSTM | Mean Square Error (MSE) | Regression (R) |
|---|---|---|
| Step-ahead prediction | 1.4067 × 10⁻⁴ | 0.9999 |
| Whole-sequence prediction | 2.6045 × 10⁻² | 0.5506 |
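The step-ahead figures in Table 6 correspond to feeding the network one measured sample at a time while updating its internal state. A minimal sketch, assuming a recurrent network trained with trainNetwork and placeholder test data:

```matlab
% Minimal sketch of step-ahead (one-step) prediction with a recurrent network:
% at each step the state is updated with the current measured input.
net = resetState(net);
numSteps = size(XTest, 2);                  % XTest is a placeholder test sequence
YPred = zeros(1, numSteps);
for i = 1:numSteps
    [net, YPred(i)] = predictAndUpdateState(net, XTest(:, i));
end
```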
Table 7.
Results of the neural network tests.
| Neural Network Type | MSE | R |
|---|---|---|
| Deep LSTM (step-ahead prediction) | 1.5456 × 10⁻⁵ | 0.9999 |
| Shallow MLP ANN | 2.3984 × 10⁻⁴ | 0.9997 |
| NARX (step-ahead prediction) | 1.8819 × 10⁻⁵ | 0.9999 |