Article
Peer-Review Record

Fourier Features and Machine Learning for Contour Profile Inspection in CNC Milling Parts: A Novel Intelligent Inspection Method (NIIM)

Appl. Sci. 2024, 14(18), 8144; https://doi.org/10.3390/app14188144
by Manuel Meraz Méndez 1,*,†, Juan A. Ramírez Quintana 2,†, Elva Lilia Reynoso Jardón 3,†, Manuel Nandayapa 3,† and Osslan Osiris Vergara Villegas 3,†
Reviewer 1: Anonymous
Reviewer 2: Anonymous
Submission received: 8 August 2024 / Revised: 29 August 2024 / Accepted: 4 September 2024 / Published: 10 September 2024

Round 1

Reviewer 1 Report

Comments and Suggestions for Authors

The manuscript proposes a Novel Intelligent Inspection Method (NIIM). The manuscript may be of interest and can be accepted after minor revision; however, several issues need to be considered.

1. Have the selected machining parameters (such as cutting depth, cutting speed, and feed rate) been optimized, and are these parameters adjusted for different materials or stages of the machining process?

2. Are there measures in place to ensure consistency in machining conditions (such as temperature, coolant flow, etc.) during the actual machining process?

3. Has this study conducted ablation experiments?

Comments on the Quality of English Language

Minor editing of English language required.

Author Response

Comments 1: Have the selected machining parameters (such as cutting depth, cutting speed, and feed rate) been optimized, and are these parameters adjusted for different materials or stages of the machining process?

Response 1: Thank you for your insightful question. In our study, the machining parameters, including cutting depth, cutting speed, and feed rate, were optimized for each material. These parameters are carefully adjusted according to the tooling, the material properties, and the sequence of the machining process.

However, we want to clarify that the goal of this research is not to optimize the machining parameters. The main objective of the NIIM is to inspect the form deviations caused by machining effects and, based on these deviations, to classify the contour profile quality into normal and defective products. In response to your question, we have included in Section 2.1.2, paragraph 3, lines 150-154, an explanation of how we compute the machining parameters according to the cutting theory of George et al. [29]. We also clarify the NIIM's main contribution by adding a short paragraph on the inspection goal in the milling process; this change can be found in Section 6, page 27, second paragraph, lines 733-737. Thank you for pointing this out.

 

Comments 2: Are there measures in place to ensure consistency in machining conditions (such as temperature, coolant flow, etc.) during the actual machining process?

 

Response 2: Thank you for raising this important point. To ensure consistency in machining conditions, including temperature, coolant flow, and other critical factors, we implemented several measures during the actual machining process. These conditions are carefully monitored and controlled throughout machining to maintain stability and minimize variations. This consistency is crucial for achieving accurate and repeatable results, as it ensures that the machining parameters remain effective and that the quality of the final product is maintained. We agree with this comment; therefore, we have included a short paragraph on the milling process conditions. This change can be found in Section 2.1.3, page 6, paragraph 2, lines 162-164.

 

Comments 3: Has this study conducted ablation experiments?

 

Response 3:  Thank you for your question. Our current study did not include ablation experiments. However, Table 6 presents a comparison of different artificial neural networks popular in the literature.

 

However, we recognize that ablation experiments can provide valuable insights into the effects of material removal and tool interaction on measurement accuracy. We appreciate your suggestion and will consider incorporating ablation experiments in future research to explore their impact on machining precision and the effectiveness of our proposed methods.

Thank you for highlighting this area for potential expansion in our study.

 

4. Response to Comments on the Quality of English Language

Point 1:

Response 1:    We have revised and improved the English language.

5. Additional clarifications

No additional clarifications

 

Reviewer 2 Report

Comments and Suggestions for Authors

This paper presents a novel intelligent inspection method (NIIM) designed to improve the accuracy and efficiency of contour profile inspections in CNC milling parts. Traditional inspection methods often rely on contact-based measurements, which can be limited in terms of precision and reliability. The proposed NIIM integrates a vision system (RAM−StarliteTM), Fourier feature extraction, and machine learning techniques to enhance the inspection process. A calibration piece is used to detect form deviations, and the system analyzes contour profile images captured during the milling process. The extracted Fourier features are processed using a feed-forward neural network to classify the quality of the contour profiles. Experimental results based on 356 images demonstrate the method’s high accuracy (96.99%) and computational efficiency, making it a robust tool for profile line tolerance inspection in industrial applications.

The manuscript introduces a novel method that combines Fourier feature extraction with machine learning to tackle a significant challenge in CNC milling inspection. The integration of a vision system with advanced algorithms shows potential for improving the precision and reliability of industrial inspection processes. The reported accuracy of 96.99% demonstrates the effectiveness of the proposed method. This is a notable improvement over traditional contact-based inspection techniques, suggesting that the NIIM could be widely adopted in industrial settings.

Please find some comments and suggestion below:

The manuscript would benefit from a more detailed explanation of the Fourier descriptor extraction process. It is important to clarify how these features are computed and why they are particularly suited for analyzing form deviations in contour profiles.

Provide more information about the architecture and training process of the feed-forward neural network used for classification. Details such as the number of layers, the activation functions used, and the training parameters (e.g., learning rate, number of epochs) would help readers better understand the model's performance.

While the paper mentions the use of 356 images for training and testing, it would be beneficial to provide a more comprehensive description of the dataset. For example, describe how the images were collected, the distribution of different types of form deviations in the dataset, and any pre-processing steps applied before feature extraction.

Explain the experimental setup in greater detail. How was the dataset split between training and testing? Were any cross-validation techniques used to ensure the robustness of the results?

 

The manuscript presents a valuable contribution to the field of industrial inspection by proposing a novel method that leverages Fourier features and machine learning for the inspection of contour profiles in CNC milling parts. The approach is well-suited to address the limitations of traditional methods, offering significant improvements in accuracy and computational efficiency. 

Comments on the Quality of English Language  

The quality of English in the manuscript is generally good, with clear and coherent language.

Author Response

Comments 1: The manuscript would benefit from a more detailed explanation of the Fourier descriptor extraction process. It is important to clarify how these features are computed and why they are particularly suited for analyzing form deviations in contour profiles.

Response 1: Thank you for raising this important point. In the previous version of the paper, we presented a brief introduction to the feature extraction methods commonly used in multivariable analysis and industrial applications. Subsequently, the following paragraph and Equations (1) to (4) explain the model and algorithm of the Fourier feature descriptors. However, based on your suggestion, we have added to the first paragraph of Subsection 2.3.2, page 9, lines 231-241, an explanation of why Fourier feature descriptors are suited for analyzing form deviations in contour profiles. We have also added an explanation of the Fourier descriptor implementation in the paragraph presented after Equation (3).

The first paragraph of Subsection 2.3.2 is therefore changed to: “According to [3,4,6,15,16], PCA, WT, and FT are common methods for feature extraction in industrial applications. To generate features, WT and FT require additional statistical analyses such as VMD or EMD. However, PCA, VMD, and EMD cannot be used for online monitoring and diagnosis of real CNC processes. On the other hand, Fourier descriptors are a set of FT-based feature extraction methods used in many industrial applications because they generate a one-dimensional feature vector composed of a small number of elements that represent the contour morphology of an object [31]. This feature vector is a highly compressed way of generating frequency signatures that represent the contours of objects. Fourier descriptors produce results with precision similar to WT and other FT methods but with significantly lower computational complexity in CNC monitoring, real-time, and industrial applications [32,33]. Therefore, Fourier descriptors are the foundation of feature extraction in the NIIM.”

The paragraph presented after Equation (4) of Subsection 2.3.2, lines 262-266, is changed to: “where k is the frequency component and N is the size of the profile. Equation (2) is computed with the radix-2 Fast Fourier Transform. Based on different works that apply Fourier descriptors to shape analysis, such as [33,34], the representation of the calibration piece profile Fourier features is defined as the magnitude of Equation (2) divided by N, as follows:”

where Mα,β(k) is a profile feature vector composed of real numbers that describe the frequency magnitudes of the contour profile r(n).
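For illustration only, the following minimal Python sketch reproduces the computation quoted above (the magnitude of the FFT divided by N); the profile r(n) used here is a synthetic placeholder, not data from the manuscript.

```python
import numpy as np

def fourier_descriptors(r):
    """Magnitude of the profile's Fourier transform divided by N, as quoted above."""
    N = len(r)
    R = np.fft.fft(r)        # Equation (2), computed here with NumPy's FFT
    return np.abs(R) / N     # M(k) = |R(k)| / N

# Synthetic profile r(n) for illustration only (not manuscript data).
n = np.arange(256)
r = np.cos(2 * np.pi * 3 * n / 256) + 0.05 * np.random.randn(256)
M = fourier_descriptors(r)   # one-dimensional feature vector
```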

 

Comments 2: Provide more information about the architecture and training process of the feed-forward neural network used for classification. Details such as the number of layers, the activation functions used, and the training parameters (e.g., learning rate, number of epochs) would help readers better understand the model's performance.

 

Response 2: Thanks for the observation. The first paragraph of Subsection 2.3.4, page 13, lines 312-317, describes the architecture in the sentence: “The processing method for shape and quality profile classification is a feed-forward neural network (FFNN) composed of three layers: input, hidden, and output”. Also, the same Subsection 2.3.4, lines 326-328, mentions that the hidden layer uses a ReLU activation function and that the output layer uses a sigmoid. The last paragraph of Subsection 3.0.2, lines 399-408, presents the information on the training parameters.

However, based on this observation, we have added a new figure that represents the architecture of the network, as well as information about the activation functions and the learning parameters.

We have also added a new sentence in Section 2.3.4, lines 314-315, to cite the new figure, as follows: “… is a feed-forward neural network (FFNN) composed of three layers: input, hidden, and output. Figure 18 presents a diagram of the FFNN architecture, where the input is the feature vector, the hidden layer has a number of neurons that differs in each experiment in this work, and the output is a single neuron.”

 

Figure 18. FFNN architecture.

 

We have also added information on why each activation function was selected. The paragraph presented after Equation (7), lines 326-328, is modified to: “j1 is the neuron index of the hidden layer, j1 = 0, ..., J, and the activation function f(·) is ReLU, which helps the FFNN converge faster than other activation functions [36].” The paragraph presented after Equation (8), lines 331-332, is modified to: “f(·) is the sigmoid activation function used to represent a binary classification with one neuron [37].”

Furthermore, we have changed the last paragraph of Subsection 3.0.2, lines 398-402, as follows: “The learning hyperparameters must train the networks with few images while improving convergence in as few epochs as possible. Therefore, the FFNNs and CNNs are trained with stochastic gradient descent (SGD) as the optimizer, with a learning rate of 0.001, a batch size of one, and 300 epochs with early stopping. These hyperparameters are selected because, according to [42], …”.
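As an illustration of the architecture and training configuration quoted above, here is a minimal PyTorch sketch; the feature-vector length, the number of hidden neurons, the dummy data, and the early-stopping patience are assumptions for demonstration, not values from the manuscript.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

# Placeholder sizes: the feature-vector length and the number of hidden
# neurons vary per experiment in the manuscript; these values are assumed.
n_features, n_hidden = 128, 32

# Three-layer FFNN: input -> ReLU hidden layer -> single sigmoid output neuron.
model = nn.Sequential(
    nn.Linear(n_features, n_hidden),
    nn.ReLU(),
    nn.Linear(n_hidden, 1),
    nn.Sigmoid(),
)

# Training settings quoted in the response: SGD, learning rate 0.001,
# batch size of one, up to 300 epochs with early stopping.
optimizer = torch.optim.SGD(model.parameters(), lr=0.001)
criterion = nn.BCELoss()

# Dummy data standing in for the Fourier feature vectors and their labels.
X = torch.rand(20, n_features)
y = torch.randint(0, 2, (20, 1)).float()
loader = DataLoader(TensorDataset(X, y), batch_size=1, shuffle=True)

def train(max_epochs=300, patience=10):
    """Training loop sketch; the patience value is an assumed early-stop rule."""
    best, stall = float("inf"), 0
    for _ in range(max_epochs):
        epoch_loss = 0.0
        for xb, yb in loader:
            optimizer.zero_grad()
            loss = criterion(model(xb), yb)
            loss.backward()
            optimizer.step()
            epoch_loss += loss.item()
        if epoch_loss < best - 1e-6:
            best, stall = epoch_loss, 0
        else:
            stall += 1
            if stall >= patience:
                break

train()
```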

 

________________________________________

 

Comments 3: While the paper mentions the use of 356 images for training and testing, it would be beneficial to provide a more comprehensive description of the dataset. For example, describe how the images were collected, the distribution of different types of form deviations in the dataset, and any pre-processing steps applied before feature extraction.

 

Response 3: Thanks for the observation. Subsection 3.0.1 and Table 6 present the composition of the image dataset. However, based on your observation, we have changed the paragraph of Subsection 3.0.1, lines 346-354, to explain how the images were collected. This paragraph now reads: “The dataset for training and testing comprises 356 images in format Iα,β(x, y)RGB, where 180 were generated from 60 calibration pieces and the others were generated artificially. The 180 images were obtained using the process presented in Subsection 2.2, and the 60 calibration pieces were designed using the process of Subsection 2.1. The other 176 images were generated with the Generative Adversarial Network (GAN) presented in [39]. The GAN was configured with a model gradient function, 1000 epochs, a batch size of 128, a learning rate of 0.0002, a gradient decay factor of 0.5, and a squared gradient decay factor of 0.999. Table 5 presents …”
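For reference, the GAN optimizer settings quoted above can be read as Adam coefficients (learning rate 0.0002, decay factors 0.5 and 0.999 as beta1/beta2); this mapping is our interpretation, and the networks in the sketch below are placeholders rather than the architecture of [39].

```python
import torch
import torch.nn as nn

# Placeholder generator and discriminator; the actual architecture is the
# GAN from reference [39], which is not reproduced here.
generator = nn.Sequential(nn.Linear(100, 256), nn.ReLU(), nn.Linear(256, 64))
discriminator = nn.Sequential(nn.Linear(64, 128), nn.ReLU(), nn.Linear(128, 1))

# Settings quoted in the response: 1000 epochs, batch size 128, learning
# rate 0.0002, gradient decay factor 0.5, squared gradient decay factor
# 0.999 -- read here as Adam's beta1/beta2 coefficients (an assumption).
epochs, batch_size = 1000, 128
opt_g = torch.optim.Adam(generator.parameters(), lr=0.0002, betas=(0.5, 0.999))
opt_d = torch.optim.Adam(discriminator.parameters(), lr=0.0002, betas=(0.5, 0.999))
```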

________________________________________

 

Comments 4: Explain the experimental setup in greater detail. How was the dataset split between training and testing? Were any cross-validation techniques used to ensure the robustness of the results?

 

Response 4: Thanks for the observation. We have added a sentence about the dataset split at the beginning of the last paragraph of Subsection 3.0.2, lines 396-398. This sentence is: “The dataset is divided into 70% for training and 30% for testing. This division is standard in the machine learning literature and corresponds to 250 images for training and 105 for testing.”
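A minimal sketch of such a 70/30 split, using hypothetical feature and label arrays in place of the NIIM data, could look as follows; the stratification and random seed are added assumptions.

```python
import numpy as np
from sklearn.model_selection import train_test_split

# Hypothetical placeholders for the 356 feature vectors and their
# normal/defective labels; the real ones come from the NIIM pipeline.
features = np.random.rand(356, 128)
labels = np.random.randint(0, 2, size=356)

# 70% training / 30% testing, as stated in the added sentence; the
# stratification keeps the class balance in both partitions.
X_train, X_test, y_train, y_test = train_test_split(
    features, labels, test_size=0.30, random_state=0, stratify=labels)
```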

Also, we have added a cross-validation Subsection 4.2.1 before the ‘Computational Cost’ subsection, lines 603-609, page 24. This subsection reads:

 

4.2.1 Cross Validation

Table X reports a five-fold cross-validation of the kFFNN. The metric used is Acc, and the average results over the five folds are 99.2% for shape classification and 94.8% for quality classification. These cross-validation results are similar to those reported in Tables 1 and 2 and are superior to those of the rest of the networks used in the comparisons. Therefore, the Acc of the kFFNN is considered consistent at 99.2% for shape classification and 94.8% for quality classification.

 

Table X. Cross-validation of kFFNN based on Acc.

Fold                      1      2      3      4      5      Average
Shape Classification      100%   98%    99%    99%    100%   99.2%
Quality Classification    95%    94%    96%    96%    93%    94.8%
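For illustration, a five-fold cross-validation loop of this kind could be sketched as follows; the data arrays and the evaluate_fold stub are placeholders for the kFFNN pipeline evaluated in the manuscript.

```python
import numpy as np
from sklearn.model_selection import KFold

# Placeholder data standing in for the 356 Fourier feature vectors and labels.
features = np.random.rand(356, 128)
labels = np.random.randint(0, 2, size=356)

def evaluate_fold(train_idx, test_idx):
    """Stub: train the kFFNN on train_idx and return its Acc on test_idx."""
    # A dummy accuracy is returned here; the real pipeline would train and
    # score the network described in the manuscript.
    return 1.0

kf = KFold(n_splits=5, shuffle=True, random_state=0)
accs = [evaluate_fold(tr, te) for tr, te in kf.split(features)]
print("Per-fold Acc:", accs, "Average:", np.mean(accs))
```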

 

 

4. Response to Comments on the Quality of English Language

Point 1:

Response 1:    The quality of English in the manuscript is generally good, with clear and coherent language.

 

5. Additional clarifications

No additional clarifications
