4.1. Preliminary Data Analyses
Data collection for the analytical process was conducted between April and May 2024, during the final quality control stage of the finished products. The data collected were subject to extensive preliminary analysis. The first step was to assess the completeness of the data, identify any missing data, and analyse outliers that could affect the results of further analyses. Advanced statistical methods and algorithms were used to assess the quality of the data, allowing anomalies to be accurately identified and removed. Data were then standardised and normalised to ensure consistency of scale and to prevent the influence of different units of measurement of variables on subsequent stages of analysis. This process ensured consistent results when comparing different variables.
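The standardisation step described above can be sketched as a z-score transformation, which removes the influence of differing measurement units. This is a minimal illustrative example; the array contents and function name are not from the study.

```python
# Hypothetical sketch: z-score standardisation of process parameters so that
# variables measured in different units share a common scale.
import numpy as np

def standardise(X):
    """Centre each column to mean 0 and scale it to unit variance."""
    mu = X.mean(axis=0)
    sigma = X.std(axis=0)
    sigma[sigma == 0] = 1.0  # guard against constant columns
    return (X - mu) / sigma

# Example: two parameters on very different scales (temperature vs. a ratio).
X = np.array([[1000.0, 0.2],
              [1020.0, 0.3],
              [1040.0, 0.4]])
Z = standardise(X)
```

After this transformation every column has mean 0 and standard deviation 1, so no single parameter dominates later analyses simply because of its units.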
Furthermore, a detailed analysis of the types of defects in finished products was performed. This analysis included classifying and categorising the different types of defects and determining their frequency of occurrence.
Figure 2 shows that the most common defects recorded in the manufacturing process are D2 (cracks) and D3 (surface irregularities), which represent 25.20% and 21.85% of the total defects, respectively. Cracks pose a significant threat to product integrity, often leading to serious performance problems or product failure. Surface imperfections, which affect both the appearance and functionality of the product, are also a major concern, particularly in industries that require high precision and smooth surfaces. Addressing these two types of defects should be a primary focus to improve overall quality and reduce production losses.
D1 (scratches and abrasions) follows with 15.63% of defects, making it the third most common problem. Although not as critical as cracks or surface irregularities, scratches and abrasions can still affect aesthetic quality and, in certain applications, performance. D4 (radial runout), which accounts for 14.04% of defects, can cause significant problems in components that require rotational accuracy, such as drive shafts or machine parts. Controlling these medium-level defects can improve product reliability and customer satisfaction, especially in performance-critical environments.
The least common defects are D5 (dimensional inaccuracies) at 12.28% and D6 (improper hardness) at 11.00%. Incorrect dimensions can lead to assembly problems or operational malfunctions, while inappropriate hardness can undermine the durability and wear resistance of the product. Although less common, these defects still need to be addressed, as their impact on product performance can be significant. Focussing on reducing these defects, in addition to addressing the most common problems, will ensure an overall improvement in production quality and process efficiency.
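The frequency analysis above can be reproduced with a simple tally. The sketch below is illustrative only: the sample labels are made up, not the 565 real records.

```python
# Illustrative sketch: computing percentage shares of defect categories,
# as in the frequency analysis above (sample data is invented).
from collections import Counter

def defect_shares(labels):
    """Return {defect: percentage of total}, ordered by frequency, descending."""
    counts = Counter(labels)
    total = sum(counts.values())
    return {d: round(100 * c / total, 2)
            for d, c in counts.most_common()}

# A small toy sample stands in for the full quality-control dataset.
sample = ["D2", "D3", "D2", "D1", "D2", "D3", "D4"]
shares = defect_shares(sample)
```

Applied to the real labels, this yields exactly the kind of ranking reported in Figure 2 (most common category first, shares summing to 100%).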
Figure 3 illustrates the distribution of defect types (D1 to D6) across specific weeks from April to May 2024.
In the initial week of April, the data indicate a notable prevalence of defects D1, D2, and D3, with D2 exhibiting the highest incidence among all identified defects. D4 and D5 are relatively less prevalent, while D6 has a minimal occurrence. In the second week, there is a notable increase in the incidence of defect D3, while D1 and D2 exhibit a slight decline in comparison to the preceding week. The distribution of defects D4, D5, and D6 remains low, but there is a more even distribution of these defects. In the third week, defects D3 and D2 continue to exert a dominant influence, maintaining high counts. In contrast, D1 and D5 exhibit a slight increase, while D4 and D6 remain at lower levels. In the fourth week, D3 reaches its highest count for this defect type throughout the period, with moderate counts for D1 and D2 and lower counts for D5 and D6.
In May, there is a notable shift in the distribution of defects. In the initial week, the frequency of defect D3 declines, while that of D1 and D2 increases, with a slight rise in D4 and minimal change in D5 and D6. In the second week, the number of occurrences of D1 and D2 remained consistent, with D2 exhibiting a slight lead. Defect D3 remains at a moderate level of occurrence, while D4 increases slightly, and D5 and D6 remain at a low level of occurrence. In the third week, there is a notable increase in the prevalence of defect D2, which becomes the most common defect type during this period. Moderate levels of D1, D3, and D4 are observed, while the levels of D5 and D6 remain relatively low. In the fourth week, D2 reaches a peak level comparable to that observed in the fourth week of April. The counts for D1 and D3 are moderate, while those for D4, D5, and D6 remain low.
This analysis demonstrates that defects D2 and D3 are the most prevalent throughout the period under review, with D2 peaking frequently in May and D3 peaking in late April. Defects D4, D5, and D6 demonstrate consistently lower counts, indicating a lesser impact compared to D1, D2, and D3. Both April and May’s fourth weeks exhibit elevated counts for specific defects, suggesting potential end-of-month process variations or batch-specific issues that may necessitate further investigation.
The preliminary analysis of the impact of individual process parameters on defect occurrence is presented in Figure 4. It indicated that the curing temperature (X1) (Figure 4a) is correlated with the type of defect, with defects D1 and D2 occurring more frequently at higher temperatures. This suggests that a reduction in temperature may prove to be an effective strategy for mitigating these defects, particularly as some outliers (black dots) can be observed for defect D2. Defect D5 manifests at lower temperatures, underscoring the need to adjust the temperature to curtail its prevalence.
Moreover, the cooling speed (X3) (Figure 4b) has a considerable influence on defect D6, which manifests at elevated values of this parameter. It can be posited that a reduction in cooling speed may prove to be an effective method of limiting the occurrence of this defect. Control of this parameter may also prove beneficial in reducing the incidence of other defects, particularly as outliers (black dots) can be observed for several defect types. Furthermore, the feed rate (X4) (Figure 4c) is found to be correlated with defect D5, which manifests at lower feed rates. It can be posited that an increase in the feed rate may serve to reduce this particular type of defect, and a meticulous calibration of the feed rate for each specific defect type could have a beneficial impact on the overall quality of the product.
Furthermore, the cutting speed (X6) (Figure 4d) has an impact on the occurrence of various defects. Defects D1 and D2 are more prevalent at higher cutting speeds, while defects D3 and D4 are more common at lower speeds. The implementation of an effective control strategy for cutting speed has the potential to reduce the prevalence of nonconforming products. Regarding the grinding wheel speed (X8) (Figure 4e), it was observed that defect D1 occurs at higher values, while D5 occurs at lower values. Optimisation of this parameter may prove to be an effective method of reducing the occurrence of these defects.
The grinding time (X9) (Figure 4f) also plays an important role in this process; the occurrence of defect D4 is more common with longer grinding times. This may suggest that overheating or excessive wear from extended processing times contributes to its occurrence. It can thus be surmised that optimising the grinding time may prove to be an effective method of reducing the number of defects associated with this particular parameter.
The aforementioned analysis demonstrated that a number of process parameters, including the curing temperature, coolant type, cooling speed, feed rate, cutting speed, grinding wheel speed, and grinding time, exert a considerable influence on the occurrence of specific defects. Optimising these parameters has the potential to markedly enhance product quality by reducing the number of defects, thereby improving the efficiency of the production process.
4.3. Machine Learning Models
In the machine learning (ML) process, three different classification models were developed and evaluated: Bagged Trees (BT), Neural Network (NN), and Support Vector Machine (SVM). These models were trained using significant process parameters (X1 to X9) identified in previous stages of the analysis as key to predicting the occurrence of defects labelled D1 to D6. The dataset employed in the present study comprised 565 data points obtained during the final quality control phase of the manufacturing process. These data points encompassed a range of defect types (such as cracks, scratches, surface irregularities, and dimensional inaccuracies) and critical process parameters (curing temperature, feed rate, tool wear, cooling speed, etc.). This scope permitted a meaningful evaluation of the machine learning models and was sufficient for training and validation purposes. To prevent overfitting, five-fold cross-validation was combined with an 80/20 split between the training and test sets. This ensured that each model was tested on different subsets of the data, allowing full use of the dataset in the validation and training process.
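The validation scheme described above can be sketched as follows. Synthetic data stands in for the 565 quality-control records, and a decision tree is used as a placeholder classifier; none of this reproduces the study's actual data or tooling.

```python
# Minimal sketch of the validation scheme: an 80/20 hold-out split plus
# five-fold cross-validation on the training portion (synthetic data).
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score, train_test_split
from sklearn.tree import DecisionTreeClassifier

# 565 samples, 9 process parameters (X1..X9), 6 defect classes (D1..D6).
X, y = make_classification(n_samples=565, n_features=9, n_informative=6,
                           n_classes=6, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2,
                                          stratify=y, random_state=0)

clf = DecisionTreeClassifier(random_state=0)
cv_scores = cross_val_score(clf, X_tr, y_tr, cv=5)   # five-fold CV
test_acc = clf.fit(X_tr, y_tr).score(X_te, y_te)     # held-out 20%
```

Cross-validating on the training portion while keeping a separate hold-out set gives both an overfitting check and an unbiased final accuracy estimate.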
Each model was subjected to a detailed evaluation based on several key metrics. These included prediction speed, which measures how quickly the model can make a prediction using new data; training time, which indicates how long it takes to train the model; and accuracy, which measures how well the model classifies defect cases compared to actual outcomes. Furthermore, the error rate was analysed, which reflects the percentage of incorrect classifications made by the model.
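The four metrics named above can be measured directly, as in the following sketch. The classifier and data are placeholders, not the models or records from the study.

```python
# Illustrative measurement of training time, prediction speed (observations
# per second), accuracy, and error rate for a placeholder classifier.
import time

from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=565, n_features=9, n_informative=6,
                           n_classes=6, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

clf = DecisionTreeClassifier(random_state=0)

t0 = time.perf_counter()
clf.fit(X_tr, y_tr)
train_time = time.perf_counter() - t0            # training time (s)

t0 = time.perf_counter()
clf.predict(X_te)
pred_speed = len(X_te) / (time.perf_counter() - t0)  # observations / second

accuracy = clf.score(X_te, y_te)
error_rate = 1.0 - accuracy                      # share of misclassifications
```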
Table 2 provides a detailed comparison of the results for each of these metrics across all models, allowing an assessment of their effectiveness and practical applicability. By comparing these key parameters, it is possible to select the best model for defect prediction in terms of both prediction accuracy and time efficiency.
A comparative analysis of the three best classification models was performed. The performance of the Bagged Trees (BT), Neural Network (NN), and Support Vector Machine (SVM) models was evaluated in terms of key metrics, including the prediction speed, training time, hyperparameters, accuracy, and error rate.
The Bagged Trees (BT) model is based on an ensemble (bagging) method comprising 30 decision trees, with a maximum of 564 splits in each tree. The principal benefit of this approach is the high degree of stability in the predictions, which is achieved through the aggregation of the results from multiple models. This resulted in an accuracy rate of 94.2%, which is a highly commendable result. However, the model exhibits notable limitations in terms of prediction speed, with a processing capacity of only 1400 observations per second, rendering it the slowest among the models under analysis. The training time was 21.02 s, which also puts it last in terms of time efficiency. The error rate of the BT model was 5.8%, which was marginally higher than that of the neural network, but nevertheless within an acceptable range.
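A configuration matching the BT settings above can be sketched with scikit-learn as a stand-in for the original tool. The 564-split cap is approximated via `max_leaf_nodes=565`, since a binary tree with k splits has k + 1 leaves; this mapping is an assumption, not the study's exact setup.

```python
# Hedged sketch of the Bagged Trees ensemble: 30 decision trees, each
# limited to at most 564 splits (approximated via max_leaf_nodes = 565).
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=565, n_features=9, n_informative=6,
                           n_classes=6, random_state=0)

bt = BaggingClassifier(
    DecisionTreeClassifier(max_leaf_nodes=565),  # <= 564 splits per tree
    n_estimators=30,                             # 30 trees, as reported
    random_state=0,
)
bt.fit(X, y)
```

Aggregating 30 bootstrap-trained trees is what gives bagging its prediction stability, at the cost of the slower prediction speed noted above.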
In several respects, the neural network (NN) model demonstrated the greatest efficiency. The network, comprising 100 neurons in the initial layer with the ReLU activation function, performed strongly across numerous categories. The NN model demonstrated the highest accuracy, at 94.7%, and was therefore the most effective in terms of prediction accuracy. It also exhibited the lowest error rate, 5.3%, indicating the fewest misclassifications, and an impressive prediction speed of 7800 observations per second, making it the fastest model in the analysis. The training time was 8.39 s, which, although not the shortest, was sufficiently rapid given the complexity of the model.
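The NN architecture described above (100 neurons in the first layer, ReLU activation) can be sketched as follows, using scikit-learn's `MLPClassifier` as an assumed stand-in for the original implementation.

```python
# Hedged sketch of the neural network: one hidden layer of 100 ReLU neurons.
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import StandardScaler

X, y = make_classification(n_samples=565, n_features=9, n_informative=6,
                           n_classes=6, random_state=0)
X = StandardScaler().fit_transform(X)  # NNs benefit from standardised inputs

nn = MLPClassifier(hidden_layer_sizes=(100,),  # 100 neurons, first layer
                   activation="relu",
                   max_iter=500, random_state=0)
nn.fit(X, y)
```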
The Support Vector Machine (SVM) was utilised with a Gaussian kernel with a scale of 0.75, which allowed the creation of intricate decision boundaries. The most significant advantage of this model was its exceptionally short training time of just 1.67 s, making it the most time-efficient model. However, the SVM demonstrated somewhat lower prediction accuracy, achieving 94.0%, marginally below the results of the NN (94.7%) and BT (94.2%). The error rate for the SVM was 6.0%, indicating a higher number of misclassifications compared to the other models.
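The SVM configuration can be sketched as follows. In scikit-learn the RBF kernel is parameterised as K(x, z) = exp(-gamma·||x - z||²), so a "kernel scale" s of 0.75 is mapped here to gamma = 1/s²; the exact scale convention of the original tool is an assumption.

```python
# Hedged sketch of the SVM: Gaussian (RBF) kernel with kernel scale 0.75,
# translated to scikit-learn's gamma parameterisation (gamma = 1 / scale**2).
from sklearn.datasets import make_classification
from sklearn.svm import SVC

X, y = make_classification(n_samples=565, n_features=9, n_informative=6,
                           n_classes=6, random_state=0)

scale = 0.75
svm = SVC(kernel="rbf", gamma=1.0 / scale**2)  # assumed scale convention
svm.fit(X, y)
```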
The neural network (NN) was identified as the most comprehensive and efficient model, exhibiting the highest accuracy, the lowest error rate, and the fastest prediction speed. Although its training time was not the shortest, it was sufficiently rapid for the NN to outperform the other models in terms of overall efficiency. Despite demonstrating a level of accuracy similar to that of the NN, the BT exhibited slower processing times and a reduced capacity for real-time operation; consequently, it is less suitable for applications that require rapid response times. Conversely, the SVM exhibited the fastest training time but lower accuracy and a higher error rate, limiting its applicability to scenarios where training speed is the primary consideration. Otherwise, the NN remains the optimal choice for accurate and rapid prediction.
The performance of each machine learning model was evaluated using confusion matrices, which are a key tool in this context. These matrices provide a comprehensive breakdown of the performance of the models by comparing the actual class labels (true labels) with the predicted labels generated by the models. This analysis enables not only the evaluation of the overall accuracy of the models but also a deeper understanding of the types and frequencies of specific errors made by each model. This allows for a detailed assessment of the models’ strengths and weaknesses, highlighting areas where improvements may be necessary, such as identifying which classes tend to be misclassified and the severity of such errors.
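The per-class PPV and FDR figures discussed below are derived from the columns of a confusion matrix. The sketch here uses invented labels purely to show the computation.

```python
# Sketch: confusion matrix with per-class positive predictive value (PPV)
# and false discovery rate (FDR) derived from its columns. Labels invented.
import numpy as np
from sklearn.metrics import confusion_matrix

true = ["D1", "D1", "D2", "D2", "D2", "D3", "D3", "D3"]
pred = ["D1", "D2", "D2", "D2", "D1", "D3", "D3", "D2"]
labels = ["D1", "D2", "D3"]

cm = confusion_matrix(true, pred, labels=labels)  # rows: true, cols: predicted
col_sums = cm.sum(axis=0)                         # total predictions per class
ppv = np.diag(cm) / np.where(col_sums == 0, 1, col_sums)  # precision
fdr = 1.0 - ppv                                   # share of wrong predictions
```

For each class, PPV is the diagonal cell divided by its column total (correct predictions over all predictions of that class), and FDR is its complement.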
Figure 5 presents the confusion matrices for each machine learning model developed.
The results presented in Figure 5a are based on the model developed using the bagged tree method. The matrix presents the percentages of correct and incorrect classifications for six different classes, labelled D1 through D6. Cells along the main diagonal of the matrix indicate the correct classifications. For example, the model achieved 96.2% accuracy for class D1, while the accuracy for D2 was 90.3%. The highest accuracy was observed for class D5, with 97.2% of cases correctly classified, whereas the least accurate classifications occurred for class D2. Misclassifications are shown in the off-diagonal cells: for example, 3.2% of the instances belonging to class D1 were incorrectly assigned to class D2, while 4.5% of the instances belonging to class D3 were misclassified as D2.
Furthermore, the positive predictive value (PPV), which gauges the accuracy of classification for each class, and the false discovery rate (FDR), which quantifies the proportion of erroneous predictions, are also presented. The highest precision was observed for class D5, at 97.2%, while the lowest was observed for class D2, at 90.3%. Correspondingly, class D2 exhibited the highest FDR at 9.7%, indicating that predictions for this class were more susceptible to error, whereas class D5 exhibited the lowest FDR at 2.8%, demonstrating the highest classification precision. Overall, the Bagged Trees model demonstrates robust performance, although further optimisation is necessary for classes such as D2 to reduce misclassification rates.
The confusion matrix for the NN model (Figure 5b) indicates a high degree of accuracy, with classes such as D3 and D5 achieving the highest results at 97.4% and 97.2%, respectively. Class D1 is correctly classified 95.2% of the time, while class D2 achieves an accuracy of 92.2%. The highest frequency of misclassification is observed for D6, with 9.0% of instances incorrectly predicted, predominantly as D1. The misclassification rates for the remaining classes are relatively moderate, with 4.5% of instances of class D1 being misclassified as class D6 and 3.6% of instances of class D2 being predicted as class D4.
Furthermore, the precision (PPV) and false discovery rate (FDR) metrics provide further insight into the model’s strengths and weaknesses. D3 and D5 demonstrate the highest precision at 97.4% and 97.2%, accompanied by low FDR values of 2.6% and 2.8%, respectively. This indicates a robust capacity to accurately identify these classes. However, D6 exhibits the lowest precision at 91.0% and the highest FDR at 9.0%, indicating that this class is the most challenging for the model to classify accurately. Although the overall performance of the neural network is strong, additional refinement, particularly for class D6, could help improve the model’s prediction accuracy and reduce misclassification rates.
The confusion matrix for the SVM model (Figure 5c) indicates a high level of accuracy across most classes, particularly for classes D1, D4, D5, and D6, where the model achieved a perfect classification rate of 100%. However, the performance for class D2 is notably less accurate, with an accuracy of 84%, while class D3 shows a correct classification of 94.3%. The highest frequency of misclassification was observed for class D2, with 16% of instances incorrectly classified as class D1, and for class D3, with 5.7% of instances incorrectly classified as class D2. Despite these misclassifications, classes D4, D5, and D6 demonstrate no misclassification whatsoever, thereby underscoring the robustness of the model with respect to these categories.
In terms of the precision (PPV) and false discovery rate (FDR), the model demonstrated remarkable performance for classes D1, D4, D5, and D6, achieving 100% precision and no false discoveries for these classes. However, the precision for D2 is significantly lower at 84% and D3 has a PPV of 94.3%, which is indicative of the impact of misclassifications. The false discovery rate (FDR) for D2 is the highest at 16%, indicating that the model faces greater challenges with this class. In comparison, D3 has an FDR of 5.7%. Overall, the SVM model demonstrates excellent performance, particularly for specific classes. However, further enhancements could be made to D2 to minimise misclassification and improve precision.
A comparison of the results of the confusion matrices for each model demonstrates a high overall performance, although there are differences in the precision and misclassification rates for specific classes. The BT model demonstrates optimal performance for classes D5 and D1, exhibiting high precision. However, it exhibits suboptimal performance for class D2, with the highest rate of misclassification. The neural network model also demonstrates robust performance, particularly for classes D3 and D5. However, it faces challenges with class D6, which presents the highest misclassification rate. In contrast, the SVM model shows excellent precision for classes D1, D4, D5, and D6, with 100% correct classifications for these classes. However, it significantly underperforms with class D2, showing the highest misclassification rate (16%). In general, all three models perform well in most classes. However, the SVM model demonstrates exceptional accuracy for some classes, while the NN and BT models would benefit from further optimisation for classes D6 and D2, respectively.
Table 3 presents a comparison of the three models (BT, NN, and SVM) based on their performance in classifying different types of defects (D1 to D6). The table includes accuracy information for each model, expressed as the true positive rate (TPR), and notes any classification problems (False Discovery Rate, FDR).
A comparison of these three models allows for the identification of the strengths and weaknesses of each in the classification of defect types D1 to D6. SVM achieves perfect accuracy (100% TPR) for D1, D4, D5, and D6, establishing it as the optimal choice for these defects. The neural network shows superior performance in identifying D3 (97.4% TPR) and exhibits satisfactory results for D5 and D2. However, it exhibits some degree of misclassification for D1 and D6. The bagged tree model exhibits balanced performance across all defects, with notable results for D5 (97.2% TPR) and high precision for D3, D4, and D6. This makes it suitable for general classification without focussing on any specific defect type.
In addition to the confusion matrices, the performance of the developed models was further evaluated using Receiver Operating Characteristic (ROC) curves and the corresponding Area Under the Curve (AUC) metrics. The ROC curve is a visual representation that demonstrates a classifier’s capacity to differentiate between classes. It is constructed by plotting the true positive rate (TPR) against the false positive rate (FPR) at varying threshold levels. This enables a comprehensive analysis of the trade-off between sensitivity (recall) and specificity at varying decision thresholds. The AUC metric complements the ROC curve by providing a single numeric value that summarises the model’s overall discriminatory ability. An AUC value approaching 1 indicates optimal performance, meaning the model is highly capable of distinguishing between the positive and negative classes. Conversely, an AUC value of approximately 0.5 indicates that the model performs no better than random guessing. Therefore, the ROC-AUC analysis offers a more comprehensive evaluation of the robustness and effectiveness of the model across various classification thresholds (as illustrated in Figure 6), facilitating a more nuanced understanding of the model’s generalisation capabilities beyond mere accuracy metrics.
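For a multi-class problem such as this one, the per-class AUC values are typically computed one-vs-rest from predicted class probabilities. A minimal sketch, with a placeholder classifier and synthetic data:

```python
# Sketch of one-vs-rest ROC-AUC evaluation from predicted probabilities
# (synthetic data and a placeholder classifier, illustrative only).
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=565, n_features=9, n_informative=6,
                           n_classes=6, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
proba = clf.predict_proba(X_te)                       # one column per class
auc_ovr = roc_auc_score(y_te, proba, multi_class="ovr")  # macro-averaged AUC
```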
The ROC curve analysis for the three models (Figure 6) demonstrates their robust performance, although with some variation between different classes.
The BT model (Figure 6a) exhibits exemplary classification capabilities, with AUC values ranging from 0.9691 for D1 to 1.0 for D5, signifying near-flawless performance for most classes. Furthermore, D6 shows an exceptional degree of performance, with an AUC value of 0.9994, indicating that the model is capable of almost perfectly distinguishing between positive and negative instances. However, the AUC for D1, although still high at 0.9691, exhibits a marginally elevated false positive rate compared to the other classes, thus identifying a potential avenue for improvement. For classes D2, D3, and D4, the model maintains an AUC value greater than 0.98, indicating a robust and reliable classification across the board.
The NN model (Figure 6b) also demonstrates robust performance, although the variability across the classes is more pronounced in comparison to the bagged trees model. The AUC values range from 0.9392 for D1 to 0.9984 for D5, indicating that, while the model performs well, there is a higher rate of false positives for certain classes, particularly D1 and D4. Regarding D5 and D6, the model demonstrates a near-perfect classification, as evidenced by the AUC values of 0.9984 and 0.9816, respectively. This indicates a high degree of discriminatory power for these classes. However, the somewhat diminished AUC values for D1 (0.9392) and D4 (0.9479) indicate that the model faces greater challenges with these classes. The model’s overall performance remains robust; however, further tuning could prove beneficial, particularly for classes presenting a higher rate of misclassification.
The SVM model (Figure 6c) demonstrates the most consistent and high performance across all classes, with AUC values ranging from 0.9937 for D4 to a perfect 1.0 for D5. This indicates a near-flawless performance, particularly for classes D5 and D6, where the AUC values reach 1.0 and 0.9991, respectively. Even for classes such as D1, D2, and D3, where the AUC values are 0.9956, 0.9989, and 0.9969, the model performs exceptionally well, consistently maintaining a very high true positive rate while minimising false positives. The slightly lower AUC for D4 (0.9937) is nevertheless still excellent, indicating that the model only struggles marginally with this class.
The above results for all three models demonstrate robust classification capabilities, with AUC values consistently exceeding 0.93 in all classes. The SVM is the most robust and reliable model, demonstrating near-perfect performance across all classes. Furthermore, the Bagged Trees model also performs exceptionally well, particularly for certain classes such as D5 and D6. However, while the NN model is strong overall, it shows more variability and a slightly higher rate of false positives, particularly for D1 and D4. Therefore, for applications that require consistent and highly accurate performance across all categories, the SVM would likely be the optimal choice. However, the Bagged Trees and Neural Network models remain strong contenders, depending on the specific requirements of the task.
Partial dependence analysis of input parameters and predicted defect categories (D1–D6) (Figure 7) in the optimal neural network (NN) model allows for the identification of pivotal production parameters that exert the most substantial influence on the incidence of diverse defect types. The following section provides a comprehensive account of the influence of each parameter on production quality.
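Partial dependence for one parameter is obtained by sweeping that parameter across its range while holding the data fixed, and averaging the model's predicted probability of a given defect class. A minimal hand-rolled sketch (synthetic data; feature index 0 stands in for X1, and all names are illustrative):

```python
# Sketch of partial dependence: the mean predicted probability of one class
# as a single input feature is swept across its observed range.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=565, n_features=9, n_informative=6,
                           n_classes=6, random_state=0)
nn = MLPClassifier(hidden_layer_sizes=(100,), max_iter=500,
                   random_state=0).fit(X, y)

def partial_dependence(model, X, feature, grid, target_class):
    """Mean predicted probability of target_class as `feature` is varied."""
    pd = []
    for v in grid:
        Xv = X.copy()
        Xv[:, feature] = v  # force the chosen feature to the grid value
        pd.append(model.predict_proba(Xv)[:, target_class].mean())
    return np.array(pd)

grid = np.linspace(X[:, 0].min(), X[:, 0].max(), 20)
pd_curve = partial_dependence(nn, X, feature=0, grid=grid, target_class=1)
```

Plotting `pd_curve` against `grid` produces exactly the kind of curve shown in Figure 7; `sklearn.inspection.partial_dependence` offers an equivalent built-in.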
The curing temperature (X1) (Figure 7a) has been identified as a significant factor influencing the occurrence of cracks (D2). As illustrated in the graph, an increase in temperature above 1020 °C is associated with a notable rise in the predicted value for defect D2, indicating that elevated temperatures can contribute to an enhanced likelihood of cracking. For defects such as surface irregularities (D3) and radial runout (D4), the predicted value exhibits a slight decrease at higher temperatures, which may be indicative of a reduced likelihood of occurrence at elevated temperatures. In contrast, defects such as scratches (D1) and improper hardness (D6) are practically unaffected by temperature changes. This suggests that controlling the curing temperature is vital to reducing the incidence of cracks.
The type of coolant used (X2) (Figure 7b) has a significant impact on the incidence of scratches (D1). A notable reduction in the predicted value for defect D1 was observed when the coolant was changed from type “A” to type “O”, from 0.7 to nearly 0. This suggests that utilising an appropriate coolant can markedly diminish the likelihood of scratches. The remaining defects (D2, D3, D4, D5, D6) exhibit low predicted values and minimal variation dependent on the type of coolant, suggesting that this parameter exerts a relatively limited influence on these defects.
The analysis of the relationship between the parameter X4 (Figure 7c) and defects indicates that it has a significant impact on the occurrence of cracks (D2) and dimensional problems (D5). Regarding cracks, the predicted value for defect D2 shows a marked increase when X4 exceeds 0.35, suggesting that the control of this parameter may be a crucial factor in the reduction of cracks. Similarly, defect D5 (dimensional inaccuracies) exhibits a maximum at medium X4 values, indicating that maintaining X4 within a specific range can help ensure dimensional accuracy. The influence of X4 on other defects, such as D1 (scratches), D3 (surface irregularities), and D6 (hardness problems), is less pronounced.
The categorical parameter (X5) exerts a moderate influence on a range of defects. The predicted values for defects D2 (cracks) and D5 (dimensional inaccuracies) demonstrate a moderate increase depending on the category of X5, indicating a potential relationship, albeit less pronounced than for other parameters. The remaining defects demonstrate a relatively stable response to changes in X5.
The wear of the tool (X7) (Figure 7d) has a significant impact on the incidence of cracks (D2). Upon reaching a high level of tool wear (“H”), the predicted value for defect D2 exhibits a notable increase, reaching 0.8. This indicates that tools in suboptimal condition are a significant contributing factor to the appearance of cracks. In contrast, the predicted values for other defects, including surface irregularities (D3), radial runout (D4), and improper hardness (D6), exhibit greater stability and less sensitivity to this parameter. This suggests that prioritising the control of tool condition may be an effective strategy to reduce the occurrence of cracks, while other defects appear to be less dependent on this parameter.
Analysis of the relationship between input parameters and defects reveals the pivotal influence of parameters such as tool wear (X7), temperature (X1), and type of coolant (X2) on the quality of the final product. It is of particular importance to monitor and regulate tool wear and temperature to minimise the occurrence of cracks, which are among the most prevalent defects. Additionally, the selection of an appropriate coolant can markedly reduce the incidence of scratches, thus enhancing the overall surface quality of the products.
Furthermore, the parameter X4 (Figure 7c) has been demonstrated to exert a significant influence on the incidence of cracks and dimensional inaccuracies. This underscores the need to maintain its value within a defined range to eliminate these defects. The results of this analysis should be used to implement strategies to optimise the production process, with the objective of reducing defects and improving product quality.
4.4. Proposal of the Improvements
A critical analysis of the input parameters and their influence on the incidence of defects in the production process reveals several key actions that the company must undertake to optimise production quality and reduce defects.
First, it is recommended that the company implement precise temperature control mechanisms in order to optimise and monitor the curing temperature (X1). Production temperatures must remain below 1020 °C to minimise the probability of cracks (D2) occurring. Implementing real-time temperature monitoring and control systems is recommended to ensure consistent temperature management, with the specific objective of minimising the occurrence of cracks.
Subsequently, the company should implement a policy of standardising the use of coolant type O throughout the production process, particularly in areas where scratches (D1) are prevalent. Given the significant reduction in the occurrence of scratches associated with the use of the “O” coolant, it would be prudent to establish this as the standard coolant. Additionally, periodic evaluations of coolant efficacy should be carried out to determine its continued capacity to diminish surface imperfections.
Regarding the parameter X4, it is recommended that the company implement monitoring systems to ensure that it remains within an optimal range, particularly below 0.35, with the objective of minimising both cracks (D2) and dimensional inaccuracies (D5). Given the significant impact of X4 on the prevalence of defects, meticulous regulation of this parameter can enhance product quality and reduce the incidence of defects.
Furthermore, the company should adjust the production speed (X5) based on the specific types of products and the observed defect rates. It is recommended that a moderate production speed be maintained to achieve an equilibrium between quality and productivity, as this parameter exerts a moderate influence on the occurrence of cracks and dimensional inaccuracies.
Furthermore, the company should implement a predictive maintenance system for tool wear (X7). The system should monitor the use of tools and the level of wear to facilitate the prompt implementation of maintenance or replacement procedures when wear reaches a critical threshold. It is evident that the wear of the tool plays a significant role in the formation of cracks. Therefore, it is imperative that regular maintenance procedures are implemented to minimise defects and ensure consistent quality.
To facilitate the aforementioned actions, it is recommended that the company implement advanced real-time quality control systems that monitor critical parameters, including the temperature, tool wear, and coolant. Such systems should be capable of providing automated alerts or adjustments to prevent the formation of defects, thereby reducing the need for rework and ensuring consistent quality throughout the production process.
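The automated-alert idea above can be sketched as a simple threshold rule set. The limits come from the analysis in this section, but the rule engine itself, its function name, and the reading format are hypothetical.

```python
# Illustrative sketch of automated alerts from the thresholds identified
# above (the rule engine and reading format are hypothetical).
THRESHOLDS = {
    "X1_temperature_max": 1020.0,  # degC; above this, crack (D2) risk rises
    "X4_max": 0.35,                # above this, D2 / D5 risk rises
    "X7_tool_wear_high": "H",      # "H" (high) wear triggers maintenance
}

def check_reading(reading):
    """Return a list of alert strings for one set of process readings."""
    alerts = []
    if reading["X1"] > THRESHOLDS["X1_temperature_max"]:
        alerts.append("X1 above 1020 degC: elevated crack (D2) risk")
    if reading["X4"] > THRESHOLDS["X4_max"]:
        alerts.append("X4 above 0.35: crack (D2) / dimensional (D5) risk")
    if reading["X7"] == THRESHOLDS["X7_tool_wear_high"]:
        alerts.append("tool wear high: schedule maintenance")
    return alerts

alerts = check_reading({"X1": 1035.0, "X4": 0.2, "X7": "H"})
```

In practice such rules would sit alongside the ML models, providing immediate, interpretable alarms while the models handle the subtler parameter interactions.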
Furthermore, it is imperative that employee training be conducted on a regular basis to ensure that operators and technical staff fully understand the importance of monitoring and controlling key parameters. Training should concentrate on the optimal methods for the management of temperature, tool wear, and coolant usage. Moreover, the company should undertake periodic process audits with a view toward identifying potential areas for further optimisation and ensuring that all production steps comply with the requisite quality standards.
Finally, the company should adopt a data-driven decision-making approach, using machine learning and data analytics to facilitate the continuous analysis of production data. By identifying trends in defect formation, the company can make adjustments to the production process in real time based on the insights gained from these models. This will assist in maintaining optimal conditions and reducing defect rates in a dynamic production environment.
By implementing these measures, the company can significantly reduce defects, particularly those of a cosmetic nature, improve dimensional accuracy, and enhance overall product quality. These steps not only address current problems but also establish a foundation for long-term consistency and optimisation, leading to cost savings, reduced waste, and increased customer satisfaction.