Article

Unlocking the Potential of Soft Computing for Predicting Lubricant Elemental Spectroscopy

by Mohammad-Reza Pourramezan, Abbas Rohani * and Mohammad Hossein Abbaspour-Fard
Department of Biosystems Engineering, Faculty of Agriculture, Ferdowsi University of Mashhad, Mashhad 9177948974, Iran
*
Author to whom correspondence should be addressed.
Lubricants 2023, 11(9), 382; https://doi.org/10.3390/lubricants11090382
Submission received: 28 June 2023 / Revised: 24 July 2023 / Accepted: 25 July 2023 / Published: 7 September 2023

Abstract

Predictive maintenance of mechanical systems relies on accurate condition monitoring of lubricants. This study assesses the performance of soft computing models in predicting the elemental spectroscopy (Fe, Pb, Cu, Cr, Al, Si, and Zn) of engine lubricants, based on the electrical properties (ε′, ε″, and tan δ) of oil samples. The study employed a dataset of 49 lubricant samples, comprising elemental spectroscopy and dielectric properties, to train and test several soft computing models (RBF, ANFIS, SVM, MLP, and GPR). The performance of the models was evaluated using error metrics such as MAPE, RMSE, and EF. The RBF model delivered the most accurate predictions for silicon at 7.4 GHz, with an RMSE of 0.4 and MAPE of 0.7. Performance was further improved by fine-tuning RBF parameters, such as the hidden size and training algorithm. The sensitivity analysis showed that utilizing all three input electrical properties (ε′, ε″, and tan δ) resulted in the lowest errors. Nevertheless, there are limitations to the study. In our country, measuring the electrical properties of engine lubricants is not a common practice and the necessary equipment is not widely available, which limited the number of samples studied. Despite these limitations, this study offers a proof-of-concept for predicting lubricant conditions based on readily measurable electrical properties. This paves the way for developing machine learning-based real-time lubricant monitoring systems.

1. Introduction

In today's technological landscape, there is a growing demand for mechanical systems that can operate with high reliability. These systems are utilized in a diverse range of applications, including the aerospace, automotive, energy, and agriculture industries. Appropriate maintenance of industrial equipment is becoming ever more important for avoiding equipment failures and preventing high costs and economic losses [1]. The need for high-reliability mechanical systems has emerged as a result of the increasing complexity of technology and the need to ensure safe and reliable operation under various conditions and stresses. Engineers and technologists in different fields work collaboratively to design and develop high-reliability mechanical systems that incorporate redundancy and undergo rigorous testing and verification procedures to ensure their reliability and safety [2]. Condition monitoring is a vital program for ensuring the safety, durability, and efficiency of machines. It involves the regular monitoring of a machine's health through various techniques such as vibration analysis, oil analysis, and thermography. By tracking and analyzing the machine's condition over time, potential issues and faults can be detected early, allowing for timely maintenance and repairs before significant damage occurs. This proactive approach to maintenance helps to minimize downtime, reduce repair costs, and extend the life of the machine. Condition monitoring is particularly critical in industries where machine failure can result in significant safety risks and financial losses, such as manufacturing, transportation, and energy production [3,4]. The lubricant in a machine functions similarly to blood in a living organism, providing vital protection against wear and tear. Just as blood delivers oxygen and nutrients to vital organs, lubricant delivers essential oils and additives to the machine's moving parts, reducing friction and preventing damage. Without proper lubrication, a machine's performance will suffer, and it may experience premature failure. Therefore, regular lubrication maintenance, including oil changes and filter replacements, is necessary to ensure the smooth and efficient operation of the machine [5,6]. However, the primary purpose of a lubricant is to mitigate tribological operational issues associated with friction and wear [7]. By analyzing lubricating oil, potential issues can be detected early, reducing the risk of costly repairs and downtime. Regular oil analysis is necessary to ensure safe and reliable system operation [8]. Figure 1 illustrates the evolution of maintenance strategies over time, from reactive maintenance to more proactive approaches such as preventive and condition-based maintenance [9]. This evolution has significant implications for improving safety, reliability, and cost-effectiveness in industries such as manufacturing, transportation, agriculture, and energy production. There are various methods for evaluating lubricating oil, as outlined in [10]. These methods include physical and chemical techniques such as chromatography, spectral analysis techniques such as infrared absorption spectroscopy (IAS) and Raman spectroscopy, and electrical diagnosis methods such as return voltage measurement and frequency-domain spectroscopy. Each of these techniques has its advantages and limitations and is useful for different applications.
Determining the appropriate timeline for lubricant maintenance is a controversial issue. One investigation undertook a detailed examination of sophisticated material characterization techniques, such as atomic force microscopy, thermal analysis, and X-ray diffraction. Particular focus was directed toward appraising the applicability of graphene as an emerging nanomaterial to heighten the performance characteristics of several materials employed in pavement engineering, including asphalt binders [11]. While laboratory methods can provide precise results on lubricant condition, they may not be sufficient in determining the optimal maintenance timeline, as it depends on various factors such as the machine's operating environment and duty cycle. A common limitation of laboratory methodologies is their limited ability to fully represent the state of oil at a particular point in time and their inability to account for changes in its state over time [12]. An effective lubricant maintenance program should consider factors such as the machine's history, the manufacturer's recommendations, and best industry practices. Valuable data about an engine's operational circumstances and maintenance history can be obtained from the machinery's history, aiding in the identification of optimal maintenance schedules. Additionally, manufacturers' guidelines can provide direction on the suitable lubricant type and change intervals for the engine [13]. Regular oil analysis and consultation with lubricant experts can help determine the optimal maintenance timeline [14,15]. Field testing offers advantages over laboratory testing in determining lubricating oil maintenance schedules. The primary advantage is that field testing provides a more accurate representation of the oil's state under real operational conditions [16]. Electrical techniques provide an easy-to-use and cost-effective alternative to laboratory methods for evaluating lubricant condition. They allow for in situ measurements that enable early identification of potential issues, reducing the risk of downtime or significant damage. The emergence of electrical techniques has expanded the range of options available for lubricant evaluation [17,18]. For instance, systems such as the "LUBSTER" system use infrared and color pigment sensors to indicate the condition of the oil on the dashboard [19]. This helps to improve the efficiency of engines and machinery that rely on lubricating oil. Electronic technology provides accurate and reliable data on the condition of lubricating oil. For example, one study used multi-sensor information fusion technology to monitor the quality of automotive engine lubricating oil. The study used the moisture content of the lubricating oil, its dielectric constant, the scattering and transparency of infrared light, and the iron wear particle content (reflected in permeability and ultrasonic reflectivity) as input factors and established a monitoring model of lubricating oil quality using information fusion theory. The results showed that a lubricating oil quality monitoring model based on multi-sensor information fusion can more accurately reflect the quality of lubricating oil [20]. One example is the development of smart sensor systems for monitoring the operational condition of in-service diesel engine oils. These sensors can provide real-time condition monitoring and project the remaining usable life of the lubricant, reducing or eliminating the need for traditional oil analysis methods [21].
Electronic technology can help to reduce the environmental impact of used lubricating oil. For instance, one study explored the potential of using differences in wear particle kinematic characteristics to recognize changes in wear particle diameter and oil viscosity. The study designed and fabricated a wear particle kinematic analysis system (WKAS) that was applied to a pin-disc tester, and the experimental results showed a corresponding relationship between particle velocity, particle diameter, and oil viscosity [22]. By monitoring the quality of lubricating oil, potential problems can be detected early, preventing oil contamination and reducing the environmental impact of used lubricating oil. Intelligent diagnosis technology can provide a real-time and continuous collection of data during the operation of the diesel engine, which can help in detecting faults and diagnosing problems in a timely manner [23]. Soft computing methods have also been employed for lubricant condition analysis, eliminating the need for expert intervention. These methods use artificial intelligence and machine learning algorithms to analyze data from sensors and determine the condition of the lubricant. Soft computing methods can help to detect subtle changes in lubricant condition that may not be apparent through traditional methods, improving the accuracy and reliability of the evaluation. Moreover, these methods can be integrated into the machine's control system, providing real-time monitoring and alerts for maintenance professionals. Soft computing methods offer a promising solution for improving lubricant maintenance, reducing the risk of downtime and costly repairs [15,24,25]. Sophisticated technologies that are employed in monitoring diesel engines encompass deep transfer learning and genetic algorithms. For example, a diagnostic approach has been proposed that leverages intelligent methodologies, such as optimized variational mode decomposition and deep transfer learning, to address fault diagnosis in diesel engines [26]. Furthermore, a diagnostic framework for marine diesel engines, founded on an adaptive genetic algorithm, has been devised to achieve efficient and precise classification of faults occurring in diesel engines [27].
Lubricant condition monitoring is an area of active research, with soft computing techniques emerging as a promising focus. These techniques use artificial intelligence and machine learning to improve the accuracy and efficiency of lubricant maintenance, reducing the risk of downtime and costly repairs [15,28,29,30,31]. Overall, soft computing techniques offer a faster [32], more accurate [33], and more adaptable [34] approach to diesel engine lubricant monitoring than traditional methods. Data-driven condition monitoring is an essential technology for intelligent manufacturing systems to identify anomalies from malfunctioning equipment, prevent unplanned downtime, and reduce operation costs by predictive maintenance without interrupting normal machine operations [35]. These techniques are relatively easy to develop and apply and can provide accurate condition assessment and fault diagnosis. Faults and failures of induction machines can lead to excessive downtimes and generate large losses in terms of maintenance and lost revenues, and this motivates the examination of online condition monitoring [36]. The accuracy of machine learning models in predicting engine lubricant properties has been investigated in recent studies. For example, one study conducted a preliminary test using K-nearest neighbor (KNN) and Radial Basis Function (RBF) models for engine lubricant spectral analysis. The study found that the models reduced twelve indexes to seven, including iron, chromium, lead, copper, aluminum, nickel, and TDPQ. The RBF-ANN modeling approach was particularly noteworthy for its accuracy in detecting all three sizes of the training sets, with an impressive accuracy rate of approximately 99.85%. These results suggest that machine learning models have great potential for improving engine lubricant analysis and predicting properties with high precision [15]. Another study explored the use of the Recursive Feature Elimination (RFE) method to predict external wear failure by reducing independent variables. The study found that this approach achieved an impressive accuracy of 94.20%. Interestingly, the study also revealed that the presence of iron, aluminum, and lead was particularly important in assessing wear conditions. These findings demonstrate the potential of machine learning models to identify important factors that contribute to engine lubricant properties and failure prediction [30].
Dielectric or Impedance Spectroscopy (IS) is a powerful and versatile technique for measuring the electrical properties of various materials, including concrete, paper, liquids, and even biofuels. This non-destructive method is not only cost-effective but also highly accurate, providing researchers and engineers with valuable insights into the electrical behavior of these materials [37]. One of the main advantages of these methods is that they are non-destructive, meaning that they do not damage the material being tested. This makes them ideal for testing delicate or expensive materials, as well as for testing materials that cannot be easily replaced or repaired [38]. Additionally, these methods are cost-effective, as they do not require expensive equipment or specialized training to perform [39]. Another advantage of dielectric or impedance methods is that they provide high-precision measurement results. These methods are able to detect small changes in the electrical properties of materials, which can be used to identify subtle differences between materials or to track changes over time [40]. Researchers have proposed a novel approach to locomotive system maintenance by investigating the relationship between dielectric properties and metallic and non-metallic particles found in engine oil. In this study, artificial neural networks were employed to determine the correlation between the dielectric constant and oil impurities, as well as the dielectric loss factor and oil impurities. Specifically, the researchers used elemental spectroscopy as inputs and dielectric properties as outputs in their modeling approach. Impressively, the study achieved highly promising regression values, with the dielectric constant achieving an R value of 0.8513 and the dielectric loss factor achieving an R value of 0.8015 at 7.4 GHz. These results demonstrate the potential of machine learning models to accurately predict engine lubricant properties and aid in effective maintenance strategies [15]. Table 1 provides a comprehensive summary of recent studies that have utilized soft computing tools for engine and component condition evaluation in machinery. This table highlights the growing interest in machine learning techniques for predicting engine lubricant properties and identifying potential issues before they become major problems. It is evident that the field of engine and lubricant research is vast, with a plethora of studies that have investigated diverse methods for enhancing performance and curtailing energy loss in internal combustion piston engines. These techniques encompass improving lubrication quality [41], applying coatings [42], and various other tribological approaches [43]. Consequently, there exist multiple prospects for tackling lubricant and engine health concerns, and this academic inquiry delves into maintenance and management techniques that leverage the electrical properties of lubricants and neural networks.
The research conducted in this study brings forth a novel approach to predicting the elemental spectroscopy of lubricants based on their electrical properties. The main objective of this study was to compare the performance of various soft computing models in predicting the elemental spectroscopy (Fe, Pb, Cu, Cr, Al, Si, and Zn) of lubricants based on their electrical properties (ε′, ε″, and tan δ). While previous research has primarily focused on a limited number of engine lubricant pollutants, this study goes beyond by considering the effects of multiple elements including Fe, Pb, Cu, Cr, Al, Si, and Zn. This comprehensive analysis of various elements in lubricants is a unique aspect of this research. Another notable novelty of this study is the source of lubricant samples. Unlike previous studies that primarily used laboratory-prepared samples, this research examines lubricant samples extracted directly from the engine. This enables the study to closely mimic real-world conditions, enhancing the accuracy and applicability of the findings. The elements present in the lubricant were identified using spectroscopy, and the electrical properties of each sample were subsequently measured. Furthermore, soft computing algorithms were employed to explore the complex relationship between polluting elements and the electrical properties of lubricants. These algorithms leverage data from both existing literature and experiments conducted within this study, resulting in a comprehensive analysis of the subject matter. The findings of this study have important implications for the development of effective condition monitoring of engine lubricants. Moreover, this research promotes the potential of online and portable methods to detect and diagnose such faults. In the subsequent section, a detailed description of the research process will be presented, highlighting the key results obtained from this study.

2. Materials and Methods

2.1. Dataset

To conduct this study, we obtained a dataset from two sources, including 33 records from previous research [14] and 16 records provided by an Iranian company (Tirage, Tehran, Iran). Combining multiple datasets can increase the available data and improve the generalizability of the results. In this case, by combining two datasets, the dataset created was a more comprehensive one that provided a broader range of data points to train machine learning models. This increased the accuracy and reliability of the machine learning models for predicting engine lubricant properties. Using a larger dataset can also help to reduce the risk of bias and increase the representativeness of the data, leading to more robust and reliable predictions. Furthermore, using multiple datasets from different sources can help validate the results and increase the robustness of the findings. Overall, combining datasets can be an effective way to improve the quality of data and enhance the performance of machine learning models.
The dataset used in this study includes the elemental spectroscopy (Fe, Pb, Cu, Cr, Al, Si, and Zn) of lubricants, while their electrical properties (ε′, ε″, and tan δ) were measured during the course of this study. The data collection procedure is illustrated in Figure 2, with additional details provided in Section 2.1.1 and Section 2.1.2. Normalization is a common preprocessing technique used to ensure that features are on a similar scale and have equal influence on the machine learning algorithm. In this case, normalizing the data to a range of −1 to 1 ensured that the features were standardized and had a uniform scale, which helped to reduce the influence of outliers and improve the accuracy of the machine learning models. The presence of outliers can skew the results and lead to poor performance of the machine learning algorithms [47,48]. By scaling the data to a uniform range (Equation (1)), we were able to reduce the impact of outliers and ensure that the machine learning models were trained on a consistent and reliable dataset. This can increase the robustness of the results and improve the generalizability of the predictions. Definitions of the parameters used in all formulas in this article are provided in Table A1 in Appendix A.
$$x_n = -1 + \frac{2\left(x - x_{min}\right)}{x_{max} - x_{min}} \qquad (1)$$
In this equation, $x = \left(x_1, x_2, \ldots, x_n\right)$ represents the principal value of the index vector, while $x_n$ represents the normalized value of the index vector. The maximum and minimum values of the index are denoted by $x_{max}$ and $x_{min}$, respectively.
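As an illustration only, the min-max scaling of Equation (1) can be written as a short Python function (the function name and sample values below are hypothetical; the study itself implemented its preprocessing in MATLAB):

```python
import numpy as np

def normalize_minus1_to_1(x):
    """Scale a feature vector to [-1, 1] following Equation (1):
    x_n = -1 + 2 * (x - x_min) / (x_max - x_min)."""
    x = np.asarray(x, dtype=float)
    x_min, x_max = x.min(), x.max()
    return -1.0 + 2.0 * (x - x_min) / (x_max - x_min)

# Example: a hypothetical Fe concentration column (ppm)
fe_ppm = np.array([12.0, 35.0, 7.0, 60.0])
print(normalize_minus1_to_1(fe_ppm))  # all values now lie in [-1, 1]
```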

2.1.1. Extracted Datasets

The extracted datasets used in this study comprised 33 records obtained from a published work, which included the elemental spectroscopy (Fe, Pb, Cu, Cr, Al, Si, and Zn) and electrical properties (ε′, ε″, and tan δ) of used lubricants [14]. An additional 16 records were provided by a local Iranian company called Tirage. These lubricant samples had been collected from engines and were accompanied by their elemental spectroscopy (Table 2). In this study, we measured the electrical properties (ε′, ε″, and tan δ) of these samples and reported our findings in Section 2.1.2.

2.1.2. Experimental Datasets

Tirage, a local Iranian company, provided 16 lubricant samples extracted from engines along with their elemental spectroscopy, which is shown in Table 2. However, information about the electrical properties of the samples (e.g., ε′, ε″, and tan δ) was not provided. To address this, we measured the electrical properties of the samples using a Vector Network Analyzer (VNA) (R&S ZVL 13, made in the USA) as part of our current research. This device can analyze microwave absorbing properties in the frequency range of 9 kHz–13.6 GHz with an accuracy of ±0.2 dB. To conduct the measurements, we inserted a coaxial dielectric probe into a beaker containing 50 mL of lubricant sample, as shown in Figure 3. We performed measurements in triplicate at 2.4, 5.8, and 7.4 GHz frequencies under similar conditions for all samples, and the results are reported in Table 3. The electrical measurements were conducted in the quality testing laboratory for wireless terminals at the Khajeh Nasir Toosi University of Technology in Iran.

2.2. Soft Computing Methods

2.2.1. Fundamentals and Theories

Soft computing models have the potential to enable maintenance without the need for expert intervention. This study aimed to develop such models, and several of them are described below. One widely used neural network architecture in academic research is the Multi-Layer Perceptron (MLP) [49,50]. In the MLP, summation functions are used to obtain the output of hidden neurons [51]. The structure and function of MLP will be discussed in more detail below, highlighting its potential as a powerful tool for predicting engine lubricant properties.
$$z_j = \sum_{i=1}^{n_0} W_{ij} E_i + b_j \qquad (2)$$
$$y_j = f\left(z_j\right) = \frac{1}{1 + e^{-z_j}} \qquad (3)$$
$$p_k = \sum_{j=1}^{n_1} W_{jk} y_j + b_k \qquad (4)$$
In the MLP architecture, $z_j$ represents the input of the jth neuron in the hidden layer, while $b_j$ represents the bias of the jth neuron in the hidden layer. $W_{ij}$ represents the weight value between the ith input neuron and the jth neuron in the hidden layer, and $f(z_j)$ represents the activation function. The output of the jth neuron is denoted by $y_j$, while $p_k$ represents the output of the neurons in the kth output. $W_{jk}$ denotes the weight value between the neuron in the jth hidden layer and the neuron in the kth output layer, and $n_1$ represents the number of neurons in the hidden layers. To find its parameters, MLP commonly uses the backpropagation algorithm, which is a widely used technique for training neural networks.
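The forward pass described by Equations (2)-(4) can be written compactly as below; this is a minimal illustrative sketch, and the random weights are placeholders rather than a trained model:

```python
import numpy as np

def mlp_forward(E, W_ih, b_h, W_ho, b_o):
    """Single-hidden-layer MLP forward pass (Equations (2)-(4)).

    E    : (n0,)       input vector (e.g., eps', eps'', tan delta)
    W_ih : (n0, n1)    input-to-hidden weights W_ij
    b_h  : (n1,)       hidden biases b_j
    W_ho : (n1, n_out) hidden-to-output weights W_jk
    b_o  : (n_out,)    output biases b_k
    """
    z = E @ W_ih + b_h              # Eq. (2): weighted sum into hidden neurons
    y = 1.0 / (1.0 + np.exp(-z))    # Eq. (3): sigmoid activation
    p = y @ W_ho + b_o              # Eq. (4): linear output layer
    return p

# Tiny illustrative call with random weights (3 inputs -> 5 hidden -> 7 outputs)
rng = np.random.default_rng(0)
E = rng.normal(size=3)
p = mlp_forward(E, rng.normal(size=(3, 5)), rng.normal(size=5),
                rng.normal(size=(5, 7)), rng.normal(size=7))
print(p.shape)  # (7,) -> one value per predicted element
```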
Another machine learning technique that we employed in this study is the Adaptive Neuro-Fuzzy Inference System (ANFIS), which combines adaptive neural network (ANN) rules with fuzzy logic (FL) theories within an adaptive network framework to establish a logical relationship between inputs and outputs [52,53]. ANFIS is a five-layered structure consisting of the fuzzy layer, product layer, normalized layer, de-fuzzy layer, and total output layer [54] (see Figure 4).
In addition to ANFIS, we also utilized a Radial Basis Function (RBF) network, which is a feed-forward neural network with one hidden layer of RBF units and a linear output layer [55,56]. The output of the RBF network is calculated as follows [57]:
$$p_i = b_i + \sum_{j=1}^{N} W_{ij}\, \exp\!\left(-\frac{\left\lVert E - \mu \right\rVert^{2}}{2\sigma_j^{2}}\right) \qquad (5)$$
where $p_i$ is the output, $b_i$ is the bias term, $N$ is the number of basis functions, $W_{ij}$ is the weight between the hidden and output layers, $E$ is the input data vector, $\mu$ is the center of the RBF unit, and $\sigma_j$ represents the spread of the Gaussian basis function.
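Equation (5) translates directly into code; the sketch below uses arbitrary centers, spreads, and weights purely for illustration and is not the tuned RBF network trained later in the paper:

```python
import numpy as np

def rbf_forward(E, centers, sigmas, W, b):
    """RBF network output (Equation (5)):
    p = b + sum_j W_j * exp(-||E - mu_j||^2 / (2 * sigma_j^2))

    E       : (d,)       input vector
    centers : (N, d)     RBF unit centers mu_j
    sigmas  : (N,)       spreads sigma_j of the Gaussian basis functions
    W       : (N, n_out) hidden-to-output weights
    b       : (n_out,)   output biases
    """
    d2 = np.sum((centers - E) ** 2, axis=1)      # squared distance to each center
    phi = np.exp(-d2 / (2.0 * sigmas ** 2))      # Gaussian activations
    return b + phi @ W

rng = np.random.default_rng(1)
E = rng.normal(size=3)
out = rbf_forward(E, centers=rng.normal(size=(11, 3)), sigmas=np.ones(11),
                  W=rng.normal(size=(11, 7)), b=np.zeros(7))
print(out.shape)  # (7,)
```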
The Support Vector Machine (SVM) algorithm, originally developed by Vladimir Vapnik, has become increasingly popular in recent years for various applications [58]. SVM is a powerful statistical learning method that can estimate the function y(i) with remarkable accuracy, as demonstrated in numerous studies [59,60,61]:
$$f\left(E\right) = W\varphi\left(E\right) + b \qquad (6)$$
where $\varphi(E)$ defines a nonlinear mapping of $E$, $W$ is a weight vector, and $b$ represents the bias factor.
Gaussian Process Regression (GPR) is a nonparametric Bayesian approach to regression that has gained popularity in various scientific fields [62]. One of the key advantages of GPR is its ability to provide reliable responses for input data, making it a valuable tool for predicting engine lubricant properties [63]. GPR assumes that the output can be expressed as follows [64,65]:
$$p_i = f\left(E_i\right) + \varepsilon_i, \qquad \varepsilon_i \sim N\!\left(0,\ \sigma_{noise}^{2}R\right) \qquad (7)$$
where $\sigma_{noise}^{2}$ is the equal noise variance for $E_i$ of all samples.
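As an illustrative sketch only (not the authors' MATLAB implementation), a GPR model with an explicit noise term can be fitted with scikit-learn, where the WhiteKernel plays the role of the noise variance in Equation (7); the data below are synthetic stand-ins:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

# Toy stand-in data: 3 electrical properties -> 1 element concentration
rng = np.random.default_rng(2)
X = rng.uniform(size=(40, 3))                  # eps', eps'', tan delta (scaled)
y = 5.0 * X[:, 0] - 2.0 * X[:, 2] + rng.normal(scale=0.1, size=40)

# WhiteKernel models the additive noise epsilon_i ~ N(0, sigma_noise^2)
kernel = 1.0 * RBF(length_scale=1.0) + WhiteKernel(noise_level=0.1)
gpr = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X, y)

mean, std = gpr.predict(X[:5], return_std=True)  # predictive mean and uncertainty
print(mean.round(2), std.round(3))
```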
In this study, our focus is on the practical application of soft computing approaches to improve the maintenance and monitoring of engine lubrication conditions based on their electrical properties (ε′, ε″, and tan δ). While we will not dive into the theoretical principles of these methods, which can be found elsewhere in the literature, we will compare the performance of several different soft computing models to determine the most efficient and accurate method for predicting engine lubricant properties. Our goal is to provide valuable insights into the use of soft computing approaches for optimizing engine performance and reducing maintenance costs in the lubricant industry.

2.2.2. Application

In this application section, we put our soft computing models to the test and comparatively evaluated their performance to provide an innovative way of monitoring engine lubricant conditions based on their electrical properties (ε′, ε″, and tan δ). The soft computing algorithms were implemented in MATLAB.
The inputs of our models were the electrical properties (ε′, ε″, and tan δ) of lubricants, while the outputs were the elemental spectroscopy (Fe, Pb, Cu, Cr, Al, Si, and Zn) of lubricants. To ensure the accuracy of our models, we trained them using 80% of the total data, while the remaining 20% was used for testing and validation. To avoid overfitting, various solutions have been proposed, such as early stopping methodology, regularization methods, applying ensemble learning, training with more data, and cross-validation [66]. The training and validation datasets were also separated to avoid overfitting, and the optimizing indexes for the selected model were reported to highlight the impressive performance of the soft computing approach. Overfitting can be detected by comparing the performance of the model on the training and validation datasets. If the training performance continues to increase while the validation performance starts to decrease, it is a sign that the model is overfitting the training data and not generalizing well to new data [15,67]. While adjusting model indexes can often be a time-consuming trial-and-error process, our study offers valuable insights into the most effective ways to optimize soft computing models for predicting engine lubricant properties [49].
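A minimal sketch of the 80/20 split and the train-versus-validation overfitting check described above is given below; it uses synthetic data and a scikit-learn MLP as a stand-in for the models trained in MATLAB:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(3)
X = rng.uniform(size=(49, 3))                                           # 49 samples, 3 electrical properties
Y = X @ rng.normal(size=(3, 7)) + rng.normal(scale=0.05, size=(49, 7))  # 7 element concentrations

# 80% of the data for training, 20% held out for testing
X_tr, X_te, Y_tr, Y_te = train_test_split(X, Y, test_size=0.2, random_state=0)

model = MLPRegressor(hidden_layer_sizes=(11,), max_iter=5000, random_state=0).fit(X_tr, Y_tr)

rmse_tr = mean_squared_error(Y_tr, model.predict(X_tr)) ** 0.5
rmse_te = mean_squared_error(Y_te, model.predict(X_te)) ** 0.5
# A much lower training error than test error is the overfitting signal described above
print(f"train RMSE = {rmse_tr:.3f}, test RMSE = {rmse_te:.3f}")
```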

2.3. Performance Criteria

To determine the most accurate and reliable soft computing model for predicting engine lubricant properties, we employed rigorous performance criteria. These criteria are essential for evaluating the predictive accuracy of our models, helping us to improve and choose the optimal approach for our study. We used three key metrics as performance criteria: the Mean Absolute Percentage Error (MAPE) (Equation (8)), Root Mean Square Error (RMSE) (Equation (9)), and Efficiency (EF) (Equation (10)). These metrics are widely used in the field and have been shown to be effective in assessing and comparing the accuracy of different soft computing models [68,69,70,71,72,73]. MAPE measures the average percentage deviation of the predicted values from the actual values. A lower MAPE value indicates a better predictive accuracy of the model. For example, a MAPE of 10% means that the model’s predictions are off by an average of 10% from the actual values [74]. RMSE calculates the square root of the average of the squared differences between the predicted values and the actual values. A lower RMSE value indicates a better predictive accuracy of the model. RMSE is more sensitive to large errors than to small ones, penalizing them more heavily [75]. Efficiency is a measure of how well a soft computing model performs compared to a reference model. It is calculated as the ratio of the performance of the soft computing model to the performance of the reference model. A higher efficiency value indicates a better performance of the soft computing model compared to the reference model [76]. By employing these performance criteria, we were able to identify the most accurate and reliable soft computing model for predicting engine lubricant properties, providing valuable insights into how to optimize machine learning models for practical applications.
$$\mathrm{RMSE} = \sqrt{\frac{\sum_{i=1}^{n}\left(y_{p_i} - y_{e_i}\right)^{2}}{n}}$$
$$\mathrm{MAPE} = \frac{1}{n}\sum_{i=1}^{n}\left|\frac{y_{p_i} - y_{e_i}}{y_{p_i}}\right|$$
$$\mathrm{EF} = \frac{\sum_{i=1}^{n}\left(y_{e_i} - \bar{y}_{e}\right)^{2} - \sum_{i=1}^{n}\left(y_{p_i} - \bar{y}_{p}\right)^{2}}{\sum_{i=1}^{n}\left(y_{e_i} - \bar{y}_{e}\right)^{2}}$$
In our study, the desired (actual) output for the ith pattern is represented as $y_{e_i}$, while the predicted (fitted) output produced by the network for the same pattern is represented as $y_{p_i}$. The $n$ in the equations represents the total number of lubricant samples used in our study, and $\bar{y}_e$ and $\bar{y}_p$ represent the averages of the desired (actual) and predicted outputs, respectively.
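The three criteria can be implemented directly from the equations above; the functions and sample values below are an illustrative sketch rather than the authors' code:

```python
import numpy as np

def rmse(y_pred, y_true):
    """Root Mean Square Error, as in the RMSE equation above."""
    y_pred, y_true = np.asarray(y_pred, float), np.asarray(y_true, float)
    return np.sqrt(np.mean((y_pred - y_true) ** 2))

def mape(y_pred, y_true):
    """Mean Absolute Percentage Error, as in the MAPE equation above
    (undefined when a denominator term is zero)."""
    y_pred, y_true = np.asarray(y_pred, float), np.asarray(y_true, float)
    return np.mean(np.abs((y_pred - y_true) / y_pred))

def ef(y_pred, y_true):
    """Model efficiency, EF = (SS_actual - SS_predicted) / SS_actual."""
    y_pred, y_true = np.asarray(y_pred, float), np.asarray(y_true, float)
    ss_e = np.sum((y_true - y_true.mean()) ** 2)
    ss_p = np.sum((y_pred - y_pred.mean()) ** 2)
    return (ss_e - ss_p) / ss_e

y_true = np.array([10.0, 12.0, 9.0, 11.0])   # hypothetical actual concentrations
y_pred = np.array([10.5, 11.5, 9.2, 11.3])   # hypothetical model predictions
print(rmse(y_pred, y_true), mape(y_pred, y_true), ef(y_pred, y_true))
```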

3. Results and Discussion

3.1. Preliminary Statistical Analysis

Before diving into the development and evaluation of our soft computing models, we conducted a preliminary statistical analysis using correlation analysis. This powerful statistical measure helps determine the linear relationship between two or more variables, shedding light on how they change relative to each other. The results of our correlation analysis played a critical role in selecting the most appropriate modeling method for predicting engine lubricant properties. For example, if the correlation indicated weak relations between variables, multiple linear regression and similar statistical models may not be suitable. This highlights the complex and non-linear nature of predicting engine lubricant properties. On the other hand, a strong correlation close to one indicates a robust connection between the two variables, providing valuable insights into the relationship between engine lubricant properties and their electrical properties (ε′, ε″, and tan δ). Furthermore, the sign of the correlation indicates whether the relationship between the variables is direct or inverse, further informing our modeling decisions.
The negative correlations observed between Fe and its respective electrical properties (ε′, ε″, and tan δ) suggest an inverse relationship between Fe levels in the engine lubricants and their electrical properties. This indicates that as the levels of Fe increase, the electrical properties of the lubricant decrease. The strength of these correlations is relatively weak, ranging from −0.13 to −0.22, indicating that the relationship between Fe and electrical properties is not very strong. In contrast, the positive correlations observed between Pb, Cu, and Cr and their respective electrical properties (ε′, ε″, and tan δ) suggest a direct relationship between these elements and the electrical properties of the engine lubricants. This indicates that as the levels of Pb, Cu, and Cr increase, the electrical properties of the lubricant also increase. The strength of these correlations is moderate to strong, ranging from 0.41 to 0.53, indicating a relatively strong positive relationship between these variables. The negative correlations observed between Al and Si and their respective electrical properties (ε′, ε″, and tan δ) suggest an inverse relationship between these elements and the electrical properties of the engine lubricants. This indicates that as the levels of Al and Si increase, the electrical properties of the lubricant decrease. The strength of these correlations ranges from −0.23 to −0.56, indicating a moderate to strong negative relationship between these variables. Finally, the strongest correlations were observed between Zn and its respective electrical properties (ε′, ε″, and tan δ), with all three correlations being strongly negative. This suggests that Zn levels have a strong inverse relationship with the electrical properties of the engine lubricants. The strongest correlation was found between Zn and tan δ, with a negative correlation coefficient of −0.79. These findings are elucidated in Table 4, providing a comprehensive overview of the correlations observed in our study.
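For illustration, coefficients of the kind reported in Table 4 can be obtained with a Pearson correlation over the dataset columns; the column names and random values in the sketch below are hypothetical stand-ins for the measured data:

```python
import numpy as np
import pandas as pd

# Hypothetical frame mirroring the dataset layout: element concentrations plus
# the three electrical properties measured at one frequency.
rng = np.random.default_rng(4)
df = pd.DataFrame({
    "Fe": rng.uniform(5, 60, 49), "Zn": rng.uniform(100, 900, 49),
    "eps_real": rng.uniform(2.0, 2.6, 49),
    "eps_imag": rng.uniform(0.01, 0.20, 49),
    "tan_delta": rng.uniform(0.005, 0.10, 49),
})

# Pearson correlation of each element with each electrical property (cf. Table 4)
corr = df.corr(method="pearson").loc[["Fe", "Zn"], ["eps_real", "eps_imag", "tan_delta"]]
print(corr.round(2))
```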

3.2. Performance Evaluation of Models

To evaluate the performance of our soft computing models for predicting engine lubricant properties, we conducted a rigorous statistical analysis, the results of which are shown in Table 5. By comparing the values of RMSE and MAPE, we could make informed judgments about the accuracy and reliability of each model. For Fe, RBF and ANFIS models perform consistently better than the other models at all frequencies, with the lowest RMSE and MAPE values. MLP and GPR models perform poorly for Fe at all frequencies, with the highest RMSE and MAPE values. For Cu, RBF and ANFIS models perform relatively better than the other models at all frequencies, with lower RMSE and MAPE values. SVM and GPR models perform poorly for Cu at all frequencies, with the highest RMSE and MAPE values. For Si, RBF and ANFIS models perform better than the other models at all frequencies, with the lowest RMSE and MAPE values. MLP and SVM models perform poorly for Si at all frequencies, with the highest RMSE and MAPE values. For Zn, RBF and ANFIS models perform better than the other models at 2.40 GHz and 7.40 GHz but not at 5.80 GHz where they perform poorly. MLP, SVM, and GPR models perform poorly for Zn at all frequencies, with the highest RMSE and MAPE values. Although MAPE could not be calculated for Pb, Cr, and Al due to zero actual values, the RMSE still shows that RBF and ANFIS made the smallest errors in concentration predictions for these elements. Overall, the frequency appears to have an influence on the predictive performance of the models. Higher frequencies tend to produce more accurate predictions, though the effect varies between elements.
According to Table 5, our analysis revealed that the RBF model at 7.40 GHz offered the best performance for all variables, with the smallest values of MAPE and RMSE. For example, for Si in RBF at 7.40 GHz, we achieved an impressive RMSE of 0.4 and MAPE of 0.7, highlighting the outstanding accuracy of our approach.
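A minimal sketch of such a model-by-model comparison, using scikit-learn stand-ins for the MLP, SVM, and GPR models on synthetic data (the study itself used MATLAB and the measured dataset), might look like this:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVR
from sklearn.neural_network import MLPRegressor
from sklearn.gaussian_process import GaussianProcessRegressor

rng = np.random.default_rng(5)
X = rng.uniform(size=(49, 3))                                     # eps', eps'', tan delta
y = 20 * X[:, 0] + 8 * X[:, 1] + 5 + rng.normal(scale=0.5, size=49)  # one element, e.g., Si

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

models = {
    "MLP": MLPRegressor(hidden_layer_sizes=(11,), max_iter=5000, random_state=0),
    "SVM": SVR(kernel="rbf", C=10.0),
    "GPR": GaussianProcessRegressor(normalize_y=True),
}
for name, model in models.items():
    pred = model.fit(X_tr, y_tr).predict(X_te)
    rmse = np.sqrt(np.mean((pred - y_te) ** 2))        # RMSE as defined above
    mape = np.mean(np.abs((pred - y_te) / pred))        # MAPE as defined above
    print(f"{name}: RMSE = {rmse:.2f}, MAPE = {mape:.2f}")
```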

3.2.1. Adjusting RBF Parameters

After conducting a thorough analysis of our soft computing models for predicting engine lubricant properties, we found that the RBF model offered the best performance. In this section, we focus on optimizing the RBF model parameters to further improve its accuracy and reliability.
Figure 5 illustrates the RBF efficiency (EF) for different hidden sizes and various elemental spectroscopy (Fe, Pb, Cu, Cr, Al, Si, and Zn) at both the train and test steps. The network’s performance improved with an increase in the hidden size, and a hidden size of 11 was found to be the minimum size that provided efficient results for all evaluated elements. The study found that increasing the hidden size to 13 and above further improved the network’s performance and achieved an EF of 1.00 in both the train and test steps. The study also found that the RBF network achieved high EF values for all evaluated elements, including Pb, Fe, Cr, Cu, Si, Al, and Zn. The EF values ranged from 0.82 to 1.00 in the test step, indicating the network’s effectiveness in forecasting the evaluated elements. The findings of this study can be valuable for researchers and practitioners working in the field of material analysis and forecasting.
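The hidden-size sweep can be illustrated with a simple Gaussian RBF network built from k-means centers and least-squares output weights; this is a hypothetical stand-in for the MATLAB RBF implementation used in the study, run on synthetic data:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.model_selection import train_test_split

def fit_rbf(X, y, n_hidden, sigma=0.5):
    """Simple Gaussian RBF network: k-means centers + least-squares output weights."""
    centers = KMeans(n_clusters=n_hidden, n_init=10, random_state=0).fit(X).cluster_centers_
    Phi = np.exp(-((X[:, None, :] - centers) ** 2).sum(-1) / (2 * sigma ** 2))
    Phi = np.hstack([Phi, np.ones((len(X), 1))])          # append bias column
    w, *_ = np.linalg.lstsq(Phi, y, rcond=None)
    return centers, w

def predict_rbf(X, centers, w, sigma=0.5):
    Phi = np.exp(-((X[:, None, :] - centers) ** 2).sum(-1) / (2 * sigma ** 2))
    Phi = np.hstack([Phi, np.ones((len(X), 1))])
    return Phi @ w

rng = np.random.default_rng(6)
X = rng.uniform(size=(49, 3))
y = np.sin(3 * X[:, 0]) + 0.5 * X[:, 2] + rng.normal(scale=0.05, size=49)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

for k in (5, 7, 9, 11, 13):                               # candidate hidden sizes
    c, w = fit_rbf(X_tr, y_tr, n_hidden=k)
    p = predict_rbf(X_te, c, w)
    ss_e = np.sum((y_te - y_te.mean()) ** 2)
    ef = (ss_e - np.sum((p - p.mean()) ** 2)) / ss_e      # EF as defined above
    print(f"hidden size {k:2d}: test EF = {ef:.3f}")
```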
In Figure 6, we explored the efficiency of the RBF model when employing different training algorithms for various elemental spectroscopy (Fe, Pb, Cu, Cr, Al, Si, and Zn) at both the train and test steps. The outcomes of this study revealed that the RBF model’s efficiency and performance were affected by the choice of the training algorithm. However, the impact of training algorithm selection is non-uniform across all elements, indicating that certain elements are more responsive to the algorithm than others. For instance, while all training algorithms except for the “Traingdm” algorithm exhibited commendable performance in predicting Fe, only the “Trainlm” and “Trainbr” algorithms demonstrated good performance in predicting Al. Consequently, the effect of algorithm selection on model performance was more pronounced in the latter case. Notably, both “Trainlm” and “Trainbr” algorithms exhibited excellent performance in predicting all elements; of course, the “Trainbfg” algorithm also demonstrated good performance in predicting Fe. Overall, the “Trainlm” algorithm provides the highest RBF efficiency in both the training and testing stages for predicting all elemental spectra. This finding significantly enhances the model’s accuracy and reliability.
Finally, in Figure 7, we plotted the coefficient of determination (R2) between the actual and predicted values of the spectral analysis indices of the lubricant. These results were reported separately for the testing and training stages.
In Figure 7, the R2 values for all elements in both the training and testing stages were 0.99, indicating a strong correlation between the actual and predicted values of the elemental spectroscopy. The slope (0.99) of the regression line for each element was also very close to one, indicating that the predicted values were in good agreement with the actual values. Regarding the intercept, it varied for different elements in both the training and testing stages. For instance, in predicting Fe, the intercept was 0.05 in the training stage and 0.01 in the testing stage. Similarly, the intercept for predicting Pb was 0.02 in the training stage and 0.01 in the testing stage. In contrast, the intercept for predicting Cu was 0.30 in the training stage and 0.01 in the testing stage. Also, the intercept for predicting Si was 0.11 in the training stage and 0.01 in the testing stage. However, for predicting Cr, Al, and Zn, the intercepts were the same in both the training and testing stages and were equal to 0.01. Overall, the R2 values were high for all elements, indicating a strong correlation between the actual and predicted values. Moreover, the intercepts were relatively small, indicating that the predicted values were close to the actual values. However, the intercepts varied for different elements, indicating that the models’ performance varied in predicting different elements.
By optimizing the parameters of our RBF model, we could further improve the accuracy and reliability of our soft computing approach, providing valuable insights into the most effective ways to predict engine lubricant properties based on their electrical properties (ε′, ε″, tan δ).

3.2.2. Sensitivity Analysis

To evaluate the performance of our soft computing models for predicting engine lubricant properties, we conducted a rigorous analysis in Section 3.2. Our findings revealed that the Radial Basis Function (RBF) model offered the most accurate and reliable predictions. Subsequently, in Section 3.2.1, we set the RBF parameters and conducted a sensitivity analysis to gain a comprehensive understanding of the model’s inputs. Figure 8 presents the findings of this analysis, enabling us to prioritize the model’s inputs and identify the most influential parameter. The results indicated that the choice of input variables significantly affects the model’s predictive performance, as evidenced by the differences in the root mean square error (RMSE) across different input combinations. For predicting Fe, the input combination of (ε′, ε″, tan δ) was identified as the most influential input variable, resulting in the lowest RMSE value of 0.9. The input combinations of (ε′, tan δ), (ε′, ε″), and (ε″, tan δ) resulted in higher RMSE values of approximately 4.0, 10.5, and 11.0, respectively. For predicting Pb, the input combination of (ε′, ε″, tan δ) resulted in the lowest RMSE value of 0.3, followed by (ε′, ε″) with an RMSE of approximately 0.8. The input combination of (ε″, tan δ) resulted in the highest RMSE of approximately 3.7. For predicting Cu, the input combination of (ε′, ε″, tan δ) resulted in the lowest RMSE value of 2.2, while the input combination of (ε″, tan δ) resulted in the highest RMSE of approximately 12.0. For predicting Cr, the input combination of (ε′, ε″, tan δ) resulted in the lowest RMSE value of 0.2, while the input combination of (ε″, tan δ) resulted in the highest RMSE of approximately 5.7. For predicting Al, the input combination of (ε′, ε″, tan δ) resulted in the lowest RMSE value of 0.1, while the input combination of (ε″, tan δ) resulted in the highest RMSE of approximately 0.57. For predicting Si, the input combination of (ε′, ε″, tan δ) resulted in the lowest RMSE value of 0.5, while the input combination of (ε″, tan δ) resulted in the highest RMSE of approximately 1.4. For predicting Zn, the input combination of (ε′, ε″, tan δ) resulted in the lowest RMSE value of 1.0, while the input combination of (ε″, tan δ) resulted in the highest RMSE of approximately 14.0.
Overall, the present study underscores the importance of selecting appropriate input variables to achieve accurate predictions in elemental spectroscopy using the RBF model. By prioritizing the most influential input parameters, we could further optimize our soft computing models for predicting engine lubricant properties, ultimately improving the accuracy and reliability of our findings. Therefore, we do not recommend removing any of the inputs, as doing so increases the RMSE and degrades prediction accuracy. Our sensitivity analysis revealed that the removal of ε′ significantly increased the Root Mean Square Error (RMSE) for all elements, highlighting the critical importance of this input parameter for predicting engine lubricant properties based on their electrical properties. By carefully considering the impact of different input parameters on the performance of our soft computing models, the input variables (ε′, ε″, tan δ) were found to be the most influential input combination for most elements, leading to the lowest RMSE values. These findings provide valuable insights into optimizing the RBF model’s performance and improving its reliability in predicting elemental spectroscopy.
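The input-combination analysis can be sketched as a feature-ablation loop over every subset of the three electrical properties; the regressor below (a scikit-learn MLP) is only a stand-in for the tuned RBF model, and the data are synthetic:

```python
import numpy as np
from itertools import combinations
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(7)
features = ["eps_real", "eps_imag", "tan_delta"]
X = rng.uniform(size=(49, 3))
y = 15 * X[:, 0] - 5 * X[:, 1] + 3 * X[:, 2] + rng.normal(scale=0.3, size=49)  # e.g., Fe

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

# Evaluate every subset of at least two inputs, mirroring the combinations in Figure 8
for r in (3, 2):
    for idx in combinations(range(3), r):
        cols = list(idx)
        model = MLPRegressor(hidden_layer_sizes=(11,), max_iter=5000, random_state=0)
        pred = model.fit(X_tr[:, cols], y_tr).predict(X_te[:, cols])
        rmse = np.sqrt(np.mean((pred - y_te) ** 2))
        print(f"{[features[i] for i in idx]}: RMSE = {rmse:.2f}")
```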

4. Conclusions

This study aimed to evaluate the performance of soft computing models in predicting engine lubricant elemental spectroscopy based on their electrical properties. A dataset of 49 lubricant samples was used to train and test various soft computing models (RBF, MLP, ANFIS, GPR, and SVM), and their performance was assessed using error metrics (RMSE, MAPE, and EF). The preliminary statistical analysis using correlation analysis revealed various relationships between elemental spectroscopy and electrical properties. Fe showed a weak inverse relationship, while Pb, Cu, and Cr had a direct relationship with electrical properties. Al and Si also displayed an inverse relationship, and Zn showed a strong inverse relationship with electrical properties (Table 4 depicts the correlation coefficients). The performance evaluation of the soft computing models showed that the RBF and ANFIS models consistently outperformed the other models for Fe, Cu, Si, and Zn at different frequencies. MLP, SVM, and GPR models had a poorer performance for these elements. For Pb, Cr, and Al, where MAPE could not be calculated, the RMSE values indicated that RBF and ANFIS models made smaller errors in predicting the concentrations of these elements. The frequency factor also influenced the predictive performance of the models, with higher frequencies generally leading to more accurate predictions for certain elements (Table 5 reports RMSE and MAPE). The results demonstrated that the Radial Basis Function (RBF) model consistently outperformed the other models, achieving the most accurate predictions, especially at the highest frequency of 7.4 GHz. Our findings demonstrate that the RBF model can accurately predict engine lubricant properties, including Fe, Pb, Cu, Cr, Al, Si, and Zn, with high precision. Specifically, the RMSE values obtained for these elements were 0.9, 0.3, 2.2, 0.2, 0.1, 0.4, and 1.0, respectively. These low RMSE values indicate that the RBF model can effectively capture the complex relationships between the electrical properties and elemental concentrations in engine lubricants. This indicates a strong correlative and predictive relationship between lubricant electrical properties (ε′, ε″, and tan δ) and elemental spectroscopy. Tuning the RBF parameters, including increasing the hidden size and optimizing the training algorithm, further improved its performance. This shows that fine-tuning machine learning models can improve their accuracy and reliability for practical applications. Sensitivity analyses were carried out to determine the most influential input variables, which revealed that the combination of ε′, ε″, and tan δ yielded the lowest Root Mean Square Error (RMSE) values for all elements. Notably, the removal of ε′ led to a significant increase in RMSE, emphasizing its crucial role in predicting engine lubricant properties. These findings suggest that lubricant monitoring programs based on measurements of electrical properties, using machine learning models such as RBF, may effectively detect machine faults early through the prediction of elemental spectroscopy changes. Despite the study's limitations, the approach demonstrated in the current study could provide insights for developing effective condition-based maintenance strategies utilizing real-time lubricant analysis. Such systems could detect abnormal changes early and project remaining lubricant life.
These results help advance understanding of the correlation between lubricant electrical properties and elemental spectroscopy and highlight the potential of soft computing methods for real-world lubricant monitoring applications. However, further research is needed to test the approach on a wider range of lubricant types and operating conditions and to explore different machine-learning models and optimization techniques. Collaborating with institutions from other countries presents valuable opportunities to access diverse datasets, different measurement techniques, and a wider range of samples. To enhance our modeling further, we propose exploring ensemble methods such as random forests or gradient boosting, which have been known to improve performance and generalization. In addition, to capture temporal dependencies in engine lubricant properties, we recommend utilizing deep learning models such as recurrent neural networks (RNNs). Optimization techniques such as hyperparameter tuning through grid search, randomized search, or Bayesian optimization, as well as feature selection methods, can also be employed to improve model performance. Nonetheless, the current study provides a solid foundation and starting point for future work in this area. Also, this work provides a practical framework for accurately predicting lubricant conditions based on electrical measurements.

Author Contributions

Conceptualization, M.-R.P.; methodology, A.R. and M.-R.P.; software, A.R. and M.-R.P.; validation, A.R. and M.H.A.-F.; formal analysis, A.R.; investigation, M.-R.P.; resources, M.-R.P.; data curation, A.R.; writing—original draft preparation, M.-R.P.; writing—review and editing, A.R. and M.H.A.-F.; visualization, M.-R.P.; supervision, A.R.; project administration, A.R. and M.H.A.-F.; funding acquisition, A.R. All authors have read and agreed to the published version of the manuscript.

Funding

This research was made possible by the financial support from Ferdowsi University of Mashhad under grant number 59252.

Data Availability Statement

Data available on request from the authors.

Acknowledgments

We would like to express our sincere gratitude to the Ferdowsi University of Mashhad for providing funding for this project (Grant number 59252). Without their support, this research would not have been possible. We would also like to acknowledge the Tirage Company for providing access to their maintenance database, which was essential to the successful completion of this study. Their support and collaboration have been invaluable, and we are grateful for the opportunity to work with such an outstanding organization. Finally, we would like to thank all of the individuals who contributed to this project, whether through their expertise, guidance, or encouragement. Your contributions have been instrumental in helping us achieve our research goals, and we are deeply appreciative of your efforts.

Conflicts of Interest

The authors declare that there is no conflict of interest.

Appendix A

Table A1. Definition of symbols in equations.

| Symbol | Definition | Equation No. |
| --- | --- | --- |
| $x$ | the principal value of the index vector | (1) |
| $x_n$ | the normalized value of the index vector | (1) |
| $x_{max}$, $x_{min}$ | the maximum and minimum values of the index | (1) |
| $z_j$ | the input of the jth neuron in the hidden layer | (2) |
| $b_j$ | the bias of the jth neuron in the hidden layer | (2) |
| $W_{ij}$ | the weight value between the ith input neuron and the jth neuron in the hidden layer | (2) |
| $f(z_j)$ | the activation function | (3) |
| $y_j$ | the output of the jth neuron | (3), (4) |
| $p_k$ | the output of the neurons in the kth output | (4) |
| $W_{jk}$ | the weight value between the neuron in the jth hidden layer and the neuron in the kth output layer | (4) |
| $n_1$ | the number of neurons in the hidden layers | (4) |
| $p_i$ | the output | (5) |
| $b_i$ | the bias term | (5) |
| $N$ | the number of basis functions | (5) |
| $W_{ij}$ | the weight between the hidden and output layers | (5) |
| $E$ | the input data vector | (5) |
| $\mu$ | the center of the RBF unit | (5) |
| $\sigma$ | the spread of the Gaussian basis function | (5) |
| $\varphi(E)$ | a nonlinear mapping of $E$ | (6) |
| $W$ | a weight vector | (6) |
| $b$ | the bias factor | (6) |
| $\sigma_{noise}^{2}$ | the equal noise variance for $E_i$ of all samples | (7) |
| $y_{e_i}$ | the desired (actual) output for the ith pattern $E_i$ | (8)–(10) |
| $y_{p_i}$ | the predicted (fitted) output produced by the network for the ith pattern | (8)–(10) |
| $n$ | the number of lubricant samples | (8)–(10) |
| $\bar{y}_e$ and $\bar{y}_p$ | the averages of the desired (actual) and predicted outputs | (10) |

References

  1. Molęda, M.; Małysiak-Mrozek, B.; Ding, W.; Sunderam, V.; Mrozek, D. From Corrective to Predictive Maintenance—A Review of Maintenance Approaches for the Power Industry. Sensors 2023, 23, 5970. [Google Scholar] [CrossRef] [PubMed]
  2. Vališ, D.; Gajewski, J.; Žák, L. Potential for using the ANN-FIS meta-model approach to assess levels of particulate contamination in oil used in mechanical systems. Tribol. Int. 2019, 135, 324–334. [Google Scholar] [CrossRef]
  3. Upadhyay, R. Microscopic technique to determine various wear modes of used engine oil. J. Microsc. Ultrastruct. 2013, 1, 111–114. [Google Scholar] [CrossRef]
  4. Yan, R.; Gao, R.X. Complexity as a measure for machine health evaluation. IEEE Trans. Instrum. Meas. 2004, 53, 1327–1334. [Google Scholar] [CrossRef]
  5. Qiang, L.; Xiaowei, L.; Lili, C.; Jiang, W. Research on a Method for the Determination of Iron in Lubricating Oil. In Proceedings of the 1st International Conference on Mechanical Engineering and Material Science (MEMS 2012), Shanghai, China, 28–30 December 2012; pp. 180–182. [Google Scholar]
  6. Avci, O.; Abdeljaber, O.; Kiranyaz, S.; Hussein, M.; Gabbouj, M.; Inman, D.J. A review of vibration-based damage detection in civil structures: From traditional methods to Machine Learning and Deep Learning applications. Mech. Syst. Signal Process. 2021, 147, 107077. [Google Scholar] [CrossRef]
  7. Zadhoush, M.; Nadooshan, A.A.; Afrand, M. Constructal optimization of longitudinal and latitudinal rectangular fins used for cooling a plate under free convection by the intersection of asymptotes method. Int. J. Heat Mass Transf. 2017, 112, 441–453. [Google Scholar] [CrossRef]
  8. Kumbár, V.; Dostál, P. Oils degradation in agricultural machinery. Acta Univ. Agric. Silvic. Mendel. Brun. 2013, 61, 1297–1303. [Google Scholar] [CrossRef]
  9. Lazakis, I.; Raptodimos, Y.; Varelas, T. Predicting ship machinery system condition through analytical reliability tools and artificial neural networks. Ocean Eng. 2018, 152, 404–415. [Google Scholar] [CrossRef]
  10. Alemayehu, B.; Kota, A.; Neidhard-Doll, A.; Chodavarapu, V.; Subramanyam, G. Cloud-connected real-time oil condition monitoring of utility transformers using impedance spectroscopy. Instrum. Sci. Technol. 2021, 49, 509–520. [Google Scholar] [CrossRef]
  11. Polo-Mendoza, R.; Navarro-Donado, T.; Ortega-Martinez, D.; Turbay, E.; Martinez-Arguelles, G.; Peñabaena-Niebles, R. Properties and characterization techniques of graphene modified asphalt binders. Nanomaterials 2023, 13, 955. [Google Scholar] [CrossRef]
  12. Bongfa, B.; Syahrullail, S.; Hamid, M.A.; Samin, P.; Atuci, B.; Ajibili, H. Maximizing the life of lubricating oils for resources and environmental sustainability through quality monitoring in service. Indian J. Sci. Technol. 2016, 9, 1–10. [Google Scholar] [CrossRef]
  13. Willet, R.; Reichard, K. Vibration and airborne acoustic data fusion for diesel engine health management. J. Acoust. Soc. Am. 2023, 153, A38. [Google Scholar] [CrossRef]
  14. Altıntaş, O.; Aksoy, M.; Ünal, E.; Akgöl, O.; Karaaslan, M. Artificial neural network approach for locomotive maintenance by monitoring dielectric properties of engine lubricant. Measurement 2019, 145, 678–686. [Google Scholar] [CrossRef]
  15. Pourramezan, M.-R.; Rohani, A.; Keramat Siavash, N.; Zarein, M. Evaluation of lubricant condition and engine health based on soft computing methods. Neural Comput. Appl. 2022, 34, 5465–5477. [Google Scholar] [CrossRef]
  16. Hammett, J. Utilising the latest findings on low speed 2-stroke diesel engine oil stress from field & laboratory engine testing. Mar. Eng. 2014, 49, 293–300. [Google Scholar]
  17. Craft, J. Development of Interdigitated Electrode Sensors for Monitoring the Dielectric Properties of Lubricant Oils; Auburn University: Auburn, AL, USA, 2010. [Google Scholar]
  18. Raadnui, S.; Kleesuwan, S. Low-cost condition monitoring sensor for used oil analysis. Wear 2005, 259, 1502–1506. [Google Scholar] [CrossRef]
  19. Senthilkumar, D.K.; Aakash, S.; Kishan, R.; Nithin, N.; Vijay, T. Lubricating Oil Condition Monitoring System in Dashboard. Int. Res. J. Eng. Technol. 2020, 7, 2400–2407. [Google Scholar]
  20. Li, X.; Yan, E. Application of Information Fusion Technology in Lubricating Oil Quality Monitoring. In Proceedings of the 2022 4th International Conference on Robotics, Intelligent Control and Artificial Intelligence, Dongguan, China, 16–18 December 2022. [Google Scholar]
  21. Schmitigal, J.; Moyer, S. Evaluation of Sensors for On-Board Diesel Oil Condition Monitoring of US Army Ground Equipment; Tacom Research Development and Engineering Center: Warren, MI, USA, 2005. [Google Scholar]
  22. Tanwar, M.; Raghavan, N. Lubricating Oil Remaining Useful Life Prediction Using Multi-Output Gaussian Process Regression. IEEE Access 2020, 8, 128897–128907. [Google Scholar] [CrossRef]
  23. Zhou, Y.; Liang, Y.; Li, R. Research on intelligent diagnosis technology of emergency diesel generator in nuclear power plant based on new monitoring method. IOP Conf. Ser. Mater. Sci. Eng. 2019, 677, 032074. [Google Scholar] [CrossRef]
  24. Grimmig, R.; Lindner, S.; Gillemot, P.; Winkler, M.; Witzleben, S. Analyses of used engine oils via atomic spectroscopy–Influence of sample pre-treatment and machine learning for engine type classification and lifetime assessment. Talanta 2021, 232, 122431. [Google Scholar] [CrossRef]
  25. Macin, V.; Tormos, B.; Sala, A.; Ramirez, J. Fuzzy logic-based expert system for diesel engine oil analysis diagnosis. Insight-Non-Destr. Test. Cond. Monit. 2006, 48, 462–469. [Google Scholar] [CrossRef]
  26. Bai, H.; Zhan, X.; Yan, H.; Wen, L.; Jia, X. Combination of Optimized Variational Mode Decomposition and Deep Transfer Learning: A Better Fault Diagnosis Approach for Diesel Engines. Electronics 2022, 11, 1969. [Google Scholar] [CrossRef]
  27. Zhang, D.; Tong, P.; Zhu, W.; Zheng, J. Research on Reciprocating diesel engines fault diagnosis based on adaptive genetic algorithm optimization. In Proceedings of the 2022 IEEE 6th Advanced Information Technology, Electronic and Automation Control Conference (IAEAC), Beijing, China, 3–5 October 2022; pp. 872–876. [Google Scholar]
  28. Aghilinategh, N.; Nankali, S.; Babaei, M. Applying capacitance/inductance measurements for characterizing oil debris and pH. Indian J. Sci. Technol. 2016, 9, 28. [Google Scholar] [CrossRef]
  29. Król, A.; Gocman, K.; Giemza, B. Neural networks as a tool to characterise oil state after porous bearings prolonged tests. Mater. Sci. 2015, 21, 466–472. [Google Scholar] [CrossRef]
  30. Li, L.; Chang, W.; Zhou, S.; Xiao, Y. An identification and prediction model of wear-out fault based on oil monitoring data using PSO-SVM method. In Proceedings of the 2017 Annual Reliability and Maintainability Symposium (RAMS), Orlando, FL, USA, 23–26 January 2017; pp. 1–6. [Google Scholar]
31. Rodrigues, J.; Costa, I.; Farinha, J.T.; Mendes, M.; Margalho, L. Predicting motor oil condition using artificial neural networks and principal component analysis. Eksploat. Niezawodn. 2020, 22, 440–448. [Google Scholar] [CrossRef]
  32. More, S.M.; Kakati, J.P.; Pal, S.; Saha, U.K. Implementation of Soft Computing Techniques in Predicting and Optimizing the Operating Parameters of Compression Ignition Diesel Engines: State-of-the-Art Review, Challenges, and Future Outlook. J. Comput. Inf. Sci. Eng. 2022, 22, 050801. [Google Scholar] [CrossRef]
  33. Ghosh, S.; Sarkar, B.; Sanyal, S.; Saha, J. Automated Maintenance Approach for Industrial Machineries by Soft Computing Techniques at Offline Monitoring Process. JJMIE 2009, 3, 1. [Google Scholar]
  34. More, S.M.; Kakati, J.; Saha, U.K. Evaluating the Operating Parameters of a Compression Ignition Engine Fueled With Waste Cooking Oil—Diesel Blends Using Artificial Neural Network and Ensemble Methods. J. Eng. Gas Turbines Power 2023, 145, 071010. [Google Scholar] [CrossRef]
  35. Li, Z.; Fei, F.; Zhang, G. Edge-to-Cloud IIoT for Condition Monitoring in Manufacturing Systems with Ubiquitous Smart Sensors. Sensors 2022, 22, 5901. [Google Scholar] [CrossRef]
36. Kumar, K.V.; Kumar, S.S.; Praveena, B.J.; John, J.P.; Paul, J.E. Soft computing based fault diagnosis. In Proceedings of the 2010 Second International Conference on Computing, Communication and Networking Technologies, Karur, India, 29–31 July 2010; pp. 1–7. [Google Scholar]
  37. Barsoukov, E.; Macdonald, J.R. Impedance Spectroscopy Theory, Experiment, and Applications, 2nd ed.; John Wiley & Sons, Inc.: Hoboken, NJ, USA, 2005. [Google Scholar]
  38. Sapotta, B.; Schwotzer, M.; Wöll, C.; Franzreb, M. On the Integration of Dielectrometry into Electrochemical Impedance Spectroscopy to Obtain Characteristic Properties of a Dielectric Thin Film. Electroanalysis 2022, 34, 512–522. [Google Scholar] [CrossRef]
  39. Xu, X. Enhancements in Dielectric Response Characterization of Insulation Materials; Chalmers Tekniska Hogskola: Gothenburg, Sweden, 2013. [Google Scholar]
  40. Yoon, E.J.; Stelson, A.C.; Orloff, N.D.; Long, C.J.; Booth, J.C.; Meng, E.F. The Effect of Annealing Thin Film Parylene C-Platinum Interfaces Characterized by Broadband Dielectric Spectroscopy. In Proceedings of the 2021 21st International Conference on Solid-State Sensors, Actuators and Microsystems (Transducers), Orlando, FL, USA, 20–24 June 2021; pp. 884–887. [Google Scholar]
  41. Wróblewski, P. The theory of the surface wettability angle in the formation of an oil film in internal combustion piston engines. Materials 2023, 16, 4092. [Google Scholar] [CrossRef]
  42. Wróblewski, P. Investigation of energy losses of the internal combustion engine taking into account the correlation of the hydrophobic and hydrophilic. Energy 2023, 264, 126002. [Google Scholar] [CrossRef]
  43. Fayaz, S.D.; Wani, M. Insights into the tribological behavior of IF-WS2 nanoparticle reinforced mild extreme pressure lubrication for coated chromium/bulk grey cast iron interface. Proc. Inst. Mech. Eng. Part J J. Eng. Tribol. 2021, 235, 1478–1494. [Google Scholar] [CrossRef]
  44. Rajabi-Vandechali, M.; Abbaspour-Fard, M.H.; Rohani, A. Development of a prediction model for estimating tractor engine torque based on soft computing and low cost sensors. Measurement 2018, 121, 83–95. [Google Scholar] [CrossRef]
  45. Al-Dosary, N.M.N.; Al-Hamed, S.A.; Aboukarima, A.M. K-nearest Neighbors method for prediction of fuel consumption in tractor-chisel plow systems. Eng. Agrícola 2019, 39, 729–736. [Google Scholar] [CrossRef]
  46. Kappa, B. Predicting Bearing Failures and Measuring Lubrication Film Thickness in Your Plants Rotating Equipment. Proc. WEFTEC 2006, 6, 5969–5974. [Google Scholar] [CrossRef]
  47. Heidari, P.; Rezaei, M.; Rohani, A. Soft computing-based approach on prediction promising pistachio seedling base on leaf characteristics. Sci. Hortic. 2020, 274, 109647. [Google Scholar] [CrossRef]
  48. Rezaei, M.; Rohani, A.; Heidari, P.; Lawson, S. Using soft computing and leaf dimensions to determine sex in immature Pistacia vera genotypes. Measurement 2021, 174, 108988. [Google Scholar] [CrossRef]
  49. Blanco, A.; Pino-Mejías, R.; Lara, J.; Rayo, S. Credit scoring models for the microfinance industry using neural networks: Evidence from Peru. Expert Syst. Appl. 2013, 40, 356–364. [Google Scholar] [CrossRef]
  50. Borghi, P.H.; Zakordonets, O.; Teixeira, J.P. A COVID-19 time series forecasting model based on MLP ANN. Procedia Comput. Sci. 2021, 181, 940–947. [Google Scholar] [CrossRef]
  51. Bazrafshan, O.; Ehteram, M.; Latif, S.D.; Huang, Y.F.; Teo, F.Y.; Ahmed, A.N.; El-Shafie, A. Predicting crop yields using a new robust Bayesian averaging model based on multiple hybrid ANFIS and MLP models. Ain Shams Eng. J. 2022, 13, 101724. [Google Scholar] [CrossRef]
  52. Sada, S.; Ikpeseni, S. Evaluation of ANN and ANFIS modeling ability in the prediction of AISI 1050 steel machining performance. Heliyon 2021, 7, e06136. [Google Scholar] [CrossRef] [PubMed]
  53. Vasileva-Stojanovska, T.; Vasileva, M.; Malinovski, T.; Trajkovik, V. An ANFIS model of quality of experience prediction in education. Appl. Soft Comput. 2015, 34, 129–138. [Google Scholar] [CrossRef]
  54. Stojčić, M.; Stjepanović, A.; Stjepanović, Đ. ANFIS model for the prediction of generated electricity of photovoltaic modules. Decis. Mak. Appl. Manag. Eng. 2019, 2, 35–48. [Google Scholar] [CrossRef]
  55. Chen, Y.; Zhao, Y. Face recognition using DCT and hierarchical RBF model. In Proceedings of the International Conference on Intelligent Data Engineering and Automated Learning, Manchester, UK, 24–26 November 2022; pp. 355–362. [Google Scholar]
  56. Park, J.-W.; Venayagamoorthy, G.K.; Harley, R.G. MLP/RBF neural-networks-based online global model identification of synchronous generator. IEEE Trans. Ind. Electron. 2005, 52, 1685–1695. [Google Scholar] [CrossRef]
  57. Soleymani, S.A.; Goudarzi, S.; Anisi, M.H.; Hassan, W.H.; Idris, M.Y.I.; Shamshirband, S.; Noor, N.M.; Ahmedy, I. A novel method to water level prediction using RBF and FFA. Water Resour. Manag. 2016, 30, 3265–3283. [Google Scholar] [CrossRef]
  58. Otchere, D.A.; Ganat, T.O.A.; Gholami, R.; Ridha, S. Application of supervised machine learning paradigms in the prediction of petroleum reservoir properties: Comparative analysis of ANN and SVM models. J. Pet. Sci. Eng. 2021, 200, 108182. [Google Scholar] [CrossRef]
  59. Li, X.; Lord, D.; Zhang, Y.; Xie, Y. Predicting motor vehicle crashes using support vector machine models. Accid. Anal. Prev. 2008, 40, 1611–1618. [Google Scholar] [CrossRef]
  60. Ghorbani, M.A.; Zadeh, H.A.; Isazadeh, M.; Terzi, O. A comparative study of artificial neural network (MLP, RBF) and support vector machine models for river flow prediction. Environ. Earth Sci. 2016, 75, 476. [Google Scholar] [CrossRef]
  61. Zhang, Y.; Yang, H.; Cui, H.; Chen, Q. Comparison of the ability of ARIMA, WNN and SVM models for drought forecasting in the Sanjiang Plain, China. Nat. Resour. Res. 2020, 29, 1447–1464. [Google Scholar] [CrossRef]
  62. Cheng, M.-Y.; Huang, C.-C.; Roy, A.F.V. Predicting project success in construction using an evolutionary Gaussian process inference model. J. Civ. Eng. Manag. 2013, 19, S202–S211. [Google Scholar] [CrossRef]
63. Ghanizadeh, A.R.; Heidarabadizadeh, N.; Heravi, F. Gaussian process regression (GPR) for auto-estimation of resilient modulus of stabilized base materials. J. Soft Comput. Civ. Eng. 2021, 5, 80–94. [Google Scholar]
  64. Schulz, E.; Speekenbrink, M.; Krause, A. A tutorial on Gaussian process regression: Modelling, exploring, and exploiting functions. J. Math. Psychol. 2018, 85, 1–16. [Google Scholar] [CrossRef]
  65. Quinonero-Candela, J.; Rasmussen, C.E. A unifying view of sparse approximate Gaussian process regression. J. Mach. Learn. Res. 2005, 6, 1939–1959. [Google Scholar]
  66. Gupta, G.K.; Sharma, D.K. A review of overfitting solutions in smart depression detection models. In Proceedings of the 2022 9th International Conference on Computing for Sustainable Global Development (INDIACom), New Delhi, India, 23–25 March 2022; pp. 145–151. [Google Scholar]
  67. Ibrahim, M.S.; Pang, D.; Randhawa, G.; Pappas, Y. Development and validation of a simple risk model for predicting metabolic syndrome (MetS) in midlife: A cohort study. Diabetes Metab. Syndr. Obes. Targets Ther. 2022, 15, 1051–1075. [Google Scholar] [CrossRef]
  68. Arvanitidis, A.I.; Bargiotas, D.; Daskalopulu, A.; Kontogiannis, D.; Panapakidis, I.P.; Tsoukalas, L.H. Clustering Informed MLP Models for Fast and Accurate Short-Term Load Forecasting. Energies 2022, 15, 1295. [Google Scholar] [CrossRef]
  69. Thapa, S.; Zhao, Z.; Li, B.; Lu, L.; Fu, D.; Shi, X.; Tang, B.; Qi, H. Snowmelt-driven streamflow prediction using machine learning techniques (LSTM, NARX, GPR, and SVR). Water 2020, 12, 1734. [Google Scholar] [CrossRef]
  70. Okwu, M.O.; Adetunji, O. A comparative study of artificial neural network (ANN) and adaptive neuro-fuzzy inference system (ANFIS) models in distribution system with nondeterministic inputs. Int. J. Eng. Bus. Manag. 2018, 10, 1847979018768421. [Google Scholar] [CrossRef]
71. Slama, S.; Errachdi, A.; Benrejeb, M. Model reference adaptive control for MIMO nonlinear systems using RBF neural networks. In Proceedings of the 2018 International Conference on Advanced Systems and Electric Technologies (IC_ASET), Hammamet, Tunisia, 22–25 March 2018; pp. 346–351. [Google Scholar]
  72. Tabari, H.; Kisi, O.; Ezani, A.; Talaee, P.H. SVM, ANFIS, regression and climate based models for reference evapotranspiration modeling using limited climatic data in a semi-arid highland environment. J. Hydrol. 2012, 444, 78–89. [Google Scholar] [CrossRef]
  73. Taki, M.; Ajabshirchi, Y.; Ranjbar, S.F.; Rohani, A.; Matloobi, M. Modeling and experimental validation of heat transfer and energy consumption in an innovative greenhouse structure. Inf. Process. Agric. 2016, 3, 157–174. [Google Scholar] [CrossRef]
  74. Alshahrani, M.; Mekni, S. Comparison between Different Techniques to Predict Municipal Water Consumption in Jeddah. In Proceedings of the 6th International Conference on Future Networks & Distributed Systems, Amman, Jordan, 26–27 June 2018; pp. 429–434. [Google Scholar]
  75. Willmott, C.J.; Matsuura, K. Advantages of the mean absolute error (MAE) over the root mean square error (RMSE) in assessing average model performance. Clim. Res. 2005, 30, 79–82. [Google Scholar] [CrossRef]
  76. Kharroubi, L.; Maaref, H.; Vigneron, V. Soft computing based control approach applied to an under actuated system. In Proceedings of the 2022 19th International Multi-Conference on Systems, Signals & Devices (SSD), Setif, Algeria, 6–10 May 2022; pp. 816–822. [Google Scholar]
Figure 1. Development of maintenance strategies.
Figure 2. The schematic view of dataset preparation from sources.
Figure 3. The experimental setup and its schematic view for measuring the dielectric properties of lubricants.
Figure 4. The structure of the Adaptive Neuro-Fuzzy Inference System (ANFIS).
Figure 5. RBF performance at different hidden sizes for the various spectroscopic elements.
Figure 6. RBF performance with different training algorithms for the various spectroscopic elements.
Figure 7. Actual and predicted values of the elemental spectroscopy in the training and testing steps.
Figure 8. Sensitivity analysis to prioritize the RBF inputs for the various spectroscopic elements.
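Figures 5 and 6 examine how the RBF network responds to its two main design choices: the number of hidden (basis-function) neurons and the training algorithm. For orientation only, the following Python sketch shows a minimal RBF regressor in which the centres are chosen by k-means and the linear output weights are fitted by least squares; the class name, parameters, and data are illustrative assumptions and do not reproduce the implementation used in this study.

```python
# Minimal RBF-network regressor (illustrative sketch only; not the study's implementation).
import numpy as np
from sklearn.cluster import KMeans

class SimpleRBF:
    def __init__(self, hidden_size=10, gamma=1.0):
        self.hidden_size = hidden_size  # number of basis-function neurons (cf. Figure 5)
        self.gamma = gamma              # width of the Gaussian basis functions

    def _design(self, X):
        # Gaussian activation of every sample with respect to every centre
        d2 = ((X[:, None, :] - self.centers_[None, :, :]) ** 2).sum(axis=2)
        return np.exp(-self.gamma * d2)

    def fit(self, X, y):
        self.centers_ = KMeans(n_clusters=self.hidden_size, n_init=10,
                               random_state=0).fit(X).cluster_centers_
        # Linear output layer fitted by ordinary least squares
        self.w_, *_ = np.linalg.lstsq(self._design(X), y, rcond=None)
        return self

    def predict(self, X):
        return self._design(X) @ self.w_

# Hypothetical usage: inputs are (eps', eps'', tan delta), target is one element (ppm).
X = np.random.rand(49, 3)    # stand-in for the 49 dielectric measurements
y = np.random.rand(49) * 40  # stand-in for, e.g., a silicon concentration
model = SimpleRBF(hidden_size=12).fit(X, y)
print(model.predict(X[:3]))
```

Sweeping hidden_size over a range and recording the test error would reproduce the kind of comparison summarized in Figure 5, whereas Figure 6 varies the training procedure rather than the architecture.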
Table 1. Literature summary of significant previous studies.

Application | Soft Computing Tool | Performance | Ref.
This study proposes a novel procedure for predicting particle concentrations in the oil phase as a function of operational period times, which can serve as a basis for determining the residual useful life of lubricant agents. | FIS, MLP, and RBF | Learning rate | [2]
This study proposes a recognition and prediction model for estimating wear-out faults in engines. To establish the model, unessential attributes were eliminated from the early stages of oil monitoring data. | PSO-SVM | Accuracy | [30]
This study developed models based on soft computing methods to estimate the engine torque performance across an extensive range of loads and speeds, which represent the operational conditions of the engine. | ANFIS, RBF | RMSE, EF, R, TSSE | [44]
This study focuses on the evaluation of lubricant and engine health status based on wear and lubricant pollution. By analyzing wear and pollution in the lubricant, this approach provides valuable insights into the health of the engine and lubrication system. | KNN and RBF-NN | Accuracy | [15]
The aim of this study is to predict fuel consumption for various tractor sizes while carrying an agricultural implement (chisel plow) under different specifications. | KNN | MAE, RMSE, RRSE, RAE | [45]
This study proposes a novel approach to interpreting the challenges of engine lubricant analysis using machine learning techniques. By leveraging spectral analysis measurements, this approach aims to identify the most important factors and their impact on engine performance. | KNN, RBF | Accuracy | [46]
This study utilized various conditional features of mechanical components to characterize the state of engine oil. | ANN | Accuracy | [29]
Table 2. Spectral analysis results for the samples provided by Tirage Company (Unit: ppm).

Sample No. | Fe | Pb | Cu | Cr | Al | Si | Zn
111.052.830.981.263.628.791319
29.9400.970.461.6117.771362
330.2501.645.3310.189.231359
481.1702.597.4634.5936.211493
513.191.80.591.81.097.141281
624.6501.251.555.059.891398
79.2400.920.1116.111362
815.4601.7500.384.011360
9394.427.786.5210.9316.291297
1039.763.21.42.23.7715.441657
1134.690.181.237.213.4516.551264
1239.673.912.316.4812.4516.331342
1386.061.172.763.6910.9540.051445
1421.733.227.230.915.317.271317
158.171.793.230.0407.22803
1649.753.514.114.155.0713.651327
Table 3. Dielectric properties measurement results for samples provided by Tirage Company.

Sample No. | 2.40 GHz (ε′, ε″, tan δ) | 5.80 GHz (ε′, ε″, tan δ) | 7.40 GHz (ε′, ε″, tan δ)
1 | 2.62, 0.15, 0.058 | 2.94, 0.13, 0.044 | 2.55, 0.23, 0.090
2 | 2.68, 0.12, 0.045 | 2.99, 0.10, 0.033 | 2.60, 0.18, 0.069
3 | 2.45, 0.09, 0.037 | 2.79, 0.07, 0.025 | 2.40, 0.17, 0.071
4 | 2.55, 0.05, 0.020 | 2.86, 0.05, 0.017 | 2.47, 0.12, 0.049
5 | 2.60, 0.13, 0.051 | 2.91, 0.12, 0.041 | 2.52, 0.21, 0.083
6 | 2.58, 0.13, 0.051 | 2.90, 0.11, 0.038 | 2.50, 0.20, 0.080
7 | 2.60, 0.17, 0.066 | 2.93, 0.14, 0.048 | 2.52, 0.26, 0.103
8 | 2.54, 0.20, 0.079 | 2.85, 0.19, 0.067 | 2.43, 0.30, 0.123
9 | 2.53, 0.08, 0.032 | 2.83, 0.06, 0.021 | 2.45, 0.15, 0.061
10 | 2.52, 0.06, 0.025 | 2.81, 0.05, 0.018 | 2.43, 0.13, 0.053
11 | 2.55, 0.09, 0.036 | 2.88, 0.07, 0.024 | 2.50, 0.14, 0.056
12 | 2.50, 0.07, 0.029 | 2.79, 0.05, 0.018 | 2.42, 0.14, 0.058
13 | 2.41, 0.05, 0.021 | 2.70, 0.04, 0.015 | 2.34, 0.10, 0.043
14 | 2.66, 0.11, 0.042 | 2.97, 0.10, 0.034 | 2.58, 0.16, 0.062
15 | 2.60, 0.13, 0.051 | 2.93, 0.12, 0.041 | 2.53, 0.21, 0.083
16 | 2.50, 0.07, 0.029 | 2.81, 0.06, 0.021 | 2.42, 0.13, 0.054
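Because the loss tangent is, by definition, the ratio of the dielectric loss to the dielectric constant (tan δ = ε″/ε′), only two of the three quantities reported in Table 3 are independent. The short sketch below, given purely as an illustrative check using the sample 1 values at 7.40 GHz, confirms that the tabulated entries obey this relation.

```python
# Illustrative consistency check: tan(delta) = eps'' / eps'.
# Values are the 7.40 GHz entries for sample 1 in Table 3.
eps_real, eps_imag, tan_delta_reported = 2.55, 0.23, 0.090

tan_delta_computed = eps_imag / eps_real
print(f"computed = {tan_delta_computed:.3f}, reported = {tan_delta_reported:.3f}")
# computed = 0.090, reported = 0.090
```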
Table 4. Result of correlations between variable 1 (elemental spectroscopy) and variable 2 (electrical properties).

Var. 1 | Var. 2 | Corr.
Fe | ε′ | −0.13 ns
Fe | ε″ | −0.20 ***
Fe | tan δ | −0.22 ***
Pb | ε′ | 0.41 **
Pb | ε″ | 0.48 **
Pb | tan δ | 0.48 **
Cu | ε′ | 0.45 **
Cu | ε″ | 0.53 **
Cu | tan δ | 0.53 **
Cr | ε′ | 0.45 **
Cr | ε″ | 0.45 **
Cr | tan δ | 0.41 **
Al | ε′ | −0.23 ***
Al | ε″ | −0.49 **
Al | tan δ | −0.56 **
Si | ε′ | −0.14 ns
Si | ε″ | −0.42 **
Si | tan δ | −0.50 **
Zn | ε′ | −0.54 **
Zn | ε″ | −0.77 **
Zn | tan δ | −0.79 **
**, ***, and ns indicate that the correlation is significant at the 1% level, significant at the 5% level, and non-significant, respectively.
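For readers who wish to reproduce correlations of this kind, the sketch below shows one straightforward way to relate each element to the three electrical properties, assuming a Pearson correlation and hypothetical column names (eps_real, eps_imag, tan_delta, and a file oil_samples.csv); the authors' exact statistical procedure is not shown here and may differ.

```python
# Illustrative sketch only: element-versus-dielectric correlations in the style of Table 4.
import pandas as pd
from scipy.stats import pearsonr

# Assumed layout: one row per oil sample, elemental concentrations in ppm,
# dielectric properties measured at a single frequency.
df = pd.read_csv("oil_samples.csv")  # hypothetical file name

for element in ["Fe", "Pb", "Cu", "Cr", "Al", "Si", "Zn"]:
    for prop in ["eps_real", "eps_imag", "tan_delta"]:  # stand-ins for eps', eps'', tan delta
        r, p = pearsonr(df[element], df[prop])
        flag = "**" if p < 0.01 else "***" if p < 0.05 else "ns"  # star convention of Table 4
        print(f"{element} vs {prop}: r = {r:+.2f} ({flag})")
```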
Table 5. Comparing the performance of models by RMSE and MAPE.

Element | Metric | 2.40 GHz (RBF, MLP, ANFIS, GPR, SVM) | 5.80 GHz (RBF, MLP, ANFIS, GPR, SVM) | 7.40 GHz (RBF, MLP, ANFIS, GPR, SVM)
Fe | RMSE | 2.4, 11.0, 2.4, 16.8, 23.4 | 1.4, 23.9, 1.5, 19.7, 17.3 | 0.9, 15.5, 0.9, 14.3, 16.5
Fe | MAPE | 3.7, 33.8, 3.8, 48.5, 51.9 | 2.8, 69.5, 2.6, 34.5, 43.5 | 0.9, 47.3, 1.1, 40.3, 36.2
Pb | RMSE | 1.4, 5.4, 2.2, 3.8, 15.6 | 1.0, 4.9, 1.0, 5.3, 6.5 | 0.3, 3.3, 0.3, 5.5, 7.5
Pb | MAPE | - | - | -
Cu | RMSE | 4.4, 22.4, 5.0, 16.4, 19.0 | 3.9, 18.7, 3.9, 18.6, 53.2 | 2.2, 13.2, 2.2, 18.5, 21.3
Cu | MAPE | 10.3, 70.0, 10.2, 70.7, 68.3 | 7.9, 40.3, 9.3, 93.4, 87.2 | 1.3, 11.0, 2.4, 96.8, 72.3
Cr | RMSE | 4.0, 13.3, 4.3, 12.7, 15.7 | 3.9, 13.2, 8.1, 12.7, 15.5 | 0.2, 13.2, 3.5, 11.3, 16.2
Cr | MAPE | - | - | -
Al | RMSE | 0.7, 0.7, 0.8, 0.7, 0.9 | 0.2, 0.7, 0.3, 0.7, 3.5 | 0.1, 0.7, 0.1, 0.7, 0.9
Al | MAPE | - | - | -
Si | RMSE | 0.7, 3.4, 2.2, 2.5, 4.3 | 0.6, 2.9, 0.8, 3.1, 3.5 | 0.4, 2.2, 0.5, 2.9, 3.3
Si | MAPE | 4.3, 38.4, 8.9, 28.1, 48.2 | 1.7, 28.2, 1.9, 33.0, 36.1 | 0.7, 25.7, 1.1, 24.4, 34.2
Zn | RMSE | 10.3, 20.7, 80.3, 29.2, 39.1 | 6.5, 41.9, 53.8, 28.0, 28.3 | 1.0, 6.7, 2.2, 28.6, 35.3
Zn | MAPE | 16.4, 32.0, 70.7, 35.3, 48.3 | 9.0, 80.9, 74.9, 75.2, 65.4 | 1.4, 25.4, 2.3, 73.1, 65.4