Article

An Advanced Framework for Predictive Maintenance Decisions: Integrating the Proportional Hazards Model and Machine Learning Techniques under CBM Multi-Covariate Scenarios

Predictive Lab, Department of Industrial Engineering, Universidad Técnica Federico Santa María, Avenida Santa María 6400, Santiago 7630000, Chile
* Author to whom correspondence should be addressed.
Appl. Sci. 2024, 14(13), 5514; https://doi.org/10.3390/app14135514
Submission received: 30 May 2024 / Revised: 11 June 2024 / Accepted: 18 June 2024 / Published: 25 June 2024

Abstract

Under Condition-Based Maintenance, the Proportional Hazards Model (PHM) uses Cox’s partial regression and vital signs as covariates to estimate risk for predictive management. However, maintenance faces challenges when dealing with a multi-covariate scenario due to the impact of the conditions’ heterogeneity on the intervention decisions, especially when the combined measurement lacks a physical interpretation. Therefore, we propose an advanced framework based on a PHM-machine learning formulation integrating four key areas: covariate prioritization, covariate weight estimation, state band definition, and the generation of an enhanced predictive intervention policy. The paper validates the framework’s effectiveness through a comparative analysis of reliability metrics in a case study using real condition monitoring data from an energy company. While the traditional log-likelihood minimization may fall short in covariate weight estimation, sensitivity analyses reveal that the proposed policy using IPOPT and a non-scaler transformation results in consistent prediction quality. Given the challenge of interpreting merged covariates, the scheme yields improved results compared to expert criteria. Finally, the advanced framework strengthens the PHM modeling by coherently integrating diverse covariate scenarios for predictive maintenance purposes.

1. Introduction

Condition-Based Maintenance (CBM) is an approach that focuses on assessing asset conditions to establish a robust maintenance policy based on the vital signals of an asset [1]. Within CBM, the Proportional Hazards Model (PHM) has proven suitable for estimating the proportional risk generated by these critical signals, referred to as covariates, enabling the development of predictive maintenance policies [2]. Such a policy leverages real-time sensor data and analytics to anticipate equipment failures, scheduling maintenance only when it is actually needed.
While many studies utilize these techniques, the majority focus on academic or synthetic data, considering only a few covariates. This raises concerns about their applicability to real-world operational databases, where a multi-covariate scenario is more common [3,4]. One significant challenge, driven by the wide variance in measurement units and magnitudes across covariates, is the lack of clear guidance on selecting optimal covariates, estimating their weights, and defining conditional states for use in the PHM. Two recent studies have independently addressed these challenges by leveraging machine learning (ML) tools. The authors of Ref. [3] introduced a novel technique for obtaining covariate weights in the PHM, while in Ref. [5], a sophisticated method is presented for determining state bands using automated clustering techniques. However, these approaches lack testing in real-world scenarios, and no further development has been made to define a decision rule for equipment intervention. In summary, the identified challenges are as follows:
  • Asset-intensive industries require a multi-covariate model for CBM predictive policies, capable of prioritizing diverse magnitudes and units when interpretations are lacking.
  • Assessing the impact of condition heterogeneity on decision-making sensitivity is crucial, especially in real-case scenarios with multiple covariates.
Thus, the objective of this research was to develop a framework capable of formulating predictive maintenance policies for multi-covariate scenarios where expert judgment alone is inadequate for determining the necessary parameters for the PHM. Consequently, the proposed framework is based on a PHM-machine learning (PHM-ML) model, structured into the following four phases to ensure its operation is autonomous:
1.
The optimization of covariate selection for the model by integrating a machine learning cross-validation approach with statistical metrics to identify the most influential features.
2.
The estimation of covariate weights for the PHM by combining ML genetic algorithms with a standard solver method.
3.
The determination of covariate range bands for each state using K-means and Gaussian Mixture Models.
4.
The derivation of ML reliability metrics to develop a maintenance policy for interventions on failure-prone elements through a predictive decision-making graph.
The resulting policies undergo a performance evaluation by comparing them with an original solution developed for a Chilean energy distribution company using real operational data. Following this, a sensitivity analysis identifies aspects of the framework that induce positive changes in the model.
In this context, the main contributions of the present work are summarized as follows:
  • Formulate a comprehensive PHM-ML framework that seamlessly integrates covariate selection, weight estimation, range band determination, and predictive maintenance models for multi-condition scenarios, with applicability across different industries.
  • Introduce sensitivity analyses to evaluate the impact of the framework’s optimization methods, scalers, covariate constraint techniques, and selection criteria on the resulting predictive policy.
  • Assess the proposed PHM-ML model against conventional solver methods and optimal log-likelihood scores to evaluate performance and coherence in prediction quality for maintenance purposes.
The subsequent sections of this paper are organized as follows: Section 2 provides a comprehensive literature review. Section 3 introduces the framework’s structure and the embedded models. Section 4 elaborates on the case study, presenting two analyses to test the proposed framework and offering a detailed examination of the results regarding CBM predictive decisions. Finally, Section 5 presents conclusions drawn from this work and considerations for potential future applications.

2. Literature Review

In the following literature review, the fundamental concepts for this research are presented. Initially, a contextual understanding of predictive strategies and tools, with a particular focus on CBM and PHM, is provided. Following this, a brief overview of prevalent machine learning tools in the industry is presented to enrich the CBM-PHM policy, identifying those that have not been extensively explored in this area. Subsequently, a deeper exploration into the application of genetic algorithms as optimization techniques and the utilization of clustering methods such as GMM is undertaken. Lastly, early warning systems are explored, illustrating how these collective concepts address the discerned need within the scope of this paper.

2.1. Condition-Based Maintenance

In recent decades, maintenance strategies have evolved significantly. Initially, corrective maintenance aimed to reduce costs but increased downtime. Preventive maintenance followed, scheduling interventions at fixed intervals at the cost of not fully exploiting equipment lifespan. Consequently, predictive maintenance (PM) emerged, using sensor data to monitor equipment and predict and prevent imminent failures, thereby extending equipment lifespan [6]. One of the most recent advancements in this evolution is Condition-Based Maintenance (CBM), which enables continuous or intermittent monitoring of asset conditions without operational interruption [2]. CBM predominantly relies on predictive maintenance techniques, analyzing component conditions such as temperature and vibrations to schedule interventions. This approach enhances decision-making and improves operational metrics. However, the feasibility of CBM can vary across functional scenarios due to the higher implementation costs of monitoring tools [1]. Consequently, numerous studies have explored this field.
In a comprehensive review by the authors of Ref. [1], the advantages and disadvantages of CBM are explored. The study focuses on CBM’s core components: data acquisition, processing, and decision-making. While CBM offers benefits such as saving operator time and transforming data into valuable insights, it also presents challenges. These include higher initial costs, uncertainties due to device failures, and complexities in real-world scenarios like imperfect inspections [6].
In a study by Ref. [7], CBM is compared with Time-Based Maintenance (TBM), revealing that the deterioration process behavior significantly influences CBM’s cost-benefit performance, and practical factors like planning time and uncertainty play a crucial role in their relative performance, suggesting that these factors should be considered when applying either approach.
Multiple other studies have been conducted about this topic, improving CBM policies. In Ref. [8], a novel approach termed Condition-Based Maintenance and Production (CBMP) was proposed, integrating both CBM and Condition-Based Production (CBP). The authors’ findings demonstrated that CBMP, alongside CBP and CBM individually, surpassed traditional static maintenance strategies by effectively balancing production and maintenance costs.
Furthermore, ref. [9] advanced the field by introducing a multi-component CBM-based policy. This policy accommodated the challenges associated with monitoring limited system components and investigated the interplay between predictive model accuracy and the economic benefits in intricate multi-component systems.
As evident from recent advancements, leveraging data collection allows CBM to craft efficient maintenance strategies. Moreover, when combined with robust models like PHM, CBM can significantly elevate the development of predictive maintenance policies.

2.2. Proportional Hazards Model

To enhance predictive maintenance strategies effectively, it is essential to optimize decision-making. Integrating economic aspects into CBM is crucial for this purpose, with PHM playing a key role in achieving this objective [2]. This statistical model utilizes historical data to develop a function that considers both the age and condition factors of the asset when calculating its conditional hazard level [10].
In various industrial sectors, including mining, gastronomy, logistics, and finance, the successful implementation of PHM has been observed. Within the field of maintenance, numerous studies have used PHM to enhance CBM policies. In Ref. [11], a two-level CBM policy for ship pumps was optimized under competing risks, incorporating stochastic maintenance quality, with PHM employed to model the deterioration of the pump system. In Ref. [12], a CBM policy featuring dynamic thresholds and diverse maintenance actions for periodically inspected systems was introduced. PHM was used to model the failure time, considering both minor and catastrophic failures. Finally, in Ref. [13], a hybrid repair–replacement policy integrating PHM with a stochastically increasing Markovian covariate process was introduced.
The literature demonstrates the importance and advantages of integrating PHM with CBM for effective intervention policies. However, challenges related to obtaining parameters for the PHM model in real-world multi-covariate scenarios have not been fully explored. Therefore, this paper aimed to integrate PHM with CBM in complex industrial settings, augmenting them with machine learning tools to streamline reliability calculations and policy generation.

2.3. Machine Learning

Due to the complex nature of CBM, which involves analyzing vast datasets prone to uncertainties and non-stationary processes, integrating ML methodologies like Neural Networks (NNs), genetic algorithms (GAs), Fuzzy Logic (FL), clustering and a mix of others has become crucial for improving data analysis and predictive capabilities [1].
In Ref. [14], an ML-based tool was developed to predict seismic responses in Reinforced Concrete Moment-Resisting Frames (RC MRFs). Using Artificial Neural Networks (ANNs), the study predicted the Maximum Interstory Drift Ratio and spectral acceleration, which are essential for assessing the seismic performance and safety of these structures, achieving accuracies of up to 92.9% and thereby enhancing the reliability and precision of seismic performance assessments for RC MRFs. In Ref. [15], a predictive maintenance planning model is introduced. It starts by utilizing the Java-based Sea Lion Optimization (J-SLnO) algorithm to refine feature selection, minimize data redundancies, and optimize the weights of a Recurrent Neural Network (RNN). The optimized RNN is then employed to predict component failures; to further enhance prediction accuracy, a Support Vector Machine (SVM) is integrated to identify the prediction network across different ranges. The authors of Ref. [16] address the joint optimization of flexible job shop scheduling and flexible PM using a Digital Twin, considering fixed and human resources. They apply a double-layer Q-learning algorithm as the optimization method, adapted for dynamic environments with proven low computational overhead. In Ref. [17], a combined model of a multi-population memetic algorithm and Q-learning is employed to optimize a hybrid flow shop scheduling problem with flexible predictive maintenance. The study concludes that the proposed methodology yields satisfactory results compared to traditional evolutionary techniques.
As seen in these examples, ML is widely used in maintenance. In Ref. [4], a comprehensive review examining the employment of ML techniques in CBM is conducted. The review reveals that Neural Networks are utilized in 50% of CBM studies, followed by Fuzzy Logic in 13%, and clustering methods and evolutionary algorithms in 8%. Additionally, the study analyzes the types of datasets used in these studies, with results showing that approximately 47% utilize simulated or synthetic data, 29% use public datasets, 18% use real operational data, and only 6% rely on industry case studies.
It is clear that there is a lack of exploration of clustering techniques and evolutionary algorithms in CBM. Moreover, the utilization of these techniques on real operational or industrial datasets is lacking. However, these gaps are primarily due to the substantial challenges in integrating covariates from authentic datasets into the models [18]. Therefore, this study aimed to address these obstacles by implementing optimization and clustering ML techniques, while also leveraging real operational data in the proposed PHM-ML model.

2.4. Optimization Methods and Techniques

To effectively leverage a PHM model, acquiring the Weibull parameters and their corresponding covariate weights is crucial. While these parameters are typically known when dealing with synthetic data, real-world applications often require expert judgment to gather this information. Without such expertise, implementing the PHM model becomes challenging. Hence, optimization techniques like minimizing the log-likelihood prove invaluable in determining these parameters [3]. However, solving nonlinear problems can be complex, and traditional solver software may struggle due to its limitations in handling optimization problems, such as local domain restrictions and rigid formulations [19]. One family of ML tools suited to these problems is GAs, which offer a flexible approach to optimization by representing solutions as chromosomes and using selection, crossover, and mutation operators to find better solutions across a wider solution domain [20]. Therefore, GAs excel in optimizing complex problems, often outperforming traditional solvers [21].
Recent studies have applied GAs to maintenance challenges. For instance, Ref. [22] used GA to address aircraft maintenance planning, handling the problem’s complexity that traditional solvers struggle with. In another application, the authors of Ref. [23] combined a GA with an arithmetic optimization algorithm to improve feature selection in survival analysis, achieving better results than other methods. Furthermore, the authors of Ref. [3] used a GA alongside a traditional solver to optimize covariate weights for a PHM. While the conventional solver performed better on the primary score index, its impact on reliability metrics and maintenance policy remains to be explored. This study aimed to investigate these effects.

2.5. Clustering Techniques

Similar to the requirements of the PHM model to have the Weibull and covariate parameters known, it is also necessary to define the operational state bands of an asset. Typically, when the model involves one or two covariates, expert judgment can readily establish these ranges. However, when dealing with multiple covariates, interpreting these variables becomes more complex, and expert judgment may struggle to define the ranges accurately. Therefore, machine learning tools such as clustering algorithms are valuable for understanding the data behavior and automatically generating these bands. Clustering algorithms are unsupervised techniques that facilitate the detection of standard features in data, enabling their grouping into subgroups, which aids in categorizing new data [4].
One well-regarded clustering technique is K-means, which creates “K” clusters. In K-means, data points in each group are closest to the center of that cluster [24,25,26]. However, there can be challenges when applying K-means to small datasets, especially when determining the optimal number of clusters [27].
Gaussian Mixture Modeling (GMM), instead of assuming data points have a single center, assumes they follow a mix of normal distributions. This approach lets us describe each group by its average and standard deviation. Consequently, GMM offers insights into the likelihood of data points belonging to specific classes or clusters by utilizing the probability distribution [26,28,29].
GMM has various uses in research. For example, in Ref. [30], GMM was used with a weighted Principal Component Analysis (PCA) to diagnose bearing faults more accurately. The authors also used an algorithm called expectation selection maximization to improve feature selection and diagnosis. In another study [31], GMM helped classify corrosion severity in building materials, guiding maintenance planning. Additionally, in Ref. [32], GMM and PCA are combined to diagnose faults in air conditioning systems more precisely and efficiently.
In Refs. [5,27], GMM was compared to K-means for estimating state ranges using PHM. This facilitated calculating transition probabilities for reliability and remaining useful life estimation; the main finding was that GMM enhanced the model’s robustness. While the latter study utilized an industrial database and effectively established decision rules, the covariates used were determined solely by expert judgment, without any exploration of changes in optimization parameters or covariate selection. Hence, the present research endeavored to address these aspects and integrate them into the development of a predictive modeling framework.

2.6. Early-Warning Systems

It is clear that implementing CBM provides a significant advantage by proactively identifying components at risk of failure. A strategic approach to utilizing this information for intervention policy planning involves establishing early warning systems. These systems promptly alert maintenance personnel to potential failures and are commonly deployed in high-impact industrial and safety-critical systems [1].
Several systems are discussed in the literature [33,34,35]. However, these systems do not share a common methodology. Consequently, defining a standard is essential, and one approach is to optimize decision-making in CBM by jointly minimizing the risk and cost factors associated with predictive and corrective maintenance through PHM [2].
In Ref. [36], a function dependent on optimal risk and the Weibull parameters of components is proposed. The optimal risk can be derived from a cost function [2], where the fixed-point iteration method introduced by Ref. [37] can be used to obtain the optimal cost and risk. This yields a “warning-limit function”, a graphical representation outlining component states, distinguishing between safe, warning, or failure-prone zones. In Ref. [2], this function is integral to developing the computational software “EXAKT”, enabling data input for components and the computation of conditional reliability, remaining useful life, costs, risk, and a maintenance policy.
While this methodology for deriving maintenance policies is well documented in the literature, it has been observed that few studies have employed this approach to develop such policies in recent decades. This missed opportunity is particularly striking considering the advancements in data processing tools developed in recent years. Consequently, utilizing these tools has the potential to greatly enhance the predictive capability of these policies across different industries. Among the few studies that follow a similar structure, it is worth noting that in Refs. [13,38], while similar curves to the “warning-limit” function are derived, slight variations exist compared to the original proposal due to specific aspects of PHM application. Nevertheless, the interpretation of both graphs is similar to the one shown in Ref. [2].

2.7. Research Motivation

As highlighted in this literature review, various models show promise in enhancing the effectiveness and application of PHM and CBM in early warning systems. One of the main challenges identified is the complexity of dealing with multi-covariate scenarios, made worse by the lack of consistent criteria for evaluating the impact of each covariate on the risk rate. Essentially, the absence of an established process for determining covariate weights and defining state bands makes it difficult to develop robust policies to address such scenarios effectively.
To overcome this challenge, the present research aims to integrate all previously mentioned methodologies and tools. This integration is designed to create robust processes for optimal covariate selection, accurate estimation of their weights, and precise definition of state bands. These objectives are achieved through the strategic use of ML techniques within a versatile, scalable framework specifically designed to accommodate diverse real industrial conditions.
Furthermore, the research concludes with a thorough sensitivity analysis of each process within the framework. This analysis aims to identify the aspects that significantly improve the performance of resulting policies. Ultimately, this comprehensive approach seeks to confirm the effectiveness of the proposed framework and its ability to drive tangible improvements in the field of PHM and CBM.

3. Model Formulation

This section explores the models and techniques employed to construct the framework, covering the PHM, optimization techniques, the covariate selection process, the estimation of weights and band processes, and reliability metrics.

3.1. Proportional Hazards Model

The PHM is a statistical tool for estimating failure risk. This model consists of a compound function, integrating a baseline hazard function and one accounting for the life signals of an asset [2]. The PHM distribution selected for this research was the Weibull distribution, as depicted in Equation (1).
h(t, Z(t)) = \frac{\beta}{\eta} \left( \frac{t}{\eta} \right)^{\beta - 1} \exp\left( \sum_{i=1}^{n} \gamma_i Z_i(t) \right). (1)
Here, h(t, Z(t)) represents the conditional hazard rate of failure at time t, given the covariates Z_i(t); β denotes the asset’s failure shape factor, η corresponds to the scale life factor, and γ_i describes the weights or impacts of the life signals on the model.
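As a concrete illustration, the following minimal sketch evaluates Equation (1) at a single inspection point; all parameter values are illustrative placeholders and are not taken from the case study.

```python
import numpy as np

def phm_hazard(t, z, beta, eta, gamma):
    """Weibull PHM hazard rate of Equation (1).

    t     : operating age of the asset (e.g., hours)
    z     : covariate values Z_i(t)
    beta  : Weibull shape parameter
    eta   : Weibull scale parameter
    gamma : covariate weights
    """
    baseline = (beta / eta) * (t / eta) ** (beta - 1)   # Weibull baseline hazard
    covariate_effect = np.exp(np.dot(gamma, z))         # exp(sum_i gamma_i * Z_i(t))
    return baseline * covariate_effect

# Illustrative call with two covariates (hypothetical values)
h = phm_hazard(t=20_000.0, z=np.array([0.4, 1.2]),
               beta=2.1, eta=35_000.0, gamma=np.array([0.8, 0.3]))
```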

3.2. Data Input and Normalization Criteria

Scaling is necessary to account for potential differences in covariate magnitudes following covariate selection. The two scaling methods compared were derived from the recent work of the authors of Ref. [3]. The first was the MinMax (0,1) scaler, and the second was the “NS transformation”, a non-scaler based on an increasing monotonic function. Additional details can be found in the referenced study.
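For illustration, the snippet below applies the MinMax (0,1) scaler with scikit-learn and uses a placeholder monotonic mapping to stand in for the NS transformation; the exact NS mapping is defined in Ref. [3], so the log1p choice here is purely an assumption for demonstration.

```python
import numpy as np
from sklearn.preprocessing import MinMaxScaler

# Hypothetical covariate matrix: rows are inspections, columns are covariates
Z = np.array([[55.0, 120.0],
              [61.0,  95.0],
              [48.0, 140.0]])

# Option 1: MinMax (0, 1) scaling
z_minmax = MinMaxScaler(feature_range=(0, 1)).fit_transform(Z)

# Option 2: stand-in for the "NS transformation" of Ref. [3], which only
# requires an increasing monotonic function; log1p is an illustrative choice.
z_ns = np.log1p(Z)
```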

3.3. Covariate Weight Estimation

Determining covariate weights for a multi-covariate PHM is challenging because of the significant variability introduced across vital signals, as shown in Equation (2), where the composite covariate can combine multiple covariates for analysis.
f(\gamma, z) = \sum_{i \in Z} \gamma_i Z_i(t). (2)
The estimation process proposed in Ref. [3] was considered to address such diversity within multi-covariate scenarios. This approach utilized Gradient Boosting for the initial seed solution, while variable bounds were defined using IPCRidge and Kaplan–Meier estimators—approaches adopted in this research. Subsequently, GA and Interior Point Optimizer (IPOPT) were employed to optimize these covariate weights.

3.3.1. Partial Log-Likelihood

Since estimating covariate weights involves an optimization, the optimization function used in this research was based on a modified version of the partial log-likelihood from Refs. [36,39], aiming to minimize the negative partial log-likelihood outlined in Equation (3).
LL = -\sum_{i=1}^{n} \left[ \ln\left( \lambda t_i^{(\beta - 1)} \right) + \sum_{j=1}^{m} \gamma_j \ln(z_{ij}) - \frac{\lambda t_i^{\beta}}{\beta} \prod_{j=1}^{m} z_{ij}^{\gamma_j} \right]. (3)
Here, n represents the data count, m indicates the number of covariates, γ j stands for the weights of each covariate, β is the shape parameter of the Weibull distribution (for this paper, we used α where β = α + 1 ), λ is a dimensionless parameter, t i is the operating time of an asset, and z i j represents the value of covariate j.
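A minimal sketch of this objective, assuming the reconstruction of Equation (3) above, strictly positive covariate values, and the β = α + 1 convention; the array names are hypothetical.

```python
import numpy as np

def negative_log_likelihood(params, t, z):
    """Negative modified partial log-likelihood of Equation (3).

    params : [lam, alpha, gamma_1, ..., gamma_m], with beta = alpha + 1
    t      : (n,) operating times of the observed histories
    z      : (n, m) covariate values, assumed strictly positive
    """
    lam, alpha = params[0], params[1]
    gamma = np.asarray(params[2:])
    beta = alpha + 1.0

    log_hazard = np.log(lam * t ** (beta - 1.0)) + np.log(z) @ gamma
    cumulative = (lam * t ** beta / beta) * np.prod(z ** gamma, axis=1)
    return -np.sum(log_hazard - cumulative)
```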

3.3.2. Genetic Algorithm

Due to the inherent complexity of optimizing the log-likelihood function of Equation (3), particularly when confronted with an increasing number of covariates, the quest for optimization tools capable of addressing nonlinear problems becomes imperative.
An appealing alternative is to use GAs for optimization. GAs are well suited for solving complex optimization problems and offer significant advantages over traditional techniques. They can explore multiple domains of feasible solutions and have a flexible optimization structure [10,19,22]. However, GAs typically require longer convergence times and careful tuning of hyper-parameters to enhance solution quality [21,23]. Despite these challenges, because predictive maintenance policies are not generated under tight time constraints, the longer convergence time of GAs was not a limiting factor in this research. Hence, a GA was chosen as the preferred optimization tool.
A GA simulates chromosome reproduction to generate robust “genes” through fitness evaluation and genetic operations like selection, crossover, and mutation. It requires a fitness function, which in this research mirrored Equation (3), and solutions are represented as a (2 + m) × 1 array. Here, m is the number of covariates in the PHM, with the first two entries representing the variables λ and α and the remaining entries the weights of each covariate. For instance, with two covariates, the chromosome array appears as [λ, α, γ_1, γ_2].
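The sketch below is a compact, library-free GA over the chromosome [λ, α, γ_1, …, γ_m]. Its hyper-parameter defaults echo the settings later reported in Section 4.2, but the tournament selection, uniform crossover, and Gaussian mutation operators are generic stand-ins rather than the authors' exact implementation, and the fitness callable is assumed to wrap Equation (3).

```python
import numpy as np

def run_ga(fitness, bounds, pop_size=500, generations=1000,
           cx_prob=1.0, cx_indpb=0.1, mut_prob=0.7, mut_indpb=0.3, seed=0):
    """Minimize `fitness` (e.g., the negative log-likelihood of Equation (3))
    over chromosomes [lam, alpha, gamma_1, ..., gamma_m] within `bounds`."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds[:, 0], bounds[:, 1]
    n_genes = len(lo)
    pop = rng.uniform(lo, hi, size=(pop_size, n_genes))

    for _ in range(generations):
        scores = np.array([fitness(ind) for ind in pop])

        # Tournament selection of size 3
        idx = rng.integers(0, pop_size, size=(pop_size, 3))
        winners = idx[np.arange(pop_size), np.argmin(scores[idx], axis=1)]
        parents = pop[winners]

        # Uniform crossover: each gene swapped between mates with prob cx_indpb
        children = parents.copy()
        for i in range(0, pop_size - 1, 2):
            if rng.random() < cx_prob:
                swap = rng.random(n_genes) < cx_indpb
                children[i, swap], children[i + 1, swap] = (
                    parents[i + 1, swap].copy(), parents[i, swap].copy())

        # Gaussian additive mutation (mean 0, std 1), per-gene probability mut_indpb
        mutate = (rng.random((pop_size, n_genes)) < mut_indpb) & \
                 (rng.random((pop_size, 1)) < mut_prob)
        children = np.clip(children + mutate * rng.normal(0.0, 1.0, children.shape),
                           lo, hi)
        pop = children

    scores = np.array([fitness(ind) for ind in pop])
    best = int(np.argmin(scores))
    return pop[best], scores[best]
```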

3.3.3. Interior Point Optimizer Solver

In evaluating the effectiveness of GA optimization against a conventional nonlinear solver, this study also considered IPOPT for covariate weight optimization. IPOPT is an open-source solver for optimization problems. It can handle both linear and nonlinear large-scale problems and provides the advantage of incorporating variable constraints [40].
As highlighted in Ref. [3], IPOPT, as a mathematical programming model solver, does not require hyper-parameters like GA. Consequently, an initial seed value is determined to initiate the resolution process to minimize Equation (3).
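A sketch of this step, assuming cyipopt's SciPy-style minimize_ipopt wrapper is available (scipy.optimize.minimize would be a drop-in stand-in); the seed value, bounds, and data are placeholders rather than the values produced by the Gradient Boosting and IPCRidge/Kaplan–Meier steps of Ref. [3], and the objective re-states the Section 3.3.1 sketch for self-containment.

```python
import numpy as np
from cyipopt import minimize_ipopt  # SciPy-compatible interface to IPOPT (assumed installed)

def negative_log_likelihood(p, t, z):   # as in the Section 3.3.1 sketch
    lam, alpha, gamma = p[0], p[1], np.asarray(p[2:])
    beta = alpha + 1.0
    return -np.sum(np.log(lam * t ** (beta - 1.0)) + np.log(z) @ gamma
                   - (lam * t ** beta / beta) * np.prod(z ** gamma, axis=1))

# Hypothetical histories: operating times and two strictly positive covariates
t = np.array([1200.0, 3400.0, 5200.0, 8000.0])
z = np.array([[1.2, 0.8],
              [1.5, 1.1],
              [2.0, 1.4],
              [2.3, 1.9]])

x0 = np.array([1e-4, 1.0, 0.5, 0.5])             # placeholder seed: [lam, alpha, gamma_1, gamma_2]
bounds = [(1e-8, None), (1e-3, 10.0),            # lam, alpha
          (0.0, 5.0),   (0.0, 5.0)]              # placeholder covariate-weight bounds

result = minimize_ipopt(negative_log_likelihood, x0, args=(t, z),
                        bounds=bounds, options={"max_iter": 3000})
lam_opt, alpha_opt, *gamma_opt = result.x
```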

3.4. Covariate Selection Process

An essential aspect of weight optimization involves selecting which covariates to prioritize. In datasets, there may be instances where certain variables offer limited value to the model or where variables are correlated. Thus, establishing a method for covariate selection is crucial to developing models that are both simple and robust.

3.4.1. K-Fold

A common tool used in machine learning for evaluating model accuracy is cross-validation, which partitions the dataset into training and validation sets. This method is typically employed when dealing with supervised data or features intended for prediction. However, in this study, the objective was not to predict a specific variable but to optimize the weights (or estimators) in Equation (3) to minimize the log-likelihood score. To achieve this, the research used a strategy to assess the consistency of these covariate weights using cross-validation. This approach is conceptually similar but not identical to the implementation described in Ref. [41].
For this purpose, K-fold cross-validation was implemented. In this approach, the data were divided into K equal-sized folds. Each fold served as the validation set once, while the remaining folds were used for training. This process was repeated K times, ensuring the stability of the model’s performance. Specifically, for this investigation, K-fold cross-validation helped verify the consistency of the covariate weight results provided by the optimization process.

3.4.2. Coefficient of Variation

To measure the consistency of the covariate weights across each fold, the coefficient of variation (CV) was employed. The CV is a standardized measure that expresses the relative variability or dispersion in a dataset relative to its mean. It enables comparisons of variability across different datasets, irrespective of their units or scales. A lower CV indicates less relative variability; thus, covariates with lower CV values are preferred for the final model selection.
CV = \frac{\sigma}{\mu} \times 100\%. (4)
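A minimal sketch combining the K-fold consistency check of Section 3.4.1 with the CV of Equation (4); estimate_weights is a hypothetical callable wrapping the weight-optimization step of Section 3.3.

```python
import numpy as np
from sklearn.model_selection import KFold

def weight_consistency(t, z, estimate_weights, n_splits=5):
    """Re-estimate covariate weights on each fold's training split and report
    the per-covariate mean, standard deviation, and CV (Equation (4)) across folds."""
    weights = []
    for train_idx, _ in KFold(n_splits=n_splits, shuffle=False).split(t):
        weights.append(estimate_weights(t[train_idx], z[train_idx]))
    weights = np.asarray(weights)                  # shape: (n_splits, m)
    mean = weights.mean(axis=0)
    std = weights.std(axis=0)
    cv = 100.0 * std / np.abs(mean)                # coefficient of variation, in %
    return mean, std, cv
```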

3.4.3. Akaike Information Criterion and Bayesian Information Criterion

Two other statistical measures for model selection are the Akaike Information Criterion (AIC) and Bayesian Information Criterion (BIC), which balance goodness of fit with model complexity. For both criteria, lower values indicate a better model fit, with AIC applying a moderate complexity penalty and BIC imposing a stronger penalty. These criteria assist in identifying the most appropriate model.
AIC = -2\ln(L) + 2k, (5)
BIC = -2\ln(L) + k\ln(n), (6)
where L is the maximum value of the likelihood function of the model, k is the number of parameters in the model, and n is the number of observations.

3.4.4. Statistical Tests

To enhance the robustness of the selection criteria, two additional methods were employed to assess the significance of each covariate in reducing the log-likelihood score. The first method used the statistic:
G = 2\left( \ln L_q - \ln L_{q-1} \right), (7)
where L_q and L_{q-1} represent the likelihoods for models q and q − 1, respectively, with the corresponding scores arranged in decreasing order (so that ln L_q ≥ ln L_{q-1}). The statistic G follows an approximate chi-square distribution with 1 degree of freedom [42].
The second method was the likelihood ratio test (LRT). Each model step was compared to the final step using the statistic:
LRT = -2\left( \ln L_q - \ln L_s \right), (8)
where L_q and L_s are the likelihoods for model q and the final model s, which has the lowest log-likelihood score. The LRT statistic follows an approximate chi-square distribution with degrees of freedom determined by the number of parameters tested from step q + 1 to the final step s [43].
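For illustration, the helpers below compute the two statistics and their chi-square p-values with SciPy, assuming the inputs are log-likelihood values rather than the negative LL scores of Equation (3).

```python
from scipy.stats import chi2

def g_test(lnL_q, lnL_q_minus_1):
    """Equation (7): improvement of model q over model q-1, 1 degree of freedom."""
    g = 2.0 * (lnL_q - lnL_q_minus_1)
    return g, chi2.sf(g, df=1)                 # statistic and p-value

def likelihood_ratio_test(lnL_q, lnL_s, df):
    """Equation (8): model at step q versus the final model s; df is the number
    of parameters added between step q+1 and the final step s."""
    lrt = -2.0 * (lnL_q - lnL_s)
    return lrt, chi2.sf(lrt, df=df)
```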

3.4.5. Selection Process

Based on the above considerations, the selection strategy is outlined as follows:
1.
Assess weight consistency: Utilize K-fold cross-validation on the training set to evaluate the consistency of weights across all covariates. Calculate the mean, variance, and coefficient of variation for each weight within each fold, and identify the features with the lowest CV as potential candidates.
2.
Correlation-based covariate selection: Conduct a correlation analysis among the features and retain those with the lowest CV selection criteria, ensuring no correlated covariates remain in the final dataset.
3.
Perform significance tests: Employ significance tests using the statistics G, LRT, AIC, and BIC to validate the candidate models. This process helps determine the significance of each individual covariate and allows for comparisons across different model configurations. Ultimately, select the model that strikes the best balance between log-likelihood score and simplicity.

3.5. Covariate Bands Estimation

Different types of covariates need to be considered to obtain valuable results when working with a multi-covariate model, which is more challenging when considering real operational data. This subsection defines the clustering technique to determine the band states that represent the operational state of an asset and the process to obtain the transition probability matrix for later use in the PHM.

3.5.1. Clustering Technique

In this research, GMM was employed to define the band ranges. Its superior clustering performance, easy interpretation, and robustness have been demonstrated in previous studies, surpassing other clustering techniques such as K-means. GMM is a probabilistic clustering algorithm that represents a dataset as a mixture of multiple Gaussian distributions. It estimates the parameters of these distributions, including means and covariances, and assigns data points to clusters based on the likelihood of belonging to each component.
The training process depends on the data volume of the dataset. When working with high-volume datasets, the data points are simplified using Sturges’ rule to streamline computation, dividing them into equivalent subgroups with representative class marks for use in the model. However, Sturges’ rule should be avoided on low-volume datasets to prevent possible data bias.
To identify the optimal number of clusters for GMM, this research utilized the AIC and BIC information criteria. The guiding principle for decision-making is to select the number of clusters associated with the lowest values of these criteria.
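A minimal sketch of this step with scikit-learn, assuming one-dimensional composite values f(γ, z); the Sturges' rule simplification is only indicated as a comment, and the number of clusters is chosen by the lowest BIC (the AIC can be used analogously).

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def fit_state_bands(f_values, max_clusters=8, random_state=0):
    """Cluster composite covariate values f(gamma, z) into operational state bands."""
    X = np.asarray(f_values, dtype=float).reshape(-1, 1)

    # For high-volume datasets, Sturges' rule could first reduce X to class marks:
    # n_bins = int(np.ceil(1 + np.log2(len(X))))

    models = [GaussianMixture(n_components=k, random_state=random_state).fit(X)
              for k in range(1, max_clusters + 1)]
    best = models[int(np.argmin([m.bic(X) for m in models]))]

    # Label states 1..K in ascending order of the cluster centroids on the f axis
    order = np.argsort(best.means_.ravel())
    centroids = best.means_.ravel()[order]
    rank_of_component = np.argsort(order)
    states = rank_of_component[best.predict(X)] + 1
    return best, centroids, states
```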

3.5.2. Band Estimation Process

The following steps depict the process of obtaining the state bands:
1.
Selecting the optimal number of clusters: Apply Sturges’ rule if necessary. Subsequently, analyze the AIC and BIC scores and choose the number of clusters associated with the lowest values.
2.
Definition of band ranges: After training the model and acquiring the cluster centroids, the probabilities of each data point belonging to a cluster are obtained from the GMM and used to define the ranges for each state. Boundary values on the f(γ, z) axis are selected in ascending order until the number of clusters obtained in step 1 is covered. The resulting band ranges serve as the operational band states within the PHM.
3.
State classification: Each data point is classified according to its state as determined in the previous step. Additionally, if the probability of belonging to a cluster is less than 1%, a random choice process is used for data classification.
4.
Obtaining the transition and probability matrices: After classifying each data point, the total time spent in each state is calculated to derive the transition state rate matrix in Equation (10) and the transition state probability matrix in Equation (11).

3.6. Reliability Metrics

In this section, the reliability metrics for benchmarking the framework are defined. The metrics included Conditional Reliability, Remaining Useful Life (RUL), Mean Time Between Interventions (MTBI), Mean Time Between Failures (MTBF), maintenance costs, and the warning-limit function for defining the maintenance policy.
The conditional reliability serves as an effective tool for illustrating how each state or survival time impacts the operational reliability of equipment, as visualized in Equation (9).
R(t, Z(t)) = \exp\left[ -\left( \frac{t}{\eta} \right)^{\beta} \exp\left( \sum_{i=1}^{m} \gamma_i Z_i(t) \right) \right]. (9)
However, this form can often be challenging to calculate. Therefore, the product-integral method, as outlined by Ref. [44], was employed to approximate the conditional reliability. To do so, it was necessary to estimate the transition rate matrix Λ and the transition probability matrix P ( x ) as shown in Equations (10) and (11). Here, n i j represents transitions from state i to j, and A i represents the total time spent in state i.
\Lambda = \left[ \lambda_{ij}(t) \right], \quad \lambda_{ij} = \frac{n_{ij}}{A_i}, \; i \neq j, \quad \lambda_{ii} = -\sum_{j \neq i} \lambda_{ij}, (10)
P(x) = \exp(\Lambda x) = \sum_{i=0}^{\infty} \frac{(\Lambda x)^i}{i!}. (11)
Subsequently, following this method, the conditional reliability can be expressed as:
R(t \mid x, i) = \sum_{j} L_{ij}(x, t). (12)
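The sketch below assembles the transition rate matrix Λ from transition counts and state dwell times and evaluates P(x) via SciPy's matrix exponential, following Equations (10) and (11); the bookkeeping of n_ij and A_i is a simplified stand-in for the classification procedure of Section 3.5.2.

```python
import numpy as np
from scipy.linalg import expm

def transition_matrices(state_sequence, dwell_times, n_states, x):
    """Estimate Lambda (Equation (10)) and P(x) = exp(Lambda * x) (Equation (11)).

    state_sequence : visited states (1..n_states) in chronological order
    dwell_times    : time spent in each visited state, same length
    x              : time step at which P(x) is evaluated
    """
    n_ij = np.zeros((n_states, n_states))
    A_i = np.zeros(n_states)
    for k in range(len(state_sequence) - 1):
        s_from, s_to = state_sequence[k], state_sequence[k + 1]
        A_i[s_from - 1] += dwell_times[k]
        if s_to != s_from:
            n_ij[s_from - 1, s_to - 1] += 1
    A_i[state_sequence[-1] - 1] += dwell_times[-1]

    with np.errstate(divide="ignore", invalid="ignore"):
        lam = np.where(A_i[:, None] > 0.0, n_ij / A_i[:, None], 0.0)
    np.fill_diagonal(lam, 0.0)
    np.fill_diagonal(lam, -lam.sum(axis=1))   # lambda_ii = -sum_{j != i} lambda_ij
    return lam, expm(lam * x)

# Illustrative usage with three states and a 500 h evaluation step
Lam, P = transition_matrices([1, 1, 2, 3, 2], [500, 500, 500, 500, 500],
                             n_states=3, x=500.0)
```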
The RUL is also a crucial metric for CBM, as it evaluates the remaining lifespan of an asset. The calculation method is shown in Equation (13), based on values obtained from the reliability curves.
RUL(t, Z(t)) = \int_{t}^{\infty} R(u, Z(u)) \, du. (13)
Other metrics that were also used included the MTBF, which represents the average expected time between failures, while the MTBI is a metric similar to the MTBF but with broader applicability. It considers uncertainties in an asset’s operating environment, autonomy, and the reliability of inherent components.
With respect to the optimization of the CBM model, it was demonstrated that the expected average cost in the long run depends on the optimal threshold risk as proposed by the authors of Ref. [37]. However, their initial approach involved complex mathematical calculations and typically required an iterative process to obtain the optimal risk. Therefore, the expected cost can be approximated using the MTBI, as represented in Equation (14).
\phi(T_p, Z(T_p)) = \frac{C_p \cdot R(T_p, Z(T_p)) + C_c \cdot \left( 1 - R(T_p, Z(T_p)) \right)}{MTBI(T_p, Z(T_p))}, (14)
where C p represents the cost of preventive maintenance, C c represents the cost of corrective maintenance, R ( t , Z ( t ) ) represents the conditional reliability, M T B I ( t , Z ( t ) ) represents the expected time between interventions, and T p represents the instant of time when the optimal cost of ϕ ( t ) is achieved.
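As an illustration of Equation (14), the helper below evaluates the expected cost rate on a grid of candidate intervention times and returns its minimizer; the default 1:7 preventive-to-corrective cost ratio mirrors the case study, and the reliability and MTBI arrays are assumed to come from the PHM-ML model.

```python
import numpy as np

def optimal_intervention_time(Tp_grid, reliability, mtbi, cp=1.0, cc=7.0):
    """Equation (14): long-run expected cost per unit time over candidate times Tp.

    reliability : R(Tp, Z(Tp)) evaluated on Tp_grid
    mtbi        : MTBI(Tp, Z(Tp)) evaluated on Tp_grid
    """
    phi = (cp * reliability + cc * (1.0 - reliability)) / mtbi
    i_opt = int(np.argmin(phi))
    return phi, Tp_grid[i_opt], phi[i_opt]   # cost curve, optimal T_p, optimal cost
```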
Once the optimal time and cost have been calculated, the optimal risk can be derived by replacing t in Equation (1) with T p . Thus, the expression for the optimal hazard is described in Equation (15).
d^* = \frac{\beta}{\eta} \left( \frac{T_p^*}{\eta} \right)^{\beta - 1} \exp\left( \sum_{i=1}^{n} \gamma_i Z_i(T_p^*) \right). (15)
Then, the warning-limit function can be defined in terms of d * in Equation (16).
\gamma Z(t) \geq \varphi(t) = \ln\left( \frac{d^* \eta^{\beta}}{K \beta} \right) - (\beta - 1) \ln(t). (16)
This function allows the definition of a threshold that represents the operational state of equipment based on historical conditional data. This enables the establishment of a predictive maintenance policy using real-time data. If the data surpass this threshold, intervention is necessary to avoid imminent failure. The number of thresholds depends on the number of operational states provided by the band estimation process, with the threshold representing immediate equipment intervention determined by the state with the lowest conditional reliability.
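A sketch of the resulting decision rule, using the reconstructed form of Equation (16); the constant K is kept as a generic term and should be set according to the exact formulation of Refs. [2,36], while d*, β, and η come from the preceding steps.

```python
import numpy as np

def warning_limit(t, d_star, beta, eta, K=1.0):
    """Threshold phi(t) on the composite covariate gamma*Z(t), Equation (16)."""
    return np.log(d_star * eta ** beta / (K * beta)) - (beta - 1.0) * np.log(t)

def needs_intervention(t, z, gamma, d_star, beta, eta, K=1.0):
    """Flag intervention when the composite covariate exceeds the warning limit."""
    return float(np.dot(gamma, z)) >= warning_limit(t, d_star, beta, eta, K)
```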

3.7. Proposed Framework

After establishing the necessary tools and techniques, the steps for implementing the predictive maintenance policy are outlined in Figure 1. The processes are categorized into four groups as follows:
1.
Preliminary selection: During this initial phase, data are loaded, and a preliminary selection of all covariates is conducted for use in the PHM-ML model. Furthermore, the quantity of data allocated for training and testing datasets is also determined.
2.
Covariate weight estimation and covariate selection process: In this second phase, the main objective is to estimate the covariate weights and determine their significance in both preliminary and final covariate selection. The process begins with the preliminary selection of all covariates, which entails determining scaling types and calculating initial Weibull parameters. Subsequently, the seed value is obtained, methods for limiting covariate values are defined, and either GA or IPOPT is selected to optimize the log-likelihood function in Equation (3). This step yields the parameters and weights of all covariates. Then, K-fold cross-validation is employed to evaluate weight consistency based on the coefficient of variation. Following this, final model covariates are selected via correlation analysis. The process is then iterated for covariate estimation using the previously selected parameters, resulting in the final Weibull parameters and covariate weights.
3.
Covariate band estimation: In this phase, GMM is applied to derive the ranges defining each state, following the procedure outlined in Section 3.5.2.
4.
Obtaining metrics for the maintenance policy: In this last section, the conditional reliability, RUL, MTBI, MTBF, and optimal costs are calculated by leveraging asset-specific conditional data to estimate the optimal risk level using the developed PHM-ML model, thereby deriving the warning-limit function. This function facilitates predictive intervention for assets, aiming to prevent immediate failure while balancing both risk and cost factors. Consequently, real-time data are overlaid into this function to establish the intervention decision rule; if an asset’s data point surpasses the warning-limit of the worst conditional state, intervention must be undertaken to avoid failure.

4. Case Study and Results Discussion

This study aimed to establish a predictive maintenance policy by analyzing operational data in a real-world scenario involving data from power transformers supplied by a Chilean energy distribution company. Therefore, data were collected from four databases, including transformer details, external tests, operational data, and intervention data, which were processed and merged into a single dataset for this case study.
As context, the company relied on its manual solution, requiring external input at every step. Their method initially generated a list of potential solutions with their respective Weibull parameters, covariate weights, and flags to indicate the feasibility of the solution. A user then selected the most appropriate solution from this list. The process was repeated if no feasible options were available.
The proposed framework aimed to streamline that process by integrating the covariate selection, weight estimation, and band estimation processes to obtain a feasible solution without requiring the user to make an initial choice. Two analyses were conducted to test this approach’s effectiveness. The first evaluated the quality of the decision policy derived from a preliminary solution and from the optimal log-likelihood solution obtained in previous studies, and a sensitivity analysis was conducted to check the impacts on the resulting maintenance policy. The second focused on methods for selecting the optimal covariates, examining how they affected the decision policy, and comparing them with the previous results.

4.1. Pre-Model Preparation

The compound dataset comprised operational and maintenance details from 17 electrical transformers, corresponding to 93 data points available from May 2014 to June 2017 and 15 vital signals as shown in Table 1.
For the initial analysis, only four covariates were considered with the objective of comparing them to a preliminary solution provided by the electric company. The dataset is represented in Table 2, and the preliminary covariate weights are shown in Table 3. Due to the varied data composition for each power transformer and the crucial need to sort data by operating time for an accurate analysis, the training set comprised data from power transformers 1 to 13, representing 65% of the dataset. The remaining 35% represented the testing data using transformers 13 to 17. The second analysis utilized all the covariates mentioned in Table 1.
Finally, for both analyses, the study horizon spanned 100,000 h, divided into 500 h intervals. A precision level (Δ) of 100 h was chosen for the conditional reliability calculation to improve computational efficiency. In addition, a preventive-to-corrective cost ratio of 1:7 was selected.

4.2. Analysis 1: Covariate Weight Sensitivity Analysis

The first analysis aimed to assess the performance of the covariate weight and band estimation process relative to the preliminary solution shown in Table 3, exploring the impact of these covariate weights on the decision policy. Additionally, it investigated whether the optimization parameters identified as optimal in Ref. [3] could enhance the accuracy of the decision rule. Furthermore, a sensitivity analysis was conducted on the latter solution to examine whether minor modifications to the optimization parameters could lead to variations in overall reliability metrics.
A preliminary sensitivity analysis was conducted to determine the optimal GA hyper-parameters. By changing the values of the gene population (from 50 to 500 in steps of 50), crossover probability (from 0 to 1 in steps of 0.1), and mutation probability (from 0 to 1 in steps of 0.1), approximately 4000 cases were created. An extract of these cases is shown in Table 4 with the top best and worst cases. The optimal results indicated the selection of a population of 500 genes, the crossover probability was set to one, the gene distribution of each offspring was set to 0.1, the mutation probability was equal to 0.7, and each value in the offspring underwent mutation by applying a Gaussian additive mutation with a mean of zero, a standard deviation of one, and an independent probability of 0.3 for mutation occurrence. Finally, 1000 generations were taken into account.
A total of eight cases were defined for the analysis. Case 1 corresponded to the original solution provided by the Chilean company, as detailed in Table 3. Case 2 represented the optimal solution identified based on the partial log-likelihood score from [3], utilizing the optimization parameters IPOPT, NS transformation, and IPCRidge. The remaining cases involved sensitivity analyses applied to the optimization parameters of case 2. Table 5 outlines the optimization parameters, log-likelihood scores, Weibull parameters, and resulting covariate weights used for each solution.
In Table 5, the S o l v e r column indicates the optimization methodology: fmincon for the original solver, IPOPT for the traditional solver, and GA for genetic algorithms. The S c a l e r column indicates the type of scaler: NS using the custom solution transformation and MinMax employing the scaling with values between zero and one. The B o u n d s column specifies the technique for establishing covariate weight constraints: Fixed for bounds set in custom software based on expert criteria and IPCR using the proposed method for bound determination. L L represents the partial log-likelihood score from Equation (3), where a lower value technically indicates a better solution. Lastly, the Weibull parameters and covariate weights are displayed in the remaining columns.
Here, case 1 stood out as having the worst LL score, while case 2 achieved the best score, although it remained to be seen whether this also translated into a good maintenance policy. Furthermore, the cases that utilized the IPOPT solver outperformed those that used the GA solver. This suggested that the solver method significantly impacted the quality of the solution, as evidenced by similar LL scores and β and η values.
One noteworthy finding was that varying the bound determination method produced nearly identical results in terms of covariate weights and LL score. This was evident in cases 3, 5, and 8, as well as cases 4 and 6. Consequently, the total number of cases was streamlined to five: cases 1, 2, and 7 were designated as M1, M2, and M3, respectively. Cases 3, 5, and 8 were consolidated as M4, while cases 4 and 6 were represented by M5. Table 6 shows these modifications.
Before proceeding with the analysis, it is essential to highlight a specific modification applied exclusively to cases using the MinMax scaler (M2 and M3). When calculating the f(γ, z) values for these cases, they had a magnitude on the order of 1 × 10⁻². This resulted in centroids with matching magnitudes. Consequently, when assessing the risk using Equation (1), the term exp(f(γ, z)) became close to one, which implied that the conditional reliability was not penalized by the covariates, resulting in curves closely aligned with the reliability computed solely from the Weibull parameters.
An example illustrating this can be seen in Figure 2. Here, the blue curve represents the estimated reliability using only the Weibull parameters (without any covariates). The conditional reliability curves are overlaid on this plot, indicating no penalty when considering the covariates. To address this issue, the data points were not scaled after obtaining the covariate weights. Table 7 specifies where this criterion was applied.

4.2.1. Reliability Metrics

The results in the following figures are presented in this format: Case MX: Solver Approach–Scaler Method–Bounds Determination. Figure 3 and Table 8 display the cluster boundaries and centroids produced for the five cases. It is evident that case M5 resembled the reference case, case M1. Conversely, case M2 displayed a noticeably different data segmentation due to its optimization parameters. The remaining cases exhibited results comparable to case M1.
Figure 4 shows considerable dispersion among the conditional reliability states in case M2 and case M3, with state 3 in case M2 incurring a more significant penalty. Two factors could contribute to this outcome. Firstly, it was noted that a lower weight was assigned to γ R.Die in both cases, as seen in Table 5, while the other cases shared a similar magnitude. Secondly, the exclusive application of the MinMax scaler to these cases and the subsequent non-scaling of the data points before entering them into the PHM may also have affected these results.
Comparing the results from the GA and IPOPT solvers revealed significant differences in conditional reliability results. The GA tended to produce more optimistic reliability curves, leading to less similarity with the reference case. This discrepancy suggested a potential non-optimal fit in this analysis. Further investigation underscored the crucial role of the scaler in determining the state dispersion and curve shapes, as evidenced in cases M1, M4, and M5.
Overall, the dispersion of the reliability curves appeared to be influenced by the choice of the scaler, while the length of the conditional reliability seemed to depend on the optimization method used. Similar trends were observed for the RUL in Figure 5.
Table 9 presents the optimal time values derived from the cost function, confirming these trends. The case most closely aligned with the reference case was M2, where states 2 and 3 displayed values that were very similar. This suggested that combining IPOPT with the NS transformation could result in policies similar to those of case M1. Additionally, all the generated cases tended to show times greater than those of case M1.
Finally, Figure 6 shows the decision policies for all cases. In the reference case M1, power transformer 13 required immediate intervention at the 20,000 h mark, as it surpassed the worst conditional warning-limit graph. Similarly, transformers 14 and 15 were in the caution zone approximately at the 18,000 h and 20,000 h marks, indicating that preparations for their replacement should be made soon. The remaining transformers did not require intervention, as they were operating correctly. These insights allowed for predictive planning of interventions.
Case M2, despite being the best solution in terms of log-likelihood, did not generate a reasonable policy compared to case M1. It suggested that all transformers should be replaced at around the 5000 h mark, resulting in a very aggressive policy. Similar outcomes were observed in case M5, where the policy suggested replacing all equipment after just 2000 h of use. Case M3 performed the worst, recommending immediate replacement as soon as the equipment was put into operation.
While most configurations failed to generate satisfactory policies, case M4 produced a feasible policy, proposing equipment intervention at the 22,000 h mark, which suggested that transformer 13 was nearing the point of needing intervention. Although case M4 did not perfectly match the reference case, it demonstrated that the IPOPT, NS, and IPCRidge configuration yielded the closest results. This showed the framework’s capability to produce feasible policies when the correct optimization parameters were applied.

4.2.2. Results Discussion

Considering all the results, it is evident that the anticipated similarities between cases M2 and M1 did not materialize. A key contributing factor was the lower weight assigned to covariate γ R.Die, as shown in Table 5. This lower weight resulted in a significant divergence in the data composition, as observed in Figure 3, which was also reflected in case M3. These findings highlighted the significant influence of the covariate weights on the resulting policies.
Conversely, preliminary findings indicated that imposing constraints on covariate domains, whether fixed, boundless, or with IPCRidge boundaries, prior to optimizing the weights, had minimal impact on the results. This was evident in the original cases 3, 5, and 8, where identical Weibull parameters and covariate weights were obtained.
Furthermore, a sensitivity analysis conducted on case M2 provided crucial insights into the impact of scaling methods on reliability metrics. Specifically, cases using the NS transformation exhibited dispersion in state curves similar to case M1, whereas cases utilizing the MinMax scaler showed greater dispersion, leading to increased penalization for each state.
The results also indicated that the use of GA did not yield satisfactory outcomes across all reliability metrics. This was surprising given that the Weibull parameters in Table 5 and the cluster results in Figure 3 closely resembled the reference case. However, the weights of the covariate γ R.Die were not close to those obtained in case M1, reinforcing the high significance of this covariate to the PHM.
Additionally, policies employing IPOPT as the solver method and the NS transformation for data scaling most closely resembled the reference case, as demonstrated in case M4.
Overall, the sensitivity analysis was necessary to reveal that achieving a superior log-likelihood score does not guarantee a feasible maintenance strategy.
Finally, this analysis emphasized the importance of covariate weights in shaping the final decision policy and highlighted the need to accurately test and estimate covariate weights and state bands. The key question now is whether the selected covariates in this analysis were optimal or if superior alternatives existed. This is explored in subsequent analyses.

4.3. Analysis 2: Covariate Selection Sensitivity

In the second analysis, the objective was to evaluate the framework’s covariate selection process and assess how these selections impacted the final decision policy. Consequently, all the vital signals listed in Table 1 were considered. Additionally, based on the results from the preceding analysis, the optimization engine IPOPT, the NS scaler, and the IPCRidge bound technique were employed in this case study.
To initiate the analysis, a model including all the covariates was constructed to determine their respective weights. However, to gain a deeper understanding of how the weights varied with different covariate combinations, their weights were calculated for every possible combination, and their median values were computed (this approach helped to mitigate biases from extreme weight values compared to using the average). The results are presented in Table 10, offering a comprehensive analysis across various covariate combinations. For example, out of the total 15 covariates, when considering only 2 covariates, a total of \binom{15}{2} = 105 combinations were calculated. Then, the median values for each covariate were obtained from these cases.
As evident from Table 10, the covariates Tint, Dift, C2H4, TGC, R.Die, and % Hum consistently demonstrated substantial weight values across all scenarios. This suggested that a more parsimonious model could potentially include only these covariates without compromising the log-likelihood score. In contrast, covariates CH4, GTF, and TGC-CO initially exhibited high weight values. However, their weights decreased significantly when more than nine covariates were considered.
To evaluate the consistency of the covariates, a cross-validation analysis using K folds was applied. The underlying principle was to vary the dataset utilized for estimating the covariate weights in each fold. Thus, if a covariate demonstrated a consistent and relatively low coefficient of variation compared to others, it enhanced its suitability for inclusion in the final model.
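This consistency check can be sketched as follows. The fold handling relies on scikit-learn's KFold, while fit_weights is a stand-in for the framework's PHM weight estimation (not an actual API), and data is assumed to be an array of inspection records.

import numpy as np
from sklearn.model_selection import KFold

def weight_stability(data, covariates, fit_weights, n_splits=5):
    """Mean, standard deviation, and coefficient of variation (%) of each
    covariate weight across K folds, in the spirit of Tables 11 and 12."""
    folds = KFold(n_splits=n_splits, shuffle=True, random_state=0)
    per_fold = {c: [] for c in covariates}
    for train_idx, _ in folds.split(data):
        weights = fit_weights(data[train_idx], covariates)  # returns {covariate: weight}
        for c in covariates:
            per_fold[c].append(weights[c])
    stats = {}
    for c, values in per_fold.items():
        mean, std = np.mean(values), np.std(values, ddof=1)
        stats[c] = (mean, std, 100 * std / mean if mean else np.inf)
    return stats

A covariate whose weight keeps a low coefficient of variation across folds is judged more stable and therefore a stronger candidate for the final model.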
In this analysis, 5 and 10 folds were utilized, and the results, including the average, standard deviation, and coefficient of variation, are presented in Table 11 and Table 12. To enhance readability, the tables exclude the covariates CH4, GTF, and TGC-CO, which exhibited extremely high CV values.
The covariates C2H4, R.Die, and % Hum demonstrated relatively lower CV compared to the other covariates. This suggested that the weights assigned to C2H4 and R.Die were more consistent across all models. While % Hum had a higher CV than the aforementioned covariates, its CV remained significantly lower when compared to the rest. Consequently, based on this analysis, C2H4, R.Die, and % Hum should be prioritized when selecting the final model from the framework. Moreover, these covariates also received substantial weights in Table 10 when considering a set of 15 covariates. This may indicate that using the model with all covariates could allow us to assess the importance of each covariate, aiding decision-making in covariate selection.
To extend the analysis, a focused examination was conducted on the impact of each covariate on the log-likelihood score, as presented in Table 13. Interestingly, C2H4 significantly influenced the log-likelihood score. Results for TGC and % Hum are not included, as no solution was found when each was considered separately.
To determine whether these results were statistically significant, three criteria were used to assess the contribution of the covariates to the reduction in the log-likelihood score: the chi-squared test for the difference in LL scores, the likelihood ratio test (LRT), and the Akaike value, as shown in Table 14.
The principal finding was that adding covariates to the model led to a significant reduction in the LL score. Furthermore, the chi-squared test revealed that the covariate producing significant changes in the model was C2H4 (p ≤ 0.05), a finding supported by the lowest Akaike value when this covariate was introduced into the model. Additionally, the LRT indicated that % Hum could also introduce significant changes (p ≤ 0.05). Consequently, models considering only C2H4 and both C2H4 and % Hum were used for further testing.
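These checks reduce to simple formulas on the reported log-likelihood values (treated here as negative log-likelihoods, so lower is better). The sketch below uses SciPy's chi-squared distribution; the example numbers are the rounded values for the covariate-free model and the single-covariate C2H4 model from Tables 13–15, so the reproduced statistics differ slightly from the tabulated ones.

from scipy.stats import chi2

def significance(nll_without, nll_with, added_params=1):
    """Chi-squared statistic, p-value, and AIC when covariates are added to a PHM.
    LL values are treated as negative log-likelihoods (lower is better)."""
    g = 2.0 * (nll_without - nll_with)          # likelihood-ratio / chi-squared statistic
    p_value = chi2.sf(g, df=added_params)
    aic = 2.0 * added_params + 2.0 * nll_with   # Akaike information criterion
    return g, p_value, aic

# Covariate-free model (LL = 763.9) versus the C2H4-only model (LL = 748.5):
g, p, aic = significance(763.9, 748.5, added_params=1)
print(f"G = {g:.2f}, p = {p:.2e}, AIC = {aic:.1f}")  # large G, very small p-value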
Continuing with the covariate selection process, a correlation analysis was applied to identify and exclude correlated features that contributed minimally to the model. Figure 7 presents the p-values of the Pearson correlation test, where a p-value below the 5% significance level indicates a correlation between features. The results revealed that multiple covariates exhibited a high degree of correlation, which could be addressed by removing certain features. The key question was which features to remove, a crucial decision since several removal combinations were possible. Consider, for instance, the temperature covariates: Tint and DifT were correlated because DifT represents the difference between Tint and Text. One possible solution would be to remove these two covariates and use only Text; however, based on the previous analysis presented in Table 10, both Tint and DifT had higher covariate weights than Text, making this selection less advisable. Alternatively, should Tint or DifT be removed from the model? To address this and similar cases objectively, features with a high CV were removed until no correlation was detected; in other words, covariates with a smaller CV were prioritized to remain in the model. Using this criterion, DifT, C2H4, R.Die, and % Hum formed the combination of features without any correlation, as shown in Figure 8, and were considered for the final model.
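A compact sketch of this elimination loop is given below. It assumes a pandas DataFrame of covariate measurements and a precomputed coefficient of variation per covariate (e.g., from the cross-validation step), and it uses SciPy's pearsonr for the pairwise p-values; the function name and interface are illustrative assumptions rather than the framework's actual code.

from itertools import combinations
import pandas as pd
from scipy.stats import pearsonr

def drop_correlated(df, cv, alpha=0.05):
    """Iteratively drop the highest-CV covariate involved in any significant
    Pearson correlation (p-value <= alpha) until no correlated pair remains."""
    kept = list(df.columns)
    while True:
        correlated = [
            (a, b) for a, b in combinations(kept, 2)
            if pearsonr(df[a], df[b])[1] <= alpha
        ]
        if not correlated:
            return kept
        # Among covariates still involved in correlations, remove the least stable one.
        involved = {c for pair in correlated for c in pair}
        kept.remove(max(involved, key=lambda c: cv[c]))

Prioritizing low-CV covariates in this way is what leaves DifT, C2H4, R.Die, and % Hum as the uncorrelated subset retained for the final model.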
Considering all the analyses presented thus far, Table 15 displays all the models for comparison, sorted by the LL score.
Case 0 represents the model with no covariates. Case A includes only C2H4 as a covariate due to its significance in reducing the LL score, as indicated by the chi-squared test and Akaike Value. Case B is based on covariate selection using expert criteria. Case C comprises covariates showing high significance according to the log-likelihood ratio test. Case D includes covariates with substantial weights from Table 10. Case E represents covariates identified through the correlation and CV analysis. Lastly, case F is the model including all covariates (Dda, Text, IFS, CH4, GTF, CO2, Co, and TGC-CO are considered but not displayed for table readability).
Table 15 shows that all compared models (A–F) achieved lower LL scores than case 0, indicating a better fit to the data and, potentially, more accurate predictions. However, there was a trade-off: while adding extra features improved the LL score, it also tended to slightly increase the β values and decrease the η values.
Models A, C, and E were particularly interesting, as they had the lowest AIC and BIC scores; both criteria penalize models for being overly complex. Notably, even though model A had the absolute best AIC and BIC scores, its LL score was slightly higher than those of models C and E. This highlights the challenge of finding an accurate model that is not overly complex.
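As a quick consistency check, and treating the reported LL as a negative log-likelihood with k the number of covariates and n the number of observations, the information criteria follow their usual definitions:

AIC = 2k + 2\,\mathrm{LL}, \qquad BIC = k \ln(n) + 2\,\mathrm{LL}

For model A (k = 1, LL ≈ 748.5), AIC ≈ 2(1) + 2(748.5) = 1499.0, which matches the reported 1498.90 up to rounding. The BIC replaces the 2k term with the k ln(n) penalty, which is why the gap between the two criteria widens as more covariates are added (compare models A and F in Table 15).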

4.3.1. Decision Rule Performance Comparison

The maintenance policies derived from the models outlined in Table 15 are illustrated in Figure 9. Model A, employing a single covariate, exhibited a notably aggressive maintenance policy, suggesting immediate intervention for all equipment surpassing 4500 h. Similarly, model C, incorporating two covariates, suggested intervention for all transformers. However, a notable enhancement was observed with the addition of more covariates, extending the intervention threshold to around 7000 h.
Models D, E, and F presented nearly identical policies, marked by significant improvements over previous models, suggesting intervention after 15,000 h of operation. The primary difference among these models lay in the number of selected covariates, indicating initially that as the model incorporated more covariates, it became less sensitive to variations induced by these factors. Furthermore, these models proposed a more conservative approach compared to model B, the top performer in the previous analysis.
Upon closer examination, the policies derived from models D, E, and F bore resemblance to the reference case depicted in Figure 6a, with the primary difference being the dispersion of the obtained curves and a more cautious intervention policy. This shows the framework's capability to generate effective policies even without expert criteria, proving its utility when dealing with data lacking such recommendations.

4.3.2. Results Discussion

Based on the analysis of both the covariate selection process and the resulting policies, several insightful findings emerge.
In examining the policies, the importance of covariate selection in shaping the decision rule for maintenance policy becomes evident. The number and choice of covariates significantly influence the approach to equipment intervention. For instance, in models A and C, a limited selection of covariates led to an overly aggressive policy, prompting unnecessary interventions. In contrast, selecting less appropriate covariates, as in model B, resulted in a more lenient policy, thereby increasing the risk of premature equipment failure.
In relation to the covariate selection procedure proposed by the framework, models D, E, and F emerged as interesting cases. Despite differences in their selection methods, these models yielded practically identical results. This could suggest that while the strategy for optimizing covariate weights in model E generated good results, it might not be necessary, as models with more covariates produced identical outcomes. However, a more detailed analysis revealed that models D and F included the covariates selected by model E. Therefore, it was inferred that these were the minimum covariates that must be considered to ensure adequate and feasible maintenance policies. Nevertheless, models D and F illustrated the substantial value provided by the covariate weight estimation process, as it appropriately assigned importance to the covariates to generate pertinent policies.
As observed in this analysis, a systematic approach to covariate selection can lead to building a more parsimonious model. In this instance, this was achieved through the combined use of cross-validation and the coefficient of variation for feature selection in the correlation analysis, yielding results similar to the reference case.
Furthermore, the results suggest that a valid strategy within the proposed framework is to select all covariates and let the optimization engine determine the relative importance of each. Notably, this approach demonstrated identical performance to model E and case 0.

5. Conclusions

This paper provided a new and enhanced PHM-ML formulation to manage real operating time and condition monitoring data, facilitating optimal decision-making for predictive maintenance policies when diverse conditions are involved. A multi-covariate model was developed to address substantial heterogeneity across conditions and to evaluate combined measurements that often lack a direct physical interpretation. Consequently, the proposed automated framework demonstrated high capability in selecting prioritized conditions, estimating covariate weights, determining state band ranges for reliability analysis, and evaluating predictive maintenance interventions under multi-condition scenarios.
The framework underwent rigorous testing through two sensitivity analyses to discern the impact of the modeling choices on the resulting predictive policy. The first analysis evaluated the framework's efficacy, highlighting that the traditional log-likelihood score did not automatically ensure feasible policies. It also demonstrated that the bound determination methods yielded identical results, that the scaler significantly affected the dispersion of the state curves, and that the sensitivity analysis aided in identifying optimal optimization parameters to produce feasible policies.
The second analysis, focusing on the quality prediction of the framework, delved deeper into the covariate selection process. It revealed that the proposed methodology generated results similar to the reference case. Furthermore, it was shown that the proposed method allowed for the selection of optimal covariates to generate policies nearly identical to the reference. Thus, any model containing these covariates as a basis produced similar results, even when all covariates were selected, demonstrating the robustness of the covariate weight optimization process. Additionally, given the challenge of interpreting combined conditions, the scheme yielded enhanced results compared to expert criteria.
In summary, the proposed PHM-ML framework offers significant advantages by seamlessly integrating multi-covariate scenarios under a CBM-predictive policy for asset management purposes across diverse industries. As further work, addressing the challenges observed with the MinMax scaler emerges as a key focus. While the proposed solution does not directly impact the framework’s functionality, exploring alternative approaches and data-driven techniques could yield valuable insights for enhancing the policies generated during the implementation of this scaler. Moreover, future research could integrate the framework’s outcomes with expert judgment elicitation techniques, thereby fostering a more comprehensive decision-making process in asset management.

Author Contributions

Conceptualization, D.R.G.; Methodology, D.R.G.; Validation, D.R.G., C.M., R.M., F.K. and P.V.; Formal analysis, D.R.G.; Investigation, D.R.G. and C.M.; Writing—original draft preparation, D.R.G. and C.M.; Writing—review and editing, D.R.G., C.M., R.M., F.K. and P.V.; Supervision, D.R.G.; Project administration and Funding acquisition, D.R.G. All authors have read and agreed to the published version of the manuscript.

Funding

This research was supported by ANID through FONDEF – Concurso IDeA I+D (Chile). Grant Number: ID22I10348.

Data Availability Statement

Data is contained within the article.

Acknowledgments

The authors wish to acknowledge the financial support of this study by Agencia Nacional de Investigación y Desarrollo (ANID) through Fondo de Fomento al Desarrollo Científico y Tecnológico (FONDEF) of the Chilean Government (Project FONDEF IDeA ID22I10348).

Conflicts of Interest

The authors declare no conflicts of interest.

Figure 1. The steps of the proposed framework.
Figure 2. Results of the conditional reliability for case 2 using MinMax to scale the covariate weights and data points.
Figure 3. f(γ, z) for all cases with their respective centroids and cluster limits.
Figure 4. Results of the conditional reliability for each case.
Figure 5. RUL results for each case.
Figure 6. Predictive maintenance policy for each case.
Figure 7. Pearson correlation p-values of all covariates.
Figure 8. Pearson correlation p-values of the DifT, C2H4, R.Die, and % Hum covariates.
Figure 9. Predictive maintenance policy for each model.
Table 1. Information about the dataset covariates.
Covariate | Measurement | Description
Date | h | Date of measurement in hours since 1 January 1985.
FT | h | Operating time in hours since latest replacement.
Interv | - | Intervention type (where 0 = Failure, 1 = Suspension).
ID | - | Transformer identifier.
Dda | MVA | Electrical demand measured in megavolt-ampere.
Tint | °C | Internal temperature of the transformer.
Text | °C | Ambient temperature.
DifT | °C | Difference between internal and ambient temperatures.
IFS | A | Failure or working current.
C2H4 | ppm | Ethylene gas in the transformer oil.
C2H6 | ppm | Ethane gas in the transformer oil.
CH4 | ppm | Methane gas in the transformer oil.
GTF | ppm | Typical failure gases in the transformer oil.
CO2 | ppm | Carbon dioxide gas in the transformer oil.
Co | ppm | Carbon monoxide gas in the transformer oil.
TGC | ppm | Total combustible gases in the transformer oil.
TGC-CO | ppm | Total combustible gases excluding Co in the transformer oil.
R.Die | kV | Dielectric strength of the transformer oil in kilovolts.
% Hum | % | Humidity percentage in the interior of the transformer.
Table 2. Representation of the dataset.
Date | FT | Interv | ID | Dda (MVA) | Tint (°C) | C2H4 (%) | R.Die (kV)
259,180162,7700118.1642.139264.1948
259,610163,2100116.4542.108264.0284
283,300186,9000116.6593.327663.7482
270,580156,630028.9531.689666.2913
271,010157,0600212.8501.745265.9824
Table 3. Covariate weights and Weibull parameters obtained by the fmincon method.
β | η | γ Dda | γ Tint | γ C2H4 | γ R.Die
1.4913 | 117,531 | 0.0038 | 0.4012 | 0.0404 | 2.7561
Table 4. Extract of the sensitivity analysis to obtain the best GA hyper-parameters, sorted by log-likelihood scores.
Population Size | Crossover Probability | Mutation Probability | LL
500 | 1 | 0.7 | 757.955
300 | 0.6 | 0.6 | 757.957
500 | 1 | 0.8 | 759.175
100 | 0.2 | 1 | 877.462
200 | 0.3 | 0.5 | 880.133
50 | 0.9 | 1 | 883.064
Table 5. Optimization parameters, log-likelihood scores, and covariate weights considered for each case.
Case | Solver | Scaler | Bounds | LL | β | η | γ Dda | γ Tint | γ C2H4 | γ R.Die
1 | fmincon | NS | Fixed | 760.94 | 1.49 | 117,531 | 0.00 | 0.40 | 0.04 | 2.76
2 | IPOPT | MinMax | IPCR | 747.14 | 2.86 | 145,706 | 0.00 | 0.00 | 0.08 | 0.07
3 | IPOPT | NS | IPCR | 748.35 | 2.78 | 144,569 | 0.00 | 0.00 | 0.07 | 1.96
4 | GA | NS | IPCR | 756.79 | 1.67 | 127,421 | 0.00 | 0.19 | 0.05 | 1.05
5 | IPOPT | NS | – | 748.35 | 2.78 | 144,571 | 0.00 | 0.00 | 0.07 | 1.96
6 | GA | NS | – | 756.79 | 1.67 | 127,414 | 0.00 | 0.19 | 0.05 | 1.05
7 | IPOPT | MinMax | Fixed | 748.31 | 2.78 | 144,960 | 0.00 | 0.00 | 0.07 | 0.20
8 | IPOPT | NS | Fixed | 748.35 | 2.78 | 144,572 | 0.00 | 0.00 | 0.07 | 1.96
Table 6. Weibull parameters and covariate weights for each case.
Case | Previously | LL | β | η | γ Dda | γ Tint | γ C2H4 | γ R.Die
M1 | C1 | 760.94 | 1.49 | 117,531 | 0.00 | 0.40 | 0.04 | 2.76
M2 | C2 | 747.14 | 2.86 | 145,706 | 0.00 | 0.00 | 0.08 | 0.07
M3 | C7 | 748.31 | 2.78 | 144,960 | 0.00 | 0.00 | 0.07 | 0.20
M4 | C3 C5 C8 | 748.35 | 2.78 | 144,569 | 0.00 | 0.00 | 0.08 | 1.96
M5 | C4 C6 | 756.79 | 1.67 | 127,421 | 0.00 | 0.19 | 0.05 | 1.05
Table 7. Normalization criteria and cluster number for each case.
Case | Scaler | Modified Normalization Criteria | Number of Clusters
M1 | NS | No | 3
M2 | MinMax | Yes | 3
M3 | MinMax | Yes | 2
M4 | NS | No | 3
M5 | NS | No | 2
Table 8. Centroids and limits for each case.
Case | S1 | S2 | S3 | Limit 1 | Limit 2 | Limit 3
M1 | 2.80 | 2.99 | 2.92 | 0–2.90 | 2.90–2.93 | 2.93–∞
M2 | 4.51 | 5.14 | 7.015 | 0–4.77 | 4.77–5.66 | 5.66–∞
M3 | 12.66 | 14.36 | - | 0–13.08 | 13.08–∞ | -
M4 | 1.81 | 1.92 | - | 0–1.87 | 1.87–∞ | -
M5 | 1.09 | 1.14 | 1.18 | 0–1.13 | 1.13–1.15 | 1.15–∞
Table 9. Optimal time (in hours) taking into account the results of the cost function for each case.
Case | State 1 | State 2 | State 3
M1 | 9000 | 8500 | 8000
M2 | 13,000 | 10,500 | 5500
M3 | 18,000 | 15,500 | -
M4 | 16,500 | 15,000 | -
M5 | 24,500 | 23,500 | 23,000
Table 10. The median values of the covariate weights computed by considering all possible combinations of covariates in the model.
No. Cov. | Dda | Tint | Text | DifT | IFS | C2H4 | C2H6 | CH4 | GTF | CO2 | Co | TGC | TGC-CO | R.Die | % Hum
1 | 0.00 | 0.13 | 0.00 | 0.16 | 0.00 | 0.79 | 0.00 | 0.99 | 0.76 | 0.13 | 0.00 | 0.00 | 0.88 | 0.85 | 0.00
2 | 0.00 | 0.17 | 0.13 | 0.22 | 0.00 | 0.79 | 0.00 | 0.99 | 0.76 | 0.00 | 0.00 | 0.15 | 0.88 | 0.85 | 0.12
3 | 0.00 | 0.17 | 0.12 | 0.22 | 0.00 | 0.78 | 0.00 | 0.76 | 0.76 | 0.00 | 0.15 | 0.15 | 0.88 | 0.75 | 0.15
4 | 0.00 | 0.17 | 0.00 | 0.18 | 0.00 | 0.74 | 0.00 | 0.35 | 0.58 | 0.00 | 0.15 | 0.15 | 0.88 | 0.74 | 0.21
5 | 0.00 | 0.17 | 0.00 | 0.17 | 0.00 | 0.74 | 0.00 | 0.17 | 0.38 | 0.00 | 0.15 | 0.15 | 0.77 | 0.74 | 0.27
6 | 0.00 | 0.17 | 0.00 | 0.17 | 0.00 | 0.74 | 0.00 | 0.15 | 0.25 | 0.00 | 0.15 | 0.18 | 0.68 | 0.83 | 0.29
7 | 0.00 | 0.17 | 0.00 | 0.17 | 0.00 | 0.74 | 0.00 | 0.15 | 0.23 | 0.00 | 0.15 | 0.18 | 0.68 | 0.84 | 0.31
8 | 0.00 | 0.17 | 0.00 | 0.17 | 0.00 | 0.74 | 0.00 | 0.13 | 0.19 | 0.00 | 0.15 | 0.19 | 0.63 | 0.87 | 0.32
9 | 0.00 | 0.17 | 0.00 | 0.18 | 0.00 | 0.74 | 0.00 | 0.00 | 0.00 | 0.00 | 0.15 | 0.41 | 0.00 | 1.96 | 0.35
10 | 0.00 | 0.18 | 0.00 | 0.68 | 0.00 | 0.74 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.41 | 0.00 | 1.96 | 0.35
11 | 0.00 | 0.19 | 0.00 | 0.68 | 0.00 | 0.76 | 0.17 | 0.00 | 0.00 | 0.00 | 0.00 | 0.41 | 0.00 | 1.96 | 0.35
12 | 0.00 | 0.19 | 0.00 | 0.88 | 0.00 | 0.78 | 0.17 | 0.00 | 0.00 | 0.00 | 0.00 | 0.18 | 0.00 | 2.29 | 0.35
13 | 0.00 | 0.19 | 0.00 | 0.88 | 0.00 | 0.79 | 0.17 | 0.00 | 0.00 | 0.00 | 0.00 | 0.18 | 0.00 | 2.29 | 0.35
14 | 0.00 | 0.19 | 0.00 | 0.88 | 0.00 | 0.79 | 0.17 | 0.00 | 0.00 | 0.00 | 0.00 | 0.18 | 0.00 | 2.30 | 0.35
15 | 0.00 | 0.19 | 0.00 | 0.88 | 0.00 | 0.79 | 0.17 | 0.00 | 0.00 | 0.00 | 0.00 | 0.18 | 0.00 | 2.30 | 0.35
Table 11. The mean, standard deviation, and coefficient of variation for the covariate weights when utilizing 5 folds.
 | Tint | DifT | C2H4 | C2H6 | CH4 | GTF | TGC | TGC-CO | R.Die | % Hum
Mean | 0.0551 | 0.0467 | 0.0629 | 0.0009 | 0.0011 | 0.0010 | 0.0056 | 0.0011 | 7.9905 | 0.3413
STDV | 0.123 | 0.061 | 0.033 | 0.001 | 0.003 | 0.002 | 0.008 | 0.003 | 5.579 | 0.259
CV | 223 | 130 | 51 | 141 | 223 | 223 | 148 | 223 | 69 | 76
Table 12. The mean, standard deviation, and coefficient of variation for the covariate weights when utilizing 10 folds.
 | Tint | DifT | C2H4 | C2H6 | CH4 | GTF | TGC | TGC-CO | R.Die | % Hum
Mean | 0.0657 | 0.0698 | 0.0608 | 0.0014 | 0.0037 | 0.0034 | 0.0092 | 0.0037 | 7.8151 | 0.3331
STDV | 0.090 | 0.088 | 0.019 | 0.002 | 0.005 | 0.005 | 0.014 | 0.006 | 2.948 | 0.271
CV | 137 | 125 | 31 | 149 | 145 | 144 | 149 | 149 | 38 | 81
Table 13. Log-likelihood scores and covariate weight values when using only one covariate.
Covariate | LL | Beta | Eta | Weight
Dda | 752.7 | 2.5 | 170,543 | 0.00
Tint | 752.7 | 2.5 | 170,542 | 0.13
Text | 752.7 | 2.5 | 170,542 | 0.00
DifT | 752.7 | 2.5 | 170,542 | 0.16
IFS | 752.7 | 2.5 | 170,543 | 0.00
C2H4 | 748.5 | 2.8 | 150,534 | 0.79
C2H6 | 752.7 | 2.5 | 170,542 | 0.00
CH4 | 752.3 | 2.6 | 156,825 | 0.99
GTF | 752.2 | 2.6 | 158,074 | 0.76
CO2 | 752.7 | 2.5 | 170,543 | 0.13
Co | 752.7 | 2.5 | 170,542 | 0.00
TGC-CO | 752.2 | 2.6 | 157,132 | 0.88
R.Die | 752.7 | 2.5 | 170,543 | 0.85
Table 14. LL scores, chi-squared test, log-likelihood ratio test, and Akaike values when adding a covariate to the model to check significance.
Covariate | LL | -2 LL | No. Var. | G | df | p-Value | LRT | df | p-Value | Akaike
– | 763.9 | 1527.9 | 0 | – | – | – | – | – | – | 1527.86
CO2 | 752.7 | 1505.3 | 1 | 22.53 | 1 | 0.00 | 31.45 | 15 | 0.01 | 1503.34
Tint | 752.7 | 1505.3 | 2 | 0.00 | 1 | 0.99 | 8.92 | 14 | 0.84 | 1501.33
DifT | 752.7 | 1505.3 | 3 | 0.00 | 1 | 0.99 | 8.92 | 13 | 0.78 | 1499.33
R.Die | 752.7 | 1505.3 | 4 | 0.00 | 1 | 0.98 | 8.92 | 12 | 0.71 | 1497.33
Text | 752.7 | 1505.3 | 5 | 0.00 | 1 | 1.00 | 8.92 | 11 | 0.63 | 1495.33
C2H6 | 752.7 | 1505.3 | 6 | 0.00 | 1 | 1.00 | 8.92 | 10 | 0.54 | 1493.33
CH4 | 752.3 | 1504.5 | 7 | 0.81 | 1 | 0.37 | 8.92 | 9 | 0.44 | 1490.52
IFS | 752.3 | 1504.5 | 8 | 0.00 | 1 | 1.00 | 8.11 | 8 | 0.42 | 1488.52
Dda | 752.3 | 1504.5 | 9 | 0.00 | 1 | 1.00 | 8.11 | 7 | 0.32 | 1486.52
TGC | 752.1 | 1504.1 | 10 | 0.40 | 1 | 0.53 | 8.11 | 6 | 0.23 | 1484.12
Co | 752.1 | 1504.1 | 11 | 0.00 | 1 | 1.00 | 7.71 | 5 | 0.17 | 1482.12
GTF | 752.1 | 1504.1 | 12 | 0.02 | 1 | 0.88 | 7.71 | 4 | 0.10 | 1480.10
TGC-CO | 752.0 | 1504.0 | 13 | 0.06 | 1 | 0.81 | 7.69 | 3 | 0.05 | 1478.04
% Hum | 752.0 | 1504.0 | 14 | 0.01 | 1 | 0.92 | 7.63 | 2 | 0.02 | 1476.03
C2H4 | 748.2 | 1496.4 | 15 | 7.62 | 1 | 0.01 | 7.62 | 1 | 0.01 | 1466.41
Table 15. Final model parameters along with their respective LL, Akaike, and BIC scores.
Model | Beta | Eta | Tint | DifT | C2H4 | C2H6 | TGC | R.Die | % Hum | LL | # Var | Akaike | BIC
Model 0 | 1.60 | 209,019 | – | – | – | – | – | – | – | 763.9 | 0 | 1527.86 | 1527.86
Model A | 2.75 | 150,534 | – | – | 0.79 | – | – | – | – | 748.5 | 1 | 1498.90 *** | 1501.05
Model B | 2.78 | 144,572 | 0.00 | – | 0.07 | – | – | 1.96 | – | 748.4 | 4 | 1504.71 | 1513.28
Model C | 2.76 | 145,335 | – | – | 0.74 | – | – | – | 0.21 | 748.4 | 2 | 1500.74 ** | 1505.03
Model D | 2.79 | 132,488 | 0.16 | 0.97 | 0.79 | – | 0.54 | 2.39 | 0.35 | 748.2 | 6 | 1508.42 | 1521.28
Model E | 2.79 | 132,491 | – | 0.10 | 0.08 | – | – | 2.39 | 0.34 | 748.2 | 4 | 1504.42 * | 1512.99
Model F | 2.79 | 132,353 | 0.19 | 0.88 | 0.79 | 0.17 | 0.18 | 2.30 | 0.35 | 748.2 | 15 | 1526.41 | 1558.56
The best performances are indicated by *, **, and ***, respectively.