Article

Toward Predictive Maintenance of Biomedical Equipment in Moroccan Public Hospitals: A Data-Driven Structuring Approach

by Jihanne Moufid *, Rim Koulali, Khalid Moussaid and Noreddine Abghour
LIS Laboratory, Mathematics and Computer Science Department, Faculty of Sciences Aïn Chock, Hassan II University of Casablanca, Casablanca 20100, Morocco
* Author to whom correspondence should be addressed.
Appl. Sci. 2025, 15(20), 10983; https://doi.org/10.3390/app152010983
Submission received: 30 August 2025 / Revised: 5 October 2025 / Accepted: 10 October 2025 / Published: 13 October 2025

Abstract

Predictive maintenance (PdM) of biomedical equipment is increasingly recognized as a strategic lever to enhance reliability and ensure continuity of care. Yet, in resource-limited hospitals, implementation is hindered by fragmented data sources, non-standardized codification, and weak interoperability. Few studies have demonstrated the feasibility of structuring PdM data from real hospital interventions in middle-income countries. This work presents a prototype data structuring pipeline applied to six public hospitals in the Casablanca–Settat region of Morocco. The pipeline consolidates 6816 validated maintenance interventions from 780 devices across 30 departments and integrates normalized reliability indicators (Failure Rate, MTBF, MTTR corrected with IQR, and Downtime Hours). It ensures semantic harmonization, auditability, and reproducibility, resulting in a structured and interoperable dataset that constitutes a regional first in the Moroccan hospital context. To illustrate predictive potential, a proof-of-concept Random Forest model was evaluated. It achieved AUROC = 0.65 on the full imbalanced dataset and AUROC = 0.82 on a balanced 2000-intervention subset, confirming the dataset’s discriminative value while reflecting real-world challenges. This work bridges the gap between conceptual PdM frameworks and operational hospital realities, and establishes a replicable foundation for AI-driven predictive maintenance in low-resource healthcare environments.

1. Introduction

The digitalization of the healthcare sector represents a major structural transformation, driven by the integration of advanced technologies, including the Internet of Medical Things (IoMT), Artificial Intelligence (AI), cloud computing, and big data analytics [1]. This evolution fosters the emergence of interconnected and intelligent hospital ecosystems that aim to optimize both clinical and technical performance. While early initiatives primarily focused on electronic health records (EHRs), telemedicine, and eHealth services, recent innovation efforts increasingly extend to the reliability and availability of critical medical devices.
Predictive maintenance (PdM) has therefore emerged as a key component of Healthcare 4.0. It is defined as a set of procedures that continuously monitor equipment conditions and forecast potential failures through the systematic collection and analysis of operational data [2]. By supporting just-in-time maintenance planning, PdM minimizes unexpected breakdowns, reduces downtime, and enhances continuity of care.
Despite its recognized potential, the implementation of PdM in low- and middle-income countries (LMICs) remains limited. Structural barriers include the absence of unified data governance, fragmented formats of maintenance interventions (e.g., paper forms, spreadsheets, heterogeneous systems), the lack of standardized technical indicators (e.g., failure rate, mean time to repair), and limited interoperability between technical and clinical platforms [3,4,5,6]. Studies in Africa and Asia have highlighted the direct consequences of these deficiencies, including ineffective maintenance decisions, premature replacement of functional devices, and prolonged unavailability of critical equipment [6,7,8]. These challenges are not unique to Morocco but are shared across many LMICs, which reinforces the broader relevance of this work.
The Moroccan healthcare system illustrates these obstacles. It is organized into multiple tiers, including University Hospital Centers, Regional Hospitals, Provincial Hospitals, Day Hospitals, and both urban and rural health centers. While this multi-level structure is essential for addressing territorial healthcare needs, it results in significant heterogeneity in terms of digital infrastructure, technical management, and maintenance practices. Some first-level facilities have partially digitized their biomedical services using Computerized Maintenance Management Systems (CMMS), whereas others still struggle to centralize maintenance intervention histories or adopt standardized device nomenclatures. This systemic fragmentation underscores the urgent need for a coherent technical and organizational interoperability framework, which is a prerequisite for the nationwide implementation of intelligent biomedical maintenance.
The development of AI-based solutions further emphasizes the necessity of high-quality datasets. As highlighted in previous studies, machine learning algorithms can only produce reliable outcomes if the input data is clean, consistent, and standardized [2]. Data preprocessing thus becomes a critical, and often the most resource-intensive, step in PdM initiatives, especially in heterogeneous hospital environments.
To address these challenges, this work introduces a structured prototype pipeline for consolidating and preparing real-world hospital maintenance data. The pipeline integrates cleaning, semantic normalization, feature derivation, and validation steps to generate a reproducible, interoperable, and AI-ready dataset. The predictive experiment serves as an exploratory benchmark, illustrating the practical added value of the structured dataset for machine learning tasks and confirming its suitability as a foundation for predictive maintenance research.
The aims of this work are as follows:
  • To analyze the state of the art in biomedical maintenance strategies, with a focus on predictive approaches and supporting digital architectures.
  • To identify structural constraints encountered in low-resource hospital environments, particularly with respect to data governance, digital infrastructure, and organizational maturity.
  • To demonstrate the feasibility and relevance of a multi-institutional prototype data structuring pipeline as a foundation for predictive maintenance models that are contextualized, reproducible, and scalable within the Moroccan healthcare system and transferable to other LMIC contexts facing similar challenges.
This paper is organized as follows: Section 2 reviews related work, including theoretical foundations, AI-based strategies, and challenges in low-resource settings. Section 3 outlines the study context and the data structuring methodology. Section 4 presents the main results, combining descriptive statistics with exploratory analyses and a preliminary supervised predictive experiment. Section 5 discusses the key findings, and Section 6 concludes the paper with perspectives for future work.

2. Related Work

2.1. Theoretical Foundations of Biomedical Maintenance

Biomedical maintenance has traditionally alternated between preventive strategies, aimed at minimizing unplanned downtime through scheduled interventions [9,10], and corrective approaches, carried out post-failure but often costly and disruptive. To address these limitations, structured frameworks have been introduced, classifying devices by criticality, frequency of use, and vulnerability to failure [11]. Such models integrate operational dimensions such as planning, documentation, and regulatory compliance, often overlooked in purely technical approaches.
Data-driven implementations further highlight the benefits of structured frameworks. For instance, the integration of Bayesian networks into hospital maintenance platforms demonstrated significant cost reductions by identifying critical failure factors [1]. Parallel to this, modular architectures aligned with Industry 4.0 concepts incorporate IoT, big data analytics, AI, and visualization tools, as illustrated by the five-layer model in [12]. While largely conceptual and rarely validated in hospitals, these approaches outline pathways toward multi-technology PdM pipelines. Adaptations remain necessary in resource-limited contexts, where lightweight CMMS can substitute for more complex infrastructures [13].
Core technical indicators such as MTBF, MTTR, availability, failure recurrence, and criticality indices remain central to predictive modeling, particularly when integrated with CMMS or sensor data [11]. A systematic review [14] reinforces this by framing maintenance as a lifecycle process and identifying three prioritization models: criticality matrices, weighted scoring, and machine learning algorithms (e.g., k-means, Random Forest, SVM). Reliability-Centered Maintenance (RCM) and FMEA continue to provide methodological foundations for assessing criticality and guiding resource allocation [15,16].

2.2. Advances in Predictive Maintenance and Artificial Intelligence

The rising complexity of biomedical devices has made maintenance a strategic priority. Strategies have evolved from reactive and scheduled interventions to predictive paradigms enabled by IoT, embedded computing, and AI [14,17]. PdM leverages historical logs and real-time monitoring to anticipate failures, with documented benefits including reduced downtime, optimized resource allocation, and cost savings [18,19].
Empirical studies report cost reductions of up to 25% and improved availability [17].
Methodologically, PdM relies on structured maintenance records and sensor data. Early models used regression and decision trees, while recent approaches apply ML algorithms such as k-NN, SVM, and CNN for failure prediction, RUL estimation, and criticality classification [20,21,22]. Adoption, however, requires multidisciplinary coordination and often encounters cultural barriers, such as reluctance to act preemptively or limited trust in algorithmic decisions [23].
Recent work also incorporates unstructured data from CMMS reports using NLP methods such as LDA [24], though standardization challenges persist. Broader limitations include scarce high-quality datasets, frequent reliance on simulated data, and interoperability issues across heterogeneous systems [25,26]. To address these gaps, hybrid models integrate conventional indicators (MTBF, MTTR, availability, criticality) with predictive components, improving accuracy and contextual relevance [19]. Deterministic and multicriteria frameworks for equipment replacement have further demonstrated the value of operational data in prioritizing actions under budget constraints [27,28].
Emerging architectures, including SaMD systems and IoT-based predictive models, highlight regulatory and validation requirements similar to those in clinical AI [29]. CNN-based vibration analysis has achieved accuracies above 98% [18], while predictive monitoring of ventilators illustrates the feasibility of targeted PdM applications [2,30]. Cloud-based analytics and lightweight alternatives, such as MWIR thermal imaging [31], complement AI-driven approaches by offering scalable or low-cost fault detection solutions.
In summary, biomedical PdM stands at the intersection of technology, regulation, and hospital resource optimization. Hybrid approaches combining empirical indicators and AI techniques appear most promising for building resilient and clinically relevant maintenance systems. Building on these advances, artificial intelligence has become a central driver of PdM frameworks. Enabling technologies such as IoT, big data, and ML support anomaly detection, health state classification, and RUL estimation, with reported accuracies often above 90% [2,17,32].
A wide range of models has been explored, from SVM and Random Forest to CNN, LSTM, and ensemble methods such as XGBoost and k-NN [32,33]. Studies have reported strong performance in contexts such as autoclave monitoring in Rwanda, ventilator diagnostics, and failure recurrence in Malaysian hospitals [30,33]. However, many rely on simulated or small-scale datasets, limiting generalizability [25,34]. Deep learning approaches also face interpretability challenges, raising concerns about clinical trust and regulatory compliance [35].
Cost considerations remain underexplored, despite the significant investments required in sensors, IT infrastructure, and staff training, particularly in low-resource settings [15,36,37,38]. Methodological diversity, ranging from simple classifiers to multimodal frameworks, further complicates cross-study comparisons. Attempts to combine structured maintenance records with unstructured CMMS logs show promise [39], yet issues of governance, workforce readiness, and interoperability continue to hinder adoption [19]. Recent initiatives propose lightweight or pragmatic solutions, including CMMS-based platforms, sensor-efficient models, no-code Edge AI tools, and Bayesian network approaches for root cause analysis [40,41,42,43]. Despite encouraging results, sustainable improvements in availability and cost-efficiency remain contingent on clean, standardized, and validated datasets, which are essential for deploying AI-powered PdM at scale [15,16,20,44].

2.3. Specific Constraints in Low-Resource Settings

Morocco’s digital health transformation, accelerated by the COVID-19 pandemic, has introduced EHRs, hospital management platforms, telemedicine, and expanded health coverage [35,45,46]. These initiatives, aligned with broader public sector reforms [3,41,47], aim to improve equity and governance, particularly in underserved areas [42]. High-level political support has reinforced this momentum through investments in HMIS modernization and digital identifiers, with the long-term goal of establishing an interoperable, data-driven healthcare system.
Despite these advances, biomedical equipment maintenance remains largely reactive, lacking preventive or predictive planning [43]. Challenges include absent standardized indicators, fragmented CMMS platforms, heterogeneous suppliers, limited engineering capacity, and prolonged repair delays [11]. In many LMICs, reliance on paper-based or inconsistent records further undermines traceability and predictive readiness [7]. Studies confirm that poor data quality and limited staff training weaken planning capacity and hinder the adoption of intelligent tools [48].
Regulatory gaps compound these limitations: fragmented standards and lengthy certification processes delay evaluation of AI-based medical devices, a problem even more acute in resource-constrained contexts. Scaling PdM in LMICs therefore requires robust governance, standardized data protocols, and adaptive legal frameworks, yet disparities in infrastructure, digital literacy, and staff training remain persistent obstacles [11,49].

2.4. Strategic Positioning of the Present Study

The integration of IoT, AI, and big data into Morocco’s healthcare system offers a concrete opportunity to implement predictive maintenance strategies adapted to local constraints [50,51,52]. While such technologies are widely deployed in industry, recent evidence confirms their relevance for healthcare, including in resource-constrained environments.
Several studies validate the operational and economic potential of PdM. A deterministic model covering 3640 devices across Saudi hospitals achieved a 36% cost reduction over ten years [53], while a QFD-based framework in the UAE improved ventilator prioritization [54]. Complementary initiatives in Sri Lanka and India demonstrated that standardized procedures, staff training, and integrated corrective–preventive strategies enhance equipment availability and mitigate systemic challenges [55,56]. Locally adapted solutions further illustrate scalability: lightweight CMMS platforms in Benin [57], Arduino-based monitoring prototypes in Rwanda [58], and a PdM platform at CHU Ibn Sina in Morocco [59] all underscore the feasibility of context-aware deployments.
Nonetheless, many contributions remain limited to simulated or pilot environments with little clinical validation [25,32]. Decision-support tools such as BI dashboards and locally developed CMMS show promise in low-digitalization contexts [13,60], yet their effectiveness depends heavily on institutional maturity and resource availability. Conceptual Industry 4.0 frameworks integrating IoT, AI, and visualization layers offer additional guidance, but often lack hospital validation [12].
Taken together, these findings reinforce the relevance of the present work, which builds on validated technologies, local feedback, and systemic constraints. The originality of this contribution lies in demonstrating the predictive potential of a real, multi-institutional dataset, thereby addressing a critical gap in biomedical maintenance research across LMICs.

3. Materials and Methods

3.1. Study Context and Data Sources

The transition toward digitalization in the Moroccan hospital sector is still hindered by heterogeneous information systems [35], manual maintenance workflows, and limited adoption of AI technologies in maintenance operations. Standardizing data formats and metadata therefore becomes a critical step in improving interoperability in low-standardization environments [61]. To this end, labels and data semantics were harmonized across hospitals, while the origin and processing steps of each file were systematically logged to ensure interoperability, auditability, and generalizability of the pipeline across institutions.
This work, conducted across multiple public healthcare institutions in the Casablanca–Settat region, aims to structure multi-institutional datasets as a foundation for predictive maintenance of critical biomedical devices. This issue reflects a broader continental trend, as the lack of harmonized formats and the fragmentation of health information systems continue to impede the strategic use of healthcare data [62].
Failure and maintenance interventions were collected from six public hospitals in the Casablanca–Settat region, Morocco’s most densely populated area and a representative setting for data consolidation. Covering the period 2014–2022, the sources were highly heterogeneous, including non-standardized Excel spreadsheets, handwritten forms, technical reports, and partial CMMS exports intermittently available since 2022.
These documents presented terminological inconsistencies, irregular date formats, and non-uniform codification of equipment and departments. After semantic harmonization, over 7000 interventions were consolidated. Entries related to decommissioned equipment and incomplete interventions were removed, resulting in 6816 validated interventions from 780 devices across 30 departments. This reduction (−2.6%) reflects the elimination of duplicated, incomplete, or obsolete records during quality filtering.

3.2. Structuring Prototype Pipeline (M1–M7)

Based on modern data science principles [63], a configuration-driven prototype pipeline was developed to process heterogeneous hospital interventions and ensure reproducibility, auditability, and integration readiness. Previous studies indicate that data preparation, including cleaning and structuring, can represent up to 80% of the total effort in analytics projects when sources are heterogeneous or incomplete [64,65], which justifies the emphasis on rigorous preprocessing as a prerequisite for predictive maintenance modeling. The pipeline integrates automated provenance logging: for each ingested file, metadata such as file name, sheet identifier, ingestion timestamp, SHA-256 fingerprint, and parser and mapping versions are systematically recorded to ensure full auditability. Figure 1 presents the modular BioMedStruct pipeline (M1–M7), formalizing the progression from heterogeneous data ingestion to integration readiness.
The BioMedStruct pipeline transforms heterogeneous hospital maintenance logs into a standardized, AI-ready dataset through a modular structure (M1–M7) aligned with modern data-engineering practices.
M1—Ingestion and provenance logging: Heterogeneous Excel files were imported, and for each file, metadata such as file name, sheet identifier, timestamp, and SHA-256 checksum were recorded to ensure full auditability. Each equipment record was assigned two complementary identifiers: an internal code (e.g., iq_real_001) automatically generated by the ingestion module to ensure traceability across hospitals, and the official inventory number (e.g., 9608/19) assigned by the biomedical department. This dual identification maintained referential integrity between digital records and physical assets.
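To make the provenance mechanism concrete, a minimal sketch of the M1 ingestion step is given below, assuming Python with pandas; the log file name (provenance_log.jsonl), the version tags, and the internal-code scheme are illustrative assumptions rather than the production implementation.

```python
import hashlib
import json
from datetime import datetime, timezone
from pathlib import Path

import pandas as pd

PARSER_VERSION = "0.3.1"     # hypothetical version tags recorded for auditability
MAPPING_VERSION = "2024-06"

def ingest_excel(path: str, sheet=0) -> pd.DataFrame:
    """Load one hospital Excel export and append its provenance record to an audit log."""
    raw_bytes = Path(path).read_bytes()
    provenance = {
        "file_name": Path(path).name,
        "sheet_id": sheet,
        "ingestion_timestamp": datetime.now(timezone.utc).isoformat(),
        "sha256_checksum": hashlib.sha256(raw_bytes).hexdigest(),
        "parser_version": PARSER_VERSION,
        "mapping_version": MAPPING_VERSION,
    }
    with open("provenance_log.jsonl", "a", encoding="utf-8") as log:
        log.write(json.dumps(provenance) + "\n")   # one JSON line per ingested file

    df = pd.read_excel(path, sheet_name=sheet)
    # Dual identification: internal traceability code plus the official inventory
    # number already present in the source file (column names assumed).
    df["internal_id"] = [f"iq_real_{i:03d}" for i in range(1, len(df) + 1)]
    df["file_source"] = provenance["file_name"]
    df["sha256_checksum"] = provenance["sha256_checksum"]
    return df
```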
M2—Cleaning and normalization: Duplicates and inconsistent date formats were corrected, while terminologies for services and failure types were semantically harmonized. Records corresponding to retired devices or incomplete interventions were excluded to preserve data reliability.
M3—Structuring: Variables were standardized into a unified schema including identifiers (equipment, hospital, department), temporal attributes (failure, intervention, repair dates), and maintenance descriptors (intervention type, failure type, status).
M4—Feature engineering: Reliability-related indicators were computed, including Mean Time Between Failures (MTBF), Mean Time to Repair (MTTR), Failure Rate (FR), and Downtime Hours (DH).
After these transformations, multi-level validation was conducted to ensure structural integrity and semantic coherence across hospitals. To verify longitudinal completeness, the structured dataset was cross-checked against quarterly hospital inventory reports listing all biomedical devices with their operational status (operational, under repair, retired). In this verification loop, any device marked under repair in quarter Q had to appear as repaired or still pending in Q + 1, preventing structural missingness. Consequently, temporal variables (failure date, repair date, downtime hours, repair duration) reached 0% missing after processing.
In parallel, a series of Quality Gates (QG) were implemented to ensure dataset completeness, semantic consistency, interoperability [63], and inter-hospital coherence. These controls verified the presence of mandatory fields such as intervention date, equipment ID, and department, while also ensuring standardized codification of identifiers and intervention types. Temporal plausibility was assessed by confirming that failure events always preceded their corresponding repairs, and potential duplicates were detected using composite keys. Finally, semantic normalization was applied to textual variables, particularly department names and failure categories, in order to reduce variance and harmonize heterogeneous labels into canonical forms.
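As an illustration, these quality gates can be expressed as a handful of dataframe checks; the sketch below assumes the harmonized column names of Table 2 and an illustrative department mapping, not the exact rules used in the pipeline.

```python
import pandas as pd

MANDATORY_FIELDS = ["intervention_date", "equipment_id", "department"]

# Illustrative mapping of heterogeneous department labels to canonical forms.
DEPARTMENT_MAP = {
    "bloc operatoire": "operating room",
    "rea": "intensive care",
    "reanimation": "intensive care",
}

def apply_quality_gates(df: pd.DataFrame) -> pd.DataFrame:
    """Return only the records passing the quality gates described above."""
    # QG1: mandatory fields must be present.
    df = df.dropna(subset=MANDATORY_FIELDS)

    # QG2: temporal plausibility -- a failure must precede its repair
    # (records with an open repair date are kept).
    plausible = pd.to_datetime(df["failure_date"]) <= pd.to_datetime(df["repair_date"])
    df = df[plausible | df["repair_date"].isna()]

    # QG3: duplicate detection on a composite key.
    df = df.drop_duplicates(subset=["equipment_id", "intervention_date", "failure_type"])

    # QG4: semantic normalization of department labels into canonical categories.
    return df.assign(
        department=df["department"].str.strip().str.lower().replace(DEPARTMENT_MAP)
    )
```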
This dual validation, combining organizational cross-checking and automated data-quality controls, ensured the structural and semantic consistency of the final dataset.
The main operations applied at each stage of the BioMedStruct prototype pipeline are summarized in Table 1, and the final validated schema includes 26 standardized variables, detailed in Table 2.
After the transformation stages, Figure 2 illustrates the conversion of heterogeneous biomedical records into a standardized, prediction-ready dataset.
Table 2 summarizes the 26 core variables of the structured dataset, including their type, unit, description, and the evolution of missingness before and after cleaning. The extended data dictionary (41 variables, including technical and derived fields) is provided in Appendix A for reproducibility and audit transparency.
Figure 3 illustrates the effect of semantic normalization on hospital department labels, showing how heterogeneous variants were consolidated into standardized categories.

3.3. Reliability, Maintainability, and Availability Indicators

To enable reproducible comparisons across heterogeneous hospitals, reliability and maintainability were assessed using normalized indicators widely adopted in reliability engineering and biomedical maintenance standards [66,67]. Four core metrics were retained, selected for their operational relevance, interpretability, and frequent use in biomedical reliability studies.
1. Failure Rate (FR)
$$FR = \frac{N_{\text{failures}}}{N_{\text{devices}} \times T_{\text{obs}}}$$
Expressed in failures per device·year, where $T_{\text{obs}}$ is the observation period in years. This indicator quantifies the average number of failures per device per year, allowing normalization across hospitals with different equipment stocks and follow-up durations.
2. Mean Time Between Failures (MTBF)
$$MTBF = \frac{\text{Total operating time}}{N_{\text{failures}}}$$
Expressed in days, MTBF measures the mean interval between two successive failures. MTBF is a direct proxy of equipment reliability.
3. Mean Time To Repair (MTTR)
$$MTTR = \frac{\sum_{i} TTR_i}{N_{\text{repairs}}}$$
Expressed in hours, where $TTR_i$ denotes the repair duration of intervention $i$. MTTR captures the maintainability dimension of biomedical equipment by quantifying average repair times.
4. Downtime Hours (DH)
$$DH = \frac{\sum_{i} TTR_i}{N_{\text{devices}} \times T_{\text{obs}}}$$
Expressed in hours per device·year, DH measures the annual downtime burden per device. DH integrates both frequency and duration of repairs, providing a synthetic availability-oriented indicator.
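To show how these four indicators are obtained from the structured dataset, a computational sketch is given below; it assumes pandas, the column names of Table 2, and the approximation that operating time equals the calendar observation window (2014–2022).

```python
import pandas as pd

T_OBS_YEARS = 9.0  # 2014-2022 observation window, in years

def reliability_indicators(df: pd.DataFrame) -> pd.DataFrame:
    """Compute FR, MTBF, MTTR and DH per department (a sketch, not the pipeline code)."""
    failures = df[df["intervention_type"] == "Curative"]   # corrective interventions
    grouped = failures.groupby("department")

    out = pd.DataFrame({
        "n_failures": grouped.size(),
        "n_devices": grouped["equipment_id"].nunique(),
        "total_repair_hours": grouped["repair_duration"].sum(),
    })
    # FR: failures per device-year.
    out["FR"] = out["n_failures"] / (out["n_devices"] * T_OBS_YEARS)
    # MTBF (days): total operating time / number of failures,
    # approximating operating time by devices x calendar window.
    out["MTBF"] = (out["n_devices"] * T_OBS_YEARS * 365) / out["n_failures"]
    # MTTR (hours): mean repair duration.
    out["MTTR"] = out["total_repair_hours"] / out["n_failures"]
    # DH: downtime hours per device-year.
    out["DH"] = out["total_repair_hours"] / (out["n_devices"] * T_OBS_YEARS)
    return out
```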

Robust Correction of MTTR

Exploration of the raw dataset revealed extreme repair times, with values exceeding 800 h. These anomalies mostly reflected administrative delays in closing work orders rather than true repair durations. To mitigate their impact, a robust correction was applied using the interquartile range (IQR) method.
The IQR is defined as follows:
$$IQR = Q_3 - Q_1$$
Values above $Q_3 + 1.5 \times IQR$ were capped at this threshold. This correction, widely applied in engineering and healthcare data analytics, preserves natural variability while limiting distortions caused by administrative artifacts.
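A minimal sketch of this correction, assuming pandas and the repair_duration column of the data dictionary, is shown below.

```python
import pandas as pd

def cap_outliers_iqr(values: pd.Series, k: float = 1.5) -> pd.Series:
    """Cap values above Q3 + k*IQR at that threshold (upper-side winsorization)."""
    q1, q3 = values.quantile(0.25), values.quantile(0.75)
    upper = q3 + k * (q3 - q1)
    return values.clip(upper=upper)

# Example (column name assumed from the data dictionary):
# df["repair_duration_corrected"] = cap_outliers_iqr(df["repair_duration"])
```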
Collectively, these four indicators form a standardized basis for assessing reliability and maintainability across hospitals. They support both descriptive comparisons and predictive modeling experiments, ensuring methodological alignment between exploratory statistics and machine learning tasks.

3.4. Predictive Modeling Protocol

  • Prediction task
The predictive task was formulated as a binary classification problem aligned with biomedical maintenance standards. Corrective interventions were assigned to the positive class ($Y = 1$), while preventive, inspection, adjustment, and scheduled revision interventions were assigned to the negative class ($Y = 0$). The binarization strictly relied on the original “type of maintenance” field contained in the hospital intervention records, and no artificial records or synthetic labels were introduced. This ensures consistency with hospital reporting practices, traceability to raw interventions, and methodological reproducibility.
The binary outcome variable $Y$ was defined as follows:
$$Y = \begin{cases} 1 & \text{if corrective (failure)} \\ 0 & \text{if non-failure} \end{cases}$$
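Deriving the label from the harmonized maintenance-type field is straightforward; the sketch below assumes the intervention_type vocabulary of Appendix A, where “Curative” denotes corrective interventions.

```python
import pandas as pd

def binarize_target(df: pd.DataFrame) -> pd.Series:
    """Y = 1 for corrective (failure) interventions, 0 for all other maintenance types."""
    return (df["intervention_type"] == "Curative").astype(int)

# Example: df["Y"] = binarize_target(df)
```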
  • Validation protocol
To ensure reproducibility and to avoid information leakage, two complementary validation strategies were applied. The first relied on RepeatedStratifiedGroupKFold (5 × 10), with grouping at the equipment identifier level to control intra-device correlations and prevent interventions from the same equipment from being split across folds. The second consisted of a temporal split, with training on the 2014–2019 period and testing on 2020–2022, combined with a roll-forward evaluation to approximate prospective deployment and assess temporal generalization.
Formally, the dataset can be expressed as follows:
$$D = \{(x_i, y_i, g_i)\}_{i=1}^{N}$$
where $x_i$ are the features, $y_i \in \{0, 1\}$ is the target variable, and $g_i$ is the equipment identifier. Group-level cross-validation ensures the following:
$$g_i \in T_{\text{train}} \Rightarrow g_i \notin T_{\text{test}}$$
thus preventing leakage across interventions from the same physical device. The temporal split is formally defined as follows:
$$T_{\text{train}} = \{x \mid \text{date}(x) \le 2019\}, \qquad T_{\text{test}} = \{x \mid \text{date}(x) \ge 2020\}, \qquad T_{\text{train}} \cap T_{\text{test}} = \varnothing$$
This design ensures that performance estimates are unbiased, both at the device level and in temporal deployment scenarios.
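A sketch of the two validation strategies is shown below, assuming scikit-learn and numpy arrays; the repeated stratified group K-fold is approximated by re-running StratifiedGroupKFold with different shuffle seeds, since scikit-learn does not ship a single class combining repetition, stratification, and grouping.

```python
import numpy as np
from sklearn.base import clone
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import StratifiedGroupKFold

def grouped_cv_auroc(estimator, X, y, groups, n_splits=5, n_repeats=10):
    """Repeated, stratified, group-aware CV; groups = equipment identifiers."""
    scores = []
    for repeat in range(n_repeats):
        cv = StratifiedGroupKFold(n_splits=n_splits, shuffle=True, random_state=repeat)
        for train_idx, test_idx in cv.split(X, y, groups):
            model = clone(estimator)                    # fresh copy per fold
            model.fit(X[train_idx], y[train_idx])
            proba = model.predict_proba(X[test_idx])[:, 1]
            scores.append(roc_auc_score(y[test_idx], proba))
    return float(np.mean(scores)), float(np.std(scores))

def temporal_split(df, year_col="intervention_year"):
    """2014-2019 for training, 2020-2022 for testing (column name assumed)."""
    return df[df[year_col] <= 2019], df[df[year_col] >= 2020]
```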
  • Subset definition
From the full dataset of 6816 validated interventions, a balanced subset of 2000 interventions was sampled. Stratification was applied by hospital, department, equipment identifier, and year to preserve representativeness. Corrective and non-failure interventions were adjusted to a 50/50 distribution to mitigate the natural imbalance (≈85/15). Randomness during sampling was controlled through a fixed seed (42), and the exclusion criteria covered incomplete interventions, duplicates, and interventions related to decommissioned equipment.
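One possible way to draw such a subset is sketched below; it assumes a binary column Y (as derived earlier) and the hospital_id and intervention_year fields, and the proportional within-stratum draw only approximates the full stratification by hospital, department, equipment identifier, and year.

```python
import pandas as pd

def balanced_subset(df: pd.DataFrame, n_total: int = 2000, seed: int = 42) -> pd.DataFrame:
    """Draw an approximately 50/50 corrective vs. non-failure subset."""
    per_class = n_total // 2
    parts = []
    for label in (0, 1):
        cls = df[df["Y"] == label]
        frac = per_class / len(cls)
        # Proportional draw within hospital-year strata to preserve representativeness.
        sampled = (
            cls.groupby(["hospital_id", "intervention_year"], group_keys=False)
               .apply(lambda g, f=frac: g.sample(frac=f, random_state=seed))
        )
        parts.append(sampled)
    return pd.concat(parts).sample(frac=1.0, random_state=seed)  # shuffle the subset
```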
  • Model configuration and baselines
Predictive modeling was conducted using a Random Forest classifier. For a given instance $x$, the probability estimate is computed as the average of the individual tree probabilities:
$$\hat{p}(y = 1 \mid x) = \frac{1}{T} \sum_{t=1}^{T} h_t(x)$$
where $T$ is the number of trees and $h_t(x) \in [0, 1]$ denotes the probability assigned to class 1 by tree $t$. The model was configured with 500 trees, class weights set to “balanced” to account for class imbalance, and a fixed random seed (42) to ensure reproducibility.
For comparison, a Logistic Regression model was implemented as a baseline. It was configured with L2 regularization, balanced class weights, the “liblinear” solver, and a maximum of 500 iterations. This baseline provides a classical, interpretable reference for assessing the added value of ensemble learning in the context of imbalanced hospital data.
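These configurations map directly onto scikit-learn estimators; the sketch below reproduces the stated hyperparameters and is not the authors’ exact code.

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression

# Random Forest: 500 trees, balanced class weights, fixed seed for reproducibility.
rf_model = RandomForestClassifier(
    n_estimators=500,
    class_weight="balanced",
    random_state=42,
)

# Logistic Regression baseline: L2 penalty, balanced class weights,
# liblinear solver, up to 500 iterations.
lr_baseline = LogisticRegression(
    penalty="l2",
    class_weight="balanced",
    solver="liblinear",
    max_iter=500,
)
```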

4. Results

4.1. Dataset Overview and Descriptive Statistics

This section provides a descriptive overview of the consolidated dataset prior to the computation of reliability-oriented indicators. The curated dataset comprises 6816 validated maintenance interventions, covering 780 biomedical devices across 410 equipment categories. It also documents more than 2300 distinct failure types, underscoring the heterogeneity of maintenance scenarios, and spans 30 clinical departments, thereby ensuring broad institutional coverage. Table 3 summarizes the key indicators.
A cross-institutional comparison was conducted to evaluate disparities among the six hospitals. Figure 4 reports the variability in the number of interventions and the diversity of biomedical devices. Hospital F recorded the highest number of incidents (1206), associated with 162 devices, 340 categories, and 19 clinical departments. While the number of equipment categories remained relatively stable across institutions (310–340), the number of reported failure types varied, indicating heterogeneity in monitoring practices, data granularity, and equipment complexity.
The distribution of intervention types is reported in Figure 5, indicating that corrective maintenance represents approximately 85% of interventions, while preventive actions account for only 15%. This imbalance reflects systemic constraints in Moroccan public hospitals, including limited resources, partial digitalization, administrative delays, and insufficient traceability of preventive actions.

4.2. Failure Characterization and Temporal Patterns

Failure characterization was conducted to analyze the distribution of interventions across services, the most frequent technical codes, and temporal dynamics.
Figure 6 reports the distribution of maintenance interventions by service. Interventions were concentrated in high-dependency units, with Operating Rooms accounting for the largest share, followed by Maternity, Radiology, and Laboratories. Intensive Care Units also reported a substantial number of incidents, reflecting both the criticality of equipment and the complexity of workflows.
The analysis of technical codes highlights recurrent failure categories. Figure 7 reports the five most frequent codes: MEC-02 (mechanical failures), AUT-05 (automation and sensor faults), ECL-01 (lighting faults), ELEC-04 (electronics), and LOG-03 (software anomalies). Each code exceeded 1400 occurrences. Since a single maintenance intervention could be associated with multiple codes (e.g., mechanical and electronic), the cumulative number of occurrences can exceed the number of unique interventions. These codes reflect the heterogeneity of biomedical maintenance, spanning mechanical wear, electronic components, and software anomalies.
Temporal analysis of interventions is reported in Figure 8. The number of failures increased steadily from 2014 to 2019, peaking at over 1100 incidents. This trend reflects the expansion of equipment inventories and improved traceability. In 2020, the COVID-19 pandemic [19] produced a mixed effect: increased failures in critical devices (ventilators, monitors, infusion pumps) coexisted with reduced activity in elective services, leading to underreporting in non-urgent interventions. From 2021 onwards, failures stabilized (≈800 per year). The deployment of CMMS platforms in 2022 further improved reporting accuracy and data structuring.
Overall, the variability across hospitals, the predominance of corrective interventions, and the observed temporal dynamics highlight the need for normalized reliability indicators to enable inter-service comparison and predictive modeling.

4.3. Reliability Indicators

Normalized reliability and maintainability indicators were computed to enable inter-service comparisons. The selected metrics are Failure Rate (FR), Mean Time Between Failures (MTBF), Mean Time to Repair (MTTR, corrected), and Downtime Hours (DH). Their definitions and units are provided in Section 3.
Table 4 summarizes the aggregated results and compares raw and corrected values for MTTR and DH.
The raw MTTR distribution exhibited extreme values, in some cases exceeding 800 h. These anomalies were not representative of actual repair durations but rather reflected administrative delays and structural bottlenecks. The IQR-based correction yielded an upper threshold of 384 h, above which values were capped. As a result, the corrected MTTR converged to 42 h, providing a more realistic estimate of effective repair times. Figure 9 and Figure 10 show the distributions of raw and corrected MTTR.
Service-level comparisons confirm heterogeneous reliability patterns. Figure 11 reports the FR across services, with Intensive Care Units showing the highest failure rates, followed by Operating Rooms, while Radiology displayed lower FR values. Figure 12 presents DH estimates, which align with the FR distribution and highlight higher downtime in critical units.
To integrate multiple metrics, a radar chart was constructed. Figure 13 presents normalized FR, MTBF, MTTR (corrected), and DH for the three most represented departments: Intensive Care, Radiology, and Operating Rooms. The radar profile highlights the critical burden of Intensive Care, the relatively favorable performance of Radiology, and the intermediate profile of Operating Rooms.

4.4. Predictive Modeling Results

Cross-validation results
The predictive task was formulated as a binary classification problem, where interventions were labeled as corrective (failure = 1) or non-failure (0). Table 5 presents grouped cross-validation results. On the full dataset (6816 interventions, 85% corrective/15% preventive), the Random Forest achieved an AUROC of 0.65 ± 0.04 (95% CI: 0.61–0.69), indicating moderate discriminative capacity under imbalance. On the stratified subset (2000 interventions, balanced 50/50), the AUROC increased to 0.82 ± 0.03 (95% CI: 0.76–0.87), demonstrating predictive capacity under balanced conditions.
Temporal validation
Temporal holdout validation was performed using 2014–2019 for training and 2020–2022 for testing. Table 6 presents AUROC values of 0.62–0.65 for the full dataset and 0.79–0.81 for the subset, indicating stable performance across time, though predictive difficulty increased in the imbalanced dataset.
Table 7 compares Random Forest and Logistic Regression across datasets. Random Forest consistently outperformed Logistic Regression on AUROC, F1-macro, and Accuracy, with larger margins observed in the balanced subset, where class imbalance was mitigated.
Roll-forward validation
Roll-forward validation confirmed consistent AUROC, with year-to-year fluctuations not exceeding ±0.02 (Table 8). AUROC decreased to 0.62 in 2020 on the full dataset, coinciding with the COVID-19 crisis, while the subset remained stable at 0.80.
Confusion matrices and ROC analysis
Table 9 shows confusion matrix proportions under temporal split validation. In the full dataset, the false negative rate reached 0.73, reflecting class imbalance. In the subset, false negatives decreased to 0.12, yielding more balanced detection. Figure 14 displays confusion matrices, and Figure 15 illustrates AUROC differences between datasets.

4.5. Analytical Interpretation

Overall, the corrected indicators reveal critical insights into the reliability of biomedical maintenance. Raw MTTR and DH values were initially distorted by extreme outliers, in some cases exceeding 800 h, which primarily reflected administrative delays rather than actual repair times. Applying a robust IQR-based correction was therefore essential to derive realistic maintainability estimates. Service-level comparisons further demonstrated that critical units such as Intensive Care and Operating Rooms concentrated the highest FR and DH values, confirming their priority in predictive maintenance strategies and resource allocation. The adoption of normalized indicators also provided a consistent and comparable framework, strengthening the basis for predictive modeling and inter-hospital benchmarking.
The predictive experiments highlight complementary findings. Structured maintenance interventions were shown to contain discriminative signals, with performance reaching AUROC = 0.80 under balanced conditions, thereby confirming the feasibility of predictive modeling. Conversely, predictive performance decreased on the full dataset due to imbalance and heterogeneity, reflecting the realism of operational hospital data and underscoring the need for resampling strategies or cost-sensitive approaches to enhance robustness.
Temporal validation confirmed stable generalization across years, with a marked decline in 2020, most likely linked to the disruptions caused by the COVID-19 pandemic. This sensitivity to external shocks illustrates the importance of embedding contextual factors into predictive maintenance planning.

5. Discussion

Barriers to the implementation of predictive maintenance in hospital systems are multidimensional and well-documented. Organizational resistance, the lack of qualified personnel to operate connected technologies, and the absence of seamless integration with hospital information systems are recurring limitations [20,38]. Despite medium-term prospects for economic optimization, the initial investment cost, including sensors, analytical platforms, and digital infrastructures, remains prohibitive for many resource-constrained institutions [39]. These financial challenges are further exacerbated by increasingly stringent regulatory requirements related to cybersecurity, maintenance intervention traceability, and algorithm validation [38].
Beyond economic and regulatory barriers, systemic constraints undermine the operational viability of predictive maintenance. These include the lack of interoperability standards, fragmented digital tools, and heterogeneous data entry practices [24]. The literature also highlights a tendency to validate models in isolated settings without addressing change management, user adoption, or the scalability of solutions across multiple sites [2,68]. Combined with the variability in hospital size, human resources, and digital maturity, these limitations complicate the large-scale deployment of predictive solutions [19,49,69].
Our findings confirm the persistence of such structural barriers in resource-limited environments. Partial CMMS coverage, the absence of shared reference frameworks, and inconsistent data entry practices significantly impede the homogeneous aggregation of technical data. The cleaning and standardization phases highlighted how fragmentation compromises traceability. Nevertheless, the proposed structuring pipeline demonstrated that these barriers can be mitigated through a systematic and reproducible methodology [20,24]. By consolidating heterogeneous interventions into an interoperable dataset, the pipeline provides a proof of feasibility relevant to other hospitals facing similar constraints in LMICs.
This work therefore empirically demonstrates that structured data preparation is a prerequisite for predictive maintenance, especially in low-digitization environments [12]. Unlike predominantly theoretical or simulated approaches, the proposed framework is grounded in real data from six Moroccan institutions. It directly addresses a recurrent gap in the literature: the absence of contextualized, reliable, and interoperable datasets in resource-constrained healthcare systems [7,33,70]. The consolidated dataset provides a relevant foundation for evaluating the feasibility of predictive algorithms in minimally digitized environments, without relying on extensive instrumentation or complete CMMS coverage.
The structuring process revealed several obstacles to the effective use of biomedical maintenance data. From a technical perspective, the persistence of heterogeneous formats, reliance on paper-based records, and the diversity of digital tools hindered the harmonization of maintenance histories. From an organizational standpoint, the lack of shared reference frameworks for equipment, services, and failure types limited the comparability of data across facilities. These constraints were further exacerbated by data entry errors, disparities in training, and a limited feedback culture, all of which undermined overall data quality [7,48].
Although CMMS initiatives have been launched, they remain only partially deployed, lack full integration, and are seldom aligned with interoperability standards. This fragmentation restricts longitudinal traceability and the automation of analytics, reinforcing the gap between technical infrastructures and decision-making requirements [3,14,16].
Nevertheless, successful digital transformation efforts in other Moroccan public sectors, such as the digitization of land services led by the national land registry agency (ANCFCC) [71] using a cloud-based and interoperable infrastructure, demonstrate that alignment with open standards such as FHIR, supported by scalable platforms and shared governance, is a viable path forward [72].
In this regard, the present work initiates a modernization trajectory based on the progressive utilization of hospital data to support decision-making. It acts as a bridge between current maintenance practices and the objectives of smarter hospital ecosystems, where interventions are guided by predictive, optimization, and automation mechanisms. By generating a dataset compatible with evolving digital tools, this work prepares the ground for integrating intelligent components such as sensors, alert systems, and dynamic dashboards into a unified architecture oriented toward performance, safety, and efficiency. This incremental trajectory is particularly strategic in resource-constrained contexts, where intelligent reuse of existing infrastructures is preferable to costly technological overhauls.
In addition, the interpretation of derived indicators and predictive results provides valuable operational insights. Corrected MTTR distributions revealed that Operating Rooms and Intensive Care Units concentrated the highest levels of criticality, confirming that predictive strategies should prioritize services where downtime directly compromises patient safety. The comparative evaluation of predictive models further reinforced this point: while the full dataset reflected the inherent imbalance and noise of real hospital data (AUROC = 0.65), the balanced 2000-intervention subset achieved significantly stronger discriminative performance (AUROC = 0.82). These findings illustrate that robust preprocessing and methodological structuring, rather than massive instrumentation, are decisive for enabling predictive readiness. By linking systemic constraints, reliability indicators, and predictive performance, the study demonstrates that predictive maintenance in LMIC hospitals is both feasible and strategically relevant when grounded in rigorously structured datasets.
The next stage will leverage the structured dataset to develop supervised predictive models for tasks such as intervention classification, short-term failure probability estimation, and MTBF prediction. Algorithms including support vector machines, decision trees, neural networks, and hybrid models will be evaluated with metrics adapted to the imbalanced nature of hospital data. In the medium term, enriching the dataset with IoT streams (temperature, electrical current, vibration) will support continuous monitoring and early fault detection, progressively converging toward a unified CMMS–AI–IoT infrastructure. Finally, the integration of semantic web technologies, domain ontologies, and intelligent decision-support systems offers promising avenues for overcoming current limitations. Incorporating technical, clinical, and economic dimensions at the design stage of maintenance policies thus becomes a strategic lever for aligning performance, safety, and cost-effectiveness objectives [73].

6. Conclusions

This work establishes the foundations of a predictive maintenance system tailored to biomedical equipment in Moroccan public hospitals. Starting from heterogeneous and non-standardized sources, a structured methodology enabled the creation of a reliable, interoperable, and prediction-ready dataset. Beyond the national context, the proposed approach illustrates how hospital maintenance data can be systematically transformed into an AI-compatible resource, even in environments characterized by limited digital maturity and fragmented information systems.
The results revealed major imbalances, including the predominance of corrective interventions and partial CMMS coverage, highlighting the urgent need for proactive strategies grounded in historical evidence. By empirically validating the feasibility of dataset structuring in real hospital settings, this study demonstrates that both technical and organizational barriers can be effectively mitigated through systematic, reproducible, and transferable processes.
The structured dataset therefore represents a concrete first step toward the intelligent exploitation of maintenance information for predictive purposes. The methodology, being replicable and adaptable, can be extended across health systems with varying levels of digital maturity, offering particular relevance for resource-limited contexts.
Future work will focus on deploying supervised learning models for failure prediction and intervention classification, as well as integrating real-time IoT monitoring streams. These developments will support the emergence of intelligent maintenance infrastructures aimed at enhancing hospital performance, safety, and operational continuity.

Author Contributions

Conceptualization, J.M.; Data Curation, J.M.; Formal Analysis, J.M. and R.K.; Investigation, J.M., R.K. and N.A.; Methodology, J.M., R.K. and N.A.; Project Administration, R.K., N.A. and K.M.; Resources, J.M. and K.M.; Software, J.M.; Supervision, R.K., N.A. and K.M.; Validation, R.K. and N.A.; Writing—Original Draft, J.M.; Writing—Review and Editing, R.K., N.A. and K.M. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

This study did not require ethical approval.

Informed Consent Statement

Not applicable.

Data Availability Statement

The dataset used in this study originates from regional hospital sources and contains sensitive biomedical maintenance records. Due to confidentiality and institutional restrictions, the data are not publicly available.

Acknowledgments

The authors would like to thank the Regional Department of Equipment and Biomedical Maintenance of Casablanca–Settat, as well as the local maintenance teams of the Mohammedia and Settat hospitals, including all biomedical engineers involved, for their valuable assistance and collaboration throughout the data collection and validation phases of this study.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
AI: Artificial Intelligence
ANCFCC: Agence Nationale de la Conservation Foncière du Cadastre et de la Cartographie
AUC: Area Under the Curve
AUROC: Area Under the Receiver Operating Characteristic Curve
BI: Business Intelligence
CMMS: Computerized Maintenance Management Systems
CNN: Convolutional Neural Networks
EHRs: Electronic Health Records
FDA: Food and Drug Administration
FHIR: Fast Healthcare Interoperability Resources
FMEA: Failure Mode and Effects Analysis
IoMT: Internet of Medical Things
IoT: Internet of Things
k-NN: k-Nearest Neighbors
LDA: Latent Dirichlet Allocation
LMICs: Low- and Middle-Income Countries
MTBF: Mean Time Between Failures
MTTR: Mean Time To Repair
NLP: Natural Language Processing
PdM: Predictive Maintenance
RCM: Reliability-Centered Maintenance
RUL: Remaining Useful Life
SaMD: Software as a Medical Device
SVM: Support Vector Machines

Appendix A

Table A1. Extended Data Dictionary.

| Group | # | Variable Name | Type | Unit | Description | Missing (%) Before | Missing (%) After | Included in Core Dataset |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Identifiers & Provenance | 1 | internal_id | Categorical | Label | Internal identifier generated by the pipeline (e.g., iq_real_001) | 0% | 0% | Yes |
| | 2 | equipment_id | Categorical | Identifier | Official hospital inventory number (e.g., 9608/19) | 0% | 0% | Yes |
| | 3 | hospital_id | Categorical | Code | Hospital code | 0% | 0% | Yes |
| | 4 | department | Categorical | Label | Clinical department where the device is used | 3% | 0% | Yes |
| | 5 | file_source | Categorical | Label | Excel file name used as data source | 0% | 0% | No (merged) |
| | 6 | sha256_checksum | Categorical | Hash | SHA-256 checksum ensuring data provenance and integrity | 0% | 0% | Yes |
| | 7 | file_uid | Categorical | Hash | Unique identifier automatically assigned to each imported Excel source file | 0% | 0% | No (technical) |
| | 8 | ingestion_timestamp | Temporal | ISO-Datetime | Exact time of ingestion (YYYY-MM-DD HH:MM:SS) logged for auditability | 0% | 0% | No (technical) |
| | 9 | last_update_timestamp | Temporal | ISO-Datetime | Last modification or validation timestamp of each record | NA | 0% | No (technical) |
| | 10 | checksum_verified | Categorical | Boolean | Indicates whether checksum integrity was verified (True/False) | NA | 0% | No (technical) |
| | 11 | data_source | Categorical | Label | Source of data (manual entry, GMAO export, external file, etc.) | 0% | 0% | No (technical) |
| Contextual/Technical | 12 | equipment_designation | Categorical | Label | Designation or common name of the device | 2% | 0% | Yes |
| | 13 | technology | Categorical | Label | Equipment technology (Analog/Digital/Hybrid) | 6% | 0% | Yes |
| | 14 | brand | Categorical | Label | Manufacturer brand | 7% | 0% | No (merged) |
| | 15 | model | Categorical | Label | Model or type | 9% | 0% | No (merged) |
| | 16 | brand_model | Categorical | Label | Unified brand–model label | NA | 0% | Yes |
| | 17 | acquisition_date | Temporal | DD/MM/YYYY | Date of acquisition | 10% | 0% | Yes |
| | 18 | commissioning_date | Temporal | DD/MM/YYYY | Date of commissioning (first use) | 10% | 0% | Yes |
| | 19 | Operational_Age | Numerical | Years | Operational age (time elapsed since commissioning) | NA | 0% | Yes |
| | 20 | warranty_status | Categorical | Yes/No | Indicates if the device is under warranty | 12% | 0% | Yes |
| | 21 | warranty_end_date | Temporal | DD/MM/YYYY | End date of warranty period | 12% | 0% | Yes |
| | 22 | estimated_end_of_life_date | Temporal | DD/MM/YYYY | Estimated end-of-life date of the equipment | 25% | NA | Yes |
| | 23 | service_status | Categorical | Label | Operational/Under repair/Retired | 5% | 0% | Yes |
| | 24 | spare_parts_used | Categorical | Label | Spare parts replaced | 18% | 5% | No |
| | 25 | CIₙ | Numerical (ordinal) | Scale 1–5 | Internal Criticality Index combining downtime, failure frequency, and clinical importance | NA | 0% | Yes |
| | 26 | location | Categorical | Label | Room or unit location | 20% | 8% | No |
| | 27 | supplier_name | Categorical | Label | Supplier or vendor name | 18% | NA | No |
| Temporal Variables | 28 | intervention_date | Temporal | DD/MM/YYYY | Maintenance intervention date | 2% | 0% | Yes |
| | 29 | failure_date | Temporal | DD/MM/YYYY | Failure occurrence date | 7% | 0% | Yes |
| | 30 | repair_date | Temporal | DD/MM/YYYY | Repair completion date | 9% | 0% | Yes |
| | 31 | downtime_hours | Numerical | Hours | Total hours of downtime | 12% | 0% | Yes |
| | 32 | repair_duration | Numerical | Hours | Duration of the repair | 15% | 0% | Yes |
| | 33 | intervention_year | Numerical | Year | Extracted year of intervention (for temporal grouping) | NA | 0% | Yes |
| Maintenance/Failure | 34 | intervention_type | Categorical | Label | Curative/Preventive/Minor adjustment/External | 0% | 0% | Yes |
| | 35 | failure_type | Categorical | Label | Failure category (electrical, mechanical, software…) | 5% | 0% | Yes |
| | 36 | failure_criticality | Categorical | Low/Med/High | Failure severity level | 20% | 0% | Yes |
| | 37 | intervention_status | Categorical | Label | Completed/Ongoing/Abandoned | 4% | 0% | Yes |
| Derived Indicators | 38 | MTBF | Numerical | Days | Mean Time Between Failures | NA | 0% | Derived |
| | 39 | MTTR | Numerical | Hours | Mean Time to Repair | NA | 0% | Derived |
| | 40 | FR | Numerical | %/year | Annualized Failure Rate normalized by equipment and time | NA | 0% | Derived |
| | 41 | DH | Numerical | Hours | Downtime Hours per intervention cycle | NA | 0% | Derived |

References

  1. Li, J.; Mao, Y.; Zhang, J. Construction of Medical Equipment Maintenance Network Management Platform Based on Big Data. Front. Phys. 2023, 11, 1105906. [Google Scholar] [CrossRef]
  2. Manchadi, O.; Ben-Bouazza, F.-E.; Jioudi, B. Predictive Maintenance in Healthcare System: A Survey. IEEE Access 2023, 11, 61313–61330. [Google Scholar] [CrossRef]
  3. Fakhkhari, H.; Bounabat, B.; Bennani, M.; Bekkali, R. Moroccan Patient-Centered Hospital Information System: Global Architecture. In Proceedings of the ArabWIC 6th Annual International Conference Research Track, Rabat, Morocco, 7–9 March 2019. [Google Scholar]
  4. Oufkir, L.; Oufkir, A.A. Understanding EHR Current Status and Challenges to a Nationwide Electronic Health Records Implementation in Morocco. Inform. Med. Unlocked 2023, 42, 10134. [Google Scholar] [CrossRef]
  5. Arab-Zozani, M.; Imani, A.; Doshmangir, L.; Dalal, K.; Bahreini, R. Assessment of Medical Equipment Maintenance Management: Proposed Checklist Using Iranian Experience. Biomed. Eng. OnLine 2021, 20, 49. [Google Scholar] [CrossRef]
  6. Woldeyohanins, A.E.; Molla, N.M.; Mekonen, A.W.; Wondimu, A. The Availability and Functionality of Medical Equipment and the Barriers to Their Use at Comprehensive Specialized Hospitals in the Amhara Region, Ethiopia. Front. Health Serv. 2025, 4, 1470234. [Google Scholar] [CrossRef]
  7. Abdul-Rahman, T.; Ghosh, S.; Lukman, L.; Bamigbade, G.B.; Oladipo, O.V.; Amarachi, O.R.; Olanrewaju, O.F.; Toluwalashe, S.; Awuah, W.A.; Aborode, A.T.; et al. Inaccessibility and Low Maintenance of Medical Data Archive in Low-Middle Income Countries: Mystery behind Public Health Statistics and Measures. J. Infect. Public Health 2023, 16, 1556–1561. [Google Scholar] [CrossRef]
  8. Hillebrecht, M.; Schmidt, C.; Saptoka, B.P.; Riha, J.; Nachtnebel, M.; Bärnighausen, T. Maintenance versus Replacement of Medical Equipment: A Cost-Minimization Analysis among District Hospitals in Nepal. BMC Health Serv. Res. 2022, 22, 1023. [Google Scholar] [CrossRef]
  9. Basri, E.I.; Abdul Razak, I.H.; Ab-Samat, H.; Kamaruddin, S. Preventive Maintenance (PM) Planning: A Review. J. Qual. Maint. Eng. 2017, 23, 114–143. [Google Scholar] [CrossRef]
  10. Wu, S. Preventive Maintenance Models: A Review. In Replacement Models with Minimal Repair; Tadj, L., Ouali, M.-S., Yacout, S., Ait-Kadi, D., Eds.; Springer: London, UK, 2011; pp. 129–140. ISBN 978-0-85729-215-5. [Google Scholar]
  11. Benisch, J.; Helm, B.; Bertrand-Krajewski, J.-L.; Bloem, S.; Cherqui, F.; Eichelmann, U.; Kroll, S.; Poelsma, P. Operation and Maintenance; IWA Publishing: London, UK, 2021. [Google Scholar] [CrossRef]
  12. Zhou, J.; Xu, B.; Fang, Z.; Zheng, X.; Tang, R.; Haroglu, H. Operations and Maintenance. In Digital Built Asset Management; Edward Elgar Publishing: Cheltenham, UK, 2024; pp. 161–189. ISBN 978-1-0353-2144-5. [Google Scholar]
  13. Fajrin, H.R.; Pramudya, T.; Supriyadi, K. The Development of a Medical Equipment Inventory Information System. In Proceedings of the 2024 IEEE International Conference on Artificial Intelligence and Mechatronics Systems (AIMS), Virtual Conference, 22–23 February 2024; pp. 1–6. [Google Scholar]
  14. Lin, Z.; Kang, J.; Wei, Y.; Zou, B. Maintenance Management Strategies for Medical Equipment in Healthcare Institutions: A Review. BME Horiz. 2024, 2, 135. [Google Scholar] [CrossRef]
  15. Gandhare, S.; Narad, S.; Kumar, P.; Madankar, T. Integrating Quantitative Parameters for Automating Medical Equipment Maintenance Using Industry 4.0 and FMEA. In Proceedings of the 2024 2nd DMIHER International Conference on Artificial Intelligence in Healthcare, Education and Industry (IDICAIEI), Wardha, India, 29–30 November 2024; pp. 1–6. [Google Scholar]
  16. Tripathi, R.; Mishra, V.K.; Maheshwari, H.; Tiwari, R.G.; Agarwal, A.K.; Gupta, A. Extrapolative Preservation Management of Medical Equipment through IoT. In Proceedings of the 2023 International Conference on Artificial Intelligence for Innovations in Healthcare Industries (ICAIIHI), Raipur, India, 29–30 December 2023; Volume 1, pp. 1–5. [Google Scholar]
  17. Shamayleh, A.; Awad, M.; Farhat, J. IoT Based Predictive Maintenance Management of Medical Equipment. J. Med. Syst. 2020, 44, 72. [Google Scholar] [CrossRef]
  18. Guissi, M.; Alaoui, M.H.E.Y.; Belarbi, L.; Chaik, A. IoT for Predictive Maintenance of Critical Medical Equipment in a Hospital Structure. Inform. Autom. Pomiary Gospod. Ochr. Śr. 2024, 14, 71–76. [Google Scholar] [CrossRef]
  19. Rahman, N.H.A.; Zaki, M.H.M.; Hasikin, K.; Razak, N.A.A.; Ibrahim, A.K.; Lai, K.W. Predicting Medical Device Failure: A Promise to Reduce Healthcare Facilities Cost through Smart Healthcare Management. PeerJ Comput. Sci. 2023, 9, e1279. [Google Scholar] [CrossRef] [PubMed]
  20. Ahmed Qaid, M.S.; Mohd Noor, A.; Norali, A.N.; Zakaria, Z.; Ahmad Firdaus, A.Z.; Abu Bakar, A.H.; Fook, C.Y. Remote Monitoring and Predictive Maintenance of Medical Devices. In Proceedings of the International e-Conference on Intelligent Systems and Signal Processing; Thakkar, F., Saha, G., Shahnaz, C., Hu, Y.-C., Eds.; Springer: Singapore, 2022; pp. 727–737. [Google Scholar]
  21. Shah Ershad Bin Mohd Azrul Shazril, M.H.; Mashohor, S.; Amran, M.E.; Fatinah Hafiz, N.; Rahman, A.A.; Ali, A.; Rasid, M.F.A.; Safwan, A.; Kamil, A.; Azilah, N.F. Predictive Maintenance Method Using Machine Learning for IoT Connected Computed Tomography Scan Machine. In Proceedings of the 2023 IEEE 2nd National Biomedical Engineering Conference (NBEC), Melaka, Malaysia, 5–7 September 2023; pp. 42–47. [Google Scholar]
  22. Li, X.; Williams, J.; Swanson, C.; Berg, T. A Machine Learning Approach to Predictive Maintenance: Remaining Useful Life and Motor Fault Analysis. Comput. Ind. Eng. 2025, 206, 111222. [Google Scholar] [CrossRef]
  23. Sabah, S.; Moussa, M.; Shamayleh, A. Predictive Maintenance Application in Healthcare. In Proceedings of the 2022 Annual Reliability and Maintainability Symposium (RAMS), Tucson, AZ, USA, 24–27 January 2022; pp. 1–9. [Google Scholar]
  24. Titah, M.; Bouchaala, M.A. An Ontology-Driven Model for Hospital Equipment Maintenance Management: A Case Study. J. Qual. Maint. Eng. 2024, 30, 409–433. [Google Scholar] [CrossRef]
  25. Habib Shah Ershad Mohd Azrul Shazril, M.; Mashohor, S.; Effendi Amran, M.; Fatinah Hafiz, N.; Mohd Ali, A.; Saiful Bin Naseri, M.; Rasid, M.F.A. Assessment of IoT-Driven Predictive Maintenance Strategies for Computed Tomography Equipment: A Machine Learning Approach. IEEE Access 2024, 12, 195505–195515. [Google Scholar] [CrossRef]
  26. Manchadi, O.; Ben-Bouazza, F.-E.; Dehbi, Z.E.O.; Said, Z.; Jioudi, B. Towards Industry 4.0: An IoT-Enabled Data-Driven Architecture for Predictive Maintenance in Pharmaceutical Manufacturing. In Proceedings of the International Conference on Advanced Intelligent Systems for Sustainable Development (AI2SD’2023), Marrakech, Morocco, 15–17 October 2023; Ezziyyani, M., Kacprzyk, J., Balas, V.E., Eds.; Springer Nature Switzerland: Cham, Switzerland, 2024; pp. 28–45. [Google Scholar]
  27. Wang, B.; Rui, T.; Skinner, S.; Ayers-Comegys, M.; Gibson, J.; Williams, S. Medical Equipment Aging: Part I—Impact on Maintenance. J. Clin. Eng. 2024, 49, 52. [Google Scholar] [CrossRef]
  28. Amran, M.E.; Aziz, S.A.; Muhtazaruddin, M.N.; Masrom, M.; Haron, H.N.; Bani, N.A.; Mohd Izhar, M.A.; Usman, S.; Sarip, S.; Najamudin, S.S.; et al. Critical Assessment of Medical Devices on Reliability, Replacement Prioritization and Maintenance Strategy Criterion: Case Study of Malaysian Hospitals. Qual. Reliab. Eng. Int. 2024, 40, 970–1001. [Google Scholar] [CrossRef]
  29. Tettey, F.; Parupelli, S.K.; Desai, S. A Review of Biomedical Devices: Classification, Regulatory Guidelines, Human Factors, Software as a Medical Device, and Cybersecurity. Biomed. Mater. Devices 2024, 2, 316–341. [Google Scholar] [CrossRef]
  30. Peruničić, Ž.; Lalatović, I.; Spahić, L.; Ašić, A.; Pokvić, L.G.; Badnjević, A. Enhancing Mechanical Ventilator Reliability through Machine Learning Based Predictive Maintenance. Technol. Health Care 2025, 33, 1288–1297. [Google Scholar] [CrossRef]
  31. Zheng, F.; Sun, G.; Suo, Y.; Ma, H.; Feng, T. Research on Through-Flame Imaging Using Mid-Wave Infrared Camera Based on Flame Filter. Sensors 2024, 24, 6696. [Google Scholar] [CrossRef]
  32. Montes-Sánchez, J.M.; Uwate, Y.; Nishio, Y.; Vicente-Díaz, S.; Jiménez-Fernández, Á. Predictive Maintenance Edge Artificial Intelligence Application Study Using Recurrent Neural Networks for Early Aging Detection in Peristaltic Pumps. IEEE Trans. Reliab. 2024, 74, 3730–3744. [Google Scholar] [CrossRef]
  33. Zamzam, A.H.; Hasikin, K.; Wahab, A.K.A. Integrated Failure Analysis Using Machine Learning Predictive System for Smart Management of Medical Equipment Maintenance. Eng. Appl. Artif. Intell. 2023, 125, 106715. [Google Scholar] [CrossRef]
  34. Singgih, M.L.; Zakiyyah, F.F.; Andrew. Machine Learning for Predictive Maintenance: A Literature Review. In Proceedings of the 2024 Seventh International Conference on Vocational Education and Electrical Engineering (ICVEE), Malang, Indonesia, 30–31 October 2024; pp. 250–256. [Google Scholar]
  35. Ouardi, A.; Sekkaki, A.; Mammass, D. Towards an Inter-Cloud Architecture in Healthcare System. In Proceedings of the 2017 International Symposium on Networks, Computers and Communications (ISNCC), Marrakech, Morocco, 16–18 May 2017; pp. 1–6. [Google Scholar]
  36. Rezende, M.C.C.; Santos, R.P.; Coelli, F.C.; Almeida, R.M.V.R. Reliability Analysis Techniques Applied to Highly Complex Medical Equipment Maintenance. In Proceedings of the IX Latin American Congress on Biomedical Engineering and XXVIII Brazilian Congress on Biomedical Engineering, Florianópolis, Brazil, 24–28 October 2022; Marques, J.L.B., Rodrigues, C.R., Suzuki, D.O.H., Marino Neto, J., García Ojeda, R., Eds.; Springer Nature Switzerland: Cham, Switzerland, 2024; pp. 184–192. [Google Scholar]
  37. Boppana, V.R. Data Analytics for Predictive Maintenance in Healthcare Equipment; EPH-International Journal of Business & Management Science: Perth, Australia, 2023. [Google Scholar]
  38. Daniel, C. Medical Device Maintenance Regimes in Healthcare Institutions. In Inspection of Medical Devices: For Regulatory Purposes; Badnjević, A., Cifrek, M., Magjarević, R., Džemić, Z., Eds.; Springer Nature Switzerland: Cham, Switzerland, 2024; pp. 59–91. ISBN 978-3-031-43444-0. [Google Scholar]
  39. Haninie Abd Wahab, N.; Hasikin, K.; Wee Lai, K.; Xia, K.; Ying Taing, A.; Zhang, R. Predicting Medical Device Life Expectancy and Estimating Remaining Useful Life Using a Data-Driven Multimodal Framework. IEEE Access 2025, 13, 117300–117327. [Google Scholar] [CrossRef]
  40. Ahmed, R.; Nasiri, F.; Zayed, T. A Novel Neutrosophic-Based Machine Learning Approach for Maintenance Prioritization in Healthcare Facilities. J. Build. Eng. 2021, 42, 102480. [Google Scholar] [CrossRef]
  41. Jallal, M.; Serhier, Z.; Berrami, H.; Othmani, M.B. Telemedicine: The Situation in Morocco. In Studies in Health Technology and Informatics; IOS Press: Amsterdam, The Netherlands, 2023. [Google Scholar]
  42. Jallal, M.; Serhier, Z.; Berrami, H.; Othmani, M.B. Current State and Prospects of Telemedicine in Morocco: Analysis of Challenges, Initiatives, and Regulatory Framework. Cureus 2023, 15, e50963. [Google Scholar] [CrossRef] [PubMed]
  43. Mohssine, N.; Raji, I.; Lanteigne, G.; Amalik, A.; Chaouch, A. Impact organisationnel de la préparation à l’accréditation en établissement de santé au Maroc. Santé Publique 2015, 27, 503–513. [Google Scholar] [CrossRef] [PubMed]
  44. Ramadan, E.A.; Abu-Ghaleb, A.A.; El-Brawany, M.A. Medical Equipment Maintenance Management and Optimization in Healthcare Facilities: Literature Review. In Proceedings of the 2023 3rd International Conference on Electronic Engineering (ICEEM), Menouf, Egypt, 7–8 October 2023; pp. 1–11. [Google Scholar]
  45. Zinaoui, T.; El Khettab, M.K. La digitalisation des administrations publiques à l’ère de la pandémie. Commun. Organ. Rev. Sci. Francoph. Commun. Organ. 2022, 62, 143–161. [Google Scholar] [CrossRef]
  46. Mansoury, O.; Sebbani, M. Réflexion Sur le Système de Santé du Maroc Dans la Perspective de la Promotion de la Santé. Maghreb Rev. 2023, 48, 317–321. [Google Scholar] [CrossRef]
  47. Mahdaoui, M.; Kissani, N. Morocco’s Healthcare System: Achievements, Challenges, and Perspectives. Cureus 2023, 15, e41143. [Google Scholar] [CrossRef]
  48. Hoxha, K.; Hung, Y.W.; Irwin, B.R.; Grépin, K.A. Understanding the Challenges Associated with the Use of Data from Routine Health Information Systems in Low- and Middle-Income Countries: A Systematic Review. Health Inf. Manag. J. 2022, 51, 135–148. [Google Scholar] [CrossRef]
  49. Jidane, S.; Zidouh, S.; Belyamani, L. The Impact of Telemedicine in Morocco: A Transformative Shift in Healthcare Delivery. Int. J. Biomed. Eng. Clin. Sci. 2025, 11, 6–10. [Google Scholar] [CrossRef]
  50. Ahmad, A. Enhancing Hospital Efficiency through IoT and AI: A Smart Healthcare System. J. Comput. Sci. Appl. Eng. 2024, 2, 34–38. [Google Scholar] [CrossRef]
  51. Janjua, J.I.; Ghazal, T.M.; Abushiba, W.; Abbas, S. Optimizing Patient Outcomes with AI and Predictive Analytics in Healthcare. In Proceedings of the 2024 IEEE 65th International Scientific Conference on Power and Electrical Engineering of Riga Technical University (RTUCON), Riga, Latvia, 10–12 October 2024; pp. 1–6. [Google Scholar]
  52. Patnaik, S.K.; Kushagra, D.P.; Sahran, D.; Kalita, B.J.; Kumar, B.; Prusty, H.; Pandit, P. Optimising Medical Equipment Utilisation and Serviceability: A Data-Driven Approach through Insights from Five Healthcare Institutions. Med. J. Armed Forces India 2025, in press. [CrossRef]
  53. Alahmadi, K.M.; Mahmoud, E.R.I.; Imaduddin, F. Model Development to Improve the Predictive Maintenance Reliability of Medical Devices. Inform. Autom. Pomiary Gospod. Ochr. Śr. 2025, 15, 117–124. [Google Scholar] [CrossRef]
  54. Aljasmi, M.; Piya, S. Identification of Suitable Maintenance Strategy for Medical Devices in UAE Healthcare Facilities. In Proceedings of the 2024 IEEE International Conference on Technology Management, Operations and Decisions (ICTMOD), Sharjah, United Arab Emirates, 4–6 November 2024; pp. 1–8. [Google Scholar]
  55. Amitabh, K.; Mathur, A. Impact of Repair and Maintenance of Hospital Equipment on Health Services in Government Hospitals in North—Eastern Region of India. In Proceedings of the NIELIT’s International Conference on Communication, Electronics and Digital Technology, Guwahati, India, 16–17 February 2024; Shivakumara, P., Mahanta, S., Singh, Y.J., Eds.; Springer Nature: Singapore, 2024; pp. 411–422. [Google Scholar]
  56. Chaminda, J.L.P.; Dharmagunawardene, D.; Rohde, A.; Kularatna, S.; Hinchcliff, R. Implementation of a Multicomponent Program to Improve Effective Use and Maintenance of Medical Equipment in Sri Lankan Hospitals. WHO South-East Asia J. Public Health 2023, 12, 85. [Google Scholar] [CrossRef]
  57. Medenou, D.; Fagbemi, L.A.; Houessouvo, R.C.; Jossou, T.R.; Ahouandjinou, M.H.; Piaggio, D.; Kinnouezan, C.-D.A.; Monteiro, G.A.; Idrissou, M.A.Y.; Iadanza, E.; et al. Medical Devices in Sub-Saharan Africa: Optimal Assistance via a Computerized Maintenance Management System (CMMS) in Benin. Health Technol. 2019, 9, 219–232. [Google Scholar] [CrossRef]
  58. Niyonambaza, I.; Zennaro, M.; Uwitonze, A. Predictive Maintenance (PdM) Structure Using Internet of Things (IoT) for Mechanical Equipment Used into Hospitals in Rwanda. Future Internet 2020, 12, 224. [Google Scholar] [CrossRef]
  59. Gallab, M.; Ahidar, I.; Zrira, N.; Ngote, N. Towards a Digital Predictive Maintenance (DPM): Healthcare Case Study. Procedia Comput. Sci. 2024, 232, 3183–3194. [Google Scholar] [CrossRef]
  60. Castañeira, M.; Rubio, D.; Salguero, M.G.; Ponce, S.; Madrid, F. Optimizing Biomedical Equipment Management Through a Business Intelligence Platform. In Proceedings of the Advances in Bioengineering and Clinical Engineering, Buenos Aires, Argentina, 3–6 October 2023; Ballina, F.E., Armentano, R., Acevedo, R.C., Meschino, G.J., Eds.; Springer Nature Switzerland: Cham, Switzerland, 2024; pp. 114–121. [Google Scholar]
  61. Fan, C.; Chen, M.; Wang, X.; Wang, J.; Huang, B. A Review on Data Preprocessing Techniques Toward Efficient and Reliable Knowledge Discovery from Building Operational Data. Front. Energy Res. 2021, 9, 652801. [Google Scholar] [CrossRef]
  62. Brand, D.; Singh, J.A.; McKay, A.G.N.; Cengiz, N.; Moodley, K. Data Sharing Governance in Sub-Saharan Africa during Public Health Emergencies: Gaps and Guidance. S. Afr. J. Sci. 2023, 118, 11–12. [Google Scholar] [CrossRef]
  63. Sarker, I.H. Data Science and Analytics: An Overview from Data-Driven Smart Computing, Decision-Making and Applications Perspective. SN Comput. Sci. 2021, 2, 377. [Google Scholar] [CrossRef]
  64. Kandel, S.; Paepcke, A.; Hellerstein, J.; Heer, J. Wrangler: Interactive Visual Specification of Data Transformation Scripts. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, Vancouver, BC, Canada, 7–12 May 2011; Association for Computing Machinery: New York, NY, USA, 2011; pp. 3363–3372. [Google Scholar]
  65. Barapatre, D.; Vijayalakshmi, A. Data Preparation on Large Datasets for Data Science. Asian J. Pharm. Clin. Res. 2017, 10, 485–488. [Google Scholar] [CrossRef]
  66. Dhillon, B.S. Maintainability, Maintenance, and Reliability for Engineers; CRC Press: Boca Raton, FL, USA, 2006; ISBN 978-0-429-12821-9. [Google Scholar]
  67. Iadanza, E.; Gonnelli, V.; Satta, F.; Gherardelli, M. Evidence-Based Medical Equipment Management: A Convenient Implementation. Med. Biol. Eng. Comput. 2019, 57, 2215–2230. [Google Scholar] [CrossRef] [PubMed]
  68. Cinar, E.; Kalay, S.; Saricicek, I. A Predictive Maintenance System Design and Implementation for Intelligent Manufacturing. Machines 2022, 10, 1006. [Google Scholar] [CrossRef]
  69. Zamzam, A.H.; Al-Ani, A.K.I.; Wahab, A.K.A.; Lai, K.W.; Satapathy, S.C.; Khalil, A.; Azizan, M.M.; Hasikin, K. Prioritisation Assessment and Robust Predictive System for Medical Equipment: A Comprehensive Strategic Maintenance Management. Front. Public Health 2021, 9, 782203. [Google Scholar] [CrossRef]
  70. Pakala, A.; Shah, D.; Jha, S. Advancing Predictive Maintenance: A Data-Driven Approach for Accurate Equipment Failure Prediction. In Proceedings of the Industry 4.0 and Advanced Manufacturing, Volume 1; Chakrabarti, A., Suwas, S., Arora, M., Eds.; Springer Nature: Singapore, 2025; pp. 219–228. [Google Scholar]
  71. Taouabit, O.; Touhami, F.; Elmoukhtar, M. Transformation digitale de l’administration publique au Maroc et son impact sur l’expérience utilisateur: Cas des services rendus aux notaires par l’Agence Nationale de la Conservation Foncière du Cadastre et de la Cartographie. Int. J. Account. Finance Audit. Manag. Econ. 2023, 4, 384–401. [Google Scholar] [CrossRef]
  72. Mandel, J.C.; Kreda, D.A.; Mandl, K.D.; Kohane, I.S.; Ramoni, R.B. SMART on FHIR: A Standards-Based, Interoperable Apps Platform for Electronic Health Records. J. Am. Med. Inform. Assoc. 2016, 23, 899–908. [Google Scholar] [CrossRef]
  73. Zhou, H.; Liu, Q.; Liu, H.; Chen, Z.; Li, Z.; Zhuo, Y.; Li, K.; Wang, C.; Huang, J. Healthcare Facilities Management: A Novel Data-Driven Model for Predictive Maintenance of Computed Tomography Equipment. Artif. Intell. Med. 2024, 149, 102807. [Google Scholar] [CrossRef]
Figure 1. Conceptual architecture of the BioMedStruct prototype pipeline.
Figure 2. Conversion of heterogeneous biomedical records into a validated 26-variable dataset.
Figure 3. Impact of semantic normalization on service labels: (a) before normalization, (b) after normalization.
Figure 4. Comparison of key indicators across hospitals.
Figure 5. Breakdown of maintenance interventions (2014–2022).
Figure 6. Distribution of maintenance interventions by hospital department.
Figure 7. Most common failure types in biomedical equipment.
Figure 8. Distribution of failures by year.
Figure 9. Distribution of MTTR (raw values). The diamond markers indicate statistical outliers located beyond the interquartile range.
Figure 10. Distribution of MTTR (corrected values, IQR-based). Diamond-shaped markers represent statistical outliers beyond the interquartile range.
Figure 11. Failure rate (FR) by hospital service.
Figure 12. Downtime hours (DH) across hospital services.
Figure 13. Radar chart of reliability indicators (FR, MTBF, MTTR corrected, DH) for selected services.
Figure 14. Confusion matrices of Random Forest classifiers under temporal split (2014–2019 training, 2020–2022 testing).
Figure 15. Comparative ROC curves (FULL vs. SUBSET datasets).
Table 1. Main transformations applied during the structuring of the biomedical dataset.
Stage (M) | Transformation | Description
M1 | Removal of non-tabular rows | Elimination of administrative headers and unstructured elements in rows 1 to 5 at the top of source files.
M1 | Provenance logging | Recording of the file name, sheet, UTC timestamp, and SHA-256 file fingerprint, plus parser and mapping versions, for auditability.
M2 | Date format harmonization | Systematic conversion of dates to the DD/MM/YYYY format (as used in Moroccan hospitals), with explicit parsing of legacy formats.
M2 | Inventory ID normalization | Standardization of equipment and inventory identifiers with consistent casing and padding.
M2 | Unique identifier assignment | Assignment of standardized equipment IDs (e.g., EQP_REAL_00001) to ensure a one-to-one mapping between physical units and records; IDs are generated sequentially and linked to hospital and legacy inventory codes to preserve traceability.
M2 | Duplicate elimination | Removal of exact or near-duplicates using the composite key {equipment ID, failure date, intervention type}.
M2 | Column disambiguation | Splitting of vague fields such as Summary, Model/Type, and Room into distinct columns.
M3 | Label normalization | Unification of department names, with French-language (FR) variants mapped to canonical labels to reduce label variance.
M3 | Failure type categorization | Grouping of raw descriptions into homogeneous classes such as electrical, mechanical, and software.
M4 | Derivation of analytical variables | Computation of normalized reliability indicators, including Failure Rate (FR), Mean Time Between Failures (MTBF), Mean Time to Repair (MTTR), and Downtime Hours (DH).
M4 | Time-aware computation | Features computed on right-closed windows to avoid temporal information leakage.
M5 | Variable type definition | Explicit classification of variables into temporal, categorical, or quantitative types.
M5 | Missing value handling | Simple imputations or informed deletions based on business rules.
M6 | Final data structuring | Assembly of a normalized tabular dataset with 26 interoperable variables and a fixed column order; schema versioning (e.g., BioMedStruct_Schema_CST v1.0.0).
M6 | Documentation assets | Delivery of a prediction-ready dataset, a data dictionary, a validation report, and provenance logs.
M7 | Integration readiness deliverables | Packaging of the prediction-ready dataset into standardized distribution formats (CSV/Parquet) with a fixed schema and versioned releases, complemented by interoperability assets to support adoption in biomedical maintenance workflows, including CMMS integration, inter-hospital data sharing, IoT connectivity, and risk-scoring dashboards.
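To make the early stages concrete, the sketch below illustrates how the M1–M2 steps could be implemented in Python/pandas. The workbook name, sheet layout, column names, and version strings are illustrative assumptions; only the operations themselves (header-row removal, SHA-256 provenance logging, day-first date harmonization, and composite-key deduplication) mirror the transformations listed in Table 1.

import hashlib
from datetime import datetime, timezone

import pandas as pd

SOURCE_FILE = "hospital_A_interventions.xlsx"   # hypothetical source workbook

# M1: provenance logging (file fingerprint + UTC ingestion timestamp)
with open(SOURCE_FILE, "rb") as fh:
    checksum = hashlib.sha256(fh.read()).hexdigest()
provenance = {
    "file": SOURCE_FILE,
    "sha256_checksum": checksum,
    "ingested_at_utc": datetime.now(timezone.utc).isoformat(),
    "parser_version": "0.1.0",                  # assumed versioning scheme
}

# M1: drop the administrative header rows (rows 1-5 of the source sheet)
df = pd.read_excel(SOURCE_FILE, skiprows=5)

# M2: harmonize dates by parsing legacy day-first formats, then render as DD/MM/YYYY
for col in ["failure_date", "intervention_date", "repair_date"]:
    df[col] = pd.to_datetime(df[col], dayfirst=True, errors="coerce").dt.strftime("%d/%m/%Y")

# M2: remove exact or near duplicates on the composite key from Table 1
df = df.drop_duplicates(subset=["equipment_id", "failure_date", "intervention_type"])
df["sha256_checksum"] = provenance["sha256_checksum"]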
Table 2. Core data dictionary of the 26 standardized variables.
# | Variable Name | Type | Unit | Description | Missing (%) Before | Missing (%) After
1 | internal_id | Categorical | Label | Internal identifier generated by the pipeline (e.g., iq_real_001) | 0% | 0%
2 | equipment_id | Categorical | Identifier | Official hospital inventory number (e.g., 9608/19) | 0% | 0%
3 | hospital_id | Categorical | Code | Hospital code | 0% | 0%
4 | department | Categorical | Label | Clinical department where the device is used | 3% | 0%
5 | sha256_checksum | Categorical | Hash | SHA-256 checksum ensuring data provenance and integrity | 0% | 0%
6 | equipment_designation | Categorical | Label | Designation or common name of the device | 2% | 0%
7 | technology | Categorical | Label | Equipment technology (Analog/Digital/Hybrid) | 6% | 0%
8 | brand_model | Categorical | Label | Unified brand–model label | NA | 0%
9 | acquisition_date | Temporal | DD/MM/YYYY | Date of acquisition | 10% | 0%
10 | commissioning_date | Temporal | DD/MM/YYYY | Date of commissioning (first use) | 10% | 0%
11 | Operational_Age | Numerical | Years | Operational age (time elapsed since commissioning) | NA | 0%
12 | warranty_status | Categorical | Yes/No | Indicates if the device is under warranty | 12% | 0%
13 | warranty_end_date | Temporal | DD/MM/YYYY | End date of warranty period | 12% | 0%
14 | estimated_end_of_life_date | Temporal | DD/MM/YYYY | Estimated end-of-life date of the equipment | 25% | NA
15 | service_status | Categorical | Label | Operational/Under repair/Retired | 5% | 0%
16 | CIₙ | Numerical (ordinal) | Scale 1–5 | Internal Criticality Index combining downtime, failure frequency, and clinical importance | NA | 0%
17 | intervention_date | Temporal | DD/MM/YYYY | Maintenance intervention date | 2% | 0%
18 | failure_date | Temporal | DD/MM/YYYY | Failure occurrence date | 7% | 0%
19 | repair_date | Temporal | DD/MM/YYYY | Repair completion date | 9% | 0%
20 | downtime_hours | Numerical | Hours | Total hours of downtime | 12% | 0%
21 | repair_duration | Numerical | Hours | Duration of the repair | 15% | 0%
22 | intervention_year | Numerical | Year | Extracted year of intervention (for temporal grouping) | NA | 0%
23 | intervention_type | Categorical | Label | Curative/Preventive/Minor adjustment/External | 0% | 0%
24 | failure_type | Categorical | Label | Failure category (electrical, mechanical, software…) | 5% | 0%
25 | failure_criticality | Categorical | Low/Med/High | Failure severity level | 20% | 0%
26 | intervention_status | Categorical | Label | Completed/Ongoing/Abandoned | 4% | 0%
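A data dictionary is only useful if it is enforced. The short sketch below shows one way the type constraints and the before/after missingness percentages of Table 2 could be checked with pandas; the dtype groupings follow the Type column, while the function names are hypothetical helpers introduced for illustration.

import pandas as pd

TEMPORAL = ["acquisition_date", "commissioning_date", "warranty_end_date",
            "estimated_end_of_life_date", "intervention_date",
            "failure_date", "repair_date"]
NUMERICAL = ["Operational_Age", "downtime_hours", "repair_duration", "intervention_year"]

def missingness_report(df: pd.DataFrame) -> pd.Series:
    """Percentage of missing values per column, as reported in Table 2."""
    return (df.isna().mean() * 100).round(1)

def enforce_schema(df: pd.DataFrame) -> pd.DataFrame:
    """Coerce temporal and numerical columns; invalid entries become NaT/NaN."""
    out = df.copy()
    for col in TEMPORAL:
        out[col] = pd.to_datetime(out[col], dayfirst=True, errors="coerce")
    for col in NUMERICAL:
        out[col] = pd.to_numeric(out[col], errors="coerce")
    return out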
Table 3. Aggregated general statistics from six Moroccan hospitals.
Indicator | Estimated Value
Total number of records | 6816
Total number of tracked equipment | 780
Number of equipment categories | 410
Number of identified failure types | 2300
Total number of clinical departments covered | 30
Table 4. Comparison of raw vs. corrected indicators.
Indicator | Raw Value | Corrected Value | Unit
MTTR | 67 | 42 | Hours
DH | 102 | 68 | Hours/device·year
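The corrected values in Table 4 rest on the IQR-based treatment of extreme repair times illustrated in Figures 9 and 10. The sketch below applies standard Tukey fences (Q1 − 1.5·IQR, Q3 + 1.5·IQR); whether the pipeline drops or winsorizes the flagged values is not detailed here, so the clipping strategy shown is an assumption.

import pandas as pd

def iqr_correct(values: pd.Series, k: float = 1.5) -> pd.Series:
    """Winsorize values to the Tukey fences (assumed correction strategy)."""
    q1, q3 = values.quantile(0.25), values.quantile(0.75)
    iqr = q3 - q1
    return values.clip(lower=q1 - k * iqr, upper=q3 + k * iqr)

# Example (hypothetical column name): corrected repair times in hours
# mttr_corrected = iqr_correct(df["repair_duration"])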
Table 5. Grouped cross-validation (5 × 10 folds) performance.
Dataset | AUROC Mean ± SD (95% CI) | F1-Macro Mean ± SD | Accuracy Mean ± SD
Full (6816) | 0.65 ± 0.04 (0.61–0.69) | 0.47 ± 0.05 | 0.71 ± 0.03
Subset (2000) | 0.82 ± 0.03 (0.76–0.87) | 0.66 ± 0.04 | 0.79 ± 0.02
Values are reported as mean ± SD. AUROC confidence intervals are [0.61–0.69] for the full dataset and [0.76–0.87] for the subset.
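The grouped protocol behind Table 5 can be sketched as follows: folds are formed so that all interventions of a given device fall on the same side of the train/test boundary, which prevents leakage of device identity. The feature matrix, binary target, and Random Forest hyperparameters below are assumptions for illustration.

from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GroupKFold, cross_val_score

def grouped_cv_auroc(X, y, groups, n_splits=10):
    """AUROC under equipment-grouped cross-validation (single repetition)."""
    model = RandomForestClassifier(n_estimators=300, class_weight="balanced", random_state=0)
    cv = GroupKFold(n_splits=n_splits)
    scores = cross_val_score(model, X, y, groups=groups, cv=cv, scoring="roc_auc")
    return scores.mean(), scores.std()

# The 5 x 10 protocol of Table 5 would repeat this with re-shuffled group
# orderings and pool the scores; the exact repetition scheme is an assumption.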
Table 6. Temporal split performance (2014–2019 training, 2020–2022 testing).
Dataset | AUROC [95% CI] | F1-Macro | Accuracy
Full (6816) | 0.63 [0.61–0.67] | 0.46 | 0.70
Subset (2000) | 0.80 [0.77–0.83] | 0.65 | 0.78
Values correspond to a single temporal split (2014–2019 training, 2020–2022 testing). AUROC confidence intervals were [0.61–0.67] for the full dataset and [0.77–0.83] for the subset.
Table 7. Comparative performance of Random Forest vs. Logistic Regression under temporal split validation.
Classifier | Evaluation Metric | Full (6816) | Subset (2000) | Δ (Subset − Full)
Random Forest | AUROC | 0.65 [0.63–0.67] | 0.80 [0.77–0.83] | +0.15
Random Forest | F1-macro | 0.44 | 0.78 | +0.34
Random Forest | Accuracy | 0.70 | 0.81 | +0.11
Logistic Regression | AUROC | 0.61 [0.58–0.64] | 0.72 [0.69–0.76] | +0.11
Logistic Regression | F1-macro | 0.41 | 0.66 | +0.25
Logistic Regression | Accuracy | 0.68 | 0.74 | +0.06
Values represent single temporal split performance (2014–2019 training, 2020–2022 testing).
Table 8. Roll-forward AUROC by temporal split.
Training Period → Testing Period | Full (6816) | Subset (2000)
2014–2017 → 2018 | 0.64 | 0.79
2014–2018 → 2019 | 0.65 | 0.80
2014–2019 → 2020 | 0.62 | 0.81
2014–2020 → 2021 | 0.63 | 0.80
2014–2021 → 2022 | 0.65 | 0.79
Roll-forward experiments provide single AUROC values without confidence intervals, since each period corresponds to a unique temporal split.
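The roll-forward results in Table 8 correspond to training on all interventions up to a cutoff year and testing on the following year. The sketch below reproduces that protocol; the target column name and the feature list are hypothetical placeholders, and feature construction is assumed to follow the right-closed windows described in Table 1.

from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score

def roll_forward_auroc(df, feature_cols, target_col="failure_within_horizon"):
    """AUROC per test year under an expanding-window (roll-forward) split."""
    results = {}
    for test_year in range(2018, 2023):
        train = df[df["intervention_year"] < test_year]
        test = df[df["intervention_year"] == test_year]
        model = RandomForestClassifier(n_estimators=300, random_state=0)
        model.fit(train[feature_cols], train[target_col])
        proba = model.predict_proba(test[feature_cols])[:, 1]
        results[test_year] = roc_auc_score(test[target_col], proba)
    return results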
Table 9. Confusion matrix proportions under temporal split validation.
Dataset | TP | FN | TN | FP
Full (6816) | 0.12 | 0.73 | 0.18 | 0.07
Subset (2000) | 0.38 | 0.12 | 0.37 | 0.13
Values represent proportions under temporal split validation (2014–2019 training, 2020–2022 testing). TP = True Positives, FN = False Negatives, TN = True Negatives, FP = False Positives.