Review

ML-Based Maintenance and Control Process Analysis, Simulation, and Automation—A Review

1 Faculty of Computer Science, Kazimierz Wielki University, Chodkiewicza 30, 85-064 Bydgoszcz, Poland
2 Faculty of Mechanical Engineering, Poznań University of Technology, Marii Skłodowskiej-Curie 5, 60-965 Poznan, Poland
* Author to whom correspondence should be addressed.
Appl. Sci. 2024, 14(19), 8774; https://doi.org/10.3390/app14198774
Submission received: 20 August 2024 / Revised: 20 September 2024 / Accepted: 26 September 2024 / Published: 28 September 2024
(This article belongs to the Special Issue Automation and Digitization in Industry: Advances and Applications)

Abstract

Automation and digitalization in various industries towards the Industry 4.0/5.0 paradigms are rapidly progressing thanks to the use of sensors, the Industrial Internet of Things (IIoT), and advanced fifth-generation (5G) and sixth-generation (6G) mobile networks, supported by the simulation and automation of processes using artificial intelligence (AI) and machine learning (ML). Ensuring the continuity of operations under different conditions is becoming a key factor. One of the most frequently requested solutions is currently predictive maintenance, i.e., the simulation and automation of maintenance processes based on ML. This article aims to extract the main trends in the area of ML-based predictive maintenance present in studies and publications, critically evaluate and compare them, and define priorities for their research and development based on our own experience and a literature review. We provide examples of how BCI-controlled predictive maintenance, enabled by brain–computer interfaces (BCIs), can play a transformative role in AI-based predictive maintenance, allowing direct human interaction with complex systems.

1. Introduction

Automation and digitalisation in various industries are advancing rapidly as we move towards the paradigms of Industry 4.0 and 5.0, driven by sensor integration, the Industrial Internet of Things (IIoT), and advanced 5G and emerging 6G cellular networks. These technologies are further enhanced through the simulation and automation of processes using artificial intelligence (AI) and machine learning (ML) approaches. Ensuring the continuity of operations under various conditions is becoming increasingly critical for industrial systems. Predictive maintenance, which uses ML to simulate and automate maintenance processes, is currently one of the most sought-after solutions. This approach allows industries to predict potential equipment failures and proactively perform maintenance, minimising downtime and costs. This study identifies the main trends in ML-based predictive maintenance, especially human–machine communications based on brain–computer interfaces (BCIs). It aims to critically evaluate and compare the current trends, highlighting the most effective strategies and technologies. Furthermore, this study aims to define research and development priorities to further enhance the effectiveness of ML-based predictive maintenance. In this way, it contributes to the continuous evolution of industrial practices, in line with the assumptions of Industry 4.0/5.0.
ML-based maintenance and control process analysis involves using advanced algorithms to monitor, predict, and optimise the performance of industrial systems, including new control systems. In the area of interfaces, this is facilitated by both new techniques for transmitting control commands (hands-free) and visualisation methods that make it easier to understand and spot potential errors, even in complex industrial processes and installations. Through analysing historical and real-time data, ML models can identify patterns and anomalies that indicate potential failures or inefficiencies in machines and processes. This predictive ability enables proactive maintenance, thus reducing downtime and extending the life of equipment. Simulation models powered by ML enable the testing and optimisation of different maintenance strategies and control processes in a virtual environment, reducing the risks and costs associated with real-world experiments. Automation is enhanced by integrating ML algorithms into control systems, enabling dynamic adjustments and decision-making with little or no human intervention (for commands/scenarios requiring human approval). These automated systems can learn and adapt over time, improving their accuracy and efficiency as more data are collected. Integrating ML into maintenance and inspection processes also facilitates better resource allocation, as maintenance activities can be scheduled based on the actual condition of the equipment, rather than fixed intervals. Additionally, ML-based analytics helps to identify bottlenecks and inefficiencies in inspection processes, enabling continuous improvement and optimisation. Applying machine learning to maintenance and inspection processes results in smarter and more efficient operations while significantly reducing costs and increasing system reliability (Figure 1).
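As a simple illustration of the anomaly-detection step described above, the following minimal sketch fits an unsupervised model to historical sensor readings and flags unusual new readings. The column names, parameter values, and synthetic data are illustrative assumptions, not a description of any specific deployed system.

```python
# Minimal sketch: unsupervised anomaly detection on machine sensor data
# (column names, thresholds, and the synthetic data are hypothetical).
import numpy as np
import pandas as pd
from sklearn.ensemble import IsolationForest

# Historical sensor readings (vibration, temperature, pressure) for one machine.
rng = np.random.default_rng(42)
history = pd.DataFrame({
    "vibration_rms": rng.normal(0.5, 0.05, 10_000),
    "temperature_c": rng.normal(60.0, 2.0, 10_000),
    "pressure_bar": rng.normal(5.0, 0.1, 10_000),
})

# Fit an Isolation Forest on data representing normal operation.
model = IsolationForest(contamination=0.01, random_state=42)
model.fit(history)

# Score a batch of new readings; -1 marks a potential anomaly that could
# trigger a maintenance alert before an actual failure occurs.
new_batch = history.tail(100)
flags = model.predict(new_batch)
alerts = new_batch[flags == -1]
print(f"{len(alerts)} readings flagged for inspection")
```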
Key paradigms for analysing, simulating, and automating ML-based maintenance and control processes revolve around several fundamental concepts, which are outlined in Table 1.
Figure 2 shows the evolution of ML-based maintenance and control processes.
Figure 3 shows the proposed basic architecture of such a system.
This study provides a comprehensive overview of ML-based approaches for the analysis, simulation, and automation of maintenance and inspection processes. It focuses on identifying key trends and advances in the field of ML-based predictive maintenance, drawing insights from existing research and publications. The authors critically evaluate these trends, comparing different methodologies, algorithms, and implementations in the context of their effectiveness and real-world applications. Furthermore, we highlight the strengths and limitations of different ML techniques, offering a balanced perspective of their current and potential future impacts. Drawing on both a literature review and the authors’ professional experience, we define the key R&D priorities that need to be addressed to advance the field. Ultimately, we provide a roadmap for future innovations in predictive maintenance, highlighting the importance of continuous exploration of ML-based solutions. Interest in the adoption of BCI systems in predictive maintenance is becoming particularly relevant for cyber–physical–human systems (CPHSs), with possible applications in healthcare and various industries. Research into such applications of brain–computer interfaces (BCIs) is underway at the Augmented Reality Laboratory for Health Monitoring (ARHeMLab) at the University of Naples Federico II (Italy), among others, but is in the prototype phase.
Brain–computer interfaces (BCIs) play a transformative role in AI-based predictive maintenance, enabling direct human interaction with complex systems. BCIs allow operators to monitor and control maintenance processes more intuitively, using brain signals to interact with AI models. This improves decision-making through providing real-time input and adjustments based on human knowledge and situational awareness. In predictive maintenance, BCIs can tune AI predictions, thus optimising the planning and execution of maintenance tasks. Overall, BCI integration can lead to more accurate and efficient maintenance operations, reducing downtime and improving system reliability. BCI-based control offers hands-free technologies in such systems. Advances in BCI technology are driving the popularisation of this issue and the growing number of applications. An important element of these systems is the EEG (electroencephalographic) signal, captured using a device equipped with electrodes set in a specific hierarchy or arrangement in relation to the cortex of the brain. This signal represents the neuronal activity of a person using the BCI. The number of electrodes may vary from manufacturer to manufacturer [1,2]. The distribution of electrodes on the head corresponds to a certain standard that covers significant areas; the medical standard for EEG signal acquisition is a distribution of ‘10–20’, but there are devices with fewer electrodes, such as eight [3,4]. An extensive study of brain activity is not needed for BCIs as we are not typically looking for abnormalities but rather are using them as control or communication devices, less often for early diagnosis. EEG-based BCIs represent the majority, hence the interest of researchers, engineers, and clinicians in developing methods and techniques for analysing them.
The different brain rhythms captured on the EEG signal are characterised by differences in frequency and amplitude. The following rhythms are considered in the EEG signal itself: the delta rhythm (0.5–4 Hz), the theta rhythm (4–8 Hz), the alpha rhythm (8–12 Hz), the beta rhythm (12–35 Hz), and the gamma rhythm (above 35 Hz).
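For illustration, the following minimal sketch shows how a single-channel EEG trace can be decomposed into the rhythm bands listed above using standard band-pass filtering; the synthetic signal and filter settings are assumptions made for the example only.

```python
# Minimal sketch: mean power of a single-channel EEG trace in the classical
# rhythm bands (the synthetic signal and filter order are illustrative).
import numpy as np
from scipy.signal import butter, filtfilt

fs = 160.0  # sampling rate in Hz (matches the data set used later in this study)
bands = {
    "delta": (0.5, 4.0),
    "theta": (4.0, 8.0),
    "alpha": (8.0, 12.0),
    "beta": (12.0, 35.0),
    "gamma": (35.0, 79.0),  # upper edge kept below the Nyquist frequency
}

rng = np.random.default_rng(0)
eeg = rng.standard_normal(int(10 * fs))  # 10 s of synthetic single-channel EEG

def band_power(signal, low, high, fs, order=4):
    """Band-pass the signal and return its mean power in the given band."""
    b, a = butter(order, [low / (fs / 2), high / (fs / 2)], btype="band")
    filtered = filtfilt(b, a, signal)
    return float(np.mean(filtered ** 2))

for name, (low, high) in bands.items():
    print(f"{name:>5}: {band_power(eeg, low, high, fs):.4f}")
```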
To date, hundreds of algorithms have been identified for extracting EEG signal features and interpreting them (i.e., the identification of BCI user intent), especially when they are changed intentionally by the BCI user. Recent general advances in EEG classification (and BCI purposes specifically) have proved important for developing this group of solutions.
EEG signal classification using ML techniques has many applications, some of which are listed below.
  • Monitoring central nervous system health and neurological disorders: These techniques can be used for the early detection and monitoring of neurological disorders such as Alzheimer’s disease, Parkinson’s disease, and stroke. Machine learning algorithms analyse EEG biomarkers associated with disease-specific changes in brain activity, providing non-invasive diagnostic tools and tracking disease progression and treatment response. This enables early intervention, optimised treatment, and personalised healthcare management for people with neurological diseases.
  • Brain–computer interfaces: BCIs enable direct communication between the brain and external devices for early diagnosis, communication, and control, offering assistive technology solutions for people with mobility disabilities. Machine learning algorithms classify EEG signals associated with specific mental tasks or intentions, enabling users to control devices such as prosthetic limbs, computer cursors, or virtual keyboards through bioelectrical brain activity.
  • Neurofeedback: Neurofeedback uses real-time EEG signal classification to provide users with feedback on their brain activity patterns. Through presenting this feedback in a visual or audio format, individuals can learn to modulate brain activity and achieve their desired cognitive states such as relaxation, focus, or stress reduction.
  • Seizure detection and epilepsy monitoring: These techniques can facilitate the automated differentiation of normal brain activity from seizure-related EEG patterns, enabling early detection of epileptic seizures and providing timely warnings to patients or caregivers, increasing the safety and quality of life of people with epilepsy and facilitating rapid intervention and treatment.
  • Sleep stage classification: These techniques are essential for assessing sleep quality and diagnosing sleep disorders (insomnia, sleep apnoea, and others). Automated ML-based sleep stage classification systems provide valuable insights into sleep architecture and facilitate personalised treatment strategies to improve sleep health.
  • Cognitive state monitoring: ML algorithms analyse EEG functions related to attention, memory, and cognitive load, providing real-time feedback on users’ cognitive performance and mental states to help optimise task allocation, increase human performance, and prevent errors related to cognitive fatigue in people performing particularly important functions (e.g., pilots, flight controllers).
  • Emotion recognition: ML techniques can classify EEG signals associated with different emotional states, such as happiness, sadness, fear, and anger, for affective computing, human–computer interactions, and psychological research [1,2,3,4].
These applications demonstrate the versatility and potential impact of EEG signal classification using machine learning techniques in areas ranging from healthcare and rehabilitation to human–computer interaction and signal processing research. This demonstrates the novelty and contribution of this study: continued research and innovation hold the promise of better understanding brain function, developing innovative solutions for education and entertainment, and improving human health and wellbeing.
EEG signals have been studied for around 100 years, but new correlations are constantly being discovered, including, for example, through simultaneous EEG–fMRI studies that compensate for the shortcomings of EEG.
EEG signals play a key role in BCIs by capturing electrical brain activity, allowing users to interact with devices without physical movement. In hands-free control, BCIs enable operators to manage complex systems by providing a non-invasive method of controlling machines or equipment using brain signals alone. These technologies impact predictive maintenance by enabling the early detection of equipment anomalies, as operators can react faster with hands-free systems. BCIs increase the efficiency of monitoring tasks in industrial settings through the improvement of maintenance response times. Predictive maintenance can be further optimised by integrating BCI-based insights with machine learning models that predict failures or maintenance needs. As a result, BCI can significantly reduce downtime and operational costs through improving real-time problem detection and correction.
The topic under discussion is not popular or frequently reported in the literature: a review of six major bibliographic databases (PubMed, WoS, Scopus, DBLP, Cochrane, and EBSCO) with the keywords ‘EEG classification’ and related words yielded 464 publications (1956–2024), including 12 reviews (Figure 4). However, the publication growth rate is significant: 251 of these were published in the last 5 years (54.09%), and in the last 10 years, 353 (76.08%). Articles concerning ‘EEG’ and ‘classification’ as separate words numbered 12,354.
A similar literature review with the keywords ‘EEG classification’, ‘artificial intelligence/AI’, and related words yielded only 39 publications (2005–2024, Figure 5).
Another literature review with the keywords ‘EEG classification’, ‘machine learning/ML’, and related words yielded 92 publications (2011–2024, Figure 6).
These results indicate that there is still much to be explored in the topic of EEG signal classification using machine learning and that this topic is a developmental one which is worthy of further research.
This study aims to investigate the comparative performance of EEG signal classification using different neural network architectures and different learning methods. The novelty lies in comparing them with an actual EEG signal used for a specific purpose (BCI-mediated control) with this application in mind, as well as contributing to new methods for their use with minimal handling, promoting the automation of EEG signal use procedures (e.g., in Industries 4.0/5.0) through the use of hands-free technology.
Research on detecting disease anomalies in EEG signals using artificial intelligence methods has been ongoing for many years. Research on Alzheimer’s disease, dementia, and mild cognitive impairment is emerging [5,6,7]. Networks that examine the raw signal associated with specific human movements or images can help to develop hands-free tools without the need to calibrate the device for each user. Processing the signal to find similar features using deep neural networks can lead to the development of cloud-structured systems or the ability to use devices in a plug-and-play manner. With the wide range of networks available, they can adapt to the individual and be pre-trained with a specific architecture tailored to the hardware. Notably, a pre-built and pre-trained network does not consume much computing power during data classification.
Neural networks have high classification capabilities, and deep learning allows more accurate signal features to be extracted. A characteristic of deep learning networks is an arbitrary number and distribution of so-called hidden layers, in which the associated neurons operate in layers, resulting in more accurate data classifications or a larger data group for a given feature. Some networks have continuous learning capabilities, such as recurrent networks or reinforcement learning networks, such that they can continuously evolve in a system when provided with appropriate rewards for correct performance.
Deep neural networks, also known as deep networks, are distinguished by their ability to have multiple hidden layers. These layers can detect increasingly complex features in the data, which can lead to better classification and prediction performance. In deep networks, the learning process takes place by iteratively transforming the input data into increasingly abstract representations, allowing the network to learn complex patterns in that data.
Recurrent networks are a type of deep network with the unique ability to process data sequences and continuously learn. Due to their recursive structure, these networks can analyse sequential data, such as text or time series, and remember past contexts, which is crucial for understanding data temporally. In addition, reinforcement learning networks have learning mechanisms that allow them to dynamically adapt their behaviour based on responses and rewards from the environment, enabling them to continuously improve their performance. In this way, deep networks, including recurrent networks and reinforcement learning networks, are powerful tools in the field of artificial intelligence for adaptive and efficient data processing and decision-making in dynamically changing environments.
Convolutional networks are another network model used in signal analysis. They are divided into two-dimensional and one-dimensional types. Two-dimensional networks are most commonly used to analyse data arranged into matrices; characteristics are extracted within a specified window (e.g., 2 × 2), followed by further processing. The window is shifted by a certain value depending on its size, and the extracted characteristics are passed to a lower layer. After this extraction, the network can be extended with further layers, for which the data have to be flattened. One-dimensional convolutional networks work similarly, but the window has only one dimension. Regardless of the network type, the input channels (e.g., the red, green, and blue colour channels in image classification) must also be specified.
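To make the sliding-window mechanics concrete, the following minimal sketch defines the two convolutional variants in Keras; the layer sizes, window sizes, and input shapes are illustrative assumptions rather than architectures taken from the reviewed papers.

```python
# Minimal sketch of the two convolutional variants described above
# (all layer and input sizes are illustrative).
from tensorflow import keras
from tensorflow.keras import layers

# 2D variant: input folded into a matrix (e.g., channels x samples x 1);
# features are extracted in a sliding 2x2 window, pooled, then flattened.
cnn_2d = keras.Sequential([
    layers.Input(shape=(64, 320, 1)),
    layers.Conv2D(16, kernel_size=(2, 2), activation="relu"),
    layers.MaxPooling2D(pool_size=(2, 2)),
    layers.Flatten(),
    layers.Dense(2, activation="softmax"),
])

# 1D variant: the window slides along the time axis only.
cnn_1d = keras.Sequential([
    layers.Input(shape=(320, 64)),   # (time samples, channels)
    layers.Conv1D(16, kernel_size=3, activation="relu"),
    layers.MaxPooling1D(pool_size=2),
    layers.Flatten(),
    layers.Dense(2, activation="softmax"),
])

cnn_2d.summary()
cnn_1d.summary()
```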
Research is being conducted in the field of biomedical signal analysis based on convolutional neural networks and deep learning networks. This research is based on a database from MIT-BIH that is used to create a better-optimised alternative to existing approaches using EEG-like signals such as the ECG (electrocardiogram) [6]. In Alzheimer’s disease, this involves analysing the EEG signal and looking for specific patterns. The signal is taken from three electrodes [7], pre-processed, and classified using SVM (support vector machine) classifiers, the KNN (K-nearest neighbours) method, and a random forest classifier. Methods have also been created to optimise filter use on the signal and image [7,8,9]. The flower pollination algorithm (FPA) has been used in this context. Signals have also been processed by a recurrent neural network (RCNN) with MRI images as input, where the approach is extended by unsupervised learning methods such as fuzzy clustering [5].
Research is also being conducted on EEG signals coordinated with speaking. Linear models are considered and compared with non-linear models using deep learning, and an internal database is used. Several solutions have been presented using different network models based on recurrent layers, such as long short-term memory (LSTM), and on generative adversarial networks (GANs) [10]. Research is emerging considering the use of EEG signals for BCIs in IoT-enabled systems. The techniques and effectiveness of signal processing using amplitude-shift keying (ASK) and frequency-shift keying (FSK) are being investigated. In these investigations, neural networks serve as specific encoders and decoders of the transmitted wireless signals. Energy efficiency and data reconstruction fidelity in neural networks have also been accounted for [10].
It is also important to remove artefacts that arise from eye and muscle movement. To do so, researchers tend to use a deep learning (DL) model, referring to the collected and available signal before and after processing in the frequency domain. They use a spline filter or cascaded processing to de-noise this signal. The activation function is the tanh function [11]. Solutions are also being developed to de-noise any signal in real-time [12], as well as solutions that create multidimensional rows of data that are de-noised simultaneously to optimise the performance of the resulting model on large amounts of data [13].
Our proposed review of predictive maintenance focusing on the use of brain–computer interfaces (BCIs) brings novelty and significant contributions for several key reasons. BCIs introduce a new dimension, enabling direct communication between human operators and machines, improving decision-making in maintenance tasks within cyber–physical systems in Industry 5.0, where the human is at the centre of the system. This integration allows human expertise to be seamlessly combined with AI-based predictive maintenance systems, improving the interpretation of complex maintenance data. BCIs offer real-time, hands-free control and monitoring, which is particularly beneficial in environments where manual intervention is difficult or dangerous. This review sheds light on how human cognitive feedback can improve predictive models by identifying subtle machine behaviours that artificial intelligence alone can overlook. BCIs also enable faster response times in critical PdM scenarios by allowing operators to intervene directly based on mental commands. How the human brain’s ability to recognise patterns can complement AI, leading to more accurate and proactive maintenance strategies, is currently being investigated. The review highlights novel applications of BCIs, such as immersive virtual environments for remote maintenance and training. It discusses the challenges of integrating BCIs in industrial environments, such as signal accuracy, and suggests potential solutions. From this point of view, this review provides a fresh perspective on human–machine collaboration, making it different from other predictive maintenance reviews.

2. Materials and Methods

2.1. Data Set and Devices

In developing the bibliometric analysis methods for this review, we focused on exploring the research landscape in parameter selection strategies and the development of predictive maintenance systems based on artificial intelligence, as well as possible innovations and applications in this area. To this end, we used bibliometric methods as analytical tools to review scientific publications. To better guide our review and selection of publications, we formulated questions to help us uncover key aspects of this research area: Research question 1 (RQ1): The evolution of research topics/problems over time. RQ2: The geographical distribution of research/publications, authors, scientific institutions, and publications with the greatest impact. RQ3: Topics that may shape future research agendas. To achieve the above objectives, it was crucial to fully understand the current state of research, industry practices (combining science and engineering, manufacturing, and clinical applications), efforts, and future research directions in the pursuit of optimally selecting AI-based predictive maintenance systems for a specific clinical application. In this case, analysing and interpreting bibliometric data can significantly contribute to the ongoing discussion and build a sturdier foundation for further analysis and research (Figure 7).
We also chose to mainly use open sources to select new and easily accessible research from different fields, showing a broader perspective. Notably, not all publications were in English, and results published in India, China, and Arab countries may not be available in every language.

2.2. Methods

For bibliometric analysis, we used the tools built into the Web of Science (WoS) and Scopus databases and the Biblioshiny tool from the Bibliometrix R package (v.4.1.3). These tools are well suited to bibliometric and scientometric research, sometimes offering a more precise categorisation of conceptual/area/branch structures, authors, documents, and sources, and the various results are presented through graphs and information tables with a choice of analysis and visualisation. Given the complexity and interdisciplinarity of the topic, we have summarised the review in table form.

2.3. EEG Database

EEG databases store brainwave data from users interacting with their systems and can be integrated with BCIs for predictive maintenance applications. In practice, operators can use BCI systems to monitor machines while their brain signals are stored in an EEG database and analysed for real-time decision-making. For example, an operator using a BCI system can detect subtle signs of problems with a machine, and their EEG signals provide early warnings of system anomalies. These brainwave patterns are compared with pre-existing data in the EEG database to identify similar patterns associated with potential equipment failures. Through continuously analysing these signals, the BCI system can trigger preventive maintenance alerts before a failure occurs. Machine learning models can also be trained using the EEG database to improve predictive algorithms, increasing the accuracy of maintenance predictions. This application improves operational efficiency by reducing human error and providing intuitive hands-free control in complex maintenance environments.
We used an off-the-shelf database made available in the EDF file format (the European data format) [https://www.physionet.org/content/eegmmidb/1.0.0/#files-panel, accessed on 24 July 2024].
We tried to find as recent studies as possible, but their dates of creation did not vary much. Often, they reported the results of signal analysis from patients with Alzheimer’s disease and similar neurodegenerative conditions.
The data are presented in groups, so-called events (a window of 321 samples for all 64 channels), with 32 blocks for left-hand movement, 32 for right-hand movement, and 52 for the resting state between these movements.
The signal sampling rate was 160 Hz for 64 channels. The data included 109 test cases, in which individual actions were performed, such as opening and closing the eyes, clenching the hands or feet, and imagining these movements. Each signal recording lasted 120 s and was repeated 3 times. Data were captured by an EEG cap with 64 channels (Figure 8) at a frequency of 160 Hz in a 10-10 system (excluding electrodes Nz, F9, F10, FT9, FT10, A1, A2, TP9, TP10, P9, and P10). Records nos. 88, 92, and 100 were discarded because their number of time samples differed from that of the others (i.e., from 321).
The data were filtered and normalised to better fit model learning. Artefacts were already filtered out of the data used in this study and, thus, the signal was clean. After normalisation, the differences in the range of motion-related and static signal values were very large. The signal was cut into fragments, and depending on the signal under study, the ratio of motion-related signal groups to noise differed. Given the characteristics of the data, standard normalisation was used.
The data were prepared using the MNE library in Python. Each signal record had sectors where the action of clenching and opening the right or left hand occurred and the time of inactivity was marked. Each signal was placed in a data group related to rest; left-hand clenching and opening; and right-hand clenching and opening. The division of the signal varied; for example, in the first set, there was a ratio of 52 parts for the inactivity signal, 32 for the left-hand signal, and 32 for the right-hand signal. The array appeared in a three-dimensional format with the following dimensions: the number of data, the number of channels, and the number of samples, i.e., 52, 64, and 321, respectively.
Based on the ratio of the resting signal to the signal resulting from activity, the left- and right-hand data were combined and labelled with the same value. The number of individual data sources was 106; they were grouped into a combined data array and corresponding 0 or 1 labels, with 12,666 events of dimensions 64 × 321 each.
The data were split into a training and a testing set in a ratio of 80% training to 20% testing. Specific groups were divided separately (i.e., 80% of the idle signal and 80% of the left-hand/right-hand movement data). The remaining data were used as new data for the testing process.
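The following minimal sketch illustrates the data preparation described in this subsection using the MNE library; the file name, event labels, and epoch window are illustrative assumptions based on the public data set, not an exact reproduction of the processing pipeline used here.

```python
# Minimal sketch of the data preparation with MNE (file path and annotation
# labels T0/T1/T2 are assumptions based on the public motor-imagery data set).
import numpy as np
import mne
from sklearn.model_selection import train_test_split

raw = mne.io.read_raw_edf("S001R03.edf", preload=True)   # one motor run (hypothetical file)
events, event_id = mne.events_from_annotations(raw)      # rest/left/right markers

# 0..2 s windows at 160 Hz give 321 samples per event, for all 64 channels.
epochs = mne.Epochs(raw, events, event_id=event_id, tmin=0.0, tmax=2.0,
                    baseline=None, preload=True)
X = epochs.get_data()                                     # shape: (n_events, 64, 321)

# Standard normalisation per channel, as described in the text.
X = (X - X.mean(axis=-1, keepdims=True)) / X.std(axis=-1, keepdims=True)

# Rest (T0) labelled 0; left- and right-hand events merged and labelled 1.
y = (epochs.events[:, -1] != event_id["T0"]).astype(int)

# 80/20 split, stratified so each class keeps the same ratio in both sets.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42)
print(X_train.shape, X_test.shape)
```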

2.4. Methods and Tools Used

The architecture was inspired by other researchers using convolutional networks interleaved with max-pooling layers. For some deep learning solutions, 3 deep layers are used; however, after extending the network with more layers or increasing the number of neurons per layer, the results were far from satisfactory, oscillating around an accuracy of 50%. These results were not considered in this study. In contrast, varying the convolutional network is an approach for determining whether reducing the number of layers improves its performance.
Figures with layer names and the numbers of neurons are shown in the cited articles, which is sufficient for replication. The cited articles used convolutional networks; we did not try to replicate them as presented but, instead, tried to develop new solutions, which, in our opinion, increased the novelty and contributions of our article.
The project was performed on a computer with the following specifications: AMD Ryzen 5 4500U with Radeon Graphics 2.38 GHz and 16 GB RAM. The libraries used were scikit-learn [14], MNE [15], NumPy [16], and Keras [17], and the Jupyter [18] and Anaconda [19] environments were used.
This tool selection enables the reproducibility of this study and the rapid development of the proposed methodology (e.g., by replacing individual tools with newer ones as they become available).

3. Results

After analysing the content of the main databases, we searched for research articles in two major databases: WoS and Scopus. This choice was dictated by the wide range of research results they contain and their detailed data and analytical tools, enabling comprehensive analyses. This is particularly beneficial when conducting interdisciplinary bibliometric analyses and assessing the impact of research results, as was the case for us. Through creating advanced queries tailored to our research objectives, we were able to apply filters to select relevant publications. In this way, we limited the search to articles in English. Later in the study, we manually reviewed the articles, excluding some (including duplicate articles) to match our research objectives, resulting in the final sample size shown in Figure 9. The WoS search was performed using ‘Topic’, a search using the following set: title, abstract, keyword plus, and other keywords. The Scopus search used the following set: article title, abstract, and keywords. The resulting publications were further screened (Figure 9).
We started our bibliometric analysis with descriptive statistics to understand the characteristics of the data set of the selected group of scientific publications, including leading authors, research institutions, subject areas, and emerging trends in the subject area. This allowed us to identify the evolving vocabulary and major research developments. Examining changes in trends over time allowed us to see changes in the direction/mainstream of research over time and the type and dynamics of the area, including the categorisation of publications into thematic clusters and a picture of the links between research themes. This allowed for easier further identification of key themes and sub-domains, including emerging research directions. A summary of the bibliographic analysis is presented in Table 2. Many original (research) articles were observed. The largest number of publications was related to studies in engineering and computer science. More than half of the leading topics were on technological issues, and a smaller proportion were on implementation issues. The leading countries conducting and funding research were large states in Asia, North America, and Europe, which was also reflected in the leading affiliations.

3.1. Data Sources for ML-Based Predictive Maintenance Systems

The key data sources for ML-based predictive maintenance systems include the following:
  • Real-time data from IoT sensors embedded in machines, such as vibration, temperature, pressure, and humidity, provide critical inputs for monitoring equipment health and detecting anomalies;
  • Historical maintenance records (i.e., records of previous maintenance activities), including repairs, part replacements, and scheduled servicing, help in understanding failure patterns and the effectiveness of previous interventions;
  • Operational data (i.e., data on how machines are used), including time in use, load, speed, and environmental conditions, help in understanding wear patterns;
  • Failure records (i.e., detailed records of equipment failures), including time, location, and nature of failures, are essential for training models to predict similar events;
  • Environmental data (i.e., external conditions), such as temperature, humidity, and exposure to corrosive substances, can affect machine performance and are used to adjust predictions accordingly;
  • Original equipment manufacturer (OEM) data, including specifications, design tolerances, and recommended maintenance schedules, provide a basis for predicting when maintenance should be performed;
  • Operational manuals and guidelines can provide information on recommended operating limits and maintenance strategies that can be used to align forecasts with best practices;
  • Regular inspection reports, often including subjective assessments and photos, help to identify early signs of wear or damage that sensors do not pick up;
  • User feedback (operators and technicians), often qualitative, regarding unusual sounds, vibrations, or other indicators of potential problems can be integrated into predictive models;
  • External data sources, such as industry benchmarks, economic factors, or supply chain disruptions, can impact maintenance schedules and asset availability and are, therefore, included in predictive models.
These data are used both for initially training and preparing systems for operation, as well as for further training the system during normal operations to account for wear and tear and experience from current operations, which are often not immediately noticeable to humans (e.g., an impact that extends over time).
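As an illustration of how two of these data sources can be fused into a single training table, the following minimal sketch aggregates raw sensor streams and labels each hourly window using the failure records; the file names, column names, and the 24 h labelling horizon are hypothetical assumptions.

```python
# Minimal sketch: fusing IoT sensor streams with failure records
# (file names, column names, and the 24 h horizon are hypothetical).
import pandas as pd

sensors = pd.read_csv("sensor_readings.csv", parse_dates=["timestamp"])
failures = pd.read_csv("failure_log.csv", parse_dates=["failure_time"])

# Hourly aggregates of the raw sensor streams per machine.
features = (sensors
            .groupby(["machine_id", pd.Grouper(key="timestamp", freq="1h")])
            [["vibration_rms", "temperature_c"]]
            .mean()
            .reset_index())

# Label each hour as "failure within the next 24 h" using the failure records.
failures["label_start"] = failures["failure_time"] - pd.Timedelta(hours=24)

def will_fail_soon(row):
    # True if any recorded failure of this machine falls within 24 h of this hour.
    mask = ((failures["machine_id"] == row["machine_id"]) &
            (failures["label_start"] <= row["timestamp"]) &
            (row["timestamp"] <= failures["failure_time"]))
    return bool(mask.any())

features["will_fail_24h"] = features.apply(will_fail_soon, axis=1)
print(features.head())
```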

3.2. ML-Based Predictive Maintenance System Architecture

The key components of an ML-based predictive maintenance architecture include the following:
  • The data ingestion layer collects and aggregates data from various sources, including sensors, maintenance logs, and external databases. It supports real-time streaming and batch data transfers, ensuring that all relevant information is available for processing;
  • The data storage layer is a robust and scalable storage solution, often a combination of relational databases, data lakes, and distributed file systems, and is used to store massive amounts of raw and processed data. This layer supports both structured and unstructured data formats;
  • Before analysis with ML, the data need to be cleansed, normalised, and transformed by a data preprocessing module, which handles missing data, extracts features, and reduces dimensionality to prepare them for model training and real-time prediction;
  • The feature engineering layer creates meaningful features from the raw data, which can increase the predictive power of machine learning models (through time series decomposition, aggregation, and anomaly detection);
  • ML models form the core of the system, comprising various ML algorithms, including supervised, unsupervised, and deep learning models trained on historical data to predict potential failures or maintenance needs;
  • Model training and validation involve splitting the data into training, validation, and test sets, with cross-validation and hyperparameter tuning used to optimise performance, ensuring that the model generalises well to unseen data;
  • Once the model is trained, a real-time inference engine is deployed to make predictions in real or near real-time: it continuously analyses incoming data, comparing it with the model’s predictions to identify potential issues;
  • When the predictive model detects an anomaly or predicts a potential failure, a control, communication, alerts, and notifications system sends alerts to maintenance teams or triggers automated actions via dashboards, emails, or integrated maintenance management systems;
  • The feedback loop continuously monitors model performance and incorporates new data, allowing the model to be retrained and updated for improved accuracy over time;
  • The user interface and dashboard visualise machine health and provide insights into forecasts and maintenance schedules, enabling operators and managers to make informed decisions based on real-time and historical data.
The way these components and data flows are assembled may vary depending on the complexity of the supervised system (Figure 10).
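A minimal sketch of how several of these layers can be wired together is given below; the class, feature names, alert threshold, and synthetic training data are illustrative assumptions, not components of any specific production system.

```python
# Minimal sketch: preprocessing, ML core, real-time inference, and alerting
# combined in one object (all names and values are illustrative).
from dataclasses import dataclass
import numpy as np
from sklearn.ensemble import RandomForestClassifier

@dataclass
class PredictiveMaintenancePipeline:
    model: RandomForestClassifier
    alert_threshold: float = 0.8

    def preprocess(self, raw_reading: dict) -> np.ndarray:
        # Data preprocessing layer: fix feature order and fill missing values.
        keys = ("vibration_rms", "temperature_c", "pressure_bar")
        return np.array([[raw_reading.get(k, 0.0) for k in keys]])

    def infer(self, raw_reading: dict) -> bool:
        # Real-time inference engine: failure probability for one reading.
        x = self.preprocess(raw_reading)
        p_fail = self.model.predict_proba(x)[0, 1]
        # Alerts and notifications layer: trigger when the risk is high enough.
        return p_fail >= self.alert_threshold

# Train the core ML model on (synthetic) historical data, then run inference.
rng = np.random.default_rng(1)
X_hist = rng.normal(size=(500, 3))
y_hist = (X_hist[:, 0] > 1.0).astype(int)   # toy failure rule for illustration
pipeline = PredictiveMaintenancePipeline(
    model=RandomForestClassifier(n_estimators=50).fit(X_hist, y_hist))

print(pipeline.infer({"vibration_rms": 2.5, "temperature_c": 0.1}))
```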

3.3. ML-Based Predictive Maintenance Systems Operation

The ML-based predictive maintenance system operation algorithm is composed as follows:
  • Data collection;
  • Data preprocessing;
  • Feature engineering;
  • Model training;
  • Model validation;
  • Real-time monitoring;
  • Forecasting and decision-making;
  • Alert generation;
  • Performing maintenance actions;
  • Model updating.
The algorithm begins with continuous data collection from various sources, such as sensors on equipment, historical maintenance logs, and environmental conditions, providing a comprehensive data set for analysis. The raw data are then cleaned, normalised, and transformed to remove noise, handle missing values, and convert them into an appropriate format for model input. This step may involve feature extraction and selection. The algorithm identifies and constructs significant features from the pre-processed data, such as calculating running averages, trends, or anomalies, to improve the model’s predictive capabilities. Using the prepared data, the algorithm trains machine learning models such as regression models, decision trees, or neural networks to learn patterns and relationships that precede equipment failures or maintenance needs. The algorithm tests the trained model on a separate validation data set, adjusting hyperparameters and evaluating performance metrics to ensure that the model generalises well to new, unseen data. The deployed model continuously monitors incoming data in real time, applying the learned patterns to detect early signs of potential failures or anomalies that may indicate a need for maintenance. When the model predicts an impending failure or maintenance need, it generates a prediction, often including a confidence score, that informs decisions about maintenance actions. Based on the model’s predictions, the algorithm triggers alerts or notifications to maintenance teams indicating the urgency and nature of the potential problem, enabling timely intervention. Maintenance teams respond to the alerts by performing inspections, repairs, or replacing parts as needed, guided by the algorithm’s predictions. Post-maintenance data are fed back into the system, allowing the algorithm to learn from the results of the actions taken. This continuous feedback loop improves the model, increasing its accuracy and reliability over time.
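The following minimal sketch illustrates the model training and validation steps of this algorithm (cross-validated hyperparameter tuning followed by evaluation on held-out data); the features, the toy failure label, and the parameter grid are assumptions made for the example.

```python
# Minimal sketch of the training and validation steps in the algorithm above
# (the features and "failure" label are synthetic stand-ins).
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import GridSearchCV, train_test_split

rng = np.random.default_rng(2)
X = rng.normal(size=(2000, 5))                     # engineered features
y = ((X[:, 0] + 0.5 * X[:, 1]) > 1.2).astype(int)  # toy failure label

X_train, X_val, y_train, y_val = train_test_split(
    X, y, test_size=0.25, stratify=y, random_state=0)

# Hyperparameter tuning with cross-validation on the training portion only.
search = GridSearchCV(
    GradientBoostingClassifier(random_state=0),
    param_grid={"n_estimators": [100, 200], "max_depth": [2, 3]},
    cv=5, scoring="f1")
search.fit(X_train, y_train)

# Evaluate on held-out data; in deployment, the same model would score
# incoming readings and raise an alert above a chosen probability threshold.
print(search.best_params_)
print(classification_report(y_val, search.best_estimator_.predict(X_val)))
```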
Predictive maintenance architecture will evolve significantly with the integration of BCIs. A new layer will be introduced to capture and process EEG signals from operators, linking human cognitive data directly to the maintenance system. This will require incorporating EEG sensors and real-time data-processing modules to interpret brainwave activity. The BCI system will need to communicate with existing IoT sensors and machine learning algorithms, creating a hybrid model where human insights complement automated monitoring tools. Data storage and analysis will be extended to include not only machine performance data but also EEG-based patterns that indicate how the operator perceives the state of the machine. The feedback loop will be faster because operators will be able to trigger maintenance actions through brain activity, bypassing the need for manual control. Machine learning models will evolve, combining cognitive and sensor data and improving prediction accuracy with more complex decision-making algorithms. User interfaces will also change, providing operators with visual or audible feedback from their BCI interactions to increase control over maintenance processes. Security measures will need to be improved to protect sensitive EEG data, and new protocols will be required to handle system errors or cognitive overload. Operator training programmes will be revised to include BCIs, ensuring that they effectively contribute to predictive maintenance with brain-driven systems.

3.4. BCI-Based Control

In order to verify the correctness of the signal classification, five trials were performed in which the training data were varied, and the model was trained anew each time. Training parameters such as the batch size and number of epochs were kept the same: the batch size was 128 and the number of epochs was 50. Training was stopped early if the training accuracy had not improved by at least 0.1% since the previous epoch. The data were presented in random arrangements and in a random order.
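A minimal sketch of this training configuration in Keras is shown below; the dense model is only a placeholder and the data are random, but the batch size, epoch limit, and early-stopping criterion follow the settings just described.

```python
# Minimal sketch of the training configuration (placeholder model, toy data;
# only the batch size, epoch limit, and stopping rule follow the text).
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

model = keras.Sequential([
    layers.Input(shape=(64 * 321,)),      # flattened epoch, placeholder size
    layers.Dense(64, activation="relu"),
    layers.Dense(2, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Stop early when training accuracy fails to improve by at least 0.1%.
early_stop = keras.callbacks.EarlyStopping(
    monitor="accuracy", min_delta=0.001, patience=1, mode="max")

X_train = np.random.randn(200, 64 * 321).astype("float32")  # toy data
y_train = np.random.randint(0, 2, size=200)
model.fit(X_train, y_train, batch_size=128, epochs=50, shuffle=True,
          callbacks=[early_stop], verbose=0)
```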
A neural network was created to classify the data presented (Figure 11). However, when attempting to classify a larger set, overtraining occurred, and its effectiveness dropped to 0. A unitary approach was therefore tried, in which a network was trained for each individual. The data for this network were flattened, and the arrays were merged together; in effect, they were rearranged so that, for each event, all the channels were merged into a single vector. To avoid memory overload, the channels were restricted to those over the frontal lobe, i.e., FPZ, FP1, FP2, AF7, and AF3.
The highest value achieved was 73.49%. The mean value of this classification was 50.265% ± 10.94%, and the minimum was 23.29%. The results of this network are presented in Figure 12.
In order to improve performance, a one-dimensional convolutional network was created, and the accuracy achieved for 50 data sources is shown in Table 3. It had a better chance of improving classification performance because of its architecture: the shape of the input data was better suited to this type of network, as shown in Figure 13.
The learning conditions that achieved the highest accuracy for this network are shown below. Figure 14 shows the classification accuracy on the learning data, and Figure 15 shows the validation loss. A decreasing validation loss means that the network is learning correctly; the smaller the value, the better the network is at predicting the outcome for off-test data (Table 3).
Expanding the training data yielded the values shown in Table 4; there was a significant improvement in signal classification.
Another network (Figure 16) was created to compare an alternative architecture of a similar type. The results for 50 data sources are presented in Table 5 and those for 100 data sources in Table 6.
The achieved performance was slightly worse than the previous architecture.
Another model (Figure 17) was presented to show whether it would do a better job of classifying EEG data. A recurrent neural network was created, which had to have a reduced number of epochs due to its much better learning performance than the others.
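For reference, the following minimal sketch shows a recurrent classifier of the kind compared here; the layer sizes and dropout rate are illustrative assumptions and do not reproduce the exact model of Figure 17.

```python
# Minimal sketch of a recurrent EEG classifier (illustrative sizes only).
from tensorflow import keras
from tensorflow.keras import layers

rnn = keras.Sequential([
    layers.Input(shape=(321, 64)),          # (time steps, EEG channels)
    layers.LSTM(64),
    layers.Dropout(0.3),
    layers.Dense(2, activation="softmax"),
])
rnn.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
            metrics=["accuracy"])

# Recurrent models converged much faster here, so far fewer epochs are needed
# (the text reports roughly 3-10 epochs depending on the amount of data).
# rnn.fit(X_train.transpose(0, 2, 1), y_train, batch_size=128, epochs=10)
rnn.summary()
```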
Table 7 and Table 8 show the learning, network, and classification results; information on the learning parameters has also been added. The process was stopped early to avoid overtraining the network: for larger data sets, around 10 epochs were used, while for smaller ones, only 3 epochs were enough.
Figure 18 shows the results of cross-validation for the best solution (recurrent neural networks).

4. Discussion

ML-based predictive maintenance systems offer a number of benefits, such as reducing unplanned downtime by predicting failures before they occur, leading to significant cost savings and extended equipment life. These systems can optimise maintenance schedules, ensuring that maintenance is performed only when necessary, reducing waste and labour costs. By analysing large data sets from different sources, they can uncover complex patterns and correlations that traditional methods may miss, improving decision-making. They also support real-time monitoring, providing immediate insights and enabling a rapid response to emerging issues. However, these systems also have drawbacks, such as high upfront investments in data collection infrastructure, sensors, and computing resources. The effectiveness of the system is highly dependent on the quality and quantity of data, making data management and pre-processing critical and often difficult tasks. There is also the risk of model inaccuracy or false positives, which can lead to unnecessary maintenance or missed failures, potentially reducing trust in the system. In addition, the complexity of implementing and maintaining these systems requires specialised knowledge and ongoing support, which can be a barrier for some organisations. Finally, the constant evolution of models to adapt to new data can be costly and require ongoing investment to ensure long-term reliability and accuracy [20,21,22,23,24,25].
BCIs offer a direct communication path between the human brain and external systems, making them an innovative focal point for AI-based predictive maintenance in the Industry 5.0 paradigm. Industry 5.0 emphasises human–machine collaboration, and BCIs enable seamless interaction between human operators and AI systems, improving the efficiency of decision-making. BCIs can help workers monitor and control machines in a more intuitive way, enabling real-time feedback on system performance based on human cognitive responses. AI can analyse brain signals from BCIs to predict and identify potential equipment failures before they occur, improving predictive maintenance. This reduces downtime, extends machine life and minimises costs. BCIs combined with AI enable adaptive maintenance strategies where human input is integrated into the decision-making process, creating more personalised maintenance protocols. They allow operators to remotely monitor complex systems with cognitive feedback, even in dangerous or inaccessible environments. In addition, AI-based BCIs can improve the training of maintenance personnel by interpreting brain activity to identify learning gaps and suggest improvements. The combination of BCIs and AI is in line with Industry 5.0’s goal of enhancing human creativity and knowledge while optimising machine performance. Ultimately, AI-based BCI systems can enable smarter, more responsive and human-centred maintenance processes in smart factories.
The present study showed that signals from hand movement can be classified at a level of around 90%. This does not yet match the level of signal classification achieved in other works. The recurrent network tended to be the best in terms of both training time and classification results [26,27,28,29].
In this case, the recurrent network classifying the signal achieved better results than the other type of network, the convolutional network. This justifies further analysis of the two-dimensional recurrent network architecture for movement classification instead of convolutional networks.
The dense (fully connected) network is also worth mentioning. It cannot be ruled out that enlarging the learning set, especially in the case of movement-related data, would have a more favourable impact on classification accuracy. When considering the use of ANNs to classify signal sources, results of 75% are possible in some cases. A network with deep layers has the advantage that training, regardless of the learning parameters, takes a very short time. This is also due to its lower complexity. In the case of the convolutional network, the learning time was about 10 s per epoch, which gives 500 s per session, or more than 8 min. Here, the RNN performed best, where the transition time was about 13 s. To optimise learning, the data could be fed into the network at every epoch, rather than stored in computer memory. Note that the data were not very complex and that the differences between rest and motion signals are significant [30,31,32].
In terms of previous implementations of CNNs with multiple layers, the articles discussed here used neural networks with similar structures. The authors used a 3D CNN that classifies collected data converted from 1D to 3D images. However, that article focused on the issue of the degree of workload. Each feature is extracted in 3D layers, where it is later augmented by a weighting factor. A short-term memory algorithm was also used in the study to predict the focus time based on task difficulty. The authors achieved 90.8% accuracy on their own set of signals and 93.9% on a public database. To investigate human performance under load, they performed a Sternberg task, i.e., memorising combined letter and number stimuli during an encoding phase in which the stimuli consisted of combinations of a letter (A to Z) and a number (0 to 9). The network architecture used, among other things, max-pooling operations between the separate layers of the convolutional network [33].
For the RNN-LSTM, the papers published so far described a way to classify people suffering from epilepsy, targeting the diagnosis of generalised as well as focal epilepsy. In the proposed network architecture, it was possible to distinguish a recurrent network as well as a layer related to short-term memory (RNN-LSTM). The proposed algorithm achieved an accuracy of 96.7% [34]. For the RNN (GRU) network, the authors, in order to improve the model they presented, developed an algorithm to randomly combine the training data to improve its classification accuracy. In that paper, they achieved an accuracy of 82.92%. The authors of that paper focused on eye movement data using different eye stimulation techniques. A variant of the GRU recurrent network based on a 3-layer complex network was used [35]. A deep learning neural network approach was also presented, achieving results of around 90%. The neural networks presented accepted multi-column data as input. The authors cited the long learning process of 1.6 h as one of the disadvantages. A comparison of the LSTM and GRU algorithms for correctly classifying the EEG signal during eyebrow-raising, biting, or blinking movements and the resting state showed that the LSTM algorithm proved superior on the real-world data sets prepared by the researchers [36].
The semi-supervised fusion of multi-sensor information adapted to an embedded low-rank tensor graph under very low labelling rate conditions has also been explored. This approach focuses on semi-supervised learning for intelligent diagnostics using data from multiple sensors when only a small amount of labelled data are available. The method combines signals from multiple sensors into a unified model, using a very small number of labelled signals to develop a robust learning system. Low-rank tensor learning, constrained by the nuclear norm of the tensor, helps to capture relevant features from the combined sensor data. In addition, manifold regularisation is used to infer potential label information from a large amount of unlabelled data. The combined techniques aim to improve diagnostic accuracy under extremely limited labelling conditions, improving predictive maintenance. Overall, the approach maximises the utility of unlabelled data through semi-supervised learning [37]. An intelligent method for diagnosing aero-engine bearing faults with high accuracy under limited samples addresses the challenge of this diagnosis when only limited data samples are available. It introduces RCMPhE (a feature extraction technique) to effectively capture fault features from aircraft engine bearing signals. BO-SVM (Bayesian optimisation–support vector machine) is used to classify and identify different fault states in bearings. The combined approach offers a highly accurate fault diagnosis system despite the limitation of limited training data. The effectiveness of the proposed method has been demonstrated through tests on real aero-engine bearing data sets, demonstrating its application in real maintenance scenarios. The method optimises the predictive maintenance of aircraft engines, offering highly accurate fault detection with minimal data [38]. The concept of flexible tensor singular value decomposition and its application to multi-sensor signal fusion processing introduces a flexible tensor singular value decomposition (t-SVD) technique specifically adapted to multi-sensor data fusion. This method defines a flexible mode-⟨p,q⟩ product for tensor manipulation, enabling more adaptive data transformations. Using this new tensor decomposition structure, a new multi-sensor signal processing algorithm was developed that adapts to different data patterns more efficiently than traditional methods. In tests, this new algorithm significantly outperforms existing signal processing techniques, especially in handling complex, multimodal sensor data. Its flexibility makes it particularly suitable for predictive maintenance tasks where data from multiple sensors must be combined and analysed [39].
Multiagent Soft Actor-Critic Aided Active Disturbance Rejection Control (MASAC-ADRC) for DC solid-state transformers (dcSSTs) focuses on uncertainty estimation and compensation in modular systems. It optimises controller gains adaptively using the MASAC method, increasing adaptability to changing environmental conditions without relying on pre-existing data sets. A neural network is used to map optimal ADRC parameters from measured states, making it suitable for real-time applications with dynamic environmental changes. The MASAC-ADRC method achieves excellent dynamic performance in terms of overshoot, settling time, and mean squared error, with an improvement of more than 50% over traditional methods. However, the complexity of neural network training and the computational load may be a limitation for real-time applications in some systems [40]. The multi-agent twin-delayed deep deterministic policy gradient (MA-TD3) algorithm for autonomous input voltage sharing control and triple-phase-shift modulation in ISOP-DAB converters in DC microgrids formulates the control problem as a Markov game with multiple deep reinforcement learning (DRL) agents. The MA-TD3 algorithm trains these agents offline, allowing them to make real-time control decisions during online operations. The method is highly adaptive, finding optimal combinations of modulation variables in environments with uncertainty and stochastic behaviour, even without accurate model information. It has the advantage of being able to balance the input voltage sharing (IVS) and minimise the current load in different sub-modules. However, the reliance on complex DRL training processes and the need for extensive offline training data may limit its immediate applicability in rapidly changing environments [41]. Both methods excel in adaptive control under uncertainty, with MASAC-ADRC offering a better dynamic response and MA-TD3 excelling in decentralised control. A key advantage of MASAC-ADRC is real-time optimisation without predefined data sets, while MA-TD3 excels in managing stochastic environments. However, both approaches face challenges in terms of computational complexity and the need for robust offline training or neural network calibration.

4.1. Limitations

Despite their advantages, machine learning-based analysis, simulation, and automation of maintenance and inspection processes face several limitations. One major limitation is the need for large, high-quality data sets to effectively train machine learning models; without sufficient data, models can produce inaccurate or unreliable results. Another challenge is the complexity and cost associated with implementing and maintaining these systems, as they often require specialised knowledge and infrastructure. Machine learning models can also be difficult to interpret, leading to a 'black box' problem in which it is unclear how decisions are made, which can hinder trust and adoption. Additionally, these systems can struggle to adapt in highly dynamic environments where conditions change rapidly, requiring continuous retraining and updating of models. There is also the risk of overfitting, where models become too closely fitted to the training data and perform poorly on new, unseen data. Integrating machine learning into existing industrial processes can be complex, requiring significant time and resources for customisation and integration. In addition, machine learning models are susceptible to errors and biases in the data, which can lead to skewed or unfair decision-making outcomes. Reliance on digital infrastructure and connectivity introduces cybersecurity risks, as these systems can be targeted by attacks [42,43,44,45,46,47]. The ethical and regulatory implications of automating decisions and processes in critical industrial environments continue to evolve, creating potential legal and compliance challenges. With the current state of science and technology, there are several limitations in classifying the EEG signal using ML techniques:
  • Limited spatial resolution: EEG electrodes are typically placed on the scalp, providing limited spatial resolution compared with other neuroimaging techniques such as fMRI or MEG. This makes it difficult to encode detailed spatial information about brain activity into the feature vectors/matrices used for ML, which can reduce the discriminative power of EEG-based ML models.
  • Temporal variability of the EEG signal: the signal can exhibit significant temporal variability, both within and between individuals, arising from changes in attention, fatigue, and emotional state. This makes it difficult to develop unambiguous, reproducible, and generalisable classification models.
  • Noise and artefacts: muscle activity, eye blinks, electrode movement, and environmental interference can significantly degrade the quality of EEG signals, making accurate classification by ML algorithms difficult (a minimal preprocessing sketch addressing this issue follows this list).
  • Limited training data: obtaining labelled EEG data sets to train ML models can be challenging due to the high cost and complexity of EEG data acquisition. Consequently, limited training data can lead to over-fitting or poor generalisation performance of ML models, especially for complex brain states or individual differences.
  • Interpretability: ML models can achieve high classification accuracy, but interpreting the underlying neural mechanisms or the features driving classification decisions remains a challenge; a more complete understanding of how EEG features relate to specific cognitive processes or brain states is needed to translate classification results into practical insights in real-world applications [48,49,50,51].
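As a minimal illustration of how the noise and artefact issue listed above is typically handled in practice, the following sketch band-pass filters EEG recordings and removes ocular components with ICA, using the MNE library and the public PhysioNet motor movement/imagery data [29]; the chosen subject, runs, reference channel, and number of ICA components are arbitrary assumptions made only for illustration.

```python
# Minimal MNE-based preprocessing sketch: band-pass filtering plus ICA-based
# removal of ocular components on the PhysioNet motor movement/imagery data.
import mne
from mne.datasets import eegbci
from mne.preprocessing import ICA

paths = eegbci.load_data(1, [3, 7, 11])                  # subject 1, three runs (arbitrary)
raw = mne.concatenate_raws([mne.io.read_raw_edf(p, preload=True) for p in paths])
eegbci.standardize(raw)                                   # normalise channel names
raw.set_montage("standard_1005")

raw.filter(l_freq=1.0, h_freq=40.0)                       # remove drift and high-frequency noise

ica = ICA(n_components=20, random_state=42)
ica.fit(raw)
# Components correlating with a frontal channel often capture eye blinks.
eog_inds, _ = ica.find_bads_eog(raw, ch_name="Fpz")
ica.exclude = eog_inds
ica.apply(raw)                                            # reconstruct the cleaned EEG
```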
Circumventing or solving the above-mentioned limitations requires interdisciplinary research efforts, combining knowledge from medicine, signal processing, computer science, and biomedical engineering to develop reliable EEG classification methods. The fastest progress is likely to come from advances in automatic data acquisition techniques, feature extraction methods, and model interpretability [52,53,54,55,56].

4.2. Directions for Further Research

Further research into the analysis, simulation, and automation of ML-based maintenance and inspection processes should focus on improving data quality and availability, as high-quality data sets are essential for developing accurate and reliable ML models. Developing more interpretable models is another key direction, with efforts aimed at making ML processes more transparent and understandable to users, thereby increasing trust and adoption. Researchers should also explore ways to make ML models more adaptable in dynamic environments, ensuring that they can respond quickly to changes without extensive retraining. Another important area is the integration of ML with other emerging technologies, such as edge computing and digital twins, to create more reliable, real-time maintenance and inspection systems. Addressing the problem of overfitting by developing methods that allow models to generalise better to new data is also critical. Research is needed to reduce the complexity and cost of implementing ML-based systems, making them more accessible to small and medium-sized enterprises. Improving the cybersecurity of these systems is essential to protect against potential threats and ensure the integrity of operations. Further research should also focus on mitigating biases in ML models, ensuring fair and unbiased decision-making in industrial processes [57,58,59,60,61,62]. The ethical implications of automating critical processes should be an important focus of research, with the goal of establishing clear guidelines and best practices, and research should continue to explore the regulatory and compliance aspects of ML-based automation, helping to shape policies that balance innovation with security and accountability. It is noteworthy that much of the aforementioned research is interdisciplinary, requiring collaboration across disciplines and the use of multiple combined research methodologies.

The EEG study discussed here can be extended by acquiring more samples and by expanding the database with signals correlated with other types of movement. Consideration should also be given to whether it would be better to reduce the time window of the analysed signal or to use an adaptive input data width [63,64]. Matching the analysis to the available computational resources is a further challenge: the signal analysed in the study consists of 12,666 groups of 321 samples in each of 64 channels, resulting in 260,210,304 numerical values. In this case, it is possible to dispense with some of the electrodes and use only the channels that correspond to the area of interest, such as the frontal lobe [65,66,67,68,69].
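A back-of-the-envelope sketch of the data volume quoted above, and of the saving achievable through channel selection, is given below; the frontal-lobe channel indices are hypothetical, since the actual mapping depends on the electrode montage used.

```python
# Check of the quoted data volume and of the reduction from channel selection.
import numpy as np

groups, samples, channels = 12_666, 321, 64
total_values = groups * samples * channels
print(total_values)                     # 260,210,304, as stated in the text

eeg = np.zeros((groups, channels, samples), dtype=np.float32)   # ~1 GB in float32

frontal = [0, 1, 2, 3, 4, 5, 6, 7]      # hypothetical frontal-lobe channel indices
eeg_frontal = eeg[:, frontal, :]        # keep only the region of interest
print(eeg_frontal.size)                 # 32,526,288 values, an 8x reduction
```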
More broadly, further research into the classification of EEG signals using machine learning carries the potential to deepen our understanding of brain function and develop innovative applications. The most promising directions for future research include the following:
  • Artefact and noise robustness: developing EEG classification models that are robust to noise and artefacts, and that can automatically detect and mitigate different types of artefacts in EEG signals, would improve the reliability and interpretability of classification results.
  • Personalised and adaptive models: accounting for individual differences in brain activity patterns is essential to develop personalised EEG classification models with high accuracy and generalisation performance; hence, research into adaptive learning techniques and personalised model architectures will increase their utility and effectiveness in real-world applications.
  • Real-time classification systems: these are applicable to BCIs, neurofeedback training, and assistive technologies, and rely on efficient algorithms and hardware implementations for real-time processing and classification of the EEG signal, enabling seamless integration with control and communication systems.
  • Deep learning architectures: further research can explore new architectures, exploiting their ability to automatically learn hierarchical representations and capture complex temporal relationships in brain signals.
  • Transfer learning: enabling the transfer of knowledge from related tasks or domains can improve the performance of EEG classification models, especially in scenarios with limited labelled data, and can facilitate the generalisation of models to different individuals, experimental conditions, and clinical populations (a minimal sketch follows this list).
  • Multimodal data fusion: integrating EEG data with other neuroimaging modalities, such as fMRI, MEG, or functional near-infrared spectroscopy (fNIRS), can provide complementary information about brain activity. Research into multimodal data fusion techniques could increase the spatial and temporal resolution of EEG-based classification models, improving their accuracy and interpretability.
  • Improved explainability: increasing the ability to interpret and explain EEG classification models is crucial to understanding the underlying neural mechanisms and building confidence in their predictions, as well as to maintaining an unambiguous mapping between the user’s intention and the BCI’s interpretation of it.
  • Ethical considerations and user acceptance: as EEG-based ML applications become more common, it is crucial to address ethical issues related to privacy, data security, and user consent. This will translate into responsible implementation and adoption in different settings [70,71,72,73,74,75].
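As a minimal sketch of the transfer-learning direction listed above, the following Keras fragment reuses a network trained on many subjects and fine-tunes only its final layers on a small labelled set from a new subject; the architecture, input shape, and layer split are assumptions, and only the freeze-then-fine-tune pattern is the point.

```python
# Hedged transfer-learning sketch: freeze feature-extraction layers of a
# "pretrained" EEG classifier and fine-tune the head on subject-specific data.
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

# Stand-in for a network pretrained on many subjects (in practice it would be
# loaded from disk, e.g. keras.models.load_model("pretrained_eeg_cnn.keras")).
base = keras.Sequential([
    layers.Input(shape=(64, 321, 1)),              # channels x samples, as in the text
    layers.Conv2D(16, (1, 25), activation="relu"),
    layers.AveragePooling2D((1, 4)),
    layers.Flatten(),
    layers.Dense(32, activation="relu"),
    layers.Dense(2, activation="softmax"),
])

for layer in base.layers[:-2]:                     # freeze the feature-extraction layers
    layer.trainable = False

base.compile(optimizer=keras.optimizers.Adam(1e-4),
             loss="sparse_categorical_crossentropy", metrics=["accuracy"])

# Small, subject-specific labelled set (placeholder arrays).
X_new = np.random.randn(200, 64, 321, 1).astype("float32")
y_new = np.random.randint(0, 2, size=200)
base.fit(X_new, y_new, epochs=5, batch_size=16, validation_split=0.2, verbose=0)
```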
It is worth noting that interdisciplinary research involving computer scientists, medical specialists, and psychologists is required to gain a comprehensive understanding of the neurophysiological and pathological mechanisms underlying the generation of specific EEG signal patterns, and to integrate insights from different disciplines into more comprehensive predictive models and intervention strategies (Table 9) [76,77,78,79,80].
ML-based cognitive load management for human performance optimisation has already been demonstrated: a bidirectional gated network (BDGN) classified cognitive workload from EEG signals with 98% accuracy, confirming the effectiveness of the BDGN approach and offering a practical basis for scheduling cognitive workload management in Industry 5.0 applications [51,52,53]. For the assessment of neurodegenerative diseases, lower values have been achieved; for example, the TResNet network yielded an accuracy of 86.9% for Alzheimer’s disease, which outperforms previous approaches [81,82,83,84,85,86].
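For illustration, a minimal bidirectional gated recurrent classifier in the spirit of the BDGN workload model discussed above could be sketched as follows; the layer sizes, window length, and number of workload classes are assumptions and do not reproduce the cited study.

```python
# Minimal bidirectional GRU sketch for EEG workload classification.
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

n_channels, n_timesteps, n_classes = 64, 321, 3     # EEG channels x samples per window

model = keras.Sequential([
    layers.Input(shape=(n_timesteps, n_channels)),
    layers.Bidirectional(layers.GRU(64, return_sequences=True)),
    layers.Bidirectional(layers.GRU(32)),
    layers.Dense(n_classes, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])

# Placeholder arrays standing in for labelled EEG workload windows.
X = np.random.randn(256, n_timesteps, n_channels).astype("float32")
y = np.random.randint(0, n_classes, size=256)
model.fit(X, y, epochs=3, batch_size=32, validation_split=0.2, verbose=0)
```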

5. Conclusions

ML-based predictive maintenance systems offer significant benefits by enabling the proactive identification of potential equipment failures, reducing downtime, and optimising maintenance schedules. They leverage real-time data, historical records, and advanced algorithms to provide accurate predictions, improving overall operational efficiency. While these systems can lead to cost savings and extended equipment life, their success depends on the quality of the data and the robustness of the models used. The continuous monitoring and updating of models are essential to maintaining accuracy and adapting to changing conditions. Overall, machine learning-based predictive maintenance is a powerful tool for modern industries, driving smarter and more efficient maintenance strategies.
In smart factories, BCIs enable operators to control AI-based predictive maintenance systems with direct neural data, increasing the speed and accuracy of maintenance decisions. BCIs allow operators to seamlessly interact with artificial intelligence algorithms, adjusting maintenance schedules and responses based on real-time cognitive insights. This interaction helps optimise equipment performance by enabling intuitive adjustments to AI predictions, preventing potential failures before they occur. By integrating BCIs, smart factories can achieve more responsive and adaptive maintenance processes, reducing unexpected downtime and operational costs. Ultimately, BCIs contribute to highly efficient, self-optimising manufacturing environments.
Key findings from the research highlight that achieving 96% accuracy in BCI-based control systems is a promising result, but there is potential for even better signal classification, warranting further research. The current accuracy level is sufficient for industrial control systems, but complex multi-channel control requires additional data collection for improved performance. The results emphasise the importance not only of network architecture and learning parameters, but also of the type of network used for data analysis. Modular systems may benefit from allowing learned network modules to be swapped for ones better suited to specific tasks or conditions. Adaptability is key: a system must meet the user’s needs with minimal adjustment on the user’s part, otherwise the technology may face challenges in adoption. The research suggests that effective communication and control between user and device are essential for system success, and that the device must be flexible enough to handle diverse operational contexts. Finally, continuous advances in both network design and data collection will drive more sophisticated BCI-based control systems.

Author Contributions

Conceptualization, I.R., D.M., E.D., A.P. and K.G.; methodology, I.R., D.M., E.D., A.P. and K.G.; software, K.G.; validation, I.R., D.M., E.D., A.P. and K.G.; formal analysis, I.R., D.M., E.D., A.P. and K.G.; investigation, I.R., D.M., E.D., A.P. and K.G.; resources, K.G., I.R. and D.M.; data curation, K.G., I.R. and D.M.; writing—original draft preparation, I.R., D.M., E.D., A.P. and K.G.; writing—review and editing, I.R., D.M., E.D., A.P. and K.G.; visualization, I.R., D.M., E.D., A.P. and K.G.; supervision, I.R. and D.M.; project administration, I.R. and D.M.; funding acquisition, I.R. and D.M. All authors have read and agreed to the published version of the manuscript.

Funding

This research is being carried out as part of the mini-grant ‘New AI techniques for analyzing biomedical and industrial data’ in the project funded by the Polish Minister of Science and Higher Education under the ‘Regional Initiative of Excellence’ program (RID/SP/0048/2024/01) for Kazimierz Wielki University. The work presented in the paper has been financed under a grant to maintain the research potential of Kazimierz Wielki University.

Data Availability Statement

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Lin, S.; Jiang, J.; Huang, K.; Li, L.; He, X.; Du, P.; Wu, Y.; Liu, J.; Li, X.; Huang, Z.; et al. Advanced Electrode Technologies for Noninvasive Brain-Computer Interfaces. ACS Nano 2023, 17, 24487–24513. [Google Scholar] [CrossRef] [PubMed]
  2. Wang, J.; Wang, T.; Liu, H.; Wang, K.; Moses, K.; Feng, Z.; Li, P.; Huang, W. Flexible Electrodes for Brain-Computer Interface System. Adv. Mater. 2023, 35, e2211012. [Google Scholar] [CrossRef] [PubMed]
  3. Qin, Y.; Zhang, Y.; Zhang, Y.; Liu, S.; Guo, X. Application and Development of EEG Acquisition and Feedback Technology: A Review. Biosensors 2023, 13, 930. [Google Scholar] [CrossRef] [PubMed]
  4. Jamil, N.; Belkacem, A.N.; Ouhbi, S.; Lakas, A. Noninvasive Electroencephalography Equipment for Assistive, Adaptive, and Rehabilitative Brain-Computer Interfaces: A Systematic Literature Review. Sensors 2021, 21, 4754. [Google Scholar] [CrossRef]
  5. Mirzaei, G.; Adeli, H. Machine learning techniques for diagnosis of alzheimer disease, mild cognitive disorder, and other types of dementia. Biomed. Signal Process. Control 2022, 72, 103293. [Google Scholar] [CrossRef]
  6. Fouad, I.A.; El-Zahraa, F.; Labib, M. Identification of Alzheimer’s disease from central lobe EEG signals utilizing machine learning and residual neural network. Biomed. Signal Process. Control 2023, 86, 105266. [Google Scholar] [CrossRef]
  7. Srivastava, V. An optimization for adaptive multi-filter estimation in medical images and EEG based signal denoising. Biomed. Signal Process. Control 2023, 82, 104513. [Google Scholar] [CrossRef]
  8. Gu, M.; Zhang, Y.; Wen, Y.; Ai, G.; Zhang, H.; Wang, P.; Wang, G. A lightweight convolutional neural network hardware implementation for wearable heart rate anomaly detection. Comput. Biol. Med. 2023, 155, 106623. [Google Scholar] [CrossRef] [PubMed]
  9. Puffay, C.; Accou, B.; Bollens, I.; Monesi, M.J.; Vanthornhout, J.; Van Hamme, H.; Francart, T. Relating EEG to continuous speech using deep neural networks: A review. arXiv 2023, arXiv:2302.01736. [Google Scholar] [CrossRef]
  10. Kumar, A.; Chakravarthy, S.; Nanthaamornphong, A. Energy-Efficient Deep Neural Networks for EEG Signal Noise Reduction in Next-Generation Green Wireless Networks and Industrial IoT Applications. Symmetry 2023, 15, 2129. [Google Scholar] [CrossRef]
  11. Gabardi, M.; Saibene, A.; Gasparini, F.; Rizzo, D.; Stella, F.A. A multi-artifact EEG denoising by frequency-based deep learning. arXiv 2023, arXiv:2310.17335. [Google Scholar]
  12. Dong, Y.; Tang, X.; Li, Q.; Wang, Y.; Jiang, N.; Tian, L.; Zheng, Y.; Li, X.; Zhao, S.; Li, G.; et al. An Approach for EEG Denoising Based on Wasserstein Generative Adversarial Network. IEEE Trans. Neural Syst. Rehabil. Eng. 2023, 31, 3524–3534. [Google Scholar] [CrossRef]
  13. Ma, M.; Han, L.; Zhou, C. BTAD: A binary transformer deep neural network model for anomaly detection in multivariate time series data. Adv. Eng. Inform. 2023, 56, 101949. [Google Scholar] [CrossRef]
  14. Scikit-learn 1.4.2. Available online: https://scikit-learn.org/stable/ (accessed on 13 May 2024).
  15. MNE 1.7.0. Available online: https://mne.tools/stable/index.html (accessed on 13 May 2024).
  16. NumPy 1.26.4. Available online: https://numpy.org/ (accessed on 13 May 2024).
  17. Keras 3.0. Available online: https://keras.io/ (accessed on 13 May 2024).
  18. Jupyter 7.1.3. Available online: https://jupyter.org/try (accessed on 13 May 2024).
  19. Anaconda 2024.02-1. Available online: https://www.anaconda.com/ (accessed on 13 May 2024).
  20. Afzal, M.A.; Gu, Z.; Afzal, B.; Bukhari, S.U. Cognitive Workload Classification in Industry 5.0 Applications: Electroencephalography-Based Bi-Directional Gated Network Approach. Electronics 2023, 12, 4008. [Google Scholar] [CrossRef]
  21. Li, G.; Ji, Z.; Sun, Q. Deep Multi-Instance Conv-Transformer Frameworks for Landmark-Based Brain MRI Classification. Electronics 2024, 13, 980. [Google Scholar] [CrossRef]
  22. Bolourchi, P.; Gholami, M. A machine learning-based data-driven approach to Alzheimer’s disease diagnosis using statistical and harmony search methods. J. Intell. Fuzzy Syst. 2024, 46, 6299. [Google Scholar] [CrossRef]
  23. Prokopowicz, P.; Mikołajewski, D.; Mikołajewska, E.; Kotlarz, P. Fuzzy System as an Assessment Tool for Analysis of the Health-Related Quality of Life for the People After Stroke. In Proceedings of the 16th International Conference on Artificial Intelligence and Soft Computing (ICAISC), Zakopane, Poland, 11–15 June 2017; pp. 710–721. [Google Scholar]
  24. Cummins, L.; Sommers, A.; BakhtiariRamezani, S.; Mittal, S.; Jabour, J.; Seale, M.; Rahimi, S. Explainable Predictive Maintenance: A Survey of Current Methods, Challenges and Opportunities. IEEE Access 2024, 12, 57574–57602. [Google Scholar] [CrossRef]
  25. Pinardi, D.; Arpa, L.; Toscani, A.; Manconi, E.; Binelli, M.; Mucchi, E.A. Novel Hybrid Acquisition System for Industrial Condition Monitoring and Predictive Maintenance. IEEE Access 2024, 12, 98121–98129. [Google Scholar] [CrossRef]
  26. Taye, M.M. Theoretical Understanding of Convolutional Neural Network: Concepts, Architectures, Applications, Future Directions. Computation 2023, 11, 52. [Google Scholar] [CrossRef]
  27. Śliwak-Orlicki, T.; Górski, K. Voice recognition and speaker identification: A review of selected speech biometric feature recognition methods. Prz. Elektrotechniczny 2023, 99, 225–229. [Google Scholar]
  28. Strypsteen, T.; Bertrand, A. Bandwidth-efficient distributed neural network architectures with application to body sensor networks. IEEE J. Biomed. Health Inform. 2023, 27, 933–943. [Google Scholar] [CrossRef] [PubMed]
  29. Schalk, G.; McFarland, D.J.; Hinterberger, D.; Birbaumer, N.; Wolpaw, J.R. EEG Motor Movement/Imagery Dataset. 2009. Available online: https://www.physionet.org/content/eegmmidb/1.0.0/ (accessed on 13 May 2024).
  30. Ruan, H.; Liu, Z.; Ding, Y. Large-scale Log-based Failure Diagnosis of Server Groups: A Two-stage Mining Approach Based on Drain 3 and Weight-based Optimization Algorithm. In Proceedings of the 2023 Asia-Pacific Conference on Image Processing, Electronics and Computers (IPEC), Dalian, China, 14–16 April 2023. [Google Scholar]
  31. Niso, G.; Romero, E.; Moreau, J.T.; Araujo, A.; Krol, L.R. Wireless EEG: A survey of systems and studies. NeuroImage 2023, 269, 119774. [Google Scholar] [CrossRef] [PubMed]
  32. Nicholson, A.A.; Densmore, M.; Frewen, P.A.; Neufeld, R.W.J.; Théberge, J.; Jetly, R.; Lanius, R.A.; Ros, T. Homeostatic normalization of alpha brain rhythms within the default-mode network and reduced symptoms in PTSD following a randomized controlled trial of EEG neurofeedback. Brain Commun. 2023, 5, fcad068. [Google Scholar] [CrossRef]
  33. Nguyen, H.-T.; Mai, N.-D.; Lee, B.; Chung, W. Behind-the-Ear EEG-Based Wearable Driver Drowsiness Detection System Using Embedded Tiny Neural Networks. IEEE Sens. J. 2023, 23, 23875–23892. [Google Scholar] [CrossRef]
  34. Kwak, Y.; Kong, K.; Song, W.-J.; Min, B.-K.; Kim, S.-E. Multilevel Feature Fusion With 3D Convolutional Neural Network for EEG-Based Workload Estimation. IEEE Access 2020, 8, 16009–16021. [Google Scholar] [CrossRef]
  35. Najafi, T.; Jaafar, R.; Remli, R.; Wan Zaidi, W.A. A Classification Model of EEG Signals Based on RNN-LSTM for Diagnosing Focal and Generalized Epilepsy. Sensors 2022, 22, 7269. [Google Scholar] [CrossRef]
  36. Yang, D.; Liu, Y.; Zhou, Z.; Yu, Y.; Liang, X. Decoding Visual Motions from EEG Using Attention-Based RNN. Appl. Sci. 2020, 10, 5662. [Google Scholar] [CrossRef]
  37. Xu, H.; Wang, X.; Huang, J.; Zhang, F.; Chu, F. Semi-supervised multi-sensor information fusion tailored graph embedded low-rank tensor learning machine under extremely low labeled rate. Inf. Fusion 2022, 105, 102222. [Google Scholar] [CrossRef]
  38. Wang, Z.; Luo, Q.; Chen, H.; Zhao, J.; Yao, L.; Zhang, J.; Chu, F. A high-accuracy intelligent fault diagnosis method for aero-engine bearings with limited Samples. Comput. Ind. 2024, 159, 104099. [Google Scholar] [CrossRef]
  39. Huang, J.; Zhang, F.; Safaei, B.; Qin, Z.; Chu, F. The flexible tensor singular value decomposition and its applications in multisensor signal fusion processing. Mech. Syst. Signal Process. 2024, 220, 111662. [Google Scholar] [CrossRef]
  40. Zeng, Y.; Liang, G.; Liu, Q.; Rodriguez, E.; Pou, J.; Jie, H.; Liu, X.; Zhang, X.; Kotturu, J.; Gupta, A. Multiagent Soft Actor-Critic Aided Active Disturbance Rejection Control of DC Solid-State Transformer. IEEE Trans. Ind. Electron. 2024, 1–12. [Google Scholar] [CrossRef]
  41. Zeng, Y.; Pou, J.; Sun, C.; Mukherjee, S.; Xu, X.; Gupta, A.K.; Dong, J. Autonomous Input Voltage Sharing Control and Triple Phase Shift Modulation Method for ISOP-DAB Converter in DC Microgrid: A Multiagent Deep Reinforcement Learning-Based Method. IEEE Trans. Power Electron. 2023, 38, 2985–3000. [Google Scholar] [CrossRef]
  42. Martinek, R.; Ladrova, M.; Sidikova, M.; Jaros, R.; Behbehani, K.; Kahankova, R.; Kawala-Sterniuk, A. Advanced Bioelectrical Signal Processing Methods: Past, Present, and Future Approach-Part III: Other Biosignals. Sensors 2021, 21, 6064. [Google Scholar] [CrossRef] [PubMed]
  43. Duch, W.; Nowak, W.; Meller, J.; Osiński, G.; Dobosz, K.; Mikołajewski, D.; Wójcik, G.M. Computational approach to understanding autism spectrum disorders. Comput. Sci. 2012, 13, 47–61. [Google Scholar] [CrossRef]
  44. Kawala-Janik, A.; Bauer, W.; Al-Bakri, A.; Haddix, C.; Yuvaraj, R.; Cichon, K.; Podraza, W. Implementation of Low-Pass Fractional Filtering for the Purpose of Analysis of Electroencephalographic Signals. In Proceedings of the 9th International Conference on Non-Integer Order Calculus and Its Applications, Łódź, Poland; Springer International Publishing: Berlin/Heidelberg, Germany, 2019; pp. 63–73. [Google Scholar]
  45. Rojek, I.; Mikołajewski, D.; Dostatni, E.; Kopowski, J. Specificity of 3D Printing and AI-Based Optimization of Medical Devices Using the Example of a Group of Exoskeletons. Appl. Sci. 2023, 13, 1060. [Google Scholar] [CrossRef]
  46. Rojek, I. Models for Better Environmental Intelligent Management within Water Supply Systems. Water Resour. Manag. 2014, 28, 3875–3890. [Google Scholar] [CrossRef]
  47. Tsoukas, V.; Gkogkidis, A.; Boumpa, E.; Kakarountas, A. A Review on the emerging technology of TinyML. ACM Comput. Surv. 2024, 56, 259. [Google Scholar] [CrossRef]
  48. Karras, A.; Giannaros, A.; Karras, C.N.; Theodorakopoulos, L.; Mammassis, C.S.; Krimpas, G.A.; Sioutas, S. TinyML Algorithms for Big Data Management in Large-Scale IoT Systems. Future Internet 2024, 16, 42. [Google Scholar] [CrossRef]
  49. Yang, H.; Han, J.; Min, K. A Multi-Column CNN Model for Emotion Recognition from EEG Signals. Sensors 2019, 19, 4736. [Google Scholar] [CrossRef]
  50. Manoharan, T.A.; Radhakrishnan, M. Region-Wise Brain Response Classification of ASD Children Using EEG and BiLSTM RNN. Clin. EEG Neurosci. 2023, 54, 461–471. [Google Scholar] [CrossRef]
  51. Mahapatra, N.C.; Bhuyan, P. EEG-based classification of imagined digits using a recurrent neural network. J. Neural Eng. 2023, 20, 026040. [Google Scholar] [CrossRef] [PubMed]
  52. Luo, Y.; Wu, C.; Lv, C. Cascaded Convolutional Recurrent Neural Networks for EEG Emotion Recognition Based on Temporal–Frequency–Spatial Features. Appl. Sci. 2023, 13, 6761. [Google Scholar] [CrossRef]
  53. Luo, W.; Yang, R.; Jin, H.; Li, X.; Li, H.; Liang, K. Single channel blind source separation of complex signals based on spatial-temporal fusion deep learning. IET Radar Sonar Navig. 2023, 17, 200–211. [Google Scholar] [CrossRef]
  54. Khalifa, Y.; Mandic, D.; Sejdić, E. A review of Hidden Markov models and Recurrent Neural Networks for event detection and localization in biomedical signals. Inf. Fusion 2021, 69, 52–72. [Google Scholar] [CrossRef]
  55. Jindal, K.; Upadhyay, R.; Singh, H.S. A novel channel selection and classification methodology for multi-class motor imagery-based BCI system design. Int. J. Imaging Syst. Technol. 2022, 32, 1318–1337. [Google Scholar] [CrossRef]
  56. Jezequel, L.; Vu, N.-S.; Beaudet, J.; Histace, A. Efficient anomaly detection using self-supervised multi-cue tasks. IEEE Trans. Image Process. 2023, 32, 807–821. [Google Scholar] [CrossRef]
  57. Ficco, M.; Guerriero, M.; Milite, E.; Palmieri, F.; Pietrantuono, R.; Russo, S. Federated learning for IoT devices: Enhancing TinyML with on-board training. Inf. Fusion 2024, 104, 102189. [Google Scholar] [CrossRef]
  58. Krishna, A.; Nudurupati, S.R.; Chandana, D.G.; Dwivedi, P.; van Schaik, A.; Mehendale, M.; Thakur, C.S. RAMAN: A Reconfigurable and Sparse tinyML Accelerator for Inference on Edge. IEEE Internet Things J. 2024, 11, 24831–24845. [Google Scholar] [CrossRef]
  59. Kallimani, R.; Pai, K.; Raghuwanshi, P.; Iyer, S.; Alcaraz López, O.L. TinyML: Tools, applications, challenges, and future research directions. Multimed. Tools Appl. 2024, 83, 29015–29045. [Google Scholar] [CrossRef]
  60. Hayajneh, A.M.; Hafeez, M.; Raza Zaidi, S.A.; McLernon, D. TinyML Empowered Transfer Learning on the Edge. IEEE Open J. Commun. Soc. 2024, 5, 1656–1672. [Google Scholar] [CrossRef]
  61. Ancilotto, A.; Paissan, F.; Farella, E. XimSwap: Many-to-Many Face Swapping for TinyML. ACM Trans. Embed. Comput. Syst. 2024, 23, 1–49. [Google Scholar] [CrossRef]
  62. Pavan, M.; Ostrovan, E.; Caltabiano, A.; Roveri, M. TyBox: An Automatic Design and Code Generation Toolbox for TinyML Incremental On-Device Learning. ACM Trans. Embed. Comput. Syst. 2024, 23, 1–27. [Google Scholar] [CrossRef]
  63. Huang, J.; Chang, Y.; Li, W.; Tong, J.; Du, S. A Spatio-Temporal Capsule Neural Network with Self-Correlation Routing for EEG Decoding of Semantic Concepts of Imagination and Perception Tasks. Sensors 2024, 24, 5988. [Google Scholar] [CrossRef]
  64. Huang, Z.; Wang, M. A review of electroencephalogram signal processing methods for brain-controlled robots. Cogn. Robot. 2021, 1, 111–124. [Google Scholar] [CrossRef]
  65. He, B.; Martens, J.; Zhang, G.; Botev, A.; Brock, A.; Smith, S.L.; Teh, Y.W. Deep Transformers without Shortcuts: Modifying Self-attention for Faithful Signal Propagation. arXiv 2023, arXiv:2302.10322. [Google Scholar]
  66. Hang, F.; Guo, W.; Chen, H.; Xie, L.; Zhou, C.; Liu, Y. Logformer: Cascaded Transformer for System Log Anomaly Detection. CMES-Comput. Model. Eng. Sci. 2023, 136. [Google Scholar] [CrossRef]
  67. Gao, L.; Wang, D.; Zhuang, L.; Sun, X.; Huang, M.; Plaza, A. BS3LNet: A new blind-spot self-supervised learning network for hyperspectral anomaly detection. IEEE Trans. Geosci. Remote Sens. 2023, 61, 1–18. [Google Scholar] [CrossRef]
  68. Famoriji, O.J.; Shongwe, T. Electromagnetic machine learning for estimation and mitigation of mutual coupling in strongly coupled arrays. ICT Express 2023, 9, 8–15. [Google Scholar] [CrossRef]
  69. Apicella, A.; Isgrò, F.; Pollastro, A.; Prevete, R. On the effects of data normalization for domain adaptation on EEG data. Eng. Appl. Artif. Intell. 2023, 123, 106205. [Google Scholar] [CrossRef]
  70. Al-Saegh, A.; Dawwd, S.A.; Abdul-Jabbar, J.M. Deep learning for motor imagery EEG-based classification: A review. Biomed. Signal Process. Control 2021, 63, 102172. [Google Scholar] [CrossRef]
  71. Rojek, I. Hybrid Neural Networks as Prediction Models. In Artificial Intelligence and Soft Computing, Lecture Notes in Artificial Intelligence; Rutkowski, L., Scherer, R., Tadeusiewicz, R., Zadeh, L.A., Zurada, J.M., Eds.; Springer: Berlin/Heidelberg, Germany, 2010; pp. 88–95. [Google Scholar]
  72. Rojek, I. Neural networks as prediction models for water intake in water supply system. In Artificial Intelligence and Soft Computing—ICAISC 2008; Lecture Notes in Computer Science, 5097; Rutkowski, L., Tadeusiewicz, R., Zadeh, L.A., Zurada, J.M., Eds.; Springer: Berlin/Heidelberg, Germany, 2008; pp. 1109–1119. [Google Scholar]
  73. Bauer, W.; Kawala-Janik, A. Implementation of bi-fractional filtering on the Arduino Uno hardware platform. Lect. Notes Electr. Eng. 2017, 407, 419–428. [Google Scholar]
  74. Kawala-Janik, A.; Bauer, W.; Zolubak, M.; Baranowski, J. Early-stage pilot study on using fractional-order calculus-based filtering for the purpose of analysis of electroencephalography signals. Stud. Log. Gramm. Rhetor. 2016, 47, 103–111. [Google Scholar] [CrossRef]
  75. Wojcik, G.M.; Kaminski, W.A. Self-organised criticality as a function of connections’ number in the model of the rat somatosensory cortex. In Computational Science–ICCS 2008: 8th International Conference, Kraków, Poland, 23–25 June 2008; Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics); Springer: Berlin/Heidelberg, Germany, 2008; 5101 LNCS, part 1; pp. 620–629. [Google Scholar]
  76. Wojcik, G.M.; Kaminski, W.A.; Matejanka, P. Self-organised criticality in a model of the rat somatosensory cortex. In Parallel Computing Technologies: 9th International Conference, PaCT 2007, Pereslavl-Zalessky, Russia, 3–7 September 2007; Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics); Springer: Berlin/Heidelberg, Germany, 2007; 4671 LNCS; pp. 468–476. [Google Scholar]
  77. Grzesiak, K.; Piotrowski, Z.; Kelner, J.M. A wireless covert channel based on dirty constellation with phase drift. Electronics 2021, 10, 647. [Google Scholar] [CrossRef]
  78. Sieczkowski, K.; Sondej, T.; Dobrowolski, A.; Olszewski, R. Autocorrelation algorithm for determining a pulse wave delay. In Proceedings of the 2016 Signal Processing: Algorithms, Architectures, Arrangements, and Applications (SPA), Poznan, Poland, 21–23 September 2016; pp. 321–326. [Google Scholar]
  79. Murawski, K.; Sondej, T.; Rozanowski, K.; Macander, M.; Macander, L. The contactless active optical sensor for vehicle driver fatigue detection. In Proceedings of the SENSORS, 2013 IEEE, Baltimore, MD, USA, 3–6 November 2013. [Google Scholar]
  80. Rózanowski, K.; Piotrowski, Z.; Ciołek, M. Mobile application for driver’s health status remote monitoring. In Proceedings of the 2013 9th International Wireless Communications and Mobile Computing Conference, IWCMC, Sardinia, Italy, 1–5 July 2013; pp. 1738–1743. [Google Scholar]
  81. Sondej, T.; Piotrowski, Z.; Sawicki, K. Architecture of car measurement system for driver monitoring. In Communication Technologies for Vehicles: 4th International Workshop, Nets4Cars/Nets4Trains 2012, Vilnius, Lithuania, 25–27 April 2012; Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics); Springer: Berlin/Heidelberg, Germany, 2012; 7266 LNCS; pp. 68–79. [Google Scholar]
  82. Mikołajewska, E.; Prokopowicz, P.; Mikolajewski, D. Computational gait analysis using fuzzy logic for everyday clinical purposes-preliminary findings. Bio-Algorithms Med-Syst. 2017, 13, 37–42. [Google Scholar] [CrossRef]
  83. Mikołajewska, E.; Mikołajewski, D. Ethical considerations in the use of brain-computer interfaces. Cent. Eur. J. Med. 2013, 8, 720–724. [Google Scholar] [CrossRef]
  84. Rojek, I.; Dostatni, E.; Mikołajewski, D.; Pawłowski, L.; Wegrzyn-Wolska, K. Modern approach to sustainable production in the context of Industry 4.0. Bull. Pol. Acad. Sci. Tech. Sci. 2022, 70, e143828. [Google Scholar] [CrossRef]
  85. Xu, Z.; Deng, H.; Liu, J.; Yang, Y. Diagnosis of Alzheimer’s Disease Based on the Modified Tresnet. Electronics 2021, 10, 1908. [Google Scholar] [CrossRef]
  86. Mohammed, B.A.; Senan, E.M.; Rassem, T.H.; Makbol, N.M.; Alanazi, A.A.; Al-Mekhlafi, Z.G.; Almurayziq, T.S.; Ghaleb, F.A. Multi-Method Analysis of Medical Records and MRI Images for Early Diagnosis of Dementia and Alzheimer’s Disease Based on Deep Learning and Hybrid Methods. Electronics 2021, 10, 2860. [Google Scholar] [CrossRef]
Figure 1. Predictive maintenance development against the background of AI development (own version).
Figure 2. Concept of development of ML-based maintenance and control processes.
Figure 3. Basic architecture of analysing, simulating, and automating ML-based maintenance and control processes.
Figure 4. Results of the review of publications with the keywords ‘EEG classification’ and related terms.
Figure 5. Results of the review of publications with the keywords ‘EEG classification’, ‘AI’, and related terms.
Figure 6. Results of the review of publications with the keywords ‘EEG classification’, ‘ML’, and related terms.
Figure 7. Applied bibliometric analysis procedure.
Figure 8. Headset electrode distribution.
Figure 9. A flow chart of the review process using PRISMA 2020 guidelines (publications: 2017–2024).
Figure 10. Data flow for advanced video-based analysis.
Figure 11. Neural network architecture.
Figure 12. Values achieved by neural network.
Figure 13. Convolutional neural network.
Figure 14. Classification accuracy.
Figure 15. Classification loss.
Figure 16. Another convolutional neural network.
Figure 17. Recurrent neural network (GRU).
Figure 18. Cross-validation for recurrent neural network.
Table 1. Key paradigms for analysing, simulating, and automating ML-based maintenance and control processes.
Paradigm/Area | Description
IIoT integration | ML models are embedded within the IoT framework, enabling seamless communication and data flow between devices and systems to provide more consistent operations.
Condition monitoring | Uses sensors and ML algorithms to continuously assess the health of machines, enabling early detection of anomalies that could lead to system failure.
Real-time decision-making | ML-based automation enables systems to make immediate adjustments in response to changing conditions without human intervention.
Data-driven process optimisation | Uses ML to analyse massive amounts of operational data, identify inefficiencies, and suggest improvements for better performance.
Simulation-based optimisation | Uses ML to create virtual models of processes, allowing different strategies to be tested and refined in a risk-free environment.
Adaptive control | Involves ML systems that learn and evolve over time, improving their accuracy and efficiency as they encounter more data and varying conditions.
Scalability and flexibility | Designing ML solutions that can easily scale with the growth of industrial operations and adapt to different contexts or sectors.
Human–AI collaboration | Emphasizes the role of ML in augmenting human decision-making, where AI provides insights and recommendations but ultimate control remains in the hands of human operators.
Ethical and transparent AI | Ensures ML models are designed with integrity, accountability, and transparency in mind, addressing concerns about the ’black box’ nature of AI in critical industrial environments.
Table 2. Summary of results of bibliographic analysis (WoS, Scopus).
Parameter/Feature | Value
Leading type of publication | Article (48.8%), Conference paper (37.2%), Review (9.3%), Book chapter (2.3%)
Leading area of science | Engineering (31.3%), Computer science (21.9%), Energy (8.3%), Mathematics (7.3%), Physics and Astronomy (6.3%), Decision Sciences (4.2%)
Leading topic | Friction and Vibration, Design and Manufacturing, AI and ML, Statistical methods, Telecommunications, Human–Computer Interaction, Safety and Maintenance, Computer Vision and Graphics, Automation and Control Systems
Leading countries | India, Germany, Italy, Australia, USA, China, Norway, Qatar
Leading scientist(s) | None observed
Leading affiliation | Università degli Studi di Padova; Norges Teknisk-Naturvitenskapelige Universitet; Universität Hohenheim; Hamad Bin Khalifa University (Qatar); Qatar Foundation (QF)
Leading funding (where information available) | Bundesministerium für Wirtschaft und Energie; Bundesministerium für Ernährung und Landwirtschaft
Table 3. Data classification for 50 data sources.
No. | Value
1 | 55.82%
2 | 48.31%
3 | 51.65%
4 | 43.45%
5 | 50.95%
Average | 50.04%
Table 4. Data classification for 100 data sources.
No. | Value
1 | 55.62%
2 | 74.36%
3 | 81.26%
4 | 78.58%
5 | 88.58%
Average | 75.68%
Table 5. Data classification for 50 data sources.
No. | Value
1 | 62.02%
2 | 45.96%
3 | 41.26%
4 | 44.94%
5 | 38.47%
Average | 46.53%
Table 6. Data classification for 100 data sources.
No. | Value
1 | 48.58%
2 | 53.86%
3 | 61.10%
4 | 58.46%
5 | 68.15%
Average | 58.03%
Table 7. Data classification for 50 data sources.
No. | Value
1 | 56.39%
2 | 79.69%
3 | 93.79%
4 | 94.19%
5 | 98.82%
Average | 84.58%
Table 8. Data classification for 100 data sources.
No. | Value
1 | 78.01%
2 | 95.88%
3 | 91.69%
4 | 94.41%
5 | 96.64%
Average | 91.33%
Table 9. SWOT analysis for ML-based EEG classification systems as a control system for predictive maintenance [81,82,83,84,85,86].
STRENGTHS
  • Automation of EEG data collection
  • Intuitive use
  • Individualised use
  • 24/7 wellbeing monitoring and prediction within Industry 5.0
  • ML-based integrated analysis and prediction
  • Built-in warnings and alerts
  • Historical data sets with full users’ history
WEAKNESSES
  • Limited number and quality of data sets to begin with
  • Introduction requires educated specialists
OPPORTUNITIES
  • Objectivisation of assessment
  • Reduced workload toward classification and optimisation
  • Early diagnosis
  • Preventive or early intervention
  • Easier testing
  • Novel diagnostic methods
  • Novel factors/mechanisms taken into consideration within diagnosis
  • Possibility of standardisation
  • Quick further development
  • Part of bigger systems (e.g., eHealth, smart home, smart factory)
THREATS
  • Non-acceptance of AI/ML among society and engineers (human with BCI as ‘human 2.0’)
  • Fear of being part of a surveillance society; lack of full understanding
