Review

An Overview of Approaches and Methods for the Cognitive Workload Estimation in Human–Machine Interaction Scenarios through Wearables Sensors

by Sabrina Iarlori 1, David Perpetuini 2,*, Michele Tritto 3, Daniela Cardone 2, Alessandro Tiberio 3, Manish Chinthakindi 3, Chiara Filippini 4, Luca Cavanini 1, Alessandro Freddi 1, Francesco Ferracuti 1, Arcangelo Merla 2 and Andrea Monteriù 1
1 Department of Information Engineering, Polytechnic University of Marche, 60121 Ancona, Italy
2 Department of Engineering and Geology, University “G. d’Annunzio” of Chieti-Pescara, 65127 Pescara, Italy
3 Next2U s.r.l., 65127 Pescara, Italy
4 Lega F. D’Oro Research Center, 60027 Osimo, Italy
* Author to whom correspondence should be addressed.
BioMedInformatics 2024, 4(2), 1155-1173; https://doi.org/10.3390/biomedinformatics4020064
Submission received: 18 February 2024 / Revised: 18 April 2024 / Accepted: 4 May 2024 / Published: 7 May 2024
(This article belongs to the Special Issue Feature Papers in Applied Biomedical Data Science)

Abstract

Background: Human-Machine Interaction (HMI) has been an important field of research in recent years, since machines will continue to be embedded in many human activities in several contexts, such as industry and healthcare. Monitoring the cognitive workload (CW) of users who interact with machines in an ecological manner is crucial to assess their level of engagement in activities and the required effort, with the goal of preventing stressful circumstances. This study provides a comprehensive analysis of the assessment of CW using wearable sensors in HMI. Methods: this narrative review explores several techniques and procedures for collecting physiological data through wearable sensors, with the possibility to integrate these multiple physiological signals and thus provide a multimodal monitoring of the individuals’ CW. Finally, it focuses on the impact of artificial intelligence methods in the analysis of physiological signal data to provide models of the CW to be exploited in HMI. Results: the review provides a comprehensive evaluation of the wearables, physiological signals, and methods of data analysis for CW evaluation in HMI. Conclusion: the literature highlights the feasibility of employing wearable sensors to collect physiological signals for ecological CW monitoring in HMI scenarios. However, challenges remain in standardizing these measures across different populations and contexts.

1. Introduction

Human-machine interaction (HMI) refers to the dynamic and multidimensional interface between humans and machines, encompassing technical and cognitive dimensions [1,2]. It involves real-time interaction between humans and machines through a human-machine interface [3]. HMI serves as an essential information interaction medium between humans and intelligent devices, with broad application prospects in various fields such as medical care, education, and military applications [4]. Notably, the significance of HMI extends to its role in labor productivity and efficiency, particularly in the context of Industry 4.0, where it has deeply transformed the manufacturing landscape [5,6]. Additionally, the analysis of mission reliability in man-machine systems emphasizes the importance of considering human cognitive function and the interaction between humans and machines [7,8]. In this perspective, the measurement of cognitive workload (CW) during HMI has attracted great attention for the purposes of design, training, and evaluation [9]. In this field of application, non-invasive contactless or wearable sensors are preferred. The assessment of CW in HMI through wearable sensors is a crucial area of research with implications for various fields such as healthcare, sports, industry, and military settings. Wearable sensors offer the potential to monitor CW in real-time, providing valuable insights into human performance and well-being [10,11]. CW monitoring is usually performed by measuring brain activity during the execution of tasks. To this aim, portable neuroimaging technologies, such as electroencephalography (EEG) and functional near infrared spectroscopy (fNIRS), are highly suitable for continuously assessing individuals’ CW [12,13,14,15]. However, it should be highlighted that wearing headsets with mounted electrodes or optodes can be uncomfortable in several situations and environments.
For this reason, wearable sensors, such as digital t-shirts, smartwatches, and smart jewels, have emerged as innovative tools for assessing CW, offering real-time monitoring and potential applications in various domains. The potential of wearable devices, including smart glasses and smartwatches, in the realm of HMI and CW evaluation has been emphasized by Lim et al. (2015) [16]. Skin-mountable and wearable sensors, capable of being affixed to garments or directly adhered to the skin, can provide continuous monitoring of human activities, including the assessment of CW in real-time [17]. According to Khundaqji et al. (2020) [18], smart t-shirts have been recognized as a credible and consistent tool for tracking certain physiological measurements, hence showcasing their capability in evaluating mental strain [16]. Moreover, wearable flexible electronics, including cardiac electrodes and pressure sensors, have shown promise in detecting mental stress and monitoring physiological signals, offering potential applications in CW assessment [19]. In the same context, wearable tactile sensors have demonstrated their potential and are extensively used for monitoring human health and movements in HMI applications [20].
The potential of wearable sensors to monitor CW during HMI has been enhanced by introducing machine learning (ML) techniques applied to physiological signals. ML is a subfield within the discipline of artificial intelligence (AI), whereby systems are endowed with the ability to acquire knowledge and enhance their performance via experience, without the need for explicit programming [21]. The task entails the creation of algorithms capable of analyzing and interpreting intricate data, adjusting to novel situations, and generating intelligent choices or predictions based on incoming data [22]. For instance, ML applied to physiological signals and imaging data can provide a strong contribution for diagnostic purposes [23,24]. ML can be particularly important in the assessment of cognitive effort when applied to physiological data for HMI. In fact, physiological signals, including heart rate (HR), brain activity, skin conductance, and eye movements, but also facial expressions, provide contemporaneous information on an individual’s cognitive and emotional condition [25,26,27]. By using ML algorithms to analyze these signals, it becomes feasible to understand and measure cognitive burden, which refers to the level of mental exertion sustained by the brain.
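To make the pipeline just described concrete, the following minimal sketch trains a simple classifier to separate low- from high-workload windows. The feature set (mean HR, an HRV index, EDA level, breathing rate) and all values are synthetic illustrations, not data or a method from any of the reviewed studies; scikit-learn is used here purely as an example toolkit.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Hypothetical feature table: each row is one time window of wearable data,
# with columns [mean HR (bpm), RMSSD (ms), EDA level (uS), breathing rate (bpm)].
rng = np.random.default_rng(0)
n = 200
low = np.column_stack([rng.normal(70, 3, n), rng.normal(45, 5, n),
                       rng.normal(2.0, 0.3, n), rng.normal(14, 1, n)])
high = np.column_stack([rng.normal(85, 3, n), rng.normal(30, 5, n),
                        rng.normal(4.0, 0.3, n), rng.normal(20, 1, n)])
X = np.vstack([low, high])
y = np.array([0] * n + [1] * n)  # 0 = low workload, 1 = high workload

# Hold out a test split and fit a simple linear classifier
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print(f"held-out accuracy: {clf.score(X_te, y_te):.2f}")
```

In practice the feature extraction step (turning raw PPG, EDA, and respiration traces into such windows) dominates the effort; the classifier itself is often the simplest part of the chain.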
This overview aims to investigate the role of wearable sensors in the CW evaluation during HMI. Particularly, the main contributions of the survey are:
  • to comprehensively investigate the potential of smart sensor technology in the assessment of CW. This encompasses exploring the effectiveness, accuracy, and practicality of various smart sensors in detecting and quantifying the cognitive load under different conditions and in various groups of populations with different characteristics.
  • to identify the principal physiological signals employed for CW monitoring in HMI applications, focusing on approaches based on both unimodal and multimodal physiological signal acquisitions.
  • to describe the ML-based approaches used to evaluate the CW from physiological signals, acquired through wearable sensors.
For this overview, the PubMed/MEDLINE, Science Direct, IEEE Xplore, and Scopus databases were searched, covering the five years preceding the inception of the review.
The search concentrated on the words “cognitive workload”, “human machine interaction”, and “wearable” within the following fields: article title, abstract, and keywords. Notably, only papers published in the last five years were considered. Following the removal of duplicates, two reviewers (DP and SI) independently assessed the titles and abstracts of all potentially relevant publications for inclusion. If there was a difference of opinion, the decision of a third reviewer was used to reach an agreement. Subsequently, the two reviewers obtained complete versions of the papers and independently examined them to determine whether they met the criteria for inclusion. If a consensus could not be achieved via dialogue, a third reviewer’s (DC) assessment was used to resolve conflicts. Specifically, reviews and articles authored in a language other than English were not included.
Hence, the paper is structured into different sections about the presented topics. Section 2 provides a comprehensive review of the potential of wearable sensors to acquire physiological signals for CW monitoring, briefly describing the various signals that can be recorded. Section 3 describes examples of applications of wearables for CW evaluation in HMI relying on a single physiological signal (i.e., unimodal approach), together with examples concerning multimodal approaches. Section 4 describes the employment of ML algorithms for physiological signal data analysis in the context of wearable sensors applied to HMI for CW monitoring; finally, a discussion regarding the different application fields, future perspectives, limitations, and conclusions are reported in Section 5, Section 6 and Section 7.
The overall topic of the survey is illustrated in Figure 1.

2. Physiological Signals Acquired through Wearable Sensors for HMI

In the field of healthcare, the employment of wearable sensors to collect physiological signals has gained notable attention. These novel sensors enable the continuous monitoring of physiological parameters, which has revolutionized personalized medicine and the detection of subtle changes in the body. Their development, including movement, physiological, and biochemical sensors, has shown promise in improving disease detection and prognostication through the analysis of changes in physiology over time [28]. Furthermore, the development of flexible, stretchable, and miniaturized wearable sensors has shown great potential for personalized healthcare applications, including the monitoring of motion, physiological, electrophysiological, and electrochemical signals [29]. These advancements in wearable sensor technology have paved the way for applications in healthcare monitoring, disease diagnostics, and HMI [30,31,32,33]. Overall, wearable sensors have the potential to play a transformative role in healthcare and ergonomics, offering continuous, non-invasive monitoring of individuals’ physiological signals [31,34]. Importantly, it should be noted that CW monitoring through behavioral and gesture analysis has also been proposed. For instance, Madeo et al. examined the use of gesture analysis in the creation of novel approaches to HMI, specifically in the evaluation of CW in interactive systems [35].
The physiological signals that can be acquired through such devices are mainly heart rate (HR) through photoplethysmography (PPG), electrodermal activity (EDA), and breathing rate.
PPG is an optical method that accurately assesses changes in blood volume inside the small blood vessels of tissue without the need for intrusive procedures [36,37]. It has garnered interest due to its capacity to evaluate heart rate variability (HRV), a crucial measure of autonomic nervous system function and psychophysiological well-being [38,39,40]. In detail, HRV is a measure of the fluctuation in time intervals between successive heartbeats and it indicates the dynamic interaction between the sympathetic and parasympathetic divisions of the autonomic nervous system. The significance of HRV is in its function as a biomarker for cardiovascular health, stress tolerance, and general state of well-being [41,42,43,44,45,46,47,48,49].
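The time-domain HRV indices alluded to here can be computed directly from the RR intervals (the times between successive heartbeats) that a PPG-based wearable delivers. The sketch below computes two standard indices, SDNN and RMSSD; the sample RR series is synthetic and purely illustrative.

```python
import numpy as np

def hrv_metrics(rr_intervals_ms):
    """Compute standard time-domain HRV indices from RR intervals (ms).
    SDNN reflects overall variability; RMSSD emphasizes short-term,
    parasympathetically mediated variability."""
    rr = np.asarray(rr_intervals_ms, dtype=float)
    sdnn = rr.std(ddof=1)                 # standard deviation of RR intervals
    diffs = np.diff(rr)                   # successive differences
    rmssd = np.sqrt(np.mean(diffs ** 2))  # root mean square of successive differences
    mean_hr = 60000.0 / rr.mean()         # mean heart rate in beats per minute
    return {"sdnn_ms": sdnn, "rmssd_ms": rmssd, "mean_hr_bpm": mean_hr}

# Example: a short synthetic RR series around 800 ms (~75 bpm)
print(hrv_metrics([790, 810, 805, 795, 820, 780]))
```

Lower RMSSD under task load is one of the patterns the reviewed studies associate with increased CW, which is why such indices recur as input features in the ML approaches of Section 4.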
Continuous HR monitoring is particularly assured by devices designed to extend battery life, providing a practical and sustainable solution for long-term monitoring. In this perspective, the integration of smartphones and smartwatches has revolutionized the landscape of remote psychophysiological assessment. In particular, this technology has been utilized for health monitoring [50,51,52] and for the detection of daily living and sports activities, highlighting its versatility in monitoring and promoting health and well-being [53,54,55,56].
Electrodermal activity (EDA), often referred to as Galvanic Skin Response (GSR), is a physiological signal that indicates changes in the electrical characteristics of the skin caused by sweat gland function [57,58]. EDA is regarded as a delicate indicator of sympathetic arousal and has been extensively used to evaluate emotional and cognitive functions, as well as the functioning of the autonomic nervous system [59,60].
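A common first step in using EDA as a sympathetic-arousal index is separating the slow tonic level (skin conductance level) from the fast phasic responses that ride on it. The sketch below does this with a simple moving-average baseline; this is an illustrative simplification of dedicated decomposition methods (e.g., convex-optimization approaches), and the sampling rate and trace are hypothetical.

```python
import numpy as np

def decompose_eda(signal, fs=4.0, window_s=10.0):
    """Split an EDA trace into a slow tonic baseline and a fast phasic
    component using a moving-average filter (illustrative only)."""
    w = int(window_s * fs)
    kernel = np.ones(w) / w
    tonic = np.convolve(signal, kernel, mode="same")  # slow baseline
    phasic = signal - tonic                            # fast responses
    return tonic, phasic

# Synthetic 60 s trace at 4 Hz: a slowly drifting baseline plus one
# skin-conductance-response-like bump around t = 20-25 s
t = np.arange(0, 60, 0.25)
trace = 2 + 0.01 * t
trace[80:100] += 0.5
tonic, phasic = decompose_eda(trace)
```

The number and amplitude of phasic responses per minute are among the EDA features typically fed into the CW models discussed later.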
The breathing rate, which refers to the frequency of breaths taken within a minute, is a vital factor in psychophysiological evaluation as it mirrors the physiological and emotional condition of the body [61,62]. The significance of respiratory rate resides in its intimate correlation with the autonomic nervous system and the control of emotions. Fluctuations in respiration rate serve as an indication of tension, anxiety, and relaxation, making it a helpful indicator for psychophysiological evaluation [63,64,65,66]. Furthermore, the pace at which one breathes is strongly connected to HRV, offering valuable information about the body’s reaction to stress and emotional stimulation [43,62].
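Breathing rate can be estimated from a chest-band respiration trace by locating the dominant spectral peak within a plausible respiratory band. The sketch below uses 0.1-0.7 Hz (6-42 breaths per minute) as that band, a common but not universal choice, and runs on a synthetic trace.

```python
import numpy as np

def breathing_rate_bpm(signal, fs):
    """Estimate breathing rate (breaths/min) as the dominant spectral
    peak of a respiration trace within the 0.1-0.7 Hz band."""
    sig = np.asarray(signal, dtype=float) - np.mean(signal)
    freqs = np.fft.rfftfreq(len(sig), d=1.0 / fs)
    power = np.abs(np.fft.rfft(sig)) ** 2
    band = (freqs >= 0.1) & (freqs <= 0.7)          # plausible breathing range
    peak = freqs[band][np.argmax(power[band])]      # dominant frequency
    return 60.0 * peak

# Synthetic chest-band trace: a 0.25 Hz breathing cycle (15 breaths/min)
# plus measurement noise, sampled at 10 Hz for 60 s
fs = 10.0
t = np.arange(0, 60, 1 / fs)
resp = np.sin(2 * np.pi * 0.25 * t) + 0.1 * np.random.default_rng(0).normal(size=t.size)
print(breathing_rate_bpm(resp, fs))
```

Windowed estimates of this kind are what allow the stress- and workload-related fluctuations in respiration described above to be tracked continuously.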
In addition, eye tracking technology has become an indispensable tool in the assessment of CW across diverse domains, offering insights through physiological indicators like pupil dilation, eye movement patterns, and fixation durations. This method’s efficacy in evaluating CW is supported by several studies in various contexts, such as goal-directed tasks, robotic surgical training, clinical simulations, and space teleoperation, underlining its broad applicability. For instance, Bailey & Iqbal showcased the potential of eye tracking in monitoring CW fluctuations during interactive tasks through pupil dilation [67]. Similarly, Zhang & Cui validated the reliability of the Tobii Pro Nano eye tracker in detecting CW changes in real-time during N-back tasks [68]. Wu et al., further, demonstrated the predictive capability of eye-tracking metrics in estimating CW levels with an accuracy of 84.7% during robotic surgical training [69]. The integration of eye tracking with other physiological measurements, such as EEG, has been shown to refine CW assessments [70].
Moreover, in this perspective, the development of smart jewelry has also increased the impact of physiological monitoring in ecological conditions. Smart jewelry is a growing industry that combines fashion and technology and falls under the category of wearable gadgets. The collection includes a wide variety of jewelry pieces, such as bracelets, rings, necklaces, and earrings, that are equipped with modern technical elements for different purposes. Smart jewelry is specifically designed to effortlessly blend into daily routines, providing features like instantaneous health monitoring, communication, and personal security [71,72,73]. These gadgets are equipped with sensors and communication modules that provide the monitoring of biometric signals, tracking of activities, and provision of emergency alarms. Smart jewelry has attracted attention for its potential in eHealth-related services because of its ability to discreetly and unobtrusively monitor vital signs and physical activity. In addition, smart jewelry is positioned to transform the conventional jewelry sector by integrating cutting-edge technology such as Internet of Things (IoT) connection, blockchain-based authentication, and even emotional and contextual feedback via embedded LED displays. The convergence of aesthetics and usefulness in smart jewelry offers novel prospects for customized healthcare, fashion, and self-expression, rendering it a propitious domain for more investigation and advancement.
Figure 2 describes the principal physiological signals used for CW monitoring.
The following sections describe how these signals, acquired through wearable sensors, have been used to monitor human CW.

3. Cognitive Workload Monitoring through Unimodal and Multimodal Approach

CW monitoring can be roughly categorized into two types according to the recognition modality used, i.e., unimodal and multimodal. Unimodal methods generally employ single channels, such as face images, speech, physiological signals, or text, to classify different emotional states. The multimodal approach uses two or more channels to analyze emotion or CW comprehensively.
The feature extraction process is a crucial part of both the unimodal and multimodal approaches, because distinct features can facilitate precise results. The feature learning methods used for the unimodal approach can also be applied in the multimodal one, with features extracted separately according to the characteristics of the single modalities [25].
The main papers reported in this section are summarized in Table 1. Notably, 58.3% of the papers considered employed a multimodal approach for CW monitoring, whereas 41.7% of the studies used a unimodal framework.

3.1. Unimodal Physiological Monitoring of Cognitive Workload

The studies reporting a unimodal approach to monitor CW in HMI mainly concern gesture recognition, eye movement, facial expression, and heart rate monitoring.
Concerning the investigation of the gestures and pose of the individuals, Rupprecht et al. [74] assessed the effectiveness and user-friendliness of incorporating a dynamic projection system with visual human location and gesture detection, in industrial site assembly chores. The Task Completion Time (TCT) according to the General Assembly Task Model (GATM) was used to evaluate process efficiency, and the results showed the potential for efficiency improvement. The guided in-situ instructions, which displayed the mounting position and orientation directly at the correct place, were found to be particularly useful in terms of process and usability. Participants provided feedback on the experiments, suggesting improvements such as smoother and more accurate gesture control, simplified information using pictograms, and a mixture of both static in-view and guided in-situ instructions.
Similarly, Ciccarelli et al. [75] presented a system aimed at boosting physical ergonomics in the context of human-robot cooperation. The primary objective was to mitigate the likelihood of musculoskeletal disorders and promote the overall well-being of workers. The system employed workers’ anthropometric parameters, monitored their posture using inertial and visual systems, considered job requirements, and performed real-time risk assessment to optimize robot behavior. The first laboratory tests demonstrated the dependability and precision of the posture monitoring system using the Cubemos Skeleton Tracking SDK and XSens® technology. However, limitations were identified, such as occlusion and failure to detect all body points, which could be addressed through AI algorithms and the use of multiple depth cameras. The system also incorporated gesture recognition for task activation and control, with gestures like “OK” and “STOP” implemented. Further enhancements were suggested to improve recognition accuracy and accommodate different tasks. A virtual simulation of a real case study in a quality control workstation demonstrated the system’s effectiveness in improving worker posture and reducing ergonomic risks.
Concerning eye movement analysis, Biswas and Prabhakar [77] examined the velocities of saccadic intrusions. Outliers were removed using the outer fence estimated from the median and inter-quartile range. The average velocities of saccadic intrusion exhibited variability across subjects, with 10 out of 12 people demonstrating velocities above 3°/s. A one-way analysis of variance (ANOVA) revealed a statistically significant impact of conditions on the z-scores of saccadic intrusions, with a difference between the control and N-back conditions.
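The two analysis steps mentioned for this study (robust outlier removal by an outer fence, followed by a one-way ANOVA across conditions) can be sketched as below. The fence multiplier (3×IQR around the median) is a common robust convention, and the velocity values are illustrative assumptions, not the study’s data.

```python
import numpy as np
from scipy import stats

def remove_outliers(values, k=3.0):
    """Keep only values inside an outer fence built from the median and
    inter-quartile range (median +/- k*IQR, a common robust convention)."""
    v = np.asarray(values, dtype=float)
    q1, med, q3 = np.percentile(v, [25, 50, 75])
    iqr = q3 - q1
    return v[(v >= med - k * iqr) & (v <= med + k * iqr)]

# Hypothetical saccadic-intrusion velocities (deg/s) for two conditions;
# the 25.0 value stands in for a tracking artifact
control = remove_outliers([3.1, 3.4, 3.2, 3.0, 3.3, 25.0])
nback = remove_outliers([4.2, 4.5, 4.1, 4.4, 4.3, 4.6])

# One-way ANOVA testing whether condition affects intrusion velocity
f_stat, p_value = stats.f_oneway(control, nback)
print(f"F = {f_stat:.1f}, p = {p_value:.4f}")
```

With clearly separated condition means, as in this toy data, the ANOVA returns a large F statistic and a small p-value, mirroring the significant condition effect the study reports.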
Li et al. [78] developed an augmented visualization Primary Flight Display (PFD) that considerably enhanced pilots’ situation awareness and decreased perceived workload compared to the traditional PFD. Pilots, using the augmented PFD, could better identify the status of flight modes, leading to shorter response times in cognitive information processing. Fixation durations on augmented PFDs were significantly shorter than traditional PFDs, indicating improved attention distribution and situation awareness. The subjective workload ratings using NASA-TLX showed that the augmented PFD design reduced mental demand and frustration, compared to the traditional PFD. The proposed augmented visualization design facilitated pilots’ attention distribution by decreasing fixation duration and increasing the frequency of fixations.
The study by Viegas et al. [83] introduced a technique capable of identifying mental stress by analyzing facial expressions captured only via video footage. The primary benefit of using video for personal stress detection lies in the convenient availability of webcams during computer use. A total of 17 distinct Action Units (AUs), ranging from higher-level to lower-level facial expressions, were extracted on a frame-by-frame basis to identify and measure mental stress resulting from a multitasking activity coupled with social assessment. The authors achieved a 74% accuracy in differentiating between stress and non-stress scenarios using leave-one-subject-out (LOSO) cross-validation for subject-independent classification, and an accuracy of 91% using 5-fold cross-validation for subject-dependent classification. Several elementary classifiers were used to conduct subject-specific and subject-agnostic analyses, including Random Forest, LDA, Gaussian Naive Bayes, and Decision Tree. The subject-independent classification yielded a range of outcomes, from 29% for the 6-class problem to 74% for the binary classification, whereas the subject-dependent results ranged from 65% to 91%.
The paper by Pongsakomsathien et al. [84] evaluated the feasibility of employing a wearable cardiac monitoring device for real-time HR recording in aerospace applications. The validity of the measurements from the wearable device was compared to a clinically validated device. The study focused on a challenging aerospace task involving time-critical HMI and high CW. The results supported the suitability of the sensor for the intended Cognitive Human-Machine Interfaces and Interactions (CHMI2) system application. The paper discussed the factors that can influence the validity performance of cardiac measurement data, such as hardware and software processing, filtering, and data handling methods, as well as issues like loose straps and contaminated electrodes.

3.2. Multimodal Physiological Monitoring of Cognitive Workload

The papers reporting a multimodal approach to monitor CW mainly concern the simultaneous acquisition of different physiological data, particularly eye tracking, heart rate variability, breathing rate, and posture.
Beggiato et al. [76] gathered physiological measurements, including HR, pupil diameter, and eye blink rate, in order to detect discomfort during autonomous driving scenarios. HR consistently decreased during unpleasant conditions, whereas pupil diameter rose and eye blink rate decreased in visually observed uncomfortable scenarios. These modifications were considered indicative of heightened information-processing demands and CW. The findings represented a foundation for creating an algorithm able to identify discomfort in real-time, to be tested on the road using an autonomous car. Liang et al. [79] determined that using a touch screen input method took longer and involved more physical effort compared to using a mouse. The study was realized using the human-machine interface of an early warning aircraft as the system the users interacted with, monitoring eye tracking measurements, HR, and HRV. The mouse input method improved information processing by lowering the average time spent on fixations. Additionally, using the color mode on the scenario map reduced the number of rapid eye movements (saccades) in comparison to the grayscale version. In particular, grayscale mode helped to mitigate variations in CW caused by various input modalities, hence minimizing visual fatigue; however, it required more visual scanning and spatial orientation, while color mode lessened the burden of visual search and time demand. Utilizing color-coding on the scenario map enhanced task efficiency and diminished effort in comparison to monochromatic displays. The paper by Khamaisi et al. [80] outlined a methodology for evaluating the User Experience (UX) of employees with the aim of fostering a human-centered approach to manufacturing facilities and improving overall sustainability. The assessment entailed the use of non-intrusive wearable sensors to track human activities and gather physiological indicators, alongside questionnaires for subjective self-evaluation.
The method was used on a virtual reality (VR) simulation of complex work sequences at an oil and gas pipes production site. This allowed for the detection of potentially stressful circumstances for the operators, both physically and mentally. The research highlighted the significance of acquiring a profound comprehension of the work environment and organization, together with expertise in users’ requirements and ergonomics, to enhance workers’ welfare, working circumstances, and industrial outcomes. The paper by Peruzzini et al. [81] presented a mathematical model that explores capacity management within the framework of Industry 4.0, using several costing models such as ABC (Activity-Based Costing) and TDABC (Time-Driven Activity-Based Costing). The research emphasized the balance between maximizing capacity and achieving operational efficiency, demonstrating that increasing capacity may mask operational inefficiencies. The study suggested using a mixed reality (MR) arrangement to facilitate the design for serviceability. Physiological signals, including HR, HRV, breathing rate, and posture, were observed to evaluate the physical and cognitive burden of users while performing maintenance activities in the MR set-up. The findings demonstrated that the optimal design solution effectively decreased the burden of operators, enhanced postural comfort, and mitigated cognitive stress. Ciccarelli and colleagues [85] introduced a technology that facilitated the tracking of operators’ activities, the analysis of data, and the execution of corrective measures to promote social sustainability in the workplace. The instrument enabled the monitoring of workers’ actions, as well as their physical and cognitive exertion, environmental comfort, and adherence to production standards such as work rhythm and product/component specifications. The data were collected via Internet of Things (IoT) devices, such as chest bands, wristbands, smart glasses, and inertial measurement units.
The physical ergonomics data were analyzed by tracking the movement of body segments and measuring relative anatomical joint angles, steps, force load, and vibrations. This analysis was conducted to predict and prevent musculoskeletal problems. The cognitive ergonomics data were collected using physiological measures such as HR, HRV, breathing rate, eye-related parameters (electrooculogram, blinks, fixation), and EDA. These factors enabled the identification of concerns pertaining to the psychological and cognitive reactions of employees. Ciccarelli et al. also presented a systematic review, analyzing the occupational risks faced by workers and the proposed solutions [86]. The Occupational Safety and Health Administration (OSHA) classified workplace risks into primary categories: safety hazards and ergonomic hazards. Recently, a new categorization has been introduced to include psychosocial and organizational risks, acknowledging the growing understanding of psychosocial factors. With respect to this goal, technological solutions including IoT systems, Digital Twins, and extended reality were considered. Various studies suggested IoT frameworks for ergonomic assessment. These frameworks utilize wearable sensors to measure physiological parameters such as HR, HRV, respiratory rate, EDA, EEG, and pupillometry. The purpose of these assessments is to evaluate CW through AI technology, including ML and computer vision.
The research by Peruzzini and colleagues [82] introduced a systematic approach for using current digital technologies to aid in product-process design. This approach involved analyzing workers’ behaviors and evaluating their perceived experience in industrial settings. The study proposed a systematic protocol analysis to objectively and quantitatively assess the workers’ experience. The goal was to assist in defining the needs for product-process design via the use of digital technology. This study examined the analysis of the perceived human experience in the working environment within the framework of smart factories. Specifically, it established a framework for assessing human stress and comfort to aid in the development of industrial systems, taking into account both product and process characteristics in a cohesive manner. The setup consisted of a GoPro Hero3 video camera for recording real workers and the workspace environment. Additionally, the Tecnomatix Jack software tool by Siemens was used to model the virtual environment and create a “digital twin” of the workplace. The setup also included VICON Bonita cameras for full body tracking of real workers and real object tracking. To analyze the eye fixations of real users in the mixed reality (MR) environment, a high-quality eye tracking system, Glasses 2 by Tobii, was used. Furthermore, a multi-parametric wearable sensor, BioHarness 3.0 by Zephyr, was used for real-time monitoring of physiological parameters to analyze the physical and mental stress of real users. Regarding eye tracking, the collected data were very valuable for monitoring the interaction between humans and systems, as well as for establishing connections between eye-related data and human stress, CW, and emotions.

4. Cognitive Workload Assessment through Machine Learning Approaches

The paper by Smith et al. [87] centered on the estimation of the physical effort in human-robot teams that operate in unpredictable circumstances. The workload was divided into many components, including cognitive, visual, verbal, auditory, gross motor, fine motor, and tactile. The research employed wearable sensors, such as HR monitors, motion trackers, and surface electromyography sensors, to gather physiological measurements. These measurements were used to estimate the workload of gross motor skills, fine motor skills, and tactile abilities. The ML models built using these physiological measures were deemed unreliable as a result of task ambiguity and the presence of noise in the data.
Dell’Agnola et al. [88] proposed a new weighted-learning method for Support Vector Machines (SVMs) that optimizes the model for specific subjects. The study discussed the selection of ML algorithms based on factors such as data size and system requirements, noting that SVM is commonly used but not consistently the best-performing model. The personalized weighted-learning approach accounted for person-dependent variance in the physiological response to workload, and performance was evaluated using the NASA Task Load Index. The paper emphasized the multidimensional nature of CW and the need to consider task type, duration, and subjective psychological experiences.
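As an illustration of the weighted-learning idea, the sketch below upweights the target subject’s samples when fitting an SVM, biasing the decision boundary toward that subject’s physiology. The features, labels, and weight value are synthetic placeholders, not the configuration used by Dell’Agnola et al.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
# Synthetic physiological features: rows = time windows, cols = e.g. HRV metrics.
X_pool = rng.normal(0.0, 1.0, size=(200, 4))       # data from other subjects
y_pool = (X_pool[:, 0] + 0.5 * X_pool[:, 1] > 0).astype(int)
X_subj = rng.normal(0.2, 1.0, size=(40, 4))        # data from the target subject
y_subj = (X_subj[:, 0] + 0.5 * X_subj[:, 1] > 0).astype(int)

X = np.vstack([X_pool, X_subj])
y = np.concatenate([y_pool, y_subj])
# Weighted learning: samples from the target subject count more in the SVM loss
# (the 5.0 factor is an illustrative choice, not a value from the paper).
w = np.concatenate([np.ones(len(y_pool)), 5.0 * np.ones(len(y_subj))])

model = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
model.fit(X, y, svc__sample_weight=w)
print(model.score(X_subj, y_subj))
```

The same mechanism (per-sample weights in the fit) generalizes to other classifiers that accept `sample_weight`.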
Masinelli et al. [89] proposed an ML technique for multimodal CW monitoring and evaluation during manual labor, aimed at extending the lifetime of wearable sensors. The acquisition unit consisted of different wearable sensors worn by the workers during their manual labor.
The study of Zanetti et al. [90] proposed an ML design methodology and data processing strategy for real-time CW monitoring on resource-constrained wearable devices. The proposed CW monitoring solution achieved an accuracy of 74.5% and a geometric mean of 74.0% between sensitivity and specificity for classification on unseen data. The model optimization technique produced a model 27.5 times smaller than one with default parameters, while the multi-batch data processing scheme reduced RAM usage by a factor of 14. The method utilized a mere 1.28% of the total processing time, enabling the device to attain a battery life of 28.5 h. The paper further validated the suggested approach, compared it with a full-capacity solution, applied an artifact removal strategy, and provided execution profiling statistics on the e-Glass platform.

The research by Pongsakornsathien et al. [91] explored the latest developments in sensor networks for aerospace cyber-physical systems, with a specific emphasis on Cognitive Human-Machine Interfaces and Interactions. The article examined the essential neurophysiological metrics used in this environment and explored their correlation with the cognitive states of the operator, introducing appropriate data analysis methods based on ML and statistical inference. These methods were capable of effectively processing both neurophysiological and operational data to produce precise estimates of cognitive states. The chosen sensors were eye tracking sensors, used to extract gaze features and pupillometry, and passive control systems able to detect several eye activity characteristics, including fixations, blink rate, saccades, pupil diameter, visual entropy, and dwell duration. These variables were associated with the cognitive state of the operator; the eye tracking system, in particular, offered two operating modes: real-time data transmission and data recording.
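Several of the eye-activity characteristics mentioned above (blink rate, pupil diameter) can be derived directly from a pupil-diameter trace. The sketch below uses a synthetic 60 Hz signal with blinks encoded as missing samples, purely as an illustration of the feature-extraction step; the sampling rate, blink encoding, and values are assumptions for the example.

```python
import numpy as np

FS = 60  # assumed eye tracker sampling rate (Hz)
rng = np.random.default_rng(1)

# Synthetic pupil-diameter trace (mm) over 30 s; blinks appear as NaN runs.
t = np.arange(0, 30, 1 / FS)
pupil = 3.5 + 0.3 * np.sin(2 * np.pi * 0.05 * t) + rng.normal(0, 0.02, t.size)
for s in rng.choice(t.size - 10, size=8, replace=False):
    pupil[s:s + 8] = np.nan  # ~130 ms eyelid closure

def eye_features(signal, fs):
    """Blink rate (blinks/min) and mean pupil diameter, ignoring blink samples."""
    is_blink = np.isnan(signal)
    # A blink onset is a False -> True transition in the NaN mask.
    onsets = np.flatnonzero(~is_blink[:-1] & is_blink[1:]) + 1
    if is_blink[0]:
        onsets = np.r_[0, onsets]
    duration_min = len(signal) / fs / 60.0
    return {
        "blink_rate_per_min": len(onsets) / duration_min,
        "mean_pupil_mm": float(np.nanmean(signal)),
    }

feats = eye_features(pupil, FS)
print(feats)
```

Fixations, saccades, and dwell durations require the gaze-position channel as well, but follow the same windowed feature-extraction pattern.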
The selected sensor was mainly used to measure pupil diameter, as well as cardiac and respiratory activities, whereas neuroimaging technologies were used to observe and better understand the brain’s functioning.
Neuroimaging techniques may be classified into two primary categories: direct observation of neural activity in response to stimuli, and indirect measurement of neural activity through metabolic indicators. The first category involves sensors such as EEG, which capture the electrical signals produced by cerebral activity, caused by the activation of neurons [92,93]. The second involves sensors such as fNIRS, which employ a spectroscopic approach to measure blood oxygenation levels in the brain’s cortex [94].
Moreover, facial expression analysis is a frequently used technique for assessing the emotional states of individuals. Like speech analysis, it requires no specialized equipment other than a camera (such as an RGB camera) and is non-intrusive [95]. In this context, various ML methodologies have been used to analyze image data, including Neural Fuzzy Systems, artificial neural networks, and convolutional neural networks [96,97].
The main papers described in this section are summarized in Table 2.

5. Discussion

In this review, the importance of CW monitoring across several application fields has been investigated, including ergonomics and computer science.
In the field of ergonomics, the assessment of CW is a multifaceted endeavor that involves the integration of sensitive measurement techniques, such as physiological assessments, to accurately evaluate the mental demands placed on individuals during various tasks. The diverse applications of CW monitoring in ergonomics underscore its significance in enhancing human performance and well-being across different domains, including aviation, medicine, automotive, and HMI [98].
In the context of aviation, the assessment of CW is of exceptional importance due to its direct impact on flight safety and performance. In fact, heightened emotional states and elevated CW can increase the probability of human error in aviation [99]. This highlights the critical need for real-time monitoring of CW to mitigate the risk of errors in aviation operations. The integration of advanced techniques for CW monitoring based on physiological data [100], and the consideration of CW in the design of aviation systems, cockpits, and operations, underscore its significance in ensuring safe and efficient air travel [101].
The assessment of CW in the automotive field is crucial for understanding the mental demands placed on individuals within the driving context. In fact, road accidents are among the most prevalent causes of injury and death, and they are frequently due to excessive CW and driver fatigue [59,102]. Hence, in this field, predicting such cognitive states by means of ML algorithms applied to neurophysiological recordings could be crucial for preventing accidents.
In the context of medicine and healthcare, the assessment of CW represents an important aspect due to its direct impact on clinical performance, patient safety, and overall healthcare outcomes [103]. This highlights the relevance of applying advanced monitoring techniques to assess CW in clinical environments, thereby enhancing the understanding of the mental demands placed on healthcare professionals during patient care. Additionally, monitoring CW has implications for mental health, as chronic exposure to high cognitive loads can lead to stress, burnout, and other mental health issues. Timely interventions not only help prevent these outcomes but also contribute to a healthier work environment.
The literature review demonstrated that ML plays a pivotal role in the quantification and monitoring of CW, offering advanced techniques for automated classification and real-time assessment that make it highly suited for HMI applications. In fact, ML, combined with advanced digital signal processing, can achieve automated and accurate classification of CW in natural environments, enhancing decision-making, safety, and performance across various operational contexts.
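As a minimal illustration of such a pipeline, the sketch below combines basic signal processing (time-domain HRV features computed from RR-interval series) with a cross-validated classifier. The data, effect sizes, and two-class setup are invented for the example and are not drawn from the reviewed studies.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)

def hrv_features(rr_ms):
    """Time-domain HRV features from an RR-interval series (ms)."""
    diffs = np.diff(rr_ms)
    return [
        rr_ms.mean(),                  # mean RR interval
        rr_ms.std(ddof=1),             # SDNN
        np.sqrt(np.mean(diffs ** 2)),  # RMSSD
    ]

# Synthetic dataset: high CW is simulated as shorter, less variable RR
# intervals (sympathetic dominance) -- purely illustrative parameter values.
X, y = [], []
for _ in range(60):
    X.append(hrv_features(rng.normal(850, 55, 120)))  # low-workload epoch
    y.append(0)
    X.append(hrv_features(rng.normal(700, 30, 120)))  # high-workload epoch
    y.append(1)
X, y = np.asarray(X), np.asarray(y)

clf = LogisticRegression(max_iter=1000)
scores = cross_val_score(clf, X, y, cv=5)
print(scores.mean())
```

In a multimodal setup, features from EDA, respiration, or eye tracking would simply be concatenated to the same feature rows before fitting.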
CW monitoring emerges as a pivotal component in the domain of HMI, offering substantial advantages for the efficacy of interactions and the well-being of human operators. Its significance is underscored in environments where humans are engaged with complex systems. Monitoring CW is instrumental in identifying instances of excessive stress or cognitive burden on operators, which could precipitate errors or accidents; early detection of such instances facilitates timely interventions to mitigate CW, thereby enhancing operational safety.

Furthermore, understanding CW enables the optimization of system design and the allocation of tasks between humans and machines, ensuring tasks are aligned with human cognitive capabilities. This alignment prevents operators from being underloaded, which can lead to boredom and loss of attention, or overloaded, which can result in errors and diminished performance. The concept of adaptive systems, which dynamically adjust their behavior based on the operator’s current cognitive load, relies heavily on CW monitoring: these systems might automate certain tasks when detecting high cognitive load, or provide additional support or feedback to the operator.

In the context of training and skill development, monitoring CW allows for the customization of training programs to meet individual needs, ensuring operators are well-prepared for their tasks, and aids in identifying areas requiring further training. Keeping CW at an optimal level also enhances overall user experience and satisfaction: systems designed with an understanding of CW are likely to be more user-friendly and to meet the operators’ needs more effectively.
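The adaptive-system behavior described above can be sketched as a simple hysteresis policy driven by an externally supplied CW estimate. The thresholds and the hysteresis band below are illustrative assumptions, not values from the reviewed studies.

```python
from dataclasses import dataclass

@dataclass
class AdaptiveAssistant:
    """Toy adaptive-automation policy over a normalized CW estimate in [0, 1]."""
    high: float = 0.7   # engage automation above this estimated CW
    low: float = 0.4    # release automation below this estimated CW
    automated: bool = False

    def update(self, cw_estimate: float) -> bool:
        # Hysteresis avoids rapid toggling when CW hovers near one threshold.
        if not self.automated and cw_estimate >= self.high:
            self.automated = True
        elif self.automated and cw_estimate <= self.low:
            self.automated = False
        return self.automated

assistant = AdaptiveAssistant()
trace = [0.3, 0.5, 0.75, 0.65, 0.5, 0.35]
states = [assistant.update(cw) for cw in trace]
print(states)  # -> [False, False, True, True, True, False]
```

Automation engages only once the estimate crosses the upper threshold and is held until it falls below the lower one, which is the basic mechanism behind load-dependent task reallocation.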
Importantly, as assessed in the literature survey, to monitor CW effectively, various methods and technologies are utilized, including physiological measurements, performance measures, and subjective assessments. Integrating these measures into human-machine systems necessitates careful consideration of privacy, ethics, and the practicality of monitoring methods across different operational contexts.

6. Limitations and Future Directions

In the realm of HMI, CW monitoring is indispensable for optimizing system performance and user experience. Yet, the endeavor to effectively measure and adapt to CW in real-time presents a multifaceted challenge characterized by several limitations inherent to current methodologies.
A primary concern is the intrusiveness and practicality of some existing CW monitoring techniques. Methods that rely on physiological signals, such as EEG or HRV, often require equipment that can be obtrusive and uncomfortable for the user. This not only impacts the feasibility of deploying such measures in everyday scenarios but may also influence the cognitive load itself, thereby skewing the results. In this context, the employment of wearable sensors could enhance the applicability of CW monitoring in ecological contexts.
The accuracy and reliability of physiological measurements further compound the issue. In fact, physiological metrics can be susceptible to a range of external factors, from emotional states to health conditions, which may not necessarily correlate with the CW. The subjective nature of self-reporting measures introduces an additional layer of complexity, as these can be influenced by personal perceptions, potentially misrepresenting the actual workload.
Real-time monitoring, essential for the development of adaptive systems, faces its own set of hurdles. The technical challenge of processing data swiftly enough to make real-time adjustments, combined with the need for non-intrusive monitoring methods, presents a serious obstacle.
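To make the real-time constraint concrete, the sketch below maintains a sliding-window RMS of a physiological sample stream with constant work per sample, the kind of incremental computation an embedded CW monitor would need. Window length and input values are arbitrary illustrative choices.

```python
from collections import deque
import math

class StreamingRMS:
    """O(1)-per-sample sliding-window RMS of a sample stream."""

    def __init__(self, window: int):
        self.buf = deque(maxlen=window)
        self.sq_sum = 0.0

    def push(self, x: float) -> float:
        # Evict the oldest sample's contribution once the window is full.
        if len(self.buf) == self.buf.maxlen:
            self.sq_sum -= self.buf[0] ** 2
        self.buf.append(x)
        self.sq_sum += x * x
        return math.sqrt(self.sq_sum / len(self.buf))

rms = StreamingRMS(window=4)
out = [round(rms.push(v), 3) for v in [1.0, 1.0, 1.0, 1.0, 3.0]]
print(out)  # -> [1.0, 1.0, 1.0, 1.0, 1.732]
```

Because each update touches only one incoming and one outgoing sample, the same pattern scales to multiple channels on low-power hardware without buffering full recordings.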
Moreover, the variability in individual responses to CW necessitates a tailored approach. Factors such as expertise, age, and cognitive capacity mean that a one-size-fits-all solution is untenable, requiring systems that can adapt to the nuances of individual user profiles. Notably, integrating CW monitoring into system design entails a multidisciplinary effort, bridging psychology, human factors, and engineering disciplines. The complexity of such integration, however, poses a barrier, necessitating customized solutions that can accommodate the specific demands of different applications.
Finally, the ethical and privacy implications of CW monitoring cannot be overlooked. The use of physiological and behavioral data raises considerable concerns regarding user privacy and autonomy. Establishing a framework for ethical monitoring practices is crucial, yet developing universally accepted guidelines remains an ongoing challenge.
Future studies in the monitoring of CW hold substantial potential for advancing our understanding of mental demands and enhancing performance across various domains. For instance, the implications of CW in financial decision-making processes should be further investigated, elucidating the impact of cognitive workload on decision quality and financial risk management [104]. Additionally, future studies could delve into CW implications for creative tasks and the associated brain network dynamics [105].
Importantly, the variability in individual responses to cognitive demands, coupled with external influences such as stress or fatigue, requires a personalized approach to interpreting physiological data. Individual physiological responses are not homogeneous, but are shaped by a complex interplay of intrinsic and extrinsic factors. A personalized approach acknowledges the differential impact of cognitive loads on diverse individuals, factoring in personal thresholds and resilience to stressors. It also calls for adaptable models that can dynamically adjust to these varying individual responses, ensuring that interpretations of physiological signals are contextually relevant and accurate. Future studies should indeed focus on this tailored approach, since it is essential for the accurate assessment of CW, enhancing the precision and applicability of monitoring systems across different user groups.
The imperative for future research in the domain of CW monitoring is to advance the integration of edge computing technologies within everyday life tasks, thereby enhancing real-time assessment capabilities. This calls for the development of computationally efficient models and methodologies tailored for monitoring and detecting individuals’ CW statuses with greater accuracy and reliability. Specifically, this involves designing algorithms that are not only robust and adaptive to varying contexts but also optimized for the low-power, high-performance computations typical of edge computing environments. In implementing these advancements, mobile devices, such as smartphones, emerge as a pivotal component. Their ubiquitous nature and advanced computational capabilities make them ideal candidates for CW monitoring within an IoT framework. This integration would allow for a seamless and unobtrusive collection of data, leveraging the sensors and computational power available in these devices. Moreover, the IoT framework could facilitate a more interconnected and responsive system, where data from various sources can be integrated and analyzed in real-time, thereby supporting humans in their day-to-day activities more effectively.

7. Conclusions

This narrative review examined CW monitoring through physiological signals across several applications. Notably, the literature shows the potential of physiological signals to provide real-time, objective metrics of CW, which has profound implications for diverse fields such as workplace safety and HMI. However, challenges remain in standardizing these measures across different populations and contexts. Variability in individual responses to cognitive demands and the influence of external factors like stress or fatigue necessitate a tailored approach to interpreting physiological data. There is also a need for advanced computational models that can accurately parse these complex signals and translate them into meaningful CW assessments.
Future research directions should focus on refining the accuracy and applicability of physiological markers and on developing inclusive models that account for individual differences. The interdisciplinary nature of this field will continue to foster collaborations that push the boundaries of our understanding of the human brain at work, leading to innovations that enhance both productivity and well-being in various settings.

Author Contributions

Conceptualization, S.I., D.P., M.T., D.C., A.T., M.C., C.F., L.C., A.F., F.F., A.M. (Arcangelo Merla) and A.M. (Andrea Monteriù); methodology, S.I., D.P., M.T., D.C. and C.F.; investigation, S.I. and D.P.; data curation, S.I., D.P., M.T., D.C., A.T., M.C., C.F., L.C., A.F. and F.F.; writing—original draft preparation, S.I. and D.P.; writing—review and editing, S.I., D.P., M.T., D.C., A.T., M.C., C.F., L.C., A.F., F.F., A.M. (Arcangelo Merla) and A.M. (Andrea Monteriù); supervision, A.M. (Arcangelo Merla) and A.M. (Andrea Monteriù). All authors have read and agreed to the published version of the manuscript.

Funding

This study was partially funded by grant: GRANT Pharaon Horizon 2020 #857188, E-MOTIVE: Empowering Mobility with affective cOmputing, brain-computer inTerface, Internet of Things, and Vital signs monitoring for Smart WhEelchairs.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data are available on request to the corresponding author.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Chen, S.; Ren, Z.; Yu, X.; Huang, A. A Dynamic Model of Evolutionary Knowledge and Capabilities Based on Human-Machine Interaction in Smart Manufactures. Comput. Intell. Neurosci. 2022, 2022, 8584888. [Google Scholar] [CrossRef]
  2. Hopko, S.; Wang, J.; Mehta, R. Human Factors Considerations and Metrics in Shared Space Human-Robot Collaboration: A Systematic Review. Front. Robot. AI 2022, 9, 799522. [Google Scholar] [CrossRef]
  3. Poklukar, Š.; Papa, G.; Novak, F. A Formal Framework of Human–Machine Interaction in Proactive Maintenance–MANTIS Experience. Autom. Časopis Autom. Mjer. Elektron. Računarstvo Komun. 2017, 58, 450–459. [Google Scholar]
  4. Xie, Y.; Liu, G.; Ren, J.; Liu, Y.; Yao, L.; Xu, B.; Chen, D.; Liu, Y. Micro-Fabrication Based Epidermal E-Tattoo with Conformability and Sensitivity as Human-Machine Interface. J. Phys. Conf. Ser. 2023, 2463, 012019. [Google Scholar] [CrossRef]
  5. Nardo, M.; Forino, D.; Murino, T. The Evolution of Man–Machine Interaction: The Role of Human in Industry 4.0 Paradigm. Prod. Manuf. Res. 2020, 8, 20–34. [Google Scholar] [CrossRef]
  6. Lorenzini, M.; Lagomarsino, M.; Fortini, L.; Gholami, S.; Ajoudani, A. Ergonomic Human-Robot Collaboration in Industry: A Review. Front. Robot. AI 2023, 9, 262. [Google Scholar] [CrossRef]
  7. Tang, H.; Guo, J.; Zhou, G. Mission Reliability Analysis of Man-Machine System. In Proceedings of the 2015 First International Conference on Reliability Systems Engineering (ICRSE), Beijing, China, 21–23 October 2015; pp. 1–5. [Google Scholar]
  8. Rubagotti, M.; Tusseyeva, I.; Baltabayeva, S.; Summers, D.; Sandygulova, A. Perceived Safety in Physical Human–Robot Interaction—A Survey. Robot. Auton. Syst. 2022, 151, 104047. [Google Scholar] [CrossRef]
  9. Braarud, P.Ø.; Bodal, T.; Hulsund, J.E.; Louka, M.N.; Nihlwing, C.; Nystad, E.; Svengren, H.; Wingstedt, E. An Investigation of Speech Features, Plant System Alarms, and Operator–System Interaction for the Classification of Operator Cognitive Workload during Dynamic Work. Hum. Factors 2021, 63, 736–756. [Google Scholar] [CrossRef] [PubMed]
  10. Heard, J.; Fortune, J.; Adams, J.A. Speech Workload Estimation for Human-Machine Interaction. Proc. Hum. Factors Ergon. Soc. Annu. Meet. 2019, 63, 277–281. [Google Scholar] [CrossRef]
  11. Khundaqji, H.; Hing, W.; Furness, J.; Climstein, M. Wearable Technology to Inform the Prediction and Diagnosis of Cardiorespiratory Events: A Scoping Review. PeerJ 2021, 9, e12598. [Google Scholar] [CrossRef]
  12. Aghajani, H.; Garbey, M.; Omurtag, A. Measuring Mental Workload with EEG+ fNIRS. Front. Hum. Neurosci. 2017, 11, 359. [Google Scholar] [CrossRef]
  13. Curtin, A.; Ayaz, H. The Age of Neuroergonomics: Towards Ubiquitous and Continuous Measurement of Brain Function with fNIRS. Jpn. Psychol. Res. 2018, 60, 374–386. [Google Scholar] [CrossRef]
  14. Chiarelli, A.M.; Perpetuini, D.; Croce, P.; Greco, G.; Mistretta, L.; Rizzo, R.; Vinciguerra, V.; Romeo, M.F.; Zappasodi, F.; Merla, A. Fiberless, Multi-Channel fNIRS-EEG System Based on Silicon Photomultipliers: Towards Sensitive and Ecological Mapping of Brain Activity and Neurovascular Coupling. Sensors 2020, 20, 2831. [Google Scholar] [CrossRef]
  15. Perpetuini, D.; Cardone, D.; Filippini, C.; Spadolini, E.; Mancini, L.; Chiarelli, A.M.; Merla, A. Can Functional Infrared Thermal Imaging Estimate Mental Workload in Drivers as Evaluated by Sample Entropy of the fNIRS Signal? In Proceedings of the 8th European Medical and Biological Engineering Conference: Proceedings of the EMBEC 2020, Portorož, Slovenia, 29 November–3 December 2020; Volume 80, pp. 223–232. [Google Scholar]
  16. Lim, S.; Son, D.; Kim, J.; Lee, Y.B.; Song, J.-K.; Choi, S.; Lee, D.J.; Kim, J.H.; Lee, M.; Hyeon, T. Transparent and Stretchable Interactive Human Machine Interface Based on Patterned Graphene Heterostructures. Adv. Funct. Mater. 2015, 25, 375–383. [Google Scholar] [CrossRef]
  17. Amjadi, M.; Kyung, K.-U.; Park, I.; Sitti, M. Stretchable, Skin-mountable, and Wearable Strain Sensors and Their Potential Applications: A Review. Adv. Funct. Mater. 2016, 26, 1678–1698. [Google Scholar] [CrossRef]
  18. Khundaqji, H.; Hing, W.; Furness, J.; Climstein, M. Smart Shirts for Monitoring Physiological Parameters: Scoping Review. JMIR Mhealth Uhealth 2020, 8, e18092. [Google Scholar] [CrossRef]
  19. Bin Heyat, M.B.; Akhtar, F.; Abbas, S.J.; Al-Sarem, M.; Alqarafi, A.; Stalin, A.; Abbasi, R.; Muaad, A.Y.; Lai, D.; Wu, K. Wearable Flexible Electronics Based Cardiac Electrode for Researcher Mental Stress Detection System Using Machine Learning Models on Single Lead Electrocardiogram Signal. Biosensors 2022, 12, 427. [Google Scholar] [CrossRef] [PubMed]
  20. Ma, T.; Chen, S.; Li, J.; Yin, J.; Jiang, X. Strain-Ultrasensitive Surface Wrinkles for Visual Optical Sensors. Mater. Horiz. 2022, 9, 2233–2242. [Google Scholar] [CrossRef] [PubMed]
  21. Shinde, P.P.; Shah, S. A Review of Machine Learning and Deep Learning Applications. In Proceedings of the 2018 Fourth International Conference on Computing Communication Control and Automation (ICCUBEA), Pune, India, 16–18 August 2018; pp. 1–6. [Google Scholar]
  22. Michie, D.; Spiegelhalter, D.J.; Taylor, C.C. Machine Learning. Neural Stat. Classif. 1994, 13, 1–298. [Google Scholar]
  23. Perpetuini, D.; Formenti, D.; Cardone, D.; Trecroci, A.; Rossi, A.; Di Credico, A.; Merati, G.; Alberti, G.; Di Baldassarre, A.; Merla, A. Can Data-Driven Supervised Machine Learning Approaches Applied to Infrared Thermal Imaging Data Estimate Muscular Activity and Fatigue? Sensors 2023, 23, 832. [Google Scholar] [CrossRef]
  24. Di Credico, A.; Weiss, A.; Corsini, M.; Gaggi, G.; Ghinassi, B.; Wilbertz, J.H.; Di Baldassarre, A. Machine Learning Identifies Phenotypic Profile Alterations of Human Dopaminergic Neurons Exposed to Bisphenols and Perfluoroalkyls. Sci. Rep. 2023, 13, 21907. [Google Scholar] [CrossRef]
  25. Pan, B.; Hirota, K.; Jia, Z.; Dai, Y. A Review of Multimodal Emotion Recognition from Datasets, Preprocessing, Features, and Fusion Methods. Neurocomputing 2023, 561, 126866. [Google Scholar] [CrossRef]
  26. Poria, S.; Cambria, E.; Bajpai, R.; Hussain, A. A Review of Affective Computing: From Unimodal Analysis to Multimodal Fusion. Inf. Fusion 2017, 37, 98–125. [Google Scholar] [CrossRef]
  27. Filippini, C.; Perpetuini, D.; Cardone, D.; Merla, A. Improving Human–Robot Interaction by Enhancing Nao Robot Awareness of Human Facial Expression. Sensors 2021, 21, 6438. [Google Scholar] [CrossRef]
  28. Dunn, J.; Runge, R.; Snyder, M. Wearables and the Medical Revolution. Pers. Med. 2018, 15, 429–448. [Google Scholar] [CrossRef]
  29. Tan, S.; Islam, M.R.; Li, H.; Fernando, A.; Afroj, S.; Karim, N. Highly Scalable, Sensitive and Ultraflexible Graphene-Based Wearable E-Textiles Sensor for Bio-Signal Detection. Adv. Sens. Res. 2022, 1, 2200010. [Google Scholar] [CrossRef]
  30. Guo, Y.; Guo, Z.; Zhong, M.; Wan, P.; Zhang, W.; Zhang, L. A Flexible Wearable Pressure Sensor with Bioinspired Microcrack and Interlocking for Full-range Human–Machine Interfacing. Small 2018, 14, 1803018. [Google Scholar] [CrossRef] [PubMed]
  31. Farahani, B.; Firouzi, F.; Chang, V.; Badaroglu, M.; Constant, N.; Mankodiya, K. Towards Fog-Driven IoT eHealth: Promises and Challenges of IoT in Medicine and Healthcare. Future Gener. Comput. Syst. 2018, 78, 659–676. [Google Scholar] [CrossRef]
  32. Zheng, K.; Gu, F.; Wei, H.; Zhang, L.; Chen, X.; Jin, H.; Pan, S.; Chen, Y.; Wang, S. Flexible, Permeable, and Recyclable Liquid-Metal-Based Transient Circuit Enables Contact/Noncontact Sensing for Wearable Human–Machine Interaction. Small Methods 2023, 7, 2201534. [Google Scholar] [CrossRef] [PubMed]
  33. Ribino, P. The Role of Politeness in Human–Machine Interactions: A Systematic Literature Review and Future Perspectives. Artif. Intell. Rev. 2023, 56, 445–482. [Google Scholar] [CrossRef]
  34. Wu, S.; Hou, L.; Chen, H.; Zhang, G.K.; Zou, Y.; Tushar, Q. Cognitive Ergonomics-Based Augmented Reality Application for Construction Performance. Autom. Constr. 2023, 149, 104802. [Google Scholar] [CrossRef]
  35. Madeo, R.C.B.; Lima, C.A.M.; Peres, S.M. Gesture Unit Segmentation Using Support Vector Machines: Segmenting Gestures from Rest Positions. In Proceedings of the 28th Annual ACM Symposium on Applied Computing, Coimbra, Portugal, 18–22 March 2013; Association for Computing Machinery: New York, NY, USA, 2013; pp. 46–52. [Google Scholar]
  36. Allen, J.; Zheng, D.; Kyriacou, P.A.; Elgendi, M. Photoplethysmography (PPG): State-of-the-Art Methods and Applications. Physiol. Meas. 2021, 42, 100301. [Google Scholar] [CrossRef] [PubMed]
  37. Allen, J. Photoplethysmography and Its Application in Clinical Physiological Measurement. Physiol. Meas. 2007, 28, R1. [Google Scholar] [CrossRef] [PubMed]
  38. Perpetuini, D.; Chiarelli, A.M.; Cardone, D.; Filippini, C.; Rinella, S.; Massimino, S.; Bianco, F.; Bucciarelli, V.; Vinciguerra, V.; Fallica, P.; et al. Prediction of State Anxiety by Machine Learning Applied to Photoplethysmography Data. PeerJ 2021, 9, e10448. [Google Scholar] [CrossRef] [PubMed]
  39. Longmore, S.K.; Lui, G.Y.; Naik, G.; Breen, P.P.; Jalaludin, B.; Gargiulo, G.D. A Comparison of Reflective Photoplethysmography for Detection of Heart Rate, Blood Oxygen Saturation, and Respiration Rate at Various Anatomical Locations. Sensors 2019, 19, 1874. [Google Scholar] [CrossRef] [PubMed]
  40. Weiler, D.T.; Villajuan, S.O.; Edkins, L.; Cleary, S.; Saleem, J.J. Wearable Heart Rate Monitor Technology Accuracy in Research: A Comparative Study between PPG and ECG Technology. Proc. Hum. Factors Ergon. Soc. Annu. Meet. 2017, 61, 1292–1296. [Google Scholar] [CrossRef]
  41. Akbar, F.; Mark, G.; Pavlidis, I.; Gutierrez-Osuna, R. An Empirical Study Comparing Unobtrusive Physiological Sensors for Stress Detection in Computer Work. Sensors 2019, 19, 3766. [Google Scholar] [CrossRef]
  42. Perpetuini, D.; Di Credico, A.; Filippini, C.; Izzicupo, P.; Cardone, D.; Chiacchiaretta, P.; Ghinassi, B.; Di Baldassarre, A.; Merla, A. Is It Possible to Estimate Average Heart Rate from Facial Thermal Imaging? Eng. Proc. 2021, 8, 10. [Google Scholar] [CrossRef]
  43. Di Credico, A.; Perpetuini, D.; Izzicupo, P.; Gaggi, G.; Cardone, D.; Filippini, C.; Merla, A.; Ghinassi, B.; Di Baldassarre, A. Estimation of Heart Rate Variability Parameters by Machine Learning Approaches Applied to Facial Infrared Thermal Imaging. Front. Cardiovasc. Med. 2022, 9, 893374. [Google Scholar] [CrossRef] [PubMed]
  44. Di Credico, A.; Petri, C.; Cataldi, S.; Greco, G.; Suarez-Arrones, L.; Izzicupo, P. Heart Rate Variability, Recovery and Stress Analysis of an Elite Rally Driver and Co-Driver during a Competition Period. Sci. Prog. 2024, 107, 00368504231223034. [Google Scholar] [CrossRef] [PubMed]
  45. Hamatta, H.S.; Banerjee, K.; Anandaram, H.; Shabbir Alam, M.; Deva Durai, C.A.; Parvathi Devi, B.; Palivela, H.; Rajagopal, R.; Yeshitla, A. Genetic Algorithm-Based Human Mental Stress Detection and Alerting in Internet of Things. Comput. Intell. Neurosci. 2022, 2022, 4086213. [Google Scholar] [CrossRef]
  46. Huttunen, R.; Leppänen, T.; Duce, B.; Oksenberg, A.; Myllymaa, S.; Töyräs, J.; Korkalainen, H. Assessment of Obstructive Sleep Apnea-Related Sleep Fragmentation Utilizing Deep Learning-Based Sleep Staging from Photoplethysmography. Sleep 2021, 44, zsab142. [Google Scholar] [CrossRef]
  47. Radha, M.; Fonseca, P.; Moreau, A.; Ross, M.; Cerny, A.; Anderer, P.; Long, X.; Aarts, R.M. A Deep Transfer Learning Approach for Wearable Sleep Stage Classification with Photoplethysmography. npj Digit. Med. 2021, 4, 135. [Google Scholar] [CrossRef] [PubMed]
  48. Iqbal, S.; Bacardit, J.; Griffiths, B.; Allen, J. Deep Learning Classification of Systemic Sclerosis from Multi-Site Photoplethysmography Signals. Front. Physiol. 2023, 14, 1242807. [Google Scholar] [CrossRef] [PubMed]
  49. Poh, M.; Battisti, A.J.; Cheng, L.; Lin, J.; Patwardhan, A.; Venkataraman, G.S.; Athill, C.A.; Patel, N.S.; Patel, C.P.; Machado, C.E.; et al. Validation of a Deep Learning Algorithm for Continuous, Real-Time Detection of Atrial Fibrillation Using a Wrist-Worn Device in an Ambulatory Environment. J. Am. Heart Assoc. 2023, 12, e030543. [Google Scholar] [CrossRef]
  50. Adams, J.L.; Kangarloo, T.; Tracey, B.; O’Donnell, P.; Volfson, D.; Latzman, R.D.; Zach, N.; Alexander, R.; Bergethon, P.; Cosman, J. Using a Smartwatch and Smartphone to Assess Early Parkinson’s Disease in the WATCH-PD Study. npj Park. Dis. 2023, 9, 64. [Google Scholar] [CrossRef]
  51. Wouters, F.; Gruwez, H.; Vranken, J.; Vanhaen, D.; Daelman, B.; Ernon, L.; Mesotten, D.; Vandervoort, P.; Verhaert, D. The Potential and Limitations of Mobile Health and Insertable Cardiac Monitors in the Detection of Atrial Fibrillation in Cryptogenic Stroke Patients: Preliminary Results from the REMOTE Trial. Front. Cardiovasc. Med. 2022, 9, 616. [Google Scholar] [CrossRef] [PubMed]
  52. Antos, S.A.; Danilovich, M.K.; Eisenstein, A.R.; Gordon, K.E.; Kording, K.P. Smartwatches Can Detect Walker and Cane Use in Older Adults. Innov. Aging 2019, 3, igz008. [Google Scholar] [CrossRef]
  53. Wang, X.; Perry, T.A.; Caroupapoullé, J.; Forrester, A.; Arden, N.K.; Hunter, D.J. Monitoring Work-Related Physical Activity and Estimating Lower-Limb Loading: A Proof-of-Concept Study. BMC Musculoskelet. Disord. 2021, 22, 552. [Google Scholar] [CrossRef]
  54. Johnston, A.H.; Weiss, G.M. Smartwatch-Based Biometric Gait Recognition. In Proceedings of the 2015 IEEE 7th International Conference on Biometrics Theory, Applications and Systems (BTAS), Arlington, VA, USA, 8–11 September 2015; pp. 1–6. [Google Scholar]
  55. Oluwalade, B.; Neela, S.; Wawira, J.; Adejumo, T.; Purkayastha, S. Human Activity Recognition Using Deep Learning Models on Smartphones and Smartwatches Sensor Data. arXiv 2021, arXiv:2103.03836. [Google Scholar]
  56. Di Credico, A.; Perpetuini, D.; Chiacchiaretta, P.; Cardone, D.; Filippini, C.; Gaggi, G.; Merla, A.; Ghinassi, B.; Di Baldassarre, A.; Izzicupo, P. The Prediction of Running Velocity during the 30–15 Intermittent Fitness Test Using Accelerometry-Derived Metrics and Physiological Parameters: A Machine Learning Approach. Int. J. Environ. Res. Public Health 2021, 18, 10854. [Google Scholar] [CrossRef]
  57. Boucsein, W. Electrodermal Activity; Springer Science & Business Media: Berlin/Heidelberg, Germany, 2012; ISBN 1-4614-1126-2. [Google Scholar]
  58. Benedek, M.; Kaernbach, C. A Continuous Measure of Phasic Electrodermal Activity. J. Neurosci. Methods 2010, 190, 80–91. [Google Scholar] [CrossRef]
  59. Kajiwara, S. Evaluation of Driver’s Mental Workload by Facial Temperature and Electrodermal Activity under Simulated Driving Conditions. Int. J. Automot. Technol. 2014, 15, 65–70. [Google Scholar] [CrossRef]
  60. Filippini, C.; Di Crosta, A.; Palumbo, R.; Perpetuini, D.; Cardone, D.; Ceccato, I.; Di Domenico, A.; Merla, A. Automated Affective Computing Based on Bio-Signals Analysis and Deep Learning Approach. Sensors 2022, 22, 1789. [Google Scholar] [CrossRef]
  61. Ali, M.; Elsayed, A.; Mendez, A.; Savaria, Y.; Sawan, M. Contact and Remote Breathing Rate Monitoring Techniques: A Review. IEEE Sens. J. 2021, 21, 14569–14586. [Google Scholar] [CrossRef]
  62. Fusco, A.; Locatelli, D.; Onorati, F.; Durelli, G.C.; Santambrogio, M.D. On How to Extract Breathing Rate from PPG Signal Using Wearable Devices. In Proceedings of the 2015 IEEE Biomedical Circuits and Systems Conference (BioCAS), Atlanta, GA, USA, 22–24 October 2015; pp. 1–4. [Google Scholar]
  63. Liguori, C.; Maestri, M.; Spanetta, M.; Placidi, F.; Bonanni, E.; Mercuri, N.B.; Guarnieri, B. Sleep-Disordered Breathing and the Risk of Alzheimer’s Disease. Sleep Med. Rev. 2021, 55, 101375. [Google Scholar] [CrossRef] [PubMed]
  64. Ludwig, N.; Gargano, M.; Formenti, D.; Bruno, D.; Ongaro, L.; Alberti, G. Breathing Training Characterization by Thermal Imaging: A Case Study. Acta Bioeng. Biomech. 2012, 14, 42–47. [Google Scholar]
  65. Osorio, R.S.; Gumb, T.; Pirraglia, E.; Varga, A.W.; Lu, S.; Lim, J.; Wohlleber, M.E.; Ducca, E.L.; Koushyk, V.; Glodzik, L.; et al. Sleep-Disordered Breathing Advances Cognitive Decline in the Elderly. Neurology 2015, 84, 1964–1971. [Google Scholar] [CrossRef]
  66. Schäfer, A.; Kratky, K.W. Estimation of Breathing Rate from Respiratory Sinus Arrhythmia: Comparison of Various Methods. Ann. Biomed. Eng. 2008, 36, 476–485. [Google Scholar] [CrossRef] [PubMed]
  67. Bailey, B.P.; Iqbal, S.T. Understanding Changes in Mental Workload during Execution of Goal-Directed Tasks and Its Application for Interruption Management. ACM Trans. Comput.-Hum. Interact. 2008, 14, 21. [Google Scholar] [CrossRef]
  68. Zhang, L.; Cui, H. Reliability of MUSE 2 and Tobii Pro Nano at Capturing Mobile Application Users’ Real-Time Cognitive Workload Changes. Front. Neurosci. 2022, 16, 1011475. [Google Scholar] [CrossRef]
  69. Wu, C.; Cha, J.; Sulek, J.; Zhou, T.; Sundaram, C.P.; Wachs, J.; Yu, D. Eye-Tracking Metrics Predict Perceived Workload in Robotic Surgical Skills Training. Hum. Factors 2020, 62, 1365–1386. [Google Scholar] [CrossRef]
  70. Iqbal, M.U.; Srinivasan, B.; Srinivasan, R. Multi-Class Classification of Control Room Operators’ Cognitive Workload Using the Fusion of Eye-Tracking and Electroencephalography. Comput. Chem. Eng. 2024, 181, 108526. [Google Scholar] [CrossRef]
  71. Patel, J.; Hasan, R. Smart Bracelets: Towards Automating Personal Safety Using Wearable Smart Jewelry. In Proceedings of the 2018 15th IEEE Annual Consumer Communications & Networking Conference (CCNC), Las Vegas, NV, USA, 12–15 January 2018; pp. 1–2. [Google Scholar]
  72. Rantala, I.; Colley, A.; Häkkilä, J. Smart Jewelry: Augmenting Traditional Wearable Self-Expression Displays. In Proceedings of the 7th ACM International Symposium on Pervasive Displays, Munich, Germany, 6–8 June 2018; pp. 1–8. [Google Scholar]
  73. Haseli, G.; Ögel, İ.Y.; Ecer, F.; Hajiaghaei-Keshteli, M. Luxury in Female Technology (FemTech): Selection of Smart Jewelry for Women through BCM-MARCOS Group Decision-Making Framework with Fuzzy ZE-Numbers. Technol. Forecast. Soc. Chang. 2023, 196, 122870. [Google Scholar] [CrossRef]
  74. Rupprecht, P.; Kueffner-McCauley, H.; Trimmel, M.; Schlund, S. Adaptive Spatial Augmented Reality for Industrial Site Assembly. Procedia CIRP 2021, 104, 405–410. [Google Scholar] [CrossRef]
  75. Ciccarelli, M.; Papetti, A.; Scoccia, C.; Menchi, G.; Mostarda, L.; Palmieri, G.; Germani, M. A System to Improve the Physical Ergonomics in Human-Robot Collaboration. Procedia Comput. Sci. 2022, 200, 689–698. [Google Scholar] [CrossRef]
  76. Beggiato, M.; Hartwich, F.; Krems, J. Physiological Correlates of Discomfort in Automated Driving. Transp. Res. Part F Traffic Psychol. Behav. 2019, 66, 445–458. [Google Scholar] [CrossRef]
  77. Biswas, P.; Prabhakar, G. Detecting Drivers’ Cognitive Load from Saccadic Intrusion. Transp. Res. Part F Traffic Psychol. Behav. 2018, 54, 63–78. [Google Scholar] [CrossRef]
  78. Li, W.-C.; Horn, A.; Sun, Z.; Zhang, J.; Braithwaite, G. Augmented Visualization Cues on Primary Flight Display Facilitating Pilot’s Monitoring Performance. Int. J. Hum.-Comput. Stud. 2020, 135, 102377. [Google Scholar]
  79. Liang, C.; Liu, S.; Wanyan, X.; Liu, C.; Xiao, X.; Min, Y. Effects of Input Method and Display Mode of Situation Map on Early Warning Aircraft Reconnaissance Task Performance with Different Information Complexities. Chin. J. Aeronaut. 2023, 36, 105–114. [Google Scholar] [CrossRef]
  80. Khamaisi, R.K.; Brunzini, A.; Grandi, F.; Peruzzini, M.; Pellicciari, M. UX Assessment Strategy to Identify Potential Stressful Conditions for Workers. Robot. Comput.-Integr. Manuf. 2022, 78, 102403. [Google Scholar]
  81. Peruzzini, M.; Grandi, F.; Pellicciari, M.; Campanella, C.E. A Mixed-Reality Digital Set-up to Support Design for Serviceability. Procedia Manuf. 2018, 17, 499–506. [Google Scholar] [CrossRef]
  82. Peruzzini, M.; Grandi, F.; Pellicciari, M. How to Analyse the Workers’ Experience in Integrated Product-Process Design. J. Ind. Inf. Integr. 2018, 12, 31–46. [Google Scholar] [CrossRef]
  83. Viegas, C.; Lau, S.-H.; Maxion, R.; Hauptmann, A. Towards Independent Stress Detection: A Dependent Model Using Facial Action Units. In Proceedings of the 2018 International Conference on Content-Based Multimedia Indexing (CBMI), La Rochelle, France, 4–6 September 2018; pp. 1–6. [Google Scholar]
  84. Pongsakornsathien, N.; Gardi, A.; Lim, Y.; Sabatini, R.; Kistan, T.; Ezer, N. Performance Characterisation of Wearable Cardiac Monitoring Devices for Aerospace Applications. In Proceedings of the 2019 IEEE 5th International Workshop on Metrology for AeroSpace (MetroAeroSpace), Turin, Italy, 19–21 June 2019; pp. 76–81. [Google Scholar]
  85. Ciccarelli, M.; Papetti, A.; Germani, M.; Leone, A.; Rescio, G. Human Work Sustainability Tool. J. Manuf. Syst. 2022, 62, 76–86. [Google Scholar] [CrossRef]
  86. Ciccarelli, M.; Papetti, A.; Germani, M. Exploring How New Industrial Paradigms Affect the Workforce: A Literature Review of Operator 4.0. J. Manuf. Syst. 2023, 70, 464–483. [Google Scholar] [CrossRef]
  87. Smith, J.B.; Baskaran, P.; Adams, J.A. Decomposed Physical Workload Estimation for Human-Robot Teams. In Proceedings of the 2022 IEEE 3rd International Conference on Human-Machine Systems (ICHMS), Orlando, FL, USA, 17–19 November 2022; pp. 1–6. [Google Scholar]
  88. Dell’Agnola, F.; Jao, P.-K.; Arza, A.; Chavarriaga, R.; Millán, J.d.R.; Floreano, D.; Atienza, D. Machine-Learning Based Monitoring of Cognitive Workload in Rescue Missions with Drones. IEEE J. Biomed. Health Inform. 2022, 26, 4751–4762. [Google Scholar] [CrossRef]
  89. Masinelli, G.; Forooghifar, F.; Arza, A.; Atienza, D.; Aminifar, A. Self-Aware Machine Learning for Multimodal Workload Monitoring during Manual Labor on Edge Wearable Sensors. IEEE Des. Test 2020, 37, 58–66. [Google Scholar] [CrossRef]
  90. Zanetti, R.; Arza, A.; Aminifar, A.; Atienza, D. Real-Time EEG-Based Cognitive Workload Monitoring on Wearable Devices. IEEE Trans. Biomed. Eng. 2021, 69, 265–277. [Google Scholar] [CrossRef] [PubMed]
  91. Pongsakornsathien, N.; Lim, Y.; Gardi, A.; Hilton, S.; Planke, L.; Sabatini, R.; Kistan, T.; Ezer, N. Sensor Networks for Aerospace Human-Machine Systems. Sensors 2019, 19, 3465. [Google Scholar] [CrossRef] [PubMed]
  92. Buzsáki, G.; Anastassiou, C.A.; Koch, C. The Origin of Extracellular Fields and Currents—EEG, ECoG, LFP and Spikes. Nat. Rev. Neurosci. 2012, 13, 407–420. [Google Scholar] [CrossRef] [PubMed]
  93. Kirschstein, T.; Köhling, R. What Is the Source of the EEG? Clin. EEG Neurosci. 2009, 40, 146–149. [Google Scholar] [CrossRef] [PubMed]
  94. Pinti, P.; Aichelburg, C.; Gilbert, S.; Hamilton, A.; Hirsch, J.; Burgess, P.; Tachtsidis, I. A Review on the Use of Wearable Functional Near-infrared Spectroscopy in Naturalistic Environments. Jpn. Psychol. Res. 2018, 60, 347–373. [Google Scholar] [CrossRef] [PubMed]
  95. Calvo, M.G.; Nummenmaa, L. Perceptual and Affective Mechanisms in Facial Expression Recognition: An Integrative Review. Cogn. Emot. 2016, 30, 1081–1106. [Google Scholar] [CrossRef]
  96. Fathima, A.; Vaidehi, K. Review on Facial Expression Recognition System Using Machine Learning Techniques. In Advances in Decision Sciences, Image Processing, Security and Computer Vision: International Conference on Emerging Trends in Engineering (ICETE); Springer: Berlin/Heidelberg, Germany, 2020; Volume 2, pp. 608–618. [Google Scholar]
  97. Abdullah, S.M.S.; Abdulazeez, A.M. Facial Expression Recognition Based on Deep Learning Convolution Neural Network: A Review. J. Soft Comput. Data Min. 2021, 2, 53–65. [Google Scholar]
  98. Ranchet, M.; Morgan, J.C.; Akinwuntan, A.E.; Devos, H. Cognitive Workload across the Spectrum of Cognitive Impairments: A Systematic Review of Physiological Measures. Neurosci. Biobehav. Rev. 2017, 80, 516–537. [Google Scholar] [CrossRef] [PubMed]
  99. Hidalgo-Muñoz, A.R.; Mouratille, D.; Matton, N.; Causse, M.; Rouillard, Y.; El-Yagoubi, R. Cardiovascular Correlates of Emotional State, Cognitive Workload and Time-on-Task Effect during a Realistic Flight Simulation. Int. J. Psychophysiol. 2018, 128, 62–69. [Google Scholar] [CrossRef] [PubMed]
  100. Taheri Gorji, H.; Wilson, N.; VanBree, J.; Hoffmann, B.; Petros, T.; Tavakolian, K. Using Machine Learning Methods and EEG to Discriminate Aircraft Pilot Cognitive Workload during Flight. Sci. Rep. 2023, 13, 2507. [Google Scholar] [CrossRef] [PubMed]
  101. Schmid, D.; Korn, B. A Tripartite Concept of a Remote-Copilot Center for Commercial Single-Pilot Operations. In Proceedings of the AIAA Information Systems-AIAA Infotech@ Aerospace, Grapevine, TX, USA, 9–13 January 2017; p. 0064. [Google Scholar]
  102. Cardone, D.; Perpetuini, D.; Filippini, C.; Mancini, L.; Nocco, S.; Tritto, M.; Rinella, S.; Giacobbe, A.; Fallica, G.; Ricci, F.; et al. Classification of Drivers’ Mental Workload Levels: Comparison of Machine Learning Methods Based on ECG and Infrared Thermal Signals. Sensors 2022, 22, 7300. [Google Scholar] [CrossRef] [PubMed]
  103. Hughes, A.M.; Hancock, G.M.; Marlow, S.L.; Stowers, K.; Salas, E. Cardiac Measures of Cognitive Workload: A Meta-Analysis. Hum. Factors 2019, 61, 393–414. [Google Scholar] [CrossRef]
  104. Guastello, S.J. Cognitive Workload and Fatigue in Financial Decision Making; Springer: Berlin/Heidelberg, Germany, 2016; ISBN 4-431-55312-6. [Google Scholar]
  105. Beaty, R.E.; Benedek, M.; Silvia, P.J.; Schacter, D.L. Creative Cognition and Brain Network Dynamics. Trends Cogn. Sci. 2016, 20, 87–95. [Google Scholar] [CrossRef]
Figure 1. Illustration of the main topics presented in the survey, from the wearable sensors investigated for monitoring and acquiring physiological signals to the techniques for cognitive workload evaluation, including ML algorithms. The figure was created with BioRender.com (accessed on 4 March 2024) and uxwing.com (accessed on 4 March 2024).
Figure 2. Illustration of the principal physiological signals acquired for CW monitoring. The figure was created with BioRender.com (accessed on 4 March 2024).
Table 1. Main papers included in this section of the review after the literature review procedure.

Authors | Objective | Signals Acquired
Rupprecht et al. [74] | Combination of a dynamic projection system with visual detection of human location and gestures in a spacious work area. | Gesture recognition through a high-resolution RGB camera and the real-time object recognition algorithm YOLOv3.
Ciccarelli et al. [75] | Development of a system to avoid uncomfortable and unsafe postures. | Posture recognition through Intel RealSense technologies and the Cubemos Skeleton Tracking SDK.
Beggiato et al. [76] | Investigation of factors that influence comfort and discomfort in automated driving. | HR, EDA, pupil diameter, and eye blink rate.
Biswas and Prabhakar [77] | Investigation of the possibility of identifying drivers' cognitive load and immediate awareness of emerging road risks via saccadic intrusion. | Eye gaze and saccadic intrusion through the Tobii TX-2.
Li et al. [78] | Evaluation of human-computer interactions on an augmented visualization Primary Flight Display (PFD) compared with the traditional PFD. | Pupil Labs eye tracker to assess pupil position and dimensions.
Liang et al. [79] | Assessment of how the input method and display mode of the situation map impact the performance of Early Warning Aircraft (EWA) during reconnaissance tasks, considering varying levels of information complexity. | Tobii Glasses 2 for eye movement recording and HRV through a PPG device.
Khamaisi et al. [80] | Assessment of the user experience of workers in manufacturing sites. | HTC Vive Tracker suite, to evaluate human body angles; Empatica E4 wristband, to record HR and EDA; Zephyr Bioharness 3 thoracic band, to collect RR intervals; HTC Vive Pro Eye headset equipped with a Tobii eye tracking system.
Peruzzini et al. [81] | Development of a mixed reality setup where operators are digitized and monitored to evaluate both physical and cognitive ergonomics. | Siemens JACK to collect the users' postures and movements in the real space; Tobii Glasses 2 to collect the users' eye fixations; Zephyr Bioharness to collect HR, HRV, breathing rate (BR), acceleration (VMU), and posture.
Peruzzini et al. [82] | A procedure to employ digital technologies to enhance product-process design and analyse workers' behaviours. | A video camera to replicate the virtual environment and generate a digital replica of the workplace; an advanced eye tracking system (Tobii Glasses 2) to analyse the precise eye fixations of real users; a multi-parametric wearable sensor to monitor physiological parameters in real time.
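Several of the setups in Table 1 (e.g., the Zephyr Bioharness bands in [80,81]) record RR intervals, which are then reduced to time-domain HRV features before any workload analysis. The snippet below is a minimal, illustrative sketch of that feature-extraction step; the function name and the example RR series are hypothetical, and the features shown (SDNN, RMSSD, pNN50) are the standard time-domain HRV measures rather than the exact feature set of any cited study.

```python
import numpy as np

def hrv_features(rr_ms):
    """Compute common time-domain HRV features from a series of RR intervals (ms)."""
    rr = np.asarray(rr_ms, dtype=float)
    diffs = np.diff(rr)  # successive RR-interval differences
    return {
        "mean_hr_bpm": 60000.0 / rr.mean(),           # mean heart rate
        "sdnn_ms": rr.std(ddof=1),                    # overall variability
        "rmssd_ms": np.sqrt(np.mean(diffs ** 2)),     # short-term (vagal) variability
        "pnn50": 100.0 * np.mean(np.abs(diffs) > 50), # % of successive diffs > 50 ms
    }

# Hypothetical 8-beat RR series (ms), e.g., from a chest-band ECG
feats = hrv_features([812, 790, 845, 830, 805, 798, 860, 842])
```

In practice such features are computed over sliding windows (typically 1–5 min) and fed to the classifiers discussed later in the review.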
Table 2. Main papers included in this section of the overview after the completion of the literature review procedure.

Authors | Objective | Signals Acquired | AI
Zanetti et al. [90] | Building a cognitive workload monitoring (CWM) solution to assess drone operators' CW level during a simulated search and rescue mission. | EEG (Biosemi ActiveTwo system). | Random Forest.
Smith et al. [87] | Development of models for estimating gross and fine motor, and tactile, workload. | BioHarness to collect heart rate, respiration rate, and postural magnitude; Xsens MTw Awinda to measure participants' body pose; two Myo armbands, each equipped with 8-channel surface EMG and forearm inertial metrics. | Several ML algorithms (not specified).
Dell'Agnola et al. [88] | ML algorithm for real-time cognitive workload monitoring of drone rescue operators. | Respiration, ECG, PPG, and skin temperature. | A new weighted-learning method for Support Vector Machine (SVM).
Pongsakornsathien et al. [84] | Development of a Cognitive Human-Machine Interfaces and Interactions (CHMI2) framework. | ECG through the BH system. | Adaptive Neuro-Fuzzy Inference System (ANFIS).
Ciccarelli et al. [85] | Supervision of operators' tasks and execution of remedial measures to ensure social sustainability in the workplace. | Data acquired by IoT devices (i.e., chest band, wristband, smart glasses, inertial measurement units). | Machine learning algorithms, combined with subjective measures such as the STAI and NASA-TLX questionnaires.
Ciccarelli et al. [86] | A systematic review analysing the occupational risks of workers and possible solutions. | Wearable sensors to evaluate mental and cognitive workload, collecting physiological parameters such as heart rate, heart rate variability, respiratory rate (RR), electrodermal activity (EDA), electroencephalography (EEG), and pupillometry. | AI technologies such as machine learning (ML) and computer vision.
Viegas et al. [83] | A method to assess mental stress through facial expressions detected from video only. | Seventeen Action Units (AUs), from the upper to the lower face, extracted frame-wise. | Random Forest, LDA, Gaussian Naive Bayes, and Decision Tree classifiers.
Pongsakornsathien et al. [91] | Discussion of advances in sensor networks for aerospace cyber-physical systems, focusing on Cognitive Human-Machine Interfaces and Interactions implementations. | Eye fixations, blink rate, saccades, pupil diameter, visual entropy, and dwell time; neuroimaging technologies. | Neural fuzzy systems and networks, artificial neural networks, convolutional neural networks.
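The ML pipelines summarized in Table 2 share a common pattern: multimodal physiological features are standardized and passed to a supervised classifier that separates low from high workload. The sketch below illustrates that pattern on synthetic data; the feature values, class means, and the simple nearest-centroid classifier are all illustrative stand-ins (the cited studies used Random Forest, SVM, or neuro-fuzzy models), not a reproduction of any study's method.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200  # synthetic samples per workload class

# Feature vectors: [mean HR (bpm), RMSSD (ms), EDA level (uS), breathing rate (breaths/min)].
# Class means loosely follow the trends typically reported (higher HR/EDA/BR, lower HRV
# under high workload) and are purely illustrative.
low  = rng.normal([70, 40, 2.0, 14], [5, 6, 0.4, 2], size=(n, 4))
high = rng.normal([88, 24, 3.6, 19], [5, 6, 0.4, 2], size=(n, 4))
X = np.vstack([low, high])
y = np.concatenate([np.zeros(n), np.ones(n)])  # 0 = low CW, 1 = high CW

# Random train/test split
idx = rng.permutation(len(y))
tr, te = idx[:300], idx[300:]

# Standardize with training statistics only, then classify by nearest class centroid
mu, sd = X[tr].mean(axis=0), X[tr].std(axis=0)
Z = (X - mu) / sd
c0 = Z[tr][y[tr] == 0].mean(axis=0)
c1 = Z[tr][y[tr] == 1].mean(axis=0)
pred = (np.linalg.norm(Z[te] - c1, axis=1)
        < np.linalg.norm(Z[te] - c0, axis=1)).astype(float)
accuracy = (pred == y[te]).mean()
```

Real deployments replace the toy classifier with the models listed in the table and validate across subjects, where inter-individual variability makes the problem considerably harder than this separable example suggests.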