Review

Movement Sensing Opportunities for Monitoring Dynamic Cognitive States

by Tad T. Brunyé 1,2,*, James McIntyre 2,3, Gregory I. Hughes 1 and Eric L. Miller 2,3

1 U.S. Army DEVCOM Soldier Center, Natick, MA 01760, USA
2 Center for Applied Brain and Cognitive Sciences, Tufts University, Medford, MA 02155, USA
3 Department of Electrical and Computer Engineering, Tufts University, Medford, MA 02155, USA
* Author to whom correspondence should be addressed.
Sensors 2024, 24(23), 7530; https://doi.org/10.3390/s24237530
Submission received: 23 September 2024 / Revised: 7 November 2024 / Accepted: 23 November 2024 / Published: 25 November 2024
(This article belongs to the Special Issue Sensors for Human Movement Recognition and Analysis)

Abstract

In occupational domains such as sports, healthcare, driving, and the military, both individuals and small groups are expected to perform challenging tasks under adverse conditions that induce transient cognitive states such as stress, workload, and uncertainty. Wearable and standoff 6DOF sensing technologies are advancing rapidly, including increasingly miniaturized yet robust inertial measurement units (IMUs) and portable marker-less infrared optical motion tracking. These sensing technologies may offer opportunities to track overt physical behavior and classify cognitive states relevant to human performance in diverse human–machine domains. We describe progress in research attempting to distinguish cognitive states by tracking movement behavior in both individuals and small groups, examining potential applications in sports, healthcare, driving, and the military. In the context of military training and operations, there are no generally accepted methods for classifying transient mental states such as uncertainty from movement-related data, despite the importance of such states for shaping decision-making and behavior. To fill this gap, an example data set is presented, including optical motion capture of rifle trajectories during a dynamic marksmanship task that elicits variable uncertainty; using machine learning, we demonstrate that features of weapon trajectories capturing the complexity of motion are valuable for classifying low versus high uncertainty states. We argue that leveraging metrics of human movement behavior reveals opportunities to complement relatively costly and less portable neurophysiological sensing technologies and enables domain-specific human–machine interfaces to support a wide range of cognitive functions.

1. Introduction

Whether in the form of subtle finger movements or whole-body coordinated activity, humans are constantly moving. Research suggests that movement results from, indicates, and guides perception and thought, forming the basis of cognitive science theories of embodied cognition and perception-action feedback loops [1,2,3,4]. The notion that subtle alterations in movement behavior can indicate cognitive states is readily exemplified through everyday experience: a shaking knee can indicate stress and anxiety, erratic head movements can indicate confusion or disorientation, increased postural stability can indicate engagement in a task, and anterior trunk lean can indicate mental workload [5,6,7,8,9]. In other words, humans embody the inherent mental demands of a context or task and produce measurable behavioral traces of this embodiment that can be leveraged to peer inside the “black box” of cognition.
Prior research examining behavioral or neurophysiological correlates of cognitive states is largely restricted to laboratory contexts with highly controlled environments and tasks, leveraging diverse multi-modal sensors. For example, to assess mental workload, researchers have variably measured pupil diameter and patterns of eye movements using infrared eye tracking [10,11,12], heart rate and heart rate variability using physiological monitoring [13,14,15], prefrontal cortical brain activity using functional near-infrared spectroscopy [16,17], frequency-domain oscillatory brain activity using electroencephalography (EEG) [18,19], acoustic modulations of speech [20], and/or subjective assessments such as the NASA task load index (NASA-TLX) [21]. While each of these measures offers promise within specific contexts, they are also subject to several inherent limitations. Indeed, neurophysiological sensing is largely restricted to non-ambulatory contexts due to low signal-to-noise ratios and motion artifact, speech analysis is restricted to tasks involving spoken language, and subjective instruments often necessitate disrupting task engagement to probe self-reported states. Movement sensing offers an unintrusive supplement or alternative to traditional neurophysiological sensors used in the cognitive and brain sciences, with emerging research suggesting that it can be leveraged for cognitive state estimation.

2. Movement Sensing and Cognitive State Estimation

Human movement sensing is typically performed with one of two primary modalities: optical motion capture (OMC) or inertial measurement units (IMUs). OMC uses a synchronized array of infrared cameras to track sequences of Cartesian coordinates as markers (e.g., retroreflective marker balls) move through a limited measurement volume. In systems using markers, passive or active markers are typically placed over anatomical landmarks (e.g., knee, sternum, clavicle) or on tools (e.g., on a rifle). Based on sequences of translational and rotational marker movement, an image acquisition system processes and visualizes movement patterns of tracked objects and derives quantitative movement metrics of kinematics (e.g., velocity, acceleration) and/or kinetics (e.g., force, power) [22]. OMC offers very high precision, typically below 100–200 µm, depending upon camera capabilities, tracking volume, calibration procedures, and the position of trackable objects [23,24]. However, OMC hardware and software are costly and resource intensive for training, operation, and maintenance [25]. OMC systems also track over a restricted volume of space, reducing applicability to field environments.
In contrast, IMU-based sensors use an array of gyroscopes, accelerometers, and/or magnetometers to track an object’s acceleration, velocity, rotation (roll, pitch, yaw), and/or heading relative to a global reference frame [26]. The gyroscope provides information regarding angular rate, the accelerometer provides information regarding force and acceleration, and the magnetometer (when equipped) provides information regarding the local magnetic field. Many modern handheld and wearable devices, including smartphones, tablets, and fitness trackers, use integrated IMUs, and these sensors are becoming increasingly miniaturized while maintaining reasonable accuracy and precision [27]. While IMUs cannot achieve the precision of OMC, particularly over extended periods of time (i.e., as drift accumulates), they present sensing opportunities with relatively unintrusive, portable, low-cost, and flexible technologies that can be used in several applications: seated or ambulatory, stationary or moving, and with individuals or groups [25,28,29,30].
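To make the drift limitation concrete, the following minimal sketch (our own illustration, not drawn from any cited system) double-integrates a synthetic accelerometer signal containing a small constant bias; the sampling rate and bias magnitude are assumptions chosen only for demonstration.

```python
import numpy as np

# Minimal illustration of IMU drift: a stationary sensor with a small constant
# accelerometer bias produces a position estimate whose error grows over time.
fs = 100.0                                   # assumed sample rate (Hz)
t = np.arange(0.0, 10.0, 1.0 / fs)
true_accel = np.zeros_like(t)                # the sensor is actually stationary
bias = 0.02                                  # assumed constant bias (m/s^2)
noise = np.random.normal(0.0, 0.05, size=t.shape)
measured = true_accel + bias + noise

velocity = np.cumsum(measured) / fs          # first integration -> velocity estimate
position = np.cumsum(velocity) / fs          # second integration -> position estimate

# The bias alone contributes roughly 0.5 * bias * t^2 of position error (~1 m at 10 s).
print(f"Estimated position error after 10 s: {position[-1]:.2f} m")
```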
We consider four domains in which research may benefit from movement sensing, whether via OMC or IMU, to provide insights into transient cognitive states: sports, healthcare, driving and navigation, and the military. Thereafter, we describe a new dataset and provide a detailed demonstration of how OMC-based movement sensing may provide insights into dynamic mental states of uncertainty while military personnel move a weapon during a simulated marksmanship task.

2.1. Sports

Movement sensing is increasingly popular in sports medicine, with an emphasis on tracking and optimizing training trajectories and providing a basis for data-driven feedback between players and coaches [31,32]. While most work examining movement sensing in athletes tends to emphasize injury and fatigue prevention [33,34,35], a few recent studies have suggested that movement sensing can be used to predict athlete intentions (a high-level cognitive function involving goal-setting, decision-making, and planning) immediately prior to overt movement.
In one study, participants performed a series of trials where they would stand and then transition to run in one of eight directions (e.g., straight ahead, to the right, to the left) based on a visual cue [36]. Participants’ movement was tracked using a combination of OMC, IMUs, force plates, and electromyography (EMG) to capture muscle activation and movement kinetics and kinematics. Using a machine learning algorithm (XGBoost) with ensemble learning and gradient boosting, the authors were able to identify participants’ intent to move, and in which direction, approximately 100 ms before the onset of detectable velocity change at the center of mass (i.e., absolute kinematic start). In other words, the algorithm can detect two mental states of interest: preparation to execute a movement, and an intent to move in a particular direction. The authors suggest that kinetics measured with OMC provided the most valuable contribution to the algorithm, perhaps given its relative precision.
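As a rough sketch of how a gradient-boosted classifier of this kind is trained, the snippet below fits an XGBoost model to windowed kinematic features; the feature matrix, labels, and hyperparameters are placeholders of our own, not the cited study’s actual data or pipeline.

```python
import numpy as np
from xgboost import XGBClassifier
from sklearn.model_selection import train_test_split

# Placeholder data: each row summarizes a short pre-movement window with
# kinematic/kinetic features (e.g., center-of-mass velocity, ground reaction
# forces); labels encode one of eight intended movement directions.
rng = np.random.default_rng(0)
X = rng.normal(size=(800, 24))        # 800 windows x 24 features (synthetic)
y = rng.integers(0, 8, size=800)      # 8 movement directions (synthetic)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
clf = XGBClassifier(n_estimators=200, max_depth=4, learning_rate=0.1)
clf.fit(X_train, y_train)
print("Held-out accuracy:", clf.score(X_test, y_test))
```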
In sports, the ability to detect movement intent can provide insight into when an individual recognizes and successfully interprets game dynamics and uses them to guide their own behavior; it can also be valuable for predicting offensive and defensive plays in team sports contexts.

2.2. Healthcare

In healthcare, movement sensing has been used to infer mental states of both physicians and patients. With physicians, tracking the movement of surgical tools during surgery has been associated with skill level, with relatively novice and uncertain surgeons (uncertainty being a mental state of limited knowledge or information that makes it difficult to predict outcomes or make decisions) showing different movement patterns relative to expert surgeons [37]. For example, Cao and colleagues tracked laparoscopic tool use during four surgical tasks (dissection, suturing, knot tying, suture cutting) and found that distinct movement features, including velocity, were related to surgeons’ uncertainty of position and orientation within the body cavity [38]. Furthermore, when pathologists examined tissue biopsies (i.e., zooming and panning), several movement-related features, including movement entropy, were related to diagnostic accuracy in detecting cancerous lesions, suggesting that physicians with relatively high certainty and confidence show relatively predictable (i.e., less erratic) movements [39]. In a study with surgeons performing arthroscopic surgery, machine learning classifiers were able to identify confusion states with over 94% accuracy by examining head and eye movements alone [40]. In medical training, algorithms resulting from this work could be used to automate the detection of uncertainty states and trigger expert remediation, AI-enabled assistance, and recommender systems.
With patients, tracking hand movements during cognitive tasks can differentiate those with versus without autism spectrum disorder (ASD) or major depressive disorder [41,42], suggesting the value of limb movement tracking for detecting psychological conditions. In age-related clinical disorders, insole balance sensors have been used to predict mild cognitive impairment based on balance-related features [43], and features of gait measured using OMC can predict dementia onset [44] and mild cognitive impairment [45]. Head movement tracking has also been shown to be sensitive to chronically high mental workload levels that might be associated with changes in well-being and health [46].

2.3. Driving and Navigation

Identifying reliable markers of driver inattention, disorientation, or workload could prove valuable for human–machine integration with semi-autonomous driving systems. While much research has considered the value of oculomotor metrics such as eye gaze and blink frequency and duration, some work has considered the relatively gross movement dynamics of drivers. For example, using sensors affixed to steering wheels in driving simulators, researchers have found that the rotational entropy of steering wheel movements is indicative of various mental states, including workload and inattention [47,48].
Similarly, sensing the entropy of head movements during navigation tasks can be indicative of disorientation and uncertainty [7]. Specifically, when yaw-based head movements are relatively unpredictable and erratic, navigators are more likely to be experiencing uncertainty regarding direction of travel. Notably, entropy is a recurring feature in this earlier work, suggesting that the inherent unpredictability and disorder of movement behavior can indicate mental states of confusion or uncertainty. Interestingly, when people are placed under conditions of stress and uncertainty, neuroscientists have found increased entropy in both functional brain activity and heart rate dynamics [49,50,51,52]; it could be the case that entropy-related movement features reflect these neural dynamics.
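As an illustration of how such entropy-style features can be computed, the sketch below estimates the sample entropy of a yaw time series; this is our own minimal implementation under assumed parameters, not the specific measure used in the cited head-movement or steering-wheel studies.

```python
import numpy as np

def sample_entropy(x, m=2, r_frac=0.2):
    """Sample entropy of a 1-D series: higher values indicate less predictable,
    more erratic movement. m is the template length; tolerance is r_frac * SD."""
    x = np.asarray(x, dtype=float)
    r = r_frac * np.std(x)

    def match_count(length):
        # Count ordered pairs of distinct templates within Chebyshev distance r.
        templates = np.array([x[i:i + length] for i in range(len(x) - length)])
        dists = np.max(np.abs(templates[:, None, :] - templates[None, :, :]), axis=2)
        return np.sum(dists <= r) - len(templates)   # exclude self-matches

    b, a = match_count(m), match_count(m + 1)
    return -np.log(a / b) if a > 0 and b > 0 else np.inf

# Hypothetical yaw traces: an erratic trace yields higher entropy than a smooth one.
t = np.linspace(0, 10, 600)
smooth_yaw = np.sin(0.5 * t)
erratic_yaw = smooth_yaw + np.random.normal(0, 0.5, size=t.shape)
print(sample_entropy(smooth_yaw), sample_entropy(erratic_yaw))
```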

2.4. Military

In military contexts, dynamic coordinated body movements are fundamental to many common tasks. Whether maintaining a vehicle, hiking a mountain, entering a building, or setting up a roadblock, military personnel must coordinate their body and equipment while variably experiencing states of stress, uncertainty, and workload.
In the case of rifle marksmanship, military personnel constantly observe their environment, orient towards potential threats, make decisions regarding the posture and intent of potential threats, and then act accordingly [53,54]. While research in the sports, healthcare, and driving and navigation domains suggests that movement dynamics might indeed indicate mental states, to our knowledge, no studies have examined whether rifle movement dynamics during marksmanship tasks might be indicative of unfolding mental states of uncertainty, workload, or stress.
To fill this gap, we present data collected from military personnel engaged in a simulated marksmanship task while we used OMC sensors to track rifle position and compute features of movement dynamics. In the next section, we describe the design, analysis, and results of this demonstration.

3. Classifying Uncertainty States via Rifle Movement Dynamics

As part of a larger study previously reported [55,56], we used a marksmanship simulation system to collect data from military personnel engaged in a shoot/don’t-shoot decision-making task. For these analyses, we focused on 6DOF OMC data of rifle movements collected during discrete task trials to assess whether time series patterns of movement data might reliably indicate self-reported uncertainty states.

3.1. Participants, Design and Procedure

A total of 83 male military personnel (active-duty U.S. Army soldiers; mean age = 23.1 years) voluntarily provided written informed consent to participate in a study approved by the institutional review boards at Tufts University and the U.S. Army Combat Capabilities Command Armaments Center. During a series of laboratory sessions, participants completed several tasks measuring cognitive, affective, physiological, and biochemical responses to acute stress. To measure cognitive responses, tasks included spatial orienting, recognition memory, and simulated marksmanship.
For the purpose of this analysis, we focus only on the simulated marksmanship task, the details of which are described below. This task involved two phases: learning and testing. During learning, participants learned to distinguish a friendly versus an enemy camouflage pattern to an accuracy criterion of 80%. During testing, these camouflage patterns were placed on simulated avatars with systematically varied clarity, similar to a traditional perceptual decision-making task [57,58]. Specifically, camouflage visual clarity was varied across six conditions that overlaid one pattern on the other to increase confusability: 100% pattern A with 0% pattern B, 60–65% pattern A with 35–40% pattern B, and 51% pattern A with 49% pattern B, plus the reverse of each. In this manner, any given avatar was relatively easy or difficult (and in some cases impossible) to distinguish as friendly versus enemy. This manipulation was intended to ensure that participants would experience variable subjective uncertainty levels over the course of trials.
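The blending manipulation can be thought of as simple linear mixing of the two camouflage textures; the sketch below is only an illustration of the mixing proportions under placeholder textures, not the study’s rendering code.

```python
import numpy as np

def blend_patterns(pattern_a, pattern_b, weight_a):
    """Linearly mix two camouflage textures: weight_a = 1.0 is pure pattern A,
    while 0.51 produces the most confusable condition described above."""
    return weight_a * pattern_a + (1.0 - weight_a) * pattern_b

# Placeholder textures: two 64x64 grayscale camouflage patterns.
rng = np.random.default_rng(0)
pattern_a = rng.random((64, 64))
pattern_b = rng.random((64, 64))

for w in (1.00, 0.625, 0.51):   # representative mixing proportions from the task
    stimulus = blend_patterns(pattern_a, pattern_b, w)
    print(f"weight_a = {w:.3f}, mean intensity = {stimulus.mean():.3f}")
```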
The camouflaged avatars were used in a virtual reality scenario (built using the Unity3d (version 2018.2.6f1) engine [59]) that simulated an avatar walking towards the participant in the virtual world. Over the course of 15 trials, the participant had to decide whether the avatar was friendly or enemy (based solely on its camouflage pattern) and therefore whether to allow the avatar to pass (if friendly) or to shoot it using a gaming rifle (if enemy). At the outset of each trial, the participant was instructed to return the rifle to the downward-facing (i.e., low-ready) position; if they decided to engage an enemy, they would shoulder the rifle, aim at the avatar, and pull the trigger. If they decided to let a friendly pass, they would leave the rifle in the downward-facing position and push a small button on the barrel of the rifle. Critically, immediately after each shooting decision, participants rated their certainty on a scale from 1 (very low) to 6 (very high). The task was performed under stress, with participants receiving a mild electric shock to the torso when they made an incorrect decision (i.e., shooting a friendly, or missing an enemy); the shock was administered only after they made their certainty rating.
The simulated marksmanship task was performed two to three times by each participant (on separate days) in a large-scale projection screen-based cave automatic virtual environment (CAVE). The rifle was equipped with an array of retroreflective marker balls that were registered and tracked by a TRACKPACK/E infrared motion tracking system (Advanced Realtime Tracking, GmbH, Weilheim, Germany). This afforded sensing of 6DOF (xyz, roll, pitch, yaw) rifle movement at 60 Hz, recorded using the DTrack (version 3.1.1) software (also by Advanced Realtime Tracking).

3.2. Data Processing

Data were collected from a total of 3735 trials, with the rifle being raised and a trigger pull occurring on approximately half (1875) of all trials; these trials were carried forward for pre-processing of movement trajectories. Note that we did not include in the analysis trials where the participant let the avatar pass, as there was no appreciable movement of the rifle in such cases.
For the 1875 trials of interest, time series data from the OMC sensors were time-locked to the onset of a trial, when an avatar first became perceptible in the virtual scene, and the end of a trial, when the trigger was pulled to fire upon the avatar. Upon initial inspection of our data, we found that when participants moved their rifle from the low-ready to a shouldered position, movement was primarily within the translational degrees of freedom (i.e., xyz axes; predominately upward), with less movement in the rotational degrees of freedom (roll, pitch, yaw). Thus, analyses are restricted to the former. Example rifle movement trajectories are depicted in Figure 1.
Pre-processing resulted in the removal of 250 trials due to extremely brief (i.e., <5 s) or extremely long (i.e., >1 min) trajectories typically due to either tracking error (the former) or scenario software errors (the latter).
To derive features from the remaining 1625 trajectories, we leveraged two Python packages: the Time Series Feature Extraction Library (TSFEL [60]) and the Nonlinear Analysis Core’s NONANLibrary [61]. Together, the TSFEL and NONANLibrary packages allow for the calculation of features in the temporal, probability, spectral, divergence, and fractal domains. A full list of features can be found at the TSFEL and NONANLibrary GitHub pages. A total of 385 features for velocity (V) and each of the X, Y, and Z translational axes were computed, resulting in 1540 features per trajectory. Relative to the participant’s perspective, the Y axis reflects superior–inferior (up/down) movement of the rifle, the X axis reflects medial–lateral (left/right) movement of the weapon, and the Z axis reflects anterior–posterior (forward/backward) movement of the weapon.
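To make the feature-extraction step concrete, the sketch below computes a velocity-magnitude series from a single (placeholder) trajectory and passes it to TSFEL; the exact feature configuration used in the study, and the complementary NONANLibrary features, are not reproduced here.

```python
import numpy as np
import tsfel

fs = 60.0  # OMC sampling rate (Hz), per the description above

# Placeholder single-trial trajectory: N samples of translational position (x, y, z).
trial_xyz = np.random.normal(size=(600, 3))
velocity = np.gradient(trial_xyz, 1.0 / fs, axis=0)     # per-axis velocity
velocity_magnitude = np.linalg.norm(velocity, axis=1)   # speed time series (V)

# Extract TSFEL's default temporal, statistical, and spectral features for one signal;
# the study combined these with nonlinear features from NONANLibrary.
cfg = tsfel.get_features_by_domain()
features = tsfel.time_series_features_extractor(cfg, velocity_magnitude, fs=fs)
print(features.shape)  # one row of named features for this trial
```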
Data analysis was intended to assess whether the movement dynamic features could classify low versus high certainty states. A given trial for a given participant was deemed low certainty if the score was below the median score across all trials for that individual, or high certainty if the score was at or above the median. This process resulted in a total of 581 trials included in the low certainty class, and 1044 trials included in the high certainty class.
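The per-participant median split can be expressed in a few lines of pandas; the toy table below uses hypothetical participant IDs and certainty ratings solely to illustrate the labeling rule.

```python
import numpy as np
import pandas as pd

# Hypothetical trial table: one row per shooting trial, with the participant ID
# and the 1-6 certainty rating given immediately after each decision.
trials = pd.DataFrame({
    "participant": [1, 1, 1, 2, 2, 2],
    "certainty":   [2, 5, 6, 3, 3, 4],
})

# Label each trial relative to that participant's own median certainty:
# below the median -> low certainty, at or above the median -> high certainty.
medians = trials.groupby("participant")["certainty"].transform("median")
trials["certainty_class"] = np.where(trials["certainty"] < medians, "low", "high")
print(trials)
```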

3.3. Data Analysis

To evaluate whether the extracted features can predict (un)certainty, we conducted a five-fold cross-validation, utilizing 80% of the data for training and 20% for testing in each fold. Feature selection was performed on the training set using the Terminating Random Experiments Selector (T-Rex Selector) [62]. The T-Rex Selector is notable for its use of dummy variables to control the false discovery rate (FDR), ensuring that the proportion of falsely identified variables among all selected variables meets a user-defined criterion, which was set at 0.05 for all analyses in this paper.
The selected features from the training set were then used to train a scikit-learn (version 1.4.0) pipeline, which included a robust scaler and a logistic regression model. Predictions were made using the selected features and the trained pipeline. The predictive models achieved an average F1 score of 0.77, indicating that the rifle trajectory features are effective in predicting shot certainty (accuracy 0.66, misclassification rate 0.34, precision 0.67, recall 0.92, MSE 0.34), as detailed in Table 1.
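The cross-validated pipeline can be sketched as follows; the feature-selection step is a placeholder standing in for the T-Rex Selector (FDR-controlled at 0.05), and the feature matrix is synthetic, while the RobustScaler, logistic regression, and F1 scoring mirror the description above.

```python
import numpy as np
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import RobustScaler
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold
from sklearn.metrics import f1_score

def select_features(X_train, y_train):
    """Placeholder for the T-Rex Selector step (FDR-controlled feature selection
    in the study); here it simply returns all column indices."""
    return np.arange(X_train.shape[1])

# Synthetic stand-in: trials x 1540 trajectory features; 0 = low, 1 = high certainty.
rng = np.random.default_rng(0)
X = rng.normal(size=(1625, 1540))
y = rng.integers(0, 2, size=1625)

f1_scores = []
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
for train_idx, test_idx in cv.split(X, y):
    cols = select_features(X[train_idx], y[train_idx])
    model = Pipeline([("scale", RobustScaler()),
                      ("clf", LogisticRegression(max_iter=1000))])
    model.fit(X[train_idx][:, cols], y[train_idx])
    f1_scores.append(f1_score(y[test_idx], model.predict(X[test_idx][:, cols])))

print("Mean F1 across folds:", np.mean(f1_scores))
```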

3.4. Results

The most frequently selected features across the five iterations are detailed in Table 2. Notably, selected features, including Lyapunov exponents [63] and the slope of the power spectrum of the velocity magnitude, were related to the predictability or regularity of the trajectories in space. Two features were selected in most of the five iterations. First, the Lyapunov exponent of the velocity magnitude was selected in all five iterations. This feature had a negative weight in each model iteration, suggesting that decreased regularity (i.e., a higher Lyapunov exponent) in trajectory velocity is associated with lower certainty. Second, the Lyapunov exponent of the lateral (X) position was selected in four out of five iterations and also had a negative weight, indicating that decreased predictability in the participant’s medial–lateral rifle movement (X axis) is likewise related to lower certainty.
To test the statistical difference in the two most frequently selected features, we ran two-sample Kolmogorov–Smirnov tests using SciPy’s (version 1.10.1) kstest method. Both features achieved a significant p-value (p < 1 × 10−5), suggesting that the samples come from different distributions (Figure 2).
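The distributional comparison can be reproduced with SciPy’s two-sample Kolmogorov–Smirnov test; the arrays below are placeholders standing in for the per-trial values of a selected feature (e.g., the Lyapunov exponent of velocity magnitude) in each certainty class.

```python
import numpy as np
from scipy import stats

# Placeholder feature values for trials in the low- and high-certainty classes
# (class sizes match those reported above; the values themselves are synthetic).
rng = np.random.default_rng(1)
low_certainty = rng.normal(loc=0.6, scale=0.2, size=581)
high_certainty = rng.normal(loc=0.4, scale=0.2, size=1044)

statistic, p_value = stats.ks_2samp(low_certainty, high_certainty)
print(f"KS statistic = {statistic:.3f}, p = {p_value:.2e}")
```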

4. Discussion

Analysis of this data set demonstrated that features derived from rifle movement dynamics, particularly those related to the trajectories’ spatial predictability/regularity, can effectively distinguish between low and high uncertainty states in a simulated marksmanship task. Specifically, the Lyapunov exponents of both velocity magnitude and lateral (X-axis) movement consistently emerged as key predictors, with higher variability (i.e., lower predictability) in these movement patterns associated with lower certainty. This finding aligns with previous research in healthcare [38,39], navigation [7], and driving [47,48] contexts, which also suggests that increased movement irregularity or unpredictability (typically measured via entropy) can reflect heightened cognitive demand or uncertainty.
The consistent selection of features such as the Lyapunov exponent and velocity spectral slope highlights the utility of these and related movement characteristics in cognitive state estimation. The fact that movement dynamics could be predictive of cognitive uncertainty states, particularly in a highly stressful and dynamic task like marksmanship, supports the notion that movement sensing with OMC and/or IMUs can be an unobtrusive, real-time indicator of cognitive processes. Importantly, this extends previous research in domains like healthcare and navigation by demonstrating that rifle movement, specifically in military tasks, is a valuable proxy for underlying cognitive states.
These results suggest several practical implications. In military contexts, real-time detection of uncertainty based on movement features could be leveraged to provide adaptive feedback to trainees, optimize training protocols, quantify the transition from novice to expert, and reduce decision-making errors. There are also several promising civilian applications of our results. In law enforcement training, movement tracking for mental state classification could improve decision-making under stress by providing real-time feedback on cognitive states like uncertainty, enabling tailored interventions that enhance officers’ confidence and resilience [64,65]. In sports training, movement tracking can optimize performance and injury prevention by revealing cognitive states related to focus, anticipation, and fatigue [66]. Within healthcare, this approach can aid in medical training by identifying moments of uncertainty in procedures, for example during surgical training [67]. For driver training and navigation, movement sensing could enhance safety by detecting fatigue or disorientation, enabling adaptive guidance to reduce driver stress [68]. Finally, in workplace safety and efficiency, tracking subtle movement dynamics in high-risk jobs can inform training to manage workload and fatigue, improving ergonomics and reducing injury risks [69,70]. These applications demonstrate how movement sensing can support real-time, adaptive feedback across diverse high-demand environments. Across domains, classifying uncertainty states from movement behavior could also assist in quantifying the progression from novice to expert during training.
Future research should explore the potential for integrating movement-based cognitive state estimation with other physiological or environmental sensing modalities, as well as testing these methods in real-world training and operational contexts. Sensor fusion could improve the accuracy and precision of uncertainty state classification; however, it is compelling that we can achieve moderate-to-high model performance with a single sensor suitable for resource-constrained settings. Future work should also explore whether there are unique movement signatures related to similar but dissociable cognitive states such as cognitive workload, uncertainty, and acute stress. More broadly, our results suggest that human movement dynamics may prove valuable for classifying a wide range of neurocognitive states accompanying diverse work-related movements; while we focused on healthy, neurotypical participants, there are also potential applications for clinical surveillance and diagnosis.

5. Conclusions

In conclusion, we demonstrate that the movement dynamics characterizing weapon trajectories offer a promising avenue for estimating cognitive states such as uncertainty in complex, high-stress tasks. The findings pave the way for further exploration into how subtle changes in movement behavior can reveal cognitive states, offering new opportunities for sensing and cognitive state monitoring in both research and applied settings.

Author Contributions

Conceptualization, T.T.B.; methodology, T.T.B.; formal analysis, J.M. and E.L.M.; data curation, T.T.B.; writing—original draft preparation, T.T.B.; writing—review and editing, T.T.B., J.M., G.I.H. and E.L.M.; visualization, J.M.; supervision, E.L.M.; funding acquisition, T.T.B. and G.I.H. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the U.S. Army DEVCOM Soldier Center, grant number W911-QY-19-R-0003 awarded to Tufts University. The views expressed in this article are solely those of the authors and do not reflect the official policies or positions of the Department of Army, the Department of Defense, or any other department or agency of the U.S. government.

Institutional Review Board Statement

This study was conducted in accordance with the Declaration of Helsinki, and approved by the Institutional Review Board of the U.S. Army Combat Capabilities Command Armaments Center (protocol code 18-007, approved 8 August 2018) and Tufts University (protocol code 1808016, approved 8 September 2019).

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

The original data presented in the study are openly available at the Harvard Dataverse, persistent link: https://doi.org/10.7910/DVN/WZEHSR accessed on 23 September 2024.

Acknowledgments

The authors thank the soldier participants from the 82nd and 101st Airborne Divisions of the U.S. Army, as well as the soldiers attached to the Human Research Volunteer program at the U.S. Army DEVCOM Soldier Center, who volunteered their time to participate in this study.

Conflicts of Interest

The authors declare no conflicts of interest. Staff from the funding agency (T.T.B. and G.I.H.) played a role in the design of the study; the collection, analyses, and interpretation of data; the writing of the manuscript; and the decision to publish the results.

References

  1. Barsalou, L.W. Perceptual Symbol Systems. Behav. Brain Sci. 1999, 22, 577–609; discussion 610–660. [Google Scholar] [CrossRef] [PubMed]
  2. Barsalou, L.W. Grounded Cognition. Annu. Rev. Psychol. 2008, 59, 617–645. [Google Scholar] [CrossRef] [PubMed]
  3. Varela, F.J.; Thompson, E.; Rosch, E. The Embodied Mind, Revised Edition: Cognitive Science and Human Experience; MIT Press: Cambridge, MA, USA, 2017; ISBN 978-0-262-33550-8. [Google Scholar]
  4. Warren, W.H. The Perception-Action Coupling. In Sensory-Motor Organizations and Development in Infancy and Early Childhood; NATO ASI Series; Springer: Berlin/Heidelberg, Germany, 1990; Volume 56, pp. 23–37. ISBN 978-94-009-2071-2. [Google Scholar]
  5. Balaban, C.D.; Cohn, J.; Redfern, M.S.; Prinkey, J.; Stripling, R.; Hoffer, M. Postural Control as a Probe for Cognitive State: Exploiting Human Information Processing to Enhance Performance. Int. J. Hum.–Comput. Interact. 2004, 17, 275–286. [Google Scholar] [CrossRef]
  6. Ballenghein, U.; Megalakaki, O.; Baccino, T. Cognitive Engagement in Emotional Text Reading: Concurrent Recordings of Eye Movements and Head Motion. Cogn. Emot. 2019, 33, 1448–1460. [Google Scholar] [CrossRef]
  7. Brunyé, T.T.; Haga, Z.D.; Houck, L.A.; Taylor, H.A. You Look Lost: Understanding Uncertainty and Representational Flexibility in Navigation. In Representations in Mind and World: Essays Inspired by Barbara Tversky; Zacks, J.M., Taylor, H.A., Eds.; Routledge: New York, NY, USA, 2017; pp. 42–56. ISBN 978-1-315-16978-1. [Google Scholar]
  8. Qiu, J.; Helbig, R. Body Posture as an Indicator of Workload in Mental Work. Hum. Factors 2012, 54, 626–635. [Google Scholar] [CrossRef] [PubMed]
  9. Witt, P.L.; Brown, K.C.; Roberts, J.B.; Weisel, J.; Sawyer, C.R.; Behnke, R.R. Somatic Anxiety Patterns Before, During, and After Giving a Public Speech. South. Commun. J. 2006, 71, 87–100. [Google Scholar] [CrossRef]
  10. Kaczorowska, M.; Karczmarek, P.; Plechawska-Wójcik, M.; Tokovarov, M. On the Improvement of Eye Tracking-Based Cognitive Workload Estimation Using Aggregation Functions. Sensors 2021, 21, 4542. [Google Scholar] [CrossRef]
  11. Peißl, S.; Wickens, C.D.; Baruah, R. Eye-Tracking Measures in Aviation: A Selective Literature Review. Int. J. Aerosp. Psychol. 2018, 28, 98–112. [Google Scholar] [CrossRef]
  12. Tolvanen, O.; Elomaa, A.-P.; Itkonen, M.; Vrzakova, H.; Bednarik, R.; Huotarinen, A. Eye-Tracking Indicators of Workload in Surgery: A Systematic Review. J. Investig. Surg. 2022, 35, 1340–1349. [Google Scholar] [CrossRef]
  13. Cardone, D.; Perpetuini, D.; Filippini, C.; Mancini, L.; Nocco, S.; Tritto, M.; Rinella, S.; Giacobbe, A.; Fallica, G.; Ricci, F.; et al. Classification of Drivers’ Mental Workload Levels: Comparison of Machine Learning Methods Based on ECG and Infrared Thermal Signals. Sensors 2022, 22, 7300. [Google Scholar] [CrossRef]
  14. Charles, R.L.; Nixon, J. Measuring Mental Workload Using Physiological Measures: A Systematic Review. Appl. Ergon. 2019, 74, 221–232. [Google Scholar] [CrossRef] [PubMed]
  15. Hoover, A.; Singh, A.; Fishel-Brown, S.; Muth, E. Real-Time Detection of Workload Changes Using Heart Rate Variability. Biomed. Signal Process. Control 2012, 7, 333–341. [Google Scholar] [CrossRef]
  16. Cao, J.; Garro, E.M.; Zhao, Y. EEG/fNIRS Based Workload Classification Using Functional Brain Connectivity and Machine Learning. Sensors 2022, 22, 7623. [Google Scholar] [CrossRef]
  17. Herff, C.; Heger, D.; Fortmann, O.; Hennrich, J.; Putze, F.; Schultz, T. Mental Workload during N-Back Task—Quantified in the Prefrontal Cortex Using fNIRS. Front. Hum. Neurosci. 2014, 7, 935. [Google Scholar] [CrossRef] [PubMed]
  18. Berka, C.; Levendowski, D.J.; Lumicao, M.N.; Yau, A.; Davis, G.; Zivkovic, V.T.; Olmstead, R.E.; Tremoulet, P.D.; Craven, P.L. EEG Correlates of Task Engagement and Mental Workload in Vigilance, Learning, and Memory Tasks. Aviat. Space Environ. Med. 2007, 78, B231–B244. [Google Scholar]
  19. Zhou, Y.; Huang, S.; Xu, Z.; Wang, P.; Wu, X.; Zhang, D. Cognitive Workload Recognition Using EEG Signals and Machine Learning: A Review. IEEE Trans. Cogn. Dev. Syst. 2022, 14, 799–818. [Google Scholar] [CrossRef]
  20. Lively, S.E.; Pisoni, D.B.; Van Summers, W.; Bernacki, R.H. Effects of Cognitive Workload on Speech Production: Acoustic Analyses and Perceptual Consequences. J. Acoust. Soc. Am. 1993, 93, 2962–2973. [Google Scholar] [CrossRef]
  21. Hart, S.G. Nasa-Task Load Index (NASA-TLX); 20 Years Later. Proc. Hum. Factors Ergon. Soc. Annu. Meet. 2006, 50, 904–908. [Google Scholar] [CrossRef]
  22. Guerra-Filho, P.G. Optical Motion Capture: Theory and Implementation. J. Theor. Appl. Inform. (RITA) 2005, 12, 61–89. [Google Scholar]
  23. Aurand, A.M.; Dufour, J.S.; Marras, W.S. Accuracy Map of an Optical Motion Capture System with 42 or 21 Cameras in a Large Measurement Volume. J. Biomech. 2017, 58, 237–240. [Google Scholar] [CrossRef]
  24. Windolf, M.; Götzen, N.; Morlock, M. Systematic Accuracy and Precision Analysis of Video Motion Capturing Systems—Exemplified on the Vicon-460 System. J. Biomech. 2008, 41, 2776–2780. [Google Scholar] [CrossRef] [PubMed]
  25. Ciklacandir, S.; Ozkan, S.; Isler, Y. A Comparison of the Performances of Video-Based and IMU Sensor-Based Motion Capture Systems on Joint Angles. In Proceedings of the 2022 Innovations in Intelligent Systems and Applications Conference (ASYU), Antalya, Turkey, 7–9 September 2022; pp. 1–5. [Google Scholar]
  26. Iosa, M.; Picerno, P.; Paolucci, S.; Morone, G. Wearable Inertial Sensors for Human Movement Analysis. Expert Rev. Med. Devices 2016, 13, 641–659. [Google Scholar] [CrossRef] [PubMed]
  27. Vaduvescu, V.A.; Negrea, P. Inertial Measurement Unit—A Short Overview of the Evolving Trend for Miniaturization and Hardware Structures. In Proceedings of the 2021 International Conference on Applied and Theoretical Electricity (ICATE), Craiova, Romania, 27–29 May 2021; pp. 1–5. [Google Scholar]
  28. Franco, T.; Sestrem, L.; Henriques, P.R.; Alves, P.; Varanda Pereira, M.J.; Brandão, D.; Leitão, P.; Silva, A. Motion Sensors for Knee Angle Recognition in Muscle Rehabilitation Solutions. Sensors 2022, 22, 7605. [Google Scholar] [CrossRef] [PubMed]
  29. Manupibul, U.; Tanthuwapathom, R.; Jarumethitanont, W.; Kaimuk, P.; Limroongreungrat, W.; Charoensuk, W. Integration of Force and IMU Sensors for Developing Low-Cost Portable Gait Measurement System in Lower Extremities. Sci. Rep. 2023, 13, 10653. [Google Scholar] [CrossRef] [PubMed]
  30. Saun, T.J.; Grantcharov, T.P. Design and Validation of an Inertial Measurement Unit (IMU)-Based Sensor for Capturing Camera Movement in the Operating Room. HardwareX 2021, 9, e00179. [Google Scholar] [CrossRef]
  31. Arlotti, J.S.; Carroll, W.O.; Afifi, Y.; Talegaonkar, P.; Albuquerque, L.; Burch V, R.F.; Ball, J.E.; Chander, H.; Petway, A. Benefits of IMU-Based Wearables in Sports Medicine: Narrative Review. Int. J. Kinesiol. Sports Sci. 2022, 10, 36–43. [Google Scholar] [CrossRef]
  32. Davarzani, S.; Helzer, D.; Rivera, J.; Saucier, D.; Jo, E.; Burch V., R.; Chander, H.; Strawderman, L.; Ball, J.E.; Smith, B.K.; et al. Validity and Reliability of StriveTM Sense3 for Muscle Activity Monitoring During the Squat Exercise. Int. J. Kinesiol. Sports Sci. 2020, 8, 1–18. [Google Scholar] [CrossRef]
  33. Apte, S.; Prigent, G.; Stöggl, T.; Martínez, A.; Snyder, C.; Gremeaux-Bader, V.; Aminian, K. Biomechanical Response of the Lower Extremity to Running-Induced Acute Fatigue: A Systematic Review. Front. Physiol. 2021, 12, 646042. [Google Scholar] [CrossRef]
  34. Marotta, L.; Scheltinga, B.L.; van Middelaar, R.; Bramer, W.M.; van Beijnum, B.-J.F.; Reenalda, J.; Buurke, J.H. Accelerometer-Based Identification of Fatigue in the Lower Limbs during Cyclical Physical Exercise: A Systematic Review. Sensors 2022, 22, 3008. [Google Scholar] [CrossRef]
  35. Rawashdeh, S.A.; Rafeldt, D.A.; Uhl, T.L. Wearable IMU for Shoulder Injury Prevention in Overhead Sports. Sensors 2016, 16, 1847. [Google Scholar] [CrossRef]
  36. Moolchandani, P.R.; Mazumdar, A.; Young, A.J. Design of an Intent Recognition System for Dynamic, Rapid Motions in Unstructured Environments. ASME Lett. Dyn. Syst. Control 2021, 2, 011004. [Google Scholar] [CrossRef]
  37. Ahmidi, N.; Hager, G.D.; Ishii, L.; Fichtinger, G.; Gallia, G.L.; Ishii, M. Surgical Task and Skill Classification from Eye Tracking and Tool Motion in Minimally Invasive Surgery. In Proceedings of the Medical Image Computing and Computer-Assisted Intervention—MICCAI 2010; Jiang, T., Navab, N., Pluim, J.P.W., Viergever, M.A., Eds.; Springer: Berlin/Heidelberg, Germany, 2010; pp. 295–302. [Google Scholar]
  38. Cao, C.; MacKenzie, C.L.; Payandeh, S. Task and Motion Analyses in Endoscopic Surgery. In Proceedings of the ASME International Mechanical Engineering Congress and Exposition, Atlanta, GA, USA, 17–22 November 1996; ASME: Atlanta, GA, USA, 1996; Volume IMECE1996-0386, pp. 583–590. [Google Scholar]
  39. Brunyé, T.T.; Booth, K.; Hendel, D.; Kerr, K.F.; Shucard, H.; Weaver, D.L.; Elmore, J.G. Machine Learning Classification of Diagnostic Accuracy in Pathologists Interpreting Breast Biopsies. J. Am. Med. Inform. Assoc. 2024, 31, 552–562. [Google Scholar] [CrossRef]
  40. Hosp, B.; Yin, M.S.; Haddawy, P.; Watcharopas, R.; Sa-Ngasoongsong, P.; Kasneci, E. States of Confusion: Eye and Head Tracking Reveal Surgeons’ Confusion during Arthroscopic Surgery. In Proceedings of the 2021 International Conference on Multimodal Interaction, Montréal, QC, Canada, 18–22 October 2021; Association for Computing Machinery: New York, NY, USA, 2021; pp. 753–757. [Google Scholar]
  41. Monaro, M.; Toncini, A.; Ferracuti, S.; Tessari, G.; Vaccaro, M.G.; De Fazio, P.; Pigato, G.; Meneghel, T.; Scarpazza, C.; Sartori, G. The Detection of Malingering: A New Tool to Identify Made-Up Depression. Front. Psychiatry 2018, 9, 249. [Google Scholar] [CrossRef]
  42. Torres, E.B.; Isenhower, R.W.; Yanovich, P.; Rehrig, G.; Stigler, K.; Nurnberger, J.; José, J.V. Strategies to Develop Putative Biomarkers to Characterize the Female Phenotype with Autism Spectrum Disorders. J. Neurophysiol. 2013, 110, 1646–1662. [Google Scholar] [CrossRef]
  43. Hundrieser, S.; Heinemann, F.; Klatt, M.; Struleva, M.; Munk, A. Unbalanced Kantorovich-Rubinstein Distance, Plan, and Barycenter on Finite Spaces: A Statistical Perspective. arXiv 2024, arXiv:2211.08858. [Google Scholar]
  44. Aoki, K.; Ngo, T.T.; Mitsugami, I.; Okura, F.; Niwa, M.; Makihara, Y.; Yagi, Y.; Kazui, H. Early Detection of Lower MMSE Scores in Elderly Based on Dual-Task Gait. IEEE Access 2019, 7, 40085–40094. [Google Scholar] [CrossRef]
  45. Shin, J.-Y.; Kim, Y.-J.; Kim, J.-S.; Min, S.-B.; Park, J.-N.; Bae, J.-H.; Seo, H.-E.; Shin, H.-S.; Yu, Y.-E.; Lim, J.-Y.; et al. The Correlation between Gait and Cognitive Function in Dual-Task Walking of the Elderly with Cognitive Impairment: A Systematic Literature Review. J. Korean Soc. Phys. Med. 2022, 17, 93–108. [Google Scholar] [CrossRef]
  46. Chen, S.; Epps, J. Atomic Head Movement Analysis for Wearable Four-Dimensional Task Load Recognition. IEEE J. Biomed. Health Inform. 2019, 23, 2464–2474. [Google Scholar] [CrossRef]
  47. Boer, E.R. Behavioral Entropy as an Index of Workload. Proc. Hum. Factors Ergon. Soc. Annu. Meet. 2000, 44, 125–128. [Google Scholar] [CrossRef]
  48. Boer, E.R. Behavioral Entropy as a Measure of Driving Performance. In Driving Assessment Conference; University of Iowa: Iowa City, IA, USA, 2001; Volume 1. [Google Scholar] [CrossRef]
  49. Hirsh, J.B.; Mar, R.A.; Peterson, J.B. Psychological Entropy: A Framework for Understanding Uncertainty-Related Anxiety. Psychol. Rev. 2012, 119, 304–320. [Google Scholar] [CrossRef] [PubMed]
  50. Keshmiri, S. Entropy and the Brain: An Overview. Entropy 2020, 22, 917. [Google Scholar] [CrossRef] [PubMed]
  51. Zanetti, M.; Faes, L.; Nollo, G.; De Cecco, M.; Pernice, R.; Maule, L.; Pertile, M.; Fornaser, A. Information Dynamics of the Brain, Cardiovascular and Respiratory Network during Different Levels of Mental Stress. Entropy 2019, 21, 275. [Google Scholar] [CrossRef] [PubMed]
  52. Keshmiri, S. Conditional Entropy: A Potential Digital Marker for Stress. Entropy 2021, 23, 286. [Google Scholar] [CrossRef] [PubMed]
  53. Biggs, A.T. Developing Scenarios That Evoke Shoot/Don’t-Shoot Errors. Appl. Ergon. 2021, 94, 103397. [Google Scholar] [CrossRef]
  54. Chung, G.K.W.K.; De La Cruz, G.C.; de Vries, L.F.; Kim, J.-O.; Bewley, W.L.; de Souza e Silva, A.; Sylvester, R.M.; Baker, E.L. Determinants of Rifle Marksmanship Performance: Predicting Shooting Performance with Advanced Distributed Learning Assessments; University of California Los Angeles: Los Angeles, CA, USA, 2004. [Google Scholar]
  55. Brunyé, T.T.; Giles, G.E. Methods for Eliciting and Measuring Behavioral and Physiological Consequences of Stress and Uncertainty in Virtual Reality. Front. Virtual Real. 2023, 4, 951435. [Google Scholar] [CrossRef]
  56. Giles, G.E.; Cantelon, J.A.; Navarro, E.; Brunyé, T.T. State and Trait Predictors of Cognitive Responses to Acute Stress and Uncertainty. Mil. Psychol. 2024, 1–8. [Google Scholar] [CrossRef]
  57. Brunyé, T.T.; Gardony, A.L. Eye Tracking Measures of Uncertainty during Perceptual Decision Making. Int. J. Psychophysiol. 2017, 120, 60–68. [Google Scholar] [CrossRef]
  58. Heekeren, H.R.; Marrett, S.; Ungerleider, L.G. The Neural Systems That Mediate Human Perceptual Decision Making. Nat. Rev. Neurosci. 2008, 9, 467–479. [Google Scholar] [CrossRef] [PubMed]
  59. Unity Technologies, Inc. Unity3d Engine; Unity Technologies, Inc.: San Francisco, CA, USA, 2022. [Google Scholar]
  60. Barandas, M.; Folgado, D.; Fernandes, L.; Santos, S.; Abreu, M.; Bota, P.; Liu, H.; Schultz, T.; Gamboa, H. TSFEL: Time Series Feature Extraction Library. SoftwareX 2020, 11, 100456. [Google Scholar] [CrossRef]
  61. UNO Biomechanics Nonlinear-Analysis-Core/NONANLibrary 2020. Available online: https://github.com/Nonlinear-Analysis-Core/NONANLibrary (accessed on 23 September 2024).
  62. Machkour, J.; Muma, M.; Palomar, D.P. The Terminating-Random Experiments Selector: Fast High-Dimensional Variable Selection with False Discovery Rate Control. arXiv 2021, arXiv:2110.06048. [Google Scholar]
  63. Wolf, A.; Swift, J.B.; Swinney, H.L.; Vastano, J.A. Determining Lyapunov Exponents from a Time Series. Phys. D Nonlinear Phenom. 1985, 16, 285–317. [Google Scholar] [CrossRef]
  64. Dailey, S.F.; Campbell, L.N.P.; Ramsdell, J. Law Enforcement Officer Naturalistic Decision-Making in High-Stress Conditions. Polic. Int. J. 2024, 47, 929–948. [Google Scholar] [CrossRef]
  65. Voigt, L.; Zinner, C. How to Improve Decision Making and Acting Under Stress: The Effect of Training with and without Stress on Self-Defense Skills in Police Officers. J. Police Crim. Psych. 2023, 38, 1017–1024. [Google Scholar] [CrossRef]
  66. Torres-Ronda, L.; Beanland, E.; Whitehead, S.; Sweeting, A.; Clubb, J. Tracking Systems in Team Sports: A Narrative Review of Applications of the Data and Sport Specific Analysis. Sports Med. Open 2022, 8, 15. [Google Scholar] [CrossRef]
  67. Cristancho, S.M.; Apramian, T.; Vanstone, M.; Lingard, L.; Ott, M.; Novick, R.J. Understanding Clinical Uncertainty: What Is Going on When Experienced Surgeons Are Not Sure What to Do? Acad. Med. 2013, 88, 1516. [Google Scholar] [CrossRef]
  68. Beukman, A.R.; Hancke, G.P.; Silva, B.J. A Multi-Sensor System for Detection of Driver Fatigue. In Proceedings of the 2016 IEEE 14th International Conference on Industrial Informatics (INDIN), Poitiers, France, 18–20 July 2016; pp. 870–873. [Google Scholar]
  69. Brunner, O.; Mertens, A.; Nitsch, V.; Brandl, C. Accuracy of a Markerless Motion Capture System for Postural Ergonomic Risk Assessment in Occupational Practice. Int. J. Occup. Saf. Ergon. 2022, 28, 1865–1873. [Google Scholar] [CrossRef]
  70. Yunus, M.N.H.; Jaafar, M.H.; Mohamed, A.S.A.; Azraai, N.Z.; Hossain, M.S. Implementation of Kinetic and Kinematic Variables in Ergonomic Risk Assessment Using Motion Capture Simulation: A Review. Int. J. Environ. Res. Public Health 2021, 18, 8342. [Google Scholar] [CrossRef]
Figure 1. Example time-series rifle movement data, demonstrating either low variability (panel A), or high variability early (B), middle (C), and/or late (D) in the trajectory.
Figure 2. Box plot distinguishing low versus high rated uncertainty for two notable features, including 95% confidence intervals and an indication of pairwise statistical significance (**** p < 0.00001).
Table 1. Convergence of the model across the five iterations, including the number of features selected and five key performance variables.
Iteration   Number of Features   F1      Accuracy   Precision   Recall   MSE
1           11                   0.775   0.664      0.682       0.897    0.336
2           8                    0.763   0.645      0.671       0.883    0.355
3           6                    0.766   0.651      0.677       0.883    0.348
4           11                   0.768   0.661      0.684       0.877    0.339
5           7                    0.775   0.657      0.670       0.920    0.343
Table 2. The 5 most frequently selected trajectory features across iterations of the 5-fold cross validation, mean feature weights, and a description of each feature. VM = velocity magnitude, AMI = average mutual information using Stergiou method.
Feature                     Weight   Description
Lyapunov Exponent of VM     −0.29    The rate at which small differences in velocity grow over time, indicating sensitivity to initial conditions and chaos.
Lyapunov Exponent of X      −0.27    The rate at which small differences in X-axis (lateral) movement grow over time, indicating sensitivity to initial conditions and chaos.
Lyapunov Exponent of Y      −0.17    The rate at which small differences in Y-axis (vertical) movement grow over time, indicating sensitivity to initial conditions and chaos.
Spectral Slope              −0.16    Power of a trajectory’s velocity changing across different frequencies, revealing smoothness or complexity of the trajectory.
AMI (Stergiou) of X         −0.30    Nonlinear dependencies and predictability of X-axis movement over time, how much information past values provide about future values.