Proceeding Paper

IMPACT: A Dataset for Integrated Measurement of Performance and Contextual Task-Related Effects †

1 Department of Software and Internet Technologies, Technical University of Varna, 9010 Varna, Bulgaria
2 Department of Communication Engineering and Technologies, Technical University of Varna, 9010 Varna, Bulgaria
* Author to whom correspondence should be addressed.
Presented at the International Conference on Electronics, Engineering Physics and Earth Science (EEPES’24), Kavala, Greece, 19–21 June 2024.
Eng. Proc. 2024, 70(1), 40; https://doi.org/10.3390/engproc2024070040
Published: 8 August 2024

Abstract: The paper introduces the Integrated Measurement of Performance and Contextual Task-related effects (IMPACT) dataset, designed to facilitate comprehensive investigations into contextual dynamics within human–machine collaboration (HMC) systems. Unlike traditional approaches focusing solely on monitoring physical contextual aspects, IMPACT uniquely incorporates the individual perception of human participants within these systems. Data sources include temporal records of task performance, user models derived from real-time video processing, as well as contextual factors such as different light and noise conditions. Key features include task stimuli, user responses, reaction times, emotional states, and attention levels, all synchronized via timestamps and recorded in .csv and .mp4 formats. Our analysis highlights variations in user perceptions and performance under different contextual states, both in person-independent and person-specific scenarios. Pre-test and post-test questionnaire data reveal shifts in user perceptions of light and noise as distractors. Performance data indicate that task-related adaptations maintain consistent performance levels despite contextual changes, while attention and arousal levels vary significantly. Person-specific analysis underscores the importance of individualized context adaptation, as users exhibit unique responses to environmental changes. The IMPACT dataset supports the development of adaptive human–machine collaboration systems by integrating individual user perceptions with objective context monitoring. Future research will focus on refining context-adaptive models to enhance the robustness and accuracy of individualized context-related performance predictions.

1. Introduction

The increasing integration of human–machine collaboration (HMC) systems across various industries poses a number of challenges. Modern human–machine interaction (HMI) and HMC systems, with predominantly human-centered structures, often acknowledge the importance of context-related parameters, whose variability can significantly influence the outcome of the collaborative task, typically manifesting as decreased performance or other undesirable effects.
The recent literature underscores the criticality of context awareness within these systems, prompting the development of diverse context-aware tools and methodologies for context recognition, interpretation, and assessment. The incorporation of numerous modalities and monitoring tools raises questions regarding data processing and subsequent utilization for context-related adaptation. In this regard, the availability of resources allowing statistical analysis, data understanding, and machine learning for the creation of context-related models is of crucial importance.
Addressing this issue, Ref. [1] underscores the importance of the selection, pre-processing, and transformation of the data provided from a set of different context-monitoring sources and the elaboration of a unified version of that data for data processing.
Most of the existing context-related datasets tend to contribute to enhancing the context awareness of an agent either in human–robot collaboration (HRC), e.g., to avoid collisions with the human [2] or to estimate the human posture and relate it to a particular domain-specific activity [3,4], or in human–computer collaboration [5]—to estimate human’s emotion-related momentary parameters. Such an approach builds situation awareness, based on the “machine point of view”.
In the current paper, we present the IMPACT dataset (Integrated Measurement of Performance and Contextual Task-related effects), aiming for a better understanding of the influence of context-related parameters during the execution of a cognition-demanding collaborative task. It is intended to contribute to better decision-making in the process of context-related adaptation considering the human’s individual perception of the contextual parameter changes.

2. Materials and Methods

2.1. Background

IMPACT was designed and collected as a resource-creation activity within the iHMI research project, intended to further develop the possibilities for improving intelligent human–machine interfaces. Data acquisition was based on purpose-built software, created as a validation of the intelligent Human–Machine Interface framework (iHMIfr) explained in detail in [6,7]. The dataset aimed to support the third phase of the research, which focused on the implementation of context-related adaptation.
In the following subsections, we present the data collection setup, the experimental protocol, as well as a description of the dataset features, and the overall organization of the IMPACT dataset.

2.2. Data Collection Setup

The data collection setup consists of a desktop PC running the above-mentioned web application, equipped with a 24″ monitor with a TRUST web camera mounted on top, a set of stereo speakers placed on both sides of the monitor, and an adjustable 220 V desktop lamp controlled by an Arduino controller through a relay. The setup is presented in Figure 1.

2.3. Data Collection Protocol

A total of 18 volunteers (13 male and 5 female) took part in the experiment. All of them were students at the Technical University of Varna. The average age among male participants was 25.08 ± 6.89 years, whereas among female participants it was 26.11 ± 7.77 years. Each participant took part individually, according to a schedule prepared in advance. To avoid discomfort and distraction, the laboratory was closed to other visitors during the experimental sessions.
The overall duration of the experiment was about 40 min per participant, of which 15 min was the test itself. The structure of the experiment is shown in Figure 2.
Each session commenced with the participant completing an informed consent declaration, which outlined the types of data collected. This was followed by the completion of Questionnaire 1, which included personal information such as age, sex, handedness, any known issues with sight and hearing, the use of glasses or other assisting devices, as well as details regarding personal habits related to computer gaming, and overall perceptions of the influence of distractors such as light and noise on work effectiveness and concentration. Subsequently, participants engaged in an introductory session where the task was explained, accompanied by trial runs. The number of trials undertaken was at the discretion of each participant, ensuring that by the end of this session, participants felt adequately informed about the test requirements. Following the introductory session, participants underwent a 15 min test during which data were collected. The final step involved completing Questionnaire 2, which solicited self-assessment reports regarding the impact experienced as a result of the applied distractors.
The actual test was based on the T-load_D-back cognitive task [6], during which context-related changes, in the form of external stimuli acting as distractors for the HMC system, were applied. The task was structured so that the influence of each applied stimulus could be isolated by short (1 min) recovery intervals without any distractors. The three applied distractions were as follows: frontal light, noise (of the "neighbour speaking" type), and light and noise applied simultaneously.
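The block structure of the test can be sketched as a simple schedule. This is a hypothetical illustration: the 1 min recovery intervals and the three distractor types come from the protocol above, but the individual distractor-block durations are assumed, as the paper does not report them explicitly.

```python
# Illustrative sketch of the test structure: distractor blocks separated
# by short (1 min) recovery intervals without stimuli. Per-block
# durations (minutes) are assumptions for illustration only.
schedule = [
    ("None", 1),   # initial recovery/baseline interval
    ("Light", 3),  # frontal light distractor (duration assumed)
    ("None", 1),
    ("Noise", 3),  # "neighbour speaking" noise (duration assumed)
    ("None", 1),
    ("Both", 3),   # light and noise simultaneously (duration assumed)
    ("None", 1),
]
conditions = [c for c, _ in schedule]
```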

2.4. IMPACT Dataset Description

The IMPACT dataset comprises data sourced from two primary channels: the web-based application and the questionnaires. Data from the questionnaires are aggregated and presented in .xls tables. Meanwhile, data collected through the application encompass temporal values of parameters associated with the states of the three managers within the iHMIfr architecture: the task manager, the user model, and the context model [6]. All collected data are formatted in .csv files. Integration of timestamps synchronizes the data, which are fused on a decision level at the time of each response.
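As a sketch of this decision-level fusion, the three per-manager streams can be joined on their timestamps with a backward nearest-neighbour merge, so that each task response is paired with the most recent user-model and context-model samples. The column names and values below are simplified stand-ins, not the actual IMPACT schema.

```python
import pandas as pd

# Hypothetical excerpts of the three per-manager streams (invented values).
task = pd.DataFrame({
    "timestamp_ms": [1000, 2400, 3900],
    "response_correct": [True, False, True],
})
user = pd.DataFrame({
    "timestamp_ms": [950, 2300, 3850],
    "attention": [0.81, 0.64, 0.77],
})
context = pd.DataFrame({
    "timestamp_ms": [900, 2200, 3800],
    "light": [0, 1, 1],
    "noise": [0, 0, 1],
})

# Decision-level fusion at the time of each response: for every task
# event, take the most recent user-model and context-model samples
# (backward nearest-neighbour join on the shared timestamp axis).
fused = pd.merge_asof(task, user, on="timestamp_ms", direction="backward")
fused = pd.merge_asof(fused, context, on="timestamp_ms", direction="backward")
```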
The model of the task is represented by features such as the task-related stimuli, the given answer (response), performance (true/false), the current speed of the task (given by the time interval between two consecutive task stimuli, in milliseconds), and the reaction time. Data for the model of the user are collected through the integration of the Morphcast SDK [8], which processes real-time video from the camera mounted on the monitor. This data extraction employs face recognition algorithms to derive features such as arousal, valence, attention, and seven basic emotional states (anger, disgust, fear, happiness, neutrality, sadness, and surprise). Valence and arousal values are normalized within the interval (−1; 1), while attention and emotional state values lie within the interval (0; 1). The model of the context comprises features related to light and noise, represented by binary values depending on the state of each distractor, as well as brightness, a metric assessing the average brightness of the participant's face. The brightness measurement is intended to physically assess contextual changes related to light. It is computed after isolating the face from the rest of the frame, to eliminate the influence of the participant's hair or clothing colors.
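The face-restricted brightness metric can be approximated as follows. This is a sketch under stated assumptions: the face bounding box is taken from an external detector (the authors use the Morphcast SDK), and the BT.601 luma weights are a standard choice rather than the paper's documented formula.

```python
import numpy as np

def face_brightness(frame: np.ndarray, bbox: tuple) -> float:
    """Mean brightness of the face region only, so hair or clothing
    colours outside the bounding box cannot skew the light estimate.

    frame: H x W x 3 RGB image (uint8); bbox: (x, y, w, h) from any
    face detector (the detector itself is out of scope here).
    """
    x, y, w, h = bbox
    face = frame[y:y + h, x:x + w].astype(np.float64)
    # Perceptual luma with ITU-R BT.601 weights, averaged over the crop.
    luma = 0.299 * face[..., 0] + 0.587 * face[..., 1] + 0.114 * face[..., 2]
    return float(luma.mean())
```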
Additionally, alongside the aforementioned data, two video clips in .mp4 format per participant were recorded: one capturing the participant’s behavior (via the camera), and the other capturing the screen. These clips were synchronized in time for subsequent matching. The dataset is available for free access from the link given in the Supplementary Materials.

3. Results

3.1. Analysis of Questionnaire-Based Data

The results based on data from both questionnaires are presented below. Figure 3 shows a summary of the pre-test understanding of each participant with regard to the influence of light and noise as parameters of the working environment.
Figure 4 and Figure 5 depict the post-test perception of these parameters, respectively, as a ranking of light/noise/both context states (Figure 4) and as the perceived degree of influence of each one of them on the participants’ performance (Figure 5).
A comparison of the pre- and post-experiment self-assessment reports highlights an initial perception bias towards noise as the more significant distractor. Initially, noise is predominantly categorized as having a significant impact, while light is considered moderately to significantly distracting. Post-experiment results indicate the greatest impact when both distracting factors are applied simultaneously. However, when applied individually, light emerges as the predominant distractor with a very high influence, while noise is perceived as moderately significant.

3.2. Analysis of Collected Data in Person-Independent Scenario

The results from the data collected through the application are presented in two scenarios: person-independent and person-specific. In the person-independent scenario, average values of arousal, valence, attention, and performance are provided across different context states in comparison with the baseline condition (None), as shown in Figure 6.
The average performance does not show considerable changes, primarily due to the task-related adaptation performed by the system throughout the test. However, the average attention increases during distraction with “Noise” stimuli, likely as a result of participants’ efforts to maintain self-concentration. A similar trend is observed with arousal, while valence exhibits the most negative response among all context states. For all other states, attention shows consistent average values. Arousal decreases during the “Light” state, likely contributing to its lower levels during the “Both” state as well.
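In code, the person-independent view amounts to averaging each signal per context state and comparing against the "None" baseline. The records below are invented toy values in the shape of the fused stream, not actual IMPACT data.

```python
import pandas as pd

# Toy records in the shape of the fused IMPACT stream (values invented).
df = pd.DataFrame({
    "context_state": ["None", "Noise", "Light", "Both", "Noise", "None"],
    "attention":     [0.70,   0.78,    0.69,    0.71,   0.80,    0.72],
    "arousal":       [0.10,   0.25,   -0.05,   -0.02,   0.30,    0.12],
    "performance":   [1,      1,       0,       1,      1,       1],
})

# Person-independent view: average each signal per context state, then
# express every state as a shift relative to the "None" baseline.
means = df.groupby("context_state")[["attention", "arousal", "performance"]].mean()
delta_vs_baseline = means - means.loc["None"]
```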
Figure 7 depicts the changes in the reaction time during the different contextual conditions.
The reaction times also exhibit the greatest variance during the “Noise” condition, particularly noticeable in the last quartile. Interestingly, the distributions of reaction times for the “Light” and “Both stimuli” conditions appear similar, except for the third quartile. This discrepancy may arise from the fact that participants encountered the “Both” condition after experiencing the “Light” and “Noise” stimuli separately, potentially influencing their reaction times.
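The quartile comparison behind Figure 7 can be reproduced for any set of reaction times with a simple percentile computation; the values below are invented for illustration.

```python
import numpy as np

# Invented reaction times (ms) grouped by context condition.
rt = {
    "None":  [520, 540, 560, 600, 640],
    "Noise": [500, 560, 620, 700, 900],
    "Light": [530, 550, 590, 630, 680],
    "Both":  [535, 555, 595, 640, 690],
}

# Quartiles (Q1, median, Q3) and interquartile range per condition,
# mirroring the box-plot style comparison of the reaction times.
for cond, values in rt.items():
    q1, q2, q3 = np.percentile(values, [25, 50, 75])
    print(f"{cond:5s} Q1={q1:.0f} median={q2:.0f} Q3={q3:.0f} IQR={q3 - q1:.0f}")
```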

3.3. Analysis of Collected Data in Person-Specific Scenario

The person-specific scenario presents participants’ individual perceptions related to momentary context states. Figure 8 illustrates the variability in attention levels as a result of changes in the context state for each participant. These person-specific results align with the self-assessment reports, demonstrating that each participant has a unique perception of contextual changes. Therefore, an individualized approach to context-related adaptation is not only relevant but essential for accurately addressing these differences.
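The person-specific view reduces to one attention profile per participant across context states. In the invented example below, two hypothetical participants react to "Noise" in opposite directions, which is exactly the pattern that motivates individualized adaptation.

```python
import pandas as pd

# Minimal per-participant records (invented values) in the fused format.
df = pd.DataFrame({
    "participant":   ["P01", "P01", "P02", "P02"],
    "context_state": ["None", "Noise", "None", "Noise"],
    "attention":     [0.72,  0.85,   0.70,   0.55],
})

# One attention profile per participant across context states; the
# per-person shift under "Noise" exposes individual differences that a
# person-independent average would hide.
profiles = df.pivot_table(index="participant", columns="context_state",
                          values="attention", aggfunc="mean")
noise_shift = profiles["Noise"] - profiles["None"]
```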

4. Conclusions

In this paper, we presented the dataset for Integrated Measurement of Performance and Contextual Task-related effects (IMPACT). This dataset enables the study of context not only through monitoring its physical aspects but also through understanding the individual perceptions of human participants in a human–machine collaboration (HMC) system. The data collected and analyzed in both person-independent and person-specific scenarios confirm that context-related adaptation in a collaborative system benefits from incorporating individual perceptions into context monitoring.
Moving forward, our research will focus on further exploring and utilizing the IMPACT dataset, with the ultimate goal of developing a robust model for accurately estimating individualized aspects of context-related changes. Since the HMC system successfully adapts its behavior based on user performance, maintaining this parameter within a narrow interval of around 87%, its stability leads to corresponding dynamic changes in other parameters such as attention, as shown in Section 3.3. Future efforts will aim to establish additional relationships between human-related parameters (e.g., reaction time) and machine-related parameters (e.g., the number of adaptive reactions), allowing for a more accurate interpretation of context influence.

Supplementary Materials

The following supporting information can be downloaded at http://isr.tu-varna.bg/ihmi/index.php/resursi (accessed on 1 August 2024).

Author Contributions

Conceptualization, M.M., V.M. and Y.K.; methodology M.M.; software, Y.K.; validation, M.M., Y.K. and V.M.; formal analysis, M.M.; investigation, M.M. and Y.K.; resources, M.M., Y.K. and V.M.; data curation, Y.K. and M.M.; writing—original draft preparation, M.M.; writing—review and editing, V.M. and M.M.; visualization, M.M.; supervision, V.M.; project administration, V.M.; funding acquisition, V.M. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by THE BULGARIAN NATIONAL SCIENCE FUND, grant number FNI KP-06-N37/18, entitled “Investigation on intelligent human-machine interaction interfaces, capable of recognizing high-risk emotional and cognitive conditions”.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

The data obtained are available at the link given in the Supplementary Materials.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Debruyne, C. Introducing Context and Context-awareness in Data Integration: Identifying the Problem and a Preliminary Case Study on Informed Consent. In Proceedings of the 22nd International Conference on Information Integration and Web-Based Applications and Services, Chiang Mai, Thailand, 30 November–2 December 2020; pp. 178–183. [Google Scholar] [CrossRef]
  2. Liu, H.; Wang, L. Collision-free human-robot collaboration based on context awareness. Robot. Comput.-Integr. Manuf. 2021, 67, 101997. [Google Scholar] [CrossRef]
  3. Andriluka, M.; Pishchulin, L.; Gehler, P.; Schiele, B. 2D Human Pose Estimation: New Benchmark and State of the Art Analysis. In Proceedings of the 2014 IEEE Conference on Computer Vision and Pattern Recognition, Columbus, OH, USA, 23–28 June 2014; pp. 3686–3693. [Google Scholar] [CrossRef]
  4. Charles, J.; Pfister, T.; Magee, D.; Hogg, D.; Zisserman, A. Personalizing Human Video Pose Estimation. arXiv 2015, arXiv:1511.06676. [Google Scholar] [CrossRef]
  5. Dubiel, M.; Yilma, B.A.; Latifzadeh, K.; Leiva, L.A. A Contextual Framework for Adaptive User Interfaces: Modelling the Interaction Environment. 2022. Available online: http://arxiv.org/abs/2203.16882 (accessed on 22 April 2024).
  6. Markov, M.; Kalinin, Y.; Ganchev, T. A Task-related Adaptation in Intelligent Human-Machine Interfaces. In Proceedings of the 2022 International Conference on Communications, Information, Electronic and Energy Systems (CIEES), Veliko Tarnovo, Bulgaria, 24–26 November 2022; pp. 1–4. [Google Scholar] [CrossRef]
  7. Markov, M.; Kalinin, Y.; Markova, V.; Ganchev, T. Towards Implementation of Emotional Intelligence in Human–Machine Collaborative Systems. Electronics 2023, 12, 3852. [Google Scholar] [CrossRef]
  8. MorphCast. Emotion AI Provider—Facial Emotion Recognition. 2023. Available online: https://www.morphcast.com (accessed on 1 May 2023).
Figure 1. The experimental setup for collection of the IMPACT dataset with LIGHT stimuli on (a) and off during the test session (b).
Figure 2. Data acquisition workflow during the IMPACT dataset collection sessions.
Figure 3. A pre-test self-assessment of the influence of light and noise as parameters of the working environment.
Figure 4. A post-test ranking of the impact of three context-related distractors.
Figure 5. A post-test self-assessment of the degree of influence of the distractors.
Figure 6. Comparison of arousal, valence, attention, and performance of context-related states.
Figure 7. Reaction times presented by context-related states.
Figure 8. Individual attention dependency on the context state.

Share and Cite

Markov, M.; Kalinin, Y.; Markova, V. IMPACT: A Dataset for Integrated Measurement of Performance and Contextual Task-Related Effects. Eng. Proc. 2024, 70, 40. https://doi.org/10.3390/engproc2024070040
