- Emojis for Evaluating and Supporting Psychological Safety in Co-Creation
- A Study of NLP-Based Speech Interfaces in Medical Virtual Reality
- Education 4.0 for Industry 4.0: A Mixed Reality Framework for Workforce Readiness in Manufacturing
- Environments That Boost Creativity: AI-Generated Living Geometry
- Effects of Flight Experience or Simulator Exposure on Simulator Sickness in Virtual Reality Flight Simulation
Journal Description
Multimodal Technologies and Interaction
Multimodal Technologies and Interaction is an international, peer-reviewed, open access journal on multimodal technologies and interaction published monthly online by MDPI.
- Open Access: free for readers, with article processing charges (APC) paid by authors or their institutions.
- High Visibility: indexed within Scopus, ESCI (Web of Science), Inspec, dblp Computer Science Bibliography, and other databases.
- Journal Rank: CiteScore - Q1 (Neuroscience (miscellaneous))
- Rapid Publication: manuscripts are peer-reviewed and a first decision is provided to authors approximately 25 days after submission; acceptance to publication takes 3.8 days (median values for papers published in this journal in the first half of 2025).
- Recognition of Reviewers: reviewers who provide timely, thorough peer-review reports receive vouchers entitling them to a discount on the APC of their next publication in any MDPI journal, in appreciation of the work done.
Impact Factor:
2.4 (2024);
5-Year Impact Factor:
2.7 (2024)
Latest Articles
Exploring Consumer Perception of Augmented Reality (AR) Tools for Displaying and Understanding Nutrition Labels: A Pilot Study
Multimodal Technol. Interact. 2025, 9(9), 97; https://doi.org/10.3390/mti9090097 - 16 Sep 2025
Abstract
Augmented reality (AR) technology offers a promising approach to providing consumers with detailed and personalized information about food products. The aim of this pilot study was to explore how the use of AR tools comprising visual and auditory formats affects consumers’ perception and understanding of nutrition labels of two commercially available products (lasagne ready meal and strawberry yogurt). The nutritional information of both the lasagne and yogurt products was presented to consumers (n = 30) under three experimental conditions: original packaging, visual AR, and visual and audio AR. Consumers answered questions about their perceptions of the products’ overall healthiness, caloric content, and macronutrient composition, as well as how the information was presented. The results showed that while nutritional information presented under the original packaging condition was more effective in changing consumer perceptions, the AR tools were found to be more “novel” and “memorable”. More specifically, for both lasagne and yogurt, the visual AR tool resulted in a more memorable experience compared to original packaging. The use of the visual AR and visual and audio AR tools was considered a novel experience for both products. However, the provision of nutritional information had a greater impact on product perception than the specific experimental condition used to present it. These results provide evidence from a pilot study supporting the development of an AR tool for displaying and potentially improving the understanding of nutrition labels.
Full article
Open Access Article
Analyzing Player Behavior in a VR Game for Children Using Gameplay Telemetry
by Mihai-Alexandru Grosu and Stelian Nicola
Multimodal Technol. Interact. 2025, 9(9), 96; https://doi.org/10.3390/mti9090096 - 9 Sep 2025
Abstract
Virtual reality (VR) has become increasingly popular and has started entering homes, schools, and clinics, yet evidence on how children interact during free-form, unguided play remains limited. Understanding how interaction dynamics relate to player performance is essential for designing more accessible and engaging VR experiences, especially in educational contexts. For this reason, we developed VRBloons, a child-friendly VR game about popping balloons. The game logs real-time gameplay telemetry such as total hand movement, accuracy, throw rate, and other performance-related gameplay data. By analyzing several feature-engineered metrics using unsupervised clustering and non-parametric statistical validation, we aim to identify distinct behavioral patterns. The analysis revealed several associations between input preferences, movement patterns, and performance outcomes, forming clearly distinct clusters. Input preference emerged as an independent dimension of play style, supporting the inclusion of redundant input mappings to accommodate diverse motor capabilities. Additionally, the results highlight opportunities for performance-sensitive assistance systems that adapt the difficulty of the game in real time. Overall, this study demonstrates how telemetry-based profiling can shape design decisions in VR experiences, offering a methodological framework for assessing varied interaction styles across a diverse player population.
Full article
Open Access Article
Unplugged Activities in the Development of Computational Thinking with Poly-Universe
by Aldemir Malveira de Oliveira, Piedade Vaz-Rebelo and Maria da Graça Bidarra
Multimodal Technol. Interact. 2025, 9(9), 95; https://doi.org/10.3390/mti9090095 - 9 Sep 2025
Abstract
This paper presents an educational experience of using Poly-Universe, a game created by Janos Saxon, with the aim of developing computational thinking (CT) skills through unplugged activities. It was implemented in the course “Algorithm Analysis,” with the participation of students in the sixth period of Computer Science at a University Center for Higher Education in Brazil. These students were facing various cognitive difficulties in using the four pillars of CT, namely abstraction, pattern recognition, algorithm, and decomposition. To address the students’ learning gaps, unplugged activities were implemented using Poly-Universe pieces—geometric shapes such as triangles, squares, and circles—exploring their connection through the pillars of CT. A mixed methodology integrating quantitative and qualitative approaches was applied to compare the students’ progress and their reactions while developing the activities. The results showed that learning involving the computational pillars in “Algorithm Analysis” improved markedly, from 30% to almost 80% achievement in academic tests. An increase in students’ engagement and collaboration was also registered. Therefore, the implementation of unplugged activities with Poly-Universe promoted skills related to the pillars of CT, especially in the analysis of algorithms.
Full article
Open Access Systematic Review
Augmented Reality in Education Through Collaborative Learning: A Systematic Literature Review
by Georgios Christoforos Kazlaris, Euclid Keramopoulos, Charalampos Bratsas and Georgios Kokkonis
Multimodal Technol. Interact. 2025, 9(9), 94; https://doi.org/10.3390/mti9090094 - 6 Sep 2025
Abstract
The rapid advancement of technology in our era has brought significant changes to various fields of human activity, including education. As a key pillar of intellectual and social development, education integrates innovative tools to enrich learning experiences. One such tool is Augmented Reality (AR), which enables dynamic interaction between physical and digital environments. This systematic review, following PRISMA guidelines, examines AR’s use in education, with a focus on enhancing collaborative learning across various educational levels. A total of 29 peer-reviewed studies published between 2010 and 2024 were selected based on defined inclusion criteria, retrieved from major databases such as Scopus, Web of Science, IEEE Xplore, and ScienceDirect. The findings suggest that AR can improve student engagement and foster collaboration through interactive, immersive methods. However, the review also identifies methodological gaps in current research, such as inconsistent sample size reporting, limited information on questionnaires, and the absence of standardized evaluation approaches. This review contributes to the field by offering a structured synthesis of current research, highlighting critical gaps, and proposing directions for more rigorous, transparent, and pedagogically grounded studies on the integration of AR in collaborative learning environments.
Full article
Open Access Article
Augminded: Ambient Mirror Display Notifications
by Timo Götzelmann, Pascal Karg and Mareike Müller
Multimodal Technol. Interact. 2025, 9(9), 93; https://doi.org/10.3390/mti9090093 - 4 Sep 2025
Abstract
This paper presents a new approach for providing contextual information in real-world environments. Our approach is consciously designed to be low-threshold; by using mirrors as augmented reality surfaces, no devices such as AR glasses or smartphones have to be worn or held by the user. It enables technical and non-technical objects in the environment to be visually highlighted, subtly drawing the attention of people passing by. The presented technology provides information that users can examine in more detail, if desired, by slowing their movement. Users can decide whether the information is relevant to them or not. A prototype system was implemented and evaluated through a user study. The results show a high level of acceptance and intuitive usability of the system, with participants being able to reliably perceive and process the information displayed. The technology thus offers promising potential for the unobtrusive and context-sensitive provision of information in various application areas. The paper discusses limitations of the system and outlines future research directions to further optimize the technology and extend its applicability.
Full article
Open Access Article
Evaluating Educational Game Design Through Human–Machine Pair Inspection: Case Studies in Adaptive Learning Environments
by Ioannis Sarlis, Dimitrios Kotsifakos and Christos Douligeris
Multimodal Technol. Interact. 2025, 9(9), 92; https://doi.org/10.3390/mti9090092 - 1 Sep 2025
Abstract
Educational games often fail to effectively merge game mechanics with educational goals, lacking adaptive feedback and real-time performance monitoring. This study explores how Human–Computer Interaction principles and adaptive feedback can enhance educational game design to improve learning outcomes and user experience. Four educational games were analyzed using a mixed-methods approach and evaluated through established frameworks, such as the Serious Educational Games Evaluation Framework, the Assessment of Learning and Motivation Software, the Learning Object Evaluation Scale for Students, and Universal Design for Learning guidelines. In addition, a novel Human–Machine Pair Inspection protocol was employed to gather real-time data on adaptive feedback, cognitive load, and interactive behavior. Findings suggest that Human–Machine Pair Inspection-based adaptive mechanisms significantly boost personalized learning, knowledge retention, and student motivation by better aligning games with learning objectives. Although the sample size is small, this research provides practical insights for educators and designers, highlighting the effectiveness of adaptive Game-Based Learning. The study proposes the Human–Machine Pair Inspection methodology as a valuable tool for creating educational games that successfully balance user experience with learning goals, warranting further empirical validation with larger groups.
Full article
(This article belongs to the Special Issue Video Games: Learning, Emotions, and Motivation)
Open Access Article
Assessment of the Validity and Reliability of Reaction Speed Measurements Using the Rezzil Player Application in Virtual Reality
by Jacek Polechoński and Agata Horbacz
Multimodal Technol. Interact. 2025, 9(9), 91; https://doi.org/10.3390/mti9090091 - 1 Sep 2025
Abstract
Virtual reality (VR) is widely used across various areas of human life. One field where its application is rapidly growing is sport and physical activity (PA). Training applications are being developed that support various sports disciplines, motor skill acquisition, and the development of motor abilities. Immersive technologies are increasingly being used to assess motor and cognitive capabilities. As such, validation studies of these diagnostic tools are essential. The aim of this study was to estimate the validity and reliability of reaction speed (RS) measurements using the Rezzil Player application (“Reaction” module) in immersive VR compared to results obtained with the SMARTFit device in a real environment (RE). The study involved 43 university students (17 women and 26 men). Both tests required participants to strike light targets on a panel with their hands. Two indicators of response were analyzed in both tests: the number of hits on illuminated targets within a specified time frame and the average RS in response to visual stimuli. Statistically significant and relatively strong correlations were observed between the two measurement methods: number of hits (rS = 0.610; p < 0.001) and average RS (rS = 0.535; p < 0.001). High intraclass correlation coefficients (ICCs) were also found for both test environments: number of hits in VR (ICC = 0.851), average RS in VR (0.844), number of hits in RE (ICC = 0.881), and average RS in RE (0.878). The findings indicate that the Rezzil Player application can be considered a valid and reliable tool for measuring reaction speed in VR. The correlation with conventional methods and the high ICC values attest to the psychometric quality of the tool.
Full article
Open Access Article
Design and Evaluation of a Serious Game Prototype to Stimulate Pre-Reading Fluency Processes in Paediatric Hospital Classrooms
by Juan Pedro Tacoronte-Sosa and María Ángeles Peña-Hita
Multimodal Technol. Interact. 2025, 9(9), 90; https://doi.org/10.3390/mti9090090 - 27 Aug 2025
Abstract
Didactic digital tools can commence, enhance, and strengthen reading fluency in children undergoing long-term hospitalization due to oncology conditions. However, resources specifically designed to support rapid naming and decoding in Spanish remain scarce. This study presents the design, development, and evaluation of a game prototype aimed at addressing this gap among Spanish-speaking preschoolers in hospital settings. Developed using Unity through a design-based research methodology, the game comprises three narratively linked levels targeting rapid naming, decoding, and fluency. A sequential exploratory mixed-methods design (QUAL-quan) guided the evaluation. Qualitative data were obtained from a focus group of hospital teachers (N = 6) and interviews with experts (N = 30) in relevant fields. Quantitative validation involved 274 experts assessing the game’s contextual, pedagogical, and technical quality. The prototype was also piloted with four end-users using standardised tests for rapid naming, decoding, and fluency in Spanish. Results indicated strong expert consensus regarding the game’s educational value, contextual fit, and usability. Preliminary findings suggest potential for fostering and supplementing early literacy skills in hospitalised children. Further research with larger clinical samples is recommended to validate these outcomes.
Full article
(This article belongs to the Special Issue Video Games: Learning, Emotions, and Motivation)
Open Access Article
Cognitive Workload Assessment in Aerospace Scenarios: A Cross-Modal Transformer Framework for Multimodal Physiological Signal Fusion
by Pengbo Wang, Hongxi Wang and Heming Zhang
Multimodal Technol. Interact. 2025, 9(9), 89; https://doi.org/10.3390/mti9090089 - 26 Aug 2025
Abstract
In the field of cognitive workload assessment for aerospace training, existing methods exhibit significant limitations in unimodal feature extraction and in leveraging complementary synergy among multimodal signals, while current fusion paradigms struggle to effectively capture nonlinear dynamic coupling characteristics across modalities. This study proposes DST-Net (Cross-Modal Downsampling Transformer Network), which synergistically integrates pilots’ multimodal physiological signals (electromyography, electrooculography, electrodermal activity) with flight dynamics data through an Anti-Aliasing and Average Pooling LSTM (AAL-LSTM) data fusion strategy combined with cross-modal attention mechanisms. Evaluation on the “CogPilot” dataset for flight task difficulty prediction demonstrates that AAL-LSTM achieves substantial performance improvements over existing approaches (AUC = 0.97, F1 Score = 94.55). Because sensor data are frequently missing from this dataset, the study further enhances the simulated flight experiments. By incorporating eye-tracking features via cross-modal attention mechanisms, the upgraded DST-Net framework achieves even higher performance (AUC = 0.998, F1 Score = 97.95) and reduces the root mean square error (RMSE) of cumulative flight error prediction to 1750. These advancements provide critical support for safety-critical aviation training systems.
Full article
(This article belongs to the Special Issue Human-AI Collaborative Interaction Design: Rethinking Human-Computer Symbiosis in the Age of Intelligent Systems)
Open Access Article
Development of a Multi-Platform AI-Based Software Interface for the Accompaniment of Children
by Isaac León, Camila Reyes, Iesus Davila, Bryan Puruncajas, Dennys Paillacho, Nayeth Solorzano, Marcelo Fajardo-Pruna, Hyungpil Moon and Francisco Yumbla
Multimodal Technol. Interact. 2025, 9(9), 88; https://doi.org/10.3390/mti9090088 - 26 Aug 2025
Abstract
The absence of parental presence has a direct impact on the emotional stability and social routines of children, especially during extended periods of separation from their family environment, as in the case of daycare centers, hospitals, or when they remain alone at home. At the same time, the technology currently available to provide emotional support in these contexts remains limited. In response to the growing need for emotional support and companionship in child care, this project proposes the development of a multi-platform software architecture based on artificial intelligence (AI), designed to be integrated into humanoid robots that assist children between the ages of 6 and 14. The system enables daily verbal and non-verbal interactions intended to foster a sense of presence and personalized connection through conversations, games, and empathetic gestures. Built on the Robot Operating System (ROS), the software incorporates modular components for voice command processing, real-time facial expression generation, and joint movement control. These modules allow the robot to hold natural conversations, display dynamic facial expressions on its LCD (Liquid Crystal Display) screen, and synchronize gestures with spoken responses. Additionally, a graphical interface enhances the coherence between dialogue and movement, thereby improving the quality of human–robot interaction. Initial evaluations conducted in controlled environments assessed the system’s fluency, responsiveness, and expressive behavior. Subsequently, it was implemented in a pediatric hospital in Guayaquil, Ecuador, where it accompanied children during their recovery. It was observed that this type of artificial intelligence-based software can significantly enhance the experience of children, opening promising opportunities for its application in clinical, educational, recreational, and other child-centered settings.
Full article
(This article belongs to the Special Issue Human-AI Collaborative Interaction Design: Rethinking Human-Computer Symbiosis in the Age of Intelligent Systems)
Open Access Article
3D Printing as a Multimodal STEM Learning Technology: A Survey Study in Second Chance Schools
by Despina Radiopoulou, Antreas Kantaros, Theodore Ganetsos and Paraskevi Zacharia
Multimodal Technol. Interact. 2025, 9(9), 87; https://doi.org/10.3390/mti9090087 - 24 Aug 2025
Abstract
This study explores the integration of 3D printing technology by adult learners in Greek Second Chance Schools (SCS), institutions designed to address Early School Leaving and promote Lifelong Learning. Grounded in constructivist and experiential learning theories, the research examines adult learners’ attitudes toward 3D printing technology through a hands-on STEM activity in the context of teaching scientific literacy. The instructional activity was centered on a physics experiment illustrating Archimedes’ principle using a multimodal approach, combining 3D computer modeling for visualization and design with tangible manipulation of a printed object, thereby offering both digital and hands-on learning experiences. Quantitative data were collected using a structured questionnaire to assess participants’ perceptions of 3D printing technology. Findings indicate a positive trend in adult learners’ responses, with participants finding 3D printing accessible, interesting, and easy to use. While participants expressed hesitation about applying the technology independently in the future, overall responses suggest strong interest and openness to using emerging technologies within educational settings, even among marginalized adult populations. This work highlights the value of integrating emerging technologies into alternative education frameworks, offers a replicable model for inclusive STEM education, and lays the groundwork for further research in adult learning environments using innovative, learner-centered approaches.
Full article
Open Access Article
Telerehabilitation Strategy for University Students with Back Pain Based on 3D Animations: Case Study
by Carolina Ponce-Ibarra, Diana-Margarita Córdova-Esparza, Teresa García-Ramírez, Julio-Alejandro Romero-González, Juan Terven, Mauricio Arturo Ibarra-Corona and Rolando Pérez Palacios-Bonilla
Multimodal Technol. Interact. 2025, 9(9), 86; https://doi.org/10.3390/mti9090086 - 24 Aug 2025
Abstract
Nowadays, the use of technology has become increasingly indispensable, leading to prolonged exposure to computers and other screen devices. This situation is common in work areas related to Information and Communication Technologies (ICTs), where people spend long hours in front of a computer. This exposure has been associated with the development of musculoskeletal disorders, among which nonspecific back pain is particularly prevalent. This observational study presents the design of a telerehabilitation strategy based on 3D animations, which is aimed at enhancing the musculoskeletal health of individuals working or studying in ICT-related fields. The intervention was developed through the Moodle platform and designed using the ADDIE instructional model, incorporating educational content and therapeutic exercises adapted to digital ergonomics. The sample included university students in the field of computer science who were experiencing symptoms associated with prolonged computer use. After a four-week intervention period, the results show favorable changes in pain perception and knowledge of postural hygiene. These findings suggest that a distance-based educational and therapeutic strategy may be a useful approach for the prevention and treatment of back pain in academic settings.
Full article
(This article belongs to the Special Issue uHealth Interventions and Digital Therapeutics for Better Diseases Prevention and Patient Care)
Open Access Article
Do Novices Struggle with AI Web Design? An Eye-Tracking Study of Full-Site Generation Tools
by Chen Chu, Jianan Zhao and Zhanxun Dong
Multimodal Technol. Interact. 2025, 9(9), 85; https://doi.org/10.3390/mti9090085 - 22 Aug 2025
Abstract
AI-powered full-site web generation tools promise to democratize website creation for novice users. However, their actual usability and accessibility for such users remain insufficiently studied. This study examines interaction barriers faced by novice users when using Wix ADI to complete three tasks: Task 1 (onboarding), Task 2 (template customization), and Task 3 (product page creation). Twelve participants with no web design background were recruited to perform these tasks while their behavior was recorded via screen capture and eye-tracking (Tobii Glasses 2), supplemented by post-task interviews. Task completion rates declined significantly in Tasks 2 (66.67%) and 3 (33.33%). Help-seeking behaviors increased significantly, particularly during template customization and product page creation. Eye-tracking data indicated elevated cognitive load in later tasks, with fixation count and saccade count peaking in Task 2 and pupil diameter peaking in Task 3. Qualitative feedback identified core challenges such as interface ambiguity, limited transparency in AI control, and disrupted task logic. These findings reveal a gap between AI tool affordances and novice user needs, underscoring the importance of interface clarity, editable transparency, and adaptive guidance. As full-site generators increasingly target general users, lowering barriers for novice audiences is essential for equitable access to web creation.
Full article
(This article belongs to the Special Issue Human-AI Collaborative Interaction Design: Rethinking Human-Computer Symbiosis in the Age of Intelligent Systems)
Open Access Systematic Review
Systematic Review of Artificial Intelligence in Education: Trends, Benefits, and Challenges
by Juan Garzón, Eddy Patiño and Camilo Marulanda
Multimodal Technol. Interact. 2025, 9(8), 84; https://doi.org/10.3390/mti9080084 - 20 Aug 2025
Abstract
Artificial intelligence (AI) is changing how we teach and learn, generating excitement and concern about its potential to transform education. To contribute to the debate, this systematic literature review examines current research trends (publication year, country of study, publication journal, education level, education field, and AI type), as well as the benefits and challenges of integrating AI into education. This review analyzed 155 peer-reviewed empirical studies published between 2015 and 2025. The review reveals a significant increase in research activity since 2022, reflecting the impact of generative AI tools, such as ChatGPT. Studies highlight a range of benefits, including enhanced learning outcomes, personalized instruction, and increased student motivation. However, there are challenges to overcome, such as students’ ethical use of AI, teachers’ resistance to using AI systems, and the digital dependency these systems can generate. These findings show AI’s potential to enhance education; however, its success depends on careful implementation and collaboration among educators, researchers, and policymakers to ensure meaningful and equitable outcomes.
Full article
(This article belongs to the Special Issue Online Learning to Multimodal Era: Interfaces, Analytics and User Experiences)
Open Access Review
Homo smartphonus: Psychological Aspects of Smartphone Use—A Literature Review
by
Piotr Sorokowski and Marta Sobczak
Multimodal Technol. Interact. 2025, 9(8), 83; https://doi.org/10.3390/mti9080083 - 19 Aug 2025
Abstract
The increasing prevalence of smartphone use has raised concerns about its impact on human psychological functioning. This literature review provides a comprehensive overview of the psychological dimensions influenced by smartphone use, spanning health psychology, individual differences, social psychology, and cognitive functioning. The review draws on findings from numerous studies, primarily conducted in highly developed Western and Asian countries, where cultural factors may influence usage patterns and psychological outcomes. Key limitations in the current body of research include geographical biases and methodological challenges such as sample homogeneity and reliance on self-report measures. Evidence suggests that excessive smartphone use can lead to addiction and is associated with negative psychological and health consequences. The review also highlights how individual differences—such as personality traits, age, and gender—affect smartphone usage. Social implications, both positive (e.g., increased connectivity) and negative (e.g., interpersonal conflict), are explored in depth. Cognitive effects are considered, particularly in relation to attention and memory, where findings suggest potential impairments in sustained focus and information retention. While the literature often emphasizes risks, this review also points to the need for further exploration of the potential benefits of smartphone use. In summary, the review offers valuable insights into the complex psychological effects of smartphones and underscores the importance of future research to better understand their nuanced impact on well-being.
Full article
Open Access Review
Perception and Monitoring of Sign Language Acquisition for Avatar Technologies: A Rapid Focused Review (2020–2025)
by
Khansa Chemnad and Achraf Othman
Multimodal Technol. Interact. 2025, 9(8), 82; https://doi.org/10.3390/mti9080082 - 14 Aug 2025
Abstract
Sign language avatar systems have emerged as a promising solution to bridge communication gaps where human sign language interpreters are unavailable. However, the design of these avatars often fails to account for the diversity in how users acquire and perceive sign language. This study presents a rapid review of 17 empirical studies (2020–2025) to synthesize how linguistic and cognitive variability affects sign language perception and how these findings can guide avatar development. We extracted and synthesized key constructs, participant profiles, and capture techniques relevant to avatar fidelity. This review finds that delayed exposure to sign language is consistently linked to persistent challenges in syntactic processing, classifier use, and avatar comprehension. In contrast, early-exposed signers demonstrate more robust parsing and greater tolerance of perceptual irregularities. Key perceptual features, such as smooth transitions between signs, expressive facial cues for grammatical clarity, and consistent spatial placement of referents, emerge as critical for intelligibility, particularly for late learners. These findings highlight the importance of participatory design and user-centered validation in advancing accessible, culturally responsive human–computer interaction through next-generation avatar systems.
Full article

Open Access Article
Organizing Relational Complexity—Design of Interactive Complex Systems
by
Linus de Petris and Siamak Khatibi
Multimodal Technol. Interact. 2025, 9(8), 81; https://doi.org/10.3390/mti9080081 - 12 Aug 2025
Abstract
With the advent of AI and robotic systems, the current Human–Computer Interaction (HCI) paradigm, which treats interaction as a transactional exchange, is increasingly insufficient for complex socio-technical systems. This paper argues for a shift toward an agential realist perspective, which understands interaction not as an exchange between separate entities, but as a phenomenon continuously enacted through dynamic, material-discursive practices known as ‘intra-actions’. Through a diffractive reading of agential realism, HCI, complex systems theory, and an empirical case study of a touring exhibition on skateboarding culture, this paper explores an alternative approach. A key finding emerged from a sound-recording workshop when a participant described the recordings not as “how it sounds,” but as “how it feels” to skate. This finding reveals the limits of traditional HCI and illustrates how interacting parts are co-constituted through the intra-actions of entangled agencies. An argument is made that design for interactive complex systems should shift from causal, transactional interaction toward organizing relational complexity: staging the conditions for a rich scope of emergent encounters to unfold. The paper concludes by suggesting further research into non-causal explanation and computation.
Full article

Open Access Article
Space Medicine Meets Serious Games: Boosting Engagement with the Medimon Creature Collector
by
Martin Hundrup, Jessi Holte, Ciara Bordeaux, Emma Ferguson, Joscelyn Coad, Terence Soule and Tyler Bland
Multimodal Technol. Interact. 2025, 9(8), 80; https://doi.org/10.3390/mti9080080 - 7 Aug 2025
Abstract
Serious games that integrate educational content with engaging gameplay mechanics hold promise for reducing cognitive load and increasing student motivation in STEM and health science education. This preliminary study presents the development and evaluation of the Medimon NASA Demo, a game-based learning prototype designed to teach undergraduate students about the musculoskeletal and visual systems—two critical domains in space medicine. Participants (n = 23) engaged with the game over a two-week self-regulated learning period. The game employed mnemonic-based characters, visual storytelling, and turn-based battle mechanics to reinforce medical concepts. Quantitative results demonstrated significant learning gains, with posttest scores increasing by an average of 23% and a normalized change of c = 0.4. Engagement levels were high across multiple dimensions of situational interest, and 74% of participants preferred the game over traditional formats. Qualitative analysis of open-ended responses revealed themes related to intrinsic appeal, perceived learning efficacy, interaction design, and cognitive resource management. While the game had minimal impact on short-term STEM career interest, its educational potential was clearly supported. These findings suggest that mnemonic-driven serious games like Medimon can effectively enhance engagement and learning in health science education, especially when aligned with real-world contexts such as space medicine.
Full article
(This article belongs to the Special Issue Video Games: Learning, Emotions, and Motivation)
Open Access Article
Evaluating Spatial Decision-Making and Player Experience in a Remote Multiplayer Augmented Reality Hide-and-Seek Game
by
Yasas Sri Wickramasinghe, Heide Karen Lukosch, James Everett and Stephan Lukosch
Multimodal Technol. Interact. 2025, 9(8), 79; https://doi.org/10.3390/mti9080079 - 31 Jul 2025
Abstract
This study investigates how remote multiplayer gameplay, enabled through Augmented Reality (AR), transforms spatial decision-making and enhances player experience in a location-based augmented reality game (LBARG). A remote multiplayer handheld-based AR game was designed and evaluated on how it influences players’ spatial decision-making strategies, engagement, and gameplay experience. In a user study involving 60 participants, we compared remote gameplay in our AR game with traditional hide-and-seek. We found that AR significantly transforms traditional gameplay by introducing different spatial interactions, which enhanced spatial decision-making and collaboration. Our results also highlight the potential of AR to increase player engagement and social interaction, despite the challenges posed by the added navigation complexities. These findings contribute to the engaging design of future AR games and beyond.
Full article

Open Access Article
A User-Centered Teleoperation GUI for Automated Vehicles: Identifying and Evaluating Information Requirements for Remote Driving and Assistance
by
Maria-Magdalena Wolf, Henrik Schmidt, Michael Christl, Jana Fank and Frank Diermeyer
Multimodal Technol. Interact. 2025, 9(8), 78; https://doi.org/10.3390/mti9080078 - 31 Jul 2025
Cited by 2
Abstract
Teleoperation has emerged as a promising fallback for situations beyond the capabilities of automated vehicles. Nevertheless, teleoperation still faces challenges, such as reduced situational awareness. Since situational awareness is primarily built through the remote operator’s visual perception, the graphical user interface (GUI) design is critical. In addition to the video feed, supplemental informational elements are crucial—not only for the predominantly studied remote driving, but also for emerging desk-based remote assistance concepts. This work develops a GUI for different teleoperation concepts by identifying key informational elements during the teleoperation process through expert interviews (N = 9). Following this, a static and a dynamic GUI prototype were developed and evaluated in a click-dummy study (N = 36), where the dynamic GUI adapts the number of displayed elements to the teleoperation phase. Results show that both GUIs achieve good System Usability Scale (SUS) ratings, with the dynamic GUI significantly outperforming the static version in both usability and task completion time. However, these results might be attributable to a learning effect due to the lack of randomization. The User Experience Questionnaire (UEQ) score shows potential for improvement. To enhance the user experience, the GUI should be evaluated in a follow-up study that includes interaction with a real vehicle.
Full article

News
3 September 2025
Join Us at the MDPI at the University of Toronto Career Fair, 23 September 2025, Toronto, ON, Canada

1 September 2025
MDPI INSIGHTS: The CEO’s Letter #26 – CUJS, Head of Ethics, Open Peer Review, AIS 2025, Reviewer Recognition
Topics
Topic in
Information, Mathematics, MTI, Symmetry
Youth Engagement in Social Media in the Post COVID-19 Era
Topic Editors: Naseer Abbas Khan, Shahid Kalim Khan, Abdul Qayyum
Deadline: 30 September 2025

Special Issues
Special Issue in
MTI
Artificial Intelligence in Medical Radiation Science, Radiology and Radiation Oncology
Guest Editor: Curtise K. C. Ng
Deadline: 30 September 2025
Special Issue in
MTI
Video Games: Learning, Emotions, and Motivation
Guest Editors: Ana Manzano-León, José M. Rodríguez Ferrer
Deadline: 20 October 2025
Special Issue in
MTI
Human-AI Collaborative Interaction Design: Rethinking Human-Computer Symbiosis in the Age of Intelligent Systems
Guest Editor: Qianling Jiang
Deadline: 30 November 2025
Special Issue in
MTI
Multimodal Interaction Design in Immersive Learning and Training Environments
Guest Editors: Qian Li, Yunfei Long, Lei Shi
Deadline: 20 February 2026