Search Results (170)

Search Parameters:
Keywords = touchscreen

12 pages, 775 KB  
Article
Assessment of Fine Motor Abilities Among Children with Spinal Muscular Atrophy Treated with Nusinersen Using a New Touchscreen Application: A Pilot Study
by Inbal Klemm, Alexandra Danial-Saad, Alexis R. Karlin, Rya Nassar-Yassien, Iuliana Eshel, Hagit Levine, Tamar Steinberg and Sharon Aharoni
Children 2025, 12(10), 1378; https://doi.org/10.3390/children12101378 - 12 Oct 2025
Viewed by 249
Abstract
Background/Objectives: Spinal Muscular Atrophy (SMA) is a genetic neurodegenerative disease characterized by severe muscle weakness and atrophy. Advances in disease-modifying therapies have dramatically changed the natural history of SMA and the outcome measures that are used to assess the clinical response to therapy. Standard assessment methods for SMA are limited in their ability to detect minor changes in fine motor abilities and in patients’ daily functions. The aim of this pilot study was to evaluate the feasibility and preliminary use of the Touchscreen-Assessment Tool (TATOO) alongside standardized tools to detect changes in upper extremity motor function among individuals with SMA receiving nusinersen therapy. Methods: Thirteen individuals with genetically confirmed SMA, aged 6–23 years (eight with SMA type 2 and five with SMA type 3), participated. The patients continued maintenance dosing of nusinersen during the study period. They were evaluated at the onset of the study and then twice more, at intervals of at least six months. Upper extremity functional assessments were performed with the TATOO and with standardized tools: the Hand Grip Dynamometer (HGD), Pinch Dynamometer (PD), Revised Upper Limb Module (RULM), and Nine-Hole Peg Test (NHPT). Results: Significant changes in fine motor function were detected using the TATOO together with the other standardized tools. Participants demonstrated notable improvements in hand grip strength and in fine motor performance as measured by the NHPT. The RULM results were not statistically significant for the total study group, particularly in ambulatory patients with SMA type 3. The TATOO provided detailed metrics and revealed improvements in accuracy and speed across various tasks. However, given the small sample size, the lack of a control group, and the lack of a baseline assessment before therapy, these findings should be considered preliminary and exploratory. Conclusions: The findings suggest that the TATOO, alongside traditional assessment tools, offers a sensitive measure of changes in fine motor function in patients with SMA. This study highlights the potential of touchscreen-based assessments to address gaps in current outcome measures and emphasizes the need for larger, multicenter studies that include pre-treatment baseline and control data.
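
The TATOO's metrics are not specified in this listing, so the following is a hypothetical sketch of how per-task accuracy and speed can be derived from raw touch logs; the field names and definitions are invented for illustration.

```python
# Hypothetical sketch: accuracy/speed metrics of the kind a touchscreen
# assessment tool might report. Field names and definitions are invented.
import math

def task_metrics(touches, target_radius):
    """touches: list of dicts with x, y (px offset from the target centre)
    and t (seconds since task start)."""
    hits = sum(1 for p in touches if math.hypot(p["x"], p["y"]) <= target_radius)
    accuracy = hits / len(touches)                     # fraction of touches on target
    duration = touches[-1]["t"] - touches[0]["t"]      # task time span (s)
    speed = (len(touches) - 1) / duration if duration else 0.0  # touches per second
    return accuracy, speed

touches = [{"x": 3, "y": -2, "t": 0.0}, {"x": 12, "y": 9, "t": 0.8},
           {"x": 1, "y": 4, "t": 1.5}]
print(task_metrics(touches, target_radius=10))   # (0.666..., 1.333...)
```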

26 pages, 7995 KB  
Article
Smart Home Control Using Real-Time Hand Gesture Recognition and Artificial Intelligence on Raspberry Pi 5
by Thomas Hobbs and Anwar Ali
Electronics 2025, 14(20), 3976; https://doi.org/10.3390/electronics14203976 - 10 Oct 2025
Viewed by 1203
Abstract
This paper outlines the process of developing a low-cost system for home appliance control via real-time hand gesture classification using computer vision and a custom lightweight machine learning model. The system aims to enable people with speech or hearing disabilities to interface with smart home devices in real time using hand gestures, much as voice-activated ‘smart assistants’ currently allow. The system runs on a Raspberry Pi 5 to enable future IoT integration and reduce costs, and uses the official Camera Module v2 and 7-inch touchscreen. Frame preprocessing uses MediaPipe to assign hand coordinates and NumPy tools to normalise them. A machine learning model then predicts the gesture. The model, a feed-forward network consisting of five fully connected layers, was built using Keras 3 and compiled with TensorFlow Lite. Training data came from the HaGRIDv2 dataset, reduced from its original 23 one- and two-handed gestures to 15 one-handed gestures. Training on this data yielded validation metrics of 0.90 accuracy and 0.31 loss. The system can control both analogue and digital hardware via GPIO pins and, when recognising a gesture, averages 20.4 frames per second with no observable delay.
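
The classification pipeline described above (MediaPipe landmarks, NumPy normalisation, a five-layer fully connected Keras network over 15 gesture classes) can be sketched as follows; the layer widths and normalisation scheme are assumptions, since the abstract does not specify them.

```python
# A minimal sketch of the described pipeline: normalise 21 MediaPipe hand
# landmarks with NumPy, then classify with five fully connected layers.
# Layer widths and the normalisation scheme are assumptions.
import numpy as np
import tensorflow as tf

def normalise(landmarks):
    """landmarks: (21, 2) array of (x, y) hand keypoints from MediaPipe."""
    pts = landmarks - landmarks[0]          # translate so the wrist is the origin
    scale = np.max(np.abs(pts)) or 1.0      # scale into [-1, 1]
    return (pts / scale).flatten()          # 42-element feature vector

model = tf.keras.Sequential([
    tf.keras.Input(shape=(42,)),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(15, activation="softmax"),   # 15 one-handed gestures
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# For on-device deployment, the trained model can be converted with
# tf.lite.TFLiteConverter.from_keras_model(model).
```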

26 pages, 617 KB  
Review
Mobile Typing as a Window into Sensorimotor and Cognitive Function
by Lorenzo Viviani, Alba Liso and Laila Craighero
Brain Sci. 2025, 15(10), 1084; https://doi.org/10.3390/brainsci15101084 - 7 Oct 2025
Viewed by 409
Abstract
The rapid evolution of human–technology interaction necessitates continuous sensorimotor adaptation to new digital interfaces and tasks. Mobile typing, defined as text entry on smartphone touchscreens, offers a compelling example of this process, requiring users to adapt fine motor control and coordination to a constrained virtual environment. Within the embodied cognition framework, understanding these digital sensorimotor experiences is crucial. A key theoretical question is whether these skills primarily involve adaptation of existing motor patterns or necessitate de novo learning, a distinction particularly relevant across generations with differing early sensorimotor experiences. This narrative review synthesizes current understanding of the sensorimotor aspects of smartphone engagement and of methods for evaluating typing skill. It examines touchscreen competence, skill acquisition, the diverse strategies employed, and the influence of interface constraints on motor performance, while also detailing various performance metrics and analyzing different data collection methodologies. Research highlights that analyzing typing behaviors and their underlying neural correlates increasingly serves as a potential source of behavioral biomarkers. However, while notable progress has been made, the field is still developing and requires stronger methodological foundations and standardization of metrics and protocols to fully capture the dynamic sensorimotor processes involved in digital interactions. Nevertheless, mobile typing emerges as a compelling model for advancing our understanding of human sensorimotor learning and cognitive function, offering a rich, ecologically valid platform for investigating human–world interaction.
(This article belongs to the Section Sensory and Motor Neuroscience)
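
As an illustration of the kind of typing metrics such reviews discuss, here is a minimal sketch computing inter-key intervals and words per minute from keystroke timestamps; the metric choices and the five-characters-per-word convention are assumptions, not taken from this review.

```python
# Illustrative sketch: two keystroke-dynamics metrics commonly derived
# from touchscreen typing logs, given keydown timestamps in seconds.
import numpy as np

def typing_metrics(timestamps, n_chars):
    t = np.asarray(timestamps, dtype=float)
    ikis = np.diff(t)                        # inter-key intervals (s)
    minutes = (t[-1] - t[0]) / 60.0
    wpm = (n_chars / 5.0) / minutes          # standard 5-chars-per-word rule
    return {"mean_iki": ikis.mean(), "iki_sd": ikis.std(), "wpm": wpm}

print(typing_metrics([0.00, 0.21, 0.45, 0.60, 0.92, 1.10], n_chars=6))
```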

29 pages, 14762 KB  
Article
Design and Validation of PACTUS 2.0: Usability for Neurological Patients, Seniors and Caregivers
by Juan J. Sánchez-Gil, Aurora Sáez, Juan José Ochoa-Sepúlveda, Rafael López-Luque, David Cáceres-Gómez and Eduardo Cañete-Carmona
Sensors 2025, 25(19), 6158; https://doi.org/10.3390/s25196158 - 4 Oct 2025
Viewed by 494
Abstract
Stroke is one of the leading causes of disability worldwide. Its sequelae require early, intensive, and repetitive rehabilitation, which is often ineffective due to a lack of patient motivation. Gamification has been incorporated in recent years as a response to this issue: games are used to motivate patients to perform therapeutic exercises. This study presents PACTUS 2.0, a new version of a gamified device for stroke neurorehabilitation. Using a series of colored cards, a touchscreen station, and a sensorized handle with an RGB sensor, patients can interact with three games specifically programmed to work on different areas of neurorehabilitation. In addition to the technical design (including energy consumption and sensor signal processing), the results of an observational study conducted with neurological patients, healthy older adults, and caregivers (who also completed the System Usability Scale) are presented. This usability, safety, and satisfaction study provided an assessment of the device for future iterations. Including the experiences of the three groups (patients, caregivers, and older adults) gave a more comprehensive and integrated view of the device, enriching our understanding of its strengths and limitations. Although the results were preliminarily positive, areas for improvement were identified.
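
A hypothetical sketch of the colour-card detection step implied above: match an RGB reading from the sensorized handle to the nearest reference card colour. The reference values and matching rule are invented for illustration.

```python
# Hypothetical sketch of colour-card detection: nearest reference colour
# by Euclidean distance. Reference RGB values are invented.
import numpy as np

REFERENCE_CARDS = {
    "red":    (200, 40, 40),
    "green":  (40, 180, 60),
    "blue":   (40, 60, 200),
    "yellow": (210, 200, 50),
}

def classify_card(rgb):
    readings = np.array(list(REFERENCE_CARDS.values()), dtype=float)
    dists = np.linalg.norm(readings - np.asarray(rgb, dtype=float), axis=1)
    return list(REFERENCE_CARDS)[int(np.argmin(dists))]

print(classify_card((190, 55, 48)))   # -> "red"
```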

23 pages, 1255 KB  
Article
Using Android Smartphones to Collect Precise Measures of Reaction Times to Multisensory Stimuli
by Ulysse Roussel, Emmanuel Fléty, Carlos Agon, Isabelle Viaud-Delmon and Marine Taffou
Sensors 2025, 25(19), 6072; https://doi.org/10.3390/s25196072 - 2 Oct 2025
Viewed by 526
Abstract
Multisensory behavioral research increasingly aims to move beyond traditional laboratories and into real-world settings. Smartphones offer a promising platform for this purpose, but their use in psychophysical experiments requires rigorous validation of their ability to precisely present multisensory stimuli and record reaction times (RTs). To date, no study has systematically assessed the feasibility of conducting RT-based multisensory paradigms on smartphones. In this study, we developed a reproducible validation method to quantify smartphones’ temporal precision in synchronized auditory–tactile stimulus delivery and RT logging. Applying this method to five Android devices, we identified two with sufficient precision. We also introduced a technique to enhance RT measurement by combining touchscreen and accelerometer data, effectively doubling the measurement resolution from 8.33 ms (limited by a 120 Hz refresh rate) to 4 ms. Using a top-performing device identified through our validation, we conducted an audio–tactile RT experiment with 20 healthy participants. Looming sounds were presented through headphones during a tactile detection task. Results showed that looming sounds reduced tactile RTs by 20–25 ms compared to static sounds, replicating a well-established multisensory effect linked to peripersonal space. These findings provide a robust method for validating smartphones for cognitive research and demonstrate that high-precision audio–tactile paradigms can be reliably implemented on mobile devices. This work lays the groundwork for rigorous, scalable, and ecologically valid multisensory behavioral studies in naturalistic environments, expanding participant reach and enhancing the relevance of multisensory research.
(This article belongs to the Special Issue Emotion Recognition and Cognitive Behavior Analysis Based on Sensors)
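
One plausible reading of the touchscreen-plus-accelerometer fusion is to take the earlier of the two detections for each trial, since the accelerometer can register the finger impact on a denser clock than the 120 Hz display. The sketch below illustrates that idea only; thresholds, sample rates, and the exact fusion rule used by the authors are assumptions.

```python
# Illustrative sketch: fuse a display-quantised touch event with an
# accelerometer impact detection and keep the earlier one. Thresholds
# and sample rates are assumptions for illustration.
import numpy as np

def fused_rt(stim_t, touch_t, accel_t, accel_mag, impact_thresh=1.5):
    """stim_t: stimulus onset (s); touch_t: touchscreen event time (s);
    accel_t/accel_mag: accelerometer timestamps (s) and |a| samples (g)."""
    above = np.nonzero(accel_mag > impact_thresh)[0]
    impact_t = accel_t[above[0]] if above.size else touch_t
    return min(touch_t, impact_t) - stim_t

accel_t = np.arange(250) * 0.002               # simulated 500 Hz accelerometer
accel_mag = np.ones_like(accel_t)
accel_mag[139:] = 2.0                          # simulated impact at ~278 ms
print(fused_rt(stim_t=0.0, touch_t=0.2833,     # touch quantised to 120 Hz
               accel_t=accel_t, accel_mag=accel_mag))   # -> ~0.278
```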

21 pages, 3036 KB  
Article
Infrared Thermography and Deep Learning Prototype for Early Arthritis and Arthrosis Diagnosis: Design, Clinical Validation, and Comparative Analysis
by Francisco-Jacob Avila-Camacho, Leonardo-Miguel Moreno-Villalba, José-Luis Cortes-Altamirano, Alfonso Alfaro-Rodríguez, Hugo-Nathanael Lara-Figueroa, María-Elizabeth Herrera-López and Pablo Romero-Morelos
Technologies 2025, 13(10), 447; https://doi.org/10.3390/technologies13100447 - 2 Oct 2025
Viewed by 609
Abstract
Arthritis and arthrosis are prevalent joint diseases that cause pain and disability, and their early diagnosis is crucial for preventing irreversible damage. Conventional diagnostic methods such as X-ray, ultrasound, and MRI have limitations in early detection, prompting interest in alternative techniques. This work presents the design and clinical evaluation of a prototype device for non-invasive early diagnosis of arthritis (inflammatory joint disease) and arthrosis (osteoarthritis) using infrared thermography and deep neural networks. The portable prototype integrates a Raspberry Pi 4 microcomputer, an infrared thermal camera, and a touchscreen interface, all housed in a 3D-printed PLA enclosure. A custom Flask-based application enables two operational modes: (1) thermal image acquisition for training data collection, and (2) automated diagnosis using a pre-trained ResNet50 deep learning model. A clinical study was conducted at a university clinic in a temperature-controlled environment with 100 subjects (70% with arthritic conditions and 30% healthy). Thermal images of both hands (four images per hand) were captured for each participant, and all patients provided informed consent. The ResNet50 model was trained to classify three classes (healthy, arthritis, and arthrosis) from these images. Results show that the system can effectively distinguish healthy individuals from those with joint pathologies, achieving an overall test accuracy of approximately 64%. The model identified healthy hands with high confidence (100% sensitivity for the healthy class), but it struggled to differentiate between arthritis and arthrosis, often misclassifying one as the other. Multiclass ROC (Receiver Operating Characteristic) analysis further showed excellent discrimination between healthy and diseased groups (Area Under the Curve, AUC, ~1.00), but lower performance between the arthrosis and arthritis classes (AUC ~0.60–0.68). Despite these challenges, the device demonstrates the feasibility of AI-assisted thermographic screening: it is completely non-invasive, radiation-free, and low-cost, and it provides results in real time. In the discussion, we compare this thermography-based approach with conventional diagnostic modalities and highlight its advantages, such as early detection of physiological changes, portability, and patient comfort. While not intended to replace established methods, this technology can serve as an early warning and triage tool in clinical settings. In conclusion, the proposed prototype represents an innovative application of infrared thermography and deep learning for joint disease screening. With further improvements in classification accuracy and broader validation, such systems could significantly augment current clinical practice by enabling rapid and non-invasive early diagnosis of arthritis and arthrosis.
(This article belongs to the Section Assistive Technologies)
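
A minimal sketch of the described classifier, assuming a standard transfer-learning setup: a pre-trained ResNet50 backbone with a new three-class head (healthy, arthritis, arthrosis). Input size and head layers are assumptions not given in the abstract.

```python
# Minimal transfer-learning sketch for the three-class thermography task.
# Input size and head layers are assumptions.
import tensorflow as tf

base = tf.keras.applications.ResNet50(
    weights="imagenet", include_top=False, input_shape=(224, 224, 3))
base.trainable = False                       # train only the new head

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(3, activation="softmax"),  # healthy / arthritis / arthrosis
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```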

15 pages, 3751 KB  
Article
Local Structural Changes in High-Alumina, Low-Lithium Glass-Ceramics During Crystallization
by Minghan Li, Yan Pan, Shuguang Wei, Yanping Ma, Chuang Dong, Hongxun Hao and Hong Jiang
Nanomaterials 2025, 15(18), 1449; https://doi.org/10.3390/nano15181449 - 20 Sep 2025
Viewed by 485
Abstract
In this study, we investigate the phase transition process during crystallization of high-alumina, low-lithium glass-ceramics (ZnO-MgO-Li2O-SiO2-Al2O3). The differential scanning calorimetry and high-temperature X-ray diffraction results show that approximately 10 wt.% of (Zn, Mg)Al2O4 crystals precipitated when the heat treatment temperature reached 850 °C, indicating that a large number of nuclei had already formed during the earlier stages of heat treatment. Field emission transmission electron microscopy of the microstructure of glass-ceramics after staged heat treatment revealed that cation migration occurred during the nucleation process. Zn and Mg aggregated around Al to form (Zn, Mg)Al2O4 nuclei, which provided sites for crystal growth. Moreover, high-valence Zr aggregated outside the glass network, leading to the formation of nanocrystals. Raman spectroscopy of samples at different stages of crystallization revealed that during spinel precipitation, the Q3 and Q4 structural units in the glass network increased significantly, along with the number of bridging oxygens. Highly coordinated Al originally present in the network mainly participated in spinel nucleation, effectively suppressing the subsequent formation of LixAlxSi1−xO2, which resulted in the successful preparation of glass-ceramics with (Zn, Mg)Al2O4 and ZrO2 as the main crystalline phases. The grains in this glass-ceramic are all nanocrystals. Its Vickers hardness and flexural strength reach up to 875 HV and 350 MPa, respectively, while the visible light transmittance reaches 81.5%. This material shows potential for applications in touchscreen protection, aircraft and high-speed train windshields, and related fields.
(This article belongs to the Section Inorganic Materials and Metal-Organic Frameworks)

28 pages, 3520 KB  
Systematic Review
Diagnostic Accuracy of Touchscreen-Based Tests for Mild Cognitive Disorders: A Systematic Review and Meta-Analysis
by Nathavy Um Din, Florian Maronnat, Bruno Oquendo, Sylvie Pariel, Carmelo Lafuente-Lafuente, Fadi Badra and Joël Belmin
Diagnostics 2025, 15(18), 2383; https://doi.org/10.3390/diagnostics15182383 - 18 Sep 2025
Viewed by 586
Abstract
Background/Objectives: Mild neurocognitive disorder (mNCD) is a state of vulnerability in which individuals exhibit cognitive deficits, identified by cognitive testing, that do not interfere with their ability to perform daily activities independently. New touchscreen tools have been designed for cognitive assessment and are at an advanced stage of development, but their clinical relevance is still unclear. We aimed to identify digital tools used in the diagnosis of mNCD and to assess their diagnostic performance. Methods: In a systematic review, we searched four databases (PubMed, Embase, Web of Science, IEEE Xplore). From 6516 studies retrieved, we included 50 articles in which a touchscreen tool was used to assess cognitive function in older adults. Study quality was assessed using the QUADAS-II scale. Data from 34 articles were appropriate for meta-analysis and were analyzed using the bivariate random-effects method (STATA software version 19). Results: The 50 articles in the review totaled 5974 participants, and the 34 in the meta-analysis, 4500 participants. Pooled sensitivity and specificity were 0.81 (95% CI: 0.78 to 0.84) and 0.83 (95% CI: 0.79 to 0.86), respectively. High heterogeneity among the studies led us to examine test performance across key characteristics in a subgroup analysis. Tests that are short and self-administered on a touchscreen tablet perform as well as longer tests administered by an assessor or on a fixed device. Conclusions: Cognitive testing with a touchscreen tablet is appropriate for screening for mNCD. Further studies are needed to determine its clinical utility in screening for mNCD in primary care settings and referral to specialized care. This research received no external funding and is registered with PROSPERO under the number CRD42022358725.
(This article belongs to the Section Clinical Diagnosis and Prognosis)
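
For illustration only: the authors pooled sensitivity and specificity with a bivariate random-effects model in STATA. As a much simpler stand-in (not the authors' method), the sketch below pools logit-transformed proportions with inverse-variance weights, a fixed-effect approximation.

```python
# Simplified stand-in for diagnostic meta-analysis pooling: inverse-variance
# pooling of logit-transformed proportions. This is NOT the bivariate
# random-effects model the review used; it only illustrates the idea.
import numpy as np

def pool_logit(events, totals):
    events, totals = np.asarray(events, float), np.asarray(totals, float)
    p = (events + 0.5) / (totals + 1.0)        # continuity correction
    logit = np.log(p / (1 - p))
    var = 1.0 / (totals * p * (1 - p))         # approx. variance of the logit
    w = 1.0 / var
    pooled = np.sum(w * logit) / np.sum(w)
    return 1.0 / (1.0 + np.exp(-pooled))       # back-transform to a proportion

# true positives and diseased counts from three hypothetical studies
print(pool_logit(events=[40, 55, 33], totals=[50, 70, 40]))   # pooled sensitivity
```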

31 pages, 5071 KB  
Article
Feasibility of an AI-Enabled Smart Mirror Integrating MA-rPPG, Facial Affect, and Conversational Guidance in Realtime
by Mohammad Afif Kasno and Jin-Woo Jung
Sensors 2025, 25(18), 5831; https://doi.org/10.3390/s25185831 - 18 Sep 2025
Viewed by 818
Abstract
This paper presents a real-time smart mirror system combining multiple AI modules for multimodal health monitoring. The proposed platform integrates three core components: facial expression analysis, remote photoplethysmography (rPPG), and conversational AI. A key innovation lies in transforming the Moving Average rPPG (MA-rPPG) model—originally developed for offline batch processing—into a real-time, continuously streaming setup, enabling seamless heart rate and peripheral oxygen saturation (SpO2) monitoring using standard webcams. The system also incorporates the DeepFace facial analysis library for live emotion and age detection, and a Generative Pre-trained Transformer 4o (GPT-4o)-based mental health chatbot with bilingual (English/Korean) support and voice synthesis. Embedded into a touchscreen mirror with a graphical user interface (GUI), this solution delivers ambient, low-interruption interaction and real-time user feedback. By unifying these AI modules within an interactive smart mirror, our findings demonstrate the feasibility of integrating multimodal sensing (rPPG, affect detection) and conversational AI into a real-time smart mirror platform. This system is presented as a feasibility-stage prototype to promote real-time health awareness and empathetic feedback. The physiological validation was limited to a single subject, and the user evaluation constituted only a small formative assessment; therefore, results should be interpreted strictly as preliminary feasibility evidence. The system is not intended to provide clinical diagnosis or generalizable accuracy at this stage.
(This article belongs to the Special Issue Sensors and Sensing Technologies for Social Robots)
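
The batch-to-streaming conversion can be pictured as a sliding window that is re-analysed on every frame. The sketch below shows that structure with a plain FFT-based heart rate estimate; it is not the MA-rPPG model itself, and the window length and frequency band are assumptions.

```python
# Streaming-structure sketch: a sliding window of per-frame skin-pixel
# means, re-analysed each frame. The FFT heart-rate estimate is a generic
# stand-in, not the MA-rPPG model.
import collections
import numpy as np

FPS = 30
window = collections.deque(maxlen=FPS * 10)    # ~10 s sliding window

def push_frame(green_mean):
    """Call once per webcam frame with the mean green value of the face ROI."""
    window.append(green_mean)
    if len(window) < window.maxlen:
        return None                             # warm-up: window not full yet
    sig = np.asarray(window) - np.mean(window)
    freqs = np.fft.rfftfreq(len(sig), d=1.0 / FPS)
    power = np.abs(np.fft.rfft(sig)) ** 2
    band = (freqs >= 0.7) & (freqs <= 4.0)      # 42-240 bpm physiological band
    return 60.0 * freqs[band][np.argmax(power[band])]

# simulate a 72 bpm pulse in the green channel
for t in np.arange(0, 12, 1 / FPS):
    bpm = push_frame(np.sin(2 * np.pi * 1.2 * t))
print(round(bpm, 1))                            # ~72.0
```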

26 pages, 6642 KB  
Project Report
Designing Augmented Reality for Preschoolers: Lessons from Co-Designing a Spatial Learning App
by Ashley E. Lewis Presser, Jillian Orr, Sarah Nixon Gerard, Emily Braham, Nolan Manning and Kevin Lesniewicz
Educ. Sci. 2025, 15(9), 1195; https://doi.org/10.3390/educsci15091195 - 11 Sep 2025
Viewed by 795
Abstract
Technology offers both benefits and challenges in education, especially with augmented reality (AR), which enables interaction with digital characters in real environments. While spatial reasoning is crucial, it is often neglected in preschool due to limited access to suitable curricula and tools. Designing effective AR experiences for young children demands a different approach than traditional touchscreen methods, as it depends on the child’s environment, movements, and abilities, requiring designs that support learning even with limited resources. This tailored approach ensures that AR can be a powerful tool in early childhood education, promoting essential skills in an engaging manner. This design case details the development of an AR tablet app aimed at enhancing preschoolers’ spatial-thinking skills. It includes insights gained from co-designing and testing with teachers and children, how research findings led to app revisions, and the potential benefits of using AR technology for young learners.

15 pages, 2127 KB  
Article
Accessible Interface for Museum Geological Exhibitions: PETRA—A Gesture-Controlled Experience of Three-Dimensional Rocks and Minerals
by Andrei Ionuţ Apopei
Minerals 2025, 15(8), 775; https://doi.org/10.3390/min15080775 - 24 Jul 2025
Cited by 1 | Viewed by 885
Abstract
The increasing integration of 3D technologies and machine learning is fundamentally reshaping mineral sciences and cultural heritage, establishing the foundation for an emerging “Mineralogy 4.0” framework. However, public engagement with digital 3D collections is often limited by complex or costly interfaces, such as VR/AR systems and traditional touchscreen kiosks, creating a clear need for more intuitive, accessible, engaging, and inclusive solutions. This paper presents PETRA, an open-source, gesture-controlled system for exploring 3D rocks and minerals. Developed in the TouchDesigner environment, PETRA uses a standard webcam and the MediaPipe framework to translate natural hand movements into real-time manipulation of digital specimens, requiring no specialized hardware. The system provides a customizable, node-based framework for creating touchless, interactive exhibits. Evaluated during a “Long Night of Museums” public event with 550 visitors, direct qualitative observations confirmed high user engagement, rapid instruction-free learnability across diverse age groups, and robust system stability in continuous use. As a practical case study, PETRA demonstrates that low-cost, webcam-based gesture control is a viable solution for creating accessible and immersive learning experiences. This work contributes to digital mineralogy, human–machine interaction, and cultural heritage by providing a hygienic, scalable, and socially engaging method for interacting with geological collections. As digital archives grow, the development of human-centered interfaces is paramount to unlocking their full scientific and educational potential.
(This article belongs to the Special Issue 3D Technologies and Machine Learning in Mineral Sciences)
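
The core mapping PETRA performs (hand motion to specimen rotation) can be sketched outside TouchDesigner as follows; the sensitivity constant and clamping behaviour are invented for illustration.

```python
# Hypothetical sketch of hand-to-rotation mapping: frame-to-frame hand
# displacement drives yaw/pitch of a 3D specimen. The sensitivity and
# clamping values are invented; the real system implements this inside
# a TouchDesigner node network.
import numpy as np

SENSITIVITY = 180.0           # degrees of rotation per unit of hand travel

def update_rotation(rotation, prev_hand, hand):
    """rotation: [yaw, pitch] in degrees; hand positions are normalised
    (x, y) image coordinates of the tracked hand centroid."""
    dx, dy = np.subtract(hand, prev_hand)
    rotation[0] = (rotation[0] + SENSITIVITY * dx) % 360.0            # yaw wraps
    rotation[1] = float(np.clip(rotation[1] + SENSITIVITY * dy, -90.0, 90.0))
    return rotation

rot = [0.0, 0.0]
rot = update_rotation(rot, prev_hand=(0.50, 0.50), hand=(0.55, 0.48))
print(rot)    # approx [9.0, -3.6]
```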

15 pages, 1444 KB  
Article
Touchscreen Tasks for Cognitive Testing in Domestic Goats (Capra hircus): A Pilot Study Using Odd-Item Search Training
by Jie Gao, Yumi Yamanashi and Masayuki Tanaka
Animals 2025, 15(14), 2115; https://doi.org/10.3390/ani15142115 - 17 Jul 2025
Viewed by 1158
Abstract
The cognition of large farm animals is important for understanding how cognitive abilities are shaped by evolution and domestication, and valid testing methods are needed as cognitive studies extend to more species. Here, we describe a step-by-step method for training four naïve domestic goats to use a touchscreen for cognitive tests. After training, the goats made accurate touches smoothly. Follow-up tests confirmed that they could perform cognitive tests on a touchscreen. In a pilot odd-item search test, all the goats performed above chance level in some conditions. In subsequent odd-item search tasks using multiple novel stimulus sets, one goat achieved the criterion and completed several stages, and the results showed a learning effect, suggesting a potential ability to learn the odd-item search rule. Not all goats passed the criteria, and there were failures in transfer, indicating a perceptual strategy rather than use of the odd-item rule. The experiment confirmed that goats can use the touchscreen testing system for cognitive tasks and revealed their approaches to tackling this problem. We hope these training methods will help future studies train and test naïve animals.
(This article belongs to the Section Animal System and Management)
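
An odd-item search trial of the kind described places one odd stimulus among identical distractors. A minimal sketch of trial generation and scoring, with invented stimulus names and item count:

```python
# Illustrative sketch of an odd-item search trial: one odd stimulus among
# identical distractors. Stimulus names and item count are invented.
import random

def make_trial(n_items=4, stimuli=("circle", "star", "cross")):
    distractor, odd = random.sample(stimuli, 2)   # pick two distinct stimuli
    items = [distractor] * n_items
    odd_pos = random.randrange(n_items)
    items[odd_pos] = odd                          # place the odd item
    return items, odd_pos

def is_correct(touched_pos, odd_pos):
    return touched_pos == odd_pos

items, odd_pos = make_trial()
print(items, "correct answer at index", odd_pos)
```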

18 pages, 1534 KB  
Article
Heart Rate Variability in Adolescents with Autistic Spectrum Disorder Practicing a Virtual Reality Using Two Different Interaction Devices (Concrete and Abstract): A Prospective Randomized Crossover Controlled Trial
by Étria Rodrigues, Ariane Livanos, Joyce A. L. Garbin, Susi M. S. Fernandes, Amanda O. Simcsik, Tânia B. Crocetta, Eduardo D. Dias, Carlos B. M. Monteiro, Fernando H. Magalhães, Alessandro H. N. Ré, Íbis A. P. Moraes and Talita D. Silva-Magalhães
Healthcare 2025, 13(12), 1402; https://doi.org/10.3390/healthcare13121402 - 12 Jun 2025
Viewed by 1863
Abstract
Individuals with autism spectrum disorder (ASD) often experience dysregulation of the autonomic nervous system, as evidenced by alterations in heart rate variability (HRV). HRV can be influenced by virtual reality (VR), which affects physiological responses through environmental and emotional stimuli. Objectives: This study aimed to assess HRV in individuals with ASD before, during, and after VR-based tasks over a 10-day period, specifically examining how HRV fluctuated in response to concrete (touchscreen) and abstract (webcam) interactions. Methods: Twenty-two male participants were randomly assigned to two sequences based on the order of tasks performed (starting with either the concrete or the abstract task). Results: The findings revealed significant changes in HRV indices (RMSSD, SD1, SDNN, and SD2) between the two task types. Conclusions: Participants engaged in the abstract tasks showed responses consistent with higher motor demands, indicated by decreased parasympathetic activity and an increased LF/HF ratio, suggesting greater activation of the sympathetic nervous system.
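
The HRV indices named in the results have standard definitions over successive RR intervals (in ms); the sketch below computes them, though it is not the authors' analysis code.

```python
# Standard time-domain and Poincare HRV indices from RR intervals (ms).
# A sketch using the textbook definitions, not the authors' analysis code.
import numpy as np

def hrv_indices(rr):
    rr = np.asarray(rr, dtype=float)
    diff = np.diff(rr)
    rmssd = np.sqrt(np.mean(diff ** 2))            # short-term variability
    sdnn = np.std(rr, ddof=1)                      # overall variability
    sd1 = rmssd / np.sqrt(2)                       # Poincare plot, short axis
    sd2 = np.sqrt(max(2 * sdnn**2 - sd1**2, 0.0))  # Poincare plot, long axis
    return {"RMSSD": rmssd, "SDNN": sdnn, "SD1": sd1, "SD2": sd2}

print(hrv_indices([812, 790, 835, 801, 778, 820]))
```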

12 pages, 3563 KB  
Article
Development of a Fluorescent Rapid Test Sensing System for Influenza Virus
by Wei-Chien Weng, Yu-Lin Wu, Zia-Jia Lin, Wen-Fung Pan and Yu-Cheng Lin
Micromachines 2025, 16(6), 635; https://doi.org/10.3390/mi16060635 - 28 May 2025
Viewed by 750
Abstract
This paper presents a sensitive and stable fluorescence rapid test sensing system for the quantitative analysis of influenza rapid test results, integrating a detection reader to minimize errors from conventional visual interpretation. The hardware includes a control board, touchscreen, camera module, UV LED illumination, and a dark chamber, while the software handles camera and light source control as well as image processing. Validation shows strong linearity, high precision, and reproducibility. For influenza A (H1N1), the system achieved a coefficient of determination (R2) of 0.9782 (25–200 ng/mL) and 0.9865 (1–10 ng/mL); for influenza B (Yamagata), the R2 was 0.9762 (2–10 ng/mL). The coefficient of variation ranged from 1% to 5% for influenza A and from 4% to 9% for influenza B. Detection limits were 4 ng/mL for influenza A and 6 ng/mL for influenza B. These results confirm the system’s capability for accurate quantitative analysis while reducing reliance on subjective interpretation. Its compact, portable design supports on-site rapid testing and allows for potential expansion to detect other targets, such as COVID-19, RSV, and myocardial enzymes. The system’s scalability makes it a promising tool for clinical diagnostics, point-of-care testing (POCT), and infectious disease monitoring.
(This article belongs to the Special Issue Portable Sensing Systems in Biological and Chemical Analysis)
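
The quantification step implied above amounts to a linear calibration of fluorescence intensity against antigen concentration, from which R2 and unknown concentrations follow. The sketch below uses invented sample numbers, not the paper's data.

```python
# Linear-calibration sketch: fit intensity vs. concentration, report R^2,
# and invert the fit for an unknown sample. Sample numbers are invented.
import numpy as np

conc = np.array([25, 50, 100, 150, 200], dtype=float)    # ng/mL standards
intensity = np.array([0.11, 0.21, 0.43, 0.62, 0.85])     # arbitrary units

slope, intercept = np.polyfit(conc, intensity, 1)        # least-squares line
pred = slope * conc + intercept
ss_res = np.sum((intensity - pred) ** 2)
ss_tot = np.sum((intensity - intensity.mean()) ** 2)
r2 = 1 - ss_res / ss_tot
unknown = (0.30 - intercept) / slope                     # invert the calibration
print(f"R^2 = {r2:.4f}; unknown at 0.30 a.u. -> {unknown:.1f} ng/mL")
```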

29 pages, 3925 KB  
Article
Beyond Signatures: Leveraging Sensor Fusion for Contextual Handwriting Recognition
by Alen Salkanovic, Diego Sušanj, Luka Batistić and Sandi Ljubic
Sensors 2025, 25(7), 2290; https://doi.org/10.3390/s25072290 - 4 Apr 2025
Viewed by 1170
Abstract
This paper deals with biometric identification based on the unique patterns and characteristics of an individual’s handwriting, focusing on the dynamic writing process on a touchscreen device. Related work in this domain shows the dominance of specific research approaches: in most cases, only the signature is analyzed, verification methods are more prevalent than recognition methods, and the provided solutions mainly rely on a particular device or specific sensor for collecting biometric data. Our work aims to fill this research gap by introducing a new handwriting-based user recognition technique. The proposed approach implements the concept of sensor fusion and does not rely exclusively on signatures for recognition, but also includes other forms of handwriting, such as short sentences, words, or individual letters. Additionally, two different ways of handwriting input, using a stylus and a finger, are included in the analysis. To collect data on the dynamics of handwriting and signing, a specially designed apparatus was used with various sensors integrated into common smart devices, along with additional external sensors and accessories. A total of 60 participants took part in a controlled experiment to form a handwriting biometrics dataset for further analysis. To classify participants’ handwriting, custom-architecture CNN models were utilized for feature extraction and classification tasks. The obtained results showed that the proposed handwriting recognition system achieves accuracies of 0.982, 0.927, 0.884, and 0.661 for signatures, words, short sentences, and individual letters, respectively. We further investigated the main effects of the input modality and the training set’s size on the system’s accuracy. Finally, an ablation study was carried out to analyze the impact of individual sensors within the fusion-based setup.
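
A fusion-style classifier like the one described can be sketched as a 1D CNN over fixed-length multichannel time series, with touch and inertial signals stacked as channels; sequence length, channel count, and layer sizes are assumptions, since the paper's custom architecture is not given here.

```python
# Sketch of a sensor-fusion writer classifier: a 1D CNN over multichannel
# time series (touch x/y, pressure, accelerometer axes stacked as channels).
# Sequence length, channel count, and layer sizes are assumptions.
import tensorflow as tf

N_USERS, SEQ_LEN, N_CHANNELS = 60, 256, 8    # 60 writers, as in the study

model = tf.keras.Sequential([
    tf.keras.Input(shape=(SEQ_LEN, N_CHANNELS)),
    tf.keras.layers.Conv1D(32, 5, activation="relu"),
    tf.keras.layers.MaxPooling1D(2),
    tf.keras.layers.Conv1D(64, 5, activation="relu"),
    tf.keras.layers.GlobalAveragePooling1D(),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(N_USERS, activation="softmax"),   # one class per writer
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```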
