
Intelligent Systems and Sensors for Assistive Technology

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Intelligent Sensors".

Deadline for manuscript submissions: closed (20 January 2023) | Viewed by 34986

Special Issue Editors

Institute of Applied Sciences and Intelligent Systems (ISASI), National Research Council of Italy (CNR), 80078 Pozzuoli, Italy
Interests: computer vision; machine learning; signal processing; assistive technology

Guest Editor
Department of Bioengineering, Faculty of Engineering, Imperial College London, London, UK
Interests: data science; healthcare technologies; machine learning

Special Issue Information

Dear Colleagues,

According to the World Health Organization, Assistive Technology (AT) enables and promotes inclusion and participation, especially of persons with disabilities, aging populations, and people who have conditions such as diabetes and stroke. Examples of assistive products include hearing aids, wheelchairs, spectacles, prostheses, and devices that support memory, among many others.

In a broader sense, AT can refer to any instrument or system that makes daily activities, such as driving or working, easier, or that ensures safety and security; at the other end of the spectrum, it can enable new capabilities, such as supporting remote diagnosis or surgery in medical contexts.

In the past few decades, scientific progress has led to new AT solutions which leverage multidisciplinary knowledge in the fields of micro-nano sensors, embedded systems, robotics, computer vision, IoT, psychology, wireless networks, medicine, Human-Machine Interaction, advanced materials for sensing, image and signal processing, data fusion, machine learning, and so on. 

This Special Issue aims to present the latest advances in AT. We welcome contributions in all fields of AT, including new systems and algorithms, as well as new applications. These include, but are not limited to:

Education

Healthcare

HMI

Remote Surgery

Rehabilitation

Safety

Security

Home accessibility

Smart Environments

Robotics

Communication

Dr. Marco Leo
Prof. Anil Anthony Bharath
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Assistive technology
  • Machine learning
  • Sensors
  • Healthcare technologies
  • Data science


Published Papers (12 papers)


Research


19 pages, 3376 KiB  
Article
Automatic Detection of Cognitive Impairment with Virtual Reality
by Farzana A. Mannan, Lilla A. Porffy, Dan W. Joyce, Sukhwinder S. Shergill and Oya Celiktutan
Sensors 2023, 23(2), 1026; https://doi.org/10.3390/s23021026 - 16 Jan 2023
Cited by 2 | Viewed by 2316
Abstract
Cognitive impairment features in neuropsychiatric conditions and when undiagnosed can have a severe impact on the affected individual’s safety and ability to perform daily tasks. Virtual Reality (VR) systems are increasingly being explored for the recognition, diagnosis and treatment of cognitive impairment. In this paper, we describe novel VR-derived measures of cognitive performance and show their correspondence with clinically validated cognitive performance measures. We use an immersive VR environment called VStore where participants complete a simulated supermarket shopping task. People with psychosis (k=26) and non-patient controls (k=128) participated in the study, spanning ages 20–79 years. The individuals were split into two cohorts, a homogeneous non-patient cohort (k=99 non-patient participants) and a heterogeneous cohort (k=26 patients, k=29 non-patient participants). Participants’ spatio-temporal behaviour in VStore is used to extract four features, namely, route optimality score, proportional distance score, execution error score, and hesitation score, using the Traveling Salesman Problem and explore-exploit decision mathematics. These extracted features are mapped to seven validated cognitive performance scores via linear regression models. The most statistically important feature is found to be the hesitation score. When combined with the remaining extracted features, the multiple linear regression model resulted in statistically significant results with R2 = 0.369, F-Stat = 7.158, p(F-Stat) = 0.000128. Full article
(This article belongs to the Special Issue Intelligent Systems and Sensors for Assistive Technology)
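The feature-to-score mapping described in this abstract is a standard multiple linear regression. A minimal sketch on synthetic data might look like the following; the feature weights, sample size, and noise level are illustrative assumptions, not values from the paper:

```python
import numpy as np

# Hypothetical illustration: map four VR-derived behavioural features
# (route optimality, proportional distance, execution error, hesitation)
# to a cognitive score with ordinary least squares.
rng = np.random.default_rng(0)
n = 120
X = rng.normal(size=(n, 4))                  # columns: the four features
true_w = np.array([0.3, -0.2, -0.4, -0.8])   # hesitation weighted most, per the paper
y = X @ true_w + rng.normal(scale=0.5, size=n)  # synthetic cognitive score

# Fit: add an intercept column and solve the least-squares problem.
A = np.column_stack([np.ones(n), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

# R^2 of the fit on the training data.
pred = A @ coef
ss_res = np.sum((y - pred) ** 2)
ss_tot = np.sum((y - y.mean()) ** 2)
r2 = 1 - ss_res / ss_tot
print(round(r2, 3))
```

In the paper, separate models of this form map the features to seven validated cognitive scores; here a single synthetic target stands in for all of them.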

21 pages, 5001 KiB  
Article
Design of Audio-Augmented-Reality-Based O&M Orientation Training for Visually Impaired Children
by Linchao Wei, Lingling Jin, Ruining Gong, Yaojun Yang and Xiaochen Zhang
Sensors 2022, 22(23), 9487; https://doi.org/10.3390/s22239487 - 05 Dec 2022
Cited by 1 | Viewed by 1882
Abstract
Orientation and Mobility (O&M) training is a specific program that teaches people with vision loss to orient themselves and travel safely within certain contexts. State-of-the-art research reveals that people with vision loss expect high-quality O&M training, especially at early ages, but conventional O&M training methods involve tedious programs and require extensive participation by professional trainers, of whom there are too few. In this work, we first interpret and discuss the relevant research of recent years, along with the questionnaires and interviews we conducted with visually impaired people. On the basis of this field investigation and related research, we propose an audio-augmented-reality-based O&M orientation training design for children. Within the perceptible scene created by EasyAR's map-aware framework, we created an AR audio-source-tracing training task that simulates a social scene to strengthen the subjects' auditory identification. To verify the efficiency and feasibility of this scheme, we implemented an application prototype with the required hardware and software and conducted experiments with blindfolded children. The analysis of this pilot study confirms the high usability of the designed approach. Compared with other orientation training studies, the proposed method makes the whole training process flexible and entertaining. The training does not involve excessive economic costs or require professional skills training, allowing users to train at home or on the sports ground rather than having to travel to rehabilitation sites or special schools. Furthermore, according to feedback from the experiments, the approach is promising in regard to gamification. Full article
(This article belongs to the Special Issue Intelligent Systems and Sensors for Assistive Technology)

20 pages, 34239 KiB  
Article
UNav: An Infrastructure-Independent Vision-Based Navigation System for People with Blindness and Low Vision
by Anbang Yang, Mahya Beheshti, Todd E. Hudson, Rajesh Vedanthan, Wachara Riewpaiboon, Pattanasak Mongkolwat, Chen Feng and John-Ross Rizzo
Sensors 2022, 22(22), 8894; https://doi.org/10.3390/s22228894 - 17 Nov 2022
Cited by 2 | Viewed by 2187
Abstract
Vision-based localization approaches now underpin newly emerging navigation pipelines for myriad use cases, from robotics to assistive technologies. Compared to sensor-based solutions, vision-based localization does not require pre-installed sensor infrastructure, which is costly, time-consuming, and/or often infeasible at scale. Herein, we propose a novel vision-based localization pipeline for a specific use case: navigation support for end users with blindness and low vision. Given a query image taken by an end user on a mobile application, the pipeline leverages a visual place recognition (VPR) algorithm to find similar images in a reference image database of the target space. The geolocations of these similar images are utilized in a downstream task that employs a weighted-average method to estimate the end user’s location. Another downstream task utilizes the perspective-n-point (PnP) algorithm to estimate the end user’s direction by exploiting the 2D–3D point correspondences between the query image and the 3D environment, as extracted from matched images in the database. Additionally, this system implements Dijkstra’s algorithm to calculate a shortest path based on a navigable map that includes the trip origin and destination. The topometric map used for localization and navigation is built using a customized graphical user interface that projects a 3D reconstructed sparse map, built from a sequence of images, to the corresponding a priori 2D floor plan. Sequential images used for map construction can be collected in a pre-mapping step or scavenged through public databases/citizen science. The end-to-end system can be installed on any internet-accessible device with a camera that hosts a custom mobile application. For evaluation purposes, mapping and localization were tested in a complex hospital environment. 
The evaluation results demonstrate that our system can achieve localization with an average error of less than 1 m without knowledge of the camera’s intrinsic parameters, such as focal length. Full article
(This article belongs to the Special Issue Intelligent Systems and Sensors for Assistive Technology)
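The weighted-average localization step lends itself to a compact sketch. The coordinates, similarity scores, and the exact averaging form below are assumptions for illustration, not UNav's actual implementation:

```python
import numpy as np

# Sketch: the geolocations of the top-k reference images retrieved by
# visual place recognition (VPR) are averaged, weighted by each image's
# similarity to the query, to estimate the user's position.
def estimate_location(geolocations, similarities):
    """geolocations: (k, 2) array of x, y map coordinates (metres);
    similarities: (k,) VPR similarity scores."""
    w = np.asarray(similarities, dtype=float)
    w = w / w.sum()                      # normalise weights to sum to 1
    return w @ np.asarray(geolocations)  # weighted mean position

# Three retrieved images near a true position of roughly (5.0, 2.0):
locs = [(4.8, 2.1), (5.3, 1.9), (5.1, 2.2)]
sims = [0.9, 0.7, 0.4]
print(estimate_location(locs, sims))
```

The orientation estimate (PnP on 2D–3D correspondences) and the Dijkstra route search described in the abstract would then operate on this estimated pose.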

20 pages, 2124 KiB  
Article
Accessible Tutoring Platform Using Audio-Tactile Graphics Adapted for Visually Impaired People
by Michał Maćkowski and Piotr Brzoza
Sensors 2022, 22(22), 8753; https://doi.org/10.3390/s22228753 - 12 Nov 2022
Cited by 4 | Viewed by 1301
Abstract
One of the problems faced by people with blindness is access to materials presented in graphical form. There are many alternative forms of providing such information, but they are very often ineffective or have certain limitations. The development of mobile devices and touch sensors enabled the development of new tools to support such people. This study presents a solution called an accessible tutoring platform, using audio-tactile graphics for people with blindness. We aimed to research the influence of the developed platform for the alternative presentation of graphics information on better memorizing, recognizing, and learning. Another goal of the research was to verify the effectiveness of the proposed method for the alternative presentation of audio-tactile graphics. The effectiveness of the proposed solution was verified quantitatively and qualitatively on two groups of blind students from primary and secondary schools with the use of a developed platform and prepared materials for learning mathematics. The obtained research results show that the proposed method of verifying students’ knowledge and auto-selecting exercises with adapted audio description positively influences the improvement of learning effectiveness. Full article
(This article belongs to the Special Issue Intelligent Systems and Sensors for Assistive Technology)

23 pages, 4743 KiB  
Article
User Based Development and Test of the EXOTIC Exoskeleton: Empowering Individuals with Tetraplegia Using a Compact, Versatile, 5-DoF Upper Limb Exoskeleton Controlled through Intelligent Semi-Automated Shared Tongue Control
by Mikkel Berg Thøgersen, Mostafa Mohammadi, Muhammad Ahsan Gull, Stefan Hein Bengtson, Frederik Victor Kobbelgaard, Bo Bentsen, Benjamin Yamin Ali Khan, Kåre Eg Severinsen, Shaoping Bai, Thomas Bak, Thomas Baltzer Moeslund, Anne Marie Kanstrup and Lotte N. S. Andreasen Struijk
Sensors 2022, 22(18), 6919; https://doi.org/10.3390/s22186919 - 13 Sep 2022
Cited by 10 | Viewed by 2892
Abstract
This paper presents the EXOTIC, a novel assistive upper limb exoskeleton for individuals with complete functional tetraplegia that provides an unprecedented level of versatility and control. The current literature on exoskeletons mainly focuses on the basic technical aspects of exoskeleton design and control, while the context in which these exoskeletons should function is given little or no priority, even though it poses important technical requirements. We considered all sources of design requirements, from the basic technical functions to the real-world practical application. The EXOTIC features: (1) a compact, safe, wheelchair-mountable, easy-to-don-and-doff exoskeleton capable of facilitating multiple highly desired activities of daily living for individuals with tetraplegia; (2) a semi-automated computer vision guidance system that can be enabled by the user when relevant; (3) a tongue control interface allowing for full, volitional, and continuous control over all possible motions of the exoskeleton. The EXOTIC was tested on ten able-bodied individuals and three users with tetraplegia caused by spinal cord injury. During the tests, the EXOTIC succeeded in fully assisting tasks such as drinking and picking up snacks, even for users with complete functional tetraplegia and the need for a ventilator. The users confirmed the usability of the EXOTIC. Full article
(This article belongs to the Special Issue Intelligent Systems and Sensors for Assistive Technology)

24 pages, 4045 KiB  
Article
Sensor-Based Prototype of a Smart Assistant for Visually Impaired People—Preliminary Results
by Emilia Șipoș, Cosmin Ciuciu and Laura Ivanciu
Sensors 2022, 22(11), 4271; https://doi.org/10.3390/s22114271 - 03 Jun 2022
Cited by 5 | Viewed by 3805
Abstract
People with visual impairment are the second largest affected category with limited access to assistive products. A complete, portable, and affordable smart assistant for helping visually impaired people to navigate indoors and outdoors and to interact with the environment is presented in this paper. The prototype of the smart assistant consists of a smart cane and a central unit; communication between the user and the assistant is carried out through voice messages, making the system suitable for any user, regardless of their IT skills. The assistant is equipped with GPS, an electronic compass, Wi-Fi, ultrasonic sensors, an optical sensor, and an RFID reader, to help the user navigate safely. Navigation functionalities work offline, which is especially important in areas where Internet coverage is weak or missing altogether. Physical condition monitoring, medication, shopping, and weather information facilitate the interaction between the user and the environment, supporting daily activities. The proposed system uses different components for navigation and provides independent navigation systems for indoors and outdoors, both day and night, regardless of weather conditions. Preliminary tests provide encouraging results, indicating that the prototype has the potential to help visually impaired people to achieve a high level of independence in daily activities. Full article
(This article belongs to the Special Issue Intelligent Systems and Sensors for Assistive Technology)
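The ultrasonic obstacle-detection logic at the heart of such a smart cane can be sketched briefly. The sensor model, thresholds, and alert messages below are hypothetical illustrations, not details from the paper:

```python
# Sketch of ultrasonic ranging for a smart cane: an HC-SR04-style sensor
# reports the echo round-trip time, which is converted to distance and
# mapped to a spoken alert level. Thresholds are illustrative assumptions.
SPEED_OF_SOUND_M_S = 343.0  # in air at ~20 degrees C

def echo_to_distance_m(round_trip_s):
    # The pulse travels to the obstacle and back, so halve the path length.
    return SPEED_OF_SOUND_M_S * round_trip_s / 2.0

def alert_level(distance_m):
    # Example mapping from distance to a voice-message category.
    if distance_m < 0.5:
        return "stop"
    if distance_m < 1.5:
        return "obstacle ahead"
    return "clear"

d = echo_to_distance_m(0.004)  # a 4 ms round trip
print(d, alert_level(d))
```

On real hardware, the round-trip time would come from timing the sensor's echo pin; the same conversion and thresholding would then drive the voice messages described in the abstract.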

26 pages, 10692 KiB  
Article
Robustness and Tracking Performance Evaluation of PID Motion Control of 7 DoF Anthropomorphic Exoskeleton Robot Assisted Upper Limb Rehabilitation
by Tanvir Ahmed, Md Rasedul Islam, Brahim Brahmi and Mohammad Habibur Rahman
Sensors 2022, 22(10), 3747; https://doi.org/10.3390/s22103747 - 14 May 2022
Cited by 11 | Viewed by 2845
Abstract
Upper limb dysfunctions (ULD) are common following a stroke. Annually, more than 15 million people suffer a stroke worldwide. We have developed a 7 degrees of freedom (DoF) exoskeleton robot named the smart robotic exoskeleton (SREx) to provide upper limb rehabilitation therapy. The robot is designed for adults and has an extended range of motion compared to our previously designed ETS-MARSE robot. While providing rehabilitation therapy, the exoskeleton robot is always subject to random disturbance. Moreover, these types of robots manage various patients and different degrees of impairment, which are quite impossible to model and incorporate into the robot dynamics. We hypothesize that a model-independent controller, such as a PID controller, is most suitable for maneuvering a therapeutic exoskeleton robot to provide rehabilitation therapy. This research implemented a model-free proportional–integral–derivative (PID) controller to maneuver a complex 7 DoF anthropomorphic exoskeleton robot (i.e., SREx) to provide a wide variety of upper limb exercises to the different subjects. The robustness and trajectory tracking performance of the PID controller was evaluated with experiments. The results show that a PID controller can effectively control a highly nonlinear and complex exoskeleton-type robot. Full article
(This article belongs to the Special Issue Intelligent Systems and Sensors for Assistive Technology)
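The model-free PID law the paper argues for is simple enough to sketch. The gains, time step, and toy joint model below are illustrative assumptions, not the SREx controller's actual parameters:

```python
# Minimal discrete PID loop of the kind used for model-free joint control.
class PID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint, measured):
        error = setpoint - measured
        self.integral += error * self.dt              # accumulate error
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return (self.kp * error
                + self.ki * self.integral
                + self.kd * derivative)

# Toy first-order joint: the angle integrates the commanded velocity.
pid = PID(kp=4.0, ki=0.5, kd=0.1, dt=0.01)
angle, target = 0.0, 1.0  # radians
for _ in range(1000):
    angle += pid.update(target, angle) * 0.01
print(round(angle, 3))
```

A real exoskeleton joint has far richer dynamics (gravity, friction, the wearer's limb), which is exactly why the paper evaluates the PID's robustness experimentally rather than relying on a plant model.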

18 pages, 5820 KiB  
Article
Posture Assessment in Dentistry for Different Visual Aids Using 2D Markers
by Alberto Pispero, Marco Marcon, Carlo Ghezzi, Domenico Massironi, Elena Maria Varoni, Stefano Tubaro and Giovanni Lodi
Sensors 2021, 21(22), 7717; https://doi.org/10.3390/s21227717 - 19 Nov 2021
Cited by 7 | Viewed by 2420
Abstract
Attention and awareness towards musculoskeletal disorders (MSDs) in the dental profession have increased considerably in the last few years. Recent literature reviews report that the prevalence of MSDs in dentists ranges between 64 and 93%. In our clinical trial, we assessed the dentist's posture during the extraction of 90 third lower molars, depending on whether the operator performed the intervention using an operating microscope, surgical loupes, or the naked eye. In particular, we analyzed the evolution of body posture during different interventions, evaluating the impact of visual aids with respect to naked-eye interventions. The presented posture assessment approach is based on 3D acquisitions of the upper body using planar markers, which allows us to discriminate spatial displacements down to 2 mm in translation and 1 degree in rotation. We found a significant reduction of neck bending in interventions using visual aids, in particular those performed with the microscope. We further investigated the impact of different postures on MSD risk using a widely adopted evaluation tool for ergonomic investigations of workplaces, the Rapid Upper Limb Assessment (RULA). The analysis performed in this clinical trial is based on a 3D marker tracker that is able to follow a surgeon's upper limbs during interventions. The method highlighted the pros and cons of the different approaches. Full article
(This article belongs to the Special Issue Intelligent Systems and Sensors for Assistive Technology)

24 pages, 8060 KiB  
Article
Hip Lift Transfer Assistive System for Reducing Burden on Caregiver’s Waist
by Jiang Wu and Motoki Shino
Sensors 2021, 21(22), 7548; https://doi.org/10.3390/s21227548 - 13 Nov 2021
Cited by 4 | Viewed by 3020
Abstract
In Japan, population aging is expected to further increase the number of elderly people. The purpose of this study was to develop a hip lift transfer assistive system that improves the quality of life (QOL) of elderly people and prevents back pain in caregivers. We identified the factors that impede transfers and the scenes in which assistance is needed, selected the wheelchair-to-toilet transfer as the target process, devised a burden-reduction method based on quantitative evaluation of the caregiver's lumbar load, and developed the device accordingly. We then proposed a support algorithm for the standing and seating assistance operations by characterising user behaviour and lumbar load while the developed device was used in a realistic environment. Through an evaluation experiment of the assistive movements and actual operation in a toilet, we verified that the device reduces the caregiver's lumbar load below the standard value (3400 N), demonstrating the effectiveness of the proposed transfer assistive system. Full article
(This article belongs to the Special Issue Intelligent Systems and Sensors for Assistive Technology)

22 pages, 3116 KiB  
Article
Impact of a Vibrotactile Belt on Emotionally Challenging Everyday Situations of the Blind
by Charlotte Brandebusemeyer, Anna Ricarda Luther, Sabine U. König, Peter König and Silke M. Kärcher
Sensors 2021, 21(21), 7384; https://doi.org/10.3390/s21217384 - 06 Nov 2021
Cited by 4 | Viewed by 2761
Abstract
Spatial orientation and navigation depend primarily on vision. Blind people lack this critical source of information. To facilitate wayfinding and to increase the feeling of safety for these people, the “feelSpace belt” was developed. The belt signals magnetic north as a fixed reference frame via vibrotactile stimulation. This study investigates the effect of the belt on typical orientation and navigation tasks and evaluates the emotional impact. Eleven blind subjects wore the belt daily for seven weeks. Before, during and after the study period, they filled in questionnaires to document their experiences. A small sub-group of the subjects took part in behavioural experiments before and after four weeks of training, i.e., a straight-line walking task to evaluate the belt’s effect on keeping a straight heading, an angular rotation task to examine effects on egocentric orientation, and a triangle completion navigation task to test the ability to take shortcuts. The belt reduced subjective discomfort and increased confidence during navigation. Additionally, the participants felt safer wearing the belt in various outdoor situations. Furthermore, the behavioural tasks point towards an intuitive comprehension of the belt. Altogether, the blind participants benefited from the vibrotactile belt as an assistive technology in challenging everyday situations. Full article
(This article belongs to the Special Issue Intelligent Systems and Sensors for Assistive Technology)
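The core of a north-signalling belt is a mapping from the wearer's compass heading to the vibration motor currently facing magnetic north. The motor count and layout below are assumptions for illustration, not the feelSpace belt's actual design:

```python
# Illustrative sketch: select the belt motor that points toward magnetic
# north, given the wearer's compass heading.
N_MOTORS = 16  # assumed: evenly spaced around the waist, motor 0 at the
               # front, numbered clockwise when viewed from above

def north_motor(heading_deg):
    """heading_deg: compass heading of the wearer's front (0 = north).
    Returns the index of the motor currently facing magnetic north."""
    # North lies at -heading relative to the wearer's front.
    bearing = (-heading_deg) % 360.0
    return round(bearing / (360.0 / N_MOTORS)) % N_MOTORS

print(north_motor(0))   # facing north: the front motor vibrates
print(north_motor(90))  # facing east: north is at the wearer's left side
```

As the wearer turns, the active motor shifts around the waist, which is what provides the fixed north reference frame described in the abstract.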

Review


26 pages, 543 KiB  
Review
In-Home Older Adults’ Activity Pattern Monitoring Using Depth Sensors: A Review
by Md Sarfaraz Momin, Abu Sufian, Debaditya Barman, Paramartha Dutta, Mianxiong Dong and Marco Leo
Sensors 2022, 22(23), 9067; https://doi.org/10.3390/s22239067 - 23 Nov 2022
Cited by 8 | Viewed by 2718
Abstract
The global population is aging due to many factors, including longer life expectancy through better healthcare, changing diet, physical activity, etc. We are also witnessing various frequent epidemics as well as pandemics. The existing healthcare system has failed to deliver the care and support needed to our older adults (seniors) during these frequent outbreaks. Sophisticated sensor-based in-home care systems may offer an effective solution to this global crisis. The monitoring system is the key component of any in-home care system. The evidence indicates that they are more useful when implemented in a non-intrusive manner through different visual and audio sensors. Artificial Intelligence (AI) and Computer Vision (CV) techniques may be ideal for this purpose. Since the RGB imagery-based CV technique may compromise privacy, people often hesitate to utilize in-home care systems which use this technology. Depth, thermal, and audio-based CV techniques could be meaningful substitutes here. Due to the need to monitor larger areas, this review article presents a systematic discussion on the state-of-the-art using depth sensors as primary data-capturing techniques. We mainly focused on fall detection and other health-related physical patterns. As gait parameters may help to detect these activities, we also considered depth sensor-based gait parameters separately. The article provides discussions on the topic in relation to the terminology, reviews, a survey of popular datasets, and future scopes. Full article
(This article belongs to the Special Issue Intelligent Systems and Sensors for Assistive Technology)

23 pages, 586 KiB  
Review
Video-Based Automatic Baby Motion Analysis for Early Neurological Disorder Diagnosis: State of the Art and Future Directions
by Marco Leo, Giuseppe Massimo Bernava, Pierluigi Carcagnì and Cosimo Distante
Sensors 2022, 22(3), 866; https://doi.org/10.3390/s22030866 - 24 Jan 2022
Cited by 15 | Viewed by 5145
Abstract
Neurodevelopmental disorders (NDD) are impairments of the growth and development of the brain and/or central nervous system. In the light of clinical findings on early diagnosis of NDD and prompted by recent advances in hardware and software technologies, several researchers tried to introduce automatic systems to analyse the baby’s movement, even in cribs. Traditional technologies for automatic baby motion analysis leverage contact sensors. Alternatively, remotely acquired video data (e.g., RGB or depth) can be used, with or without active/passive markers positioned on the body. Markerless approaches are easier to set up and maintain (without any human intervention) and they work well on non-collaborative users, making them the most suitable technologies for clinical applications involving children. On the other hand, they require complex computational strategies for extracting knowledge from data, and then, they strongly depend on advances in computer vision and machine learning, which are among the most expanding areas of research. As a consequence, also markerless video-based analysis of movements in children for NDD has been rapidly expanding but, to the best of our knowledge, there is not yet a survey paper providing a broad overview of how recent scientific developments impacted it. This paper tries to fill this gap and it lists specifically designed data acquisition tools and publicly available datasets as well. Besides, it gives a glimpse of the most promising techniques in computer vision, machine learning and pattern recognition which could be profitably exploited for children motion analysis in videos. Full article
(This article belongs to the Special Issue Intelligent Systems and Sensors for Assistive Technology)
