
Sensors with Machine Learning Methods for Assisted Systems - Recent Advances and Future Trends

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Intelligent Sensors".

Deadline for manuscript submissions: closed (30 July 2021) | Viewed by 30427

Special Issue Editors


Dr. Md Zia Uddin
Guest Editor
Sustainable Communication Technologies Department, SINTEF Digital, 0373 Oslo, Norway
Interests: sensors; healthcare; human-machine/computer/robot interaction; pattern recognition; deep learning; artificial intelligence; machine learning; big data; robotics; image processing; computer vision

Dr. Ahmet Soylu
Guest Editor
Senior Research Scientist, Software and Service Innovation, SINTEF Digital, Oslo, Norway
Interests: sensors; data integration; artificial intelligence; big data; semantic web; human–computer interaction

Special Issue Information

Dear Colleagues,

In this era of artificial intelligence (AI) and big data, approaches for data analysis, information extraction, and underlying event analysis with state-of-the-art machine learning algorithms have advanced rapidly. Hence, there is an increasing demand for solutions that can successfully handle enormous volumes of data from large numbers of sensors and model the events in those data. Assisted systems encompass systems, applications, and services that adopt sensors, measurement methods, and information technologies to offer new products and solutions addressing various needs, such as health and wellbeing. The expected outcomes of introducing such a paradigm include a positive impact on health-related quality of life while reducing the costs of healthcare provision.

Despite the marvelous advancements and achievements of artificial intelligence so far, its black-box nature and questions around its lack of transparency are still hampering its application in society. For emerging AI solutions to be trusted, accepted, and adopted in our lives and practices, explainable AI (XAI) is in high demand, alongside state-of-the-art machine learning algorithms such as convolutional neural networks (CNN), recurrent neural networks (RNN), long short-term memory (LSTM), and neural structured learning (NSL). XAI can provide human-understandable interpretations by explaining a machine learning model’s algorithmic behavior and outcomes. Thus, it can enable people to control and continuously improve performance, transparency, and explainability throughout the lifecycle of AI applications. Motivated by this, the recently emerging trend among diverse and multidisciplinary research communities is the exploration of AI approaches and the development of contextual models.

This Special Issue highlights the recent advances and future trends in developing intelligent and smart wearable and/or ambient sensor-based systems, methods, and frameworks to measure people’s wellbeing. It also focuses on machine learning approaches to model the underlying events in the data. Contributions on XAI algorithms are particularly encouraged, along with other machine learning algorithms for handling diverse sensor data in assisted systems. We invite manuscripts on a wide range of smart sensing and machine learning research for assisted systems, including but not limited to:

  • Artificial intelligence;
  • Explainable AI models;
  • Human–computer interactions;
  • Robotics for healthcare;
  • Signal processing;
  • Multimodal sensing;
  • Feature analysis;
  • Context-based sensing;
  • Knowledge discovery from sensor data;
  • Machine learning on sensor data;
  • Data analysis for smart sensing;
  • Sensor data applications;
  • Pattern recognition;
  • Smart-assisted systems;
  • Expert systems and applications.

Dr. Md Zia Uddin
Dr. Ahmet Soylu
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Sensors
  • Artificial intelligence
  • Assisted system
  • Machine learning
  • Explainable AI

Published Papers (7 papers)


Research

24 pages, 5584 KiB  
Article
Double Deep Q-Learning and Faster R-CNN-Based Autonomous Vehicle Navigation and Obstacle Avoidance in Dynamic Environment
by Razin Bin Issa, Modhumonty Das, Md. Saferi Rahman, Monika Barua, Md. Khalilur Rhaman, Kazi Shah Nawaz Ripon and Md. Golam Rabiul Alam
Sensors 2021, 21(4), 1468; https://doi.org/10.3390/s21041468 - 20 Feb 2021
Cited by 14 | Viewed by 4792
Abstract
Autonomous vehicle navigation in an unknown dynamic environment is crucial for both supervised- and reinforcement-learning-based autonomous maneuvering. The cooperative fusion of these two learning approaches has the potential to be an effective mechanism for tackling indefinite environmental dynamics. Most state-of-the-art autonomous vehicle navigation systems are trained on a specific mapped model with familiar environmental dynamics. In contrast, this research focuses on the cooperative fusion of supervised and reinforcement learning technologies for the autonomous navigation of land vehicles in a dynamic and unknown environment. Faster R-CNN, a supervised learning approach, identifies ambient environmental obstacles for untroubled maneuvering of the autonomous vehicle, while the training policies of Double Deep Q-Learning, a reinforcement learning approach, enable the autonomous agent to learn effective navigation decisions from the dynamic environment. The proposed model is primarily tested in a gaming environment similar to the real world, where it exhibits overall efficiency and effectiveness in the maneuvering of autonomous land vehicles.
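
As a rough illustration of the learning rule this paper builds on, the sketch below computes Double DQN targets, where the online network selects the next action and the target network evaluates it. The tensor shapes, discount factor, and naming are assumptions for illustration, not details from the article.

```python
# A minimal sketch (PyTorch) of the Double DQN target computation, assuming
# generic replay-buffer tensors; shapes and gamma are illustrative.
import torch

def double_dqn_targets(online_net, target_net, rewards, next_states, dones, gamma=0.99):
    """Online net selects the next action; target net evaluates it.
    This decoupling is what reduces Q-value overestimation."""
    with torch.no_grad():
        best_actions = online_net(next_states).argmax(dim=1, keepdim=True)
        next_q = target_net(next_states).gather(1, best_actions).squeeze(1)
        # Terminal transitions contribute only the immediate reward
        return rewards + gamma * (1.0 - dones) * next_q
```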

14 pages, 25055 KiB  
Article
Hand Gesture Recognition Using Single Patchable Six-Axis Inertial Measurement Unit via Recurrent Neural Networks
by Edwin Valarezo Añazco, Seung Ju Han, Kangil Kim, Patricio Rivera Lopez, Tae-Seong Kim and Sangmin Lee
Sensors 2021, 21(4), 1404; https://doi.org/10.3390/s21041404 - 17 Feb 2021
Cited by 24 | Viewed by 5023
Abstract
Recording human gestures with a wearable sensor produces valuable information for implementing control gestures or for healthcare services. The wearable sensor must be small and easily worn. Advances in miniaturized sensors and materials research have produced patchable inertial measurement units (IMUs). In this paper, a hand gesture recognition system using a single patchable six-axis IMU attached at the wrist, via recurrent neural networks (RNN), is presented. The IMU comprises IC-based electronic components on a stretchable, adhesive substrate with serpentine-structured interconnections. The proposed patchable IMU with a soft form factor can be worn in close contact with the human body, comfortably adapting to skin deformations, so signal distortion (i.e., motion artifacts) produced by vibration during motion is minimized. The patchable IMU also has a wireless communication (i.e., Bluetooth) module to continuously send the sensed signals to any processing device. Our hand gesture recognition system was evaluated by attaching the proposed patchable six-axis IMU to the right wrist of five people to recognize three hand gestures using two models based on recurrent neural networks. The RNN-based models were trained and validated using a public database. Preliminary results show that our proposed patchable IMU has the potential to continuously monitor people’s motions in remote settings for applications in mobile health, human–computer interaction, and gesture-based control.
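
For readers unfamiliar with the setup, the following is a minimal sketch of an RNN classifier over windows of six-axis IMU data (3-axis accelerometer plus 3-axis gyroscope), in the spirit of the paper’s models; the layer sizes, window length, and three-gesture output are illustrative assumptions.

```python
# A minimal sketch of an RNN gesture classifier for six-axis IMU windows.
import torch
import torch.nn as nn

class IMUGestureRNN(nn.Module):
    def __init__(self, n_channels=6, hidden=64, n_gestures=3):
        super().__init__()
        self.lstm = nn.LSTM(n_channels, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_gestures)

    def forward(self, x):           # x: (batch, time_steps, 6)
        _, (h_n, _) = self.lstm(x)  # final hidden state summarizes the window
        return self.head(h_n[-1])   # gesture logits per window

# Example: a batch of 8 two-second windows sampled at 50 Hz
logits = IMUGestureRNN()(torch.randn(8, 100, 6))
```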

17 pages, 1933 KiB  
Article
Speech-Based Surgical Phase Recognition for Non-Intrusive Surgical Skills’ Assessment in Educational Contexts
by Carmen Guzmán-García, Marcos Gómez-Tome, Patricia Sánchez-González, Ignacio Oropesa and Enrique J. Gómez
Sensors 2021, 21(4), 1330; https://doi.org/10.3390/s21041330 - 13 Feb 2021
Cited by 11 | Viewed by 2706
Abstract
Surgeons’ procedural skills and intraoperative decision making are key elements of clinical practice. However, the objective assessment of these skills remains a challenge to this day. Surgical workflow analysis (SWA) is emerging as a powerful tool to solve this issue in surgical educational environments in real time. Typically, SWA makes use of video signals to automatically identify the surgical phase. We hypothesize that the analysis of surgeons’ speech using natural language processing (NLP) can provide deeper insight into the surgical decision-making processes. As a preliminary step, this study proposes using audio signals recorded in the educational operating room (OR) to classify the phases of a laparoscopic cholecystectomy (LC). To do this, we first created a database with the transcriptions of audio recorded in surgical educational environments and their corresponding phases. Second, we compared the performance of four feature extraction techniques and four machine learning models to find the most appropriate model for phase recognition. The best resulting model was a support vector machine (SVM) coupled to a hidden Markov model (HMM), trained with features obtained with Word2Vec (82.95% average accuracy). The analysis of this model’s confusion matrix shows that some phrases are misplaced due to the similarity of the words used. The study of the model’s temporal component suggests that further attention should be paid to accurately detecting surgeons’ normal conversation. This study proves that speech-based classification of LC phases can be achieved effectively. This lays the foundation for the use of audio signals for SWA, to create a framework of LC to be used in surgical training, especially for the training and assessment of procedural and decision-making skills (e.g., to assess residents’ procedural knowledge and their ability to react to adverse situations).
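
A minimal sketch of the flavor of the winning pipeline is shown below: each transcribed utterance is embedded as the mean of its Word2Vec word vectors and classified with an SVM. The toy corpus and phase labels are placeholders, and the paper’s HMM smoothing stage is omitted here.

```python
# Sketch: mean Word2Vec embeddings per utterance + SVM phase classifier.
import numpy as np
from gensim.models import Word2Vec
from sklearn.svm import SVC

utterances = [["clip", "the", "cystic", "duct"], ["irrigate", "the", "field"]]
phases = [2, 5]  # hypothetical phase labels

w2v = Word2Vec(sentences=utterances, vector_size=100, min_count=1)

def embed(tokens):
    # Mean of word vectors; zero vector if no token is in the vocabulary
    vecs = [w2v.wv[t] for t in tokens if t in w2v.wv]
    return np.mean(vecs, axis=0) if vecs else np.zeros(w2v.vector_size)

X = np.stack([embed(u) for u in utterances])
clf = SVC(kernel="rbf").fit(X, phases)
```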

18 pages, 732 KiB  
Communication
Towards Detecting Biceps Muscle Fatigue in Gym Activity Using Wearables
by Mohamed Elshafei and Emad Shihab
Sensors 2021, 21(3), 759; https://doi.org/10.3390/s21030759 - 23 Jan 2021
Cited by 18 | Viewed by 5094
Abstract
Fatigue is a naturally occurring phenomenon during human activities, but it poses a higher risk of injury during physically demanding activities, such as gym activities and athletics. Several studies show that biceps muscle fatigue can lead to various injuries that may require up to 22 weeks of treatment. In this work, we adopt a wearable approach to detect biceps muscle fatigue during the biceps concentration curl exercise as an example of a gym activity. Our dataset consists of 3000 biceps curls from twenty volunteers aged between 27 and 30 with a Body Mass Index (BMI) ranging between 18 and 28. All volunteers have been gym-goers for at least one year, with no record of chronic diseases or muscle or bone surgeries. We encountered two main challenges while collecting our dataset. The first challenge was the dumbbell’s suitability: we found that a dumbbell weight of 4.5 kg provides the best tradeoff between longer recording sessions and the occurrence of fatigue during the exercise. The second challenge was the subjectivity of the rating of perceived exertion (RPE), which we addressed by averaging the reported RPE with the measured heart rate converted to RPE. We observed from our data that fatigue reduces the biceps’ angular velocity and therefore increases the completion time of later sets. We extracted a total of 33 features from our dataset, which were reduced to 16 features that are the most representative of and correlated with the biceps curl movement while remaining fatigue-specific. We utilized these features in five machine learning models: Generalized Linear Models (GLM), Logistic Regression (LR), Random Forests (RF), Decision Trees (DT), and Feedforward Neural Networks (FNN). We found that a two-layer FNN achieves an accuracy of 98% and 88% for subject-specific and cross-subject models, respectively. The results presented in this work represent a solid start toward a real-world application for detecting the fatigue level of the biceps muscles using wearable sensors, and we advise athletes to take fatigue into consideration to avoid fatigue-induced injuries.
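
As a sketch of the model family behind the reported accuracy, the snippet below defines a two-hidden-layer feedforward network over the 16 selected features; the layer widths, activation, and binary fatigued/not-fatigued output are assumptions, not details from the paper.

```python
# A minimal sketch of a two-hidden-layer feedforward fatigue classifier.
import torch.nn as nn

fatigue_net = nn.Sequential(
    nn.Linear(16, 32), nn.ReLU(),  # hidden layer 1 over the 16 features
    nn.Linear(32, 16), nn.ReLU(),  # hidden layer 2
    nn.Linear(16, 2),              # logits: fatigued vs. not fatigued
)
```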

20 pages, 3255 KiB  
Article
Intelligent Sensory Pen for Aiding in the Diagnosis of Parkinson’s Disease from Dynamic Handwriting Analysis
by Eugênio Peixoto Júnior, Italo L. D. Delmiro, Naercio Magaia, Fernanda M. Maia, Mohammad Mehedi Hassan, Victor Hugo C. Albuquerque and Giancarlo Fortino
Sensors 2020, 20(20), 5840; https://doi.org/10.3390/s20205840 - 15 Oct 2020
Cited by 15 | Viewed by 4317
Abstract
In this paper, we propose a pen device capable of detecting specific features from dynamic handwriting tests to aid in automatic Parkinson’s disease identification. The method uses machine learning to compare the raw signals from the different sensors in the device coupled to a pen and to extract relevant information, such as tremors and hand acceleration, to support clinical diagnosis. Additionally, the datasets of raw signals acquired here from healthy subjects and Parkinson’s disease patients are made available to further contribute to research on this topic.
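
One plausible feature such a sensory pen could extract is the accelerometer power in the 4–6 Hz band, where parkinsonian rest tremor typically concentrates; the sketch below is a hypothetical illustration (sampling rate, Welch estimator), not the paper’s actual feature set.

```python
# Hypothetical tremor feature: spectral power in the 4-6 Hz band.
import numpy as np
from scipy.signal import welch

def tremor_band_power(accel, fs=100.0, band=(4.0, 6.0)):
    """Integrate the power spectral density of one accelerometer axis
    over the tremor band."""
    freqs, psd = welch(accel, fs=fs, nperseg=256)
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return np.trapz(psd[mask], freqs[mask])
```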

34 pages, 13069 KiB  
Article
Considerations for Determining the Coefficient of Inertia Masses for a Tracked Vehicle
by Octavian Alexa, Iulian Coropețchi, Alexandru Vasile, Ionica Oncioiu and Lucian Ștefăniță Grigore
Sensors 2020, 20(19), 5587; https://doi.org/10.3390/s20195587 - 29 Sep 2020
Cited by 5 | Viewed by 2559
Abstract
The purpose of this article is to present a point of view on determining the mass moment of inertia coefficient of a tracked vehicle. This coefficient is useful for estimating the performance of a tracked vehicle, including slip in the converter. Determining vehicle acceleration plays an important role in assessing vehicle mobility. Additionally, during the transition from the hydro-converter to the hydro-clutch regime, these estimations become quite difficult due to the complexity of the propulsion aggregate (engine and hydrodynamic transmission) and the rolling equipment. The algorithm for determining performance focuses on estimating acceleration performance. To validate the proposed model, tests were performed to determine the equivalent reduced moments of inertia at the drive wheel (gravitational method) and of the main components (three-wire pendulum method). The dynamic performance determined during the starting process is necessary for validating the general model that simulates the longitudinal dynamics of the vehicle. Finally, the differential and algebraic equations of the virtual model approximate the actual operation of the vehicle more accurately. Through the data obtained from the simulation process, the virtual model allows the variation of the mass moment of inertia coefficient to be determined indirectly, together with an approximating expression for it.
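
For context, the three-wire (trifilar) pendulum method mentioned in the abstract commonly uses the small-angle relation below to recover a component’s moment of inertia from its oscillation period; the paper’s exact rig and notation may differ.

```latex
% Standard small-angle relation for a three-wire (trifilar) pendulum:
% m suspended mass, g gravitational acceleration, R radius from the
% platform centre to the wires, L wire length, T measured period.
I = \frac{m\, g\, R^{2}\, T^{2}}{4\pi^{2} L}
```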

21 pages, 2089 KiB  
Article
Lightweight Driver Behavior Identification Model with Sparse Learning on In-Vehicle CAN-BUS Sensor Data
by Shan Ullah and Deok-Hwan Kim
Sensors 2020, 20(18), 5030; https://doi.org/10.3390/s20185030 - 04 Sep 2020
Cited by 29 | Viewed by 3528
Abstract
This study focuses on driver-behavior identification and its application to finding embedded solutions in a connected car environment. We present a lightweight, end-to-end deep-learning framework for performing driver-behavior identification using in-vehicle controller area network (CAN-BUS) sensor data. The proposed method outperforms state-of-the-art driver-behavior profiling models. In particular, it exhibits significantly reduced computation (i.e., reduced numbers of both floating-point operations and parameters), more efficient memory usage (compact model size), and lower inference time. The proposed architecture features depth-wise convolution, along with augmented recurrent neural networks (long short-term memory or gated recurrent units), for time-series classification. The minimum time-step length (window size) required by the proposed method is significantly lower than that required by recent algorithms. We compared our results with compressed versions of existing models by applying efficient channel pruning on several layers of current models. Furthermore, our network can adapt to new classes using sparse-learning techniques, that is, by freezing relatively strong nodes at the fully connected layer for the existing classes and improving the weaker nodes by retraining them using data for the new classes. We successfully deployed the proposed method in a container environment using NVIDIA Docker on embedded systems (Xavier, TX2, and Nano) and comprehensively evaluated it with regard to numerous performance metrics.
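
The architectural pattern the abstract describes can be sketched as a depth-wise convolution over the CAN-BUS channels followed by a recurrent layer; the channel count, window size, and number of driver classes below are illustrative assumptions.

```python
# Sketch: depth-wise conv over CAN channels + GRU for time-series classification.
import torch
import torch.nn as nn

class DriverIDNet(nn.Module):
    def __init__(self, n_channels=20, hidden=64, n_drivers=10):
        super().__init__()
        # groups=n_channels makes the convolution depth-wise:
        # one filter per CAN signal, no cross-channel mixing at this stage
        self.dwconv = nn.Conv1d(n_channels, n_channels, kernel_size=5,
                                padding=2, groups=n_channels)
        self.gru = nn.GRU(n_channels, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_drivers)

    def forward(self, x):             # x: (batch, channels, time)
        x = torch.relu(self.dwconv(x))
        x = x.transpose(1, 2)         # -> (batch, time, channels) for the GRU
        _, h_n = self.gru(x)
        return self.head(h_n[-1])     # driver-identity logits

logits = DriverIDNet()(torch.randn(4, 20, 40))  # 40-step CAN windows
```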
