Advanced Sensing and Machine-Learning-Based Analysis of Human Behaviour and Physiology

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Sensors and Robotics".

Deadline for manuscript submissions: closed (30 June 2022) | Viewed by 44228

Special Issue Editors

Dr. Zhaojie Ju
Guest Editor
School of Computing, University of Portsmouth, Portsmouth PO1 3HE, UK
Interests: machine learning; pattern recognition; robotics

Dr. Dalin Zhou
Guest Editor
School of Computing, University of Portsmouth, Portsmouth, UK
Interests: biosensory data analysis; wearable sensors; haptics

Dr. Jinguo Liu
Guest Editor
Shenyang Institute of Automation, Chinese Academy of Sciences, Shenyang, China
Interests: intelligent robotics; machine learning; automation

Dr. Dingguo Zhang
Guest Editor
Department of Electronic and Electrical Engineering, University of Bath, Claverton Down, Bath BA2 7AY, UK
Interests: human–machine interface; rehabilitation robotics; biomechatronics

Dr. YongAn Huang
Guest Editor
Huazhong University of Science and Technology, Wuhan, China
Interests: flexible electronics sensors and manufacturing

Special Issue Information

Dear Colleagues,

Successful human–machine and human–robot interaction depends on adequate communication and understanding between humans and machines/robots during their contact. Recent developments in sensing and analysis technology have enabled more efficient human–machine/human–robot interaction. In particular, a good understanding of human behaviour and physiology allows machines/robots to interact with users more intuitively and in a human-centred manner, and this goal has attracted growing research interest. In response, advanced sensing technology (wearable sensing, remote sensing, multimodal sensing, and so on) in combination with machine-learning-based analysis (feature engineering, classic machine learning models, deep learning approaches, and so on) keeps advancing to accommodate the needs of human–machine/human–robot systems and their applications.

This Special Issue aims to gather the most recent developments in sensing- and machine-learning-based analysis, with a particular focus on human behaviour and physiology, to push forward the frontier of human–machine/human–robot interaction. The scope of this Special Issue includes, but is not limited to, the following areas:

  • Advanced sensory acquisition
  • Tactile sensor development
  • Wearable sensing devices
  • Remote sensing devices
  • Multimodal sensing
  • Human behaviour sensing
  • Physiology sensing and measurement
  • Sensing for human–machine interaction
  • Sensing for human–robot interaction
  • Machine-learning-based sensory data analysis
  • Deep-learning-based sensory data analysis
  • Neural networks for sensory interpretation
  • Computational intelligence in sensing and analysis

Dr. Zhaojie Ju
Dr. Dalin Zhou
Dr. Jinguo Liu
Dr. Dingguo Zhang
Dr. YongAn Huang
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Tactile sensing
  • Wearable sensing
  • Human behaviour sensing
  • Physiology sensing

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (6 papers)


Research

Jump to: Review

12 pages, 4834 KiB  
Article
Decoding Physical and Cognitive Impacts of Particulate Matter Concentrations at Ultra-Fine Scales
by Shawhin Talebi, David J. Lary, Lakitha O. H. Wijeratne, Bharana Fernando, Tatiana Lary, Matthew Lary, John Sadler, Arjun Sridhar, John Waczak, Adam Aker and Yichao Zhang
Sensors 2022, 22(11), 4240; https://doi.org/10.3390/s22114240 - 2 Jun 2022
Cited by 2 | Viewed by 3295
Abstract
The human body is an incredible and complex sensing system. Environmental factors trigger a wide range of automatic neurophysiological responses. Biometric sensors can capture these responses in real time, providing clues about the underlying biophysical mechanisms. In this prototype study, we demonstrate an experimental paradigm to holistically capture and evaluate the interactions between an environmental context and the physiological markers of an individual operating in that environment. A cyclist equipped with a biometric sensing suite is followed by an environmental survey vehicle during outdoor bike rides. The interactions between environment and physiology are then evaluated through the development of empirical machine learning models, which estimate particulate matter concentrations from biometric variables alone. Here, we show that biometric variables can be used to estimate particulate matter concentrations at ultra-fine spatial scales with high fidelity (r² = 0.91) and that smaller particles are better estimated than larger ones. Inferring environmental conditions solely from biometric measurements allows us to disentangle key interactions between the environment and the body. This work sets the stage for future investigations of these interactions for a larger number of factors, e.g., black carbon, CO2, NO/NO2/NOx, and ozone. By tapping into our body's 'built-in' sensing abilities, we can gain insights into how our environment influences our physical health and cognitive performance.
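As a rough illustration of the empirical modelling step described above (estimating particulate matter from biometric variables alone), the following Python sketch fits a regressor on a co-registered dataset. The file name, column names, and the choice of a random forest are assumptions for illustration, not the authors' exact pipeline.

```python
# Minimal sketch: estimate PM2.5 from synchronous biometric measurements.
# File and column names are hypothetical; the paper's exact features and
# model are not specified here.
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split

df = pd.read_csv("ride_data.csv")  # assumed: one row per synchronised timestamp
biometrics = ["heart_rate", "eeg_alpha_power", "skin_temp", "respiration_rate"]
X, y = df[biometrics], df["pm2_5_ug_m3"]  # target measured by the survey vehicle

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
model = RandomForestRegressor(n_estimators=500, random_state=0).fit(X_train, y_train)
print(f"held-out r^2 = {r2_score(y_test, model.predict(X_test)):.2f}")
```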

15 pages, 1716 KiB  
Article
Data-Driven EEG Band Discovery with Decision Trees
by Shawhin Talebi, John Waczak, Bharana A. Fernando, Arjun Sridhar and David J. Lary
Sensors 2022, 22(8), 3048; https://doi.org/10.3390/s22083048 - 15 Apr 2022
Cited by 8 | Viewed by 5069
Abstract
Electroencephalography (EEG) is a brain imaging technique in which electrodes are placed on the scalp. EEG signals are commonly decomposed into frequency bands called delta, theta, alpha, and beta. While these bands have been shown to be useful for characterizing various brain states, their utility as a one-size-fits-all analysis tool remains unclear. The goal of this work is to outline an objective strategy for discovering optimal EEG bands based on signal power spectra. A two-step data-driven methodology is presented for objectively determining the best EEG bands for a given dataset. First, a decision tree is used to estimate the optimal frequency band boundaries for reproducing the signal's power spectrum for a predetermined number of bands. The optimal number of bands is then determined using an Akaike Information Criterion (AIC)-inspired quality score that balances goodness-of-fit with a small band count. This data-driven approach led to better characterization of the underlying power spectrum by identifying bands that outperformed the more commonly used band boundaries by a factor of two. Additionally, key spectral components were isolated in dedicated frequency bands. The proposed method provides a fully automated and flexible approach to capturing key signal components and possibly discovering new indices of brain activity.
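The two-step procedure lends itself to a compact sketch: a regression tree with a fixed number of leaves is fitted to the log power spectrum, its split thresholds are read off as band boundaries, and an AIC-inspired score selects the band count. The synthetic signal, Welch parameters, and penalty weighting below are illustrative assumptions.

```python
# Sketch of data-driven band discovery: a regression tree approximates the
# log power spectrum with n_bands piecewise-constant segments; its split
# thresholds are the candidate band boundaries.
import numpy as np
from scipy.signal import welch
from sklearn.tree import DecisionTreeRegressor

def fit_bands(freqs, power, n_bands):
    tree = DecisionTreeRegressor(max_leaf_nodes=n_bands)
    tree.fit(freqs.reshape(-1, 1), np.log10(power))
    internal = tree.tree_.children_left != -1      # internal nodes hold the splits
    boundaries = np.sort(tree.tree_.threshold[internal])
    rss = np.sum((np.log10(power) - tree.predict(freqs.reshape(-1, 1))) ** 2)
    return boundaries, rss

def quality(rss, n_bands, n):
    # AIC-inspired score: goodness of fit penalised by band count
    # (the exact weighting is an assumption).
    return n * np.log(rss / n) + 2 * n_bands

# Synthetic stand-in: a 10 Hz alpha rhythm plus broadband noise.
fs = 256
t = np.arange(0, 60, 1 / fs)
eeg = np.sin(2 * np.pi * 10 * t) + 0.5 * np.random.default_rng(0).standard_normal(t.size)
freqs, power = welch(eeg, fs=fs, nperseg=1024)

scores = {k: quality(fit_bands(freqs, power, k)[1], k, freqs.size) for k in range(2, 9)}
best = min(scores, key=scores.get)
print("optimal band count:", best)
print("boundaries (Hz):", fit_bands(freqs, power, best)[0])
```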

10 pages, 5172 KiB  
Communication
Investigation on the Sampling Frequency and Channel Number for Force Myography Based Hand Gesture Recognition
by Guangtai Lei, Shenyilang Zhang, Yinfeng Fang, Yuxi Wang and Xuguang Zhang
Sensors 2021, 21(11), 3872; https://doi.org/10.3390/s21113872 - 3 Jun 2021
Cited by 13 | Viewed by 3231
Abstract
Force myography (FMG) is a method that uses pressure sensors to measure muscle contraction indirectly, and it is a valuable substitute for the conventional approach of using myoelectric signals in hand gesture recognition. To achieve gesture recognition at minimum cost, it is necessary to determine the minimum sampling frequency and the minimum number of channels. To investigate the effect of sampling frequency and channel number on the accuracy of gesture recognition, a 16-channel hardware system was designed to capture forearm FMG signals at a maximum sampling frequency of 1 kHz. Using this acquisition equipment, a force myography database containing data from 10 subjects was created. In this paper, gesture recognition accuracies under different sampling frequencies and channel numbers are reported. At a 1 kHz sampling rate with 16 channels, four of the five tested classifiers reach an accuracy of about 99%. Further experimental results indicate that: (1) the sampling frequency of the FMG signal can be as low as 5 Hz for the recognition of static movements; (2) reducing the number of channels has a large impact on accuracy, and the suggested channel number for gesture recognition is eight; and (3) the distribution of the sensors on the forearm affects recognition accuracy, and it may be possible to improve accuracy by optimizing sensor positions.
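The evaluation protocol, re-measuring classification accuracy while sweeping the channel count and the effective sampling rate, can be sketched as follows. Synthetic placeholder data, a simple mean-force feature, and a single LDA classifier stand in for the paper's 10-subject database and five classifiers, so the printed accuracies are meaningless here; only the sweep logic is illustrated.

```python
# Protocol sketch: gesture-classification accuracy as a function of channel
# count and effective sampling rate. Random placeholder data is used, so the
# accuracies printed are at chance level.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_windows, n_channels, fs, win_len = 600, 16, 1000, 200   # 0.2 s windows at 1 kHz
X = rng.normal(size=(n_windows, n_channels, win_len))     # stand-in FMG windows
y = rng.integers(0, 10, n_windows)                        # 10 gesture classes

for n_ch in (16, 8, 4):
    for factor in (1, 10, 200):                # 1000 Hz, 100 Hz, 5 Hz effective rate
        Xd = X[:, :n_ch, ::factor]             # drop channels, decimate in time
        feats = Xd.mean(axis=2)                # simple per-channel mean-force feature
        acc = cross_val_score(LinearDiscriminantAnalysis(), feats, y, cv=5).mean()
        print(f"{n_ch:2d} ch @ {fs // factor:4d} Hz -> accuracy {acc:.2f}")
```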

15 pages, 47064 KiB  
Article
How to Represent Paintings: A Painting Classification Using Artistic Comments
by Wentao Zhao, Dalin Zhou, Xinguo Qiu and Wei Jiang
Sensors 2021, 21(6), 1940; https://doi.org/10.3390/s21061940 - 10 Mar 2021
Cited by 13 | Viewed by 4141
Abstract
The goal of large-scale automatic painting analysis is to classify and retrieve images using machine learning techniques. Traditional methods apply computer vision techniques to the paintings themselves to enable computers to represent the art content. In this work, we propose using a graph convolutional network and artistic comments, rather than the painting colours, to classify the type, school, timeframe and author of paintings by applying natural language processing (NLP) techniques. First, we build a single artistic comment graph based on co-occurrence relations and document–word relations and then train an art graph convolutional network (ArtGCN) on the entire corpus. The nodes, which include the words and documents in the topological graph, are initialized using a one-hot representation; the embeddings are then learned jointly for both words and documents, supervised by the known-class training labels of the paintings. Through extensive experiments on different classification tasks using different input sources, we demonstrate that the proposed methods achieve state-of-the-art performance. In addition, ArtGCN can learn word and painting embeddings, and we find that these play a major role in describing the labels and retrieving paintings, respectively.
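The graph construction described in the abstract (document–word edges weighted by TF-IDF and word–word edges weighted by co-occurrence PMI, in the spirit of TextGCN) might look like the sketch below. The toy comments, window size, and tokenisation are assumptions, and training of the graph convolutional network itself is omitted.

```python
# Sketch of the heterogeneous text graph behind ArtGCN: document-word edges
# weighted by TF-IDF, word-word edges weighted by positive PMI over sliding
# windows (the TextGCN recipe). Toy data; the GCN layers are omitted.
import numpy as np
from collections import Counter
from itertools import combinations
from sklearn.feature_extraction.text import TfidfVectorizer

comments = ["a serene landscape in the dutch tradition",
            "portrait of a nobleman, oil on canvas",
            "a landscape with figures, oil on panel"]

vec = TfidfVectorizer()
tfidf = vec.fit_transform(comments).toarray()   # document-word edge weights
idx = {w: i for i, w in enumerate(vec.get_feature_names_out())}

window, pair_counts, word_counts, n_win = 5, Counter(), Counter(), 0
for doc in comments:
    toks = [t.strip(",.") for t in doc.lower().split()]
    toks = [t for t in toks if t in idx]
    for s in range(max(1, len(toks) - window + 1)):
        win = set(toks[s:s + window])
        n_win += 1
        word_counts.update(win)
        pair_counts.update(combinations(sorted(win), 2))

n_docs, n_words = len(comments), len(idx)
A = np.eye(n_docs + n_words)                    # self-loops; docs first, then words
A[:n_docs, n_docs:] = tfidf                     # doc -> word edges
A[n_docs:, :n_docs] = tfidf.T                   # word -> doc edges (symmetric)
for (w1, w2), c in pair_counts.items():
    pmi = np.log(c * n_win / (word_counts[w1] * word_counts[w2]))
    if pmi > 0:                                 # keep positive-PMI word-word edges
        i, j = n_docs + idx[w1], n_docs + idx[w2]
        A[i, j] = A[j, i] = pmi
print("adjacency shape:", A.shape)
```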

25 pages, 44165 KiB  
Article
A Bayesian Driver Agent Model for Autonomous Vehicles System Based on Knowledge-Aware and Real-Time Data
by Jichang Ma, Hui Xie, Kang Song and Hao Liu
Sensors 2021, 21(2), 331; https://doi.org/10.3390/s21020331 - 6 Jan 2021
Cited by 9 | Viewed by 5356
Abstract
A key research area in autonomous driving is how to model the driver's decision-making behaviour, since this is significant for self-driving vehicles in terms of traffic safety and efficiency. However, the uncertain characteristics of vehicle and pedestrian trajectories on urban roads pose severe challenges to the cognitive understanding and decision-making of autonomous vehicle systems in terms of accuracy and robustness. To overcome these problems, this paper proposes a Bayesian driver agent (BDA) model, a vision-based autonomous vehicle system with learning and inference methods inspired by the cognitive psychology of human drivers. Different from end-to-end learning methods and traditional rule-based methods, our approach breaks the driving system up into a scene recognition module and a decision inference module. The perception module, based on a multi-task convolutional neural network (CNN), takes a driver's-view image as its input and predicts the traffic scene's feature values. The decision module, based on a dynamic Bayesian network (DBN), then infers a decision from the traffic scene's feature values. To explore the validity of the BDA model, we performed experiments on a driving simulation platform. The BDA model extracts the scene feature values effectively and accurately predicts the probability distribution of the human driver's decision-making process through inference. Taking the lane-changing scenario as an example to verify the model, the intraclass correlation coefficient (ICC) between the BDA model and the human driver's decision process reached 0.984. This work suggests a line of research in scene perception and autonomous decision-making that may apply to autonomous vehicle systems.
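To illustrate the two-module structure, the sketch below substitutes a drastically simplified static naive-Bayes decision stage for the paper's dynamic Bayesian network, operating on binary scene features of the kind a perception CNN might emit. The feature set, priors, and conditional probabilities are invented for illustration.

```python
# Toy decision-inference stage: posterior over manoeuvres given binary scene
# features via Bayes' rule (a static naive-Bayes stand-in for the paper's DBN).
import numpy as np

decisions = ["keep_lane", "change_left", "change_right"]
prior = np.array([0.7, 0.15, 0.15])            # assumed manoeuvre prior

# P(feature = 1 | decision), one row per binary scene feature (invented values).
likelihoods = {
    "lead_vehicle_slow": np.array([0.2, 0.8, 0.8]),
    "left_lane_free":    np.array([0.5, 0.9, 0.3]),
    "right_lane_free":   np.array([0.5, 0.3, 0.9]),
}

def infer_decision(scene):
    """Posterior over manoeuvres given binary scene features."""
    post = prior.copy()
    for name, p1 in likelihoods.items():
        post *= p1 if scene[name] else (1.0 - p1)
    return post / post.sum()

# Scene features as the perception module might emit them.
scene = {"lead_vehicle_slow": 1, "left_lane_free": 1, "right_lane_free": 0}
for d, p in zip(decisions, infer_decision(scene)):
    print(f"P({d} | scene) = {p:.2f}")
```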

Review

Jump to: Research

21 pages, 437 KiB  
Review
Learning for a Robot: Deep Reinforcement Learning, Imitation Learning, Transfer Learning
by Jiang Hua, Liangcai Zeng, Gongfa Li and Zhaojie Ju
Sensors 2021, 21(4), 1278; https://doi.org/10.3390/s21041278 - 11 Feb 2021
Cited by 140 | Viewed by 21514
Abstract
Dexterous manipulation is an important part of realizing robot intelligence, but current manipulators can only perform simple tasks such as sorting and packing in structured environments. In view of this problem, this paper presents a state-of-the-art survey on intelligent robots with the capability of autonomous decision-making and learning. The paper first reviews the main achievements in robotics research, which were mainly based on breakthroughs in automatic control and mechanical hardware. With the evolution of artificial intelligence, much research has made further progress in adaptive and robust control. The survey reveals that the latest research in deep learning and reinforcement learning has paved the way for highly complex tasks to be performed by robots. Furthermore, deep reinforcement learning, imitation learning, and transfer learning in robot control are discussed in detail. Finally, major achievements based on these methods are summarized and analyzed thoroughly, and future research challenges are proposed.
