Force and Vision Perception for Intelligent Robots

A special issue of Electronics (ISSN 2079-9292). This special issue belongs to the section "Systems & Control Engineering".

Deadline for manuscript submissions: closed (15 February 2024) | Viewed by 4284

Special Issue Editors


Dr. Qiaokang Liang
Guest Editor
National Engineering Laboratory for Robot Vision Perception and Control, College of Electrical and Information Engineering, Hunan University, Changsha 410082, China
Interests: robotics and mechatronics; biomimetic sensing; advanced robot technology; human–computer interaction

Dr. Jianyong Long
Guest Editor
National Engineering Laboratory for Robot Vision Perception and Control, College of Electrical and Information Engineering, Hunan University, Changsha 410082, China
Interests: robotics and mechatronics; advanced robot technology; human–computer interaction

Dr. Wanneng Wu
Guest Editor
National Engineering Laboratory for Robot Vision Perception and Control Technology, College of Electrical and Information Engineering, Hunan University, Changsha 410082, China
Interests: robot vision; multi-target tracking; force sensors

Special Issue Information

Dear Colleagues,

Force sensing is a core technology in robotics, human–machine interaction, and virtual reality, encompassing force measurement, force feedback, force computing, and force control. It has become a high-priority research area for many countries in this century and, as a new mode of human–computer interaction, has been widely applied in virtual reality systems.

To promote the development of force sensing technology and its applications in robotics, virtual reality systems, and other fields, Electronics has set up this Special Issue, entitled "Force and Vision Perception for Intelligent Robots", to collect the latest research results, innovative applications, and comprehensive analyses of technical trends in force sensing. The details of the call are as follows:

Scope of the Special Issue (topics include, but are not limited to):

(1) Force sensor and multi-dimensional force measurement technology;

(2) Force feedback technology;

(3) Force interaction technology;

(4) Robot force control;

(5) Tactile sensors;

(6) Tactile information processing and target recognition;

(7) Tactile and visual information fusion;

(8) Bionic touch and electronic skin;

(9) Human force sensing modeling and analysis.

Dr. Qiaokang Liang
Dr. Jianyong Long
Dr. Wanneng Wu
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to the website; once registered, proceed to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Electronics is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • force sensor
  • robot force control
  • tactile sensors
  • tactile-visual information fusion

Published Papers (2 papers)


Research

15 pages, 2942 KiB  
Article
A Single-Tree Point Cloud Completion Approach of Feature Fusion for Agricultural Robots
by Dali Xu, Guangsheng Chen and Weipeng Jing
Electronics 2023, 12(6), 1296; https://doi.org/10.3390/electronics12061296 - 8 Mar 2023
Cited by 1 | Viewed by 1840
Abstract
With the continuous development of digital agriculture and intelligent forestry, the demand for three-dimensional modeling of trees or plants using agricultural robots is also increasing. Laser radar technology has gradually become an important technical means for agricultural robots to obtain three-dimensional information about trees. When using laser radar to scan trees, incomplete point cloud data are often obtained due to leaf occlusion, visual angle limitation, or operation error, which leads to quality degradation of the subsequent 3D modeling and quantitative analysis of trees. At present, a lot of research work has been carried out in the direction of point cloud completion, in which the deep learning model is the mainstream solution. However, the existing deep learning models have mainly been applied to urban scene completion or the point cloud completion of indoor regularized objects, and the research objects generally have obvious continuity and symmetry characteristics. There has been no relevant research on the point cloud completion method for objects with obvious individual morphological differences, such as trees. Therefore, this paper proposes a single-tree point cloud completion method based on feature fusion. This method uses PointNet, based on point structure, to extract the global features of trees, and EdgeConv, based on graph structure, to extract the local features of trees. After integrating global and local features, FoldingNet is used to realize the generation of a complete point cloud. Compared to other deep learning methods on the open source data set, the CD index using this method increased by 21.772% on average, and the EMD index increased by 15.672% on average, which proves the effectiveness of the method in this paper and provides a new solution for agricultural robots to obtain three-dimensional information about trees.
(This article belongs to the Special Issue Force and Vision Perception for Intelligent Robots)
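The feature-fusion pipeline this abstract describes (PointNet-style global features, EdgeConv-style local features, FoldingNet-style decoding) can be sketched in heavily simplified form. The sketch below is illustrative only: single random linear maps stand in for the trained networks, the toy point cloud stands in for a tree scan, and the function names are ours, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(0)

def pointnet_global_feature(points, W):
    # PointNet-style encoder: a per-point feature map (here one linear
    # layer + ReLU) followed by a symmetric max-pool over all points.
    per_point = np.maximum(points @ W, 0.0)        # (N, F)
    return per_point.max(axis=0)                   # (F,) global feature

def edgeconv_local_features(points, W, k=8):
    # EdgeConv-style encoder: for each point, featurize the edge vectors
    # (x_j - x_i) to its k nearest neighbours, then max over the
    # neighbourhood, capturing local geometric structure.
    d = np.linalg.norm(points[:, None] - points[None], axis=-1)
    idx = np.argsort(d, axis=1)[:, 1:k + 1]        # k nearest, excl. self
    edges = points[idx] - points[:, None]          # (N, k, 3)
    feats = np.maximum(edges @ W, 0.0)             # (N, k, F)
    return feats.max(axis=1)                       # (N, F) local features

def folding_decode(code, W_fold, grid_size=16):
    # FoldingNet-style decoder: deform a fixed 2-D grid into 3-D,
    # conditioned on the fused feature code.
    u = np.linspace(-1.0, 1.0, grid_size)
    grid = np.stack(np.meshgrid(u, u), -1).reshape(-1, 2)   # (G, 2)
    codes = np.repeat(code[None], grid.shape[0], axis=0)    # (G, C)
    return np.concatenate([grid, codes], axis=1) @ W_fold   # (G, 3)

# Toy "incomplete scan": 200 random 3-D points.
pts = rng.normal(size=(200, 3))
W_g = rng.normal(size=(3, 32))
W_l = rng.normal(size=(3, 32))

g = pointnet_global_feature(pts, W_g)              # (32,)
l = edgeconv_local_features(pts, W_l).max(axis=0)  # pooled local: (32,)
fused = np.concatenate([g, l])                     # (64,) fused code

W_fold = rng.normal(size=(2 + fused.size, 3))
completed = folding_decode(fused, W_fold)          # (256, 3) output cloud
print(completed.shape)
```

In the actual method each linear map would be a learned multi-layer perceptron and the folding operation is typically applied more than once; the sketch only shows how global and local codes are fused and then condition the grid-folding decoder.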

18 pages, 1557 KiB  
Article
Fatigue Driving Recognition Method Based on Multi-Scale Facial Landmark Detector
by Weichu Xiao, Hongli Liu, Ziji Ma, Weihong Chen, Changliang Sun and Bo Shi
Electronics 2022, 11(24), 4103; https://doi.org/10.3390/electronics11244103 - 9 Dec 2022
Cited by 5 | Viewed by 1909
Abstract
Fatigue driving behavior recognition in all-weather real driving environments is a challenging task. Accurate recognition of fatigue driving behavior is helpful to improve traffic safety. The facial landmark detector is crucial to fatigue driving recognition. However, existing facial landmark detectors are mainly aimed at stable front face color images instead of side face gray images, which is difficult to adapt to the fatigue driving behavior recognition in real dynamic scenes. To maximize the driver’s facial feature information and temporal characteristics, a fatigue driving behavior recognition method based on a multi-scale facial landmark detector (MSFLD) is proposed. First, a spatial pyramid pooling and multi-scale feature output (SPP-MSFO) detection model is built to obtain a face region image. The MSFLD is a lightweight facial landmark detector, which is composed of convolution layers, inverted bottleneck blocks, and multi-scale full connection layers to achieve accurate detection of 23 key points on the face. Second, the aspect ratios of the left eye, right eye and mouth are calculated in accordance with the coordinates of the key points to form a fatigue parameter matrix. Finally, the combination of adaptive threshold and statistical threshold is used to avoid misjudgment of fatigue driving recognition. The adaptive threshold is dynamic, which solves the problem of the difference in the aspect ratio of the eyes and mouths of different drivers. The statistical threshold is a supplement to solve the problem of driver’s low eye threshold and high mouth threshold. The proposed methods are evaluated on the Hunan University Fatigue Detection (HNUFDD) dataset. The proposed MSFLD achieves a normalized mean error value of 5.4518%, and the accuracy of the fatigue driving recognition method based on MSFLD achieves 99.1329%, which outperforms that of state-of-the-art methods.
(This article belongs to the Special Issue Force and Vision Perception for Intelligent Robots)
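The decision stage this abstract outlines — aspect ratios computed from detected landmarks, then a combination of a per-driver adaptive threshold and a fixed statistical threshold — can be illustrated with a minimal sketch. The threshold-combination rule and all parameter values below are assumptions for illustration, not the paper's exact formulation:

```python
import numpy as np

def aspect_ratio(landmarks):
    # landmarks: (4, 2) array = [left corner, right corner, top, bottom]
    # of an eye or mouth; the ratio is vertical extent / horizontal extent.
    left, right, top, bottom = np.asarray(landmarks, dtype=float)
    return np.linalg.norm(top - bottom) / np.linalg.norm(right - left)

def closed_eye_fraction(eye_ratios, stat_thresh=0.20, alpha=0.7):
    # Fraction of frames judged "eyes closed" (a PERCLOS-style measure).
    # Adaptive threshold: a fraction (alpha) of this driver's own typical
    # open-eye ratio, estimated as the median over the clip, so drivers
    # with naturally narrow or wide eyes get individualized thresholds.
    # Statistical threshold: a fixed floor that supplements the adaptive
    # one when the latter would be implausibly low.
    eye_ratios = np.asarray(eye_ratios, dtype=float)
    adaptive = alpha * np.median(eye_ratios)
    thresh = max(adaptive, stat_thresh)
    return float((eye_ratios < thresh).mean())

# A wide-open eye: 4 units wide, 1 unit tall -> ratio 0.25.
eye = [[0.0, 0.0], [4.0, 0.0], [2.0, 1.0], [2.0, 0.0]]
print(aspect_ratio(eye))                    # -> 0.25

# 8 open frames (ratio 0.30) and 2 closed frames (ratio 0.05):
# adaptive = 0.7 * 0.30 = 0.21, so the 2 low frames are flagged.
print(closed_eye_fraction([0.30] * 8 + [0.05] * 2))   # -> 0.2
```

In the actual method the same aspect-ratio computation is applied to the left eye, right eye, and mouth to form the fatigue parameter matrix, with separate thresholds per feature; the sketch shows only the single-feature case.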
