Sensor Applications on Face Analysis

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Physical Sensors".

Deadline for manuscript submissions: closed (30 April 2019) | Viewed by 40098

Special Issue Editors

School of Computing, Ulster University, Jordanstown, UK
Interests: machine learning; chemometrics; spectroscopy; sensors; food authentication; virus detection
AnyVision, Ltd, Belfast, United Kingdom
Interests: deep learning; face recognition; 3D face modeling; face detection; expression recognition
National Laboratory of Pattern Recognition, Institute of Automation, Chinese Academy of Sciences, Beijing, China
Interests: deep learning; face recognition; face detection; expression recognition; anti-spoofing

Special Issue Information

Dear Colleagues,

With the rise of embedded sensing and the Internet of Things (IoT), demand has grown for low-cost, low-power sensors across a wide variety of applications. At the same time, deep learning has made great progress in both theory and applications over the past few years. Face analysis, including face recognition, is among the most important applications of deep learning, and combining it with sensing will undoubtedly inspire further applications. However, current deep-learning-driven models cannot be used directly in sensor-based applications: large-scale unconstrained face recognition still needs to be improved, and the efficiency of current models is usually too low to meet the requirements of many sensor-based applications. This Special Issue aims to make a definitive statement about the state of the art by providing a significant collective contribution to this field of study. We therefore solicit original contributions that: (1) present state-of-the-art face analysis techniques; (2) develop novel methods and applications; (3) survey recent progress in this area; and (4) establish benchmark datasets.

Prof. Dr. Hui Wang
Dr. Guosheng Hu
Dr. Zhen Lei
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Face recognition using various sensors
  • Face detection
  • Facial landmark detection
  • Facial expression analysis
  • (Deep) model acceleration
  • 3D face modeling
  • Face attribute recognition (including age and gender)
  • Face hallucination and completion

Published Papers (7 papers)


Research

Jump to: Review

27 pages, 7741 KiB  
Article
Light Fields for Face Analysis
by Chiara Galdi, Valeria Chiesa, Christoph Busch, Paulo Lobato Correia, Jean-Luc Dugelay and Christine Guillemot
Sensors 2019, 19(12), 2687; https://doi.org/10.3390/s19122687 - 14 Jun 2019
Cited by 13 | Viewed by 3940
Abstract
The term “plenoptic” comes from the Latin plenus (“full”) + optic. The plenoptic function is the 7-dimensional function representing the intensity of the light observed from every position and direction in 3-dimensional space. Thanks to the plenoptic function, it is possible to define the direction of every ray in the light-field vector function. Imaging systems are evolving rapidly with the emergence of light-field-capturing devices; consequently, existing image-processing techniques need to be revisited to match the richer information they provide. This article explores the use of light fields for face analysis. This field of research is very recent but already includes several works reporting promising results. Such works deal with the main steps of face analysis and include, but are not limited to: face recognition; face presentation attack detection; facial soft-biometrics classification; and facial landmark detection. This article reviews the state of the art on light fields for face analysis, identifying future challenges and possible applications. Full article
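As background, the 7-dimensional plenoptic function mentioned in the abstract is commonly written as follows (a standard formulation from the light-field literature, not quoted from the article itself):

```latex
L = L(x, y, z, \theta, \phi, \lambda, t)
```

where $(x, y, z)$ is the observer's position, $(\theta, \phi)$ the viewing direction, $\lambda$ the wavelength, and $t$ time. Fixing $\lambda$ and $t$ and restricting rays to free space reduces this to the familiar 4-dimensional light field that current capture devices record.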
(This article belongs to the Special Issue Sensor Applications on Face Analysis)

22 pages, 3136 KiB  
Article
Real-Time Multi-Scale Face Detector on Embedded Devices
by Xu Zhao, Xiaoqing Liang, Chaoyang Zhao, Ming Tang and Jinqiao Wang
Sensors 2019, 19(9), 2158; https://doi.org/10.3390/s19092158 - 09 May 2019
Cited by 22 | Viewed by 4051
Abstract
Face detection is the basic step in video face analysis and has been studied for many years. However, achieving real-time performance on embedded devices with limited computational resources remains an open challenge. To address this problem, in this paper we propose a face detector, EagleEye, which strikes a good trade-off between high accuracy and fast speed on popular low-power embedded devices (e.g., the Raspberry Pi 3B+). EagleEye is designed to have a low count of floating-point operations (FLOPs) as well as sufficient capacity, and its accuracy is further improved without adding many FLOPs. Specifically, we design five strategies for building efficient face detectors with a good balance of accuracy and running speed. The first two strategies help build a detector with low computational complexity and sufficient capacity: we use convolution factorization to replace traditional convolutions with sparser depth-wise convolutions to save computation, and we use successive downsampling convolutions at the beginning of the face detection network. The latter three strategies significantly improve the accuracy of the lightweight detector at little additional computational cost: we design an efficient context module that exploits context information to benefit face detection, we adopt an information-preserving activation function to increase network capacity, and we use focal loss to further improve accuracy by better handling the class-imbalance problem. Experiments show that EagleEye outperforms other face detectors of the same order of computational cost in both runtime efficiency and accuracy. Full article
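The saving from the convolution-factorization strategy described in the abstract can be illustrated with a back-of-the-envelope count of multiply-accumulate operations; the layer sizes below are arbitrary examples, not taken from the EagleEye architecture:

```python
def standard_conv_macs(h, w, k, c_in, c_out):
    # MACs for a k x k standard convolution on an h x w feature map
    return h * w * k * k * c_in * c_out

def depthwise_separable_macs(h, w, k, c_in, c_out):
    # k x k depthwise pass (one filter per channel) + 1 x 1 pointwise pass
    return h * w * k * k * c_in + h * w * c_in * c_out

std = standard_conv_macs(56, 56, 3, 64, 64)
sep = depthwise_separable_macs(56, 56, 3, 64, 64)
print(f"standard: {std:,}  separable: {sep:,}  saving: {std / sep:.1f}x")
```

For a 3 x 3 kernel the saving approaches 1 / (1/c_out + 1/k^2), roughly 8x in this example, which is why depth-wise factorization is a common first step when targeting embedded hardware.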
(This article belongs to the Special Issue Sensor Applications on Face Analysis)

17 pages, 983 KiB  
Article
A Case Study of Facial Emotion Classification Using Affdex
by Martin Magdin, Ľubomír Benko and Štefan Koprda
Sensors 2019, 19(9), 2140; https://doi.org/10.3390/s19092140 - 09 May 2019
Cited by 29 | Viewed by 4712
Abstract
This paper focuses on the analysis of reactions captured by a face analysis system. The experiment was conducted on a sample of 50 university students. Each student was shown 100 random images, and the student's reaction to every image was recorded. The recorded reactions were subsequently compared with the expected reaction for each image. The results of the experiment revealed several imperfections of the face analysis system: it has difficulty classifying expressions and cannot detect or identify the inner emotions a person may experience when shown an image. Face analysis systems can only detect emotions that are expressed externally on the face through physiological changes in certain facial regions. Full article
(This article belongs to the Special Issue Sensor Applications on Face Analysis)

21 pages, 322 KiB  
Article
The Effectiveness of Depth Data in Liveness Face Authentication Using 3D Sensor Cameras
by Ghazel Albakri and Sharifa Alghowinem
Sensors 2019, 19(8), 1928; https://doi.org/10.3390/s19081928 - 24 Apr 2019
Cited by 16 | Viewed by 4418
Abstract
Even though biometric technology increases the security of the systems that use it, those systems are prone to spoof attacks, in which fraudulent biometrics are presented. To mitigate these risks, techniques for detecting the liveness of the biometric measure are employed. For example, in systems that use face authentication as the biometric, liveness is assured by estimating blood flow or by analysing the quality of the face image. Liveness assurance of the face using real depth data is rarely used in biometric devices or in the literature, even though depth datasets are available. This technique of employing 3D cameras for liveness in face authentication is therefore underexplored with respect to its vulnerabilities to spoofing attacks. This research reviews the literature on this aspect and then evaluates liveness detection to suggest solutions that account for the weaknesses found in detecting spoofing attacks. We conduct a proof-of-concept study assessing the liveness detection of 3D cameras in three devices; the results show that greater flexibility led to a higher rate of detected spoofing attacks. Nonetheless, it was found that selecting a wide depth range for the 3D camera is important for anti-spoofing security recognition systems, such as surveillance cameras used in airports. Therefore, to utilise the depth information and implement techniques that detect faces regardless of distance, a 3D camera with a long maximum depth range (e.g., 20 m) and high-resolution stereo cameras could be selected, which can have a positive impact on accuracy. Full article
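To illustrate why planar spoofs are separable from real faces in depth data, the sketch below flags a face crop as live when its depth relief exceeds what sensor noise on a flat print or screen would produce. The threshold values and the synthetic patches are hypothetical, not drawn from the study:

```python
import numpy as np

def is_live_face(depth_patch_mm, min_relief_mm=15.0, max_relief_mm=120.0):
    """Treat the crop as live if its 5th-95th percentile depth spread
    looks like genuine facial relief rather than a flat print/screen."""
    relief = np.percentile(depth_patch_mm, 95) - np.percentile(depth_patch_mm, 5)
    return min_relief_mm <= relief <= max_relief_mm

rng = np.random.default_rng(0)
# planar spoof: constant distance plus millimetre-level sensor noise
spoof = 600.0 + rng.normal(0.0, 1.0, (64, 64))
# real face: add roughly 4 cm of nose-to-cheek relief as a smooth bump
yy, xx = np.indices((64, 64))
face = spoof + 40.0 * np.exp(-((yy - 32) ** 2 + (xx - 32) ** 2) / 300.0)
```

A percentile spread is used instead of min/max so a few noisy depth pixels cannot flip the decision; the upper bound rejects crops whose depth range is too large to be a single face.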
(This article belongs to the Special Issue Sensor Applications on Face Analysis)

21 pages, 3002 KiB  
Article
Three-D Wide Faces (3DWF): Facial Landmark Detection and 3D Reconstruction over a New RGB–D Multi-Camera Dataset
by Marcos Quintana, Sezer Karaoglu, Federico Alvarez, Jose Manuel Menendez and Theo Gevers
Sensors 2019, 19(5), 1103; https://doi.org/10.3390/s19051103 - 04 Mar 2019
Cited by 6 | Viewed by 4961
Abstract
The latest advances in the deep learning paradigm and in 3D imaging systems have raised the need for more complete datasets that allow the exploitation of facial features such as pose, gender, or age. In this work, we propose a new facial dataset collected with an innovative RGB-D multi-camera setup whose optimization is presented and validated. 3DWF includes raw and registered 3D data for 92 persons, collected with devices ranging from low-cost RGB-D sensors to highly accurate commercial scanners. 3DWF provides a complete dataset with relevant and accurate visual information for different tasks related to facial properties, such as face tracking or 3D face reconstruction, by means of annotated, density-normalized 2K point clouds and RGB-D streams. In addition, we validate the reliability of our proposal with an original data-augmentation method based on a massive set of face meshes for facial landmark detection in the 2D domain, and with head-pose classification through common machine learning techniques, aimed at proving the alignment of the collected data. Full article
(This article belongs to the Special Issue Sensor Applications on Face Analysis)

24 pages, 1401 KiB  
Article
Histogram-Based CRC for 3D-Aided Pose-Invariant Face Recognition
by Liang Shi, Xiaoning Song, Tao Zhang and Yuquan Zhu
Sensors 2019, 19(4), 759; https://doi.org/10.3390/s19040759 - 13 Feb 2019
Cited by 9 | Viewed by 3121
Abstract
Traditional collaborative representation-based classification (CRC) algorithms for face recognition usually suffer from data uncertainty, especially under varying poses and illuminations. To address this issue, in this paper we design a new CRC method using a histogram statistical measurement (H-CRC) combined with a 3D morphable model (3DMM) for pose-invariant face classification. First, we fit a 3DMM to the raw images in the dictionary to reconstruct their 3D shapes and textures. The fitting results are used to render numerous virtual samples of 2D images frontalized from arbitrary poses. In contrast to other distance-based evaluation algorithms for collaborative (or sparse) representation-based methods, the histogram information of all the generated 2D face images is subsequently exploited. Second, we use histogram-based metric learning to evaluate the most similar neighbours of the test sample, aiming to obtain ideal results for pose-invariant face recognition using the designed histogram-based 3DMM model and an online pruning strategy, forming a unified 3D-aided CRC framework. The proposed method achieves desirable classification results on a set of well-known face databases, including ORL, Georgia Tech, FERET, FRGC, PIE and LFW. Full article
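The histogram comparison at the heart of such methods can be sketched as a normalized histogram intersection; this is a generic implementation of the classic similarity measure, not the authors' exact H-CRC metric:

```python
import numpy as np

def histogram_intersection(img_a, img_b, bins=32):
    """Similarity of two grayscale images with values in [0, 256):
    the overlap of their normalized intensity histograms,
    1.0 for identical distributions and 0.0 for disjoint ones."""
    ha, _ = np.histogram(img_a, bins=bins, range=(0, 256))
    hb, _ = np.histogram(img_b, bins=bins, range=(0, 256))
    ha = ha / ha.sum()
    hb = hb / hb.sum()
    return float(np.minimum(ha, hb).sum())
```

A pose-robust pipeline along these lines would score the frontalized test image against each rendered dictionary image and keep only the closest neighbours before the collaborative-representation step.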
(This article belongs to the Special Issue Sensor Applications on Face Analysis)

Review

Jump to: Research

27 pages, 2200 KiB  
Review
A Review on Automatic Facial Expression Recognition Systems Assisted by Multimodal Sensor Data
by Najmeh Samadiani, Guangyan Huang, Borui Cai, Wei Luo, Chi-Hung Chi, Yong Xiang and Jing He
Sensors 2019, 19(8), 1863; https://doi.org/10.3390/s19081863 - 18 Apr 2019
Cited by 123 | Viewed by 13291
Abstract
Facial Expression Recognition (FER) can be widely applied in various research areas, such as the diagnosis of mental diseases and the detection of human social/physiological interaction. With emerging advances in hardware and sensors, FER systems have been developed to support real-world application scenes rather than laboratory environments. Although laboratory-controlled FER systems achieve very high accuracy, around 97%, transferring the technology from the laboratory to real-world applications faces a great barrier of very low accuracy, approximately 50%. In this survey, we comprehensively discuss three significant challenges of unconstrained real-world environments, namely illumination variation, head pose, and subject dependence, which may not be resolved by analysing images/videos alone in an FER system. We focus on sensors that may provide extra information and help FER systems detect emotion in both static images and video sequences. We introduce three categories of sensors that may help improve the accuracy and reliability of an expression recognition system by tackling the challenges mentioned above. The first group is detailed-face sensors, which detect small dynamic changes of a face component; eye-trackers, for example, may help differentiate background noise from facial features. The second is non-visual sensors, such as audio, depth, and EEG sensors, which provide extra information in addition to the visual dimension and improve recognition reliability, for example under illumination variation and position shifts. The last is target-focused sensors, such as infrared thermal sensors, which can help FER systems filter out useless visual content and resist illumination variation. We also discuss methods of fusing the different inputs obtained from multimodal sensors in an emotion system.
We comparatively review the most prominent multimodal emotional expression recognition approaches and point out their advantages and limitations. We briefly introduce the benchmark datasets related to FER systems for each category of sensors and extend our survey to open challenges and issues. Meanwhile, we design a framework for an expression recognition system that uses multimodal sensor data (provided by the three categories of sensors) to provide complete information about emotions and to assist pure face image/video analysis. We theoretically analyse the feasibility and achievability of our new expression recognition system, especially for use in the wild, and point out future directions for designing an efficient emotional expression recognition system. Full article
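The multimodal-fusion step discussed in the survey can be illustrated with a simple decision-level (late) fusion of per-sensor class posteriors; the sensor names, class labels, and weights below are illustrative only:

```python
def late_fusion(posteriors, weights):
    """Weighted average of class-probability dicts from several sensors,
    renormalized so the fused scores sum to 1."""
    classes = {c for p in posteriors.values() for c in p}
    fused = {c: sum(weights[s] * posteriors[s].get(c, 0.0)
                    for s in posteriors)
             for c in classes}
    total = sum(fused.values())
    return {c: v / total for c, v in fused.items()}

# a camera-based FER score fused with an audio-based one
fused = late_fusion(
    {"camera": {"happy": 0.6, "neutral": 0.4},
     "audio":  {"happy": 0.3, "neutral": 0.7}},
    weights={"camera": 0.7, "audio": 0.3},
)
```

Late fusion keeps each sensor's pipeline independent, so an unreliable modality (e.g., video under poor illumination) can simply be down-weighted; feature-level fusion, by contrast, requires a joint model over all inputs.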
(This article belongs to the Special Issue Sensor Applications on Face Analysis)
