Editor’s Choice Articles

Editor’s Choice articles are based on recommendations by the scientific editors of MDPI journals from around the world. Editors select a small number of articles recently published in the journal that they believe will be particularly interesting to readers, or important in the respective research area. The aim is to provide a snapshot of some of the most exciting work published in the various research areas of the journal.

21 pages, 10533 KiB  
Review
Low-Cost Automatic Weather Stations in the Internet of Things
by Konstantinos Ioannou, Dimitris Karampatzakis, Petros Amanatidis, Vasileios Aggelopoulos and Ilias Karmiris
Information 2021, 12(4), 146; https://doi.org/10.3390/info12040146 - 29 Mar 2021
Cited by 39 | Viewed by 13136
Abstract
Automatic Weather Stations (AWS) are extensively used for gathering meteorological and climatic data. The World Meteorological Organization (WMO) provides publications with guidelines for the implementation, installation, and usage of these stations. Nowadays, in the new era of the Internet of Things, there is an ever-increasing need for automatic observing systems that can provide scientists with the real-time data required to design and apply proper environmental policy. In this paper, an extended review is performed of the technologies currently used to implement Automatic Weather Stations. Furthermore, we present the use of new emerging technologies such as the Internet of Things, Edge Computing, Deep Learning, and LPWAN in the implementation of future AWS-based observation systems. Finally, we present a case study and results from a testbed AWS (project AgroComp) developed by our research team. The results include test measurements from low-cost sensors installed on the unit and predictions produced by Deep Learning algorithms running locally.

56 pages, 12679 KiB  
Article
Multimodal Approaches for Indoor Localization for Ambient Assisted Living in Smart Homes
by Nirmalya Thakur and Chia Y. Han
Information 2021, 12(3), 114; https://doi.org/10.3390/info12030114 - 7 Mar 2021
Cited by 49 | Viewed by 5807
Abstract
This work makes multiple scientific contributions to the field of Indoor Localization for Ambient Assisted Living in Smart Homes. First, it presents a Big-Data-driven methodology that studies the multimodal components of user interactions and analyzes the data from Bluetooth Low Energy (BLE) beacons and BLE scanners to detect a user’s indoor location in a specific ‘activity-based zone’ during Activities of Daily Living. Second, it introduces a context-independent approach that can interpret accelerometer and gyroscope data from diverse behavioral patterns to detect the ‘zone-based’ indoor location of a user in any Internet of Things (IoT)-based environment. These two approaches achieved performance accuracies of 81.36% and 81.13%, respectively, when tested on a dataset. Third, it presents a methodology to detect the spatial coordinates of a user’s indoor position that outperforms all similar works in this field, as measured by the root mean squared error, one of the performance evaluation metrics in ISO/IEC 18305:2016, an international standard for testing Localization and Tracking Systems. Finally, it presents a comprehensive comparative study that includes Random Forest, Artificial Neural Network, Decision Tree, Support Vector Machine, k-NN, Gradient Boosted Trees, Deep Learning, and Linear Regression, to address the challenge of identifying the optimal machine learning approach for Indoor Localization.
(This article belongs to the Special Issue Pervasive Computing in IoT)
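To make the evaluation metric concrete: the root mean squared error over paired position estimates, the ISO/IEC 18305:2016 metric the abstract cites, can be sketched in a few lines. All coordinates below are hypothetical, not taken from the paper.

```python
import math

def rmse_2d(estimated, truth):
    """Root mean squared error over paired (x, y) position estimates,
    one of the point-accuracy metrics described in ISO/IEC 18305:2016."""
    if len(estimated) != len(truth) or not estimated:
        raise ValueError("need equal-length, non-empty coordinate lists")
    sq = [
        (ex - tx) ** 2 + (ey - ty) ** 2
        for (ex, ey), (tx, ty) in zip(estimated, truth)
    ]
    return math.sqrt(sum(sq) / len(sq))

# Hypothetical estimates vs. ground truth (metres):
est = [(1.0, 2.0), (3.5, 0.5)]
gt = [(1.0, 1.0), (3.0, 0.5)]
print(round(rmse_2d(est, gt), 3))
```

Lower values indicate estimates closer to ground truth; the metric penalizes large individual errors quadratically.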

20 pages, 1449 KiB  
Article
Internet of Things: A General Overview between Architectures, Protocols and Applications
by Marco Lombardi, Francesco Pascale and Domenico Santaniello
Information 2021, 12(2), 87; https://doi.org/10.3390/info12020087 - 19 Feb 2021
Cited by 136 | Viewed by 24657
Abstract
In recent years, the number of devices connected to the internet has grown significantly. These devices can interact with the external environment and with human beings through a wide range of sensors that, perceiving reality through the digitization of certain parameters of interest, can provide an enormous amount of data. All these data are then shared on the network with other devices and with different applications and infrastructures. This dynamic and ever-changing world underlies the Internet of Things (IoT) paradigm. To date, countless IoT-based applications have been developed; think of smart cities, smart roads, and smart industries. This article analyzes the current architectures, technologies, protocols, and applications that characterize the paradigm.
(This article belongs to the Special Issue Wireless IoT Network Protocols)

15 pages, 3391 KiB  
Article
The Evolution of Language Models Applied to Emotion Analysis of Arabic Tweets
by Nora Al-Twairesh
Information 2021, 12(2), 84; https://doi.org/10.3390/info12020084 - 17 Feb 2021
Cited by 31 | Viewed by 4380
Abstract
The field of natural language processing (NLP) has witnessed a boom in language representation models with the introduction of pretrained language models that are trained on massive textual data and then fine-tuned on downstream NLP tasks. In this paper, we aim to study the evolution of language representation models by analyzing their effect on an under-researched NLP task, emotion analysis, for a low-resource language, Arabic. Most studies in the field of affect analysis have focused on sentiment analysis, i.e., classifying text into valence (positive, negative, neutral), while few go further to analyze the finer-grained emotional states (happiness, sadness, anger, etc.). Emotion analysis is a text classification problem that is tackled using machine learning techniques, with different language representation models used as features for the machine learning models to learn from. In this paper, we perform an empirical study on the evolution of language models, from the traditional term frequency–inverse document frequency (TF–IDF) to the more sophisticated word embedding model word2vec, and finally the recent state-of-the-art pretrained language model, bidirectional encoder representations from transformers (BERT). We observe and analyze how the performance increases as we change the language model. We also investigate different BERT models for Arabic. We find that the best performance is achieved with the ArabicBERT large model, a BERT model trained on a large corpus of Arabic text. The increase in F1-score was significant, between 7% and 21%.
(This article belongs to the Special Issue Natural Language Processing for Social Media)
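As background for the model evolution the abstract describes, the traditional baseline in that progression, TF–IDF weighting, can be computed from scratch in a few lines. The toy documents below are illustrative only, not from the paper's Arabic dataset.

```python
import math
from collections import Counter

def tfidf(docs):
    """Minimal TF-IDF: term frequency x inverse document frequency.
    Returns one {term: weight} dict per document."""
    tokenized = [doc.lower().split() for doc in docs]
    df = Counter()                      # document frequency per term
    for toks in tokenized:
        df.update(set(toks))
    n = len(docs)
    vectors = []
    for toks in tokenized:
        tf = Counter(toks)
        vectors.append({
            t: (c / len(toks)) * math.log(n / df[t])
            for t, c in tf.items()
        })
    return vectors

docs = ["happy happy news", "sad news"]
v = tfidf(docs)
# "news" appears in every document, so its IDF (and weight) is 0
```

Unlike word2vec or BERT, this representation carries no semantic similarity between terms, which is why the abstract's progression toward contextual embeddings improves emotion classification.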

26 pages, 5939 KiB  
Article
An Ambient Intelligence-Based Human Behavior Monitoring Framework for Ubiquitous Environments
by Nirmalya Thakur and Chia Y. Han
Information 2021, 12(2), 81; https://doi.org/10.3390/info12020081 - 14 Feb 2021
Cited by 73 | Viewed by 7996
Abstract
This framework for human behavior monitoring aims to take a holistic approach to study, track, monitor, and analyze human behavior during activities of daily living (ADLs). The framework consists of two novel functionalities. First, it can perform semantic analysis of user interactions with diverse contextual parameters during ADLs to identify a list of distinct behavioral patterns associated with different complex activities. Second, it consists of an intelligent decision-making algorithm that can analyze these behavioral patterns and their relationships with the dynamic contextual and spatial features of the environment to detect any anomalies in user behavior that could constitute an emergency. These functionalities of this interdisciplinary framework were developed by integrating the latest advancements and technologies in human–computer interaction, machine learning, Internet of Things, pattern recognition, and ubiquitous computing. The framework was evaluated on a dataset of ADLs, and the performance accuracies of these two functionalities were found to be 76.71% and 83.87%, respectively. The results uphold the relevance and immense potential of this framework to contribute towards improving the quality of life and assisted living of the aging population in future Internet of Things (IoT)-based ubiquitous living environments, e.g., smart homes.
(This article belongs to the Special Issue Pervasive Computing in IoT)

25 pages, 2206 KiB  
Review
Supply Chain Disruption Risk Management with Blockchain: A Dynamic Literature Review
by Niloofar Etemadi, Yari Borbon-Galvez, Fernanda Strozzi and Tahereh Etemadi
Information 2021, 12(2), 70; https://doi.org/10.3390/info12020070 - 7 Feb 2021
Cited by 84 | Viewed by 15790
Abstract
The purpose of this review is to describe the landscape of scientific literature, enriched by an author-keyword analysis, on developing and testing blockchain’s capabilities for enhancing supply chain resilience in times of increased risk and uncertainty. The review adopts a dynamic quantitative bibliometric method called systematic literature network analysis (SLNA) to extract and analyze the papers. The procedure consists of two methods: a systematic literature review (SLR) and bibliometric network analysis (BNA). This paper provides an important contribution to the literature on applying blockchain as a key component of cyber supply chain risk management (CSRM) to manage and predict disruption risks, leading to resilience and robustness of the supply chain. The review also sheds light on different research areas, such as the potential of blockchain for privacy and security challenges, the security of smart contracts, the monitoring of counterfeiting, and traceability database systems that ensure food safety and security.
(This article belongs to the Section Review)

12 pages, 1367 KiB  
Article
Multi-Sensor Data Fusion Algorithm for Indoor Fire Early Warning Based on BP Neural Network
by Lesong Wu, Lan Chen and Xiaoran Hao
Information 2021, 12(2), 59; https://doi.org/10.3390/info12020059 - 30 Jan 2021
Cited by 59 | Viewed by 6944
Abstract
Fire early warning is an important way to deal with the faster burning rate of modern home fires and to ensure the safety of residents’ lives and property. To improve real-time fire alarm performance, this paper proposes an indoor fire early warning algorithm based on a back-propagation neural network. The early warning algorithm fuses temperature, smoke concentration, and carbon monoxide data collected by sensors and outputs the probability of fire occurrence. In this study, non-uniform sampling and trend extraction were used to enhance the ability to distinguish fire signals from environmental interference. Data from six sets of standard test fire scenarios and six sets of no-fire scenarios were used to test the proposed algorithm. The test results show that the algorithm correctly raises an alarm for the six standard test fires among these 12 scenarios, and the fire detection time is shortened by 32%.
(This article belongs to the Special Issue Industrial Wireless Networks: Algorithms, Protocols and Applications)
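A hedged sketch of the fusion idea: a tiny fully connected network mapping three normalized sensor readings to a single fire probability. The weights below are illustrative placeholders, not the trained parameters from the paper, which a real system would learn via back propagation on labelled fire and no-fire data.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def fire_probability(temp, smoke, co, w_hidden, b_hidden, w_out, b_out):
    """Forward pass of a tiny fully connected network fusing three
    normalized sensor readings into one fire-probability output."""
    inputs = (temp, smoke, co)
    hidden = [
        sigmoid(sum(w * x for w, x in zip(ws, inputs)) + b)
        for ws, b in zip(w_hidden, b_hidden)
    ]
    return sigmoid(sum(w * h for w, h in zip(w_out, hidden)) + b_out)

# Illustrative (untrained) weights -- purely hypothetical values:
p = fire_probability(
    temp=0.9, smoke=0.8, co=0.7,               # normalized readings
    w_hidden=[(1.2, 1.0, 0.8), (0.5, 1.5, 1.0)],
    b_hidden=[-1.0, -1.2],
    w_out=[2.0, 2.0],
    b_out=-2.0,
)
assert 0.0 < p < 1.0   # output is a probability-like score
```

Fusing the three channels lets correlated evidence (heat plus smoke plus CO) push the output higher than any single sensor could, which is the premise of the multi-sensor approach.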

11 pages, 2550 KiB  
Article
Lightweight End-to-End Neural Network Model for Automatic Heart Sound Classification
by Tao Li, Yibo Yin, Kainan Ma, Sitao Zhang and Ming Liu
Information 2021, 12(2), 54; https://doi.org/10.3390/info12020054 - 26 Jan 2021
Cited by 25 | Viewed by 3378
Abstract
Heart sounds play an important role in the initial screening of heart diseases. However, accurate diagnosis from heart sound signals requires doctors to have many years of clinical experience and relevant professional knowledge. In this study, we propose an end-to-end lightweight neural network model that does not require heart sound segmentation and has very few parameters. We segmented the original heart sound signal and performed a short-time Fourier transform (STFT) to obtain frequency-domain features. These features were fed to an improved two-dimensional convolutional neural network (CNN) model for feature learning and classification. Considering the imbalance of positive and negative samples, we introduced focal loss as the loss function and verified our network model with multiple random validations, thereby obtaining a better classification result. Our main purpose is to design a lightweight network structure that is easy to implement in hardware. Compared with the results in the latest literature, our model uses only 4.29 K parameters, one tenth the size of the state-of-the-art work.
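The STFT step the abstract mentions can be sketched naively as framing plus a discrete Fourier transform per frame. The toy signal below is illustrative, not actual heart sound data, and frame/hop sizes are arbitrary.

```python
import cmath

def stft_magnitudes(signal, frame_len, hop):
    """Naive short-time Fourier transform: slide a window over the
    signal and take the DFT magnitude of each frame."""
    frames = []
    for start in range(0, len(signal) - frame_len + 1, hop):
        frame = signal[start:start + frame_len]
        mags = []
        for k in range(frame_len // 2 + 1):   # non-negative bins only
            z = sum(
                x * cmath.exp(-2j * cmath.pi * k * n / frame_len)
                for n, x in enumerate(frame)
            )
            mags.append(abs(z))
        frames.append(mags)
    return frames

# 8-sample toy signal: one full cycle of a cosine
sig = [cmath.cos(2 * cmath.pi * n / 8).real for n in range(8)]
spec = stft_magnitudes(sig, frame_len=8, hop=8)
# energy concentrates in bin k=1 (one cycle per frame)
```

The resulting time-frequency grid is what a 2D CNN, as in the paper, can treat as an image-like input.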

17 pages, 9497 KiB  
Article
Text Classification Based on Convolutional Neural Networks and Word Embedding for Low-Resource Languages: Tigrinya
by Awet Fesseha, Shengwu Xiong, Eshete Derb Emiru, Moussa Diallo and Abdelghani Dahou
Information 2021, 12(2), 52; https://doi.org/10.3390/info12020052 - 25 Jan 2021
Cited by 75 | Viewed by 7164
Abstract
This article studies convolutional neural networks for Tigrinya (also referred to as Tigrigna), a Semitic language spoken in Eritrea and northern Ethiopia. Tigrinya is a “low-resource” language, notable for the absence of comprehensive and free data. Furthermore, like other Semitic languages, it is characterized as one of the most semantically and syntactically complex languages in the world. To the best of our knowledge, no previous research has been conducted on the state-of-the-art embedding technique shown here. We investigate which word representation methods perform better for single-label text classification problems, which are common when dealing with morphologically rich and complex languages. Two datasets are used here: a manually annotated one containing 30,000 Tigrinya news texts from various sources, with the six categories “sport”, “agriculture”, “politics”, “religion”, “education”, and “health”, and an unannotated corpus containing more than six million words. In this paper, we explore pretrained word embedding architectures using various convolutional neural networks (CNNs) to predict class labels. We construct a CNN with a continuous bag-of-words (CBOW) method, a CNN with a skip-gram method, and CNNs with and without word2vec and FastText to evaluate Tigrinya news articles. We also compare the CNN results with traditional machine learning models and evaluate the results in terms of accuracy, precision, recall, and F1 score. The CBOW CNN with word2vec achieves the best accuracy, 93.41%, significantly improving the accuracy of Tigrinya news classification.
(This article belongs to the Special Issue Natural Language Processing for Social Media)

28 pages, 1680 KiB  
Article
Comparative Study of Energy Efficient Routing Techniques in Wireless Sensor Networks
by Rachid Zagrouba and Amine Kardi
Information 2021, 12(1), 42; https://doi.org/10.3390/info12010042 - 18 Jan 2021
Cited by 94 | Viewed by 8678
Abstract
This paper surveys energy-efficient routing protocols in wireless sensor networks (WSNs). It provides a classification and comparison following a newly proposed taxonomy distinguishing nine categories of protocols, namely: latency-aware and energy-efficient routing, next-hop selection, network architecture, initiator of communication, network topology, protocol operation, delivery mode, path establishment, and application type. We analyze each class, discuss its representative routing protocols (mechanisms, advantages, disadvantages…), and compare them based on different parameters under the appropriate class. Simulation results for the LEACH, Mod-LEACH, iLEACH, E-DEEC, multichain-PEGASIS, and M-GEAR protocols, conducted under the NS3 simulator, show that the routing task must be based on various intelligent techniques to enhance the network lifespan and guarantee better coverage of the sensing area.
(This article belongs to the Special Issue Wireless IoT Network Protocols)
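For context on the LEACH family compared above: classic LEACH elects cluster heads with a per-round probability threshold so that each node serves as head once per epoch of 1/p rounds. A minimal sketch of that election, with illustrative parameters rather than any surveyed protocol's exact settings, might be:

```python
import random

def leach_threshold(p, r):
    """LEACH cluster-head election threshold T(n) for round r, given a
    desired cluster-head fraction p, for nodes that have not yet served
    as head in the current epoch of 1/p rounds."""
    return p / (1 - p * (r % int(1 / p)))

def elect_cluster_heads(node_ids, p, r, rng):
    """Each eligible node draws a uniform number and becomes a cluster
    head when the draw falls below the round's threshold."""
    t = leach_threshold(p, r)
    return [n for n in node_ids if rng.random() < t]

rng = random.Random(42)                      # fixed seed for repeatability
heads = elect_cluster_heads(range(100), p=0.1, r=0, rng=rng)
# on average about p * 100 = 10 nodes become heads in round 0
```

Rotating the head role this way spreads the energy-hungry aggregation and transmission duties across the network, which is the lifespan-extension idea the surveyed variants refine.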

32 pages, 1151 KiB  
Review
Identifying Fake News on Social Networks Based on Natural Language Processing: Trends and Challenges
by Nicollas R. de Oliveira, Pedro S. Pisa, Martin Andreoni Lopez, Dianne Scherly V. de Medeiros and Diogo M. F. Mattos
Information 2021, 12(1), 38; https://doi.org/10.3390/info12010038 - 18 Jan 2021
Cited by 73 | Viewed by 22664
Abstract
The epidemic spread of fake news is a side effect of the expansion of social networks to circulate news, in contrast to traditional mass media such as newspapers, magazines, radio, and television. Humans’ inefficiency in distinguishing between true and false facts makes fake news a threat to logical truth, democracy, journalism, and the credibility of government institutions. In this paper, we survey methods for preprocessing data in natural language, vectorization, dimensionality reduction, machine learning, and quality assessment of information retrieval. We also contextualize the identification of fake news and discuss research initiatives and opportunities.
(This article belongs to the Special Issue Decentralization and New Technologies for Social Media)

11 pages, 1805 KiB  
Article
Discriminant Analysis of Voice Commands in the Presence of an Unmanned Aerial Vehicle
by Marzena Mięsikowska
Information 2021, 12(1), 23; https://doi.org/10.3390/info12010023 - 8 Jan 2021
Cited by 4 | Viewed by 2500
Abstract
The aim of this study was to perform discriminant analysis of voice commands in the presence of an unmanned aerial vehicle equipped with four rotating propellers, as well as to obtain background sound levels and speech intelligibility. The measurements were taken in laboratory conditions, both in the absence and in the presence of the unmanned aerial vehicle. Discriminant analysis of the speech commands left, right, up, down, forward, backward, start, and stop was performed based on mel-frequency cepstral coefficients. Ten male speakers took part in the experiment. The unmanned aerial vehicle hovered at a height of 1.8 m during the recordings, at a distance of 2 m from the speaker and 0.3 m above the measuring equipment. Discriminant analysis based on mel-frequency cepstral coefficients achieved a promising classification rate of 76.2% for male speakers. The speech intelligibility and sound levels obtained during the recordings did not preclude verbal communication with the unmanned aerial vehicle for male speakers.
(This article belongs to the Special Issue UAVs for Smart Cities: Protocols, Applications, and Challenges)
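The mel-frequency front end underlying such coefficients starts from the mel scale, which spaces filters densely at low frequencies where speech cues concentrate. A minimal sketch of the standard HTK-style Hz-to-mel conversion and filter spacing, illustrative rather than the author's exact configuration, is:

```python
import math

def hz_to_mel(f_hz):
    """Convert frequency in Hz to the mel scale (HTK formula)."""
    return 2595.0 * math.log10(1.0 + f_hz / 700.0)

def mel_filter_edges(f_min, f_max, n_filters):
    """Edge/centre frequencies (Hz) of triangular mel filters spaced
    evenly on the mel scale -- the first step toward MFCCs."""
    m_min, m_max = hz_to_mel(f_min), hz_to_mel(f_max)
    mels = [m_min + i * (m_max - m_min) / (n_filters + 1)
            for i in range(n_filters + 2)]
    # invert the mel formula to get back to Hz
    return [700.0 * (10 ** (m / 2595.0) - 1.0) for m in mels]

edges = mel_filter_edges(0.0, 8000.0, n_filters=10)
# edges increase monotonically and are denser at low frequencies
```

Log filterbank energies followed by a discrete cosine transform would then yield the cepstral coefficients used as discriminant-analysis features.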

14 pages, 501 KiB  
Article
Concept of an Ontology for Automated Vehicle Behavior in the Context of Human-Centered Research on Automated Driving Styles
by Johannes Ossig, Stephanie Cramer and Klaus Bengler
Information 2021, 12(1), 21; https://doi.org/10.3390/info12010021 - 8 Jan 2021
Cited by 8 | Viewed by 3784
Abstract
In human-centered research on automated driving, it is common practice to describe vehicle behavior by means of terms and definitions related to non-automated driving. However, some of these definitions are not suitable for this purpose. This paper presents an ontology for automated vehicle behavior that takes into account a large number of existing definitions and previous studies. The ontology is characterized by its applicability to various levels of automated driving and a clear conceptual distinction between the characteristics of vehicle occupants, the automation system, and the conventional characteristics of a vehicle. In this context, the terms ‘driveability’, ‘driving behavior’, ‘driving experience’, and especially ‘driving style’, which are commonly associated with non-automated driving, play an important role. In order to clarify the relationships between these terms, the ontology is integrated into a driver-vehicle system. Finally, the ontology is used to derive recommendations for the future design of automated driving styles and, more generally, for further human-centered research on automated driving.

15 pages, 2337 KiB  
Article
Driver Drowsiness Estimation Based on Factorized Bilinear Feature Fusion and a Long-Short-Term Recurrent Convolutional Network
by Shuang Chen, Zengcai Wang and Wenxin Chen
Information 2021, 12(1), 3; https://doi.org/10.3390/info12010003 - 22 Dec 2020
Cited by 27 | Viewed by 4450
Abstract
The effective detection of driver drowsiness is an important measure to prevent traffic accidents. Most existing drowsiness detection methods use only a single facial feature to identify fatigue status, ignoring the complex correlations between fatigue features and the temporal information they carry, which reduces recognition accuracy. To solve these problems, we propose a driver sleepiness estimation model based on factorized bilinear feature fusion and a long short-term recurrent convolutional network to detect driver drowsiness efficiently and accurately. The proposed framework includes three models: fatigue feature extraction, fatigue feature fusion, and driver drowsiness detection. First, we used a convolutional neural network (CNN) to extract deep representations of eye- and mouth-related fatigue features from the face area detected in each video frame. Then, based on the factorized bilinear feature fusion model, we performed a nonlinear fusion of the deep feature representations of the eyes and mouth. Finally, we fed the series of fused frame-level features into a long short-term memory (LSTM) unit to obtain the temporal information of the features and used a softmax classifier to detect sleepiness. The proposed framework was evaluated on the National Tsing Hua University drowsy driver detection (NTHU-DDD) video dataset. The experimental results showed that this method has better stability and robustness than other methods.

14 pages, 1444 KiB  
Article
Application of Blockchain Technology in Dynamic Resource Management of Next Generation Networks
by Michael Xevgenis, Dimitrios G. Kogias, Panagiotis Karkazis, Helen C. Leligou and Charalampos Patrikakis
Information 2020, 11(12), 570; https://doi.org/10.3390/info11120570 - 6 Dec 2020
Cited by 22 | Viewed by 4003
Abstract
With the advent of Software Defined Networking (SDN) and Network Function Virtualization (NFV) technologies, networking infrastructures are becoming increasingly agile in their attempts to offer the quality of service needed by users while maximizing the efficiency of infrastructure utilization. This in essence mandates the statistical multiplexing of demands across the infrastructures of different Network Providers (NPs), which would allow them to cope with increasing demand while upgrading their infrastructures at a slower pace. However, to enjoy the benefits of statistical multiplexing, a trusted authority to govern it would be required. At the same time, blockchain technology offers a solid advantage in such untrusted environments, enabling the development of decentralized solutions that ensure the integrity and immutability of the information stored in the digital ledger. To this end, in this paper, we propose a blockchain-based solution that allows NPs to trade their (processing and networking) resources. We implemented the solution in a test-bed deployed on the cloud and present the gathered performance results, showing that a blockchain-based solution is feasible and appropriate. We also discuss further improvements and challenges.
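The core integrity property such a ledger relies on can be illustrated with a minimal hash-chained record of resource trades. This is a simplified sketch, not the authors' implementation; the field names (`seller`, `vcpus`, `mbps`) are hypothetical.

```python
import hashlib
import json

def append_trade(chain, trade):
    """Append a resource-trade record to a hash-chained ledger: each
    block stores the hash of its predecessor, so tampering with any
    earlier trade invalidates every later block."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    body = {"trade": trade, "prev": prev_hash}
    digest = hashlib.sha256(
        json.dumps(body, sort_keys=True).encode()
    ).hexdigest()
    chain.append({**body, "hash": digest})
    return chain

def verify(chain):
    """Recompute every hash and check each link to the previous block."""
    prev = "0" * 64
    for block in chain:
        body = {"trade": block["trade"], "prev": block["prev"]}
        recomputed = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()).hexdigest()
        if block["prev"] != prev or block["hash"] != recomputed:
            return False
        prev = block["hash"]
    return True

ledger = []
append_trade(ledger, {"seller": "NP-A", "buyer": "NP-B", "vcpus": 8})
append_trade(ledger, {"seller": "NP-B", "buyer": "NP-C", "mbps": 500})
assert verify(ledger)
ledger[0]["trade"]["vcpus"] = 64    # tamper with an earlier trade
assert not verify(ledger)
```

A real deployment adds consensus and smart-contract logic on top, but this chaining is what removes the need for a single trusted authority to vouch for the trade history.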

24 pages, 12563 KiB  
Article
Using UAV Based 3D Modelling to Provide Smart Monitoring of Road Pavement Conditions
by Ronald Roberts, Laura Inzerillo and Gaetano Di Mino
Information 2020, 11(12), 568; https://doi.org/10.3390/info11120568 - 4 Dec 2020
Cited by 37 | Viewed by 5754
Abstract
Road pavements need adequate maintenance to ensure that their conditions are kept in a good state throughout their lifespans. For this to be possible, authorities need efficient and effective databases with up-to-date and relevant road condition information. However, obtaining this information can be very difficult and costly, yet for smart city applications it is vital. Currently, many authorities make maintenance decisions by assuming road conditions, which leads to poor maintenance plans and strategies. This study explores a pathway to obtain key information on a roadway by utilizing drone imagery to replicate the roadway as a 3D model. The study validates this by using structure-from-motion techniques to replicate roads from drone imagery of a real road section. Using the 3D models, flexible segmentation strategies are exploited to understand road conditions and assess the level of degradation of the road. The study presents a practical pipeline to do this, one that can be implemented by different authorities and that will provide them with the key information they need. With this information, authorities can make more effective road maintenance decisions without the need for expensive workflows, while exploiting smart monitoring of road structures.
(This article belongs to the Special Issue UAVs for Smart Cities: Protocols, Applications, and Challenges)

15 pages, 318 KiB  
Article
Detection of Atrial Fibrillation Using a Machine Learning Approach
by Sidrah Liaqat, Kia Dashtipour, Adnan Zahid, Khaled Assaleh, Kamran Arshad and Naeem Ramzan
Information 2020, 11(12), 549; https://doi.org/10.3390/info11120549 - 26 Nov 2020
Cited by 47 | Viewed by 6620
Abstract
Atrial fibrillation (AF) is one of the most well-known cardiac arrhythmias in clinical practice, with a prevalence of 1–2% in the community, and it can increase the risk of stroke and myocardial infarction. Detecting AF from the electrocardiogram (ECG) can improve early diagnosis. In this paper, we further develop a framework for processing the ECG signal in order to determine AF episodes. We implemented machine learning and deep learning algorithms to detect AF. The experimental results show that better performance can be achieved with long short-term memory (LSTM) compared with other algorithms. The initial experimental results illustrate that deep learning algorithms, such as LSTM and convolutional neural networks (CNN), achieve better performance (by 10%) than machine learning classifiers, such as support vector machines, logistic regression, etc. This preliminary work can help clinicians detect AF with high accuracy and a lower probability of error, which can ultimately result in a reduction in the fatality rate.
(This article belongs to the Special Issue Signal Processing and Machine Learning)
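As a framework-agnostic illustration of one preprocessing step such a pipeline needs (the sampling rate, window length, and step below are illustrative, not taken from the paper), an ECG trace can be split into fixed-length overlapping segments before being fed to an LSTM, CNN, or classical classifier:

```python
def segment_ecg(signal, fs=300, window_s=5, step_s=2.5):
    """Split a 1-D ECG trace into fixed-length overlapping windows.

    Each window becomes one training/inference example for a
    classifier (LSTM, CNN, or a classical model).
    """
    win = int(fs * window_s)
    step = int(fs * step_s)
    return [signal[i:i + win]
            for i in range(0, len(signal) - win + 1, step)]

# toy trace: 20 s of samples at 300 Hz
trace = list(range(20 * 300))
windows = segment_ecg(trace)
print(len(windows), len(windows[0]))  # 7 1500
```

Overlapping windows give the classifier several chances to catch a short AF episode that falls near a segment boundary.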

21 pages, 7758 KiB  
Article
Context-Aware Wireless Sensor Networks for Smart Building Energy Management System
by Najem Naji, Mohamed Riduan Abid, Driss Benhaddou and Nissrine Krami
Information 2020, 11(11), 530; https://doi.org/10.3390/info11110530 - 15 Nov 2020
Cited by 17 | Viewed by 4573
Abstract
Energy Management Systems (EMS) are indispensable for Smart Energy-Efficient Buildings (SEEB). This paper proposes a Wireless Sensor Network (WSN)-based EMS deployed and tested in a real-world smart building on a university campus. The at-scale implementation enabled the deployment of a WSN mesh topology to evaluate performance in terms of routing capabilities, data collection, and throughput. The proposed EMS uses the Context-Based Reasoning (CBR) model to represent different types of buildings and offices. We implemented a new energy-efficient policy for electrical heater control based on a Finite State Machine (FSM) that leverages context-related events. This proved highly effective in minimizing the processing load, especially when adopting multithreading in data acquisition and control. To optimize the sensors’ battery lifetime, we deployed a new Energy Aware Context Recognition Algorithm (EACRA) that dynamically configures sensors to send data under specific conditions and at particular times, avoiding redundant data transmissions. EACRA increases the sensors’ battery lifetime by optimizing the number of samples, used modules, and transmissions. Our proposed EMS design can be used as a model to retrofit other kinds of buildings, such as residential and industrial ones, thus converting them into SEEBs. Full article
(This article belongs to the Special Issue Data Processing in the Internet of Things)
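A minimal sketch of the FSM idea behind such a heater policy (the states and events below are invented for illustration; the paper’s actual transition set is not reproduced here):

```python
# Context-driven transitions for an electric heater. Any (state, event)
# pair not listed is ignored, so spurious sensor events cost nothing.
TRANSITIONS = {
    ("OFF", "occupied_and_cold"): "HEATING",
    ("HEATING", "setpoint_reached"): "IDLE",
    ("HEATING", "room_vacated"): "OFF",
    ("IDLE", "temperature_dropped"): "HEATING",
    ("IDLE", "room_vacated"): "OFF",
}

class HeaterFSM:
    def __init__(self):
        self.state = "OFF"

    def on_event(self, event):
        # Unknown events leave the state unchanged.
        self.state = TRANSITIONS.get((self.state, event), self.state)
        return self.state

fsm = HeaterFSM()
fsm.on_event("occupied_and_cold")    # -> HEATING
fsm.on_event("setpoint_reached")     # -> IDLE
print(fsm.on_event("room_vacated"))  # OFF
```

Encoding the policy as a lookup table keeps the per-event processing cost constant, which is the property the abstract credits for the low processing load.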

19 pages, 1841 KiB  
Review
Understanding the Blockchain Oracle Problem: A Call for Action
by Giulio Caldarelli
Information 2020, 11(11), 509; https://doi.org/10.3390/info11110509 - 29 Oct 2020
Cited by 120 | Viewed by 16377
Abstract
Scarce and niche in the literature just a few years ago, blockchain is now the main subject of conference papers and books. However, the hype generated by the technology and its potential implications for real-world applications is marred by many misconceptions about how it works and how it is implemented, creating faulty thinking or overly optimistic expectations. Too often, characteristics such as immutability, transparency, and censorship resistance, which mainly belong to the Bitcoin blockchain, are expected of ordinary blockchains, whose potential is barely comparable. Furthermore, critical aspects such as oracles and their role in smart contracts receive few contributions in the literature, leaving results and theoretical implications highly questionable. This review of the latest papers in the field aims to bring clarity to the blockchain oracle problem by discussing its effects in some of the most promising real-world applications. The analysis supports the view that the more trusted a system is, the less the oracle problem impacts it. Full article
(This article belongs to the Section Review)

18 pages, 468 KiB  
Article
Energy-Efficient Check-and-Spray Geocast Routing Protocol for Opportunistic Networks
by Khuram Khalid, Isaac Woungang, Sanjay Kumar Dhurandher, Jagdeep Singh and Joel J. P. C. Rodrigues
Information 2020, 11(11), 504; https://doi.org/10.3390/info11110504 - 28 Oct 2020
Cited by 15 | Viewed by 2809
Abstract
Opportunistic networks (OppNets) are a type of challenged network in which there is no guarantee of an end-to-end path between nodes for data delivery, because of intermittent connectivity, node mobility, and frequent topology changes. In such an environment, routing data is a challenge, since the battery power of the mobile nodes drains quickly due to routing activities such as scanning, transmitting, receiving, and computational processing, affecting the overall network performance. In this paper, a novel routing protocol for OppNets called Energy-Efficient Check-and-Spray Geocast Routing (EECSG) is proposed, which introduces an effective way of distributing messages to all nodes residing in the geocasting region while saving energy by restricting unnecessary packet transmissions in that region. A Check-and-Spray technique is also introduced to eliminate the packet overhead in the geocast region. The proposed EECSG is evaluated by simulations and compared against the Efficient and Flexible Geocasting for Opportunistic Networks (GSAF) and the Centrality-Based Geocasting for Opportunistic Networks (CGOPP) routing protocols in terms of average latency, delivery ratio, number of messages forwarded, number of dead nodes, overhead ratio, and hop count, showing superior performance. Full article
(This article belongs to the Special Issue Vehicle-To-Everything (V2X) Communication)
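The check-and-spray idea can be sketched as a forwarding gate: spend one of a bounded number of message copies only if the neighbor is inside the geocast region and does not already hold the message. The rules below are an illustrative simplification, not the authors’ exact protocol:

```python
def in_region(pos, region):
    """Axis-aligned rectangular geocast region (x0, y0, x1, y1)."""
    (x, y), (x0, y0, x1, y1) = pos, region
    return x0 <= x <= x1 and y0 <= y <= y1

def should_forward(neighbor, msg, region):
    """Illustrative check-and-spray gate: copies are the spray budget,
    the buffer check avoids duplicate delivery, and the region check
    keeps transmissions inside the geocast area."""
    if msg["copies"] <= 1:
        return False                       # spray budget exhausted
    if msg["id"] in neighbor["buffer"]:
        return False                       # check: neighbor already has it
    if not in_region(neighbor["pos"], region):
        return False                       # do not spray outside the region
    return True

msg = {"id": "m1", "copies": 4}
nbr = {"pos": (3, 4), "buffer": set()}
print(should_forward(nbr, msg, (0, 0, 10, 10)))  # True
```

Gating transmissions this way is what limits both the duplicate-packet overhead and the energy spent on radio activity in the region.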

17 pages, 8463 KiB  
Article
Underwater Fish Body Length Estimation Based on Binocular Image Processing
by Ruoshi Cheng, Caixia Zhang, Qingyang Xu, Guocheng Liu, Yong Song, Xianfeng Yuan and Jie Sun
Information 2020, 11(10), 476; https://doi.org/10.3390/info11100476 - 12 Oct 2020
Cited by 11 | Viewed by 4055
Abstract
Recently, underwater information analysis technology has developed rapidly, benefiting underwater resource exploration, underwater aquaculture, and similar applications. Deep-learning-based computer vision technology is replacing dangerous and laborious manual work and has gradually become the mainstream. Binocular-camera-based visual analysis methods can not only collect seabed images but also reconstruct 3D scene information. The parallax between the binocular images was used to calculate the depth information of the underwater object. We constructed a refined binocular-camera-based analysis method for estimating the body length of underwater creatures. A fully convolutional network (FCN) was used to segment the corresponding underwater object in the image to obtain its position. A fish body direction estimation algorithm is proposed based on the segmented image. The semi-global block matching (SGBM) algorithm was used to calculate the depth of the object region and to estimate the body length from the left and right views of the object. Combining FCN and SGBM gives the algorithm certain advantages in time and accuracy for analyzing objects of interest. Experimental results show that this method effectively reduces unnecessary information and improves efficiency and accuracy compared to the original SGBM algorithm. Full article
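The depth-from-parallax step rests on the standard pinhole stereo relation Z = fB/d, and a body spanning l pixels at depth Z back-projects to roughly lZ/f. A small sketch with illustrative camera parameters (not the paper’s calibration):

```python
def depth_from_disparity(disparity_px, focal_px, baseline_m):
    """Pinhole stereo model: Z = f * B / d."""
    return focal_px * baseline_m / disparity_px

def length_from_pixels(length_px, depth_m, focal_px):
    """Back-project a pixel span at known depth: L = l_px * Z / f."""
    return length_px * depth_m / focal_px

# illustrative numbers, not from the paper
f, B = 800.0, 0.12                      # focal length (px), baseline (m)
z = depth_from_disparity(48.0, f, B)    # 2.0 m to the fish
print(round(length_from_pixels(120.0, z, f), 3))  # 0.3 m body length
```

In the paper’s pipeline, SGBM supplies the disparity d for the fish region and the FCN mask supplies the pixel span l along the estimated body direction.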

19 pages, 4298 KiB  
Article
Technological Aspects of Blockchain Application for Vehicle-to-Network
by Vasiliy Elagin, Anastasia Spirkina, Mikhail Buinevich and Andrei Vladyko
Information 2020, 11(10), 465; https://doi.org/10.3390/info11100465 - 30 Sep 2020
Cited by 38 | Viewed by 4528
Abstract
Over the past decade, wireless communication technologies have developed significantly for intelligent applications in road transport. This paper provides an overview of telecommunications-based intelligent transport systems, with a focus on ensuring system safety and resilience. In vehicle-to-everything communication, these problems are extremely acute due to the specifics of transport network operation, which requires special protection mechanisms. In this regard, blockchain was chosen as a system platform to support the need of transport systems for secure information exchange. This paper describes the technological aspects of implementing blockchain technology in vehicle-to-network communication and presents the features of these technologies and of their interaction. The authors considered various network characteristics and identified the parameters that have a primary impact on the operation of a vehicle-to-network (V2N) network when the blockchain is implemented. An experiment was carried out that yielded numerical characteristics for the allocation of resources on the devices involved in organizing V2N communication, and conclusions were drawn from the results of the study. Full article
(This article belongs to the Special Issue Vehicle-To-Everything (V2X) Communication)

18 pages, 854 KiB  
Article
SlowTT: A Slow Denial of Service against IoT Networks
by Ivan Vaccari, Maurizio Aiello and Enrico Cambiaso
Information 2020, 11(9), 452; https://doi.org/10.3390/info11090452 - 18 Sep 2020
Cited by 22 | Viewed by 4464
Abstract
The security of Internet of Things environments is a critical and trending topic, due to the nature of the networks and the sensitivity of the exchanged information. In this paper, we investigate the security of the Message Queue Telemetry Transport (MQTT) protocol, widely adopted in IoT infrastructures. We exploit two specific weaknesses of MQTT, identified during our research activities, that allow the client to configure the KeepAlive parameter and to craft MQTT packets so as to execute an innovative cyber threat against the MQTT broker. In order to validate the exploitation of these vulnerabilities, we propose SlowTT, a novel “Slow” denial of service attack aimed at targeting MQTT through low-rate techniques, characterized by minimal attack bandwidth and computational power requirements. We validate SlowTT against real MQTT services, considering both plaintext and encrypted communications and comparing the effects of the attack on different application daemons and protocol versions. The results show that SlowTT is extremely successful, and that it can exploit the identified vulnerabilities to execute a denial of service against the IoT network by keeping connections alive for a long time. Full article
(This article belongs to the Special Issue Security and Privacy in the Internet of Things)
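The KeepAlive weakness stems from the fact that MQTT 3.1.1 lets the client choose the KeepAlive interval in its CONNECT packet. The sketch below builds such a packet with a maximal KeepAlive purely to show where the field sits in the wire format; it is not the SlowTT tool itself:

```python
import struct

def mqtt_connect_packet(client_id: str, keepalive_s: int) -> bytes:
    """Build an MQTT 3.1.1 CONNECT packet with a caller-chosen KeepAlive.

    The broker must honor this client-supplied value, so a very large
    KeepAlive lets a client hold a connection open while sending
    almost nothing.
    """
    proto = b"\x00\x04MQTT"                 # length-prefixed protocol name
    var_header = proto + bytes([4])         # protocol level 4 (MQTT 3.1.1)
    var_header += bytes([0x02])             # connect flags: clean session
    var_header += struct.pack(">H", keepalive_s)   # KeepAlive (seconds)
    payload = struct.pack(">H", len(client_id)) + client_id.encode()
    body = var_header + payload
    # fixed header: packet type 0x10; single-byte remaining length
    # (assumes the body is shorter than 128 bytes)
    return bytes([0x10, len(body)]) + body

pkt = mqtt_connect_packet("slow-client", 65535)
print(struct.unpack(">H", pkt[10:12])[0])  # 65535
```

With the flags above, the two KeepAlive bytes sit at offsets 10–11: after the 2-byte fixed header, the 6-byte protocol name, the level byte, and the flags byte.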

13 pages, 2349 KiB  
Article
Learning a Convolutional Autoencoder for Nighttime Image Dehazing
by Mengyao Feng, Teng Yu, Mingtao Jing and Guowei Yang
Information 2020, 11(9), 424; https://doi.org/10.3390/info11090424 - 31 Aug 2020
Cited by 4 | Viewed by 3358
Abstract
Currently, haze removal from images of foggy scenes captured at night relies on traditional, prior-based methods, but these methods are frequently ineffective at dealing with night hazy images. In addition, light sources at night are complicated and their brightness is inconsistent, which makes estimating the transmission map complicated in night scenes. Based on this analysis, we propose an autoencoder method to overcome the overestimation or underestimation of the transmission produced by traditional, prior-based methods. For nighttime hazy images, we first remove the color effect of the haze with an edge-preserving maximum reflectance prior (MRP) method. Then, the hazy image without the color influence is input into an autoencoder network with skip connections to obtain the transmission map. Moreover, instead of using the local maximum method, we estimate the ambient illumination through guided image filtering. To demonstrate the effectiveness of our approach, a large number of experiments were conducted comparing our method with state-of-the-art methods. The results show that our method can effectively suppress the halo effect and reduce glow. In the experimental part, we obtain an average Peak Signal to Noise Ratio (PSNR) of 21.0968 and an average Structural Similarity (SSIM) of 0.6802. Full article
(This article belongs to the Section Information Processes)
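The PSNR figure reported above follows directly from the mean squared error between the dehazed and reference images; a self-contained sketch (toy four-pixel images, not the paper’s data):

```python
import math

def psnr(img_a, img_b, max_val=255.0):
    """Peak Signal-to-Noise Ratio between two equally sized images,
    given here as flat lists of pixel intensities:
    PSNR = 10 * log10(MAX^2 / MSE)."""
    mse = sum((a - b) ** 2 for a, b in zip(img_a, img_b)) / len(img_a)
    if mse == 0:
        return float("inf")   # identical images
    return 10 * math.log10(max_val ** 2 / mse)

reference = [10, 50, 200, 255]
dehazed   = [12, 48, 198, 250]
print(round(psnr(reference, dehazed), 2))  # 38.47
```

Higher PSNR means the dehazed output is closer to the reference; a real evaluation averages this over whole images and pairs it with SSIM, as the abstract does.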

13 pages, 2482 KiB  
Article
A Deep-Learning-Based Framework for Automated Diagnosis of COVID-19 Using X-ray Images
by Irfan Ullah Khan and Nida Aslam
Information 2020, 11(9), 419; https://doi.org/10.3390/info11090419 - 29 Aug 2020
Cited by 68 | Viewed by 12364
Abstract
The emergence and outbreak of the novel coronavirus (COVID-19) had a devastating effect on global health, the economy, and individuals’ daily lives. Timely diagnosis of COVID-19 is a crucial task, as it reduces the risk of pandemic spread, and early treatment saves patients’ lives. Due to the time-consuming, complex nature and high false-negative rate of the gold-standard RT-PCR test used for diagnosing COVID-19, the need for an additional diagnostic method has grown. Studies have demonstrated the significance of X-ray images for the diagnosis of COVID-19. Applying deep-learning techniques to X-ray images can automate the diagnosis process and serve as an assistive tool for radiologists. In this study, we used four deep-learning models (DenseNet121, ResNet50, VGG16, and VGG19) with transfer learning to classify X-ray images as COVID-19 or normal. In the proposed study, VGG16 and VGG19 outperformed the other two deep-learning models. The study achieved an overall classification accuracy of 99.3%. Full article
(This article belongs to the Section Artificial Intelligence)

14 pages, 3660 KiB  
Article
Spatiotemporal Convolutional Neural Network with Convolutional Block Attention Module for Micro-Expression Recognition
by Boyu Chen, Zhihao Zhang, Nian Liu, Yang Tan, Xinyu Liu and Tong Chen
Information 2020, 11(8), 380; https://doi.org/10.3390/info11080380 - 29 Jul 2020
Cited by 43 | Viewed by 6828
Abstract
A micro-expression is an uncontrollable muscular movement shown on a person’s face when they are trying to conceal or repress their true emotions. Many researchers have applied deep learning frameworks to micro-expression recognition in recent years. However, few have introduced the human visual attention mechanism into micro-expression recognition. In this study, we propose a three-dimensional (3D) spatiotemporal convolutional neural network with a convolutional block attention module (CBAM) for micro-expression recognition. First, image sequences were input into a medium-sized convolutional neural network (CNN) to extract visual features. Afterwards, the network learned to allocate the feature weights in an adaptive manner with the help of the convolutional block attention module. The method was tested on spontaneous micro-expression databases (Chinese Academy of Sciences Micro-expression II (CASME II) and the Spontaneous Micro-expression Database (SMIC)). The experimental results show that the 3D CNN with the convolutional block attention module outperformed other algorithms in micro-expression recognition. Full article

13 pages, 1115 KiB  
Article
Ensemble-Based Online Machine Learning Algorithms for Network Intrusion Detection Systems Using Streaming Data
by Nathan Martindale, Muhammad Ismail and Douglas A. Talbert
Information 2020, 11(6), 315; https://doi.org/10.3390/info11060315 - 11 Jun 2020
Cited by 25 | Viewed by 5710
Abstract
As new cyberattacks are launched against systems and networks on a daily basis, the ability of network intrusion detection systems to operate efficiently in the big data era has become critically important, particularly as more low-power Internet-of-Things (IoT) devices enter the market. This has motivated research into machine learning algorithms that operate on streams of data, trained online or “live” on only a small amount of data kept in memory at a time, as opposed to the more classical approaches trained solely offline on all of the data at once. In this context, one important concept from machine learning for improving detection performance is the idea of “ensembles”, in which a collection of machine learning algorithms is combined to compensate for their individual limitations and produce an overall superior algorithm. Unfortunately, existing research lacks a proper performance comparison between homogeneous and heterogeneous online ensembles. Hence, this paper investigates several homogeneous and heterogeneous ensembles, proposes three novel online heterogeneous ensembles for intrusion detection, and compares their accuracy, run-time complexity, and response to concept drift. Out of the proposed novel online ensembles, the heterogeneous ensemble consisting of an adaptive random forest of Hoeffding Trees combined with a Hoeffding Adaptive Tree performed best, dealing with concept drift in the most effective way. While this scheme is less accurate than a larger adaptive random forest, it offered a marginally better run-time, which is beneficial for online training. Full article
(This article belongs to the Special Issue Machine Learning for Cyber-Physical Security)
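The online-ensemble idea can be sketched as a majority vote over members trained incrementally via a `partial_fit`-style interface; the trivial member below is a stand-in for illustration, not a Hoeffding Tree:

```python
class MajorityEnsemble:
    """Toy online ensemble: each member exposes partial_fit(x, y) and
    predict(x); the ensemble trains members sample-by-sample and
    predicts by majority vote."""
    def __init__(self, members):
        self.members = members

    def partial_fit(self, x, y):
        for m in self.members:
            m.partial_fit(x, y)

    def predict(self, x):
        votes = [m.predict(x) for m in self.members]
        return max(set(votes), key=votes.count)

class NearestMeanLearner:
    """Trivial stand-in member: tracks a running mean per class and
    predicts the class whose mean is closest."""
    def __init__(self):
        self.sums = {0: 0.0, 1: 0.0}
        self.counts = {0: 1e-9, 1: 1e-9}

    def partial_fit(self, x, y):
        self.sums[y] += x
        self.counts[y] += 1

    def predict(self, x):
        return min((0, 1), key=lambda c: abs(x - self.sums[c] / self.counts[c]))

ens = MajorityEnsemble([NearestMeanLearner() for _ in range(3)])
for x, y in [(1.0, 0), (1.2, 0), (5.0, 1), (5.5, 1)]:
    ens.partial_fit(x, y)
print(ens.predict(5.2), ens.predict(1.1))  # 1 0
```

A heterogeneous ensemble in the paper’s sense would mix member types (e.g., different incremental tree variants) so that their failure modes under concept drift do not coincide.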

19 pages, 600 KiB  
Article
Malicious Text Identification: Deep Learning from Public Comments and Emails
by Asma Baccouche, Sadaf Ahmed, Daniel Sierra-Sosa and Adel Elmaghraby
Information 2020, 11(6), 312; https://doi.org/10.3390/info11060312 - 10 Jun 2020
Cited by 32 | Viewed by 7684
Abstract
Identifying internet spam has been a challenging problem for decades. Several solutions have succeeded in detecting spam comments in social media or fraudulent emails. However, an adequate strategy for filtering messages is difficult to achieve, as these messages resemble real communications. From the Natural Language Processing (NLP) perspective, deep learning models are a good alternative for classifying text after preprocessing. In particular, Long Short-Term Memory (LSTM) networks are among the models that perform well on binary and multi-label text classification problems. In this paper, an approach merging two different data sources, one intended for spam in social media posts and the other for fraud classification in emails, is presented. We designed a multi-label LSTM model and trained it on the joined datasets, including text with common bigrams extracted from each independent dataset. The experimental results show that our proposed model is capable of identifying malicious text regardless of its source. The LSTM model trained on the merged dataset outperforms the models trained independently on each dataset. Full article
(This article belongs to the Special Issue Tackling Misinformation Online)
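One illustrative way to find bigrams shared between the two corpora before merging them (a plausible sketch, not necessarily the authors’ exact procedure):

```python
from collections import Counter

def bigrams(text):
    """Count adjacent word pairs in a whitespace-tokenized text."""
    tokens = text.lower().split()
    return Counter(zip(tokens, tokens[1:]))

def common_bigrams(corpus_a, corpus_b, top=5):
    """Bigrams present in both corpora, ranked by their shared count."""
    a = sum((bigrams(t) for t in corpus_a), Counter())
    b = sum((bigrams(t) for t in corpus_b), Counter())
    shared = {bg: min(a[bg], b[bg]) for bg in a.keys() & b.keys()}
    return sorted(shared, key=shared.get, reverse=True)[:top]

spam  = ["click here to win a free prize", "win a free phone now"]
fraud = ["you win a large inheritance", "click here to claim"]
print(common_bigrams(spam, fraud))
```

Bigrams that survive this intersection mark phrasing common to spam comments and fraud emails, which is what lets a single multi-label model generalize across both sources.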

24 pages, 4514 KiB  
Article
Comparison of Methods to Evaluate the Influence of an Automated Vehicle’s Driving Behavior on Pedestrians: Wizard of Oz, Virtual Reality, and Video
by Tanja Fuest, Elisabeth Schmidt and Klaus Bengler
Information 2020, 11(6), 291; https://doi.org/10.3390/info11060291 - 29 May 2020
Cited by 19 | Viewed by 4196
Abstract
Integrating automated vehicles into mixed traffic entails several challenges. Their driving behavior must be designed such that it is understandable for all human road users and ensures an efficient and safe traffic system. Previous studies have investigated these issues, especially regarding communication between automated vehicles and pedestrians. These studies used different methods, e.g., videos, virtual reality, or Wizard of Oz vehicles. However, the extent to which results transfer between these methods is still unknown. Therefore, we replicated the same study design in four different settings: two video setups, one virtual reality setup, and one Wizard of Oz setup. In the first video setup, videos from the virtual reality setup were used, while in the second, we filmed the Wizard of Oz vehicle. In all studies, participants stood at the roadside in a shared space. An automated vehicle approached from the left, using different driving profiles characterized by changing speed to communicate its intention to let the pedestrians cross the road. Participants were asked to recognize the intention of the automated vehicle and to press a button as soon as they realized it. Results revealed differences between the four study setups in intention recognition time as well as in the correct intention rate. The results of vehicle–pedestrian interaction studies published in recent years using different study settings can therefore only be compared with each other to a limited extent. Full article

14 pages, 4918 KiB  
Article
Effects of Marking Automated Vehicles on Human Drivers on Highways
by Tanja Fuest, Alexander Feierle, Elisabeth Schmidt and Klaus Bengler
Information 2020, 11(6), 286; https://doi.org/10.3390/info11060286 - 28 May 2020
Cited by 17 | Viewed by 3660
Abstract
Due to the short range of the sensor technology used in automated vehicles, we assume that the implemented driving strategies may initially differ from those of human drivers. Nevertheless, automated vehicles must be able to move safely through manual road traffic. Initially, they will behave as carefully as human learner drivers do. In the same way that driving-school vehicles tend to be marked in Germany, markings for automated vehicles could also prove advantageous. To this end, a simulation study with 40 participants was conducted. All participants experienced three different highway scenarios, each with and without a marked automated vehicle. One scenario was based around roadworks, the next was a traffic jam, and the last involved a lane change. Common to all scenarios was that the automated vehicles strictly adhered to German highway regulations and therefore moved in road traffic somewhat differently from human drivers. After each trial, we asked participants to rate how appropriate and how disturbing the automated vehicle’s driving behavior was. We also measured objective data, such as the time of a lane change and the time headway. The results show no differences in the subjective or objective data regarding the marking of an automated vehicle. A reason for this might be that the driving behavior itself is sufficiently informative for humans to recognize an automated vehicle. In addition, participants experienced the automated vehicle’s driving behavior for the first time, and it is reasonable to assume that humans would adjust their driving behavior over repeated encounters. Full article

18 pages, 18197 KiB  
Article
Quantitative Evaluation of Dense Skeletons for Image Compression
by Jieying Wang, Maarten Terpstra, Jiří Kosinka and Alexandru Telea
Information 2020, 11(5), 274; https://doi.org/10.3390/info11050274 - 20 May 2020
Cited by 5 | Viewed by 3415
Abstract
Skeletons are well-known descriptors used for the analysis and processing of 2D binary images. Recently, dense skeletons have been proposed as an extension of classical skeletons, providing a dual encoding for 2D grayscale and color images. Yet their encoding power, measured by the quality and size of the encoded image, and how these metrics depend on the selected encoding parameters, has not been formally evaluated. In this paper, we fill this gap with two main contributions. First, we improve the encoding power of dense skeletons with effective layer selection heuristics, a refined skeleton pixel-chain encoding, and a postprocessing compression scheme. Second, we propose a benchmark to assess the encoding power of dense skeletons on a wide set of natural and synthetic color and grayscale images. We use this benchmark to derive optimal parameters for dense skeletons. Our method, called Compressing Dense Medial Descriptors (CDMD), achieves higher compression ratios at similar quality to the well-known JPEG technique, showing that skeletons can be an interesting option for lossy image encoding. Full article

21 pages, 5582 KiB  
Article
Multi-Vehicle Simulation in Urban Automated Driving: Technical Implementation and Added Benefit
by Alexander Feierle, Michael Rettenmaier, Florian Zeitlmeir and Klaus Bengler
Information 2020, 11(5), 272; https://doi.org/10.3390/info11050272 - 19 May 2020
Cited by 12 | Viewed by 4232
Abstract
This article investigates the simultaneous interaction between an automated vehicle (AV) and its passenger, and between the same AV and the human driver of another vehicle. For this purpose, we implemented a multi-vehicle simulation consisting of two driving simulators, one for the AV and one for the manual vehicle. The considered scenario is a road bottleneck with a double-parked vehicle on one or both sides of the road, where an AV and a simultaneously oncoming human driver negotiate the right of way. The AV communicates with its passenger via the internal automation human–machine interface (HMI) and concurrently displays the right of way to the human driver via an external HMI. In addition to the regular encounters, this paper analyzes the effect of an automation failure, where the AV first communicates that it will yield the right of way and then changes its strategy, passing through the bottleneck first despite oncoming traffic. The research questions the study aims to answer are what methods should be used to implement multi-vehicle simulations with one AV, and whether this multi-vehicle simulation offers an added benefit compared to single-driver simulator studies. The results show acceptable synchronicity when using traffic lights as the basic synchronization method and distance control as the detail synchronization method. The participants had similar passing times in the multi-vehicle simulation compared to a previously conducted single-driver simulation. Moreover, there was a lower crash rate in the multi-vehicle simulation during the automation failure. In conclusion, the proposed method appears to be an appropriate solution for implementing a multi-vehicle simulation with one AV. Additionally, multi-vehicle simulation offers a benefit when more than one human affects the interaction within a scenario. Full article

23 pages, 8936 KiB  
Article
Modeling Road Accident Severity with Comparisons of Logistic Regression, Decision Tree and Random Forest
by Mu-Ming Chen and Mu-Chen Chen
Information 2020, 11(5), 270; https://doi.org/10.3390/info11050270 - 18 May 2020
Cited by 57 | Viewed by 7302
Abstract
To reduce the damage caused by road accidents, researchers have applied different techniques to explore correlated factors and develop efficient prediction models. The main purpose of this study is to use one statistical and two nonparametric data mining techniques, namely, logistic regression (LR), classification and regression tree (CART), and random forest (RF), to compare their prediction capability, identify the significant variables (identified by LR) and important variables (identified by CART or RF) that are strongly correlated with road accident severity, and distinguish the variables that have significant positive influence on prediction performance. In this study, three prediction performance evaluation measures, accuracy, sensitivity and specificity, are used to find the best integrated method which consists of the most effective prediction model and the input variables that have higher positive influence on accuracy, sensitivity and specificity. Full article
(This article belongs to the Section Information Applications)
15 pages, 1489 KiB  
Article
Knowledge Graphs for Online Marketing and Sales of Touristic Services
by Anna Fensel, Zaenal Akbar, Elias Kärle, Christoph Blank, Patrick Pixner and Andreas Gruber
Information 2020, 11(5), 253; https://doi.org/10.3390/info11050253 - 4 May 2020
Cited by 11 | Viewed by 6305
Abstract
Direct online marketing and sales are nowadays an essential part of almost any business that addresses end consumers, such as in tourism. On the downside, the data and content required for such marketing and sales are typically distributed and difficult to identify and use, especially for small and medium enterprises. Meanwhile, a combination of content management and semantics for automated online marketing and sales is becoming practically feasible, especially with the global adoption of knowledge graphs. A design and feasibility pilot of a solution implementing a semantic content and data value chain for online direct marketing and sales, based on knowledge graphs and efficiently addressing multiple channels and stakeholders, is provided and evaluated with end-users. The implementation is shown to be suitable for use on the Web, social media, and mobile channels. The proof of concept addresses the tourism sector, exploring in particular the case of touristic service packaging, and is applicable globally. The typically encountered challenges, particularly those related to data quality, are identified, and ways to overcome them are discussed. The paper advances knowledge on the employment of knowledge graphs in online marketing and sales, and showcases a related innovative practical application, co-created by the industry providing marketing and sales solutions for Austria, one of the world’s leading touristic regions. Full article
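Knowledge-graph-driven marketing pipelines of this kind typically publish offers as schema.org-annotated JSON-LD so that search engines and other channels can consume them. The sketch below is illustrative only (the concrete package and prices are invented, not from the paper), using standard schema.org types:

```python
# Hypothetical example: serializing a touristic service package as
# schema.org JSON-LD. Offer, LodgingBusiness, and PostalAddress are
# real schema.org types; the values are made up.
import json

package = {
    "@context": "https://schema.org",
    "@type": "Offer",
    "name": "Ski weekend package",
    "price": "299.00",
    "priceCurrency": "EUR",
    "itemOffered": {
        "@type": "LodgingBusiness",
        "name": "Alpine Example Hotel",
        "address": {"@type": "PostalAddress", "addressCountry": "AT"},
    },
}

jsonld = json.dumps(package, indent=2)
print(jsonld)
```

Emitting such machine-readable content from a single knowledge graph is what allows one data source to feed Web, social media, and mobile channels at once.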
(This article belongs to the Section Information Applications)
16 pages, 5217 KiB  
Article
A Systematic Exploration of Deep Neural Networks for EDA-Based Emotion Recognition
by Dian Yu and Shouqian Sun
Information 2020, 11(4), 212; https://doi.org/10.3390/info11040212 - 15 Apr 2020
Cited by 28 | Viewed by 6014
Abstract
Subject-independent emotion recognition based on physiological signals has become a research hotspot. Previous research has proved that electrodermal activity (EDA) signals are an effective data resource for emotion recognition. Benefiting from their great representation ability, an increasing number of deep neural networks have been applied to emotion recognition, and they can be classified as Convolutional Neural Networks (CNNs), Recurrent Neural Networks (RNNs), or a combination of these (CNN+RNN). However, there has been no systematic research on the predictive power and configurations of different deep neural networks for this task. In this work, we systematically explore the configurations and performance of three adapted deep neural networks: ResNet, LSTM, and a hybrid ResNet-LSTM. Our experiments use the subject-independent method to evaluate three-class classification on the MAHNOB dataset. The results show that the CNN model (ResNet) reaches a better accuracy and F1 score than the RNN model (LSTM) and the CNN+RNN model (hybrid ResNet-LSTM). Extensive comparisons also reveal that our three deep neural networks with EDA data outperform previous models with handcrafted features on emotion recognition, which demonstrates the great potential of the end-to-end DNN method. Full article
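"Subject-independent" evaluation means no subject's data appears in both training and test sets. A minimal leave-one-subject-out split makes the idea concrete; this is an illustrative sketch (the records and function name are invented, not the paper's code):

```python
# Leave-one-subject-out partitioning over (subject_id, sample) records.
# Toy data for illustration; real records would hold EDA feature windows.

def leave_one_subject_out(records):
    subjects = sorted({sid for sid, _ in records})
    for held_out in subjects:
        train = [r for r in records if r[0] != held_out]
        test = [r for r in records if r[0] == held_out]
        yield held_out, train, test

records = [(1, "eda_a"), (1, "eda_b"), (2, "eda_c"), (3, "eda_d")]
for held_out, train, test in leave_one_subject_out(records):
    # No held-out subject ever leaks into the training fold.
    assert all(sid != held_out for sid, _ in train)
    print(held_out, len(train), len(test))
```

This protocol is stricter than a random split and better reflects how a model generalizes to people it has never seen.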
(This article belongs to the Section Information Processes)
17 pages, 2970 KiB  
Article
Improving Power and Resource Management in Heterogeneous Downlink OFDMA Networks
by Nalliyanna Goundar Veerappan Kousik, Yuvaraj Natarajan, Kallam Suresh, Rizwan Patan and Amir H. Gandomi
Information 2020, 11(4), 203; https://doi.org/10.3390/info11040203 - 10 Apr 2020
Cited by 16 | Viewed by 2802
Abstract
In the past decade, low-power-consumption schemes have suffered from degraded communication performance, failing to maintain the trade-off between resource and power consumption. In this paper, the management of resources and power consumption in small-cell orthogonal frequency-division multiple access (OFDMA) networks is enacted using a sleep mode selection method. The sleep mode selection method covers both power and resource management, where the former is responsible for the heterogeneous network and the latter is managed using a deactivation algorithm. Further, to improve communication performance during sleep mode selection, a semi-Markov sleep mode selection decision-making process is developed. Spectrum reuse maximization is achieved using a small-cell deactivation strategy that identifies and deactivates sleep-mode cells. The performance of this hybrid technique is evaluated and compared against benchmark techniques. The results demonstrate that the proposed hybrid model achieves effective power and resource management with reduced computational cost compared with benchmark techniques. Full article
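The deactivation idea can be sketched with a simple threshold rule: lightly loaded small cells are candidates for sleep mode. This is an illustrative sketch only, not the paper's semi-Markov decision process; the cell names, loads, and threshold are invented.

```python
# Hypothetical threshold-based sleep-cell selection. A real scheme would
# also model wake-up cost and user re-association, per the abstract's
# semi-Markov decision-making process.

def select_sleep_cells(cell_loads, load_threshold=0.2):
    """Return ids of small cells whose normalized traffic load is low
    enough to be put to sleep."""
    return [cid for cid, load in cell_loads.items() if load < load_threshold]

loads = {"cell_a": 0.05, "cell_b": 0.65, "cell_c": 0.10}
print(select_sleep_cells(loads))  # ['cell_a', 'cell_c']
```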
(This article belongs to the Special Issue 10th Anniversary of Information—Emerging Research Challenges)
17 pages, 1366 KiB  
Article
Automatic Sorting of Dwarf Minke Whale Underwater Images
by Dmitry A. Konovalov, Natalie Swinhoe, Dina B. Efremova, R. Alastair Birtles, Martha Kusetic, Suzanne Hillcoat, Matthew I. Curnock, Genevieve Williams and Marcus Sheaves
Information 2020, 11(4), 200; https://doi.org/10.3390/info11040200 - 9 Apr 2020
Cited by 4 | Viewed by 5284
Abstract
A predictable aggregation of dwarf minke whales (Balaenoptera acutorostrata subspecies) occurs annually in the Australian waters of the northern Great Barrier Reef in June–July and has been the subject of a long-term photo-identification study. Researchers from the Minke Whale Project (MWP) at James Cook University collect large volumes of underwater digital imagery each season (e.g., 1.8 TB in 2018), much of which is contributed by citizen scientists. Manual processing and analysis of this quantity of data had become infeasible, and Convolutional Neural Networks (CNNs) offered a potential solution. Our study sought to design and train a CNN that could detect whales from video footage in complex near-surface underwater surroundings and differentiate the whales from people, boats, and recreational gear. We modified known classification CNNs to localise whales in video frames and digital still images. The required high classification accuracy was achieved by discovering an effective negative-labelling training technique. This resulted in a false-positive classification rate below 1% and a false-negative rate below 0.1%. The final operational version of the CNN pipeline processed all videos (at an interval of 10 frames) in approximately four days (running on two GPUs), delivering 1.95 million sorted images. Full article
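Sampling every 10th frame, as the abstract describes, is what keeps processing 1.8 TB of video tractable. A minimal sketch of the sampling step (illustrative only; the frame list stands in for decoded video, and this is not the authors' pipeline):

```python
# Fixed-interval frame sampling before classification. Toy data; a real
# pipeline would decode frames from video files and feed each sampled
# frame to the CNN classifier.

def sample_frames(frames, interval=10):
    """Yield (index, frame) pairs at a fixed frame interval."""
    for i in range(0, len(frames), interval):
        yield i, frames[i]

frames = list(range(95))  # stand-in for 95 decoded video frames
sampled = list(sample_frames(frames))
print(len(sampled))  # 10 sampled frames: indices 0, 10, ..., 90
```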
(This article belongs to the Special Issue Computational Sport Science and Sport Analytics)
19 pages, 1890 KiB  
Article
Standardized Test Procedure for External Human–Machine Interfaces of Automated Vehicles
by Christina Kaß, Stefanie Schoch, Frederik Naujoks, Sebastian Hergeth, Andreas Keinath and Alexandra Neukum
Information 2020, 11(3), 173; https://doi.org/10.3390/info11030173 - 24 Mar 2020
Cited by 31 | Viewed by 5500
Abstract
Research on external human–machine interfaces (eHMIs) has recently become a major area of interest in the field of human factors research on automated driving. The broad variety of methodological approaches renders the current state of research inconclusive and comparisons between interface designs impossible. To date, there are no standardized test procedures to evaluate and compare different design variants of eHMIs with each other and with interactions without eHMIs. This article presents a standardized test procedure that enables the effective usability evaluation of eHMI design solutions. First, the test procedure provides a methodological approach to deduce relevant use cases for the evaluation of an eHMI. In addition, we define specific usability requirements that must be fulfilled by an eHMI to be effective, efficient, and satisfying. To test whether an eHMI meets the defined requirements, we have developed a test protocol for the empirical evaluation of an eHMI with a participant study. The article elucidates the underlying considerations and details of the test protocol, which serves as a framework to measure the behavior and subjective evaluations of non-automated road users when interacting with automated vehicles in an experimental setting. The standardized test procedure provides a useful framework for researchers and practitioners. Full article
16 pages, 8799 KiB  
Article
Robust Beamforming Design for SWIPT-Based Multi-Radio Wireless Mesh Network with Cooperative Jamming
by Liang Li, Xiongwen Zhao, Suiyan Geng, Yu Zhang and Lei Zhang
Information 2020, 11(3), 138; https://doi.org/10.3390/info11030138 - 29 Feb 2020
Cited by 2 | Viewed by 3131
Abstract
Wireless mesh networks (WMNs) can provide flexible wireless connections in smart city, internet of things (IoT), and device-to-device (D2D) communications. The performance of WMNs can be greatly enhanced by adopting a multi-radio technique, which enables a node to communicate with more nodes simultaneously. However, multi-radio WMNs face two main challenges, namely, energy consumption and physical-layer secrecy. In this paper, both simultaneous wireless information and power transfer (SWIPT) and cooperative jamming technologies were adopted to overcome these two problems. We designed the SWIPT and cooperative jamming scheme to minimize the total transmission power by properly selecting the beamforming vectors of the WMN nodes and the jammer to satisfy the individual signal-to-interference-plus-noise ratio (SINR) and energy harvesting (EH) constraints. In particular, we considered the channel estimation error caused by imperfect channel state information. The SINR of the eavesdropper (Eve) was suppressed to protect the secrecy of the WMN nodes. Due to its fractional form, the problem was proved to be non-convex. We developed a tractable algorithm by transforming it into a convex one using semi-definite programming (SDP) relaxation and S-procedure methods. The simulation results validated the effectiveness of the proposed algorithm compared with a non-robust design. Full article
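The SINR quantity at the heart of the constraints can be computed directly from a channel vector and a beamforming vector. This is a toy sketch with invented numbers, not the paper's robust optimization (which solves for the beamformers via SDP relaxation):

```python
# Receive SINR |h^H w|^2 / (interference + noise) for a 2-antenna toy
# channel, using Python's built-in complex numbers. All values invented.

def sinr(h, w, interference_power, noise_power):
    signal_power = abs(sum(hc.conjugate() * wc for hc, wc in zip(h, w))) ** 2
    return signal_power / (interference_power + noise_power)

h = [1.0 + 0.5j, 0.3 - 0.2j]   # channel to the intended receiver
w = [0.8 + 0.0j, 0.1 + 0.1j]   # candidate beamforming vector
print(sinr(h, w, interference_power=0.05, noise_power=0.01))
```

The design problem is then to pick `w` (and the jammer's beamformer) minimizing total power while keeping this quantity above a threshold for legitimate nodes and below one for the eavesdropper.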
18 pages, 700 KiB  
Article
The Zaragoza’s Knowledge Graph: Open Data to Harness the City Knowledge
by Paola Espinoza-Arias, María Jesús Fernández-Ruiz, Victor Morlán-Plo, Rubén Notivol-Bezares and Oscar Corcho
Information 2020, 11(3), 129; https://doi.org/10.3390/info11030129 - 26 Feb 2020
Cited by 10 | Viewed by 4747
Abstract
Public administrations handle large amounts of data in relation to their internal processes as well as to the services that they offer. Following public-sector information reuse regulations and worldwide open data publication trends, these administrations are increasingly publishing their data as open data. However, open data are often released without agreed data models and in non-reusable formats, which reduces efficiency in data reuse, hinders interoperability with other administrations, and does not allow taking advantage of the associated knowledge in an efficient manner. This paper presents the continued work performed by the Zaragoza city council over more than 15 years to generate its knowledge graph, which constitutes the key piece of its data management system, whose main strength is its open-data-by-default policy. The main functionalities that have been developed for the internal and external exploitation of the city’s open data are also presented. Finally, some city council experiences and lessons learned during this process are explained. Full article
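At its core, a city knowledge graph is a set of subject-predicate-object triples that can be queried by pattern. The sketch below is illustrative only; the entities, prefixes, and predicates are invented and are not from Zaragoza's actual vocabulary:

```python
# A minimal in-memory triple store with wildcard pattern matching.
# Toy data; real deployments use RDF stores queried with SPARQL.

triples = {
    ("zaragoza:Bus23", "rdf:type", "city:BusLine"),
    ("zaragoza:Bus23", "city:stopsAt", "zaragoza:PlazaAragon"),
    ("zaragoza:PlazaAragon", "rdf:type", "city:BusStop"),
}

def query(s=None, p=None, o=None):
    """Return triples matching the pattern; None acts as a wildcard."""
    return sorted((ts, tp, to) for ts, tp, to in triples
                  if (s is None or ts == s)
                  and (p is None or tp == p)
                  and (o is None or to == o))

print(query(p="rdf:type"))  # both typed entities
```

Agreed predicates and identifiers are precisely what makes such data reusable across administrations, which is the interoperability gap the abstract describes.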
(This article belongs to the Special Issue Linked Data and Knowledge Graphs in Large Organisations)
24 pages, 2278 KiB  
Article
Innovation in the Era of IoT and Industry 5.0: Absolute Innovation Management (AIM) Framework
by Farhan Aslam, Wang Aimin, Mingze Li and Khaliq Ur Rehman
Information 2020, 11(2), 124; https://doi.org/10.3390/info11020124 - 24 Feb 2020
Cited by 202 | Viewed by 23793
Abstract
In the modern business environment, characterized by rapid technological advancements and globalization, abetted by the IoT and Industry 5.0 phenomenon, innovation is indispensable for competitive advantage and economic growth. However, many organizations face problems in its true implementation due to the absence of a practical innovation management framework, which has made implementation of the concept elusive rather than persuasive. The present study proposes a new innovation management framework, labeled "Absolute Innovation Management (AIM)", to make innovation more understandable, implementable, and part of the organization's everyday routine by synergizing the innovation ecosystem, design thinking, and corporate strategy to achieve competitive advantage and economic growth. The study used an integrative literature review methodology to develop the AIM framework. The framework links the innovation ecosystem with the corporate strategy of the firm by adopting innovation management as a strategy through design thinking. This makes innovation more user/human-centered: desirable for the customer, viable for the business, and technically feasible. It creates both entrepreneurial and customer value, and boosts corporate venturing and corporate entrepreneurship to achieve competitive advantage and economic growth while addressing the needs of the IoT and Industry 5.0 era. In sum, it synergizes innovation, design thinking, and strategy to make businesses future-ready for the IoT and Industry 5.0 revolution. The present study is significant, as it not only makes considerable contributions to the existing literature on innovation management by developing a new framework, but also makes the concept more practical, implementable, and part of an organization's everyday routine. Full article
10 pages, 346 KiB  
Review
On the Integration of Knowledge Graphs into Deep Learning Models for a More Comprehensible AI—Three Challenges for Future Research
by Giuseppe Futia and Antonio Vetrò
Information 2020, 11(2), 122; https://doi.org/10.3390/info11020122 - 22 Feb 2020
Cited by 64 | Viewed by 17979
Abstract
Deep learning models have contributed to reaching unprecedented results in prediction and classification tasks of Artificial Intelligence (AI) systems. However, alongside this notable progress, they do not provide human-understandable insights on how a specific result was achieved. In contexts where the impact of AI on human life is relevant (e.g., recruitment tools, medical diagnoses, etc.), explainability is not only a desirable property but, in some cases, already is or soon will be a legal requirement. Most of the available approaches to implementing eXplainable Artificial Intelligence (XAI) focus on technical solutions usable only by experts able to manipulate the recursive mathematical functions in deep learning algorithms. A complementary approach is represented by symbolic AI, where symbols are elements of a lingua franca between humans and deep learning. In this context, Knowledge Graphs (KGs) and their underlying semantic technologies are the modern implementation of symbolic AI: while being less flexible and robust to noise compared to deep learning models, KGs are natively developed to be explainable. In this paper, we review the main XAI approaches existing in the literature, underlining their strengths and limitations, and we propose neural-symbolic integration as a cornerstone for designing an AI that is closer to non-insiders' comprehension. Within such a general direction, we identify three specific challenges for future research: knowledge matching, cross-disciplinary explanations, and interactive explanations. Full article
(This article belongs to the Special Issue 10th Anniversary of Information—Emerging Research Challenges)
38 pages, 3191 KiB  
Review
Digital Image Watermarking Techniques: A Review
by Mahbuba Begum and Mohammad Shorif Uddin
Information 2020, 11(2), 110; https://doi.org/10.3390/info11020110 - 17 Feb 2020
Cited by 180 | Viewed by 48721
Abstract
Digital image authentication is an extremely significant concern for the digital revolution, as it is easy to tamper with any image. In the last few decades, it has been an urgent concern for researchers to ensure the authenticity of digital images. Based on the desired applications, several suitable watermarking techniques have been developed to mitigate this concern. However, it is tough to achieve a watermarking system that is simultaneously robust and secure. This paper gives details of standard watermarking system frameworks and lists some standard requirements that are used in designing watermarking techniques for several distinct applications. The current trends of digital image watermarking techniques are also reviewed in order to find the state-of-the-art methods and their limitations. Some conventional attacks are discussed, and future research directions are given. Full article
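The simplest spatial-domain watermarking technique in the literature the review surveys is least-significant-bit (LSB) embedding. The sketch below is illustrative only (toy pixel values, not from the review); it also shows why LSB is fragile, since any pixel-value change destroys the hidden bits:

```python
# LSB watermark embedding and extraction on 8-bit grayscale pixel values.
# Toy data for illustration; real systems operate on full image arrays.

def embed_lsb(pixels, bits):
    """Overwrite the least-significant bit of each pixel with one
    watermark bit."""
    return [(p & ~1) | b for p, b in zip(pixels, bits)]

def extract_lsb(pixels, n):
    return [p & 1 for p in pixels[:n]]

cover = [200, 13, 54, 97]
mark = [1, 0, 1, 1]
stego = embed_lsb(cover, mark)
print(stego)                  # [201, 12, 55, 97]
print(extract_lsb(stego, 4))  # [1, 0, 1, 1]
```

Robust schemes instead embed in transform domains (e.g., DCT or DWT coefficients), trading imperceptibility against resistance to the attacks the review discusses.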
(This article belongs to the Section Review)
19 pages, 2817 KiB  
Article
Outpatient Text Classification Using Attention-Based Bidirectional LSTM for Robot-Assisted Servicing in Hospital
by Che-Wen Chen, Shih-Pang Tseng, Ta-Wen Kuan and Jhing-Fa Wang
Information 2020, 11(2), 106; https://doi.org/10.3390/info11020106 - 16 Feb 2020
Cited by 62 | Viewed by 9051
Abstract
In general, patients who are unwell do not know with which outpatient department they should register and can only get advice after they are diagnosed by a family doctor. This may cause a waste of time and medical resources. In this paper, we propose an attention-based bidirectional long short-term memory (Att-BiLSTM) model for service robots, which has the ability to classify outpatient categories according to textual content. With the outpatient text classification system, users can describe their situation to a service robot and the robot can tell them with which clinic they should register. In the implementation of the proposed method, the dialog text of users of the Taiwan E Hospital was collected as the training data set. Through natural language processing (NLP), the information in the dialog text was extracted, sorted, and converted to train the long short-term memory (LSTM) deep learning model. Experimental results verify the ability of the robot to respond to questions autonomously using the acquired casual knowledge. Full article
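The attention step on top of the BiLSTM can be written as a softmax-weighted sum of per-token hidden states. This is a minimal numeric sketch, not the paper's model: the hidden vectors and scores are toy values, whereas a trained model learns the scores from data.

```python
# Attention pooling: softmax over per-token scores, then a weighted sum
# of hidden-state vectors. All numbers are invented for illustration.
import math

def attention_pool(hidden_states, scores):
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    weights = [e / total for e in exps]          # softmax over time steps
    dim = len(hidden_states[0])
    return [sum(w * h[d] for w, h in zip(weights, hidden_states))
            for d in range(dim)]

states = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]    # per-token hidden vectors
print(attention_pool(states, scores=[0.1, 0.1, 2.0]))
```

With equal scores the pooling reduces to a plain average; high scores let symptom-bearing tokens dominate the sentence representation fed to the outpatient-category classifier.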
(This article belongs to the Special Issue Natural Language Processing in Healthcare and Medical Informatics)
31 pages, 1157 KiB  
Article
A New Green Prospective of Non-orthogonal Multiple Access (NOMA) for 5G
by Vishaka Basnayake, Dushantha Nalin K. Jayakody, Vishal Sharma, Nikhil Sharma, P. Muthuchidambaranathan and Hakim Mabed
Information 2020, 11(2), 89; https://doi.org/10.3390/info11020089 - 7 Feb 2020
Cited by 33 | Viewed by 9025
Abstract
Energy efficiency is a major concern in emerging mobile cellular wireless networks, since massive connectivity is to be expected, with high energy requirements for network operators. With non-orthogonal multiple access (NOMA) being the frontier multiple access scheme for 5G, there exist numerous research attempts at enhancing the energy efficiency of NOMA-enabled wireless networks while maintaining their outstanding performance metrics, such as high throughput, data rates, and capacity. The concept of green NOMA is introduced in a generalized manner to identify energy-efficient NOMA schemes. These schemes result in an optimal scenario in which the energy generated for communication is managed sustainably; hence, the effect on the environment, economy, living beings, etc., is minimized. Recent research developments are classified for a better understanding of areas that lack attention and need further improvement. In addition, a performance comparison of energy-efficient NOMA schemes against conventional NOMA is presented. Finally, challenges and emerging research trends for energy-efficient NOMA are discussed. Full article
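Power-domain NOMA's appeal can be seen in a two-user toy calculation: the base station superposes both users' signals, the far user decodes treating the near user's signal as noise, and the near user removes the far user's signal via successive interference cancellation (SIC) before decoding its own. This is an illustrative sketch with invented power split, channel gains, and noise, not a result from the paper:

```python
# Achievable rates (bits/s/Hz) of a two-user downlink power-domain NOMA
# pair under ideal SIC. All parameter values are toy assumptions.
import math

def noma_rates(p_total, alpha_far, g_near, g_far, noise):
    p_far = alpha_far * p_total          # more power to the weaker user
    p_near = (1 - alpha_far) * p_total
    r_far = math.log2(1 + (p_far * g_far) / (p_near * g_far + noise))
    r_near = math.log2(1 + (p_near * g_near) / noise)  # after SIC
    return r_far, r_near

print(noma_rates(p_total=1.0, alpha_far=0.8, g_near=1.0, g_far=0.1,
                 noise=0.01))
```

Energy-efficient NOMA designs then tune quantities like the power split `alpha_far` to maximize the sum rate per unit of transmit power.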
(This article belongs to the Special Issue 10th Anniversary of Information—Emerging Research Challenges)
23 pages, 4115 KiB  
Article
Vehicle Location Prediction Based on Spatiotemporal Feature Transformation and Hybrid LSTM Neural Network
by Yuelei Xiao and Qing Nian
Information 2020, 11(2), 84; https://doi.org/10.3390/info11020084 - 4 Feb 2020
Cited by 16 | Viewed by 3640
Abstract
Location prediction has attracted much attention due to its important role in many location-based services. Existing location prediction methods suffer from large trajectory information loss and low prediction accuracy, which makes them unsuitable for vehicle location prediction in intelligent transportation systems, where small trajectory information loss and high prediction accuracy are needed. To solve this problem, a vehicle location prediction algorithm is proposed in this paper, based on a spatiotemporal feature transformation method and a hybrid long short-term memory (LSTM) neural network model. In the algorithm, the transformation method is used to convert a vehicle trajectory into an appropriate input for the neural network model, which then predicts the vehicle location at the next time step. The experimental results show that the trajectory information of an original taxi trajectory is largely preserved by its shadowed taxi trajectory, and that the trajectory points of the predicted taxi trajectory are close to those of the shadowed taxi trajectory. This proves that our proposed algorithm effectively reduces the information loss of vehicle trajectories and improves the accuracy of vehicle location prediction. Furthermore, the experimental results also show that the algorithm achieves a higher distance percentage and a shorter average distance than the other prediction models. Therefore, our proposed algorithm outperforms the other prediction models in the accuracy of vehicle location prediction. Full article
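One common way to turn a raw trajectory into neural-network input is to replace absolute coordinates with per-step displacements. This sketch is illustrative only and is not the paper's specific transformation; the coordinates are invented:

```python
# Convert an absolute (lat, lon) trajectory into per-step displacement
# features, a typical preprocessing step before feeding an LSTM.
# Toy coordinates for illustration.

def to_deltas(trajectory):
    """Turn [(lat, lon), ...] into [(dlat, dlon), ...] between fixes."""
    return [(b[0] - a[0], b[1] - a[1])
            for a, b in zip(trajectory, trajectory[1:])]

track = [(34.26, 108.95), (34.27, 108.96), (34.29, 108.96)]
print(to_deltas(track))
```

Displacements keep the motion pattern while discarding the absolute origin, which often helps a sequence model generalize across different areas of a city.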
36 pages, 5832 KiB  
Article
A Novel Bio-Inspired Deep Learning Approach for Liver Cancer Diagnosis
by Rania M. Ghoniem
Information 2020, 11(2), 80; https://doi.org/10.3390/info11020080 - 31 Jan 2020
Cited by 41 | Viewed by 6967
Abstract
Current research on computer-aided diagnosis (CAD) of liver cancer is based on traditional feature engineering methods, which have several drawbacks, including redundant features and high computational cost. Recent deep learning models overcome these problems by implicitly capturing intricate structures from large-scale medical image data. However, they are still affected by network hyperparameters and topology. Hence, the state of the art in this area can be further optimized by integrating bio-inspired concepts into deep learning models. This work proposes a novel bio-inspired deep learning approach for optimizing the predictive results of liver cancer diagnosis. It contributes to the literature in two ways. Firstly, a novel hybrid segmentation algorithm, SegNet-UNet-ABC, is proposed to extract liver lesions from computed tomography (CT) images using the SegNet network, the UNet network, and artificial bee colony optimization (ABC). This algorithm uses SegNet to separate the liver from the abdominal CT scan, and then UNet to extract lesions from the liver. In parallel, the ABC algorithm is hybridized with each network to tune its hyperparameters, as these highly affect segmentation performance. Secondly, a hybrid algorithm of the LeNet-5 model and the ABC algorithm, namely LeNet-5/ABC, is proposed as a feature extractor and classifier of liver lesions. The LeNet-5/ABC algorithm uses the ABC to select the optimal topology for constructing the LeNet-5 network, as the network structure affects learning time and classification accuracy. To assess the performance of the two proposed algorithms, comparisons have been made with state-of-the-art algorithms for liver lesion segmentation and classification. The results reveal that SegNet-UNet-ABC is superior to the compared algorithms regarding the Jaccard index, Dice index, correlation coefficient, and convergence time. Moreover, the LeNet-5/ABC algorithm outperforms the other algorithms regarding specificity, F1-score, accuracy, and computational time. Full article
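Artificial bee colony optimization, which the paper uses to tune network hyperparameters and topology, can be sketched in heavily simplified form on a 1-D toy objective. This is an illustrative sketch under stated assumptions, not the paper's implementation: a full ABC distinguishes employed, onlooker, and scout phases, while here only employed and scout roles appear, and all parameters are invented.

```python
# Simplified artificial-bee-colony-style search minimizing a toy
# objective. In the paper's setting, the "food source" would encode
# hyperparameters and the objective would be validation loss.
import random

def abc_minimize(objective, bounds, n_bees=10, n_iters=200, limit=20, seed=0):
    rng = random.Random(seed)
    lo, hi = bounds
    foods = [rng.uniform(lo, hi) for _ in range(n_bees)]
    trials = [0] * n_bees
    best = min(foods, key=objective)
    for _ in range(n_iters):
        for i in range(n_bees):
            # Employed bee: perturb toward a random partner's food source.
            partner = foods[rng.randrange(n_bees)]
            cand = foods[i] + rng.uniform(-1, 1) * (foods[i] - partner)
            cand = min(hi, max(lo, cand))
            if objective(cand) < objective(foods[i]):
                foods[i], trials[i] = cand, 0
                if objective(cand) < objective(best):
                    best = cand
            else:
                trials[i] += 1
            # Scout bee: abandon a stagnant food source.
            if trials[i] > limit:
                foods[i], trials[i] = rng.uniform(lo, hi), 0
    return best

best = abc_minimize(lambda x: (x - 3.0) ** 2, bounds=(-10, 10))
print(best)  # close to 3.0
```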
26 pages, 4621 KiB  
Article
Theme Mapping and Bibliometrics Analysis of One Decade of Big Data Research in the Scopus Database
by Anne Parlina, Kalamullah Ramli and Hendri Murfi
Information 2020, 11(2), 69; https://doi.org/10.3390/info11020069 - 28 Jan 2020
Cited by 53 | Viewed by 9664
Abstract
Recently, the popularity of big data as a research field has shown continuous and wide-scale growth. This study aims to capture the scientific structure and topic evolution of big data research using bibliometrics and text-mining-based analysis methods. Bibliographic data of journal articles on big data published between 2009 and 2018 were collected from the Scopus database and analyzed. The results show a significant growth in publications since 2014. The findings also highlight the core journals, most-cited articles, and most productive authors, countries, and institutions. In addition, a unique approach to identifying and analyzing major research themes in big data publications is proposed: keywords were clustered, and each cluster was labeled as a theme. Moreover, the papers were divided into four sub-periods to observe the thematic evolution. The theme mapping reveals that research on big data is dominated by big data analytics, which covers methods, tools, supporting infrastructure, and applications. Other critical aspects of big data research are security and privacy. Social networks and the Internet of Things are significant sources of big data, and the resources and services offered by cloud computing strongly support the management and processing of big data. Full article
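Keyword clustering of the kind the abstract describes typically starts from a keyword co-occurrence matrix. A minimal sketch of the counting step (illustrative only; the article records and keywords are invented, and this is not the paper's method):

```python
# Count keyword co-occurrences across article records: the raw material
# for keyword clustering and theme mapping. Toy data for illustration.
from collections import Counter
from itertools import combinations

articles = [
    {"big data", "cloud computing", "analytics"},
    {"big data", "analytics", "machine learning"},
    {"big data", "privacy"},
]

pairs = Counter()
for kws in articles:
    pairs.update(combinations(sorted(kws), 2))

print(pairs.most_common(2))
```

Clustering algorithms can then group keywords whose co-occurrence counts are high, and each resulting cluster is labeled as a theme.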
17 pages, 1583 KiB  
Article
From HMI to HMIs: Towards an HMI Framework for Automated Driving
by Klaus Bengler, Michael Rettenmaier, Nicole Fritz and Alexander Feierle
Information 2020, 11(2), 61; https://doi.org/10.3390/info11020061 - 25 Jan 2020
Cited by 135 | Viewed by 14402
Abstract
During automated driving, there is a need for interaction between the automated vehicle (AV) and the passengers inside the vehicle, and between the AV and the surrounding road users outside of the car. For this purpose, different types of human–machine interfaces (HMIs) are implemented. This paper introduces an HMI framework and describes the different HMI types and the factors influencing their selection and content. The relationships between these HMI types and their influencing factors are also presented in the framework, and the interrelations of the HMI types are analyzed. Furthermore, we describe how the framework can be used in academia and industry to coordinate research and development activities. With the help of the HMI framework, we identify research gaps in the field of HMIs for automated driving to be explored in the future. Full article
(This article belongs to the Special Issue Automotive User Interfaces and Interactions in Automated Driving)
13 pages, 4062 KiB  
Article
Short-Term Solar Irradiance Forecasting Based on a Hybrid Deep Learning Methodology
by Ke Yan, Hengle Shen, Lei Wang, Huiming Zhou, Meiling Xu and Yuchang Mo
Information 2020, 11(1), 32; https://doi.org/10.3390/info11010032 - 6 Jan 2020
Cited by 61 | Viewed by 5016
Abstract
Accurate prediction of solar irradiance is beneficial in reducing the energy waste associated with photovoltaic power plants, preventing system damage caused by severe fluctuations of solar irradiance, and stabilizing the power output integration between different power grids. Considering the randomness and multiple dimensions of weather data, a hybrid deep learning model that combines a gated recurrent unit (GRU) neural network and an attention mechanism is proposed for forecasting solar irradiance changes in four different seasons. In the first step, an Inception neural network and ResNet are designed to extract features from the original dataset. Secondly, the extracted features are input into the recurrent neural network (RNN) for model training. Experimental results show that the proposed hybrid deep learning model accurately predicts short-term solar irradiance changes. In addition, the forecasting performance of the model is better than that of traditional deep learning models (such as long short-term memory and GRU). Full article
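The GRU's update and reset gating, which the hybrid model builds on, can be shown with a single scalar cell step. This is an illustrative sketch with invented scalar weights, not the paper's trained model; real layers use weight matrices over feature vectors.

```python
# One GRU cell step in plain Python: update gate z, reset gate r,
# candidate state, and interpolation. Scalar toy weights for illustration.
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def gru_step(x, h, wz, uz, wr, ur, wh, uh):
    z = sigmoid(wz * x + uz * h)                 # update gate
    r = sigmoid(wr * x + ur * h)                 # reset gate
    h_tilde = math.tanh(wh * x + uh * (r * h))   # candidate state
    return (1 - z) * h + z * h_tilde             # new hidden state

h = 0.0
for x in [0.5, 0.8, 0.3]:  # toy normalized irradiance inputs
    h = gru_step(x, h, wz=1.0, uz=0.5, wr=1.0, ur=0.5, wh=1.0, uh=0.5)
print(h)
```

The gates let the cell decide, per time step, how much past irradiance history to keep versus overwrite, which is what makes GRUs suitable for fluctuating weather sequences.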
(This article belongs to the Special Issue Machine Learning on Scientific Data and Information)