
Data Fusion and Machine Learning in Sensor Networks

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Sensor Networks".

Deadline for manuscript submissions: closed (31 March 2022) | Viewed by 25252

Special Issue Editors


Dr. Jesús García-Herrero
Guest Editor
Computer Science Department, Universidad Carlos III de Madrid, Madrid, Spain
Interests: information fusion; artificial intelligence; machine vision; autonomous vehicles

Dr. Antonio Berlanga
Guest Editor
Applied Artificial Intelligence Group, Universidad Carlos III de Madrid, Colmenarejo (Madrid), Spain
Interests: artificial intelligence; machine learning; big data; evolutionary computation; decision support systems

Special Issue Information

Dear Colleagues,

In today’s digital world, information is the key factor in decision making. Ubiquitous electronic sources, such as sensors and video, provide a steady stream of data, while text-based data from databases, the Internet, email, chat, VoIP, and social media are growing exponentially. The ability to make sense of these data by fusing them into new knowledge provides a clear advantage in decision making.

Fusion systems aim to integrate sensor data with information in databases, knowledge bases, contextual information, etc., in order to describe situations. In essence, the goal of information fusion is to attain a global view of a scenario so that the best decision can be made.

The key aspect of modern data fusion (DF) applications is the appropriate integration of all types of information and knowledge: observational data, knowledge models (a priori or inductively learned), and contextual information. Each of these categories has a distinctive nature and makes its own potential contribution to the result of the fusion process.

  • Observational data: Observational data are the fundamental data about a dynamic scenario, collected from some observational capability (sensors of any type). These data concern the observable entities in the world that are of interest.
  • Contextual information: Contextual information has become fundamental to developing models in complex scenarios. Context, and the elements of what could be called contextual information, can be defined as “the set of circumstances surrounding a task that are of potential relevance to its completion.” Because of this task relevance, any fusion or estimation/inference task implies developing the best possible estimate while taking this lateral knowledge into account.
  • Learned knowledge: DF systems combine multi-source data to provide inferences, exploiting models of the expected behaviors of entities (physical models such as kinematics, or logical models such as behaviors expected in a given context). Where a priori knowledge for DF process development cannot be formed, one possibility is to extract knowledge through online machine learning processes operating on observational and other data. These are procedural and algorithmic methods for discovering relationships among, and behaviors of, entities of interest.
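A minimal sketch of how observational data and contextual knowledge can be combined, assuming Gaussian models for both the sensor observation and the contextual prior (an illustrative example only, not a method from this issue):

```python
def fuse_gaussian(mu_obs, var_obs, mu_ctx, var_ctx):
    """Fuse a noisy sensor observation with a contextual prior.

    Both are modeled as Gaussians; the fused estimate is the
    precision-weighted average (the standard product-of-Gaussians
    result used in many data-fusion pipelines).
    """
    precision = 1.0 / var_obs + 1.0 / var_ctx
    mu = (mu_obs / var_obs + mu_ctx / var_ctx) / precision
    return mu, 1.0 / precision

# A thermometer reads 2.0 degC (variance 4.0); context (season, altitude)
# suggests -1.0 degC (variance 1.0). The fused estimate leans toward the
# more certain source, and its variance shrinks below both inputs.
mu, var = fuse_gaussian(2.0, 4.0, -1.0, 1.0)  # -> (-0.4, 0.8)
```

The shrinking variance is the payoff of fusion: combining two uncertain sources yields an estimate more certain than either alone.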

This Special Issue invites contributions on topics including, but not limited to, the following:

  • Data fusion of distributed sensors
  • Context definition and management
  • Machine learning techniques
  • Integration of data fusion
  • Ambient intelligence
  • Data fusion on autonomous systems
  • Human–computer interaction
  • Application scenarios for machine learning-based data fusion
  • Data fusion model performance evaluation
  • Machine learning interpretability in data fusion

Dr. Jesús García-Herrero
Dr. Antonio Berlanga
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, you can access the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss francs). Submitted papers should be well formatted and written in good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Data fusion
  • Machine learning
  • Decision support
  • Big data
  • Context-adaptation
  • Internet of things
  • Machine vision

Published Papers (9 papers)

19 pages, 1442 KiB  
Article
A Graph Neural Network with Spatio-Temporal Attention for Multi-Sources Time Series Data: An Application to Frost Forecast
by Hernan Lira, Luis Martí and Nayat Sanchez-Pi
Sensors 2022, 22(4), 1486; https://doi.org/10.3390/s22041486 - 15 Feb 2022
Cited by 9 | Viewed by 4523
Abstract
Frost forecast is an important issue in climate research because of its economic impact on several industries. In this study, we propose GRAST-Frost, a graph neural network (GNN) with spatio-temporal architecture, which is used to predict minimum temperatures and the incidence of frost. We developed an IoT platform capable of acquiring weather data from an experimental site, and in addition, data were collected from 10 weather stations in close proximity to the aforementioned site. The model considers spatial and temporal relations while processing multiple time series simultaneously. Performing predictions of 6, 12, 24, and 48 h in advance, this model outperforms classical time series forecasting methods, including linear and nonlinear machine learning methods, simple deep learning architectures, and nongraph deep learning models. In addition, we show that our model significantly improves on the current state of the art of frost forecasting methods.
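The spatial side of such a model starts from a graph over weather stations. A minimal sketch of graph construction, with hypothetical station coordinates and distance threshold (not the paper's actual graph-building procedure):

```python
import math

def build_station_graph(coords, radius):
    """Connect every pair of stations closer than `radius`;
    the resulting edges are what a GNN's message passing
    would operate over."""
    edges = []
    for i in range(len(coords)):
        for j in range(i + 1, len(coords)):
            if math.dist(coords[i], coords[j]) <= radius:
                edges.append((i, j))
    return edges

stations = [(0.0, 0.0), (1.0, 0.0), (10.0, 10.0)]
edges = build_station_graph(stations, radius=2.0)  # only the first two stations connect
```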
(This article belongs to the Special Issue Data Fusion and Machine Learning in Sensor Networks)

16 pages, 1289 KiB  
Article
Infrequent Pattern Detection for Reliable Network Traffic Analysis Using Robust Evolutionary Computation
by A. N. M. Bazlur Rashid, Mohiuddin Ahmed and Al-Sakib Khan Pathan
Sensors 2021, 21(9), 3005; https://doi.org/10.3390/s21093005 - 25 Apr 2021
Cited by 5 | Viewed by 2595
Abstract
While anomaly detection is very important in many domains, such as in cybersecurity, there are many rare anomalies or infrequent patterns in cybersecurity datasets. Detection of infrequent patterns is computationally expensive. Cybersecurity datasets consist of many features, mostly irrelevant, resulting in lower classification performance by machine learning algorithms. Hence, a feature selection (FS) approach, i.e., selecting relevant features only, is an essential preprocessing step in cybersecurity data analysis. Despite many FS approaches proposed in the literature, cooperative co-evolution (CC)-based FS approaches can be more suitable for cybersecurity data preprocessing considering the Big Data scenario. Accordingly, in this paper, we have applied our previously proposed CC-based FS with random feature grouping (CCFSRFG) to a benchmark cybersecurity dataset as the preprocessing step. The dataset with original features and the dataset with a reduced number of features were used for infrequent pattern detection. Experimental analysis was performed and evaluated using 10 unsupervised anomaly detection techniques. Therefore, the proposed infrequent pattern detection is termed Unsupervised Infrequent Pattern Detection (UIPD). Then, we compared the experimental results with and without FS in terms of true positive rate (TPR). Experimental analysis indicates that the highest rate of TPR improvement was by cluster-based local outlier factor (CBLOF) of the backdoor infrequent pattern detection, and it was 385.91% when using FS. Furthermore, the highest overall infrequent pattern detection TPR was improved by 61.47% for all infrequent patterns using clustering-based multivariate Gaussian outlier score (CMGOS) with FS.
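The random feature grouping idea behind CCFSRFG can be sketched in a few lines (an illustrative decomposition step only, not the authors' full cooperative co-evolution algorithm):

```python
import random

def random_feature_groups(n_features, n_groups, seed=0):
    """Randomly partition feature indices into disjoint groups.

    Cooperative co-evolution then searches for a good feature
    subset inside each group separately, keeping the search
    space of each subproblem small.
    """
    rng = random.Random(seed)
    idx = list(range(n_features))
    rng.shuffle(idx)
    return [idx[i::n_groups] for i in range(n_groups)]

groups = random_feature_groups(10, 3, seed=42)
```

The groups are disjoint and cover all features, so recombining the per-group subsets always yields a valid overall feature selection.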

32 pages, 6896 KiB  
Article
Prototype Regularized Manifold Regularization Technique for Semi-Supervised Online Extreme Learning Machine
by Muhammad Zafran Muhammad Zaly Shah, Anazida Zainal, Fuad A. Ghaleb, Abdulrahman Al-Qarafi and Faisal Saeed
Sensors 2022, 22(9), 3113; https://doi.org/10.3390/s22093113 - 19 Apr 2022
Cited by 1 | Viewed by 1689
Abstract
Data streaming applications such as the Internet of Things (IoT) require processing or predicting from sequential data from various sensors. However, most of the data are unlabeled, making applying fully supervised learning algorithms impossible. The online manifold regularization approach allows sequential learning from partially labeled data, which is useful for sequential learning in environments with scarcely labeled data. Unfortunately, the manifold regularization technique does not work out of the box as it requires determining the radial basis function (RBF) kernel width parameter. The RBF kernel width parameter directly impacts the performance as it is used to inform the model to which class each piece of data most likely belongs. The width parameter is often determined off-line via hyperparameter search, where a vast amount of labeled data is required. Therefore, it limits its utility in applications where it is difficult to collect a great deal of labeled data, such as data stream mining. To address this issue, we proposed eliminating the RBF kernel from the manifold regularization technique altogether by combining the manifold regularization technique with a prototype learning method, which uses a finite set of prototypes to approximate the entire data set. Compared to other manifold regularization approaches, this approach instead queries the prototype-based learner to find the most similar samples for each sample instead of relying on the RBF kernel. Thus, it no longer necessitates the RBF kernel, which improves its practicality. The proposed approach can learn faster and achieve a higher classification performance than other manifold regularization techniques based on experiments on benchmark data sets. Results showed that the proposed approach can perform well even without using the RBF kernel, which improves the practicality of manifold regularization techniques for semi-supervised learning.
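The core substitution described above, querying a finite set of prototypes for the most similar samples instead of weighting all pairs with an RBF kernel, can be sketched as follows (an illustrative nearest-prototype lookup, not the paper's implementation):

```python
def nearest_prototype(x, prototypes):
    """Return the index of the prototype closest to sample x
    (squared Euclidean distance). This lookup replaces the RBF
    affinity computation, so no kernel width needs tuning."""
    dists = [sum((a - b) ** 2 for a, b in zip(x, p)) for p in prototypes]
    return min(range(len(prototypes)), key=dists.__getitem__)

protos = [(0.0, 0.0), (5.0, 5.0)]
label = nearest_prototype((0.5, -0.2), protos)  # closest to the first prototype
```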

17 pages, 13807 KiB  
Article
Modeling and Fault Detection of Brushless Direct Current Motor by Deep Learning Sensor Data Fusion
by Priscile Suawa, Tenia Meisel, Marcel Jongmanns, Michael Huebner and Marc Reichenbach
Sensors 2022, 22(9), 3516; https://doi.org/10.3390/s22093516 - 05 May 2022
Cited by 11 | Viewed by 2957
Abstract
Only with new sensor concepts in a network, which go far beyond what the current state-of-the-art can offer, can current and future requirements for flexibility, safety, and security be met. The combination of data from many sensors allows a richer representation of the observed phenomenon, e.g., system degradation, which can facilitate analysis and decision-making processes. This work addresses the topic of predictive maintenance by exploiting sensor data fusion and artificial intelligence-based analysis. With a dataset such as vibration and sound from sensors, we focus on studying paradigms that orchestrate the most optimal combination of sensors with deep learning sensor fusion algorithms to enable predictive maintenance. In our experimental setup, we used raw data obtained from two sensors, a microphone, and an accelerometer installed on a brushless direct current (BLDC) motor. The data from each sensor were processed individually and, in a second step, merged to create a solid base for analysis. To diagnose BLDC motor faults, this work proposes to use data-level sensor fusion with deep learning methods such as deep convolutional neural networks (DCNNs) for their ability to automatically extract relevant information from the input data, the long short-term memory method (LSTM), and convolutional long short-term memory (CNN-LSTM), a combination of the two previous methods. The results show that in our setup, sound signals outperform vibrations when used individually for training. However, without any feature selection/extraction step, the accuracy of the models improves with data fusion and reaches 98.8%, 93.5%, and 73.6% for the DCNN, CNN-LSTM, and LSTM methods, respectively, 98.8% being a performance that, according to our reading, has never been reached in the analysis of the faults of a BLDC motor without first going through the extraction of the characteristics and their fusion by traditional methods. These results show that it is possible to work with raw data from multiple sensors and achieve good results using deep learning methods without spending time and resources on selecting appropriate features to extract and methods to use for feature extraction and data fusion.
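Data-level fusion of the kind described, merging raw windows from two sensors before any feature extraction, can be sketched as follows (hypothetical window contents; the actual input shapes and preprocessing are defined in the paper):

```python
def fuse_windows(audio, accel):
    """Stack time-aligned raw windows from two sensors into one
    two-channel sample, the kind of input a 1D CNN would consume
    in a data-level fusion pipeline."""
    if len(audio) != len(accel):
        raise ValueError("windows must be time-aligned and of equal length")
    return [(a, v) for a, v in zip(audio, accel)]

# Three time steps of microphone and accelerometer readings.
sample = fuse_windows([0.1, 0.2, 0.3], [9.8, 9.7, 9.9])
```

Because fusion happens at the data level, the network itself decides which sensor's channel carries the discriminative information, rather than a hand-designed feature extractor.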

23 pages, 738 KiB  
Review
Applications of Fusion Techniques in E-Commerce Environments: A Literature Review
by Emmanouil Daskalakis, Konstantina Remoundou, Nikolaos Peppes, Theodoros Alexakis, Konstantinos Demestichas, Evgenia Adamopoulou and Efstathios Sykas
Sensors 2022, 22(11), 3998; https://doi.org/10.3390/s22113998 - 25 May 2022
Cited by 3 | Viewed by 2338
Abstract
The extreme rise of the Internet of Things and the increasing access of people to web applications have led to the expanding use of diverse e-commerce solutions, which was even more obvious during the COVID-19 pandemic. Large amounts of heterogeneous data from multiple sources reside in e-commerce environments and are often characterized by data source inaccuracy and unreliability. In this regard, various fusion techniques can play a crucial role in addressing such challenges and are extensively used in numerous e-commerce applications. This paper’s goal is to conduct an academic literature review of prominent fusion-based solutions that can assist in tackling the everyday challenges the e-commerce environments face as well as in their needs to make more accurate and better business decisions. For categorizing the solutions, a novel 4-fold categorization approach is introduced including product-related, economy-related, business-related, and consumer-related solutions, followed by relevant subcategorizations, based on the wide variety of challenges faced by e-commerce. Results from the 65 fusion-related solutions included in the paper show a great variety of different fusion applications, focusing on the fusion of already existing models and algorithms as well as the existence of a large number of different machine learning techniques focusing on the same e-commerce-related challenge.

26 pages, 4496 KiB  
Article
Graph-Powered Interpretable Machine Learning Models for Abnormality Detection in Ego-Things Network
by Divya Thekke Kanapram, Lucio Marcenaro, David Martin Gomez and Carlo Regazzoni
Sensors 2022, 22(6), 2260; https://doi.org/10.3390/s22062260 - 15 Mar 2022
Viewed by 2151
Abstract
In recent days, it is becoming essential to ensure that the outcomes of signal processing methods based on machine learning (ML) data-driven models can provide interpretable predictions. The interpretability of ML models can be defined as the capability to understand the reasons that contributed to generating a given outcome in a complex autonomous or semi-autonomous system. The necessity of interpretability is often related to the evaluation of performances in complex systems and the acceptance of agents’ automatization processes where critical high-risk decisions have to be taken. This paper concentrates on one of the core functionality of such systems, i.e., abnormality detection, and on choosing a model representation modality based on a data-driven machine learning (ML) technique such that the outcomes become interpretable. The interpretability in this work is achieved through graph matching of semantic level vocabulary generated from the data and their relationships. The proposed approach assumes that the data-driven models to be chosen should support emergent self-awareness (SA) of the agents at multiple abstraction levels. It is demonstrated that the capability of incrementally updating learned representation models based on progressive experiences of the agent is shown to be strictly related to interpretability capability. As a case study, abnormality detection is analyzed as a primary feature of the collective awareness (CA) of a network of vehicles performing cooperative behaviors. Each vehicle is considered an example of an Internet of Things (IoT) node, therefore providing results that can be generalized to an IoT framework where agents have different sensors, actuators, and tasks to be accomplished. The capability of a model to allow evaluation of abnormalities at different levels of abstraction in the learned models is addressed as a key aspect for interpretability.

25 pages, 7860 KiB  
Article
Error Reduction in Vision-Based Multirotor Landing System
by Juan Pedro Llerena Caña, Jesús García Herrero and José Manuel Molina López
Sensors 2022, 22(10), 3625; https://doi.org/10.3390/s22103625 - 10 May 2022
Viewed by 1948
Abstract
New applications are continuously appearing with drones as protagonists, but all of them share an essential critical maneuver—landing. New application requirements have led the study of novel landing strategies, in which vision systems have played and continue to play a key role. Generally, the new applications use the control and navigation systems embedded in the aircraft. However, the internal dynamics of these systems, initially focused on other tasks such as the smoothing trajectories between different waypoints, can trigger undesired behaviors. In this paper, we propose a landing system based on monocular vision and navigation information to estimate the helipad global position. In addition, the global estimation system includes a position error correction module by cylinder space transformation and a filtering system with a sliding window. To conclude, the landing system is evaluated with three quality metrics, showing how the proposed correction system together with stationary filtering improves the raw landing system.
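The sliding-window filtering stage mentioned above can be illustrated with a simple median filter over successive position estimates (a generic sketch; the window size and filter choice are assumptions, not the paper's exact design):

```python
from collections import deque
from statistics import median

class SlidingMedianFilter:
    """Smooth a stream of scalar position estimates with a
    fixed-size sliding window, damping single-frame outliers
    such as a momentary helipad mis-detection."""
    def __init__(self, size=5):
        self.window = deque(maxlen=size)

    def update(self, estimate):
        self.window.append(estimate)
        return median(self.window)

f = SlidingMedianFilter(size=3)
outputs = [f.update(v) for v in [1.0, 1.1, 9.0, 1.2]]  # the 9.0 spike never dominates
```

A median is chosen over a mean here because a single outlier frame shifts the mean but leaves the median nearly untouched.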

18 pages, 1779 KiB  
Article
Decision-Based Fusion for Vehicle Matching
by Sally Ghanem, Ryan A. Kerekes and Ryan Tokola
Sensors 2022, 22(7), 2803; https://doi.org/10.3390/s22072803 - 06 Apr 2022
Cited by 1 | Viewed by 1538
Abstract
In this work, a framework is proposed for decision fusion utilizing features extracted from vehicle images and their detected wheels. Siamese networks are exploited to extract key signatures from pairs of vehicle images. Our approach then examines the extent of reliance between signatures generated from vehicle images to robustly integrate different similarity scores and provide a more informed decision for vehicle matching. To that end, a dataset was collected that contains hundreds of thousands of side-view vehicle images under different illumination conditions and elevation angles. Experiments show that our approach could achieve better matching accuracy by taking into account the decisions made by a whole-vehicle or wheels-only matching network.
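Decision-level fusion of two matchers' similarity scores can be sketched as a weighted combination with a threshold (the weights and threshold below are illustrative assumptions; the paper derives how much to rely on each network's score):

```python
def fuse_decision(whole_score, wheels_score,
                  w_whole=0.6, w_wheels=0.4, threshold=0.5):
    """Combine whole-vehicle and wheels-only similarity scores
    into a single fused score and a match/no-match decision."""
    fused = w_whole * whole_score + w_wheels * wheels_score
    return fused, fused >= threshold

# Whole-vehicle network is fairly confident, wheels-only is not;
# the weighted combination still clears the match threshold.
fused, is_match = fuse_decision(0.8, 0.3)
```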

20 pages, 1152 KiB  
Review
Data Fusion in Agriculture: Resolving Ambiguities and Closing Data Gaps
by Jayme Garcia Arnal Barbedo
Sensors 2022, 22(6), 2285; https://doi.org/10.3390/s22062285 - 16 Mar 2022
Cited by 16 | Viewed by 3582
Abstract
Acquiring useful data from agricultural areas has always been somewhat of a challenge, as these are often expansive, remote, and vulnerable to weather events. Despite these challenges, as technologies evolve and prices drop, a surge of new data are being collected. Although a wealth of data are being collected at different scales (i.e., proximal, aerial, satellite, ancillary data), this has been geographically unequal, causing certain areas to be virtually devoid of useful data to help face their specific challenges. However, even in areas with available resources and good infrastructure, data and knowledge gaps are still prevalent, because agricultural environments are mostly uncontrolled and there are vast numbers of factors that need to be taken into account and properly measured for a full characterization of a given area. As a result, data from a single sensor type are frequently unable to provide unambiguous answers, even with very effective algorithms, and even if the problem at hand is well defined and limited in scope. Fusing the information contained in different sensors and in data from different types is one possible solution that has been explored for some decades. The idea behind data fusion involves exploring complementarities and synergies of different kinds of data in order to extract more reliable and useful information about the areas being analyzed. While some success has been achieved, there are still many challenges that prevent a more widespread adoption of this type of approach. This is particularly true for the highly complex environments found in agricultural areas. In this article, we provide a comprehensive overview on the data fusion applied to agricultural problems; we present the main successes, highlight the main challenges that remain, and suggest possible directions for future research.
