
Parallel and Edge Computing with Artificial Intelligence for Sensor Networks

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Sensor Networks".

Deadline for manuscript submissions: closed (15 October 2023) | Viewed by 35575

Special Issue Editors


Prof. Dr. Chase Wu
Guest Editor
Department of Data Science, New Jersey Institute of Technology, Newark, NJ 07102, USA
Interests: big data; data-intensive computing; parallel and distributed computing; high-performance networking; large-scale scientific visualization; wireless sensor networks; cyber security

Dr. Celimuge Wu
Guest Editor
Department of Computer and Network Engineering, Graduate School of Informatics and Engineering, The University of Electro-Communications, 1-5-1, Chofugaoka, Chofu-shi, Tokyo 182-8585, Japan
Interests: ad hoc networks; sensor networks; intelligent transport systems; communication protocols; IoT; big data

Dr. Kihyeon Kwon
Guest Editor
Electronics, Information and Communication Engineering, Kangwon National University, Chuncheon, Gangwon-do, Republic of Korea
Interests: sensor networks; data fusion applications; parallel and edge computing

Special Issue Information

Dear Colleagues,

As key components of the Internet of Things (IoT), sensors of disparate types and modalities have been deployed almost everywhere, including home appliances, personal portable devices, factory pipelines, autonomous vehicles, transportation infrastructures, human bodies, country borders, battlefields, and farms. The ubiquitous deployment of such sensors leads to the formation of various sensor networks, which have found numerous successful applications in industry, agriculture, healthcare, homeland security, smart cities, smart manufacturing, and beyond.

In particular, sensors have become smarter, smaller, cheaper, and more powerful than ever before, making it possible to achieve quality through quantity. Consequently, a large deployment of sensors can generate colossal amounts of data on a regular basis, which must be transferred, integrated, and analyzed for decision making and decision support. Processing sensor data at large scales typically requires massive resources and big data frameworks for parallel computing at cloud-based data centers.

Edge computing, on the other hand, pushes computing to the edge of the network to relieve the resource pressure on cloud-based data centers and reduce the amount of data that must be transferred over networks. Processing data at or close to the source can bring significant performance improvements, such as faster response times for sensor network applications and drastically reduced energy consumption for sustainable operations. This computing paradigm has attracted increasing attention as more resources become available on edge devices.

Whether in the cloud or at the edge, artificial intelligence (AI) can be fused into every step of the data collection and processing workflow. In particular, a plethora of machine-learning-based approaches have been employed to determine sensor deployment and data routing, model and predict application performance, optimize resource allocation and utilization, and reduce energy consumption.

This Special Issue focuses on discussions of and insights into the latest advancements in parallel and edge computing with artificial intelligence in support of sensor network applications across all domains. We welcome novel and original contributions on a broad range of problems and methods concerning the theory, design, implementation, and evaluation of sensor-network-based computing solutions and AI-enabled approaches such as machine learning and deep learning.

The topics of interest include but are not limited to:

- Computation- and data-intensive sensor network applications;

- Cloud computing for sensor data processing;

- Big data technologies, frameworks, systems, and platforms for parallel computing;

- Distributed computation and data management in sensor network applications;

- Edge computing in sensor networks;

- Energy-efficient computation, communication, and caching at the edge;

- Edge intelligence in energy-efficient Internet of Things;

- Edge and fog computing for smart environments;

- Mobile edge computing for smart environments;

- Optimization, control, and automation of edge and cloud computing;

- Computing continuum from clouds to edges;

- Sustainable and smart edge systems;

- Resource management in edge and cloud computing;

- Artificial intelligence and machine learning for sensor network applications;

- Edge intelligence in smart home, smart buildings, and smart cities;

- Parallel and edge computing for smart manufacturing;

- Collaborative Internet of Vehicles for intelligent transportation;

- Artificial intelligence-enabled sensor-based surveillance, monitoring, and tracking;

- Sensor network applications in agriculture, healthcare, homeland security, etc.;

- Novel applications, experiences, and field trials with parallel and edge computing.

Prof. Dr. Chase Wu
Dr. Celimuge Wu
Dr. Kihyeon Kwon
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • sensor networks
  • parallel computing
  • edge computing
  • artificial intelligence

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.


Published Papers (11 papers)


Research

16 pages, 1666 KiB  
Article
CaFANet: Causal-Factors-Aware Attention Networks for Equipment Fault Prediction in the Internet of Things
by Zhenwen Gui, Shuaishuai He, Yao Lin, Xin Nan, Xiaoyan Yin and Chase Q. Wu
Sensors 2023, 23(16), 7040; https://doi.org/10.3390/s23167040 - 9 Aug 2023
Viewed by 1231
Abstract
Existing fault prediction algorithms based on deep learning have achieved good prediction performance. These algorithms treat all features equally and assume that the progression of equipment faults is stationary throughout the entire lifecycle. In fact, each feature contributes differently to the accuracy of fault prediction, and the progression of equipment faults is non-stationary. More specifically, capturing the time point at which a fault first appears is particularly important for improving the accuracy of fault prediction. Moreover, the progression of different equipment faults varies significantly. Therefore, taking feature differences and time information into consideration, we propose a Causal-Factors-Aware Attention Network, CaFANet, for equipment fault prediction in the Internet of Things. Experimental results and performance analysis confirm the superiority of the proposed algorithm over traditional machine learning methods, with prediction accuracy improved by up to 15.3%.
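The core idea that features should contribute unequally can be illustrated with a minimal attention sketch (this is not CaFANet itself; the score matrix `W` and the dimensions are hypothetical):

```python
import numpy as np

def softmax(z):
    z = z - np.max(z)           # numerical stability
    e = np.exp(z)
    return e / e.sum()

def feature_attention(x, W):
    """Weight each sensor feature by an attention score.

    x : (n_features,) feature vector from equipment sensors
    W : (n_features, n_features) hypothetical learned score matrix
    """
    scores = W @ x              # one unnormalized score per feature
    alpha = softmax(scores)     # attention weights sum to 1
    return alpha * x, alpha     # re-weighted features, weights

rng = np.random.default_rng(0)
x = rng.normal(size=8)          # e.g., 8 sensor readings
W = rng.normal(size=(8, 8))
weighted, alpha = feature_attention(x, W)
print(alpha.round(3))           # higher-weighted features contribute more
```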

17 pages, 1252 KiB  
Article
Phishing URLs Detection Using Sequential and Parallel ML Techniques: Comparative Analysis
by Naya Nagy, Malak Aljabri, Afrah Shaahid, Amnah Albin Ahmed, Fatima Alnasser, Linda Almakramy, Manar Alhadab and Shahad Alfaddagh
Sensors 2023, 23(7), 3467; https://doi.org/10.3390/s23073467 - 26 Mar 2023
Cited by 18 | Viewed by 6460
Abstract
In today’s digitalized era, World Wide Web services are a vital aspect of each individual’s daily life and are accessible to users via uniform resource locators (URLs). Cybercriminals constantly adapt to new security technologies and use URLs to exploit vulnerabilities for illicit benefits, such as stealing users’ personal and sensitive data, which can lead to financial loss, discredit, ransomware, or the spread of malicious infections and catastrophic cyber-attacks such as phishing attacks. Phishing attacks are recognized as a leading source of data breaches and the most prevalent deceitful scam among cyber-attacks. Artificial intelligence (AI)-based techniques such as machine learning (ML) and deep learning (DL) have proven highly effective in detecting phishing attacks. Nevertheless, sequential ML can be time intensive, insufficiently efficient for real-time detection, and incapable of handling vast amounts of data. However, utilizing parallel computing techniques in ML can help build precise, robust, and effective models for detecting phishing attacks with less computation time. Therefore, in this study, we utilized various multiprocessing and multithreading techniques in Python to train ML and DL models. The dataset used comprised 54 K records for training and 12 K for testing. Five experiments were carried out: the first based on sequential execution, followed by four based on parallel execution techniques (threading using the Python parallel backend, threading using the Python parallel backend with a specified number of jobs, manual threading, and multiprocessing using the Python parallel backend). Four models, namely random forest (RF), naïve Bayes (NB), convolutional neural network (CNN), and long short-term memory (LSTM), were deployed to carry out the experiments. Overall, the experiments yielded excellent results and speedup. Lastly, a comprehensive comparative analysis was performed to consolidate the findings.
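As a rough illustration of the parallel training setup described above, the following sketch trains a random forest under joblib's threading backend; the dataset, feature count, and parameters are synthetic stand-ins, not the paper's:

```python
from joblib import parallel_backend
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Synthetic stand-in for URL lexical features (the real dataset
# has ~54K training records; sizes here are illustrative only).
X, y = make_classification(n_samples=5000, n_features=30, random_state=42)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=42)

clf = RandomForestClassifier(n_estimators=200, random_state=42)

# Train inside a threading backend; scikit-learn picks up the
# context's n_jobs when the estimator's n_jobs is left unset.
with parallel_backend("threading", n_jobs=4):
    clf.fit(X_tr, y_tr)

print(f"accuracy: {clf.score(X_te, y_te):.3f}")
```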

23 pages, 995 KiB  
Article
Speeding Task Allocation Search for Reconfigurations in Adaptive Distributed Embedded Systems Using Deep Reinforcement Learning
by Ramón Rotaeche, Alberto Ballesteros and Julián Proenza
Sensors 2023, 23(1), 548; https://doi.org/10.3390/s23010548 - 3 Jan 2023
Cited by 1 | Viewed by 2334
Abstract
A Critical Adaptive Distributed Embedded System (CADES) is a group of interconnected nodes that must carry out a set of tasks to achieve a common goal while fulfilling several requirements associated with their critical (e.g., hard real-time requirements) and adaptive nature. In these systems, a key challenge is to solve, in a timely manner, the combinatorial optimization problem involved in finding the best way to allocate the tasks to the available nodes (i.e., the task allocation), taking into account aspects such as the computational costs of the tasks and the computational capacity of the nodes. This problem is not trivial, and there is no known polynomial-time algorithm to find the optimal solution. Several studies have proposed Deep Reinforcement Learning (DRL) approaches to solve combinatorial optimization problems, and in this work, we explore the application of such approaches to the task allocation problem in CADESs. We first discuss the potential advantages of using a DRL-based approach over several heuristic-based approaches to allocate tasks in CADESs, and we then demonstrate that a DRL-based approach can achieve results similar to the best-performing heuristic in terms of the optimality of the allocation, while requiring less time to generate such an allocation.
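For context, a heuristic baseline of the kind the authors compare against might resemble the following greedy scheme (a sketch only; the paper's actual heuristics and DRL formulation are more involved):

```python
def greedy_allocate(task_costs, node_capacities):
    """First-fit-decreasing style heuristic: assign the costliest task
    to the node with the most remaining capacity.
    Returns {task_index: node_index} or None if infeasible.
    """
    remaining = list(node_capacities)
    alloc = {}
    for t in sorted(range(len(task_costs)), key=lambda i: -task_costs[i]):
        n = max(range(len(remaining)), key=lambda j: remaining[j])
        if remaining[n] < task_costs[t]:
            return None                    # no feasible allocation found
        remaining[n] -= task_costs[t]
        alloc[t] = n
    return alloc

print(greedy_allocate([4, 3, 3, 2], [6, 6]))  # {0: 0, 1: 1, 2: 1, 3: 0}
```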

13 pages, 4123 KiB  
Communication
On Weather Data-Based Prediction of Gamma Exposure Rates Using Gradient Boosting Learning for Environmental Radiation Monitoring
by Changhyun Cho, Kihyeon Kwon and Chase Wu
Sensors 2022, 22(18), 7062; https://doi.org/10.3390/s22187062 - 18 Sep 2022
Cited by 3 | Viewed by 1865
Abstract
Gamma radiation has been classified by the International Agency for Research on Cancer (IARC) as a carcinogenic agent with sufficient evidence in humans. Previous studies show that some weather data are cross-correlated with gamma exposure rates; hence, we hypothesize that the gamma exposure rate could be predicted from certain weather data. In this study, we collected various weather and radiation data from an automatic weather system (AWS) and an environmental radiation monitoring system (ERMS) during a specific period and trained and tested two time-series learning algorithms—namely, long short-term memory (LSTM) and light gradient boosting machine (LightGBM)—with two preprocessing methods, namely standardization and normalization. The experimental results illustrate that standardization is superior to normalization for data preprocessing, with smaller deviations, and that LightGBM outperforms LSTM in terms of prediction accuracy and running time. The prediction capability of LightGBM makes it possible to determine whether an increase in the gamma exposure rate is caused by a change in the weather or by actual gamma radiation in environmental radiation monitoring.
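A minimal sketch of the winning configuration — standardized features fed to a LightGBM regressor — might look as follows, with synthetic data standing in for the AWS/ERMS measurements:

```python
import numpy as np
import lightgbm as lgb
from sklearn.preprocessing import StandardScaler

# Synthetic stand-in for weather features and gamma exposure rates;
# the real study uses time-series data from monitoring stations.
rng = np.random.default_rng(7)
X = rng.normal(size=(1000, 6))            # e.g., temperature, humidity, rain
y = X @ rng.normal(size=6) + rng.normal(scale=0.1, size=1000)

# Standardization (zero mean, unit variance) — preferred over min-max
# normalization in the paper's experiments.
X_std = StandardScaler().fit_transform(X)

split = 800                               # respect temporal order: no shuffling
model = lgb.LGBMRegressor(n_estimators=300, learning_rate=0.05)
model.fit(X_std[:split], y[:split])
pred = model.predict(X_std[split:])
print(f"MAE: {np.abs(pred - y[split:]).mean():.4f}")
```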

16 pages, 5370 KiB  
Article
A Semantic Data-Based Distributed Computing Framework to Accelerate Digital Twin Services for Large-Scale Disasters
by Jin-Woo Kwon, Seong-Jin Yun and Won-Tae Kim
Sensors 2022, 22(18), 6749; https://doi.org/10.3390/s22186749 - 7 Sep 2022
Cited by 5 | Viewed by 1980
Abstract
As natural disasters grow in scale due to environmental problems such as global warming, it is difficult for disaster management systems to rapidly provide disaster prediction services that involve complex natural phenomena. Digital twins can effectively provide such services using high-fidelity disaster models and real-time observational data with distributed computing schemes. However, previous schemes take little account of the correlations between environmental data of disasters, such as landscapes and weather. This causes inaccurate computing load predictions, resulting in unbalanced load partitioning, which increases the prediction service times of disaster management agencies. In this paper, we propose a novel distributed computing framework to accelerate prediction services through semantic analyses of correlations between environmental data. The framework combines the data into disaster semantic data to represent initial disaster states, such as the sizes of wildfire burn scars and fuel models. With the semantic data, the framework predicts computing loads using a convolutional neural network-based algorithm, partitions the simulation model into balanced sub-models, and allocates the sub-models to distributed computing nodes. As a result, the proposed framework reduces prediction times by up to 38.5% compared to previous schemes.
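The balancing step alone can be sketched with a classic longest-processing-time heuristic; this is an assumption for illustration, not the paper's semantic-data algorithm:

```python
import heapq

def balance_partitions(cell_loads, n_workers):
    """Greedy longest-processing-time partitioning: assign each simulation
    cell (heaviest first) to the currently least-loaded worker.
    cell_loads: predicted per-cell computing loads (e.g., from a CNN).
    Returns per-worker lists of cell indices.
    """
    heap = [(0.0, w) for w in range(n_workers)]   # (load, worker)
    heapq.heapify(heap)
    parts = [[] for _ in range(n_workers)]
    for cell, load in sorted(enumerate(cell_loads), key=lambda c: -c[1]):
        total, w = heapq.heappop(heap)
        parts[w].append(cell)
        heapq.heappush(heap, (total + load, w))
    return parts

print(balance_partitions([9, 7, 6, 5, 4, 3], 2))  # roughly equal load sums
```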

16 pages, 1245 KiB  
Article
Multi-Modal Learning-Based Equipment Fault Prediction in the Internet of Things
by Xin Nan, Bo Zhang, Changyou Liu, Zhenwen Gui and Xiaoyan Yin
Sensors 2022, 22(18), 6722; https://doi.org/10.3390/s22186722 - 6 Sep 2022
Cited by 4 | Viewed by 2115
Abstract
The timely detection of equipment failure can effectively prevent industrial safety accidents. Existing equipment fault diagnosis methods based on single-modal signals not only have low accuracy but also carry the inherent risk of being misled by signal noise. In this paper, we reveal the possibility of using multi-modal monitoring data to improve the accuracy of equipment fault prediction. The main challenge of multi-modal data fusion is how to effectively fuse multi-modal data to improve the accuracy of fault prediction. We propose a multi-modal learning framework for the fusion of low-quality and high-quality monitoring data. In essence, the low-quality monitoring data serve as compensation for the high-quality monitoring data. The low-quality monitoring data are first optimized, after which features are extracted. At the same time, the high-quality monitoring data are processed by a low-complexity convolutional neural network. Moreover, the robustness of the multi-modal learning algorithm is ensured by adding noise to the high-quality monitoring data. Finally, features of different dimensions are projected into a common space to obtain accurate fault sample classification. Experimental results and performance analysis confirm the superiority of the proposed algorithm. Compared with the traditional feature concatenation method, the prediction accuracy of the proposed multi-modal learning algorithm improves by up to 7.42%.
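The final projection step can be sketched in a few lines; the modality dimensions and linear projections below are hypothetical, and in the actual framework the projection weights would be learned:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical extracted features: a low-quality modality (e.g., vibration)
# and a high-quality modality (e.g., image) with different dimensions.
f_low  = rng.normal(size=32)
f_high = rng.normal(size=128)

# Linear projections into a shared 64-d space (weights would be learned).
W_low  = rng.normal(size=(64, 32))
W_high = rng.normal(size=(64, 128))
z_low, z_high = W_low @ f_low, W_high @ f_high

# Fuse by averaging in the common space; a classifier would consume z.
z = (z_low + z_high) / 2
print(z.shape)  # (64,)
```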

37 pages, 501 KiB  
Article
Combined Federated and Split Learning in Edge Computing for Ubiquitous Intelligence in Internet of Things: State-of-the-Art and Future Directions
by Qiang Duan, Shijing Hu, Ruijun Deng and Zhihui Lu
Sensors 2022, 22(16), 5983; https://doi.org/10.3390/s22165983 - 10 Aug 2022
Cited by 47 | Viewed by 8518
Abstract
Federated learning (FL) and split learning (SL) are two emerging collaborative learning methods that may greatly facilitate ubiquitous intelligence in the Internet of Things (IoT). Federated learning enables machine learning (ML) models locally trained using private data to be aggregated into a global model. Split learning allows different portions of an ML model to be collaboratively trained on different workers in a learning framework. Federated learning and split learning each have unique advantages and respective limitations and may complement each other toward ubiquitous intelligence in the IoT. Therefore, the combination of federated learning and split learning has recently become an active research area attracting extensive interest. In this article, we review the latest developments in federated learning and split learning and present a survey of state-of-the-art technologies for combining these two learning methods in an edge computing-based IoT environment. We also identify some open problems and discuss possible directions for future research in this area, with the hope of arousing the research community’s interest in this emerging field.
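Federated learning's aggregation step is compact enough to sketch; the following implements standard FedAvg (size-weighted averaging of client weights), with toy models in place of real IoT clients:

```python
import numpy as np

def fed_avg(client_weights, client_sizes):
    """FedAvg: aggregate client model weights, weighted by local data size.
    client_weights: one list of numpy arrays (layers) per client.
    """
    total = sum(client_sizes)
    coeffs = [s / total for s in client_sizes]
    return [
        sum(c * layers[i] for c, layers in zip(coeffs, client_weights))
        for i in range(len(client_weights[0]))
    ]

# Two clients, each holding a tiny two-layer model.
rng = np.random.default_rng(3)
c1 = [rng.normal(size=(4, 2)), rng.normal(size=2)]
c2 = [rng.normal(size=(4, 2)), rng.normal(size=2)]
global_model = fed_avg([c1, c2], client_sizes=[600, 400])
print([w.shape for w in global_model])  # [(4, 2), (2,)]
```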

22 pages, 4208 KiB  
Article
Application of Chaos Mutation Adaptive Sparrow Search Algorithm in Edge Data Compression
by Shaoming Qiu and Ao Li
Sensors 2022, 22(14), 5425; https://doi.org/10.3390/s22145425 - 20 Jul 2022
Cited by 9 | Viewed by 1933
Abstract
When compression technology is applied to the large amounts of data collected by an edge server, data classification accuracy is reduced and data loss is significant. This paper proposes a data compression algorithm based on the chaotic mutation adaptive sparrow search algorithm (CMASSA). By constructing a new fitness function, CMASSA optimizes the hyperparameters of a Convolutional Auto-Encoder Network (CAEN) at the cloud service center, aiming to obtain the optimal CAEN model. The model is then sent to the edge server to compress data at the lower levels of edge computing. The performance of CMASSA is tested on ten high-dimensional benchmark functions, and the results show that it outperforms the comparison algorithms. Subsequently, experiments are conducted on the Multi-class Weather Dataset (MWD) and compared against other published work. The experiments show that, under the premise of ensuring a certain compression ratio, the proposed algorithm not only achieves better accuracy in classification tasks than other algorithms but also maintains a high degree of data reconstruction.
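Chaotic mutation is commonly implemented with the logistic map; the sketch below shows that generic idea, not CMASSA's exact operator:

```python
import numpy as np

def chaos_mutate(position, lb, ub, x0=0.7, steps=1):
    """Chaotic mutation via the logistic map x' = 4x(1 - x); the chaotic
    sequence perturbs a candidate solution to help escape local optima.
    (A generic sketch of chaos mutation, not CMASSA's exact operator.)
    """
    x = x0
    for _ in range(steps):
        x = 4.0 * x * (1.0 - x)
    # Map the chaotic value into the search range and blend it in.
    chaotic_point = lb + x * (ub - lb)
    return 0.5 * (position + chaotic_point)

pos = np.array([1.2, -0.4, 3.3])
print(chaos_mutate(pos, lb=-5.0, ub=5.0, steps=10))
```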

22 pages, 1725 KiB  
Article
On a Blockchain-Based Security Scheme for Defense against Malicious Nodes in Vehicular Ad-Hoc Networks
by Guandong Liu, Na Fan, Chase Q. Wu and Xiaomin Zou
Sensors 2022, 22(14), 5361; https://doi.org/10.3390/s22145361 - 18 Jul 2022
Cited by 8 | Viewed by 2480
Abstract
Vehicular ad-hoc networks (VANETs) aim to provide a comfortable driving experience. Sharing messages in VANETs can help with traffic management, congestion mitigation, and driving safety. However, forged or false messages may undermine the efficiency of VANETs. In this paper, we propose a security scheme based on blockchain technology, in which two types of blockchain are constructed based on roadside units (RSUs) and Certificate Authorities (CAs), respectively. The proposed security scheme pursues multiple goals: identifying malicious nodes and detecting forged messages based on factors such as the reputation of sender nodes and the time and distance effectiveness of messages. In addition, an incentive mechanism is introduced on the RSU blockchain to encourage RSUs to adopt active behaviors. Extensive simulations show that the proposed scheme outperforms existing methods in detecting forged messages and identifying malicious nodes. Meanwhile, it provides privacy protection and improves the efficiency of vehicular networks.
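A message trust score combining these factors might be shaped as follows; the decay functions and scales are hypothetical, since the paper defines its own factors on the RSU/CA blockchains:

```python
import math

def message_trust(reputation, msg_age_s, distance_m,
                  age_scale=30.0, dist_scale=500.0):
    """Combine sender reputation with time and distance effectiveness.
    A hypothetical scoring rule: freshness and proximity decay
    exponentially and scale the sender's reputation in [0, 1].
    """
    freshness = math.exp(-msg_age_s / age_scale)
    proximity = math.exp(-distance_m / dist_scale)
    return reputation * freshness * proximity

score = message_trust(reputation=0.9, msg_age_s=10, distance_m=200)
print(f"trust score: {score:.3f}")  # accept if above a chosen threshold
```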

21 pages, 1689 KiB  
Article
LightFD: Real-Time Fault Diagnosis with Edge Intelligence for Power Transformers
by Xinhua Fu, Kejun Yang, Min Liu, Tianzhang Xing and Chase Wu
Sensors 2022, 22(14), 5296; https://doi.org/10.3390/s22145296 - 15 Jul 2022
Cited by 5 | Viewed by 2241
Abstract
Power fault monitoring based on acoustic waves has gained a great deal of attention in industry. Existing methods for fault diagnosis typically collect sound signals on site and transmit them to a back-end server for analysis, which may fail to provide a real-time response due to transmission packet loss and latency. Performing diagnosis on the edge avoids this, but the limited computing power of edge devices and existing feature extraction methods pose significant challenges. In this paper, we propose a fast Lightweight Fault Diagnosis method for power transformers, referred to as LightFD, which integrates several technical components. Firstly, before feature extraction, we design an asymmetric Hamming-cosine window function to reduce signal spectrum leakage and ensure data integrity. Secondly, we design a multidimensional spatio-temporal feature extraction method to extract acoustic features. Finally, we design a parallel dual-layer, dual-channel lightweight neural network to classify different fault types on edge devices with limited computing power. Extensive simulation and experimental results show that the diagnostic precision and recall of LightFD reach 94.64% and 95.33%, improvements of 4% and 1.6%, respectively, over the traditional SVM method.
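An asymmetric analysis window of the general kind described — a Hamming-shaped rise followed by a cosine taper — can be sketched as below; this is an illustrative assumption, as the paper defines its own window function:

```python
import numpy as np

def asym_hamming_cosine(n, split=0.5):
    """A hypothetical asymmetric window: a half-Hamming rise over the
    first part and a cosine taper over the rest. Illustrates the idea
    of an asymmetric window only; the paper defines its own form.
    """
    k = int(n * split)
    t1 = np.arange(k) / max(k - 1, 1)
    rise = 0.54 - 0.46 * np.cos(np.pi * t1)   # half-Hamming, 0.08 -> 1
    t2 = np.arange(n - k) / max(n - k - 1, 1)
    fall = np.cos(0.5 * np.pi * t2)           # cosine taper, 1 -> 0
    return np.concatenate([rise, fall])

w = asym_hamming_cosine(256, split=0.3)
# Apply before an FFT to trade spectral leakage against time resolution:
# spectrum = np.fft.rfft(signal * w)
```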

21 pages, 4825 KiB  
Article
A Multipopulation Dynamic Adaptive Coevolutionary Strategy for Large-Scale Complex Optimization Problems
by Yanlei Yin, Lihua Wang and Litong Zhang
Sensors 2022, 22(5), 1999; https://doi.org/10.3390/s22051999 - 4 Mar 2022
Cited by 1 | Viewed by 1615
Abstract
In this paper, a multipopulation dynamic adaptive coevolutionary strategy is proposed for large-scale optimization problems, which can dynamically and adaptively adjust the connections between population particles according to the characteristics of the optimization problem. Based on an analysis of the network evolution characteristics of collaborative search between particles, a dynamic adaptive evolutionary network (DAEN) model with multiple interconnection couplings is established. In the model, the swarm type is divided according to a judgment threshold on particle types, and the collaborative topology evolves adaptively during the optimization process according to the coupling strength between different particle types, which enhances the algorithm’s global and local search capabilities and optimization accuracy. On this basis, the evolution rules of the particle swarm dynamic cooperative search network are established, the search algorithm is designed, and adaptive coevolution between particles in different optimization environments is achieved. Simulation results reveal that the proposed algorithm exhibits high optimization accuracy and a fast convergence rate for high-dimensional and large-scale complex optimization problems.
