
Algorithm and Distributed Computing for the Internet of Things

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Sensor Networks".

Deadline for manuscript submissions: closed (10 April 2020) | Viewed by 53700

Special Issue Editors


Guest Editor
Department of Technologies of Computers and Communications, University of Extremadura, 10003 Cáceres, Spain
Interests: optimization and computational intelligence; machine learning; reconfigurable computing and FPGAs; wireless communications; bioinformatics

Guest Editor
Department of Informatics Engineering, University of Coimbra, 3030-290 Coimbra, Portugal
Interests: Internet of Things; wireless sensor networks; human in the loop cyberphysical systems

Guest Editor
Department of Multimedia Engineering, Osaka University, Osaka 565-0871, Japan
Interests: database systems; mobile computing; social computing; web mining; location privacy; time-series analysis

Special Issue Information

Dear Colleagues,

There is growing interest in applying technologies such as the Internet of Things (IoT), derived from the intense deployment of Wireless Sensor Networks (WSNs) over the last few decades, to relevant fields such as Industry 4.0, smart cities, and connected infrastructures. The use of IoT in these areas can lead to significant benefits, but it also generates a huge amount of data, which must often be processed to detect patterns or to optimize certain processes of interest. To this end, big data, machine learning, and deep learning approaches are usually considered, leading to more complex and powerful systems than those of previous WSN applications, which also brings new challenges to address. Moreover, for critical areas such as connected infrastructures, dependability, sustainability, and security issues should be addressed to ensure specific application requirements.

In this Special Issue, we seek original, unpublished high-quality articles, not currently under review by another conference or journal, clearly focused on theoretical and implementation solutions for IoT, including intelligent approaches (machine learning, big data, and deep learning), network levels (edge, fog, and cloud), embedded systems, sensing devices, nonfunctional requirements (dependability, security, and sustainability), deployment strategies, and management platforms.

Topics of Interest
Potential topics include, but are not limited to:

  • Smart manufacturing and Industry 4.0
  • Connected vehicles
  • Smart cities
  • Cognitive computing and deep learning
  • Big data processing
  • Edge computing and network intelligence, bringing the computation closer to the data
  • Dependability (real-time, reliability, availability, safety)
  • Sustainability (low-power operation, energy management, energy harvesting)
  • Security
  • Distributed and embedded computing for networked systems
  • Applications, deployment, and management
  • Human behavior and context-aware aspects
  • Prediction models
  • Optimization and metaheuristics

Prof. Juan A. Gomez-Pulido
Prof. Jorge Sá Silva
Prof. Takahiro Hara
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Published Papers (14 papers)


Editorial


5 pages, 201 KiB  
Editorial
Algorithm and Distributed Computing for the Internet of Things
by Juan A. Gómez-Pulido, Jorge Sá Silva and Takahiro Hara
Sensors 2020, 20(16), 4513; https://doi.org/10.3390/s20164513 - 12 Aug 2020
Viewed by 1591
Abstract
The ongoing generalization of the Internet of Things and its presence and application in multiple fields are generating a large amount of data that can be used to extract knowledge, among other purposes. In this context, algorithmic techniques and efficient computer systems provide an opportunity to successfully address efficient data processing and intelligent data analysis. As a result, multiple services can be improved, resources can be optimized, and real-world problems of interest can be solved. This Special Issue on Algorithm and Distributed Computing for the Internet of Things offers an opportunity to learn about recent advances in the application of modern hardware and software technologies to the Internet of Things.
(This article belongs to the Special Issue Algorithm and Distributed Computing for the Internet of Things)

Research


37 pages, 11251 KiB  
Article
Location Privacy Protection in Distributed IoT Environments Based on Dynamic Sensor Node Clustering
by Konstantinos Dimitriou and Ioanna Roussaki
Sensors 2019, 19(13), 3022; https://doi.org/10.3390/s19133022 - 09 Jul 2019
Cited by 5 | Viewed by 3849
Abstract
One of the most significant challenges in Internet of Things (IoT) environments is the protection of privacy. Failing to guarantee the privacy of sensitive data collected and shared over IoT infrastructures is a critical barrier that delays the wide penetration of IoT technologies in several user-centric application domains. Location information is the most common dynamic information monitored and lies among the most sensitive from a privacy perspective. This article introduces a novel mechanism that aims to protect the privacy of location information across Data Centric Sensor Networks (DCSNs) that monitor the location of mobile objects in IoT systems. The proposed data dissemination protocols enhance the security of DCSNs, rendering them less vulnerable to intruders interested in obtaining the monitored location information. In this respect, a dynamic clustering algorithm is proposed that clusters the DCSN nodes not only based on the network topology, but also considering the current location of the monitored objects. The proposed techniques do not focus on the prevention of attacks, but on enhancing the privacy of sensitive location information once IoT nodes have been compromised. They have been extensively assessed via a series of experiments conducted over the IoT infrastructure of FIT IoT-LAB, and the evaluation results indicate that the proposed dynamic clustering algorithm significantly outperforms existing solutions for enhancing the privacy of location information in IoT.
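The core idea of location-aware clustering can be illustrated with a minimal sketch (not the paper's algorithm): each sensor node joins the cluster of the monitored object it is currently closest to, and the assignment is recomputed whenever the objects move. All names and positions below are invented for illustration.

```python
import math

def cluster_nodes(nodes, objects):
    """Assign each sensor node to the cluster of its nearest monitored object.

    nodes, objects: dicts mapping an id to an (x, y) position.
    Returns {object_id: [node_id, ...]}; re-running it after each object
    movement gives the "dynamic" clustering behavior.
    """
    clusters = {oid: [] for oid in objects}
    for nid, (nx, ny) in nodes.items():
        nearest = min(objects, key=lambda oid: math.dist((nx, ny), objects[oid]))
        clusters[nearest].append(nid)
    return clusters

nodes = {"s1": (0, 0), "s2": (1, 0), "s3": (9, 9), "s4": (10, 10)}
objects = {"objA": (0.5, 0.2), "objB": (9.5, 9.5)}
print(cluster_nodes(nodes, objects))
# → {'objA': ['s1', 's2'], 'objB': ['s3', 's4']}
```

The paper's mechanism additionally weighs network topology when forming clusters; this sketch captures only the object-location criterion.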

47 pages, 1027 KiB  
Article
Mapping Neural Networks to FPGA-Based IoT Devices for Ultra-Low Latency Processing
by Maciej Wielgosz and Michał Karwatowski
Sensors 2019, 19(13), 2981; https://doi.org/10.3390/s19132981 - 05 Jul 2019
Cited by 26 | Viewed by 4678
Abstract
In the Internet of Things (IoT) infrastructure, fast access to knowledge becomes critical. In some application domains, such as robotics, autonomous driving, predictive maintenance, and anomaly detection, the response time of the system is more critical for ensuring Quality of Service than the quality of the answer. In this paper, we propose a methodology, a set of predefined steps to be taken to map models to hardware, especially field-programmable gate arrays (FPGAs), with the main focus on latency reduction. A multi-objective covariance matrix adaptation evolution strategy (MO-CMA-ES) was employed along with custom scores for sparsity, bit-width of the representation, and quality of the model. Furthermore, we created a framework that enables mapping of neural models to FPGAs. The proposed solution is validated using three case studies and a Xilinx Zynq UltraScale+ MPSoC XCZU15EG as the platform. The results show the compression ratio for quantization and pruning in different scenarios, with and without retraining procedures. Using our publicly available framework, we achieved a latency of 210 ns for a single processing step for a model composed of two long short-term memory (LSTM) layers and a single dense layer.
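The two compression steps mentioned above, pruning and quantization, can be sketched in a few lines (a generic illustration, not the paper's MO-CMA-ES-driven pipeline): zero out the smallest-magnitude weights to reach a target sparsity, then snap the survivors to a signed fixed-point grid of a given bit-width.

```python
def prune_and_quantize(weights, sparsity=0.5, bits=8):
    """Magnitude-prune a weight list to the given sparsity, then quantize
    the survivors with symmetric linear quantization to `bits` bits."""
    n_prune = int(len(weights) * sparsity)
    # indices of the smallest-magnitude weights are zeroed out
    order = sorted(range(len(weights)), key=lambda i: abs(weights[i]))
    pruned = list(weights)
    for i in order[:n_prune]:
        pruned[i] = 0.0
    # scale to the signed integer range, round, and rescale
    max_abs = max(abs(w) for w in pruned) or 1.0
    qmax = 2 ** (bits - 1) - 1
    scale = max_abs / qmax
    return [round(w / scale) * scale for w in pruned]

w = [0.9, -0.05, 0.4, 0.01, -0.8, 0.02]
print(prune_and_quantize(w, sparsity=0.5, bits=4))
```

Fewer bits and more zeros shrink both the FPGA memory footprint and the arithmetic width, which is where the latency gain comes from.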

19 pages, 2632 KiB  
Article
DNN-MVL: DNN-Multi-View-Learning-Based Recover Block Missing Data in a Dam Safety Monitoring System
by Yingchi Mao, Jianhua Zhang, Hai Qi and Longbao Wang
Sensors 2019, 19(13), 2895; https://doi.org/10.3390/s19132895 - 30 Jun 2019
Cited by 35 | Viewed by 2961
Abstract
Many sensor nodes have been widely deployed in the physical world to gather various environmental information, such as water quality, earthquakes, and the safety of large dams. Due to limitations in battery power, memory, and computational capacity, missing data can occur at arbitrary sensor nodes and time slots. In extreme situations, some sensors may lose readings at consecutive time slots. Such successive missing data harms the accuracy of real-time monitoring as well as the performance of data analysis in wireless sensor networks. Unfortunately, existing solutions for filling missing data cannot well uncover the complex non-linear spatial and temporal relations. To address these problems, a DNN (Deep Neural Network) multi-view learning method (DNN-MVL) is proposed to fill successive missing readings. DNN-MVL mainly considers five views: global spatial view, global temporal view, local spatial view, local temporal view, and semantic view. These five views are modeled with inverse-distance-weighted interpolation, bidirectional simple exponential smoothing, user-based collaborative filtering, mass-diffusion-based collaborative filtering with a bipartite graph, and structural embedding, respectively. The results of the five views are aggregated to a final value with a multi-view learning algorithm based on a DNN model to obtain the final filled readings. Experiments on large-scale real dam deformation data demonstrate that DNN-MVL has a mean absolute error of about 6.5%, mean relative error of 21.4%, and mean square error of 8.17% for dam deformation data, outperforming all of the baseline methods.
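The first of the five views, inverse-distance-weighted interpolation, is simple enough to sketch: a missing reading is estimated from the reporting neighbors, each weighted by the inverse of its distance raised to a power. The sensor ids and positions below are hypothetical.

```python
import math

def idw_fill(readings, positions, missing_id, power=2):
    """Estimate a missing sensor reading with inverse-distance weighting.

    readings:  {sensor_id: value} for sensors that did report.
    positions: {sensor_id: (x, y)} for all sensors.
    """
    mx, my = positions[missing_id]
    num = den = 0.0
    for sid, value in readings.items():
        d = math.dist((mx, my), positions[sid])
        if d == 0:
            return value  # co-located sensor: take its reading directly
        w = 1.0 / d ** power
        num += w * value
        den += w
    return num / den

positions = {"a": (0, 0), "b": (2, 0), "m": (1, 0)}
readings = {"a": 10.0, "b": 20.0}
print(idw_fill(readings, positions, "m"))  # equidistant neighbors → 15.0
```

DNN-MVL then feeds estimates like this one, together with the four other views, into a DNN that learns how to combine them.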

16 pages, 1450 KiB  
Article
Pseudo-Random Encryption for Security Data Transmission in Wireless Sensor Networks
by Liang Liu, Wen Chen, Tao Li and Yuling Liu
Sensors 2019, 19(11), 2452; https://doi.org/10.3390/s19112452 - 29 May 2019
Cited by 8 | Viewed by 2489
Abstract
The security of wireless sensor networks (WSNs) has become a great challenge due to the transmission of sensor data through an open and wireless network with limited resources. In this paper, we discuss a lightweight security scheme to protect the confidentiality of data transmission between sensors and an ally fusion center (AFC) over insecure links. For the typical security problem of a WSN's binary hypothesis testing of a target's state, sensors were divided into flipping and non-flipping groups according to the outputs of a pseudo-random function held by the sensors and the AFC. Then, in order to prevent an enemy fusion center (EFC) from eavesdropping, the binary outputs from the flipping group were intentionally flipped to hinder the EFC's data fusion. Accordingly, the AFC performed inverse flipping to recover the flipped data before data fusion. We extended the scheme to a more common scenario with multiple scales of sensor quantification and candidate states. The underlying idea was that the sensor measurements were randomly mapped to other quantification scales using a mapping matrix, which ensured that as long as the EFC was not aware of the matrix, it could not extract any useful information from the captured data, while the AFC could appropriately perform data fusion based on the inverse mapping of the sensor outputs.
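The binary flipping scheme can be sketched compactly. Here a keyed hash stands in for the shared pseudo-random function (the paper does not specify one; SHA-256 and the key name are illustrative assumptions): a sensor is in the flipping group when the PRF output is odd, and applying the flip twice restores the data.

```python
import hashlib

def flip_group(sensor_id, round_key):
    """Shared pseudo-random function: decides whether a sensor belongs to
    the flipping group this round. Sensors and the AFC both hold round_key;
    the eavesdropping EFC does not, so it cannot tell which bits are flipped."""
    digest = hashlib.sha256(f"{round_key}:{sensor_id}".encode()).digest()
    return digest[0] & 1 == 1

def transmit(bits, round_key):
    """Sensors in the flipping group invert their binary decision."""
    return {sid: (1 - b if flip_group(sid, round_key) else b)
            for sid, b in bits.items()}

def recover(received, round_key):
    """AFC inverse-flips with the same PRF before data fusion."""
    return transmit(received, round_key)  # flipping twice restores each bit

original = {"s1": 1, "s2": 0, "s3": 1, "s4": 1}
sent = transmit(original, round_key="k42")
print(recover(sent, round_key="k42") == original)  # → True
```

The multi-scale extension replaces the bit flip with a secret permutation (mapping matrix) over quantification levels, but the recover-by-inverse structure is the same.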

14 pages, 539 KiB  
Article
Coverage-Balancing User Selection in Mobile Crowd Sensing with Budget Constraint
by Yanan Wang, Guodong Sun and Xingjian Ding
Sensors 2019, 19(10), 2371; https://doi.org/10.3390/s19102371 - 23 May 2019
Cited by 12 | Viewed by 2932
Abstract
Mobile crowd sensing (MCS) is a new computing paradigm for the Internet of Things, and it is widely accepted as a powerful means to achieve urban-scale sensing and data collection. In an MCS campaign, smartphone users can sense their surrounding environments with their on-phone sensors and return the sensing data to the MCS organizer. In this paper, we focus on the coverage-balancing user selection (CBUS) problem with a budget constraint. Solving the CBUS problem aims to select a proper subset of users such that their sensing coverage is as large and balanced as possible, yet without violating the budget specified by the MCS campaign. We first propose a novel coverage-balance-based sensing utility model, which effectively captures the MCS requester's joint requirement for coverage area and coverage balance. We then formally define the CBUS problem under the proposed sensing utility model. Because of the NP-hardness of the CBUS problem, we design a heuristic-based algorithm, called MIA, which tactfully employs the maximum-independent-set model to determine a preliminary subset of users from all the available users and then adjusts this subset to improve budget utilization. MIA also includes a fast approach to calculating the area of a coverage union with arbitrarily complicated boundaries, which is applicable to any MCS scenario set up with coverage-area-based sensing utility. Extensive numerical experiments show the efficacy of our designs in both coverage balance and total coverage area.
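Budgeted user selection of this kind is commonly approximated greedily. The sketch below is a generic cost-effectiveness greedy, not the paper's MIA algorithm: it repeatedly picks the affordable user covering the most still-uncovered cells per unit cost. Coverage is abstracted to sets of grid-cell ids, an assumption made for illustration.

```python
def greedy_select(users, budget):
    """Pick users maximizing newly covered cells per unit cost until the
    budget runs out. users: {name: (cost, set_of_covered_cells)}."""
    chosen, covered, spent = [], set(), 0
    remaining = dict(users)
    while remaining:
        def gain(name):
            cost, cells = remaining[name]
            return len(cells - covered) / cost
        best = max(remaining, key=gain)
        cost, cells = remaining.pop(best)
        if spent + cost > budget or not (cells - covered):
            continue  # skip unaffordable or redundant users, try the rest
        chosen.append(best)
        covered |= cells
        spent += cost
    return chosen, covered, spent

users = {
    "u1": (3, {1, 2, 3}),
    "u2": (1, {3, 4}),
    "u3": (2, {5, 6, 7, 8}),
    "u4": (4, {1, 2, 3, 4, 5}),
}
print(greedy_select(users, budget=6))
```

MIA differs in that it seeds the subset from a maximum independent set (to spread users out for balance) before adjusting it against the budget.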

22 pages, 2902 KiB  
Article
Connected Vehicle as a Mobile Sensor for Real Time Queue Length at Signalized Intersections
by Kai Gao, Farong Han, Pingping Dong, Naixue Xiong and Ronghua Du
Sensors 2019, 19(9), 2059; https://doi.org/10.3390/s19092059 - 02 May 2019
Cited by 101 | Viewed by 5257
Abstract
With the development of intelligent transportation systems (ITS) and vehicle-to-X (V2X) communication, connected vehicles are capable of sensing a great deal of useful traffic information, such as queue length at intersections. Aiming to solve the problems of existing models' complexity and information redundancy, this paper proposes a queue length sensing model based on V2X technology, which consists of two sub-models based on shockwave sensing and back-propagation (BP) neural network sensing. First, the model obtains state information of the connected vehicles and analyzes the formation process of the queue, and then it calculates the velocity of the shockwave to predict the queue length of the subsequent unconnected vehicles. Then, the neural network is trained with historical connected-vehicle data, and a sub-model based on the BP neural network is established to predict the real-time queue length. Finally, the final queue length at the intersection is determined by combining the sub-models with variable weights. Simulation results show that the sensing accuracy of the combined model is proportional to the penetration rate of connected vehicles, and sensing of queue length can be achieved even in low-penetration-rate environments. In mixed traffic environments of connected and unconnected vehicles, the queue length sensing model proposed in this paper outperforms the probability distribution (PD) model when the penetration rate is low, and it achieves almost equivalent performance at higher penetration rates while not requiring the penetration rate as an input. The proposed sensing model is thus more applicable to mixed traffic scenarios with much looser conditions.
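The shockwave sub-model rests on a standard traffic-flow identity: the interface between two traffic states moves at w = (q2 − q1)/(k2 − k1), the ratio of the flow difference to the density difference. A minimal sketch under textbook assumptions (not the paper's calibrated model; all numbers are illustrative):

```python
def shockwave_speed(q_up, k_up, q_down, k_down):
    """Speed of the interface between two traffic states (LWR model):
    w = (q2 - q1) / (k2 - k1); flows in veh/h, densities in veh/km."""
    return (q_down - q_up) / (k_down - k_up)

def queue_length(arrival_flow, arrival_density, jam_density, red_seconds):
    """Length (m) of the queue built during a red phase: the stopping
    shockwave travels backward from the stop line for red_seconds."""
    w = shockwave_speed(arrival_flow, arrival_density, 0.0, jam_density)  # km/h
    return abs(w) * 1000 / 3600 * red_seconds  # km/h → m/s, times duration

# 900 veh/h arriving at 15 veh/km, jam density 150 veh/km, 30 s of red:
print(round(queue_length(900, 15, 150, 30), 1))  # → 55.6 (meters)
```

In the paper, the states on each side of the interface are estimated from connected-vehicle trajectories rather than assumed, and a BP neural network corrects the estimate.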

15 pages, 383 KiB  
Article
Optimizing Movement for Maximizing Lifetime of Mobile Sensors for Covering Targets on a Line
by Peihuang Huang, Wenxing Zhu and Longkun Guo
Sensors 2019, 19(2), 273; https://doi.org/10.3390/s19020273 - 11 Jan 2019
Cited by 5 | Viewed by 2535
Abstract
Given a set of sensors distributed on the plane and a set of Points of Interest (POIs) on a line segment, a primary task of the mobile wireless sensor network is to schedule the sensors to cover the POIs, such that each POI is monitored by at least one sensor. To balance energy consumption, we study the min-max line barrier target coverage (LBTC) problem, which aims to minimize the maximum movement of the sensors from their original positions to final positions at which the coverage is achieved. We first proved that when the radii of the sensors are non-uniform integers, even 1-dimensional LBTC (1D-LBTC), a special case of LBTC in which the sensors are distributed on the line segment instead of the plane, is NP-hard. The hardness result is interesting, since the continuous version of LBTC, covering a given line segment instead of the POIs, is known to be polynomially solvable. We then present an exact algorithm for LBTC with uniform radii and sensors distributed on the plane, via solving the decision version of LBTC. We argue that our algorithm runs in O(n² log n) time and produces an optimal solution to LBTC. The time complexity compares favorably to the state-of-the-art runtime of O(n³ log n) for the continuous version, which aims to cover a line barrier instead of the targets. Finally, we carry out numerical experiments to evaluate the practical performance of the algorithms, which demonstrate a practical runtime gain compared with an optimal algorithm based on integer linear programming.
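The decide-then-search structure can be illustrated in one dimension. This is a simplified heuristic sketch, not the paper's exact algorithm: the feasibility check assigns sorted sensors to sorted targets in order, one sensor per target, and a binary search finds the smallest movement bound d that passes.

```python
def feasible(sensors, targets, r, d):
    """Greedy check: can sensors (sorted positions, uniform radius r) cover
    all targets (sorted), each sensor moving at most d and covering the
    target assigned to it? A sensor at s reaches target t iff |s - t| <= d + r."""
    i, n = 0, len(sensors)
    for t in sorted(targets):
        # skip sensors too far left to reach t even with full movement
        while i < n and sensors[i] + d + r < t:
            i += 1
        if i == n or sensors[i] - d - r > t:
            return False
        i += 1  # sensor i is dispatched to cover t
    return True

def min_max_movement(sensors, targets, r, step=1e-3):
    """Binary search the smallest max movement d for which the check passes."""
    sensors = sorted(sensors)
    lo, hi = 0.0, max(abs(s - t) for s in sensors for t in targets) + r
    while hi - lo > step:
        mid = (lo + hi) / 2
        if feasible(sensors, targets, r, mid):
            hi = mid
        else:
            lo = mid
    return hi

# two sensors at 0 and 10, POIs at 4 and 6, radius 1: each moves 3 units
print(round(min_max_movement([0.0, 10.0], [4.0, 6.0], r=1.0), 2))  # → 3.0
```

The paper's algorithm solves the decision version exactly (letting one sensor cover several POIs and searching over a discrete set of candidate distances), which is how it reaches the O(n² log n) bound.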

16 pages, 1642 KiB  
Article
An Optimized Probabilistic Delay Tolerant Network (DTN) Routing Protocol Based on Scheduling Mechanism for Internet of Things (IoT)
by Yuxin Mao, Chenqian Zhou, Yun Ling and Jaime Lloret
Sensors 2019, 19(2), 243; https://doi.org/10.3390/s19020243 - 10 Jan 2019
Cited by 56 | Viewed by 6051
Abstract
Many applications of the Internet of Things (IoT) have been implemented over unreliable wireless or mobile networks such as delay-tolerant networks (DTNs). Therefore, achieving efficient data transmission in DTNs is an important issue for IoT applications. In order to improve the delivery rate and optimize the delivery delay with low overhead in DTNs for IoT applications, we propose a new routing protocol, called Scheduling-PROPHET, which extends the Probabilistic Routing Protocol using History of Encounters and Transitivity (PROPHET). In this protocol, we calculate the delivery predictability according to the encounter frequency among nodes. Two scheduling mechanisms are proposed to extend the traditional PROPHET protocol and improve performance in both storage and transmission in DTNs. To evaluate the proposed routing protocol, we perform simulations and compare it with other routing protocols in the Opportunistic Network Environment (ONE) simulator. The results demonstrate that the proposed Scheduling-PROPHET achieves better performance in several key aspects compared with existing protocols.
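The delivery-predictability bookkeeping that underlies PROPHET consists of three well-known update rules, sketched below with the constants typically quoted for the protocol (the paper's scheduling extensions sit on top of these and are not shown):

```python
P_INIT, GAMMA, BETA = 0.75, 0.98, 0.25  # typical PROPHET constants

def on_encounter(P, a, b):
    """Direct update when nodes a and b meet: predictability rises toward 1."""
    old = P.get((a, b), 0.0)
    P[(a, b)] = old + (1 - old) * P_INIT

def age(P, a, b, k):
    """Decay predictability over k elapsed time units without contact."""
    P[(a, b)] = P.get((a, b), 0.0) * GAMMA ** k

def transitive(P, a, b, c):
    """If a meets b often and b meets c often, a can likely reach c via b."""
    old = P.get((a, c), 0.0)
    P[(a, c)] = old + (1 - old) * P.get((a, b), 0.0) * P.get((b, c), 0.0) * BETA

P = {}
on_encounter(P, "a", "b")
on_encounter(P, "b", "c")
transitive(P, "a", "b", "c")
print(P[("a", "c")])  # 0.75 * 0.75 * 0.25 = 0.140625
```

A node forwards a message to an encountered peer when the peer's predictability toward the destination exceeds its own; Scheduling-PROPHET additionally orders which messages to store and transmit first.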

19 pages, 4976 KiB  
Article
Wireless Charging Deployment in Sensor Networks
by Wei-Yu Lai and Tien-Ruey Hsiang
Sensors 2019, 19(1), 201; https://doi.org/10.3390/s19010201 - 08 Jan 2019
Cited by 9 | Viewed by 4587
Abstract
Charging schemes utilizing mobile wireless chargers can be applied to prolong the lifespan of a wireless sensor network. In considering charging schemes with mobile chargers, most current studies focus on charging each sensor from a single position, then optimizing the moving paths of the chargers. However, in reality, a wireless charger may charge the same sensor from several positions in its path. In this paper we consider this fact and seek to minimize both the number of charging locations and the total required charging time. Two charging plans are developed. The first plan considers the charging time required by each sensor and greedily selects the charging service positions. The second one is a two-phase plan, where the number of charging positions is first minimized, then minimum charging times are assigned to every position according to the charging requirements of the nearby sensors. This paper also corrects a problem neglected by some studies in minimizing the number of charging service positions and further provides a corresponding solution. Empirical studies show that compared with other minimal clique partition (MCP)-based methods, the proposed charging plan may save up to 60% in terms of both the number of charging positions and the total required charging time.
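The greedy flavor of the first plan can be sketched as a set-cover-style loop (a generic illustration, not the paper's exact plan): repeatedly pick the candidate stop that reaches the most still-uncharged sensors, and give it a dwell time proportional to how many sensors it serves. The positions, range, and per-sensor time below are made-up parameters.

```python
import math

def choose_positions(candidates, sensors, charge_range, time_per_sensor=10):
    """Greedy sketch: pick charging stops covering the most uncharged
    sensors; assign each stop a dwell time for the sensors it serves."""
    uncovered = set(sensors)
    plan = []
    while uncovered:
        best = max(candidates, key=lambda p: sum(
            1 for s in uncovered if math.dist(p, s) <= charge_range))
        served = {s for s in uncovered if math.dist(best, s) <= charge_range}
        if not served:
            break  # remaining sensors unreachable from any candidate stop
        plan.append((best, len(served) * time_per_sensor))
        uncovered -= served
    return plan

sensors = [(0, 0), (1, 0), (0, 1), (5, 5)]
candidates = [(0.5, 0.5), (5, 5)]
print(choose_positions(candidates, sensors, charge_range=1.0))
# → [((0.5, 0.5), 30), ((5, 5), 10)]
```

The paper's two-phase plan instead minimizes the number of positions first and only then assigns per-position charging times.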

26 pages, 735 KiB  
Article
Design and Analysis of a General Relay-Node Selection Mechanism on Intersection in Vehicular Networks
by Dun Cao, Bin Zheng, Jin Wang, Baofeng Ji and Chunhai Feng
Sensors 2018, 18(12), 4251; https://doi.org/10.3390/s18124251 - 03 Dec 2018
Cited by 6 | Viewed by 2413
Abstract
Employing a relay node can extend the coverage of a message in vehicular networks (VNET). In addition, prior information about the road structure, which determines the structure of the VNET, can benefit relay-node selection. However, non-line-of-sight (NLOS) communication in intersection scenarios and the diverse shapes of intersections hamper the design of a general relay-node selection scheme at intersections. To resolve this problem, in this paper we build a model that describes a general intersection and propose a general relay-node selection method for intersections. Additionally, based on our mathematical description of the general intersection, performance models for general relay-node selection at intersections are explored for the first time in terms of message dissemination speed and Packet Delivery Ratio (PDR). The simulation results validate these models and indicate the improvement of our proposal, especially in heavy traffic: at a high density of 3.0025 vehicles/m, it achieves a gain of up to 23.35% in message dissemination speed over the compared methods and a PDR of over 90%.
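A common baseline for relay selection, which the paper generalizes to arbitrary intersection geometries, is the farthest-in-range rule: among vehicles that can still hear the sender, pick the one that extends coverage the most. A minimal sketch with invented positions and radio range:

```python
import math

def pick_relay(sender, vehicles, radio_range):
    """Farthest-in-range heuristic: among vehicles within radio range of
    the sender, pick the one farthest away to maximize coverage extension."""
    in_range = [v for v in vehicles
                if 0 < math.dist(sender, v["pos"]) <= radio_range]
    if not in_range:
        return None
    return max(in_range, key=lambda v: math.dist(sender, v["pos"]))

vehicles = [
    {"id": "v1", "pos": (50, 0)},
    {"id": "v2", "pos": (180, 0)},
    {"id": "v3", "pos": (350, 0)},  # beyond radio range
]
print(pick_relay((0, 0), vehicles, radio_range=250))
```

At an intersection the rule must be applied per road arm and account for NLOS blockage by corner buildings, which is what the paper's general model captures.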

23 pages, 1677 KiB  
Article
A oneM2M-Based Query Engine for Internet of Things (IoT) Data Streams
by Putu Wiramaswara Widya, Yoga Yustiawan and Joonho Kwon
Sensors 2018, 18(10), 3253; https://doi.org/10.3390/s18103253 - 27 Sep 2018
Cited by 8 | Viewed by 4334
Abstract
The new oneM2M (one machine-to-machine) standard aims to standardize the architecture and protocols of Internet of Things (IoT) middleware for better interoperability. Although the standard seems promising, it lacks several features for efficiently searching and retrieving IoT data that satisfy users' intentions. In this paper, we design and develop a oneM2M-based query engine, called OMQ, that provides real-time processing over IoT data streams. For this purpose, we define a query language that enables users to retrieve IoT data from data sources using JavaScript Object Notation (JSON). We also propose efficient query processing algorithms that utilize the oneM2M architecture, which consists of two node types: (1) the IoT node and (2) the infrastructure node. IoT nodes of OMQ, which are mainly sensor devices, execute the aggregate, transform, and filter operators of user queries, whereas the infrastructure node handles the join operator. Since the query processing algorithms are implemented as hybrid infrastructure-edge processing, user queries can be executed efficiently in each IoT node rather than only in the infrastructure node. Thus, our OMQ system reduces query processing time and network bandwidth. We conducted a comprehensive evaluation of OMQ using a real and a synthetic data set. Experimental results demonstrate the feasibility and efficiency of the OMQ system for executing queries and transferring data from each IoT node.
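The filter/transform/aggregate operators that an IoT node would run locally can be illustrated with a toy pipeline over JSON-encoded readings (this is not OMQ's actual query language; record fields and the API shape are assumptions for illustration):

```python
import json

def run_query(stream, where, select, agg=None):
    """Tiny sketch of filter → project → aggregate over a JSON stream,
    the kind of operators an IoT node could apply before forwarding data."""
    rows = [json.loads(line) for line in stream]
    rows = [r for r in rows if where(r)]       # filter operator
    values = [r[select] for r in rows]          # transform/projection
    return agg(values) if agg else values       # optional aggregate

stream = [
    '{"sensor": "t1", "temp": 21.5}',
    '{"sensor": "t1", "temp": 35.0}',
    '{"sensor": "t2", "temp": 19.0}',
]
# average temperature of readings from sensor t1:
avg = run_query(stream, where=lambda r: r["sensor"] == "t1",
                select="temp", agg=lambda v: sum(v) / len(v))
print(avg)  # → 28.25
```

Running the reduction on the IoT node means only the single aggregate, rather than every raw reading, crosses the network to the infrastructure node, which is the bandwidth saving the abstract describes.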

27 pages, 3220 KiB  
Article
Distributed Egocentric Betweenness Measure as a Vehicle Selection Mechanism in VANETs: A Performance Evaluation Study
by Ademar T. Akabane, Roger Immich, Richard W. Pazzi, Edmundo R. M. Madeira and Leandro A. Villas
Sensors 2018, 18(8), 2731; https://doi.org/10.3390/s18082731 - 20 Aug 2018
Cited by 13 | Viewed by 4821
Abstract
In the traditional approach to centrality measures, also known as sociocentric, a network node usually requires global knowledge of the network topology in order to evaluate its importance. This makes such an approach difficult to deploy in large-scale or highly dynamic networks. For this reason, another concept known as egocentric has been introduced, which analyzes the social environment surrounding individuals (through the ego network). In other words, this type of network has the benefit of using only locally available knowledge of the topology to evaluate the importance of a node. In this approach, each network node attains sub-optimal accuracy; however, such accuracy may be enough for a given purpose, for instance, a vehicle selection mechanism (VSM) applied to find, in a distributed fashion, the best-ranked vehicles in the network after each topology change. To confirm that egocentric measures can be a viable alternative for implementing a VSM, a case study was carried out to validate the effectiveness and viability of the mechanism for a distributed information management system. To this end, we used the egocentric betweenness measure as a mechanism for selecting the most appropriate vehicle to carry out the tasks of information aggregation and knowledge generation. Based on the analysis of the performance results, it was confirmed that a VSM is extremely useful for VANET applications, and two major contributions of this mechanism can be highlighted: (i) reduction of bandwidth consumption; and (ii) overcoming the issue of highly dynamic topologies. Another contribution of this work is a thorough study implementing and evaluating how well egocentric betweenness performs in comparison to the sociocentric measure in VANETs. Evaluation results show that the egocentric betweenness measure in highly dynamic topologies exhibits a high degree of similarity to the sociocentric approach.
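Egocentric betweenness needs only one-hop knowledge, which is why it suits a distributed VANET setting. A sketch of the standard Everett-Borgatti formulation (the paper applies this measure; the graph below is invented): for each pair of the ego's neighbors with no direct tie, the ego earns credit inversely proportional to how many nodes in the ego network connect that pair.

```python
def ego_betweenness(ego, adj):
    """Egocentric betweenness of `ego`: for each non-adjacent pair of its
    neighbors, add 1 / (number of their common neighbors inside the ego
    network, the ego itself included). adj: {node: set_of_neighbors};
    only the ego's one-hop neighborhood is consulted."""
    alters = sorted(adj[ego])
    score = 0.0
    for i, a in enumerate(alters):
        for b in alters[i + 1:]:
            if b in adj[a]:
                continue  # directly connected: no brokerage for the ego
            common = 1 + sum(1 for c in alters
                             if c not in (a, b) and c in adj[a] and c in adj[b])
            score += 1.0 / common
    return score

# a star: the center brokers every pair of leaves
adj = {"e": {"a", "b", "c"}, "a": {"e"}, "b": {"e"}, "c": {"e"}}
print(ego_betweenness("e", adj))  # 3 non-adjacent pairs, each 1/1 → 3.0
```

In the VSM, each vehicle computes this score from beacons it hears directly and the highest-ranked vehicle takes on the aggregation role, with no global topology exchange.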

Other


20 pages, 7556 KiB  
Concept Paper
Natural Computing Applied to the Underground System: A Synergistic Approach for Smart Cities
by Clemencio Morales Lucas, Luis Fernando De Mingo López and Nuria Gómez Blas
Sensors 2018, 18(12), 4094; https://doi.org/10.3390/s18124094 - 22 Nov 2018
Cited by 7 | Viewed by 3457
Abstract
The management and proper use of Urban Public Transport Systems (UPTS) constitutes a critical field that has not been investigated in accordance with its relevance and urgency within the Smart Cities realm. Swarm Intelligence is a very promising paradigm for dealing with such complex and dynamic systems: it exhibits robust, scalable, and self-organized behavior suited to dynamic and fast-changing systems. The intelligence of cities can be modelled as a swarm of digital telecommunication networks (the nerves), ubiquitously embedded intelligence, sensors and tags, and software. In this paper, a new approach based on the Natural Computing paradigm and Collective Computation is presented, more concretely taking advantage of an Ant Colony Optimization algorithm variation and Fireworks algorithms to build a system that makes complete control of the UPTS a tangible reality.
