
Table of Contents

Information, Volume 8, Issue 2 (June 2017)

  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the tables of contents of newly released issues.
  • PDF is the official format for papers published in both HTML and PDF forms. To view the papers in PDF format, click on the "PDF Full-text" link and use the free Adobe Reader to open them.
Displaying articles 1-33

Research

Open Access Article Continuous Leakage Resilient Lossy Trapdoor Functions
Information 2017, 8(2), 38; doi:10.3390/info8020038
Received: 5 February 2017 / Revised: 14 March 2017 / Accepted: 17 March 2017 / Published: 23 March 2017
PDF Full-text (798 KB) | HTML Full-text | XML Full-text
Abstract
Lossy trapdoor functions (LTFs) were first introduced by Peikert and Waters (STOC’08). Since their introduction, lossy trapdoor functions have found numerous applications. They can be used as tools to construct important cryptographic primitives such as injective one-way trapdoor functions, chosen-ciphertext-secure public key encryption, deterministic encryption, etc. In this paper, we focus on lossy trapdoor functions in the presence of continuous leakage. We introduce the new notion of updatable lossy trapdoor functions (ULTFs) and give their formal definition and security properties. Based on these, we extend the security model to LTFs against continuous leakage when the evaluation algorithm is leakage resilient. Under the standard DDH assumption and DCR assumption, respectively, we show two explicit lossy trapdoor functions against continuous leakage in the standard model. In these schemes, using a matrix-kernel technique, the trapdoor can be refreshed at regular intervals and the adversary can learn unbounded leakage information on the trapdoor over the whole lifetime of the system. We also compare the performance of the proposed schemes with the known existing continuous leakage resilient lossy trapdoor functions. Full article
(This article belongs to the Special Issue Secure Data Storage and Sharing Techniques in Cloud Computing)
Open Access Article Aesthetic Local Search of Wind Farm Layouts
Information 2017, 8(2), 39; doi:10.3390/info8020039
Received: 12 January 2017 / Revised: 21 February 2017 / Accepted: 21 March 2017 / Published: 24 March 2017
PDF Full-text (365 KB) | HTML Full-text | XML Full-text
Abstract
The visual impact of wind farm layouts has seen little consideration in the literature on the wind farm layout optimisation problem to date. Most existing algorithms focus on optimising layouts for power or the cost of energy alone. In this paper, we consider the geometry of wind farm layouts and whether it is possible to bi-optimise a layout for both energy efficiency and the degree of visual impact that the layout exhibits. We develop a novel optimisation approach to the problem that mathematically measures the degree of visual impact of a layout, drawing inspiration from the field of architecture. To evaluate our ideas, we demonstrate them on three benchmark problems for the wind farm layout optimisation problem in conjunction with two recently-published stochastic local search algorithms. Optimal patterned layouts are shown to be very close in energy efficiency to optimal non-patterned layouts. Full article
(This article belongs to the Section Information Processes)
Open Access Article HTCRL: A Range-Free Location Algorithm Based on Homothetic Triangle Cyclic Refinement in Wireless Sensor Networks
Information 2017, 8(2), 40; doi:10.3390/info8020040
Received: 3 January 2017 / Revised: 8 March 2017 / Accepted: 23 March 2017 / Published: 27 March 2017
PDF Full-text (4618 KB) | HTML Full-text | XML Full-text
Abstract
Wireless sensor networks (WSNs) have become a significant technology in recent years and can be widely used in many applications. WSNs consist of a large number of sensor nodes, each of which is energy-constrained and has low power dissipation. Most of the sensor nodes are tiny sensors with small memories that do not know their own locations, so determining the locations of the unknown sensor nodes is one of the key issues in WSNs. In this paper, an improved APIT algorithm, HTCRL (Homothetic Triangle Cyclic Refinement Location), is proposed, which is based on the principle of homothetic triangles. It adopts perpendicular median surface cutting to narrow down the target area in order to decrease the average localization error rate, and it reduces the probability of misjudgment by adding conditions of judgment. It achieves relatively high accuracy compared with the typical APIT algorithm without any additional hardware equipment or increased communication overhead. Full article
(This article belongs to the Special Issue Sensor Networks for Emergent Technologies)
Open Access Article Correlation Coefficient between Dynamic Single Valued Neutrosophic Multisets and Its Multiple Attribute Decision-Making Method
Information 2017, 8(2), 41; doi:10.3390/info8020041
Received: 7 March 2017 / Revised: 4 April 2017 / Accepted: 5 April 2017 / Published: 7 April 2017
PDF Full-text (432 KB) | HTML Full-text | XML Full-text
Abstract
Based on dynamic information collected from different time intervals in some real situations, this paper firstly proposes a dynamic single valued neutrosophic multiset (DSVNM) to express dynamic information, together with the operational relations of DSVNMs. Then, a correlation coefficient between DSVNMs and a weighted correlation coefficient between DSVNMs are presented to measure the correlation degrees between DSVNMs, and their properties are investigated. Based on the weighted correlation coefficient of DSVNMs, a multiple attribute decision-making method is established under a DSVNM environment, in which the evaluation values of alternatives with respect to attributes are collected from different time intervals and are represented in the form of DSVNMs. The ranking order of alternatives is obtained through the weighted correlation coefficient between each alternative and the ideal alternative, which takes into account the attribute weights and the time weights, and thus the best one(s) can also be determined. Finally, a practical example shows the application of the proposed method. Full article
(This article belongs to the Section Information Theory and Methodology)
Open Access Article Security Awareness of the Digital Natives
Information 2017, 8(2), 42; doi:10.3390/info8020042
Received: 8 March 2017 / Revised: 4 April 2017 / Accepted: 5 April 2017 / Published: 8 April 2017
Cited by 1 | PDF Full-text (1137 KB) | HTML Full-text | XML Full-text
Abstract
Young generations make extensive use of mobile devices, such as smartphones, tablets and laptops, while a plethora of security risks associated with such devices are induced by vulnerabilities related to user behavior. Furthermore, the number of security breaches on or via portable devices increases exponentially. Thus, deploying suitable risk treatments requires the investigation of how the digital natives (young people, born and bred in the digital era) use their mobile devices and their level of security awareness, in order to identify common usage patterns with negative security impact. In this article, we present the results of a survey performed across a multinational sample of digital natives with distinct backgrounds and levels of competence in terms of security, to identify divergences in user behavior due to regional, educational and other factors. Our results highlight significant influences on the behavior of digital natives, arising from user confidence, educational background, and parameters related to usability and accessibility. The outcomes of this study justify the need for further analysis of the topic, in order to identify the influence of fine-grained semantics and to consolidate wide and robust user models. Full article
Open Access Article Object Tracking by a Combination of Discriminative Global and Generative Multi-Scale Local Models
Information 2017, 8(2), 43; doi:10.3390/info8020043
Received: 6 February 2017 / Revised: 5 April 2017 / Accepted: 5 April 2017 / Published: 11 April 2017
PDF Full-text (2950 KB) | HTML Full-text | XML Full-text
Abstract
Object tracking is a challenging task in many computer vision applications due to occlusion, scale variation, background clutter, etc. In this paper, we propose a tracking algorithm that combines discriminative global and generative multi-scale local models. In the global model, we train a classifier with sparse discriminative features to separate the target object from the background based on holistic templates. In the multi-scale local model, the object is represented by multi-scale local sparse representation histograms, which exploit the complementary partial and spatial information of an object across different scales. Finally, a collaborative similarity score of each candidate target is input into a Bayesian inference framework to estimate the target state sequentially during tracking. Experimental results on various challenging video sequences show that the proposed method performs favorably compared to several state-of-the-art trackers. Full article
Open Access Article BBDS: Blockchain-Based Data Sharing for Electronic Medical Records in Cloud Environments
Information 2017, 8(2), 44; doi:10.3390/info8020044
Received: 1 March 2017 / Revised: 27 March 2017 / Accepted: 13 April 2017 / Published: 17 April 2017
Cited by 2 | PDF Full-text (2668 KB) | HTML Full-text | XML Full-text
Abstract
Disseminating medical data beyond the protected cloud of institutions poses severe risks to patients’ privacy, as breaches push them to the point where they abstain from full disclosure of their condition. This situation negatively impacts the patient, scientific research, and all stakeholders. To address this challenge, we propose a blockchain-based data sharing framework that sufficiently addresses the access control challenges associated with sensitive data stored in the cloud, using the immutability and built-in autonomy properties of the blockchain. Our system is based on a permissioned blockchain which allows access only to invited, and hence verified, users. As a result of this design, further accountability is guaranteed, as all users are already known and a log of their actions is kept by the blockchain. The system permits users to request data from the shared pool after their identities and cryptographic keys are verified. The evidence from the system evaluation shows that our scheme is lightweight, scalable, and efficient. Full article
(This article belongs to the Special Issue Secure Data Storage and Sharing Techniques in Cloud Computing)
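The tamper-evidence that BBDS derives from blockchain immutability can be illustrated with a minimal hash-chain sketch, in which every log entry commits to the hash of its predecessor. This is a toy model, not the authors' permissioned-blockchain implementation; all names are illustrative:

```python
import hashlib
import json

def block_hash(block):
    # Hash of the canonical JSON encoding of a block's contents.
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

class AccessLog:
    """Toy append-only log: every entry commits to the hash of the previous one."""
    def __init__(self):
        self.chain = [{"prev": "0" * 64, "user": "-", "action": "genesis"}]

    def append(self, user, action):
        self.chain.append({"prev": block_hash(self.chain[-1]),
                           "user": user, "action": action})

    def verify(self):
        # A retroactive edit to any entry breaks the "prev" link of its successor.
        return all(self.chain[i]["prev"] == block_hash(self.chain[i - 1])
                   for i in range(1, len(self.chain)))

log = AccessLog()
log.append("alice", "request:record-42")
log.append("alice", "request:record-43")
assert log.verify()
log.chain[1]["action"] = "request:record-99"   # tamper with an earlier entry
assert not log.verify()
```

In the paper's setting the chain is maintained by the permissioned blockchain network rather than a single process, which is what makes the accountability guarantee hold against individual participants.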
Open Access Article A Shallow Network with Combined Pooling for Fast Traffic Sign Recognition
Information 2017, 8(2), 45; doi:10.3390/info8020045
Received: 24 February 2017 / Revised: 1 April 2017 / Accepted: 13 April 2017 / Published: 17 April 2017
PDF Full-text (3367 KB) | HTML Full-text | XML Full-text
Abstract
Traffic sign recognition plays an important role in intelligent transportation systems. Motivated by the recent success of deep learning in the application of traffic sign recognition, we present a shallow network architecture based on convolutional neural networks (CNNs). The network consists of only three convolutional layers for feature extraction and learns via backward optimization. We propose combining different pooling operations to improve sign recognition performance. In view of real-time performance, we use the ReLU activation function to improve computational efficiency. In addition, a linear layer with softmax-loss is used as the classifier. We use the German traffic sign recognition benchmark (GTSRB) to evaluate the network on a CPU, without expensive GPU acceleration hardware, under real-world recognition conditions. The experimental results indicate that the proposed method is effective and fast, achieving the highest recognition rate compared with other state-of-the-art algorithms. Full article
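The ReLU activation and softmax-loss classifier named in the abstract are standard building blocks; a minimal pure-Python sketch of these two components (illustrative only, not the paper's network):

```python
import math

def relu(x):
    # ReLU: max(0, v), applied elementwise to a list of activations.
    return [max(0.0, v) for v in x]

def softmax(z):
    m = max(z)                          # subtract the max for numerical stability
    exps = [math.exp(v - m) for v in z]
    s = sum(exps)
    return [e / s for e in exps]

def softmax_loss(z, label):
    # Cross-entropy of the softmax output against the true class index.
    return -math.log(softmax(z)[label])

scores = relu([-1.0, 2.0, 0.5])        # [0.0, 2.0, 0.5]
probs = softmax(scores)
assert abs(sum(probs) - 1.0) < 1e-12   # a valid probability distribution
assert probs.index(max(probs)) == 1    # class 1 wins
```

In the actual network these operations run on feature maps produced by the three convolutional layers, with the loss driving the backward optimization.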
Open Access Article A Framework for Systematic Refinement of Trustworthiness Requirements
Information 2017, 8(2), 46; doi:10.3390/info8020046
Received: 16 December 2016 / Revised: 13 April 2017 / Accepted: 15 April 2017 / Published: 20 April 2017
PDF Full-text (917 KB) | HTML Full-text | XML Full-text
Abstract
The trustworthiness of systems that support complex collaborative business processes is an emergent property. In order to address users’ trust concerns, trustworthiness requirements of software systems must be elicited and satisfied. The aim of this paper is to address the gap that exists between end-users’ trust concerns and the lack of implementation of proper trustworthiness requirements. New technologies like cloud computing bring new capabilities for hosting and offering complex collaborative business operations. However, these advances might bring undesirable side effects, e.g., introducing new vulnerabilities and threats caused by collaboration and data exchange over the Internet. Hence, users become more concerned about trust. Trust is subjective; trustworthiness requirements for addressing trust concerns are difficult to elicit, especially if there are different parties involved in the business process. We propose a user-centered trustworthiness requirement analysis and modeling framework. We integrate the subjective trust concerns into goal models and embed them into business process models as objective trustworthiness requirements. The Business Process Model and Notation (BPMN) is extended to enable the modeling of trustworthiness requirements. This paper focuses on the challenges of eliciting, refining and modeling trustworthiness requirements. An application example from the healthcare domain is used to demonstrate our approach. Full article
(This article belongs to the Special Issue Trust, Privacy and Security in Digital Business)
Open Access Article Developing Knowledge-Based Citizen Participation Platform to Support Smart City Decision Making: The Smarticipate Case Study
Information 2017, 8(2), 47; doi:10.3390/info8020047
Received: 28 February 2017 / Revised: 12 April 2017 / Accepted: 15 April 2017 / Published: 21 April 2017
Cited by 1 | PDF Full-text (5309 KB) | HTML Full-text | XML Full-text
Abstract
Citizen participation for social innovation and co-creating urban regeneration proposals can be greatly facilitated by innovative IT systems. Such systems can use Open Government Data, visualise urban proposals in 3D models and provide automated feedback on the feasibility of the proposals. Using such a system as a communication platform between citizens and city administrations provides an integrated top-down and bottom-up urban planning and decision-making approach to smart cities. However, generating automated feedback on citizens’ proposals requires modelling domain-specific knowledge, i.e., vocabulary and rules, which can be applied to spatial and temporal 3D models. This paper presents the European Commission funded H2020 smarticipate project, which aims to address the above challenge in three smart cities: Hamburg, Rome and RBKC-London. Whilst the proposed system architecture indicates various innovative features, a proof of concept of the automated feedback feature for the Hamburg use case ‘planting trees’ is demonstrated. Early results and lessons learned show that it is feasible to provide automated feedback on citizen-initiated proposals on specific topics. However, it is not straightforward to generalise this feature to cover more complex concepts and conditions, which require specifying comprehensive domain languages, rules and appropriate tools to process them. This paper also highlights the strengths of the smarticipate platform, discusses challenges to realising its different features and suggests potential solutions. Full article
(This article belongs to the Special Issue Smart City Technologies, Systems and Applications)
Open Access Article Assembling Deep Neural Networks for Medical Compound Figure Detection
Information 2017, 8(2), 48; doi:10.3390/info8020048
Received: 17 March 2017 / Revised: 18 April 2017 / Accepted: 19 April 2017 / Published: 21 April 2017
Cited by 1 | PDF Full-text (850 KB) | HTML Full-text | XML Full-text
Abstract
Compound figure detection on figures and associated captions is the first step to making medical figures from the biomedical literature available for further analysis. The performance of traditional methods is limited by the choice of hand-engineered features and prior domain knowledge. We train multiple convolutional neural networks (CNNs), long short-term memory (LSTM) networks, and gated recurrent unit (GRU) networks on top of pre-trained word vectors to learn textual features from captions, and employ deep CNNs to learn visual features from figures. We then identify compound figures by combining the textual and visual predictions. Our proposed architecture obtains remarkable performance in three run types—textual, visual and mixed—and achieves better performance on the ImageCLEF2015 and ImageCLEF2016 benchmarks. Full article
Open Access Article Automated Prostate Gland Segmentation Based on an Unsupervised Fuzzy C-Means Clustering Technique Using Multispectral T1w and T2w MR Imaging
Information 2017, 8(2), 49; doi:10.3390/info8020049
Received: 4 February 2017 / Revised: 3 April 2017 / Accepted: 24 April 2017 / Published: 28 April 2017
PDF Full-text (4264 KB) | HTML Full-text | XML Full-text
Abstract
Prostate imaging analysis is challenging in the diagnosis, therapy, and staging of prostate cancer. In clinical practice, Magnetic Resonance Imaging (MRI) is increasingly used thanks to its morphologic and functional capabilities. However, manual detection and delineation of the prostate gland on multispectral MRI data is currently a time-consuming and operator-dependent procedure. Computer-assisted segmentation approaches have the potential to address these issues, but efficient solutions are still lacking. In this paper, a novel automatic prostate MR image segmentation method based on the Fuzzy C-Means (FCM) clustering algorithm, which enables multispectral T1-weighted (T1w) and T2-weighted (T2w) MRI anatomical data processing, is proposed. This approach, using an unsupervised machine learning technique, segments the prostate gland effectively. A total of 21 patients with suspicion of prostate cancer were enrolled in this study. Volume-based, spatial overlap-based and spatial distance-based metrics were used to quantitatively evaluate the accuracy of the obtained segmentation results with respect to the gold-standard boundaries delineated manually by an expert radiologist. The proposed multispectral segmentation method was compared with the same processing pipeline applied to either T2w or T1w MR images alone. The multispectral approach considerably outperforms the monoparametric ones, achieving an average Dice Similarity Coefficient of 90.77 ± 1.75, compared with 81.90 ± 6.49 and 82.55 ± 4.93 for T2w and T1w imaging alone, respectively. Combining T2w and T1w MR image structural information significantly enhances prostate gland segmentation by exploiting the uniform gray appearance of the prostate on T1w MRI. Full article
(This article belongs to the Special Issue Fuzzy Logic for Image Processing)
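The Dice Similarity Coefficient reported above is a standard spatial overlap metric: for two segmentation masks A and B, DSC = 2|A ∩ B| / (|A| + |B|). A minimal sketch over voxel coordinate sets (illustrative, not the paper's evaluation code):

```python
def dice(a, b):
    """Dice Similarity Coefficient between two voxel sets, as a percentage."""
    a, b = set(a), set(b)
    if not a and not b:
        return 100.0            # two empty masks overlap perfectly by convention
    return 200.0 * len(a & b) / (len(a) + len(b))

# Toy 2D masks: the automatic segmentation vs. the expert gold standard.
auto = {(0, 0), (0, 1), (1, 0), (1, 1)}
manual = {(0, 1), (1, 0), (1, 1), (2, 1)}
print(dice(auto, manual))       # 75.0
```

The paper's figures of 90.77 vs. 81.90 and 82.55 are averages of exactly this kind of per-patient overlap score, computed over 3D voxel masks.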
Open Access Article The Diffraction Research of Cylindrical Block Effect Based on Indoor 45 GHz Millimeter Wave Measurements
Information 2017, 8(2), 50; doi:10.3390/info8020050
Received: 3 April 2017 / Revised: 24 April 2017 / Accepted: 27 April 2017 / Published: 2 May 2017
PDF Full-text (2666 KB) | HTML Full-text | XML Full-text
Abstract
In this paper, four kinds of block diffraction models were proposed on the basis of the uniform geometrical theory of diffraction, and these models were validated by experiments with a 45 GHz millimeter wave in the laboratory. The results are in agreement with the theoretical analysis, although some errors exist in the measurements because of the imperfect experimental environment. The measurement error for a single conducting cylindrical block was less than 0.5 dB, and the error for a single human block in the school laboratory was less than 1 dB, while in the factory laboratory environment the peak-to-peak error reached 1.6 dB. Human body block attenuation was about 5.9–9.2 dB lower than that of the single conducting cylinder. A human body and a conducting cylinder were used together as a block in models (c) and (d), with the cylinder in different positions in the two models. The measurement results showed that the attenuation of model (d) is about 3 dB higher than that of model (c). Full article
Open Access Article Subtraction and Division Operations of Simplified Neutrosophic Sets
Information 2017, 8(2), 51; doi:10.3390/info8020051
Received: 4 April 2017 / Revised: 1 May 2017 / Accepted: 2 May 2017 / Published: 4 May 2017
PDF Full-text (225 KB) | HTML Full-text | XML Full-text
Abstract
A simplified neutrosophic set is characterized by a truth-membership function, an indeterminacy-membership function, and a falsity-membership function; it is a subclass of the neutrosophic set and contains the concepts of an interval neutrosophic set and a single valued neutrosophic set. It is a powerful structure for expressing indeterminate and inconsistent information. However, to the best of my knowledge, there has so far been only one paper on the subtraction and division operators in the basic operational laws of single-valued neutrosophic numbers defined in the existing literature. Therefore, this paper proposes a subtraction operation and a division operation for simplified neutrosophic sets, including single valued neutrosophic sets and interval neutrosophic sets respectively, under some constrained conditions, to complete the theoretical framework of simplified neutrosophic sets. In addition, we give numerical examples to illustrate the defined operations. The subtraction and division operations are very important in many practical applications, such as decision making and image processing. Full article
(This article belongs to the Section Information Theory and Methodology)
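For single valued neutrosophic numbers (T, I, F), subtraction and division are commonly defined componentwise under constraints that keep the result in [0, 1]. The sketch below follows one such definition found in the neutrosophic literature; it is an assumption for illustration, so consult the paper itself for the exact operations and conditions:

```python
def svnn_subtract(a, b):
    """A - B for single valued neutrosophic numbers a = (Ta, Ia, Fa), b = (Tb, Ib, Fb),
    under the usual constraints Ta >= Tb, Ia <= Ib, Fa <= Fb, Tb != 1, Ib != 0, Fb != 0."""
    (ta, ia, fa), (tb, ib, fb) = a, b
    assert ta >= tb and ia <= ib and fa <= fb and tb < 1 and ib > 0 and fb > 0
    return ((ta - tb) / (1 - tb), ia / ib, fa / fb)

def svnn_divide(a, b):
    """A / B under the constraints Ta <= Tb, Ia >= Ib, Fa >= Fb, Tb != 0, Ib != 1, Fb != 1."""
    (ta, ia, fa), (tb, ib, fb) = a, b
    assert ta <= tb and ia >= ib and fa >= fb and tb > 0 and ib < 1 and fb < 1
    return (ta / tb, (ia - ib) / (1 - ib), (fa - fb) / (1 - fb))

print(svnn_subtract((0.6, 0.2, 0.1), (0.2, 0.5, 0.5)))   # approx (0.5, 0.4, 0.2)
```

The constraint checks mirror the "constrained conditions" the abstract mentions: outside those ranges the componentwise formulas would leave the unit interval and the result would not be a valid neutrosophic number.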
Open Access Article Multi-Label Classification from Multiple Noisy Sources Using Topic Models
Information 2017, 8(2), 52; doi:10.3390/info8020052
Received: 24 January 2017 / Revised: 24 April 2017 / Accepted: 27 April 2017 / Published: 5 May 2017
PDF Full-text (775 KB) | HTML Full-text | XML Full-text
Abstract
Multi-label classification is a well-known supervised machine learning setting where each instance is associated with multiple classes. Examples include annotation of images with multiple labels, assigning multiple tags for a web page, etc. Since several labels can be assigned to a single instance, one of the key challenges in this problem is to learn the correlations between the classes. Our first contribution assumes labels from a perfect source. Towards this, we propose a novel topic model (ML-PA-LDA). The distinguishing feature in our model is that classes that are present as well as the classes that are absent generate the latent topics and hence the words. Extensive experimentation on real world datasets reveals the superior performance of the proposed model. A natural source for procuring the training dataset is through mining user-generated content or directly through users in a crowdsourcing platform. In this more practical scenario of crowdsourcing, an additional challenge arises as the labels of the training instances are provided by noisy, heterogeneous crowd-workers with unknown qualities. With this motivation, we further augment our topic model to the scenario where the labels are provided by multiple noisy sources and refer to this model as ML-PA-LDA-MNS. With experiments on simulated noisy annotators, the proposed model learns the qualities of the annotators well, even with minimal training data. Full article
(This article belongs to the Special Issue Text Mining Applications and Theory)
Open Access Article A Filter Structure for Arbitrary Re-Sampling Ratio Conversion of a Discrete Signal
Information 2017, 8(2), 53; doi:10.3390/info8020053
Received: 2 February 2017 / Revised: 20 April 2017 / Accepted: 5 May 2017 / Published: 12 May 2017
PDF Full-text (1424 KB) | HTML Full-text | XML Full-text
Abstract
In this report, we studied the sampling synchronization of a discrete signal in the receiver of a communication system and found that the frequency of the received signal usually exhibits unpredictable deviations. We observed many harmonics caused by the frequency deviations of the discrete received signal. These findings indicate that sampling synchronization is an important issue when using the discrete Fourier transform (DFT) to analyze the harmonics of discrete signals. We investigated the influence of these harmonics on the performance of signal sampling and studied the frequency estimation of the received signal. Based on this frequency estimate, the sampling rate of the discrete signal was converted using a modified Farrow filter to achieve sampling synchronization for the received signal. The algorithm discussed here can be applied to sampling synchronization in monitoring and control systems. Finally, simulations and experimental results are presented. Full article
(This article belongs to the Section Information and Communications Technology)
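At the heart of any Farrow-style re-sampler is evaluating the signal between input samples at a fractional delay. The sketch below uses first-order (linear) interpolation for clarity, whereas the paper's modified Farrow filter uses a higher-order polynomial structure:

```python
def resample(x, ratio):
    """Re-sample x at an arbitrary ratio (output rate / input rate)
    by linear interpolation between adjacent input samples."""
    out = []
    t = 0.0
    while t <= len(x) - 1:
        n = int(t)                 # integer part: base sample index
        mu = t - n                 # fractional delay in [0, 1)
        if n + 1 < len(x):
            out.append((1 - mu) * x[n] + mu * x[n + 1])
        else:
            out.append(x[n])       # last sample: nothing to interpolate towards
        t += 1.0 / ratio           # advance by one output-sample period
    return out

ramp = [0.0, 1.0, 2.0, 3.0]
print(resample(ramp, 2.0))  # [0.0, 0.5, 1.0, 1.5, 2.0, 2.5, 3.0]
```

A linear signal is reproduced exactly by linear interpolation; for arbitrary signals, higher-order Farrow polynomials reduce the interpolation error, which is why the paper adopts a modified Farrow structure.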
Open Access Article A Method for Multi-Criteria Group Decision Making with 2-Tuple Linguistic Information Based on Cloud Model
Information 2017, 8(2), 54; doi:10.3390/info8020054
Received: 19 March 2017 / Revised: 8 May 2017 / Accepted: 9 May 2017 / Published: 12 May 2017
PDF Full-text (1052 KB) | HTML Full-text | XML Full-text
Abstract
This paper presents a new approach to solve the multi-criteria group decision making (MCGDM) problem where criteria values take the form of 2-tuple linguistic information. Firstly, a 2-tuple hybrid ordered weighted geometric (THOWG) operator is proposed, which synthetically considers the importance of both the individual arguments and their ordered positions so as to overcome the defects of existing operators. Secondly, combining the advantages of the cloud model and the 2-tuple linguistic variable, a new cloud-generating method is proposed to transform 2-tuple linguistic variables into clouds. Thirdly, we further define some new cloud algorithms, such as the cloud possibility degree and the cloud support degree, which can be used to compare clouds and to determine the criteria weights, respectively. Furthermore, a new approach for 2-tuple linguistic group decision making is presented on the basis of the THOWG operator, the improved cloud-generating method and the new cloud algorithms. Finally, an example of assessing the social effects of biomass power plants (BPPs) is illustrated to verify the applicability and feasibility of the developed approach, and a comparative analysis is also conducted to validate the effectiveness of the proposed method. Full article
(This article belongs to the Section Information Theory and Methodology)
Open AccessArticle An Experience-Based Framework for Evaluating Tourism Mobile Commerce Platforms
Information 2017, 8(2), 55; doi:10.3390/info8020055
Received: 21 March 2017 / Revised: 9 May 2017 / Accepted: 9 May 2017 / Published: 12 May 2017
PDF Full-text (236 KB) | HTML Full-text | XML Full-text
Abstract
This research presents and studies an evaluation framework for tourism mobile commerce platforms based on tourists’ experience. Synthesizing prior literature, relevant theories, and the results of online questionnaires, we select 24 evaluation indices for preliminary evaluation. Using the exploratory factor analysis method, we then extract from these indices the following five principal factors: interactive experience, infrastructure experience, personalization experience, product or service quality experience, and product operation experience. We further employ confirmatory factor analysis to test the construction of the evaluation framework and demonstrate that the evaluation framework is both robust and effective. Finally, based on our proposed evaluation framework, we empirically evaluate the most popular mobile commerce platforms (Ctrip and Qunaer) in China using the fuzzy comprehensive evaluation method. Full article
(This article belongs to the Section Information Applications)
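The fuzzy comprehensive evaluation step mentioned in the abstract can be sketched as a weighted combination of criterion-level membership vectors. The weights, grades and membership degrees below are invented for illustration and are not the paper's data:

```python
import numpy as np

# Hypothetical weights for five evaluation factors (must sum to 1).
w = np.array([0.30, 0.25, 0.20, 0.15, 0.10])

# Membership matrix R: each row gives one factor's membership degrees
# over the evaluation grades (poor, fair, good, excellent).
R = np.array([
    [0.10, 0.20, 0.40, 0.30],
    [0.05, 0.15, 0.50, 0.30],
    [0.20, 0.30, 0.30, 0.20],
    [0.10, 0.25, 0.45, 0.20],
    [0.15, 0.25, 0.35, 0.25],
])

# Composite evaluation vector B = w . R (weighted-average operator),
# then pick the grade with maximum membership.
B = w @ R
grades = ["poor", "fair", "good", "excellent"]
best = grades[int(np.argmax(B))]
print(B.round(3), best)
```

With row-normalized memberships and normalized weights, B itself sums to one, so it can be read directly as a distribution over grades.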
Open AccessArticle Dynamic, Interactive and Visual Analysis of Population Distribution and Mobility Dynamics in an Urban Environment Using the Mobility Explorer Framework
Information 2017, 8(2), 56; doi:10.3390/info8020056
Received: 28 February 2017 / Revised: 20 April 2017 / Accepted: 25 April 2017 / Published: 15 May 2017
PDF Full-text (6795 KB) | HTML Full-text | XML Full-text
Abstract
This paper investigates the extent to which a mobile data source can be utilised to generate new information intelligence for decision-making in smart city planning processes. In this regard, the Mobility Explorer framework is introduced and applied to the City of Vienna (Austria) by using anonymised mobile phone data from a mobile phone service provider. This framework identifies five necessary elements that are needed to develop complex planning applications. As part of the investigation and experiments a new dynamic software tool, called Mobility Explorer, has been designed and developed based on the requirements of the planning department of the City of Vienna. As a result, the Mobility Explorer enables city stakeholders to interactively visualise the dynamic diurnal population distribution, mobility patterns and various other complex outputs for planning needs. Based on the experiences during the development phase, this paper discusses mobile data issues, presents the visual interface, performs various user-defined analyses, demonstrates the application’s usefulness and critically reflects on the evaluation results of the citizens’ motion exploration that reveal the great potential of mobile phone data in smart city planning but also depict its limitations. These experiences and lessons learned from the Mobility Explorer application development provide useful insights for other cities and planners who want to make informed decisions using mobile phone data in their city planning processes through dynamic visualisation of Call Data Record (CDR) data. Full article
(This article belongs to the Special Issue Smart City Technologies, Systems and Applications)
Open AccessArticle An Effective and Robust Single Image Dehazing Method Using the Dark Channel Prior
Information 2017, 8(2), 57; doi:10.3390/info8020057
Received: 14 March 2017 / Revised: 15 May 2017 / Accepted: 15 May 2017 / Published: 17 May 2017
PDF Full-text (14891 KB) | HTML Full-text | XML Full-text
Abstract
In this paper, we propose a single image dehazing method aimed at addressing the inherent limitations of the extensively employed dark channel prior (DCP). More concretely, we introduce the Gaussian mixture model (GMM) to segment the input hazy image into scenes based on the haze density feature map. With the segmentation results, combined with the proposed sky region detection method, we can effectively recognize the sky region, which the DCP cannot handle well. On the basis of sky region detection, we then present an improved global atmospheric light estimation method to increase the estimation accuracy of the atmospheric light. Further, we present a multi-scale fusion-based strategy to obtain the transmission map based on the DCP, which can significantly reduce the blocking artifacts of the transmission map. To further rectify the error-prone transmission within the sky region, an adaptive sky region transmission correction method is also presented. Finally, to compensate for the segmentation-blindness of the GMM, we adopt the guided total variation (GTV), which also eliminates the extensive texture details contained in the transmission map. Experimental results verify the power of our method and show its superiority over several state-of-the-art methods. Full article
(This article belongs to the Section Information Processes)
Open AccessArticle A Novel Identity-Based Signcryption Scheme in the Standard Model
Information 2017, 8(2), 58; doi:10.3390/info8020058
Received: 2 March 2017 / Revised: 1 May 2017 / Accepted: 16 May 2017 / Published: 19 May 2017
PDF Full-text (268 KB) | HTML Full-text | XML Full-text
Abstract
Identity-based signcryption is a useful cryptographic primitive that provides both authentication and confidentiality for identity-based crypto systems. It is challenging to build a secure identity-based signcryption scheme that can be proven secure in a standard model. In this paper, we address the issue and propose a novel construction of identity-based signcryption which enjoys IND-CCA security and existential unforgeability without resorting to the random oracle model. Comparisons demonstrate that the new scheme achieves stronger security, better performance efficiency and shorter system parameters. Full article
(This article belongs to the Special Issue Secure Data Storage and Sharing Techniques in Cloud Computing)
Open AccessArticle A Two-Stage Joint Model for Domain-Specific Entity Detection and Linking Leveraging an Unlabeled Corpus
Information 2017, 8(2), 59; doi:10.3390/info8020059
Received: 6 March 2017 / Revised: 15 May 2017 / Accepted: 18 May 2017 / Published: 22 May 2017
PDF Full-text (1811 KB) | HTML Full-text | XML Full-text
Abstract
The intensive construction of domain-specific knowledge bases (DSKBs) has posed an urgent demand for research on domain-specific entity detection and linking (DSEDL). Joint models are usually adopted in DSEDL tasks, but data imbalance and high computational complexity exist in these models. Besides, traditional feature representation methods are insufficient for domain-specific tasks, due to problems such as the lack of labeled data, link sparseness in DSKBs, and so on. In this paper, a two-stage joint (TSJ) model is proposed to solve the data imbalance problem by discriminatively processing entity mentions with different degrees of ambiguity. In addition, three novel methods are put forward to generate effective features by incorporating an unlabeled corpus. One crucial feature, involving entity detection, is the mention type, extracted by a long short-term memory (LSTM) model trained on automatically annotated data. The other two types of features mainly involve entity linking, including the inner-document topical coherence, which is measured based on entity co-occurrence relationships in the corpus, and the cross-document entity coherence, evaluated using similar documents. An overall 74.26% F1 value is obtained on a dataset of real-world movie comments, demonstrating the effectiveness of the proposed approach and indicating its potential for use in real-world domain-specific applications. Full article
(This article belongs to the Section Information Processes)
Open AccessArticle Correction of Outliers in Temperature Time Series Based on Sliding Window Prediction in Meteorological Sensor Network
Information 2017, 8(2), 60; doi:10.3390/info8020060
Received: 28 March 2017 / Revised: 17 May 2017 / Accepted: 19 May 2017 / Published: 24 May 2017
PDF Full-text (435 KB) | HTML Full-text | XML Full-text
Abstract
In order to detect outliers in temperature time series data, for improving data quality and the quality of decision-making related to design and operation, we propose an algorithm based on sliding window prediction. Firstly, the time series is segmented based on the sliding window. Then, a prediction model is established from the historical data to predict the future value. If the difference between a predicted value and a measured value is larger than the preset threshold, the point is judged to be an outlier and then corrected. In this paper, the sliding window and parameter settings of the algorithm are discussed and the algorithm is verified on actual data. This method does not require pre-classifying the abnormal points, runs fast, and can handle large-scale data. The experimental results show that the proposed algorithm not only effectively detects outliers in meteorological time series data but also notably improves correction efficiency. Full article
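The detection-and-correction loop described in the abstract can be sketched in a few lines. The paper's prediction model is not specified here, so a simple moving-average predictor stands in for it; the temperature values and threshold are illustrative only:

```python
import numpy as np

def correct_outliers(series, window=5, threshold=3.0):
    """Sliding-window prediction: predict each point from the preceding
    `window` values (a moving-average predictor stands in for the
    paper's unspecified model). Points whose deviation from the
    prediction exceeds `threshold` are flagged and replaced by the
    predicted value."""
    corrected = np.asarray(series, dtype=float).copy()
    outliers = []
    for t in range(window, len(corrected)):
        predicted = corrected[t - window:t].mean()
        if abs(corrected[t] - predicted) > threshold:
            outliers.append(t)
            corrected[t] = predicted  # correct the outlier in place
    return corrected, outliers

# Hourly temperatures with one spurious spike at index 7.
temps = [20.1, 20.3, 20.2, 20.4, 20.5, 20.4, 20.6, 35.0, 20.5, 20.7]
fixed, idx = correct_outliers(temps, window=5, threshold=3.0)
print(idx)  # → [7]
```

Because corrected values feed back into later windows, a single spike does not contaminate subsequent predictions.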
Open AccessArticle Information and Inference
Information 2017, 8(2), 61; doi:10.3390/info8020061
Received: 16 April 2017 / Revised: 21 May 2017 / Accepted: 22 May 2017 / Published: 27 May 2017
PDF Full-text (1971 KB) | HTML Full-text | XML Full-text
Abstract
Inference is expressed using information and is therefore subject to the limitations of information. The conventions that determine the reliability of inference have developed in information ecosystems under the influence of a range of selection pressures. These conventions embed limitations in information measures like quality, pace and friction caused by selection trade-offs. Some selection pressures improve the reliability of inference; others diminish it by reinforcing the limitations of the conventions. This paper shows how to apply these ideas to inference in order to analyse the limitations; the analysis is applied to various theories of inference including examples from the philosophies of science and mathematics as well as machine learning. The analysis highlights the limitations of these theories and how different, seemingly competing, ideas about inference can relate to each other. Full article
(This article belongs to the Section Information Theory and Methodology)
Open AccessArticle Exponential Operations and an Aggregation Method for Single-Valued Neutrosophic Numbers in Decision Making
Information 2017, 8(2), 62; doi:10.3390/info8020062
Received: 22 April 2017 / Revised: 31 May 2017 / Accepted: 2 June 2017 / Published: 7 June 2017
Cited by 1 | PDF Full-text (236 KB) | HTML Full-text | XML Full-text
Abstract
As an extension of an intuitionistic fuzzy set, a single-valued neutrosophic set is described independently by the membership functions of its truth, indeterminacy, and falsity, and is a subclass of a neutrosophic set (NS). However, in existing exponential operations and their aggregation methods for neutrosophic numbers (NNs) (basic elements in NSs), the exponents (weights) are positive real numbers in the unit interval under neutrosophic decision-making environments. As a supplement, this paper defines new exponential operations of single-valued NNs (basic elements in a single-valued NS), where positive real numbers are used as the bases and single-valued NNs are used as the exponents. Then, we propose a single-valued neutrosophic weighted exponential aggregation (SVNWEA) operator based on the exponential operational laws of single-valued NNs, together with an SVNWEA operator-based decision-making method. Finally, an illustrative example shows the applicability and rationality of the presented method. A comparison with a traditional method demonstrates that the new decision-making method is more appropriate and effective. Full article
(This article belongs to the Section Information Theory and Methodology)
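An exponential operation with a real base and a single-valued neutrosophic number (SVNN) exponent can be sketched as below. The operational law used here extends the intuitionistic fuzzy exponential law to the three-membership setting; it is one commonly stated form and may not match the paper's exact definition:

```python
def svnn_exp(base, svnn):
    """Exponential operation with a positive real base and a
    single-valued neutrosophic number (T, I, F) as the exponent.
    The law below is a hedged sketch extending the intuitionistic
    fuzzy exponential law, not necessarily the paper's definition."""
    t, i, f = svnn
    lam = base if 0 < base < 1 else 1.0 / base  # bases >= 1 use 1/base
    return (lam ** (1 - t), 1 - lam ** i, 1 - lam ** f)

# Base 0.5 raised to the SVNN exponent (0.6, 0.2, 0.3): all three
# components of the result lie in [0, 1], so it is again an SVNN.
result = svnn_exp(0.5, (0.6, 0.2, 0.3))
```

The closure property (each component staying in the unit interval) is what makes such operations usable inside a weighted exponential aggregation operator.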
Open AccessArticle Turbo Coded OFDM Combined with MIMO Antennas Based on Matched Interleaver for Coded-Cooperative Wireless Communication
Information 2017, 8(2), 63; doi:10.3390/info8020063
Received: 22 April 2017 / Revised: 2 June 2017 / Accepted: 6 June 2017 / Published: 13 June 2017
PDF Full-text (2217 KB) | HTML Full-text | XML Full-text
Abstract
A turbo coded cooperative orthogonal frequency division multiplexing (OFDM) scheme with multiple-input multiple-output (MIMO) antennas is considered, and its performance over a fast Rayleigh fading channel is evaluated. The turbo coded OFDM incorporates the MIMO (2 × 2) Alamouti space-time block code. The interleaver design and its placement always play a vital role in the performance of a turbo coded cooperation scheme. Therefore, a code-matched interleaver (CMI) is selected as the optimum choice of interleaver and is placed at the relay node. The performance of the CMI is evaluated in a turbo coded OFDM system over an additive white Gaussian noise (AWGN) channel. Moreover, the performance of the CMI is also evaluated in the turbo coded OFDM system with MIMO antennas over a fast Rayleigh fading channel. The modulation schemes chosen are binary phase shift keying (BPSK), quadrature phase shift keying (QPSK) and 16-quadrature amplitude modulation (16QAM). Soft demodulators are employed along with a joint iterative soft-input soft-output (SISO) turbo decoder at the destination node. Monte Carlo simulation results reveal that the turbo coded cooperative OFDM scheme with MIMO antennas successfully achieves coding gain, diversity gain and cooperation gain over the direct transmission scheme under identical conditions. Full article
(This article belongs to the Section Information and Communications Technology)
Open AccessArticle Identifying High Quality Document–Summary Pairs through Text Matching
Information 2017, 8(2), 64; doi:10.3390/info8020064
Received: 28 March 2017 / Revised: 3 June 2017 / Accepted: 7 June 2017 / Published: 12 June 2017
PDF Full-text (1584 KB) | HTML Full-text | XML Full-text
Abstract
Text summarization, namely automatically generating a short summary of a given document, is a difficult task in natural language processing. Nowadays, deep learning as a new technique has gradually been deployed for text summarization, but there is still a lack of large-scale, high-quality datasets for this technique. In this paper, we propose a novel deep learning method to identify high quality document–summary pairs for building a large-scale pairs dataset. Concretely, a long short-term memory (LSTM)-based model was designed to measure the quality of document–summary pairs. In order to leverage information across all parts of each document, we further propose an improved LSTM-based model that removes the forget gate in the LSTM unit. Experiments conducted on the training set and the test set built upon Sina Weibo (a Chinese microblog website similar to Twitter) showed that the LSTM-based models significantly outperformed baseline models with regard to the area under the receiver operating characteristic curve (AUC). Full article
(This article belongs to the Special Issue Text Mining Applications and Theory)
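The modification described in the abstract, removing the forget gate so the cell state accumulates information from all parts of the document rather than decaying, can be sketched as a single recurrence step. Weight names, dimensions and the random inputs are illustrative assumptions, not the paper's configuration:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step_no_forget(x, h, c, W):
    """One step of an LSTM unit with the forget gate removed: the cell
    state accumulates (c + i*g) instead of decaying (f*c + i*g)."""
    z = np.concatenate([x, h])
    i = sigmoid(W["i"] @ z)   # input gate
    g = np.tanh(W["g"] @ z)   # candidate cell update
    o = sigmoid(W["o"] @ z)   # output gate
    c_new = c + i * g         # forget gate removed here
    h_new = o * np.tanh(c_new)
    return h_new, c_new

rng = np.random.default_rng(0)
d_in, d_h = 4, 3
W = {k: rng.standard_normal((d_h, d_in + d_h)) * 0.1 for k in "igo"}
h, c = np.zeros(d_h), np.zeros(d_h)
for _ in range(5):  # run a short input sequence through the cell
    h, c = lstm_step_no_forget(rng.standard_normal(d_in), h, c, W)
```

Since `h = o * tanh(c)` with `o` in (0, 1), the hidden state stays bounded even though the cell state itself only accumulates.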
Open AccessArticle Security Policy Scheme for an Efficient Security Architecture in Software-Defined Networking
Information 2017, 8(2), 65; doi:10.3390/info8020065
Received: 9 April 2017 / Revised: 10 June 2017 / Accepted: 11 June 2017 / Published: 13 June 2017
PDF Full-text (5474 KB) | HTML Full-text | XML Full-text
Abstract
In order to build an efficient security architecture, previous studies have attempted to understand complex system architectures and message flows to detect various attack packets. However, the existing hardware-based single security architecture cannot efficiently handle a complex system structure. To solve this problem, we propose a software-defined networking (SDN) policy-based scheme for an efficient security architecture. The proposed scheme considers four policy functions: separating, chaining, merging, and reordering. If SDN network functions virtualization (NFV) system managers use these policy functions to deploy a security architecture, they only submit some requirement documents to the SDN policy-based architecture; after that, the entire security network can be easily built. This paper presents the design of the new policy function model and discusses its performance using theoretical analysis. Full article
(This article belongs to the Section Information and Communications Technology)
Open AccessArticle An Energy-Efficient Routing Algorithm in Three-Dimensional Underwater Sensor Networks Based on Compressed Sensing
Information 2017, 8(2), 66; doi:10.3390/info8020066
Received: 8 May 2017 / Revised: 12 June 2017 / Accepted: 14 June 2017 / Published: 16 June 2017
PDF Full-text (2086 KB) | HTML Full-text | XML Full-text
Abstract
Compressed sensing (CS) has become a powerful tool for processing correlated data in underwater sensor networks (USNs). Based on CS, certain signals can be recovered from a relatively small number of random linear projections. Since the battery-driven sensor nodes work in adverse environments, energy-efficient routing well matched with CS is needed to realize data gathering in USNs. In this paper, a clustering, uneven-layered, multi-hop routing algorithm based on CS (CS-CULM) is proposed. Inter-cluster transmission and fusion are fulfilled by an improved LEACH protocol; the uneven-layered, multi-hop routing is then adopted to forward the fused packets to the sink node for data reconstruction. Simulation results show that CS-CULM achieves better performance in energy saving and data reconstruction. Full article
(This article belongs to the Section Information and Communications Technology)
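Recovery from a small number of random linear projections, as the abstract describes, can be sketched with a generic compressed-sensing decoder such as orthogonal matching pursuit (OMP). The paper's actual reconstruction method at the sink node is not specified here, and the dimensions below are illustrative:

```python
import numpy as np

def omp(Phi, y, k):
    """Orthogonal matching pursuit: greedily recover a k-sparse signal
    x from measurements y = Phi @ x. A generic CS decoder, used here
    only as a stand-in for the sink node's reconstruction step."""
    n = Phi.shape[1]
    residual, support = y.copy(), []
    x_hat = np.zeros(n)
    for _ in range(k):
        # Pick the column (atom) most correlated with the residual.
        j = int(np.argmax(np.abs(Phi.T @ residual)))
        if j not in support:
            support.append(j)
        # Least-squares fit over the selected support, update residual.
        coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        residual = y - Phi[:, support] @ coef
    x_hat[support] = coef
    return x_hat

rng = np.random.default_rng(1)
n, m, k = 50, 20, 3               # signal length, measurements, sparsity
x = np.zeros(n)
x[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
Phi = rng.standard_normal((m, n)) / np.sqrt(m)  # random projections
x_hat = omp(Phi, Phi @ x, k)
```

With m much smaller than n, each node only needs to transmit the m projection values, which is the source of the energy saving the abstract refers to.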
Open AccessArticle Understanding the Impact of Human Mobility Patterns on Taxi Drivers’ Profitability Using Clustering Techniques: A Case Study in Wuhan, China
Information 2017, 8(2), 67; doi:10.3390/info8020067
Received: 17 April 2017 / Revised: 15 June 2017 / Accepted: 15 June 2017 / Published: 19 June 2017
PDF Full-text (3289 KB) | HTML Full-text | XML Full-text
Abstract
Taxi trajectories reflect human mobility over the urban road network. Although taxi drivers cruise the same city streets, there is an observed variation in their daily profit. To reveal the reasons behind this, this study introduces a novel approach for investigating and understanding the impact of human mobility patterns (taxi drivers’ behavior) on drivers’ daily profit. Firstly, a K-means clustering method is adopted to group taxi drivers into three profitability groups according to their driving duration, driving distance and income. Secondly, the cruising trips and stopping spots for each profitability group are extracted. Thirdly, a comparison among the profitability groups in terms of the spatial and temporal patterns of cruising trips and stopping spots is carried out, applying various methods including the mash map matching method and the DBSCAN clustering method. Finally, an overall analysis of the results is discussed in detail. The results show that there is a significant relationship between human mobility patterns and taxi drivers’ profitability. High-profitability drivers earn more than the other driver groups because, based on their experience, they know which places are more active for cruising and stopping, and at what times. This study provides suggestions and insights for taxi companies and taxi drivers in order to increase their daily income and to enhance the efficiency of the taxi industry. Full article
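The first step, grouping drivers into three profitability groups by driving duration, driving distance and income, can be sketched with a minimal K-means. The driver records below are synthetic, illustrative numbers, not the study's data:

```python
import numpy as np

def kmeans(X, k=3, iters=50, seed=0):
    """Minimal Lloyd's K-means: assign points to nearest centre, then
    move each centre to the mean of its assigned points."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - centers) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return labels, centers

# Synthetic drivers: (driving hours/day, distance km/day, income/day).
drivers = np.array([
    [8, 180, 300], [9, 200, 320], [8, 190, 310],
    [10, 250, 450], [11, 260, 470], [10, 255, 460],
    [12, 300, 650], [12, 310, 670], [13, 320, 660],
], dtype=float)

# Standardize features so income does not dominate the distance metric.
Z = (drivers - drivers.mean(0)) / drivers.std(0)
labels, _ = kmeans(Z, k=3)
```

Standardizing before clustering is a common precaution when features have very different scales; whether the study did so is not stated in the abstract.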
Open AccessArticle Computer-Generated Abstract Paintings Oriented by the Color Composition of Images
Information 2017, 8(2), 68; doi:10.3390/info8020068
Received: 31 March 2017 / Revised: 15 June 2017 / Accepted: 15 June 2017 / Published: 20 June 2017
PDF Full-text (54306 KB) | HTML Full-text | XML Full-text
Abstract
Designers and artists often require reference images at authoring time. The emergence of computer technology has provided new conditions and possibilities for artistic creation and research. It has also expanded the forms of artistic expression and attracted many artists, designers and computer experts to explore different artistic directions and collaborate with one another. In this paper, we present an efficient k-means-based method that segments the colors of an original picture to analyze the composition ratio of the color information and calculates the individual color areas together with their sizes. This information is transformed into regular geometries to reconstruct the colors of the picture and generate abstract images. Furthermore, we designed an application system using the proposed method and generated many works; some artists and designers have used it as an auxiliary tool for art and design creation. Experimental results on several datasets demonstrate the effectiveness of our method, which can also provide inspiration for further creative work. Full article
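The colour-composition step can be sketched as follows: each pixel is assigned to its nearest representative colour (a fixed palette stands in for the paper's k-means cluster centres), the cluster sizes give the composition ratios, and the ratios drive a simple geometric reconstruction. The pixel data and palette are invented for illustration:

```python
import numpy as np

# Stand-in for an image: 12 RGB pixels (a real use would flatten an
# H x W x 3 array). The palette plays the role of k-means centres.
pixels = np.array([
    [250, 30, 30], [245, 35, 25], [240, 40, 30], [250, 25, 35],
    [30, 30, 250], [25, 35, 245], [35, 30, 240],
    [30, 200, 30], [25, 210, 35],
    [250, 30, 30], [30, 30, 250], [245, 30, 30],
], dtype=float)
palette = np.array([[255, 0, 0], [0, 0, 255], [0, 200, 0]], dtype=float)

# Assign each pixel to its nearest palette colour, then compute the
# composition ratio of each colour (cluster size / total pixels).
labels = np.argmin(((pixels[:, None] - palette) ** 2).sum(-1), axis=1)
ratios = np.bincount(labels, minlength=len(palette)) / len(pixels)

# Turn the ratios into widths of coloured bars on a 400-px-wide canvas:
# a minimal geometric reconstruction of the colour composition.
widths = np.round(ratios * 400).astype(int)
print(ratios, widths)
```

The bar layout is only one of many regular geometries the ratios could be mapped onto.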
Open AccessArticle Expression and Analysis of Joint Roughness Coefficient Using Neutrosophic Number Functions
Information 2017, 8(2), 69; doi:10.3390/info8020069
Received: 28 May 2017 / Revised: 17 June 2017 / Accepted: 17 June 2017 / Published: 20 June 2017
PDF Full-text (2497 KB) | HTML Full-text | XML Full-text
Abstract
In nature, the mechanical properties of geological bodies are very complex, and their various mechanical parameters are vague, incomplete, imprecise, and indeterminate. In these cases, we cannot always compute or provide exact/crisp values for the joint roughness coefficient (JRC), which is a crucial parameter for determining shear strength in rock mechanics, but must approximate them. Hence, we need to investigate the anisotropy and scale effect of indeterminate JRC values by neutrosophic number (NN) functions, because an NN is composed of a determinate part and an indeterminate part and is thus very suitable for expressing JRC data with determinate and/or indeterminate information. In this study, the lower limit of the JRC data is chosen as the determinate information, and the difference between the lower and upper limits is chosen as the indeterminate information. On this basis, the NN functions of the anisotropic ellipse and logarithmic equation of JRC are developed to reflect the anisotropy and scale effect of JRC values. Additionally, the NN parameter ψ is defined to quantify the anisotropy of JRC values. Then, a two-variable NN function is introduced based on the factors of both the sample size and measurement orientation. Further, the changing rates for various sample sizes and/or measurement orientations are investigated by their derivative and partial derivative NN functions. An actual case study shows that the proposed NN functions are effective and reasonable for the expression and analysis of indeterminate JRC values. Obviously, NN functions provide a new, effective way of passing from the classical crisp expression and analyses to neutrosophic ones. Full article
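The abstract's construction, with the lower limit as the determinate part and the upper-minus-lower difference as the indeterminate part, amounts to expressing JRC as a neutrosophic number a + bI with I ∈ [0, 1]. A minimal sketch, using illustrative limits rather than the paper's measurements:

```python
def jrc_nn(lower, upper):
    """Express an indeterminate JRC value as a neutrosophic number
    a + b*I: the determinate part a is the lower limit of the measured
    JRC data, and the indeterminate part b*I (with I in [0, 1]) spans
    the difference between the lower and upper limits."""
    a, b = lower, upper - lower
    def evaluate(I):
        assert 0.0 <= I <= 1.0, "indeterminacy I must lie in [0, 1]"
        return a + b * I
    return evaluate

# JRC measured between 8.5 and 12.3 for some sample size/orientation
# (illustrative numbers, not the paper's data).
jrc = jrc_nn(8.5, 12.3)
low, high = jrc(0.0), jrc(1.0)  # recover the two limits
```

Sweeping I from 0 to 1 traces the whole indeterminate range, which is what the derivative and partial derivative NN functions then differentiate with respect to sample size or orientation.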
Open AccessCommunication Adopting Sector-Based Replacement (SBR) and Utilizing Air-R to Achieve R-WSN Sustainability
Information 2017, 8(2), 70; doi:10.3390/info8020070
Received: 23 May 2017 / Revised: 15 June 2017 / Accepted: 16 June 2017 / Published: 21 June 2017
PDF Full-text (3337 KB) | HTML Full-text | XML Full-text
Abstract
Sensor replacement in the rechargeable wireless sensor network (R-WSN) is important to provide continuous sensing services once sensor node failure or damage occurs. However, satisfactory solutions have not yet been found for developing a sustainable network and effectively prolonging its lifetime. Thus, we propose a new technique for detecting, reporting, and handling sensor failure, called sector-based replacement (SBR). Base station (BS) features are utilized in dividing the monitoring field into sectors and analyzing the incoming data from the nodes to detect the failed nodes. An airplane robot (Air-R) is then dispatched on a replacement trip. The goals of this study are to (i) increase and guarantee the sustainability of the R-WSN; (ii) rapidly detect the failed nodes in sectors by utilizing the BS capabilities in analyzing data, and achieve the highest performance in replacing the failed nodes using the Air-R; and (iii) minimize the Air-R’s movement effort by applying the new field-dividing mechanism, which leads to fast replacement. Extensive simulations are conducted to verify the effectiveness and efficiency of the SBR technique. Full article
(This article belongs to the Section Information and Communications Technology)
Journal Contact

MDPI AG
Information Editorial Office
St. Alban-Anlage 66, 4052 Basel, Switzerland
E-Mail: 
Tel.: +41 61 683 77 34
Fax: +41 61 302 89 18
Editorial Board