Information, Volume 8, Issue 2 (June 2017) – 33 articles

  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the tables of contents of newly released issues.
  • PDF is the official format for papers published in both HTML and PDF forms. To view a paper in PDF format, click on the "PDF Full-text" link and use the free Adobe Reader to open it.
Communication
Adopting Sector-Based Replacement (SBR) and Utilizing Air-R to Achieve R-WSN Sustainability
by Sadam M. Alkhalidi, Dong Wang and Zaid A. Al-Marhabi
Information 2017, 8(2), 70; https://doi.org/10.3390/info8020070 - 21 Jun 2017
Cited by 1 | Viewed by 4134
Abstract
Sensor replacement in the rechargeable wireless sensor network (R-WSN) is important for providing continuous sensing services once sensor node failure or damage occurs. However, satisfactory solutions for building a sustainable network and effectively prolonging its lifetime have not yet been found. Thus, we propose a new technique for detecting, reporting, and handling sensor failure, called sector-based replacement (SBR). Base station (BS) features are utilized to divide the monitoring field into sectors and to analyze the incoming data from the nodes to detect the failed ones. An airplane robot (Air-R) is then dispatched on a replacement trip. The goals of this study are to (i) increase and guarantee the sustainability of the R-WSN; (ii) rapidly detect the failed nodes in sectors by utilizing the BS capabilities in analyzing data, and achieve the highest performance in replacing the failed nodes using the Air-R; and (iii) minimize the Air-R's movement effort by applying the new field-dividing mechanism, which leads to fast replacement. Extensive simulations are conducted to verify the effectiveness and efficiency of the SBR technique. Full article
(This article belongs to the Section Information and Communications Technology)
Article
Expression and Analysis of Joint Roughness Coefficient Using Neutrosophic Number Functions
by Jun Ye, Jiqian Chen, Rui Yong and Shigui Du
Information 2017, 8(2), 69; https://doi.org/10.3390/info8020069 - 20 Jun 2017
Cited by 23 | Viewed by 4151
Abstract
In nature, the mechanical properties of geological bodies are very complex, and their various mechanical parameters are vague, incomplete, imprecise, and indeterminate. In these cases, we cannot always compute or provide exact/crisp values for the joint roughness coefficient (JRC), which is a crucial parameter for determining the shear strength in rock mechanics, but we need to approximate them. Hence, we need to investigate the anisotropy and scale effect of indeterminate JRC values by neutrosophic number (NN) functions, because the NN is composed of a determinate part and an indeterminate part and is very suitable for the expression of JRC data with determinate and/or indeterminate information. In this study, the lower limit of the JRC data is chosen as the determinate information, and the difference between the lower and upper limits is chosen as the indeterminate information. On this basis, the NN functions of the anisotropic ellipse and the logarithmic equation of JRC are developed to reflect the anisotropy and scale effect of JRC values. Additionally, the NN parameter ψ is defined to quantify the anisotropy of JRC values. Then, a two-variable NN function is introduced based on the factors of both the sample size and the measurement orientation. Further, the changing rates for various sample sizes and/or measurement orientations are investigated by their derivative and partial derivative NN functions. An actual case study shows that the proposed NN functions are effective and reasonable in the expression and analysis of the indeterminate values of JRC. NN functions thus provide a new, effective way of passing from the classical crisp expression and analyses to the neutrosophic ones. Full article
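For readers unfamiliar with the neutrosophic number (NN) notation, the representation described in this abstract (lower limit as the determinate part, spread between the limits as the indeterminate part) can be written compactly; the numbers below are purely illustrative, not values from the paper:

```latex
% A neutrosophic number has a determinate part a and an indeterminate part bI.
% Following the abstract, a is the lower limit of the measured JRC values and b the spread:
JRC = a + bI, \quad I \in [0, 1], \quad a = JRC_{\min}, \quad b = JRC_{\max} - JRC_{\min}.
% Hypothetical example: if JRC is observed in the range [6.8, 9.4], then
JRC = 6.8 + 2.6\,I, \quad I \in [0, 1].
```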
Article
Computer-Generated Abstract Paintings Oriented by the Color Composition of Images
by Mao Li, Jiancheng Lv, Xiaojie Li and Jing Yin
Information 2017, 8(2), 68; https://doi.org/10.3390/info8020068 - 20 Jun 2017
Cited by 5 | Viewed by 6702
Abstract
Designers and artists often require reference images at authoring time. The emergence of computer technology has provided new conditions and possibilities for artistic creation and research. It has also expanded the forms of artistic expression and attracted many artists, designers and computer experts to explore different artistic directions and collaborate with one another. In this paper, we present an efficient k-means-based method to segment the colors of an original picture, analyze the composition ratio of the color information, and calculate the individual color areas together with their sizes. This information is transformed into regular geometries to reconstruct the colors of the picture and generate abstract images. Furthermore, we designed an application system using the proposed method and generated many works; some artists and designers have used it as an auxiliary tool for art and design creation. Experimental results on the datasets demonstrate the effectiveness of our method and can provide inspiration for creative work. Full article
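As a rough, hedged sketch of the colour-composition step this abstract describes, the following Python fragment clusters an image's pixels with k-means and computes each cluster's area ratio; the cluster count, scikit-learn and Pillow are assumptions of this sketch, not the authors' exact pipeline:

```python
# Hedged sketch: colour-composition analysis with k-means, in the spirit of the abstract.
import numpy as np
from sklearn.cluster import KMeans
from PIL import Image

def color_composition(image_path: str, k: int = 5):
    """Segment an image's colours with k-means and return (cluster colours, area ratios)."""
    pixels = np.asarray(Image.open(image_path).convert("RGB"), dtype=np.float64)
    flat = pixels.reshape(-1, 3)                      # one row per pixel
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(flat)
    counts = np.bincount(km.labels_, minlength=k)     # pixels per colour cluster
    ratios = counts / counts.sum()                    # composition ratio of each colour
    return km.cluster_centers_.astype(np.uint8), ratios

# These ratios could then drive how much canvas area each regular geometry receives.
```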
Article
Understanding the Impact of Human Mobility Patterns on Taxi Drivers’ Profitability Using Clustering Techniques: A Case Study in Wuhan, China
by Hasan A. H. Naji, Chaozhong Wu and Hui Zhang
Information 2017, 8(2), 67; https://doi.org/10.3390/info8020067 - 19 Jun 2017
Cited by 12 | Viewed by 5127
Abstract
Taxi trajectories reflect human mobility over the urban road network. Although taxi drivers cruise the same city streets, there is an observed variation in their daily profit. To reveal the reasons behind this, this study introduces a novel approach for investigating and understanding the impact of human mobility patterns (taxi drivers' behavior) on daily drivers' profit. Firstly, a K-means clustering method is adopted to group taxi drivers into three profitability groups according to their driving duration, driving distance and income. Secondly, the cruising trips and stopping spots for each profitability group are extracted. Thirdly, a comparison among the profitability groups in terms of spatial and temporal patterns of cruising trips and stopping spots is carried out. The comparison applies various methods, including the mash map-matching method and the DBSCAN clustering method. Finally, an overall analysis of the results is discussed in detail. The results show that there is a significant relationship between human mobility patterns and taxi drivers' profitability. Drawing on their experience, high-profitability drivers earn more than the other driver groups because they know which places are more active for cruising and stopping, and at what times. This study provides suggestions and insights for taxi companies and taxi drivers in order to increase their daily income and to enhance the efficiency of the taxi industry. Full article
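A minimal sketch of the two clustering steps named in this abstract is given below; the feature scaling, k = 3 and the DBSCAN parameters are illustrative assumptions rather than the study's settings:

```python
# Hedged sketch of the two clustering steps named in the abstract.
import numpy as np
from sklearn.cluster import KMeans, DBSCAN
from sklearn.preprocessing import StandardScaler

def group_drivers(features: np.ndarray) -> np.ndarray:
    """features: (n_drivers, 3) columns = daily driving duration, distance, income.
    Returns a profitability-group label (0, 1 or 2) per driver."""
    scaled = StandardScaler().fit_transform(features)
    return KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(scaled)

def cluster_stop_spots(xy_metres: np.ndarray, eps_m: float = 200.0, min_pts: int = 10) -> np.ndarray:
    """xy_metres: (n_stops, 2) projected stop coordinates. Returns DBSCAN labels (-1 = noise)."""
    return DBSCAN(eps=eps_m, min_samples=min_pts).fit_predict(xy_metres)

# Comparing the resulting stop-spot clusters across the three driver groups is one way
# to surface the spatial differences the study reports.
```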
Article
An Energy-Efficient Routing Algorithm in Three-Dimensional Underwater Sensor Networks Based on Compressed Sensing
by Bo Li, Hongjuan Yang, Gongliang Liu and Xiyuan Peng
Information 2017, 8(2), 66; https://doi.org/10.3390/info8020066 - 16 Jun 2017
Cited by 2 | Viewed by 3658
Abstract
Compressed sensing (CS) has become a powerful tool for processing correlated data in underwater sensor networks (USNs). Based on CS, certain signals can be recovered from a relatively small number of random linear projections. Since the battery-driven sensor nodes work in adverse environments, energy-efficient routing well matched with CS is needed to realize data gathering in USNs. In this paper, a clustering, uneven-layered, multi-hop routing algorithm based on CS (CS-CULM) is proposed. Inter-cluster transmission and fusion are carried out by an improved LEACH protocol; uneven-layered, multi-hop routing is then adopted to forward the fused packets to the sink node for data reconstruction. Simulation results show that CS-CULM achieves better performance in energy saving and data reconstruction. Full article
(This article belongs to the Section Information and Communications Technology)
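To illustrate the compressed-sensing premise the abstract relies on (recovering a sparse signal from a small number of random projections), here is a hedged Python sketch; the orthogonal matching pursuit solver and the problem sizes are illustrative choices, not the CS-CULM reconstruction algorithm:

```python
# Hedged sketch: a sparse signal is recovered from far fewer random linear projections
# than its length, which is the premise behind CS-based data gathering.
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

rng = np.random.default_rng(0)
n, m, k = 256, 64, 8                                       # signal length, measurements, sparsity
x = np.zeros(n)
x[rng.choice(n, k, replace=False)] = rng.normal(size=k)    # k-sparse "sensor field" signal
phi = rng.normal(size=(m, n)) / np.sqrt(m)                 # random measurement matrix
y = phi @ x                                                # m projections gathered over the network

omp = OrthogonalMatchingPursuit(n_nonzero_coefs=k, fit_intercept=False).fit(phi, y)
print("relative reconstruction error:", np.linalg.norm(omp.coef_ - x) / np.linalg.norm(x))
```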
Article
Security Policy Scheme for an Efficient Security Architecture in Software-Defined Networking
by Woosik Lee and Namgi Kim
Information 2017, 8(2), 65; https://doi.org/10.3390/info8020065 - 13 Jun 2017
Cited by 14 | Viewed by 5483
Abstract
In order to build an efficient security architecture, previous studies have attempted to understand complex system architectures and message flows to detect various attack packets. However, the existing hardware-based single security architecture cannot efficiently handle a complex system structure. To solve this problem, we propose a software-defined networking (SDN) policy-based scheme for an efficient security architecture. The proposed scheme considers four policy functions: separating, chaining, merging, and reordering. If SDN network functions virtualization (NFV) system managers use these policy functions to deploy a security architecture, they only need to submit some requirement documents to the SDN policy-based architecture, after which the entire security network can be easily built. This paper presents the design of a new policy-function model and discusses its performance through theoretical analysis. Full article
(This article belongs to the Section Information and Communications Technology)
Article
Turbo Coded OFDM Combined with MIMO Antennas Based on Matched Interleaver for Coded-Cooperative Wireless Communication
by Rahim Umar, Fengfan Yang and Shoaib Mughal
Information 2017, 8(2), 63; https://doi.org/10.3390/info8020063 - 13 Jun 2017
Cited by 8 | Viewed by 5897
Abstract
A turbo coded cooperative orthogonal frequency division multiplexing (OFDM) scheme with multiple-input multiple-output (MIMO) antennas is considered, and its performance over a fast Rayleigh fading channel is evaluated. The turbo coded OFDM incorporates the MIMO (2 × 2) Alamouti space-time block code. The interleaver design and its placement always play a vital role in the performance of a turbo coded cooperation scheme. Therefore, a code-matched interleaver (CMI) is selected as the optimum choice of interleaver and is placed at the relay node. The performance of the CMI is evaluated in a turbo coded OFDM system over an additive white Gaussian noise (AWGN) channel. Moreover, the performance of the CMI is also evaluated in the turbo coded OFDM system with MIMO antennas over a fast Rayleigh fading channel. The modulation schemes chosen are binary phase shift keying (BPSK), quadrature phase shift keying (QPSK) and 16-quadrature amplitude modulation (16QAM). Soft demodulators are employed along with a joint iterative soft-input soft-output (SISO) turbo decoder at the destination node. Monte Carlo simulation results reveal that the turbo coded cooperative OFDM scheme with MIMO antennas achieves coding gain, diversity gain and cooperation gain over the direct transmission scheme under identical conditions. Full article
(This article belongs to the Section Information and Communications Technology)
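The 2 × 2 Alamouti space-time block code mentioned in the abstract can be sketched as follows; this shows only the standard Alamouti encoder, not the paper's full turbo-coded cooperative OFDM chain:

```python
# Hedged sketch of the 2x2 Alamouti space-time block code: two symbols are sent
# over two antennas in two time slots.
import numpy as np

def alamouti_encode(s1: complex, s2: complex) -> np.ndarray:
    """Rows = time slots, columns = transmit antennas."""
    return np.array([[s1,             s2],
                     [-np.conj(s2),   np.conj(s1)]])

# Example with two QPSK symbols:
print(alamouti_encode(1 + 1j, 1 - 1j))
```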
Article
Identifying High Quality Document–Summary Pairs through Text Matching
by Yongshuai Hou, Yang Xiang, Buzhou Tang, Qingcai Chen, Xiaolong Wang and Fangze Zhu
Information 2017, 8(2), 64; https://doi.org/10.3390/info8020064 - 12 Jun 2017
Cited by 3 | Viewed by 6100
Abstract
Text summarization, namely automatically generating a short summary of a given document, is a difficult task in natural language processing. Nowadays, deep learning as a new technique has gradually been deployed for text summarization, but there is still a lack of large-scale, high-quality datasets for this technique. In this paper, we propose a novel deep learning method to identify high quality document–summary pairs for building a large-scale pairs dataset. Concretely, a long short-term memory (LSTM)-based model was designed to measure the quality of document–summary pairs. In order to leverage information across all parts of each document, we further propose an improved LSTM-based model by removing the forget gate in the LSTM unit. Experiments conducted on the training set and the test set built upon Sina Weibo (a Chinese microblog website similar to Twitter) showed that the LSTM-based models significantly outperformed baseline models with regard to the area under the receiver operating characteristic curve (AUC) value. Full article
(This article belongs to the Special Issue Text Mining Applications and Theory)
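As a hedged illustration of the architectural change described in the abstract (removing the forget gate from the LSTM unit), a minimal NumPy step function might look like this; the weight layout and shapes are assumptions of the sketch, not the authors' trained model:

```python
# Hedged NumPy sketch of an LSTM cell without a forget gate: the cell state accumulates
# information from all previous steps instead of being partially erased.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def no_forget_lstm_step(x, h_prev, c_prev, W, U, b):
    """One step of an LSTM whose forget gate has been removed.
    W, U, b stack the input, candidate and output gates: shapes (3h, d), (3h, h), (3h,)."""
    hidden = h_prev.shape[0]
    z = W @ x + U @ h_prev + b
    i = sigmoid(z[:hidden])               # input gate
    g = np.tanh(z[hidden:2 * hidden])     # candidate cell update
    o = sigmoid(z[2 * hidden:])           # output gate
    c = c_prev + i * g                    # no forget gate: old state is kept in full
    h = o * np.tanh(c)
    return h, c
```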
Article
Exponential Operations and an Aggregation Method for Single-Valued Neutrosophic Numbers in Decision Making
by Zhikang Lu and Jun Ye
Information 2017, 8(2), 62; https://doi.org/10.3390/info8020062 - 07 Jun 2017
Cited by 22 | Viewed by 3540
Abstract
As an extension of an intuitionistic fuzzy set, a single-valued neutrosophic set is described independently by the membership functions of its truth, indeterminacy, and falsity, which is a subclass of a neutrosophic set (NS). However, in existing exponential operations and their aggregation methods for neutrosophic numbers (NNs) (basic elements in NSs), the exponents (weights) are positive real numbers in unit intervals under neutrosophic decision-making environments. As a supplement, this paper defines new exponential operations of single-valued NNs (basic elements in a single-valued NS), where positive real numbers are used as the bases, and single-valued NNs are used as the exponents. Then, we propose a single-valued neutrosophic weighted exponential aggregation (SVNWEA) operator based on the exponential operational laws of single-valued NNs and the SVNWEA operator-based decision-making method. Finally, an illustrative example shows the applicability and rationality of the presented method. A comparison with a traditional method demonstrates that the new decision-making method is more appropriate and effective. Full article
(This article belongs to the Section Information Theory and Methodology)
Article
Information and Inference
by Paul Walton
Information 2017, 8(2), 61; https://doi.org/10.3390/info8020061 - 27 May 2017
Cited by 5 | Viewed by 5010
Abstract
Inference is expressed using information and is therefore subject to the limitations of information. The conventions that determine the reliability of inference have developed in information ecosystems under the influence of a range of selection pressures. These conventions embed limitations in information measures like quality, pace and friction caused by selection trade-offs. Some selection pressures improve the reliability of inference; others diminish it by reinforcing the limitations of the conventions. This paper shows how to apply these ideas to inference in order to analyse the limitations; the analysis is applied to various theories of inference including examples from the philosophies of science and mathematics as well as machine learning. The analysis highlights the limitations of these theories and how different, seemingly competing, ideas about inference can relate to each other. Full article
(This article belongs to the Section Information Theory and Methodology)
Article
Correction of Outliers in Temperature Time Series Based on Sliding Window Prediction in Meteorological Sensor Network
by Li Ma, Xiaodu Gu and Baowei Wang
Information 2017, 8(2), 60; https://doi.org/10.3390/info8020060 - 24 May 2017
Cited by 19 | Viewed by 9457
Abstract
In order to detect outliers in temperature time series data for improving data quality and decision-making quality related to design and operation, we propose an algorithm based on sliding window prediction. Firstly, the time series is segmented based on the sliding window. Then, a prediction model is established based on the historical data to predict the future value. If the difference between a predicted value and a measured value is larger than the preset threshold, the point is judged to be an outlier and is then corrected. In this paper, the sliding window and the parameter settings of the algorithm are discussed, and the algorithm is verified on actual data. The method does not require pre-classifying the abnormal points, runs fast, and can handle large-scale data. The experimental results show that the proposed algorithm not only effectively detects outliers in meteorological time series but also noticeably improves the correction efficiency. Full article
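A hedged sketch of the sliding-window prediction idea summarised above follows; the linear-trend predictor and the parameter values stand in for the paper's prediction model and tuned settings:

```python
# Hedged sketch: predict each point from the preceding window, flag it as an outlier if the
# residual exceeds a threshold, and replace it with the prediction.
import numpy as np

def correct_outliers(series: np.ndarray, window: int = 12, threshold: float = 3.0) -> np.ndarray:
    """Return a copy of `series` with detected outliers replaced by their predicted values."""
    cleaned = series.astype(float).copy()
    t = np.arange(window)
    for i in range(window, len(cleaned)):
        hist = cleaned[i - window:i]                  # already-corrected history
        slope, intercept = np.polyfit(t, hist, 1)     # linear trend over the window
        predicted = slope * window + intercept        # one-step-ahead prediction
        if abs(series[i] - predicted) > threshold:    # preset threshold in the data's units
            cleaned[i] = predicted                    # judge as outlier and correct it
    return cleaned
```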
Article
A Two-Stage Joint Model for Domain-Specific Entity Detection and Linking Leveraging an Unlabeled Corpus
by Hongzhi Zhang, Weili Zhang, Tinglei Huang, Xiao Liang and Kun Fu
Information 2017, 8(2), 59; https://doi.org/10.3390/info8020059 - 22 May 2017
Viewed by 4063
Abstract
The intensive construction of domain-specific knowledge bases (DSKB) has posed an urgent demand for research on domain-specific entity detection and linking (DSEDL). Joint models are usually adopted in DSEDL tasks, but data imbalance and high computational complexity exist in these models. Moreover, traditional feature representation methods are insufficient for domain-specific tasks, due to problems such as the lack of labeled data, link sparseness in DSKBs, and so on. In this paper, a two-stage joint (TSJ) model is proposed to solve the data imbalance problem by discriminatively processing entity mentions with different degrees of ambiguity. In addition, three novel methods are put forward to generate effective features by incorporating an unlabeled corpus. One crucial feature involving entity detection is the mention type, extracted by a long short-term memory (LSTM) model trained on automatically annotated data. The other two types of features mainly involve entity linking, including the inner-document topical coherence, which is measured based on entity co-occurring relationships in the corpus, and the cross-document entity coherence evaluated using similar documents. An overall 74.26% F1 value is obtained on a dataset of real-world movie comments, demonstrating the effectiveness of the proposed approach and indicating its potential for use in real-world domain-specific applications. Full article
(This article belongs to the Section Information Processes)
Article
A Novel Identity-Based Signcryption Scheme in the Standard Model
by Yueying Huang and Junjie Yang
Information 2017, 8(2), 58; https://doi.org/10.3390/info8020058 - 19 May 2017
Cited by 6 | Viewed by 4047
Abstract
Identity-based signcryption is a useful cryptographic primitive that provides both authentication and confidentiality for identity-based crypto systems. It is challenging to build a secure identity-based signcryption scheme that can be proven secure in a standard model. In this paper, we address the issue and propose a novel construction of identity-based signcryption which enjoys IND-CCA security and existential unforgeability without resorting to the random oracle model. Comparisons demonstrate that the new scheme achieves stronger security, better performance efficiency and shorter system parameters. Full article
(This article belongs to the Special Issue Secure Data Storage and Sharing Techniques in Cloud Computing)
Article
An Effective and Robust Single Image Dehazing Method Using the Dark Channel Prior
by Xiaoyan Yuan, Mingye Ju, Zhenfei Gu and Shuwang Wang
Information 2017, 8(2), 57; https://doi.org/10.3390/info8020057 - 17 May 2017
Cited by 11 | Viewed by 6529
Abstract
In this paper, we propose a single image dehazing method aiming at addressing the inherent limitations of the extensively employed dark channel prior (DCP). More concretely, we introduce the Gaussian mixture model (GMM) to segment the input hazy image into scenes based on the haze density feature map. With the segmentation results, combined with the proposed sky region detection method, we can effectively recognize the sky region, which the DCP cannot handle well. On the basis of sky region detection, we then present an improved global atmospheric light estimation method to increase the estimation accuracy of the atmospheric light. Further, we present a multi-scale fusion-based strategy to obtain the transmission map based on the DCP, which can significantly reduce the blocking artifacts of the transmission map. To further rectify the error-prone transmission within the sky region, an adaptive sky region transmission correction method is also presented. Finally, to compensate for the segmentation-blindness of the GMM, we adopt the guided total variation (GTV) to tackle this problem while eliminating the extensive texture details contained in the transmission map. Experimental results verify the power of our method and show its superiority over several state-of-the-art methods. Full article
(This article belongs to the Section Information Processes)
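For orientation, the standard dark channel prior computation that this work builds on (He et al.) can be sketched as below; the sketch omits the paper's GMM scene segmentation, sky-region handling, multi-scale fusion and guided-total-variation refinement:

```python
# Hedged sketch of the standard dark channel prior and the coarse transmission estimate.
import numpy as np
from scipy.ndimage import minimum_filter

def dark_channel(image: np.ndarray, patch: int = 15) -> np.ndarray:
    """image: (H, W, 3) float array in [0, 1]. Per-pixel minimum over channels and a local patch."""
    min_rgb = image.min(axis=2)
    return minimum_filter(min_rgb, size=patch)

def estimate_transmission(image: np.ndarray, airlight: np.ndarray, omega: float = 0.95) -> np.ndarray:
    """Coarse transmission map t = 1 - omega * dark_channel(I / A), with airlight A of shape (3,)."""
    return 1.0 - omega * dark_channel(image / airlight)
```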
Article
Dynamic, Interactive and Visual Analysis of Population Distribution and Mobility Dynamics in an Urban Environment Using the Mobility Explorer Framework
by Jan Peters-Anders, Zaheer Khan, Wolfgang Loibl, Helmut Augustin and Arno Breinbauer
Information 2017, 8(2), 56; https://doi.org/10.3390/info8020056 - 15 May 2017
Cited by 3 | Viewed by 5624
Abstract
This paper investigates the extent to which a mobile data source can be utilised to generate new information intelligence for decision-making in smart city planning processes. In this regard, the Mobility Explorer framework is introduced and applied to the City of Vienna (Austria) by using anonymised mobile phone data from a mobile phone service provider. This framework identifies five necessary elements that are needed to develop complex planning applications. As part of the investigation and experiments a new dynamic software tool, called Mobility Explorer, has been designed and developed based on the requirements of the planning department of the City of Vienna. As a result, the Mobility Explorer enables city stakeholders to interactively visualise the dynamic diurnal population distribution, mobility patterns and various other complex outputs for planning needs. Based on the experiences during the development phase, this paper discusses mobile data issues, presents the visual interface, performs various user-defined analyses, demonstrates the application’s usefulness and critically reflects on the evaluation results of the citizens’ motion exploration that reveal the great potential of mobile phone data in smart city planning but also depict its limitations. These experiences and lessons learned from the Mobility Explorer application development provide useful insights for other cities and planners who want to make informed decisions using mobile phone data in their city planning processes through dynamic visualisation of Call Data Record (CDR) data. Full article
(This article belongs to the Special Issue Smart City Technologies, Systems and Applications)
Article
An Experience-Based Framework for Evaluating Tourism Mobile Commerce Platforms
by Hongbo Lyu and Zuopeng (Justin) Zhang
Information 2017, 8(2), 55; https://doi.org/10.3390/info8020055 - 12 May 2017
Cited by 2 | Viewed by 4256
Abstract
This research presents and studies an evaluation framework for tourism mobile commerce platforms based on tourists' experience. Synthesizing prior literature, relevant theories, and the results of online questionnaires, we select 24 evaluation indices for preliminary evaluation. Using the exploratory factor analysis method, we then extract from these indices the following five principal factors: interactive experience, infrastructure experience, personalization experience, product or service quality experience, and product operation experience. We further employ confirmatory factor analysis to test the construction of the evaluation framework and demonstrate that the evaluation framework is both robust and effective. Finally, based on our proposed evaluation framework, we empirically evaluate the most popular mobile commerce platforms in China (Ctrip and Qunaer) using the fuzzy comprehensive evaluation method. Full article
(This article belongs to the Section Information Applications)
Article
A Method for Multi-Criteria Group Decision Making with 2-Tuple Linguistic Information Based on Cloud Model
by Haobo Zhang, Yunna Wu, Jianwei Gao and Chuanbo Xu
Information 2017, 8(2), 54; https://doi.org/10.3390/info8020054 - 12 May 2017
Cited by 5 | Viewed by 4071
Abstract
This paper presents a new approach to solve the multi-criteria group decision making (MCGDM) problem where criteria values take the form of 2-tuple linguistic information. Firstly, a 2-tuple hybrid ordered weighted geometric (THOWG) operator is proposed, which synthetically considers the importance of both the individual value and its ordered position so as to overcome the defects of existing operators. Secondly, combining the advantages of the cloud model and the 2-tuple linguistic variable, a new cloud-generating method is proposed to transform 2-tuple linguistic variables into clouds. Thirdly, we further define some new cloud algorithms, such as the cloud possibility degree and the cloud support degree, which can be used to compare clouds and to determine the criteria weights, respectively. Furthermore, a new approach for 2-tuple linguistic group decision making is presented on the basis of the THOWG operator, the improved cloud-generating method and the new cloud algorithms. Finally, an example of assessing the social effects of biomass power plants (BPPs) is presented to verify the applicability and feasibility of the developed approach, and a comparative analysis is also conducted to validate the effectiveness of the proposed method. Full article
(This article belongs to the Section Information Theory and Methodology)
Article
A Filter Structure for Arbitrary Re-Sampling Ratio Conversion of a Discrete Signal
by Hong Zhang and Changjian Zhu
Information 2017, 8(2), 53; https://doi.org/10.3390/info8020053 - 12 May 2017
Cited by 4 | Viewed by 4295
Abstract
In this report, we studied the sampling synchronization of a discrete signal in the receiver of a communication system and found that the frequency of the received signal usually exhibits some unpredictable deviations. We observed many harmonics caused by the frequency deviations of the discrete received signal. These findings indicate that signal sampling synchronization is an important research technique when using discrete Fourier transforms (DFT) to analyze the harmonics of discrete signals. We investigated the influence of these harmonics on the performance of signal sampling and studied the frequency estimation of the received signal. Based on the frequency estimation of the received signal, the sampling rate of the discrete signal was converted using a modified Farrow filter to achieve sampling synchronization for the received signal. The algorithm discussed here can be applied to sampling synchronization for monitoring and control systems. Finally, simulations and experimental results are presented. Full article
(This article belongs to the Section Information and Communications Technology)
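As a hedged illustration of arbitrary-ratio re-sampling, the fragment below uses a first-order (linear-interpolation) Farrow-style structure; the paper's modified Farrow filter is higher order, so this only shows the ratio-conversion mechanism itself:

```python
# Hedged sketch: the output clock advances by `ratio` input samples per output sample,
# and the fractional part mu selects the interpolation point between two input samples.
import numpy as np

def resample(x: np.ndarray, ratio: float) -> np.ndarray:
    """Resample x by an arbitrary ratio (input samples consumed per output sample)."""
    out = []
    pos = 0.0
    while pos < len(x) - 1:
        n = int(pos)                                     # integer sample index
        mu = pos - n                                     # fractional interval, 0 <= mu < 1
        out.append((1.0 - mu) * x[n] + mu * x[n + 1])    # first-order Farrow branch
        pos += ratio
    return np.asarray(out)
```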
Article
Multi-Label Classification from Multiple Noisy Sources Using Topic Models
by Divya Padmanabhan, Satyanath Bhat, Shirish Shevade and Y. Narahari
Information 2017, 8(2), 52; https://doi.org/10.3390/info8020052 - 05 May 2017
Cited by 9 | Viewed by 5689
Abstract
Multi-label classification is a well-known supervised machine learning setting where each instance is associated with multiple classes. Examples include annotation of images with multiple labels, assigning multiple tags for a web page, etc. Since several labels can be assigned to a single instance, one of the key challenges in this problem is to learn the correlations between the classes. Our first contribution assumes labels from a perfect source. Towards this, we propose a novel topic model (ML-PA-LDA). The distinguishing feature in our model is that classes that are present as well as the classes that are absent generate the latent topics and hence the words. Extensive experimentation on real world datasets reveals the superior performance of the proposed model. A natural source for procuring the training dataset is through mining user-generated content or directly through users in a crowdsourcing platform. In this more practical scenario of crowdsourcing, an additional challenge arises as the labels of the training instances are provided by noisy, heterogeneous crowd-workers with unknown qualities. With this motivation, we further augment our topic model to the scenario where the labels are provided by multiple noisy sources and refer to this model as ML-PA-LDA-MNS. With experiments on simulated noisy annotators, the proposed model learns the qualities of the annotators well, even with minimal training data. Full article
(This article belongs to the Special Issue Text Mining Applications and Theory)
Article
Subtraction and Division Operations of Simplified Neutrosophic Sets
by Jun Ye
Information 2017, 8(2), 51; https://doi.org/10.3390/info8020051 - 04 May 2017
Cited by 28 | Viewed by 5042
Abstract
A simplified neutrosophic set is characterized by a truth-membership function, an indeterminacy-membership function, and a falsity-membership function, which is a subclass of the neutrosophic set and contains the concepts of an interval neutrosophic set and a single valued neutrosophic set. It is a powerful structure in expressing indeterminate and inconsistent information. However, there has only been one paper until now—to the best of my knowledge—on the subtraction and division operators in the basic operational laws of neutrosophic single-valued numbers defined in existing literature. Therefore, this paper proposes subtraction operation and division operation for simplified neutrosophic sets, including single valued neutrosophic sets and interval neutrosophic sets respectively, under some constrained conditions to form the integral theoretical framework of simplified neutrosophic sets. In addition, we give numerical examples to illustrate the defined operations. The subtraction and division operations are very important in many practical applications, such as decision making and image processing. Full article
(This article belongs to the Section Information Theory and Methodology)
Article
The Diffraction Research of Cylindrical Block Effect Based on Indoor 45 GHz Millimeter Wave Measurements
by Xingrong Li, Yongqian Li and Baogang Li
Information 2017, 8(2), 50; https://doi.org/10.3390/info8020050 - 02 May 2017
Cited by 4 | Viewed by 4631
Abstract
In this paper, four kinds of block diffraction models were proposed on the basis of the uniform geometrical theory of diffraction, and these models were validated by experiments with a 45 GHz millimeter wave in the laboratory. The results are in agreement with the theoretical analysis. Some errors exist in the measurement results because of the imperfect experimental environment. The measurement error for a single conducting cylindrical block was less than 0.5 dB, and the error for a single human block in the school laboratory was less than 1 dB, while in the factory laboratory environment the peak-to-peak error reached 1.6 dB. The attenuation of a human body block was about 5.9–9.2 dB lower than that of the single conducting cylinder. A human body and a conducting cylinder were used together as a block in model (c) and model (d), but the positions of the cylinder in the two models were different. The measurement results showed that the attenuation of model (d) is about 3 dB higher than that of model (c). Full article
Article
Automated Prostate Gland Segmentation Based on an Unsupervised Fuzzy C-Means Clustering Technique Using Multispectral T1w and T2w MR Imaging
by Leonardo Rundo, Carmelo Militello, Giorgio Russo, Antonio Garufi, Salvatore Vitabile, Maria Carla Gilardi and Giancarlo Mauri
Information 2017, 8(2), 49; https://doi.org/10.3390/info8020049 - 28 Apr 2017
Cited by 48 | Viewed by 8082
Abstract
Prostate imaging analysis is a difficult task in the diagnosis, therapy, and staging of prostate cancer. In clinical practice, Magnetic Resonance Imaging (MRI) is increasingly used thanks to its morphologic and functional capabilities. However, manual detection and delineation of the prostate gland on multispectral MRI data is currently a time-expensive and operator-dependent procedure. Efficient computer-assisted segmentation approaches are not yet able to fully address these issues, but they have the potential to do so. In this paper, a novel automatic prostate MR image segmentation method based on the Fuzzy C-Means (FCM) clustering algorithm, which enables multispectral T1-weighted (T1w) and T2-weighted (T2w) MRI anatomical data processing, is proposed. This approach, using an unsupervised Machine Learning technique, helps to segment the prostate gland effectively. A total of 21 patients with suspicion of prostate cancer were enrolled in this study. Volume-based metrics, spatial overlap-based metrics and spatial distance-based metrics were used to quantitatively evaluate the accuracy of the obtained segmentation results with respect to the gold-standard boundaries delineated manually by an expert radiologist. The proposed multispectral segmentation method was compared with the same processing pipeline applied on either T2w or T1w MR images alone. The multispectral approach considerably outperforms the monoparametric ones, achieving an average Dice Similarity Coefficient of 90.77 ± 1.75, compared with 81.90 ± 6.49 and 82.55 ± 4.93 obtained by processing T2w and T1w imaging alone, respectively. Combining T2w and T1w MR image structural information significantly enhances prostate gland segmentation by exploiting the uniform gray appearance of the prostate on T1w MRI. Full article
(This article belongs to the Special Issue Fuzzy Logic for Image Processing)
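The spatial-overlap metric quoted in the abstract, the Dice Similarity Coefficient, can be computed as in this short sketch:

```python
# Hedged sketch: Dice Similarity Coefficient between an automatic segmentation mask
# and the expert's gold-standard mask.
import numpy as np

def dice_coefficient(auto_mask: np.ndarray, gold_mask: np.ndarray) -> float:
    """Both inputs are boolean arrays of the same shape; returns DSC in [0, 1]."""
    auto = auto_mask.astype(bool)
    gold = gold_mask.astype(bool)
    intersection = np.logical_and(auto, gold).sum()
    return 2.0 * intersection / (auto.sum() + gold.sum())

# A DSC of 0.9077 corresponds to the 90.77 average reported for the multispectral approach.
```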
Article
Assembling Deep Neural Networks for Medical Compound Figure Detection
by Yuhai Yu, Hongfei Lin, Jiana Meng, Xiaocong Wei and Zhehuan Zhao
Information 2017, 8(2), 48; https://doi.org/10.3390/info8020048 - 21 Apr 2017
Cited by 8 | Viewed by 5083
Abstract
Compound figure detection on figures and associated captions is the first step to making medical figures from biomedical literature available for further analysis. The performance of traditional methods is limited to the choice of hand-engineering features and prior domain knowledge. We train multiple convolutional neural networks (CNNs), long short-term memory (LSTM) networks, and gated recurrent unit (GRU) networks on top of pre-trained word vectors to learn textual features from captions and employ deep CNNs to learn visual features from figures. We then identify compound figures by combining textual and visual prediction. Our proposed architecture obtains remarkable performance in three run types—textual, visual and mixed—and achieves better performance in ImageCLEF2015 and ImageCLEF2016. Full article
Article
Developing Knowledge-Based Citizen Participation Platform to Support Smart City Decision Making: The Smarticipate Case Study
by Zaheer Khan, Jens Dambruch, Jan Peters-Anders, Andreas Sackl, Anton Strasser, Peter Fröhlich, Simon Templer and Kamran Soomro
Information 2017, 8(2), 47; https://doi.org/10.3390/info8020047 - 21 Apr 2017
Cited by 35 | Viewed by 8772
Abstract
Citizen participation for social innovation and co-creating urban regeneration proposals can be greatly facilitated by innovative IT systems. Such systems can use Open Government Data, visualise urban proposals in 3D models and provide automated feedback on the feasibility of the proposals. Using such a system as a communication platform between citizens and city administrations provides an integrated top-down and bottom-up urban planning and decision-making approach to smart cities. However, generating automated feedback on citizens' proposals requires modelling domain-specific knowledge, i.e., vocabulary and rules, which can be applied to spatial and temporal 3D models. This paper presents the European Commission-funded H2020 smarticipate project, which aims to address the above challenge by applying it to three smart cities: Hamburg, Rome and RBKC-London. Whilst the proposed system architecture indicates various innovative features, a proof of concept of the automated feedback feature for the Hamburg use case 'planting trees' is demonstrated. Early results and lessons learned show that it is feasible to provide automated feedback on citizen-initiated proposals on specific topics. However, it is not straightforward to generalise this feature to cover more complex concepts and conditions, which require specifying comprehensive domain languages, rules and appropriate tools to process them. This paper also highlights the strengths of the smarticipate platform, discusses challenges to realising its different features and suggests potential solutions. Full article
(This article belongs to the Special Issue Smart City Technologies, Systems and Applications)
Article
A Framework for Systematic Refinement of Trustworthiness Requirements
by Nazila Gol Mohammadi and Maritta Heisel
Information 2017, 8(2), 46; https://doi.org/10.3390/info8020046 - 20 Apr 2017
Cited by 8 | Viewed by 6898
Abstract
The trustworthiness of systems that support complex collaborative business processes is an emergent property. In order to address users' trust concerns, the trustworthiness requirements of software systems must be elicited and satisfied. The aim of this paper is to address the gap that exists between end-users' trust concerns and the lack of implementation of proper trustworthiness requirements. New technologies like cloud computing bring new capabilities for hosting and offering complex collaborative business operations. However, these advances might bring undesirable side effects, e.g., introducing new vulnerabilities and threats caused by collaboration and data exchange over the Internet. Hence, users become more concerned about trust. Trust is subjective; trustworthiness requirements for addressing trust concerns are difficult to elicit, especially if there are different parties involved in the business process. We propose a user-centered trustworthiness requirement analysis and modeling framework. We integrate the subjective trust concerns into goal models and embed them into business process models as objective trustworthiness requirements. The Business Process Model and Notation is extended to enable the modeling of trustworthiness requirements. This paper focuses on the challenges of eliciting, refining and modeling trustworthiness requirements. An application example from the healthcare domain is used to demonstrate our approach. Full article
(This article belongs to the Special Issue Trust, Privacy and Security in Digital Business)
Article
A Shallow Network with Combined Pooling for Fast Traffic Sign Recognition
by Jianming Zhang, Qianqian Huang, Honglin Wu and Yukai Liu
Information 2017, 8(2), 45; https://doi.org/10.3390/info8020045 - 17 Apr 2017
Cited by 20 | Viewed by 6941
Abstract
Traffic sign recognition plays an important role in intelligent transportation systems. Motivated by the recent success of deep learning in the application of traffic sign recognition, we present a shallow network architecture based on convolutional neural networks (CNNs). The network consists of only three convolutional layers for feature extraction, and its parameters are learned through backward optimization. We propose the method of combining different pooling operations to improve sign recognition performance. In view of real-time performance, we use the ReLU activation function to improve computational efficiency. In addition, a linear layer with softmax loss is taken as the classifier. We use the German traffic sign recognition benchmark (GTSRB) to evaluate the network on a CPU, without expensive GPU acceleration hardware, under real-world recognition conditions. The experimental results indicate that the proposed method is effective and fast, and it achieves the highest recognition rate compared with other state-of-the-art algorithms. Full article
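A hedged sketch of the "combined pooling" idea mentioned in the abstract is shown below; since the abstract does not say how the pooling outputs are merged, the element-wise average used here is an assumption:

```python
# Hedged sketch: apply max pooling and average pooling to the same feature map and merge them.
import numpy as np

def pool2d(feature_map: np.ndarray, size: int, reducer) -> np.ndarray:
    """Non-overlapping pooling of an (H, W) map whose sides are divisible by `size`."""
    h, w = feature_map.shape
    blocks = feature_map.reshape(h // size, size, w // size, size)
    return reducer(blocks, axis=(1, 3))

def combined_pool(feature_map: np.ndarray, size: int = 2) -> np.ndarray:
    """Element-wise average of max-pooled and average-pooled responses (an illustrative merge rule)."""
    return 0.5 * (pool2d(feature_map, size, np.max) + pool2d(feature_map, size, np.mean))
```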
Article
BBDS: Blockchain-Based Data Sharing for Electronic Medical Records in Cloud Environments
by Qi Xia, Emmanuel Boateng Sifah, Abla Smahi, Sandro Amofa and Xiaosong Zhang
Information 2017, 8(2), 44; https://doi.org/10.3390/info8020044 - 17 Apr 2017
Cited by 441 | Viewed by 26634
Abstract
Disseminating medical data beyond the protected cloud of institutions poses severe risks to patients’ privacy, as breaches push them to the point where they abstain from full disclosure of their condition. This situation negatively impacts the patient, scientific research, and all stakeholders. To address this challenge, we propose a blockchain-based data sharing framework that sufficiently addresses the access control challenges associated with sensitive data stored in the cloud using immutability and built-in autonomy properties of the blockchain. Our system is based on a permissioned blockchain which allows access to only invited, and hence verified users. As a result of this design, further accountability is guaranteed as all users are already known and a log of their actions is kept by the blockchain. The system permits users to request data from the shared pool after their identities and cryptographic keys are verified. The evidence from the system evaluation shows that our scheme is lightweight, scalable, and efficient. Full article
(This article belongs to the Special Issue Secure Data Storage and Sharing Techniques in Cloud Computing)
Article
Object Tracking by a Combination of Discriminative Global and Generative Multi-Scale Local Models
by Zhiguo Song, Jifeng Sun and Jialin Yu
Information 2017, 8(2), 43; https://doi.org/10.3390/info8020043 - 11 Apr 2017
Cited by 2 | Viewed by 4904
Abstract
Object tracking is a challenging task in many computer vision applications due to occlusion, scale variation, background clutter, etc. In this paper, we propose a tracking algorithm that combines discriminative global and generative multi-scale local models. In the global model, we train a classifier with sparse discriminative features to separate the target object from the background based on holistic templates. In the multi-scale local model, the object is represented by multi-scale local sparse representation histograms, which exploit the complementary partial and spatial information of an object across different scales. Finally, a collaborative similarity score of each candidate target is input into a Bayesian inference framework to estimate the target state sequentially during tracking. Experimental results on various challenging video sequences show that the proposed method performs favorably compared to several state-of-the-art trackers. Full article
Article
Security Awareness of the Digital Natives
by Vasileios Gkioulos, Gaute Wangen, Sokratis K. Katsikas, George Kavallieratos and Panayiotis Kotzanikolaou
Information 2017, 8(2), 42; https://doi.org/10.3390/info8020042 - 08 Apr 2017
Cited by 24 | Viewed by 6654
Abstract
Young generations make extensive use of mobile devices, such as smartphones, tablets and laptops, while a plethora of security risks associated with such devices are induced by vulnerabilities related to user behavior. Furthermore, the number of security breaches on or via portable devices increases exponentially. Thus, deploying suitable risk treatments requires the investigation of how the digital natives (young people, born and bred in the digital era) use their mobile devices and their level of security awareness, in order to identify common usage patterns with negative security impact. In this article, we present the results of a survey performed across a multinational sample of digital natives with distinct backgrounds and levels of competence in terms of security, to identify divergences in user behavior due to regional, educational and other factors. Our results highlight significant influences on the behavior of digital natives, arising from user confidence, educational background, and parameters related to usability and accessibility. The outcomes of this study justify the need for further analysis of the topic, in order to identify the influence of fine-grained semantics, but also the consolidation of wide and robust user-models. Full article
Article
Correlation Coefficient between Dynamic Single Valued Neutrosophic Multisets and Its Multiple Attribute Decision-Making Method
by Jun Ye
Information 2017, 8(2), 41; https://doi.org/10.3390/info8020041 - 07 Apr 2017
Cited by 25 | Viewed by 3794
Abstract
Based on dynamic information collected from different time intervals in some real situations, this paper first proposes a dynamic single valued neutrosophic multiset (DSVNM) to express dynamic information, together with operational relations of DSVNMs. Then, a correlation coefficient between DSVNMs and a weighted correlation coefficient between DSVNMs are presented to measure the correlation degrees between DSVNMs, and their properties are investigated. Based on the weighted correlation coefficient of DSVNMs, a multiple attribute decision-making method is established under a DSVNM environment, in which the evaluation values of alternatives with respect to attributes are collected from different time intervals and are represented in the form of DSVNMs. The alternatives are ranked according to the weighted correlation coefficient between each alternative and the ideal alternative, which takes the attribute weights and the time weights into account, and thus the best one(s) can be determined. Finally, a practical example shows the application of the proposed method. Full article
(This article belongs to the Section Information Theory and Methodology)