
Table of Contents

Information, Volume 9, Issue 5 (May 2018)

  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the tables of contents of newly released issues.
  • PDF is the official format for papers, which are published in both HTML and PDF forms. To view a paper in PDF format, click the "PDF Full-text" link and open it with the free Adobe Reader.
Displaying articles 1-27
Open AccessArticle Key Concept Identification: A Comprehensive Analysis of Frequency and Topical Graph-Based Approaches
Information 2018, 9(5), 128; https://doi.org/10.3390/info9050128
Received: 9 February 2018 / Revised: 15 May 2018 / Accepted: 15 May 2018 / Published: 18 May 2018
PDF Full-text (693 KB) | HTML Full-text | XML Full-text
Abstract
Automatic key concept extraction from text is a central and challenging task in information extraction, information retrieval, digital libraries, ontology learning, and text analysis. Statistical frequency and topical graph-based ranking are two kinds of potentially powerful and leading unsupervised approaches in this area, devised to address the problem. To utilize the potential of these approaches and improve key concept identification, a comprehensive performance analysis of these approaches on datasets from different domains is needed. The objective of the study presented in this paper is to perform a comprehensive empirical analysis of selected frequency and topical graph-based algorithms for key concept extraction on three different datasets, in order to identify the major sources of error in these approaches. For the experimental analysis, we selected TF-IDF, KP-Miner, and TopicRank. Three major sources of error, i.e., frequency errors, syntactic errors, and semantic errors, and the factors that contribute to these errors are identified. Analysis of the results reveals that the performance of the selected approaches is significantly degraded by these errors. These findings can help us develop an intelligent solution for key concept extraction in the future. Full article
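
The frequency baseline above can be illustrated concretely. As a hedged sketch (the paper evaluates phrase-level extractors, while this toy version scores single tokens), TF-IDF ranks a term highly when it is frequent in one document but rare across the corpus:

```python
import math
from collections import Counter

def tfidf_scores(docs):
    """Score each term in each document by TF-IDF.

    docs: list of token lists. Returns a list of {term: score} dicts.
    Illustrative only: real key concept extractors score candidate
    phrases, not single tokens.
    """
    n = len(docs)
    # document frequency: number of docs containing each term
    df = Counter()
    for doc in docs:
        df.update(set(doc))
    scores = []
    for doc in docs:
        tf = Counter(doc)
        scores.append({t: (tf[t] / len(doc)) * math.log(n / df[t])
                       for t in tf})
    return scores

docs = [["graph", "ranking", "graph"], ["frequency", "ranking"]]
s = tfidf_scores(docs)
# "graph" appears only in doc 0, so it outscores the ubiquitous "ranking"
```

A term occurring in every document gets an IDF of log(1) = 0, which is one of the "frequency errors" such baselines are prone to.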

Open AccessArticle TwitPersonality: Computing Personality Traits from Tweets Using Word Embeddings and Supervised Learning
Information 2018, 9(5), 127; https://doi.org/10.3390/info9050127
Received: 15 February 2018 / Revised: 11 May 2018 / Accepted: 15 May 2018 / Published: 18 May 2018
PDF Full-text (436 KB) | HTML Full-text | XML Full-text
Abstract
We are what we do, like, and say. Numerous research efforts have been directed towards the automatic assessment of personality dimensions relying on information gathered from social media platforms, such as lists of friends, music and movie interests, and the endorsements and likes an individual has given. Turning this information into signals and feeding them to supervised learning approaches has proven particularly effective and accurate in computing personality traits and types. Despite the demonstrated accuracy of these approaches, the sheer amount of information needed and the access restrictions involved make them infeasible in real usage scenarios. In this paper, we propose a supervised learning approach to compute personality traits by relying only on what an individual tweets about publicly. The approach segments tweets into tokens, then learns word vector representations as embeddings that are used to feed a supervised classifier. We demonstrate the effectiveness of the approach by measuring the mean squared error of the learned model on an international benchmark of Facebook status updates. We also test the transfer-learning predictive power of this model on an in-house benchmark created by twenty-four panelists who completed a state-of-the-art psychological survey, and we observe good agreement between the personality traits the model extracts from their Twitter posts and those obtained from the survey. Full article
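
The embedding-based pipeline described above can be sketched as follows; mean pooling of token vectors and the toy random embeddings are assumptions for illustration, not the paper's exact setup:

```python
import numpy as np

def tweet_vector(tokens, embeddings, dim=50):
    """Represent a tweet as the mean of its tokens' word vectors.

    `embeddings` maps token -> np.ndarray; out-of-vocabulary tokens
    are skipped. The paper's exact aggregation is not specified here,
    so mean pooling is an assumption.
    """
    vecs = [embeddings[t] for t in tokens if t in embeddings]
    if not vecs:
        return np.zeros(dim)
    return np.mean(vecs, axis=0)

# toy embeddings (random, for illustration only)
rng = np.random.default_rng(0)
emb = {w: rng.standard_normal(50) for w in ["we", "are", "what", "say"]}
x = tweet_vector(["we", "say", "oov"], emb)
# x can now feed any supervised regressor predicting a trait score
```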
(This article belongs to the Special Issue Love & Hate in the Time of Social Media and Social Networks)

Open AccessArticle Element-Weighted Neutrosophic Correlation Coefficient and Its Application in Improving CAMShift Tracker in RGBD Video
Information 2018, 9(5), 126; https://doi.org/10.3390/info9050126
Received: 11 April 2018 / Revised: 10 May 2018 / Accepted: 15 May 2018 / Published: 18 May 2018
Cited by 1 | PDF Full-text (4416 KB) | HTML Full-text | XML Full-text
Abstract
The neutrosophic set (NS) stems from a branch of philosophy dealing with the origin, nature, and scope of neutralities. Many kinds of correlation coefficients and similarity measures have been proposed in the neutrosophic domain. In this work, considering that the neutrosophic elements T (Truth), I (Indeterminacy), and F (Falsity) may contribute differently, an element-weighted neutrosophic correlation coefficient is proposed and applied to improving the CAMShift tracker in RGBD (RGB-Depth) video. The concept of object seeds is proposed and employed for extracting the object region and calculating the depth back-projection. Each candidate seed is represented in the single-valued neutrosophic set (SVNS) domain via three membership functions, T, I, and F. The element-weighted neutrosophic correlation coefficient is then applied to select robust object seeds by fusing three kinds of criteria. Moreover, the proposed correlation coefficient is applied to estimating a robust back-projection by fusing information from both the color and depth domains. Finally, for the scale adaption problem, two alternatives in the neutrosophic domain are proposed, and the correlation coefficient between each proposed alternative and the ideal one is employed to identify the scale. The experimental results reveal that the improved CAMShift tracker performs well under challenging factors such as fast motion, blur, illumination variation, deformation, and camera jitter. Full article
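
For illustration, one common form of a weighted correlation coefficient between single-valued neutrosophic sets is sketched below; the paper's exact element-weighted definition may differ, so treat the formula as an assumption:

```python
def weighted_ns_corr(a, b, w=(1/3, 1/3, 1/3)):
    """Element-weighted correlation between two single-valued
    neutrosophic sets a and b, each a list of (T, I, F) triples.

    Uses the common form  C(A,B) = <A,B>_w / max(||A||_w^2, ||B||_w^2),
    where the weights w = (wT, wI, wF) let T, I, and F contribute
    differently. A sketch only; the paper's definition may differ.
    """
    wt, wi, wf = w
    inner = sum(wt*ta*tb + wi*ia*ib + wf*fa*fb
                for (ta, ia, fa), (tb, ib, fb) in zip(a, b))
    na = sum(wt*t*t + wi*i*i + wf*f*f for t, i, f in a)
    nb = sum(wt*t*t + wi*i*i + wf*f*f for t, i, f in b)
    return inner / max(na, nb)

a = [(0.9, 0.1, 0.0), (0.7, 0.2, 0.1)]
# a set correlates perfectly with itself
assert abs(weighted_ns_corr(a, a) - 1.0) < 1e-12
```

By the Cauchy-Schwarz inequality, the value always lies in [0, 1] for non-negative memberships, with 1 only for proportional sets.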

Open AccessReview A Survey on Efforts to Evolve the Control Plane of Inter-Domain Routing
Information 2018, 9(5), 125; https://doi.org/10.3390/info9050125
Received: 4 April 2018 / Revised: 5 May 2018 / Accepted: 14 May 2018 / Published: 18 May 2018
Cited by 1 | PDF Full-text (427 KB) | HTML Full-text | XML Full-text
Abstract
The Internet’s default inter-domain routing protocol is the Border Gateway Protocol (BGP). With BGP, tens of thousands of Autonomous Systems (ASs) exchange network layer reachability information to manage connectivity among them. BGP was introduced in the early stages of the Internet, and although it is one of the most successful protocols, new desirable features have been difficult to incorporate into the network over the decades. This paper classifies previous works on evolving the control plane of inter-domain routing into three types of approaches: brand new designs, incremental improvements, and inter-domain communication. The main goal of this paper is to provide an understanding of what approaches have been taken to evolve the inter-domain routing control plane. This survey also discusses why the control plane is hard to evolve and offers future perspectives on the topic. Full article
(This article belongs to the Section Information and Communications Technology)

Open AccessArticle Double Distance-Calculation-Pruning for Similarity Search
Information 2018, 9(5), 124; https://doi.org/10.3390/info9050124
Received: 13 March 2018 / Revised: 29 April 2018 / Accepted: 10 May 2018 / Published: 17 May 2018
PDF Full-text (1088 KB) | HTML Full-text | XML Full-text
Abstract
Many modern applications deal with complex data, for which retrieval by similarity plays an important role. The main comparison mechanisms for complex data are based on similarity predicates. The data are usually immersed in metric spaces, where distance functions express the similarity and a lower-bound property is usually employed to avoid distance calculations. Retrieval by similarity is implemented by unary and binary operators. Most studies have aimed at improving the efficiency of unary operators, either by using metric access methods or by exploiting mathematical properties to prune parts of the search space during query answering. Studies on binary operators for similarity joins aim to improve efficiency, and most of them use only the metric lower-bound property for pruning. However, they depend on the query parameters, such as the range radius. In this paper, we propose a generic concept that uses both the lower- and upper-bound properties of metric spaces to avoid more element comparisons. The concept can be applied to any existing similarity retrieval method. We analyze the increase in pruning power and show an example of its application to classical nested-loop join algorithms. Practical evaluation over both synthetic and real data sets shows that our method reduces the number of distance evaluations in similarity joins. Full article
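
The double-bound pruning idea can be sketched on a nested-loop range join; the pivot-based bounds follow from the triangle inequality, but the concrete algorithm below is an illustrative simplification, not the paper's method:

```python
def range_join(R, S, dist, radius, pivot):
    """Nested-loop similarity range join with pivot-based pruning.

    By the triangle inequality, for any pivot p:
      |d(p,r) - d(p,s)|  <=  d(r,s)  <=  d(p,r) + d(p,s)
    so a pair can be discarded (lower bound > radius) or accepted
    (upper bound <= radius) without ever computing d(r,s).
    """
    dR = [dist(pivot, r) for r in R]    # precomputed pivot distances
    dS = [dist(pivot, s) for s in S]
    out, computed = [], 0
    for i, r in enumerate(R):
        for j, s in enumerate(S):
            lo = abs(dR[i] - dS[j])
            hi = dR[i] + dS[j]
            if lo > radius:          # cannot match: prune
                continue
            if hi <= radius:         # must match: accept for free
                out.append((r, s))
                continue
            computed += 1            # bounds inconclusive: compute
            if dist(r, s) <= radius:
                out.append((r, s))
    return out, computed

dist = lambda a, b: abs(a - b)       # 1-D metric for illustration
pairs, n = range_join([0.0, 5.0], [0.1, 9.0], dist, 1.0, 0.0)
```

In this toy run every pair is resolved by the bounds alone, so `n` (the number of exact distance evaluations) is zero.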
(This article belongs to the Section Information Processes)

Open AccessArticle Multimedia Storytelling in Journalism: Exploring Narrative Techniques in Snow Fall
Information 2018, 9(5), 123; https://doi.org/10.3390/info9050123
Received: 23 April 2018 / Revised: 5 May 2018 / Accepted: 14 May 2018 / Published: 16 May 2018
PDF Full-text (1383 KB) | HTML Full-text | XML Full-text
Abstract
News stories aim to create an immersive reading experience by virtually transporting the audience to the described scenes. In print journalism, this experience is facilitated by text-linguistic narrative techniques, such as detailed scene reconstructions, a chronological event structure, point-of-view writing, and speech and thought reports. The present study examines how these techniques are translated into journalistic multimedia stories and explores how the distinctive features of text, image, video, audio, and graphic animations are exploited to immerse the audience in otherwise distant news events. To that end, a case study of the New York Times multimedia story Snow Fall is carried out. Results show that scenes are vividly reconstructed through a combination of text, image, video, and graphic animation. The story’s event structure is expressed in text and picture, while combinations of text, video, and audio are used to represent the events from the viewpoints of news actors. Although text is still central to all narrative techniques, it is complemented with other media formats to create various multimedia combinations, each intensifying the experience of immersion. Full article
(This article belongs to the Special Issue Immersive Multimedia)

Open AccessArticle A Calibrated Test-Set for Measurement of Access-Point Time Specifications in Hybrid Wired/Wireless Industrial Communication
Information 2018, 9(5), 122; https://doi.org/10.3390/info9050122
Received: 7 April 2018 / Revised: 4 May 2018 / Accepted: 7 May 2018 / Published: 16 May 2018
PDF Full-text (591 KB) | HTML Full-text | XML Full-text
Abstract
In factory automation and process control systems, hybrid wired/wireless networks are often deployed to connect devices that are difficult to reach, such as those mounted on mobile equipment. A widespread implementation of these networks uses Access Points (APs) to implement wireless extensions of Real-Time Ethernet (RTE) networks via the IEEE 802.11 Wireless LAN (WLAN). Unfortunately, APs may introduce random delays in frame forwarding, mainly related to their internal behavior (e.g., queue management, processing times), which clearly impact the overall worst-case execution time of real-time tasks involved in industrial process control systems. As a consequence, knowledge of such delays becomes a crucial design parameter, and their estimation is of utmost importance. In this scenario, the paper presents an original and effective method to measure the aforementioned delays introduced by APs, exploiting a hybrid loop-back link and a simple yet accurate set-up with moderate instrumentation requirements. The proposed method, which requires an initial calibration phase by means of a reference AP, has been successfully tested on several commercial APs to prove its effectiveness. The proposed measurement procedure is general and, as such, can be profitably adopted in other scenarios as well. Full article
(This article belongs to the Section Information and Communications Technology)

Open AccessArticle A Novel Rough WASPAS Approach for Supplier Selection in a Company Manufacturing PVC Carpentry Products
Information 2018, 9(5), 121; https://doi.org/10.3390/info9050121
Received: 23 April 2018 / Revised: 12 May 2018 / Accepted: 13 May 2018 / Published: 16 May 2018
Cited by 3 | PDF Full-text (807 KB) | HTML Full-text | XML Full-text
Abstract
The decision-making process requires the prior definition and fulfillment of certain factors, especially in complex areas such as supply chain management. One of the most important decisions in the initial phase of the supply chain, which strongly influences its further flow, is choosing the most favorable supplier. In this paper, a selection of suppliers for a company producing polyvinyl chloride (PVC) carpentry was made based on a new approach developed in the field of multi-criteria decision making (MCDM). The relative values of the weight coefficients of the criteria are calculated using the rough analytical hierarchy process (AHP) method. The evaluation and ranking of suppliers is carried out using the new rough weighted aggregated sum product assessment (WASPAS) method. In order to determine the stability of the model and the applicability of the developed rough WASPAS approach, the paper first analyzes its sensitivity to changes in the value of the coefficient λ. The second part of the sensitivity analysis applies different multi-criteria decision-making methods, in combination with rough numbers, that have been developed in the very recent past. The model presented in the paper is also solved using the following methods: rough Simple Additive Weighting (SAW), rough Evaluation based on Distance from Average Solution (EDAS), rough Multi-Attributive Border Approximation area Comparison (MABAC), rough Višekriterijumsko kompromisno rangiranje (VIKOR), rough Multi-Attributive Ideal-Real Comparative Analysis (MAIRCA), and rough Multi-objective optimization by ratio analysis plus the full multiplicative form (MULTIMOORA). In addition, in the third part of the sensitivity analysis, the Spearman correlation coefficient (SCC) of the obtained ranks is calculated, which confirms the applicability of all the proposed approaches. The proposed rough model allows the evaluation of alternatives despite the imprecision and lack of quantitative information in the information-management process. Full article
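
For orientation, the crisp (non-rough) core of WASPAS combines the weighted sum and weighted product models through the coefficient λ; the sketch below omits the rough-number arithmetic that the paper adds:

```python
def waspas(matrix, weights, lam=0.5):
    """Crisp WASPAS scores for a normalized benefit matrix.

    matrix[i][j]: performance of alternative i on criterion j, already
    normalized to (0, 1]. Combines the weighted sum model (WSM) and the
    weighted product model (WPM) via the coefficient lambda. The paper
    extends this with rough-number arithmetic, omitted here.
    """
    scores = []
    for row in matrix:
        wsm = sum(w * x for w, x in zip(weights, row))
        wpm = 1.0
        for w, x in zip(weights, row):
            wpm *= x ** w
        scores.append(lam * wsm + (1 - lam) * wpm)
    return scores

m = [[1.0, 0.8], [0.6, 1.0]]   # two alternatives, two criteria
w = [0.7, 0.3]                 # criterion weights (sum to 1)
s = waspas(m, w)               # alternative 0 wins on the heavier criterion
```

Varying `lam` between 0 (pure WPM) and 1 (pure WSM) is exactly the first sensitivity analysis the abstract describes.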

Open AccessArticle A Market-Based Optimization Approach for Domestic Thermal and Electricity Energy Management System: Formulation and Assessment
Information 2018, 9(5), 120; https://doi.org/10.3390/info9050120
Received: 29 March 2018 / Revised: 2 May 2018 / Accepted: 7 May 2018 / Published: 15 May 2018
PDF Full-text (2753 KB) | HTML Full-text | XML Full-text
Abstract
The increase of domestic electrical and thermal controllable devices and the emergence of dynamic electricity pricing create the opportunity to integrate and optimize electrical and thermal energy at the house level using a home energy management system (HEMS) in order to minimize energy costs. In the literature, optimization-based algorithms yielding 24-h schedules are used despite their complexity, which grows with the number of controllable devices, and their sensitivity to forecast errors, which leads in most cases to suboptimal schedules. To overcome this weakness, this paper introduces a domestic thermal and electrical control based on a market approach. In contrast with optimization-based HEMSs, the proposed market-based approach targets scalable and reactive optimal control. This paper first formulates the market-based optimization problem in general terms and discusses its optimality conditions with regard to microeconomic theory. Secondly, it compares the approach's optimality to an optimization-based approach and a rule-based approach under forecast errors using Monte Carlo simulations. Finally, it quantifies and identifies the effectiveness boundaries of the different approaches. Full article
(This article belongs to the Special Issue Agent-Based Artificial Markets)

Open AccessArticle Fast Identification of High Utility Itemsets from Candidates
Information 2018, 9(5), 119; https://doi.org/10.3390/info9050119
Received: 11 April 2018 / Revised: 5 May 2018 / Accepted: 7 May 2018 / Published: 14 May 2018
PDF Full-text (840 KB) | HTML Full-text | XML Full-text
Abstract
High utility itemsets (HUIs) are sets of items with high utility, such as profit, in a database. Efficient mining of high utility itemsets is an important problem in the data mining area. Many mining algorithms adopt a two-phase framework: they first generate a set of candidate itemsets by roughly overestimating the utilities of all itemsets in a database, and subsequently compute the exact utility of each candidate to identify HUIs. The major costs in these algorithms therefore come from candidate generation and utility computation. To the best of our knowledge, previous works mainly focus on reducing the number of candidates, without dedicating much attention to utility computation. However, we find that, for a mining task, utility computation dominates the whole running time of two-phase algorithms, so it is important to optimize it. In this paper, we first give a basic algorithm for HUI identification, the core of which is a utility computation procedure. Subsequently, a novel candidate tree structure is proposed for storing candidate itemsets, and a candidate tree-based algorithm with an efficient utility computation procedure is developed for fast HUI identification. Extensive experimental results show that the candidate tree-based algorithm outperforms the basic algorithm, and that the performance of two-phase algorithms integrating the candidate tree algorithm as their second step can be significantly improved. Full article
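
The second-phase utility computation that the paper targets can be stated in a few lines; this naive version scans every transaction per candidate, which is exactly the cost the proposed candidate tree is meant to reduce (a sketch, not the paper's algorithm):

```python
def exact_utility(candidate, transactions):
    """Exact utility of a candidate itemset: the sum, over every
    transaction containing all of its items, of those items' utilities
    in that transaction. An HUI is a candidate whose exact utility
    meets a user-given minimum-utility threshold.
    """
    cand = set(candidate)
    total = 0
    for t in transactions:          # t maps item -> utility in t
        if cand <= t.keys():        # transaction contains the itemset
            total += sum(t[i] for i in cand)
    return total

tx = [{"a": 5, "b": 2}, {"a": 3, "c": 1}, {"b": 4}]
u = exact_utility({"a"}, tx)        # 5 + 3 = 8
```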

Open AccessArticle When Robots Get Bored and Invent Team Sports: A More Suitable Test than the Turing Test?
Information 2018, 9(5), 118; https://doi.org/10.3390/info9050118
Received: 9 April 2018 / Revised: 5 May 2018 / Accepted: 8 May 2018 / Published: 11 May 2018
PDF Full-text (608 KB) | HTML Full-text | XML Full-text
Abstract
Increasingly, the Turing test—which is used to show that artificial intelligence has achieved human-level intelligence—is being regarded as an insufficient indicator of human-level intelligence. This essay extends arguments that embodied intelligence is required for human-level intelligence, and proposes a more suitable test for determining human-level intelligence: the invention of team sports by humanoid robots. The test is preferred because team sport activity is easily identified, uniquely human, and is suggested to emerge in basic, controllable conditions. To expect humanoid robots to self-organize, or invent, team sport as a function of human-level artificial intelligence, the following necessary conditions are proposed: humanoid robots must have the capacity to participate in cooperative-competitive interactions, instilled by algorithms for resource acquisition; they must possess or acquire sufficient stores of energetic resources that permit leisure time, thus reducing competition for scarce resources and increasing cooperative tendencies; and they must possess a heterogeneous range of energetic capacities. When present, these factors allow robot collectives to spontaneously invent team sport activities and thereby demonstrate one fundamental indicator of human-level intelligence. Full article
(This article belongs to the Special Issue AI AND THE SINGULARITY: A FALLACY OR A GREAT OPPORTUNITY?)

Open AccessArticle A Comparison of Emotion Annotation Approaches for Text
Information 2018, 9(5), 117; https://doi.org/10.3390/info9050117
Received: 27 March 2018 / Revised: 25 April 2018 / Accepted: 9 May 2018 / Published: 11 May 2018
PDF Full-text (443 KB) | HTML Full-text | XML Full-text
Abstract
While the recognition of positive/negative sentiment in text is an established task with many standard data sets and well developed methodologies, the recognition of a more nuanced affect has received less attention: there are few publicly available annotated resources and there are a number of competing emotion representation schemes with as yet no clear approach to choose between them. To address this lack, we present a series of emotion annotation studies on tweets, providing methods for comparisons between annotation methods (relative vs. absolute) and between different representation schemes. We find improved annotator agreement with a relative annotation scheme (comparisons) on a dimensional emotion model over a categorical annotation scheme on Ekman’s six basic emotions; however, when we compare inter-annotator agreement for comparisons with agreement for a rating scale annotation scheme (both with the same dimensional emotion model), we find improved inter-annotator agreement with rating scales, challenging a common belief that relative judgements are more reliable. To support these studies and as a contribution in itself, we further present a publicly available collection of 2019 tweets annotated with scores on each of four emotion dimensions: valence, arousal, dominance and surprise, following the emotion representation model identified by Fontaine et al. in 2007. Full article
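
As a rough illustration of the agreement computations such comparisons rely on, a tie-free Spearman rank correlation between two annotators' rating-scale scores can be computed as follows (the paper's actual agreement measures may differ):

```python
def spearman(x, y):
    """Spearman rank correlation between two annotators' scores for
    the same items, using the closed form 1 - 6*sum(d^2)/(n(n^2-1)).
    No tie correction: a simple sketch, valid only for distinct scores.
    """
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0] * len(v)
        for rank, i in enumerate(order):
            r[i] = rank
        return r
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))

a1 = [1, 4, 2, 5, 3]   # annotator 1, e.g. valence ratings
a2 = [2, 5, 1, 4, 3]   # annotator 2
rho = spearman(a1, a2)
```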
(This article belongs to the Special Issue Love & Hate in the Time of Social Media and Social Networks)

Open AccessArticle Vector Spatial Big Data Storage and Optimized Query Based on the Multi-Level Hilbert Grid Index in HBase
Information 2018, 9(5), 116; https://doi.org/10.3390/info9050116
Received: 16 March 2018 / Revised: 28 April 2018 / Accepted: 4 May 2018 / Published: 9 May 2018
PDF Full-text (10031 KB) | HTML Full-text | XML Full-text
Abstract
Faced with the rapid growth of vector data and the urgent requirement of low-latency queries, effectively achieving scalable storage and efficient access for vector big data has become an important and timely challenge. However, a systematic method for vector polygon data storage and query that takes spatial locality into account in the storage schema, index construction, and query optimization is rarely seen. In this paper, we focus on the storage and topological query of vector polygon geometry data in HBase; the rowkey in the HBase table is the concatenation of the Hilbert value of the grid cell to which the center of the object entity’s MBR belongs, the layer identifier, and the order code. Then, a new multi-level grid index structure, termed Q-HBML, which incorporates the grid-object spatial relationship and a new Hilbert hierarchical code into the multi-level grid, is proposed to improve spatial query efficiency. Finally, based on the Q-HBML index, two query optimization strategies and an optimized topological query algorithm, ML-OTQ, are presented to optimize the topological query process and enhance topological query efficiency. Four groups of comparative experiments prove that our approach delivers better performance. Full article
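
The rowkey construction can be sketched as below; the Hilbert conversion is the standard iterative algorithm, while the field widths and separator in the key are assumptions for illustration:

```python
def hilbert_d(n, x, y):
    """Hilbert-curve index of cell (x, y) in an n x n grid (n a power
    of two); standard iterative coordinate-to-distance conversion."""
    d = 0
    s = n // 2
    while s > 0:
        rx = 1 if x & s else 0
        ry = 1 if y & s else 0
        d += s * s * ((3 * rx) ^ ry)
        if ry == 0:                      # rotate the quadrant
            if rx == 1:
                x, y = s - 1 - x, s - 1 - y
            x, y = y, x
        s //= 2
    return d

def rowkey(grid_n, cell_x, cell_y, layer_id, order_code):
    """Rowkey = Hilbert value of the grid cell holding the MBR center,
    then the layer identifier, then the order code. Zero-padded width
    and '-' separator are illustrative assumptions, not the paper's."""
    return f"{hilbert_d(grid_n, cell_x, cell_y):08d}-{layer_id}-{order_code}"

k = rowkey(1024, 3, 5, "L01", "000042")
```

Because the Hilbert value leads the key, spatially close cells tend to land in adjacent HBase regions, which is the locality property the index exploits.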

Open AccessArticle Calibration of C-Logit-Based SUE Route Choice Model Using Mobile Phone Data
Information 2018, 9(5), 115; https://doi.org/10.3390/info9050115
Received: 18 March 2018 / Revised: 27 April 2018 / Accepted: 6 May 2018 / Published: 8 May 2018
PDF Full-text (3849 KB) | HTML Full-text | XML Full-text
Abstract
Theoretically speaking, the data of a stated preference survey could be used to calibrate a stochastic route choice model. However, it is unrealistic to implement a questionnaire survey for such a large number of alternative routes, so engineers generally determine the perception parameter empirically. This experienced choice of the perception parameter may cause higher errors in the route flows. Our calibration model of the perception parameter takes cellular network data as the input. The model consists of two levels. The upper level minimizes the sum of squared gaps between the route choice ratios of the C-logit model and those observed in the cellular network data. The stochastic user equilibrium (SUE) in terms of the C-logit model is used as the lower level. The simulated annealing (SA) algorithm is used to solve the model, with the route-based gradient projection (GP) algorithm solving the inner SUE. A case study validates the convergence of the model calibration. A real-world road network demonstrates the objective advantage of an equilibrium constraint over a nonequilibrium constraint and explains the feasibility of the candidate-routes assumption. Full article
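
For reference, the C-logit route choice probabilities that both levels of the calibration model rely on take the following form; the commonality-factor values and θ below are placeholders, not calibrated values:

```python
import math

def c_logit_probs(costs, commonality, theta=1.0):
    """C-logit route choice probabilities: each route's cost is
    penalized by a commonality factor CF that grows with its overlap
    with other routes, then a multinomial logit is applied.
    theta is the perception parameter the paper calibrates; the value
    here is an arbitrary placeholder.
    """
    u = [-theta * (c + cf) for c, cf in zip(costs, commonality)]
    m = max(u)                              # stabilize the exponentials
    e = [math.exp(v - m) for v in u]
    z = sum(e)
    return [v / z for v in e]

p = c_logit_probs([10.0, 10.0], [0.0, 2.0], theta=0.5)
# the overlapping route (higher CF) gets the smaller share
```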

Open AccessFeature PaperArticle Hardware Support for Security in the Internet of Things: From Lightweight Countermeasures to Accelerated Homomorphic Encryption
Information 2018, 9(5), 114; https://doi.org/10.3390/info9050114
Received: 29 March 2018 / Revised: 19 April 2018 / Accepted: 20 April 2018 / Published: 8 May 2018
PDF Full-text (2228 KB) | HTML Full-text | XML Full-text
Abstract
In the Internet of Things (IoT), many strong constraints have to be considered when designing the connected objects, including low cost and low power, thus limited resources. The confidentiality and integrity of sensitive data must however be ensured even when they have to be processed in the cloud. Security is therefore one of the design constraints but must be achieved without the usual level of resources. In this paper, we address two very different examples showing how embedded hardware/software co-design can help in improving security in the IoT context. The first example targets so-called “hardware attacks” and we show how some simple attacks can be made much more difficult at very low cost. This is demonstrated on a crypto-processor designed for Elliptic Curve Cryptography (ECC). A very lightweight countermeasure is implemented against Simple Power Analysis (SPA), taking advantage of the general processor usually available in the system. The second example shows how confidentiality in the cloud can be guaranteed by homomorphic encryption at a lower computational cost by taking advantage of a hardware accelerator. The proposed accelerator is very easy to implement and can easily be tuned to several encryption schemes and several trade-offs between hardware costs and computation speed. Full article
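
The abstract does not name the lightweight countermeasure in detail; one classic SPA defence of the same flavor makes the scalar-multiplication loop perform the same operation pattern for every key bit, as in the Montgomery ladder, sketched here over plain integers standing in for ECC points:

```python
def ladder_mul(k, p, add, dbl):
    """Montgomery-ladder scalar multiplication: one add and one double
    per key bit regardless of the bit's value, so a simple power trace
    shows a uniform operation pattern. `add`/`dbl` stand in for ECC
    point addition and doubling; integers are used purely for
    illustration, and the paper's actual countermeasure may differ.
    """
    r0, r1 = 0, p                  # 0 plays the role of the identity
    for bit in format(k, "b"):     # invariant: r1 == r0 + p
        if bit == "1":
            r0, r1 = add(r0, r1), dbl(r1)
        else:
            r1, r0 = add(r0, r1), dbl(r0)
    return r0

add = lambda a, b: a + b
dbl = lambda a: 2 * a
assert ladder_mul(25, 3, add, dbl) == 75   # 25 * P, with P modeled as 3
```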
(This article belongs to the Special Issue Security in the Internet of Things)
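The abstract mentions a lightweight countermeasure against Simple Power Analysis. A classic SPA-hardening idea (illustrative here, not necessarily the paper's exact countermeasure) is to make the operation sequence independent of the secret bits, as in the Montgomery ladder; a minimal sketch for modular exponentiation:

```python
def ladder_pow(base, exp, mod):
    # Montgomery ladder: one multiplication and one squaring per bit,
    # in the same order regardless of the bit value, so the power trace
    # does not reveal the exponent bits through the operation sequence.
    r0, r1 = 1, base % mod
    for i in reversed(range(exp.bit_length())):
        if (exp >> i) & 1:
            r0, r1 = (r0 * r1) % mod, (r1 * r1) % mod
        else:
            r1, r0 = (r0 * r1) % mod, (r0 * r0) % mod
    return r0
```

An ECC scalar multiplication can be laddered the same way, with point addition and doubling in place of multiplication and squaring.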
Open Access Article The Human Takeover: A Call for a Venture into an Existential Opportunity
Information 2018, 9(5), 113; https://doi.org/10.3390/info9050113
Received: 4 April 2018 / Revised: 30 April 2018 / Accepted: 3 May 2018 / Published: 5 May 2018
PDF Full-text (342 KB) | HTML Full-text | XML Full-text
Abstract
We propose a venture into an existential opportunity for establishing a world ‘good enough’ for humans to live in. Defining an existential opportunity as the converse of an existential risk—that is, a development that promises to dramatically improve the future of humanity—we argue that one such opportunity is available and should be explored now. The opportunity resides in the moment of transition of the Internet—from mediating information to mediating distributed direct governance in the sense of self-organization. The Internet of tomorrow will mediate the execution of contracts, transactions, public interventions and all other change-establishing events more reliably and more synergistically than any other technology or institution. It will become a distributed, synthetically intelligent agent in itself. This transition must not be just observed, or exploited instrumentally: it must be ventured into and seized on behalf of entire humanity. We envision a configuration of three kinds of cognitive system—the human mind, social systems and the emerging synthetic intelligence—serving to augment the autonomy of the first from the ‘programming’ imposed by the second. Our proposition is grounded in a detailed analysis of the manner in which the socio-econo-political system has evolved into a powerful control mechanism that subsumes human minds, steers their will and automates their thinking. We see the venture into the existential opportunity described here as aiming at the global dissolution of the core reason of that programming’s effectiveness—the critical dependence of the continuity of human lives on the coherence of the socially constructed personas they ‘wear.’ Thus, we oppose the popular prediction of the upcoming, ‘dreadful AI takeover’ with a call for action: instead of worrying that Artificial Intelligence will soon come to dominate and govern the human world, let us think of how it could help the human being to finally be able to do it. Full article
Open Access Article Developing an Ontology-Based Rollover Monitoring and Decision Support System for Engineering Vehicles
Information 2018, 9(5), 112; https://doi.org/10.3390/info9050112
Received: 2 April 2018 / Revised: 27 April 2018 / Accepted: 28 April 2018 / Published: 4 May 2018
PDF Full-text (6315 KB) | HTML Full-text | XML Full-text
Abstract
The increasing number of rollover accidents involving engineering vehicles has attracted close attention; however, most researchers focus on the analysis and monitoring of rollover stability indices and seldom on assessment and decision support for rollover risk. In this context, an ontology-based rollover monitoring and decision support system for engineering vehicles is proposed. The ontology model is built to represent monitored rollover stability data with semantic properties and to construct semantic relevance among the various concepts involved in the rollover domain. On this basis, ontology querying and reasoning methods based on the Simple Protocol and RDF Query Language (SPARQL) and Semantic Web Rule Language (SWRL) rules are used to realize rollover risk assessment and to obtain suggested measures. PC and mobile applications (apps) have also been developed to implement the above methods. In addition, five sets of rollover stability data for an articulated off-road engineering vehicle under different working conditions were analyzed to verify the accuracy and effectiveness of the proposed system. Full article
(This article belongs to the Section Information Systems)
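As a rough illustration of the kind of rule-based reasoning that SWRL enables, the rollover risk assessment can be sketched as a small rule base; every threshold, indicator name and suggested measure below is hypothetical, not taken from the paper:

```python
def assess_rollover_risk(lateral_accel_g, roll_angle_deg, load_ratio):
    """Map monitored stability indicators to a (risk level, measure) pair.

    A toy rule base in the spirit of SWRL rules; thresholds and
    suggested measures are illustrative assumptions only.
    """
    if lateral_accel_g > 0.45 or roll_angle_deg > 25:
        return "high", "stop and level the vehicle immediately"
    if lateral_accel_g > 0.30 or roll_angle_deg > 15 or load_ratio > 0.9:
        return "medium", "reduce speed and avoid sharp turns"
    return "low", "continue normal operation"
```

In the paper's system such rules live in the ontology layer, so they can be queried and extended without touching application code.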
Open Access Feature Paper Article Can Digital Computers Support Ancient Mathematical Consciousness?
Information 2018, 9(5), 111; https://doi.org/10.3390/info9050111
Received: 26 February 2018 / Revised: 29 April 2018 / Accepted: 1 May 2018 / Published: 4 May 2018
PDF Full-text (374 KB) | HTML Full-text | XML Full-text
Abstract
This paper poses, discusses, but does not definitively answer, the following questions: What sorts of reasoning machinery could the ancient mathematicians, and other intelligent animals, be using for spatial reasoning, before the discovery of modern logical mechanisms? “Diagrams in minds” perhaps? How and why did natural selection produce such machinery? Is there a single package of biological abilities for spatial reasoning, or did different sorts of mathematical competence evolve at different times, forming a “layered” system? Do the layers develop in individuals at different stages? Which components are shared with other intelligent species? Does some or all of the machinery exist at or before birth in humans and if not how and when does it develop, and what is the role of experience in its development? How do brains implement such machinery? Could similar machines be implemented as virtual machines on digital computers, and if not what sorts of non-digital “Super Turing” mechanisms could replicate the required functionality, including discovery of impossibility and necessity? How are impossibility and necessity represented in brains? Are chemical mechanisms required? How could such mechanisms be specified in a genome? Are some not specified in the genome but products of interaction between genome and environment? Does Turing’s work on chemical morphogenesis published shortly before he died indicate that he was interested in this problem? Will the answers to these questions vindicate Immanuel Kant’s claims about the nature of mathematical knowledge, including his claim that mathematical truths are non-empirical, synthetic and necessary? Perhaps it’s time for discussions of consciousness to return to the nature of ancient mathematical consciousness, and related aspects of everyday human and non-human intelligence, usually ignored by consciousness theorists. Full article
Open Access Article Social Engineering Attacks and Countermeasures in the New Zealand Banking System: Advancing a User-Reflective Mitigation Model
Information 2018, 9(5), 110; https://doi.org/10.3390/info9050110
Received: 21 March 2018 / Revised: 23 April 2018 / Accepted: 1 May 2018 / Published: 3 May 2018
PDF Full-text (2512 KB) | HTML Full-text | XML Full-text
Abstract
Social engineering attacks are possibly one of the most dangerous forms of security and privacy attacks, since they rely on psychological manipulation rather than purely technical exploits and have been growing in frequency with no end in sight. This research study assessed the major aspects and underlying concepts of social engineering attacks and their influence on the New Zealand banking sector. The study further identified attack stages and provides a user-reflective model for mitigating attacks at every stage of the social engineering attack cycle. The outcome of this research is a model that gives users a process for maintaining a reflective stance while engaging in online activities. Our model is proposed to help users, as well as financial institutions, rethink their anti-social-engineering strategies while continually assessing whether they are being subjected to social engineering attacks when transacting online. Full article
(This article belongs to the Special Issue Security in the Internet of Things)
Open Access Article Ontology-Based Domain Analysis for Model Driven Pervasive Game Development
Information 2018, 9(5), 109; https://doi.org/10.3390/info9050109
Received: 28 March 2018 / Revised: 24 April 2018 / Accepted: 27 April 2018 / Published: 3 May 2018
PDF Full-text (3251 KB) | HTML Full-text | XML Full-text
Abstract
Domain Analysis (DA) plays an important role in Model Driven Development (MDD) and Domain-Specific Modeling (DSM). However, most formal DA methods are heavyweight and sometimes impractical. For instance, when computer games are developed, the problem domain (the game design) is decided gradually over numerous iterations, and it is not practical to fit a heavyweight DA into such an agile process. In this research, we propose a lightweight DA that can be embedded in the original game development process. The DA process is based on a game ontology that serves both game design and domain analysis. In this paper, we introduce the ontology and demonstrate how to use it in the domain analysis process. We discuss its quality and evaluate the ontology with a user acceptance survey. The results show that most potential users considered the ontology useful and easy to use. Full article
Open Access Article A Worst Case Performance Analysis of Approximated Zero Forcing Vectoring for DSL Systems
Information 2018, 9(5), 108; https://doi.org/10.3390/info9050108
Received: 9 March 2018 / Revised: 27 April 2018 / Accepted: 28 April 2018 / Published: 3 May 2018
PDF Full-text (2207 KB) | HTML Full-text | XML Full-text
Abstract
In this paper we introduce a Gaussian approximation for the achievable downstream bit rate per user in modern broadband and ultra-broadband digital subscriber loop (DSL) access systems. The considered formulation accounts for the main characteristics of the interference scenario (e.g., the number and positions of interferers along the cable), far-end crosstalk (FEXT) fluctuations, the bit loading limitation per sub-carrier, and approximated zero forcing in vectoring pre-coding. Formulas are obtained assuming log-normal statistics for the signal-to-interference-plus-noise ratio per sub-carrier. The validity of the approximation has been assessed by computer simulation, and very good agreement between the exact and the approximated bit rates is obtained. The bit rate approximation is then used to analyze the performance of very-high-speed digital subscriber line type 2 (VDSL2, Profile 35b) with vectoring, and to assess the G.fast performance degradation when approximated zero forcing in vectoring pre-coding is applied. It is observed that the G.fast degradation due to residual FEXT after vectoring pre-coding can be significant, and that a significant performance improvement can be achieved at the expense of increased computational complexity of the vectoring pre-coding. Full article
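The capped bit-loading model with log-normal (i.e., Gaussian-in-dB) per-sub-carrier SINR described above can be sketched as a small Monte Carlo estimate; all parameter values here (bit cap, symbol rate, SNR gap, SINR statistics) are illustrative assumptions, not taken from the paper:

```python
import math
import random

def downstream_bitrate(n_sub, sinr_db_mean, sinr_db_std,
                       bit_cap=12, sym_rate=4000, gap_db=9.8, seed=0):
    # Draw a log-normal SINR per sub-carrier (Gaussian in dB), apply
    # an SNR gap and the per-sub-carrier bit cap, then sum loaded bits.
    rng = random.Random(seed)
    bits = 0
    for _ in range(n_sub):
        sinr_db = rng.gauss(sinr_db_mean, sinr_db_std)
        sinr_lin = 10 ** ((sinr_db - gap_db) / 10)
        bits += min(int(math.log2(1 + sinr_lin)), bit_cap)
    return bits * sym_rate  # bits per second

rate = downstream_bitrate(n_sub=2048, sinr_db_mean=35.0, sinr_db_std=5.0)
```

The paper's contribution is a closed-form Gaussian approximation for this quantity, avoiding the need for such per-sub-carrier simulation.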
Open Access Feature Paper Article Towards Aiding Decision-Making in Social Networks by Using Sentiment and Stress Combined Analysis
Information 2018, 9(5), 107; https://doi.org/10.3390/info9050107
Received: 15 February 2018 / Revised: 25 April 2018 / Accepted: 25 April 2018 / Published: 2 May 2018
PDF Full-text (1090 KB) | HTML Full-text | XML Full-text
Abstract
The present work studies the detection of negative emotional states that people exhibit on social network sites (SNSs), and the effect that such a negative state has on the repercussions of posted messages. We aim to discover to what degree a user whose affective state is considered negative by an analyzer can affect other users and generate bad repercussions. The analyzers we propose are a Sentiment Analyzer, a Stress Analyzer and a novel combined Analyzer. We also want to discover which analyzer is more suitable for predicting a bad future situation, and in what context. We designed a Multi-Agent System (MAS) that uses the different analyzers to protect or advise users. This MAS uses the trained and tested analyzers to predict future bad situations in social media that could be triggered by the actions of a user whose emotional state is considered negative. We conducted experiments with different datasets of text messages from Twitter.com to examine the ability of the system to predict bad repercussions, comparing the polarity, stress level or combined-value classification of reply messages with that of the messages that originated them. Full article
(This article belongs to the Special Issue Love & Hate in the Time of Social Media and Social Networks)
Open Access Article Precipitation Data Assimilation System Based on a Neural Network and Case-Based Reasoning System
Information 2018, 9(5), 106; https://doi.org/10.3390/info9050106
Received: 28 March 2018 / Revised: 14 April 2018 / Accepted: 17 April 2018 / Published: 2 May 2018
PDF Full-text (2066 KB) | HTML Full-text | XML Full-text
Abstract
There are several methods to forecast precipitation, but none of them is accurate enough, since precipitation is very complicated to predict and is influenced by many factors. Data assimilation systems (DAS) aim to improve prediction results by processing data from different sources in a general way, such as a weighted average, but they have not previously been used for precipitation prediction. A DAS built on mathematical tools alone is complex and hard to implement. In this paper, machine learning techniques are introduced into a precipitation data assimilation system. After summarizing the theoretical construction of this method, we carry out practical weather forecasting experiments, and the results show that the new system is effective and promising. Full article
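A weighted average over sources, as mentioned above, is the simplest form of data assimilation. A minimal sketch, assuming weights inversely proportional to each source's historical error (an illustrative fixed rule; the paper replaces such rules with learned combinations):

```python
def assimilate(forecasts, errors):
    """Combine forecasts from several sources into one estimate.

    Each source is weighted by the reciprocal of its historical
    error, so more reliable sources contribute more. `errors`
    must be positive.
    """
    weights = [1.0 / e for e in errors]
    total = sum(weights)
    return sum(w * f for w, f in zip(weights, forecasts)) / total
```

For example, two equally reliable 10 mm and 20 mm forecasts assimilate to 15 mm, while tripling the second source's error pulls the estimate toward the first.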
Open Access Article Definition of Motion and Biophysical Indicators for Home-Based Rehabilitation through Serious Games
Information 2018, 9(5), 105; https://doi.org/10.3390/info9050105
Received: 10 March 2018 / Revised: 23 April 2018 / Accepted: 26 April 2018 / Published: 1 May 2018
Cited by 1 | PDF Full-text (3112 KB) | HTML Full-text | XML Full-text
Abstract
In this paper, we describe the Remote Monitoring Validation Engineering System (ReMoVES), a newly developed platform for motion rehabilitation through serious games and biophysical sensors. The main features of the system are as follows: motion tracking through Microsoft Kinect V2 and Leap Motion is described and compared with other solutions, and the emotional state of the patient is evaluated with heart rate measurements and electrodermal activity monitored by Microsoft Band 2 during the execution of the functional exercises planned by the therapist. The ReMoVES platform is conceived for home-based rehabilitation after the hospitalisation period, and the system will deploy machine learning techniques to provide an automated evaluation of patient performance during training. The algorithms should deliver effective reports to the therapist about training performance while the patient exercises on their own. The game features described in this manuscript represent the input for the training set, while the feedback provided by the therapist is the output. To address this supervised learning problem, we describe the most significant features to be used as key indicators of the patient's performance, along with an evaluation of their accuracy in discriminating between good and bad patient actions. Full article
(This article belongs to the Special Issue Selected Papers from ICBRA 2017)
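As an illustration of the kind of motion feature that could feed such a supervised learning problem, here is a simple "straightness" indicator computed from tracked joint positions; this is a hypothetical feature for illustration, not one defined in the paper:

```python
import math

def path_length(points):
    # Total distance travelled by a tracked joint,
    # given a list of (x, y) positions over time.
    return sum(math.dist(p, q) for p, q in zip(points, points[1:]))

def straightness(points):
    """Ratio of direct distance to travelled distance, in (0, 1].

    A crude movement-quality indicator: 1.0 means a perfectly
    straight reach, lower values mean a wandering trajectory.
    """
    direct = math.dist(points[0], points[-1])
    travelled = path_length(points)
    return direct / travelled if travelled else 1.0
```

A feature like this, computed per exercise repetition, would form one column of the training set, with the therapist's good/bad rating as the label.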
Open Access Article A New Bi-Directional Projection Model Based on Pythagorean Uncertain Linguistic Variable
Information 2018, 9(5), 104; https://doi.org/10.3390/info9050104
Received: 9 April 2018 / Revised: 20 April 2018 / Accepted: 21 April 2018 / Published: 26 April 2018
PDF Full-text (983 KB) | HTML Full-text | XML Full-text
Abstract
To solve multi-attribute decision making (MADM) problems with Pythagorean uncertain linguistic variables, an extended bi-directional projection method is proposed. First, we utilize the linguistic scale function to convert the uncertain linguistic variables and subsequently provide a new projection model. Then, to formulate the bi-directional projection method, the formative vectors of the alternatives and the ideal alternatives are defined. Furthermore, a comparative analysis with the projection model is conducted to show the superiority of the bi-directional projection method. Finally, an example of a graduate's job selection is given to demonstrate the effectiveness and feasibility of the proposed method. Full article
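The projection of one alternative's vector onto another is the building block of such methods. A minimal sketch of the classical projection, together with one illustrative symmetric combination of the two directions (the paper's own bi-directional formula over Pythagorean uncertain linguistic variables differs):

```python
import math

def projection(a, b):
    # Classical projection of vector a onto vector b: (a . b) / |b|.
    dot = sum(x * y for x, y in zip(a, b))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / norm_b

def bidirectional_closeness(a, b):
    """Symmetric closeness built from both projections.

    Illustrative combination only: the ratio of the smaller to the
    larger projection equals 1.0 exactly when the vectors align.
    """
    p_ab, p_ba = projection(a, b), projection(b, a)
    return min(p_ab, p_ba) / max(p_ab, p_ba)
```

One-directional projection can rank an alternative highly even when the ideal alternative projects poorly onto it; measuring both directions, as above, is what motivates bi-directional models.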
Open Access Article On Neutrosophic αψ-Closed Sets
Information 2018, 9(5), 103; https://doi.org/10.3390/info9050103
Received: 2 April 2018 / Revised: 19 April 2018 / Accepted: 19 April 2018 / Published: 25 April 2018
Cited by 2 | PDF Full-text (261 KB) | HTML Full-text | XML Full-text
Abstract
The aim of this paper is to introduce the concept of αψ-closed sets in terms of neutrosophic topological spaces. We also study some of the properties of neutrosophic αψ-closed sets. Further, we introduce continuity and contra-continuity for the introduced set. The two functions and their relations are studied via a neutrosophic point set. Full article
(This article belongs to the Section Information Theory and Methodology)
Open Access Article #europehappinessmap: A Framework for Multi-Lingual Sentiment Analysis via Social Media Big Data (A Twitter Case Study)
Information 2018, 9(5), 102; https://doi.org/10.3390/info9050102
Received: 9 March 2018 / Revised: 2 April 2018 / Accepted: 20 April 2018 / Published: 24 April 2018
PDF Full-text (7302 KB) | HTML Full-text | XML Full-text
Abstract
The growth and popularity of social media platforms have generated a new social interaction environment and thus a new collaboration and communication network among individuals. These platforms hold a tremendous amount of data about users' behaviors and sentiments, since people create, share and exchange information, ideas, pictures and videos on them. One of these popular platforms is Twitter, whose voluntary information-sharing structure provides researchers with data of potential benefit to their studies. Based on Twitter data, this study proposes a multilingual sentiment detection framework to compute European Gross National Happiness (GNH). The framework consists of a novel data collection, filtering and sampling method and a newly constructed multilingual sentiment detection algorithm for social media big data, and it was tested on nine European countries (United Kingdom, Germany, Sweden, Turkey, Portugal, The Netherlands, Italy, France and Spain) and their national languages over a six-year period. The reliability of the data is checked by comparing peaks and troughs against special days from Wikipedia news lists. Validity is checked with a set of correlation analyses against OECD Life Satisfaction survey reports, Euro-Dollar and other currency exchange rates, and national stock market time series. After the validity and reliability confirmations, the European GNH map is drawn for the six years. The main contribution is a novel multilingual social media sentiment analysis framework for calculating GNH for countries, offering an alternative to the survey and interview methodology of OECD-type organizations. It is also believed that this framework can provide more fine-grained results (e.g., daily or hourly sentiments of society in different languages). Full article
(This article belongs to the Section Information Theory and Methodology)
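Lexicon-based polarity averaging is the simplest form of the sentiment scoring described above and can be sketched as follows; the toy English lexicon is an assumption for illustration, whereas the paper constructs multilingual lexicons per national language:

```python
POSITIVE = {"good", "happy", "great", "love"}
NEGATIVE = {"bad", "sad", "terrible", "hate"}

def polarity(text):
    # Lexicon-based polarity in [-1, 1]: fraction of positive words
    # minus fraction of negative words in the message.
    words = text.lower().split()
    score = sum((w in POSITIVE) - (w in NEGATIVE) for w in words)
    return score / max(len(words), 1)

def gnh(tweets):
    # Average polarity over a sample of tweets: a crude proxy for a
    # Gross National Happiness score over that sample.
    return sum(polarity(t) for t in tweets) / len(tweets)
```

Aggregating such per-message polarities by country and day is, in spirit, how a sentiment time series for a GNH map can be assembled.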