Computers, Volume 7, Issue 4 (December 2018) – 25 articles

Cover Story: In this work, we compare the exergaming experience of young and old individuals under four difficulty adjustment methods. Studies frequently use exergames to improve individuals’ physical functions and reduce the likelihood of noncommunicable diseases. While task difficulty optimization is crucial to exergame design, research has consistently overlooked the effects of age-related factors on the preferred difficulty adjustment methods. We compared the exergaming experience of young and old individuals under constant, ramping, performance-based, and biofeedback-based difficulty adjustments. Our results correlate well with previous work and support the role of dynamic difficulty adjustments. Further investigation revealed that old individuals are also likely to experience flow under ramping difficulty adjustments, whereas performance-based adjustments were only feasible for young individuals.
  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the tables of contents of newly released issues.
  • PDF is the official format for papers published in both HTML and PDF forms. To view the papers in PDF format, click on the "PDF Full-text" link, and use the free Adobe Reader to open them.
19 pages, 1329 KiB  
Article
Deep Validation of Spatial Temporal Features of Synthetic Mobility Models
by Nisrine Ibadah, Khalid Minaoui, Mohammed Rziza, Mohammed Oumsis and César Benavente-Peces
Computers 2018, 7(4), 71; https://doi.org/10.3390/computers7040071 - 16 Dec 2018
Cited by 7 | Viewed by 5067
Abstract
This paper analyzes the most relevant spatial-temporal stochastic properties of benchmark synthetic mobility models. Each pattern suffers from various mobility flaws, as the models’ validation will show. A set of metrics is used to describe mobility features, such as the speed decay problem, the density wave phenomenon, the spatial node distribution, and the average neighbor percentage. These metrics have already been validated for the random waypoint mobility model (RWPMM), but they have not yet been verified for the other most frequently used mobility patterns. For this reason, this investigation validates those metrics in depth for other mobility models, namely the Manhattan Grid mobility, the Reference Point Group mobility, the Nomadic Community mobility, the Self-Similar Least Action Walk, and SMOOTH models. Moreover, we propose a novel mobility metric named the “node neighbors range”. The relevance of this new metric is that it confirms at once the set of outcomes of the previous metrics, offering a global view of the overall range of mobile neighbors during the experimental time. The current research aims to understand mobility features more rigorously in order to conduct a precise assessment of each mobility flaw, since these flaws further impact the performance of the whole network. The validations summarize several parameters across 18,126 different scenarios with an average of 486 validated files. An exhaustive analysis at this level of detail leads to a good understanding of the exact behavior of mobility models, displaying the ability of every pattern to deal with certain topology changes as well as to ensure network performance. Validation results confirm the effectiveness and robustness of our novel metric.
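To make these metrics concrete, here is a minimal Python sketch (not the authors' code; the radio_range parameter and 2D coordinates are assumptions) of the average neighbor percentage for one snapshot of a mobility trace:

    import math

    def neighbor_percentage(positions, radio_range):
        # positions: list of (x, y) node coordinates for one time snapshot.
        # Returns the fraction of other nodes within radio range, averaged
        # over all nodes; averaging this over every snapshot of a simulated
        # trace gives the average neighbor percentage metric.
        n = len(positions)
        if n < 2:
            return 0.0
        total = 0.0
        for i, (xi, yi) in enumerate(positions):
            near = sum(1 for j, (xj, yj) in enumerate(positions)
                       if j != i and math.hypot(xi - xj, yi - yj) <= radio_range)
            total += near / (n - 1)
        return total / n

The proposed "node neighbors range" metric would instead track the span between each node's minimum and maximum neighbor counts over the experimental time.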

14 pages, 867 KiB  
Article
Sentence Level Domain Independent Opinion and Targets Identification in Unstructured Reviews
by Khairullah Khan and Wahab Khan
Computers 2018, 7(4), 70; https://doi.org/10.3390/computers7040070 - 11 Dec 2018
Cited by 1 | Viewed by 4801
Abstract
User reviews, blogs, and social media data are widely used for various types of decision-making. In this connection, Machine Learning and Natural Language Processing techniques are employed to automate the process of opinion extraction and summarization. We have studied different techniques of opinion mining and found that the extraction of opinion targets and opinion words, and the identification of the relations between them, are the main tasks of state-of-the-art techniques. Furthermore, domain-independent feature extraction is still a challenging task, since it is costly to manually create an extensive list of features for every domain. In this study, we tested different syntactic patterns and semantic rules for the identification of evaluative expressions containing relevant target features and opinions. We propose a domain-independent framework that consists of two phases. In the first, we extract Best Fit Examples (BFE) consisting of short sentences and candidate phrases; in the second, pruning is employed to filter the candidate opinion targets and opinion words. The results of the proposed model are significant.
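As a flavor of the syntactic-pattern phase, the sketch below (a single hypothetical rule, not the paper's full rule set) pairs an adjective with the noun it precedes as a candidate (opinion word, opinion target); the input is any POS-tagged sentence:

    def extract_candidates(tagged_sentence):
        # tagged_sentence: list of (token, pos_tag) pairs from any POS tagger.
        # An adjective (JJ*) directly before a noun (NN*) yields a candidate
        # (opinion word, opinion target) pair; phase two would prune these.
        pairs = []
        for (w1, t1), (w2, t2) in zip(tagged_sentence, tagged_sentence[1:]):
            if t1.startswith("JJ") and t2.startswith("NN"):
                pairs.append((w1, w2))
        return pairs

    print(extract_candidates([("the", "DT"), ("great", "JJ"), ("battery", "NN")]))
    # -> [('great', 'battery')]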

17 pages, 2484 KiB  
Article
Global Gbest Guided-Artificial Bee Colony Algorithm for Numerical Function Optimization
by Habib Shah, Nasser Tairan, Harish Garg and Rozaida Ghazali
Computers 2018, 7(4), 69; https://doi.org/10.3390/computers7040069 - 07 Dec 2018
Cited by 27 | Viewed by 6073
Abstract
Numerous computational algorithms are used to obtain high performance in solving mathematical, engineering and statistical complexities. Recently, an attractive bio-inspired method, namely the Artificial Bee Colony (ABC), has shown outstanding performance against typical computational algorithms on different complex problems. Modification, hybridization and improvement strategies have made ABC even more attractive to science and engineering researchers. Two well-known honeybee-based upgraded algorithms, Gbest Guided Artificial Bee Colony (GGABC) and Global Artificial Bee Colony Search (GABCS), use the foraging behavior of the global-best and guided-best honeybees for solving complex optimization tasks. Here, the hybrid of the GGABC and GABCS methods, called the 3G-ABC algorithm, is proposed for strong discovery and exploitation processes. The proposed and typical methods were implemented on the basis of maximum fitness values instead of maximum cycle numbers, which provides extra strength to both the proposed and the existing methods. The experiments were run on a set of fifteen numerical benchmark functions. The results obtained with the proposed approach were compared with several existing approaches, such as ABC, GABC and GGABC, and found to be very profitable. Finally, the obtained results were verified with statistical testing.
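For readers unfamiliar with gbest guidance, the following Python sketch shows the GABC-style candidate update that this family of algorithms builds on (a generic illustration; the exact 3G-ABC operators and the gbest weight c = 1.5 are assumptions):

    import random

    def gbest_guided_update(x, population, gbest, c=1.5):
        # Perturb one random dimension of solution x toward a random
        # neighbour and toward the global best, following
        # v_ij = x_ij + phi*(x_ij - x_kj) + psi*(gbest_j - x_ij).
        j = random.randrange(len(x))
        neighbour = random.choice([p for p in population if p is not x])
        phi = random.uniform(-1.0, 1.0)
        psi = random.uniform(0.0, c)
        v = list(x)
        v[j] = x[j] + phi * (x[j] - neighbour[j]) + psi * (gbest[j] - x[j])
        return v

Under the paper's stopping rule, such updates would repeat until a target fitness value is reached rather than until a maximum cycle number.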

16 pages, 4116 KiB  
Article
Failure Detection and Prevention for Cyber-Physical Systems Using Ontology-Based Knowledge Base
by Nazakat Ali and Jang-Eui Hong
Computers 2018, 7(4), 68; https://doi.org/10.3390/computers7040068 - 06 Dec 2018
Cited by 34 | Viewed by 7129
Abstract
Cyber-physical systems have emerged as a new engineering paradigm, combining the cyber and physical worlds with comprehensive computational and analytical tools to solve complex tasks. In cyber-physical systems, components are developed to detect, prevent, or mitigate the failures of a system. Sensors gather real-time data as an input to the system for further processing. The whole cyber-physical system therefore depends on its sensors to accomplish its tasks, and the failure of one sensor may lead to the failure of the whole system. To address this issue, we present an approach that utilizes Failure Modes, Effects, and Criticality Analysis (FMECA), a prominent hazard analysis technique for increasing the understanding of risk and failure prevention. In our approach, we transform the FMECA model into a UML (Unified Modeling Language) class diagram, construct a knowledge base from the derived class diagram, and finally use the class diagram to build an ontology. The proposed approach employs a 5C architecture for smart industries for its systematic application. Lastly, we use a smart home case study to validate our approach.
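A miniature illustration of the final step, using the rdflib library (the namespace, class names and severity value are hypothetical, not taken from the paper):

    from rdflib import Graph, Literal, Namespace, RDF, RDFS

    EX = Namespace("http://example.org/fmeca#")  # hypothetical namespace
    g = Graph()
    g.bind("ex", EX)

    # Classes mirroring a FMECA-derived UML class diagram (illustrative).
    for cls in ("Component", "FailureMode", "Effect"):
        g.add((EX[cls], RDF.type, RDFS.Class))

    # One failure-mode instance: a stuck temperature sensor in a smart home.
    g.add((EX.TempSensor, RDF.type, EX.Component))
    g.add((EX.StuckValue, RDF.type, EX.FailureMode))
    g.add((EX.StuckValue, EX.affects, EX.TempSensor))
    g.add((EX.StuckValue, EX.severity, Literal(8)))  # criticality rating

    print(g.serialize(format="turtle"))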

14 pages, 400 KiB  
Article
Heuristic Approaches for Location Assignment of Capacitated Services in Smart Cities
by Gerbrich Hoekstra and Frank Phillipson
Computers 2018, 7(4), 67; https://doi.org/10.3390/computers7040067 - 03 Dec 2018
Cited by 5 | Viewed by 4838
Abstract
This paper proposes two heuristic approaches to solve the Multi-Service Capacitated Facility Location Problem. This problem covers assigning equipment to access points that offer multiple services in a Smart City context. The access points should offer the services to the customers and fulfil their demand, given the coverage of each service and the capacity constraints. Both heuristic approaches solve the assignment problem for each service separately and then combine the per-service solutions. One of them, however, updates the cost parameters between consecutive steps and produces near-optimal solutions in reasonable time compared to the solution obtained by solving an integer linear programming formulation exactly.
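The per-service step can be pictured with a small greedy sketch (hypothetical; the paper's heuristics are more elaborate, and the cost table is an assumption):

    def assign_service(demands, sites, capacity, cost):
        # Assign each demand point to the cheapest site that still has
        # capacity; cost[(d, s)] would encode distance/coverage. Solving
        # services one at a time and updating costs between steps mirrors
        # the idea behind the second heuristic.
        load = {s: 0 for s in sites}
        assignment = {}
        for d in demands:
            feasible = [s for s in sites if load[s] < capacity[s]]
            best = min(feasible, key=lambda s: cost[(d, s)])
            assignment[d] = best
            load[best] += 1
        return assignment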

19 pages, 1112 KiB  
Review
Extending NUMA-BTLP Algorithm with Thread Mapping Based on a Communication Tree
by Iulia Știrb
Computers 2018, 7(4), 66; https://doi.org/10.3390/computers7040066 - 03 Dec 2018
Cited by 1 | Viewed by 4213
Abstract
The paper presents a Non-Uniform Memory Access (NUMA)-aware compiler optimization for task-level parallel code. The optimization is based on the Non-Uniform Memory Access—Balanced Task and Loop Parallelism (NUMA-BTLP) algorithm (Ştirb, 2018). The algorithm determines the type of each thread in the source code based on a static analysis of the code. After assigning a type to each thread, NUMA-BTLP (Ştirb, 2018) calls the NUMA-BTDM mapping algorithm (Ştirb, 2016), which uses the PThreads routine pthread_setaffinity_np to set the CPU affinities of the threads (i.e., thread-to-core associations) based on their type. The algorithms achieve an improved thread mapping for NUMA systems by mapping threads that share data to the same core(s), allowing fast access to L1 cache data. The paper shows that PThreads-based task-level parallel code optimized at compile time by NUMA-BTLP (Ştirb, 2018) and NUMA-BTDM (Ştirb, 2016) runs in a time- and energy-efficient manner on NUMA systems. The results show that energy consumption is reduced by up to 5% at the same execution time for one of the tested real benchmarks, and by up to 15% for another benchmark running in an infinite loop. The algorithms can be used in real-time control systems such as client/server-based applications, which require efficient access to shared resources. Most often, task parallelism is used in the implementation of the server and loop parallelism is used for the client.
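The affinity step itself is small: the paper performs it in C with pthread_setaffinity_np, and the Linux/Python counterpart below conveys the idea (the core sets are assumptions):

    import os

    def pin_thread_by_type(thread_type,
                           autonomous_cores=frozenset({0}),
                           shared_cores=frozenset({1, 2})):
        # Threads that share data are pinned to the same core set so they
        # reuse the same cache; pid 0 means the calling thread/process.
        cores = shared_cores if thread_type == "shares-data" else autonomous_cores
        os.sched_setaffinity(0, cores)
        return os.sched_getaffinity(0)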

26 pages, 6355 KiB  
Article
Specification and Verification in Integrated Model of Distributed Systems (IMDS)
by Wiktor B. Daszczuk
Computers 2018, 7(4), 65; https://doi.org/10.3390/computers7040065 - 02 Dec 2018
Cited by 8 | Viewed by 5316
Abstract
Distributed systems, such as the Internet of Things (IoT) and cloud computing, are becoming popular. This requires modeling that reflects the natural characteristics of such systems: the locality of independent components, the autonomy of their decisions, and asynchronous communication. Automated verification of deadlocks and distributed termination supports rapid development. Existing techniques do not reflect some features of distribution. Most formalisms are synchronous and/or use some kind of global state, both of which are unrealistic. No model supports the communication duality that allows the integration of the remote procedure call and client-server paradigms into a single, uniform model. The majority of model checkers address total deadlocks; usually, they distinguish neither communication deadlocks from resource deadlocks nor deadlocks from distributed termination. Some verification mechanisms check partial deadlocks at the expense of restricting the structure of the system being verified. The paper presents an original formalism for the modeling and verification of distributed systems. The Integrated Model of Distributed Systems (IMDS) defines a distributed system as two sets, states and messages, and a relation of “actions” between these sets. Communication duality provides projections onto servers and onto traveling agents, while a uniform specification of the verified system is preserved. General temporal formulas over IMDS, independent of the structure of the verified system, allow automated verification. These formulas distinguish deadlocks from distributed termination, and communication deadlocks from resource deadlocks. Partial deadlocks and partial termination can also be checked. The Dedan tool was developed using the IMDS formalism.
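A toy rendering of the two-sets-plus-actions idea (far simpler than IMDS itself; the states, messages and actions here are invented for illustration):

    # An action maps one (server state, pending message) pair to a new
    # state and an optional follow-up message.
    actions = {
        ("srv:idle", "msg:request"): ("srv:busy", "msg:reply"),
        ("srv:busy", "msg:done"):    ("srv:idle", None),
    }

    def enabled(states, messages):
        # Actions currently enabled; if none are enabled while messages
        # remain pending, the configuration is (totally) deadlocked.
        return [(s, m) for (s, m) in actions if s in states and m in messages]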

26 pages, 8442 KiB  
Article
Self-Configuring IoT Service QoS Guarantee Using QBAIoT
by Ahmad Khalil, Nader Mbarek and Olivier Togni
Computers 2018, 7(4), 64; https://doi.org/10.3390/computers7040064 - 17 Nov 2018
Cited by 9 | Viewed by 6071
Abstract
Providing Internet of Things (IoT) environments with a service level guarantee is a challenging task for improving the IoT application usage experience. We specify in this paper an IoT architecture enabling an IoT Service Level Agreement (iSLA) to be achieved between an IoT Service Provider (IoT-SP) and an IoT Client (IoT-C). In order to guarantee the IoT applications’ requirements, Quality of Service (QoS) mechanisms should be implemented within all the layers of the IoT architecture. Thus, we propose a specific mechanism for the lowest layer of our service-level-based IoT architecture (i.e., the sensing layer). It is an adaptation of the IEEE 802.15.4 slotted CSMA/CA mechanism that takes into consideration the requirements of real-time IoT services. Our access method, called QBAIoT (QoS-based Access for IoT), extends IEEE 802.15.4 systems by creating a new contention access period for each traffic class specified in the iSLA. Furthermore, due to the huge number of connected IoT devices, self-configuring capability provisioning is necessary to limit human intervention and the total cost of ownership (TCO). Thus, we integrate a self-configuring capability into the QBAIoT access method by implementing the MAPE-K closed control loop within the IoT High Level Gateway (HL-Gw) of our proposed QoS-based IoT architecture.
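The self-configuring part follows the classic MAPE-K pattern; one loop pass can be sketched generically as below (the knowledge-base fields and the widen-by-one-slot plan are assumptions, not the paper's exact policy):

    def mape_k_step(monitor, knowledge):
        # monitor() returns per-traffic-class delay measurements.
        symptoms = monitor()                                  # Monitor
        overloaded = [c for c, delay in symptoms.items()
                      if delay > knowledge["max_delay"][c]]   # Analyze
        plan = {c: knowledge["cap_slots"][c] + 1
                for c in overloaded}                          # Plan
        knowledge["cap_slots"].update(plan)                   # Execute
        return plan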

16 pages, 2378 KiB  
Article
Trustworthiness of Dynamic Moving Sensors for Secure Mobile Edge Computing
by John Yoon
Computers 2018, 7(4), 63; https://doi.org/10.3390/computers7040063 - 16 Nov 2018
Cited by 3 | Viewed by 5186
Abstract
Wireless sensor networks are an emerging technology, and the collaboration of wireless sensors has become one of the active research areas for utilizing sensor data. Various sensors collaborate to recognize changes in a target environment and to identify whether any radical change occurs. To improve accuracy, sensor calibration has been discussed, and sensor data analytics are becoming popular in research and development. However, these are not satisfactorily efficient in situations where sensor devices are dynamically moving, abruptly appearing, or disappearing. If an abruptly appearing sensor is a zero-day attack and a disappearing sensor is an ill-functioning comrade, then analytics over data from untrusted sensors will result in indecisive artifacts. Verification based on predefined sensor requirements or metadata is not adaptive enough to identify dynamically moving sensors. This paper describes a deep-learning approach to verifying the trustworthiness of sensors by considering the sensor data only. The proposed verification can be performed without having to use metadata about the sensors or to request consultation from a cloud server. The contributions of this paper include (1) quality preservation of sensor data for mining analytics: the sensor data are trained to identify the characteristics of their outliers, i.e., whether they are attack outliers or outlier-like abrupt changes in the environment; and (2) authenticity verification of dynamically moving sensors: previously unknown sensors are also identified by the deep-learning approach.
(This article belongs to the Special Issue Mobile Edge Computing)

13 pages, 581 KiB  
Article
Making Sense of the World: Framing Models for Trustworthy Sensor-Driven Systems
by Muffy Calder, Simon Dobson, Michael Fisher and Julie McCann
Computers 2018, 7(4), 62; https://doi.org/10.3390/computers7040062 - 15 Nov 2018
Cited by 3 | Viewed by 6025
Abstract
Sensor-driven systems provide data and information that facilitate real-time decision-making and autonomous actuation, as well as enabling informed policy choices. However, can we be sure that these systems work as expected? Can we model them in a way that captures all the key issues? We define two concepts, frames of reference and frames of function, that help us organise models of sensor-based systems and their purpose. Examples from a smart water distribution network illustrate how frames offer a lens through which to organise and balance multiple views of the system. Frames aid communication between modellers, analysts and stakeholders, and distinguish the purpose of each model, which contributes towards our trust that the system fulfils its purpose.

25 pages, 2757 KiB  
Article
Profiling Director’s Style Based on Camera Positioning Using Fuzzy Logic
by Hartarto Junaedi, Mochamad Hariadi and I Ketut Eddy Purnama
Computers 2018, 7(4), 61; https://doi.org/10.3390/computers7040061 - 14 Nov 2018
Cited by 1 | Viewed by 5709
Abstract
Machinima is a computer imaging technology typically used in games and animation. It renders all movie cast properties into a virtual environment by means of camera positioning. Since cinematography is complementary to Machinima, it is possible to simulate a director’s style via various camera placements in this environment. In a gaming application, the director’s style is one of the most impressive cinematic factors: a whole different gaming experience can be obtained by applying different styles to the same scene. This paper describes a system capable of automatically profiling a director’s style using fuzzy logic. We employed 19 output variables and 15 other calculated variables from the animation extraction data to profile two different directors’ styles across five scenes. Area plots and histograms were generated, and, by analyzing the histograms, the different directors’ styles could subsequently be classified.
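Fuzzy variables of this kind are typically built from simple membership functions; the triangular form below is the standard building block (the breakpoints are illustrative, not the paper's):

    def triangular(x, a, b, c):
        # Degree, in [0, 1], to which x belongs to a triangular fuzzy set
        # rising from a to its peak at b and falling back to zero at c.
        if x <= a or x >= c:
            return 0.0
        return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

    # e.g. degree to which a shot counts as a "close-up" by camera distance:
    print(triangular(1.2, a=0.5, b=1.0, c=2.0))  # -> 0.8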

12 pages, 1603 KiB  
Article
Model Structure Optimization for Fuel Cell Polarization Curves
by Markku Ohenoja, Aki Sorsa and Kauko Leiviskä
Computers 2018, 7(4), 60; https://doi.org/10.3390/computers7040060 - 09 Nov 2018
Cited by 6 | Viewed by 5104
Abstract
The application of evolutionary optimizers such as genetic algorithms, differential evolution, and various swarm optimizers to the parameter estimation of fuel cell polarization curve models has increased. This study takes a novel approach to utilizing evolutionary optimization in fuel cell modeling. Model structure identification is performed with genetic algorithms in order to determine an optimized representation of a polarization curve model with linear model parameters. The optimization is repeated with different sets of input variables and varying model complexity. The resulting model can successfully be generalized to different fuel cells and varying operating conditions, and is therefore readily applicable to fuel cell system simulations.
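Because the searched model structures are linear in their parameters, each candidate reduces to an ordinary least-squares fit. A sketch with one plausible structure follows (the basis functions are assumptions; the genetic algorithm would search over many such sets):

    import numpy as np

    def fit_polarization(i, v):
        # Fit V = e0 - b*log(i) - r*i, a common polarization-curve form
        # that is linear in (e0, b, r).
        # i: numpy array of current densities (> 0); v: measured voltages.
        A = np.column_stack([np.ones_like(i), -np.log(i), -i])
        (e0, b, r), *_ = np.linalg.lstsq(A, v, rcond=None)
        return e0, b, r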

20 pages, 3663 KiB  
Article
Exergame Experience of Young and Old Individuals Under Different Difficulty Adjustment Methods
by Oral Kaplan, Goshiro Yamamoto, Takafumi Taketomi, Alexander Plopski, Christian Sandor and Hirokazu Kato
Computers 2018, 7(4), 59; https://doi.org/10.3390/computers7040059 - 07 Nov 2018
Cited by 6 | Viewed by 6838
Abstract
In this work, we compare the exergaming experience of young and old individuals under four difficulty adjustment methods. Physical inactivity is a leading cause of numerous health conditions, including heart disease, diabetes, cancer, and reduced life expectancy. Committing to regular physical exercise is a simple non-pharmaceutical preventive measure for maintaining good health and sustaining quality of life. By incorporating exercise into games, studies have frequently used exergames as an intervention tool over the last decades to improve physical functions and to increase adherence to exercise. While task difficulty optimization is crucial to exergame design, researchers have consistently overlooked age as an element that can significantly influence the nature of the end results. We use the Flow State Scale to analyze the mental state of young and old individuals and to compare constant difficulty with ramping, performance-based, and biofeedback-based difficulty adjustments. Our results indicate that old individuals are less likely than young individuals to experience flow under the same difficulty adjustment methods. Further investigation revealed that old individuals are likely to experience flow under ramping and biofeedback-based difficulty adjustments, whereas performance-based adjustments were only feasible for young individuals.
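Of the four methods compared, the performance-based one is the easiest to picture; a minimal sketch follows (the target success rate and step size are assumptions, not the study's values):

    def adjust_difficulty(level, success_rate, target=0.75, step=0.05):
        # Performance-based adjustment: raise difficulty when the player
        # succeeds more often than the target, lower it otherwise. Ramping
        # would instead add a fixed step per interval regardless of input,
        # and biofeedback-based variants would key on e.g. heart rate.
        if success_rate > target:
            return level + step
        return max(0.0, level - step)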
(This article belongs to the Special Issue Computer Technologies for Human-Centered Cyber World)

18 pages, 1838 KiB  
Article
A New Competitive Binary Grey Wolf Optimizer to Solve the Feature Selection Problem in EMG Signals Classification
by Jingwei Too, Abdul Rahim Abdullah, Norhashimah Mohd Saad, Nursabillilah Mohd Ali and Weihown Tee
Computers 2018, 7(4), 58; https://doi.org/10.3390/computers7040058 - 05 Nov 2018
Cited by 99 | Viewed by 8573
Abstract
Features extracted from the electromyography (EMG) signal normally include irrelevant and redundant ones. Conventionally, feature selection is an effective way to identify the most informative features, which contributes to performance enhancement and feature reduction. Therefore, this article proposes a new competitive binary grey wolf optimizer (CBGWO) to solve the feature selection problem in EMG signal classification. Initially, the short-time Fourier transform (STFT) transforms the EMG signal into a time-frequency representation. Ten time-frequency features are extracted from the STFT coefficients. Then, the proposed method is used to evaluate the optimal feature subset from the original feature set. To evaluate its effectiveness, CBGWO is compared with binary grey wolf optimization (BGWO1 and BGWO2), binary particle swarm optimization (BPSO), and a genetic algorithm (GA). The experimental results show the superiority of CBGWO not only in classification performance but also in feature reduction. In addition, CBGWO has a very low computational cost, which makes it more suitable for real-world applications.
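Binary grey wolf variants share a binarisation step that turns the continuous wolf position into a feature mask; a generic sketch of that step (not the exact CBGWO competitive operator):

    import math, random

    def binary_position(x_continuous):
        # Squash each dimension with a sigmoid and sample a bit:
        # 1 keeps the corresponding EMG feature, 0 drops it.
        return [1 if random.random() < 1.0 / (1.0 + math.exp(-x)) else 0
                for x in x_continuous]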

31 pages, 4027 KiB  
Article
A Novel Framework for Portfolio Selection Model Using Modified ANFIS and Fuzzy Sets
by Chanchal Kumar and Mohammad Najmud Doja
Computers 2018, 7(4), 57; https://doi.org/10.3390/computers7040057 - 31 Oct 2018
Cited by 5 | Viewed by 4839
Abstract
This paper proposes a novel framework for solving the portfolio selection problem. The framework is built around two new parameters obtained from an existing basic mean-variance model, and the computed values of these significant parameters can prove entirely advantageous for decision-making. The framework combines the effectiveness of the mean-variance model with another significant parameter, Conditional Value-at-Risk (CVaR). It focuses on extracting two new parameters, viz. αnew and βnew, which are derived from the results of the mean-variance model and the value of CVaR. The method aims to minimize the overall cost, which is computed in the framework using quadratic equations involving these new parameters. A new ANFIS structure is designed by modifying the existing one; the new structure contains six layers instead of the existing five. Fuzzy sets are harnessed for the design of the second layer of this new ANFIS structure. The output parameter acquired from the sixth layer serves as an important index for an investor’s decision-making. The numerical results acquired from the framework and the new six-layered structure are presented, assimilated, and compared with the results of the existing ANFIS structure.
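The CVaR ingredient has a standard empirical estimator, shown below for orientation (generic; the paper's αnew/βnew construction on top of it is not reproduced here):

    import numpy as np

    def cvar(losses, alpha=0.95):
        # Conditional Value-at-Risk: expected loss in the worst (1 - alpha)
        # tail of the empirical loss distribution.
        losses = np.sort(np.asarray(losses, dtype=float))
        var = np.quantile(losses, alpha)      # Value-at-Risk at level alpha
        return losses[losses >= var].mean()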

19 pages, 394 KiB  
Article
Locality Aware Path ORAM: Implementation, Experimentation and Analytical Modeling
by Kholoud Al-Saleh and Abdelfettah Belghith
Computers 2018, 7(4), 56; https://doi.org/10.3390/computers7040056 - 29 Oct 2018
Cited by 3 | Viewed by 5006
Abstract
In this paper, we propose an advanced implementation of Path ORAM to hide the access pattern to data outsourced to the cloud. This implementation takes advantage of eventual data locality and popularity by introducing a small amount of extra storage at the client side. Two replacement strategies are used to manage this extra storage (cache): Least Recently Used (LRU) and Least Frequently Used (LFU). Using the same test bed, the conducted experiments clearly show the superiority of the advanced implementation over the traditional Path ORAM implementation, even for a small cache size and reduced data locality. We then present a mathematical model that provides closed-form solutions when data requests follow a Zipf distribution with a non-null parameter. This model is shown to have a small, acceptable relative error and is well validated by the experimental results.
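The LRU strategy for the client-side cache can be sketched in a few lines (illustrative; block contents and the fallback to the actual Path ORAM read are elided):

    from collections import OrderedDict

    class LRUCache:
        # Blocks served from this client-side cache never touch the server,
        # so their accesses leak nothing; LFU, the second strategy, would
        # evict by hit count instead of recency.
        def __init__(self, capacity):
            self.capacity, self.store = capacity, OrderedDict()

        def get(self, block_id):
            if block_id not in self.store:
                return None                   # miss: fall back to Path ORAM
            self.store.move_to_end(block_id)  # mark most recently used
            return self.store[block_id]

        def put(self, block_id, data):
            self.store[block_id] = data
            self.store.move_to_end(block_id)
            if len(self.store) > self.capacity:
                self.store.popitem(last=False)  # evict least recently used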

20 pages, 3458 KiB  
Article
Deploying CPU-Intensive Applications on MEC in NFV Systems: The Immersive Video Use Case
by Giorgio Cattaneo, Fabio Giust, Claudio Meani, Daniele Munaretto and Pietro Paglierani
Computers 2018, 7(4), 55; https://doi.org/10.3390/computers7040055 - 26 Oct 2018
Cited by 19 | Viewed by 13105
Abstract
Multi-access Edge Computing (MEC) will be a technology pillar of forthcoming 5G networks. Nonetheless, there is great interest in also deploying MEC solutions in current 4G infrastructures. MEC enables data processing in proximity to end users. Thus, latency can be minimized, high data rates can be achieved locally, and real-time information about radio link status or consumer geographical position can be exploited to develop high-value services. To consolidate network elements and edge applications on the same virtualization infrastructure, network operators aim to combine MEC with Network Function Virtualization (NFV). However, the integration of MEC in NFV is not fully established yet: in fact, various architectural issues are currently open, even at the standardization level. This paper describes a novel MEC-in-NFV system which successfully combines, at the management level, MEC functional blocks with an NFV Orchestrator, and can neutrally support any “over the top” Mobile Edge application with minimal integration effort. A specific ME app, combined with an end-user app for the provision of immersive video services, is presented. To provide low-latency, CPU-intensive services to end users, the proposed architecture exploits High-Performance Computing resources embedded in the edge infrastructure. Experimental results showing the effectiveness of the proposed architecture are reported and discussed.
(This article belongs to the Special Issue Mobile Edge Computing)

14 pages, 1636 KiB  
Article
Norm-Based Binary Search Trees for Speeding Up KNN Big Data Classification
by Ahmad B. A. Hassanat
Computers 2018, 7(4), 54; https://doi.org/10.3390/computers7040054 - 21 Oct 2018
Cited by 31 | Viewed by 5930
Abstract
Due to its large size and/or dimensionality, the classification of Big Data is a challenging task for traditional machine learning, particularly when carried out using the well-known K-nearest neighbors (KNN) classifier, which is a slow and lazy classifier by nature. In this paper, we propose a new approach to Big Data classification using the KNN classifier, based on inserting the training examples into a binary search tree that is used later to speed up the search for test examples. For this purpose, we used two methods to sort the training examples. The first calculates the min/max scaled norm of each example and rounds it to 0 or 1. Examples with 0-norms are sorted into the left child of a node, and those with 1-norms into the right child of the same node; this process continues recursively until we obtain one example, or a small number of examples with the same norm, in a leaf node. The second proposed method inserts each example into the binary search tree based on its similarity to the examples of the minimum and maximum Euclidean norms. The experimental results of classifying several machine learning big datasets show that both methods are much faster than most of the state-of-the-art methods compared, with competitive accuracy rates obtained by the second method, which shows great potential for further enhancement of both methods for use in practice.
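The first sorting method translates almost directly into code; a sketch under stated assumptions (leaf_size is an invented threshold; the paper stops at one example or a few with the same norm):

    import math

    def build_norm_tree(examples, leaf_size=4):
        # Scale each example's Euclidean norm into [0, 1] using this node's
        # min/max norms, round to 0 or 1, and recurse: 0s go left, 1s right.
        # At query time, a test example follows one root-to-leaf path and
        # plain KNN runs inside the reached leaf only.
        norms = [math.sqrt(sum(v * v for v in x)) for x in examples]
        lo, hi = min(norms), max(norms)
        if len(examples) <= leaf_size or lo == hi:
            return examples                   # leaf node
        bits = [round((n - lo) / (hi - lo)) for n in norms]
        left = [x for x, b in zip(examples, bits) if b == 0]
        right = [x for x, b in zip(examples, bits) if b == 1]
        return (build_norm_tree(left, leaf_size),
                build_norm_tree(right, leaf_size))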

36 pages, 1347 KiB  
Article
Automatic Configurable Hardware Code Generation for Software-Defined Radios
by Lekhobola Tsoeunyane, Simon Winberg and Michael Inggs
Computers 2018, 7(4), 53; https://doi.org/10.3390/computers7040053 - 19 Oct 2018
Viewed by 5202
Abstract
The development of software-defined radio (SDR) systems using field-programmable gate arrays (FPGAs) compels designers to reuse pre-existing Intellectual Property (IP) cores in order to meet time-to-market and design efficiency requirements. However, the low-level development difficulties associated with FPGAs hinder productivity, even when the designer is experienced in hardware design. These difficulties include non-standard interfacing methods, component communication and synchronization challenges, complicated timing constraints, and processing blocks that need to be customized through time-consuming design tweaks. In this paper, we present a methodology for the automated, behavioral integration of dedicated IP cores for rapid prototyping of SDR applications. To maintain the high performance of the SDR designs, our methodology integrates IP cores using characteristics of the dataflow model of computation (MoC), namely static dataflow with access patterns (SDF-AP). We show how the dataflow is mapped onto the low-level hardware model by efficiently applying low-level optimizations and using a formal analysis technique that guarantees the correctness of the generated solutions. Furthermore, we demonstrate the capability of our automated hardware design approach by developing eight SDR applications in VHDL. The results show that well-optimized designs are generated, which can improve productivity while also conserving the hardware resources used.
(This article belongs to the Special Issue Reconfigurable Computing Technologies and Applications)

34 pages, 5614 KiB  
Article
Run-Time Mitigation of Power Budget Variations and Hardware Faults by Structural Adaptation of FPGA-Based Multi-Modal SoPC
by Dimple Sharma, Lev Kirischian and Valeri Kirischian
Computers 2018, 7(4), 52; https://doi.org/10.3390/computers7040052 - 11 Oct 2018
Cited by 4 | Viewed by 4823
Abstract
Systems for application domains like robotics, aerospace, defense, autonomous vehicles, etc. are usually developed on System-on-Programmable-Chip (SoPC) platforms, capable of supporting several multi-modal computation-intensive tasks on their FPGAs. Since such systems are mostly autonomous and mobile, they have rechargeable power sources and, therefore, varying power budgets. They may also develop hardware faults due to radiation, thermal cycling, aging, etc. Systems must be able to sustain the performance requirements of their multi-task, multi-modal workload in the presence of variations in available power or the occurrence of hardware faults. This paper presents an approach for mitigating power budget variations and hardware faults (transient and permanent) by run-time structural adaptation of the SoPC. The proposed method is based on dynamically allocating, relocating and re-integrating task-specific processing circuits inside the partially reconfigurable FPGA to accommodate the available power budget, satisfy tasks’ performance and hardware resource constraints, and/or restore task functionality affected by hardware faults. The proposed method has been experimentally implemented on the ARM Cortex-A9 processor of a Xilinx Zynq XC7Z020 FPGA. Results have shown that structural adaptation can be done in units of milliseconds, since the worst-case decision-making process does not exceed the reconfiguration time of a partial bit-stream.
(This article belongs to the Special Issue Reconfigurable Computing Technologies and Applications)

15 pages, 916 KiB  
Article
Ontology Middleware for Integration of IoT Healthcare Information Systems in EHR Systems
by Abdullah Alamri
Computers 2018, 7(4), 51; https://doi.org/10.3390/computers7040051 - 08 Oct 2018
Cited by 39 | Viewed by 7314
Abstract
Healthcare sectors have been at the forefront of the adoption and use of IoT technologies for efficient healthcare diagnosis and treatment. Because healthcare IoT sensor technology obtains health-related data from patients, it needs to be integrated with electronic healthcare records (EHR) systems. Most EHR systems have not been designed for integration with IoT technology; they have been designed as more patient-centric management systems. The use of the IoT in EHR therefore remains a long-term goal. Configuring IoT in EHR can enhance patient healthcare, enabling health providers to monitor their patients outside of the clinic. To assist physicians in accessing data resources efficiently, a semantic and flexible data model is needed to connect EHR data and IoT data, which may help provide true interoperability and integration. This research proposes a semantic middleware that exploits ontology to support semantic integration and functional collaboration between IoT healthcare information systems and EHR systems.

16 pages, 5700 KiB  
Article
Distance-Constrained Outage Probability Analysis for Device-to-Device Communications Underlaying Cellular Networks with Frequency Reuse Factor of 2
by Devarani Devi Ningombam and Seokjoo Shin
Computers 2018, 7(4), 50; https://doi.org/10.3390/computers7040050 - 06 Oct 2018
Cited by 9 | Viewed by 5439
Abstract
Device-to-device (D2D) communication is regarded as one of the promising techniques for improving network throughput and capacity and for reducing the traffic load on the evolved Node B (eNB). In this paper, we propose a resource allocation and power control technique in which two pairs of D2D users can simultaneously share the same uplink cellular resource. In this case, interference between D2D users and cellular users is no longer insignificant, so it must be properly handled. The proposed scheme considers fractional frequency reuse (FFR) as a promising method that can considerably reduce intra-cell interference. The main objective of the proposed scheme is to maximize the D2D communication throughput and the overall system throughput by minimizing the outage probability. Hence, we formulate an outage probability problem and an overall system throughput optimization problem while guaranteeing a minimum allowable signal-to-interference-plus-noise ratio (SINR). For fair distribution of cellular resources to multiple D2D pairs, we use Jain’s fairness index (JFI). Simulations were conducted in MATLAB, and the results demonstrate that the proposed scheme achieves remarkable system performance compared with existing methods.
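Jain's fairness index, used here to score the distribution of resources across D2D pairs, is worth spelling out (the generic formula, not code from the paper):

    def jain_fairness(throughputs):
        # (sum x)^2 / (n * sum x^2): 1.0 means a perfectly even allocation,
        # 1/n means one pair monopolizes the resources.
        n = len(throughputs)
        s = sum(throughputs)
        return s * s / (n * sum(t * t for t in throughputs))

    print(jain_fairness([2.0, 2.0, 2.0]))      # -> 1.0
    print(jain_fairness([6.0, 0.001, 0.001]))  # -> close to 1/3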

19 pages, 584 KiB  
Article
An Empirical Study on Security Knowledge Sharing and Learning in Open Source Software Communities
by Shao-Fang Wen
Computers 2018, 7(4), 49; https://doi.org/10.3390/computers7040049 - 01 Oct 2018
Cited by 4 | Viewed by 5312
Abstract
Open source software (OSS) security has been the focus of the security community and practitioners over the past decades. However, the number of new vulnerabilities keeps increasing in today’s OSS systems. With OSS becoming increasingly important and complex, a lack of software security knowledge for handling security vulnerabilities in OSS development will result in more serious breaches in the future. Learning software security is a difficult and challenging task, since the domain is quite context-specific and a real project situation is necessary to apply the security concepts within the specific system. Many OSS proponents believe that the OSS community offers significant learning opportunities from its best practices. However, studies that specifically explore security knowledge sharing and learning in OSS communities are scarce. This research is intended to fill this gap by empirically investigating the factors that affect knowledge sharing and learning about software security, and the relationships among them. A conceptual model is proposed that helps to conceptualize the linkage between socio-technical practices and software security learning processes in OSS communities. A questionnaire and statistical analytical techniques were employed to test the hypothesized relationships in the model to gain a better understanding of this research topic.
(This article belongs to the Special Issue Software Security and Assurance)

15 pages, 371 KiB  
Article
Performance Evaluation of HARQ Schemes for the Internet of Things
by Lorenzo Vangelista and Marco Centenaro
Computers 2018, 7(4), 48; https://doi.org/10.3390/computers7040048 - 25 Sep 2018
Cited by 11 | Viewed by 5126
Abstract
Hybrid Automatic Repeat reQuest (HARQ) techniques are widely employed in the most important wireless systems, e.g., the Long Term Evolution (LTE) cellular standard, to increase the reliability of the communication. Although these schemes have been widely studied in the literature over the past several years, the recent results obtained by Polyanskiy, Poor, and Verdú on the finite-blocklength regime have disclosed new possibilities for research on HARQ schemes. Indeed, new communication trends, usually part of the Internet of Things (IoT) paradigm, are characterized by very short packet sizes and high reliability requirements, and therefore call for efficient HARQ techniques. In many scenarios, the energy efficiency of the communication plays a key role as well. In this paper, we aim to provide a comprehensive performance comparison of various kinds of HARQ schemes in the context of short-packet transmissions with energy constraints. We derive optimal power allocation strategies and show that an energy saving of at least 50% can be achieved after very few transmission attempts if packet combining is enabled at the receiver side.
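Why combining saves energy can be seen from the accumulated-SNR view of Chase combining; the sketch below uses the asymptotic capacity formula (the paper works in the finite-blocklength regime, which tightens these numbers):

    import math

    def in_outage(snrs, rate):
        # With packet combining, the receiver adds the SNRs of all attempts,
        # so log2(1 + sum(snr)) can clear the rate target even when every
        # individual attempt is below it.
        return math.log2(1 + sum(snrs)) < rate

    print(in_outage([0.8], 1.0))       # True: a single attempt fails
    print(in_outage([0.8, 0.8], 1.0))  # False: two combined attempts succeed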

20 pages, 2419 KiB  
Article
Connecting Smart Objects in IoT Architectures by Screen Remote Monitoring and Control
by Zebo Yang and Tatsuo Nakajima
Computers 2018, 7(4), 47; https://doi.org/10.3390/computers7040047 - 24 Sep 2018
Cited by 2 | Viewed by 7811
Abstract
Electronic visual displays enabled by touchscreen technologies have evolved into one of the universal multimedia output methods and a popular input intermediary through touch interaction. As a result, we can always gain access to an intelligent machine by obtaining control of its display contents. Since remote screen sharing systems are also increasingly prevalent, we propose a cross-platform middleware infrastructure which supports remote monitoring and control functionalities, based on remote streaming, for networked intelligent devices such as smartphones, computers and smartwatches, and for home appliances such as smart refrigerators, smart air-conditioners and smart TVs. We aim to connect all of these devices through their display screens, so as to make it possible to remotely monitor and control a given device from whichever display screen (usually the nearest one) is on the network. The system is a distributed network consisting of multiple modular server and client nodes, and is compatible with prevalent operating systems such as Windows, macOS, Unix-like/Linux and Android.
