
Table of Contents

Computers, Volume 6, Issue 2 (June 2017)

  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the table of contents of newly released issues.
  • PDF is the official format for papers, which are published in both HTML and PDF forms. To view a paper in PDF format, click on the "PDF Full-text" link and use the free Adobe Reader to open it.

Research


Open Access Article: Towards Trustworthy Collaborative Editing
Computers 2017, 6(2), 13; doi:10.3390/computers6020013
Received: 14 December 2016 / Revised: 24 March 2017 / Accepted: 28 March 2017 / Published: 30 March 2017
Abstract
Real-time collaborative editing applications are drastically different from typical client–server applications in that every participant has a copy of the shared document. In this type of environment, each participant acts as both a client and a server replica. In this article, we elaborate on how to adapt Byzantine fault tolerance (BFT) mechanisms to enhance the trustworthiness of such applications. It is apparent that traditional BFT algorithms cannot be used directly because they would dictate that all updates submitted by participants be applied sequentially, which would defeat the purpose of collaborative editing. The goal of this study is to design and implement an efficient BFT solution by exploiting the application semantics and by performing a threat analysis of these types of applications. Our solution can be considered a form of optimistic BFT in that the local states maintained by each participant may diverge temporarily. The states of the participants are made consistent with each other by a periodic synchronization mechanism.
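The optimistic replication with periodic synchronization described in the abstract can be sketched in miniature. The replica class and deterministic merge rule below are hypothetical illustrations only, not the paper's actual BFT protocol (which must additionally tolerate malicious participants):

```python
import hashlib

class Replica:
    """Hypothetical optimistic replica: applies edits locally and
    converges with peers at periodic synchronization points."""
    def __init__(self, rid):
        self.rid = rid
        self.ops = []  # locally known (timestamp, replica_id, text) edits

    def edit(self, ts, text):
        # Applied immediately: local state may diverge from other replicas.
        self.ops.append((ts, self.rid, text))

    def digest(self):
        # A state digest lets replicas detect divergence cheaply.
        return hashlib.sha256(repr(sorted(self.ops)).encode()).hexdigest()

def synchronize(replicas):
    # Union all known edits and impose a deterministic total order,
    # so every correct replica converges to the same document state.
    merged = sorted({op for r in replicas for op in r.ops})
    for r in replicas:
        r.ops = list(merged)

a, b = Replica("A"), Replica("B")
a.edit(1, "hello")
b.edit(2, "world")
assert a.digest() != b.digest()  # states diverge temporarily
synchronize([a, b])
assert a.digest() == b.digest()  # periodic sync restores consistency
```

The key design point mirrored here is that edits are applied locally without waiting for agreement, and consistency is restored only at synchronization points.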

Open Access Article: Emotion Elicitation in a Socially Intelligent Service: The Typing Tutor
Computers 2017, 6(2), 14; doi:10.3390/computers6020014
Received: 10 January 2017 / Revised: 21 March 2017 / Accepted: 27 March 2017 / Published: 31 March 2017
Abstract
This paper presents an experimental study on modeling machine emotion elicitation in a socially intelligent service, the typing tutor. The aim of the study is to evaluate the extent to which machine emotion elicitation can influence the affective state (valence and arousal) of the learner during a tutoring session. The tutor provides continuous real-time emotion elicitation via graphically rendered emoticons, as emotional feedback on the learner’s performance. Good performance is rewarded with a positive emoticon, based on the notion of positive reinforcement. Facial emotion recognition software is used to analyze the affective state of the learner for later evaluation. Experimental results show that the correlation between the positive emoticon and the learner’s affective state is significant for all 13 (100%) test participants on the arousal dimension and for 9 (69%) test participants on both affective dimensions. The results also confirm our hypothesis and show that machine emotion elicitation is significant for 11 (85%) of 13 test participants. We conclude that machine emotion elicitation with simple graphical emoticons has promising potential for the future development of the tutor.
(This article belongs to the Special Issue Advances in Affect- and Personality-based Personalized Systems)
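The evaluation above rests on correlating emoticon feedback with the measured affective state. A minimal sketch of the underlying correlation statistic, using made-up illustrative values (the paper's actual data and significance-testing procedure are not reproduced here):

```python
import math

def pearson(xs, ys):
    """Pearson correlation coefficient, the kind of statistic used to
    relate emoticon feedback to valence/arousal measurements."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Illustrative (invented) per-session values, not the study's data:
emoticon_positivity = [0.0, 0.2, 0.5, 0.7, 1.0]
arousal             = [0.1, 0.3, 0.4, 0.8, 0.9]
r = pearson(emoticon_positivity, arousal)  # strongly positive correlation
```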

Open Access Article: Hard Real-Time Task Scheduling in Cloud Computing Using an Adaptive Genetic Algorithm
Computers 2017, 6(2), 15; doi:10.3390/computers6020015
Received: 14 February 2017 / Revised: 19 March 2017 / Accepted: 29 March 2017 / Published: 5 April 2017
Abstract
In the Infrastructure-as-a-Service cloud computing model, virtualized computing resources in the form of virtual machines are provided over the Internet. A user can rent an arbitrary number of computing resources to meet their requirements, making cloud computing an attractive choice for executing real-time tasks. Economical task allocation and scheduling on a set of leased virtual machines is an important problem in the cloud computing environment. This paper proposes a greedy algorithm and a genetic algorithm with adaptive selection of suitable crossover and mutation operations (named AGA) to allocate and schedule real-time tasks with precedence constraints on heterogeneous virtual machines. A comprehensive simulation study has been conducted to evaluate the performance of the proposed algorithms in terms of their solution quality and efficiency. The simulation results show that AGA outperforms the greedy algorithm and a non-adaptive genetic algorithm in terms of solution quality.
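The adaptive idea, choosing operator settings based on the state of the population, can be sketched in a toy form. Everything below (the task costs, the diversity-based mutation rule, the fitness function) is an illustrative assumption, not the paper's AGA; it assigns tasks to VMs so as to minimize the maximum VM load:

```python
import random

def adaptive_ga(task_costs, n_vms, pop_size=30, generations=200, seed=0):
    """Toy adaptive GA sketch: raises the mutation rate when population
    diversity collapses, to escape premature convergence."""
    rng = random.Random(seed)
    n = len(task_costs)

    def fitness(chrom):  # lower makespan (max VM load) is better
        loads = [0.0] * n_vms
        for task, vm in enumerate(chrom):
            loads[vm] += task_costs[task]
        return max(loads)

    pop = [[rng.randrange(n_vms) for _ in range(n)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness)
        diversity = len(set(map(tuple, pop))) / pop_size
        mut_rate = 0.05 if diversity > 0.5 else 0.3  # adaptive operator setting
        next_pop = pop[:2]                           # elitism: keep the best two
        while len(next_pop) < pop_size:
            p1, p2 = rng.sample(pop[:pop_size // 2], 2)
            cut = rng.randrange(1, n)                # one-point crossover
            child = p1[:cut] + p2[cut:]
            child = [rng.randrange(n_vms) if rng.random() < mut_rate else g
                     for g in child]
            next_pop.append(child)
        pop = next_pop
    best = min(pop, key=fitness)
    return best, fitness(best)

assignment, makespan = adaptive_ga([4, 2, 7, 3, 5, 1], n_vms=3)
```

Precedence constraints, which the paper handles, are deliberately omitted to keep the sketch short.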

Open Access Article: Research on Similarity Measurements of 3D Models Based on Skeleton Trees
Computers 2017, 6(2), 17; doi:10.3390/computers6020017
Received: 8 March 2017 / Revised: 19 April 2017 / Accepted: 19 April 2017 / Published: 22 April 2017
Cited by 1
Abstract
There is a growing need to accurately and efficiently recognize similar models within existing model sets, in particular for 3D models. This paper proposes a method of similarity measurement for 3D models, in which the similarity between 3D models is easily, accurately, and automatically calculated by means of skeleton trees constructed by a simple rule. The skeleton operates well as a key descriptor of a 3D model. Specifically, a skeleton tree represents node features (including connection and orientation) that reflect the topology of a 3D model, and branch features (including region and bending degree) that reflect its geometry. The node feature distance is first computed from the node connection distance, defined by the 2-norm, and the node orientation distance, defined by the tangent-space distance. The branch feature distance is then computed as the weighted sum of the average regional distance, defined by the generalized Hausdorff distance, and the average bending-degree distance, defined by curvature. The overall similarity is expressed as the weighted sum of the topology and geometry similarities. The similarity calculation is efficient and accurate because no additional operations such as rotation or translation are required and more topological and geometric information is considered. Experiments demonstrate the feasibility and accuracy of the proposed method.
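As one concrete ingredient of the distance computation, the generalized Hausdorff distance between two point sets can be sketched as follows. This is the standard symmetric Hausdorff distance on small 2D point sets; the paper's exact regional-distance definition may differ:

```python
import math

def hausdorff(A, B):
    """Symmetric Hausdorff distance between two point sets: the largest
    distance from any point in one set to its nearest point in the other."""
    def directed(X, Y):
        return max(min(math.dist(x, y) for y in Y) for x in X)
    return max(directed(A, B), directed(B, A))

A = [(0, 0), (1, 0)]
B = [(0, 0), (0, 2)]
# directed(A, B) = 1 (point (1,0) is 1 away from (0,0));
# directed(B, A) = 2 (point (0,2) is 2 away from (0,0)); Hausdorff = 2.0
d = hausdorff(A, B)
```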

Open Access Article: The Right to Remember: Implementing a Rudimentary Emotive-Effect Layer for Frustration on AI Agent Gameplay Strategy
Computers 2017, 6(2), 18; doi:10.3390/computers6020018
Received: 24 March 2017 / Revised: 27 April 2017 / Accepted: 3 May 2017 / Published: 12 May 2017
Abstract
AI (Artificial Intelligence) is often regarded as a logical way to develop a game agent that methodically evaluates options and delivers rational or irrational solutions. This paper describes the development of an AI agent that plays a game with emotive content similar to that of a human player. The purpose of the study was to see whether the incorporation of this emotive content would influence the outcomes within the game Love Letter. To do this, an AI agent with an emotive layer was developed and played the game over a million times. A lower win/loss ratio demonstrates that, to some extent, this methodology was vindicated, and a 100 per cent win rate for the AI agent did not occur. Machine learning techniques were deliberately modelled to match extreme models of behavioural change. The results showed a win/loss ratio of 0.67 for the AI agent and, in many ways, reflected the frustration that a normal player would exhibit during game play. As hypothesised, the final agent investment value was, on average, lower after match play than its initial value.
(This article belongs to the Special Issue Artificial Intelligence for Computer Games)

Open Access Article: Design of a Convolutional Two-Dimensional Filter in FPGA for Image Processing Applications
Computers 2017, 6(2), 19; doi:10.3390/computers6020019
Received: 14 April 2017 / Revised: 13 May 2017 / Accepted: 15 May 2017 / Published: 17 May 2017
Abstract
Exploiting the Bachet weight decomposition theorem, a new two-dimensional filter is designed. The filter can be adapted to different multimedia applications, but in this work it is specifically targeted at image processing applications. The method emulates standard 32-bit floating-point multipliers using a chain of fixed-point adders and a logic unit that manages the exponent, in order to obtain IEEE-754-compliant results. The proposed design allows a more compact implementation of a floating-point filtering architecture when a fixed set of coefficients and a fixed range of input values are used. Data processing proceeds in raster-scan order, and the filter can directly process data coming from the acquisition source thanks to a careful organization of the memories, avoiding frame buffers or any aligning circuitry. The proposed architecture achieves a state-of-the-art critical path delay of 4.7 ns when implemented on a Xilinx Virtex 7 FPGA.
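A software sketch helps fix ideas: a 2D convolution processed in raster-scan order, the same pixel order in which a streaming hardware filter receives its input. The paper's fixed-point/exponent datapath is not modeled here; this is plain Python arithmetic with zero padding at the borders:

```python
def conv2d_raster(image, kernel):
    """2D convolution in raster-scan order (row by row, pixel by pixel),
    with zero padding outside the image borders."""
    H, W = len(image), len(image[0])
    K = len(kernel)
    r = K // 2
    out = [[0.0] * W for _ in range(H)]
    for y in range(H):            # raster-scan: one row at a time,
        for x in range(W):        # left to right within each row
            acc = 0.0
            for ky in range(K):
                for kx in range(K):
                    yy, xx = y + ky - r, x + kx - r
                    if 0 <= yy < H and 0 <= xx < W:
                        acc += image[yy][xx] * kernel[ky][kx]
            out[y][x] = acc
    return out

# A 3x3 box blur over a constant image leaves interior pixels unchanged;
# border pixels shrink because the zero padding contributes nothing.
img = [[1.0] * 5 for _ in range(5)]
box = [[1 / 9] * 3 for _ in range(3)]
blurred = conv2d_raster(img, box)
```

In hardware, the per-pixel window would come from line buffers filled in this same raster order, which is what lets the design avoid frame buffers.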

Open Access Article: Comparison of Four SVM Classifiers Used with Depth Sensors to Recognize Arabic Sign Language Words
Computers 2017, 6(2), 20; doi:10.3390/computers6020020
Received: 14 April 2017 / Revised: 30 May 2017 / Accepted: 12 June 2017 / Published: 15 June 2017
Abstract
The objective of this research was to recognize the hand gestures of Arabic Sign Language (ArSL) words using two depth sensors. The researchers developed a model to examine 143 signs gestured by 10 users for 5 ArSL words (the dataset). The sensors captured depth images of the upper human body, from which 235 angles (features) were extracted for each joint and between each pair of bones. The dataset was divided into a training set (109 observations) and a testing set (34 observations). The support vector machine (SVM) classifier was configured with different parameters on the gestured words’ dataset to produce four SVM models, with linear-kernel (SVMLD and SVMLT) and radial-kernel (SVMRD and SVMRT) functions. The overall identification accuracy on the training set for the SVMLD, SVMLT, SVMRD, and SVMRT models was 88.92%, 88.92%, 90.88%, and 90.88%, respectively. The accuracy on the testing set for SVMLD, SVMLT, SVMRD, and SVMRT was 97.059%, 97.059%, 94.118%, and 97.059%, respectively. Therefore, since the two kernels produced models close in performance, it is far more efficient to use the less complex model (the linear kernel) with default parameters.
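The two kernel families compared above are standard. A minimal sketch of both kernel functions (the gamma value is an arbitrary illustration; the paper's actual SVM parameter settings are not reproduced here):

```python
import math

def linear_kernel(x, y):
    """k(x, y) = x . y  -- the linear kernel (as in SVMLD/SVMLT)."""
    return sum(a * b for a, b in zip(x, y))

def rbf_kernel(x, y, gamma=0.1):
    """k(x, y) = exp(-gamma * ||x - y||^2) -- the radial (RBF) kernel
    (as in SVMRD/SVMRT). gamma here is an illustrative value."""
    sq = sum((a - b) ** 2 for a, b in zip(x, y))
    return math.exp(-gamma * sq)

x, y = [1.0, 2.0], [2.0, 0.0]
lin = linear_kernel(x, y)          # 1*2 + 2*0 = 2.0
rbf = rbf_kernel(x, y, gamma=0.1)  # exp(-0.1 * (1 + 4)) = exp(-0.5)
```

The linear kernel has no extra hyperparameter, which is one reason the abstract's recommendation to prefer it with default settings makes practical sense.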

Open Access Article: Enhancing BER Performance Limit of BCH and RS Codes Using Multipath Diversity
Computers 2017, 6(2), 21; doi:10.3390/computers6020021
Received: 6 April 2017 / Revised: 3 June 2017 / Accepted: 12 June 2017 / Published: 16 June 2017
Abstract
Modern wireless communication systems suffer from phase shifting and, more importantly, from interference caused by multipath propagation. Multipath propagation results in an antenna receiving two or more copies of the signal sequence sent from the same source but delivered via different paths. Multipath components are treated as redundant copies of the original data sequence and are used to improve the performance of forward error correction (FEC) codes without extra redundancy, in order to improve data transmission reliability and increase the bit rate over the wireless communication channel. As a proof of concept, Bose, Ray-Chaudhuri, and Hocquenghem (BCH) and Reed–Solomon (RS) codes have been used for FEC to compare their bit error rate (BER) performance. The results showed that the wireless multipath components significantly improve the performance of FEC. Furthermore, FEC codes with low error correction capability that employ the multipath phenomenon are enhanced to perform better than FEC codes that have slightly higher error correction capability but do not utilise the multipath. Consequently, the bit rate is increased and communication reliability is improved without extra redundancy.
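One simple way to exploit redundant multipath copies before FEC decoding is a bitwise majority vote across the received copies. This is a hypothetical combining step for illustration, not necessarily the scheme used in the paper:

```python
from collections import Counter

def majority_combine(copies):
    """Bitwise majority vote across multipath copies of the same sequence.
    Errors that hit only one path are voted out, lowering the bit-error
    rate seen by the BCH/RS decoder without extra transmitted redundancy."""
    n = len(copies[0])
    assert all(len(c) == n for c in copies)
    return [Counter(c[i] for c in copies).most_common(1)[0][0]
            for i in range(n)]

sent = [1, 0, 1, 1, 0, 0, 1, 0]
# Three received copies, each corrupted independently:
paths = [
    [1, 0, 1, 1, 0, 0, 1, 0],   # clean copy
    [1, 1, 1, 1, 0, 0, 1, 0],   # bit error at position 1
    [1, 0, 1, 0, 0, 0, 1, 0],   # bit error at position 3
]
combined = majority_combine(paths)
assert combined == sent          # single-path errors are voted out
```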

Review


Open Access Feature Paper Review: Reliability of NAND Flash Memories: Planar Cells and Emerging Issues in 3D Devices
Computers 2017, 6(2), 16; doi:10.3390/computers6020016
Received: 3 March 2017 / Revised: 13 April 2017 / Accepted: 18 April 2017 / Published: 21 April 2017
Abstract
We review the state of the art in the understanding of planar NAND Flash memory reliability and discuss how the recent move to three-dimensional (3D) devices has affected this field. Particular emphasis is placed on mechanisms that develop over the lifetime of the memory array, as opposed to time-zero or technological issues, and the viewpoint is focused on understanding the root causes. The impressive amount of published work demonstrates that Flash reliability is a complex yet well-understood field, where nonetheless tighter and tighter constraints are set by device scaling. Three-dimensional NAND devices have broken with the traditional scaling scenario, leading to improvements in performance and reliability while raising new issues to be dealt with, determined by the newer and more complex cell and array architectures as well as operation modes. A thorough understanding of the complex phenomena involved in the operation and reliability of NAND cells remains vital for the development of future technology nodes.
(This article belongs to the Special Issue 3D Flash Memories)
