
Table of Contents

Algorithms, Volume 10, Issue 2 (June 2017)

  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the tables of contents of newly released issues.
  • PDF is the official format for papers, which are published in both HTML and PDF forms. To view a paper in PDF format, click on the "PDF Full-text" link and open it with the free Adobe Reader.

Research

Jump to: Review, Other

Open Access Article: DNA Paired Fragment Assembly Using Graph Theory
Algorithms 2017, 10(2), 36; doi:10.3390/a10020036
Received: 26 January 2017 / Revised: 27 February 2017 / Accepted: 17 March 2017 / Published: 24 March 2017
PDF Full-text (3509 KB) | HTML Full-text | XML Full-text
Abstract
DNA fragment assembly poses an important computational problem, arising from the structure of the fragments and the volume of data. It is therefore important to develop algorithms able to produce high-quality information while using computer resources efficiently. Such an algorithm, using graph theory, is introduced in the present article. We first determine the overlaps between DNA fragments, obtaining the edges of a directed graph; with this information, the next step is to construct an adjacency list with some particularities. Using the adjacency list, it is possible to obtain the DNA contigs (groups of assembled fragments building a contiguous element) using graph theory. We performed a set of experiments on real DNA data and compared our results to those obtained with common assemblers (Edena and Velvet). Finally, we searched for the contigs in the original genome, both in our results and in those of Edena and Velvet. Full article
(This article belongs to the Special Issue Networks, Communication, and Computing)
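
To make the overlap-graph idea concrete, here is a minimal sketch in Python. It is not the authors' implementation; the greedy best-successor rule, the toy fragments, and all function names are illustrative assumptions.

    # Hypothetical sketch of overlap-graph contig assembly (not the paper's code).
    # An edge a -> b exists when a suffix of fragment a matches a prefix of b.

    def overlap(a: str, b: str, min_len: int = 3) -> int:
        """Length of the longest suffix of `a` that is a prefix of `b`."""
        for n in range(min(len(a), len(b)), min_len - 1, -1):
            if a.endswith(b[:n]):
                return n
        return 0

    def build_adjacency(fragments):
        """Directed adjacency list keeping only the best successor per fragment."""
        adj = {}
        for a in fragments:
            best = max(((overlap(a, b), b) for b in fragments if b != a),
                       default=(0, None))
            if best[0] > 0:
                adj[a] = best
        return adj

    def contigs(fragments):
        """Greedily walk unbranched chains of the graph, merging overlaps."""
        adj = build_adjacency(fragments)
        has_pred = {b for (_, b) in adj.values()}
        out = []
        for start in fragments:
            if start in has_pred:
                continue                      # start walks only at source vertices
            contig, node, seen = start, start, {start}
            while node in adj:
                olen, nxt = adj[node]
                if nxt in seen:
                    break                     # avoid cycles
                contig += nxt[olen:]          # append the non-overlapping tail
                seen.add(nxt)
                node = nxt
            out.append(contig)
        return out

    print(contigs(["ATTAGACCTG", "CCTGCCGGAA", "AGACCTGCCG", "GCCGGAATAC"]))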

Open Access Article: A Spatial-Temporal-Semantic Neural Network Algorithm for Location Prediction on Moving Objects
Algorithms 2017, 10(2), 37; doi:10.3390/a10020037
Received: 20 February 2017 / Revised: 21 March 2017 / Accepted: 22 March 2017 / Published: 24 March 2017
PDF Full-text (12458 KB) | HTML Full-text | XML Full-text
Abstract
Location prediction has attracted much attention due to its important role in many location-based services, such as food delivery, taxi-service, real-time bus system, and advertisement posting. Traditional prediction methods often cluster track points into regions and mine movement patterns within the regions. Such methods lose information of points along the road and cannot meet the demand of specific services. Moreover, traditional methods utilizing classic models may not perform well with long location sequences. In this paper, a spatial-temporal-semantic neural network algorithm (STS-LSTM) has been proposed, which includes two steps. First, the spatial-temporal-semantic feature extraction algorithm (STS) is used to convert the trajectory to location sequences with fixed and discrete points in the road networks. The method can take advantage of points along the road and can transform trajectory into model-friendly sequences. Then, a long short-term memory (LSTM)-based model is constructed to make further predictions, which can better deal with long location sequences. Experimental results on two real-world datasets show that STS-LSTM has stable and higher prediction accuracy over traditional feature extraction and model building methods, and the application scenarios of the algorithm are illustrated. Full article
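
A minimal sketch of the LSTM step, assuming the STS stage has already mapped raw trajectories to sequences of discrete road-network point IDs. This is an illustrative PyTorch model, not the authors' code; vocabulary size and layer widths are assumptions.

    import torch
    import torch.nn as nn

    class NextLocationLSTM(nn.Module):
        def __init__(self, n_locations: int, embed_dim: int = 64, hidden: int = 128):
            super().__init__()
            self.embed = nn.Embedding(n_locations, embed_dim)
            self.lstm = nn.LSTM(embed_dim, hidden, batch_first=True)
            self.head = nn.Linear(hidden, n_locations)

        def forward(self, seq):                 # seq: (batch, time) int64 point IDs
            h, _ = self.lstm(self.embed(seq))   # (batch, time, hidden)
            return self.head(h[:, -1])          # logits for the next location

    model = NextLocationLSTM(n_locations=1000)
    batch = torch.randint(0, 1000, (8, 20))     # 8 sequences of 20 visited points
    logits = model(batch)                       # (8, 1000)
    loss = nn.functional.cross_entropy(logits, torch.randint(0, 1000, (8,)))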

Open Access Article: An Asynchronous Message-Passing Distributed Algorithm for the Generalized Local Critical Section Problem
Algorithms 2017, 10(2), 38; doi:10.3390/a10020038
Received: 27 January 2017 / Revised: 14 March 2017 / Accepted: 22 March 2017 / Published: 24 March 2017
PDF Full-text (265 KB) | HTML Full-text | XML Full-text
Abstract
This paper discusses the generalized local version of critical section problems, including mutual exclusion, mutual inclusion, k-mutual exclusion and l-mutual inclusion. When a pair of numbers (l_i, k_i) is given for each process P_i, the problem is to control the system in such a way that the number of processes that can execute their critical sections at a time, among P_i and its neighboring processes, is at least l_i and at most k_i. We propose the first solution for the generalized local (l_i, |N_i| + 1)-critical section problem (i.e., the generalized local l_i-mutual inclusion problem). Additionally, we show the relationship between the generalized local (l_i, k_i)-critical section problem and the generalized local (|N_i| + 1 − k_i, |N_i| + 1 − l_i)-critical section problem. Finally, we propose the first solution for the generalized local (l_i, k_i)-critical section problem for arbitrary (l_i, k_i), where 0 ≤ l_i ≤ k_i ≤ |N_i| + 1 for each process P_i. Full article
(This article belongs to the Special Issue Networks, Communication, and Computing)
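
The stated duality between the (l_i, k_i) problem and the (|N_i| + 1 − k_i, |N_i| + 1 − l_i) problem can be checked exhaustively on a small neighborhood; the following is a hypothetical illustration, not code from the paper.

    # A configuration is feasible for (l, k) on P_i plus its N_i neighbors exactly
    # when its complement is feasible for (|N_i| + 1 - k, |N_i| + 1 - l).
    from itertools import product

    def feasible(in_cs, l, k):
        """in_cs: 0/1 flags for P_i and its neighbors (|N_i| + 1 bits total)."""
        return l <= sum(in_cs) <= k

    n_plus_1 = 4                      # P_i plus 3 neighbors
    l, k = 1, 3
    for cfg in product((0, 1), repeat=n_plus_1):
        complement = tuple(1 - x for x in cfg)
        assert feasible(cfg, l, k) == feasible(
            complement, n_plus_1 - k, n_plus_1 - l)
    print("duality holds for all", 2 ** n_plus_1, "configurations")
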
Open Access Article: Fuzzy Random Walkers with Second Order Bounds: An Asymmetric Analysis
Algorithms 2017, 10(2), 40; doi:10.3390/a10020040
Received: 22 December 2016 / Revised: 22 March 2017 / Accepted: 27 March 2017 / Published: 30 March 2017
PDF Full-text (315 KB) | HTML Full-text | XML Full-text
Abstract
Edge-fuzzy graphs constitute an essential modeling paradigm across a broad spectrum of domains, ranging from artificial intelligence to computational neuroscience and social network analysis. Under this model, fundamental graph properties such as edge length and graph diameter become stochastic and are consequently expressed in probabilistic terms. Thus, algorithms for fuzzy graph analysis must rely on non-deterministic design principles. One such principle is the random walker, a virtual entity that visits either edges or, as in this case, vertices of a fuzzy graph. This allows the estimation of global graph properties through a long sequence of local decisions, making it a viable strategy candidate for graph processing software relying on native graph databases such as Neo4j. As a concrete example, Chebyshev Walktrap, a heuristic fuzzy community discovery algorithm relying on second order statistics and on the teleportation of the random walker, is proposed and its performance, expressed in terms of community coherence and number of vertex visits, is compared to the previously proposed algorithms of Markov Walktrap, Fuzzy Walktrap, and Fuzzy Newman–Girvan. In order to facilitate this comparison, a metric based on the asymmetric metrics of the Tversky index and the Kullback–Leibler divergence is used. Full article
(This article belongs to the Special Issue Humanistic Data Processing)
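
A toy sketch of a vertex-visiting random walker with teleportation on an edge-fuzzy graph. This is illustrative only: it is not the Chebyshev Walktrap algorithm itself, and the membership matrix and teleportation rate are assumptions.

    # Edge membership degrees play the role of unnormalized transition weights.
    import numpy as np

    rng = np.random.default_rng(0)
    W = np.array([[0.0, 0.9, 0.1, 0.0],     # fuzzy membership of each edge
                  [0.9, 0.0, 0.8, 0.1],
                  [0.1, 0.8, 0.0, 0.7],
                  [0.0, 0.1, 0.7, 0.0]])
    teleport = 0.15                          # probability of jumping anywhere

    def walk(start: int, steps: int) -> np.ndarray:
        visits = np.zeros(len(W))
        v = start
        for _ in range(steps):
            if rng.random() < teleport or W[v].sum() == 0:
                v = rng.integers(len(W))     # teleportation escapes local traps
            else:
                v = rng.choice(len(W), p=W[v] / W[v].sum())
            visits[v] += 1
        return visits / steps

    print(walk(0, 10_000))  # empirical visit frequencies estimate a global property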

Open Access Article: RST Resilient Watermarking Scheme Based on DWT-SVD and Scale-Invariant Feature Transform
Algorithms 2017, 10(2), 41; doi:10.3390/a10020041
Received: 17 January 2017 / Revised: 2 March 2017 / Accepted: 22 March 2017 / Published: 30 March 2017
PDF Full-text (20927 KB) | HTML Full-text | XML Full-text
Abstract
Currently, most digital image watermarking schemes are affected by geometric attacks such as rotation, scaling, and translation (RST). In this paper, a watermarking scheme that is robust against RST attacks is proposed. In the embedding process, a three-level discrete wavelet transform (DWT) is applied to the original image. The three-level low-frequency sub-band is decomposed by the singular value decomposition (SVD), and its singular value matrix is extracted for watermark embedding. Before the watermark extraction, keypoints are selected by the scale-invariant feature transform (SIFT) in the original image and the attacked image. By matching the keypoints in the two images, the RST attacks can be precisely corrected and better performance can be obtained. The experimental results show that the proposed scheme achieves good imperceptibility and robustness against common image processing and malicious attacks, especially geometric attacks. Full article
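
A minimal sketch of the DWT-SVD embedding step, assuming a simple additive perturbation of the singular values of the level-3 LL sub-band; the paper's exact embedding rule and the SIFT-based RST correction are omitted.

    import numpy as np
    import pywt

    def embed(image: np.ndarray, watermark: np.ndarray, alpha: float = 0.05):
        coeffs = pywt.wavedec2(image.astype(float), "haar", level=3)
        ll3 = coeffs[0]                              # level-3 low-frequency sub-band
        u, s, vt = np.linalg.svd(ll3, full_matrices=False)
        s_marked = s + alpha * watermark[: len(s)]   # perturb the singular values
        coeffs[0] = u @ np.diag(s_marked) @ vt
        return pywt.waverec2(coeffs, "haar")

    img = np.random.rand(256, 256)
    wm = np.random.rand(32)
    marked = embed(img, wm)
    print(np.abs(marked - img).max())   # distortion introduced by the watermark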

Open Access Article: RGloVe: An Improved Approach of Global Vectors for Distributional Entity Relation Representation
Algorithms 2017, 10(2), 42; doi:10.3390/a10020042
Received: 10 January 2017 / Revised: 21 March 2017 / Accepted: 13 April 2017 / Published: 17 April 2017
PDF Full-text (1247 KB) | HTML Full-text | XML Full-text
Abstract
Most previous work on relation extraction between named entities is limited to extracting pre-defined relation types, which is inefficient for massive unlabeled text data. Recently, with the appearance of various distributional word representations, unsupervised methods for many natural language processing (NLP) tasks have been widely researched. In this paper, we focus on a new approach to unsupervised relation extraction, called distributional relation representation. Without requiring pre-defined types, distributional relation representation aims to automatically learn entity vectors and further estimate the semantic similarity between these entities. We choose global vectors (GloVe) as our base model for training entity vectors because of its excellent balance between local context and global statistics over the whole corpus. In order to train the model more efficiently, we improve the traditional GloVe model by using the cosine similarity between entity vectors, instead of their dot product, to approximate the entity co-occurrences. Because cosine similarity normalizes vectors to unit length, it is intuitively more reasonable and converges more easily to a local optimum. We call the improved model RGloVe. Experimental results on a massive corpus of Sina News show that our proposed model outperforms traditional global vectors. Finally, the graph database Neo4j is introduced to store these relationships between named entities. The most competitive advantage of Neo4j is that it provides a highly accessible way to query the direct and indirect relationships between entities. Full article
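
A sketch of the described modification in an assumed form (this is not the released RGloVe code): the per-pair GloVe loss, with the dot product optionally replaced by cosine similarity.

    import numpy as np

    def glove_pair_loss(w_i, w_j, b_i, b_j, x_ij, use_cosine: bool):
        if use_cosine:
            score = w_i @ w_j / (np.linalg.norm(w_i) * np.linalg.norm(w_j))
        else:
            score = w_i @ w_j                  # original GloVe scoring
        f = min((x_ij / 100.0) ** 0.75, 1.0)   # standard GloVe weighting function
        return f * (score + b_i + b_j - np.log(x_ij)) ** 2

    rng = np.random.default_rng(1)
    w1, w2 = rng.normal(size=50), rng.normal(size=50)
    print(glove_pair_loss(w1, w2, 0.0, 0.0, x_ij=12.0, use_cosine=True))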

Open Access Article: Reliable Portfolio Selection Problem in Fuzzy Environment: An mλ Measure Based Approach
Algorithms 2017, 10(2), 43; doi:10.3390/a10020043
Received: 16 February 2017 / Revised: 31 March 2017 / Accepted: 13 April 2017 / Published: 18 April 2017
PDF Full-text (809 KB) | HTML Full-text | XML Full-text
Abstract
This paper investigates a fuzzy portfolio selection problem with guaranteed reliability, in which fuzzy variables are used to capture the uncertain returns of different securities. To effectively handle the fuzziness in a mathematical way, a new expected value operator and variance of fuzzy variables are defined based on the mλ measure, a linear combination of the possibility measure and the necessity measure that balances pessimism and optimism in the decision-making process. To formulate the reliable portfolio selection problem, we adopt the expected total return and the standard variance of the total return to evaluate the reliability of the investment strategies, producing three risk-guaranteed reliable portfolio selection models. To solve the proposed models, an effective genetic algorithm is designed to generate an approximate optimal solution to the considered problem. Finally, numerical examples are given to show the performance of the proposed models and algorithm. Full article
(This article belongs to the Special Issue Extensions to Type-1 Fuzzy Logic: Theory, Algorithms and Applications)
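
An illustrative sketch of the mλ measure for a triangular fuzzy variable, assuming the standard possibility/necessity definitions; the fuzzy return and λ values below are examples only, not the paper's data.

    # m_lambda = lambda * Pos + (1 - lambda) * Nec, so lambda interpolates
    # between the optimistic and pessimistic evaluations of an event.
    def pos_geq(t, a, b, c):
        """Possibility of the event {xi >= t} for triangular xi = (a, b, c)."""
        if t <= b:
            return 1.0
        if t >= c:
            return 0.0
        return (c - t) / (c - b)

    def nec_geq(t, a, b, c):
        """Necessity of {xi >= t} = 1 - possibility of the complement {xi < t}."""
        if t <= a:
            return 1.0
        if t >= b:
            return 0.0
        return 1.0 - (t - a) / (b - a)

    def m_lambda_geq(t, a, b, c, lam):
        return lam * pos_geq(t, a, b, c) + (1 - lam) * nec_geq(t, a, b, c)

    # Fuzzy return (-0.1, 0.2, 0.5): chance the return is at least 10%.
    for lam in (0.0, 0.5, 1.0):          # pessimistic -> balanced -> optimistic
        print(lam, m_lambda_geq(0.1, -0.1, 0.2, 0.5, lam))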

Open Access Article: Revised Gravitational Search Algorithms Based on Evolutionary-Fuzzy Systems
Algorithms 2017, 10(2), 44; doi:10.3390/a10020044
Received: 25 January 2017 / Revised: 4 April 2017 / Accepted: 18 April 2017 / Published: 21 April 2017
Cited by 5 | PDF Full-text (400 KB) | HTML Full-text | XML Full-text
Abstract
The choice of the best optimization algorithm is a hard issue, and it sometimes depends on the specific problem. The Gravitational Search Algorithm (GSA) is a search algorithm based on the law of gravity, which states that each particle attracts every other particle with a force called the gravitational force. Some revised versions of GSA have been proposed by using intelligent techniques. This work proposes some GSA versions based on fuzzy techniques powered by evolutionary methods, such as Genetic Algorithms (GA), Particle Swarm Optimization (PSO) and Differential Evolution (DE), to improve GSA. The designed algorithms tune a suitable parameter of GSA through a fuzzy controller whose membership functions are optimized by GA, PSO and DE. The results show that the Fuzzy Gravitational Search Algorithm (FGSA) optimized by DE is optimal for unimodal functions, whereas FGSA optimized through GA is good for multimodal functions. Full article
(This article belongs to the Special Issue Extensions to Type-1 Fuzzy Logic: Theory, Algorithms and Applications)

Open Access Article: An Efficient Sixth-Order Newton-Type Method for Solving Nonlinear Systems
Algorithms 2017, 10(2), 45; doi:10.3390/a10020045
Received: 26 January 2017 / Revised: 8 April 2017 / Accepted: 20 April 2017 / Published: 25 April 2017
PDF Full-text (407 KB) | HTML Full-text | XML Full-text
Abstract
In this paper, we present a new sixth-order iterative method for solving nonlinear systems and prove a local convergence result. The new method requires solving five linear systems per iteration. An important feature of the new method is that the LU (lower-upper) factorization of the Jacobian matrix is computed only once in each iteration. The computational efficiency index of the new method is compared to that of some known methods. Numerical results are given to show that the convergence behavior of the new method is similar to that of the existing methods. The new method can be applied to small- and medium-sized nonlinear systems. Full article
(This article belongs to the Special Issue Numerical Algorithms for Solving Nonlinear Equations and Systems 2017)
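
A sketch of the cost-saving device mentioned above: factor the Jacobian once per iteration, then reuse the factors for every correction step. The three-step scheme below is a generic frozen-Jacobian stand-in, not the paper's exact sixth-order iteration.

    import numpy as np
    from scipy.linalg import lu_factor, lu_solve

    def multi_step_newton(F, J, x, steps=3, iters=20, tol=1e-12):
        for _ in range(iters):
            lu_piv = lu_factor(J(x))          # one LU decomposition per iteration
            for _ in range(steps):            # all inner solves reuse the factors
                x = x - lu_solve(lu_piv, F(x))
            if np.linalg.norm(F(x)) < tol:
                break
        return x

    F = lambda v: np.array([v[0] ** 2 + v[1] ** 2 - 4.0, v[0] - v[1]])
    J = lambda v: np.array([[2 * v[0], 2 * v[1]], [1.0, -1.0]])
    print(multi_step_newton(F, J, np.array([1.0, 0.5])))  # ~ (sqrt(2), sqrt(2))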

Open Access Article: An Improved Multiobjective Particle Swarm Optimization Based on Culture Algorithms
Algorithms 2017, 10(2), 46; doi:10.3390/a10020046
Received: 14 February 2017 / Revised: 14 April 2017 / Accepted: 18 April 2017 / Published: 25 April 2017
PDF Full-text (2903 KB) | HTML Full-text | XML Full-text
Abstract
In this paper, we propose a new approach to raise the performance of multiobjective particle swarm optimization. The personal guide and global guide are updated using three kinds of knowledge extracted from the population based on cultural algorithms. An epsilon domination criterion has been employed to enhance the convergence and diversity of the approximate Pareto front. Moreover, a simple polynomial mutation operator has been applied to both the population and the non-dominated archive. Experiments on two series of benchmark test suites have shown the effectiveness of the proposed approach. A comparison with several other algorithms that are considered good representatives of particle swarm optimization solutions has also been conducted, in order to verify the competitive performance of the proposed algorithm in solving multiobjective optimization problems. Full article
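
One common form of the epsilon domination criterion, sketched here for minimization; the paper's exact variant may differ.

    import numpy as np

    def epsilon_dominates(f_a, f_b, eps):
        """True if objective vector f_a additively epsilon-dominates f_b."""
        a, b = np.asarray(f_a), np.asarray(f_b)
        return np.all(a - eps <= b) and np.any(a - eps < b)

    print(epsilon_dominates([1.00, 2.00], [1.05, 2.05], eps=0.1))  # True
    print(epsilon_dominates([1.00, 2.00], [0.80, 2.50], eps=0.1))  # False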

Open Access Article: Trust in the Balance: Data Protection Laws as Tools for Privacy and Security in the Cloud
Algorithms 2017, 10(2), 47; doi:10.3390/a10020047
Received: 24 January 2017 / Revised: 14 April 2017 / Accepted: 18 April 2017 / Published: 27 April 2017
PDF Full-text (209 KB) | HTML Full-text | XML Full-text
Abstract
A popular bumper sticker states: “There is no cloud. It’s just someone else’s computer.” Despite the loss of control that comes with its use, critical records are increasingly being entrusted to the cloud, generating ever-growing concern about the privacy and security of those records. Ultimately, privacy and security constitute an attempt to balance competing needs: privacy balances the need to use information against the need to protect personal data, while security balances the need to provide access to records against the need to stop unauthorized access. The importance of these issues has led to a multitude of legal and regulatory efforts to find a balance and, ultimately, to ensure trust in both digital records and their storage in the cloud. Adding a particular challenge is the fact that distinct jurisdictions approach privacy differently and an in-depth understanding of what a jurisdiction’s laws may be, or even under what jurisdiction particular data might be, requires a Herculean effort. And yet, in order to protect privacy and enhance security, this effort is required. This article examines two legal tools for ensuring the privacy and security of records in the cloud, data protection laws, and data localization laws, through the framework of “trust” as understood in archival science. This framework of trust provides new directions for algorithmic research, identifying those areas of digital record creation and preservation most in need of novel solutions. Full article
(This article belongs to the Special Issue Security and Privacy in Cloud Computing Environments)
Open Access Article: Adaptive Mutation Dynamic Search Fireworks Algorithm
Algorithms 2017, 10(2), 48; doi:10.3390/a10020048
Received: 23 February 2017 / Revised: 20 April 2017 / Accepted: 25 April 2017 / Published: 28 April 2017
PDF Full-text (1481 KB) | HTML Full-text | XML Full-text
Abstract
The Dynamic Search Fireworks Algorithm (dynFWA) is an effective algorithm for solving optimization problems. However, dynFWA easily falls into local optimal solutions prematurely and also has a slow convergence rate. To address these problems, an adaptive mutation dynamic search fireworks algorithm (AMdynFWA) is introduced in this paper. The proposed algorithm applies either the Gaussian mutation or the Levy mutation to the core firework (CF), selected according to a mutation probability. Our simulation compares the proposed algorithm with FWA-based algorithms and other swarm intelligence algorithms. The results show that the proposed algorithm achieves better overall performance on the standard test functions. Full article
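
A sketch of the mutation step as described, with assumed step sizes and mutation probability; Mantegna's algorithm is a standard way to draw Levy-stable steps and is used here as a plausible stand-in.

    import numpy as np

    rng = np.random.default_rng(42)

    def levy_step(dim: int, beta: float = 1.5) -> np.ndarray:
        """Mantegna's algorithm for heavy-tailed Levy step lengths."""
        from math import gamma, pi, sin
        sigma = (gamma(1 + beta) * sin(pi * beta / 2)
                 / (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
        u = rng.normal(0, sigma, dim)
        v = rng.normal(0, 1, dim)
        return u / np.abs(v) ** (1 / beta)

    def mutate_core_firework(cf: np.ndarray, p_gauss: float = 0.5) -> np.ndarray:
        if rng.random() < p_gauss:
            return cf + rng.normal(0.0, 1.0, cf.size) * cf   # Gaussian mutation
        return cf + levy_step(cf.size) * cf                  # Levy mutation (long jumps)

    print(mutate_core_firework(np.array([0.5, -1.2, 3.0])))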

Open Access Article: Multivariate Statistical Process Control Using Enhanced Bottleneck Neural Network
Algorithms 2017, 10(2), 49; doi:10.3390/a10020049
Received: 10 March 2017 / Revised: 24 April 2017 / Accepted: 24 April 2017 / Published: 29 April 2017
PDF Full-text (2154 KB) | HTML Full-text | XML Full-text
Abstract
Monitoring process upsets and malfunctions as early as possible, and then finding and removing the factors causing the respective events, is of great importance for safe operation and improved productivity. Conventional process monitoring using principal component analysis (PCA) often supposes that process data follow a Gaussian distribution. However, this kind of constraint cannot be satisfied in practice because many industrial processes frequently span multiple operating states. To overcome this difficulty, PCA can be combined with nonparametric control charts, which require no assumption on the distribution. However, this approach still uses a constant confidence limit, and thus a relatively high rate of false alarms is generated. Although nonlinear PCA (NLPCA) using autoassociative bottleneck neural networks plays an important role in the monitoring of industrial processes, it is difficult to design correct monitoring statistics and confidence limits that reflect the current performance. In this work, a new monitoring strategy using an enhanced bottleneck neural network (EBNN) with an adaptive confidence limit for non-Gaussian data is proposed. The basic idea behind it is to extract internally homogeneous segments from the historical normal data sets by fitting a Gaussian mixture model (GMM). Based on the assumption that process data follow a Gaussian distribution within an operating mode, a local confidence limit can be established. The EBNN is used to reconstruct input data and estimate probabilities of belonging to the various local operating regimes, as modelled by the GMM. An abnormal event for an input measurement vector is detected if the squared prediction error (SPE) is too large, i.e., above a certain threshold which is made adaptive. Moreover, the sensor validity index (SVI) is employed successfully to identify the detected faulty variable. The results demonstrate that, compared with NLPCA, the proposed approach can effectively reduce the number of false alarms, and is hence expected to better monitor many practical processes. Full article
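
An illustrative sketch of the mode-aware monitoring idea, with plain PCA standing in for the EBNN reconstruction; the two-mode synthetic data, the percentile-based limit, and all settings are assumptions.

    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.mixture import GaussianMixture

    rng = np.random.default_rng(0)
    normal = np.vstack([rng.normal(0, 1, (500, 5)),        # two operating modes
                        rng.normal(5, 0.5, (500, 5))])

    gmm = GaussianMixture(n_components=2, random_state=0).fit(normal)
    pca = PCA(n_components=2).fit(normal)

    def spe(x):
        """Squared prediction error of the PCA reconstruction."""
        return np.sum((x - pca.inverse_transform(pca.transform(x))) ** 2, axis=1)

    # Per-mode adaptive limit: 99th percentile of SPE within each operating mode.
    modes = gmm.predict(normal)
    limits = np.array([np.percentile(spe(normal[modes == k]), 99) for k in (0, 1)])

    x_new = rng.normal(0, 3, (1, 5))                       # a suspicious sample
    alarm = spe(x_new)[0] > limits[gmm.predict(x_new)[0]]
    print("fault detected:", alarm)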

Open Access Article: Hierarchical Parallel Evaluation of a Hamming Code
Algorithms 2017, 10(2), 50; doi:10.3390/a10020050
Received: 29 March 2017 / Revised: 20 April 2017 / Accepted: 27 April 2017 / Published: 30 April 2017
PDF Full-text (260 KB) | HTML Full-text | XML Full-text
Abstract
The Hamming code is a well-known error correction code that can correct a single error in an input vector of size n bits by adding log n parity checks. A new parallel implementation of the code is presented, using a hierarchical structure of n processors in log n layers. All the processors perform similar simple tasks and need only a few bytes of internal memory. Full article
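
A sequential sketch of the computation the paper parallelizes: each of the log n parity checks covers the bit positions whose index has a given bit set, and together the checks form the syndrome that locates a single error.

    def syndrome(bits):
        """bits uses 1-based positions; returns the erroneous position (0 = none)."""
        n = len(bits) - 1
        s, p = 0, 1
        while p <= n:                      # one parity check per bit of the index
            parity = 0
            for i in range(1, n + 1):
                if i & p:
                    parity ^= bits[i]
            s |= p * parity
            p <<= 1
        return s

    # A valid Hamming(7,4) codeword, then the same word with bit 5 flipped.
    word = [None, 0, 1, 1, 0, 0, 1, 1]     # index 0 unused (1-based positions)
    print(syndrome(word))                  # 0: no error
    word[5] ^= 1
    print(syndrome(word))                  # 5: error located and correctable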

Open Access Article: Adaptive Vector Quantization for Lossy Compression of Image Sequences
Algorithms 2017, 10(2), 51; doi:10.3390/a10020051
Received: 23 January 2017 / Revised: 24 April 2017 / Accepted: 4 May 2017 / Published: 9 May 2017
PDF Full-text (24119 KB) | HTML Full-text | XML Full-text
Abstract
In this work, we present a scheme for the lossy compression of image sequences, based on the Adaptive Vector Quantization (AVQ) algorithm. The AVQ algorithm is a lossy compression algorithm for grayscale images, which processes the input data in a single pass, by using the properties of vector quantization to approximate data. First, we review the key aspects of the AVQ algorithm and, subsequently, we outline the basic concepts and the design choices behind the proposed scheme. Finally, we report the experimental results, which highlight an improvement in compression performance when our scheme is compared with the AVQ algorithm. Full article
(This article belongs to the Special Issue Data Compression, Communication Processing and Security 2016)

Open Access Article: Searchable Data Vault: Encrypted Queries in Secure Distributed Cloud Storage
Algorithms 2017, 10(2), 52; doi:10.3390/a10020052
Received: 28 February 2017 / Revised: 19 April 2017 / Accepted: 3 May 2017 / Published: 9 May 2017
PDF Full-text (3001 KB) | HTML Full-text | XML Full-text
Abstract
Cloud storage services allow users to efficiently outsource their documents anytime and anywhere. Such convenience, however, leads to privacy concerns. While storage providers may not read users’ documents, attackers may possibly gain access by exploiting vulnerabilities in the storage system. Documents may also be leaked by curious administrators. A simple solution is for the user to encrypt all documents before submitting them. This method, however, makes it impossible to efficiently search for documents as they are all encrypted. To resolve this problem, we propose a multi-server searchable symmetric encryption (SSE) scheme and construct a system called the searchable data vault (SDV). A unique feature of the scheme is that it allows an encrypted document to be divided into blocks and distributed to different storage servers so that no single storage provider has a complete document. By incorporating the scheme, the SDV protects the privacy of documents while allowing for efficient private queries. It utilizes a web interface and a controller that manages user credentials, query indexes and submission of encrypted documents to cloud storage services. It is also the first system that enables a user to simultaneously outsource and privately query documents from a few cloud storage services. Our preliminary performance evaluation shows that this feature introduces acceptable computation overheads when compared to submitting documents directly to a cloud storage service. Full article
(This article belongs to the Special Issue Security and Privacy in Cloud Computing Environments)

Open Access Article: Application of Gradient Descent Continuous Actor-Critic Algorithm for Bilateral Spot Electricity Market Modeling Considering Renewable Power Penetration
Algorithms 2017, 10(2), 53; doi:10.3390/a10020053
Received: 2 March 2017 / Revised: 28 April 2017 / Accepted: 3 May 2017 / Published: 10 May 2017
PDF Full-text (2299 KB) | HTML Full-text | XML Full-text
Abstract
The bilateral spot electricity market is very complicated because all generation units and demands must strategically bid in this market. Considering renewable resource penetration, the high variability and the non-dispatchable nature of these intermittent resources make it more difficult to model and simulate the dynamic bidding process and the equilibrium in the bilateral spot electricity market, which makes developing fast and reliable market modeling approaches a matter of urgency nowadays. In this paper, a Gradient Descent Continuous Actor-Critic algorithm is proposed for hour-ahead bilateral electricity market modeling in the presence of renewable resources, because this algorithm can solve electricity market modeling problems with continuous state and action spaces without causing the “curse of dimensionality” and has low time complexity. In our simulation, the proposed approach is implemented on an IEEE 30-bus test system. The performance of our proposed approach is tested and verified: it reaches Nash equilibrium results after enough iterations of training, and some conclusions are drawn about the relationship between increasing renewable power output and participants’ bidding strategies, locational marginal prices, and social welfare. Moreover, the comparison of our proposed approach with the fuzzy Q-learning-based electricity market approach implemented in this paper confirms the superiority of our proposed approach in terms of participants’ profits, social welfare, average locational marginal prices, etc. Full article

Open Access Article: Extending the Applicability of the MMN-HSS Method for Solving Systems of Nonlinear Equations under Generalized Conditions
Algorithms 2017, 10(2), 54; doi:10.3390/a10020054
Received: 18 April 2017 / Accepted: 9 May 2017 / Published: 12 May 2017
PDF Full-text (238 KB) | HTML Full-text | XML Full-text
Abstract
We present the semilocal convergence of a multi-step modified Newton-Hermitian and Skew-Hermitian Splitting method (MMN-HSS method) to approximate a solution of a nonlinear equation. Earlier studies show convergence under only Lipschitz conditions, limiting the applicability of this method. The convergence in this study is shown under generalized Lipschitz-type conditions and restricted convergence domains. Hence, the applicability of the method is extended. Moreover, numerical examples are provided to show that our results can be applied to solve equations in cases where the earlier studies cannot be applied. Furthermore, in the cases where both old and new results are applicable, the latter provides a larger domain of convergence and tighter error bounds on the distances involved. Full article
(This article belongs to the Special Issue Numerical Algorithms for Solving Nonlinear Equations and Systems 2017)
Open Access Article: Clustering Using an Improved Krill Herd Algorithm
Algorithms 2017, 10(2), 56; doi:10.3390/a10020056
Received: 27 March 2017 / Revised: 6 May 2017 / Accepted: 12 May 2017 / Published: 17 May 2017
Cited by 1 | PDF Full-text (951 KB) | HTML Full-text | XML Full-text
Abstract
In recent years, metaheuristic algorithms have been widely used in solving clustering problems because of their good performance and application effects. The krill herd algorithm (KHA) is a new effective algorithm for solving optimization problems based on the imitation of krill individual behavior, and it has been proven to perform better than other swarm intelligence algorithms. However, it still has some weaknesses. In this paper, an improved krill herd algorithm (IKHA) is studied. Modified mutation operators and updating mechanisms are applied to improve global optimization, and the proposed IKHA can overcome the weaknesses of KHA and performs better than KHA in optimization problems. Then, KHA and IKHA are introduced into the clustering problem. In our proposed clustering algorithm, KHA and IKHA are used to find appropriate cluster centers. Experiments were conducted on University of California Irvine (UCI) standard datasets, and the results showed that the IKHA clustering algorithm is the most effective. Full article
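
A sketch of the fitness function that lets a metaheuristic such as KHA or IKHA act as a clustering algorithm: an individual encodes k candidate centers, and its fitness is the total distance of all points to their nearest center. Toy data; not the paper's code.

    import numpy as np

    def clustering_fitness(centers_flat, data, k):
        centers = centers_flat.reshape(k, data.shape[1])
        d = np.linalg.norm(data[:, None, :] - centers[None, :, :], axis=2)
        return d.min(axis=1).sum()          # lower is better

    rng = np.random.default_rng(3)
    data = np.vstack([rng.normal(0, 0.3, (50, 2)), rng.normal(3, 0.3, (50, 2))])
    good = np.array([0.0, 0.0, 3.0, 3.0])   # near the true cluster centers
    bad = rng.uniform(-1, 4, 4)             # a random individual
    print(clustering_fitness(good, data, 2), clustering_fitness(bad, data, 2))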

Open Access Article: A Prediction of Precipitation Data Based on Support Vector Machine and Particle Swarm Optimization (PSO-SVM) Algorithms
Algorithms 2017, 10(2), 57; doi:10.3390/a10020057
Received: 4 April 2017 / Revised: 11 May 2017 / Accepted: 11 May 2017 / Published: 17 May 2017
Cited by 1 | PDF Full-text (1547 KB) | HTML Full-text | XML Full-text
Abstract
Precipitation is a very important topic in weather forecasting. Weather forecasting, especially precipitation prediction, is a complex task: it depends on various parameters to predict dependent variables such as temperature, humidity, and wind speed and direction, all of which change from time to time, and weather calculations vary with the geographical location and its atmospheric variables. To improve the accuracy of precipitation prediction, this paper proposes a prediction model for rainfall forecasting based on a Support Vector Machine with Particle Swarm Optimization (PSO-SVM), replacing the linear threshold used in traditional precipitation prediction. Parameter selection has a critical impact on the predictive accuracy of the SVM, and PSO is proposed to find the optimal parameters for the SVM. The PSO-SVM algorithm was used to train a model on historical precipitation data, producing predictions that can serve as useful information for people from all walks of life in making wise and intelligent decisions. The simulations demonstrate that, other things being equal, the proposed model achieves much better accuracy on a set of experimental data than a direct prediction model. The simulation results also demonstrate the effectiveness and advantages of the PSO-SVM model in machine learning, and promise further improvement as more relevant attributes become available for predicting the dependent variables. Full article
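
A sketch of PSO-tuned SVM hyperparameters on toy data; the swarm settings, search ranges, and dataset are assumptions, not the paper's experimental setup.

    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.model_selection import cross_val_score
    from sklearn.svm import SVC

    X, y = make_classification(n_samples=200, random_state=0)
    rng = np.random.default_rng(0)

    def fitness(p):                        # p = (log10 C, log10 gamma)
        clf = SVC(C=10 ** p[0], gamma=10 ** p[1])
        return cross_val_score(clf, X, y, cv=3).mean()

    pos = rng.uniform(-3, 3, (12, 2))      # 12 particles in log-parameter space
    vel = np.zeros_like(pos)
    pbest, pfit = pos.copy(), np.array([fitness(p) for p in pos])
    for _ in range(15):
        gbest = pbest[pfit.argmax()]
        vel = (0.7 * vel + 1.5 * rng.random((12, 1)) * (pbest - pos)
               + 1.5 * rng.random((12, 1)) * (gbest - pos))
        pos = np.clip(pos + vel, -3, 3)
        fit = np.array([fitness(p) for p in pos])
        improved = fit > pfit
        pbest[improved], pfit[improved] = pos[improved], fit[improved]
    print("best CV accuracy:", pfit.max(),
          "at (C, gamma) =", 10 ** pbest[pfit.argmax()])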

Open Access Article: A Flexible Pattern-Matching Algorithm for Network Intrusion Detection Systems Using Multi-Core Processors
Algorithms 2017, 10(2), 58; doi:10.3390/a10020058
Received: 15 March 2017 / Revised: 17 May 2017 / Accepted: 20 May 2017 / Published: 24 May 2017
Cited by 1 | PDF Full-text (1812 KB) | HTML Full-text | XML Full-text
Abstract
As part of network security processes, network intrusion detection systems (NIDSs) determine whether incoming packets contain malicious patterns. Pattern matching, the key NIDS component, consumes large amounts of execution time. One of several trends involving general-purpose processors (GPPs) is their use in software-based NIDSs. In this paper, we describe our proposal for an efficient and flexible pattern-matching algorithm for inspecting packet payloads using a head-body finite automaton (HBFA). The proposed algorithm takes advantage of multi-core GPP parallelism and single-instruction multiple-data operations to achieve higher throughput compared to that resulting from traditional deterministic finite automata (DFA) using the Aho-Corasick algorithm. Whereas the head-body matching (HBM) algorithm is based on a pre-defined DFA depth value, our HBFA algorithm is based on head size. Experimental results using Snort and ClamAV pattern sets indicate that the proposed algorithm achieves up to 58% higher throughput compared to its HBM counterpart. Full article
(This article belongs to the Special Issue Networks, Communication, and Computing)

Open Access Article: Contradiction Detection with Contradiction-Specific Word Embedding
Algorithms 2017, 10(2), 59; doi:10.3390/a10020059
Received: 18 January 2017 / Revised: 30 April 2017 / Accepted: 12 May 2017 / Published: 24 May 2017
PDF Full-text (1306 KB) | HTML Full-text | XML Full-text
Abstract
Contradiction detection is the task of recognizing contradiction relations between a pair of sentences. Despite the effectiveness of traditional context-based word embedding learning algorithms in many natural language processing tasks, such algorithms are not powerful enough for contradiction detection. Contrasting words such as “overfull” and “empty” are mostly mapped into close vectors in such an embedding space. To solve this problem, we develop a tailored neural network to learn contradiction-specific word embedding (CWE). The method can separate antonyms into the opposite ends of a spectrum. CWE is learned from a training corpus which is automatically generated from the paraphrase database, and is naturally applied as features to carry out contradiction detection on the SemEval 2014 benchmark dataset. Experimental results show that CWE outperforms traditional context-based word embedding in contradiction detection. The proposed model for contradiction detection performs comparably with the top-performing system in accuracy of three-category classification and enhances the accuracy from 75.97% to 82.08% in the contradiction category. Full article

Open Access Article: Design and Implementation of a Multi-Modal Biometric System for Company Access Control
Algorithms 2017, 10(2), 61; doi:10.3390/a10020061
Received: 1 February 2017 / Revised: 18 May 2017 / Accepted: 23 May 2017 / Published: 27 May 2017
PDF Full-text (388 KB) | HTML Full-text | XML Full-text
Abstract
This paper is about the design, implementation, and deployment of a multi-modal biometric system to grant access to a company structure and to internal zones in the company itself. Face and iris have been chosen as biometric traits. Face is feasible for non-intrusive checking with minimum cooperation from the subject, while iris supports a very accurate recognition procedure at the cost of higher invasiveness. The recognition of the face trait is based on Local Binary Patterns histograms, and Daugman’s method is implemented for the analysis of the iris data. The recognition process may require either the acquisition of the user’s face only or the serial acquisition of both the user’s face and iris, depending on the confidence level of the decision with respect to the set of security levels and requirements, stated in a formal way in the Service Level Agreement at a negotiation phase. The quality of the decision depends on the setting of proper, distinct thresholds in the decision modules for the two biometric traits. Any time the quality of the decision is not good enough, the system activates proper rules, which ask for new acquisitions (and decisions), possibly with different threshold values, resulting in a system without a fixed and predefined behaviour, but one which complies with the actual acquisition context. Rules are formalized as deduction rules and grouped together to represent “response behaviors” according to the previous analysis. Therefore, there are different possible working flows, since the actual response of the recognition process depends on the output of the decision-making modules that compose the system. Finally, the deployment phase is described, together with the results from testing, based on the AT&T Face Database and the UBIRIS database. Full article
(This article belongs to the Special Issue Data Compression, Communication Processing and Security 2016)

Open Access Article: Influence Factors Analysis on the Modal Characteristics of Irregularly-Shaped Bridges Based on a Free-Interface Mode Synthesis Algorithm
Algorithms 2017, 10(2), 62; doi:10.3390/a10020062
Received: 24 December 2016 / Revised: 23 May 2017 / Accepted: 24 May 2017 / Published: 28 May 2017
PDF Full-text (2886 KB) | HTML Full-text | XML Full-text
Abstract
In order to relieve traffic congestion, irregularly-shaped bridges have been widely used in urban overpasses. However, existing analyses of the modal characteristics of irregularly-shaped bridges are not exhaustive, and the effect of design parameters on modal characteristics needs to be investigated more deeply. In this paper, a novel strategy based on a free-interface mode synthesis algorithm is proposed to evaluate the parameters’ effect on the modal characteristics of irregularly-shaped bridges. First, a complicated, irregularly-shaped bridge is divided into several substructures based on its properties. Then, the modal characteristics of the overall structure can be obtained, using only a few low-order modal parameters of each substructure, through the free-interface mode synthesis method. A numerical model of a typical irregularly-shaped bridge is employed to verify the effectiveness of the proposed strategy. Simulation results reveal that the free-interface mode synthesis method possesses favorable calculation accuracy for analyzing the modal characteristics of irregularly-shaped bridges. The effect of design parameters such as ramp curve radius, diaphragm beam stiffness, cross-section features, and bearing conditions on the modal characteristics of an irregularly-shaped bridge is evaluated in detail. The analysis results can provide references for further research into and the design of irregularly-shaped bridges. Full article

Open Access Article: Development of Filtered Bispectrum for EEG Signal Feature Extraction in Automatic Emotion Recognition Using Artificial Neural Networks
Algorithms 2017, 10(2), 63; doi:10.3390/a10020063
Received: 31 March 2017 / Revised: 12 May 2017 / Accepted: 25 May 2017 / Published: 30 May 2017
PDF Full-text (3006 KB) | HTML Full-text | XML Full-text
Abstract
The development of automatic emotion detection systems has recently gained significant attention due to the growing possibility of their implementation in several applications, including affective computing and various fields within biomedical engineering. Use of the electroencephalograph (EEG) signal is preferred over facial expression, as people cannot control the EEG signal generated by their brain; the EEG ensures a stronger reliability in the psychological signal. However, because of its uniqueness between individuals and its vulnerability to noise, use of EEG signals can be rather complicated. In this paper, we propose a methodology to conduct EEG-based emotion recognition by using a filtered bispectrum as the feature extraction subsystem and an artificial neural network (ANN) as the classifier. The bispectrum is theoretically superior to the power spectrum because it can identify phase coupling between the nonlinear process components of the EEG signal. In the feature extraction process, to extract the information contained in the bispectrum matrices, a 3D pyramid filter is used for sampling and quantifying the bispectrum value. Experiment results show that the mean percentage of the bispectrum value from 5 × 5 non-overlapped 3D pyramid filters produces the highest recognition rate. We found that reducing the number of EEG channels down to only eight in the frontal area of the brain does not significantly affect the recognition rate, and the number of data samples used in the training process is then increased to improve the recognition rate of the system. We have also utilized a probabilistic neural network (PNN) as another classifier and compared its recognition rate with that of the back-propagation neural network (BPNN), and the results show that the PNN produces a comparable recognition rate and lower computational costs. Our research shows that the extracted bispectrum values of an EEG signal using 3D filtering as a feature extraction method is suitable for use in an EEG-based emotion recognition system. Full article
(This article belongs to the Special Issue Networks, Communication, and Computing)
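
A sketch of a direct FFT-based bispectrum estimate averaged over segments, following the standard definition B(f1, f2) = E[X(f1) X(f2) X*(f1 + f2)]; the paper's 3D pyramid filtering of the bispectrum matrix is not shown, and the synthetic signal is an assumption.

    import numpy as np

    def bispectrum(x, seg_len=256):
        segs = [x[i:i + seg_len] for i in range(0, len(x) - seg_len + 1, seg_len)]
        B = np.zeros((seg_len, seg_len), dtype=complex)
        idx = np.arange(seg_len)
        f1, f2 = np.meshgrid(idx, idx, indexing="ij")
        for s in segs:
            X = np.fft.fft(s * np.hanning(seg_len))
            # Accumulate X(f1) * X(f2) * conj(X(f1 + f2)) over segments
            B += X[f1] * X[f2] * np.conj(X[(f1 + f2) % seg_len])
        return np.abs(B) / len(segs)

    t = np.arange(4096) / 256.0
    # Quadratic phase coupling (the squared term) is what the bispectrum reveals
    eeg_like = np.sin(2 * np.pi * 10 * t) + 0.5 * np.sin(2 * np.pi * 20 * t) ** 2
    print(bispectrum(eeg_like).shape)      # (256, 256) bispectrum magnitude matrix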

Open Access Article: Expanding the Applicability of Some High Order Househölder-Like Methods
Algorithms 2017, 10(2), 64; doi:10.3390/a10020064
Received: 3 April 2017 / Revised: 26 May 2017 / Accepted: 26 May 2017 / Published: 31 May 2017
PDF Full-text (268 KB) | HTML Full-text | XML Full-text
Abstract
This paper is devoted to the semilocal convergence of a Househölder-like method for nonlinear equations. The method includes many of the studied third order iterative methods. In the present study, we use our new idea of restricted convergence domains leading to smaller γ-parameters, which in turn lead to the following advantages over earlier works (and under the same computational cost): larger convergence domain, tighter error bounds on the distances involved, and at least as precise information on the location of the solution. Full article
(This article belongs to the Special Issue Numerical Algorithms for Solving Nonlinear Equations and Systems 2017)

Open Access Article: Seismic Signal Compression Using Nonparametric Bayesian Dictionary Learning via Clustering
Algorithms 2017, 10(2), 65; doi:10.3390/a10020065
Received: 28 March 2017 / Revised: 25 May 2017 / Accepted: 31 May 2017 / Published: 7 June 2017
PDF Full-text (1131 KB) | HTML Full-text | XML Full-text | Supplementary Files
Abstract
We introduce a seismic signal compression method based on nonparametric Bayesian dictionary learning via clustering. The seismic data is compressed patch by patch, and the dictionary is learned online. Clustering is introduced for dictionary learning: a set of dictionaries is generated, and each dictionary is used for the sparse coding of one cluster. In this way, the signals in one cluster can be well represented by their corresponding dictionaries. A nonparametric Bayesian dictionary learning method is used to learn the dictionaries, which naturally infers an appropriate dictionary size for each cluster. A uniform quantizer and an adaptive arithmetic coding algorithm are adopted to code the sparse coefficients. With comparisons to other state-of-the-art approaches, the effectiveness of the proposed method is validated in the experiments. Full article

Open Access Article: A New Approach to Image-Based Estimation of Food Volume
Algorithms 2017, 10(2), 66; doi:10.3390/a10020066
Received: 19 April 2017 / Revised: 24 May 2017 / Accepted: 6 June 2017 / Published: 10 June 2017
PDF Full-text (8456 KB) | HTML Full-text | XML Full-text
Abstract
A balanced diet is the key to a healthy lifestyle and is crucial for preventing or dealing with many chronic diseases such as diabetes and obesity. Therefore, monitoring diet can be an effective way of improving people’s health. However, manual reporting of food intake has been shown to be inaccurate and often impractical. This paper presents a new approach to food intake quantity estimation using image-based modeling. The modeling method consists of three steps: firstly, a short video of the food is taken by the user’s smartphone. From such a video, six frames are selected based on the pictures’ viewpoints as determined by the smartphone’s orientation sensors. Secondly, the user marks one of the frames to seed an interactive segmentation algorithm. Segmentation is based on a Gaussian Mixture Model alongside the graph-cut algorithm. Finally, a customized image-based modeling algorithm generates a point cloud to model the food. At the same time, a stochastic object-detection method locates a checkerboard used as a size/ground reference. The modeling algorithm is optimized such that the use of six input images still results in an acceptable computation cost. In our evaluation procedure, we achieved an average accuracy of 92% on a test set that includes images of different kinds of pasta and bread, with an average processing time of about 23 s. Full article
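
The segmentation step described above matches what OpenCV's GrabCut implements (a Gaussian Mixture Model combined with graph cuts); a usage sketch follows, where the file name and the user-marked rectangle are placeholders, not values from the paper.

    import cv2
    import numpy as np

    img = cv2.imread("food_frame.jpg")                # one of the six frames
    mask = np.zeros(img.shape[:2], np.uint8)
    rect = (50, 50, 400, 300)                         # rough food region from the user
    bgd = np.zeros((1, 65), np.float64)               # GMM state for background
    fgd = np.zeros((1, 65), np.float64)               # GMM state for foreground

    cv2.grabCut(img, mask, rect, bgd, fgd, 5, cv2.GC_INIT_WITH_RECT)
    keep = ((mask == cv2.GC_FGD) | (mask == cv2.GC_PR_FGD)).astype(np.uint8)
    cv2.imwrite("food_segmented.jpg", img * keep[:, :, None])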

Open Access Article: Research on Misalignment Fault Isolation of Wind Turbines Based on the Mixed-Domain Features
Algorithms 2017, 10(2), 67; doi:10.3390/a10020067
Received: 2 May 2017 / Revised: 7 June 2017 / Accepted: 8 June 2017 / Published: 10 June 2017
Cited by 2 | PDF Full-text (1367 KB) | HTML Full-text | XML Full-text
Abstract
The misalignment of the drive system of the DFIG (Doubly Fed Induction Generator) wind turbine is one of the important factors that cause damage to the gears and bearings of the high-speed gearbox and to the generator bearings. How to use the limited information available to accurately determine the type of failure has become a difficult research problem. In this paper, time-domain indexes and frequency-domain indexes are extracted from the vibration signals of various misaligned simulation conditions of the wind turbine drive system, and time-frequency-domain features (energy entropy) are also extracted by the IEMD (Improved Empirical Mode Decomposition). A mixed-domain feature set is constructed from them. Then, SVM (Support Vector Machine) is used as the classifier, the mixed-domain features are used as the inputs of the SVM, and PSO (Particle Swarm Optimization) is used to optimize the parameters of the SVM. The fault types of misalignment are classified successfully. Compared with other methods, the accuracy of the given fault isolation model is improved. Full article

Open Access Article: An Easily Understandable Grey Wolf Optimizer and Its Application to Fuzzy Controller Tuning
Algorithms 2017, 10(2), 68; doi:10.3390/a10020068
Received: 25 April 2017 / Revised: 7 June 2017 / Accepted: 8 June 2017 / Published: 10 June 2017
Cited by 2 | PDF Full-text (660 KB) | HTML Full-text | XML Full-text
Abstract
This paper proposes an easily understandable Grey Wolf Optimizer (GWO) applied to the optimal tuning of the parameters of Takagi-Sugeno proportional-integral fuzzy controllers (T-S PI-FCs). GWO is employed for solving optimization problems focused on the minimization of discrete-time objective functions defined as the weighted sum of the absolute value of the control error and of the squared output sensitivity function, and the vector variable consists of the tuning parameters of the T-S PI-FCs. Since the sensitivity functions are introduced with respect to the parametric variations of the process, solving these optimization problems is important as it leads to fuzzy control systems with a reduced process parametric sensitivity obtained by a GWO-based fuzzy controller tuning approach. GWO algorithms applied with this regard are formulated in easily understandable terms for both vector and scalar operations, and discussions on stability, convergence, and parameter settings are offered. The controlled processes referred to in the course of this paper belong to a family of nonlinear servo systems, which are modeled by second order dynamics plus a saturation and dead zone static nonlinearity. Experimental results concerning the angular position control of a laboratory servo system are included for validating the proposed method. Full article
(This article belongs to the Special Issue Extensions to Type-1 Fuzzy Logic: Theory, Algorithms and Applications)
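For readers who want the optimizer itself, here is a minimal sketch of the standard GWO update equations (alpha/beta/delta guidance with a linearly decreasing coefficient a). The objective below is a placeholder sphere function, not the paper's control-error-plus-sensitivity cost, and the population size and iteration count are illustrative assumptions.

```python
# Hypothetical sketch of the standard Grey Wolf Optimizer update step
# (minimization); the paper's fuzzy-controller cost is replaced by a
# generic test function for illustration.
import numpy as np

def gwo(objective, dim, bounds, n_wolves=10, n_iter=100, seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    wolves = rng.uniform(lo, hi, size=(n_wolves, dim))
    for t in range(n_iter):
        fitness = np.apply_along_axis(objective, 1, wolves)
        # the three best wolves lead the pack
        alpha, beta, delta = wolves[np.argsort(fitness)[:3]]
        a = 2 - 2 * t / n_iter  # linearly decreasing coefficient
        new = []
        for w in wolves:
            candidates = []
            for leader in (alpha, beta, delta):
                r1, r2 = rng.random(dim), rng.random(dim)
                A, C = 2 * a * r1 - a, 2 * r2
                candidates.append(leader - A * np.abs(C * leader - w))
            new.append(np.mean(candidates, axis=0))
        wolves = np.clip(new, lo, hi)
    fitness = np.apply_along_axis(objective, 1, wolves)
    return wolves[fitness.argmin()]

# e.g., minimizing a sphere function as a stand-in objective:
best = gwo(lambda x: np.sum(x ** 2), dim=3, bounds=(-5, 5))
```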
Open AccessArticle Cross-Language Plagiarism Detection System Using Latent Semantic Analysis and Learning Vector Quantization
Algorithms 2017, 10(2), 69; doi:10.3390/a10020069
Received: 31 March 2017 / Revised: 16 May 2017 / Accepted: 10 June 2017 / Published: 13 June 2017
PDF Full-text (2181 KB) | HTML Full-text | XML Full-text
Abstract
Computerized cross-language plagiarism detection has recently become essential. Given the scarcity of scientific publications in Bahasa Indonesia, many Indonesian authors frequently consult publications in English in order to boost the quantity of scientific publications in Bahasa Indonesia (which is currently rising). Due to the syntactic disparity between Bahasa Indonesia and English, most existing methods for automated cross-language plagiarism detection do not provide satisfactory results. This paper analyses the feasibility of developing Latent Semantic Analysis (LSA) into a computerized cross-language plagiarism detector for two languages with different syntax, and suggests various alterations to LSA to improve performance. By using a learning vector quantization (LVQ) classifier with LSA and taking the Frobenius norm into account, accuracy has reached up to 65.98%. The experimental results show that the best accuracy achieved is 87% with a document size of 6 words, and that the document definition size must be kept below 10 words to maintain high accuracy. Additionally, based on the experimental results, this paper suggests using the frequency occurrence method rather than the binary method for constructing the term–document matrix. Full article
(This article belongs to the Special Issue Networks, Communication, and Computing)
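A minimal sketch of the pipeline the abstract describes: a frequency-based (not binary) term-document matrix reduced by truncated SVD as the LSA step, followed by a small LVQ1 classifier. scikit-learn and NumPy are assumed, all hyperparameters are illustrative, and the cross-language mapping and Frobenius-norm variant from the paper are not reproduced here.

```python
# Hypothetical sketch: frequency term-document matrix + truncated SVD
# (the LSA step), then a minimal LVQ1 classifier on the reduced vectors.
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import TruncatedSVD

def lsa_features(docs, k=2):
    counts = CountVectorizer().fit_transform(docs)  # frequency, not binary
    return TruncatedSVD(n_components=k).fit_transform(counts)

def train_lvq1(X, y, n_epochs=50, lr=0.1, seed=0):
    rng = np.random.default_rng(seed)
    X, y = np.asarray(X, dtype=float), np.asarray(y)
    classes = np.unique(y)
    # one prototype per class, seeded on a random member of that class
    protos = np.array([X[rng.choice(np.flatnonzero(y == c))] for c in classes])
    for epoch in range(n_epochs):
        for xi, yi in zip(X, y):
            j = np.argmin(np.linalg.norm(protos - xi, axis=1))
            # move the winner toward same-class samples, away otherwise
            sign = 1.0 if classes[j] == yi else -1.0
            protos[j] += sign * lr * (1 - epoch / n_epochs) * (xi - protos[j])
    return protos, classes

def predict(protos, classes, X):
    X = np.asarray(X, dtype=float)
    d = np.linalg.norm(X[:, None, :] - protos[None, :, :], axis=2)
    return classes[d.argmin(axis=1)]
```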
Open AccessArticle An Improved Brain-Inspired Emotional Learning Algorithm for Fast Classification
Algorithms 2017, 10(2), 70; doi:10.3390/a10020070
Received: 14 March 2017 / Revised: 5 June 2017 / Accepted: 9 June 2017 / Published: 14 June 2017
Cited by 1 | PDF Full-text (2840 KB) | HTML Full-text | XML Full-text
Abstract
Classification is an important task of machine intelligence in the field of information processing, and artificial neural networks (ANNs) are widely used for it. However, traditional ANNs train slowly and can hardly meet real-time requirements in large-scale applications. In this paper, an improved brain-inspired emotional learning (BEL) algorithm is proposed for fast classification. The BEL algorithm was put forward to mimic the fast emotional learning mechanism in the mammalian brain, and it offers fast learning and low computational complexity. To improve the classification accuracy of BEL, the genetic algorithm (GA) is adopted to optimally tune the weights and biases of the amygdala and orbitofrontal cortex parts of the BEL neural network. The combined algorithm, named GA-BEL, has been tested on eight University of California at Irvine (UCI) datasets and two well-known databases (Japanese Female Facial Expression, Cohn–Kanade). The experimental comparisons indicate that the proposed GA-BEL is more accurate than the original BEL algorithm and much faster than traditional algorithms. Full article
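As a rough sketch of the GA-BEL idea, the code below reduces the BEL network to its amygdala-minus-orbitofrontal linear form and tunes its weights and bias with a simple genetic algorithm (truncation selection, one-point crossover, Gaussian mutation). This is an assumption-laden toy for binary labels, not the authors' implementation; population size, mutation rate, and the fitness definition are all illustrative.

```python
# Hypothetical sketch: a reduced BEL model (amygdala output minus
# orbitofrontal output) whose parameters are tuned by a simple GA.
import numpy as np

rng = np.random.default_rng(0)

def bel_output(params, X):
    n = X.shape[1]
    v, w, b = params[:n], params[n:2 * n], params[-1]
    return X @ v - X @ w + b  # E = A - O (plus a bias term)

def accuracy(params, X, y):  # y assumed to be in {0, 1}
    return np.mean((bel_output(params, X) > 0).astype(int) == y)

def ga_bel(X, y, pop=40, gens=60, mut=0.1):
    dim = 2 * X.shape[1] + 1
    P = rng.normal(size=(pop, dim))
    for _ in range(gens):
        fit = np.array([accuracy(p, X, y) for p in P])
        parents = P[np.argsort(fit)[-pop // 2:]]  # keep the best half
        kids = []
        while len(kids) < pop - len(parents):
            a, b = parents[rng.choice(len(parents), 2, replace=False)]
            cut = rng.integers(1, dim)
            child = np.concatenate([a[:cut], b[cut:]])  # one-point crossover
            child += mut * rng.normal(size=dim)         # Gaussian mutation
            kids.append(child)
        P = np.vstack([parents, kids])
    fit = np.array([accuracy(p, X, y) for p in P])
    return P[fit.argmax()]
```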
Open AccessArticle Bayesian and Classical Estimation of Stress-Strength Reliability for Inverse Weibull Lifetime Models
Algorithms 2017, 10(2), 71; doi:10.3390/a10020071
Received: 25 May 2017 / Revised: 8 June 2017 / Accepted: 16 June 2017 / Published: 21 June 2017
PDF Full-text (770 KB) | HTML Full-text | XML Full-text
Abstract
In this paper, we consider the problem of estimating stress-strength reliability for inverse Weibull lifetime models having the same shape parameter but different scale parameters. We obtain the maximum likelihood estimator and its asymptotic distribution. Since the classical estimator does not have an explicit form, we also propose an approximate maximum likelihood estimator. The asymptotic confidence interval and two bootstrap intervals are obtained. Using the Gibbs sampling technique, the Bayesian estimator and the corresponding credible interval are obtained, with the Metropolis-Hastings algorithm used to generate the required random variates. Monte Carlo simulations are conducted to compare the proposed methods, and an analysis of a real dataset is performed. Full article
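A minimal sketch of the classical estimation step, assuming the inverse Weibull parameterization F(x) = exp(-lam * x^(-beta)) with a shape beta common to both samples; under that parameterization R = P(Y < X) = lam1 / (lam1 + lam2), so an MLE of R follows from profiling the likelihood over beta. A percentile bootstrap interval is added for illustration; the paper's exact parameterization, its approximate MLE, and its Bayesian/Gibbs step are not reproduced here.

```python
# Hypothetical sketch: profile-likelihood MLE of R = P(Y < X) for two
# inverse Weibull samples sharing a shape parameter, plus a percentile
# bootstrap confidence interval.
import numpy as np
from scipy.optimize import minimize_scalar

def lam_hat(s, beta):  # closed-form MLE of the scale for a given beta
    return len(s) / np.sum(s ** (-beta))

def neg_profile_loglik(beta, x, y):
    if beta <= 0:
        return np.inf
    ll = 0.0
    for s in (x, y):
        lam = lam_hat(s, beta)
        # log f(s) = log(lam*beta) - (beta+1)*log(s) - lam*s**(-beta)
        ll += np.sum(np.log(lam * beta) - (beta + 1) * np.log(s)
                     - lam * s ** (-beta))
    return -ll

def reliability_mle(x, y):
    beta = minimize_scalar(neg_profile_loglik, bounds=(1e-3, 50),
                           args=(x, y), method="bounded").x
    l1, l2 = lam_hat(x, beta), lam_hat(y, beta)
    return l1 / (l1 + l2)  # R = P(Y < X) under this parameterization

def bootstrap_ci(x, y, B=1000, alpha=0.05, seed=0):
    rng = np.random.default_rng(seed)
    stats = [reliability_mle(rng.choice(x, len(x)), rng.choice(y, len(y)))
             for _ in range(B)]
    return np.quantile(stats, [alpha / 2, 1 - alpha / 2])
```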

Review

Jump to: Research, Other

Open AccessReview From Intrusion Detection to an Intrusion Response System: Fundamentals, Requirements, and Future Directions
Algorithms 2017, 10(2), 39; doi:10.3390/a10020039
Received: 24 February 2017 / Revised: 20 March 2017 / Accepted: 24 March 2017 / Published: 27 March 2017
Cited by 3 | PDF Full-text (1358 KB) | HTML Full-text | XML Full-text
Abstract
In the past few decades, the rise in attacks on communication devices in networks has resulted in reduced network functionality, throughput, and performance. To detect and mitigate these attacks, researchers, academicians, and practitioners have developed Intrusion Detection Systems (IDSs) with automatic response systems. The response system is considered an important component of an IDS; without a timely response, an IDS may not function properly in countering various attacks, especially in real time. To respond appropriately, an IDS should select the optimal response option according to the type of network attack. This study provides a complete survey of IDSs and Intrusion Response Systems (IRSs) on the basis of our in-depth understanding of the response options for different types of network attacks. Knowledge of the path from IDS to IRS can assist network administrators and network staff in understanding how to tackle different attacks with state-of-the-art technologies. Full article
(This article belongs to the Special Issue Security and Privacy in Cloud Computing Environments)
Other

Jump to: Research, Review

Open AccessErratum Erratum: Ahmad, F., et al. A Preconditioned Iterative Method for Solving Systems of Nonlinear Equations Having Unknown Multiplicity. Algorithms 2017, 10, 17
Algorithms 2017, 10(2), 55; doi:10.3390/a10020055
Received: 24 April 2017 / Accepted: 11 May 2017 / Published: 12 May 2017
PDF Full-text (157 KB) | HTML Full-text | XML Full-text
Open AccessCorrection Correction: A No Reference Image Quality Assessment Metric Based on Visual Perception. Algorithms 2016, 9, 87
Algorithms 2017, 10(2), 60; doi:10.3390/a10020060
Received: 19 May 2017 / Revised: 25 May 2017 / Accepted: 25 May 2017 / Published: 26 May 2017
PDF Full-text (448 KB) | HTML Full-text | XML Full-text
Abstract
We would like to make the following change to our article [1]. [...] Full article