Algorithms, Volume 10, Issue 2 (June 2017) – 36 articles

  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive table of contents of newly released issues.
  • PDF is the official format for papers, which are published in both HTML and PDF forms. To view a paper in PDF format, click on the "PDF Full-text" link and open it with the free Adobe Reader.
Article
Bayesian and Classical Estimation of Stress-Strength Reliability for Inverse Weibull Lifetime Models
by Qixuan Bi and Wenhao Gui
Algorithms 2017, 10(2), 71; https://doi.org/10.3390/a10020071 - 21 Jun 2017
Cited by 22 | Viewed by 4973
Abstract
In this paper, we consider the problem of estimating stress-strength reliability for inverse Weibull lifetime models having the same shape parameters but different scale parameters. We obtain the maximum likelihood estimator and its asymptotic distribution. Since the classical estimator does not have an explicit form, we propose an approximate maximum likelihood estimator. The asymptotic confidence interval and two bootstrap intervals are obtained. Using the Gibbs sampling technique, the Bayesian estimator and the corresponding credible interval are obtained; the Metropolis-Hastings algorithm is used to generate random variates. Monte Carlo simulations are conducted to compare the proposed methods, and an analysis of a real dataset is performed.
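For background on the stress-strength setup, the sketch below gives a Monte Carlo check of R = P(X < Y) for inverse Weibull variates sharing a shape parameter; it is not the paper's estimator. The parametrization F(x) = exp(-theta * x^(-alpha)), the resulting closed form R = theta2/(theta1 + theta2) under a common shape, and all parameter values are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def rinvweibull(theta, alpha, size):
    # Inverse transform for F(x) = exp(-theta * x**(-alpha)).
    u = rng.uniform(size=size)
    return (-np.log(u) / theta) ** (-1.0 / alpha)

alpha, theta1, theta2 = 2.0, 1.5, 3.0    # illustrative values
x = rinvweibull(theta1, alpha, 100_000)  # stress
y = rinvweibull(theta2, alpha, 100_000)  # strength

r_mc = np.mean(x < y)                    # Monte Carlo estimate of R = P(X < Y)
r_closed = theta2 / (theta1 + theta2)    # textbook closed form under a common shape
print(r_mc, r_closed)                    # both should be close to 0.666...
```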
Article
An Improved Brain-Inspired Emotional Learning Algorithm for Fast Classification
by Ying Mei, Guanzheng Tan and Zhentao Liu
Algorithms 2017, 10(2), 70; https://doi.org/10.3390/a10020070 - 14 Jun 2017
Cited by 29 | Viewed by 8559
Abstract
Classification is an important task of machine intelligence in the information field. The artificial neural network (ANN) is widely used for classification. However, the traditional ANN trains slowly, making it hard to meet real-time requirements in large-scale applications. In this paper, an improved brain-inspired emotional learning (BEL) algorithm is proposed for fast classification. The BEL algorithm was put forward to mimic the high-speed emotional learning mechanism in the mammalian brain, which has the superior features of fast learning and low computational complexity. To improve the accuracy of BEL in classification, the genetic algorithm (GA) is adopted to optimally tune the weights and biases of the amygdala and orbitofrontal cortex in the BEL neural network. The combined algorithm, named GA-BEL, has been tested on eight University of California at Irvine (UCI) datasets and two well-known databases (Japanese Female Facial Expression, Cohn–Kanade). Experimental comparisons indicate that the proposed GA-BEL is more accurate than the original BEL algorithm and much faster than the traditional algorithm.
Article
Cross-Language Plagiarism Detection System Using Latent Semantic Analysis and Learning Vector Quantization
by Anak Agung Putri Ratna, Prima Dewi Purnamasari, Boma Anantasatya Adhi, F. Astha Ekadiyanto, Muhammad Salman, Mardiyah Mardiyah and Darien Jonathan Winata
Algorithms 2017, 10(2), 69; https://doi.org/10.3390/a10020069 - 13 Jun 2017
Cited by 11 | Viewed by 7283
Abstract
Computerized cross-language plagiarism detection has recently become essential. Given the scarcity of scientific publications in Bahasa Indonesia, many Indonesian authors frequently consult publications in English in order to boost the quantity of scientific publications in Bahasa Indonesia (which is currently rising). Due to the syntactic disparity between Bahasa Indonesia and English, most existing methods for automated cross-language plagiarism detection do not provide satisfactory results. This paper analyses the feasibility of developing Latent Semantic Analysis (LSA) into a computerized cross-language plagiarism detector for two languages with different syntax. To improve performance, various alterations to LSA are suggested. By using a learning vector quantization (LVQ) classifier in the LSA and taking the Frobenius norm into account, accuracy has reached up to 65.98%. The experimental results showed that the best accuracy achieved is 87% with a document size of 6 words, and the document definition size must be kept below 10 words in order to maintain high accuracy. Additionally, based on the experimental results, this paper suggests using the frequency-occurrence method rather than the binary method for term–document matrix construction.
(This article belongs to the Special Issue Networks, Communication, and Computing)
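To make the LSA step concrete, here is a minimal sketch, not the paper's pipeline, of building a frequency-based term–document matrix and reducing it with a truncated SVD; the toy documents and the rank k = 2 are illustrative assumptions.

```python
import numpy as np

# Toy corpus; real input would be sentence- or phrase-level "documents".
docs = ["plagiarism detection across languages",
        "cross language plagiarism detection",
        "vector quantization for classification"]

vocab = sorted({w for d in docs for w in d.split()})
# Frequency-occurrence term-document matrix (the paper favors counts over binary).
A = np.array([[d.split().count(w) for d in docs] for w in vocab], dtype=float)

# LSA: truncated SVD keeps the k strongest latent dimensions.
U, s, Vt = np.linalg.svd(A, full_matrices=False)
k = 2
doc_vecs = (np.diag(s[:k]) @ Vt[:k]).T   # one k-dimensional vector per document

def cos(a, b):
    # Cosine similarity between document vectors in the latent space.
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

print(cos(doc_vecs[0], doc_vecs[1]))  # the near-duplicate pair scores high
```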
Article
An Easily Understandable Grey Wolf Optimizer and Its Application to Fuzzy Controller Tuning
by Radu-Emil Precup, Radu-Codrut David, Alexandra-Iulia Szedlak-Stinean, Emil M. Petriu and Florin Dragan
Algorithms 2017, 10(2), 68; https://doi.org/10.3390/a10020068 - 10 Jun 2017
Cited by 53 | Viewed by 6173
Abstract
This paper proposes an easily understandable Grey Wolf Optimizer (GWO) applied to the optimal tuning of the parameters of Takagi-Sugeno proportional-integral fuzzy controllers (T-S PI-FCs). GWO is employed for solving optimization problems focused on the minimization of discrete-time objective functions defined as the weighted sum of the absolute value of the control error and of the squared output sensitivity function, with the vector variable consisting of the tuning parameters of the T-S PI-FCs. Since the sensitivity functions are introduced with respect to the parametric variations of the process, solving these optimization problems is important: it leads to fuzzy control systems with reduced process parametric sensitivity obtained by a GWO-based fuzzy controller tuning approach. The GWO algorithms applied in this regard are formulated in easily understandable terms for both vector and scalar operations, and discussions on stability, convergence, and parameter settings are offered. The controlled processes considered in this paper belong to a family of nonlinear servo systems, which are modeled by second-order dynamics plus a saturation and dead-zone static nonlinearity. Experimental results concerning the angular position control of a laboratory servo system are included to validate the proposed method.
(This article belongs to the Special Issue Extensions to Type-1 Fuzzy Logic: Theory, Algorithms and Applications)
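For reference, a minimal sketch of the canonical GWO position update (the standard algorithm, not the paper's controller-tuning formulation); the sphere objective, population size, and bounds are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def gwo(f, dim, n_wolves=20, iters=200, lo=-5.0, hi=5.0):
    X = rng.uniform(lo, hi, (n_wolves, dim))
    for t in range(iters):
        # Rank the wolves; alpha, beta, and delta lead the pack.
        idx = np.argsort([f(x) for x in X])
        leaders = X[idx[:3]]
        a = 2.0 * (1 - t / iters)           # a decreases linearly from 2 to 0
        for i in range(n_wolves):
            candidates = []
            for L in leaders:
                r1, r2 = rng.random(dim), rng.random(dim)
                A = 2 * a * r1 - a          # exploration/exploitation coefficient
                C = 2 * r2
                D = np.abs(C * L - X[i])    # distance to a leader
                candidates.append(L - A * D)
            X[i] = np.clip(np.mean(candidates, axis=0), lo, hi)
    best = min(X, key=f)
    return best, f(best)

sphere = lambda x: float(np.sum(x ** 2))    # toy objective
print(gwo(sphere, dim=4))
```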
Article
Research on Misalignment Fault Isolation of Wind Turbines Based on the Mixed-Domain Features
by Yancai Xiao, Yujia Wang, Huan Mu and Na Kang
Algorithms 2017, 10(2), 67; https://doi.org/10.3390/a10020067 - 10 Jun 2017
Cited by 9 | Viewed by 4597
Abstract
The misalignment of the drive system of the DFIG (Doubly Fed Induction Generator) wind turbine is one of the important factors that cause damage to the gears and bearings of the high-speed gearbox and to the generator bearings. How to use limited information to accurately determine the type of failure has become a difficult problem for researchers. In this paper, time-domain and frequency-domain indexes are extracted from the vibration signals of various misaligned simulation conditions of the wind turbine drive system, and time-frequency-domain features (energy entropy) are also extracted by the IEMD (Improved Empirical Mode Decomposition). A mixed-domain feature set is constructed from them. Then, an SVM (Support Vector Machine) is used as the classifier, with the mixed-domain features as its inputs, and PSO (Particle Swarm Optimization) is used to optimize the parameters of the SVM. The fault types of misalignment are classified successfully, and compared with other methods, the accuracy of the given fault isolation model is improved.
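To illustrate what mixed-domain features can look like, a minimal sketch of a few common time-domain indexes plus one spectral feature for a vibration segment; this index set is an assumption, not the paper's exact feature list, and the IEMD energy-entropy step is omitted.

```python
import numpy as np

def vibration_features(x, fs):
    """A few common condition-monitoring indexes for one signal segment."""
    rms = np.sqrt(np.mean(x ** 2))                  # time domain: energy
    kurtosis = np.mean((x - x.mean()) ** 4) / np.var(x) ** 2
    crest = np.max(np.abs(x)) / rms                 # peak-to-RMS ratio
    spec = np.abs(np.fft.rfft(x)) ** 2
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    centroid = np.sum(freqs * spec) / np.sum(spec)  # frequency domain: spectral centroid
    return np.array([rms, kurtosis, crest, centroid])

fs = 10_000                                         # illustrative sampling rate (Hz)
t = np.arange(fs) / fs                              # one second of signal
x = np.sin(2 * np.pi * 50 * t) + 0.1 * np.random.default_rng(0).standard_normal(fs)
print(vibration_features(x, fs))
```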
Article
A New Approach to Image-Based Estimation of Food Volume
by Hamid Hassannejad, Guido Matrella, Paolo Ciampolini, Ilaria De Munari, Monica Mordonini and Stefano Cagnoni
Algorithms 2017, 10(2), 66; https://doi.org/10.3390/a10020066 - 10 Jun 2017
Cited by 27 | Viewed by 7909
Abstract
A balanced diet is the key to a healthy lifestyle and is crucial for preventing or dealing with many chronic diseases such as diabetes and obesity. Therefore, monitoring diet can be an effective way of improving people’s health. However, manual reporting of food intake has been shown to be inaccurate and often impractical. This paper presents a new approach to food intake quantity estimation using image-based modeling. The modeling method consists of three steps: first, a short video of the food is taken by the user’s smartphone. From such a video, six frames are selected based on the pictures’ viewpoints as determined by the smartphone’s orientation sensors. Second, the user marks one of the frames to seed an interactive segmentation algorithm. Segmentation is based on a Gaussian Mixture Model alongside the graph-cut algorithm. Finally, a customized image-based modeling algorithm generates a point-cloud to model the food. At the same time, a stochastic object-detection method locates a checkerboard used as a size/ground reference. The modeling algorithm is optimized such that the use of six input images still results in an acceptable computation cost. In our evaluation procedure, we achieved an average accuracy of 92% on a test set that includes images of different kinds of pasta and bread, with an average processing time of about 23 s.
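Since the pipeline ends by estimating volume from a point cloud, a minimal sketch of one simple way to do that, via a convex hull; the paper's customized modeling step may differ, and the synthetic point cloud and checkerboard scale factor are assumptions.

```python
import numpy as np
from scipy.spatial import ConvexHull

rng = np.random.default_rng(0)

# Synthetic "food" point cloud: points filling a half-ellipsoid (in checkerboard units).
pts = rng.uniform(-1, 1, (5000, 3))
pts = pts[(pts[:, 0] ** 2 + pts[:, 1] ** 2 + (2 * pts[:, 2]) ** 2 <= 1) & (pts[:, 2] >= 0)]

hull = ConvexHull(pts)
mm_per_unit = 25.0                                    # scale from the checkerboard (assumed)
volume_ml = hull.volume * mm_per_unit ** 3 / 1000.0   # mm^3 -> cm^3 (= ml)
print(f"estimated volume: {volume_ml:.1f} ml")
```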
Article
Seismic Signal Compression Using Nonparametric Bayesian Dictionary Learning via Clustering
by Xin Tian and Song Li
Algorithms 2017, 10(2), 65; https://doi.org/10.3390/a10020065 - 7 Jun 2017
Cited by 2 | Viewed by 4930
Abstract
We introduce a seismic signal compression method based on nonparametric Bayesian dictionary learning via clustering. The seismic data is compressed patch by patch, and the dictionary is learned online. Clustering is introduced into the dictionary learning: a set of dictionaries is generated, and each dictionary is used for one cluster’s sparse coding. In this way, the signals in one cluster can be well represented by their corresponding dictionary. A nonparametric Bayesian dictionary learning method is used to learn the dictionaries, which naturally infers an appropriate dictionary size for each cluster. A uniform quantizer and an adaptive arithmetic coding algorithm are adopted to code the sparse coefficients. Comparisons with other state-of-the-art approaches in the experiments validate the effectiveness of the proposed method.
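For the coding stage, a minimal sketch of a mid-tread uniform quantizer of the kind the abstract mentions; the step size is assumed, and the adaptive arithmetic coder is omitted.

```python
import numpy as np

def uniform_quantize(coeffs, step):
    """Mid-tread uniform quantizer: integer symbols suitable for an entropy coder."""
    return np.round(coeffs / step).astype(int)

def dequantize(symbols, step):
    return symbols * step

coeffs = np.array([0.03, -1.27, 0.58, 2.41, -0.02])  # toy sparse coefficients
step = 0.1                                            # assumed quantization step
q = uniform_quantize(coeffs, step)
print(q, dequantize(q, step))   # reconstruction error is bounded by step/2
```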
Article
Expanding the Applicability of Some High Order Househölder-Like Methods
by Sergio Amat, Ioannis K. Argyros, Miguel A. Hernández-Verón and Natalia Romero
Algorithms 2017, 10(2), 64; https://doi.org/10.3390/a10020064 - 31 May 2017
Cited by 1 | Viewed by 4225
Abstract
This paper is devoted to the semilocal convergence of a Househölder-like method for nonlinear equations. The method includes many of the studied third-order iterative methods. In the present study, we use our new idea of restricted convergence domains leading to smaller γ-parameters, which in turn lead to the following advantages over earlier works (under the same computational cost): a larger convergence domain, tighter error bounds on the distances involved, and at least as precise information on the location of the solution.
(This article belongs to the Special Issue Numerical Algorithms for Solving Nonlinear Equations and Systems 2017)
Article
Development of Filtered Bispectrum for EEG Signal Feature Extraction in Automatic Emotion Recognition Using Artificial Neural Networks
by Prima Dewi Purnamasari, Anak Agung Putri Ratna and Benyamin Kusumoputro
Algorithms 2017, 10(2), 63; https://doi.org/10.3390/a10020063 - 30 May 2017
Cited by 26 | Viewed by 8026
Abstract
The development of automatic emotion detection systems has recently gained significant attention due to the growing possibility of their implementation in several applications, including affective computing and various fields within biomedical engineering. Use of the electroencephalograph (EEG) signal is preferred over facial expression, as people cannot control the EEG signal generated by their brain; the EEG therefore ensures stronger reliability as a psychological signal. However, because of its uniqueness across individuals and its vulnerability to noise, use of EEG signals can be rather complicated. In this paper, we propose a methodology to conduct EEG-based emotion recognition by using a filtered bispectrum as the feature extraction subsystem and an artificial neural network (ANN) as the classifier. The bispectrum is theoretically superior to the power spectrum because it can identify phase coupling between the nonlinear process components of the EEG signal. In the feature extraction process, to extract the information contained in the bispectrum matrices, a 3D pyramid filter is used for sampling and quantifying the bispectrum value. Experimental results show that the mean percentage of the bispectrum value from 5 × 5 non-overlapped 3D pyramid filters produces the highest recognition rate. We found that reducing the number of EEG channels down to only eight in the frontal area of the brain does not significantly affect the recognition rate, and increasing the number of data samples used in the training process improves the recognition rate of the system. We have also utilized a probabilistic neural network (PNN) as another classifier and compared its recognition rate with that of the back-propagation neural network (BPNN); the PNN produces a comparable recognition rate at lower computational cost. Our research shows that bispectrum values extracted from an EEG signal using 3D filtering are suitable features for an EEG-based emotion recognition system.
(This article belongs to the Special Issue Networks, Communication, and Computing)
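For reference, a minimal sketch of a direct, segment-averaged bispectrum estimate built from the standard definition B(f1, f2) = E[X(f1) X(f2) X*(f1+f2)]; the segment length and toy signal are assumptions, and the paper's 3D pyramid filtering is not reproduced.

```python
import numpy as np

def bispectrum(x, nfft=64):
    """Direct bispectrum estimate averaged over non-overlapping segments."""
    segs = x[: len(x) // nfft * nfft].reshape(-1, nfft)
    B = np.zeros((nfft // 2, nfft // 2), dtype=complex)
    for seg in segs:
        X = np.fft.fft(seg - seg.mean())
        for f1 in range(nfft // 2):
            for f2 in range(nfft // 2):
                B[f1, f2] += X[f1] * X[f2] * np.conj(X[f1 + f2])
    return np.abs(B) / len(segs)

rng = np.random.default_rng(0)
t = np.arange(4096)
# Quadratic phase coupling: components at bins 4 and 6 plus their sum at bin 10.
x = (np.cos(2 * np.pi * 4 * t / 64) + np.cos(2 * np.pi * 6 * t / 64)
     + np.cos(2 * np.pi * 10 * t / 64) + 0.1 * rng.standard_normal(t.size))
B = bispectrum(x)
print(B.shape, B[4, 6])   # the coupling shows up as a bispectral peak at (4, 6)
```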
Article
Influence Factors Analysis on the Modal Characteristics of Irregularly-Shaped Bridges Based on a Free-Interface Mode Synthesis Algorithm
by Hanbing Liu, Mengsu Zhang, Xianqiang Wang, Shuai Tian and Yubo Jiao
Algorithms 2017, 10(2), 62; https://doi.org/10.3390/a10020062 - 28 May 2017
Viewed by 4474
Abstract
In order to relieve traffic congestion, irregularly-shaped bridges have been widely used in urban overpasses. However, existing analysis of the modal characteristics of irregularly-shaped bridges is not exhaustive, and the effect of design parameters on those characteristics deserves deeper investigation. In this paper, a novel strategy based on a free-interface mode synthesis algorithm is proposed to evaluate the parameters’ effect on the modal characteristics of irregularly-shaped bridges. First, a complicated, irregularly-shaped bridge is divided into several substructures based on its properties. Then, the modal characteristics of the overall structure can be obtained from only a few low-order modal parameters of each substructure, using a free-interface mode synthesis method. A numerical model of a typical irregularly-shaped bridge is employed to verify the effectiveness of the proposed strategy. Simulation results reveal that the free-interface mode synthesis method possesses favorable calculation accuracy for analyzing the modal characteristics of irregularly-shaped bridges. The effect of design parameters such as ramp curve radius, diaphragm beam stiffness, cross-section features, and bearing conditions on the modal characteristics of an irregularly-shaped bridge is evaluated in detail. The results can serve as references for further research into, and the design of, irregularly-shaped bridges.
Article
Design and Implementation of a Multi-Modal Biometric System for Company Access Control
by Elisabetta Stefani and Carlo Ferrari
Algorithms 2017, 10(2), 61; https://doi.org/10.3390/a10020061 - 27 May 2017
Cited by 5 | Viewed by 5457
Abstract
This paper is about the design, implementation, and deployment of a multi-modal biometric system to grant access to a company structure and to internal zones in the company itself. Face and iris have been chosen as biometric traits. Face is feasible for non-intrusive checking with minimal cooperation from the subject, while the iris supports a very accurate recognition procedure at a higher degree of invasiveness. The recognition of the face trait is based on Local Binary Patterns histograms, and Daugman’s method is implemented for the analysis of the iris data. The recognition process may require either the acquisition of the user’s face only or the serial acquisition of both the user’s face and iris, depending on the confidence level of the decision with respect to the set of security levels and requirements, stated formally in the Service Level Agreement at the negotiation phase. The quality of the decision depends on setting appropriate thresholds in the decision modules for the two biometric traits. Whenever the quality of the decision is not good enough, the system activates appropriate rules that ask for new acquisitions (and decisions), possibly with different threshold values; the result is a system whose behaviour is not fixed and predefined, but complies with the actual acquisition context. Rules are formalized as deduction rules and grouped together to represent “response behaviors” according to the previous analysis. Therefore, there are different possible working flows, since the actual response of the recognition process depends on the output of the decision-making modules that compose the system. Finally, the deployment phase is described, together with the results from testing based on the AT&T Face Database and the UBIRIS database.
(This article belongs to the Special Issue Data Compression, Communication Processing and Security 2016)
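For the face branch, a minimal sketch of the basic 3x3 Local Binary Pattern operator and its histogram, the standard construction the abstract names; the random image, and treating the whole image as one region rather than a grid of cells, are simplifying assumptions.

```python
import numpy as np

def lbp_histogram(img):
    """Basic 3x3 LBP codes over a grayscale image, returned as a 256-bin histogram."""
    h, w = img.shape
    codes = np.zeros((h - 2, w - 2), dtype=np.uint8)
    center = img[1:-1, 1:-1]
    # Eight neighbors, ordered clockwise from the top-left corner.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
    for bit, (dy, dx) in enumerate(offsets):
        neighbor = img[1 + dy : h - 1 + dy, 1 + dx : w - 1 + dx]
        codes |= (neighbor >= center).astype(np.uint8) << bit
    hist = np.bincount(codes.ravel(), minlength=256).astype(float)
    return hist / hist.sum()   # normalized histogram, comparable across images

face = np.random.default_rng(0).integers(0, 256, (64, 64), dtype=np.uint8)
print(lbp_histogram(face)[:8])
```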
Correction
Correction: A No Reference Image Quality Assessment Metric Based on Visual Perception. Algorithms 2016, 9, 87
by Yan Fu and Shengchun Wang
Algorithms 2017, 10(2), 60; https://doi.org/10.3390/a10020060 - 26 May 2017
Viewed by 3435
Abstract
We would like to make the following change to our article [1]. [...]
Article
Contradiction Detection with Contradiction-Specific Word Embedding
by Luyang Li, Bing Qin and Ting Liu
Algorithms 2017, 10(2), 59; https://doi.org/10.3390/a10020059 - 24 May 2017
Cited by 24 | Viewed by 15235
Abstract
Contradiction detection is the task of recognizing contradiction relations between a pair of sentences. Despite the effectiveness of traditional context-based word embedding learning algorithms in many natural language processing tasks, such algorithms are not powerful enough for contradiction detection: contrasting words such as “overfull” and “empty” are mostly mapped to close vectors in such embedding spaces. To solve this problem, we develop a tailored neural network to learn contradiction-specific word embedding (CWE), which can separate antonyms into the opposite ends of a spectrum. CWE is learned from a training corpus that is automatically generated from the paraphrase database, and is naturally applied as features to carry out contradiction detection on the SemEval 2014 benchmark dataset. Experimental results show that CWE outperforms traditional context-based word embedding in contradiction detection. The proposed model performs comparably with the top-performing system in the accuracy of three-category classification and raises the accuracy from 75.97% to 82.08% in the contradiction category.
Article
A Flexible Pattern-Matching Algorithm for Network Intrusion Detection Systems Using Multi-Core Processors
by Chun-Liang Lee and Tzu-Hao Yang
Algorithms 2017, 10(2), 58; https://doi.org/10.3390/a10020058 - 24 May 2017
Cited by 4 | Viewed by 6134
Abstract
As part of network security processes, network intrusion detection systems (NIDSs) determine whether incoming packets contain malicious patterns. Pattern matching, the key NIDS component, consumes large amounts of execution time. One of several trends involving general-purpose processors (GPPs) is their use in software-based NIDSs. In this paper, we describe our proposal for an efficient and flexible pattern-matching algorithm for inspecting packet payloads using a head-body finite automaton (HBFA). The proposed algorithm takes advantage of multi-core GPP parallelism and single-instruction multiple-data operations to achieve higher throughput than traditional deterministic finite automata (DFA) built with the Aho-Corasick algorithm. Whereas the head-body matching (HBM) algorithm is based on a pre-defined DFA depth value, our HBFA algorithm is based on head size. Experimental results using Snort and ClamAV pattern sets indicate that the proposed algorithm achieves up to 58% higher throughput than its HBM counterpart.
(This article belongs to the Special Issue Networks, Communication, and Computing)
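As the baseline the paper compares against, a minimal sketch of Aho-Corasick multi-pattern matching (the classical automaton, not the proposed HBFA head-body split); the pattern set is illustrative.

```python
from collections import deque

def build_ac(patterns):
    goto = [{}]        # per-state transition tables
    fail = [0]         # failure links
    out = [set()]      # patterns recognized at each state
    for p in patterns:
        s = 0
        for ch in p:   # insert the pattern into the trie
            if ch not in goto[s]:
                goto.append({}); fail.append(0); out.append(set())
                goto[s][ch] = len(goto) - 1
            s = goto[s][ch]
        out[s].add(p)
    q = deque(goto[0].values())          # depth-1 states keep fail = 0 (the root)
    while q:                             # BFS to fill failure links
        s = q.popleft()
        for ch, t in goto[s].items():
            q.append(t)
            f = fail[s]
            while f and ch not in goto[f]:
                f = fail[f]
            fail[t] = goto[f].get(ch, 0)
            out[t] |= out[fail[t]]       # inherit matches ending at the suffix state
    return goto, fail, out

def search(text, goto, fail, out):
    s, hits = 0, []
    for i, ch in enumerate(text):
        while s and ch not in goto[s]:
            s = fail[s]
        s = goto[s].get(ch, 0)
        for p in out[s]:
            hits.append((i - len(p) + 1, p))   # (start index, pattern)
    return hits

automaton = build_ac(["malware", "warez", "are"])
print(search("firmware has malware and warez", *automaton))
```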
Article
A Prediction of Precipitation Data Based on Support Vector Machine and Particle Swarm Optimization (PSO-SVM) Algorithms
by Jinglin Du, Yayun Liu, Yanan Yu and Weilan Yan
Algorithms 2017, 10(2), 57; https://doi.org/10.3390/a10020057 - 17 May 2017
Cited by 99 | Viewed by 10419
Abstract
Precipitation is a very important topic in weather forecasting. Precipitation prediction in particular is a complex task, because it depends on various parameters, such as temperature, humidity, and wind speed and direction, which change from time to time, and because the calculation varies with the geographical location and its atmospheric variables. To improve the accuracy of precipitation prediction, this paper proposes a rainfall forecast model based on a Support Vector Machine with Particle Swarm Optimization (PSO-SVM) to replace the linear threshold used in traditional precipitation forecasting. Parameter selection has a critical impact on the predictive accuracy of the SVM, and PSO is used to find the optimal parameters for the SVM. The PSO-SVM algorithm was trained on historical precipitation data. The simulations demonstrate that, other things being equal, the proposed algorithm achieves much better accuracy than the direct prediction model on a set of experimental data. The results also demonstrate the effectiveness and advantages of the PSO-SVM model in machine learning and suggest scope for further improvement as more relevant attributes become available for predicting the dependent variables.
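A minimal sketch of the PSO-SVM idea: PSO searches over the SVM hyperparameters (C, gamma) with cross-validated error as the fitness. The synthetic dataset, swarm settings, and search ranges are assumptions, and scikit-learn's SVR stands in for the paper's SVM.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, (200, 4))    # stand-in weather attributes
y = np.sin(X[:, 0]) + 0.5 * X[:, 1] + 0.1 * rng.standard_normal(200)

def fitness(log_params):
    C, gamma = np.exp(log_params)   # search in log space for better scaling
    model = SVR(C=C, gamma=gamma)
    return -cross_val_score(model, X, y, cv=3, scoring="neg_mean_squared_error").mean()

n, dim, iters = 12, 2, 30
pos = rng.uniform(-3, 3, (n, dim))
vel = np.zeros((n, dim))
pbest, pbest_f = pos.copy(), np.array([fitness(p) for p in pos])
gbest = pbest[pbest_f.argmin()]
for _ in range(iters):
    r1, r2 = rng.random((n, dim)), rng.random((n, dim))
    # Standard velocity update: inertia + cognitive + social terms.
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, -3, 3)
    f = np.array([fitness(p) for p in pos])
    improved = f < pbest_f
    pbest[improved], pbest_f[improved] = pos[improved], f[improved]
    gbest = pbest[pbest_f.argmin()]

print("best (C, gamma):", np.exp(gbest), "CV MSE:", pbest_f.min())
```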
Article
Clustering Using an Improved Krill Herd Algorithm
by Qin Li and Bo Liu
Algorithms 2017, 10(2), 56; https://doi.org/10.3390/a10020056 - 17 May 2017
Cited by 13 | Viewed by 4471
Abstract
In recent years, metaheuristic algorithms have been widely used to solve clustering problems because of their good performance and application effects. The krill herd algorithm (KHA) is a new, effective algorithm for solving optimization problems based on the imitation of individual krill behavior, and it has been shown to perform better than other swarm intelligence algorithms. However, it still has some weaknesses. In this paper, an improved krill herd algorithm (IKHA) is studied. Modified mutation operators and updated mechanisms are applied to improve global optimization; the proposed IKHA overcomes the weaknesses of KHA and performs better in optimization problems. Then, KHA and IKHA are applied to the clustering problem: in our proposed clustering algorithm, they are used to find appropriate cluster centers. Experiments were conducted on University of California Irvine (UCI) standard datasets, and the results showed that the IKHA clustering algorithm is the most effective.
Erratum
Erratum: Ahmad, F., et al. A Preconditioned Iterative Method for Solving Systems of Nonlinear Equations Having Unknown Multiplicity. Algorithms 2017, 10, 17
by Fayyaz Ahmad, Toseef Akhter Bhutta, Umar Shoaib, Malik Zaka Ullah, Ali Saleh Alshomrani, Shamshad Ahmad and Shahid Ahmad
Algorithms 2017, 10(2), 55; https://doi.org/10.3390/a10020055 - 12 May 2017
Viewed by 3032
Article
Extending the Applicability of the MMN-HSS Method for Solving Systems of Nonlinear Equations under Generalized Conditions
by Ioannis K. Argyros, Janak Raj Sharma and Deepak Kumar
Algorithms 2017, 10(2), 54; https://doi.org/10.3390/a10020054 - 12 May 2017
Cited by 1 | Viewed by 3921
Abstract
We present the semilocal convergence of a multi-step modified Newton-Hermitian and Skew-Hermitian Splitting method (MMN-HSS method) to approximate a solution of a nonlinear equation. Earlier studies show convergence only under Lipschitz conditions, limiting the applicability of this method. The convergence in this study is shown under generalized Lipschitz-type conditions and restricted convergence domains; hence, the applicability of the method is extended. Moreover, numerical examples are provided to show that our results apply to equations where the earlier studies cannot be applied. Furthermore, in the cases where both old and new results are applicable, the latter provide a larger domain of convergence and tighter error bounds on the distances involved.
(This article belongs to the Special Issue Numerical Algorithms for Solving Nonlinear Equations and Systems 2017)
Article
Application of Gradient Descent Continuous Actor-Critic Algorithm for Bilateral Spot Electricity Market Modeling Considering Renewable Power Penetration
by Huiru Zhao, Yuwei Wang, Mingrui Zhao, Chuyu Sun and Qingkun Tan
Algorithms 2017, 10(2), 53; https://doi.org/10.3390/a10020053 - 10 May 2017
Cited by 6 | Viewed by 5001
Abstract
The bilateral spot electricity market is very complicated because all generation units and demands must bid strategically in this market. With renewable resource penetration, the high variability and non-dispatchable nature of these intermittent resources make it more difficult to model and simulate the dynamic bidding process and the equilibrium in the bilateral spot electricity market, which makes developing fast and reliable market modeling approaches a matter of urgency. In this paper, a Gradient Descent Continuous Actor-Critic algorithm is proposed for hour-ahead bilateral electricity market modeling in the presence of renewable resources, because this algorithm can solve electricity market modeling problems with continuous state and action spaces without incurring the “curse of dimensionality”, and it has low time complexity. In our simulation, the proposed approach is implemented on an IEEE 30-bus test system. The performance of the proposed approach is tested and verified: it reaches Nash equilibrium after sufficient training iterations, and conclusions are drawn about the relationship between increasing renewable power output and participants’ bidding strategies, locational marginal prices, and social welfare. Moreover, a comparison with a fuzzy Q-learning-based electricity market approach, also implemented in this paper, confirms the superiority of the proposed approach in terms of participants’ profits, social welfare, average locational marginal prices, etc.
Article
Searchable Data Vault: Encrypted Queries in Secure Distributed Cloud Storage
by Geong Sen Poh, Vishnu Monn Baskaran, Ji-Jian Chin, Moesfa Soeheila Mohamad, Kay Win Lee, Dharmadharshni Maniam and Muhammad Reza Z’aba
Algorithms 2017, 10(2), 52; https://doi.org/10.3390/a10020052 - 9 May 2017
Cited by 2 | Viewed by 6853
Abstract
Cloud storage services allow users to efficiently outsource their documents anytime and anywhere. Such convenience, however, leads to privacy concerns. While storage providers may not read users’ documents, attackers may possibly gain access by exploiting vulnerabilities in the storage system. Documents may also be leaked by curious administrators. A simple solution is for the user to encrypt all documents before submitting them. This method, however, makes it impossible to efficiently search for documents as they are all encrypted. To resolve this problem, we propose a multi-server searchable symmetric encryption (SSE) scheme and construct a system called the searchable data vault (SDV). A unique feature of the scheme is that it allows an encrypted document to be divided into blocks and distributed to different storage servers, so that no single storage provider has a complete document. By incorporating the scheme, the SDV protects the privacy of documents while allowing for efficient private queries. It utilizes a web interface and a controller that manages user credentials, query indexes, and the submission of encrypted documents to cloud storage services. It is also the first system that enables a user to simultaneously outsource and privately query documents across several cloud storage services. Our preliminary performance evaluation shows that this feature introduces acceptable computation overheads when compared to submitting documents directly to a cloud storage service.
(This article belongs to the Special Issue Security and Privacy in Cloud Computing Environments)
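To make the block-distribution idea concrete, a minimal sketch, not the SDV's actual SSE scheme, that encrypts a document and spreads the ciphertext blocks across servers so that no single provider holds the whole document; the Fernet cipher from the cryptography package and the block size are assumptions.

```python
from cryptography.fernet import Fernet

def split_blocks(data: bytes, block_size: int):
    return [data[i : i + block_size] for i in range(0, len(data), block_size)]

key = Fernet.generate_key()            # held by the user, never by the servers
cipher = Fernet(key)

document = b"quarterly report: revenue up 12%, costs flat"
ciphertext = cipher.encrypt(document)

# Round-robin the ciphertext blocks over three storage servers.
servers = {0: [], 1: [], 2: []}
blocks = split_blocks(ciphertext, 16)
for i, block in enumerate(blocks):
    servers[i % 3].append((i, block))   # keep the index for reassembly

# Reassembly: fetch all blocks, reorder by index, decrypt locally.
fetched = sorted(b for s in servers.values() for b in s)
recovered = cipher.decrypt(b"".join(block for _, block in fetched))
assert recovered == document
print(f"{len(blocks)} blocks spread over {len(servers)} servers")
```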
Article
Adaptive Vector Quantization for Lossy Compression of Image Sequences
by Raffaele Pizzolante, Bruno Carpentieri and Sergio De Agostino
Algorithms 2017, 10(2), 51; https://doi.org/10.3390/a10020051 - 9 May 2017
Cited by 5 | Viewed by 6247
Abstract
In this work, we present a scheme for the lossy compression of image sequences, based on the Adaptive Vector Quantization (AVQ) algorithm. The AVQ algorithm is a lossy compression algorithm for grayscale images, which processes the input data in a single pass, using the properties of vector quantization to approximate the data. First, we review the key aspects of the AVQ algorithm and, subsequently, outline the basic concepts and design choices behind the proposed scheme. Finally, we report experimental results, which highlight an improvement in compression performance when our scheme is compared with the AVQ algorithm.
(This article belongs to the Special Issue Data Compression, Communication Processing and Security 2016)
Article
Hierarchical Parallel Evaluation of a Hamming Code
by Shmuel T. Klein and Dana Shapira
Algorithms 2017, 10(2), 50; https://doi.org/10.3390/a10020050 - 30 Apr 2017
Viewed by 5515
Abstract
The Hamming code is a well-known error correction code that can correct a single error in an input vector of size n bits by adding log n parity checks. A new parallel implementation of the code is presented, using a hierarchical structure of n processors in log n layers. All the processors perform similar simple tasks and need only a few bytes of internal memory.
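A minimal sketch of the single-error correction the abstract refers to, in its classic sequential form (the paper's hierarchical parallel evaluation is not shown): with parity bits at power-of-two positions, the syndrome is the XOR of the 1-based indices of all set bits, and it points directly at any single flipped position.

```python
from functools import reduce

def syndrome(bits):
    """XOR of 1-based positions of set bits; 0 means no single-bit error."""
    return reduce(lambda acc, i: acc ^ i, (i for i, b in enumerate(bits, 1) if b), 0)

def correct(bits):
    s = syndrome(bits)
    if s:                      # a nonzero syndrome names the flipped position
        bits[s - 1] ^= 1
    return bits

# A valid Hamming(7,4) codeword (positions 1, 2, and 4 are parity bits).
codeword = [0, 1, 1, 0, 0, 1, 1]
assert syndrome(codeword) == 0

corrupted = codeword.copy()
corrupted[4] ^= 1              # flip the bit at position 5
assert syndrome(corrupted) == 5
assert correct(corrupted) == codeword
print("single-bit error located and fixed")
```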
Article
Multivariate Statistical Process Control Using Enhanced Bottleneck Neural Network
by Khaled Bouzenad and Messaoud Ramdani
Algorithms 2017, 10(2), 49; https://doi.org/10.3390/a10020049 - 29 Apr 2017
Cited by 8 | Viewed by 6039
Abstract
Monitoring process upsets and malfunctions as early as possible and then finding and removing the factors causing the respective events is of great importance for safe operation and improved productivity. Conventional process monitoring using principal component analysis (PCA) often supposes that process data follow a Gaussian distribution. However, this constraint cannot be satisfied in practice because many industrial processes frequently span multiple operating states. To overcome this difficulty, PCA can be combined with nonparametric control charts, for which no distributional assumption is needed. However, this approach still uses a constant confidence limit, so a relatively high rate of false alarms is generated. Although nonlinear PCA (NLPCA) using autoassociative bottleneck neural networks plays an important role in the monitoring of industrial processes, it is difficult to design correct monitoring statistics and confidence limits that reflect current performance. In this work, a new monitoring strategy using an enhanced bottleneck neural network (EBNN) with an adaptive confidence limit for non-Gaussian data is proposed. The basic idea is to extract internally homogeneous segments from the historical normal data sets by fitting a Gaussian mixture model (GMM). Based on the assumption that process data follow a Gaussian distribution within an operating mode, a local confidence limit can be established. The EBNN is used to reconstruct input data and estimate the probabilities of belonging to the various local operating regimes, as modelled by the GMM. An abnormal event for an input measurement vector is detected if the squared prediction error (SPE) is too large, i.e., above a certain threshold which is made adaptive. Moreover, the sensor validity index (SVI) is employed successfully to identify the detected faulty variable. The results demonstrate that, compared with NLPCA, the proposed approach can effectively reduce the number of false alarms and is hence expected to better monitor many practical processes.
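A minimal sketch of the mode-dependent limit idea, not the EBNN itself: fit a GMM to normal operating data, assign each new sample to a local mode, and compare its reconstruction SPE against that mode's own limit. PCA stands in for the neural network, and the 99th-percentile limit is an assumption.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Two operating modes with different centers (stand-in for multimode process data).
normal = np.vstack([rng.normal(0, 1, (500, 5)), rng.normal(5, 0.5, (500, 5))])

gmm = GaussianMixture(n_components=2, random_state=0).fit(normal)
pca = PCA(n_components=2).fit(normal)          # reconstruction model (EBNN stand-in)

def spe(x):
    recon = pca.inverse_transform(pca.transform(x))
    return np.sum((x - recon) ** 2, axis=1)    # squared prediction error

# A local confidence limit per operating mode (assumed 99th percentile).
modes = gmm.predict(normal)
limits = {k: np.percentile(spe(normal[modes == k]), 99) for k in range(2)}

x_new = rng.normal(0, 1, (1, 5)) + np.array([[0, 0, 4, 0, 0]])   # injected fault
k = gmm.predict(x_new)[0]                      # pick the local operating regime
print("alarm" if spe(x_new)[0] > limits[k] else "normal")
```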
Article
Adaptive Mutation Dynamic Search Fireworks Algorithm
by Xi-Guang Li, Shou-Fei Han, Liang Zhao, Chang-Qing Gong and Xiao-Jing Liu
Algorithms 2017, 10(2), 48; https://doi.org/10.3390/a10020048 - 28 Apr 2017
Cited by 8 | Viewed by 4676
Abstract
The Dynamic Search Fireworks Algorithm (dynFWA) is an effective algorithm for solving optimization problems. However, dynFWA easily falls into local optima prematurely and has a slow convergence rate. To address these problems, an adaptive mutation dynamic search fireworks algorithm (AMdynFWA) is introduced in this paper. The proposed algorithm applies either Gaussian mutation or Lévy mutation to the core firework (CF) according to a mutation probability. Our simulation compares the proposed algorithm with FWA-based algorithms and other swarm intelligence algorithms. The results show that the proposed algorithm achieves better overall performance on the standard test functions.
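A minimal sketch of the two mutation operators the abstract combines, with the Lévy step drawn via Mantegna's algorithm, a standard construction; the index beta, the scaling by the current position, and the mutation probability are assumed values.

```python
import math
import numpy as np

rng = np.random.default_rng(0)

def levy_step(dim, beta=1.5):
    """Mantegna's algorithm for a Levy-stable step of index beta."""
    sigma = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
             / (math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.normal(0, sigma, dim)
    v = rng.normal(0, 1, dim)
    return u / np.abs(v) ** (1 / beta)

def mutate_core_firework(cf, p_levy=0.5):
    """Apply Levy or Gaussian mutation to the core firework with some probability."""
    if rng.random() < p_levy:
        return cf + levy_step(cf.size) * cf      # occasional long exploratory jumps
    return cf + rng.normal(0, 1, cf.size) * cf   # local Gaussian perturbation

cf = np.array([0.8, -1.2, 0.3])
print(mutate_core_firework(cf))
```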
Article
Trust in the Balance: Data Protection Laws as Tools for Privacy and Security in the Cloud
by Darra Hofman, Luciana Duranti and Elissa How
Algorithms 2017, 10(2), 47; https://doi.org/10.3390/a10020047 - 27 Apr 2017
Cited by 5 | Viewed by 7451
Abstract
A popular bumper sticker states: “There is no cloud. It’s just someone else’s computer.” Despite the loss of control that comes with its use, critical records are increasingly being entrusted to the cloud, generating ever-growing concern about the privacy and security of those records. Ultimately, privacy and security constitute an attempt to balance competing needs: privacy balances the need to use information against the need to protect personal data, while security balances the need to provide access to records against the need to stop unauthorized access. The importance of these issues has led to a multitude of legal and regulatory efforts to find a balance and, ultimately, to ensure trust in both digital records and their storage in the cloud. A particular challenge is the fact that distinct jurisdictions approach privacy differently, and an in-depth understanding of what a jurisdiction’s laws may be, or even under what jurisdiction particular data might fall, requires a Herculean effort. And yet, in order to protect privacy and enhance security, this effort is required. This article examines two legal tools for ensuring the privacy and security of records in the cloud, data protection laws and data localization laws, through the framework of “trust” as understood in archival science. This framework of trust provides new directions for algorithmic research, identifying those areas of digital record creation and preservation most in need of novel solutions.
(This article belongs to the Special Issue Security and Privacy in Cloud Computing Environments)
Article
An Improved Multiobjective Particle Swarm Optimization Based on Culture Algorithms
by Chunhua Jia and Hong Zhu
Algorithms 2017, 10(2), 46; https://doi.org/10.3390/a10020046 - 25 Apr 2017
Cited by 4 | Viewed by 4852
Abstract
In this paper, we propose a new approach to raise the performance of multiobjective particle swarm optimization. The personal guide and global guide are updated using three kinds of knowledge extracted from the population based on cultural algorithms. An epsilon domination criterion has been employed to enhance the convergence and diversity of the approximate Pareto front. Moreover, a simple polynomial mutation operator has been applied to both the population and the non-dominated archive. Experiments on two series of benchmark test suites have shown the effectiveness of the proposed approach. A comparison with several other algorithms that are considered good representatives of particle swarm optimization solutions has also been conducted, in order to verify the competitive performance of the proposed algorithm in solving multiobjective optimization problems.
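A minimal sketch of an additive epsilon-dominance test of the kind used to maintain such archives; this is one common formulation for minimization problems, and the epsilon value is an assumption.

```python
import numpy as np

def eps_dominates(fa, fb, eps=0.05):
    """True if objective vector fa additively epsilon-dominates fb (minimization)."""
    fa, fb = np.asarray(fa), np.asarray(fb)
    return bool(np.all(fa - eps <= fb) and np.any(fa - eps < fb))

# fa is slightly worse on the first objective, but within the eps tolerance.
print(eps_dominates([1.02, 2.0], [1.0, 2.5]))   # True
print(eps_dominates([1.5, 2.0], [1.0, 2.5]))    # False
```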
Article
An Efficient Sixth-Order Newton-Type Method for Solving Nonlinear Systems
by Xiaofeng Wang and Yang Li
Algorithms 2017, 10(2), 45; https://doi.org/10.3390/a10020045 - 25 Apr 2017
Cited by 15 | Viewed by 4605
Abstract
In this paper, we present a new sixth-order iterative method for solving nonlinear systems and prove a local convergence result. The new method requires solving five linear systems per iteration. An important feature of the new method is that the LU (lower-upper) decomposition of the Jacobian matrix is computed only once in each iteration. The computational efficiency index of the new method is compared to that of some known methods. Numerical results are given to show that the convergence behavior of the new method is similar to that of the existing methods. The new method can be applied to small- and medium-sized nonlinear systems.
(This article belongs to the Special Issue Numerical Algorithms for Solving Nonlinear Equations and Systems 2017)
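A minimal sketch of the cost-saving trick the abstract highlights: factor the Jacobian once per iteration and reuse the factors for several linear solves. The two-step frozen-Jacobian scheme and the toy system below are assumptions, not the paper's exact sixth-order method.

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

def F(x):                         # toy nonlinear system F(x) = 0
    return np.array([x[0] ** 2 + x[1] - 3.0, x[0] + x[1] ** 2 - 5.0])

def J(x):                         # its Jacobian
    return np.array([[2 * x[0], 1.0], [1.0, 2 * x[1]]])

x = np.array([1.2, 1.8])
for _ in range(8):
    lu, piv = lu_factor(J(x))                # one factorization per iteration...
    y = x - lu_solve((lu, piv), F(x))        # ...reused for every solve
    x = y - lu_solve((lu, piv), F(y))        # corrector step with the same factors
print(x, F(x))                    # converges near (1, 2); residual close to zero
```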
Article
Revised Gravitational Search Algorithms Based on Evolutionary-Fuzzy Systems
by Danilo Pelusi, Raffaele Mascella and Luca Tallini
Algorithms 2017, 10(2), 44; https://doi.org/10.3390/a10020044 - 21 Apr 2017
Cited by 25 | Viewed by 4869
Abstract
The choice of the best optimization algorithm is a hard issue, and it sometimes depends on the specific problem. The Gravitational Search Algorithm (GSA) is a search algorithm based on the law of gravity, which states that each particle attracts every other particle with a force called the gravitational force. Some revised versions of GSA have been proposed using intelligent techniques. This work proposes GSA versions based on fuzzy techniques powered by evolutionary methods, such as Genetic Algorithms (GA), Particle Swarm Optimization (PSO), and Differential Evolution (DE), to improve GSA. The designed algorithms tune a suitable parameter of GSA through a fuzzy controller whose membership functions are optimized by GA, PSO, and DE. The results show that the Fuzzy Gravitational Search Algorithm (FGSA) optimized by DE is optimal for unimodal functions, whereas FGSA optimized through GA is good for multimodal functions.
(This article belongs to the Special Issue Extensions to Type-1 Fuzzy Logic: Theory, Algorithms and Applications)
Article
Reliable Portfolio Selection Problem in Fuzzy Environment: An mλ Measure Based Approach
by Yuan Feng, Li Wang and Xinhong Liu
Algorithms 2017, 10(2), 43; https://doi.org/10.3390/a10020043 - 18 Apr 2017
Cited by 1 | Viewed by 4422
Abstract
This paper investigates a fuzzy portfolio selection problem with guaranteed reliability, in which fuzzy variables are used to capture the uncertain returns of different securities. To handle the fuzziness in a mathematical way, a new expected value operator and variance of fuzzy variables are defined based on the mλ measure, a linear combination of the possibility measure and the necessity measure that balances pessimism and optimism in the decision-making process. To formulate the reliable portfolio selection problem, we adopt the expected total return and the standard variance of the total return to evaluate the reliability of the investment strategies, producing three risk-guaranteed reliable portfolio selection models. To solve the proposed models, an effective genetic algorithm is designed to generate an approximate optimal solution to the considered problem. Finally, numerical examples are given to show the performance of the proposed models and algorithm.
(This article belongs to the Special Issue Extensions to Type-1 Fuzzy Logic: Theory, Algorithms and Applications)
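Spelled out from the abstract's description, the mλ measure is the convex combination below (written in generic notation; the paper's exact formulation may differ):

```latex
m_{\lambda}(A) \;=\; \lambda\,\mathrm{Pos}(A) \;+\; (1-\lambda)\,\mathrm{Nec}(A),
\qquad \lambda \in [0,1].
```

Setting λ = 1 recovers the purely optimistic possibility measure, and λ = 0 the purely pessimistic necessity measure.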
Article
RGloVe: An Improved Approach of Global Vectors for Distributional Entity Relation Representation
by Ziyan Chen, Yu Huang, Yuexian Liang, Yang Wang, Xingyu Fu and Kun Fu
Algorithms 2017, 10(2), 42; https://doi.org/10.3390/a10020042 - 17 Apr 2017
Cited by 7 | Viewed by 5299
Abstract
Most previous work on relation extraction between named entities is limited to extracting pre-defined types, which is inefficient for massive unlabeled text data. Recently, with the appearance of various distributional word representations, unsupervised methods for many natural language processing (NLP) tasks have been widely researched. In this paper, we focus on a new approach to unsupervised relation extraction, called distributional relation representation. Without requiring pre-defined types, distributional relation representation aims to automatically learn entity vectors and further estimate the semantic similarity between these entities. We choose global vectors (GloVe) as our original model for training entity vectors because of its excellent balance between local context and global statistics over the whole corpus. In order to train the model more efficiently, we improve the traditional GloVe model by using the cosine similarity between entity vectors, instead of the dot product, to approximate entity co-occurrences. Because cosine similarity normalizes vectors to unit length, it is intuitively more reasonable and converges more easily to a local optimum. We call the improved model RGloVe. Experimental results on a massive corpus of Sina News show that our proposed model outperforms traditional global vectors. Finally, the Neo4j graph database is introduced to store the relationships between named entities. The most competitive advantage of Neo4j is that it provides a highly accessible way to query direct and indirect relationships between entities.
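To show the single change described, a minimal sketch contrasting a GloVe-style weighted least-squares term using the dot product with one using cosine similarity; the toy vectors, the bias terms, and the exact form of the objective are assumptions rather than RGloVe's precise loss.

```python
import numpy as np

def glove_weight(x, x_max=100.0, alpha=0.75):
    """Standard GloVe weighting for a co-occurrence count x."""
    return (x / x_max) ** alpha if x < x_max else 1.0

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

rng = np.random.default_rng(0)
w_i, w_j = rng.normal(size=50), rng.normal(size=50)   # toy entity vector pair
b_i, b_j = 0.1, -0.2                                   # bias terms (assumed)
x_ij = 40.0                                            # co-occurrence count

# GloVe-style term: (w_i . w_j + b_i + b_j - log x_ij)^2, weighted.
loss_dot = glove_weight(x_ij) * (w_i @ w_j + b_i + b_j - np.log(x_ij)) ** 2
# Cosine variant: cosine similarity replaces the dot product (RGloVe's change).
loss_cos = glove_weight(x_ij) * (cosine(w_i, w_j) + b_i + b_j - np.log(x_ij)) ** 2
print(loss_dot, loss_cos)
```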