
Table of Contents

Algorithms, Volume 10, Issue 1 (March 2017)

  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the tables of contents of newly released issues.
  • PDF is the official format for papers, which are published in both HTML and PDF forms. To view a paper in PDF format, click its "PDF Full-text" link and open it with the free Adobe Reader.
Displaying articles 1-35

Editorial

Jump to: Research, Review

Open Access Editorial Acknowledgement to Reviewers of Algorithms in 2016
Algorithms 2017, 10(1), 11; doi:10.3390/a10010011
Received: 10 January 2017 / Revised: 10 January 2017 / Accepted: 10 January 2017 / Published: 10 January 2017
PDF Full-text (175 KB) | HTML Full-text | XML Full-text
Abstract The editors of Algorithms would like to express their sincere gratitude to the following reviewers for assessing manuscripts in 2016. [...] Full article

Research

Jump to: Editorial, Review

Open Access Article Backtracking-Based Iterative Regularization Method for Image Compressive Sensing Recovery
Algorithms 2017, 10(1), 7; doi:10.3390/a10010007
Received: 13 October 2016 / Revised: 21 December 2016 / Accepted: 4 January 2017 / Published: 6 January 2017
PDF Full-text (4260 KB) | HTML Full-text | XML Full-text
Abstract
This paper presents a variant of the iterative shrinkage-thresholding (IST) algorithm, called backtracking-based adaptive IST (BAIST), for image compressive sensing (CS) reconstruction. As iterations increase, IST tends to over-smooth the solution and converge prematurely. To restore lost detail, the BAIST method backtracks to the previous noisy image using L2 norm minimization, i.e., minimizing the Euclidean distance between the current solution and the previous ones. Through this modification, BAIST achieves superior performance while maintaining the low complexity of IST-type methods. BAIST also uses a nonlocal regularization with an adaptive regularizer to automatically detect the sparsity level of an image. Experimental results show that our algorithm outperforms the original IST method and several excellent CS techniques. Full article
(This article belongs to the Special Issue Data Compression, Communication Processing and Security 2016)
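As context for the entry above, the plain IST baseline that BAIST modifies can be sketched in a few lines. This is a generic illustration of iterative shrinkage-thresholding for min ||Ax − y||² + λ||x||₁; the function names and parameters are ours, not the paper's BAIST method:

```python
import numpy as np

def soft_threshold(x, t):
    """Shrinkage operator: the proximal map of the L1 norm."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def ist(A, y, lam, step, n_iter=200):
    """Basic iterative shrinkage-thresholding: a gradient step on the
    data-fidelity term followed by soft thresholding."""
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - y)   # gradient of 0.5 * ||Ax - y||^2
        x = soft_threshold(x - step * grad, step * lam)
    return x
```

For convergence, the step size must satisfy step ≤ 1/L, where L is the largest eigenvalue of AᵀA.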

Open Access Communication Using Force-Field Grids for Sampling Translation/Rotation of Partially Rigid Macromolecules
Algorithms 2017, 10(1), 6; doi:10.3390/a10010006
Received: 30 October 2016 / Revised: 19 December 2016 / Accepted: 23 December 2016 / Published: 4 January 2017
PDF Full-text (959 KB) | HTML Full-text | XML Full-text
Abstract
An algorithm is presented for the simulation of two partially flexible macromolecules, where the interaction between the flexible and rigid parts is represented by energy grids associated with the rigid part of each macromolecule. The proposed algorithm avoids transforming the grid upon molecular movement, at the significantly smaller cost of transforming the flexible part. Full article

Open Access Article A Pilot-Pattern Based Algorithm for MIMO-OFDM Channel Estimation
Algorithms 2017, 10(1), 3; doi:10.3390/a10010003
Received: 12 September 2016 / Revised: 7 December 2016 / Accepted: 13 December 2016 / Published: 28 December 2016
PDF Full-text (1103 KB) | HTML Full-text | XML Full-text
Abstract
An improved pilot pattern algorithm for facilitating channel estimation in multiple input multiple output-orthogonal frequency division multiplexing (MIMO-OFDM) systems is proposed in this paper. The presented algorithm reconfigures the parameter in the least square (LS) algorithm, which belongs to the space-time block-coded (STBC) category for channel estimation in pilot-based MIMO-OFDM systems. Simulation results show that the algorithm outperforms the classical single symbol scheme, and that it achieves nearly the same performance as the double symbols scheme with only half the complexity. Full article
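The pilot-based least squares estimate that such algorithms build on is the textbook formula H_hat = Y Xᴴ (X Xᴴ)⁻¹. A minimal sketch, with illustrative names and not the paper's STBC reconfiguration, might be:

```python
import numpy as np

def ls_channel_estimate(X_pilot, Y):
    """Textbook LS channel estimate from known pilots.

    X_pilot: (Nt, P) transmitted pilot symbols; Y: (Nr, P) received
    samples with Y ~= H @ X_pilot. Returns the (Nr, Nt) estimate
    H_hat = Y X^H (X X^H)^{-1}.
    """
    Xh = X_pilot.conj().T
    return Y @ Xh @ np.linalg.inv(X_pilot @ Xh)
```

In a noise-free setting with full-rank pilots, this recovers the channel matrix exactly.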

Open Access Article Efficient Algorithms for the Maximum Sum Problems
Algorithms 2017, 10(1), 5; doi:10.3390/a10010005
Received: 9 August 2016 / Revised: 2 December 2016 / Accepted: 26 December 2016 / Published: 4 January 2017
PDF Full-text (1175 KB) | HTML Full-text | XML Full-text
Abstract
We present efficient sequential and parallel algorithms for the maximum sum (MS) problem, which is to maximize the sum of some shape in the data array. We deal with two MS problems: the maximum subarray (MSA) problem and the maximum convex sum (MCS) problem. In the MSA problem, we find a rectangular part within the given data array that maximizes the sum in it. The MCS problem is to find a convex shape rather than a rectangular shape that maximizes the sum. Thus, MCS is a generalization of MSA. For the MSA problem, O(n)-time parallel algorithms are already known on an (n, n) 2D array of processors. We improve the communication steps from 2n − 1 to n, which is optimal. For the MCS problem, we achieve the asymptotic time bound of O(n) on an (n, n) 2D array of processors. We provide rigorous proofs for the correctness of our parallel algorithm based on Hoare logic, and also provide some experimental results of our algorithm gathered from the Blue Gene/P supercomputer. Furthermore, we briefly describe how to compute the actual shape of the maximum convex sum. Full article
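The sequential MSA problem above is the classic two-dimensional maximum subarray. A simple O(n³) sequential sketch (not the paper's parallel algorithm) fixes a pair of rows and applies Kadane's 1D algorithm to the collapsed column sums:

```python
def max_subarray_1d(a):
    """Kadane's algorithm: best contiguous-segment sum of a 1D array."""
    best = cur = a[0]
    for x in a[1:]:
        cur = max(x, cur + x)      # either extend the run or restart at x
        best = max(best, cur)
    return best

def max_subarray_2d(grid):
    """O(n^3) MSA: for each (top, bottom) row pair, collapse the rows
    into per-column sums and run Kadane on the resulting 1D array."""
    n_rows, n_cols = len(grid), len(grid[0])
    best = grid[0][0]
    for top in range(n_rows):
        col_sums = [0] * n_cols
        for bottom in range(top, n_rows):
            for c in range(n_cols):
                col_sums[c] += grid[bottom][c]
            best = max(best, max_subarray_1d(col_sums))
    return best
```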

Open Access Article Dependent Shrink of Transitions for Calculating Firing Frequencies in Signaling Pathway Petri Net Model
Algorithms 2017, 10(1), 4; doi:10.3390/a10010004
Received: 13 August 2016 / Revised: 5 December 2016 / Accepted: 26 December 2016 / Published: 31 December 2016
PDF Full-text (2499 KB) | HTML Full-text | XML Full-text | Supplementary Files
Abstract
Despite the recent rapid progress in high-throughput measurements of biological data, it is still difficult to gather all of the reaction speed data in biological pathways. This paper presents a Petri net-based algorithm that can derive estimated values for non-valid reaction speeds in a signaling pathway from biologically valid data. These reaction speeds are reflected in the delay times of the timed Petri net model of the signaling pathway. We introduce the concept of a “dependency relation” over a transition set of a Petri net and derive the properties of the dependency relation through a structural analysis. Based on the theoretical results, the proposed algorithm efficiently and repeatedly shrinks transitions with two elementary structures into a single transition, reducing the Petri net size in order to eventually discover all transition sets with a dependency relation. Finally, to show the usefulness of our algorithm, we apply it to the IL-3 Petri net model. Full article
(This article belongs to the Special Issue Biological Networks)

Open Access Article A Tensor Decomposition Based Multiway Structured Sparse SAR Imaging Algorithm with Kronecker Constraint
Algorithms 2017, 10(1), 2; doi:10.3390/a10010002
Received: 16 October 2016 / Revised: 14 December 2016 / Accepted: 17 December 2016 / Published: 25 December 2016
PDF Full-text (956 KB) | HTML Full-text | XML Full-text
Abstract
This paper investigates a structured sparse SAR imaging algorithm for the point scattering model based on tensor decomposition. Several SAR imaging schemes have been developed for improving imaging quality. For a typical SAR target scenario, the scatterer distribution usually has the feature of structured sparsity; without considering this feature thoroughly, the existing schemes still have certain drawbacks. The classic matching pursuit algorithms can obtain clearer imaging results, but at the cost of extreme complexity and huge consumption of computational resources. Therefore, this paper puts forward a tensor-based SAR imaging algorithm by means of multiway structured sparsity, which makes full use of the above geometrical feature of the scatterer distribution. The spotlight SAR observation signal is formulated as a Tucker model subject to a Kronecker constraint, and a sparse reconstruction algorithm is then introduced by utilizing the structured sparsity of the scene. The proposed tensor-based SAR imaging model is able to take advantage of the Kronecker information in each mode, which ensures robust signal reconstruction. Both the algorithm complexity analysis and numerical simulations show that the proposed method requires less computation than existing sparsity-driven SAR imaging algorithms. Imaging results based on practical measured data also indicate that the proposed algorithm is superior to the reference methods even in severely noisy environments, under the condition of multiway structured sparsity. Full article

Open Access Article MultiAspect Graphs: Algebraic Representation and Algorithms
Algorithms 2017, 10(1), 1; doi:10.3390/a10010001
Received: 25 September 2016 / Revised: 12 December 2016 / Accepted: 19 December 2016 / Published: 25 December 2016
PDF Full-text (778 KB) | HTML Full-text | XML Full-text
Abstract
We present the algebraic representation and basic algorithms for MultiAspect Graphs (MAGs). A MAG is a structure capable of representing multilayer and time-varying networks, as well as higher-order networks, while also having the property of being isomorphic to a directed graph. In particular, we show that, as a consequence of the properties associated with the MAG structure, a MAG can be represented in matrix form. Moreover, we also show that any possible MAG function (algorithm) can be obtained from this matrix-based representation. This is an important theoretical result since it paves the way for adapting well-known graph algorithms for application in MAGs. We present a set of basic MAG algorithms, constructed from well-known graph algorithms, such as degree computing, Breadth First Search (BFS), and Depth First Search (DFS). These algorithms adapted to the MAG context can be used as primitives for building other more sophisticated MAG algorithms. Therefore, such examples can be seen as guidelines on how to properly derive MAG algorithms from basic algorithms on directed graphs. We also make available Python implementations of all the algorithms presented in this paper. Full article
(This article belongs to the Special Issue Algorithms for Complex Network Analysis)
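Since a MAG is isomorphic to a directed graph and admits a matrix form, the adapted primitives behave like their directed-graph counterparts. As a stand-in for the matrix-based representation, BFS over a boolean adjacency matrix can be sketched as follows (illustrative, not the paper's MAG implementation):

```python
from collections import deque

def bfs_levels(adj, source):
    """BFS over a directed graph given as a boolean adjacency matrix.

    adj[i][j] is True when there is an edge i -> j. Returns the BFS
    level (distance from source) of every vertex, or -1 if unreachable.
    """
    n = len(adj)
    level = [-1] * n
    level[source] = 0
    queue = deque([source])
    while queue:
        u = queue.popleft()
        for v in range(n):
            if adj[u][v] and level[v] == -1:
                level[v] = level[u] + 1
                queue.append(v)
    return level
```

Scanning a full matrix row per dequeued vertex costs O(n²) overall, the usual price of the matrix representation relative to adjacency lists.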

Open Access Article Elite Opposition-Based Social Spider Optimization Algorithm for Global Function Optimization
Algorithms 2017, 10(1), 9; doi:10.3390/a10010009
Received: 27 November 2016 / Revised: 27 December 2016 / Accepted: 4 January 2017 / Published: 8 January 2017
PDF Full-text (3901 KB) | HTML Full-text | XML Full-text
Abstract
The Social Spider Optimization algorithm (SSO) is a novel metaheuristic optimization algorithm. To enhance its convergence speed and computational accuracy, this paper proposes an elite opposition-based Social Spider Optimization algorithm (EOSSO), which augments SSO with an elite opposition-based learning strategy. Tests on 23 benchmark functions show that EOSSO is able to obtain accurate solutions, converges quickly, and has a high degree of stability. Full article
(This article belongs to the Special Issue Metaheuristic Algorithms in Optimization and Applications)
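Elite opposition-based learning typically reflects a candidate inside the dynamic bounds spanned by the current elite solutions. A hedged sketch of one common formulation follows; the exact EOSSO update may differ, and the names here are ours:

```python
import random

def elite_opposite(elites, x, k=None):
    """Elite opposition-based candidate: reflect x within the dynamic
    bounds [lo_j, hi_j] spanned by the current elite solutions.

    elites: list of elite solution vectors; x: the solution to oppose;
    k: scaling factor in [0, 1], drawn uniformly at random if not given.
    """
    dims = len(x)
    lo = [min(e[j] for e in elites) for j in range(dims)]
    hi = [max(e[j] for e in elites) for j in range(dims)]
    if k is None:
        k = random.random()
    return [k * (lo[j] + hi[j]) - x[j] for j in range(dims)]
```

In practice, opposite points that fall outside the search bounds are clamped or resampled before being evaluated against the originals.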

Open Access Article Computing a Clique Tree with the Algorithm Maximal Label Search
Algorithms 2017, 10(1), 20; doi:10.3390/a10010020
Received: 29 October 2016 / Revised: 12 January 2017 / Accepted: 16 January 2017 / Published: 25 January 2017
PDF Full-text (311 KB) | HTML Full-text | XML Full-text
Abstract
The algorithm MLS (Maximal Label Search) is a graph search algorithm that generalizes the algorithms Maximum Cardinality Search (MCS), Lexicographic Breadth-First Search (LexBFS), Lexicographic Depth-First Search (LexDFS) and Maximal Neighborhood Search (MNS). On a chordal graph, MLS computes a PEO (perfect elimination ordering) of the graph. We show how the algorithm MLS can be modified to compute a PMO (perfect moplex ordering), as well as a clique tree and the minimal separators of a chordal graph. We give a necessary and sufficient condition on the labeling structure of MLS for the beginning of a new clique in the clique tree to be detected by a condition on labels. MLS is also used to compute a clique tree of the complement graph, and new cliques in the complement graph can be detected by a condition on labels for any labeling structure. We provide a linear time algorithm computing a PMO and the corresponding generators of the maximal cliques and minimal separators of the complement graph. On a non-chordal graph, the algorithm MLSM, a graph search algorithm computing an MEO and a minimal triangulation of the graph, is used to compute an atom tree of the clique minimal separator decomposition of any graph. Full article
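As a concrete example of one of the searches MLS generalizes, Maximum Cardinality Search fits in a few lines; on a chordal graph, the reversed visit order is a PEO. This is an illustration of plain MCS only, not the generalized MLS of the paper:

```python
def mcs_peo(adj):
    """Maximum Cardinality Search on a graph given as {vertex: set(neighbors)}.

    Repeatedly visits the unvisited vertex with the most visited
    neighbours; on a chordal graph the reversed visit order is a
    perfect elimination ordering (PEO)."""
    visited = []
    weight = {v: 0 for v in adj}
    while len(visited) < len(adj):
        u = max((v for v in adj if v not in visited), key=lambda v: weight[v])
        visited.append(u)
        for w in adj[u]:
            if w not in visited:
                weight[w] += 1   # w gained a visited neighbour
    return list(reversed(visited))
```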

Open Access Article Estimating the Local Radius of Convergence for Picard Iteration
Algorithms 2017, 10(1), 10; doi:10.3390/a10010010
Received: 10 October 2016 / Revised: 28 December 2016 / Accepted: 30 December 2016 / Published: 9 January 2017
PDF Full-text (296 KB) | HTML Full-text | XML Full-text
Abstract
In this paper, we propose an algorithm to estimate the radius of convergence for the Picard iteration in the setting of a real Hilbert space. Numerical experiments show that the proposed algorithm provides convergence balls close to or even identical to the best ones. As the algorithm does not require evaluating the norms of derivatives, the computational effort is relatively low. Full article
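For context, the Picard iteration itself is simply repeated application of the operator, converging to a fixed point whenever the operator is a contraction on a ball containing the starting point. A minimal sketch (the paper's radius-estimation algorithm is not shown):

```python
import math

def picard(f, x0, n_iter=200, tol=1e-12):
    """Picard iteration x_{k+1} = f(x_k), stopping once successive
    iterates are closer than tol."""
    x = x0
    for _ in range(n_iter):
        x_next = f(x)
        if abs(x_next - x) < tol:
            return x_next
        x = x_next
    return x
```

For example, iterating cos from x0 = 1 converges to the Dottie number, the unique fixed point of cos.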

Open Access Article Modeling Delayed Dynamics in Biological Regulatory Networks from Time Series Data
Algorithms 2017, 10(1), 8; doi:10.3390/a10010008
Received: 31 October 2016 / Revised: 13 December 2016 / Accepted: 20 December 2016 / Published: 9 January 2017
PDF Full-text (1155 KB) | HTML Full-text | XML Full-text | Supplementary Files
Abstract
Background: The modeling of Biological Regulatory Networks (BRNs) relies on background knowledge, derived from the literature and/or the analysis of biological observations. However, with the development of high-throughput data, there is a growing need for methods that automatically generate admissible models. Methods: Our research aim is to provide a logical approach to infer BRNs based on given time series data and known influences among genes. Results: We propose a new methodology for models expressed through a timed extension of automata networks (well suited for biological systems). The main purpose is to obtain a resulting network as consistent as possible with the observed datasets. Conclusion: The originality of our work is three-fold: (i) the identification of the sign of each interaction; (ii) the direct integration of quantitative time delays in the learning approach; and (iii) the identification of the qualitative discrete levels that lead to the systems’ dynamics. We show the benefits of such an automatic approach on dynamical biological models, the DREAM4 (in silico) and DREAM8 (breast cancer) datasets, popular reverse-engineering challenges, in order to discuss the precision and computational performance of our modeling method. Full article
(This article belongs to the Special Issue Biological Networks)

Open Access Article Towards Efficient Positional Inverted Index †
Algorithms 2017, 10(1), 30; doi:10.3390/a10010030
Received: 23 December 2016 / Revised: 7 February 2017 / Accepted: 17 February 2017 / Published: 22 February 2017
PDF Full-text (1014 KB) | HTML Full-text | XML Full-text
Abstract
We address the problem of positional indexing in the natural language domain. The positional inverted index contains the information of the word positions. Thus, it is able to recover the original text file, which implies that it is not necessary to store the original file. Our Positional Inverted Self-Index (PISI) stores the word position gaps encoded by variable byte code. Inverted lists of single terms are combined into one inverted list that represents the backbone of the text file, since it stores the sequence of the indexed words of the original file. The inverted list is synchronized with a presentation layer that stores separators and stop words, as well as variants of the indexed words. Huffman coding is used to encode the presentation layer. The space complexity of the PISI inverted list is O((N − n) log_{2^b} N + ((N − n)/α + n) × (log_{2^b} n + 1)), where N is the number of stems, n is the number of unique stems, α is the step/period of the back pointers in the inverted list, and b is the size of the computer memory word given in bits. The space complexity of the presentation layer is O(−∑_{i=1}^{N} log₂ p_{in(i)} − ∑_{j=1}^{N′} log₂ p′_j + N′), with respect to p_{in(i)} as the probability of a stem variant at position i, p′_j as the probability of a separator or stop word at position j, and N′ as the number of separators and stop words. Full article
(This article belongs to the Special Issue Data Compression, Communication Processing and Security 2016)
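The variable byte code used for the position gaps can be sketched as follows. This is a standard VByte variant in which the high bit flags the final byte of each integer; the paper's exact byte layout may differ:

```python
def vbyte_encode(gaps):
    """Variable byte code: 7 payload bits per byte, high bit set on the
    last (least significant) byte of each encoded integer."""
    out = bytearray()
    for g in gaps:
        chunk = []
        while True:
            chunk.append(g % 128)   # lowest 7 bits first
            if g < 128:
                break
            g //= 128
        chunk[0] += 128             # flag the terminating byte
        out.extend(reversed(chunk)) # emit most significant byte first
    return bytes(out)

def vbyte_decode(data):
    gaps, n = [], 0
    for b in data:
        if b < 128:
            n = n * 128 + b         # continuation byte
        else:
            gaps.append(n * 128 + (b - 128))
            n = 0
    return gaps
```

Small gaps take a single byte, which is why gap encoding pairs well with this code.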

Open Access Article Concurrent vs. Exclusive Reading in Parallel Decoding of LZ-Compressed Files
Algorithms 2017, 10(1), 21; doi:10.3390/a10010021
Received: 25 November 2016 / Revised: 24 December 2016 / Accepted: 23 January 2017 / Published: 28 January 2017
PDF Full-text (236 KB) | HTML Full-text | XML Full-text
Abstract
Broadcasting a message from one to many processors in a network corresponds to concurrent reading on a random access shared memory parallel machine. Computing the trees of a forest, the level of each node in its tree and the path between two nodes are problems that can easily be solved with concurrent reading in a time logarithmic in the maximum height of a tree. Solving such problems with exclusive reading requires a time logarithmic in the number of nodes, implying message passing between disjoint pairs of processors on a distributed system. Allowing concurrent reading in parallel algorithm design for distributed computing might be advantageous in practice if these problems are faced on shallow trees with some specific constraints. We show an application to LZC (Lempel-Ziv-Compress)-compressed file decoding, whose parallelization employs these computations on such trees for realistic data. On the other hand, zipped files do not have this advantage, since they are compressed by the Lempel–Ziv sliding window technique. Full article
(This article belongs to the Special Issue Data Compression, Communication Processing and Security 2016)
Open Access Article Coupled Least Squares Identification Algorithms for Multivariate Output-Error Systems
Algorithms 2017, 10(1), 12; doi:10.3390/a10010012
Received: 17 November 2016 / Revised: 5 January 2017 / Accepted: 6 January 2017 / Published: 12 January 2017
PDF Full-text (363 KB) | HTML Full-text | XML Full-text
Abstract
This paper focuses on the recursive identification problems for a multivariate output-error system. By decomposing the system into several subsystems and by forming a coupled relationship between the parameter estimation vectors of the subsystems, two coupled auxiliary model based recursive least squares (RLS) algorithms are presented. Moreover, in contrast to the auxiliary model based recursive least squares algorithm, the proposed algorithms provide a reference to improve the identification accuracy of the multivariate output-error system. The simulation results confirm the effectiveness of the proposed algorithms. Full article
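A single recursive least squares update, the building block that such coupled algorithms extend, can be sketched as follows (generic textbook form, names illustrative, without the auxiliary model or coupling of the paper):

```python
import numpy as np

def rls_update(theta, P, phi, y, lam=1.0):
    """One RLS step for the scalar model y = phi^T theta + noise.

    theta: current parameter estimate (n,); P: covariance matrix (n, n);
    phi: regressor vector (n,); y: new observation;
    lam: forgetting factor (1.0 = ordinary RLS)."""
    phi = phi.reshape(-1, 1)
    K = P @ phi / (lam + phi.T @ P @ phi)        # gain vector
    theta = theta + (K * (y - phi.T @ theta)).ravel()
    P = (P - K @ phi.T @ P) / lam                # covariance update
    return theta, P
```

Initializing P to a large multiple of the identity encodes weak confidence in the initial estimate.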

Open Access Article A New Quintic Spline Method for Integro Interpolation and Its Error Analysis
Algorithms 2017, 10(1), 32; doi:10.3390/a10010032
Received: 24 December 2016 / Revised: 26 February 2017 / Accepted: 1 March 2017 / Published: 3 March 2017
PDF Full-text (761 KB) | HTML Full-text | XML Full-text
Abstract
In this paper, to overcome the innate drawbacks of some older methods, we present a new quintic spline method for integro interpolation. The method is free of any exact end conditions, and it can reconstruct a function and its first-order to fifth-order derivatives with high accuracy by using only the given integral values of the original function. The approximation properties of the obtained integro quintic spline are well studied and examined. The theoretical analysis and the numerical tests show that the new method is very effective for integro interpolation. Full article
Open Access Article Evaluation of Diversification Techniques for Legal Information Retrieval
Algorithms 2017, 10(1), 22; doi:10.3390/a10010022
Received: 22 November 2016 / Revised: 17 January 2017 / Accepted: 19 January 2017 / Published: 29 January 2017
PDF Full-text (435 KB) | HTML Full-text | XML Full-text
Abstract
“Public legal information from all countries and international institutions is part of the common heritage of humanity. Maximizing access to this information promotes justice and the rule of law.” In accordance with this declaration on free access to law by the legal information institutes of the world, a plethora of legal information is available through the Internet, and the provision of legal information has never been easier. Given that the law is accessed by a much wider group of people, the majority of whom are not legally trained or qualified, diversification techniques should be employed in legal information retrieval so as to increase user satisfaction. We address the diversification of results in legal search by adopting several state-of-the-art methods from the web search, network analysis and text summarization domains. We provide an exhaustive evaluation of the methods, using a standard dataset from the common law domain that we objectively annotated with relevance judgments for this purpose. Our results: (i) reveal that users receive broader insights across the results they get from a legal information retrieval system; (ii) demonstrate that web search diversification techniques outperform other approaches (e.g., summarization-based and graph-based methods) in the context of legal diversification; and (iii) offer balanced boundaries between reinforcing relevant documents and sampling the information space around the legal query. Full article
(This article belongs to the Special Issue Humanistic Data Processing)

Open Access Article Large Scale Implementations for Twitter Sentiment Classification
Algorithms 2017, 10(1), 33; doi:10.3390/a10010033
Received: 8 December 2016 / Revised: 28 February 2017 / Accepted: 1 March 2017 / Published: 4 March 2017
PDF Full-text (343 KB) | HTML Full-text | XML Full-text
Abstract
Sentiment analysis on Twitter data is a challenging problem due to the nature, diversity and volume of the data. People tend to express their feelings freely, which makes Twitter an ideal source for accumulating a vast amount of opinions towards a wide spectrum of topics. This amount of information offers huge potential and can be harnessed to determine the sentiment tendency towards these topics. However, since no one can invest an infinite amount of time to read through these tweets, an automated decision-making approach is necessary. Nevertheless, most existing solutions are limited to centralized environments, so they can only process at most a few thousand tweets. Such a sample is not representative for defining the sentiment polarity towards a topic, given the massive number of tweets published daily. In this work, we develop two systems for programming with Big Data: the first in the MapReduce and the second in the Apache Spark framework. The algorithm exploits all hashtags and emoticons inside a tweet as sentiment labels, and proceeds to classify diverse sentiment types in a parallel and distributed manner. Moreover, the sentiment analysis tool is based on Machine Learning methodologies alongside Natural Language Processing techniques, and utilizes Apache Spark’s machine learning library, MLlib. In order to address the nature of Big Data, we introduce some pre-processing steps for achieving better results in sentiment analysis, as well as Bloom filters to compact the storage size of intermediate data and boost the performance of our algorithm. Finally, the proposed system was trained and validated with real data crawled from Twitter, and, through an extensive experimental evaluation, we prove that our solution is efficient, robust and scalable while confirming the quality of our sentiment identification. Full article
(This article belongs to the Special Issue Humanistic Data Processing)
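The Bloom filters mentioned for compacting intermediate data trade a small, tunable false-positive rate for a fixed-size bit array with no false negatives. A generic sketch (illustrative, not the paper's implementation) is:

```python
import hashlib

class BloomFilter:
    """Compact approximate membership set: k hash positions in an
    m-bit array; lookups never miss an added item but may rarely
    report an absent item as present."""

    def __init__(self, m=1024, k=3):
        self.m, self.k = m, k
        self.bits = bytearray(m // 8 + 1)

    def _positions(self, item):
        # Derive k independent positions by salting a cryptographic hash.
        for i in range(self.k):
            h = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(h[:8], "big") % self.m

    def add(self, item):
        for p in self._positions(item):
            self.bits[p // 8] |= 1 << (p % 8)

    def __contains__(self, item):
        return all(self.bits[p // 8] & (1 << (p % 8))
                   for p in self._positions(item))
```

Sizing m and k against the expected number of items controls the false-positive rate.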

Open Access Article A Fault Detection and Data Reconciliation Algorithm in Technical Processes with the Help of Haar Wavelets Packets
Algorithms 2017, 10(1), 13; doi:10.3390/a10010013
Received: 30 September 2016 / Revised: 17 December 2016 / Accepted: 7 January 2017 / Published: 14 January 2017
PDF Full-text (1878 KB) | HTML Full-text | XML Full-text
Abstract
This article focuses on the detection of errors using a signal-based approach. The proposed algorithm considers several error recognition criteria: soft, hard and very hard. After an error is recognized, it is replaced; in this sense, different data reconciliation strategies are associated with the proposed error detection criteria. Algorithms in several industrial software platforms are used for detecting sensor errors. Computer simulations confirm the validity of the presented applications, and results with actual sensor measurements in industrial processes are presented. Full article
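A single level of the Haar transform, the building block of Haar wavelet packets, splits a signal into pairwise averages and pairwise differences; isolated spikes stand out in the detail coefficients, which is the essence of signal-based error detection. This is a simplified sketch under our own names and threshold, not the paper's multi-criteria algorithm:

```python
def haar_step(signal):
    """One level of the Haar transform: pairwise averages
    (approximation) and pairwise differences (detail).
    The signal length must be even."""
    approx = [(signal[i] + signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
    detail = [(signal[i] - signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
    return approx, detail

def flag_outliers(signal, threshold):
    """Flag the indices of sample pairs whose Haar detail coefficient
    exceeds the threshold, i.e., candidate measurement errors."""
    _, detail = haar_step(signal)
    return [i for i, d in enumerate(detail) if abs(d) > threshold]
```

A full wavelet packet decomposition would recurse on both the approximation and the detail branches.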

Open Access Article An Architectural Based Framework for the Distributed Collection, Analysis and Query from Inhomogeneous Time Series Data Sets and Wearables for Biofeedback Applications
Algorithms 2017, 10(1), 23; doi:10.3390/a10010023
Received: 30 May 2016 / Accepted: 20 January 2017 / Published: 1 February 2017
PDF Full-text (5641 KB) | HTML Full-text | XML Full-text
Abstract
The increasing professionalism of sports persons, and the desire of consumers to imitate this, has led to an increased metrification of sport. This has been driven in no small part by the widespread availability of comparatively cheap assessment technologies and, more recently, wearable technologies. Historically, whilst these have produced large data sets, often only the most rudimentary analysis has taken place (Wisbey et al. in: “Quantifying movement demands of AFL football using GPS tracking”). This paucity of analysis is due in no small part to the challenges of analysing large sets of data, often from disparate data sources, to glean useful key performance indicators, which has largely been a labour-intensive process. This paper presents a framework, which can be cloud based, for the gathering, storing and algorithmic interpretation of large and inhomogeneous time series data sets. The framework is architecture based and technology agnostic in the data sources it can gather, and presents a model for multi-set analysis across and within devices and individual subjects. A sample implementation demonstrates the utility of the framework for sports performance data collected from distributed inertial sensors in the sport of swimming. Full article
Open AccessArticle Kernel Clustering with a Differential Harmony Search Algorithm for Scheme Classification
Algorithms 2017, 10(1), 14; doi:10.3390/a10010014
Received: 8 October 2016 / Revised: 22 December 2016 / Accepted: 11 January 2017 / Published: 14 January 2017
PDF Full-text (1618 KB) | HTML Full-text | XML Full-text
Abstract
This paper presents kernel fuzzy clustering with a novel differential harmony search algorithm for diversion scheduling scheme classification. First, we employ a self-adaptive solution generation strategy and a differential evolution-based population update strategy to improve the classical harmony search. Second, we apply the differential harmony search algorithm to kernel fuzzy clustering to help the clustering method obtain better solutions. Finally, the combination of kernel fuzzy clustering and differential harmony search is applied to water diversion scheduling in East Lake. A comparison of the proposed method with other methods has been carried out. The results show that kernel clustering with the differential harmony search algorithm performs well on water diversion scheduling problems. Full article
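A minimal sketch of the classical harmony search that the paper builds on may help; all parameter values are illustrative, and the paper's variant replaces the random pitch adjustment below with self-adaptive, differential evolution-based strategies:

```python
import random

def harmony_search(f, dim=2, hms=10, hmcr=0.9, par=0.3, bw=0.1,
                   bounds=(-5.0, 5.0), iters=2000, seed=0):
    """Classical harmony search minimizing f over a box."""
    rng = random.Random(seed)
    lo, hi = bounds
    hm = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(hms)]
    for _ in range(iters):
        new = []
        for d in range(dim):
            if rng.random() < hmcr:          # memory consideration
                v = rng.choice(hm)[d]
                if rng.random() < par:       # pitch adjustment
                    v += rng.uniform(-bw, bw)
            else:                            # random selection
                v = rng.uniform(lo, hi)
            new.append(min(hi, max(lo, v)))
        worst = max(range(hms), key=lambda i: f(hm[i]))
        if f(new) < f(hm[worst]):            # replace the worst harmony
            hm[worst] = new
    return min(hm, key=f)

sphere = lambda x: sum(v * v for v in x)
best = harmony_search(sphere)
```

Each iteration improvises one new harmony from memory and replaces the worst stored solution if the new one is better; the differential variant changes how the improvisation step is generated.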
Open AccessArticle Problems on Finite Automata and the Exponential Time Hypothesis
Algorithms 2017, 10(1), 24; doi:10.3390/a10010024
Received: 20 September 2016 / Revised: 23 January 2017 / Accepted: 25 January 2017 / Published: 5 February 2017
PDF Full-text (349 KB) | HTML Full-text | XML Full-text
Abstract
We study several classical decision problems on finite automata under the (Strong) Exponential Time Hypothesis. We focus on three types of problems: universality, equivalence, and emptiness of intersection. All of these problems are known to be CoNP-hard for nondeterministic finite automata, even when restricted to unary input alphabets. A different type of problem on finite automata relates to aperiodicity and to synchronizing words. We also consider finite automata that work on commutative alphabets and those working on two-dimensional words. Full article
Open AccessArticle A Novel, Gradient Boosting Framework for Sentiment Analysis in Languages where NLP Resources Are Not Plentiful: A Case Study for Modern Greek
Algorithms 2017, 10(1), 34; doi:10.3390/a10010034
Received: 11 December 2016 / Accepted: 24 February 2017 / Published: 6 March 2017
PDF Full-text (1348 KB) | HTML Full-text | XML Full-text
Abstract
Sentiment analysis has played a primary role in text classification. It is an undoubted fact that some years ago, textual information was spreading at manageable rates; however, nowadays, such information has outgrown even the most ambitious expectations and constantly grows within seconds. It is therefore quite complex to cope with the vast amount of textual data, particularly if we also take the incremental production speed into account. Social media, e-commerce, news articles, comments and opinions are broadcast on a daily basis. A rational solution for handling this abundance of data would be to build automated information processing systems for analyzing and extracting meaningful patterns from text. The present paper focuses on sentiment analysis applied to Greek texts. Thus far, there is no wide availability of natural language processing tools for Modern Greek. Hence, a thorough analysis of Greek, from the lexical to the syntactical level, is difficult to perform. This paper attempts a different approach, based on the proven capabilities of gradient boosting, a well-known technique for dealing with high-dimensional data. The main rationale is that, since English dominates the area of preprocessing tools and there are quite reliable translation services, we can exploit them to transform Greek tokens into English, thus assuring the precision of the translation, since the translation of large texts is not always reliable and meaningful. The new feature set of English tokens is augmented with the original set of Greek tokens, consequently producing a high-dimensional dataset that poses certain difficulties for any traditional classifier. Accordingly, we apply gradient boosting machines, an ensemble algorithm that can learn with different loss functions, providing the ability to work efficiently with high-dimensional data.
Moreover, for the task at hand, we deal with class imbalance issues, since the distribution of sentiments in real-world applications is often unequal. For example, in political forums or electronic discussions about immigration or religion, negative comments overwhelm the positive ones. The class imbalance problem was confronted using a hybrid technique that combines under-sampling of the majority class with over-sampling of the minority class. Experimental results, considering different settings, such as translation of tokens against translation of sentences, limited Greek text preprocessing, and omission of the translation phase, demonstrate that the proposed gradient boosting framework can effectively cope with both high-dimensional and imbalanced datasets, and performs significantly better than a plethora of traditional machine learning classification approaches in terms of precision and recall. Full article
(This article belongs to the Special Issue Humanistic Data Processing)
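The hybrid resampling idea can be sketched as follows; this simple random under-/over-sampling toward the class-size midpoint is an illustrative assumption, and the paper's actual variation may differ:

```python
import random

def hybrid_resample(X, y, seed=0):
    """Balance a binary dataset: under-sample the majority class and
    over-sample the minority class (with replacement) toward the midpoint."""
    rng = random.Random(seed)
    pos = [x for x, label in zip(X, y) if label == 1]
    neg = [x for x, label in zip(X, y) if label == 0]
    maj_is_pos = len(pos) >= len(neg)
    major, minor = (pos, neg) if maj_is_pos else (neg, pos)
    target = (len(major) + len(minor)) // 2
    major = rng.sample(major, target)                                        # under-sample
    minor = minor + [rng.choice(minor) for _ in range(target - len(minor))]  # over-sample
    Xb = major + minor
    yb = [1 if maj_is_pos else 0] * target + [0 if maj_is_pos else 1] * target
    return Xb, yb

X = [[float(i)] for i in range(10)]
y = [0] * 8 + [1] * 2              # 8 negatives vs. 2 positives
Xb, yb = hybrid_resample(X, y)     # balanced: 5 of each class
```

Meeting in the middle discards fewer majority examples than pure under-sampling while duplicating fewer minority examples than pure over-sampling.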
Open AccessArticle A Geo-Clustering Approach for the Detection of Areas-of-Interest and Their Underlying Semantics
Algorithms 2017, 10(1), 35; doi:10.3390/a10010035
Received: 20 December 2016 / Revised: 1 March 2017 / Accepted: 13 March 2017 / Published: 18 March 2017
PDF Full-text (12600 KB) | HTML Full-text | XML Full-text
Abstract
Living in the “era of social networking”, we are experiencing a data revolution, generating an astonishing amount of digital information every single day. Due to this proliferation of data volume, there has been an explosion of new application domains for information mined from social networks. In this paper, we leverage this “socially-generated knowledge” (i.e., user-generated content derived from social networks) towards the detection of areas-of-interest within an urban region. These large and homogeneous areas contain multiple points-of-interest which are of special interest to particular groups of people (e.g., tourists and/or consumers). In order to identify them, we exploit two types of metadata, namely location-based information included within geo-tagged photos that we collect from Flickr, along with plain textual information from user-generated tags. We propose an algorithm that divides a predefined geographical area (i.e., the center of Athens, Greece) into “tile”-shaped sub-regions and, based on an iterative merging procedure, aims to detect larger, cohesive areas. We examine the performance of the algorithm in both a qualitative and a quantitative manner. Our experiments demonstrate that the proposed geo-clustering algorithm correctly detects regions that contain popular tourist attractions, with very promising results. Full article
(This article belongs to the Special Issue Humanistic Data Processing)
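The first step of such a geo-clustering pipeline, assigning geo-tagged photos to "tile"-shaped sub-regions, might be sketched like this; the tile size and the coordinates are illustrative, not taken from the paper:

```python
from collections import defaultdict

def to_tiles(points, tile=0.01):
    """Assign (lat, lon) photo coordinates to square tiles of side `tile`
    degrees; each tile is a candidate sub-region to be merged later."""
    grid = defaultdict(list)
    for lat, lon in points:
        grid[(int(lat // tile), int(lon // tile))].append((lat, lon))
    return grid

# toy geo-tagged photos around central Athens (coordinates illustrative)
photos = [(37.9715, 23.7257), (37.9716, 23.7260), (37.9838, 23.7275)]
grid = to_tiles(photos)   # the first two photos share a tile
```

The iterative merging procedure would then repeatedly join adjacent tiles whose photo tags are sufficiently similar, growing cohesive areas-of-interest.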
Open AccessArticle Toward Personalized Vibrotactile Support When Learning Motor Skills
Algorithms 2017, 10(1), 15; doi:10.3390/a10010015
Received: 8 September 2016 / Revised: 29 December 2016 / Accepted: 12 January 2017 / Published: 16 January 2017
PDF Full-text (688 KB) | HTML Full-text | XML Full-text
Abstract
Personal tracking technologies allow sensing of the physical activity carried out by people. Data flows collected with these sensors call for big data techniques to support data collection, integration and analysis, aimed at providing personalized support when learning motor skills through varied multisensorial feedback. In particular, this paper focuses on vibrotactile feedback, as it can take advantage of the haptic sense when supporting the physical interaction to be learnt. Although each user has different needs, personalization issues are hardly taken into account when providing this vibrotactile support: the same response is delivered to each and every user of the system. The challenge here is how to design vibrotactile user interfaces for adaptive learning of motor skills. The TORMES methodology is proposed to facilitate the elicitation of this personalized support. The resulting systems are expected to dynamically adapt to each individual user’s needs by monitoring, comparing and, when appropriate, correcting in a personalized way how the user should move when practicing a predefined movement, for instance, when performing a sport technique or playing a musical instrument. Full article
Open AccessArticle An On-Line Tracker for a Stochastic Chaotic System Using Observer/Kalman Filter Identification Combined with Digital Redesign Method
Algorithms 2017, 10(1), 25; doi:10.3390/a10010025
Received: 15 November 2016 / Revised: 8 February 2017 / Accepted: 9 February 2017 / Published: 15 February 2017
PDF Full-text (2655 KB) | HTML Full-text | XML Full-text
Abstract
This is the first paper to present a digital redesign method for the (conventional) OKID system and to apply this novel technique to nonlinear system identification. First, the Observer/Kalman filter Identification (OKID) method is used to obtain a lower-order state-space model for a stochastic chaotic system. Then, a digital redesign approach with the high-gain property is applied to improve and replace the observer identified by OKID. Therefore, the proposed OKID combined with an observer-based digitally redesigned tracker not only suppresses uncertainties and nonlinear perturbations, but also yields more accurate observation parameters of OKID for complex Multi-Input Multi-Output systems. In this research, Chen’s stochastic chaotic system is used as an illustrative example to demonstrate the effectiveness and excellence of the proposed methodology. Full article
Open AccessArticle Analysis and Improvement of Fireworks Algorithm
Algorithms 2017, 10(1), 26; doi:10.3390/a10010026
Received: 12 December 2016 / Accepted: 14 February 2017 / Published: 17 February 2017
PDF Full-text (1317 KB) | HTML Full-text | XML Full-text
Abstract
The Fireworks Algorithm is a recently developed swarm intelligence algorithm that simulates the explosion process of fireworks. Based on an analysis of each operator of the Fireworks Algorithm (FWA), this paper improves the FWA and proves that the improved algorithm converges to the global optimal solution with probability 1. The proposed algorithm aims to further boost performance and achieve global optimization mainly through the following strategies. First, the population is initialized using opposition-based learning. Second, a new explosion amplitude mechanism for the optimal firework is proposed. In addition, adaptive t-distribution mutation for non-optimal individuals and elite opposition-based learning for the optimal individual are used. Finally, a new selection strategy, namely disruptive selection, is proposed to reduce the running time of the algorithm compared with FWA. In our simulations, we use the CEC2013 standard functions and compare the proposed algorithm (IFWA) with SPSO2011, FWA, EFWA and dynFWA. The results show that the proposed algorithm has better overall performance on the test functions. Full article
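The opposition-based learning initialization mentioned above can be sketched as follows; the objective function and parameter choices are illustrative:

```python
import random

def obl_init(f, n, dim, lo, hi, seed=0):
    """Opposition-based initialization: draw a random population, form each
    point's opposite lo + hi - x, and keep the n fittest of the union."""
    rng = random.Random(seed)
    pop = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n)]
    opp = [[lo + hi - v for v in ind] for ind in pop]
    return sorted(pop + opp, key=f)[:n]

sphere = lambda x: sum(v * v for v in x)
population = obl_init(sphere, n=5, dim=3, lo=-10.0, hi=10.0)
```

Evaluating each candidate together with its mirror image doubles the initial coverage of the search space at the cost of n extra function evaluations.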
Open AccessArticle Length-Bounded Hybrid CPU/GPU Pattern Matching Algorithm for Deep Packet Inspection
Algorithms 2017, 10(1), 16; doi:10.3390/a10010016
Received: 29 November 2016 / Revised: 5 January 2017 / Accepted: 11 January 2017 / Published: 18 January 2017
PDF Full-text (2302 KB) | HTML Full-text | XML Full-text
Abstract
Since frequent communication between applications takes place in high-speed networks, deep packet inspection (DPI) plays an important role in network application awareness. A signature-based network intrusion detection system (NIDS) contains a DPI technique that examines incoming packet payloads by employing a pattern matching algorithm, which dominates the overall inspection performance. Existing studies have focused on implementing efficient pattern matching algorithms through parallel programming on software platforms, involving either the central processing unit (CPU) or the graphics processing unit (GPU), because of the advantages of lower cost and higher scalability. Our work focuses on designing a pattern matching algorithm based on cooperation between the CPU and the GPU. In this paper, we present an enhanced design of our previous work, a length-bounded hybrid CPU/GPU pattern matching algorithm (LHPMA). Preliminary experiments comparing it with the previous work show that the LHPMA achieves not only effective CPU/GPU cooperation but also higher throughput than the previous method. Full article
(This article belongs to the Special Issue Networks, Communication, and Computing)
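The length-bounded dispatch idea behind such a hybrid design can be caricatured in a few lines; the bound, the naive matcher and the packet data are all illustrative, and the actual algorithm batches long payloads for a real GPU kernel:

```python
def dispatch(packets, bound=256):
    """Length-bounded split: payloads shorter than `bound` stay on the
    CPU path, longer ones are batched for the GPU path."""
    cpu = [p for p in packets if len(p) < bound]
    gpu = [p for p in packets if len(p) >= bound]
    return cpu, gpu

def match_any(payload, patterns):
    """Naive CPU-side matcher standing in for the real DPI engine."""
    return [pat for pat in patterns if pat in payload]

packets = [b"GET /index.html HTTP/1.1", b"\x00" * 512 + b"exploit-signature"]
cpu, gpu = dispatch(packets)
hits = match_any(cpu[0], [b"GET", b"exploit-signature"])
```

Routing short payloads to the CPU avoids the transfer overhead that makes GPU offloading unprofitable for small inputs, while long payloads amortize that overhead across more matching work.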
Open AccessArticle A Preconditioned Iterative Method for Solving Systems of Nonlinear Equations Having Unknown Multiplicity
Algorithms 2017, 10(1), 17; doi:10.3390/a10010017
Received: 13 November 2016 / Revised: 24 December 2016 / Accepted: 13 January 2017 / Published: 18 January 2017
PDF Full-text (245 KB) | HTML Full-text | XML Full-text
Abstract
A modification to an existing iterative method for computing zeros with unknown multiplicities of nonlinear equations or systems of nonlinear equations is presented. We introduce preconditioners to nonlinear equations or systems of nonlinear equations and their corresponding Jacobians. The inclusion of preconditioners provides numerical stability and accuracy. Different choices of preconditioner yield a family of iterative methods. We modify an existing method in a way that does not alter its inherent quadratic convergence. Numerical simulations confirm the quadratic convergence of the preconditioned iterative method. The influence of preconditioners is clearly reflected in the numerically achieved accuracy of computed solutions. Full article
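To illustrate why roots of unknown multiplicity need special handling, the following sketch applies Newton's method to u(x) = f(x)/f'(x), whose roots are simple; this restores the quadratic convergence that plain Newton loses at multiple roots. This is the standard textbook device, not the paper's preconditioned method:

```python
def newton_unknown_multiplicity(f, df, d2f, x0, tol=1e-12, itmax=50):
    """Newton's method applied to u(x) = f(x)/f'(x): u has only simple
    roots, so the iteration converges quadratically even when f has a
    root of unknown multiplicity."""
    x = x0
    for _ in range(itmax):
        fx, dfx = f(x), df(x)
        if fx == 0.0 or dfx == 0.0:            # landed exactly on the root
            break
        u = fx / dfx
        step = u / (1.0 - u * d2f(x) / dfx)    # Newton step for u(x)
        x -= step
        if abs(step) < tol:
            break
    return x

# f(x) = (x - 2)^3 has a root of multiplicity 3 at x = 2
f = lambda x: (x - 2.0) ** 3
df = lambda x: 3.0 * (x - 2.0) ** 2
d2f = lambda x: 6.0 * (x - 2.0)
root = newton_unknown_multiplicity(f, df, d2f, x0=3.0)
```

Plain Newton would only converge linearly toward this triple root; the transformed iteration homes in quadratically without knowing the multiplicity in advance.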
Open AccessArticle Fragile Watermarking for Image Authentication Using the Characteristic of SVD
Algorithms 2017, 10(1), 27; doi:10.3390/a10010027
Received: 21 December 2016 / Accepted: 15 February 2017 / Published: 17 February 2017
PDF Full-text (4516 KB) | HTML Full-text | XML Full-text
Abstract
Digital image authentication has become a hot topic in the last few years. In this paper, a pixel-based fragile watermarking method is presented for image tamper identification and localization. By analyzing the left and right singular matrices of SVD, it is found that the matrix product between the first column of the left singular matrix and the transposition of the first column in the right singular matrix is closely related to the image texture features. Based on this characteristic, a binary watermark consisting of image texture information is generated and inserted into the least significant bit (LSB) of the original host image. To improve the security of the presented algorithm, the Arnold transform is applied twice in the watermark embedding process. Experimental results indicate that the proposed watermarking algorithm has high security and perceptual invisibility. Moreover, it can detect and locate the tampered region effectively for various malicious attacks. Full article
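The SVD-based texture feature and LSB embedding described above can be sketched as follows; the block, watermark and helper names are illustrative, and the paper additionally scrambles the watermark with the Arnold transform before embedding:

```python
import numpy as np

def texture_rank1(block):
    """Rank-1 SVD component s1 * u1 * v1^T of an image block; the product
    u1 v1^T is the texture-related quantity the watermark is built from."""
    U, s, Vt = np.linalg.svd(np.asarray(block, dtype=float))
    return s[0] * np.outer(U[:, 0], Vt[0, :])

def embed_lsb(pixels, bits):
    """Write one watermark bit into the least significant bit of each pixel."""
    pixels = np.asarray(pixels, dtype=np.uint8)
    return (pixels & 0xFE) | np.asarray(bits, dtype=np.uint8)

block = np.array([[10, 12], [11, 13]])
approx = texture_rank1(block)                       # texture approximation
wm = np.array([[1, 0], [0, 1]], dtype=np.uint8)     # toy binary watermark
stego = embed_lsb(block, wm)                        # pixels change by at most 1
```

Because only LSBs change, the embedding is perceptually invisible, yet any tampering perturbs the recomputed texture bits and so can be localized per pixel.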
Open AccessArticle Mining Domain-Specific Design Patterns: A Case Study †
Algorithms 2017, 10(1), 28; doi:10.3390/a10010028
Received: 16 November 2016 / Revised: 24 January 2017 / Accepted: 16 February 2017 / Published: 21 February 2017
PDF Full-text (2979 KB) | HTML Full-text | XML Full-text
Abstract
Domain-specific design patterns provide developers with proven solutions to common design problems that arise, particularly in a target application domain, facilitating them to produce quality designs in the domain contexts. However, research in this area is not mature and there are no techniques to support their detection. Towards this end, we propose a methodology which, when applied on a collection of websites in a specific domain, facilitates the automated identification of domain-specific design patterns. The methodology automatically extracts the conceptual models of the websites, which are subsequently analyzed in terms of all of the reusable design fragments used in them for supporting common domain functionalities. At the conceptual level, we consider these fragments as recurrent patterns consisting of a configuration of front-end interface components that interrelate each other and interact with end-users to support certain functionality. By performing a pattern-based analysis of the models, we locate the occurrences of all the recurrent patterns in the various website designs which are then evaluated towards their consistent use. The detected patterns can be used as building blocks in future designs, assisting developers to produce consistent and quality designs in the target domain. To support our case, we present a case study for the educational domain. Full article
(This article belongs to the Special Issue Humanistic Data Processing)
Open AccessArticle Imperialist Competitive Algorithm with Dynamic Parameter Adaptation Using Fuzzy Logic Applied to the Optimization of Mathematical Functions
Algorithms 2017, 10(1), 18; doi:10.3390/a10010018
Received: 28 September 2016 / Revised: 4 January 2017 / Accepted: 16 January 2017 / Published: 23 January 2017
PDF Full-text (4449 KB) | HTML Full-text | XML Full-text
Abstract
In this paper, we present a method using fuzzy logic for dynamic parameter adaptation in the imperialist competitive algorithm, usually known by its acronym ICA. The ICA algorithm was initially studied in its original form to find out how it works and which parameters have the most effect upon its results. Based on this study, several designs of fuzzy systems for dynamic adjustment of the ICA parameters are proposed. The experiments were performed on the basis of solving complex optimization problems, particularly benchmark mathematical functions. A comparison of the original imperialist competitive algorithm and our proposed fuzzy imperialist competitive algorithm was performed. In addition, the fuzzy ICA was compared with another metaheuristic using a statistical test to measure the advantage of the proposed fuzzy approach for dynamic parameter adaptation. Full article
(This article belongs to the Special Issue Metaheuristic Algorithms in Optimization and Applications)
Open AccessArticle Pressure Control for a Hydraulic Cylinder Based on a Self-Tuning PID Controller Optimized by a Hybrid Optimization Algorithm
Algorithms 2017, 10(1), 19; doi:10.3390/a10010019
Received: 24 November 2016 / Revised: 9 January 2017 / Accepted: 18 January 2017 / Published: 23 January 2017
PDF Full-text (2032 KB) | HTML Full-text | XML Full-text
Abstract
In order to improve the performance of the hydraulic support electro-hydraulic control system test platform, a self-tuning proportion integration differentiation (PID) controller is proposed to imitate the actual pressure of the hydraulic support. To avoid premature convergence and to improve the convergence velocity when tuning the PID parameters, the PID controller is optimized with a hybrid optimization algorithm that integrates particle swarm optimization (PSO) and a genetic algorithm (GA). A selection probability and an adaptive crossover probability are introduced into the PSO to enhance the diversity of particles. A proportional relief valve is installed to control the pressure of the pillar cylinder. The control voltage of the proportional relief valve amplifier and the pillar pressure are collected to acquire the system transfer function. Several simulations with different methods are performed on the hydraulic cylinder pressure system. The results demonstrate that the hybrid algorithm for a PID controller has comparatively better global search ability and faster convergence velocity on the pressure control of the hydraulic cylinder. Finally, an experiment is conducted to verify the validity of the proposed method. Full article
(This article belongs to the Special Issue Metaheuristic Algorithms in Optimization and Applications)
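A minimal discrete PID loop on a first-order plant illustrates what the hybrid PSO/GA actually tunes; the gains, plant model and time constants here are illustrative, hand-picked values rather than the paper's:

```python
class PID:
    """Discrete PID: u = Kp*e + Ki*integral(e) + Kd*de/dt. The paper tunes
    (Kp, Ki, Kd) with a hybrid PSO/GA; fixed hand-picked gains are used here."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral, self.prev_error = 0.0, 0.0

    def step(self, error):
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# drive a first-order plant y' = (u - y) / tau toward setpoint 1.0
pid, y, tau, dt = PID(kp=2.0, ki=1.0, kd=0.05, dt=0.01), 0.0, 0.5, 0.01
for _ in range(2000):
    u = pid.step(1.0 - y)
    y += dt * (u - y) / tau
```

A metaheuristic tuner would wrap this simulation in a fitness function (e.g., integrated absolute error) and search the (Kp, Ki, Kd) space for the gains minimizing it.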
Open AccessArticle Stable Analysis of Compressive Principal Component Pursuit
Algorithms 2017, 10(1), 29; doi:10.3390/a10010029
Received: 5 January 2017 / Revised: 10 February 2017 / Accepted: 17 February 2017 / Published: 21 February 2017
PDF Full-text (256 KB) | HTML Full-text | XML Full-text
Abstract
Compressive principal component pursuit (CPCP) recovers a target matrix that is a superposition of low-complexity structures from a small set of linear measurements. Previous works mainly focus on the analysis of existence and uniqueness. In this paper, we address its stability. We prove that the solution to the related convex program of CPCP gives an estimate that is stable to small entry-wise noise. We also provide numerical simulation results to support our result. Numerical results show that the solution to the related convex program is stable to small entry-wise noise under broad conditions. Full article
Review

Jump to: Editorial, Research

Open AccessReview Optimization-Based Approaches to Control of Probabilistic Boolean Networks
Algorithms 2017, 10(1), 31; doi:10.3390/a10010031
Received: 30 September 2016 / Revised: 17 February 2017 / Accepted: 20 February 2017 / Published: 22 February 2017
PDF Full-text (219 KB) | HTML Full-text | XML Full-text
Abstract
Control of gene regulatory networks is one of the fundamental topics in systems biology. In the last decade, control theory of Boolean networks (BNs), which is well known as a model of gene regulatory networks, has been widely studied. In this review paper, our previously proposed methods on optimal control of probabilistic Boolean networks (PBNs) are introduced. First, the outline of PBNs is explained. Next, an optimal control method using polynomial optimization is explained. The finite-time optimal control problem is reduced to a polynomial optimization problem. Furthermore, another finite-time optimal control problem, which can be reduced to an integer programming problem, is also explained. Full article
(This article belongs to the Special Issue Biological Networks)