Algorithms, Volume 6, Issue 1 (March 2013) – 11 articles, Pages 1-196

  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the table of contents of newly released issues.
  • PDF is the official format for papers published in both HTML and PDF forms. To view a paper in PDF format, click on the "PDF Full-text" link and open it with the free Adobe Reader.
Article
An Open-Source Implementation of the Critical-Line Algorithm for Portfolio Optimization
by David H. Bailey and Marcos López de Prado
Algorithms 2013, 6(1), 169-196; https://doi.org/10.3390/a6010169 - 22 Mar 2013
Abstract
Portfolio optimization is one of the problems most frequently encountered by financial practitioners. The main goal of this paper is to fill a gap in the literature by providing a well-documented, step-by-step open-source implementation of the Critical Line Algorithm (CLA) in a scientific language. The code is implemented as a Python class object, which allows it to be imported like any other Python module and integrated seamlessly with pre-existing code. We discuss the logic behind CLA following the algorithm’s decision flow. In addition, we developed several utilities that support finding answers to recurrent practical problems. We believe this publication will offer a better alternative to financial practitioners, many of whom currently rely on general-purpose optimizers that often deliver suboptimal solutions. The source code discussed in this paper can be downloaded at the authors’ websites (see Appendix). Full article
(This article belongs to the Special Issue Algorithms and Financial Optimization)
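As a rough illustration of the "importable Python class" pattern the abstract describes, here is a minimal sketch. It is not the authors' CLA (class and method names are hypothetical): it exposes only the closed-form minimum-variance portfolio, whereas the real CLA traces the full constrained efficient frontier.

```python
# cla_sketch.py -- illustrative only, NOT the authors' CLA implementation.
import numpy as np

class PortfolioOptimizer:
    def __init__(self, mean, cov):
        self.mean = np.asarray(mean, dtype=float)  # expected asset returns
        self.cov = np.asarray(cov, dtype=float)    # return covariance matrix

    def min_variance_weights(self):
        # Closed form: w = C^{-1} 1 / (1' C^{-1} 1), ignoring CLA's
        # inequality constraints and turning-point logic.
        ones = np.ones(len(self.mean))
        w = np.linalg.solve(self.cov, ones)
        return w / w.sum()

# Usage: imported like any other module and integrated with existing code.
if __name__ == "__main__":
    opt = PortfolioOptimizer([0.05, 0.07], [[0.04, 0.01], [0.01, 0.09]])
    print(opt.min_variance_weights())
```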
Article
Stable Multicommodity Flows
by Tamás Király and Júlia Pap
Algorithms 2013, 6(1), 161-168; https://doi.org/10.3390/a6010161 - 18 Mar 2013
Abstract
We extend the stable flow model of Fleiner to multicommodity flows. In addition to the preference lists of agents on trading partners for each commodity, every trading pair has a preference list on the commodities that the seller can sell to the buyer. A blocking walk (with respect to a certain commodity) may include saturated arcs, provided that a positive amount of less preferred commodity is traded along the arc. We prove that a stable multicommodity flow always exists, although it is PPAD-hard to find one. Full article
(This article belongs to the Special Issue Special Issue on Matching under Preferences)
Review
Algorithms for Non-Negatively Constrained Maximum Penalized Likelihood Reconstruction in Tomographic Imaging
by Jun Ma
Algorithms 2013, 6(1), 136-160; https://doi.org/10.3390/a6010136 - 12 Mar 2013
Abstract
Image reconstruction is a key component in many medical imaging modalities. The problem of image reconstruction can be viewed as a special inverse problem where the unknown image pixel intensities are estimated from the observed measurements. Since the measurements are usually noise contaminated, statistical reconstruction methods are preferred. In this paper we review some non-negatively constrained simultaneous iterative algorithms for maximum penalized likelihood reconstructions, where all measurements are used to estimate all pixel intensities in each iteration. Full article
(This article belongs to the Special Issue Machine Learning for Medical Imaging)
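For context, here is a minimal sketch of the classic (unpenalized) ML-EM update, the best-known member of the family of simultaneous iterative schemes such reviews cover. The multiplicative form keeps the iterates non-negative automatically; the penalty term of the MPL variants is omitted.

```python
# Minimal (unpenalized) ML-EM sketch; the MPL penalty term is omitted.
import numpy as np

def mlem(A, y, n_iter=50):
    """A: (n_meas, n_pix) non-negative system matrix; y: measured counts."""
    x = np.ones(A.shape[1])                # non-negative starting image
    sens = A.sum(axis=0)                   # sensitivity image A^T 1
    for _ in range(n_iter):
        proj = A @ x                       # forward projection
        ratio = y / np.maximum(proj, 1e-12)          # guard divide-by-zero
        x *= (A.T @ ratio) / np.maximum(sens, 1e-12) # multiplicative update
    return x

# Tiny synthetic check: recover a 3-pixel "image" from noiseless measurements.
A = np.array([[1., 0., 1.], [0., 1., 1.], [1., 1., 0.], [1., 1., 1.]])
x_true = np.array([2., 1., 3.])
print(mlem(A, A @ x_true, n_iter=200))     # approaches x_true
```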
Article
A Polynomial-Time Algorithm for Computing the Maximum Common Connected Edge Subgraph of Outerplanar Graphs of Bounded Degree
by Tatsuya Akutsu and Takeyuki Tamura
Algorithms 2013, 6(1), 119-135; https://doi.org/10.3390/a6010119 - 18 Feb 2013
Abstract
The maximum common connected edge subgraph problem is to find a connected graph with the maximum number of edges that is isomorphic to a subgraph of each of two input graphs; it has applications in pattern recognition and chemistry. This paper presents a dynamic programming algorithm for the problem when the two input graphs are outerplanar graphs of bounded vertex degree, a case of interest since the problem is known to be NP-hard even for outerplanar graphs of unbounded degree. Although the algorithm repeatedly modifies the input graphs, we show that the number of relevant subproblems is polynomially bounded, and thus the algorithm runs in polynomial time. Full article
(This article belongs to the Special Issue Graph Algorithms)
Article
Computing the Eccentricity Distribution of Large Graphs
by Frank W. Takes and Walter A. Kosters
Algorithms 2013, 6(1), 100-118; https://doi.org/10.3390/a6010100 - 18 Feb 2013
Abstract
The eccentricity of a node in a graph is defined as the length of a longest shortest path starting at that node. The eccentricity distribution over all nodes is a relevant descriptive property of the graph, and its extreme values allow the derivation of measures such as the radius, diameter, center and periphery of the graph. This paper describes two new methods for computing the eccentricity distribution of large graphs such as social networks, web graphs, biological networks and routing networks. We first propose an exact algorithm based on eccentricity lower and upper bounds, which achieves significant speedups compared to the straightforward algorithm when computing both the extreme values of the distribution and the eccentricity distribution as a whole. The second algorithm that we describe is a hybrid strategy that combines the exact approach with an efficient sampling technique in order to obtain an even larger speedup on the computation of the entire eccentricity distribution. We perform an extensive set of experiments on a number of large graphs in order to measure and compare the performance of our algorithms, and demonstrate how we can efficiently compute the eccentricity distribution of various large real-world graphs. Full article
(This article belongs to the Special Issue Graph Algorithms)
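A minimal sketch of the bounding idea behind such an exact algorithm, assuming a connected undirected graph given as an adjacency dict: one BFS from a node v yields ecc(v) exactly and, by the triangle inequality, confines every other eccentricity to [max(d(v,u), ecc(v) - d(v,u)), ecc(v) + d(v,u)]; nodes whose bounds meet need no BFS of their own. The node-selection rule below is a simple stand-in, not the one from the paper.

```python
from collections import deque

def bfs_dist(adj, src):
    dist = {src: 0}
    q = deque([src])
    while q:
        u = q.popleft()
        for w in adj[u]:
            if w not in dist:
                dist[w] = dist[u] + 1
                q.append(w)
    return dist

def eccentricities(adj):  # adj: {node: neighbours}, connected graph
    lo = {u: 0 for u in adj}
    hi = {u: float("inf") for u in adj}
    ecc = {}
    candidates = set(adj)
    while candidates:
        v = max(candidates, key=lambda u: hi[u] - lo[u])  # stand-in rule
        dist = bfs_dist(adj, v)
        ecc[v] = max(dist.values())
        candidates.discard(v)
        for u in list(candidates):
            lo[u] = max(lo[u], dist[u], ecc[v] - dist[u])
            hi[u] = min(hi[u], ecc[v] + dist[u])
            if lo[u] == hi[u]:           # bounds met: eccentricity known
                ecc[u] = lo[u]
                candidates.discard(u)
    return ecc

print(eccentricities({0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}))
# path graph: eccentricities 3, 2, 2, 3
```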
Article
Dubins Traveling Salesman Problem with Neighborhoods: A Graph-Based Approach
by Jason T. Isaacs and João P. Hespanha
Algorithms 2013, 6(1), 84-99; https://doi.org/10.3390/a6010084 - 04 Feb 2013
Abstract
We study the problem of finding the minimum-length curvature-constrained closed path through a set of regions in the plane. This problem is referred to as the Dubins Traveling Salesperson Problem with Neighborhoods (DTSPN). An algorithm is presented that uses sampling to cast this infinite-dimensional combinatorial optimization problem as a Generalized Traveling Salesperson Problem (GTSP) with intersecting node sets. The GTSP is then converted to an Asymmetric Traveling Salesperson Problem (ATSP) through a series of graph transformations, thus allowing the use of existing approximation algorithms. This algorithm is shown to perform no worse than the best existing DTSPN algorithm and to perform significantly better when the regions overlap. We report on the application of this algorithm to route an Unmanned Aerial Vehicle (UAV) equipped with a radio to collect data from sparsely deployed ground sensors in a field demonstration of autonomous detection, localization, and verification of multiple acoustic events. Full article
(This article belongs to the Special Issue Graph Algorithms)
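A minimal sketch of the sampling step described above, with hypothetical helper names: each disk region is discretized into candidate poses (x, y, heading), and the poses of one region form one GTSP node set. The brute-force solver and the Euclidean stand-in for the Dubins path length are placeholders for the paper's graph transformations and true Dubins distances.

```python
import math, itertools

def sample_poses(cx, cy, r, n_pos=4, n_head=4):
    """Discretize a disk region into boundary poses (x, y, heading)."""
    return [(cx + r * math.cos(a), cy + r * math.sin(a), h * 2 * math.pi / n_head)
            for a in [i * 2 * math.pi / n_pos for i in range(n_pos)]
            for h in range(n_head)]

def dubins_length(p, q):
    # PLACEHOLDER: Euclidean lower bound; a real implementation would use the
    # true Dubins path length for the vehicle's minimum turning radius.
    return math.hypot(q[0] - p[0], q[1] - p[1])

def brute_force_gtsp(node_sets):
    """Exact only for tiny instances: one pose per region, all visit orders."""
    best, best_len = None, float("inf")
    for choice in itertools.product(*node_sets):
        for order in itertools.permutations(range(len(choice))):
            tour = [choice[i] for i in order]
            length = sum(dubins_length(tour[i], tour[(i + 1) % len(tour)])
                         for i in range(len(tour)))
            if length < best_len:
                best, best_len = tour, length
    return best, best_len

regions = [(0, 0, 1), (5, 0, 1), (2, 4, 1)]    # (cx, cy, radius) disks
sets = [sample_poses(*reg, n_pos=3, n_head=1) for reg in regions]
print(brute_force_gtsp(sets)[1])
```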
Article
Tractabilities and Intractabilities on Geometric Intersection Graphs
by Ryuhei Uehara
Algorithms 2013, 6(1), 60-83; https://doi.org/10.3390/a6010060 - 25 Jan 2013
Abstract
A graph is said to be an intersection graph if there is a set of objects such that each vertex corresponds to an object and two vertices are adjacent if and only if the corresponding objects have a nonempty intersection. There are several natural graph classes that have geometric intersection representations. The geometric representations sometimes help to prove tractability/intractability of problems on graph classes. In this paper, we show some results proved by using geometric representations. Full article
(This article belongs to the Special Issue Graph Algorithms)
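A minimal sketch instantiating the definition above for intervals on a line (yielding an interval graph); the helper names are illustrative only.

```python
def intersection_graph(objects, intersects):
    """Vertices are objects; an edge joins i, j iff their objects intersect."""
    n = len(objects)
    return {i: [j for j in range(n)
                if j != i and intersects(objects[i], objects[j])]
            for i in range(n)}

def intervals_overlap(a, b):
    return a[0] <= b[1] and b[0] <= a[1]   # closed intervals share a point

intervals = [(0, 2), (1, 4), (3, 5), (6, 7)]
print(intersection_graph(intervals, intervals_overlap))
# {0: [1], 1: [0, 2], 2: [1], 3: []}
```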
Article
Computational Study on a PTAS for Planar Dominating Set Problem
by Marjan Marzban and Qian-Ping Gu
Algorithms 2013, 6(1), 43-59; https://doi.org/10.3390/a6010043 - 21 Jan 2013
Abstract
The dominating set problem is a core NP-hard problem in combinatorial optimization and graph theory, and has many important applications. Baker [JACM 41, 1994] introduced a k-outerplanar graph decomposition-based framework for designing polynomial time approximation schemes (PTAS) for a class of NP-hard problems in planar graphs. It is mentioned that the framework can be applied to obtain an O(2^{ck} n)-time (1 + 1/k)-approximation algorithm, where c is a constant, for the planar dominating set problem. We show that the approximation ratio achieved by this application of the framework is not bounded by any constant for the planar dominating set problem. We modify the application of the framework to give a PTAS for the planar dominating set problem. With k-outerplanar graph decompositions, the modified PTAS has an approximation ratio of (1 + 2/k). Using 2k-outerplanar graph decompositions, it achieves the approximation ratio (1 + 1/k) in O(2^{2ck} n) time. We report a computational study on the modified PTAS. Our results show that the modified PTAS is practical. Full article
(This article belongs to the Special Issue Graph Algorithms)
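A minimal sketch of the layer-decomposition step underlying Baker's framework, assuming a connected graph as an adjacency dict: BFS levels partition the graph into "rings", and for each of k shifts the levels congruent to the shift (mod k) are removed, leaving k-outerplanar pieces. Only the decomposition is shown; the exact per-piece dominating-set solver that the PTAS requires is omitted.

```python
from collections import deque

def bfs_levels(adj, root):
    """BFS levels ("rings") of a connected graph from a root node."""
    level = {root: 0}
    q = deque([root])
    while q:
        u = q.popleft()
        for w in adj[u]:
            if w not in level:
                level[w] = level[u] + 1
                q.append(w)
    return level

def shifted_decompositions(adj, root, k):
    """For each shift, drop every level congruent to the shift mod k."""
    level = bfs_levels(adj, root)
    for shift in range(k):
        kept = [u for u in adj if level[u] % k != shift]
        yield shift, kept

adj = {0: [1, 2], 1: [0, 3], 2: [0, 3], 3: [1, 2, 4], 4: [3]}
for shift, kept in shifted_decompositions(adj, 0, 2):
    print(shift, kept)
```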
Article
Energy Efficient Routing in Wireless Sensor Networks Through Balanced Clustering
by Stefanos A. Nikolidakis, Dionisis Kandris, Dimitrios D. Vergados and Christos Douligeris
Algorithms 2013, 6(1), 29-42; https://doi.org/10.3390/a6010029 - 18 Jan 2013
Abstract
The wide utilization of Wireless Sensor Networks (WSNs) is obstructed by the severely limited energy constraints of the individual sensor nodes. This is the reason why a large part of the research in WSNs focuses on the development of energy efficient routing protocols. In this paper, a new protocol called Equalized Cluster Head Election Routing Protocol (ECHERP), which pursues energy conservation through balanced clustering, is proposed. ECHERP models the network as a linear system and, using the Gaussian elimination algorithm, calculates the combinations of nodes that can be chosen as cluster heads in order to extend the network lifetime. The performance evaluation of ECHERP is carried out through simulation tests, which evince the effectiveness of this protocol in terms of network energy efficiency when compared against other well-known protocols. Full article
(This article belongs to the Special Issue Sensor Network)
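Since the abstract's key computational step is Gaussian elimination on a linear system, here is a minimal self-contained sketch of that solver with partial pivoting. The system shown is a hypothetical stand-in, not ECHERP's actual energy-balance equations.

```python
import numpy as np

def gaussian_solve(A, b):
    """Solve Ax = b by Gaussian elimination with partial pivoting."""
    A = A.astype(float).copy()
    b = b.astype(float).copy()
    n = len(b)
    for col in range(n):                                  # forward elimination
        p = col + int(np.argmax(np.abs(A[col:, col])))    # partial pivot row
        A[[col, p]], b[[col, p]] = A[[p, col]], b[[p, col]]
        for row in range(col + 1, n):
            f = A[row, col] / A[col, col]
            A[row, col:] -= f * A[col, col:]
            b[row] -= f * b[col]
    x = np.zeros(n)                                       # back substitution
    for row in range(n - 1, -1, -1):
        x[row] = (b[row] - A[row, row + 1:] @ x[row + 1:]) / A[row, row]
    return x

A = np.array([[2., 1., 0.], [1., 3., 1.], [0., 1., 2.]])  # stand-in system
b = np.array([3., 5., 3.])
print(gaussian_solve(A, b), np.linalg.solve(A, b))        # should agree
```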
Article
ℓ1 Major Component Detection and Analysis (ℓ1 MCDA): Foundations in Two Dimensions
by Ye Tian, Qingwei Jin, John E. Lavery and Shu-Cherng Fang
Algorithms 2013, 6(1), 12-28; https://doi.org/10.3390/a6010012 - 17 Jan 2013
Abstract
Principal Component Analysis (PCA) is widely used for identifying the major components of statistically distributed point clouds. Robust versions of PCA, often based in part on the ℓ1 norm (rather than the ℓ2 norm), are increasingly used, especially for point clouds with many outliers. Neither standard PCA nor robust PCAs can provide, without additional assumptions, reliable information for outlier-rich point clouds and for distributions with several main directions (spokes). We carry out a fundamental and complete reformulation of the PCA approach in a framework based exclusively on the ℓ1 norm and heavy-tailed distributions. The ℓ1 Major Component Detection and Analysis (ℓ1 MCDA) that we propose can determine the main directions and the radial extent of 2D data from single or multiple superimposed Gaussian or heavy-tailed distributions without and with patterned artificial outliers (clutter). In nearly all cases in the computational results, 2D ℓ1 MCDA has accuracy superior to that of standard PCA and of two robust PCAs, namely, the projection-pursuit method of Croux and Ruiz-Gazen and the ℓ1 factorization method of Ke and Kanade. (Standard PCA is, of course, superior to ℓ1 MCDA for Gaussian-distributed point clouds.) The computing time of ℓ1 MCDA is competitive with the computing times of the two robust PCAs. Full article
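To illustrate the ℓ1-versus-ℓ2 contrast at the heart of the abstract, here is a small 2D experiment. It is not the authors' ℓ1 MCDA: it merely scans candidate directions and minimizes the sum of absolute orthogonal deviations, which, like ℓ1 MCDA, is far less sensitive to outliers than standard PCA.

```python
import numpy as np

rng = np.random.default_rng(0)
t = rng.normal(0, 3, 200)
pts = np.column_stack([t, 0.5 * t]) + rng.normal(0, 0.1, (200, 2))
pts = np.vstack([pts, [[10, -40], [12, -35]]])   # two gross outliers
pts -= pts.mean(axis=0)

# l2: leading eigenvector of the covariance matrix (standard PCA).
w, v = np.linalg.eigh(np.cov(pts.T))
pca_dir = v[:, -1]

# l1-type: direction minimizing the sum of |orthogonal distance| over a grid
# of angles (feasible in 2D, where directions form a one-parameter family).
angles = np.linspace(0, np.pi, 1800, endpoint=False)
normals = np.column_stack([-np.sin(angles), np.cos(angles)])
costs = np.abs(pts @ normals.T).sum(axis=0)      # one column per angle
best = angles[costs.argmin()]
l1_dir = np.array([np.cos(best), np.sin(best)])

print("PCA direction:", pca_dir, " l1-type direction:", l1_dir)
# The outliers noticeably tilt the PCA direction, while the l1-type
# direction stays close to the true slope of 0.5.
```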
Article
Maximum Disjoint Paths on Edge-Colored Graphs: Approximability and Tractability
by Paola Bonizzoni, Riccardo Dondi and Yuri Pirola
Algorithms 2013, 6(1), 1-11; https://doi.org/10.3390/a6010001 - 27 Dec 2012
Abstract
The problem of finding the maximum number of vertex-disjoint uni-color paths in an edge-colored graph has recently been introduced in the literature, motivated by applications in social network analysis. In this paper we investigate the approximation and parameterized complexity of the problem. First, we show that, for any constant ε > 0, the problem is not approximable within factor c^{1-ε}, where c is the number of colors, and that the corresponding decision problem is W[1]-hard when parameterized by the number of disjoint paths. Then, we present a fixed-parameter algorithm for the problem parameterized by the number and the length of the disjoint paths. Full article
(This article belongs to the Special Issue Graph Algorithms)