
Algorithms, Volume 5, Issue 4 (December 2012) – 15 articles, Pages 398-667

  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the tables of contents of newly released issues.
  • PDF is the official format for papers published in both HTML and PDF forms. To view a paper in PDF format, click on the "PDF Full-text" link and use the free Adobe Reader to open it.
Article
Extracting Co-Occurrence Relations from ZDDs
by Takahisa Toda
Algorithms 2012, 5(4), 654-667; https://doi.org/10.3390/a5040654 - 13 Dec 2012
Cited by 2 | Viewed by 8587
Abstract
A zero-suppressed binary decision diagram (ZDD) is a graph representation suitable for handling sparse set families. Given a ZDD representing a set family, we present an efficient algorithm to discover a hidden structure, called a co-occurrence relation, on the ground set. This computation runs in time related not to the number of sets in the family but to certain feature values of the ZDD. We furthermore introduce a conditional co-occurrence relation and present an extraction algorithm, which enables us to discover further structural information.
(This article belongs to the Special Issue Graph Algorithms)
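To make the objects concrete, here is a minimal Python sketch of a ZDD over a small ground set together with a naive co-occurrence check that simply enumerates the encoded family; the `Node` layout and helper names are illustrative, and the paper's point is precisely that the relation can be extracted from the diagram itself without this enumeration.

```python
# A ZDD node is (variable, lo-child, hi-child); "0"/"1" are terminal sinks.
# Naive reference: enumerate the encoded family, then test which pairs of
# ground-set items always occur together (a co-occurrence relation).
from itertools import combinations

class Node:
    def __init__(self, var, lo, hi):
        self.var, self.lo, self.hi = var, lo, hi

ZERO, ONE = "0", "1"  # terminal nodes

def enumerate_family(node, prefix=frozenset()):
    """Yield every set encoded by the ZDD rooted at `node`."""
    if node == ZERO:
        return
    if node == ONE:
        yield prefix
        return
    yield from enumerate_family(node.lo, prefix)                # var excluded
    yield from enumerate_family(node.hi, prefix | {node.var})   # var included

def co_occurring_pairs(root, ground_set):
    """Pairs (u, v) such that every set containing one also contains the other."""
    family = list(enumerate_family(root))
    return [(u, v) for u, v in combinations(sorted(ground_set), 2)
            if all((u in s) == (v in s) for s in family)]

# Family {{a, b}, {a, b, c}}: a and b always co-occur.
root = Node("a", ZERO, Node("b", ZERO, Node("c", ONE, ONE)))
print(co_occurring_pairs(root, {"a", "b", "c"}))  # [('a', 'b')]
```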

Article
Edge Detection from MRI and DTI Images with an Anisotropic Vector Field Flow Using a Divergence Map
by Donatella Giuliani
Algorithms 2012, 5(4), 636-653; https://doi.org/10.3390/a5040636 - 13 Dec 2012
Cited by 3 | Viewed by 7788
Abstract
The aim of this work is the extraction of edges from Magnetic Resonance Imaging (MRI) and Diffusion Tensor Imaging (DTI) images by a deformable contour procedure, using an external force field derived from an anisotropic flow. Moreover, we introduce a divergence map in order to check the convergence of the process. As we know from vector calculus, divergence measures the magnitude of a vector field's convergence at a given point. Thus, by means of level curves of the divergence map, we automatically select an initial contour for the deformation process. If the initial curve encloses the areas from which the vector field diverges, the field is able to push the curve towards the edges. Furthermore, the divergence map highlights curves pointing to the most significant geometric parts of boundaries, corresponding to high curvature values. In this way, the skeleton of the extracted object is rather well defined and may subsequently be employed in shape analysis and morphological studies.
(This article belongs to the Special Issue Machine Learning for Medical Imaging)
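As a rough illustration of the divergence-map idea only (not the paper's anisotropic flow or contour evolution), the NumPy sketch below computes the divergence of a sampled 2-D force field and thresholds it as a crude stand-in for selecting an initial region from a level curve; all names are hypothetical.

```python
# div F = d(fx)/dx + d(fy)/dy for a force field sampled on a regular grid.
import numpy as np

def divergence_map(fx, fy, dx=1.0, dy=1.0):
    """Divergence of the vector field (fx, fy) on a regular grid."""
    dfy_dy = np.gradient(fy, dy, axis=0)   # rows vary along y
    dfx_dx = np.gradient(fx, dx, axis=1)   # columns vary along x
    return dfx_dx + dfy_dy

# Toy field diverging from the image centre: F = (x, y), so div F = 2.
y, x = np.mgrid[-1:1:128j, -1:1:128j]
div = divergence_map(x, y, dx=2 / 127, dy=2 / 127)
seed_region = div > 0.5 * div.max()        # crude stand-in for a level curve
print(div.mean())                          # ~2.0
```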

Article
Testing Goodness of Fit of Random Graph Models
by Villő Csiszár, Péter Hussami, János Komlós, Tamás F. Móri, Lídia Rejtő and Gábor Tusnády
Algorithms 2012, 5(4), 629-635; https://doi.org/10.3390/a5040629 - 6 Dec 2012
Cited by 4 | Viewed by 6364
Abstract
Random graphs are matrices with independent 0–1 elements whose probabilities are determined by a small number of parameters. One of the oldest models is the Rasch model, where the odds are ratios of positive numbers scaling the rows and columns. Later, Persi Diaconis and his coworkers rediscovered the model for symmetric matrices and called it the beta model. Here we give goodness-of-fit tests for the model and extend it to a version of the block model introduced by Holland, Laskey and Leinhardt.
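A minimal sketch of sampling from the symmetric beta model as described here, with edge odds beta[i]*beta[j]; a goodness-of-fit test would compare statistics of an observed graph against such samples. The function name is illustrative.

```python
# P(edge {i,j}) = b_i b_j / (1 + b_i b_j), entries independent above the diagonal.
import numpy as np

rng = np.random.default_rng(0)

def sample_beta_model(beta):
    """Symmetric 0-1 adjacency matrix from the beta model."""
    n = len(beta)
    odds = np.outer(beta, beta)
    p = odds / (1.0 + odds)
    upper = np.triu(rng.random((n, n)) < p, k=1)
    return (upper | upper.T).astype(int)

beta = rng.uniform(0.2, 2.0, size=6)
A = sample_beta_model(beta)
print(A.sum(axis=1))   # degree sequence, the sufficient statistic here
```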
Article
Laplace–Fourier Transform of the Stretched Exponential Function: Analytic Error Bounds, Double Exponential Transform, and Open-Source Implementation “libkww”
by Joachim Wuttke
Algorithms 2012, 5(4), 604-628; https://doi.org/10.3390/a5040604 - 22 Nov 2012
Cited by 30 | Viewed by 11817
Abstract
The C library libkww provides functions to compute the Kohlrausch–Williams–Watts function, i.e., the Laplace–Fourier transform of the stretched (or compressed) exponential function exp(−t^β) for exponents β between 0.1 and 1.9, with double precision. Analytic error bounds are derived for the low- and high-frequency series expansions. For intermediate frequencies, the numeric integration is enormously accelerated by the Ooura–Mori double exponential transformation. The primitive of the cosine transform needed for convolution integrals is also implemented. The software is hosted at http://apps.jcns.fz-juelich.de/kww; version 3.0 is deposited as supplementary material to this article.
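For orientation, this naive SciPy reference evaluates the quantity libkww computes, the cosine transform of the stretched exponential, F(w) = ∫₀^∞ exp(−t^β) cos(wt) dt (up to normalization conventions); it is far slower and less controlled than the library itself.

```python
# Naive reference via QAWF (Fourier-weighted quadrature), not libkww.
import numpy as np
from scipy.integrate import quad

def kww_cosine_transform(w, beta):
    """F(w) = integral_0^inf exp(-t**beta) * cos(w*t) dt."""
    val, _err = quad(lambda t: np.exp(-t**beta), 0, np.inf,
                     weight='cos', wvar=w)
    return val

# For beta = 1 the transform is the Lorentzian 1/(1 + w**2):
print(kww_cosine_transform(1.0, 1.0))  # ~0.5
```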

Article
An Efficient Algorithm for Automatic Peak Detection in Noisy Periodic and Quasi-Periodic Signals
by Felix Scholkmann, Jens Boss and Martin Wolf
Algorithms 2012, 5(4), 588-603; https://doi.org/10.3390/a5040588 - 21 Nov 2012
Cited by 330 | Viewed by 46177
Abstract
We present a new method for automatic detection of peaks in noisy periodic and quasi-periodic signals. The new method, called automatic multiscale-based peak detection (AMPD), is based on the calculation and analysis of the local maxima scalogram, a matrix comprising the scale-dependent occurrences of local maxima. The usefulness of the proposed method is shown by applying the AMPD algorithm to simulated and real-world signals.
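A simplified, deterministic sketch of the AMPD idea (the published algorithm randomizes the scalogram entries and detrends the signal first): build a boolean local-maxima scalogram over window scales, pick the scale with the most maxima, and keep the indices that are maxima at every smaller scale.

```python
import numpy as np

def ampd(x):
    x = np.asarray(x, dtype=float)
    n = len(x)
    scales = np.arange(1, n // 2)
    # lsm[k-1, i] is True when x[i] exceeds both neighbours at distance k
    lsm = np.zeros((len(scales), n), dtype=bool)
    for row, k in enumerate(scales):
        lsm[row, k:n - k] = (x[k:n - k] > x[:n - 2 * k]) & (x[k:n - k] > x[2 * k:])
    gamma = lsm.sum(axis=1)           # number of local maxima per scale
    lam = int(np.argmax(gamma)) + 1   # dominant scale
    return np.flatnonzero(lsm[:lam].all(axis=0))

t = np.linspace(0, 6 * np.pi, 200)
x = np.sin(t) + 0.05 * np.random.default_rng(1).standard_normal(len(t))
print(ampd(x))   # indices clustered near the three sine crests
```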

Article
Exact Algorithms for Maximum Clique: A Computational Study
by Patrick Prosser
Algorithms 2012, 5(4), 545-587; https://doi.org/10.3390/a5040545 - 19 Nov 2012
Cited by 66 | Viewed by 14488
Abstract
We investigate a number of recently reported exact algorithms for the maximum clique problem. The program code is presented and analyzed to show how small changes in implementation can have a drastic effect on performance. The computational study demonstrates how problem features and hardware platforms influence algorithm behaviour. The effect of vertex ordering is investigated. One of the algorithms (MCS) is broken into its constituent parts and we discover that one of these parts frequently degrades performance. It is shown that the standard procedure used for rescaling published results (i.e., adjusting run times based on the calibration of a standard program over a set of benchmarks) is unsafe and can lead to incorrect conclusions being drawn from empirical data.
(This article belongs to the Special Issue Graph Algorithms)
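For context, here is the basic branch-and-bound skeleton that the studied algorithms refine with colour-based bounds and vertex orderings; this plain version is only a sketch, not any of the paper's implementations.

```python
def max_clique(adj):
    """adj maps each vertex to the set of its neighbours (symmetric)."""
    best = []

    def expand(clique, candidates):
        nonlocal best
        if len(clique) > len(best):
            best = clique[:]
        for v in list(candidates):
            if len(clique) + len(candidates) <= len(best):
                return  # bound: even taking every candidate cannot win
            candidates.remove(v)
            expand(clique + [v], candidates & adj[v])

    expand([], set(adj))
    return best

adj = {1: {2, 3, 4}, 2: {1, 3}, 3: {1, 2, 4}, 4: {1, 3, 5}, 5: {4}}
print(max_clique(adj))  # [1, 2, 3]
```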

Article
Finite Element Quadrature of Regularized Discontinuous and Singular Level Set Functions in 3D Problems
by Elena Benvenuti, Giulio Ventura and Nicola Ponara
Algorithms 2012, 5(4), 529-544; https://doi.org/10.3390/a5040529 - 7 Nov 2012
Cited by 9 | Viewed by 7357
Abstract
Regularized Heaviside and Dirac delta functions are used in several fields of computational physics and mechanics, so the issue of the quadrature of integrals of discontinuous and singular functions arises. In order to avoid ad hoc quadrature procedures, regularization of the discontinuous and singular fields is often carried out, in particular by exploiting weight functions of the signed distance with respect to the discontinuity interface. Tornberg and Engquist (Journal of Scientific Computing, 2003, 19: 527–552) proved that compact support weight functions are not suitable because they lead to errors that do not vanish for decreasing mesh size; they proposed the adoption of non-compact support weight functions. In the present contribution, the relationship between the Fourier transform of the weight functions and the accuracy of the regularization procedure is exploited. The proposed regularized approach was implemented in the eXtended Finite Element Method. As a three-dimensional example, we study a slender solid characterized by an inclined interface across which the displacement is discontinuous. The accuracy is evaluated for varying positions of the discontinuity interface with respect to the underlying mesh. A procedure for the choice of the regularization parameters is proposed.
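A hedged illustration of the regularization idea, not the paper's XFEM implementation: a Gaussian-based, non-compact-support Heaviside/Dirac pair in the signed distance, integrated with ordinary Gauss–Legendre quadrature.

```python
import numpy as np
from scipy.special import erf

def heaviside_reg(x, rho):
    """Smooth Heaviside of the signed distance x (Gaussian CDF, width rho)."""
    return 0.5 * (1.0 + erf(x / rho))

def dirac_reg(x, rho):
    """Matching smooth Dirac delta (derivative of heaviside_reg)."""
    return np.exp(-(x / rho) ** 2) / (rho * np.sqrt(np.pi))

# Gauss-Legendre quadrature of H(x - 0.3) on [0, 1]; exact value 0.7.
nodes, weights = np.polynomial.legendre.leggauss(128)
x = 0.5 * (nodes + 1.0)   # map [-1, 1] to [0, 1]
w = 0.5 * weights
for rho in (0.1, 0.05, 0.02):
    print(rho, np.sum(w * heaviside_reg(x - 0.3, rho)))   # -> 0.7
print(np.sum(w * dirac_reg(x - 0.3, 0.05)))               # -> 1.0
```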

Article
Alpha-Beta Pruning and Althöfer’s Pathology-Free Negamax Algorithm
by Ashraf M. Abdelbar
Algorithms 2012, 5(4), 521-528; https://doi.org/10.3390/a5040521 - 5 Nov 2012
Cited by 4 | Viewed by 10977
Abstract
The minimax algorithm, also called the negamax algorithm, remains today the most widely used search technique for two-player perfect-information games. However, minimaxing has been shown to be susceptible to game tree pathology, a paradoxical situation in which the accuracy of the search can decrease as the height of the tree increases. Althöfer's alternative minimax algorithm has been proven to be invulnerable to pathology. However, it has not been clear whether alpha-beta pruning, a crucial component of practical game programs, could be applied in the context of Althöfer's algorithm. In this brief paper, we show how alpha-beta pruning can be adapted to Althöfer's algorithm.
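For background, this is standard alpha-beta pruning in negamax form on a nested-list game tree; the paper's actual contribution, adapting such pruning to Althöfer's alternative minimax, is not reproduced here.

```python
import math

def negamax(node, alpha=-math.inf, beta=math.inf):
    """Alpha-beta negamax on a nested-list tree; a leaf is a number
    scored from the point of view of the player to move at that leaf."""
    if not isinstance(node, list):
        return node
    value = -math.inf
    for child in node:
        value = max(value, -negamax(child, -beta, -alpha))
        alpha = max(alpha, value)
        if alpha >= beta:
            break  # cut-off: the opponent will avoid this line anyway
    return value

tree = [[3, 12, 8], [2, 4, 6], [14, 5, 2]]
print(negamax(tree))  # 3
```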

Article
Extracting Hierarchies from Data Clusters for Better Classification
by German Sapozhnikov and Alexander Ulanov
Algorithms 2012, 5(4), 506-520; https://doi.org/10.3390/a5040506 - 23 Oct 2012
Cited by 2 | Viewed by 6226
Abstract
In this paper we present the PHOCS-2 algorithm, which extracts a “Predicted Hierarchy Of ClassifierS”. The extracted hierarchy helps us enhance the performance of flat classification. Nodes in the hierarchy contain classifiers; each intermediate node corresponds to a set of classes and each leaf node corresponds to a single class. In PHOCS-2 we make an estimation for each node, achieving more precise computation of false positives, true positives and false negatives. Stopping criteria are based on the results of the flat classification. The proposed algorithm is validated against nine datasets.
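A small sketch of the data structure the abstract describes, not the PHOCS-2 extraction procedure itself: each node holds a set of classes and a classifier that routes an example to one child, and each leaf returns a single class. All names are illustrative.

```python
class HNode:
    def __init__(self, labels, classifier=None, children=()):
        self.labels = frozenset(labels)   # classes reachable below this node
        self.classifier = classifier      # picks one child per example
        self.children = list(children)

    def predict(self, x):
        if not self.children:             # leaf: a single class
            (label,) = self.labels
            return label
        return self.children[self.classifier(x)].predict(x)

# Toy two-level hierarchy over classes {a, b, c}: the root separates
# {a, b} from {c}; an inner node separates a from b.
leaf_a, leaf_b, leaf_c = HNode({"a"}), HNode({"b"}), HNode({"c"})
inner = HNode({"a", "b"}, classifier=lambda x: 0 if x < 0.5 else 1,
              children=[leaf_a, leaf_b])
root = HNode({"a", "b", "c"}, classifier=lambda x: 0 if x < 1.0 else 1,
             children=[inner, leaf_c])
print(root.predict(0.2), root.predict(0.8), root.predict(3.0))  # a b c
```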

Article
The Effects of Tabular-Based Content Extraction on Patent Document Clustering
by Denise R. Koessler, Benjamin W. Martin, Bruce E. Kiefer and Michael W. Berry
Algorithms 2012, 5(4), 490-505; https://doi.org/10.3390/a5040490 - 22 Oct 2012
Viewed by 6688
Abstract
Data can be represented in many different ways within a particular document or set of documents. Hence, attempts to automatically process the relationships between documents or determine the relevance of certain document objects can be problematic. In this study, we have developed software to automatically catalog objects contained in HTML files for patents granted by the United States Patent and Trademark Office (USPTO). Once these objects are recognized, the software creates metadata that assigns a data type to each document object. Such metadata can be easily processed and analyzed for subsequent text mining tasks. Specifically, document similarity and clustering techniques were applied to a subset of the USPTO document collection. Although our preliminary results demonstrate that tables and numerical data do not provide quantifiable value to a document's content, they set the stage for future work on measuring the importance of document objects within a large corpus.
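A hedged sketch of the cataloguing step, using BeautifulSoup as an assumed parsing toolkit (the abstract does not name one): walk a patent HTML file and emit a type-tagged metadata record for each document object.

```python
from bs4 import BeautifulSoup

def catalog_objects(html):
    """Assign a data type to each recognized document object."""
    soup = BeautifulSoup(html, "html.parser")
    kinds = {"p": "text", "table": "table", "img": "figure"}
    return [{"type": kinds[tag.name],
             "length": len(tag.get_text(strip=True))}
            for tag in soup.find_all(list(kinds))]

html = "<p>A fastening device...</p><table><tr><td>3.14</td></tr></table>"
print(catalog_objects(html))
# [{'type': 'text', 'length': ...}, {'type': 'table', 'length': ...}]
```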

Article
Contextual Anomaly Detection in Text Data
by Amogh Mahapatra, Nisheeth Srivastava and Jaideep Srivastava
Algorithms 2012, 5(4), 469-489; https://doi.org/10.3390/a5040469 - 19 Oct 2012
Cited by 24 | Viewed by 15519
Abstract
We propose using side information to further inform anomaly detection algorithms of the semantic context of the text data they are analyzing, thereby considering both divergence from the statistical pattern seen in particular datasets and divergence seen from more general semantic expectations. Computational experiments show that our algorithm performs as expected on data that reflect real-world events with contextual ambiguity, while replicating conventional clustering on data that are either too specialized or generic to result in contextual information being actionable. These results suggest that our algorithm could potentially reduce false positive rates in existing anomaly detection systems.

Article
Forecasting the Unit Cost of a Product with Some Linear Fuzzy Collaborative Forecasting Models
by Toly Chen
Algorithms 2012, 5(4), 449-468; https://doi.org/10.3390/a5040449 - 15 Oct 2012
Cited by 5 | Viewed by 6974
Abstract
Forecasting the unit cost of every product type in a factory is an important task. However, it is not easy to deal with the uncertainty of the unit cost. Fuzzy collaborative forecasting is a very effective treatment of such uncertainty in a distributed environment. This paper presents some linear fuzzy collaborative forecasting models to predict the unit cost of a product. In these models, the experts' forecasts differ and therefore need to be aggregated through collaboration. According to the experimental results, the effectiveness of forecasting the unit cost was considerably improved through collaboration.
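As a loose illustration only (the paper's aggregation rules are not reproduced here), one common way to fuse differing expert forecasts treats each as a triangular fuzzy number, intersects the supports, averages the modes, and defuzzifies the result; every modelling choice below is an assumption.

```python
# Each forecast is (low, mode, high); supports are assumed to overlap.
def aggregate(forecasts):
    low = max(f[0] for f in forecasts)    # intersection of supports
    high = min(f[2] for f in forecasts)
    mode = sum(f[1] for f in forecasts) / len(forecasts)
    return low, mode, high

def defuzzify(tri):
    return sum(tri) / 3.0                 # centroid of a triangle

experts = [(10.0, 12.0, 15.0), (11.0, 12.5, 14.0), (10.5, 13.0, 16.0)]
fused = aggregate(experts)
print(fused, round(defuzzify(fused), 2))  # (11.0, 12.5, 14.0) 12.5
```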

Article
Interaction Enhanced Imperialist Competitive Algorithms
by Jun-Lin Lin, Yu-Hsiang Tsai, Chun-Ying Yu and Meng-Shiou Li
Algorithms 2012, 5(4), 433-448; https://doi.org/10.3390/a5040433 - 15 Oct 2012
Cited by 21 | Viewed by 8376
Abstract
Imperialist Competitive Algorithm (ICA) is a new population-based evolutionary algorithm. It divides its population of solutions into several sub-populations, and then searches for the optimal solution through two operations: assimilation and competition. The assimilation operation moves each non-best solution (called a colony) in a sub-population toward the best solution (called the imperialist) in the same sub-population. The competition operation removes a colony from the weakest sub-population and adds it to another sub-population. Previous work on ICA focuses mostly on improving the assimilation operation or replacing it with more powerful meta-heuristics, but none focuses on improving the competition operation. Since the competition operation simply moves a colony (i.e., an inferior solution) from one sub-population to another, it incurs weak interaction among the sub-populations. This work proposes the Interaction Enhanced ICA, which strengthens the interaction among the imperialists of all sub-populations. Its performance is validated on a set of benchmark functions for global optimization. The results indicate that the performance of Interaction Enhanced ICA is superior to that of ICA and its existing variants.
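For context, a compact sketch of plain ICA's two operations on a toy one-dimensional minimization problem; the interaction-enhanced variants proposed in the paper are not reproduced, and all parameter choices are arbitrary.

```python
import random

random.seed(0)
f = lambda x: x * x                        # cost to minimise

# Initialise: the best solutions become imperialists, the rest colonies.
pop = sorted((random.uniform(-10, 10) for _ in range(12)), key=f)
empires = [{"imp": pop[i], "cols": pop[3 + i::3]} for i in range(3)]

for _ in range(100):
    for e in empires:
        # Assimilation: move each colony part-way toward its imperialist.
        e["cols"] = [c + random.uniform(0, 2) * (e["imp"] - c) for c in e["cols"]]
        # A colony that beats the imperialist swaps roles with it.
        best = min(e["cols"] + [e["imp"]], key=f)
        if f(best) < f(e["imp"]):
            e["cols"].remove(best)
            e["cols"].append(e["imp"])
            e["imp"] = best
    # Competition: the weakest empire loses its worst colony to the strongest.
    empires.sort(key=lambda e: f(e["imp"]))
    if empires[-1]["cols"]:
        worst = max(empires[-1]["cols"], key=f)
        empires[-1]["cols"].remove(worst)
        empires[0]["cols"].append(worst)

print(min(f(e["imp"]) for e in empires))   # near 0
```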

Article
Univariate Lp and ℓp Averaging, 0 < p < 1, in Polynomial Time by Utilization of Statistical Structure
by John E. Lavery
Algorithms 2012, 5(4), 421-432; https://doi.org/10.3390/a5040421 - 5 Oct 2012
Cited by 1 | Viewed by 6829
Abstract
We present evidence that one can calculate generically combinatorially expensive Lp and ℓp averages, 0 < p < 1, in polynomial time by restricting the data to come from a wide class of statistical distributions. Our approach differs from the approaches in the previous literature, which are based on a priori sparsity requirements or on accepting a local minimum as a replacement for a global minimum. The functionals by which Lp averages are calculated are not convex but are radially monotonic, and the functionals by which ℓp averages are calculated are nearly so, which are the keys to solvability in polynomial time. Analytical results for symmetric, radially monotonic univariate distributions are presented, as is an algorithm for univariate ℓp averaging. Computational results for a Gaussian distribution, a class of symmetric heavy-tailed distributions and a class of asymmetric heavy-tailed distributions are presented. Many phenomena in human-based areas are increasingly known to be represented by data that have large numbers of outliers and belong to very heavy-tailed distributions. When tails of distributions are so heavy that even medians (L1 and ℓ1 averages) do not exist, one needs to consider using ℓp minimization principles with 0 < p < 1.
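To make the objective concrete (this is not the paper's polynomial-time algorithm): the ℓp average of a sample is the value a minimizing sum_i |x_i − a|**p. For 0 < p < 1 the objective is concave between consecutive data points, so its minimizers lie at data points and a scan over the sample is exact, though it illustrates rather than exploits the statistical structure the paper uses.

```python
import numpy as np

def lp_average(x, p):
    """Brute-force lp average: the data point minimising sum |x_i - a|**p."""
    x = np.asarray(x, dtype=float)
    costs = [np.sum(np.abs(x - a) ** p) for a in x]
    return x[int(np.argmin(costs))]

# Heavy-tailed sample with outliers: the lp average stays near the bulk.
x = [1.0, 1.1, 0.9, 1.05, 50.0, -40.0]
print(lp_average(x, 0.5))  # 1.05
```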

Article
Better Metrics to Automatically Predict the Quality of a Text Summary
by Peter A. Rankel, John M. Conroy and Judith D. Schlesinger
Algorithms 2012, 5(4), 398-420; https://doi.org/10.3390/a5040398 - 26 Sep 2012
Cited by 14 | Viewed by 7726
Abstract
In this paper we demonstrate a family of metrics for estimating the quality of a text summary relative to one or more human-generated summaries. The improved metrics are based on features automatically computed from the summaries to measure content and linguistic quality. The features are combined using one of three methods—robust regression, non-negative least squares, or canonical correlation, an eigenvalue method. The new metrics significantly outperform the previous standard for automatic text summarization evaluation, ROUGE.
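As a sketch of one of the three named combination methods, non-negative least squares: fit non-negative weights mapping automatic summary features to human quality scores. The feature values and scores below are invented for illustration.

```python
import numpy as np
from scipy.optimize import nnls

# rows: summaries; columns: automatic features (e.g. overlap counts,
# linguistic-quality indicators); y: human-assigned quality scores.
X = np.array([[0.8, 0.1, 0.6],
              [0.4, 0.7, 0.2],
              [0.9, 0.3, 0.8],
              [0.2, 0.5, 0.1]])
y = np.array([4.5, 3.0, 4.8, 2.1])

w, residual = nnls(X, y)
print(w)       # non-negative feature weights
print(X @ w)   # predicted quality scores
```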
