
Table of Contents

Algorithms, Volume 3, Issue 1 (March 2010), Pages 1-99

  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the tables of contents of newly released issues.
  • PDF is the official format for papers published in both HTML and PDF forms. To view a paper in PDF format, click its "PDF Full-text" link and open it with the free Adobe Reader.
Displaying articles 1-6

Research

Open AccessArticle A Clinical Decision Support Framework for Incremental Polyps Classification in Virtual Colonoscopy
Algorithms 2010, 3(1), 1-20; doi:10.3390/a3010001
Received: 28 September 2009 / Accepted: 6 October 2009 / Published: 4 January 2010
Cited by 4 | PDF Full-text (405 KB) | HTML Full-text | XML Full-text
Abstract
We present in this paper a novel dynamic learning method for classifying polyp candidate detections in Computed Tomographic Colonography (CTC) using an adaptation of the Least Squares Support Vector Machine (LS-SVM). The proposed technique, called Weighted Proximal Support Vector Machines (WP-SVM), extends the offline capabilities of the SVM scheme to address practical CTC applications. Incremental data are incorporated in the WP-SVM as a weighted vector space, and the only storage requirements are the hyperplane parameters. WP-SVM performance, evaluated on 169 clinical CTC cases using a 3D computer-aided diagnosis (CAD) scheme for feature reduction, compared favorably with previously published CTC CAD studies, which, however, involved only binary and offline classification schemes. The experimental results obtained from iteratively applying WP-SVM to improve detection sensitivity demonstrate its viability for incremental learning, thereby motivating further follow-on research to address a wider range of true positive subclasses, such as pedunculated, sessile, and flat polyps, and of false positive subclasses, such as folds, stool, and tagged materials. Full article
(This article belongs to the Special Issue Machine Learning for Medical Imaging)
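The abstract's central claim — that incremental batches can be folded in while storing only compact hyperplane statistics — can be illustrated with a plain linear proximal SVM solved in closed form from accumulated sufficient statistics. This is a hedged sketch of the general idea, not the authors' WP-SVM: the class name, the regularization parameter `nu`, and the toy data are all assumptions, and the paper's weighting of incremental data is omitted.

```python
import numpy as np

class IncrementalProximalSVM:
    """Illustrative linear proximal SVM (not the paper's exact WP-SVM):
    each batch updates E^T E and E^T y, so storage stays O(d^2) no matter
    how much data has been seen."""

    def __init__(self, dim, nu=10.0):
        self.nu = nu                           # regularization strength (assumed value)
        self.M = np.zeros((dim + 1, dim + 1))  # running sum of E^T E
        self.v = np.zeros(dim + 1)             # running sum of E^T y

    def partial_fit(self, X, y):
        # Augment with a -1 column so the bias term is solved jointly.
        E = np.hstack([X, -np.ones((len(X), 1))])
        self.M += E.T @ E
        self.v += E.T @ y
        # Closed-form proximal SVM solution: (I/nu + E^T E) [w; b] = E^T y
        sol = np.linalg.solve(np.eye(len(self.v)) / self.nu + self.M, self.v)
        self.w, self.b = sol[:-1], sol[-1]

    def predict(self, X):
        return np.sign(X @ self.w - self.b)
```

Each call to `partial_fit` refines the separating hyperplane without revisiting earlier batches, which is the storage property the abstract highlights.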
Open AccessArticle A Robust and Fast System for CTC Computer-Aided Detection of Colorectal Lesions
Algorithms 2010, 3(1), 21-43; doi:10.3390/a3010021
Received: 9 November 2009 / Revised: 14 December 2009 / Accepted: 23 December 2009 / Published: 5 January 2010
Cited by 24 | PDF Full-text (2432 KB) | HTML Full-text | XML Full-text
Abstract
We present a complete, end-to-end computer-aided detection (CAD) system for identifying lesions in the colon, imaged with computed tomography (CT). This system includes facilities for colon segmentation, candidate generation, feature analysis, and classification. The algorithms have been designed to offer performance that is robust to variation in image data and patient preparation. By utilizing efficient 2D and 3D processing, software optimizations, multi-threading, feature selection, and an optimized cascade classifier, the CAD system quickly determines a set of detection marks. The colon CAD system has been validated on the largest dataset to date and demonstrates excellent performance in terms of high sensitivity, a low false positive rate, and computational efficiency. Full article
(This article belongs to the Special Issue Machine Learning for Medical Imaging)

Open AccessArticle Breast Cancer Detection with Gabor Features from Digital Mammograms
Algorithms 2010, 3(1), 44-62; doi:10.3390/a3010044
Received: 28 October 2009 / Revised: 14 January 2010 / Accepted: 14 January 2010 / Published: 19 January 2010
Cited by 19 | PDF Full-text (831 KB) | HTML Full-text | XML Full-text
Abstract
A new breast cancer detection algorithm, named the “Gabor Cancer Detection” (GCD) algorithm, utilizing Gabor features is proposed. Three major steps are involved in the GCD algorithm: preprocessing, segmentation (generating alarm segments), and classification (reducing false alarms). In preprocessing, a digital mammogram is down-sampled, quantized, denoised and enhanced; nonlinear diffusion is used for noise suppression. In segmentation, a band-pass filter is formed by rotating a 1-D Gaussian filter (off center) in frequency space, termed the “Circular Gaussian Filter” (CGF). A CGF can be uniquely characterized by specifying a central frequency and a frequency band. A mass or calcification is a space-occupying lesion and usually appears as a bright region on a mammogram. The alarm segments (suspected masses or calcifications) are extracted using a threshold determined adaptively from histogram analysis of the CGF-filtered mammogram. In classification, a Gabor filter bank is formed with five bands by four orientations (horizontal, vertical, 45 and 135 degrees) in the Fourier frequency domain. For each mammographic image, twenty Gabor-filtered images are produced. A set of edge histogram descriptors (EHD) is then extracted from the 20 Gabor images for classification. An EHD signature is computed over the four orientations of Gabor images within each band, and the five EHD signatures are then concatenated to form a 20-dimensional EHD feature vector. With the EHD features, the fuzzy C-means clustering technique and a k-nearest neighbor (KNN) classifier are used to reduce the number of false alarms. Experimental results on the DDSM database (University of South Florida) show the promise of the GCD algorithm in breast cancer detection: it achieved a true positive rate (TP) of 90% at 1.21 false positives per image (FPI) in mass detection, and TP = 93% at FPI = 1.19 in calcification detection. Full article
(This article belongs to the Special Issue Machine Learning for Medical Imaging)
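The filters described in the abstract above can be sketched directly in the frequency domain: a Circular Gaussian Filter is a 1-D Gaussian rotated about the origin, and the Gabor bank places one Gaussian bump per (band, orientation) pair. A minimal illustration follows; the band centers, `sigma`, and image sizes are hypothetical choices (the paper's actual parameters are not reproduced here).

```python
import numpy as np

def circular_gaussian_filter(size, f0, sigma):
    """Band-pass CGF: a 1-D Gaussian centered at radial frequency f0,
    rotated about the origin of frequency space (isotropic ring)."""
    fy, fx = np.meshgrid(np.fft.fftfreq(size), np.fft.fftfreq(size), indexing="ij")
    radius = np.hypot(fx, fy)
    return np.exp(-((radius - f0) ** 2) / (2.0 * sigma ** 2))

def gabor_bank(size, f0s, thetas, sigma=0.05):
    """Frequency-domain Gabor bank: one Gaussian bump per (band, orientation)
    pair, e.g. 5 bands x 4 orientations = 20 filters as in the abstract."""
    fy, fx = np.meshgrid(np.fft.fftfreq(size), np.fft.fftfreq(size), indexing="ij")
    bank = []
    for f0 in f0s:
        for theta in thetas:
            cx, cy = f0 * np.cos(theta), f0 * np.sin(theta)
            bank.append(np.exp(-((fx - cx) ** 2 + (fy - cy) ** 2)
                               / (2.0 * sigma ** 2)))
    return bank

def apply_filter(image, freq_response):
    """Filter an image by pointwise multiplication in the Fourier domain."""
    return np.fft.ifft2(np.fft.fft2(image) * freq_response).real
```

Thresholding the CGF output would yield alarm segments, and edge histograms of the 20 Gabor responses would form the 20-dimensional EHD vector; those later stages are not shown.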

Open AccessArticle Interactive Compression of Digital Data
Algorithms 2010, 3(1), 63-75; doi:10.3390/a3010063
Received: 4 November 2009 / Revised: 8 January 2010 / Accepted: 25 January 2010 / Published: 29 January 2010
PDF Full-text (375 KB) | HTML Full-text | XML Full-text
Abstract
If we can use prior knowledge of the source (or knowledge of a source that is correlated with the one we want to compress) to guide the compression process, we can achieve significant gains in compression. In the fundamental source coding theorem, this amounts to substituting conditional entropy for entropy, yielding a new theoretical limit that allows for better compression. To approach this limit when data compression is used for data transmission, we can assume some degree of interaction between the compressor and the decompressor, allowing a more efficient use of the knowledge they both have of the source. In this paper we discuss this possibility and review previous work that applies interactive approaches to data compression. Full article
(This article belongs to the Special Issue Data Compression)
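The theoretical point in the abstract above — that side information lowers the coding limit from H(X) to H(X|Y) — can be checked numerically. A small sketch with toy correlated strings (the data are made up purely for illustration):

```python
import math
from collections import Counter

def entropy(symbols):
    """Empirical Shannon entropy H(X) in bits of a list of symbols."""
    n = len(symbols)
    return -sum((c / n) * math.log2(c / n) for c in Counter(symbols).values())

def conditional_entropy(xs, ys):
    """H(X|Y) = H(X, Y) - H(Y), estimated from paired samples."""
    return entropy(list(zip(xs, ys))) - entropy(ys)

# Toy correlated sources: ys is a noisy copy of xs (differs in one place).
xs = list("aabbababaabbabab")
ys = list("aabbabaaaabbabab")
# H(X) = 1.0 bit, while H(X|Y) ≈ 0.28 bits: a decompressor that already
# knows ys needs far fewer bits per symbol to recover xs.
```

With `conditional_entropy(xs, xs)` equal to zero, the limiting case is also visible: perfect side information leaves nothing to transmit.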
Open AccessArticle InfoVis Interaction Techniques in Animation of Recursive Programs
Algorithms 2010, 3(1), 76-91; doi:10.3390/a3010076
Received: 8 December 2009 / Revised: 18 January 2010 / Accepted: 25 January 2010 / Published: 10 February 2010
Cited by 4 | PDF Full-text (320 KB) | HTML Full-text | XML Full-text
Abstract
Algorithm animations typically assist in educational tasks aimed simply at achieving understanding. Potentially, animations could assist in higher levels of cognition, such as the analysis level, but they usually fail to provide this support because they are not flexible or comprehensive enough. In particular, the animations of recursion provided by educational systems hardly support the analysis of recursive algorithms. Here we show how to provide full support for the analysis of recursive algorithms. From a technical point of view, the animations are enriched with interaction techniques inspired by the information visualization (InfoVis) field. The interaction tasks are presented in seven categories and deal with both static visualizations and dynamic animations. All of these features are implemented in the SRec system, and visualizations generated by SRec are used to illustrate the article. Full article

Open AccessArticle Base Oils Biodegradability Prediction with Data Mining Techniques
Algorithms 2010, 3(1), 92-99; doi:10.3390/algor3010092
Received: 30 December 2009 / Revised: 24 January 2010 / Accepted: 28 January 2010 / Published: 23 February 2010
PDF Full-text (244 KB) | HTML Full-text | XML Full-text
Abstract
In this paper, we apply various data mining techniques, including continuous numeric and discrete classification prediction models of base oil biodegradability, with emphasis on improving prediction accuracy. The results show that highly biodegradable oils can be better predicted through numeric models. In contrast, classification models did not uncover a similar dichotomy. With the exception of Memory Based Reasoning and Decision Trees, the tested classification techniques achieved high classification accuracy. However, the Decision Trees technique helped uncover the most significant predictor. A simple classification rule derived from this predictor resulted in good classification accuracy. Applying this rule enables efficient classification of base oils into low or high biodegradability classes with high accuracy; for the latter class, a more precise biodegradability prediction can be obtained using continuous modeling techniques. Full article
(This article belongs to the Special Issue Data Mining in Multi-Core, Many-Core and Cloud Era)

Journal Contact

MDPI AG
Algorithms Editorial Office
St. Alban-Anlage 66, 4052 Basel, Switzerland
algorithms@mdpi.com
Tel. +41 61 683 77 34
Fax: +41 61 302 89 18