Classification and Regression in Machine Learning

A special issue of Algorithms (ISSN 1999-4893). This special issue belongs to the section "Evolutionary Algorithms and Machine Learning".

Deadline for manuscript submissions: closed (15 October 2020) | Viewed by 7617

Special Issue Editor


Guest Editor
Computational Intelligence Group, School of Computing, University of Kent, Chatham ME4 4AG, UK
Interests: data mining; knowledge discovery; bio-inspired algorithms; bioinformatics

Special Issue Information

Dear Colleagues,

I would like to announce a Special Issue entitled “Classification and Regression in Machine Learning” to be published in the MDPI journal Algorithms. This Special Issue focuses on algorithms for two supervised machine learning tasks, namely, classification and regression. We invite papers describing novel research on methods and applications in these tasks, including survey papers discussing current issues and open research problems.

Potential topics include (but are not limited to):

  • Machine learning algorithms (e.g., support vector machines, Bayesian learning, statistical methods);
  • Bio-inspired algorithms (e.g., evolutionary algorithms, swarm intelligence, artificial neural networks);
  • Model representation (e.g., decision and regression trees, rules, deep learning);
  • Ensemble learning;
  • Scalable and distributed learning;
  • Interpretability of models;
  • Evaluation techniques;
  • Transparency and fairness;
  • Application case studies.

Dr. Fernando Otero
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, as well as short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Algorithms is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Published Papers (2 papers)


Research

18 pages, 1492 KiB  
Article
A Weighted Ensemble Learning Algorithm Based on Diversity Using a Novel Particle Swarm Optimization Approach
by Gui-Rong You, Yeou-Ren Shiue, Wei-Chang Yeh, Xi-Li Chen and Chih-Ming Chen
Algorithms 2020, 13(10), 255; https://doi.org/10.3390/a13100255 - 9 Oct 2020
Cited by 6 | Viewed by 3056
Abstract
In ensemble learning, accuracy and diversity are the main factors affecting its performance. In previous studies, diversity was regarded only as a regularization term, which does not sufficiently reflect that diversity should implicitly be treated as an accuracy factor. In this study, a two-stage weighted ensemble learning method using the particle swarm optimization (PSO) algorithm is proposed to balance the diversity and accuracy in ensemble learning. The first stage enhances the diversity of the individual learners by manipulating the datasets and the input features via a mixed-binary PSO algorithm, which searches for a set of individual learners with appropriate diversity. The purpose of the second stage is to improve the accuracy of the ensemble classifier using a weighted ensemble method that considers both diversity and accuracy. The set of weighted classifier ensembles is obtained by optimization via the PSO algorithm. The experimental results on 30 UCI datasets demonstrate that the proposed algorithm outperforms other state-of-the-art baselines.
(This article belongs to the Special Issue Classification and Regression in Machine Learning)
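The second stage described in the abstract, PSO searching for a set of ensemble weights, can be sketched in miniature. Everything below is an illustrative assumption rather than the authors' implementation: the toy per-learner probability scores, the plain global-best PSO variant, and all hyperparameter values are invented for the sketch; only the idea of optimizing classifier weights with PSO is taken from the paper.

```python
# Hedged sketch: global-best PSO tuning weights of a three-learner ensemble.
# The "predictions" are hypothetical probability-of-class-1 scores.
import random

random.seed(0)

# Each row: (score from learner 1, learner 2, learner 3, true label).
preds = [
    (0.9, 0.6, 0.2, 1),
    (0.8, 0.4, 0.3, 1),
    (0.3, 0.7, 0.1, 0),
    (0.2, 0.5, 0.9, 0),
    (0.7, 0.8, 0.4, 1),
    (0.1, 0.3, 0.6, 0),
]

def ensemble_error(weights):
    """Misclassification rate of the weighted-average ensemble."""
    total = sum(weights) or 1e-9
    errors = 0
    for p1, p2, p3, label in preds:
        score = (weights[0] * p1 + weights[1] * p2 + weights[2] * p3) / total
        errors += int((score >= 0.5) != bool(label))
    return errors / len(preds)

def pso(n_particles=20, n_iters=50, dim=3):
    """Plain global-best PSO over weight vectors clamped to [0, 1]."""
    pos = [[random.random() for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_f = [ensemble_error(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_f[i])
    gbest, gbest_f = pbest[g][:], pbest_f[g]
    for _ in range(n_iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                # Inertia + cognitive + social terms (standard PSO update).
                vel[i][d] = (0.7 * vel[i][d]
                             + 1.5 * r1 * (pbest[i][d] - pos[i][d])
                             + 1.5 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] = min(1.0, max(0.0, pos[i][d] + vel[i][d]))
            f = ensemble_error(pos[i])
            if f < pbest_f[i]:
                pbest[i], pbest_f[i] = pos[i][:], f
                if f < gbest_f:
                    gbest, gbest_f = pos[i][:], f
    return gbest, gbest_f

weights, err = pso()
print(weights, err)
```

On this toy data a weight vector that favours the first learner separates the classes perfectly, so the swarm converges to a low-error weighting; the paper's first stage (diversity-driven selection of the base learners themselves) is not modelled here.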

24 pages, 1549 KiB  
Article
Sparse Logistic Regression: Comparison of Regularization and Bayesian Implementations
by Mattia Zanon, Giuliano Zambonin, Gian Antonio Susto and Seán McLoone
Algorithms 2020, 13(6), 137; https://doi.org/10.3390/a13060137 - 8 Jun 2020
Cited by 1 | Viewed by 3719
Abstract
In knowledge-based systems, besides obtaining good output prediction accuracy, it is crucial to understand the subset of input variables that have the most influence on the output, with the goal of gaining deeper insight into the underlying process. These requirements call for logistic model estimation techniques that provide a sparse solution, i.e., where coefficients associated with non-important variables are set to zero. In this work we compare the performance of two methods: the first is based on the well-known Least Absolute Shrinkage and Selection Operator (LASSO), which involves regularization with an ℓ1 norm; the second is the Relevance Vector Machine (RVM), which is based on a Bayesian implementation of the linear logistic model. The two methods are extensively compared in this paper, on real and simulated datasets. Results show that, in general, the two approaches are comparable in terms of prediction performance. RVM outperforms the LASSO both in terms of structure recovery (estimation of the correct non-zero model coefficients) and prediction accuracy when the dimensionality of the data tends to increase. However, LASSO shows comparable performance to RVM when the dimensionality of the data is much higher than the number of samples, i.e., p ≫ n.
(This article belongs to the Special Issue Classification and Regression in Machine Learning)
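The LASSO side of the comparison can be illustrated with a small sketch. This is an assumption-laden miniature, not the paper's code: the synthetic data (only feature 0 informative), the proximal-gradient (ISTA-style) solver, and all parameter values are invented for illustration. It shows the one property the abstract relies on: the ℓ1 penalty drives coefficients of non-important variables toward exactly zero.

```python
# Hedged sketch: L1-regularized logistic regression fitted with proximal
# gradient descent, showing sparsity in the recovered coefficients.
import math
import random

random.seed(1)

n, p = 200, 5  # samples, features; only feature 0 carries signal
X = [[random.gauss(0, 1) for _ in range(p)] for _ in range(n)]
y = [1 if 2.0 * row[0] + random.gauss(0, 0.5) > 0 else 0 for row in X]

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def soft_threshold(v, t):
    """Proximal operator of the L1 norm: shrink toward zero by t."""
    return math.copysign(max(abs(v) - t, 0.0), v)

def fit_l1_logistic(X, y, lam=0.05, lr=0.1, iters=500):
    w = [0.0] * len(X[0])
    for _ in range(iters):
        grad = [0.0] * len(w)
        for row, label in zip(X, y):
            err = sigmoid(sum(wi * xi for wi, xi in zip(w, row))) - label
            for j, xj in enumerate(row):
                grad[j] += err * xj / len(X)
        # Gradient step on the logistic loss, then the L1 proximal step.
        w = [soft_threshold(wj - lr * gj, lr * lam)
             for wj, gj in zip(w, grad)]
    return w

w = fit_l1_logistic(X, y)
print([round(wj, 3) for wj in w])
```

The informative coefficient grows while the four noise coefficients are shrunk to (or near) zero, which is the "structure recovery" behaviour the paper measures when comparing LASSO against RVM.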
