Recent Advances in Automated Machine Learning: 2nd Edition

A special issue of Applied Sciences (ISSN 2076-3417). This special issue belongs to the section "Computing and Artificial Intelligence".

Deadline for manuscript submissions: 30 September 2024

Special Issue Editor


Prof. Dr. Dae-Ki Kang
Guest Editor
Machine Learning/Deep Learning Research Labs, Department of Computer Engineering, Dongseo University, Busan 47011, Republic of Korea
Interests: automated machine learning; adversarial machine learning; multi-agent reinforcement learning; few-shot learning; generative adversarial networks

Special Issue Information

Dear Colleagues,

We are seeking submissions for a Special Issue entitled “Recent Advances in Automated Machine Learning”.

Big data, a phenomenon that has spurred remarkable advances in deep learning, can now be found across many domains, and a growing number of researchers are investigating the theory and applications of automated machine learning (AutoML). Advances in AutoML will have a major impact on many areas of deep learning, including data preparation, feature engineering, model selection and evaluation, hyperparameter tuning, network architecture search, and ensemble methods. For machine learning projects to succeed, exploratory data analysis and feature selection must be automated so that the context, properties, and quality of the data can be understood; in this initial stage, automated data exploration and feature recommendation tools are of great assistance. To achieve optimal performance in terms of both training time and evaluation metrics (including accuracy), however, we need effective model selection and evaluation methods that search for optimal hyperparameters and network architectures. Moreover, since AutoML methodologies deal with multiple models simultaneously, smart strategies are required for maintaining homogeneous or heterogeneous models under parallelized and limited resources. Techniques for searching for (or creating) optimal hyperparameters and network architectures in contemporary machine learning settings, such as federated learning, meta-learning, and self-supervised learning, are attracting increasing interest from the research community.
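To make one of these building blocks concrete, below is a minimal sketch of hyperparameter tuning via random search, the simplest baseline that AutoML systems typically improve upon. The dataset, model, and search space (scikit-learn's digits data, a random forest, and RandomizedSearchCV) are illustrative assumptions, not methods prescribed by this Special Issue.

    # Minimal random-search hyperparameter optimization (illustrative sketch).
    from scipy.stats import randint
    from sklearn.datasets import load_digits
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import RandomizedSearchCV

    X, y = load_digits(return_X_y=True)

    # Distributions from which candidate hyperparameters are sampled.
    param_distributions = {
        "n_estimators": randint(50, 300),
        "max_depth": randint(3, 20),
        "min_samples_leaf": randint(1, 10),
    }

    # Evaluate 20 random configurations with 3-fold cross-validation.
    search = RandomizedSearchCV(
        RandomForestClassifier(random_state=0),
        param_distributions,
        n_iter=20,
        cv=3,
        random_state=0,
    )
    search.fit(X, y)
    print("Best parameters:", search.best_params_)
    print("Best CV accuracy:", round(search.best_score_, 3))

More sophisticated AutoML methods replace the random sampling with Bayesian optimization, bandit-based early stopping, or learned search policies, but the underlying loop (propose a configuration, evaluate it, update the search state) stays the same.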

For this Special Issue, we invite submissions presenting cutting-edge research and recent advances in the field of automated machine learning. Both theoretical and experimental studies, as well as comprehensive review and survey papers, are welcome.

Prof. Dr. Dae-Ki Kang
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Applied Sciences is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • automated domain adaptation
  • automated feature engineering
  • AutoML for meta-learning
  • explainability in AutoML
  • federated AutoML
  • hyperparameter optimization and creation
  • metaheuristics for AutoML
  • network architecture search
  • optimal resource utilization in AutoML
  • reinforcement learning for AutoML
  • security and privacy in AutoML
  • self-supervised learning and AutoML
  • semi-automated machine learning
  • stopping criteria for AutoML


Published Papers (1 paper)


Research

17 pages, 2362 KiB  
Article
Reducing Model Complexity in Neural Networks by Using Pyramid Training Approaches
by Şahım Giray Kıvanç, Baha Şen, Fatih Nar and Ali Özgün Ok
Appl. Sci. 2024, 14(13), 5898; https://doi.org/10.3390/app14135898 - 5 Jul 2024
Abstract
Throughout the evolution of machine learning, the size of models has steadily increased as researchers strive for higher accuracy by adding more layers. This escalation in model complexity necessitates enhanced hardware capabilities. Today, state-of-the-art machine learning models have become so large that effectively training them requires substantial hardware resources, which may be readily available to large companies but not to students or independent researchers. To make the research on machine learning models more accessible, this study introduces a size reduction technique that leverages stages in pyramid training and similarity comparison. We conducted experiments on classification, segmentation, and object detection tasks using various network configurations. Our results demonstrate that pyramid training can reduce model complexity by up to 70% while maintaining accuracy comparable to conventional full-sized models. These findings offer a scalable and resource-efficient solution for researchers and practitioners in hardware-constrained environments.
(This article belongs to the Special Issue Recent Advances in Automated Machine Learning: 2nd Edition)