
Advances in Bayesian Optimization and Deep Reinforcement Learning

A special issue of Entropy (ISSN 1099-4300). This special issue belongs to the section "Information Theory, Probability and Statistics".

Deadline for manuscript submissions: 16 December 2024

Special Issue Editor


Guest Editor
Department of Quantitative Methods, Universidad Pontificia Comillas, Madrid, Spain
Interests: Bayesian optimization; deep reinforcement learning; information theory; AutoML; AI ethics

Special Issue Information

Dear Colleagues,

Bayesian optimization has been shown to be a state-of-the-art technique for optimizing black-box functions, that is, expensive-to-evaluate functions with no known analytical expression, and hence no gradients to exploit, whose evaluations may also be noisy. The significance of the methodology has been demonstrated in particular by its successful application to the hyperparameter tuning of machine learning algorithms. A whole body of work now addresses advanced Bayesian optimization techniques, yet their application has almost always been to supervised learning algorithms.
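To fix ideas, the following is a minimal, self-contained sketch of the sequential Bayesian optimization loop described above: a Gaussian process surrogate with a squared-exponential kernel and an expected improvement acquisition function, minimizing a stand-in black-box objective. All function names, the toy objective, and the kernel hyperparameters are illustrative assumptions, not material from this Special Issue.

```python
import math
import numpy as np

def rbf_kernel(a, b, length_scale=0.2):
    """Squared-exponential kernel between 1-D input arrays."""
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / length_scale ** 2)

def gp_posterior(X, y, Xs, jitter=1e-6):
    """Gaussian process posterior mean and std at candidate points Xs."""
    K = rbf_kernel(X, X) + jitter * np.eye(len(X))
    Ks = rbf_kernel(X, Xs)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    mu = Ks.T @ alpha
    v = np.linalg.solve(L, Ks)
    var = 1.0 - np.sum(v ** 2, axis=0)   # k(x, x) = 1 for this kernel
    return mu, np.sqrt(np.maximum(var, 1e-12))

def expected_improvement(mu, sigma, best):
    """Expected improvement acquisition function (minimization)."""
    z = (best - mu) / sigma
    Phi = 0.5 * (1.0 + np.vectorize(math.erf)(z / math.sqrt(2.0)))
    phi = np.exp(-0.5 * z ** 2) / math.sqrt(2.0 * math.pi)
    return (best - mu) * Phi + sigma * phi

def black_box(x):
    """Stand-in for an expensive, gradient-free, possibly noisy objective."""
    return np.sin(3.0 * x) + 0.5 * x

rng = np.random.default_rng(0)
X = rng.uniform(-2.0, 2.0, 3)            # small initial design
y = black_box(X)
candidates = np.linspace(-2.0, 2.0, 200)

for _ in range(10):                      # sequential BO loop
    mu, sigma = gp_posterior(X, y, candidates)
    x_next = candidates[np.argmax(expected_improvement(mu, sigma, y.min()))]
    X = np.append(X, x_next)
    y = np.append(y, black_box(x_next))
```

The loop spends each of its ten evaluations where the surrogate predicts the largest expected improvement over the incumbent, which is exactly what makes the approach attractive when each evaluation is costly.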

In recent years, however, deep reinforcement learning has shown promising results on a variety of problems, with numerous new algorithms being proposed. It is therefore important to translate the success of Bayesian optimization in supervised learning to deep reinforcement learning algorithms, from both a methodological and an application point of view. Notably, many advanced Bayesian optimization scenarios can now be tested on deep reinforcement learning tasks.
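In this setting, Bayesian optimization acts as an outer loop around reinforcement learning training. As a minimal, self-contained sketch of that outer loop (the two-armed bandit task, its reward probabilities, and all function names are illustrative assumptions, not part of this Special Issue), the snippet below scores candidate learning rates for a toy REINFORCE agent; a Bayesian optimization method would propose the candidates adaptively instead of evaluating a fixed grid.

```python
import numpy as np

rng = np.random.default_rng(1)

def train_bandit_agent(lr, episodes=200):
    """Tiny REINFORCE agent on a two-armed Bernoulli bandit; returns mean reward."""
    arm_probs = np.array([0.3, 0.8])   # hypothetical task, not from the Issue
    theta = np.zeros(2)                # policy logits
    total = 0.0
    for _ in range(episodes):
        p = np.exp(theta - theta.max())
        p /= p.sum()                   # softmax policy over the two arms
        a = rng.choice(2, p=p)
        r = float(rng.random() < arm_probs[a])
        grad = -p
        grad[a] += 1.0                 # gradient of log softmax policy
        theta += lr * r * grad         # REINFORCE update (no baseline)
        total += r
    return total / episodes

# Outer loop: the hyperparameter tuner sees training only as a black-box
# mapping from learning rate to mean reward.
candidates = [1e-3, 1e-2, 1e-1, 1.0]
scores = [train_bandit_agent(lr) for lr in candidates]
best_lr = candidates[int(np.argmax(scores))]
```

Because each call to `train_bandit_agent` is stochastic and, for realistic agents, expensive, this outer objective is precisely the noisy black-box setting where Bayesian optimization is expected to shine.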

This Special Issue aims to be a forum for the presentation of new and improved Bayesian optimization methodologies, deep reinforcement learning methodologies, or applications of Bayesian optimization that enhance the performance of deep reinforcement learning in a plethora of scenarios, from robotics to financial portfolio management. Preference will be given to methodologies that take an information-theoretic approach to the problem in either the Bayesian optimization or the deep reinforcement learning paradigm.

Dr. Eduardo C. Garrido-Merchán
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Entropy is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Bayesian optimization
  • deep reinforcement learning
  • hyperparameter tuning
  • AutoRL
  • information theory

Published Papers

This special issue is now open for submission.