Physics-Informed Neural Networks

A special issue of Entropy (ISSN 1099-4300). This special issue belongs to the section "Information Theory, Probability and Statistics".

Deadline for manuscript submissions: 30 October 2024 | Viewed by 1033

Special Issue Editors


Dr. Franz Martin Rohrhofer
Guest Editor
Area of Methods & Algorithms for Artificial Intelligence, Know-Center GmbH, 8010 Graz, Austria
Interests: machine learning; deep learning; physics-informed machine learning; physics-informed neural networks; theory-inspired machine learning

Dr. Bernhard C. Geiger
Guest Editor
1. Area of Methods & Algorithms for Artificial Intelligence, Know-Center GmbH, 8010 Graz, Austria
2. Signal Processing and Speech Communication Laboratory, Graz University of Technology, 8010 Graz, Austria
Interests: information-theoretic model reduction; information bottleneck theory of deep learning; information-theoretic analysis of machine learning systems; theory-inspired machine learning

Special Issue Information

Dear Colleagues,

Physics-informed neural networks (PINNs) have recently emerged as a deep learning method applicable to both forward and inverse problems governed by systems of ordinary or partial differential equations. PINNs are designed to approximate solutions to these differential equations: in the standard formulation, training aims to simultaneously ensure that the learned function agrees with the provided training data and satisfies the differential equations. The multi-objective nature of PINN training has been shown to lead to optimization scenarios that appear novel and that often cannot be successfully addressed by existing approaches in deep learning.
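The two competing objectives can be made concrete with a toy example. The sketch below is a simplified illustration, not a reference implementation: it evaluates the composite PINN loss for the ODE du/dx = -u with u(0) = 1, using a central finite difference for the derivative where a real PINN would use automatic differentiation, and a fixed candidate function where a real PINN would train a network. The function `pinn_loss` and the weighting parameter `lam` are hypothetical names chosen for this illustration.

```python
import math

# Toy ODE: du/dx = -u on [0, 1] with u(0) = 1; exact solution u(x) = exp(-x).

def pinn_loss(u, data, collocation, lam=1.0, h=1e-5):
    """Composite PINN loss: data misfit + physics residual."""
    # Supervised term: squared error on the (sparse) training data.
    data_loss = sum((u(x) - y) ** 2 for x, y in data) / len(data)
    # Physics term: squared residual of du/dx + u = 0 at collocation points.
    # du/dx is approximated by a central finite difference here; an actual
    # PINN differentiates the network exactly via autodiff.
    residual = lambda x: (u(x + h) - u(x - h)) / (2 * h) + u(x)
    physics_loss = sum(residual(x) ** 2 for x in collocation) / len(collocation)
    return data_loss + lam * physics_loss

data = [(0.0, 1.0)]                    # only the initial condition is observed
colloc = [i / 10 for i in range(11)]   # collocation grid on [0, 1]

exact = lambda x: math.exp(-x)         # satisfies both objectives
wrong = lambda x: 1.0 - x              # fits the data point, violates the ODE

print(pinn_loss(exact, data, colloc))  # near zero
print(pinn_loss(wrong, data, colloc))  # clearly nonzero
```

The second candidate illustrates the multi-objective tension: it drives the data term to zero while leaving a large physics residual, which is precisely the trade-off that makes PINN optimization landscapes difficult.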

The aim of this Special Issue is to deepen our understanding of the peculiarities of PINN training and to draw connections with recent trends in machine learning theory, such as generalization bounds, spectral bias theory, and the appropriateness of certain optimization approaches. We therefore seek submissions that provide the following:

  • In-depth investigations into the potential pitfalls of PINN training for inverse and forward problems;
  • Insights into the effects of optimization approaches and architectural and modeling choices on PINN training;
  • Approaches to improving the success of PINN training, including novel sampling or loss-weighting schemes and probabilistic or Bayesian approaches;
  • Theoretical performance bounds of PINNs derived from first principles.

Of particular interest are submissions that employ approaches from probability and statistics, statistical or quantum mechanics, or information theory, as these approaches align with the scope of Entropy. Further, while we do not solicit application papers that employ PINNs in certain scientific or engineering domains, we are interested in works that evaluate the appropriateness of PINNs for wider classes of differential equations.

Dr. Franz Martin Rohrhofer
Dr. Bernhard C. Geiger
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Entropy is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • physics-informed neural networks
  • physics-informed machine learning
  • generalization bounds
  • multi-objective optimization
  • Bayesian inference
  • machine learning theory

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (1 paper)

Research

21 pages, 7008 KiB  
Article
An Adaptive Sampling Algorithm with Dynamic Iterative Probability Adjustment Incorporating Positional Information
by Yanbing Liu, Liping Chen, Yu Chen and Jianwan Ding
Entropy 2024, 26(6), 451; https://doi.org/10.3390/e26060451 - 26 May 2024
Viewed by 658
Abstract
Physics-informed neural networks (PINNs) have garnered widespread use for solving a variety of complex partial differential equations (PDEs). Nevertheless, when addressing certain specific problem types, traditional sampling algorithms still reveal deficiencies in efficiency and precision. In response, this paper builds upon the progress of adaptive sampling techniques, addressing the inadequacy of existing algorithms to fully leverage the spatial location information of sample points, and introduces an innovative adaptive sampling method. This approach incorporates the Dual Inverse Distance Weighting (DIDW) algorithm, embedding the spatial characteristics of sampling points within the probability sampling process. Furthermore, it introduces reward factors derived from reinforcement learning principles to dynamically refine the probability sampling formula. This strategy more effectively captures the essential characteristics of PDEs with each iteration. We utilize sparsely connected networks and have adjusted the sampling process, which has proven to effectively reduce the training time. In numerical experiments on fluid mechanics problems, such as the two-dimensional Burgers’ equation with sharp solutions, pipe flow, flow around a circular cylinder, lid-driven cavity flow, and Kovasznay flow, our proposed adaptive sampling algorithm markedly enhances accuracy over conventional PINN methods, validating the algorithm’s efficacy.
(This article belongs to the Special Issue Physics-Informed Neural Networks)
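For readers unfamiliar with adaptive sampling for PINNs, the sketch below shows only the generic residual-based idea that such methods build on: candidate collocation points with larger PDE residuals are drawn with higher probability, so training effort concentrates where the equation is violated most. This is a broad-principle illustration under assumed names (`residual_adaptive_sample`, the sharpening exponent `power`); the paper's DIDW algorithm additionally weights by spatial position and reinforcement-learning-style reward factors, which are not reproduced here.

```python
import random

def residual_adaptive_sample(candidates, residual, k, power=2.0):
    """Draw k new collocation points with probability proportional to the
    (sharpened) magnitude of the PDE residual at each candidate point."""
    weights = [abs(residual(x)) ** power for x in candidates]
    total = sum(weights)
    probs = [w / total for w in weights]
    return random.choices(candidates, weights=probs, k=k)

# Toy residual profile peaked near x = 0.5, mimicking a sharp solution
# feature such as the steep front of the Burgers' equation.
grid = [i / 100 for i in range(101)]
res = lambda x: 1.0 / (0.01 + (x - 0.5) ** 2)

random.seed(0)
new_points = residual_adaptive_sample(grid, res, k=20)
# Most of the 20 sampled points cluster near x = 0.5, where the residual
# is large, so the next training iteration focuses on that region.
```

Iterating this draw-then-retrain loop is what makes the sampling adaptive: after each training round the residual profile changes, and the next batch of collocation points follows it.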