
Gaussian Fields and Their Application in Computational Engineering and Mathematical Physics

A special issue of Entropy (ISSN 1099-4300). This special issue belongs to the section "Information Theory, Probability and Statistics".

Deadline for manuscript submissions: 30 September 2025 | Viewed by 951

Special Issue Editors


Guest Editor
1. Department of Applied Mathematics, Faculty of Mathematics and Natural Sciences, Kaunas University of Technology, K. Donelaičio g. 73, 44249 Kaunas, Lithuania
2. Lithuanian Energy Institute, Breslaujos St. 3, 44403 Kaunas, Lithuania
Interests: probabilistic risk assessment; complex systems; Bayesian inference; artificial intelligence; big data analytics; data mining; machine learning; artificial neural networks

Guest Editor
1. Department of Engineering, University of Cambridge, Cambridge CB3 0FA, UK
2. Director of the Lloyd's Register Foundation, Turing Programme on Data Centric Engineering, The Alan Turing Institute, The British Library, 96 Euston Road, London NW1 2DB, UK
Interests: machine learning; computational statistics; Bayesian statistics; Monte Carlo methods

Guest Editor
Department of Engineering, Faculty of Environment, Science and Economy, University of Exeter, Exeter EX4 4P, UK
Interests: probabilistic modeling; random fields; Bayesian inference; uncertainty quantification; numerical analysis; computational statistics; data-driven engineering

Special Issue Information

Dear Colleagues,

The real world is inherently uncertain. Therefore, any digital model of a real-world phenomenon should account for this uncertainty. Probabilistic modeling, as a natural way of addressing such uncertainty, enables one to reason under uncertainty and make informed decisions when complete information is unavailable. In recent years, probabilistic modeling has gained substantial attention due to the continuous increase in computing power and the growing availability of data.

Consequently, Gaussian random fields, including one-dimensional Gaussian processes, play a significant role in computational engineering as a key class of probabilistic models. Because of their versatility and flexibility, they are often employed for forecasting, surrogate modeling, and the modeling of population variability.
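As a minimal illustration of the surrogate-modeling role mentioned above, the sketch below computes the posterior mean and variance of a one-dimensional Gaussian process regression. The squared-exponential kernel, its hyperparameters, and the toy data are assumptions made here for illustration; they are not prescribed by this Special Issue.

```python
import numpy as np

def sq_exp_kernel(a, b, length_scale=0.2, variance=1.0):
    """k(x, x') = s^2 exp(-(x - x')^2 / (2 l^2)). Illustrative choice."""
    d2 = (a[:, None] - b[None, :]) ** 2
    return variance * np.exp(-0.5 * d2 / length_scale**2)

def gp_posterior(x_train, y_train, x_test, noise=1e-2):
    """Posterior mean and pointwise variance of a zero-mean GP."""
    K = sq_exp_kernel(x_train, x_train) + noise * np.eye(len(x_train))
    Ks = sq_exp_kernel(x_train, x_test)
    L = np.linalg.cholesky(K)                      # O(N^3) for an exact GP
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_train))
    mean = Ks.T @ alpha
    v = np.linalg.solve(L, Ks)
    var = np.diag(sq_exp_kernel(x_test, x_test)) - np.sum(v**2, axis=0)
    return mean, var

x = np.linspace(0.0, 1.0, 20)
y = np.sin(2 * np.pi * x)                          # noise-free toy signal
mean, var = gp_posterior(x, y, np.array([0.25, 0.75]))
```

The O(N^3) Cholesky factorization in this exact formulation is precisely the computational bottleneck that scalable Gaussian-field methods aim to remove.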

Despite these advances, challenges such as computational efficiency and the need for physically viable fields keep Gaussian fields from reaching their full potential in computational engineering and mathematical physics. This Special Issue aims to address these theoretical challenges as well as various applications of Gaussian processes. We therefore seek contributions on topics that include, but are not limited to, the following themes:

  • Constrained Gaussian fields (e.g., embedded information and monotonic Gaussian fields);
  • Non-Gaussian fields and applications beyond Gaussianity (e.g., truncated and transformed fields);
  • Gaussian processes and applications for the dimensionality reduction of large-scale problems;
  • Scalable Gaussian fields for big data (e.g., state-of-the-art decompositions);
  • Uncertainty incorporation, extrapolation, and pattern discovery;
  • Multi-output Gaussian fields and applications.

Prof. Dr. Robertas Alzbutas
Prof. Dr. Mark Girolami
Dr. Hussein Rappel
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles as well as short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Entropy is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Gaussian fields
  • probabilistic modeling
  • Bayesian inference
  • forecasting

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (2 papers)


Research

21 pages, 920 KiB  
Article
Decomposed Gaussian Processes for Efficient Regression Models with Low Complexity
by Anis Fradi, Tien-Tam Tran and Chafik Samir
Entropy 2025, 27(4), 393; https://doi.org/10.3390/e27040393 - 7 Apr 2025
Viewed by 45
Abstract
In this paper, we address the challenges of inferring and learning from a substantial number of observations (N ≫ 1) with a Gaussian process regression model. First, we propose a flexible construction of well-adapted covariances originally derived from specific differential operators. Second, we prove its convergence and show its low computational cost, scaling as O(Nm²) for inference and O(m³) for learning, instead of O(N³) for a canonical Gaussian process, where m ≪ N. Moreover, we develop an implementation that requires less memory: O(m²) instead of O(N²). Finally, we demonstrate the effectiveness of the proposed method with simulation studies and experiments on real data. In addition, we conduct a comparative study with the aim of situating it in relation to certain cutting-edge methods. Full article
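The O(Nm²) scaling cited in the abstract is characteristic of low-rank Gaussian process approximations. The sketch below uses a standard subset-of-regressors (Nyström-type) approximation with m inducing inputs to illustrate that complexity argument; it is not the covariance construction proposed by Fradi et al., and the data and hyperparameters are illustrative assumptions.

```python
import numpy as np

def kernel(a, b, ls=0.15):
    """Squared-exponential kernel; an illustrative choice."""
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ls**2)

def sor_gp_mean(x, y, x_test, m=20, noise=1e-2):
    """Subset-of-regressors predictive mean with m inducing inputs."""
    z = np.linspace(x.min(), x.max(), m)           # m inducing inputs
    Kmm = kernel(z, z)
    Knm = kernel(x, z)                             # N x m cross-covariance
    # Forming the m x m system costs O(N m^2) and solving it O(m^3),
    # replacing the O(N^3) solve of an exact GP when m << N.
    A = noise * Kmm + Knm.T @ Knm + 1e-8 * np.eye(m)
    w = np.linalg.solve(A, Knm.T @ y)
    return kernel(x_test, z) @ w

x = np.linspace(0.0, 1.0, 500)                     # N = 500 observations
y = np.cos(2 * np.pi * x)
pred = sor_gp_mean(x, y, np.array([0.0, 0.5]))
```

Only the m × m matrix A is ever factorized, which is also where the O(m²) memory footprint comes from.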

25 pages, 2080 KiB  
Article
Multilabel Classification for Entry-Dependent Expert Selection in Distributed Gaussian Processes
by Hamed Jalali and Gjergji Kasneci
Entropy 2025, 27(3), 307; https://doi.org/10.3390/e27030307 - 14 Mar 2025
Viewed by 301
Abstract
By distributing the training process, local approximation reduces the cost of the standard Gaussian process. An ensemble method aggregates predictions from local Gaussian experts, each trained on different data partitions, under the assumption of perfect diversity among them. While this assumption ensures tractable aggregation, it is frequently violated in practice. Although ensemble methods provide consistent results by modeling dependencies among experts, they incur a high computational cost, scaling cubically with the number of experts. Implementing an expert-selection strategy reduces the number of experts involved in the final aggregation step, thereby improving efficiency. However, selection approaches that assign a fixed set of experts to each data point cannot account for the unique properties of individual data points. This paper introduces a flexible expert-selection approach tailored to the characteristics of individual data points. To achieve this, we frame the selection task as a multi-label classification problem in which experts define the labels, and each data point is associated with specific experts. We discuss in detail the prediction quality, efficiency, and asymptotic properties of the proposed solution. We demonstrate the efficiency of the proposed method through extensive numerical experiments on synthetic and real-world datasets. This strategy is easily extendable to distributed learning scenarios and multi-agent models, regardless of Gaussian assumptions regarding the experts. Full article
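A toy version of entry-dependent expert selection can convey the idea: each expert is an exact GP trained on one data partition, and each test point queries only the experts "labeled" for it. The paper's learned multi-label classifier is replaced here by a simple nearest-centroid rule purely for illustration; the data and all settings are assumptions of this sketch.

```python
import numpy as np

def gp_mean(xtr, ytr, xte, ls=0.2, noise=1e-2):
    """Exact GP predictive mean for one local expert."""
    k = lambda a, b: np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ls**2)
    K = k(xtr, xtr) + noise * np.eye(len(xtr))
    return k(xte, xtr) @ np.linalg.solve(K, ytr)

rng = np.random.default_rng(0)
x = np.sort(rng.uniform(0.0, 1.0, 300))
y = np.sin(2 * np.pi * x)
parts = np.array_split(np.arange(300), 6)          # six local experts
centroids = np.array([x[p].mean() for p in parts])

def predict(x_test, k_experts=2):
    """Aggregate only the experts selected for each individual entry."""
    out = np.zeros_like(x_test)
    for i, xt in enumerate(x_test):
        # Per-point "labels": a stand-in for the learned classifier.
        sel = np.argsort(np.abs(centroids - xt))[:k_experts]
        preds = [gp_mean(x[parts[j]], y[parts[j]], np.array([xt]))[0]
                 for j in sel]
        out[i] = np.mean(preds)                    # simple aggregation
    return out

pred = predict(np.array([0.25, 0.75]))
```

Because each entry touches only k of the six experts, the aggregation cost no longer grows with the full expert count, which is the efficiency gain the abstract describes.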
