Phys. Sci. Forum, Volume 9 (2023): MaxEnt 2023

The 42nd International Workshop on Bayesian Inference and Maximum Entropy Methods in Science and Engineering

Garching, Germany | 3–7 July 2023

Volume Editors:
Roland Preuss, Max Planck Institute for Plasma Physics, Germany
Udo von Toussaint, Max Planck Institute for Plasma Physics, Germany

Number of Papers: 26
Cover Story: The 42nd International Workshop on Bayesian Inference and Maximum Entropy Methods in Science and Engineering continued a long series of MaxEnt workshops that started in the late 1970s of the previous century.

Editorial


3 pages, 1339 KiB  
Editorial
Preface of the 42nd International Workshop on Bayesian Inference and Maximum Entropy Methods in Science and Engineering—MaxEnt 2023
by Udo von Toussaint and Roland Preuss
Phys. Sci. Forum 2023, 9(1), 1; https://doi.org/10.3390/psf2023009001 - 23 Nov 2023
Abstract
The forty-second International Conference on Bayesian Inference and Maximum Entropy Methods in Science and Engineering (42nd MaxEnt’23) was held at the Max Planck Institute for Plasma Physics (IPP) in Garching, Germany, from 3 to 7 July 2023 (https://www [...] Full article

Other


8 pages, 465 KiB  
Proceeding Paper
An Iterative Bayesian Algorithm for 3D Image Reconstruction Using Multi-View Compton Data
by Nhan Le, Hichem Snoussi and Alain Iltis
Phys. Sci. Forum 2023, 9(1), 2; https://doi.org/10.3390/psf2023009002 - 24 Nov 2023
Abstract
Conventional maximum likelihood-based algorithms for 3D Compton image reconstruction often suffer from slow convergence and large data volumes, which can be unsuitable for some practical applications, such as nuclear engineering. Taking advantage of the Bayesian framework, we propose in this paper a fast-converging iterative maximum a posteriori reconstruction algorithm under the assumption of a Poisson data model and a Markov random field-based convex prior. The main originality resides in a new iterative maximization scheme with simultaneous updates following a line search strategy to bypass the spatial dependencies among neighboring voxels. Numerical experiments on real datasets, conducted with hand-held Temporal Compton cameras developed by the Damavan Imaging company and point-like 0.2 MBq 22Na sources under a zero-mean Gaussian Markov random field prior, confirm that the proposed maximum a posteriori algorithm outperforms various existing expectation–maximization-type solutions. Full article
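
A minimal numpy sketch of the classical MLEM (expectation–maximization) fixed-point update for Poisson inverse problems, the family of baselines the proposed MAP algorithm is compared against; the system matrix and source below are toy assumptions, not the paper's Compton camera model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy Poisson inverse problem: y ~ Poisson(A @ lam_true)
n_detectors, n_voxels = 200, 50
A = rng.uniform(0.0, 1.0, (n_detectors, n_voxels))   # toy system matrix
lam_true = rng.uniform(0.5, 2.0, n_voxels)           # toy emission densities
y = rng.poisson(A @ lam_true)

# Classical MLEM: multiplicative fixed-point update of the Poisson log-likelihood
lam = np.ones(n_voxels)
sensitivity = A.sum(axis=0)                          # column sums, A^T 1
for _ in range(200):
    ratio = y / np.clip(A @ lam, 1e-12, None)        # data / forward projection
    lam *= (A.T @ ratio) / sensitivity               # back-project and normalize

print("relative error:", np.linalg.norm(lam - lam_true) / np.linalg.norm(lam_true))
```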

9 pages, 1744 KiB  
Proceeding Paper
Behavioral Influence of Social Self Perception in a Sociophysical Simulation
by Fabian Sigler, Viktoria Kainz, Torsten Enßlin, Céline Boehm and Sonja Utz
Phys. Sci. Forum 2023, 9(1), 3; https://doi.org/10.3390/psf2023009003 - 24 Nov 2023
Abstract
Humans make decisions about their actions based on a combination of their objectives and their knowledge about the state of the world surrounding them. In social interactions, one prevalent goal is the ambition to be perceived as an honest, trustworthy person, in the sense of having a reputation for frequently making true statements. Pursuing this goal requires deciding whether to communicate information truthfully or whether deceptive lies might improve the reputation even more. The basis of this decision involves not only an individual’s belief about others, but also their understanding of others’ beliefs, described by the concept of Theory of Mind, and the mental processes from which these beliefs emerge. In the present work, we used the Reputation Game Simulation as an approach for modeling the evolution of reputation in agent-based social communication networks, in which agents treat information approximately according to Bayesian logic. We implemented a second-order Theory of Mind based message decision strategy that allows the agents to mentally simulate the impact of different communication options on the knowledge in their counterparts’ minds, in order to identify the message that is expected to maximize their reputation. Analysis of the resulting communication patterns showed that deception was chosen more frequently than the truthful communication option. However, the efficacy of such deceptive behavior turned out to correlate strongly with the accuracy of the agents’ Theory of Mind representation. Full article

10 pages, 7748 KiB  
Proceeding Paper
Uncertainty Quantification with Deep Ensemble Methods for Super-Resolution of Sentinel 2 Satellite Images
by David Iagaru and Nina Maria Gottschling
Phys. Sci. Forum 2023, 9(1), 4; https://doi.org/10.3390/psf2023009004 - 27 Nov 2023
Abstract
The recently deployed Sentinel 2 satellite constellation produces images in 13 wavelength bands with a Ground Sampling Distance (GSD) of 10 m, 20 m, and 60 m. Super-resolution aims to generate all 13 bands with a spatial resolution of 10 m. This paper investigates the performance of DSen2, a convolutional neural network (CNN)-based method for super-resolution, in terms of accuracy and uncertainty. As the optimization problem for obtaining the weights of a CNN is highly non-convex, the loss function has multiple different local minima. This results in several possible CNN models with different weights and thus implies epistemic uncertainty. In this work, methods to quantify epistemic uncertainty, termed weighted deep ensembles (WDESen2 and its variants), are proposed. These allow the quantification of predictive uncertainty estimates and, moreover, the improvement of predictive accuracy by selective prediction. They build on deep ensembles, weighting each model’s importance by its validation loss. We show that weighted deep ensembles improve the accuracy of prediction compared to state-of-the-art methods and plain deep ensembles. Moreover, the uncertainties can be linked to the underlying inverse problem and physical patterns on the ground. This allows us to improve the trustworthiness of CNN predictions and the predictive accuracy with selective prediction. Full article
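
A minimal sketch of how a weighted deep ensemble can aggregate member predictions and expose epistemic uncertainty for selective prediction; the softmax-of-negative-validation-loss weighting is an illustrative assumption, not necessarily the exact WDESen2 scheme.

```python
import numpy as np

def weighted_ensemble(predictions, val_losses):
    """Combine ensemble member predictions, weighted by validation loss.

    predictions : array (n_models, ...) of per-model predictions
    val_losses  : array (n_models,) of per-model validation losses
    Returns the weighted mean and a weighted-std epistemic uncertainty map.
    """
    w = np.exp(-np.asarray(val_losses, dtype=float))
    w /= w.sum()                                        # normalized weights
    w = w.reshape((-1,) + (1,) * (predictions.ndim - 1))
    mean = (w * predictions).sum(axis=0)
    var = (w * (predictions - mean) ** 2).sum(axis=0)
    return mean, np.sqrt(var)

# Toy usage: 5 "models" predicting a 4x4 band patch
preds = np.random.default_rng(1).normal(1.0, 0.1, (5, 4, 4))
mean, sigma = weighted_ensemble(preds, val_losses=[0.30, 0.25, 0.40, 0.28, 0.33])
# Selective prediction: keep only pixels whose uncertainty is below a threshold
mask = sigma < np.quantile(sigma, 0.8)
```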

10 pages, 1396 KiB  
Proceeding Paper
Geodesic Least Squares: Robust Regression Using Information Geometry
by Geert Verdoolaege
Phys. Sci. Forum 2023, 9(1), 5; https://doi.org/10.3390/psf2023009005 - 27 Nov 2023
Abstract
Geodesic least squares (GLS) is a regression technique that operates in spaces of probability distributions. Based on the minimization of the Rao geodesic distance between two probability models of the response variable, GLS is robust against outliers and model misspecification. The method is very simple, without any tuning parameters, owing to its solid foundations rooted in information geometry. Here, we illustrate the robustness properties of GLS using applications in the fields of magnetic confinement fusion and astrophysics. Additional interpretation is gained from visualizations using several models for the manifold of Gaussian probability distributions. Full article
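
As a worked illustration of the underlying geometry: for univariate Gaussians, the Rao geodesic distance has a closed form via the Poincaré half-plane model of the Gaussian manifold. A small sketch of this standard information-geometry result, independent of the paper's specific applications:

```python
import numpy as np

def rao_distance_gauss(mu1, sigma1, mu2, sigma2):
    """Fisher-Rao geodesic distance between N(mu1, sigma1^2) and N(mu2, sigma2^2).

    With the Fisher metric ds^2 = (dmu^2 + 2 dsigma^2) / sigma^2, the Gaussian
    manifold is a scaled hyperbolic half-plane, giving this closed form.
    """
    num = (mu1 - mu2) ** 2 / 2.0 + (sigma1 - sigma2) ** 2
    return np.sqrt(2.0) * np.arccosh(1.0 + num / (2.0 * sigma1 * sigma2))

# The distance between far-apart means grows only slowly at large sigma,
# one intuition for the robustness of geodesic-distance-based regression.
print(rao_distance_gauss(0.0, 1.0, 3.0, 1.0))
print(rao_distance_gauss(0.0, 10.0, 3.0, 10.0))  # same mean offset, broader models
```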

11 pages, 2989 KiB  
Proceeding Paper
Magnetohydrodynamic Equilibrium Reconstruction with Consistent Uncertainties
by Robert Köberl, Robert Babin and Christopher G. Albert
Phys. Sci. Forum 2023, 9(1), 6; https://doi.org/10.3390/psf2023009006 - 27 Nov 2023
Abstract
We report on progress towards a probabilistic framework for consistent uncertainty quantification and propagation in the analysis and numerical modeling of physics in magnetically confined plasmas in the stellarator configuration. A frequent starting point in this process is the calculation of a magnetohydrodynamic equilibrium from plasma profiles. Profiles, and thus the equilibrium, are typically reconstructed from experimental data. What sets equilibrium reconstruction apart from usual inverse problems is that profiles are given as functions over a magnetic flux derived from the magnetic field, rather than over spatial coordinates. This makes it a fixed-point problem that is traditionally left inconsistent or solved iteratively in a least-squares sense. The aim here is to progress towards a straightforward and transparent process for quantifying and propagating uncertainties and their correlations for function-valued fields and profiles in this setting. We propose a framework that utilizes a low-dimensional prior distribution of equilibria, constructed with principal component analysis. A surrogate of the forward model is trained to enable faster sampling. Full article
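
A minimal sketch of one ingredient described here: building a low-dimensional prior from precomputed equilibria with principal component analysis and drawing new candidates from it. The random stand-in database and the Gaussian coefficient prior are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

# Stand-in "equilibrium database": each row is a discretized profile/field
database = rng.normal(0.0, 1.0, (500, 40)) @ rng.normal(0.0, 1.0, (40, 40))

mean = database.mean(axis=0)
U, s, Vt = np.linalg.svd(database - mean, full_matrices=False)
k = 5                                       # retained principal components
components = Vt[:k]                         # (k, 40) basis of the prior subspace
scales = s[:k] / np.sqrt(len(database))     # std dev of each PCA coefficient

def sample_prior(n):
    """Draw n samples from the low-dimensional PCA prior (Gaussian coefficients)."""
    coeffs = rng.normal(0.0, scales, (n, k))
    return mean + coeffs @ components

candidates = sample_prior(10)               # inputs for the forward-model surrogate
```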

8 pages, 626 KiB  
Proceeding Paper
Improving Inferences about Exoplanet Habitability
by Risinie D. Perera and Kevin H. Knuth
Phys. Sci. Forum 2023, 9(1), 7; https://doi.org/10.3390/psf2023009007 - 27 Nov 2023
Cited by 1
Abstract
Assessing the habitability of exoplanets (planets orbiting other stars) is of great importance in deciding which planets warrant further careful study. Planets in the habitable zones of stars like our Sun are sufficiently far away from the star that the light rays from the star can be assumed to be parallel, leading to straightforward analytic models for stellar illumination of the planet’s surface. However, for planets in the close-in habitable zones of dim red dwarf stars, such as the potentially habitable planet orbiting our nearest stellar neighbor, Proxima Centauri, the analytic illumination models based on the parallel-ray approximation do not hold, resulting in severe biases in the estimates of the planetary characteristics, thus impacting efforts to understand the planet’s atmosphere and climate. In this paper, we present our efforts to improve the instellation (stellar illumination) models for close-in orbiting planets and the significance of implementing these improved models in EXONEST, a Bayesian machine learning application for exoplanet characterization. The ultimate goal is to use these improved models and parameter estimates to model the climates of close-in orbiting exoplanets using planetary General Circulation Models (GCMs). More specifically, we are working to apply our instellation corrections to the NASA ROCKE-3D GCM to study the climates of the potentially habitable planets in the TRAPPIST-1 system. Full article

9 pages, 582 KiB  
Proceeding Paper
Bayesian Model Selection and Parameter Estimation for Complex Impedance Spectroscopy Data of Endothelial Cell Monolayers
by Franziska Zimmermann, Frauke Viola Härtel, Anupam Das, Thomas Noll and Peter Dieterich
Phys. Sci. Forum 2023, 9(1), 8; https://doi.org/10.3390/psf2023009008 - 28 Nov 2023
Abstract
Endothelial barrier function can be quantified by determining the transendothelial resistance (TER) via impedance spectroscopy. However, the TER can only be obtained indirectly, based on a mathematical model. Models usually comprise a series of resistances in parallel with capacitors (RC-circuits), one each for the cell layer (including the TER) and the filter substrate, one resistance (R) for the medium, and a constant phase element (CPE) for the electrode–electrolyte interface. We applied Bayesian data analysis to a variety of model variants. Phase and absolute values of impedance data were acquired over time by a commercial device for measurements of pure medium, medium and raw filter, and medium with cell-covered filter stimulated with different agents. Medium and raw filter were best described by a series of four and three RC-circuits, respectively. Parameter estimation of the TER showed a concentration-dependent decrease in response to thrombin. Model comparison indicated that even high concentrations of thrombin did not fully disrupt the endothelial barrier. Insights into the biophysical meaning of the model parameters were gained through complementary cell-free measurements with sodium chloride. In summary, Bayesian analysis allows for valid parameter estimation and the selection of models of different complexity under various experimental conditions to characterize endothelial barrier function. Full article
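
A minimal sketch of the equivalent-circuit impedance model family described here (series RC-circuits plus a medium resistance and a CPE); all component values below are illustrative assumptions.

```python
import numpy as np

def impedance(omega, R_med, rc_pairs, Q, alpha):
    """Total impedance: medium resistance + series RC-circuits + CPE.

    rc_pairs : list of (R, C) for each parallel resistor-capacitor element
    Q, alpha : constant phase element, Z_CPE = 1 / (Q * (i*omega)**alpha)
    """
    Z = R_med + 1.0 / (Q * (1j * omega) ** alpha)
    for R, C in rc_pairs:
        Z += R / (1.0 + 1j * omega * R * C)       # parallel RC element
    return Z

freqs = np.logspace(1, 6, 200)                     # Hz
omega = 2 * np.pi * freqs
# Toy values: one RC element for the cell layer (TER), one for the filter
Z = impedance(omega, R_med=20.0, rc_pairs=[(30.0, 1e-6), (10.0, 5e-7)],
              Q=1e-5, alpha=0.8)
magnitude, phase = np.abs(Z), np.angle(Z, deg=True)   # what the device records
```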

9 pages, 12026 KiB  
Proceeding Paper
A Bayesian Data Analysis Method for an Experiment to Measure the Gravitational Acceleration of Antihydrogen
by Danielle Hodgkinson, Joel Fajans and Jonathan S. Wurtele
Phys. Sci. Forum 2023, 9(1), 9; https://doi.org/10.3390/psf2023009009 - 28 Nov 2023
Abstract
The ALPHA-g experiment at CERN intends to observe the effect of gravity on antihydrogen. In ALPHA-g, antihydrogen is confined to a magnetic trap with an axis aligned parallel to the Earth’s gravitational field. An imposed difference in the magnetic field of the confining coils above and below the trapping region, known as a bias, can be delicately adjusted to compensate for the gravitational potential experienced by the trapped anti-atoms. With the bias maintained, the magnetic fields of the coils can be ramped down slowly compared to the anti-atom motion; this releases the antihydrogen and leads to annihilations on the walls of the apparatus, which are detected by a position-sensitive detector. If the bias cancels out the gravitational potential, antihydrogen will escape the trap upwards or downwards with equal probability. Determining the downward (or upward) escape probability, p, from observed annihilations is non-trivial because the annihilation detection efficiency may be up–down asymmetric; some small fraction of antihydrogen escaping downwards may be detected in the upper region (and vice versa) meaning that the precise number of trapped antihydrogen atoms is unknown. In addition, cosmic rays passing through the apparatus lead to a background annihilation rate, which may also be up–down asymmetric. We present a Bayesian method to determine p by assuming annihilations detected in the upper and lower regions are independently Poisson distributed, with the Poisson mean expressed in terms of experimental quantities. We solve for the posterior p using the Markov chain Monte Carlo integration package, Stan. Further, we present a method to determine the gravitational acceleration of antihydrogen, ag, by modifying the analysis described above to include simulation results. In the modified analysis, p is replaced by the simulated probability of downward escape, which is a function of ag. Full article
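
A minimal grid-based sketch of a Poisson model of this kind. The actual analysis uses Stan, treats cross-detection between regions, and takes the trapped-atom number as unknown; the efficiencies, backgrounds, and counts below are toy assumptions.

```python
import numpy as np
from scipy.stats import poisson

# Toy experimental quantities (assumptions, not ALPHA-g values)
N_trapped = 100.0            # trapped anti-atoms (assumed known here)
eff_down, eff_up = 0.8, 0.7  # detection efficiencies in lower/upper regions
bkg_down, bkg_up = 2.0, 3.0  # expected cosmic-ray background counts
n_down, n_up = 45, 38        # observed annihilation counts

# Posterior over p (downward escape probability) on a grid, flat prior
p = np.linspace(1e-3, 1 - 1e-3, 1000)
mu_down = N_trapped * p * eff_down + bkg_down
mu_up = N_trapped * (1 - p) * eff_up + bkg_up
log_post = poisson.logpmf(n_down, mu_down) + poisson.logpmf(n_up, mu_up)
post = np.exp(log_post - log_post.max())
post /= np.trapz(post, p)                 # normalize on the grid

p_mean = np.trapz(p * post, p)            # posterior mean of p
```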

10 pages, 963 KiB  
Proceeding Paper
Learned Harmonic Mean Estimation of the Marginal Likelihood with Normalizing Flows
by Alicja Polanska, Matthew A. Price, Alessio Spurio Mancini and Jason D. McEwen
Phys. Sci. Forum 2023, 9(1), 10; https://doi.org/10.3390/psf2023009010 - 29 Nov 2023
Cited by 1
Abstract
Computing the marginal likelihood (also called the Bayesian model evidence) is an important task in Bayesian model selection, providing a principled quantitative way to compare models. The learned harmonic mean estimator solves the exploding variance problem of the original harmonic mean estimation of the marginal likelihood. The learned harmonic mean estimator learns an importance sampling target distribution that approximates the optimal distribution. While the approximation need not be highly accurate, it is critical that the probability mass of the learned distribution is contained within the posterior in order to avoid the exploding variance problem. In previous work, a bespoke optimization problem is introduced when training models in order to ensure this property is satisfied. In the current article, we introduce the use of normalizing flows to represent the importance sampling target distribution. A flow-based model is trained on samples from the posterior by maximum likelihood estimation. Then, the probability density of the flow is concentrated by lowering the variance of the base distribution, i.e., by lowering its “temperature”, ensuring that its probability mass is contained within the posterior. This approach avoids the need for a bespoke optimization problem and careful fine tuning of parameters, resulting in a more robust method. Moreover, the use of normalizing flows has the potential to scale to high dimensional settings. We present preliminary experiments demonstrating the effectiveness of the use of flows for the learned harmonic mean estimator. The harmonic code implementing the learned harmonic mean, which is publicly available, has been updated to now support normalizing flows. Full article
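
A minimal one-dimensional sketch of the learned harmonic mean idea, with a temperature-narrowed Gaussian standing in for the normalizing flow; a toy illustration, not the harmonic package itself.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(3)

# Toy problem: prior N(0, 2^2), likelihood N(1, 0.5^2) -> evidence is analytic
prior, like = norm(0.0, 2.0), norm(1.0, 0.5)
post_var = 1.0 / (1 / 2.0**2 + 1 / 0.5**2)
post_mean = post_var * (1.0 / 0.5**2)
z_true = norm(0.0, np.sqrt(2.0**2 + 0.5**2)).pdf(1.0)  # marginal likelihood

samples = rng.normal(post_mean, np.sqrt(post_var), 100_000)  # posterior samples

# "Learned" target phi: fit a Gaussian to the samples, then lower its
# temperature (T < 1) so its probability mass stays inside the posterior
T = 0.8
phi = norm(samples.mean(), np.sqrt(T) * samples.std())

# Harmonic mean identity: E_post[ phi / (L * prior) ] = 1 / Z
recip_z = np.mean(phi.pdf(samples) / (like.pdf(samples) * prior.pdf(samples)))
print(1.0 / recip_z, "vs analytic", z_true)
```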

9 pages, 3921 KiB  
Proceeding Paper
Quantification of Endothelial Cell Migration Dynamics Using Bayesian Data Analysis
by Anselm Hohlstamm, Andreas Deussen, Stephan Speier and Peter Dieterich
Phys. Sci. Forum 2023, 9(1), 11; https://doi.org/10.3390/psf2023009011 - 30 Nov 2023
Abstract
Endothelial cells maintain a tight and adaptive inner cell layer in blood vessels. In doing so, the cells develop complex dynamics by integrating active individual and collective cell migration, cell-cell interactions, and interactions with external stimuli. The aim of this study is to quantify and model these underlying dynamics. To this end, we seeded and stained human umbilical vein endothelial cells (HUVECs) and recorded their positions every 10 min for 48 h via live-cell imaging. After image segmentation and tracking of several tens of thousands of cells, we applied Bayesian data analysis to models assessing the experimentally obtained cell trajectories. By analyzing the mean squared velocities, we found a dependence on the local cell density. Based on this connection, we developed a model that approximates the time-dependent frequency of cell divisions. Furthermore, we determined two different phases of velocity deceleration, which are influenced by the emergence of correlated cell movements and time-dependent aging in this non-stationary system. By integrating the findings of correlation functions, we will be able to develop a comprehensive model to improve the understanding of endothelial cell migration in the future. Full article

10 pages, 626 KiB  
Proceeding Paper
Variational Bayesian Approximation (VBA) with Exponential Families and Covariance Estimation
by Seyedeh Azadeh Fallah Mortezanejad and Ali Mohammad-Djafari
Phys. Sci. Forum 2023, 9(1), 12; https://doi.org/10.3390/psf2023009012 - 30 Nov 2023
Abstract
Variational Bayesian Approximation (VBA) is a fast technique for approximating Bayesian computation. The main idea is to assess the joint posterior distribution of all the unknown variables with a simple expression. Mean-Field Variational Bayesian Approximation (MFVBA) is a particular case developed for large-scale problems where the approximated probability law is separable in all variables. A well-known drawback of MFVBA is that it tends to underestimate the variances of the variables, even though it estimates the means well. This can lead to poor inference results. A fixed-point algorithm can be obtained to evaluate the means in exponential families for the approximating distribution. However, this does not solve the problem of underestimating the variances. In this paper, we propose a modified method of VBA with exponential families to first estimate the posterior mean and then improve the estimation of the posterior covariance. We demonstrate the performance of the procedure with an example. Full article
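
A small numerical illustration of the variance-underestimation issue: for a correlated Gaussian posterior, the mean-field factorized approximation recovers the means exactly, but its variances are the inverse diagonal precisions 1/Lambda_ii, which never exceed the true marginal variances Sigma_ii. (A toy demonstration of the problem, not the paper's corrected method.)

```python
import numpy as np

# True 2D Gaussian posterior with strong correlation
Sigma = np.array([[1.0, 0.9],
                  [0.9, 1.0]])
Lambda = np.linalg.inv(Sigma)            # precision matrix

true_marginal_var = np.diag(Sigma)       # what the posterior really has
mean_field_var = 1.0 / np.diag(Lambda)   # what factorized VBA converges to

print("true marginal variances:", true_marginal_var)   # [1.0, 1.0]
print("mean-field variances:   ", mean_field_var)      # [0.19, 0.19]
```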

10 pages, 574 KiB  
Proceeding Paper
Proximal Nested Sampling with Data-Driven Priors for Physical Scientists
by Jason D. McEwen, Tobías I. Liaudat, Matthew A. Price, Xiaohao Cai and Marcelo Pereyra
Phys. Sci. Forum 2023, 9(1), 13; https://doi.org/10.3390/psf2023009013 - 1 Dec 2023
Cited by 2
Abstract
Proximal nested sampling was introduced recently to open up Bayesian model selection for high-dimensional problems such as computational imaging. The framework is suitable for models with a log-convex likelihood, which are ubiquitous in the imaging sciences. The purpose of this article is two-fold. First, we review proximal nested sampling in a pedagogical manner in an attempt to elucidate the framework for physical scientists. Second, we show how proximal nested sampling can be extended in an empirical Bayes setting to support data-driven priors, such as deep neural networks learned from training data. Full article
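
For readers unfamiliar with the "proximal" ingredient: the proximity operator prox_{lam*g}(x) = argmin_u { g(u) + ||u - x||^2 / (2*lam) } generalizes gradient steps to non-smooth convex terms, and for g = ||.||_1 it reduces to soft-thresholding. A minimal sketch of this generic building block, not the paper's full sampler:

```python
import numpy as np

def prox_l1(x, lam):
    """Proximity operator of lam * ||.||_1: elementwise soft-thresholding."""
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def prox_gradient_step(x, grad_log_like, lam, step):
    """One proximal-gradient-style move: follow the (log-convex) likelihood
    gradient, then apply the prior's proximity operator, the kind of update
    used inside proximal MCMC and proximal nested sampling."""
    return prox_l1(x + step * grad_log_like(x), lam)

x = np.array([1.5, -0.3, 0.05])
print(prox_l1(x, 0.2))   # -> [1.3, -0.1, 0.0]
```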

13 pages, 1585 KiB  
Proceeding Paper
Bayesian Inference and Deep Learning for Inverse Problems
by Ali Mohammad-Djafari, Ning Chu, Li Wang and Liang Yu
Phys. Sci. Forum 2023, 9(1), 14; https://doi.org/10.3390/psf2023009014 - 1 Dec 2023
Abstract
Inverse problems arise whenever we have an indirect measurement. In general, they are ill-posed, so obtaining satisfactory solutions requires prior knowledge. Classically, different regularization methods and Bayesian inference-based methods have been proposed. As these methods need a great number of forward and backward computations, they become computationally costly, particularly when the forward or generative models are complex and the evaluation of the likelihood becomes very expensive. Deep neural network surrogate models and approximate computation can then be very helpful. However, to account for the uncertainties, we first need to understand Bayesian deep learning, and then we can see how to use it for inverse problems. In this work, we focus on neural networks (NNs), deep learning (DL) and, more specifically, Bayesian DL adapted for inverse problems. We first give details of Bayesian DL approximate computations with exponential families; then, we see how we can use them for inverse problems. We consider two cases: first, the case where the forward operator is known and used as a physics constraint; second, more general data-driven DL methods. Full article

11 pages, 2314 KiB  
Proceeding Paper
Physics-Consistency Condition for Infinite Neural Networks and Experimental Characterization
by Sascha Ranftl and Shaoheng Guan
Phys. Sci. Forum 2023, 9(1), 15; https://doi.org/10.3390/psf2023009015 - 4 Dec 2023
Abstract
It has previously been shown that prior physics knowledge can be incorporated into the structure of an artificial neural network via neural activation functions based on (i) the correspondence under the infinite-width limit between neural networks and Gaussian processes if the central limit theorem holds and (ii) the construction of physics-consistent Gaussian process kernels, i.e., specialized covariance functions that ensure that the Gaussian process fulfills a priori some linear (differential) equation. Such regression models can be useful in many-query problems, e.g., inverse problems, uncertainty quantification or optimization, when a single forward solution or likelihood evaluation is costly. Based on a small set of training data, the learned model or “surrogate” can then be used as a fast approximator. The bottleneck is then for the surrogate to also learn efficiently and effectively from small data sets while at the same time ensuring physically consistent predictions. Based on this, we will further explore the properties of so-constructed neural networks. In particular, we will characterize (i) generalization behavior and (ii) the approximation quality or Gaussianity as a function of network width and discuss (iii) extensions from shallow to deep NNs. Full article

9 pages, 285 KiB  
Proceeding Paper
Quantum Measurement and Objective Classical Reality
by Vishal Johnson, Philipp Frank and Torsten Enßlin
Phys. Sci. Forum 2023, 9(1), 16; https://doi.org/10.3390/psf2023009016 - 6 Dec 2023
Abstract
We explore quantum measurement in the context of Everettian unitary quantum mechanics and construct an explicit unitary measurement procedure. We propose the existence of prior correlated states that enable this procedure to work and therefore argue that correlation is a resource that is consumed when measurements take place. It is also argued that a network of such measurements establishes a stable objective classical reality. Full article

5 pages, 255 KiB  
Proceeding Paper
Snowballing Nested Sampling
by Johannes Buchner
Phys. Sci. Forum 2023, 9(1), 17; https://doi.org/10.3390/psf2023009017 - 6 Dec 2023
Abstract
A new way to run nested sampling, combined with realistic MCMC proposals to generate new live points, is presented. Nested sampling is run with a fixed number of MCMC steps. Subsequently, snowballing nested sampling extends the run to more and more live points. This stabilizes the MCMC proposals of later iterations and leads to pleasant properties: the number of live points and the number of MCMC steps do not have to be calibrated in advance, and the evidence and posterior approximations improve as more compute is added and can be diagnosed with convergence diagnostics from the MCMC community. Snowballing nested sampling converges to a “perfect” nested sampling run with an infinite number of MCMC steps. Full article

10 pages, 7936 KiB  
Proceeding Paper
A BRAIN Study to Tackle Image Analysis with Artificial Intelligence in the ALMA 2030 Era
by Fabrizia Guglielmetti, Michele Delli Veneri, Ivano Baronchelli, Carmen Blanco, Andrea Dosi, Torsten Enßlin, Vishal Johnson, Giuseppe Longo, Jakob Roth, Felix Stoehr, Łukasz Tychoniec and Eric Villard
Phys. Sci. Forum 2023, 9(1), 18; https://doi.org/10.3390/psf2023009018 - 13 Dec 2023
Abstract
An ESO internal ALMA development study, BRAIN, is addressing the ill-posed inverse problem of synthesis image analysis, employing astrostatistics and astroinformatics. These emerging fields of research offer interdisciplinary approaches at the intersection of observational astronomy, statistics, algorithm development, and data science. In this study, we provide evidence of the benefits of employing these approaches to ALMA imaging for operational and scientific purposes. We show the potential of two techniques, RESOLVE and DeepFocus, applied to ALMA-calibrated science data. Significant advantages are provided, with the prospect of improving the quality and completeness of the data products stored in the science archive as well as the overall processing time for operations. Both approaches point to a logical pathway for addressing the coming revolution in data rates dictated by the planned electronic upgrades. Moreover, we bring additional products to the community through a new package, ALMASim, a refined ALMA simulator usable by a large community for training and testing new algorithms, to promote advancements in these fields. Full article

9 pages, 982 KiB  
Proceeding Paper
Inferring Evidence from Nested Sampling Data via Information Field Theory
by Margret Westerkamp, Jakob Roth, Philipp Frank, Will Handley and Torsten Enßlin
Phys. Sci. Forum 2023, 9(1), 19; https://doi.org/10.3390/psf2023009019 - 13 Dec 2023
Cited by 1
Abstract
Nested sampling provides an estimate of the evidence of a Bayesian inference problem via probing the likelihood as a function of the enclosed prior volume. However, the lack of precise values of the enclosed prior mass of the samples introduces probing noise, which can hamper high-accuracy determinations of the evidence values as estimated from the likelihood-prior-volume function. We introduce an approach based on information field theory, a framework for non-parametric function reconstruction from data, that infers the likelihood-prior-volume function by exploiting its smoothness and thereby aims to improve the evidence calculation. Our method provides posterior samples of the likelihood-prior-volume function that translate into a quantification of the remaining sampling noise for the evidence estimate, or for any other quantity derived from the likelihood-prior-volume function. Full article

8 pages, 2635 KiB  
Proceeding Paper
Knowledge-Based Image Analysis: Bayesian Evidences Enable the Comparison of Different Image Segmentation Pipelines
by Mats Leif Moskopp, Andreas Deussen and Peter Dieterich
Phys. Sci. Forum 2023, 9(1), 20; https://doi.org/10.3390/psf2023009020 - 4 Jan 2024
Abstract
The analysis and evaluation of microscopic image data is essential in life sciences. Increasing temporal and spatial digital image resolution and the size of data sets promotes the necessity of automated image analysis. Previously, our group proposed a Bayesian formalism that allows for converting the experimenter’s knowledge, in the form of a manually segmented image, into machine-readable probability distributions of the parameters of an image segmentation pipeline. This approach preserved the level of detail provided by expert knowledge and interobserver variability and has proven robust to a variety of recording qualities and imaging artifacts. In the present work, Bayesian evidences were used to compare different image processing pipelines. As an illustrative example, a microscopic phase contrast image of a wound healing assay and its manual segmentation by the experimenter (ground truth) are used. Six different variations of image segmentation pipelines are introduced. The aim was to find the image segmentation pipeline that is best to automatically segment the input image given the expert knowledge with respect to the principle of Occam’s razor to avoid unnecessary complexity and computation. While none of the introduced image segmentation pipelines fail completely, it is illustrated that assessing the quality of the image segmentation with the naked eye is not feasible. Bayesian evidence (and the intrinsically estimated uncertainty σ of the image segmentation) is used to choose the best image processing pipeline for the given image. This work illustrates a proof of principle and is extendable to a diverse range of image segmentation problems. Full article

9 pages, 1337 KiB  
Proceeding Paper
Flow Annealed Kalman Inversion for Gradient-Free Inference in Bayesian Inverse Problems
by Richard D. P. Grumitt, Minas Karamanis and Uroš Seljak
Phys. Sci. Forum 2023, 9(1), 21; https://doi.org/10.3390/psf2023009021 - 4 Jan 2024
Abstract
For many scientific inverse problems, we are required to evaluate an expensive forward model. Moreover, the model is often given in such a form that it is unrealistic to access its gradients. In such a scenario, standard Markov Chain Monte Carlo algorithms quickly become impractical, requiring a large number of serial model evaluations to converge on the target distribution. In this paper, we introduce Flow Annealed Kalman Inversion (FAKI). This is a generalization of Ensemble Kalman Inversion (EKI) where we embed the Kalman filter updates in a temperature annealing scheme and use normalizing flows (NFs) to map the intermediate measures corresponding to each temperature level to the standard Gaussian. Thus, we relax the Gaussian ansatz for the intermediate measures used in standard EKI, allowing us to achieve higher-fidelity approximations to non-Gaussian targets. We demonstrate the performance of FAKI on two numerical benchmarks, showing dramatic improvements over standard EKI in terms of accuracy whilst accelerating its already rapid convergence properties (typically in O(10) steps). Full article
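
A minimal sketch of the basic tempered Ensemble Kalman Inversion update that FAKI generalizes, on a linear Gaussian toy problem; the normalizing-flow preconditioning is omitted, and all problem settings are assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)

def eki_step(ensemble, forward, y, Gamma, alpha):
    """One tempered Ensemble Kalman Inversion update (gradient-free).

    ensemble : (J, d) particles;  forward : G(u) -> (m,) model prediction
    y : observed data;  Gamma : data covariance;  alpha : inverse step size
    """
    G = np.array([forward(u) for u in ensemble])           # (J, m)
    du = ensemble - ensemble.mean(axis=0)
    dg = G - G.mean(axis=0)
    J = len(ensemble)
    C_ug = du.T @ dg / (J - 1)                             # cross-covariance
    C_gg = dg.T @ dg / (J - 1)
    K = C_ug @ np.linalg.inv(C_gg + alpha * Gamma)         # Kalman gain
    y_pert = y + rng.multivariate_normal(np.zeros(len(y)), alpha * Gamma, J)
    return ensemble + (y_pert - G) @ K.T

# Toy linear inverse problem: y = A u + noise
A = rng.normal(size=(5, 2))
u_true = np.array([1.0, -2.0])
Gamma = 0.01 * np.eye(5)
y = A @ u_true + rng.multivariate_normal(np.zeros(5), Gamma)

ensemble = rng.normal(size=(100, 2))                       # prior ensemble
for alpha in [4.0, 4.0, 4.0, 4.0]:                         # sum(1/alpha) = 1
    ensemble = eki_step(ensemble, lambda u: A @ u, y, Gamma, alpha)
print(ensemble.mean(axis=0), "vs", u_true)
```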

8 pages, 2089 KiB  
Proceeding Paper
Nested Sampling—The Idea
by John Skilling
Phys. Sci. Forum 2023, 9(1), 22; https://doi.org/10.3390/psf2023009022 - 8 Jan 2024
Abstract
We seek to add up Q = ∫ f dX over unit volume in arbitrary dimension. Nested sampling locates the bulk of Q by geometrical compression, using a Monte Carlo ensemble constrained within a progressively more restrictive lower limit f ≥ f*. This domain is divided into a core f > f* and a shell f = f*, with the core kept adequately populated. Full article
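
A minimal sketch of the compression idea: each iteration removes the lowest-f point, shrinking the enclosed volume by roughly a factor exp(-1/n), and accumulates Q ≈ Σ f* ΔX. (An illustrative implementation with naive rejection-based replacement, not Skilling's full treatment.)

```python
import numpy as np

rng = np.random.default_rng(5)

# Integrand over the unit square; analytically, Q ~ pi/50 for this Gaussian
f = lambda x: np.exp(-50 * ((x[..., 0] - 0.5) ** 2 + (x[..., 1] - 0.5) ** 2))

n_live = 100
live = rng.uniform(size=(n_live, 2))
Q, X_prev = 0.0, 1.0
for k in range(1, 600):
    i = np.argmin(f(live))                  # worst point defines the shell f = f*
    f_star = f(live[i])
    X_k = np.exp(-k / n_live)               # geometric volume compression
    Q += f_star * (X_prev - X_k)            # shell contribution
    X_prev = X_k
    while True:                             # replace it by a point in the core
        cand = rng.uniform(size=2)
        if f(cand) > f_star:
            break
    live[i] = cand
Q += f(live).mean() * X_prev                # remaining core contribution

print(Q, "vs analytic", np.pi / 50)
```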

10 pages, 743 KiB  
Proceeding Paper
Preconditioned Monte Carlo for Gradient-Free Bayesian Inference in the Physical Sciences
by Minas Karamanis and Uroš Seljak
Phys. Sci. Forum 2023, 9(1), 23; https://doi.org/10.3390/psf2023009023 - 9 Jan 2024
Cited by 1
Abstract
We present preconditioned Monte Carlo (PMC), a novel Monte Carlo method for Bayesian inference in complex probability distributions. PMC incorporates a normalizing flow (NF) and an adaptive Sequential Monte Carlo (SMC) scheme, along with a novel past resampling scheme to boost the number of propagated particles without extra computational costs. Additionally, we utilize preconditioned Crank–Nicolson updates, enabling PMC to scale to higher dimensions without the gradient of target distribution. The efficacy of PMC in producing samples, estimating model evidence, and executing robust inference is showcased through two challenging case studies, highlighting its superior performance compared to conventional methods. Full article
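
A minimal sketch of the preconditioned Crank–Nicolson update used inside PMC: the proposal leaves a standard Gaussian prior invariant, so the accept/reject step involves only the likelihood ratio and needs no gradients. (Toy target; the normalizing-flow and SMC machinery are omitted.)

```python
import numpy as np

rng = np.random.default_rng(6)

def pcn_chain(log_like, x0, beta=0.3, n_steps=5000):
    """pCN MCMC targeting exp(log_like(x)) * N(x; 0, I), gradient-free."""
    x, ll = x0.copy(), log_like(x0)
    samples = []
    for _ in range(n_steps):
        prop = np.sqrt(1 - beta**2) * x + beta * rng.standard_normal(x.shape)
        ll_prop = log_like(prop)
        if np.log(rng.uniform()) < ll_prop - ll:  # prior handled by the proposal
            x, ll = prop, ll_prop
        samples.append(x.copy())
    return np.array(samples)

# Toy likelihood shifting the N(0, I) prior toward (1, 1)
log_like = lambda x: -0.5 * np.sum((x - 1.0) ** 2) / 0.5**2
chain = pcn_chain(log_like, x0=np.zeros(2))
print(chain[1000:].mean(axis=0))   # close to the analytic posterior mean (0.8, 0.8)
```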

7 pages, 269 KiB  
Proceeding Paper
Analysis of Ecological Networks: Linear Inverse Modeling and Information Theory Tools
by Valérie Girardin, Théo Grente, Nathalie Niquil and Philippe Regnault
Phys. Sci. Forum 2023, 9(1), 24; https://doi.org/10.3390/psf2023009024 - 20 Feb 2024
Abstract
In marine ecology, the most studied interactions are trophic and are organized in networks called food webs. Trophic modeling is mainly based on weighted networks, where each weighted edge corresponds to a flow of organic matter between two trophic compartments, each containing individuals of similar feeding behaviors and metabolisms and with the same predators. To take into account the unknown flow values within food webs, a class of methods called Linear Inverse Modeling was developed. The set of linear constraints, equations and inequalities, defines a multidimensional bounded convex polyhedron, called a polytope, within which all realistic solutions to the problem lie. To describe this polytope, one possible method is to compute a representative sample of solutions using the Markov Chain Monte Carlo approach. In order to extract a unique solution from the simulated sample, several goal (cost) functions—also called Ecological Network Analysis indices—have been introduced in the literature as criteria of fitness to the ecosystems. These tools are all related to information theory. Here we introduce new functions that potentially provide a better fit of the estimated model to the ecosystem. Full article
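
A minimal sketch of sampling a constraint polytope {x : Ax ≤ b} with the hit-and-run MCMC scheme commonly used in linear inverse modeling; the algorithm is generic, and the two-flow constraints below are toy assumptions.

```python
import numpy as np

rng = np.random.default_rng(7)

def hit_and_run(A, b, x0, n_samples):
    """Uniform samples from the polytope {x : A @ x <= b} via hit-and-run."""
    x, out = x0.copy(), []
    for _ in range(n_samples):
        d = rng.standard_normal(len(x))
        d /= np.linalg.norm(d)                  # random direction
        # Feasible chord: need t * (A d) <= b - A x for every constraint row
        Ad, slack = A @ d, b - A @ x
        t_hi = np.min(slack[Ad > 0] / Ad[Ad > 0])
        t_lo = np.max(slack[Ad < 0] / Ad[Ad < 0])
        x = x + rng.uniform(t_lo, t_hi) * d     # uniform point on the chord
        out.append(x.copy())
    return np.array(out)

# Toy "flow" constraints: 0 <= x_i <= 1 and x_0 + x_1 <= 1.2
A = np.vstack([np.eye(2), -np.eye(2), [[1.0, 1.0]]])
b = np.array([1.0, 1.0, 0.0, 0.0, 1.2])
samples = hit_and_run(A, b, x0=np.array([0.3, 0.3]), n_samples=2000)
```
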
6 pages, 669 KiB  
Proceeding Paper
Manifold-Based Geometric Exploration of Optimization Solutions
by Guillaume Lebonvallet, Faicel Hnaien and Hichem Snoussi
Phys. Sci. Forum 2023, 9(1), 25; https://doi.org/10.3390/psf2023009025 - 16 May 2024
Abstract
This work introduces a new method for exploring the solution space of complex problems. The method consists of building a latent space that provides a new encoding of the solution space. We map the objective function onto the latent space using a manifold, i.e., a mathematical object defined by a system of equations. The latent space is built with some knowledge of the objective function to make the mapping of the manifold easier. In this work, we introduce a new encoding for the Travelling Salesman Problem (TSP) and we give a new method for finding the optimal tour. Full article

7 pages, 967 KiB  
Proceeding Paper
Nested Sampling for Detection and Localization of Sound Sources Using a Spherical Microphone Array
by Ning Xiang and Tomislav Jasa
Phys. Sci. Forum 2023, 9(1), 26; https://doi.org/10.3390/psf2023009026 - 20 May 2024
Abstract
Since its inception in 2004, nested sampling has been used in acoustics applications. This work applies nested sampling within a Bayesian framework to the detection and localization of sound sources using a spherical microphone array. Going beyond existing work, this source localization task relies on spherical harmonics to establish parametric models that distinguish the background sound environment from the presence of sound sources. Upon a positive detection, the parametric models are also used to estimate an unknown number of potentially multiple sound sources. For the purpose of source detection, a no-source scenario needs to be considered in addition to the presence of at least one sound source. Specifically, the spherical microphone array senses the sound environment, and the acoustic data are analyzed via spherical Fourier transforms using a Bayesian comparison of two models accounting for the absence and presence of sound sources. Upon a positive detection, multiple-source models are used to estimate the directions of arrival (DoAs) using Bayesian model selection and parameter estimation for sound source enumeration and localization. These are the two levels of inferential estimation (enumeration and localization) necessary to correctly localize potentially multiple sound sources. This paper discusses an efficient implementation of the nested sampling algorithm applied to sound source detection and localization within the Bayesian framework. Full article
