1. Introduction
The study of solitonic cores in physical systems has attracted significant interest due to their relevance in various fields, including cosmology, astrophysics, and condensed matter physics. These localized, stable structures arise from the balance between attractive and repulsive forces, leading to unique properties and behaviors.
One specific field where this system has become particularly interesting is that of bosonic dark matter. This model assumes that the dark matter particle is a spinless ultralight boson with a mass of order $10^{-23}$–$10^{-21}\,\mathrm{eV}$
, and there are various reviews that draw a general panorama of the subject [1,2,3,4,5]. The dynamics of this type of matter in the mean-field approximation is ruled by the Gross–Pitaevskii–Poisson (GPP) system of equations, and the system can be seen as a Bose gas trapped by the gravitational field sourced by itself [6]. When the gas has no self-interaction, the model is called the Fuzzy Dark Matter (FDM) regime [7]; this is the mainstream branch of the subject and has led to studies of both local dynamics (e.g., [8,9]) and structure formation [10,11,12].
The interesting feature of this model is that, for such an ultralight particle, the de Broglie wavelength is large, and thus the structures have a minimum size. In fact, it has been known since the breakthrough simulation in [10] that cores are an essential part of structures and constitute an attractor profile surrounded by an envelope with high kinetic energy. These cores can be fitted by an empirical density profile that coincides numerically with the ground-state solutions of the GPP system [13]. Ever since, these cores have been considered the keystone of structures in FDM. Bounds on the values of self-interactions arise from local and cosmological constraints (see, e.g., [14,15,16,17,18]).
The generalization to include self-interacting bosons is a natural extension, and the limits of the self-interaction regime are a matter of interest, because the dark matter distribution differs from that of FDM [19], the time scales of saturation and relaxation change [20], and ultimately, there is a Thomas–Fermi regime with rather simple density profiles [21].
This background leads us to revisit some useful properties of solitonic cores with self-interaction, along with their stability. We solve the well-known eigenvalue problem for ground-state solutions of the GPP system with a rather unusual but efficient method based on genetic algorithms. We then propose an empirical formula that describes the resulting density profile. Next, we study the stability of the numerical solutions and of the configurations associated with the empirical formula, comparing their reaction to truncation-error perturbations in both the amplitude and the frequencies of the oscillation modes triggered. We point out interesting differences and potential implications within the dark-matter context.
This paper is organized as follows: Section 2 provides an overview of the theoretical framework and equations governing solitonic cores. Section 3 presents the methodology used to construct the ground-state solutions. Section 4 contains a comparison with empirical formulas, while in Section 5, we compare the evolution of solutions. Finally, Section 6 draws some conclusions.
2. Model and Equations
In a Bose–Einstein Condensate (BEC), a significant portion of the particles occupy the lowest quantum state, resulting in overlapping wave functions. Consequently, the state of a BEC can be effectively described by a collective wave function, known as the order parameter, $\Psi$. Due to interactions between bosons, which induce nonlinear effects, this collective wave function satisfies the Gross–Pitaevskii (GP) equation:

$$i\hbar\,\partial_t \Psi = -\frac{\hbar^2}{2m}\nabla^2 \Psi + V\Psi + g|\Psi|^2\Psi, \quad (1)$$

where $\hbar$ is the reduced Planck constant, $m$ is the mass of the boson, $V$ is the external potential acting as a gas trap, and $g = 4\pi\hbar^2 a_s/m$ is the nonlinear coefficient, where $a_s$ is the scattering length of two interacting bosons.
The concept of BEC Dark Matter (BEC-DM) hypothesizes that dark matter exists in the form of a BEC, where the trapping potential is self-generated by the bosonic ensemble through the Poisson equation:

$$\nabla^2 V = 4\pi G m^2 |\Psi|^2. \quad (2)$$

The nonlinear system (1) and (2) is known as the Gross–Pitaevskii–Poisson (GPP) system and exhibits scale invariance, described by the transformation [13]:

$$\{t,\, \mathbf{x},\, \Psi,\, V,\, g\} \rightarrow \{\lambda^{-2}\hat{t},\, \lambda^{-1}\hat{\mathbf{x}},\, \lambda^{2}\hat{\Psi},\, \lambda^{2}\hat{V},\, \lambda^{-2}\hat{g}\}, \quad (3)$$

where $\lambda$ is a scaling factor.
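As a minimal illustration of the scale invariance (3), the following Python sketch rescales a numerical profile by a factor $\lambda$; the array names and the radial grid are our own illustrative assumptions, in dimensionless (code) units:

```python
import numpy as np

def rescale_solution(r, psi, V, g, lam):
    """Apply the scaling transformation (3) with factor lam.

    A profile (psi, V) with self-interaction g on the radial grid r
    is mapped to an equivalent solution of the GPP system.
    """
    r_hat = r / lam          # x -> lambda^{-1} x
    psi_hat = lam**2 * psi   # Psi -> lambda^2 Psi
    V_hat = lam**2 * V       # V -> lambda^2 V
    g_hat = g / lam**2       # g -> lambda^{-2} g
    return r_hat, psi_hat, V_hat, g_hat
```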
An alternative description of the GPP system (1) and (2) can be obtained through the Madelung transformation $\Psi = \sqrt{\rho/m}\,e^{iS}$, where $\rho$ is the mass density and $S$ represents the phase of the wave function. By separating the real and imaginary parts and defining the fluid velocity as $\mathbf{v} = (\hbar/m)\nabla S$, it is possible to rewrite the GPP system as [22,23]:

$$\partial_t \rho + \nabla\cdot(\rho\mathbf{v}) = 0, \quad (4)$$
$$\partial_t \mathbf{v} + (\mathbf{v}\cdot\nabla)\mathbf{v} = -\frac{1}{\rho}\nabla p - \frac{1}{m}\nabla V - \frac{1}{m}\nabla Q, \quad (5)$$
$$\nabla^2 V = 4\pi G m \rho, \quad (6)$$

where $p = \frac{g}{2m^2}\rho^2$ is the self-interaction pressure, and $Q = -\frac{\hbar^2}{2m}\frac{\nabla^2\sqrt{\rho}}{\sqrt{\rho}}$ is the quantum potential. In this framework, the GPP system is known as the quantum hydrodynamic formulation.
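To make the hydrodynamic variables concrete, here is a small sketch that computes the self-interaction pressure and the quantum potential from a density profile; the uniform radial grid, code units ($\hbar = m = 1$), and finite-difference choices are ours:

```python
import numpy as np

def hydrodynamic_fields(r, rho, g):
    """Self-interaction pressure p and quantum potential Q in code units
    for a spherically symmetric density rho(r) on a uniform grid r."""
    dr = r[1] - r[0]
    p = 0.5 * g * rho**2                 # p = g rho^2 / (2 m^2), with m = 1
    f = np.sqrt(rho)
    d1 = np.gradient(f, dr)              # first derivative of sqrt(rho)
    d2 = np.gradient(d1, dr)             # second derivative of sqrt(rho)
    lap = d2.copy()                      # radial Laplacian: f'' + (2/r) f'
    lap[1:] += 2.0 * d1[1:] / r[1:]
    lap[0] = 3.0 * d2[0]                 # regularity: (2/r) f' -> 2 f'' at r = 0
    Q = -0.5 * lap / f
    return p, Q
```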
If we take the limit $\hbar \to 0$, the quantum potential vanishes, and the classical hydrodynamic equations are recovered with a polytropic equation of state $p = K\rho^2$, with polytropic constant $K = g/(2m^2)$ and polytropic index $n = 1$. In this limit, we can see the classical effect of the nonlinear term in the GP equation, which results from two-body dispersion interactions between bosons:
Repulsive self-interaction ($g > 0$): the gas has positive pressure opposing the gravitational force, allowing stable structures.
Attractive self-interaction ($g < 0$): no classical behavior is possible since negative pressure is not physically acceptable.
Without self-interaction ($g = 0$): the limit of positive self-interaction when the polytropic constant tends to zero corresponds to a dustlike state.
When $\hbar \neq 0$, the quantum potential reintroduces the possibility of structure formation regardless of the value of $g$. The quantum potential $Q$ is crucial for this behavior. As in any macroscopic system, global quantities can be measured. Some of these are:

$$M = m\int |\Psi|^2\, d^3x, \quad (7)$$
$$W = \frac{1}{2}\int V|\Psi|^2\, d^3x, \quad (8)$$
$$K = \frac{\hbar^2}{2m}\int |\nabla\Psi|^2\, d^3x, \quad (9)$$
$$I = \frac{g}{2}\int |\Psi|^4\, d^3x, \quad (10)$$

where $M$ is the mass, $W$ is the potential energy, $K$ is the total kinetic energy (with a contribution $K_\rho$ from classical motion and $K_Q$ from quantum effects, $K = K_\rho + K_Q$), and $I$ is the self-interaction energy. The total energy is defined as $E = K + W + I$, and the virial function is $2K + W + 3I$ (see, e.g., [24]). All these quantities are helpful for the diagnostics of any solution of the system (1) and (2), and here, we use them for equilibrium solutions.
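For illustration, a minimal sketch of these diagnostics for a spherically symmetric real profile on a uniform radial grid, in code units; the second-order derivatives and trapezoidal quadrature are our own choices:

```python
import numpy as np

def global_quantities(r, psi, V, g):
    """Global diagnostics M, W, K, I, E, and the virial function (code units).

    Integrals use the spherical volume element 4*pi*r^2 dr.
    """
    dr = r[1] - r[0]
    dpsi = np.gradient(psi, dr)
    vol = 4.0 * np.pi * r**2
    M = np.trapz(vol * psi**2, dx=dr)
    W = 0.5 * np.trapz(vol * V * psi**2, dx=dr)
    K = 0.5 * np.trapz(vol * dpsi**2, dx=dr)
    I = 0.5 * g * np.trapz(vol * psi**4, dx=dr)
    E = K + W + I
    virial = 2.0 * K + W + 3.0 * I   # vanishes for virialized equilibria
    return M, W, K, I, E, virial
```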
2.1. Adimensionalization of the System
Transforming the system into a dimensionless coordinate system ensures the uniformity of units and prevents issues arising from disparate scales when performing numerical calculations. To achieve this, we perform the transformations $t = \mathcal{T}\tilde{t}$, $\mathbf{x} = \mathcal{L}\tilde{\mathbf{x}}$, $\Psi = \Psi_s\tilde{\Psi}$, $V = \mathcal{V}\tilde{V}$, $\rho = \varrho\tilde{\rho}$, and $g = \mathcal{G}\tilde{g}$, where tilde variables are dimensionless and are said to be in code units, while nontilded ones are physical. Appropriate scale factors for the GPP system are:

$$\mathcal{T} = \frac{m\mathcal{L}^2}{\hbar}, \qquad \mathcal{V} = \frac{\hbar^2}{m\mathcal{L}^2}, \qquad \Psi_s^2 = \frac{\hbar^2}{4\pi G m^3 \mathcal{L}^4}, \qquad \varrho = m\Psi_s^2, \qquad \mathcal{G} = 4\pi G m^2 \mathcal{L}^2. \quad (11)$$

Thus, the system effectively possesses a single degree of freedom, the length scale $\mathcal{L}$, which we express in terms of a scaling factor $\lambda$, equivalent to the transformation (3). With these new variables, the GPP system can be rewritten in dimensionless units as:

$$i\,\partial_t \Psi = -\frac{1}{2}\nabla^2 \Psi + V\Psi + g|\Psi|^2\Psi, \quad (12)$$
$$\nabla^2 V = |\Psi|^2, \quad (13)$$

where we omit the tilde in all variables. Our objective now is to determine the ground state of the stationary version of the GPP system (12) and (13) since excited states are unstable [6].
2.2. The Stationary GPP System
Stationary GPP equations are constructed by assuming spherical symmetry and that the order parameter can be rewritten as $\Psi(t,r) = \psi(r)\,e^{-i\gamma t}$, with $\gamma$ an eigenfrequency, and $\psi$ a real function of the radial coordinate $r$. With these assumptions, the GPP system is written as follows, according to [13,25]:

$$\frac{1}{2r}\partial_r^2(r\psi) = \left(V + g\psi^2 - \gamma\right)\psi, \quad (14)$$
$$\frac{1}{r}\partial_r^2(rV) = \psi^2. \quad (15)$$

To ensure physically acceptable solutions, we impose certain boundary conditions. For the stationary order parameter $\psi$, we require that $\psi(0) = \psi_0$ be finite, $\partial_r\psi(0) = 0$, and $\lim_{r\to\infty}\psi(r) = 0$.
For the gravitational potential $V$, we set $V(0) = V_0$ and $\partial_r V(0) = 0$. The choice of $V_0$ can be arbitrary, since shifting this condition by a constant $\delta$ is equivalent to finding the shifted eigenvalue $\gamma + \delta$. These boundary conditions ensure physically meaningful solutions that satisfy the requirements of regularity and isolation.
Since this set of equations is solved numerically, it is convenient to write it as a first-order system by defining the variables $u = \partial_r\psi$ and $w = \partial_r V$, where $\partial_r$ is the derivative operator with respect to $r$. The above system is then rewritten as:

$$\partial_r \psi = u, \quad (16)$$
$$\partial_r u = 2\left(V + g\psi^2 - \gamma\right)\psi - \frac{2}{r}u, \quad (17)$$
$$\partial_r V = w, \quad (18)$$
$$\partial_r w = \psi^2 - \frac{2}{r}w, \quad (19)$$

with the boundary conditions $\psi(0) = \psi_0$, $u(0) = 0$, $V(0) = V_0$, $w(0) = 0$, and $\lim_{r\to\infty}\psi(r) = 0$. This set of equations along with the boundary conditions defines an eigenvalue problem, where the eigenvalue is $\gamma$.
For ease, it is convenient to define the vector $\mathbf{y} = (\psi, u, V, w)$ and the right side of the system as $\mathbf{f}(r, \mathbf{y})$. The system can be written compactly as:

$$\partial_r \mathbf{y} = \mathbf{f}(r, \mathbf{y}), \quad (20)$$

where $\mathbf{f}$ collects the right-hand sides of Equations (16)–(19).
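As an illustration, a minimal Python sketch of the right-hand side $\mathbf{f}(r,\mathbf{y})$ of system (20) and its outward integration for a trial eigenvalue; the regularization at the origin and the use of scipy.integrate.solve_ivp are our own choices, not prescriptions from the text:

```python
import numpy as np
from scipy.integrate import solve_ivp

def f(r, y, g, gamma):
    """Right-hand side of the first-order system (16)-(19)."""
    psi, u, V, w = y
    if r < 1e-12:
        # at the origin, u and w vanish; drop the singular 2/r terms
        return [u, 2.0 * (V + g * psi**2 - gamma) * psi, w, psi**2]
    du = 2.0 * (V + g * psi**2 - gamma) * psi - 2.0 * u / r
    dw = psi**2 - 2.0 * w / r
    return [u, du, w, dw]

def integrate(psi0, V0, g, gamma, r_max, n=1000):
    """Integrate from the origin to r_max for a trial eigenvalue gamma."""
    r_eval = np.linspace(0.0, r_max, n)
    sol = solve_ivp(f, (0.0, r_max), [psi0, 0.0, V0, 0.0],
                    args=(g, gamma), t_eval=r_eval, rtol=1e-10, atol=1e-12)
    return sol.t, sol.y
```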
3. Numerical Methods
Different strategies can be employed to numerically solve the systems of equations above. Some of them solve the system on a discrete domain, traditionally using a shooting routine, as in the flagship reference [25] and most follow-up papers. We use a discrete domain but not a shooting method.
3.1. Stationary System
We construct the solution on a finite domain $[0, r_{\max}]$, where the boundary conditions at infinity are redefined approximately as $\psi(r_{\max}) \approx 0$ and $\partial_r\psi(r_{\max}) \approx 0$; that is, we use a finite value $r_{\max}$ at which we seek to satisfy the boundary conditions approximately at the external boundary.
We define the discrete domain as the set of points $\{r_i\}_{i=0}^{N}$, where $N+1$ is the number of points. The simplest way to construct it is by employing a uniform partition, where the points are chosen as $r_i = i\,\Delta r$, with $\Delta r = r_{\max}/N$ the resolution of the discrete domain.
Note that in order to integrate the system (16)–(19), we must set the parameters $\psi_0$, $V_0$, $g$, and $\gamma$, of which we do not know the eigenvalue $\gamma$, as well as, for reasons of numerical precision, the upper radius $r_{\max}$. Therefore, it is necessary to find these values in such a way that they approximately satisfy the boundary conditions at the outer boundary $r_{\max}$.
Instead of using the shooting method to search for the eigenvalue $\gamma$ of the problem (16)–(19), as is traditional (e.g., [6,25]), here, we propose an alternative method based on optimization.
3.2. Description of the Eigenvalue Search Method
We search for the eigenvalue $\gamma$ that satisfies the boundary conditions at $r_{\max}$. To accomplish this, we employ a genetic algorithm (GA), which is rooted in the theory of evolution. In a GA, an initial population exists within a defined environment. Each individual in this population is assigned a fitness level, representing its suitability for survival in the environment. This fitness level is determined solely by the DNA of each individual in the population.
Better-adapted individuals have a higher likelihood of reproducing and passing on their genetic material to subsequent generations. Offspring are generated from two parents, each contributing approximately 50% of their genetic material. However, in nature, offspring may adapt better to the environment than their parents due to mutations in their DNA. This iterative process continues for many generations until significant changes are observed in the population.
Based on this understanding of evolution, we outline our GA as follows:
Define the problem: In our context, each individual represents a potential solution to the eigenvalue problem of system (20). The DNA chain determining each individual is represented as a vector of parameters (here, essentially the candidate eigenvalue $\gamma$), where the components of these vectors are called genes.
Initialize the population: the population is generated randomly, with a constant size maintained throughout the evolution.
Fitness function: Define the fitness function as

$$F(\gamma) = \left[\frac{1}{2}\left(\psi^2(r_{\max}) + u^2(r_{\max})\right)\right]^{-1}, \quad (21)$$

where $(\psi, u, V, w)$ is the solution of the system (20) associated with the value $\gamma$. The choice of the form of Equation (21) is due to the fact that both the wave function and its derivative must vanish in the limit $r \to \infty$, and we would like the violations of the conditions on $\psi$ and on $u = \partial_r\psi$ to be of the same order; we therefore define the fitness as the inverse of the mean of the squared violations of the separate conditions on $\psi$ and on $\partial_r\psi$ (see the sketch after this list).
Selection: Using an elitist method to select the best individuals, the value of the fitness function of each element of the population is obtained, and the individuals are sorted so that those with a higher fitness appear first in the list.
Reproduction: Select two random parents from the best-fitted individuals at the top of the list to generate a new individual. The DNA genes of the new individual are randomly selected from the genes of the parents.
First mutation: After creating a new individual, apply a mutation in which each gene in the DNA chain has a given probability of being amplified by a factor of 1.5.
Replacement: Repeat steps 5 and 6 as many times as needed to generate the remaining individuals.
Second mutation: Apply a differential mutation to the entire population. For each individual $x_i$, select two other individuals $x_j$ and $x_k$ randomly. Generate a new individual $x_{\mathrm{new}} = x_i + F_m(x_j - x_k)$, with $F_m$ a mutation factor. If the fitness of $x_{\mathrm{new}}$ exceeds that of $x_i$, replace the $i$th individual with the new one.
Stop condition: Repeat steps 4–8 for multiple generations until the fitness exceeds a prescribed tolerance for at least one individual in the population.
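The following Python sketch condenses the loop just described; the population size, mutation probability, mutation factor, and tolerance are illustrative placeholders rather than the values used in the paper, and the fitness follows Equation (21), reusing the integrator sketched in Section 2.2:

```python
import numpy as np

def fitness(gamma, psi0, V0, g, r_max):
    """Inverse mean-squared violation of psi(r_max) = u(r_max) = 0, Equation (21)."""
    r, y = integrate(psi0, V0, g, gamma, r_max)   # integrator sketched above
    psi_end, u_end = y[0, -1], y[1, -1]
    return 1.0 / (0.5 * (psi_end**2 + u_end**2) + 1e-300)

def genetic_search(psi0, V0, g, r_max, n_pop=50, n_gen=200, tol=1e8,
                   p_mut=0.1, f_mut=0.5, seed=0):
    rng = np.random.default_rng(seed)
    pop = rng.uniform(-5.0, 0.0, n_pop)            # random initial candidate gammas
    for _ in range(n_gen):
        fit = np.array([fitness(p, psi0, V0, g, r_max) for p in pop])
        pop = pop[np.argsort(fit)[::-1]]           # elitist sorting, best first
        survivors = pop[: n_pop // 2]              # half the population survives
        children = []
        for _ in range(n_pop - survivors.size):
            a, b = rng.choice(survivors, 2)        # two random parents, gene crossover
            child = a if rng.random() < 0.5 else b
            if rng.random() < p_mut:               # first mutation
                child *= 1.5
            children.append(child)
        pop = np.concatenate([survivors, children])
        for i in range(n_pop):                     # second (differential) mutation
            j, k = rng.choice(n_pop, 2, replace=False)
            trial = pop[i] + f_mut * (pop[j] - pop[k])
            if fitness(trial, psi0, V0, g, r_max) > fitness(pop[i], psi0, V0, g, r_max):
                pop[i] = trial
        if fitness(pop[0], psi0, V0, g, r_max) > tol:   # stop condition
            return pop[0]
    return pop[0]
```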
By applying this algorithm, it is possible to find a solution to our problem by specifying only the amplitude of the order parameter at the origin, $\psi_0$, and the coefficient of the nonlinear term, $g$. Let us remember that the choice of $V_0$ can be made arbitrarily; however, once the solution is found, we can rescale the gravitational potential and the eigenvalue as follows:

$$V \to V - V(r_{\max}) - \frac{M}{r_{\max}}, \qquad \gamma \to \gamma - V(r_{\max}) - \frac{M}{r_{\max}}, \quad (22)$$

so that the gravitational potential satisfies monopolar boundary conditions, $V(r_{\max}) = -M/r_{\max}$.
Finally, concerning the specific parameters of the GA for the solution of the eigenvalue problem, the population size is kept constant, and half of the population is selected from each generation to survive and crossover.
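A one-line realization of the rescaling (22), under the same assumptions and code units as the previous sketches:

```python
def monopole_rescale(r, V, gamma, M):
    """Shift V and gamma by the same constant so that V(r_max) = -M/r_max."""
    shift = -M / r[-1] - V[-1]
    return V + shift, gamma + shift
```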
5. Evolution
While the solutions found with the GA serve as good approximations to the exact solutions, they are not exact. Let us denote the exact solution of the eigenvalue problem as $\hat{\mathbf{y}}$ and the numerical solution as $\hat{\mathbf{y}} + \delta\mathbf{y}$, where $\delta\mathbf{y}$ represents the error associated with the numerical truncation error of using a discrete domain for its construction.
Stability can be tested by monitoring the behavior of this error over time when the numerical solution, namely the exact solution plus the perturbation, is used as the initial condition. Specifically, we analyze the dynamics triggered by such a perturbation. The stationary solution is deemed stable if the perturbation remains bounded during its evolution, and it is considered unstable if the perturbation grows over time. This error analysis is commonly used to test the convergence of numerical solutions of stable systems [6] and to verify that the errors converge to zero as the numerical resolution is increased.
Thus, the evolution of the numerical solution of the initial value problem has an error that, as we show, does not diverge (see, e.g., [6,13]). In addition, we would like to monitor the error when using the empirical density profile as the initial condition and see how it behaves.
For this, we programmed a code that evolved these initial conditions by solving the time-dependent system (1) and (2) in spherical symmetry. The solution took place on the same numerical domain $[0, r_{\max}]$ used to solve the eigenvalue problem. The code used the method of lines to solve GP Equation (1) with a fourth-order Runge–Kutta integrator and second-order accurate stencils for spatial derivatives. At the origin, the order parameter was extrapolated with a second-order accurate approximation. Simultaneously, at each intermediate step of the Runge–Kutta method, we solved Poisson Equation (2) outwards from the origin up to $r_{\max}$.
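A schematic of this scheme in Python, consistent with the description above; the variable names, the trapezoidal outward Poisson integration, and the boundary treatments are our own illustrative choices (code units, spherical symmetry):

```python
import numpy as np

def poisson_solve(r, psi, dr):
    """Integrate (r V)'' = r |psi|^2 outwards from the origin; the constant
    shift of V is fixed afterwards by the monopole condition."""
    src = r * np.abs(psi)**2
    s = np.concatenate([[0.0], np.cumsum(0.5 * dr * (src[1:] + src[:-1]))])  # s = (rV)'
    u = np.concatenate([[0.0], np.cumsum(0.5 * dr * (s[1:] + s[:-1]))])      # u = rV
    V = np.empty_like(u)
    V[1:] = u[1:] / r[1:]
    V[0] = (4.0 * V[1] - V[2]) / 3.0    # second-order extrapolation, V'(0) = 0
    return V

def gp_rhs(r, psi, g, dr):
    """Right-hand side of Equation (12): d(psi)/dt = -i H psi, second-order stencils."""
    V = poisson_solve(r, psi, dr)
    lap = np.zeros_like(psi)
    lap[1:-1] = (psi[2:] - 2.0 * psi[1:-1] + psi[:-2]) / dr**2 \
        + (psi[2:] - psi[:-2]) / (r[1:-1] * dr)
    return -1j * (-0.5 * lap + (V + g * np.abs(psi)**2) * psi)

def rk4_step(r, psi, g, dr, dt):
    """One method-of-lines step; Poisson is re-solved at every intermediate stage."""
    k1 = gp_rhs(r, psi, g, dr)
    k2 = gp_rhs(r, psi + 0.5 * dt * k1, g, dr)
    k3 = gp_rhs(r, psi + 0.5 * dt * k2, g, dr)
    k4 = gp_rhs(r, psi + dt * k3, g, dr)
    psi = psi + dt / 6.0 * (k1 + 2.0 * k2 + 2.0 * k3 + k4)
    psi[0] = (4.0 * psi[1] - psi[2]) / 3.0   # second-order extrapolation at the origin
    psi[-1] = 0.0                            # simple outer boundary at r_max
    return psi
```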
The diagnostic we used to monitor the growth of perturbations was the central density at the origin. The results are shown in Figure 4 for the evolution of the solution to the eigenvalue problem, the empirical profile (24), and Formula (29) obtained from simulations in [20], for various combinations of $\psi_0$ and $g$. In the left column, we show the time series of the central density, while the right column shows its Fourier transform, which illustrates the triggered oscillation modes.
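For instance, the oscillation modes can be read off from the spectrum of the central-density time series with a standard FFT; a minimal sketch, with the sampling interval dt and the series rho_c as assumed inputs:

```python
import numpy as np

def oscillation_spectrum(rho_c, dt):
    """Frequencies and amplitudes of the central-density oscillations."""
    series = rho_c - np.mean(rho_c)      # remove the DC component
    amps = np.abs(np.fft.rfft(series))
    freqs = np.fft.rfftfreq(len(series), d=dt)
    return freqs, amps
```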
The first case is an unstable solution that collapses due to the attractive nature of the self-interaction (negative $g$). In this case, the central density blows up at a finite time when the initial conditions are the numerical solution, whereas when evolving the empirical profile, the central density also grows out of bounds, likewise indicating instability. In that case, the Fourier spectrum says little about the oscillation modes and is not shown.
There are also three stable cases, corresponding to various combinations of $\psi_0$ and $g$, whose central density oscillates with different amplitudes and frequency modes. As expected, the density of the solution of the eigenvalue problem is closer to the exact solution than empirical Formula (24). An implication is that in the first case, the amplitude of the oscillations triggered by the truncation error is smaller than in the second case, where the difference between the numerical solution of the eigenproblem and the empirical formula adds an extra perturbation. The amplitudes differ approximately by an order of magnitude. Notably, the excited oscillation frequencies coincide and are independent of the oscillation amplitudes.
As a particular case, we show the differences between the evolution of the ground-state solution and its empirical formula for the case $g = 0$ corresponding to FDM. The magnitude of the oscillations is particularly important in this case, because initial conditions involving mergers and rotation curves use cores as workhorse configurations, and it is interesting to note how these profiles carry an intrinsic oscillation. The results are shown in Figure 5.
Finally, we carried out an analysis similar to that in [20], where a critical value separating the stable and unstable branches of solutions was found for attractive self-interaction, derived from empirical Formula (26), considering a suitably rescaled mass as a function of a scale-invariant combination of the parameters. We show the result in Figure 6. The critical value was found where this quantity reached its maximum, which is similar to the value found by Chan et al. [20]. This result shows the consistency of our one-parameter formula with the formula obtained from simulations of dark-matter collapse.
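In practice, locating this critical point from a one-parameter family of solutions amounts to finding the maximum of the curve; a minimal sketch, where invariant and mass are hypothetical arrays sampled along the family:

```python
import numpy as np

def critical_point(invariant, mass):
    """Critical value of the invariant where the mass curve peaks.

    Solutions on one side of the maximum belong to the stable branch,
    those on the other side to the unstable one.
    """
    i = np.argmax(mass)
    return invariant[i], mass[i]
```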
6. Conclusions
We constructed the well-known ground-state solutions of the GPP system of equations, this time using a genetic algorithm. The motivation to implement this type of method is that it can be used in the case of many parameters, or equivalently, DNA made of coefficients of a multimode wave function. In this sense, this paper is a proof of concept for the usage of this method in core-plus-halo FDM configurations that we plan to analyze in the near future.
One of the contributions of this work is the construction of a one-parameter empirical formula that describes the density profile of ground-state solutions with self-interaction. Moreover, this formula works for arbitrary $g$, which is a small but probably important step with respect to the very general formula for core profiles in [20] obtained from simulations.
We also evolved the ground-state solutions and the density profiles given by our empirical formula, and as a control case, we also evolved the profiles obtained in [20]. We found that the evolution of the empirical formulas was different, even though they produced very similar density profiles. The fact that the empirical formulas were an approximate version of the solution of the eigenvalue problem, which is itself already an approximate solution, produced higher-amplitude perturbations. Specifically, we found that the amplitude of the oscillations of stable solutions was larger by more than an order of magnitude with respect to that of the ground-state solutions. This is relevant because empirical formulas are commonly used as initial conditions for binary and multiple mergers of ground-state solutions, which can thus be improved.
Finally, we verified that the evolution of certain configurations with negative self-interaction could collapse, and we found that the threshold between stable and unstable solutions obtained with our empirical formula was consistent with the one found by the analysis in [20].