*5.2. The SNe Ia Measurements*

SNe are widely adopted in astrophysics as standard candles. Accordingly, several SN catalogs are regularly updated, furnishing today a large number of data points that, combined with other data sets, enable one to place tighter constraints on the expansion history of the universe in terms of its constituents. In particular, SNe Ia are likely the most used objects for constraining DE at late times. The standard procedure makes use of the luminosity distance *d*<sub>L</sub>(*z*) and of the apparent magnitude. A general relation for *d*<sub>L</sub>(*z*) has been written previously, with *θ* the set of free parameters of a given model. Exploring a given cosmological model is thus equivalent to determining the whole set of parameters *θ*.

In particular, when one adopts a given cosmological model, an implicit assumption naturally holds: *the underlying cosmological model is the most suitable one*. This is clearly a limitation, because this hypothesis does not always coincide with the most feasible statistical model. Thus, more than one scenario can lead to comparable bounds, indicating a degeneracy problem among different models. This justifies the need to analyze different cosmological paradigms through a data set hierarchy, i.e., by combining more than one data catalog. In addition, statistical criteria are crucial to check the goodness of a given paradigm.
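Commonly used examples of such criteria are the Akaike (AIC) and Bayesian (BIC) information criteria, which penalize the minimum *χ*<sup>2</sup> by the number of free parameters. The sketch below uses purely illustrative numbers, not fits from the text:

```python
import math

def aic(chi2_min, k):
    """Akaike information criterion: penalizes k free parameters."""
    return chi2_min + 2 * k

def bic(chi2_min, k, n):
    """Bayesian information criterion: stronger penalty for large samples n."""
    return chi2_min + k * math.log(n)

# Two hypothetical fits to the same N = 1000 points (illustrative numbers):
# model A (e.g. a 2-parameter model) vs model B (one extra free parameter).
n = 1000
aic_a, bic_a = aic(980.0, 2), bic(980.0, 2, n)
aic_b, bic_b = aic(978.5, 3), bic(978.5, 3, n)
# Lower values are preferred; here the small chi-square gain of model B
# does not justify its extra parameter under either criterion.
```

In this toy comparison, both criteria favor the simpler model, illustrating how such statistics discriminate between paradigms yielding similar bounds.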

For SNe Ia, by virtue of Equation (20), it is possible to relate the brightness to the flux to obtain the distance modulus

$$
\mu(z) = 25 + 5 \log \left( \frac{d\_{\rm L}}{\rm Mpc} \right) . \tag{24}
$$
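As an illustration, Equation (24) can be evaluated numerically once a model for *d*<sub>L</sub>(*z*) is chosen. The sketch below assumes a flat ΛCDM background with fiducial values *H*<sub>0</sub> = 70 km/s/Mpc and Ω<sub>m</sub> = 0.3 (illustrative assumptions, not values fixed by the text):

```python
import math

# Fiducial parameters (assumptions for illustration only)
C_KM_S = 299792.458   # speed of light [km/s]
H0 = 70.0             # Hubble constant [km/s/Mpc]
OMEGA_M = 0.3         # matter density, flat LCDM assumed

def E(z, omega_m=OMEGA_M):
    """Dimensionless expansion rate H(z)/H0 for flat LCDM."""
    return math.sqrt(omega_m * (1 + z)**3 + (1 - omega_m))

def luminosity_distance(z, h0=H0, omega_m=OMEGA_M, steps=1000):
    """d_L in Mpc: (1+z) times the comoving distance, by trapezoidal rule."""
    dz = z / steps
    integral = 0.5 * (1 / E(0, omega_m) + 1 / E(z, omega_m))
    for i in range(1, steps):
        integral += 1 / E(i * dz, omega_m)
    integral *= dz
    return (1 + z) * (C_KM_S / h0) * integral

def distance_modulus(z):
    """Equation (24): mu = 25 + 5 log10(d_L / Mpc)."""
    return 25 + 5 * math.log10(luminosity_distance(z))
```

With these fiducial values, *z* = 0.5 gives *d*<sub>L</sub> of roughly 2.8 Gpc and *µ* of roughly 42.3.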

Neglecting error bars on *z*, we retain only the errors on *µ*, namely *σ*<sub>µ</sub>, whereas the best fit is determined by the standard maximization of the underlying likelihood function, or simply by minimizing the *χ*<sup>2</sup>, given by

$$\chi^2(\theta\_{\rm min}) = \sum\_{i=1}^{N\_\circ} \left[ \frac{\mu\_i(z\_i, \theta) - \mu\_{obs,i}(z\_i)}{\sigma\_{\mu,i}} \right]^2 \tag{25}$$

where the subscript *min* refers to the set of values that minimize the chi-square function, as requested above. Theoretical models can therefore be tested by *χ*<sup>2</sup> statistics, probing DE by inferring *d*<sub>L</sub> in units of megaparsecs and using it by means of the apparent magnitude.
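A minimal sketch of the minimization of Equation (25) on a mock catalog is given below. The mock points are generated from a fiducial flat ΛCDM model with Ω<sub>m</sub> = 0.3 and *H*<sub>0</sub> fixed at 70 km/s/Mpc; all numbers are illustrative assumptions:

```python
import math

C = 299792.458  # speed of light [km/s]
H0 = 70.0       # fixed here for illustration (assumption)

def mu_model(z, omega_m, steps=400):
    """Distance modulus of Eq. (24) for flat LCDM, trapezoidal d_L."""
    E = lambda x: math.sqrt(omega_m * (1 + x)**3 + 1 - omega_m)
    dz = z / steps
    s = 0.5 * (1 / E(0) + 1 / E(z)) + sum(1 / E(i * dz) for i in range(1, steps))
    d_l = (1 + z) * (C / H0) * s * dz
    return 25 + 5 * math.log10(d_l)

# Mock catalog drawn from a fiducial omega_m = 0.3 (noiseless, for clarity)
zs = [0.1, 0.3, 0.5, 0.8, 1.0]
mu_obs = [mu_model(z, 0.3) for z in zs]
sigma = [0.15] * len(zs)

def chi2(omega_m):
    """Equation (25) summed over the mock points."""
    return sum(((mu_model(z, omega_m) - m) / s)**2
               for z, m, s in zip(zs, mu_obs, sigma))

# Grid search for the value of omega_m minimizing the chi-square
grid = [0.05 + 0.01 * k for k in range(61)]   # 0.05 .. 0.65
best = min(grid, key=chi2)                    # recovers the fiducial 0.30
```

A grid search is used here only for transparency; in practice one would employ a standard minimizer or a Monte Carlo sampler over the full parameter set *θ*.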

Again, intertwining more than one data set with other surveys is essential to determine the whole set of parameters with refined accuracy. For instance, from SNe alone, as well as from GRBs<sup>20</sup>, *H*<sub>0</sub> cannot be constrained. In fact, expanding the luminosity distance up to first order, valid up to *z* &lesssim; 0.001, one gets

$$d\_{\rm L}(z, H\_0) \simeq \frac{cz}{H\_0} \tag{26}$$

which clearly vanishes at *z* = 0, implying that *H*<sub>0</sub> cannot be constrained with SNe Ia alone. In addition, a multiplicative degeneracy between *H*<sub>0</sub> and the other free parameters occurs.
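Both points can be checked numerically. The sketch below, assuming a flat ΛCDM model with a fiducial Ω<sub>m</sub> = 0.3, verifies that Equation (26) reproduces the full luminosity distance at very low redshift, and that changing *H*<sub>0</sub> shifts *µ* by a redshift-independent constant, i.e., the degeneracy is purely multiplicative in *d*<sub>L</sub>:

```python
import math

C = 299792.458  # speed of light [km/s]

def d_l(z, h0, omega_m=0.3, steps=400):
    """Flat-LCDM luminosity distance in Mpc (fiducial omega_m assumed)."""
    E = lambda x: math.sqrt(omega_m * (1 + x)**3 + 1 - omega_m)
    dz = z / steps
    s = 0.5 * (1 / E(0) + 1 / E(z)) + sum(1 / E(i * dz) for i in range(1, steps))
    return (1 + z) * (C / h0) * s * dz

# (i) Equation (26): the linear Hubble law is recovered at very low z
z = 0.001
approx = C * z / 70.0
full = d_l(z, 70.0)
rel_err = abs(full - approx) / full   # sub-percent at this redshift

# (ii) Multiplicative degeneracy: changing H0 shifts mu by the constant
# 5*log10(H0'/H0) at every redshift, so SNe alone cannot fix H0
offsets = [5 * math.log10(d_l(z, 70.0) / d_l(z, 65.0)) for z in (0.1, 0.5, 1.0)]
# all entries of `offsets` coincide up to numerical noise
```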

Once the chi-square statistic is computed, the confidence regions are surfaces of fixed *χ*<sup>2</sup>. For example, one can obtain Ω<sub>M</sub> − *θ*<sub>i</sub> planes by marginalizing the likelihood function over *H*<sub>0</sub>. This procedure consists of integrating the probability density *p* ∝ exp(−*χ*<sup>2</sup>/2) over all values of *H*<sub>0</sub>. Marginalization is a generic technique, clearly not limited to *H*<sub>0</sub>. In fact, if one wishes to simultaneously constrain a few parameters while obtaining their probability distribution regardless of the value of some other parameter, say *θ*<sup>⋆</sup>, one can marginalize over it. Calling *θ*<sup>⋆</sup> the parameter we do not care about, the marginalized probability density, computed for example for Ω<sub>m</sub>, is given by

$$p(\Omega\_m) = \int p(\Omega\_m, \theta^\star) \, d\theta^\star .$$
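As a sketch, this marginalization can be carried out numerically on a grid. The toy *χ*<sup>2</sup> surface below is an assumed correlated Gaussian in (Ω<sub>m</sub>, *H*<sub>0</sub>), not a fit to real SNe data:

```python
import math

# Toy correlated Gaussian chi-square surface (illustrative assumption):
# centered at omega_m = 0.3, H0 = 70, with correlation coefficient 0.6
def chi2(omega_m, h0):
    x = (omega_m - 0.3) / 0.05
    y = (h0 - 70.0) / 2.0
    return (x * x + y * y - 1.2 * x * y) / (1 - 0.6**2)

# Marginalize over H0 on a grid: p(Om) ~ integral dH0 exp(-chi2 / 2)
h0_grid = [60 + 0.1 * k for k in range(201)]       # 60 .. 80
om_grid = [0.20 + 0.005 * k for k in range(41)]    # 0.20 .. 0.40

def p_marginal(om):
    return sum(math.exp(-0.5 * chi2(om, h)) for h in h0_grid) * 0.1

post = [p_marginal(om) for om in om_grid]
norm = sum(post) * 0.005
post = [p / norm for p in post]                    # normalized density in Om
peak = om_grid[post.index(max(post))]              # sits at the input 0.30
```

The resulting one-dimensional density `post` carries the constraint on Ω<sub>m</sub> irrespective of *H*<sub>0</sub>, which is exactly the purpose of the marginalization described above.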
