*5.6. Confidence Levels and Uncertainties*

As one performs fits combining GRBs with other observable quantities, meaningful information on the best-fit parameters is obtained by computing their confidence limits or contour plots, which define the allowed region of parameter space. These regions are constructed around the set of best-fit parameters obtained from the minimization. The dimensionality of the parameter space, $m$, corresponding de facto to the number of parameters, is not an obstacle: to make those regions compact, one holds constant $\chi^2$ boundaries, fixing the chi-squared values to specific numbers. Let $m$ be the number of parameters, $n$ the number of data points, and $p$ the confidence level one desires to reach. Solving $Q[n - m, \min(\chi^2) + \Delta\chi^2] = p$ and finding the parameter region where $\chi^2 \leq \min(\chi^2) + \Delta\chi^2$ immediately yields the requested confidence region. Once the regions have been computed, it is necessary to estimate the uncertainties. To do so, one expands the log-likelihood in a Taylor series around the best fit $\theta_0$, where the first-order term vanishes,

$$\ln \mathcal{L} = \ln \mathcal{L}(\theta_0) + \frac{1}{2} \sum_{ij} (\theta_i - \theta_{i,0}) \left. \frac{\partial^2 \ln \mathcal{L}}{\partial \theta_i \partial \theta_j} \right|_{\theta_0} (\theta_j - \theta_{j,0}) + \ldots,$$

and defines the Hessian matrix by
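As a concrete illustration, under the common prescription that $\Delta\chi^2$ is distributed as a chi-squared variable with $m$ degrees of freedom, the boundary values follow from the distribution's quantiles. A minimal sketch in Python (the confidence levels and parameter counts below are purely illustrative):

```python
# Sketch: Delta chi^2 thresholds for joint confidence regions,
# assuming Delta chi^2 ~ chi^2 with m degrees of freedom,
# where m is the number of fitted parameters.
from scipy.stats import chi2

def delta_chi2(p, m):
    """Delta chi^2 boundary for confidence level p and m parameters."""
    return chi2.ppf(p, df=m)

for m in (1, 2, 3):
    print(f"m={m}: 68.3% -> {delta_chi2(0.683, m):.2f}, "
          f"95.4% -> {delta_chi2(0.954, m):.2f}")
```

The region where $\chi^2 \leq \min(\chi^2) + \Delta\chi^2$, with $\Delta\chi^2$ set this way, is then the desired confidence region; e.g., for two parameters the familiar 68.3% contour corresponds to $\Delta\chi^2 \simeq 2.30$.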

$$\mathcal{H}_{ij} = -\frac{\partial^2 \ln \mathcal{L}}{\partial \theta_i \partial \theta_j}. \tag{36}$$

Its off-diagonal terms indicate correlated parameters; neglecting these correlations, one can take the error on a given parameter $\theta_i$ to be $1/\sqrt{\mathcal{H}_{ii}}$. This naive representation of errors is a coarse-grained approach, dubbed the conditional error, and is not frequently adopted in the literature. Alternatively, one can compute the Fisher information matrix as a forecast expression for the error bars,

$$F_{ij} = \langle \mathcal{H} \rangle = - \left\langle \frac{\partial^2 \ln \mathcal{L}}{\partial \theta_i \partial \theta_j} \right\rangle, \tag{37}$$

where the ensemble average is taken over the observational data. In analogy with conditional errors, we write $\sigma^2_{ij} \geq (F^{-1})_{ij}$, while the marginalized errors become $\sigma_{\theta_i} \geq (F^{-1})_{ii}^{1/2}$.
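To make the distinction concrete, the following sketch compares conditional and marginalized errors for a hypothetical linear model $y = a + b x$ with Gaussian errors (the sampling points and $\sigma$ are assumed for illustration only):

```python
# Sketch: conditional vs. marginalized errors from a Fisher matrix,
# for a hypothetical linear model y = a + b*x with Gaussian errors.
import numpy as np

x = np.array([0.5, 1.0, 1.5, 2.0, 2.5])   # assumed sampling points
sigma = 0.1                                # assumed uniform measurement error

# Derivatives of the model w.r.t. (a, b): dy/da = 1, dy/db = x.
J = np.column_stack([np.ones_like(x), x])

# Fisher matrix F_ij = sum_k (dy_k/dtheta_i)(dy_k/dtheta_j) / sigma^2
F = J.T @ J / sigma**2

cond = 1.0 / np.sqrt(np.diag(F))           # conditional errors 1/sqrt(F_ii)
marg = np.sqrt(np.diag(np.linalg.inv(F)))  # marginalized errors sqrt((F^-1)_ii)

print("conditional:", cond)
print("marginalized:", marg)
```

Since the off-diagonal term $F_{12} \neq 0$ here (the intercept and slope are correlated), the marginalized errors exceed the conditional ones, as the inequalities above require.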

We underlined above that the Fisher matrix is somehow related to error bars. By this we mean that the Fisher information matrix enables one to estimate the parameter errors before the experiment is performed. Hence, it permits one to explore different experimental setups that could optimize the experiment itself. For these reasons, the Fisher matrix is widely adopted in the literature.
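As a sketch of this forecasting use, one can compare the marginalized Fisher error on the slope of the same hypothetical linear model for two assumed sampling strategies, before any data are taken:

```python
# Sketch: using the Fisher matrix to forecast which (hypothetical)
# observing strategy constrains the slope b of y = a + b*x better.
# Same number of points and same sigma in both designs.
import numpy as np

def forecast_sigma_b(x, sigma=0.1):
    """Marginalized forecast error on b for sampling points x."""
    J = np.column_stack([np.ones_like(x), x])
    F = J.T @ J / sigma**2
    return np.sqrt(np.linalg.inv(F)[1, 1])

narrow = np.linspace(1.0, 1.2, 10)   # assumed clustered sampling
wide = np.linspace(0.0, 2.0, 10)     # assumed spread-out sampling

print("narrow design:", forecast_sigma_b(narrow))
print("wide design:  ", forecast_sigma_b(wide))
```

In this toy comparison the wider baseline yields the smaller forecast error on the slope, illustrating how the Fisher matrix can rank experimental setups before observation.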
