**Proof.** See Appendix A.

To analyze the robustness of an estimator, Hampel et al. [22] introduced the concept of the influence function (IF). Since then, the IF has been widely used in the statistical literature to measure robustness in different contexts. Intuitively, the IF describes the effect on the estimate of an infinitesimal contamination of the model, so IFs associated with locally robust (B-robust) estimators should be bounded. Let us now obtain the IF of the RMRPE and analyze its boundedness to assess the robustness of the proposed estimators. We consider the contaminated model $g\_{\varepsilon}(x) = (1-\varepsilon) f\_{\boldsymbol{\theta}}(x) + \varepsilon \Delta\_x$, with $\Delta\_x$ the indicator function at $x$, and we denote $\widetilde{\boldsymbol{\theta}}\_{\tau,\varepsilon} = \widetilde{T}\_{\tau}(G\_{\varepsilon})$, with $G\_{\varepsilon}$ the distribution function associated with $g\_{\varepsilon}$. By definition, $\widetilde{\boldsymbol{\theta}}\_{\tau,\varepsilon}$ is the minimizer of $R\_{\tau}(g\_{\varepsilon}, f\_{\boldsymbol{\theta}})$ subject to $\boldsymbol{g}(\widetilde{\boldsymbol{\theta}}\_{\tau,\varepsilon}) = \boldsymbol{0}$. Following the same steps as in Theorem 5 of Broniatowski et al. [8], it can be seen that the influence function of $\widetilde{T}\_{\tau}$ at $f\_{\boldsymbol{\theta}}$ is given by

$$IF(\mathbf{x}, \widetilde{T}\_{\tau}, \boldsymbol{\theta}) = \mathcal{M}\_{\tau}(\boldsymbol{\theta})^{-1} \left[ f\_{\boldsymbol{\theta}}(\mathbf{x})^{\tau} \boldsymbol{u}\_{\boldsymbol{\theta}}(\mathbf{x}) - c\_{\tau}(\boldsymbol{\theta}) f\_{\boldsymbol{\theta}}(\mathbf{x})^{\tau} \right],\tag{20}$$

where $c\_{\tau}(\boldsymbol{\theta})$ was defined in (8) and

$$\begin{split} \mathcal{M}\_{\tau}(\boldsymbol{\theta}) &= \frac{1}{\int f\_{\boldsymbol{\theta}}(\boldsymbol{x})^{\tau+1} d\boldsymbol{x}} \Bigg[ \int f\_{\boldsymbol{\theta}}(\boldsymbol{x})^{\tau+1} d\boldsymbol{x} \int f\_{\boldsymbol{\theta}}(\boldsymbol{x})^{\tau+1} \boldsymbol{u}\_{\boldsymbol{\theta}}(\boldsymbol{x}) \boldsymbol{u}\_{\boldsymbol{\theta}}(\boldsymbol{x})^{T} d\boldsymbol{x} \\ &\quad - \left( \int f\_{\boldsymbol{\theta}}(\boldsymbol{x})^{\tau+1} \boldsymbol{u}\_{\boldsymbol{\theta}}(\boldsymbol{x}) d\boldsymbol{x} \right) \left( \int f\_{\boldsymbol{\theta}}(\boldsymbol{x})^{\tau+1} \boldsymbol{u}\_{\boldsymbol{\theta}}(\boldsymbol{x}) d\boldsymbol{x} \right)^{T} \Bigg], \end{split}$$

with the additional condition that $\boldsymbol{g}(\widetilde{\boldsymbol{\theta}}\_{\tau,\varepsilon}) = \boldsymbol{0}$. Note that expression (20) corresponds to the IF of the unrestricted MRPE. Differentiating the constraint with respect to $\varepsilon$ and evaluating at $\varepsilon = 0$ gives

$$G(\boldsymbol{\theta})^{T} IF(\mathbf{x}, \widetilde{T}\_{\tau}, \boldsymbol{\theta}) = \boldsymbol{0}.\tag{21}$$

Based on (20) and (21) we have

$$\begin{pmatrix} \mathcal{M}\_{\tau}(\boldsymbol{\theta}) \\ G(\boldsymbol{\theta})^{T} \end{pmatrix} IF(\mathbf{x}, \widetilde{T}\_{\tau}, \boldsymbol{\theta}) = \begin{pmatrix} f\_{\boldsymbol{\theta}}(\mathbf{x})^{\tau} \boldsymbol{u}\_{\boldsymbol{\theta}}(\mathbf{x}) - c\_{\tau}(\boldsymbol{\theta}) f\_{\boldsymbol{\theta}}(\mathbf{x})^{\tau} \\ \boldsymbol{0} \end{pmatrix}.$$

Therefore,

$$\begin{pmatrix} \mathcal{M}\_{\tau}(\boldsymbol{\theta})^{T} & G(\boldsymbol{\theta}) \end{pmatrix} \begin{pmatrix} \mathcal{M}\_{\tau}(\boldsymbol{\theta}) \\ G(\boldsymbol{\theta})^{T} \end{pmatrix} IF(\mathbf{x}, \widetilde{T}\_{\tau}, \boldsymbol{\theta}) = \mathcal{M}\_{\tau}(\boldsymbol{\theta})^{T} \left[ f\_{\boldsymbol{\theta}}(\mathbf{x})^{\tau} \boldsymbol{u}\_{\boldsymbol{\theta}}(\mathbf{x}) - c\_{\tau}(\boldsymbol{\theta}) f\_{\boldsymbol{\theta}}(\mathbf{x})^{\tau} \right]$$

and

$$IF(\mathbf{x}, \widetilde{T}\_{\tau}, \boldsymbol{\theta}) = \left( \mathcal{M}\_{\tau}(\boldsymbol{\theta})^{T} \mathcal{M}\_{\tau}(\boldsymbol{\theta}) + G(\boldsymbol{\theta}) G(\boldsymbol{\theta})^{T} \right)^{-1} \mathcal{M}\_{\tau}(\boldsymbol{\theta})^{T} \left[ f\_{\boldsymbol{\theta}}(\mathbf{x})^{\tau} \boldsymbol{u}\_{\boldsymbol{\theta}}(\mathbf{x}) - c\_{\tau}(\boldsymbol{\theta}) f\_{\boldsymbol{\theta}}(\mathbf{x})^{\tau} \right].\tag{22}$$
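Expression (22) is simply a constrained least-squares solve, which can be checked numerically. The sketch below is illustrative only: `M`, `G` and `psi` are made-up stand-ins for $\mathcal{M}\_{\tau}(\boldsymbol{\theta})$, $G(\boldsymbol{\theta})$ and the bracketed factor in a hypothetical two-parameter model with a single constraint. It applies the formula and recovers an influence vector satisfying the side condition $G(\boldsymbol{\theta})^{T} IF = \boldsymbol{0}$.

```python
def restricted_if(M, G, psi):
    """Evaluate (M^T M + G G^T)^{-1} M^T psi for a 2-parameter model
    with a single constraint, as in expression (22)."""
    # A = M^T M + G G^T  (2x2)
    A = [[sum(M[k][i] * M[k][j] for k in range(2)) + G[i] * G[j]
          for j in range(2)] for i in range(2)]
    # b = M^T psi
    b = [sum(M[k][i] * psi[k] for k in range(2)) for i in range(2)]
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    # Cramer's rule for the 2x2 system A @ IF = b
    return [(A[1][1] * b[0] - A[0][1] * b[1]) / det,
            (A[0][0] * b[1] - A[1][0] * b[0]) / det]

# Hypothetical M_tau and constraint gradient G = (0, 1)^T, i.e. a constraint
# acting only on the second parameter.
M = [[0.8, 0.1], [0.1, 0.5]]
G = [0.0, 1.0]
# Build a consistent right-hand side psi = M @ v with G^T v = 0, so the
# stacked system has the known solution v.
v = [2.0, 0.0]
psi = [M[0][0] * v[0] + M[0][1] * v[1], M[1][0] * v[0] + M[1][1] * v[1]]
IF = restricted_if(M, G, psi)  # recovers v, and G^T IF = 0
```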

Note that the matrices $\mathcal{M}\_{\tau}(\boldsymbol{\theta})$ and $G(\boldsymbol{\theta})$ involved in expression (22) depend only on the model parameter $\boldsymbol{\theta}$ and the tuning parameter $\tau$, not on the contamination point $\mathbf{x}$. The boundedness of the IF of the RMRPE therefore depends only on the boundedness of the factor

$$\left[ f\_{\boldsymbol{\theta}}(\mathbf{x})^{\tau} \boldsymbol{u}\_{\boldsymbol{\theta}}(\mathbf{x}) - c\_{\tau}(\boldsymbol{\theta}) f\_{\boldsymbol{\theta}}(\mathbf{x})^{\tau} \right].$$

Therefore, the boundedness of the IF of the RMRPE depends directly on the boundedness of the IF of the MRPE, stated in (20). The IF of the MRPE has been widely studied for general statistical models, concluding that the MRPEs are robust for positive values of $\tau$ and that this robustness increases with the tuning parameter; a full discussion can be found in Broniatowski et al. [8]. Hence, the same properties hold for the RMRPEs.
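To illustrate the boundedness for a concrete case, the following sketch (ours, not from the paper) evaluates the $\mu$-component of the factor $f\_{\boldsymbol{\theta}}(x)^{\tau} \boldsymbol{u}\_{\boldsymbol{\theta}}(x)$ for a standard normal density, where the corresponding component of $c\_{\tau}(\boldsymbol{\theta})$ vanishes by symmetry. For $\tau > 0$ the weight $f^{\tau}$ redescends to zero at gross outliers, while for $\tau = 0$ the factor reduces to the unbounded score.

```python
import math

def psi_mu(x, tau, mu=0.0, sigma=1.0):
    """tau-weighted mu-score f_theta(x)^tau * u_mu(x) for N(mu, sigma^2);
    the c_tau correction is zero in the mu-component by symmetry."""
    f = math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))
    u = (x - mu) / sigma ** 2  # score function d/dmu log f_theta(x)
    return f ** tau * u

# tau > 0: the factor is bounded and vanishes at gross outliers;
# tau = 0 (MLE): the factor is the raw score, unbounded in x.
vals_robust = [abs(psi_mu(x, tau=0.5)) for x in (5, 10, 50)]
vals_mle = [abs(psi_mu(x, tau=0.0)) for x in (5, 10, 50)]
```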

#### **4. Robust Test Statistics Based on RMRPEs**

In this section, we develop two test statistics based on the RMRPEs for testing composite null hypotheses, and we obtain their asymptotic distributions. Both procedures are then particularized to testing the standard deviation (with unknown mean) under normal populations, and explicit expressions of the test statistics are derived.

## *4.1. Testing Based on Divergence Measures*

In this section, we present the family of Rényi's pseudodistance test statistics (RPTS) for testing the null hypothesis given in (3). This family of test statistics is given by

$$T\_{\gamma}(\widehat{\boldsymbol{\theta}}\_{\tau}, \widetilde{\boldsymbol{\theta}}\_{\tau}) = 2n R\_{\gamma}\left( f\_{\widehat{\boldsymbol{\theta}}\_{\tau}}, f\_{\widetilde{\boldsymbol{\theta}}\_{\tau}} \right).\tag{23}$$

The RPTS $T\_{\gamma}(\widehat{\boldsymbol{\theta}}\_{\tau}, \widetilde{\boldsymbol{\theta}}\_{\tau})$ can be understood as a measure of the discrepancy between the best unrestricted estimator of the model parameter and the best estimator satisfying the null hypothesis. Large values of the RPTS indicate that the model densities associated with the restricted and unrestricted estimators are far from each other, and so the null hypothesis is not supported by the observed data. Hence, we should reject $H\_0$ for sufficiently large $T\_{\gamma}(\widehat{\boldsymbol{\theta}}\_{\tau}, \widetilde{\boldsymbol{\theta}}\_{\tau})$. Note that the family of RPTS defined in (23) depends on two tuning parameters, $\tau$ and $\gamma$: the first is used for estimating the unknown parameters, while the second defines the family of test statistics. The following theorem presents the asymptotic distribution of the family of RPTS defined in (23).

**Theorem 3.** *The asymptotic distribution of $T\_{\gamma}(\widehat{\boldsymbol{\theta}}\_{\tau}, \widetilde{\boldsymbol{\theta}}\_{\tau})$ defined in (23) coincides, under the null hypothesis $H\_0$ given in (3), with the distribution of the random variable*

$$\sum\_{i=1}^{r} \lambda\_{i}^{\tau,\gamma}(\boldsymbol{\theta}\_0) Z\_{i}^{2},$$

*where $Z\_1, \ldots, Z\_r$ are independent standard normal variables, $\lambda\_1^{\tau,\gamma}(\boldsymbol{\theta}\_0), \ldots, \lambda\_r^{\tau,\gamma}(\boldsymbol{\theta}\_0)$ are the nonzero eigenvalues of $M\_{\gamma,\tau}(\boldsymbol{\theta}\_0) = A\_{\gamma}(\boldsymbol{\theta}\_0) B\_{\tau}(\boldsymbol{\theta}\_0) K\_{\tau}(\boldsymbol{\theta}\_0) B\_{\tau}(\boldsymbol{\theta}\_0)$, and $k = r$. The matrices $A\_{\gamma}(\boldsymbol{\theta}\_0)$ and $B\_{\tau}(\boldsymbol{\theta}\_0)$ are given by*

$$A\_{\gamma}(\boldsymbol{\theta}\_0) = \frac{\mathcal{S}\_{\gamma}(\boldsymbol{\theta}\_0)}{\kappa\_{\tau}(\boldsymbol{\theta}\_0)},\tag{24}$$

$$B\_{\tau}(\boldsymbol{\theta}\_0) = \boldsymbol{Q}\_{\tau}(\boldsymbol{\theta}\_0) G(\boldsymbol{\theta}\_0)^{T} \boldsymbol{S}\_{\tau}(\boldsymbol{\theta}\_0)^{-1}.\tag{25}$$

**Proof.** See Appendix A.
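In practice, critical values for the limiting law of Theorem 3 can be approximated by Monte Carlo once the eigenvalues have been computed. A minimal sketch (the eigenvalues `lams` below are made-up illustrative values, not derived from any concrete model):

```python
import random

def simulate_weighted_chi2(lams, n_sim=100_000, seed=0):
    """Draws from sum_i lam_i * Z_i^2 with Z_i independent standard normal."""
    rng = random.Random(seed)
    return [sum(l * rng.gauss(0.0, 1.0) ** 2 for l in lams)
            for _ in range(n_sim)]

# Hypothetical eigenvalues standing in for lambda_i^{tau,gamma}(theta_0);
# in practice they come from A_gamma, B_tau and K_tau at the null value.
lams = [1.2, 0.5]
draws = sorted(simulate_weighted_chi2(lams))
crit_95 = draws[int(0.95 * len(draws))]  # approximate 95% critical value
mean_draw = sum(draws) / len(draws)      # E[sum lam_i Z_i^2] = sum(lams)
```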

Rényi's Pseudodistance Test Statistics for Normal Populations

Under the $\mathcal{N}(\mu, \sigma^2)$ model, consider the problem of testing

$$H\_0: \sigma = \sigma\_0 \text{ versus } H\_1: \sigma \neq \sigma\_0 \tag{26}$$

where $\mu$ is an unknown nuisance parameter. In this case, the unrestricted and null parameter spaces are given by $\Theta = \{(\mu, \sigma) \in \mathbb{R}^2 \mid \mu \in \mathbb{R}, \sigma \in \mathbb{R}^{+}\}$ and $\Theta\_0 = \{(\mu, \sigma) \in \mathbb{R}^2 \mid \sigma = \sigma\_0, \mu \in \mathbb{R}\}$, respectively. If we consider the function $g(\boldsymbol{\theta}) = \sigma - \sigma\_0$, with $\boldsymbol{\theta} = (\mu, \sigma)^{T}$, the null hypothesis $H\_0$ can be written as

$$H\_0: g(\boldsymbol{\theta}) = 0$$

and we are in the situation considered in (3). In our case, $G(\boldsymbol{\theta}) = (0, 1)^{T}$. Based on (6), and taking into account that $f\_{\boldsymbol{\theta}}(x)$ is the normal density with mean $\mu$ and variance $\sigma^2$, the MRPE $\widehat{\boldsymbol{\theta}}\_{\tau} = (\widehat{\mu}\_{\tau}, \widehat{\sigma}\_{\tau})^{T}$ of $\boldsymbol{\theta} = (\mu, \sigma)^{T}$ is the solution of the system of nonlinear equations

$$\begin{cases} \quad \sum\_{i=1}^{n} (X\_i - \mu) \exp\left\{-\frac{\tau}{2} \left(\frac{X\_i - \mu}{\sigma}\right)^2\right\} = 0\\ \quad \sum\_{i=1}^{n} \left\{ \left(\frac{X\_i - \mu}{\sigma}\right)^2 - \frac{1}{1 + \tau} \right\} \exp\left\{-\frac{\tau}{2} \left(\frac{X\_i - \mu}{\sigma}\right)^2\right\} = 0 \end{cases}$$

while the RMRPE $\widetilde{\boldsymbol{\theta}}\_{\tau} = (\widetilde{\mu}\_{\tau}, \sigma\_0)^{T}$ under the constraint $\sigma = \sigma\_0$ is the solution of the nonlinear equation

$$\sum\_{i=1}^{n} (X\_i - \mu) \exp\left\{ -\frac{\tau}{2} \left( \frac{X\_i - \mu}{\sigma\_0} \right)^2 \right\} = 0.$$
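One practical way to solve these estimating equations is a fixed-point (iteratively reweighted) scheme: both equations are weighted moments with weights $w\_i = \exp\{-\frac{\tau}{2}((X\_i-\mu)/\sigma)^2\}$, giving the updates $\mu \leftarrow \sum w\_i X\_i / \sum w\_i$ and $\sigma^2 \leftarrow (1+\tau) \sum w\_i (X\_i-\mu)^2 / \sum w\_i$. A minimal sketch for the unrestricted system (our implementation, assuming the iteration converges for the chosen $\tau$; the data are artificial):

```python
import math

def mrpe_normal(xs, tau, iters=200):
    """Fixed-point iteration for the MRPE of (mu, sigma) under normality,
    using weights w_i = exp(-tau/2 * ((x_i - mu)/sigma)^2)."""
    mu = sum(xs) / len(xs)
    sigma = math.sqrt(sum((x - mu) ** 2 for x in xs) / len(xs))
    for _ in range(iters):
        w = [math.exp(-0.5 * tau * ((x - mu) / sigma) ** 2) for x in xs]
        mu = sum(wi * x for wi, x in zip(w, xs)) / sum(w)
        sigma = math.sqrt((1 + tau) * sum(wi * (x - mu) ** 2
                                          for wi, x in zip(w, xs)) / sum(w))
    return mu, sigma

# Small artificial sample centered at 5 with one gross outlier.
xs = [4.9, 5.1, 5.0, 4.8, 5.2, 5.05, 4.95, 30.0]
mu_hat, s_hat = mrpe_normal(xs, tau=0.5)

# Plug the solution back into both estimating equations (tau = 0.5).
w = [math.exp(-0.25 * ((x - mu_hat) / s_hat) ** 2) for x in xs]
r1 = sum((x - mu_hat) * wi for x, wi in zip(xs, w))
r2 = sum((((x - mu_hat) / s_hat) ** 2 - 1 / 1.5) * wi for x, wi in zip(xs, w))
```

The outlier receives an exponentially small weight, so the estimates track the inlier cluster; the restricted case is analogous, keeping $\sigma = \sigma\_0$ fixed and iterating only the $\mu$-update.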

After some algebra (see Appendix A), we obtain that the RPTS for testing (26) under normal populations can be expressed as

$$\begin{split} T\_{\gamma}(\widehat{\boldsymbol{\theta}}\_{\tau}, \widetilde{\boldsymbol{\theta}}\_{\tau}) &= 2n R\_{\gamma}\Big( \mathcal{N}(\widehat{\mu}\_{\tau}, \widehat{\sigma}\_{\tau}^{2}), \mathcal{N}(\widetilde{\mu}\_{\tau}, \sigma\_{0}^{2}) \Big) \\ &= \frac{2n}{\gamma(\gamma+1)} \log \left[ \frac{1}{\widehat{\sigma}\_{\tau} \sigma\_{0}^{\gamma}} \left( \frac{\sqrt{\widehat{\sigma}\_{\tau}^{2} + \gamma \sigma\_{0}^{2}}}{\sqrt{\gamma+1}} \right)^{\gamma+1} \right] + n \frac{(\widehat{\mu}\_{\tau} - \widetilde{\mu}\_{\tau})^{2}}{\gamma \sigma\_{0}^{2} + \widehat{\sigma}\_{\tau}^{2}} \end{split} \tag{27}$$
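As a numerical sanity check of (27), the sketch below (our code; the values of $n$, $\widehat{\sigma}\_{\tau}$ and $\sigma\_0$ are illustrative) evaluates the statistic and verifies that, as $\gamma \to 0$ with $\widehat{\mu}\_{\tau} = \widetilde{\mu}\_{\tau}$, it approaches the likelihood-ratio form $n\sigma\_0^2/\widehat{\sigma}\_n^2 - n + 2n\ln(\widehat{\sigma}\_n/\sigma\_0)$ discussed in Remark 1:

```python
import math

def rpts_normal(n, mu_hat, s_hat, mu_tilde, sigma0, gamma):
    """RPTS of (27) for testing sigma = sigma0 under normality (gamma > 0)."""
    inner = ((math.sqrt(s_hat ** 2 + gamma * sigma0 ** 2)
              / math.sqrt(gamma + 1)) ** (gamma + 1)) / (s_hat * sigma0 ** gamma)
    term1 = (2 * n / (gamma * (gamma + 1))) * math.log(inner)
    term2 = n * (mu_hat - mu_tilde) ** 2 / (gamma * sigma0 ** 2 + s_hat ** 2)
    return term1 + term2

# Illustrative values; with mu_hat = mu_tilde the statistic should approach
# the tau = gamma = 0 (likelihood-ratio) form as gamma -> 0.
n, s_hat, s0 = 50, 1.3, 1.0
t_small = rpts_normal(n, 0.0, s_hat, 0.0, s0, gamma=1e-6)
t_lrt = n * s0 ** 2 / s_hat ** 2 - n + 2 * n * math.log(s_hat / s0)
```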

Based on (27), and taking into account that the nonzero eigenvalue of the matrix $A\_{\gamma}(\boldsymbol{\theta}) B\_{\tau}(\boldsymbol{\theta}) K\_{\tau}(\boldsymbol{\theta}) B\_{\tau}(\boldsymbol{\theta})$ is given by (see Appendix A)

$$d\_{\tau,\gamma}(\sigma) = \frac{1}{2} \frac{(\tau + 1)^3}{(\gamma + 1)^2 (2\tau + 1)^{\frac{5}{2}}} \left(3\tau^2 + 4\tau + 2\right),$$

applying Theorem 3 we obtain

$$d\_{\tau,\gamma}(\sigma\_{0})^{-1} \left( \frac{2n}{\gamma(\gamma+1)} \log \left[ \frac{1}{\widehat{\sigma}\_{\tau} \sigma\_{0}^{\gamma}} \left( \frac{\sqrt{\widehat{\sigma}\_{\tau}^{2} + \gamma \sigma\_{0}^{2}}}{\sqrt{\gamma+1}} \right)^{\gamma+1} \right] + n \frac{(\widehat{\mu}\_{\tau} - \widetilde{\mu}\_{\tau})^{2}}{\gamma \sigma\_{0}^{2} + \widehat{\sigma}\_{\tau}^{2}} \right) \overset{\mathcal{L}}{\underset{n \to \infty}{\rightarrow}} \chi\_{1}^{2}.$$

Note that the RPTS is indexed by two tuning parameters, $\gamma$ and $\tau$: the first controls the robustness of the pseudodistance, and the second controls the robustness of the estimation. For simplicity, we use $\gamma = \tau$ in the normal population application.
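Since the scaling factor $d\_{\tau,\gamma}$ above does not depend on the data, it can be tabulated once per choice of tuning parameters. A small sketch (our code) implementing the formula and checking that it reduces to 1 at $\tau = \gamma = 0$, consistent with the classical $\chi^2\_1$ calibration of the likelihood-ratio test:

```python
def d_factor(tau, gamma):
    """Scaling d_{tau,gamma} so that the RPTS divided by it is
    asymptotically chi-square with one degree of freedom."""
    return 0.5 * (tau + 1) ** 3 * (3 * tau ** 2 + 4 * tau + 2) / (
        (gamma + 1) ** 2 * (2 * tau + 1) ** 2.5)

# Sanity check: tau = gamma = 0 recovers the classical calibration (d = 1);
# tau = gamma = 0.5 illustrates the gamma = tau choice used in the text.
d00 = d_factor(0.0, 0.0)
d_half = d_factor(0.5, 0.5)
```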

**Remark 1.** *For $\tau = \gamma = 0$, the RPTS coincides with the asymptotic likelihood ratio test for testing (26). Indeed, for $\tau = 0$, the MLE and RMLE are given, respectively, by*

$$
\widehat{\boldsymbol{\theta}} = \left( \overline{X}, \widehat{\sigma}\_n^2 \right), \text{ with } \widehat{\sigma}\_n^2 = \frac{1}{n} \sum\_{i=1}^n (X\_i - \overline{X})^2, \quad \text{and} \quad \widetilde{\boldsymbol{\theta}} = \left( \overline{X}, \sigma\_0^2 \right).
$$

*Now, the expression of the Kullback–Leibler divergence (the RP for $\gamma = 0$) between two normal densities, $\mathcal{N}(\mu\_1, \sigma\_1)$ and $\mathcal{N}(\mu\_2, \sigma\_2)$, is given by*

$$\lim\_{\gamma \to 0} R\_{\gamma}(\mathcal{N}(\mu\_1, \sigma\_1), \mathcal{N}(\mu\_2, \sigma\_2)) = \frac{\sigma\_2^2 - \sigma\_1^2}{2\sigma\_1^2} + \ln \frac{\sigma\_1}{\sigma\_2} + \frac{1}{2} \frac{(\mu\_1 - \mu\_2)^2}{\sigma\_1^2},\tag{28}$$

*and thus the RPTS for $\gamma = \tau = 0$ is*

$$T\_0(\widehat{\theta}, \widetilde{\theta}) = n \frac{\sigma\_0^2}{\widehat{\sigma}\_n^2} - n + 2n \ln \frac{\widehat{\sigma}\_n}{\sigma\_0}.$$

*On the other hand, the likelihood ratio for testing (26) is given by*

$$
\lambda(X\_1, \ldots, X\_n) = \left( \frac{\widehat{\sigma}\_n^2}{\sigma\_0^2} \right)^{n/2} e^{-n \frac{\widehat{\sigma}\_n^2}{2\sigma\_0^2}} e^{n/2},
$$

*and so, both expressions are related through*

$$-2\ln\lambda(X\_1,\ldots,X\_n) = T\_0(\widehat{\theta},\widetilde{\theta}).$$
