**1. Introduction**

Robust statistics, as a new field of mathematical statistics, originates from the pioneering works of John Tukey (1960) [1], Peter Huber (1964) [2], and Frank Hampel (1968) [3]. The term "robust" (Latin: strong, vigorous, sturdy, tough, powerful) was introduced into statistics by George Box (1953) [4].

The reasons for research in this field of statistics are of a general mathematical nature: the concepts of "optimality" and "stability" are mutually complementary in evaluating the performance of almost all mathematical procedures, and a trade-off between them is often the goal sought.

It is not rare that the performance of optimal solutions is highly sensitive to small violations of the assumed conditions of optimality. In statistics, the classical example of such an unstable optimal procedure is given by the least squares estimates, whose performance under small deviations from normality can become disastrous [5].
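This sensitivity is easy to observe numerically. The following sketch (an illustration only; the contamination model, the helper name, and all parameter values are our assumptions, not taken from the paper) draws a sample from an ε-contaminated Gaussian and compares the sample mean, i.e., the least squares estimate of location, with the sample median:

```python
import random
import statistics

def contaminated_sample(n, eps=0.1, shift=10.0, seed=0):
    """Mixture (1 - eps)*N(0, 1) + eps*N(shift, 1): a fraction eps of
    gross outliers located near +shift contaminates a standard normal bulk."""
    rng = random.Random(seed)
    return [rng.gauss(shift, 1.0) if rng.random() < eps else rng.gauss(0.0, 1.0)
            for _ in range(n)]

data = contaminated_sample(10_000)

# The mean is pulled toward the outliers, to about eps * shift = 1,
# even though 90% of the data is centered at 0.
mean_hat = statistics.fmean(data)

# The median stays close to the center of the uncontaminated bulk.
median_hat = statistics.median(data)
```

Here a mere 10% contamination shifts the least squares estimate by roughly one full standard deviation of the bulk, while the median is barely affected.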

Since the term "stability" is overloaded in mathematics, its synonym "robustness" is now conventionally used in statistics and in optimal control theory: in general, it means the stability of statistical inference under uncontrolled violations of the accepted distribution models.

At present, there are two main approaches to robustness: the historically first global minimax approach of Huber (quantitative robustness) [5] and the local approach of Hampel based on influence functions (qualitative robustness) [6]. Within the first approach, the least informative (least favorable) distribution minimizing Fisher information over a certain distribution class is obtained, with the subsequent use of the asymptotically optimal maximum likelihood parameter estimate for this distribution. In this case, the minimax approach gives a guaranteed accuracy of robust estimates, that is, the asymptotic variance of the optimal parameter estimate is upper-bounded over the distributions from the aforementioned class.
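For the ε-contaminated Gaussian neighborhood, Huber's minimax solution leads to the well-known score function ψ_k(x) = max(−k, min(k, x)), linear in the center and bounded in the tails. A minimal sketch of the corresponding M-estimate of location, computed by iteratively reweighted means (the fixed MAD scale and the stopping rule are implementation assumptions of this sketch, not part of the minimax theory itself):

```python
import statistics

def huber_location(xs, k=1.345, tol=1e-8, max_iter=100):
    """Huber's M-estimate of location with score psi_k(x) = max(-k, min(k, x)).

    k = 1.345 is the conventional choice giving ~95% efficiency at the
    Gaussian. Scale is held fixed at the normalized MAD of the sample.
    """
    theta = statistics.median(xs)
    # Normalized MAD; fall back to 1.0 if the MAD degenerates to zero.
    s = statistics.median(abs(x - theta) for x in xs) / 0.6745 or 1.0
    for _ in range(max_iter):
        # Huber weights: w(r) = 1 inside [-k, k], k/|r| outside.
        w = []
        for x in xs:
            r = abs(x - theta) / s
            w.append(1.0 if r == 0 else min(1.0, k / r))
        new_theta = sum(wi * xi for wi, xi in zip(w, xs)) / sum(w)
        if abs(new_theta - theta) < tol:
            break
        theta = new_theta
    return theta
```

Observations far from the current estimate receive down-weights proportional to 1/|residual|, which is exactly how the bounded score ψ_k limits the influence of distributional deviations in the tails.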

Within the second approach, a parameter estimate is defined by its desired influence function, which determines the qualitative robustness properties of the estimate, such as its low sensitivity to gross outliers in the data, to data rounding-off, to missing data, etc.
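For concreteness, recall Hampel's standard definition: the influence function of a functional $T$ at a distribution $F$ is its Gâteaux derivative in the direction of the point mass $\delta_x$,

```latex
\mathrm{IF}(x;\, T, F) \;=\; \lim_{t \to 0^{+}}
\frac{T\bigl((1-t)\,F + t\,\delta_x\bigr) - T(F)}{t},
```

and the gross-error sensitivity $\gamma^{*} = \sup_{x} \lvert \mathrm{IF}(x;\, T, F)\rvert$ quantifies the worst-case effect of an infinitesimal contamination: a bounded influence function is precisely the low sensitivity to gross outliers mentioned above.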

In what follows, we consider these methodologies in detail, focusing on the optimization and variational calculus tools used in both of the aforementioned approaches. Within Huber's minimax approach, we review the conventional least informative (least favorable) distributions obtained for neighborhoods of a Gaussian [5], as well as those designed for a variety of non-standard distribution classes of a non-neighborhood nature [7]. Within Hampel's local approach [6], we mainly emphasize its recently developed stable estimation branch, with its originally posed variational calculus problems and promising results on their application to robust statistics [8].

While this paper focuses on particular topics in the field of robust statistics, it is worth noting a few comprehensive reviews that also cover the present state of the art in this field, namely References [9–14].

An outline of the remainder of the article is as follows. In Section 2, the general problem setting for the design of minimax variance *M*-estimates of location is recalled. In Section 3, the globally stable Meshalkin–Shurygin *M*-estimates are described. In Section 4, a comparative performance evaluation of the conventional robust *M*-estimates of location against several newly proposed *M*-estimates is carried out (the univariate setting is considered throughout the paper), and several unexpected results are reported. In Section 5, conclusions are given.

#### **2. Huber's Minimax Variance Robust M-Estimates of Location**
