**1. Introduction**

Nowadays, ranking scientific impact is a crucial task and a focus of research communities, universities, and governmental funding agencies. The target entities of such rankings can be researchers, universities, countries, journals, or conferences. Performance analysis and benchmarking of scientific achievement serve a variety of substantial purposes. At the researcher level, research impact is an important measure that guides the decisions of academic institutions and universities on funding, hiring, and promotions [1–3]. From the university's viewpoint, rankings are considered a source of strategic information that governments, funding agencies, and the media use to compare universities; students and their parents, in turn, use university rankings as a selection criterion [4]. As the assessment of scientific achievement has gained a great deal of attention from various interested groups, such as students, parents, institutions, academicians, policy makers, political leaders, donors/funding agencies, and the news media, several assessment methods have been developed in the fields of bibliometrics and scientometrics through the use of mathematical and/or statistical methods [1].

In order to measure a researcher's performance, many indicators have been proposed, which can also be applied in other scientific areas. Traditional research indicators include the number of publications, the number of citations, the average number of citations per paper, and the average number of citations per year [5]. In 2005, Hirsch [6] proposed a new indicator, called the *h*-index, which revolutionized scientometrics (informetrics). The original definition of the *h*-index is: "A scientist has index *h* if *h* of his/her *Np* papers have at least *h* citations each, and the other *Np* − *h* papers have no more than *h* citations each." Later, other indicators were proposed to enhance the *h*-index, and the *h*-index was also defined for other scientific aggregation levels [7]. Ranking methods at the researcher level tend to use only one indicator (the *h*-index or one of its improved versions), whereas at other aggregation levels a more comprehensive set of indicators is preferred. Research in scientometrics can be divided into two main categories: methods that introduce new indicators to improve the performance of assessment metrics, and methods that develop enhanced ranking schemes for combining several different indicators into a rank.
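Hirsch's definition above translates directly into a small computation: sort a researcher's citation counts in descending order and find the largest position *h* at which the paper in that position still has at least *h* citations. The following sketch (with made-up citation lists) illustrates this; it is an illustration of the standard definition, not part of the proposed method.

```python
def h_index(citations):
    """Return the h-index: the largest h such that at least h papers
    have at least h citations each (Hirsch, 2005)."""
    ranked = sorted(citations, reverse=True)
    h = 0
    for position, cites in enumerate(ranked, start=1):
        if cites >= position:
            h = position  # the paper at this rank still supports h
        else:
            break
    return h

# Hypothetical citation records for two researchers:
print(h_index([10, 8, 5, 4, 3]))  # 4 (four papers with >= 4 citations)
print(h_index([25, 8, 5, 3, 3]))  # 3 (only three papers with >= 3 citations... the 4th has 3 < 4)
```

Note how the second researcher's single highly cited paper (25 citations) does not raise the *h*-index, which is one of the criticisms that motivated the later variants mentioned above.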

Ranking methods are of two kinds: those that focus on a single indicator, and those that combine several. Relying on a single indicator makes it very hard to reveal quality differences among research outcomes. On the other hand, considering several indicators simultaneously raises a few challenges of its own: the method must find proper weights for combining the indicators, as well as an efficient strategy for merging indicators of several different types.

In the field of optimization, an algorithm tries to find the best solution in a search space in terms of an objective function that should be minimized or maximized [8]. While in single-objective problems [9] there is only one objective to be optimized, in the multi-objective version the algorithm tries to find a set of solutions based on more than one objective [10]. In multi-objective optimization [11,12], non-dominated sorting [13,14] is defined and used as a measure of efficiency in metaheuristic-based methods [15,16]. In [17], basic dominance ranking was used to identify excellent scientists according to all selected criteria: all researchers in the first Pareto front were selected as excellent scientists. However, as the number of criteria grows (beyond three), most compared entities end up in the first Pareto front [17]. In this paper, we propose a modified non-dominated sorting which, in addition to the basic dominance ranking, utilizes two statistical metrics: the mean and the median of the ranks obtained by sorting each criterion's value across all compared vectors. This ranking has several major advantages: (1) it performs very well at ranking all compared vectors even with a large number of criteria; (2) each Pareto front obtained by the modified non-dominated sorting contains fewer vectors than in the basic non-dominated sorting approach; (3) it can treat the length of the academic research period (called the research period) as an independent indicator, which makes it possible to compare junior and senior researchers; (4) it is independent of, and capable of accommodating, new indicators; (5) there is no need to determine optimal weights for combining indicators.
The modified Pareto dominance ranking was used to rank two research datasets with many criteria: universities (200 samples) and countries (231 samples). In addition, the basic dominance ranking was applied to two research datasets with a small number of criteria: computer science researchers ranked by h-index and publication period (350 samples), and universities ranked by three ranking sources (100 samples).
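To make the dominance terminology concrete, the basic notion used in [17] can be sketched as follows: vector *a* dominates vector *b* if *a* is no worse on every criterion and strictly better on at least one, and the first Pareto front is the set of vectors dominated by no other. This is a minimal sketch of the *basic* dominance ranking only (the modified version with rank means and medians is described in Section 3); the researcher tuples below are hypothetical.

```python
def dominates(a, b):
    """True if vector a Pareto-dominates vector b, assuming higher is
    better on every criterion: a is no worse everywhere and strictly
    better somewhere."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

def first_pareto_front(vectors):
    """Return the vectors that no other vector dominates."""
    return [v for v in vectors
            if not any(dominates(u, v) for u in vectors if u is not v)]

# Hypothetical researchers described by (h-index, total citations):
data = [(20, 900), (15, 1200), (18, 700), (12, 500)]
print(first_pareto_front(data))  # [(20, 900), (15, 1200)]
```

The example also shows the weakness noted above: the vectors (20, 900) and (15, 1200) are mutually non-dominated, so both land in the first front, and with many criteria such ties multiply until the first front holds most of the entities.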

The remaining sections of this paper are organized as follows. Section 2 presents a background review which provides state-of-the-art scientific indicators and ranking methods. Section 3 describes the proposed ranking method in detail. Section 4 presents case studies and corresponding discussions. Finally, the paper is concluded in Section 5.
