**2. Background Review**

In this section, we review state-of-the-art scientific indicators and several recent ranking methods.

#### *2.1. A Brief Description of State-of-the-Art Scientific Indicators*

Several indicators have been proposed to measure scientific achievement. Pioneering studies introduced basic indicators and described how they can be combined to capture a general picture of researchers' scientific output [18,19]. These indicators can be categorized into the following three main groups [20,21]:


Some advantages and disadvantages of well-known indicators [6,22] are shown in Table 1.


**Table 1.** A summary of advantages and disadvantages for some commonly used indicators.

In 2005, Hirsch dramatically changed scientometrics (informetrics) by introducing the *h*-index. Several studies have discussed and extended the *h*-index [23] since its introduction, and it has some significant properties [24,25]. It considers two aspects at once: the number of publications and their citation impact. It performs better than other basic indicators (total number of papers, total number of citations, average number of significant papers, etc.) at evaluating scientific achievement; in [25], an empirical study confirmed the superiority of the *h*-index over these basic indicators. In addition, the *h*-index can be computed effortlessly from available resources such as the ISI Web of Science. Although it is extensively used as a scientometric measure, it still suffers from the following drawbacks [1,26–28]:
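To make the definition concrete, the *h*-index can be computed with a short sketch from a researcher's citation counts (the sample citation list below is invented for illustration):

```python
def h_index(citations):
    """Largest h such that h papers each have at least h citations."""
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank  # this paper still supports an h of this size
        else:
            break     # papers are sorted, so no later paper can
    return h

# A researcher whose papers were cited [10, 8, 5, 4, 3, 1, 0] times
# has h = 4: four papers with at least 4 citations each.
print(h_index([10, 8, 5, 4, 3, 1, 0]))  # 4
```

Sorting once and scanning down the ranked list is all the computation the index requires, which is why it can be read directly off databases such as the Web of Science.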


Several variants of the *h*-index have been developed to overcome these drawbacks. The *m*-quotient [6] was proposed to account for the number of years since the first publication, and it is computed as follows:

$$m\text{-quotient} = \frac{h\text{-index}}{n},\tag{1}$$

where *n* is the number of years since the scientist's first published paper. Batista et al. [29] introduced a complementary index, the *hI*-index, which is defined by:

$$hI = \frac{h^2}{N\_a^T},\tag{2}$$

where $N\_a^T$ is the total number of authors contributing to the considered *h* papers. In [30], the *A*-index was suggested as the average number of citations of the publications included in the *h*-core (Hirsch core), mathematically defined as:

$$A = \frac{1}{h} \sum\_{j=1}^{h} cit\_j \tag{3}$$

The *AR*-index [31] was proposed as the square root of the sum of the average number of citations per year of the articles included in the *h*-core. The mathematical definition of this index is given below:

$$AR = \sqrt{\sum\_{j=1}^{h} \frac{cit\_j}{a\_j}},\tag{4}$$

where *a<sub>j</sub>* is the age of the *j*-th paper. Liang et al. [26] suggested a new index, the *R*-index, which is found by calculating the square root of the sum of citations in the Hirsch core without dividing by *h*. This indicator is mathematically defined as:

$$R = \sqrt{\sum\_{j=1}^{h} cit\_j} \tag{5}$$

Egghe [28] introduced the *g*-index, defined as the highest number *g* of papers such that the top *g* papers together have received at least *g*<sup>2</sup> citations. Additionally, it has been proven that there is a unique *g* for any set of papers and that *g* ≥ *h*. Egghe and Rousseau [32] proposed the citation-weighted *h*-index (*h<sub>w</sub>*-index) as follows:

$$r\_w(i) = \frac{\sum\_{j=1}^{i} cit\_j}{h}, \qquad h\_w = \sqrt{\sum\_{j=1}^{r\_0} cit\_j},\tag{6}$$

where *cit<sub>j</sub>* is the number of citations of the *j*-th most cited paper and *r*<sub>0</sub> is the largest row index *i* such that *r<sub>w</sub>*(*i*) ≤ *cit<sub>i</sub>*. In general, even the enhanced versions of the *h*-index suffer from combining several metrics instead of considering them simultaneously.
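All of the variants above operate on the same sorted citation list, so they can be compared side by side. The following sketch implements Equations (1) and (3)–(6) together with the *g*-index; paper ages for the *AR*-index are supplied as a parallel list, the *g*-index is capped at the number of published papers, and the sample citation counts and ages are invented for illustration:

```python
import math

def h_index(cits):
    """Largest h such that h papers have at least h citations each."""
    cits = sorted(cits, reverse=True)
    return max((r for r, c in enumerate(cits, 1) if c >= r), default=0)

def m_quotient(cits, years):
    """Eq. (1): h-index divided by years since the first publication."""
    return h_index(cits) / years

def a_index(cits):
    """Eq. (3): mean number of citations of the papers in the h-core."""
    h = h_index(cits)
    return sum(sorted(cits, reverse=True)[:h]) / h if h else 0.0

def ar_index(cits, ages):
    """Eq. (4): sqrt of summed citations-per-year over the h-core."""
    pairs = sorted(zip(cits, ages), reverse=True)  # most cited first
    h = h_index(cits)
    return math.sqrt(sum(c / a for c, a in pairs[:h]))

def r_index(cits):
    """Eq. (5): sqrt of the total citations in the h-core."""
    h = h_index(cits)
    return math.sqrt(sum(sorted(cits, reverse=True)[:h]))

def g_index(cits):
    """Largest g such that the top g papers have at least g^2 citations
    (capped here at the number of published papers)."""
    total, g = 0, 0
    for r, c in enumerate(sorted(cits, reverse=True), 1):
        total += c
        if total >= r * r:
            g = r
    return g

def hw_index(cits):
    """Eq. (6): citation-weighted h-index; r0 is the largest rank i
    such that r_w(i) <= cit_i."""
    cits = sorted(cits, reverse=True)
    h = h_index(cits)
    if h == 0:
        return 0.0
    r0, cum = 0, 0
    for i, c in enumerate(cits, 1):
        cum += c
        if cum / h <= c:
            r0 = i
    return math.sqrt(sum(cits[:r0]))

cits = [10, 8, 5, 4, 3, 1, 0]   # invented citation counts
ages = [5, 4, 4, 3, 2, 2, 1]    # invented paper ages (years)
print(h_index(cits), g_index(cits), a_index(cits))  # 4 5 6.75
```

Note that this `g_index` does not count fictitious zero-citation papers beyond the published set, so for very highly cited small sets it can be smaller than Egghe's original definition allows.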

#### *2.2. A Brief Review of Ranking Methods*

At the researcher level, all of the mentioned indicators can be applied to measure researchers' achievements. Although other applications such as ranking scientific journals, research teams, research institutions, and countries tend to include a more comprehensive set of indicators, it is possible to apply researcher-level indicators in these comparative settings as well. For example, the *h*-index can be calculated for an institute: "The *h*-index of an institute would be *h*<sub>2</sub> if *h*<sub>2</sub> number of its researchers have an *h*<sub>1</sub>-index of at least *h*<sub>2</sub> each, and the other (*N* − *h*<sub>2</sub>) researchers have *h*<sub>1</sub>-indices lower than *h*<sub>2</sub> each" [7]. In the following, we briefly review some common ranking methods and indicators for universities. University rankings mainly use two general categories of methodologies [33–39]: the first category combines all indicators [40,41] into a single score, while the second category focuses on a single dimension of university performance, such as the quality of research output [4], career outcomes of graduates [37], or the mean *h*-index [42]. Other indicators used for university rankings include publication and citation counts, the student/faculty ratio, the percentage of international students, Nobel and other prizes, the number of highly cited researchers and papers, articles published in *Science* and *Nature*, the *h*-index, and web visibility. First, some ranking methodologies of the first category are briefly described below.

Liu and Cheng [43] proposed a ranking strategy, called the Academic Ranking of World Universities (ARWU), which considers four measures: quality of education, quality of faculty, research output, and per capita performance. To compare these four measures, the following six indicators are considered: (1) alumni of a university winning a Nobel Prize or a Fields Medal, (2) staff of a university winning a Nobel Prize or a Fields Medal, (3) highly cited researchers in 21 broad scientific fields, (4) publications in *Nature* and *Science*, (5) publications indexed in the Web of Science, and (6) per capita academic performance of a university. The best-performing university in each category receives a score of 100 and serves as the benchmark for computing the scores of all other universities; the total score of each university is then calculated as a weighted average of its individual category scores [44]. The THE-QS World University Ranking (THE-QS) (http://www.topuniversities.com), published by the Quacquarelli Symonds company, considers six distinct indicators: academic reputation according to a large survey (40%), employer reputation (10%), the student/faculty ratio (20%), citations per faculty member based on the Scopus database (20%), the proportion of international professors (5%), and the proportion of international students (5%). The World University Ranking developed by Times Higher Education (www.timeshighereducation.co.uk/world-universityrankings) [41] considers 13 indicators, categorized into five areas: teaching (30%), research (30%), citations (30%), industry income (2.5%), and international outlook (7.5%); the citation impact indicator is normalized to be suitable for different scientific output data.
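The aggregation scheme shared by these first-category rankings (scale each indicator so the best performer scores 100, then take a weighted average) can be sketched as follows; the university names, indicator values, and weights are invented for illustration and are not those of any actual ranking:

```python
# Illustrative weighted-aggregate ranking in the style of the first category:
# each indicator is scaled so the best-performing university scores 100,
# then a weighted average yields the single overall score.

weights = {"publications": 0.4, "citations": 0.4, "awards": 0.2}

raw = {  # made-up indicator values for three hypothetical universities
    "Univ A": {"publications": 5000, "citations": 120000, "awards": 3},
    "Univ B": {"publications": 8000, "citations": 90000,  "awards": 1},
    "Univ C": {"publications": 3000, "citations": 60000,  "awards": 5},
}

# Benchmark: the best performer on each indicator defines the 100-point mark.
best = {k: max(u[k] for u in raw.values()) for k in weights}

scores = {
    name: sum(weights[k] * 100 * vals[k] / best[k] for k in weights)
    for name, vals in raw.items()
}

for name, s in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{name}: {s:.1f}")
```

Because every indicator is rescaled against the current best performer, a university's score can change from year to year even when its own raw numbers do not, which is one known sensitivity of this aggregation style.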

Another global ranking is the SCImago Institutions Rankings (SIR), developed by the SCImago research group in Spain (www.scimagoir.com) [45]. SIR combines a quantity metric with various quality metrics, and its indicators are divided into three groups: research (total publication output based on the Scopus database, international collaboration, leadership output, high-quality publications, excellence, scientific leadership, excellence with leadership, and scientific talent pool), innovation (innovative knowledge and technological impact), and societal impact (web size and the number of incoming links). The Cybermetrics Lab developed the Ranking Web of World Universities, or Webometrics Ranking [46,47], which uses web data extracted from commercial search engines, including the number of webpages, documents in rich formats (pdf, doc, ppt, and ps), papers indexed by Google Scholar (an indicator added in 2006), and the number of external inlinks as a measure of link visibility or impact. The Higher Education Evaluation and Accreditation Council of Taiwan [48] conducts a university ranking that applies multiple indicators in three categories: research productivity (the number of articles published in the past 11 years (10%) and the number of articles published in the current year (15%)), research impact (the number of citations in the past 11 years (15%), the number of citations in the past 2 years (10%), and the average number of citations in the past 11 years (10%)), and research excellence (the *h*-index of the last 2 years (10%), the number of highly cited papers in the past 11 years (15%), and the number of articles of the current year in high-impact journals (15%)). These rankings combine multiple weighted indicators into a single aggregate score used to rank all universities. Additionally, some university rankings [49,50] employed the I-distance method [51] to combine all indicators into a single score for ranking.
Besides its ability to calculate a single index (by considering several indicators) and consequently rank countries, the CIDI strategy utilizes Pearson's correlation coefficients calculated with the I-distance method, so the relevance of each input measure is preserved. Instead of computing numerical weights, the I-distance method identifies the most important indicators: the rank of each indicator is determined by ordering them based on these correlations. In the following, we mention some ranking methodologies of the second category. The Centre for Science and Technology Studies at Leiden University publishes the Leiden Ranking (http://www.cwts.nl/ranking/LeidenRankingWebsite) [4,52], which has two main categories of indicators: impact and collaboration. The impact group includes three indicators: mean citation score, mean normalized citation score, and the proportion of top 10% publications. The collaboration group includes four indicators: the proportion of inter-institutional collaborative publications, the proportion of international collaborative publications, the proportion of collaborative publications with industry, and the mean geographical collaboration distance. The Leiden Ranking thus assesses scientific performance directly instead of combining multiple indicators of university performance into a single aggregate indicator. U-Multirank [53,54] embraces the variety of institutional missions and profiles and includes teaching- and learning-related indicators; additionally, it adopts a user-driven approach in which stakeholders/users are asked to determine the indicators and their quality for the ranking. The authors of [37] proposed a ranking methodology that considers only the career outcomes of university graduates; this ranking focuses on the impact of universities on industry through their graduates. Finally, the mean *h*-index was used in [42] as a metric to rank the chemical engineering, chemistry, materials science, and physics departments in Greece.
