Article

Does the Hirsch Index Improve Research Quality in the Field of Biomaterials? A New Perspective in the Biomedical Research Field

by
Saverio Affatato
* and
Massimiliano Merola
Laboratorio di Tecnologia Medica, IRCCS-Istituto Ortopedico Rizzoli, Via di Barbiano, 1/10 40136 Bologna, Italy
* Author to whom correspondence should be addressed.
Materials 2018, 11(10), 1967; https://doi.org/10.3390/ma11101967
Submission received: 21 September 2018 / Revised: 8 October 2018 / Accepted: 11 October 2018 / Published: 13 October 2018

Abstract: Orthopaedic implants offer valuable solutions to many pathologies of bones and joints. Research in this field is driven by the aim of realizing durable and biocompatible devices; therefore, great effort is spent on material analysis and characterization. As a demonstration of the importance assumed by tribology in medical devices, wear and friction are two of the main topics of investigation for joint prostheses. Research is led and supported by public institutions, whether universities or research centers, which are evaluated on the basis of their laboratories' outputs. Performance criteria that assess an author's impact on research have also contributed to an inflation in the number of authors per publication. The need to measure the research activity of an institution is an essential goal, and this has led to the development of indicators capable of giving a rating to the publications that disseminate it. The main purpose of this work was to observe the variation of the Hirsch index (h-index) when the position of the authors is considered. To this end, we conducted an analysis evaluating the h-index while excluding intermediate authorship positions. We found that the higher the h value, the larger the divergence between this value and the corrected one, where the correction consists of excluding publications in which the author does not hold a relevant position. We propose considering the authorship order in a publication in order to obtain more information on the impact that authors have on their research field, and we suggest giving the users of researcher registers (e.g., Scopus, Google Scholar) the possibility to exclude from the h-index evaluation those works in which the scientist holds a marginal position.

1. Introduction

Joint replacement surgery is a successful and consolidated branch of orthopaedics. Its progressive achievement in alleviating pain and disability, helping patients to return to an active life, relies on efficient relationships between clinicians and researchers working across transverse areas of medicine and science [1]. The purpose of tribology research applied to orthopaedics is the minimization and elimination of losses resulting from friction and wear [2]. Research into new biomaterials plays an important role, and as a consequence, in vitro tests for such materials are of great importance [3]. Knowledge of the laboratory wear rate is an important aspect of the preclinical validation of prostheses. Research and development of wear-resistant materials continues to be a high priority [4,5,6]. Clinical research designed to carefully evaluate the performance of new materials intended to reduce wear is essential to ascertaining their efficacy and preventing unexpected failure [7,8]. Unfortunately, failures and revision surgeries still constitute the main clinical problems related to total joint replacement [9]. Research is therefore constantly pushed to find new solutions to wear-related issues and to identify new high-standard materials. Public research institutions receive national funding on the basis of their results and are therefore required to attain high levels of quality assessment [10]. Evaluation of scientific publications is the criterion used by universities and research institutes to measure the merit and value of researchers and academics [11], and it has a crucial impact on the distribution of research funds [12,13,14]. The need to measure the research quality of institutions is an essential goal, which has led to the development of indicators capable of giving a rating to publications.
These indicators are used in bibliometric disciplines to quantitatively evaluate the quality and diffusion of scientific production within the scientific community. In order to obtain funds within the orthopaedic community, there is a strong pressure on researchers to publish even if the merit of the study is unreliable; this is because objectives are set to achieve a certain number of publications instead of focusing on the quality of the research [15].
There are two main ways to evaluate scientific research:
  • Bibliometric indicators are quantitative methods based on the number of times a publication is cited. The higher the number of citations, the larger the group of researchers who have used this work as a reference and, thus, the stronger its impact on the scientific community;
  • Peer review is a qualitative method based on the judgement of experts. A small number of researchers, specialized in the field of the work, analyze and evaluate the scientific value of a publication.
Eugene Garfield proposed the Impact Factor (IF) in 1955 [16,17] with the intent of helping scientists search for bibliographic references; the indicator was quickly adopted to assess the influence of journals and, not long after, of individual scientists. A journal's impact factor is the ratio of two elements: the numerator is the number of citations in the current year to items published in the previous two years, and the denominator is the total number of citable articles published in those same two years [16,17]. It is published by Thomson ISI on the basis of the Web of Science database and measures the frequency with which an article published in a journal is cited by other periodicals over a specific period of time (the two years after its release). This measure is used as an appraisal of the importance of a journal compared with others in the same sector: the higher the impact factor, the more authoritative the journal [18]. The impact factor of each journal can be consulted on the website of the Journal Citation Reports (JCR) [19].
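As a small illustration of this ratio, consider the following sketch, which uses made-up numbers for a hypothetical journal (not data from any real publication):

```python
def impact_factor(citations_to_prev_two_years: int, items_prev_two_years: int) -> float:
    """IF = citations received in the current year by items published in the
    previous two years, divided by the number of citable items published in
    those same two years."""
    return citations_to_prev_two_years / items_prev_two_years

# Hypothetical journal: in 2018 its 2016-2017 articles were cited 480 times,
# and it published 200 citable items over 2016-2017.
print(impact_factor(480, 200))  # 2.4
```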
The impact factor is widely used as an index of academic research quality. It is also applied as a criterion for the granting of funds and incentives, or as a basis for the evaluation of a scholar or professional in public competitions [11]. Yet the impact factor is an indicator for measuring the impact of a journal in its specific disciplinary area, certainly not for evaluating authors, and this latter use has been criticized for many reasons. To overcome the problems of the impact factor, in 2005 Jorge E. Hirsch [20] proposed a new index, known as the h-index or Hirsch index, as a single-number criterion to evaluate the scientific output of a researcher. It combines a measure of quantity (publications) and impact (citations). In other words, a scientist has index h if h of his or her articles have at least h citations each, and the remaining articles have fewer than h citations each [21]. It performs better than other single-number criteria previously used to evaluate the scientific output of a researcher (impact factor, total number of documents, total number of citations, citations per paper, and number of highly cited papers) [22]. The h-index is easy to understand and can be obtained by anyone with access to the Thomson ISI Web of Science [22]. Indeed, the h-index has been regarded as one of the most reliable criteria for evaluating the scientific output of researchers [23]. Nevertheless, this index has several flaws: articles with fewer citations than the h-index value are excluded from the calculation, and the number of citations is influenced by self-citations and colleague citations, meaning that its value can be inflated by recommendations to friends and colleagues [23,24,25]. There are many circumstances in which the h-index provides misleading information on the impact of an author. Moreover, this popular bibliometric indicator considers neither multiple co-authorship nor author position [11,26].
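Hirsch's definition translates directly into a short computation. The sketch below (in Python rather than the authors' MATLAB) sorts the citation counts in descending order and finds the largest rank h at which the count is still at least h:

```python
def h_index(citations):
    """Largest h such that h papers have at least h citations each."""
    counts = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank  # this paper still clears the threshold
        else:
            break  # counts are non-increasing, so no later rank can qualify
    return h

print(h_index([10, 8, 5, 4, 3]))  # 4: four papers have at least 4 citations each
```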
To our knowledge, no policy guides author order in biomedical publications. The position of an author, controversies about author order, and disagreements on the involvement of the last author are constantly debated; thus, it is worth analyzing the relationship between author position and bibliometric indicators [27]. The contribution of an author to a research project is not always clear, especially when a manuscript is attributed to a large group [28].
Rating and weighting of research value is not an easy task, as history shows; often, what is believed to be modern and mainstream research will yield highly rated papers but is not always a guarantee of innovation and scientific progress [15]. With this in mind, and to go more into depth in this matter, we produced a statistical evaluation of the h-index on a cohort of 60 authors belonging to the biomedical field, accounting for only given positions. In detail, three modified h-values were evaluated: considering only the First (F) and Second (S) authorships (referred to as FS); only the F and the Last (L) authorships (referred to as FL); and the F, S, and L positions (referred to as FSL). The main goal of this work was to implement an algorithm that can calculate a modified h-index considering the author’s position in an article.
Different approaches are available to decide the author order, like sorting them alphabetically or listing them in descending order according to their contribution [29]. Several approaches may be used to assess the contribution of an author to a paper [30]. In the “sequence-determines-credit” (SDC) system, the author order reflects the declining importance of their contribution. The “equal contribution” approach uses alphabetical order and implies identical involvement. The “first-last-emphasis” norm underlines the importance of the last author. Using the “percent-contribution-indicated” implies detailing each author’s impact.
The convention used in the biomedical field of research, as reported in the literature [27], is as follows: the first author conducts the majority of the work; the last author could be the senior member of the group and usually leads the research; co-authors, those between the first and last, are ranked in order of their input to the work; the corresponding authors—typically senior scholars—communicate with editors and readers. With this premise, we considered the first, the second, and the last authors as the main contributors to biomedical research.

2. Methods

A total of 60 authors from the biomedical field were selected as a cohort; 30 of them were obtained from an Italian scientist ranking system [31] and the other 30 were chosen from all around the world.
To obtain the modified h-index, an algorithm was implemented in MATLAB (MathWorks, Natick, MA, USA). The complete list of articles attributed to each author was extracted from the scopus.com webpage using the "Export all" feature, limiting the exported information to Authors, Title, and Citation Count. Each list was obtained as a CSV file; these lists were then imported into the MATLAB workspace using the Import Tool app. Each author entry was analyzed to determine the author's position; according to the exclusion criterion, the publication was then either retained or discarded for the evaluation of h. Given that authors are frequently cited in different formats (e.g., Surname, Name; Surname, N.; Surname M.N.; etc.), the investigation considered all such variants. If a publication met the inclusion criterion, its citation count was taken into account. The list of included citation counts was sorted in descending order, and the h value was evaluated as the last position at which the citation count was higher than or equal to the position itself. A flow chart of the exclusion process is shown in Figure 1.
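The exclusion procedure described above can be sketched as follows. This is an illustrative Python reimplementation, not the authors' MATLAB code; the semicolon-separated author field and the particular name variants generated are assumptions about the Scopus export format, and the author names in the usage example are invented:

```python
def name_variants(surname, given):
    """Common citation formats for one author (e.g., Surname, Name; Surname, N.)."""
    initial = given[0]
    return {
        f"{surname}, {given}",
        f"{surname}, {initial}.",
        f"{surname} {initial}.",
    }

def position_ok(authors, variants, keep):
    """True if the author appears in one of the retained positions.
    keep is a subset of {'first', 'second', 'last'} (e.g., FL = {'first', 'last'})."""
    n = len(authors)
    for i, name in enumerate(authors):
        if name.strip() in variants:
            if "first" in keep and i == 0:
                return True
            if "second" in keep and i == 1:
                return True
            if "last" in keep and i == n - 1:
                return True
    return False

def modified_h(rows, variants, keep):
    """rows: (authors_string, citation_count) pairs from the export.
    Keeps only publications meeting the position criterion, sorts their
    citation counts in descending order, and returns the last rank at
    which the count is at least the rank itself."""
    counts = sorted(
        (int(cites) for authors, cites in rows
         if position_ok(authors.split(";"), variants, keep)),
        reverse=True,
    )
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank
    return h

# Invented example records under the FL (first-and-last) criterion:
rows = [
    ("Affatato, S.; Bianchi, L.", 30),           # first author -> kept
    ("Rossi, M.; Affatato, S.", 12),             # last author -> kept
    ("Rossi, M.; Affatato, S.; Verdi, G.", 7),   # intermediate -> excluded
]
print(modified_h(rows, name_variants("Affatato", "Saverio"), {"first", "last"}))  # 2
```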

3. Results

Figure 2 summarizes the results obtained with the different exclusion criteria studied here. It is worth underlining the large divergences obtained when comparing the higher h-values with their respective corrected ones.
From the results in Figure 2, we emphasize that on the basis of our exclusion criteria, the modified h-value decreases significantly.
To highlight these differences, the entire cohort was divided into three sub-cohorts. A first group of authors with h-value ranging from 0 to 35 (called Low), a second group from 36 to 50 (Middle), and a third group from 51 up to the maximum of 181 (High) were extracted, as shown in Figure 3.
Figure 3 shows that the High sub-cohort, starting from the highest values of h, is the most affected by all the exclusion criteria. This is especially true in the FS case, where it reaches a mean reduction of more than 50%. By contrast, for the Low group, this exclusion criterion decreases the h-value by only roughly 30%. The Low group is more influenced by the exclusion of the publications in which the authors appear as second authors (a 35% decrease). The Middle group is most affected by the FS criterion, followed by the FL and the FSL.
In Figure 4, another histogram of the influence of the exclusion criteria is presented. In this case, the sub-cohorts were obtained based on the number of articles each author has. Considering the large range (from a minimum of 15 to a maximum of 1241), we chose to add a further group, yielding a total of four. The Low group collected the authors with up to 150 publications, the Middle group ranged from 151 to 300, High from 301 to 500, and Super High (S. High) from 501 to the maximum.
This representation also shows that the authors with a great number of publications are more affected by the exclusion criteria that do not take into consideration the articles in which they are listed last among the authors. The first three sub-cohorts show a mean reduction of around 50% from their starting h-value when the FS criterion is applied. Conversely, the FL criterion more strongly affects the Middle and the Low cohorts, reaching roughly 40% against the 30% and 20% of the Super High and High groups, respectively. The FSL criterion has a similar outcome across the groups, but is stronger on the Middle and the Super High sub-cohorts, where it reaches about 30%.

4. Discussion and Conclusions

The term “impact factor” has gradually evolved to describe both journals’ and authors’ impacts. Journal impact factors generally involve relatively large populations of articles and citations. There are some ambiguities in the use of the h-index that could provoke changes in the publishing behaviour of scientists, such as increasing the number of self-citations distributed among the documents on the edge of the h-index [32]. Another disadvantage of the h-index is that not all citations of a researcher are involved in the calculation, and its value may not increase with a rise in citations. Highly cited papers are significant for the evaluation of the h-index, but once they belong to the top h papers, the effect of the number of citations they receive is negligible [22]. Considering that evaluators reduce scientific success to a single value, researchers can change their behaviour to increase these values, even by using unethical strategies [33]. Moreover, the independent scientific criterion of peer-review evaluation risks being replaced by a system of private companies whose only feedback is the indicator number [11]. The authors believe that it is essential to continue analyzing bibliometric indicators in order to establish their drawbacks and limitations and to propose improvements where necessary. It is especially relevant to determine in which cases this index could be biased, since bias could have serious consequences for the assessment of scientists and academics. Our work aims to provide insight into bibliometric indicators and to help the scientific community and research institutions consider how authorship position affects the h-value.
The main purpose of this work was to observe the variation of the h-value when the position of authorship is considered. Therefore, an empirical analysis was conducted to assess the influence of an author’s intermediate positions on their h-index. The results of this study showed that when the h-value is high, there is a large divergence between this value and the “corrected” one. We propose an improved method that considers the importance of authorship order in a publication in order to obtain a more profound understanding of the effective impact that an author has on their research field. This could be realized by giving users of researcher registers (e.g., Scopus, Google Scholar) the possibility to select which version of the h-index they want to analyze. Along with the already-implemented exclusion of self-citations, users should have the possibility to exclude the publications in which the authors hold an intermediate position. It is worth noting that this correction should be applied only to senior researchers, since young ones often have difficulties obtaining a relevant authorship position; applied indiscriminately, this exclusion could negatively affect the process of researcher renewal, discouraging young scientists who would not see their efforts rewarded.
We believe that bibliometric indices should evolve with the aim of fairer evaluation of scientific production. This would also be beneficial for funding distribution in the biomedical and orthopaedic research spheres, especially considering the threat that research cuts pose to patients’ health. Otherwise, there is the risk that research toward new and more efficient biomaterials could stagnate due to a lack of capital granted to deserving scientists.

Author Contributions

S.A. conceived and designed the experiments; M.M. performed the software analyses; S.A. wrote the first draft of the manuscript; S.A. and M.M. analyzed the data; S.A. and M.M. wrote the final paper.

Funding

This research received no external funding.

Acknowledgements

The authors thank Barbara Bordini for her help with statistical analyses.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Affatato, S. Perspectives in Total Hip Arthroplasty: Advances in Biomaterials and their Tribological Interactions; Affatato, S., Ed.; Elsevier Science: New York, NY, USA, 2014. [Google Scholar]
  2. Viceconti, M.; Affatato, S.; Baleani, M.; Bordini, B.; Cristofolini, L.; Taddei, F. Pre-clinical validation of joint prostheses: A systematic approach. J. Mech. Behav. Biomed. Mater. 2009, 2, 120–127. [Google Scholar] [CrossRef] [PubMed]
  3. Matsoukas, G.; Willing, R.; Kim, I.Y. Total Hip Wear Assessment: A Comparison Between Computational and In Vitro Wear Assessment Techniques Using ISO 14242 Loading and Kinematics. J. Biomech. Eng. 2009, 131, 41011. [Google Scholar] [CrossRef] [PubMed]
  4. Oral, E.; Neils, A.; Muratoglu, O.K. High vitamin E content, impact resistant UHMWPE blend without loss of wear resistance. J. Biomed. Mater. Res. B Appl. Biomater. 2015, 103, 790–797. [Google Scholar] [CrossRef] [PubMed]
  5. Ansari, F.; Ries, M.D.; Pruitt, L. Effect of processing, sterilization and crosslinking on UHMWPE fatigue fracture and fatigue wear mechanisms in joint arthroplasty. J. Mech. Behav. Biomed. Mater. 2016, 53, 329–340. [Google Scholar] [CrossRef] [PubMed]
  6. Kyomoto, M.; Moro, T.; Yamane, S.; Saiga, K.; Watanabe, K.; Tanaka, S.; Ishihara, K. High fatigue and wear resistance of phospholipid polymer grafted cross-linked polyethylene with anti-oxidant reagent. In Proceedings of the 10th World Biomaterials Congress, Montréal, QC, Canada, 17–22 May 2016. [Google Scholar]
  7. Essner, A.; Schmidig, G.; Wang, A. The clinical relevance of hip joint simulator testing: In vitro and in vivo comparisons. Wear 2005, 259, 882–886. [Google Scholar] [CrossRef]
  8. Affatato, S.; Spinelli, M.; Zavalloni, M.; Mazzega-Fabbro, C.; Viceconti, M. Tribology and total hip joint replacement: Current concepts in mechanical simulation. Med. Eng. Phys. 2008, 30, 1305–1317. [Google Scholar] [CrossRef] [PubMed]
  9. Ulrich, S.D.; Seyler, T.M.; Bennett, D.; Delanois, R.E.; Saleh, K.J.; Thongtrangan, I.; Stiehl, J.B. Total hip arthroplasties: What are the reasons for revision? Int. Orthop. 2008, 32, 597–604. [Google Scholar] [CrossRef] [PubMed]
  10. Geuna, A.; Martin, B.R. University Research Evaluation and Funding: An International Comparison. Minerva 2003, 41, 277–304. [Google Scholar] [CrossRef]
  11. Carpenter, C.R.; Cone, D.C.; Sarli, C.C. Using publication metrics to highlight academic productivity and research impact. Acad. Emerg. Med. 2014, 21, 1160–1172. [Google Scholar] [CrossRef] [PubMed]
  12. Rezek, I.; McDonald, R.J.; Kallmes, D.F. Is the h-index Predictive of Greater NIH Funding Success Among Academic Radiologists? Acad. Radiol. 2011, 18, 1337–1340. [Google Scholar] [CrossRef]
  13. Ciriminna, R.; Pagliaro, M. On the use of the h-index in evaluating chemical research. Chem. Central J. 2013, 7, 132. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  14. Narin, F.; Olivastro, D.; Stevens, K.A. Bibliometrics: Theory, practice and problems. Eval. Rev. 1994, 18, 65–76. [Google Scholar] [CrossRef]
  15. Fayaz, H.C.; Haas, N.; Kellam, J.; Bavonratanavech, S.; Parvizi, J.; Dyer, G.; Smith, M. Improvement of research quality in the fields of orthopaedics and trauma—A global perspective. Int. Orthop. 2013, 37, 1205–1212. [Google Scholar] [CrossRef] [PubMed]
  16. Garfield, E. The history and meaning of the journal impact factor. JAMA 2006, 295, 90–93. [Google Scholar] [CrossRef] [PubMed]
  17. Garfield, E. The meaning of the Impact Factor. Int. J. Clin. Health Psychol. 2003, 3, 363–369. [Google Scholar]
  18. Bordons, M.; Fernández, M.T.; Gómez, I. Advantages and limitations in the use of impact factor measures for the assessment of research performance. Scientometrics 2002, 53, 195–206. [Google Scholar] [CrossRef]
  19. How Do I Find the Impact Factor and Rank for a Journal? Available online: https://guides.hsl.virginia.edu/faq-jcr (accessed on 12 October 2018).
  20. Hirsch, J.E. An index to quantify an individual’s scientific research output. Proc. Natl. Acad. Sci. USA 2005, 102, 16569–16572. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  21. Bornmann, L.; Daniel, H.D. Does the h-index for ranking of scientists really work? Scientometrics 2005, 65, 391–392. [Google Scholar] [CrossRef]
  22. Costas, R.; Bordons, M. The h-index: Advantages, limitations and its relation with other bibliometric indicators at the micro level. J. Informetr. 2007, 1, 193–203. [Google Scholar] [CrossRef]
  23. Ahangar, H.G.; Siamian, H.; Yaminfirooz, M. Evaluation of the scientific outputs of researchers with similar h index: A critical approach. Acta Inform. Med. 2014, 22, 255–258. [Google Scholar] [CrossRef] [PubMed]
  24. Martin, B.R. Whither research integrity? Plagiarism, self-plagiarism and coercive citation in an age of research assessment. Res. Policy 2013, 42, 1005–1014. [Google Scholar] [CrossRef]
  25. Foo, J.Y.A. Impact of excessive journal self-citations: A case study on the folia phoniatrica et logopaedica journal. Sci. Eng. Ethics 2011, 17, 65–73. [Google Scholar] [CrossRef] [PubMed]
  26. Kreiman, G.; Maunsell, J.H.R. Nine Criteria for a Measure of Scientific Output. Front. Comput. Neurosci. 2011, 5, 1–6. [Google Scholar] [CrossRef] [PubMed]
  27. Du, J.; Tang, X.L. Perceptions of author order versus contribution among researchers with different professional ranks and the potential of harmonic counts for encouraging ethical co-authorship practices. Scientometrics 2013, 96, 277–295. [Google Scholar]
  28. Tarkang, E.E.; Kweku, M.; Zotor, F.B. Publication practices and responsible authorship: A review article. J. Public Health Afr. 2017, 8, 36–42. [Google Scholar] [CrossRef] [PubMed]
  29. Kissan, J.; Laband, D.N.; Patil, V. Author order and research quality. South. Econ. J. 2005, 7, 545–555. [Google Scholar]
  30. Tscharntke, T.; Hochberg, M.E.; Rand, T.A.; Resh, V.H.; Krauss, J. Author sequence and credit for contributions in multiauthored publications. PLoS Biol. 2007, 5, 18. [Google Scholar] [CrossRef] [PubMed]
  31. Degli Esposti, M.; Boscolo, L. Top Italian Scientists Biomedical Sciences. 2018. Available online: http://www.topitalianscientists.org/TIS_HTML/Top_Italian_Scientists_Biomedical_Sciences.htm (accessed on 17 April 2018).
  32. Van Raan, A.F. Comparison of the Hirsch-index with standard bibliometric indicators and with peer judgment for 147 chemistry research groups. Scientometrics 2006, 67, 491–502. [Google Scholar] [CrossRef] [Green Version]
  33. Masic, I. H-index and how to improve it? Donald Sch. J. Ultrasound. Obstet. Gynecol. 2016, 10, 83–89. [Google Scholar] [CrossRef]
Figure 1. Flowchart of the process to obtain the corrected h-value.
Figure 2. A box plot shows the modified h-index values (± standard deviation) for all authors considered in this study. The total h-index retrieved from Scopus is the highest of the four classifications.
Figure 3. Histogram of the influence of the exclusion criteria on the h-values in sub-cohorts based on the starting h from Scopus.
Figure 4. Histogram of the influence of the exclusion criteria on the h-values with sub-cohorts based on the number of publications.
