Article
Peer-Review Record

Novel World University Rankings Combining Academic, Environmental and Resource Indicators

Sustainability 2021, 13(24), 13873; https://doi.org/10.3390/su132413873
by Wei-Chao Lin 1,2,3,* and Ching Chen 1
Reviewer 1: Anonymous
Reviewer 2: Anonymous
Submission received: 30 October 2021 / Revised: 5 December 2021 / Accepted: 13 December 2021 / Published: 15 December 2021
(This article belongs to the Section Sustainable Education and Approaches)

Round 1

Reviewer 1 Report

This is a very interesting and relevant piece of research. Comparative analysis of university rankings and identification of their weaknesses are crucial for the fair evolution of the industry.

The article is well-structured; however, I identified some aspects that require improvement:

  • Abstract. The research question is not clearly stated and the research framework is missing. It would also be better to indicate the period of analysis.
  • Introduction. Among the benefits for students of joining a highly ranked university, an important factor is ignored: engagement with better networks. Please consider the reference below on this matter as an example.

(Breznik, K. and Law, K.M., 2019. What do mission statements reveal about the values of top universities in the world? International Journal of Organizational Analysis.)

  • Research methodology. More detail on the environmental and resource indicators, in terms of description and rationale, could help the reader understand the novelty of this research. Perhaps the authors could also consider ESG measures in general.
  • Conclusions. Although relevant, the conclusions must mention open discussions in terms of limitations (e.g., the feasibility of calculating the proposed indicators) and trade-offs.

Comments for author File: Comments.docx

Author Response

This is a very interesting and relevant piece of research. Comparative analysis of university rankings and identification of their weaknesses are crucial for the fair evolution of the industry.

The article is well-structured; however, I identified some aspects that require improvement:

  • Abstract. The research question is not clearly stated and the research framework is missing. It would also be better to indicate the period of analysis.

The research question of this paper is provided (cf. Abstract, the 8th sentence). The research framework is provided in Figure 1 (cf. Section 3.1). The study was conducted between 2021/01/05 and 2021/06/30 (cf. Section 3.3, the 1st paragraph, the 4th sentence).


  • Introduction. Among the benefits for students of joining a highly ranked university, an important factor is ignored: engagement with better networks. Please consider the reference below on this matter as an example.

(Breznik, K. and Law, K.M., 2019. What do mission statements reveal about the values of top universities in the world? International Journal of Organizational Analysis.)

This reference is cited and related descriptions are provided (cf. Section 1, the 1st paragraph, the 3rd sentence).


  • Research methodology. More detail on the environmental and resource indicators, in terms of description and rationale, could help the reader understand the novelty of this research. Perhaps the authors could also consider ESG measures in general.

Related descriptions are provided (cf. Section 3.1, the 3rd paragraph).


  • Conclusions. Although relevant, the conclusions must mention open discussions in terms of limitations (e.g., the feasibility of calculating the proposed indicators) and trade-offs.

Related descriptions are provided (cf. Section 5, the final paragraph).


Author Response File: Author Response.pdf

Reviewer 2 Report

This manuscript proposes an interesting approach to higher education rankings by incorporating indicators of environment and resources that can explain some initial and fixed differences between institutions.


Since that is the primary contribution of the study, more argument and rationale are needed to motivate this choice. The authors have described the main ranking systems and what these systems assign value to, along with some of their shortcomings, but they have not really articulated the need to include environmental and resource indicators beyond a few sentences in Section 3.1. A section on the importance of environment and resources should be added after Section 2.2. I was also expecting a more specific explanation of what the environmental and resource indicators are; there is no explanation of them beyond what is in Table 1. The argument is certainly compelling: you want to adjust for differences in environment and resources when adjudicating college quality. Therefore, a key piece of your manuscript would be saying why it is important to account for things like industry and innovation, population structure, urban scale, land area, livability level, GDP, etc. In addition, is there some theoretical framework of college quality or geographic inequality that might help to motivate why these specific indicators were chosen? What led you to include and consider these variables?


In addition, I did not understand from the manuscript how exactly the environmental and resource indicators were used to calculate the various ranks. Is this some regression to predict ranking? Are you determining a ranking for each indicator, and then inputting it in the equation?


Other

Page 2, second paragraph – you say that rankings influence some behaviors, but then you don’t really say how.


For 2.1.2 QS Rankings – the response on what? Who gets it?


The critique of UNWNR doesn’t make sense – grad rates and retention rates are just measuring outcomes for the starting cohort. It doesn’t matter what the reason for noncompletion is.

Author Response

This manuscript proposes an interesting approach to higher education rankings by incorporating indicators of environment and resources that can explain some initial and fixed differences between institutions.


Since that is the primary contribution of the study, more argument and rationale are needed to motivate this choice. The authors have described the main ranking systems and what these systems assign value to, along with some of their shortcomings, but they have not really articulated the need to include environmental and resource indicators beyond a few sentences in Section 3.1. A section on the importance of environment and resources should be added after Section 2.2. I was also expecting a more specific explanation of what the environmental and resource indicators are; there is no explanation of them beyond what is in Table 1. The argument is certainly compelling: you want to adjust for differences in environment and resources when adjudicating college quality. Therefore, a key piece of your manuscript would be saying why it is important to account for things like industry and innovation, population structure, urban scale, land area, livability level, GDP, etc. In addition, is there some theoretical framework of college quality or geographic inequality that might help to motivate why these specific indicators were chosen? What led you to include and consider these variables?

Related descriptions are provided (cf. Section 2.3; Section 3.1, the 3rd paragraph).


In addition, I did not understand from the manuscript how exactly the environmental and resource indicators were used to calculate the various ranks. Is this some regression to predict ranking? Are you determining a ranking for each indicator, and then inputting it in the equation?

Yes. A ranking for each indicator is determined and then input into the equation. A related explanation is provided in Section 3.2.


Other

Page 2, second paragraph – you say that rankings influence some behaviors, but then you don’t really say how.

This paragraph is revised for better readability (cf. Section 1, the 3rd paragraph).


For 2.1.2 QS Rankings – the response on what? Who gets it?

Related descriptions are provided (cf. Section 2.1.2, the 2nd paragraph, the 2nd and 3rd sentences).


The critique of UNWNR doesn’t make sense – grad rates and retention rates are just measuring outcomes for the starting cohort. It doesn’t matter what the reason for noncompletion is.

Related statements are removed for better readability.

Author Response File: Author Response.pdf

Round 2

Reviewer 2 Report

The additional sections in red are helpful, particularly Section 2.3, which outlines the reasoning for the academic, environmental, and resource indicators.

Reading it again, and seeing the authors' responses, I am now clearer on what the Borda count method is. Essentially, you are ranking each university along each indicator and then calculating the average ranking score across all indicators. You should also state in the text that each indicator is weighted equally. This is an important point to make because it is rather different from other ranking systems, which weight some categories more heavily than others.

I originally was under the impression that this was a regression framework in which you accounted for fixed differences across university contexts (for example, asking "What is the best university AFTER accounting for differences in GDP, resources, etc.?"). But this is apparently not what you are doing. In fact, you are generating a ranking for each of these indicators. I am not sure why some of these indicators are important in determining a university ranking (such as Population Ages 0-4, tuberculosis incidence, or coal rents...). If you are ranking universities based on these indicators, aren't you in a sense just ranking the countries or regions they are in? (It is no surprise, in this case, that many of the universities in the US move up.) Doesn't this method just reward wealthier, more developed nations?

I am coming from the perspective of being more critical of university ranking systems that already reward universities that benefit from inequality in these resources and environments. I would think that comparing universities AFTER accounting for these environmental and resource differences would be fairer, and this would help to find and recognize excellent universities in the less-developed world that are actually of very high quality.

This is what I would do with the data you have compiled, but of course you are pursuing your own research activities here. If you proceed with the currently outlined Borda approach, then my main suggestion would be to identify the most salient indicators and include only those (i.e., do you need all of the indicators currently included?). Regardless, the reason for including each of the specific categories of indicators in Table 1 should be described (e.g., population structure). You could also include a discussion of the limitations of this ranking method.

Author Response

The additional sections in red are helpful, particularly Section 2.3, which outlines the reasoning for the academic, environmental, and resource indicators.

Reading it again, and seeing the authors' responses, I am now clearer on what the Borda count method is. Essentially, you are ranking each university along each indicator and then calculating the average ranking score across all indicators. You should also state in the text that each indicator is weighted equally. This is an important point to make because it is rather different from other ranking systems, which weight some categories more heavily than others.

Related descriptions are provided in the last sentence of Section 3.2.
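As a concrete illustration of the equally weighted, Borda-style aggregation discussed in this exchange (rank each university on every indicator, then average the ranks with equal weights), the following minimal sketch may help. The universities, indicator values, and the function name `borda_ranking` are hypothetical, not taken from the paper.

```python
def borda_ranking(scores):
    """scores: {university: [indicator values]}, higher value = better.
    Returns universities ordered from best to worst by average rank."""
    unis = list(scores)
    n_indicators = len(next(iter(scores.values())))
    avg_rank = {u: 0.0 for u in unis}
    for i in range(n_indicators):
        # Rank universities on indicator i (rank 1 = highest value).
        ordered = sorted(unis, key=lambda u: scores[u][i], reverse=True)
        for rank, u in enumerate(ordered, start=1):
            # Each indicator contributes with equal weight 1/n_indicators.
            avg_rank[u] += rank / n_indicators
    return sorted(unis, key=lambda u: avg_rank[u])

# Made-up data: three universities scored on three indicators.
example = {
    "Univ A": [90, 70, 80],
    "Univ B": [85, 95, 75],
    "Univ C": [60, 65, 70],
}
print(borda_ranking(example))  # → ['Univ A', 'Univ B', 'Univ C']
```

Here ties are broken by the stable ordering of `sorted`; the paper's exact tie-breaking rule may differ.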


I originally was under the impression that this was a regression framework in which you accounted for fixed differences across university contexts (for example, asking "What is the best university AFTER accounting for differences in GDP, resources, etc.?"). But this is apparently not what you are doing. In fact, you are generating a ranking for each of these indicators. I am not sure why some of these indicators are important in determining a university ranking (such as Population Ages 0-4, tuberculosis incidence, or coal rents...). If you are ranking universities based on these indicators, aren't you in a sense just ranking the countries or regions they are in? (It is no surprise, in this case, that many of the universities in the US move up.) Doesn't this method just reward wealthier, more developed nations?

I am coming from the perspective of being more critical of university ranking systems that already reward universities that benefit from inequality in these resources and environments. I would think that comparing universities AFTER accounting for these environmental and resource differences would be fairer, and this would help to find and recognize excellent universities in the less-developed world that are actually of very high quality.

This is what I would do with the data you have compiled, but of course you are pursuing your own research activities here. If you proceed with the currently outlined Borda approach, then my main suggestion would be to identify the most salient indicators and include only those (i.e., do you need all of the indicators currently included?). Regardless, the reason for including each of the specific categories of indicators in Table 1 should be described (e.g., population structure). You could also include a discussion of the limitations of this ranking method.

Thank you very much for these comments.

Related descriptions are provided (cf. Section 5, the final paragraph).

Author Response File: Author Response.docx

Round 3

Reviewer 2 Report

You have added a note about weighting in the methods section, and a paragraph on the limitations of the study at the end. These are both helpful. However, you did not really address the main conceptual issue: how does including resource and environmental indicators (especially those at the regional/country level) lead to "more objective" rankings? Doesn't this just penalize those institutions that are in less-resourced countries? You should comment on this possibility in the limitations paragraph, and consider removing language throughout that describes this method as "more objective".

Also, do all of the institutions within the same country get assigned the same indicator values? (i.e., are the indicators at the country level)? Wouldn't, then, universities within the same country just be compared based on their academic indicators?

Again, my concern is that this approach ultimately is just capturing differences between countries rather than differences in the quality of the institution. My main suggestion is to select the most salient resource and environmental indicators that are directly related to university quality.

Author Response

You have added a note about weighting in the methods section, and a paragraph on the limitations of the study at the end. These are both helpful. However, you did not really address the main conceptual issue: how does including resource and environmental indicators (especially those at the regional/country level) lead to "more objective" rankings? Doesn't this just penalize those institutions that are in less-resourced countries? You should comment on this possibility in the limitations paragraph, and consider removing language throughout that describes this method as "more objective".

Also, do all of the institutions within the same country get assigned the same indicator values? (i.e., are the indicators at the country level)? Wouldn't, then, universities within the same country just be compared based on their academic indicators?

Again, my concern is that this approach ultimately is just capturing differences between countries rather than differences in the quality of the institution. My main suggestion is to select the most salient resource and environmental indicators that are directly related to university quality.

Thank you very much for these comments.

The term “more objective” is revised for better readability.

Since the research objective is to demonstrate that combining related environmental and resource indicators can make the university ranking result more comprehensive, choosing the "right" or "appropriate" indicators for deeper or more specific university rankings is a difficult task, which we regard as a limitation of this study. Therefore, a new paragraph describing the limitations of this study is provided (cf. Section 5, the final paragraph).
