Article

Critical Analysis of the GreenMetric World University Ranking System: The Issue of Comparability

by Riccardo Boiocchi 1, Marco Ragazzi 1, Vincenzo Torretta 2 and Elena Cristina Rada 2,*

1 Department of Civil, Environmental and Mechanical Engineering, University of Trento, 38123 Trento, Italy
2 Department of Theoretical and Applied Sciences, University of Insubria, 21100 Varese, Italy
* Author to whom correspondence should be addressed.
Sustainability 2023, 15(2), 1343; https://doi.org/10.3390/su15021343
Submission received: 26 September 2022 / Revised: 17 December 2022 / Accepted: 5 January 2023 / Published: 10 January 2023

Abstract:
The Universitas Indonesia GreenMetric World Ranking is currently the most widely adopted system for ranking the sustainability of universities worldwide, and the number of participating universities has consistently increased throughout the last decade. An in-depth analysis of this ranking system is made to assess how sustainability in universities is measured through specific indicators. Specifically, based on expert knowledge, common logic and the scientific literature, these indicators are assessed with respect to whether they can be used to fairly quantify and rank worldwide universities’ sustainability development. Some indicators proposed by the ranking system, such as the number of renewable energy sources on campus and the number of various types of programs for sustainable development, were found to be unable to measure any sustainability development effectively and fairly. Many others, such as the chosen sewage disposal method, the percentage of university budget for sustainability efforts and the ratio of sustainability research funding to total research funding, were found to need adjustment to account for context-specific factors such as the availability of renewable energy sources, weather, landscape, original construction and the cultural habits of the campus population. Taking these considerations into account, a fairer evaluation and comparison of universities’ sustainability could be achieved, which would also provide universities with information on how to improve their sustainability effectively.

1. Introduction

Higher Education Institutions (HEIs) have the potential to play a key role in the world’s sustainability in a variety of ways. As a matter of fact, they can heavily impact the environment via greenhouse gas (GHG) emissions, indoor and outdoor air pollution, waste, water and soil management and the exploitation of natural resources [1,2,3,4,5,6,7,8,9,10,11]. They have the responsibility to educate the world’s population about sustainable behaviors and habits [12,13,14]. They can discover novel knowledge about new ways through which sustainability can be achieved or improved, invent novel technologies and approaches supporting and promoting sustainability, and set a good example of sustainable practice. With the aim of assessing HEIs’ sustainability, a few world university green ranking systems have been proposed throughout the last decade [15,16,17,18,19,20,21,22,23,24]. One widely adopted sustainability ranking is the Times Higher Education World University Ranking (THE-WUR), which was analyzed in depth by Galleli et al. [16] in comparison to the Universitas Indonesia GreenMetric system. From this analysis, it emerged that, despite having fewer participating universities, the THE-WUR seems to be more focused on social issues than on the environment, and when dealing with the environment, it seems to be more focused on research and education than on actual environment-oriented actions. Another tool available for evaluating HEIs’ sustainability is the Sustainability Tracking, Assessment and Rating System (STARS). STARS seems to focus more on the environment while neglecting social and economic components. However, one critique leveled at STARS is its limited suitability for universities in developing countries [25]. Another issue with STARS is that it was not originally designed for comparison purposes but only to measure an individual campus’s sustainability.
Other university rankings, such as the Academic Ranking of World Universities (ARWU) and the QS World University Rankings, do not include any explicit sustainability item. Among the existing sustainability ranking systems, the one proposed by the Universitas Indonesia (UI), named the UI GreenMetric Ranking system, appears to have gained the most popularity in the last few years. As can be seen in Figure 1, more and more HEIs have been taking part in the UI GreenMetric ranking system since 2010, testifying to their increasing interest in evaluating their degree of sustainability. Figure 1 was produced based on publicly accessible data taken from [26].
Aside from its popularity, the relevance attributed to the UI GreenMetric ranking system is clear from the following declarations by stakeholders of top-ranked universities:
  • Rector of Wageningen University & Research (ranked first in 2021) Magnificus Arthur Mol said in an interview, “Of course, we are thrilled with the first place. And it would be great if we were surpassed because this would mean that other universities work on sustainability even harder than we do. But it won’t be easy, because sustainability is embedded in the genes of our students and employees”.
  • John Atherton, Pro Vice Chancellor and Chair of the University of Nottingham Sustainability Committee said in an interview, “The University of Nottingham is delighted to once again be recognised for the hard work it is doing to embed environmental sustainability across the institution and make it an integral part of our education and research. Our global research programme includes a focus on sustainable food production and future food security, green chemicals to tackle greenhouse gases, and sustainable propulsion”.
  • Yanike Sophie, coordinator of the Green Office at the University of Groningen, embraces the high participation of students and employees of the RUG saying, “Our projects only work because of this high involvement”.
  • Camille Kirk, director of the Office of Sustainability at UC Davis, said, “Being internationally recognized again for our leadership gives every Aggie a chance to pause and feel pride in the commitment and investment that UC Davis has made in sustainability”.
An HEI willing to evaluate its level of sustainability according to the UI GreenMetric Ranking system has to fill out, with a variety of data, a questionnaire structured according to six main criteria. Each criterion comprises several items meant to quantify different aspects of a university’s sustainability. For each item, a score is assigned according to the data provided, and by summing up the scores obtained for each criterion, a total score is obtained. The HEIs that choose to take part in the survey are then ranked based on their total scores and on the score obtained for each criterion. Based on the ranking obtained and on the strengths and weaknesses identified in relation to other universities, a participating university can direct actions and draw up new programs to improve its own sustainability. The results of this ranking also have an unavoidable impact on an institution’s worldwide reputation. For this reason, it is of the utmost importance that scores are correctly assigned. This requires that (1) the indicators incorporated in the ranking properly quantify an institution’s sustainability, (2) the indicators do not favor one institution over another because of context-specific, unchangeable conditions and (3) the score for each item is assigned fairly, reflecting the item’s ability to quantify sustainability.
Some analyses of this ranking system have already been carried out [21,25,27,28]. Suwartha et al. [21] made a first evaluation of the 2011 version of the UI GreenMetric against the Berlin Principles. From that analysis, it emerged that the ranking system complied with most of the Berlin Principles, while for a few of them work for improvement was ongoing. However, an in-depth, critical discussion of how the ranking system complied with these principles is missing. It is also important to point out that the ranking system has changed considerably since 2011, so a new evaluation is needed. In 2017, Ragazzi and Ghidini [27] pointed out that some of the Berlin Principles had still not been respected. From the analysis by Lauder et al. [25], more weaknesses emerged, but a one-by-one analysis of the practical implications of the various items included in the ranking was not carried out. Similarly, Veidemane [28] analyzed the ranking system only summarily. From the works that have analyzed the UI GreenMetric World Ranking system, it can be noted that the previously mentioned requirements for a fair sustainability ranking of worldwide HEIs were checked only generically, without an in-depth analysis of each item included in the ranking.
For these reasons, in this work, a critical and in-depth analysis of the way HEIs obtain sustainability scores under the UI GreenMetric ranking system is made, and its practical implications are discussed. In this way, improvements and adjustments can be identified for a fairer ranking of universities’ sustainability.

2. Materials and Methods

The critical analysis of the most up-to-date list of items constituting the UI GreenMetric Questionnaire 2022 was made possible by its availability on the official free-access website of the UI GreenMetric World University Rankings, along with the official guidelines and template for the year 2022 [26]. This ranking was chosen because of its popularity and its maturity: after more than a decade of use, its methodology can be expected to be optimized and therefore stable. The questionnaire is structured according to six criteria:
  • “Setting and Infrastructure” (SI);
  • “Energy and Climate Change” (EC);
  • “Waste” (WS);
  • “Water” (WR);
  • “Transportation” (TR);
  • “Education and Research” (ED).
The total score obtained for each of these criteria results from summing up the scores obtained from the related items chosen to quantify the sustainability of an HEI.
The critical analysis presented in this paper focuses on the suitability of these items not only for evaluating the sustainability of a single university but also for ranking worldwide universities as a function of their sustainability level. More specifically, as outlined in the introduction, three main features were checked: (1) the suitability of each item for quantifying an HEI’s sustainability, (2) whether each item favors one institution over another based on context-specific, unchangeable conditions and (3) whether scores are fairly assigned to each item based on its ability to describe the HEI’s sustainable development status. To carry out this analysis, the items were compared one by one against expert knowledge of the sustainability subject involved. This knowledge was part of the authors’ specialized background, integrated with the relevant literature. Throughout the process, critical thinking was a key component.

3. Results and Discussions

3.1. Critical Detailed Review of the Items Considered

After an in-depth analysis of the 51 items used to quantify HEI sustainability by score assignment, the following 15 were considered questionable:
  • Number of renewable energy sources on campus (EC3): According to the UI GreenMetric ranking system, an HEI obtains a higher score by simply increasing the variety of renewable energy sources employed. Although this may encourage some universities to employ a larger number of the renewable sources available to them, the inclusion of this item may unreasonably penalize HEIs employing only one or a few renewable sources. As a matter of fact, an HEI can record a higher score simply by adding a further renewable source to those already employed, even if the overall amount of renewable power provided stays the same or decreases. Furthermore, this item penalizes HEIs based on their location in the world, which strongly affects the availability of renewable energy sources: an HEI may be located in an area where access to renewable sources is much more limited than in the area of another participating HEI, leading to a lower score for the former simply because of its location. This item should not be attributed any sustainability score.
  • Total electricity usage divided by total campus’ population (kWh per person) (EC4): The inclusion of this item seems controversial. According to the UI, the “total electricity” is to be considered, i.e., not only the fraction of electrical energy produced from CO2-emitting sources but also the fraction produced from renewable sources. In practice, the aim of this item is to measure electrical energy efficiency, but for sustainability evaluation the relevant quantity should be the fraction of electricity produced from non-renewable sources, since energy from renewable sources does not emit CO2 and should therefore not substantially reduce an HEI’s sustainability. Moreover, where a university hosts high-energy-demand laboratories, the electricity used for research activities should be excluded so that only community-related consumption is taken into account; otherwise, a university is penalized simply because more energy-intensive research is performed there. Aside from that, the methodology for computing the total campus population is not presented clearly in the questionnaire and does not seem to account for the actual amount of time people spend on campus activities. This period can vary largely from one university to another depending on the culture of the campus population, strongly affects the amount of electricity used, and should therefore be considered carefully in a ranking involving universities from all over the world. Finally, this item should be rethought, adding strategic information on its assessment, and split into two parts to distinguish renewable from non-renewable sources. The number of daylight hours is also a relevant influencing factor.
As a suggestion for an improved indicator, the total electricity usage should be normalized with respect to the actual amount of time people spend on campus, and the number of daylight hours should be incorporated as well. Furthermore, the percentage of electrical energy provided by CO2-emitting sources should be given a higher weight in the calculation.
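As a purely illustrative sketch of this suggestion, the adjusted indicator could be computed as follows; the function name, the 12 h daylight reference and the doubled weight for the CO2-emitting share are assumptions made here for clarity, not part of the official questionnaire:

```python
def ec4_adjusted(total_kwh, population, avg_hours_on_campus,
                 daylight_hours, co2_fraction, co2_weight=2.0):
    """Hypothetical adjusted EC4 indicator (lower is better).

    Normalizes electricity use by person-hours actually spent on campus,
    scales by available daylight and weights the CO2-emitting share of
    the mix more heavily, as suggested in the text.
    """
    person_hours = population * avg_hours_on_campus      # h/year on campus
    kwh_per_person_hour = total_kwh / person_hours
    daylight_scale = daylight_hours / 12.0               # 12 h reference day
    # penalize the share of electricity from CO2-emitting sources
    return kwh_per_person_hour * daylight_scale * (
        1.0 + (co2_weight - 1.0) * co2_fraction)
```

Under this sketch, two campuses with identical consumption per person-hour would still be distinguished by how much of their electricity comes from CO2-emitting sources.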
  • Greenhouse gas emission reduction program (EC7): Although, in principle, setting up programs for GHG emission reduction shows good intentions for improving an HEI’s sustainability, the inclusion of this item for ranking purposes can unfairly penalize universities that do not need such programs. Some universities, because of typical weather conditions and the culture of the people attending campus activities, present very low GHG emissions and therefore need few or no mitigation programs, while universities in other locations may need to implement more GHG reduction programs due to an actual higher need. Therefore, the number of programs for GHG reduction should be considered as a function of the actual need of the institution, along with the effectiveness of these programs, namely the effective reduction in GHG emissions upon program implementation. As a suggestion for improvement, the indicator can be amended by dividing the total number of GHG emission reduction programs by the current amount of GHG emissions. However, clear guidelines on how to count these programs need to be established, as what should count is the number of executive actions entailed by each program rather than the mere number of programs.
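The amendment suggested above can be sketched as follows; counting executive actions rather than programs, and the convention of returning no value when there is nothing to mitigate, are illustrative assumptions:

```python
def ec7_adjusted(n_executive_actions, ghg_tonnes_co2eq):
    """Hypothetical adjusted EC7 indicator (higher is better).

    Divides the number of concrete executive actions by current GHG
    emissions so that mitigation effort is judged against actual need.
    """
    if ghg_tonnes_co2eq == 0:
        # nothing to mitigate: the item should arguably not apply at all
        return None
    return n_executive_actions / ghg_tonnes_co2eq
```

With this normalization, an HEI with high emissions must implement proportionally more actions to match the score of a low-emitting one.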
  • Total carbon footprint divided by total campus’ population (EC8): According to the UI GreenMetric ranking system, HEIs emitting lower amounts of carbon dioxide or other GHGs obtain higher scores than those emitting more. This quantity, measured as the carbon dioxide equivalent (CO2-eq), is considered in relation to the total campus population. Although this item is meant to reliably quantify the sustainability of an HEI, considering the official guidelines, the way the total carbon footprint is computed may be misleading. First, the guidelines provide a mere recommendation and not a structured, consistent procedure that each HEI must follow to produce a reliable estimate of its CO2-eq emissions. An HEI may thus follow the recommendation provided or calculate its emissions with another methodology, and the discrepancy among the methods followed by different HEIs can penalize one HEI with respect to others simply because of the method arbitrarily adopted. Moreover, the recommended way of calculating the CO2 emissions linked to electricity consumption is flawed. According to the UI GreenMetric ranking system, the total amount of electrical energy consumed in a year is multiplied by a fixed CO2 emission factor, regardless of whether the energy is produced from renewable sources or not. This is not always representative of the actual CO2 emissions. First, average CO2 emission factors vary from one country to another, reflecting all the electrical energy sources employed, as reported, for instance, by the European Environmental Agency [29]. Secondly, the emission factor for the electrical energy used at a campus should reflect the average fraction of energy not produced from renewable sources, and this fraction changes according to the country where the HEI is located.
A clear example of how changing the CO2 emission factor for electrical energy strongly affects the overall carbon footprint can be found in the work of Boiocchi et al. [30]. Furthermore, the recommended list of carbon dioxide emission sources should be extended to include the contributions from the treatment and handling of the waste, wastewater and drinking water fluxes entering and leaving the institution. For instance, an HEI may employ all kinds of sophisticated technologies for optimizing the treatment of waste and wastewater, with or without the purpose of recycling and reuse, while increasing the amount of CO2 emitted because of the amount and source of the energy these technologies consume. Evidence of CO2 emissions due to the power consumed by such technologies is abundant [30,31,32,33,34,35,36,37,38]. Additionally, nitrous oxide, a strong greenhouse gas with a global warming potential about 300 times that of CO2, can be emitted during the typical biological nitrogen removal processes for domestic wastewater, depending on the treatment system’s design and operational patterns [38,39,40,41]. This in turn can affect the overall carbon footprint of a campus and should therefore be considered. Finally, it is important to keep in mind that some universities will be favored over others simply because they are in a region where more renewable energy sources are available. As a suggestion for improvement, first, the list of CO2-emitting sources should be extended by considering the electrical energy consumed for waste and wastewater handling. Secondly, to compute the CO2 emissions, each electrical energy consumption contribution should be multiplied by a CO2 emission factor reflecting the specific energy mix actually employed.
Finally, similar to the case of item EC7, the obtained quantity should be normalized with respect not only to the number of people living on campus but also with respect to the actual time spent therein and the availability of renewable energy sources.
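A minimal sketch of the amended EC8 computation might look as follows; the emission factor values and country codes are purely illustrative placeholders (real values should come from sources such as the EEA country datasets), and the person-hour normalization follows the suggestion above:

```python
# Illustrative grid emission factors (kg CO2-eq per kWh); placeholder
# values only, to be replaced with published country-specific data.
EMISSION_FACTOR = {"IT": 0.23, "PL": 0.75, "NO": 0.02}

def ec8_adjusted(kwh_by_activity, country, population, avg_hours_on_campus):
    """Hypothetical adjusted EC8 footprint (kg CO2-eq per person-hour).

    Extends the electricity inventory to waste and wastewater handling
    and applies a country-specific emission factor instead of a fixed one.
    """
    factor = EMISSION_FACTOR[country]
    total_kg = sum(kwh_by_activity.values()) * factor   # kg CO2-eq per year
    person_hours = population * avg_hours_on_campus
    return total_kg / person_hours
```

The dictionary of consumption by activity makes explicit that waste and wastewater handling enter the inventory alongside building consumption.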
  • Program to reduce the use of paper and plastic on campus (WS2): Under this item, a mere increase in the number of programs aimed at reducing the use of paper and plastic leads to a higher score. Although this may encourage HEI administrations to create new programs, it can lead to biased evaluations of sustainability. First and foremost, the mere number of programs does not necessarily imply their effectiveness. An HEI may prepare and implement several programs that then turn out to be ineffective with respect to their original aim: when measuring whether paper and plastic usage has actually decreased compared with the period before a program was in force, it may be found that it has not. Furthermore, when this item is used to rank HEIs’ sustainability, the comparison can be flawed. For example, an HEI may already have a very low consumption of paper and plastic and not need any programs to reduce it further, while another HEI may have very high consumption and therefore need to implement several programs. When this item is included, the former university is penalized compared with the latter, while in reality much more paper and plastic is consumed by the latter. The number of programs to reduce the use of paper and plastic should therefore be checked for effectiveness and considered jointly with the actual need for them within the context of the campus itself. It is also important to point out that one HEI may include only a few executive actions within a program, while another may split the same number of actions across more programs.
In this case, while the actual number of actions aimed at reducing plastic and paper usage is the same, the HEI that splits these actions into multiple programs will receive a higher score, thus deceiving (intentionally or not) the ranking system. From the guidelines, it is unclear whether the UI GreenMetric ranking system takes into account the actual number of actions included within a program rather than the number of programs; this should be clarified or modified in a new version of the guidelines. As a suggestion for an improved indicator, the total number of executive actions reducing the usage of paper and plastic on campus should be normalized by the current per-capita amount of paper and plastic used on the same campus. Furthermore, criteria should be established to evaluate whether an action is likely to be effective in reducing the usage of paper and plastic, rather than considering all the actions in a program effective to the same degree.
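The normalization suggested above can be sketched as follows; the restriction to actions judged effective and the handling of zero consumption are assumptions introduced here:

```python
def ws2_adjusted(n_effective_actions, paper_plastic_kg_per_capita):
    """Hypothetical adjusted WS2 indicator.

    Counts individual executive actions judged likely to be effective
    (not whole programs), normalized by current per-capita paper and
    plastic consumption, so effort is weighed against actual need.
    """
    if paper_plastic_kg_per_capita == 0:
        # nothing left to reduce: the item should arguably not apply
        return None
    return n_effective_actions / paper_plastic_kg_per_capita
```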
  • Sewage disposal (WS6): Different sustainability points are assigned to an HEI according to the way its wastewater is disposed of. Specifically, if wastewater is discharged untreated into waterways, the HEI gets no points; if it is discharged at the same destination but treated, a score is assigned. Furthermore, an HEI gets increasingly higher scores if, instead of being discharged into waterways, its wastewater is reused, downcycled or upcycled. It can be deduced that the UI GreenMetric ranking system acknowledges the added value of the nutrients in wastewater, as well as the value of the water itself, by assigning higher scores to HEIs valorizing these resources. However, the guidelines do not make clear what is meant by “water reuse” and its difference, if any, from “water recycling”. The two terms are often used with the same meaning, yet there is a subtle difference between “reuse” and “recycle”: “reuse” literally means using an already-used object as it is, without treatment, whereas “recycle” means turning an object back into its raw form before using it again [42]. Nevertheless, in the context of water, the two terms are often confused. According to some opinions, the term “water reuse” refers more specifically to the recycling of water for potable usage [43], but there is no univocal statement clarifying the difference. Anderson [44] stated that water reuse can be for agricultural, urban and industrial purposes and has been confused with recycling. Aside from the difference between reuse and recycling, downcycling and upcycling mean that the object is recycled for a lower- or higher-value purpose, respectively, compared with its previous usage.
In this context, especially if recycled for potable usage, wastewater needs to be subjected to extensive treatment to guarantee safe human consumption [45,46,47]. For this reason, the treatment requirements for water recycling can be higher than those for agricultural water reuse. For combined sewer systems, where black and gray waters are collected jointly [48], water recycling for potable use likely becomes inconvenient because of the high treatment requirements. Therefore, water recycling may not be feasible to the same extent for all the HEIs taking part in the UI GreenMetric ranking system, depending on a variety of context-specific conditions that are difficult or impossible to change, such as the original construction of the sewage collection system. Furthermore, as presented in the official guidelines, higher scores are assigned if an HEI prefers water upcycling or downcycling to reuse. It is questionable whether wastewater upcycling or downcycling should always be considered more sustainable than reuse. For instance, treatments for recycling water for potable usage may not valorize wastewater nutrients such as phosphorus and nitrogen, which would otherwise be valued if the same wastewater were reused for agricultural purposes. If gray and black waters are collected separately, the former can be more conveniently upcycled, while the latter can be more conveniently reused according to the treatment required, without the need to penalize one disposal mode over another. Based on these considerations, the sustainability assessment of different campus wastewater disposal modes should be carried out more carefully, taking into account (1) the context of the campus considered, such as its sewage collection system, (2) the chances of and needs for water recycling and reuse, (3) the feasibility of the treatment needed to make wastewater suitable for reuse, downcycling and upcycling and (4) the country-specific regulations for water reuse and recycling [49].
Specific indicators taking into account the effective valorization of both water and the nutrient contents should be elaborated.
  • Water recycling program implementation (WR2): According to this item, an HEI obtains increasingly higher scores by augmenting the amount of water recycled for human usage. The questionnaire allows an HEI to declare whether a program for water recycling has been implemented and, if so, how much water has been recycled. With regard to the latter, the guidelines do not make explicit whether the amount of recycled water should be expressed as a percentage of the total recyclable water or of the total wastewater. The questionnaire also does not seem to make room for cases where water recycling is not feasible. Possible options for water recycling involve the collection of rain, gray and black waters and their treatment before delivery for selected usages, and recycling is not always feasible for all universities. As a matter of fact, it requires a separate wastewater collection system in which gray and black waters are collected separately at their respective sources. Contrary to gray water recycling, the recycling of black water or combined domestic wastewater for potable or other domestic usage requires very sophisticated treatment and may become impractical. Unless an HEI is under construction, it is very challenging to completely rebuild a wastewater collection system, so an HEI can be penalized compared with another simply because of original construction choices that are not easily reversible. Secondly, the recycling of rainwater can be carried out only in places where rainfall is significant. Based on these considerations, recycling water is not possible to the same extent for all HEIs because of their locations and irreversible construction characteristics.
The ranking system does not offer any option for universities where recycling is simply not feasible or can be carried out only marginally. Although this parameter is very important, as it encourages HEIs to recycle more and more water, it should be evaluated while taking into account the actual feasibility of water recycling technologies within the context of each HEI in order to eliminate biases linked to construction or location. An improved indicator should consider the number of executive actions implemented for water recycling divided by the actual amount of water that is recyclable, which does not correspond to 100% of the total water consumption.
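This suggestion can be sketched as follows; the convention of reporting no value where no water is recyclable, so that such HEIs are not silently penalized, is an assumption added here:

```python
def wr2_adjusted(n_executive_actions, recyclable_m3):
    """Hypothetical adjusted WR2 indicator (higher is better).

    Normalizes the number of executive water-recycling actions by the
    volume of water that is actually recyclable given the campus's
    collection system and location, not by total consumption.
    """
    if recyclable_m3 == 0:
        # recycling infeasible (e.g., combined sewer, negligible rainfall):
        # the item should arguably not apply to this HEI
        return None
    return n_executive_actions / recyclable_m3
```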
  • The total number of vehicles (cars and motorcycles) divided by the total campus’ population (TR1): By analyzing the information required in the questionnaire, as detailed in the guidelines, this item appears controversial. Considering the total number of vehicles without discriminating among vehicle types (i.e., cars or motorcycles) can give a distorted picture of an institution’s sustainability because different kinds of vehicles have different environmental impacts. As a clear example, the amount of CO2 emitted per kilometer and per user and the amount of air pollutants differ substantially depending on whether a car or a motorcycle is used [50,51]. An improved indicator should employ proper estimates of the CO2 emitted per kilometer by each type of vehicle, multiplied by the actual kilometers travelled, which can be estimated through surveys among the campus population.
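As an illustration of this suggestion, the per-type emission weighting could be computed as follows; the per-kilometer factors are placeholder values only and should be replaced with figures from published fleet inventories:

```python
# Illustrative tailpipe factors (g CO2 per km); placeholder values that
# depend in reality on fleet composition and published inventories.
G_CO2_PER_KM = {"car": 160.0, "motorcycle": 100.0}

def tr1_adjusted(km_by_vehicle, population):
    """Hypothetical adjusted TR1 indicator (kg CO2 per person per year).

    km_by_vehicle maps vehicle type to total annual kilometers travelled
    by the campus community, e.g., gathered through surveys.
    """
    total_g = sum(G_CO2_PER_KM[v] * km for v, km in km_by_vehicle.items())
    return total_g / 1000.0 / population
```

Unlike a raw vehicle count, this weighting distinguishes a campus dominated by car commuting from one dominated by motorcycles with the same number of vehicles.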
  • Program to limit or decrease the parking area on campus for the last three years (TR6): At some universities, the induced traffic remains outside the campus area thanks to the availability of external parking. Therefore, this item should be modified to include initiatives aimed at limiting or decreasing all the parking areas occupied by the university’s community, regardless of whether they are inside or outside the campus area. More generally, this item should incorporate initiatives aimed at planning more sustainable long-distance travel; for instance, universities promoting the use of trains instead of aircraft should be advantaged in the ranking. Finally, it must be pointed out that generating reliable data on access to the university becomes increasingly challenging when moving from a single-area campus to universities with buildings spread across an urban territory.
  • The ratio of sustainability courses to total courses/subjects (ED1): According to the UI GreenMetric Ranking system, an HEI obtains a higher score by simply increasing the number of sustainability courses relative to the total number of courses taught at the same institution. The inclusion of this item could distort the sustainability ranking in a variety of ways. First, the number of hours of sustainability courses should be preferred over the mere number of sustainability courses. HEIs arbitrarily split the sustainability content they teach into variable numbers of courses, which can penalize those HEIs that concentrate the same content into fewer courses. Secondly, by considering the number of sustainability courses with respect to the total number of courses, an HEI already teaching a larger variety of subjects and already devoting a large amount of time to sustainability teaching will achieve a lower sustainability score simply because more time is spent teaching subjects other than sustainability. The number of people attending these courses is also worthy of consideration, as a small audience can hinder the spread of knowledge about sustainability even if the number of sustainability courses is high. The heterogeneity of the sustainability content taught is also important, as teaching should not focus on a few specific sustainability subjects while neglecting many others. Based on this, as a starting point for improvement, the indicator should be computed by dividing the number of hours spent teaching sustainability by the total number of teaching hours for all subjects. In addition, a threshold should be established above which further increasing the number of sustainability teaching hours does not yield additional points. More points should be assigned if the percentage of total students attending the courses is higher.
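The hours-based reformulation of ED1 suggested above can be expressed as a short sketch. The saturation threshold of 20% of total teaching hours and the multiplicative attendance weighting are purely hypothetical choices for illustration.

```python
def ed1_score(sust_hours, total_hours, attendees, total_students,
              saturation_ratio=0.2):
    """Hypothetical hours-based reformulation of ED1: the share of teaching
    hours devoted to sustainability, capped at a saturation threshold above
    which extra hours earn no further points, weighted by the share of
    students attending. Returns a score in [0, 1]."""
    hours_term = min(sust_hours / total_hours, saturation_ratio) / saturation_ratio
    attendance_term = attendees / total_students
    return hours_term * attendance_term

# 200 of 2000 teaching hours on sustainability, attended by half the students:
print(ed1_score(200, 2000, 500, 1000))  # 0.25
```

With this formulation, splitting the same content into more courses changes nothing, and an HEI already at the saturation threshold is not penalized for also teaching many other subjects.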
  • Percentage of university budget for sustainability efforts (SI6) and the ratio of sustainability research funding to total research funding (ED2): According to both of these items, an HEI can obtain a higher sustainability ranking than others by increasing the amount of funding for the implementation of sustainability programs and for research related to sustainability. According to the ranking system, this funding is considered with respect to the total university budget and the total research funding. In this regard, these items fail to take into account the fact that the need for sustainability is context-specific for each HEI and varies for the same HEI over time. One HEI may need more sustainability efforts than another, which means that more funding needs to be invested there compared with an HEI that has already achieved a good level of sustainability. In other words, the amount of funding for sustainability implementation and research should be compared against the actual need of the HEI. Additionally, similar to the discussion about item ED1, an HEI already spending substantial funding on sustainability will receive a lower score simply by augmenting funding for projects and research dealing with topics other than sustainability. Furthermore, funding for research and sustainability efforts, even when expressed relative to total funding, can also vary as a function of the income of the country where the HEI is located, as the means and tools for sustainability improvement present different economic burdens from one country to another. Therefore, it becomes very difficult to make an unbiased comparison among universities belonging to different countries in relation to funding for sustainability program implementation and research. As a replacement for the original one, an amended indicator for SI6 should be the ratio between the budget for sustainability efforts and the total budget available for all university activities.
This quantity should then be divided by the number of sustainability actions that actually need to take place. Similar amendments are suggested for item ED2.
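A minimal sketch of the amended SI6, under the simplifying assumption that the "actual need" can be expressed as a count of required sustainability actions (the handling of the no-action case is likewise an assumed convention):

```python
def si6_score(sust_budget, total_budget, actions_needed):
    """Hypothetical amended SI6: the budget share for sustainability
    normalized by the number of sustainability actions the HEI actually
    needs to implement. The zero-action convention is an assumption."""
    budget_share = sust_budget / total_budget
    if actions_needed == 0:
        return budget_share  # nothing left to do: no normalization applied
    return budget_share / actions_needed

print(si6_score(2_500_000, 10_000_000, 5))  # 0.05
```

Two HEIs spending the same budget share then score differently if one of them still has far more sustainability actions outstanding, reflecting the context-specific need discussed above.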
  • Number of scholarly publications on sustainability (ED3): According to the guidelines for the year 2022, this item must be filled out with the average number of scholarly indexed publications about sustainability over the last 3 years. Although the number of scientific publications dealing with sustainability could represent a good indicator of a university’s research activities related to sustainability, it is important to also consider the heterogeneity and novelty of the research in these publications, the relevance of the journal where the research is published with respect to the sustainability topic and, most importantly, the actual impact this research has (for instance, the number of research outputs that have ended up being actively applied to improve sustainability). Although these features are not always easily quantifiable, considering only the quantity of indexed publications may encourage some universities to privilege quantity over quality, as the peer-review process is not always comparable among journals [52,53]. Furthermore, similar to the case of program implementation, research outputs for sustainability always need to be considered with respect to the actual need for them. This is because research on sustainability should always reflect the need for novel knowledge and tools in the field. This need may saturate over time, as an HEI does not always have the same need to invest in new research programs. Sometimes, the research carried out by one university provides outputs useful enough for another university, thus reducing the need for the latter to carry out research and the number of scholarly publications produced therefrom. Aside from that, sustainability has several branches (e.g., the six criteria of the UI GreenMetric system), and one university may focus its research on one branch while another focuses on a different one.
Considering that the amount of publishable research content likely differs from one branch to another due to several circumstantial factors (e.g., the state of development of the subject and the acceptance rate of the journals dealing with the subject) [54], it turns out to be unfair to compare the mere overall number of publications on sustainability by worldwide universities. To account more fairly for research activity on sustainability, the published articles should first be grouped according to the branch of sustainability they belong to. Secondly, the articles belonging to each sustainability branch should be counted, using the impact factor of the publishing journal as a weight. A distinct score should be assigned for each sustainability branch. In this way, biased score assignments arising from the choice to focus research on a branch with an intrinsically higher citation rate can be avoided.
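The grouping-and-weighting scheme proposed above for ED3 can be sketched as follows; the branch labels and impact factors are illustrative placeholders only.

```python
from collections import defaultdict

def ed3_branch_scores(publications):
    """Amended ED3 sketch: publications are grouped by sustainability branch
    and counted using the publishing journal's impact factor as a weight,
    yielding a distinct score per branch. Input: (branch, impact_factor)."""
    scores = defaultdict(float)
    for branch, impact_factor in publications:
        scores[branch] += impact_factor
    return dict(scores)

pubs = [("energy", 3.0), ("waste", 1.5), ("energy", 2.5)]
print(ed3_branch_scores(pubs))  # {'energy': 5.5, 'waste': 1.5}
```

Because each branch receives its own score, a university concentrating on a branch with an intrinsically higher citation rate gains no advantage over one working in a less-cited branch.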
  • Number of events related to sustainability (ED4): With respect to this item, an HEI obtains a higher score by simply increasing the number of events related to sustainability. However, the overall number of hours these sustainability events last is far more important than the mere number of events; otherwise, an HEI could achieve a higher score simply by splitting the same number of hours into a larger number of events. That aside, the number of people involved and, more generally, the number of people attending those events is very relevant too. Similar to the case of item ED3, the heterogeneity of the topics dealt with during these events should also be considered when assigning scores. As an improved indicator, the cumulative number of hours spent on events related to sustainability should be considered instead of the mere number of events.
  • Number of student organizations related to sustainability (ED5): With this item, an HEI achieves a higher score simply by having more student organizations related to sustainability, regardless of the total number of students enrolled at the university. Thus, a university with fewer enrolled students will achieve a lower sustainability score only because it has fewer student organizations related to sustainability compared with a university where more students are enrolled. This unfairly penalizes small universities. Furthermore, the performance of these organizations in terms of efforts toward sustainability improvements deserves consideration when assigning scores. As an improved indicator, the number of student organizations related to sustainability should be divided by the total number of students enrolled at the same institution.
A question can be raised as to whether a critical analysis such as the one proposed in this article must be integrated with statistical analysis. Aside from the fact that statistical analysis has its own limitations [55,56,57], it is important to point out that the considerations presented here do not need statistical analysis in support. For instance, it was explained that the “Number of renewable energy sources on campus” (EC3) is not a suitable indicator, as what matters is not the variety of renewable energy sources employed but the contribution of renewable energy to the electrical power supply. This consideration clearly does not require any statistical analysis; critical thinking is sufficient. Another example is the critique of the item “Greenhouse gas emission reduction program” (EC7). The main critiques in this case were the application of a fixed CO2 emission factor regardless of the country where the university is located and the omission of several CO2 contributions. In this case, basic knowledge about the dynamics of CO2 emissions as a function of the energy mix and about all the possible CO2 contributions of an institution is a sufficient methodological tool to understand that the item, as presented by the UI GreenMetric Ranking system, needs several amendments. There is no need for statistical analysis to demonstrate that more CO2 contributions need to be included, nor that a specific CO2 emission factor reflecting the current energy mix at the university’s location should be used instead of a constant one applied equally to worldwide universities. Similarly, statistical analysis is not needed to demonstrate that the number of programs for GHG emission reduction (EC7) and the number of programs for reducing the use of paper and plastic (WS2) are poor sustainability indicators, since what matters most is the actual effectiveness of these programs and not their mere number.
A basic understanding of vehicle pollution potential is also enough to understand that considering the total number of vehicles (divided by the total campus population) as a sustainability indicator without differentiating between cars and motorcycles does not yield a reliable sustainability assessment. Furthermore, the critique of “The number of scholarly publications on sustainability” (ED3), according to which not all scholarly publications contribute equally and considering the mere number of publications therefore generates biases, is also widely acknowledged by the scientific community and academics. Experience with university teaching programs is likewise sufficient to understand that “The ratio of sustainability courses to total courses” (ED1) should be replaced by a better indicator that takes into account the actual number of hours of each sustainability course. As a matter of fact, it is well known that courses carry different credits according to their number of hours, and the larger the number of hours of a course, the greater the content taught. Statistical analysis is not needed to demonstrate that several indicators generate ranking biases linked to universities’ landscapes, original construction, weather, availability of renewable energy sources and cultural habits. For instance, only a basic understanding of electricity usage dynamics is required to note that the total amount of electricity used is a function of several local factors, including the presence of high-energy-demand laboratories and the number of hours of daylight. Similarly, it is easy to understand without statistical analysis that the total carbon footprint, despite being a relevant sustainability parameter, strongly depends on the availability of renewable energy sources, and that this last factor is severely affected by the location of the university. This in turn creates biases and unfair rankings when local factors are not considered.

3.2. Other Considerations

With regard to the criterion of Energy and Climate Change (EC), the ranking system assigns a maximum number of points for the item “Greenhouse gas emission reduction program” (EC7) equal to the one assigned to other items within the same criterion, such as “The ratio of renewable energy production divided by total energy usage per year” (EC5) and “Total carbon footprint divided by total campus’ population (metric tons per person)” (EC8). However, items EC5 and EC8 provide more direct quantifications of an HEI’s sustainability, while the number of programs for GHG reduction is more context-specific and may fail in its scope, as extensively discussed in Section 3.1. Similarly, with regard to the criterion “Waste” (WS), the ranking system assigns a maximum score for the item “Program to reduce the use of paper and plastic on campus” (WS2) equal to the one assigned to other items belonging to the same criterion. While the items other than WS2 quantify more objectively the achievements in terms of waste treatment and recycling, item WS2 only considers the implementation of programs for reducing plastic and paper usage, which, as discussed in Section 3.1, is much more context-specific and does not directly quantify the effective sustainability of a university but only its efforts, whose performance is not considered. Although the preparation and implementation of programs to improve sustainability in HEIs is a good starting point, it is questionable whether they should carry the same weight as actual achievements.
There are other important issues related to the UI GreenMetric ranking system. Assuming that all the items and related score assignment methodology are set up correctly, the reliability of the data included could be concerning for some institutions. This is because the data needed to fill out the questionnaire and used to assign sustainability scores are not always easy to estimate. For instance, the amount of total waste recycled by the university (as required for item WS1) or the carbon dioxide equivalent emitted (as required for item EC8) are easily assessable only for those HEIs in locations with advanced monitoring and control of environmental issues. When the data required cannot be easily estimated, various assumptions are usually made, and thus the data provided can be affected by a variably significant degree of uncertainty. This uncertainty needs to be known and taken into account when assigning scores. Importantly, the propagation of data uncertainty to the assigned scores for the item, for the criterion and for the overall ranking needs to be properly addressed. Sometimes, data uncertainty is such that the final score assigned for a criterion or for the overall ranking is too uncertain. In this case, the institution should not be considered for the world universities ranking, as this would entail too much randomness, where other universities with more robust data end up being unfairly favored or penalized.
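To illustrate why propagating data uncertainty to the assigned scores matters, the following Monte Carlo sketch perturbs each item value by its relative uncertainty and observes the spread of the resulting weighted score. The weights and the uniform error model are assumptions made for illustration; they are not part of the UI GreenMetric methodology.

```python
import random

def criterion_score_uncertainty(values, weights, rel_uncertainties,
                                n_samples=10_000, seed=0):
    """Monte Carlo sketch: each item value is perturbed by a uniformly
    distributed relative error, and the weighted criterion score is
    recomputed. Returns (mean score, half-range of the sampled scores)."""
    rng = random.Random(seed)
    samples = []
    for _ in range(n_samples):
        score = sum(w * v * (1 + rng.uniform(-u, u))
                    for v, w, u in zip(values, weights, rel_uncertainties))
        samples.append(score)
    mean = sum(samples) / n_samples
    half_range = (max(samples) - min(samples)) / 2
    return mean, half_range

# With perfectly known data the score is exact (zero spread); a large
# relative uncertainty on even one item widens the spread accordingly.
print(criterion_score_uncertainty([10.0, 20.0], [0.5, 0.5], [0.0, 0.0]))  # (15.0, 0.0)
```

When the spread of an institution's score exceeds the typical gap between adjacent ranking positions, its position in the ranking is effectively random, which is the situation the text argues should disqualify the data from comparison.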
Another issue is that all the items of the ranking system studied provide either a null score or a positive one. However, worldwide, there are instances of misconduct by universities with detrimental environmental impacts, such as the disposal of waste and wastewater in ways that do not comply with regulatory standards. Such misconduct should also be considered when evaluating an HEI’s sustainability. Some HEIs should be excluded from the ranking if they do not comply with the fundamental criteria of environmental protection (a list of necessary requirements should be made available).
Finally, it is important to note that the UI GreenMetric Ranking system neglects the different economic burdens that the implementation of sustainability development programs may present. This can presumably be attributed to the fact that economic burdens are difficult to take into account given the strong fluctuations to which they are typically subjected. Another reason could be their difficult quantification. For instance, in EU member states, several sustainability projects have recently been launched with funding from the European Commission rather than directly from the member state itself [58]. Hence, even if we knew the economic state of the country where an HEI is located, that would not necessarily be representative of the actual economic burden to which the university is subjected, as external funding sources can intervene. We believe this is an intrinsic problem of worldwide institutional metrics that is difficult to overcome. At the same time, sustainability efforts need to be a priority for countries around the world, despite the economic peculiarities of each country.

4. Conclusions and Future Perspectives

This work presents a detailed analysis of the suitability of the items constituting the UI GreenMetric ranking system as tools to quantify worldwide universities’ sustainability. Several items were found either to be improper indicators of sustainability development or to require extensive improvement before being used in a world university ranking system. Specifically, items such as “Number of renewable energy sources on campus” and “The total number of vehicles (cars and motorcycles) divided by the total campus’ population”, among others, emerged as unsuitable indicators of sustainability development in HEIs. Other items, where scores could be earned simply by drawing up programs or increasing the number of sustainability events and organizations, were found to be affected by several context-specific factors and therefore to lead to biased rankings. The problem does not lie solely in the lack of information or in the fact that some parameters are difficult to measure. While for some indicators this may be true, for others the issue is more serious: either the item simply does not qualify for sustainability assessment, or it is not properly normalized against local, context-specific and unchangeable factors that favor some universities over others. These problems would occur even if all the information were available. Aside from that, taking into account the reliability of the data used to quantify the various items is not optional, since the error introduced by data unreliability propagates to the final score. Including items that are difficult to quantify can undermine the reliability of the final ranking, and this is not a light matter. Furthermore, comparing universities using data with different levels of accuracy can also be misleading and problematic.
Based on this, considerable further work should be carried out with the aim of identifying indicators that evaluate universities’ sustainability in a more unbiased manner. Building on the analysis and suggestions presented in this work, experts can start to elaborate more refined sustainability indicators that describe HEIs’ sustainability more effectively and without bias. The suggested improvements essentially involve taking into account the current need for a program’s implementation in light of sustainability upgrading, as well as the feasibility of the needed interventions within the context of the university itself. Many indicators proposed by Universitas Indonesia should be normalized with respect to context-specific factors such as renewable energy source availability, weather, landscape, original construction and the cultural habits of the people enrolled.
Ranking universities based on their sustainability development is a delicate task requiring careful attention to several context-specific factors, with the aim of avoiding penalizing a university with respect to another simply based on irreversible or unchangeable local factors such as original construction choices and weather conditions. Wrong rankings can not only unfairly damage or improve universities’ reputations but also direct universities toward unreasonable interventions by making them focus on issues of little to no importance for sustainable development while much more impactful matters are neglected.

Author Contributions

Conceptualization, M.R.; methodology, R.B. and E.C.R.; validation, E.C.R.; data curation, R.B.; writing—original draft preparation, R.B.; writing—review and editing, R.B. and E.C.R.; visualization, R.B., M.R., V.T. and E.C.R.; supervision, M.R. and V.T. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by PNR—Italian National Programme for Research 2021–2027 and specifically by the Ministerial Decree n. 737 of 25 June 2021 CUP E65F21004150001.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Samara, F.; Ibrahim, S.; Yousuf, M.E.; Armour, R. Carbon Footprint at a United Arab Emirates University: GHG Protocol. Sustainability 2022, 14, 2522. [Google Scholar] [CrossRef]
  2. Schiavon, M.; Ragazzi, M.; Magaril, E.; Chashchin, M.; Karaeva, A.; Torretta, V.; Rada, E.C. Planning sustainability in higher education: Three case studies. WIT Trans. Ecol. Environ. 2021, 253, 99–110. [Google Scholar] [CrossRef]
  3. Reche, C.; Viana, M.; Rivas, I.; Bouso, L.; Àlvarez-Pedrerol, M.; Alastuey, A.; Sunyer, J.; Querol, X. Outdoor and Indoor UFP in Primary Schools across Barcelona. Sci. Total Environ. 2014, 493, 943–953. [Google Scholar] [CrossRef]
  4. Ragazzi, M.; Rada, E.C.; Zanoni, S.; Passamani, G.; Dalla Valle, L. Particulate Matter and Carbon Dioxide Monitoring in Indoor Places. Int. J. Sustain. Dev. Plan. 2017, 12, 1032–1042. [Google Scholar] [CrossRef]
  5. Erlandson, G.; Magzamen, S.; Carter, E.; Sharp, J.L.; Reynolds, S.J.; Schaeffer, J.W. Characterization of Indoor Air Quality on a College Campus: A Pilot Study. Int. J. Environ. Res. Public Health 2019, 16, 2721. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  6. Karaeva, A.; Cioca, L.I.; Ionescu, G.; Magaril, E.R.; Rada, E.C. Renewable Sources and Its Applications Awareness in Educational Institutions. In Proceedings of the 2019 International Conference on ENERGY and ENVIRONMENT (CIEM), Timisoara, Romania, 17–18 October 2019; pp. 338–342. [Google Scholar] [CrossRef]
  7. Duma, C.; Basliu, V.; Dragan, V.M. Assessment of the Impact of Environmental Factors from a Space Corresponding to a Higher Education Institution. In Proceedings of the 18th International Multidisciplinary Scientific Geoconference Sgem 2018, Albena, Bulgaria, 2–8 July 2018; Volume 18, pp. 121–128. [Google Scholar] [CrossRef]
  8. Rada, E.C.; Bresciani, C.; Girelli, E.; Ragazzi, M.; Schiavon, M.; Torretta, V. Analysis and Measures to Improve Waste Management in Schools. Sustainability 2016, 8, 840. [Google Scholar] [CrossRef] [Green Version]
  9. Khoshbakht, M.; Zomorodian, M.; Tahsildoost, M. A Content Analysis of Sustainability Declaration in Australian Universities. In Proceedings of the 54th International Conference of the Architectural Science Association (ANZAScA) 2020, Auckland, New Zealand, 26–27 November 2020; pp. 41–50. [Google Scholar]
  10. Sousa, S.; Correia, E.; Leite, J.; Viseu, C. Environmental Behavior among Higher Education Students. In Proceedings of the 5th World Congress on Civil, Structural, and Environmental Engineering (CSEE’20), Virtual, 18–20 October 2020. [Google Scholar] [CrossRef]
  11. Torretta, V.; Rada, E.C.; Panaitescu, V.; Apostol, T. Some Considerations on Particulate Generated by Traffic. UPB Sci. Bull. Ser. D Mech. Eng. 2012, 74, 241–248. [Google Scholar]
  12. Akhtar, S.; Khan, K.U.; Atlas, F.; Irfan, M. Stimulating Student’s pro-Environmental Behavior in Higher Education Institutions: An Ability–Motivation–Opportunity Perspective. Environ. Dev. Sustain. 2022, 24, 4128–4149. [Google Scholar] [CrossRef]
  13. Leiva-Brondo, M.; Lajara-Camilleri, N.; Vidal-Meló, A.; Atarés, A.; Lull, C. Spanish University Students’ Awareness and Perception of Sustainable Development Goals and Sustainability Literacy. Sustainability 2022, 14, 4552. [Google Scholar] [CrossRef]
  14. Bertossi, A.; Marangon, F. A Literature Review on the Strategies Implemented by Higher Education Institutions from 2010 to 2020 to Foster Pro-Environmental Behavior of Students. Int. J. Sustain. High. Educ. 2022, 23, 522–547. [Google Scholar] [CrossRef]
  15. Atici, K.B.; Yasayacak, G.; Yildiz, Y.; Ulucan, A. Green University and Academic Performance: An Empirical Study on UI GreenMetric and World University Rankings. J. Clean. Prod. 2021, 291, 125289. [Google Scholar] [CrossRef]
  16. Galleli, B.; Teles, N.E.B.; Santos, J.A.R.d.; Freitas-Martins, M.S.; Hourneaux Junior, F. Sustainability University Rankings: A Comparative Analysis of UI Green Metric and the Times Higher Education World University Rankings. Int. J. Sustain. High. Educ. 2022, 23, 404–425. [Google Scholar] [CrossRef]
  17. Times Higher Education. THE Impact Rankings. Available online: https://www.timeshighereducation.com/rankings/impact/2020/overall#!/page/0/length/25/sort_by/rank/sort_order/asc/cols/undefined (accessed on 1 January 2023).
  18. Urbanski, M.; Filho, W.L. Measuring Sustainability at Universities by Means of the Sustainability Tracking, Assessment and Rating System (STARS): Early Findings from STARS Data. Environ. Dev. Sustain. 2015, 17, 209–220. [Google Scholar] [CrossRef]
  19. Studenten Voor Morgen SustainaBul. Available online: https://www.studentenvoormorgen.nl/en/sustainabul/ (accessed on 1 January 2023).
  20. Green Office Movement. University Sustainability Assessment Framework (UniSAF). Available online: https://www.greenofficemovement.org/sustainability-assessment/ (accessed on 1 January 2023).
  21. Suwartha, N.; Sari, R.F. Evaluating UI GreenMetric as a Tool to Support Green Universities Development: Assessment of the Year 2011 Ranking. J. Clean. Prod. 2013, 61, 46–53. [Google Scholar] [CrossRef]
  22. Baricco, M.; Tartaglino, A.; Gambino, P.; Dansero, E.; Cottafava, D.; Cavaglià, G. University of Turin Performance in UI GreenMetric Energy and Climate Change. E3S Web Conf. 2018, 48, 03003. [Google Scholar] [CrossRef] [Green Version]
  23. Fuentes, J.E.; Garcia, C.E.; Olaya, R.A. Estimation of the Setting and Infrastructure Criterion of the Ui Greenmetric Ranking Using Unmanned Aerial Vehicles. Sustainability 2022, 14, 46. [Google Scholar] [CrossRef]
  24. Lourrinx, E.; Hadiyanto; Budihardjo, M.A. Implementation of UI GreenMetric at Diponegoro University in Order to Environmental Sustainability Efforts. E3S Web Conf. 2019, 125, 02007. [Google Scholar] [CrossRef]
  25. Lauder, A.; Sari, R.F.; Suwartha, N.; Tjahjono, G. Critical Review of a Global Campus Sustainability Ranking: GreenMetric. J. Clean. Prod. 2015, 108, 852–863. [Google Scholar] [CrossRef]
  26. University Indonesia GreenMetric Ranking System. Available online: https://greenmetric.ui.ac.id/ (accessed on 1 January 2023).
  27. Ragazzi, M.; Ghidini, F. Environmental Sustainability of Universities: Critical Analysis of a Green Ranking. Energy Procedia 2017, 119, 111–120. [Google Scholar] [CrossRef]
  28. Veidemane, A. Education for Sustainable Development in Higher Education Rankings: Challenges and Opportunities for Developing Internationally Comparable Indicators. Sustainability 2022, 14, 5102. [Google Scholar] [CrossRef]
  29. European Environment Agency. CO2 Intensity of Electricity Generation; European Environment Agency: Copenhagen, Denmark, 2017.
  30. Boiocchi, R.; Bertanza, G. Evaluating the Potential Impact of Energy-Efficient Ammonia Control on the Carbon Footprint of a Full-Scale Wastewater Treatment Plant. Water Sci. Technol. 2022, 85, 1673–1687. [Google Scholar] [CrossRef]
  31. Flores-Alsina, X.; Arnell, M.; Amerlinck, Y.; Corominas, L.; Gernaey, K.V.; Guo, L.; Lindblom, E.; Nopens, I.; Porro, J.; Shaw, A.; et al. Balancing Effluent Quality, Economic Cost and Greenhouse Gas Emissions during the Evaluation of (Plant-Wide) Control/Operational Strategies in WWTPs. Sci. Total Environ. 2014, 466–467, 616–624. [Google Scholar] [CrossRef] [PubMed]
  32. Eriksson, M.; Strid, I.; Hansson, P.A. Carbon Footprint of Food Waste Management Options in the Waste Hierarchy—A Swedish Case Study. J. Clean. Prod. 2015, 93, 115–125. [Google Scholar] [CrossRef]
  33. Sun, L.; Li, Z.; Fujii, M.; Hijioka, Y.; Fujita, T. Carbon Footprint Assessment for the Waste Management Sector: A Comparative Analysis of China and Japan. Front. Energy 2018, 12, 400–410. [Google Scholar] [CrossRef]
  34. Pérez, J.; de Andrés, J.M.; Lumbreras, J.; Rodríguez, E. Evaluating Carbon Footprint of Municipal Solid Waste Treatment: Methodological Proposal and Application to a Case Study. J. Clean. Prod. 2018, 205, 419–431. [Google Scholar] [CrossRef]
  35. Cornejo, P.K.; Santana, M.V.E.; Hokanson, D.R.; Mihelcic, J.R.; Zhang, Q. Carbon Footprint of Water Reuse and Desalination: A Review of Greenhouse Gas Emissions and Estimation Tools. J. Water Reuse Desalin. 2014, 4, 238–252. [Google Scholar] [CrossRef]
  36. Shrestha, E.; Ahmad, S.; Johnson, W.; Batista, J.R. The Carbon Footprint of Water Management Policy Options. Energy Policy 2012, 42, 201–212. [Google Scholar] [CrossRef]
Figure 1. Yearly trend of the number of HEIs taking part in the UI GreenMetric ranking system.