Article

Research Metrics in Architecture: An Analysis of the Current Challenges Compared to Engineering Disciplines

by Omar S. Asfour 1,2,* and Jamal Al-Qawasmi 1,2
1 Architecture and City Design Department, King Fahd University of Petroleum & Minerals, Dhahran 31261, Saudi Arabia
2 Interdisciplinary Research Center for Construction and Building Materials, King Fahd University of Petroleum & Minerals, Dhahran 31261, Saudi Arabia
* Author to whom correspondence should be addressed.
Publications 2024, 12(4), 50; https://doi.org/10.3390/publications12040050
Submission received: 2 October 2024 / Revised: 18 November 2024 / Accepted: 2 December 2024 / Published: 19 December 2024

Abstract

The Hirsch index (‘h-index’) is a widely recognized metric for assessing researchers’ impact, considering both the quantity and quality of their research work. Despite its global acceptance, the h-index has created some uncertainty about appropriate benchmark values across different disciplines. One such area of concern is architecture, which is often at a disadvantage compared to the fields of science and engineering. To examine this disparity, this study compared citation counts and h-index values in architecture with those of selected engineering disciplines. Data were collected extensively from the Scopus database, focusing on the top 50 universities. The analysis revealed that architecture consistently recorded lower citation counts and h-index values than the selected engineering fields. Specifically, the average h-index for faculty members at the associate and full professor ranks was found to be 7.0 in architecture, compared to 22.8 in civil engineering and 25.6 in mechanical engineering. The findings highlight that a universal h-index benchmark is impractical, as research areas vary significantly in terms of research opportunities, challenges, and performance expectations. Thus, this study proposes the adoption of an additional relative h-index metric, the ‘hr-index’, which accounts for the deviation of an individual researcher’s h-index from the average value within their field of knowledge. This metric can serve as a complement to the standard h-index, providing a more equitable and accurate assessment of researchers’ performance and impact within their areas of expertise.

1. Introduction

Research is a systematic process of inquiry with various aims, types, and methods, tailored to each specific knowledge domain. It involves data collection and analysis through appropriate research methods and tools [1]. For university faculty members, research is a key responsibility alongside teaching and administrative duties. Architecture, as a discipline, integrates diverse aspects—art, science, psychology, and philosophy—within a single framework. It is deeply influenced by ethical, cultural, socio-economic, and environmental factors that shape the built environment and impact quality of life. Architectural research aims to advance the profession by improving building design, functionality, and user interaction, especially in urban contexts [2]. It employs holistic approaches connected to humanities, social sciences, technical sciences, and design-based knowledge creation. Common methods include case studies, comparative analysis, experimental solutions and simulations, theoretical hypotheses, and interpretive reflection [3,4].
Architects often work collaboratively, and institutions encourage them to conduct research in interdisciplinary environments that engage various relevant fields, fostering innovative findings and conclusions. However, researchers in architecture face several challenges. One main challenge in this regard is the institutional understanding of architecture as a discipline. Architecture is a multidisciplinary subject whose intellectual practice defies simple classification. Some technical universities list architecture as a formal applied science within the engineering sciences, while others place it within art schools. Although this diversity can be a strength, showcasing architecture’s multidisciplinary nature, it also presents a core challenge in defining its place within the research landscape.
Researchers in architecture also face challenges related to the gap between research and practice. Architectural research often remains confined to the ideas discussed in architectural schools and published in conferences and journals, which creates a disconnect between academic knowledge and real-world applications. Additional challenges include a limited awareness of the importance of architectural research compared to fields like science and technology, along with insufficient funding, especially for topics lacking direct industry ties [5]. The American Institute of Architects [2] has advocated for empowering architects to engage more actively in research through:
  • Increasing funding for research in architecture, including governmental grants, industry support, and collaboration between academia and industry for this purpose;
  • Prioritizing research as an essential competency for architects since commitment to research must become an integral part of architecture firms’ culture;
  • Disseminating research to share its findings and improve research literacy.
The use of research metrics to evaluate research quality has surged over recent decades, driven by the digitization of research. These metrics offer several benefits, such as providing concrete, quantifiable measures of research output and documenting the performance history of researchers and academic institutions. However, despite their significance, research metrics have fostered a ‘culture of counting’, which could be used to manipulate the scientific impact of research. This has led to unethical practices, including false or inadequate citations, excessive self-citations, and citations of irrelevant work [6,7]. One major challenge for architecture researchers is the unfair comparison of their metrics to those of peers in other disciplines. It is often argued that research in the arts, humanities, and social sciences receives less visibility than that in science and technology, creating this unfair comparison [8].
Architecture is one such field: it is less journal-focused and, therefore, disadvantaged in citation potential. For instance, the well-known Web of Science database includes core journal collections across various disciplines, with many architecture journals classified under the Arts and Humanities Citation Index. This index only began receiving journal impact factors in June 2023 and has yet to be assigned ranking quartiles, which limits the appeal of these journals to researchers and reduces their citation potential [9]. This differs from the Social Sciences Citation Index and the Science Citation Index Expanded, where journals have had impact factors for decades. To address this gap, this study compares citation counts and h-index values in architecture with those in selected engineering fields, examining the variation between these disciplines in this regard. This comparison is intended to test the practicality of using a universal h-index benchmark across different research areas, as they vary significantly in terms of research opportunities, challenges, and expectations. This study suggests the use of an additional relative h-index, tailored to accommodate the unique publishing and citation patterns prevalent in the field of architecture. This metric could serve as a standardized tool for evaluating scientific contributions within the field, enhancing the fairness and relevance of these evaluations.

2. Literature Review

Bibliometrics, the quantitative evaluation of research outputs, encompasses a range of metrics that aim to assess the impact and significance of research output at multiple levels, including individual articles, journals, academic institutions, and researchers. These metrics serve several purposes, such as ranking journals and academic institutions, supporting faculty promotions, and informing decisions on research grants and funding [10]. Citation counting forms the basis of traditional research metrics, with commonly used examples being the Journal Impact Factor for journals and the h-index for individual researchers [11]. The development of electronic indexing technologies in recent decades has driven the rise of numerous research databases that offer a range of metrics to assess research quality and performance, providing unprecedented oversight of research quality and facilitating global collaboration and knowledge exchange. However, ongoing debates in the literature highlight a wide range of challenges regarding the use of research metrics to accurately assess individual research quality and scholarly standing [12,13,14,15]. Different perspectives in this regard emerge from the various research cultures within the disciplines, reflected in, for example, average publication rates and citation counts. This underscores the importance of comparing ‘like with like’ to ensure fair and meaningful evaluations [14,15,16].
Applying research metrics responsibly in evaluating research quality is essential. Since each metric has limitations, using a combination of metrics provides a more comprehensive view of research performance in any given context. A fair, multidimensional assessment of research quality should include both qualitative and quantitative measures [8], leading to more informed evaluations and reducing the risk of low-quality or predatory publishing practices [17]. For example, alongside citation counts and other numerical metrics, it is important to consider factors such as peer-review feedback, research productivity, annual publication output, discipline-specific publication rates, research focus, and research leadership. An important metric in research evaluation is the citation count, which measures the number of times other studies reference an article. This is often seen as an indicator of an article’s significance based on the attention it has attracted. Citation analysis can be conducted at various levels, including individual researchers, disciplines, or institutions. Several research databases provide citation data and analytical tools. Free databases like Google Scholar allow authors to create profiles, add publications, and track citations over time, offering h-index and citation data updates. Among subscription-based databases, Web of Science (formerly the Institute for Scientific Information, or ISI) and Scopus are among the most widely used. Notably, Scopus generally indexes a broader set of publications than Web of Science, making the latter ‘a near-perfect subset of Scopus’ [18].
Citation counts from both Web of Science and Scopus play a key role in calculating various research metrics. The Web of Science publishes the Journal Impact Factor annually in its Journal Citation Report (JCR), evaluating the impact of journals within its database. Similarly, Scopus provides the Scimago Journal Rank (SJR) and CiteScore, both of which assess the impact of journals it indexes. For researcher impact, Web of Science offers a citation report for each author that includes the h-index, total citations (with and without self-citations), citation trends over time, and average citations per publication. Scopus similarly provides citation data by author, listing individual papers and tracking citations per year. It also allows exclusion of self-citations, updating the author’s h-index accordingly. Additionally, the SciVal analytics tool, which uses Scopus data, offers advanced evaluations of research performance across countries, institutions, researchers, and topics, covering aspects such as collaboration, citations, patents, and awards. Recently, some citation platforms have incorporated artificial intelligence to enhance citation analysis. For instance, Semantic Scholar evaluates citation impact and categorizes citations by specific sections of cited articles [19].
The h-index, proposed by J. E. Hirsch in 2005, is one of the most widely used metrics for assessing research quality in academic settings. It is designed to measure research productivity and influence based on citation counts. Hirsch defined the h-index as ‘the number of papers with citation number ≥ h’ [20]. Thus, an h-index of 10 indicates that an author has published at least 10 papers, each of which has received at least 10 citations. Without any citations, a researcher would have an h-index of zero. This index allows for comparing researchers’ performance and impact by reflecting both quality, through citation counts per paper, and productivity, through the number of published papers. However, it typically takes time for researchers to achieve a high h-index, so early-career researchers usually have a lower index. This effect is particularly evident in fields like art and architecture, where citations to recent studies occur less frequently than in scientific disciplines [8]. The h-index is now widely recognized and supported by several research databases, including Web of Science, Scopus, and Google Scholar. Google Scholar, however, often reports a higher h-index, as it includes citations from all online sources, unlike the curated, subscription-based Scopus and Web of Science databases.
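To illustrate how the index follows from this definition, the following minimal sketch (not tied to any particular database API) computes an h-index from a list of per-paper citation counts; the sample counts are hypothetical.

def h_index(citations):
    """Return the largest h such that the author has h papers
    with at least h citations each (Hirsch, 2005)."""
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, count in enumerate(ranked, start=1):
        if count >= rank:
            h = rank  # this rank still satisfies the definition
        else:
            break
    return h

# Hypothetical citation counts for one author's papers.
print(h_index([25, 18, 12, 10, 10, 7, 3, 1, 0]))  # prints 6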
Despite its advantages, the h-index has some limitations and shortcomings. For example:
  • Some high-quality research works may not attract the expected number of citations for various reasons, such as the popularity of their publication platforms. Thus, they will not contribute to improving the researcher’s h-index.
  • Comparing the research performance of researchers from different disciplines using the number of citations and the resulting h-index is difficult. Research subject areas vary in their citation potential, which is reflected in their average h-index [21]. In fact, research in areas that have higher citation numbers, such as cell biology, is not better than research in areas that typically have lower citation numbers, such as history [22]. Unfortunately, no precise guidelines exist in this regard. However, some general recommendations suggest common h-index values: 2 to 5 for assistant professors, 6 to 10 for associate professors, and 12 to 24 for full professors [23].
  • The h-index does not consider researchers’ seniority, making comparisons of researchers’ impact at different stages of their research careers difficult. To overcome this issue, Hirsch [21] suggested an additional index called the m value, which is the h-index divided by the number of years since the researcher’s first publication. Some databases also provide a five-year h-index, allowing for time-bound assessments. This is particularly useful for tracking a researcher’s impact and productivity on an annual basis throughout their career.
  • The h-index also does not account for differences in citation potential between single-authored and co-authored papers. To address this, the hI,norm metric was proposed, which normalizes citation counts by dividing the number of citations by the number of authors, then calculates a single-author-equivalent h-index based on the adjusted citation counts. Additionally, dividing hI,norm by the researcher’s academic age yields another h-index variant called hI,annual, which reflects annualized impact [24] (a brief computational sketch of these variants follows this list).
  • Self-citations could skew h-index values. Although self-citation is a justified and useful practice in many cases, its misuse could inflate the h-index. This is why some research databases offer the option to exclude self-citations when calculating the h-index [25].
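As referenced in the list above, the following minimal sketch illustrates how the m value, hI,norm, and hI,annual variants can be derived from per-paper citation counts and author counts; the record below is hypothetical, and the helper names are ours rather than part of any standard library.

from datetime import date

def h_index(citations):
    """Largest h such that h papers have at least h citations each."""
    ranked = sorted(citations, reverse=True)
    return max((rank for rank, c in enumerate(ranked, 1) if c >= rank), default=0)

def m_value(citations, first_pub_year, current_year=None):
    """Hirsch's m value: the h-index divided by years since the first publication."""
    current_year = current_year or date.today().year
    return h_index(citations) / max(current_year - first_pub_year, 1)

def hi_norm(citations, authors_per_paper):
    """hI,norm: divide each paper's citations by its author count,
    then compute the h-index on the adjusted counts."""
    adjusted = [c / a for c, a in zip(citations, authors_per_paper)]
    return h_index(adjusted)

def hi_annual(citations, authors_per_paper, first_pub_year, current_year=None):
    """hI,annual: hI,norm divided by academic age (years since first publication)."""
    current_year = current_year or date.today().year
    return hi_norm(citations, authors_per_paper) / max(current_year - first_pub_year, 1)

# Hypothetical record: per-paper citations, author counts, first publication in 2015.
cites = [40, 22, 15, 9, 6, 3]
authors = [2, 3, 1, 4, 2, 2]
print(m_value(cites, 2015, current_year=2024))             # about 0.56
print(hi_norm(cites, authors))                             # 3
print(hi_annual(cites, authors, 2015, current_year=2024))  # about 0.33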
Alternative metrics, or altmetrics, extend conventional citation counts. The term refers to a range of measures used to assess the impact and reach of scholarly research beyond traditional citations. Unlike conventional metrics, which focus on citations in academic journals, altmetrics track attention across various platforms and media, such as social media, news outlets, blogs, and multimedia, among others. Altmetrics help researchers and institutions understand how quickly their scholarly output is being shared and discussed and where it is generating interest outside academic circles. This type of metric can be particularly valuable for identifying the societal or practical impact of research, which is often not captured by citation counts alone. Some of the developed alternative metrics provide researchers and academic institutions with digital tools that enable them to communicate research via social media and other networking platforms, such as Plum Analytics, ResearchGate, and Academia.edu. These platforms provide metrics such as the number of views and downloads, social bookmarks, comments, and ratings [26,27]. These alternative metrics reflect the broader social, economic, and cultural influence of research, highlighting its potential impact beyond academia. Additionally, by increasing a work’s visibility, altmetrics can help improve citation counts by attracting attention to the research. This also includes the Research Interest Score (RI Score) used in the ResearchGate database, which combines reads, recommendations, and citations (excluding self-citations) on ResearchGate. This means that citations are not the only way to estimate a researcher’s impact, as it may take a while before a work starts to receive citations [28]. However, this metric is limited to ResearchGate members and does not consider the differences that exist between disciplines in citation counts and patterns.
Several studies have examined the potential for variation in the h-index across disciplines and its implications for comparing researchers’ performance [29,30,31,32,33,34,35,36,37,38,39]. For instance, Harzing et al. [30] conducted a comparative study of 146 senior academics across five broad fields: humanities, social sciences, engineering, sciences, and life sciences. This study evaluated research metrics such as the number of publications, citations, and h-index, using data from Google Scholar, Scopus, and the Web of Science collected at eight intervals between 2013 and 2015. The findings suggested that the traditional h-index should be adjusted to ensure fair, cross-disciplinary comparisons. The authors proposed hI,annual, a modified h-index that accounts for co-authorship patterns and researchers’ academic age. Raheel et al. [32] also evaluated the h-index and its variants within civil engineering, using multiple databases to identify the most effective metrics for author ranking. The study observed weak correlations among indices, leading to variations in researcher rankings. Additionally, Sheeja and Mathew [33] surveyed researchers in naval architecture affiliated with six higher education institutions in India. They collected altmetric data from ResearchGate profiles and scientometric data from Scopus, finding that the two sets of indicators correlated well, with most researchers achieving citation counts between 1 and 50 and an h-index between 1 and 5.
Park et al. [36] conducted a citation analysis of landscape architecture faculty in North America using Google Scholar data. Results indicated that citation counts correlated with faculty members’ academic rank, degree type, and academic age since their first publication. Notably, the study found that 15% of tenure-track faculty in landscape architecture had no citation records. Zagonari and Foschi [37] discussed the issue of h-index inequity, highlighting factors such as co-author count and the tendency for senior authors to receive more citations. Their study, which surveyed 10,000 Scopus authors from 2006 to 2015, proposed adjustments to h-index calculations to address these challenges and enable fairer cross-disciplinary comparisons. Zagonari [38] further argued for incorporating each researcher’s publication history and collaboration network into h-index calculations. Meanwhile, Sharma and Uddin [39] proposed the Kz index, which accounts for both the impact and age of publications to better reflect researchers’ sustained contributions. They suggested that this index provides a more comprehensive evaluation of research impact. However, alongside considerations of researchers’ seniority, it is also important to account for varying expectations of research impact across different fields.
Thus, this study aims to address the gap observed in the literature by focusing on architecture as a discipline and determining a discipline-specific average h-index across various levels of researcher seniority. Using inductive data collection, this study analyzed the scholarly output of researchers affiliated with the top 50 universities globally, as identified by the QS ranking. It calculated an average h-index value, havg, for each academic rank within the disciplines of architecture, civil engineering, and mechanical engineering. The average h-index formed the foundation for a new research metric, the relative h-index (hr), which measures the deviation of an individual’s h-index from the average within their field. By calculating this metric either for the discipline as a whole or for specific academic ranks, the study introduces a more nuanced approach that accounts for researcher seniority and reflects the differing h-index expectations across various fields of knowledge.

3. Materials and Methods

This study aims to compare citation counts and h-index values in architecture with those in various engineering disciplines to highlight potential differences and enable a fairer comparison of researchers’ impact across these fields. The comparison was based on inductive data collection from the Scopus and SciVal databases, covering the period from January to April 2023. Data were gathered from the top 50 universities according to the QS Rankings, which includes global, regional, and subject-specific rankings. In 2022, the subject-specific ranking encompassed 51 disciplines categorized under five broad academic fields [40]:
  • Arts and Humanities, including 11 disciplines.
  • Engineering and Technology, including 7 disciplines.
  • Life Sciences and Medicine, including 9 disciplines.
  • Natural Sciences, including 9 disciplines.
  • Social Sciences and Management, including 15 disciplines.
The Arts and Humanities category includes architecture under the title ‘Architecture and Built Environment’. Subject-specific university rankings are based on several indicators, including research impact, which is assessed by citations per paper and the h-index of faculty members from the Scopus database [41]. Notably, these indicators are represented as percentages rather than absolute values, complicating direct comparisons of publication impact across academic disciplines. The SciVal database, which also uses Scopus data, provides research metrics by discipline following the All Science Journal Classification (ASJC) system. These metrics include the annual and cumulative citation counts in each discipline, as well as the annual average h-index for researchers. However, the annual average h-index is presented as a single value for all researchers, with no clear consideration of variation in the number of years they were research-active.
To obtain more accurate data, this study adopted an inductive data-collection approach to help architecture researchers assess whether their citation counts are lower than those of peers within their own and other selected fields. For this purpose, we focused on the top 50 universities in the QS University Ranking. This increases the likelihood that any observed differences in research performance among the examined disciplines are related to differing research-metric expectations in those disciplines rather than to the academic performance of the surveyed institutions or researchers. This study considered three domains in this regard: Architecture and Built Environment, from the Arts and Humanities category, and Civil and Structural Engineering and Mechanical Engineering, from the Engineering and Technology category. In total, 150 departments were surveyed, covering the Scopus profiles of 5843 faculty members: 1405 in architecture, 2151 in civil engineering, and 2287 in mechanical engineering. This study considered faculty members at the associate and full professor ranks, giving priority to these two ranks in data collection as they often reflect a level of research maturity and productivity that aligns well with the study objective. Expanding data collection to include other academic ranks is recommended for further investigation.
Scopus, a widely recognized database for research metrics, includes only citations from sources it has indexed, which helps ensure quality by excluding citations from lower-quality sources of the kind that can appear in Google Scholar [42]. It is also used in the QS university ranking, which this study used to select the surveyed universities. As for the targeted research metrics, the Scopus database provides the number of citations and the h-index for the surveyed faculty members. Two values were recorded for citations: the total number of citations and citations obtained between 2018 and 2022. Self-citations were excluded in all cases. During data processing, only faculty members with Scopus profiles and a minimum h-index of 1 were included. This resulted in 899 profiles in architecture, 1777 in civil engineering, and 2054 in mechanical engineering (see Table 1). Notably, in the field of architecture, 36% of surveyed faculty members lacked a Scopus profile, indicating a substantial drop in profile availability for this discipline.
This study used Excel and the Statistical Package for the Social Sciences (SPSS) software for data processing and analysis. As demonstrated in the literature review, traditional metrics such as the total number of publications and citations, in addition to the h-index, provide an overall estimation of researchers’ impact but do not reflect their relative impact within their domain of knowledge compared to peers at different seniority levels. Thus, this study calculated an average h-index value, havg, for each academic rank within each discipline, allowing us to assess individual researchers’ deviations from this calculated mean. Based on this approach, we proposed the relative h-index (hr-index), a research metric that can be calculated using the average h-index and standard deviation within the discipline. This concept is expressed through the Standard Score formula in Equation (1).
hr-index = (h-index − havg-index)/σ        (1)
where hr-index is the relative h-index value of a sampling unit, havg-index is the average h-index value of the sample, and σ is the standard deviation of the sample. A positive hr-index indicates that the researcher’s h-index is above the mean value of the examined group, while a negative value indicates the opposite.
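The following minimal sketch applies Equation (1) to a hypothetical peer sample; the sample values and the choice of the sample standard deviation (rather than the population value) are illustrative assumptions.

import statistics

def hr_index(h, peer_sample):
    """Relative h-index, Equation (1): deviation of an individual's h-index from
    the peer-group mean, expressed in units of the group's standard deviation."""
    h_avg = statistics.mean(peer_sample)
    sigma = statistics.stdev(peer_sample)  # sample standard deviation (assumption)
    return (h - h_avg) / sigma

# Hypothetical h-index values of peers at the same rank within one discipline.
architecture_peers = [3, 5, 6, 7, 7, 8, 9, 11]
print(round(hr_index(10, architecture_peers), 2))  # positive: above the peer average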

4. Results

Figure 1 and Table 2 summarize the results for average h-index, total citations, and citations from the past five years (2018–2022) across the fields of architecture, civil engineering, and mechanical engineering at the associate and full professor ranks. The results indicate that architecture generally shows lower values for these metrics compared to civil and mechanical engineering. Specifically, the average h-index in architecture is 7.0, which is notably lower than 22.8 for civil engineering and 25.6 for mechanical engineering. Differences also emerged between the ranks of associate and full professor, with full professors typically showing higher values. A one-way analysis of variance (ANOVA) was performed to examine whether there is a significant difference in the h-index between the researchers in architecture and the other two disciplines.
The ANOVA test revealed a statistically significant difference in the h-index between at least two of the three examined fields across the corresponding academic ranks (F(2, 1500) = 150.7, p = 0.000). The ANOVA results also showed a significant difference among associate professors (F(2, 1500) = 150.7, p = 0.000) and among full professors (F(2, 2720) = 210.2, p = 0.002). To determine exactly where these differences lie (i.e., which specific discipline differs from the other two in terms of the average h-index), the Scheffé post hoc test was conducted. The results of the Scheffé post hoc test, as presented in Table 3, indicate that the average h-index was significantly lower in architecture than in both civil engineering and mechanical engineering for both academic ranks (p = 0.00 for both).
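For readers who wish to reproduce this type of analysis on their own data, the sketch below runs a one-way ANOVA with SciPy and a hand-coded Scheffé-style pairwise comparison; the h-index samples are small hypothetical values rather than the study data, and the implementation is ours, not the SPSS procedure used by the authors.

import numpy as np
from scipy import stats

# Hypothetical h-index samples per discipline (illustrative only).
groups = {
    "architecture": np.array([4, 6, 7, 9, 5, 8]),
    "civil_eng": np.array([18, 25, 22, 30, 21, 24]),
    "mech_eng": np.array([23, 27, 26, 31, 22, 28]),
}

# Overall one-way ANOVA across the three groups.
f_stat, p_value = stats.f_oneway(*groups.values())
print(f"ANOVA: F = {f_stat:.1f}, p = {p_value:.4f}")

# Scheffé pairwise comparison: each pair is tested against the within-group variance.
k = len(groups)
n_total = sum(len(g) for g in groups.values())
ms_within = sum(((g - g.mean()) ** 2).sum() for g in groups.values()) / (n_total - k)

names = list(groups)
for i in range(k):
    for j in range(i + 1, k):
        gi, gj = groups[names[i]], groups[names[j]]
        f_s = (gi.mean() - gj.mean()) ** 2 / (ms_within * (1 / len(gi) + 1 / len(gj)))
        p_pair = stats.f.sf(f_s / (k - 1), k - 1, n_total - k)  # Scheffé p-value
        print(f"{names[i]} vs {names[j]}: p = {p_pair:.4f}")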
To characterize the difference between the three academic fields in terms of the number of academic journals available for publishing, the researchers accessed and examined relevant data in the Web of Science and Scopus databases [43,44,45] using archival and content analysis methods; the results are presented in Table 4 and Figure 2. Table 4 shows the number of journals in both the Web of Science and Scopus databases that are classified under architecture in comparison to a sample of other engineering disciplines, based on data provided in [43,44]. To further examine the difference between the three disciplines, the researchers used SciVal, an advanced analytics tool developed by Elsevier that provides in-depth research performance analysis based on the Scopus database, to evaluate the annual citation counts for the three examined fields. Figure 2 presents the annual citation count, excluding self-citations, for the years 2000–2020 for the examined fields based on data provided by SciVal [45]. Both Table 4 and Figure 2 show a significant variation between architecture and the examined engineering fields in terms of the number of peer-reviewed journals and annual citations, with architecture showing the lowest values in this regard.
To address some of the limitations of the h-index, the researchers proposed a relative h-index, the hr-index, calculated using Equation (1) presented in Section 3. The proposed hr-index aims to compare the performance and productivity of researchers across disciplines that differ in nature and have different publishing and citation patterns, as is the case for architecture and engineering. The hr-index is calculated as a relative value compared to peers in a specific discipline. To calculate it, an average h-index value, havg-index, should first be calculated for each discipline and for each academic rank in that discipline. The hr-index then assesses the performance of each researcher in his/her field by the deviation of that individual’s h-index from this precalculated average value. Figure 3 shows the hr-index values using the data collected for the three examined disciplines. Figure 3 presents both the h-index and hr-index plotted together for comparison: the h-index is represented by a gray shaded area with its values plotted on the left y-axis, while the hr-index is represented by a thick black curve with its values plotted on the right y-axis. A positive hr-index value indicates that a researcher’s output is higher than the average value of researchers in the corresponding field, and vice versa. Figure 3 shows that the suggested hr-index dramatically alters the researchers’ ranking compared to the rankings calculated using the standard h-index.
Table 5 presents a hypothetical example of nine researchers, three from each discipline. Researcher-1 in each of the three disciplines has an h-index of 10, while researchers 2 and 3 have h-indices of 15 and 20, respectively. Based on the current h-index calculation method, researcher-1 in all three disciplines is ranked equal in performance regardless of the substantial differences among the fields shown by the havg-index. The same applies to researchers 2 and 3, who are ranked equal across the three disciplines according to the h-index assessment. In addition, according to the h-index, researcher-3 in all disciplines is ranked the highest, indicating that he/she outperforms researcher-2 and that researcher-2 outperforms researcher-1. This suggests that the h-index provides misleading information when used to compare the performance of researchers across disciplines that differ in nature and citation patterns. Using the hr-index, which ranks researchers’ performance relative to that of their peers in the same or similar disciplines, results in ranking the three researchers in each discipline differently, which is a more accurate and fair assessment compared to the standard h-index. As shown in Table 5, the hr-index places the researchers in discipline 1 at the highest rank, despite the fact that they have h-index values equal to those of their colleagues in disciplines 2 and 3.
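The ranking reversal described above can be reproduced with a short sketch; the havg-index and σ values below are illustrative assumptions loosely informed by the averages reported earlier, not the actual figures behind Table 5.

# Hypothetical discipline parameters (h_avg, sigma); the values are assumptions for illustration.
disciplines = {"architecture": (7.0, 4.0), "civil_eng": (22.8, 10.0), "mech_eng": (25.6, 11.0)}
researchers = [("Researcher-1", 10), ("Researcher-2", 15), ("Researcher-3", 20)]  # same h-index in every discipline

rows = []
for disc, (h_avg, sigma) in disciplines.items():
    for name, h in researchers:
        rows.append((disc, name, h, round((h - h_avg) / sigma, 2)))  # hr-index per Equation (1)

# Ranking by the plain h-index would treat each Researcher-3 as an equal top performer;
# ranking by the hr-index places the architecture researchers highest relative to their peers.
for disc, name, h, hr in sorted(rows, key=lambda r: r[3], reverse=True):
    print(f"{disc:14s} {name}: h = {h:2d}, hr = {hr:+.2f}")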

5. Findings and Discussion

This section discusses the main findings of the study and their possible implications.

5.1. Limitations of the Conventional Count Metrics for Architecture

The findings demonstrate that, although architecture, civil engineering, and mechanical engineering appear to share related concerns about design, construction, and other aspects of the built environment, the publication and citation patterns in these disciplines are disparate. The results presented in Figure 1 show that the three examined fields differ substantially in the average h-index: an average value of 7.0 for faculty members at the associate and full professor ranks was observed in architecture, compared to 22.8 and 25.6 in civil engineering and mechanical engineering, respectively. The results also show considerable differences in the average h-index between the academic ranks of associate and full professor, with the h-index scores for these ranks in architecture being less than 33% of those in the other two fields. The observed differences in the h-index result from differences in the total number of citations, which can be traced to deeper structural differences among the three fields, as detailed in the discussion below. As presented in Table 2, the findings highlight that architecture recorded significantly lower scores in both the total number of citations and recent citations (2018–2022).
The ANOVA test results revealed a statistically significant difference in the h-index between at least two of the three examined fields across the surveyed academic ranks. Using Scheffé’s method for pairwise comparisons (Table 3), the study found that the average h-index was significantly lower in architecture than in both civil engineering and mechanical engineering for both academic ranks. While a unified approach to assessing research quality and productivity among substantially different fields such as architecture and engineering seems to be needed, the wide range of differences and the essence of scholarly and creative work in each discipline need to be recognized to achieve a fair assessment of performance. The research output in civil and mechanical engineering is predominantly technical and quantitative in nature. It focuses on empirical studies, theoretical models, and experimental validations that are regularly published in high-impact journals. The published papers usually receive high citation counts and, thus, a higher h-index, since they include concepts, data, and models that are relevant and useful to many different engineering subfields. Thus, a high h-index score in civil or mechanical engineering may typically correlate with significant research impact and productivity; however, this is not the case in architecture [46,47].
Research in architecture typically combines qualitative and quantitative studies, and its results can range from theoretical discussions and case studies to professional-based arguments and practice-focused design projects [14,15,48]. Due to their practical and regional rather than generalizable impact, many architectural contributions are not as commonly cited. Therefore, a lower h-index in architecture does not always reflect lower-quality research [14,49]. For instance, major architectural research on sustainable architecture practices in one area might not receive many citations in refereed journals, but it might be extremely relevant to practitioners and policymakers in the region and abroad. This discrepancy draws attention to some of the limitations and drawbacks of using the h-index as a comparative metric; it may underestimate the impact and significance of architectural research that deviates from the conventional citation norms.
Here, it is worth mentioning that the h-index values calculated in this study are higher than those reported in the literature. This can be explained by the fact that this study examined the top universities in each field according to the QS ranking system; thus, the calculated results are expected to show higher h-index values than those reported in the literature. For example, Schreiber [23] suggested an h-index range of 6 to 10 for associate professors and 12 to 24 for full professors. However, the results of this study, as presented in Figure 1, show that the average h-index in architecture was 5 for associate professors and 9 for full professors, which falls far below the h-index values reported in the literature for the science, technology, and engineering fields.

5.2. Academic Journals and Publication Venues

The study findings also indicate that architecture is a unique field of knowledge with a small pool of researchers and a small number of potential academic journals for publishing refereed output. The results presented in Table 4 and Figure 2, based on data extracted from the Web of Science and Scopus databases [43,44], clearly highlight this finding. As presented in Table 4, the number of architecture journals indexed in the Scopus and Web of Science databases is substantially lower than in the other examined fields: there are only about 30–50% as many architecture journals as in any other field. The data show that most architecture journals indexed by the Web of Science are categorized under the Arts and Humanities Citation Index. This index started to receive journal impact factors for the first time in June 2023 and has not received ranking quartiles so far. This reduces the attractiveness of these journals to researchers, as many universities require their affiliates to publish in high-ranked journals indexed in the Web of Science database with an impact factor that places them in the first or second quartile of a specific knowledge domain. In contrast to the Web of Science, Scopus has offered citation-based journal metrics for architecture journals for some time. Nonetheless, the number of journals assigned such metrics is still relatively small.
As presented in Figure 2, the results obtained from SciVal data [45] on the number of citations during the years 2000–2020 align with and support the findings from the inductive data collection. The results in Figure 2 clearly demonstrate a significant disparity between architecture and the engineering disciplines in terms of the number of citations during this period, with architecture showing the lowest score. The total number of citations for the fields in the selected timeframe was 644 k, 10,626 k, 33,989 k, and 46,519 k for architecture, civil engineering, mechanical engineering, and electrical engineering, respectively. This major difference in the number of citations between these fields is reflected in the value of researchers’ h-index. For example, in 2020, the domain of architecture received only about 47 k citations, compared to 539 k citations in civil engineering. The average h-index value per year for these two research areas, also known as the hm-index, was 5.9 and 15.4 for architecture and civil engineering, respectively. The same observation applies to other non-engineering research areas, such as urban studies, whose total number of citations in 2020 was 61 k and whose hm-index value was 9.9 [45].
The above findings can be explained by examining the publication patterns and norms in the architecture and engineering fields [14,15,48]. Researchers in the engineering fields regularly publish their findings in peer-reviewed journals and conference proceedings, where quick dissemination of research findings is crucial for applied innovation and technological improvement. Because research in these fields is data-driven and reproducible, these articles typically receive high citation rates, which raises their authors’ h-index scores. In architecture, however, journal publication is less common and citation patterns are more varied because, in contrast to engineering publications, architectural research is frequently published in specialized or regionally focused journals, books, or design exhibitions, which receive far fewer citations [14,48]. These differences suggest there is a need to further improve and supplement the traditional approach of assessing research quality and performance based on citation counts. The h-index and citation counts are given too much importance, while other important dimensions of research quality are insufficiently recognized.

5.3. The Need for Supplementary Metrics in Architecture

In current academic practice, differences in h-index values can have significant effects on academic assessment, career advancement, and funding prospects. Research metrics such as the h-index are commonly used by higher education institutions and funding organizations to evaluate the impact of researchers, allocate funding and grants, and inform hiring and tenure decisions. However, when h-index values are applied uniformly across disciplines, architecture researchers are often at a disadvantage compared to those in engineering fields because their research impact is undervalued by conventional citation metrics. This could negatively affect innovation attitudes among architectural scholars and encourage them to adopt publication practices that emphasize citation over the core values of architectural practice. Over-reliance on the h-index in the architecture field may inadvertently diminish the value of innovative research practices in the field, such as practice-based, design-centric, design-project, transdisciplinary, or culturally relevant research. This would narrow the focus of architectural research and shift its attention from creative and practical contributions, such as addressing societal, cultural, environmental, and real-life situations, to research outputs that generate citations. This suggests that there is a strong need to supplement the widely used h-index with alternative metrics that could better capture the distinctive contributions of researchers in the architecture field.
In this regard, this study proposed and calculated the hr-index as a possible solution for comparing the performance and productivity of researchers across disparate disciplines that differ in nature and in publishing and citation norms, such as the architecture and engineering disciplines. The results presented in Figure 3 and Table 5 highlight that the proposed hr-index, which ranks researchers’ performance relative to that of their peers in the same or related disciplines, offers a more accurate and fairer assessment than the standard h-index. The results in Table 5 demonstrate some of the significant limitations and insensitivity of the h-index when comparing researchers’ performance across disciplines of a different nature. However, the challenge in implementing the hr-index lies in agreeing on a unified framework to specify the main subject area for each researcher [50]. Due to the interdisciplinary nature of the work of many researchers, it is a challenge to agree upon an appropriate operational definition of research subject areas, though it is not impossible. Wilsdon et al. [11] stated in their well-known Metric Tide report that ‘a key issue in the calculation of normalized citation impact indicators is the way in which the concept of a research field is operationalized’. One way to solve this issue is to let the researcher and/or the research database choose the subject area based on predefined ‘research keywords’. Currently, this is a main area of improvement for research databases such as Scopus, which provides a comprehensive list of the topics contributed to by each researcher in selected recent years.
The proposed hr-index merely serves as an illustration of the need to adopt appropriate alternative metrics that are discipline-specific. Other possible complementary metrics appropriate for the architecture field include:
  • Impact on Design Practice: to measure the impact of architectural research on real-world architectural projects, sustainable building and construction practices, urban design projects, as well as research by design publications.
  • Interdisciplinary Impact: to measure the contributions that connect architecture with social sciences, humanities, environmental studies, and policy making, thus capturing wider scholarly contributions.
  • Long-term and Cultural Value Impact: to measure the impact of architectural scholarly and research work that contributes to cultural preservation, where impact is realized over decades and may not be reflected in high citation counts.

6. Conclusions

The results highlight several limitations and challenges regarding the current use of research metrics in the architecture field compared to the engineering disciplines. Most of these challenges stem from the major differences between these disciplines. Evaluating research output and its quality across fields that differ substantially reveals major limitations and challenges and can convey misleading information, which calls for the use of multiple research metrics. A responsible application of these metrics involves considering both qualitative and quantitative indicators in a multidimensional approach to ensure a fair and comprehensive assessment of research quality. With the rapid expansion of scholarly publications and their growing influence on university rankings, researchers face increasing pressure to produce more papers in areas and journals that are likely to attract higher citation counts. This has made citation metrics, such as the h-index, a central criterion in academic evaluations. While a high number of citations and a resulting high h-index may appear to indicate superior research quality, this is not always the case. Citations should not be regarded as the sole measure of a researcher’s success. Researchers in art, architecture, and design-related fields often find themselves at a disadvantage compared to their peers in other disciplines. Architecture, by nature, is a relatively specialized field with a smaller pool of researchers and journals compared to the broader scientific and engineering disciplines. This limitation reduces its ability to attract citations at the same rate as these other fields, resulting in a lower h-index for architects.
This study explores this issue by comparing citation counts and h-index values between architecture and selected engineering disciplines. The comparison was based on intensive, inductive data collection from the Scopus database, focusing on the top 50 universities in architecture, civil engineering, and mechanical engineering. The findings confirm that architecture generally exhibits lower values for these research metrics than the engineering fields. Specifically, the average h-index score for research-active faculty members at the associate and full professor ranks in architecture was 7.0, compared to 22.8 in civil engineering and 25.6 in mechanical engineering. This trend is also evident in citation counts, where architecture recorded significantly lower numbers. One potential way to address this gap is by fostering interdisciplinary research in architecture, facilitating the integration of methods and tools from multiple disciplines to offer new perspectives and create additional avenues for research dissemination.
This study concluded that a single h-index benchmark cannot be universally applied to researchers across different disciplines, as they have varying research opportunities and expectations, including publication rates, citation counts, and h-index values. Consequently, the study proposes an additional h-index variant, the relative h-index (hr-index), which is discipline-specific. Developing and adopting discipline-specific metrics is very important, particularly for disciplines that differ deeply, in order to obtain a meaningful and valid evaluation of research output and quality. The proposed hr-index is calculated by determining the difference between an individual researcher’s h-index and the average h-index of their peers in the same discipline and academic rank, then dividing the result by the standard deviation in the discipline. This shows the researcher’s deviation from the average performance in that specific discipline. Such a metric can be used as a complement to the standard h-index, as it offers a fairer and more representative evaluation of researchers’ performance and impact within their areas of expertise.
A more thorough and accurate evaluation of research performance and productivity in architecture would also be possible with the use of alternative metrics. Additionally, they would lessen the h-index’s drawbacks by promoting a wider range of architectural research and interdisciplinary co-operation without sacrificing the quality signal provided by citation counts. The study recommends extending data collection and analysis to a wider range of researchers, academic ranks, and disciplines, using different sampling approaches and advanced data processing techniques. There is a need in this regard to develop databases that allow for the regular updating of discipline-specific research metric data. Future studies could also consider additional research databases, such as the Web of Science, in data collection to allow for further comparative analysis of research metrics and broader disciplinary analysis.

Author Contributions

Conceptualization, O.S.A.; methodology, O.S.A.; software, O.S.A.; formal analysis, O.S.A. and J.A.-Q.; investigation, O.S.A. and J.A.-Q.; resources, O.S.A. and J.A.-Q.; writing—original draft preparation, O.S.A. and J.A.-Q.; writing—review and editing, O.S.A. and J.A.-Q. All authors have read and agreed to the published version of the manuscript.

Funding

The Article Processing Charges (APCs) were funded by King Fahd University of Petroleum & Minerals (KFUPM).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data presented in this study are available upon request from the corresponding author.

Acknowledgments

The authors would like to thank Esam Al-Sawi from the Department of Mathematics at KFUPM for his valuable thoughts. The authors also acknowledge King Fahd University of Petroleum & Minerals (KFUPM) for financial support.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Hampshire College. What is Research? 2022. Available online: https://www.hampshire.edu/what-research (accessed on 18 March 2024).
  2. AIA. Architectural Research. 2023. Available online: https://www.aia.org/pages/5626-architectural-research (accessed on 15 March 2024).
  3. Pietrzyk, K. Wicked problems in architectural research: The role of research by design. ARENA J. Archit. Res. 2022, 7, 3. [Google Scholar] [CrossRef]
  4. Ponce de Leon, M. Research, practice, and the making of architecture. Technol. Archit. Des. 2020, 4, 5–8. [Google Scholar] [CrossRef]
  5. Rawat, U.; Karmakar, V. Importance of research in architecture. Int. J. Eng. Res. Technol. 2021, 10, 156–161. [Google Scholar]
  6. Amjad, T.; Rehmat, Y.; Daud, A.; Abbasi, R. Scientific impact of an author and role of self-citations. Scientometrics 2020, 122, 915–932. [Google Scholar] [CrossRef]
  7. Fiorillo, L. Fi-Index: A new method to evaluate authors Hirsch-index reliability. Publ. Res. Q. 2022, 38, 465–474. [Google Scholar] [CrossRef]
  8. Gervits, M.; Orcutt, R. Citation analysis and tenure metrics in art, architecture, and design-related disciplines. J. Art Libr. Soc. North Am. 2016, 35, 218–229. [Google Scholar] [CrossRef]
  9. Quaderi, N. Mapping the Path to Future Changes in the Journal Citation Reports. 2023. Available online: https://clarivate.com/blog/mapping-the-path-to-future-changes-in-the-journal-citation-reports/ (accessed on 1 October 2024).
  10. Samaniego, C.; Lindner, P.; Kazmi, M.A.; Dirr, B.A.; Kong, D.T.; Jeff-Eke, E.; Spitzmueller, C. Higher research productivity = more pay? Gender pay-for-productivity inequity across disciplines. Scientometrics 2023, 128, 1395–1407. [Google Scholar] [CrossRef]
  11. Wilsdon, J.; Allen, L.; Belfiore, E.; Campbell, P.; Curry, S.; Hill, S.; Jones, R.; Kain, R.; Kerridge, S.; Thelwall, M.; et al. The Metric Tide: Report of the Independent Review of the Role of Metrics in Research Assessment and Management; HEFCE: Bristol, UK, 2015. [Google Scholar] [CrossRef]
  12. David, M. Research Quality Assessment and the Metrication of the Social Sciences. Eur. Political Sci. 2008, 7, 52–63. [Google Scholar] [CrossRef]
  13. Koltun, V.; Hafner, D. The h-index is no longer an effective correlate of scientific reputation. PLoS ONE 2021, 16, e0253397. [Google Scholar] [CrossRef] [PubMed] [PubMed Central]
  14. Ruamsanitwong, N.; Campbell, J.W.P. Architectural research in university schools of architecture: Cambridge and the Bartlett, 1960–1969. Archit. Res. Q. 2021, 25, 278–287. [Google Scholar] [CrossRef]
  15. Luce, M.; Pfarr-Harfst, M.; Reeh, J.; Schröder, J.; Tessmann, O. Evaluating research excellence in architecture: A view from German technical universities towards a European perspective. ARENA J. Archit. Res. 2023, 8, 9. [Google Scholar] [CrossRef]
  16. UoB (University of Birmingham) Intranet. Responsible Metrics. 2023. Available online: https://intranet.birmingham.ac.uk/as/libraryservices/library/research/influential-researcher/responsible-metrics.aspx (accessed on 1 October 2024).
  17. Collom, C.D.; Oermann, M.H.; Sabol, V.K.; Heintz, P.A. An Assessment of predatory publication use in reviews. Clin. Nurse Spec. 2020, 34, 152–156. [Google Scholar] [CrossRef] [PubMed]
  18. Wouters, P.; Thelwall, M.; Kousha, K.; Waltman, L.; de Rijcke, S.; Rushforth, A.; Franssen, T. The Metric Tide: Literature Review (Supplementary Report I to the Independent Review of the Role of Metrics in Research Assessment and Management); HEFCE: Bristol, UK, 2015. [Google Scholar] [CrossRef]
  19. AI2. About Semantic Scholar. 2023. Available online: https://www.semanticscholar.org/about (accessed on 24 February 2023).
  20. Hirsch, J.E. An index to quantify an individual's scientific research output. Proc. Natl. Acad. Sci. USA 2005, 102, 16569–16572. [Google Scholar] [CrossRef] [PubMed]
  21. Malesios, C.C.; Psarakis, S. Comparison of the h-index for different fields of research using bootstrap methodology. Qual. Quant. 2014, 48, 521–545. [Google Scholar] [CrossRef]
  22. Informa UK Limited. Understanding Research Metrics. 2023. Available online: https://editorresources.taylorandfrancis.com/understanding-research-metrics/# (accessed on 26 February 2023).
  23. Schreiber, W. Scientific Impact and the h-Index. 2019. Available online: https://www.aacc.org/cln/articles/2019/september/scientific-impact-and-the-h-index (accessed on 1 October 2024).
  24. Harzing, A.-W. Metrics: hI,norm and hIa. 2022. Available online: https://harzing.com/resources/publish-or-perish/tutorial/metrics/hi-norm-and-hia (accessed on 1 October 2024).
  25. Flatt, J.W.; Blasimme, A.; Vayena, E. Improving the Measurement of Scientific Success by Reporting a Self-Citation Index. Publications 2017, 5, 20. [Google Scholar] [CrossRef]
  26. Araujo, A.C.; Vanin, A.A.; Nascimento, D.P.; Gonzalez, G.; Costa, L. What are the variables associated with Altmetric scores? Syst. Rev. 2021, 10, 193. [Google Scholar] [CrossRef]
  27. Luc, J.G.Y.; Archer, M.A.; Arora, R.C.; Bender, E.M.; Blitz, A.; Cooke, D.T.; Hlci, T.N.; Kidane, B.; Ouzounian, M.; Varghese, T.K.; et al. Does tweeting improve citations? One-year results from the TSSMN prospective randomized trial. Ann. Thorac. Surg. 2021, 111, 296–300. [Google Scholar] [CrossRef]
28. ResearchGate. Research Interest Score. 2024. Available online: https://help.researchgate.net/hc/en-us/articles/14293473316753-Research-Interest-Score (accessed on 30 October 2024).
  29. Jarvey, P.; Usher, A.; McElroy, L. Making Research Count: Analyzing Canadian Academic Publishing Cultures; Higher Education Strategy Associates: Toronto, ON, Canada, 2012. [Google Scholar]
  30. Harzing, A.-W.; Alakangas, S.; Adams, D. hIa: An individual annual h-index to accommodate disciplinary and career length differences. Scientometrics 2014, 99, 811–821. [Google Scholar] [CrossRef]
  31. Harzing, A.-W.; Alakangas, S. Google Scholar, Scopus and the Web of Science: A longitudinal and crossdisciplinary comparison. Scientometrics 2016, 106, 787–804. [Google Scholar] [CrossRef]
32. Raheel, M.; Ayaz, S.; Afzal, M.T. Evaluation of h-index, its variants and extensions based on publication age and citation intensity in civil engineering. Scientometrics 2018, 114, 1107–1127. [Google Scholar] [CrossRef]
  33. Sheeja, N.K.; Mathew, S. ResearchGate profiles of naval architecture scientists in India: An altmetric analysis. Libr. Philos. Pract. 2019, 2305. [Google Scholar]
  34. Ameer, M.; Afzal, M.T. Evaluation of h-index and its qualitative and quantitative variants in Neuroscience. Scientometrics 2019, 121, 653–673. [Google Scholar] [CrossRef]
  35. Kamrani, P.; Dorsch, I.; Stock, W.G. Do researchers know what the h-index is? And how do they estimate its importance? Scientometrics 2021, 126, 5489–5508. [Google Scholar] [CrossRef]
  36. Park, K.; Sanchez, T.W.; Zuban, J. Evaluating scholarly productivity and impacts of landscape architecture faculty using citation analysis. Landsc. J. 2022, 41, 1–14. [Google Scholar] [CrossRef]
  37. Zagonari, F.; Foschi, P. Coping with the Inequity and Inefficiency of the H-Index: A Cross-Disciplinary Empirical Analysis. Publications 2024, 12, 12. [Google Scholar] [CrossRef]
  38. Zagonari, F. Scientific Production and Productivity for Characterizing an Author’s Publication History: Simple and Nested Gini’s and Hirsch’s Indexes Combined. Publications 2019, 7, 32. [Google Scholar] [CrossRef]
  39. Sharma, K.; Uddin, Z. Measuring the continuous research impact of a researcher: The Kz index. arXiv 2023. [Google Scholar] [CrossRef]
  40. QS Quacquarelli Symonds Limited. QS World University Rankings by Subject 2022. 2023. Available online: https://www.topuniversities.com/subject-rankings/2022 (accessed on 16 February 2023).
  41. Craig, O. QS World University Rankings by Subject: Methodology. 2022. Available online: https://www.topuniversities.com/subject-rankings/methodology (accessed on 5 February 2023).
  42. Jacsó, P. Metadata mega mess in Google Scholar. Online Inf. Rev. 2010, 34, 175–191. [Google Scholar] [CrossRef]
  43. Scimago Lab. Scimago Journal & Country Rank. 2022. Available online: https://www.scimagojr.com/journalrank.php (accessed on 16 February 2023).
  44. Clarivate Analytics. Journal Citation Reports. 2023. Available online: https://jcr.clarivate.com/jcr/home (accessed on 17 February 2023).
  45. SciVal. Benchmarking. 2023. Available online: https://www.scival.com/benchmarking/analyse (accessed on 26 February 2023).
  46. Cañas-Guerrero, I.; Mazarrón, F.R.; Pou-Merina, A.; Calleja-Perucho, C.; Suárez-Tejero, M.F. Analysis of research activity in the field “Engineering, Civil” through bibliometric methods. Eng. Struct. 2013, 56, 2273–2286. [Google Scholar] [CrossRef]
  47. Islam, H.; El-adaway, I.H.; Ali, G.; Assaad, R.; Elsayegh, A.; Abotaleb, I.S. Analytic Overview of Citation Metrics in the Civil Engineering Domain with Focus on Construction Engineering and Management. J. Constr. Eng. Manag. 2019, 145, 4019060. [Google Scholar] [CrossRef]
  48. Van Der Hoeven, F. Mind the evaluation gap: Reviewing the assessment of architectural research in the Netherlands. Archit. Res. Q. 2011, 15, 177–187. [Google Scholar] [CrossRef]
  49. Sanchez, T.W. Faculty Performance Evaluation Using Citation Analysis: An Update. J. Plan. Educ. Res. 2017, 37, 83–94. [Google Scholar] [CrossRef]
  50. Scopus. What is SciVal's Topic Prominence? 2023. Available online: https://service.elsevier.com/app/answers/detail/a_id/27947/supporthub/scopus/kw/topics/ (accessed on 1 October 2024).
Figure 1. The average h-index of faculty members in the examined sample in architecture, civil engineering, and mechanical engineering disciplines.
Figure 2. The annual citation count, excluding self-citation, during the years 2000–2020 for the three examined fields as per Scopus data.
Figure 3. Researchers’ ranking based on h-index and hr-index calculation methods for the examined sample.
Table 1. Descriptive statistics of the examined sample.

| Discipline | Academic Rank | N | h-Index Min. | h-Index Max. | h-Index Mean | h-Index Std. Deviation |
| Architecture | Associate Prof. | 368 | 1 | 44 | 5.03 | 5.33 |
| Architecture | Full Prof. | 531 | 1 | 72 | 8.99 | 11.04 |
| Civil Engineering | Associate Prof. | 545 | 1 | 87 | 17.51 | 9.96 |
| Civil Engineering | Full Prof. | 1232 | 1 | 100 | 28.00 | 16.67 |
| Mechanical Engineering | Associate Prof. | 558 | 1 | 59 | 19.00 | 10.20 |
| Mechanical Engineering | Full Prof. | 1496 | 1 | 153 | 32.10 | 19.90 |
Table 2. The average number of citations of faculty members in the examined sample in architecture, civil engineering, and mechanical engineering disciplines.

| Discipline | Academic Rank | Total Citations (Excl. Self-Citations) | Recent Citations (2018–2022) | % of Recent Citations |
| Architecture | Associate Prof. | 234 | 178 | 76 |
| Architecture | Full Prof. | 794 | 473 | 60 |
| Architecture | Both Ranks | 565 | 351 | 62 |
| Civil Engineering | Associate Prof. | 1482 | 976 | 66 |
| Civil Engineering | Full Prof. | 4030 | 2099 | 52 |
| Civil Engineering | Both Ranks | 3249 | 1755 | 54 |
| Mechanical Engineering | Associate Prof. | 1960 | 1130 | 58 |
| Mechanical Engineering | Full Prof. | 5958 | 2650 | 44 |
| Mechanical Engineering | Both Ranks | 4871 | 2236 | 46 |
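As a consistency check, the "Both Ranks" rows in Table 2 are closely reproduced by sample-size-weighted averages of the per-rank values, using the group sizes (N) reported in Table 1. The short Python sketch below illustrates this relationship; it is an illustration, not the authors' analysis code, and deviations of about ±1 citation arise from rounding of the per-rank averages.

```python
# Sketch: recompute the "Both Ranks" citation averages in Table 2 as
# sample-size-weighted averages of the per-rank values, using the N values
# from Table 1. Illustrative only, not the paper's original analysis code.
counts = {  # discipline: (N associate, N full), from Table 1
    "Architecture": (368, 531),
    "Civil Engineering": (545, 1232),
    "Mechanical Engineering": (558, 1496),
}
citations = {  # discipline: (associate avg, full avg) total citations excl. self-citations
    "Architecture": (234, 794),
    "Civil Engineering": (1482, 4030),
    "Mechanical Engineering": (1960, 5958),
}

for discipline, (n_assoc, n_full) in counts.items():
    c_assoc, c_full = citations[discipline]
    both = (n_assoc * c_assoc + n_full * c_full) / (n_assoc + n_full)
    print(f"{discipline}: weighted average = {both:.0f}")
# Output is close to the reported 565, 3249, and 4871; small differences
# come from rounding of the per-rank averages. The same check applies to
# the recent-citation column.
```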
Table 3. Multiple comparisons of the h-index averages in the examined disciplines using the Scheffe test: (a) associate professor rank and (b) full professor rank.

(a) Associate Professor Rank
| (I) Discipline | (J) Discipline | Mean Difference (I−J) | Sig. | 95% CI Lower Bound | 95% CI Upper Bound |
| Architecture | Civil Eng. | −12.485 * | 0.00 | −15.95 | −9.02 |
| Architecture | Mech. Eng. | −14.001 * | 0.00 | −17.45 | −10.55 |
| Civil Eng. | Architecture | 12.485 * | 0.00 | 9.02 | 15.95 |
| Civil Eng. | Mech. Eng. | −1.516 | 0.752 | −4.61 | 1.58 |
| Mech. Eng. | Architecture | 14.001 * | 0.00 | 10.55 | 17.45 |
| Mech. Eng. | Civil Eng. | 1.516 | 0.752 | −1.58 | 4.61 |

(b) Full Professor Rank
| (I) Discipline | (J) Discipline | Mean Difference (I−J) | Sig. | 95% CI Lower Bound | 95% CI Upper Bound |
| Architecture | Civil Eng. | −19.014 * | 0.00 | −21.68 | −16.35 |
| Architecture | Mech. Eng. | −23.102 * | 0.00 | −25.69 | −20.51 |
| Civil Eng. | Architecture | 19.014 * | 0.00 | 16.35 | 21.68 |
| Civil Eng. | Mech. Eng. | −4.088 * | 0.00 | −6.06 | −2.11 |
| Mech. Eng. | Architecture | 23.102 * | 0.00 | 20.51 | 25.69 |
| Mech. Eng. | Civil Eng. | 4.088 * | 0.00 | 2.11 | 6.06 |

* The mean difference is significant at the 0.05 level.
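For readers who wish to run comparable post-hoc comparisons, the sketch below shows one way a Scheffe pairwise comparison can be computed from group summary statistics (n, mean, and standard deviation, here taken from Table 1). This is a hedged illustration of the general Scheffe procedure, not the authors' analysis code, and its output depends on the exact sample analyzed, so it should not be expected to reproduce Table 3 exactly.

```python
# Sketch: Scheffe pairwise comparison after a one-way ANOVA, computed from
# group summary statistics. Illustrative only; figures in Table 3 come from
# the authors' underlying Scopus sample.
import math
from scipy.stats import f

def scheffe_pair(mean_i, mean_j, n_i, n_j, mse, k, n_total, alpha=0.05):
    """Return (mean difference, CI lower, CI upper, p-value) for groups i vs. j.

    mse: within-group mean square from the ANOVA; k: number of groups.
    """
    diff = mean_i - mean_j
    se2 = mse * (1.0 / n_i + 1.0 / n_j)
    df1, df2 = k - 1, n_total - k
    margin = math.sqrt(df1 * f.ppf(1 - alpha, df1, df2) * se2)  # Scheffe half-width
    f_obs = diff ** 2 / (df1 * se2)
    p_value = 1 - f.cdf(f_obs, df1, df2)
    return diff, diff - margin, diff + margin, p_value

# Pooled within-group mean square from Table 1 (associate professor rank):
groups = [(368, 5.03, 5.33), (545, 17.51, 9.96), (558, 19.00, 10.20)]  # (n, mean, SD)
n_total = sum(n for n, _, _ in groups)
mse = sum((n - 1) * sd ** 2 for n, _, sd in groups) / (n_total - len(groups))

print(scheffe_pair(5.03, 17.51, 368, 545, mse, k=3, n_total=n_total))
```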
Table 4. Number of journals classified under Architecture and a sample of other engineering disciplines in both the Web of Science and Scopus databases.

| Category | No. of Journals (Web of Science) | No. of Journals (Scopus) |
| Architecture | 93 | 143 |
| Civil and Structural Engineering | 177 | 305 |
| Electrical and Electronic Engineering | 346 | 635 |
| Mechanical Engineering | 177 | 555 |
| Urban Studies and Planning | 129 | 225 |
Table 5. An example of researchers’ ranking based on h-index and hr-index calculation methods for the associate professor rank.

| Discipline | havg-Index | Std. Deviation | h-Index: R1 | h-Index: R2 | h-Index: R3 | hr-Index: R1 | hr-Index: R2 | hr-Index: R3 |
| Discipline 1 (Architecture) | 5.03 | 5.33 | 10 | 15 | 20 | 0.93 | 1.87 | 2.81 |
| Discipline 2 (Civil Eng.) | 17.5 | 9.96 | 10 | 15 | 20 | −0.75 | −0.25 | 0.25 |
| Discipline 3 (Mech. Eng.) | 19.0 | 10.20 | 10 | 15 | 20 | −0.88 | −0.39 | 0.10 |

R1, R2, R3 = Researcher 1, Researcher 2, and Researcher 3, each assumed to hold the same h-index (10, 15, and 20, respectively) in each discipline.
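The hr-index values in Table 5 are consistent with a simple discipline-normalized score, hr = (h − havg) / σ, i.e., the deviation of a researcher’s h-index from the discipline average expressed in standard-deviation units. The Python sketch below (an illustration, not the authors’ code) reproduces the Table 5 values from this relation.

```python
# Sketch: hr-index as a discipline-normalized z-score of the h-index,
# reproducing the values shown in Table 5. Illustrative only.
disciplines = {  # discipline: (mean h-index, standard deviation), associate professor rank
    "Architecture": (5.03, 5.33),
    "Civil Eng.": (17.5, 9.96),
    "Mech. Eng.": (19.0, 10.20),
}

def hr_index(h, discipline):
    """Deviation of a researcher's h-index from the discipline average, in SD units."""
    mean, sd = disciplines[discipline]
    return (h - mean) / sd

for d in disciplines:
    print(d, [round(hr_index(h, d), 2) for h in (10, 15, 20)])
# Architecture [0.93, 1.87, 2.81]
# Civil Eng. [-0.75, -0.25, 0.25]
# Mech. Eng. [-0.88, -0.39, 0.1]
```

Under this reading, a researcher with h = 10 sits well above the field average in architecture (hr ≈ 0.93) but below it in civil and mechanical engineering, which is the re-ranking effect illustrated in Figure 3.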
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
