1. Introduction
Research is a systematic process of inquiry with various aims, types, and methods, tailored to each specific knowledge domain. It involves data collection and analysis through appropriate research methods and tools [1]. For university faculty members, research is a key responsibility alongside teaching and administrative duties. Architecture, as a discipline, integrates diverse aspects—art, science, psychology, and philosophy—within a single framework. It is deeply influenced by ethical, cultural, socio-economic, and environmental factors that shape the built environment and impact quality of life. Architectural research aims to advance the profession by improving building design, functionality, and user interaction, especially in urban contexts [2]. It employs holistic approaches connected to humanities, social sciences, technical sciences, and design-based knowledge creation. Common methods include case studies, comparative analysis, experimental solutions and simulations, theoretical hypotheses, and interpretive reflection [3,4].
Architects often work collaboratively, and institutions encourage them to conduct research in interdisciplinary environments that engage various relevant fields, fostering innovative findings and conclusions. However, researchers in architecture face several challenges. One main challenge is the institutional understanding of architecture as a discipline. Architecture is a multidisciplinary subject whose intellectual practice defies simple classification. Some technical universities list architecture as a formal applied science profession within the engineering sciences, while others place it within art schools. Although this diversity can be a strength, showcasing architecture’s multidisciplinary nature, it also presents a core challenge in defining its place within the research landscape.
Researchers in architecture also face challenges related to the gap between research and practice. Architectural research often remains confined to the ideas discussed in architectural schools and published in conferences and journals, which creates a disconnect between academic knowledge and real-world applications. Additional challenges include a limited awareness of the importance of architectural research compared to fields like science and technology, along with insufficient funding, especially for topics lacking direct industry ties [5]. The American Institute of Architects [2] has advocated for empowering architects to engage more actively in research through:
Increasing funding for research in architecture, including governmental grants, industry support, and collaboration between academia and industry for this purpose;
Prioritizing research as an essential competency for architects since commitment to research must become an integral part of architecture firms’ culture;
Disseminating research to share its findings and improve research literacy.
The use of research metrics to evaluate research quality has surged over recent decades, driven by the digitization of research. These metrics offer several benefits, such as providing concrete, quantifiable measures of research output and documenting the performance history of researchers and academic institutions. However, despite their significance, research metrics have fostered a ‘culture of counting’, which could be used to manipulate the scientific impact of research. This has led to unethical practices, including false or inadequate citations, excessive self-citations, and citations of irrelevant work [6,7]. One major challenge for architecture researchers is the unfair comparison of their metrics to those of peers in other disciplines. It is often argued that research in the arts, humanities, and social sciences receives less visibility than that in science and technology, creating this unfair comparison [8]. This includes architecture, which is less journal-focused and, therefore, disadvantaged in research-citation potential. For instance, the well-known Web of Science database includes core journal collections across various disciplines, with many architecture journals classified under the Arts and Humanities Citation Index. This index only began receiving journal impact factors in June 2023 and has yet to be assigned ranking quartiles, which limits the appeal of these journals to researchers and reduces their citation potential [9]. This differs from the Social Sciences Citation Index and the Science Citation Index Expanded, where journals have had impact factors for decades. To address this gap, this study compares citation counts and h-index values in architecture with those in selected engineering fields, examining the potential variation between these disciplines in this regard. This is intended to examine the practicality of using a universal h-index benchmark across different research areas, as they vary significantly in terms of research opportunities, challenges, and expectations. This study suggests the use of an additional relative h-index, tailored to accommodate the unique publishing and citation patterns prevalent in the field of architecture. This metric could serve as a standardized tool for evaluating scientific contributions within the field, enhancing the fairness and relevance of these evaluations.
2. Literature Review
Bibliometrics, the quantitative evaluation of research outputs, encompasses a range of metrics that aim to assess the impact and significance of research output at multiple levels, including individual articles, journals, academic institutions, and researchers. These metrics serve several purposes, such as ranking journals and academic institutions, supporting faculty promotions, and informing decisions on research grants and funding [10]. Citation counting forms the basis of traditional research metrics, with commonly used examples being the Journal Impact Factor for journals and the h-index for individual researchers [11]. The development of electronic indexing technologies in recent decades has driven the rise of numerous research databases that offer a range of metrics to assess research quality and performance, providing unprecedented means of research quality oversight and facilitating global collaboration and knowledge exchange. However, ongoing debates in the literature highlight a wide range of challenges in using research metrics to accurately assess individual research quality and scholarly standing [12,13,14,15]. Different perspectives in this regard emerge from the various research cultures within the disciplines, reflected in, for example, average publication rates and citation counts. This underscores the importance of comparing ‘like with like’ to ensure fair and meaningful evaluations [14,15,16].
Applying research metrics responsibly in evaluating research quality is essential. Since each metric has limitations, using a combination of metrics provides a more comprehensive view of research performance in any given context. A fair, multidimensional assessment of research quality should include both qualitative and quantitative measures [8], leading to more informed evaluations and reducing the risk of low-quality or predatory publishing practices [17]. For example, alongside citation counts and other numerical metrics, it is important to consider factors such as peer-review feedback, research productivity, annual publication output, discipline-specific publication rates, research focus, and research leadership. An important metric in research evaluation is citation count, which measures the number of times other studies reference an article. This is often seen as an indicator of an article’s significance based on the attention it has attracted. Citation analysis can be conducted at various levels, including individual researchers, disciplines, or institutions. Several research databases provide citation data and analytical tools. Free databases like Google Scholar allow authors to create profiles, add publications, and track citations over time, offering h-index and citation data updates. Among subscription-based databases, Web of Science (formerly the Institute for Scientific Information, or ISI) and Scopus are the most widely used. Notably, Scopus generally indexes a broader set of publications than Web of Science, making the latter ‘a near-perfect subset of Scopus’ [18].
Citation counts from both Web of Science and Scopus play a key role in calculating various research metrics. The Web of Science publishes the Journal Impact Factor annually in its Journal Citation Reports (JCR), evaluating the impact of journals within its database. Similarly, Scopus provides the SCImago Journal Rank (SJR) and CiteScore, both of which assess the impact of journals it indexes. For researcher impact, Web of Science offers a citation report for each author that includes the h-index, total citations (with and without self-citations), citation trends over time, and average citations per publication. Scopus similarly provides citation data by author, listing individual papers and tracking citations per year. It also allows exclusion of self-citations, updating the author’s h-index accordingly. Additionally, the SciVal analytics tool, which uses Scopus data, offers advanced evaluations of research performance across countries, institutions, researchers, and topics, covering aspects such as collaboration, citations, patents, and awards. Recently, some citation platforms have incorporated artificial intelligence to enhance citation analysis. For instance, Semantic Scholar evaluates citation impact and categorizes citations by specific sections of cited articles [19].
The h-index, proposed by J. E. Hirsch in 2005, is one of the most widely used metrics for assessing research quality in academic settings. It is designed to measure research productivity and influence based on citation counts. Hirsch defined the h-index as ‘the number of papers with citation number ≥ h’ [20]. Thus, an h-index of 10 indicates that an author has 10 papers that have each received at least 10 citations. Without any citations, a researcher would have an h-index of zero. This index allows for comparing researchers’ performance and impact by reflecting both quality, through citation counts per paper, and productivity, through the number of published papers. However, it typically takes time for researchers to achieve a high h-index, so early-career researchers usually have a lower index. This effect is particularly evident in fields like art and architecture, where citations to recent studies occur less frequently than in scientific disciplines [8]. The h-index is now widely recognized and supported by several research databases, including Web of Science, Scopus, and Google Scholar. Google Scholar, however, often reports a higher h-index because it counts citations from all online sources, whereas the subscription-based Scopus and Web of Science count only citations from the sources they index.
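To make Hirsch’s definition concrete, the following Python sketch computes an h-index from a list of per-paper citation counts; the citation list is hypothetical and used only for illustration.

```python
def h_index(citations):
    """Return the largest h such that h papers have at least h citations each."""
    # Sort citation counts from highest to lowest.
    sorted_counts = sorted(citations, reverse=True)
    h = 0
    for rank, count in enumerate(sorted_counts, start=1):
        # The paper at position `rank` must have at least `rank` citations.
        if count >= rank:
            h = rank
        else:
            break
    return h

# Hypothetical example: seven papers with the citation counts below.
paper_citations = [25, 12, 10, 10, 6, 3, 0]
print(h_index(paper_citations))  # -> 5 (five papers have at least 5 citations each)
```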
Despite its advantages, the h-index has some limitations and shortcomings. For example:
Some high-quality research works may not attract the expected number of citations for various reasons, such as the limited visibility of the publication venue. As a result, they do not contribute to improving the researcher’s h-index.
Comparing the research performance of researchers from different disciplines using citation counts and the resulting h-index is difficult. Research subject areas vary in their citation potential, which is reflected in their average h-index [21]. In fact, research in areas that attract higher citation numbers, such as cell biology, is not better than research in areas that typically attract lower citation numbers, such as history [22]. Unfortunately, no precise guidelines exist in this regard, although some general recommendations suggest typical h-index values of 2 to 5 for assistant professors, 6 to 10 for associate professors, and 12 to 24 for full professors [23].
The h-index does not consider researchers’ seniority, making comparisons of researchers’ impact at different stages of their research careers difficult. To overcome this issue, Hirsch [21] suggested an additional index called the m value, which is the h-index divided by the number of years since the researcher’s first publication. Some databases also provide a five-year h-index, allowing for time-bound assessments. This is particularly useful for tracking a researcher’s impact and productivity on an annual basis throughout their career.
The h-index also does not account for differences in citation potential between single-authored and co-authored papers. To address this, the hI,norm metric was proposed, which normalizes citation counts by dividing the number of citations for each paper by its number of authors and then calculating a single-author-equivalent h-index from these adjusted counts. Additionally, dividing hI,norm by the researcher’s academic age yields another h-index variant, hI,annual, which reflects annualized impact [24] (see the sketch following this list).
The use of self-citations could skew h-index values. Although self-citation is a justified and useful practice in many cases, its misuse could inflate the h-index value. This is why some research databases offer the option to exclude self-citations when calculating the h-index [25].
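As a rough illustration of the variants mentioned in the list above, the sketch below computes the m value, hI,norm, and hI,annual for one researcher. The per-paper citation counts, author counts, and career length are hypothetical, and the hI,norm formulation simply follows the citation-normalization description given above.

```python
def h_index(citations):
    """Largest h such that h papers have at least h citations each."""
    h = 0
    for rank, count in enumerate(sorted(citations, reverse=True), start=1):
        if count >= rank:
            h = rank
        else:
            break
    return h

# Hypothetical researcher: (citations, number of authors) per paper,
# with a first publication 8 years ago.
papers = [(40, 4), (22, 2), (15, 3), (9, 1), (4, 5)]
academic_age_years = 8

h = h_index([c for c, _ in papers])
m_value = h / academic_age_years                   # Hirsch's m value
hi_norm = h_index([c / n for c, n in papers])      # author-normalized h-index (hI,norm)
hi_annual = hi_norm / academic_age_years           # annualized variant (hI,annual)

print(h, round(m_value, 2), hi_norm, round(hi_annual, 2))
```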
Alternative metrics, or altmetrics, extend conventional citation counting. The term refers to a range of measures used to assess the impact and reach of scholarly research beyond traditional citation counts. Unlike conventional metrics, which focus on citations in academic journals, altmetrics track attention across various platforms and media, such as social media, news outlets, blogs, and multimedia, among others. Altmetrics help researchers and institutions understand how quickly their scholarly output is being shared and discussed and where it is generating interest outside academic circles. This type of metric can be particularly valuable for identifying the societal or practical impact of research, which is often not captured by citation counts alone. Some altmetric services, such as Plum Analytics, ResearchGate, and Academia.edu, provide researchers and academic institutions with digital tools for communicating research via social media and other networking platforms, reporting metrics such as the number of views and downloads, social bookmarks, comments, and ratings [26,27]. These alternative metrics reflect the broader social, economic, and cultural influence of research, highlighting its potential impact beyond academia. Additionally, by increasing a work’s visibility, altmetrics can help improve citation counts by attracting attention to the research. An example is the Research Interest Score (RI Score) used by ResearchGate, which combines reads and recommendations on ResearchGate with citations (excluding self-citations). This means that citations are not the only way to estimate a researcher’s impact, as it may take a while before a work starts to receive citations [28]. However, this metric is limited to ResearchGate members and does not consider the differences in citation counts and patterns that exist between disciplines.
Several studies have examined the potential for variation in h-index across disciplines and its implications for comparing researchers’ performance [29,30,31,32,33,34,35,36,37,38,39]. For instance, Harzing et al. [30] conducted a comparative study of 146 senior academics across five broad fields: humanities, social sciences, engineering, sciences, and life sciences. This study evaluated research metrics such as the number of publications, citations, and h-index, using data from Google Scholar, Scopus, and the Web of Science collected at eight intervals between 2013 and 2015. The findings suggested that the traditional h-index should be adjusted to ensure fair, cross-disciplinary comparisons. The authors proposed hI,annual, a modified h-index that accounts for co-authorship patterns and researchers’ academic age. Raheel et al. [32] also evaluated the h-index and its variants within civil engineering, using multiple databases to identify the most effective metrics for author ranking. The study observed weak correlations among indices, leading to variations in researcher rankings. Additionally, Sheeja and Mathew [33] surveyed researchers in naval architecture affiliated with six higher education institutions in India. They collected altmetric data from ResearchGate profiles and scientometric data from Scopus, finding that the two sets of indicators correlated well, with most researchers achieving citation counts between 1 and 50 and an h-index between 1 and 5.
Park et al. [36] conducted a citation analysis of landscape architecture faculty in North America using Google Scholar data. Results indicated that citation counts correlated with faculty members’ academic rank, degree type, and academic age since their first publication. Notably, the study found that 15% of tenure-track faculty in landscape architecture had no citation records. Zagonari and Foschi [37] discussed the issue of h-index inequity, highlighting factors such as co-author count and the tendency for senior authors to receive more citations. Their study, which surveyed 10,000 Scopus authors from 2006 to 2015, proposed adjustments to h-index calculations to address these challenges and enable fairer cross-disciplinary comparisons. Zagonari [38] further argued for incorporating each researcher’s publication history and collaboration network into h-index calculations. Meanwhile, Sharma and Uddin [39] proposed the Kz index, which accounts for both the impact and age of publications to better reflect researchers’ sustained contributions. They suggested that this index provides a more comprehensive evaluation of research impact. However, alongside considerations of researchers’ seniority, it is also important to account for varying expectations of research impact across different fields.
Thus, this study aims to address the gap observed in the literature by focusing on architecture as a discipline and determining a discipline-specific average h-index across various levels of researcher seniority. Using inductive data collection, this study analyzed the scholarly output of researchers affiliated with the top 50 universities globally, as identified by the QS ranking. It calculated an average h-index value, havg, for each academic rank within the disciplines of architecture, civil engineering, and mechanical engineering. The average h-index formed the foundation for a new research metric, the relative h-index (hr), which measures the deviation of an individual’s h-index from the average within their field. By calculating this metric either for the discipline as a whole or for specific academic ranks, the study introduces a more nuanced approach that accounts for researcher seniority and reflects the differing h-index expectations across various fields of knowledge.
3. Materials and Methods
This study aims to compare citation counts and h-index values in architecture with those in various engineering disciplines to highlight potential differences and enable a fairer comparison of researchers’ impact across these fields. The comparison was based on inductive data collection from the Scopus and SciVal databases, covering the period from January to April 2023. Data were gathered from the top 50 universities according to the QS Rankings, which includes global, regional, and subject-specific rankings. In 2022, the subject-specific ranking encompassed 51 disciplines categorized under five broad academic fields [40]:
Arts and Humanities, including 11 disciplines.
Engineering and Technology, including 7 disciplines.
Life Sciences and Medicine, including 9 disciplines.
Natural Sciences, including 9 disciplines.
Social Sciences and Management, including 15 disciplines.
The Arts and Humanities category includes architecture under the title ‘Architecture and Built Environment’. Subject-specific university rankings are based on several indicators, including research impact, which is assessed by citations per paper and the h-index of faculty members from the Scopus database [41]. Notably, these indicators are represented as percentages rather than absolute values, complicating direct comparisons of publication impact across academic disciplines. The SciVal database, which also uses Scopus data, provides research metrics by discipline following the All Science Journal Classification (ASJC) system. These metrics include the annual and cumulative citation counts in each discipline, as well as the annual average h-index for researchers. However, the annual average h-index is presented as a single value for all researchers, with no clear consideration of variation in the number of years they were research-active.
To obtain more accurate data, this study adopted an inductive data-collection approach to help architecture researchers assess whether their citation counts are lower than those of peers within their own and other selected fields. For this purpose, we focused on the top 50 universities in the QS University Rankings. This increases the likelihood that any observed differences in research metrics among the examined disciplines reflect discipline-specific expectations rather than the academic performance of the surveyed institutions or researchers. Three domains were considered: Architecture and Built Environment, from the Arts and Humanities category, and Civil and Structural Engineering and Mechanical Engineering, from the Engineering and Technology category. A total of 150 departments were surveyed, and the Scopus profiles of 5843 faculty members were analyzed, with 1405 in architecture, 2151 in civil engineering, and 2287 in mechanical engineering. Data collection prioritized faculty members at the associate and full professor ranks, as these ranks typically reflect a level of research maturity and productivity that aligns well with the study objective. Extending data collection to other academic ranks is recommended for further investigation.
Scopus, a widely recognized database for research metrics, includes only citations from sources it has indexed, which helps ensure quality by excluding citations from lower-quality sources of the kind counted by Google Scholar [42]. It is also the database used in the QS university rankings, which this study used to select the surveyed universities. Regarding the targeted research metrics, the Scopus database provides the number of citations and the h-index for the surveyed faculty members. Two values were recorded for citations: the total number of citations and the citations obtained between 2018 and 2022. Self-citations were excluded in all cases. During data processing, only faculty members with Scopus profiles and a minimum h-index of 1 were included. This resulted in 899 profiles in architecture, 1777 in civil engineering, and 2054 in mechanical engineering (see Table 1). Notably, in the field of architecture, 36% of surveyed faculty members lacked a Scopus profile, indicating a substantial drop in profile availability for this discipline.
This study used Excel and the Statistical Package for the Social Sciences (SPSS) software for data processing and analysis. As demonstrated in the literature review, traditional metrics such as the total numbers of publications and citations, in addition to the h-index, provide an overall estimation of researchers’ impact but do not reflect their relative impact within their domain of knowledge compared to peers at different seniority levels. Thus, this study calculated an average h-index value, havg, for each academic rank within each discipline, allowing us to assess individual researchers’ deviations from this calculated mean. Based on this approach, we proposed the relative h-index (hr-index), a research metric calculated from the average h-index and standard deviation within the discipline. This concept is expressed through the Standard Score formula in Equation (1):

hr-index = (h-index − havg-index) / σ    (1)

where hr-index is the relative h-index value of a sampling unit, h-index is the individual’s value, havg-index is the average h-index value of the sample, and σ is the standard deviation of the sample. A positive hr-index indicates a value above the mean of the examined group, and a negative value indicates the opposite.
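A minimal Python sketch of Equation (1) is given below. The peer-group h-index values are hypothetical, and the population standard deviation is used; the choice between population and sample standard deviation is an assumption, as the text specifies only ‘the standard deviation of the sample’.

```python
from statistics import mean, pstdev

def relative_h_index(h, peer_h_values):
    """Equation (1): deviation of an individual's h-index from the peer average,
    expressed in standard deviations (a standard score)."""
    h_avg = mean(peer_h_values)
    sigma = pstdev(peer_h_values)  # population SD assumed here
    return (h - h_avg) / sigma

# Hypothetical peer group: h-index values of one academic rank in one discipline.
peers = [3, 5, 6, 7, 7, 8, 9, 11, 14, 20]
print(round(relative_h_index(10, peers), 2))  # above the peer average -> positive
print(round(relative_h_index(4, peers), 2))   # below the peer average -> negative
```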
4. Results
Figure 1 and Table 2 summarize the results for average h-index, total citations, and citations from the past five years (2018–2022) across the fields of architecture, civil engineering, and mechanical engineering at the associate and full professor ranks. The results indicate that architecture generally shows lower values for these metrics compared to civil and mechanical engineering. Specifically, the average h-index in architecture is 7.0, which is notably lower than 22.8 for civil engineering and 25.6 for mechanical engineering. Differences also emerged between the ranks of associate and full professor, with full professors typically showing higher values. A one-way analysis of variance (ANOVA) was performed to examine whether there is a significant difference in the h-index between researchers in architecture and those in the other two disciplines.
The ANOVA test revealed a statistically significant difference in the h-index between at least two of the three examined fields across the corresponding academic ranks (F(2, 1500) = 150.7, p < 0.001). The ANOVA results also showed a significant difference among associate professors (F(2, 1500) = 150.7, p < 0.001) and among full professors (F(2, 2720) = 210.2, p = 0.002). To determine exactly where these differences lie (i.e., which specific discipline differs from the other two in terms of the average h-index scores), Scheffé’s post hoc test was conducted. The results of the Scheffé post hoc test, presented in Table 3, indicate that the average h-index was significantly lower in architecture than in both civil engineering and mechanical engineering for both academic ranks (p < 0.01 for both ranks).
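For readers wishing to reproduce this kind of test on their own data, a brief one-way ANOVA sketch in Python is shown below. The three arrays are hypothetical placeholders rather than the study’s data, and SciPy’s f_oneway is used here in place of SPSS; a Scheffé post hoc comparison would then be applied to the same grouping.

```python
from scipy.stats import f_oneway

# Hypothetical h-index samples for one academic rank (placeholders, not the study data).
architecture = [3, 5, 6, 7, 8, 9, 10, 12]
civil_eng = [15, 18, 20, 22, 24, 26, 28, 30]
mechanical_eng = [17, 20, 23, 25, 27, 29, 31, 33]

# One-way ANOVA: is at least one group mean different from the others?
f_stat, p_value = f_oneway(architecture, civil_eng, mechanical_eng)
print(f"F = {f_stat:.1f}, p = {p_value:.4f}")
```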
To characterize the difference between the three academic fields in terms of the number of academic journals available for publishing, the researchers accessed and examined relevant data in the Web of Science and Scopus databases [38,39,40] using archival and content analysis methods; the results are presented in Table 4 and Figure 2. Table 4 shows the number of journals in both the Web of Science and Scopus databases that are classified under architecture in comparison to a sample of engineering disciplines, based on data provided in [43,44]. To further examine the difference between the three disciplines, the researchers used SciVal, an advanced analytics tool developed by Elsevier that provides in-depth research performance analysis based on the Scopus database, to evaluate the annual citation counts for the three examined fields. Figure 2 presents the annual citation count, excluding self-citations, for the years 2000–2020 for the examined fields, based on data provided by SciVal [45]. Both Table 4 and Figure 2 show substantial variation between architecture and the examined engineering fields in terms of the number of peer-reviewed journals and annual citations, with architecture showing the lowest values in this regard.
To address some of the limitations of the h-index, the researchers proposed using a relative h-index, the hr-index, calculated with Equation (1) presented in Section 3. The proposed hr-index aims to compare the performance and productivity of researchers across disciplines that differ in nature and have different publishing and citation patterns, such as architecture and engineering. The proposed research metric, the hr-index, is calculated as a relative value compared to other peers in a specific discipline. To calculate the hr-index, an average h-index value, havg-index, should be calculated for each discipline and for each academic rank in that discipline. The hr-index is then calculated for each researcher as the deviation of that individual’s h-index from this precalculated average value. Figure 3 shows the hr-index values obtained using the data collected for the three examined disciplines.
Figure 3 presents both the h-index and hr-index plotted together for comparison: the h-index is represented by a gray shaded area, with its values plotted on the y-axis on the left of the figure, while the hr-index is represented by a thick black curve, with its values plotted on the y-axis on the right. A positive hr-index value indicates that a researcher’s output is higher than the average of researchers in the corresponding field, and a negative value indicates the opposite. Figure 3 shows that the suggested hr-index substantially alters the researchers’ rankings compared to the rankings calculated using the standard h-index.
Table 5 presents a hypothetical example of nine researchers, three from each discipline. Researcher 1 in each of the three disciplines has an h-index of 10, while researchers 2 and 3 have h-indices of 15 and 20, respectively. Based on the current h-index calculation method, researcher 1 in all three disciplines is ranked equal in performance regardless of the substantial differences among the fields shown by the havg-index. The same applies to researchers 2 and 3, who are ranked equal across the three disciplines according to the h-index assessment. In addition, according to the h-index, researcher 3 in all disciplines is ranked the highest, indicating that he/she is outperforming researcher 2 and that researcher 2 is outperforming researcher 1. This suggests that the h-index can provide misleading information when used to compare the performance of researchers across disciplines that differ in nature and citation patterns. Using the hr-index, which ranks researchers’ performance relative to that of their peers in the same or similar disciplines, ranks the three researchers in each discipline differently, which is a more accurate and fair assessment than the standard h-index. As shown in Table 5, the hr-index places the researchers in discipline 1 at the highest rank, despite the fact that they have the same h-index values as their colleagues in disciplines 2 and 3.
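The ranking logic behind this example can be reproduced with Equation (1). In the sketch below, the discipline averages follow the study’s reported averages (7.0, 22.8, and 25.6), while the standard deviations are hypothetical assumptions and do not reproduce the actual Table 5 values; the point is only to show how equal h-index values translate into different hr-index ranks.

```python
# (avg h-index, assumed standard deviation) per discipline -- sigmas are hypothetical.
disciplines = {
    "Discipline 1 (architecture)": (7.0, 4.0),
    "Discipline 2 (civil engineering)": (22.8, 10.0),
    "Discipline 3 (mechanical engineering)": (25.6, 11.0),
}

researcher_h_values = [10, 15, 20]  # researchers 1-3, identical across disciplines

for name, (h_avg, sigma) in disciplines.items():
    print(name)
    for i, h in enumerate(researcher_h_values, start=1):
        hr = (h - h_avg) / sigma  # Equation (1)
        print(f"  Researcher {i}: h-index = {h}, hr-index = {hr:+.2f}")
```

Under these assumed parameters, the researchers in discipline 1 obtain positive hr-index values while their counterparts in disciplines 2 and 3 obtain negative ones, mirroring the re-ranking described above.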
6. Conclusions
The results highlight several limitations and challenges regarding the current use of research metrics in the architecture field compared to the engineering disciplines. Most of these challenges stem from the major differences between these disciplines. Evaluating research output and its quality with a single metric across fields that differ substantially has major limitations and can convey misleading information, which requires the use of multiple research metrics. A responsible application of these metrics involves considering both qualitative and quantitative indicators in a multidimensional approach to ensure a fair and comprehensive assessment of research quality. With the rapid expansion of scholarly publications and their growing influence on university rankings, researchers face increasing pressure to produce more papers in areas and journals that are likely to attract higher citation counts. This has made citation metrics, such as the h-index, a central criterion in academic evaluations. While a high number of citations and a resulting high h-index may appear to indicate superior research quality, this is not always the case. Citations should not be regarded as the sole measure of a researcher’s success. Researchers in art, architecture, and design-related fields often find themselves at a disadvantage compared to their peers in other disciplines. Architecture, by nature, is a relatively specialized field with a smaller pool of researchers and journals compared to the broader scientific and engineering disciplines. This limitation reduces its ability to attract citations at the same rate as these other fields, resulting in a lower h-index for architects.
This study explores this issue by comparing citation counts and h-index values between architecture and selected engineering disciplines. The comparison was based on intensive, inductive data collection from the Scopus database, focusing on the top 50 universities in architecture, civil engineering, and mechanical engineering. The findings confirm that architecture generally exhibits lower values for these research metrics than the engineering fields. Specifically, the average h-index score for research-active faculty members at the associate and full professor ranks in architecture was 7.0, compared to 22.8 in civil engineering and 25.6 in mechanical engineering. This trend is also evident in citation counts, where architecture recorded significantly lower numbers. One potential way to address this gap is by fostering interdisciplinary research in architecture, facilitating the integration of methods and tools from multiple disciplines to offer new perspectives and create additional avenues for research dissemination.
This study concluded that a single h-index benchmark cannot be universally applied to researchers across different disciplines, as disciplines have varying research opportunities and expectations, including publication rates, citation counts, and h-index values. Consequently, the study proposes an additional h-index variant: the relative h-index (hr-index), which is discipline-specific. Developing and adopting discipline-specific metrics is important, particularly for disciplines that differ substantially, in order to obtain meaningful and valid evaluations of research output and quality. The proposed hr-index is calculated by determining the difference between an individual researcher’s h-index and the average h-index of their peers in the same discipline and academic rank, then dividing the result by the standard deviation in the discipline. This shows the researcher’s deviation from the average performance in that specific discipline. Such a metric can be used as a complement to the standard h-index, as it offers a fairer and more representative evaluation of researchers’ performance and impact within their areas of expertise.
A more thorough and accurate evaluation of research performance and productivity in architecture would also be possible with the use of alternative metrics. In addition, such metrics would lessen the h-index’s drawbacks by promoting a wider range of architectural research and interdisciplinary cooperation without sacrificing citation-count quality. The study recommends extending data collection and analysis to a wider range of researchers, academic ranks, and disciplines, using different sampling approaches and advanced data-processing techniques. There is a need in this regard to develop databases that allow for regular updates of discipline-specific research metric data. Future studies could also consider additional research databases, such as the Web of Science, in data collection to allow for further comparative analysis of research metrics and broader disciplinary analysis.