Article

DesignIntelligence and the Ranking of Professional Architecture Programs: Issues, Impacts, and Suggestions

School of Architecture & Design, University of Kansas, 1465 Jayhawk Boulevard, Lawrence, KS 66049, USA
Architecture 2022, 2(3), 593-615; https://doi.org/10.3390/architecture2030032
Submission received: 13 July 2022 / Revised: 28 August 2022 / Accepted: 1 September 2022 / Published: 5 September 2022

Abstract

This paper studies the annual rankings of professional architectural degree programs by DesignIntelligence (DI). It uses a literature review and the statistical analysis of DI rankings and program-specific data to explore the limitations of the ranking system and its impacts on programs and public opinion. According to the findings of the study, the limitations of this system are related to the data it uses, the methods it uses to collect the data, and the way it uses the data for ranking purposes. Still, the ranking system can force architectural programs into a costly campaign for better ranks. It can also mislead prospective students in choosing programs that may not match their expectations. Additionally, it does not provide a reliable assessment of the capacity of a program to serve the profession and produce public good. It is suggested that a more objective, reliable, and relevant ranking system is needed for professional architecture degree programs. For this, the ranking system should emphasize criteria and methods different from the current DI system of rankings and should allow users to personalize rankings based on their perspectives, needs, and priorities.

1. Introduction

Recently, college rankings have received significant attention from both lay people and experts. They started in the early 20th century as benign academic exercises [1]. Now, they are a regional, national, or global competition for better students and faculty and more wealth. For supporters, rankings help make the concern for excellence in higher education publicly visible and active. They argue that rankings encourage universities and academic programs to perform better and help combat the pitfalls of institutional stagnation that could develop in their absence [2,3]. They also argue that rankings meet a widespread demand for publicly accessible comparative information about academic institutions and programs. Students and families consult them when making college decisions. Sponsors of faculty research consider ranking-based reputations for funding decisions and research partnerships. Policymakers consult them when allocating resources among educational institutions. Academic leaders consult them for administrative decision-making. An increasingly consumerist attitude among students, parents, administrators, and employers toward higher education in a knowledge-based economy might be the reason for the popular rise of college rankings, argues Hazelkorn [1]. Highly ranked institutions and programs do not hesitate to use their rankings to help market themselves to prospective students and parents, employers, policymakers, and funders.
However, college rankings can be misleading. The literature is replete with criticisms arguing that rankings use diverse and weakly defined methods and indicators, which are often weighted differently in an ad hoc manner [4,5]. Rankings have the advantage of extreme simplicity but also the disadvantage of revealing very little, because ranks do not disclose real differences in quality. In fact, all rankings exaggerate quality differences because of their association with winning [4]. Normally, what matters in a contest is who came in first, not how much better the winner was than the loser.
To make matters worse, by placing all institutions on the same scale, rankings give readers a false impression that these institutions are trying to win the same competition with the same goals. Most academic institutions provide many services, from teaching and preparing students for the market to generating knowledge through research and scholarship. Regardless of how well they are done, rankings undermine the multiplicity of services academic institutions provide for the public good. A mere inclusion of numerous factors in rankings does not represent what it takes to run successful academic institutions. College rankings have been described as normalizing “one kind of higher education institution with one set of institutional qualities and purposes, and in doing so strengthening its authority at the expense of all other kinds of institutions and all other qualities and purposes” [6]. Indeed, higher education is too complex and consumers’ individual preferences are too diverse to be fairly judged by a singular ranking scale [7].
Lacking a consensus on the exact definition of “academic quality” [7,8], a ranking system can indirectly and wrongly influence students and parents by emphasizing some factors over others in the way it defines quality. Factors left out of the rankings may appear less important because of this. Some important factors, such as teaching quality and learning engagement and experience, are hard to define and quantify. Other factors that can affect prospective students and their parents, such as geography, safety, comfort, and convenience, are especially subjective. Still other factors, such as scholarships and cocurricular and extracurricular activities, are quantifiable but idiosyncratic to students and programs. Yet, a seemingly reassuring appearance of objectivity of the rankings often leads students and parents away from factors they care about to factors less central to their goals when making college decisions.
In response to the controversies they raise, ranking systems frequently change their methods and indicators, producing results that differ significantly from one year to the next and reflect adjustments in methodology rather than any true change in institutional quality. While changes in ranking methodologies show a responsiveness to criticism, they also show a lack of intellectual rigor in the process and product. For example, one study revealed that, by adjusting the weighting of different criteria, the same data can produce different rankings for programs. The authors of the study argued that the data used in rankings are useful both for institutions and students, but the weightings of the data reflect nothing more than the rankers’ preferences [9]. In general, college ranking systems are arbitrary and reflective of elitism, because the same wealthy schools always seem to get the top spots. To make matters worse, there have been many reports over the years of schools deliberately “fudging” their data or taking non-quality-related steps to increase their ranks [10].
However, studies of college rankings are more common at the institutional level. They focus on the U.S. News & World Report’s (USNWR’s) college rankings, the Academic Ranking of World Universities (ARWU) by Shanghai Jiao Tong University, the Quacquarelli Symonds (QS) World University Rankings, and the Times Higher Education Supplement (THES) international ranking of universities. In contrast, studies on the rankings of professional programs such as architecture are rare. With only one published study [11], the rankings of professional architecture programs are seriously understudied. Therefore, among many other things, we do not know much about the kind and quality of data these rankings use, the methods they use to collect the data, the ways they use the data to determine rankings, and the impact these rankings have on programs and public opinion.
To fill in the gaps, this paper focuses on the rankings of professional architecture programs produced by DesignIntelligence (DI) of the Design Futures Council. In Section 2, it explains the process of DI rankings. In Section 3, it presents the apparent advantages and limitations of these rankings. In Section 4, it reports analytic studies on DI rankings, exploring some common issues of rankings. In Section 5, it explains the impacts of DI rankings on professional architecture programs and public opinions about these programs. In Section 6, the paper makes some suggestions to improve the process of ranking professional architecture programs. Finally, Section 7 provides the conclusions.

2. DI Rankings

DI produces the most prominent rankings of professional architecture programs. Only a few of the programs ranked by DI are located outside the USA, perhaps because of their membership in the Association of Collegiate Schools of Architecture (ACSA). In recent years, DI has shifted away from listing "America's Best Architecture and Design Schools" to "America's Most Hired From" and "Most Admired Design Schools". It is noted on DI's webpage that the shift recognizes that a BEST school does not exist, as one's needs are unique and one's view of "what is best" is subjective (https://www.di-ratings.com/, accessed on 3 June 2022).
DI ranks undergraduate and graduate professional architecture degree programs separately. For ranking/rating purposes, DI uses web-based questionnaires to collect inputs and opinions on architecture programs from hiring professionals, program administrators, and students. The topics emphasized in these surveys fall into five separate categories: Output of Institution, Outcomes from Alumni, Learning Environments, Relevance, and Distinctions. The details of three DI survey questionnaires are given below.

2.1. Professionals’ Survey

For this survey, hiring/supervising professionals within firms are invited to participate. The survey asks professionals to rate the importance of several attributes for a new architecture graduate entering the workplace. These attributes include students’ ability to collaborate effectively; ability to positively influence others; adaptability/flexibility; comfort when interfacing with outside parties (client, engineer, constructor, and consultant); committed work ethic; effective interpersonal skills; emotional intelligence; and empathy. They are asked to use 1 for not important, 2 for slightly important, 3 for moderately important, 4 for important, and 5 for very important.
Professionals are also asked to rate the importance of several factors considered in hiring decisions for a new architecture graduate entering the workplace. These factors include GPA, design excellence, research skills, school attended, study abroad experience, constructability focus, adequate understanding of the professional services business structure, knowledge of sustainable design, technology adoption, design for health, and previous work experience. They are asked to use the same ratings as above.
Additionally, professionals are asked to indicate if their firm has an active educational placement/work study program with any design programs. If they do, they are asked to rank the top three programs.
Furthermore, they are asked to identify the top three most hired from programs, starting with the first. For each program, DI surveys ask two sets of questions to further identify student outcomes of professional significance. The first set of questions asks the programs from which the greatest number of students were hired in the last 5 years, the quality of the student portfolios of recent hires from these programs, and the professional readiness of recent hires from these programs. For the last two questions of the set, raters are asked to use 1 for extremely dissatisfactory, 2 for less than satisfactory, 3 for satisfactory, 4 for highly satisfactory, and 5 for extraordinary.
The second set of questions asks professionals to rate the quality of the program from which the greatest number of students are hired in terms of how well it is preparing students for communication and presentation skills; community involvement; construction materials, means, and methods; design technologies; design theory and practice; engineering fundamentals; global issues/international practice; interdisciplinary studies; planning/project methodologies; practice management; research methodologies; sustainability/healthy design; and understanding the impact of urbanization on a design. For this set, they are asked to use 1 for poor, 2 for fair, 3 for good, 4 for very good, and 5 for excellent.
To end the survey, professionals are asked to list up to five programs and to describe the unique distinctions of each program using 25 words or less. Here, the survey lists some 141 programs, with the option to add “other”.

2.2. Deans’ Survey

For this survey, only one survey per program is allowed. The dean or a delegate of the dean can fill in the survey. The survey starts with questions asking if their undergraduate, graduate, or both undergraduate and graduate programs are accredited by the National Architectural Accrediting Board (NAAB). Then, it asks the administrators to provide the average number of graduates from each accredited undergraduate or graduate architecture program during the last three years.
These initial questions are followed by questions that ask administrators to list peer-reviewed papers and books published by a program’s faculty, recognition awards received by the faculty and the program, and product innovations from the program in the past 12 months. They also ask them to list research projects by faculty and students in the past 24 months. Additionally, they ask them to provide the average percentages of program alumni who have become entrepreneurs (self-employed, created a start-up, etc.); work in leadership/management roles in design-related and non-design-related organizations; and are employed or went on to further their education upon graduation in the last 10 years. Moreover, they ask them to provide the top three additional accomplishments of graduates, with statistics when possible.
Then, the survey includes a few binary questions (yes/no) on learning environments. These questions ask if a program offers access to a fab-lab and software; studio facilities with 24-hr access; dedicated studio space for each student; immersion location opportunities (abroad, rural, and urban); and hybrid learning methods (physical, virtual, and hybrid). They also ask if a program has an active educational placement/work study program with outside firms/organizations, has dedicated research faculty/programming, and has courses focused on sustainable/resilient design in the program’s core requirements. Additionally, they ask if a program emphasizes sustainable/resilient design, includes socially focused design courses (e.g., design for equity) in the core requirements, emphasizes socially focused design (design for equity), and promotes interdisciplinary learning (core classes from other disciplines involving the built environment) and transdisciplinary learning (core classes from other disciplines NOT involving the built environment).
Finally, deans are asked to note the top five significant changes made to architecture courses in the past three years out of 13 different options provided, and to identify five distinguishing features of their programs using 25 words or less for each.

2.3. Students’ Survey

For this survey, current undergraduate and graduate students and alumni who have graduated within the last three years are invited to participate. The survey starts with questions asking for the college or university where the student is currently enrolled or from which the student recently graduated. The survey also wants to know the student’s plan after graduation. The options include pursuing an advanced degree in architecture, working in a private practice, working for a corporation, working in the government, working in academia, self-employment, working for a nonprofit, working in a field other than architecture, or other. Additionally, the survey asks the student to describe the three most important attributes of an employer in 25 words or less. Moreover, the survey asks if the student believes that they will be well-prepared for working in the profession upon graduation. Here, the student is asked to use 1 for unprepared, 2 for unsure, 3 for hopefully prepared, 4 for prepared, and 5 for well-prepared.
After the initial questions, students are asked to rate the quality of their program in terms of how well it is preparing them or has prepared them for communication and presentation skills; community involvement; construction materials, means, and methods; design technologies; design theory and practice; engineering fundamentals; global issues/international practice; interdisciplinary studies; planning/project methodologies; practice management; research methodologies; sustainability/healthy design; and understanding the impact of urbanization on a design. They are asked to use 1 for poor, 2 for fair, 3 for good, 4 for very good, and 5 for excellent.
Then, students are asked the same binary questions (yes/no) on learning environments for both undergraduate and graduate programs that the deans were asked to answer. As in the deans’ survey, these questions ask if a program offers access to a fab-lab and software; studio facilities with 24-hr access; dedicated studio space for each student; immersion location opportunities (abroad, rural, and urban); and hybrid learning methods (physical, virtual, and hybrid). They also ask if a program has an active educational placement/work study program with outside firms/organizations, has dedicated research faculty/programming, and has courses focused on sustainable/resilient design included in the program’s core requirements. Additionally, they ask if a program emphasizes sustainable/resilient design, includes socially focused design courses (e.g., design for equity) in the core requirements, emphasizes socially focused design (design for equity), and promotes interdisciplinary learning (core classes from other disciplines involving the built environment) and transdisciplinary learning (core classes from other disciplines NOT involving the built environment).
Finally, students are asked to identify up to five distinguishing features of their programs using 25 words or less.

2.4. Reporting

Using the data collected through the surveys, DI determines (1) the 40 "most admired" schools; (2) the 20 "most hired from" schools in each of five categories defined by the number of graduates (less than 20, 20–49, 50–69, 70–99, and 100+); and (3) rankings in 12 focus areas: communications and presentation skills; construction materials and methods; design technologies; design theory and practice; engineering fundamentals; healthy built environments; interdisciplinary studies (awareness of and collaboration with multiple disciplines impacting the built environment); practice management; project planning and management; research; sustainable built environments/adaptive design/resilient design; and transdisciplinary collaboration across Architecture/Engineering/Construction. DI also provides program insights using the data collected from professionals, deans, and students. Helpful as they are, these insights will not be discussed in this paper.

3. Some Immediate Observations

The descriptions provided above suggest some positive and many negative aspects of the DI ranking system, which are discussed below.

3.1. DI Collects Data from Multiple Sources

College rankings generally have used three different types of data—reputational data, input data, and output data. Reputational data are generally collected from university presidents, academic deans, department heads, or employers, those who supposedly know most about academic quality. Input data include, among other things, faculty publications and citations, library size, student–faculty ratio, incoming students’ test scores, retention and graduation rates, and educational expenditures. Output data pertain to the graduates of a program and include data on such factors as skillsets, employability, average salaries, awards and recognitions, and roles and responsibilities. Since none of these data sets can describe the academic quality sufficiently [3,12,13,14], rankings using a combination of reputational, input, and output data may be desired.
Interestingly, DI uses all three types of data to rank architecture programs. The input criteria used by DI are included in its administrators’ and students’ surveys. The output criteria are included in its employers’ surveys. Finally, the reputational criteria are included in its administrators’ and employers’ surveys. As a result, for supporters, DI data provide a comprehensive comparative assessment of professional architecture programs. Conversely, for critics, DI data are spread too thin and are unable to take into account any one of the three perspectives of rankings sufficiently. For them, conflicts among these perspectives may be difficult to resolve. As a result, the quality of a program, as defined by DI, becomes questionable and confusing. For example, DI does not explain how it determines the ranking of a program when administrators and students hold contradictory views of the program. It also does not explain how it monitors, verifies, and validates the data it collects from its three different sources.

3.2. DI Collects Profession- and Program-Focused Data

DI’s surveys collect generic data, such as the numbers of students, academic products, publications, citations, recognitions received, different types of jobs held by alumni, and hires from a program. These generic data, however, provide very little information about the specific nature of professional architecture programs. To complement these generic data, DI’s surveys also collect profession- and program-specific data. These data include the attributes of new architecture graduates entering the workplace, the factors affecting the hiring of new architecture graduates, and the quality of the learning environment of a program. Therefore, unlike rankings that use generic data only, DI rankings are likely to be more relevant to students and parents looking for an architecture program and to employers hiring new architecture graduates.
Yet, DI’s survey data raise significant concerns. Since professionals may not have direct access to a program, it seems reasonable for DI to seek out their opinions of a program based on the students they hire from the program. However, it does not seem reasonable for DI to seek out students’ and administrators’ opinions concerning the quality of their own programs. It should not surprise anyone if administrators or students use an excellent rating to define the quality of their own programs. Indeed, if the benefit of an answer to a question is already evident to those who answer the question, in the absence of any disincentives, why should they answer in a manner that could harm the rankings of their programs?

3.3. DI Cannot Define the Quality of a Program in an Objective Manner

Hazelkorn [1] identified eight academic indicators often considered by ranking systems: beginning characteristics, learning outputs, faculty, learning environment, final outcomes, resources, research, and reputation. Beginning characteristics are represented by data such as student admission scores and the percentage of international students. Learning outputs are typically defined by a proxy of retention and graduation rates in most ranking systems. Faculty indicators include faculty-to-student ratios and research output. Learning environment reflects student engagement and satisfaction with the learning conditions in a program. Final outcomes include the employability and average salaries of graduates as proxies for the quality of education. Resources account for the budgetary and physical assets of the program. Research is represented in rankings by the number of faculty publications and citations, as well as by the level of funding for faculty work. Finally, reputation is generally established by peer review, which is often subject to reviewer bias, to a “halo effect”, in which perceptions of one academic unit extend to others in the same institution, and to a tendency to restrict judgments to known institutions.
Only some of the academic indicators identified by Hazelkorn [1] are considered thoroughly and methodically by the DI ranking system. Therefore, even though DI rankings use general and program- and profession-focused data from three different sources, they cannot provide a comprehensive assessment of the academic quality of architecture programs. They also cannot provide an objective assessment of programs, because they primarily use opinion surveys. The objectivity of DI rankings is further reduced when the same opinion surveys are repeated year after year. Once an opinion survey has been conducted and the results have been published, it is hard to imagine that the survey will continue to provide unbiased data and results in the following years. By employing many small and big strategies to influence surveys, professional architecture programs can try to improve their ranks from one year to the next even though substantive changes in academic quality may be hard to achieve in a short time. Since it is impossible to prove the veracity of participants’ opinions, the DI rankings may not show the true values of architecture programs.

3.4. DI Does Not Have a Process in Place to Verify the Data It Collects

Besides the fact that opinion-based survey data are generally hard to verify, the problem of the verifiability of DI rankings is found at multiple levels. For example, it is not clear how DI verifies whether a survey participant is really the person they claim to be. It is also not clear how it stops a participant from taking the survey more than one time. Additionally, it is not clear if DI can independently verify the information provided by participants in the few cases where they are asked to provide specific data instead of opinions. For example, does it verify if an administrator inadvertently provides wrong data? In fact, stories of mistakes are quite common in ranking systems [10]. It is worth noting here that, while it is common to ask administrators about the quality of their academic programs or to ask them to compare their programs with other similar programs, research in the social sciences has found that this practice tends to provide outdated results that favor programs in institutions with strong reputations, irrespective of a program’s achievements [15,16,17]. More importantly, while the opinions of professionals, deans/administrators, and students about architecture programs are valuable, how can they, who have experienced or known only a few architecture programs, weigh a program objectively against the many other programs that exist? Individuals cannot become experts on all the other programs in the country by hiring a few students as employers, getting a degree or two from one or more programs as students, or running one or more programs as administrators.
Since answers to several DI survey questions are not verifiable, there is no disincentive against answering DI’s questions incorrectly. There is also no validity test reported in the literature for the questions in DI’s surveys. Do they really measure what they want to measure? Having a "fab-lab" and software does not mean that students have used them or that the quality of these resources is similar in different programs. Likewise, having a socially focused design course does not ensure that students know how to design for equity or that the content and delivery of these courses are the same in different programs. Similarly, having research faculty/courses does not mean that students have access to them. In many schools, students in professional architecture programs may not need to interact with their research faculty regularly. Therefore, they may be unsure whether the program has research faculty or not. It is also a fact that many stellar research faculty may not have time to teach or may not be good teachers. In simple words, it is not possible to fairly rank any programs based on a set of binary questions on learning environments, as DI does. To do so, it is necessary to know the quality, quantity, and extent of the things a program has. For example, a program may have an active and strong educational placement/work study program without having dedicated research faculty or programming. A program does not have to have everything included in DI’s questions on a learning environment to become an exceptional learning community.
Responses to many of DI’s survey questions are also not verifiable because there are many ways to interpret these questions. For example, in the absence of any clear definition, participants may interpret "sustainable resilient design" differently. Therefore, no answer for whether a program offers a "sustainable resilient design" course can be wrong. Many of DI’s questions also seek descriptive, as opposed to numerical, responses. It is not clear how these responses are considered in determining the program quality and, hence, rank. In qualitative studies, it is a common practice to get the results examined by experts, as well as informants familiar with the subject matter, setting, or methodology of the research, before these results are reported. One wonders if the results of the analysis of DI’s qualitative data are examined in a similar manner. It is possible that DI does not use any qualitative data for ranking purposes, but we do not know that. Nevertheless, more transparency concerning its methodology may help improve the usefulness of DI rankings.

3.5. It Is Not Clear How DI Uses Data to Rank Programs

As noted earlier, at least three sets of responses are considered, sometimes on the same set of issues, to determine the DI rankings. In the professionals’ survey, for example, employers are asked to indicate if architecture programs are preparing newly hired architects well in different professional skills and areas. In the administrators’ survey, administrators are asked to indicate if their programs put emphasis on these skills and areas. Finally, in the students’ survey, students are asked to indicate if their programs are preparing them well, again, for the same set of professional skills and areas. It is not clear how these three sets of responses are mapped onto each other to evaluate architectural programs and to determine their rankings.
In a more specific example, DI uses professional survey data to determine the rankings for "most admired" and "most hired from" programs. However, it is not clear if DI also uses administrators’ and students’ survey data to do the same. It is also not clear how a mismatch between professionals’ expectations and administrators’ and students’ descriptions of their schools affects DI’s rankings. What happens to the ranking of a program when recent graduates of the program exhibit poor research skills in the office where they have just been hired, but both administrators’ and students’ surveys indicate that their program offers research skills courses?
Additionally, a lack of clarity exists concerning how DI determines how many programs to rank. DI does not explain why it includes only the 40 "most admired" programs in its ranking list. To be ranked 41st out of some 141 schools in the "most admired" category is no small feat. In contrast, DI includes 100 schools (20 schools for each of the five categories defined based on the number of graduates) out of some 141 schools in the "most hired from" category. To put it simply, most schools are ranked in the "most hired from" category, which probably is not a very useful distinction. Additionally, a rank in the "most hired from" category says very little about the differences between a program graduating 100+ students and a program graduating fewer than 20 students. In any given year, ensuring job placement for every student is more difficult for a program graduating 100+ students than for a program graduating fewer than 20 students.
More interestingly, DI does not explain why the numbers of ranked programs in different focus areas vary. In 2021, some focus areas included 15, some 10, and still others included 9 programs in the ranking lists. One wonders if DI ranks only those programs for which it has data. If that is indeed the case, then with only a handful of programs, the usefulness of focus area rankings is questionable. Further, in 2021, DI did not rank programs in the following focus areas: community involvement, global issues/international practice, and understanding the impact of urbanization on a design. Again, DI might not have enough schools to rank in these categories. A lack of a sufficient number of schools to rank should not be surprising, because when professionals are asked to choose only the five most familiar programs out of some 141 programs, they are forced to exclude many other programs that could potentially be good in some of the many focus areas.

3.6. It Is Not Clear How DI Rankings Are Related to Program Accreditation

Architecture programs are accredited by NAAB. NAAB’s program criteria are used to evaluate how a program supports career pathways, design, ecological literacy and competency, history and theory, research and innovation, leadership and collaboration, learning and teaching culture, and social equity and inclusion. NAAB’s student criteria are related to student learning objectives and outcomes. They include health, safety, and welfare in the built environment, professional practice, regulatory context, technical knowledge, design synthesis, and building integration.
Even though all accredited architecture programs must remain in compliance with NAAB’s program and student criteria, NAAB does not rank the accredited programs, but DI does. Interestingly, a brief survey of the NAAB evaluation reports of accredited architectural programs around the country by this author shows that many programs that fail to meet several NAAB criteria are still ranked highly by DI. This is important, because, according to NAAB, “Accreditation is evidence that a collegiate architecture program has met standards essential to produce graduates who have a solid educational foundation and are capable of leading the way in innovation, emerging technologies, and in anticipating the health, safety, and welfare needs of the public.” ([18], page ii).
Of course, because accreditation is a voluntary process, not all architecture programs seek NAAB accreditation. Similarly, not all jurisdictions require one to get a degree from a NAAB-accredited program to be licensed to practice. This raises an interesting question: Should a ranking system rank architectural programs without considering whether a program "has met standards essential to produce graduates who have a solid educational foundation and are capable of leading the way in innovation, emerging technologies, and in anticipating the health, safety, and welfare needs of the public"? If a ranking system of architectural programs should not do that, then who should take the responsibility when incoming students choose a program based on its ranking instead of its accreditation status?

4. Analytic Findings

The literature reports numerous analytic studies on rankings, noting that rankings may be deficient in several aspects. Some of these studies explore ranking processes [19,20,21,22,23], differences and similarities [24,25,26,27,28], and correlations [29]. Other studies explore how lessons learned from some countries that use higher education ranking systems might influence similar practices in other countries [23]. Still others look at theories, data, and methodologies, as well as weightings and formulas used in rankings [5,19,30,31,32,33,34,35,36,37,38]. One of these studies found that USNWR rankings were no more than a reflection of the institutional outcomes and financial resources gathered from the Integrated Postsecondary Education Data System (IPEDS) of the National Center for Education Statistics (NCES) [39]. Another study found several problems with the Times Higher Education Supplement (THES) rankings: the sampling procedure of the rankings was not explained and was very probably seriously biased; the weightings of the various components of these rankings were not justified; inappropriate measures of teaching quality were used; the assessment of research achievements was biased against the humanities and social sciences; the classification of institutions was inconsistent; there were striking and implausible changes in the rankings between 2004 and 2005; the rankings were based on regional rather than international comparisons; and so on [40].
Many studies also explore how college rankings affect consumers and the higher education establishment [41,42]. The specific topics explored in these studies include the effects of rankings from one year to the next, also known as the feedback effects [12,14], and the effects of rankings on public policies [32], peer assessments of reputations [43,44], and admission outcomes and pricing decisions [45]. One study on admission outcomes at selective private institutions found that a less favorable rank led an institution to accept a greater percentage of its applicants, lowering the quality of the entering class [45]. Another study found that, while a change in rank did not affect the (all private) schools’ "sticker price", less visible discounts were associated with a decrease in rank. A less favorable rank led to lower thresholds of expected self-help and more generous financial aid grants, with a drop of ten places resulting in a four percent reduction in aid-adjusted tuition and an overall decrease in net tuition [45]. After controlling for student characteristics at fourteen public research universities, yet another study found no statistically significant relation between the USNWR measures and NSSE (National Survey of Student Engagement) benchmarks, except for one. This finding led the author to conclude that the quality of a student’s education seems to have little to do with rankings [46]. Still others examine the impact of institutional and program area rankings on students’ access to and choice in higher education and discuss the impact of rankings on student opportunities after graduation in terms of placement success and earnings [23,47]. There are also studies suggesting new methodologies for better rankings [14,38,48,49,50,51], focusing on, among other things, relative internationalization [52]; research, educational, and environmental performances [53]; and the effect of a university’s strategy on its rank [51].
This study on the DI rankings of professional architecture programs cannot cover all that has been presented in the literature on college rankings. While similar studies on DI rankings are necessary, this study will explore only a few of the most common issues in relation to DI rankings. These issues include the halo effects of college rankings on the DI rankings of architecture programs, the feedback mechanism affecting the year-to-year DI rankings, the relevance of architecture programs as demonstrated by the DI rankings, and, finally, the relationships between academic expenses and the DI rankings of architecture programs. It should be noted here that all statistical analyses required for the study were performed using IBM SPSS Statistics (Version 27).

4.1. Halo Effects

The “halo effect” is a ranking phenomenon, where less prestigious programs are rated more highly based on the overall reputation of their institutions. In one extreme example, Princeton’s undergraduate business program was ranked among the top ten, even though Princeton did not have such a program [3]. The “halo effect” demonstrates that at least some reputational survey participants rate programs and institutions without the requisite knowledge to make an informed judgment of quality.
Two analyses were conducted to find out if halo effects exist in DI rankings. First, the correlations between the USNWR national rankings of universities and the “most admired” DI rankings of their undergraduate architecture programs were studied for the years 2019 and 2020. The graduate architecture programs were not considered in the study, because the USNWR national rankings of colleges at the institutional level are for undergraduate programs only. As indicated by the correlational analysis, the associations between these two rankings are strong and statistically significant (Table 1). Since it is unlikely for architecture programs to affect the USNWR national rankings of universities, these associations may indicate the existence of halo effects of USNWR rankings of universities on the “most admired” DI rankings of their undergraduate architecture programs.
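The paper reports that these analyses were performed in IBM SPSS (Version 27). As a rough illustration of the kind of rank correlation involved, the following Python sketch computes Spearman’s rho between two sets of ranks; the rank values are invented placeholders rather than the study’s data, and the choice of Spearman’s coefficient is an assumption, since the paper does not state which correlation measure was used.

```python
# A minimal sketch of a rank correlation of the kind used to probe halo effects.
# The rank values below are invented placeholders, not the study's data.
from scipy.stats import spearmanr

# Hypothetical paired ranks for the same eight schools: the USNWR national rank
# of the university and the DI "most admired" rank of its undergraduate
# architecture program (lower number = better rank).
usnwr_rank = [3, 12, 25, 40, 57, 70, 88, 102]
di_admired_rank = [1, 4, 6, 10, 12, 15, 18, 20]

rho, p_value = spearmanr(usnwr_rank, di_admired_rank)
print(f"Spearman rho = {rho:.2f}, p = {p_value:.3f}")
# A strong, statistically significant positive rho would be consistent with a
# halo effect: better-ranked universities tend to house better-ranked programs.
```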
Second, the halo effects were studied at the school level by exploring the following question: Does having the “most admired” undergraduate and graduate programs affect the frequencies of rankings for undergraduate programs? According to Table 2, in the years between 2010 and 2019, 23 schools had the “most admired” undergraduate architecture programs only. On average, these programs were ranked 5.57 times as the “most admired” programs. In contrast, during the same period, 17 schools had both the “most admired” undergraduate and graduate architecture programs. On average, the undergraduate programs of these schools were ranked 12.24 times as the “most admired” programs, indicating that having a “most admired” graduate program in the school might improve the chance of an undergraduate program to be ranked as a “most admired” program.

4.2. Feedback Mechanism

Another serious concern related to participants’ lack of information about the institutions they are ranking is that reputational rankings—particularly annual ones—may be creating a feedback mechanism for future rankings. Put another way, having little familiarity with other institutions, academic administrators and professionals are likely to allow a school’s previous rank to affect their current assessment of its academic reputation. Even if only indirectly, the sheer ubiquity of rankings on the internet may be a reason for this [14]. As a result, we may not only observe strong correlations between an institution’s year-to-year rankings, but the correlation between year-to-year rankings may increase and stabilize, and little change can be expected in the rankings overall as years go by. As shown in Table 3, the feedback mechanism seems to exist in the DI rankings of the “most admired” architecture programs. According to the table, with only a few minor exceptions, the year-to-year correlations have been clearly strengthening for the “most admired” undergraduate programs since 2008–2009. Even though the “most admired” graduate programs do not show a similar pattern, the year-to-year correlations have remained strong over the years, suggesting a strong association of a program’s previous rank with its current rank.
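To make the year-to-year comparison concrete, here is a minimal Python sketch of the kind of consecutive-year rank correlation summarized in Table 3; the DataFrame holds made-up ranks for eight hypothetical programs, and Spearman’s rho is again an assumed choice of coefficient.

```python
# A sketch of a year-to-year rank correlation check (feedback mechanism).
# The ranks below are made up purely for illustration.
import pandas as pd
from scipy.stats import spearmanr

# Rows: programs; columns: DI "most admired" rank in each year (illustrative).
ranks = pd.DataFrame(
    {"2017": [1, 2, 3, 4, 5, 6, 7, 8],
     "2018": [1, 3, 2, 4, 6, 5, 7, 8],
     "2019": [1, 2, 3, 5, 4, 6, 7, 8]},
    index=[f"program_{i}" for i in range(1, 9)],
)

# Correlate each pair of consecutive years; rho values that stay strong (or
# strengthen) over time are the pattern described for Table 3.
years = list(ranks.columns)
for y_prev, y_next in zip(years, years[1:]):
    rho, p = spearmanr(ranks[y_prev], ranks[y_next])
    print(f"{y_prev} vs {y_next}: rho = {rho:.2f} (p = {p:.3f})")
```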

4.3. Relevance

The purpose of any academic ranking is to assess not only the quality but also the relevance of academic programs. People may be less interested in good quality programs if they lack relevance. One way to find out the relevance of a program is to see how hirable the graduates of the program are. If the education one receives from the "most admired" programs of DI is relevant, then one would expect the ranks of the "most admired" programs to be associated with the ranks of the "most hired from" programs of DI. To find this out, the study considered whether the "most admired" programs of DI were also included in the "most hired" program categories of DI. The findings for the years 2018 and 2019 are presented in Table 4. Interestingly, at least nine programs of different sizes were included in the "most hired" categories but not in the "most admired" categories in these two years. Only a few of the 23 schools with the "most admired" undergraduate programs were included in the "most hired" categories in these years. The same is true for the schools with the "most admired" graduate programs: only a few of these programs were included in the "most hired" categories in these years. Notably, many schools with the "most admired" undergraduate and graduate programs were included only in the category of the "most hired" schools with 100+ graduates in both years. According to these findings, only some of the "most admired" schools may be providing relevant education, as indicated by their rankings in the "most hired from" categories.
Another way to find out the relevance of an architecture program is to see how well the graduates of the program do in the Architect Registration Examination (ARE). The ARE is designed to measure the various areas of professional knowledge that are largely regarded as an important outcome of a successful professional architectural education. It potentially reveals how much students have learned while in the program. The areas included in these exams are construction & evaluation, practice management, programming & analysis, project development & documentation, project management, and project planning & design. This study explored the correlations between the ARE passing rates of professional architecture programs and the "most admired" and "most hired from" DI ranks of these programs for the years 2018 and 2019. Of course, graduates do not take their registration exams in any predetermined year. Rather, they often take these exams over an undefined number of years. Since the reputation of a program is unlikely to change significantly from year to year, it is expected that graduates from a "most admired" or a "most hired from" program would generally do well in their registration exams, regardless of when they take these exams.
The results of the correlational analysis, as presented in Table 5, do not support the above expectation consistently. Out of the 24 correlations observed between the ranks of the “most admired” undergraduate programs and the pass rates of their graduates in the registration exams, only 12 correlations were statistically significant, indicating that ARE results may have some association with the “most admired” undergraduate rankings. In contrast, out of the 24 correlations observed between the ranks of the “most admired” graduate programs and the pass rates of their graduates in the registration exams, only three correlations were statistically significant, indicating that ARE results may have very little association with the “most admired” graduate rankings.
As shown in Table 5, the correlations between the ARE passing rates and the rankings of the architecture programs in the "most hired from" categories are inconsistent at best. The programs in the 2018 "most hired from" categories show only one statistically significant correlation with the passing rates in the ARE exams. In contrast, the programs in the 2019 "most hired from" categories show several statistically significant correlations with the passing rates in the ARE exams. These inconsistent results raise questions about the ability of DI rankings to define the relevance of professional architecture programs.
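For readers who want to see the shape of this kind of analysis, the sketch below correlates a set of program ranks with pass rates for the six ARE divisions and counts how many correlations reach significance. All numbers are randomly generated placeholders, the division abbreviations are those commonly used for ARE 5.0, and nothing here reproduces the study’s actual data or results.

```python
# A sketch of correlating program ranks with ARE division pass rates and
# counting significant correlations. All values are random placeholders.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
n_programs = 20
di_rank = np.arange(1, n_programs + 1)  # 1 = best rank (illustrative)

# Common abbreviations for the six ARE 5.0 divisions listed in the text.
divisions = ["CE", "PcM", "PA", "PDD", "PjM", "PPD"]

significant = 0
for div in divisions:
    pass_rate = rng.uniform(0.4, 0.9, n_programs)  # placeholder pass rates
    rho, p = spearmanr(di_rank, pass_rate)
    flag = "significant" if p < 0.05 else "not significant"
    print(f"{div}: rho = {rho:.2f}, p = {p:.3f} ({flag})")
    significant += int(p < 0.05)

print(f"{significant} of {len(divisions)} correlations significant at p < 0.05")
```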

4.4. Expenses

In an ideal world, an academic program should be ranked higher if the program provides the same quality of education as other programs but at a lower cost. In the real world, however, the opposite happens—a program generally gets punished with a poor rank for providing the same quality of education at a lower cost. Considering this, Ehrenberg notes that “no administrator in his or her right mind would take actions to cut costs unless he or she had to” [54].
DI does not include any questions on academic expenses, but its questions on learning environments for deans/administrators and students indirectly refer to academic expenses. More facilities and resources always require more money but provide no assurance for better academic quality. Studies have shown that, when expenses are used as a criterion to determine rankings, they directly encourage the inefficient use of resources by rewarding schools for spending more money, regardless of whether these expenditures contribute to academic quality [14].
To determine the relationship between DI rankings and academic expenditures, the average undergraduate tuition and fees of the following categories of the "most hired from" schools were compared: (1) "most hired from" schools with no "most admired" programs, (2) "most hired from" schools with "most admired" undergraduate programs, (3) "most hired from" schools with "most admired" graduate programs, and (4) "most hired from" schools with both "most admired" undergraduate and graduate programs. The findings presented in Table 6 indicate that the category 4 schools are the most expensive, with average tuition and fees of USD 38,968. They are followed by the category 3 schools (USD 26,786), the category 2 schools (USD 20,920), and, finally, the category 1 schools (USD 15,447). Simply put, DI ranks are generally better for more expensive programs.
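The comparison itself is a simple group-mean calculation. A minimal Python sketch follows, using a handful of invented school records (the school names, category assignments, and tuition figures are illustrative, not the study’s data).

```python
# A sketch of the group comparison behind Table 6: average undergraduate
# tuition and fees by "most admired" category. The records are invented.
import pandas as pd

schools = pd.DataFrame(
    {"school": ["A", "B", "C", "D", "E", "F"],
     "category": [1, 2, 2, 3, 4, 4],   # categories 1-4 as defined in the text
     "tuition_and_fees_usd": [15000, 20500, 21500, 27000, 38000, 40000]}
)

# Mean tuition and fees per category, from cheapest to most expensive group.
group_means = (schools.groupby("category")["tuition_and_fees_usd"]
               .mean()
               .sort_values())
print(group_means)
```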

5. Impacts of DI Rankings

Due to the limitations discussed above, DI rankings continue to have significant negative impacts on professional architecture programs and on public opinions about these programs, which are discussed next.

5.1. Impacts of DI Rankings on Academic Programs

DI does not consider the factors related to admissions decisions in the ranking process. Without this information, it is not possible to determine how much students learn by the time they graduate from a program. If a program admits students from underprivileged backgrounds and makes them highly employable, should not the program be given more credit than a program that admits only students from privileged backgrounds and makes them equally employable? The likely consequence of not considering admission criteria in DI rankings is that they lead programs away from properly balancing the many factors that go into determining which applicants will improve the program quality and make better architects.
A lack of focus on admission criteria in the DI rankings can have worrisome secondary effects on architecture programs. While admitting students, programs may place emphasis on things with negative social consequences. A program wishing to get good design students may pay more attention to portfolios than GPAs for many middle-range students, even though students receiving good GPAs are generally well-rounded, hardworking, and persistent. For example, a program may select a 3.0 GPA student with a good portfolio over a 3.25 GPA student with an average portfolio, not taking into account that not all high schools provide opportunities that go into making a good portfolio. While a program should be free to choose its admission criteria, a ranking system can send a wrong message to the program if its admission criteria are not considered when being ranked.
There can be other secondary effects on architecture programs of DI’s lack of focus on admission criteria. Since the employability of graduates is an important factor in ranking, a program may want to improve its ranking by accepting applicants who appear to have better employment prospects. It is not difficult to identify these prospects. They generally come from wealthy families and suburban high schools and have contacts with the privileged of society. Favoring privileged students over underprivileged students may harm a program and the profession by reducing diversity.
Another impact of DI rankings is that they create an incentive for professional architecture programs to reduce class sizes, which affects programs in private and public institutions differently. With smaller classes, it is easier for programs to improve some of the data DI uses. To maintain the quality of learning and teaching, however, a program with smaller class sizes needs to increase tuition and fees. This can be done easily in private schools but not in public schools. Monitored by state boards and legislatures, public schools cannot increase their tuition and fees at will. On many occasions, despite significant budget cuts, public schools are asked to keep their tuition and fees flat. Therefore, the only way they can continue to provide an education at the same level despite budget cuts is to increase class sizes, which is the opposite of what is needed to improve quality. Over time, this phenomenon may translate into better rankings for private schools and worse rankings for public schools.
Yet another impact of DI rankings is that they affect professional architecture programs through their effects on tuition and fees. Since having more financial resources matters to DI rankings, it is in the best interest of a program to raise tuition and fees. If such increases are not essential, the program may choose to give much of the additional revenue back to students in the form of scholarships. This will probably raise the rank of a program without making significant changes elsewhere. The unwelcome side effect is that, as tuition rises each year, access to education for students of limited financial means decreases, regardless of whether there is a scholarship or not. Not knowing whether they will get scholarships, these students may choose not to apply to any expensive programs in the first place.
DI rankings also affect professional architecture programs through their effects on resource allocation within a program. When programs shift resources to improve some DI ranking indicators, they also take away resources from other indicators not included in the ranking. This raises the question of whether those changes in resource allocation improve the academic quality. Since the single largest factor in DI rankings is a program’s reputation among professionals, programs seeking to raise their reputations may start spending a substantial sum of money on their media presence through glossy publications, high-quality videos, and flashy awards. However, it is doubtful that any attempts to increase the visibility of a professional architecture program would help improve its academic quality and help make better architects.
Finally, one should be reminded that it is not always bad that the DI ranking system does not consider several things included in other ranking systems. For example, pass rates in registration exams have been a factor in many ranking systems of professional programs [5,55]. It is not clear, however, if DI already considers registration exams as an important factor in the rankings. Any assumption that DI considers registration exams in its rankings may have negative effects on a program seeking to improve its rankings. If instructors are told to improve pass rates for their subjects in registration exams, they may very well cover only the basics for the weakest students, because registration exams are not designed to assess the highest level of competency in a subject matter. While programs must do all that they can to improve registration exam outcomes, teaching a course for improved registration exam outcomes may not be a good pedagogical strategy.

5.2. Impacts of DI Rankings on Public Opinion

The consumer of a product, such as a car, a computer, a fridge, or a washing machine, can evaluate the product for themselves. Most buyers know the criteria they need most in a product. If they do not like a product, they can replace it. However, choosing an academic program is not buying a product. One degree cannot be exchanged for another. A high school student can take a tour, sit in a class, or live in a college dormitory for a day or two, but none of these can predict the college experience. Yet, the college experience one purports to buy based on the rankings outweighs all the other products of life. The market value of a good education rarely diminishes. Rather, it continues to increase, and students expect to sell their degrees in the job market many times throughout their careers. Therefore, like many other systems of academic ranking, the DI rankings of architecture programs seem to have tremendous power to impact public opinion, which cannot be ignored by the programs being ranked.
DI rankings are an easy and cheap way for architecture programs to tell the public how good they are. These programs do not feel responsible for the fact that DI rankings use individual opinions, however educated these opinions may be; that these opinions can only be surrogates for the quality of learning experience in a program; and that there is no way to verify if the data collected through DI’s online surveys truly help evaluate the quality of learning experience in a program. These limitations do not stop highly ranked programs from sharing their DI ranks with the public, because these ranks fulfill a public need for a third-party evaluation of architectural programs.
In essence, most architecture programs assume that DI rankings can be used as proxies for academic quality that the public should care about. By accepting this assumption, the public makes yet another assumption: that if a program is good in some criteria, as assessed by DI, then it must be good in all the other unstated and unmeasured criteria that interest them. Both assumptions are wrong. No one knows whether DI's survey findings correlate with the expectations of the public, such as the expectation that a highly ranked program will provide high-quality education and better professional and personal outcomes. So far, no one has studied how or whether DI's survey data can help measure the quality of architectural education and its impact on life after graduation. This study has, at least, made clear that both halo effects and feedback mechanisms appear in DI rankings: a program's present rank can be affected by the rank of the college in which it sits and by the program's own past ranks.
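For readers who want to see the shape of such a feedback check, the following minimal sketch, which is not the author's actual analysis pipeline, correlates hypothetical ranks from two consecutive survey years; persistently high correlations across many year pairs would be consistent with a feedback effect.

```python
# A minimal sketch (not the author's actual analysis pipeline): checking whether
# a program's previous DI rank is associated with its current rank. The rank
# values below are hypothetical placeholders, not real DI data.
from scipy.stats import pearsonr, spearmanr

rank_prev = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]   # hypothetical ranks, year t-1
rank_curr = [1, 3, 2, 4, 6, 5, 8, 7, 10, 9]   # hypothetical ranks, year t

r, p_r = pearsonr(rank_prev, rank_curr)        # linear association
rho, p_rho = spearmanr(rank_prev, rank_curr)   # rank-order association

print(f"Pearson r = {r:.3f} (p = {p_r:.3f})")
print(f"Spearman rho = {rho:.3f} (p = {p_rho:.3f})")
# Persistently high year-to-year correlations (as in Table 3) would be
# consistent with a feedback effect: past rank shaping current rank.
```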
Similar to most rankings, DI rankings have created a prisoner's dilemma for architecture programs across the country. Because the rankings are zero-sum, when one program rises, another must fall. It is not clear how this constant reshuffling benefits society. A graduate from a highly ranked architecture program may easily get a good job, while, for the same reason, a graduate from a poorly ranked program may struggle to find one. Even when both graduates are judged equally competent to serve the profession and society by the accreditation body, DI rankings create an artificial situation in which society is deprived of some of what these graduates could give back to society and the profession.
When hiring our next architect, should we take the DI rankings of architecture programs seriously and prefer a graduate from a highly ranked program with excellent design skills who lacks social responsibility over a graduate from a lowly ranked program with excellent design skills who takes social responsibility seriously? The public interest is not well-served when our future architects have good design skills but lack social responsibility. Society is weaker if our architecture programs support poorly done rankings for immediate gains. The costs of admitting the wrong students into our programs or hiring the wrong architects into our offices are not limited to the profession of architecture alone. The work of architects can have profound effects on society and the environment, so if architecture programs are admitting the wrong students, the effects may be broadly felt. Put simply, it is possible that DI rankings are harming architectural programs, the public, and the society they claim to serve.

6. Suggestions for Improvement

Are the rankings of architecture programs necessary? Supporters argue that DI rankings allow a person to find a range of architecture programs they might be interested in. However, that function can easily be performed using the information presented on the websites of programs, schools, ACSA (Association of Collegiate Schools of Architecture), and the NAAB. These websites contain more in-depth information on architecture programs than the rankings provide. While research on these programs may take some time, it may be time well-spent. However, the rankings will always remain useful for those who may not want to do the necessary research or who may not be able to make a decision based on all the publicly available information about the programs they find interesting.
For those who want rankings like DI's rankings of architecture programs, can the rankings be done differently to serve them better? In a data-driven world, the organizations and individuals that produce rankings have access to a tremendous amount of data. Yet, a single sensible ranking system that satisfies every user is highly unlikely. Equally unlikely is that the market will fix the problems of the current rankings without the active engagement of the students and parents looking for a program, the academic programs being ranked, and the organizations doing the rankings. Some suggestions to promote such active engagement are provided below.

6.1. Consider Alternative Processes to Improve Ranking Systems

For many, the DI rankings of architecture programs eliminate the need for individual research comparing programs. For them, these rankings provide a clear basis for decision-making and give answers instead of arguments. Perhaps the best hope for change, then, is to develop many rankings, each providing a simple answer to a different question about architectural programs. It is better if such rankings are not limited to those made from a supposedly neutral and general perspective. They should emphasize special interests for improved diversity and utility. They should allow consumers to evaluate programs using criteria that most closely reflect their own preferences or to take the average of a program's rank across many different qualities and specialty areas. As a result, they may mitigate the perverse incentives created by a hegemonic ranking system. If rankings give architecture programs opportunities to highlight a wider range of specialty fields, these programs may be better able to stand out in a sea of similarities and to position themselves well on campus and in front of prospective students and their parents.
In the US, urban planning programs have attempted, with some controversy, to create a set of academic performance indicators ranging from student diversity to faculty projects without integrating them into an overall ranking system [56,57]. This allows schools to monitor and advertise their performances on a subset of indicators that reflect their values—for example, student professional registration, community engagement activities, or research publications. These kinds of systems do not create an overall ranking but, rather, many comparisons and are a way of valuing schools with different missions.
To further improve how architecture programs are compared, experts in educational evaluation can be consulted. These experts may suggest criteria that DI or other organizations ranking architecture programs may not otherwise consider. They may also suggest closely observing architecture programs to learn about them and to validate any numerical indicators used in rankings. However, no ranking organization currently appears interested in the extensive and expensive site visits needed to assess architecture programs for ranking purposes.
In lieu of ranking lists, many suggest listing schools in broader “quality bands”, in which each school belonging to the same band would be considered largely of equal quality [58]. One example of the listing of universities and colleges in quality bands is provided by the Teaching Excellence Framework or the Teaching Excellence and Student Outcomes Framework (TEF) by the UK Department of Education. The aim of this exercise, carried out by the Office for Students (OfS), has been to look at what UK universities and colleges are doing to ensure excellence in teaching, learning, and student outcomes, in addition to meeting the national quality requirements for UK higher education. OfS carried out TEF assessments in 2016–2017, 2017–2018, and 2018–2019, according to the UK Department of Education’s specifications. The awards were judged by an independent panel of students, academics, and other experts using a range of official data, combined with a detailed statement from each higher education provider. Universities and colleges that participated in TEF assessments received either Gold, Silver, Bronze, or a provisional award (meaning the higher education provider met the rigorous national quality requirements and entered the TEF but did not have the opportunity to be fully assessed) (https://www.officeforstudents.org.uk/advice-and-guidance/teaching/about-the-tef/, accessed on 20 August 2022).
Of course, quality bands, such as the Gold, Silver, Bronze, and provisional awards defined by the TEF, may not be as appealing as ranking lists. These bands also raise the question of granularity. While greater granularity provides more information about excellence and better reflects the complexity of educational provision, it also requires more resources. For example, in the case of the TEF, a review report recommended that higher education providers be awarded both an institutional rating and a rating for each of the following four areas: teaching and learning environment, student satisfaction, educational gains, and graduate outcomes [59]. Do these areas represent the right level of granularity? Do they represent everything that needs to be considered in ranking academic institutions? Rigorous answers to these questions do not yet exist.
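To show how banding differs from strict ranking, the schematic sketch below groups programs into coarse bands by a composite score. It is only an illustration under invented scores and cutoffs; the TEF itself relies on panel judgment, not a formula.

```python
# A schematic sketch only: TEF awards are made by an expert panel, not by a
# formula. This merely illustrates grouping programs into coarse quality bands
# by composite score instead of reporting a strict ordinal ranking.
# All scores and cutoffs below are hypothetical.

scores = {
    "Program A": 82.1, "Program B": 79.4, "Program C": 77.8,
    "Program D": 69.0, "Program E": 66.3, "Program F": 52.7,
}

def band(score: float, cutoffs: tuple = (75.0, 60.0)) -> str:
    """Map a composite score to a coarse quality band."""
    if score >= cutoffs[0]:
        return "Gold"
    if score >= cutoffs[1]:
        return "Silver"
    return "Bronze"

for name, score in sorted(scores.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{name}: {band(score)}")
# Programs within a band are treated as broadly comparable; no within-band
# order is reported, which avoids exaggerating small score differences.
```

The design choice here is the trade-off discussed above: fewer, wider bands hide small differences but require less data and fewer resources to justify, while finer granularity demands more evidence per provider.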

6.2. Consider “Third Mission” to Improve Ranking Systems

Even when there are reasons to use rankings, a challenge for any ranking system of academic programs has always been to find robust, objective, outcome-based metrics that are easy to collect and analyze, that reflect the needs of a wide variety of users, and that balance quantitative measures with wider, more qualitative assessments of student learning and societal impact. In architecture, little consensus exists on how to do so. An active dialogue among all those involved in the education, production, and use of architecture is necessary to develop meaningful approaches to defining student learning and societal impact. Ideally, future rankings should reward architecture programs that provide, for example, the best training for graduates who take on influential roles in different organizations, often providing services to society free of cost.
Often termed as the “third mission” [60], the free-of-cost services of academic programs can be enormously diverse and can involve different funding and human resources. Continuing education and professional development courses, workshops, and seminars are the most common examples that demonstrate a commitment to extending the service of an academic program to the public sector. Technology transfer, programs for student “startups”, and internationalization are also a part of the “third mission”. With the enlargement of the target population and diversification of curricula, the “third mission” is a natural evolution of academic programs to establish nontraditional relations with the industry and national and international institutions. The “third mission” is also related to the idea of lifelong learning and regional development and may include projects that are directed to economic development, the integration of minorities, the acquisition of basic skills, and addressing environmental questions and issues related to public and population health [60].
Similar to most rankings, a comprehensive consideration of the "third mission" is absent from the DI rankings of architecture programs. Future rankings of architecture programs could consider, for example, the extent to which sustainability is embedded in the projects and courses of architecture programs, or could give credit for academic papers published by faculty in journals focused on the sustainable development goals (SDGs) or making multiple references to the SDGs. However, the mere inclusion of sustainability themes in a course does not make it effective in transforming the thinking or future actions of its participants. For courses, projects, and publications to be impactful, they must genuinely advance thinking, frame policies, and deliver insights that can be implemented. How to measure the societal impact of an architecture education for a ranking system remains anyone's guess. In this regard, enhancing quantitative metrics with qualitative assessments can be helpful. Ranking organizations could highlight innovations more qualitatively using individual examples of high-impact research; social impact projects; and efforts to introduce sustainability, empathy, and emotional intelligence into the architecture curriculum.
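As a crude illustration of the kind of quantitative proxy mentioned above, the sketch below counts explicit SDG references in invented publication abstracts. It is not a DI metric and not a validated measure; as argued above, any such count would still require qualitative review of actual impact.

```python
# A crude, illustrative proxy only (not a DI metric and not validated here):
# counting explicit SDG references in faculty publication abstracts.
# The abstracts below are invented examples.
import re

abstracts = [
    "This studio project addresses SDG 11 on sustainable cities and communities.",
    "A post-occupancy evaluation of daylighting in elementary school classrooms.",
    "Linking SDG 3 and SDG 13 through health-centered, climate-adaptive design.",
]

sdg_pattern = re.compile(r"\bSDG\s*\d{1,2}\b", flags=re.IGNORECASE)

for text in abstracts:
    mentions = sdg_pattern.findall(text)
    print(f"{len(mentions)} SDG reference(s): {mentions}")
```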

6.3. Consider Student Experiences before and after Graduation to Improve Ranking Systems

Like most current rankings, an obvious deficiency of DI rankings is that they use very little information about the quality of teaching and learning experiences. These experiences are a central concern for students and should receive more attention, although they are difficult to assess because teaching quality has many different aspects. To improve the ranking methods, one could look at the degree of student involvement in learning activities, including cocurricular and extracurricular activities. However, learning activities affecting student experiences vary widely from program to program. Some programs may offer a few expensive activities of very high quality, while others may offer many activities of lesser quality at a lower cost. Capturing and presenting some of that variety in the rankings would help applicants find a program well-tailored to their interests and aspirations. Currently, DI surveys students to evaluate their learning experiences, but it is not clear whether students have enough information about the things they are being asked. The National Survey of Student Engagement (NSSE) can instead serve as a model (https://nsse.indiana.edu/, accessed on 3 June 2022). NSSE (pronounced "Nessie") seeks to determine how successful colleges are at promoting the experiences that lead directly to student learning.
Another useful suggestion for rankers to consider is that, instead of surveying hiring professionals about the quality of a program's graduates or asking administrators about the success of recent graduates, it may be more useful to survey alumni to find out whether they got the jobs they wanted, whether they enjoy their work, and whether the work they do improves the health of the profession and society. Since graduates of an architecture program do not always seek jobs at architectural offices, such measures of individual success, if collected for many schools, could provide the rankings with useful information for prospective students.

6.4. Encourage Personalization of Ranking

In response to the problems of rankings by private organizations, government organizations have in recent years stepped into the ranking business. The European Union created U-Multirank, the Organization for Economic Co-operation and Development (OECD) launched its Assessment of Higher Education Learning Outcomes (AHELO), and the US government developed the College Scorecard [8]. These systems emphasize personalization, allowing users to examine multiple dimensions without any single, holistic ranking and to compare institutions with similar missions across several factors. Supporters argue that user-driven comparisons of multiple factors within groups of schools with similar missions better reveal the relevance of the indicators used in rankings than the aggregation of data in holistic rankings.
Rather than "prepackaged" rankings, in which researchers collect data and then decide on each factor's weight in the overall ranking, user-driven rankings use web technology to let prospective users assign their own weight to each factor and produce a ranking that best reflects their individual preferences. For example, after choosing a subject, the Centre for Higher Education Development (CHE) in Germany allows users to select five criteria in order of importance from an overall list of twenty-five. The resulting table lists the German universities in order of their performance on these five criteria and shows the relative position of each school on each criterion (https://www.che.de/en/ranking-germany/, accessed on 3 June 2022). DI's rankings of architecture programs based on areas of focus are somewhat similar, but the focus areas are not defined by its users.
Standard, one-size-fits-all commercial rankings have two important weaknesses. First, they represent only the judgments of their producers. Second, no single ranking can adequately reflect what all consumers are looking for in a program. Customizable rankings overcome these drawbacks by giving consumers information that is easily understood and accessible and that reflects their own preferences and definitions of quality. In the future, with advances in computer technology, customizable rankings of architecture programs may help promote true academic quality and fulfill their most important social role: providing useful information to the consumers of academic programs. It should, however, be noted that, even when users can choose their own components and weights, any ranking is only as good as the data it uses. Therefore, it is important that the data used in rankings, whether prepackaged or user-driven, more directly and accurately reflect the quality of professional architecture programs.
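A minimal sketch of such user-driven weighting follows, assuming hypothetical indicator scores already normalized to a common scale. The program names, indicators, and weights are illustrative only; none of this is DI data or DI's method.

```python
# A minimal sketch of a user-driven ranking, assuming hypothetical indicator
# scores already normalized to a 0-100 scale. Program names, indicators, and
# weights are illustrative only; none of this is DI data or DI's method.
from typing import Dict, List, Tuple

programs: Dict[str, Dict[str, float]] = {
    "Program A": {"teaching": 85, "sustainability": 70, "outcomes": 90, "cost": 40},
    "Program B": {"teaching": 75, "sustainability": 95, "outcomes": 80, "cost": 70},
    "Program C": {"teaching": 90, "sustainability": 60, "outcomes": 70, "cost": 85},
}

def personalized_ranking(data: Dict[str, Dict[str, float]],
                         weights: Dict[str, float]) -> List[Tuple[str, float]]:
    """Rank programs by a weighted average of only the indicators the user selected."""
    total = sum(weights.values())
    scores = {
        name: sum(indicators[k] * w for k, w in weights.items()) / total
        for name, indicators in data.items()
    }
    return sorted(scores.items(), key=lambda item: item[1], reverse=True)

# Two users, two different orderings from the same underlying data.
print(personalized_ranking(programs, {"sustainability": 0.6, "cost": 0.4}))
print(personalized_ranking(programs, {"outcomes": 0.8, "teaching": 0.2}))
```

The point of the sketch is the second sentence of the caveat above: the weighting mechanics are trivial, so the value of any such tool rests entirely on the quality and relevance of the underlying indicator data.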

7. Conclusions

Academic rankings are a necessary but difficult exercise, whether done at the institutional or program level. According to this study, the DI rankings of professional architecture programs show some of the same difficulties as the rankings of colleges and universities. Similar to most academic rankings, the data DI uses, the methods it uses to collect the data, and the way it uses the data for ranking purposes lack rigor and clarity.
The DI rankings of professional architecture programs are determined based on opinion surveys. No opinion survey for rankings can continue to provide unbiased data and results year after year. Left unchecked, programs may try to game the system for a better rank, even though any substantive improvement in academic quality requires time.
DI collects data from multiple sources, but it does not monitor, verify, or validate its data collection processes. It has not shown how it resolves conflicting opinions from different sources about architecture programs or how it uses descriptive qualitative and quantitative data in ranking these programs. Because the terms and phrases used in its surveys lack proper definitions, DI has no way to determine whether a participant's answers are wrong and, consequently, no disincentive against wrong answers.
DI surveys allow participants to rank only a handful of programs in any focus area of its rankings. As a result, many programs that do excellent work in many focus areas are left out of the rankings. This is significant, because the focus area rankings of DI could provide users some options to compare programs in their areas of interest that the “most hired” and “most admired” rankings of DI cannot provide.
Halo effects observed in other ranking systems were identified in DI rankings. Having a “most admired” professional architecture graduate program in a school could improve the chance of its undergraduate professional program to be ranked as a “most admired” program.
Feedback effects observed in other ranking systems were also identified in DI rankings. Similar to many other ranking systems, a program’s previous rank could affect its current rank in the DI ranking system. As a result, an early mistaken ranking of the program could impact its ranking in the later years.
The study also raised doubts about the relevance of DI rankings to the education provided by professional architecture programs. Highly ranked professional architecture programs are not always highly evaluated by their accreditors. If meeting the NAAB accreditation standards is essential to ensure a solid educational foundation for graduates and to ensure that graduates can lead the way in innovation, emerging technologies, and anticipating the health, safety, and welfare needs of the public, then it is quite possible that DI rankings are misleading their users.
Many other factors also raised doubts about the relevance of DI rankings. For example, these rankings did not show consistent associations with the passing rates of graduates in the architecture registration exams. Moreover, only some of the "most admired" programs were included in the "most hired" category of DI, and several "most hired" programs were not included in the "most admired" category. Finally, just as in many other rankings, more expensive programs performed better than less expensive programs in the DI rankings of architecture programs.
According to the study, the likely negative impacts of DI rankings on professional architecture programs are many. Since DI rankings do not consider admission criteria, they may lead a program away from properly balancing the many factors that determine which applicants will improve program quality and make better architects. Since a program's expenses tend to affect DI rankings, these rankings may create an incentive to reduce class size and increase tuition and fees. DI rankings may also encourage programs to spend substantial sums on media presence for better visibility and reputation without changing the quality of education. Most importantly, DI rankings may create an artificial situation in which the graduates of a poorly ranked program have difficulty finding the jobs they deserve, even though substantive differences in the quality of education may not exist among programs holding different ranks.
It is suggested that ranking organizations should allow consumers to evaluate professional architecture programs using their own preferences to avoid some of the problems of DI rankings. It is also suggested that DI or any other ranking organizations should seek expert help in the assessment and evaluation processes of professional architecture programs to improve the objectivity and relevance of their rankings. However, before anything else, an active dialogue among all those involved in the education, production, and use of architecture is necessary to determine how to measure the quality of professional architecture programs for better ranking.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The study used data available in the public domain.

Acknowledgments

The author wants to thank Monalipa Dash for help with the collection of data.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Hazelkorn, E. Rankings and the Reshaping of Higher Education: The Battle for World-Class, Excellence; Palgrave Macmillan: London, UK, 2015. [Google Scholar]
  2. Enserink, M. Who ranks the university rankers? Science 2007, 317, 1026–1028. [Google Scholar] [CrossRef] [PubMed]
  3. Bogue, E.G.; Hall, K.B. Quality and Accountability in Higher Education: Improving Policy, Enhancing Performance; Praeger Publishers: Westport, CT, USA, 2003. [Google Scholar]
  4. Posner, R.A. Law School Rankings. Indiana Law J. 2006, 81, 13–24. [Google Scholar] [CrossRef]
  5. Stake, J.E. The interplay between law school rankings, reputations, and resource allocation: Ways rankings mislead. Indiana Law J. 2006, 81, 229–270. [Google Scholar]
  6. Van der Wende, M. Rankings and Classifications in Higher Education: A European Perspective. In Higher Education; Smart, J.C., Ed.; Springer: Dordrecht, The Netherlands, 2008; pp. 49–71. [Google Scholar]
  7. Diver, C.S. Don’t be so quick to join the refusenik ranks. Currents 2007, 33, 48–49. [Google Scholar]
  8. Davis, M. Can College Rankings Be Believed? She Ji J. Des. Econ. Innov. 2016, 2, 215–230. [Google Scholar] [CrossRef]
  9. Rehmeyer, J. Rating the Rankings. Available online: https://www.sciencenews.org/article/rating-rankings (accessed on 30 May 2022).
  10. Diver, C.S. The Rankings Farce: ‘U.S. News’ and Its Ilk Embrace Faux-Precise Formulas Riven with Statistical Misconceptions. Available online: https://www.chronicle.com/article/the-rankings-farce (accessed on 30 May 2022).
  11. Forsyth, A. Great programs in architecture: Rankings, performance assessments, and diverse paths to prominence. Int. J. Archit. Res. 2008, 2, 11–22. [Google Scholar]
  12. Brewer, D.J.; Gates, S.M.; Goldman, C.A. Pursuit of Prestige: Strategy and Competition in U.S. Higher Education; Transaction Publishers: New Brunswick, NJ, USA, 2002. [Google Scholar]
  13. Hossler, D. The Problem with College Rankings. About Campus 2000, 5, 20–24. [Google Scholar] [CrossRef]
  14. Myers, L.; Robe, J. College Rankings: History, Criticism and Reform; Center for College Affordability and Productivity: Washington, DC, USA, 2009. [Google Scholar]
  15. Keith, B.; Babchuk, N. The Quest for Institutional Recognition: A Longitudinal Analysis of Scholarly Productivity and Academic Prestige among Sociology Departments. Soc. Forces 1998, 76, 1495–1533. [Google Scholar] [CrossRef]
  16. Lowry, R.C.; Silver, B.D. A Rising Tide Lifts All Boats: Political Science Department Reputation and the Reputation of the University. PS Political Sci. Politics 1996, 29, 161–167. [Google Scholar] [CrossRef]
  17. Carey, K. College Rankings Reformed: The Case for a New Order in Higher Education; Education Sector, Lumina Foundation for Education: Washington, DC, USA, 2006. [Google Scholar]
  18. National Architectural Accrediting Board. Conditions for Accreditation. Available online: https://www.naab.org/wp-content/uploads/2020-NAAB-Conditions-for-Accreditation.pdf (accessed on 1 June 2022).
  19. Dill, D.D. Convergence and diversity: The role and influence of university rankings. In University Rankings, Diversity, and the New Landscape of Higher Education; Brill Sense: Rotterdam, The Netherlands, 2009; pp. 97–116. [Google Scholar]
  20. Jöns, H.; Hoyler, M. Global geographies of higher education: The perspective of world university rankings. Geoforum 2013, 46, 45–59. [Google Scholar] [CrossRef]
  21. Kehm, B.M. Global university rankings—Impacts and unintended side effects. Eur. J. Educ. 2014, 49, 102–112. [Google Scholar] [CrossRef]
  22. Meredith, M. Why do universities compete in the ratings game? An empirical analysis of the effects of the US News and World Report college rankings. Res. High. Educ. 2004, 45, 443–461. [Google Scholar] [CrossRef]
  23. Sanoff, A.P.; Usher, A.; Savino, M.; Clarke, M. (Eds.) College and University Ranking Systems: Global Perspectives and American Challenges; Institute for Higher Education Policy: Washington, DC, USA, 2007. [Google Scholar]
  24. Aguillo, I.; Bar-Ilan, J.; Levene, M.; Ortega, J. Comparing university rankings. Scientometrics 2010, 85, 243–256. [Google Scholar] [CrossRef]
  25. Çakır, M.P.; Acartürk, C.; Alaşehir, O.; Çilingir, C. A comparative analysis of global and national university ranking systems. Scientometrics 2015, 103, 813–848. [Google Scholar] [CrossRef]
  26. Carey, K. Introduction: A different kind of college ranking. Wash. Mon. 2013, 45, 17–20. [Google Scholar]
  27. Moed, H.F. A critical comparative analysis of five world university rankings. Scientometrics 2017, 110, 967–990. [Google Scholar] [CrossRef]
  28. Taylor, P.; Braddock, R. International university ranking systems and the idea of university excellence. J. High. Educ. Policy Manag. 2007, 29, 245–260. [Google Scholar] [CrossRef]
  29. Chen, K.-h.; Liao, P.-y. A comparative study on world university rankings: A bibliometric survey. Scientometrics 2012, 92, 89–103. [Google Scholar] [CrossRef]
  30. Clarke, M. Weighing things up: A closer look at US News & World Report’s ranking formulas. Coll. Univ. 2004, 79, 3. [Google Scholar]
  31. Huang, M.-H. Opening the black box of QS World University Rankings. Res. Eval. 2012, 21, 71–78. [Google Scholar] [CrossRef]
  32. Rauhvargers, A. Global University Rankings and Their Impact: Report II; European University Association: Brussels, Belgium, 2013. [Google Scholar]
  33. Shin, J.C.; Toutkoushian, R.K.; Teichler, U. University Rankings: Theoretical Basis, Methodology and Impacts on Global Higher Education; Springer Science & Business Media: Basel, Switzerland, 2011; Volume 3. [Google Scholar]
  34. Soh, K. The seven deadly sins of world university ranking: A summary from several papers. J. High. Educ. Policy Manag. 2017, 39, 104–115. [Google Scholar] [CrossRef]
  35. Steiner, J.E. World university rankings—A principal component analysis. arXiv 2006, preprint. arXiv:physics/0605252. [Google Scholar]
  36. Thompson-Whiteside, S. Zen and the Art of University Rankings in Art and Design. She Ji J. Des. Econ. Innov. 2016, 2, 243–255. [Google Scholar] [CrossRef]
  37. Usher, A.; Medow, J. A global survey of university rankings and league tables. In University Rankings, Diversity, and the New Landscape of Higher Education; Brill Sense: Rotterdam, The Netherlands, 2009; pp. 1–18. [Google Scholar]
  38. Van Raan, A.F. Challenges in ranking of universities. In Proceedings of the Invited Paper for the First International Conference on World Class Universities, Shanghai, China, 16–18 June 2005; Shanghai Jiao Tong University: Shanghai, China; pp. 133–143. [Google Scholar]
  39. Henderson, A.E. Predicting US News & World Report Ranking of Regional Universities in the South Using Public Data. Ph.D. Thesis, Colorado State University, Fort Collins, CO, USA, 2017. [Google Scholar]
  40. Holmes, R. The THES university rankings: Are they really world class? Asian J. Univ. Educ. 2006, 2, 1–14. [Google Scholar]
  41. Morse, R.J. The real and perceived influence of the US News ranking. High. Educ. Eur. 2008, 33, 349–356. [Google Scholar] [CrossRef]
  42. Wint, Z.; Downing, K. Uses and abuses of ranking in university strategic planning. In World University Rankings and the Future of Higher Education; IGI Global: Hershey, PA, USA, 2017; pp. 232–251. [Google Scholar]
  43. Bastedo, M.N.; Bowman, N.A. US News & World Report college rankings: Modeling institutional effects on organizational reputation. Am. J. Educ. 2010, 116, 163–183. [Google Scholar]
  44. Bowman, N.A.; Bastedo, M.N. Anchoring effects in world university rankings: Exploring biases in reputation scores. High. Educ. 2011, 61, 431–444. [Google Scholar] [CrossRef]
  45. Monks, J.; Ehrenberg, R.G. The Impact of US News and World Report College Rankings on Admission Outcomes and Pricing Decisions at Selective Private Institutions; National Bureau of Economic Research: Cambridge, MA, USA, 1999. [Google Scholar]
  46. Pike, G. Measuring Quality: A Comparison of U.S. News Rankings and NSSE Benchmarks. Res. High. Educ. 2004, 45, 193–208. [Google Scholar] [CrossRef]
  47. Clarke, M. The impact of higher education rankings on student access, choice, and opportunity. High. Educ. Eur. 2007, 32, 59–70. [Google Scholar] [CrossRef]
  48. Dobrota, M.; Bulajic, M.; Bornmann, L.; Jeremic, V. A new approach to the QS university ranking using the composite I-distance indicator: Uncertainty and sensitivity analyses. J. Assoc. Inf. Sci. Technol. 2016, 67, 200–211. [Google Scholar] [CrossRef]
  49. Guarino, C.; Ridgeway, G.; Chun, M.; Buddin, R. Latent variable analysis: A new approach to university ranking. High. Educ. Eur. 2005, 30, 147–165. [Google Scholar] [CrossRef]
  50. Jeremic, V.; Bulajic, M.; Martic, M.; Radojicic, Z. A fresh approach to evaluating the academic ranking of world universities. Scientometrics 2011, 87, 587–596. [Google Scholar] [CrossRef]
  51. Grewal, R.; Dearden, J.A.; Lilien, G.L. The university rankings game: Modeling the competition among universities for ranking. Am. Stat. 2008, 62, 232–237. [Google Scholar] [CrossRef]
  52. Horn, A.S.; Hendel, D.D.; Fry, G.W. Ranking the international dimension of top research universities in the United States. J. Stud. Int. Educ. 2007, 11, 330–358. [Google Scholar] [CrossRef]
  53. Lukman, R.; Krajnc, D.; Glavič, P. University ranking using research, educational and environmental indicators. J. Clean. Prod. 2010, 18, 619–628. [Google Scholar] [CrossRef]
  54. Ehrenberg, R.G. Tuition Rising: Why College Costs So Much; Harvard University Press: Cambridge, MA, USA, 2000. [Google Scholar]
  55. Seto, T.P. Understanding the US News Law School Rankings. SMU Law Rev. 2007, 60, 493–576. [Google Scholar]
  56. Stiftel, B.; Forsyth, A.; Dalton, L.; Steiner, F. Assessing Planning School Performance: Multiple Paths, Multiple Measures. J. Plan. Educ. Res. 2008, 28, 323–335. [Google Scholar] [CrossRef]
  57. Stiftel, B.; Rukmana, D.; Alam, B. A national research council-style study. J. Plan. Educ. Res. 2004, 24, 6–22. [Google Scholar] [CrossRef]
  58. Clarke, M. News or noise? An analysis of US News and World Report’s ranking scores. Educ. Meas. Issues Pract. 2002, 21, 39–48. [Google Scholar] [CrossRef]
  59. Pearce, D.S. Independent Review of the Teaching Excellence and Student Outcomes Framework (TEF): Report to the Secretary of State for Education. Available online: www.gov.uk/government/publications (accessed on 1 August 2019).
  60. Montesinos, P.; Carot, J.M.; Martinez, J.M.; Mora, F. Third mission ranking for world class universities: Beyond teaching and research. High. Educ. Eur. 2008, 33, 259–271. [Google Scholar] [CrossRef]
Table 1. Correlations between the USNWR national rankings of universities and the "most admired" DI rankings of their undergraduate architecture programs for the years 2019 and 2020.

| | | US News Ranking 2019 | US News Ranking 2020 |
|---|---|---|---|
| Most Admired UG 18–19 | Pearson Correlation Coefficient (r) | 0.594 ** | 0.639 ** |
| | N | 27 | 28 |
| Most Admired UG 19–20 | Pearson Correlation Coefficient (r) | 0.607 ** | 0.650 ** |
| | N | 27 | 28 |

** Correlation is significant at the 0.01 level (2-tailed).
Table 2. The frequencies of rankings for undergraduate programs for schools with the "most admired" undergraduate architecture programs only and for schools with the "most admired" undergraduate and graduate architecture programs in the years between 2010 and 2019.

| | Number of Programs | Average Number of Times Selected as Most Admired UG Program |
|---|---|---|
| Schools with "Most Admired" Undergraduate Program Only | 23 | 5.57 |
| Schools with "Most Admired" Undergraduate and Graduate Programs | 17 | 12.24 |
Table 3. Year-to-year correlations between the ranks of the "most admired" undergraduate and graduate architecture programs since 2008–2009.

| Correlations Between UGP Rankings | Pearson Correlation Coefficient (r) |
|---|---|
| Most Admired UGP 08–09 and Most Admired UGP 09–10 | 0.662 ** |
| Most Admired UGP 09–10 and Most Admired UGP 10–11 | 0.719 ** |
| Most Admired UGP 10–11 and Most Admired UGP 11–12 | 0.821 ** |
| Most Admired UGP 11–12 and Most Admired UGP 12–13 | 0.657 ** |
| Most Admired UGP 12–13 and Most Admired UGP 13–14 | 0.787 ** |
| Most Admired UGP 13–14 and Most Admired UGP 14–15 | 0.907 ** |
| Most Admired UGP 14–15 and Most Admired UGP 15–16 | 0.956 ** |
| Most Admired UGP 15–16 and Most Admired UGP 16–17 | 0.966 ** |
| Most Admired UGP 16–17 and Most Admired UGP 17–18 | 0.933 ** |
| Most Admired UGP 17–18 and Most Admired UGP 18–19 | 0.983 ** |
| Correlations Between GP Rankings | |
| Most Admired GP 08–09 and Most Admired GP 09–10 | 0.744 ** |
| Most Admired GP 09–10 and Most Admired GP 10–11 | 0.893 ** |
| Most Admired GP 10–11 and Most Admired GP 11–12 | 0.811 ** |
| Most Admired GP 11–12 and Most Admired GP 12–13 | 0.660 ** |
| Most Admired GP 12–13 and Most Admired GP 13–14 | 0.642 ** |
| Most Admired GP 13–14 and Most Admired GP 14–15 | 0.828 ** |
| Most Admired GP 14–15 and Most Admired GP 15–16 | 0.829 ** |
| Most Admired GP 15–16 and Most Admired GP 16–17 | 0.854 ** |
| Most Admired GP 16–17 and Most Admired GP 18–19 | 0.821 ** |

** Correlation is significant at the 0.01 level (2-tailed). UGP: Undergraduate Program. GP: Graduate Program. Note: Data for most admired graduate programs 17–18 are missing.
Table 4. The number of "most admired" programs included in the "most hired from" programs of DI for the years 2018–2019 and 2019–2020.

| | Number of Programs | | 100+ Graduates (2018–2019) | 70–99 Graduates (2018–2019) | 50–69 Graduates (2018–2019) | 100+ Graduates (2019–2020) | 70–99 Graduates (2019–2020) | 50–69 Graduates (2019–2020) |
|---|---|---|---|---|---|---|---|---|
| Schools With No Most Admired Program | 9 | Included in | 4 | 3 | 2 | 4 | 3 | 2 |
| | | Not included in | 5 | 6 | 7 | 5 | 6 | 7 |
| Schools With Most Admired Undergraduate Programs | 23 | Included in | 2 | 2 | 5 | 3 | 5 | 6 |
| | | Not included in | 21 | 21 | 18 | 20 | 18 | 17 |
| Schools With Most Admired Graduate Programs | 23 | Included in | 3 | 5 | 2 | 3 | 6 | 3 |
| | | Not included in | 20 | 18 | 21 | 20 | 17 | 20 |
| Schools With Most Admired Undergraduate and Graduate Programs | 18 | Included in | 8 | 3 | 2 | 9 | 4 | 5 |
| | | Not included in | 10 | 15 | 16 | 9 | 14 | 13 |
Table 5. The correlations of the ARE passing rates of architecture programs and the "most admired" and "most hired from" DI ranks of these programs for 2018 and 2019.

| ARE Pass Rate | Most Admired UG 18–19 | Most Admired UG 19–20 | Most Admired Grad 18–19 | Most Admired Grad 19–20 | Most Hired 2018–2019 (100+ Graduates) | Most Hired 2018–2019 (70–99 Graduates) | Most Hired 2018–2019 (50–69 Graduates) | Most Hired 2019–2020 (100+ Graduates) | Most Hired 2019–2020 (70–99 Graduates) | Most Hired 2019–2020 (50–69 Graduates) |
|---|---|---|---|---|---|---|---|---|---|---|
| 2018 ARE Pass Rate—Construction & Evaluation | −0.337 * | −0.726 ** | −0.065 | −0.099 | −0.024 | −0.124 | 0.145 | −0.266 | −0.302 | −0.676 ** |
| 2018 ARE Pass Rate—Practice Management | −0.352 * | −0.573 * | −0.263 | −0.311 | 0.012 | −0.186 | −0.233 | −0.208 | −0.479 * | −0.693 ** |
| 2018 ARE Pass Rate—Programming & Analysis | −0.131 | −0.185 | 0.036 | −0.071 | 0.078 | −0.260 | −0.142 | −0.225 | −0.526 * | −0.425 |
| 2018 ARE Pass Rate—Project Development & Documentation | −0.197 | −0.562 * | −0.092 | −0.078 | 0.172 | −0.530 | −0.168 | −0.030 | −0.563 * | −0.638 * |
| 2018 ARE Pass Rate—Project Management | −0.345 * | −0.628 ** | −0.311 | −0.333 * | −0.106 | −0.259 | 0.078 | −0.533 * | −0.354 | −0.463 |
| 2018 ARE Pass Rate—Project Planning & Design | −0.326 * | −0.502 * | −0.158 | −0.248 | 0.018 | −0.325 | −0.053 | −0.222 | −0.470 * | −0.578 * |
| 2019 ARE Pass Rate—Construction & Evaluation | −0.259 | −0.233 | −0.314 * | −0.336 * | 0.206 | −0.171 | 0.035 | −0.137 | −0.168 | −0.453 |
| 2019 ARE Pass Rate—Practice Management | −0.354 * | −0.350 | −0.230 | −0.328 * | 0.047 | −0.446 | −0.454 | −0.418 | −0.583 * | −0.359 |
| 2019 ARE Pass Rate—Programming & Analysis | −0.168 | −0.454 | −0.197 | −0.133 | 0.042 | −0.304 | 0.146 | −0.344 | −0.425 | −0.398 |
| 2019 ARE Pass Rate—Project Development & Documentation | −0.163 | 0.142 | 0.070 | −0.024 | 0.272 | −0.311 | 0.155 | 0.096 | −0.407 | −0.244 |
| 2019 ARE Pass Rate—Project Management | −0.321 * | −0.669 ** | −0.198 | −0.299 | 0.182 | −0.224 | −0.274 | −0.291 | −0.242 | −0.414 |
| 2019 ARE Pass Rate—Project Planning & Design | −0.280 | −0.318 | −0.072 | −0.154 | 0.172 | −0.238 | 0.044 | −0.157 | −0.375 | −0.517 * |

* Correlation is significant at the 0.05 level (2-tailed). ** Correlation is significant at the 0.01 level (2-tailed).
Table 6. Average in-state undergraduate tuition and fees for different categories of the "most hired from" schools.

| | Undergraduate Tuition and Fees (In-State) |
|---|---|
| Most Hired Schools with No Most Admired Programs | $15,446.89 |
| Most Hired Schools with Most Admired Undergraduate Programs | $20,920.33 |
| Most Hired Schools with Most Admired Graduate Programs | $26,786.13 |
| Most Hired Schools with Most Admired Undergraduate and Graduate Programs | $38,967.56 |