1. Introduction
Expectations regarding the role of research evidence in educational improvement efforts have grown exponentially across the globe. In the U.S., beginning with No Child Left Behind (NCLB) and continuing through the Every Student Succeeds Act, federal funding for education requires the use of evidence in improvement efforts. More recently, in the wake of the COVID-19 pandemic, the imperative to implement effective strategies for mitigating unfinished learning and learning loss has heightened those expectations, with Elementary and Secondary School Emergency Relief (ESSER) funding requiring evidence-based interventions [
1,
2].
Underlying these expectations are a number of assumptions about how evidence use—including the use of education research specifically—should play out in school contexts [
3,
4]. For example, there is an underlying assumption that mandated or incentivized research use will lead to changes in decision-making processes and school improvement efforts. Similarly, there are assumptions about evidence use itself, including the emphasis placed on research in decision making, and the relevance and availability of research to address the challenges schools face—particularly the unprecedented challenges presented by the pandemic.
Perhaps most importantly, expectations assume that schools and educators have the capacity to find, access, interpret, and take up findings from research in their policies and practices. Empirical studies, discussed later in this paper, consistently reveal that evidence use is a multifaceted, complex practice [
5], influenced by various organizational conditions. Additionally, many aspects of capacity are important for supporting evidence-informed improvement, including individuals’ knowledge, skills, and beliefs, as are schools’ norms, culture, structure, and leadership. Despite these findings, the United States has seen relatively scant investment in building such capacity at the school level. Efforts in this area have predominantly focused on improving access to research through initiatives like clearinghouses [
6] and collaborative research endeavors like research–practice partnerships (RPPs) [
7]. However, there has been a notable lack of attention paid to understanding and building organizational capacity for evidence use within schools, in contrast to, for example, the U.K., where there have been longstanding efforts to promote “research engaged schools” (e.g., [
8,
9,
10]), albeit with mixed success [
11].
In addressing this gap, particularly in the U.S. context, it is important to recognize the pivotal role of principals in shaping school capacity for evidence-informed improvement. Principals serve as critical levers in integrating research into school practice, acting as gatekeepers and brokers who influence what research is incorporated into decision-making processes [
12,
13,
14]. They are also instrumental in mediating external policy demands within their organizational context, supporting sensemaking, and facilitating the implementation of evidence-based practices among staff (e.g., [
15,
16,
17,
18,
19,
20,
21]). Moreover, these leaders are a critical lever in organizational change, ultimately influencing a wide range of outcomes, ranging from student achievement to the creation of more affirming and equitable learning environments [
22,
23]. They also exert indirect influence over a number of critical aspects of schooling, including those in-school conditions that enable instructional improvement [
22,
24,
25]. Accordingly, leadership itself is a central component of, as well as a key mechanism for building, school capacity [
26,
27,
28].
Presently, there is little empirical evidence about school capacity in terms of evidence-informed improvement and, more specifically, the role of school leaders—principals in particular—in building capacity for this work. Amid calls for the greater use of research evidence, a lack of evidence limits our understanding of how to support schools and their leaders in engaging with research evidence and serves as a barrier to achieving goals relating to evidence use across the U.S. education system. This paper seeks to address this gap by first conceptualizing principal leadership in evidence use and, second, using this conceptual lens to examine large-scale survey data about school evidence-use practices and capacity. I am guided by the following overarching research question: In what ways can principals contribute to building schools’ capacity for evidence-informed improvement in the U.S.?
3. The Present Study
Drawing on the literature, I conceptualize the relationship between school capacity for research use and principals’ contributions to school capacity, as illustrated in
Figure 1. With this conceptualization in mind, the purpose of this paper is to answer the question:
In what ways can principals contribute to building schools’ capacity for evidence-informed improvement? This is accomplished by examining schools’ capacity for and use of research in the U.S. context (i.e., the right side of
Figure 1) and using principals’ contributions (i.e., the left side of
Figure 1) as a lens for identifying specific needs and opportunities for principals to build school capacity for the use of research in educational improvement. Specifically, I use large-scale survey data from a national sample of U.S. schools to describe (1) the extent to which research evidence informs school decisions, (2) schools’ capacity for research use, (3) principals’ capacity (e.g., knowledge and skills) for research use, and (4) the ways in which principals may contribute to improving schools’ capacity for research use.
Analyses feature cross-sectional survey data from a national study of schools’ use of research conducted by the Center for Research Use in Education between 2018 and 2020. The Survey of Evidence in Education-Schools (SEE-S) [
70] is organized around a conceptual framework [
71] that identifies multiple dimensions of schools’ research use practice and the factors that shape those processes. The larger project uniquely explores the role of different forms of evidence and information in school decision making at scale, enabling a landscape view of evidence-use practices in the context of U.S. schools.
3.1. Sample
The SEE-S was administered online to schools’ instructional staff, including teachers, coaches, other specialists, administrators, and paraprofessionals, and to a member of the district office—which I refer to collectively as practitioners or educators throughout this report—during the 2018–2019 and 2019–2020 school years. A total of 134 traditional public schools from 21 districts, as well as 20 schools from 10 charters (5 rural, 13 suburban, and 13 urban), were successfully recruited into the sample for the SEE-S field trial administration. These schools and districts represent 18 different states, with the proportions of elementary, middle, and high schools mirroring national proportions. The overall individual-level response rate for the SEE-S was 53%. Response rates by school ranged from 0% to 100%, with the average school response rate being 56%. The final sample comprised 4415 practitioners, including 25 district staff. About 4% (n = 181) of the sample identified as a school administrator (principal or assistant principal; hereafter, principals), 64% (n = 2818) identified as a classroom teacher, 10% (n = 420) identified as a special education teacher, 7% (n = 298) as an arts or elective teacher, and 6% identified as an interventionist or coach (n = 255), with the remaining respondents occupying other instructional support positions in schools. In this paper, I attend specifically to principals as school leaders, though school leadership can be construed more broadly to include other formal leaders (e.g., leadership teams, coaches), informal leaders, and all other respondents as school staff.
3.2. Data and Measures
The SEE-S survey was designed to capture multiple dimensions of school-based decision making and factors related to research use, as informed by the prior literature, and organized using the center’s conceptual framework [
71]. Our multi-year survey design process was informed by standards for educational and psychological assessment [
72,
73] and consisted of open-ended interviews with educators (
n = 18), researchers (
n = 28), and educational intermediaries (
n = 32); a blueprint for items; cognitive interviews with 35 educators; and two rounds of piloting across a total of 64 schools. To examine the properties of survey scales, we first conducted exploratory factor analysis with an oblimin rotation (weight of zero), consistent with our assumption that the sub-factors were correlated. The number of factors extracted for each set of items was determined based on the examination of scree plots and the interpretation of subsets of items relative to our measurement blueprint and conceptual framework; we then verified our findings via confirmatory factor analysis (CFA).
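The factor-retention step described above can be illustrated with a minimal sketch. This is not the Center's analysis code; it uses synthetic response data, and it substitutes the Kaiser criterion (eigenvalues of the item correlation matrix greater than 1) as a numerical stand-in for visual scree-plot inspection:

```python
import numpy as np

def scree_eigenvalues(responses: np.ndarray) -> np.ndarray:
    """Eigenvalues of the item correlation matrix, sorted in descending order.

    Plotting these against their rank yields the scree plot used to judge
    how many factors to extract.
    """
    corr = np.corrcoef(responses, rowvar=False)  # items are columns
    return np.linalg.eigvalsh(corr)[::-1]

# Synthetic data: 200 respondents answering 6 items driven by one latent factor
rng = np.random.default_rng(0)
latent = rng.normal(size=(200, 1))
items = latent + 0.5 * rng.normal(size=(200, 6))

eigs = scree_eigenvalues(items)
n_factors = int(np.sum(eigs > 1))  # Kaiser criterion: retain eigenvalues > 1
```

In practice, the oblimin rotation and the confirmatory follow-up would be run in dedicated factor-analysis software; the sketch covers only the factor-count decision.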
There are five sections in the survey that mirror the Center’s conceptual framework: the depth of research use in decision making, perspectives about research and practice, networks through which research travels, capacity, and brokerage. For the purposes of this paper, I focus on items that are consistent with the dimensions of capacity associated with research-informed improvement and which the prior literature suggests are connected to principals’ roles.
Appendix A includes more complete item descriptions and psychometric details, as appropriate. For further details on the survey design, please see author and colleagues [
74].
Evidence use. To capture current practices related to evidence use, I rely on items related to school decision making and the sources of evidence reported to be influential in those decisions. With reference to respondent-reported organizational decisions, educators were asked about fourteen types of evidence and the extent of their influence on the decision. For items relating to research evidence, we used an open-ended item asking for a piece of research that was used (e.g., name of a product, link, reference) to validate responses. Responses were validated using a rubric designed to (a) confirm the existence of research cited by the respondent, (b) classify the citation as direct versus indirect (e.g., providing information about the research source versus the name or entity who provided the research), and (c) classify the type of citation (e.g., a book, an author’s name, a research article, etc.—see author and colleagues [
74] for more information).
Individual capacity. Several items and scales within the SEE-S are used to measure individual and school capacity for evidence use. Staff knowledge and experience are captured using items asking about specific training and experience related to engaging with evidence. Skills are captured through a scale constructed from seven items related to self-reported capacity to critically consume research (α = 0.97). Beliefs about research are captured from educators’ perspectives on the value of research in addressing problems of practice. Five items indicate the extent of agreement with statements about the value of research. These have high internal consistency (α = 0.88) and are used to generate a scale based on item means. Additionally, capacity is conceptualized as access to external resources through professional networks. I capture personal networks for accessing research through open-ended items asking respondents to identify up to 10 individuals, 10 organizations, and 10 media sources they rely on to connect with educational research; these are then categorized as direct ties to research, ties through intermediary organizations, or local ties (e.g., within the school or district).
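The internal-consistency figures reported here (e.g., α = 0.97 for the critical-consumption items) can be reproduced mechanically. The following sketch uses synthetic Likert-style data rather than SEE-S responses; it shows the standard computation of Cronbach's alpha and a scale score built from item means:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) response matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)      # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)  # variance of the sum score
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Synthetic data: 500 respondents, 7 items driven by a common latent trait
rng = np.random.default_rng(1)
trait = rng.normal(size=(500, 1))
responses = trait + 0.3 * rng.normal(size=(500, 7))

alpha = cronbach_alpha(responses)     # high, since items share one trait
scale_score = responses.mean(axis=1)  # scale constructed from item means
```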
Organizational capacity. To indicate cultures of research use, I utilize SEE-S items related to school processes, such as guidelines and incentives for using research. Individual items provide insight into the prevalence of these elements across schools and also into their internal coherence as a scale (processes and incentives, α = 0.89). Other dimensions of culture related to research use, as noted in the prior literature, relate to the nature of relationships within schools. Items related to knowledge brokerage, including the sharing of research and other types of knowledge, are used as indicators of these aspects of culture. In particular, I focus on items asking about the frequency with which educators share research evidence as well as local research (district-, school-, or student-generated research). These data are combined into a scale (brokerage, α = 0.85) constructed from item means. Other dimensions of organizational capacity pertain to resources for supporting access to and engagement with research. The survey includes two sets of items that capture resources that support research use. The first is a two-part item about local structures; respondents were asked about the availability of these structures. Then, a follow-up item asked how frequently those structures supported educators in connecting research and practice. Structures include general school structures, such as PLCs, and research-specific structures, such as subscriptions to research-based periodicals.
A final dimension of organizational capacity is the decision process. Two open-ended items captured the nature of organizational decisions (i.e., decisions about policy and practice made at the school or district level that affect a significant number of teachers and/or students) made in the current or prior school year. Respondents were asked what decision was made and why (i.e., what challenge or problem did it address?). Thirty percent (n = 1343) of respondents answered the questions about an organizational decision. Sixty percent (n = 2660) of respondents were routed to the items focused on personal-practice decisions, which are not reported here. Notably, all roles were represented in both paths, with administrators more likely to report on organizational than individual decisions (68.5% versus 28.2%). The same 1343 individuals who reported on an organizational decision were then asked who participates in decision making, with specific reference to their earlier responses. Twenty categories of participants are listed, with the option to indicate involvement in gathering evidence, evaluating evidence, or making decisions, or no involvement. Because schools vary widely in their broader district and community context, I focus here on immediate members of the school community: principals, classroom teachers, special educators, coaches, support staff, parents, and students.
3.3. Analysis and Interpretation of Findings
As the purpose of this paper is to use national data to explore principals’ opportunities for and contributions to evidence-informed improvement, my analytical approach is descriptive, presenting summaries of three different item types. For closed-ended survey items and scales, I present frequencies, means, standard deviations, and other distributional statistics for responses to individual and organizational items in order to establish a portrait of capacity for research use. When describing principals’ capacity, I disaggregate their responses from other respondents and, where appropriate, test the statistical significance of these differences. When describing networks for accessing research, I utilize an ego network approach to analyze open-ended data about the resources to which educators turn. Responses are coded into categories (e.g., web-based resources, professional associations, etc.), which are subsequently coded as local, external, or directly related to research. Respondents’ sets of resources are then characterized in terms of the size and composition of their networks. Finally, when describing organizational decisions, I present summaries of codes about problems and decisions. These are generated from an emergent framework developed by the research team. Specifically, the content and types of responses were categorized through an iterative discussion in which different codes were created, tested with sample responses, and modified. Through this process, multiple categories of problems and decisions were identified. Inter-rater reliability reached 80% agreement among the research team.
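The ego network characterization described above can be sketched as follows. The tie labels and the category mapping here are hypothetical simplifications of the SEE-S coding scheme, which is more detailed; the sketch shows only the size-and-composition step:

```python
from collections import Counter

# Hypothetical mapping from coded tie types to the three broad categories
TIE_CATEGORY = {
    "colleague_same_school": "local",
    "district_office": "local",
    "professional_association": "intermediary",
    "media_source": "intermediary",
    "pd_provider": "intermediary",
    "university_researcher": "research",
    "research_organization": "research",
}

def network_composition(ties):
    """Return ego network size and the proportion of ties in each category."""
    counts = Counter(TIE_CATEGORY[t] for t in ties)
    n = len(ties)
    props = {c: counts.get(c, 0) / n for c in ("local", "intermediary", "research")}
    return n, props

# One respondent's reported ties (illustrative)
ties = ["colleague_same_school", "colleague_same_school", "district_office",
        "professional_association", "media_source", "university_researcher"]
size, comp = network_composition(ties)  # size 6; half of the ties are local
```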
Findings are interpreted through the lens of the four mechanisms through which principals may contribute to the capacity for evidence-informed improvement: developing human capital, influencing culture, leveraging resources, and shaping decision processes. Specifically, each set of findings was considered in terms of the needs or opportunities they suggested for principal leadership in these areas.
3.4. Limitations
There were several limitations to this study. First, the survey was designed for and administered to a national sample of U.S. schools. The U.S. context is distinct in its organization and governance, and conclusions should be understood with that in mind. Second, the survey relied on respondent perception and recall, offering important but partial insight into school capacity. As acknowledged widely in the research-use literature (e.g., [
75]), multiple methods are needed to fully understand a phenomenon. Third, the survey does not directly observe leadership for research use, but rather examines current school capacity and practice as a means of revealing leadership needs and opportunities. Additional research directly observing practice would make an important contribution to this growing area of work. Last, as described earlier, “research” and “use” are complex concepts and there are many different understandings of their meaning in research, policy, and practice. Survey responses therefore may reflect different ideas about research use. Relatedly, research is but one type of evidence schools are expected to use, and any findings are unable to speak to other evidence-use practices.
4. Results and Discussion
In this section, I present the results of the analyses conducted and interpret those findings through the lens of principal contributions, described in the conceptual framework. First, I present an overview of the use of research evidence in school decision making as the context for studying principals’ leadership in schools’ use of research for improvement. Second, I present results regarding the various dimensions of school capacity that are within the sphere of principals’ work and influence, as described earlier: staff knowledge, skills, beliefs, and networks; cultures, structures, and resources that support engagement with and use of research; and decision-making processes. These findings are offered as a means by which to identify opportunities for strengthening the role of research in school improvement, and by proxy, opportunities for principals’ leadership in those areas. Last, I present evidence of principals’ capacity and role in schools’ use of research, reflecting leadership as a form of school capacity.
4.1. What Influence Does Research Evidence Have on School Decisions?
Of the 1343 respondents who reported on an organizational decision, 64% reported that external research had some or heavy influence on the decision; only 34% of those reports were confirmed through our validation process, reducing the proportion of decisions influenced by validated research to 22%. Furthermore, 31 of 134 schools (23%) reported no organizational decisions in which research was influential. However, respondents also indicated that they use research outside of specific decisions. For example, 76% of respondents reported that their school/district used frameworks from research to organize improvement efforts, and 71% reported that research has provided a common language and set of ideas. Educators in our sample also reported that research is used to influence others to agree with a point of view (65%) or to mobilize support for important issues (62.7%).
These results suggest that research evidence does, in fact, influence decision making and improvement in schools—though perhaps not across all schools and not to the extent that one might hope, given the consequentiality of schools’ improvement actions. That said, it is also important to acknowledge that it is not reasonable to assume that all decisions should be based on research, nor that all decisions could be based on research. Such assumptions presuppose the relevance, availability, and quality of research evidence for all decisions and the alignment between research findings and school goals and needs—both of which are contested ideas. Nonetheless, there appears to be substantial opportunity to improve the role of research evidence in school improvement initiatives, which in turn warrants paying attention to the role of the principal. Moving forward in this section, I present findings around individual and school capacity which signal opportunities for principal leadership in strengthening that role.
4.2. How Can We Characterize School Capacity for Research Use and What Opportunities Arise for Principal Leadership in Research Use?
In this section, I present findings related to staff capacity, including knowledge and skills, beliefs about research use, and networks. This is followed by a discussion of the organizational dimensions of capacity, including culture, structures, and resources that support engagement with research, and lastly, decision processes, which include the improvement agenda and participatory decision making.
4.2.1. Staff Capacity
Regarding items related to
knowledge and skills, survey responses suggest that educators have limited experience and confidence using research, which in turn reflects opportunities for further growth. In terms of training or experiences (
Figure 2), educators report few experiences related to using research, with nearly a third having had none of the experiences listed and only another third having had more than one experience. Most often, educators were involved in collecting or analyzing data for a research project or engaging with research within a professional learning community (PLC); yet, still only a third reported those experiences. Perhaps not surprisingly, then, respondents report relatively low levels of confidence in terms of their ability to critically consume research. When considering the overall scale, the mean score was 2.17 (SD = 0.81), indicating that practitioners, on average, are only somewhat confident. Approximately a third of respondents indicated they were more than somewhat confident in each activity, and about 16% of respondents had the lowest possible value on the scale, indicating that they were not at all confident in critiquing the aspects of research described in our survey items.
Educators were also asked about their beliefs about the value of research in addressing meaningful problems of practice in schools. Overall, educators are mixed in their beliefs, with a mean scale score of 2.68 (SD = 0.61), which is in between agreement and disagreement on our response scale of 1 to 4. However, a few items suggested reasons for optimism. For example, most (83%) agreed or strongly agreed that research is actionable, though only half (55%) agreed that research considers the resources available to implement its findings.
Furthermore, in terms of professional networks for connecting with and accessing research, respondents across schools report relying largely on colleagues within their school or district, with on average 53% of their networks comprising local resources (SD = 0.38), compared to the 36% of their networks comprising intermediary sources, such as professional associations, media sources, curriculum, professional development providers, and others (SD = 0.30), and the 9% comprising researchers, research organizations, or research outlets (SD = 0.17).
Results suggest that most educators have little experience with or exposure to educational research and, correspondingly, report low confidence when critically consuming research, mixed beliefs about its value, and few means of connecting with research or researchers. This is consistent with prior findings suggesting educators may lack both confidence in their research-use abilities and the capacity to critically interpret research. This is perhaps not surprising, given the knowledge, skills, and experiences educational professionals at all levels of the system are expected to master. However, it is an area where school leaders have an opportunity to intervene. First, school leaders are often (though not always) involved in staffing their schools, which means that hiring decisions could include criteria related to experience or training in evidence use. School leaders are also often key decision makers regarding annual professional learning opportunities—both school-wide and in terms of attending conferences or other events. Further, they are often gatekeepers for direct school participation in educational research, a role which enables them to create (or limit) engagement with the research community directly. Such roles fit within the roles and responsibilities that characterize leadership for learning and effective school leadership, such as strategic hiring and the provision of opportunities for high-quality professional learning.
4.2.2. Organizational Capacity
Moving away from individuals’ capacity, I now turn to focus on the characteristics of schools. School
cultures for research use, as reflected in responses to items about processes and incentives, suggest moderately supportive cultures, with mean responses on an agreement scale from 1 to 4 of 2.51 (SD = 0.613). However, the large standard deviation indicates substantial variation in this dimension. Item-based responses (
Figure 3) reveal that about two-thirds of respondents agree that research use is a priority, but around half or less of respondents agree that there are expectations and incentives for using research. On the other hand, sharing research evidence appears to be a common practice, with 70% of respondents indicating they shared this resource at least once in the past year (
Figure 4).
The items capturing available school structures reveal variability, not only in the presence of structures but in terms of their utility in supporting engagement with research (
Figure 5). Notably, professional learning communities (PLCs), instructional coaches, and leadership teams are common structures across nearly all schools in our sample and are also the structures most frequently leveraged for engaging with research evidence. Specialized structures that are specific to research use—such as specific professional learning, subscriptions, and district research offices—are less commonly available. However, when they exist, they appear to be helpful for promoting engagement with research. While most respondents have at least one specialized support structure available within their schools (most often professional development on research use), access is not evenly distributed among educators. In fact, 15% of educators reported that none of these structures were available.
The findings regarding culture and resources reveal some evidence that the use of research is a priority, and that time is available for its use. However, limited expectations and incentives, coupled with the mixed beliefs about the value of research described earlier, may suggest a need for leaders to promote evidence use more explicitly as part of “how we do things”. This means moving from rhetoric to action, including investing resources such as time, space, and finances in efforts to support the use of research evidence.
The results show that common school structures (i.e., PLCs, instructional support teams, and instructional coaches) play an important supporting role in research use, at least for some schools. However, their widespread presence in schools—and the space and time they afford—is a potential opportunity for strengthening research use. Therefore, these common structures may hold untapped potential to support engagement with research in ways that impact student learning.
Second, specialized research-use supports, such as research offices and subscriptions to research journals, are not often available to educators, possibly contributing to a gap between stated priorities and actual practice. Specialized support can be an important contributor to organizational capacity, and school leaders seeking to clarify expectations and support access to and engagement with research can consider such investments. However, the limited availability of these supports may reflect not a lack of priority but a lack of resources (e.g., costs associated with accessing research databases can be prohibitive) and capacity (e.g., small school districts may not have an internal research office). These findings further underscore the importance of building school capacity primarily through common structures that support research use.
A final dimension of organizational capacity relates to the improvement agenda. The survey’s items solicit information about not only the types of organizational decisions that schools make, but also who participates in them. The educators reporting on problems and decisions covered extraordinarily diverse issues. Concerns included, among others, creating safe schools, improving academic performance, designing standards-based report cards, and implementing Response to Intervention (RTI). The most frequently reported organizational problems were related to academic performance (39%, n = 553), followed by non-academic issues (20%, n = 282) and systemic issues (16%, n = 224). The least commonly reported organizational problems were those related to community-centered issues (1%, n = 17). The most frequently reported organizational decisions were decisions to adopt something new (44%, n = 575), structural changes (17%, n = 216), and professional development (12%, n = 152). Not only did concerns vary in terms of content focus, but they also varied within content categories. For example, responses focused on technology could refer to training teachers to use devices, the adoption of digital course materials, or concerns about access to technology. Furthermore, educators within schools reported multiple organizational decisions; in 25% of schools, more than six major organizational decisions were named, with some reporting as many as 15 different decisions.
In the data, it was sometimes difficult to grasp how the solution reported addressed the problem described. Broadly defined problems were linked to broadly defined and difficult-to-interpret theories of change. Drawing on the examples shown in
Table 1, respondents describe the adoption of a new reading curriculum. Here, the problem is framed in two ways. One respondent states that the problem lies in students reading below the appropriate grade level, whereas the other states that the problem lies in the inadequacy of the reading program being replaced in the subsequent decision. The challenge here is in the clarity of the problem frame. The first frame is vague and offers no diagnosis for low performance, which likely makes it difficult to identify an evidence-based approach to improvement. The second frame points to the effectiveness of the reading program, which, again, does not offer a specific diagnosis of the key instructional issues that might explain its lack of effectiveness. This framing led decision makers to a wholesale shift in reading curriculum, which might or might not address the underlying causes of low performance. Several other examples suggest that this is not a unique challenge. For instance, one respondent described the challenges their school faced as “failure rate, student engagement in learning/ownership of learning”, and the decision made to address the problem was implementing a 1:1 technology initiative. Another example included addressing low reading scores by giving free books to students who completed reading logs.
Although the survey does not ask for fully articulated theories of change and is limited in the scope and description of problems and decisions, the nature of these reports suggests a number of challenges for research-informed improvement. First, improvement agendas—captured in the diverse array of problems and decisions reported by educators, even within schools—are incredibly diverse and often broad. This means that resources—including time, staff, and funding—must be distributed across many different issues and needs, and that a wide range of research evidence must be available and accessible to inform improvement efforts. Second, broadly defined (or ill-defined) problems may lead school decision makers into a time-consuming, frustrating search for potentially relevant research, in which they will likely encounter diverse and competing alternative solutions. This might diminish educators’ already mixed beliefs about the value of research. Third, selecting one of the many research-based solutions to a broadly defined problem is a high-risk venture when student learning is at stake. The selection of programs and practices ill-suited to the problem is unlikely to generate the desired change, resulting in the loss of critically important time, resources, and learning, and is again likely to negatively impact the perceived value of research.
School leaders have a role in addressing these challenges through facilitating collaborative sensemaking as well as diagnostic and tactical leadership. Through these agenda-setting roles, principals and other school leaders can focus the improvement agenda and direct valuable resources towards finding and using research evidence in more productive and impactful ways.
In terms of
who participates in research-informed decision making, most educators report some degree of shared decision making.
Figure 6 represents the proportion of organizational decision responses in which respondents indicated which stakeholders were involved, and in what ways they were involved. Principals were involved in making most organizational decisions (73.1%), and were the stakeholder most often involved in evaluating (40.4%) and collecting evidence (34.7%) as well (this item refers to evidence broadly rather than research evidence specifically). Notably, teachers were most likely to collect evidence that informed decisions (41.7%) and nearly as likely as principals to participate in its evaluation (38.1%), though much less likely to be involved in making the decision (42.0%). Coaches were also regularly involved in these aspects of decisions, as were special education teachers to a lesser extent. Other stakeholders, including support staff, students, and parents, were least likely to be engaged in the decision-making process.
These data suggest that, although principals are primary decision makers, there are many ways in which other school staff contribute to those decisions and to the use of evidence in those decisions. Principals who promote shared responsibility for collecting and evaluating evidence can leverage staff knowledge, expertise, and professional networks for research use, as described earlier. This also creates opportunities for different kinds of evidence to be used in planning and collaborative interpretation. Furthermore, participatory decision making that engages school staff in evidence use has implications for individual capacity. Specifically, the nature and quality of evidence use in decision making reflects the individual knowledge, skills, and beliefs of those participating; in schools with limited educator capacity in these areas, participation in these roles might not yield high-quality research evidence or use. On the other hand, participation can build capacity by supporting the development of research-related knowledge, skills, and beliefs if engagement and use are effectively modeled. Principals clearly have a role in determining the extent of shared decision making in schools, and this role intersects with their role in building educator capacity, as described earlier.
4.3. How Can We Characterize Principals’ Capacity for Research Use?
To this point, I have noted several dimensions of individual and school capacity over which principals may have influence and outlined how these actors are positioned to support the use of research in school improvement. However, several of these roles imply a level of capacity among principals themselves, which prior research suggests matters for evidence use. For example, school leaders who do not themselves value educational research may be less likely to invest resources in organizational structures or leverage routines to support its use. Similarly, school leaders who lack knowledge or skills related to using research may find it challenging to model research use in shared decision-making contexts. Therefore, it is important to explore data regarding principals’ own capacity.
When disaggregated from other school staff, principals were more likely to have had most of the research-use-related experiences listed in the survey.
Table 2 demonstrates that these differences are statistically significant, except for participation in professional development around using research and having taken another course beyond introductory statistics or research methods. The magnitude of these differences ranges from 7% (involvement in a research–practice partnership) to 27% (having taken an introductory statistics course). Furthermore, only 6% of principals reported not having
any of these experiences, compared with 30% of teachers. Relatedly, school leaders reported significantly greater confidence in critically consuming research than other school staff, with respective mean scores of 2.47 (SD = 0.730) and 2.16 (SD = 0.806, p < 0.001).
Nonetheless, no research-use-related opportunity was universally experienced by principals; the most common, having taken an introductory statistics course, was reported by 75% of principals, and most other opportunities were reported by far fewer. Similarly, though principals expressed greater confidence in using research, their mean response still fell below 3, which corresponds to between “somewhat” and “mostly” confident.
Additionally, school leaders’ networks for accessing information did not differ significantly from those of other school staff. However, principals’ networks, on average, had greater direct ties to research and fewer local resources (
Table 3). They were also significantly more likely to share research evidence, doing so 3 to 5 times or more than 5 times per year (
Table 4). This finding suggests that principals are key brokers of research evidence within their organizations, can facilitate or constrain the flow of information into schools, and may rely on more direct channels to access that research evidence (though variability in responses suggests this difference is not statistically significant).
A last point about principal capacity concerns their beliefs about the value of research evidence in addressing problems of practice. Mean scale scores for leaders (M = 2.79) were higher than those for school staff (M = 2.67, p = 0.014), but still fell short of overall agreement (a score of 3 on our scale). There were few statistical differences between school leaders and other school staff on individual items, with principals most often being less likely to strongly disagree with statements about educational research.
Overall, principals’ responses indicate greater capacity for engaging in research use, as measured by the items selected in the survey. Greater capacity in terms of access through networks and confidence critically consuming research may enable leaders to integrate research evidence more effectively into improvement decisions, as per earlier findings showing that principals were notably more likely to be involved in these decisions. Greater sharing of research and stronger beliefs about the value of research may also enable principals to model research use, serve as a resource for accessing research, and direct resources toward research-use activities.
5. Implications and Conclusions
This study explores school capacity—both individual and organizational dimensions—for research use in school improvement through the lens of the principalship in the U.S. context. Drawing on the prior literature describing principal leadership and the ways in which principals can support evidence use in schools, I present national survey data on school decision making as a means of revealing opportunities for principal leadership in research-informed improvement. The findings reveal several opportunities for building school capacity that are aligned with principals’ roles and areas of influence. While these findings are based on data collected in the U.S. context, the conceptual lens of principals’ contributions to school capacity for evidence use is likely to extend to other contexts where leaders engage in capacity-building roles, and it adds to a growing discussion in the literature (e.g., [
59,
60,
63]).
One set of opportunities relates to principal roles focused on the knowledge and skills of school staff. The data presented in this study suggest that teachers have limited experiences that relate to using research and have low confidence in critically consuming research, but have moderately positive perspectives about the value of research to practice. Principals’ roles in assessing and addressing professional learning needs, in hiring school staff, and in participating in research create opportunities to influence the knowledge, skills, and beliefs within a school community.
A second set of opportunities relates to creating supportive school conditions for research-informed improvement. The findings demonstrate that research use is a priority, and that sharing research is a relatively frequent occurrence in schools. On the other hand, data show that expectations, incentives, and structures that foster engagement with research are less common. The role of principals in allocating time and funding, and in leveraging structures and routines for engaging with research, can be used to promote greater alignment between expectations and actual improvement work. For example, research by Brown [
76] suggests that school and district leaders can adapt routines to support educators in “engag[ing] in a facilitated process of learning, designed to help them make explicit connections between research knowledge and their own assumptions and knowledge” (p. 389), and reports that, in their research, this work is transforming teaching and learning. Relatedly, by distributing leadership and promoting shared decision making through routines, including leadership teams, principals can maximize the knowledge, skills, and professional networks of staff [
36,
57]. This serves as an opportunity to model expectations for using research [
25].
A third set of opportunities relates to the role of principals in setting the improvement agenda for their schools. Reports about organizational decision making suggest that there is a need for principals to engage in diagnosing needs and focusing improvement efforts in ways that may make the process and value of using research clearer, which may result in more appropriate and impactful decisions. Similar points are emphasized in the research of Hoffman and Illie [
11], who identify specific challenges in research-informed improvement, including articulating and defining outcomes and identifying appropriate research to inform action, leading the authors to emphasize that such work is not merely a matter of tweaking practice but is far more substantial and, therefore, in need of particular forms of leadership.
These findings not only reaffirm the critical importance of leadership, but also offer several lessons for how we can support research-informed improvement and principals’ leadership in achieving evidence-use goals. First, empirical evidence on school capacity for research use suggests that there are not merely opportunities but systemwide needs to invest substantially in capacity building. This underscores how problematic it is to assume that increased demands and expectations for evidence use will lead to actual evidence use. Rather, in spite of rising expectations and multiple policies, schools still experience challenges using research, and specific investments in individual and organizational capacity for evidence use are warranted.
Second, some of the findings here give reason to be optimistic about research use in schools, including moderately positive beliefs about the value of research, widely held priorities for using research, and the leveraging of common school structures to support research engagement. These are positive developments and represent progress towards evidence-informed practice. However, the fact that they are not (yet) leading to more widespread use of research in school improvement signals a “knowing–doing” gap [
77]. For example, I find that educators value research but lack confidence and experience using it or lack routines and structures to engage with it. These findings echo calls for greater attention to capacity building, but also suggest some specific areas to target.
Third, as explored here, principals are a key lever in building school capacity, which makes their own capacity and preparation a critical issue in evidence-informed improvement. Principals demonstrate greater capacity for research use than other school staff, as measured by a limited set of indicators, yet those differences are often marginal, and capacity varies widely across principals and, consequently, schools. Therefore, an implication of this study is that greater attention should be paid to research-use- or evidence-use-specific supports for principals, both before and during their time as school leaders. This may happen in several ways.
One shift might occur in the discourse about effective leadership, frameworks for which often guide the development of standards for preparation, professional learning, and evaluation. Findings suggest that principal leadership for research use is aligned with the roles described in broader leadership frameworks. For example, Murphy and colleagues’ [
25] description of “leadership for learning” identifies the activities of principals that focus on vision, the instructional program, the curricular program, assessment, communities of learning, resource acquisition and use, organizational culture, and social advocacy. Another approach to understanding the work of principals comes from the principal effectiveness literature, with effectiveness tied to improving student outcomes (e.g., [
22,
24,
78,
79,
80]). This body of research suggests that effective principals engage in direct instructional leadership and management, including setting the improvement agenda; organizational leadership and management; and managing the external environment. Leadership support and preparation aligned to these frameworks may therefore already support leadership for research use indirectly. On the other hand, these existing frameworks rarely, if ever, explicitly address leadership for evidence-informed improvement, and it is unclear whether developing more knowledge and skills related to these roles translates into leadership in evidence use.
Therefore, it is likely important to build knowledge and skills more explicitly in leading research use, which evidence suggests has been rare to date [60]. This has been taken up to some degree in the U.S. through the Professional Standards for Educational Leaders (PSEL) [
81], and PSEL standards have been adopted for certification and program accreditation for principals and other leaders (e.g., by the Council for the Accreditation of Educator Preparation (CAEP)). Three standard elements connect leadership to research or evidence use. PSEL standard 6e articulates that leaders should “deliver actionable feedback about instruction and other professional practice through valid, research-anchored systems of supervision and evaluation to support the development of teachers’ and staff members’ knowledge, skills, and practice” (p. 14). PSEL standard 10f states that leaders should “assess and develop the capacity of staff to assess the value and applicability of emerging educational trends and the findings of research for the school and its improvement” (p. 18), while 10d states that leaders should “engage others in an ongoing process of evidence-based inquiry, learning, strategic goal setting, planning, implementation, and evaluation for continuous school and classroom improvement” (p. 18).
The inclusion of evidence use in these dimensions of leadership is a promising start towards strengthening principals’ preparedness to lead research-informed improvement, as standards are likely to trickle down into programs through alignment with accreditation processes. Furthermore, formal recognition of research-use leadership within standards and frameworks may promote change at scale, as most programs must be responsive to some set of standards, which may help to address the unequal distribution of capacity across schools and, ultimately, contribute to more systemic improvement. However, these represent three of nearly one hundred standard elements in PSEL, and still only some of the roles identified in this paper. Furthermore, the three standard elements represent complex leadership activities that require a range of strategies, knowledge, and skills for principals to effectively enact them. This means that additional guidance and support are likely to be needed.
Because research-use leadership falls largely within already-recognized leadership roles, one viable approach may be as simple as integrating a research or evidence-use lens into existing courses and activities, similar to Levin’s [
64] suggestion that all professional learning create the space to engage with evidence. However, preparation programs may also strengthen principals’ capacity by creating more research-related experiences—which were notably lacking among our sample—beyond the current curriculum. This could include research-centered coursework, participating in research, leading research, facilitating engagement with research, or other activities.
At the same time, change in principal preparation is difficult because of licensure, accreditation, and other processes, not to mention the already sizeable number of standards and practices that must be fitted into any single program. Therefore, another viable mechanism for capacity building is professional learning for leaders. Most educators, including principals, must participate in continuous learning, and professional learning providers can offer research-use-related options. I echo calls from others (e.g., [
60,
61,
64]) for attention to be paid to issues such as strategic engagement with evidence in the context of improvement initiatives, the critical evaluation and synthesis of different sources of evidence, the current state of research knowledge regarding key education issues and practices, and strategies for sharing research knowledge and using it in schools.
While principal preparation and professional learning may prove important, it is also important to consider how principals’ roles in leading research use may be shaped by their district context. As Brooks and colleagues [
60] note, principals’ practices in using evidence to make decisions are “tightly coupled with the practices of education districts in which they and their schools are nested” (p. 170). Principals may themselves lack the access, time, and support needed to engage with research, and district leaders can be an important influence on research use. Prior research has shown that evidence-related support in districts is important to the capacity for improvement [
82,
83,
84,
85]. Districts can also support principal learning through many of the strategies identified as related to principal leadership: offering professional learning centered on research use, setting expectations and modeling research use, encouraging connections with researchers, and allocating resources that support engagement with research. However, variability in districts’ commitments to and supports for evidence use may also contribute to the variability in school capacity found in this paper, making district-led efforts only a partial solution to building capacity system-wide.
To conclude, this paper contributes to the literature on both leadership and evidence use in several ways. First, I draw on the literature related to school capacity, school leadership, and evidence use to offer a conceptualization of
principal leadership for research use. This approach integrates the lenses of leaders’ use of research as individuals and their roles in building school capacity to highlight more explicitly the different mechanisms by which leaders contribute to school capacity for evidence use. Importantly, this work explores such mechanisms across diverse school contexts and at scale, providing evidence of needs and opportunities across the U.S. educational system. Although additional research on leadership for evidence use more broadly—including the use of data—is warranted, the findings presented here offer directions for current and future leadership preparation and development. They also underscore the need for evidence-use guidance that addresses the complexity of implementation and for significant investments in capacity building. I note, however, that the context of this research is specific to the United States and thus applies primarily to the roles and responsibilities of principals in a particular national context. Although leadership is widely regarded as influential in student outcomes and evidence-use expectations have risen globally, the role of school leaders and the conceptualization of leadership itself vary widely across the globe. As a result, the findings and recommendations should be considered in light of this limitation. Echoing calls for cross-cultural and comparative leadership research [
86], the study of leadership and of evidence-informed improvement would benefit considerably from further consideration of leadership for research use across international contexts.