Article

Showing What They Know: How Supervisors Express Their Assessment Literacy

1 School of Education, The College of New Jersey, Ewing Township, NJ 08618, USA
2 Center for Psychology in the Schools and Education, American Psychological Association, Washington, DC 20002, USA
3 College for Education and Engaged Learning, Montclair State University, Montclair, NJ 07043, USA
* Author to whom correspondence should be addressed.
Educ. Sci. 2024, 14(10), 1075; https://doi.org/10.3390/educsci14101075
Submission received: 26 August 2024 / Revised: 27 September 2024 / Accepted: 29 September 2024 / Published: 1 October 2024

Abstract
This study examines the assessment literacy of university-based student teaching supervisors in a teacher education program. The first author engaged in an inquiry community with three supervisors from the program. Using the four dimensions of the Approaches to Classroom Assessment Inventory (ACAI) framework, we sought to expose the specific phenomena of supervisors' articulation of the ACAI dimensions of assessment literacy. We organized our findings around the dimensions: assessment purpose, assessment process, assessment fairness, and assessment theory. We found that supervisors expressed multiple dimensions of assessment within their inquiry community meeting discussions and interviews and expressed varied knowledge and prioritization of each dimension. We also found that supervisors did not discuss the dimensions of the ACAI in isolation; instead, they illustrated the complex interplay among assessment and other pedagogical constructs. These findings provide an initial contribution to the literature on supervisors' assessment literacy, which could inform research and practice.

1. Introduction

That assessment literacy is important to the field of education sciences is made clear by the many assessment standards from countries such as Australia, Canada, New Zealand, the UK, and the USA [1] as well as the existence of this special issue dedicated to the topic. Early definitions of teacher assessment literacy often addressed the knowledge of what students know and can do, the knowledge of how to interpret assessment results, and how to apply this knowledge to foster student learning and improve educational decisions [2]. However, as scholars engaged in research surrounding assessment literacy, the conceptualization of this topic became more complex, with recent definitions including knowledge of context, culture, and socio-cultural influences [3,4,5]. With these evolving ideas of assessment literacy in mind, we define assessment literacy as teachers’ knowledge, skills, and dispositions surrounding assessment [6,7,8] that are used to create and enact assessment within complex classroom contexts and school settings [5,6].
In her recent review of the literature surrounding assessment literacy, Pastore [3] reiterated the call for teachers and scholars to consider context, culture, and socio-cultural influences in their understanding of assessment. It is also important to note that assessment literacy is not a product, but, instead, a developmental process that is shaped by multiple factors (e.g., context, opportunities to learn) [4,9]. Therefore, teachers throughout their career trajectory—pre-service [10] and practicing teachers [11]—need ongoing preparation and practice in assessment literacy and to engage in quality assessment practices [1].
Scholars have underscored the need for increased research on both pre-service and practicing teachers' assessment literacy [3,12]. However, without a comprehensive understanding of what these teachers need to know to be assessment literate, the development of standards or criteria to evaluate their assessment literacy becomes a fruitless endeavor [5]. In 2016, DeLuca and colleagues [13] introduced a framework known as the Approaches to Classroom Assessment Inventory (ACAI). This framework identifies teachers' approaches to classroom assessment across four dimensions: assessment purpose, assessment process, assessment fairness, and assessment theory [14]. While all teachers engage in these dimensions, the approaches they prioritize may vary. Table 1 provides a detailed description of these four dimensions, the different approaches teachers may prioritize, and an explanation of each.

1.1. Assessment Purpose

The assessment purpose dimension delineates three key reasons for why assessments are conducted: assessment OF, FOR, and AS Learning [14]. In this framework, assessment of learning (AoL) reflects a summative assessment of students’ attainment of learning objectives following some period of instruction and for which a grade or mark is assigned. Assessment for learning (AfL) reflects formative assessment practices used by both teachers and students to determine current progress toward the attainment of learning objectives and to identify the next steps for continued learning and instruction. Assessment as learning (AaL) is primarily student-centered and involves students in active self-assessment and the development of metacognitive and self-regulated learning skills [15].

1.2. Assessment Process

There are three approaches in the dimension of the assessment process: design, use, and communication [14]. In assessment design, teachers develop or modify publisher-made assessments to ensure they are reliable, aligned with learning objectives, and able to measure student learning. Assessment use refers to teachers not only using scoring protocols (e.g., rubrics, checklists) or grading schemes (e.g., letter or number grades) but also modifying these protocols or schemes for particular students or differing contexts. Assessment communication includes the teacher interpreting assessment results, providing feedback to students, and communicating student progress to parents.

1.3. Assessment Fairness

Assessment fairness, the third dimension of assessment literacy, refers to how teachers cultivate fair assessment conditions for learners [14]. Within this dimension are three approaches that teachers enact based on the context and needs of their students: standard, equitable, and differentiated. When teachers adopt the standard approach to assessment fairness, they maintain equal assessment protocols for all students. Teachers can also adopt an equitable approach to assessment fairness. This occurs when they differentiate assessments for formally identified students, such as those in special education or English language learners. The differentiated approach individualizes assessments to address each student’s unique learning needs and goals.

1.4. Assessment Theory

The fourth dimension of assessment literacy is assessment theory, which focuses on the consistency and context of assessment [14]. This dimension is also categorized into three approaches: consistent, contextual, and balanced. When teachers prioritize consistency, they strive for reliability within assessments, across time periods, and among their peers. In contrast, when teachers consider the validity of an assessment (for example, whether the assessment measures what the teacher designed it to measure), they are adopting a contextual approach. The third approach, balanced, takes into consideration both the reliability and validity of an assessment.

1.5. Pre-Service Teachers’ Assessment Literacy

All teachers entering the profession are expected to demonstrate assessment literacy [14]. However, pre-service and novice teachers face particular challenges in enacting high-quality assessments [8,16]. Specifically, early career teachers need support to challenge teacher-centered conceptions of assessment [13], to develop disciplinary knowledge and assessment skills [17], to practice assessment for learning (formative assessment [18]), to integrate fair grading processes [17], and to communicate assessment results to stakeholders [19].
The extent and quality of assessment literacy for pre-service teachers varies both within and across teacher education programs [14,20]. This inconsistency has prompted many scholars to advocate for curricular modifications to ensure a more standardized understanding of assessment upon graduation. For example, Popham proposed over a decade ago that a dedicated course on educational assessment should be incorporated into teacher education programs [2]. Alkharusi et al. [21] further suggested that this course should be paired with field-based experience, enabling pre-service teachers to apply assessment theories in their teaching practice. However, scholars have recently argued that a single assessment course without continuous and consistent messaging about assessment within a teacher education program may leave pre-service teachers with mixed messages about assessment [14]. These scholars have advocated for the integration of developmentally appropriate assessment education across all courses within the teacher education program (see [10,14,20]).
Despite the growing calls for assessment instruction to be incorporated into the teacher education curriculum, it is often the case that this critical aspect is primarily addressed during the clinical internship phase. These clinical internships usually involve immersing candidates (using Addleman et al.’s [22] recommendation, when discussing clinical internships we will use the following terms—candidate to refer to pre-service teachers, mentor to indicate the PK-12 cooperating teacher, and supervisor to indicate the university-based supervisor) in a PK-12 classroom for an extended duration. Throughout this internship, candidates receive guidance from the PK-12 mentor in whose classroom they are placed, as well as from a supervisor. These hands-on experiences offer candidates the opportunity to apply the assessment knowledge acquired during their coursework [5]. This practical application of theory to practice is a crucial component of developing comprehensive assessment literacy.

1.6. The Supervisor

Supervisors are an important part of student teaching internships as they serve as intermediaries between the teacher education program and the mentors in the PK-12 schools [23] and can foster opportunities to connect educational theory learned in coursework to practice [24,25]. Burns and colleagues [26] identified five tasks for supervisors to support candidate learning: "(1) targeted assistance, (2) individual support, (3) collaboration and community, (4) curriculum support, and (5) research for innovation" (p. 416). To enact these tasks, supervisors need not only interpersonal (e.g., relationship building) and technical skills (e.g., lesson planning and observation) but also pedagogical skills. Burns and Badiali [27] identified noticing, ignoring, marking, intervening, pointing, unpacking, and processing as pedagogical skills a supervisor needs to support a candidate. A supervisor must also understand the PK-12 school context in which a candidate is teaching, as the decisions a candidate should make will be informed by the context and learners in their classroom [27].
Although scholars have identified tasks and skills supervisors may need, not all supervisors are prepared to perform them. Often, adjunct faculty, retired PK-12 teachers, and graduate students fill the role of supervisor [23,28,29]. Although supervisors such as these may have rich PK-12 teaching experience, which is often content- or discipline-specific [30], the quality of support they provide to candidates may vary as they may have had little preparation in teacher education [23,31]. Moreover, teacher education programs may view supervisors as intermediaries between the program, mentors, and candidates and may not provide adequate support for the supervisors themselves [32,33]. For example, McCormack et al. [33] found that supervisors reported struggling to support candidates, working with weak mentors, and receiving no targeted support from their teacher education program.
Scholars have identified other concerns regarding supervisor preparation. For example, supervisors may lack preparation to support candidates in teaching diverse student populations [30] and in culturally responsive teaching practices [34]. Burns and Badiali [27] found that supervisors without preparation may focus simply on observations and corrective feedback as opposed to providing more nuanced support that involves the skills of noticing, ignoring, marking, intervening, pointing, unpacking, and processing. Similarly, supervisors may position themselves as experts in the field [35] and tell candidates what to do instead of fostering the candidates' reflective practices [28,36]. When providing feedback to candidates, supervisors may offer feedback that is too general [26,37] or too infrequent [25].
Relevant to this study, supervisors may be a powerful influence on a candidate's assessment literacy [38]. However, if supervisors are not prepared to support candidates' growth in assessment literacy, candidates may be left without an essential understanding of assessment [10]. Therefore, it is important to study the assessment literacy of supervisors.

2. Materials and Methods

2.1. Research Question

Our study offers empirical examples from three supervisors to illustrate each dimension of the ACAI framework and to shed light on how classroom assessment experts and practitioners, in this case supervisors, differ in conceptualizing and describing the important work of classroom assessment. We are not evaluating how well supervisors illustrated their assessment literacy but rather whether and how they addressed each dimension in their discourse while engaging in an inquiry project to facilitate their candidates' knowledge and use of formative assessment in their practicum. We sought to answer the following research question: how do supervisors express the ACAI dimensions of assessment literacy in an inquiry community?

2.2. Methods

We employed a qualitative case study method to provide an in-depth explanatory analysis of the phenomenon of supervisors' assessment literacy [39]. The case was the inquiry community: the collective of supervisors and the first author, who studied their own practice in a systematic and intentional way [40,41]. The first author served as a participant observer who took part in the inquiry community and facilitated the supervisors' inquiry process. The subject of the inquiry was supporting candidates' assessment practices. Although all members of the community collaborated in the inquiry, the first author was responsible for guiding the process.

2.3. Context

The context of this study is a large public research university in the northeast US, serving approximately 18,000 students and designated as a Hispanic-serving institution. Our study is situated in the university's teacher education program, which is accredited by the Council for the Accreditation of Educator Preparation and has partnerships with 34 [state] schools or districts. According to the publicly available accreditation report of this program, in the 2021–2022 academic year, there were 842 students enrolled in the teacher certification sequence: 75% of the students identified as female, 24% as male, and 1% did not report their gender. Students reflected diverse racial and ethnic backgrounds; they identified as White (66%), Hispanic/Latino (22%), Asian (5%), Black or African American (3%), or Native Hawaiian or Other Pacific Islander (less than 1%). Some students reported two or more races (2%) and others did not report this information (2%).
In this teacher education program, after candidates complete all course requirements in their discipline-specific content major, they begin what the university calls the professional sequence. This sequence culminates in a semester-long student teaching internship. During student teaching, candidates are assigned a supervisor who is employed by the university to observe candidates in their student teaching placement. Supervisors are required to provide candidates support surrounding instruction, professional dispositions, and other problems of practice. The supervisor holds pre-conferences, observes the candidate, and holds post-conferences throughout the semester. The supervisor also completes formal (four times per semester) and informal (two times per semester) progress reports. The teacher education program provides professional learning opportunities for supervisors, but these opportunities are not required, and individuals who participate can select what they want to learn about.

2.4. Participants

An emailed call to all supervisors in the university’s teacher education program invited supervisors to participate in an inquiry community. Three supervisors, Abby, Beth, and Caroline (all participant names are pseudonyms), volunteered to participate with the first author. The three supervisors worked for the same university, had retired from a career in PK-12 public education, identified as White, and were over 50 years of age.

2.4.1. Abby

Abby taught for 14 years as a special education teacher and then 22 years as a learning consultant for a Child Study Team in a large, suburban school district in the northeast US. After retirement, she was a test preparation coordinator, an adjunct professor of special education classes, and a supervisor for the university in which this study was conducted. At the time of this study, she had served as a supervisor for 15 years.

2.4.2. Beth

Beth’s PK-12 public education career spanned 42 years. Beth’s first teaching position was as a bilingual (English and Spanish) elementary teacher in a suburban school district in the northeast US. She also served as an elementary school bilingual curriculum supervisor for a different district, also in the northeast US. From there, she worked for the state’s Department of Education. After this position, she returned to a suburban school district in the northeast US as an elementary education supervisor, then as an elementary school principal, then as an assistant superintendent, and finally as a superintendent. At the time of this study, Beth had served as a supervisor for eight years.

2.4.3. Caroline

Caroline taught for 25 years in a large suburban district, teaching in an elementary school in the northeast US. Later in her career, she taught a gifted and talented program in elementary classrooms in the same district. Caroline also served as a supervisor of elementary education in her district. At the time of this study, she had served as a supervisor of candidates for six years.

2.4.4. First Author

The first author served as a participant observer in the inquiry group and facilitated the group's inquiry. The first author taught for 23 years in public high schools in urban and suburban contexts and was a doctoral candidate in a teacher development and teacher education program at the university in which this study was conducted. At the time of this study, the first author was in her mid-40s. She identifies as White.

2.5. Data Sources

We collected data from January 2020 to July 2021 via meeting observations (e.g., recordings and field notes) and semi-structured interviews. All meetings and interviews took place on Zoom, with the exception of the October 2020 inquiry community meeting and all three exit interviews. We used Zoom's transcription tool to transcribe the video recordings, and the first author verified the transcripts by comparing them to the recordings.

2.5.1. Meeting Observations

The first author participated in the inquiry community to capture participants’ thinking as it occurred in context [39]. The meetings served as a context for supervisors to demonstrate their knowledge of assessment and to engage in the stages of inquiry to plan how to develop their candidates’ knowledge and practice of formative assessment. The 10 monthly inquiry community meetings ran for approximately one hour each (M = 54 min).

2.5.2. Semi-Structured Interviews

The first author conducted three semi-structured interviews with each supervisor (initial, mid-study, and exit) to learn about their role as a supervisor, their experiences in the inquiry community, and their beliefs regarding assessment (M = 46, 39, and 44 min, respectively).

2.6. Data Analysis

To establish and maintain qualitative quality, we prioritized prolonged engagement with participants [42]; the first author participated in inquiry with the supervisors from January 2020 to June 2021 (excluding the months of August and September 2020). To ensure that we accurately represented the supervisors, the first author engaged in member checking throughout the study [39]. Additionally, the research team collected multiple data points, such as supervisor interviews and recordings of the community meetings, to foster credibility in our research [42].
When analyzing the data, the research team engaged in investigator triangulation, with all three authors analyzing the data [39]. After the study concluded, the three authors engaged in interactive data analysis, meeting weekly to conduct deductive coding guided by the ACAI dimensions [13,14]. We sought to understand the specific phenomena of supervisors' articulation of the dimensions of assessment literacy; therefore, deductive coding was an appropriate process [39]. However, as our coding progressed, new codes emerged when supervisors displayed knowledge not captured by these dimensions (e.g., knowledge of classroom culture). When new codes emerged in the data, we used inductive coding via the constant comparative method [43]. Once we established our codes, the first author coded multiple data points (e.g., inquiry community meetings, supervisor interviews) to reduce the risk of inaccurate analysis [44]. During this process, the first author actively looked for non-examples as well as examples of the target dimensions. Then, at weekly research team meetings, we examined, discussed, and agreed on the coding. To present our findings, the group selected portions of the data that we felt were descriptive and rich enough to illustrate our findings for the reader [39].

3. Results

Our results are organized under two subheadings aligned with our research question. We first provide examples of how supervisors expressed the dimensions. We then highlight the complexity of how supervisors expressed their assessment literacy by providing a microanalysis of a conversation that occurred during an inquiry community meeting.

3.1. Dimensions Voiced by Supervisors

Here we provide an overview of the ways in which supervisors exemplified the dimensions of the ACAI: assessment purpose, process, fairness, and theory [14].

3.1.1. Assessment Purpose Examples

Across the data, participants rarely spoke of AoL, and when they did, their focus was often on accountability issues rather than on determining grades or assessing the attainment of learning objectives as described in the ACAI framework [13,14]. Moreover, their explanations of assessment purposes frequently merged multiple purposes together in one response. For instance, when asked in the exit interview about the purpose of collecting evidence of student learning, Abby referred to all three purposes.
Abby: So you should be able to show progress on the students and keep track of where they are [AfL]. To be able to compare their performances [AaL]. So you can know where your students are and where they’re going [AfL]…Especially when people, like parents, or other teachers, or anybody is questioning [you], that data is very important. Any data is very important to show, because nobody can argue with your data unless you really do it wrong which most people…you know. …Data backs up what you’re saying [AoL].
First Author: Right, I’m curious you mentioned that you use it to compare student progress with other students, could you explain what you meant by that.
Abby: Not with other students with themselves at different times [AaL].
(Interview_July 2021.)
Here Abby spoke about AoL in terms of gathering data to “back up what you’re saying”. In other words, teachers use assessment data to support their claims about students when talking with other interested parties. She also clearly indicated AfL, stating that data are needed to know students’ current achievement that can inform what next steps might be needed. Lastly, she talked about students using data over time to compare and reflect on their own progress, indicating a key aspect in AaL, which is student self-assessment.
Further, supervisors' notions of assessment grew through their work in the inquiry community, expanding to include the concept of AaL in their understanding of assessment. However, the language used did not always reflect the terminology used in the ACAI. For instance, when asked, "Has your understanding of formative assessment changed at all since we started our community?", Abby indicated that her understanding had grown and expanded to include what she called "reflective formative assessment"; when asked for an example, she shared the following:
It was just really including reflection with the kids. Making sure that teachers give kids a chance to reflect on what they’re doing in different ways… I always ask them, “How do you think you did? What was your goal? and How is the outcome?” you know, looking at different ways to reflect on their progress. (Interview_July 2021.)
We see here explicit reference to metacognitive processes such as self-assessment and goal-setting in the example questions she suggested. Under the ACAI framework, this would be classified as AaL. Thus, there is a discrepancy between what this supervisor considers to be formative assessment and how the ACAI describes AaL. In essence, she merged AaL into her conception of AfL.

3.1.2. Assessment Process Examples

Although supervisors referred to each approach in this dimension, they primarily focused on the design of assessments. For example, in an exit interview, when asked to provide an example of formative assessment, Beth described her process for designing a writing assignment.
Beth: Number one, I’d [ask] the students to select the topic they want to write about. Okay, so this way they’re motivated and say, “I want you to tell me. Give me a specific topic on that. What did you like about it? And why was that the piece?”. But then you need to allow them to elaborate on that right? And they have to have evidence to support their thinking. And I would try to make sure they understand that topic. One that they really want to be writing about and really can talk with everyone about at the same time. “What is something you want to tell me about yourself?”. I’d love [for] them to talk about themselves, and, and then move them into that and see how well the writing is. How well they can articulate what they are thinking, not only verbally but in writing. And then from there they can analyze where the pitfalls are. What do they need to improve? Whether it’s the introduction, whether it be the grammar and spelling. A host of things just to get a sense of readiness around [writing]. So one of the formative checks would be looking at drafts of the piece before a final draft is right. (Interview_July 2021.)
Beth described creating a writing prompt and selecting a topic that would both motivate and be accessible to students. Then, she argued that because students may be comfortable writing about themselves, she would be able to focus on the quality of the writing (e.g., providing evidence or mechanics of writing), thereby designing a task that would also assess the target objectives related to writing.
There were fewer examples of how supervisors described the use of assessments. Supervisors did identify how scoring protocols should be adjusted based upon grade-level expectations. In the same interview, Beth described a writing assessment scenario, a two-paragraph response, which had scoring protocols appropriate for elementary-aged students.
Beth: So, I think, when they have more maturity and abilities, you can dig deeper into the assessments…If the content requires it. It really depends on the content.
First Author: That’s interesting. So it’s not necessarily, although you’re mentioning developmental ideas, the age, but it’s more about the materials and the subject matter that might make it different?…Can you give me an example?…We will stick with the two paragraphs for older kids, what might be a K-5 English formative assessment, what might you see in that class?
Beth: Well number one, that they [late-elementary students] can write a full sentence. And they use correct punctuation when they’re asking a question, as opposed to a statement. And if there is something that someone’s excited about, do they use an exclamation [mark]? Which, you know, some kids like to use them all the time, because they’re excited just reading it, you know what I mean? We’re writing about it and others don’t even think about it, right? So it’s interesting, you see the personalities come in [when students use certain punctuation] (Interview_July 2021.)
Beth indicated that teachers must adjust expectations in writing for a group of elementary-aged English students. She showed an understanding of the scoring expectations for student writing, such as syntax and correct punctuation. With this example, Beth also illustrated her understanding of the content of an English course and an understanding of students’ development and individual personalities in relation to their writing, which may influence how she uses particular assessments with different learners.
Supervisors spoke less about the interpretation and communication of assessment results. However, when discussing the communication of results, the conversation often returned to reporting grades to students and parents. During an inquiry community meeting, Abby shared an observation of a candidate who was teaching third-grade math as an example of individualized instruction: using a mini whiteboard with one student. Caroline then extrapolated from this example to the problems of using mini whiteboards in a whole-class setting because, left as is, the technique would not facilitate the communication of learning progress to students or provide parents with evidence to support assigned grades.
Abby: [This example is of] a candidate in a third-grade class. It was math and she was reinforcing…working in a small group. She worked with a student who was having trouble understanding and she took out a whiteboard. She went over problem after problem using the whiteboard and they did problems together and then she left him alone to do the second page. He used the whiteboard on his own and then transferred the answer [to his notebook]. So that was just more… it was just very observable.
Caroline: Right, but even the observable ones where they do use little mini whiteboards and they hold it up… [still] my questions to the [candidates] is “So who got it right? An hour from now do you remember? Do you have a checklist? How do you know they got it? What if the parents say, “How is the child doing?”. Do you really know? You’re not going to remember with a class of 25 kids who held one [answer] and who held another”. You know? I’m trying to teach them [candidates] that when you’re in the classroom it’s just not that simple, especially in grades. When you have to give grades, right?
Abby: Yeah, so if you’re not writing it down from the whiteboards… She [the candidate] was only working with one student, so she… Yeah, but, overall, when they do the whiteboard… you’re right.
Caroline: So that is different, but a whole class…
Abby: And unless you’re checking it off… And I’ve said to students [candidates], “You need to have a checkoff system”. Because once they erase that and you’ve done multiple problems, you have nothing to check back to. So totally true. (Meeting_March 2021.)
Abby provided an example of individualized student instruction in which the candidate designed a formative assessment intended to check for the student’s understanding by using a whiteboard. However, this practice was also an instructional strategy to encourage the student to practice the assigned work. Caroline shifted the focus from individual instruction to using whiteboards with a whole class. She expressed concern that this tool was lacking without a clear plan (e.g., a checklist) to record student responses. She indicated that a candidate may not be able to remember the students who answered the question correctly, in which case there would be no information to actually interpret or communicate to parents. In this excerpt, describing a formative assessment activity, we see how these supervisors blended two assessment approaches (i.e., design and communication) with each other as well as ideas about instructional practices. Similar to our findings surrounding assessment purposes, Beth and Caroline spoke about accountability when discussing assessment communication.

3.1.3. Assessment Fairness Examples

We observed that supervisors exclusively expressed differentiated approaches to assessment fairness. For example, in an exit interview, Abby discussed a candidate who did not feel the need to differentiate for an honors class. She recounted the following:
He did not differentiate for the group because it was an honors class. Everybody got the same things. When we talked about misconceptions, he would say to me, “Oh no, this is pretty clear. They are an honors class”. I responded, “But you know, even in an honors class, there are different kinds of learners”. This was a [candidate] I had [taught in a previous course], so I was like, “Oh my gosh. What happened to all the things I told him?”. But anyhow, that’s where it is. (Interview_July 2021.)
Abby expressed frustration with her candidate's lack of understanding that differentiation is needed for all students. This example highlights the importance of recognizing diverse learning needs even within seemingly homogeneous groups, such as an honors class. Abby's experience underscores the necessity of ongoing and explicit instruction for candidates on the conceptual underpinnings of differentiation as well as specific strategies to enact it; despite having been taught this topic before, the candidate clearly demonstrated a conceptual misconception, which inhibited him from actually engaging in the target practice of differentiation.
Similarly, in a mid-study interview, Beth emphasized the need for flexible small groups to address common errors, stating the following:
Well, I think they need to sit down with him, and it could be not just one-on-one, but if they see the same common error with other students, maybe the question wasn’t clear enough. Or, you pull them aside. I would tell my teachers [candidates], “If you see the same errors with a group of kids, something’s not penetrating; something’s not working right”. So, I would either reteach it in a small group, not the whole class, because then kids are bored. Why waste their time? Either challenge them or just pull them aside. That’s why I think you have to have flexible small groups all the time. (Interview_July 2020.)
Beth’s approach illustrates the practical application of differentiated assessment by addressing common errors through targeted small group instruction, thereby catering to individual learning needs.
Thus, with regard to assessment fairness, supervisors emphasized individualized learning opportunities and assessments tailored to student needs. The examples provided by Abby and Beth demonstrate the practical implementation of this approach, highlighting its effectiveness in addressing diverse learning needs and promoting fair assessment conditions.

3.1.4. Assessment Theory Examples

When supervisors spoke about assessment theory, they often described balanced assessments that addressed both reliability and validity. Recall the example from the Assessment Process Section where Beth described designing a two-paragraph writing assessment. In this portion of the exit interview, Beth explained how she would structure a unit in which the final assessment would be that essay:
Whatever it [the summative essay task] is, are you going in that direction? Were they able to collect their thoughts, organize their thinking, to be able to put that together? So that, to me, is something that a teacher needs to develop in terms of a formative, because at the end [of the semester] it’s too late, you need to show that they are moving in that direction [throughout the semester]. (Interview_July 2021.)
Beth described creating formative assessments that are valid, or contextual. She considered the learning objectives that needed to be met in advance of mastering the semester objective of writing a two-paragraph essay: collecting thoughts and organizing thinking. Then, she described developing consistent formative assessments that provided students with practice of these objectives to help students “move in the direction” of the summative assessment. Embedded within this example is the assumption that the formative assessments would be contextual; in other words, if the objective was focused on organization, then the formative assessment created by the teacher would appropriately measure organization. Of note, this example also illustrates the overlapping of the dimensions in that she is describing the contextual approach to assessment theory in light of the design approach of the assessment process.
Supervisors provided examples of the importance of creating and implementing assessments that are both consistent and contextual and, therefore, balanced. An aspect of this dimension not evident in the data was a discussion of consistency (i.e., reliability) across a time period or among peers.

3.2. The Complexities of Assessment: A Microanalysis

Supervisors' knowledge of assessment was evidenced in our inquiry community meetings, where they referenced multiple dimensions simultaneously within the same discussion. To illustrate this, we selected a passage from the May 2021 inquiry community meeting. In this meeting, the supervisors spoke about the candidates' use of thumbs up/thumbs down as a formative assessment technique. In this study, the supervisors described this technique as the teacher asking students to make a hand signal (putting their thumb up or down) to indicate their understanding of a concept being taught [45]. This issue received extended attention in the meeting when the first author initiated a discussion on how to help candidates ask better questions based on the group's shared reading of Moss and Brookhart's book Advancing Formative Assessment in Every Classroom: A Guide for Instructional Leaders [46]. To show how the supervisors' conversation shifts among dimensions, we have broken the conversation into five scenes, or small portions of the conversation, with corresponding analysis; in each scene, the code assigned to an utterance appears in brackets after it. (Although we divided the conversation into scenes, no portion of the conversation was omitted, and it appears here as it occurred in the meeting.)

3.2.1. Setting the Stage: Deeper Questions vs. Yes/No

This discourse opened with the first author inviting the supervisors to think about how they could provide feedback (process—communication) to their candidates to encourage them to ask deeper, more thought-provoking questions. The conversation quickly turned from this focus to what the candidates perform poorly (asking yes/no, thumbs up/thumbs down questions of their students), which then became the focus of their analysis. Caroline evoked the dimension of assessment theory and questioned the lack of a contextual approach in candidates' use (assessment process) of the thumbs up/thumbs down technique because that use did not help the candidate accurately assess student learning. She suggested a change to the use (i.e., probing student thinking after using the thumbs up/thumbs down strategy) to ensure the assessment technique measures what it is supposed to measure, the contextual aspect of assessment theory.
First Author: …I really was thinking about candidates and I put [this quote from the reading in my notes] "to monitor and refine the quality of the questions they ask" (Ref. [46] p. 112). How do we help them do that? Because that's a pretty deep skill. And I know that we talked a lot about them not being in the place where they have a ton of experience. How could we help them? And I don't have an answer, if you have an answer jump in. [Other—acknowledgement that candidates' assessment literacy is lacking]
Caroline: To give an example, this one [candidate], it was all "yes, no, yes, no" [questions]. So how I helped him… what I said [to the candidate] was: "By that student saying no or yes, how does it help you? What did you learn from yes/no?" [Assessment process—use] He [the CI] just moved on. 'Who gets it? Thumbs up'. But what does that mean? No probing or making them think further… [Assessment theory—contextual]

3.2.2. Contextual Assessment and Emotions

Both Caroline and Abby mentioned that thumbs up/thumbs down assessment may not be contextually appropriate, meaning the assessment did not measure what the candidate intended it to measure. The first author discussed how a candidate might not see a student in the back of the room who put their thumb down. By missing the student’s response, the assessment was not measuring that student’s learning. Caroline agreed with the first author and then questioned the purpose of the assessment if it did not measure the understanding of all students. In response, Abby highlighted an aspect not included in the ACAI; she stated that it was brave for students to put their thumb down, which may indicate that, in addition to soliciting student understanding, there was another use for a thumbs up/thumbs down response—emotional regulation. Then, she theorized that because students would not feel brave enough to put their thumb down, the assessment would not measure what the candidate intended it to measure, again addressing the contextual aspect of the assessment theory dimension. Of note here is how Abby added emotional regulation into a portion of the conversation focused on assessment theory.
First Author: For equity purposes, if a kid in the back put his thumb down you didn't see him and you just moved on. [Assessment theory—contextual; Assessment fairness—standard] How does that feel? That kid feels like Mr. Blah Blah does not care if… [Other process—student emotions in assessment use]
Caroline: What's the point of it? Exactly right, yeah. [Other—questioning purpose in light of theory (contextual)]
Abby: And it's brave to put your thumb down. [Other process—student emotions in assessment use] Because a lot of them wouldn't even do it. [Assessment theory—contextual; Other—classroom culture as well]

3.2.3. Responding to the Student

Abby shifted the conversation from the quality of the assessment technique to how teachers should respond in the classroom when a student indicates "thumbs down". She seemed to waver between using the response as an opportunity for AaL (responding immediately and allowing the student to explain their confusion, that is, engaging them in metacognition) and a concern for students' future classroom engagement. Caroline's reply extended this focus on student engagement by praising the student for admitting their confusion. She then suggested how she would respond by asking the student "how can I help", indicating AfL as her goal.
Abby: So, do you address it right there because maybe the kid will never do it again because you're pointing them out, you know, or do you address that later on? [Assessment purpose—AaL; Other—student engagement]
Caroline: I addressed it right away like it was a badge of honor. [Other—student engagement] [To students] 'Good, let's hear what… How can I help you more? I'm sure you're probably thinking the same thing.' [Assessment purpose—AfL]

3.2.4. Responding to the Candidate

Abby began this portion of the conversation by questioning whether the thumbs up/thumbs down assessment would be consistent (assessment theory). If students did not respond to the candidate's question, the candidate would not know if the results of the assessment were consistent. Next, Abby speculated about how a candidate might address this with a student who put their thumb down: by attempting to elicit student thinking in hopes of providing feedback. She considered the type of feedback she might give to the student, noting that she might work to clarify the design of the assessment or, alternatively, might affirm the student's feelings by thanking them for their honest response. Finally, Abby again considered the student's emotions in relation to an assessment.
Abby: Yeah, but I wouldn't even know how to give advice, because hardly anybody put their hand up. [Assessment theory—consistency] But, how do you address that? And maybe you don't even address it by saying [to student], 'Oh Johnny, you put your thumbs down. Tell me what's wrong'. [Assessment process—communication] Maybe I would repeat my directions, or maybe I would do something where it's general instead [Assessment process—design] of saying [to student], 'Oh John…' or thank them and say, "I'm sure, a lot of people feel this way like yes". You would have to be able to make sure that you help them [the candidate] address… how are you going to address that? [Other—student emotions in assessment]

3.2.5. A Culture of Mistakes

In the last scene we share here, Caroline shifted the discussion once again to the importance of establishing a classroom culture where mistakes are embraced; this harkens back to her earlier comment that she would make the student's thumbs down a "badge of honor". Here, she suggested that she would create a "culture of mistakes" by modeling mistakes herself for the class, with the assumption that her doing so would make the students more likely to respond with thumbs down in the future. However, she then made an immediate switch, stating that she would not use the technique in the classroom even with the adaptations she suggested; perhaps her earlier statement that the technique lacks contextuality (it does not inform the teacher about student understanding in an accurate way) was more salient to her than trying to make the technique work. She then shifted back to the feedback she would give to a candidate if they chose to use this technique, revealing her own assessment process approach to communication. It is worth noting that, during this semester, one of Caroline's candidates was placed with a cooperating teacher who, based on a school initiative, embraced this technique. There was a sensitive balance in this triad: supervisors had to teach candidates what they believed were accurate assessment practices while also respecting the role of the cooperating teacher as a co-instructor in the internship experience.
Caroline: And you need to create a culture of mistakes in the very beginning in your classroom. Yeah that's the culture, everybody makes a mistake. [Other—classroom culture] I would point out, all this is "[I] made a mistake. Who can tell me what I should have done?". And they felt more comfortable then putting their thumb down whenever I asked: "thumbs up thumbs down?" [Assessment process—use] That's a no for me, I can't do that in the classroom [Assessment theory—contextual] but if it was a candidate, I would explain that as well you know, establish that culture it's okay that you make a mistake we all make mistakes. [Assessment process—communication (to a candidate)]
The complexities of how supervisors discussed assessment were evidenced in this microanalysis. We found that, instead of discussing each dimension singularly, supervisors wove the dimensions together. Additionally, discussions of assessment did not occur in isolation but instead addressed other pedagogical concerns.

4. Discussion

In this investigation, using the ACAI dimensions [13,14], we sought to expose the specific phenomena of supervisors' articulation of the dimensions of assessment literacy and to examine how classroom assessment experts and practitioners, in this case supervisors, conceptualize and describe the important work of classroom assessment. Supervisors expressed all dimensions of assessment literacy, but some were prioritized and discussed more often and in more detail than others. Also, supervisors did not speak of the dimensions in isolation but instead expressed multiple dimensions simultaneously. Our findings provide an initial contribution to the literature on supervisors' assessment literacy that, with continued exploration, could inform both research surrounding preparing supervisors to be assessment literate and supervisors' practices.
Frameworks such as the ACAI provide important insights into how teachers approach assessment; however, to better understand these diverse approaches, it is also necessary to explore how teachers express their assessment literacy. We found that the ways in which these supervisors, all veteran PK-12 teachers, expressed their literacy were complex and influenced by many factors. In a recent chapter focused on assessment literacy, DeLuca [47] acknowledged this complexity and attributed teachers' assessment literacy development to factors such as "background, experiences, students, and context" (p. 255). Our findings reflect DeLuca's claim. For example, supervisors addressed multiple dimensions within one example or recognized that their discussion was more complex and addressed other pedagogical ideas, many of which were illustrated with examples from their own practice as teachers, teacher leaders, or supervisors. Future studies could continue to explore how supervisors show their assessment literacy, particularly with a larger and more diverse group of participants in a variety of contexts. Supervisors in different teacher education programs or with different professional experiences may bring different examples or may prioritize different dimensions of assessment literacy. As evidenced in our microanalysis, supervisors often did not show their knowledge of assessment in isolation. In her systematic review of the assessment literacy literature, Pastore [3] acknowledged the complexities of how assessment literacy is conceived. This suggests that a holistic approach may be needed in teaching and understanding the dimensions of assessment literacy. While an isolated explanation of each dimension may be a good starting point, scholars will also need to explore how the dimensions interact and influence one another in practice.
Although there are many studies concerning assessment literacy and candidates (e.g., Refs. [14,48]), fewer studies explore assessment literacy and supervisors. We found that, similar to Barnes et al.'s [14] study of candidates' assessment literacy, supervisors had different knowledge of and experience with the assessment literacy domains. For example, in our study, supervisors spoke more of the design of assessments than the use of assessment results. To fully explore and create a robust library of examples for each domain, more studies of how supervisors express and develop assessment literacy are needed. Something to consider in future studies is the context of the teacher education program; in the university in which our inquiry community was situated, supervisors observed a lesson with a corresponding pre- and post-conference. This meant conversations with candidates surrounding assessment may have occurred in isolation; in other words, because the supervisor and candidate were only discussing one lesson plan, they would not have many opportunities to address all aspects of consistent assessments (assessment theory). Specifically, they may not have found opportunities to discuss how to maintain consistency in assessment with their teacher peers or over a length of time (e.g., a unit plan or school year).
Additionally, to ensure that supervisors are knowledgeable in all domains of assessment literacy, teacher education programs could consider providing supervisors with preparation regarding assessment. Similar to Barnes et al.'s [14] and DeLuca et al.'s [10] calls, this preparation could align with the theory that candidates have been taught in their coursework to ensure a continuous and consistent message regarding assessment. Supporting candidates in developing and enacting meaningful assessments is an essential focus of professional learning for supervisors [38].
Although examining the quality of supervisors' assessment literacy was outside the scope of this study, there was evidence of disparities in how supervisors and scholars defined and interpreted ideas surrounding assessment; for example, Abby's interpretation of assessment as learning and Beth's examples of valid formative assessments for teaching writing. Misconceptions such as these may be held by other supervisors preparing candidates, who could then perpetuate the errors in their practice. Future research could examine the misconceptions supervisors hold across the assessment dimensions. Findings from such research could support teacher education programs in creating professional development for supervisors to ensure that supervisors are operationalizing assessment in the same ways as candidates' teacher education coursework. However, it may also be important for teacher education programs and scholars to recognize the wealth of practical knowledge supervisors can provide, as this knowledge may currently be absent from the literature.

Limitations

The same teacher education program employed all of the supervisors. As teacher education programs vary, the supervisors' experiences, and therefore their assessment literacy, may differ from those of supervisors in other contexts. Further, the sample size of this inquiry community was small and, therefore, not representative of all supervisors. However, because the inquiry community was small and situated in the same teacher education program, we were able to closely examine the supervisors' assessment literacy in a context-supported fashion. Also, because supervisors selected how to best facilitate their candidates' knowledge and use of formative assessment in their practicum as the inquiry community's focus, much of the supervisors' talk focused on designing and implementing formative assessments. If they had selected a different inquiry focus, we may have seen other interpretations (e.g., using assessment results to drive instruction, communicating assessment results to stakeholders) expressed in more detail in our data. Additionally, our focus was solely to identify examples of supervisors' assessment literacy. We did not explore the quality of supervisors' understanding of each dimension of assessment literacy, which is an avenue for future research.

5. Conclusions

Using the ACAI dimensions [13,14], we sought to expose the specific phenomena of supervisors’ articulation of the dimensions of assessment literacy. We found that supervisors expressed multiple dimensions of assessment within their meeting discussions and interviews, but, similar to Barnes and colleagues’ [14] findings with candidates, supervisors in our study expressed varied knowledge and prioritization of each dimension. We also found that supervisors did not discuss the dimensions of the ACAI in isolation, instead illustrating the complex interplay among assessment and other aspects of teaching practice (e.g., classroom culture and equitable practice).

Author Contributions

Conceptualization, E.R.-L., N.B. and H.F.; methodology, E.R.-L., N.B. and H.F.; validation, E.R.-L., N.B. and H.F.; formal analysis, E.R.-L., N.B. and H.F.; investigation, E.R.-L. and N.B.; data curation, E.R.-L., N.B. and H.F.; writing—original draft preparation, E.R.-L., N.B. and H.F.; writing—review and editing, E.R.-L., N.B. and H.F. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

This study was approved by the Institutional Review Board of Montclair State University (IRB-FY20-21-1902), Date: 16 December 2020.

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

The data supporting the conclusions of this article will be made available by the author upon request.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. DeLuca, C.; LaPointe-McEwan, D.; Luhanga, U. Teacher assessment literacy: A review of international standards and measures. Educ. Assess. Eval. Account. 2016, 28, 251–272. [Google Scholar] [CrossRef]
  2. Popham, W.J. Assessment literacy overlooked: A teacher’s confession. Teach. Educ. 2011, 46, 265–273. [Google Scholar] [CrossRef]
  3. Pastore, S. Teacher assessment literacy: A systematic review. Front. Educ. 2023, 8, 1217167. [Google Scholar] [CrossRef]
  4. Willis, J.; Adie, L.; Klenowski, V. Conceptualising teachers’ assessment literacies in an era of curriculum and assessment reform. Aust. Educ. Res. 2013, 40, 241–256. [Google Scholar] [CrossRef]
  5. Xu, Y.; Brown, G.T. Teacher assessment literacy in practice: A reconceptualization. Teach. Teach. Educ. 2016, 58, 149–162. [Google Scholar] [CrossRef]
  6. Pastore, S.; Andrade, H.L. Teacher assessment literacy: A three-dimensional model. Teach. Teach. Educ. 2019, 84, 128–138. [Google Scholar] [CrossRef]
  7. Popham, W.J. Assessment literacy for teachers: Faddish or fundamental? Theory Pract. 2009, 48, 4–11. [Google Scholar] [CrossRef]
  8. DeLuca, C.; Klinger, D.A. Assessment literacy development: Identifying gaps in teacher candidates’ learning. Assess. Educ. Princ. Policy Pract. 2010, 17, 419–438. [Google Scholar] [CrossRef]
  9. Deneen, C.C.; Brown, G.T. The impact of conceptions of assessment on assessment literacy in a teacher education program. Cogent Educ. 2016, 3, 1225380. [Google Scholar] [CrossRef]
  10. DeLuca, C.; Chapman-Chin, A.; Klinger, D.A. Toward a teacher professional learning continuum in assessment for learning. Educ. Assess. 2019, 24, 267–285. [Google Scholar] [CrossRef]
  11. Christoforidou, M.; Kyriakides, L. Developing teacher assessment skills: The impact of the dynamic approach to teacher professional development. Stud. Educ. Eval. 2021, 70, 101051. [Google Scholar] [CrossRef]
  12. Gotch, C.M.; McLean, C. Teacher outcomes from a statewide initiative to build assessment literacy. Stud. Educ. Eval. 2019, 62, 30–36. [Google Scholar] [CrossRef]
  13. DeLuca, C.; LaPointe-McEwan, D.; Luhanga, U. Approaches to classroom assessment inventory: A new instrument to support teacher assessment literacy. Educ. Assess. 2016, 21, 248–266. [Google Scholar] [CrossRef]
  14. Barnes, N.; Gareis, C.; DeLuca, C.; Coombs, A.; Uchiyama, K. Exploring the roles of coursework and field experience in teacher candidates’ assessment literacy: A focus on approaches to assessment. Assess. Matters 2020, 14, 5–41. [Google Scholar] [CrossRef]
  15. Lam, R. Assessment as learning: Examining a cycle of teaching, learning, and assessment of writing in the portfolio-based classroom. Stud. High. Educ. 2016, 41, 1900–1917. [Google Scholar] [CrossRef]
  16. Hill, M.; Cowie, B.; Gilmore, A.; Smith, L.F. Preparing assessment-capable teachers: What should preservice teachers know and be able to do? Assess. Matters 2010, 2, 43–64. [Google Scholar] [CrossRef]
  17. Atjonen, P.; Pöntinen, S.; Kontkanen, S.; Ruotsalainen, P. Enhancing preservice teachers’ assessment literacy: Focus on knowledge base, conceptions of assessment, and teacher learning. Front. Educ. 2022, 7, 891391. [Google Scholar] [CrossRef]
  18. Gotwals, A.W.; Birmingham, D. Eliciting, identifying, interpreting, and responding to students’ ideas: Teacher candidates’ growth in formative assessment practices. Res. Sci. Educ. 2016, 46, 365–388. [Google Scholar] [CrossRef]
  19. Beziat, T.L.; Coleman, B.K. Classroom assessment literacy: Evaluating pre-service teachers. Researcher 2015, 27, 25–30. [Google Scholar]
  20. Schelling, N.; Rubenstein, L.D. Pre-service and in-service assessment training: Impacts on elementary teachers’ self-efficacy, attitudes, and data-driven decision making practice. Assess. Educ. Princ. Policy Pract. 2023, 30, 177–202. [Google Scholar] [CrossRef]
  21. Alkharusi, H.; Kazem, A.M.; Al-Musawai, A. Knowledge, skills, and attitudes of preservice and inservice teachers in educational measurement. Asia-Pac. J. Teach. Educ. 2011, 39, 113–123. [Google Scholar] [CrossRef]
  22. Addleman, R.; Waugh, E.H.; Siebert, C.J.; Thornhill, S.S. Mentor teacher perceptions of effective university supervisors: Prioritizing collaboration and community. Teach. Educ. 2024, 59, 303–325. [Google Scholar] [CrossRef]
  23. Buchanan, R. An ecological framework for supervision in teacher education. J. Educ. Superv. 2020, 3, 76–94. [Google Scholar] [CrossRef]
  24. Deutschman, M.C.; Cornwell, C.L.; Sundstrom, S.M. Fostering anti-oppressive pedagogies in preservice teachers: The role of the university supervisor. Int. J. Qual. Stud. Educ. 2024, 37, 597–612. [Google Scholar] [CrossRef]
  25. Grossman, P.; Hammerness, K.M.; McDonald, M.; Ronfeldt, M. Constructing coherence: Structural predictors of perceptions of coherence in NYC teacher education programs. J. Teach. Educ. 2008, 59, 273–287. [Google Scholar] [CrossRef]
  26. Burns, R.W.; Jacobs, J.; Yendol-Hoppey, D. The changing nature of the role of the university supervisor and function of preservice teacher supervision in an era of clinically-rich practice. Action Teach. Educ. 2016, 38, 410–425. [Google Scholar] [CrossRef]
  27. Burns, R.W.; Badiali, B. Unearthing the complexities of clinical pedagogy in supervision: Identifying the pedagogical skills of supervisors. Action Teach. Educ. 2016, 38, 156–174. [Google Scholar] [CrossRef]
  28. Cuenca, A. In loco paedagogus: The pedagogy of a novice university supervisor. Stud. Teach. Educ. 2010, 6, 29–43. [Google Scholar] [CrossRef]
  29. Zeichner, K. Rethinking the connections between campus courses and field experiences in college- and university-based teacher education. J. Teach. Educ. 2010, 61, 89–99. [Google Scholar] [CrossRef]
  30. Goodwin, A.L.; Smith, L.; Souto-Manning, M.; Cheruvu, R.; Tan, M.Y.; Reed, R.; Taveras, L. What should teacher educators know and be able to do? Perspectives from practicing teacher educators. J. Teach. Educ. 2014, 65, 284–302. [Google Scholar] [CrossRef]
  31. Cochran-Smith, M.; Grudnoff, L.; Orland-Barak, L.; Smith, K. Educating teacher educators: International perspectives. New Educ. 2020, 16, 5–24. [Google Scholar] [CrossRef]
  32. Jacobs, J.; Hogarty, K.; Burns, R.W. Elementary preservice teacher field supervision: A survey of teacher education programs. Action Teach. Educ. 2017, 39, 172–186. [Google Scholar] [CrossRef]
  33. McCormack, B.; Baecher, L.H.; Cuenca, A. University-based teacher supervisors: Their voices, their dilemmas. J. Educ. Superv. 2019, 2, 22–37. [Google Scholar] [CrossRef]
  34. Griffin, L.B.; Watson, D.; Liggett, T. “I didn’t see it as a cultural thing”: Supervisors of student teachers define and describe culturally responsive supervision. Democr. Educ. 2016, 24, 3. [Google Scholar]
  35. Alexander, M. Pedagogy, practice, and mentorship: Core elements of connecting theory to practice in teacher educator preparation programs. J. Educ. Superv. 2019, 2, 83–103. [Google Scholar] [CrossRef]
  36. Soslau, E. Opportunities to develop adaptive teaching expertise during supervisory conferences. Teach. Teach. Educ. 2012, 28, 768–779. [Google Scholar] [CrossRef]
  37. Vertemara, V.; Flushman, T. Emphasis of university supervisor feedback to teacher candidates. J. Stud. Res. 2017, 6, 45–55. [Google Scholar] [CrossRef]
  38. Graham, P. Classroom-based assessment: Changing knowledge and practice through preservice teacher education. Teach. Teach. Educ. 2005, 21, 607–621. [Google Scholar] [CrossRef]
  39. Merriam, S.; Tisdell, E.J. Qualitative Research: A Guide to Design and Implementation; Jossey-Bass: San Francisco, CA, USA, 2016. [Google Scholar]
  40. Cochran-Smith, M.; Lytle, S.L. Communities for teacher research: Fringe or forefront? Am. J. Educ. 1992, 100, 298–324. [Google Scholar] [CrossRef]
  41. Yendol-Hoppey, D.; Jacobs, J.; Gregory, A.; League, M. Inquiry as a tool for professional development school improvement: Four illustrations. Action Teach. Educ. 2008, 30, 23–38. [Google Scholar] [CrossRef]
  42. Lincoln, Y.S.; Guba, E.G. Naturalistic Inquiry; Sage: Beverly Hills, CA, USA, 1985. [Google Scholar]
  43. Miles, M.B.; Huberman, A.M. Qualitative Data Analysis, 2nd ed.; SAGE Publications: Thousand Oaks, CA, USA, 1994. [Google Scholar]
  44. Maxwell, J.A. Validity: How might you be wrong? In Qualitative Educational Research: Readings in Reflexive Methodology and Transformative Practice; Luttrell, W., Ed.; Routledge: New York, NY, USA, 1996; pp. 279–287. [Google Scholar]
  45. McTighe, J. 8 Quick Checks for Understanding. Edutopia. Available online: https://www.edutopia.org/article/8-quick-checks-understanding/ (accessed on 28 January 2021).
  46. Moss, C.; Brookhart, S. Advancing Formative Assessment in Every Classroom: A Guide for Instructional Leaders; ASCD: Alexandria, VA, USA, 2019. [Google Scholar]
  47. DeLuca, C. Assessment literacy theory: Pragmatics. In Language Assessment Literacy and Competence, Volume 1: Research and Reflections from the Field; Cambridge University Press & Assessment: Cambridge, UK, 2024; p. 250. [Google Scholar]
  48. Van Orman, D.S.; Gotch, C.M.; Carbonneau, K.J. Preparing teacher candidates to assess for learning: A systematic review. Rev. Educ. Res. 2024. [Google Scholar] [CrossRef]
Table 1. Assessment dimensions.

Dimension: Assessment Purpose. Approaches: assessment of learning (AoL), assessment for learning (AfL), and assessment as learning (AaL).
AoL—reflects a summative assessment of students’ attainment of learning objectives following some period of instruction and for which a grade or mark is assigned.
AfL—reflects formative assessment practices used by both teachers and students to determine current progress toward the attainment of learning objectives and to identify next steps for continued learning and instruction.
AaL—primarily student-centered and involves students in active self-assessment and the development of metacognitive and self-regulated learning skills [15].

Dimension: Assessment Process. Approaches: design, use and scoring, and communication of assessment.
Design—refers to teachers developing or modifying publisher-created assessments to ensure they are reliable, aligned with learning objectives, and measure student learning.
Use—refers to when teachers not only use scoring protocols (e.g., rubrics, checklists) or grading schemes (e.g., letter or number grades) but also modify these protocols or schemes for particular students or differing contexts.
Communication—includes the teacher interpreting assessment results, providing feedback to students, and communicating student progress to parents.

Dimension: Assessment Fairness. Approaches: standardized, equitable, and differentiated assessment.
Standardized—refers to maintaining equal assessment protocols for all students.
Equitable—occurs when teachers differentiate assessments for formally identified students, such as those in special education or English language learners.
Differentiated—occurs when teachers differentiate assessments for all students based on their individual learning needs.

Dimension: Assessment Theory. Approaches: consistent, contextual, and balanced assessment.
Consistent—refers to reliability within assessments, across time periods, and among their peers.
Contextual—refers to the validity of an assessment.
Balanced—takes into consideration both the reliability and validity of an assessment.

Adapted with permission from Barnes et al. [14].