The Quality of Classroom Assessments

A special issue of Education Sciences (ISSN 2227-7102).

Deadline for manuscript submissions: closed (15 March 2019) | Viewed by 8197

Special Issue Editor


Prof. Dr. Anders Jönsson
Guest Editor
Kristianstad University, Elmetorpsvägen 15, 291 39 Kristianstad, Sweden
Interests: classroom assessment; feedback; formative assessment; grading

Special Issue Information

Dear Colleagues,

The quality of assessments has traditionally been evaluated in terms of reliability and validity. These concepts, however, have mainly evolved within the psychometric tradition, and their relevance for classroom assessments may be questioned. The problematic relationship between psychometric conceptualizations of reliability and validity, on the one hand, and classroom assessments, on the other, is particularly pronounced for validity, since “construct validity” is based on the notion of indirect measurement of latent (i.e., non-visible) constructs. While test developers need to ascertain that scores are interpreted and used in an acceptable way, this does not necessarily apply to the assessment of tangible products, such as lab reports, essays, or oral presentations. In classroom assessments, the quality of student performance may therefore be assessed directly, without making reference to students’ ability, aptitude, or any other general skills or latent features. Furthermore, in relation to formative assessment, teachers need to identify strengths and weaknesses in student performance in order to provide feedback to the student. Such feedback needs to be task-related, context-sensitive, and focused on the quality of performance in order to support student learning, which, again, means that it is not necessary to make reference to any latent features of the student.

The aim of this Special Issue is to bring together research that may advance the discussion about how to judge the quality of classroom assessments, particularly in relation to reliability and validity, but also in relation to other concepts, such as fairness, alignment, or usability. Examples of relevant questions include: How can the reliability of assessments be established within a classroom? How can teachers ascertain the fairness and/or validity of their assessment decisions (such as grades)? How can the concepts of reliability and validity be used when assessing the quality of products, without the use of scores? What kind of assessment information is valued and used by teachers? Which quality criteria are relevant for classroom assessments, either in addition to, or in place of, reliability and validity?

The Special Issue is intended to contribute to the line of research that has questioned the relevance and adequacy of psychometric conceptualizations of reliability and validity for classroom assessments and student learning, perhaps most notably “A Systems Approach to Educational Testing” by Frederiksen and Collins (1989) and “Validity in Educational Assessment” by Moss, Girard, and Haniford (2006). A number of researchers have also proposed adjustments to traditional quality criteria for assessments, while others have proposed replacing them altogether; this line of research has been ably summarized by Baartman and her colleagues (2006).

Prof. Dr. Anders Jönsson
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a double-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Education Sciences is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1800 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • assessment decisions
  • classroom assessment
  • feedback
  • grading
  • reliability
  • validity

Published Papers (2 papers)


Research

19 pages, 4874 KiB  
Article
Insights Chinese Primary Mathematics Teachers Gained into their Students’ Learning from Using Classroom Assessment Techniques
by Xiaoyan Zhao, Marja van den Heuvel-Panhuizen and Michiel Veldhuis
Educ. Sci. 2019, 9(2), 150; https://doi.org/10.3390/educsci9020150 - 18 Jun 2019
Cited by 2 | Viewed by 4210
Abstract
In this study, we explored the insights that Chinese primary mathematics teachers gained into their students’ mathematical understanding from using classroom assessment techniques (CATs). CATs are short teacher-initiated targeted assessment activities proximate to the textbook, which teachers can use in their daily practice to make informed instructional decisions. Twenty-five third-grade teachers participated in a two-week program of implementing eight CATs focusing on the multiplication of two-digit numbers, and filled in feedback forms after using the CATs. When their responses described specific information about their students, emphasized the novelty of the gained information, or referred to a fitting instructional adaptation, and these reactions went together with references to the mathematics content of the CATs, the teachers’ responses were considered as evidence of gained insights into their students’ mathematics understanding. This was the case for three-quarters of the teachers, but the number of gained insights differed. Five teachers gained insights from five or more CATs, while 14 teachers did so only from three or fewer CATs, and six teachers showed no clear evidence of new insights at all. Despite the differences in levels of gained insights, all the teachers paid more attention to descriptions of students’ performance than to possible instructional adaptations.
(This article belongs to the Special Issue The Quality of Classroom Assessments)

14 pages, 208 KiB  
Article
Assessing English: A Comparison between Canada and England’s Assessment Procedures
by Bethan Marshall and Simon Gibbons
Educ. Sci. 2018, 8(4), 211; https://doi.org/10.3390/educsci8040211 - 05 Dec 2018
Cited by 1 | Viewed by 3622
Abstract
English as a subject used to be assessed using course-based or portfolio assessments, but now it is increasingly examined through terminal tests. Canada is an exception to this rule. This paper compares the way English is assessed in England and Canada and looks to the ways in which the kind of assessment undertaken affects the practices of English teachers, both in the teaching of summative and formative assessment.
(This article belongs to the Special Issue The Quality of Classroom Assessments)