Article

Students’ Perceptions of Involvement in the Assessment of Oral Competence in English as a Second Language

by
Lise Vikan Sandvik
* and
Oda Aasmundstad Sommervold
Department of Teacher Education, Norwegian University of Science and Technology, 7491 Trondheim, Norway
*
Author to whom correspondence should be addressed.
Languages 2021, 6(4), 203; https://doi.org/10.3390/languages6040203
Submission received: 18 September 2021 / Revised: 2 December 2021 / Accepted: 3 December 2021 / Published: 8 December 2021
(This article belongs to the Special Issue Recent Developments in Language Testing and Assessment)

Abstract
This mixed-method study examined students’ perceptions of involvement in the assessment of oral competence in English in Norwegian upper secondary schools. Student involvement in assessment is seen as a key factor in enhancing students’ learning outcomes and motivation. Previous research has, however, shown that student involvement and the assessment of oral competence in English as a second language classes are challenging. Surveys (N = 116) and two focus group interviews (N = 8) were used. The findings revealed that the students wanted to be more involved in the assessment practice and saw this increased involvement as a way to enhance their oral competence in English. The students also expressed uncertainty about the criteria by which they were assessed. The findings suggest that increased involvement in developing goals and criteria and more dialogue-based feedback are beneficial measures for strengthening students’ learning outcomes.

1. Introduction

This study aimed to investigate students’ perceptions of involvement in the formative assessment practice of oral competence in English in upper secondary schools in Norway. In recent decades, formative assessment has become a central aspect of education to enhance students’ learning outcomes (Black and Wiliam 1998) and has been an area of focus in the Norwegian school system throughout the 2000s (Hopfenbeck et al. 2015; Sandvik 2019).
In the Norwegian grading system, exam grades and the final grades awarded by the subject teacher together constitute the final assessment. Over 80% of the grades given in upper secondary schools are teacher grades awarded without the involvement of external examiners. The education system therefore relies on teachers being able to make fair assessments of student performance. The Directorate for Education and Training is responsible for the development, implementation and management of a coherent test and assessment system in Norway. The Directorate administers the centrally written examinations in upper secondary education, while the individual counties are responsible for locally given exams. English as a second language is a subject in Norwegian schools from Years 1 to 11; the students receive a final grade from the teacher, while 20% of the students are selected for an externally assessed examination. The students are informed about which subjects they will be examined in only a few weeks before the exam takes place. Research in Norway reveals that schools have different understandings of the national curriculum and the purpose of assessment (Lysne 2006; Hopfenbeck et al. 2015), and that teachers’ final grading practices seem to vary between schools (Prøitz 2013; Prøitz and Borgen 2010). Studies have also shown that teachers collaborate to varying degrees in professional learning communities on Assessment for Learning (AfL) and final grading (Norwegian Directorate for Education and Training 2018; Sandvik 2019).
To deal with these challenges, teachers’ assessment literacy has in recent years been addressed nationally through initiatives such as the Assessment for Learning initiative. This initiative lasted from 2010 to 2018 and aimed to develop a more learning-oriented assessment practice and assessment culture in schools across the country (Norwegian Directorate for Education and Training 2019). Schools were motivated to participate by government funding. Student involvement in assessment is stated in the Education Act as a central principle for students’ motivation and understanding (Norwegian Directorate for Education and Training 2018). The other principles state that students should understand what to learn and what is expected of them, that they should receive feedback on the quality of their work or performance, and that they should be given advice on how to improve. Successive curriculum reforms have strengthened the formative qualities of these assessments by setting clearer requirements and strengthening schools’ systematic work (Hodgson et al. 2010; Hopfenbeck et al. 2015).
Despite these efforts to strengthen the assessment culture in Norwegian schools, challenges remain. One of the areas that has been highlighted as challenging to implement is student involvement (Norwegian Directorate for Education and Training 2019; Sandvik and Buland 2013). Student involvement is one of the four principles of AfL that are seen as central to promoting learning (Norwegian Directorate for Education and Training 2018). This principle holds that students learn better when they are involved in assessing their own work, competencies, and academic development, and when they are able to participate in planning, carrying out, and assessing their education (Norwegian Directorate for Education and Training 2018). This is also recognised in the assessment regulations in the Education Act, which hold that the purpose of self-assessment is that students should reflect on and become aware of their own learning (The Education Act 2009, § 4–8). Research reporting on the AfL initiative showed that student involvement in assessment was not implemented as intended. There was an overall lack of organisational concern about assessment practices in this project, and the participating schools reported that they did not have a common practice within the school (Hopfenbeck et al. 2015). Similarly, other findings indicated that students wished to be more involved but that teachers lacked an understanding of how to implement this (Sandvik and Buland 2013). These findings were also reflected in the results of the Pupil Survey of 2018, where students reported a decreasing degree of AfL in the higher grades and a significantly lower score in the general studies programmes (Wendelborg et al. 2019). Moreover, differences between subjects relating to formative assessment and involvement were also found: Havnes et al. (2012) reported less satisfaction with feedback and student involvement in language subjects (in this case, English and Norwegian) compared to vocational subjects.
As early as 1990, Zimmerman argued that “self-regulated learners select and use self-regulated learning processes to achieve desired academic outcomes on the basis of feedback about learning effectiveness and skill” (Zimmerman 1990, pp. 6–7). There is extensive support in the literature for involving learners in assessment (Andrade 2010; Pekrun et al. 2011; Andrade and Brookhart 2019; Papanthymou and Darra 2019). Feedback, as found in continuous informal assessments, is a dialogue between the learner and the teacher that enhances learning, depending on how the learner experiences the dialogue, i.e., on being listened to, respected, and trusted. A recent concept embracing the above is Smith et al.’s (2016) “responsive pedagogy”, developed from self-regulation, self-efficacy, and feedback theories. Papanthymou and Darra (2019, p. 213) stated that “self-assessment is one of the most important skills that learners need to have for future professional development and lifelong learning as it develops learners’ ability to be self-assessors of their own learning”. This study leans on such an understanding of assessment.
The English subject curriculum has nationally defined learning goals (Norwegian Directorate for Education and Training 2020) that are interpreted and concretised by teachers (and students). Oral competence constitutes an important part of being a proficient language user. For students to develop their oral competence, they need the right strategies and tools to improve further, and formative assessment is put forward as a prerequisite for ensuring this (Black and Wiliam 1998; Andrade 2010). Increased involvement in learning strategies and self-assessment contributes to students’ understanding of what to learn, as well as of how and what they should focus on in assessment situations (Sandvik et al. 2021). The English subject curriculum emphasises students being actively involved in their own learning processes through assessing their own competence and reflecting on their own development (Norwegian Directorate for Education and Training 2020). For the teacher, this entails facilitating involvement and the desire to learn by employing various strategies and resources to develop students’ skills (Norwegian Directorate for Education and Training 2020). However, it has been shown that the learning processes in English are constrained by national tests and exams, which can cause a backwash effect where the English subject curriculum goals are downgraded in favour of the national assessment regulations (Sandvik and Buland 2013). This can lead to many teachers being more concerned with preparing students for the various tests and exams than with ensuring good learning processes that contribute to developing students’ intercultural understanding, communication, and identity development.
Oral competence has proven hard to assess, which relates both to the reliability of the grades given in the final exam and to the feedback given to students (Bøhn 2016; Dobson 2009). Moreover, oral assessment has most often been thought of as summative rather than formative in form (Dobson 2009). With the turn to communicative competence in the English classroom, assessment became more focused on meaning-oriented language use in context (Chvala and Graedler 2010).
In this study, we sought to explore students’ perceptions of involvement in the assessment practice of oral competence. The Common European Framework of Reference for Languages (CEFR) has greatly influenced the English subject curriculum in Norway, with its communicative approach and focus on objectives and content rather than specific teaching methods (Simensen 2011). Its aim is to enhance mutual enrichment and understanding, facilitate communication and interaction, and facilitate greater convergence in learning and teaching languages across Europe; thus, it has an overall focus on the communicative competence of the language learner, with a focus on both oral production and oral interaction (Council of Europe 2001).
The purpose of this study was to explore how involvement in the assessment practice of oral competence in upper secondary schools is experienced by looking at this phenomenon through students’ perspectives. Thus, the overarching research question for this study is the following: How do students in upper secondary schools in Norway perceive their involvement in the assessment practice of oral competence in English? We limited the scope of the study by addressing the following research questions:
  • How do students participate in the assessment practice and what are their attitudes towards this?
  • How do students understand the learning goals and assessment criteria as defined in the subject syllabus and as developed by the teacher?

2. Previous Research

The potential benefits of formative assessment are widely recognised and supported by a vast body of research in the area (Black and Wiliam 1998, 2018; Schildkamp et al. 2020). More specifically, when it comes to student involvement in the assessment practice, previous research has shown that how students feel about involvement is conditioned by how AfL practices are implemented in the classroom (Leitch et al. 2007; DeLuca et al. 2018). Moreover, the teacher is considered to be the most important factor when students consider their participation and engagement in learning. Furthermore, involving students in setting goals, monitoring and evaluating performance, and selecting rewards has proven to have greater positive effects on achievement than when these are controlled by the teacher alone (Hattie 2009). Student involvement in assessment is a tool that can build student confidence, which is especially beneficial for low-performing students (Stiggins and Chappuis 2005).
Research on student involvement has shown positive effects, such as professional growth and development, consciousness of goal attainment (metacognitive development), and critical thinking, and it must also be seen as a basis for adapted teaching (Andrade and Brookhart 2019; Papanthymou and Darra 2019). Moreover, how students feel about involvement has been shown to be conditioned by how Assessment for Learning is implemented in the classroom, and the teacher is, in this regard, the most important factor when students consider their participation and engagement in learning (Leitch et al. 2007). Wiliam and Thompson (2008) identified five key strategies conceptualising formative assessment. These strategies each adhere to the processes of learning, and different activities can be used to pursue each of them in the classroom: while the teacher is responsible for clarifying goals and criteria for the students, the learners and their peers are responsible for understanding and sharing these. Similarly, in the next two stages, the teacher needs to engineer discussions and learning tasks and provide feedback, while the learners need to be active, use each other as resources, and take responsibility for their own learning. In other words, both the teacher and the students need to be active throughout the assessment process.
Several studies have examined formative assessment practices in general, with a focus on students and/or teachers in English as a second language (ESL) classrooms. Havnes et al. (2012) found that feedback practices are, to a certain extent, subject-related. Moreover, their study showed that students are involved to a lesser degree in their English classes than in their Norwegian classes. Burner (2016) provided insight into how students and teachers perceived formative assessment of writing in English. The study showed that there are contradictions in how they respond to formative assessment and that students experience limited involvement in assessment practices. Other studies have shown that students who are aware of the learning goals also perceive feedback as more useful (Vattøy and Smith 2019). Furthermore, the same study showed that feedback in English needs to be followed by more formative assessment practices, which entails an emphasis on reflection when working with learning goals and assessment criteria, and in assessment situations. In addition, the study draws attention to the importance of increased student involvement in the subject.
Vattøy and Smith (2019) sought to highlight the relationship between students’ and teachers’ perceptions and practices regarding feedback. More specifically, the study looked at external goal orientation, self-regulation, self-efficacy, and EFL teaching. The results indicated that students do not find teachers’ feedback useful, and that knowledge of the learning goals and self-regulation are necessary for this feedback to be useful. Similarly, Gamlem and Smith (2013) showed that students find feedback useful, but that this depends on the teacher providing time and opportunity to revise their work. Moreover, students find it challenging to provide feedback to peers, as they are often too nice to one another. Thus, working on feedback skills and criteria is important.
These findings are also supported by Van Der Kleij and Adie (2020), who investigated teachers’ and students’ perceptions of oral feedback in the classroom practice in English (as a first language) and mathematics. The study found that there are diverging perceptions of feedback between students and the teacher, which are context-, subject-, and individual-dependent. While the teacher indicated that her feedback in English went beyond corrective information, the students mostly perceived the feedback as corrective. This can have important implications for students’ learning outcomes. When feedback is not perceived as planned by the teacher, it is unlikely that it will have the intended effects of supporting students’ learning (Van Der Kleij and Adie 2020).
There are a few studies on the assessment of oral competence in the Norwegian context (Svenkerud et al. 2012). Bøhn (2016) provided an important contribution to the field of oral skills and assessment in his examination of rating processes and outcomes in an oral English exam. Bøhn (2015) shed light on teachers’ understanding of the constructs to be tested and revealed the problems of not having a common rating scale at the national level. Furthermore, Bøhn and Hansen (2017) showed that teachers are oriented towards intelligibility when assessing English pronunciation while disagreeing on the importance of nativeness. Moreover, in his study on assessing content, Bøhn (2018) found that teachers have a general conception of the content dimension and that they are more concerned with skills and processes than with specific subject matter.

3. Theoretical Foundations

The term communicative competence can be defined differently depending on how one chooses to classify its components. The CEFR sees communicative competence as composed of three components: linguistic, sociolinguistic, and pragmatic competences (Council of Europe 2001). Similarly, Bachman and Palmer (1996) saw communicative competence as composed of five components: language knowledge, topical knowledge, personal characteristics, strategic competence, and affective factors. While this model was designed for language testing, it nevertheless offers a valuable perspective on communicative competence, because it is relevant to how students use communication strategies in oral competence. Bøhn (2016) stated that this model takes a cognitive perspective on language ability, as the construct is something residing in the individual. Oral competence can be seen as composed of language knowledge, topical knowledge, and strategic competence, with these components linked together.
Previous research has also shown that teachers want to implement language learning strategies but feel that their knowledge of them is too limited (Haukås 2012). Haukås (2012) further stated that the main obstacle to implementing language learning strategies successfully in the classroom is a lack of student involvement.
Bachman and Palmer (1996) identified three important areas of metacognitive strategy use: goal setting, assessment, and planning. These strategies are relevant for language teachers to use in teaching, as they help learners become aware of their own learning processes. Anderson’s (2002) model of metacognition takes a pedagogical approach and illustrates how teachers can work with each of the following areas to enhance students’ metacognitive skills: preparing and planning for learning, helping students select and use learning strategies, monitoring strategy use, orchestrating various strategies, and evaluating strategy use and learning (Anderson 2002, p. 2). The first component highlights the students’ role and how students need to think through what they want to accomplish and how they can accomplish it in relation to a specific learning goal (Anderson 2002). The teacher’s role is also significant, as they can make the learning goal(s) explicit and guide students in setting their own goals, underscoring the importance of student involvement. The second component encompasses students being explicitly taught different strategies and when to use them. The goal is for students to be conscious of their choices throughout their learning processes. The third component highlights how monitoring strategy use leads to an increased ability to reach learning goals. The fourth component underlines the importance of being able to use more than one strategy and knowing when to use each; this ability is what distinguishes strong from weak second language learners. The final component highlights students’ ability to evaluate the effectiveness of what they are doing.

4. Materials and Methods

This study was conducted in an urban upper secondary school in a medium-sized city in Norway with first-year students (aged 16 years). The reason for targeting these students was the limited research on the assessment of oral competence in English in upper secondary schools in Norway (Bøhn 2016; Svenkerud et al. 2012). In addition, the first year of upper secondary school is the last year in which English is a mandatory subject, which means that there is both time pressure with regard to the curriculum and pressure on performance, as the students receive their final grade in the subject at the end of the year. This school was recruited as it was part of an ongoing research project on assessment and was, therefore, open to participating in this study as well. The project was approved for data collection by the Norwegian Centre for Research Data (NSD).
This study employed both quantitative and qualitative methods in an explanatory sequential mixed-method design (Creswell 2014; see Table 1). The intention of the explanatory sequential mixed-method design is to use the qualitative data to further explain and explore the initial quantitative findings (Creswell 2014). This approach also triangulates the data. The purpose of triangulation is to view reality from different angles, which, in turn, may provide a more accurate and nuanced picture (Postholm and Jacobsen 2018). This also strengthens the material, as the findings are based on more than one source and are, therefore, more representative of a real-world context.
According to Creswell (2013), in order to conduct a valid phenomenological interview, researchers need to ensure that participants have experienced the same phenomena. Involvement and assessment of oral competence are phenomena that students regularly encounter in their daily life at school. Moreover, the participants attended the same school. In addition, the participants in the interviews were strategically selected based on two criteria: the students’ achievement level and an equal distribution of gender. To this end, the teachers selected students who matched these criteria, and the students were then asked whether they were interested in participating in the interviews. One group consisted of students at a medium achievement level, while the other consisted of students at a high level. The reason for this choice was twofold: (1) students at different levels might have contrasting experiences and opinions, and (2) students might be more comfortable discussing their opinions in a group of like-minded participants. Both groups consisted of two boys and two girls.

4.1. Survey

The purpose of the survey was to obtain a statistical description of the population (Ringdal 2013, p. 190; Creswell 2014). As little research has been conducted on the assessment of oral competence and involvement in Norway, a quantitative approach formed the basis of the data collection by providing an overview of the general tendencies in this area of research. Based on the research literature on the assessment of oral competence in English and student involvement, we theorised five concepts: Participation in assessment, understanding of oral skills, understanding of goals and assessment criteria, language awareness, and learning outcomes. Participation in assessment focused on aspects related to how students viewed their own participation in the assessment practice. This was specifically linked to how students viewed participation in their own language development, decisions surrounding assessment methods, and communication. Understanding of goals and assessment criteria encompassed items that were related to the students’ reported understanding of goals and assessment criteria, how goals and assessment criteria were communicated by the teacher, and the students’ involvement in developing and discussing them. Language awareness encompassed items that were used as an indication of applying learning strategies. This included whether the students knew how to approach a task in order to succeed as well as their own awareness of developing oral competence. Learning outcome encompassed items that looked at how students felt they benefitted from various assessment methods. The statements asked students to evaluate to what degree they felt that assessment by the teacher, self-assessment, peer assessment, working with feedback, and grades from the teacher helped them develop their oral skills. Apart from the background variables (gender, class, grade interval, and a list of assessment criteria), all items were at the ordinal level and a five-point Likert scale was used, ranging from 1 = totally disagree to 5 = totally agree. The questionnaire was made available to the students on a digital platform.
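As an illustration only, the scoring logic described above, with five-point Likert items grouped under the theorised concepts, can be sketched in a few lines of Python using pandas. The item names and response values below are invented placeholders, not the study’s questionnaire data:

import pandas as pd

# Hypothetical responses on a five-point Likert scale
# (1 = totally disagree ... 5 = totally agree). Item names and values
# are illustrative placeholders, not the study's actual questionnaire.
responses = pd.DataFrame({
    "decide_tasks":     [2, 1, 3, 2, 4, 1],
    "decide_methods":   [1, 2, 2, 3, 3, 2],
    "class_discussion": [3, 4, 2, 3, 5, 3],
    "develop_criteria": [2, 3, 1, 2, 4, 3],
})

# Each theorised concept is scored as the mean of its constituent items,
# a common way of aggregating Likert items into a scale score.
concepts = {
    "participation_in_assessment": ["decide_tasks", "decide_methods",
                                    "class_discussion"],
    "goals_and_criteria":          ["develop_criteria"],
}
concept_scores = pd.DataFrame(
    {name: responses[items].mean(axis=1) for name, items in concepts.items()}
)
print(concept_scores.mean())  # concept-level means across respondents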

4.2. Focus Group Interviews

Following the explanatory sequential mixed-method design described by Creswell (2014), the semi-structured focus group interviews were conducted after we had analysed the material from the quantitative data collection. The interview participants were suggested by their teachers, based on the given criteria, and asked whether they were willing to participate further in the study. The interviews were audio-recorded and transcribed, and the students were given fictitious names.

4.3. Limitations of the Research Design

There are, however, some concerns with employing such a design. In an explanatory mixed-method design, the accuracy of the findings may be affected by how the quantitative data are followed up. How a researcher chooses to proceed with the qualitative data collection needs to be carefully thought through, as there is a risk of overlooking important issues, which affects the overall validity (Creswell 2014). For example, by focusing too much on a specific finding from the quantitative data, there is a risk of overlooking other issues that might need further examination. We therefore analysed the quantitative data thoroughly to ensure that all aspects of the material were accounted for. Another limitation is that the findings may be invalidated by using different samples, and by the sample size, in each phase (Creswell 2014). Both phases of this study were built on the same selection of students, and this selection was based on voluntary participation.

5. Analysis and Results

Student survey data were analysed through descriptive statistics (Creswell 2014). Descriptive statistics were used to provide contextual information on participants and general response trends. The qualitative data were analysed using the hermeneutic phenomenological approach described by van Manen (1990). This process reduced the data material, as it highlighted the most essential elements. The analysis of the quantitative data resulted in three main topics that we saw as important to examine further in the interviews: (1) participation and attitudes, (2) assessment practice, and (3) language awareness. The quantitative and qualitative findings are presented together and are structured according to the research questions as follows: students’ participation and attitudes towards involvement, understanding of oral competence, understanding of learning goals and assessment criteria, and view of learning outcome. Table 2 below summarises the main findings from both the quantitative and qualitative data analyses.
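As a minimal sketch of this analysis step (the numbers are fabricated for illustration and are not the survey data), the descriptive statistics reported in the tables that follow, mean (M), standard deviation (SD), and skewness, can be computed per item as follows:

import pandas as pd
from scipy.stats import skew

# Fabricated five-point Likert responses for two illustrative items;
# placeholders only, not the survey data analysed in this study.
items = pd.DataFrame({
    "self_assessment": [2, 3, 1, 4, 2, 3, 2, 5],
    "peer_assessment": [1, 2, 1, 3, 1, 2, 2, 1],
})

summary = pd.DataFrame({
    "M":    items.mean(),          # mean response per item
    "SD":   items.std(),           # standard deviation per item
    "skew": items.apply(skew),     # positive values = right-skewed, i.e.
})                                 # responses cluster at the low end
print(summary.round(2))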

5.1. Students’ Participation and Attitudes towards Involvement

Table 3 shows the students’ views on participation and dialogue in class. The four items distinguished between active involvement of the individual student (Q14 and Q15) and involvement of the class as a whole (Q16 and Q17). The findings on the involvement of the individual student show that close to half of the students reported that they are not actively involved in deciding tasks and assessment methods: 48.3% and 45.7% of the students, respectively, answered almost never or not often.
This is also reflected in how the whole class is involved in discussing oral assessment methods and in conversations about developing oral competence. The mean scores show that the students were relatively more involved in discussing oral assessment methods (M = 2.96). Furthermore, the mean scores also show that the students were relatively more positive towards how the whole class is involved than towards how the individual student is involved. The standard deviation (SD) shows that the distribution is relatively similar across the items.
These findings were further explored in the focus group interviews, which showed that the students had different experiences. Some of the students were familiar with participating in decisions on assessment methods, as they used anonymous voting in class. In this way, the whole class was included in deciding what type of assessment method they were going to use. The students stated that they were pleased with this and liked being included in these decisions. One student, Anna, pointed out that some students found presentations in class uncomfortable, but by being part of deciding the assessment method, the class often ended up using videos or group discussions instead:
Anna: I like to be part of making the decisions this way because we’ve never ended up with having presentations in front of the whole class. Most students want to use videos or conversations in small groups because otherwise you feel a pressure to perform in front of the whole class. It’s a bit uncomfortable sometimes. For many at least.
(A1)
Other students stated that they wanted to be more involved in deciding the assessment methods in their class. Similar to Anna, they pointed to students who were uncomfortable with the pressure of speaking English in front of the class and that they wished for other types of assessment than whole-class presentations. The students further stated that the teachers reviewed the most common mistakes in class, but this did not involve any further dialogue.

5.2. Self-Assessment and Roles in Assessment

Table 4 below shows the differences in how students participate in self- and peer assessment. The majority of the students reported that they rarely or almost never used self- and peer assessment (50% and 75%, respectively). Although the scores for both were low, the students were more likely to use self-assessment: peer assessment scored considerably lower (M = 1.91) than self-assessment (M = 2.53). Additionally, the distribution for peer assessment was markedly more right-skewed than that for self-assessment, meaning that the responses were clustered at the low end of the scale.
In the interviews, the students clearly stated that peer assessment had not been used in their English classes. This could indicate that the students who reported using it in the survey were thinking of past experiences, either from lower secondary school or from other subjects. When asked about self-assessment in the interviews, the participants stated that they had not used this in their English classes either. However, they were used to this way of working both from lower secondary school and from other subjects and seemed to have a clear understanding of what they perceived it to be. The students described it as taking part in assessing their own work and provided examples of both correcting their own texts and having a conversation with the teacher. They highlighted the need to argue for their choices as an important feature of self-assessment:
Casper: Assessing yourself […] In a conversation with the teacher, for instance. Then you can argue why you think this and that should affect your grade.
(B1)
It did, however, become apparent that the students’ understanding of self-assessment was limited in some areas, which is illustrated in the following quote:
Bea: […] we had this conversation with the teacher when we were going to receive our grade, [and the teacher] asked us what we thought and what we thought about what grade we deserved sort of, but that was it.
(A3)
There is, in other words, a certain limit to how students think about self-assessment. The students’ understanding of their own role in assessment was a topic highlighted throughout the interviews. The students shed light on this from a number of different perspectives and showed that they generally lacked awareness of their own role in the learning process and of how this affected their learning outcomes. When asked whether they found grades useful and whether they had any discussions about the grade they were given, Casper in group B stated the following:
Casper: No, but we should have. It’s a lot in English which is two-sided. There are a lot of things you can say, which are correct, that the teacher corrects. So, I think you should get the opportunity to defend yourself.
(B2)
As the extract shows, Casper expressed a wish to be more involved when the grade is set and to have a chance to defend himself. While the other students in the group also agreed with this, one student, Camilla, pointed out that the teacher had, in fact, discussed this with her after an assignment. While the students expressed a wish to be more involved in the assessment given by the teacher and clearly saw the benefit of this, they also saw it as the teacher’s responsibility to actually initiate this method of assessment. Similarly, when the participants in group B were asked how they prepared before an oral assignment, one participant stated the following:
Camilla: It is what it is. The teachers are supposed to have taught us what we need to know before the test, kind of.
Emma: That’s actually true […]
(B3)
What these extracts show is that students hold conflicting views of their own involvement. On the one hand, they express a wish to be more involved, for example, in reviewing feedback, while on the other, they take little responsibility for their own role in the assessment practice.

5.3. Awareness of Language Learning

As an indication of applying learning strategies, the students were asked whether they knew how to approach a task in order to succeed and whether they were aware of their own development of oral competence. The analysis showed only small differences, which was expected, as the two items are closely related (see Table 5). The majority of the students either partly or totally agreed with the two statements, which can be seen as an indication of awareness of language learning strategies.
The interviews provided further insight into this. The participants in group A talked about specific learning skills when asked about how they worked with oral language. Anna mentioned that she used the feedback from her teacher consciously; the feedback was valuable for her further development. Another student, Brage, used the assessment criteria to evaluate what level he was at in order to know what he needed to work on. The participants in group B had a more abstract approach to working with oral competence:
Erik: […] Like, I can’t conjugate a verb in English for example, but I know what to say in a certain sentence because I know it sounds right.
Camilla: Yeah, like there’s a difference between a verb you’re going to conjugate or like a sentence you’re supposed to say, because then you might be able to conjugate that word because it sounds right because… That’s how it’s supposed to be.
(B5)
The students reported a subconscious knowledge of the language, a feeling for what counts as correct English. When asked about this more specifically, Casper stated that he picked it up from movies, games, and everyday life without any further need to practice the language (B6). The students in both groups did, however, employ different strategies when they worked specifically on an oral assignment, such as a presentation or group conversation. Several of the students wrote a script or used keywords, depending on the type of assessment. Practicing difficult words or sentences was also mentioned, and the participants stated that they were able to find synonyms or other ways of expressing themselves if they forgot the words they had intended to use.

5.4. Understanding of Learning Goals and Assessment Criteria

Table 6 shows the students’ reported understanding of learning goals and assessment criteria, how goals and criteria are communicated by the teacher, and the students’ involvement in developing and discussing them. The students reported their own understanding of learning goals as good (M = 3.28). Additionally, 44% of the students partly or totally agreed that they understood the goal of the assessment and what they were meant to learn from it. Moreover, the students agreed with the statement that the learning goals and assessment criteria were clearly communicated by the teacher, with a mean score of 3.47. In contrast, when asked how they themselves participated in developing and discussing these goals and criteria, the analysis showed M = 2.68 and M = 2.95, respectively, which is lower than their perceived understanding.
The focus group interviews provided further insight into this topic. The students commented that they did not discuss learning goals or assessment criteria in class, but that this was provided to them by the teacher. It was also stated that the assessment criteria were made available on the learning platform, but not used actively by the students. The students stated that the criteria were logical, so they did not necessarily see the point of going into further detail in class. However, one student in group A, Brage, expressed uncertainty about the assessment criteria when they received feedback from the teacher:
Brage: It doesn’t really seem like we almost… It doesn’t really seem like we are assessed by that either. Uhm, because, at least when we get our feedback, we get a small text on Canvas, as it is called, a comment… and the criteria form isn’t there and it doesn’t say where we are, if we scored middle, high, or low. We don’t get… So, it doesn’t seem like we are assessed by the criteria form either.
Researcher: It is perhaps a bit difficult to see that connection?
Brage and Bea: Yeah.
(A9)
The other students pointed out that they were satisfied with the feedback they received from the teachers, but they too felt that the feedback could be more specific in terms of highlighting whether they achieved a high, medium, or low grade. In general, the students in group A were more concerned with features of language, such as pronunciation, fluency, and grammar, and were more critical of the focus on content in their assessments. The students’ focus on language features is interesting when compared to how they described their teacher’s focus in their English classes and assessments:
Bea: We have had one oral assessment, at least we did, but then we didn’t get feedback on how we spoke, we only got feedback on the content and stuff. It’s hard to improve when like we don’t know what went well and things like that.
(A5)
The students also called for a greater focus on explicit teaching of language features in English:
Anna: And then there’s this analysis of short stories and these kinds of things, instead of grammar and things like that which we had before.
Brage: So, it’s not this… We haven’t had a lot of oral things either which are based on how you actually speak English. It’s more your content […] but, like I haven’t felt that I’ve become better to write or speak English by learning this than if I had learnt about something that had been more English-based sort of.
Ask: I agree with them. A bit more English and a bit less about these other things.
(A7)
Although the students saw the learning goals and/or assessment criteria as logical and easy to understand when presented by the teacher, this did not necessarily transfer to their understanding of the feedback they received after an assignment. This can be seen in light of the students’ reported understanding of the aim of the assessment and what to learn, which showed a mean score of 3.28. Although the students generally felt that they had a good understanding of what was expected, this shows that there is still a level of uncertainty concerning the feedback they are given.
Furthermore, the students expressed that they did not set personal learning goals for developing their oral competence. The students in group A agreed that it was difficult to set learning goals for themselves without the assistance of the teacher. Bea stated the following:
Bea: It would have been nice if we had a conversation with the teacher where we sat down and talked and like yeah, what do you think went well, I think perhaps you can work more on this and that […]
(A11)
The students also requested more specific feedback, as they found it challenging to know what they needed to work on. This could be related to how the students are involved, as there is uncertainty concerning the feedback given. This was shown by Bea’s call for more explicit goals to aim for in her language learning. The students’ perceptions of the teacher’s feedback practice are further elaborated on in the following section.

5.5. Perception of Learning Outcomes

In the survey, the students were asked to take a stance on their learning outcomes when faced with different assessment methods. These included assessment by the teacher, self-assessment, peer assessment, working with feedback, and grades (see Table 7). These findings were further explored in the interviews. In particular, the interviews focused on students’ perceptions of the feedback practice and self-assessment in relation to their involvement.
As expected, the students felt that they benefitted the most from their teacher’s assessment and working with feedback, as these items showed M = 3.29 and M = 3.61, respectively. The use of grades was also perceived as important for their learning outcomes, M = 3.27. These aspects of assessment comprise both a passive and an active role for the student.
Although the quantitative analysis shows that students benefitted the most from the teacher’s feedback, the interviews revealed varying experiences with the perceived usefulness of the feedback. Anna in group A highlighted the importance of receiving feedback from the teacher, while also underscoring how she worked with it:
Anna: I take into account what the teacher tells me after those presentations we’ve had, so I have always got feedback on what I can do differently or something, so I have tried to take that and think about it for next time. But it is very important that we get that feedback, but I feel our English teacher has been quite good to help us.
(A12)
Other students pointed out that the feedback from the teacher could be difficult to understand and saw it as irrelevant to their learning. It is interesting to note that while some participants in group A claimed that there was too much focus on content, some participants in group B stated that the feedback was not helpful when it had a corrective function (such as grammatical correction). This tendency for the medium-performing students to focus more on language features is also apparent, to a certain extent, in other areas, such as the students’ understanding of oral competence and their understanding of learning goals and assessment criteria. The uncertainty that some students experience concerning what they are learning is expressed in the following quote:
Bea: We have had one oral assessment, at least we did, but then we didn’t get feedback on how we spoke, we only got feedback on the content and stuff. It’s hard to improve when like we don’t know what went well and things like that.
(A5)
This student seeks more guidance on the development of language features rather than on content. How the feedback is understood seems related to the students’ understanding of oral competence. Interestingly, the quantitative findings also show that students rated the learning outcome from grades positively. The majority of the students were either neutral towards or agreed with the statement that grades helped them develop their oral competence.
The analysis of the interviews showed some differences between the medium- and high-performing groups. While both groups talked about the importance of feedback in addition to grades, the medium-performing students in group A highlighted how grades helped their motivation and their understanding:
Ask: It shows you how good you have been really. So, then you can improve if you… even if you get a bad grade. Try to do it a bit better.
Brage: […] So it’s hard to know when it’s feedback without a grade where you’re at […]
Anna: I also feel that the grade might motivate you to do well in the subject […]
(A13)
These findings also need to be seen in relation to how the students perceive their involvement in the feedback practice. While an important indicator of the students’ level, grades are not seen as enough on their own but as a valuable addition to feedback. The students also requested to be more involved so as to reach a common understanding with the teacher:
Brage: I feel that grades help me improve, but not the way we’ve been given the assessment, the way we got it. For example, when we got the overall achievement grade now, we were just told the final grade, but we didn’t go through the criteria in each of the grades.
(A14)
It is evident that it is the feedback and the dialogue with the teacher that are considered useful for the student’s further development, not the grade itself.
This is further reflected in the students’ perceptions of the learning outcome from self-assessment (see Table 8). The students were positive towards the potential learning outcome of self-assessment; however, as shown in Section 5.2, this was rarely used in English classes. The majority of the students rated the learning outcome of peer assessment as either neutral or negative. The findings show that 37% of the students partly or totally agreed that they develop their oral skills well when assessing themselves.
As shown earlier in the findings, the students stated in the interviews that they had not used self-assessment in their English classes. The students did, however, express a wish to participate more in their own assessment practice through self-assessment and dialogue, as they perceived this to enhance their learning outcome:
Brage: I’m thinking at least if you have an oral conversation with your teacher and then you assess together, that maybe you learn more. Yes, of course, the teacher is guaranteed to know more than you, that’s obvious. So, if you go through it together and then the teacher points out that you have to do this and this and this, then you’re much more aware of it than if you get a comment on the learning platform. And then you can also argue, but why isn’t this the way to do it, and yeah… It’s perhaps more specific feedback.
Bea: Yes, the teacher doesn’t know for sure what you have focused on, so it might be good to say… Talk with the teacher about it.
Researcher: Yes, so you can make that clear?
Bea: Yes.
Ask: I think you get a good outcome from that. Doing that.
(A15)
The students have a clear understanding of the potential benefits of being more involved in this way. At the same time, the findings from Section 5.2 need to be kept in mind, as the students reported little use of self-assessment and of this type of dialogue in their English classes. Self-assessment was also seen as difficult because the students were afraid of assessing themselves too highly. This was explained by the risk of seeming selfish and the potential downfall of not achieving the grade they thought they would get. At the same time, the students expressed that they had a good understanding of what level they were at and appreciated the opportunity to argue for why they deserved a particular grade. One student in group B pointed to the importance of the final product when self-assessing:
Casper: The process might not have that much to say if the final product perhaps isn’t that good. Like, it doesn’t really matter how much you worked if you get a 3. You got a 3 for a reason, sort of. So, I don’t think we should think about our own assessment in terms of ‘I have worked hard on this’, it should… How good you have been when it counted. Yeah.
(B7)
In this case, the process is not seen as that useful if the effort is not reflected in the final product. This is noteworthy, as students do not necessarily see how the working process is part of the overall learning outcome.
To summarise, the survey findings showed that students are involved to a low degree in decisions concerning assessment, in self-assessment, and in developing learning goals and assessment criteria. The interviews showed that experiences with involvement vary among the students. Overall, the students wish to be more involved, while there is also some uncertainty about how they are involved at present. Self-assessment is seen as useful, but also difficult to do alone. Some students expressed uncertainty concerning the assessment criteria and what they are learning in English. They clearly see the learning potential in being more involved in the assessment practice.

6. Discussion

The purpose of this study was to examine how students in upper secondary school perceive their involvement in the assessment practice of oral competence in English. First, we discuss how student involvement is a key component in developing students’ understanding of learning goals, assessment criteria, and feedback practices. Second, we discuss the implications for student involvement in assessment.

6.1. Student Involvement as a Key to Understanding

6.1.1. Understanding the Intended Learning Goals and Assessment Criteria

Understanding the learning goals and assessment criteria is central for students to know where they are going in the learning process, and students themselves are key agents in this (Wiliam and Thompson 2008). The survey findings showed that, in general, the students reported that they have a good understanding of the learning goals of oral English classes and that these are communicated well by the teacher. This was confirmed in the interview findings. At the same time, the findings also showed that students were involved in developing and discussing these goals and criteria to a small degree. This may, in turn, affect their understanding of the feedback, as the explicit articulation of assessment criteria is not enough on its own to develop a shared understanding between the teacher and the students (Rust et al. 2003).
Although the students reported their understanding of learning goals and assessment criteria as good, the lack of involvement concerning this was reflected in the interview findings. Students stated that the assessment criteria are made available on the learning platform, but that they do not actively use them. Similarly, previous research has found that these goals and criteria are highlighted to varying degrees in Norwegian English classrooms. These classroom dialogues have implications for creating a shared understanding between the students and the teacher. Moreover, the present study showed that some students were under the impression that they were not really judged by the assessment criteria, while others felt that they would benefit from knowing more about what level they were at and requested more specific feedback.
In order for students to develop their understanding of learning goals and assessment criteria, they need to be active in this part of the process (Wiliam and Thompson 2008; Sadler 1989). The fact that the students reported their understanding of goals and criteria as good, while also expressing uncertainty about them, suggests that there is a discrepancy between what the students believe to be the goal and what the teacher actually assesses them by. The discrepancy found in the present study does not mean that students do not know what criteria they are assessed by, but it might signify that they are, to some degree, unaware of how the criteria are assessed and which criteria are emphasised. Furthermore, it was evident from the findings of the present study that the students experienced uncertainty regarding what they were actually assessed by when they received their feedback from the teacher, and this affected its perceived usefulness. This is in line with previous research in the field, which has shown varying experiences with the perceived usefulness of feedback (Burner 2016; Havnes et al. 2012; Vattøy and Smith 2019). Based on the findings of the present study, it is not possible to state whether the students’ uncertainty concerning the assessment criteria and the feedback is due to the students’ understanding of these or to a lack of clarity in the teachers’ communication. However, as has been shown, developing a shared understanding between students and teachers about the goals and criteria is central.
Furthermore, the students found it difficult to set personal learning goals for developing their oral competence without the assistance of the teacher. Again, this underscores the necessity of socialisation processes in assessment. Hopfenbeck (2014) highlighted that the teacher can ease students’ understanding by explaining the expectations and modelling the finished product. Wiliam and Thompson (2008) also supported this perspective, identifying the clarification of learning goals and criteria for success as one of five key strategies conceptualising formative assessment. At the same time, the students need to be involved in understanding and sharing these goals and criteria (Wiliam and Thompson 2008). As the students in the present study scored relatively low on involvement in general, and on involvement in developing learning goals and assessment criteria specifically, it is possible that their understanding would be higher if they were more involved in constructing these goals and criteria. This argument becomes more apparent when reviewing the feedback practices and the students’ perceptions of involvement in this aspect of assessment.

6.1.2. The Need for Dialogue in the Assessment Practice

Effective feedback needs to reduce the discrepancy between students’ current understanding and the desired goal (Hattie and Timperley 2007). The survey findings showed that the students believed they benefitted the most from their teacher’s feedback. At the same time, the interviews revealed varying experiences with the perceived usefulness of the feedback and an uncertainty among the students concerning what they were learning in their English classes. Similar differences were found by Havnes et al. (2012), who found significant variations in how teachers and students perceived feedback: while the teachers perceived their own feedback as useful, the students complained about its lack of usefulness. This is interesting, as the students in the present study also expressed this view to some extent. While several students pointed to the usefulness of the feedback, it was also apparent that others considered it irrelevant.
The perceived usefulness of the feedback also needs to be seen in relation to the students’ wish for more personal communication with the teacher about the focus of the feedback. As has been made clear, the students did not always understand the purpose of the feedback given by their teachers, and this has been linked to the students’ involvement in developing goals and criteria. In particular, the students highlighted that the feedback they received regarding oral competence focused more on content than on oral language features, which made it hard to understand how to improve their language skills. Burner (2016) also found that students complained about features of the feedback, such as the correction of local text errors. Burner (2016) further suggested that this gap in the perceived usefulness of feedback can be traced back to conservative assessment practices and a lack of time to practice AfL.
Similarly, the need for more dialogue with the teacher was expressed in relation to grades. The findings showed that students perceived grades as useful for their learning, but they underscored the importance of accompanying feedback and dialogue with the teacher. As the interview findings showed, the students highlighted how personal communication with the teacher could help them understand both the grades and the feedback better, as well as give them the opportunity to defend their choices. Thus, grades were not seen as sufficient on their own, but as part of the overall feedback. Similarly, Havnes et al. (2012) found that students appreciated personal communication with the teacher about their own learning. Importantly, the authors pointed out that written feedback is not enough on its own, as it presupposes that the students understand the feedback and are able to use it in their own learning. This point of view is interesting in light of the present study, as the students stated that they did not always understand the purpose of the feedback and requested more communication to enhance their understanding.
The wish for more personal communication with the teacher is closely related to self-assessment. Wiliam and Thompson (2008) held that self-assessment is an important part of formative assessment practice, as learners should be in charge of their own learning. Although self-assessment was rarely used, the students felt positive about its potential learning outcomes, but highlighted the need to be in dialogue with the teacher. They found self-assessment challenging to do on their own, as it was difficult to know what they needed to work more on. Therefore, they saw it as beneficial to discuss with their teacher after an oral assessment what they needed to improve further. Black and Jones (2006) found that self-assessment requires an understanding of the learning goals and of quality, and that learners need to see where they are in relation to these. As discussed earlier, the students in this study did not always see the connection between the focus of the feedback and the goals and criteria, which offers a possible explanation for why they found it hard to assess themselves.

6.1.3. Strengthening Student Involvement

Student involvement is a significant factor for successfully implementing language learning strategies (Haukås 2012). Strategic competence is seen as a key component of the language learner’s overall communicative competence, as it mediates between language knowledge and topical knowledge (Bachman and Palmer 1996). The findings of the present study showed an overall high level of reflection among the students, but also revealed variations within this. The majority of the students reported that they were aware of their own language development and knew how to approach a given task in English. This was also reflected in the interview findings, where the students reported employing different cognitive and metacognitive language learning strategies in various oral assessment situations. According to Anderson (2002), students need to be taught learning strategies explicitly in order for them to be effective. The students in the present study reported making conscious choices in the learning process of oral competence, such as writing scripts, practising, and finding alternative ways of expressing themselves, which signifies that they are aware of how they employ these strategies. Moreover, the ability to orchestrate various strategies is a distinctive factor between strong and weak second language learners (Anderson 2002). The present research did not find any such distinction between the students, but it nevertheless underlines the importance of developing this ability.
Previous research has found that learning processes in English are still controlled by the teacher and that student autonomy and self-organised learning remain a challenge (Sandvik and Buland 2013). The findings of the present study show that students appreciate being involved and that they wish to be more involved in the assessment practice in English than they currently are. The present study did not investigate what the teacher actually does in the classroom to involve students, but the findings nevertheless indicated that there is potential for including the students more in various aspects of the assessment practice.
Although the students reported their self-reflection as good, the findings also showed that many students did not recognise the central position that higher-order thinking skills have in English. In contrast, previous research has shown that teachers are more concerned with skills and processes than with subject-specific matters when assessing, and that higher-order thinking skills have a central position here (Bøhn 2018). In addition, research has shown that teachers’ knowledge about language learning strategies affects how these are taught in the classroom (Haukås 2012). This raises the question of how students understand strategic competence as such, and how it forms part of their learning processes in English.
Furthermore, metacognition is closely related to self-assessment (Wiliam 2011). Students who use self-assessment develop their metacognitive strategies, which, in turn, is central to becoming self-regulated. This is also recognised by Wiliam and Thompson (2008), who underscore activating students as owners of their own learning as the fifth strategy conceptualising formative assessment. The survey showed that approximately half of the students used self-assessment sometimes or more often, while the interview findings indicated that some of the students misunderstood what self-assessment actually entailed and how they were involved in it. Being able to evaluate strategy use and learning is a key component of metacognition and underscores the importance of reflection throughout the learning process (Anderson 2002). According to Haukås (2012), the lack of student involvement is the greatest obstacle to using these strategies successfully. The students need to be involved in trying out different strategies and in evaluating their own language learning, as this is beneficial for their learning outcomes (Haukås 2012).
This discrepancy between the students and teachers also needs to be seen in relation to the students’ perceived understanding of oral competence, which revealed that students tend to focus on the linguistic features of language. This focus can be seen in connection with Vattøy and Smith (2019), who showed that students need to know the learning goals and be able to self-regulate in order to perceive the teachers’ feedback as useful. As several of the students did not fully recognise the significance of the content construct in their work, this also affected how useful they perceived the feedback to be. Moreover, Van Der Kleij and Adie (2020) suggested that talking about the purpose of feedback could help students recognise and engage with it. This was also confirmed by the students in the present study, who stated a wish to be more involved in the assessment practice, especially regarding feedback and self-assessment. The usefulness of the feedback can, therefore, be traced back to the students’ understanding of the subject, in addition to their understanding of the learning goals and their ability to self-regulate, as stated by Vattøy and Smith (2019). This clearly shows how the different components of oral competence are connected and depend on one another (Bachman and Palmer 1996).
Moreover, the findings suggested that some students had a conflicting view of their own role in the assessment practice. While the students spoke of the need to be more involved in making decisions concerning assessment and to communicate more with the teacher, they also saw it as the teacher’s responsibility to initiate and use these assessment methods. Similarly, Burner (2016) revealed that lower secondary students had a rather simplistic understanding of what student involvement meant and argued that this highlights the importance of talking about the whys and hows of assessment with students. It is noteworthy that the same tendency was present among the Vg1 students in the present study, which could suggest that students need to be made more explicitly aware of their own role and develop a more comprehensive understanding of what self-assessment and involvement entail.
In this section, we have discussed how students perceive their involvement in the assessment practice in relation to goals and assessment criteria and how they are involved in the feedback practice. These aspects of involvement have potential consequences for how students perceive their understanding of oral competence and their perceived learning outcome. The findings underscore the importance of developing a shared understanding of what is assessed, and why, in oral English classes, which can be enhanced by increased student involvement.

7. Limitations and Suggestions for Further Research

It was our intention to let the student voice be heard in this study. This choice can be seen as both a strength and a weakness of the study. The research methods employed complement one another by shedding light on both the overall tendencies and the personal experiences of the phenomenon. However, solely assessing the students’ perspective comes with limitations, as it means that the teachers’ side of the story is not heard, nor is there a neutral account of what actually happens in the classroom. It did, however, let us explore the student perspective in detail, which has provided valuable insight into how students themselves perceive their involvement. It is our belief that, regardless of what happens in the classroom and the good intentions of the teacher, how students experience the assessment practice is essential. Another limitation of this study is the limited generalisability of the findings, as this is a case study of one particular school that was recruited for the research project.
Our study suggests that student involvement in teaching and assessment practices in English as a second language classes should be given more attention. In line with the call for a responsive pedagogy in schools (Smith et al. 2016), the teacher’s explicit intention should be to make learners believe in their own competence and ability to successfully complete assignments and meet challenges, to strengthen students’ self-efficacy, and to increase their overall self-concept.
Further research could, therefore, explore the learning aspects detailed in this paper from different points of view, for instance through classroom observation, as this would provide a fuller understanding of how students are involved in upper secondary school. Moreover, more quantitative research in this field could help to better establish the causal links between the different elements of student involvement.

Author Contributions

Conceptualization, L.V.S. and O.A.S.; Investigation, O.A.S.; Methodology, L.V.S.; Writing—original draft, O.A.S.; Writing—review & editing, L.V.S. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

The data presented in this study are available on request from the corresponding author.

Conflicts of Interest

The authors declare no conflict of interest.

Notes

1. Learning platform.
2. The half-year grade given at the end of the autumn term.
3. Since the findings indicate that peer assessment had not been used among the students, this will not be discussed further. It is, however, interesting to note this finding.
4. The Norwegian grading system goes from 1 to 6, where 1 is a fail and 6 is the highest achievable grade.

References

1. Anderson, Neil J. 2002. The Role of Metacognition in Second Language Teaching and Learning. Washington, DC: Center for Applied Linguistics, ERIC Clearinghouse on Language and Linguistics.
2. Andrade, Heidi L. 2010. Students as the Definitive Source of Formative Assessment: Academic Self-Assessment and the Self-Regulation of Learning. In Handbook of Formative Assessment. Edited by Heidi L. Andrade and Gregory J. Cizek. New York: Routledge, pp. 90–105.
3. Andrade, Heidi L., and Susan M. Brookhart. 2019. Classroom Assessment as the Co-Regulation of Learning. Assessment in Education: Principles, Policy & Practice 27: 1–23.
4. Bachman, Lyle F., and Adrian S. Palmer. 1996. Language Testing in Practice: Designing and Developing Useful Language Tests. Oxford: Oxford University Press.
5. Black, Paul, and Dylan Wiliam. 1998. Inside the Black Box: Raising Standards through Classroom Assessment. The Phi Delta Kappan 80: 139–48.
6. Black, Paul, and Dylan Wiliam. 2018. Classroom Assessment and Pedagogy. Assessment in Education: Principles, Policy & Practice 25: 551–75.
7. Black, Paul, and Jane Jones. 2006. Formative Assessment and the Learning and Teaching of MFL: Sharing the Language Learning Road Map with the Learners. Language Learning Journal 34: 4–9.
8. Bøhn, Henrik. 2015. Assessing Spoken EFL Without a Common Rating Scale: Norwegian EFL Teachers’ Conceptions of Construct. SAGE Open 5: 1–12.
9. Bøhn, Henrik. 2016. What Is to Be Assessed? Teachers’ Understanding of Constructs in an Oral English Examination in Norway. Ph.D. dissertation, University of Oslo, Oslo, Norway.
10. Bøhn, Henrik. 2018. Assessing Content in a Curriculum-Based EFL Oral Exam: The Importance of Higher-Order Thinking Skills. Journal of Language Teaching and Research 9: 16–26.
11. Bøhn, Henrik, and Thomas Hansen. 2017. Assessing Pronunciation in an EFL Context: Teachers’ Orientations towards Nativeness and Intelligibility. Language Assessment Quarterly 14: 54–68.
12. Burner, Tony. 2016. Formative Assessment of Writing in English as a Foreign Language. Scandinavian Journal of Educational Research 60: 626–48.
13. Chvala, Lynell, and Anne-Line Graedler. 2010. Assessment in English. In Vurdering for Læring i Fag. Edited by Stephen Dobson and Roar Engh. Kristiansand: Høyskoleforlaget, pp. 75–89.
14. Council of Europe. 2001. Common European Framework of Reference for Languages: Learning, Teaching, Assessment. Cambridge: Cambridge University Press.
15. Creswell, John W. 2013. Qualitative Inquiry and Research Design: Choosing Among Five Approaches, 3rd ed. Los Angeles: SAGE Publications.
16. Creswell, John W. 2014. Research Design: Qualitative, Quantitative, and Mixed Method Approaches, 4th ed. Los Angeles: SAGE Publications.
17. DeLuca, Christopher, Allison E. A. Chapman-Chin, Danielle LaPointe-McEwan, and Don A. Klinger. 2018. Student Perspectives on Assessment for Learning. The Curriculum Journal 29: 77–94.
18. Dobson, Stephen. 2009. Muntlig Vurdering - Utfordringer Og Muligheter. In Vurdering, Prinsipper Og Praksis. Nye Perspektiver På Elev- Og Læringsvurdering. Edited by Stephen Dobson and Kari Smith. Oslo: Gyldendal Akademisk, pp. 190–99.
19. Gamlem, Siv M., and Kari Smith. 2013. Student Perceptions of Classroom Feedback. Assessment in Education: Principles, Policy & Practice 20: 150–69.
20. Hattie, John. 2009. Visible Learning: A Synthesis of over 800 Meta-Analyses Relating to Achievement. New York: Routledge.
21. Hattie, John, and Helen Timperley. 2007. The Power of Feedback. Review of Educational Research 77: 81–112.
22. Haukås, Åsta. 2012. Lærarholdningar Til Språklæringsstrategiar. Norsk Pedagogisk Tidsskrift 96: 114–28.
23. Havnes, Anton, Kari Smith, Olga Dysthe, and Kristine Ludvigsen. 2012. Formative Assessment and Feedback: Making Learning Visible. Studies in Educational Evaluation 38: 21–27.
24. Hodgson, Janet, Wenche Rønning, Anne Sofie Skogvold, and Peter Tomlinson. 2010. Vurdering under Kunnskapsløftet. Læreres Begrepsforståelse Og Deres Rapporterte Og Faktiske Vurderingspraksis. NF-Rapport 17/2010. Available online: https://www.udir.no/globalassets/filer/tall-og-forskning/rapporter/2011/5/smul_tredje.pdf (accessed on 21 September 2021).
25. Hopfenbeck, Therese Nerheim. 2014. Strategier for Læring. Om Selvregulering, Vurdering Og Undervisning. Oslo: Universitetsforlaget.
26. Hopfenbeck, Therese Nerheim, María Teresa Flórez Petour, and Astrid Tolo. 2015. Balancing Tensions in Educational Policy Reforms: Large-Scale Implementation of Assessment for Learning in Norway. Assessment in Education: Principles, Policy & Practice 22: 44–60.
27. Leitch, Ruth, Oscar Odena, John Gardner, Laura Lundy, Stephanie Mitchell, Despina Galanouli, and Peter Clough. 2007. Consulting Secondary School Students on Increasing Participation in Their Own Assessment in Northern Ireland. Education-Line. Available online: https://pdfs.semanticscholar.org/da1d/0a6ee3f26eef8852fa23830c7f06f18013e0.pdf (accessed on 21 September 2021).
28. Lysne, Anders. 2006. Assessment Theory and Practice of Students’ Outcomes in the Nordic Countries. Scandinavian Journal of Educational Research 50: 327–59.
29. Norwegian Directorate for Education and Training. 2018. Observations on the National Assessment for Learning Programme (2010–2018). Skills Development in Networks. Final Report 2018. Available online: https://www.udir.no/contentassets/977da52955c447bca5fc419d5be5e4bf/the-norwegian-assessment-for-learning-programme_final-report-2018.pdf (accessed on 21 September 2021).
30. Norwegian Directorate for Education and Training. 2019. Erfaringer Fra Nasjonal Satsing På Vurdering for Læring (2010–2018). Available online: https://www.udir.no/tall-og-forskning/finn-forskning/rapporter/erfaringer-fra-nasjonal-satsing-pa-vurdering-for-laring-2010-2018/ (accessed on 21 September 2021).
31. Norwegian Directorate for Education and Training. 2020. Læreplan i Engelsk. ENG01-04. Available online: https://www.udir.no/lk20/eng01-04 (accessed on 21 September 2021).
32. Papanthymou, Anastasia, and Maria Darra. 2019. Defining Learner Self-Assessment. Journal of Education and Human Development 8: 213–22.
33. Pekrun, Reinhard, Thomas Goetz, Anne C. Frenzel, Petra Barchfeld, and Raymond P. Perry. 2011. Measuring Emotions in Students’ Learning and Performance: The Achievement Emotions Questionnaire (AEQ). Contemporary Educational Psychology 36: 36–48.
34. Postholm, May Britt, and Dag Ingvar Jacobsen. 2018. Forskningsmetode for Masterstudenter i Lærerutdanning. Oslo: Cappelen Damm Akademisk.
35. Prøitz, Tine Sophie. 2013. Variation in Grading Practice: Subjects Matter. Education Inquiry 4: 555–72.
36. Prøitz, Tine Sophie, and Jorunn Spord Borgen. 2010. Rettferdig Standpunktvurdering—The (u)muliges Kunst. Report 16/2010. Oslo: NIFU.
37. Ringdal, Kristen. 2013. Enhet Og Mangfold. Samfunnsvitenskapelig Forskning Og Kvantitativ Metode, 3rd ed. Bergen: Fagbokforlaget.
38. Rust, Chris, Margaret Price, and Berry O’Donovan. 2003. Improving Students’ Learning by Developing Their Understanding of Assessment Criteria and Processes. Assessment & Evaluation in Higher Education 28: 147–64.
39. Sadler, D. Royce. 1989. Formative Assessment and the Design of Instructional Systems. Instructional Science 18: 119–44.
40. Sandvik, Lise Vikan. 2019. Mapping Assessment for Learning (AfL) Communities in Schools. Assessment Matters 13: 44–70.
41. Sandvik, Lise Vikan, and Trond Buland, eds. 2013. Vurdering i Skolen. Operasjonaliseringer Og Praksiser. Delrapport 2 Fra Prosjektet Forskning På Individuell Vurdering i Skolen (FIVIS). Trondheim: NTNU, Program for lærerutdanning & SINTEF.
42. Sandvik, Lise Vikan, Kari Smith, John Alexander Strømme, Bodil Svendsen, Oda A. Sommervold, and Stine Aa. Angvik. 2021. Students’ Perceptions of Assessment Practices in Upper Secondary School during COVID-19. Teachers and Teaching: Theory and Practice, 1–14.
43. Schildkamp, Kim, Fabienne M. van der Kleij, Maaike C. Heitink, Wilma B. Kippers, and Bernard P. Veldkamp. 2020. Formative Assessment: A Systematic Review of Critical Teacher Prerequisites for Classroom Practice. International Journal of Educational Research 103: 101602.
44. Simensen, Aud Marit. 2011. Europeiske Institusjoners Rolle i Utviklingen Av Engelskfaget i Norsk Skole. Didaktisk Tidskrift 20: 157–81.
45. Smith, Kari, Siv Måseidvåg Gamlem, Ann Karin Sandal, and Knut Steinar Engelsen. 2016. Educating for the Future: A Conceptual Framework of Responsive Pedagogy. Cogent Education 3: 1227021.
46. Stiggins, Rick, and Jan Chappuis. 2005. Using Student-Involved Classroom Assessment to Close Achievement Gaps. Theory Into Practice 44: 11–18.
47. Svenkerud, Sigrun, Kirsti Klette, and Frøydis Hertzberg. 2012. Opplæring i Muntlige Ferdigheter. Nordic Studies in Education 32: 35–49.
48. The Education Act. 2009. Kapittel 3. Individuell Vurdering i Grunnskolen Og i Vidaregåande Opplæring. FOR-2006-06-23-724. Available online: https://lovdata.no/forskrift/2006-06-23-724 (accessed on 21 September 2021).
49. Van Der Kleij, Fabienne, and Lenore Adie. 2020. Towards Effective Feedback: An Investigation of Teachers’ and Students’ Perceptions of Oral Feedback in Classroom Practice. Assessment in Education: Principles, Policy & Practice 27: 252–70.
50. van Manen, Max. 1990. Researching Lived Experience: Human Science for an Action Sensitive Pedagogy, 2nd ed. Albany: State University of New York Press.
51. Vattøy, Kim-Daniel, and Kari Smith. 2019. Students’ Perceptions of Teachers’ Feedback Practice in Teaching English as a Foreign Language. Teaching and Teacher Education 85: 260–68.
52. Wendelborg, Christian, Melina Røe, Trond Buland, and Beate Hygen. 2019. Elevundersøkelsen 2018. Analyse Av Elevundersøkelsen Og Foreldreundersøkelsen. Rapport 2019. Available online: https://samforsk.no/Sider/Publikasjoner/Elevundersøkelsen-2018--Analyse-av-Elevundersøkelsen-og-Foreldreundersøkelsen.aspx (accessed on 21 September 2021).
53. Wiliam, Dylan. 2011. What Is Assessment for Learning? Studies in Educational Evaluation 37: 3–14.
54. Wiliam, Dylan, and Marnie Thompson. 2008. Integrating Assessment with Instruction: What Will It Take to Make It Work? In The Future of Assessment: Shaping Teaching and Learning. Edited by Carol Anne Dwyer. New York: Routledge, pp. 1–41.
55. Zimmerman, Barry J. 1990. Self-Regulated Learning and Academic Achievement: An Overview. Educational Psychologist 25: 3–17.
Table 1. Overview of the methods and participants.

| Phase | Method | Sample | Focus |
|---|---|---|---|
| 1 | Survey | n = 116 | Overview and tendencies |
| 2 | Focus group interview | n = 8 (divided between two interviews) | In-depth, personal experiences |
Table 2. Main findings from the surveys and interviews.

| Research Question | Survey Findings | Interview Findings |
|---|---|---|
| Participation and attitudes | Low participation in decision making and discussions; self- and peer assessment rarely used; reported awareness of own development is high | Varying experiences among the students; self-assessment seen as beneficial but also of limited understanding; wish for more involvement |
| Understanding of learning goals and assessment criteria | Own understanding of goals and assessment criteria perceived as high; low participation in developing goals and criteria | Differences between what the students and the teachers regard as important; active use of learning strategies; criteria seen as logical to understand |
| View of being involved | Assessment by the teacher is seen as the most important; peer assessment has the least value in terms of learning outcomes; uncertainty concerning feedback; wish for more specific feedback and setting goals together with the teacher | Varying experiences with feedback; self-assessment seen as both difficult and useful; wish for more involvement in the feedback practice |
Table 3. Students’ views on participation and dialogue in class.

| Item | Totally Disagree | Partly Disagree | Both | Partly Agree | Totally Agree | Mean | SD | Skew. | Kurt. |
|---|---|---|---|---|---|---|---|---|---|
| Q14: I am actively involved in deciding on the tasks which help me develop my oral English skills. | 22.4% | 25.9% | 30.2% | 19.0% | 2.6% | 2.53 | 1.115 | 0.123 | −0.939 |
| Q15: I am actively involved in deciding on the assessment methods which help me develop my oral English skills. | 19.0% | 26.7% | 37.9% | 13.8% | 2.6% | 2.54 | 1.033 | 0.099 | −0.583 |
| Q16: We discuss oral assessment methods (e.g., presentations, group discussions, etc.) together in class. | 13.8% | 18.1% | 35.3% | 24.1% | 8.6% | 2.96 | 1.153 | −0.123 | −0.694 |
| Q17: We often have conversations in class about good ways to develop oral skills in English. | 17.2% | 31.9% | 28.4% | 17.2% | 5.2% | 2.61 | 1.117 | 0.284 | −0.667 |
Table 4. Students’ participation in self- and peer assessment.

| Item | Almost Never | Rarely | Sometimes | Often | Very Often | Mean | SD | Skew. | Kurt. |
|---|---|---|---|---|---|---|---|---|---|
| Q22: How often do you assess your own oral English skills? | 24.1% | 25.9% | 27.6% | 17.2% | 5.2% | 2.53 | 1.183 | 0.269 | −0.864 |
| Q23: How often do you assess a peer’s oral English skills? | 42.2% | 32.8% | 18.1% | 6.0% | 0.9% | 1.91 | 0.960 | 0.853 | 0.046 |
Table 5. Students’ reported language awareness.

| Item | Totally Disagree | Partly Disagree | Both | Partly Agree | Totally Agree | Mean | SD | Skew. | Kurt. |
|---|---|---|---|---|---|---|---|---|---|
| Q7: I know how to approach a task in order to succeed. | 1.7% | 11.2% | 25.0% | 44.0% | 18.1% | 3.66 | 0.961 | −0.512 | −0.161 |
| Q8: I am aware of my own development of oral skills in English. | 2.6% | 9.5% | 30.2% | 37.9% | 19.8% | 3.63 | 0.992 | −0.448 | −0.157 |
Table 6. Students’ understanding and involvement in developing learning goals and assessment criteria.

| Item | Totally Disagree | Partly Disagree | Both | Partly Agree | Totally Agree | Mean | SD | Skew. | Kurt. |
|---|---|---|---|---|---|---|---|---|---|
| Q18: I have a clear understanding of the aim of the assessment and what I am meant to learn from it. | 6.0% | 15.5% | 34.5% | 31.9% | 12.1% | 3.28 | 1.062 | −0.283 | −0.405 |
| Q19: Learning aims and criteria are clearly communicated by the teacher. | 1.7% | 16.4% | 31.0% | 34.5% | 16.4% | 3.47 | 1.008 | −0.187 | −0.664 |
| Q20: I participate in developing learning aims and assessment criteria. | 15.5% | 27.6% | 35.3% | 16.4% | 5.2% | 2.68 | 1.084 | 0.168 | −0.547 |
| Q21: We discuss assessment criteria in class. | 11.2% | 25.0% | 29.3% | 26.7% | 7.8% | 2.95 | 1.133 | −0.043 | −0.813 |
Table 7. Students’ perceived learning outcomes from assessment by the teacher.

| Item | Totally Disagree | Partly Disagree | Both | Partly Agree | Totally Agree | Mean | SD | Skew. | Kurt. |
|---|---|---|---|---|---|---|---|---|---|
| Q9: I develop my oral English skills well when the teacher assesses me. | 6.0% | 14.7% | 34.5% | 33.6% | 11.2% | 3.29 | 1.047 | −0.337 | −0.323 |
| Q12: Working with feedback from the teacher helps me improve my oral English skills. | 3.4% | 6.0% | 34.5% | 37.9% | 18.1% | 3.61 | 0.967 | −0.500 | 0.252 |
| Q13: Grades from the teacher help me improve my oral English skills. | 6.9% | 17.2% | 33.6% | 26.7% | 15.5% | 3.27 | 1.130 | −0.177 | −0.635 |
Table 8. Students’ perceived learning outcomes from self- and peer assessment.

| Item | Totally Disagree | Partly Disagree | Both | Partly Agree | Totally Agree | Mean | SD | Skew. | Kurt. |
|---|---|---|---|---|---|---|---|---|---|
| Q10: I develop my oral skills well when I assess myself. | 3.4% | 19.0% | 40.5% | 31.0% | 6.0% | 3.17 | 0.926 | −0.151 | −0.265 |
| Q11: I develop my oral skills in English well when peers assess me. | 12.1% | 29.3% | 35.3% | 18.1% | 5.2% | 2.75 | 1.054 | 0.156 | −0.502 |