Article

Exploring the Development of Student Teachers’ Knowledge Construction in Peer Assessment: A Quantitative Ethnography

College of Education, Zhejiang University of Technology, Hangzhou 310023, China
* Author to whom correspondence should be addressed.
Sustainability 2022, 14(23), 15787; https://doi.org/10.3390/su142315787
Submission received: 19 October 2022 / Revised: 23 November 2022 / Accepted: 24 November 2022 / Published: 27 November 2022
(This article belongs to the Section Sustainable Education and Approaches)

Abstract:

Peer assessment (PA) is a formative assessment tool that can effectively monitor the development process of knowledge construction. In comment-based PA, comments contain evidence of how assessors construct knowledge to conduct professional assessments, which opens a research perspective for exploring the assessors’ dynamic knowledge construction. Quantitative ethnography is both a method for the quantitative analysis of qualitative data and a technique for the network modelling of professional competencies, providing a new way of thinking about the analysis and evaluation of knowledge construction processes. In this paper, quantitative ethnography was used to mine the comments generated in comment-based PA activities to reveal the characteristics of student teachers’ knowledge construction and the developmental trajectories of their knowledge structures at different learning stages. The experimental results show that the student teachers’ knowledge structures and knowledge levels evolved in the PA environment, and their cognitive networks gradually became more complex and balanced. The student teachers showed stage and gender differences in the level of knowledge progression during the learning process: the second PA was a turning point in knowledge progression, and the knowledge structures of the male and female groups were biased towards different kinds of knowledge elements.

1. Introduction

Global trends in education view learning as the co-construction of knowledge, a process that facilitates the social-relational aspects of knowledge generation, the use of multiple resources and cross-contextual skills [1,2]. As described in Goal 4 of the Sustainable Development Goals, sustaining knowledge for all learners is an indispensable pillar supporting sustainable development worldwide [3]. Knowledge construction research attempts to innovate from multiple theoretical, pedagogical and technological perspectives by placing learners in authentic problem situations and engaging them in active, purposeful and sustained knowledge construction activities, in which they play the role of contributors to knowledge creation [4,5,6]. For example, in task-driven situations, learners gradually gain cognitive sublimation by sharing and exchanging knowledge with each other, thus contributing to the collective cognitive level of the learning team [7,8].
With the advent of the knowledge economy, society has placed new demands on the comprehensive competencies and qualities of teachers [9]. Teachers have to continuously reshape their knowledge structures and improve their professionalism through learning in order to meet the requirements placed on teachers in the new era [10]. Countries around the world have invested considerable effort in teacher training and have adopted various methods to promote the development of teaching competencies [11,12], but the mode of knowledge construction during teacher training remains unclear, which constrains the effectiveness of such training.
As a formative assessment tool and context-based social assessment, peer assessment (PA) is considered a method to monitor the process of knowledge construction [13] and an effective way to promote learners’ knowledge construction [14,15]. In comment-based PA, assessors use professional knowledge to evaluate the performance of their peers and provide feedback in the form of comments [16,17]. The comments contain the implicit discursive content and cognitive characteristics of the assessor [14,18]. It has been shown that assessors’ learning, logical thinking [19,20] and developmental cognitive skills [21,22] are enhanced when they write professional comments. Research has also revealed useful information, such as the cognitive structure [23] and the sequential pattern of knowledge construction, from the comments generated in learning activities [24,25].
To promote the effectiveness of teacher training, in this paper we conducted several comment-based PA activities and explored the student teachers’ knowledge construction mode throughout these activities. The comments were collected and analysed with the methods and techniques of quantitative ethnography. We examined the following questions to reveal the characteristics of knowledge construction at different PA stages and in different groups of student teachers.
  • How does the student teachers’ knowledge construction change over a succession of PA activities?
  • What are the characteristics and developmental trajectories of student teachers’ knowledge construction at different PA stages?
  • What are the characteristics of the knowledge construction of different groups of student teachers?

2. Literature Review

2.1. The Process and Analysis of Knowledge Construction

Knowledge construction is based on constructivism, which emphasises the role of students as subjects: group members engage in meaningful inquiry, knowledge construction and work creation through dialogue in order to facilitate each member’s internalisation of the meaning shared within the group [26]. Scardamalia and Bereiter specifically distinguish between shallow and deep construction in their knowledge construction studies and point out that the real value of construction lies in deep construction [26]. Deep construction focuses on the application and innovation of knowledge generation, with learning tasks and learning activities that explore the connotations and important laws implicit in the learning tasks [27]. Knowledge construction in a peer-assessment model is precisely deep construction [28]. The basic idea of knowledge construction is that learners develop higher-order learning skills, higher-order thinking skills and knowledge creation skills in the process of iteratively constructing and advancing knowledge based on problems. The most direct way to achieve this is not to design learning tasks or inquiry activities that merely allow students to acquire knowledge and skills in the domain but to make them creators of knowledge [29,30].
For the analysis and evaluation of knowledge construction, the field currently draws on content analysis, behavioural analysis, social interaction analysis, cognitive network analysis and other dimensions to explore learners’ level of knowledge construction or developmental patterns in the process. Gunawardena et al. constructed an interaction analysis model for knowledge construction, using content analysis to code the behavioural interactions between learners, and categorised the level of knowledge construction in a learning team into five levels: sharing and comparing information, discovering and exploring contradictions, negotiating meaning and constructing knowledge, validating or modifying the results of the construction, and forming consensus and applying it [31]. Li et al. constructed a quality analysis model of collaborative knowledge construction using students’ knowledge construction behaviour in a wiki environment as a benchmark, in which knowledge construction was divided into three stages: knowledge sharing, knowledge linking and knowledge convergence [32]. Saqr and Alamro analysed the interaction data of different PBL groups through social network analysis to reveal the characteristics and patterns of learners’ social interactions in the process of knowledge construction, providing a reference for subsequent in-depth analysis of the development of individual learners’ knowledge construction and the prediction of group knowledge construction trends [33]. Sullivan et al. used cognitive network analysis to analyse the discourse elements of different groups in collaborative learning activities, comparing the differences in collaborative interactions between high- and low-performing groups and thus providing a reference for later interventions in the learning styles of low-performing groups [34].

2.2. Role of PA and the Current State of Research

PA is an activity in which learners evaluate the level, value or quality of the designated achievements or the performance of other equal-status learners [21]. Typically, two forms of PA exist: scoring- and comment-based PA [35]. Scoring-based PA involves providing quantitative feedback in terms of grades or scores for evaluation indicators on the basis of certain rules [36]. Comment-based PA is more flexible and has a wider scope than scoring-based PA, which is bound by evaluation indicators [37]. Comment-based PA can integrate the evaluator’s objective and comprehensive knowledge of educational objectives, evaluation rules and the evaluation object [38]. Compared with scoring, writing comments exhibits equal or higher learning potential [16,17].
PA has been applied in various education scenarios and has a positive effect on learners’ academic performance, thinking ability and autonomy [19,39,40]. Pope revealed that PA improved the academic performance and motivation of master trainees in a marketing course [41]. Wang et al. incorporated a teaching strategy based on online PA in programming training and found that, under this strategy, trainees acquired considerable programming knowledge, many programming skills, an active learning attitude and a sense of critical thinking [42]. Chien et al. implemented peer commenting activities in a spherical video–based virtual reality environment and found that these activities had a positive influence on learners’ spoken English skills, learning motivation and critical thinking [43].
PA is a complex learning task that requires a high level of cognitive processing [44]. By writing comments on other people’s reports, evaluators can reconceptualise what is right and wrong in their own reports [45]. Evaluators must actively recall and repeat material or apply learned concepts; thus, all participants become engaged in higher-level thinking and learning [46]. Many researchers have studied and applied PA to facilitate learners’ learning outcomes, but formative evidence for how this facilitation unfolds remains insufficient.

2.3. Rationale for Using Comment-Based PA: Retrieval-Based Learning Theory

Comment-based PA is a suitable formative assessment tool for learners’ learning [47]. Explaining the mechanisms underlying comment-based PA, which has been demonstrated to facilitate knowledge acquisition by learners [16,48], is crucial. Comment writing is a process of retrieving and reconstructing knowledge and is thus regarded as a form of retrieval learning [49].
In retrieval-based learning, knowledge retrieval is a key process for understanding knowledge and facilitating learning. All the knowledge representations in retrieval-based learning are retrieved using retrieval clues available in a given context [50]. Knowledge changes each time a person retrieves it because knowledge retrieval improves an individual’s ability to retrieve knowledge again in the future [51]. Even unsuccessful knowledge retrieval can facilitate learning [52]. The repeated retrieval of knowledge is key to long-term memory and, more importantly, improving later memory [53]. Chan et al. found that under conditions that simulated educational contexts, knowledge retrieval facilitated memory related not only to a test material but also (to a lesser extent) to a related non-test material [54]. Practising knowledge retrieval can produce meaningful, long-term learning [55]. Researchers have conducted experiments to verify the aforementioned statement. For example, Roediger reported that frequent testing in a classroom may increase educational achievement at all educational levels [56]. Karpicke and Grimaldi described two methods for instructing students to practice knowledge retrieval and found that incorporating knowledge retrieval into educational activities is an effective method for enhancing learning [57].
Comments are generated after the evaluator has retrieved relevant knowledge. Every retrieval is a learning experience and a process in which the evaluator reflects on their thinking and knowledge through the performance of peers to construct a dynamic cognitive structure. Therefore, comment-based PA can reveal the dynamic information such as the evaluator’s thinking, knowledge construction and cognition development in the learning process.

2.4. Quantitative Ethnography: Epistemic Network Analysis Approach Based on Epistemic Frames

In order to study the process of knowledge construction by learners in peer assessment, analysing the content of the comments written by the learners involved in the peer assessment is a useful method. Leung argues that ethnographic research can contribute to our understanding of social learning processes [58]. However, traditional ethnographic research is a time-consuming and labour-intensive qualitative analysis, which makes it difficult to conduct large-scale analytical assessments. To address the difficulty of analysing large amounts of comment data, Shaffer proposed an analytical method called quantitative ethnography [59]. A key component of quantitative ethnography is epistemic network analysis (ENA), a cognitive representation based on a cognitive framework and an evidence-based assessment method in digital learning environments [59].
Epistemic frames were introduced by Shaffer, who believes that each professional community of practice can be represented by a professional epistemic framework that encompasses its thinking methods, cognition and problem-solving abilities [60]. Epistemic frames consist of five interconnected components: skills, knowledge, identity, values and epistemology. Professionals in a particular field possess similar knowledge and skills. They have similar attitudes and approaches, thought habits and speaking styles when handling various tasks. Therefore, the discourse of professionals from a certain field can reveal their knowledge and skill structure. The core concept of epistemic frames is that the discourse of a community of practice can be modelled according to the connections between the elements of the epistemic framework to reflect the manner in which they think and behave when solving problems [61].
ENA was originally developed to model epistemic networks, which represent patterns of associations between knowledge, skills, values, thinking habits and other elements that characterise complex thinking [62]. Codes, analysis units and stanzas are three core aspects of ENA. A code represents a set of conceptual elements, and the purpose of ENA is to understand the interactions between these elements. An analysis unit represents the objects of ENA, such as gender or age groupings, or specific individuals. A stanza represents the range of code co-occurrences. ENA is conducted to model the connections between codes by quantifying the co-occurrences of codes in a conversation to generate a weighted network of co-occurrences and associated visualisations for each analysis unit in the data. The structure of the connections is determined, and the strength of the associations between elements in the weighted network is measured to quantify how the composition and strength of the connections change over time. For example, if the differences between individuals are to be compared, each individual should be the analysis unit. If the differences between groups are to be compared, each group should be the analysis unit. In ENA, elements in the same conversation are conceptually interconnected, whereas elements in different conversations are not [61].
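The core accumulation step that ENA performs, quantifying code co-occurrences within stanzas to build a weighted network, can be sketched as follows. This is a minimal illustration: the stanza data are hypothetical, and the real ENA toolchain additionally normalises the networks and projects them into a low-dimensional space.

```python
from itertools import combinations
from collections import Counter

def cooccurrence_counts(stanzas):
    """Count pairwise code co-occurrences within each stanza.

    Codes within one stanza are connected to each other; codes in
    different stanzas are never connected, mirroring the ENA rule that
    only elements in the same conversation are interconnected.
    """
    counts = Counter()
    for stanza in stanzas:
        for pair in combinations(sorted(set(stanza)), 2):
            counts[pair] += 1
    return counts

# Three hypothetical stanzas (one coded comment each).
stanzas = [{"G1", "E1", "E2"}, {"G1", "E1"}, {"H1"}]
network = cooccurrence_counts(stanzas)
```

Summing these counts over all stanzas of one analysis unit yields the weighted edge list of that unit's epistemic network; here, for example, the pair (E1, G1) co-occurs twice and so receives the heaviest edge.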
ENA can be conducted for analysing and visualising unstructured data, such as discourse and text data, to compare cognitive differences between individuals and groups. For example, Sullivan et al. conducted ENA to model the connections between team discourse elements, with the aim of comparing the differences between a low-performing group and high-performing group in order to adjust the training for the low-performing group and to predict the performance level of both groups [34]. In ENA, domain expertise is considered not as an isolated set of knowledge, skills and processes but rather as a network of connections between knowledge, skills and decision-making processes. ENA is suitable for modelling patterns of association in any domain expertise characterised by complex dynamic relationships between a set of elements [63]. Thus, ENA can be used to compare the salient features of the cognitive networks of different individuals and groups [24].
In this study, we used ENA to visualise information about the knowledge status and cognitive structure of the assessors contained in the PA comments to explain the characteristics of the student teachers’ knowledge constructs and their developmental trajectory in teacher training.

3. Methodology

3.1. Research Design

A succession of PA activities was designed for a course titled ‘Teaching Skills Training’. In this course, two trainers guided student teachers (i.e., trainees) to promote their teaching competences. This research comprised three steps. In the first step, the trainees conducted PA activities that involved commenting on their peers’ teaching competences. The PA comments were collected. In the second step, the comment data were quantified according to a coding scheme to extract key information for analysis. In the third step, data processing and analysis were conducted to discover the characteristics of trainees’ knowledge constructs during the peer assessment activities.
ENA was performed to explore the specific cognitive characteristics of the comments, and statistical analysis was conducted to explore the differences in categorical information among the coded comments. The research framework is illustrated in Figure 1.

3.2. Participants and PA Activities

The participants were 20 junior student teachers (8 females and 12 males) at a university in eastern China. Their mean age was 20.60 years (range: 20–21). They were enrolled in a two-credit-hour course entitled ‘Teaching Skills Training’ that prepares them to become teachers. The 20 participants were divided into 3 learning groups of 7, 7 and 6 members, with 3, 3 and 2 females in each group, respectively. The learning groups contained participants of roughly the same level, as determined by the scores the participants received in a prerequisite course entitled ‘Instruction System Design’, which teaches theoretical and practical knowledge about teaching. Each learning group conducted its PA activities in a separate classroom.
Four comment-based PA activities were conducted during the course, with an interval of 1–2 weeks between activities. The flow of the PA activity for each learning group is shown in Figure 2. Participants in each learning group took turns conducting trial teaching, with each participant acting as both evaluator and evaluatee. When a participant conducted a trial teaching as the evaluatee, the other members of the learning group were required to write PA comments for them as evaluators. Trial teaching videos and all of the comments were submitted to the online learning platform for the trainers to review.

3.3. Data Collection

All the trial teachings and PA activities were conducted within 8 weeks. We collected the comments as the dataset for this study. In a learning group of 7, for example, each trainee was required to write 6 comments during each PA activity, so the group produced 42 comments per activity. Across the three learning groups (7, 7 and 6 members), each PA activity therefore generated 114 peer assessment comments. The four PA activities yielded a total of 456 comments, of which 17 were invalid. After the invalid comments were removed, 439 comments remained for analysis. These comments contained a total of 3615 sentences and 35,228 words.
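The comment counts above follow from simple arithmetic: each member of an n-person group writes n - 1 comments per activity. A quick sketch of the bookkeeping:

```python
# Reproduce the comment counts reported for the study design:
# each member of an n-person group writes n - 1 comments per activity.
group_sizes = [7, 7, 6]
per_activity = sum(n * (n - 1) for n in group_sizes)  # 42 + 42 + 30
total = per_activity * 4   # four PA activities
valid = total - 17         # 17 comments were invalid
print(per_activity, total, valid)
```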

3.4. Coding Scheme and Coding

We collected 580 comments written by 5 senior teachers and 2 trainers and used the comments to construct the epistemic framework of student teachers’ teaching competence. We adopted Grounded Theory [64] to obtain the categories, main categories and core categories in the context of teaching competence from the comments. With the NVIVO 12.0 tool, we formed 16 main categories from the categories implicated in the comments and abstracted three core categories, namely, the general knowledge category, the expertise category and the higher-order knowledge category, that can summarise all the main categories. We then randomly selected 44 comments from the 439 comments collected from the PA activities and coded them according to the grounded theory coding procedure to perform a saturation test. No new conceptual categories or relationships were found, which indicates that the model is theoretically saturated.
The main categories, the core categories and the relationships between them constructed the epistemic framework of student teachers’ teaching competence. The main categories represent the knowledge elements of teaching competence and were divided into three perspectives (core categories). We then assigned a code to each main category and formed a coding scheme for comments coding, with the specific interpretations shown in Table 1. Five knowledge elements, namely, appropriate appearance (G1), verbal expression (G2), interactive questioning (G3), blackboarding (G4) and teaching tools (G5), belong to the general knowledge category. Teaching content (E1), teaching methods (E2), key teaching points (E3), the arrangement of sessions (E4), teaching evaluation (E5), environment building (E6) and teaching process (E7) are categorised under the expertise category. Student-centred teaching (H1), professional ethics (H2), quality of thinking (H3) and psychological qualities (H4) fall under the higher-order knowledge category.
We then used the coding scheme to extract the key information from the comments written by the student teachers in the PAs. We used the binary values ‘0’ and ‘1’ to quantify the comment data, indicating whether a comment contains each knowledge element of the adopted coding scheme. If a knowledge element was represented in a sentence of a comment, the comment was coded ‘1’ on the corresponding item representing this knowledge element; otherwise, it was coded ‘0’.
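Although the coding in this study was performed manually, the binary quantification rule can be sketched programmatically. The keyword lists below are purely hypothetical illustrations, not the actual indicators behind the scheme in Table 1:

```python
# Hypothetical keyword lists for three of the codes; the study's
# coding was done by trained human coders, not by keyword matching.
KEYWORDS = {
    "G2": ["voice", "speak", "pronunciation"],      # verbal expression
    "E2": ["method", "discussion", "lecture"],      # teaching methods
    "H1": ["student-centred", "learner needs"],     # student-centred teaching
}

def code_comment(comment, keywords=KEYWORDS):
    """Return a binary vector: 1 if any keyword of an element appears."""
    text = comment.lower()
    return {code: int(any(k in text for k in kws))
            for code, kws in keywords.items()}

vec = code_comment("Her voice was clear, and the discussion method engaged students.")
```

In this toy example, the comment mentions verbal expression and a teaching method but nothing student-centred, so it is coded 1 on G2 and E2 and 0 on H1.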
To ensure coding accuracy, two researchers were trained to code the collected comments manually. A test set of 100 fragments was used to test the coding consistency between the two researchers. The intercoder reliability coefficient was 0.84 (Cohen’s Kappa), which indicates satisfactory reliability. Thus, the two researchers were asked to code all the comments. When their opinions on the comments differed, discussions were held until an agreement was reached. An example coded comment is presented in Table 2.
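The intercoder reliability check described above can be computed as follows. This is a generic sketch of Cohen's Kappa with toy ratings, not the study's data:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters coding the same items.

    kappa = (observed agreement - expected agreement) / (1 - expected),
    where expected agreement assumes each rater's marginal label
    frequencies are independent.
    """
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    labels = set(rater_a) | set(rater_b)
    expected = sum(freq_a[l] * freq_b[l] for l in labels) / n ** 2
    return (observed - expected) / (1 - expected)

# Toy binary codings from two hypothetical coders over 8 fragments.
kappa = cohens_kappa([1, 1, 0, 0, 1, 0, 1, 1],
                     [1, 1, 0, 1, 1, 0, 1, 0])
```

A value of 0.84, as reported in the study, would fall in the range conventionally interpreted as near-perfect agreement.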

3.5. Data Analysis

ENA and statistical analysis were conducted to investigate the knowledge construction process of student teachers regarding teaching competence. As per the operation process of ENA, each comment for each individual was structured as a stanza. The knowledge elements within a stanza are specific to an evaluator and are therefore related to each other. The knowledge elements between stanzas are not specific to an evaluator and are therefore not correlated. The knowledge elements that appear and/or co-occur in a stanza reflect the evaluator’s cognition of the knowledge elements and the corresponding cognitive thinking during the writing of the comments.
All the comments of an evaluator were combined into an analysis unit. Each analysis unit comprised comments from a specific evaluator for different trainees. The concatenation of the set of knowledge elements of all stanzas within an analysis unit provides the overall picture of an evaluator’s level of knowledge within an epistemic framework and the cognition of that knowledge.
In order to obtain the general development information of student teachers’ cognition, we developed three measured parameters concerning the knowledge elements and the co-occurrences between the knowledge elements of the entire class during the four PA activities. Table 3 shows the definitions of the measured parameters.
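A plausible computation of the three per-unit parameters is sketched below, paraphrasing their definitions (Table 3 itself is not reproduced here); the stanza data are hypothetical:

```python
from itertools import combinations

def unit_parameters(stanzas):
    """Parameters for one analysis unit (all stanzas of one evaluator).

    Returns (element types, co-occurrence types, co-occurrences):
    the number of distinct knowledge elements used, the number of
    distinct element pairs that co-occur, and the total number of
    pairwise co-occurrences across the unit's stanzas.
    """
    elements, pair_types, pair_total = set(), set(), 0
    for stanza in stanzas:
        codes = sorted(set(stanza))
        elements.update(codes)
        pairs = list(combinations(codes, 2))
        pair_types.update(pairs)
        pair_total += len(pairs)
    return len(elements), len(pair_types), pair_total

def class_means(units):
    """Mean of each parameter over all analysis units in the class."""
    per_unit = [unit_parameters(u) for u in units]
    return tuple(sum(p[i] for p in per_unit) / len(units) for i in range(3))

# Two hypothetical evaluators with their coded comments (stanzas).
units = [
    [{"G1", "E1"}, {"G1", "E1", "E2"}],  # evaluator 1: two comments
    [{"H1"}],                            # evaluator 2: one comment
]
means = class_means(units)
```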
We conducted statistical analysis to elaborate on the numbers and percentages of knowledge elements that appeared in the four PAs to detail the development information of student teachers’ knowledge constructs. Then, we conducted ENA to visualise the development of the cognitive structures and to identify differences and trends in the cognitive structures of the entire class throughout the PAs.
To explore the differences in knowledge construction between gender groups in the same PA setting, we divided all trainees into a male group (B, 12 trainees) and a female group (G, 8 trainees) and grouped the generated comment data accordingly. The occurrences and co-occurrences of all knowledge elements for all trainees in each group were pooled to obtain common features reflecting the knowledge structure of the trainees in that group. The developmental patterns of the different groups were explored by comparing the centroid development diagrams of their epistemic networks through ENA and by calculating the mean number of element types that emerged for each group in each PA stage.

4. Results

4.1. The Overall Developmental Characteristics of Cognition throughout the Four PAs

We conducted a statistical analysis of the content of the comments collected from the first to fourth PAs and determined the following factors: the mean number of element types, the mean number of co-occurrence types and the mean number of co-occurrences in each analysis unit. Table 4 presents statistics on the corresponding parameters throughout the four PAs.
As presented in Table 4, the student teachers’ cognition of teaching competence developed throughout the comment-based PA activities. In the first stage between the first and second PAs, the trainees focused on developing a diverse understanding of knowledge elements but did not understand them in sufficient depth. In the second stage between the second and third PAs, the trainees focused on the development of the diversity of the links between the various elements and the ability to integrate various elements for commenting. In the third stage between the third and fourth PAs, the trainees were more inclined to make deeper connections between knowledge elements from a fixed number of dimensions and developed a deeper understanding of individual knowledge elements.
We conducted a further statistical analysis on the comments collected in the first to fourth PAs. Table 5 presents the numbers and percentages of knowledge elements used in the class-wide comments. Table 6 presents the average number of knowledge elements in each element category that appeared in each comment.
Table 5 and Table 6 indicate that the student teachers increased their use of general knowledge (1.5 → 1.990, 38.16% → 40.04%) and higher-order knowledge elements (0.4123 → 0.6701, 10.50% → 13.49%) and decreased their use of expertise elements (51.33% → 46.47%) in the first stage. In the second stage, the student teachers decreased their use of general knowledge elements (40.04% → 37.55%) and increased their use of expertise elements (46.47% → 48.29%). In the third stage, the numbers of general knowledge (2.211 → 2.447, 37.55% → 36.00%) and expertise elements (2.842 → 3.184, 48.29% → 46.84%) used in the fourth PA were not significantly higher than those used in the third PA; however, the number and proportion of higher-order knowledge elements (0.833 → 1.667, 14.16% → 17.16%) used were significantly higher in the fourth PA.

4.2. The Detailed Developmental Characteristics through an Epistemic Network Analysis

On the basis of the coded comments’ data for all four PA activities, we constructed ENA to visualise the overall cognitive structures of the trainees in the four PA activities. Based on the results shown in Table 4, Table 5 and Table 6, the knowledge construction evolution over the four PAs can be identified. As suggested by Huang et al., the learning process can be divided into three periods: the initial period, the collision and sublimation period and the stabilisation period [65].
As illustrated in Figure 3, the largest confidence interval occurs in the second PA, which indicates that the student teachers’ thinking was unstable and diffuse between the first and second PAs [66]. After that, the confidence interval gradually decreased, indicating that the relevant cognition of teaching competence became more stable and cohesive. This suggests that the second PA was a turning point in the cognitive network structure and marked the collision and sublimation period. This is also supported by the trend of the mean values, which first move from right to left and then from left to right after the second PA.
To further explore the developmental characteristics of knowledge construction, we separately mapped the epistemic networks corresponding to the first to fourth PAs in Figure 3. The cognitive network of the first PA has a relatively simple structure, with many isolated and indiscernible points and only a few lines. The cognitive network of the second PA has more connecting lines than that of the first PA; however, these lines are very light in colour. Most of the lines in the cognitive network of the third PA are significantly thicker than those in the networks of the first two PAs, and the distribution of the network began to equalise in the second stage. Some of the points in the cognitive network of the fourth PA (such as H1, H3 and H4) became larger than in the previous PAs. These findings confirm that the student teachers focused on developing the diversity of knowledge elements, the complexity of the connections between knowledge elements and the depth of understanding of knowledge elements in the first, second and third stages, respectively.
In addition to the overall analysis of the network structure, we analysed the elements of the network diagram specifically. In the first PA, the knowledge construction process of the student teachers was focused on establishing connections between five knowledge elements: G5, E2, E1, G3 and E4. In the second PA, the student teachers established connections between the aforementioned five elements and increased the usage frequency of E5, G1 and G4 and their links to other knowledge elements. In the third PA, greater emphasis was placed on several knowledge elements located in the lower part of the figure, including E3, E6, E7 and H3. In the fourth PA, the teachers’ knowledge construction process was oriented towards establishing connections between E7, H3, H1, H4 and E1. The findings presented in this paragraph validate the conclusions presented in the first part of this section.

4.3. The Developmental Characteristics of the Knowledge Construction of Different Groups

We first conducted ENA to generate centroid development diagrams of the male and female groups throughout the four PA activities (Figure 4) and further mapped the differences between the epistemic networks of the two groups in each PA (Figure 5), together with the corresponding statistical comparisons (Table 7).
Figure 4 presents the centroid development diagrams of the male and female groups throughout the PA activities. The figure indicates that the overall centroid movement trends of the male and female groups from the first to fourth PAs are the same along the X-axis and Y-axis. On the X-axis, the centroids of both groups first move from right to left and then, after the second PA, from left to right. On the Y-axis, these centroids exhibit bottom-up movement throughout the PAs. The two dashed boxes in Figure 4 do not overlap at all, which indicates that the distributions of the two groups are clearly separated. The figure also illustrates that the confidence interval of the female group is larger than that of the male group, which indicates that the female student teachers' thinking was more divergent than that of their male peers [66].
Figure 5 shows the differences between the structures of the epistemic networks of the two groups in each PA. In the first and second PAs, the female group occupied a larger qualitative space along the Y-axis than the male group, whereas the third and fourth PAs present the opposite situation. Statistically, there are significant differences between the two groups on the X-axis in each PA, as shown in Table 7.
We then counted the average number of knowledge elements per category in each comment from the first to the fourth PA for both the male and female groups. The relevant statistics are shown in Figure 6.
By referring to Figure 6 from the perspective of activity sequences, the following developmental characteristics of male and female groups can be identified. In the first stage, the male group focused on developing knowledge elements from the general knowledge and higher-order knowledge categories, whereas the female group maintained a steady development of the elements from each category. In the second stage, the male group focused on developing expertise elements and stopped developing general knowledge elements, whereas the female group made a strong effort to develop general knowledge and expertise elements. In the third stage, the male group maintained a steady development of elements from each category, whereas the female group made a strong effort to develop elements from the higher-order thinking category while maintaining a steady development of elements from the remaining categories. Overall, the knowledge structure of the male group tended to favour elements from the expertise and higher-order thinking categories, whereas that of the female group tended to favour elements from the general knowledge category.

5. Discussion

This study aimed to investigate the developmental characteristics of student teachers’ knowledge construction and their trajectories in PA. By conducting ENA and using mathematical statistics, we found that the development of student teachers’ knowledge construction differed in different stages and groups.
We first examined the general developmental trends of student teachers’ knowledge construction of teaching competences by analysing the coded comments for all trainees. We found that the student teachers’ knowledge construction was developed during the PA activities, and their cognition about teaching competencies was improved. This finding could be regarded as practical evidence for the research results of Farrell, who revealed that reflective practice can promote the development of teachers’ cognitive thinking [67], and as extending the findings of Cheng and Hou, who indicated that PA promotes the use of cognitive–metacognitive thinking by trainees when they provide feedback [22].
We further found that, in different stages of learning, the student teachers exhibited different development rates for distinct categories of knowledge elements. Moreover, the cognitive networks of the student teachers exhibited a general pattern of development from diffuse to tight and from simple to complex.
On the basis of the different characteristics of the student teachers’ three knowledge construction stages that were analysed in this study, instructors can adapt the curriculum process for the development of student teachers’ teaching competence. For example, in the early part of a course, instructors might place more emphasis on teaching trainees the definition of each knowledge element and other content regarding the knowledge elements. In the later part of the course, instructors might be more inclined to instruct the trainees on the connections between various knowledge elements. The most crucial aspect in this part of the course is to help advance trainees’ understanding of different knowledge elements.
However, the knowledge construction of the student teachers did not exhibit marked changes in the third stage. This result indicates that the trainees maintained a high development rate for the general knowledge and expertise elements during the first three PAs. Subsequently, their development rate for these elements decreased in the fourth PA. Moreover, their level of awareness of these knowledge elements reached a relatively constant range of values. The development of knowledge elements from the general knowledge and expertise categories tended to stagnate for trainees in the third stage. We speculate that this phenomenon might have been caused by the trainees becoming bored when writing a fourth comment on the same type of content, which might have led to a decrease in their motivation and initiative [68]. The trainees continued to deepen their understanding and thus increased their usage frequency of higher-order knowledge elements during PAs.
To address the aforementioned problem, we recommend that instructors place a limit on the number of times PA methods can be used for content based on similar types of topics within the same session. After limiting the usage frequency of these methods, instructors can intervene in the lesson through other methods. This approach prevents trainees from becoming bored and promotes the development of their knowledge structures at a deeper level.
We also examined the obtained data through statistical analysis and ENA to determine the developmental characteristics of different trainee groups. We found that, throughout the assessment activity, the knowledge structure of the male group was biased towards knowledge elements from the expertise and higher-order knowledge categories, whereas that of the female group was biased towards knowledge elements from the general knowledge category. In addition, the female group exhibited significantly less development in knowledge elements from the expertise and higher-order knowledge categories than the male group did. The results of this study are in agreement with the finding of Shen et al. that student teachers exhibit gender differences in the development of their cognitive thinking in a learning process [69].
The aforementioned conclusion is related to the fact that cultivating teaching competence is a kind of complex task, and task complexity may influence learners’ knowledge construction process and learning outcomes [70]. Accordingly, we suggest that instructors pay attention to gender differences in their daily teaching when training student teachers. Teaching interventions can be targeted according to these differences to achieve precision teaching and learning. For example, assessment scaffolds can be provided to female/male trainees before and during a teaching activity to help them break through their learning bottlenecks; a particular expertise framework can be provided, and additional discussion and reflection time can be designed during the second PA activity to give female/male trainees the opportunity to deeply understand knowledge and lead to better cognitive stability and cohesion.

6. Conclusions, Limitations and Future Work

This study used content analysis to examine the process of knowledge construction by student teachers in PAs. We applied statistical methods and ENA to conduct an in-depth exploration of the data collected from four PA activities and to investigate the development of teaching competence knowledge. First, we obtained statistics on all comments from the different learning stages, which confirmed that the participating student teachers' cognition of teaching competence knowledge developed throughout the course. Second, we classified knowledge elements into three categories at different levels: general knowledge, expertise and higher-order knowledge. We then quantified the comment data through ENA and statistical analysis to examine the developmental characteristics of the student teachers' knowledge at different levels and stages. Finally, we compared the knowledge development trends of the student teachers across different groups and stages. The results of this study provide instructors with evidence on the development of student teachers' cognition of teaching competence and help them to carry out precision teaching and training.
The textual data generated in teaching and learning contain considerable personal cognitive information. The highlight of this study is our use of content analysis to mine the cognitive information in the PA comments to identify the characteristics and developmental patterns of student teachers' knowledge construction in PAs. A larger quantity of such data can be analysed in future studies to build upon the current study and improve the accuracy and objectivity of findings on knowledge construction. This study has certain limitations. First, the research sample was insufficiently representative, and a relatively small number of comments was collected. The sample represented only junior student teachers from eastern China, so it may be difficult to generalise the findings to student teachers of all grades. Second, this study only analysed knowledge construction in a PA setting, and it is difficult to generalise the findings to all educational assessment settings. In future work, we plan to examine the deeper links between student teachers' cognition of teaching competence and their actual teaching performance at different grade levels during the process of knowledge construction. In addition, we will combine ENA and text mining to explore the cognitive differences and connections between teacher assessment, peer assessment and self-assessment.

Author Contributions

Conceptualisation, Y.L. and Z.N.; methodology, Y.L.; validation, Y.L. and Z.Z.; formal analysis, Y.L.; investigation, Z.N. and S.Z.; resources, S.Z.; data curation, Y.L.; writing—original draft preparation, Y.L. and Z.N.; writing—review and editing, Z.N.; visualisation, Z.Z.; supervision, Z.N.; project administration, Y.L.; funding acquisition, Y.L. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Social Science Fund of China, grant number 17BTQ067.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Written informed consent was obtained from all the participants prior to their enrolment in this study.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Damsa, C.I.; Jornet, A. Revisiting Learning in Higher Education—Framing Notions Redefined through an Ecological Perspective. Frontline Learn. Res. 2017, 4, 39–47. [Google Scholar] [CrossRef]
  2. Damşa, C.I.; Ludvigsen, S. Learning through Interaction and Co-Construction of Knowledge Objects in Teacher Education. Learn. Cult. Soc. Interact. 2016, 11, 1–18. [Google Scholar] [CrossRef]
  3. UNESCO. Education for Sustainable Development Goals. Available online: https://en.unesco.org/gem-report/sdg-goal-4 (accessed on 22 November 2022).
  4. Sawyer, R.K. The Cambridge Handbook of the Learning Sciences, 2nd ed.; Cambridge University Press: New York, NY, USA, 2014. [Google Scholar]
  5. Farrokhnia, M.; Pijeira-Diaz, H.J.; Noroozi, O.; Hatami, J. Computer-Supported Collaborative Concept Mapping: The Effects of Different Instructional Designs on Conceptual Understanding and Knowledge Co-Construction. Comput. Educ. 2019, 142, 103640. [Google Scholar] [CrossRef]
  6. Bouton, E.; Tal, S.B.; Asterhan, C.S.C. Students, Social Network Technology and Learning in Higher Education: Visions of Collaborative Knowledge Construction vs. the Reality of Knowledge Sharing. Internet High. Educ. 2021, 49, 100787. [Google Scholar] [CrossRef]
  7. Bereiter, C. Education and Mind in the Knowledge Age; Lawrence Erlbaum Associates: Mahwah, NJ, USA, 2002; ISBN 0805839437. [Google Scholar]
  8. Bielaczyc, K.; Collins, A. Learning Communities in Classrooms: A Reconceptualization of Educational Practice; Lawrence Erlbaum Associates: Mahwah, NJ, USA, 1999. [Google Scholar]
  9. Popenici, S.A.D.; Kerr, S. Exploring the Impact of Artificial Intelligence on Teaching and Learning in Higher Education. Res. Pr. Technol. Enhanc. Learn. 2017, 12, 22. [Google Scholar] [CrossRef]
  10. UNESCO. Learning the Treasure within: Report of the International Commission on Education for the 21st Century. Available online: https://www.journals.uchicago.edu/doi/abs/10.1086/447500 (accessed on 17 August 2022).
  11. Ariffin, T.F.T.; Bush, T.; Nordin, H. Framing the Roles and Responsibilities of Excellent Teachers: Evidence from Malaysia. Teach. Teach. Educ. 2018, 73, 14–23. [Google Scholar] [CrossRef]
  12. Kim, J.H.; Jung, S.Y.; Lee, W.G. Design of Contents for ICT Literacy In-Service Training of Teachers in Korea. Comput. Educ. 2008, 51, 1683–1706. [Google Scholar] [CrossRef]
  13. Topping, K. Peer Assessment Between Students in Colleges and Universities. Rev. Educ. Res. 1998, 68, 249–276. [Google Scholar] [CrossRef]
  14. Peng, X.; Xu, Q. Investigating Learners’ Behaviors and Discourse Content in MOOC Course Reviews. Comput. Educ. 2020, 143, 103673. [Google Scholar] [CrossRef]
  15. Serrano-Aguilera, J.J.; Tocino, A.; Fortes, S.; Martín, C.; Mercadé-Melé, P.; Moreno-Sáez, R.; Muñoz, A.; Palomo-Hierro, S.; Torres, A. Using Peer Review for Student Performance Enhancement: Experiences in a Multidisciplinary Higher Education Setting. Educ. Sci. 2021, 11, 71. [Google Scholar] [CrossRef]
  16. Li, L.; Liu, X.; Steckelberg, A.L. Assessor or Assessee: How Student Learning Improves by Giving and Receiving Peer Feedback. Br. J. Educ. Technol. 2010, 41, 525–536. [Google Scholar] [CrossRef]
  17. Lundstrom, K.; Baker, W. To Give Is Better than to Receive: The Benefits of Peer Review to the Reviewer’s Own Writing. J. Second. Lang. Writ. 2009, 18, 30–43. [Google Scholar] [CrossRef]
  18. Mo, J.H. Peer Review: Increasing Student Autonomy in writing. J. Univ. Foreign Lang. 2007, 30, 35–39. [Google Scholar]
  19. Demiraslan Çevik, Y. Assessor or Assessee? Investigating the Differential Effects of Online Peer Assessment Roles in the Development of Students’ Problem-Solving Skills. Comput. Hum. Behav. 2015, 52, 250–258. [Google Scholar] [CrossRef]
  20. Falchikov, N.; Boud, D. Student Self-Assessment in Higher Education: A Meta-Analysis. Rev. Educ. Res. 1989, 59, 395–430. [Google Scholar] [CrossRef]
  21. Topping, K.J. Peer Assessment. Theory Into Pract. 2009, 48, 20–27. [Google Scholar] [CrossRef]
  22. Cheng, K.H.; Hou, H.T. Exploring Students’ Behavioural Patterns during Online Peer Assessment from the Affective, Cognitive, and Metacognitive Perspectives: A Progressive Sequential Analysis. Technol. Pedagag. Educ. 2015, 24, 171–188. [Google Scholar] [CrossRef]
  23. Liu, Y.C.; Zhu, X.; Chen, L. Using the Comments on Peer Review to Conduct Reviewer’s Epistemic Network Analysis in Precision Education. J. Distance Educ. 2019, 37, 85–93. [Google Scholar]
  24. Ouyang, F.; Dai, X. Using a Three-Layered Social-Cognitive Network Analysis Framework for Understanding Online Collaborative Discussions. Australas. J. Educ. Technol. 2022, 38, 164–181. [Google Scholar] [CrossRef]
  25. Li, S.C.; Lai, T.K.H. Unfolding Knowledge Co-Construction Processes through Social Annotation and Online Collaborative Writing with Text Mining Techniques. Australas. J. Educ. Technol. 2022, 38, 148–163. [Google Scholar] [CrossRef]
  26. Scardamalia, M.; Bereiter, C. Knowledge Building: Theory, Pedagogy and Technology. In Cambridge Handbook of the Learning Sciences; Sawyer, K., Ed.; Cambridge University Press: New York, NY, USA, 2006; pp. 97–115. ISBN 9780521607773. [Google Scholar]
  27. He, Z.; Wu, X.; Wang, Q.; Huang, C. Developing Eighth-Grade Students’ Computational Thinking with Critical Reflection. Sustainability 2021, 13, 11192. [Google Scholar] [CrossRef]
  28. Zhang, Y.; Pi, Z.; Chen, L.; Zhang, X.; Yang, J. Online Peer Assessment Improves Learners’ Creativity: Not Only Learners’ Roles as an Assessor or Assessee, but Also Their Behavioral Sequence Matter. Think. Ski. Creat. 2021, 42, 100950. [Google Scholar] [CrossRef]
  29. Zhang, Y.B.; Chen, B.D.; Scardamalia, M.; Bereier, C. From Superficial to Deep Constructs—An Analysis of the Development of Knowledge Construction Theory and its Application in China. E-Educ. Res. 2012, 33, 5–12. [Google Scholar] [CrossRef]
  30. Wang, R.; Wu, S.; Wang, X. The Core of Smart Cities: Knowledge Representation and Descriptive Framework Construction in Knowledge-Based Visual Question Answering. Sustainability 2022, 14, 13236. [Google Scholar] [CrossRef]
  31. Gunawardena, C.N.; Lowe, C.A.; Anderson, T. Analysis of a Global Online Debate and the Development of an Interaction Analysis Model for Examining Social Construction of Knowledge in Computer Conferencing. J. Educ. Comput. Res. 1997, 17, 397–431. [Google Scholar] [CrossRef] [Green Version]
  32. Li, S.; Tang, Q.; Wang, C.X. Model Reflection and Case Study of Distance Collaborative Knowledge Building Analysis in Wiki Environment. Mod. Distance Educ. 2014, 03, 55–61. [Google Scholar] [CrossRef]
  33. Saqr, M.; Alamro, A. The Role of Social Network Analysis as a Learning Analytics Tool in Online Problem Based Learning. BMC Med. Educ. 2019, 19, 160. [Google Scholar] [CrossRef] [Green Version]
  34. Sullivan, S.; Warner-Hillard, C.; Eagan, B.; Thompson, R.J.; Ruis, A.R.; Haines, K.; Pugh, C.M.; Shaffer, D.W.; Jung, H.S. Using epistemic network analysis to identify targets for educational interventions in trauma team communication. Surgery 2018, 163, 938–943. [Google Scholar] [CrossRef] [PubMed]
  35. Li, J.; Huang, J.; Cheng, S. The Reliability, Effectiveness, and Benefits of Peer Assessment in College EFL Speaking Classrooms: Student and Teacher Perspectives. Stud. Educ. Eval. 2022, 72, 101120. [Google Scholar] [CrossRef]
  36. Jonsson, A.; Svingby, G. The use of scoring rubrics: Reliability, validity and educational consequences. Educ. Res. Rev. 2007, 2, 130–144. [Google Scholar] [CrossRef]
  37. Arnold, S.L. Replacing “The Holy Grail”: Use Peer Assessment Instead of Class Participation Grades! Int. J. Manag. Educ. 2021, 19, 100546. [Google Scholar] [CrossRef]
  38. Liu, N.F.; Carless, D. Peer Feedback: The Learning Element of Peer Assessment. Teach. High. Educ. 2006, 11, 279–290. [Google Scholar] [CrossRef]
  39. Shen, B.; Bai, B.; Xue, W. The Effects of Peer Assessment on Learner Autonomy: An Empirical Study in a Chinese College English Writing Class. Stud. Educ. Eval. 2020, 64, 100821. [Google Scholar] [CrossRef]
  40. Mei, T.; Yuan, Q. A case study of peer feedback in a Chinese EFL writing classroom. Chin. J. Appl. Linguist. 2010, 33, 87–98. [Google Scholar]
  41. Pope, N. An Examination of the Use of Peer Rating for Formative Assessment in the Context of the Theory of Consumption Values. Assess. Eval. High. Educ. 2001, 26, 235–246. [Google Scholar] [CrossRef]
  42. Wang, X.M.; Hwang, G.J.; Liang, Z.Y.; Wang, H.Y. Enhancing Students’ Computer Programming Performances, Critical Thinking Awareness and Attitudes towards Programming: An Online Peer-Assessment Attempt. Educ. Technol. Soc. 2017, 20, 58–68. [Google Scholar]
  43. Chien, S.Y.; Hwang, G.J.; Jong, M.S.Y. Effects of Peer Assessment within the Context of Spherical Video-Based Virtual Reality on EFL Students’ English-Speaking Performance and Learning Perceptions. Comput. Educ. 2020, 146, 103751. [Google Scholar] [CrossRef]
  44. Bezuidenhout, M.J.; Alt, H. ‘Assessment Drives Learning’: Do Assessments Promote High-Level Cognitive Processing? S. Afr. J. High. Educ. 2011, 25, 1062–1076. [Google Scholar] [CrossRef]
  45. Ramon Rico-Juan, J.; Gallego, A.J.; Valero-Mas, J.J.; Calvo-Zaragoza, J. Statistical Semi-Supervised System for Grading Multiple Peer-Reviewed Open-Ended Works. Comput. Educ. 2018, 126, 264–282. [Google Scholar] [CrossRef]
  46. King, A. Structuring Peer Interaction to Promote High-Level Cognitive Processing. Theory Pract. 2002, 41, 33–39. [Google Scholar] [CrossRef]
  47. Zlabkova, I.; Petr, J.; Stuchlikova, I.; Rokos, L.; Hospesova, A. Development of Teachers’ Perspective on Formative Peer Assessment. Int. J. Sci. Educ. 2021, 43, 428–448. [Google Scholar] [CrossRef]
  48. Roscoe, R.D.; Chi, M.T.H. Tutor Learning: The Role of Explaining and Responding to Questions. Instr. Sci. 2008, 36, 321–350. [Google Scholar] [CrossRef]
  49. Karpicke, J.D.; Blunt, J.R. Retrieval Practice Produces More Learning than Elaborative Studying with Concept Mapping. Science 2011, 331, 772–775. [Google Scholar] [CrossRef] [PubMed]
  50. Karpicke, J.D. Retrieval-Based Learning: Active Retrieval Promotes Meaningful Learning. Curr. Dir. Psychol. 2012, 21, 157–163. [Google Scholar] [CrossRef]
  51. Karpicke, J.D.; Zaromb, F.M. Retrieval Mode Distinguishes the Testing Effect from the Generation Effect. J. Mem. Lang. 2010, 62, 227–239. [Google Scholar] [CrossRef]
  52. Kornell, N.; Hays, M.J.; Bjork, R.A. Unsuccessful Retrieval Attempts Enhance Subsequent Learning. J. Exp. Psychol.-Learn. Mem. Cogn. 2009, 35, 989–998. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  53. Karpicke, J.D.; Roediger, H.L. Repeated Retrieval during Learning Is the Key to Long-Term Retention. J. Mem. Lang. 2007, 57, 151–162. [Google Scholar] [CrossRef]
  54. Chan, J.C.K.; McDermott, K.B.; Roediger, H.L. Retrieval-Induced Facilitation: Initially Nontested Material Can Benefit from Prior Testing of Related Material. J. Exp. Psychol. Gen. 2006, 135, 553–571. [Google Scholar] [CrossRef]
  55. Karpicke, J.D. Metacognitive Control and Strategy Selection: Deciding to Practice Retrieval during Learning. J. Exp. Psychol. Gen. 2009, 138, 469–486. [Google Scholar] [CrossRef] [Green Version]
  56. Roediger, H.L.; Karpicke, J.D. The Power of Testing Memory: Basic Research and Implications for Educational Practice. Perspect Psychol. Sci. 2006, 1, 181–210. [Google Scholar] [CrossRef]
  57. Karpicke, J.D.; Grimaldi, P.J. Retrieval-Based Learning: A Perspective for Enhancing Meaningful Learning. Educ. Psychol. Rev. 2012, 24, 401–418. [Google Scholar] [CrossRef]
  58. Leung, W.C. Why Is Evidence from Ethnographic and Discourse Research Needed in Medical Education: The Case of Problem-Based Learning. Med. Teach. 2002, 24, 169–172. [Google Scholar] [CrossRef] [PubMed]
  59. Shaffer, D.W. Quantitative Ethnography; Cathcart Press: Madison, WI, USA, 2017. [Google Scholar]
  60. Shaffer, D.W. Epistemic Frames for Epistemic Games. Comput. Educ. 2006, 46, 223–234. [Google Scholar] [CrossRef]
  61. Bagley, E.A.; Shaffer, D.W. Stop Talking and Type: Comparing Virtual and Face-to-Face Mentoring in an Epistemic Game: Virtual and Face-to-Face Mentoring. J. Comput. Assist. Learn. 2015, 31, 606–622. [Google Scholar] [CrossRef]
  62. Csanadi, A.; Eagan, B.; Kollar, I.; Shaffer, D.W.; Fischer, F. When Coding-and-Counting Is Not Enough: Using Epistemic Network Analysis (ENA) to Analyze Verbal Data in CSCL Research. Intern. J. Comput. Support. Collab. Learn. 2018, 13, 419–438. [Google Scholar] [CrossRef] [Green Version]
  63. Andrist, S.; Ruis, A.R.; Shaffer, D.W. A Network Analytic Approach to Gaze Coordination during a Collaborative Task. Comput. Hum. Behav. 2018, 89, 339–348. [Google Scholar] [CrossRef]
  64. Glaser, B.G.; Strauss, A.L. The Discovery of Grounded Theory: Strategies for Qualitative Research; Sage Publications: New York, NY, USA, 1967; pp. 1–16. ISBN 0-202-30260-1. [Google Scholar]
  65. Huang, C.Q.; Han, Z.M.; Li, M.X.; Jong, M.S.; Tsai, C.C. Investigating Students’ Interaction Patterns and Dynamic Learning Sentiments in Online Discussions. Comput. Educ. 2019, 140, 103589. [Google Scholar] [CrossRef]
  66. Shaffer, D.W.; Collier, W.; Ruis, A.R. A Tutorial on Epistemic Network Analysis: Analyzing the Structure of Connections in Cognitive, Social, and Interaction Data. Learn. Anal. 2016, 3, 9–45. [Google Scholar] [CrossRef] [Green Version]
  67. Abello-Contesse, C. Review of Reflective Language Teaching: From Research to Practice by Thomas S.C. Farrell (New York: Continuum, 2007; pp. viii + 202); Cambridge University Press: Cambridge, UK, 2009; Volume 31. [Google Scholar] [CrossRef]
  68. Erturk, S.; van Tilburg, W.A.P.; Igou, E.R. Off the Mark: Repetitive Marking Undermines Essay Evaluations Due to Boredom. Motiv. Emot. 2022, 46, 264–275. [Google Scholar] [CrossRef]
  69. Shen, W.; Liu, C.; Shi, C.; Yuan, Y. Gender Differences in Creative Thinking. Adv. Psychol. Sci. 2015, 23, 1380. [Google Scholar] [CrossRef]
  70. Robinson, P. Task Complexity, Task Difficulty, and Task Production: Exploring Interactions in a Componential Framework. Appl. Linguist. 2001, 22, 27–57. [Google Scholar] [CrossRef]
Figure 1. Research framework.
Figure 2. PA assessment activity.
Figure 3. Epistemic networks of the student teachers from the first to fourth PAs.
Figure 4. Centroid development diagrams of the two groups.
Figure 5. Differences in the structure of the epistemic networks between the two groups.
Figure 6. Average number (i.e., value) of elements of each category in every comment in the sequence of PA activities for male and female groups.
Table 1. Overall coding framework of this study.
Core Category | Main Category | Code | Description | Example
General knowledge | Appropriate appearance | G1 | Neat and tidy appearance, pleasant behaviour and appropriate body language. | Lecturing with a tablet and not very well groomed.
General knowledge | Verbal expression | G2 | Clear and appropriate language, accurate expression, clear diction, fluent speech, standard pronunciation, loud voice and appropriate speed of speech. | Not out of script, still stumbling a bit in trial teaching.
General knowledge | Interactive questioning | G3 | Focusing on the stimulation of students’ learning interest and interactions. | Highly interactive classroom with positive interaction with students.
General knowledge | Blackboarding | G4 | Appropriate representation of a suitable quantity of neat and attractive writing on a blackboard. | The blackboard could have a little more design.
General knowledge | Teaching tools | G5 | The tools, media or equipment used to deliver information and the specific environment in which teaching is conducted, including PowerPoint presentations, electronic whiteboards and multimedia classrooms. | In the Try It session, rather than using abstract descriptions in words, you can use Excel on your computer for a straightforward demonstration of the operation.
Expertise | Teaching content | E1 | Understanding the objectives and requirements of the course and obtaining an accurate picture of the teaching content. | Still a little empty in terms of teaching content.
Expertise | Teaching methods | E2 | Selecting appropriate teaching formats and methods according to student needs. | The lecture process is still based on the traditional lecture method.
Expertise | Key teaching points | E3 | Determining teaching objectives, teaching priorities and difficulties in accordance with the requirements of the teaching content and curriculum standards. | Teaching difficulties are not sufficiently prominent.
Expertise | Arrangement of sessions | E4 | Organising students’ learning activities effectively and controlling the teaching flow. | The lectures are well timed.
Expertise | Teaching evaluation | E5 | Focusing on student assessment and feedback in the delivery of teaching. | Obtain feedback from students through assignments.
Expertise | Environment building | E6 | Introducing course content and creating classroom situations that stimulate students’ learning interest and learning motivation. | Clever use of scenes from life to introduce course content.
Expertise | Teaching process | E7 | Presenting teaching content in a scientifically accurate manner and controlling the time and speed of teaching. | It would have been nice to have more examples in the teaching process.
Higher-order knowledge | Student-centred teaching | H1 | The teaching design reflects the subjectivity of the students. | I would suggest that your teaching design should be more student-centred.
Higher-order knowledge | Professional ethics | H2 | Love for education, care for students, respect for students and the fair and equal treatment of each student. | All levels of student acceptance should be taken into account.
Higher-order knowledge | Quality of thinking | H3 | Understanding and analysing problems quickly and accurately in a logical and flexible manner. | A little problem with the logic of the trial teaching.
Higher-order knowledge | Psychological qualities | H4 | Positive, cheerful and self-confident, with a determined and tenacious spirit, and not afraid of difficulties. | Acting nervous during the trial teaching.
Table 2. Coding format and an example coded comment.
Comment: Speech was fluent, and arrangement was reasonable… [The lecturer] questioned the students and had good communication… The teaching process indicated clear thinking and always focused on the teaching goals, and [the lecturer] grasped the key points… but he was a little nervous at the beginning.
Coded comment:
Item:  G1 G2 G3 G4 G5 E1 E2 E3 E4 E5 E6 E7 H1 H2 H3 H4
Value: 0  1  1  0  0  0  0  1  1  0  0  1  0  0  1  1
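The coding format in Table 2 maps each comment to a 0/1 vector over the 16 codes of Table 1. The following is a minimal sketch of this representation; the `encode` helper is illustrative, not part of the authors' actual toolchain.

```python
# Minimal sketch of the binary coding format in Table 2: each comment becomes
# a 0/1 vector over the 16 knowledge-element codes of Table 1.
CODES = ["G1", "G2", "G3", "G4", "G5",
         "E1", "E2", "E3", "E4", "E5", "E6", "E7",
         "H1", "H2", "H3", "H4"]

def encode(present_codes):
    """Turn the set of codes a rater assigned to a comment into a binary vector."""
    present = set(present_codes)
    return [1 if c in present else 0 for c in CODES]

# The example comment in Table 2 was coded with G2, G3, E3, E4, E7, H3 and H4.
vector = encode({"G2", "G3", "E3", "E4", "E7", "H3", "H4"})
print(vector)  # matches the Value row of Table 2
```

Applying `encode` to the Table 2 example reproduces its Value row, 0110000110010011.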
Table 3. The definition of the measured parameters.
Code | Definition | Description
α | Mean number of element types in each analysis unit | Represents the abundance of element types that the trainees used in the comments.
β | Mean number of co-occurrence types in each analysis unit | Represents the abundance of co-occurrence types that the trainees used in the comments.
γ | Mean number of co-occurrences in each analysis unit | Represents the strength of the connection between the co-occurring elements that the trainees used in the comments.
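The parameters defined in Table 3 can be computed directly from binary-coded comments. The sketch below assumes each analysis unit is a list of 0/1 vectors (as in Table 2) and that two codes co-occur when they appear in the same comment; this aggregation rule and the function names are illustrative assumptions, not the authors' exact procedure.

```python
# Hedged sketch of the three parameters in Table 3, assuming each analysis
# unit is a list of binary-coded comment vectors and that two codes
# "co-occur" when they appear together in the same comment (an assumption).
from itertools import combinations

def unit_stats(unit):
    """(element types, co-occurrence types, co-occurrences) for one analysis unit."""
    element_types = set()
    pair_counts = {}
    for vec in unit:
        present = [i for i, v in enumerate(vec) if v]
        element_types.update(present)
        for pair in combinations(present, 2):
            pair_counts[pair] = pair_counts.get(pair, 0) + 1
    return len(element_types), len(pair_counts), sum(pair_counts.values())

def alpha_beta_gamma(units):
    """Mean of each statistic across all analysis units: (alpha, beta, gamma)."""
    stats = [unit_stats(u) for u in units]
    return tuple(sum(s[k] for s in stats) / len(stats) for k in range(3))

# Toy example with 4 codes and two analysis units of coded comments.
units = [
    [[1, 1, 0, 0], [1, 0, 1, 0]],  # codes {0,1} and {0,2}: pairs (0,1), (0,2)
    [[1, 1, 1, 0]],                # codes {0,1,2}: pairs (0,1), (0,2), (1,2)
]
print(alpha_beta_gamma(units))  # (3.0, 2.5, 2.5)
```

With repeated pairs inside a unit, γ grows faster than β, which is why γ exceeds β in every row of Table 4.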
Table 4. Values of the measured parameters throughout the PAs.
PA Activity | α | β | γ
First | 9.30 | 27.60 | 43.90
Second | 11.17 | 40.00 | 67.94
Third | 12.40 | 55.65 | 102.30
Fourth | 13.25 | 65.55 | 133.85
Table 5. Numbers and percentages of the knowledge elements throughout the PAs.
Table 5. Numbers and percentages of the knowledge elements throughout the PAs.
Codes | First PA N (%) | Second PA N (%) | Third PA N (%) | Fourth PA N (%)
General knowledge:
G1 | 7 (1.56%) | 16 (3.32%) | 32 (4.77%) | 34 (4.39%)
G2 | 38 (8.48%) | 47 (9.75%) | 63 (9.39%) | 75 (9.68%)
G3 | 63 (14.06%) | 56 (11.62%) | 60 (8.94%) | 60 (7.74%)
G4 | 13 (2.90%) | 23 (4.77%) | 40 (5.96%) | 51 (6.58%)
G5 | 50 (11.16%) | 51 (10.58%) | 57 (8.49%) | 59 (7.61%)
Total | 171 (38.16%) | 193 (40.04%) | 252 (37.55%) | 279 (36.00%)
Expertise:
E1 | 57 (12.72%) | 57 (11.83%) | 76 (11.33%) | 87 (11.23%)
E2 | 53 (11.83%) | 36 (7.47%) | 53 (7.90%) | 61 (7.87%)
E3 | 7 (1.56%) | 14 (2.90%) | 29 (4.32%) | 37 (4.77%)
E4 | 59 (13.17%) | 49 (10.17%) | 57 (8.49%) | 47 (6.06%)
E5 | 6 (1.34%) | 22 (4.56%) | 33 (4.92%) | 35 (4.52%)
E6 | 25 (5.58%) | 19 (3.94%) | 35 (5.22%) | 32 (4.13%)
E7 | 23 (5.13%) | 27 (5.60%) | 41 (6.11%) | 64 (8.26%)
Total | 230 (51.33%) | 224 (46.47%) | 324 (48.29%) | 363 (46.84%)
Higher-order knowledge:
H1 | 9 (2.01%) | 13 (2.70%) | 22 (3.28%) | 40 (5.16%)
H2 | 11 (2.46%) | 21 (4.36%) | 30 (4.47%) | 31 (4.00%)
H3 | 18 (4.02%) | 19 (3.94%) | 28 (4.17%) | 37 (4.77%)
H4 | 9 (2.01%) | 12 (2.49%) | 15 (2.24%) | 25 (3.23%)
Total | 47 (10.50%) | 65 (13.49%) | 95 (14.16%) | 133 (17.16%)
Table 6. Average numbers of knowledge elements in each comment throughout the PAs.
| PA Activity | General Knowledge | Expertise | Higher-Order Knowledge |
|-------------|-------------------|-----------|------------------------|
| First | 1.500 | 2.018 | 0.412 |
| Second | 1.990 | 2.309 | 0.670 |
| Third | 2.211 | 2.842 | 0.833 |
| Fourth | 2.447 | 3.184 | 1.667 |
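Each value in Table 6 is the corresponding category total from Table 5 divided by the number of comments collected in that PA. The per-PA comment counts are not listed in this excerpt, so the count used below (114 for the first PA) is an inference that happens to reproduce the first-row values; treat it as an assumption.

```python
def mean_per_comment(category_totals, n_comments):
    """Average number of knowledge elements per comment (cf. Table 6).

    `category_totals` maps category name -> element count from Table 5;
    `n_comments` is the number of comments in that PA (assumed here).
    """
    return {cat: round(total / n_comments, 3)
            for cat, total in category_totals.items()}

# First-PA totals from Table 5 with an assumed comment count of 114
totals = {"general": 171, "expertise": 230, "higher_order": 47}
print(mean_per_comment(totals, 114))
# -> {'general': 1.5, 'expertise': 2.018, 'higher_order': 0.412}
```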
Table 7. t-test for differences in epistemic networks between the two groups.
| PA | Group | Mean (X) | SD (X) | N | t | Effect Size (d) | p | Mean (Y) | SD (Y) | N | t | Effect Size (d) | p |
|----|-------|----------|--------|----|-------|------|--------|----------|--------|----|------|------|------|
| 1 | B | −0.89 | 1.04 | 12 | −5.87 | 2.44 | 0.00 * | 0.00 | 1.15 | 12 | 0.00 | 0.00 | 1.00 |
| | G | 1.33 | 0.65 | 8 | | | | 0.00 | 2.25 | 8 | | | |
| 2 | B | −1.24 | 0.94 | 10 | −5.85 | 2.81 | 0.00 * | 0.00 | 1.71 | 10 | 0.00 | 0.00 | 1.00 |
| | G | 1.55 | 1.06 | 8 | | | | 0.00 | 2.41 | 8 | | | |
| 3 | B | −0.65 | 0.53 | 12 | −5.59 | 2.71 | 0.00 * | 0.00 | 1.40 | 12 | 0.00 | 0.00 | 1.00 |
| | G | 0.98 | 0.71 | 8 | | | | 0.00 | 0.76 | 8 | | | |
| 4 | B | −0.88 | 0.89 | 12 | −6.08 | 2.65 | 0.00 * | 0.00 | 2.37 | 12 | 0.00 | 0.00 | 1.00 |
| | G | 1.32 | 0.72 | 8 | | | | 0.00 | 1.28 | 8 | | | |
* Significant at p ≤ 0.05.
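Each row pair in Table 7 compares the two groups' epistemic-network coordinates with an independent-samples t-test, reporting Cohen's d as the effect size. A minimal stdlib sketch of that computation (Student's t with pooled variance and pooled-SD d are shown; whether the paper used these exact variants is not stated in this excerpt):

```python
import math

def t_and_cohens_d(a, b):
    """Two-sample t statistic (pooled variance) and Cohen's d.

    `a` and `b` are the per-participant coordinates of the two groups
    along one ENA dimension (X or Y in Table 7).
    """
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    # unbiased sample variances
    va = sum((x - ma) ** 2 for x in a) / (na - 1)
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)
    # pooled variance under the equal-variance assumption
    sp2 = ((na - 1) * va + (nb - 1) * vb) / (na + nb - 2)
    t = (ma - mb) / math.sqrt(sp2 * (1 / na + 1 / nb))
    d = abs(ma - mb) / math.sqrt(sp2)  # Cohen's d with pooled SD
    return t, d
```

On the Y dimension the two group means coincide (both 0.00 in Table 7), which is why t and d are 0.00 and p is 1.00 there regardless of the spread.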

Share and Cite

MDPI and ACS Style

Liu, Y.; Ni, Z.; Zha, S.; Zhang, Z. Exploring the Development of Student Teachers’ Knowledge Construction in Peer Assessment: A Quantitative Ethnography. Sustainability 2022, 14, 15787. https://doi.org/10.3390/su142315787

