Article

Why Students Have Conflicts in Peer Assessment? An Empirical Study of an Online Peer Assessment Community

School of Management, Harbin Institute of Technology, Harbin 150001, China
*
Author to whom correspondence should be addressed.
Sustainability 2019, 11(23), 6807; https://doi.org/10.3390/su11236807
Submission received: 8 November 2019 / Revised: 22 November 2019 / Accepted: 28 November 2019 / Published: 30 November 2019
(This article belongs to the Section Sustainable Education and Approaches)

Abstract:
This study highlights issues in the process of peer assessment in an online environment. As an interactive learning platform, peer assessment is likely to lead to conflicts among students, which can hinder the sustainability of peer assessment learning environments. The particular factors that influence and cause the behavioral conflicts arising within learning groups and learning environments remain unclear. To address this issue, the current study explores why peer assessment can trigger conflict over a student’s task. The results of a negative binomial regression model with user fixed effects indicate that a student’s knowledge self-efficacy, cognitive diversity of general knowledge, and network density have a positive impact on task conflict. Interactive experience and cognitive diversity of specific knowledge are not powerful motivations for task conflict in peer assessment. The findings of this study may help educators understand why students have task conflict in a specific learning environment.

1. Introduction

Effective teamwork has become an important asset for modern organizations [1]. Group project tasks have been widely used in higher education institutions as a teaching tool to encourage collaboration and collaborative learning among learners [2]. Collaborative learning strategies emphasize effective collaboration and communication among students, and task accomplishment requires the active involvement of students in learning processes. As a team learning activity, peer assessment has been widely implemented in advanced courses and proved to be effective and reliable in many courses [3]. Peer assessment is recognized as an interpersonal learning arrangement where students assess other students’ tasks by providing feedback, which includes quantitative ratings or grades across assessment criteria and qualitative comments [4]. In peer assessment, the role of students has also changed from a passive subject to a participant who actively participates and practices the assessment for self-learning.
Peer assessment provides students with opportunities to reflect on knowledge, rebuild knowledge, integrate ideas, resolve misunderstandings, and share understanding [5]. In fact, many of these activities, such as the integration of ideas, also deepen the study of knowledge. Previous studies have shown that peer assessment can stimulate deeper learning and has positive impacts on students’ curriculum performance [6]. For this reason, peer assessment has been incorporated as an innovative learning approach for students in higher education and, in most cases, it has proved effective in providing important educational value for learning [7].
Although the individual gains of cooperative and collaborative learning have been well studied, the special effects of cooperative learning have not been widely addressed [8]. As a collaborative community, the purpose of peer assessment is to expand the richness of the personal knowledge structure through interaction among students [9]. Some researchers have described the productivity of this knowledge structure in terms of the cognitive complexity that peer assessment carries [10]. The complex cognitive structures that emerge from interpersonal interactions in a group are both differentiated and integrated: the former property refers to the many concepts and topics involved, while the latter refers to the interrelationship of the nodes [11]. In order to generate a variety of knowledge structures, students need to receive diverse and even conflicting perspectives from knowledge interaction. In other words, students become involved in complex task-related conflicts. As in other groups, conflicts among students are natural and unavoidable. The peer assessment community requires active participation and the interaction of different knowledge positions, which naturally leads to knowledge interactions as well as knowledge transactions. In this kind of knowledge-sharing community, it is unrealistic to expect that everyone will easily accept others’ advice. However, students have to settle their behavioral conflicts in an amicable manner to maintain teamwork. Therefore, effectively resolving conflicts among participants is an important factor in developing personal cognitive structures. Identifying the influencing factors for task conflict behavior in peer assessment would help select and implement the right tools to avoid excessive conflicts and the collapse of collaboration, which benefits both students and educators [12].
Using negative binomial regression implemented in Stata, this study analyzes data on participants’ behavior in a Java course and identifies the influencing factors that trigger task conflict among members in peer assessment, so that it can be reasonably regulated. The rest of the study is organized as follows. Section 2 reviews the literature on task conflict and develops the hypotheses. Section 3 introduces the data corpus and the variables used to test the hypotheses. Section 4 presents the data analysis model and the analysis results. Section 5 discusses the results as well as future research.

2. Literature Review and Theoretical Framework

2.1. Literature

Peer assessment has been proven in many studies to enhance student learning through knowledge contribution [13]. Previous research has investigated knowledge contribution behaviors in various learning communities, such as medical communities, online social platforms, and Q&A communities [14]. Researchers believe that the creation of knowledge depends on the social collaboration of individuals and organizations [15]. In the proposed research framework for knowledge management, online knowledge communities are used for knowledge management within organizations and institutions because of the clarity of their organizational structure and the convenience of participation [16]. As a kind of learning community, peer assessment provides an interactive learning community for students. In addition to their own direct experience, participants can also learn from the experience of other participants, which is considered a secondary or indirect experience. Peer assessment changes the range of students’ potential knowledge structure by expanding the learning channels of knowledge [4]. Providing feedback, explanations, or suggestions is considered the primary means by which participants can disseminate knowledge and skills to others [17].
Currently, seamless collaboration among participants is widely practiced in organizations that have achieved significant success in knowledge interaction [18]. In fact, researchers found that the extent of knowledge gathered from coordinated discussions was directly correlated with the working relationships among participants [19]. Organizations are considered to make significant contributions to the creation of knowledge by providing a shared environment in which individuals can interact with others [20]. Because it is common to express disagreements during brainstorming sessions, team members often have differences and conflicts over tasks. In addition, previous research found that the quality of collective decision-making may be compromised if everyone’s ideas are synchronized from the beginning. These studies show that the existence of conflict exerts a profound effect on knowledge management [10]. Conflict is a natural consequence of interaction and is generally defined as an internal misunderstanding that arises from differences in feelings, thoughts, and values among individuals or organizations [18].
Conflict is defined as a process in which an individual or group of people perceives a negative impact from outside [21]. Previous research has confirmed that conflict can have a negative impact on individual and organizational performance [22]. Through an analysis of participants’ satisfaction and emotional acceptance, it was found that conflicts weaken member cohesiveness and undermine the strategic prospects of the organization [23]. In contrast, another view considers conflict a constructive part of decision-making: by encouraging participants to come up with different perspectives, conflicts can actually improve decision-making [24]. This dilemma, sacrificing organizational cohesion to improve the quality of decision-making, has spawned the multidimensional concept of conflict. Researchers have divided conflicts into task conflicts and relationship conflicts to discuss their effects. Task conflicts are defined as perceptions at the individual level, reflecting a disagreement between a focal member and other members regarding how to perform a task. Task conflicts are considered differences of opinion on a task, while relationship conflicts arise from contradictory memberships and are often accompanied by feelings of friction and tension [25]. Previous studies have suggested that cognition determines the conditions under which conflict is beneficial or harmful to organizational or institutional performance. With respect to personal cognitive complexity, researchers have argued that task conflict enhances the complexity of an individual’s knowledge because it is associated with the various perspectives expressed during student interaction [26]. On the other hand, relationship conflict can cause stress or anxiety among students, which reduces cognitive flexibility and ultimately narrows personal cognitive complexity [27].
The literature review shows diverse impacts of task conflict on knowledge team performance, which have been extensively discussed, but few studies have examined the causes of task conflicts in knowledge communities, especially in the peer assessment community [8]. Unlike traditional knowledge community members, who participate in community activities based on interest, participants in peer assessment often need to complete course tasks with other participants, and there are complex interests among them. Because of this close cooperation and these complex interests, participants in peer assessment are more likely to experience task conflicts. However, previous scholars have rarely studied the potential causes of task conflicts in knowledge communities. Drawing on these critical gaps in the literature, this study aims to explore the potential factors for task conflicts among students in peer assessment. This study only discusses task conflicts among students, as relationship conflicts are not its focus. The study adopts a research framework of influencing factors with peer-recognition representativeness, including knowledge self-efficacy, cognitive diversity of general knowledge, network density, interactive experience, and cognitive diversity of specific knowledge. Considering the particular conditions of participation in the peer review community, namely anonymity and task specificity, this study excludes factors such as social influence.

2.2. Hypotheses and Framework

2.2.1. Knowledge Self-Efficacy

Knowledge self-efficacy is seen as a participant’s judgment of his or her own ability to achieve a specific performance goal [28]. According to attribution theory, people observe, analyze, and explain behavior through interpretation. In social psychology, interpretation can be divided into internal (individual) and external (situational) attribution [29]. Internal attribution assigns the reasons for behavior to certain internal features rather than external forces; the source of the behavior is considered to be an individual’s characteristics, such as ability, personality, mood, effort, or attitude. External attribution, also known as situational attribution, interprets someone’s behavior as being caused by situational factors, such as task difficulty and luck [16]. Based on external interpretation, participants will explain suggestions from others in ways that protect their self-esteem. Because intense critical endorsements lead individuals to focus on consolidating their own position, the mindsets of participants are more likely to change from cooperative to competitive [30].
Researchers argue that such interventions can lead participants to attribute recommendations from others to biased personal characteristics (e.g., a grumpy, aggressive personality); in turn, participants may refuse to endorse the suggestions, which triggers task conflicts [31]. Previous studies confirmed that people with a higher level of self-efficacy have a stronger sense of self-defense. A competitive mindset may cause participants to ignore, misunderstand, and misinterpret other participants’ points of view. Therefore, we propose the following hypothesis:
H1. 
Knowledge self-efficacy is positively related to task conflict in peer assessment.

2.2.2. Cognitive Diversity

The cognitive diversity of an organization is considered to be the difference in cognitive perception among members of the organization. Different aspects of diversity are considered important at the personal and organizational levels [32]. Researchers have found that diversity perception has a significant impact on personal relationships and organizational performance [33]. For example, researchers have found that the perception of employee diversity is closely related to the overall performance of the institution or organization. Conversely, other researchers have found that cognitive diversity is negatively correlated to the institutional or organizational performance [34]. In addition, researchers have found that cognitive diversity indirectly affects organizational performance, which is influenced by the diversity of organizational members’ perceptions. These studies indicate that cognitive diversity in the organization influences several dimensions of teamwork.
In the knowledge community, whether cognitive diversity at the individual level leads to task conflicts has not been confirmed. The cognitive diversity of members is reflected in their differences in perceptions of knowledge. As a kind of knowledge community, knowledge in online peer assessment is closely related to the performance of member tasks [35]. Knowledge can be divided into general knowledge and specific knowledge, depending on whether the knowledge is specific to the task; the diversity of knowledge cognition accordingly includes the diversity of general knowledge and the diversity of specific knowledge. In peer assessment, general knowledge (also known as normative knowledge) is usually clear and easy to obtain, such as naming conventions and annotation specifications. Specific knowledge is closely related to the individual learning experience, also known as personal knowledge, and depends on circumstances and incidences [36], such as optimization methods and new ideas. Therefore, we propose the following hypotheses:
H2a. 
Cognition diversity of general knowledge is positively related to task conflict in peer assessment.
H2b. 
Cognition diversity of specific knowledge is positively related to task conflict in peer assessment.

2.2.3. Interactive Experience

As a social network, in addition to assigning tasks to members, organizations also coordinate the interactions among members [37]. Information sharing and interactive activities among members affect interaction satisfaction, and organizations become ineffective if participants are dissatisfied with the interaction within them. Researchers believe that members’ engagement with the social world around them, specifically their own interactions with other participants, affects the likelihood of conflicts arising from tasks [38]. Negative social interactions between groups are strongly correlated with the number of conflicts [39]. Conversely, a good interactive experience will encourage members to actively collaborate with others [40]. If students within institutions have pleasant experiences during the interaction, they will work cordially on the same project with the same team in the future, owing to the pleasant experience they have acquired [41]. Conversely, if an organization member’s experience of the interaction was less pleasant, teamwork will encounter more obstacles [42], and members will face more task conflicts. Therefore, we propose the following hypothesis:
H3. 
A good interactive experience is negatively related to task conflict in peer assessment.

2.2.4. Network Density

Network density reflects a pattern of interaction in which team members participate in the task structure as well as the social structure of the team [43]. The position of a participant within that structure provides him or her with a particular vantage point for obtaining relational information. The core point is that social network density is positively correlated with the degree of information interaction within the group or organization [44]. Density is considered the level of association between points. According to social capital theory, the information resources available to groups through the configuration of participants’ social structures are critical to the group’s efficiency in information processing [45]. Previous studies have shown that members of a fully connected communication network are more likely to connect with participants who hold and express different ideas and perspectives, which provides non-redundant information to group members and thereby promotes cognitive differences. Therefore, members of a high-density network are exposed to a greater variety of information, which will lead to different opinions on the content of the task, in other words, task conflicts. Hence, we formulate:
H4. 
Network density is positively related to task conflict in peer assessment.
In order to explore the factors of task conflict in peer assessment, this study hypothesizes that knowledge self-efficacy, cognitive diversity, and satisfaction with participant interaction are the main factors affecting task conflict among members in peer assessment. The research model is conceptualized in Figure 1.
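To make the density notion above concrete, a minimal illustrative sketch follows. It computes the textbook ratio of realized to possible ties in an undirected network; note that this is a generic definition for illustration only, not the study’s own operationalization, which counts each student’s exchanges (see Section 3).

```python
def network_density(n_nodes: int, n_edges: int) -> float:
    """Density of an undirected network: realized ties divided by
    possible ties, i.e., 2E / (n * (n - 1)). Generic illustration only;
    the study itself proxies density by each student's exchange count."""
    possible = n_nodes * (n_nodes - 1) // 2
    return n_edges / possible

# e.g., 5 students with 7 distinct reviewer-author ties out of 10 possible
density = network_density(5, 7)  # 0.7
```

A fully connected group of students would have density 1.0; the hypothesis concerns members who sit in denser regions of this interaction structure.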

3. Measurement

3.1. Research Instrument

This research uses a quantitative approach. We developed a new online learning platform for the Java course, namely EduPCR. EduPCR is an acronym for Educational Peer Code Review, an online peer assessment system for programming language learning. It aims to provide students with a web-based platform for sharing the knowledge items of tasks, managing collections of such items, and accumulating a history of score performance. It is suitable for undergraduates and postgraduates who need to learn programming language skills. As an online knowledge community, EduPCR is designed to provide a platform for students’ knowledge projects and assessment processes and to harness the power of online communities to improve the process of knowledge interaction.

3.2. Data Collection and Procedure

The task conflict itself and its antecedents, i.e., self-efficacy, cognitive diversity, and satisfaction with participant interaction, were measured from 84 undergraduates at Harbin Institute of Technology (HIT) in China. These undergraduates participated in the Java course that used peer assessment. Java is an advanced compulsory course taught in the undergraduate curriculum at the School of Management at HIT. As an 8-week course, the Java course is designed to develop students’ abilities in preliminary computer programming. Students submit their work on the EduPCR platform: 12 Java programming tasks of medium difficulty assigned by the course teacher. The course teacher carefully manages the difficulty and timing of each task to ensure task quality and to prevent students from becoming too tired and nervous. Each task is static, so it does not change after submission at the specified time. Modification information refers to students’ suggestions on the tasks of others, including pointing out errors, providing answers, and evaluating them. As part of the course assignment, students are required to complete various tasks through EduPCR, the web-based online assessment and knowledge sharing system described above. All students are required to register and participate in specific phases in EduPCR, such as completing the course tasks, reviewing peers’ tasks (grading and commenting), revising, and doing back-evaluation. Although students were encouraged to comment on others’ work as seriously as possible, this was not mandatory. Feedback information refers to students’ responses to the comments they received, such as arguments against or thanks for comments after their submitted task was reviewed. The course teacher’s responsibilities range from setting up the tasks to be assigned to students to summarizing the final scores after reviewing the students’ tasks.
Figure 2 is an exact depiction of a task, extracted from the EduPCR webpage while students were engaged in the process of a task review. The figure contains two types of information, the personal task and the modification information; students can choose to provide feedback after receiving modification suggestions from others.
The online peer assessment community was created prior to the start of the Java course, and the data were collected from the students attending the course at its end. The study was conducted in accordance with the University’s Human Ethics Practice Research Policy, and we did not collect any data about student names or personal information.

3.3. Variables and Measurement

In this study, our final data consisted of 84 users’ interactional behavior in EduPCR. The summary statistics for the 84 students are shown in Table 3. As discussed above, we have one dependent variable, namely task conflict, and four independent variables, namely knowledge self-efficacy, cognitive diversity, interactive experience, and network density. Finally, we also need to judge whether there is a correlation between task conflict and knowledge contribution. Based on the data in EduPCR, all variables are measured as follows.

3.3.1. Dependent Variable

In EduPCR, every student is required to take on two roles, task author and task reviewer, and both are mandatory. As task reviewers, students contribute corrections and suggestions to others. In the process of peer assessment, every student also works as an author. Students often provide feedback on the comments they receive from reviewers, including objection, approval, or other remarks. For example, a reviewer received the following feedback:
“Thank you for your suggestion, I will add more comments, but the solution you provide is more complicated than my approach.”
This feedback contains both positive and negative impressions; we record the negative impressions and treat them as task conflicts. In this study, we use the number of objections students provide in their feedback to represent and record task conflicts.

3.3.2. Independent Variables

Knowledge self-efficacy. In peer assessment, knowledge self-efficacy reflects mastery of the course knowledge, and students were tested to assess their knowledge. To measure students’ knowledge self-efficacy, a simple test was conducted in class at the end of the Java course. The test used programming code similar to the tasks students performed in EduPCR, and students considered it a good reflection of their understanding of the course knowledge. The standards of the test were clearly defined for respondents in advance. All of them knew it would be a written test in which each step carries 1 point, for a total of 15 points. Therefore, if a student completes all the steps correctly, his or her knowledge self-efficacy score will be 15; if he or she completes no steps, the score will be 0. We used the individual scores achieved in the test to represent a respondent’s knowledge self-efficacy.
Cognition diversity. As mentioned above, as a kind of knowledge community, cognitive diversity in peer assessment is considered to be the difference in knowledge attributes of students of the community in terms of knowledge, skills, experience, or expertise. Because members of the online assessment community have different knowledge, abilities, and skills, students often have different opinions about the work of others. Therefore, we use the number of suggestions provided by a student to measure the scale of cognition diversity in EduPCR.
Interactive experience. When students provide recommendations for others’ tasks, they will receive feedback from the reviewee. The feedback may include denial, gratitude, and recognition. The reviewer expects a response, especially a positive response, because praise and recognition are believed to enhance personal pleasure [46]. In peer assessment, the student who provided review comments may be praised by other members. Therefore, we use the number of praises provided by a student to measure the scale of satisfaction of the interactive experience.
Network density. In peer assessment, students interact extensively through daily tasks that form the social structure of the learning community [14]. The location of a student in the social structure facilitates his or her access to information [47]. Students who are in a central structure are considered to have better access to information and access to more diverse information. Therefore, we measure network density with the number of exchanges, in which every student participated.
The list of variables is recorded in Table 1.
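Since every variable above is operationalized as a count of actions in the EduPCR records, the measurement step can be sketched as a simple per-student tally. The log records and action labels below are hypothetical, chosen only to illustrate the counting logic, not the platform’s actual data schema:

```python
from collections import Counter

# Hypothetical (student_id, action) records extracted from EduPCR logs;
# the action labels are illustrative, not the platform's actual schema.
log = [
    ("s01", "suggestion"), ("s01", "praise"), ("s01", "exchange"),
    ("s01", "objection"), ("s02", "suggestion"), ("s02", "exchange"),
    ("s02", "exchange"),
]

def count_measures(records):
    """Tally each action type per student, mirroring the count-based
    operationalization of the variables (objections -> task conflict,
    suggestions -> cognitive diversity, praise -> interactive
    experience, exchanges -> network density)."""
    measures = {}
    for student, action in records:
        measures.setdefault(student, Counter())[action] += 1
    return measures

measures = count_measures(log)
# measures["s01"]["objection"] gives student s01's task-conflict count
```

Each student’s row of counts then becomes one observation in the regression described in Section 3.4.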

3.4. Regression Analysis

Regression analysis is widely used to find and explore the relationship between a set of independent variables and a dependent variable [48]. The dependent variable in this study is a non-negative integer count. Both the negative binomial regression model and the Poisson regression model are designed to analyze count data, but their conditions of application differ, such as in the variance of the dependent variable and the assumptions about the conditional mean. The Poisson regression model requires the conditional mean to equal the variance. In contrast, the negative binomial regression model does not require the mean and variance to be equal; instead, a parameter is introduced to correct for over-dispersion when the variance is greater than the conditional mean. As shown in Table 2, the mean and variance of the dependent variable in this study are quite different. Therefore, this study adopts negative binomial regression to study the factors behind participants’ task conflict behaviors in online peer assessment. The negative binomial probability function is as follows:
$$\Pr(Y = y_i) = \frac{\Gamma(y_i + \theta)}{\Gamma(y_i + 1)\,\Gamma(\theta)} \left( \frac{\theta}{\theta + \lambda_i} \right)^{\theta} \left( \frac{\lambda_i}{\theta + \lambda_i} \right)^{y_i}$$
The negative binomial distribution has two parameters, λ and θ. The parameter θ captures over-dispersion in the data: the variance is λ + λ²/θ, so the negative binomial distribution reduces to the Poisson distribution as the over-dispersion 1/θ approaches 0 (i.e., as θ grows large). The conditional mean is specified as:
$$\ln(\lambda(x_i)) = c_i + x_i \beta + \varepsilon_i$$
$$\ln(\lambda(x_i)) = c_i + \beta_1\,\mathrm{suggestionNum}_i + \beta_2\,\mathrm{scoreNum}_i + \beta_3\,\mathrm{praiseNum}_i + \beta_4\,\mathrm{communicationNum}_i + \varepsilon_i$$
where c_i is a dummy variable capturing the user’s fixed effects; β is a vector of regression coefficients of the covariates; and ε_i is the error term.
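As a minimal, self-contained sketch (not the study’s actual estimation code, which was run in Stata), the probability function above can be evaluated numerically using log-gamma functions to avoid overflow:

```python
import math

def neg_binomial_pmf(y: int, lam: float, theta: float) -> float:
    """Pr(Y = y) for the negative binomial model in the text:
    Gamma(y + theta) / (Gamma(y + 1) * Gamma(theta))
      * (theta / (theta + lam))**theta * (lam / (theta + lam))**y.
    Computed on the log scale with lgamma for numerical stability."""
    log_p = (
        math.lgamma(y + theta) - math.lgamma(y + 1) - math.lgamma(theta)
        + theta * math.log(theta / (theta + lam))
        + y * math.log(lam / (theta + lam))
    )
    return math.exp(log_p)

# Sanity checks for lam = 3, theta = 2: probabilities sum to ~1 and the
# mean recovers lam, while the variance lam + lam**2 / theta exceeds lam,
# which is the over-dispersion the model is chosen to handle.
probs = [neg_binomial_pmf(y, lam=3.0, theta=2.0) for y in range(200)]
mean = sum(y * p for y, p in enumerate(probs))
```

The same log-scale computation underlies the likelihood that a statistics package maximizes when fitting the model.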

4. Results

All participants are third-year undergraduate students in the School of Management. Thirty-eight percent of the participants are female, while 62% are male. Most of the students are between 19 and 22 years old, with an average age of about 21. The matrix of correlation coefficients between the variables is recorded in Table 3. Table 3 shows that the correlation coefficients between the independent variables are all less than 0.6, so they do not cause a multicollinearity problem in the regression analysis.
The results of the two count panel data regression models, the Poisson regression model and the negative binomial regression model, are shown in Table 4 and Table 5. As criteria for selecting the better model, this study uses the Akaike information criterion (AIC) and the Bayesian information criterion (BIC) to choose between Poisson regression and negative binomial regression. The tables report the AIC and BIC results for the two regression models. The AIC and BIC values of the negative binomial regression are significantly lower than those of the Poisson regression, which means that negative binomial regression is clearly superior to Poisson regression for our count panel data. As in a linear regression model, the regression coefficient determines the effect of the independent variable on the dependent variable: a positive (negative) sign means the independent variable has a positive (negative) effect on the dependent variable. As shown in Table 5, the general knowledge provided, communication, and score are positively correlated with task conflict at different significance levels. This study used the general criterion of 0.05 as the reference significance level.
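For reference, the two criteria are simple functions of the fitted log-likelihood ln L, the number of parameters k, and the sample size n. The log-likelihood values below are made-up placeholders to illustrate the comparison, not the study’s results:

```python
import math

def aic_bic(log_lik: float, k: int, n: int) -> tuple:
    """AIC = 2k - 2 ln L; BIC = k ln(n) - 2 ln L. Lower is better, and
    BIC penalizes extra parameters more heavily whenever ln(n) > 2."""
    return 2 * k - 2 * log_lik, k * math.log(n) - 2 * log_lik

# Hypothetical fits on n = 84 students: the negative binomial model has
# one extra parameter (the dispersion) but a better log-likelihood.
aic_pois, bic_pois = aic_bic(-250.0, k=5, n=84)
aic_nb, bic_nb = aic_bic(-230.0, k=6, n=84)
selected = "negative binomial" if aic_nb < aic_pois and bic_nb < bic_pois else "poisson"
```

When, as in Table 4 and Table 5, both criteria favor the same model, the choice is unambiguous; they can disagree when the penalty terms dominate.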
Hypothesis 1 investigates the impact of knowledge self-efficacy on task conflict. In this paper, the participant’s test score is employed to represent knowledge self-efficacy in peer assessment communities. The regression results indicate that the coefficient for knowledge self-efficacy is positive and significant. Therefore, hypothesis 1 is supported: knowledge self-efficacy has a positive impact on task conflict in peer assessment.
Hypotheses 2a and 2b investigate the impact of cognitive diversity on task conflict in peer assessment. In this paper, cognitive diversity is represented by a student’s knowledge contribution, whether general knowledge or specific knowledge. The regression results indicate that the coefficient for cognitive diversity is positive and significant for general knowledge but not significant for specific knowledge. Therefore, hypothesis 2a is supported and hypothesis 2b is rejected. We can conclude that only general knowledge contributions are positively related to task conflict.
Hypothesis 3 was designed to investigate the impact of interactive experience on task conflict. In peer assessment communities, a good interactive experience is measured as the praise received from other students. Contrary to our expectations, the empirical results indicate that the coefficient is not significant. Therefore, hypothesis 3 is rejected: a good interactive experience is not related to task conflict in peer assessment, and praise is not a factor that affects task conflict.
Hypothesis 4 investigates the impact of network density on task conflict. In this study, network density is measured as the number of exchanges among students. The regression results indicate that the coefficient for communication is positive and significant. Therefore, hypothesis 4 is supported: students’ exchanges have a positive impact on task conflict in peer assessment.
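For illustration, the study's operationalization of network density (the number of exchanges a student takes part in) can be contrasted with the conventional graph-theoretic density (observed ties divided by possible ties). The sketch below uses hypothetical review-exchange records, not the study's EduPCR data, and the networkx library:

```python
from collections import Counter
import networkx as nx

# Hypothetical (auditor, auditee) exchange records; illustrative only.
exchanges = [("s1", "s2"), ("s2", "s3"), ("s1", "s3"), ("s3", "s4")]

# Per-student exchange counts, the measure used in this study:
# each student is credited once per exchange they appear in.
counts = Counter(s for pair in exchanges for s in pair)
print(counts["s3"])  # s3 takes part in 3 exchanges

# The conventional graph density, for comparison: a MultiGraph keeps
# repeated exchanges between the same pair as separate edges.
G = nx.MultiGraph()
G.add_edges_from(exchanges)
print(nx.density(G))  # 4 edges among 4 students -> 4/6
```

Under either measure, students such as s3 who sit in a denser part of the interaction network are exposed to more exchanges, which is the mechanism the hypothesis appeals to.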

5. Discussion and Conclusions

5.1. Findings

The purpose of this study is to identify the factors that trigger task conflict among students in peer assessment. Based on social capital theory, attribution theory, and knowledge management theory, this study reveals the antecedents of intra-group task conflict among students. The empirical results show that students’ test scores, cognitive diversity of general knowledge, and exchanges have positive impacts on task conflict.
Our research identifies the particularity of peer assessment and argues that the study of task conflict in peer assessment should be conducted at the individual level. The empirical results also indicate that the number of exchanges and the test score are related to task conflict. According to these results, students with higher knowledge self-efficacy tend to be more confident in the performance of their task and are more likely to stick to their own coding style. In peer assessment, students in high-density areas of the interaction network are exposed to more valuable knowledge about the task; therefore, they are more likely to find errors in other students’ tasks and to pass on the optimized knowledge to others after making amendments. Contrary to prior studies, our empirical results indicate that interactive experience and cognitive diversity of specific knowledge are not significant; we therefore suggest that they are not significant factors for task conflict among students in peer assessment. There are reasons for this “unexpected” result. In this assessment course, in order to urge the auditor to review others’ tasks seriously, the auditee scores the amendments provided by the auditor, and those scores are included in the auditor’s course grade. To obtain a higher score, students may prefer to express goodwill toward the auditee by giving praise and recognition.
By analyzing the data from EduPCR, we found that praise has become an integral part of review suggestions and is widely adopted. When giving praise becomes a routine job for the auditor, students are no longer sensitive to this deliberate form of approval. As a result, the auditee’s insensitivity prevents any effective connection between praise and task conflict among students. At the same time, although task conflict is positively related to cognitive differences in general knowledge, it is uncorrelated with cognitive differences in specific knowledge. In the Java language course, students have their own style of code processing, such as annotation and naming. Although educators have published annotation and naming conventions for the course, most students still insist on their own style, which they believe does not hinder their coding. Therefore, compared with specific knowledge, the auditee tends to object to general knowledge suggestions from the auditor and hopes that other students will respect their coding style. By contrast, because specific knowledge often determines whether the code runs successfully, and can both provide new ideas and help the auditee correct programming errors, it is more easily accepted by the auditee.

5.2. Implications

Nowadays, the purpose of using peer assessment differs from before, when educators used it mainly for instruction and grading. Peer assessment is designed to enrich personal knowledge structures through the knowledge exchange that emerges from interpersonal interaction. As described earlier, task conflicts arising from interactions in the learning group can enrich a student’s knowledge structure. However, it should be noted that some studies have also verified the negative impact of task conflict, which is often accompanied by relationship conflict. To realize the benefits, students have to resolve these conflicts in an amicable manner. Based on this study, we offer recommendations from three aspects: knowledge self-efficacy, cognitive diversity of general knowledge, and the density of the interaction network. In code peer assessment, students’ programming skills vary because of differences in learning time. Advanced programming skills should be encouraged, but students should also be reminded to respect the programming skills of others. Students with low-level programming skills often claim that they are discriminated against by students with high-level skills; although this discrimination is task-based, it will inevitably lead to relationship conflict. Educators should provide reasonable programming training and develop students’ self-confidence before each task is released. In terms of cognitive diversity of general knowledge, there are two ways to coordinate task conflict among students: to encourage students to invest more time in differences in specific knowledge, educators can either set strict, uniform programming conventions that leave no room for debate, or explicitly respect students’ diverse programming styles. Finally, in terms of network density, a harmonious communication atmosphere is necessary.
Educators should help students develop communication skills so that criticism and pressure do not provoke hostile comments.

5.3. Limitations and Future Work

Future research is critical to validating the results of this study. Factors that influence students’ task conflict in peer assessment in different cultures and contexts should be further studied. Furthermore, the sample of this study is small: it required a specific group of students enrolled in a course that uses peer assessment, and the educator and assistants had to monitor the entire process. Future research can use larger samples to examine the hypotheses proposed in this study and obtain more substantiated conclusions.
Overall, the results of this study enhance our understanding of why students have conflicts in peer assessment and raise considerations for educators seeking to strengthen or change their respective practices.

Author Contributions

Conceptualization, Z.Z. and Y.W.; methodology and investigation, Z.Z. and Y.W.; data analysis, Z.Z. and Y.W.; writing, review and editing, Z.Z. and Y.W.; supervision, Z.Z.

Funding

This work was partially supported by the National Natural Science Foundation of China (grant numbers 71573065 and 71571085).

Acknowledgments

The authors would like to thank the editor and anonymous reviewers for positive and constructive comments and suggestions.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Anderson, N.; Burch, G.S.J. Measuring person-team fit: Development and validation of the team selection inventory. J. Manag. Psychol. 2004, 19, 406–426. [Google Scholar]
  2. Davies, W.M. Groupwork as a Form of Assessment: Common Problems and Recommended Solutions. High. Educ. 2009, 58, 563–584. [Google Scholar] [CrossRef]
  3. Li, H.; Xiong, Y.; Zang, X.; Kornhaber, M.L.; Lyu, Y.; Chung, K.S.K.; Suen, H. Peer assessment in the digital age: A meta-analysis comparing peer and teacher ratings. Assess. Eval. High. Educ. 2016, 41, 245–264. [Google Scholar] [CrossRef]
  4. Wang, Y.; Li, H.; Feng, Y.; Jiang, Y.; Liu, Y. Assessment of programming language learning based on peer code review model: Implementation and experience report. Comput. Educ. 2012, 59, 412–422. [Google Scholar] [CrossRef]
  5. Roscoe, R.D.; Chi, M.T.H. Tutor learning: The role of explaining and responding to questions. Instr. Sci. 2008, 36, 321–350. [Google Scholar] [CrossRef]
  6. Miklas, E.J.; Kleiner, B.H. New developments concerning academic grievances. Manag. Res. News 2003, 26, 141–147. [Google Scholar] [CrossRef]
  7. Topping, K.J. Methodological quandaries in studying process and outcomes in peer assessment. Learn. Instr. 2010, 20, 339–343. [Google Scholar] [CrossRef]
  8. Haggis, T. What have we been thinking of? A critical overview of 40 years of student learning research in higher education. Stud. High. Educ. 2009, 34, 377–390. [Google Scholar] [CrossRef]
  9. Curşeu, P.L.; Janssen, S.E.; Raab, J. Connecting the dots: Social network structure, conflict, and group cognitive complexity. High. Educ. 2012, 63, 621–629. [Google Scholar] [CrossRef]
  10. Yuni, P.; Husamah, H. Self and Peer Assessments in Active Learning Model to Increase Metacognitive Awareness and Cognitive Abilities. Int. J. Instr. 2017, 10, 185–202. [Google Scholar]
  11. Driver, M.J.; Streufert, S. Integrative Complexity: An Approach to Individuals and Groups as Information-processing Systems. Adm. Sci. Q. 1969, 14, 272–285. [Google Scholar] [CrossRef]
  12. Moghavvemi, S.; Paramanathan, T.; Rahin, N.M.; Sharabati, M. Student’s perceptions towards using e-learning via Facebook. Behav. Inf. Technol. 2017, 29, 1–20. [Google Scholar] [CrossRef]
  13. Hsu, M.H.; Ju, T.L.; Yen, C.H.; Chang, C.M. Knowledge sharing behavior in virtual communities: The relationship between trust, self-efficacy, and outcome expectations. Int. J. Hum. Comput. Stud. 2007, 65, 153–169. [Google Scholar] [CrossRef]
  14. Jin, J.; Li, Y.; Zhong, X.; Zhai, L. Why users contribute knowledge to online communities. Inf. Manag. 2015, 52, 840–849. [Google Scholar] [CrossRef]
  15. Stebbins, M.W.; Shani, A.B. Organization design and the knowledge worker. Leadersh. Organ. Dev. J. 1995, 16, 23–30. [Google Scholar] [CrossRef]
  16. Hildreth, P.; Kimble, C.; Wright, P. Communities of practice in the distributed international environment. J. Knowl. Manag. 2000, 4, 27–38. [Google Scholar] [CrossRef]
  17. Ellis, A.P.; Hollenbeck, J.R.; Ilgen, D.R.; Porter, C.O.; West, B.J.; Moon, H. Team learning: Collectively connecting the dots. J. Appl. Psychol. 2003, 88, 821–835. [Google Scholar] [CrossRef]
  18. Rahim, M.A. Empirical studies on managing conflict. Int. J. Confl. Manag. 2000, 11, 5–8. [Google Scholar] [CrossRef]
  19. Nahapiet, J.; Ghoshal, S. Social Capital, Intellectual Capital, and the Organizational Advantage. Acad. Manag. Rev. 1998, 23, 242–266. [Google Scholar]
  20. Crossan, M.M. The Knowledge-Creating Company: How Japanese Companies Create the Dynamics of Innovation. J. Int. Bus. Stud. 1996, 27, 196–201. [Google Scholar] [CrossRef]
  21. Thomas, K.W. Conflict and conflict management: Reflections and update. J. Organ. Behav. 2010, 13, 265–274. [Google Scholar] [CrossRef]
  22. Kurtzberg, T.R.; Mueller, J.S. The influence of daily conflict on perceptions of creativity: A longitudinal study. Int. J. Confl. Manag. 2005, 16, 335–353. [Google Scholar]
  23. Schweiger, D.M.; Sandberg, W.R.; Rechner, P.L. Experiential effects of dialectical inquiry, devil’s advocacy, and consensus approaches to strategic decision making. Acad. Manag. J. 1989, 32, 745–772. [Google Scholar]
  24. Sterling, S. Higher Education, Sustainability, and the Role of Systemic Learning. High. Educ. Chall. Sustain. 2004, 49–70. [Google Scholar] [CrossRef]
  25. Jehn, K.A.; Mannix, E.A. The Dynamic Nature of Conflict: A Longitudinal Study of Intragroup Conflict and Group Performance. Acad. Manag. J. 2001, 44, 238–251. [Google Scholar]
  26. Stock, R. Drivers of Team Performance: What Do We Know and What Have We Still To Learn? Schmalenbach Bus. Rev. 2004, 56, 274–306. [Google Scholar] [CrossRef]
  27. Wasko, M.L.; Faraj, S. Why Should I Share? Examining Social Capital and Knowledge Contribution in Electronic Networks of Practice. MIS Q. 2005, 29, 35–57. [Google Scholar] [CrossRef]
  28. Fullwood, R.; Rowley, J.; Delbridge, R. Knowledge sharing amongst academics in UK universities. J. Knowl. Manag. 2013, 17, 123–136. [Google Scholar] [CrossRef]
  29. De Wit, F.R.; Greer, L.L.; Jehn, K.A. The paradox of intragroup conflict: A meta-analysis. J. Appl. Psychol. 2012, 97, 360–390. [Google Scholar] [CrossRef]
  30. De Dreu, C.K.; Weingart, L.R. Task versus relationship conflict, team performance, and team member satisfaction: A meta-analysis. J. Appl. Psychol. 2003, 88, 741–749. [Google Scholar] [CrossRef]
  31. Zhu, Q.; Carless, D. Dialogue within peer feedback processes: Clarification and negotiation of meaning. High. Educ. Res. Dev. 2018, 37, 883–897. [Google Scholar] [CrossRef]
  32. Harrison, D.A.; Newman, D.A.; Roth, P.L. How Important Are Job Attitudes? Meta-Analytic Comparisons of Integrative Behavioral Outcomes and Time Sequences. Acad. Manag. J. 2006, 49, 305–325. [Google Scholar] [CrossRef]
  33. Allen, R.S.; Dawson, G.; Wheatley, K.; White, C.S. Perceived diversity and organizational performance. Empl. Relat. 2008, 30, 20–33. [Google Scholar] [CrossRef]
  34. Carless, D. Trust, distrust and their impact on assessment reform. Assess. Eval. High. Educ. 2009, 34, 79–89. [Google Scholar] [CrossRef]
  35. Rowe, D. Education for a sustainable future. Science 2007, 317, 323–324. [Google Scholar] [CrossRef]
  36. Zack, M.H. Developing a Knowledge Strategy. Calif. Manag. Rev. 1999, 41, 125–145. [Google Scholar] [CrossRef]
  37. Lurey, J.S.; Raisinghani, M.S. An empirical study of best practices in virtual teams. Inf. Manag. 2001, 38, 523–544. [Google Scholar] [CrossRef]
  38. Avgar, A.C.; Neuman, E.J. Seeing Conflict: A Study of Conflict Accuracy in Work Teams. Negot. Confl. Manag. Res. 2015, 8, 65–84. [Google Scholar] [CrossRef]
  39. Labianca, G.; Gray, B.B. Social Networks and Perceptions of Intergroup Conflict: The Role of Negative Relationships and Third Parties. Acad. Manag. J. 1998, 41, 55–67. [Google Scholar]
  40. Thoresen, C.J.; Bradley, J.C.; Bliese, P.D.; Thoresen, J.D. The big five personality traits and individual job performance growth trajectories in maintenance and transitional job stages. J. Appl. Psychol. 2004, 89, 835–853. [Google Scholar] [CrossRef] [Green Version]
  41. Barber, J.P. Integration of Learning: A Grounded Theory Analysis of College Students’ Learning. Am. Educ. Res. J. 2012, 49, 590–617. [Google Scholar] [CrossRef] [Green Version]
  42. Peralta, C.F.; Saldanha, M.F. Knowledge-centered culture and knowledge sharing: The moderator role of trust propensity. J. Knowl. Manag. 2014, 18, 538–550. [Google Scholar] [CrossRef] [Green Version]
  43. Paten, C.J.K. International Journal of Sustainability in Higher Education. Int. J. Sustain. High. Educ. 2000, 14, 25–41. [Google Scholar]
  44. Reagans, R.; Mcevily, Z.B. How to Make the Team: Social Networks vs. Demography as Criteria for Designing Effective Teams. Adm. Sci. Q. 2004, 49, 101–133. [Google Scholar]
  45. Lejk, M.; Wyvill, M. Peer Assessment of Contributions to a Group Project: A comparison of holistic and category-based approaches. Assess. Eval. High. Educ. 2001, 26, 61–72. [Google Scholar] [CrossRef]
  46. Oh, H.; Chung, M.H.; Labianca, G. Group Social Capital and Group Effectiveness: The Role of Informal Socializing Ties. Acad. Manag. J. 2004, 47, 860–875. [Google Scholar] [CrossRef]
  47. Fonseca, A.; Macdonald, A.; Dandy, E.; Valenti, P. The State of Sustainability Reporting in Universities. Int. J. Sustain. High. Educ. 2011, 12, 67–78. [Google Scholar] [CrossRef]
  48. Gardner, W.; Mulvey, E.P.; Shaw, E.C. Regression analyses of counts and rates: Poisson, overdispersed Poisson, and negative binomial models. Psychol. Bull. 1995, 118, 392–404. [Google Scholar] [CrossRef]
Figure 1. Research framework.
Figure 2. The web page of student code review.
Table 1. Descriptive statistics of the variables.

Variable Name | Measure Item | Description
Dependent variable
Conflict task | Objection | The number of objections to review suggestions.
Independent variables
Knowledge self-efficacy | Score | A test score of the participant.
Cognition diversity | Suggestion | The number of general or specific knowledge suggestions made by the participant.
Interactive experience | Praise | The number of praises received by the participant.
Network density | Communication | The number of communications between members.
Table 2. Descriptive statistics.

Variable | Mean | S.D. | Min | Max
Specific knowledge provided | 27.476 | 30.208 | 0 | 194
General knowledge provided | 26.393 | 20.239 | 0 | 86
Praise provided | 34.94 | 22.156 | 0 | 95
Task conflict | 5.583 | 5.698 | 0 | 28
Communication | 8.131 | 15.017 | 0 | 90
Score | 14.524 | 5.649 | 4 | 27
Table 3. Correlation matrix.

Variables | (1) | (2) | (3) | (4) | (5)
(1) Specific knowledge provided | 1.000 | | | |
(2) General knowledge provided | 0.581 | 1.000 | | |
(3) Praise provided | 0.154 | 0.131 | 1.000 | |
(4) Communication | 0.361 | 0.199 | 0.114 | 1.000 |
(5) Score | 0.517 | 0.311 | 0.233 | 0.289 | 1.000
Table 4. Poisson regression.

Task Conflict | Coef. | S.E. | 95% Conf. Interval | Sig.
Specific knowledge provided | −0.001 | 0.003 | [−0.007, 0.004] | ns
General knowledge provided | 0.018 | 0.006 | [0.007, 0.029] | ***
Praise provided | −0.004 | 0.004 | [−0.011, 0.003] | ns
Communication | 0.013 | 0.003 | [0.006, 0.019] | ***
Score | 0.034 | 0.014 | [0.006, 0.061] | **
Mean dependent var: 5.583; SD dependent var: 5.698
Akaike crit. (AIC): 496.118; Bayesian crit. (BIC): 510.743; Number of obs: 84
*** p < 0.01, ** p < 0.05, * p < 0.1.
Table 5. Negative binomial regression.

Task Conflict | Coef. | S.E. | 95% Conf. Interval | Sig.
Specific knowledge provided | 0.004 | 0.005 | [−0.006, 0.013] | ns
General knowledge provided | 0.018 | 0.005 | [0.008, 0.028] | ***
Praise provided | −0.005 | 0.004 | [−0.013, 0.002] | ns
Communication | 0.014 | 0.005 | [0.005, 0.023] | ***
Score | 0.037 | 0.016 | [0.006, 0.067] | **
Mean dependent var: 5.583; SD dependent var: 5.698
Akaike crit. (AIC): 438.115; Bayesian crit. (BIC): 455.131; Number of obs: 84
*** p < 0.01, ** p < 0.05, * p < 0.1.
