Article

Inspiring a Self-Reliant Learning Culture while Brewing the Next Silicon Valley in North Wales

School of Computer Science and Electronic Engineering, Bangor University, Bangor LL57 1UT, UK
* Author to whom correspondence should be addressed.
Educ. Sci. 2020, 10(3), 64; https://doi.org/10.3390/educsci10030064
Submission received: 1 February 2020 / Revised: 29 February 2020 / Accepted: 2 March 2020 / Published: 8 March 2020

Abstract

Practical strategies for improving individual engagement and performance within an engineering team project learning environment were applied and evaluated. While methodological refinements were required due to the structural challenges and novelty of the practice, positive outcomes such as a perceived increase in engagement and technical proficiency were recorded. Critical aspects in the current approach are the well-known issue of assessing individual contributions within group performance, and setting a proper regulatory environment to prevent peer-assessment bias or dysfunctions. A novel intra-group mark moderation approach is presented and discussed.

1. Introduction

Creating a stimulating learning environment through group project work is the subject of considerable pedagogical literature [1,2,3]. The increasing consideration for group-based learning and group work assessment mirrors a change in the wider context in which higher education operates, with increasing emphasis being placed on problem-based and cooperative learning [4]. Such a trend is even more prominent in modern Engineering programs [5,6] where a multi-tasking synergistic approach seems ideally suited to tackle the growing complexity and hyper-specialization of technological skills.
Widely acknowledged advantages of group-based learning encompass new and effective ways to engage students, promote diversity and creativity, offer collaborative experiences that resemble a real working environment, increase the challenge of the tasks and reduce marking loads [7]. Furthermore, the implementation of small group learning has shown large positive impacts on student attitudes towards learning and retention [8] while reducing the emotional stress due to individual examinations. Nevertheless, a number of challenges have also been documented, including the construction of groups from a pool of students with different abilities and backgrounds [9]; the decrease in members' engagement with group size [10]; members' anxiety when facing new assessment techniques such as group presentations [11]; and the evaluation of individual learners versus group performance [12].
In this context, Bangor's "Engineering team project", a 2nd year course in various undergraduate programs run by the School of Computer Science and Electronic Engineering [13], was redesigned to create a healthy and dynamic learning environment while meeting the module's learning outcomes. This report assesses the effectiveness of three measures introduced in this exercise over four teaching cycles, namely, (1) empowering groups to control their own personnel recruitment and management; (2) engaging them in the attribution of project development funding; and (3) involving them in intra-group peer assessment and individual evaluation.
This paper is organized as follows: Section 2 covers the module’s regulatory environment along with the approaches followed for marking and practice evaluation; Section 3 analyses the observed outcomes and insights gained from student feedback; Section 4 discusses the issues arising from the proposed practice with respect to established methods; Section 5 draws some general conclusions.

2. Materials and Methods

The course was shaped as a (slightly) competitive effort involving student groups or "teams", with student and group numbers in the range of 50–67 and 8–11, respectively, per year. Teams were structured as micro-enterprises and required to develop a hi-tech product while maximising its commercial appeal. A minimally prescriptive learning milieu was sought to unleash creativity and encourage student self-reliance throughout the course, in line with other engineering team project modules worldwide. Technical guidance from the lecturer was limited to an initial analysis of successful products developed in previous years, along with focused tutorials on device modelling, technical specifications and presentation tips. Final product standards and deliverable goals were emphasized over single lecture outcomes and assignments. Although a high level of student choice and self-reliance was encouraged in day-to-day project management, the following constitutive elements were explicitly enforced to streamline all activities within a thoroughly regulated environment.

2.1. Team Construction and Management

Each team needed to allocate adequate manpower to run four key divisions, namely Design; Testing and Modelling; Human Resources; Finance and Marketing. Teams were free to self-assemble, recruit their own "employees" and assign internal positions, with a maximum team size of 7 members. The rationale and implications of group size will be discussed in Section 4. A major innovation with respect to previous implementations consisted in enabling teams to lay off uncommitted members, provided that a majority consensus behind the team's decision had been ascertained and that the lecturer could assist the laid-off member in joining another team. Restructuring teams were required to grant any laid-off students a GBP 5 "farewell" bonus (~10% of the average team budget) with which to endow a new team, hence catalysing recruitment. Members could also leave a team on a voluntary basis with no bonus requirement. Team restructuring was monitored but essentially allowed until the end of February, when it was deemed that any new member could no longer make a significant contribution to the team's work.

2.2. Funding Attribution

Project development was supported by the School of Electronics through the allocation of up to GBP 75 per team. However, in contrast with previous implementations, funding was not automatically granted at the beginning of the course, but rather earned by each team through pitching the project rationale and development plan before a Venture Capitalist Board (VCB). The VCB was chaired by the lecturer and composed of members lent by every team (one VCB member per team, leading to an 11-member board in a 10-team environment). Teams briefed the VCB on their progress and sought funding through monthly Milestone (MS) presentations. During the first Milestone (MS0) all teams were required to present a list of 3 viable ideas for a technical product and to ask the VCB for feedback to help select the leading one. The VCB was then summoned to approve the selected idea and bestow a start-up fund of GBP 25 through a Yes or No vote. In the subsequent MS1 to MS5 presentations the Board attributed funds by averaging the contribution suggested by each member, up to a maximum indicated by the lecturer. The maximum amount granted in ordinary MSs was around GBP 10. The final MS presentation only counted towards the overall assessment and not towards funding attribution, since it occurred on the very last date of the module. Teams were also allowed to invest directly in one another's technology, for example by using a portion of their budget to acquire another team's product, module or know-how, as long as mutual agreement on price and intellectual property could be achieved.
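For clarity, the averaging rule just described can be restated as a short computational sketch. This is purely illustrative; the module used no such software, and the function and variable names below are hypothetical.

def vcb_award(suggestions_gbp, lecturer_cap_gbp):
    # Average the amounts suggested by the VCB members and cap the result
    # at the maximum indicated by the lecturer.
    if not suggestions_gbp:
        return 0.0
    average = sum(suggestions_gbp) / len(suggestions_gbp)
    return min(average, lecturer_cap_gbp)

# Example: an 11-member board suggesting amounts (in GBP) for one team at an ordinary MS.
suggestions = [8, 10, 9, 7, 10, 10, 6, 9, 8, 10, 9]
print(vcb_award(suggestions, lecturer_cap_gbp=10))  # ~8.7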

2.3. Marking Approach

Team performance in all MSs was mapped against four criteria that were defined in advance and communicated to students. All teams (and the lecturer) completed an evaluation form based on the same four criteria. As summarized in Figure 1, the average performance from peer group evaluation accounted for 40% of the final MS mark, with another 40% being determined by the lecturer. An additional 10% was attributed by the lecturer for the group performance on MS-specific topics, such as Computer Aided Design and product datasheet specification, which had been introduced through short tutorials/training sessions prior to the MS. The students were additionally encouraged to complete a peer-assessment exercise which was moderated through the Virtual Learning Environment (VLE) supported across Bangor University [14]. The peer-assessment exercise required all teams to express constructive comments on each other's MS work, and determined the residual 10% of the mark based on the (lecturer-perceived) quality of the comments. MS1–MS4 were each given a 10% relative weight in the final evaluation. The Final Milestone (MS5) was given a 20% weight; the Final Report was given a 40% weight and was solely evaluated by the lecturer.
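The weighting scheme amounts to simple arithmetic; the sketch below restates it in code purely for illustration, and the function names, variable names and 0–100 mark scale are assumptions rather than part of the module's actual tooling.

def milestone_mark(peer_avg, lecturer_mark, topic_mark, comment_quality):
    # 40% peer-group average, 40% lecturer, 10% MS-specific topic, 10% comment quality.
    return 0.4 * peer_avg + 0.4 * lecturer_mark + 0.1 * topic_mark + 0.1 * comment_quality

def module_mark(ms1_to_ms4, ms5, final_report):
    # MS1-MS4 weigh 10% each, the final milestone (MS5) 20% and the final report 40%.
    return 0.1 * sum(ms1_to_ms4) + 0.2 * ms5 + 0.4 * final_report

# Example with made-up marks on a 0-100 scale.
print(milestone_mark(peer_avg=65, lecturer_mark=70, topic_mark=60, comment_quality=75))  # ~67.5
print(module_mark([60, 65, 70, 72], ms5=68, final_report=75))  # ~70.3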

Individual Evaluation through Intragroup Peer Assessment

Individual evaluation within groups, a long-standing issue in group-based learning and an explicit request by students in the first two cycles, was attempted by using and comparing two methods. (1) A benchmark approach embedding the principles of commercial web-based peer assessment software [15], whereby each student in a group marks their team-mates' (and their own) performance; this marking is then combined with the overall group grade to provide each student with an individual mark. This approach was implemented through a simple datasheet-based assessment filled in by individual team members and (manually) collated by the lecturer at the end of the course.
(2) A novel and more implicit approach consisted in arranging for teams to pay their employees fictitious monthly "wages" and using these wages for real mark moderation. This was accomplished by matching each (real) GBP 1 the team was assigned by the VCB for project development with a (fictitious) GBP 1000 towards payment of employees' wages. The monthly wage per team averaged ~GBP 1300 over the 7 months of operation. Teams were encouraged to pursue competitive salary policies and made aware that the wage progression would be taken as an individual performance indicator. The wages were approved monthly by all team members based on a majority consensus. The evolution of individual wages was not only continuously monitored to make sure it did not exceed the amount of resources received from the VCB; it was also used by the lecturer to calculate a "wage progression bonus" and ultimately assign each student an individual mark. Thus, ΔMn, i.e., the mark variation, or bonus, for student n within any team could be positive, negative, or null according to:
\Delta M_n = p_1 \frac{w_{Fn}}{w_{1n}} + p_2 \left[ \frac{\sum_{m=1}^{F} w_{mn}}{\frac{1}{T} \sum_{m=1}^{F} \sum_{n=1}^{T} w_{mn}} - 1 \right]    (1)
where p1 and p2 are weighting factors, wmn is student n's wage at month m, F is the final month number and T is the number of team members; the underlying principle is to reward salary "uplift" with respect to both the individual's first month and the team average wage. Typically, F = 7, T = 10, p1 = 0.1 and p2 = 10 lead to ΔMn ≤ 10 for all teams.
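A minimal sketch of how the bonus in (1) might be evaluated is given below, assuming the reconstruction of the equation above and a simple month-by-member wage table; the function name, data layout and toy figures are illustrative rather than taken from the module.

def wage_bonus(wages, student, p1=0.1, p2=10.0):
    # wages[m][n] is the fictitious wage of member n in month m (m = 0 .. F-1).
    # First term: uplift of the student's final-month wage over their first-month wage.
    # Second term: the student's total wages relative to the team-average total.
    F = len(wages)      # number of months
    T = len(wages[0])   # number of team members
    first_term = p1 * wages[F - 1][student] / wages[0][student]
    student_total = sum(wages[m][student] for m in range(F))
    team_average_total = sum(sum(month) for month in wages) / T
    second_term = p2 * (student_total / team_average_total - 1)
    return first_term + second_term

# Toy example: 3 months, 3 members; member 0 receives steadily rising wages.
wages = [[1200, 1300, 1300],
         [1300, 1300, 1250],
         [1450, 1300, 1200]]
print(round(wage_bonus(wages, student=0), 2))  # ~0.34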

2.4. Practice Evaluation Approach

The first opportunity for practice evaluation occurred through observation of learner response to feedback, particularly following the milestone deliverables. This approach was specifically used to tune aspects such as the amount and quality of guidance needed to improve presentation or technical reporting skills. Further critical analysis consisted in comparing performance indicators such as student turnout, effort in producing deliverables, and the degree of project completion for cohorts before and after the introduction of the practice. A third critical review method was applied by monitoring student progression through subsequent related modules such as the third-year individual engineering project [13]. Finally, the most direct critical review method for practice evaluation relied on a dedicated questionnaire that was distributed to students in the last session of the module. The questions featured in the latest questionnaire version can be seen in Figure 2 and Figure 3, along with the collected answers, which are analysed in Sections 3.1.1–3.1.8.

3. Results

3.1. General Observations and Response to the Designed Monitoring Initiatives

With a statistical basis of 200+ students over 4 years, it is difficult to conclude whether significant performance variations derived from the teaching approach or from differences in learners' skills. Nevertheless, an increase in overall engagement, measured in terms of attendance records, volume and quality of peer-to-peer comments and the percentage of technically advanced projects, could be detected across the 4-cycle span. Supervision of 25 such students suggested that further interventions were needed in the development of computer-based circuit design and analysis skills. Therefore, the technical examples presented during the following module iterations were refined to more effectively match the simulation challenges anticipated in the individual engineering project. However, by far the most enlightening insights into the real and perceived effectiveness of the practice came from the student responses to the questions in the dedicated feedback survey.

3.1.1. Question 1: Overall Method Evaluation

The question in Figure 2a was designed to encourage an overall evaluation of the followed approach and its effectiveness in ensuring learner engagement. Student response was predominantly positive, with more than half of them acknowledging the strides made in promoting core values such as creativity, independence and accountability. About 24% of the students advocated for a more prescriptive management style, a proportion that, interestingly, was more than 50% lower than in earlier cycles. This might be due to the additional set of regulations that were progressively enforced (e.g., through the VCB, bonus attribution and peer assessment systems). One comment in the open section of the question described the course as "one of the best organized lectures", which might indicate some success in designing a structured and yet minimally prescriptive module. Another comment expressed reservations about the availability of physical space for teams to store equipment and supplies, which is not only arguable but also largely beyond the lecturer's control.

3.1.2. Question 2: Skills that Were Improved by the Course

The question in Figure 2b sought to determine which skills learners felt were enhanced by the followed approach. A total of 44% of the answers indicated management and teamwork as the most enhanced skill, which is an expected and desirable outcome. A considerable fraction of students acknowledged improvement of presentation skills, which is also among the primary learning outcomes. The recognition of improvement in circuit design/analysis skills was less prominent (9%) and, to some extent, disappointing. One student commented that "(students) did not really expand their understanding of more intricate system beside the hobbistic knowledge of systems such as Arduino microcontrollers". Although it is true that training on new design techniques and circuit analysis was not a primary goal nor systematically pursued, the number of learners acquiring new technical skills through the followed hands-on approach might be somewhat higher than the survey suggests. Most teams faced and regularly reported circuit synthesis and software programming issues that they had never encountered before. The fact that most issues were successfully solved during the course could be taken as a measure of success of the followed approach, even in the absence of more structured training. It could also be argued that full appreciation of the skills acquired during the second-year team project module can only be achieved during the third-year individual project module.

3.1.3. Question 3: Guidance for Improvement of Presentation Skills

The question in Figure 2c tried to determine whether students deemed the training received in presentation skills to be adequate. A total of 64% of the students were satisfied with the followed approach, whereas 34% of them expressed discontent. Criticism was verbalized in a very constructive comment: "Some people had little presentation skills but learned them through the project. Most gained presentation skills, but a session on how to present would be very helpful for this course". Indeed, a tutorial session specifically focusing on the development of presentation skills was held, but possibly not attended by some students. The student dissatisfaction might indicate the need to increase the provision of similar sessions and/or to spread them out at critical points in the course (for example, before and after some MS presentations have been held).

3.1.4. Question 4: Desirable Features for an Interactive Assessment System

The question in Figure 2d was designed to single out a set of relevant characteristics for a novel, fair and effective peer assessment system according to learners. Obviously, the number of relevant features could be much higher than the three options that were given in this question. However, these features were selected to evaluate the level of appreciation for the prominent features already built into commercial systems such as WebPA [15]. The majority of preferences (38%) went to the possibility for students to review each other's performance and commitment, much in the way suggested by [16]. The option to differentiate individual performance within a team, the central feature in the system advocated by [16] and implemented by modern computer-based systems, also enjoyed relatively strong but not overwhelming support (35%). Finally, significant appreciation (25%) was shown for the option to enable the instantaneous assessment of teamwork, for example through mobile-phone-based real-time reviewing. This last option seems suitable for assessing teamwork or even individuals during group presentations, and could rely on available technologies such as Socrative [17]. Nevertheless, the instantaneous review approach is potentially prone to the same biasing issues that some students decried.

3.1.5. Question 5: Impact of the Venture Capitalist Board System

The question in Figure 3a probed the impact of the first major novelty introduced in this action-based project: the introduction of a VCB system to allocate the School project funding. While more than 57% of the students recognized the system's merit in promoting involvement and accountability, about 6% of them took the opposite view. The key to interpreting such discontent is offered by the answers to the following question and by the 6% of students (not necessarily the same as the complaining 6%) who provided additional feedback in the open section of the question. The most critical students stated that the fundamental flaw in the VCB system is that some teams "were biased against other teams" and that "it is an overall good idea, as it requires more interaction, but unfortunately ruins other people's grades, without giving a valid reason". The latter comment is particularly interesting as it seems to conflate the funding attribution and marking systems, in spite of the lecturer's intent to keep them separate. It is arguable that a team with scarce funding attraction potential ends up performing poorly on the technical side and therefore earning low grades. However, in terms of sheer funding availability, the difference between the most and the least successful teams turned out to be below 20% and was lower than the spread in team grades.
Another comment pointed out that bias seemed to occur in the positive direction, with VCB members rating teams "better than they were", although no further explanation or statistical proof was provided. Another student finally observed that "it is hard to give teams a relevant amount", which is in line with the lecturer's experience. Team presentations, the primary tool in attributing team funding, might at times have failed to capture team progress and effort. Yet, besides refining team presentation skills, the MS evaluation system is believed to have played a crucial role in developing the assessment capacity of both team and VCB members.

3.1.6. Question 6: Effectiveness of the Feedback System

The question in Figure 3b probed the perceived effectiveness of the feedback received from peers as well as from the lecturer. Exactly 50% of the answers indicated that feedback coming both from peers and from the lecturer was useful. A total of 6% of students indicated that only the feedback from other teams was useful, which underlines a discontent with the lecturer's performance, although not to an alarming extent. More interestingly, one third of the students indicated that only the lecturer's feedback was useful, which reinvigorates the earlier point about some students distrusting other students' judgement. Some students were a bit more appreciative, saying that other teams' comments were constructive and helpful "only sometimes". One student expanded: "Most people did not listen presentations and asked questions explained in the same milestone presentation". This is a valid point: the lecturer himself needed to perform an extensive review of each team's presentation slides, beyond the presentation itself, for proper assessment. Although teams were given a week to complete the peer assessment exercise, it is possible that some of them assessed other teams' work solely based on the presentation time in class.

3.1.7. Question 7: Impact of the Team Restructuring Provision

The question in Figure 3c probed the impact of the second major and more sensitive novelty in the action-based project: enabling teams to lay off employees, in an attempt to limit the blaming of unengaged individuals for poor team performance. About 67% of the answers acknowledged the effectiveness of the measure in partially counteracting a lack of commitment from some members. One student pointed out that "it is an amazing idea but it needs to be done with lecturer approval. No student should be laid off without lecturer approval.". This interesting suggestion is slightly at odds with the laissez-faire principle of the course, but the lecturer agrees that some form of monitoring is needed, and it was indeed enacted. For example, monthly reports showing evidence of majority consensus on team-restructuring decisions were requested. The lecturer was also consulted before any layoff decision and provided advice without imposing his views. Furthermore, while predominantly approved by students, the power to lay off team members was not abused: over the 4 cycles only 10 students (~5%) out of the entire workforce had to relocate to a different team. Consecutive layoffs of the same employee were not observed, although measures to prevent them could be introduced if necessary. The percentage of students opposing the layoff power was 20%, which is significant but somewhat expected. A total of 9.3% of the students thought the measure did not make any difference, while two students checked both the praising and the criticizing options, possibly implying that the measure was beneficial for some but detrimental for other learners.

3.1.8. Question 8: Impact of Peer Assessment and Bonus System

The question in Figure 3d mirrored a similar enquiry in the previous year's questionnaire to evaluate the perceived fairness and effectiveness of the evaluation system. The learners' response was less overwhelmingly positive than previously (52% vs. 73%), while more students (30% vs. 3%!) criticized the evaluation system. The critical learners tended to coincide with those expressing negative feedback at question 2, reinforcing the impression that the funding attribution system got somehow associated with the evaluation system. Furthermore, these students made use of the open section of the question to verbalize concerns that "students mark others down on purpose" and that they achieved "consistent bad peer results with no relative reason for it". An even more articulate comment read: "In theory (and in practice most of the time) the peer+lecturer review is nice and works well. However peers may be biased for personal relationship (positively or negatively) which is unfair". Another commenter explicitly advocated for the lecturer to solely assess teamwork. This is obviously a non-solution to the bias problem, since lecturers are equally prone to bias, especially if they know the students. It was also suggested [3] and observed empirically in this course that student and lecturer assessments tend to be significantly correlated.
Nevertheless, the survey outcome pushed the lecturer to question whether the introduction of competitive funding to support team projects resulted in a new and unpredicted form of peer-to-peer bias. Because team success is (loosely) related to funding availability, and VCB members belong to different and competing teams, a scenario is conceivable where some members unfairly penalize other teams by attributing them incongruous funding in order to favour their own team. Such a phenomenon could not be observed directly, and the fact that the allocated funds were averaged across all VCB members' indications might have flattened out possible inadequacies. Hindle's suggestion [18] about separating the benefits of teamwork from the rigour of assessment might be implemented by having a VCB composed of members that do not belong to any team. However, it is questionable whether favouring assessment impartiality at the expense of hands-on and teamwork time would be beneficial.

3.2. Outcome of the Intra-group Peer Assessment Exercise: Traditional vs. Alternative Methods

The outcomes of the traditional peer assessment exercise, where students directly moderated each other's marks, and of the alternative one, where wages were used as an implicit measure of performance, are shown in Figure 4. The two methods achieved appreciable agreement, with a correlation coefficient of 0.81 using the standard Pearson product-moment definition [19]. The weighting coefficients in (1) can be tuned so that the intra-group mark spread falls within a predetermined range, which was conservatively set to 10 in the first iteration. Tuning of weighting coefficients is also a possibility in commercially established peer assessment software [15].
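Both the spread tuning and the quoted correlation involve only elementary computations; a possible sketch is given below for illustration only. It is not the actual processing behind Figure 4, and all names and figures are hypothetical.

import statistics

def rescale_spread(bonuses, target_spread=10.0):
    # Rescale intra-group bonuses so that their spread (max - min) does not
    # exceed the predetermined target, e.g. 10 marks.
    spread = max(bonuses) - min(bonuses)
    if spread <= target_spread or spread == 0:
        return list(bonuses)
    return [b * target_spread / spread for b in bonuses]

def pearson(x, y):
    # Standard Pearson product-moment correlation between two mark lists.
    mx, my = statistics.mean(x), statistics.mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    return cov / (len(x) * statistics.pstdev(x) * statistics.pstdev(y))

# Example: direct peer-assessment bonuses vs. wage-derived bonuses for one team.
direct = [3, -2, 0, 5, -1, 4, -3]
wage_based = [2.5, -1.5, 0.5, 6.0, -2.0, 3.0, -2.5]
print(pearson(direct, rescale_spread(wage_based)))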

4. Discussion

4.1. Issues Arising from Student Heterogeneity and Group Size

The present practice allowed students to form their own groups and refrained from any ability-streaming intervention. Lejk et al. [9] observed that allowing spontaneous group assembly might have the same impact as streaming, since similarly skilled students are likely to form groups with each other. Although the statistical basis for such a claim, or for any conclusion from the present practice, is unclear, observation of about 200 students over the last four years suggests that social and cultural homogeneity might act as a more powerful driver for team aggregation than skill level. Suggestions for constructing mixed-ability groups [3] appear sensible but difficult to implement for modules in which there is little chance to screen students' ability a priori.
Specifically, with respect to cultural heterogeneity, it has been demonstrated that the most diverse groups outperform culturally homogeneous groups when the tasks' duration and complexity are increased [20]. Additionally, diversity is associated with creativity and is widely appreciated in the kind of enterprise environment [21] that this application attempted to recreate. In the two most recent cycles, diversity was additionally promoted through a bonus system, i.e., by making cultural (and gender) diversity a key metric for the VCB to attribute the start-up funds.
Group size has been identified as another key element in developing an effective learning environment. Group sizes beyond six students have been deemed detrimental to individual motivation, task allocation, group decision making and commitment to undertaking peer assessment [10]. Unfortunately, the sheer number of students (≥50/year) and the impossibility of handling MS sessions with more than eleven presenting groups forced the present intervention to settle for an average team size of seven members. It should also be observed that the module was originally developed for smaller student volumes. The issues outlined in [10], notably the fact that only a minority of students were engaging in peer assessment within a group, were certainly observed in some, although not all, of the largest teams.

4.2. Issues Arising from Anxiety Due to Novel Assessment Techniques

Exposing students to assessment techniques they have not previously encountered, such as group presentations, has been reported to engender anxiety and impair learners' performance through a sense of bemusement and unfairness [11]. Such an emotional response was commonly observed during the first milestone group presentations, when some students were fairly new to the exercise. Measures were taken to familiarize the students with the process, such as giving a tutorial presentation outlining the key components of a good presentation, and allowing the first milestone (MS0) to be used for training rather than assessment purposes. It is also believed that a moderate amount of positive puzzlement is educationally valuable and should not be avoided [22] in a module that makes the development of presentation skills a primary learning outcome.

4.3. Issues Arising from Assessment of Individuals within Groups

Failure to identify individual contributions within groups has been linked to a decrease in individual effort [23] and to a strategic shift of commitment towards individually assessed modules. The authors of [24] described an effect in which the most hardworking students reduce their effort in order to avoid being taken advantage of by less committed "freeloaders". While such a negative effect could not be directly observed, complaints from hardworking students about free-riding teammates were indeed verbalized to the lecturer. A range of methods was hence investigated to overcome this issue.
An obvious solution is to limit the relevance of group marks while introducing separate assessments, possibly evaluating individually accomplishable tasks [25]. While the idea of isolating the learning benefits of group work from the rigour of individual assessment [18] is intriguing, this approach becomes impractical for high student volumes and densely populated assessment (e.g., MS) schedules. Moderation of individual marks based on additional knowledge, such as personal logs/portfolios/interviews, also tends to be impractical for large cohorts. A much more promising approach seems to be the so-called "Knickrehm method" [26], where students moderate each other's group mark, which is initially allocated by a tutor based on their inside knowledge of each individual. Such a mechanism was suggested to achieve greater perceived fairness, more collaborative behaviour, and a wider spread of marks [27]. It is also suitable for implementations leveraging recent technology-enhanced learning tools [28] with web-based systems [15] that enable anonymity and reduce administration burdens. Gibbs [3] suggested a variation of the method where students moderate each other's marks by applying sanctions against individuals who behave inappropriately, for example by not attending group meetings or not delivering on assigned tasks. The approach in the present report both builds on Gibbs' solution (through intra-group direct or wages-mediated peer evaluation) and takes it a step further (through the team self-restructuring provision).
With respect to the peer-assessment exercise, it should be noted that about 5% of the students did not engage in the traditional peer assessment exercise, whereas they did contribute to the wage attribution process. Indeed, such a process naturally feeds into the mini-enterprise learning narrative and is less likely to be perceived as an imposition from the module leader. Eventually, the ludic and yet educational value of the implicit peer assessment approach encouraged the lecturer to adopt it as the default intra-group peer assessment strategy in subsequent teaching cycles.
As to the more radical provision to empower groups not only with wage (and hence mark) modulation but also with the termination of uncommitted members, the purpose of such a measure was neither social Darwinism nor the punishment of freeloaders. It was rather to strip the freeloading critics of their number one complaint, and at times excuse, for not delivering on their group project. As much as it was experimentally ruthless, the proposed approach resulted in the practical benefits discussed in Section 3. It also showed the potential to overcome some reliability issues arising from gender bias (e.g., males favouring males) and social bias (e.g., friends not sanctioning friends) that can affect peer-moderated individual marking, provided that the groups are sufficiently heterogeneous.

5. Conclusions

A range of practical strategies for improving individual engagement and performance within groups has been tried out, evaluated, and discussed in the context of the 2nd year engineering team project module at Bangor University. Positive outcomes, such as an increase in average engagement and achieved technical proficiency, could be recorded. With respect to the didactically delicate provision to enable the laying-off of team members, its effectiveness in addressing the "freeloaders" issue appears potentially significant, provided that the regulatory environment prevents unilateral and arbitrary actions (e.g., the layoff of a member by a team's minority). The introduction of a Venture Capitalist Board for funding attribution appeared instrumental in promoting a meritocratic climate, although it cannot be considered immune to peer-assessment dysfunctions (e.g., more or less conscious bias) due to the competitive nature of funding. Another innovation consisted in using the team's internal wage distribution both as an implicit approach to intra-group peer assessment and as a performance indicator. This strategy achieved good correlation with a traditional, direct peer assessment method while being perceived by students as a more playful and acceptable means of evaluation. Therefore, it is tempting to conclude that intra-group peer assessment through wage assignment represents a viable strategy to address the well-known issue of evaluating individual contributions within group performance.

Author Contributions

Conceptualization, C.P. and I.P.; methodology, C.P.; validation, C.P. and I.P.; formal analysis, C.P.; investigation, C.P.; data curation, C.P.; writing—original draft preparation, C.P.; writing—review and editing, C.P. and I.P.; visualization, C.P. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Onyon, C. Problem-based learning: A review of the educational and psychological theory. Clin. Teach. 2012, 9, 22–26. [Google Scholar] [CrossRef] [PubMed]
  2. Backx, C. The use of a case study approach to teaching and group work to promote autonomous learning, transferable skills and attendance. Pract. Evid. Scholarsh. Teach. Learn. High. Educ. 2008, 3, 68–83. [Google Scholar]
  3. Gibbs, G. The Assessment of Group Work: Lessons from the Literature. 2009. Available online: http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.422.8600&rep=rep1&type=pdf (accessed on 1 January 2020).
  4. Dochy, F.; Segers, M.; Van den Bossche, P.; Gijbels, D. Effects of problem-based learning: A meta-analysis. Learn. Instr. 2003, 13, 533–568. [Google Scholar] [CrossRef] [Green Version]
  5. Moriarty, G. The Engineering Project: Its Nature, Ethics, and Promise; Penn State University Press: University Park, PA, USA, 2010; pp. 13–42. [Google Scholar]
  6. Rugarcia, A. The future of engineering education I. A vision for a new century. Chem. Eng. Educ. 2000, 34, 16–25. [Google Scholar]
  7. Mabrouk, P.A. Active Learning: Models from the Analytical Sciences; American Chemical Society Symposium Series; American Chemical Society: New York, NY, USA, 2007; Volume 970, pp. 34–53. [Google Scholar]
  8. Springer, L.; Stanne, M.E.; Donovan, S. Effects of small group learning on undergraduate Science, Mathematics, Engineering and Technology: A meta-analysis. Rev. Educ. Res. 1999, 69, 21–51. [Google Scholar] [CrossRef]
  9. Lejk, M.; Wyvill, M.; Farrow, S. Group assessment in Systems Analysis and Design: A comparison of the performance of streamed and mixed-ability groups. Assess. Eval. High. Educ. 1999, 24, 5–14. [Google Scholar] [CrossRef]
  10. Kerr, N.; Bruun, S. Dispensability of member effort and group motivation losses: Free-rider effects. J. Personal. Soc. Psychol. 1983, 44, 78–94. [Google Scholar] [CrossRef]
  11. Jessop, T.; Tomas, C. The implications of programme assessment patterns for student learning. Assess. Eval. High. Educ. 2017, 42, 990–999. [Google Scholar] [CrossRef] [Green Version]
  12. Bacon, D.; Stewart, A.K.; Silver, W.S. Learning from the best and worst student team experiences: How a teacher can make the difference. J. Manag. Educ. 1999, 23, 467–488. [Google Scholar] [CrossRef]
  13. Undergraduate Courses: 2019–20. Available online: https://www.bangor.ac.uk/computer-science-and-electronic-engineering/undergraduate-modules (accessed on 1 January 2020).
  14. Blackboard Help. Available online: https://www.bangor.ac.uk/itservices/lt/blackboard.php.en (accessed on 1 January 2020).
  15. Web-PA. Available online: https://webpa.lboro.ac.uk/login.php (accessed on 1 January 2020).
  16. Goldfinch, J. Further developments in peer assessment of group projects. Assess. Eval. High. Educ. 1994, 19, 29–35. [Google Scholar] [CrossRef]
  17. Socrative. Available online: https://www.socrative.com (accessed on 1 January 2020).
  18. Hindle, B. The ‘Project’: Putting student-controlled, small-group work and transferable skills at the core of a geography course. J. Geogr. High. Educ. 1993, 17, 11–20. [Google Scholar] [CrossRef]
  19. SPSS Tutorials: Pearson Correlation. Available online: https://libguides.library.kent.edu/SPSS/PearsonCorr (accessed on 1 January 2020).
  20. Watson, W.E.; Kumar, K.; Michaelsen, L.K. Cultural diversity’s impact on group process and performance: Comparing culturally homogeneous and culturally diverse task groups. Acad. Manag. J. 1993, 36, 590–602. [Google Scholar]
  21. Sims, P. The Montessori Mafia. The Wall Street Journal. 5 April 2011. Available online: https://blogs.wsj.com/ideas-market/2011/04/05/the-montessori-mafia (accessed on 1 January 2020).
  22. Wass, R.; Golding, C. Sharpening a tool for teaching: The zone of proximal development. Teach. High. Educ. 2014, 19, 671–684. [Google Scholar] [CrossRef]
  23. Van Dick, R.; Tissington, P.A.; Hertel, G. Do many hands make light work? How to overcome social loafing and gain motivation in work teams. Eur. Bus. Rev. 2009, 21, 233–245. [Google Scholar] [CrossRef] [Green Version]
  24. Houldsworth, C.; Mathews, B.P. Group composition, performance and educational attainment. Educ. Train. 2000, 42, 40–53. [Google Scholar] [CrossRef]
  25. Lejk, M.; Wyvill, M. Peer Assessment of Contributions to a Group Project: Student attitudes to holistic and category-based approaches. Assess. Eval. High. Educ. 2002, 27, 569–577. [Google Scholar] [CrossRef]
  26. Maranto, R.; Gresham, A. Using ‘world series shares’ to fight free riding in group projects. Political Sci. Politics 1998, 31, 789–791. [Google Scholar]
  27. Sharp, S. Deriving individual student marks from a tutor’s assessment of group work. Assess. Eval. High. Educ. 2006, 31, 329–343. [Google Scholar] [CrossRef]
  28. Fisher, A.; Exley, K.; Ciobanu, D. Using Technology to Support Learning and Teaching; Routledge: New York, NY, USA, 2014; pp. 104–138. [Google Scholar]
Figure 1. Marking approach and milestone progression for the module, leading to different final individual marks within the same team.
Figure 2. Questions 1 (a), 2 (b), 3 (c), 4 (d) and answer distribution in the module evaluation questionnaire filled by students at the end of cycle 3.
Figure 3. Questions 5 (a), 6 (b), 7 (c), 8 (d) and answer distribution in the module evaluation questionnaire filled by students at the end of cycle 3.
Figure 4. Team marks moderation using the direct intra-group peer assessment method and the novel wages progression.

Share and Cite

MDPI and ACS Style

Palego, C.; Pierce, I. Inspiring a Self-Reliant Learning Culture while Brewing the Next Silicon Valley in North Wales. Educ. Sci. 2020, 10, 64. https://doi.org/10.3390/educsci10030064

