Article

The PILAR Model as a Measure of Peer Ratings of Collaboration Viability in Small Groups

Benjamin Heslop, Kylie Bailey, Jonathan Paul and Elizabeth Stojanovski
1 School of Medicine and Public Health, University of Newcastle, Callaghan 2308, Australia
2 School of Mathematical and Physical Sciences, University of Newcastle, Callaghan 2308, Australia
* Author to whom correspondence should be addressed.
Soc. Sci. 2018, 7(3), 49; https://doi.org/10.3390/socsci7030049
Submission received: 20 February 2018 / Revised: 13 March 2018 / Accepted: 13 March 2018 / Published: 20 March 2018

Abstract

The PILAR (prospects, involved, liked, agency, respect) model provides a dynamical systems perspective on collaboration. Two studies are performed using peer assessment data, both testing empirical support for the five Pillars that constitute members’ perceptions of collaboration viability (CoVi). The first study analyses peer assessment data collected online from 458 first-year engineering students (404 males; 54 females). A nine-item instrument was inherited from the previous year’s usage in the course and expanded with four additional items to elaborate upon the agency and liked Pillars. Exploratory factor analysis was conducted on student responses to test whether they thematically aligned to constructs consistent with the five Pillars. As anticipated, twelve of the thirteen items grouped into five components, each aligned with a Pillar, providing empirical evidence that the five Pillars represent perceptions of collaboration. The second study replicated the first using a retrospective analysis of 87 items included in the Comprehensive Assessment of Team Member Effectiveness (CATME) peer assessment tool. The associated factor analyses resulted in five components, and conceptual alignment of these components with Pillars was evident for three of the five CATME components. We recommend a peer assessment instrument based upon PILAR as potentially more parsimonious and reliable than an extensive list of behaviours, such as that employed by CATME. We also recommend including items targeting inter-rater bias, aligned with the liked Pillar, which instruments such as CATME exclude.

1. Introduction

Egalitarian collaboration within hierarchical organisations leads to employee retention and organisational responsiveness (Baker et al. 2016). Long-term benefits to the organisation have been demonstrated by Appreciative Inquiry, an intervention that creates collaboration outside normal workflow processes, across organisational units, and between levels of hierarchy (MacCoy 2014). However, unless culturally embedded and hierarchically supported, collaboration outside normal workflows may eventually cease, as participants fail to see an impact for their efforts (Edmondson 2004; Leana and Van Buren 1999). Yet, it is not advisable to mandate or incentivise collaboration, since participants may cease collaborating once the mandate or incentive is removed (James et al. 2007; Wageman and Gordon 2005). To embed viable collaboration within the organisation, we postulate the importance of establishing a definition that serves both participants’ bottom-up, and policy designers’ top-down, requirements and perspectives. This research aims to obtain empirical evidence for a model of collaboration designed to measure collaboration viability (CoVi). Such a model may also aid the design of pro-collaboration organisational policies, based upon how collaboration is perceived by its members.

2. PILAR Model of Collaboration

Only a generically applicable theory of collaboration can reliably inform wider organisational policy. While numerous theorists have expressed their desire for a universal, generic model of collaboration (Salas et al. 2015), currently no agreed version exists (Keyton 2016), and some contend that such a model will never exist (Green 2015). Designed to be consistent with organisational psychology (Heslop et al. 2018a) and social psychology (Heslop et al. 2018c), the PILAR model of collaboration is a contender for universality.
PILAR considers that humans have an array of perceptions that guide their decision to engage in a group, or to withdraw when membership of the group is no longer in their interest (Wilson et al. 2013). A perspective of tension between the member’s pro-social engagement with the group and self-interested withdrawal indicates a potential evolutionary basis of collectivism and individualism (Huang and Bargh 2014). The team work engagement that each member feels as a result of assessing their peers, the external environment, and other factors influences their preparedness to commit personal resources to the team (Costa et al. 2014).
The PILAR model asserts that its Pillars (prospects, involved, liked, agency and respect) provide some of the basis upon which the typical collaborating actor is instinctively prompted to remain within, or leave, a team (Oakley and Halligan 2017). Since these perceptions will influence an employee’s willingness to collaborate, they should be considered in the organisation’s design of any policy or scheme designed to foster collaboration (Table 1).

2.1. Perception of Prospects

The prospects Pillar is the member’s perception of the likelihood that the collaboration will achieve its goal, and that the member will subsequently receive the anticipated benefit, for instance, produce from a community garden. When the member feels that their group is likely to fail, low prospects are experienced as uneasiness and foreboding. Even when the group itself is performing well, if the member suspects their share is at risk, then that member perceives prospects as low (Heslop et al. 2016).

2.2. Perception of Involved

The involved Pillar encourages two members to cooperate in providing knowledge or physical assistance to complete a task. It is experienced by the member as an openness to, and comfort working with, a specific colleague, which at high levels is experienced as enthusiasm to participate (Quinn and Dutton 2005). Lack of the involved perception is experienced as general trepidation and unease, due to the potential risks of cooperation, such as embarrassment at needing help, advice going unappreciated by its recipient, or a colleague becoming a competitor for expert status (Klein et al. 2004).

2.3. Perception of Liked

The liked Pillar is associated with belonging and security, whereas being disliked leads to feelings of social isolation and insecurity. Those who perceive they are poorly liked may reduce the unpleasant feeling by mending relationships, or if this is not practical or desired, leaving the group entirely. This behaviour is prominent for those who hold an identity linked to their group, who therefore cannot tolerate being disliked by in-group colleagues, and willingly engage in out-group exclusion (Meeussen et al. 2014).

2.4. Perception of Agency

The agency Pillar describes feeling empowered to suggest change to the collaboration, for example, a strategy change based upon foreseen dangers (Spânu et al. 2013) or suggesting that task responsibilities between members be adjusted.

2.5. Perception of Respect

The respect Pillar reflects the member’s perception of a colleague’s competence and character (Ibrahim and Ribbers 2009). High respect is faith and trust in colleagues’ dependability, compared to low respect, associated with distrust and vigilance (Ko 2010).

3. Hypotheses Related to Peer Assessment Data

Peer assessment is a survey of employees’ opinions of their colleagues, commonly implemented by organisations as an adjunct to supervisor assessment. Peer assessment data may exhibit themes reminiscent of the five Pillars because members’ perceptions of CoVi are highly dependent upon perceptions of their collaborators (Heslop et al. 2018d). The presence of themes within peer assessment that align with the five Pillars may therefore constitute empirical support for PILAR. Using peer assessment data, both studies in this article test the following two hypotheses.

3.1. Hypothesis One

Consistent with the effect of colleagues’ attributes on group affect (Barsade and Knight 2015), we hypothesise that all Pillars will be evident in peer assessment data, because such data indicate the respondent’s desire to collaborate with each assessed peer. For instance, a respondent will prefer to engage with a group populated by peers that they assess highly (Fulmer and Ostroff 2015). Since Pillars are also theorised to cumulatively predict CoVi, and hence engagement (Heslop 2018), peer assessment data are postulated to thematically align with perceptions of Pillars when averaged over all peers, within each Pillar category. For instance, a respondent’s disengagement from the group resulting from a single rude peer (liked) might be counterbalanced by another colleague’s (or even the same person’s) trustworthiness (respect). Thus, each member’s perception of each Pillar may be influenced by the behaviour and attributes of any peer. This leads to the first hypothesis: respondents will give each peer similar assessments on items that thematically align with the same Pillar.

3.2. Hypothesis Two

A further implication of PILAR is that the respondent’s decision to increase or reduce engagement in the group depends upon the relative importance they place on each of the five Pillars. Relative weighting will vary depending upon the collaboration context, and the personalities prevalent within the cohort (Breevaart et al. 2012; Salas et al. 2005). Weighting may vary according to the situation, for example, a military team focussing on prospects of surviving, compared to a gardening club wishing to maintain relationships (liked). Weighting of Pillars may also vary according to the member’s personality and psychological resources (Luthans et al. 2007); for instance, a confident member is less concerned with being liked, and those with less optimism in others’ intentions might be less interested in demonstrating agency (Heslop et al. 2018c). This leads to the second hypothesis: the personalities present in the cohort will predict the priority given to Pillars within the peer assessment data.

4. First Study

In the first study, both hypotheses were tested using an exploratory factor analysis of engineering student responses to the peer assessment instrument. To test hypothesis one, we predict that the factor analysis will be consistent with the thematic categorisation of peer assessment questions into the five Pillars. To test hypothesis two, we predict, based upon engineering students’ personality profile, which of the Pillar-aligned factor analysis components will explain the most variance in student response data.

4.1. First Study Method

4.1.1. Participants

In teams of three to six, engineering students from an Australian university participated in a design project as part of their 2016 first-semester coursework. At the end of the semester, 458 first-year engineering students (404 males, 54 females; mean age 20.1 years, S.D. 4.6 years; 91.9% Australian nationals) completed a peer assessment instrument for each of their teammates, using the SPARKPLUS online platform (Willey and Gardner 2010). The instrument is henceforth referred to as the PILAR instrument.

4.1.2. Instructions to Survey Participants

Before attempting the survey, students received written instructions on how to use the survey interface and how the group’s data might be used to adjust their mark. Students were cautioned by the lecturer that overestimating their own contribution at the expense of others may result in their evaluations being omitted from the analysis. Students were also advised that the total pool of marks available to the group members was fixed, and if they chose to rate another member highly, they were therefore in effect giving permission to the lecturer to take some of their own marks and award them to the high-performing member of the group. Students were also asked to mark ‘average’ behaviour at 70 out of a possible 100, and to use very low or very high marks for exceptionally poor or favourable teamwork behaviours, respectively.
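To make the fixed pool of marks concrete, the sketch below shows one plausible zero-sum redistribution scheme consistent with the instructions above; the function, weighting rule and numbers are our illustration, not necessarily the algorithm implemented by SPARKPLUS.

```python
import numpy as np

def redistribute_marks(group_mark: float, peer_ratings: np.ndarray) -> np.ndarray:
    """Zero-sum redistribution of a fixed pool of marks (illustrative only).

    peer_ratings: mean peer rating received by each member (0-100 scale).
    Because the weights average to 1.0, the group's mean mark is preserved:
    rating a teammate highly shifts marks toward that teammate.
    """
    weights = peer_ratings / peer_ratings.mean()  # 1.0 = average member
    return group_mark * weights

# Example: a four-member team whose project earned 70/100.
ratings = np.array([82.0, 72.0, 68.0, 58.0])  # mean rating each member received
marks = redistribute_marks(70.0, ratings)
print(marks.round(1), marks.mean())  # the pool mean stays at 70
```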

4.1.3. PILAR Peer Assessment Instrument

This research built upon a nine-item peer assessment instrument from the previous year’s application by the lecturer, Dr. Simon Iveson (Heslop et al. 2018d). While the original instrument contained items related to peers’ competence (respect), motivation (prospects) and teamwork (involved and liked), at the authors’ request, the instrument was expanded to include four additional items that aligned with agency, involved and liked (Table 2).
Although inheritance of the first nine, unmodified items prevented the authors from specifically designing questions to align with Pillars, data were nonetheless assessed for evidence of consistency with Pillar perceptions within this cohort. Q3, a measure of punctuality, was not considered to align logically with a Pillar, as it does not necessarily indicate a peer’s or respondent’s perception of the group. Apart from commitment to the project, punctuality is influenced by a range of factors, such as the peer’s organisational habits, cultural background, and schedule conflicts (Back et al. 2006; Basu and Weibull 2003). Q3 was therefore treated as a control question, and the factor analysis was expanded to six components to accommodate it. The purpose of Q3 as a control is to demonstrate that the components derived from the factor analysis align with the Pillars even when an item that does not conceptually align with a Pillar is included.

4.1.4. Removal of Outliers

As part of the standard instructions given with the instrument, students were instructed to rate the average collaborator as 70/100 for each item. We therefore contend that a rating of five or less out of 100 is likely influenced by inter-rater bias (Magin 2001; Viswesvaran et al. 2005). Inter-rater bias occurs where a personal relationship between peer and respondent biases the respondent’s ratings, so that ratings are made on a primarily subjective basis rather than on the peer’s objective behaviour. For this reason, we removed from the analysis any set of 13 ratings that included a rating of five or less, considering it highly unlikely that such a low rating is an objective assessment rather than a product of the respondent’s antipathy for the peer.
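Expressed as code, this screening rule is a one-line filter. A minimal sketch follows, assuming one row per respondent-peer pair with thirteen rating columns; the column names and data layout are hypothetical.

```python
import pandas as pd

# Hypothetical layout: one row per respondent-peer pair,
# columns Q1..Q13 holding the respondent's 13 ratings of that peer.
ITEMS = [f"Q{i}" for i in range(1, 14)]

def drop_biased_rating_sets(df: pd.DataFrame, floor: int = 5) -> pd.DataFrame:
    """Remove any 13-item rating set containing a rating of `floor` or less.

    With an instructed average of 70/100, such a low rating is taken to
    reflect inter-rater bias rather than the peer's objective behaviour.
    """
    suspect = (df[ITEMS] <= floor).any(axis=1)
    return df.loc[~suspect]
```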

5. First Study Hypothesis Predictions

5.1. Predicted Alignment of Items with Pillars

Hypothesis one predicts that factor analysis will be consistent with thematic alignment of instrument items and Pillars. Q1 and Q2 align with the respondent’s perception of group prospects, which are reduced by unmotivated and free-riding peers. Q4–Q6 align with the respondent’s perception of respect, because all three competencies relate to the respondent’s view of the peer’s abilities. Q7 asks if the respondent feels liked by the peer, based upon how comfortable their interaction was. Q13 considers whether a peer feels disliked, which may be inversely related to whether the respondent feels disliked by that peer. Q8 and Q11 align with the peer’s perception of agency, and whether they felt able to express an opinion or contribute to the discussion. Q9, Q10 and Q12 align with the peer’s perception of involved, in terms of whether they are willing to help, or be helped by, colleagues. For example, Q9 asks whether a peer asks for a colleague’s opinion, which necessitates the peer’s willingness to accept assistance.
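For reference alongside the data-screening sketch above, this predicted item-Pillar mapping can be stated compactly (the constant name is ours):

```python
# Predicted thematic alignment of instrument items with Pillars (Q3 is a control).
PREDICTED_PILLAR = {
    "prospects": ["Q1", "Q2"],
    "respect":   ["Q4", "Q5", "Q6"],
    "liked":     ["Q7", "Q13"],
    "agency":    ["Q8", "Q11"],
    "involved":  ["Q9", "Q10", "Q12"],
    "control":   ["Q3"],
}
```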

5.2. Predicted Component Priority Due to Cohort Personality

In their survey of 4876 engineers and 75,892 non-engineers, Williamson et al. (2013) discovered that engineers’ personality could be classified as more tough-minded (assess problems logically) and intrinsically motivated (seeking challenge, meaning and autonomy), but lacking assertiveness (ability to speak up), service orientation (meeting collaborator’s needs), emotional stability (resilience), extraversion (sociability), optimism (positive outlook), image management (control one’s presentation to others), visionary style (forward thinking) and work drive (dedication to project goals).
Considering these attributes of engineers, we posit that respect will be of primary importance because it requires logical assessment of colleagues’ competence, whereas the remaining assessment items are more qualitative. We consider that prospects will also be important, since the engineer’s intrinsic motivation will produce a preference for collaboration in which colleagues are also highly dedicated. It is difficult to be autonomous, find meaning and be challenged when colleagues require oversight because of their poor skills and motivation.
Conversely, lack of assertiveness, visionary style and optimism indicates that agency will be a lower-priority perception, and the corresponding component therefore less prominent. Taking optimism as an example, without believing that change is possible, there is no point suggesting it (Heslop et al. 2018b). A desire to be involved would require a service orientation and work drive; the latter because aiding colleagues places project goals above one’s own interests. Finally, we consider that liked would be important to those with extraversion and a concern for image management. Hence, within this cohort, we expect that respect and prospects will feature among the first two components, with involved, agency and liked among the latter three.

6. Results of First Study

Descriptive statistics and correlations between the 13 items are shown in Table 3. Each item’s mean is over 80, and the distribution of each item is negatively skewed. As expected, items were highly correlated (r = 0.67 to 0.80; the lowest being those involving component six, the control item Q3: r = 0.54 to 0.65), consistent with the positive feedback contended to exist between Pillars, noted previously.
Factor analysis was permissible, as the Kaiser–Meyer–Olkin (KMO) measure of sampling adequacy was above the recommended cut-off of 0.6 (KMO = 0.97) and Bartlett’s test of sphericity was statistically significant (p < 0.05). The number of factors was set at six, based upon the five Pillars plus a control component. Kaiser normalisation was applied, with Promax rotation chosen due to the high correlation coefficients (r > 0.6) between items (Brown 2009). Promax was also appropriate because Pillars are theorised to be correlated as a result of positive reinforcing feedback between four pairs (prospects and involved; involved and respect; liked and involved; and respect and prospects), with stabilising feedback between the remaining six pairs of Pillars (Heslop et al. 2018c).
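Readers wishing to reproduce this style of analysis could proceed as sketched below using the open-source factor_analyzer package; the package choice and data layout are our assumptions, not the software used in the original analysis.

```python
import pandas as pd
from factor_analyzer import (FactorAnalyzer, calculate_bartlett_sphericity,
                             calculate_kmo)

def pilar_efa(df: pd.DataFrame) -> pd.DataFrame:
    """Six-factor EFA with promax rotation on the 13 rating columns of df."""
    # Suitability checks: significant Bartlett's test and KMO above 0.6.
    chi_square, p_value = calculate_bartlett_sphericity(df)
    _, kmo_overall = calculate_kmo(df)
    assert p_value < 0.05 and kmo_overall > 0.6, "data unsuited to factor analysis"

    # Six factors: five Pillars plus the Q3 control component.
    # Promax (oblique) rotation, because Pillars are expected to correlate.
    fa = FactorAnalyzer(n_factors=6, rotation="promax")
    fa.fit(df)
    return pd.DataFrame(fa.loadings_, index=df.columns)  # pattern matrix
```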
The factor analysis converged in six iterations, and the resulting factors cumulatively explained 90% of the variance of student responses. Each of the resulting factors had strong scale reliability (Cronbach’s alpha > 0.8), indicating that the items making up each factor were closely related. Only items with factor loadings over 0.5 were considered for inclusion in each scale. Eleven of the twelve possible PILAR items aligned to one of the five components as predicted by PILAR (Table 4). The single exception was Q10’s loading of 0.40 (below the 0.5 cut-off) in component three; nevertheless, Q10 still has its largest loading aligned with involved. The non-Pillar control item regarding punctuality, Q3, dominated the sixth component. We therefore conclude that the first study supports hypothesis one, and that Pillars are a parsimonious classification of perceptions of collaboration.
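Scale reliability of the kind reported above follows the standard Cronbach’s alpha formula; the short sketch below assumes a DataFrame whose columns are the items loading on a single component.

```python
import pandas as pd

def cronbach_alpha(scale: pd.DataFrame) -> float:
    """Cronbach's alpha: k/(k-1) * (1 - sum of item variances / total variance)."""
    k = scale.shape[1]
    item_variances = scale.var(axis=0, ddof=1).sum()
    total_variance = scale.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_variances / total_variance)

# e.g. the respect-aligned scale: cronbach_alpha(df[["Q4", "Q5", "Q6"]])
```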

6.1. Discussion of First Study

Factor analysis of the PILAR data partially supported hypothesis one, that peer assessment data can be parsimoniously captured through the PILAR perceptions of collaboration. However, with the first component explaining almost three quarters of the variance, later components are susceptible to small perturbations in the data. Therefore, further work is required to develop an instrument that might generate data with more variance-balanced, but less correlated, components that can claim to represent the five Pillars. However, the extent of balance achievable may be limited by inter-rater bias, which Viswesvaran et al. (2005) estimated to explain 60% of peer assessment rating variance.
Hypothesis two concerned component order, and whether it suggests the teamwork priorities of the cohort. Capturing more variance may indicate that respondents placed greater importance on answering the component’s constituent items accurately. The first two components were respect followed by liked, which only partly supports hypothesis two, which predicted respect followed by prospects.
One potential explanation for the liked component being second priority is that inter-rater bias was captured by items aligned with liked. This may have been exacerbated by interpersonal conflict arising from engineers’ lack of emotional stability (resilience), extraversion (sociability) and image management (control one’s presentation to others) (Williamson et al. 2013).
The fourth place of the prospects component may have been a result of this cohort not being representative of professional engineers. The attrition rate of first-year undergraduate engineers is typically over 50% (Felder et al. 1998), implying that half of this cohort will choose a different career. Due to the Dunning–Kruger effect, this may have led to unusual interpretations of items aligned with prospects, since less-competent members evaluate peers’ performance less accurately (Schlösser et al. 2013). Since prospects-aligned questions in the instrument are more subjective than respect-aligned questions, they may have suffered greater inaccuracy from those students destined not to achieve an undergraduate engineering degree.

6.2. Limitations of First Study

There are numerous limitations to our approach that might be remedied in future. First, to reduce signalling that may have artificially induced respondents to answer in alignment with Pillars, the first six items would ideally be mixed among the remaining items rather than sequenced (Fowler 1995). Second, to reduce respondents superficially giving identical answers to similarly phrased items, items would ideally be phrased more distinctly; for example, the respect-aligned items Q4–Q6 share near-identical phrasing. Third, a future replication of this research would improve question design (Fowler 1995); for instance, Q1 and Q2 each combine multiple concepts in one item. A separate investigation of the PILAR instrument found that eight items were poorly designed, potentially leading respondents to disengage and hence reducing its reliability (Heslop et al. 2018d). This may have contributed to the emphatic capture of variance within the first component.
Another limitation was the mixing of peer- and respondent-based perceptions of collaboration within Pillars of the first study instrument (Table 2). A future study might consider focusing solely upon one locus of perception. For instance, in other work, we have proposed an instrument exclusively based upon the peer’s perceptions, designed to measure CoVi; we consider that this instrument may also be used for peer assessment purposes (Heslop et al. 2017). To address limitations of the first study, our second study compares its findings with those of a highly cited (114 citations) study that employed a similar, factor-analysis-based method to create a peer assessment instrument.

7. Second Study

This second study is a reconsideration of Loughry, Ohland, and Moore’s (2007) original approach to developing their Comprehensive Assessment of Team Member Effectiveness (CATME) peer assessment instrument. CATME is a well-established and widely used online instrument designed for group projects in educational settings (Ferguson et al. 2016). Re-examining hypothesis one, we consider whether the CATME components that Loughry et al. (2007) produced through factor analysis conceptually align with the five Pillars. To investigate such an alignment, we first make post-hoc ‘predictions’ by conceptually associating each CATME item either with a Pillar perception of collaboration, or otherwise with a non-Pillar, control construct.
CATME has been selected for this study because it suffers less from the limitations present in the PILAR instrument. CATME’s questions are well designed, as they are drawn from previously validated instruments. Additionally, we observe that CATME’s items are similarly worded. However, we are unsure whether thematically aligned CATME items appeared randomly or sequentially, since the original instrument given to respondents is not published.

Summarising the Approach Taken by Loughry, Ohland & Moore

Loughry et al. (2007) developed CATME by first formulating a list of 392 items drawn from pre-existing instruments, for example, measures of team potency and the Team Climate Inventory (Anderson and West 1998). Fifteen colleagues of Loughry et al. (2007) then reduced these 392 to a shorter list of 218 by removing poorly formed and unclear items. The 87 most important items were then identified by surveying 2777 students, who were asked to rate the importance of each peer behaviour to team success. Finally, and similar to the first study, 1157 students were each asked to rate a collaborator they remembered on each of the 87 items (listed in Loughry et al. (2007)).
The variance of these data was tested first via exploratory factor analysis, followed by confirmatory factor analysis of one-, five- and seven-component models. Five components best explained the variance; however, they exhibited high inter-correlation (r = 0.77 to 0.93), which the authors acknowledged required further investigation. While Loughry et al. (2007) found that five components were superior to one and seven, it is possible that the data could be better explained by fewer than five components, by six, or by more than seven.

8. Second Study Hypothesis Predictions

Predicted Alignment of CATME Items with Pillars

Our examination of the 87 CATME items suggests that 66 items align with four Pillars. The remaining 21 items align with leadership, which we considered to be a control. No CATME items align with liked, and only three align with agency: “Made important contributions to the team’s final product,” “Provided insights and ideas that improved the team project,” and “Made recommendations that improved the team’s performance.” Given this, we predict that three CATME components will separately align with involved, respect and prospects, and a control component will align with leadership. We cannot predict how the fifth component might be constituted, given the sparsity of agency-aligned items and the lack of liked-aligned items.

9. Results of Second Study

As expected, the items contained within two smaller CATME components clearly align with respect and prospects, while the largest component, with 30 items, aligns with involved (Table 5). Of the two remaining larger components, the one with 24 items aligns with all three Pillars just mentioned, and additionally agency. The medium-sized component, with 21 items, aligns with leadership, which we consider to be control items appropriately forming their own component.

9.1. Discussion of Second Study

9.1.1. Inter-Rater Bias Causing Intercorrelation

One limitation of study two is the sparsity of items aligned with liked and agency, which may also explain the high correlations between CATME components. Following on from the first study, in which liked ranked second and the first component explained over 70% of the variance, we consider that these correlations might have been a result of the lack of liked items. The PILAR instrument had two items related to liked, which were included in the second component, potentially indicating the Pillar’s importance.
No CATME items refer to the relationship between peer and respondent, whether from the respondent’s own, or the peer’s, perception. The lack of liked-aligned items may have encouraged respondents to distribute inter-rater bias felt towards the peer amongst the remaining items, rather than within liked-aligned items. This follows the rationale of Freud’s emotional displacement (Fenichel 1946): including items that specifically target inter-rater bias may reduce its effect on the rest of the items, because respondents have a safe, approved channel for their negative or positive affect toward the peer.
Relative to the first study, inter-rater bias in the second study may have been exacerbated by numerous factors. The respondent was rating a historical rather than a current colleague, so details of their behaviour may have become uncertain. Further, CATME’s 87 items constitute a long instrument, given the wide range of item phrasing adopted, which prompted its later reduction to 33 items (Loughry et al. 2007). Whether from exhaustion with a long instrument or from uncertain recall, it is understandable that inter-rater bias would increase correlation between components.
A further factor potentially reducing inter-rater bias is that the first study used 13 items repeated for multiple peers, whereas CATME had 87 novel items, which may have increased respondents’ cognitive load and thereby the influence of affect in rating peers (Morewedge and Kahneman 2010). Finally, the first study collected information on current rather than historical peers, presumably allowing sharper details to inform ratings. A shorter instrument, applied to current peers, and with some items aligned to liked, may therefore explain why intercorrelation between components was lower in the first study than in Loughry et al. (2007), where inter-rater bias may have pushed it higher.

9.1.2. Lack of Agency and Liked in the CATME Instrument

Rather than reflecting an inherent lack of importance of agency and liked, we consider this sparsity an artefact of the authors’ process for deriving the 87 items. Agency is a comparatively recent construct, initially proposed as participative safety before being popularised as psychological safety (Edmondson 1999). Therefore, agency may not have been strongly represented in the instruments from which the 392 items were derived. This sparsity may have been compounded by an introspective, conformist student cohort ascribing less value to peers’ agency when rating the 218 items to arrive at 87 (Arnett 2006). The CATME cohort comprised equal parts engineering and business students, neither of which is noted for the empathy that allows appreciation of others’ agency (Levenson et al. 1995).
In contrast to agency, liked is a relatively longstanding construct, originating as interpersonal attraction, an aspect of social cohesion (Back 1951), and in the negative sense, as interpersonal conflict (Exline and Ziller 1959). The lack of liked-aligned items may have been the result of prioritising behaviours rather than perceptions; of the five Pillars, we consider liked the most subjective and intangible, followed by agency. While the original list of 218 items is not published, selection bias in the original choice of items must also be considered. For instance, 30 items were extracted from Anderson and West’s (1998) 60-item Team Climate Inventory, which maps onto five components, two of which respectively align with agency (“participative safety”) and liked (“interaction frequency”), each with over ten constituent items.
There is no published list of the 392 items originally chosen by Loughry et al. (2007), nor of the subsequent list of 218 derived by their 15 colleagues. Hence, we cannot know at what point agency- or liked-aligned items were excluded. Nevertheless, their absence necessarily prevented any CATME component from aligning with agency or liked. More broadly, this absence, together with inter-rater bias, may explain why a single CATME component encapsulated a range of Pillars, and why the inter-correlation between CATME components was high.

10. Overall Discussion

We note that CATME focusses on peer behaviour, rather than on the perceptions of the group that influence peer behaviour, yet the perception-based approach to peer assessment may offer advantages. First, since perceptions are less diverse than behaviours, a shorter instrument is possible. Second, perceptions are not limited by natural ability, whereas the behaviours the respondent must evaluate are. Evaluating behaviours may also be more prone to inter-rater bias than evaluating peers’ perceptions, since the latter requires an empathetic rather than a judgemental perspective (Oakley and Halligan 2017). Finally, since behaviours may be difficult to calibrate between respondents, training is recommended before using a behaviour-based peer assessment instrument, to improve survey accuracy (van Zundert et al. 2010). By contrast, a perception-based instrument, if based upon familiar, universal perceptions, may not require respondent training before use (Heslop et al. 2017).
While behaviour-based peer assessment cannot ignore personality variation, a perception-based instrument founded upon universal perceptions must do so. In other words, for PILAR, personality is a confounder, whereas for CATME, perceptions of collaboration become confounders. For instance, a negative response to the item “Provided insights and ideas that improved the team project” might be due to a peer’s lack of imagination, or, from the Pillar perspective, to the peer not feeling they had agency. The item “Kept trying when faced with difficult situations”, if answered in the negative, might have arisen from the peer perceiving the group’s prospects as sufficiently poor not to warrant their continued investment (Graham and Sloan 2016). Many CATME items can be interpreted either as the respondent’s valid judgement of the peer’s behaviour, or alternatively as the peer’s behaviour being justified by their perception of the group.
PILAR has taken a different approach to creating an instrument, encapsulating social psychology theory directly rather than incorporating instruments currently in use. Since both approaches provide five constructs, we contend that inducing their nature directly from empirically validated theory is potentially more robust than utilising existing instruments, not only because of changing trends in the field (Green 2015), but also due to potential biases within CATME’s original creation (Simmons et al. 2013). We therefore contend that PILAR may offer a more parsimonious basis upon which to survey respondents’ assessments of peers on the team (Heslop et al. 2017). Indeed, if perceptions are causal to behaviour, it may be that CATME is largely measuring peers’ perceptions.

11. Conclusions

This article presents empirical evidence for the PILAR (prospects, involved, liked, agency, respect) model of collaboration. Over 400 engineering students participated in peer assessment as part of their course requirements. Factor analysis of their responses revealed almost-perfect alignment of the first five components with the predicted Pillar-aligned items. The sequence of components may indicate that the cohort prioritised respect, followed by liked. The latter being unexpected, we considered that either the cohort was not representative of engineers, or that the priority given to the liked-aligned component was due to its capture of inter-rater bias.
The second study only partially supports PILAR, demonstrating that three Pillars align with three components of CATME’s more extensively developed instrument. The lack of CATME items aligned with liked and agency, and the examination of only one-, five- and seven-component models, may have precluded potentially stronger endorsement of PILAR. We postulated that methodological biases, and historical trends in the literature, may have limited the number of items aligned with agency and liked, leading to high intercorrelation between CATME components.
Despite the respective limitations of each study, our findings encourage further exploration of perceptions of collaboration, whether the peer’s or the respondent’s, as a method of assessing collaboration viability (CoVi). Should future studies demonstrate alignment with Pillars, they will constitute further theoretical support for PILAR as a parsimonious model of collaboration. Such a universal model may not only enable organisations to measure the viability of their constituent collaborations with a brief instrument, but also provide guidance for developing policy that fosters egalitarian, voluntary and intrinsically motivated collaboration, and thereby innovation.

Acknowledgments

Simon Iveson from the Engineering Faculty, University of Newcastle collected the data, which he then anonymised before providing it to Benjamin Heslop. The School of Medicine and Public Health, University of Newcastle, supplied Heslop’s PhD scholarship.

Author Contributions

Benjamin Heslop designed both studies and wrote the draft manuscript. Benjamin Heslop analysed the data acquired from Simon Iveson. Elizabeth Stojanovski advised on statistical treatments and presentation of statistical results. Kylie Bailey advised on structuring the manuscript for publication. Jonathan Paul edited the manuscript for logical consistency.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Anderson, Neil R., and Michael A. West. 1998. Measuring climate for work group innovation: Development and validation of the team climate inventory. Journal of Organizational Behavior 19: 235. [Google Scholar] [CrossRef]
  2. Arnett, Jeffrey Jensen. 2006. Suffering, Selfish, Slackers? Myths and Reality about Emerging Adults. Journal of Youth and Adolescence 36: 23–29. [Google Scholar] [CrossRef]
  3. Back, Kurt W. 1951. Influence through social communication. The Journal of Abnormal and Social Psychology 46: 9–23. [Google Scholar] [CrossRef]
  4. Back, Mitja D., Stefan C. Schmukle, and Boris Egloff. 2006. Who is late and who is early? Big Five personality factors and punctuality in attending psychological experiments. Journal of Research in Personality 40: 841–48. [Google Scholar] [CrossRef]
  5. Baker, William E., Amir Grinstein, and Nukhet Harmancioglu. 2016. Whose Innovation Performance Benefits More from External Networks: Entrepreneurial or Conservative Firms. Journal of Product Innovation Management 33: 104–20. [Google Scholar] [CrossRef]
  6. Barsade, Sigal G., and Andrew P. Knight. 2015. Group Affect. Annual Review of Organizational Psychology and Organizational Behavior 21: 21–46. [Google Scholar] [CrossRef]
  7. Basu, Kaushik, and Jorgen W. Weibull. 2003. Punctuality—A Cultural Trait as Equilibrium. In Economics for an Imperfect World: Essays in Honor of Joseph E. Stiglitz. Edited by R. Arnott, B. Greenwald, R. Kanbur and B. Nalebuff. Cambridge and London: The MIT Press, pp. 163–82. Available online: http://papers.ssrn.com/sol3/papers.cfm?abstract_id=317621 (accessed on 12 March 2015).
  8. Breevaart, Kimberley, Arnold B. Bakker, Evangelia Demerouti, and Jorn Hetland. 2012. The measurement of state work engagement: A multilevel factor analytic study. European Journal of Psychological Assessment 28: 305–12. [Google Scholar] [CrossRef]
  9. Brown, James D. 2009. Choosing the Right Type of Rotation in PCA and EFA. Shiken: JALT Testing & Evaluation SIG Newsletter 13: 20–25. [Google Scholar]
  10. Costa, Patricia L., Ana M. Passos, and Arnold B. Bakker. 2014. Team work engagement: A model of emergence. Journal of Occupational and Organizational Psychology 87: 414–36. [Google Scholar] [CrossRef]
  11. Edmondson, Amy. 1999. Psychological safety and learning behavior in work teams. Administrative Science Quarterly 44: 350–83. Available online: http://www.jstor.org/stable/2666999 (accessed on 29 October 2014). [CrossRef]
  12. Edmondson, Amy C. 2004. Psychological safety, trust, and learning in organizations: A group-level lens. In Trust and Distrust in Organizations: Dilemmas and Approaches. Edited by Roderick Kramer and Karen Cook. New York: Russell Sage Foundation, pp. 239–72. [Google Scholar]
  13. Exline, Ralph V., and Robert C. Ziller. 1959. Status Congruency and Interpersonal Conflict in Decision-Making Groups. Human Relations 12: 147–62. [Google Scholar] [CrossRef]
  14. Felder, Richard M., Gary N. Felder, and E. Jacquelin Dietz. 1998. A Longitudinal Study of Engineering Student Performance and Retention V. Comparison with Traditionally Taught Students. Journal of Engineering Education 87: 469–80. [Google Scholar] [CrossRef]
  15. Fenichel, Otto. 1946. The Psychoanalytic Theory of Neurosis. London: W. W. Norton and Company. [Google Scholar]
  16. Ferguson, Daniel M., Chad Lally, Hilda Ibriga Somnooma, Olivia Murch, and Matthew W. Ohland. 2016. Using frame-of-reference training to improve the dispersion of peer ratings in teams. Paper presented at Frontiers in Education Conference, Erie, PA, USA, October 12–15. [Google Scholar] [CrossRef]
  17. Fowler, Floyd. 1995. Improving Survey Questions: Design and Evaluation. Thousand Oaks: SAGE Publications. [Google Scholar]
  18. Fulmer, C. Ashley, and Cheri Ostroff. 2015. Convergence and emergence in organizations: An integrative framework and review. Journal of Organizational Behavior 17: 1–20. [Google Scholar] [CrossRef]
  19. Graham, Alan, and Karlin Sloan. 2016. Resilience at Work: Negotiating a Dynamic World. Sloan Group White Papers. Available online: https://static1.squarespace.com/static/541bbbe3e4b07b5243b1543f/t/57acebb959cc684a27ecb697/1470950336028/RAWA+White+Paper.pdf (accessed on 15 March 2018).
  20. Green, Christopher D. 2015. Why Psychology Isn’t Unified, and Probably Never Will Be. Review of General Psychology 19: 207–14. [Google Scholar] [CrossRef]
  21. Heslop, Benjamin. 2018. Selfish engagement: Consilience of evolutionary theories? submitted for publication. [Google Scholar]
  22. Heslop, Benjamin, Kylie Bailey, Jonathan Paul, Anthony Drew, and Roger Smith. 2016. Collaboration Guidelines to Transform Culture. Interdisciplinary Journal of Partnership Studies 3. [Google Scholar] [CrossRef]
  23. Heslop, Benjamin, Elizabeth Stojanovski, Jonathan Paul, and Kylie Bailey. 2017. Are We Collaborating Yet? Employee Assessment of Peer’s Perceptions. International Journal of Human Resource Studies 7: 175–92. [Google Scholar] [CrossRef]
  24. Heslop, Benjamin, Jonathan Paul, Anthony Drew, Kylie Bailey, and Elizabeth Stojanovski. 2018a. Organisational Psychology Versus Appreciative Inquiry: Unifying the Empirical and the Mystical. AI Practitioner 20: 69–90. [Google Scholar] [CrossRef]
  25. Heslop, Benjamin, Elizabeth Stojanovski, Kylie Bailey, Anthony Drew, and Jonathan Paul. 2018b. Wellbeing through collaboration. submitted for publication. [Google Scholar]
  26. Heslop, Benjamin, Elizabeth Stojanovski, Jonathan Paul, and Kylie Bailey. 2018c. PILAR: A Model of Collaboration to Encapsulate Social Psychology. Review of General Psychology. in press. [Google Scholar]
  27. Heslop, Benjamin, Elizabeth Stojanovski, Jonathan Paul, Simon Iveson, and Kylie Bailey. 2018d. Respondent disengagement from a peer assessment instrument measuring Collaboration Viability. Australasian Journal of Engineering Education. [Google Scholar] [CrossRef]
  28. Huang, Julie Y., and John A. Bargh. 2014. The Selfish Goal: Autonomously operating motivational structures as the proximate cause of human judgment and behavior. Behavioral and Brain Sciences 38: 121–35. [Google Scholar] [CrossRef] [PubMed]
  29. Ibrahim, Mohamed, and Pieter M. Ribbers. 2009. The impacts of competence-trust and openness-trust on interorganizational systems. European Journal of Information Systems 18: 223–34. [Google Scholar] [CrossRef]
  30. James, Kim Turnbull, Jasbir Mann, and Jane Creasy. 2007. Leaders as Lead Learners: A Case Example of Facilitating Collaborative Leadership Learning for School Leaders. Management Learning 38: 79–94. [Google Scholar] [CrossRef]
  31. Keyton, Joann. 2016. The Future of Small Group Research. Small Group Research 47: 134–54. [Google Scholar] [CrossRef]
  32. Klein, Katherine J., Beng-Choing Lim, Jessica L. Saltz, and David M. Mayer. 2004. How Do They Get There? An Examination of the Antecedents of Centrality in Team Networks. The Academy of Management Journal 47: 952–63. [Google Scholar] [CrossRef]
  33. Ko, Dong-Gil. 2010. Consultant competence trust doesn’t pay off, but benevolent trust does! Managing knowledge with care. Journal of Knowledge Management 14: 202–13. [Google Scholar] [CrossRef]
  34. Leana, Carrie, and Harry J. Van Buren. 1999. Organizational social capital and employment practices. Academy of Management Review 24: 538–55. Available online: http://amr.aom.org/content/24/3/538.short (accessed on 3 June 2014).
  35. Levenson, Michael R., Kent A. Kiehl, and Cory M. Fitzpatrick. 1995. Assessing psychopathic attributes in a noninstitutionalized population. Journal of Personality and Social Psychology 68: 151–58. [Google Scholar] [CrossRef] [PubMed]
  36. Loughry, Misty L., Matthew W. Ohland, and DeWayne D. Moore. 2007. Development of a Theory-Based Assessment of Team Member Effectiveness. Educational and Psychological Measurement 67: 505–24. [Google Scholar] [CrossRef]
  37. Luthans, Fred, Carolyn Youssef, and Bruce Avolio. 2007. Psychological Capital: Developing the Human Competitive Edge. Oxford: Oxford University Press. [Google Scholar]
  38. MacCoy, David J. 2014. Appreciative Inquiry and Evaluation—Getting to What Works. The Canadian Journal of Program Evaluation 29: 104. [Google Scholar] [CrossRef]
  39. Magin, Douglas. 2001. Reciprocity as a Source of Bias in Multiple Peer Assessment of Group Work. Studies in Higher Education 26: 53–63. [Google Scholar] [CrossRef]
  40. Meeussen, Loes, Ellen Delvaux, and Karen Phalet. 2014. Becoming a group: Value convergence and emergent work group identities. British Journal of Social Psychology 53: 235–48. [Google Scholar] [CrossRef] [PubMed]
  41. Morewedge, Carey K., and Daniel Kahneman. 2010. Associative processes in intuitive judgment. Trends in Cognitive Sciences 14: 435–40. [Google Scholar] [CrossRef] [PubMed]
  42. Oakley, David A., and Peter W. Halligan. 2017. Chasing the Rainbow: The Non-conscious Nature of Being. Frontiers in Psychology 8: 1–16. [Google Scholar] [CrossRef] [PubMed]
  43. Quinn, Ryan W., and Jane E. Dutton. 2005. Coordination as Energy-in-Conversation. The Academy of Management Review 30: 36–57. [Google Scholar] [CrossRef]
  44. Salas, Eduardo, Dana E. Sims, and C. Shawn Burke. 2005. Is there a “Big Five” in Teamwork? Small Group Research 36: 555–99. [Google Scholar] [CrossRef]
  45. Salas, Eduardo, Marissa L. Shuffler, Amanda L. Thayer, Wendy L. Bedwell, and Elizabeth H. Lazzara. 2015. Understanding and Improving Teamwork in Organizations: A Scientifically Based Practical Guide. Human Resource Management 54: 599–622. [Google Scholar] [CrossRef]
  46. Schlösser, Thomas, David Dunning, Kerri L. Johnson, and Justin Kruger. 2013. How unaware are the unskilled? Empirical tests of the “signal extraction” counterexplanation for the Dunning-Kruger effect in self-evaluation of performance. Journal of Economic Psychology 39: 85–100. [Google Scholar] [CrossRef]
  47. Simmons, Joseph P., Leif D. Nelson, and Uri Simonsohn. 2013. Life after P-Hacking. NA—Advances in Consumer Research 41. Edited by S. Botti and A. Labroo. Duluth: Association for Consumer Research. [Google Scholar] [CrossRef]
  48. Spânu, Florina, Adriana Băban, Mara Bria, Raluca Lucăcel, Ioan Ştefan Florian, and Lucia Rus. 2013. Error Communication and Analysis in Hospitals: The Role of Leadership and Interpersonal Climate. Procedia—Social and Behavioral Sciences 84: 949–53. [Google Scholar] [CrossRef]
  49. van Zundert, Marjo, Dominique Sluijsmans, and Jeroen van Merrienboer. 2010. Effective peer assessment processes: Research findings and future directions. Learning and Instruction 20: 270–79. [Google Scholar] [CrossRef]
  50. Viswesvaran, Chockalingam, Frank L. Schmidt, and Deniz S. Ones. 2005. Is There a General Factor in Ratings of Job Performance? A Meta-Analytic Framework for Disentangling Substantive and Error Influences. Journal of Applied Psychology 90: 108–31. [Google Scholar] [CrossRef] [PubMed]
  51. Wageman, Ruth, and Frederick M. Gordon. 2005. As the Twig Is Bent: How Group Values Shape Emergent Task Interdependence in Groups. Organization Science. [Google Scholar] [CrossRef]
  52. Willey, Keith, and Anne Gardner. 2010. Investigating the capacity of self and peer assessment activities to engage students and promote learning. European Journal of Engineering Education 35: 429–43. [Google Scholar] [CrossRef]
  53. Williamson, Jeanine M., John W. Lounsbury, and Lee D. Han. 2013. Key personality traits of engineers for innovation and technology development. Journal of Engineering and Technology Management—JET-M 30: 157–68. [Google Scholar] [CrossRef]
  54. Wilson, David S., Elinor Ostrom, and Michael E. Cox. 2013. Generalizing the core design principles for the efficacy of groups. Journal of Economic Behavior and Organization, 90. [Google Scholar] [CrossRef]
Table 1. Definition of each Pillar.

Pillar | Member’s Perception of Collaboration
prospects | Your opinion of whether the group will succeed, and if so, whether you will receive your anticipated share of that success
involved | Your willingness to cooperate with colleagues, either providing or receiving assistance, in the form of knowledge or physical aid
liked | Your sense of popularity and security based upon your perception of colleagues’ warmth and affection toward you
agency | The permission you feel to suggest change to the group’s norms, processes, task allocation and strategy
respect | Your opinion of a colleague’s task-relevant competence, and general trustworthiness
Table 2. Peer assessment questions administered by the online PILAR instrument. For each question, alignment with Pillars, and locus of perception, is nominated.

Q | PILAR Instrument Item | Pillar | Locus of Perception
Q1 | How much work and effort has the person put into the project? Have they done their fair share and pulled their weight? | prospects | respondent
Q2 | How well did the person get their work completed by the agreed upon time? | prospects | respondent
Q3 | Did the person regularly attend team meetings on time? | control |
Q4 | How good is the quality of the person’s research and their understanding of what they have read? | respect | respondent
Q5 | How good are the person’s analytical and problem-solving skills? | respect | respondent
Q6 | How good are the person’s report writing and editing skills? | respect | respondent
Q7 | Did you find this person easy to interact with? | liked | respondent
Q8 | How well did the person contribute to discussion during team meetings? | agency | peer
Q9 | How well did the person encourage others and value their ideas? | involved | peer
Q10 | How well did the person assist others when asked? | involved | peer
Q11 | Did this person express their opinions with confidence? | agency | peer
Q12 | Did the person constructively challenge other people’s opinions? | involved | peer
Q13 | Was this person well-liked by the group? | liked | peer
Table 3. Mean and Standard Deviation of each item’s ratings, and Pearson correlations between items. All correlations are significant (p < 0.01) and each item has 2228 scores.

Q | Mean | S.D. | Q1 | Q2 | Q3 | Q4 | Q5 | Q6 | Q7 | Q8 | Q9 | Q10 | Q11 | Q12
Q1 | 82.9 | 11.9 | 1
Q2 | 82.9 | 12.2 | 0.83 | 1
Q3 | 85.8 | 11.4 | 0.64 | 0.62 | 1
Q4 | 83.3 | 11.6 | 0.83 | 0.74 | 0.60 | 1
Q5 | 83.1 | 10.6 | 0.78 | 0.74 | 0.60 | 0.84 | 1
Q6 | 82.6 | 11.8 | 0.79 | 0.71 | 0.56 | 0.82 | 0.79 | 1
Q7 | 85.9 | 11.3 | 0.69 | 0.63 | 0.55 | 0.70 | 0.69 | 0.67 | 1
Q8 | 84.0 | 11.6 | 0.76 | 0.69 | 0.59 | 0.77 | 0.79 | 0.74 | 0.77 | 1
Q9 | 83.8 | 10.8 | 0.68 | 0.61 | 0.52 | 0.68 | 0.73 | 0.66 | 0.76 | 0.76 | 1
Q10 | 84.1 | 10.8 | 0.76 | 0.70 | 0.60 | 0.77 | 0.76 | 0.74 | 0.73 | 0.78 | 0.77 | 1
Q11 | 84.5 | 10.4 | 0.68 | 0.64 | 0.56 | 0.71 | 0.76 | 0.67 | 0.68 | 0.80 | 0.73 | 0.75 | 1
Q12 | 81.9 | 11.4 | 0.69 | 0.62 | 0.52 | 0.72 | 0.76 | 0.70 | 0.65 | 0.75 | 0.75 | 0.74 | 0.76 | 1
Q13 | 85.6 | 11.5 | 0.72 | 0.67 | 0.59 | 0.72 | 0.73 | 0.69 | 0.83 | 0.77 | 0.75 | 0.76 | 0.71 | 0.68
Table 4. Pattern matrix (loadings > 0.5) resulting from exploratory factor analysis (six components). Cronbach’s alpha and percentage of variance are included. The control question (Q3) represents a sixth component.

Q | 1 respect | 2 liked | 3 involved | 4 prospects | 5 agency | 6 control (Q3)
Q1 |  |  |  | 0.58 |  |
Q2 |  |  |  | 1.03 |  |
Q3 |  |  |  |  |  | 1.00
Q4 | 0.86 |  |  |  |  |
Q5 | 0.63 |  |  |  |  |
Q6 | 1.10 |  |  |  |  |
Q7 |  | 0.96 |  |  |  |
Q8 |  |  |  |  | 0.54 |
Q9 |  |  | 0.84 |  |  |
Q10 |  |  | (0.40) |  |  |
Q11 |  |  |  |  | 0.96 |
Q12 |  |  | 0.75 |  |  |
Q13 |  | 0.79 |  |  |  |
Alpha | 0.93 | 0.91 | 0.85 | 0.90 | 0.89 |
Variance (%) | 73.24 | 5.24 | 4.06 | 3.34 | 2.32 | 2.10
Table 5. Five CATME components and the alignment of their constituent questions with Pillars. Also shown is the locus of perception of the CATME component (but not necessarily of all constituent questions).

CATME Component | No. Items | Pillar | Locus
Contributing to the team’s work | 24 | agency, respect, involved, prospects | Peer
Interacting with teammates | 30 | involved | Peer
Keeping the team on track | 21 | leadership |
Expecting quality | 6 | prospects | Peer
Having relevant knowledge, skills and abilities | 9 | respect | Respondent

