Concept Paper
Peer-Review Record

Action Research to Enhance Inter-Organisational Coordination of Climate Change Adaptation in the Pacific

Challenges 2020, 11(1), 8; https://doi.org/10.3390/challe11010008
by Daniel Gilfillan 1,2,*, Stacy-ann Robinson 3 and Hannah Barrowman 4
Reviewer 1: Anonymous
Reviewer 2: Anonymous
Reviewer 3:
Submission received: 8 April 2020 / Revised: 13 May 2020 / Accepted: 21 May 2020 / Published: 27 May 2020
(This article belongs to the Special Issue Challenges: 10th Anniversary)

Round 1

Reviewer 1 Report

This is the first article I have reviewed in over 50 years of reviewing articles for journals that is for all intents and purposes perfect!

The authors present a fascinating project proposal which will in all likelihood be copied, more or less, in other territories.

Everything is very clearly expressed and the article is already ready to be prepared for publication.

Author Response

Reviewer 1 did not ask for any modifications to the manuscript.

Reviewer 2 Report

This manuscript outlines a potential experimental research design to examine the effects of virtual team structure on group performance and identity. The starting point of this proposed experiment is rivalry between regional organisations in the Pacific, notably SPC and SPREP. This competition for scarce funding, the authors argue, undermines the regional response to climate change. To overcome this problem, the authors suggest that virtual cooperation helps. More specifically, the authors suggest creating joint teams that collaborate virtually (given the locations of the two organisations, in New Caledonia and Samoa, respectively). Such virtual teams can turn rivalry into collaboration and foster a superordinate group identity, yet the effects are moderated by team structure, according to the manuscript. To test more specifically how virtual team structure affects group performance and identity, the authors describe an experimental research design, whereby four inter-organisational teams are created and tasked with developing a bankable regional adaptation project. Two of these teams are assigned a formal structure; the other two have no formal structure at the outset. After six months, the teams should have developed an adaptation project that the authors want to have evaluated for quality. Alongside interviews before and after the experiment, this outcome helps to understand the role of team structure.

In principle, I find the idea of a qualitative field experiment interesting, and I also find it interesting to write up the research design for publication, before the research has been undertaken. Yet, I have two major concerns about the research design, as well as two minor concerns about publication prior to implementation and the conceptual framework.

First, I am skeptical about what the experimental design can tell us about team structure, or to what extent the proposed design is appropriate for the research questions. The authors want to understand how virtual team structure (having a formal structure) affects group performance and identity (p. 6). To properly and systematically single out the effect of team structure, the authors would need a large-N quantitative design. Yet, given the context, this is impossible, and the authors instead opt for a qualitative field experiment, with a very small sample size: the authors propose 4 teams, two with a structure, two without. The authors might find out, based on interviews with team members, how group dynamics evolved and to what extent the pre-assigned formal team structure was considered useful or not, but I don’t think one can draw any robust conclusions about the role of team structure; any effects might just be random, due to personality or other factors that cannot be controlled for in the proposed design. Even the suggested personality tests will not help to systematically control for such context factors. Four is just too small a sample size to clearly identify any systematic variation, or explanations for such variation.

My second concern exacerbates this shortcoming. I am very skeptical about the feasibility of the proposed design. The experiment is to run for six months, in which 24 staff members will work on developing an adaptation project, of which only one will be submitted to the Green Climate Fund. The research team suggests contacting the heads of the two organisations to secure collaboration, but I don’t think any organisation will agree to invest so many resources in a research project, much less organisations that have scarce human and financial resources. The compensation that the authors propose to pay cannot really make up for the time lost. This key concern aside, I am also doubtful about the extent to which the selected 24 staff members would agree to participate in the experiment. I don’t see any specific incentives for these people to devote substantial amounts of their work time to this experiment, including participating in personality tests and repeated interviews. Even if the teams could be set up, these would not be independent. As the authors correctly point out, the team members within each organisation will meet in corridors and in other projects, and hence inevitably talk about their different projects, which means that the end result will not be 4 different adaptation projects that can be evaluated separately. It may also happen that team members leave their job, leave the project, or leave the experiment, which may additionally affect group performance and identity.

Finally, I am also skeptical about publishing an experimental design prior to its implementation. The participants are likely to become aware of the publication, and read it, thus potentially compromising their participation in the experiment. If participants are aware of the purpose and expected outcomes of an experiment, they are likely to behave differently, I think.

Additionally, I think the theoretical framework needs to be improved. The authors turn to a variety of theories, but little is done to tie these different components into a single framework. Moreover, it seems that the experiment mainly aims at understanding virtual team structure and superordinate group identity. It is thus unclear how regionalism, neofunctionalism and complex system thinking come into play. When describing the experiment, the authors should also refer back to their framework to highlight how the empirical design reflects the framework.

Overall, I think that the research question is interesting. I also think that there is value in creating virtual teams from different regional organisations to enhance cooperation and create adaptation projects that likely are of better quality than what individual organisations would be capable of producing. However, I am very doubtful about the suitability of the design for the research question at hand, as well as its feasibility. If the authors want to identify the role of team structure for group performance and identity, why not design a large-N experiment with student participants (e.g. from different universities within the Pacific region, or elsewhere), where you can control for all sorts of intervening variables and hence isolate the effect of team structure? It would be nicer to have “real” adaptation practitioners as participants, but this creates severe feasibility issues, as outlined above, in particular for an experiment that runs over six months and requires substantial time and effort from participants. If the authors found instances where organisations have collaborated, they could use qualitative data (like interviews) to better understand how this collaboration occurred, what barriers teams encountered, and how collaboration could be improved in the future. This may not be as ‘clean’ as an experimental design but seems much more feasible and realistic.

Author Response

Please see the attachment.

Author Response File: Author Response.docx

Reviewer 3 Report


Overall

I believe that this is an important research area and understanding how we can improve coordination between organisations to improve adaptive and transformational capacity is potentially of great interest, especially where resources are constrained and organisations are competing rather than collaborating.    

My comments may come across as discouraging – I do not mean them to be. Because I think you have identified an important area, I am keen to support it. However, I am not convinced that this is an appropriate approach to doing research, given it is focussed on 4 six-person teams across 2 organisations. I think you could achieve the goal of understanding how to enhance coordination between ROs on climate change adaptation more effectively in other ways, for example, an action research, co-enquiry type of approach. I think this has more potential for learning and really unpicking the interesting ways in which organisations like this, and the people within them, are working – what motivates and constrains them, and the good, perhaps historic, reasons they find it hard to collaborate at the moment. And then finding ways to reshape how they operate through reflection on their ways of working while doing the task (proposal writing). Using a ‘facilitating object’ such as the GCF proposal writing in the virtual teams is an excellent mechanism for learning more about this. For example, I would suggest taking an action learning approach and thinking about the difference facilitation makes (i.e. having 2 teams that reflect on the communication and ways of working in the team, in addition to undertaking the proposal creation, and 2 teams that don’t have facilitation support). Enquiring into what difference this makes would be interesting, but I don’t think making it an experiment helps – it feels too forced given the small number of participants involved, and there seem to be too many variables to control for.

More specific

The abstract drew me in with the way it outlined the wider problem, introduced the overarching theory and identified the focus (understanding how to improve coordination between ROs and how the structure of virtual teams might enhance this).

Getting into the text, I started to wonder ‘Why do an experiment?’ Why not take an action research approach and do an enquiry? To be completely transparent, I have a background in facilitating processes of change in organisations, so I come with a particular view of how organisations operate. Organisation Theory was originally based on mechanistic thinking (Taylor, 1911, for example), but organisations are no longer seen as machines and workers as units, so trying to force a research approach to understanding organisational processes into a ‘field experiment’ space seems odd to me. As you refer to in your research framework, complexity and systems thinking is part of how organisations are now viewed, i.e. organisations as ‘complex fields of human interaction and social processes’ (Caviccia, 2019); see also Ralph Stacey, Patricia Shaw, Barry Oshry and David Snowden. If you think of meaning in organisations being constantly constructed and reconstructed through engagement and conversation over time, I think that another kind of research approach is needed to understand what supports this to flourish and what gets in the way.

Section 4.4.1 Fieldwork: To me this would be an action research approach (see Reason and Bradbury, SAGE Handbook of Action Research), such as action learning or co-enquiry, which offers ways for people in organisations to inquire into their own practice, learn from experience and make sense of their actions, with the academic as a facilitator of the learning process. I really like the idea of getting 4 teams to create a bankable GCF submission (as an aside, why pick only one to be funded?), but this could be set within a wider process of dialogue and enquiry where you collaborate with the ROs as co-learners.

When you were describing how you would encourage participation in this section, I was thinking that if I were the RO, I would be wondering ‘who are these people? What do they know about us?’. Whereas if you came in and explained what you were trying to do (the goal of enhancing collaboration and coordination and working together on common areas of interest on climate change), and then asked to learn alongside them about how this could be enhanced, I would imagine being more intrigued and open to learning more. Imposing your field experiment as it is currently described seems a mistake. You are trying to understand what helps and what gets in the way from a small group of people. Some of your findings may have wider transferability, but using such an experimental design does not make sense to me.

You also have to be really careful (hence my ethical concerns) with how you use something like the DiSC assessments. There is a lot of literature suggesting that, whilst personality/performance tests are useful for starting a conversation in, say, a coaching setting, you have to be careful about what conclusions you draw from them. Have you yourselves been trained in how they can be applied, so that you understand what this lens illuminates and what it doesn’t? I am more familiar with the MBTI, but I know that it is not a static measurement. Were you planning to tell the participants their label? How exactly were you going to use it in your analysis?

I am also wondering why you need this kind of information. How will getting DiSC assessments of your participants (and labelling them?) help you understand how to reach the goal of your research – which I am taking to be the use of virtual teams to improve coordination of ROs? Given that there will be 4 teams of 6 people, what kind of transferable lessons are possible – about virtual team structure and coordination – from whether you have 2 Ds, an I and an S in one team and 4 Cs in another?

Section 3.3 on complex systems – I found this confusing. Are you saying that a regional system, containing other systems and with subsystems nested within them, makes up a complex system? In my mind this could be complex, but it could also be complicated. I think this is an important distinction because in complicated systems, I would argue, there are potential ‘solutions’ (so you can have ‘best practices’), but in complex systems there are no solutions, only ways to improve the system (that may only be knowable in hindsight). I like the work of David Snowden and the Cynefin framework on this. I also do not understand the sentence at line 215 starting ‘These include the broad goals of sustainability…’. What is the ‘these’ in this sentence? Complex systems, per se, do not include these goals.

Line 231: I don’t understand the sentence ‘The level of coordination is influenced by the quality and characteristics of organisational inputs and components (e.g. physical environment and individuals) as well as linking processes and mechanisms.’ Seeing individuals (members of staff?) as inputs to an organisation and judging their quality does not seem a helpful way to understand how to improve coordination between organisations, which, I would suggest, is more about what enables good relational working and trust building. ‘Organisational inputs and components’ appears a few times, and I think it needs to be explained or removed and a new framing of organisational operation considered.

Line 538. Why is only one of the 4 proposals going to be allowed to be submitted to the GCF? Why is this competition between the teams seen as helpful? Couldn’t there be a set of criteria whereby, if you score highly enough, you get to submit – taking away the need to be competitive (which may reduce future collaboration potential)?

Line 616. ‘Field experiments are the gold standard.’ Having looked at the reference, I see this quote is from someone who wrote a book on field experiments in organisations. So he is possibly biased?

Line 622 – Is this sentence saying it limits the potential for unexpected results and unintended developments? Or that it doesn’t? I may be misunderstanding this, but wouldn’t it be better to have the opportunity to learn about the unexpected and unintended – then you get to have your assumptions challenged and learn as a result.

Line 672: the quote from Fjermestad and Ocker – high-performing virtual teams whose leaders facilitated improved communication between team members. I think the important word here is ‘facilitated’. This bears on the point you go on to make about ‘natural’ leaders taking charge: if they are ‘facilitating improved communication between team members’, I would argue that they are not ‘taking charge’. Facilitation of learning seems to be a critical aspect of what you are hoping to learn about. You could thus have 4 virtual teams, 2 of whom had a facilitator in the team whose role was to ‘facilitate improved communication in the team’, i.e. so that the team, as well as focussing on the content of the work (the task), also spent a proportion of the time reflecting on how they as a team are operating, what is working and not working, and how it could be improved (rather than having a ‘team leader’ role). There is a very good book on working with teams by Jarlath Benson who talks about this (‘Working More Creatively with Groups’).


Author Response

Please see the attachment.

Author Response File: Author Response.docx

Round 2

Reviewer 3 Report

I'm impressed by how quickly you have revised the text.  I am happy with your responses to my previous review comments.
