Article
Peer-Review Record

The Impact of Formal Decision Processes on e-Government Projects

Adm. Sci. 2017, 7(2), 14; https://doi.org/10.3390/admsci7020014
by Leif Sundberg * and Aron Larsson
Reviewer 1: Anonymous
Reviewer 2: Anonymous
Submission received: 19 December 2016 / Revised: 3 May 2017 / Accepted: 11 May 2017 / Published: 22 May 2017
(This article belongs to the Special Issue Decision Making: Individual and Organisational Perspectives)

Round 1

Reviewer 1 Report

The study analyses an interesting problem: the relation between the decision-making process leading to an ICT project and the project's eventual success. It collects and reports original data from Sweden, which is always a plus.

In particular, the authors try to ascertain the impact of undertaking a risk analysis, weighting, resource allocation, and stakeholder-based objective setting on the eventual success or failure of the project.

I do not understand, however, why simple correlations are used to analyse and report on the data. This creates a giant problem of omitted variable bias. Numerous features of an agency, including its size, budget, prestige, and experience with managing ICT projects, may influence both the existence of formal decision-making and the eventual success of the project.

Using a simple regression model with a sufficient number of controls representing these and other potentially omitted variables could easily solve this problem. In addition, using such a model would allow the authors to avoid an artificial 1m SEK cutoff that both significantly lowered the number of responses and seems arbitrary. I understand that, because of how the survey was constructed, broadening the scope of the survey to projects below 1m SEK (and then controlling for project size in a regression model) would mean that the 161 responses answering "no" to the >1m SEK question would have to be retaken.

But the effort is essential to the project's eventual success, given another indefensible approach that the paper takes and that must be fixed. In particular, the authors lump non-assessed projects and failures into one category. The reason is plain when you glance at Table 3: among the 61 projects above 1m SEK, only two were reported as unsuccessful. You obviously cannot conduct any statistical analysis on a group of two.

Unfortunately, the resulting FNA group, which consists of 15 non-assessed projects and only 2 failed projects, dramatically changes the conclusions that we can draw from this research. While the authors CLAIM to show the procedural differences in pursuing successful and non-successful projects, what they in fact show, by and large, are the procedural differences between assessed and non-assessed projects (as the bulk of the "failure" group is really non-assessed rather than failed projects).

When you read the results like that, they unfortunately become much less interesting: projects that are not assessed were also not conceived through formal decision making. Not surprising. If you are not formal ex post, you are also not formal ex ante. 

It may be that that's all that will come out of this research. But adding the 161 responses with smaller projects (and then controlling for project size) at least gives the authors hope that they will collect a sufficient number of real failures to compare. Alternatively, you can ascertain project success in other ways, for instance by asking a larger group of managers from each agency for their subjective opinion on the project's success. The data also include ongoing projects, which further limits the number of usable responses. Perhaps the 18 agencies (and more, if the 161 are added) can be asked whether there was another project that the agency pursued which is already finished. In other words, the agencies should be asked to talk about their largest COMPLETED project.


Specific comments: 

-- Attributing the concept of the "black swan," famously introduced by Taleb (2007), to Budzier and Flyvbjerg (who did apply it to the e-gov context) is an example of disciplinary parochialism and should be avoided;

-- Section 3 needs to be rewritten: it is not a good strategy to copy-paste the entire survey, which should appear in an appendix. Instead, you should describe the key aspects of your research design that are relevant to the discussion. You do not need to waste space on obvious statements such as "the results were then exported to spreadsheets";

-- Section 4 should be rewritten in line with the general comments. 


Author Response

Dear reviewer,

Thank you for the constructive feedback on our paper. We have revised the paper based on the comments in the review. The revisions are described in the enclosed file.

 

Best regards,

The authors


Author Response File: Author Response.pdf

Reviewer 2 Report

Review report for submission admsci-171012

A brief summary (one short paragraph) outlining the aim of the paper and its main contributions.

 

This paper investigates the association between the use of formal decision-making procedures in e-government projects and the outcomes of those projects. The article contributes to a variety of literatures, namely decision science as well as e-government and public administration, specifically implementation studies. It is a particularly notable combination of literatures that has so far been underused in addressing a very acute practical problem in the public sector, namely how to explain the failures (and successes) of the implementation of ICT projects in public organizations.


Broad comments highlighting areas of strength and weakness. These comments should be specific enough for authors to be able to respond.

 

Several strengths of the article need to be outlined:

1. The combination of the different literatures, which have so far tended to stay separate, is particularly notable.

2. The field research and collection of primary data in a particular type of organization is interesting and deserves a more prominent place in the text.

3. The potential for expanding the current research is very high.

4. It is commendable that the authors have included the research instrument in the text.

 

Suggestions for improvement:

 

1. It would be helpful to include a short section on the differences between public and private organizations and the consequences of those differences for project management and project implementation, as this would strengthen the core argument of the article.

2. It would be helpful to reflect in the conclusion on the consequences of the results for the discussion of public values and their importance for developing decision analysis in the cases investigated.

3. The theoretical association between formalized decision making and project success needs to be articulated more clearly. Formalized procedures do improve the transparency of project management, and transparency is a characteristic particularly important for public organizations due to accountability procedures.

4. It would be helpful to specify the differences (if any) between the achieved and desired project outcomes in order to explain the construct of a “successful project”.

5. It would be helpful to acknowledge and discuss the differences between measures of the perceived success of a project (through the survey) and measures that independently establish a project's success. The survey primarily measures perceived success, and that has consequences for the interpretation of the results.


Specific comments referring to line numbers, tables or figures. Reviewers need not comment on formatting issues that do not obscure the meaning of the paper, as these will be addressed by editors.

 

1. Lines 65-67: The authors only mention that there is limited literature, but it is worth insisting on the novelty of their approach. The contribution made by the article through combining these particular strands of literature needs to receive a more prominent place.

2. Lines 213-214: It would be helpful to specify whether there were any follow-up calls, reminders, or similar actions undertaken to increase the response rate. Do the authors know which positions the respondents occupied? Approaching one person per organization or per project can influence the responses; to what extent have the authors dealt with these issues?

3. Lines 213-214: What measures were taken to protect the privacy of the respondents and to secure the storage of the data?

4. Line 219: How did the authors deal with multiple answers in the analysis, for instance in lines 232-233 or 235-236?

5. Line 224: How did the authors deal with the choice of the largest project as the basis for the answers? While it is an understandable choice (given the assumed organizational capacity needed to deal with such a project), the size/volume of the project can vary, and this aspect may require some measures to deal with the consequences of this choice in the interpretation of the results.

6. Line 228: How did the authors deal with the combination of method and model in the formulation of the item?

7. Lines 264-265: Parameter “b” is missing in Table 1.

8. Lines 283-284: What consequences result from treating non-assessed projects as failures? Strictly speaking, they should be treated as “unknown”.

9. Line 335: It would be helpful to specify how the coding process took place and in which way the authors dealt with issues of inter-coder reliability.

 


Author Response

Dear reviewer,

Thank you for the constructive feedback on our paper. We have revised the paper based on the comments in the review. The revisions are described in the enclosed file.

 

Best regards,

The authors


Author Response File: Author Response.pdf

Round 2

Reviewer 2 Report

The revised version of the article provides clarifications in response to the earlier comments.

There are several aspects that need to be made explicit in the text, not only in the response:

- The reasons for combining decision models and methods in one item; just putting them together may have contributed to confusion among the respondents and could have caused issues in the interpretation of the answers.

- The way the authors dealt with inter-coder reliability, as some of the answers required interpretation.

- Line 268: it is not clear what the authors meant by "samples"; was it about "observations" or "answers"?

Author Response

Dear reviewer,

 

Thank you for your valuable feedback, which helps us improve the content of our paper. Below are the comments and the way we have treated them in the revised version of the paper:

---------------------

Reviewer's comment: The reasons for combining decision models and methods in one item; just putting them together may have contributed to confusion among the respondents and could have caused issues in interpretation of the answers.

 

Authors' answer: We have added a bullet about the use of method/model under "Limitations":

A formalized decision process involves a structured and systematic method, often accompanied by a model enabling or facilitating the use of the method. From the decision maker’s perspective, distinguishing between these two concepts is a less relevant issue. Since the purpose of this study was to identify the employment of formal decision processes, the concepts are not distinguished in the survey.

---------------------

Reviewer's comment: The way the authors dealt with inter-coder reliability, as some of the answers required interpretation.

 

Authors' answer: Since we did not adopt a structured content analysis on the open text, but focused on the quantitative data, we added the following information under 3. Materials and Methods:

Free text answers were considered short enough to be treated manually, and no extensive content analysis was made. The major part of the material is quantitative and suitable for statistical analysis. In the few cases where respondents used free text instead of a fixed option, the answer was judged to be ambiguous and removed as missing data.

---------------------

Reviewer's comment: Line 268: it is not clear what the authors meant by "samples"; was it about "observations" or "answers"?

 

Authors’ answer: We acknowledge that using the term "answer" in this case would improve the text and have made the appropriate changes.


Best regards,

The authors

 

