Peer-Review Record

Social Norms Based Eco-Feedback for Household Water Consumption

Sustainability 2021, 13(5), 2796; https://doi.org/10.3390/su13052796
by Ukasha Ramli
Reviewer 1: Anonymous
Reviewer 2: Anonymous
Submission received: 31 December 2020 / Revised: 2 March 2021 / Accepted: 3 March 2021 / Published: 5 March 2021

Round 1

Reviewer 1 Report

Comments for Authors 

“Social norms based eco-feedback for household water consumption”

Sustainability 

 

Major Comments: 

  1. Attrition 

Why such high attrition? What was the source of attrition across the 5 exclusion criteria? This substantially changes the concept of persistence in a repeated experiment. Roughly 2,000 households receive at least one mailer but only ~1,000 receive all five. When assessing the persistence of the program, how much is due to the shifting sample? If high users, and those whose water use is above 100% of their peers', are systematically removed, there may be “negative attrition” whereby the remaining sample is less receptive to treatment.

A related question is how households that drop out of the intervention are handled. Are these households dropped from the sample? If they are, roughly 50% of the treated households never transition to the post-treatment group when studying persistence. I am not sure how to disentangle true treatment dynamics from attrition in this setting. I recommend a more thorough quantitative analysis of attrition and some thoughtful discussion of how this relates to interpreting the persistence results.

  2. Randomization/Balance

The randomization does not look good, in particular in Figure 2. You say that the difference in pre-treatment mean consumption is not statistically significant, but the treatment group is 2% higher than the control group prior to the experiment and the p-value is still relatively low (0.087). This is roughly the size of the expected treatment effect in these settings, so the imbalance is of the same order of magnitude as the effects being estimated. Additionally, the imbalance appears to be largest in the summer, where most of the treatment effects occur.

A regression with controls and a dummy for treatment in the pre-treatment period might generate a significant difference between the experimental groups.

  3. Event study

I’m not entirely sure how Figure 3 was created.  The figure notes say a DID model is run for each month of the data.  This does not account for correlation in treatment effects between different treatment dates.  The preferred event study specification is to interact the treatment with the month dummies, before and after treatment. This generates all the parameters from one regression model. It also allows for a formal assessment of the balance prior to the intervention because all coefficients in the pre-treatment period should be roughly zero. Additionally, it is hard to view the event study graph: the CIs blend in with the grid lines. 

  4. Billing data

The data are not observed monthly but rather every six months; however, they are presented monthly. It would be helpful to understand the dynamics of the billing periods. Are they evenly distributed over the year? Are households whose meters are read in June similar to those read in August? Any given month can reflect two sources of variation: the seasonal variation in water use during the prior six months and the composition of households that have their meters read in that month.

  5. Quantile treatment effects

What model of quantile treatment effects (QTE) does the paper use? If the model includes any controls, there are two basic variants: conditional and unconditional. Conditional QTEs describe the underlying distribution after controlling for covariates, so if baseline consumption is a covariate, a household in the 90th quantile of the conditional distribution may not in fact be one of the top users – it simply uses more water than expected conditional on its baseline usage. The unconditional QTE is probably preferred because it focuses on the unconditional (original) consumption distribution and therefore better reflects the interpretation of high quantiles as high users. Language such as “top 20% of users where treatment effects are much larger” requires the use of an unconditional QTE.

Additionally, how are the standard errors calculated? The confidence intervals seem quite a bit smaller than the OLS confidence intervals.

  6. Conditional average treatment effects (CATEs)

I have similar questions about the CATEs. What model was used to estimate the CATEs? Is Figure 6 just plotting the raw data? The preferred model is similar to an event study: run a regression with dummies for each of the quantiles and then interact the treatment with each of these dummies. I would be surprised if this sample were powered to estimate over 200 parameters in one model, so I recommend using quartiles or deciles for the CATE analysis. The labels of this graph could be cleaned up and made more informative. Given the imbalance in the baseline period, I think it is important to estimate the CATEs in a formal model with controls to make them comparable to the ATE.

Minor comments: 

  1. The Figure 1 notes say the shaded areas represent confidence intervals; I do not see any shaded areas.
  2. I suggest updating the reference list. Some working papers on it have since been published (e.g., Carlsson & Jaime, 2015), and there are many other very similar interventions for water conservation that are not cited. There should be a more comprehensive review of the existing literature to show how this paper fits in and expands our knowledge.

 

Author Response

Thank you so much for these excellent comments. They were very enlightening and helpful. I hope I have addressed them all satisfactorily. 

I have made many changes to the paper in accordance with your comments. Most of these changes are marked in red font; the discussion section has been completely rewritten.

Attrition:

I have included additional explanation within the paragraph on 'data' to clarify the issue of attrition, including balance checks of attrition against treatment assignment.

While 50% of households may not have received the treatment throughout the programme, their consumption data were still observed unless they actually closed their accounts.

Data from 3,461 households were included in all analyses of the treatment effect.
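For illustration only, a minimal sketch of this kind of attrition balance check could look as follows; this is not the code used in the paper, and the file and column names (`households.csv`, `dropped_out`, `treated`) are hypothetical:

```python
# Attrition balance check: regress a drop-out indicator on treatment
# assignment. A coefficient on `treated` close to zero suggests that
# attrition is not differential between treatment and control.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("households.csv")  # one row per household
# dropped_out: 1 if the household stopped being observed / was excluded
# treated:     1 if the household was assigned to receive the mailers

model = smf.ols("dropped_out ~ treated", data=df).fit(cov_type="HC1")
print(model.params["treated"], model.pvalues["treated"])
```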

 

Randomization:

"A regression with controls and a dummy for treatment in the pre-treatment period might generate a significant difference between the experimental groups."

I may be misinterpreting what you are suggesting, but this is exactly how I have estimated the balance. I have made this clearer by adding a sentence explaining it. It should be noted, however, that the p-value for this estimate is less than 0.05 if I do not cluster the standard errors.
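For illustration, a minimal sketch of a pre-treatment balance regression of this kind, with standard errors clustered by household, might look as follows; this is not the code used in the paper, and the file and column names are hypothetical:

```python
# Pre-treatment balance check: regress consumption on a 0/1 treatment dummy
# using pre-treatment months only, with month fixed effects as controls and
# standard errors clustered at the household level.
import pandas as pd
import statsmodels.formula.api as smf

panel = pd.read_csv("monthly_consumption.csv")
pre = panel[panel["post"] == 0]  # pre-treatment observations only

balance = smf.ols("consumption ~ treated + C(month)", data=pre).fit(
    cov_type="cluster", cov_kwds={"groups": pre["household_id"]}
)
print(balance.params["treated"], balance.pvalues["treated"])
```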

 

Event study:

I have generated a new event study graph in accordance with the suggested preferred specification of using interaction terms. I have also included the pre-treatment months so that it is easier to make an assessment of the balance prior to the intervention. Finally, I have added caps to the CIs so that it is easier to distinguish them from the other vertical lines.  
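For illustration, a minimal sketch of an event-study specification of this kind, where all pre- and post-treatment coefficients come from a single regression, might look as follows; this is not the exact code behind the figure, and the column names are hypothetical:

```python
# Event study: interact the 0/1 treatment dummy with calendar-month dummies
# so that the treatment-control gap in every month comes from one regression.
# Pre-treatment interaction coefficients close to zero indicate balance.
import pandas as pd
import statsmodels.formula.api as smf

panel = pd.read_csv("monthly_consumption.csv")

event_study = smf.ols("consumption ~ C(month) * treated", data=panel).fit(
    cov_type="cluster", cov_kwds={"groups": panel["household_id"]}
)

# Interaction terms (named like "C(month)[T.2]:treated") trace out the
# monthly treatment-control gap relative to the omitted base month.
print(event_study.summary())
```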

Billing data:

I have included a sentence on how billing is handled in the paragraph on 'participants'. I do not have the billing cycle for each household, so I am unable to include it in the model. I have tried to acquire these data in the past, but when this project began the EU was rolling out GDPR, which meant that most companies were still very nervous about handling personal data. Billing data are related to geographical location, so they were seen as sensitive. I can try to make another request for these data, but, knowing utility companies, this can take months. I have included the lack of billing data as an additional limitation of the study in the discussion.

I have also included a sentence on why the data are presented as monthly rather than 6-monthly. Again, this is not within my control. I have tried to acquire the raw 6-monthly data, but Advizzo can only provide this for more recent months; they have somehow deleted the 6-monthly format and only store the data as monthly.

Quantile treatment effect:

You made an excellent point regarding the use of unconditional QTEs. I have rerun the analysis and produced new outputs and interpretations. I also had not previously clustered the standard errors for the QTEs, but have done so now. The standard errors are similar to that of the ATE (2.399).

The output is of course very different and so I have amended my interpretation accordingly. 
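For illustration, a minimal sketch of an unconditional QTE of this kind: under random assignment and with no covariates, a quantile regression of consumption on the treatment dummy alone estimates the difference in unconditional quantiles between treated and control households. This is not the exact estimator used in the paper, and the column names are hypothetical:

```python
# Unconditional quantile treatment effects: quantile regressions of
# post-treatment consumption on the 0/1 treatment dummy only, so that the
# quantiles refer to the original (unconditional) consumption distribution.
import pandas as pd
import statsmodels.formula.api as smf

panel = pd.read_csv("monthly_consumption.csv")
post = panel[panel["post"] == 1]  # post-treatment observations only

for q in (0.2, 0.5, 0.8, 0.9):
    fit = smf.quantreg("consumption ~ treated", data=post).fit(q=q)
    print(f"q={q}: QTE = {fit.params['treated']:.3f}")

# Note: statsmodels' QuantReg does not cluster standard errors directly;
# a household-level (cluster) bootstrap is one common way to obtain them.
```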

CATEs:

I have replaced the binscatter plot with a graph based on the preferred specification provided. I reduced the groups to deciles and generated the graph in a different way, which allowed me to create better labels. The results were quite different, and I have therefore rewritten the interpretations in line with this.
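For illustration, a minimal sketch of a decile-based CATE specification of this kind, interacting treatment with dummies for baseline-consumption deciles, might look as follows; this is not the code used to produce the figure, and the column names are hypothetical:

```python
# CATEs by baseline consumption: assign each household to a decile of its
# pre-treatment consumption, then interact the 0/1 treatment dummy with the
# decile dummies in the post-treatment data, clustering by household.
import pandas as pd
import statsmodels.formula.api as smf

panel = pd.read_csv("monthly_consumption.csv")

baseline = (
    panel[panel["post"] == 0]
    .groupby("household_id")["consumption"]
    .mean()
    .rename("baseline")
)
panel = panel.join(baseline, on="household_id")
panel["decile"] = pd.qcut(panel["baseline"], 10, labels=False)

post = panel[panel["post"] == 1]
cate = smf.ols("consumption ~ C(decile) * treated", data=post).fit(
    cov_type="cluster", cov_kwds={"groups": post["household_id"]}
)

# The `treated` coefficient is the effect in the base decile; each
# C(decile)[T.d]:treated interaction shifts that effect for decile d.
print(cate.summary())
```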

 

 

 

Reviewer 2 Report

The paper is a well-written piece of research looking at the effect of a social norms intervention on household water consumption. The topic is clearly of interest from the perspective of the behavioral and sustainability sciences. The literature section could greatly benefit from recent behavioral research on social-norm nudging. For instance (not mandatory but highly recommended):

Bicchieri, C., & Dimant, E. (2019). Nudging with care: The risks and benefits of social information. Public Choice, 1–22.

Lehner, M., Mont, O., & Heiskanen, E. (2016). Nudging – A promising tool for sustainable consumption behaviour? Journal of Cleaner Production, 134, 166–177.

Czajkowski, M., Zagórska, K., & Hanley, N. (2019). Social norm nudging and preferences for household recycling. Resource and Energy Economics, 58, 101110.

Kandul, S., Lang, G., & Lanz, B. (2020). Social comparison and energy conservation in a collective action context: A field experiment. Economics Letters, 188, 108947.

The sections devoted to materials and results are quite clear. However, subsection 3.3 is extremely sparse. Was no other type of information collected in this post-intervention survey? This of course raises the question of a pre-intervention survey. This is mentioned as a limitation of the study, but it should be better explained why the design ignored potentially influential factors such as the pro-environmental attitudes of the households.

Also, there is almost no mention of other socio-demographic controls: income level, type of family (number of children), etc. As for external factors, was there any change in the price of utilities in that period, or any other potential shocks affecting the consumption or income of the households?

The discussion section should be extended to explicitly describe theoretical and practical implications. Conclusions, limitations and future research should stand in an independent section.

 

 

Author Response

Thank you for these comments and the excellent list of recommended articles. I have integrated these into the paper and have also added some additional recent articles.

The changes I have made to the paper are in red font, except for the discussion section, which has been almost entirely rewritten.

Survey:

Unfortunately, as this survey is something the utility company conducts on a regular basis through a third party, I could only influence the addition of a few questions regarding recall and satisfaction. I have included a sentence in Section 3.3 to better explain this point.

Furthermore, I have added a longer explanation of why a survey was not conducted before the intervention either. Briefly, it is because it did not make commercial sense for the utility to do so, and I did not have any funding to conduct it myself.

Shocks:

I have included a further explanation as to why data regarding billing was not included in the analysis. This is simply because I did not have the data. 

I have tried to acquire these data in the past, but when this project began the EU was rolling out GDPR, which meant that most companies were still very nervous about handling personal data. Billing data are related to geographical location, so they were seen as sensitive. I can try to make another request for these data, but, knowing utility companies, this can take months. I have included the lack of billing data as an additional limitation of the study in the discussion.

 

Discussion/Conclusion:

I have split the two into separate sections and further expanded on both. Splitting them has allowed me to expand on the points within each section as you have demarcated them. Thank you for this suggestion; I feel the two sections are now more informative.

Round 2

Reviewer 2 Report

Thank you for the revisions. 

Author Response

Thank you for your comments!
