Article
Peer-Review Record

Vox Populi? Trump’s Twitter Page as Public Forum

Soc. Sci. 2020, 9(12), 226; https://doi.org/10.3390/socsci9120226
by Carles Roca-Cuberes * and Alyssa Young
Reviewer 1: Anonymous
Reviewer 2: Anonymous
Reviewer 3: Anonymous
Submission received: 21 October 2020 / Revised: 30 November 2020 / Accepted: 3 December 2020 / Published: 10 December 2020
(This article belongs to the Special Issue AI and Journalism: Opportunities and Challenges)

Round 1

Reviewer 1 Report

Thank you very much for giving me the chance to review this article, which I have enjoyed reading.

The article investigates a very timely topic: the Russia Investigation and President Donald Trump's use of Twitter.

Honestly, there are very few comments I can make to improve this work, which is already complete and well structured.

The Introduction offers a clear background, the methodology used is clearly explained and appears suitable to the RQs, and the RQs are perfectly addressed in the conclusions.

My only reservation has to do with this sentence: "This study concludes that Trump's tweets do not inform his Twitter audience's opinion on this matter and that Trump's repetition of catchphrases on the Russia Investigation did not have a measurable impact on his Twitter audience's responses".

On the one hand, yes, it does respond to the RQs. However, and acknowledging that the paper is about the audience's responses and not about Trump, would it be worth at least acknowledging that Trump's political discourse is populist, and thus that its main function is to capture emotional attention rather than to inform?

I insist: this is not the core of this article, so I invite the Author(s) to take it as a suggestion, but I believe that adding this concept to the theoretical framework will strengthen the soundness of the conclusions.

 

Minor observations:

1) Figure 7. Top 40 most-repeated terms

I suggest making this figure bigger because it is hard to read.

2) Line 461: something is marked in yellow (a typo?)

3) Table 2. Top 10 most-repeated terms.

What is the source?

 

 

Author Response

  1. My only reservation has to do with this sentence: "This study concludes that Trump's tweets do not inform his Twitter audience's opinion on this matter and that Trump's repetition of catchphrases on the Russia Investigation did not have a measurable impact on his Twitter audience's responses". On the one hand, yes, it does respond to the RQs. However, and acknowledging that the paper is about the audience's responses and not about Trump, would it be worth at least acknowledging that Trump's political discourse is populist, and thus that its main function is to capture emotional attention rather than to inform? I insist: this is not the core of this article, so I invite the Author(s) to take it as a suggestion, but I believe that adding this concept to the theoretical framework will strengthen the soundness of the conclusions.

The characterization of Trump’s political discourse as populist is now included in the literature review (lines 93-94), with 2 new references, and in the conclusion (lines 421-423).

  2. Figure 7. Top 40 most-repeated terms: I suggest making this figure bigger because it is hard to read.

Revised as requested.

  3. Line 461: something is marked in yellow (a typo?)

Revised as requested.

  4. Table 2. Top 10 most-repeated terms. What is the source?

The source is a frequency table built from the comments, the same as for Figure 7.
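For illustration only (the replies below are hypothetical examples, not the study's corpus), a frequency table of this kind can be produced with a few lines of Python:

```python
from collections import Counter
import re

# Hypothetical example replies; the study's actual corpus is not reproduced here.
comments = [
    "Witch hunt! No collusion at all",
    "The collusion is real, this is no witch hunt",
    "Fake news and no collusion",
]

# Lowercase each comment and tokenize on runs of letters/apostrophes.
tokens = []
for comment in comments:
    tokens.extend(re.findall(r"[a-z']+", comment.lower()))

# Frequency table of the top-N most repeated terms.
top_terms = Counter(tokens).most_common(3)
print(top_terms)  # [('no', 3), ('collusion', 3), ('witch', 2)]
```

In practice one would also filter out stopwords before counting, so that the table reflects substantive terms rather than function words.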

Reviewer 2 Report

This is an interesting study. However, the article needs some work to do the study justice.

Ethics: you need to add some detail about the ethical decisions undertaken before beginning this research.

For example, what processes did you go through to get permission to publish people's tweets? Have you considered the copyright implications? This seems to be within Twitter's T&Cs, but it should be noted and referenced.

Methodology: please provide more detail about the coding (e.g. how many coders, how disagreements were resolved), including an inter-coder reliability rating.

Results: Given the possibility that people who agree with a tweet RT it and people who disagree reply, the % of replies that are anti-Trump does not prove anything about the overall preferences of his audience.

Style:

1 -Decide how you are going to use single or double quotes and be consistent. Avoid using single quotes to distance yourself from a term. E.g. The following quote (lines 45 to 48) seems to use single and double quote marks excessively, with no consistency: “the ‘imagined audience’ is the audience that Twitter writers imagine they are tweeting to (Marwick & boyd, 2011a). Similarly, the concept of the ‘personal public,’ refers to the “follower network” that makes up the “meso layer” of Twitter communication, where the “macro layer” would be hashtag campaigns and the micro layer would consist of @replies (Burgess et al., 2013).”

If you’re putting quotes round a term to imply it has a special meaning in the context, you need to explain what this meaning is.

2 -Put the columns in charts in a logical order, e.g. pro/neutral/anti or anti/neutral/pro.

3 -Russia Investigation or Russia investigation -be consistent.

4 -DM needs to be spelled out the first time it is used.

Author Response

  1. Ethics: you need to add some detail about the ethical decisions undertaken before beginning this research. For example, what processes did you go through to get permission to publish people's tweets? Have you considered the copyright implications? This seems to be within Twitter's T&Cs, but it should be noted and referenced.

We have added a paragraph (lines 149-154) in which we address the reviewer's concern. We have also added a reference (Townsend & Wallace, 2016).

  2. Methodology: please provide more detail about the coding (e.g. how many coders, how disagreements were resolved), including an inter-coder reliability rating.

The method we have employed in our research is 'qualitative content analysis' and not 'quantitative content analysis'. While (inter-coder) reliability is a requirement in 'quantitative content analysis', it is not a necessary condition in 'qualitative content analysis'. As Schreier, an authoritative figure in qualitative content analysis research, points out, "the issue of reliability is a contentious one in qualitative research. Reliability (especially in the sense of intersubjectivity) will often be rejected on the grounds that meaning is highly context-dependent. According to this line of reasoning, to make the agreement between two coders a criterion in evaluating data analysis is to reduce the multiplicity of potential meanings to one meaning only. This is considered to decrease instead of increase the quality of the analysis." (Schreier, 2012: 169). As a result, we have not deemed it necessary to provide those details (such as an inter-coder reliability rating) that might be considered crucial for a quantitative content analysis, but that are not relevant in qualitative content analysis.
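For readers who do wish to compute the inter-coder reliability rating the reviewer mentions, the usual statistic for two coders is Cohen's kappa. A minimal sketch, using hypothetical codings rather than the study's data:

```python
from collections import Counter

def cohens_kappa(coder_a, coder_b):
    """Cohen's kappa for two coders who coded the same items."""
    assert len(coder_a) == len(coder_b)
    n = len(coder_a)
    # Observed agreement: share of items coded identically.
    p_o = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    # Expected agreement if the two coders assigned codes independently.
    freq_a = Counter(coder_a)
    freq_b = Counter(coder_b)
    p_e = sum(freq_a[c] * freq_b[c] for c in freq_a) / n ** 2
    return (p_o - p_e) / (1 - p_e)

# Hypothetical sentiment codings of six replies by two coders.
a = ["pro", "anti", "anti", "neutral", "pro", "anti"]
b = ["pro", "anti", "neutral", "neutral", "pro", "anti"]
print(round(cohens_kappa(a, b), 3))  # 0.75
```

Kappa corrects the raw agreement rate for agreement expected by chance; values near 1 indicate strong agreement, values near 0 indicate chance-level agreement.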

  3. Results: Given the possibility that people who agree with a tweet RT it and people who disagree reply, the % of replies that are anti-Trump does not prove anything about the overall preferences of his audience.

Replies and the percentage of replies that are pro- or anti-Trump provide a good measure of sentiment in the responses to Trump's tweets. Paul & Sui (2019) followed a similar procedure in their analysis of the public's emotional reaction in their replies to emotional tweets by candidates of the 115th US Congress (2017-2019). Joseph et al. (2019) found high levels of polarization in replies to Trump's tweets. That polarization was taken to reflect the overall preferences of his audience.

Joseph, K., Swire-Thompson, B., Masuga, H., Baum, M. A., & Lazer, D. (2019, July). Polarized, together: Comparing partisan support for Trump's tweets using survey and platform-based measures. In Proceedings of the International AAAI Conference on Web and Social Media (Vol. 13, pp. 290-301).

Paul, N., & Sui, M. (2019). I Can Feel What You Feel: Emotion Exchanges in Twitter Conversations between Candidates and the Public. Journal of Political Marketing, 1-21.

  4. Decide how you are going to use single or double quotes and be consistent. Avoid using single quotes to distance yourself from a term. E.g. The following quote (lines 45 to 48) seems to use single and double quote marks excessively, with no consistency: “the ‘imagined audience’ is the audience that Twitter writers imagine they are tweeting to (Marwick & boyd, 2011a). Similarly, the concept of the ‘personal public,’ refers to the “follower network” that makes up the “meso layer” of Twitter communication, where the “macro layer” would be hashtag campaigns and the micro layer would consist of @replies (Burgess et al., 2013).” If you’re putting quotes round a term to imply it has a special meaning in the context, you need to explain what this meaning is.

We now use single or double quotes consistently in the article.

  5. Put the columns in charts in a logical order, e.g. pro/neutral/anti or anti/neutral/pro.

Columns in charts have been reordered into a logical order, as suggested by the reviewer.

  6. Russia Investigation or Russia investigation - be consistent.

Revised as requested. We now consistently employ Russia Investigation.

  7. DM needs to be spelled out the first time it is used.

Revised as requested.

Reviewer 3 Report

Interesting work. I quote: "This article investigates Twitter replies to tweets concerning the Russia Investigation, published by the United States President, Donald J. Trump. Using a qualitative content analysis, we examine a sample of 200 tweet replies within the timeframe of the first 16 months of Trump's presidency to explore the arguments made in favor or not in favor of Trump in the comment". The article is interesting; however, the design and analysis methodology uses content analysis of 50 tweets comments, whereas there are "more than 72,000 comment replies". Big Data and algorithmic content analysis should be implemented in order to complement the work.

The research idea and investigation are interesting, but the sample and methodology should be complemented. I quote: "We employ qualitative content analysis (henceforth QCA) because it allows us to understand the range of ideas expressed in those comments." The sample and methods should be enlarged to more comments, and I do not agree that "content analysis is the best methodology for us to identify and understand sentiment from our sample"; it is a correct methodology, but it should be explained in terms of intercoding coherence, also enlarging the sample and implementing mixed algorithmic methods in order to draw more coherent conclusions.

Author Response

  1. The article is interesting; however, the design and analysis methodology uses content analysis of 50 tweets comments, whereas there are "more than 72,000 comment replies". Big Data and algorithmic content analysis should be implemented in order to complement the work. The research idea and investigation are interesting, but the sample and methodology should be complemented. I quote: "We employ qualitative content analysis (henceforth QCA) because it allows us to understand the range of ideas expressed in those comments." The sample and methods should be enlarged to more comments, and I do not agree that "content analysis is the best methodology for us to identify and understand sentiment from our sample"; it is a correct methodology, but it should be explained in terms of intercoding coherence, also enlarging the sample and implementing mixed algorithmic methods in order to draw more coherent conclusions.

The method we have employed in our research is 'qualitative content analysis' and not 'quantitative content analysis'. Addressing the reviewer's concerns would imply conducting entirely different research with a completely different purpose from the one we set out to accomplish in this article. We agree with the reviewer's comment that qualitative content analysis is not, necessarily, the best methodology to identify and understand sentiment from our sample, but it is perfectly adequate for the purposes of our research. While we understand that 'quantitative content analysis' objectives would be to achieve inter-coder reliability and to use large samples gathered through algorithmic methods, these are not requirements for 'qualitative content analysis' research. Regarding inter-coder reliability in qualitative research, this "will often be rejected on the grounds that meaning is highly context-dependent. According to this line of reasoning, to make the agreement between two coders a criterion in evaluating data analysis is to reduce the multiplicity of potential meanings to one meaning only. This is considered to decrease instead of increase the quality of the analysis." (Schreier, 2012: 169). The sampling strategy typically employed in qualitative research is purposive, and the subtype we use (quota sampling, or stratified purposeful sampling) is perfectly adequate for the purposes of our research. Sample size (200 tweet replies, and not '50 tweets comments', as the reviewer indicates) is also more than ample for a qualitative content analysis; further, sample size in purposive sampling is determined by thematic saturation, which is the point at which "no additional data are being found whereby the (researcher) can develop properties of the category. As [s/he] sees similar instances over and over again, [s/he] becomes empirically confident that a category is saturated" (Glaser & Strauss, 1967).
This is exactly how we proceeded because these are the 'rules' that apply in a qualitative content analysis.

Round 2

Reviewer 3 Report

May be published with some English proofreading.
