Article
Peer-Review Record

Greek Young Audience Perceptions and Beliefs on Different Aspects of TV Watching

Journal. Media 2024, 5(2), 500-514; https://doi.org/10.3390/journalmedia5020033
by Anna G. Orfanidou * and Nikos S. Panagiotou
Reviewer 1: Anonymous
Reviewer 2: Anonymous
Submission received: 2 February 2024 / Revised: 24 March 2024 / Accepted: 15 April 2024 / Published: 19 April 2024

Round 1

Reviewer 1 Report (New Reviewer)

Comments and Suggestions for Authors

Summary

Based on a self-report online questionnaire with 204 participants, the author/s explore the correlation between young people’s beliefs about television and their demographic and behavioural characteristics (educational level of parents, school performance, and television use). With their study, the author/s contribute to the argument that a higher degree of media literacy supports the ability to critically evaluate factual (?) television programs.

While the correlation analysis itself is well done, several aspects remain unclear or underdeveloped: first and foremost, the theoretical framing needs more exploration; second, some methodological choices are not explained (for example, the choice of items and the rationale behind it); third, the number of tables seems disproportionate in relation to the descriptive parts.

Review

As announced in the title of the article, the paper engages with the beliefs that Greek teenagers (15-18 years) hold about television and how these beliefs correlate with their parents’ education, their own school performance, and their daily viewing habits. While this is in itself an interesting question, the paper does not fully live up to its own ambition.

The first problem arises from the unclear use of the term “belief”. The paper neither provides a definition of the term nor positions itself within a disciplinary field that would allow a more precise understanding of the concept of “belief”. Several terminologies are used in the introduction, for example public opinion, cultural standard, ideology, and behaviour. References or explanations such as the value-belief-attitude system are missing. Similarly, the evaluative competency to attune one’s own beliefs to concrete programs is variously labelled critical thinking, critical reflection, or identification of news and fake news. A more thorough reflection on, and alignment with, existing research, for example on media literacy, is missing; in fact, the term is not used until the very end of the paper (line 369). The missing theoretical framing is problematic because it indicates that existing research from media and communication studies, but also journalism studies, has not been sufficiently taken into account. It is also crucial for the quality of the method design: without adequate elaboration of the concepts used, the items presented in the online questionnaire seem random. Why is it important for the authors to include, for example, political belief but not economic or social belief? What function does the item “Frequency of watching talk shows” serve?

The above also affects the overall strength of the paper’s argument: since there is no clear rationale behind the choice of items, the conclusion seems far-fetched at points, leaving the reader wondering whether qualitative interviews would not have created more insightful results. This calls for a much stronger “red thread” in the paper.

I recommend conducting a comprehensive revision of the introduction and theoretical framing that elaborates on the key concepts of this paper. In addition, it would be very helpful to present the research question at an earlier point (at the moment it is placed in line 110, in the chapter “Methods”).

A second point relates to the presentation of results and the use of tables:
While tables can undoubtedly support the communication of research and bring more detail and clarity, the heavy use of tables in this article is a bit out of proportion. For example, Tables 1 and 2 could easily be replaced by text, while the Pearson results are better understood when shown as a table.

Lastly, while English is not my native language, a thorough language revision seems advisable. This includes smaller aspects such as repetitions (e.g. line 355: “more specifically”), but also the sentence structure in general. 

Author Response

Thank you for your suggestions!

Author Response File: Author Response.pdf

Reviewer 2 Report (New Reviewer)

Comments and Suggestions for Authors

The subject area, methods, and goals are clearly established, but the copy-editing issues in the manuscript were distracting. The study addresses an important area of research and is very relevant to media effects on youth overall, especially in the area of their behaviours with “traditional” media (TV in this case). It would have been an interesting added value if the authors had detailed (within the questionnaire applied) the typology of TV channels (which are the most frequently used) and/or the streaming services (Netflix, HBO, etc.), as respondents may have found it confusing to give accurate answers (when watching films/documentaries, their use may be focused on streaming platforms, for example).

We also recommend clarifying the concept of “reflective thinking” vs. “reflexive thinking”. In our opinion, reflexive thinking in this case implies (de)construction as a continuing process of understanding the meaning of media messages, and it implies the process of critical thinking.

Comments on the Quality of English Language

Some copy editing needs to be addressed.

Author Response

Thank you for your suggestions!

Author Response File: Author Response.pdf

This manuscript is a resubmission of an earlier submission. The following is a list of the peer review reports and author responses from that submission.


Round 1

Reviewer 1 Report

Comments and Suggestions for Authors

You have conducted a small study that helps you develop your research skills on a topic that is important to media and media literacy researchers in Greece and other countries. Congratulations and I am sure you have learned a lot in the process. 

You rely on a convenience sample and use self-report measures of media consumption and media literacy but it would be good for you to reflect on the limitations and strengths of using self-report measures. Did you review how other researchers have used self-report measures for these variables? 

In the process of writing your results, you must continue to work to determine what really matters in your findings. Right now, your research questions are not well-formed - they are merely correlational. A careful review of the literature would reveal that the relationship between demographic variables is established. It seems like you are really interested in students' beliefs about television -- and you are using these beliefs as a proxy measure of media literacy.  

I suggest you make careful choices about what "story" to tell with your data. You should present no more than 5 tables and they should be sequenced in a way that builds an argument. 

Your study shows that Greek teens think that they are critical viewers. In reviewing the literature on the measurement of media literacy competencies, you may be able to reflect on this as both an artifact of self-report methodology or as a developmental feature of adolescents. You will find Fastrez and Landry a helpful resource to situate your project methodologically. 

Comments on the Quality of English Language

This work is hard to read because of the many grammatical and usage errors. Please engage the services of a native English speaker to edit the work.

Author Response

Please see attachment

Author Response File: Author Response.pdf

Reviewer 2 Report

Comments and Suggestions for Authors

You are addressing a topic that is relevant for the editorial focus of this scholarly journal, in general. It looks like some parts of your research design and analyses are basically okay from the brief description you provided. For example, when you report the findings from your descriptive statistics (counts, percentages, means, and standard deviations) as you do in Tables 1, 2, 3, 5, 7, 9, 11, and 13, you are on relatively solid ground. While your descriptions accompanying these tables are brief, they are sufficient to let readers know what you found. However, in other parts of your manuscript, the brevity of your descriptions leads readers to believe that you have made faulty decisions, which leads to a conclusion that the claims you made for your findings lack credibility. In my comments below, I will focus on two places where your manuscript exhibits fatal flaws.

One example of a fatal flaw is that you do not seem to realize that when you report the findings of an empirical study, you cannot take credit for producing findings about concepts that you fail to measure and test. For example, in your Conclusion section, you present your most important finding as: “the results of this research showed generally moderate levels of critical thinking and reflection among young people towards various aspects of television shows and programs and how they are presented to the general public and influence their perceptions and attitudes.” You highlight this claim even more by featuring it in your abstract. But I cannot see that any of your measures are able to make a valid assessment of “critical thinking.” In order to measure critical thinking, you would have had to give your participants a problem and observed how they engaged in the process of solving it. In order to be a valid measure of critical thinking, it needs to focus on the process, not the outcome because such outcome measures are likely to be confounded by a conglomeration of many factors well beyond the concept of critical thinking.

Perhaps in your mind you think you are measuring the concept of critical thinking with items such as “Documentaries present facts and pictures in such a way that together they make up a believable story” or “Occasionally, the producers of documentaries tell the people in their documentaries what to say.” To me, these items appear to be measures of skepticism, not critical thinking. But if you believe they are measures of critical thinking, then you need to make this explicit and you need to provide a convincing argument to support such a claim. It is telling that you mention “critical thinking” 16 times in your introduction and review of the literature, then you do not mention it again until the “Conclusion.” Failing to mention it even once in the 12 pages where you present your measures and results leads readers to conclude that you do not believe that you have measured it.

A second example of fatal flaw is exhibited repeatedly in your treatment of the inferential statistical tests using ANOVA (Tables 4, 6, 8, 10, 12, and 14), which makes it appear that you do not understand how to use this statistical test or report the findings from it. For example, let’s look at Table 4 where you say you are reporting the results of ANOVAs that tested differences between one set of variables (5 TV viewing measures and 2 measures of demographics) and a second set of variables (5 beliefs about documentaries). Your description is so brief, it makes your design as well as your reasoning process behind the design impossible to understand.

1. Which were your independent variables, and which were your dependent variables? When running an ANOVA, designers typically use a nominal measure for the independent variable and a ratio measure for the dependent variable. But none of your measures generated ratio level data, and only gender is nominal level. Therefore, I cannot tell which variables you regarded as independent or dependent from looking at the level of data. You can, of course, run an ANOVA using any level of data in your dependent measures and independent measures, but the findings that ANOVA will generate have highly questionable value.

2. The headings of the 5 columns appear to be the 5 individual measures that you used to assess participants’ beliefs about documentaries. Why did you not use these items to construct a scale to represent beliefs about documentaries? If each of these 5 items were designed to measure one concept (beliefs about documentaries) why not scale them to arrive at one measure to represent each participants’ value on that variable? Why treat each item separately in your analysis?

3. When you introduce Table 4 (lines 155 to 159), you say “statistically significant differences occur if the displayed p-values are less than 0.05.” Does this mean that the numbers in the 35 cells are p-values? If that is what you are saying, then you are claiming that you ran 35 different ANOVAs? Is that right? If so, then the data matrix is 20 cells for almost all of these tests (5 levels on each measure of belief about documentaries X 4 levels of TV viewing). Given that you collected data from 204 respondents, the power of your test is fairly weak, which means that it would be highly unlikely that you would have generated findings with more than two or three statistically significant p-values; yet you report finding 25 statistically significant findings across your 35 tests! Either you have found an effect much, much stronger than social scientists could ever expect to find, or that there is something radically wrong in the way you ran your analyses.

4. Did you run each of these ANOVAs individually? It appears that you did, from the way you report your findings in Table 4. If you did run them individually, then this is a serious flaw. Instead, you needed to run a MANOVA with repeated measures to generate your findings for each table in order to take into account family-wise error and thereby generate more accurate p-values.

5. You do not seem to understand that p-values can never be zero. Statisticians are conservative and recognize that there is always the possibility that their test produced spurious results; even when that probability is tiny, it is still never zero. There are times when SPSS will report p-values of .000 but this is the result of rounding. In such a case, you should report “p < .001” not that “p is .000.” I realize that this is a rather minor point, but it is yet another indicator of your limited knowledge about inferential statistics.

6. When you report the results of an ANOVA, you need to report the F, the degrees of freedom, and eta squared. Without those indicators, the p-values have little meaning.

7. Also, when you do find a significant difference as the result of running an ANOVA, it is important to run a subsequent test to identify where the overall significant difference is most concentrated.

8. ANOVA is a test of differences, but what you seem to be interested in reporting are associations. For example, looking at the first cell in Table 4, are you really interested in finding out if there are differences in TV viewing across levels of belief about documentaries using actors? Or are you more interested in finding an association between the two, that is, are people who watch more TV each day more likely to believe that documentaries use actors?

I could continue to add reasons to this list, but I will stop at 8. These problems are not limited to Table 4, but also show up in Tables 6, 8, 10, 12, and 14. Because of these fatal errors, all of the inferential findings that you report lack credibility.
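To make points 3 through 7 above concrete, here is a minimal sketch of how a single one-way ANOVA could be run and reported in full (F, degrees of freedom, eta squared, and a properly formatted p-value), and how the alpha level could be adjusted when many tests are run on the same sample. The data and group boundaries below are entirely hypothetical illustrations, not the authors’ actual measures.

```python
# Illustrative sketch: full ANOVA reporting and a family-wise correction.
# All data here are simulated; group labels are hypothetical examples.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Three hypothetical daily-TV-viewing groups (e.g. <1h, 1-3h, >3h), n = 204 total
groups = [rng.normal(3.0, 1.0, 70),
          rng.normal(3.2, 1.0, 70),
          rng.normal(3.4, 1.0, 64)]

f_stat, p_value = stats.f_oneway(*groups)

# Effect size: eta squared = SS_between / SS_total
grand_mean = np.mean(np.concatenate(groups))
ss_between = sum(len(g) * (g.mean() - grand_mean) ** 2 for g in groups)
ss_total = sum(((g - grand_mean) ** 2).sum() for g in groups)
eta_sq = ss_between / ss_total

df_between = len(groups) - 1                      # k - 1
df_within = sum(len(g) for g in groups) - len(groups)  # N - k

# Never report "p = .000"; clamp tiny p-values to "< .001"
p_str = "< .001" if p_value < 0.001 else f"= {p_value:.3f}"
print(f"F({df_between}, {df_within}) = {f_stat:.2f}, p {p_str}, eta^2 = {eta_sq:.3f}")

# If 35 such tests were run on the same data, a simple Bonferroni
# correction keeps the family-wise error rate at alpha = .05:
n_tests = 35
adjusted_alpha = 0.05 / n_tests   # each individual test judged at ~.0014
```

A significant omnibus F would then call for a post-hoc test (e.g. Tukey’s HSD) to locate where the group differences are concentrated, as noted in point 7.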

In conclusion, I encourage you to keep working on this important topic. If you plan to make contributions to the media literature, then you need to learn a lot more about the statistical tests of differences and associations so that you can choose the appropriate test. I hope the effect of my comments is to motivate you to learn much more about statistics, their requirements, how to use statistical tests appropriately, and how to report the findings from such tests. In the meantime, stay on more solid ground by using only the more simple descriptive statistics.

Comments for author File: Comments.pdf


Author Response

Please see attachment

Author Response File: Author Response.pdf

Reviewer 3 Report

Comments and Suggestions for Authors

This is an interesting study regarding television and young audiences in Greece. Below the authors can find some suggestions aiming at strengthening this work prior to publication.

First, the authors state in the Introduction (p.1, lines 38-42): “…after studying the Greek and international literature, no study was found that investigates the different beliefs of young people, in different aspects of watching programs on television. In particular, there is a significant gap in the Greek literature in this field, and this is sought to be filled through the present research…”. However, this is not the case as there exist several studies regarding the ways young audiences perceive and interpret different aspects of TV programming and news, and some of them refer to the Greek case. Here is an indicative list:

- Patch, H. (2018). Which factors influence Generation Z’s content selection in OTT TV? A case study. Available at: http://www.diva-portal.org/smash/get/diva2:1232633/FULLTEXT01.pd

- Bennett, W. L. (2008). Changing citizenship in the digital age. Civic Life Online: Learning How Digital Media Can Engage Youth, 1-24. doi:10.1162/dmal.9780262524827.001

- Hagedoorn, B., Eichner, S., & Gutiérrez Lozano, J. F. (2021). The ‘youthification’ of television. Critical Studies in Television, 16(2), 83-90.

- Kourti, L. (2002). Children and media - Greece. Children and Media, 4(2), 42.

- Podara, A., et al. (2021). Generation Z’s screen culture: Understanding younger users’ behaviour in the television streaming age - The case of post-crisis Greece. Critical Studies in Television, 16(2), 91-109.

- Podara, A., et al. (2019). Audiovisual consumption practices in post-crisis Greece: An empirical research approach to Generation Z. In Proceedings of the International Conference on Filmic and Media Narratives of the Crisis: Contemporary Representations, Athens, Greece (pp. 7-8).

In this perspective, maybe the authors would like to revise this part of the Introduction as well as the theoretical framework, which needs to be strengthened.

In addition, in the Methodological section (p.2, lines 87-95) the authors present four RQs which in fact could be merged into one as they read like four different aspects of the same RQ, based on different demographics.

Also, regarding the sampling procedure, the authors state (p. 3, line 98): “The research sample was collected through convenience sampling method”. This needs to be explained a bit further: for example, how were the interviewees selected? How did the researchers obtain the parents’ contact information and send the link to the online questionnaire? And so on.

Furthermore, has this research obtained ethical clearance from a relevant Ethics Committee? The authors do not mention anything about it in the Method section. Any study involving human participants needs to obtain ethics permission from a formal commission in the country where the study is conducted, especially when the study involves people under 18 years old.

The analysis of the Results seems solid. However, there are only 204 participants, which is a rather limited sample from which to reach sound conclusions. Maybe the authors would consider mentioning this as one of the study’s limitations.

The Conclusions part (pp. 14-15) reads more like a Discussion than Conclusions. What is this study’s major finding? And how do these findings advance the existing literature on the topic?

Finally, the bibliographical search of the study (References) seems a bit limited with regard to the issue examined here. There is significant literature on the relationship between young people and television that could strengthen the theoretical part of this study and offer a solid basis for the discussion of the empirical part.

Author Response

Please see attachment

Author Response File: Author Response.pdf

Round 2

Reviewer 2 Report

Comments and Suggestions for Authors

While I appreciate your politeness in responding to the criticisms I expressed in my previous review, I must evaluate your manuscript on its scholarly quality. I wish I could be more positive, but I cannot see any improvement exhibited in this revision. The two fatal errors I pointed out before still appear in this version.

One of those fatal errors is your claim that you have measured critical thinking. Your response to this criticism was that “the creator of the questionnaire I used, whom I have mentioned as a reference in the list of references at the end, herself points out that it is a questionnaire for evaluating critical thinking.” First off, you are quoting an untested claim in an unpublished doctoral thesis. You do not seem to realize that scholarship requires you to critically examine all claims rather than simply accepting them. Without critical analyses of the claims in the literature, faulty claims get repeated then institutionalized, which moves scholarship in the opposite direction of its intended purpose which is to strive to construct more valid and more useful explanations for the phenomenon we study.

Also, you do not seem to realize that when you claim to be testing a concept, you need to show readers what your meaning is for that concept, then present a measure that demonstrates the ability to capture that meaning. In your manuscript, the only definition you provide for critical thinking comes from Ku et al., who apparently define it as the “ability to reflect on whether the information they receive from the media is correct and how it might affect their own personal beliefs, attitudes, and perceptions, towards social, political and cultural issues.” Notice that the key idea in this definition is “the ability to reflect” and make decisions about the accuracy of information. Yes, the definition also mentions beliefs, but those are consequences of the application of the ability to reflect; that is, people who have a low level of ability to reflect will likely form faulty beliefs. Therefore it is essential to measure the ability to reflect and to avoid measuring beliefs, because beliefs are likely to be faulty among those people who have little ability to reflect. Can you see the difference?

In your Methods section you say that your measure consisted of eight parts: (1) beliefs about documentaries, (2) political beliefs, (3) beliefs about young people, (4) propositions related to television presentations, (5) beliefs about television in general, (6) beliefs about watching TV, (7) daily TV viewing habits, and (8) demographics. I cannot see how any of these are measures of “critical thinking”, and yet you make a wild claim that “As Rosenbaum [25] mentioned, these questions show the existence of adolescents’ critical attitude towards media.” Which questions? All of them? If so, how are demographic measures regarded as measures of a person’s critical thinking? And how are self-reported beliefs measures of critical thinking? You even quote Hobbs’ caution that “some subjects may not be able to self-assess their media literacy competencies and others may choose a more socially acceptable answer rather than one that reflects their lived experience.” And by providing this quote to support your position, are you saying that critical thinking is the same as media literacy competencies? Your argument is very confusing on the surface, and the more I try to work through your process of reasoning the more I see ways in which your argument is faulty.

The other fatal flaw in your manuscript is your lack of understanding about inferential testing. You do not have a representative sample, so p-values are moot. There are, of course, many published studies that use convenience samples and still report p-values, but in those instances the p-values are always presented as a background indicator to the main focus, which is the inferential statistic. When you run Pearson correlations, the main focus of your reporting should be on the r’s, which indicate the strength of association. However, in none of the five tables that purport to show the results of your tests of correlations do you present even a single r. Also, you have ignored my previous caution about family-wise error. Again it looks like you have run each test of correlation independently of all the other tests, which unfairly inflates the chance of spurious significance. Therefore your reporting of p-values is faulty for two major reasons.
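The point about reporting r as the primary result can be sketched in a few lines. The variables and data below are entirely hypothetical, chosen only to show the shape of a correct report (coefficient first, p-value as background):

```python
# Illustrative sketch: reporting a Pearson correlation with r as the
# primary result. All data here are simulated, not the study's data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n = 204                                         # matches the study's sample size
daily_tv_hours = rng.uniform(0, 5, n)           # hypothetical predictor
belief_score = 0.3 * daily_tv_hours + rng.normal(0, 1, n)  # hypothetical outcome

r, p = stats.pearsonr(daily_tv_hours, belief_score)

# Report r (the strength of association) first; p is secondary.
# Degrees of freedom for Pearson r are n - 2.
p_str = "< .001" if p < 0.001 else f"= {p:.3f}"
print(f"r({n - 2}) = {r:.2f}, p {p_str}")
```

If many such correlations are tested on the same sample, the per-test alpha would also need a family-wise adjustment (e.g. Bonferroni or Holm), as with the ANOVAs.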

I will conclude this review as I did my previous one, but this time I will add emphasis. I encourage you to keep working on this important topic. If you plan to make contributions to the media literature, then you need to learn a lot more about the statistical tests of differences and associations so that you can choose the appropriate test. I hope the effect of my comments is to motivate you to learn much more about statistics, their requirements, how to use statistical tests appropriately, and how to report the findings from such tests. In the meantime, stay on more solid ground by using only the more simple descriptive statistics.

Reviewer 3 Report

Comments and Suggestions for Authors

There is no doubt that the author has done a lot of work to strengthen the manuscript. Congrats to the author, this looks like a promising work.
