#### 5.1.2. Design

We manipulated Video Version between subjects, showing each subject one of two versions of the event. In addition, subjects self-selected into one of four Political Affiliation categories.

#### 5.1.3. Materials and Procedure

The experiment was identical to Experiment 2a, except as follows. We collected these data approximately 10 months after the event occurred. Because we found no effects of video version in Experiment 2a, we simplified the design, dropping the "looped" version of the video and randomly assigning subjects to watch either the "altered" version or the "original" version. We also allowed subjects to differentiate between having an "Other" political affiliation and "None." Finally, we included some slightly different exploratory measures, whose results we do not report here. The data are available at https://osf.io/h6qen/ (accessed on 27 September 2021).

#### *5.2. Results and Discussion*

We analyzed data only from subjects who gave complete responses; contrary to our preregistration, we did not exclude subjects on any other basis. Most Mechanical Turk workers (95%) did not look up any related information.

Of the 485 subjects, 130 identified as Republicans, 184 as Democrats, 143 as None, and 28 as Other. Distributions of the political leaning variable were consistent with these reports: The modal selections were "somewhat conservative" for Republicans, "somewhat liberal" for Democrats, and "Moderate" for Other and None.

Recall that our primary question was: To what extent does political affiliation influence how people interpret video footage of a real-world news event? To answer that question, we again calculated, for each subject, an average of their ratings across the four key items. As before, we preregistered multivariate analyses across these four ratings, but because they were all at least moderately correlated (*r*s = 0.40–0.61; Cronbach's α = 0.80), we instead combined them for univariate analysis (conducting the preregistered analyses leads to similar results and conclusions; see Supplementary Material). Table 1 shows the mean composite rating for each condition.
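For readers who want to reproduce this kind of composite measure, Cronbach's α and the per-subject average can be computed directly from a subjects × items ratings matrix. The sketch below uses illustrative stand-in data, not the study's ratings:

```python
import numpy as np

def cronbach_alpha(ratings):
    """Cronbach's alpha for a (subjects x items) ratings matrix."""
    ratings = np.asarray(ratings, dtype=float)
    k = ratings.shape[1]                          # number of items
    item_vars = ratings.var(axis=0, ddof=1)       # variance of each item
    total_var = ratings.sum(axis=1).var(ddof=1)   # variance of summed scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Illustrative data: 6 subjects rating the four key items on a 1-5 scale.
ratings = np.array([
    [1, 2, 1, 2],
    [2, 2, 3, 2],
    [3, 3, 2, 3],
    [4, 3, 4, 4],
    [4, 5, 4, 4],
    [5, 5, 5, 4],
])

alpha = cronbach_alpha(ratings)
composite = ratings.mean(axis=1)  # per-subject composite rating
```

Because the four items intercorrelate, a single composite per subject preserves most of the information while simplifying the analysis to a univariate design.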

We examined subjects' composite rating as a function of the video version they observed and their political affiliation. A two-way ANOVA revealed main effects of video version, *F*(1, 477) = 4.78, *p* = 0.03, ηp² = 0.010, and political affiliation, *F*(3, 477) = 10.77, *p* < 0.01, ηp² = 0.063. These results suggest that the version of the event people observed and their political affiliation each mattered for how they interpreted the journalist's behavior.

More specifically—and contrary to our predictions—people who viewed the "original" version of the video gave slightly more negative ratings of the journalist's behavior than people who viewed the "altered" version (*M*Diff = 0.14, 95% CI [−0.00, 0.27]). Tukey-corrected post hoc comparisons further revealed that, in terms of people's political affiliation, Republicans rated the journalist's behavior more negatively than Democrats (*M*Diff = 0.49, 95% CI [0.27, 0.71], *p* < 0.01), Others (*M*Diff = 0.40, 95% CI [0.00, 0.80], *p* = 0.05), and members of no party (*M*Diff = 0.32, 95% CI [0.09, 0.56], *p* < 0.01).
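Tukey-corrected pairwise comparisons of this kind are available in statsmodels. Again, the group labels and simulated ratings below are illustrative assumptions, not the study's data:

```python
import numpy as np
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(1)

# Simulated stand-in composite ratings for the four affiliation groups,
# using the reported group sizes; Republicans get a slightly higher
# (i.e., more negative) mean, mirroring the pattern described above.
groups = np.repeat(["Republican", "Democrat", "None", "Other"],
                   [130, 184, 143, 28])
ratings = rng.normal(3.0, 1.0, size=groups.size) \
    + (groups == "Republican") * 0.5

# Tukey's HSD corrects for all 6 pairwise comparisons among 4 groups.
result = pairwise_tukeyhsd(endog=ratings, groups=groups, alpha=0.05)
print(result.summary())
```

The summary reports each pairwise mean difference with a family-wise-corrected confidence interval, matching the format of the comparisons reported in the text.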

We also included age as a covariate in an additional exploratory ANCOVA and found that each year of aging was associated with a shift in interpretation of the journalist's behavior, but the direction and strength of this shift depended on political affiliation, *F*Age × Political Affiliation(3, 476) = 3.67, *p* = 0.01, ηp² = 0.023. More specifically, for Democrats and those reporting not belonging to any political party, each year of aging was associated with a statistically significant shift toward a more positive rating of the journalist's behavior, *B*Democrats = −0.017, *t*(181) = 4.33, *p* < 0.01; *B*None = −0.012, *t*(141) = 2.54, *p* = 0.01.

This pattern of results is largely consistent with the findings of Experiment 2a and reinforces the idea that concerns over the suggestive nature of the altered video may have been unwarranted. As in Experiment 2a, we wondered about the influence of subjects' prior familiarity. We again split subjects into two groups, classifying them as "unfamiliar" (*n* = 309) or "familiar" (*n* = 178) with the event, according to their rating of prior familiarity. We then re-ran the two-way ANOVA for each of these groups in turn. In these exploratory analyses, only political affiliation remained statistically significant—with the same patterns of means as above—and only for those who indicated prior familiarity (Familiar: *p* < 0.01; Unfamiliar: *p* = 0.09). These results are consistent with the findings of Experiment 2a, suggesting again that familiarity with the event matters.

### **6. General Discussion**

Across four experiments encompassing a variety of news sources and a real-world event that varied in familiarity, we found that the influence of source depends on political beliefs.

In Experiment 1a, we found that Democrats rated unfamiliar news headlines as more likely to be real than Republicans or Others did—but only when those headlines were attributed to news sources favored by Democrats. This result shows that a simple change to the ostensible source of news information can affect people's interpretations of that news. In addition, we found that the older people were, the more "real" they rated headlines, regardless of the source of those headlines or people's political affiliation. Our sources had not been normed for credibility, however, leaving room for alternative interpretations. In Experiment 1b, we sought to resolve this issue and build on our initial findings, examining two news sources previously rated most distinct in trustworthiness across the political spectrum. Here, we found that Democrats rated unfamiliar headlines as less likely to be real than Republicans or Others—but only when those headlines were attributed to a news source not favored by Democrats.

In Experiments 2a and 2b, we found evidence to suggest that prior knowledge of a real-world "fake news" event strongly influences people's beliefs about that event. More specifically, when people indicated they already knew about the depicted event—that is, the interaction between CNN's Jim Acosta and a White House intern—ratings about the journalist's behavior were consistent with political affiliation: Democrats rated the journalist's behavior more favorably than Republicans. Moreover, this influence of familiarity dwarfed any influence of the version of the video people observed. We also found that the older people were, the more positively they rated the journalist's behavior, but only among Democrats or people who belonged to no political party. These results suggest that concerns about the suggestive nature of the altered video may have been unwarranted, especially when considering that those unfamiliar with the event rated the journalist's behavior *more* favorably after watching the altered video than after watching the original CSPAN footage.

Our findings are consistent with related work showing that people's political beliefs predict which news sources they consider to be "fake news" [14]. Our data build on this work, suggesting that in some cases, differences in beliefs about the trustworthiness of news sources carry forward into judgments of the veracity of news information. That finding is concerning, because related research shows that "fake news" is often political in nature and can have serious consequences, such as non-compliance with behaviors that inhibit the spread of a deadly virus [32–34]. Our research is also reminiscent of other work showing that individual differences—like age, the need to see the world as structured, or the propensity to think analytically—predict endorsement of or skepticism about "fake news" and misinformation [26,27,29,35,36]. With respect to age specifically, we found two small but noteworthy patterns. First, age was positively associated with the belief that news headlines were "real" in Experiment 1a. This finding should be interpreted cautiously, however, because we did not observe the same association in Experiment 1b. Second, age was positively associated with more favorable views of the journalist's behavior in Experiments 2a and 2b—although not for Republicans. Together, these findings are consistent with work showing differences in the ability to think critically as people age [37]. Finally, our results also dovetail with prior research demonstrating that people are more easily misled by sources of information deemed credible [24,25].

One limitation, however, is that news source information appears to have only a small influence on people's beliefs about the news. Take, for example, the finding from Experiment 1b, in which Democrats rated headlines attributed to Fox News as less real than either Republicans or Others. The confidence intervals for those differences ranged from 0.04 to 0.51—or put another way, from almost zero to half of a point along a 5-point scale. However, considering that subjects were given sparse information in the form of brief and unfamiliar news headlines, any effect at all may seem surprising.

There are at least three possible explanations for the small size of these effects. The first is that people require more context (e.g., a longer news article) for news source information to powerfully sway interpretations of the news. The results from Experiments 2a and 2b are consistent with this idea, because differences in event interpretations due to political affiliation were strongest amongst those already familiar with the event. The second explanation is that people do not rely on source information when evaluating news content that is already relatively plausible [38–40]. The third explanation—and one we should take seriously when designing interventions to help people detect fake news—is that people are increasingly skeptical of news sources in general [18–20]. If that trend continues, then it will become difficult to find any meaningful differences in people's interpretations of the news according to where that news is sourced, because all sources will eventually be considered "fake news." In fact, given the proliferation of digitally altered footage in which people are convincingly replaced with others (i.e., "deep fakes"), we may be approaching a tipping point, beyond which no news will be considered credible [41].

Another limitation is that we lacked control over what people already knew about the real-world "fake news" event in Experiments 2a and 2b, instead choosing to measure naturally occurring familiarity. We therefore cannot be sure what caused differences in interpretations of the event amongst those already familiar with the event. We know that the video itself is an inadequate explanation, because video version had no meaningful influence among people who were unfamiliar with the event. We suspect a likely explanation is that Democrats and Republicans encountered different reports of the event due to selective news source consumption [16]. Consistent with this explanation, Fox News's reporting of the event featured a Tweet from a conservative commentator stating that Acosta had bullied the intern and should have his press credentials revoked [42].

One implication of this research hinges on the finding that the same news was interpreted differently when it came from different sources. That finding implies that people rely on more than just the news content when forming beliefs about the news. This implication is consistent with other work showing that people sometimes draw on whatever is available—like how easy it feels to process information—when making judgments about various targets [6,8,43]. It is similarly consistent with an explanation in which people's political motivations influence their reasoning about the news, and more generally with work showing that people find information more persuasive when it comes from a more credible source [5,23,44]. Finally, it is consistent with a framework in which people use source information when making attributions about remembered details [10]. A future study could examine the extent to which people can remember the source of encountered news information. We suspect that given the trend towards news source selectivity, people will be relatively good at remembering those sources they are familiar with, but relatively poor at remembering those sources they are not familiar with [16].

This narrowing source selectivity likely acts as a self-reinforcing feedback loop, serving to strengthen pre-existing, ideologically aligned beliefs—even when those beliefs are not accurate [5]. Moreover, people may be unaware such selectivity is happening: Multiple technology giants such as Google and Facebook curate content according to algorithms, resulting in externally generated selectivity [45]. Such a "filter bubble" may be especially concerning when news sources blatantly misinform. Take the recent example of Fox News publishing digitally altered images, placing an armed guard into photos of protests in Seattle [46].

What steps could be taken to reverse this selectivity? Can we successfully encourage people to engage with a wider variety of news sources and to be more critical of news reporting? Some efforts are underway, though it remains to be seen whether these approaches are successful [47–49]. Given the increasing distrust in the media, a more successful approach may be to make systemic regulatory changes to the media itself [18–20]. One idea, for example, is to re-establish the Fairness Doctrine, ensuring that broadcasters cover multiple aspects of controversial issues [50]. Such regulatory measures may ultimately increase accurate and decrease inaccurate news reporting, and in doing so reduce the burden on individuals to detect fake news.

**Supplementary Materials:** The following are available online at https://www.mdpi.com/article/10.3390/soc11040119/s1, Additional Analyses.

**Author Contributions:** Conceptualization, R.B.M. and M.S.; methodology, R.B.M. and M.S.; validation, R.B.M. and M.S.; formal analysis, R.B.M. and M.S.; investigation, R.B.M.; resources, R.B.M.; data curation, R.B.M.; writing—original draft preparation, R.B.M.; writing—review and editing, R.B.M. and M.S.; visualization, R.B.M. and M.S.; supervision, R.B.M. and M.S.; project administration, R.B.M.; funding acquisition, R.B.M. All authors have read and agreed to the published version of the manuscript.

**Funding:** This research received no external funding.

**Institutional Review Board Statement:** The study was conducted according to the guidelines of the Declaration of Helsinki, and approved by the Institutional Review Board of University of Louisiana at Lafayette (FA17-23PSYC, approved 12 September 2017).

**Informed Consent Statement:** Informed consent was obtained from all subjects involved in the study.

**Data Availability Statement:** The data presented in this study are openly available in Open Science Framework at https://osf.io/h6qen/ (accessed on 27 September 2021).

**Conflicts of Interest:** The authors declare no conflict of interest.
