#### 2.1.2. Design

We manipulated News Source within subjects, attributing headlines to one of five sources. In addition, subjects self-selected into one of three Political Affiliation categories, which served as a between-subjects factor.

#### 2.1.3. Materials and Procedure

As a cover story, we told subjects the study was examining visual and verbal learning styles. Then, we presented subjects with 50 news headlines, one at a time, in a randomized order. Each headline was attributed to one of five news sources. Specifically, above each news headline, subjects read "X reported that ... ," where X was replaced with a news source and the ellipsis was followed by a headline (e.g., "The New York Times reported that ... Rarely Used Social Security Loopholes, Worth Thousands of Dollars, Closed"). We asked subjects to rate each headline for the extent to which they believed that news story was real news or fake news (1 = Definitely fake news, 5 = Definitely real news).

**News Sources.** We chose the news sources as follows. We gathered an initial list of 42 sources from a study investigating people's beliefs about the prevalence of fake news in various media agencies [14]. For the current experiment, we narrowed this list down to the following four sources: The New York Times, Fox News, Occupy Democrats, and Breitbart. We chose these sources at face value, in an effort to cover both relatively left- and right-leaning media sources, as well as relatively well-established and newer media sources of varying levels of reputed journalistic integrity. We also included an additional, unspecified fifth source, achieved by replacing X with the words "It was."

**Headlines.** We constructed the list of news headlines as follows. First, we scoured various U.S. national and international news websites for headlines from the 2015–2016 period. We selected headlines on the basis that they should cover a wide range of topics—including non-political or non-partisan issues—and should make a claim, rather than merely stating an opinion. This initial search produced 167 candidate headlines. We then asked a separate sample of 243 undergraduate students to rate, in a randomized order, the familiarity of each headline (1 = Definitely never seen before, 5 = Definitely seen before). Using these data, we selected a final set of 50 unique, specific headlines that were rated relatively low in familiarity (*M* = 2.32, Range = 1.75–3.43). The final list of headlines is available at https://osf.io/h6qen/ (accessed on 27 September 2021).
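The familiarity-based screening described above can be expressed compactly: average each headline's norming ratings and keep the least-familiar items. A minimal sketch of that selection step (the headline names, ratings, and cutoff below are our own illustrative inventions, not the study's actual norming data):

```python
# Select the least-familiar headlines from norming ratings
# (1 = Definitely never seen before, 5 = Definitely seen before).
def least_familiar(ratings_by_headline, k):
    """Return the k headlines with the lowest mean familiarity rating."""
    means = {h: sum(r) / len(r) for h, r in ratings_by_headline.items()}
    return sorted(means, key=means.get)[:k]

# Hypothetical norming ratings from three raters per headline.
ratings = {
    "Loopholes Closed": [1, 2, 3],      # mean 2.00
    "Widely Covered Story": [4, 5, 5],  # mean 4.67
    "Obscure Local Story": [1, 1, 2],   # mean 1.33
}
selected = least_familiar(ratings, 2)
```

In the study itself, the analogous step reduced 167 candidates rated by 243 students down to the 50 least-familiar headlines.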

No headlines were drawn from our four specified sources. We counterbalanced presentation of the materials such that each subject observed 10 headlines attributed to each source, and each headline was attributed to each source equally often across subjects. We also included, among the headlines, two attention-check items that resembled headlines but instructed subjects which response to select if they were paying attention.
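The counterbalancing scheme above can be sketched as a Latin-square rotation: subjects fall into five counterbalancing groups, and each group's headline-to-source mapping is shifted by one block of 10. A minimal illustration (the group structure and rotation below are our own hypothetical reconstruction, not the study's actual assignment code):

```python
# Counterbalance 50 headlines across 5 sources: each subject sees
# 10 headlines per source, and across the 5 counterbalancing groups
# each headline is paired with each source exactly once.
SOURCES = ["The New York Times", "Fox News", "Occupy Democrats",
           "Breitbart", "Unspecified"]

def assign_sources(headlines, group):
    """Map each headline to a source for one counterbalancing group (0-4)."""
    assert len(headlines) % len(SOURCES) == 0
    block = len(headlines) // len(SOURCES)  # 10 headlines per source block
    mapping = {}
    for i, headline in enumerate(headlines):
        source_index = (i // block + group) % len(SOURCES)
        mapping[headline] = SOURCES[source_index]
    return mapping

headlines = [f"Headline {n}" for n in range(50)]
for g in range(5):
    m = assign_sources(headlines, g)
    # every subject in this group sees exactly 10 headlines per source
    counts = {s: sum(1 for v in m.values() if v == s) for s in SOURCES}
```

Rotating the mapping by one block per group guarantees both properties stated in the text: 10 headlines per source within subject, and equal headline-source pairing across subjects.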

Following the headline rating task, we asked subjects how they identified politically (1 = Very conservative, 5 = Very liberal), which political party they were a member of (1 = Democratic party, 2 = Republican party, 3 = Other or none), and basic demographic information. We also administered several exploratory measures: subjects completed the Social Dominance Orientation scale [30], rated how familiar they were with each news source (1 = Not at all familiar, 5 = Extremely familiar), rated how much the source information affected their ratings (1 = Not at all, 5 = A great deal), answered two open-ended questions about the purpose of the study, and indicated if they had looked up any of the headlines. We do not report results from most of these exploratory measures, but the data are available online at https://osf.io/h6qen/ (accessed on 27 September 2021).

#### *2.2. Results and Discussion*

For all experiments in this article, we report the results of analyses that met the standard criterion of statistical significance (i.e., *p* < 0.05). For the interested reader, additional reporting of results can be found in the Supplementary Material.

We analyzed data only from subjects who gave complete responses and, departing from our preregistration, did not exclude subjects on any other basis. Most subjects responded correctly to each attention check item (85% and 87%, respectively) and did not look up any headlines (93%). We also deviated from our preregistration in how we created the three political affiliation groups for analysis: Rather than categorizing subjects based on their rated political leaning, we simply used subjects' reported party membership (but using the preregistered groupings leads to similar results and conclusions; see Supplementary Material).

Of the 581 subjects, 229 identified as Republicans, 177 as Democrats, and 175 as Other (or none). Distributions of the political leaning variable were consistent with these data: The modal selections were "somewhat conservative" for Republicans, "somewhat liberal" for Democrats, and "moderate" for Other.

Recall that our primary question was: To what extent does political affiliation influence how source information affects people's interpretations of the news? To answer that question, we examined subjects' mean headline ratings as a function of their political affiliation and news source. Table 1 shows the mean rating for each condition. A Repeated Measures Analysis of Variance (RM-ANOVA) on these ratings revealed a statistically significant interaction between political affiliation and news source, suggesting that the influence of political affiliation on headline ratings depends on source information, *F*(8, 2312) = 3.09, *p* < 0.01, η<sub>p</sub><sup>2</sup> = 0.011. We also included age as a covariate in an additional exploratory Repeated Measures Analysis of Covariance (RM-ANCOVA), and found only a main effect of Age, such that each year of aging was associated with a small shift toward rating headlines more as real news, irrespective of source or political affiliation, *B* = 0.005, *t*(579) = 3.77, *p* < 0.01.

**Table 1.** Descriptive Statistics for Ratings of News Classified by Source of Material and Subjects' Political Affiliation.


Note. In Experiments 1a and 1b, ratings concerned the "realness" of various news headlines; in Experiments 2a and 2b, ratings were a composite of four items concerning the negativity of a CNN journalist's interaction with a White House intern as depicted in video footage recorded during a press conference given by President Trump. <sup>a</sup> In Experiments 1a and 1b, headlines were attributed to various news sources; in Experiments 2a and 2b, source was one of several versions of a videoed event. <sup>b</sup> In Experiments 1a, 1b, and 2a, "Other or none" was a single political affiliation response option, whereas in Experiment 2b, "Other" and "None" were distinct response options.
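The effect sizes reported in this section can be recovered from the *F* statistics and their degrees of freedom via the standard identity η<sub>p</sub><sup>2</sup> = (*F* × df<sub>1</sub>)/(*F* × df<sub>1</sub> + df<sub>2</sub>). A quick arithmetic check of the reported values (a sketch assuming only this identity, not access to the authors' raw data):

```python
def partial_eta_squared(f, df1, df2):
    """Partial eta squared from an F statistic: (F*df1) / (F*df1 + df2)."""
    return (f * df1) / (f * df1 + df2)

# Political affiliation x news source interaction, F(8, 2312) = 3.09
interaction_effect = partial_eta_squared(3.09, 8, 2312)  # ~0.011
```

Applying the same identity to the follow-up ANOVAs, *F*(2, 578) = 5.17, 4.57, and 3.34, reproduces the reported values of 0.018, 0.016, and 0.011, respectively.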

To determine where any meaningful differences occurred, we then ran five one-way ANOVAs testing the influence of political affiliation on mean headline ratings for each news source (we did not explicitly specify these follow-up analyses in our preregistration). These analyses yielded mixed results. Subjects' political affiliation had no appreciable influence when headlines came from the two sources favored by people who lean politically right (all *p* values > 0.26). However, subjects' political affiliation did have an influence when headlines came from the remaining three news sources, *F*<sub>New York Times</sub>(2, 578) = 5.17, *p* < 0.01, η<sub>p</sub><sup>2</sup> = 0.018; *F*<sub>Occupy Democrats</sub>(2, 578) = 4.57, *p* = 0.01, η<sub>p</sub><sup>2</sup> = 0.016; *F*<sub>Unspecified Source</sub>(2, 578) = 3.34, *p* = 0.04, η<sub>p</sub><sup>2</sup> = 0.011.

More specifically, Tukey-corrected post hoc comparisons for those three sources revealed that Democrats rated headlines from The New York Times as slightly more real than Republicans (*M*<sub>Diff</sub> = 0.18, 95% CI [0.04, 0.31], *p* = 0.01) or Others (*M*<sub>Diff</sub> = 0.15, 95% CI [0.01, 0.29], *p* = 0.04). Similarly, Democrats rated headlines from Occupy Democrats as slightly more real than Republicans (*M*<sub>Diff</sub> = 0.15, 95% CI [0.03, 0.28], *p* = 0.01) or Others (*M*<sub>Diff</sub> = 0.14, 95% CI [0.00, 0.27], *p* = 0.04). Finally, Democrats rated headlines from an unspecified source as more real, on average, than Republicans (*M*<sub>Diff</sub> = 0.12, 95% CI [−0.01, 0.26], *p* = 0.07) or Others (*M*<sub>Diff</sub> = 0.14, 95% CI [−0.00, 0.28], *p* = 0.06). These last differences, however, were not statistically significant once adjusted for multiple comparisons.
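Each follow-up test above is a one-way ANOVA on mean headline ratings for one source, with political affiliation as the grouping factor. A minimal sketch using `scipy.stats.f_oneway` (the rating vectors are fabricated for illustration and do not reproduce the study's data or results):

```python
from scipy.stats import f_oneway

# Per-subject mean headline ratings for one news source, split by
# political affiliation (illustrative values only; real groups had
# 177, 229, and 175 subjects, respectively).
democrats   = [3.0, 3.4, 3.8, 3.6, 3.2]
republicans = [3.0, 3.2, 3.4, 2.8, 3.1]
others      = [3.1, 3.3, 3.0, 2.9, 3.2]

f_stat, p_value = f_oneway(democrats, republicans, others)
```

With three groups of *n* subjects each, the test has 2 numerator and 3*n* − 3 denominator degrees of freedom, matching the *F*(2, 578) reports above for the study's 581 subjects.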

Taken together, this collection of results is partially consistent with our hypothesis. We predicted that people would rate news headlines from sources favored by their political affiliation as more real than headlines from other sources. That prediction held, but only for headlines attributed to sources favored by people who lean politically left (Democrats). How are we to explain these results? One possibility is that Democrats, and only Democrats, make meaningful distinctions among news sources, but we can think of no theoretical reason this explanation would be true. An alternative possibility is that the sources we used varied in unanticipated ways. In fact, exploratory examination of subjects' source familiarity ratings reveals data consistent with this idea: The New York Times and Fox News were rated as more familiar than Occupy Democrats and Breitbart (*M*<sub>New York Times</sub> = 3.36, 95% CI [3.25, 3.47]; *M*<sub>Fox News</sub> = 3.56, 95% CI [3.46, 3.66]; *M*<sub>Occupy Democrats</sub> = 1.69, 95% CI [1.60, 1.78]; *M*<sub>Breitbart</sub> = 1.72, 95% CI [1.62, 1.81]). We also note that the headline rating differences were small, suggesting that subjects may not have construed our sources as meaningfully different from one another in terms of their credibility. We conducted Experiment 1b to address these concerns.

## **3. Experiment 1b**

The preregistration for this experiment is available at https://aspredicted.org/pi83g.pdf (accessed on 27 September 2021). The data were collected on 17 February 2019.
