Article

Judicialization and Its Effects: Experiments as a Way Forward

Department of History and Government, Texas Woman’s University, P.O. Box 425889, CFO 605, Denton, TX 76204, USA
Submission received: 16 February 2018 / Revised: 1 May 2018 / Accepted: 16 May 2018 / Published: 18 May 2018
(This article belongs to the Special Issue Intersection between Law, Politics and Public Policy)

Abstract

Law and courts play a larger role in American policymaking than in similar countries—and a larger role than ever before in American politics. However, systematic efforts to evaluate the effects of judicialized policymaking are consistently plagued by problems of causal inference. Experiments offer a way forward. Causal claims by public law scholars are often undercut by validity difficulties that are avoidable if scholars engaging in observational research incorporate the tenets of experiments in their research designs and if more public law scholars attempt to isolate the effects of judicialization in controlled settings, such as survey or laboratory experiments. An original survey experiment on the effects of media reporting on tort reform suggests that experiments have much to offer public law scholars. Despite certain challenges in implementation, experiments and observational research that borrows the logic of experiments provide a promising path for assessing the varied—and important—effects of judicialized policymaking.

1. Introduction

What are the effects of the encroachment of law and courts into American policymaking? Public law scholars have debated this question at length. Some contend that law and courts are an effective policymaking outlet (Gash 2015; Keck 2014). Others bemoan their use in the policymaking process as undemocratic (Bork 1997), ineffective (Forbath 1991; Rosenberg 2008), or prohibitively expensive and cumbersome (Bumiller 1998; Abel 1987; Carroll et al. 2005; Kagan 2001; Rabkin 1989). Regardless of the efficacy of law and courts at resolving political disputes, broad agreement exists that they have deeply penetrated into American policymaking (see Silverstein 2009; Kagan 2001). This “judicialization” of politics has been explored in diverse policy areas, from the administration of America’s prisons (Feeley and Rubin 2000), to injury compensation (Barnes 2007; Barnes and Burke 2015; Barnes and Hevron 2018), to playground safety (Epp 2009; see additionally Farhang 2008, 2010; Kagan 2001; Burke 2002; Silverstein 2009).1
This article suggests that the time is ripe to utilize research designs—namely, experiments—that provide analytic leverage as we attempt to explore the effects of judicialization. The research enterprise has three distinct components: the identification of a phenomenon; the generation of hypotheses concerning the formation and effects of that phenomenon; and the testing of those hypotheses. The first step—the identification of judicialization—has been accomplished in richly convincing fashion. An impressive body of research vividly documents judicialization in the United States (see also Derthick 2005; Kirkland 2016; Sandler and Schoenbrod 2003). Scholars have also thought systematically about how to compare levels of judicialization cross-sectionally and over time. For example, Kagan’s concept of adversarial legalism was developed to compare levels of judicialization across countries (Kagan 2001). Drawing on descriptions of judicialization in the U.S. and on concepts such as adversarial legalism that lend the concept generalizability, scholars have surmised—sometimes explicitly and on other occasions more implicitly—about judicialization’s effects. This article makes the case that hypotheses about these effects are ready for empirical testing and that experiments—and the incorporation of the tenets of experiments in observational research—have much to offer scholars.
Experiments present researchers with some distinct advantages over other methodological approaches (though it should be stated at the outset that the complexity of judicialized policymaking means that experiments may not always be appropriate or feasible). At their core, experiments allow for the exploration of the counterfactual. In terms of judicialization, this means that unless we systematically compare the effects of judicialized policymaking to modes of policymaking that are not judicialized (or are comparatively less so), we are left, in empirical terms, with a mountain of research that “selects” on the dependent variable by describing (in convincing fashion, to be sure) aspects of American policymaking that are judicialized (and often hypothesizing about their effects). Selecting on the dependent variable is concerning on several levels. Generally, the failure to account for the counterfactual is highly problematic for the making of causal claims (see Morgan and Winship 2014; Barnes and Weller 2014; Chilton and Tingley 2014). More specifically for public law scholars who study the effects of law and courts on policy in the U.S., the larger point is that we can only learn so much about judicialization if we continue to train our analytic focus on one example after another of judicialized policymaking. Careful process tracing and thick description have shown us that judicialization is both substantively important and everywhere, yet our attempts to account for its effects as a causal variable have consistently been left wanting. Put simply, this has resulted in scholarship that has unrealized potential.
Broadly, there are two sets of hypotheses about judicialization’s effects: (1) its effects on politics (see Barnes and Burke 2015; Barnes and Hevron 2018); and (2) its effects on attitudes (see Haltom and McCann 2004; Barclay and Flores 2017). This article suggests two prescriptions for public law scholars interested in both of these effects and provides examples of implementation. The first prescription is for researchers who are interested in ascertaining the effects of judicialization on politics. For reasons that will be explained in greater detail later in this article, hypotheses concerning judicialization’s effects on politics do not easily lend themselves to testing via experiments. However, scholars can—and should—use the logic of experiments to improve their research designs when experiments are impractical or impossible (see Angrist and Pischke 2009; see also Beck 2010; Ho and Rubin 2011).
It has also been hypothesized that judicialization can have distinct effects on attitudes (see Haltom and McCann 2004). In such cases, experiments can be particularly useful in assessing the role of judicialization as a causal variable. Thus, the second prescription is that researchers should utilize laboratory and survey experiments to test hypotheses about these effects. One example of such a hypothesis comes from Haltom and McCann’s analysis of the intersection of mass media and the courts in their book, Distorting the Law. They argue that judicialized policies are uniquely susceptible to being framed in certain ways by mass media and that these frames affect people’s attitudes (Haltom and McCann 2004). It also bears mentioning what this article does not suggest. It does not suggest that experiments will be the most effective tool for all studies of judicialization (they will not be). Nor does it suggest that experiments are “better” than other, more qualitative modes of inquiry. The field is poised to effectively employ experiments precisely because the extant literature, which successfully utilizes tools such as process tracing and thick description, has laid the groundwork for the testing of hypotheses.
The article proceeds as follows. First, it provides an overview of experiments, including their value in identifying causal relationships among variables, and their use in political science and public law. The next section of the article uses examples of recent scholarship to illustrate the benefits of using the tenets of experimental design to strengthen observational studies concerning judicialization’s effects. Next, the article describes an original survey experiment that explores a key effect of judicialization: how mass media report on tort reform and the ways in which that reporting affects people’s attitudes. The findings suggest that certain types of framed stories about tort reform are more persuasive than others and that these results run counter to anecdotal suggestions about the effects of media reporting on the issue. Finally, the article concludes with a discussion of how experiments can assist public law scholars as we continue to push our understanding of the effects of judicialization forward.

2. Experiments in Political Science

Political scientists have begun to recognize the enormous value of randomization for causal inference (McDermott 2002, 2011; Gerber and Green 2012; Dunning 2012; Druckman et al. 2006). Experiments in their three primary forms—laboratory, survey, field (and natural)—have been utilized by political scientists to study a host of topics. The earliest published experimental work in political science concerned voter turnout (Eldersveld 1956; see Druckman et al. 2011). Since then, and particularly over the past three decades, experiments have been used to explore topics as diverse as the democratic peace theory (Tomz and Weeks 2013), the determinants and consequences of prejudice (Hutchings and Piston 2011), and continued efforts to understand why people vote and how their minds are changed (Green and Gerber 2004; Shaw et al. 2012; Butler and Broockman 2011; Broockman and Kalla 2018). Though public law scholars have been relatively slow to adopt experimental methodology, they have published experiments on preventing corporate fraud (Guttentag et al. 2008), backlash over judicial activism (Fontana and Braman 2012), and the efficacy of legal assistance programs and right to counsel rules regarding housing disputes (Greiner et al. 2012; Seron et al. 2001), among others (see generally Epstein and King 2002).
Experiments allow researchers to exert a high level of control over the data-generation process and to make causal claims—through the systematic testing of hypotheses—that can withstand a stricter level of scrutiny than observational studies. Observational studies are often confounded by the impossibility of simultaneously observing treatment and non-treatment effects in the same individual or groups of individuals. This has been referred to as the fundamental problem of causal inference (Rubin 1974; see also Holland 1986; Sekhon 2007). By randomly assigning subjects to one of two groups (a treatment condition in which respondents are exposed to the causal variable and a control condition in which they are not), experimenters can come as close as possible to overcoming the fundamental problem of causal inference and can assume that any difference between the two groups on an outcome variable of interest is attributable to the treatment. Such an approach is the gold standard for inferring cause.
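To make this logic concrete, the following is a minimal simulation sketch in Python (not drawn from any study cited in this article) of how random assignment licenses a simple difference-in-means estimate of the average treatment effect; the subjects, outcomes, and effect size are all invented for illustration.

import numpy as np

rng = np.random.default_rng(0)
n = 1000                                   # hypothetical subjects
baseline = rng.normal(50, 10, n)           # outcome each subject would exhibit if untreated
true_effect = 5.0                          # the "true" effect, unobservable in practice

# Random assignment: each subject reveals only one potential outcome,
# which is the fundamental problem of causal inference.
treated = rng.random(n) < 0.5
observed = np.where(treated, baseline + true_effect, baseline)

# Because assignment is random, the simple difference in group means is an
# unbiased estimate of the average treatment effect.
ate_hat = observed[treated].mean() - observed[~treated].mean()
print(f"Estimated average treatment effect: {ate_hat:.2f} (true effect: {true_effect})")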

3. Prescription 1: Using the Logic of Experiments to Strengthen Observational Designs

Both policymakers and social scientists are concerned with causal inference, yet they have different motivations for uncovering and publicizing causal relationships (see generally Friedman 1953). For social scientists, the goal is often straightforward: to assess the impact of policies on outcomes or, put differently, to find evidence of “strong inference” (Platt 1964). On the other hand, elected officials are often concerned with maximizing the allocation of benefits to constituents (see Romer and Rosenthal 1978). These distinct purposes can pose seemingly intractable problems for understanding policy effects via experiments. For example, randomizing the assignment of a treatment (policy) and including a control group that does not receive the treatment (and its associated benefits) is not a particularly enticing approach for officials concerned with the maximization of benefits.2
To better illustrate the point, let us turn to a foundational piece of public law scholarship that has particular resonance for the present discussion: Campbell and Ross’ landmark analysis of the effects of an effort by Connecticut officials to decrease the number of driving-related fatalities by enforcing penalties for speeding in the 1950s (Campbell and Ross 1968). They provide an illuminating example of the benefits of treatment randomization—and, conversely, the limitations of making causal claims about policy effects based on nonrandomized treatment implementation. After pointing out that the Connecticut governor was quick to credit his speeding crackdown for the decline in deaths, Campbell and Ross proceed to thoroughly pick apart this claim of causal inference by focusing on various threats to internal validity. At their core, Campbell and Ross’ criticisms of the governor’s claims stem from the lack of random assignment of the treatment (in this case, the crackdown on speeding). This poses fundamental—and potentially fatal—problems to anyone attempting to ascertain the effect of the treatment on the outcome variable (the number of traffic-related fatalities).
In a world free of electoral concerns, one could envision policy implementation that would allow lawmakers to make much more credible claims of causal inference. Connecticut policymakers could have randomly assigned the crackdown to occur in half of the state’s counties, while the remaining counties remained under the old system of lax enforcement. Such an approach would allow policymakers to isolate the average treatment effect of the policy by calculating the difference in traffic deaths in counties under the crackdown—the treatment condition—and counties under the old system—the control condition. Then, policymakers and elected officials would have been able to make plausible claims to Connecticut residents about the law’s effectiveness. Despite its appeal to social scientists concerned with causal inference, the limitations of such a quasi-experiment in a real-world setting are clearly apparent.3 If policymakers truly believed that the crackdown would save lives, and if they were additionally motivated by the maximization of resource allocation, it would make little sense to limit its benefits to only half of Connecticut residents (i.e., potential voters).
Judicialization scholars face a similar problem when attempting to ascertain the effects of law and courts on the policymaking process. Policymakers have no interest or motivation in randomly assigning judicialized policies to a portion of the population. Nor do we, as social scientists, have the ability to randomly assign some aspect of judicialization to a subset of the populace. Thus, in order to study judicialization’s effects in non-controlled (i.e., non-experimental) settings, observational approaches are often our only recourse. This is complicated by the widely accepted claim that judicialization is everywhere in politics. If it is indeed everywhere, then how can we isolate its effects?
It is possible to overcome some of these problems by incorporating key tenets of experiments in observational studies. They are: (1) case selection that is akin to “matching” and is supplemented with controls; (2) “as-if” randomization of the treatment; (3) identifying treatments that can be considered “dose” effects, followed by (4) “post-test” evaluations of the treatment and control conditions along a variable of interest. In an illustration of this approach, Barnes and Hevron identify two regimes (asbestos litigation and black lung compensation) that exist within a single policy area (injury compensation in the area of occupational disease) yet rely on differing levels of judicialization (Barnes and Hevron 2018). To shorten an otherwise lengthy story, asbestos compensation is a paradigmatic judicialized policy that also provides within-case variation.4 Black lung injury compensation, on the other hand, starting in 1969 and continuing to the present, is handled through a Congressionally-created and federally-administered compensation regime.5 It is funded by a surtax on coal production, disburses payments to sufferers of black lung disease along predetermined schedules, and involves minimal litigation. Thus, it represents a non-judicialized policy.
Scholars have argued that American tort litigation is subject to skewed and distorted media coverage and that legal knowledge is gained and disseminated in a variety of ways (Haltom and McCann 2004; see also Bailis and MacCoun 1996; Gavin 2008). When the conventions of modern news reporting combine with the adversarial nature of the civil justice system, the result is uniquely potent news coverage (see Bennett 2016). In making this argument, Haltom and McCann implicitly formulate a second, related contention: that mass media covers judicialized policies differently than non-judicialized policies (see Haltom and McCann 2004). We seek to build on their impressive investigation by noting that despite offering reams of convincing evidence of skewed media coverage of tort litigation in the United States, Haltom and McCann do not extend their analysis to examine the counterfactual: media coverage of non-judicialized policies. Thus, our dependent variable is media coverage of the asbestos and black lung policy regimes. We measure coverage along a number of theoretically important variables through a content analysis of New York Times coverage of both regimes from the late 1960s to 2016 (n = 392).
This observational approach rests on three components, all of which are crucial tenets of experiments. First, we consider the level of judicialization of these two policy regimes to be the treatment. Identifying a judicialized regime and a regime in which policy was non-judicialized (or at least far less judicialized) provides us with variation on the treatment. This allows us to avoid selecting on the dependent variable and its associated problems for causal inference. Second, we make the case that these policy areas are well-matched (see generally Ho and Rubin 2011; Berk and Newton 1985; Qian 2007; Persson and Tabellini 2002). Indeed, we argue that for our purposes the only theoretically important difference between the cases is their level of judicialization. Though it is impossible in observational research to randomly assign a treatment and thus mimic the true conditions of an experiment, choosing cases that are close matches allows for an approximation of these conditions. However, because of the lack of random assignment of the treatment, we had to eliminate alternative explanations (e.g., other factors that could potentially account for differences in media coverage between the two regimes). Thus, we controlled for the following alternative explanations: (1) time (the form and content of newspaper articles may have changed during our sample period); (2) author (journalists who wrote more than one article in the sample may have writing styles that would bias the results); (3) placement (articles appearing on the front page may be written differently than articles appearing elsewhere in the newspaper); (4) news (news articles may be written differently than features, op-eds, or editorials); and (5) sources (articles employing pro-payor, pro-claimant, or expert quotes may be framed differently).
In a laboratory experiment in which researchers control the random assignment of the treatment under controlled conditions, one would not need to spend more than a cursory amount of time proving that these conditions are met. However, given that we face “real world” obstacles that plague all practitioners of observational research who base their case selection on the tenets of experimental design, we must offer convincing evidence that these core tenets of experimentation are sufficiently met. If successful, we can then plausibly argue that any difference in the outcome variable between the treatment and control conditions represents the average treatment effect and therefore must be the result of judicialization and nothing else.
After accounting for the potential alternative explanations in a series of logistic regression models, we find that the probability of skewed coverage progressively climbs as the coverage shifts from black lung to collective asbestos litigation (Chapter 11 trusts) to individual asbestos litigation, growing from less than 10 percent to about 15 percent and then 25 percent in the case of regimes with a high degree of judicialization (asbestos litigation). (See Figure 1.) (These differences were statistically significant and robust across several models.) In short, at least in these two cases, distorted media coverage is an effect of judicialization.
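To make the modeling strategy concrete, the sketch below shows in Python the general form such a logistic regression might take. The variable names and simulated data are hypothetical stand-ins rather than the Barnes and Hevron (2018) dataset, and the code is illustrative of the approach, not a reproduction of their models.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 392  # matches the number of articles reported above

# Fabricated article-level data; actual work would use the coded content analysis.
df = pd.DataFrame({
    "regime": rng.choice(["black_lung", "asbestos_trust", "asbestos_individual"], n),
    "year": rng.integers(1968, 2017, n),
    "front_page": rng.integers(0, 2, n),
    "is_news": rng.integers(0, 2, n),
})
# Simulate an outcome in which more judicialized regimes are more likely to be skewed.
logit_index = df["regime"].map({"black_lung": -2.5, "asbestos_trust": -1.7,
                                "asbestos_individual": -1.1})
df["skewed"] = (rng.random(n) < 1 / (1 + np.exp(-logit_index))).astype(int)

# Logistic regression of skewed coverage on regime, controlling for time,
# placement, and article type (author and source controls would enter the same way).
model = smf.logit("skewed ~ C(regime) + year + front_page + is_news", data=df).fit(disp=0)

# Predicted probability of skewed coverage for an otherwise typical article in each regime.
typical = pd.DataFrame({"regime": ["black_lung", "asbestos_trust", "asbestos_individual"],
                        "year": 1995, "front_page": 0, "is_news": 1})
print(model.predict(typical))
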
Despite their inherent limitations for causal inference, observational research designs also offer distinct benefits. First, they allow researchers to maximize external validity by using events that have already happened and thus are not reliant on approximating real-world conditions in a laboratory or survey experiment. Second, they offer scholars a way forward in assessing causal claims about judicialization’s effects on politics. For example, recent scholarship suggests that judicialized policies create fundamentally different politics at the time of creation than comparatively non-judicialized policies. These politics can undermine social solidarity among activists and balkanize the mobilization of resources (see Barnes and Burke 2015; see also Gash 2015; cf. Keck 2014). To demonstrate this effect, Barnes and Burke use observational data from Congressional hearings on judicialized and comparatively non-judicialized policies (Barnes and Burke 2015). They find that court-based policy regimes have a unique impact and that, for activists, focusing resources on court-based approaches for policy change can have potentially deleterious effects.
Observational researchers using this approach must be meticulous about making the case for a high level of internal validity. They are inherently susceptible to “missing variable” criticisms, which proffer that unaccounted-for variables, rather than the causal variable identified in the research design, are responsible for the outcome. In statistical modeling, those using regression analyses to estimate the relationships among independent and dependent variables must “control” for potential alternative explanations that could affect the value of the outcome variable (see Barnes and Hevron 2018 for an illustration of this approach). On the other hand, survey and laboratory experiments may involve significant concessions to external validity. Though researchers have complete (or almost complete) control over the data generation environment, they must effectively argue that laboratory (or survey) conditions are a close approximation of “real world” settings and that any causal relationship appearing under controlled conditions will also appear in uncontrolled settings.
An additional drawback of the observational approach described here is that it does not allow scholars to directly address hypotheses that argue that judicialization has an effect on the public. The study described above provides convincing evidence that judicialization leads to skewed media coverage. We also know that media coverage framed in certain ways (negatively and episodically, in particular) can affect how people assign blame for policy problems (see Iyengar 1991). However, without resorting to actual experiments, we are unable to do more than imply that the unique media coverage engendered by judicialization affects the public. The next section discusses the article’s second prescription for researchers: how to turn these implications into empirical evidence through the use of survey experiments.

4. Prescription 2: Using Experiments to Test Hypotheses about Judicialization’s Effects

Exploring differences in media coverage of judicialized and non-judicialized policies is a promising avenue for understanding the unique effects of judicialization. Though the tenets of experiments can be useful in observational studies to make comparisons and parse out judicialization’s effects, the second suggested prescription is for scholars to employ actual experiments to test judicialization’s effects on people, as demonstrated by the following survey experiment.

4.1. Framing Research

A frame is a communication tool that can help people make sense of a complex world according to a specific interpretation put forth by the framer (Gamson and Modigliani 1987; Gitlin 1980; Manoff 1986; Popkin 1994; Entman 1993; Reese et al. 2001; Hevron 2013). In recent years, two related strains of inquiry have characterized framing research. With roots in sociology, “frames in communication” refer to the processes through which frames in news stories are constructed. In this conception, the frame is considered the outcome variable and researchers try to understand the conditions that lead political elites (journalists, candidates, or officeholders) to adopt certain framing devices (see Scheufele 1999; Carragee and Roefs 2004).
A second type of framing research concerns the effects that frames have on people, which are called “audience frames.” These effects usually refer to “changes in the presentation of an issue or an event [that can] produce changes of opinion” (Chong and Druckman 2007, p. 45; see also Borah 2011, 2014; Druckman 2001). In this conception of framing research, the frame acts as the causal variable, with most framing effects studies concerned with the degree to which variable x (the frame) causes outcome y (attitude change) (Brader et al. 2008; see also Freedman 1997; Kinder and Sanders 1990; Mendelberg 2001; Nelson et al. 1997).
While scholars still struggle with explaining the mediators and moderators of framing effects, consensus has formed around the existence of at least two different audience frame types, equivalency and issue frames, which have conceptual as well as theoretical differences (see Hevron 2013). Equivalency frames feature logically alike content that is presented or phrased differently (see Kahneman and Tversky 1979; Tversky and Kahneman 1981; Druckman 2001).6 Public law scholars have employed equivalency frames to better understand risky choice in legal contexts. Guthrie (2000) found that frames used to describe “frivolous” litigation induced risk-seeking behavior in plaintiffs and risk-averse behavior in defendants, thus potentially giving plaintiffs psychological leverage in settlement negotiations. Issue frames, which have also been called “emphasis” frames, tend to highlight or emphasize certain aspects of complex phenomena (Druckman 2001, p. 230; Nelson and Kinder 1996; Chong and Druckman 2010).

4.1.1. Narrative Frames

“Narrative” frames are a subset of issue frames that concern storytelling devices, or word-by-word choices, which add up to distinctive ways of telling a story (see Iyengar 1991; Gross 2008; Hevron 2013). Two types of narrative frames, episodic and thematic, are particularly relevant to the present discussion of the effects of judicialization. Episodic frames present issues by using a specific example, case study, or event-oriented report, whereas thematic frames place issues into broader context. An example of an episodic frame would be a newspaper article that examines the issue of homelessness through a narrow lens, perhaps highlighting a single homeless person’s everyday struggles as well as the specific reasons why that person is homeless. A thematically-framed article might paint an expansive picture of homelessness in America, citing statistics about its prevalence, touching on its root causes, and perhaps identifying ways to successfully combat it.
Iyengar (1991) argues that episodic and thematic frames can lead readers to differing cognitive reactions in three areas: (1) the attribution of responsibility for the problem discussed in the coverage; (2) the support of government policies designed to address the problem; and (3) the level of understanding of the problem.7 In addition to cognitive effects, a growing body of research has found that episodic and thematic frames can lead consumers to different emotional, or affective, reactions (see Aarøe 2011; Gross 2008; Hevron 2013).

4.1.2. Tort Reform

The rise of judicialization in the United States over the second half of the twentieth century has led to a political backlash, particularly in the area of tort law (Daniels and Martin 2000, 2004, 2015; Gavin 2008). Early conceptions of tort law used strict liability, which was born of a time when little recourse existed for negative externalities that were becoming increasingly prevalent due to industrialization. By the middle of the twentieth century, judges and the public began to recognize that the prevailing state of negligence and contract principles afforded little protection to consumers. In two landmark cases, Escola v. Coca Cola Bottling Company (1944) and Greenman v. Yuba Power Products (1963), the California Supreme Court laid the groundwork for and then adopted strict liability for defective products. The resulting changes were enshrined in 1965 in the Restatement of Torts, Second, a common law treatise prepared by the American Law Institute (American Law Institute 1965), which opened the door to a dramatic increase in tort suits. Modern tort law was off and running. A hostile political response followed, as industry groups and other pro-payor interests sought to enact changes in common law civil justice systems that would reduce or otherwise mitigate the amount of tort litigation or monetary damages awarded to plaintiffs. By the late 1980s and 1990s, anti-litigation stances were commonplace in national and particularly state politics (see Burke 2002).
Two general courses of action exist for those who wish to “reform” the tort system. The first is to alter the rules of the game by reshaping tort doctrine, most often through legislation. (Though reformers have found little legislative success at the national level, they have been successful in changing laws at the state level.) The second course of action for reformers is to work from the ground up to change public norms, expectations, and opinion. Here, a diverse range of knowledge dissemination is in play, from commonly-told “lawyer” jokes, to press releases from organizations such as the American Tort Reform Association, to traditional media coverage of the tort system found in newspapers. Taken together, these sources of knowledge can lead people to a “common sense” stance that the tort system is broken (Haltom and McCann 2004). In other words, the public’s understanding of the system—and its reflections of fairness, justice, and compensation—is not simply the result of black letter law, but instead is contingent on social processes that shape, constitute, and reconstitute attitudes. It is here that tort reformers have been particularly successful, finding in mass media a vehicle through which they can change public opinion and alter the scope of the conflict over tort reform (Schattschneider 1960; Hevron 2013; Daniels and Martin 2004, 2015; Gavin 2008; Bailis and MacCoun 1996).
These efforts have been referred to as “stealth tort reform,” or, the successes that tort reformers, outside the courtroom and aided by common storytelling and framing devices employed by journalists in their coverage of tort reform, have had in moving public opinion toward their cause (Gavin 2008, p. 431). Tort reformers have been explicit about this goal. A 1993 New York Times article quotes a representative from a tort reform organization in New Jersey, who says: “What we want to do is raise public awareness about the problem and try to motivate individuals and organizations to make their voice heard over the trial bar” (Romano 1993, p. NJ1). Their efforts have paid off. After several post-war decades in which Americans brought increasing numbers of tort claims to court, public opinion has shifted in favor of those who wish to “reform” the tort system and the defendants whose interests they represent (see Galanter 1974; Gavin 2008; Kagan 2001; Nielsen and Beim 2004).8
Much of the knowledge that Americans possess about the courts and legal system is driven by their portrayals in popular culture. In deciding which aspects of the court system deserve public attention, journalists (and the market and institutional forces to which they are subjected) are a key component in the construction of popular legal knowledge. They disseminate information about law and courts to the public, packaging it in ways that they hope will draw public interest (Haltom 1998; Haltom and McCann 2004; see also Roberts and Doob 1990; Yankelovich and White, Inc. 1978). In Distorting the Law, Haltom and McCann (2004) argue that tort reformers have found a willing partner in mass media, which have a penchant for framing stories episodically. The reliance on episodic frames, when combined with the inclination for media to sacrifice complexity for simplicity, nuance for black-and-white proclamations, and systemic observations for individualized stories, means that coverage of the tort system is potentially skewed and distorted (see also Bennett 2016).
The paradigmatic example of an episodic story designed to draw the public’s interest (while simultaneously misleading it) is the infamous McDonald’s “hot coffee” case. As Haltom and McCann (2004) point out, Liebeck v. McDonald’s Restaurants earned an enormous amount of attention, from newspapers and television news to late-night talk show hosts. However, lost in the outrage over the multimillion dollar punitive damages award to Liebeck were key details of her story, including the severity of her injuries, the enormous profitability of McDonald’s, and the limited scope of her original request for damages, which was for the company to simply cover her medical bills.
Haltom and McCann are forthright about the nature of their analysis in Distorting the Law, arguing that they explore the mass production of legal knowledge, rather than the various ways that knowledge can become meaningful to citizens. They write: “To the extent that widely circulated story lines figure prominently in the cognitive archives from which media-attentive citizens actively construct legal meaning, the narratives we identify can be expected to matter a great deal” (Haltom and McCann 2004, p. 13). As the next section demonstrates, experiments, which allow researchers to exert a high level of control over the data collection environment (thus avoiding some of the dangers of relying on observational data to infer causality), are a potent tool in addressing the impacts of skewed media coverage of the tort system.

4.2. Methodology and Data

4.2.1. Design

I conducted a survey experiment to explore how one aspect of judicialization—uniquely skewed media coverage—can affect people’s views about legislative efforts to “reform” the tort system. The survey experiment drew subjects from a sample of students at a private university in California and was conducted in 2013. It was designed as a standard pretest/posttest mixed-factorial 2 × 2 design with a control condition. The treatment conditions varied along two dimensions: frame (episodic/thematic) and tone (pro-reform/anti-reform). All subjects received a piece of introductory text and were then randomly assigned to one of five conditions: (1) an episodically-framed article that was in favor of passing legislation to reform the tort system, (2) an episodically-framed article that was against tort reform, (3) a thematically-framed article that was in favor of tort reform, (4) a thematically-framed article that was against tort reform, or (5) a control condition that received only the introductory text. The treatments consisted of a vignette about the potential passage of a fictional piece of tort reform legislation by Congress (the “Class Action Fairness Act of 2013”).9
The introductory text was:
The Class Action Fairness Act of 2013 is a piece of legislation currently under debate in the U.S. Congress.
Most who keep a close watch on Congress expect it to pass during the next legislative session. It will curb the amount of damages that juries can award in civil lawsuits and will make it more difficult for people to file class action lawsuits against corporations. The necessity of the Act has been much debated by groups representing consumers, businesses, and attorneys.
In a debate article in the New York Times on March 9, 2013, the following point of view appeared …
The two episodically-framed treatments presented either the case for or against the passage of a piece of tort reform legislation by providing personalized examples of the tort system’s problems (its high costs, inefficiencies, and outsized jury awards that appear to defy common sense), or personalized examples of the tort system’s comparatively heroic side (such as providing an avenue for bereaved parents to be compensated for the wrongful death of a child due to corporate negligence). On the other hand, the two thematically-framed treatments used statistics and context to argue for or against the passage of the legislation, citing studies and statistics that either confirmed the brokenness of the tort system or that these claims were overblown.10 (An appendix of the language used in the treatments is available from the author.)
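As a concrete illustration, the following minimal Python sketch shows one way random assignment to the five conditions could be implemented; the condition labels paraphrase the design described above, and the mechanics are a hypothetical sketch rather than the instrument actually used in the study.

import random

CONDITIONS = [
    ("episodic", "pro_reform"),
    ("episodic", "anti_reform"),
    ("thematic", "pro_reform"),
    ("thematic", "anti_reform"),
    ("control", None),           # introductory text only, no vignette
]

def randomize(n_subjects, seed=2013):
    """Assign each subject to one of the five conditions with equal probability."""
    rng = random.Random(seed)
    return [rng.choice(CONDITIONS) for _ in range(n_subjects)]

assignments = randomize(204)     # 204 completed cases are reported in the next subsection
print(assignments[:3])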

4.2.2. Data and Methods

Over two weeks in March 2013, 204 subjects completed the pretest, the experimental procedure, and the posttest. Sixty-seven percent were women. Fifty percent were white, twenty-six percent were Asian, seven percent were Hispanic, and four percent were black. Fifty-six percent identified as Democrats, twenty-three percent as Republicans, and sixteen percent as politically independent. Participants ranged in age from 18 to 24. (There were no statistically significant differences across treatment conditions on these characteristics.)
Participants received the stimuli in the semi-controlled setting of an Internet-based survey and were compensated with gift cards for participating. The procedure began with a pretest that measured demographic information, political knowledge, and political beliefs (including beliefs about tort law in the U.S.). Next, participants were randomly assigned one of the five treatment conditions (the four conditions described above and a control condition). Following the administering of the treatment, participants answered a series of post-test questions about the presence or absence of discrete emotions (pity, disgust, worry, anger, and sympathy). Research suggests that emotions may play a mediating role in the effectiveness of narrative frames (see Iyengar 1991; Gross 2008). For subjects who indicated that they felt one of these emotions after the treatment, a follow-up question was asked about the strength of the emotional response (measured on a nine-point Likert scale). (As the emotion questions were asked only in the post-test, the results could only illuminate between-case, rather than within-case, comparisons.)
A second mediator of framing effects may be prior beliefs (Druckman and Nelson 2003; Gross 2008; Slothuus 2008). Accordingly, respondents were asked in the pre-test and post-test about the degree to which they agreed with four statements about the tort system. These statements were adapted from research that examined jury verdicts in business trials, in which jurors were asked to respond to four statements to gauge their beliefs about the justice system (see Hans 2000). These statements concerned: (1) the number of frivolous lawsuits in the United States; (2) the potential chilling effects of tort lawsuits on the development of new products; (3) the legitimacy of most grievances in tort suits; and (4) the effects of tort suits on product safety.
The post-test also included two measures designed to explore cognitive framing effects, which can be considered dependent variables. The first was designed to gauge general support for the core tenets of the tort reform legislation and the second asked whether the respondents believed that Congress should pass the proposed legislation (responses to both questions were coded on a 7-point Likert scale). (The findings were robust after recoding each dependent variable into a binary variable for ease of interpretation.)
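The following Python sketch illustrates the kind of robustness check mentioned above: collapsing a 7-point support item into a binary indicator and comparing one treatment condition with the control condition. The data, group sizes, and the cut-point used to define “support” are assumptions made for illustration, not values from the study.

import numpy as np
from statsmodels.stats.proportion import proportions_ztest

rng = np.random.default_rng(3)

# Hypothetical 7-point responses (1 = strongly oppose ... 7 = strongly support).
control = rng.integers(1, 8, 40)
thematic_anti = rng.integers(1, 8, 41)

def supports(responses):
    """Recode the 7-point item as binary; treating 5-7 as support is an assumption."""
    return (responses >= 5).astype(int)

counts = np.array([supports(thematic_anti).sum(), supports(control).sum()])
nobs = np.array([thematic_anti.size, control.size])

z_stat, p_value = proportions_ztest(counts, nobs)
diff = counts[0] / nobs[0] - counts[1] / nobs[1]
print(f"Difference in share supporting passage: {diff:+.2f} (p = {p_value:.3f})")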

4.2.3. Expectations

Extant literature on narrative frames suggests that episodic and thematic stories should cause different emotional responses, with episodic stories more likely to elicit affective reactions. The literature is split on whether episodic or thematic frames are more likely to be persuasive (see Iyengar 1991; cf. Aarøe 2011; Gross 2008), yet, as we have seen, Haltom and McCann, along with others who have surmised about the effects of “tort tale” coverage on attitudes, implicitly argue that episodically framed coverage that highlights (or distorts) the negative aspects of the tort system has been a particularly potent weapon of tort reformers. This suggests that episodically framed articles that are pro-tort reform should, at least in this case, be more persuasive.

4.2.4. Results

The results of the experiment suggest that episodic framing is indeed likely to engender emotional responses relative to a control condition on the same topic. This comports with anecdotal claims made by non-experimental research on the effects of tort reform reporting, which indicate that “event-centered” tort reform reporting is more likely to lead people to have affective reactions than statistics-based stories that privilege context over individualization. Interestingly, the episodic pro-reform treatment was more likely to elicit sympathy and pity than the episodic anti-reform treatment, which suggests that even in a sample that is more liberal than conservative, people are more likely to feel pity that is aimed at defendants in an episodic pro-reform article than pity aimed at plaintiffs in an episodic anti-reform condition. (See Table 1.) In other words, the results indicate that pro-tort reform narratives are likely to predominate in the process of socially constructing people’s feelings about the issue of tort reform. (This potentially spells trouble for anti-tort reformers who may attempt to counteract the negative episodic examples propagated by pro-reform groups with episodic examples of their own that tout the heroic side of American tort law.)
The results of the experiment regarding persuasion (as measured by the difference in support from the pre- to post-test and compared across treatment conditions) suggest that thematic framing is potentially more persuasive than episodic framing, at least in this case when paired with an anti-reform tone. Participants who received the thematic anti-reform frame were sixteen percent less likely than those in the control condition to believe that the proposed legislation was a good idea and twelve percent less likely to believe that Congress should pass it. (See Table 2.) These differences were statistically significant and represent the largest difference between any of the four treatment conditions and the control condition.
Regarding mediating cognitive beliefs, the results of the survey experiment are less clear. In the thematic anti-reform condition, which was the most persuasive of the four treatment conditions, only one of the four mediating belief variables significantly differed from pre-test to post-test. This was the “there are too many frivolous lawsuits” belief, which decreased by an average of three percent from the pre-test to post-test. (See Table 3.) This may suggest that anti-tort reformers should focus their thematic arguments on the fact that there are not as many “frivolous” lawsuits as tort tale coverage would indicate. Other belief variables had significant differences between the pre-test and post-test (such as the belief that lawsuits hurt product development in both episodic and thematic pro-reform conditions), yet these differences did not significantly alter people’s views about the passage of the fictional piece of tort reform legislation.
Along the same lines, when comparing the average post-test beliefs to the control condition, none of the four beliefs in the thematic anti-reform condition (which, again, was the most persuasive of the four treatment conditions) differed significantly from the control condition. (See Table 4.) This suggests that perhaps the belief variables examined in this experiment simply do not serve as mediating variables when the outcome variable is persuasion. On the other hand, several belief variables in other, less persuasive treatment conditions did differ significantly compared to the control condition. However, these changes in beliefs did not translate to—or mediate—persuasion effects. For example, the subjects who received the episodic pro-reform treatment were more than twice as likely as those in the control condition (fifty-two percent to twenty-five percent) to believe that there are too many frivolous lawsuits and sixteen percent less likely (thirty-seven percent to fifty-two percent) to believe that most who sue have legitimate grievances.

4.2.5. Analysis

This survey experiment demonstrates the usefulness of experiments in moving the study of judicialization from the second to the third step in the research enterprise: from hypothesis generation to hypothesis testing. In addition to advancing our understanding of the impacts of narrative frames and their likelihood to elicit emotional responses and persuade, these results may also help to explain why, counter to expectations, thematic frames about tort reform are more common in newspaper coverage than episodic frames. Though the subject of the content analysis in the observational study described earlier in this article (see Barnes and Hevron 2018) was different from that of the content analysis that guided this survey experiment, in that study we found a similarly high percentage of thematic or neutrally-framed articles about asbestos litigation (seventy-nine percent to twenty-one percent).
By subjecting theories about tort coverage that had previously been only anecdotal in nature (see Bailis and MacCoun 1996; Haltom and McCann 2004) to empirical testing, this survey experiment advances our knowledge of how media coverage can affect people’s opinions about tort reform, which is a hypothesized effect of judicialization. At least in this case, the results suggest that thematic frames—particularly those that are anti-tort reform—are generally more persuasive than episodic frames, a finding that intimates that journalists—who tend to write thematic stories more often than episodic ones—have a sense about their persuasiveness that academics who bemoan the coverage of tort reform do not.
Additionally, respondents in the episodic treatment conditions were more likely to report having discrete emotional reactions than those in the thematic treatment conditions. Pity and sympathy were particularly likely to be elicited, which suggests that these emotional responses may play an important role in determining why “tort tales” prove so resonant. Future research should pay careful attention to differences between sympathy and pity. Despite sharing a common valence, sympathy proved to be statistically significant across both episodic treatment conditions, whereas pity did not.
However, this experiment leaves other questions unanswered. More work must be done to better understand the role of mediating variables, such as emotion and prior beliefs, which can affect the persuasiveness of frames. Framing scholars are keenly interested in these mediating relationships, which have been referred to as the “black box of causality” (see Imai et al. 2010, 2011; Imai and Yamamoto 2013). Although experiments are quite useful in helping researchers determine whether a treatment causes changes in outcomes, they cannot tell us how or why. Scholars have used various methods to tease out the relationships of mediating variables, such as structural equation models that examine the statistical significance of corresponding path coefficients (Baron and Kenny 1986; Druckman and Nelson 2003; Brader et al. 2008) and, more recently, parallel encouragement designs (Imai and Yamamoto 2013). Additionally, scholars must think critically in the theory-building stage to identify variables that can potentially mediate persuasion, whether those variables are cognitive or affective. Finally, as in all pre-test/post-test designs, the interaction of pre-testing and treatment is a significant threat to external validity. The pre-test can sensitize participants so that they respond to the treatment differently than they would have with no pre-test.
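As a schematic illustration of the mediation logic referenced above, the sketch below walks through a Baron and Kenny (1986) style check in Python using simulated data; the variable names (a treatment indicator, an emotion score, and a persuasion outcome) are hypothetical, and the code is a sketch of the procedure, not an analysis of the survey data reported here.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(4)
n = 200

treatment = rng.integers(0, 2, n)                  # e.g., a framed article vs. control
emotion = 0.8 * treatment + rng.normal(0, 1, n)    # candidate mediator (e.g., sympathy)
persuasion = 0.5 * emotion + 0.2 * treatment + rng.normal(0, 1, n)
df = pd.DataFrame({"treatment": treatment, "emotion": emotion, "persuasion": persuasion})

total = smf.ols("persuasion ~ treatment", df).fit()            # step 1: total effect
a_path = smf.ols("emotion ~ treatment", df).fit()              # step 2: treatment -> mediator
direct = smf.ols("persuasion ~ treatment + emotion", df).fit() # step 3: direct effect

# If the mediator matters, the direct effect shrinks relative to the total effect.
print("total effect:  ", round(total.params["treatment"], 2))
print("a path:        ", round(a_path.params["treatment"], 2))
print("direct effect: ", round(direct.params["treatment"], 2))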

5. Discussion

Though the results described here are compelling, much work remains to be done. First, each of the studies described in this article only examines one particular aspect of judicialized policymaking. Barnes and Hevron (2018) train their focus on media coverage of injury compensation policy in the United States, arguing that litigation-based policies are likely to provoke negative episodic coverage at a higher rate than non-litigation-based policies that are otherwise similar. Future research should expand the analysis beyond injury compensation to investigate whether this finding persists across other types of judicialized policies. Similarly, the survey experiment described in this article also examines one particular aspect of judicialized policymaking: media coverage of the tort backlash and its effects on attitudes. Additional empirical testing should be done to investigate whether framed information about other aspects of judicialized policymaking causes similar reactions among the public.
Second, the survey experiment relied on a “convenience sample” of college students. Samples relying on conveniently available populations of university students are both quite common and potentially problematic (see Sears 1986). Population-based surveys of a representative sample (i.e., not only university students) are an alternative approach, yet they can be prohibitively expensive for researchers. Opt-in samples, the most common source of which is Amazon’s Mechanical Turk (MTurk), provide yet another option for practitioners of survey experiments. In opt-in samples, which are becoming increasingly popular, respondents self-select into a study rather than being drawn from a representative sample. MTurk has rapidly grown in popularity due to its ease of use and relative affordability for researchers (up to thirty times less expensive than population-based samples in a recent comparison; see Mullinix et al. 2015, p. 4). Nascent research suggests that the reliability of conclusions drawn from opt-in samples on MTurk is comparable to that of population-based sampling (Mullinix et al. 2015). Public law scholars who attempt to understand the effects of judicialization should consider employing opt-in samples in survey experiments.
Experimental research on judicialization—like all experimental research—necessitates that researchers pay close attention to internal and external validity when designing studies. For observational studies using the tenets of experiments to guide case selection and theory, it is essential to find cases that are well-matched. As we have seen, judicialization scholars have tended to design and publish studies that make causal claims, yet do not possess any variation on the treatment. These studies, despite their significant descriptive contributions, are somewhat limited. Next, although observational studies guided by experiments can offer compelling evidence of judicialization’s unique effects, they are less suited for the identification of the mechanisms that underpin these effects (see Barnes and Hevron 2018). It is here that other modes of inquiry in public law can help to fill in the gaps, such as process tracing and comparative case studies. In the case of media coverage of judicialization, such a study would require different types of analysis, perhaps focusing on the causal pathways that go into the building of frames rather than differences in frame type (see Carragee and Roefs 2004; see generally Barnes and Weller 2014; Mahoney 2012).
Questions of internal validity and generalizability affect all experimental media effects research. Despite admirable efforts, such as Iyengar’s 1991 study on the effects of television news in which he recreated a typical living room with a television set and surreptitiously embedded treatments in otherwise unaltered national news programs, it is difficult—if not impossible—for researchers to replicate in a controlled environment the conditions under which most people consume media. The so-called “Hawthorne Effect,” which is the tendency for people to modify an aspect of their behavior when they know that they are being studied, is also difficult to overcome, particularly in laboratory experiments on media effects (see Levitt and List 2011). Researchers who employ framed media coverage as treatments must ensure that the treatments hew closely to reality. Using systematic content analysis of media coverage to inform the content and structure of treatments is a good start, but difficult decisions remain when designing treatments that are meant to not only be precise, but also reflective of the complexity of media coverage and judicialization.
At least in the case of episodic and thematic framing and their use in tort tale coverage, more work must be done to ascertain the effects of thematic frames. As we have seen, negative episodic stories are often in the crosshairs of those who lament the nature of media coverage of litigation in the United States, yet we find that the thematic coverage of litigation, despite being more contextual than the episodic coverage, still misses some of the important reasons why people are forced to turn to the courts to solve policy disputes, such as the shrinking and retrenchment of the welfare state (Barnes and Hevron 2018; see also Hacker 2004).
Finally, if not motivated by carefully-constructed theory, experiments, at best, can fail to tell us very much of value. At worst, they can be misleading. It is crucial that public law scholars—including those who are concerned with judicialization and its effects—continue to think creatively about employing a diverse range of methodologies. This also requires that those who use process tracing or case studies to qualitatively illuminate important processes be cautious about claiming cause and effect. Moreover, it necessitates that researchers who employ experiments think carefully about designing studies that make use of the valuable data gathered by qualitative scholars.

6. Conclusions

Public law scholars have succeeded in identifying judicialization as an important part of the modern policymaking process. This article argues that in order to advance this important research agenda, we must convincingly isolate its impacts. Experiments can be of great use in this endeavor. Failure to adopt our most powerful analytic tools to isolate judicialization’s impacts has several drawbacks. First, it impedes our ability to advance our understanding of the effects of hyper-active law and courts in American politics. Second, it thwarts our efforts to convince fellow social scientists of the importance of judicialization. Experiments offer a way forward and can be useful in addressing at least two categories of effects of judicialization. The first is its effect on politics, such as the content of media coverage of judicialized policies (Barnes and Hevron 2018) and the politics surrounding the creation of judicialized policies (Barnes and Burke 2015). The second is its effect on attitudes, such as the framing effects of tort reform coverage.
Scholars of judicialization also have much to offer policy-makers. By utilizing experiments to probe judicialization’s effects, researchers can increase the likelihood that they will be called on to fulfill one of three roles that experiments offer researchers: “whispering in the ears of princes” (Roth 1995, p. 22; see also Druckman et al. 2011). For example, incorporating the tenets of experiments in observational research can convince activists that pursuing court-based policymaking will require them to grapple with difficult politics that may erode the cohesion between themselves and likeminded potential allies (Barnes and Burke 2015). Similarly, litigants and attorneys who seek court-based remedies for injury compensation should recognize that such policies will leave them vulnerable to consistently skewed media coverage that can have potentially deleterious effects on public opinion, including the opinions of plaintiffs’ attorneys and juries (Barnes and Hevron 2018). Activists should plan accordingly.
The use of experiments need not crowd out other research designs. Indeed, the observational study and original survey experiment presented here incorporated elements of case study selection (“matching”), content analysis, and regression analysis. The point is not that we should replace all other modes of inquiry with experiments. On the contrary, the time is ripe for experiments precisely because observational studies built on rich case studies and process tracing have generated compelling hypotheses that would benefit from experimental testing. When studies of judicialization’s effects lend themselves to experiments, as is the case with the hypothesized framing effects of judicialization, researchers should not shy away from using them. Conversely, when experiments are not an option, as in determining whether judicialization leads to uniquely skewed and distorted media coverage, we should use the logic of experiments to improve observational research designs. Finally, it is likely that the findings from empirical tests of hypotheses concerning judicialization’s effects will raise new questions that start the research cycle anew.
The research presented here demonstrates that experiments are not only compatible with other research designs, but can also be used to test theories that have been convincingly and carefully crafted through more qualitative methodologies, which have provided enormous value for understanding important aspects of law and courts. These insights include how and why judicial institutions have developed (Crowe 2012), the history of interactions between the elected branches of government and the federal judiciary (Engel 2011), and issues surrounding access to justice (Staszak 2015). These examples of public law scholarship, which stem from the tradition of American political development, seek to illuminate the processes that lead to outcomes (see also Mahoney 2012).
Public law has always been characterized by a generous methodological pluralism. Experiments, and observational studies that heed the tenets of experimentation, can—and should—be complementary approaches. Alexis de Tocqueville observed in Democracy in America that Americans are uniquely likely to turn questions of politics into questions of law (De Tocqueville [1835] 1999). As numerous studies of judicialization have demonstrated, this is more likely now than ever before. Judicialization casts a broad shadow over American politics. By incorporating tenets of experimental design in observational research and creatively using experiments, public law scholars can better illuminate its important effects.

Acknowledgments

This material is based upon work supported by the National Science Foundation under Grant No. 53-7841-1701. The author gratefully acknowledges the anonymous reviewers for their suggestions, Jeb Barnes for his invitation to explore experiments and judicialized policymaking, and Ann Crigler and G. Thomas Goodnight for advice that greatly improved the survey experiment discussed in the article. Any errors that remain are entirely his own.

Conflicts of Interest

The author declares no conflict of interest.

References

  1. Aarøe, Lene. 2011. Investigating Frame Strength: The Case of Episodic and Thematic Frames. Political Communication 28: 207–26. [Google Scholar] [CrossRef]
  2. Abel, Richard L. 1987. The Real Tort Crisis—Too Few Claims. Ohio State Law Journal 48: 443–67. [Google Scholar]
  3. American Law Institute. 1965. Restatement of the Law, Second, Torts, 2nd ed. Saint Paul: American Law Institute Publishers. [Google Scholar]
  4. Angrist, Joshua D., and Jörn-Steffen Pischke. 2009. Mostly Harmless Econometrics. Princeton: Princeton University Press. [Google Scholar]
  5. Bailis, Daniel, and Robert MacCoun. 1996. Estimating Risks with the Media as Your Guide: A Content Analysis of Media Coverage of Tort Litigation. Law and Human Behavior 20: 419–44. [Google Scholar] [CrossRef]
  6. Barclay, Scott, and Andrew R. Flores. 2017. Policy Backlash: Measuring the Effects of Policy Venues Using Public Opinion. Indiana Journal of Law and Social Equality 5: 391. [Google Scholar]
  7. Barnes, Jeb. 2007. Rethinking the Landscape of Tort Reform: Lessons from the Asbestos Case. Justice Systems Journal 28: 157–81. [Google Scholar]
  8. Barnes, Jeb, and Thomas Burke. 2015. How Policy Shapes Politics: Rights, Courts, Litigation, and the Struggle Over Injury Compensation. New York: Oxford University Press. [Google Scholar]
  9. Barnes, Jeb, and Parker Hevron. 2018. Framed? Judicialization and the Risk of Negative Episodic Media Coverage. Law & Social Inquiry. [Google Scholar] [CrossRef]
  10. Barnes, Jeb, and Nicholas Weller. 2014. Finding Pathways: Mixed-Method Research for Studying Causal Mechanisms. New York: Cambridge University Press. [Google Scholar]
  11. Baron, Reuben M., and David A. Kenny. 1986. The Moderator-Mediator Variable Distinction in Social Psychological Research: Conceptual, Strategic, and Statistical Considerations. Journal of Personality and Social Psychology 51: 1173–82. [Google Scholar] [CrossRef] [PubMed]
  12. Barth, Peter S. 1987. The Tragedy of Black Lung: Federal Compensation for Occupational Disease. Kalamazoo: Upjohn Institute for Employment Research. [Google Scholar]
  13. Beck, Nathaniel. 2010. Causal Process “Observation”: Oxymoron or (Fine) Old Wine. Paper presented at the 2006 Annual Meeting of the American Political Science Association, Philadelphia, PA, USA, September 1. [Google Scholar]
  14. Bennett, W. Lance. 2016. News: The Politics of Illusion, 9th ed. Chicago: University of Chicago Press. [Google Scholar]
  15. Berk, R. A., and P. J. Newton. 1985. Does Arrest Really Deter Wife Battery? An Effort to Replicate the Findings of the Minneapolis Spouse Abuse Experiment. American Sociological Review 50: 253–62. [Google Scholar] [CrossRef]
  16. Borah, Porismita. 2011. Conceptual Issues in Framing Theory: A Systematic Examination of a Decade’s Literature. Journal of Communication 61: 246–63. [Google Scholar] [CrossRef]
  17. Borah, Porismita. 2014. The Hyperlinked World: A Look at How the Interactions of News Frames and Hyperlinks Influence News Credibility and Willingness to Seek Information. Journal of Computer-Mediated Communication 19: 576–90. [Google Scholar] [CrossRef]
  18. Bork, Robert. 1997. The Tempting of America: The Political Seduction of the Law. New York: Free Press. [Google Scholar]
  19. Brader, Ted, Nicholas A. Valentino, and Eric Suhay. 2008. What Triggers Public Opposition to Immigration? Anxiety, Group Cues, and Immigration Threat. American Journal of Political Science 52: 959–78. [Google Scholar] [CrossRef]
  20. Broockman, David E., and Joshua Kalla. 2018. The Minimal Persuasive Effects of Campaign Contact in General Elections: Evidence from 49 Field Experiments. American Political Science Review 112: 148–66. [Google Scholar]
  21. Bumiller, Kristin. 1998. Body Images: How Does the Body Matter in Legal Imagination? In How Does Law Matter? Edited by B. G. Garth and A. Sarat. Chicago: Northwestern University Press. [Google Scholar]
  22. Burke, Thomas. 2002. Lawyers, Lawsuits, and Legal Rights: The Struggle over Litigation in American Society. Berkeley: University of California Press. [Google Scholar]
  23. Butler, Daniel M., and David E. Broockman. 2011. Do Politicians Racially Discriminate Against Constituents? A Field Experiment on State Legislators. American Journal of Political Science 55: 463–77. [Google Scholar] [CrossRef]
  24. Campbell, Donald T., and Harold L. Ross. 1968. The Connecticut Crackdown on Speeding: Time-Series Data in Quasi-Experimental Analysis. Law & Society Review 3: 33–54. [Google Scholar]
  25. Campbell, Donald T., and Julian C. Stanley. 1963. Experimental and Quasi-Experimental Designs for Research. New York: Wadsworth Publishing. [Google Scholar]
  26. Carragee, Kevin, and Wim Roefs. 2004. The Neglect of Power in Recent Framing Research. Journal of Communication 54: 214–33. [Google Scholar] [CrossRef]
  27. Carroll, Stephen J., Deborah Hensler, Jennifer Gross, Elizabeth M. Sloss, Matthias Schonlau, Allan Abrahamse, and J. Scott Ashwood. 2005. Asbestos Litigation. Santa Monica: RAND Institute for Civil Justice. [Google Scholar]
  28. Cattaneo, Matias D., and Juan Carlos Escanciano, eds. 2017. Regression Discontinuity Designs: Theory and Applications. Bingley: Emerald Publishing Limited. [Google Scholar]
  29. Chilton, Alex S., and Dustin Tingley. 2014. Why the Study of International Law Needs Experiments. Columbia Journal of Transnational Law 52: 174–237. [Google Scholar]
  30. Chong, Dennis, and James N. Druckman. 2007. Framing Theory. Annual Review of Political Science 10: 103–26. [Google Scholar] [CrossRef]
  31. Chong, Dennis, and James N. Druckman. 2010. Dynamic Public Opinion: Communication Effects Over Time. American Political Science Review 104: 663–680. [Google Scholar] [CrossRef]
  32. Crowe, Justin. 2012. Building the Judiciary: Law, Courts, and the Politics of Institutional Development. Princeton: Princeton University Press. [Google Scholar]
  33. Daniels, Stephen, and Joanne Martin. 2000. The Impact That It Has Had Is Between People’s Ears: Tort Reform, Mass Culture, and Plaintiffs’ Lawyers. DePaul Law Review 50: 453–78. [Google Scholar]
  34. Daniels, Stephen, and Joanne Martin. 2004. The Strange Success of Tort Reform. Emory Law Journal 5: 1225–62. [Google Scholar]
  35. Daniels, Stephen, and Joanne Martin. 2015. Tort Reform, Plaintiffs’ Lawyers, and Access to Justice. Lawrence: University of Kansas Press. [Google Scholar]
  36. De la Cuesta, Brandon, and Kosuke Imai. 2016. Misunderstandings about the Regression Discontinuity Design in Close Elections. Annual Review of Political Science 19: 375–96. [Google Scholar] [CrossRef]
  37. De Tocqueville, Alexis. 1999. Democracy in America. New York: Wordsworth Editions Ltd. First published 1835. [Google Scholar]
  38. Derthick, Martha. 2005. Up in Smoke: From Legislation to Litigation in Tobacco Politics, 2nd ed. Washington: Congressional Quarterly Press. [Google Scholar]
  39. Druckman, James N. 2001. The Implications of Framing Effects for Citizen Competence. Political Behavior 23: 225–256. [Google Scholar] [CrossRef]
  40. Druckman, James N., and Kjersten. R. Nelson. 2003. Framing and Deliberation: How Citizens’ Conversations Limit Elite Influence. American Journal of Political Science 47: 729–45. [Google Scholar] [CrossRef]
  41. Druckman, James N., Donald P. Green, James H. Kuklinski, and Arthur Lupia. 2006. The Growth and Development of Experimental Research in Political Science. American Political Science Review 100: 627–35. [Google Scholar] [CrossRef]
  42. Druckman, James N., Donald Green, and Arthur Lupia, eds. 2011. Cambridge Handbook of Experimental Political Science. New York: Cambridge University Press. [Google Scholar]
  43. Dunning, Thad. 2012. Natural Experiments in the Social Sciences: A Design-Based Approach (Strategies for Social Inquiry). New York: Cambridge University Press. [Google Scholar]
  44. Engel, Stephen M. 2011. American Politicians Confront the Court: Opposition Politics and Changing Responses to Judicial Power. New York: Cambridge University Press. [Google Scholar]
  45. Entman, Robert M. 1993. Framing: Toward Clarification of a Fractured Paradigm. Journal of Communication 43: 51–58. [Google Scholar] [CrossRef]
  46. Eldersveld, Samuel. 1956. Experimental Propaganda Techniques and Voting Behavior. American Political Science Review 50: 154–165. [Google Scholar] [CrossRef]
  47. Epp, Charles. 2009. Making Rights Real: Activists, Bureaucrats, and the Creation of the Legalistic State. Chicago: University of Chicago Press. [Google Scholar]
  48. Epstein, Lee, and Gary King. 2002. The Rules of Inference. University of Chicago Law Review 69: 1–133. [Google Scholar] [CrossRef]
  49. Farhang, Sean. 2008. Public Regulation and Private Lawsuits in the American Separation of Powers System. American Journal of Political Science 52: 821–39. [Google Scholar] [CrossRef]
  50. Farhang, Sean. 2010. The Litigation State: Public Regulation and Private Lawsuits in the United States. Princeton: Princeton University Press. [Google Scholar]
  51. Feeley, Malcolm, and Edward L. Rubin. 2000. Judicial Policymaking and the Modern State: How the Courts Reformed America’s Prisons. New York: Cambridge University Press. [Google Scholar]
  52. Fontana, David, and Donald Braman. 2012. Judicial Backlash or Just Backlash? Evidence from a National Experiment. Columbia Law Review 112: 1–69. [Google Scholar]
  53. Forbath, William E. 1991. Law and the Shaping of the American Labor Movement. Cambridge: Harvard University Press. [Google Scholar]
  54. Freedman, P. 1997. Framing the Partial Birth Abortion Debate: A Survey Experiment. Paper presented at the Annual Meeting of the Midwest Political Science Association, Chicago, IL, USA, August 28. [Google Scholar]
  55. Friedman, Milton. 1953. The Methodology of Positive Economics. Chicago: University of Chicago Press. [Google Scholar]
  56. Galanter, Marc. 1974. Why the “Haves” Come Out Ahead: Speculations on the Limits of Legal Change. Law and Society Review 9: 165–230. [Google Scholar] [CrossRef]
  57. Gamson, William A., and Andre Modigliani. 1987. The Changing Culture of Affirmative Action. In Research in Political Sociology. Edited by Richard G. Braungart and Margaret M. Braungart. Greenwich: JAI Press, vol. 3, pp. 137–77. [Google Scholar]
  58. Gash, Alison. 2015. Below the Radar: How Silence Can Save Civil Rights. New York: Oxford University Press. [Google Scholar]
  59. Gavin, Sandra F. 2008. Stealth Tort Reform. Valparaiso University Law Review 42: 431–60. [Google Scholar]
  60. Gerber, Alan S., and Donald Green. 2012. Field Experiments: Design, Analysis, and Interpretation. New York: W. W. Norton & Company. [Google Scholar]
  61. Gitlin, Todd. 1980. The Whole World is Watching: Mass Media in the Making and Unmaking of the New Left. Berkeley: University of California Press. [Google Scholar]
  62. Green, Donald, and Alan S. Gerber. 2004. Get Out the Vote!: How to Increase Voter Turnout. Washington: Brookings Institution. [Google Scholar]
  63. Greiner, D. James, Cassandra W. Pattanayak, and Jonathan Hennessy. 2012. The Limits of Unbundled Legal Assistance: A Randomized Study in Massachusetts District Court and Prospects for the Future. Harvard Law Review 126: 901–86. [Google Scholar] [CrossRef]
  64. Gross, Kimberly. 2008. Framing Persuasive Appeals: Episodic and Thematic Framing, Emotional Response, and Policy Opinion. Political Psychology 29: 169–92. [Google Scholar]
  65. Guthrie, Chris. 2000. Framing Frivolous Litigation: A Psychological Theory. University of Chicago Law Review 67: 163–216. [Google Scholar] [CrossRef]
  66. Guttentag, Michael D., Christine L. Porath, and Samuel N. Fraidin. 2008. Brandeis’ Policeman: Results from a Laboratory Experiment on How to Prevent Corporate Fraud. Journal of Empirical Legal Studies 5: 239–73. [Google Scholar] [CrossRef]
  67. Hacker, Jacob. 2004. Privatizing Risk Without Privatizing the Welfare State: The Hidden Politics of Social Policy Retrenchment in the United States. American Political Science Review 98: 243–60. [Google Scholar] [CrossRef]
  68. Haltom, William. 1998. Reporting on the Courts: How Mass Media Cover Judicial Actions. Chicago: Nelson-Hall. [Google Scholar]
  69. Haltom, William, and Michael McCann. 2004. Distorting the Law: Politics, Media, and the Litigation Crisis. Chicago: University of Chicago Press. [Google Scholar]
  70. Hans, Valerie P. 2000. Business on Trial: The Civil Jury and Corporate Responsibility. New Haven: Yale University Press. [Google Scholar]
  71. Hevron, Parker. 2013. The Affective Framing of Tort Reform: Toward a Theory of the Mediating Effects of Emotion on Attitude Formation. Unpublished Doctoral dissertation. University of Southern California, Los Angeles, CA, USA. [Google Scholar]
  72. Ho, Daniel E., and Donald B. Rubin. 2011. Credible Causal Inference for Empirical Legal Studies. Annual Review of Law and Social Science 7: 17–40. [Google Scholar] [CrossRef]
  73. Holland, Paul W. 1986. Statistics and Causal Inference. Journal of the American Statistical Association 81: 945–60. [Google Scholar] [CrossRef]
  74. Hutchings, Vincent L., and Spencer Piston. 2011. The Determinants and Political Consequences of Prejudice. In The Cambridge Handbook of Experimental Political Science. Edited by James N. Druckman, Donald P. Green and James H. Kuklinski. New York: Cambridge University Press. [Google Scholar]
  75. Imai, Kosuke, and Teppei Yamamoto. 2013. Identification and Sensitivity Analysis for Multiple Causal Mechanisms: Revisiting Evidence from Framing Experiments. Political Analysis 21: 141–71. [Google Scholar] [CrossRef]
  76. Imai, Kosuke, Luke Keele, and Teppei Yamamoto. 2010. Identification, Inference, and Sensitivity Analysis for Causal Mediation Effects. Statistical Science 25: 51–71. [Google Scholar] [CrossRef]
  77. Imai, Kosuke, Luke Keele, Dustin Tingley, and Teppei Yamamoto. 2011. Unpacking the Black Box of Causality: Learning about Causal Mechanisms from Experimental and Observational Studies. American Political Science Review 105: 765–89. [Google Scholar] [CrossRef]
  78. Iyengar, Shanto. 1991. Is Anyone Responsible? How Television Frames Political News. Chicago: University of Chicago Press. [Google Scholar]
  79. Kagan, Robert. 2001. Adversarial Legalism: The American Way of Law. Cambridge: Harvard University Press. [Google Scholar]
  80. Kahneman, Daniel, and Amos Tversky. 1979. Prospect Theory: An Analysis of Decision under Risk. Econometrica 47: 263–91. [Google Scholar] [CrossRef]
  81. Keck, Thomas. 2014. Judicialized Politics in Polarized Times. Chicago: University of Chicago Press. [Google Scholar]
  82. Kinder, Donald R., and Lynn M. Sanders. 1990. Mimicking Political Debate with Survey Questions: The Case of White Opinion on Affirmative Action for Blacks. Social Cognition 8: 73–103. [Google Scholar] [CrossRef]
  83. Kirkland, Anna. 2016. Vaccine Court: The Law and Politics of Injury. New York: New York University Press. [Google Scholar]
  84. Levitt, Steven D., and John D. List. 2011. Was There Really a Hawthorne Effect at the Hawthorne Plant? An Analysis of the Original Illumination Experiments. American Economic Journal: Applied Economics 3: 224–38. [Google Scholar] [CrossRef]
  85. Mahoney, James. 2012. The Logic of Process Tracing Tests in the Social Sciences. Sociological Methods and Research 41: 570–97. [Google Scholar] [CrossRef]
  86. Manoff, Robert K. 1986. Writing the News (By Telling the “Story”). In Reading the News: A Pantheon Guide to Popular Culture. Edited by R. K. Manoff and M. Schudson. New York: Pantheon Books, pp. 197–229. [Google Scholar]
  87. McDermott, Rose. 2002. Experimental Methods in Political Science. Annual Review of Political Science 5: 31–61. [Google Scholar] [CrossRef]
  88. McDermott, Rose. 2011. Internal and External Validity. In Cambridge Handbook of Experimental Science. Edited by James N. Druckman, Donald P. Green, James H. Kuklinski and Arthur Lupia. New York: Cambridge University Press, pp. 27–40. [Google Scholar]
  89. Mendelberg, Tali. 2001. The Race Card: Campaign Strategy, Implicit Messages, and the Norm of Equality. Princeton: Princeton University Press. [Google Scholar]
  90. Morgan, Stephen L., and Christopher Winship. 2014. Counterfactuals and Causal Inference: Methods and Principles, 2nd ed. New York: Cambridge University Press. [Google Scholar]
  91. Mullinix, Kevin J., Thomas J. Leeper, James N. Druckman, and Jeremy Freese. 2015. The Generalizability of Survey Experiments. Journal of Experimental Political Science 2: 109–38. [Google Scholar] [CrossRef]
  92. Nelson, Thomas E., and Donald R. Kinder. 1996. Issue Frames and Group-Centrism in American Public Opinion. The Journal of Politics 58: 1055–78. [Google Scholar] [CrossRef]
  93. Nelson, Thomas E., Zoe M. Oxley, and Rachel A. Clawson. 1997. Toward a Psychology of Framing Effects. Political Behavior 19: 221–46. [Google Scholar] [CrossRef]
  94. Nielsen, Laura Beth, and Aaron Beim. 2004. Media Misrepresentation: Title VII, Print Media, and Public Perceptions of Discrimination Litigation. Stanford Law & Policy Review 15: 101–30. [Google Scholar]
  95. Nockleby, John, and Shannon Curreri. 2004. 100 Years of Conflict: The Past and Future of Tort Retrenchment. Loyola Los Angeles Law Review 38: 1021–92. [Google Scholar]
  96. Persson, Torsten, and Guido Tabellini. 2002. Do Constitutions Cause Large Governments? Quasi-Experimental Evidence. European Economic Review 46: 908–18. [Google Scholar] [CrossRef]
  97. Platt, John R. 1964. Strong Inference. Science 146: 347–53. [Google Scholar] [CrossRef] [PubMed]
  98. Popkin, Samuel L. 1994. The Reasoning Voter: Communication and Persuasion in Presidential Campaigns. Chicago: University of Chicago Press. [Google Scholar]
  99. Qian, Yi. 2007. Do National Patent Laws Stimulate Domestic Innovation in a Global Patenting Environment? A Cross-Country Analysis of Pharmaceutical Patent Protection, 1978–2002. Review of Economics and Statistics 89: 436–53. [Google Scholar] [CrossRef]
  100. Rabkin, Jeremy. 1989. Judicial Compulsions: How Public Law Distorts Public Policy. New York: Basic Books. [Google Scholar]
  101. Reese, Stephen D., Oscar H. Gandy, and August E. Grant, eds. 2001. Framing Public Life: Perspectives on Media and Our Understanding of the Social World. Mahwah: Lawrence Erlbaum & Associates. [Google Scholar]
  102. Roberts, Julian V., and Anthony N. Doob. 1990. News Media Influences on Public Views of Sentencing. Law and Human Behavior 14: 451–68. [Google Scholar] [CrossRef]
  103. Romano, Jay. 1993. New Effort to Restrict Civil Suits is Started. New York Times, December 5, NJ1. [Google Scholar]
  104. Romer, Thomas, and Howard Rosenthal. 1978. Political Resource Allocation, Controlled Agendas, and the Status Quo. Public Choice 33: 27–43. [Google Scholar] [CrossRef]
  105. Rosenberg, Gerald N. 2008. The Hollow Hope: Can Courts Bring About Social Change? 2nd ed. Chicago: University of Chicago Press. [Google Scholar]
  106. Roth, Alvin E. 1995. Introduction to Experimental Economics. In The Handbook of Experimental Economics. Edited by J. H. Kagel and A. E. Roth. Princeton: Princeton University Press. [Google Scholar]
  107. Rubin, Donald. 1974. Estimating Causal Effects of Treatments in Randomized and Nonrandomized Studies. Journal of Educational Psychology 66: 688–701. [Google Scholar] [CrossRef]
  108. Sandler, Ross, and David Schoenbrod. 2003. Democracy by Decree: What Happens when the Courts Run Government. New Haven: Yale University Press. [Google Scholar]
  109. Schattschneider, E. E. 1960. The Semisovereign People: A Realist’s View of Democracy. New York: Wadsworth Publishing. [Google Scholar]
  110. Scheufele, Dietram A. 1999. Framing as a Theory of Media Effects. Journal of Communication 49: 103–22. [Google Scholar] [CrossRef]
  111. Sears, David O. 1986. College Sophomores in the Laboratory: Influence of a Narrow Database on Social Psychology's View of Human Nature. Journal of Personality and Social Psychology 51: 515–30. [Google Scholar] [CrossRef]
  112. Sekhon, Jasjeet S. 2007. The Neyman-Rubin Model of Causal Inference and Estimation via Matching Methods. In The Oxford Handbook of Political Methodology. Edited by Janet Box-Steffensmeir, Henry Brady and David Collier. New York: Oxford University Press. [Google Scholar]
  113. Seron, Carroll, Martin R. Frankel, Gregg Van Ryzin, and Jean Kovath. 2001. The Impact of Legal Counsel on Outcomes for Poor Tenants in New York City’s Housing Court: Results of a Randomized Experiment. Law & Society Review 35: 419–34. [Google Scholar]
  114. Shaw, Daron, Donald P. Green, James G. Gimpel, and Alan S. Gerber. 2012. Do Robotic Calls from Credible Sources Influence Voter Turnout or Vote Choice? Evidence from a Randomized Field Experiment. Journal of Political Marketing 11: 231–45. [Google Scholar] [CrossRef]
  115. Silverstein, Gordon. 2009. Law’s Allure: How Law Shapes, Constrains, Saves, and Kills Politics. New York: Cambridge University Press. [Google Scholar]
  116. Slothuus, Rune. 2008. More Than Weighting Cognitive Importance: A Dual-Process Model of Issue Framing Effects. Political Psychology 29: 1–23. [Google Scholar] [CrossRef]
  117. Staszak, Sarah. 2015. No Day In Court: Access to Justice and the Politics of Judicial Retrenchment. New York: Oxford University Press. [Google Scholar]
  118. Tomz, Michael, and Jessica L. Weeks. 2013. The Democratic Peace: An Experimental Approach. American Political Science Review 100: 1–31. [Google Scholar]
  119. Tversky, Amos, and Daniel Kahneman. 1981. The Framing of Decisions and the Psychology of Choice. Science 211: 453–57. [Google Scholar] [CrossRef] [PubMed]
  120. Yankelovich, Skelly, and White, Inc. 1978. Highlights of a National Survey of the General Public, Judges, Lawyers, and Community Leaders. In State Courts: A Blueprint for the Future. Edited by John T. Fetter. Washington: National Center for State Courts. [Google Scholar]
1. To avoid diving headlong into the welter of terms used to describe judicialized policymaking, this article refers to the complicated and multi-faceted phenomenon of court-based policymaking as “judicialization.” For a more extensive explanation of the logic behind this approach, see Barnes and Hevron (2018).
2. In real-world policy settings, the splitting of a group of people into beneficiaries and non-beneficiaries also raises ethical concerns. See generally Campbell and Stanley (1963).
3. A quasi-experiment involves the non-random assignment of a treatment. Policymakers can attempt to overcome the lack of randomization in a number of ways, from matching to a predetermined eligibility cut-off (see De la Cuesta and Imai 2016; Cattaneo and Escanciano 2017).
4. In addition to relying on litigation, the asbestos compensation regime incorporated approaches that were only somewhat judicialized, such as bankruptcy trusts. These trusts became a key feature of asbestos compensation starting in the early 1980s, after Johns Manville declared Chapter 11 bankruptcy to deal with the looming spectre of large jury awards. Other large companies with legal exposure soon followed suit.
5. Until 1969, the compensation of workers for industrial diseases such as black lung disease had been left to state governments and workers’ compensation programs (see Barth 1987 for an overview of the development of black lung disease compensation policy). The Farmington mine explosion in 1968 and sustained pressure by the United Mine Workers of America led to the passage of the Coal Mine Health and Safety Act of 1969. Today, the Black Lung Benefits Act is administered by the Division of Coal Mine Workers’ Compensation within the U.S. Department of Labor.
6. The “Asian disease” problem, first described by Tversky and Kahneman (1981), is a canonical example of an equivalency frame. Subjects who received information framed in terms of relative gains (in the Asian disease case, how many sick people would be saved by a treatment program) overwhelmingly picked the seemingly less risky option. When confronted with the negatively framed version of the same problem (how many people would die under the same treatment scenario), they tended to be risk-seeking. Both options, despite being framed differently, were mathematically equivalent (see the worked example following these notes).
7. Regarding the level of understanding of political problems, researchers have argued that episodic frames can lead to a “morselized” understanding of complex political problems (Iyengar 1991, p. 136).
8. Polls from the past three decades suggest that people believe not only that there are too many “frivolous” lawsuits, but also that there has been a veritable explosion in the amount of litigation (Daniels and Martin 2000, 2004; Haltom and McCann 2004). Other polling data describe a public that believes the current state of civil litigation is inherently unfair, favoring plaintiffs at the expense of “deep pocket” defendants, or that awards made by juries are excessive (Daniels and Martin 2004; Gavin 2008; Nockleby and Curreri 2004).
9. Subjects in the experiment were not told that the piece of legislation was fictitious.
10. The language used in the treatments to reflect narrative frame and tone was modeled on examples of tort reform coverage drawn from the New York Times from 1970 to 2013 (n = 55) and analyzed using latent content analysis. Articles were coded on multiple dimensions, including frame type (episodic/thematic), tone (positive/negative), interest group appearance (including whether the group was pro- or anti-tort reform, such as the American Trial Lawyers Association), the subject of the episodic frame (plaintiffs, defendants, corporations, doctors, etc.), and the type of litigation mentioned in the article (product liability, medical malpractice, etc.).
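As a worked illustration of the equivalency framing described in note 6, using the canonical numbers from Tversky and Kahneman (1981) (600 people at risk), the “sure” option equals the expected value of the “risky” option in both the gain and the loss frame; only the description of the outcomes changes:

```latex
\[
\text{Gain frame: } 200 \text{ saved} = \tfrac{1}{3}(600) + \tfrac{2}{3}(0)
\qquad
\text{Loss frame: } 400 \text{ die} = \tfrac{2}{3}(600) + \tfrac{1}{3}(0)
\]
```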
Figure 1. Adjusted Predictions of Episodic Coverage.
Table 1. Emotional Response by Treatment Condition.

Emotional Response (n) | Episodic Anti-Reform (57) | Thematic Anti-Reform (45) | Episodic Pro-Reform (32) | Thematic Pro-Reform (31) | Control (39)
Sympathy | 0.15 (0.27) | 0.10 (0.23) | 0.43 *** (0.38) | 0.05 (0.16) | 0.13 (0.25)
Pity | 0.14 (0.26) | 0.06 (0.16) | 0.28 ** (0.36) | 0.05 (0.15) | 0.12 (0.24)
Anger | 0.10 (0.22) | 0.17 * (0.28) | 0.17 (0.28) | 0.05 (0.15) | 0.07 (0.03)
Disgust | 0.19 (0.30) | 0.13 (0.27) | 0.20 (0.34) | 0.08 (0.20) | 0.16 (0.28)
Worry | 0.15 (0.27) | 0.24 (0.28) | 0.23 (0.32) | 0.12 (0.20) | 0.18 (0.29)
Note: Table entry is the mean emotional response by frame and tone, with standard deviation in parentheses. Emotional reactions are coded to range between 0 and 1, where 0 indicates that the respondent did not feel the emotional reaction and 1 indicates that they strongly felt the emotion. Asterisks indicate that the treatment condition differs significantly from the control condition (t-test on means): *** p < 0.01; ** p < 0.05; * p < 0.10.
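As a minimal sketch of the comparison summarized by the asterisks in the table above (a t-test of a treatment group’s mean against the control group’s mean), the snippet below uses simulated placeholder responses rather than the study’s data; because the article does not report whether variances were pooled, the sketch uses Welch’s two-sample t-test.

```python
# Illustrative only: simulated 0-1 "sympathy" responses standing in for the
# study's data, with group sizes matching Table 1 (n = 32 and n = 39).
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
episodic_pro_reform = rng.beta(2, 3, size=32)   # hypothetical treatment group
control = rng.beta(1, 7, size=39)               # hypothetical control group

t_stat, p_value = stats.ttest_ind(episodic_pro_reform, control, equal_var=False)
print(f"t = {t_stat:.2f}, two-tailed p = {p_value:.3f}")
```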
Table 2. Persuasiveness by Treatment Condition.

Dependent Variable (n) | Episodic Anti-Reform (38) | Episodic Pro-Reform (31) | Thematic Anti-Reform (43) | Thematic Pro-Reform (28) | Control Condition (37)
Legislation is a good idea | 0.46 * (0.19) | 0.46 * (0.18) | 0.38 *** (0.20) | 0.48 (0.19) | 0.54 (0.21)
Legislation should be passed | 0.48 (0.21) | 0.47 (0.16) | 0.40 *** (0.18) | 0.50 (0.18) | 0.52 (0.20)
Note: Opinion on the proposed tort reform legislation is coded to range from 0 to 1, where 1 represents those who strongly believe that Congress should pass the proposed tort reform legislation (standard deviations in parentheses). Asterisks indicate that a two-tailed t-test of means shows the framed condition differs significantly from the control condition: *** p < 0.01; ** p < 0.05; * p < 0.10.
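The 0–1 coding used throughout Tables 1–4 is a standard rescaling of an ordered response item to the unit interval. The sketch below assumes a 5-point agreement scale; the number of raw scale points is an assumption, as the article does not specify it.

```python
# Rescale a k-point ordered item to [0, 1]; the 5-point scale here is assumed.
raw_responses = [1, 2, 3, 4, 5]          # strongly disagree ... strongly agree
k = 5
rescaled = [(r - 1) / (k - 1) for r in raw_responses]
print(rescaled)                           # [0.0, 0.25, 0.5, 0.75, 1.0]
```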
Table 3. Belief Change by Treatment Condition.

Mediators (n) | Episodic Anti-Reform (38) | Episodic Pro-Reform (31) | Thematic Anti-Reform (43) | Thematic Pro-Reform (28) | Control Condition (37)
Too many frivolous lawsuits | 0.08 * (0.23) | 0.02 (0.11) | −0.03 * (0.15) | 0.09 * (0.23) | 0.02 (0.07)
Lawsuits hurt product development | −0.01 (0.14) | 0.06 *** (0.18) | −0.02 (0.14) | 0.06 *** (0.14) | −0.03 (0.08)
Most who sue have legitimate grievances | 0.08 * (0.23) | −0.06 (0.18) | 0.02 (0.21) | −0.01 (0.15) | 0.00 (0.19)
Lawsuits make society safer | 0.01 (0.20) | −0.01 (0.19) | 0.03 (0.18) | 0.02 (0.18) | 0.01 (0.12)
Note: Table entries are the average differences between the pretest and posttest for the four potentially mediating belief variables (standard deviations in parentheses). Asterisks indicate that the average treatment effect is significant at *** p < 0.01; ** p < 0.05; * p < 0.10 (two-tailed t-test).
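The entries in Table 3 appear to be pretest–posttest difference scores averaged within each condition; the sketch below shows that computation on placeholder values, not the study’s data.

```python
# Placeholder pretest and posttest belief scores (0-1 coded) for one condition.
import numpy as np

pretest = np.array([0.50, 0.75, 0.25, 1.00, 0.50])
posttest = np.array([0.75, 0.75, 0.50, 1.00, 0.50])

change = posttest - pretest              # positive values = belief strengthened
print(f"Mean change: {change.mean():.2f} (SD {change.std(ddof=1):.2f})")
```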
Table 4. Average Treatment Effect of Beliefs by Treatment Condition.

Mediators (n) | Episodic Anti-Reform (38) | Episodic Pro-Reform (31) | Thematic Anti-Reform (43) | Thematic Pro-Reform (28) | Control Condition (37)
Too many frivolous lawsuits | 0.27 (0.19) | 0.52 *** (0.38) | 0.25 (0.21) | 0.32 (0.21) | 0.25 (0.16)
Lawsuits hurt product development | 0.59 ** (0.33) | 0.47 (0.28) | 0.43 (0.29) | 0.49 (0.22) | 0.46 (0.19)
Most who sue have legitimate grievances | 0.68 ** (0.29) | 0.37 *** (0.18) | 0.48 (0.26) | 0.51 (0.21) | 0.53 (0.22)
Lawsuits make society safer | 0.69 (0.29) | 0.53 * (0.24) | 0.61 (0.23) | 0.68 (0.21) | 0.62 (0.20)
Note: Table entries are the posttest averages of the four mediating belief variables in the four treatment conditions and the control condition. Asterisks indicate that, compared to the control condition, the average treatment effect is significant at *** p < 0.01; ** p < 0.05; * p < 0.10 (two-tailed t-test).
