*Article* **High School Students' Epistemic Cognition and Argumentation Practices during Small-Group Quality Talk Discussions in Science**

**Liwei Wei <sup>1,</sup>\*, Carla M. Firetto <sup>2</sup>, Rebekah F. Duke <sup>3</sup>, Jeffrey A. Greene <sup>3</sup> and P. Karen Murphy <sup>1</sup>**

	- rduke23@live.unc.edu (R.F.D.); jagreene@email.unc.edu (J.A.G.)

**Abstract:** For high school students to develop scientific understanding and reasoning, it is essential that they engage in epistemic cognition and scientific argumentation. In the current study, we used the AIR model (i.e., Aims and values, epistemic Ideals, and Reliable processes) to examine high school students' epistemic cognition and argumentation as evidenced in collaborative discourse in a science classroom. Specifically, we employed a qualitative case study approach to focus on four small-group discussions about scientific phenomena during the Quality Talk Science (QT<sup>S</sup>) intervention, in which students regularly received explicit instruction on asking authentic questions and engaging in argumentation. In total, five categories of epistemic ideals and five categories of reliable processes were identified. Students demonstrated more instances of normative epistemic ideals and argumentative responses in the discussions after they received a revised scientific model for discussion and explicit instruction on argumentation. Concomitantly, there were fewer instances of students relying on a process of elimination to determine the correct scientific claim. With respect to the relationship of epistemic cognition to authentic questioning and argumentation, the use of epistemic ideals seemed to be associated with the initiation of authentic questions, and students' argumentation appeared to involve the use of epistemic ideals.

**Keywords:** epistemic cognition; argumentation; science discussions; Quality Talk

## **1. Introduction**

High school students must engage in the epistemic practices of science to develop their scientific understanding and reasoning [1]. It is not enough for them to read about and memorize the "facts" that have already been established by scientists. Science is an iterative, social process, and the transmission of scientific facts from teacher to student does not do justice to the realities of science practices. Indeed, in the contemporary digital world, where abundant unvetted information is easily created and spread [2], it is essential for students to develop reasoning skills and critically evaluate this information. Whether or not students choose to pursue a career in science, they must be armed with the ability to reason, problem-solve, and evaluate and justify arguments as they encounter scientific information in their daily lives. These abilities and practices are critical to navigating and effectively engaging in society. Rather than focusing on what science content students need to know, science education reforms have shifted the focus toward helping students understand how scientists observe the world and draw conclusions from their observations, thereby building knowledge [3,4].

**Citation:** Wei, L.; Firetto, C.M.; Duke, R.F.; Greene, J.A.; Murphy, P.K. High School Students' Epistemic Cognition and Argumentation Practices during Small-Group Quality Talk Discussions in Science. *Educ. Sci.* **2021**, *11*, 616. https://doi.org/10.3390/educsci11100616

Academic Editors: Moritz Krell, Andreas Vorholzer and Andreas Nehring

Received: 6 August 2021; Accepted: 30 September 2021; Published: 8 October 2021

**Publisher's Note:** MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

**Copyright:** © 2021 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).

In line with this shift, over the past 50 years researchers have examined students' epistemic cognition, that is, how they acquire, understand, justify, change, create, and use knowledge [5–9]. The ways in which individuals conceptualize the fundamental nature of how and what they know play an important role in learning and acquiring knowledge [10], particularly for science education [9] and layperson scientific literacy [11]. For instance, in an attempt to further understand and model epistemic cognition in science, Reith and Nehring tested and confirmed the *ScieNo*-framework by examining the relationship between key scientific reasoning competencies and views on the nature of scientific inquiry (NOSI) [12]. Empirically, a recent meta-analysis revealed that epistemic cognition interventions bolstered students' academic achievement in various ways and had the largest average effect size on argumentation among the academic achievement outcomes examined in the reviewed studies (*ES* = 1.047, *p* < 0.001) [13].

Indeed, both epistemic cognition and argumentation are core to scientific reasoning. According to Osborne, scientific reasoning abilities require "a meta-level knowledge of science and the epistemic features of science" (p. 274), which are necessary for learners to understand why certain scientific claims are warranted [14]. Similarly, Chinn and Sandoval argued that scientific reasoning requires that students acquire, understand, and use scientific practices and norms, which include various facets of epistemic cognition such as reliable processes for knowing and the skills to generate arguments that support knowledge claims [15]. In addition, the emphasis on fostering scientific reasoning skills requires that students engage in argumentation, during which they construct and evaluate scientific models [4]. During this process, students are expected to provide justifications based on evidence to support or refute claims, which is central to scientific reasoning [16]. In line with Upmeier zu Belzen and colleagues in this Special Issue, "modeling is a prominent style of scientific reasoning" [17] (p. 495).

Delving into these key facets of scientific reasoning, researchers have also identified a close relationship between epistemic cognition and argumentation. Indeed, epistemic cognition influences and supports argumentative reasoning; more complex or developed epistemic beliefs are related to better argumentative reasoning skills (e.g., production and evaluation of arguments; [18–22]). In turn, argumentation is often emphasized as a part of epistemic cognition interventions [23,24] and may also serve to promote epistemic cognition. As a case in point, Iordanou and Constantinou found that 11th-grade students who participated in evidence-focused, argumentative discourse activities in a Web-based learning environment increased their use of scientific evidence in their electronic dialogs with a peer, used more evidence to weaken the opponent's claims, and made explicit references to the source of evidence, whereas the comparison counterparts did not exhibit such improvements [25]. This suggests that students developed a more advanced epistemological understanding in science after engaging in sustained argumentation. However, given the complexity of epistemic cognition, scientific argumentation, and their interaction, further research is needed to clearly delineate the relations between collaborative argumentation and students' epistemic cognition.

Informed by this body of literature, we investigated the epistemic cognition practices of high school students during small-group science discussions over the course of a year-long intervention designed to develop students' argumentation and discourse skills. We chose to leverage small-group discussions to examine students' epistemic practices in science, given the established literature documenting positive effects of small-group discussions on students' scientific argumentation and critical-analytic thinking [26–28] as well as the aforementioned evidence regarding the relationship between argumentation and epistemic cognition in science [25]. Specifically, we conducted a qualitative case study of a small group of students engaging in four science discussions, with the goal of better understanding how epistemic cognition and scientific argumentation manifest and interact with one another, particularly as students learn more about argumentation and discourse. This research contributes to our understanding of how best to incorporate small-group discourse into science classrooms, engage students in the epistemic practices of science, and prepare students to think critically and analytically about the scientific information they encounter in their daily lives.

#### *1.1. Theoretical and Explanatory Framework: AIR Model of Epistemic Cognition*

To examine and analyze students' epistemic cognition as reflected in their small-group discourse in science classrooms, we employed Chinn and colleagues' AIR model of epistemic cognition (i.e., Aims and values, epistemic Ideals, and Reliable processes) [29,30] as the theoretical and explanatory framework of the current study. Chinn and colleagues proposed that epistemic cognition comprises three components [29,30]. The first component, *epistemic aims*, consists of goals related to the pursuit of epistemic ends or products, such as knowledge, understanding, explanations, true beliefs, scientific models, or rational arguments. For example, students who adopt the aim of summarizing explanations deemed normative by the field (i.e., knowledge) would necessarily engage in a given task differently from those whose epistemic aim was to achieve deep understanding of those explanations, such as the scientific reasoning that underlies such explanations [31,32].

The second component, *epistemic ideals*, represents the criteria or standards used to evaluate epistemic products, which are discipline-specific, context-specific, and even topic-specific. Students use epistemic ideals as justification for the adequacy of the epistemic products they construct or to evaluate the epistemic products of other individuals. For example, a scientific claim's adequacy as an epistemic product can be judged by how well it adheres to various science-based epistemic ideals, such as its fit with prior knowledge. Chinn and colleagues proposed five broad categories of epistemic ideals: (a) specification of the internal structure of an epistemic product, (b) connection to and coherence with other knowledge, (c) present and future connections to empirical evidence, (d) credibility of testimony, and (e) coherency and how well it has been communicated [30]. For example, when considering models of a scientific phenomenon, students may hold the epistemic ideal that "good models fit all the evidence," or "good models are parsimonious." Likewise, when evaluating a scientific argument, students may hold the epistemic ideal that "strong evidence addresses core parts of the model," or "good arguments are clearly communicated."

*Reliable epistemic processes*, the third component of the AIR model, are the methods by which knowledge and other epistemic products are constructed [29,30]. Reliable processes, strategies, and practices are those that consistently result in epistemic products that meet epistemic aims. Classification as to whether a process is reliable and appropriate depends largely on the discipline and context; such classifications are often contingent upon the circumstances in which processes are enacted, although certain processes are near-universally viewed as less reliable than others (e.g., relying on hearsay). In science, controlled experimentation and rigorous observation are often endorsed as reliable processes, under particular conditions. Observation may be a reliable process when a person uses it to visually count a small number of people in a room, but it becomes much less reliable when counting people in a crowd of thousands. As argued by Chinn et al., a critical part of epistemic cognition relates to people's schemas about the conditions under which processes can be considered reliable, and these schemas can be used in different ways [30]. First, according to Chinn et al., individuals may use the schemas to guide their actions, that is, to *enact* a reliable process [30]. For example, a student may conduct a well-designed scientific experiment to collect data as evidence to support a claim. Second, individuals may use the schemas to *evaluate* the processes used by others, such as by judging whether a specific method is viable to generate an accurate understanding of a scientific phenomenon. Third, individuals may use the schemas to express *metacognitive* beliefs about how to produce reliable epistemic products. For instance, an individual may explain what needs to be considered when evaluating a scientific argument.

Chinn and colleagues' model has been used as a framework to examine learners' epistemic cognition while engaged in scientific inquiry [24,33]. For instance, Herrenkohl and Cornelius examined the argumentation practices of fourth- and fifth-grade students and teachers to assess the epistemic thinking that emerged during instructional activities such as whole-class discussions and small-group discussions [24]. The researchers coded whole-class discourse for argumentation and categorized the emergent code clusters into the components of Chinn et al.'s model of epistemic cognition. In the present study, we also employed the AIR model as the theoretical and explanatory framework. Specifically, we conceptualized epistemic cognition based on the AIR model. Further, we used the AIR model to identify and analyze student discourse to deepen understanding of how epistemic ideals and reliable processes occur and interact with argumentation during small-group discussions about science. The goal was to gather a better sense of the criteria students use to form scientific arguments and to better understand the relationship between students' practices of argumentation and the epistemic criteria they hold and apply in science classrooms.

#### *1.2. Interplay of Argumentation and Epistemic Cognition in Science*

As addressed above, the AIR model aligns well with contemporary research and theory on argumentation. Argumentation—the process through which knowledge claims are asserted and justified through supporting reasons and evidence—is part of the foundation for the development and progression of scientific knowledge [34–37]. Thus, when conducted in ways that adhere to scientific normative practices, argumentation can be considered a reliable epistemic practice in science. Scientists advance knowledge in their field by endorsing normative epistemic ideals, such as the ideal that a scientific argument must be supported by evidence and connected to prior theories (i.e., coherence with evidence or normative disciplinary knowledge). Subsequently, scientists must establish a convincing argument and communicate it to the broader scientific community. Their argument is subject to critical evaluation by their peers, who can question aspects of the argument and make counterarguments. The goal of reasoned argumentation is thus to come to a rational conclusion about which claims to accept or which actions to take [38].

As students engage in argumentation as an epistemic process, they are also likely to develop their epistemic understanding of science [25]. When students engage in collaborative argumentation, their arguments are open to evaluation by others, who can examine the provided justification and accept or reject the purported claims [39]. During this process, alternative positions can be considered as well. Specifically, an individual can engage in written argumentation independently by articulating their own viewpoints and providing reasoning and evidence in support of their claim as well as considering multiple perspectives and counterarguments to their position. However, students can also engage in oral argumentation collaboratively and dialogically. During oral argumentation, students benefit from listening to, processing, and evaluating others' arguments, much as scientists do in their own practice. As a result, engaging students in argumentation helps them to understand the processes behind science and to develop a deep understanding of how knowledge develops in the scientific discipline [40], which subsequently advances their science learning [3,41–43].

Epistemic ideals, on the other hand, guide the kinds of reasons and evidence used in the scientific arguments constructed by scientists and students. In science, there are disciplinary standards (i.e., epistemic ideals) regarding the ways in which argumentation (e.g., evidence or connection to other theories) is used in knowledge building [3]. These disciplinary standards are the accepted guidelines by which the community justifies and evaluates knowledge, as well as the processes used to produce knowledge [3]. As a result, argumentation involves a deliberation on the epistemic status of knowledge claims [44]. For instance, in science, claims that adhere to scientific evaluative criteria (e.g., supported by evidence or fit with prior theories) are given predominant epistemic status over claims that do not meet these criteria. In the coordination of claims, reasons, and evidence, one's epistemic cognition becomes pivotal. Absolutist, multiplist, and naïve views of knowledge and knowing provide little guidance as to what should and should not be considered a valid knowledge claim [45]; in contrast, when students adopt an evaluativist perspective and more normative beliefs, they are more likely to utilize disciplinary norms to evaluate arguments and decide whether to accept or refute them.

Therefore, the epistemic ideals students hold will guide the kinds of reasoning, evidence, and arguments they bring forward and the type of disciplinary standards they use to evaluate the presented arguments. Empirical evidence shows that students' epistemic cognition influences how they evaluate and construct scientific arguments [46]. Students with more advanced epistemological understanding engage in more critical evaluation [47]. They are better able to identify informal reasoning fallacies in flawed arguments [48] and produce higher-quality written arguments of their own [18]. Nussbaum and colleagues examined transcripts of paired students' online argumentation discussions and found that students with less advanced epistemological understanding were less critical of their partner's arguments [19]. These students also failed to acknowledge inconsistencies within arguments, compared to students who held more advanced views. The more advanced students provided counterarguments, brought more content into their argumentation, and noted the need for more information. Students with more advanced epistemic perspectives were also more willing to engage in argumentation than peers with more naïve perspectives [19,49]. Notably, there is literature suggesting this relationship could be bi-directional. When students engage in dialogic argumentation and demonstrate their knowledge of the argumentation norms in science, they reveal an improvement in their epistemic understanding [50]. Given the strong alignment between models of argumentation and epistemic cognition (e.g., AIR model), there is a need for research on how to construct argumentation instruction in ways that help students refine their epistemic understanding of science, which necessarily includes normative scientific aims, ideals, and reliable processes.

#### *1.3. Using Quality Talk Science (QTS) as a Potential Approach to Enhance Argumentation and Examine Epistemic Cognition*

As stressed in prior research, the kind of classroom intervention found to be effective for promoting epistemic cognition often involves teachers' creating and supporting an open space where small groups of students can co-construct and challenge arguments about domain-specific problems [2,51]. Further, within the context of an open participation space, the type of task assigned to students may also influence their performance. For example, a well-defined, open-ended, and challenging task provides more opportunities for students to utilize multiple strategies and can help promote generalization, argumentation, and higher-order thinking [52,53]. The variable nature of open-ended tasks also stimulates conversations among students to allow for a negotiation of meaning and understanding of the domain knowledge [54].

In this study, we examined students' oral discourse in science during an intervention called Quality Talk Science (QT<sup>S</sup>), a teacher-facilitated, small-group, discourse-intensive approach that aims to promote students' critical-analytic thinking and high-level comprehension about scientific models and phenomena [27,55]. In line with the aforementioned characteristics of successful interventions that promote epistemic cognition, during QT<sup>S</sup>, teachers receive a series of professional development workshops to become familiar with its pedagogical principles. Specifically, these pedagogical principles outline the need for students to take on the interpretative authority of the discussion. To achieve this, teachers gradually release control of the discussion to students, such that students increase their responsibility for participating in productive discourse about scientific content, searching for the underlying arguments and assumptions (i.e., epistemic engagement) [56]. Further, as part of these pedagogical principles, teachers provide explicit instruction to students with guided practice on how to generate thought-provoking, open-ended questions (i.e., authentic questions, AQ) and respond to those questions using argumentation. These student-initiated authentic questions and argumentative responses serve as indicators of high-level comprehension, as students critically and reflectively engage with scientific text or content. As an essential part of QT<sup>S</sup>, students engage in regular small-group discussions where they are expected to evaluate scientific models related to various scientific phenomena. Teachers facilitate these discussions using appropriate teacher discourse moves such as marking or modeling discourse elements indicative of productive talk [57].

QT<sup>S</sup> has both theoretical and empirical underpinnings as a branch of the broader Quality Talk (QT) framework [58]. QT was derived from a systematic review of text-based discussion interventions in language arts [26] and was adapted for use in high school science classrooms [27]. The most effective parts of multiple approaches to discussion were combined into one approach designed to bolster students' high-level comprehension and critical-analytic thinking about text. QT is rooted in cognitive, sociocognitive, sociocultural, and dialogic perspectives on teaching and learning [59].

Accumulating empirical research on the QT approach has evidenced positive impacts on improving students' discourse and argumentation in science, literacy, English language learning, and mathematics [27,60–62], as well as in different cultural contexts [63–65]. As a case in point, we conducted a quasi-experiment in which high school chemistry and physics teachers implemented QT<sup>S</sup> in their classrooms over a school year [27]. The critical-analytic thinking and argumentation in the discourse of students engaging in QT<sup>S</sup> improved dramatically from pre-test to post-test. Over time, QT<sup>S</sup> students asked more questions that provoked deeper levels of cognitive processing and responses [27]. In contrast, students in the comparison classroom did not evidence these changes to the same degree. At the end of the school year, QT<sup>S</sup> students produced many more well-supported responses with reasoning and evidence, and challenged and built on others' arguments more frequently. Such indicators were not present in the pre-test discussions, nor were they present in the post-test discussions of the students in the comparison classroom. Comparable results have been shown across varying grades, content areas, and contexts. For example, in language arts classrooms, fourth-grade students who participated in QT discussions evidenced increases in basic- and high-level comprehension [60] as well as in written argumentation after receiving writing instruction as part of the QT intervention [66].

In sum, QT<sup>S</sup> has shown promise as a way to foster scientific practices that involve argumentation and understanding via small-group discussion, and it aligns well with instruction on epistemic cognition in science. However, less is known about the epistemic ideals that students use in scientific discourse as they generate arguments and how they consider reliable processes while working to understand scientific phenomena.

#### *1.4. The Present Study*

In this qualitative case study, we examined how high school students engaged in small-group discussions about scientific models and phenomena with a particular focus on how students' epistemic cognition and argumentation were evidenced across a set of discussions. We used the AIR model as the theoretical and explanatory framework from which we identified and analyzed the epistemic ideals and reliable processes students used while constructing arguments and evaluating scientific models. This study contributes to the extant literature in three ways: (a) our methodological approach allowed us to gather evidence of epistemic cognition and argumentation as enacted in students' oral discourse rather than via self-reports, (b) the AIR model enabled us to capture the criteria students used to evaluate scientific arguments while also contributing to the emerging body of literature using the framework to analyze collaborative argumentation discourse [24,33], and (c) the use of the QT<sup>S</sup> discussion approach contributed to examining the relationship between epistemic cognition and argumentation as well as informing instructional implications for promoting scientific argumentation and epistemic cognition in science classrooms. Our research questions were:


#### **2. Methods**

#### *2.1. Participants and Study Design*

Within the context of a larger National Science Foundation grant, four teachers from one public high school in the northeastern United States implemented QT<sup>S</sup> in their 10th- through 12th-grade chemistry and physics classes over an entire academic year. Students in the school were predominantly Caucasian (i.e., 91%), and over half of the students were from economically disadvantaged families (i.e., 49% qualified for the Free Lunch Program and 8% qualified for the Reduced-Price Lunch Program). The school was situated within a small city in a rural setting. The student population was highly transient; almost half of the participants who enrolled in the study at the start of the school year changed school districts over winter break.

For this qualitative case study, one all-female group of students (*n* = 6) from the AP chemistry class was selected for analysis. Although students in the class were split into four discussion groups, we elected to examine the discourse from one of the small groups so that we could conduct the depth of qualitative analysis necessary to explore our research questions. We identified the best-fitting group for analysis based on two primary selection criteria: (a) a group where the teacher was not present for the QT<sup>S</sup> science lesson discussions and (b) a group with students who had high rates of attendance and a full year of participation. These selection criteria allowed us to identify the group that would give us the best sense of students' epistemic cognition and scientific argumentation without the influence of the teacher or the variability in group composition (e.g., shifts in group dynamics due to student absences). Finally, it is important to note that in this qualitative study, we emphasized ecological validity over external validity. That is, the study examined student learning in an authentic science classroom. Therefore, our research design does not warrant causal claims or generalizations from our findings.

#### *2.2. QT<sup>S</sup> Intervention*

The key components of the QT<sup>S</sup> intervention, implemented across one academic year, included the delivery of QT<sup>S</sup> discourse lessons and the QT<sup>S</sup> catalyst, QT<sup>S</sup> science lessons, QT<sup>S</sup> scientific model handouts for QT<sup>S</sup> discussions, and QT<sup>S</sup> discussions (see Table 1 for the schedule and timeline); each is introduced in the following sections.


**Table 1.** Timeline of Monthly Cycles with QT<sup>S</sup> Discourse Lessons and QT<sup>S</sup> Science Lesson Topics.

Note. \* Denotes discussion analyzed as part of this study's data.

### 2.2.1. QT<sup>S</sup> Discourse Lessons and QT<sup>S</sup> Catalyst

The six QT<sup>S</sup> discourse lessons were shared with the teacher during the initial and ongoing professional development workshops. For each discourse lesson, the teacher was provided with a set of slides to present in class as well as a corresponding lesson plan. The first three discourse lessons focused on different types of authentic questions that students could generate and ask in their discussions (e.g., speculation questions, connection questions, or high-level thinking questions) and were delivered in fall. The last three discourse lessons were delivered in spring and were focused on teaching students about argumentation components (i.e., claim, reasoning, and evidence), the evaluation of evidence and reasoning, as well as challenge, alternative argument, and counterargument. Students were not only introduced to the definition of each necessary argumentation component, but they were also provided with guidelines (i.e., relevance, credibility, and accuracy) on evaluating evidence and quality of reasoning. All discourse lessons included descriptions of concepts as well as realistic examples of these concepts illustrated through discussion transcripts and/or videos [58].

Students were also provided with a QT<sup>S</sup> catalyst worksheet to correspond with the discourse lessons and prepare students for the QT<sup>S</sup> discussions. In the fall semester, the QT<sup>S</sup> catalyst focused on different types of authentic questions in alignment with the QT<sup>S</sup> discourse lessons (Figure 1a). The fall QT<sup>S</sup> catalyst provided space for students to record their authentic questions about the model, readings, and demonstration in preparation for discussion. In spring, students were provided with a QT<sup>S</sup> catalyst that centered on argumentation in alignment with the discourse lessons focused on argumentation. In addition to providing space for recording authentic questions, the spring QT<sup>S</sup> catalyst used visual representations of each argumentation component to facilitate the discussion and help students think about the model for discussion regarding the evidence and reasoning for each claim (Figure 1b).

(**a**)

*Educ. Sci.* **2021**, *11*, x FOR PEER REVIEW 8 of 36

2.2.1. QTS Discourse Lessons and QTS Catalyst

(**b**)

**Figure 1.** Examples of QT<sup>S</sup> Catalysts. (**a**) The top QT<sup>S</sup> catalyst was used in fall with a focus on recording students' authentic questions; (**b**) The bottom QT<sup>S</sup> catalyst was used in spring with a focus on argumentation components.

**Table 2.** QT<sup>S</sup> Science Lesson Details.

| QT<sup>S</sup> Science Lesson Topic | Essential Question | Science Concepts | Class Demonstrations |
|---|---|---|---|
| Airbags | How does the inflation and deflation of the airbag prevent injury? | Newton's Laws of Motion, Kinetic Theory of Gases, Acceleration, Velocity, Force, Diffusion | Video demonstration of a crash test with and without an airbag; video demonstration of an airbag deployment and deflation in slow motion |
| Nuclear Fission | How does nuclear fission create explosions? | Fission, Strong Force, Nucleons, Nuclides, Neutrons, Protons, Electrons, Binding Energy, Electrostatic Forces, Radiation, Isotopes, Stability | Video demonstration of chain reactions; video demonstration of an explosion from 100 tons of TNT; video discussion of the Manhattan Project Trinity Test |
| Thin Films | What causes the appearance of multiple colors in a layer of … | Destructive Interference, Constructive Patterns, Refraction, Young's Experiment, Absorption, … | Hands-on, thin film rainbow paper experiment |
### 2.2.2. QT<sup>S</sup> Science Lessons

Paired with each of the six discourse lessons, the teacher also taught a QT<sup>S</sup> science lesson over three consecutive class periods. QT<sup>S</sup> science lessons were co-created with teachers and content area experts to provide rich opportunities for students to engage in discussions around disciplinary core ideas in science in alignment with the Next Generation Science Standards (NGSS). Each lesson was centered around an essential question related to a scientific phenomenon (see Tables 1 and 2 for details).


On the first day of the QT<sup>S</sup> science lesson, students were introduced to the essential question and observed demonstrations of the phenomenon (i.e., hands-on activity or video). After the demonstrations, the teacher introduced a handout that contained multiple models/claims to explain the scientific phenomenon that students observed in the demonstration (Figure 2). During and after the demonstrations, students generated and wrote down authentic questions about the phenomenon or their thinking about each claim in the scientific model on their QT<sup>S</sup> catalyst worksheet (Figure 1). Taken together, students were provided with multiple exposures to the scientific phenomenon and related material (e.g., demonstrations or scientific readings) in order to promote the likelihood that students possessed the necessary foundational understanding related to the phenomenon prior to the discussion. This approach also allowed multiple opportunities for students to develop a variety of rich authentic questions and engage with the scientific model. On the second day, students brought their QT<sup>S</sup> catalyst, readings, and the provided scientific model/claims to their QT<sup>S</sup> discussions to talk about the scientific model and related scientific phenomenon. On the third day of the science lesson after students conducted a QT<sup>S</sup> discussion, the teacher reviewed the student evaluations of the presented models or claims via a whole-class discussion toward the normative model and addressed any misconceptions in student responses to help them understand the normative scientific model in class.

**Figure 2.** Examples of handouts for scientific models/claims. (**a**) The one on the left was used in fall with four models for students to choose from; (**b**) The one on the right was used in spring and resembles a student-generated model with three different claims.

### 2.2.3. Scientific Model Handouts for QT<sup>S</sup> Discussions

The scientific model handouts were an integral part of the QT<sup>S</sup> science lesson, as they provided a framing for alternative scientific models related to the phenomena and also served to guide the discussions. In line with Schwarz et al. [67,68], scientific models are considered "tools for predicting and explaining" scientific phenomena, and scientific models can "change as understanding improves" [67] (p. 632). Having students engage in modeling practices is conducive to developing their epistemic understanding as well as their capacity for constructing and evaluating knowledge in science [69]. Therefore, in the current study, students were afforded the opportunity to evaluate and revise these models during QT<sup>S</sup> discussions as part of their learning about various scientific phenomena.

In fall, the handout consisted of four different models of the given phenomena. Each model had a collection of claims and there were overlapping claims across the four models. Among the four models, one of them contained all correct claims and the remaining three had one or more incorrect claims (Figure 2a). However, the teacher and students reported that these models were too simplistic. Once students identified one model that they believed to be correct, they no longer considered the remaining models. As a result, we revised the handout. In spring, a single model was presented, which included three claims that jointly explained the phenomena (Figure 2b). The models and claims were hand-drawn and formatted to appear as if they were student-generated work rather than an authoritative source (e.g., a textbook figure). The three claims addressed different aspects of the model and did not have any overlapping components. Students were told that any number of the claims were potentially correct or incorrect, and their task was to provide reasoning and evidence regarding the veracity of each claim. For incorrect claims, students were asked to generate a correct claim with appropriate reasoning and evidence.

### 2.2.4. QT<sup>S</sup> Discussions

The small-group discussions took place on the second day of the QT<sup>S</sup> science lesson. Given logistical and time constraints, it was not possible for the teacher to facilitate four discussions in one day while still allowing each group enough time to engage sufficiently in discussions (i.e., at least 15 min). Thus, the teacher organized the class so that two groups discussed for the first half of the class, while the other two groups worked independently on classwork, and then they switched for the second half of class. The teacher facilitated one small-group discussion in each half, while the other group engaged in a discussion without a facilitator. Discussions lasted approximately fifteen minutes and naturally unfolded in two portions: (a) discussing the answer to the essential question presented in the lesson, also called the model-based portion of the discussion and (b) engaging in the discussion about related scientific content guided by student-initiated questions, also called the open-ended portion of the discussion.

In response to the essential question about the provided model, students discussed the different models or claims with respect to which were correct, which were incorrect, and why. Notably, there was no single answer to these questions and there existed multiple ways to address these questions by referring to various pieces of evidence from the provided readings or demonstrations. During the model-based portion of the QT<sup>S</sup> discussions, students focused on one specific epistemic aim: determining whether the model or claim was scientifically sound. To achieve this epistemic aim, students needed to evaluate and analyze the scientific credibility of the provided models or claims using reasoning and evidence.

After students concluded their discussion around the essential question and reached a conclusion regarding the scientific model, they began the open-ended portion of the discussion. When students engaged in the open-ended portion of the discussion, a singular, central epistemic aim was not evident. This open-ended portion revolved around asking and answering student-generated authentic questions. These two distinct parts emerged from the flow of the discussion across all discussions, but occasionally, students would briefly return to reconsidering the essential question as it related to a student-generated authentic question they were discussing. Importantly, this split between the scientific model-based portion and the open-ended portion of the discussion was not invoked by the teacher nor controlled by the researchers.

#### *2.3. Procedures*

Along with a cohort of teachers participating in the larger grant-funded study, the chemistry teacher participated in a series of initial and ongoing professional development workshops, where they learned about the QT<sup>S</sup> approach and how to implement it in their classroom with researcher-provided materials (e.g., QT<sup>S</sup> discourse lessons, QT<sup>S</sup> science lessons, or materials for hands-on activity). They also received regular one-on-one coaching sessions with QT<sup>S</sup> coaches to support high-fidelity QT<sup>S</sup> implementation (e.g., successful delivery of QT<sup>S</sup> discourse lessons, QT<sup>S</sup> science lessons, or implementation of QT<sup>S</sup> discussions).

Each month the teacher engaged in a cycle (see Table 1 and Figure 3) whereby they: (a) presented a QT<sup>S</sup> discourse lesson to teach aspects of authentic questions and argumentation, (b) implemented a QT<sup>S</sup> science lesson about a disciplinary core idea, (c) conducted small-group discussions based on the disciplinary core idea science lesson with two groups being teacher-facilitated and two groups being student-led, which were both video- and audio-recorded, and (d) reviewed student evaluation of the scientific model(s) through a whole class discussion and presented the normative scientific model. In addition, the teacher also (e) conducted a second set of small-group discussions based on a chemistry lesson of their choice. In this teacher-choice science discussion, the teacher used a structure similar to the QT<sup>S</sup> science lessons but without a scientific model. This gave students an opportunity to engage in additional QT<sup>S</sup> discussions while also allowing the teacher to facilitate the groups that were previously student-led. Thus, by alternating the discussions, the teacher was able to facilitate all four groups within each cycle. Over the academic year, this monthly cycle repeated six times. In this study, we examined four of the QT<sup>S</sup> discussions conducted by one student-led group.

**Figure 3.** A Flowchart of the QT<sup>S</sup> Intervention Procedures (One Cycle).

#### *2.4. Qualitative Coding*


#### 2.4.1. Epistemic Cognition Coding

The coding for epistemic cognition was conducted through iterative cycles to ensure consistency and coherence across the two portions of each QT<sup>S</sup> discussion (i.e., model-based portion and open-ended portion). Prior literature regarding epistemic criteria for models and arguments [70] and Chinn and colleagues' AIR model [30] framed our initial, open coding of students' discourse. After separately coding the discourse, the authors met to discuss the emergent codes in the data and our reasoning for each code. Over multiple coding cycles, we refined our coding scheme and resolved all discrepancies through discussion. Given that the two portions of each QT<sup>S</sup> discussion had different epistemic aims, coding was conducted first for the model-based portion of the discussion and then replicated for the open-ended portion in order to maintain greater consistency and to avoid drift in coding for each portion of the QT<sup>S</sup> discussion. Throughout the coding process, the researchers wrote analytic memos [71] to document reflections and insights regarding the coding process.

It is worth noting that the authors did not code for epistemic aims in the coding of epistemic cognition. This was because there was one clear epistemic aim for the model-based portion of the discussion, which was to determine a correct model, and no clear, central epistemic aim for the open-ended portion of the discussion during which students answered their authentic questions related to the scientific phenomenon in general. Through this iterative, multi-cycle process, the authors developed two sets of epistemic cognition codes (i.e., epistemic ideals and reliable processes) with five categories of epistemic ideals and five categories of reliable processes. While coding for the reliable epistemic processes used in the discussions, the researchers also noted the ways in which students used the schemas that specify these processes, such as whether each reliable epistemic process was enacted, evaluative, or metacognitive [30]. See Tables 3 and 4 for code descriptions and examples.




**Table 3.** Descriptions of Epistemic Ideals Codes with Examples.


Note. 1. The term "explanation" is used throughout the definitions in this table for consistency. However, each epistemic ideal could be applied to any epistemic product (e.g., model, argument, claim). 2. In the examples, EC codes are noted in italics within {}; duration of codes that span multiple turns is indicated within <>; EI: Epistemic ideals; RP: Reliable process. 3. Student names are pseudonyms.


**Table 4.** Descriptions of Reliable Epistemic Processes Codes with Examples.


#### 2.4.2. Quality Talk Coding

Two trained researchers independently coded the discussions and came together to reconcile any disagreements in accordance with the Quality Talk Coding Manual [72]. In order to facilitate analysis, we segmented the discussion into episodes of talk based on the authentic question events [73]. Each authentic question event began with an authentic question asked by a student and included all responses generated by students in response to that question. Responses related to the authentic questions were coded for individually constructed argumentation (i.e., elaborated explanation, EE) and collectively constructed argumentation (i.e., exploratory talk, ET; cumulative talk, CT). See Table 5 for code descriptions and examples.

#### *2.5. Data Analysis Plan*

Of the six cycles of QT<sup>S</sup> discussions, four were selected for analysis: two from fall and two from spring (see Table 1). The lesson on Soap Bubbles was not analyzed due to technical malfunctions with the camera, and the lesson on Tesla Coil was not analyzed due to a new student joining the group. Video and audio data from these QT<sup>S</sup> small-group discussions were transcribed by a professional transcriber into word processing documents. These transcription files were uploaded to a qualitative data analysis software (ATLAS.ti 7) to facilitate coding and analysis.

For RQ 1, we identified the categories of epistemic cognition and argumentation invoked during the QT<sup>S</sup> discussions via iterative qualitative coding and reconciliation by two raters. For RQ 2, we explored differences in students' epistemic cognition and argumentation related to contextual factors by comparing the frequency of the epistemic cognition and argumentation codes during the model-based portion of the QT<sup>S</sup> discussions. For RQ 3, we investigated how students' epistemic cognition related to their authentic questioning and argumentation by examining the extent to which epistemic cognition codes, the authentic question code, and argumentation codes co-occurred or were closely related to one another (e.g., where one code tended to immediately follow another code) and then checking the transcripts to verify patterns.


**Table 5.** Descriptions of Quality Talk Codes with Examples.

Note. QT codes are noted in bold within {}.

#### **3. Results**

#### *3.1. RQ1. Epistemic Ideals, Reliable Processes, and Argumentation Invoked in Science Discussions*

#### 3.1.1. Epistemic Ideals

Our qualitative open-coding procedure resulted in a set of 10 codes related to students' epistemic ideals. Then, these codes were organized into five epistemic ideal categories: connections to other knowledge, internal structure of the explanation, good communication, empirical evidence, and evidentiary support (see details in Table 3). With respect to the frequency of these epistemic ideals, there was a notably wide variation between the different categories. As shown in Table 6, students most frequently made 'connections to other knowledge,' as evidenced through 49 instances. In contrast, there was only one instance of the category 'empirical evidence,' as represented through the code 'coherence with personally collected empirical evidence.' In the following descriptions of each category, we refer to the examples provided in Table 3.


**Table 6.** Frequency Table of EI Codes Across Discussions.

The vast majority of identified instances of epistemic ideals related to how epistemic products connected to other knowledge (e.g., personal experience or prior knowledge). Within this category, the most commonly invoked epistemic ideal was noted by instances coded as 'coherence with normative disciplinary knowledge' (i.e., coherence with NDK). As evidenced by the 32 instances of this code, this ideal generally involved students expressing normative science knowledge that related to what they learned from the QT<sup>S</sup> science lesson, such as the scientific articles. As shown in the example in Table 3, Aria (student names are pseudonyms) explicitly referred to an article assigned from the nuclear fission lesson in response to Emma's authentic question. Likewise, in the airbag discussion, Chloe made a connection to their shared prior knowledge from the demonstration video that they watched together in class.

With respect to the category of 'internal structure of the explanation,' students evaluated whether explanations were sufficiently complex (i.e., comprehensive) or internally consistent (i.e., logically sound). In alignment with the two codes in this category, students expressed hesitations about whether to accept a claim or explanation because they felt it was incomplete or missing important explanatory components, or they speculated about the logic in the explanations. For example, when discussing the use of the chemical sodium azide in airbags, students wrestled with whether the use of the highly toxic sodium azide was contradictory to the use of an airbag as a safety device. That is, students stated the ideal of being logically sound was not sufficiently met.

Instances associated with the 'good communication' category pertained to epistemic ideals related to the language and comprehensibility of explanations. The codes that made up this category included 'precise wording' and 'clearly understandable' and were notably infrequent in the discussions, each occurring only once. In the example, Chloe expressed that they accepted Model 2 because of the language, and then went on to describe the exact phrasing of the explanation that they were referring to as it related to this ideal. These statements exemplified how the student accepted the claim on the basis of its clear, understandable language in comparison to the other claims.

The category of 'empirical evidence' related to the epistemic ideal that students used to seek coherence with empirical evidence that was personally collected. However, formal data collection was not a component of the science lessons in the present study, and students only conducted hands-on experiments during some lessons. Despite this, there was one instance where a student made a connection with empirical evidence. In the thin films lesson, students had an opportunity to engage in a hands-on activity. They submerged a scrap of black construction paper in water, added a drop of nail polish onto the water, and then observed how the nail polish formed a layer on the surface of the water as well as how the paper looked once it was lifted out of the water. Thus, during the discussion about thin films in the example, Chloe referred to the empirical evidence and emphasized that the model did not seem to align with what was observed during the demonstration.

Finally, we identified one code that did not fit within any of the five categories proposed by Chinn and colleagues [30]. We termed that code and the broader category 'evidentiary support.' As illustrated by the evidentiary support code, students either accepted a claim because it was supported by reasons and/or evidence or they held their peers accountable for providing reasoning and/or evidence. Notably, this was different from empirical evidence where students provided or referred to empirical evidence that was personally collected to support a claim. An example would be Grace explicitly prompting their group for reasoning and evidence to help evaluate a claim.

#### 3.1.2. Reliable Epistemic Processes

Through our open-coding process, we coded five types of reliable epistemic processes: experimentation, observation, physics formula, thought experiment, and process of elimination (see code descriptions and examples in Table 4). Compared to the frequency of epistemic ideals, there were fewer instances of reliable epistemic processes, which added up to 11 instances across four QT<sup>S</sup> discussions (see Table 7).


**Table 7.** Frequency Table of Reliable Processes Codes Across Discussions.

Each of these five codes involved a different method used to construct epistemic products, that is, a process used to establish knowledge, models, explanations, or theories.

For instance, 'experimentation' referred to using controlled testing as a reliable process to produce an epistemic product. Similarly, for other reliable process codes, students considered examining the world through human sensory perception (i.e., observation), referring to a known physics formula (i.e., physics formula), logically thinking through an imagined situation (i.e., thought experiment), or applying a process of elimination as reliable processes to obtain an epistemic product. For instance, during the discussion on thin films, Grace referred to her experimentation as a reliable process to explain what happened after adding one more drop of nail polish on the water surface. An example of a thought experiment as a reliable process was identified in the discussion on hot packs as shown in Table 4. Aria was thinking through an imagined scenario where the energy would be released over a longer period of time when one clicked the disc more slowly.

As we attempted to identify the categories of reliable processes in student discourse, we also noted the ways in which students used the schemas that specified these reliable processes (i.e., enacted, evaluative, and metacognitive). An example of students enacting a reliable process would be picking a model that they deemed to be correct through the process of elimination (see Table 4). Specifically, when students discussed which model to pick for the nuclear fission discussion, they did not argue why the selected Model 2 was correct. Instead, Emma and Grace eliminated other models because one statement in the rest of the models did not seem to align with the information provided in the reading. As noted earlier in the qualitative coding for reliable processes, even though it was not viable for participants to enact all possible reliable processes, students could still speculate about the reliable processes by evaluating and metacognitively thinking and talking about them. As a case in point, in the discussion on airbags, Chloe expressed a metacognitive belief regarding observation as a reliable process, stating that having first-hand observations from a real-life experiment would help the group better understand the models than a video demonstration. Similarly, in the discussion on nuclear fission, Emma expressed her metacognitive belief about using a physics formula as a reliable process that could estimate how long it would take a nuclear bomb to hit the ground.

#### 3.1.3. Argumentation

The final piece related to the first research question involved students' use of argumentation in the discussion. Our argumentation coding was operationalized through the response codes of the Quality Talk coding. Three argumentation codes were used to identify episodes of talk that evidenced either individually constructed argumentation (i.e., elaborated explanation, EE) or collectively constructed argumentation (i.e., cumulative talk, CT; exploratory talk, ET). The frequency of each argumentation code was generally balanced across the four QT<sup>S</sup> discussions (see Table 8), although EEs (*n* = 41) occurred more frequently overall than CTs (*n* = 24) or ETs (*n* = 10).


**Table 8.** Frequency Table of Quality Talk Codes Across Discussions.

EEs are individual explanations (i.e., an uninterrupted turn by a single speaker) that include a claim and multiple pieces of reasoning and/or evidence. For instance, in the example shown in Table 5, Chloe first stated a claim regarding the necessity of a seatbelt in response to a question about the pros and cons of airbags. Following this claim, they provided evidence and reasoning: one piece being the demonstration they watched in class and the other being the reasoning derived from that demonstration.

The code for ET captures episodes of collaborative, group-constructed discourse where students weigh different arguments over multiple turns and is characterized by the use of a challenge. For example, in the discussion on thin films (see Table 5), Grace challenged Chloe's reasoning about why the colors went from green to pink on the thin film but not purple or blue. Chloe first proposed that it was because the lights were not truly white light. However, Grace challenged this claim by referring to their observation of the thin film when the black paper was held at different angles and argued that the reason could be the angle of perception.

In contrast, the code for CT captures episodes of talk that are collaborative, group-constructed exchanges where multiple students build knowledge, but not in a critical way. That is, there is no presence of a challenge in a cumulative talk episode. In the example in Table 5, four students co-constructed their understanding of the relationship between wavelength and the colors seen on the thin film without challenging each other. Together they built their evaluation of a claim in the presented model and concluded that the claim was not correct.

#### *3.2. RQ2. Contextual Factors Related to Students' Epistemic Cognition and Argumentation*

Scholars have increasingly acknowledged the influence of context, including factors such as domain- and task-specificity of phenomena, on epistemic cognition [10,32]. In order to explore our second research question, the authors met to identify trends with regard to students' epistemic cognition and argumentation as related to change in the contextual factors from fall to spring. Specifically, the contextual factors of interest included the model format (i.e., the scientific model task shifted from selecting one best model from four models to evaluating three separate claims) and explicit instruction provided to the students (i.e., the focus of QT<sup>S</sup> discourse lessons and QT<sup>S</sup> catalyst shifted from authentic questions to argumentation components). Herein, we present three trends that demonstrate changes in students' epistemic ideals, reliable processes, and argumentation from fall (i.e., discussions on airbags and nuclear fission) to spring (i.e., discussions on thin films and hot packs) in the model-based portion of the discussion, as it was the portion of the discussion that was more sensitive to changes related to model format.

#### 3.2.1. Students Evidenced Increased Use of Epistemic Ideals

We identified two primary trends regarding shifts in the epistemic cognition codes for the model-based portion between fall and spring. First, with regard to epistemic ideals, there were substantially more occurrences of coherence with normative disciplinary knowledge (*n* = 11) and evidentiary support (*n* = 3) in spring compared to fall (*n* = 5 and *n* = 0, respectively; see Table 6).

As illustrated in Excerpts 1 and 2, during both spring discussions (i.e., thin film and hot packs discussions), students invoked the standard that acceptable claims must be supported by reasons and evidence (i.e., evidentiary support). They systematically evaluated each of the three claims presented. In Excerpt 1, Grace asserted that the first claim was correct because it was the only claim with evidence, meaning that this claim met a necessary criterion (i.e., claims must be supported by evidence). Then, Aria endorsed this epistemic ideal and prompted the group to provide evidence by asking, "What's your evidence behind it?"

Students also referred to the revised QT<sup>S</sup> catalyst that was focused on the key argumentation components (i.e., claim, reasoning, and evidence) to probe for evidence of each claim in the provided scientific model. Similar to the question that Aria asked in Excerpt 1, in Excerpt 2 from the thin films discussion, Grace asked, "What do you have written down as your reasoning and evidence?" (Note: In each excerpt, EC codes are noted in italics within {}; QT codes are noted in bold within {}; the duration of codes that span multiple turns is indicated within <>. EI: Epistemic ideals; RP: Reliable processes.)

Excerpt 1: Hot Packs

**Grace:** I think that's the only [claim] that has evidence. *{EI: Evidentiary Support}*

**Aria:** Yeah, same. So what's your evidence behind it?

**Isabella:** I talked about this little—this little [doodah], this—graph, about the activation energy. I said that by clicking it—that it provides the activation energy for the reaction to start occurring. *{EI: Coherence with NDK}*

Excerpt 2: Thin Films

**Grace:** When a drop of nail polish is dropped onto a water surface, the lower density of the nail polish and the molecular attraction of the molecules prevent it from mixing with the water? Now, what does everyone think about this?

**Chloe:** I think it's false.

**Isabella:** I think it's true.

**Grace:** What do you have written down as your reasoning and evidence? {*EI: Evidentiary support*}

**Aria:** Yeah, why do you think it's false?

These excerpts are examples of how both the changing context of the model format and the explicit instruction affected the ways students evidenced epistemic cognition via argumentation. The structure of the model and lessons on argumentation made it more likely that students would surface their epistemic cognition (i.e., epistemic ideals), which in turn, made their epistemic cognition public and available for scrutiny by their peers via argumentation. The process of argumentation, a scientific epistemic practice, could then lead to improvements in epistemic cognition.

3.2.2. Students Evidenced Decreased Use of Process of Elimination

The second trend regarding the change in students' epistemic cognition pertained to students' use of reliable processes in the model-based portion between fall and spring. There were only two instances of reliable processes in the model-based portions of the discussions, and notably, both occurred during the fall discussions. That is, in both the airbags and nuclear fission discussions, students used the process of elimination to identify the model they believed was the most appropriate without challenging each other, probing for alternative arguments, or requesting further justification for the claim. In essence, in fall, students narrowed down the options provided in the model, a process they considered reliable for achieving their epistemic end (see Excerpt 3). In contrast, after the change in the model format, students did not use the process of elimination when discussing the provided scientific model in spring.

Excerpt 3: Nuclear Fission

**Chloe:** Why'd we pick model two?

**Emma:** Well, compared to the other ones, it says, "The strong nuclear force overpowered the electric static forces." And only number four also says that. The rest of them are backwards. {*RP: Process of elimination, enacted*} <RP starts> {**EE**}

**Chloe:** I agree, and that refers to the text where it told us that strong, uh, nuclear forces would overpower the, uh, electrostatic forces.

**Grace:** So then you would be able to narrow it down to two and four, and then it would be two because it says, like, the resulting nuclei will have an increased binding energy and be more stable. <RP ends>

The process of elimination is not a typical, normative scientific practice. Again, this change in the use of reliable processes was likely due, in part, to the change in model format but also likely resulted from QT instruction in argumentation, which emphasized more normative epistemic practices in science than the process of elimination.

#### 3.2.3. Students Evidenced Increases in EE, ET, and CT

The last trend focused on changes in students' argumentation as evidenced by the individual- and group-constructed argumentative responses. Notably, there were more instances of EE (*n* = 10), ET (*n* = 4), and CT (*n* = 4) in spring than fall (*n* = 3, 1, and 3, respectively) during the model-based portion of the discussions.

The increased occurrences of these argumentation codes indicated a general improvement in the quality of student argumentation, but as we examined students' EEs in the transcripts, we also noted that the EE Emma generated in spring appeared to be of higher quality than the EE she initiated in fall. For instance, in Excerpt 3, students were discussing why they would pick Model 2 from the four models presented to them during the nuclear fission discussion. In response to this question, Emma generated an EE that indicated a process of elimination, a non-normative reliable process. That is, as long as a claim included a statement that the students thought was wrong or was the reverse of a statement they held to be true, it was automatically eliminated regardless of what else the claim contained. Such reasoning did not directly explain whether the model was scientifically acceptable. By contrast, in spring, as students were evaluating the second claim in the provided model about hot packs, Emma initiated an EE that included an explicit reference to evidence closely related to the claim. As shown in Excerpt 4 below, Emma used scientific evidence to explain why the second claim was considered wrong.

Excerpt 4: Hot Packs

**Emma:** Yeah, whenever—in this article it says—like, it's talking about entropy, it says, like, an increase in entropy is represented by a positive value for delta S, which is, like, an endothermic reaction. So that's kind of like . . . But this is saying, like, an increase in entropy makes it an exothermic reaction. So it's kind of, like, saying the opposite in here. {**EE**}

**Aria:** Yeah.

**Emma:** That's what I used for my evidence, like, down here—

Such discourse is reflective of more normative argumentation processes in science, where counterclaims and rebuttals must be supported with reasoning and evidence. As students evaluated each separate claim, they needed to provide reasoning and evidence to support a correct claim as well as to refute an incorrect claim. The ability to evaluate claims via argumentation was likely strengthened through explicit instruction on argumentation.

#### *3.3. RQ3. The Relation of Epistemic Cognition to Authentic Questioning and Argumentation*

For our third and final research question, we looked at how students' epistemic cognition related to their authentic questioning and argumentation. Specifically, we produced a set of figures to demonstrate the timelines and the codes of epistemic cognition, argumentation, and authentic questions across the entirety of four QT<sup>S</sup> discussions, which we examined in combination with the transcripts to synthesize trends (see Figures 4–7). In sum, we identified two major trends: (a) the use of epistemic ideals was associated with the initiation of authentic questions, and (b) argumentation involved the use of epistemic ideals.

3.3.1. The Use of Epistemic Ideals Was Associated with the Initiation of Authentic Questions

As shown in Figure 4 along with the transcripts, a pattern emerged whereby (a) the use of epistemic ideals seemed to trigger the initiation of an authentic question and (b) the initiation of an authentic question seemed to lead to the use of epistemic ideals. For the first trend, we explored the transcripts where the use of an epistemic ideal co-occurred with or immediately preceded the initiation of an authentic question to identify the relationship. For the second trend, we explored the transcripts where the initiation of an authentic question preceded the use of an epistemic ideal to verify the finding.



*Educ. Sci.* **2021**, *11*, x FOR PEER REVIEW 25 of 36


**Figure 4.** Occurrences of Authentic Questions, Epistemic Ideals, and Reliable Processes in Four Discussions. Note. 1. AQ = Authentic Questions, EI = Epistemic Ideals, RP = Reliable Processes; \* Episode of talk exemplified in Excerpt 5. 2. Please also note that the thin films discussion had more turns than the other three so the scale for the X-axis is different from the rest.

**Figure 5.** Occurrences of Elaborated Explanation, Epistemic Ideals, and Reliable Processes in Four Discussions. Note. EE = Elaborated Explanation; \* Episode of talk exemplified in Excerpt 6.

**Figure 6.** Occurrences of Exploratory Talk, Epistemic Ideals, and Reliable Processes in Four Discussions. Note. ET = Exploratory Talk; \* Episode of talk exemplified in Excerpt 6.

**Figure 7.** Occurrences of Cumulative Talk, Epistemic Ideals, and Reliable Processes in Four Discussions. Note. CT = Cumulative Talk; \* Episode of talk exemplified in Excerpt 7.

An example of the first trend is evidenced in Excerpt 5, which is denoted in a block with an asterisk in Figure 4.
At the beginning of this excerpt, Isabella's turn was coded both as an authentic question and an epistemic ideal; that is, the two codes co-occurred. In this turn, Isabella first connected their everyday experience of boiling chocolate with the changing state of hot packs. The idea that a reusable hot pack, after being boiled, did not turn solid again at room temperature did not seem to cohere with their personal experience with chocolate. Thus, students had to reconcile the lack of coherence between the two situations to fully understand why the sodium acetate in hot packs behaved differently than chocolate. In this scenario, the epistemic ideal of coherence with personal experience was not met, which prompted a productive authentic question, asked by Isabella near the end of the same turn, about why a hot pack did not turn solid again at room temperature like chocolate. In this excerpt, the scrutiny of epistemic ideals facilitated productive discussion around students' misconceptions through the initiation of an authentic question.

Excerpt 5: Hot Packs

**Isabella:** OK, and then, like, what you were saying before, how you boiled it and then it stays a solid—I mean, a liquid—like, that's, like, different. You know, like, when you heat chocolate, OK—it's, like, solid at first, and then you heat it, and then it turns into, like, a b—then it turns back into a solid. So, like, why wouldn't this do that? <*EI: Coherence with PE*> {**AQ**}

**Aria:** I think the freezing points are different. Like, with ice, if you melt it, it's just gonna stay, like, water, unless you put it back into the fridge again—<*EI: Coherence with NDK*> {**EE**}

**Isabella:** Oh.

**Aria:** —because the freezing point is all (inaudible).

**Isabella:** OK.

**Aria:** Anyway, um, but with chocolate, it probably has a really high freezing point—

**Isabella:** It's (inaudible)—

**Aria:** —'cause at room temperature it's a solid, right?

**Isabella:** Yeah. But it's weird, because this—like, at room temperature it can be a solid or a liquid. {*RP: Observation*} {*EI: Coherence with PE*}

**Aria:** Yeah, that's kind of weird, huh?

**Grace:** Yeah, 'cause yours is . . . Well, mine's as hard as can be, and hers is like a gel.

**Isabella:** Or, like, they're just doing it right now, like, they just boiled theirs, and it's, it's gonna stay a solid. Like, it's not gonna go back to . . . I mean, it's gonna stay a liquid. It's not gonna go back to a solid.

**Aria:** Yeah, I don't know how they engineered it to—so that it stays . . . I feel like it's probably the chemical properties, because it says, "But it can exist as, as a liquid at a much lower temperature," like, lower than the freezing point, "and it's extremely stable." I don't know why that is, but I think—{*EI: Coherence with NDK*} {**EE**}

The use of epistemic ideals seems to trigger the initiation of authentic questions, but authentic questions also probe for the use of epistemic ideals in students' responses. Again, as shown in Excerpt 5, Isabella's authentic question led to multiple student responses that connected to normative disciplinary knowledge (by Aria) as well as personal experience (by Isabella). For example, in response to Isabella's authentic question, Aria sought coherence with normative disciplinary knowledge by citing a piece of evidence from her prior knowledge about the melting of ice and later brought up a reference reading regarding the chemical properties of hot packs.

3.3.2. Argumentation Involves the Use of Epistemic Ideals

The second trend pertained to the co-occurrence of argumentation and epistemic ideals, indicating frequent use of epistemic ideals in constructing argumentation. Specifically, EE co-occurred with epistemic ideals approximately 50% of the time (see Figure 5). Further, as shown in Figures 6 and 7, ET and CT also co-occurred with epistemic ideals approximately 60% of the time. The excerpts presented in the sections below provide additional evidence to bolster our argument regarding this trend.

Students' EE involved the practice of epistemic ideals. For example, in Excerpt 6 (also see the block noted in Figure 5), students were evaluating one claim in the student model during the thin films discussion, that is, "When a drop of nail polish is dropped onto a water surface, the lower density of the nail polish and the molecular attraction of the molecules prevent it from mixing with the water." During the discussion, Aria generated an EE by utilizing normative disciplinary knowledge and referred to the bonding between molecules of the nail polish and the bonding between water molecules as her reasoning.

Excerpt 6: Thin Films

**Isabella:** Why else do you think the, the first one [claim is false]—{**AQ**}

**Chloe:** No, I just—it doesn't make sense that the molecular attraction of the molecule, uh, (inaudible)—{*EI: Logically sound*} {**ET**} <ET starts>

**Aria:** OK, think of it this way: you have, um, like, this drop of nail polish in just, like, water. Where was the water? Was it on the film or something?

**Emma:** It was in, like, a little plastic tub.

**Isabella:** It was (inaudible).

**Emma:** Yeah.

**Aria:** OK. So you have the nail polish, and the nail polish molecules attract one another, so they want to stick, stick together, sort of like . . . And the water has hydrogen bonding, your favorite type of bonding, right? And then they want to stay together, so the nail polish is not gonna just mix with the water, because they're still, like, together, because (inaudible) molecular forces are bonding them together, that they're not separated. Does that make sense? {*EI: Coherence with NDK*} {**EE**}

**Grace:** Basically, the water molecules don't want to get a divorce, and the nail polish ones don't either, so they just kind of—

**Aria:** Yeah.

**Grace:** —coexist. <ET ends>

In addition, argumentation was found to co-occur with epistemic ideals as evidenced through ETs and CTs. As illustrated in Excerpt 6, an ET occurred following an authentic question. In this example, some students stated that the first claim in the student model was true, whereas others disagreed. Chloe struggled to accept the first claim. The statement that molecular attraction prevents the nail polish from mixing with water did not appear to be logically sound to her. The use of this epistemic ideal (i.e., logically sound) led to Chloe's challenge in the group, which characterized this ET. In response to Chloe's challenge, Aria provided an EE to demonstrate that this claim cohered with normative disciplinary knowledge, as explained earlier. After this EE, the group collectively decided that the first claim was valid and moved on to the next claim in the model. Collaboratively, students used various epistemic ideals to construct argumentation as a group by raising a challenge and responding to it.

During episodes of talk where co-constructed understandings occurred, but without challenging each other (i.e., CT), it appeared that epistemic ideals were also enacted as part of the knowledge building process (see Figure 7). For example, in Excerpt 7, students were co-constructing a response to an authentic question "Do airbags cause more injuries or prevent more injuries?" As Isabella and Mia built upon each other's response about how airbags could prevent people from smashing into the windshield, they referred to their prior knowledge from a demonstration video that they watched in class.

Excerpt 7: Airbags

**Mia:** I feel like . . . I feel like a lot of airbags also do is they, like, keep you from (inaudible) if you were to smash into the windshield, too. {**CT**} <CT starts>

**Isabella:** Yeah, in the one—in the one—{*EI: Coherence with PK*} <EI starts>

**Mia:** Cause if you smash in the windshield . . .

**Isabella:** —demonstration, like, without the—without the airbag, it showed, like, the person—

**Mia:** Yeah.

**Isabella:** —it's not an actual person, but—

**Mia:** The (inaudible) going through.

**Isabella:** —but the person, like, going through the window, and, like, you could see (inaudible). <EI ends>

**Mia:** Yeah, (inaudible), like, if you hit it, it's gonna shatter. It's like a glass that stays together. So, like, if you go through it, it's not gonna shatter around and you're gonna be stuck in it, and you're gonna have (inaudible). <CT ends>

These excerpts are examples of how students' epistemic cognition interacted with authentic questioning and argumentation during small-group discussions, indicating a close relationship of epistemic cognition to authentic questioning and argumentation. They also enhance our understanding of how epistemic cognition can be enacted and supported by authentic questioning and argumentation during small-group QT<sup>S</sup> discussions in science.

#### **4. Discussion**

Modern science education standards are focused on literacy practices, including the ability to engage in scientific thinking and argumentation [4]. Society expects students to be able to readily evaluate, accept, and use scientific knowledge as they reason about science in their own lives [35,74]. Despite the strong rationale behind incorporating argumentation into science education, it remains limited in most science classrooms [40].

In response, our study contributes to science education and literacy research by exploring how small-group discussions can be used as a pedagogical tool to help students acquire the epistemic cognition and argumentation practices necessary to be thoughtful critics of scientific claims inside and outside of the classroom [10]. In prior work, we implemented QT<sup>S</sup>, a teacher-led, small-group discussion approach designed to promote students' scientific oral and written argumentation skills, and identified increases in students' scientific argumentation [27]. In the present study, we utilized the AIR model as the framework to examine and analyze high school chemistry students' epistemic cognition (i.e., epistemic ideals and reliable processes) [30] and scientific argumentation as they participated in small-group, scientific discussions about, around, and with scientific text or content.

## *4.1. RQ1. Documented Evidence of Students' Epistemic Cognition*

#### 4.1.1. Epistemic Ideals

Most of the identified epistemic ideals in the present study are in alignment with those in the extant literature. For instance, four of the categories we identified, namely connections to other knowledge, internal structure of an explanation, empirical evidence, and good communication, align directly with the categories of epistemic ideals (i.e., "connection to other knowledge," "internal structure of an explanation," "clearly presented and understandable," and "present and future connections to empirical evidence") set forth by Chinn and colleagues [30] (pp. 433–434). The codes organized into these four categories also appear to align with the broader extant literature. For instance, the epistemic ideal of logically sound is similar to the criteria of plausibility [69,75] or logical consistency [76]. The epistemic ideal of comprehensive aligns with what scientists consider to be good models, which can strike a balance between complexity and parsimony [77]. Similarly, clearly understandable, which is classified into the category of good communication in our study, is a communicative criterion that scientists use to evaluate models. Indeed, as Pluta et al. argued, if a successful model is not presented in a way that scientists understand, it will not be accepted and thus, the communication criterion has to be fulfilled prior to evaluating epistemic quality [70].

However, one epistemic ideal (i.e., standards of testimony) proposed by Chinn and colleagues was not identified within these four science discussions. Chinn et al. proposed a category of epistemic ideals related to standards of testimony, which indicates the criteria that must be met to believe testimony from others [30]. For example, a student could use quotes from a climate scientist to support their argument about climate change on the grounds that the scientist is an expert in the area. In this dataset, we found no instances where students referred to standards of testimony. They occasionally brought forth testimony as evidence (e.g., provided experiential accounts from their relatives) and tended to use personal experience when discussing familiar topics. However, they were stating that a knowledge claim cohered with the information they received as testimony, rather than asserting that a particular testimony was valid according to some standard. This indicates that students may need more explicit instruction that guides them to evaluate the source of evidence within the context of small-group discussions so that they may be more likely to employ standards of testimony.

#### 4.1.2. Reliable Processes

Throughout the discussions, we found evidence of students using a number of normative reliable processes, including experimentation and observation, which are consistent with Chinn et al. [30]. However, students also adopted non-normative processes when the situation allowed for it. For example, students used the process of elimination when they were asked to select one normative scientific model from four options for the airbags and nuclear fission discussions. It is important to note that the process of elimination is considered a non-normative process because it does not align with scientists' epistemic practices as delineated in prior research [34–36]. A plausible reason for the use of the process of elimination could be that, when students were given four models with overlapping claims to choose from and the goal of the task emphasized determining the best model, students were likely to apply a simple heuristic to narrow down the options. Another possible reason is related to the level of knowledge students had about argumentation and normative reliable processes in science. The explicit instruction on argumentation delivered in spring may have made it less likely for students to use non-normative processes such as the process of elimination when evaluating scientific claims.

#### *4.2. RQ2. Model Format and Explicit Instruction in Relation to Epistemic Cognition and Argumentation*

The epistemic practices that individuals engage in vary widely depending on contextual factors [15,78]. In the current study, the influence of context pertained to changes in the model format and explicit instruction provided during the QT<sup>S</sup> intervention. Following the changes in these two contextual factors, instead of using the process of elimination, students tended to use epistemic ideals such as coherence with normative disciplinary knowledge and evidential support to negotiate ideas and engaged in argumentation more extensively during the model-based portion of the discussion.

A plausible reason for such change is that the revised model format precluded the ability to eliminate models, as the multiple claims presented in the model in spring were distinctly different from each other. Students needed to go through each of the three claims and discuss why certain claims were either more or less supported. Further, the explicit instruction on argumentation and the updated QT<sup>S</sup> catalyst with external, visual cues related to essential argumentation components (see Figure 2) also seemed to promote the use of certain epistemic ideals (e.g., evidentiary support) and argumentation as students evaluated each claim. For instance, at the beginning of the thin films discussion when students were evaluating the presented model, Grace used the epistemic ideal of evidentiary support and explicitly referred to the QT<sup>S</sup> catalyst by asking, "What do you have written down as your reasoning and evidence?"

This finding is informative in terms of understanding and cultivating apt epistemic performance (i.e., "performance that achieves valuable epistemic aims through competence" [79] (p. 353)) in science classrooms. Barzilai and Chinn examined prior models of epistemic cognition and proposed that the primary goal of epistemic education is to enable learners to achieve apt epistemic performance [79]. For the first two discussions, students used a process of elimination to approach the model-evaluation task. Using the definition of apt epistemic performance, a process of elimination could have resulted in success (i.e., choosing the correct model), but it would not have necessarily been apt (i.e., choosing the correct model by providing reasons and evidence to support all aspects of the model). Therefore, to promote students' apt epistemic performance via scientific discourse [2], it is necessary to consider the model format for discussion and encourage students to negotiate ideas about scientific phenomena using normative practices in science.

Another implication for instruction would be to optimize explicit instruction on argumentation by providing instructional tools such as a graphic organizer worksheet that visually demonstrates the components of argumentation. Such visual cues and external representation of abstract concepts can potentially support the use of argumentation components [43,66]. As students fill out these worksheets in preparation for the discussion, they are more likely to think about and negotiate ideas about these essential argumentation components during the discussion. As a result, students may be more likely to bring forth and evaluate scientific arguments by querying or providing reasoning and evidence to engage in deeper thinking about scientific phenomena.

#### *4.3. RQ3. The Relation of Epistemic Cognition to Authentic Questioning and Argumentation*

As evidenced across the four QT<sup>S</sup> discussions in the current study, the relationship between the use of epistemic ideals and the initiation of authentic questions appears to be bi-directional. A plausible explanation is that when students used epistemic ideals to decide whether knowledge should be accepted as valid and found the epistemic ideal to be unmet, the dissonance prompted them to query why that knowledge claim did not meet the ideal students had in mind, and thus led to the initiation of an authentic question. On the other hand, when students responded to an authentic question, as they justified their claims by referring to their personal experience or prior knowledge as evidence, they invoked epistemic ideals accordingly.

When students invoked epistemic ideals in their responses to an authentic question, they were also likely to bring forth arguments individually or collectively, in the form of an EE, ET, or CT, indicating a close relationship between epistemic cognition and argumentation. This is possibly because the epistemic ideals students held may have guided the kinds of reasoning, evidence, and arguments they brought forward and the type of disciplinary standards they used to evaluate the presented arguments. To construct an EE, students necessarily needed to build an argument by providing evidence or reasoning that met their epistemic ideals. When students' epistemic ideals were not met, the resulting dissonance indicated a gap between what was being discussed and what students knew (see Excerpt 6) and was effective in triggering challenges during small-group discussions, which are characteristic of ET. That is, students may raise a challenge when someone says something that does not cohere with their prior knowledge. This also indicates that when students have the normative epistemic cognition knowledge necessary to critique claims, they are more likely to engage in discourse that involves thoughtful critique [30].

Finally, the relationship between epistemic ideals and CT as evidenced in Excerpt 7 reveals that even the building of knowledge without the raising of counterarguments or challenges involved the use of epistemic ideals. A possible explanation is that knowledge building involves making connections to one's prior knowledge, ideas, and explanations [80–82], as well as cognitive processes such as asking questions that probe for explanations, interpreting and evaluating information, and justifying arguments [83–85]. Such cognitive processes would require the application of epistemic ideals to help elicit or formulate responses that cohere with the students' normative disciplinary knowledge, personal experience, or other forms of prior knowledge. This also suggests the importance of teaching scientific knowledge, skills, and practices in tandem to promote student construction of knowledge through cumulative talk, as was done in our larger QT<sup>S</sup> study [27].

#### *4.4. Limitations*

In this study, we elected to conduct a close analysis of one student group's work over the course of an instructional year to deeply examine their use of epistemic cognition and argumentation in small-group discussions. Our emphasis in this in-depth qualitative study was on ecological validity rather than external validity; therefore, causal claims are not warranted, but our findings do deeply capture epistemic cognition and argumentation in an authentic context.

Students' argumentation and epistemic cognition were observed within the context of the QT<sup>S</sup> intervention. As part of QT<sup>S</sup>, students received explicit instruction on argumentation and conducted regular QT<sup>S</sup> discussions about scientific phenomena including evaluating a scientific model, and thus, these findings may not be generalized outside of this context. Instead, we argue that this study contributes to a foundation from which to further investigate students' emergent argumentation and epistemic cognition in other contexts, for example, while engaging with conflicting scientific claims [86], or to further examine how features of pedagogical practices can support students' development of argumentation practices and epistemic thinking.

#### **5. Conclusions and Future Directions**

The current study adds to the growing body of work examining situated epistemic cognition during authentic scientific practices. Within the scope of the current research, we observed students' epistemic cognition through the lens of epistemic ideals and reliable processes, examined the role of contextual factors in the occurrence of epistemic cognition and argumentation, and investigated the relationship between students' epistemic cognition and their scientific argumentation during the QT<sup>S</sup> intervention. Such findings not only contribute to the field's understanding of students' epistemic cognition and argumentation in authentic science classrooms but also inform research and practice on how to develop and design effective pedagogies in ways that promote students' epistemic cognition and argumentation.

In this study, we examined students' discourse in four science discussions within one discussion group. In future work, researchers could include additional discussion groups to better capture individual and group differences (e.g., reading comprehension) [87] and explore to what extent such differences may influence students' epistemic cognition and argumentation in science. In response to the first research question, we identified various categories of epistemic ideals and reliable processes, two components of the AIR model, in students' science discourse. Future researchers could examine under what conditions students vary in their enactment of epistemic aims, and how different aims relate to scientific argumentation, to further explore students' epistemic cognition in science classrooms. Further, we found that contextual factors (e.g., model format) guided the discussion of scientific models. These contextual factors may contribute to the occurrences of different types of epistemic ideals, reliable processes, and argumentation. Researchers could extend this line of research and examine ways in which the design of a scientific modeling task can most effectively lead to students' enactment of normative scientific practices and the extent to which different attributes of the context (e.g., scientific language, background knowledge) may relate to students' scientific practices [15,88]. Finally, we identified a close relationship between students' use of epistemic ideals, authentic questioning, and argumentation across the series of QT<sup>S</sup> discussions. This finding revealed how certain components of the QT<sup>S</sup> intervention, such as explicit instruction on authentic questions and argumentation, may promote students' epistemic cognition. Thus, future researchers could investigate other instructional components of QT<sup>S</sup> that might promote students' epistemic cognition in science via argumentation instruction and practice.

Indeed, in the face of various post-truth reasoning challenges, to help students develop their scientific reasoning competency and achieve "valuable epistemic aims through competence" (i.e., apt epistemic performance) [79] (p. 353), researchers likely need to work together with practitioners to design more authentic learning environments that engage students in discussions to explore different ways of knowing and to understand how different sources of information work and why they are more or less reliable [2].

**Author Contributions:** Conceptualization, L.W., C.M.F., R.F.D., J.A.G. and P.K.M.; Data curation, L.W., C.M.F. and R.F.D.; Formal analysis, L.W., C.M.F., R.F.D., J.A.G. and P.K.M.; Funding acquisition, J.A.G. and P.K.M.; Investigation, L.W., C.M.F., R.F.D., J.A.G. and P.K.M.; Methodology, L.W., C.M.F., R.F.D., J.A.G. and P.K.M.; Project administration, C.M.F.; Resources, L.W., C.M.F., R.F.D., J.A.G. and P.K.M.; Supervision, J.A.G. and P.K.M.; Visualization, L.W., C.M.F. and R.F.D.; Writing—original draft, L.W., C.M.F. and R.F.D.; Writing—review and editing, L.W., C.M.F., R.F.D., J.A.G. and P.K.M. All authors have read and agreed to the published version of the manuscript.

**Funding:** The data reported were collected as part of a larger project funded by the National Science Foundation (USA) through Grant No. 1316347 to The Pennsylvania State University. Any opinions, findings, and conclusions or recommendations expressed are those of the author(s) and do not represent the views of the National Science Foundation.

**Institutional Review Board Statement:** The Pennsylvania State University's human subjects review board approved this study, and appropriate human subjects procedures and guidelines were followed during all phases of the study and manuscript preparation.

**Informed Consent Statement:** Informed consent was obtained from all subjects involved in the study.

**Data Availability Statement:** The data presented in this study are available on request from the corresponding author. The data are not publicly available due to participants' informed consent in alignment with the human subjects procedures and guidelines.

**Conflicts of Interest:** The authors declare no conflict of interest.

#### **References**

