**2. Background**

As students evaluate paper-based or web-based information, they must apply critical thinking skills, which involve the ability to analyze, assess, and reconstruct information [16]. Dewey [17] (1933) considered critical thinking to be a stance or disposition in which a learner actively applies reflective thinking. This view situates critical thinking within a constructivist theoretical perspective. Dewey suggested that learners think critically when "selecting and weighing the bearing of facts and suggestions as they present themselves, as well as of deciding whether the alleged facts are really facts and whether the idea used is a sound idea or merely a fancy" (pp. 119–120). Evaluating online information also reflects a *new literacies* perspective. A dual-level theory of New Literacies conceptualizes new literacies on two levels: uppercase and lowercase new literacies [18]. In general, New Literacies (uppercase) attempts to explain the phenomenon of new literacies (lowercase) created by the emergence and constant influence of technology and the expanding definitions of literacy [18]. As patterns of findings evolve from new literacies studies, they inform this theory [18]. Critical literacies are among the principles of New Literacies that appear to be common across the research and theoretical work taking place.

The ability to think critically is a key factor in evaluating online information and becoming web literate [18,19]. Readers must become healthy skeptics [19] of online information, developing what we call reliability reasoning [5] to determine deceptions and truths found on the internet. Because we live in a world of convenient internet access and abundant information, teachers must understand, teach, and model web literacy skills [2], which entail the knowledge and skills required to locate, evaluate, synthesize, organize, and communicate information found online [2,20]. As Dalton [21] (2015) reported, "Web literacy is huge. It's everything we do on the Web" (p. 605). Much of the literature related to web literacy skills focuses on the ability to evaluate the content of an article or other information found on the internet. We expand on current discussions to include the search processes that lead to the desired content, suggesting that the evaluation of information begins early in the search procedure, prior to the selection of a particular website or article. The process of searching the internet and thinking critically about online information is often referred to as web literacy [2,20]. Students must understand how to conduct effective research, and part of this process requires them to understand the massive nature of the internet. A typical internet search returns millions of website suggestions. Students need basic knowledge of what a browser is and an understanding that an online search provides a nearly unlimited amount of content. Students also need to practice evaluative skills and reliability reasoning in order to recognize ads and inappropriate or unrelated content.

When a search is initiated, internet users can see the number of "hits" a search produces in various ways. Google displays the number of websites the search returned; Figure 1 shows that a search for "dolphins" yielded about 285,000,000 results. When using a tablet such as an iPad, Safari is typically the browser used. In Safari, the number of results is not listed, but users can select "more results" at the bottom of the search results page.


**Figure 1.** Google search result.

Because of the vast amount of information on the internet, the ability to narrow a search plays a key role in finding information. Teaching students to narrow online searches enables them to significantly reduce the amount of information they must sift through. There are many ways to narrow a search, including altering keyword phrases, using quotation marks, or applying Boolean terms. A search can be narrowed further with tools such as the Google toolbar, which enables internet users to conduct advanced searches using criteria such as language, readability, file type, usage rights, or other settings. For example, a Google search for *fake news* yields approximately 992,000,000 results. An advanced search can apply inclusionary or exclusionary criteria, such as requiring the words "fake news" to appear in the title; a search of this nature yields 2,520,000 results. The results could be narrowed further by selecting language, location, date-range, or domain options until a manageable number of results with a specific focus is curated.

After a search is conducted and potentially narrowed, the next step is to determine which website to select for further examination. In our work, we have noticed that many students go straight to images, searching for visuals; as adult learners, we sometimes do this as well. However, ads and suspicious content may be avoided by applying evaluation skills early in the search process. Once a website is selected, evaluation continues as students examine the website's content for relevance and accuracy. Reliability reasoning is no easy task! One can check a website's URL for clues about its content. Internet users must understand the domain and extension (.edu, .org, .com), find the author, and utilize the many other clues URLs may provide. For example, a tilde (~) is a clue that the website is a personal page, authored by an individual without review or validation of content. In addition, suspicious content can be cross-referenced with other sites. Web literacy skills require critical thinking, a necessary skill in the information age.
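The URL clues just described (the domain extension and the tilde that often marks a personal page) can be extracted programmatically. The sketch below is our own illustration, not the authors' instrument; `url_clues` is a hypothetical helper built only on Python's standard library.

```python
from urllib.parse import urlparse

def url_clues(url: str) -> dict:
    """Pull simple reliability clues out of a URL: the domain
    extension and whether the path suggests a personal page."""
    parsed = urlparse(url)
    host = parsed.netloc.lower()
    # The extension is the last dotted segment of the host (e.g. .edu, .org, .com).
    extension = "." + host.rsplit(".", 1)[-1] if "." in host else ""
    return {
        "extension": extension,
        "personal_page": "~" in parsed.path,  # tilde suggests an unreviewed personal page
        "host": host,
    }

print(url_clues("https://example.edu/~jsmith/dolphins.html"))
# {'extension': '.edu', 'personal_page': True, 'host': 'example.edu'}
```

Such automated clues are only a starting point; as the section notes, extensions like .org are not always reliable, so cross-referencing content remains essential.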

#### **3. Materials and Methods**

Since 2016, we have conducted ongoing research to learn more about elementary students' web literacy skills [6]. In order to assess web literacy skills, we initially developed the Concepts of Online Text (COT), which measured the knowledge of online navigation and text features of students in grades 1–5. Traditional assessments of concepts about print inspired the development of the instrument, which includes an observation protocol for online text, similar to the observation protocol Marie Clay [22] (1979) used with print-based text. Table 1 provides a comparison of the COT and Clay's concepts about print assessment. The COT instrument consists of seven tasks that align with two main constructs: (1) website orientation and navigation and (2) knowledge of webpage text features. Construct 1 involves the orientation of a website, including the understanding of principles involving the directional arrangement of text and media. Construct 2 involves the identification and understanding of webpage text features such as author, publisher, titles, headings, menus, captions, graphics, and hyperlinks. While emerging readers typically master print awareness and concepts of print in kindergarten [23], research conducted with the COT, published in 2018, indicated that knowledge of text features and website navigation develops during the later elementary years [6].


**Table 1.** Considerations for concepts of online text assessment based on concepts about print assessment.



Source: Pilgrim et al., 2018 [6].

The COT-R, an updated protocol, extends the assessment instrument to evaluate knowledge of internet research. The COT-R instrument added a research component to the assessment, which included additional constructs: (3) Application of Research Skills and (4) Evaluation of Online Information. Construct 3 involves the ability to use digital skills to search, save, cite, and share information. Construct 4 involves the ability to evaluate search results, websites, and content for relevance and the credibility/trustworthiness of sources. For the purpose of this study, we focus on construct 4, the evaluation of information found during an authentic search on the *wild wide web.*

#### *Data Collection and Analysis*

In the spring of 2020, we began recruiting teachers across the US to administer the COT-R to students in grades 1–5, with the goal of administering the assessment to at least 500 students. Prior to the pandemic, we recruited teachers from four states—one west coast state, an east coast state, and two southern states. Teachers completed a brief training session, in which the interview protocol administration and scoring processes were explained. After gathering both guardian consent and student assent, teachers conducted one-on-one interviews using the COT-R protocol. Teachers began data collection, which was interrupted temporarily as the doors of schools across the nation closed. Although data collection resumed during the fall of 2020, recruiting teachers to collect data was difficult, as teachers were overwhelmed with COVID-19-related issues. Therefore, data collection continued through the spring of 2021. A total of 354 first- through fifth-grade students participated in this study, including 183 female participants and 171 male participants. The authors and certified teachers trained to give the assessment collected the data. Table 2 presents the number of participants per grade level.

**Table 2.** Number of participants per grade level.


Students in this study used a laptop or desktop computer with the Google search engine. The research tasks began with a prompt in which students were asked to search for an animal, specifically a dolphin. If a participant needed help with spelling, the administrator assisted by spelling the word aloud or, if needed, typing it for the student. Many students selected the target word from the auto-complete drop-down box, and a few used the microphone feature to start their search. Students then examined and discussed their search results. Two tasks were assessed: the ability to narrow information and the ability to evaluate information. The first task was evaluated with the following prompts: (1) Show me how many websites your search provided and (2) Show me how you could narrow the dolphin search to find what dolphins eat. Answers that received credit for question one had to be specific. For example, a student might say, "A search for 'dolphins' provides 86,000 sites." Most searches reveal multiple pages of sites, so a child earned credit for the question if he/she understood that results extend beyond those visible on the first page; counting the links or websites visible on the first screen was not correct. Examples of actions that received credit on question two included adding keywords, typing a more specific question, or using quotation marks (around two or more words). Boolean terms (and, or, not) or the use of an advanced search also counted as appropriate actions. If students simply clicked on a link or indicated they did not know how to narrow a search, they received no credit. Teachers were provided a space to take notes during the administration of the assessment.
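As an illustration only (not the study's actual rubric, which relied on trained administrators' judgment), the question-two credit rules could be approximated as simple text checks on the query a student typed:

```python
# Hypothetical sketch of the question-two scoring rules: credit for added
# keywords, a quoted multi-word phrase, or Boolean terms; no credit otherwise.
BOOLEAN_TERMS = {"and", "or", "not"}

def earns_credit_q2(original_query: str, new_query: str) -> bool:
    orig_words = set(original_query.lower().split())
    new_words = new_query.lower().replace('"', " ").split()
    added_keywords = len(set(new_words) - orig_words) > 0
    quoted_phrase = (new_query.count('"') >= 2
                     and len(new_query.split('"')[1].split()) >= 2)
    boolean_used = any(w in BOOLEAN_TERMS for w in new_words)
    return added_keywords or quoted_phrase or boolean_used

print(earns_credit_q2("dolphins", "what do dolphins eat"))  # True: keywords added
print(earns_credit_q2("dolphins", "dolphins"))              # False: search unchanged
```

A rule-based check like this cannot judge whether a typed question is genuinely "more specific," which is why human scoring was used in the study.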

The second task was evaluated with the following questions and prompt: (3) How do you know which website will provide the best information about your topic; (4) Click on one of the websites you found. How can you tell if this website is relevant to your search? In other words, how can you tell if this website will give you the kind of information you need; and (5) How can you tell if this website will provide correct information that is true, or accurate? Examples of answers that received credit on question 3 included: the website titles/subtitles, context clues, and credible sources. Examples of answers that received no credit included: the first link, an advertisement, or images. Students received credit on question 4 if they were able to determine that the website(s) they selected matched their topic. For example, the child might say, "It is about dolphins." A website about the football team, the Miami Dolphins, would be an inappropriate response to this question. Students received credit on question 5 if they were able to explain a way to check the validity or credibility of the website. They could respond with answers such as "Go to the home page and look for information about the publisher," "It is part of the Family Education Network (reliable source)," "Cross-reference the website," or "I trust the author because s/he is a scientist (or other occupation)." Examples of answers that received no credit included: it is the first website; it is not an advertisement; and it is a .org or .net (not always reliable). Again, teachers were provided a space to take notes during the administration of the assessment.

In order to analyze the data, we examined student responses for the five tasks reflective of Construct 4. These binary data were analyzed quantitatively: students scored a "1" for a correct response and a "0" for an incorrect response, and the number of correct responses on each task for each grade level was calculated and converted to a percentage. Teacher notes on the surveys were a potential qualitative data source. Although few teachers included written notes, this qualitative source was analyzed by searching for themes that emerged from each task/question.
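The percent-correct calculation described above can be sketched as a short aggregation over binary scores. This is a minimal illustration of the arithmetic, not the authors' analysis code, and the sample records below are fabricated for demonstration, not study data.

```python
from collections import defaultdict

def percent_correct_by_grade(records):
    """records: iterable of (grade, score) pairs, where score is
    1 for a correct response and 0 for an incorrect response.
    Returns percent correct per grade, rounded to one decimal."""
    totals = defaultdict(lambda: [0, 0])  # grade -> [correct, attempted]
    for grade, score in records:
        totals[grade][0] += score
        totals[grade][1] += 1
    return {g: round(100 * c / n, 1) for g, (c, n) in totals.items()}

# Fabricated example data: (grade level, binary score) per response.
sample = [(1, 0), (1, 0), (1, 1), (3, 1), (3, 1), (3, 0), (5, 1), (5, 1)]
print(percent_correct_by_grade(sample))
# {1: 33.3, 3: 66.7, 5: 100.0}
```

Running this per task yields the task-by-grade percentages reported in the Results.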
