To optimize educational resources and make room for evidence-based strategies, it is important to refute neuromyths (Krammer et al. 2021; Rousseau 2021). Past research suggests that textual refutations can be used to update erroneous beliefs in general (Chan et al. 2017; Rich et al. 2017; Swire and Ecker 2018). Various types of refutation texts have been used effectively to correct educational misconceptions, ranging from texts that simply state that the information is incorrect to more complex refutations that include explanatory and personalized feedback (Lithander et al. 2021; Ferrero et al. 2020; Dersch et al. 2022). This is typically done by first presenting the misconception (e.g., “Bulls are enraged by the color red”), followed by a refutation (e.g., “Many people think that the color red enrages bulls, but this notion is false”; Rich et al. 2017; Van Loon et al. 2015). Presenting the false information together with the new (correct) information may help people revise and update their knowledge (Kowalski and Taylor 2017; Lewandowsky et al. 2012; Van Loon et al. 2015). Refutations have been used to correct students’ beliefs in neuromyths in particular. For example, Lithander et al. (2021) presented participants with a list of neuromyths (e.g., “We only use 10% of our brains”), followed by various types of refutations (e.g., “This statement is false. Although it is commonly believed that we only use 10% of the brain, this notion is false”). The results showed that all types of refutations (those with and without explanations) were effective in correcting students’ erroneous beliefs in neuromyths. Further, this corrective effect persisted for one week and one month (but see also Swire-Thompson et al. 2023; Swire et al. 2017; Paynter et al. 2019).
The current studies examined whether corrective feedback is effective for updating reported beliefs in neuromyths and reasoning based on neuromyths. We hypothesized that although people may explicitly change their response to indicate that a neuromyth is incorrect following a correction, they may continue to believe in neuromyths and use them to reason about effective learning strategies. Therefore, we designed a task to measure whether people use lingering beliefs in neuromyths to reason about learning in real-world contexts. Participants read scenarios describing a learner’s study behaviors that were based either on beliefs in neuromyths or on evidence-based research, and they were asked to indicate whether they agreed with the person’s reasoning illustrated in each scenario. This paradigm allowed us to investigate whether corrective feedback could be effective for updating not only beliefs in statements, as shown before, but also reasoning about real-world learning scenarios. Because previous research indicates that scenarios can enhance the assessment of participants’ knowledge (Tovazzi et al. 2020), we used a method that incorporates scenario evaluations. We investigated students’ and teachers’ use of corrective feedback to update their beliefs and how these corrected beliefs may influence reasoning.
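For concreteness, a minimal sketch of how agreement responses in such a scenario task could be scored is given below. The scenario labels, the binary agree/disagree coding, and the reverse-scoring of neuromyth-based scenarios are illustrative assumptions; the studies’ actual scoring procedure is not specified here.

```python
# Minimal sketch (assumed scoring, not the authors' procedure): deriving a
# reasoning-accuracy score from agreement responses to learning scenarios.
# Agreement with evidence-based scenarios counts as accurate reasoning;
# agreement with neuromyth-based scenarios is reverse-coded.
from statistics import mean

# (scenario_type, agreed) pairs for one participant; values are hypothetical.
responses = [
    ("evidence", True),    # e.g., agreeing with spaced-practice reasoning
    ("evidence", True),
    ("neuromyth", False),  # e.g., disagreeing with learning-styles reasoning
    ("neuromyth", True),
]

def reasoning_accuracy(responses):
    """Proportion of scenarios answered in line with the evidence."""
    scores = [
        agreed if kind == "evidence" else not agreed
        for kind, agreed in responses
    ]
    return mean(scores)

print(reasoning_accuracy(responses))  # 0.75 for the sample responses above
```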
Ferrero et al. (2020) found that refutations can be used to update educators’ reported beliefs in neuromyths. However, refutations did not change teachers’ intention to implement the neuromyths in their teaching practices. Thus, we examined whether feedback can be used to influence students’ and teachers’ reasoning about learning practices. We hypothesized that it might be particularly challenging to affect teachers’ reasoning (and not just their reported beliefs) following a refutation because of ingrained beliefs about learning that result from extensive training and experience (Walter and Tukachinsky 2020). Students (Experiments 1 and 2) and teachers (Experiment 3) read true and false statements (neuromyths) about intelligence and learning and indicated the veracity of the statements before receiving either corrective feedback or no feedback. Later, they were asked again about the veracity of the initial statements and, novel to this study, were asked to indicate their level of agreement with the reasoning described in various learning scenarios.
4. General Discussion
Research suggests that students and educators often endorse neuromyths—unsubstantiated beliefs about learning and intelligence (Dekker et al. 2012; Lithander et al. 2021). Subscribing to these false beliefs can lead students to choose suboptimal study strategies and educators to adopt ineffective teaching practices (Dunlosky et al. 2013; Morehead et al. 2016). Therefore, we investigated whether feedback on the accuracy of neuromyths would influence students’ and teachers’ reasoning about learning. The results showed that both students and teachers hold false beliefs about learning and the brain, in line with previous findings (Dekker et al. 2012; Ferrero et al. 2016; Howard-Jones 2014; Lithander et al. 2021). Initial beliefs in neuromyths were high across all three experiments. For example, between 79% and 94% of participants stated that they believed individuals learn better when they receive information in their preferred learning style.
However, the effectiveness of feedback on beliefs differed across the experiments, suggesting that it may depend on the test delay and the population. In Experiment 1, students who received either type of feedback (feedback only or feedback and explanation) had significantly higher accuracy on the delayed test than those who received no feedback. There was no significant difference in accuracy between the feedback-only and feedback-and-explanation conditions, replicating Lithander et al. (2021). In contrast, in Experiment 2, students who received feedback with an explanation had significantly higher accuracy on the delayed test relative to students who received feedback only; participants in both feedback conditions, however, had higher accuracy than those in the no-feedback condition. This finding diverges from the results of Lithander et al. (2021). Still, it aligns with frameworks suggesting that explanations can help people better integrate a correction into long-term memory (Kendeou and O’Brien 2014) and with research showing the benefits of providing explanations (Rich et al. 2017; Swire et al. 2017). The differing effects of explanations on accuracy between Experiments 1 and 2 may be due to the difference in delay (one week versus one month): simple feedback may be sufficient to correct neuromyths after a short delay, whereas explanations may be needed after a longer delay. Diverging from the students’ results, teachers (Experiment 3) who received feedback only were not significantly more accurate on the one-week delayed test than teachers who received no feedback. However, receiving feedback and an explanation significantly improved teachers’ accuracy on the one-week delayed test. This finding may suggest that teachers (relative to students) are more resistant to updating neuromyths and may need an explanation to change their beliefs.
As described above, feedback influenced later belief accuracy. Beyond feedback, several other predictors contributed to whether beliefs were updated. Across all three experiments, higher initial accuracy predicted accuracy on the delayed test. In Experiment 1, greater confidence in beliefs on the delayed test predicted more accurate responding to the statements, and students who reported higher reliance on neuromyths in their daily lives were less likely to update their beliefs. In both Experiments 1 and 2, students who were more confident in their beliefs were more likely to be accurate. In contrast to Experiment 1, in Experiment 2, confidence on the initial test increased the likelihood of belief updating. One potential reason for this discrepancy (despite the same stimuli and similar participants) is the difference in delay. In Experiment 3, feedback condition and reliance on neuromyths were the only significant predictors of belief updating for teachers. As with students in Experiment 1, higher reliance on neuromyths decreased the probability of accuracy on the delayed test.
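To make the kind of analysis described above concrete, the following is a minimal sketch of how delayed-test accuracy could be modeled as a function of these predictors. It is not the authors’ analysis code: the variable names, the simulated data, and the simple logistic specification are assumptions.

```python
# Minimal sketch (not the authors' code): predicting delayed-test accuracy
# from feedback condition, initial accuracy, confidence, and self-reported
# reliance on neuromyths, using a simple logistic regression on toy data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 300  # hypothetical observations (participants x items)

df = pd.DataFrame({
    "condition": rng.choice(["none", "feedback", "feedback_explanation"], n),
    "initial_correct": rng.integers(0, 2, n),  # accuracy before feedback
    "confidence": rng.integers(1, 8, n),       # 1-7 confidence rating
    "reliance": rng.integers(1, 8, n),         # reliance on neuromyths
})

# Toy outcome loosely reflecting the reported pattern: feedback, initial
# accuracy, and confidence help; reliance on neuromyths hurts.
logit = (
    -1.0
    + 1.2 * (df["condition"] != "none")
    + 0.8 * df["initial_correct"]
    + 0.15 * df["confidence"]
    - 0.2 * df["reliance"]
)
df["delayed_correct"] = rng.binomial(1, 1 / (1 + np.exp(-logit)))

# Fit the model with the no-feedback condition as the reference level.
model = smf.logit(
    "delayed_correct ~ C(condition, Treatment('none')) + initial_correct"
    " + confidence + reliance",
    data=df,
).fit(disp=False)
print(model.summary())
```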
In addition to examining changes in belief accuracy, another main goal was to investigate students’ and teachers’ reasoning based on neuromyths following feedback. For students, the effects of feedback on reasoning closely mirrored the effects obtained for beliefs (described above). In Experiment 1 (after a one-week delay), students in the feedback-only and feedback-and-explanation conditions showed better reasoning than students who received no feedback. In Experiment 2, students who received feedback with an explanation showed better reasoning than students who received feedback only or no feedback; however, receiving feedback only still predicted more accurate reasoning than receiving no feedback. Once again, this difference in student performance may relate to the length of the delay between the correction and the test (one week vs. one month). Unlike students, whose reasoning was influenced by the neuromyth corrections, teachers’ reasoning was unaffected by feedback: after a one-week delay, there was no difference in reasoning accuracy across the three conditions. This finding suggests that teachers may continue to rely on neuromyths when reasoning about real-world learning situations. It also aligns with literature indicating that people may continue to rely on erroneous information when reasoning about the world, even though they are aware that the information has been corrected (Guillory and Geraci 2016; Lewandowsky et al. 2012; Walter and Tukachinsky 2020). It is noteworthy that, across all three experiments, initial accuracy in evaluating neuromyths (correctly identifying them as false) significantly improved reasoning accuracy on the delayed test. Similarly, across all experiments, higher confidence in reasoning was associated with more accurate reasoning. In addition to these predictors, higher reported reliance on neuromyths reduced teachers’ (but not students’) reasoning accuracy.
One practical implication of these results is that feedback should be designed to target erroneous reasoning and not just erroneous beliefs. Rather than simply correcting erroneous statements (“people have different learning styles”), it may be key to correct the reasoning and behaviors that result from reliance on neuromyths (e.g., that material should be presented in different formats for different students, or that a student deemed an auditory learner should listen to an audiobook to remember it). Because these neuromyth-based practices may be well ingrained and may operate on a more implicit level (see Rousseau 2021, for a review), it may be necessary for people to practice applying this newly updated knowledge in various real-world situations to overcome automatic tendencies to rely on neuromyths when making study and teaching decisions. Of course, it is also possible that although people know that a neuromyth is wrong, they continue to hope and believe that it might be effective (see Guillory and Geraci 2010). If this is the case, other interventions that target this type of motivated reasoning may be needed.
These types of interventions may be particularly important for teachers, who, based on the current data, seem less likely to apply corrections of neuromyths to their reasoning. Teachers may be more resistant to corrections than students because they may have known and relied on neuromyths for longer. Furthermore, over the course of their careers, teachers may be more likely to be exposed to workshops, conferences, and educational materials that promote neuromyths for classroom use (Busso and Pollack 2015; Goswami 2006). Future studies might examine this hypothesis. Future studies might also focus on increasing the trustworthiness of the source to attenuate the influence of misconceptions about learning and the brain on reasoning. Though we included citations to scientific sources in the feedback-and-explanation condition, this design feature may not be sufficient to help people update their knowledge, as other studies suggest that trustworthiness is more important than expertise for the correction of erroneous information (Ecker and Antonio 2021; Guillory and Geraci 2013).
Although our findings suggest that beliefs in neuromyths and reasoning based on neuromyths can be corrected with feedback, there is little consensus about how changing beliefs might translate into teaching practices, and future research should examine this issue. Some suggest that beliefs in neuromyths do not influence educators’ teaching practices (Krammer et al. 2021). On the other hand, two recent studies indicated that beliefs in neuromyths do transfer to teaching practices (Blanchette Sarrasin et al. 2019; Hughes et al. 2020). In a large online questionnaire from Quebec, Blanchette Sarrasin et al. (2019) found that many participants (74%) endorsed the learning style myth. Of those participants, a majority (64.9%) regularly implemented this practice in the classroom, and only a small minority (2.4%) reported not implementing it in their classroom teaching. Given these mixed results, future studies should include behavioral measures to investigate whether neuromyths affect teaching practices and how correcting them may influence those practices. As others have discussed, this work should be conducted in close parallel with classroom practice (Rousseau 2021; Torrijos-Muelas et al. 2021). One technique that has been found to be effective in various formats, including classroom teaching, is using contrasting cases to provide feedback (Loibl and Rummel 2014). Future studies should explore feedback that explicitly contrasts students’ erroneous beliefs about educational misconceptions with the correct information; such an approach has promoted long-term conceptual change (Asterhan and Dotan 2018). Contrasting feedback that addresses individual students’ errors and misconceptions could be more effective than generic explanatory refutation texts.
Future studies should also focus on creating effective interventions for students. We found that students often hold erroneous beliefs about how to learn, which is consistent with previous research (Blasiman et al. 2017; Dirkx et al. 2019; Morehead et al. 2016). Therefore, finding practical interventions that can help students use better study strategies is important. Recent studies have shown that it is possible to improve students’ study strategies through training programs that promote effective learning strategies and self-regulation (McDaniel and Einstein 2020). So far, such interventions have focused on promoting effective strategies. They should also focus on correcting erroneous preconceptions about learning, as such preconceptions may impede future learning (Bransford et al. 2000; Gelman and Lucariello 2002; Lucariello et al. 2014). In other words, both correcting erroneous beliefs and teaching students to use effective strategies are important for “making the truth stick and the myths fade” (Schwarz et al. 2016).
There are limitations to the current study. For example, in Experiment 1, we measured the endorsement of neuromyths using a standard true/false response scale. Recently, this format has come under scrutiny because neuromyths may not constitute one coherent construct (Horvath et al. 2018; Krammer et al. 2021; Macdonald et al. 2017). This potential issue led us to conduct Experiments 2 and 3 using Likert scales, which have been adopted in several studies. However, future research is necessary to understand how best to measure beliefs in neuromyths. Also, we examined one sample of teachers, and the results may differ with a different sample; future research should examine the effectiveness of corrections for various groups of teachers and attempt to tailor these corrections to those groups.

Another potential limitation is that the data were collected online using MTurk. Using an online sample can be a strength because it allows for responses from a broad sample of teachers. However, some may worry about data quality from MTurk participants on the grounds that these participants may not pay attention or take studies as seriously as other participants. To address this question, we examined participants’ performance on the math tests in sessions 1 and 2 as a proxy attention check. Very few participants failed to answer all of the math problems within the allotted time (1.35% in session 1 and 1.45% in session 2), suggesting that very few participants were likely not attending to the task. On average, participants completed most of the math problems (76% in session 1 and 85.2% in session 2), and of these, they answered 86% and 88% correctly in sessions 1 and 2, respectively. The math problems had some complexity to them (e.g., (72 − 50) × 2 = ?; 10 + 7 − 5 = ?; 44 − 20 + 10 = ?), and participants were given only 6 s to complete each one, so they needed to actively attend to the task to respond within the tight time limit. Thus, the fact that most people completed the task on time and answered a relatively high percentage of the items correctly suggests that the majority of participants in Experiment 3 were paying attention to the study. In addition, only participants who completed each session of the study were included in the analyses, suggesting a relatively high degree of engagement with the study.
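As an illustration of this kind of data-quality screen, the sketch below computes per-participant completion and accuracy rates from hypothetical trial-level data; the column names and data are assumptions, not the study’s actual records.

```python
# Minimal sketch (hypothetical data and column names): summarizing the timed
# math task as a proxy attention check. Each row is one math problem trial,
# flagging whether the participant responded within the 6 s window and
# whether the answer was correct.
import pandas as pd

trials = pd.DataFrame({
    "participant": [1, 1, 1, 1, 2, 2, 2, 2],
    "answered":    [True, True, True, False, True, True, True, True],
    "correct":     [True, True, False, False, True, True, True, False],
})

# Proportion of problems completed within the time limit, per participant.
completion = trials.groupby("participant")["answered"].mean()

# Accuracy computed only over the problems that were answered in time.
answered = trials[trials["answered"]]
accuracy = answered.groupby("participant")["correct"].mean()

# Flag participants who answered none of the problems in time (analogous to
# the 1.35-1.45% of cases reported above).
failed_all = completion[completion == 0].index.tolist()

print(completion, accuracy, failed_all, sep="\n")
```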
In sum, the results of the current studies suggest that students and teachers can update initial erroneous beliefs about learning and the brain, partially replicating previous research. It is promising that simple feedback can help students update erroneous beliefs and influence their reasoning, because such feedback can easily be implemented in the classroom. However, more research is needed to understand how to make feedback effective so that educators no longer rely on corrected misconceptions when reasoning about learning. Based on the results of the current studies, educators may update their beliefs when given feedback, but these updated beliefs do not translate into better reasoning about learning and the brain, where educators may still rely on their original erroneous beliefs. For now, our results suggest that students’ and educators’ erroneous beliefs can be updated, but reasoning may not be influenced by this belief updating.