Article

Computer Self-Efficacy and Reactions to Feedback: Reopening the Debate in an Interpretive Experiment with Overconfident Students †

by Carlo G. Porto-Bellini 1,*, Malu Lacet Serpa 2 and Rita de Cássia de Faria Pereira 1
1 Department of Business Administration, Universidade Federal da Paraíba, João Pessoa 58051-900, Brazil
2 Human Resources Management, Universidade Federal de Pernambuco, Recife 50670-901, Brazil
* Author to whom correspondence should be addressed.
This article is a revised and expanded version of a paper entitled "Mirror, mirror on the wall: An experiment on feedback and overconfidence in computer-mediated tasks", which was presented at the 24th Americas Conference on Information Systems (AMCIS), New Orleans, LA, USA, 16–18 August 2018.
Behav. Sci. 2025, 15(4), 511; https://doi.org/10.3390/bs15040511
Submission received: 26 January 2025 / Revised: 27 March 2025 / Accepted: 31 March 2025 / Published: 11 April 2025
(This article belongs to the Section Organizational Behaviors)

Abstract:
The accuracy of self-perceptions and the maturity to handle feedback received from others are instrumental for one’s mental health, interpersonal relations, and effectiveness in the classroom and at work. Nonetheless, research on one’s computer self-efficacy (CSE) and reactions to feedback on task performance has been ambiguous in terms of quality, motivations, and results. A particularly important case involves overconfident individuals, i.e., those with unrealistically high CSE beliefs. Using the theoretical lenses of technology use effectiveness along with a mixed methodological approach of thought and lab experiments, in which 54 undergraduate students performing computer-aided tasks were randomly assigned to groups receiving different feedback on task performance, we found that valence-based feedback possibly introduces unnecessary information for adjusting the levels of CSE and the actual performance of overconfident students. This finding contributes to knowledge on whether feedback is important when skills and learning naturally mature across tasks, as well as on how judicious one is when processing externally motivated feedback. This study additionally offers a novel three-dimensional CSE construct, an instrument to measure the construct, and a scenario-based tool to conduct experiments with sequential decision tasks in the classroom. The practical implications include the planning of tasks and feedback in the classroom, with further insights into organizational hiring, training, and team building.

1. Introduction

Self-perceptions and external perceptions about the self are natural concerns for all individuals, since such perceptions are closely tied to the self-confidence and self-esteem that underpin substantive adulthood, mental health, and social life (Chevrier et al., 2020). Moreover, developing appropriate self-perceptions may help avoid the emergence of extreme attitudes, including suicidal ideation, as reported in all parts of the world and across cultures (e.g., Wetherall et al., 2019; Orozco et al., 2018; Nguyen et al., 2019; Lee et al., 2006; Richardson et al., 2005). In the field of human–computer interaction, an important part of perceptions about the self involves how individuals see their own capabilities to use digital technologies to achieve arbitrary goals. According to the seminal models on technology use, individuals will use a system based on self-efficacy beliefs and perception-based attitudes towards that system and towards the use environment (e.g., Davis, 1989; Compeau & Higgins, 1995; DeLone & McLean, 2003; Venkatesh et al., 2016). Such a framing of technology use is more motivational in nature and ignores whether the user is effectively capable of operating the system in reference to stated purposes, i.e., whether the user has the needed theoretical abilities and practical skills in addition to certain positive attitudes towards the system and the use environment.
We designed a study on how technology users perceive themselves and react to external perceptions about themselves (from feedback provided by an authorized agent) regarding their actual performance in computer-aided tasks. We aimed to estimate how mindful one is about performing tasks on the computer and how he or she processes others’ assessments about his or her performance and correspondingly adjusts attitudes and behaviors. In doing so, we contribute to the emergent stream of research on technology use effectiveness (Burton-Jones & Grange, 2013; Porto-Bellini, 2018), in some sense improving the dominant models of technology use by framing not only what motivates use but also what enables one to achieve arbitrarily defined use purposes, i.e., an understanding of what needs to be achieved and of the means to do so.
Moreover, we have paid special attention to the presence of overconfidence in individuals performing computer-aided tasks, which is a particularly important trait in the study of human cognition and decision-making (Kahneman & Klein, 2009). Overconfidence is here framed as a post hoc measure of unrealistically high self-efficacy beliefs, which represents a cognitive limitation towards one’s effectiveness in using digital technologies. We therefore asked whether feedback on performance, as well as the valence of that feedback, has any impact on the computer self-efficacy (CSE) beliefs and actual task performance of overconfident individuals. To answer this question, we designed an experiment with undergraduate students of Business Administration who were regularly assigned to perform computer-aided decision tasks. We submitted the participants to two sequential, similar computer-aided decision tasks and measured three newly defined dimensions of CSE before each task (general CSE, problem-specific CSE, and tool-specific CSE) and, after each task, we measured their self-perceptions of performance as well as their actual performance. In between the two tasks, we provided each participant with one of three types of manipulated feedback on their task performance: neutral, negative, or positive. An important characteristic of the experiment is that it involved a realistic computer-aided business decision under real assessment by an authorized agent (the course’s instructor), rather than an arbitrary instance of computer use. With such an experimental design, we studied technology use through the theoretical lenses of use effectiveness, which requires the definition of the purposes of use and assumes that more than one perspective of use effectiveness may exist (in the present study, the perspective of each student and the perspective of their instructor).
This study contributes to the literature on many fronts. First, it empirically investigates the effects of feedback received from supervisors (instructors, managers, or team leaders) regarding one’s performance in computer-aided problem solving, and it compares the levels of actual performance with the levels of CSE. It is also among the few studies, if not the first, to jointly discuss the roles of (motivational) attitudes, (theoretical) abilities, and (practical) skills. It additionally gives answers to calls for research on providing feedback on digital skills (Piccoli et al., 2020; Aesaert et al., 2017), on learning issues being “hindered by biased attention allocation and overconfidence” (Ehrlinger et al., 2016, p. 99), on the effects of immediate feedback on student learning (Chen et al., 2018), on the relation of one’s mindset, reactions to feedback, learning, and performance (Cutumisu & Lou, 2020; Cutumisu, 2019), and on the relation of self-efficacy, feedback, and self-regulation (Kefalidou, 2017). Moreover, it contributes to the long-lived debate about the relationship between confidence and accuracy (González-Vallejo & Bonham, 2007), particularly whether CSE predicts technology use effectiveness (Porto-Bellini et al., 2016). For practice, this study has numerous implications for students, instructors, and organizational managers, such as regarding the assignment of computer-aided tasks to students and workers and the appraisal of their performance.
This article is organized as follows. In the literature review section, we present the key constructs of this study, i.e., CSE, overconfidence (a special case of CSE), task performance, feedback, and learning. Next, we discuss a mixed experimental design to study how feedback received from supervisors may impact an individual’s CSE, self-perceptions on task performance, and actual performance. In developing the methodological approach, we propose that CSE can be measured in three dimensions to better isolate and understand specific issues in computer-aided decision-making. Subsequently, we analyze and discuss the empirical data according to the approach of analytical reasoning, which is characterized by insightful, ad hoc procedures to process unusual datasets. And finally, we discuss the implications of our findings as well as limitations and perspectives for future research.

2. Literature Review

A review by Venkatesh et al. (2016) concerning the first decade of research on the unified theory of acceptance and use of technology (UTAUT) showed that studies on information technology (IT) acceptance, adoption, and use relied on perceptions about a system’s features, the use environment, or someone’s self-efficacy. However, perceptions do not necessarily correspond to reality, much like how in-use technologies may not correspond to espoused ones (Orlikowski, 2000). In fact, people may hold unrealistic views about technology, the use environment, and even about themselves, such as when expected performance is not confirmed vis-à-vis actual performance (Beck & Schmidt, 2018). We thus resort to the works of Burton-Jones and Grange (2013) and Porto-Bellini (2018) on technology use effectiveness to discuss the presence of individuals in educational and professional settings who may not hold accurate views about themselves in terms of the actual competencies needed to perform a task on the computer. The use effectiveness perspective extends traditional views of technology use by focusing on how mindful one is about his or her technology-related motivational attitudes, theoretical abilities, and practical skills. Moreover, effectiveness is a relativistic measure of the degree to which someone achieves his or her arbitrarily defined technology use purposes, i.e., each stakeholder in a given technology use situation will hold a particular view of effectiveness in the light of his or her personal perspectives about what effectiveness is (what he or she wishes to do) in that particular situation (Porto-Bellini, 2018).
In the next sections, we review the seminal works and research opportunities on the key concepts of our study (self-efficacy beliefs, task performance, feedback, and learning) and connect them with the technology use effectiveness rationale. Self-efficacy beliefs are instrumental for one to achieve effectiveness in technology use (an individually defined expectation about task performance). Feedback, in turn, is an opportunity to demonstrate that different perspectives may exist about task performance and that the stakeholders must negotiate a shared expectation as per the demands of the situation. For instance, students and instructors may hold different views about acceptable student performance in the classroom. While all views are acceptable as per the concept of technology use effectiveness (Porto-Bellini, 2018), the stakeholders must share a common expectation if a larger stakeholder is present (in the case of classroom activities, the larger stakeholder is the school’s evaluation system). Finally, learning is a natural process through which people adapt their views about the world and about themselves in order to meet those shared expectations.

2.1. Computer Self-Efficacy

Self-efficacy refers to one’s judgment about the competencies needed to perform tasks in view of expected performance (Bandura, 1997). Relatedly, the construct of computer self-efficacy (CSE) refers to the ability/efficacy to use a computer to complete a task (Compeau & Higgins, 1995; Marakas et al., 1998). While CSE assessments do not reveal one’s actual skills, the levels of CSE may be related to one’s motivation to develop the needed skills to use a computer in effective ways (Tzeng, 2009), thus impacting how much effort, persistence, and interest the individual will invest in a task, as well as the type of environment in which the individual desires to work (Gist, 1987).
An important issue is how to measure CSE. To the best of our knowledge, Murphy et al. (1989) and Gist et al. (1989) have proposed the first operational models of CSE. Murphy et al. (1989) conceived CSE in three factors, namely beginning level computer skills, advanced level computer skills, and mainframe computer skills. Such factors address general and specific computer skills as well as skills related to the technological apparatus in use. Gist et al. (1989) followed a similar rationale to conceive CSE in two factors, namely computer self-efficacy (general skills about the computer context) and software self-efficacy (skills about a specific tool). Compeau and Higgins (1995) then discussed CSE within the interests of the IT literature and defined it parsimoniously as a single-factor, 10-item construct addressing task issues. Soon after, Marakas et al. (1998) elaborated CSE as an IT concept conveying two constructs (general CSE and task-specific CSE) that “cannot be treated interchangeably from either a measurement or manipulation perspective” (p. 129). A decade later, Downey and McMurtrey (2007) criticized all available CSE measurement models and developed a task-based, summated general CSE. However, their model was specific to six arbitrarily chosen tools and conceived the tasks as the mere operation of those tools, i.e., they did not address for which purposes the tools were used―the real tasks of interest. Subsequently, in a meta-analysis of 102 empirical CSE studies, Karsten et al. (2012) concluded that serious problems existed in how CSE had been defined and measured, but they did not propose any alternative model. Gupta and Bostrom (2019) revisited Marakas et al.’s (1998) CSE model as it had not given full attention to the tasks that are performed with the mediation of technology. They thus proposed that CSE should be measured according to the specificity of technology and the complexity of the task. More recently, the hypotheses that guided Compeau and Higgins’ (1995) seminal work on CSE were submitted to a replication study three decades after the original study and with the participation of one of the original authors (Torres et al., 2022). While the original rationale for the construct was preserved, about half of the original hypotheses were not accepted in the replication. And finally, Ulfert-Blank and Schmidt (2022) developed a CSE scale based on a framework by the European Commission to address multiple heterogeneity problems found in their literature review. Their newly developed CSE construct (called digital self-efficacy) includes the following five dimensions: information and data literacy, communication and collaboration, digital content creation, safety, and problem solving. It is apparent to us, however, that two of these dimensions may not be necessary in stand-alone IT uses or when a tool is used by one individual only, i.e., not by interacting individuals. These dimensions are safety and communication and collaboration.
It is thus apparent that CSE is still a construct undergoing conceptual refinement and empirical validation. We advocate that, in the light of technology use effectiveness, the concept of CSE should merge the rationale of all previous models so as to include three distinct components based on the interplay of reality and representation (Burton-Jones & Grange, 2013), as follows: (1) the real-world purpose of technology use, (2) the computer as the representational context of that purpose, and (3) the specific software tool as the representational mechanism to aid the user in achieving the stated purpose. We will offer a way to implement these three components when describing the experimental design of our empirical study. We will call the three components, respectively, problem-specific CSE, general CSE, and tool-specific CSE.

2.2. Self-Efficacy and Performance

Even if a positive correlation was originally expected to exist between self-efficacy beliefs and actual task performance (Bandura, 1991), self-regulation theories were criticized in the past for being silent on certain self-efficacy levels contributing to individuals allocating fewer resources than needed to successfully complete a task (Vancouver et al., 2001, 2002). Indeed, debate still exists on whether self-efficacy is positively related to performance in all situations or if the negative effects occasionally found are merely the visible side of an adaptive process of judicious and efficient allocation of finite resources (Beck & Schmidt, 2018). It seems, though, that high levels of self-efficacy promote excessively optimistic feelings about performance, which in turn result in unrealistic perceptions that one is near to goal achievement (Schmidt & Deshon, 2010) or far from risks for the self and for others (Moore & Healy, 2008), or even that future performance can repeat successful past performance (Hilary & Menzly, 2006).
A special situation occurs when self-efficacy beliefs exceed a certain level and make an individual report overconfidence in performing a task. Overconfidence may reduce task performance (Moores & Chang, 2009). This is particularly true for individuals who have been successful in previous decisions but, in certain moments, pay less attention to available cues and more attention to personally held information, thus incurring increased risks of lower decision quality (Hilary & Menzly, 2006). As a result, overconfidence has been said to account for the erroneous self-assessment of skills (Ehrlinger et al., 2016). Overconfidence is generally seen as the (1) overestimation of one’s actual performance, (2) overplacement of one’s performance relative to the performance of others, or (3) overprecision in one’s beliefs (Moore & Healy, 2008). An alternative view of overconfidence is a motivational one, whereby individuals “are often more confident than accurate (…) a desire to see the self as knowledgeable and competent” (Blanton et al., 2001, p. 373), meaning that “people typically overestimate their prowess” (Ehrlinger et al., 2016, p. 94) and that their “confidence is typically higher than is justified by the observed accuracy” (González-Vallejo & Bonham, 2007, p. 222). This may be one reason why studies on the relationship between self-efficacy beliefs and actual performance do not find positive correlations under certain conditions (e.g., Porat et al., 2018; Schmidt & Deshon, 2010; Vancouver & Kendall, 2006; Vancouver et al., 2001, 2002; Yeo & Neal, 2006). As such, self-efficacy is conceptually challenging, and, for the interests of the present study, the particular case of overconfidence is framed as a cognitive limitation for one’s technology use effectiveness (Porto-Bellini, 2018), i.e., an unrealistically high estimation of one’s actual abilities to perform computer-aided tasks.

2.3. Feedback and Learning

A related concern is feedback. Providing feedback on abilities and actual performance is believed to help regulate perceptions and behaviors, promote learning and progress (L. W. Lam et al., 2017; Smith & Kimbal, 2010; Anseel et al., 2009; González-Vallejo & Bonham, 2007), and shape one’s intrinsic motivation to perform (Sansone, 1989). In general learning situations, feedback has been used to confirm and reinforce the correct paths of action and adjust the incorrect ones (Smith & Kimbal, 2010). In organizations, feedback is an integral aspect of work and an old interest of scholars (L. W. Lam et al., 2017), whereby peers and supervisors provide information on one’s success or failure in performing tasks either in regard to task fulfilment itself or to the individual’s particular performance (Unger-Aviram et al., 2013), with impacts on competence validation and competence development (Merriman, 2017). In schools, feedback is also an important part of the learning process (Narciss et al., 2014) and of educational performance (Cutumisu & Schwartz, 2018), with students who receive feedback from instructors and engage in self-regulated learning practices performing better (Tsai, 2013). Nevertheless, feedback was also found to inhibit learning in some situations (Misiejuk et al., 2021).
However, not just any kind of feedback will promote learning and progress. The literature is broadly inconclusive as to the existence of a dominant pattern of the impact of feedback on human performance (Thuillard et al., 2022). In one of the most cited reviews on feedback, Hattie and Timperley (2007) concluded that both the type of feedback and the way it is given play specific roles in learning and achievement. The authors summarize feedback in three questions (“going where”/feed up, “going how”/feed back, and “what next”/feed forward) and four categories (about either the task, the processing of the task, self-regulation, or the self), and they state that it is at the level of self-regulation feedback that confidence and self-efficacy beliefs mediate feedback reception and effectiveness. They also address how cultural issues may impact feedback design and outcomes, as well as the need to appropriately plan the timing, the valence, the optimal use, and the role of assessment in feedback interventions. Such planning is needed to increase one’s effort, motivation, and engagement, as well as to minimize the negative impacts of feedback on self-efficacy and performance if proper cause–performance paths are not communicated to those receiving feedback.
Discussions on the complexity and maturity of the feedback literature are not new. Kluger and DeNisi (1996) developed feedback intervention theory motivated by about 90 years of poor scholarly knowledge on how feedback relates to behavior and performance, as well as by evidence that feedback significantly reduces performance in certain situations or does not have any effect at all—whereas the widely accepted assumption is that feedback improves performance. To those authors, feedback interventions are “actions taken by (an) external agent(s) to provide information regarding some aspect(s) of one’s task performance” (p. 255), but related research for almost a century was plagued with inconsistencies regarding definitions, methodology, analyses, and the resulting admissible inferences. Also, they found that many experimental studies on feedback lacked control groups, thus not allowing the effects of feedback to be compared with the effects of no feedback at all. Kluger and DeNisi (1996) affirmed that, by that time, there was no solid knowledge on how feedback (any feedback or none, its type, signal, magnitude, or focus) acted on individuals to adjust the perceptions about the self and task performance. Further problems included issues like (1) researchers ignoring high variances in the findings so as to highlight only the positive outcomes of feedback; (2) the use of varying standards to design and assess feedback; (3) researchers ignoring the locus of attention across feedback interventions, such as whether feedback was directed to the process, to the outcomes, or to the agents performing the tasks; and (4) the simplistic assumption that any behavior could be regulated by feedback. Moreover, there are so many idiosyncratic situations in the relationship between feedback, people, tasks, and their environment that it would be impossible with current knowledge and methods to predict specific outcomes from feedback. Finally, those authors identified the lack of a task taxonomy, thus developing one based on the following variables: subjective novelty, intrinsic complexity (number of actions and dependencies), time constraint, needed creativity, performance measurement (quality or quantity, objective data or subjective ratings), transfer (if the effect is measured on a subsequent task), latency (speed), and type of task (physical, reaction, memory, knowledge, rule-based, vigilance).
The current literature remains hesitant on several aspects of feedback, such as its affective components and outcomes. For instance, encouraging/positive feedback has been reported to help students achieve the generally desirable state of flow, but it may also create counterproductive situations (Cabestrero et al., 2018). In turn, negative feedback has been reported to be negatively associated with learning (Cutumisu & Schwartz, 2018) and to increase the levels of stress and negative affect while having no significant impact on one’s performance or self-esteem (Thuillard et al., 2022). Mixed findings also exist regarding the impacts of feedback frequency (C. F. Lam et al., 2011) and feedback timing (Attali & Van Der Kleij, 2017) on learning and task performance. Additional design issues of feedback interventions include the following: absolute/individualized or relative/comparative feedback (Bellman & Murray, 2018; Zingoni & Byron, 2017), whether feedback is directed to an individual or to a group (Rabinovich & Morton, 2012), feedback clarity (Borovoi et al., 2022; Schaerer et al., 2018; Biernat & Danaher, 2012), feedback intensity (Freeman & Dexter-Mazza, 2004), feedback consistency (Lucas et al., 2018), the perceived credibility of the source of feedback (Watling et al., 2012), the degree of motivation of those who provide feedback (Schaerer et al., 2018), whether feedback comes from peers (Zong et al., 2021), automated devices (Hollingshead et al., 2019) or a combination of human and automated processes (Thuillard et al., 2022), and if feedback is immediate/interactive (Kefalidou, 2017) or used in massive learning environments (Piccoli et al., 2020).
Worthy of note, the outcomes of feedback also depend on the individuals who receive it: they need to engage in reflection by having a learning goal orientation, motivation to think extensively, and a feeling that task performance is important (Anseel et al., 2009), and their self-efficacy beliefs are significantly related to feedback-seeking behaviors (Sherf & Morrison, 2020; Dimotakis et al., 2017). Also influential are their self-concepts (McConnell et al., 2009) and whether they hold a fixed/stable/entity or an incremental/malleable/growth view of abilities/intelligence (Dweck & Leggett, 1988). Furthermore, evidence reported by Kelley and McLaughlin (2012) shows that an individual’s cognitive resources predict feedback requirements, in the sense that more capable individuals (in task demands) would benefit more from reduced feedback, whereas less capable individuals would benefit more from increased feedback.
Lastly, feedback may be contrasted with feedforward, and a distinction may be drawn between passively receiving and actively seeking feedback1. Feedforward refers to one’s intentions for the future rather than to performance in the past (Budworth et al., 2015), while feedback seeking refers to a change in the locus of control in the feedback process (Anseel et al., 2007). Neither issue applies to our study, as we focus on assessing past performance (feeding back) and on feedback that is designed and motivated by the researchers’ interests, i.e., by those who provide, not seek, feedback.
On the methodological front, the use of experiments to study the effects of feedback on overconfidence in computer settings is not new (e.g., Zakay, 1992; Narciss et al., 2014), but a recent review by Sauer et al. (2019) identifies a paucity of experimental studies on the relationship between social stressors and performance in human–computer interaction, with negative feedback on performance being among those stressors. Therefore, one of our contributions to the literature is to study, in an experimental design, the effects of valence-based feedback on actual performance in computer-aided tasks as well as on three newly defined domains of CSE, to verify if overconfident individuals regulate appraisals of self-efficacy and become more aware of the resources needed to fulfil a task. In doing this, we can inform instructors on how to help students reflect on their learning performance from the early stages of educational development, and inform managers on how to design worker–task fit and training more judiciously.

3. Method

We designed an experiment to study the effects of a supervisor’s feedback on a subordinate’s performance in computer-aided tasks, with a particular interest in the reactions of overconfident individuals. We modelled the tasks at a subjectively intermediate level of complexity due to the ambiguous relationship between task complexity and one’s self-appraisals: on one hand, easy tasks are said to preserve overconfidence (Ehrlinger et al., 2016) and make people underestimate actual performance while also believing that they are better than others (Moore & Healy, 2008); on the other hand, difficult tasks are said to promote greater self-insight (Ehrlinger et al., 2016) and make people overestimate actual performance while also believing that they are worse than others (Moore & Healy, 2008). Therefore, we did not anticipate the signal and the magnitude of the possible impact of feedback on the levels of CSE and actual task performance of overconfident individuals. For tasks with an intermediate level of complexity, our hypotheses are as follows:
H1a. 
Overconfident individuals adjust their CSE beliefs after receiving feedback on performance from a supervisor.
H1b. 
Overconfident individuals adjust their self-reported performance after receiving feedback on performance from a supervisor.
H1c. 
Actual task performance of overconfident individuals is impacted after they receive feedback on performance from a supervisor.
Additional hypotheses may be stated for the effects of skills and learning that naturally mature across tasks, as follows:
H2a. 
Overconfident individuals adjust their CSE beliefs as per their practical skills and learning across similar tasks.
H2b. 
Overconfident individuals adjust their self-reported performance as per their practical skills and learning across similar tasks.
H2c. 
Actual task performance of overconfident individuals is impacted as per their practical skills and learning across similar tasks.
The experimental design included features of thought and lab experiments. Lab experiments are popular in IT research, but thought experiments are less so, being a form of qualitative research (Introna & Whitley, 2000) for theory development (Grover et al., 2008). Our experiment includes both designs due to the presence of testable (H1a, H1b, H1c) and conjectured (H2a, H2b, H2c) causal paths. The lab experiment part has features of a field experiment due to the experimental groups being studied in their natural setting, i.e., during a real execution of computer-aided tasks and with no information provided to the performing individuals about the experimental activities. We informed the participants about the experiment only after its conclusion, when we asked for their individual consent to analyze and publish the data2. The lab experiment part was also a true experiment (Campbell & Stanley, 1966) due to the randomization process and other features described next.
Participants were undergraduate students of Business Administration assigned to two different classrooms (morning and evening sections) of a course called Administrative Informatics. A total of 54 students participated in this study, 29 from the morning section and 25 from the evening section. We assigned them evenly and randomly to three groups: 18 students to an experimental group that would receive positive feedback on task performance (GPOS), 18 students to an experimental group that would receive negative feedback (GNEG), and 18 students to a control group that would receive neutral feedback (GCTRL). In addition to there being an equal number of students in each group, their assignment to the groups was partially controlled for pairing, i.e., the demographic profile of each group was partially homogenized regarding the most typical variance-generating variables as judged from the historical demographic distribution in similar classrooms (Table 1). Full pairing of all variables was evidently not an option, since pairing the most influential variables impedes full pairing of other, less influential ones. For instance, one may question why gender was not a priority for pairing. The reason is that no interactions were found between gender, intelligence theories, and overconfidence in Ehrlinger et al.’s (2016) study with undergraduate students. Broadly, those authors found a lack of scholarly knowledge regarding the demographic aspects of overconfident individuals, concluding instead that views of intelligence (entity/fixed or incremental/malleable) and the locus of attention (difficult or easy tasks) may explain self-assessments of performance and effects on learning. On a note of caution, Biernat and Danaher (2012) found differences in immediate reactions to subjective interpretations of feedback according to gender and race, but their study is not comparable to ours in many ways (e.g., their experimental tasks involved leadership roles, and their focal measurement was the level of importance the participants assigned to those roles); and Narciss et al.’s (2014) study found gender differences in the learning performance of students under tutoring schemes, which is also not the case in our study.
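To make the allocation step concrete, the sketch below shows one way the even random assignment of the 54 participants to the three feedback groups could be reproduced in Python. The participant identifiers, the fixed seed, and the omission of the demographic pairing step are simplifying assumptions for illustration only, not the exact procedure used in the study.

```python
import random

# Hypothetical participant identifiers (29 morning, 25 evening); not the real roster.
participants = [f"M{i}" for i in range(1, 30)] + [f"E{i}" for i in range(1, 26)]

random.seed(42)            # fixed seed only to make the sketch reproducible
random.shuffle(participants)

# Even split into the three feedback groups (18 students each).
groups = {
    "GPOS": participants[0:18],    # positive feedback
    "GNEG": participants[18:36],   # negative feedback
    "GCTRL": participants[36:54],  # neutral feedback (control)
}

for name, members in groups.items():
    print(name, len(members), members[:3], "...")
```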
We adjusted the experimental computer-aided tasks to the course’s syllabus and the planned activities for the semester. Since the course was about using the computer to support routines and decision-making in organizations, we designed the experimental tasks to include the computer as the tool to make business decisions. We informed the students that they had to be on time to class and stay there until they completed the tasks, as the tasks were part of their academic evaluations. The tasks involved using an electronic spreadsheet to model and solve decision problems based on the prisoner’s dilemma (Kolokolstov & Malafeyev, 2020; Kippenberger, 1997). The problems involved manufacturing companies deciding whether to update their technological infrastructure and how much money to invest (Appendix B). Since streamlining the technological infrastructure is a major competitive factor for production efficiency as well as for market share expansion and branding, a company’s decision would have to consider the similar strategic choices available to competitors―hence the prisoner’s dilemma rationale.
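The decision structure behind the tasks can be illustrated with a minimal payoff model. The sketch below encodes a hypothetical two-firm technology-investment dilemma and checks that the competitive choice dominates; the payoff values are invented for illustration and are not the parameters of the actual tasks in Appendix B.

```python
# Illustrative 2x2 payoff structure for the technology-investment dilemma.
# Each entry maps (firm_choice, rival_choice) -> (firm_payoff, rival_payoff).
payoffs = {
    ("invest", "invest"): (3, 3),
    ("invest", "hold"):   (8, 1),
    ("hold",   "invest"): (1, 8),
    ("hold",   "hold"):   (5, 5),
}

def best_response(rival_choice):
    """Return the firm's payoff-maximizing choice against a fixed rival choice."""
    return max(("invest", "hold"), key=lambda c: payoffs[(c, rival_choice)][0])

# "invest" is the dominant (competitive) choice if it is the best response
# to every possible rival choice, mirroring the prisoner's dilemma logic.
dominant = all(best_response(r) == "invest" for r in ("invest", "hold"))
print("Investing dominates:", dominant)  # True with these illustrative payoffs
```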
The experimental model (Figure 1) has several causal paths―some paths for the lab experiment, and other paths for the thought experiment. The reason for such a mixed experimental design is that some causalities are conjectures guiding our reasoning as well as future empirical research, while other causalities can be actually investigated in the present study, i.e., the effects of feedback on perceptions and actual performance in computer-aided tasks. On the one hand, experiments are usually too limited to help understand human attitudes and behaviors, since any attitude or behavior is too complex to be explained by manipulating a few independent variables and observing their effects through a limited chain of variables. On the other hand, it would be too complicated, if not impossible, to design and perform an experiment with reasonably many variables. Moreover, if the experiment also involves multiple dependent variables, it may become virtually impossible to either conduct it or produce satisfactorily trustworthy results. For these and other reasons, scholars have invested in thought experiments to shed light on possible causal chains of events that are too complex to submit to statistical tests, since certain phenomena require the exploration of multiple causal links even when doing so is not currently possible in a fully empirical experiment.
Due to the presence of some conjectures in the experimental design and the number of participants (54 in total), the hypotheses stated before were not submitted to statistical tests. Rather, they were handled with analytical reasoning as the data analysis approach (explained later). Such a decision was aimed at not infringing statistical assumptions that are otherwise required for construct validation. In contrast, our experiment is of an exploratory, interpretive nature and highly dependent on the phenomenological experience of the researchers with the experimental context and the participants. Indeed, the perfect balance between internal validity (consistency) and external validity (generalization) is always a challenge. Our study prioritizes internal validity in many senses (e.g., it includes an arguably true field experiment with high ecological validity and immersion of the researchers in the context and with the participants) while lacking evidence for statistical validity. Other studies do the exact opposite when using an experimental design. For instance, in a study by Budworth et al. (2015, p. 52) on feedforward interventions, “internal validity was decreased in favor of external validity” and the “random assignment of participants to a control group […] was not possible”.
The experiment involved two sequential tasks. In each task, we first measured the participants’ levels of general computer self-efficacy (G-CSE), tool-specific computer self-efficacy (T-CSE), and problem-specific computer self-efficacy (P-CSE). This three-dimensional CSE construct is an original contribution of our study (the scale we used is available in Appendix A). Afterwards, we asked the students to perform the corresponding task individually and silently. The instructor would not provide any help. After completing each task, the students answered a one-question self-report of performance (SRP1, SRP2) and the instructor measured their actual task performance (ATP1, ATP2) with 28 objective criteria (Appendix B). The SRP and ATP measures used a 0–10 scale, which is a popular scale in academic evaluations. Therefore, we collected each student’s performance from the perspective of two stakeholders (SRP by the very student, and ATP by the instructor). This is consistent with the rationale for technology use effectiveness, i.e., effectiveness is dependent on the perspective of each stakeholder (Porto-Bellini, 2018). The ATP measures involved operating the spreadsheet correctly and the quality of the decision tasks. The quality of the decision tasks was easy to measure given that any strategic decision based on the prisoner’s dilemma has a statistically dominant answer―a competitive one (Kolokolstov & Malafeyev, 2020; Kippenberger, 1997)―due to issues like the agents’ information asymmetry, opportunism, skepticism on the others’ strategic choices, bounded rationality, avoidance of the worst outcome, and acceptance of certain losses. After the first task, we provided feedback on performance to each participant according to a feedback plan (discussed later). Feedback was given in written form, and participants could not exchange information with others about the feedback they received. Students then proceeded to the second task.
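A minimal data structure helps summarize what was recorded for each participant. The sketch below is only an illustration of the measurement protocol described above (three CSE dimensions before each task, SRP and ATP after it); the field names and the example values are ours, not the study’s data.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class TaskMeasures:
    """Measures collected around one task, all on the 0-10 scale used in the study."""
    g_cse: float  # general CSE, measured before the task
    t_cse: float  # tool-specific CSE, measured before the task
    p_cse: float  # problem-specific CSE, measured before the task
    srp: float    # self-reported performance, collected after the task
    atp: float    # actual task performance, scored by the instructor (28 criteria)

@dataclass
class Participant:
    pid: str                              # e.g., "M12" (morning) or "E20" (evening)
    group: str                            # "GPOS", "GNEG", or "GCTRL"
    task1: Optional[TaskMeasures] = None
    task2: Optional[TaskMeasures] = None

# Hypothetical example record; not real study data.
example = Participant(
    pid="M12",
    group="GPOS",
    task1=TaskMeasures(g_cse=8.0, t_cse=7.5, p_cse=7.0, srp=8.0, atp=6.5),
    task2=TaskMeasures(g_cse=7.5, t_cse=7.5, p_cse=8.0, srp=8.5, atp=7.8),
)
print(example.group, example.task1.atp, example.task2.atp)
```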
As for the environmental contingencies that might impact experiments, we took the risk of not controlling them. Instead, we decided to have a natural task environment, i.e., a regular classroom meeting. Contingencies included the ergonomic factors of the computer use environment (such as temperature, noise, electric power availability, and the quality of office furniture), personal impediments of students to attend the class or be there on time, and other naturally occurring situations. We also did not manipulate the emotional state of participants, such as by changing the room’s visual appeal or engaging in unusual conversation. The intent was to have an ordinary classroom meeting.
The first action in the experiment was the application of a priming procedure for both focusing the students on the task and instructing them on how to model and solve decision problems based on the prisoner’s dilemma (solving such problems was part of the course’s syllabus). Priming was effected with a 10-min video about the dilemma’s principles. The use of video models is an instructional strategy to stimulate learning and self-efficacy (Hoogerheide et al., 2018). The experimental actions in our study are summarized in Table 2.
The experiment had the following design (Campbell & Stanley, 1966), where R means that the inclusion of individuals in groups followed a random assignment procedure; GPOS, GNEG, and GCTRL are the three groups of feedback; O1, O3, and O5 are the pre-tests for CSE, SRP, and ATP; X1 and X2 are the experimental treatments (feedback types as described later); “–” is the lack of treatment (neutral feedback); and O2, O4, and O6 are the post-tests for CSE, SRP, and ATP:
R   GPOS    O1   X1   O2
R   GNEG    O3   X2   O4
R   GCTRL   O5   –    O6
We developed three five-item scales following patterns found in data collection instruments since the first CSE studies, such as those in Compeau and Higgins (1995), Murphy et al. (1989), and Gist et al. (1989). Those instruments typically consist of short statements about a specific situation of computer use, sometimes referring to a specific tool as well. We replicated this pattern. However, there are two fundamental differences between our scale and numerous others. First, while some instruments (such as those three mentioned above) employ scales of confidence to measure self-efficacy, we did not find any rationale for using confidence instead of self-efficacy ratings—which are different constructs (Morony et al., 2013). Bandura and Cervone (1986) in fact discuss how capable, not how confident, one is. Therefore, we asked participants to rate how capable they believed they were to accomplish certain actions on the computer. And a second difference between our scale and extant ones is that ours ranges from 0 to 10 (therefore, it has a neutral point), not from 1 to 10 (with no neutral point). We did not find any argument in favor of not having the neutral point. On the contrary, Compeau and Higgins (1995) defend it when they mention exactly point number “5” in their scale as the single point representing “moderately confident” individuals, thus suggesting that they see number “5” as a neutral point; however, point number “5” does not stand exactly in the middle of their 1–10 scale. Our instrument is available in Appendix A. It includes three sections, one for each of the CSE types we developed based on the rationale presented in the literature review. The scale was conceptually discussed with our research group in numerous rounds as well as in a version presented at the Americas Conference on Information Systems (Porto-Bellini & Serpa, 2018).
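As an illustration of how the instrument could be scored, the sketch below averages the five 0–10 capability ratings within each CSE dimension. The simple averaging and the example ratings are illustrative assumptions rather than the study’s exact scoring rule; the item wordings are in Appendix A.

```python
def score_dimension(item_ratings):
    """Average of the five 0-10 capability ratings for one CSE dimension."""
    assert len(item_ratings) == 5 and all(0 <= r <= 10 for r in item_ratings)
    return sum(item_ratings) / len(item_ratings)

# Hypothetical ratings for one respondent, one list per CSE dimension.
g_cse = score_dimension([7, 8, 6, 9, 7])   # general CSE items
t_cse = score_dimension([6, 7, 7, 8, 6])   # tool-specific CSE items
p_cse = score_dimension([5, 6, 7, 6, 5])   # problem-specific CSE items
print(round(g_cse, 2), round(t_cse, 2), round(p_cse, 2))
```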
As for the computer-aided tasks, we used two similar designs. We chose similar designs so that we could compare the participants’ CSE, SRP, and ATP across tasks, as well as the effects of feedback. Additionally, having two slightly different tasks would reduce biases in Task #2 (the bias of mere task repetition or, conversely, the bias of having mismatched tasks). To measure ATP, the students received fractional points for each performance criterion fully met, up to 10 possible points per task (Appendix B). A detailed list of real-time verification procedures was also available for the task supervisors (one of the authors and two classroom assistants) to support the execution of the experiment. All students had experience with the scoring procedure; thus, we assumed that they were capable of estimating their own performance.
In addition to solving a decision problem, the tasks involved basic computer skills like filling the spreadsheet cells with data, and more complex skills like generating analytical diagrams and formulae. Feedback was given individually in written form to each student after completing Task #1. Three feedback groups were formed (much like the affect manipulation of feedback given to students in Cabestrero et al., 2018): individuals who received positively stated feedback (“Thank you. Your performance was satisfactory.”), individuals who received negatively stated feedback (“Thank you. Your performance was not satisfactory.”), and individuals who received neutral feedback (“Thank you. Let’s begin the second task.”). In selecting the individuals to receive each type of feedback, the instructors analyzed each student’s ATP for Task #1 and then distributed the three types of feedback evenly across the two performance strata, as follows (Figure 2): half of each type of feedback (positive, negative, and neutral) was randomly given to students with satisfactory ATP and half to students with not satisfactory ATP. The criterion for deciding whether a performance was satisfactory or not was the official score to pass an exam in the university (at least 7 points out of a maximum of 10 possible points), but the instructors did not elaborate on the positive or the negative feedback to the students. As such, the participants were free to reflect on all possible performance outcomes (about the general use of the computer, the use of the specific computer tool, or the decision-making activity). As shown later, no student achieved the minimum or the maximum scores possible, thus all feedback messages could be interpreted as meaningful.
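One way to operationalize this stratified distribution of feedback is sketched below: within each performance stratum (satisfactory versus not satisfactory ATP on Task #1), the three feedback types are spread evenly at random. The pass mark of 7.0 follows the text; the participant scores and the cycling procedure are illustrative assumptions rather than the study’s exact mechanics.

```python
import random

PASS_MARK = 7.0
FEEDBACK_TEXT = {
    "positive": "Thank you. Your performance was satisfactory.",
    "negative": "Thank you. Your performance was not satisfactory.",
    "neutral":  "Thank you. Let's begin the second task.",
}

def assign_feedback(atp_task1, seed=0):
    """Spread the three feedback types evenly within each ATP stratum."""
    rng = random.Random(seed)
    satisfactory = [p for p, atp in atp_task1.items() if atp >= PASS_MARK]
    unsatisfactory = [p for p, atp in atp_task1.items() if atp < PASS_MARK]
    assignment = {}
    for stratum in (satisfactory, unsatisfactory):
        rng.shuffle(stratum)
        # Cycling through the three types keeps them evenly distributed
        # within the stratum, as described in the text.
        for i, participant in enumerate(stratum):
            assignment[participant] = ("positive", "negative", "neutral")[i % 3]
    return assignment

# Hypothetical Task #1 ATP scores, not the study's data.
demo = assign_feedback({"M1": 8.0, "M2": 5.5, "E3": 7.5, "E4": 6.0, "M5": 9.0, "E6": 4.5})
print(demo)
```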
In addition to the three valences of affect (positive, negative, and neutral), we modelled feedback as individualized (Rabinovich & Morton, 2012), absolute (Zingoni & Byron, 2017), immediate (Attali & Van Der Kleij, 2017), episodic (C. F. Lam et al., 2011), unambiguous (Borovoi et al., 2022; Schaerer et al., 2018; Biernat & Danaher, 2012), and provided by a source (the instructor) that was presumably both credible (Watling et al., 2012) and motivated (Schaerer et al., 2018). As for the individuals receiving feedback, we had no information about their self-concepts (McConnell et al., 2009), orientation to reflection (Anseel et al., 2009), or framing of intelligence/abilities (Dweck & Leggett, 1988).
On a last note on this study’s originality and rigor, we conducted a thorough search in the recent literature to find similar studies in terms of design and intents. We searched for studies published in the following journals in the field of human–computer interaction: Computers & Education, Computers in Human Behavior, Interacting with Computers, Behaviour & Information Technology, and Information Technology & People, as well as in all journals of the AIS Senior Scholars’ Basket of Journals3. The closest studies we found were two experimental studies on the effects of feedback on task performance, and one psychometric study on CSE and self-assessments of performance. First, the study by Bellman and Murray (2018) used feedback intervention theory to study the effects of negative, neutral, and positive feedback on task performance as well as interface preferences across individuals, i.e., in a situation of normative feedback. Our study used the same three types of feedback, but we were not interested in comparing individuals due to the person-based framing of technology use effectiveness (Porto-Bellini, 2018). Second, the study by Thuillard et al. (2022) measured the impacts of feedback on the subjective cognitive state of participants. Their experimental design was much like ours, in that they had sequential tasks and provided the participants with manipulated positive, negative, and neutral feedback between tasks. However, unlike our study, they intended to compare human- versus computer-generated feedback to verify if there was any difference in the participants’ cognitive reactions. And the third study is one by Aesaert et al. (2017) focusing on bias (the direction of judgement error) and accuracy (the magnitude of judgement error) of one’s estimations of CSE and self-assessments of performance. To some extent, our study adds to theirs by including an experimental design to estimate CSE bias and accuracy and by extending their study to adult individuals.

4. Results and Discussion

Here, we present the full dataset, the processing of the data, and the findings for the three types of CSE, task performance, and the effects of feedback. The sample was consistently the same in all analyses. Feedback on task performance was the independent variable, homogeneously administered to individuals according to the feedback groups (GPOS, GNEG, or GCTRL) and irrespective of actual individual performance. The intent was to observe a variety of individual reactions to feedback. Importantly, no participant knew beforehand that we designed distinct feedback groups. CSE was one of the dependent variables, measured in three domains (G-CSE, T-CSE, and P-CSE). The second dependent variable was task performance, measured both as self-evaluations by the students and as objective evaluations by the instructor. With such a procedure, we addressed a key aspect of technology use effectiveness, i.e., measuring effectiveness from multiple perspectives as it is arbitrarily defined by each stakeholder (Porto-Bellini, 2018).
The data analysis procedures are characterized as analytical reasoning, which is a helpful approach in complex situations or in the absence of statistically significant data. It is not a particular method but rather refers to an ample set of analytical skills that include the insightful, ad hoc elaboration of procedures to solve a pragmatic problem. One example in the IT literature is available in Porto-Bellini et al.’s (2020) ex post facto study on the cognitive and behavioral archetypes of software developers that correlate with the success of enterprise systems projects. While our study’s sample (54 subjects) is numerically comparable to others (e.g., 46 subjects in Biernat & Danaher, 2012), it is not large enough to make us comfortable in performing statistically based comparisons of means. Also, since we consistently did not collect objective data for the thought experiment parts of the experimental design, submitting the available data to statistical tests would not help address those parts. In contrast, by resorting to analytical reasoning, we invested our best analytical efforts to understand and explain the patterns and idiosyncrasies in the data, which are highly dependent on the phenomenological experience of the researchers with the application context and the technology users. Analytical reasoning is in fact expected to produce a deep understanding of a situation that would otherwise be constrained within the limits of cold statistical manipulation. For this reason, it is used in U.S. universities4 to develop analytical skills among students and employees. Moreover, the present study was conducted in a highly idiosyncratic society—the Brazilian northeastern region—thus demanding an immersive experience for the researchers to discuss certain attitudinal and behavioral patterns that are possibly present in the data. On a final note, generalizing or replicating the data was not the intention of this study. Rather, it envisioned the elaboration of an experimental design that can be replicated, as well as a phenomenological understanding of attitudes and behaviors in a specific sample of students.
Figure 3 shows what happened in each group for the three measures of CSE. In GCTRL, a substantial number of individuals reduced or increased T-CSE and P-CSE, and reduced G-CSE. As GCTRL is the control group, we did not search for an explanation for their attitudinal patterns, but these may result from the individuals understanding the characteristics of the computer-aided tasks after completing Task #1. In GNEG, the prevalent pattern was clearly one of reduction in CSE levels, which is consistent with the type of feedback they received. In GPOS, while many individuals increased their P-CSE in important ways, a substantial number of individuals also decreased G-CSE and T-CSE. GPOS individuals may have interpreted the positive feedback as feedback on solving the decision problem with the computer, and not on computer use itself. As such, they showed a pattern for G-CSE and T-CSE that is similar to the pattern in GCTRL. Moreover, an increase in P-CSE after positive feedback is consistent with the very concept of technology, which is a means of performing a task and not an end in itself. Therefore, at this moment, we can conclude that individuals, taken as a group, coherently associate feedback on performance with problem solving rather than with technology operation, and they adjust their CSE appraisals accordingly.
Figure 4 shows what happened in each group for average computer self-efficacy (AvCSE), self-reported performance (SRP), and actual task performance (ATP). AvCSE is computed from the three measures of self-efficacy (G-CSE, T-CSE, and P-CSE). The reason for computing AvCSE is to have an estimate of the broad CSE archetype of individuals to compare with their performance (the self-reported and the objectively measured performance). In GCTRL, a substantial number of individuals increased SRP and ATP, and other individuals reduced their AvCSE. Again, we did not search for an explanation for the attitudinal patterns in GCTRL, but it may be that individuals improved their actual performance (and their perceptions about it) because Task #2 was similar to Task #1. In GPOS, a large number of individuals increased their AvCSE, SRP, and ATP. This is arguably due to the reinforcing effects of the positive feedback received, but also to learning across tasks. In GNEG, while AvCSE decreased markedly, some individuals did not change their SRP, and others increased their ATP. The increase in ATP was consistent across the feedback groups, possibly due to learning, while SRP mostly remained the same in GNEG, possibly due to some individuals being skeptical about the negative feedback they received. As explained before, individuals in a feedback group received the same feedback (whereas they were not aware of the existence of groups), thus the received feedback could be in contrast with SRP or ATP, or both. So far, we conclude that positive feedback correlates with an increase in CSE beliefs and performance, and negative feedback correlates with a decrease in CSE beliefs and an increase in performance.
Now, we focus attention on the individuals (not the groups), particularly the overconfident ones. We are not aware of any established method to identify overconfidence; therefore, we relied on experience and on an empirically based, post hoc procedure, as follows: we identified individuals with higher AvCSE than ATP and selected those with differences between AvCSE and ATP equal to or greater than 2.5 points on the scale. Such a magnitude takes into consideration several issues, like the arithmetic difference itself (a prudent yet arguably substantial 25% difference between expectation and effectiveness) and the accuracy, or lack thereof, with which one is able to assess and report the three different types of CSE from which AvCSE is computed. Moreover, not every level of CSE that exceeds ATP should be considered an indication of overconfidence, since moderate–high levels of CSE have been reported as desirable for college students to achieve technology use effectiveness (Porto-Bellini et al., 2016). Two additional reasons support the proposed heuristic: first, by examining Table 3, one realizes that 2.5 is well above the other AvCSE-ATP differences, except for only six measures between 2.1 and 2.3 reported in the two tasks (out of 108 differences computed in total), meaning that 2.5 is significantly above 94% of all AvCSE-ATP differences; and second, a difference of 2.5 is also well above the difference needed for a student to move between grades in the Grade Point Average (GPA) system5. Therefore, 2.5 and above can be reasonably accepted as a manifestation of overconfidence. On a final note, such a rationale does not mean that difference scores below 2.5 may not represent overconfidence too, but that 2.5 and above is a safe threshold for our analyses.
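The heuristic can be stated compactly in code. The sketch below computes AvCSE as the mean of the three CSE dimensions and flags a participant as overconfident when AvCSE exceeds ATP by 2.5 points or more; the example values are hypothetical, not taken from Table 3.

```python
OVERCONFIDENCE_THRESHOLD = 2.5  # points on the 0-10 scale, as argued above

def av_cse(g_cse, t_cse, p_cse):
    """Average computer self-efficacy across the three dimensions."""
    return (g_cse + t_cse + p_cse) / 3

def is_overconfident(g_cse, t_cse, p_cse, atp):
    """Post hoc heuristic: AvCSE exceeds actual task performance by 2.5+ points."""
    return av_cse(g_cse, t_cse, p_cse) - atp >= OVERCONFIDENCE_THRESHOLD

# Hypothetical illustration: AvCSE = 8.0 versus ATP = 5.0.
print(is_overconfident(8.0, 8.5, 7.5, 5.0))  # True: difference of 3.0 >= 2.5
```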
Table 3 shows the overconfidence cases we considered. Students are identified according to their classroom section (“M” for the morning, “E” for the evening), and overconfidence levels are colored in black. We identified 17 overconfident students in the first task and five in the second task; that is, roughly one-third of the participants were initially overconfident based on our conservative heuristic. Such a proportion is in line with a study of junior-high-school students and their digital competencies, which found that “only a few of participants’ perceived skills were related to their actual performance […] participants displayed high confidence in their digital literacies and significantly over-estimated their actual competencies” (Porat et al., 2018, p. 23). Another study, by Aesaert et al. (2017), found similar results among primary school students regarding the overestimation of competencies in using information and communication technologies.
In GCTRL, four individuals showed overconfidence in Task #1, and all of them reduced or eliminated their overconfidence in Task #2. This group also contained the only individual (M25) who showed initially acceptable high confidence (a difference of 1.9) in Task #1 and became overconfident (3.03) in Task #2 after receiving (neutral) feedback. That is, in no other situation did an individual with high confidence in Task #1 show overconfidence in Task #2. The difference between this single case and the other four in GCTRL is that the four individuals who were overconfident in Task #1 had low or borderline ATP, so they seem to have naturally adjusted their CSE for Task #2 while also improving their performance by learning from Task #1.
In GPOS, positive feedback generally reinforced the individuals’ CSE beliefs, to the extent that two of the four overconfident individuals in Task #1 remained overconfident in Task #2. One of the other two (M12) adjusted his or her CSE to a more acceptable level, while the individual with the highest overconfidence of all (E20) probably realized how far he or she was from performing well, regardless of the positive feedback received, and reacted accordingly.
In GNEG, we found the largest number of individuals (nine) with overconfidence. Since the individuals were randomly assigned to groups according to a balanced procedure matching types of feedback to actual performance (Table 3), and since we controlled for a balanced number of morning- and evening-section students in each group as well as for pairing on the most relevant demographic variables, we do not know why this happened, other than that the number of individuals in each group (18) was relatively small to ensure fully homogeneous sampling across groups. One of the nine overconfident GNEG students (M9) refused to complete Task #2 after receiving the negative feedback on Task #1. That student’s behavior cannot be explained simply as disappointment with undeserved negative feedback, given that the student had achieved only 5.25 evaluation points out of 10 in Task #1, i.e., the student’s actual performance was indeed below the course’s passing level (7.0). The student left the classroom and did not explain why. In the remaining eight GNEG overconfident cases, six students adjusted their CSE levels downwards, more coherently with their ATP levels, but two students were not able to improve their ATP in Task #2 sufficiently to avoid being considered overconfident again. Those who adjusted their CSE to more realistic levels may hold incremental views of intelligence (i.e., they believe that people can improve their abilities), thus being open to learning from negative absolute feedback (Zingoni & Byron, 2017).
We were intrigued by the unexpected fact that one student withdrew from the experiment; therefore, we tried to understand it using theories of feedback. In addition to the general understanding that feedback activates emotional responses (Kluger & DeNisi, 1996), we know that entity-intelligence people (those who believe that ability is fixed/stable) are likely to reject negative feedback (Cutumisu, 2019) and to avoid tasks that might require reflection on low performance (Ehrlinger et al., 2016). This may explain that student’s behavior, as well as why other students had difficulty in adjusting their CSE levels.
Overall, we concluded that (1) CSE adjustments and increases in ATP occurred partially as a natural learning process across tasks, since the three feedback groups showed reasonably similar patterns when adjusting their CSE and ATP levels; (2) positive feedback contributed to reinforcing CSE beliefs and increasing ATP; and (3) negative feedback contributed more to adjusting overestimations of CSE than to increasing ATP. However, as per the first conclusion, feedback on task outcomes (as modelled in our study) usually limits the learning process (Kluger & DeNisi, 1996); therefore, more research is needed to isolate the effects of learning and of feedback. Also, at the individual level, neutral feedback (no real feedback) seems to have been as powerful as positive/negative feedback in adjusting CSE and ATP levels for the particular case of overconfident individuals, while not causing overconfidence to emerge in other individuals. We did not analyze, however, the possible emergence of underconfidence. These results are in line with Cabestrero et al. (2018), who found that neutral feedback given to secondary school students was more effective than emotionally reinforcing feedback. Also, in interviews with early-career academic doctors, Watling et al. (2012) were inconclusive on whether the valence of feedback influences one’s reactions across different motivation scenarios.

5. Implications for Theory and Practice

This study has implications for scholarly knowledge. First, we offered an original, exploratory decomposition of CSE into three dimensions (general, problem-specific, and tool-specific). Second, we offered a conceptualization of overconfidence as a post hoc measure of unrealistically high self-efficacy beliefs, along with a heuristic to identify it. Third, this study is among the few empirical studies on the framing of technology use effectiveness, i.e., the conception of effectiveness as a stakeholder’s arbitrary perspective. Fourth, this is one of the few studies, if any, to discuss the need to integrate attitudes (one’s CSE and SRP levels as well as the motivational reactions to feedback), theoretical abilities (one’s previous experience with a course’s contents as well as learning on the task), and practical skills (one’s actual behaviors towards problem modeling and solving) to explain technology use effectiveness. Fifth, this is among the very few studies in the human–computer interaction domain to articulate thought and lab experiments and to analyze experimental data with a qualitative approach.
This study also has implications for classroom activities, organizational hiring, training, and team building. In summary, we provide three new scales for CSE (Appendix A), two versions of a scenario-based computer-aided decision task for use in the classroom and in professional training (Appendix B), an experimental design, and numerous insights into the significant number of overconfident individuals that may be present in a work group (indeed, around one-third of the individuals in our sample were overconfident by a very conservative measure). Particularly helpful is the use of the tool in Appendix B. When teaching topics like the commoditization of IT (Carr, 2003), the productivity paradox (Anderson et al., 2003), and business competition in the IT industry, students want to see real situations to better understand the logic of the prisoner’s dilemma in action. The tool we offer can be used for a first glimpse at the concepts and at how decisions are made when a company’s strategy needs to take a competitor’s strategy into account. Another possible use of the tool, along with the CSE instrument in Appendix A, is to promote self-reflection by the very participants of behavioral experiments and to identify individual decision-making patterns (Folke et al., 2017). Such convenient experiments may efficiently reveal the presence of overconfident individuals in a team, as expectations placed on those individuals, and by them, may not correspond to reality and thus result in frustration, opposition, and the discontinuance of work (such as what happened with the one overconfident student who abandoned our experiment).
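For instance, the payoff logic of the classroom tool (Appendix B, Task #1) can be summarized in a short Python sketch. This is our illustration of the scenario arithmetic, not part of the handout, and it assumes that in Scenario #3 each company bears its respective IT cost from Scenarios #1 and #2, which the handout leaves implicit:

```python
# Payoff arithmetic implied by the four scenarios of Task #1 (Appendix B).
# Assets in $ millions; market share in percentage points.
def payoffs(mine_invests: bool, other_invests: bool):
    mine_assets, mine_share = 7.0, 10.0     # MINE's starting position
    other_assets, other_share = 4.0, 20.0   # OTHER's starting position
    if mine_invests and not other_invests:      # Scenario #1
        mine_assets -= 1.0
        mine_share, other_share = mine_share + 5.0, other_share - 5.0
    elif other_invests and not mine_invests:    # Scenario #2
        other_assets -= 3.0
        mine_share, other_share = mine_share - 3.0, other_share + 3.0
    elif mine_invests and other_invests:        # Scenario #3 (assumed IT costs)
        mine_assets, other_assets = mine_assets - 1.0, other_assets - 3.0
        mine_share, other_share = mine_share - 2.0, other_share - 2.0
    else:                                       # Scenario #4: nobody invests
        mine_share, other_share = mine_share - 6.0, other_share - 6.0
    return (mine_assets, mine_share), (other_assets, other_share)

# Scenario #1: MINE upgrades its IT infrastructure and OTHER does not.
print(payoffs(True, False))  # ((6.0, 15.0), (4.0, 15.0))
```

Run for all four combinations of decisions, the sketch reproduces the summary table that students are asked to build in the third tab of the spreadsheet.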

6. Limitations and Future Research

This study has limitations. The first is the number of individuals in each experimental group (three groups of 18 individuals each, 54 in total). We had the opportunity to study the individuals in an almost natural setting of decision-making under performance assessment (students under real assessment by an instructor), but the drawback was that we had fewer individuals than needed to compute certain statistics. While there are statistically based experimental studies with fewer individuals than ours (e.g., 46 individuals in Biernat & Danaher, 2012), we saw more problems than benefits in following that route. Therefore, we opted for an analytical reasoning approach, which provided high ecological validity due to the various aspects of how the experiment was designed and conducted, as well as the opportunity to reevaluate established concepts (Köhler et al., 2025). We take this opportunity to comment further on the issue of qualitative versus statistically based analyses. No sufficient explanation exists for why two events are causally connected; neither philosophy nor statistics can resolve this. Multivariate statistics is based on correlations, and there is always the possibility that two events are spuriously correlated, either because they are concomitant consequences of a common cause or due to pure chance. Therefore, all we can do before better knowledge is available in a given situation (with or without statistically sound measures) is to develop a temporarily satisfactory explanation based on our own factual experience and judgement about how and why the events of interest manifest as they do. In the present study, we opted to invest our best efforts in analytical reasoning rather than constraining our conclusions within statistically accepted boundaries. Had we had a larger sample and developed full statistical measures, our discussion would inevitably have focused on them, and we would have lost the opportunity to conduct the in-depth, phenomenological analyses we actually performed.
Another limitation is that we do not have information about certain personal characteristics of each individual, such as their self-concepts, orientation to reflection, and the framing of their intelligence/abilities, which would be needed to interpret our data according to individual user profiles. But again, this stems from the opportunity we had to study individuals in a natural setting. Even if we could benefit from understanding how overconfident individuals react to affect-based feedback according to individual characteristics, we would rather search for an answer to the broader question of the effects of feedback on one’s possibly inflated self-perceptions of performance before and after performing similar tasks on the computer. Also, we strongly argue in favor of a more realistic, ecologically valid experimental setting such as ours rather than processing large statistical datasets from artificially designed environments or self-reported attitudinal surveys. Still concerning individuals’ traits, we also did not consider whether some participants had a significant inclination towards response biases such as social desirability (Fisher, 1993) and acquiescence (Watson, 1992). While some may see this as a limitation6, we did not consider such biases as intervening factors because we did not ask respondents to express their views about socially shared issues.
A third limitation is the heuristics we used to analyze the data, since heuristics are always questionable for their essentially empirical rationale. A fourth limitation7 is that we did not consider the possibility that students in the morning section could inform those in the evening section about the experiment. In the university where the study took place, most students take classes independently and work during the remaining hours of the day, so it is unlikely that they would have strong reasons to immediately share information about classroom activities with others. Also, we did not inform the first classroom that the same exercise would be performed again with other students on the same day. Fifth, we did not manipulate any of the specific CSE types (G-CSE, T-CSE, P-CSE). While this may be a limitation, it is coherent with our focus on average CSE (AvCSE) when comparing CSE levels with actual task performance (ATP). A sixth limitation, although not directly related to the research question, is that we did not analyze whether the experimental treatments caused the emergence of underconfidence in some individuals.
Besides addressing the limitations mentioned above, we expect that future research will refine the instruments to measure our newly developed three-dimensional CSE construct. Future research may also contribute to the long-lived debate on whether self-efficacy beliefs and affective feedback correlate with individual performance in computer-aided tasks. Another avenue is to compare the outcomes of learning from feedback with the outcomes of learning from pure experience. Still regarding the sources of learning and self-efficacy gains, future research can test the effects of using priming to introduce a decision problem. In our study, priming was used to level the participants’ awareness of the tasks to be completed, but it is possible that the model–observer similarity hypothesis and the task-appropriateness hypothesis explain the cognitive gains and losses of some individuals due to personal identification with the priming stimulus or with those who produced it (Schunk, 1987; Hoogerheide et al., 2017, 2018). Another suggestion for future studies is to deepen our understanding of the thought-experiment causal paths, hopefully developing instruments to measure them. Last, the present study can be helpful for research on reactions to feedback within the self-motives literature. Our participants were in a passive position with regard to the feedback process, i.e., the design and purpose of the feedback were based on the interests of another stakeholder (the instructors/researchers). However, there is additional literature on people who actively seek, and react to, feedback due to self-enhancement and self-improvement interests (Anseel et al., 2007)8.

7. Conclusions

This study aimed to understand the reactions of overconfident students to feedback received from their instructors on task performance, under the premises of technology use effectiveness, i.e., the effectiveness with which an individual deploys the needed resources to achieve an arbitrarily defined technology use purpose. To answer the research question, we developed a novel, three-dimensional measure of CSE, a scenario-based tool to conduct experiments with sequential decision tasks in the classroom, and a mixed, interpretive experiment on the effects of feedback on one’s CSE beliefs and performance in computer-aided decision-making. We found that the valence of feedback may not be decisive, when compared with neutral feedback, in how individuals elaborate their CSE beliefs and perform decision tasks on the computer if skills and learning are expected to mature naturally. Therefore, our study adds to a body of research that does not find clear benefits stemming from feedback in general, and for overconfident individuals in particular. This is intriguing, as common sense suggests that feedback is important for keeping “one’s feet on the ground” and achieving task effectiveness.

Author Contributions

Conceptualization, C.G.P.-B.; Methodology, C.G.P.-B., M.L.S. and R.d.C.d.F.P.; Validation, C.G.P.-B. and R.d.C.d.F.P.; Formal analysis, C.G.P.-B. and M.L.S.; Investigation, M.L.S.; Resources, C.G.P.-B.; Data curation, C.G.P.-B., M.L.S. and R.d.C.d.F.P.; Writing—original draft, C.G.P.-B.; Writing—review & editing, M.L.S. and R.d.C.d.F.P.; Supervision, C.G.P.-B.; Funding acquisition, C.G.P.-B. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by CNPq grant number 435879-2018-9, CNPq grant number 310165-2019-9, CNPq grant number 305810-2022-7, CNPq grant number 408262-2023-0, and UFPB grant ID PVE13554-2020.

Institutional Review Board Statement

Ethical review and approval were waived for this study, which was conducted in Brazil. Brazilian law waives such approval when a study is conducted with anonymous participants for research purposes. The applicable regulation is “Resolução 510 do Conselho Nacional de Saúde”, 7 April 2016, available at: https://conselho.saude.gov.br/resolucoes/2016/Reso510.pdf (accessed on 26 March 2025).

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

Data is contained within the article.

Acknowledgments

This article is a revised and expanded version of a paper (Porto-Bellini & Serpa, 2018), which was presented at the 24th Americas Conference on Information Systems (AMCIS), New Orleans, LA, USA, 16–18 August 2018.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A. CSE Instrument Applied Before Each Task

The following questionnaire must be adapted to each specific computer tool and decision problem.
Please rate each of the following 15 statements about your capability to use digital technologies (either regular computers or mobile devices), from “0—Totally incapable” to “10—Totally capable”:
(G-CSE 1) Use a personal computer.
(G-CSE 2) Install a software application.
(G-CSE 3) Use a software application for the first time without help.
(G-CSE 4) Use a software application for the first time with help.
(G-CSE 5) Use a software application for the first time when it is similar to another one that I already use.
(T-CSE 1) Edit data in an electronic spreadsheet.
(T-CSE 2) Use mathematical functions and logical operations in an electronic spreadsheet.
(T-CSE 3) Use menu instructions/commands (edit, delete, filter, format, etc.) in an electronic spreadsheet.
(T-CSE 4) Use data from one tab in another tab of the same electronic spreadsheet.
(T-CSE 5) Work with charts in an electronic spreadsheet.
(P-CSE 1) Use an electronic spreadsheet as a decision-support tool.
(P-CSE 2) Use an electronic spreadsheet as a support tool for academic or professional tasks.
(P-CSE 3) Use an electronic spreadsheet as a support tool in business analyses.
(P-CSE 4) Use an electronic spreadsheet to analyze costs and benefits.
(P-CSE 5) Use an electronic spreadsheet to analyze business investments in information technology.
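As a scoring aid, the following minimal Python sketch shows how the 15 responses can be aggregated. It assumes (our assumption, not stated in the instrument) that each CSE dimension is scored as the mean of its five items and that AvCSE is the mean of the three dimension scores:

```python
# Illustrative scoring of the 15-item instrument above (0-10 ratings).
# Assumption: each dimension is the mean of its five items; AvCSE is the mean of the three dimensions.
from statistics import mean

def score_cse(responses: list[float]) -> dict[str, float]:
    """responses: 15 ratings ordered G-CSE 1-5, T-CSE 1-5, P-CSE 1-5."""
    g_cse = mean(responses[0:5])
    t_cse = mean(responses[5:10])
    p_cse = mean(responses[10:15])
    return {"G-CSE": g_cse, "T-CSE": t_cse, "P-CSE": p_cse,
            "AvCSE": mean([g_cse, t_cse, p_cse])}

# Example with hypothetical ratings:
print(score_cse([8, 7, 6, 9, 8, 7, 6, 5, 7, 8, 9, 8, 7, 6, 8]))
```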

Appendix B. Computer-Aided Decision Tasks

Task #1
  • The productivity paradox is a classical problem in the domain of IT investments. It refers to the doubt of whether a company’s IT infrastructure has a positive impact on the company’s results, particularly regarding the financial returns on investment and competitive advantage. Please reflect on the following case, inspired by the productivity paradox, and develop a spreadsheet application according to the instructions.
  • Initial situation: You are the chief technology officer of a company (MINE) in the soft drinks industry with net financial assets of $7 million and market share of 10%. The main rival of MINE in the industry is company OTHER, which has $4 million in net financial assets and 20% of market share. You need to decide whether to upgrade MINE’s IT infrastructure due to prospective efficiency gains in its relationship with the supply chain and end customers. OTHER is about to make a similar decision.
  • Scenario #1:
  • If MINE decides to upgrade its IT infrastructure and OTHER does not, MINE will spend $1 million but in return MINE will take 5% from OTHER’s market share.
  • Scenario #2:
  • If OTHER decides to upgrade its IT infrastructure and MINE does not, OTHER will spend $3 million but in return OTHER will take 3% from MINE’s market share.
  • Scenario #3:
  • If both companies decide to upgrade their IT infrastructure, each of them loses 2% of market share due to a period of low discernment of the market.
  • Scenario #4:
  • If both companies decide not to upgrade their IT infrastructure, they do not spend money but each of them loses 6% of market share due to inefficiencies.
  • Activities:
  • Note: Maximum scores are indicated for each activity; participants did not know the score per activity beforehand. The maximum total score is 10.0.
  • Please use the electronic spreadsheet available on your computer to complete the following activities (template tables are provided):
    (1) In cell B1, please insert the current time. [0.25 points]
    (2) In cell B2, please insert your name. [0.25 points]
    (3) In cell B3, please insert the current date. [0.25 points]
    (4) In the first tab of the spreadsheet, please:
      (a) Insert the net financial assets of each company (please ensure that the cell is formatted for currency). [0.50 points]
      (b) Insert the market share of each company (please ensure that the cell is formatted for percentage). [0.50 points]
      (c) Insert a title in each column (“Name of the company”, “Net assets”, “Market share”), put the name of each company in bold, and highlight the borders of the table in black. [0.75 points]
      (d) Rename the tab as “Information”. [0.25 points]
    (5) Use the second tab to fill in the tables with the four scenarios, rename the tab as “Scenarios” [0.25 points], and:
      (a) Put in bold all column titles in all scenarios (“Company”, “Initial financial assets”, “IT cost”, “Final financial assets”, “Initial market share”, “Market share transfer”, “Final market share”). [0.25 points]
      (b) In cell A3, type “MINE”; in cell A4, type “OTHER”. [0.50 points]
      (c) In the “Initial financial assets” column of Scenario #1, enter the net financial assets of each company. [0.25 points]
      (d) In the “IT cost” column of each scenario, enter the corresponding investment costs. [0.25 points]
      (e) In the “Final financial assets” column of each scenario, develop a formula to calculate the net financial assets after the IT investments. [0.50 points]
      (f) In the “Initial market share” column of Scenario #1, enter the initial market share of each company. [0.25 points]
      (g) In the “Market share transfer” column of each scenario, enter the corresponding share transfer values and ensure the cells are in percentage format (use a minus sign for market share lost). [0.50 points]
      (h) In the “Final market share” column of each scenario, develop a formula to calculate the final market share of each company. [0.50 points]
      (i) Center all cells both horizontally and vertically. [0.50 points]
    (6) Rename the third tab as “Summary” [0.25 points], and fill in and format the summary table of the scenarios according to the example below:
|                      | OTHER invests | OTHER does not invest |
| MINE invests         | MINE: (post-investment financial assets), (post-investment market share); OTHER: (post-investment financial assets), (post-investment market share) | MINE: (post-investment financial assets), (post-investment market share); OTHER: (post-investment financial assets), (post-investment market share) |
| MINE does not invest | MINE: (post-investment financial assets), (post-investment market share); OTHER: (post-investment financial assets), (post-investment market share) | MINE: (post-investment financial assets), (post-investment market share); OTHER: (post-investment financial assets), (post-investment market share) |
      (a) The table must match the given example (except for the actual values) and be filled in using the spreadsheet’s command that retrieves data from another tab (the “Scenarios” tab). [0.75 points]
      (b) Develop a bar chart entitled “Final financial assets” with a note comparing the financial assets of both companies in the case that they upgrade their IT infrastructure. [1.00 point]
      (c) In the “Information” tab, insert the current time in cell B4. [0.25 points]
      (d) In that same tab, answer in cell A10 what the expected scenario is for MINE and for OTHER, and explain why. [1.00 point]
      (e) What is the main decision variable in this problem? [0.25 points]
      (   ) market share
      (   ) financial assets
      (   ) IT infrastructure
Task #2
  • Initial situation and scenarios:
  • Note: The problem situation and the scenarios are similar to those in Task #1, except for different numbers for the financial assets and market share of each company. The intention was to let the students reflect once again on the logic of the prisoner’s dilemma and, by changing the numbers, make them wonder whether the decision would be the same as the previous one. To save space, we do not repeat the problem situation here, since merely changing the numbers does not affect the decision logic of this type of problem.
  • Activities:
  • Note: Maximum scores are indicated for each activity; participants did not know the score per activity beforehand. The maximum total score is 10.0.
  • Please use the electronic spreadsheet available on your computer to complete the following activities (template tables are provided):
    (1) In cell B1, please insert the current time. [0.25 points]
    (2) In cell B2, please insert your name. [0.25 points]
    (3) In cell B3, please insert the current date. [0.25 points]
    (4) In the first tab of the spreadsheet, please:
      (a) Insert the net financial assets of each company (please ensure that the cell is formatted as a number with a thousands separator and two decimal places). [0.50 points]
      (b) Insert the market share of each company (please ensure that the cell is formatted for percentage). [0.50 points]
      (c) Fill cells A6, B6, and C6 in grey and highlight their borders. [0.75 points]
      (d) Rename the first tab as “Information”. [0.25 points]
    (5) Use the next tab to fill in the tables with the four scenarios, rename the tab as “Scenarios” [0.25 points], and:
      (a) Color in blue all column titles in all scenarios (“Company”, “Initial financial assets”, “IT cost”, “Final financial assets”, “Initial market share”, “Market share transfer”, “Final market share”). [0.25 points]
      (b) In cell A3, type “MINE”; in cell A4, type “OTHER”. [0.50 points]
      (c) In the “Initial financial assets” column of Scenario #1, enter the net financial assets of each company. [0.25 points]
      (d) In the “IT cost” column of each scenario, enter the corresponding investment costs. [0.25 points]
      (e) In the “Final financial assets” column of each scenario, develop a formula to calculate the net financial assets after the IT investments. [0.50 points]
      (f) In the “Initial market share” column of Scenario #1, enter the initial market share of each company. [0.25 points]
      (g) In the “Market share transfer” column of each scenario, enter the corresponding share transfer values and ensure the cells are in percentage format (use a minus sign for market share lost). [0.50 points]
      (h) In the “Final market share” column of each scenario, develop a formula to calculate the final market share of each company. [0.50 points]
      (i) Align all cells to the left and to the bottom. [0.50 points]
    (6) Rename the third tab as “Summary” [0.25 points], and fill in and format the summary table of the scenarios according to the example below:
|                      | OTHER invests | OTHER does not invest |
| MINE invests         | MINE: (post-investment financial assets), (post-investment market share); OTHER: (post-investment financial assets), (post-investment market share) | MINE: (post-investment financial assets), (post-investment market share); OTHER: (post-investment financial assets), (post-investment market share) |
| MINE does not invest | MINE: (post-investment financial assets), (post-investment market share); OTHER: (post-investment financial assets), (post-investment market share) | MINE: (post-investment financial assets), (post-investment market share); OTHER: (post-investment financial assets), (post-investment market share) |
      (a) The table must match the given example (except for the actual values) and be filled in using the spreadsheet’s command that retrieves data from another tab (the “Scenarios” tab). [0.75 points]
      (b) Develop a line chart entitled “Final financial assets” with a note comparing the financial assets of both companies in the case that MINE upgrades its IT infrastructure and OTHER does not. [1.00 point]
      (c) In the “Information” tab, insert the current time in cell B4. [0.25 points]
      (d) In that same tab, answer in cell A10 what the expected scenario is for MINE and for OTHER, and explain why. [1.00 point]
      (e) What is the main decision variable in this problem? [0.25 points]
      (   ) market share
      (   ) financial assets
      (   ) IT infrastructure

Notes

1
The authors thank an anonymous reviewer for suggesting these discussions.
2
The statement on ethical research was shared with the publisher.
3
4
For instance: Butler University (https://www.butler.edu/academics/core/components/analytic-reasoning, accessed on 26 March 2025), Chicago State University (https://www.csu.edu/humanresources/empdev/documents/AnalyticalThinking.pdf, accessed on 26 March 2025), and University of Wisconsin Eau-Claire (https://www.uwec.edu/academics/programs/undergraduate/analytical-reasoning/, accessed on 26 March 2025).
5
6
The authors thank an anonymous reviewer for this remark.
7
The authors thank another anonymous reviewer for this remark.
8
The authors thank an anonymous reviewer for suggesting this discussion.

References

  1. Aesaert, K., Voogt, J., Kuiper, E., & Van Braak, J. (2017). Accuracy and bias of ICT self-efficacy: An empirical study into students’ over- and underestimation of their ICT competences. Computers in Human Behavior, 75, 92–102. [Google Scholar] [CrossRef]
  2. Anderson, M. C., Banker, R. D., & Ravindran, S. (2003). The new productivity paradox. Communications of the ACM, 46(3), 91–94. [Google Scholar] [CrossRef]
  3. Anseel, F., Lievens, F., & Levy, P. E. (2007). A self-motives perspective on feedback-seeking behavior: Linking organizational behavior and social psychology research. International Journal of Management Reviews, 9(3), 211–236. [Google Scholar] [CrossRef]
  4. Anseel, F., Lievens, F., & Schollaert, E. (2009). Reflection as a strategy to enhance task performance after feedback. Organizational Behavior & Human Decision Processes, 110(1), 23–35. [Google Scholar] [CrossRef]
  5. Attali, Y., & Van Der Kleij, F. (2017). Effects of feedback elaboration and feedback timing during computer-based practice in mathematics problem solving. Computers & Education, 110, 154–169. [Google Scholar] [CrossRef]
  6. Bandura, A. (1991). Social cognitive theory of self-regulation. Organizational Behavior & Human Decision Processes, 50(2), 248–287. [Google Scholar] [CrossRef]
  7. Bandura, A. (1997). Self-efficacy: The exercise of control. Freeman. [Google Scholar]
  8. Bandura, A., & Cervone, D. (1986). Differential engagement of self-reactive influences in cognitive motivation. Organizational Behavior & Human Decision Processes, 38(1), 92–113. [Google Scholar] [CrossRef]
  9. Beck, J. W., & Schmidt, A. M. (2018). Negative relationships between self-efficacy and performance can be adaptive: The mediating role of resource allocation. Journal of Management, 44(2), 555–588. [Google Scholar] [CrossRef]
  10. Bellman, S., & Murray, K. B. (2018). Feedback, task performance, and interface preferences. European Journal of Information Systems, 27(6), 654–669. [Google Scholar] [CrossRef]
  11. Biernat, M., & Danaher, K. (2012). Interpreting and reacting to feedback in stereotype-relevant performance domains. Journal of Experimental Social Psychology, 48(1), 271–276. [Google Scholar] [CrossRef]
  12. Blanton, H., Pelham, B. W., DeHart, T., & Carvallo, M. (2001). Overconfidence as dissonance reduction. Journal of Experimental Social Psychology, 37(5), 373–385. [Google Scholar] [CrossRef]
  13. Borovoi, L., Schmidtke, K., & Vlaev, I. (2022). The effects of feedback valance and progress monitoring on goal striving. Current Psychology, 41, 4574–4591. [Google Scholar] [CrossRef]
  14. Budworth, M.-H., Latham, G. P., & Manroop, L. (2015). Looking forward to performance improvement: A field test of the feedforward interview for performance management. Human Resource Management, 54(1), 45–54. [Google Scholar] [CrossRef]
  15. Burton-Jones, A., & Grange, C. (2013). From use to effective use: A representation theory perspective. Information Systems Research, 24(3), 632–658. [Google Scholar] [CrossRef]
  16. Cabestrero, R., Quirós, P., Santos, O. C., Salmeron-Majadas, S., Uria-Rivas, R., Boticario, J. G., Arnau, D., Arevalillo-Herráez, M., & Ferri, F. J. (2018). Some insights into the impact of affective information when delivering feedback to students. Behaviour & Information Technology, 37(12), 1252–1263. [Google Scholar] [CrossRef]
  17. Campbell, D. T., & Stanley, J. C. (1966). Experimental and quasi-experimental designs for research. Rand McNally. [Google Scholar]
  18. Carr, N. G. (2003). IT doesn’t matter. Harvard Business Review, 5–12. Available online: https://www.classes.cs.uchicago.edu/archive/2014/fall/51210-1/required.reading/ITDoesntMatter.pdf (accessed on 26 March 2025).
  19. Chen, X., Breslow, L., & DeBoer, J. (2018). Analyzing productive learning behaviors for students using immediate corrective feedback in a blended learning environment. Computers & Education, 117, 59–74. [Google Scholar] [CrossRef]
  20. Chevrier, B., Compagnone, P., Carrizales, A., Brisset, C., & Lannegrand, L. (2020). Emerging adult self-perception and link with adjustment to academic context among female college students. European Review of Applied Psychology, 70(5), 100527. [Google Scholar] [CrossRef]
  21. Compeau, D. R., & Higgins, C. A. (1995). Computer self-efficacy: Development of a measure and initial test. MIS Quarterly, 19(2), 189–211. [Google Scholar] [CrossRef]
  22. Cutumisu, M. (2019). The association between feedback-seeking and performance is moderated by growth mindset in a digital assessment game. Computers in Human Behavior, 93, 267–278. [Google Scholar] [CrossRef]
  23. Cutumisu, M., & Lou, N. M. (2020). The moderating effect of mindset on the relationship between university students’ critical feedback-seeking and learning. Computers in Human Behavior, 112, 106445. [Google Scholar] [CrossRef]
  24. Cutumisu, M., & Schwartz, D. L. (2018). The impact of critical feedback choice on students’ revision, performance, learning, and memory. Computers in Human Behavior, 78, 351–367. [Google Scholar] [CrossRef]
  25. Davis, F. D. (1989). Perceived usefulness, perceived ease of use, and user acceptance of information technology. MIS Quarterly, 13(3), 319–340. [Google Scholar] [CrossRef]
  26. DeLone, W. H., & McLean, E. R. (2003). The DeLone and McLean model of information systems success: A ten-year update. Journal of Management Information Systems, 19(4), 9–30. [Google Scholar] [CrossRef]
  27. Dimotakis, N., Mitchell, D., & Maurer, T. (2017). Positive and negative assessment center feedback in relation to development self-efficacy, feedback seeking, and promotion. Journal of Applied Psychology, 102(11), 1514–1527. [Google Scholar] [CrossRef] [PubMed]
  28. Downey, J. P., & McMurtrey, M. (2007). Introducing task-based general computer self-efficacy: An empirical comparison of three general self-efficacy instruments. Interacting with Computers, 19, 382–396. [Google Scholar] [CrossRef]
  29. Dweck, C. S., & Leggett, E. L. (1988). A social-cognitive approach to motivation and personality. Psychological Review, 95(2), 256–273. [Google Scholar] [CrossRef]
  30. Ehrlinger, J., Mitchum, A. L., & Dweck, C. S. (2016). Understanding overconfidence: Theories of intelligence, preferential attention, and distorted self-assessment. Journal of Experimental Social Psychology, 63, 94–100. [Google Scholar] [CrossRef]
  31. Fisher, R. J. (1993). Social desirability bias and the validity of indirect questioning. Journal of Consumer Research, 20(2), 303–315. [Google Scholar] [CrossRef]
  32. Folke, T., Jacobsen, C., Fleming, S. M., & De Martino, B. (2017). Explicit representation of confidence informs future value-based decisions. Nature Human Behavior, 1, 0002. [Google Scholar] [CrossRef]
  33. Freeman, K. A., & Dexter-Mazza, E. T. (2004). Using self-monitoring with an adolescent with disruptive classroom behavior: Preliminary analysis of the role of adult feedback. Behavior Modification, 28(3), 402–419. [Google Scholar] [CrossRef]
  34. Gist, M. E. (1987). Self-efficacy: Implications for organizational behavior and human resource management. Academy of Management Review, 12(3), 472–485. [Google Scholar] [CrossRef]
  35. Gist, M. E., Schwoerer, C., & Rosen, B. (1989). Effects of alternative training methods on self-efficacy and performance in computer software training. Journal of Applied Psychology, 74(6), 884–891. [Google Scholar] [CrossRef]
  36. González-Vallejo, C., & Bonham, A. (2007). Aligning confidence with accuracy: Revisiting the role of feedback. Acta Psychologica, 125(2), 221–239. [Google Scholar] [CrossRef]
  37. Grover, V., Lyytinen, K., Srinivasan, A., & Tan, B. C. Y. (2008). Contributing to rigorous and forward thinking explanatory theory. Journal of the AIS, 9(2), 40–47. [Google Scholar] [CrossRef]
  38. Gupta, S., & Bostrom, R. P. (2019). A revision of computer self-efficacy conceptualizations in information systems. The DATABASE for Advances in Information Systems, 50(2), 71–93. [Google Scholar] [CrossRef]
  39. Hattie, J., & Timperley, H. (2007). The power of feedback. Review of Educational Research, 77(1), 81–112. [Google Scholar] [CrossRef]
  40. Hilary, G., & Menzly, L. (2006). Does past success lead analysts to become overconfident? Management Science, 52(4), 489–500. [Google Scholar] [CrossRef]
  41. Hollingshead, S. J., Wohl, M. J. A., & Santesso, D. (2019). Do you read me? Including personalized behavioral feedback in pop-up messages does not enhance limit adherence among gamblers. Computers in Human Behavior, 94, 122–130. [Google Scholar] [CrossRef]
  42. Hoogerheide, V., Loyens, S. M. M., Jadi, F., Vrins, A., & Van Gog, T. (2017). Testing the model-observer similarity hypothesis with text-based worked examples. Educational Psychology, 37(2), 112–137. [Google Scholar] [CrossRef]
  43. Hoogerheide, V., Van Wermeskerken, M., Van Nassau, H., & Van Gog, T. (2018). Model-observer similarity and task-appropriateness in learning from video modeling examples: Do model and student gender affect test performance, self-efficacy, and perceived competence? Computers in Human Behavior, 89, 457–464. [Google Scholar] [CrossRef]
  44. Introna, L. D., & Whitley, E. D. (2000). About experiments and style: A critique of laboratory research in information systems. Information Technology & People, 13(3), 161–173. [Google Scholar] [CrossRef]
  45. Kahneman, D., & Klein, G. (2009). Conditions for intuitive expertise: A failure to disagree. American Psychologist, 64(6), 515–526. [Google Scholar] [CrossRef]
  46. Karsten, R., Mitra, A., & Schmidt, D. (2012). Computer self-efficacy: A meta-analysis. Journal of Organizational & End User Computing, 24(4), 54–80. [Google Scholar] [CrossRef]
  47. Kefalidou, G. (2017). When immediate interactive feedback boosts optimization problem solving: A ‘human-in-the-loop’ approach for solving capacitated vehicle routing problems. Computers in Human Behavior, 73, 110–124. [Google Scholar] [CrossRef]
  48. Kelley, C. M., & McLaughlin, A. C. (2012). Individual differences in the benefits of feedback for learning. Human Factors, 54(1), 26–35. [Google Scholar] [CrossRef] [PubMed]
  49. Kippenberger, T. (1997). The prisoner’s dilemma. The Antidote, 2(4), 8–10. [Google Scholar] [CrossRef]
  50. Kluger, A. N., & DeNisi, A. (1996). The effects of feedback interventions on performance: A historical review, a meta-analysis, and a preliminary feedback intervention theory. Psychological Bulletin, 119(2), 254–284. [Google Scholar] [CrossRef]
  51. Kolokolstov, V. N., & Malafeyev, O. A. (2020). Around the prisoner’s dilemma. In Understanding game theory: Introduction to the analysis of many agent systems with competition and cooperation (pp. 3–28). World Scientific. [Google Scholar] [CrossRef]
  52. Köhler, T., Rumyaantseva, M., & Welch, C. (2025). Qualitative restudies: Research designs for retheorizing. Organizational Research Methods, 28(1), 32–57. [Google Scholar] [CrossRef]
  53. Lam, C. F., DeRue, D. S., Karam, E. P., & Hollenbeck, J. R. (2011). The impact of feedback frequency on learning and task performance: Challenging the “more is better” assumption. Organizational Behavior & Human Decision Processes, 116(2), 217–228. [Google Scholar] [CrossRef]
  54. Lam, L. W., Peng, K. Z., Wong, C.-H., & Lau, D. C. (2017). Is more feedback seeking always better? Leader-member exchange moderates the relationship between feedback-seeking behavior and performance. Journal of Management, 43(7), 2195–2217. [Google Scholar] [CrossRef]
  55. Lee, M. T. Y., Wong, B. P., Chow, B. W. Y., & McBride-Chang, C. (2006). Predictors of suicide ideation and depression in Hong Kong adolescents: Perceptions of academic and family climates. Suicide & Life-Threatening Behavior, 36(1), 82–96. [Google Scholar] [CrossRef]
  56. Lucas, G. J. M., Knoben, J., & Meeys, M. T. H. (2018). Contradictory yet coherent? Inconsistency in performance feedback and R&D investment change. Journal of Management, 44(2), 658–681. [Google Scholar] [CrossRef]
  57. Marakas, G. M., Yi, M. Y., & Johnson, R. D. (1998). The multilevel and multifaceted character of computer self-efficacy: Toward clarification of the construct and an integrative framework for research. Information Systems Research, 9(2), 126–163. [Google Scholar] [CrossRef]
  58. McConnell, A. R., Rydell, R. J., & Brown, C. M. (2009). On the experience of self-relevant feedback: How self-concept organization influences affective responses and self-evaluations. Journal of Experimental Social Psychology, 45(4), 695–707. [Google Scholar] [CrossRef]
  59. Merriman, K. K. (2017). Extrinsic work values and feedback: Contrary effects for performance and well-being. Human Relations, 70(3), 339–361. [Google Scholar] [CrossRef]
  60. Misiejuk, K., Wasson, B., & Egelandsdal, K. (2021). Using learning analytics to understand student perceptions of peer feedback. Computers in Human Behavior, 117, 106658. [Google Scholar] [CrossRef]
  61. Moore, D. A., & Healy, P. J. (2008). The trouble with overconfidence. Psychological Review, 115(2), 502–517. [Google Scholar] [CrossRef]
  62. Moores, T. T., & Chang, J. C.-J. (2009). Self-efficacy, overconfidence, and the negative effect on subsequent performance: A field study. Information & Management, 46(2), 69–76. [Google Scholar] [CrossRef]
  63. Morony, S., Kleitman, S., Lee, Y. P., & Stankov, L. (2013). Predicting achievement: Confidence vs self-efficacy, anxiety, and self-concept in Confucian and European countries. International Journal of Educational Research, 58, 79–96. [Google Scholar] [CrossRef]
  64. Murphy, C. A., Coover, D., & Owen, S. T. (1989). Development and validation of the computer self-efficacy scale. Education & Psychological Measurement, 49(4), 893–899. [Google Scholar] [CrossRef]
  65. Narciss, S., Sosnovsky, S., Schnaubert, L., Andrès, E., Eichelmann, A., Gogua, G., & Melis, E. (2014). Exploring feedback and student characteristics relevant for personalizing feedback strategies. Computers & Education, 71, 56–76. [Google Scholar] [CrossRef]
  66. Nguyen, D. T., Wright, E. P., Dedding, C., Pham, T. T., & Bunders, J. (2019). Low self-esteem and its association with anxiety, depression, and suicidal ideation in Vietnamese secondary school students: A cross-sectional study. Frontiers in Psychiatry, 10, 698. [Google Scholar] [CrossRef] [PubMed]
  67. Orlikowski, W. J. (2000). Managing use not technology: A view from the trenches. In D. A. Marchand, T. H. Davenport, & T. Dickson (Eds.), Mastering information management (pp. 253–257). Prentice-Hall. [Google Scholar]
  68. Orozco, R., Benjet, C., Borges, G., Moneta Arce, M. F., Fregoso Ito, D., Fleiz, C., & Villatoro, J. A. (2018). Association between attempted suicide and academic performance indicators among middle and high school students in Mexico: Results from a national survey. Child & Adolescent Psychiatry & Mental Health, 12, 9. [Google Scholar] [CrossRef]
  69. Piccoli, G., Rodriguez, J., Palese, B., & Bartosiak, M. L. (2020). Feedback at scale: Designing for accurate and timely practical digital skills evaluation. European Journal of Information Systems, 29(2), 114–133. [Google Scholar] [CrossRef]
  70. Porat, E., Blau, I., & Barak, A. (2018). Measuring digital literacies: Junior high-school students’ perceived competencies versus actual performance. Computers & Education, 126, 23–36. [Google Scholar] [CrossRef]
  71. Porto-Bellini, C. G. (2018). The ABCs of effectiveness in the digital society. Communications of the ACM, 61(7), 84–91. [Google Scholar] [CrossRef]
  72. Porto-Bellini, C. G., & Serpa, M. L. (2018, August 16–18). Mirror, mirror on the wall: An experiment on feedback and overconfidence in computer-mediated tasks. 24th Americas Conference on Information Systems (AMCIS), New Orleans, LA, USA. [Google Scholar]
  73. Porto-Bellini, C. G., Isoni Filho, M. M., De Moura, P. J., Jr., & Pereira, R. C. F. (2016). Self-efficacy and anxiety of digital natives in face of compulsory computer-mediated tasks: A study about digital capabilities and limitations. Computers in Human Behavior, 59(1), 49–57. [Google Scholar] [CrossRef]
  74. Porto-Bellini, C. G., Pereira, R. C. F., & Becker, J. L. (2020). Emergent customer team performance and effectiveness: An ex-post-facto study on cognition and behavior in enterprise systems implementation. Communications of the AIS, 47, 550–582. [Google Scholar] [CrossRef]
  75. Rabinovich, A., & Morton, T. A. (2012). Sizing fish and ponds: The joint effects of individual- and group-based feedback. Journal of Experimental Social Psychology, 48(1), 244–249. [Google Scholar] [CrossRef]
  76. Richardson, A. S., Bergen, H. A., Martin, G., Roeger, L., & Allison, S. (2005). Perceived academic performance as an indicator of risk of attempted suicide in young adolescents. Archives of Suicide Research, 9(2), 163–176. [Google Scholar] [CrossRef]
  77. Sansone, C. (1989). Competence feedback, task feedback, and intrinsic interest: An examination of process and context. Journal of Experimental Social Psychology, 25(4), 343–361. [Google Scholar] [CrossRef]
  78. Sauer, J., Schmutz, S., Sonderegger, A., & Messerli, N. (2019). Social stress and performance in human-machine interaction: A neglected research field. Ergonomics, 62(11), 1377–1391. [Google Scholar] [CrossRef] [PubMed]
  79. Schaerer, M., Kern, M., Berger, G., Medvec, V., & Swaab, R. I. (2018). The illusion of transparency in performance appraisals: When and why accuracy motivation explains unintentional feedback inflation. Organizational Behavior & Human Decision Processes, 144, 171–186. [Google Scholar] [CrossRef]
  80. Schmidt, A. M., & Deshon, R. P. (2010). The moderating effects of performance ambiguity on the relationship between self-efficacy and performance. Journal of Applied Psychology, 95(5), 572–581. [Google Scholar] [CrossRef]
  81. Schunk, D. H. (1987). Peer models and children’s behavioral change. Review of Educational Research, 57(2), 149–174. [Google Scholar] [CrossRef]
  82. Sherf, E. N., & Morrison, E. W. (2020). I do not need feedback! Or do I? Self-efficacy, perspective taking, and feedback seeking. Journal of Applied Psychology, 105(2), 146–165. [Google Scholar] [CrossRef] [PubMed]
  83. Smith, T. A., & Kimball, D. R. (2010). Learning from feedback: Spacing and the delay-retention effect. Journal of Experimental Psychology: Learning, Memory, & Cognition, 36(1), 80–95. [Google Scholar] [CrossRef]
  84. Thuillard, S., Adams, M., Jelmini, G., Schmutz, S., Sonderegger, A., & Sauer, J. (2022). When humans and computers induce social stress through negative feedback: Effects on performance and subjective state. Computers in Human Behavior, 133, 107270. [Google Scholar] [CrossRef]
  85. Torres, C. I., Correia, J., Compeau, D., & Carter, M. (2022). Computer self-efficacy: A replication after thirty years. AIS Transactions on Replication Research, 8, 5. [Google Scholar] [CrossRef]
  86. Tsai, C.-W. (2013). An effective online teaching method: The combination of collaborative learning with initiation and self-regulation learning with feedback. Behaviour & Information Technology, 32(7), 712–723. [Google Scholar] [CrossRef]
  87. Tzeng, J.-Y. (2009). The impact of general and specific performance and self-efficacy on learning with computer-based concept mapping. Computers in Human Behavior, 25(4), 989–996. [Google Scholar] [CrossRef]
  88. Ulfert-Blank, A.-S., & Schmidt, I. (2022). Assessing digital self-efficacy: Review and scale development. Computers & Education, 191, 104626. [Google Scholar] [CrossRef]
  89. Unger-Aviram, E., Zwikael, O., & Restubog, S. L. D. (2013). Revisiting goals, feedback, recognition, and performance success: The case of project teams. Group & Organization Management, 38(5), 570–600. [Google Scholar] [CrossRef]
  90. Vancouver, J. B., & Kendall, L. N. (2006). When self-efficacy negatively relates to motivation and performance in a learning context. Journal of Applied Psychology, 91(5), 1146–1153. [Google Scholar] [CrossRef] [PubMed]
  91. Vancouver, J. B., Thompson, C. M., & Williams, A. A. (2001). The changing signs in the relationships among self-efficacy, personal goals, and performance. Journal of Applied Psychology, 86(4), 605–620. [Google Scholar] [CrossRef] [PubMed]
  92. Vancouver, J. B., Thompson, C. M., Tischner, E. C., & Putka, D. J. (2002). Two studies examining the negative effect of self-efficacy on performance. Journal of Applied Psychology, 87(3), 506–516. [Google Scholar] [CrossRef]
  93. Venkatesh, V., Thong, J. Y. L., & Xu, X. (2016). Unified theory of acceptance and use of technology: A synthesis and the road ahead. Journal of the AIS, 17(5), 328–376. [Google Scholar] [CrossRef]
  94. Watling, C., Driessen, E., Van Der Vleuten, C. P. M., Vanstone, M., & Lingard, L. (2012). Understanding responses to feedback: The potential and limitations of regulatory focus theory. Medical Education, 46(6), 593–603. [Google Scholar] [CrossRef]
  95. Watson, D. (1992). Correcting for acquiescent response bias in the absence of a balanced scale: An application to class consciousness. Sociological Methods & Research, 21(1), 52–88. [Google Scholar] [CrossRef]
  96. Wetherall, K., Robb, K. A., & O’Connor, R. C. (2019). Social rank theory of depression: A systematic review of self-perceptions of social rank and their relationship with depressive symptoms and suicide risk. Journal of Affective Disorders, 246(1), 300–319. [Google Scholar] [CrossRef]
  97. Yeo, G. B., & Neal, A. (2006). An examination of the dynamic relationship between self-efficacy and performance across levels of analysis and levels of specificity. Journal of Applied Psychology, 91(5), 1088–1101. [Google Scholar] [CrossRef]
  98. Zakay, D. (1992). The influence of computerized feedback on overconfidence in knowledge. Behaviour & Information Technology, 11(6), 329–333. [Google Scholar] [CrossRef]
  99. Zingoni, M., & Byron, K. (2017). How beliefs about the self influence perceptions of negative feedback and subsequent effort and learning. Organizational Behavior & Human Decision Processes, 139, 50–62. [Google Scholar] [CrossRef]
  100. Zong, Z., Schunn, C. D., & Wang, Y. (2021). What aspects of online peer feedback robustly predict growth in students’ task performance? Computers in Human Behavior, 124, 106924. [Google Scholar] [CrossRef]
Figure 1. The experiment. Notes: dashed lines represent the thought experiment’s causal paths and latent variables; solid lines represent the lab experiment’s causal paths and the variables that were effectively measured. The model integrates motivational attitudes (the levels of CSE and SRP, and the reactions to feedback), theoretical abilities (previous experience with the course’s contents along with the modelling instructions received from the priming procedure), and practical skills (the actual skills impacting the formation of CSE and ATP).
Figure 2. Feedback plan.
Figure 3. Computer self-efficacy at the general, tool, and problem levels. Note: Numbers in the first rows represent the quantity of individuals with increased (“↑”), decreased (“↓”), or equal (“—”) measures after feedback, and numbers in the second rows represent the accumulated differences regarding the first (Task #1) and second (Task #2) measures of CSE from all individuals in the corresponding column. One individual in GNEG did not complete Task #2 and did not provide data on P-CSE.
Figure 4. Average self-efficacy, self-reported performance, and actual task performance. Note: Numbers in the first rows represent the quantity of individuals with increased (“↑”), decreased (“↓”), or equal (“—”) measures after feedback, and numbers in the second rows represent the accumulated differences regarding the first (Task #1) and second (Task #2) measures of CSE from all individuals in the corresponding column. One individual in GNEG did not complete Task #2 and, as such, did not provide data on SRP and ATP.
Table 1. The experimental groups.
Demographic Variable | GCTRL | GPOS | GNEG
Participants (quantity) | 18 | 18 | 18
Second-semester participants (%) | 89 | 94 | 89
Average age (years) | 21.8 | 23.7 | 22.7
Average year of first contact with computers | 2002 | 2000 | 2002
Participants with experience in a computer-related industry internship (quantity) | 4 | 2 | 3
Participants with computer-related work experience (quantity) | 11 | 8 | 13
Average duration of internship (months) | 7 | 17.5 | 7
Average work experience (months) | 31 | 86 | 51
Female participants (%) | 55 | 72 | 28
Male participants (%) | 45 | 28 | 72
Average importance attributed to computer use for personal issues (0–10) | 8.83 | 8.78 | 8.67
Average importance attributed to computer use for professional issues (0–10) | 9.5 | 8.94 | 9.44
Table 2. Actions in the experiment.
Sequential Action | Code | Duration (min)
Priming with a video about the prisoner’s dilemma | Priming | 10
Measurement of computer self-efficacy (G-CSE, P-CSE, T-CSE) | CSE1 | 5
Electronic spreadsheet task (first version) | Task #1 | 35
Self-evaluation of performance | SRP1 | *
External evaluation of actual performance | ATP1 | **
Feedback intervention | Feedback | ***
Measurement of computer self-efficacy (G-CSE, P-CSE, T-CSE) | CSE2 | 5
Electronic spreadsheet task (second version) | Task #2 | 35
Self-evaluation of performance | SRP2 | *
External evaluation of actual performance | ATP2 | **
Notes: * Self-evaluation was part of the task, i.e., it was completed within the 35 min task limit. ** Actual task performance was evaluated as soon as each student finished the corresponding task. *** Feedback was given to each student immediately after Task #1.
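As a quick check on the time budget implied by Table 2, the sketch below encodes the sequence of actions as a simple data structure and sums the fixed durations; the encoding is an assumption made for illustration, not part of the study materials.

```python
# Illustrative sketch (an assumption about layout, not taken from the article):
# the session plan of Table 2 as a simple data structure, used to check the
# fixed time budget. Steps marked with asterisks in Table 2 (SRP, ATP, and the
# feedback intervention) have no fixed duration and are encoded as None; the
# self-evaluations (SRP1, SRP2) happened within the 35 min task limit.

session_plan = [
    ("Priming", 10),     # video about the prisoner's dilemma
    ("CSE1", 5),         # first measurement of G-CSE, P-CSE, T-CSE
    ("Task #1", 35),     # first spreadsheet task (includes SRP1)
    ("ATP1", None),      # external evaluation, variable duration
    ("Feedback", None),  # immediate feedback after Task #1, variable duration
    ("CSE2", 5),         # second measurement of G-CSE, P-CSE, T-CSE
    ("Task #2", 35),     # second spreadsheet task (includes SRP2)
    ("ATP2", None),      # external evaluation, variable duration
]

fixed_minutes = sum(d for _, d in session_plan if d is not None)
print(fixed_minutes)  # 90 -> minutes of fixed, scheduled activity per student
```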
Table 3. Overconfident individuals.
Student (GCTRL) | AvCSE1 | ATP1 | AvCSE1 − ATP1 | AvCSE2 | ATP2 | AvCSE2 − ATP2
M2 | 9.07 | 5.50 | 3.57 | 9.00 | 7.00 | 2.00
M4 | 4.73 | 5.75 | −1.02 | 6.20 | 6.25 | −0.05
M10 | 4.07 | 2.00 | 2.07 | 4.67 | 4.00 | 0.67
M13 | 2.67 | 3.00 | −0.33 | 3.27 | 3.50 | −0.23
M18 | 7.00 | 3.50 | 3.50 | 3.20 | 4.75 | −1.55
M25 | 9.40 | 7.50 | 1.90 | 9.53 | 6.50 | 3.03
M27 | 5.60 | 5.00 | 0.60 | 5.67 | 5.50 | 0.17
M28 | 9.33 | 6.75 | 2.58 | 8.87 | 8.75 | 0.12
M29 | 5.73 | 3.00 | 2.73 | 5.60 | 3.75 | 1.85
E1 | 6.80 | 5.50 | 1.30 | 6.47 | 7.00 | −0.53
E3 | 7.87 | 7.00 | 0.87 | 8.40 | 8.00 | 0.40
E6 | 5.93 | 5.75 | 0.18 | 8.13 | 6.00 | 2.13
E7 | 5.40 | 5.75 | −0.35 | 3.53 | 5.50 | −1.97
E15 | 8.27 | 6.50 | 1.77 | 7.93 | 6.00 | 1.93
E16 | 8.60 | 7.75 | 0.85 | 9.00 | 9.50 | −0.50
E18 | 4.60 | 6.75 | −2.15 | 4.40 | 8.75 | −4.35
E23 | 8.80 | 8.75 | 0.05 | 9.27 | 9.00 | 0.27
E25 | 3.53 | 4.75 | −1.22 | 2.87 | 5.50 | −2.63

Student (GPOS) | AvCSE1 | ATP1 | AvCSE1 − ATP1 | AvCSE2 | ATP2 | AvCSE2 − ATP2
M6 | 8.27 | 5.75 | 2.52 | 8.87 | 6.25 | 2.62
M7 | 3.13 | 5.00 | −1.87 | 2.07 | 5.75 | −3.68
M8 | 3.53 | 6.50 | −2.97 | 4.47 | 8.00 | −3.53
M12 | 6.80 | 4.00 | 2.80 | 6.60 | 5.25 | 1.35
M15 | 2.80 | 4.75 | −1.95 | 3.00 | 6.50 | −3.50
M17 | 9.07 | 9.00 | 0.07 | 8.60 | 9.75 | −1.15
M20 | 8.73 | 5.75 | 2.98 | 8.87 | 6.00 | 2.87
M22 | 7.00 | 6.00 | 1.00 | 7.07 | 7.00 | 0.07
M23 | 0.73 | 4.25 | −3.52 | 1.73 | 5.25 | −3.52
M24 | 7.67 | 7.25 | 0.42 | 7.87 | 7.50 | 0.37
E2 | 4.67 | 6.00 | −1.33 | 4.47 | 6.75 | −2.28
E4 | 6.33 | 6.00 | 0.33 | 6.07 | 8.00 | −1.93
E10 | 5.07 | 3.75 | 1.32 | 6.00 | 4.00 | 2.00
E11 | 5.33 | 6.50 | −1.17 | 6.07 | 7.75 | −1.68
E12 | 3.80 | 4.00 | −0.20 | 3.07 | 3.50 | −0.43
E14 | 6.07 | 7.50 | −1.43 | 6.73 | 7.75 | −1.02
E20 | 9.33 | 4.00 | 5.33 | 8.87 | 9.25 | −0.38
E24 | 4.67 | 5.25 | −0.58 | 5.27 | 6.25 | −0.98

Student (GNEG) | AvCSE1 | ATP1 | AvCSE1 − ATP1 | AvCSE2 | ATP2 | AvCSE2 − ATP2
M9 | 9.33 | 5.25 | 4.08 | 10.00 | |
M1 | 7.27 | 7.75 | −0.48 | 7.20 | 8.25 | −1.05
M3 | 8.73 | 6.50 | 2.23 | 8.53 | 7.75 | 0.78
M5 | 7.87 | 5.00 | 2.87 | 7.47 | 4.50 | 2.97
M11 | 8.47 | 5.00 | 3.47 | 7.20 | 7.50 | −0.30
M14 | 6.07 | 6.00 | 0.07 | 5.60 | 5.50 | 0.10
M16 | 5.13 | 6.25 | −1.12 | 4.60 | 6.00 | −1.40
M19 | 8.60 | 7.25 | 1.35 | 8.20 | 9.50 | −1.30
M21 | 6.53 | 8.25 | −1.72 | 6.80 | 8.75 | −1.95
M26 | 2.73 | 4.75 | −2.02 | 2.67 | 6.25 | −3.58
E5 | 8.60 | 7.00 | 1.60 | 8.60 | 8.25 | 0.35
E8 | 7.20 | 4.75 | 2.45 | 3.93 | 4.75 | −0.82
E9 | 8.00 | 5.50 | 2.50 | 5.93 | 5.75 | 0.18
E13 | 5.87 | 1.75 | 4.12 | 3.33 | 1.00 | 2.33
E17 | 9.00 | 6.50 | 2.50 | 8.33 | 7.75 | 0.58
E19 | 5.53 | 3.00 | 2.53 | 4.13 | 6.50 | −2.37
E21 | 8.60 | 5.25 | 3.35 | 8.93 | 6.25 | 2.68
E22 | 9.67 | 8.75 | 0.92 | 9.53 | 8.50 | 1.03
Notes: In the original table, overconfidence levels are printed in black and non-overconfidence levels in grey. Student M9 left the room after filling in the CSE2 form with two maximum scores. Student E8 was a borderline case with respect to overconfidence.
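The difference columns in Table 3 are simply AvCSE minus ATP for the corresponding task. The sketch below shows how such gaps can be computed and flagged; the positive-gap rule and its threshold are assumptions for illustration and are not the article’s stated classification criteria.

```python
# Illustrative sketch only: computing the AvCSE - ATP gaps reported in Table 3
# and flagging candidate overconfidence. The positive-gap rule and the
# `threshold` parameter below are assumptions made for illustration; the
# article's own classification (black vs. grey entries) follows the criteria
# stated in its method section.

def confidence_gap(av_cse, atp):
    """Return the self-efficacy minus actual-performance gap (AvCSE - ATP)."""
    return round(av_cse - atp, 2)

def looks_overconfident(av_cse, atp, threshold=0.0):
    """Flag a measurement as overconfident when the gap exceeds `threshold`."""
    return confidence_gap(av_cse, atp) > threshold

# Example with values taken from Table 3 (student M2, GCTRL, first task):
print(confidence_gap(9.07, 5.50))       # 3.57, matching the AvCSE1 - ATP1 column
print(looks_overconfident(9.07, 5.50))  # True under this illustrative rule
```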