Article

Assessing Computational Thinking in Engineering and Computer Science Students: A Multi-Method Approach

by Farman Ali Pirzado 1,*,†, Awais Ahmed 2,†, Sadam Hussain 1, Gerardo Ibarra-Vázquez 1 and Hugo Terashima-Marin 1

1 Tecnológico de Monterrey, School of Engineering and Sciences, Monterrey 64849, Mexico
2 School of Computer Science, China West Normal University, Nanchong 637009, China
* Author to whom correspondence should be addressed.
† These authors contributed equally to this work.
Educ. Sci. 2025, 15(3), 344; https://doi.org/10.3390/educsci15030344
Submission received: 10 December 2024 / Revised: 3 January 2025 / Accepted: 11 February 2025 / Published: 11 March 2025

Abstract:
The rapid integration of computational thinking (CT) into STEM education highlights its importance as a critical skill for problem-solving in the digital age, equipping students with the cognitive tools needed to address complex challenges systematically. This study evaluates CT skills among Engineering and Computer Science students using a multi-method approach that combines quantitative methods (CTT scores and CTS responses) with qualitative methods (thematic analysis of open-ended questions), integrating objective assessments, self-perception scales, and qualitative insights. The Computational Thinking Test (CTT) measures proficiency in core CT sub-competencies (abstraction, decomposition, algorithmic thinking, and pattern recognition) through objective test items. The Computational Thinking Scale (CTS) captures students’ perceived CT skills, while open-ended questions elicit perspectives on the practical applications of CT in academic and professional contexts. Data from 196 students across two Mexican universities were analyzed through quantitative and thematic methods. The results show that students excel in pattern recognition and abstraction but face challenges in decomposition and algorithmic thinking. Cross-sectional analyses comparing the CTT, the CTS, and the open-ended responses across demographic groups (e.g., age, gender, academic discipline) show clear differences by age, gender, and discipline, with Computer Science students performing better than Engineering students. These findings highlight the importance of CT in preparing students for modern challenges and provide a foundation for improving teaching methods and integrating these skills into university programs.

1. Introduction

Computational thinking (CT) is increasingly recognized as a vital skill set within STEM (Science, Technology, Engineering, and Mathematics) education, providing students with the cognitive tools necessary for problem-solving in a technology-driven world. In Engineering and computing education, CT is integral to understanding complex systems, formulating algorithms, and addressing real-world challenges systematically. Despite its growing importance, measuring and assessing CT skills remain complex and often subjective, particularly when examining students’ self-perceptions versus objectively measured abilities in STEM (Liu et al., 2023; Muñoz et al., 2023; Syafe’i et al., 2023). It is believed that, regardless of their chosen field, students must equip themselves with essential problem-solving skills for the future, including abstraction, decomposition, and algorithmic thinking.
Undoubtedly, CT is a recent problem-solving skill that offers a multifaceted approach, integrating mathematical, Engineering, and scientific thinking. As a vital foundation for fostering innovative problem-solving and creative thinking abilities, CT empowers individuals to break down complex problems, recognize patterns, and devise systematic solutions. By developing these competencies, students enhance their ability to address challenges within computing and become better prepared to navigate the evolving technological landscape across various fields (Hunsaker, 2020; Liang et al., 2013; Yeni et al., 2024). The term “computational thinking” was introduced by (Wing, 2008) to highlight that thinking like a computer scientist can benefit everyone, not just Computer Science majors. It is defined as “the thought processes involved in formulating problems and designing their solutions so that these solutions can be effectively executed by an information processing agent including computers, robots, and humans” (Wing, 2009).
Currently, CT is an emerging discipline that emphasizes structured and algorithmic approaches to problem-solving. It involves breaking down complex problems into smaller, more manageable components, identifying patterns and relationships, and developing abstract models to address and solve these problems computationally. Recognized as a key multidisciplinary skill of the digital age (Czerkawski & Lyman, 2015), CT has gained prominence as technology increasingly influences all aspects of life, driving the need for professionals capable of tackling complex challenges using computational methods. Researchers have explored CT as a framework of concepts and thought processes that enable problem formulation and solution development across various disciplines. It is often described as a cognitive strategy akin to thinking like a computer scientist, even for those outside the field (Riley & Hunt, 2014). Moreover, CT is closely linked to competencies such as problem-solving, system modeling, and problem formulation, highlighting its relevance in fostering critical and analytical thinking across diverse domains (Denning & Tedre, 2019; A. Sullivan & Bers, 2019; Wing, 2008).
In education, CT serves as a tool to enhance critical thinking and technical proficiency. Applying structured problem-solving processes empowers students to create computational artifacts like computer programs and system designs. Educators and policymakers increasingly prioritize the integration of CT into curricula to prepare students for the demands of a rapidly evolving technological landscape (Agbo et al., 2019; Tekdal, 2021). Moreover, CT nurtures essential skills such as systematic problem-solving, data analysis, and creative thinking capabilities highly valued in programming, mathematics, and other fields (Angevine et al., 2017; Fields et al., 2021; Tang et al., 2020). These connections underscore CT’s critical role in equipping students with the competencies to excel in a technology-driven world. In addition, integrating CT into education systems is essential for equipping students with future-ready skills (Grover & Pea, 2013). As a result, CT has been incorporated into the Next Generation Science Standards (NGSS) in the United States and embedded within STEM curricula at the K-12 level (Tang et al., 2020). Although many countries, including Finland, Norway, South Korea, Israel, Poland, New Zealand, Portugal, and Estonia, have integrated CT into their educational programs (Junpho et al., 2022; Karalar & Alpaslan, 2021; Machuqueiro & Piedade, 2024; Tikva & Tambouris, 2021), there are only a few studies that target CT analysis among students in Latin America, and most existing research focuses on primary education (Castro et al., 2021; Paucar-Curasma et al., 2022; Ríos Félix et al., 2020). This research gap necessitates more research to establish a solid foundation for studying CT in higher education in Mexico.
Recent studies on CT skill assessment often use either self-assessment tools or objective tests using gamification across primary, college and vocational levels (Chen et al., 2023; Hermans et al., 2024; National Research Council et al., 2011; Relkin et al., 2020; Wilensky & Reisman, 2006). A study by (El-Hamamsy et al., 2023) has introduced the competent Computational Thinking Test (CTT), validated for longitudinal studies among primary students. In contrast, (Ghosh et al., 2024) developed ACE, a tool assessing higher cognitive levels in visual programming domains. Additionally, (Zapata-Cáceres et al., 2020) designed and validated a beginner-focused CT test, emphasizing its applicability and content validity. Recent contributions like (Relkin et al., 2020) and (Clarke-Midura et al., 2021) have focused on unplugged assessments targeting early childhood and kindergarten education, respectively. Thus, a gap exists between students’ self-perceptions of their CT abilities and their objective performance, particularly in Engineering and computing education at higher levels.
Therefore, this study aims to address the above gaps using a multi-method approach to assess self-perceived and objectively assessed CT skills. The research integrates the Computational Thinking Scale (CTS) adopted from (Tsai et al., 2021), which measures students’ self-assessments, with the Computational Thinking Test (CTT), an objective measure of their CT competencies. In addition, it analyzes students’ perception of CT competencies in their academic and career lives through qualitative assessment. Through this triple assessment, this study explores the alignment between self-perception and actual performance while examining how demographic factors such as program of study, gender, and prior experience influence students’ CT abilities. Through this method, the research addresses a significant gap in assessing CT skills by adopting a multi-dimensional assessment approach, integrating the CTT, the CTS, and qualitative insights in Latin America.
This study provides a critical foundation for understanding CT in Latin America, which future intervention-based research can build upon to create targeted educational strategies. Overall, this study addresses three key research questions: How aligned are students’ self-perceived CT skills (via CTS) with their objectively assessed CT skills (via CTT)? How do demographic factors (e.g., program of study, gender, and age) influence these skills? Lastly, how do students perceive the relevance of CT skills to their academic experiences and future careers? These questions aim to uncover critical insights into the relationship between perception and performance, demographic influences, and the practical value of CT in STEM education.

Contributions

This study contributes in multiple ways. These contributions provide valuable insights for STEM educators and curriculum developers and pave the way for a more effective integration of CT skills into Engineering and computing education.
  • Design and Validation of a CT Assessment Instrument: An instrument was designed and validated to assess undergraduate students’ CT skills and perceptions, specifically within Engineering and computing education contexts. Unlike prior CT-focused tools that primarily target pre-college education, this instrument addresses college-level students’ unique requirements and skills in STEM programs.
  • Cross-Verification Methodology: This study presents valuable insights by comparing students’ self-perceived CT skills with their objectively assessed CT test and qualitative insights obtained via open-ended questions.
  • Exploration of Demographic and Academic Influences: In addition, it examines how factors such as program of study, gender, and age influence both self-perceived and objectively assessed CT skills, contributing to an understanding of diversity in CT competencies.
  • Qualitative Insights: It also incorporates qualitative insights to explore students’ perceptions of CT’s importance and applicability in their academic experiences and future Engineering careers, bridging theoretical knowledge and practical applications.
  • Foundation for Future Research: It lays a foundation for future investigations into the factors influencing CT development and the design of interventions to enhance CT education across diverse populations in STEM.
The rest of this article is organized as follows: Section 2 presents a detailed theoretical framework of CT and the tools and methods deployed to assess CT in the literature. Section 3 provides an overview of each step of the methodology, including the experimental setting, participant details, instrument design, and data analysis. Section 4 then provides a detailed analysis of the results of the CTT, CTS, and qualitative data, and Section 5 contains a discussion of the work. Section 6 outlines the practical implications of the findings, Section 7 presents this work’s limitations and future work, and lastly, the conclusions are given in Section 8.

2. Theoretical Framework

2.1. Computational Thinking

CT was first introduced by (Wing, 2006) as a problem-solving process involving key skills such as abstraction, decomposition, algorithmic thinking, and pattern recognition. The concept of “computational thinking” emphasizes that the mindset of a computer scientist can benefit everyone, not just those majoring in Computer Science. In 2010, a study defined computational thinking as the cognitive processes involved in formulating problems and developing solutions in a way that they can be effectively represented and executed by an information-processing agent (Cuny et al., 2010). It can also be understood as a broad concept encompassing the cognitive skills required to perform computational tasks (Doleck et al., 2017). It emphasizes that CT spans various disciplines and is not limited to programming, making it an essential skill for students, particularly in STEM fields (Wing, 2008). Wing later revised the definition to emphasize that CT is a cognitive process. It is now defined as a means of framing problems and their solutions so that an information-processing agent can efficiently handle them (Wing, 2011).
The history of CT can be traced back to the mid-20th century alongside the development of Computer Science (Ogegbo & Ramnarain, 2022), gaining broader recognition by the early 21st century as a core educational skill. CT includes concepts like data collection, analysis, and representation (Barr & Stephenson, 2011; Tabesh, 2017), and has been formalized by organizations such as the International Society for Technology in Education (ISTE) and Computer Science Teachers Association (CSTA). Their definition highlights CT as a methodology for solving problems using computational resources, applicable across all disciplines (CSTA, 2016; CSTA & ISTE, 2011).
Earlier research emphasizes the integration of CT in education at all levels, particularly higher education, where it improves both students’ understanding and attitudes towards computing (Hambrusch et al., 2009; National Research Council et al., 2011; Wilensky & Reisman, 2006; Yadav et al., 2014). Research further suggests that CT sub-competencies, such as problem decomposition and pattern recognition, are crucial for structured thinking and everyday problem-solving (Andrian & Hikmawan, 2021; Kazimoglu et al., 2012; Wing, 2006). While CT is well established in K-12 education, its integration in higher education, particularly outside STEM fields, remains underexplored (Czerkawski & Lyman, 2015). Many educators, particularly experts in educational technology, have highlighted the critical importance of CT as a foundational skill for thriving in the 21st century (Mishra & Yadav, 2013; Voogt et al., 2015).
A review study by (Lu et al., 2022) examines 33 studies on computational thinking (CT) assessments in higher education. The findings indicate that most assessments focus on undergraduate computing students and pre-service teachers, employing in-class interventions and utilizing methods such as programming artifacts and interviews. These tools primarily evaluate skills in algorithmic thinking, problem-solving, and abstraction, providing valuable insights for educators in selecting appropriate CT assessment instruments. Additionally, several studies have explored CT within the context of creative programming activities using Scratch with undergraduate students. This analysis compares computational thinking scores derived from an automated tool against human assessments of imaginative programming projects (Dehbozorgi & Roopaei, 2024; De la Hoz Serrano et al., 2024; Romero et al., 2017; Zhang et al., 2024).
In conclusion, CT has evolved from a narrow problem-solving skill into a crucial cognitive process applicable across disciplines, and its integration into STEM education is now well established. The development of various assessment tools highlights the importance of CT sub-competencies in fostering problem-solving skills, and emphasizing CT as a foundational skill will prepare students for the challenges of the 21st century.

2.2. Computational Thinking Assessment Tools and Methods

Various methodologies have been used to assess and enhance CT skills across educational contexts, combining quantitative tools like questionnaires and qualitative methods such as case studies, workshops, and games (Allsop, 2019; Barcelos et al., 2018; Cheng et al., 2021). The lack of a universal CT definition has led to diverse assessment tools, categorized into seven types by (Román-González et al., 2019). While many tools are domain-specific, others, including games and simulations, have been used across education levels. For example, Kazimoglu et al. (2012) used a simulation game to assess CT, while Malva et al. (2020) used an adaptive game with observations and interviews to gather qualitative insights into CT skill development.
Furthermore, a few studies have demonstrated diverse methodologies to assess and promote CT skills across educational contexts. For instance, in (Lemay et al., 2021), they used a questionnaire designed in Google Form for data collection and WarpPLS software version 7. A quasi-experimental mixed-method approach was used to analyze the relationship between CT and academic performance. The development of the PRADA model (Dong et al., 2019) showcased the integration of CT into K-12 education through workshops, case studies, and surveys, providing a multifaceted approach to data collection and analysis. In mathematics education, Kallia et al. (2021) employed questionnaires and statistical analysis to characterize CT.
Various assessment tools have been developed to evaluate CT skills, such as TechCheck, designed for young children (Relkin & Bers, 2021), and KIBO, a robotic kit that teaches foundational CT concepts through hands-on tasks (A. A. Sullivan et al., 2017). Other tools like the Fairy assessment (Werner et al., 2012) and CHoiCO (Kynigos & Grizioti, 2020) explore gamification for teaching and evaluating CT skills. Additionally, Bebras tasks assess core CT competencies without prior programming knowledge (Dolgopolovas et al., 2016).
In contrast, the researchers combined HTML5-based educational games with surveys and qualitative statistical analyses to evaluate the impact of game-based learning on CT development (Soboleva et al., 2021). Some studies have also effectively employed mixed-method approaches, combining quantitative and qualitative techniques, such as interviews and focus groups, to explore CT skills and their integration into education. For instance, efforts by (Valenzuela, 2019) focused on integrating CT into K-12 Computer Science education, highlighting its potential as a foundational component for students. Another study, Günbatar (2019) analyzed CT skills among in-service and pre-service teachers through a mixed-method approach, combining CT scales and group discussions. Likewise, Ma et al. (2021) employed Scratch programming, CT scales, and interviews to enhance primary students’ CT and problem-solving skills, demonstrating the value of mixed methodologies in educational research.
Several efforts have been made to develop scales for measuring CT skills. Tools like the Self-Efficacy Perception Scale for CT Skills by (Gülbahar et al., 2019) focus on specific domains like programming. In contrast, scales by (Tsai et al., 2021) and (Allsop, 2019) emphasize assessing CT across multiple fields. In parallel, Gouws et al. (2013) developed a test for higher-education CT performance, categorizing skills into six areas, but lacked reliability and validity evidence. Many CT assessment tools are designed for early education, with few addressing adult learners. For example, Nelson et al. (2015) created a 13-item test for evaluating CT and Computer Science knowledge but lacked empirical analysis and validity evidence.
The Computational Thinking Levels Scale by (Korkmaz et al., 2015, 2017) assesses algorithmic thinking, problem-solving, and other skills, but overlooks essential CT components like abstraction and decomposition (Selby, 2013). The Self-Efficacy Perception Scale for Computational Thinking Skills by (Gülbahar et al., 2019) is domain-specific, focusing on programming. While many CT tools are domain-specific, the CTSCLE by (Tsai et al., 2021) provides a domain-general approach. A study by (Karalar & Alpaslan, 2021) validated the Turkish version of the CTSCLE, showing a positive correlation between programming experience and higher CT skills. Our methodology adopts a mixed-method approach, combining the CTT and CTSCLE to assess students’ CT competencies and perceptions, aiming to enhance CT education and inform teaching practices across disciplines. The competencies used in this study are defined as follows:
  • Abstraction simplifies complex problems by focusing on essential details and ignoring irrelevant information. It involves identifying key characteristics and patterns to create generalized solutions that can be applied to various situations (Mueller et al., 2017).
  • Algorithmic thinking is the ability to design clear, step-by-step instructions to solve a problem, focusing on efficient methods and structured solutions (Mueller et al., 2017).
  • Decomposition involves breaking complex problems into smaller, more manageable parts, simplifying problem-solving by tackling each subproblem individually and making the overall task easier to address (Mueller et al., 2017).
  • Pattern recognition is the identification of regularities, similarities, or patterns within problems (Barcelos et al., 2018).
  • Generalization is the ability to take specific solutions or patterns and apply them to broader, often new, problems (Selby, 2013).
  • Evaluation involves assessing the effectiveness of a solution by reviewing its adequacy in solving a problem, identifying flaws, and determining necessary improvements (Wing, 2006).

3. Materials and Methods

The methodology for assessing CT was designed for Engineering and Computer Science students and consists of four phases. The workflow presented in Figure 1 is structured into four stages to ensure precision and relevance:
  • The Planning and Design Phase: It begins by defining research objectives tailored to the Engineering and Computer Science context. It includes the development of a specialized instrument comprising the CTT to assess CT sub-competencies using multiple-choice questions and the CTS for self-reported perceptions and qualitative insights.
  • The Data Collection Phase: It involves recruiting participants exclusively from Engineering and Computer Science programs, administering the instrument, and collecting responses.
  • The Data Processing and Analysis Phase: It applies quantitative methods to evaluate CTT scores and CTS perceptions, alongside a thematic analysis of qualitative responses.
  • Result Interpretation and Reporting Phase: The results are categorized to emphasize strengths and weaknesses in CT sub-competencies, the cross-verification and triangulation of results from the three sections, their relevance to academic and professional problem-solving, and the specific challenges faced by students in these disciplines.

3.1. Instrument

The designed instrument consists of three sub-instruments: the CTT, the CTS, and open-ended questions. A collaborative team, including two specialists in CT and psychometric evaluation and two postdoctoral researchers from the Future of Education department, ensured that the instrument aligned with contemporary educational frameworks. Their combined expertise was instrumental in refining all three sections, and they contributed significantly to the design of the test items, ensuring alignment with established theoretical frameworks such as those of (Wing, 2006) and (Brennan & Resnick, 2012).
This study adapted the CTS from (Tsai et al., 2021) for use in the context of Computing and Engineering education. The adaptation process involved multiple steps to ensure linguistic and cultural appropriateness. First, native Spanish-speaking experts on the team translated the scale from English to Spanish. A back-translation process was conducted to confirm the accuracy and consistency of the translated items with the original English version. Discrepancies identified during this process were resolved through discussion among the research team and external reviewers with expertise in computational thinking and educational measurement. Second, cultural adjustments were made to ensure the scale’s relevance to the Mexican educational context. This involved revising specific terminology and examples that might not align with local educational practices and norms. Third, the adapted scale underwent a content validation process. The same team of experts, i.e., two postdoctoral researchers and two specialists, reviewed the scale to evaluate the clarity, relevance, and representativeness of the measured constructs. Their feedback informed minor revisions to enhance the comprehensibility and appropriateness of the items.
In contrast, CTT and open-ended questions were primarily designed for this study, and contextual adaptation ensured relevance to the Mexican higher education environment. Cultural and linguistic considerations influenced the phrasing of items and examples, making them more relatable to the target student population. All items in both sections were presented in Spanish, and the test instructions were clarified. Further details of the instrument modules will be discussed in upcoming subsections.

3.1.1. Computational Thinking Test (CTT)

The instrument’s first section is the CTT, which comprises sixteen multiple-choice questions, four for each of four core CT skills: pattern recognition, decomposition, abstraction, and algorithmic thinking. These components align with widely recognized CT frameworks, such as those proposed by (Brennan & Resnick, 2012) and (Wing, 2006), which identify these skills as foundational to CT development. Sample CTT questions are given in Table 1.

3.1.2. Computational Thinking Scale (CTS)

This subsection of the instrument focuses on adopting a scale that is consistent with established methodologies in the literature, such as the work of (Korkmaz et al., 2015, 2017), which emphasizes the importance of measuring CT through clearly defined and psychometrically validated sub-skills. In the designed instrument’s CTS section, we adopted the CTSCLE scale (Tsai et al., 2021).
It consists of 19 items distributed across five dimensions: abstraction, decomposition, algorithmic thinking, evaluation, and generalization. Each dimension contains four items, except for decomposition, which includes three. The CTS is a self-report questionnaire rated on a 5-point Likert scale and is used to assess how students perceive their skills in each CT sub-competency. An example CTS item for each factor is given in Table 2.
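To make the scale’s structure concrete, the following minimal Python sketch shows how the 19 Likert responses could be aggregated into the five subscale scores; the item-to-factor ordering and the response values are hypothetical illustrations, not the actual instrument layout or any student’s data.

```python
import numpy as np

# Factor structure of the adapted CTS: 19 items across 5 dimensions (item counts from the text).
factor_items = {
    "abstraction": 4,
    "decomposition": 3,
    "algorithmic_thinking": 4,
    "evaluation": 4,
    "generalization": 4,
}

# Hypothetical 5-point Likert responses of one student, listed in the factor order above.
responses = np.array([4, 5, 3, 4,  4, 3, 4,  5, 4, 4, 3,  4, 4, 5, 4,  3, 4, 4, 4])

start = 0
for factor, n_items in factor_items.items():
    subscale = responses[start:start + n_items]
    print(factor, round(subscale.mean(), 2))  # perceived-skill score per dimension
    start += n_items
```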

3.1.3. Open-Ended Questions

At the end of the instrument, open-ended questions were asked to collect opinions about how students perceived applying particular CT sub-competencies and their applications, how these skills can help them solve different given problems, and what students perceived about using these competencies in their future careers. Examples are shown in Table 3.

3.2. Participants

This study’s participants were undergraduate students from two institutions, one public and one private. These students were enrolled in two academic programs, Computer Science and Engineering, and were categorized by age, program, and gender. The study followed ethical guidelines established by Tecnológico de Monterrey. All participants were informed about the research’s purpose and procedures and provided informed consent before participation.
As shown in Table 4, from the private institute, the Computer Science program included 43 participants, 26 boys and 17 girls. The Engineering program at this institution had 41 participants, 23 boys and 18 girls. In summary, this institute reports 84 participants in total, where the first group (under 20) reports 40, while the second group (20 and above) reports the rest of the population of 44.
Similarly, from the public institute, the Computer Science program had 62 participants, 36 boys and 26 girls. The Engineering program had 50 participants, 25 boys and 25 girls. In summary, this institute reports 112 participants in total, where the first group (under 20) reports 58, while the second group (20 and above) reports the rest of the population of 54. This cross-attribute demographic distribution reflects the general trends in enrollment in Computer Science and Engineering within private and public institutions, respectively.

3.3. Data Analysis

Data analysis was performed differently for each sub-instrument to ensure a comprehensive evaluation. For the first sub-instrument, the CTT, we assessed students’ performance by comparing their achievements across various demographics, including age, gender, program, and institution. For the second part, the adopted CTSCLE was analyzed following the standard steps suggested by (Çapık et al., 2018): we first performed a factor analysis of the CTS to verify its reliability and validity, then conducted a quantitative analysis of students’ self-reported scores, and finally compared scores by program, age, and gender. As the last part of the instrument contained open-ended questions, a team of experts conducted qualitative data analysis using thematic analysis techniques, ensuring a detailed exploration of students’ perceptions and experiences of CT.

4. Results

The results section presents a detailed data analysis, highlighting key findings across all study components of the deployed instrument. Performance on the CTT and comparison by demographic factors such as age, gender, program, and institution are presented in Section 4.1. The CTS results following reliability testing, factor analysis, and quantitative comparisons based on program and gender are discussed in detail in Section 4.2. Additionally, the results obtained via thematic analysis of qualitative data are discussed in Section 4.3.

4.1. Quantitative Analysis of CTT

To evaluate students’ performance on the CTT across key factors such as pattern recognition, decomposition, abstraction, and algorithmic thinking, we utilized Cronbach’s alpha as a measure of internal consistency, a widely recognized method for assessing the reliability of scales and questionnaires in related studies (Field, 2013; Piedade & Dorotea, 2023). A higher Cronbach’s alpha, typically above 0.7, indicates strong internal consistency (Buyukozturk, 2002; Field, 2013; Korkmaz & Bai, 2019). In this study, we conducted both descriptive and reliability analyses to assess students’ performance and the instrument’s reliability.
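As a rough illustration of the reliability analysis, the sketch below computes Cronbach’s alpha for a small matrix of item scores in Python; the score matrix and variable names are invented for illustration and are not the study’s data.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents x n_items) matrix of item scores."""
    k = items.shape[1]                         # number of items
    item_vars = items.var(axis=0, ddof=1)      # sample variance of each item
    total_var = items.sum(axis=1).var(ddof=1)  # variance of the summed scale
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical 0/1 scores on four items of one sub-competency (one row per student).
scores = np.array([
    [1, 1, 1, 1],
    [1, 1, 0, 1],
    [0, 0, 0, 0],
    [1, 0, 1, 1],
    [0, 0, 0, 1],
    [1, 1, 1, 1],
])
print(round(cronbach_alpha(scores), 2))  # ~0.82 for this toy matrix
```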
In addition, Item Response Theory (IRT) was employed to evaluate the CTT items using Python-based tools. A two-parameter logistic (2PL) model was fitted to estimate item parameters, including difficulty and discrimination, using the pyirt library in Python. The discrimination coefficients ranged from 0.85 to 1.10, with an average close to 1.0, indicating that the items effectively differentiate between individuals with varying levels of computational thinking ability. Difficulty levels spanned from −2.5 to 2.3, ensuring that the test captures a broad spectrum of abilities. Model fit was assessed using metrics such as RMSEA (0.04), CFI (0.96), and SRMR (0.03), all of which indicated a good fit.
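For readers unfamiliar with the 2PL model, the following minimal Python sketch evaluates its item characteristic function; it is not the pyirt fitting pipeline used in the study, and the parameter values are merely chosen to fall within the ranges reported above.

```python
import numpy as np

def p_correct(theta: float, a: float, b: float) -> float:
    """2PL item characteristic curve: probability of answering an item correctly
    given ability theta, discrimination a, and difficulty b."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

# Illustrative parameters within the reported ranges for the CTT items
# (discrimination ~0.85-1.10, difficulty ~-2.5 to 2.3); not actual estimates.
for theta in (-2.0, 0.0, 2.0):
    print(theta, round(p_correct(theta, a=1.0, b=0.5), 2))
```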
Firstly, an overall quantitative analysis of the CTT was performed to check its validity and reliability and to analyze overall student scores; a group-wise performance analysis was then conducted, with results presented in Table 5. The overall analysis revealed varied performance and reliability across factors. Students excelled in pattern recognition (μ = 0.87, σ = 0.32, Cronbach’s α = 0.75), demonstrating robust and consistent abilities in analyzing and resolving computational problems. In contrast, performance on decomposition remained moderate (μ = 0.53, σ = 0.40, Cronbach’s α = 0.72), suggesting challenges in breaking down complex problems into simpler components. For abstraction (μ = 0.72, σ = 0.47, Cronbach’s α = 0.78) and algorithmic thinking (μ = 0.64, σ = 0.32, Cronbach’s α = 0.74), students demonstrated fair performance, though abstraction exhibited higher variability, reflecting inconsistency in identifying patterns or relevant details. The overall instrument (μ = 0.69, σ = 0.37, Cronbach’s α = 0.77) showed good reliability, confirming its effectiveness in measuring CT constructs.
Table 5 presents comparisons of mean scores and standard deviations for CT competencies across various demographic categories, focusing on abstraction, algorithmic thinking, decomposition, and pattern recognition. The analysis reveals that participants under 20 exhibit lower overall performance, although they demonstrate notable strengths in pattern recognition. In contrast, participants aged 20 and above show superior proficiency across all assessed competencies. Gender differences indicate that boys outperform girls in all areas; however, the observed differences are relatively minor. Institutional analysis suggests that students from public institutions achieve higher scores, particularly in algorithmic thinking and pattern recognition, than students from private institutions. Furthermore, an examination of academic programs reveals that Computer Science students excel in abstraction and pattern recognition, while Engineering students tend to score lower across all competencies assessed.

4.2. Quantitative Analysis of CTS

This section analyzes students’ CTS data to assess their perceived CT skills across demographic factors, including gender, age, and academic program. Statistical analyses, including means, standard deviations, independent-samples t-tests, and effect sizes (Cohen’s d), were used to determine whether perceived CT skills differ significantly between groups. This study adopted the CT scale from (Tsai et al., 2021) to measure and assess CT skills; the scale can be adapted to assess domain-specific CT skills in STEM fields by adjusting its terminology and is suitable for secondary and higher-level participants. It has five factors: abstraction (four items), decomposition (three items), algorithmic thinking (four items), evaluation (four items), and generalization (four items).
Multiple studies on CT analysis use t-statistics and p-values: the t-statistic measures how far the difference between two sample means is from zero in units of its standard error, and the associated t-test determines whether the means of two groups are statistically different (De la Hoz Serrano et al., 2024; Hsu et al., 2022). In addition, Cohen’s d is a measure of effect size that quantifies the difference between two groups in standard deviation units (Karalar & Alpaslan, 2021). It is used to assess the practical significance of a difference, i.e., how large the effect is, in a way that is independent of sample size.
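A minimal Python sketch of these two statistics, assuming hypothetical subscale scores for two groups, might look as follows; it uses SciPy’s independent-samples t-test and a pooled-variance Cohen’s d.

```python
import numpy as np
from scipy import stats

def cohens_d(x: np.ndarray, y: np.ndarray) -> float:
    """Cohen's d using a pooled standard deviation."""
    nx, ny = len(x), len(y)
    pooled_var = ((nx - 1) * x.var(ddof=1) + (ny - 1) * y.var(ddof=1)) / (nx + ny - 2)
    return (x.mean() - y.mean()) / np.sqrt(pooled_var)

# Hypothetical per-student subscale means for two groups (e.g., two programs).
group_a = np.array([3.8, 4.1, 3.5, 4.4, 3.9, 4.0])
group_b = np.array([3.4, 3.7, 3.2, 3.9, 3.5, 3.6])

t_stat, p_value = stats.ttest_ind(group_a, group_b)  # independent-samples t-test
print(round(t_stat, 2), round(p_value, 3), round(cohens_d(group_a, group_b), 2))
```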

4.2.1. Reliability and Validity of the Adopted CTSCLE

A Confirmatory Factor Analysis (CFA) was conducted following established guidelines using the SmartPLS 3 software. CFA is a statistical technique employed to test the construct validity of a scale and assess how well the data fit the hypothesized model. This approach is appropriate for verifying predefined factor structures and ensuring alignment between the theoretical framework and the observed data. Several parameters were used to assess the reliability and validity of the CTS. Cronbach’s alpha was calculated to evaluate internal reliability, reflecting the interrelatedness of test items (Hair et al., 2021; Rosli & Saleh, 2023). Composite Reliability (CR) and Average Variance Extracted (AVE) were also computed for each factor, with factor loadings ≥0.7, AVE > 0.5, and both Cronbach’s alpha and CR ≥ 0.7 deemed acceptable thresholds (Hair et al., 2017, 2011). The results of CFA presented in Table 6 collectively confirm the reliability and validity of the CTS.
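As an illustration of how these reliability indices relate to factor loadings, the sketch below computes CR and AVE for a single factor from standardized loadings in Python; the loading values are hypothetical and do not correspond to Table 6.

```python
import numpy as np

def composite_reliability(loadings: np.ndarray) -> float:
    """CR = (sum of loadings)^2 / ((sum of loadings)^2 + sum of error variances)."""
    s = loadings.sum()
    error = (1 - loadings ** 2).sum()
    return s ** 2 / (s ** 2 + error)

def average_variance_extracted(loadings: np.ndarray) -> float:
    """AVE = mean of the squared standardized loadings."""
    return (loadings ** 2).mean()

# Hypothetical standardized loadings for a four-item factor (e.g., abstraction).
loadings = np.array([0.78, 0.81, 0.74, 0.80])
print(round(composite_reliability(loadings), 2),   # ~0.86, above the 0.7 threshold
      round(average_variance_extracted(loadings), 2))  # ~0.61, above the 0.5 threshold
```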
The results provided valuable insights into the adequacy of the CTS factor structure and its suitability for measuring CT skills. The CFA indicated a chi-squared value of χ²(142) = 615.09, with a chi-squared/df ratio of χ²/df = 2.00, which is within the acceptable range (typically below 3.00), suggesting a reasonable model fit (Schumacker & Lomax, 2016). Additionally, the Standardized Root Mean Square Residual (SRMR) was 0.076, indicating an acceptable fit, as values below 0.08 indicate a good fit (Byrne, 2013). The Normed Fit Index (NFI) was 0.935, which exceeds the commonly accepted threshold of 0.90, further supporting the model’s goodness of fit (Hu & Bentler, 1999). These results suggest that the CTS factor structure is robust for assessing CT skills.

4.2.2. Gender-Wise Comparison of CT Skills

Table 7 compares perceived CT skills by gender across all skill dimensions. Scores were broadly similar: boys and girls scored comparably in abstraction (p = 0.37) and the other factors, with no statistically significant differences observed, although a minor difference in decomposition approached significance. Girls scored slightly higher than boys in abstraction and decomposition, while boys slightly outperformed girls in generalization. Because all p-values exceed the conventional significance threshold of 0.05, the t-tests indicate no statistically significant difference between boys’ and girls’ perceived CT skills for any factor (abstraction, decomposition, algorithmic thinking, generalization, or evaluation).

4.2.3. Age-Wise Comparison of Students’ Perceived CT Skills

Table 8 analyzes CT skills across two age groups: those under 20 and those 20 and above. The analysis showed minimal differences in perceived CT skills between age groups, with no significant differences in most factors. However, older students (20 and above) scored significantly higher in decomposition (μ = 0.92, σ = 0.87, t = 2.54, p = 0.01), indicating they may have better problem-solving abilities in that skill, potentially due to increased experience. Other factors, such as abstraction and algorithmic thinking, showed no significant age-related differences. Overall, age did not substantially impact performance in CT skills, with only slight variations in decomposition noted between the two age groups.

4.2.4. Program-Wise Comparison of Their Perceived CT Skill

The analysis in Table 9 reveals differences in CT skills between Computer Science and Engineering students. Computer Science students demonstrated better skills in abstraction, with a higher mean score than Engineering students, and significant differences were found between the two programs in abstraction (p = 0.02) and decomposition (p = 0.001), with Computer Science students consistently outperforming their Engineering peers. Other CT factors, such as algorithmic thinking, generalization, and evaluation, showed no significant variation by program, although Computer Science students had slightly higher scores in these areas as well. These findings suggest that Computer Science students have a stronger foundation in specific CT sub-skills, particularly abstraction and decomposition, than their Engineering peers.

4.3. Qualitative Analysis

The qualitative data consist of students’ opinions, with 196 student records analyzed using the content analysis method (Moretti et al., 2011; Yilmaz & Yilmaz, 2023). This approach facilitated the identification of significant themes, patterns, and insights within the data, allowing for a thorough investigation and understanding of the students’ experiences and viewpoints. Four researchers were involved in this analysis to extract sub-themes from data, focusing on the strategies utilized to solve computational problems and the cognitive processes involved in applying CT skills to tackle the given issues. We uncovered a spectrum of insights reflecting both positive and negative aspects. The word cloud from qualitative data contains students’ opinions about CT competencies, as shown in Figure 2.
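A word cloud such as the one in Figure 2 can be produced from the open-ended responses with a few lines of Python using the third-party wordcloud package; the sketch below uses placeholder response strings rather than the actual corpus of 196 answers.

```python
# Requires the third-party packages `wordcloud` and `matplotlib`.
from wordcloud import WordCloud
import matplotlib.pyplot as plt

# Placeholder open-ended responses; the real corpus is the students' full answers.
responses = [
    "I broke the problem into smaller parts to manage it better",
    "I imagined the structure in my mind before coding",
    "These skills help me design efficient algorithms",
]

cloud = WordCloud(width=800, height=400, background_color="white").generate(" ".join(responses))
plt.imshow(cloud, interpolation="bilinear")
plt.axis("off")
plt.show()
```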
Through analyzing students’ responses, identifying key themes and sub-themes revealed deeper insights into their experiences about CT. From the analysis, multiple prominent themes emerged, such as strategies for problem-solving, challenges in applying CT skills, and the perceived value of CT in real-life and academic contexts. For example, under problem-solving strategies, several students described using decomposition (“I broke the problem into smaller parts to manage it better”) and visualization techniques (“I imagined the structure in my mind before coding”). However, challenges such as difficulty with algorithmic thinking and terminology gaps were also highlighted, with one student noting, “I often struggle to understand the steps required for effective algorithms.” Beyond these challenges, students emphasized the relevance of CT skills in their academic and career pursuits, with comments like, “These skills are essential for creating efficient algorithms and solving real-world problems.” This deeper exploration not only enriches the understanding of CT competencies but also uncovers areas where targeted educational interventions could improve student outcomes.
The students employed different strategies to tackle pattern recognition problems using CT techniques. Some explicitly used logical and systematic reasoning, while others took a decomposition approach, breaking a problem down into smaller tasks. A few students were also uncertain about the terminology related to their applied strategy, indicating a potential gap in understanding CT competencies. Notably, one student highlighted using mental visualization to improve problem-solving. The top 20 perceptions, detailed in Table 10, were selected based on the key themes and sub-themes that students reported as applications of CT and their opinions about the importance and usefulness of CT skills in their academic careers. These perceptions also align with existing studies in the literature (Angevine et al., 2017; Lye & Koh, 2014; Yu et al., 2019).
According to Table 10, students often reported that CT skills enhance their problem-solving abilities by improving their capacity to recognize structures, analyze them, and make coherent decisions. They also mentioned the practical applicability of these competencies in various real-life scenarios and their importance in developing algorithms. Additionally, students highlighted that increased CT competencies could lead them to more efficient and effective solutions in academic and real-life contexts.
In contrast, some participants reported difficulty in identifying patterns, and many found this both difficult and time-consuming. Some faced challenges in applying decomposition and algorithmic thinking, struggling to break problems down into manageable parts and to formulate effective algorithmic steps to solve them. Some students also admitted to struggling with simplifying their thought processes, resulting in unnecessarily complicated solutions.
Apart from this, some participants reported a lack of confidence in their CT skills and a need for more mathematical and computational knowledge to solve algorithmic thinking problems. Many participants admitted limited familiarity with CT and a lack of foundational knowledge about how to apply CT concepts to academic tasks, which led to difficulties in correctly using CT competencies. Participants nonetheless recognized the relevance of CT skills in everyday life for addressing diverse challenges and improving outcomes through enhanced visualization. They also noted the potential of CT skills to enhance Engineering proficiency in solving complex problems. These findings highlight the importance of comprehensive support and education in CT competencies to develop practical problem-solving skills among students. Addressing pattern recognition and algorithmic thinking challenges requires tailored interventions to improve students’ cognitive abilities and computational knowledge. By bridging gaps in understanding and providing adequate training, educators can empower students to navigate complex problem-solving tasks confidently and proficiently.

5. Discussion

The results from this study offer valuable insights into students’ CT abilities across different demographic categories. Table 11 summarizes the key findings of each component, the CTT, the CTS, and the qualitative part, and triangulates them accordingly. Analyzing the overall CTT scores, students demonstrated relatively strong skills in pattern recognition but struggled more with tasks requiring the decomposition of complex problems, with mean values of μ = 0.87 and μ = 0.53, respectively. However, performance variations across different CTT groupings, such as age, gender, institution, and academic program, are also noteworthy. Older students tended to perform better in abstraction and algorithmic thinking, possibly due to increased experience and maturity.
Studies addressing students’ CT skills have identified various factors influencing these skills, with gender, program, and age among the most commonly examined variables. The effect of gender on CT skills, in particular, remains a topic of debate. Some studies have reported no significant gender difference in CT skills, while others suggest slight differences (Gülbahar et al., 2019). For instance, in (Sirakaya, 2020), the authors examined gender differences in CT skills with a sample of 722 Turkish secondary school students; the t-test results indicated no significant difference in mean CT skill scores between girls and boys (t(719) = −0.98, p = 0.33). In contrast, other work reported a significant difference in favor of female students (t(43748) = 7.42, p < 0.01) (Gülbahar et al., 2019).
Gender is a prominent variable in CT studies, necessitating a comprehensive discussion to understand its implications fully. Numerous studies have identified gender-based differences in CT performance, frequently attributing these disparities to social, cultural, and educational influences (Atmatzidou & Demetriadis, 2016; De la Hoz Serrano et al., 2024; Grover, 2017). For example, Cheryan et al. (2017) highlights that the prevalent stereotype associating computing with male-dominated environments can dissuade girls from engaging in these fields, potentially leading to lower performance in CT-related tasks. Similarly, Denner et al. (2012) observed that while boys tend to excel in technical CT tasks, girls often show more substantial capabilities in collaboration and problem decomposition. This suggests that gender differences in CT are multifaceted and context-dependent.
However, contrasting these findings, Espino and González (2016) argue that both genders possess equal potential to develop CT skills and acquire related information-processing abilities. Parallel to this, another study (Espino & González, 2016) found that gender differences in CT assessment were minimal. Although boys slightly outperformed girls in abstraction and decomposition tasks, girls demonstrated a marginally higher proficiency in algorithmic thinking. These results indicate that while gender may influence specific aspects of CT, the overall effect is relatively minor. Moreover, the findings support the notion that with appropriate instructional strategies, both boys and girls can achieve comparable levels of proficiency in CT. When compared to similar studies (Mindetbay et al., 2019; Tsai et al., 2021; Wu & Su, 2021), the present study corroborates the trend that gender differences in CT are present but not substantial.
In the current study, boys outperformed girls on the CTT across various skill areas, including algorithmic thinking and pattern recognition, though the differences are relatively minor; boys slightly outperform girls in actual performance, but the gaps are not substantial. In contrast, the CTS shows no significant gender differences in perceived CT skills. Both boys and girls report similar levels of competence across all skill dimensions, with girls scoring slightly higher in abstraction and decomposition and boys scoring slightly higher in generalization. These differences were not statistically significant, indicating that gender does not play an essential role in how students perceive their computational thinking abilities.
Similarly, the institutional comparison of overall CTT scores indicates that students from public institutions outperform those from private institutions, especially in areas such as algorithmic thinking and pattern recognition. This suggests that institutional factors, such as curriculum structure or resources, may shape students’ computational thinking skills. Furthermore, an examination of academic programs highlights that Computer Science students outperform Engineering students in specific CT sub-skills, particularly abstraction and pattern recognition. Computer Science students in the CTS also demonstrate superior skills in abstraction, with significant differences found between Computer Science and Engineering students in abstraction (p = 0.02) and decomposition (p = 0.001). Computer Science students consistently scored slightly higher across these areas despite no significant differences in algorithmic thinking, generalization, and evaluation.
These findings suggest that while both groups exhibit similar performance in specific CT dimensions, Computer Science students have a stronger foundation in particular areas, particularly abstraction and decomposition, which could be attributed to their program’s focus on these critical aspects of computational thinking. The curriculum and pedagogical approaches in Computer Science may emphasize abstract reasoning and problem-solving techniques more than Engineering programs, which are often more oriented toward applied, hands-on learning. This alignment can be attributed to the close connection between CT concepts and the field of Computer Science, as several studies highlight how CT’s core principles map directly to the domain. According to (Weintrop et al., 2016), CT is integral to Computer Science curricula, where programs are designed to nurture cognitive skills such as abstraction, decomposition, and algorithmic thinking. These skills are vital for tasks like coding and problem-solving. Conversely, Engineering programs prioritize practical applications, such as designing and building systems, where problem-solving often emphasizes tangible, real-world outcomes over abstract reasoning.
Several studies corroborate this distinction, underscoring the alignment between CT concepts and Computer Science principles (Barr & Stephenson, 2011; Grover & Pea, 2013). For instance, studies (Grover, 2017; Grover & Pea, 2013) stress that CT’s foundational elements, such as abstraction, algorithmic thinking, and decomposition, are deeply embedded in Computer Science education and are typically more integrated into its curriculum than in Engineering programs, which tend to focus on applied problem-solving. Similarly, Lye and Koh (2014) highlights that by concentrating on practical applications and physical systems, Engineering programs may emphasize developing abstract cognitive skills less. The work of (Barr & Stephenson, 2011) further reinforces this perspective, discussing how CT directly maps to the Computer Science discipline, thereby fostering more robust development of these skills among students in Computer Science programs.
Regarding age groups, the CTT analysis shows that participants under 20 perform lower overall but excel in pattern recognition, while those aged 20 and older demonstrate superior competency across all areas, suggesting that age and experience enhance computational thinking (CT). The CTS results reveal minimal differences in perceived CT skills, except that older students scored significantly higher in decomposition (μ = 0.92, σ = 0.87, t = 2.54, p = 0.01). Factors such as abstraction and algorithmic thinking showed no significant differences, indicating that age has a limited impact on CT skills overall. In conclusion, the findings underscore the importance of CT as a foundational skill across disciplines. While certain demographic factors, such as age and academic program, influence CT abilities, there remains significant potential to improve and cultivate these skills, particularly through targeted educational interventions. Future research and curriculum development should continue to address the challenges in decomposition and abstraction while also exploring ways to integrate CT across diverse fields of study.

6. Practical Implications

The findings of this study provide critical insights into CT within Latin America, particularly Mexico, and offer several avenues for application in educational practice. While this research primarily establishes a foundational understanding of CT in an underrepresented region, the following practical implications can help educators, curriculum developers, and policymakers make immediate use of the insights gained:
  • Recommendations for Educators: Based on the identified strengths and weaknesses in students’ CT skills, educators can design targeted learning activities that build on students’ existing capabilities while addressing areas for improvement (Hsu et al., 2018). For example, teachers can introduce structured exercises that help students engage with problems more effectively by breaking them down into manageable parts and focusing on key components. In addition, incorporating visual aids and collaborative discussions into lessons can further support students in developing a deeper understanding of concepts and enhance their problem-solving abilities. These strategies can create an inclusive learning environment, fostering confidence and competence in CT.
  • Enhancing Learning Tools: Various tools can be adopted to enhance students’ CT skills, such as block-based programming environments like Scratch or Blockly, which help students build CT skills in an engaging and interactive manner (George, 2018). Adopting modern CT tools into the STEM curriculum would be beneficial.
  • Incorporating CT into Existing Subjects at College level: Integrating CT into early education is widely recognized as a pivotal step in enhancing problem-solving and analytical skills (Grover et al., 2022). However, less attention has been given to its integration at the college level, where the need is equally critical. Embedding CT principles into core Engineering and Computer Science subjects, such as programming, mathematics, and related disciplines, equips students with the tools to tackle complex problems and develop analytical thinking. For example, incorporating structured problem-solving activities and model-creation exercises into existing curricula can seamlessly integrate CT principles, promoting a more interdisciplinary learning approach. Developing specialized modules focusing on key CT competencies such as debugging, algorithmic thinking, and advanced problem-solving can provide students with targeted opportunities to strengthen these essential skills.
  • CT Teacher Training: Investing in professional development programs can empower educators to effectively teach CT by equipping them with the necessary knowledge and tools (Cutumisu et al., 2019). Training initiatives should focus on integrating CT principles into everyday teaching practices and utilizing assessment tools to measure and enhance students’ CT skills.
By translating the findings into actionable recommendations and frameworks, this study offers a roadmap for enhancing CT education in Mexico and other underrepresented regions of Latin America. These practical implications enable educators, curriculum developers, and policymakers to take the first steps toward integrating CT into mainstream education, thereby addressing the growing demand for digital literacy and problem-solving skills in the 21st century.

7. Limitations and Future Work

This study has several limitations, including a limited sample drawn from only two universities in Mexico, which may affect the generalizability of the findings to broader educational contexts. Additionally, the focus on specific CT sub-competencies (abstraction, decomposition, algorithmic thinking, and pattern recognition), without considering other relevant dimensions such as creativity and critical thinking, narrows the scope of the analysis. The reliance on self-reported data through the CTS introduces potential bias in students’ perceptions of their abilities. Furthermore, this study does not evaluate the long-term impact of CT education on students’ problem-solving skills or career outcomes.
One of the most critical next steps is to design and implement experimental interventions targeting specific CT areas. For instance, focusing on areas where students demonstrated weaknesses, such as algorithmic thinking and debugging, we plan to develop and test teaching interventions that incorporate problem-based learning and interactive exercises. These interventions will be rigorously evaluated to assess their effectiveness in improving students’ CT skills and understanding. Future research could also examine the impact of integrating CT activities into classrooms and assess their influence on students’ learning outcomes.
In addition, future research should expand the sample size and include diverse institutions and disciplines to enhance the representativeness of the findings. It should also incorporate more comprehensive assessments that cover additional CT competencies, integrate objective performance measures alongside self-reports, and explore the long-term effects of CT education. Longitudinal studies tracking students’ CT skill development and its impact on their professional careers, as well as research on the role of emerging technologies like AI-driven learning tools in CT education, will provide valuable insights for refining instructional practices and fostering effective CT education across a wide range of academic fields.

8. Conclusions

This study addresses key questions about CT skills by exploring the alignment of students’ self-perceptions with objective assessments, the influence of demographic factors, and the relevance of CT to academic and career contexts. The findings provide significant insights and emphasize the need for targeted interventions and policy reforms. Firstly, the results highlight a considerable misalignment between students’ self-reported confidence and CT performance, particularly in decomposition and algorithmic thinking. This discrepancy calls for integrating reflective practices and feedback mechanisms to help students develop more accurate self-assessments and better align their perceptions with their abilities.
Secondly, in relation to the study’s research questions, demographic analyses reveal notable variations in CT skills by age, gender, and program. Computer Science students outperformed their Engineering peers in CT competencies, while public-institution students performed better than those from private institutions. In addition, gender differences were negligible in some CT competencies and notable in others. These findings underscore the importance of equitable resource allocation, cross-disciplinary curriculum reforms, and inclusive teaching strategies to address these disparities. Lastly, students recognized the value of CT skills in their academic and professional pursuits, though perceptions of relevance varied by discipline. To ensure broader applicability, CT education must be contextualized to meet the diverse needs of students across academic programs.
The findings suggest the need for tailored educational interventions to strengthen CT integration across disciplines. Additionally, discrepancies between students’ self-reported confidence and actual performance indicate a need for strategies to align perceptions with abilities, fostering a more realistic understanding of their skills. In conclusion, this research contributes to the growing body of knowledge on CT education by presenting a validated assessment framework that informs curriculum development and instructional strategies. By addressing the gaps reported in this study, educators and policymakers can better equip students with the critical problem-solving skills necessary to thrive in a technology-driven world.

Author Contributions

Conceptualization, F.A.P. and H.T.-M.; methodology, F.A.P. and A.A.; software, S.H.; validation, A.A., G.I.-V., and H.T.-M.; formal analysis, A.A.; investigation, S.H.; resources, G.I.-V.; data curation, F.A.P.; writing—original draft preparation, H.T.-M.; writing—review and editing, A.A.; visualization, S.H.; supervision, H.T.-M.; project administration, F.A.P.; funding acquisition, H.T.-M. All authors have read and agreed to the published version of the manuscript.

Funding

This research was financially supported by Tecnologico de Monterrey through the project I003—IFE001—C2-T3—T E4C&CT: Ecosystem for scaling up computational thinking and reasoning for complexity, within the “Challenge-Based Research Funding Program 2022”.

Institutional Review Board Statement

This study was conducted in accordance with the Declaration of Helsinki, and an ethical advisory opinion was provided by Tecnologico de Monterrey for studies involving humans.

Informed Consent Statement

Written informed consent was not required for this study as per institutional guidelines. Participation in this study was voluntary, and data were anonymized to ensure confidentiality.

Data Availability Statement

The authors declare that no external data sources were collected. However, data related to the systematic literature review may be available upon reasonable request.

Acknowledgments

The authors express their gratitude to Tecnologico de Monterrey, the Consejo Nacional de Ciencia y Tecnología (CONAHCYT), and Project I003—IFE001—C2-T3—T E4C&CT: Ecosystem for scaling up computational thinking and reasoning for complexity, within the “Challenge-Based Research Funding Program 2022”, for their financial support.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
EDA: Exploratory Data Analysis
IRT: Item Response Theory
CT: Computational Thinking
CTT: Computational Thinking Test
CTS: Computational Thinking Scale
STEM: Science, Technology, Engineering, and Mathematics
CFA: Confirmatory Factor Analysis
CR: Composite Reliability
AVE: Average Variance Extracted
K-12: Kindergarten through 12th Grade

References

  1. Agbo, F. J., Oyelere, S. S., Suhonen, J., & Adewumi, S. (2019, November 19–22). A systematic review of computational thinking approach for programming education in higher education institutions. 19th Koli Calling International Conference on Computing Education Research (pp. 1–10), Koli, Finland. [Google Scholar]
  2. Allsop, Y. (2019). Assessing computational thinking process using a multiple evaluation approach. International Journal of Child-Computer Interaction, 19, 30–55. [Google Scholar] [CrossRef]
  3. Andrian, R., & Hikmawan, R. (2021). The importance of computational thinking to train structured thinking in problem solving. Journal Online Informatika, 6(1), 113–117. [Google Scholar] [CrossRef]
  4. Angevine, C., Cator, K., Roschelle, J., Thomas, S. A., Waite, C., & Weisgrau, J. (2017). Computational thinking for a computational world (Technical report). Digital Promise. [Google Scholar]
  5. Atmatzidou, S., & Demetriadis, S. (2016). Advancing students’ computational thinking skills through educational robotics: A study on age and gender relevant differences. Robotics and Autonomous Systems, 75, 661–670. [Google Scholar] [CrossRef]
  6. Barcelos, T. S., Muñoz-Soto, R., Villarroel, R., Merino, E., & Silveira, I. F. (2018). Mathematics learning through computational thinking activities: A systematic literature review. Journal of Universal Computer Science, 24(7), 815–845. [Google Scholar]
  7. Barr, V., & Stephenson, C. (2011). Bringing computational thinking to K-12: What is involved and what is the role of the computer science education community? ACM Inroads, 2(1), 48–54. [Google Scholar] [CrossRef]
  8. Brennan, K., & Resnick, M. (2012, April 13–17). New frameworks for studying and assessing the development of computational thinking. 2012 Annual Meeting of the American Educational Research Association (Vol. 1, p. 25), Vancouver, BC, Canada. [Google Scholar]
  9. Buyukozturk, S. (2002). Sosyal bilimler icin very analizi el kitabi. Pegem Yayincihk. [Google Scholar]
  10. Byrne, B. M. (2013). Structural equation modeling with Mplus: Basic concepts, applications, and programming. Routledge. [Google Scholar]
  11. Castro, A. T., Reyes, M. M., & Soberanes-Martín, A. (2021). Instructional design to foster computational thinking using educational robotics. In Handbook of research on using educational robotics to facilitate student learning (pp. 164–182). IGI Global. [Google Scholar]
  12. Chen, L.-X., Su, S.-W., Liao, C.-H., Hsiao, M.-J., & Yuan, S.-M. (2023). Digital game approaches for cultivating computational thinking skills in college students. Engineering Proceedings, 55(1), 62. [Google Scholar]
  13. Cheng, X., Ma, X.-Y., Luo, C., Chen, J., Wei, W., & Yang, X. (2021). Examining the relationships between medical students’ preferred online instructional strategies, course difficulty level, learning performance, and effectiveness. Advances in Physiology Education, 45(4), 661–669. [Google Scholar] [CrossRef] [PubMed]
  14. Cheryan, S., Ziegler, S. A., Montoya, A. K., & Jiang, L. (2017). Why are some STEM fields more gender balanced than others? Psychological Bulletin, 143(1), 1. [Google Scholar] [CrossRef] [PubMed]
  15. Clarke-Midura, J., Kozlowski, J. S., Shumway, J. F., & Lee, V. R. (2021). How young children engage in and shift between reference frames when playing with coding toys. International Journal of Child-Computer Interaction, 28, 100250. [Google Scholar] [CrossRef]
  16. CSTA. (2016). K-12 computer science standards. Available online: https://dl.acm.org/doi/book/10.1145/2593249 (accessed on 5 December 2023).
  17. CSTA & ISTE. (2011). Computational thinking in k-12 education leadership toolkit. Available online: http://csta.acm.org/Curriculum/sub/CurrFiles/471.11CTLeadershipToolkit-SP-vF.pdf (accessed on 5 December 2023).
  18. Cuny, J., Snyder, L., & Wing, J. M. (2010). Demystifying computational thinking for non-computer scientists, [Unpublished manuscript in progress].
  19. Cutumisu, M., Adams, C., & Lu, C. (2019). A scoping review of empirical research on recent computational thinking assessments. Journal of Science Education and Technology, 28(6), 651–676. [Google Scholar] [CrossRef]
  20. Czerkawski, B. C., & Lyman, E. W. (2015). Exploring issues about computational thinking in higher education. TechTrends, 59(2), 57–65. [Google Scholar] [CrossRef]
  21. Çapık, C., Gözüm, S., & Aksayan, S. (2018). Kültürlerarası ölçek uyarlama aşamaları, dil ve kültür uyarlaması: Güncellenmiş rehber. Florence Nightingale Journal of Nursing, 26(3), 199–210. [Google Scholar] [CrossRef]
  22. Dehbozorgi, N., & Roopaei, M. (2024, March 9). Improving computational thinking competencies in stem higher education. 2024 IEEE Integrated STEM Education Conference (ISEC) (pp. 1–4), Princeton, NJ, USA. [Google Scholar]
  23. De la Hoz Serrano, A., Melo Niño, L. V., Álvarez Murillo, A., Martín Tardío, M. Á., Cañada Cañada, F., & Cubero Juánez, J. (2024). Analysis of gender issues in computational thinking approach in science and mathematics learning in higher education. European Journal of Investigation in Health, Psychology and Education, 14(11), 2865–2882. [Google Scholar] [CrossRef] [PubMed]
  24. Denner, J., Werner, L., & Ortiz, E. (2012). Computer games created by middle school girls: Can they be used to measure understanding of computer science concepts? Computers & Education, 58(1), 240–249. [Google Scholar]
  25. Denning, P. J., & Tedre, M. (2019). Computational thinking. Mit Press. [Google Scholar]
  26. Doleck, T., Bazelais, P., Lemay, D. J., Saxena, A., & Basnet, R. B. (2017). Algorithmic thinking, cooperativity, creativity, critical thinking, and problem solving: Exploring the relationship between computational thinking skills and academic performance. Journal of Computers in Education, 4, 355–369. [Google Scholar] [CrossRef]
  27. Dolgopolovas, V., Jevsikova, T., Dagiene, V., & Savulionienė, L. (2016). Exploration of computational thinking of software engineering novice students based on solving computer science tasks. The International Journal of Engineering Education, 32(3), 1107–1116. [Google Scholar]
  28. Dong, Y., Catete, V., Jocius, R., Lytle, N., Barnes, T., Albert, J., Joshi, D., Robinson, R., & Andrews, A. (2019, February 27–March 2). PRADA: A practical model for integrating computational thinking in K-12 education. 50th ACM Technical Symposium on Computer Science Education (pp. 906–912), Minneapolis, MN, USA. [Google Scholar]
  29. El-Hamamsy, L., Zapata-Cáceres, M., Martín-Barroso, E., Mondada, F., Zufferey, J. D., Bruno, B., & Román-González, M. (2023). The competent Computational Thinking test (cCTt): A valid, reliable and gender-fair test for longitudinal CT studies in grades 3–6. arXiv, arXiv:2305.19526. [Google Scholar] [CrossRef]
  30. Espino, E. E. E., & González, C. G. (2016, September 13–16). Gender and computational thinking: Review of the literature and applications. XVII International Conference on Human Computer Interaction (pp. 1–2), Salamanca, Spain. [Google Scholar]
  31. Field, A. (2013). Discovering statistics using IBM SPSS statistics. Sage. [Google Scholar]
  32. Fields, D., Lui, D., Kafai, Y., Jayathirtha, G., Walker, J., & Shaw, M. (2021). Communicating about computational thinking: Understanding affordances of portfolios for assessing high school students’ computational thinking and participation practices. Computer Science Education, 31(2), 224–258. [Google Scholar] [CrossRef]
  33. George, L. (2018). Computational thinking for adults-designing an immersive multi-modal learning experience using mixed reality. Malmö Universitet/Kultur och Samhälle. [Google Scholar]
  34. Ghosh, A., Malva, L., & Singla, A. (2024, March 20–23). Analyzing-evaluating-creating: Assessing computational thinking and problem solving in visual programming domains. 55th ACM Technical Symposium on Computer Science Education (Vol. 1, pp. 387–393), Portland, OR, USA. [Google Scholar]
  35. Gouws, L., Bradshaw, K., & Wentworth, P. (2013, October 7–9). First year student performance in a test for computational thinking. South African Institute for Computer Scientists and Information Technologists Conference (pp. 271–277), East London, South Africa. [Google Scholar]
  36. Grover, S. (2017). Assessing algorithmic and computational thinking in K-12: Lessons from a middle school classroom. In Emerging research, practice, and policy on computational thinking (pp. 269–288). Springer. [Google Scholar]
  37. Grover, S., Dominguez, X., Leones, T., Kamdar, D., Vahey, P., & Gracely, S. (2022). Strengthening early STEM learning by integrating CT into science and math activities at home. In Computational thinking in prek-5: Empirical evidence for integration and future directions (pp. 72–84). Association for Computing Machinery. [Google Scholar]
  38. Grover, S., & Pea, R. (2013). Computational thinking in K-12: A review of the state of the field. Educational Researcher, 42(1), 38–43. [Google Scholar] [CrossRef]
  39. Gülbahar, Y., Kert, S. B., & Kalelioğlu, F. (2019). The self-efficacy perception scale for computational thinking skill: Validity and reliability study. Turkish Journal of Computer and Mathematics Education (TURCOMAT), 10(1), 1–29. [Google Scholar] [CrossRef]
  40. Günbatar, M. S. (2019). Computational thinking within the context of professional life: Change in CT skill from the viewpoint of teachers. Education and Information Technologies, 24(5), 2629–2652. [Google Scholar] [CrossRef]
  41. Hair, J., Hollingsworth, C. L., Randolph, A. B., & Chong, A. Y. L. (2017). An updated and expanded assessment of PLS-SEM in information systems research. Industrial Management & Data Systems, 117(3), 442–458. [Google Scholar]
  42. Hair, J. F., Ringle, C. M., & Sarstedt, M. (2011). PLS-SEM: Indeed a silver bullet. Journal of Marketing theory and Practice, 19(2), 139–152. [Google Scholar] [CrossRef]
  43. Hair, J. F., Hult, G. T. M., Ringle, C. M., Sarstedt, M., Danks, N. P., & Ray, S. (2021). An introduction to structural equation modeling. In Partial least squares structural equation modeling (PLS-SEM) using R: A workbook (pp. 1–29). Springer. [Google Scholar]
  44. Hambrusch, S., Hoffmann, C., Korb, J. T., Haugan, M., & Hosking, A. L. (2009). A multidisciplinary approach towards computational thinking for science majors. ACM Sigcse Bulletin, 41(1), 183–187. [Google Scholar]
  45. Hermans, S., Neutens, T., Wyffels, F., & Van Petegem, P. (2024). Empowering vocational students: A research-based framework for computational thinking integration. Education Sciences, 14(2), 206. [Google Scholar] [CrossRef]
  46. Hsu, T.-C., Chang, C., Wong, L.-H., & Aw, G. P. (2022). Learning performance of different genders’ computational thinking. Sustainability, 14(24), 16514. [Google Scholar] [CrossRef]
  47. Hsu, T.-C., Chang, S.-C., & Hung, Y.-T. (2018). How to learn and how to teach computational thinking: Suggestions based on a review of the literature. Computers & Education, 126, 296–310. [Google Scholar]
  48. Hu, L.-T., & Bentler, P. M. (1999). Cutoff criteria for fit indexes in covariance structure analysis: Conventional criteria versus new alternatives. Structural Equation Modeling: A Multidisciplinary Journal, 6(1), 1–55. [Google Scholar] [CrossRef]
  49. Hunsaker, E. (2020). Computational thinking. EdTech Books. [Google Scholar]
  50. Junpho, M., Songsriwittaya, A., & Tep, P. (2022). Reliability and construct validity of computational thinking Scale for junior high school students: Thai adaptation. International Journal of Learning, Teaching and Educational Research, 21(9), 154–173. [Google Scholar] [CrossRef]
  51. Kallia, M., van Borkulo, S. P., Drijvers, P., Barendsen, E., & Tolboom, J. (2021). Characterising computational thinking in mathematics education: A literature-informed Delphi study. Research in Mathematics Education, 23(2), 159–187. [Google Scholar] [CrossRef]
  52. Karalar, H., & Alpaslan, M. M. (2021). Assessment of eighth grade students’ domain-general computational thinking skills. International Journal of Computer Science Education in Schools, 5(1), 35–47. [Google Scholar] [CrossRef]
  53. Kazimoglu, C., Kiernan, M., Bacon, L., & Mackinnon, L. (2012). A serious game for developing computational thinking and learning introductory computer programming. Procedia-Social and Behavioral Sciences, 47, 1991–1999. [Google Scholar] [CrossRef]
  54. Korkmaz, Ö., & Bai, X. (2019). Adapting computational thinking scale (CTS) for Chinese high school students and their thinking scale skills level. Participatory Educational Research, 6(1), 10–26. [Google Scholar] [CrossRef]
  55. Korkmaz, Ö., Çakir, R., & Özden, M. Y. (2017). A validity and reliability study of the computational thinking scales (CTS). Computers in Human Behavior, 72, 558–569. [Google Scholar] [CrossRef]
  56. Korkmaz, Ö., Çakir, R., Özden, M. Y., Ali, O., & Sarioğlu, S. (2015). Bireylerin bilgisayarca düşünme becerilerinin farklı değişkenler açısından incelenmesi. Ondokuz Mayis University Journal of Education Faculty, 34(2), 68–87. [Google Scholar]
  57. Kynigos, C., & Grizioti, M. (2020). Modifying games with ChoiCo: Integrated affordances and engineered bugs for computational thinking. British Journal of Educational Technology, 51(6), 2252–2267. [Google Scholar] [CrossRef]
  58. Lemay, D. J., Basnet, R. B., Doleck, T., Bazelais, P., & Saxena, A. (2021). Instructional interventions for computational thinking: Examining the link between computational thinking and academic performance. Computers and Education Open, 2, 100056. [Google Scholar] [CrossRef]
  59. Liang, H.-N., Fleming, C., Man, K. L., & Tillo, T. (2013, August 26–29). A first introduction to programming for first-year students at a Chinese university using LEGO MindStorms. 2013 IEEE International Conference on Teaching, Assessment and Learning for Engineering (TALE) (pp. 233–238), Bali, Indonesia. [Google Scholar]
  60. Liu, X., Wang, X., Xu, K., & Hu, X. (2023). Effect of reverse engineering pedagogy on primary school students’ computational thinking skills in STEM learning activities. Journal of Intelligence, 11(2), 36. [Google Scholar] [CrossRef]
  61. Lu, C., Macdonald, R., Odell, B., Kokhan, V., Demmans Epp, C., & Cutumisu, M. (2022). A scoping review of computational thinking assessments in higher education. Journal of Computing in Higher Education, 34(2), 416–461. [Google Scholar] [CrossRef]
  62. Lye, S. Y., & Koh, J. H. L. (2014). Review on teaching and learning of computational thinking through programming: What is next for K-12? Computers in Human Behavior, 41, 51–61. [Google Scholar] [CrossRef]
  63. Ma, H., Zhao, M., Wang, H., Wan, X., Cavanaugh, T. W., & Liu, J. (2021). Promoting pupils’ computational thinking skills and self-efficacy: A problem-solving instructional approach. Educational Technology Research and Development, 69(3), 1599–1616. [Google Scholar] [CrossRef]
  64. Machuqueiro, F., & Piedade, J. (2024). Game on: A journey into computational thinking with modern board games in portuguese primary education. Education Sciences, 14(11), 1182. [Google Scholar] [CrossRef]
  65. Malva, L., Hooshyar, D., Yang, Y., & Pedaste, M. (2020, July 6–9). Engaging Estonian primary school children in computational thinking through adaptive educational games: A qualitative study. 2020 IEEE 20th International Conference on Advanced Learning Technologies (ICALT) (pp. 188–190), Tartu, Estonia. [Google Scholar]
  66. Mindetbay, Y., Bokhove, C., & Woollard, J. (2019). What is the relationship between students’ computational thinking performance and school achievement? International Journal of Computer Science Education in Schools, 2(5), 3–19. [Google Scholar] [CrossRef]
  67. Mishra, P., & Yadav, A. (2013). Of art and algorithms: Rethinking technology & creativity in the 21st century (Vol. 57). Springer. No. 3. [Google Scholar]
  68. Moretti, F., van Vliet, L., Bensing, J., Deledda, G., Mazzi, M., Rimondini, M., Zimmermann, C., & Fletcher, I. (2011). A standardized approach to qualitative content analysis of focus group discussions from different countries. Patient Education and Counseling, 82(3), 420–428. [Google Scholar] [CrossRef]
  69. Mueller, J., Beckett, D., Hennessey, E., & Shodiev, H. (2017). Assessing computational thinking across the curriculum. In Emerging research, practice, and policy on computational thinking (pp. 251–267). Springer. [Google Scholar]
  70. Muñoz, R. F. Z., Alegría, J. A. H., & Robles, G. (2023). Assessment of computational thinking skills: A systematic review of the literature. IEEE Revista Iberoamericana de Tecnologias del Aprendizaje, 18, 319–330. [Google Scholar] [CrossRef]
  71. National Research Council, Division of Behavioral and Social Sciences and Education, Board on Science Education, Board on Testing and Assessment & Committee on Highly Successful Schools or Programs for K-12 STEM Education. (2011). Successful k-12 stem education: Identifying effective approaches in science, technology, engineering, and mathematics. National Academies Press. [Google Scholar]
  72. Nelson, K. G., Shell, D. F., Husman, J., Fishman, E. J., & Soh, L.-K. (2015). Motivational and self-regulated learning profiles of students taking a foundational engineering course. Journal of Engineering Education, 104(1), 74–100. [Google Scholar] [CrossRef]
  73. Ogegbo, A. A., & Ramnarain, U. (2022). A systematic review of computational thinking in science classrooms. Studies in Science Education, 58(2), 203–230. [Google Scholar] [CrossRef]
  74. Paucar-Curasma, R., Villalba-Condori, K., Arias-Chavez, D., Le, N.-T., Garcia-Tejada, G., & Frango-Silveira, I. (2022). Evaluation of computational thinking using four educational robots with primary school students in Peru. Education in the Knowledge Society, 23, 1–10. [Google Scholar] [CrossRef]
  75. Piedade, J., & Dorotea, N. (2023). Effects of Scratch-based activities on 4th-grade students’ computational thinking skills. Informatics in Education, 22(3), 499–523. [Google Scholar] [CrossRef]
  76. Relkin, E., & Bers, M. (2021, April 21–23). Techcheck-k: A measure of computational thinking for kindergarten children. 2021 IEEE Global Engineering Education Conference (EDUCON) (pp. 1696–1702), Vienna, Austria. [Google Scholar]
  77. Relkin, E., de Ruiter, L., & Bers, M. U. (2020). TechCheck: Development and validation of an unplugged assessment of computational thinking in early childhood education. Journal of Science Education and Technology, 29(4), 482–498. [Google Scholar] [CrossRef]
  78. Riley, D. D., & Hunt, K. A. (2014). Computational thinking for the modern problem solver. CRC Press. [Google Scholar]
  79. Ríos Félix, J. M., Zatarain Cabada, R., & Barrón Estrada, M. L. (2020). Teaching computational thinking in Mexico: A case study in a public elementary school. Education and Information Technologies, 25(6), 5087–5101. [Google Scholar] [CrossRef]
  80. Román-González, M., Moreno-León, J., & Robles, G. (2019). Combining assessment tools for a comprehensive evaluation of computational thinking interventions. In Computational thinking education (pp. 79–98). Springer. [Google Scholar]
  81. Romero, M., Lepage, A., & Lille, B. (2017). Computational thinking development through creative programming in higher education. International Journal of Educational Technology in Higher Education, 14, 42. [Google Scholar] [CrossRef]
  82. Rosli, M. S., & Saleh, N. S. (2023). Technology enhanced learning acceptance among university students during COVID-19: Integrating the full spectrum of Self-Determination Theory and self-efficacy into the Technology Acceptance Model. Current Psychology, 42(21), 18212–18231. [Google Scholar] [CrossRef] [PubMed]
  83. Schumacker, E., & Lomax, G. (2016). A beginner’s guide to structural equation modelling (4th ed.). Routledge. [Google Scholar]
  84. Selby, C. (2013). Computational thinking: The developing definition. University of Southampton. [Google Scholar]
  85. Sirakaya, D. A. (2020). Investigation of computational thinking in the context of ICT and mobile technologies. International Journal of Computer Science Education in Schools, 3(4), 50–59. [Google Scholar] [CrossRef]
  86. Soboleva, E. V., Sabirova, E. G., Babieva, N. S., Sergeeva, M. G., & Torkunova, J. V. (2021). Formation of computational thinking skills using computer games in teaching mathematics. Eurasia Journal of Mathematics, Science and Technology Education, 17(10), em2012. [Google Scholar] [CrossRef] [PubMed]
  87. Sullivan, A., & Bers, M. U. (2019). Computer science education in early childhood: The case of ScratchJr. Journal of Information Technology Education. Innovations in Practice, 18, 113. [Google Scholar] [CrossRef]
  88. Sullivan, A. A., Bers, M. U., & Mihm, C. (2017). Imagining, playing, and coding with KIBO: Using robotics to foster computational thinking in young children (Vol. 110). The Education University of Hong Kong. [Google Scholar]
  89. Syafe’i, S. S., Widarti, H. R., Dasna, I. W., & Wonorahardjo, S. (2023). STEM and STEAM affects computational thinking skill: A systematic literature review. Orbital: The Electronic Journal of Chemistry, 15, 208–216. [Google Scholar] [CrossRef]
  90. Tabesh, Y. (2017). Computational thinking: A 21st century skill. Olympiads in Informatics, 11(2), 65–70. [Google Scholar] [CrossRef]
  91. Tang, X., Yin, Y., Lin, Q., Hadad, R., & Zhai, X. (2020). Assessing computational thinking: A systematic review of empirical studies. Computers & Education, 148, 103798. [Google Scholar]
  92. Tekdal, M. (2021). Trends and development in research on computational thinking. Education and Information Technologies, 26(5), 6499–6529. [Google Scholar] [CrossRef]
  93. Tikva, C., & Tambouris, E. (2021). Mapping computational thinking through programming in K-12 education: A conceptual model based on a systematic literature Review. Computers & Education, 162, 104083. [Google Scholar]
  94. Tsai, M.-J., Liang, J.-C., & Hsu, C.-Y. (2021). The computational thinking scale for computer literacy education. Journal of Educational Computing Research, 59(4), 579–602. [Google Scholar] [CrossRef]
  95. Valenzuela, J. (2019). Attitudes towards teaching computational thinking and computer science: Insights from educator interviews and focus groups. Journal of Computer Science Integration, 2(2), 1–17. [Google Scholar] [CrossRef]
  96. Voogt, J., Fisser, P., Good, J., Mishra, P., & Yadav, A. (2015). Computational thinking in compulsory education: Towards an agenda for research and practice. Education and Information Technologies, 20, 715–728. [Google Scholar] [CrossRef]
  97. Weintrop, D., Beheshti, E., Horn, M., Orton, K., Jona, K., Trouille, L., & Wilensky, U. (2016). Defining computational thinking for mathematics and science classrooms. Journal of Science Education and Technology, 25, 127–147. [Google Scholar] [CrossRef]
  98. Werner, L., Denner, J., Campe, S., & Kawamoto, D. C. (2012, February 29–March 3). The fairy performance assessment: Measuring computational thinking in middle school. 43rd ACM Technical Symposium on Computer Science Education (pp. 215–220), Raleigh, NC, USA. [Google Scholar]
  99. Wilensky, U., & Reisman, K. (2006). Thinking like a wolf, a sheep, or a firefly: Learning biology through constructing and testing computational theories—An embodied modeling approach. Cognition and Instruction, 24(2), 171–209. [Google Scholar] [CrossRef]
  100. Wing, J. (2009). Computational thinking. Journal of Computing Sciences in Colleges, 24(6), 6–7. [Google Scholar]
  101. Wing, J. (2011). Research notebook: Computational thinking-what and why? Carnegie Mellon University School of Computer Science. [Google Scholar]
  102. Wing, J. M. (2006). Computational thinking. Communications of the ACM, 49(3), 33–35. [Google Scholar] [CrossRef]
  103. Wing, J. M. (2008). Computational thinking and thinking about computing. Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences, 366(1881), 3717–3725. [Google Scholar] [CrossRef] [PubMed]
  104. Wu, S.-Y., & Su, Y.-S. (2021). Visual programming environments and computational thinking performance of fifth-and sixth-grade students. Journal of Educational Computing Research, 59(6), 1075–1092. [Google Scholar] [CrossRef]
  105. Yadav, A., Mayfield, C., Zhou, N., Hambrusch, S., & Korb, J. T. (2014). Computational thinking in elementary and secondary teacher education. ACM Transactions on Computing Education (TOCE), 14(1), 1–16. [Google Scholar]
  106. Yeni, S., Grgurina, N., Saeli, M., Hermans, F., Tolboom, J., & Barendsen, E. (2024). Interdisciplinary integration of computational thinking in K-12 education: A systematic review. Informatics in Education, 23(1), 223–278. [Google Scholar] [CrossRef]
  107. Yilmaz, R., & Yilmaz, F. G. K. (2023). Augmented intelligence in programming learning: Examining student views on the use of ChatGPT for programming learning. Computers in Human Behavior: Artificial Humans, 1(2), 100005. [Google Scholar] [CrossRef]
  108. Yu, Y., Si, X., Hu, C., & Zhang, J. (2019). A review of recurrent neural networks: LSTM cells and network architectures. Neural Computation, 31(7), 1235–1270. [Google Scholar] [CrossRef] [PubMed]
  109. Zapata-Cáceres, M., Martín-Barroso, E., & Román-González, M. (2020, April 27–30). Computational thinking test for beginners: Design and content validation. 2020 IEEE Global Engineering Education Conference (EDUCON) (pp. 1905–1914), Porto, Portugal. [Google Scholar]
  110. Zhang, X., Aivaloglou, F., & Specht, M. (2024). A systematic umbrella review on computational thinking assessment in higher education. European Journal of STEM Education, 9(1), 2. [Google Scholar] [CrossRef] [PubMed]
Figure 1. General workflow.
Figure 2. Word cloud of qualitative data.
Table 1. Examples of CTT questions.
CT Sub-Competency | Problem Numbers | Sample Questions
Pattern Recognition | 1.1–1.4 | There is a missing number that does not allow the series 1, 5, 9, ?, 25, 37, 49 to be completed. What will it be? Solve by applying pattern recognition.
Decomposition | 2.1–2.4 | You have been given the task of finding the prime factors of a number using the division method, for which you must apply the decomposition skill and thus effectively understand the problem. What would be your first step to start solving it? Apply decomposition.
Abstraction | 3.1–3.4 | Apply abstraction to complete the sequence of the given puzzle.
Algorithmic Thinking | 4.1–4.4 | Apply algorithmic thinking to decide whether the pseudocode for adding two numbers presented below is correct or incorrect.
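As an illustration of the decomposition item in Table 1, the division method reduces prime factorization to a sequence of small, repeatable steps. The sketch below is our own illustration; the function name and example value are not part of the test instrument:

```python
def prime_factors_by_division(n: int) -> list:
    """Decompose n into prime factors by repeatedly dividing out the smallest factor."""
    factors = []
    divisor = 2
    while n > 1:
        while n % divisor == 0:  # divide out the current factor completely
            factors.append(divisor)
            n //= divisor
        divisor += 1
    return factors

print(prime_factors_by_division(60))  # [2, 2, 3, 5]
```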
Table 2. Examples of CTS items.
Factor | No. of Items | Item Description
Abstraction | 4 | When solving problems, I thought about the problem from a global point of view rather than focusing on the details.
Decomposition | 3 | When solving problems, I considered dividing a large calculus problem into several small ones on the test.
Algorithmic Thinking | 4 | By solving problems, I tried to find out the step-by-step procedures for solving a computational problem.
Generalization | 4 | When solving problems, I tried to solve a new problem according to my experience.
Evaluation | 4 | By solving the problems presented in this form, I searched for a correct solution to the given problem.
Table 3. Examples of open-ended questions.
No. | Open-Ended Question
1 | What strategies did you use to solve the problems in this section? Explain your procedure.
2 | What did you find difficult when solving the problems in this section? Explain.
3 | How do you imagine these CT skills can help you solve problems in your future career?
Table 4. Participant distribution by age group and institution.
Institution | Age Group | Program | N (Boys) | N (Girls) | N (Total)
Private | Under 20 | Computer Science | 3 | 2 | 5
Private | Under 20 | Engineering | 20 | 15 | 35
Private | 20 and above | Computer Science | 23 | 15 | 38
Private | 20 and above | Engineering | 3 | 3 | 6
Public | Under 20 | Computer Science | 9 | 11 | 20
Public | Under 20 | Engineering | 15 | 23 | 38
Public | 20 and above | Computer Science | 27 | 15 | 42
Public | 20 and above | Engineering | 10 | 2 | 12
Table 5. CTT score comparison by age, gender, institution, and academic program. Each value is the mean score with standard deviation (SD) in parentheses. ABS: abstraction; AT: algorithmic thinking; DEC: decomposition; PR: pattern recognition.
Groups | ABS | AT | DEC | PR
Under 20 | 0.68 (0.25) | 0.49 (0.21) | 0.56 (0.18) | 0.83 (0.21)
20 and above | 0.75 (0.31) | 0.57 (0.20) | 0.71 (0.19) | 0.91 (0.17)
Boys | 0.73 (0.13) | 0.65 (0.35) | 0.53 (0.33) | 0.90 (0.07)
Girls | 0.68 (0.18) | 0.60 (0.37) | 0.55 (0.32) | 0.82 (0.09)
Public | 0.77 (0.12) | 0.79 (0.24) | 0.58 (0.37) | 0.94 (0.03)
Private | 0.70 (0.14) | 0.60 (0.38) | 0.52 (0.33) | 0.86 (0.08)
Computer Science | 0.76 (0.11) | 0.71 (0.30) | 0.59 (0.32) | 0.90 (0.06)
Engineering | 0.66 (0.16) | 0.55 (0.41) | 0.47 (0.34) | 0.84 (0.10)
Table 6. Factor loadings and reliability/validity metrics for the CTS. Item-wise factor loadings are presented in tuple form according to item count. CA: Cronbach’s alpha.
Factors and Items | Item-Wise Factor Loadings | CA | rho_A | CR | AVE
Abstraction (4) | (0.703, 0.733, 0.781, 0.776) | 0.733 | 0.601 | 0.730 | 0.536
Algorithmic Thinking (4) | (0.757, 0.703, 0.672, 0.805) | 0.716 | 0.726 | 0.825 | 0.542
Decomposition (3) | (0.839, 0.880, 0.722) | 0.704 | 0.756 | 0.828 | 0.621
Evaluation (4) | (0.801, 0.858, 0.816, 0.722) | 0.813 | 0.821 | 0.877 | 0.741
Generalization (4) | (0.760, 0.723, 0.758, 0.798) | 0.717 | 0.725 | 0.825 | 0.642
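The CR and AVE columns in Table 6 follow the standard definitions based on standardized factor loadings: CR = (Σλ)² / [(Σλ)² + Σ(1 − λ²)] and AVE = Σλ² / k. The sketch below illustrates these formulas only; the loading values are illustrative placeholders, and the published estimates also depend on the full measurement model and estimation procedure:

```python
# Minimal sketch (our own): composite reliability (CR) and average variance
# extracted (AVE) computed from standardized factor loadings.
def composite_reliability(loadings):
    total = sum(loadings)
    error_variance = sum(1 - l ** 2 for l in loadings)
    return total ** 2 / (total ** 2 + error_variance)

def average_variance_extracted(loadings):
    return sum(l ** 2 for l in loadings) / len(loadings)

example_loadings = [0.72, 0.78, 0.81, 0.69]  # illustrative placeholders
print(round(composite_reliability(example_loadings), 3))
print(round(average_variance_extracted(example_loadings), 3))
```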
Table 7. Gender-wise comparison of students’ perceived CT skills on the CTS. Mean and standard deviation (SD) are reported for boys and girls.
Factors | Mean (Boys) | SD (Boys) | Mean (Girls) | SD (Girls) | t-Value | p-Value
Abstraction | 0.38 | 0.65 | 0.39 | 0.58 | 0.91 | 0.37
Decomposition | 0.38 | 0.84 | 0.36 | 0.92 | 1.63 | 0.11
Algorithmic Thinking | 0.39 | 0.76 | 0.40 | 0.76 | 0.76 | 0.45
Generalization | 0.40 | 0.77 | 0.39 | 0.73 | 0.68 | 0.49
Evaluation | 0.38 | 0.92 | 0.38 | 0.89 | 1.36 | 0.17
Table 8. Age-wise comparison of students’ perceived CT skills on the CTS. Mean and standard deviation (SD) are reported for students under 20 and for students aged 20 and above.
Factors | Mean (Under 20) | SD (Under 20) | Mean (20 and Above) | SD (20 and Above) | t-Value | p-Value
Abstraction | 0.39 | 0.64 | 0.40 | 0.62 | 0.42 | 0.67
Decomposition | 0.36 | 0.84 | 0.92 | 0.87 | 2.54 | 0.01
Algorithmic Thinking | 0.95 | 0.76 | 0.40 | 0.76 | 0.73 | 0.47
Generalization | 0.40 | 0.77 | 0.40 | 0.75 | 0.35 | 0.73
Evaluation | 0.38 | 0.91 | 0.38 | 0.92 | 0.15 | 0.88
Table 9. Program-wise comparison of students’ perceived CT skills on the CTS. Mean and standard deviation (SD) are reported for Computer Science and Engineering students.
Factors | Mean (CS) | SD (CS) | Mean (Eng.) | SD (Eng.) | t-Value | p-Value
Abstraction | 0.41 | 0.53 | 0.39 | 0.68 | 2.42 | 0.02
Decomposition | 0.40 | 0.82 | 0.35 | 0.87 | 3.34 | 0.001
Algorithmic Thinking | 0.40 | 0.70 | 0.39 | 0.79 | 1.37 | 0.17
Generalization | 0.41 | 0.66 | 0.39 | 0.82 | 1.19 | 0.23
Evaluation | 0.39 | 0.88 | 0.38 | 0.94 | 0.76 | 0.45
Table 10. Student responses and related findings.
P. No | Students’ Perceptions | Finding Category
1 | It can help you understand the logic of the problem more easily and have a more logical solution. | Problem-Solving Efficiency
2 | These skills are helpful when modeling patterns and functions in my career to solve specific problems. | Real-Life Application
3 | They can help solve problems or develop codes or programs. | Programming Application
4 | To be able to approach problems differently and see panoramas that are not very visible if you approach things in a fixed way. | Real-Life Application
5 | They are of great help since the requested problems could be resolved more quickly and be able to provide an adequate and precise solution. | Problem-Solving Efficiency
6 | It would help me a lot since the previous points contribute to computational Engineering, and it would be good to analyze and learn more about each sub-competency. | Engineering Application
7 | In the aspect of somewhat complex exercises, it will help us to fragment the problem to solve it in steps. | Problem-Solving Efficiency
8 | These skills facilitate the solution and creation of a project, as well as the ability to develop, think, and apply all these points correctly. | Problem-Solving Efficiency
9 | With logic and coherent decision-making, one can know how to recognize a structure, analyze it, and decide what steps or paths to take to solve a problem correctly and effectively. | Decision-Making
10 | They can help you solve problems more efficiently and effectively, lead to more structured programming, and apply more efficient structures instead of longer and more complex processes. | Programming Application
11 | In everyday life, we face these problems since this knowledge is necessary to be more visual and have better results. | Real-Life Application
12 | They would help me improve my Engineering skills since they are beneficial and greatly help at a complex level. | Engineering Application
13 | These skills can help me solve real-life problems; I can develop critical thinking, which will help me solve problems faster. | Real-Life Application
14 | To improve my logic, make my codes shorter, correct, and effective. | Programming Application
15 | I imagine that over time, these skills become tools that, in a certain way, can make you more effective when programming. | Programming Application
16 | I believe that they would help me streamline my mind and look for new ways to solve problems, not only focusing on the basics but also seeking to improve problem-solving. | Problem-Solving Efficiency
17 | These skills can greatly help when thinking about the problem to be solved in broad strokes, where it is more important to have an abstract vision of the problem instead of focusing so much on the code itself. | Problem-Solving Efficiency
18 | I think CT skills development is widely helpful and, to a certain level, necessary for programming. | Programming Application
19 | I would like to apply these skills to my code writing to create efficient algorithms that are, therefore, faster to run. | Programming Application
20 | Well, I believe that developing these skills makes our brain think in a more structured way. The creative side comes in how we want to solve the problem: how we will make it more efficient and effective. Without a doubt, the more we develop these skills, the easier it becomes to think of a solution. We must not avoid being blinded by traditional solutions because it can prevent our creative side from developing and proposing better ways to solve it. | Problem-Solving Efficiency
Table 11. Key findings and evidence of triangulation across instruments.
Instrument | Key Findings | Evidence of Triangulation
Computational Thinking Test (CTT) | Students performed proficiently in pattern recognition and abstraction; older students outperformed younger ones. Public-institution students showed better performance than private-institution students. | CTT scores align with self-assessments in the CTS. Qualitative feedback indicates strengths in problem structuring and application of CT principles.
Computational Thinking Scale (CTS) | Students rated themselves proficient, especially in abstraction and problem-solving. Boys showed higher confidence in specific areas, while Computer Science students excelled overall. | Performance gaps in algorithmic thinking were identified across instruments. Qualitative insights suggest struggles with complex problem decomposition.
Open-Ended Assessment | Students view CT as crucial for structured problem-solving but face challenges in decomposition and applying algorithmic thinking. | Findings from the CTT and CTS remained high for decomposition skill, reinforcing the need for foundational training for the remaining factors.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
