Article

Unplugged Activities for Teaching Decision Trees to Secondary Students—A Case Study Analysis Using the SOLO Taxonomy

by Konstantinos Karapanos 1, Vassilis Komis 2, Georgios Fesakis 3, Konstantinos Lavidas 2, Stavroula Prantsoudi 3 and Stamatios Papadakis 4,*

1 Music School of Patras, Ministry of Education, 26335 Patras, Greece
2 Department of Educational Sciences & Early Childhood Education, University of Patras, 26504 Patras, Greece
3 Department of Preschool Education Sciences and Educational Design, University of the Aegean, 85100 Rhodes, Greece
4 Department of Preschool Education, University of Crete, 74100 Rethymnon, Greece
* Author to whom correspondence should be addressed.
AI 2025, 6(9), 217; https://doi.org/10.3390/ai6090217
Submission received: 9 July 2025 / Revised: 2 September 2025 / Accepted: 3 September 2025 / Published: 5 September 2025

Abstract

The integration of Artificial Intelligence (AI) technologies in students’ lives necessitates the systematic incorporation of foundational AI literacy into educational curricula. Students are challenged to develop conceptual understanding of computational frameworks such as Machine Learning (ML) algorithms and Decision Trees (DTs). In this context, unplugged (i.e., computer-free) pedagogical approaches have emerged as complementary to traditional coding-based instruction in AI education. This study examines the pedagogical effectiveness of an instructional intervention employing unplugged activities to facilitate conceptual understanding of DT algorithms among 47 9th-grade students within a Computer Science (CS) curriculum in Greece. The study employed a quasi-experimental design, utilizing the Structure of Observed Learning Outcomes (SOLO) taxonomy as the theoretical framework for assessing cognitive development and conceptual mastery of DT principles. Quantitative analysis of pre- and post-intervention assessments demonstrated statistically significant improvements in student performance across all evaluated SOLO taxonomy levels. The findings provide empirical support for the hypothesis that unplugged pedagogical interventions constitute an effective and efficient approach for introducing AI concepts to secondary education students. Based on these outcomes, the authors recommend the systematic implementation of developmentally appropriate unplugged instructional interventions for DTs and broader AI concepts across all educational levels, to optimize AI literacy acquisition.

1. Introduction

AI applications have fundamentally transformed human experience across diverse domains, permeating educational, healthcare, entertainment, and social contexts. Students are increasingly exposed to AI-enabled technologies such as smartphones, facial recognition systems, conversational agents, intelligent home automation, and autonomous vehicles from early developmental stages [1]. AI advancements mostly leverage Machine Learning (ML) methodologies [2], requiring educational institutions to reconsider their pedagogical frameworks in order to equip students with foundational competencies for comprehending AI technologies.
Despite the ubiquity of these technologies, empirical evidence suggests that students frequently lack comprehensive understanding of underlying AI concepts [3,4]. These findings have prompted initial research initiatives aimed at developing educational materials and methodologies to facilitate ML instruction within secondary education contexts [5,6,7]. Among other questions, educational researchers are investigating whether ML concepts can be effectively taught to secondary school students (ages 12–15) through interactive and engaging pedagogical strategies.
Several researchers have advocated for the integration of unplugged (i.e., computer-free) activities—hands-on learning experiences that do not require computational devices—in AI concept instruction [5,8]. These unplugged methodologies appear to serve a complementary function to traditional programming-based activities, contributing to more comprehensive conceptual understanding [6]. However, the documented learning outcomes associated with such approaches remain limited, and researchers have identified significant gaps in understanding optimal strategies for introducing AI concepts to young learners [9].
A critical consideration in this pedagogical landscape involves the alignment of AI-related educational content with established cognitive assessment frameworks. Specifically, the Structure of the Observed Learning Outcome (SOLO) Taxonomy and Bloom’s Taxonomy provide systematic methodologies for categorizing learning outcomes and cognitive competencies [10,11], offering potential scaffolding for curriculum development in AI education. The integration of these frameworks may enhance the effectiveness of AI instruction by providing structured approaches to skill assessment and learning progression.
Contributing to this research field, this study proposes a comprehensive framework for enhancing AI-driven educational interventions through the application of the SOLO cognitive assessment taxonomy. To address this research objective, a pedagogical intervention was designed and implemented within the framework of the official Greek Computer Science curriculum for secondary education. The study investigates whether engaging students with decision tree (DT) concepts through unplugged pedagogical activities is feasible in practice and contributes positively to learning effectiveness. Learning outcomes were measured using the SOLO taxonomy, thereby contributing to the development of a more dynamic learning assessment model. Inferential statistical analysis employing paired-sample tests was conducted to determine whether improvements in student knowledge acquisition were statistically significant. The findings of this research may be utilized to support educators and educational advisors in developing a more comprehensive understanding of learning outcomes across SOLO taxonomy levels when teaching ML concepts, particularly DTs. These insights contribute to the development of innovative and effective pedagogical approaches in CS and AI education.
The paper is structured as follows: Section 2 presents the theoretical background, comprising a literature review of studies examining ML concept instruction in secondary education with particular emphasis on DTs, and an exposition of SOLO taxonomy levels adapted for classifying learning outcomes in DT instruction. Section 3 delineates the research implementation framework, including the study’s purpose, research questions, participant characteristics, and the designed educational intervention. Section 4 describes the evaluation metrics and data analysis procedures, also providing the assessment instruments employed. Section 5 discusses the research findings in response to the stated research questions and existing literature. Section 6 summarizes the study’s contribution to the field, and Section 7 discusses limitations of the research and future research directions.

2. Theoretical Background

2.1. Teaching ML Concepts in Secondary Education

One of the pedagogical approaches to DT instruction involves unplugged (i.e., computer-free) activities, which bypass the programming concepts, techniques, and technology use that may constitute significant learning barriers for secondary education students. In the unplugged approach, the instruction of DTs is not directly concerned with algorithmic mechanisms but rather focuses on data comprehension and manipulation [12]. It is essential for students to understand how input data provided to ML models influences predictive performance and, consequently, model outputs [13].
The instruction of DTs is particularly relevant as numerous ML applications utilized by adolescents in their daily activities rely on DT methodologies for outcome prediction and data classification. Representative examples include spam email detection systems, medical diagnostic tools, individual credit assessment algorithms, customer behavior analysis for purchase prediction, and various other applications. DTs’ hierarchical structure parallels human decision-making processes across diverse problem domains, presenting considerable pedagogical advantages, since they enhance comprehensibility and model construction [14]. DTs are regarded as a conceptually accessible and approachable ML methodology for secondary education students and are consequently incorporated into educational curricula [15,16] and instructional materials [5].
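The branching logic that makes DTs parallel human decision-making can be made concrete as nested conditionals. The following is a minimal illustrative sketch of one of the applications named above, spam detection; the attributes (`has_suspicious_link`, `unknown_sender`) are hypothetical and chosen only to mirror the tree's if/else structure, not drawn from any real spam filter:

```python
def classify_email(has_suspicious_link: bool, unknown_sender: bool) -> str:
    """Hypothetical two-level decision tree for spam detection,
    written as nested conditionals to mirror the tree's branching."""
    if has_suspicious_link:      # root node: most informative attribute first
        return "spam"            # leaf node
    if unknown_sender:           # internal node on the "no link" branch
        return "spam"            # leaf node
    return "not spam"            # leaf node
```

Tracing an email through these conditions from root to leaf is exactly the prediction process a DT performs, which is why the structure is considered accessible to secondary students.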
Within K-12 educational contexts, researchers have engaged students in problem-solving activities utilizing Logo-inspired technological tools [17,18,19], including Teachable Machine, Machine Learning for Kids, Cognimates, DoodleIt, and robotics platforms with intelligent agents, to facilitate learning organization and collaborative development of innovative solutions and raise awareness of potential bias. Additional research has implemented blended pedagogical approaches combining digital technological tools with unplugged activities [6,20]. Hitron et al. [13] contended that ML algorithms present substantial comprehension challenges for younger students (ages 10–13 years), leading them to focus on data preparation and exploration of algorithm-generated models while avoiding explicit instruction of algorithmic mechanisms (maintaining a black-box approach). Alternative research initiatives propose exclusively unplugged methodologies [21], the utilization of data cards [12], or tangible materials such as pasta [8]. While limited in scope, these efforts demonstrate that unplugged activities can enhance the accessibility of complex AI concepts for younger learners while providing visual representation of the complete decision-making process through the high explanatory capacity inherent in DT structures.
A study conducted by Lindner et al. [21] involved 14–16-year-old students from a German Real Schule participating in a 6–8 h instructional program. The research objective centered on the development and evaluation of an unplugged activity sequence for AI introduction, incorporating classification activities utilizing DTs. Students constructed classification models to differentiate between aggressive and non-aggressive (biting and non-biting) monkeys using DT methodologies. This process involved examining how sample elements (pictorial cards depicting monkey faces) corresponded to predetermined categories (aggressive or non-aggressive classification). Students acquired fundamental ML concepts knowledge, including training and testing data distinctions, DT classification mechanisms, and the inherent limitations of achieving complete accuracy. The findings demonstrated that unplugged AI activities are appropriate for introducing complex AI topics to 14–16-year-old learners, promoting active engagement and resulting in measurable improvements in understanding of core concepts and AI’s societal implications.
Fleischer et al. [12] conducted research across two secondary school classes in Germany, involving 11–12-year-old students over eight 45-minute instructional sessions. The study aimed to investigate student capacity for constructing DTs using data cards for classification purposes. Results indicated that this approach serves as an effective introductory method for ML instruction at the secondary level, facilitating the development and application of relevant heuristic reasoning and analytical skills among participating students.
Lehner et al. [9] introduced three novel pedagogical tools designed to enhance the practicality of unplugged DT learning activities. Pilot implementation confirmed the effectiveness of these instruments, demonstrating that students could successfully construct DTs utilizing three-dimensional printed components with adaptive modification capabilities. Although data card preparation required substantial time investment, these materials accelerated the model training process for students. This contribution exposed students to learning processes through applying simplified mathematical formulas for optimal split calculations, thereby providing them with direct experience of fundamental ML mechanisms.
Ma et al. [8] conducted research involving 12–13-year-old students participating in an eleven-session afterschool program in the United States, designed to introduce fundamental ML algorithms, specifically DTs and k-nearest neighbors. The “Pasta Land” unplugged activity required students to construct DTs using various pasta types as classification elements, subsequently applying these models to classify unidentified pasta specimens. Upon program completion, students demonstrated comprehension of the relationship between interrogative processes, dataset partitioning, and DT construction. Results indicated successful student comprehension of fundamental concepts and effective engagement in DT creation and design processes.
Michaeli et al. [22] extended the unplugged methodology by incorporating Lindner et al.’s [21] activity within the initial phase of their instructional framework, subsequently integrating a digital data analysis tool (Orange) for comparative analysis. This blended approach enabled students to generate computer-based DTs using identical monkey datasets and evaluate model performance against their manually constructed unplugged counterparts. This comparative methodology facilitates student examination of manual versus automated DT generation processes, supports experimentation with datasets and hyperparameters, and promotes critical reflection on underlying computational concepts.
The studies collectively demonstrate positive effects of unplugged activities on student learning outcomes in DT instruction. However, these investigations frequently exhibit limited transparency regarding the theoretical frameworks employed for learning outcome classification and cognitive skill assessment, particularly concerning established taxonomies such as Bloom’s taxonomy, SOLO, or alternative cognitive models. To address this methodological gap, the present study proposes a comprehensive framework applying the SOLO cognitive assessment model, with the objective of enhancing AI education through systematic categorization and evaluation of learning outcomes.

2.2. Introduction to the SOLO Taxonomy

The SOLO taxonomy represents a hierarchical framework for classifying student performance based on the structural complexity and sophistication of demonstrated understanding, progressing from basic recall to sophisticated generalization and abstraction [10]. This developmental assessment model evaluates learning outcomes through five distinct cognitive levels: pre-structural, uni-structural, multi-structural, relational, and extended abstract understanding. Each level represents increasing cognitive complexity and conceptual integration, providing educators with a systematic approach to evaluate student comprehension depth rather than merely content quantity.
The SOLO taxonomy’s emphasis on structural complexity aligns particularly well with the hierarchical nature of DT algorithms, where understanding progresses from basic node recognition to sophisticated comprehension of branching logic, pruning strategies, and ensemble methods. In the context of AI education, this taxonomy offers the potential for mapping student cognitive development as they navigate increasingly complex ML concepts, from fundamental data classification principles to advanced algorithmic optimization and real-world application scenarios.
The present study adapts the SOLO taxonomy specifically for evaluating learning outcomes in DT instruction, establishing a novel assessment framework that has not previously been applied to ML concept education. Table 1 delineates the five hierarchical levels of cognitive complexity, each corresponding to distinct learning objectives and assessment criteria within the DT instructional context.
To operationalize learning outcomes mapping across SOLO taxonomy levels, targeted assessment criteria were developed in properly structured worksheets. These instruments enable precise classification of student responses according to cognitive complexity and conceptual integration. Due to methodological constraints and temporal limitations within the research design, assessment of the highest cognitive level—extended abstract understanding—was not implemented in the current study. The comprehensive assessment framework with corresponding evaluation criteria is presented in Table 2, representing a foundational structure for future research expansion and refinement.
The application of SOLO taxonomy within AI educational contexts offers multiple pedagogical advantages that address fundamental challenges in ML instruction. Primarily, the framework provides a systematic and transparent mechanism for assessing student understanding depth, which proves essential when instructing complex AI concepts that require progressive cognitive development [23,24]. This structured approach enables teachers to distinguish between simple memorization of algorithmic steps and genuine comprehension of underlying computational principles governing DTs construction and optimization.
The SOLO framework further facilitates the systematic design of learning activities that promote increasingly sophisticated cognitive engagement. Through this progression, students advance from surface-level pattern recognition, through relational understanding of algorithmic dependencies, to achieving abstract conceptualization of fundamental ML principles [10]. This progressive cognitive scaffolding proves particularly valuable in DT instruction, where students must integrate multiple conceptual layers including data preprocessing, feature selection, splitting criteria optimization, and model validation techniques.
The taxonomy also supports metacognitive skill development by providing students with explicit frameworks for self-assessment and reflection on their learning progression [10]. This metacognitive awareness enables students to recognize their current level of understanding, identify knowledge gaps, and develop strategic approaches for advancing their comprehension of AI systems and their practical applications. In the context of DT learning, this metacognitive development facilitates student understanding of when and why specific algorithmic choices are appropriate for different data characteristics and problem contexts. Teachers are also enabled to identify specific cognitive barriers in student understanding, allowing for targeted pedagogical interventions that address conceptual gaps before progressing to more complex material.

3. Materials and Methods

3.1. Research Description

In response to the increasing need for systematic research initiatives addressing ML pedagogical methodologies, given the prevalence of ML applications within the daily experiences of secondary education students, we designed and implemented a teaching intervention to facilitate ML instruction for secondary education students through DT methodologies. Such research endeavors possess significant potential to inform curricular development discussions for secondary education contexts, both within Greece and internationally.
The intervention design incorporated Merrill’s [25] instructional design principles for activities and worksheets development, following the methodological approach established by Sanusi, Oyelere et al. [5]. Merrill [25] delineated five fundamental instructional design principles applicable to any educational program or intervention to achieve effective and efficient instruction. These principles encompass: transforming instructional content into authentic real-world problems (problem-centered approach), activating existing knowledge as a foundation for new learning (activation), demonstrating knowledge to learners through systematic presentation (demonstration), facilitating practical application of new knowledge by students (application), and integrating acquired knowledge into learners’ experiential contexts (integration).
Furthermore, to systematically document student knowledge construction progression throughout and following the intervention completion, the SOLO taxonomy was employed as the methodological framework for classifying and evaluating learning outcomes.

3.2. Goal and Research Questions

The goal of our research was to advance the field of AI instruction, specifically ML pedagogy within K-12 educational contexts, through an instructional intervention designed and implemented based on the official Greek secondary school Computer Science curriculum. The study investigated the teachability and pedagogical utility of DT instruction for secondary education students. This research focuses on the implementation of exclusively unplugged instructional activities and the systematic application of the SOLO taxonomy for learning outcomes classification. The intervention’s implementation framework comprises three principal components: (1) Machine learning instruction through DTs utilization with emphasis on data comprehension, (2) Implementation within 9th grade CS curricula through integrated unplugged pedagogical activities, (3) Learning outcomes classification based on the SOLO taxonomy hierarchical levels.
The overarching objective of this research was to examine the cognitive development of 9th grade students during DT instruction through unplugged activities, employing the SOLO taxonomy for learning outcomes assessment. The research addresses the following research questions:
  • To what extent does the performance of 9th grade students improve following an educational intervention to teach decision trees through unplugged activities?
  • How is the learning (cognitive) progression of 9th grade students classified into the uni-structural, multi-structural, and relational levels of the SOLO taxonomy when they are taught decision trees through unplugged activities?

3.3. Research Design

AI and ML concept instruction were officially integrated in the Greek educational system through the revised secondary school CS curriculum, implemented from the 2024–2025 school year [26]. Within Thematic Field 1: “Algorithmics and Programming of Computer Systems”/Subsection 1.3.3: “Applications of Artificial Intelligence,” and Thematic Field 5: “Digital Technologies and Society”/Subsection 5.2.3: “The Impact of Algorithms, Data Science and AI in Society,” teachers are directed to engage students with AI and ML through relevant conceptual instruction and practical applications. Based on Ministry of Education guidelines, the curricular content accommodates experimental approaches and diverse implementation methodologies, supporting student learning processes and attitudinal development toward specific conceptual domains.
In this study, a preliminary experimental design was developed to facilitate ML instruction through DT activities. The learning scenario was structured across four sequential phases, incorporating corresponding assessment instruments (pre-test, first intermediate test, second intermediate test, post-test). Throughout successive implementation phases, each student group completed a series of three corresponding worksheets designed to provide instructional scaffolding and guide activity progression. The objective was to evaluate the proposed intervention framework and assess students’ cognitive advancement through systematic analysis of learning outcomes and their correspondence to three hierarchical SOLO taxonomy levels: uni-structural, multi-structural, and relational.

3.4. Participants and Context

The study was conducted in a Greek public secondary school, the Music School of Patras, as a pilot implementation of teaching ML concepts and methodologies based on the revised official secondary school CS curriculum. The intervention was delivered by the classroom teacher/researcher, a secondary education CS teacher with approximately thirty years of pedagogical experience. The researcher had been specializing in AI concept instruction within secondary education, in the context of his postgraduate studies on Informatics and didactic methodologies, for almost two years prior to the implementation of the research.
The implementation process comprised 5 instructional sessions of 45 min each, conducted during March 2025. A total of 47 students from two 9th grade classes participated, attending weekly sessions divided into subgroups of 11–12 students, predominantly working in collaborative groups of three participants. The demographic composition included 14 male and 33 female students aged 13 to 14 years. All participants reported no previous exposure to the investigated concepts, while maintaining regular attendance in mandatory secondary school CS classes.
The educational scenario was structured according to the following activity framework:
(a) Cognitive and psychological preparation—aimed at identifying students’ prior knowledge
(b) Teaching and construction of new knowledge and skills
(c) Application and implementation of new knowledge
(d) Evaluation of new knowledge
(e) Metacognitive activities (designated for future implementation).
During Phase (a) (i.e., cognitive and psychological preparation), a pre-test was administered to assess participants’ entry knowledge prior to the intervention implementation. Demographic data regarding gender, age, and grade level were also collected during this phase. In the knowledge construction phase (b) (i.e., teaching and construction of new knowledge and skills), a complete DT (Figure 1) was presented to students as an exemplar of the concept under investigation. The DT was constructed from musical instrument data, with each instrument possessing distinct characteristics and properties. The model was developed following discussions with music teachers and incorporating primary instrument categories taught within Greek music schools.
Through utilization of this DT and corresponding data, students’ existing knowledge regarding musical instruments—derived from their music school attendance—was leveraged to introduce fundamental DT concepts and functions. Based on this knowledge framework and through relevant pedagogical activities, concepts were developed including identification and description of DT elements, examination of data partitioning processes based on characteristic features, and comprehension and explanation of prediction mechanisms within ML models.
In Phase (c) (i.e., application and implementation of new knowledge), students received instruction in establishing criteria for training data partitioning, designing autonomous DT models, identifying predictive errors within their constructed models, and implementing necessary refinements. Students concluded that DT models may undergo modification depending upon the training data employed in their construction, thereby enhancing predictive accuracy. The intervention concluded with Phase (d) (i.e., evaluation of new knowledge), through implementation of a peer evaluation process whereby each student group assessed the DTs constructed by other groups, based on established criteria including model simplicity and efficiency, presence of sufficient attributes for effective data partitioning, and adequate number of decision nodes to facilitate comprehensive data classification. Upon completion of the intervention, the post-test was administered, consisting of the same questions as those employed in the pre-test assessment. Metacognitive activities (phase (e)) were not implemented and are subject to future research design.
The investigation of students’ conceptual understanding and learning experiences during ML engagement was grounded in sociocultural theories of learning and participation [27]. Throughout the study implementation, students actively engaged in the learning process through reflective activities designed for collaborative group work, utilizing existing knowledge and contextual examples from their sociocultural environments [5,28]. This pedagogical approach aimed to minimize cognitive load while employing unplugged activities to enhance accessibility for all participants [20]. The intervention sought to introduce fundamental ML concepts through promotion of critical thinking and active student engagement.
Furthermore, no previous research was detected investigating learning achievement in DT instruction for secondary school students utilizing the SOLO taxonomy classification. Therefore, a SOLO-based assessment framework was developed to evaluate student performance in DT learning, specifically adapted for secondary education contexts and aligned with the hierarchical structure of SOLO taxonomy levels.

3.5. Metrics and Data Analysis

To evaluate student processing of instructional materials and measure cognitive understanding progression of pedagogical concepts, a pretest-posttest data collection methodology was employed, utilizing identical question sets for both assessments. Concurrently, data was collected through two supplementary intermediate individual assessments: the first (1st intermediate test) administered following completion of phase (b) (i.e., Teaching and construction of new knowledge and skills), and the second (2nd intermediate test) implemented after phase (c) (i.e., Application and implementation of new knowledge). The first test facilitated examination of participant learning outcomes immediately following concept instruction and DT model design methodology. The second test evaluated individual student integration of learning experiences regarding DT models following application and implementation activities involving active group participation.
All participant responses were recorded in individual spreadsheet files per student and assessment. Each correct response contributed a specific score toward the final total for each test. Additionally, each question provided a unique ranking factor for linking learning outcomes to uni-structural, multi-structural, and relational SOLO taxonomy levels (Table 3). Both pre-test and post-test comprised identical sets of nine closed-ended questions with a maximum score of 11 points, designed to ensure performance comparability across measurement points. These instruments primarily targeted comprehensive learning outcomes across uni-structural, multi-structural, and relational SOLO taxonomy levels, capturing baseline knowledge and post-intervention gains (Table 4).
The first intermediate individual test contained four items (maximum score: 5 points), emphasizing procedural understanding and factual recall corresponding to uni-structural and multi-structural cognitive levels (Table 5). The second intermediate individual test included six items (maximum score: 9 points), with the final question (weighted at 4 points) requiring independent DT construction from provided training datasets (Table 6). This task was specifically designed to assess higher-order thinking corresponding to the relational SOLO taxonomy level, necessitating knowledge integration and learning transfer. The comprehensive design and scoring framework aligned with the study’s learning objectives while ensuring reliability and sensitivity in measuring student progression across distinct cognitive levels.
Concerning the response formats, questions were categorized into three types: (a) single-response items requiring selection from predetermined alternatives, (b) multiple-response items permitting selection of several answers from provided options, and (c) sequencing items requiring students to arrange given statements in correct order. The second intermediate test incorporated an open-ended item directly contributing to students’ classification within corresponding SOLO taxonomy levels. Quantitative analysis of aggregated data provided measurable representation of students’ cognitive progression across experimental process stages, interpreted through SOLO taxonomy theoretical frameworks. Individual student performance per assessment and per SOLO taxonomy level (uni-structural, multi-structural, relational) was classified as “High”, “Medium” or “Low” based on summative scores of correct responses for each cognitive level. Statistical analysis employed descriptive statistics including means and standard deviations. Intervention effectiveness was evaluated through parametric dependent samples t-test procedures. All statistical assumptions remained within acceptable ranges, permitting the subsequent steps of the analysis.
Table 4 presents the pre-test and post-test questions. The questions were identical for both tests delivered before and after the implementation of the intervention.
Table 5 presents the 1st individual test questions.
Table 6 presents the 2nd individual test questions.
One participant was unable to complete the intervention due to illness-related absence during the final two sessions, resulting in a final sample of forty-six active participants (n = 46). The results in the sections that follow reflect this adjustment.

4. Results

This section presents comprehensive responses to the research questions based on empirical evidence from the educational intervention and its documented effects on the performance and cognitive development of 9th grade students.

4.1. Answer to Research Question 1

Pre-test responses provide compelling evidence that, prior to implementation of the intervention, most participants possessed minimal exposure to DT and ML concepts, confirming the absence of prior formal instruction in these domains. Quantitative analysis revealed that 21 of 46 participants (~45.7%) achieved overall scores below 2.3 out of 10 points, while an additional 21 participants (~45.7%) scored between 2.3 points and the minimum passing score (5 points). Overall, 42 of 46 participants (~91.3%) performed below the established passing threshold (5 points), indicating substantial knowledge gaps in fundamental ML concepts prior to pedagogical intervention. This baseline assessment validates the appropriateness of the target population for investigating the effectiveness of unplugged DT instruction, as participants demonstrated minimal prior knowledge that could potentially confound learning outcome measurements.
Post-test results demonstrated substantial and statistically significant improvement across all performance metrics. Specifically, 17 of 46 participants (~37.0%) achieved scores exceeding 63% of maximum possible points, while 24 of 46 participants (~52.2%) surpassed 50% of maximum scoring potential. Only 5 of 46 participants (~10.9%) scored below baseline performance levels, indicating intervention effectiveness for most participants. Individual performance trajectory analysis reveals remarkable improvement patterns: one participant achieved a twelve-fold score increase from pre-test to post-test; ten participants demonstrated quadrupled performance scores; nine participants achieved tripled performance outcomes; thirteen participants doubled their assessment scores; and thirteen additional participants showed measurable improvement. Notably, every participant demonstrated some degree of performance improvement, indicating positive intervention effects across the full sample.
Dependent samples t-test analysis reveals statistically significant differences in student performance following engagement with unplugged educational activities. The total mean performance value following the DT instructional intervention (M = 6.16, SD = 1.55) significantly exceeded pre-intervention performance means (M = 2.30, SD = 1.29), t(45) = 18.83, p < 0.001, r = 0.94.
Furthermore, dependent samples t-test analysis conducted separately for each SOLO taxonomy level demonstrated significant improvement across all cognitive complexity domains: uni-structural level (t(45) = 14.31, p < 0.001, r = 0.91), multi-structural level (t(45) = 9.95, p < 0.001, r = 0.83), and relational level (t(45) = 9.64, p < 0.001, r = 0.82). These results indicate that the intervention was effective in promoting learning across multiple cognitive complexity levels, since significant improvement in the participants’ learning achievement score was observed after the implementation of the intervention activities (Table 7).
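For readers who wish to check the reported effect sizes, the correlation effect size r can be recovered from a t statistic and its degrees of freedom via the standard conversion r = sqrt(t² / (t² + df)). The sketch below implements that formula (together with the dependent-samples t statistic itself, computed on made-up pre/post scores, not the study’s data) and reproduces the values reported above.

```python
import math

def paired_t(pre, post):
    """Dependent-samples t statistic and degrees of freedom for pre/post scores."""
    n = len(pre)
    diffs = [b - a for a, b in zip(pre, post)]
    mean_d = sum(diffs) / n
    var_d = sum((d - mean_d) ** 2 for d in diffs) / (n - 1)
    return mean_d / math.sqrt(var_d / n), n - 1

def effect_size_r(t, df):
    """Convert a t statistic to the correlation effect size r."""
    return math.sqrt(t * t / (t * t + df))

# Reproduce the effect sizes reported in the article from t and df:
print(round(effect_size_r(18.83, 45), 2))  # 0.94 (overall)
print(round(effect_size_r(14.31, 45), 2))  # 0.91 (uni-structural)
print(round(effect_size_r(9.95, 45), 2))   # 0.83 (multi-structural)
print(round(effect_size_r(9.64, 45), 2))   # 0.82 (relational)
```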

4.2. Answer to Research Question 2

Individual student cognitive progression was systematically evaluated through analysis of performance scores across the four sequential assessments throughout the intervention period. Each participant received classification rankings of “Low,” “Medium,” or “High” within each SOLO taxonomy level based on their demonstrated competency relative to established criteria. This assessment approach enables detailed examination of cognitive development trajectories across different levels of conceptual complexity.

4.2.1. Uni-Structural SOLO Level Students’ Cognitive Progression

Analysis of student performance at the SOLO uni-structural level reveals a positive developmental trajectory throughout the intervention period (Figure 2). Initial pre-test assessment indicated that 82.61% of participants were classified at the “Low” performance level, suggesting minimal foundational knowledge of basic DT concepts and terminology.
Progressive improvement was documented through intermediate assessments, with substantial advancement evidenced in post-test results where 86.96% of participants achieved “High” level classification, while only 6.52% remained in the “Low” performance category. This represents a remarkable shift in the distribution of cognitive competency, with most of the students progressing from minimal understanding to demonstration of solid foundational knowledge in DT concepts. The intermediate assessments (1st and 2nd) demonstrated consistent upward trajectories in both “Medium” and “High” ranking categories, indicating that cognitive development was progressive rather than occurring solely at intervention completion. This pattern suggests effective scaffolding of learning experiences and gradual knowledge construction processes.

4.2.2. Multi-Structural SOLO Level Students’ Cognitive Progression

Cognitive development analysis at the multi-structural SOLO level demonstrates significant but more graduated improvement patterns compared to uni-structural progression (Figure 3). Pre-intervention assessment revealed that 71.74% of participants were classified at the “Low” level, with 26.09% achieving “Medium” level performance, indicating some capacity for connecting multiple discrete elements of DT concepts.
Through successive assessments, these proportional distributions progressively reversed, with post-intervention results showing 15.22% of participants remaining at “Low” classification levels, while 84.78% achieved either “Medium” or “High” performance categories. This improvement pattern indicates successful development of students’ ability to identify and integrate multiple relevant aspects of DT construction, data analysis, and classification processes. The multi-structural level progression demonstrates students’ developing capacity to coordinate multiple pieces of information simultaneously, connect discrete DT components, and understand relationships between data attributes and classification outcomes. This cognitive development represents a crucial intermediate step toward more sophisticated relational thinking.

4.2.3. Relational SOLO Level Students’ Cognitive Progression

Student cognitive development at the relational SOLO level exhibited the most challenging progression pattern, reflecting the inherent complexity of this cognitive domain (Figure 4). Initial assessment indicated 84.78% of participants at “Low” classification levels, confirming the substantial cognitive demands associated with relational thinking.
Learning outcomes demonstrated gradual improvement following intermediate tests, though advancement occurred at a more modest rate compared to uni-structural and multi-structural levels. Post-test assessment revealed that 50% of participants achieved “Medium” level classification, indicating development of capacity for integrating DT concepts into coherent conceptual frameworks and understanding systematic relationships between algorithmic components.
The remaining participants were distributed between “Low” level classification (26.09%) and “High” level classification (23.91%), indicating considerable individual variation in achieving relational understanding. The 23.91% of students achieving “High” relational classification demonstrates that a substantial minority successfully developed sophisticated understanding capable of generalizing DT principles across contexts, explaining underlying algorithmic logic, and making connections to broader ML concepts.
The differential progression patterns across SOLO taxonomy levels provide empirical validation for the hierarchical nature of cognitive development in ML concept acquisition. The progressively slower advancement rates from uni-structural through relational levels align with theoretical predictions regarding increasing cognitive complexity demands. These findings suggest that while foundational concept acquisition can be achieved relatively readily through well-designed unplugged activities, developing relational understanding requires extended time, additional scaffolding, and possibly alternative pedagogical approaches.
The substantial individual variation observed at the relational level indicates the importance of differentiated instruction approaches that account for diverse cognitive development rates and learning preferences among secondary education students encountering complex computational concepts. In what follows, we discuss the answers to the research questions in relation to the existing literature.

5. Discussion

The systematic design and implementation of the proposed educational intervention within the framework of the official Greek Computer Science curriculum for 9th grade students resulted in significant pedagogical insights and theoretical contributions to the field of AI education. The empirical evidence substantiates the argument that structured approaches to constructing data-driven DT models are both pedagogically feasible and educationally beneficial for secondary education students [29]. While DT methodology as a ML concept is cognitively complex, through our intervention students successfully developed foundational conceptual understanding regarding DT structures, algorithmic logic, and predictive mechanisms, thereby establishing essential cognitive scaffolding for future advanced progression toward computational thinking and algorithmic literacy. The intervention’s success in demystifying abstract ML concepts suggests that carefully designed unplugged pedagogical approaches can bridge the cognitive gap between students’ intuitive decision-making processes and formal algorithmic reasoning, providing a conceptual foundation that transcends mere procedural knowledge acquisition.
In direct response to the first research question, empirical analysis demonstrates statistically significant improvement in 9th grade students’ performance following DT educational intervention utilizing unplugged activities. This finding confirms existing literature establishing that unplugged instructional methodologies enhance student mastery of technology-related concepts across diverse educational contexts [8,30,31]. The documented efficacy of unplugged activities in supporting AI concept acquisition among younger learners has been substantiated through multiple educational research initiatives [6]. By eliminating technology use, coding complexity, and digital-tool dependencies, unplugged activities enable students to develop improved understanding of abstract concepts including decision-making processes, pattern recognition methodologies, and data classification principles. This approach enhances AI concept accessibility by transforming abstract computational processes into learning experiences that align with students’ existing cognitive frameworks.
The collaborative dimension inherent in unplugged activities facilitates peer-to-peer knowledge construction and discourse, encouraging students to articulate, examine, and refine their understanding of AI system applications within authentic real-world scenarios [15,32]. This social constructivist approach aligns with established theories of learning that emphasize the importance of collaborative knowledge building and situated cognition in developing deep conceptual understanding.
Regarding the second research question, which concerned the systematic assessment of 9th grade students’ cognitive progression during DT instruction through unplugged activities, classified according to the uni-structural, multi-structural, and relational levels of the SOLO taxonomy, the empirical results demonstrate significant positive effects on student learning outcomes across all measured cognitive complexity levels. The progression of learning objectives embedded within individual activities was successfully achieved at satisfactory levels, with participant assessment scores across all three examined cognitive levels showing statistically significant improvement across successive measurements. This cognitive progression pattern aligns with the SOLO taxonomy’s theoretical predictions regarding structural complexity development in learning, validating the framework’s applicability to AI concept instruction and demonstrating the taxonomy’s utility beyond traditional academic domains. The absence of comparable studies applying the SOLO taxonomy to DT instruction represents both a limitation and a contribution of the present research. While numerous studies successfully employ SOLO taxonomy approaches for assessing mathematical [33] and programming [34,35] concepts, this study pioneers the application of structured cognitive assessment to ML education, establishing a methodological foundation for future research in AI pedagogy.
The implementation of collaborative learning methodologies represents a critical pedagogical component that likely contributed to observed improvements in student learning outcomes and engagement. Collaborative learning promotes critical thinking development while enhancing active participation and peer interaction among students. Extensive research demonstrates that collaborative learning environments significantly increase student interest and persistence in STEM disciplines [36,37], suggesting that the social dimension of learning plays a crucial role in developing positive attitudes toward complex technical subjects. In the context of DT instruction, collaborative learning facilitated peer explanation of algorithmic reasoning, collective problem-solving processes, and distributed cognitive load management, enabling students to leverage diverse perspectives and knowledge contributions in constructing comprehensive understanding of ML concepts.
Another significant pedagogical factor contributing to the intervention’s effectiveness was the contextualization of learning experiences through utilization of culturally relevant artifacts and examples aligned with students’ sociocultural backgrounds. The strategic selection of musical instrument classification as the primary DT application context leveraged students’ existing domain knowledge from their specialized music school environment, creating authentic learning opportunities that bridged abstract computational concepts with familiar real-world applications. Previous research has demonstrated the effectiveness of contextualized educational resources for implementing AI learning experiences in academic settings [38,39]. This cultural relevance principle operates through multiple cognitive mechanisms: reducing cognitive load by connecting new concepts to existing knowledge structures, enhancing motivation through personal relevance and meaning-making, and facilitating transfer of learning through authentic application contexts.

6. Conclusions

This study makes several significant contributions to the field of computer science education, particularly in the emerging area of AI literacy for secondary school students. The research demonstrates that DTs, despite their inherent complexity as ML algorithms, can be effectively taught to 9th grade students through carefully designed unplugged educational interventions. These findings challenge traditional assumptions about the age-appropriateness of AI concepts and suggest that foundational ML principles can be introduced earlier in the educational trajectory than previously considered.
The demonstrated feasibility of teaching complex ML concepts through unplugged activities suggests that AI literacy development need not be constrained by technological infrastructure limitations or advanced programming prerequisites. This finding is particularly significant for educational equity, as it reduces barriers to AI literacy that might otherwise disadvantage students in resource-constrained environments. It also carries significant implications for curriculum development and instructional design for AI education, and the broader field of computer science education.
The successful application of the SOLO taxonomy to assess cognitive progression in DT learning represents a methodological innovation that provides educators and researchers with a robust framework for evaluating the depth of student understanding in AI-related topics. This assessment approach offers a more fine-grained view of understanding than traditional score-based evaluation methods and enables the identification of specific cognitive developmental stages in ML concept acquisition.
The intervention’s success also validates the importance of scaffolded learning progressions in AI education, demonstrating that complex ML concepts can be effectively taught through carefully sequenced activities that build progressively from concrete manipulative experiences toward abstract algorithmic reasoning. This pedagogical approach aligns with constructivist learning theories while addressing the unique challenges posed by the abstract nature of computational concepts.
The study also validates the integration of collaborative learning methodologies in AI education, providing evidence that peer interaction enhances both conceptual understanding and engagement with complex technical topics. This finding supports the implementation of group-based learning activities in computer science curricula and suggests that social learning mechanisms play a crucial role in AI concept acquisition.
Finally, the importance of contextualization in AI education, as demonstrated in this study, indicates that educators should prioritize the development of culturally relevant examples and applications when designing AI learning experiences. This approach not only enhances student engagement but also promotes the development of critical perspectives on AI’s role in society and its potential impact on different communities.

7. Limitations and Future Research

While the study provides valuable insights into the teaching of DTs through unplugged methods, several limitations should be acknowledged. The research was conducted within a specific educational context (Greek 9th grade students) and curriculum framework, with a limited sample size (46 participants) drawn from a single specialized school (a secondary music school in Greece). These factors potentially limit the generalizability of the findings to other educational systems, age groups, or cultural contexts. Future research should investigate the effectiveness of the proposed intervention across diverse school types, educational settings, and student populations, with larger samples, to examine the transferability of the proposed approach.
An additional methodological limitation stems from the absence of qualitative data that could have supported the interpretation of the quantitative findings. The researcher observed notable moments of cooperation and knowledge discovery that could have been captured through student interviews, reflective writing, focus groups, and classroom observations, and analyzed to enhance understanding of student experiences. Future investigations should employ mixed-methods research designs incorporating both quantitative assessment instruments and qualitative data collection methods to explore the affective dimensions of AI concept learning through unplugged activities.
As mentioned, the final metacognitive activity related to the extended abstract level of the SOLO taxonomy was not implemented, due to time constraints. This limitation significantly impacts the mapping of students’ progression through the SOLO hierarchy levels and should be addressed in future research. Furthermore, students’ assessments for multi-structural and relational cognitive levels relied on a single item (question). Utilizing only one item for each of these critical cognitive levels in the first individual test may, under specific conditions, yield inaccurate classification results and limit the precision of cognitive level determination. Future research should incorporate multiple assessment items across each SOLO taxonomy level to achieve more accurate classification of learning outcomes.
The study’s focus on short-term learning outcomes does not address the long-term retention of AI concepts or their transfer to more advanced computational contexts. Longitudinal studies would provide valuable insights into the durability of the learning achieved and the extent to which foundational AI understanding acquired through these methods supports subsequent learning in computer science and related fields.
Additionally, while the study demonstrates the effectiveness of unplugged approaches for teaching DTs, further research is needed to explore the optimal balance between unplugged and technology-enhanced learning experiences. Investigation into how unplugged foundational learning can be effectively transitioned to hands-on programming and implementation activities would provide valuable guidance for curriculum sequencing.
The findings of this research contribute to the growing recognition that AI literacy should be considered an essential component of 21st-century education. By demonstrating that foundational AI concepts can be successfully taught to secondary school students, this study supports the integration of AI education into computer science curricula. The development of AI-literate citizens is crucial for navigating an increasingly algorithm-driven society and for ensuring that future generations are equipped to engage critically and constructively with AI technologies. Educational systems should adapt to prepare students for a future in which AI literacy will be as fundamental as traditional computational skills. This study provides a foundation for the systematic integration of ML concepts into secondary education and provides a methodological framework for assessing student learning in this emerging domain. The demonstrated effectiveness of unplugged approaches, combined with collaborative learning and contextualized instruction, offers a promising pathway for democratizing AI education and fostering the development of critically engaged, AI-literate citizens.

Author Contributions

Conceptualization, G.F., V.K., K.L., K.K. and S.P. (Stavroula Prantsoudi); methodology, G.F. and V.K.; validation, K.K. and K.L.; formal analysis, K.L. and K.K.; investigation, K.K.; resources, G.F., V.K., S.P. (Stavroula Prantsoudi) and K.K.; data curation, K.K., K.L. and S.P. (Stavroula Prantsoudi); writing—original draft preparation, K.K. and K.L.; writing—review and editing, S.P. (Stavroula Prantsoudi); visualization, S.P. (Stamatios Papadakis); supervision, V.K. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

This study was conducted in accordance with the Declaration of Helsinki and approved by the Institutional Review Board (or Ethics Committee) of the Department of Educational Science and Early Childhood Education of the University of Patras (83951/8-11-2024).

Informed Consent Statement

Informed consent was obtained from all parents/guardians of the students involved in this study.

Data Availability Statement

Data are available upon request from the corresponding author.

Acknowledgments

The authors wish to thank the school principal, the teachers and students at the Music School of Patras for their participation in the research study. For transparency reasons, the authors wish to inform the reader that an LLM system has been used for the exclusive purpose of improving the quality of English language and readability of parts of the text.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
AI	Artificial Intelligence
CS	Computer Science
DT	Decision Tree
ML	Machine Learning
SOLO	Structure of the Observed Learning Outcome

References

  1. Druga, S.; Vu, S.T.; Likhith, E.; Qiu, T. Inclusive AI Literacy for Kids around the World. In Proceedings of the FabLearn 2019, New York, NY, USA, 9 March 2019; pp. 104–111. [Google Scholar]
  2. Rodríguez-García, J.D.; Moreno-León, J.; Román-González, M.; Robles, G. Evaluation of an Online Intervention to Teach Artificial Intelligence with LearningML to 10-16-Year-Old Students. In Proceedings of the 52nd ACM Technical Symposium on Computer Science Education, Virtual Event, 3 March 2021; pp. 177–183. [Google Scholar]
  3. Kim, K.; Kwon, K.; Ottenbreit-Leftwich, A.; Bae, H.; Glazewski, K. Exploring Middle School Students’ Common Naive Conceptions of Artificial Intelligence Concepts, and the Evolution of These Ideas. Educ. Inf. Technol. 2023, 28, 9827–9854. [Google Scholar] [CrossRef]
  4. Ottenbreit-Leftwich, A.; Glazewski, K.; Jeon, M.; Jantaraweragul, K.; Hmelo-Silver, C.E.; Scribner, A.; Lee, S.; Mott, B.; Lester, J. Lessons Learned for AI Education with Elementary Students and Teachers. Int. J. Artif. Intell. Educ. 2023, 33, 267–289. [Google Scholar] [CrossRef]
  5. Sanusi, I.T.; Oyelere, S.S.; Vartiainen, H.; Suhonen, J.; Tukiainen, M. Developing Middle School Students’ Understanding of Machine Learning in an African School. Comput. Educ. Artif. Intell. 2023, 5, 100155. [Google Scholar] [CrossRef]
  6. Lee, I.; Ali, S.; Zhang, H.; DiPaola, D.; Breazeal, C. Developing Middle School Students’ AI Literacy. In Proceedings of the 52nd ACM Technical Symposium on Computer Science Education, Virtual Event, 3 March 2021; pp. 191–197. [Google Scholar]
  7. Sabuncuoglu, A. Designing One Year Curriculum to Teach Artificial Intelligence for Middle School. In Proceedings of the 2020 ACM Conference on Innovation and Technology in Computer Science Education, Trondheim, Norway, 15 June 2020; pp. 96–102. [Google Scholar]
  8. Ma, R.; Sanusi, I.T.; Mahipal, V.; Gonzales, J.E.; Martin, F.G. Developing Machine Learning Algorithm Literacy with Novel Plugged and Unplugged Approaches. In Proceedings of the 54th ACM Technical Symposium on Computer Science Education V. 1, Toronto, ON, Canada, 2 March 2023; pp. 298–304. [Google Scholar]
  9. Lehner, L.; Landman, M. Unplugged Decision Tree Learning—A Learning Activity for Machine Learning Education in K-12. In Creative Mathematical Sciences Communication; Fernau, H., Schwank, I., Staub, J., Eds.; Lecture Notes in Computer Science; Springer Nature: Cham, Switzerland, 2025; Volume 15229, pp. 50–65. ISBN 978-3-031-73256-0. [Google Scholar]
  10. Biggs, J.B.; Collis, K.F. Evaluating the Quality of Learning: The SOLO Taxonomy (Structure of the Observed Learning Outcome); Educational psychology; Academic Press: New York, NY, USA, 1982; ISBN 978-0-12-097550-1. [Google Scholar]
  11. Anderson, L.W.; Krathwohl, D.R. A Taxonomy for Learning, Teaching, and Assessing: A Revision of Bloom’s Taxonomy of Educational Objectives; Anderson, L.W., Ed.; Longman: New York, NY, USA; Munich, Germany, 2009; ISBN 978-0-8013-1903-7. [Google Scholar]
  12. Fleischer, Y.; Podworny, S.; Biehler, R. Teaching and Learning to Construct Data-Based Decision Trees Using Data Cards as the First Introduction to Machine Learning in Middle School. Stat. Educ. Res. J. 2024, 23, 3. [Google Scholar] [CrossRef]
  13. Hitron, T.; Orlev, Y.; Wald, I.; Shamir, A.; Erel, H.; Zuckerman, O. Can Children Understand Machine Learning Concepts? The Effect of Uncovering Black Boxes. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, Glasgow, UK, 2 May 2019; pp. 1–11. [Google Scholar]
  14. Engel, J.; Erickson, T.; Martignon, L. Teaching and Learning about Tree-Based Methods for Exploratory Data Analysis. In Proceedings of the Looking Back, Looking Forward. Proceedings of the Tenth International Conference on Teaching Statistics, Kyoto, Japan, 8–13 July 2018; Sorto, M.A., White, A., Guyot, L., Eds.; Springer: Berlin/Heidelberg, Germany, 2018. [Google Scholar]
  15. Touretzky, D.S.; Gardner-McCune, C. Artificial Intelligence Thinking in K–12. In Computational Thinking Education in K–12; Kong, S.-C., Abelson, H., Eds.; The MIT Press: Cambridge, MA, USA, 2022; pp. 153–180. ISBN 978-0-262-36897-1. [Google Scholar]
  16. Heinemann, B.; Opel, S.; Budde, L.; Schulte, C.; Frischemeier, D.; Biehler, R.; Podworny, S.; Wassong, T. Drafting a Data Science Curriculum for Secondary Schools. In Proceedings of the 18th Koli Calling International Conference on Computing Education Research, Koli, Finland, 22 November 2018; pp. 1–5. [Google Scholar]
  17. Sanusi, I.T.; Omidiora, J.O.; Oyelere, S.S.; Vartiainen, H.; Suhonen, J.; Tukiainen, M. Preparing Middle Schoolers for a Machine Learning-Enabled Future Through Design-Oriented Pedagogy. IEEE Access 2023, 11, 39776–39791. [Google Scholar] [CrossRef]
  18. Williams, R.; Kaputsos, S.P.; Breazeal, C. Teacher Perspectives on How To Train Your Robot: A Middle School AI and Ethics Curriculum. In Proceedings of the AAAI Conference on Artificial Intelligence, held virtually, 2–9 February 2021; Volume 35, pp. 15678–15686. [Google Scholar] [CrossRef]
  19. Fesakis, G.; Prantsoudi, S. Raising Artificial Intelligence Bias Awareness in Secondary Education: The Design of an Educational Intervention. In Proceedings of the 3rd European Conference on the Impact of Artificial Intelligence and Robotics, Lisbon, Portugal, 18–19 November 2021; Matos, F., Salavisa, I., Serrao, C., Eds.; pp. 35–42. [Google Scholar]
  20. Williams, R.; Ali, S.; Devasia, N.; DiPaola, D.; Hong, J.; Kaputsos, S.P.; Jordan, B.; Breazeal, C. AI + Ethics Curricula for Middle School Youth: Lessons Learned from Three Project-Based Curricula. Int. J. Artif. Intell. Educ. 2023, 33, 325–383. [Google Scholar] [CrossRef] [PubMed]
  21. Lindner, A.; Seegerer, S.; Romeike, R. Unplugged Activities in the Context of AI. In Informatics in Schools. New Ideas in School Informatics; Lecture Notes in Computer Science; Pozdniakov, S.N., Dagienė, V., Eds.; Springer International Publishing: Cham, Switzerland, 2019; Volume 11913, pp. 123–135. ISBN 978-3-030-33758-2. [Google Scholar]
  22. Michaeli, T.; Seegerer, S.; Kerber, L.; Romeike, R. Data, Trees, and Forests—Decision Tree Learning in K-12 Education. Proc. Mach. Learn. Res. 2023, 207, 37–41. [Google Scholar]
  23. Yaacoub, A.; Assaghir, Z.; Da-Rugna, J. Cognitive Depth Enhancement in AI-Driven Educational Tools via SOLO Taxonomy. In Proceedings of the Third International Conference on Advances in Computing Research (ACR’25); Daimi, K., Al Sadoon, A., Eds.; Lecture Notes in Networks and Systems; Springer Nature: Cham, Switzerland, 2025; Volume 1346, pp. 14–25. ISBN 978-3-031-87646-2. [Google Scholar]
  24. Yaacoub, A.; Tarnpradab, S.; Khumprom, P.; Assaghir, Z.; Prevost, L.; Da-Rugna, J. Enhancing AI-Driven Education: Integrating Cognitive Frameworks, Linguistic Feedback Analysis, and Ethical Considerations for Improved Content Generation. arXiv 2025, arXiv:2505.00339. [Google Scholar] [CrossRef]
  25. Merrill, M.D. First Principles of Instruction. ETR&D 2002, 50, 43–59. [Google Scholar] [CrossRef]
  26. Greek Ministry of Education. Curriculum for the Informatics Course for Grades A, B, and C of Gymnasium; Government Gazette No. 2932; 2023. Available online: https://www.iep.edu.gr/provoli-neon-programmaton-spoudon/ (accessed on 5 January 2025).
  27. Vygotsky, L.S. Mind in Society: Development of Higher Psychological Processes; Cole, M., Jolm-Steiner, V., Scribner, S., Souberman, E., Eds.; Harvard University Press: Cambridge, MA, USA, 1980; ISBN 978-0-674-07668-6. [Google Scholar]
  28. Vartiainen, H.; Tedre, M.; Valtonen, T. Learning Machine Learning with Very Young Children: Who Is Teaching Whom? Int. J. Child-Comput. Interact. 2020, 25, 100182. [Google Scholar] [CrossRef]
  29. Podworny, S.; Fleischer, Y.; Hüsing, S.; Biehler, R.; Frischemeier, D.; Höper, L.; Schulte, C. Using Data Cards for Teaching Data Based Decision Trees in Middle School. In Proceedings of the 21st Koli Calling International Conference on Computing Education Research, Joensuu, Finland, 18 November 2021; pp. 1–3. [Google Scholar]
  30. Hermans, F.; Aivaloglou, E. To Scratch or Not to Scratch?: A Controlled Experiment Comparing Plugged First and Unplugged First Programming Lessons. In Proceedings of the 12th Workshop on Primary and Secondary Computing Education, Nijmegen, The Netherlands, 8 November 2017; pp. 49–56. [Google Scholar]
  31. Wohl, B.; Porter, B.; Clinch, S. Teaching Computer Science to 5-7 Year-Olds: An Initial Study with Scratch, Cubelets and Unplugged Computing. In Proceedings of the Workshop in Primary and Secondary Computing Education, London, UK, 9 November 2015; pp. 55–60. [Google Scholar]
  32. Long, D.; Magerko, B. What Is AI Literacy? Competencies and Design Considerations. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems, Honolulu, HI, USA, 21 April 2020; pp. 1–16. [Google Scholar]
  33. Claudia, L.F.; Kusmayadi, T.A.; Fitriana, L. The SOLO Taxonomy: Classify Students’ Responses in Solving Linear Program Problems. J. Phys. Conf. Ser. 2020, 1538, 012107. [Google Scholar] [CrossRef]
  34. Ladias, A.; Karvounidis, T.; Douligeris, C. Assessment of Command Structuring in Scratch Programming Using the SOLO Taxonomy. In Proceedings of the 2022 IEEE Global Engineering Education Conference (EDUCON), Tunis, Tunisia, 28 March 2022; pp. 857–862. [Google Scholar]
  35. Jimoyiannis, A. Using SOLO Taxonomy to Explore Students’ Mental Models of the Programming Variable and the Assignment Statement. Themes Sci. Technol. Educ. 2011, 4, 53–74. [Google Scholar]
  36. Casad, B.; Jawaharlal, M. Learning through Guided Discovery: An Engaging Approach to K-12 STEM Education. In Proceedings of the 2012 ASEE Annual Conference & Exposition, San Antonio, TX, USA, 10–13 June 2012; pp. 25.886.1–25.886.15. [Google Scholar]
  37. Mosley, P.; Ardito, G.; Scollins, L. Pierre Van Cortlandt Middle School Promotes Student STEM Interest. Am. J. Eng. Educ. 2017, 7, 117–128. [Google Scholar] [CrossRef]
  38. Eguchi, A.; Okada, H.; Muto, Y. Contextualizing AI Education for K-12 Students to Enhance Their Learning of AI Literacy Through Culturally Responsive Approaches. Künstl Intell. 2021, 35, 153–161. [Google Scholar] [CrossRef] [PubMed]
  39. Oyelere, S.S.; Sanusi, I.T.; Agbo, F.J.; Oyelere, A.S.; Omidiora, J.O.; Adewumi, A.E.; Ogbebor, C. Artificial Intelligence in African Schools: Towards a Contextualized Approach. In Proceedings of the 2022 IEEE Global Engineering Education Conference (EDUCON), Tunis, Tunisia, 28 March 2022; pp. 1577–1582. [Google Scholar]
Figure 1. Decision tree on musical instruments.
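The kind of instrument tree shown in Figure 1 is equivalent to a chain of nested yes/no questions. The sketch below is a hypothetical Python reconstruction: the features (strings, bowing, keys) and the output categories are illustrative assumptions, not the exact contents of the figure.

```python
# Illustrative sketch of a musical-instrument decision tree expressed as
# nested yes/no questions. The features and categories are hypothetical;
# the actual tree in Figure 1 may use different criteria.
def classify_instrument(has_strings: bool, is_bowed: bool, has_keys: bool) -> str:
    if has_strings:              # root node: does the instrument have strings?
        if is_bowed:             # decision node on the "strings" branch
            return "bowed string (e.g., violin)"
        return "plucked string (e.g., guitar)"
    if has_keys:                 # decision node on the "no strings" branch
        return "keyboard (e.g., accordion)"
    return "wind or percussion"  # leaf: remaining categories

print(classify_instrument(True, True, False))   # bowed string (e.g., violin)
```

Each `if` corresponds to a decision node, each `return` to a leaf holding a class label, mirroring the node/branch/leaf vocabulary used throughout the tests below.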
Figure 2. Classification at the uni-structural SOLO taxonomy level.
Figure 3. Classification at the multi-structural SOLO taxonomy level.
Figure 4. Classification at the relational SOLO taxonomy level.
Table 1. Description of SOLO taxonomy levels [15].

Pre-structural: Students do not focus on any of the aspects related to the subject (incompetence).
Uni-structural: Students focus on only one or two aspects related to the subject, but not the whole.
Multi-structural: Students focus on several relevant but independent aspects related to the subject.
Relational: Students focus on most of the aspects related to the subject and integrate them into a structure (conceptual scheme).
Extended Abstract: Students recall the conceptual scheme and generalize it to new contexts.
Table 2. SOLO taxonomy levels assessment criteria for decision trees (DTs).

Uni-structural
Description: Students focus on only one or two aspects related to DTs, but not the whole.
Evidence:
  • Can identify what a DT looks like (e.g., nodes and branches).
  • Can describe a simple use of DTs.

Multi-structural
Description: Students focus on several relevant independent aspects related to DTs, but do not connect them.
Evidence:
  • Lists the parts of a DT: root, branches, nodes, leaves.
  • Enumerates situations in which DTs can be used (e.g., problem solving, decision-making).
  • Structures a simple DT with guidance.

Relational
Description: Students focus on most of the aspects related to DTs and integrate them into a structure (conceptual scheme).
Evidence:
  • Explains how DTs help in structured decision-making.
  • Applies the theory of DTs to solve problems (e.g., yes/no questions that lead to results).
  • Relates parts of DTs to real life (e.g., the root leads to branches, as decisions lead to results).
Table 3. Linking test questions with the SOLO taxonomy levels.

SOLO Level | Pre-Test (Individual) | 1st Individual Test | 2nd Individual Test | Post-Test (Individual)
Pre-structural | – | – | – | –
Uni-structural | 1, 3, 4 | 1, 3 | 1, 3 | 1, 3, 4
Multi-structural | 2, 5, 6 | 4 | 2, 4 | 2, 5, 6
Relational | 7, 8, 9 | 2 | 5, 6 | 7, 8, 9
Extended Abstract | – | – | – | –
Table 4. Pre-test and Post-test questions.
Pre-Test/Post-Test
1. Which of the following are basic elements of a DT? (Three correct answers)
  • Decision node
  • Orientation of the tree
  • Inefficiency of the tree
  • Leaf
  • Root
  • Number of direction lines
  • I don’t know the answer
2. Which of the following nodes in a DT would contain a Condition (a question)? (Three correct answers)
  • Leaf nodes
  • Node checking the value of a patient’s microbiological test
  • Nodes that cross-check information
  • Root node
  • Node checking whether a trip can take place due to weather conditions
  • I don’t know the answer
3. Which of the following types of data would be suitable for a DT? (One correct answer)
  • Audio data
  • Numerical data
  • Video data
  • Image
  • Large text data
  • I don’t know the answer
4. Which element of the training data of a DT is the most important for defining and producing the conditions (criteria) for splitting the data? (One correct answer)
  • Number of training data
  • Type of training data
  • Features (attributes) of the training data
  • Format of the training data
  • Categories of the training data
  • I don’t know the answer
5. Two machine learning programmers created DTs. They both used the same dataset, but one used the first half and the other used the second half. What would you expect when comparing their DTs? (One correct answer)
  • One tree is small and the other large
  • The decision trees are identical
  • Both decision trees are small in size
  • The decision trees are different
  • Both decision trees are large in size
  • I don’t know the answer
6. Which of the following apply to machine learning models with DTs? (Two correct answers)
  • Choosing splits accurately is impossible with any other method
  • Decision trees in machine learning are always 100% accurate
  • They are suitable for handling large datasets with many features
  • They have a complex structure and operation
  • They can adapt their prediction based on new training data
  • I don’t know the answer
7. Which of the following tasks are carried out during the design preparation stage of a DT? (Two correct answers)
  • Defining classification categories and data features
  • Measuring the model’s accuracy
  • Collecting and preparing the training data
  • Checking the model’s prediction with test data
  • Redefining the produced model
  • I don’t know the answer
8. Which of the following may occur during the testing phase of a DT? (Two correct answers)
  • Collecting and preparing training data
  • Checking the model’s prediction with test data
  • Deleting training data
  • Redefining the produced model
  • Describing the use of the decision tree
  • I don’t know the answer
9. Number the following statements so that they describe, in order, the steps involved in designing a decision tree for implementing an AI application:
  • Formulating criteria/rules, dividing data
  • Preparation and recording of training and test data
  • Definition of classification categories (classes)
  • Recording the characteristics-properties of the sample
  • Modifying the tree according to the test data
  • Checking the tree’s prediction with test data
Table 5. 1st individual test questions.
1st Individual Test
1. DTs are a type of Machine Learning model used for: (One correct answer)
  • producing products
  • classifying data
  • error checking
  • text formatting
  • programming commands
  • I don’t know the answer
2. Match the items of the two columns: (One correct answer)
Name
  • Decision Node
  • Root
  • Leaf
  • Label or Class
  • Branch/Edge
Role Description
  • Output category corresponding to a prediction
  • Indicates a decision that must be made
  • Leads from one node to the next
  • Main node from which all others start and branch
  • The end of a decision path
3. The features–attributes of the training data: (One correct answer)
  • are not so important for the design of the tree
  • are numerical features extracted from images
  • are the factors the tree uses to split the data
  • represent the outcome of a tree
  • I don’t know the answer
4. The explainability of a decision tree: (One correct answer)
  • means that its decision-making process is not understandable
  • affects the result
  • “splits” the training data
  • means we can understand how and why the tree reached a specific decision
  • I don’t know the answer
Table 6. 2nd individual test questions.
2nd Individual Test
1. A simple use of the DT you designed is to: (One correct answer)
  • make a weather prediction
  • prepare the training data
  • find a destination for a school trip
  • predict the success of a school trip
  • predict the reason for whether a school trip will take place or not
  • I don’t know the answer
2. Put in the right order (by numbering) the steps that must be taken before designing a DT:
  • Separation and preparation of the test data
  • Separation and preparation of the training data
  • Definition of criteria for splitting the data
  • Collection of the data
  • Selection of the features–attributes of the data
  • I don’t know the answer
3. A criterion–rule for splitting the data is placed: (One correct answer)
  • in a Leaf of the decision tree
  • in neighboring Leaves of the decision tree
  • before the root of the tree
  • in a decision node of the tree
  • the same in many decision nodes of the tree
  • I don’t know the answer
4. Put in the correct order (by numbering) the steps that must be taken after the design and training of a DT:
  • Checking the proper functioning of the model using the test data
  • Selecting the test data that were not used in training
  • Redesigning–correcting the tree
  • Identifying possible errors in the model’s prediction
  • I don’t know the answer
5. To determine the effectiveness of the trained classification model: (One correct answer)
  • the “model” is tested with new output data with the goal of classifying new unknown items into categories
  • the “model” is tested with new input data with the goal of classifying known items into categories
  • the “model” is tested with new input data with the goal of classifying new unknown items into categories
  • the “model” is tested with the same input data with the goal of classifying known items into categories
  • I don’t know the answer
6. Suppose you have a DT that predicts whether a student will perform well in a review test at school. A table of some training data is given:

Student | Study Hours | Participation & Presence in Class | Consistency in Assignments | Passed the Test?
A | 6 | High | Moderate | Yes
B | 2 | Low | Low | No
C | 8 | Moderate | High | Yes
D | 7 | Low | Low | No
Questions:
  • What are the classification categories?
  • What are the features (attributes) of the data?
  • Draw the decision tree below:
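The four training rows of Question 6 can be separated by a very small tree. The Python sketch below shows one possible answer (a root split on participation, then a split on study hours); the split order is an assumption for illustration, not the official solution.

```python
# One possible decision tree for the Question 6 training data (Table 6).
# The choice of root split is an assumption; other small trees also fit
# these four training examples.
def passed_test(study_hours: int, participation: str, consistency: str) -> str:
    # Root node: participation & presence in class.
    if participation == "Low":                  # students B and D
        return "No"
    # Decision node: study hours.
    return "Yes" if study_hours >= 6 else "No"  # students A and C pass

# The tree reproduces all four training labels:
training = [(6, "High", "Moderate", "Yes"), (2, "Low", "Low", "No"),
            (8, "Moderate", "High", "Yes"), (7, "Low", "Low", "No")]
print(all(passed_test(h, p, c) == label for h, p, c, label in training))  # True
```

Note that the `consistency` feature is never consulted: with so few examples, two splits already classify every row, which is exactly the kind of observation the relational SOLO level targets.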
Table 7. t-test analysis of samples.

SOLO Level | Pre-Test M (SD) | Post-Test M (SD) | Effect Size
Uni-structural * | 0.52 (0.60) | 2.10 (0.65) | 0.91
Multi-structural * | 0.76 (0.72) | 1.89 (0.73) | 0.83
Relational * | 1.00 (0.66) | 2.16 (0.74) | 0.82
Total * | 2.30 (1.29) | 6.16 (1.55) | 0.94
* Statistically significant differences (p < 0.05).
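The exact paired t statistic cannot be recomputed from Table 7's means and SDs alone (it also depends on the pre/post correlation), but a rough independent-samples (Welch-style) approximation from the Total row, assuming the n = 47 students stated in the abstract, illustrates why the gain is far beyond the significance threshold:

```python
import math

# Welch-style t statistic recomputed from the summary statistics in
# Table 7 (Total row: pre M=2.30, SD=1.29; post M=6.16, SD=1.55; n=47).
# This is only an independent-samples approximation of the paired test
# actually reported in the article.
n = 47
m_pre, sd_pre = 2.30, 1.29
m_post, sd_post = 6.16, 1.55

se = math.sqrt(sd_pre**2 / n + sd_post**2 / n)  # SE of the mean difference
t = (m_post - m_pre) / se
print(round(t, 1))  # ≈ 13.1, far beyond the ~2.0 cutoff for p < 0.05
```

A paired test on the same data would typically yield an even larger t, since the positive pre/post correlation within students shrinks the standard error further.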