Article

Exploring the Conceptual Model and Instructional Design Principles of Intelligent Problem-Solving Learning

1 Educational Policy Research Institute, Future Education Institute, Gyeongsangnam-do Office of Education, Uiryeong 52151, Republic of Korea
2 Department of Education, Busan Campus, Pusan National University, Busan 46241, Republic of Korea
* Author to whom correspondence should be addressed.
Sustainability 2025, 17(17), 7682; https://doi.org/10.3390/su17177682
Submission received: 21 July 2025 / Revised: 12 August 2025 / Accepted: 25 August 2025 / Published: 26 August 2025
(This article belongs to the Special Issue Sustainable Digital Education: Innovations in Teaching and Learning)

Abstract

The rapid advancement of artificial intelligence has fundamentally transformed how knowledge is created, disseminated, and applied in problem-solving, presenting new challenges for educational models. This study introduces Intelligent Problem-Solving Learning (IPSL)—a capability-based instructional design framework aimed at cultivating learners’ adaptability, creativity, and meta-learning in AI-enhanced environments. Grounded in connectivism, extended mind theory, and the concept of augmented intelligence, IPSL places human–AI collaboration at the core of instructional design. Using a design and development research (DDR) methodology, the study constructs a conceptual model comprising three main categories and eight subcategories, supported by eighteen instructional design principles. The model’s clarity, theoretical coherence, and educational relevance were validated through two rounds of expert review using the Content Validity Index (CVI) and Inter-Rater Agreement (IRA). IPSL emphasizes differentiated task roles—those exclusive to humans, suitable for human–AI collaboration, or fully delegable to AI—alongside meta-learning strategies that empower learners to navigate complex and unpredictable problems. This framework offers both theoretical and practical guidance for building future-oriented education systems, positioning AI as a learning partner while upholding essential human qualities such as ethical judgment, creativity, and agency. It equips educators with actionable principles to harmonize technological integration with human-centered learning in an age of rapid transformation.

1. Introduction

The rapid advancement of artificial intelligence (AI) is fundamentally reshaping how knowledge is generated, disseminated, and applied in real-world problem-solving. AI technologies are redefining cognitive processes—transforming how humans think, learn, and engage with information. As the pace of knowledge creation accelerates exponentially, the “half-life of knowledge” shortens dramatically [1], resulting in an environment where learners can no longer rely solely on static, preexisting knowledge to solve emerging and complex problems.
This paradigm shift poses a critical challenge for education: how to determine not only what content should be taught but also how learning should be structured to prepare learners for an unpredictable future. Traditional instructional theories—such as behaviorism, cognitivism, and constructivism—have conceptualized learning primarily as the internalization of knowledge transmitted in structured formats. However, these models often fall short in equipping learners with the adaptive, real-world problem-solving skills demanded in AI-driven contexts.
In response, emerging frameworks such as connectivism emphasize that knowledge is not confined to the human brain but distributed across networks—including other people, digital resources, and AI systems [2,3]. Learning, in this view, is the capacity to access, connect, and navigate these networks effectively. This epistemological shift is further reinforced by extended mind theory, which posits that cognitive processes—such as memory, reasoning, and decision-making—can be meaningfully distributed across external tools and environments [4,5].
These shifts call for a renewed focus in education: not merely the acquisition of competency, which implies the ability to perform familiar tasks, but the development of capability—the potential to adapt, create, and thrive amid uncertainty [6,7,8]. In the context of AI-mediated learning environments, where human–AI interaction is dynamic and often unpredictable, capability refers to an individual’s capacity to respond effectively to emerging challenges by demonstrating adaptability, creativity, and resilience. Learners in such environments must engage in continuous learning, exercise flexible thinking, and practice meta-cognitive reflection.
Moreover, AI should not be viewed solely as a tool for automation or information delivery, but rather as a partner in augmented intelligence—a human–AI collaborative model in which AI supports lower-order functions (e.g., pattern recognition, data processing) and humans contribute higher-order judgment, ethical reasoning, and creativity [9]. AI’s potential as a co-creative partner further enhances human originality, enabling learners to escape conventional thought patterns and reframe problems through novel perspectives [10].
Recent advancements in human–machine interaction have also been explored outside the educational domain, particularly in the context of intelligent rehabilitation. For instance, studies using 3D deep learning and point cloud technologies have demonstrated how AI systems can support physical recovery through embodied interaction and cognitive augmentation [11,12]. While these studies were not originally designed for instructional settings, they highlight the broader potential of AI to enhance human capability—an idea that aligns closely with the IPSL framework’s emphasis on cognitive extension, co-agency, and embodied collaboration in education.
Despite growing theoretical consensus on these developments, there remains a lack of integrative instructional design models that translate these ideas into actionable educational practice. In response, this study proposes a model called Intelligent Problem-Solving Learning (IPSL)—a capability-based instructional framework that places human–AI collaboration at its core.
In contemporary educational research, meta-learning has also been examined through the lens of cognitive scaffolding, which supports learners’ gradual assumption of responsibility for their own learning processes. Within AI-mediated environments, such scaffolding is critical to ensuring that learners retain agency and actively engage with complex tasks. However, recent studies caution that overreliance on generative AI tools can lead to reduced critical engagement, diminished metacognitive regulation, and what some scholars term “cognitive offloading” [13,14,15]. Embedding explicit scaffolds—such as structured prompts, reflective checkpoints, and progressive AI literacy training—within the IPSL framework is therefore essential for balancing the affordances of AI with the cultivation of independent, future-ready thinkers.
IPSL aims to cultivate future-ready learners by guiding them to solve complex, unpredictable problems while preserving human agency and values. This study develops and validates a conceptual model and set of instructional design principles for IPSL using a design and development research (DDR) methodology. It seeks to contribute to a new educational paradigm that balances technological innovation with human-centered learning in the age of artificial intelligence.
To construct a theoretically grounded and practically applicable instructional framework for Intelligent Problem-Solving Learning (IPSL), this study explores the following research questions:
  • What is the conceptual model of IPSL that reflects the demands of AI-integrated, future-oriented education?
  • What are the instructional design principles that can guide the implementation of the IPSL model in educational practice?
  • To what extent do experts evaluate the validity, coherence, and applicability of the proposed IPSL model and its instructional design principles?
Accordingly, the research proceeded through three main phases aligned with the DDR methodology: conceptual modeling, principle derivation, and expert validation. The research questions were developed to guide this iterative design process. By focusing on learner agency, value-driven reasoning, and future-oriented capability, the proposed IPSL model aligns with the goals of Education for Sustainable Development, thereby contributing to sustainable digital innovation in teaching and learning.

2. Theoretical Background

2.1. Paradigm Shift in the Concepts of Learning and Capability

Artificial intelligence (AI) has triggered an era of rapid and continuous knowledge expansion—commonly described as a “knowledge explosion.” Today, the total volume of human knowledge can double within just a few years. This accelerated pace shortens the “half-life of knowledge” [1], meaning that information once considered current can quickly become outdated. In such an environment, traditional school-based education—focused primarily on content delivery—is showing clear limitations.
Classical learning theories—such as behaviorism, cognitivism, and constructivism—traditionally emphasize the accumulation of knowledge within the learner’s mind. This process occurs through content that is systematically organized and delivered by teachers. In the AI era, however, learners must navigate constantly changing information landscapes. As a result, these inward-focused models no longer provide sufficient support for the skills needed today. In response, connectivism has emerged as an alternative paradigm [2,3].
Connectivism views knowledge as a distributed network, wherein learning involves building and navigating meaningful connections among various “nodes”—which may include people, databases, digital tools, or AI systems. Learning, in this sense, is not about storing facts in one’s brain, but about accessing, synthesizing, and applying knowledge across a dynamic system. According to AlDahdouh et al. [16], the act of linking, reorganizing, and innovating across these nodes constitutes the essence of learning in digital environments.
AI technologies serve as powerful external nodes that expand learners’ knowledge networks. In a world of rapidly evolving challenges and disappearing job predictability, lifelong learning and relearning are no longer optional. Relying on a fixed body of knowledge is no longer viable. In this context, connectivism offers a compelling theoretical foundation for cultivating agile and self-directed learners.
Siemens [2] emphasized that modern learners must develop not only the ability to “know what” or “know how,” but more importantly, the ability to “know where” and “know who”—that is, the ability to locate relevant information and expertise across distributed networks. In such a view, the flow of knowledge becomes more valuable than its content, and learning capacity becomes more crucial than static mastery.
This epistemological shift also calls into question the sufficiency of competency-based education, which typically focuses on pre-specified knowledge, skills, and attitudes for known tasks. Although valuable, competencies often fall short when applied to unfamiliar or rapidly changing situations [17]. In contrast, capability encompasses the learner’s justified confidence to act effectively even in unpredictable or novel contexts [6].
Capability-based education emphasizes adaptability, creativity, and the ability to generate new knowledge—rather than merely applying existing content. It fosters self-directed learning, situational responsiveness, and transferability of skills. In connectivist terms, capability aligns with the learner’s ability to form new connections, identify patterns, and engage in meta-cognitive inquiry. As such, capability—not just competency—must become the central aim of education in the AI era.

2.2. Human–AI Collaboration and Augmented Intelligence

As AI technologies advance, the focus of education is moving away from purely human problem-solving. Instead, attention is turning toward collaborative intelligence—the joint problem-solving capacity of humans and machines. Within this shift, concepts such as augmented intelligence, intelligence amplification, and co-creativity are gaining a central place in educational discussions.
Augmented intelligence refers to the enhancement of human cognitive abilities through synergistic collaboration with AI. Rather than replacing human tasks, AI complements them by processing large-scale data, recognizing patterns, and generating alternatives. Dede et al. [9] argue that the combined performance of humans and AI exceeds the sum of their individual contributions, representing a new cognitive strategy rooted in cooperation—not substitution. While AI excels at processing speed, accuracy, and data scalability, humans provide irreplaceable higher-order capabilities: moral reasoning, creative synthesis, empathy, and ethical decision-making [18]. Augmented intelligence thus proposes a division of cognitive labor, wherein machines support the analytic and procedural aspects, while humans retain judgment, interpretation, and accountability.
This division extends to creativity. Assisted creativity frames AI not as a substitute for human imagination but as a partner that stimulates novel thinking [10]. For instance, AI can quickly generate numerous idea variants, freeing learners from conventional constraints and encouraging exploration of innovative alternatives. Such synergy enhances human originality by offering unexpected perspectives and reducing cognitive fixation.
The extended mind theory [4,5] provides the philosophical foundation for this collaborative model. It posits that cognitive processes—such as memory, learning, and problem-solving—can extend into external tools and environments. From this standpoint, AI serves not merely as a tool, but as part of a distributed cognitive system. Recording ideas with a smartphone or searching for patterns using an AI assistant are examples of how cognition now spans beyond the brain.
However, extended cognition is not about blind reliance. It demands metacognitive awareness—knowing how to manage and evaluate information across internal and external resources. Students must therefore be trained not only to use AI but to use it wisely: knowing when to delegate, how to interpret results, and where to retain human control. The theory offers a critical foundation for designing instructional models that support human–AI co-thinking.
Ultimately, the abilities required for productive human–AI collaboration—augmented intelligence, ethical discernment, meta-cognition, and collaborative creativity—are emerging as core competencies for the future. As automation redefines labor markets, most experts agree that AI will not fully replace human roles but will reconfigure them through intelligent partnerships [9,19]. Future education must prepare learners to thrive in this hybrid environment.
From this perspective, extended mind theory suggests that AI can expand human cognitive processing capacity virtually without limit, thereby redefining the traditional concept of learning as the accumulation of schema-based knowledge within the brain. Learning can no longer be fully explained as the internalization of fixed knowledge; rather, as connectivism emphasizes, it must be understood in terms of building connections within external networks, forming relationships among information nodes, and accessing and utilizing distributed resources. Moreover, when viewed through the lens of augmented intelligence, learning and problem-solving are no longer confined to the performance of an individual human but emerge from synergistic human–AI collaboration, in which AI supports and amplifies human judgment, creativity, and decision-making. This collaborative structure, grounded in augmented intelligence, in turn necessitates the network-based learning environments and interactional competencies described by connectivism. Consequently, these theories operate in a mutually reinforcing way, collectively providing a robust theoretical foundation for the future-oriented learning paradigm envisioned by IPSL.
While each of the three theoretical foundations—connectivism, extended mind theory, and augmented intelligence—provides distinct insights, their combined application offers a robust rationale for the IPSL framework. Connectivism explains how learning emerges from dynamic knowledge networks, positioning AI as an active node within these networks. Extended mind theory expands this view by conceptualizing AI as a cognitive extension, enabling the strategic offloading of routine processes and the amplification of higher-order thinking. Augmented intelligence operationalizes this synergy by defining collaborative modes in which AI and humans contribute complementary strengths. Together, these theories establish a coherent foundation for designing IPSL’s human–AI task role taxonomy and meta-learning strategies (Figure 1). This integration underscores IPSL’s capacity to align philosophical reasoning with concrete instructional design.
While the IPSL framework is grounded in connectivism, extended mind theory, and augmented intelligence, it is also important to engage with contrasting perspectives that question the overenthusiastic embrace of AI–human integration. Critical scholars have raised concerns about “ethics-washing,” whereby AI ethics principles are adopted symbolically without meaningful implementation, as well as the potential dehumanization of education and the tokenistic application of AI ethics policies. Furthermore, societal perceptions of and trust in AI play a crucial role in shaping learner agency within AI-enhanced environments. Gerlich [20] highlights how acceptance of AI in educational settings is mediated by perceived fairness and transparency, while Gerlich [21] examines trust dynamics in human–AI collaboration. Empirical studies have also documented risks relevant to IPSL’s design principles. For example, Gerlich [13] demonstrates how unguided AI use can lead to reduced critical engagement, Kosmyna et al. [14] provide neurological and behavioral evidence of “cognitive debt” and diminished learning ownership in AI-assisted writing tasks, and Lee et al. [15] report reduced cognitive effort and inflated confidence in AI-supported reasoning. Integrating these cautionary perspectives alongside supportive theories ensures that IPSL’s epistemological foundation is both balanced and critically robust.

2.3. Deriving the Conceptual Structure of IPSL from Theoretical and Empirical Foundations

Building on the theoretical perspectives outlined in Section 2.1 and Section 2.2, this section conceptualizes the structure of Intelligent Problem-Solving Learning (IPSL) by integrating philosophical foundations, pedagogical constructs, and empirical insights. Specifically, the IPSL framework draws upon three core theories to inform its instructional logic: connectivism, which explains how knowledge is distributed across learners, AI systems, and networks; extended mind theory, which offers a rationale for treating AI as a cognitive extension of human thought; and augmented intelligence, which defines the synergy between humans and AI through shared cognitive processes.
Together, these perspectives provide the foundation for several core constructs: meta-learning, emotional regulation, and differentiated task roles. In the IPSL model, these theories are not treated in isolation. Instead, they are woven into a coherent design framework that fosters human–AI co-agency, strengthens learner metacognition, and builds future-oriented capabilities.
To illustrate the theoretical alignment and practical utility of these components, Table 1 presents an integrative schema that maps each IPSL subdomain to its operational definition, theoretical origin, and empirical foundation. This taxonomy serves as a conceptual foundation for the instructional design principles elaborated in Section 4 and validated in Section 5.

3. Methods

This study employed a Design and Development Research (DDR) [40] methodology, which integrates theoretical exploration, design solution construction, and empirical validation. The research was conducted in three iterative phases: (1) conceptual model development through literature synthesis, (2) derivation of instructional design principles, and (3) expert validation and model refinement. Each phase built on the results of the previous stage to produce a cohesive and empirically grounded instructional framework.

3.1. Stage 1: Model Development Through Literature Review

To conceptualize the IPSL framework and derive its corresponding instructional design principles, a comprehensive literature review was conducted. The review focused on critical themes related to educational transformation in the era of artificial intelligence, including human–AI interaction, ethical considerations of AI use, meta-learning strategies, and sustainability-oriented education, particularly focusing on how AI-integrated instructional design can support the core aims of ESD, such as empowering learners with future-oriented capabilities and social responsibility.
Academic journal articles, books, and policy reports published between 2000 and 2024 were systematically collected from major scholarly databases, including Web of Science, Scopus, ERIC, and ScienceDirect. Search terms included combinations of intelligent problem solving, AI in education, human–AI collaboration, meta-learning, sustainable capability development, and instructional design in AI-supported environments.
To ensure transparency and reproducibility, the selection process adhered to PRISMA guidelines and involved four key stages:
  • Identification: A total of 482 records were retrieved through database searches, with an additional 23 records identified from other sources, such as policy reports and grey literature.
  • Screening: After removing duplicates, 430 unique records were retained. Titles and abstracts were reviewed for relevance, resulting in the exclusion of 320 records that did not pertain to instructional design, AI in education, or sustainability-related themes.
  • Eligibility: 110 full-text articles were assessed for eligibility based on their theoretical robustness, empirical contribution, and instructional relevance. Of these, 55 were excluded due to insufficient conceptual grounding or lack of empirical rigor.
  • Inclusion: A final set of 55 studies was included in the qualitative synthesis. These served as the theoretical and empirical foundation for the development of the IPSL model and the formulation of its design principles.
The 55 studies identified through the PRISMA process were summarized by publication type, geographic region, and disciplinary focus (Figure 2). Approximately two-thirds (n = 36) were empirical studies and one-third (n = 19) were theoretical or conceptual analyses. Regionally, the literature originated from Asia (40%), Europe (30%), North America (25%), and other regions (5%). In terms of disciplinary focus, the studies covered education (50%), psychology (20%), AI ethics (15%), and instructional design (15%). Rather than mapping each study individually to specific IPSL components, the thematic synthesis identified recurring patterns of contribution, as reflected in Table 1. For example:
  • Meta-Learning principles were informed by self-regulated learning theory [27] and meta-emotion theory [29], with empirical studies by Pekrun [26] and Garner [28] highlighting the role of emotional regulation in AI-supported goal setting.
  • Complex Problem Solving drew on Jonassen’s theory and authentic learning frameworks, supported by empirical research from Iancu and Lanteigne [24] and Aquino et al. [31].
  • Human–AI Collaboration principles were grounded in augmented intelligence and extended cognition, with Zhou et al. [36] and Huo [37] providing evidence on AI-supported co-creativity in educational contexts.
By synthesizing theoretical grounding with empirical insights from these representative studies, the IPSL framework integrates both foundational concepts and practical evidence, ensuring that its instructional design principles are robust, context-sensitive, and adaptable across diverse learning environments.

Qualitative Synthesis Strategy

To construct the IPSL framework grounded in both theory and empirical insight, a qualitative thematic synthesis was conducted on the final set of 55 studies selected through the PRISMA-based literature review process. This synthesis aimed to identify recurring conceptual patterns, pedagogical implications, and design-oriented insights relevant to intelligent problem-solving in AI-integrated educational environments.
The synthesis procedure was carried out in the following four stages:
  • Initial coding: Each included study was reviewed and coded for theoretical constructs (e.g., connectivism, extended mind, and capability approach), instructional considerations (e.g., AI affordances, learner agency, and task structure), and educational outcomes (e.g., future-readiness, ethical discernment, and adaptive learning).
  • Formation of descriptive themes: Codes were grouped to form broader thematic categories capturing shared concerns and priorities across the studies. These descriptive themes included aspects such as “value-oriented learning,” “human–AI task distribution,” and “metacognitive strategy use.”
  • Derivation of analytical domains: Through abstraction and reinterpretation, three core domains emerged as essential for intelligent problem-solving in future-ready education:
    • Fostering sustainable human values: Emphasizing the cultivation of ethical reasoning, emotional intelligence, and life purpose as central to education in the AI era.
    • Structuring task execution via role differentiation: Distinguishing between human-exclusive tasks, human–AI collaborative tasks, and tasks fully delegable to AI, to clarify roles and responsibilities within learning processes.
    • Promoting adaptive and reflective thinking: Highlighting the importance of metacognitive and meta-emotional strategies for navigating complex and unpredictable problems.
  • Model integration: These three analytical domains collectively informed the conceptual structure of the IPSL framework and served as an organizing logic for the derivation of its instructional design principles. The principles were initially drafted based on the thematic synthesis and later subjected to expert validation to refine their clarity, coherence, and applicability.
This thematic synthesis provided not only the foundation for theorizing the IPSL model but also a bridge from abstract theory to actionable instructional design. The outcome is a coherent framework that integrates human-centered values, AI-aware task structuring, and learner-driven adaptability as the pillars of future-ready instructional design.

3.2. Stage 2: Expert Validation

To enhance the transparency and reproducibility of the IPSL validation process, the expert review procedure was designed and documented in detail. Two rounds of expert validation were conducted with a two-week interval to ensure sufficient time for analysis, revision, and model refinement. Expert feedback was analyzed using inductive thematic coding to identify points of consensus and divergence. Divergent opinions were reconciled through the synthesis of commonly emerging themes, and all suggested revisions were tracked using a structured change log.
In particular, modifications to the wording and structure of instructional design principles were documented in a revision matrix that mapped each change to the corresponding expert comment. For example, the initial principle stating “Promote learner reflection using AI” was revised to “Enable learners to use AI as a critical peer to shift and expand their thinking”, based on expert recommendations that emphasized dialogic interaction and metacognitive framing.
To improve the reproducibility of the process, all revisions were recorded alongside changes in content validity index (CVI) and inter-rater agreement (IRA) scores between the two rounds. Items with borderline CVI values or low IRA scores from the first round were prioritized in the second evaluation. In addition to quantitative tracking, follow-up qualitative confirmations were solicited from experts to validate the interpretive alignment of the final version. This systematic procedure ensured that the finalized IPSL principles were both empirically grounded and theoretically coherent.

3.2.1. Expert Panel Composition

The expert panel consisted of eight professionals with extensive backgrounds in instructional design, educational psychology, AI in education, and educational policy. All experts held doctoral degrees and had more than ten years of experience in relevant domains. To improve transparency regarding disciplinary representation, the composition was clarified as follows: five experts specialized in instructional design and educational technology, and three in AI-supported learning and digital innovation. The panel also included specialists in cognitive psychology, several of whom had conducted research on AI ethics and its implications for AI-based education. This combination of pedagogical, technological, and ethical expertise ensured a balanced and comprehensive approach to validating the IPSL framework.
The selection criteria included (1) academic or practical expertise in instructional design, digital education, or AI-enhanced learning; (2) publications in peer-reviewed (SCIE/SSCI) journals or participation in national-level research projects; and (3) experience with curriculum evaluation or education policy development. The panel included three university professors in instructional technology and future education, two researchers in AI-driven learning and innovation, two teacher educators with experience in pre-service teacher training, and one policy specialist in sustainability and education. In addition, although the current expert panel primarily represents higher education institutions, some experts possess professional experience in K–12 education. For example, Expert E1 previously taught as a secondary school teacher for 10 years before transitioning into academia, while Expert E4 served as a primary school teacher for 5 years before becoming a university-based teacher educator. These backgrounds contributed valuable insights into the broader applicability of the IPSL framework across educational levels (Table 2).

3.2.2. Review Process and Evaluation Criteria

Round 1: Content and Reliability Validation of the IPSL Model and Principles
In the first round of validation, experts were provided with the initial conceptual model of IPSL, including a visual diagram, detailed descriptions, and six preliminary instructional design principles derived from the literature. Each item was evaluated using a 4-point Likert scale (1 = Not valid at all; 4 = Highly valid). Alongside numerical ratings, participants were invited to offer qualitative feedback to guide subsequent revisions and refinements of the framework.
The conceptual model was assessed according to six evaluative criteria:
  • Conceptual clarity: Are the core concepts clearly defined and easily understandable?
  • Theoretical validity: Is the model grounded in established educational theory and conceptually coherent?
  • Internal coherence: Do the components demonstrate logical consistency and alignment with one another?
  • Comprehensiveness: Does the model encompass all essential elements required to support the development of human values?
  • Visual communicability: Does the diagram effectively illustrate the relationships among components and convey the overarching message?
  • Innovativeness: Does the model introduce novel or creative perspectives appropriate for AI-integrated educational contexts?
These criteria were developed with reference to prior research on instructional design model evaluation [41,42,43,44,45].
The instructional design principles were reviewed using five distinct criteria:
  • Validity: Is the principle appropriate and contextually relevant to IPSL?
  • Clarity: Are the statements expressed in clear, concise, and unambiguous terms?
  • Usefulness: Can the principle be practically applied in instructional settings?
  • Universality: Is the principle adaptable across various educational levels and contexts?
  • Comprehensibility: Is the principle easily understood by both instructors and learners?
These evaluative dimensions were adapted from established frameworks for instructional design assessment [46,47,48]. To quantitatively analyze expert agreement, two indices were employed:
  • Content Validity Index (CVI): Calculated as the proportion of experts rating an item as either 3 or 4, divided by the total number of reviewers. A CVI of 0.80 or above was considered acceptable [49].
  • Inter-Rater Agreement (IRA): To assess the consistency among expert ratings, IRA was calculated as the proportion of items for which at least 75% of the experts (i.e., six out of eight) provided the same rating. An IRA value of 0.75 or higher was considered satisfactory, following the guidelines commonly used in scale development and expert validation studies [50].
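To make the two indices concrete, the following minimal Python sketch computes per-item CVI and overall IRA exactly as defined above, for a panel of eight experts rating on the 4-point scale. The ratings matrix is hypothetical, invented purely for illustration; it does not reproduce the study's data.

```python
# Illustrative computation of CVI and IRA as defined in the text.
# Rows = items, columns = 8 experts, values on the 4-point Likert scale.
# These rating values are hypothetical, for illustration only.
ratings = [
    [4, 4, 3, 4, 3, 4, 4, 3],  # item 1
    [3, 3, 4, 3, 3, 3, 2, 3],  # item 2
    [4, 2, 3, 2, 4, 3, 2, 3],  # item 3
]

def item_cvi(item):
    """CVI: proportion of experts rating the item 3 or 4."""
    return sum(1 for r in item if r >= 3) / len(item)

def ira(items, agreement=0.75):
    """IRA: proportion of items on which at least `agreement` of the
    experts (here, 6 of 8) gave the same rating."""
    def item_agrees(item):
        # Modal rating frequency relative to panel size.
        return max(item.count(v) for v in set(item)) / len(item) >= agreement
    return sum(1 for it in items if item_agrees(it)) / len(items)

cvis = [item_cvi(it) for it in ratings]
print(cvis)          # per-item CVI values
print(ira(ratings))  # overall IRA across items
```

Under the study's thresholds, an item would be retained on content validity if its CVI is at least 0.80, and the panel's overall consistency would be judged satisfactory if IRA reaches 0.75.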
In addition to quantitative measures, extensive qualitative feedback was solicited to inform the revision process. Particular attention was given to items that yielded low inter-rater agreement or borderline CVI values, with expert suggestions actively encouraged to enhance clarity, alignment, and applicability.
Round 2: Reassessment and Refinement of the Revised Model
Following the feedback obtained during the first round of expert review, both the conceptual model of IPSL and the associated instructional design principles were systematically revised to enhance clarity, structural coherence, and theoretical alignment. The revised materials were then re-evaluated by the same panel of experts using the identical criteria and procedures established in Round 1.
During this second evaluation phase, both the Content Validity Index (CVI) and Inter-Rater Agreement (IRA) were recalculated to assess whether the revisions had improved expert consensus. Items that failed to achieve the threshold CVI value of 0.80 or exhibited low inter-rater reliability were flagged for potential modification or further refinement.
In addition to quantitative reassessment, qualitative feedback was actively solicited and analyzed to supplement interpretation of the results. Expert suggestions were particularly instrumental in identifying remaining ambiguities or inconsistencies. Through this iterative process, the IPSL conceptual model and its instructional design principles were finalized with significantly improved clarity, theoretical consistency, and consensus among experts.

4. Expert Review Results

4.1. Validation of the IPSL Conceptual Model

In the first round of expert review, the evaluation of the IPSL conceptual model yielded mean scores ranging from M = 2.88 to M = 4.00, with corresponding Content Validity Index (CVI) values between 0.75 and 1.00, and Inter-Rater Agreement (IRA) ranging from 0.63 to 1.00. Among the six evaluation criteria, theoretical validity (M = 4.00, CVI = 1.00, IRA = 1.00) and visual communicability (M = 3.25, CVI = 1.00, IRA = 0.75) received particularly strong ratings. Conversely, relatively lower IRA scores were recorded for coherence among components (M = 2.88, CVI = 0.75, IRA = 0.63) and conceptual clarity (M = 3.13, CVI = 0.88, IRA = 0.63), suggesting variations in expert interpretation and the need for further refinement.
In response, revisions were made to the model’s structure, terminology, and visual presentation. In the second round, all evaluation criteria achieved mean scores exceeding 3.25, while CVI values reached 1.00 across all items, indicating complete agreement on content validity. IRA scores also improved substantially, with all items meeting or exceeding the threshold of 0.75, including theoretical validity (IRA = 1.00) and innovativeness (IRA = 0.88).
A comparison of overall scores between the two rounds demonstrates a marked improvement: the total average score rose from M = 3.38 (CVI = 0.90, IRA = 0.71) in Round 1 to M = 3.81 (CVI = 1.00, IRA = 0.90) in Round 2. These results affirm that iterative refinement based on expert feedback enhanced the model’s validity, reliability, and interpretability—particularly in the domains of conceptual clarity, theoretical alignment, structural coherence, practical relevance, and innovation (Table 3).
In addition to quantitative results, qualitative feedback from experts highlighted several areas for improvement across the six evaluation domains.
First, in terms of conceptual clarity, experts pointed out that key constructs such as existential value, capability, and meta-learning were initially presented in ways that were too abstract or insufficiently defined. For instance, one expert argued that “existential value” should be framed as a philosophical construct applicable across disciplines, while another contended that it needed a more operational definition tied directly to classroom activities. In response, the definitions of these core terms were refined, and consistency between visual and textual terminology was improved. These adjustments contributed to more favorable evaluations in the second round, particularly regarding clarity and communication.
Second, with regard to theoretical validity, while the use of foundational theories such as connectivism and extended mind theory was deemed appropriate, experts noted that the linkages between these theoretical underpinnings and the model’s components were not sufficiently explicit. For example, some reviewers felt that the connection between extended mind theory and meta-learning strategies was self-evident and did not require further elaboration, whereas others requested an explicit diagram mapping each theory to specific IPSL components. To address this, the relationships between theory and structure were clearly mapped and conceptually reinforced, which led to improved consensus in the follow-up review.
Third, concerning coherence among components, experts acknowledged the model’s attempt to differentiate between human-exclusive tasks, human–AI collaboration, and AI-delegable tasks. However, some ambiguity remained in how these categories were demarcated. The revised model addressed this issue by explicitly defining the boundaries and roles within each task category, which in turn strengthened structural consistency.
Fourth, in relation to comprehensiveness, while the initial model incorporated meta-learning and human–AI collaboration, several experts expressed concern that it lacked sufficient attention to emotional and ethical dimensions of human experience. In response, human-centered values were more deeply embedded within the model, resulting in broader acknowledgment of its humanistic orientation during the second round.
Fifth, regarding visual communicability, although the overall layout was viewed as intuitive, some confusion arose due to unclear boundaries between model components. To enhance visual readability, revisions included refinements to color contrast and the addition of explanatory text boxes. These enhancements were well received and contributed to stronger agreement in the second review.
Sixth, and finally, in the area of innovativeness, concepts such as existential value pursuit, role-based task distribution, and AI-assisted creativity were recognized as novel and meaningful. However, several experts noted that the model’s distinction from existing instructional design frameworks was not fully articulated. Accordingly, the introduction and conceptual rationale were revised to more clearly position IPSL as a unique contribution to AI-integrated educational design. These changes were positively received and helped solidify consensus on the model’s innovative character.
Taken together, the results of both rounds of expert review—quantitative and qualitative—demonstrate that the IPSL conceptual model achieved high levels of reliability, validity, and educational relevance. Through iterative refinement, the model evolved into a theoretically grounded and practically applicable framework suitable for designing future-oriented, AI-enhanced learning environments.

4.2. Expert Validation of Instructional Design Principles

In the first round of expert evaluation, the instructional design principles of the IPSL framework demonstrated a clear need for revision. The mean scores across the five evaluation criteria ranged from 2.38 to 3.00, with Content Validity Index (CVI) values falling below the generally accepted threshold of 0.80. Particularly low ratings were observed in the areas of clarity (M = 2.38, CVI = 0.38) and comprehensibility (M = 2.38, CVI = 0.38), largely due to concerns that the language used was overly abstract and the sentence structures unnecessarily complex.
Although the remaining criteria—validity (M = 2.75), usefulness (M = 3.00), and universality (M = 2.63)—met the minimum average score requirements, their CVI values also fell short, indicating insufficient expert consensus. Furthermore, the Inter-Rater Agreement (IRA) was calculated as 0.00, revealing a complete lack of consistency in expert ratings across all criteria.
To address these issues, the instructional design principles were extensively revised to enhance clarity, readability, theoretical alignment, and practical applicability.
In the second round of expert review, all criteria received mean scores above 3.00, with CVI values reaching 1.00 for every item—indicating unanimous agreement among reviewers. Notably, both validity and comprehensibility achieved perfect scores (M = 4.00, CVI = 1.00), reflecting significant improvement. The recalculated IRA also reached 1.00, confirming a high level of consistency and consensus across the panel. These results represent a substantial improvement over the first round (IRA = 0.00; CVI = 0.38–0.63), affirming the reliability and educational soundness of the final version (Table 4).
Based on expert feedback, five major revisions were made, corresponding to the five thematic categories of the IPSL framework (Table 5).
First, in the category of Pursuit of Inherent Human Values, experts noted that the concepts of existential value and identity were overly abstract and lacked a clear hierarchical structure. In response, these were consolidated into a single, coherent principle. Additional revisions addressed emotional competence by integrating references to psychological well-being and social connectedness. Furthermore, statements regarding learner agency were streamlined to enhance clarity and focus. These changes were positively received for improving conceptual coherence and practical relevance.
Second, in the category of Value Pursuit Strategies (Meta-Learning), experts highlighted insufficient distinction between goal setting and strategy development. The revised principles addressed this concern by reorganizing the content into a sequential structure that reflected the logic of self-regulated learning. Furthermore, the newly integrated concept of meta-emotion, proposed during the first review, emphasized the importance of emotional self-awareness and regulation—further reinforcing the comprehensive nature of meta-learning.
Third, in the area of Complex Problem Solving in Unpredictable Situations, the original principles were considered verbose and lacking conceptual focus. The revised version introduced complex thinking as a central theme and explicitly emphasized the integration of disciplinary knowledge, real-world context, and AI-based tools in problem-solving. The inclusion of a principle that addressed learners’ ability to resolve ethical conflicts and dilemmas was regarded as a significant enhancement, adding depth to the model’s alignment with socially situated learning.
Fourth, the category of Future-Oriented Capability was restructured to emphasize the development of learnability and the strategic use of AI as a “second brain.” Rather than listing conventional types of knowledge transfer, the revised principles promoted transdisciplinary thinking to encourage learners to synthesize and apply knowledge across disciplinary and contextual boundaries. These modifications were well aligned with the educational goal of cultivating adaptive and future-ready learners.
Fifth, in the category of Human–AI Collaborative Structures, the task roles of humans and AI were more clearly delineated into three categories: tasks performed exclusively by humans, tasks requiring human–AI collaboration, and tasks that could be delegated to AI. Specific examples of AI-delegable tasks—such as repetitive data processing or risk-intensive operations—were added for greater clarity. However, experts expressed differing views on the principle of encouraging learners to engage with AI as a critical peer. While several reviewers praised it for capturing the model’s aim of fostering autonomous and reflective thinking within AI-mediated learning environments, others cautioned that its effective implementation would require substantial scaffolding, especially for younger learners or those with limited AI literacy. This divergence underscored the importance of providing concrete instructional supports to ensure equitable application across diverse learner profiles.
In summary, the initial development of the instructional design principles was based on an extensive literature review and theoretical framework, resulting in an original set of 39 principles distributed across three main categories and eight subcategories (e.g., personal values, community values, meta-learning, and complex thinking). Through two iterative rounds of expert review, this initial set was refined and consolidated into a final set of 18 validated instructional principles, maintaining the original categorical structure while eliminating redundancy and improving clarity, relevance, and applicability. Table 6 summarizes the final set of validated instructional design principles derived through literature synthesis and expert review.
To enhance the practical applicability and interpretability of the 18 instructional design principles, we developed a corresponding set of representative instructional examples. Table 7 presents how each principle can be operationalized in authentic classroom settings across different subject areas and educational levels. These examples are intended to support educators in translating abstract principles into actionable instructional strategies that reflect the core goals of IPSL—namely, human–AI collaboration and future-oriented learning design. Effective enactment of the IPSL principles requires attention to contextual variables such as digital literacy, infrastructure availability, teacher readiness, and learner age. In high-resource settings, advanced AI functionalities can be integrated with minimal constraints, whereas in low-resource environments, simplified AI tools, printed materials, and offline activities may be necessary. For younger learners, principles should be operationalized with visual aids, structured prompts, and high teacher facilitation. These adjustments ensure that the IPSL principles remain adaptable, inclusive, and impactful across diverse educational contexts.

5. Conceptual Model and Design Principles of IPSL

Grounded in both theoretical foundations and expert validation, the Intelligent Problem-Solving Learning (IPSL) model functions as a strategic framework for nurturing future-oriented capabilities. Its primary objective is to equip learners with the adaptability, creativity, and judgment required to address complex and unpredictable problems in AI-mediated contexts. At its core, the model prioritizes the pursuit of inherent human values as the fundamental educational aim. In parallel, it clearly delineates the differentiated roles of humans and AI, ensuring that each contributes according to its distinct strengths within the learning process.
A central feature of IPSL is the use of meta-learning strategies to enable learners to analyze and engage with three distinct categories of tasks:
  • Tasks that must be performed exclusively by humans,
  • Tasks that require collaboration between humans and AI, and
  • Tasks that can be effectively delegated to AI systems.
Within this framework, meta-learning serves as a driver for learners to make strategic decisions regarding task–role allocation. This, in turn, fosters the development of cognitive flexibility, ethical discernment, and adaptive thinking—core attributes of sustainable, future-ready education.
As shown in Figure 3, the IPSL model highlights the close relationship between human–AI task roles, meta-learning strategies, and the growth of learner capabilities. In this design, the two elements reinforce each other: practicing meta-learning strengthens learner capabilities, and stronger capabilities make meta-learning deeper and more effective. This process repeats over time within a collaborative learning environment that combines human agency with AI’s strengths, supporting both individual development and joint problem-solving.

5.1. Pursuit of Inherent Human Values

In the AI era, a central goal of future-oriented education is to help learners recognize and pursue their existential value as human beings. Rapid advances in AI—now capable of matching or even surpassing human performance in creative domains—pose a serious challenge to the meaning and uniqueness of human existence [37]. Rubin [22] warns that if AI capabilities continue to grow, they could disrupt established ideas of identity and intrinsic value, raising fundamental questions about what it means to be human in an increasingly automated world.

5.1.1. Pursuit of Personal Values

First, learners are encouraged to systematically explore their existential value and identity in order to construct a life vision and set meaningful goals. Alamin and Sauri [23] argue that schools in the AI age must create space for students to reflect on personal values and ethical beliefs, prompting fundamental questions such as “Why do I exist?” Through this reflective process, learners rediscover their intrinsic worth and shape a distinct identity within their sociocultural context. In this study, existential value is defined as the learner’s capacity to connect acquired knowledge and skills to a sense of life purpose, personal identity, and social responsibility. For example, in a sustainability-themed project, students may integrate AI-generated environmental data with their own ethical beliefs to design actionable community plans—transforming abstract values into concrete, impactful actions.
Second, the development of emotional competence is essential to sustaining psychological well-being and building healthy interpersonal relationships. The increasing prevalence of automation and digital communication can lead to a decline in direct human interaction, contributing to emotional detachment, isolation, and anxiety [23]. In addition, the rapid pace of technological change may exacerbate stress by disrupting familiar learning environments, work roles, and social expectations. Cultivating emotional regulation, empathy, and interpersonal resilience enables learners to maintain psychological stability and a deeper sense of meaning in their lives—capacities that AI cannot replicate [28].
Third, the IPSL model emphasizes the importance of learner agency. As generative AI becomes more prevalent, learners risk becoming passive recipients of algorithmically generated information, which can weaken their creativity, critical thinking, and emotional insight [51]. To counter this tendency, learners must be equipped to critically evaluate AI-generated content, make socially and ethically sound decisions, and maintain a clear sense of personal purpose. Education, therefore, should not only teach students how to use AI effectively, but also empower them to take ownership of their learning and intentionally design lives that reflect their values and aspirations.

5.1.2. Pursuit of Community Values

From the perspective of community values, the IPSL framework emphasizes the development of learners’ ethical consciousness and social responsibility in the context of rapidly advancing AI technologies. While AI can process vast amounts of data with speed and efficiency, it lacks the capacity for ethical judgment. Moreover, AI-generated outputs may reflect bias, misinformation, or violations of privacy [23]. Therefore, it is imperative that learners be equipped to make autonomous and ethically sound decisions, particularly as they navigate emerging dilemmas in an increasingly complex digital society.
The first principle focuses on guiding learners to internalize ethical values and engage in continuous reflection on their relevance and application. Ethical literacy is not static; it requires ongoing evaluation and adjustment in light of evolving social norms, technologies, and global challenges. Education should support learners in developing this reflective capacity as a foundation for responsible citizenship and decision-making in AI-mediated contexts.
The second principle centers on the recognition and practice of human dignity as a foundational moral value. In a time when education is increasingly influenced by technology and automation, it becomes all the more important to reaffirm the uniqueness of human beings. Beyond technical competence, education must cultivate moral character and reinforce the belief that human dignity is inviolable. Teachers play a crucial role in this process, guiding students to make ethically grounded decisions rooted in empathy, respect, and shared humanity [23].
The third principle encourages learners to uphold and advance public values within their communities. As AI continues to reshape education, economics, and social systems, the complexity and uncertainty of societal issues are also intensifying. Without a strong ethical orientation toward the public good, AI-driven progress may result in fragmentation, inequality, or even social destabilization. Thus, IPSL calls for fostering a deep sense of responsibility in learners to contribute to the sustainability and well-being of their communities by prioritizing collective values over purely individual or technical outcomes.

5.2. Strategic Approaches for Value Pursuit

Pursuing the inherent value of human existence—especially in ways that AI cannot replicate—requires a clear set of strategies suited to AI-integrated learning environments. In the IPSL framework, these strategies focus on three interconnected areas. The first is the development of meta-learning, which serves as a foundation for self-directed and reflective learning. The second is enhancing learners’ ability to solve unpredictable and complex problems. The third is cultivating future-oriented capabilities that support flexible adaptation and creative responses in uncertain contexts.

5.2.1. Meta-Learning

Meta-learning, often described as “learning how to learn,” refers to the learner’s ability not only to acquire knowledge but also to understand and regulate their own learning processes through reflective thinking [52,53]. This capability allows learners to plan, monitor, and adjust their learning continuously, making it a fundamental competency in the age of artificial intelligence.
In contemporary educational contexts—where information retrieval and routine problem-solving are increasingly handled by AI—human adaptability and cognitive flexibility have emerged as essential educational advantages [52]. While AI systems can generate and deliver vast amounts of information, it remains the learner’s responsibility to critically interpret, contextualize, and apply that information to real-world problems.
Meta-learning comprises two interrelated dimensions: metacognition and meta-emotion. Metacognition refers to the awareness and regulation of one’s own thinking processes, including the ability to plan, monitor, and evaluate learning strategies. Meta-emotion refers to the awareness, understanding, and regulation of one’s emotions during the learning process, as well as the ability to respond constructively to others’ emotions. Instructionally, this can involve activities such as guided reflection logs after AI-assisted debates, where learners analyze their emotional responses to conflicting viewpoints and adjust their communication strategies accordingly. These dimensions work together to enhance the quality and depth of learning experiences [52]. Without sufficient metacognitive skills, learners may fall into a state of “metacognitive laziness”, passively accepting AI-generated content without engaging in critical evaluation or reflective thinking. Research shows that overreliance on generative AI can reduce cognitive engagement and suppress the development of independent thinking [46,53]. Thus, meta-learning is not only vital for navigating AI tools effectively but also for positioning AI as a cognitive partner—a facilitator that enhances rather than replaces human agency in learning.
The IPSL model incorporates three key instructional design principles related to meta-learning:
First, learners should be supported in setting meaningful and self-directed learning goals. Goal-setting serves as the foundation of self-regulated learning [27,54]. In AI-enhanced environments, where information overload is common and direction can easily be lost, clearly defined goals help learners prioritize and filter relevant knowledge [39,55,56].
Second, learners must develop the ability to independently plan, monitor, and revise their learning strategies. These executive skills allow learners to manage their cognitive processes and adopt strategies that maximize learning efficiency [57,58]. Given the accelerating pace of technological and epistemic change, learners must become flexible, autonomous, and capable of lifelong learning. As Garrison and Akyol [59] note, excessive dependence on AI can lead to uncritical acceptance of information, further underscoring the need for strong metacognitive regulation in digital learning environments.
Third, learners need to cultivate meta-emotional competence—the ability to recognize, understand, and regulate emotions during the learning process [26,29]. While positive emotions foster motivation and engagement, the effective management of negative emotions contributes to resilience and perseverance [60,61]. In AI-mediated learning environments, where human emotional support may be limited [62], such meta-emotional skills are even more critical. Furthermore, learners should be encouraged to develop empathy and engage in emotionally responsive communication with others. This deeply human capacity for emotional connection and mutual understanding cannot be replicated by AI and is essential for creating sustainable, human-centered learning environments.

5.2.2. Solving Unpredictable and Complex Problems

Contemporary educational theory has long emphasized the importance of developing learners’ ability to solve ill-structured problems—those that reflect the ambiguity, complexity, and uncertainty of real-world situations. Jonassen [30] criticized conventional schooling for relying too heavily on overly structured problems that fail to capture the dynamic nature of actual problem contexts. In reality, learners often face situations involving multiple variables, conflicting interests, and incomplete or evolving information, while traditional instruction tends to present simplified and static scenarios. This disconnect creates a significant gap between classroom learning and the competencies required in real-life contexts.
With the accelerating pace of technological advancement and the societal transformations driven by artificial intelligence, learners are increasingly required to address unpredictable and complex problems that cannot be resolved using standard procedures or pre-existing knowledge. These problems often feature multilayered structures, interdependent variables, and dynamic conditions that evolve over time and lead to uncertain outcomes [63,64]. To prepare learners for such challenges, IPSL emphasizes a set of instructional design strategies that foster cognitive adaptability and integrative reasoning.
Learners should be trained to consider multiple variables, diverse stakeholders, and contextual uncertainties when solving problems. As Jonassen [30] notes, linear and reductionist thinking cannot address real-world complexity. Instructional approaches should therefore present learners with authentic problems that include competing values and perspectives, encouraging nuanced and holistic reasoning [34].
In addition, learners must be equipped to address fusion problems—problems that require the integration of academic knowledge, real-life experience, and AI-supported tools. Lombardi [34] emphasized that meaningful learning occurs when knowledge is applied to authentic, context-rich challenges. In today’s world, pressing societal issues such as climate change, algorithmic bias, or digital ethics demand a multidisciplinary approach that often involves human–AI collaboration [24,25,31].
Finally, learners must develop the capacity to manage ethical dilemmas and conflicting values that naturally arise during problem-solving processes. The rapid digital transformation powered by AI has intensified the complexity of such dilemmas, requiring learners to evaluate diverse viewpoints critically and construct socially responsible solutions [65,66]. Education should thus move beyond the pursuit of singular correct answers and instead guide learners to synthesize varied inputs and generate viable, context-sensitive resolutions.

5.2.3. Future-Oriented Capability

Traditional models of education and talent development have largely focused on competency—the mastery of specific knowledge and skills required to perform known tasks. However, in a rapidly changing and uncertain world, competency alone is no longer sufficient. What is increasingly needed is capability—the learner’s capacity to adapt, create, and act effectively in unfamiliar and evolving situations.
IPSL positions capability development as central to preparing learners for future-oriented education. Unlike competency, which often relates to predetermined outcomes, capability encompasses flexibility, curiosity, and continuous learning. Learners must be supported in developing the ability not only to acquire new skills, but also to unlearn outdated knowledge and approaches when necessary [32,33,67]. This learnability is now widely recognized as a critical criterion for employability and long-term professional growth.
Learners should be encouraged to view artificial intelligence not only as a functional tool, but as a “second brain” that extends their cognitive capacity. In effective human–AI collaboration, AI can expand data processing, support deeper analytical thinking, and enhance decision-making [68]. To benefit from these advantages, learners must understand how to engage AI strategically in their thinking processes, enabling them to approach problems from broader and more flexible perspectives [38,69].
Equally important is the cultivation of transdisciplinary thinking. Learners should be encouraged to apply their knowledge across disciplinary boundaries and transfer it to novel, unfamiliar contexts. Addressing complex societal issues—such as digital surveillance, public health, or climate resilience—requires integrated perspectives that transcend traditional subject-area silos [70,71]. Research indicates that transdisciplinary approaches not only enhance knowledge transfer but also foster creativity and innovation in problem-solving across diverse domains [72].

5.3. Human–AI Collaborative Structures

As AI continues to reshape knowledge work and cognitive tasks, educational models must clearly define the respective roles of humans and AI in learning. In the IPSL framework, this is achieved through three task categories: human-exclusive tasks, human–AI collaborative tasks, and AI-delegable tasks. This structure supports targeted instructional strategies that preserve human dignity, make full use of AI’s technological strengths, and prepare learners to solve problems in hybrid human–AI environments.
To clarify the theoretical foundations that support this structure, we explicitly mapped how three core theories—connectivism, extended mind theory, and augmented intelligence—inform the IPSL framework. Connectivism provides the foundational logic for IPSL’s human–AI collaborative learning by positioning AI systems as active nodes in learners’ knowledge networks, thereby justifying instructional strategies that engage learners in co-constructive problem-solving with AI and validating the integration of AI into the learning environment as a cognitive node within a broader knowledge network. Extended mind theory offers the philosophical grounding for treating AI tools as integral to memory, reasoning, and decision-making—functions not confined to the biological brain but dynamically distributed across human–machine systems—and for guiding learners to offload routine tasks so they can focus on higher-order decision-making. Finally, augmented intelligence reframes the instructional division of cognitive labor, justifying IPSL’s task–role taxonomy by aligning AI strengths with efficiency and human roles with ethical discernment, reflection, and creativity.
These theoretical foundations directly inform the IPSL model’s task role classification. Tasks involving ethical judgment, moral agency, or existential reflection are categorized as human-exclusive because they depend on contextual sensitivity, empathy, and normative reasoning—capacities that current AI technologies cannot replicate. Conversely, tasks characterized by high volume, repetition, or physical risk are delegated to AI in line with automation theory and cognitive load principles. Human–AI collaborative tasks, such as multiperspective analysis, co-creative exploration, and reflective dialogue with AI, are designed to activate hybrid intelligence and scaffold learners’ metacognitive and adaptive thinking skills.
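The tripartite classification described above can be expressed as a small decision rule. The following Python sketch is purely illustrative: the `Task` attributes, role labels, and branching order are assumptions introduced here for clarity, not definitions taken from the IPSL model itself.

```python
from dataclasses import dataclass
from enum import Enum

class TaskRole(Enum):
    HUMAN_EXCLUSIVE = "human-exclusive"
    COLLABORATIVE = "human-AI collaborative"
    AI_DELEGABLE = "AI-delegable"

@dataclass
class Task:
    # Attribute names are hypothetical stand-ins for the criteria in the text.
    name: str
    requires_ethical_judgment: bool = False   # ethical judgment, moral agency, reflection
    high_volume_or_repetitive: bool = False   # high volume or repetition
    physically_hazardous: bool = False        # physical risk

def classify(task: Task) -> TaskRole:
    """Sketch of the IPSL tripartite task-role taxonomy."""
    # Normative reasoning and moral agency stay with human learners.
    if task.requires_ethical_judgment:
        return TaskRole.HUMAN_EXCLUSIVE
    # Repetitive, data-heavy, or risky work is delegated to AI.
    if task.high_volume_or_repetitive or task.physically_hazardous:
        return TaskRole.AI_DELEGABLE
    # Everything else defaults to hybrid human-AI collaboration.
    return TaskRole.COLLABORATIVE
```

For example, `classify(Task("auto-grade quizzes", high_volume_or_repetitive=True))` would yield the AI-delegable category, while a value-laden policy dilemma would remain human-exclusive.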

5.3.1. Tasks Exclusive to Humans

Even in the AI era, some aspects of human nature remain beyond the reach of machines. These include ethical reasoning, emotional depth, and existential reflection. Education must continue to affirm human uniqueness and dignity, particularly when AI’s growing efficiency and capabilities might encourage overreliance. While AI may outperform humans in specific technical domains, excessive dependence risks undermining human creativity, autonomy, and ultimately, the sense of purpose that defines meaningful learning and living.
To safeguard these human dimensions, the IPSL framework emphasizes the importance of clearly identifying and preserving tasks that should be, or must be, carried out exclusively by humans—even when AI is technically capable of doing so.
First, learners should be guided to make final decisions based on their own values and ethical reasoning, particularly in socially and culturally embedded contexts. Although AI can offer algorithmic suggestions based on data patterns, it lacks the moral agency required to weigh competing values or assess ethical consequences. Therefore, human learners must take full responsibility for ethical judgment and decision-making throughout the learning process [23,37,73].
Second, students must develop a clear understanding of the ethical use and governance of AI. While AI systems function through data-driven logic, it is ultimately humans who define the normative frameworks—philosophical, legal, and social—that determine how AI should operate. These judgments require human-level abstraction and deliberation, which no algorithm can replicate [22,51].
Third, learners should be equipped to strategically allocate roles among tasks that are human-exclusive, human–AI collaborative, or AI-delegable. Understanding when and how to involve AI, and when to retain human responsibility, is a core competency for future professional and civic life. For collaboration to be effective, learners must be aware of AI’s affordances and limitations and possess the critical insight to design partnerships in which human and machine contributions are meaningfully integrated [37].
To strengthen the operationalization of ethical education within the IPSL framework, it is essential to move beyond macro-level statements and provide concrete instructional tools that enable learners to engage in ethical decision-making. One promising approach is to integrate simulation-based environments, such as the AI Ethics Sandbox. In such environments, learners encounter virtual dilemmas involving algorithmic bias, data privacy, or autonomous decision-making in education or healthcare.
Through these simulated scenarios, learners are tasked with identifying ethical tensions, evaluating stakeholder perspectives, and making morally grounded decisions—while receiving feedback on the ethical implications of their choices. This process fosters moral reasoning while preparing students to navigate AI-related ethical challenges.
Embedding such tools into the IPSL design principles enhances their instructional usability and aligns ethical learning with learners’ agency and responsibility. Ultimately, these practices ensure that learners are not merely passive recipients of AI outcomes but active, reflective agents who uphold ethical integrity in increasingly automated decision spaces.

5.3.2. Human–AI Collaborative Tasks

Recent advances in artificial intelligence (AI) have underscored the potential of human–AI collaboration to enhance collective intelligence, yielding meaningful implications for education [18,35,74,75]. Studies have demonstrated that when humans and AI operate in complementary roles, they can produce more rational, creative, and nuanced decisions by reducing cognitive biases and expanding problem-solving capabilities [35,75].
In response to this, the IPSL model proposes three instructional design principles to foster effective human–AI collaboration in learning environments.
First, learners should be guided to define and approach problems from multiple perspectives in collaboration with AI. AI excels at processing large-scale and complex datasets, often exceeding the limits of human cognitive capacity [9]. When combined with human intuition and creativity, this synergy enables learners to restructure ill-defined problems in more integrated and multidimensional ways. Such tasks reposition learners from passive recipients of AI-generated output to active agents who co-investigate, reinterpret, and construct novel solutions informed by diverse data sources and perspectives.
Second, instructional designs should promote co-creativity, where learners use AI not as a substitute but as a catalyst for creative exploration. AI tools can offer diverse prompts, analogies, and iterations that expand learners’ cognitive reach, especially in artistic, design-based, and interdisciplinary learning contexts [10,76]. By helping to overcome fixation—rigid adherence to conventional thinking—AI facilitates divergent thinking and opens access to otherwise inaccessible ideas.
Third, learners should be encouraged to engage with AI as a critical peer—a dialogic partner that not only provides information but also challenges assumptions, stimulates reflection, and promotes metacognitive development. Generative AI can prompt learners to examine problems from alternative angles, ask deeper questions, and refine their understanding through iterative reasoning [77,78]. This interaction fosters intellectual autonomy and encourages learners to become reflective and responsible thinkers in AI-mediated environments.
In sum, the augmentation of collective intelligence through human–AI collaboration must go beyond efficiency. It should intentionally cultivate learners’ creativity, critical thinking, and multidimensional reasoning. The three instructional principles—multi-perspective problem framing, co-creativity, and interaction with AI as a critical peer—are foundational to transforming learners into active, creative knowledge producers, rather than passive users of technology in the AI era.
To ensure equitable implementation of the “AI as a critical peer” principle across diverse learner profiles, targeted scaffolding strategies should be embedded. These may include (a) modeling AI–human interaction through teacher-led demonstrations before expanding to student-led dialogues (younger learners); (b) constraining AI use to small-group activities supplemented by offline resources (resource-limited settings); and (c) providing AI-output analysis checklists and teacher-facilitated critical review sessions (low AI literacy contexts). In addition, instructional designs should anticipate potential constraints—such as limited infrastructure, low digital literacy, or teacher resistance—and integrate mitigation strategies accordingly. Addressing these considerations can help translate the principle from a normative guideline into a practically viable classroom strategy.
To enhance the practical relevance of the IPSL framework, it is crucial to illustrate how its theoretical foundations—connectivism, extended mind theory, and augmented intelligence—can be operationalized into instructional practice. For instance, in a high school science classroom, learners may use AI-based simulation tools to predict environmental changes based on real-time climate data. This practice embodies the extended mind theory by distributing cognitive processing across human–AI systems, while simultaneously encouraging learners to engage in metacognitive reflection about prediction models—thus enacting core meta-learning strategies.
In language education, students might collaborate with generative AI to co-author stories or essays. Here, augmented intelligence facilitates creative expansion, enabling learners to test stylistic variations and refine narrative structure while maintaining human-led judgment and thematic intent. Teachers guide learners to evaluate AI-suggested revisions critically, reinforcing both agency and ethical consideration in text production.
In social studies or civic education, AI-enabled simulations can support learners in exploring complex socio-political scenarios—such as decision-making in pandemic responses or climate justice debates. Learners take on stakeholder roles, weigh competing values, and consult AI agents trained on policy data. This approach exemplifies connectivism in action: knowledge construction through distributed, networked, and collaborative processes.
These instructional scenarios demonstrate how IPSL bridges theory and practice. Each example aligns with a core learning principle—distributed cognition, ethical discernment, or co-creativity—and transforms abstract theory into contextualized learning strategies. By integrating these use cases, IPSL not only affirms its theoretical coherence but also strengthens its applicability across subject areas and learning environments.

5.3.3. Tasks Delegated to AI

As artificial intelligence (AI) becomes increasingly integrated into educational practice, collaborative problem-solving between learners and AI systems is rapidly becoming a practical reality. A key challenge in instructional design is determining which tasks should be appropriately delegated to AI without compromising the learner’s engagement, critical thinking, or sense of agency. Within the IPSL framework, three instructional design principles guide the identification and use of AI-delegable tasks.
The first principle highlights that repetitive and structured tasks—such as automated grading, instant feedback, content summarization, or scheduling—are best assigned to AI systems. These tasks often consume a significant portion of instructional time but contribute minimally to higher-order thinking. Delegating such tasks to AI allows educators and learners to focus on creative, reflective, and strategic dimensions of learning [79,80]. This approach not only improves efficiency but also fosters deeper engagement in self-directed learning and sustained problem-solving [36].
The second principle pertains to large-scale and cognitively intensive data processing. In contemporary educational settings, students are increasingly required to analyze complex datasets or navigate multilayered information structures. AI tools can effectively identify patterns, extract correlations, and generate real-time analytics that would be difficult for humans to process manually [81,82]. For example, in scientific research or social science contexts, AI can assist learners in interpreting big data, thereby freeing cognitive resources for conceptual understanding and critical analysis [10]. However, this delegation must be complemented with instructional designs that explicitly encourage critical reflection, ensuring that learners maintain active roles in interpreting AI-generated insights.
The third principle involves delegating physically hazardous or endurance-based tasks to AI. Laboratory experiments involving toxic substances, simulations of dangerous environments, or extended observation in field research may pose safety risks or induce fatigue. AI-driven robotics and virtual environments can mitigate these risks by taking over high-risk or monotonous components, allowing learners to safely engage with the theoretical and interpretive dimensions of the task [36]. Unlike humans, AI systems do not tire and are capable of sustaining operations over long durations, making them ideal for such scenarios.
Importantly, these instructional principles for AI task delegation are designed to maximize educational synergy between human and machine intelligence. However, for this synergy to be meaningful, learners must simultaneously develop both AI literacy and critical thinking skills [83]. Without this dual development, there is a risk of cognitive offloading, where learners passively accept AI outputs without reflective engagement. Educators must therefore create learning environments that actively prompt students to question, critique, and validate AI-generated information. Through such interactions, learners develop a clear sense of human-led decision-making and moral accountability, which are essential for ethical and effective learning in AI-integrated educational systems.
To ensure responsible use of AI in delegated tasks, it is essential to integrate an ethical oversight mechanism into the instructional design process. While AI can efficiently handle data-heavy, repetitive, or hazardous tasks, educators must remain vigilant about the ethical implications of AI outputs. Therefore, IPSL recommends that all AI-delegated activities include a structured review process whereby learners are guided to (1) critically assess the accuracy, bias, and fairness of AI-generated results, (2) reflect on the potential social or ethical consequences of relying on those outputs, and (3) make final decisions based on human judgment. Teachers should act as facilitators who help learners develop ethical awareness and accountability, ensuring that AI is not only used effectively but also responsibly. By embedding ethical reflection into AI-delegated workflows, learners develop a deeper understanding of their role as decision-makers in AI-mediated environments.
The IPSL framework proposes three categories of task division: human-exclusive, collaborative, and AI-delegable tasks. These categories are not fixed. As AI technologies evolve, the cognitive, ethical, and operational boundaries between them may change. To address this, instructional design should include a dynamic task role allocation mechanism that can adjust responsibilities over time.
One promising approach is to incorporate a Technology Readiness and Maturity Assessment (TRMA) [84] to periodically evaluate the capabilities, risks, and limitations of AI tools in specific educational contexts. For instance, tasks that currently require human judgment—such as formative assessment or ethical analysis—may become partially automatable in the future, depending on the development of explainable AI and regulatory oversight. Conversely, tasks initially considered AI-delegable may revert to human control if ethical concerns or contextual sensitivity arise.
Educators should be equipped to recalibrate task assignments based on factors such as technological maturity, learner characteristics, task complexity, and ethical considerations. Within the IPSL framework, this flexible design—the “task elasticity mechanism” [85]—refers to the dynamic reallocation of responsibilities between humans and AI as conditions evolve, enabling continuous adjustment of role boundaries that preserves human-centered learning while maximizing AI affordances. The mechanism is guided by specific evaluation metrics, including learner cognitive load, task completion accuracy, ethical risk level, and the degree of creativity required, and operates through a continuous three-step adjustment cycle: (1) periodic assessment of task alignment with the comparative strengths of humans and AI—conducted, for example, monthly or at key project milestones; (2) role reallocation based on assessment results and the maturity level of AI capabilities; and (3) feedback integration to refine allocation strategies in the next cycle. Through these iterative adjustments, IPSL keeps the human–AI task distribution optimally balanced, responsive to technological developments, and aligned with both learning objectives and ethical standards.
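The three-step task-elasticity cycle described above can be sketched as a simple reassessment routine. The metric names follow the text (cognitive load, accuracy, ethical risk, creativity required), but all thresholds, the role labels, and the `ai_maturity` parameter are hypothetical illustrations rather than values prescribed by the framework.

```python
from dataclasses import dataclass

@dataclass
class TaskMetrics:
    cognitive_load: float       # 0..1, learner load observed for the task
    accuracy: float             # 0..1, task completion accuracy
    ethical_risk: float         # 0..1, assessed ethical risk level
    creativity_required: float  # 0..1, degree of creativity the task demands

def reallocate(m: TaskMetrics, ai_maturity: float) -> str:
    """One pass of the elasticity cycle (assess, then reallocate); thresholds are illustrative."""
    # Step 1: assess. High ethical risk or creativity demand keeps the task human-led.
    if m.ethical_risk > 0.7 or m.creativity_required > 0.7:
        return "human-exclusive"
    # Step 2: reallocate. Mature AI plus heavy learner load and low risk favors delegation.
    if ai_maturity > 0.8 and m.cognitive_load > 0.6 and m.ethical_risk < 0.3:
        return "ai-delegable"
    # Default to hybrid collaboration. Step 3 (feedback integration) would feed the
    # observed accuracy back into the next cycle's threshold calibration.
    return "collaborative"
```

Running such a routine at each review milestone makes the reallocation criteria explicit and auditable, which matches the cyclical, evidence-based adjustment the text calls for.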
To ensure that the IPSL tripartite task taxonomy remains valid and relevant as AI technologies advance, both short-term and long-term dynamic adjustment strategies should be embedded into the human–AI collaboration structure. In the short term, an annual technical review can be conducted to evaluate emerging AI capabilities, limitations, and associated risks, updating the classification criteria where necessary. In the long term, a comprehensive re-evaluation cycle should be implemented every 2–3 years, combining the Technology Readiness and Maturity Assessment (TRMA) with an educational needs analysis to revalidate or reassign task categories. This cyclical approach aligns with the flexible frameworks outlined in [83,85], ensuring that IPSL remains adaptable to technological developments while safeguarding human-centered learning priorities.
Furthermore, training learners to recognize these shifting boundaries promotes critical digital literacy. Learners must understand not only what AI can do, but also when and why to include or exclude AI in different phases of learning and problem-solving. By fostering this meta-awareness, IPSL supports the development of adaptive, ethically grounded learners who can navigate evolving human–AI ecosystems with discernment and responsibility.
While the IPSL framework emphasizes augmentation through human–AI collaboration, it is important to acknowledge potential risks such as over-delegation, overtrust in AI outputs, and cognitive disengagement. Recent literature [86,87,88] has also identified automation complacency, algorithm aversion, and ethical fatigue as emerging challenges that can diminish learner engagement and decision-making quality. Furthermore, hybrid tension exists in that not all AI collaboration leads to augmentation; in some cases, it may foster overdependence, reducing opportunities for critical and creative thinking. Addressing these risks through targeted scaffolding, explicit critical thinking prompts, and human-in-the-loop [89,90] designs can help maintain the integrity and intended impact of IPSL principles.
This study proposed Intelligent Problem-Solving Learning (IPSL) as a novel learning model tailored for the demands of the artificial intelligence (AI) era. Drawing on a comprehensive literature review and a two-phase expert validation process, the study developed a conceptual framework comprising three major categories and eight subcategories, culminating in the formulation of 18 instructional design principles.
At its core, the IPSL model places the pursuit of human existential value at the center of educational aims. It distinguishes the roles of humans and AI across three task domains—those that must be performed exclusively by humans, those requiring human–AI collaboration, and those that can be delegated to AI. Through meta-learning, learners are guided to strategically determine how to approach each task type, thereby enhancing their future-oriented capability to solve complex and unpredictable problems. A key feature of IPSL is this integration of role differentiation into instructional design, providing a concrete and actionable structure for human–AI collaboration.
Theoretically, IPSL offers several distinct contributions compared to existing instructional design models. First, it shifts the focus from competency-based education to capability-based learning, emphasizing adaptability, creativity, and transferability over the mastery of fixed knowledge [6,7]. Unlike fixed competencies, capabilities refer to a learner’s potential to continuously learn, unlearn, adapt, and act with confidence in unfamiliar and evolving contexts. Second, it advances a model of AI-integrated instructional design that moves beyond human-to-human collaboration, recognizing AI as a legitimate learning partner. This approach is grounded in Extended Mind Theory [4,5], which posits that cognitive processes extend into external tools, such as AI. Third, IPSL explicitly recenters human values and responsibilities in education. In a time when AI challenges human agency in creativity and decision-making, IPSL affirms that ethical judgment and final decisions must remain human-led [22].
Practically, the IPSL design principles offer clear strategies for implementation. By delegating repetitive or data-intensive tasks to AI, educators and learners can allocate more time to higher-order thinking, creativity, and reflection. When applied effectively, IPSL enhances both instructional efficiency and learner engagement.
Overall, IPSL provides a robust framework for teaching with AI as a learning partner, rather than merely teaching about AI as a subject. It offers a future-oriented paradigm that fosters co-creativity, critical thinking, and human–AI synergy in the classroom. Learners are encouraged to reconstruct problems from diverse perspectives and generate solutions in partnership with AI—aligning with research that shows how collaborative intelligence enhances decision-making and innovation [10,19].
Importantly, IPSL promotes a balanced approach to AI integration: one that embraces technological innovation while safeguarding human-centered judgment, autonomy, and ethical reflection. Several design principles within the model explicitly warn against metacognitive laziness and overreliance on AI, underscoring the importance of maintaining learner agency. In doing so, IPSL provides practical guidance for educators seeking to leverage AI’s affordances without compromising the fundamental human values that lie at the heart of meaningful education.

6. Future Research and Recommendations

To further strengthen the validity, instructional utility, and generalizability of the IPSL framework, several directions for future research are recommended. First, quasi-experimental studies should compare IPSL-based instruction with conventional teaching models, evaluating outcomes such as problem-solving performance, ethical reasoning, and meta-learning development. Such studies would provide empirical evidence to support the design validity established in this paper and assess IPSL’s effectiveness in real-world educational settings.
In addition, future research should examine how IPSL design principles can be effectively adapted to differentiated instructional contexts. While the current framework offers universal guidance, its implementation may vary significantly depending on learners’ age, subject matter, and cognitive or emotional characteristics. For example, in primary education, IPSL-based strategies may require simplified language, visual scaffolds, and stronger emotional support. In contrast, learners in higher education may engage more independently with complex metacognitive tasks and AI-augmented inquiry. Moreover, special attention should be given to diverse learner populations, such as students with disabilities or neurodiverse profiles. Instructional adaptations may include the use of assistive technologies, multi-sensory learning environments, and differentiated human–AI role assignments.
To systematically guide this adaptation process, it is recommended that future research adopt a phased validation approach—beginning with pilot implementations across K–12, higher education, and inclusive classrooms, followed by iterative refinements based on stakeholder feedback. Such studies will enable continuous improvement of the design principles and ensure contextual adaptability.
Second, the professional development of educators should be a priority. The success of IPSL depends on teachers’ ability to design and facilitate AI-integrated learning environments. Future research should investigate training programs, instructional scaffolds, and institutional supports that can help educators apply IPSL principles effectively.
Third, learner-centered research is needed to understand how students develop AI literacy, ethical judgment, and learning agency within IPSL-based environments. This includes identifying pedagogical strategies that enable learners to critically engage with AI tools while preserving autonomy and human-centered decision-making.
Finally, given the rapid evolution of AI technologies, the IPSL framework must remain responsive to emerging innovations and educational trends. Future research should investigate how IPSL principles can be updated, extended, or reconfigured to address new AI capabilities and diverse learner needs, with particular attention to the risks of AI-delegation failure—especially when AI-generated outputs are persuasive yet incorrect. Comparative studies exploring cultural differences in AI perception and acceptance would further enhance the framework’s global relevance and adaptability. Collectively, these research directions will help ensure that IPSL continues to evolve as a sustainable, inclusive, and future-ready instructional model that empowers both educators and learners to thrive in AI-augmented learning ecosystems.
While ethical decision-making is addressed under “human-exclusive tasks,” a more systematic analysis of the risks associated with AI integration in education is necessary. Potential risks include data bias, which can perpetuate or amplify social inequalities, and overreliance on AI, which may lead to diminished critical thinking and learner autonomy. To mitigate these risks, future research should draw on AI ethics literature [24,31,73,74,86] to develop evidence-based strategies such as embedding bias detection protocols, promoting transparency in AI decision-making processes, and incorporating structured “human-in-the-loop” checkpoints into instructional design. Additionally, integrating reflective activities that encourage learners to question AI-generated outputs can help maintain human agency and ensure that AI serves as a supportive partner rather than a substitute for human judgment.
In conclusion, the IPSL model presented in this study offers a forward-looking framework for developing learner capabilities aligned with the demands of future society. Rather than viewing AI as a threat to education, it positions AI as a strategic partner—balancing technological innovation with human-centered learning while preserving core human responsibilities. With continued research and implementation, IPSL has the potential to serve as a catalyst for educational innovation, shaping a new generation of learning environments in which artificial intelligence and human intelligence coexist and co-evolve. By centering human values, adaptive thinking, and collaborative intelligence, and by providing transferable design principles grounded in human–AI collaboration and capability development, IPSL supports the creation of resilient, inclusive, and ethically informed digital learning systems aligned with long-term sustainability goals and sustainability education frameworks.

Author Contributions

Conceptualization, Y.L. and S.-S.L.; methodology, S.-S.L.; formal analysis, Y.L.; investigation, Y.L. and S.-S.L.; visualization, Y.L.; writing—original draft preparation, Y.L. and S.-S.L. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The original contributions presented in this study are included in the article. Further inquiries can be directed to the corresponding author.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Utecht, J.; Keller, D. Becoming relevant again: Applying connectivism learning theory to today’s classrooms. Crit. Quest. Educ. 2019, 10, 107–119. Available online: https://eric.ed.gov/?id=EJ1219672 (accessed on 27 May 2024).
  2. Siemens, G. Connectivism: Learning as network-creation. ASTD Learn. News 2005, 10, 1–28.
  3. Downes, S. Places to go: Connectivism & connective knowledge. Innovate J. Online Educ. 2008, 5, 6. Available online: https://nsuworks.nova.edu/innovate/vol5/iss1/6 (accessed on 27 May 2024).
  4. Clark, A.; Chalmers, D. The extended mind. Analysis 1998, 58, 7–19.
  5. Paul, A.M. Extended Mind: The Power of Thinking Outside the Brain; Mariner Books: Boston, MA, USA, 2022.
  6. Fraser, S.W.; Greenhalgh, T. Coping with complexity: Educating for capability. BMJ 2001, 323, 799–803.
  7. Jain, V.; Oweis, E.; Woods, C.J. Mapping the distance: From competence to capability. ATS Sch. 2023, 4, 400–404.
  8. Sakata, N. Capability approach to valued pedagogical practices in Tanzania: An alternative to learner-centered pedagogy? J. Hum. Dev. Capab. 2021, 22, 663–681.
  9. Dede, C.; Etemadi, A.; Forshaw, T. Intelligence Augmentation: Upskilling Humans to Complement AI; The Next Level Lab at the Harvard Graduate School of Education, President and Fellows of Harvard College: Cambridge, MA, USA, 2021.
  10. Ali Elfa, M.A.; Dawood, M.E.T. Using artificial intelligence for enhancing human creativity. J. Art Des. Music 2023, 2, 3.
  11. Xing, Z.; Ma, G.; Wang, L.; Yang, L.; Guo, X.; Chen, S. Towards visual interaction: Hand segmentation by combining 3D graph deep learning and laser point cloud for intelligent rehabilitation. IEEE Internet Things J. 2025, 12, 21328.
  12. Xing, Z.; Meng, Z.; Zheng, G.; Ma, G.; Yang, L.; Guo, X.; Tan, L.; Jiang, Y.; Wu, H. Intelligent rehabilitation in an aging population: Empowering human-machine interaction for hand function rehabilitation through 3D deep learning and point cloud. Front. Comput. Neurosci. 2025, 19, 1543643.
  13. Gerlich, M. AI Tools in Society: Impacts on Cognitive Offloading and the Future of Critical Thinking. Societies 2025, 15, 6.
  14. Kosmyna, N.; Hauptmann, E.; Yuan, Y.T.; Situ, J.; Liao, X.H.; Beresnitzky, A.V.; Braunstein, I.; Maes, P. Your Brain on ChatGPT: Accumulation of Cognitive Debt When Using an AI Assistant for Essay Writing Task. arXiv 2025, arXiv:2506.08872.
  15. Lee, H.-P.; Sarkar, A.; Tankelevitch, L.; Drosos, I.; Rintel, S.; Banks, R.; Wilson, N. The Impact of Generative AI on Critical Thinking: Self-Reported Reductions in Cognitive Effort and Confidence Effects from a Survey of Knowledge Workers. In Proceedings of the 2025 CHI Conference on Human Factors in Computing Systems, Yokohama, Japan, 26 April–1 May 2025; pp. 1–22.
  16. AlDahdouh, A.A.; Osório, A.; Caires, S. Understanding knowledge network, learning and connectivism. Int. J. Instr. Technol. Distance Learn. 2015, 12, 3–21.
  17. Hase, S. Learner defined curriculum: Heutagogy and action learning in vocational training. S. Inst. Technol. J. Appl. Res. 2011, 1, 1–10. [Google Scholar]
  18. Li, D.H.; Towne, J. How AI and human teachers can collaborate to transform education. In Proceedings of the World Economic Forum Annual Meeting, Davos-Klosters, Switzerland, 20–24 January 2025; Available online: https://www.weforum.org/stories/2025/01/how-ai-and-human-teachers-can-collaborate-to-transform-education/ (accessed on 15 January 2025).
  19. Dellermann, D.; Calma, A.; Lipusch, N.; Weber, T.; Weigel, S.; Ebel, P. The future of human–AI collaboration: A taxonomy of design knowledge for hybrid intelligence systems. arXiv 2021, arXiv:2105.03354. [Google Scholar]
  20. Gerlich, M. Perceptions and Acceptance of Artificial Intelligence: A Multi-Dimensional Study. Soc. Sci. 2023, 12, 502. [Google Scholar] [CrossRef]
  21. Gerlich, M. Exploring Motivators for Trust in the Dichotomy of Human–AI Trust Dynamics. Soc. Sci. 2024, 13, 251. [Google Scholar] [CrossRef]
  22. Rubin, C. Artificial intelligence and human nature. New Atlantis 2003, 1, 88–100. Available online: https://thenewatlantis.com/wp-content/uploads/legacy-pdfs/TNA01-Rubin.pdf (accessed on 27 May 2024).
  23. Alamin, F.; Sauri, S. Education in the era of artificial intelligence: Axiological study. Progres Pendidik. 2024, 5, 146–150. [Google Scholar] [CrossRef]
  24. Rodríguez, N.D.; Ser, J.; Coeckelbergh, M.; Prado, M.L.D.; Herrera-Viedma, E.; Herrera, F. Connecting the dots in trustworthy artificial intelligence: From AI principles, ethics, and key requirements to responsible AI systems and regulation. Inf. Fusion 2023, 99, 101896. [Google Scholar] [CrossRef]
  25. Garibay, O.O.; Winslow, B.D.; Andolina, S.; Antona, M.; Bodenschatz, A.; Coursaris, C.K.; Falco, G.; Fiore, S.; Garibay, I.I.; Grieman, K.; et al. Six human-centered artificial intelligence grand challenges. Int. J. Hum.-Comput. Interact. 2023, 39, 391–437. [Google Scholar] [CrossRef]
  26. Pekrun, R. The control-value theory of achievement emotions: Assumptions, corollaries, and implications for educational research and practice. Educ. Psychol. Rev. 2006, 18, 315–341. [Google Scholar] [CrossRef]
  27. Zimmerman, B.J. Becoming a self-regulated learner: An overview. Theory Pract. 2002, 41, 64–70. [Google Scholar] [CrossRef]
  28. Garner, P.W. Emotional competence and its influences on teaching and learning. Educ. Psychol. Rev. 2010, 22, 297–321. [Google Scholar] [CrossRef]
  29. Mayer, J.D.; Salovey, P. What is emotional intelligence? In Emotional Development and Emotional Intelligence: Educational Implications; Salovey, P., Sluyter, D., Eds.; Basic Books: New York, NY, USA, 1997; pp. 3–31. [Google Scholar]
  30. Jonassen, D.H. Toward a design theory of problem solving. Educ. Technol. Res. Dev. 2000, 48, 63–85. [Google Scholar] [CrossRef]
  31. Aquino, Y.S.J.; Carter, S.; Houssami, N.; Braunack-Mayer, A.; Win, K.; Degeling, C.; Wang, L.; Rogers, W. Practical, epistemic and normative implications of algorithmic bias in healthcare artificial intelligence: A qualitative study of multidisciplinary expert perspectives. J. Med. Ethics 2023, in press. [Google Scholar] [CrossRef] [PubMed]
  32. Phelps, R.; Hase, S.; Ellis, A. Competency, capability, complexity and computers: Exploring a new model for conceptualising end-user computer education. Br. J. Educ. Technol. 2005, 36, 67–84. [Google Scholar] [CrossRef]
  33. Holdsworth, S.; Thomas, I. Competencies or capabilities in the Australian higher education landscape and its implications for the development and delivery of sustainability education. High. Educ. Res. Dev. 2020, 40, 1466–1481. [Google Scholar] [CrossRef]
  34. Lombardi, M.M. Authentic Learning for the 21st Century: An Overview. EDUCAUSE Learn. Initiat. 2007. Available online: https://library.educause.edu/resources/2007/1/authentic-learning-for-the-21st-century-an-overview (accessed on 27 May 2024).
  35. Jain, R.; Garg, N.; Khera, S.N. Effective human–AI work design for collaborative decision-making. Kybernetes 2022, 52, 5017–5040. [Google Scholar] [CrossRef]
  36. Zhou, L.; Paul, S.; Demirkan, H.; Yuan, L.; Spohrer, J.; Zhou, M.; Basu, J. Intelligence augmentation: Towards building human–machine symbiotic relationship. AIS Trans. Hum.-Comput. Interact. 2021, 13, 243–264. [Google Scholar] [CrossRef]
  37. Huo, Y. How AI ideas affect the creativity, diversity, and evolution of human ideas. arXiv 2023, arXiv:2401.01348. [Google Scholar]
  38. Davenport, T.H.; Ronanki, R. Artificial intelligence for the real world. Harv. Bus. Rev. 2018, 96, 108–116. Available online: https://hbr.org/2018/01/artificial-intelligence-for-the-real-world (accessed on 27 May 2024).
  39. Tribe AI. The Problem of Content Overload in Higher Education—And How AI Solves It. Tribe AI. 2025. Available online: https://www.tribe.ai/applied-ai/ai-and-content-overload-in-higher-education (accessed on 27 May 2024).
  40. Richey, R.C.; Klein, J.D. Design and Development Research: Methods, Strategies and Issues; Lawrence Erlbaum Associates: Mahwah, NJ, USA, 2007. [Google Scholar]
  41. Churchill, D. Conceptual model design and learning uses. Interact. Learn. Environ. 2013, 21, 54–67. [Google Scholar] [CrossRef]
  42. Islam, N.M.; Laughter, L.; Sadid-Zadeh, R.; Smith, C.; Dolan, T.A.; Crain, G.; Squarize, C.H. Adopting artificial intelligence in dental education: A model for academic leadership and innovation. J. Dent. Educ. 2022, 86, 1545–1551. [Google Scholar] [CrossRef]
  43. Luft, J.; Jeong, S.; Idsardi, R.; Gardner, G.E. Literature reviews, theoretical frameworks, and conceptual frameworks: An introduction for new biology education researchers. CBE Life Sci. Educ. 2022, 21, 3. [Google Scholar] [CrossRef]
  44. Selvik, J.; Abrahamsen, E.; Moen, V. Conceptualization and application of a healthcare systems thinking model for an educational system. Stud. High. Educ. 2022, 47, 1872–1889. [Google Scholar] [CrossRef]
  45. Sleegers, P.; Brok, P.D.D.; Verbiest, E.; Moolenaar, N.; Daly, A. Toward Conceptual Clarity: A Multidimensional, Multilevel Model of Professional Learning Communities in Dutch Elementary Schools. Elem. Sch. J. 2013, 114, 118–137. [Google Scholar] [CrossRef]
  46. Jung, E.; Lim, R.; Kim, D. A schema-based instructional design model for self-paced learning environments. Educ. Sci. 2022, 12, 271. [Google Scholar] [CrossRef]
  47. Kang, J.C. The development of instructional design principles for creativity convergence education. Korean J. Educ. Methodol. Stud. 2015, 27, 276–305. [Google Scholar] [CrossRef]
  48. Ryu, G.T.; Park, K.O. Developing study on instructional design principles based on the social studies blended learning for students with disabilities in inclusive environments. J. Spec. Educ. Curric. Instr. 2022, 15, 1–29. [Google Scholar] [CrossRef]
  49. Lynn, M.R. Determination and quantification of content validity. Nurs. Res. 1986, 35, 382–385. [Google Scholar] [CrossRef]
  50. Gwet, K.L. Handbook of Inter-Rater Reliability: The Definitive Guide to Measuring the Extent of Agreement Among Raters, 4th ed.; Advanced Analytics, LLC: Gaithersburg, MD, USA, 2014. [Google Scholar]
  51. Harvard University. Teaching Resources for Artificial Intelligence. Available online: https://www.harvard.edu/ai/teaching-resources/ (accessed on 20 April 2025).
  52. Fadel, C.; Black, A.; Tylor, R.; Slesinski, J.; Dunn, K. Education for the Age of AI; Center for Curriculum Redesign: Boston, MA, USA, 2024. [Google Scholar]
  53. Stanton, J.D.; Sebesta, A.J.; Dunlosky, J. Fostering metacognition to support student learning and performance. CBE Life Sci. Educ. 2021, 20, 2. [Google Scholar] [CrossRef] [PubMed]
  54. Schunk, D.H.; Greene, J.A.; Zimmerman, B. Handbook of Self-Regulation of Learning and Performance, 2nd ed.; Routledge: New York, NY, USA, 2017. [Google Scholar]
  55. Hyperspace. Achieve Goals with AI-Enabled Goal Setting in VR Training. 2023. Available online: https://hyperspace.mv/ai-enabled-goal-setting-skills-development-in-vr/ (accessed on 27 May 2024).
  56. Zakrajsek, T.D. Teaching students AI strategies to enhance metacognitive processing. Scholarly Teach. 2023. Available online: https://www.scholarlyteacher.com/post/teaching-students-ai-strategies-to-enhance-metacognitive-processing (accessed on 27 May 2024).
  57. Flavell, J.H. Metacognition and cognitive monitoring: A new area of cognitive–developmental inquiry. Am. Psychol. 1979, 34, 906–911. [Google Scholar] [CrossRef]
  58. Schraw, G.; Dennison, R.S. Assessing metacognitive awareness. Contemp. Educ. Psychol. 1994, 19, 460–475. [Google Scholar] [CrossRef]
  59. Garrison, D.R.; Akyol, Z. Toward the development of a metacognition construct for communities of inquiry. Internet High. Educ. 2015, 24, 66–71. [Google Scholar] [CrossRef]
  60. Pekrun, R.; Goetz, T.; Titz, W.; Perry, R.P. Academic emotions in students’ self-regulated learning and achievement: A program of qualitative and quantitative research. Educ. Psychol. 2002, 37, 91–105. [Google Scholar] [CrossRef]
  61. Kim, T.Y.; Cable, D.M.; Kim, S.P.; Wang, J. Emotional competence and work performance: The mediating effect of proactivity and the moderating effect of job autonomy. J. Organ. Behav. 2009, 30, 983–1000. [Google Scholar] [CrossRef]
  62. Meng, J.; Rheu, M.; Zhang, Y.; Dai, Y.; Peng, W. Mediated social support for distress reduction: AI chatbots vs. human. Proc. ACM Hum.-Comput. Interact. 2023, 7, 1–25. [Google Scholar] [CrossRef]
  63. ADR Times. Complex vs Complicated Understanding the Differences. 2024. Available online: https://adrtimes.com/complex-vs-complicated/ (accessed on 27 May 2024).
  64. Iancu, P.; Lanteigne, I. Advances in social work practice: Understanding uncertainty and unpredictability of complex non-linear situations. J. Soc. Work 2020, 22, 139–140. [Google Scholar] [CrossRef]
  65. Hunter, L.Y.; Albert, C.; Rutland, J.; Hennigan, C. The fourth industrial revolution, artificial intelligence, and domestic conflict. Glob. Soc. 2023, 37, 375–396. [Google Scholar] [CrossRef]
  66. Raisch, S.; Fomina, K. Combining human and artificial intelligence: Hybrid problem-solving in organizations. Acad. Manag. Rev. 2025, 50, 441–464. [Google Scholar] [CrossRef]
  67. Wang, X.; Lu, Y.; Zhao, Y.; Gong, S.; Li, B. Organisational unlearning, organisational flexibility and innovation capability: An empirical study of SMEs in China. Int. J. Technol. Manag. 2013, 61, 132–155. [Google Scholar] [CrossRef]
  68. Malone, T.W. Superminds: The Surprising Power of People and Computers Thinking Together; Little Brown Spark: New York, NY, USA, 2018. [Google Scholar]
  69. Brynjolfsson, E.; McAfee, A. Machine, Platform, Crowd: Harnessing Our Digital Future; W.W. Norton & Company: New York, NY, USA, 2017. [Google Scholar]
  70. Donelle, L.; Comer, L.; Hiebert, B.; Hall, J.; Shelley, J.J.; Smith, M.J.; Kothari, A.; Burkell, J.; Stranges, S.; Cooke, T.; et al. Use of digital technologies for public health surveillance during the COVID-19 pandemic: A scoping review. Digit. Health 2023, 9, 20552076231173220. [Google Scholar] [CrossRef]
  71. Morss, R.; Wilhelmi, O.V.; Meehl, G.; Dilling, L. Improving societal outcomes of extreme weather in a changing climate: An integrated perspective. Annu. Rev. Environ. Resour. 2011, 36, 1–25. [Google Scholar] [CrossRef]
  72. Fidler, D.; Williams, S. Future Skills: Update and Literature Review. Prepared for ACT Foundation and the Joyce Foundation. Institute for the Future. 2016. Available online: https://www.iftf.org/futureskills (accessed on 4 July 2025).
  73. Schicktanz, S.; Welsch, J.; Schweda, M.; Hein, A.; Rieger, J.; Kirste, T. AI-assisted ethics? Considerations of AI simulation for the ethical assessment and design of assistive technologies. Front. Genet. 2023, 14, 1176751. [Google Scholar] [CrossRef]
  74. Burton, J.W.; Stein, M.K.; Jensen, T.B. A systematic review of algorithm aversion in augmented decision making. J. Behav. Decis. Mak. 2020, 33, 220–239. [Google Scholar] [CrossRef]
  75. Metcalf, L.; Askay, D.A.; Rosenberg, L.B. Keeping humans in the loop: Pooling knowledge through artificial swarm intelligence to improve business decision making. Calif. Manag. Rev. 2019, 61, 84–109. [Google Scholar] [CrossRef]
  76. Hutagalung, D.L.I.; Nisa, L.K.; Verawati, D.M. Artificial intelligence and human creativity: Collaboration or replacement? In Proceedings of the Bengkulu International Conference on Economics, Management, Business and Accounting (BICEMBA), Bengkulu, Indonesia, 12 November 2024; Volume 2, pp. 965–972. [Google Scholar]
  77. D’Mello, S.K.; Biddy, Q.L.; Breideband, T.; Bush, J.B.; Chang, M.; Cortez, A.; Flanigan, J.; Foltz, P.W.; Gorman, J.C.; Hirshfield, L.M.; et al. From learning optimization to learner flourishing: Reimagining AI in education at the Institute for Student-AI Teaming (iSAT). AI Mag. 2024, 45, 61–68. [Google Scholar] [CrossRef]
  78. Rusandi, M.A.; Ahman; Saripah, I.; Khairun, D.Y.; Mutmainnah. No worries with ChatGPT: Building bridges between artificial intelligence and education with critical thinking soft skills. J. Public Health 2023, 45, e602–e603. [Google Scholar] [CrossRef]
  79. Ma, K.; Zhang, Y.; Hui, B. How does AI affect college? The impact of AI usage in college teaching on students’ innovative behavior and well-being. Behav. Sci. 2024, 14, 1223. [Google Scholar] [CrossRef]
  80. Wilson, H.J.; Daugherty, P.R. Collaborative intelligence: Humans and AI are joining forces. Harv. Bus. Rev. 2018, 96, 114–123. [Google Scholar]
  81. Jarrahi, M.H. Artificial intelligence and the future of work: Human–AI symbiosis in organizational decision making. Bus. Horiz. 2018, 61, 577–586. [Google Scholar] [CrossRef]
  82. Soori, M.; Arezoo, B.; Dastres, R. Artificial intelligence, machine learning and deep learning in advanced robotics: A review. Cogn. Robot. 2023, 3, 54–70. [Google Scholar] [CrossRef]
  83. Seo, K.; Tang, J.; Roll, I.; Fels, S.; Yoon, D. The impact of artificial intelligence on learner–instructor interaction in online learning. Int. J. Educ. Technol. High. Educ. 2021, 18, 54. [Google Scholar] [CrossRef]
  84. Lavin, A.; Renard, G. Technology readiness levels for AI & ML. arXiv 2020, arXiv:2006.12497. [Google Scholar] [CrossRef]
  85. Petzoldt, C.; Niermann, D.; Maack, E.; Sontopski, M.; Vur, B.; Freitag, M. Implementation and evaluation of dynamic task allocation for human–robot collaboration in assembly. Appl. Sci. 2022, 12, 12645. [Google Scholar] [CrossRef]
  86. Çela, E.; Fonkam, M.M.; Potluri, R. Risks of AI-Assisted Learning on Student Critical Thinking. Int. J. Risk Conting. Manag. 2024, 12, 1–19. [Google Scholar] [CrossRef]
  87. Lehner, O.M.; Ittonen, K.; Silvola, H.; Ström, E.; Wührleitner, A. Artificial Intelligence Based Decision-Making in Accounting and Auditing: Ethical Challenges and Normative Thinking. Account. Audit. Account. J. 2022, 35, 109–135. [Google Scholar] [CrossRef]
  88. Msambwa, M.M.; Wen, Z.; Daniel, K. The Impact of AI on the Personal and Collaborative Learning Environments in Higher Education. Eur. J. Educ. 2025, 60, e12909. [Google Scholar] [CrossRef]
  89. Humr, S.A.; Canan, M.; Demir, M. A Quantum Probability Approach to Improving Human–AI Decision Making. Entropy 2025, 27, 152. [Google Scholar] [CrossRef]
  90. Wei, J.; Qi, S.; Wang, W.; Jiang, L.; Gao, H.; Zhao, F.; Al-Bukhaiti, K.; Wan, A. Decision-Making in the Age of AI: A Review of Theoretical Frameworks, Computational Tools, and Human-Machine Collaboration. Contemp. Math. 2025, 6, 2089–2112. [Google Scholar] [CrossRef]
Figure 1. Integration of theoretical foundations into the IPSL framework.
Figure 2. Identification, screening, and inclusion of studies for IPSL conceptualization.
Figure 3. Conceptual model of Intelligent Problem-Solving Learning (IPSL).
Table 1. Conceptual structure of IPSL with theoretical and empirical foundations.
| Major Category | Subcategory | Defined Concept | Theoretical Grounding | Empirical Studies |
|---|---|---|---|---|
| Pursuit of Inherent Human Values | Personal Values | Recognition of existential values, identity, and learner agency | Existential Pedagogy; Capability Approach | Rubin [22]; Alamin & Sauri [23]; Fraser & Greenhalgh [6] |
| | Community Values | Ethical reflection, human dignity, and public value pursuit | Moral Education; Reflective Ethics | Rodríguez et al. [24]; Garibay et al. [25] |
| Strategic Approaches | Meta-Learning | Metacognition and meta-emotion in goal setting and learning regulation | Self-Regulated Learning; Meta-Emotion Theory | Pekrun [26]; Zimmerman [27]; Garner [28]; Mayer & Salovey [29] |
| | Complex Problem Solving | Addressing ambiguous, real-world problems with integrated knowledge and ethics | Complex Problem-Solving Theory; Authentic Learning | Jonassen [30]; Iancu & Lanteigne [64]; Aquino et al. [31] |
| | Future-Oriented Capability | Capacity to unlearn, relearn, and adapt across novel contexts | Capability-Based Learning; Extended Mind Theory | Phelps et al. [32]; Holdsworth & Thomas [33] |
| | | Transfer and synthesis of knowledge across boundaries | Transdisciplinary Education; Knowledge Transfer Theory | Lombardi [34]; Garibay et al. [25] |
| Human–AI Collaborative Structure | Human-Exclusive Tasks | Ethical judgment, value-based decisions, and reflective agency | Decision-Making Theory; Moral Philosophy | Rubin [22]; Jain et al. [35] |
| | Human–AI Collaboration | AI-supported co-creativity and dialogic engagement | Augmented Intelligence; Extended Cognition | Zhou et al. [36]; Huo [37] |
| | AI-Delegated Tasks | Strategic offloading of repetitive or data-heavy processes to AI | Automation Theory; Cognitive Load Theory | Davenport & Ronanki [38]; Tribe AI [39] |
Table 2. Expert panel composition and profile.
| No | Expert Code | Affiliation | Area of Expertise | Major Experience and Role | Role Category |
|---|---|---|---|---|---|
| 1 | E1 | University A, Department of Education | Instructional Design, Educational Technology | Ph.D. in Educational Technology; 15+ years university teaching; former secondary school teacher (10+ years); AI-based instructional design research; published on AI ethics in education | Instructional Design Expert |
| 2 | E2 | University B, Future Education Research Institute | Future Education, AI-based Instructional Design | National advisor on digital education policy; multiple SSCI publications | Future Education Expert |
| 3 | E3 | Cyber University, Dept. of AI Education | AI-based Learning Environment Design | Participated in AI tutoring system development; lead researcher on MOE R&D project; conducted research on AI ethics and implications for AI-based learning | AI-Based Learning Expert |
| 4 | E4 | National University of Education D | Pre-service Teacher Education | Led teacher training programs; former primary school teacher (5 years); planned in-service training for school teachers | Teacher Education Expert |
| 5 | E5 | Educational Institute E (Policy Research) | Sustainability in Educational Policy | Conducted SDG4-based education policy research | Sustainability Policy Expert |
| 6 | E6 | University F, Department of Educational Psychology | Metacognition, Self-Regulated Learning | Led development of learner cognitive and affective models | Educational Psychology Expert |
| 7 | E7 | Private AI Education Company G | AI Content Development and UX Design | Field expert in AI-based educational content and UX prototyping | EdTech Industry Expert |
| 8 | E8 | National University H, Department of Education | Curriculum and Assessment Design | Participated in national project for AI-based performance assessment system | Assessment Design Expert |
Note: Some institutional names have been anonymized (e.g., University A, University B, Research Institute E) to protect confidentiality.
Table 3. Summary of expert validation results for the IPSL conceptual model.
| Domain | Round 1 Mean | Round 1 CVI | Round 1 IRA | Round 2 Mean | Round 2 CVI | Round 2 IRA |
|---|---|---|---|---|---|---|
| Conceptual Clarity | 3.13 | 0.88 | 0.63 | 3.75 | 1.00 | 0.75 |
| Theoretical Validity | 4.00 | 1.00 | 1.00 | 4.00 | 1.00 | 1.00 |
| Coherence Among Components | 2.88 | 0.75 | 0.63 | 3.25 | 1.00 | 0.75 |
| Comprehensiveness | 3.50 | 0.88 | 0.63 | 4.00 | 1.00 | 1.00 |
| Visual Communicability | 3.25 | 1.00 | 0.75 | 4.00 | 1.00 | 1.00 |
| Innovativeness | 3.50 | 0.88 | 0.63 | 3.88 | 1.00 | 0.88 |
| Overall Average | 3.38 | 0.90 | 0.71 | 3.81 | 1.00 | 0.90 |

Note: n = 8 experts in each round.
Table 4. Summary of expert validation results for IPSL design principles.
| Domain | Round 1 Mean | Round 1 CVI | Round 1 IRA | Round 2 Mean | Round 2 CVI | Round 2 IRA |
|---|---|---|---|---|---|---|
| Validity | 2.75 | 0.63 | 0.50 | 4.00 | 1.00 | 1.00 |
| Clarity | 2.38 | 0.38 | 0.38 | 3.13 | 1.00 | 0.88 |
| Usefulness | 3.00 | 0.63 | 0.50 | 3.75 | 1.00 | 0.75 |
| Universality | 2.63 | 0.63 | 0.63 | 3.00 | 1.00 | 1.00 |
| Comprehensibility | 2.38 | 0.38 | 0.38 | 4.00 | 1.00 | 1.00 |
| Overall Average | 2.63 | 0.53 | 0.38 | 3.58 | 1.00 | 0.93 |

Note: n = 8 experts in each round.
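The CVI and IRA values reported in Tables 3 and 4 are simple proportions over the eight expert ratings. As a rough illustration (a sketch, not necessarily the authors' exact computation): the item-level CVI following Lynn [49] is the share of experts rating an item 3 or 4 on the 4-point scale, and one common IRA variant, assumed here, is the share of raters giving the modal rating. The ratings below are hypothetical.

```python
from collections import Counter

def item_cvi(ratings, relevant=(3, 4)):
    """Item-level content validity index (Lynn): the share of experts
    rating the item 3 or 4 on a 4-point scale."""
    return sum(r in relevant for r in ratings) / len(ratings)

def inter_rater_agreement(ratings):
    """One common IRA variant (an assumption here, not necessarily the
    paper's formula): the share of raters giving the modal rating."""
    modal_count = Counter(ratings).most_common(1)[0][1]
    return modal_count / len(ratings)

# Hypothetical ratings from a panel of 8 experts for one domain
ratings = [4, 4, 3, 3, 3, 3, 2, 3]
print(item_cvi(ratings))               # 0.875
print(inter_rater_agreement(ratings))  # 0.625
```

Under Lynn's criterion, items with CVI at or above roughly 0.80 are typically retained, which matches the pattern in Table 4: the low Round 1 Clarity and Comprehensibility values (CVI = 0.38) are what triggered the revisions summarized in Table 5.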
Table 5. Refinement of instructional design principles based on round 1 expert review.
| Category | Before Revision | Expert Feedback | After Revision |
|---|---|---|---|
| Conceptual clarity | Principles expressed in abstract, philosophical language (e.g., “value pursuit”, “self-identity”) | Too abstract, unclear instructional implication; low clarity CVI (0.38) | Rephrased into instructional language suitable for classroom application; added practical verbs and learner-centered phrasing (e.g., “Learners should explore their existential value and identity to establish life goals and vision.”) |
| Structural consistency | Overlapping subcategories (e.g., self-regulation vs. agency); unclear order of principles | Ambiguity in hierarchical structure and sequence among principles | Clarified subcategory boundaries and re-ordered principles to reflect learning progression (e.g., “Learners should first set meaningful goals and then develop strategies to achieve them.”) |
| Component inclusion | No reference to meta-emotion or emotional resilience | Missing humanistic dimensions, especially emotional components | Added new principle on meta-emotion to emphasize emotional regulation and resilience (e.g., “Learners should develop meta-emotional abilities such as emotional awareness and regulation.”) |
Table 6. Final instructional design principles for IPSL.
Pursuit of Inherent Human Values
Personal Values
  • Guide learners to explore their existential value and identity, and to establish life goals and vision accordingly.
  • Foster learners’ emotional competence to maintain psychological well-being and build healthy social relationships.
  • Encourage learners to develop agency and ownership in their learning.
Community Values
  • Guide learners to internalize ethical values in society and continuously reflect on and update them.
  • Promote learners’ recognition and practice of human dignity as the highest value.
  • Encourage learners to pursue public values in communities with a sense of responsibility.
Value Pursuit Strategies
Meta-Learning
  • Support learners in setting meaningful personal learning goals.
  • Help learners develop and continuously revise their own learning strategies.
  • Promote learners’ development of meta-emotional abilities (e.g., emotional self-awareness and regulation).
Complex Problem Solving
  • Encourage learners to think complexly by considering diverse variables and factors in the problem-solving process.
  • Enable learners to solve fusion problems integrating subject matter, life, and AI.
  • Empower learners to resolve various conflicts and dilemmas during problem-solving.
Future-Oriented Capability
  • Enhance learners’ ability to learn (learnability).
  • Promote learners’ use of AI as a “Second Brain” to expand cognitive capabilities.
  • Strengthen learners’ knowledge transfer by encouraging transdisciplinary thinking.
Human–AI Collaborative Structure
Human-Exclusive Tasks
  • Support learners in strategically dividing tasks between humans and AI.
  • Guide learners to use AI ethically and recognize regulatory principles.
  • Ensure that learners make all final decisions based on personal and community values.
Human–AI Collaborative Tasks
  • Encourage learners to collaborate with AI to redefine problems from multiple perspectives.
  • Support learners in reconstructing meaning through co-creativity with AI.
  • Enable learners to use AI as a critical peer to shift and expand their thinking.
AI-Delegated Tasks
  • Allow learners to delegate repetitive or efficiency-driven tasks to AI.
  • Guide learners to assign tasks involving complex or large-scale data processing to AI.
  • Encourage learners to delegate risky or sustainability-required tasks to AI.
Table 7. Instructional examples for IPSL design principles.
| Subcategory | Instructional Design Principle | Instructional Example |
|---|---|---|
| Community Values | Guide learners to internalize ethical values and reflect on them. | During a civics lesson, students use AI to analyze policy debates and write ethical evaluations. Students compare AI-generated objective data with their own ethical judgments, synthesizing both into a final report to balance factual accuracy and moral reasoning. |
| Meta-Learning | Support learners in setting meaningful personal learning goals. | In a middle school science class, students set their own investigation goal and use AI tools to analyze data. AI provides data summaries and trend detection, while students interpret these results in light of their original hypotheses, adjusting their strategies accordingly. |
| Complex Problem Solving | Empower learners to resolve various conflicts and dilemmas during problem-solving. | In a simulation game, students make decisions in a disaster response scenario and reflect on ethical dilemmas. AI offers predictive models for different decisions, and students critically evaluate these against social and ethical implications before choosing a course of action. |
| Future-Oriented Capability | Strengthen learners’ knowledge transfer by encouraging transdisciplinary thinking. | Students design an AI-powered sustainable city plan integrating science, social studies, and design thinking. AI generates feasibility analyses for proposed solutions, while students adapt and refine their designs based on local cultural, environmental, and ethical contexts. |
| Human–AI Collaborative Tasks | Support learners in reconstructing meaning through co-creativity with AI. | In a language arts class, students co-write stories with a generative AI, revising tone and structure. Students prompt AI for alternative narrative developments, then select, modify, or merge these outputs to align with thematic intentions and emotional resonance. |
| AI-Delegated Tasks | Allow learners to delegate repetitive or efficiency-driven tasks to AI. | In a data science project, students use AI to clean and sort large datasets before interpreting patterns. While AI automates preprocessing, students focus on drawing meaningful conclusions and identifying anomalies that require human judgment. |

Share and Cite

Lee, Y.; Lee, S.-S. Exploring the Conceptual Model and Instructional Design Principles of Intelligent Problem-Solving Learning. Sustainability 2025, 17, 7682. https://doi.org/10.3390/su17177682
