Article

Measuring Learning Presence as Fourth Dimension in the Community of Inquiry Survey: Defining Self-Regulation Items and Subscales through a Heutagogical Approach

by Salvatore Nizzolino 1,2,* and Agustí Canals 3
1 Doctoral Programme in Education and ICT (e-Learning), Research Line Challenges for Sustainable Management and Organization in Online Education, Universitat Oberta de Catalunya, 08018 Barcelona, Spain
2 Faculty of Information Engineering, Computer Science and Statistics, Sapienza University of Rome, 00185 Rome, Italy
3 Faculty of Economics and Business, Universitat Oberta de Catalunya, 08018 Barcelona, Spain
* Author to whom correspondence should be addressed.
Educ. Sci. 2024, 14(8), 862; https://doi.org/10.3390/educsci14080862
Submission received: 5 June 2024 / Revised: 7 July 2024 / Accepted: 24 July 2024 / Published: 9 August 2024
(This article belongs to the Collection Trends and Challenges in Higher Education)

Abstract:
The Community of Inquiry (CoI) framework has grown in popularity over almost 25 years thanks to an adaptability that has spanned from asynchronous text-based environments to a wide range of different settings. The CoI identifies the mutual interaction of three dimensions named presences. The instrument used to detect the perception of the presences is a Likert-scale survey based on 34 items arranged in 10 subscales, assigned as follows: 3 to teaching presence, 3 to social presence, and 4 to cognitive presence. Several studies have identified alternative arrangements of the main components as a result of EFA and CFA. Consequently, the exploration of alternative settings continues to reveal variations in the way the presences interact with each other. The ongoing debate about whether to add a fourth dimension, specifically learning presence, has produced numerous publications but no definitive revised version of the survey. This study proposes an extension of the classical survey that incorporates a supplementary set of 12 items related to learning presence, inspired by the theory of heutagogy (or self-determined learning). The sample for the experimental four-dimensional CoI framework comprised 55 university students. The analysis investigated the internal correlations of this extended survey, revealing encouraging results and opportunities for further adaptation.

1. Introduction

The community of inquiry (CoI) framework, introduced by Garrison, Anderson, and Archer [1], has emerged as a cornerstone for understanding and enhancing online learning environments by fostering higher-order thinking. The framework posits that successful online learning experiences are characterized by the synergistic interplay of three interrelated dimensions: teaching presence, social presence, and cognitive presence. These three dimensions were originally grounded in a learning context of asynchronous text-based online interactions, and can be summarized as follows:
Cognitive presence (CP) is the crucial component of critical thinking, often viewed as the fundamental outcome of a successful higher education journey. To become evident, this dimension requires participants to possess a certain level of awareness of their own learning styles and strategies [1,2,3].
Social presence (SP) is the ability of participants to project their personal characteristics into the learning community, aiming to be perceived as real human beings despite the absence of face-to-face interactions [1,2,3].
Teaching presence (TP) involves all actions undertaken by the teacher or instructor to oversee and manage the learning performances, as well as coordinate and stimulate the other two presences [1,2,3].
In two and a half decades, the use of the CoI has extended from online settings to blended education [4,5] and, due to its flexibility, it is currently being applied to the most innovative trends, such as 3D learning environments and the Metaverse [6]. It is worth remembering that the CoI is a model to design and facilitate meaningful learning experiences, not a tool to grade individual learning outputs. Being a framework based on constructivism, it also relates to the affective and social aspects influencing the learning process.
To this day, there is no consensus on whether a fourth presence may effectively improve the model. Nevertheless, the debate around the expansion of the CoI has generated various proposals for adding a fourth dimension, such as emotional presence [7,8] or the learning presence (LP) originally proposed by Shea and Bidjerano [9]. As it stands, the scholarship revolving around LP has generated recurrent case studies and proposals to advance the framework [6,10,11,12,13,14,15,16]. In addition, LP summarizes the acknowledged paradigms of self-efficacy and self-regulation rooted in Bandura's landmark studies [17,18,19,20]. Nevertheless, a four-presence model has not yet been acknowledged by the CoI founders and is still subject to wide debate. Therefore, in the present work, we will examine adjustments to the survey based on the original three-presence framework, as shown in Figure 1.
Nonetheless, the present contribution maintains the perspective of continuous adaptation, recognizing that the official CoI survey has undergone further changes. These changes evolved from the original need to capture the adaptation of the student role to the online setting [21] to a series of continuous reconfigurations.

What Does Each Presence Measure?

The three presences are subdivided into 10 subscales to determine the interactions between the theoretical premises of each presence and the 34 Likert-scale items, as shown in Table 1. This survey format was established in 2008 [22,23,24], and since then it has been shared and validated as the official CoI instrument. Nevertheless, the items have gone through several adaptations, and the official survey currently in use worldwide is the 14th version. This is itself a sign that the founders considered the framework subject to epistemological observation and open to empirical refinement. In addition, a framework is not immutable and, being an instrument to collect data from an environment, it should reflect the changes in the area in which it was conceived and applied [14]. The original 10 subscales (also called categories or subdimensions in the literature) identify the level of CoI perceived by the students (Table 1). To weigh the observations deriving from the present contribution, it is advisable to become familiar with the survey, which can be downloaded from the Athabasca University website. This 34-item survey is always administered in the format of a 5-point Likert scale.
TP is subdivided into three subscales corresponding to survey Items 1–13:
Design and Organization: Planning and structuring the learning experience effectively.
Facilitation of Discourse: Actively participating in discussions, guiding, and fostering critical thinking.
Direct Instruction: Providing clear explanations and instruction when necessary.
SP is subdivided into three subscales corresponding to survey Items 14–22:
Affective Expression: Expressing emotions and creating a supportive online environment.
Open Communication: Encouraging transparent and honest communication among participants.
Group Cohesion: Building a sense of community and belonging.
CP is subdivided into four subscales corresponding to survey Items 23–34:
Triggering Event: The stimulus or problem that initiates critical thinking.
Exploration: Active engagement in exploring ideas, seeking information, and considering different perspectives.
Integration: Connecting new information with existing knowledge to form a cohesive understanding.
Resolution: Reaching a conclusion and being able to apply the understanding to future situations.
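The presence-level item ranges above (TP Items 1–13, SP Items 14–22, CP Items 23–34) can be sketched as a minimal scoring helper. This is an illustrative sketch only: the Python names and the mean-score aggregation are our assumptions for demonstration, not part of the official CoI instrument.

```python
# Sketch of the classical CoI survey structure: three presences
# covering the 34 Likert-scale items (ranges as given in the text).
PRESENCES = {
    "teaching":  range(1, 14),   # Items 1-13
    "social":    range(14, 23),  # Items 14-22
    "cognitive": range(23, 35),  # Items 23-34
}

def presence_score(responses, presence):
    """Mean 5-point Likert score for one presence.
    `responses` maps item number -> rating (1-5)."""
    items = PRESENCES[presence]
    return sum(responses[i] for i in items) / len(items)

# Example: a respondent who rated every item 4
responses = {i: 4 for i in range(1, 35)}
print(presence_score(responses, "social"))  # -> 4.0
```

Averaging per presence (rather than summing) keeps scores on the original 1–5 scale, which makes presences with different item counts directly comparable.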
The official survey has been extensively validated in English [14,21,23,25,26,27], but several scholars have recently argued that the original format best suits respondents with an English proficiency of at least B2, even though the questionnaire is administered worldwide in contexts of English as a lingua franca. As a consequence, validated translations into various languages have been produced: Portuguese [28], Korean [29], Chinese [30], Turkish [31], Spanish [32], German [33], and Italian [34].

2. Adjusting the Classical Survey

The proposals to revise the community of inquiry (CoI) framework have increased, spanning from adding a new presence [7,8,9,11,35] and modifying the existing ones [36] to adjusting the number of items in the survey. However, since such proposals are specific to certain settings and environments, they continue to open uncharted scenarios. The ten subscales, or categories, underlying the three presences are the backbone of the CoI survey and represent the smallest units for measuring internal consistency (Table 1). Determining an acceptable degree of reliability and consistency was the core topic of several works focused on assessing correlations among the ten subscales and among the items they contain. In the next sections of this contribution, several case studies will be mentioned which detected alternative models (based on the original three presences) that differ from the structure of ten subscales (Table 1) and fit four, five, or six factors. This means that there are possible alternative versions of the CoI survey which may offer a high degree of internal consistency and reliability and may be suitable for further adaptation. The available literature currently provides a significant range of variables resulting from exploratory factor analysis (EFA), but investigations rarely link these variations to the typicalities intrinsic to the discipline taught or to the learning setting.

2.1. Ignored Externalities

Indeed, the CoI survey has rarely been researched as an instrument possibly influenced by environmental variables, but rather as a theoretical object that carries within it the comprehensiveness of the teaching/learning experience. Some observations have been formulated on the possible disciplinary impact on survey results [37], but they never gave rise to a specific line of research. Lim and Richardson [38] discussed the reach of this aspect by considering the basic principle of categorizing academic disciplines: soft-pure, soft-applied, hard-pure, and hard-applied. The general tendency is to favor memorization and the application of theoretical concepts in hard disciplines, whilst promoting cooperative learning and discussion in soft disciplines. Learning strategies may also vary profoundly between students of hard and soft disciplines, the latter being more aware of the potential offered by online tools to establish cooperative learning. As our experience is confined to the learning of English as a foreign language (EFL), further observations will be formulated in relation to possible disciplinary influence.

2.2. Learning Presence as a Fourth Dimension

Following the first introduction of learning presence (LP) into the CoI by Shea and Bidjerano [9], and the growing awareness of a possible extended framework [13], subsequent empirical experiences have not accomplished a conclusive advancement. To date, one of the most pioneering studies was published by Wertz [16], who introduced a set of new items to define LP under the three subscales of motivation, behaviors, and development. Wertz's most solid assumptions were based on the simultaneous measurement of the four presences, showing that the introduction of LP actually creates a new pattern of interrelations involving all the presences. Her analysis was based on a 47-item survey: the classical 34-item CoI questionnaire plus new items related to the fourth presence, borrowed from prior studies which surveyed personal learning strategies, self-regulation, and learning environment preferences [38]. Another significant attempt to characterize a fourth presence, named learner presence, was published by Honig and Salmon [39], whose proposal included learner intentions, learner metacognition, and peer monitoring. These concepts highlight learners' active engagement, self-reflection, and collaborative behaviors within the online learning environment. When considering whether they are redundant with the three original presences of the CoI framework (TP, SP, and CP), it becomes evident that there is some overlap but also distinctiveness in the focus of each.
1. Learner Intentions: This concept aligns closely with TP, which involves the design and organization of the course to facilitate meaningful learning outcomes. Learner intentions reflect learners’ orientation towards the course content and their goals, which can be influenced by the design and facilitation provided by instructors. While TP encompasses aspects of guiding behaviors and feedback, learner intentions delve into learners’ personal motivations and expectations within the learning context.
2. Learner Metacognition: It intersects with CP in the CoI framework, which involves learners’ ability to construct meaning through reflection and discourse in a CoI. Metacognition goes beyond CP by focusing on learners’ self-awareness, reflection on their learning processes, connections between content and goals, feedback utilization, and seeking additional practice opportunities. It emphasizes learners’ active engagement in monitoring and regulating their own learning strategies.
3. Peer Monitoring: It introduces a unique aspect that extends beyond SP, which relates to participants’ ability to identify with the community, communicate purposefully, and develop interpersonal relationships. Peer monitoring involves evaluating peers’ contributions based on their potential to advance one’s own learning goals. While SP fosters interaction and group cohesion, peer monitoring emphasizes a more evaluative aspect where learners assess peers’ efforts in assignments and discussions to enhance their own learning.
Notwithstanding the significant contributions by Wertz [15,16] and Honig and Salmon [39], the concept of LP as a fourth CoI dimension is still subject to exploration, and proposals to establish permanent survey items are scarce. Since the learning context under our lens is EFL, we are interested in shaping LP items in the wake of the evolution of self-efficacy and self-regulated learning according to Bandura's works, exploring areas such as personal motivation, behaviors, and awareness. Indeed, the assumptions tied to the awareness area are also rooted in the learning-to-learn skill, which is of paramount importance to the lifelong learning pathway, since learning a language is a long-lasting process and does not end with a final exam [40,41].

2.3. Evolution of the Concept of Self-Efficacy (1977–2009)

In order to clarify the significance of the theories supporting LP, it is crucial to trace their origins. Bandura's initial insights, influenced by social learning theories, provide an entry point into how individual behavioral patterns can be reinforced. This reinforcement occurs not only through the anticipation of future rewards but also through an understanding of how previously acquired behaviors contribute to a motivational mindset [17]. This self-determination device, which Bandura located in cognition, acts as an internal effectiveness monitor. It can be defined as a motivational mechanism implying self-determined standards and continuous self-corrective attitudes (based on feedback received) that raise the bar of personal demands. This type of learner evaluates future performances based on past behaviors by observing the divergence in their own outcomes rather than relying solely on the outcomes of others. Bandura emphasized the impact of personalized feedback on the psychological state of individuals and argued that it has a greater cognitive effect than simply observing examples provided by others. Indeed, even if others demonstrate effective behaviors, such learners do not internalize them as readily as individualized feedback, which works as a stimulus. Bandura's social cognitive theory (SCT) emerged from these foundational principles, linking cognition, behavior, and environment. It distinguishes between passive observational learning and active decision making, emphasizing internal processes and self-efficacy determinants such as self-monitoring and self-regulation. SCT's applications span diverse contexts, evolving through assessments and modifications and broadening research horizons.
Notably, research on self-efficacy attributes within behavioral predictors has thrived, alongside investigations into the theory of planned behavior, which extends Bandura’s concepts to understanding internal and external phenomena, including performance, intentions, and motivation [42,43].

2.4. From Self-Efficacy and Learning Presence to the Theory of Heutagogy

The primary definition of LP could be encapsulated as the degree to which learners engage in metacognitive, motivational, and behavioral processes within collaborative online educational settings. This is deemed especially pertinent in self-directed learning modalities such as discovery-oriented learning, self-directed reading, or information retrieval from online sources. In this context, Shea and Bidjerano's works [9,11] emphasized that learners may also manifest intentions and goals that were not detectable within the original CoI presences and subdimensions. Their results acknowledge a constellation of behaviors and traits such as metacognition, motivation, self-efficacy, self-analysis, reflection, and personal beliefs [9,11,44]. This set of behaviors has been extensively studied across various disciplines and research orientations, relating aspects of LP to semantic integrations and cross-disciplinary lines of research.
Recent studies focused on the validation and measurement of CoI subscales have yet to bring self-regulatory analysis, the multi-dimensionality of CoI factors, and their causal relationships together into one comprehensive study. Furthermore, there is still a scarcity of practical proposals to define the items which may frame the LP subscales. As LP is a multi-dimensional representation of student self-regulation, it is not inappropriate to also consider the sociological context framing the teaching/learning experience. The present contribution does not intend to examine how different education systems mold the idea of autonomy in learning; however, being research based in a European country, it should consider the way learners' inner disposition is shaped by school systems. Indeed, self-efficacy and self-regulated learning can be recognized in the descriptors of the learning-to-learn skills defined by the European Parliament and Council of the European Union since 2006.
“Learning to learn is the ability to pursue and persist in learning, to organise one’s own learning, including through effective management of time and information (…). This competence includes awareness of one’s learning process and needs, identifying available opportunities, and the ability to overcome obstacles in order to learn successfully. This competence means gaining, processing and assimilating new knowledge and skills as well as seeking and making use of guidance (…) Motivation and confidence are crucial to an individual’s competence”. ([45], p. 7).
The definition of the European Commission also includes the constructivist idea linked to the sedimentation of new knowledge on the basis of previous knowledge.
“Learning to learn engages learners to build on prior learning and life experiences in order to use and apply knowledge and skills in a variety of contexts (…)” ([45], p. 7).
Regardless of the specific learning situation, the development of learning-to-learn skills necessitates that individuals understand their preferred learning strategies, recognize the strengths and weaknesses of their abilities and qualifications, and possess the ability to seek out educational opportunities and guidance or support when needed [46,47]. Building upon foundational skills enables individuals to access, acquire, process, and assimilate new knowledge and skills effectively. This requires effective management of one’s own learning process as well as one’s career trajectory and work patterns. Specifically, it requires the ability to persevere in the face of challenges during learning experiences, maintain focus for extended periods of time, and engage in critical reflection on the purposes and objectives of learning. Individuals should be able to benefit from a diverse group while sharing their own knowledge. Additionally, individuals should possess a set of organizational skills to manage their own learning processes while evaluating their own work independently, beyond motivation and confidence in pursuing continuous personal growth through education [48,49]. Furthermore, ICT skills facilitate collaboration and networking with experts and peers worldwide, fostering a supportive ecosystem for continuous personal growth. Whether through online courses, virtual mentorship programs, or collaborative projects, digital platforms enable individuals to tap into diverse perspectives and expertise, fueling inspiration and confidence in their ability to learn and grow. Digital literacy and, more recently, familiarity with AI tools enable individuals to track their progress effectively, utilizing digital ecosystems for goal setting, performance monitoring, and feedback collection [50]. This real-time feedback loop enhances self-awareness and confidence, motivating individuals to persist in their learning journey despite challenges or setbacks.
It is evident how modern self-organized learning patterns interact with a wide range of pedagogical approaches, and an imperative cross-disciplinary element that cannot be overlooked is digital literacy. Among the most recent contributions to learning theories, heutagogy stands out as a comprehensive integration of constructivism, humanism, connectivism, and the neuroscience of learning, attributing a key role to digital technologies [51,52,53]. It focuses on self-determined learning and the ability to choose one's individual pathway for learning with the affordances provided by digital technologies [51,54]. By way of example, an experience documented by McKerlich and Anderson [55] shows that when the CoI is implemented in a multi-user virtual environment (MUVE) learning space, it becomes immediately palpable that traditional items related to the three presences cannot fully capture the complexity of the educational experience. Therefore, the identification of new indicators becomes crucial to effectively assess educational experiences within advanced digital settings. Aspects such as user proficiency and navigational skills within immersive environments, as well as integration with other educational tools like learning management systems, should be considered part of teaching presence evaluation, acknowledging their importance for effective participation and engagement. Control over side channels, such as auxiliary media, is key for teachers to maintain constant supervision of the learning process. Moreover, evaluating the unique assessment opportunities within MUVEs, such as applying knowledge to activities or artifacts created in the virtual environment, offers insights into cognitive presence (CP).
Indeed, such practical and theoretical aspects, which characterize not only highly digital learning settings but also the learning ecosystems that instructors and students create for themselves, are not addressed in the debate on the CoI or in the proposals to add a fourth presence. While digital affordances may not be explicitly discussed in all aspects of the CoI framework, they undeniably play a key role in shaping learning agency, promoting active engagement, and facilitating collaborative knowledge construction within educational settings.

2.5. What Are the Key Principles of Heutagogy and How They Suit Learning Presence Assessment in Digital Learning?

Heutagogy, also known as self-determined learning, is based on several key principles that support lifelong learning. It places a strong emphasis on learner agency, whereby students take responsibility for their learning design and pathway, while instructors act as facilitators, encouraging learner action and experience. Heutagogy aims to develop learner self-efficacy, cognitive and metacognitive skills, critical thinking, and reflection by giving learners control over their learning process. It promotes reflection through single-loop and double-loop learning: learners not only reflect on what they have learned (single loop) but also on how they have learned and how this knowledge influences their value system (double loop). Heutagogy views education as an ongoing process of continuous learner inquiry and trial and error; thus, it embraces failure as a critical means for learning and encourages learners to learn from their experiences [51,52,54,56,57]. In distinguishing between competency and capability, heutagogy defines the former as the ability to perform a specific skill, while the latter involves demonstrating competency in unfamiliar and unique contexts. Thus, capability is about using competencies in new environments. By fostering learner agency, self-efficacy, reflection, and continuous learning, heutagogy equips individuals with the tools and mindset needed to thrive in a rapidly changing world.
To define elements of a heutagogical-based LP, a proper survey has to gauge:
1. Learner Autonomy—Signs of self-regulated learning showing how students take initiative in setting their learning goals, choosing resources, and directing their learning process. Heutagogy emphasizes the learners as the major agent in their own learning.
2. Reflective Practices—How learners engage in reflecting on their learning experiences, identifying their learning preferences, and evaluating their progress. Reflection is a key component of heutagogy that supports continuous learning and self-awareness.
3. Metacognitive Skills—The learners’ ability to plan, monitor, and evaluate their learning strategies.
4. Capability Development—How learners demonstrate the ability to apply their competencies in diverse and changing contexts, beyond mere competencies.
5. Non-Linear Learning Approaches—Whether learners engage in non-linear learning pathways, such as exploring multiple routes to achieve learning outcomes, embracing trial-and-error learning, and adapting to unexpected challenges.
6. Self-Directed Learning—How learners create their own learning ecologies and self-directed learning environment; how they define goals, contexts, content, process, resources, and relationships for their learning pathways.
7. Use of Digital Media—How learners utilize digital media and online resources to access information, collaborate with peers, create content, and share knowledge.
Based on these principles, we propose a refined breakdown of the Likert-scale items into four subscales of three items each (see Table 2). In formulating the statements that constitute the items, we emphasized both aspects of metacognition and those invoking resources and tools, as they are instrumental to the exercise of a modern LP and can reveal adaptive learning strategies.
Subscale #1: Self-Regulated Learning and Learner Autonomy (Items 35–37)
I analyzed the learning objectives during the course and adjusted them to fit my preferred way of learning.
I actively chose resources, tools, or devices that best supported my learning style during the course.
I could independently approach difficult concepts or tasks and find ways to understand or solve them.
Subscale #2: Self-Awareness and Reflective Practices (Items 38–40)
I regularly reflected on my learning habits during the course.
I valued feedback from others to gain insights into my learning strategies and areas for improvement.
I aimed to develop consistent study routines in study-friendly environments.
Subscale #3: Metacognitive Skills (Items 41–43)
I planned my timetable, strategies, and resources before engaging in any study session.
I could acknowledge if I was taking an effective or ineffective learning approach to a certain topic and changed accordingly.
I was able to reflect on my own contributions and learning progress in the collaborative setting.
Subscale #4: Adaptability and Non-Linear Learning Approaches (Items 44–46)
I was open to trying new resources, study approaches, or tools when faced with difficult concepts or tasks.
I was comfortable with engaging in trial-and-error learning and adapting to unexpected challenges.
I used my skills in different situations that might change, going beyond what was already set.
Therefore, Table 2 illustrates the CoI survey expanded with the four new subscales assigned to LP, associated with the twelve new items.
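For illustration, the LP extension can be sketched alongside the item numbers assigned above (Items 35–46 across four subscales). Only the subscale-to-item assignment comes from the text; the Python names and the mean-score aggregation are our assumptions for demonstration.

```python
# The four LP subscales and their item numbers, as assigned in the text.
LP_SUBSCALES = {
    "self_regulated_learning_autonomy": [35, 36, 37],
    "self_awareness_reflection":        [38, 39, 40],
    "metacognitive_skills":             [41, 42, 43],
    "adaptability_nonlinear_learning":  [44, 45, 46],
}

def lp_subscale_means(responses):
    """Mean 5-point Likert score per LP subscale.
    `responses` maps item number -> rating (1-5)."""
    return {name: sum(responses[i] for i in items) / len(items)
            for name, items in LP_SUBSCALES.items()}

# Example: a respondent rating 3 everywhere except the
# metacognitive items, rated 5, 4, and 3
responses = {i: 3 for i in range(35, 47)}
responses.update({41: 5, 42: 4, 43: 3})
print(lp_subscale_means(responses)["metacognitive_skills"])  # -> 4.0
```

Keeping LP item numbering contiguous with the classical survey (35–46 after 1–34) lets the extended 46-item instrument reuse the same scoring pipeline as the original three presences.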

3. Methodology

The present approach develops in four steps, on the basis that the authors have already reviewed the relevant body of literature in this specific field [34,58,59]. This research approach uses relevant case studies as a reference point to validate the traditional community of inquiry (CoI) survey, which assesses teaching presence (TP), social presence (SP), and cognitive presence (CP) through a single Likert-scale survey. Additionally, it underscores the potential for exploring alternative survey versions by reducing the number of items. Given that integrating learning presence (LP) into the framework has not yet led to a permanent new survey, the aim of this contribution is to leverage the plasticity identified by previous scholars to justify a new pilot model. Therefore, this contribution proceeds through the following steps.
The relevant case studies which involved adjustments to the original survey provide a benchmark for alternative factor arrangements, which will serve to frame the current experience after the fact. The adjustments are always the result of exploratory factor analysis (EFA), confirmatory factor analysis (CFA), and observations on factor loadings. It is necessary to consider potential future reductions of the original 34 items not only in terms of internal consistency but also in the practical terms of managing a new expanded survey that may become excessively long (40 to 50 items) after the LP items are included. Such a survey may require a significant amount of concentration to complete effectively and may affect the reliability of feedback in certain contexts.
A firsthand experience with the 34-item classical survey is analyzed and compared to the most significant case studies revolving around a possible reduction of the 34 items.
The set of 12 additional LP items is included in the survey, which is administered to a sample of n = 55 Italian university students involved in EFL courses where blended interactions were adopted to foster specific CoI-related strategies.
To ensure consistently high-quality feedback, the survey was translated into Italian. For the traditional survey, we used the validated Italian version [34] and included the 12 LP items translated accordingly.
EFA, CFA, and reliability tests were carried out to assess the interdependence of the four presences, and consistency patterns were highlighted. Potential reductions in the number of items were considered to maximize the survey’s effectiveness.

3.1. The Internal Consistency of the Original Items

As the classical CoI framework is organized into three presences and ten subscales, we found that scholars have more often analyzed the relationships among the three main dimensions, while studies focusing on the causal relationships among the ten subscales are less numerous [60,61,62,63]. According to a consensus within the field, in the first two decades after the launch of the CoI, a common inconsistency arose from imbalanced relationships among the three presences. In more detail, while there is commonly a notable correlation between TP and CP (see Table 3), there is evidence of lower interdependence of SP with the former two. Ballesteros et al. [32] underscored that SP items prompt learners to evaluate their own emotions and their connection to the learning process, whereas TP and CP items necessitate the assessment of teachers’ effectiveness and the impact on learners’ abilities and skills.
Following the EFA and CFA performed in all examined studies, the significant correlation between subscales associated with TP and CP is frequently confirmed, whilst the mediating role of SP suggests a less marked degree of correlation with the other two presences. The results of the authors’ firsthand experience in surveying an English as a foreign language (EFL) course [34] also show that the mediating function of SP does not appear to be crucial for TP and CP to interact properly. Indeed, items linked to SP consist of a series of statements closely connected to an individual’s emotional and affective realm. As a result, these items may promote self-evaluation rather than assessing the individual within a social context. However, these observations may provide a different perspective when they are associated with the discipline taught and even with the specific teaching strategies, the core topics, the learning resources, or the learning management system adopted. By way of example, Teng et al. [64] investigated the perception of the three CoI presences in an EFL course in China, adopting the reading circles methodology, which requires students to meet during classroom time to discuss the assigned readings in English. The respondents showed the highest perception of TP, particularly in the subscale Design and Organization. The second highest score was for CP, particularly the subscales Exploration and Integration. Conversely, students revealed a lower perception of SP, and the factor loadings showed an aggregation of the subscale Group Cohesion (which belongs to SP) into CP. Another relevant case study, by Chen [65], carried out in an EFL blended course for university freshmen, was organized to engage the sense of community and to compensate for the insufficient classroom time that usually affects foreign language education.
Similarly, in this case TP and CP showed higher perception than SP and the author reinforces the concept that the outcome of a blended learning experience can be context specific. Indeed, SP and CP are tied to students’ learning styles, whilst TP, despite being potentially shared across the group, may remain a teacher’s prerogative. Our firsthand experience, illustrated in the next sections, shows similar results when the factor adjustments bind the items associated with SP and CP into the same factor.
Given that most of the research cases pursue a three- or four-factor analysis, several studies point out which items usually undergo reduction as a result of these assumptions based on EFA and CFA. On the other hand, several documented cases provided alternative arrangements as a result of EFA, describing an internal structure based on more than three factors. These additional factors result from re-aggregation based on converging parameters and reveal a set of valid relationships that do not match the originally proposed structure (see Table 1). By way of example, Bai et al. [60] showed an alternative optimal model based on six factors comprising the original TP, SP, CP, and a second-order structure including three subscales as additional factors (one for each presence), namely, Design and Organization, Affective Expression, and Resolution (see Table 1). In other terms, Bai et al. [60] found that Design and Organization can be differentiated from TP, Affective Expression from SP, and Resolution from CP, such that each could substantially represent a dimension of its own. The immediate benefit of such a study is that a six-factor structural model may offer better criteria to optimize the instructional design and implementation of an online course (in that case, MOOCs). Another documented case is that of Heilporn and Lakhal [61], who compared alternative structures based on eight, seven, and six factors to the original ten-subscale model (see Table 1). In this case too, the original three presences remained valid structures, but a second-order set of factors was identified according to learners’ perceptions.

3.2. Learners’ Perception as the Cause for Alternative Arrangements

If the internal structure may consist of a different factor arrangement, the consistency of a survey based on three presences divided into ten subscales is open to discussion. But can we derive further observations, useful for our specific EFL teaching environment, from the connotations the items may invoke?
Yu and Richardson [29] removed two items that did not fit neatly into any subscale and found that the classical three-factor structure best explained the data. The final structure comprised 32 items: 12 for TP, 12 for CP, and 8 for SP. The first deleted item was SP_14_Getting to know other course participants gave me a sense of belonging in the course (factor loading of 0.511 on SP and a cross-loading of 0.424 on TP). The second was TP_4_The instructor clearly communicated important due dates/time frames for learning activities (factor loading of 0.493 on TP and a cross-loading of 0.422 on CP).
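Deletion decisions of this kind typically hinge on a simple rule: an item is flagged when its second-highest absolute loading exceeds a cutoff (0.4 in the loadings quoted here). The following is a minimal numpy sketch of that screening step; the `flag_cross_loaded` helper is our own illustration, not part of any cited study.

```python
import numpy as np

def flag_cross_loaded(loadings, threshold=0.4):
    """Flag items whose second-highest absolute loading exceeds the cutoff.

    loadings: (n_items, n_factors) pattern matrix."""
    sorted_abs = np.sort(np.abs(np.asarray(loadings)), axis=1)  # ascending per item
    second_highest = sorted_abs[:, -2]
    return np.where(second_highest > threshold)[0].tolist()

# Loadings reported above: SP_14 (0.511 on SP, 0.424 on TP) and
# TP_4 (0.493 on TP, 0.422 on CP); the third row is a well-behaved item.
pattern = [[0.424, 0.511, 0.05],   # SP_14
           [0.493, 0.02, 0.422],   # TP_4
           [0.700, 0.10, 0.05]]    # clean item
flag_cross_loaded(pattern)  # → [0, 1]
```

Applied to the two loadings reported by Yu and Richardson, both items are flagged, while the clean item passes.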
Borup et al. [66] found that the CoI questionnaire had better internal consistency once seven items were removed: TP_2, TP_13, SP_14, SP_15, SP_22, CP_27, and CP_33; three of these belonged to SP. Nevertheless, it is necessary to consider that this case study relates to a MOOC learning experience, so participants may downgrade certain interactions and perceive them as unnecessary for achieving their learning goals.
According to Caskurlu [36], the error covariances and the conclusions proceeding from previous works show that the three-subscale form for TP based on Design and Organization, Facilitation, and Direct Instruction may not be the only viable format. In other words, the high standardized covariance between the subscales Facilitation and Direct Instruction may lead to further testing of the questionnaire by deleting the subscale Design and Organization, namely, Items 1–4. The practical explanation is that students may struggle to discern Design elements from Direct Instruction in an effective and well-organized course, so Items 1–4 may be redundant in some cases and unnecessary to reach high internal consistency; in that case, the conclusions are opposite to those of Bai et al. [60]. Concerning SP, Items 21 and 22 cross-loaded on multiple subscales. Since online discussion forums do not exclusively encompass group communication, it is suggested that this aspect should either be expressed more inclusively or divided into multiple inquiries to encompass various forms of media.
Heilporn and Lakhal [61], after CFA and EFA, proposed to enhance the CoI questionnaire by refining its items to prevent content overlap and delineate more distinct subscales. Specifically, they advise avoiding double-barreled items, and in their work, they observed some remarkable patterns. First, a strong set of correlations between Facilitation and Direct Instruction leads to the assumption that they may be considered a single subscale, echoing similar findings in the prior literature. Second, regarding the link between Open Communication and Group Cohesion, their results differed somewhat from those of Caskurlu [36], who reported much higher connections between Affective Expression and Group Cohesion. Nevertheless, it is worth noting that their findings align with the skepticism expressed by Lowenthal and Dunlap [67] regarding how SP items are conceptualized. In addition, their investigation into CP raised more concerns, especially the high correlation between Exploration and Integration: students may not perceive these as distinct categories. While Caskurlu [36] did not explicitly mention this, her study also showed significant connections between the CP internal subscales. In general, several authors found a high degree of content overlap in the original CoI questionnaire. Essentially, students might interpret some survey items as the same question, just worded differently.
According to the firsthand experience with the original questionnaire [59], Item 12 did not load: TP_The instructor provided feedback that helped me understand my strengths and weaknesses relative to the course’s goals and objectives. Item 22, instead, loaded on two main factors: SP_Online discussions help me to develop a sense of collaboration (factor loading of 0.414 on TP and 0.475 on CP).
Given these findings, it is questionable whether certain subscales were distinctly separate, even though the original model seemed to fit well. To probe this further, many authors have explored alternative models with fewer than ten latent subscales. Models with eight, seven, and six latent factors showed reasonable fits in terms of internal consistency and reliability. According to these results, when reconsidering the item configuration, it is acceptable to adjust items that could be perceived as duplicate requests and to look for newly emerging patterns. Additionally, the potential influence of organizational design or disciplinary factors on variations in factor interdependence has not been thoroughly investigated.

3.3. Our Firsthand Experience with the Classical CoI Survey

3.3.1. First Sample

At this stage, we administered the classic CoI questionnaire, based on three presences, in a digital format. Students from the Faculty of Pharmacy attending the EFL course during the 2nd semester of a.y. 2022/23 were invited by the first author (also their course teacher) to participate in the survey. A total of 21 of the 43 attending students took part in the survey. The reliability test gave an excellent Cronbach’s alpha of 0.95. The CFA was performed to verify the structure based on three main factors (the three presences) and to detect any anomaly. The χ2 of TP was 144.8, and Item 5 emerged with the lowest mean (3.952) and an out-of-line confidence interval. The χ2 of SP was 75.798, with no outliers. The χ2 of CP was 174.709.
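The reliability coefficient reported here is Cronbach’s alpha, computed from the item variances and the variance of the summed scale. Below is a minimal numpy sketch of the standard formula; the function name and sample data are our own, not drawn from the study.

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for a (n_respondents, k_items) matrix of Likert scores.

    alpha = k/(k-1) * (1 - sum(item variances) / variance(summed scale))."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)       # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of the summed scale
    return (k / (k - 1)) * (1.0 - item_vars.sum() / total_var)

# Three perfectly consistent (identical) items give alpha = 1.
scores = np.column_stack([np.arange(1.0, 6.0)] * 3)
cronbach_alpha(scores)  # ≈ 1.0
```

With real Likert responses the coefficient falls between 0 and 1; values around 0.9 or above, as in both samples, are conventionally read as excellent internal consistency.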

3.3.2. Teaching Presence

The following EFA (Table 4) determines the number of factors using the Kaiser criterion (eigenvalue above 1), promax rotation, and minimum chi-square extraction. The factor loadings of TP mainly suggest a two-subscale dimension against the four original subscales: F1 = 5.693; F2 = 1.456; F3 = 0.817. Conversely, the CFA gives C1 = 6.068; C2 = 1.822; C3 = 1.132.
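The eigenvalue-above-1 rule counts the eigenvalues of the inter-item correlation matrix that exceed 1. The sketch below, assuming numpy and illustrative data of our own, covers only this retention step (extraction and promax rotation are left to a dedicated package):

```python
import numpy as np

def kaiser_factor_count(data):
    """Kaiser criterion: number of eigenvalues of the correlation matrix above 1.

    data: (n_respondents, n_items) matrix of responses."""
    corr = np.corrcoef(data, rowvar=False)
    eigenvalues = np.linalg.eigvalsh(corr)
    return int((eigenvalues > 1.0).sum())

# Two blocks of near-duplicate items should yield two retained factors.
rng = np.random.default_rng(0)
f1, f2 = rng.normal(size=200), rng.normal(size=200)
kaiser_factor_count(np.column_stack([f1, f1, f2, f2]))  # → 2
```

The same count drives the two-factor solutions reported for TP and SP below: once only two eigenvalues clear the threshold, the original finer-grained subscales collapse.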
Item 7 did not load, whilst Items 9 and 11 were plausibly perceived as variations of the same statement.
9_The instructor encouraged course participants to explore new concepts in this course.
11_The instructor helped to focus discussion on relevant issues in a way that helped me to learn.
So, we removed Item 11 and the model gained consistency, also defusing the cross-loading of Item 9 (Table 5). However, the original subscales are no longer subdivided as in the classical version.

3.3.3. Social Presence

EFA of SP suggests a better two-subscale dimension against the three original subscales (Table 6 and Table 7): F1 = 2.676; F2 = 2.304; F3 = 0.976. Item 17 did not load and, as already observed for TP, diverging items convey similar core concepts. Despite belonging to two different subscales (Item 16 to Affective Expression and Item 17 to Open Communication; see Table 1), they may be perceived as different versions of the same idea, namely, how suitable digital tools are for social interaction. Furthermore, once Items 16 and 17 are removed, Item 22 maintains its status as an additional factor with a significantly high loading of 1.001, signaling that it is unaffected by the previous corrections.
16_Online or web-based communication is an excellent medium for social interaction.
17_I felt comfortable conversing through the online medium.
22_Online discussions help me to develop a sense of collaboration.
Therefore, by also deleting Item 22, it is possible to achieve a more consistent version based on only two balanced factors.

3.3.4. Cognitive Presence

EFA of CP suggests a suitable three-subscale dimension (Table 8), F1 = 5.460; F2 = 2.125; F3 = 0.976, but it does not match the classical three-subscale subdivision (see Table 1).
Table 8 assigns Item 26 to Triggering Event, whilst Items 27 and 28 constitute a weak factor with the lowest values.

3.3.5. Second Sample; EFA and CFA

This sample included 53 students from the Faculty of Engineering attending a coding techniques course, supported by the platform OpenAnswers, during the 2nd semester of a.y. 2022/23, and 181 students attending EFL courses in the Faculties of Engineering and Economics during the 1st and 2nd semesters of a.y. 2022/23. In this instance, the questionnaire was administered in a mixed format, both paper-based and online, to accommodate students who were not present on the day of the survey. The reliability test gave an excellent Cronbach’s alpha of 0.943.
In the confirmatory factor analysis (CFA) conducted to verify the structure based on three main factors (the three presences), several key aspects were highlighted. The chi-square (χ2) statistics and the performance of individual items in terms of mean values and confidence intervals were particularly notable. For the TP factor, the χ2 value was 144.8, and Item 5 was identified as having the lowest mean value of 3.952, with its confidence interval deviating significantly from the expected range. This indicates that Item 5 may not align well with the rest of the items in this factor, suggesting potential issues with its validity or reliability. The SP factor displayed a χ2 value of 75.798, with no outliers detected. This result implies a relatively good fit for the items within this factor, indicating that the model adequately captures the underlying structure of the social presence construct. For the CP factor, the χ2 value was 174.709. Although no specific items were highlighted as problematic in this summary, the relatively high chi-square value suggests that there might be room for further refinement in the items representing this factor.
In the EFA the factor loadings indicated strong correlations between certain variables and their respective factors. For instance, Item 4 had a high loading of 0.911 on Factor 1, with a low uniqueness value of 0.127, demonstrating its strong association and low measurement error. Conversely, Item 22 showed a negative loading of −0.472 on Factor 1 with a uniqueness value of 0.297, indicating a weaker and inverse relationship with this factor. The CFA further validated these findings. The factor loadings in CFA, such as λ11 = 0.680 for Item 1 with a significant p-value (<0.001), confirmed the strong associations observed in EFA. The CFA results demonstrated that the model fit the data well, with the factor model χ2 significantly lower than the baseline model, indicating a good fit. Bartlett’s test of sphericity also yielded significant values in both analyses, confirming that the variables were sufficiently correlated to warrant factor analysis. Overall, the CFA results largely corroborated those from the EFA, indicating that the model is robust and the factors are well defined. The decision to reduce the number of items from 34 to 33, by removing Item 28 due to its lower statistical significance, appears to have been a sound choice, enhancing the model’s fit to the data. Despite minor adjustments and methodological differences, both EFA and CFA demonstrated substantial correlations among the variables and a good fit of the model to the data, thereby validating the theoretical structure of the three presences.

4. Measuring the Learning Presence Construct Only

As illustrated in Section 2.5, twelve new items were developed, added to the classical survey, and translated into Italian to be administered at the end of the second semester to three groups of students attending three English as a foreign language courses in three faculties at the Sapienza University of Rome. The total number of participants was 55 (27 women, 28 men) from the Faculty of Economics n = 20; Faculty of Engineering n = 12; Faculty of Pharmacy n = 23. The sample size is limited because the first author was also the instructor for the three respective courses and collected feedback only from those students who attended at least 75% of the blended course activities. Each course was delivered in a face-to-face format with regular remote activities through the Moodle platform of Sapienza University of Rome. The survey was completed in a controlled environment and the feedback is highly reliable.
As shown in Table 9, the exploratory factor analysis (EFA) for the new dimension learning presence (LP) reveals two main factors.
Interpretation of Factor 1
This factor represents the dimension of self-regulation and reflectivity in learning. Items with high factor loadings in this factor pertain to students’ ability to reflect on their learning habits, adapt their learning style, experiment with new resources and approaches, and evaluate their progress. The high uniqueness of some items, such as Item 36, suggests that they might not align perfectly with the other items in the factor and may need revision.
Interpretation of Factor 2
This factor represents the dimension of study planning and strategy. Items with high factor loadings in this factor relate to students’ ability to plan their study activities, maintain regular study habits, and adapt their learning strategies according to their needs. The items in this factor reflect a more practical and operational aspect of learning.

5. Measuring the Entire Framework with Four Presences

Bartlett’s test of sphericity is significant (chi-square (χ2): 707.875; degrees of freedom (df): 1035; p-value: <0.001), indicating that the correlation matrix differs from an identity matrix, i.e., the variables are sufficiently intercorrelated. This supports the use of dimensionality reduction techniques such as factor analysis.
Chi-square (χ2): The high value of 707.875 indicates the magnitude of the association between the variables, suggesting that there are significant correlations among the variables in the dataset.
The degrees of freedom are 1035, which is typical for tests involving large correlation matrices.
The p-value below 0.001 indicates that the chi-square value is significantly different from what would be expected under the null hypothesis of uncorrelated variables. However, given the sensitivity of the χ2 statistic to sample size, it is important to consider this result in the context of other fit indices. In this case, the χ2/degrees-of-freedom ratio suggests a good fit, but further fit indices should be examined for a more comprehensive evaluation of the model.
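For reference, Bartlett’s statistic is χ2 = −(n − 1 − (2p + 5)/6) · ln|R|, where R is the p × p inter-item correlation matrix, with p(p − 1)/2 degrees of freedom; with p = 46 items this yields the 1035 degrees of freedom reported above. A minimal numpy sketch follows (the function name and example data are our own):

```python
import numpy as np

def bartlett_sphericity(data):
    """Bartlett's test of sphericity: H0 is that the correlation matrix
    is the identity, i.e., the items are uncorrelated.

    Returns the chi-square statistic and its degrees of freedom."""
    n, p = data.shape
    corr = np.corrcoef(data, rowvar=False)
    chi_square = -(n - 1 - (2 * p + 5) / 6) * np.log(np.linalg.det(corr))
    df = p * (p - 1) // 2
    return chi_square, df

# Two exactly uncorrelated items: det(R) = 1, so the statistic is ~0 (df = 1).
x = np.array([1.0, 2.0, 3.0, 4.0])
y = np.array([1.0, -1.0, -1.0, 1.0])
bartlett_sphericity(np.column_stack([x, y]))
```

The p-value is then obtained from the χ2 distribution with the returned degrees of freedom; a statistic near zero (items uncorrelated) is the case in which factor analysis would not be warranted.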
The exploratory factor analysis (EFA) conducted (see Table 10), which resulted in twelve factors, provides a detailed breakdown of the underlying structure of the dataset. This analysis reveals important insights into the dimensionality of the data and the relationships among the observed variables. A detailed commentary on the EFA results follows, considering the possibility of associating each of the 12 factors with one of the four presences: teaching presence (TP), social presence (SP), cognitive presence (CP), and learning presence (LP).

5.1. Factor Loadings and Interpretation

Factor 1: Practical Application
High loadings for items related to practical application of learned knowledge, such as applying what has been learned to work or other contexts and applying course approaches to concrete situations.
Number of incorporated items: 5 (Items 29, 31, 32, 33, 34).
Association: CP, as it involves applying knowledge and skills in practical contexts.
Factor 2: Social Acceptance
Sharing personal opinions and feeling comfortable in a climate of trust.
Number of incorporated items: 5 (Items 18, 19, 20, 21, 42).
Association: SP, as it reflects the sense of belonging and trust within the learning community. Nevertheless, Item 42 belongs to the new LP construct and is related to study habits. This may suggest a connection between social interactions and the personal study mindset.
Factor 3: Teaching Clarity and Organization
Includes items on clarity and guidance provided by the instructor, such as clear instructions for activities and clear communication of deadlines.
Number of incorporated items: 4 (Items 3, 4, 8, 9).
Association: TP, focusing on the instructor’s role in organizing and guiding the learning experience.
Factor 4: Cognitive Engagement
High loadings on items related to curiosity and interest stimulated by course activities.
Number of incorporated items: 3 (Items 23, 24, 30).
Association: CP, as it involves engaging students intellectually and fostering curiosity.
Factor 5: Exploration and Feedback
Items focused on regular reflection and feedback.
Number of incorporated items: 3 (Items 26, 38, 39).
Association: CP/LP, due to its emphasis on self-regulation and reflective practices.
Factor 6: Self-Regulation and Collaboration through Digital Tools
Items like reflecting on personal learning habits and collaboration.
Number of incorporated items: 3 (Items 17, 35, 43).
Association: CP/LP, focusing on collaboration through digital tools and adjusting personal learning habits.
Factor 7: Instructor’s Feedback and Facilitation
Items such as communicating goals, facilitating understanding of content, providing feedback and engaging students in brainstorming activities.
Number of incorporated items: 4 (Items 2, 12, 13, 27).
Association: TP, emphasizing the instructor’s role in facilitating discourse and understanding.
Factor 8: Facilitation
Items like identifying areas of agreement and disagreement, guiding the class, and engaging students in a productive dialogue.
Number of incorporated items: 3 (Items 5, 6, 7).
Association: TP, related to the ease of facilitating teacher/learners interactions.
Factor 9: Study Habits and Self-Reflection
Items like reflecting on learning habits and progress.
Number of incorporated items: 3 (Items 40, 41, 46).
Association: LP, focusing on continuous self-assessment and reflection.
Factor 10: Discussing, Combining and Constructing Explanations
Items from the Exploration and Integration subscales of CP.
Number of incorporated items: 1 + 2 cross-loadings (Items 28, 29, 30).
Association: CP, cross-loadings with factor 1 and factor 4, both associated with CP.
Factor 11: Sense of Community and Social Well-Being
Items related to Affective Expression and instructor’s reinforcing the sense of belonging to the course.
Number of incorporated items: 3 (Items 10, 14, 15).
Association: TP.
Factor 12: Choosing Learning Resources
Number of incorporated items: 1 (Item 36).
Association: LP.
Items not loaded: 1, 11, 16, 22, 25, 37, 44, 45.
The cumulative variance explained by the factors is crucial for understanding how well the factors capture the underlying data structure. In the unrotated solution, Factor 1 explains a substantial 31.7% of the variance, with subsequent factors adding smaller proportions, cumulatively explaining 70.9% of the variance. After rotation (promax), the variance is more evenly distributed across factors, with Factor 1 explaining 9.3% and the cumulative variance still reaching 70.9%. Rotation generally aids in achieving a more interpretable factor structure. The EFA with 12 factors shows a more granular division of dimensions compared to CFA models with fewer factors. This might indicate overfitting or a need for further refinement to reduce redundancy. Factors in the EFA, such as those related to practical application, social acceptance, and cognitive engagement, align with broader dimensions identified in the CFA models (teaching presence, social presence, cognitive presence, and learning presence). High loadings and clear factor delineations suggest strong internal consistency within each factor. However, the presence of 12 factors, and the exclusion of 8 items may complicate the model’s application and interpretation compared to a more parsimonious model with fewer, broader dimensions. Uniqueness values (e.g., 0.112 for λ34 and 0.181 for λ33) indicate how much of the variance of each item is not explained by the factors. Lower uniqueness suggests higher reliability of the factors in capturing the item variance.
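Uniqueness values such as those quoted here are one minus the communality, i.e., the share of an item’s variance not reproduced by the retained factors. For an orthogonal solution the communality is the sum of squared loadings across factors (with an oblique rotation such as promax this sum is only an approximation, since the factors correlate). A minimal sketch with illustrative loadings of our own, not the study’s:

```python
import numpy as np

def communality_uniqueness(loadings):
    """loadings: (n_items, n_factors) matrix from an orthogonal factor solution.

    Communality h2 = sum of squared loadings per item; uniqueness u2 = 1 - h2."""
    h2 = (np.asarray(loadings) ** 2).sum(axis=1)
    return h2, 1.0 - h2

# Hypothetical two-factor loadings for two items.
h2, u2 = communality_uniqueness([[0.8, 0.3],
                                 [0.5, 0.5]])
# h2 ≈ [0.73, 0.50]; u2 ≈ [0.27, 0.50]
```

On this reading, a low uniqueness (e.g., 0.112) means the factors capture almost all of the item’s variance, while a high uniqueness flags an item that the factor structure explains poorly.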
The EFA results with 12 factors provide a nuanced view of the survey data, identifying multiple distinct dimensions of the learning experience. While this granularity offers deeper insights, it may also lead to complexity in interpretation and practical application. Comparing these results with previous CFA models suggests that while the 12-factor solution captures more specific aspects of the data, associating these factors with the four presences helps simplify the model and make it more interpretable. Future analysis could focus on combining closely related factors to achieve a balance between detail and simplicity, ensuring the model remains both explanatory and practical.

5.2. Forcing the EFA

If we manually set the EFA on four factors, it returns a remarkable configuration showing TP and SP as two distinct constructs, whilst CP and LP share commonalities.
The four factors identified in the factor analysis include the following items:
Factor 1: Items 33, 32, 35, 34, 37, 29, 44, 43, 31, 25, 23, 42, 38, 24, 46, 22.
Factor 2: Items 8, 9, 3, 11, 7, 1, 2, 12, 6, 4.
Factor 3: Items 21, 14, 16, 20, 19, 18, 15, 10, 17.
Factor 4: Items 26, 39, 27, 36, 13.
Excluded items: 5, 28, 30, 40, 41, 45.
Cross-loaded items: 12, 38.
The first factor seems to aggregate elements related to reflection and practical application of knowledge, indicating a strong component of CP and self-reflective behavior conveyed by LP. This is consistent with previous studies suggesting that students perceive strong connections between knowledge integration and practical application. The second factor is dominated by items related to TP, such as facilitation by the teacher and provided support, confirming the relevance of this dimension in facilitating students’ learning pathways. This aligns with the observations of Shea and Bidjerano [11], who highlight how TP can positively influence students’ motivation and self-regulation. The third factor is characterized by items related to SP, such as the sense of belonging and digital communication, reflecting the importance of the social dimension in influencing student engagement. The overlap between SP and TP, as suggested by previous studies, is evident since some TP items also show correlations with the social dimension. Finally, the fourth factor includes items related to self-regulation, study planning and feedback, which are aspects of CP and LP. This supports the idea that the introduction of LP can indeed add a new dimension to the CoI analysis, as suggested by Shea et al. [12] and Wertz [16], proposing that LP can offer a more nuanced understanding of students’ self-regulated behaviors.

5.3. Observation on the Four-Factor EFA

The EFA conducted with a forced four-factor solution on the extended CoI survey reveals some noteworthy discrepancies, including excluded items and cross-loaded items. This analysis explores the possible implications of these aggregations from behavioral, cognitive, learning, and teaching perspectives.
In analyzing the excluded items, we can make some relevant observations in line with previous studies and the results of the factor analysis. The exclusion of items can be attributed to several factors, including low communality, weak correlation with other items within the same factor, or significant overlap with items from other factors. For example, excluded items such as The instructor was helpful in identifying areas of agreement and disagreement on course topics that helped me to learn (Item 5) and Online discussions were valuable in helping me appreciate different perspectives (Item 28) might be perceived as redundant or less specific in the context of factor analysis. Overall, the results indicate that while a four-factor model provides a structured framework, certain items exhibit cross-loadings and high uniqueness, suggesting room for refinement. Methodologically, refining these items or potentially expanding the number of factors could yield a more nuanced understanding of the extended CoI framework. The recommended steps include reviewing and possibly revising cross-loaded items to enhance their specificity and alignment with a single dimension, assessing excluded items to determine if they require rephrasing or if new items should be developed to capture the intended constructs more effectively, and considering additional EFAs with more factors to see if a more granular structure better captures the complexities of the extended CoI dimensions. Recognizing the dual role of feedback in enhancing cognitive, social, and teaching presences, and ensuring items reflect these multi-faceted contributions, will help create a more robust and comprehensive CoI survey that effectively captures the expanded dimensions of the learning environment.
Concerning the two cross-loaded items, their meaning is likely not clearly associated with a single dimension of the survey, which may complicate the interpretation of results. For instance, the item The instructor provided feedback that helped me understand my strengths and weaknesses relative to the course’s goals and objectives (Item 12) shows loadings on both Factor 2 (TP) and Factor 4. This suggests that while the item is intended to measure aspects of TP, it may also tap into elements of another dimension, such as CP, if students perceive feedback as not only an instructional activity but also a way to enhance their understanding and cognitive engagement. Another example is I could acknowledge if I was taking an effective or ineffective learning approach to a certain topic and changed accordingly (Item 42). This item shows loadings on Factor 1 (CP) and potentially Factor 4 (LP), indicating that self-assessment and adaptation of learning strategies are seen as both cognitive processes and integral to self-regulation.
At this stage, two sets of CFA results incorporating TP, SP, CP, and LP are compared to determine which model presents better internal consistency, correlations, and reliability. The first CFA, which is the full model with 46 items, shows varied factor loadings across different presences. For TP, the loadings range from 0.133 to 0.412, with all items being significant except for one (Item 10). An example is Item 19, which has a loading of 0.412 and is highly significant. SP has loadings ranging from 0.325 to 0.580, all significant, exemplified by Item 25 with a loading of 0.580. CP shows loadings between 0.279 and 0.459, all significant, with Item 31 having a loading of 0.459. LP has loadings from 0.208 to 0.551 (Item 41).
In terms of factor covariances, the first model shows strong relationships between the presences. For instance, the covariance between TP and SP is 0.632, while the highest covariance is between CP and LP at 0.871. These high covariances indicate significant inter-factor relationships.
The second CFA excludes six items (Items 5, 28, 30, 40, 41, and 45) and presents similar factor loadings. For TP, the loadings range from 0.132 to 0.416, with all items significant except Item 10. SP loadings range from 0.285 to 0.579, all significant. CP loadings lie between 0.270 and 0.441, all significant, and LP loadings range from 0.285 to 0.545, all significant.
The covariances in the second model are slightly lower but still show strong inter-factor relationships. The covariance between TP and SP is 0.621, while the highest is again between CP and LP at 0.876.
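Covariances of this magnitude between CP and LP are close to the 0.85 cutoff often used as a discriminant-validity heuristic. The sketch below flags factor pairs whose correlation reaches such a cutoff; note that only the TP–SP (0.632) and CP–LP (0.871) values are taken from the text, while the remaining off-diagonal entries of the matrix are hypothetical placeholders, and the 0.85 rule is a common convention, not a criterion applied in this study.

```python
# Flag factor pairs whose correlation reaches a discriminant-validity cutoff.
# 0.85 is a common heuristic; the study itself applies no such rule.
FACTORS = ["TP", "SP", "CP", "LP"]

# Correlation matrix: TP-SP (0.632) and CP-LP (0.871) echo the values in the
# text; all other off-diagonal entries are hypothetical placeholders.
PHI = [
    [1.000, 0.632, 0.700, 0.650],
    [0.632, 1.000, 0.680, 0.640],
    [0.700, 0.680, 1.000, 0.871],
    [0.650, 0.640, 0.871, 1.000],
]

def discriminant_flags(phi, names, cutoff=0.85):
    """Return (factor_i, factor_j, r) for every pair with r >= cutoff."""
    return [(names[i], names[j], phi[i][j])
            for i in range(len(names))
            for j in range(i + 1, len(names))
            if phi[i][j] >= cutoff]

print(discriminant_flags(PHI, FACTORS))
```

With the placeholder matrix above, only the CP–LP pair is flagged, which mirrors the pattern reported in both models.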
Comparing factor loadings across the two models, they fall within similar ranges and are statistically significant, indicating strong relationships between items and their respective factors. Slightly higher loadings in the first model for some TP and SP items suggest marginally better item–factor relationships in the full model. Most items in both models are significant at p < 0.001, demonstrating robust correlations across all factors; the non-significant items in both models point to candidates for revision or removal. Moreover, factor covariances in both models range from around 0.421 to 0.876, with CP and LP consistently showing the highest covariance, indicating strong inter-factor relationships. Both models exhibit high internal consistency, as shown by the significant factor loadings and covariances. The exclusion of six items in the second model does not significantly alter the overall structure, suggesting that the streamlined model retains robust internal consistency.
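Internal consistency of a subscale is conventionally summarized with Cronbach's alpha. The following self-contained sketch computes it from raw responses; the six respondents and three Likert items below are invented for illustration and do not reproduce the study's data.

```python
# Cronbach's alpha from raw item responses: a standard internal-consistency
# index. The response data below are invented for illustration only.

def cronbach_alpha(items):
    """items: list of per-item response lists, all of equal length."""
    k = len(items)            # number of items in the subscale
    n = len(items[0])         # number of respondents

    def var(xs):              # sample variance (n - 1 denominator)
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    totals = [sum(resp[i] for resp in items) for i in range(n)]
    item_var_sum = sum(var(it) for it in items)
    return k / (k - 1) * (1 - item_var_sum / var(totals))

# Three hypothetical 5-point Likert items answered by six respondents
lp_items = [
    [4, 5, 3, 4, 2, 5],
    [4, 4, 3, 5, 2, 4],
    [5, 4, 2, 4, 3, 5],
]
print(round(cronbach_alpha(lp_items), 2))  # prints 0.87
```

Values above roughly 0.70 are usually read as acceptable consistency, though alpha rises mechanically with the number of items and should be interpreted alongside the factor-analytic evidence.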

6. Discussion

Both CFAs of the extended 46-item survey show strong internal consistency, significant factor loadings, and high factor covariances, indicating reliable and valid measurement models. However, the second model, which excludes six items, presents a slightly cleaner and more streamlined structure without compromising the robustness of the relationships among factors. The similar ranges of factor loadings and the maintained significance levels suggest that the exclusion of items simplifies the model while retaining its explanatory power. Therefore, the second model, with the six items excluded, may be preferred for its parsimony and efficient representation of the CoI framework.
According to Garrison [68], the argument against the existence of a fourth presence in the CoI is motivated by the avoidance of redundancy. However, research indicates that the LP construct has valuable implications for a deeper understanding of hidden aspects of the learning experience. This dimension shows significant connections with the other presences in the CoI and holds potential for advancing individual and collective learning within online learning communities. Whether it is considered a distinct fourth presence or an emerging construct within the dynamics of the CoI, further investigation into LP is warranted. A case study involving a sample of 2010 college students conducted by Shea and Bidjerano [11] showed that where TP or SP is deficient, the learner’s self-regulation, which the authors define as LP, serves as a compensatory element in fostering cognitive presence. A similar approach applied during the pandemic lockdown returned comparable results and observations [34]. Conversely, TP appears to be a key factor in prompting self-regulated behaviors among learners.

Observations on the Six Excluded Items from the EFA

From the teaching perspective, the exclusion of Item 5, which concerns the instructor helping students identify aspects that generate agreement or disagreement to facilitate learning, suggests a potential gap in capturing the nuanced role of facilitation in TP. While facilitation is critical, it may not align well with the other measured constructs of TP in the survey. Additionally, the cross-loading of Item 12, which involves instructor feedback helping students understand their strengths and weaknesses, suggests that effective feedback is multi-faceted: it contributes not only to teaching presence but also intersects with the social and cognitive dimensions, indicating the broad impact of instructional feedback.

From the social perspective, the exclusion of Item 28, which addresses the role of online exchanges in helping students consider perspectives different from their own, highlights a potential gap in the social presence dimension. This exclusion suggests that the survey might not fully capture the social interactions that facilitate cognitive growth and perspective-taking. Furthermore, the cross-loading of Item 12 implies that instructor feedback has significant social implications, contributing to the feeling of being supported and understood within the learning community.

From the cognitive standpoint, the exclusion of Items 28 and 30, which deal with online exchanges aiding perspective-taking and with activities helping to elaborate clarifications and solutions, respectively, suggests a potential shortfall in measuring the full scope of cognitive processes involved in learning. These exclusions imply that the survey might need to better integrate items that capture the interplay between cognitive engagement and collaborative activities. The cross-loading of Item 38, which involves regular reflection on learning habits, further underscores the cognitive dimension’s overlap with self-regulation and learning strategies.
Taking the behavioral perspective into consideration as well, the exclusion of Item 40, related to maintaining regular study habits in conducive environments, and Item 41, related to planning study schedules and materials, indicates that these specific study behaviors might not fit neatly within the established factors. This exclusion suggests that while these behaviors are relevant, they might be too specific or operate independently from the broader behavioral constructs measured. Indeed, the cross-loading of Item 38 highlights the behavioral aspect of regularly reflecting on one’s learning habits, indicating a strong link between behavior and self-regulation. From a psychological standpoint, the exclusion of Item 45, which pertains to comfort with learning through trial and error and adapting to unforeseen difficulties, suggests that this aspect of psychological resilience and adaptability might not be fully captured within the current survey structure or that students may not develop such an attitude in the examined context. These exclusions suggest that self-regulated learning behaviors might be more complex or multi-dimensional than currently captured.
Additionally, the cross-loading of Item 12, involving feedback on strengths and weaknesses, highlights the psychological impact of effective feedback on students’ self-perception and confidence. From the learning perspective, the exclusion of Items 30 and 40 indicates potential gaps in capturing the full learning process, particularly how students engage with and reflect on activities and study environments; the survey might need to better incorporate items that reflect the iterative and reflective aspects of learning. Finally, from a self-regulation perspective, the exclusion of Items 40, 41, and 45, which address study habits, planning, and adaptability, sheds light on potential gaps in measuring certain nuances of self-regulation within the survey.
These identified correlations in the EFA highlight important areas for refinement in the extended CoI survey. The exclusion of items suggests potential gaps in capturing the full scope of teaching, social, cognitive, behavioral, psychological, learning, and self-regulation dimensions. Cross-loaded items indicate the multi-faceted nature of certain aspects, such as feedback and reflection, which intersect multiple dimensions. Addressing these anomalies through item refinement and potential expansion of the survey will help create a more comprehensive and nuanced measure of the extended CoI framework. This iterative process is essential for accurately capturing the complexities of the learning environment and informing effective educational practices.

7. Conclusions

Given the small sample size (n = 55) to which we administered the extended CoI questionnaire, we did not attempt to validate this experimental version, but rather aimed to observe the correlations established among the four dimensions. In light of the comprehensive analysis presented throughout this contribution, it becomes evident that the various iterations and adaptations of the CoI survey underscore its nature as an adaptive instrument rather than a definitive one. Each alternative version of the CoI survey demonstrates that its structure and content are inherently open to modification and fine-tuning in response to specific contextual needs, learner characteristics, and even socio-cultural variations. The flexibility of the CoI framework, as shown by the numerous empirical studies supporting its adaptability, indicates that it should be viewed as a dynamic tool rather than a dogmatic one.

Moreover, the integration of LP as a potential fourth dimension further illustrates the need for a fluid approach to the CoI survey. The proposal to include LP highlights significant connections with the existing presences—teaching, social, and cognitive—and suggests that the survey must remain open to incorporating new dimensions that capture emergent aspects of learning, such as self-regulation and metacognition [12,16]. Therefore, the CoI survey should be regarded as an instrument that is continually refined and adapted to better suit the evolving landscape of educational research and practice. This perspective ensures that the survey remains relevant and effective in diverse learning contexts, providing educators and researchers with a robust tool for measuring and understanding the intricate dynamics of online and blended learning communities [29,60,61]. Our results indicate that the new four-factor structure, including LP, is not only feasible but can also provide a more comprehensive view of blended learning dynamics.
The inclusion of LP seems to improve the survey’s ability to capture students’ self-regulation and adaptability, crucial aspects in digital and self-directed learning environments.
The time is ripe for a well-documented debate on formulating the fourth presence in terms of subscales and items. Every new proposal for items to be added to the questionnaire should be considered provisional and subject to subsequent application and verification. Undoubtedly, the complexities generated by new subscale elements require that new case studies examine how these interact with the traditional model. Equally, this openness does not legitimize empirical work lacking a solid analytical and statistical foundation.
Every new adjustment of the CoI survey should be considered temporary, as it will again be subject to EFA and CFA to assess internal consistency and detect the new set of correlations among items. Crucially, our observations underscore the impact of the discipline on the CoI framework [37]. This implies that factors such as the subject matter, teaching methodology, learner attributes, and learning management system are all significant components with the potential to affect the perception of the CoI. These aspects will reasonably play a significant role in future proposals aimed at measuring LP through brand-new items.

Author Contributions

Conceptualization, S.N. and A.C.; methodology, A.C.; software, S.N.; validation, S.N.; formal analysis, S.N.; investigation, resources and data collection, data curation, S.N.; writing—original draft preparation, S.N.; writing—review and editing, S.N. and A.C.; visualization, S.N.; supervision, A.C.; project administration, A.C.; funding acquisition, S.N. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

The three faculties involved were informed of the data collection activity. The collection did not include personal or sensitive data, and the feedback was anonymous.

Informed Consent Statement

The questionnaire included an introductory disclaimer explaining that the feedback would be used for research purposes and that submission of the form served as authorization. Written informed consent from the participant(s) to publish this paper is not applicable.

Data Availability Statement

The dataset is available in the corresponding author’s Google repository (Google Form), at the following link: https://docs.google.com/forms/d/189ItZI99FQVi1xbxFec_Yf17OS1LsZs77lS3GrUJVGM/viewanalytics.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

EFA  Exploratory factor analysis
CFA  Confirmatory factor analysis
CoI  Community of Inquiry
TP   Teaching presence
SP   Social presence
CP   Cognitive presence
LP   Learning presence
EFL  English as a foreign language

References

  1. Garrison, D.R.; Anderson, T.; Archer, W. Critical Inquiry in a Text-Based Environment: Computer Conferencing in Higher Education. Internet High. Educ. 1999, 2, 87–105. [Google Scholar] [CrossRef]
  2. Anderson, T.; Rourke, L.; Garrison, D.R.; Archer, W. Assessing teaching presence in a computer conferencing context. J. Asynchronous Learn. Netw. 2001, 5, 1–17. [Google Scholar] [CrossRef]
  3. Rourke, L.; Anderson, T.; Garrison, D.R.; Archer, W. Assessing social presence in asynchronous text-based computer conferencing. J. Distance Educ. 1999, 14, 50–71. [Google Scholar]
  4. Garrison, D.R.; Vaughan, N.D. Blended Learning in Higher Education: Framework, Principles, and Guidelines; John Wiley & Sons: San Francisco, CA, USA, 2008. [Google Scholar]
  5. Stenbom, S. A systematic review of the Community of Inquiry survey. Internet High. Educ. 2018, 39, 22–32. [Google Scholar] [CrossRef]
  6. Ng, D.T.K. What is the metaverse? Definitions, technologies and the community of inquiry. Australas. J. Educ. Technol. 2022, 38, 190–205. [Google Scholar] [CrossRef]
  7. Cleveland-Innes, M.; Campbell, P. Emotional presence, learning, and the online learning environment. Int. Rev. Res. Open Distance Learn. 2012, 13, 269–292. [Google Scholar] [CrossRef]
  8. Majeski, R.A.; Stover, M.; Valais, T. The Community of Inquiry and Emotional Presence. Adult Learn. 2018, 29, 53–61. [Google Scholar] [CrossRef]
  9. Shea, P.; Bidjerano, T. Learning presence: Towards a theory of self-efficacy, self-regulation, and the development of a communities of inquiry in online and blended learning environments. Comput. Educ. 2010, 55, 1721–1731. [Google Scholar] [CrossRef]
  10. ElSayad, G. Can learning presence be the fourth community of inquiry presence? Examining the extended community of inquiry framework in blended learning using confirmatory factor analysis. Educ. Inf. Technol. 2023, 28, 7291–7316. [Google Scholar] [CrossRef]
  11. Shea, P.; Bidjerano, T. Learning presence as a moderator in the community of inquiry model. Comput. Educ. 2012, 59, 316–326. [Google Scholar] [CrossRef]
  12. Shea, P.; Hayes, S.; Smith, S.U.; Vickers, J.; Bidjerano, T.; Pickett, A.; Gozza-Cohen, M.; Wilde, J.; Jian, S. Learning presence: Additional research on a new conceptual element within the community of inquiry (CoI) framework. Internet High. Educ. 2012, 15, 89–95. [Google Scholar] [CrossRef]
  13. Shea, P.; Richardson, J.; Swan, K. Building bridges to advance the Community of Inquiry framework for online learning. Educ. Psychol. 2022, 57, 148–161. [Google Scholar] [CrossRef]
  14. Wei, L.; Hu, Y.; Zuo, M.; Luo, H. Extending the COI Framework to K-12 Education: Development and Validation of a Learning Experience Questionnaire. In Blended Learning. Education in a Smart Learning Environment, Proceedings of the 13th International Conference, ICBL 2020, Bangkok, Thailand, 24–27 August 2020, Proceedings 13; Springer International Publishing: Berlin/Heidelberg, Germany, 2020; pp. 315–325. [Google Scholar] [CrossRef]
  15. Wertz, R.E.H. What is learning presence and what can it tell us about success in learning online? In Proceedings of the Frontiers in Education (FIE) Conference, Madrid, Spain, 22–25 October 2014. [CrossRef]
  16. Wertz, R.E.H. Learning presence within the Community of Inquiry framework: An alternative measurement survey for a four-factor model. Internet High. Educ. 2021, 52, 100832. [Google Scholar] [CrossRef]
  17. Bandura, A. Self-efficacy: Toward a unifying theory of behavioral change. Psychol. Rev. 1977, 84, 191–215. [Google Scholar] [CrossRef]
  18. Bandura, A. Self-regulation of motivation through anticipatory and self-reactive mechanisms. In Nebraska Symposium on Motivation, 1990: Perspectives on Motivation; University of Nebraska Press: Lincoln, NE, USA, 1991; pp. 69–164. [Google Scholar]
  19. Bandura, A. Social cognitive theory of self-regulation. Organ. Behav. Hum. Decis. Process. 1991, 50, 248–287. [Google Scholar] [CrossRef]
  20. Bandura, A. The role of self-efficacy in goal-based motivation. In New Developments in Goal Setting and Task Performance; Routledge/Taylor & Francis Group: London, UK, 2013; pp. 147–157. [Google Scholar]
  21. Garrison, D.R.; Cleveland-Innes, M.; Fung, T. Student role adjustment in online communities of inquiry: Model and instrument validation. J. Asynchronous Learn. Netw. 2004, 8, 61–74. [Google Scholar] [CrossRef]
  22. Akyol, Z.; Garrison, D.R. The Development of a Community of Inquiry over Time in an Online Course: Understanding the Progression and Integration of Social, Cognitive and Teaching Presence. J. Asynchronous Learn. Netw. 2008, 12, 3–22. [Google Scholar]
  23. Arbaugh, J.B.; Cleveland-Innes, M.; Diaz, S.R.; Garrison, D.R.; Ice, P.; Richardson, J.C.; Swan, K.P. Developing a community of inquiry instrument: Testing a measure of the Community of Inquiry framework using a multi-institutional sample. Internet High. Educ. 2008, 11, 133–136. [Google Scholar] [CrossRef]
  24. Swan, K.; Richardson, J.C.; Ice, P.; Garrison, D.R.; Cleveland-Innes, M.; Arbaugh, J.B. Validating a measurement tool of presence in online communities of inquiry. E-Mentor 2008, 2, 1–12. [Google Scholar]
  25. Abbitt, J.T.; Boone, W.J. Gaining insight from survey data: An analysis of the community of inquiry survey using Rasch measurement techniques. J. Comput. High. Educ. 2021, 33, 367–397. [Google Scholar] [CrossRef]
  26. Shea, P.; Bidjerano, T. Community of inquiry as a theoretical framework to foster “epistemic engagement” and “cognitive presence” in online education. Comput. Educ. 2009, 52, 543–553. [Google Scholar] [CrossRef]
  27. Swan, K.; Ice, P. The community of inquiry framework ten years later: Introduction to the special issue. Internet High. Educ. 2010, 13, 1–4. [Google Scholar] [CrossRef]
  28. Moreira, J.A.; Ferreira, A.G.; Almeida, A.C. Comparing communities of inquiry of Portuguese higher education students: One for all or one for each? Open Prax. 2013, 5, 165. [Google Scholar] [CrossRef]
  29. Yu, T.; Richardson, J.C. Examining reliability and validity of a Korean version of the Community of Inquiry instrument using exploratory and confirmatory factor analysis. Internet High. Educ. 2015, 25, 45–52. [Google Scholar] [CrossRef]
  30. Ma, Z.; Wang, J.; Wang, Q.; Kong, L.; Wu, Y.; Yang, H. Verifying causal relationships among the presences of the Community of Inquiry framework in the Chinese context. Int. Rev. Res. Open Distance Learn. 2017, 18, 213–230. [Google Scholar] [CrossRef]
  31. Olpak, Y.Z.; Kiliç Çakmak, E. Examining the Reliability and Validity of a Turkish Version of the Community of Inquiry Survey. Online Learn. J. 2018, 22, 147–161. [Google Scholar] [CrossRef]
  32. Ballesteros, B.V.; Gil-Jaurena, I.; Encina, J.M. Validation of the Spanish version of the “Community of Inquiry” survey. Rev. Educ. A Distancia 2019, 1, 1–26. [Google Scholar] [CrossRef]
  33. Norz, L.M.; Hackl, W.O.; Benning, N.; Knaup-Gregori, P.; Ammenwerth, E. Development and Validation of the German Version of the Community of Inquiry Survey. Online Learn. J. 2023, 27, 468–484. [Google Scholar] [CrossRef]
  34. Nizzolino, S.; Canals, A.; Temperini, M. Validation of the Italian Version of the Community of Inquiry Survey. Educ. Sci. 2023, 13, 1200. [Google Scholar] [CrossRef]
  35. Stenbom, S.; Hrastinski, S.; Cleveland-Innes, M. Emotional presence in a relationship of inquiry: The case of one-to-one online math coaching. Online Learn. J. 2016, 20, 41–56. [Google Scholar] [CrossRef]
  36. Caskurlu, S. Confirming the subdimensions of teaching, social, and cognitive presences: A construct validity study. Internet High. Educ. 2018, 39, 1–12. [Google Scholar] [CrossRef]
  37. Arbaugh, J.B.; Bangert, A.; Cleveland-Innes, M. Subject matter effects and the Community of Inquiry (CoI) framework: An exploratory study. Internet High. Educ. 2010, 13, 37–44. [Google Scholar] [CrossRef]
  38. Lim, J.; Richardson, J.C. Considering how disciplinary differences matter for successful online learning through the Community of Inquiry lens. Comp. Educ. 2022, 187, 104551. [Google Scholar] [CrossRef]
  39. Honig, C.A.; Salmon, D. Learner Presence Matters: A Learner-Centered Exploration into the Community of Inquiry Framework. Online Learn. 2021, 25, 95–119. [Google Scholar] [CrossRef]
  40. Redecker, C.; Punie, Y.; European Commission. European Framework for the Digital Competence of Educators: DigCompEdu; Publications Office of the European Union: Luxembourg, 2017; p. 93. [Google Scholar] [CrossRef]
  41. Sala, A.; Punie, Y.; Garkov, V.; Cabrera, M.; European Commission; Joint Research Centre. LifeComp: The European Framework for Personal, Social and Learning to Learn Key Competence; Publications Office of the European Union: Luxembourg, 2020. [Google Scholar] [CrossRef]
  42. Pintrich, P.R.; De Groot, E.V. Motivational and Self-Regulated Learning Components of Classroom Academic Performance. J. Educ. Psychol. 1990, 82, 33–40. [Google Scholar] [CrossRef]
  43. Zimmerman, B.J. Self-Efficacy: An Essential Motive to Learn. Contemp. Educ. Psychol. 2000, 25, 82–91. [Google Scholar] [CrossRef] [PubMed]
  44. Shea, P.; Hayes, S.; Uzuner-Smith, S.; Gozza-Cohen, M.; Vickers, J.; Bidjerano, T. Reconceptualizing the community of inquiry framework: An exploratory analysis. Internet High. Educ. 2014, 23, 9–17. [Google Scholar] [CrossRef]
  45. Recommendation of the European Parliament and of the Council of 18 December 2006 on key competences for lifelong learning (2006/962/EC). Off. J. Eur. Union 2006, 394, 10–18.
  46. Espada, M.; Navia, J.A.; Rocu, P.; Gómez-López, M. Development of the Learning to Learn Competence in the University Context: Flipped Classroom or Traditional Method? Res. Learn. Technol. 2020, 28, 2020. [Google Scholar] [CrossRef]
  47. Zydziunaite, V.; Kaminskiene, L.; Jurgile, V.; Jezukeviciene, E. ‘Learning to Learn’ Characteristics in Educational Interactions between Teacher and Student in the Classroom. Eur. J. Contemp. Educ. 2022, 11, 213–240. [Google Scholar] [CrossRef]
  48. Caena, F.; Stringher, C. Towards a new conceptualization of Learning to Learn [Hacia una nueva conceptualización del Aprender a Aprender]. Aula Abierta 2020, 49, 199–216. [Google Scholar] [CrossRef]
  49. Pérez-Pérez, C.; García García, F.J.; Verdera, V.V.; Félix, E.G.; Soto, V.R. The “Learning to learn” competence in bachelor’s degrees [La competencia “aprender a aprender” en los grados universitarios]. Aula Abierta 2020, 49, 309–323. [Google Scholar] [CrossRef]
  50. Hilpert, J.C.; Greene, J.A.; Bernacki, M. Leveraging complexity frameworks to refine theories of engagement: Advancing self-regulated learning in the age of artificial intelligence. Br. J. Educ. Technol. 2023, 54, 1204–1221. [Google Scholar] [CrossRef]
  51. Blaschke, L.M. Heutagogy and lifelong learning: A review of heutagogical practice and self-determined learning. Int. Rev. Res. Open Distance Learn. 2012, 13, 56–71. [Google Scholar] [CrossRef]
  52. Moore, R.L. Developing lifelong learning with heutagogy: Contexts, critiques, and challenges. Distance Educ. 2020, 41, 381–401. [Google Scholar] [CrossRef]
  53. Winarno, A.; Naim, M.; Hia, A.K.; Hermana, D. Self-directed Capability Learning, Heutagogy and Productivity of Retirees Moderating by ICT. In Proceedings of the 2021 2nd International Conference on ICT for Rural Development (IC-ICTRuDev), Jogjakarta, Indonesia, 27–28 October 2021. [Google Scholar] [CrossRef]
  54. Blaschke, L.M.; Hase, S. Heutagogy and digital media networks. Pac. J. Technol. Enhanc. Learn. 2019, 1, 1–14. [Google Scholar] [CrossRef]
  55. McKerlich, R.; Anderson, T. Community of Inquiry and Learning in Immersive Environments. Online Learn. Consort. 2019, 11. [Google Scholar] [CrossRef]
  56. Blaschke, L.M.; Marín, V.I. Applications of heutagogy in the educational use of e-portfolios|Aplicaciones de la heutagogía en el uso educativo de e-portfolios. Rev. De Educ. A Distancia 2020, 20. [Google Scholar] [CrossRef]
  57. Mulrennan, D. Mobile social media and the news: Where heutagogy enables journalism education. J. Mass Commun. Educ. 2018, 73, 322–333. [Google Scholar] [CrossRef]
  58. Nizzolino, S. Planning a three-year research based on the Community of Inquiry theory: A framework to monitor the learning of English as a second language in EU academic environments. In Handbook of Research on Technologies and Systems for E-Collaboration during Global Crises; Zhao, J., Kumar, V.V., Eds.; IGI Global: Hershey, PA, USA, 2022; pp. 234–260. [Google Scholar] [CrossRef]
  59. Nizzolino, S.; Canals, A. Pandemic and Post-Pandemic Effects on University Students’ Behavioral Traits: How Community of Inquiry Can Support Instructional Design during Times of Changing Cognitive Habits. Int. J. e-Collab. 2023, 19, 1–19. [Google Scholar] [CrossRef]
  60. Bai, X.; Gu, X.; Guo, R. More factors, better understanding: Model verification and construct validity study on the community of inquiry in MOOC. Educ. Inf. Technol. 2023, 28, 10483–10506. [Google Scholar] [CrossRef] [PubMed]
  61. Heilporn, G.; Lakhal, S. Investigating the reliability and validity of the community of inquiry framework: An analysis of categories within each presence. Comput. Educ. 2020, 145, 103712. [Google Scholar] [CrossRef]
  62. Kovanović, V.; Joksimović, S.; Poquet, O.; Hennis, T.; Čukić, I.; de Vries, P.; Hatala, M.; Dawson, S.; Siemens, G.; Gašević, D. Exploring communities of inquiry in Massive Open Online Courses. Comput. Educ. 2018, 119, 44–58. [Google Scholar] [CrossRef]
  63. Wang, Y.; Zhao, L.; Shen, S.; Chen, W. Constructing a Teaching Presence Measurement Framework Based on the Community of Inquiry Theory. Front. Psychol. 2021, 12, 694386. [Google Scholar] [CrossRef] [PubMed]
  64. Teng, Y.; Yin, Z.; Wang, X.; Yang, H. Investigating relationships between community of inquiry perceptions and attitudes towards reading circles in Chinese blended EFL learning. Int. J. Educ. Technol. High. Educ. 2024, 21, 6. [Google Scholar] [CrossRef]
  65. Chen, R.H. Effects of Deliberate Practice on Blended Learning Sustainability: A Community of Inquiry Perspective. Sustainability 2022, 14, 1785. [Google Scholar] [CrossRef]
  66. Borup, J.; Shin, J.K.; Powell, M.G.; Evmenova, A.S.; Kim, W. Revising and Validating the Community of Inquiry Instrument for MOOCs and other Global Online Courses. Int. Rev. Res. Open Distrib. Learn. 2022, 23, 82–103. [Google Scholar] [CrossRef]
  67. Lowenthal, P.R.; Dunlap, J.C. Problems Measuring Communities of Inquiry: An Investigation of the Community of Inquiry Questionnaire Limitations. October 2014. Available online: https://www.researchgate.net/publication/267027589_Problems_Measuring_Communities_of_Inquiry_An_Investigation_of_the_Community_of_Inquiry_Questionnaire_Limitations (accessed on 14 April 2024).
  68. Garrison, D.R. The Community of Inquiry: Other Presences? 2017. Available online: https://www.thecommunityofinquiry.org/editorial7 (accessed on 10 April 2024).
Figure 1. Original Community of Inquiry framework adapted from Garrison et al. [1].
Table 1. The theoretical structure of the Community of Inquiry survey on presences > subscales > survey items.
Presence             Subscales                        Items
Teaching presence    1. Design and Organization       1–4
                     2. Facilitation of Discourse     5–10
                     3. Direct Instruction            11–13
Social presence      4. Affective Expression          14–16
                     5. Open Communication            17–19
                     6. Group Cohesion                20–22
Cognitive presence   7. Triggering Event              23–25
                     8. Exploration                   26–28
                     9. Integration                   29–31
                     10. Resolution                   32–34
Table 2. Expanded Community of Inquiry survey based on 4 presences >14 subdimensions > 46 items.
Presence             Subscales                        Items
Teaching presence    1. Design and Organization       1–4
                     2. Facilitation of Discourse     5–10
                     3. Direct Instruction            11–13
Social presence      4. Affective Expression          14–16
                     5. Open Communication            17–19
                     6. Group Cohesion                20–22
Cognitive presence   7. Triggering Event              23–25
                     8. Exploration                   26–28
                     9. Integration                   29–31
                     10. Resolution                   32–34
Learning presence    11. Self-Regulated Learning      35–37
                     12. Self-Awareness               38–40
                     13. Metacognitive Skills         41–43
                     14. Adaptability                 44–46
Table 3. Factorial correlations matrix adapted from Ballesteros et al. [32].
Factor   TP      SP      CP
1        1.000   0.299   0.629
2        0.299   1.000   0.399
3        0.629   0.399   1.000
Table 4. Factor loading of the teaching presence items.
Item   Factor 1   Factor 2   Factor 3   Uniqueness
v1     1.080                            0.071
v3     0.900                            0.072
v6     0.676                            0.409
v8     0.661                            0.510
v12    0.635                            0.561
v2     0.588                            0.413
v9     0.504                 0.413      0.341
v5                0.977                 0.218
v13               0.845                 0.222
v10               0.723                 0.374
v4                0.492                 0.727
v11                          0.726      0.447
v7                                      0.694
Table 5. Factor loadings’ adjustment of the teaching presence items.
Item   Factor 1   Factor 2   Uniqueness
v1     0.855                 0.213
v8     0.846                 0.434
v6     0.764                 0.387
v2     0.755                 0.413
v12    0.712                 0.511
v9     0.699                 0.457
v3     0.559                 0.283
v5                0.911      0.307
v13               0.859      0.208
v10               0.729      0.481
v4                0.488      0.737
Table 6. Factor loadings of the social presence items.
Item   Factor 1   Factor 2   Factor 3   Uniqueness
v21    0.922                            0.168
v14    0.851                            0.331
v15    0.739                            0.467
v16    0.569                 0.409      0.088
v19               1.016                 0.148
v20               0.766                 0.422
v18               0.568                 0.610
v22                          1.138      0.051
v17                                     0.627
Table 7. Factor loadings’ adjustment of the social presence items.
Item   Factor 1   Factor 2   Uniqueness
v14    0.885                 0.242
v21    0.814                 0.275
v15    0.790                 0.392
v20               0.795      0.297
v19               0.777      0.422
v18               0.668      0.562
Table 8. Factor loadings of the cognitive presence items.
Item   Factor 1   Factor 2   Factor 3   Uniqueness
v32    0.976                            0.061
v34    0.905                            0.197
v29    0.838                            0.321
v33    0.687                            0.242
v31    0.602                            0.558
v30    0.449                            0.543
v26               0.902                 0.315
v24               0.891                 0.084
v23               0.742                 0.323
v25               0.625                 0.441
v28                          0.908      0.187
v27                          0.894      0.165
Table 9. Factor loadings of the learning presence items.
Factor 1 items (loading; uniqueness):
35. I analyzed the learning objectives during the course and adjusted them to fit my preferred way of learning (0.965; 0.258)
38. I regularly reflected on my learning habits during the course (0.840; 0.435)
44. I was open to trying new resources, study approaches, or tools when faced with difficult concepts or tasks (0.717; 0.346)
39. I valued feedback from others to gain insights into my learning strategies and areas for improvement (0.552; 0.735)
43. I was able to reflect on my own contributions and learning progress in the collaborative setting (0.526; 0.496)
45. I was comfortable with engaging in trial-and-error learning and adapting to unexpected challenges (0.419; 0.619)
36. I actively chose resources, tools, or devices that best supported my learning style during the course (0.416; 0.831)

Factor 2 items (loading; uniqueness):
41. I planned my timetable, strategies, and resources before engaging in any study session (0.829; 0.444)
46. I used my skills in different situations that might change, going beyond what was already set (0.746; 0.422)
40. I aimed to develop consistent study routines in study-friendly environments (0.673; 0.650)
42. I could acknowledge if I was taking an effective or ineffective learning approach to a certain topic and changed accordingly (0.463; 0.477)
37. I could independently approach difficult concepts or tasks and find ways to understand or solve them (0.451; 0.500)
Note. Applied rotation method is promax.
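In tables of this kind, loadings below a display cutoff are suppressed, and each item is assigned to the factor on which it has its largest absolute loading. A minimal sketch of that assignment convention follows; the 0.4 cutoff is an assumption for illustration only, since the exact display threshold is not stated here, and the second loading in the first example call is invented.

```python
def primary_factor(loadings, cutoff=0.4):
    """Return the index of the factor with the largest absolute loading,
    or None if no loading reaches the display cutoff."""
    best = max(range(len(loadings)), key=lambda j: abs(loadings[j]))
    return best if abs(loadings[best]) >= cutoff else None

# Item 35 above loads 0.965 on Factor 1 (its other loading is hypothetical):
primary_factor([0.965, 0.12])   # index 0, i.e., Factor 1
primary_factor([0.30, 0.35])    # no loading reaches the cutoff
```

Under this rule an item can still cross-load (as v9 and v16 do in Tables 4 and 6); the secondary loading is simply also reported when it clears the cutoff.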
Table 10. Factor loadings of the new whole survey based on 46 items.

| Item | Factor | Loading | Uniqueness |
|------|--------|---------|------------|
| 34   | 1      | 1.205   | 0.112      |
| 33   | 1      | 0.941   | 0.181      |
| 32   | 1      | 0.702   | 0.315      |
| 29   | 1      | 0.452 (secondary 0.438) | 0.296 |
| 31   | 1      | 0.424   | 0.262      |
| 21   | 2      | 1.010   | 0.069      |
| 20   | 2      | 0.888   | 0.357      |
| 18   | 2      | 0.611   | 0.200      |
| 19   | 2      | 0.548   | 0.284      |
| 42   | 2      | 0.439   | 0.309      |
| 3    | 3      | 0.863   | 0.269      |
| 4    | 3      | 0.563   | 0.469      |
| 8    | 3      | 0.556   | 0.212      |
| 9    | 3      | 0.453   | 0.369      |
| 24   | 4      | 0.818   | 0.280      |
| 23   | 4      | 0.766   | 0.194      |
| 30   | 4      | 0.413 (secondary 0.403) | 0.262 |
| 38   | 5      | 0.831   | 0.117      |
| 39   | 5      | 0.635   | 0.378      |
| 26   | 5      | 0.627   | 0.307      |
| 35   | 6      | 0.745   | 0.130      |
| 17   | 6      | 0.554   | 0.321      |
| 43   | 6      | 0.435   | 0.334      |
| 13   | 7      | 0.849   | 0.359      |
| 27   | 7      | 0.638   | 0.278      |
| 12   | 7      | 0.501   | 0.365      |
| 2    | 7      | 0.433   | 0.324      |
| 6    | 8      | 0.769   | 0.369      |
| 5    | 8      | 0.611   | 0.375      |
| 7    | 8      | 0.460   | 0.374      |
| 41   | 9      | 0.881   | 0.239      |
| 40   | 9      | 0.791   | 0.442      |
| 46   | 9      | 0.683   | 0.220      |
| 28   | 10     | 0.920   | 0.179      |
| 15   | 11     | 0.870   | 0.265      |
| 10   | 11     | 0.687   | 0.310      |
| 14   | 11     | 0.634   | 0.254      |
| 36   | 12     | 0.935   | 0.182      |
| 37   | —      | —       | 0.238      |
| 44   | —      | —       | 0.305      |
| 45   | —      | —       | 0.455      |
| 1    | —      | —       | 0.368      |
| 11   | —      | —       | 0.393      |
| 25   | —      | —       | 0.369      |
| 16   | —      | —       | 0.421      |
| 22   | —      | —       | 0.276      |

Note. Applied rotation method is promax. Items listed without a loading did not reach the display threshold on any factor.
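The uniqueness values reported throughout these tables equal one minus the item's communality, i.e., the share of item variance the factor solution does not reproduce. As a rough, self-contained illustration of that relationship — not the authors' actual pipeline, which ran a promax-rotated EFA on the survey responses — the sketch below extracts two factors from synthetic data via an eigendecomposition of the correlation matrix; all loading values and sample sizes here are invented.

```python
import numpy as np

rng = np.random.default_rng(0)

# Invented 2-factor structure: 6 observed items, 2 latent factors.
L_true = np.array([[0.9, 0.0], [0.8, 0.0], [0.7, 0.1],
                   [0.0, 0.9], [0.1, 0.8], [0.0, 0.7]])
scores = rng.standard_normal((2000, 2))
# Per-item noise scaled so each simulated item has unit variance.
noise = rng.standard_normal((2000, 6)) * np.sqrt(1.0 - (L_true ** 2).sum(axis=1))
X = scores @ L_true.T + noise

# Principal-component-style extraction from the correlation matrix.
R = np.corrcoef(X, rowvar=False)
vals, vecs = np.linalg.eigh(R)            # eigenvalues in ascending order
vals, vecs = vals[::-1], vecs[:, ::-1]    # largest first
k = 2
loadings = vecs[:, :k] * np.sqrt(vals[:k])

communality = (loadings ** 2).sum(axis=1)
uniqueness = 1.0 - communality            # the "Uniqueness" column in the tables
```

Items simulated with strong loadings (the first and fourth rows of `L_true`) come out with low uniqueness, mirroring how v32 (loading 0.976) shows a uniqueness of only 0.061 in Table 8. A promax rotation would additionally allow the factors to correlate, which is why the reported pattern loadings can exceed 1 (e.g., v22 in Table 6).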

Share and Cite

MDPI and ACS Style

Nizzolino, S.; Canals, A. Measuring Learning Presence as Fourth Dimension in the Community of Inquiry Survey: Defining Self-Regulation Items and Subscales through a Heutagogical Approach. Educ. Sci. 2024, 14, 862. https://doi.org/10.3390/educsci14080862
