Article

Assessing China’s Sustainable Development of ICT in Education: A Delphi Approach

Danxia Xing, Teng Zhao and Chuanbing Xie
1 Zhejiang Academy of Higher Education, Hangzhou Dianzi University, Hangzhou 310018, China
2 Center for Research in Science, Technology, and Innovation Education, Hangzhou Dianzi University, Hangzhou 310018, China
3 Jiangsu Academy of Educational Sciences, Nanjing 210013, China
* Author to whom correspondence should be addressed.
Sustainability 2024, 16(12), 5024; https://doi.org/10.3390/su16125024
Submission received: 8 April 2024 / Revised: 5 June 2024 / Accepted: 11 June 2024 / Published: 13 June 2024
(This article belongs to the Section Sustainable Education and Approaches)

Abstract

The evaluation of Information and Communication Technology (ICT) in Education (ICTE) has become a crucial aspect of educational management. While previous attempts have been made to develop evaluation frameworks for ICTE, many existing frameworks fail to include information from multiple sources and are thus not sufficiently accurate. This study aims to address this gap by identifying core dimensions and indicators from various stakeholders in ICTE practices. Accordingly, a set of guidelines for a more comprehensive evaluation of ICTE development is generated. By applying these guidelines, a Comprehensive Evaluation Framework for ICT in Education (CEF-ICTE) is developed and validated using the Delphi methodology. The results reveal that the most pertinent dimensions for ICTE evaluation are ICT infrastructure and digital resources, followed by ICT usage, personnel support, and ICT management. Key indicators identified include the average frequency of online services utilized by students per week and the percentage of schools implementing basic data applications for management purposes. These findings underscore the importance of prioritizing certain indicators in evaluating ICTE. The present study offers a valuable tool for comprehensively assessing ICTE development and may provide policymakers with essential support for setting priorities and allocating educational resources where they are most urgently needed.

1. Introduction

Due to the global spread of COVID-19, many countries have been prompted to adopt Information and Communication Technology (ICT) to facilitate educational opportunities amidst pandemic-related lockdowns, primarily through online and distance learning [1]. In this context, ICT in Education (ICTE) plays an increasingly crucial role in supporting access to studies [2] and advancing the overall quality and balanced development of education across various regions [3]. Despite advancements in leveraging ICT to enhance educational access, several systemic challenges hinder the effective implementation of distance learning for all students [4]. Persistent challenges include teachers lacking access to computers for instruction, students facing barriers to online educational resources, and some teachers lacking proficiency in ICT for facilitating online coursework [5]. Without adequate ICT devices, digital resources, and teacher training, students are simply unable to fully engage in distance education to continue their learning journeys [6]. In regions where this is the case, existing measures of ICT availability fall short in capturing necessary requirements such as internet connectivity and computer access for instructional purposes [7]. Hence, more comprehensive and accurate evaluations of ICTE are needed to monitor its development and detect regional or national trends.
The evaluation of ICTE is recognized as a best practice in educational management. It is also an effective approach to promoting the broader implementation of ICTE. Such evaluations have already garnered the attention of governments and education authorities worldwide, providing essential insights for optimizing resource allocation and enhancing educational decision-making [8]. Various evaluation initiatives have been undertaken to monitor ICTE implementation; for instance, the European Commission conducted the 2nd Survey of Schools in 2015 [9] to track ICT development across Europe. An annual survey on the development of ICT in K-12 schools was carried out by the Educational Informatization Strategy Research Base (EISR) under the Ministry of Education (MOE) of the People’s Republic of China [10], identifying various problems in ICTE development across the country. Beyond its current state of implementation, researchers have also explored the development of ICTE through learning- and teaching-specific indicators [11] and in middle and high school programs [12].
Across various evaluation practices and studies, a consensus is emerging regarding the indicators or frameworks utilized to measure ICTE implementation. Indicators such as schools’ ICT vision and policy [13] and ICT access for instructional purposes [2] have been adopted to evaluate ICT implementation in schools. Additionally, scholars have proposed frameworks for assessing ICT implementation in K-12 schools [14] and established internationally agreed-upon evaluation criteria for ICTE [15]. However, previous studies have tended to focus on individual aspects rather than comprehensive measurements of ICTE development across entire educational processes within certain regions.
Some countries, including China, regard regions as units in promoting ICTE development and increasing the use of ICT to balance educational resources. “Regional ICTE” refers to the specific applications of ICT in various regions within a nation under a unified plan, which accelerates the modernization of regional education. The situation of regional ICTE development must be periodically evaluated to understand the current status of regional ICTE and to guide progress toward optimal outcomes, without wasting time, efforts, or resources. Existing school-level evaluation frameworks lack the necessary detail to provide appropriate information to governments or education authorities in terms of improving access to and utilization of ICT. There is an urgent need to comprehensively evaluate the development of regional ICTE to ensure effectiveness and precision in decision-making.
The objective of conducting a comprehensive evaluation of ICTE is to gain both a broad and an in-depth understanding of the application of regional ICTE. From a broad perspective, this encompasses evaluation objectives involving multiple participants in ICT teaching activities—students, teachers, principals, and ICT management department administrators. It also covers various aspects of ICTE, such as the conceptual components of ICT for Development (ICT4D) and their interrelationships, including dimensions of development, perspectives of development, conceptions of artefacts, theories of change, and the usage of ICT in teaching and infrastructure. The in-depth perspective involves examining the availability of digital resources, internet access, training to enhance teachers’ ICT integration skills, and training for principals to incorporate ICT into their management practices [15]. Furthermore, the presence of ICT assistants is expected to facilitate ICT usage in schools, ensuring access to digital resources and ICT application among teachers and students.
This holistic assessment approach can be realized by including multiple stakeholders involved in ICTE activities and conducting a thorough analysis of the gathered information, including ICT usage in both teaching and learning. Collecting data from school-based surveys can provide necessary informational support for education policies and procedures, particularly in developing countries. Capturing a diverse range of variables can further provide a more realistic understanding of the availability and utilization of ICT by both students and teachers.
There is a lack of research regarding such comprehensive measurements of regional ICTE development. This limitation hinders the acquisition of information necessary for comparative analysis or for fully revealing the development status of regional ICTE. There is a pressing need for a universally applicable set of tools to comprehensively measure the development of ICTE. To fill this gap, this paper proposes a Comprehensive Evaluation Framework for ICTE (CEF-ICTE) tailored to the regional context. It offers a framework for thoroughly characterizing ICTE development and securing more accurate data for scientifically informed education policies. Essentially, we seek to answer the following research question: what evaluation framework can be used to assess the development of regional ICTE?

2. Literature Review

2.1. Framework of ICTE Evaluation

A sound theoretical framework is the foundation for conducting practical evaluations, clearly delineating the objectives to be measured along with their sub-components [16]. Several countries have actively researched ICTE evaluation frameworks. For example, the CEO Forum on Education and Technology in the United States developed the School Technology and Readiness (STaR) evaluation framework [17] for assessing the status and usage of ICT in schools across four dimensions. The British Educational Communications and Technology Agency (BECTA) established the Self-Review Framework (SRF) to evaluate the development of ICT in schools in terms of leadership, management, planning, learning, and ICT competency [18]. The Korea Education and Research Information Service (KERIS) also developed an ICTE evaluation framework that covers dimensions including personal resource inputs, physical resource inputs, and utilization [19]. These evaluation frameworks were used for self-assessments of schools’ ICT statuses and became benchmarks for measuring ICT levels more broadly.
Numerous studies have endeavored to construct ICTE evaluation frameworks. For instance, one study constructed an evaluation framework based on ICTE implementation experience in K-12 education, comprising eight dimensions: policy, infrastructure, human resources, curriculum, educational service, educational resources, usage, and equity [20]. Another study developed a monitoring and evaluation framework for the integration of ICT in teaching and learning [21]. A further framework was proposed for evaluating inclusive ICT-based learning technologies for disabled people [22]. Other frameworks include a region-wide monitoring framework for Dutch-speaking schools in Belgium [14], an e-capacity framework for evaluating ICT applications in schools [23], and an ICT competency framework for evaluating Korean elementary and middle schools [24]. The ICT in School Education Maturity Model (ICTE-MM) was established, comprising information criteria, ICT resources, and leverage domains [25]. From the perspective of student development, the Extensive Digital Competence (EDC) model was developed to integrate factors related to pupils’ ICT competencies [26]. Scholars have also outlined factors influencing students’ digital skills levels and formulated a school analytic model accordingly [27].

2.2. Indicators of ICTE Evaluation

Several international institutions have developed indicators for ICTE and evaluated its application status using various methodologies. For example, the Partnership on Measuring ICT for Development published its Core ICT Indicators 2010 report, which listed core indicators such as the percentage of students in a school with internet access [28]. The World Bank evaluated ICTE applications in developing countries using indicators like teaching methods and ICT content [29]. The United Nations Educational, Scientific and Cultural Organization (UNESCO) issued indicators for ICTE, such as the percentage of schools with computer-assisted instruction [30]. Other international organizations, like the European Commission, have also developed ICTE indicators, including the percentage of teachers who used ICT for lesson preparation in the past year, which was used to benchmark progress in ICT integration in schools [9].
Many countries have researched ICTE evaluation indicators. KERIS issued several ICTE indicators, including personal resource inputs [19]; Canada evaluates ICTE application based on indicators like the percentage of 15-year-old students who reported using computers at home and at school [31]. Australia developed ICTE indicators, such as students’ access to in-school ICT, to monitor progress towards the National Goals for Schooling [32]. One study proposed evaluating ICT use in schools based on the percentage of school computers [33], while another considered multimedia classroom usage as a measure of ICT application [34]. Another study proposed the percentage of professional informatics (computer) teachers as an indicator of ICTE development [25].
In summary, existing literature on ICTE evaluation varies by country and/or researcher, with different frameworks and/or indicators. There is a lack of universally agreed-upon evaluation tools for comprehensively evaluating ICTE, especially at the regional level.

3. Materials and Methods

3.1. Delphi Method

The Delphi method establishes a communication process for a group of experts to address complex problems. It involves gathering information from participants to facilitate planning, decision-making, and problem-solving based on evaluation information, thereby maximizing opportunities for individual contributions to knowledge and fostering the collective development of professional skills. According to this methodology, there is no direct face-to-face interaction among team members; instead, individuals provide feedback anonymously to ensure that responses are not influenced by group dynamics [35]. The Delphi method can be used not only to collect expert opinions but also to seek consensus among different experts with varying viewpoints. Throughout the process, it is essential to calculate the consistency rate and scoring stability among these experts [36].
The classical Delphi method typically involves three steps: (1) identifying the research subject, specifying the research question, establishing a rudimentary conceptual model, and designing an appropriate questionnaire; (2) selecting a panel of participating experts; and (3) conducting a survey to collect experts’ opinions, often spanning two or more rounds.
To gather sufficient information for constructing the indicator pool for comprehensive ICTE evaluation, a literature review, an analysis of suggestions regarding the CEF-ICTE, and interviews were conducted. CEF-ICTE-related suggestions, foundational to this study, were collected through Tencent Meeting—a widely used video conferencing platform—and email. Experts’ response rates decrease as the Delphi process lengthens [37], and two rounds of the Delphi method have been shown to yield the most accurate possible answers [38]. Hence, a two-round Delphi method was implemented in this study. A flowchart of the process is shown in Figure 1.
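To make the two-round mechanics concrete before detailing the procedures, the following sketch outlines the survey loop in Python. All names are illustrative rather than taken from the study: collect_ratings stands in for the anonymous questionnaire step, and screen for the selection criteria described in Section 3.2.4.

```python
# Illustrative sketch of a two-round Delphi loop (names are our own).
def run_delphi(indicators, collect_ratings, screen, rounds=2):
    pool = list(indicators)  # indicators still under discussion
    kept = []
    for rnd in range(1, rounds + 1):
        carry = []
        for ind in pool:
            ratings = collect_ratings(ind, rnd)  # anonymous 1-5 Likert scores
            decision = screen(ratings, rnd)      # 'keep' / 'exclude' / 'reassess'
            if decision == "keep":
                kept.append(ind)
            elif decision == "reassess" and rnd < rounds:
                carry.append(ind)                # revisited in the next round
        pool = carry                             # consensus items drop out
    return kept

# Toy run: both indicators are rated highly and kept in Round 1.
print(run_delphi(["1.1", "1.2"],
                 lambda ind, rnd: [5, 4, 5, 4],
                 lambda ratings, rnd: "keep"))   # -> ['1.1', '1.2']
```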

3.2. Delphi Survey

3.2.1. Survey Procedures

First, survey items in the questionnaire were designed based on China’s ICTE-related policy documents as well as the existing literature on evaluating the status quo of ICTE development, yielding an initial set of evaluation indicators. Prior to the first round of the Delphi survey, a pre-survey was conducted to refine the research plan and questionnaire, ensuring their relevance, practicality, and readability.
In Round 1, materials were distributed to the selected participants, including an invitation detailing the purpose of this study, a questionnaire, and a table outlining each indicator in detail. Participating experts were given a two-week period to respond upon distribution of the questionnaires and were reminded via e-mail three days before the deadline. Experts were tasked with assessing the importance and feasibility of each dimension and indicator on a five-point Likert scale, with 1 indicating “not important/feasible” and 5 indicating “very important/feasible”. The purpose of the Round 1 assessment was to gather preliminary opinions from experts on the designed dimensions and indicators and to form the basis for the Round 2 assessment. Statistical analysis was completed within two weeks after the end of Round 1, and the questionnaire was modified and truncated as needed based on the results.
Moving to Round 2, the revised questionnaire and the detailed Round 1 evaluation report were distributed to the same pool of experts, and the reassessment process was the same as in Round 1. It is worth noting that, in this round, only the indicators requiring further reassessment, together with the new indicators suggested in Round 1, were retained; indicators on which experts had already reached consensus in Round 1 were not carried into Round 2. Both rounds of the Delphi survey spanned three months, including time for data processing and information generation.

3.2.2. Expert Recruitments

The choice of participants as experts is a critical phase of the Delphi method, given that the results primarily rely on their opinions [39]. Participants were required to be familiar with the relevant issues and/or topics and to have worked in similar areas for no less than five years [40]. In this study, experts were selected by compiling a list of participants from an ICTE evaluation conference and by reviewing the ICTE evaluation literature to identify individuals and obtain contact information from relevant papers. The Delphi survey allows for flexibility in sample size, typically recruiting between 15 and 50 participants [41]. Experts were selected on the basis of their roles as experienced teachers, school principals responsible for ICT management, experts working in ICTE, and professionals in regional ICTE evaluation and management. On average, participants had around 5–6 years of experience in ICTE, with over 50% possessing eight years or more. These participants were perceived by their colleagues, supervisors, and peers as authoritative figures in their field, possessing high qualifications in ICTE evaluation and offering targeted guidance and informed opinions.
Research assistants sent invitation emails to these experts after conducting background and experience checks online. Because the survey was conducted anonymously and participants were involved in only two rounds of communication, the feedback exchanged among experts in these two rounds was deemed effective. A total of 33 experts received invitations and questionnaires in Round 1, with 30 valid questionnaires collected within the specified time frame, resulting in a high response rate of 90.91%. In Round 2, 30 questionnaires were distributed and 27 were recovered (90.00%), a slightly lower response rate than in Round 1. Additionally, 14 participants provided supplementary comments or suggestions in Round 1 and three participants offered supplementary reviews in Round 2. The participation and response rates for the two rounds are summarized in Table 1.

3.2.3. Statistics

The statistics involved in the present study were generated from the participating experts’ responses. The opinions they expressed were primarily measured through response rates. Expert familiarity was generalized by means and rated as low (score 1.00–2.49), medium (2.50–3.49), or high (3.50–5.00) [36]. The importance and feasibility of each evaluation dimension and indicator were summarized. Regarding frequencies, scores of 4 (important/feasible) and 5 (very important/very feasible) were combined to calculate the percent agreement (PA). The standard deviation (SD) and coefficient of variation (CV) of the scores were calculated to assess the dispersion of expertise; a lower CV indicates that a closer expert consensus has been reached [42].
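As a concrete illustration, the short Python sketch below computes these statistics for one indicator; the ratings shown are hypothetical, and the familiarity bands follow the cut-offs cited above [36].

```python
import statistics

def summarize(scores):
    """Mean, SD, CV (= SD / mean), and percent agreement for 1-5 Likert scores."""
    mean = statistics.mean(scores)
    sd = statistics.stdev(scores)                        # sample standard deviation
    cv = sd / mean                                       # lower CV = closer consensus
    pa = sum(1 for s in scores if s >= 4) / len(scores)  # share rating 4 or 5
    return mean, sd, cv, pa

def familiarity_band(mean_score):
    """Band a mean familiarity score: low (1.00-2.49), medium (2.50-3.49), high (3.50-5.00)."""
    if mean_score <= 2.49:
        return "low"
    return "medium" if mean_score <= 3.49 else "high"

# Hypothetical ratings from 30 experts for one indicator.
ratings = [5, 4, 4, 5, 4, 3, 5, 4, 4, 5] * 3
mean, sd, cv, pa = summarize(ratings)
print(f"mean={mean:.2f}  SD={sd:.2f}  CV={cv:.0%}  PA={pa:.0%}")
print(familiarity_band(mean))  # -> 'high'
```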

3.2.4. Selection Criteria of Indicators in Round 1 and 2

The selection of evaluation indicators was based on the importance of the evaluation objects. The selection criteria applied in Rounds 1 and 2 followed Wu et al. [42] and Zhang and Huang [43]. Given that the primary objective of this study was to establish a CEF-ICTE covering all relevant aspects, priority was given to the importance of indicators over their feasibility [42]. In Round 1, an indicator was excluded if its PA was below 50% or its mean score was less than 3.5 while its CV was lower than 25%. An indicator was directly retained if its PA was equal to or higher than 70% or its mean was greater than 3.5, while its CV was less than 25%. An indicator was reassessed if its PA was below 70% or its CV was larger than 25% [43]. Because this Delphi procedure covered only two rounds, in Round 2, a reassessed indicator was excluded if its PA was less than 50% and its CV was less than 25%; the criteria for retaining a reassessed indicator were the same as those for Round 1.
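The decision rules above can be expressed compactly in code. The sketch below is one possible reading of the stated criteria: the precedence among the rules is our assumption, and because no indicator was excluded outright in Round 1 of this study, the Round 1 exclusion rule is folded into reassessment. CV and PA are expressed as fractions.

```python
# One possible encoding of the screening rules (thresholds from the text;
# rule precedence and the Round 1 simplification are our assumptions).

def screen_round1(mean, cv, pa):
    if pa >= 0.70 and mean >= 3.5 and cv < 0.25:
        return "keep"        # direct retention
    return "reassess"        # PA below 70% or CV above 25%: revisit in Round 2

def screen_round2(mean, cv, pa):
    if pa < 0.50 and cv < 0.25:
        return "exclude"     # low agreement with stable dissent
    if (pa >= 0.70 or mean > 3.5) and cv < 0.25:
        return "keep"        # same retention rule as Round 1
    return "exclude"         # only two rounds: nothing is deferred again

# Spot checks against Table 3:
assert screen_round1(3.57, 0.22, 0.40) == "reassess"  # indicator 1.2, Round 1
assert screen_round2(4.22, 0.18, 0.81) == "keep"      # indicator 1.2, Round 2
assert screen_round2(3.41, 0.23, 0.44) == "exclude"   # indicator 2.3, Round 2
```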

3.3. Framework Construction

This section outlines the development process and structure of the proposed framework. Similar to other frameworks, it is structured as a set of indicators that can be further categorized into dimensions based on their themes. Both dimensions and indicators were identified through a literature review and the extensive experience of experts in the field of ICTE evaluation. Five dimensions were determined based on previous studies, emphasizing the importance of retaining those that consistently reappear in the literature, albeit with slight adjustments due to potential differences in classification [24]. For example, while materials are not presented as a separate dimension in some frameworks [25], they were regarded here as a subclass within the ICT infrastructure dimension.
Table 2 provides a summary of existing evaluation frameworks, demonstrating how the present framework draws upon pre-existing research while introducing new elements corresponding to this study’s objectives. While previous studies have primarily explored ICTE development from the perspective of schools, this research focuses specifically on the regional level of ICTE development. Therefore, the proposed framework adopts a regional perspective on existing structures while introducing new elements specific to ICTE implications at the regional level. The framework comprises five dimensions: ICT infrastructure, digital resources, ICT usage, ICT management, and personnel support.
ICT infrastructure is explicitly included due to its fundamental supportive role, encompassing all ICT hardware facilities necessary for internet access and teaching activities [23]. This dimension considers the diversification of ICT hardware supply sources, multimedia classrooms, internet availability, and computer accessibility for students or teachers [33].
Digital resources impact the use of ICT by teachers and students for classroom activities [15]. This dimension is also emphasized in previous studies [38,39], highlighting the significance of digital resource provision and its impact on ICT usage distribution.
The dimension of ICT usage mainly reflects the application of ICT in teaching and learning, considering strategies promoting ICTE integration [20,33]. Similarly, previous studies have summarized various types of ICT use and analyzed factors affecting usage conditions [13,20].
ICT management is an important aspect of ICTE evaluation, as evidenced by existing studies. It facilitates physical connections between ICT and school management and supports the application of ICT in school management. Features like network security systems [49] provide convenience for school managers and generate data supporting decision-making processes. Monitoring systems have been implemented to enhance campus internet security, contributing to the modernization of governing structures and education administration.
Personnel support, as highlighted by previous scholars [20,24], is considered an integral part of the CEF-ICTE in the regional context. What sets this framework apart is its consideration of region-management involvement and its impact on ICTE evaluation. For example, there is a need for more streamlined school systems to promote ICTE development, with considerations on how personnel support may be advantageous given the current development situation [20]. This dimension encompasses participants (teachers or principals) involved in school ICT activities and relevant training.
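For a compact view of the resulting structure, the five dimensions and a sample of their indicators can be written as plain data; the wording below is abridged from Table 3, and the dictionary layout is purely illustrative.

```python
# Abridged CEF-ICTE structure (full framework: 5 dimensions, 27 indicators).
CEF_ICTE = {
    "ICT infrastructure": [
        "Percentage of multimedia classrooms",
        "Number of computers per student",
        "Percentage of schools connected to the Internet",
    ],
    "Digital resources": [
        "Percentage of schools with digital resources for textbooks",
        "Percentage of students with learning cyberspace",
    ],
    "ICT usage": [
        "Average frequency of online service use by students per week",
        "Utilization rate of multimedia classrooms",
    ],
    "ICT management": [
        "Percentage of schools applying basic data in management information",
        "Percentage of schools with campus safety monitoring systems",
    ],
    "Personnel support": [
        "Percentage of schools appointing an ICT director",
        "Percentage of schools with informatics teachers",
    ],
}
assert len(CEF_ICTE) == 5  # the five dimensions validated by the Delphi panel
```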

4. Results

4.1. Experts’ Familiarity

At the outset of the Round 1 Delphi survey, participating experts were queried regarding their familiarity with existing frameworks and indicators of ICTE evaluation. The mean familiarity score for all indicators exceeded 3.5, falling within the high band defined above. This suggests that their perspectives were reliable for the present study.

4.2. Round 1 of Delphi Survey

4.2.1. Identification and Indicator Screening

The screening of indicators was based solely on importance, so only the importance score of each indicator was specified at the initial stage. Participating experts reached a consensus, and no dimensions were deleted in Round 1. They closely agreed on the importance of each dimension, with mean scores exceeding 4 for all five—particularly ICT usage, digital resources, and ICT infrastructure, whose means ranged from 4.57 to 4.87—and with CVs below 25% across the five dimensions, indicating consensus among participants. Two of the five dimensions reached a PA of 100%. Although the mean score of ICT management was 4.37 with a comparatively high CV of 14%, its PA exceeded 70%, so it was retained as a primary dimension, as shown in Table 3.
Under Dimension (1), “ICT infrastructure”, all indicators exhibited high consistency in terms of importance (mean ≥ 3.5). However, “1.2 Percentage of multimedia classrooms with interactive electronic whiteboards” did not meet the direct-retention criteria, as its PA was less than 70% while its CV was below 25%, and it was therefore reassessed in Round 2. In Dimension (2), “Digital resources”, the indicator “2.3 Percentage of schools whose cyberspace is supported by the national or provincial platform” exhibited weaker consistency than the other indicators, with a PA below 70% and a CV below 25%; it was likewise set aside for reassessment in Round 2. Indicators of Dimension (3), “ICT usage”, demonstrated majority consensus, as evidenced by high mean scores (≥3.5), low CVs (≤25%), and high PAs (>70%). Only one indicator, “3.6 Utilization rate of multimedia classrooms”, received a comparatively low PA of 87% (still above 70%), though its mean importance score was higher than 4 and its CV was below 25%. All indicators in this dimension were retained.
For Dimension (4), “ICT management”, only one indicator, “4.4 Percentage of schools with public information release platforms”, failed to meet the inclusion criteria and was reassessed in Round 2. Experts also suggested changes to the description of the indicator “4.6 Percentage of schools with security monitoring system to achieve full coverage of the campus”. Regarding the appraisal of Dimension (5), “Personnel support”, three indicators had PAs less than 70% and CVs less than 25%: “5.2 Percentage of teachers who participated in ICT training at the school level”, “5.5 Percentage of schools with informatics teachers”, and “5.6 Percentage of schools with ICT assistants”. These three indicators were therefore further discussed in Round 2, as shown in Table 3.

4.2.2. Amendments in Response to Comments

Experts’ opinions were gathered in an open-ended manner. Upon reviewing experts’ feedback, two initially independent indicators (Indicator 1.1 and Indicator 1.2) were amalgamated into one. Two experts noted that individual indicators were no longer sufficient to represent ICTE development, prompting the need for more generalized indicators. The wording of two indicators was refined as well: “1.5 Number of computers per teacher” was adjusted to “1.5 Number of information-based teaching terminals used by teachers in school teaching”, and “4.6 Percentage of schools with security monitoring system to achieve full coverage of the campus” was revised to “4.6 Percentage of schools with security monitoring system to achieve full coverage of the campus (gate, teaching building, office area)”. One indicator was added to the same dimension: “1.3 Percentage of schools with Innovation Lab (such as 3D, etc.)”.
Given that more than 80% of the indicators and dimensions received high importance scores with clear agreement among experts, a concise evaluation questionnaire was administered in Round 2 to reduce the burden on participants. This questionnaire included only the modified and newly added indicators that had sparked debate.

4.3. Round 2 of Delphi Survey

Seven indicators were reassessed at the conclusion of Round 2, including the newly added item “1.3 Percentage of schools with Innovation Lab (such as 3D, etc.)”. Four of them met the screening criteria and were included in the CEF-ICTE. Three indicators, “2.3 Percentage of schools whose cyberspace is supported by the national or provincial platform”, “5.2 Percentage of teachers who participated in ICT training at the school level”, and “5.6 Percentage of schools with ICT assistants”, were removed due to PAs less than 50% and CVs less than 25%. Unlike in Round 1, “4.4 Percentage of schools with public information release platforms” and “5.5 Percentage of schools with informatics teachers” met the inclusion criteria. After screening and revision, the final version of the CEF-ICTE comprised five dimensions with 27 indicators.

5. Discussion

The present study contributes an instrument designed to comprehensively measure regional ICTE development. Utilizing the Delphi method, experts’ opinions were obtained and analyzed to support this work and expand upon the methodology for ICTE evaluation. A two-round survey was conducted with a panel of experts, focusing on identifying core indicators for the CEF-ICTE. In Round 1, all experts reached a consensus on the five dimensions with most indicators garnering approval. Only one indicator was added, and six indicators failing to meet retention criteria were reconsidered. Seven indicators were under discussion in Round 2, leading to agreement among experts to remove three of them. This process resulted in a valuable evaluation tool for policymakers, aiding in the prioritization and resource allocation of ICT in areas of pressing need.

5.1. Core Indicators for CEF-ICTE

The strategic planning of ICTE necessitates thorough consideration of current and requisite indicators, which may differ at the regional level from those at the school level. In this study, indicators framed around earlier stages of ICTE—such as those concerning multimedia classrooms with interactive electronic whiteboards, public information release platforms, and informatics teachers—received low scores in Round 1 because they no longer aligned with the current status of schools. However, after Round 1, the indicators that were rewritten to reflect the present status of ICTE achieved high scores and stable consensus.
Additionally, certain ICTE evaluation indicators focused on ICTE development can aid in establishing guidelines from a targeted perspective. For example, in China, regional ICTE assessments supported by the MOE and receiving continuous funding from local governments [50,51] are predominantly centered on ICT teaching facilities [52]. However, in the development trajectory of regional ICTE, aspects such as the accessibility of digital resources for teachers or students through online services supported by national/provincial platforms are primarily overseen by education administration sectors. Consequently, incorporating these indicators into regional education informatization evaluations is logical. In this study, new school-related and regional evaluation indicators suited to assessing ICTE received high scores, demonstrating their importance.

5.2. Delphi Method for CEF-ICTE Indicator Identification

The approach utilized in the present study may provide researchers with opportunities to generate new assessment indicators for various stakeholders in ICTE, including teachers, principals, and students. While several previous studies have employed the Delphi method in ICTE-related research, these studies have mainly utilized plans or standards promulgated by nations or organizations [53,54]. To our knowledge, there has been limited research employing the Delphi method for systematic evaluations of ICTE at the regional level. Among the literature containing terms such as “comprehensive evaluation” or “ICT in education” in education technology and ICTE journals, this is the first application of the Delphi method to identify key indicators related to the comprehensiveness of ICTE in the regional context.
Additionally, the Delphi method is commonly utilized across various academic fields to achieve consensus among experts. One advantage the Delphi method demonstrated in this study is that ICTE evaluation indicators were derived from the perspective of diverse participants closely associated with ICTE, and consensus was reached among them. Challenges in ICTE evaluation are not confined to individual users (e.g., singular schools) but also extend to stakeholders like policymakers, principals, teachers, and students across a given region. In this study, the Delphi method, which necessitates agreement among experts from diverse fields, exhibited the potential to develop a consistent and effective solution for implementing comprehensive ICTE evaluations at the regional level. Thus, this study provides insights into how ICTE evaluation studies in the field of education management should establish measurement indicators by incorporating the perspectives of various experts.

5.3. Resolving Variance, Enhancing Standardization and Regulation Requirements

For policymakers, these indicators can serve as valuable tools for enhancing the managerial objectives of ICTE. While national policies and plans do impose requirements for constructing ICTE evaluations in the regional context, some of these policies or plans lack detailed specifications. The findings of the present study may provide indicators with practical insights for enhancing standardization and regulatory requirements suitable for constructing a comprehensive ICTE framework.
Firstly, these indicators can help policymakers identify practical recommendations as essential requirements for ICTE administration. Following the two-round Delphi process, most indicators received high scores with strong consensus. Evidently, the majority sentiment of experts is that the assessment of regional ICTE should differ from that of a typical school. For example, Indicator 1.3 in the ICT infrastructure dimension provides specific details concerning new ICT equipment in education (e.g., 3D Innovation Laboratories), which existing ICTE evaluation frameworks at the school level may not address. Similarly, Indicator 4.6 in the ICT management dimension was revised to include coverage-area details, as experts deemed assessing coverage challenging and considered specifying coverage areas necessary to enhance assessment precision. Additionally, Indicator 1.5 was modified from the number of computers per teacher to the number of information-based teaching terminals used by teachers, acknowledging that some teachers in rural schools frequently utilize computer resources. This adjustment aimed to provide a universal indicator for evaluating teachers’ ICT equipment usage. The indicators established in the present study may offer valuable insights for the standardization and regulation of ICTE.
Secondly, though experts in Round 2 expressed varying opinions on indicators with low consensus but high means, these indicators could be considered for debate when developing new regulatory proposals. For instance, while the PA of Indicator 5.5 was 89%, indicating relatively low consensus compared with indicators whose PAs were above 90%, it garnered a high mean score of 4.04. Some experts suggested that ICTE evaluations should be geographically accessible and proximal to offer easy access for students and their schools [55]. However, other experts argued that not all ICTE evaluations need to involve ICT-using teachers and that it may be impractical for all evaluated schools to be located in urban areas. They proposed categorizing schools by type, distinguishing between “rural” and “urban” based on location. For “urban-type” schools, specialized ICT services and teaching by ICT-trained teachers may be more relevant. These discussions may offer valuable guidance for addressing variability and inconsistencies in comprehensive requirements for ICTE evaluation.

6. Conclusions, Novelty, Limitations, and Future Research Directions

This paper proposes an evaluation framework designed to comprehensively evaluate the development of regional ICTE, with the ultimate aim of providing direction for ICT policymakers in education. Comprising five dimensions and 27 indicators, this framework facilitates a holistic understanding of ICTE development and enables comparative analyses and periodic reviews.
The innovation of the present study is mainly reflected in the following two aspects. First, this study developed a comprehensive evaluation framework to assess the development status of regional ICTE. It not only supports macro-management and scientific decision-making in terms of regional education but also addresses the intrinsic requirements for quality ICT construction in schools. Validated by experts, our CEF-ICTE provides necessary components for the evaluation of ICT environments and regulatory efforts, thereby fostering the implementation and further development of ICTE. Adopting this framework can generate the evidence required to inform the prioritization and acceptance of effective ICTE initiatives, as well as to establish a more equitable system for ICT access. Second, the participating experts comprised experienced teachers, school principals, and policymakers, who held a longstanding involvement in ICTE-related domains. Their vast experience in these fields renders the indicators within our framework more practical and applicable.
Last, several limitations are worth taking into consideration. Firstly, while efforts were made to reach a consensus among experts, the Delphi technique may be subject to biases, necessitating further rounds of survey and communication to refine the proposed framework. Secondly, although the CEF-ICTE was preliminarily validated, it has yet to be tested with real-world data to verify its practical implementation. The Capability Maturity Model (CMM) [25] could offer insights into self-assessing the maturity of evaluations based on specific criteria, potentially enhancing the framework’s applicability. Additionally, given the dynamic nature of regional ICTE development, ongoing adjustments to the framework’s indicators are essential to align with evolving needs. Future research could explore entry and exit mechanisms for evaluation indicators using relevant CMM knowledge to flexibly adapt the framework and more precisely evaluate regional ICTE development. Finally, while the proposed framework contains established dimensions and indicators, further studies could delve into each indicator’s feasibility regarding data-collection practices and/or strengthen the CEF-ICTE through more in-depth interviews.

Author Contributions

Conceptualization, D.X.; methodology, D.X.; software, T.Z.; validation, D.X. and C.X.; formal analysis, D.X. and C.X.; investigation, D.X. and C.X.; resources, D.X. and C.X.; data curation, D.X. and T.Z.; writing—original draft preparation, D.X. and C.X.; writing—review and editing, D.X. and T.Z.; visualization, D.X. and C.X.; supervision, D.X. and T.Z.; funding acquisition, D.X. All authors have read and agreed to the published version of the manuscript.

Funding

This research was supported by the Philosophy and Social Science Planning Youth Project of Zhejiang Province (24NDQN113YBM) and the Educational Science Planning Foundation of Zhejiang Province (2023SCG004).

Institutional Review Board Statement

This study was conducted according to the guidelines of Hangzhou Dianzi University and approved by the Institutional Review Board of Zhejiang Academy of Higher Education (#2023.0006, 13 June 2023).

Informed Consent Statement

Informed consent was obtained from all individuals involved in the study.

Data Availability Statement

The datasets generated during and/or analyzed during the current study are available from the corresponding author on reasonable request.

Acknowledgments

The authors are grateful to the experts involved for their support in participating in this study.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Dhawan, S. Online Learning: A panacea in the time of COVID-19 crisis. J. Educ. Technol. Syst. 2020, 49, 5–22. [Google Scholar] [CrossRef]
  2. Kim, S.; McVee, M.; Faith, M.S. Can information and communication technology improve college access for all in the United States of America? Educ. Sci. Theory Pract. 2019, 19, 14–32. [Google Scholar]
  3. Dores, A.R.; Geraldo, A.; Carvalho, I.P.; Barbosa, F. The use of new digital information and communication technologies in psychological counseling during the COVID-19 pandemic. Int. J. Environ. Res. Public Health 2020, 17, 7663. [Google Scholar] [CrossRef] [PubMed]
  4. Buchbinder, N. Digital Capacities and Distance Education in Times of Coronavirus: Insights from Latin America; World Education Blog; UNESCO: Paris, France, 2020. [Google Scholar]
  5. Bell, S.; Cardoso, M.; Giraldo, P.J.; Makkouk, E.N.; Nasir, B.; Mizunoya, S.; Dreesen, T. Can Broadcast Media Foster Equitable Learning Amid the COVID-19 Pandemic? COVID-19 Data and Research; UNICEF: New York, NY, USA, 2021. [Google Scholar]
  6. De Souza, G.H.S.; Jardim, W.S.; Junior, G.L. Brazilian Students’ Expectations Regarding Distance Learning and Remote Classes During the COVID-19 Pandemic. Educ. Sci. Theory Pract. 2020, 20, 65–80. [Google Scholar]
  7. Montoya, M.; Barbosa, A. The Importance of Monitoring and Improving ICT Use in Education Post-Confinement; UNESCO: Paris, France, 2021. [Google Scholar]
  8. Lim, C.P.; Ra, S.; Chin, B.; Wang, T. Information and communication technologies (ICT) for access to quality education in the global south: A case study of Sri Lanka. Educ. Inf. Technol. 2020, 25, 2447–2462. [Google Scholar] [CrossRef]
  9. European Commission. 2nd Survey of Schools: ICT in Education Objective 1: Benchmark Progress in ICT in School. Available online: https://ec.europa.eu/newsroom/dae/document.cfm?doc_id=57894 (accessed on 7 April 2024).
  10. EISR. China Educational Information Development Report (2019); People’s Education Press: Beijing, China, 2020. [Google Scholar]
  11. Gibson, D.; Broadley, T.; Downie, J.; Wallet, P. Evolving learning paradigms: Re-setting baselines and collection methods of information and communication technology in education statistics. J. Educ. Technol. Soc. 2018, 21, 62–73. [Google Scholar]
  12. Ismaili, J. Evaluation of information and communication technology in education programs for middle and high schools: GENIE program as a case study. Educ. Inf. Technol. 2020, 25, 5067–5086. [Google Scholar] [CrossRef]
  13. Vanderlinde, R.; Aesaert, K.; van Braak, J. Measuring ICT use and contributing conditions in primary schools. Br. J. Educ. Technol. 2015, 46, 1056–1063. [Google Scholar] [CrossRef]
  14. Goeman, K.; Elen, J.; Pynoo, B.; van Braak, J. Time for action! ICT Integration in Formal Education: Key Findings from a Region-wide Follow-up Monitor. TechTrends 2015, 59, 40–50. [Google Scholar] [CrossRef]
  15. Yim, M. Towards a Comprehensive Framework for ICT for Education Evaluation. Workshop of the Ais Special Interest Group for Ict in Global Development. GlobDev 2015, 6. Available online: http://aisel.aisnet.org/globdev2015/6 (accessed on 7 April 2024).
  16. Greco, S.; Ishizaka, A.; Tasiou, M.; Torrisi, G. On the Methodological Framework of Composite Indices: A Review of the Issues of Weighting, Aggregation, and Robustness. Soc. Indic. Res. 2018, 141, 61–94. [Google Scholar] [CrossRef]
  17. CEO. School Technology and Readiness Chart. Available online: https://tea.texas.gov/academics/learning-support-and-programs/technology-resources/school-technology-and-readiness-chart (accessed on 7 April 2024).
  18. BECTA. The Self-Review Framework Revision. Available online: https://www.gov.uk/government/organisations/british-educational-communications-and-technology-agency (accessed on 7 April 2024).
  19. KERIS. 2006 White Paper on ICT in Education Korea. Available online: https://www.keris.or.kr/main/cf/fileDownload.do?fileKey=1d5a73f9e4429a15fc7f3c71e47fb247 (accessed on 7 April 2024).
  20. Song, K.S.; Kim, H.S.; Seo, J.; Kim, C.H. Development and pilot test of ICT in education readiness indicators in the global context. Kedi J. Educ. Policy 2013, 10, 243–265. [Google Scholar]
  21. Njagi, R.N.; Oboko, R.O. A Monitoring and Evaluation Framework for the Integration of ICTs in Teaching and Learning in Primary Schools in Kenya. J. Educ. Pract. 2013, 4, 21–29. [Google Scholar]
  22. Hersh, M. Evaluation framework for ICT-based learning technologies for disabled people. Comput. Educ. 2014, 78, 30–47. [Google Scholar] [CrossRef]
  23. Vanderlinde, R.; van Braak, J. The e-capacity of primary schools: Development of a conceptual model and scale construction from a school improvement perspective. Comput. Educ. 2010, 55, 541–553. [Google Scholar] [CrossRef]
  24. Aoki, H.; Kim, J.; Lee, W. Propagation & level: Factors influencing in the ICT composite index at the school level. Comput. Educ. 2013, 60, 310–324. [Google Scholar]
  25. Solar, M.; Sabattin, J.; Parada, V. A Maturity Model for Assessing the Use of ICT in School Education. J. Educ. Technol. Soc. 2013, 16, 206–218. [Google Scholar]
  26. Aesaert, K.; van Braak, J.; van Nijlen, D.; Vanderlinde, R. Primary school pupils’ ICT competences: Extensive model and scale development. Comput. Educ. 2015, 81, 326–344. [Google Scholar] [CrossRef]
  27. Sergis, S.; Sampson, D.G.; Giannakos, M.N. Supporting school leadership decision making with holistic school analytics: Bridging the qualitative-quantitative divide using fuzzy-set qualitative comparative analysis. Comput. Hum. Behav. 2018, 89, 355–366. [Google Scholar] [CrossRef]
  28. United Nations. Partnership on Measuring ICT for Development—Core ICT Indicator. Available online: https://unstats.un.org/unsd/statcom/doc07/BG-ICT.pdf (accessed on 7 April 2024).
  29. Trucano, M. Knowledge Maps: ICTs in Education. Available online: https://files.eric.ed.gov/fulltext/ED496513.pdf (accessed on 7 April 2024).
  30. UNESCO Institute for Statistics. Guide to Measuring Information and Communication Technologies (ICT) in Education. Available online: https://uis.unesco.org/sites/default/files/documents/guide-to-measuring-information-and-communication-technologies-ict-in-education-en_0.pdf (accessed on 7 April 2024).
  31. Hodgkinson, G.D. The Role and Relevance of Ines in Canada. Comp. Int. Educ. 2012, 31, 111–122. [Google Scholar] [CrossRef]
  32. Cuttance, P.; Shirley, S. Monitoring Progress towards the National Goals for Schooling: Information and Communication Technology (ICT) Skills and Knowledge; Ministerial Council on Education, Employment, Training and Youth Affairs: Carlton, VIC, Australia, 2000. [Google Scholar]
  33. Gil-Flores, J.; Rodríguez-Santero, J.; Torres-Gordillo, J.-J. Factors that explain the use of ICT in secondary-education classrooms: The role of teacher characteristics and school infrastructure. Comput. Hum. Behav. 2017, 68, 441–449. [Google Scholar] [CrossRef]
  34. Comi, S.L.; Argentin, G.; Gui, M.; Origo, F.; Pagani, L. Is it the way they use it? Teachers, ICT and student achievement. Econ. Educ. Rev. 2017, 56, 24–39. [Google Scholar] [CrossRef]
  35. Turoff, M. The design of a policy Delphi. Technol. Forecast. Soc. Chang. 1970, 2, 149–171. [Google Scholar] [CrossRef]
  36. Ilic, S.; LeJeune, J.; Ivey, M.L.L.; Miller, S. Delphi expert elicitation to prioritize food safety management practices in greenhouse production of tomatoes in the United States. Food Control 2017, 78, 108–115. [Google Scholar] [CrossRef]
  37. Keeney, S.; Hasson, F.; McKenna, H.P. A critical review of the Delphi technique as a research methodology for nursing. Int. J. Nurs. Stud. 2001, 38, 195–200. [Google Scholar] [CrossRef] [PubMed]
  38. Dalkey, N.; Brown, B.; Cochran, S. Use of self-ratings to improve group estimates: Experimental evaluation of Delphi procedures. Technol. Forecast. 1970, 1, 283–291. [Google Scholar] [CrossRef]
  39. Linstone, H.A.; Turoff, M. Delphi: A brief look backward and forward. Technol. Forecast. Soc. Chang. 2011, 78, 1712–1719. [Google Scholar]
  40. Kauko, K.; Palmroos, P. The Delphi method in forecasting financial markets—An experimental study. Int. J. Forecast. 2014, 30, 313–327. [Google Scholar] [CrossRef]
  41. Skulmoski, G.J.; Hartman, F.T.; Krahn, J. The Delphi Method for Graduate Research. J. Inf. Technol. Educ. 2007, 6, 1–21. [Google Scholar] [CrossRef]
  42. Wu, D.; Ding, H.; Chen, J.; Fan, Y. A Delphi approach to develop an evaluation indicator system for the National Food Safety Standards of China. Food Control 2021, 121, 107591. [Google Scholar] [CrossRef]
  43. Zhang, L.; Huang, J. Establishment of population health assessment index system for urban community residents in China. Chin. J. Public Health 2010, 26, 1378–1380. [Google Scholar]
  44. Kim, J.; Lee, W. An analysis of educational informatization level of students, teachers, and parents: In Korea. Comput. Educ. 2011, 56, 760–768. [Google Scholar] [CrossRef]
  45. Xu, C.; Yan, W.; Hu, C.; Shi, C. Design and Validation of School ICT Monitoring & Evaluation Framework: A Case Study. Am. J. Educ. Inf. Technol. 2020, 1–8. [Google Scholar] [CrossRef]
  46. Dong, T. Construction of vocational education informatization development indicators and analysis of regional differences. Chin. Vocat. Tech. Educ. 2020, 36, 5–11. [Google Scholar]
  47. Zhang, R.; Zhang, H.; Zhang, Y. Research on the Application Index of Basic Education Informationization in Xinjiang—Based on the Empirical Evaluation of 13 Primary and Middle Schools in X Area of Xinjiang. Mod. Educ. Technol. 2021, 31, 72–80. [Google Scholar] [CrossRef]
  48. Li, L. Research on Current Development Status and Countermeasures of Regional Smart Education—Taking Yuncheng Smart Education Demonstration Zone as An Example. E-Educ. Res. 2022, 43, 56–62. [Google Scholar] [CrossRef]
  49. Ho, C.-W.; Wang, Y.-B.; Neil, Y.Y. Does Environmental Sustainability Play a Role in the Adoption of Smart Card Technology at Universities in Taiwan: An Integration of TAM and TRA. Sustainability 2015, 7, 10994–11009. [Google Scholar] [CrossRef]
  50. Ministry of Education of the People’s Republic of China. The 10-Year Development Plan of Educational Informatization. Available online: http://www.moe.gov.cn/srcsite/A16/s3342/201203/t20120313_133322.html (accessed on 7 April 2024).
  51. Ministry of Education of the People’s Republic of China. The 13th Five-Year Plan for Educational Informatization. Available online: http://www.moe.gov.cn/srcsite/A16/s3342/201606/t20160622_269367.html (accessed on 7 April 2024).
  52. Wastiau, P.; Blamire, R.; Kearney, C.; Quittre, V.; Van de Gaer, E.; Monseur, C. The Use of ICT in Education a survey of schools in Europe. Eur. J. Educ. 2013, 48, 11–27. [Google Scholar] [CrossRef]
  53. Ristić, M. E-Maturity in Schools. Croat. J. Educ. 2017, 19, 317–334. [Google Scholar]
  54. Vanderlinde, R.; Dexter, S.; van Braak, J. School-based ICT policy plans in primary education: Elements, typologies and underlying processes. Br. J. Educ. Technol. 2012, 43, 505–519. [Google Scholar] [CrossRef]
  55. Vermeulen, M.; Kreijns, K.; van Buuren, H.; Van Acker, F. The role of transformative leadership, ICT-infrastructure and learning climate in teachers' use of digital learning materials during their classes. Br. J. Educ. Technol. 2017, 48, 1427–1440. [Google Scholar] [CrossRef]
Figure 1. The Delphi flowchart.
Table 1. Two rounds of Delphi survey participants and response rates.

Participants | Invited N (%) | Round 1 N (%) | Round 2 N (%)
Teachers with experience of using ICT in teaching | 9 (28%) | 8 (27%) | 6 (23%)
Principals in charge of the management of ICT in school | 7 (21%) | 6 (20%) | 6 (19%)
Experts working in ICTE | 8 (24%) | 7 (23%) | 7 (27%)
Experts working in ICTE evaluation and management | 9 (27%) | 9 (30%) | 8 (31%)
Total | 33 (100%) | 30 (100%) | 27 (100%)
Table 2. Classifications in pre-existing evaluation frameworks and their relationship with the present one.

Research | Unit | Categories
Previous studies:
(Kim & Lee, 2011) [44] | School | Access to education information, competency, utilization, and the satisfaction of students, teachers, and parents
(Aoki et al., 2013) [24] | School | Personnel, Materials, Usage
(Song et al., 2013) [20] | Nation | Policy, Infrastructure, Human resources, Curriculum, Service, Educational resources, Use, Equity
(Njagi & Oboko, 2013) [21] | School | ICT curriculum, ICT infrastructure, School management support for integration of ICTs in teaching and learning, Teacher training, Extent of ICT integration in teaching and learning
(Yim, 2015) [15] | Nation | Four major evaluation domains (Surrounding Environment, IS/ICT, Teacher, Student); four intersection evaluation domains (Technology Integration, Technology Utilization, Educational Delivery and Interaction, etc.)
(Goeman et al., 2015) [14] | Region | Infrastructure, Policy, Competencies, Use at the micro level, Perceptions
(Xu et al., 2020) [45] | School | Data Usability, School ICT Evaluation Framework, Data Analysis
(Dong, 2020) [46] | Nation | Vocational education informatization, Development indicators, Regional differences, Evaluation framework
(Zhang et al., 2021) [47] | Nation | Education informatization, Application index, Primary and secondary school, Information gap
(Li, 2022) [48] | Nation | Smart education, Current situation investigation, Main problems, Development countermeasures
The present study:
Ours | Region | ICT infrastructure, Digital resources, ICT usage, ICT management, Personnel support
Table 3. Identification of importance and screening of the indicators in the two-round Delphi survey.

No. | Dimension/Indicator | Round 1: Mean, SD, CV%, PA%, Decision | Round 2: Mean, SD, CV%, PA%, Decision
1 | ICT Infrastructure | 4.57, 0.50, 11, 100, K | -
1.1 | Percentage of multimedia classrooms | 4.03, 0.48, 12, 90, K | -
1.2 | Percentage of multimedia classrooms with interactive electronic whiteboards | 3.57, 0.79, 22, 40, R | 4.22, 0.76, 18, 81, K
1.3 | Percentage of schools with Innovation Lab (such as 3D, etc.) | - | 4.00, 0.60, 15, 83, K
1.4 | Number of computers per student | 4.37, 0.83, 19, 80, K | -
1.5 | Number of information-based teaching terminals used by teachers in school teaching | 4.43, 0.71, 16, 87, K | -
1.6 | Percentage of schools connected to the Internet | 4.27, 0.77, 18, 80, K | -
1.7 | Percentage of schools with full wireless network coverage | 4.07, 0.77, 19, 73, K | -
2 | Digital Resources | 4.80, 0.38, 8, 100, K | -
2.1 | Percentage of schools with digital resources for textbooks | 3.77, 0.49, 13, 73, K | -
2.2 | Percentage of schools with school-based resources | 4.40, 0.48, 11, 100, K | -
2.3 | Percentage of schools whose cyberspace is supported by the national or provincial platform | 3.43, 0.79, 23, 27, R | 3.41, 0.78, 23, 44, E
2.4 | Percentage of teachers with learning cyberspace | 4.07, 0.69, 17, 80, K | -
2.5 | Percentage of students with learning cyberspace | 4.23, 0.42, 10, 100, K | -
3 | ICT Usage | 4.87, 0.44, 9, 97, K | -
3.1 | Average frequency of online service use by students per week | 4.77, 0.52, 11, 97, K | -
3.2 | Average frequency of online service use by teachers per week | 4.20, 0.42, 10, 100, K | -
3.3 | Average ICT use time by teachers for academic purposes | 4.73, 0.57, 12, 93, K | -
3.4 | Percentage of teachers who use multimedia teaching resources (such as PPT courseware, video animation, etc.) for teaching | 4.70, 0.61, 13, 93, K | -
3.5 | Percentage of teachers who participate in ICT-based activities per week | 4.63, 0.51, 11, 100, K | -
3.6 | Utilization rate of multimedia classrooms | 4.20, 0.67, 16, 87, K | -
4 | ICT Management | 4.37, 0.61, 14, 93, K | -
4.1 | Percentage of schools with management information systems that achieve unified identity authentication | 4.47, 0.72, 16, 87, K | -
4.2 | Percentage of schools with management information systems that achieve normalized applications | 4.43, 0.71, 16, 87, K | -
4.3 | Percentage of schools that implement the application of basic data in management information | 4.77, 0.62, 13, 90, K | -
4.4 | Percentage of schools with public information release platforms | 3.33, 0.67, 20, 23, R | 4.15, 0.71, 17, 81, K
4.5 | Percentage of schools with campus safety monitoring systems | 4.03, 0.56, 14, 87, K | -
4.6 | Percentage of schools with security monitoring system to achieve full coverage of the campus (gate, teaching building, office area) | 4.10, 0.62, 15, 87, K | -
4.7 | Percentage of schools with Smart-ID cards | 3.87, 0.70, 18, 70, K | -
5 | Personnel Support | 4.53, 0.59, 13, 97, K | -
5.1 | Percentage of schools appointing an ICT director | 4.57, 0.64, 14, 93, K | -
5.2 | Percentage of teachers who participated in ICT training at the school level | 3.43, 0.62, 18, 37, R | 3.19, 0.73, 23, 37, E
5.3 | Percentage of teachers who participated in ICT training at the national or provincial level | 4.30, 0.47, 11, 100, K | -
5.4 | Percentage of schools whose principal participated in ICT training at the provincial level | 4.73, 0.57, 12, 93, K | -
5.5 | Percentage of schools with informatics teachers | 2.93, 0.70, 24, 20, R | 4.04, 0.53, 13, 89, K
5.6 | Percentage of schools with ICT assistants | 3.20, 0.54, 17, 27, R | 3.41, 0.65, 19, 48, E

Note. 1. Experts rated the importance of each indicator on a Likert scale (1 = “not important”, 5 = “very important”). 2. Mean: mean importance score; SD: standard deviation; CV: coefficient of variation; PA: percent agreement; K: keep; R: reassess; E: exclude. 3. In Round 2, indicator “1.3 Percentage of schools with Innovation Lab (such as 3D, etc.)” was newly added. 4. In Round 2, indicator “1.5 Number of computers per teacher” was revised to “1.5 Number of information-based teaching terminals used by teachers in school teaching”. 5. In Round 2, indicator “4.6 Percentage of schools with security monitoring system to achieve full coverage of the campus” was revised to “Percentage of schools with security monitoring system to achieve full coverage of the campus (gate, teaching building, office area)”. 6. In Round 2, indicators 1.1 and 1.2 were synthesized into one indicator.