Article

The Human-Centred Design of a Universal Module for Artificial Intelligence Literacy in Tertiary Education Institutions

Daswin De Silva, Shalinka Jayatilleke, Mona El-Ayoubi, Zafar Issadeen, Harsha Moraliyage and Nishan Mills
1 Research Centre for Data Analytics and Cognition, La Trobe University, Melbourne 3083, Australia
2 Education Services, La Trobe University, Melbourne 3086, Australia
* Author to whom correspondence should be addressed.
These authors contributed equally to this work.
Mach. Learn. Knowl. Extr. 2024, 6(2), 1114-1125; https://doi.org/10.3390/make6020051
Submission received: 27 April 2024 / Revised: 13 May 2024 / Accepted: 16 May 2024 / Published: 18 May 2024
(This article belongs to the Section Data)

Abstract

Generative Artificial Intelligence (AI) is heralding a new era in AI, performing a spectrum of complex tasks at a level that is often indistinguishable from human output. Alongside language and text, Generative AI models have been built for all other modalities of digital data: image, video, audio, and code. The full extent of Generative AI and its opportunities, challenges, contributions, and risks is still being explored by academic researchers, industry practitioners, and government policymakers. While this deep understanding of Generative AI continues to evolve, a lack of fluency, literacy, and effective interaction with Generative and conventional AI technologies is a common challenge across all domains. Tertiary education institutions are uniquely positioned to address this void. In this article, we present the human-centred design of a universal AI literacy module, followed by its four primary constructs, which provide core competence in AI to coursework and research students and to academic and professional staff in a tertiary education setting. In comparison to related work in AI literacy, our design is inclusive, owing to the collaborative approach taken across multiple stakeholder groups, and comprehensive, given the descriptive formulation of the primary constructs of this module with exemplars of how they activate core operational competence across the four groups.

1. Introduction

The exponential increase in Generative Artificial Intelligence (AI) applications in industrial, commercial, social, and personal settings is driving an urgent need for AI literacy skills that deliver operational competence while also setting the foundation for further study into the technical topics of AI [1]. Through their access to coursework and research students as well as academic and professional staff, universities are suitable testbeds for the design and development of a universal module for AI literacy that is focused on the learning needs of general to specialised cohorts. In terms of approach, this module builds upon the success of similar works, such as those for teaching digital literacy and academic integrity. In terms of structure and content, it accommodates diverse learning abilities while addressing the practical requirements of all student cohorts and staff groups within a university setting. This inclusive approach aligns with the need to embrace AI as foundational to the ethical future of work, driving student employability and organisational productivity.
Digital literacy is a precursor to AI literacy and, given this dependence, AI literacy can also be positioned as an extension of digital literacy. Digital literacy has evolved significantly since its inception in the 1990s, when Gilster [2] defined it as ‘the ability to understand and use information in multiple formats from a wider range of sources when it is presented via computers’. However, recent literature reveals a lack of consensus on a comprehensive definition, mainly due to the evolving nature of technology and the multifactorial needs of key stakeholders [3,4]. Within this pool of definitions, the common themes are characteristics, digital competence, related knowledge, and skills [5].
AI literacy, on the other hand, is a term in its infancy. In general, it refers to proficiency in comprehending, using, monitoring, and evaluating AI applications without necessarily being able to develop AI models oneself [6,7]. A more comprehensive and frequently cited definition of AI literacy was formulated by Long and Magerko [8], who defined it as ‘a set of competencies that enables individuals to critically evaluate AI technologies, communicate and collaborate effectively with AI, and use AI as a tool online, at home, and in the workplace’. The use of AI in education has been actively researched [9] and implemented [10], primarily for assessment [11], teaching support [12], and the technical and ethical aspects [13]. However, these efforts have maintained a specific focus on the Science, Technology, Engineering, and Mathematics (STEM) disciplines [14].
Despite these endeavours and the increasing use of Generative AI in education, the inclusive design of curriculum and pedagogy of a universal AI literacy module has not been reported. In this article, we aim to address this gap by adopting an inclusive, human-centred design approach involving multiple stakeholder groups to identify, rationalise, and formulate the primary constructs of a universal AI literacy module that provides operational competence to students and staff in a tertiary education setting. Students and staff are categorised into high-level groups of coursework students, research students, academic staff, and professional staff. Based on our findings, the four constructs that we propose for a universal AI module are (1) foundational knowledge of AI, (2) solving problems using AI, (3) the ethical and responsible practice of AI, and (4) entrepreneurship and innovation with AI. These four constructs should also be sequentially aligned to deliver the continuum of simple-to-complex and concrete-to-abstract learning objectives of Bloom’s taxonomy [15].
The rest of the manuscript is organised as follows. Section 2 reviews recent work on AI literacy and AI in education while also introducing the state-of-the-art Generative AI. In addition to reviewing the literature, we anticipate this section will contribute towards addressing misconceptions of AI and Generative AI that are common in the tertiary education sector, as well as aggregating its progression and capabilities. Section 3 presents our primary contribution of the human-centred design approach involving multiple stakeholder groups to identify, rationalise, and formulate the primary constructs of a universal AI literacy module, followed by a description of each construct. Section 4 presents a discussion on the implications of this universal AI literacy module and its role in addressing the current challenges in tertiary education, and Section 5 concludes this paper.

2. Related Work

The landscape of AI literacy has been described as heterogeneous and lacking reliable information, where questions are raised on the lack of a widely recognised and accepted definition of AI literacy, the positioning (exclusion/inclusion) of programming skills, the lack of robust metrics for measuring literacy, and the lack of empirical research with relevant control variables [14]. Another study [6] aligns with Bloom’s taxonomy in proposing four aspects for defining AI literacy that are based on the adaptation of classic literacies; however, this exploratory review does not progress beyond naming and aligning these aspects. Notwithstanding the headlining presence of Generative AI and its impact on the work and life of adult humans, a number of studies have focused on AI literacy in early childhood education [16], middle-school students [17], and K-12 students [18]. These studies share the limitations mentioned above, signifying the infancy of the domain despite its increasing importance across all educational landscapes.
Outside AI literacy, AI technologies have been used across diverse applications in education. Given the diversity of these applications, a number of systematic reviews have reported groupings of these capabilities. Four such perspectives were reported in a 2018 study [19], namely, personalised instructional material, innovative instructional strategies, technology-assisted assessment, and communications between learners and instructors, followed by four schemes identified as profiling and prediction, assessment and evaluation, adaptivity and personalisation, and intelligent tutoring systems [11]. A 2022 study reported an increase in implementing and designing online education, personalised learning support, learner profiling, and learning analytics [20]. More broadly, a review of two decades of AI in education [9] reported eight areas for future development, namely, intelligent tutoring systems, natural language processing, educational robots, educational data mining, discourse analysis, teaching evaluation, learner emotion detection, and personalised learning systems. It is worth noting that none of the systematic reviews or research articles on AI in education published prior to 2023 anticipated the transformative capabilities of Generative AI. Instead, they focus on the four primary capabilities of what is now categorised as conventional AI (or narrow/weak AI): prediction, classification, association, and optimisation [21]. These primary capabilities have been demonstrated in a number of recent studies focusing on applications such as smart cities [22,23], healthcare [24,25], and energy [26,27].
Generative AI is broadly defined by its ability to “generate new content” that is complex and seemingly meaningful [28]. This “new content” is unlike the datasets used to train the Generative AI model and is typically known as “AI-generated content (AIGC)”. A clear distinction can be drawn between Generative AI and conventional AI (or narrow/weak AI) based on the complexity of the output that is generated. Conventional AI produces an output that is well-defined, such as a prediction, classification, association, or optimisation. In contrast, Generative AI generates complex content in response to an ad hoc human query that is not predefined. This complexity and human-like content have led to Generative AI being recognised as a General-Purpose Technology due to its sustained impact of complementing human intelligence across occupations and constituent work tasks. Higher-wage occupations are increasingly exposed to Generative AI; approximately 80% of the U.S. workforce could have at least 10% of their work tasks affected, and close to 19% of workers could see at least 50% of their tasks exposed [13]. Rapid developments in novel deep-learning algorithms for predicting sequences, the availability of large volumes of high-quality training data, and access to scalable and distributed computing facilities on cloud infrastructure have laid the technical foundations for this exponential growth in Generative AI [28,29]. The first generation of models was trained on image data and specialised in computer vision-related tasks, such as face detection, object detection [30], object localisation, and image captioning, followed by a second generation focusing on language and text. Commonly known as Large Language Models (LLMs), these models have found a wider audience due to the simplicity of interaction through conversation, as demonstrated by the success of ChatGPT [13,31]. The combination of images and text has led to the further development of multimodal models and specialised scientific, programming, and robotics models.
“Prompting” is the most common form of interaction with a Generative AI model. It takes the form of a question or query in which the human operator expresses intent through the instructions provided. A prompt typically consists of the following elements: an instruction, which specifies the task the model should perform; context, which supports the interpretation of the instruction; input, which is the semantic information within the instruction; and output, which specifies the format and type of output required [28]. The simplest and most straightforward method of prompting is to use the graphical web interface or smartphone interface, followed by the programmatic method of accessing a Generative AI model using an API, which can also be automated and connected to a pre-existing software package or system. Chain-of-thought prompting, zero-shot prompting, tree-of-thought prompting, and graph prompting are advanced variants for more complex interactions [32]. Model fine-tuning and transfer learning are more advanced techniques that allow users to train a Generative AI model using their own data [33]. Generative AI model architecture, hyperparameters, and training algorithms can also be redesigned and trained from scratch to learn from new datasets, which is typically a larger undertaking requiring technical and domain expertise.
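To make these prompt elements and the programmatic method concrete, the following minimal Python sketch composes a prompt from instruction, context, input, and output elements and submits it through an API. It assumes access to an OpenAI-compatible chat-completions endpoint via the openai client library; the model identifier and prompt text are illustrative placeholders, not recommendations.

```python
# Minimal sketch of programmatic prompting (assumes the `openai` client library
# and an OpenAI-compatible chat-completions endpoint; the model name is illustrative).
from openai import OpenAI

client = OpenAI()  # reads the API key from the OPENAI_API_KEY environment variable

# The four prompt elements described above, kept separate for clarity.
instruction = "Summarise the following unit description in three sentences."
context = "The summary is for first-year students with no prior AI background."
input_text = "This unit introduces the lifecycle of an AI application, from data to deployment."
output_format = "Return plain text with no bullet points."

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model identifier
    messages=[
        {"role": "system", "content": context},
        {"role": "user", "content": f"{instruction}\n\n{input_text}\n\n{output_format}"},
    ],
)
print(response.choices[0].message.content)
```

The same call can be embedded within an existing software system, which is what distinguishes programmatic prompting from interface-based prompting.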

3. The Universal AI Literacy Module

Underpinned by the phenomenological methodology of participatory action research, we adopted a human-centred design approach [34] that facilitates an exploratory process of joint inquiry, identification, and deliberation of the expectations and requirements of the universal AI literacy module. The initial phase of the design approach was an ideation workshop [35] of AI researchers and practitioners (n = 35) that congregated their collective technical and professional expertise into a blueprint for AI literacy. This blueprint informed the design of content for the next phase of in-depth interviews with a representative sample of the target groups (n = 15): coursework students, research students, academic staff, and professional staff. As future employers of university students and an active consumer group of AI, industry practitioners (with or without technical or AI expertise) were also included in this sample. These target groups are representative of a tertiary education setting, where students branch out into coursework (20%) and research (20%) subgroups, staff split into academic (20%) and professional (20%) roles, and industry practitioners (20%) are partially involved in teaching and research activities. The demographics of this sample are as follows: gender: 46.6% female, 40% male, and 13.3% undisclosed; age group: 20–29 (40%), 30–39 (20%), 40–49 (20%), and 50–59 (20%). The interviews were conducted on an online platform by a qualitative researcher with over 10 years of experience in phenomenological research, and the average length of the interviews was approximately 30 min. Five transcripts underwent preliminary coding by two members of the research team with experience in qualitative analysis, and any discrepancies in coding were discussed to ensure consensus was reached before coding the remaining transcripts. The transcripts then underwent inductive thematic analysis to determine common themes encompassing the main points and opportunities for advancing AI literacy. These findings informed the final phase of co-design workshops, for which we recruited a further representative sample of the target groups (n = 10) to collaboratively ideate and design the AI learning module through iterations of empathy building, hypothesis testing, idea generation, and prototyping. As depicted in Figure 1, this approach reaches beyond participant involvement to co-evolve the problem and the solution in a collaborative setting.
Across the phases of ideation, interviews, thematic analysis, and three iterations of co-design workshops, we transformed and refined the blueprint of the universal AI literacy module into a complete specification. Table 1 presents this specification in its implementable format of constructs and topics. When aggregating the findings of the design approach, the first decision was the number of constructs, which had to provide adequate operational competence without overburdening participants who lack prerequisite knowledge or skills. Drawing on the incremental premise of knowledge, skill, ethics, and innovation, and aligning with the learning levels of Bloom’s taxonomy, we rationalised the four constructs: (1) foundational knowledge of AI, (2) solving problems using AI, (3) the ethical and responsible practice of AI, and (4) entrepreneurship and innovation with AI.
Each construct is further deliberated in terms of the major topics and the levels of learning from Bloom’s taxonomy (presented in parentheses) in Table 1. The constructivist underpinnings of this specification can be observed in the progression of foundational knowledge into solving problems using AI, followed by the ethical and responsible practice of such AI solutions and systems, leading up to the capacity for entrepreneurship and innovation using AI.
Table 2 expands the specification by presenting exemplars of how the four stakeholder groups draw benefit and value from each of the constructs. For instance, a coursework student will leverage their foundational knowledge of AI to understand the AI capabilities of a social media app on their smartphone and how the social media provider trains AI using consumer data to deliver these capabilities. In solving problems using AI, coursework students will benefit from the conversational abilities of Generative AI (such as ChatGPT) to simplify complex topics with familiar examples during self-study sessions. This is already a common use case of Generative AI, which can be further augmented through formal training in prompt engineering skills. By adopting the responsible practice of AI, coursework students will know how to use and cite AI tools and AI-generated content not only in assignments but also in work tasks and personal activities. The combination of these three constructs, i.e., knowledge, problem-solving skills, and the responsible practice of AI, is transformational in ensuring the next generation of employees and citizens are cognizant of the opportunities and risks of AI, competent in its use, and proficient in the identification of AI systems and AI-generated content. The final construct is an extension of these capabilities where AI-literate individuals can progress towards innovation with AI. This can take diverse forms, such as start-ups and creative output that would lead to alternate career pathways for coursework students, promote academic research, create personalised learner journeys that reduce attrition, and lead to AI transformations that deliver operational efficiencies across the tertiary education landscape.
In the following subsections, we unpack each of the constructs and provide guidelines for identifying pedagogical approaches to develop the necessary knowledge and skills. We also identify strategies for applying effective methods to scaffold activities that use AI to solve problems through reflective, ethical, and responsible practices. We signal that interactions between students and AI tools provide opportunities for knowledge production, as opposed to knowledge consumption, thus contributing to innovation and supporting entrepreneurship with AI. Such constructivist approaches position the learning process at the core of quality learning, where the process is the vehicle for meaningful and engaged learning [36]. Applying this approach in practice requires closer attention to the redesign of authentic assessment activities and pedagogical approaches that integrate the use of AI. The focus is on utilising critical higher-order thinking to build student capabilities in analysis, complex problem solving, logic development, creativity, and collaboration. Intentional design is vital for providing the structure and support learners require during the learning process [37], particularly in the context of the Generative AI environment.
Successful design of the module requires thoughtful and effective pedagogical considerations to support the engineering of learning and teaching strategies, with a focus on the process of learning, as opposed to the product of learning. We approach the design of the AI literacy module through the lens of constructivism, which defines learning as a process for active knowledge construction instead of passive knowledge absorption [38]. The constructivist approach is student-centred, where meaning, learner processes, collaboration, and interactivity are the key foci [39,40]. We assign a high value to learning processes where students produce knowledge through the provision of instructor guidance [39]. Such an approach highlights the importance of active, reflective, and collaborative methodologies to support quality learning. The four constructs of AI literacy weave these teaching and learning approaches to leverage AI tools, building the required literacy in the context of the discipline.

3.1. Foundational AI Knowledge

Foundational AI knowledge builds a baseline awareness and understanding of the theory and practice of AI. Beginning with practical and everyday applications of AI, participants will draw on these lived experiences to understand the theoretical notions of an AI lifecycle, training datasets, learning algorithms, model development, hyperparameters, evaluation metrics, and AI system deployment. This disciplinary knowledge will be developed through practice and formative activities using lower-order skills from Bloom’s taxonomy. This construct integrates relevant curriculum and provides scope for active learning using AI tools. It provides mechanisms for students to learn content, offers strategies and self-directed learning skills, and allows learners to reflect on their own experiences through engagement and self-directed inquiry [41]. Proponents of active learning advocate that such approaches provide students with skills in discipline-specific reasoning [42,43]. Other researchers have identified correlations between active learning strategies and authentic learning [44,45,46]. Strategies for achieving active learning with AI require facilitators to construct scaffolds with contextual instructional materials and sequence tasks with effective feedback.
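As an illustrative companion to these theoretical notions, the short sketch below steps through a training dataset, a learning algorithm, a hyperparameter, and an evaluation metric in a single pass. It is a minimal demonstration only, assuming the scikit-learn library and its bundled Iris dataset, and is not part of the module specification.

```python
# Illustrative sketch only: one pass through dataset -> algorithm -> hyperparameter
# -> evaluation metric, assuming scikit-learn is installed.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)  # training dataset and its attributes
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

# Learning algorithm; n_estimators is one of its hyperparameters.
model = RandomForestClassifier(n_estimators=100, random_state=42)
model.fit(X_train, y_train)  # model development (training)

# Evaluation metric computed on held-out data.
print("Accuracy:", accuracy_score(y_test, model.predict(X_test)))
```

In a classroom setting, each commented step can anchor a discussion of the corresponding lifecycle stage before learners progress to deployment and management concerns.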

3.2. Solving Problems Using AI

This construct relates to providing learners with skills in the application, analysis, and evaluation of knowledge, where learners use higher-order skills from Bloom’s taxonomy to select, transfer, classify, appraise, assess, etc. This would begin with the widely cited skill for querying Generative AI models, i.e., prompt engineering, with options for extending into advanced prompting using prompt parameters (such as temperature), API interfaces, or transfer learning capabilities. The skill to identify a situation that would benefit from an AI solution, design the blueprint of such a solution, and then compare existing solutions in terms of the expected capabilities is another learning outcome of this construct. Model interpretation and explainability, insight generation using AI, options for human-in-the-loop AI solutions, and the evaluation of AI-based decision-making methods and metrics are further skills that participants will acquire through this construct. This construct should be designed to incorporate active learning processes for students with AI interaction and engagement. Interaction and engagement are often used interchangeably in the literature to discuss learning processes that support effective learning [47]. Engagement is widely recognised as a key ingredient for student connectivity, satisfaction, and academic performance [48]. This critical construct is central to building core AI literacy, where students are equipped with essential skills to achieve a broad range of learning outcomes (analysis, evaluation, and problem solving). The design of activities in this construct should be scaffolded to provide scope for students to exercise skills in the construction and validation of disciplinary content. This requires the generation of topics and activities that incorporate an ‘inquiry framework’ [49] to support quality interaction with AI, resulting in reflection where ideas can be critiqued to solve problems, leading to deep learning [50].
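The progression from basic to advanced prompting mentioned above can be illustrated with a short sketch. It again assumes an OpenAI-compatible chat-completions API via the openai client library; the model identifier, temperature value, and chain-of-thought wording are assumptions for demonstration rather than recommended settings.

```python
# Sketch of two prompting variants for the same problem (assumed API as before).
from openai import OpenAI

client = OpenAI()
question = ("A subject has three assessments worth 20%, 30% and 50%. "
            "A student scores 60, 70 and 80. What is the final mark?")

# Zero-shot prompt: ask for the answer directly.
zero_shot = client.chat.completions.create(
    model="gpt-4o-mini",   # illustrative model identifier
    temperature=0.2,       # lower temperature favours focused, less varied output
    messages=[{"role": "user", "content": question}],
)

# Chain-of-thought style prompt: ask the model to reason through the weighting first.
chain_of_thought = client.chat.completions.create(
    model="gpt-4o-mini",
    temperature=0.2,
    messages=[{"role": "user",
               "content": question + " Work through the weighting of each assessment "
                                     "step by step before stating the final mark."}],
)

print(zero_shot.choices[0].message.content)
print(chain_of_thought.choices[0].message.content)
```

Comparing the two responses gives learners a tangible sense of how prompt wording and parameters shape the quality of an AI-assisted solution.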

3.3. Ethical and Responsible Practice of AI

Aligned with recent progress in AI regulation, such as the European Union AI Act [51], and AI ethics guidelines, such as IEEE Ethically Aligned Design, the EU guidelines for trustworthy AI, and others [52], this construct examines the purpose, role, and use of AI within socio-technical environments, focusing on the ethical dimensions of human agency, safety, robustness, privacy, transparency, fairness, security, and accountability. This construct is concerned with building skills to reflect on, question, and synthesise AI-produced content to generate efficacy in the utilisation of AI. Current challenges associated with Generative AI, such as the use of copyrighted data for training, intellectual property and ownership, algorithmic bias, AI hallucinations, and factual inaccuracies, are integral to building literacy in the responsible use of AI. The responsible use and referencing of all AI-generated content, as well as methods and metrics for the detection of such content, should also be part of this construct. The fundamental principles of the responsible and ethical use of AI are closely aligned with the values underpinning academic integrity, such as honesty, trust, fairness, respect, and responsibility. Acknowledging the fast-evolving nature of AI technologies, this construct should also highlight the importance of lifelong learning to keep abreast of new developments.

3.4. Entrepreneurship and Innovation with AI

The process of learning across the three constructs previously discussed culminates in the skills necessary to foster entrepreneurship and innovation. This construct should impart a critical-thinking and analytical mindset that maintains an AI-first philosophy, skills for recruiting and leading AI teams, the fundamentals of intellectual property for innovation, patents, and commercialisation, as well as communication skills that enable learners to present AI innovations to diverse audiences. Learners should also be trained to identify opportunities for authentic applications of AI that are not limited to mere operational efficiencies or task automation but span the strategic mindset of entrepreneurship and innovation. The previous construct, on the ethical and responsible practice of AI, will ensure this commercial pursuit of AI does not exploit consumers, their behaviours, or their data. Pedagogical approaches can include role-play, case-based, and simulated active learning experiences of authentic AI practices. These can be extended to reviews of successful commercial and creative AI start-ups and the underlying high-impact capabilities and low barriers to the adoption of authentic AI practices.

4. Discussion

Generative AI is leading a paradigm shift in the acceptance and application of AI across all disciplines and industry sectors. For instance, ChatGPT is the flagship Generative AI model that amassed the fastest-growing user base in history, reaching 100 million active users within two months [53]. Despite criticisms of an ‘intelligence without knowledge or reasoning or the notions of truth’, ChatGPT is highly effective at human-like conversations, with seemingly sophisticated and useful responses to questions and to summarisation, classification, extraction, and generation tasks. In education, Generative AI technologies such as ChatGPT can be directly leveraged to build knowledge and improve learning. It can be used as a writing assistant or as a tutor, to brainstorm and draw out ideas, and to recall, retrieve, and strengthen disciplinary knowledge. It can also be used as a Socratic tutor for a deeper understanding of constructs and as an analogy generator for elaboration, promoting learning by making connections, expanding on ideas, and applying concepts to individual experiences. These higher-order capabilities contrast with all other technological developments to date, as those technologies were only able to support a human expert (the educator) in teaching. With Generative AI, we now have an opportunity to leverage its “generalised intelligence” directly in teaching with minimal supervision from the human intermediary. The human supervision factor is likely to become less relevant as newer Generative AI models address the challenges of bias, factual inaccuracies, fallacies, and plagiarism, commonly grouped under terms such as AI hallucinations and stochastic parroting.
AI presents compelling challenges to the academic integrity of scholarly works and conventional assessment practices, given that every generated output is unique. For example, ChatGPT can replicate chunks of text from existing works without correct references to the source material, and human operators can use ChatGPT for direct text generation and submit its outputs in assessments and research articles. The challenge to academic integrity lies therein: prevailing assessment practices built on low-order understanding and application of knowledge (multiple-choice, simple programming, and true/false quizzes) are at risk because responses can be easily generated by Generative AI tools and plagiarised in assessment. While there are tools for detecting AI-generated text, these only provide a likelihood score, which makes them indeterminate and unreliable. To date, few effective tools have been deployed in higher education institutions in Australia, as privacy concerns related to programs that analyse and recognise student writing styles raise ethical considerations [54]. Furthermore, the continuous learning and improvement of AI adds to the complexity of detection.
We posit that it is a far more effective approach to invest time in building AI literacies across disciplines and employing authentic learning strategies that engage students with AI tools, to support the development of disciplinary foundational knowledge and skills, and to use AI to solve real-world problems in an ethical and responsible manner. It is through such a process of learning that entrepreneurship and innovation come to light. The implementation of AI in curriculum and assessment is paramount to all higher education disciplines. Active interaction with Generative AI to support the retrieval, evaluation, production, and utilisation of content necessitates the implementation of new literacy, specifically in the context of the AI environments.
The significance of implementing the AI literacy module is further brought to the fore by major gaps in the tertiary education sector, such as those recently highlighted by the Australian Universities Accord [55]. The report stresses the imperative for the higher education system to ensure it can meet Australia’s future knowledge and skill needs. Emphasis is placed on expanding access and opportunity and on delivering new knowledge, innovation, and capabilities that benefit society and the economy. Pertinently, the creation of new knowledge, equity of access, and the capacity of people and industries to absorb new discoveries, including high levels of skills and knowledge, are required to drive growth. AI is highlighted as a critical and emerging area where effective learning and teaching practices are required for the best use of new technologies. AI literacy has an essential role to play in meeting such goals, as it shines a light on the issues of pedagogy, equity, accessibility, and the growth of skills. It can help fill the skill gap and augment the understanding of Generative AI in learning and teaching.
Within the global context, AI literacy can be a powerful enabler in supporting the progression and achievement of the United Nations’ Sustainable Development Goals (SDGs). AI literacy can be a driver in equipping future generations with global problem-solving skills for sustainable development, including quality education, decent work, economic growth, and reduced inequalities. Higher education has a critical role to play in identifying pathways to address sustainable development challenges. In addressing SDG 4, i.e., “Inclusive and equitable quality education and lifelong learning opportunities for all”, an AI literacy module can become an enabler for general literacy, knowledge, and skill training in developing countries. An AI study companion or personalised tutor can be trialled in areas with a low supply of suitably qualified teaching professionals; the learning outcomes, student experience, and feedback received from such a trial can inform the customisation of this universal AI literacy module for diverse demographics in developing countries. In further exploration of ‘digital/technology poverty’, older adults and culturally and linguistically diverse communities can also be studied separately to identify the benefits and value of AI literacy. This universal AI literacy module maintains the adaptability to suit such diverse audiences and their literacy levels and information needs.

5. Conclusions

The transformative capabilities of AI, primarily driven by Generative AI, have necessitated a sector-wide rethink of the role of tertiary education institutions in addressing a critical shortage of knowledge and skills. AI literacy is an effective solution to address this gap. We adopted an inclusive, human-centred design approach involving multiple stakeholder groups to identify, rationalise, and formulate the primary constructs of a universal AI literacy module that provides operational competence in AI to students and staff in a tertiary education setting. A universal AI literacy module that meets the needs of a diverse cohort of learners requires effective pedagogical approaches that situate learning in the context of real-world situations and real-world problem solving. The four primary constructs we identified and developed are foundational knowledge of AI, solving problems using AI, the ethical and responsible practice of AI, and entrepreneurship and innovation with AI, which we have explicated in terms of central themes, curriculum, and pedagogical approaches. Although we have focused on a tertiary education setting, the generalised, human-centric disposition of the design of this AI literacy module lends itself to broader implementation and adoption as a micro-credential for industry professionals, a classroom subject/project for high school students, and an AI operating licence for the general public.

Author Contributions

Conceptualization, D.D.S., S.J., M.E.-A., Z.I., H.M. and N.M.; Data curation, S.J. and H.M.; Formal analysis, D.D.S. and Z.I.; Investigation, S.J., M.E.-A. and N.M.; Methodology, D.D.S., S.J., M.E.-A., H.M. and N.M.; Resources, H.M.; Software, Z.I.; Supervision, D.D.S., M.E.-A. and N.M.; Validation, Z.I.; Visualization, S.J.; Writing—original draft, D.D.S., S.J., M.E.-A., Z.I., H.M. and N.M.; Writing—review and editing, D.D.S., S.J., M.E.-A., Z.I., H.M. and N.M. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

The data presented in this study are available upon request from the corresponding author due to human research ethics requirements.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. De Silva, D.; Kaynak, O.; El-Ayoubi, M.; Mills, N.; Alahakoon, D.; Manic, M. Opportunities and Challenges of Generative Artificial Intelligence in Research, Education, Industry Engagement and Social Impact. IEEE Ind. Electron. Mag. 2024, in press.
  2. Gilster, P. Digital Literacy; Wiley Computer Pub.: New York, NY, USA, 1997. [Google Scholar]
  3. Reddy, P.; Sharma, B.; Chaudhary, K. Digital literacy: A review of literature. Int. J. Technoethics 2020, 11, 65–94. [Google Scholar] [CrossRef]
  4. Peng, D.; Yu, Z. A literature review of digital literacy over two decades. Educ. Res. Int. 2022, 2022, 2533413. [Google Scholar] [CrossRef]
  5. Bejaković, P.; Mrnjavac, Ž. The importance of digital literacy on the labour market. Empl. Relat. Int. J. 2020, 42, 921–932. [Google Scholar] [CrossRef]
  6. Ng, D.T.K.; Leung, J.K.L.; Chu, S.K.W.; Qiao, M.S. Conceptualizing AI literacy: An exploratory review. Comput. Educ. Artif. Intell. 2021, 2, 4221–4241. [Google Scholar] [CrossRef]
  7. Long, D.; Blunt, T.; Magerko, B. Co-Designing AI Literacy Exhibits for Informal Learning Spaces. Proc. ACM Hum.-Comput. Interact. 2021, 5, 1–35. [Google Scholar] [CrossRef]
  8. Long, D.; Magerko, B. What is AI literacy? Competencies and design considerations. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems, Honolulu, HI, USA, 25–30 April 2020. [Google Scholar] [CrossRef]
  9. Chen, X.; Zou, D.; Xie, H.; Cheng, G.; Liu, C. Two decades of artificial intelligence in education. Educ. Technol. Soc. 2022, 25, 28–47. [Google Scholar]
  10. Kandlhofer, M.; Steinbauer, G.; Hirschmugl-Gaisch, S.; Huber, P. Artificial intelligence and computer science in education: From kindergarten to university. In Proceedings of the 2016 IEEE Frontiers in Education Conference (FIE), Erie, PA, USA, 12–15 October 2016; pp. 1–9. [Google Scholar]
  11. Zawacki-Richter, O.; Marín, I.V.; Bond, M.; Gouverneur, F. Systematic review of research on artificial intelligence applications in higher education–where are the educators? Int. J. Educ. Technol. High. Educ. 2019, 16, 1–27. [Google Scholar] [CrossRef]
  12. Kuka, L.; Hörmann, C.; Sabitzer, B. Teaching and Learning with AI in Higher Education: A Scoping Review. Learn. Technol. Technol. Learn. Exp. 2022, 551, 551–571. [Google Scholar]
  13. Eloundou, T.; Manning, S.; Mishkin, P.; Rock, D. GPTs are GPTs: An early look at the labor market impact potential of large language models. arXiv 2023, arXiv:2303.10130. [Google Scholar]
  14. Laupichler, M.C.; Aster, A.; Schirch, J.; Raupach, T. Artificial intelligence literacy in higher and adult education: A scoping literature review. Comput. Educ. Artif. Intell. 2022, 3, 100101. [Google Scholar] [CrossRef]
  15. Bloom, B.S.; Krathwohl, D.R. Taxonomy of Educational Objectives: The Classification of Educational Goals. Book 1, Cognitive Domain; Longman: New York, NY, USA, 2020. [Google Scholar]
  16. Su, J.; Ng, D.T.K.; Chu, S.K.W. Artificial intelligence (AI) literacy in early childhood education: The challenges and opportunities. Comput. Educ. Artif. Intell. 2023, 4, 100124. [Google Scholar] [CrossRef]
  17. Zhang, H.; Lee, I.; Ali, S.; DiPaola, D.; Cheng, Y.; Breazeal, C. Integrating ethics and career futures with technical learning to promote AI literacy for middle school students: An exploratory study. Int. J. Artif. Intell. Educ. 2022, 33, 290–324. [Google Scholar] [CrossRef] [PubMed]
  18. Casal-Otero, L.; Catala, A.; Fernández-Morante, C.; Taboada, M.; Cebreiro, B.; Barro, S. AI literacy in K-12: A systematic literature review. Int. J. STEM Educ. 2023, 10, 29. [Google Scholar] [CrossRef]
  19. Chassignol, M.; Khoroshavin, A.; Klimova, A.; Bilyatdinova, A. Artificial Intelligence trends in education: A narrative overview. Procedia Comput. Sci. 2018, 136, 16–24. [Google Scholar] [CrossRef]
  20. Guan, C.; Mou, J.; Jiang, Z. Artificial intelligence innovation in education: A twenty-year data-driven historical analysis. Int. J. Innov. Stud. 2020, 4, 134–147. [Google Scholar] [CrossRef]
  21. De Silva, D.; Alahakoon, D. An artificial intelligence life cycle: From conception to production. Patterns 2022, 3, 100489. [Google Scholar] [CrossRef]
  22. Nallaperuma, D.; De Silva, D.; Alahakoon, D.; Yu, X. Intelligent detection of driver behavior changes for effective coordination between autonomous and human driven vehicles. In Proceedings of the IECON 2018-44th Annual Conference of the IEEE Industrial Electronics Society, Washington, DC, USA, 21–23 October 2018; pp. 3120–3125. [Google Scholar]
  23. Nawaratne, R.; Alahakoon, D.; De Silva, D.; Kumara, H.; Yu, X. Hierarchical two-stream growing self-organizing maps with transience for human activity recognition. IEEE Trans. Ind. Inform. 2019, 16, 7756–7764. [Google Scholar] [CrossRef]
  24. De Silva, D.; Burstein, F.; Jelinek, H.F.; Stranieri, A. Addressing the complexities of big data analytics in healthcare: The diabetes screening case. Australas. J. Inf. Syst. 2015, 19. [Google Scholar] [CrossRef]
  25. Chamishka, S.; Madhavi, I.; Nawaratne, R.; Alahakoon, D.; De Silva, D.; Chilamkurti, N.; Nanayakkara, V. A voice-based real-time emotion detection technique using recurrent neural network empowered feature modelling. Multimed. Tools Appl. 2022, 81, 35173–35194. [Google Scholar] [CrossRef]
  26. De Silva, D.; Yu, X.; Alahakoon, D.; Holmes, G. Semi-supervised classification of characterized patterns for demand forecasting using smart electricity meters. In Proceedings of the 2011 International Conference on Electrical Machines and Systems, Beijing, China, 20–23 August 2011; pp. 1–6. [Google Scholar]
  27. Lyu, W.; Liu, J. Artificial Intelligence and emerging digital technologies in the energy sector. Appl. Energy 2021, 303, 117615. [Google Scholar] [CrossRef]
  28. De Silva, D.; Mills, N.; El-Ayoubi, M.; Manic, M.; Alahakoon, D. ChatGPT and Generative AI Guidelines for Addressing Academic Integrity and Augmenting Pre-Existing Chatbots. In Proceedings of the 2023 IEEE International Conference on Industrial Technology (ICIT), Orlando, FL, USA, 4–6 April 2023; IEEE: Piscataway, NJ, USA, 2023. [Google Scholar]
  29. Kleyko, D.; Osipov, E.; De Silva, D.; Wiklund, U.; Alahakoon, D. Integer self-organizing maps for digital hardware. In Proceedings of the 2019 International Joint Conference on Neural Networks (IJCNN), Budapest, Hungary, 14–19 July 2019; pp. 1–8. [Google Scholar]
  30. Nawaratne, R.; Bandaragoda, T.; Adikari, A.; Alahakoon, D.; De Silva, D.; Yu, X. Incremental knowledge acquisition and self-learning for autonomous video surveillance. In Proceedings of the IECON 2017-43rd Annual Conference of the IEEE Industrial Electronics Society, Beijing, China, 29 October–1 November 2017; pp. 4790–4795. [Google Scholar]
  31. Matharaarachchi, A.; Mendis, W.; Randunu, K.; De Silva, D.; Gamage, G.; Moraliyage, H.; Mills, N.; Jennings, A. Optimizing Generative AI Chatbots for Net-Zero Emissions Energy Internet-of-Things Infrastructure. Energies 2024, 17, 1935. [Google Scholar] [CrossRef]
  32. Cao, Y.; Li, S.; Liu, Y.; Yan, Z.; Dai, Y.; Yu, P.S.; Sun, L. A comprehensive survey of AI-generated content (AIGC): A history of Generative AI from GAN to ChatGPT. arXiv 2023, arXiv:2303.04226. [Google Scholar]
  33. Wei, J.; Bosma, M.; Zhao, V.Y.; Guu, K.; Yu, A.W.; Lester, B.; Le, Q.V. Finetuned language models are zero-shot learners. arXiv 2021, arXiv:2109.01652. [Google Scholar]
  34. Maguire, M. Methods to support human-centred design. Int. J. Hum.-Comput. Stud. 2001, 55, 587–634. [Google Scholar] [CrossRef]
  35. McTaggart, R. Principles for participatory action research. Adult Educ. Q. 1991, 41, 168–187. [Google Scholar] [CrossRef]
  36. Askeroth, H.J.; Richardson, C.J. Instructor perceptions of quality learning in MOOCs they teach. Online Learn. 2019, 23. [Google Scholar] [CrossRef]
  37. Gašević, D.; Siemens, G.; Sadiq, S. Empowering learners for the age of artificial intelligence. Comput. Educ. Artif. Intell. 2023, 4, 100130. [Google Scholar] [CrossRef]
  38. Brophy, J.E.; Freiberg, H.J. Beyond Behaviorism: Changing the Classroom Management Paradigm; Allyn & Bacon, 1999; pp. 3–20. [Google Scholar] [CrossRef]
  39. Honebein, C.P. Seven goals for the design of constructivist learning environments. Constr. Learn. Environ. Case Stud. Instr. Des. 1996, 11, 11. [Google Scholar]
  40. Johnson, B.; Christensen, B.L. Educational Research: Quantitative, Qualitative, and Mixed Approaches; SAGE Publications: Newbury Park, CA, USA, 2004. [Google Scholar]
  41. Hmelo-Silver, C.E.; Duncan, R.G.; Chinn, C.A. Scaffolding and achievement in problem-based and inquiry learning: A response to Kirschner, Sweller, and Clark (2006). Educ. Psychol. 2007, 42, 99–107. [Google Scholar] [CrossRef]
  42. Espey, M. Enhancing critical thinking using team-based learning. High. Educ. Res. Dev. 2018, 37, 15–29. [Google Scholar] [CrossRef]
  43. Styers, L.M.; Zandt, V.A.P.; Hayden, L.K. Active learning in flipped life science courses promotes development of critical thinking skills. CBE—Life Sci. Educ. 2018, 17, ar39. [Google Scholar] [CrossRef]
  44. Dhanarajan, G. Sustaining knowledge societies through distance learning: The nature of the challenge. In Proceedings of the 19th Annual Conference of the Association of Asian Open Universities, Jakarta, Indonesia, 23–26 June 2009. [Google Scholar]
  45. Ehlers, U.D.; Pawlowski, M.J. Quality in European e-learning: An introduction. In Handbook on Quality and Standardisation in e-Learning; Springer: Berlin/Heidelberg, Germany, 2006. [Google Scholar] [CrossRef]
  46. Sun, P.C.; Tsai, J.R.; Finger, G.; Chen, Y.Y.; Yeh, D. What drives a successful e-learning? An empirical investigation of the critical factors influencing learner satisfaction. Comput. Educ. 2008, 50, 1183–1202. [Google Scholar] [CrossRef]
  47. Martin, F.; Bolliger, U.D. Engagement matters: Student perceptions on the importance of engagement strategies in the online learning environment. Online Learn. 2018, 22, 205–222. [Google Scholar] [CrossRef]
  48. Lu, H. Online learning: The meanings of student engagement. Educ. J. 2020, 9, 73–79. [Google Scholar] [CrossRef]
  49. Garrison, R.D.; Cleveland-Innes, M. Facilitating cognitive presence in online learning: Interaction is not enough. Am. J. Distance Educ. 2005, 19, 133–148. [Google Scholar] [CrossRef]
  50. Roddy, C.; Amiet, D.; Chung, J.; Holt, C.; Shaw, L.; McKenzie, S.; Garivaldis, F.; Lodge, J.M.; Mundy, M.E. Applying best practice online learning, teaching, and support to intensive online environments: An integrative review. Front. Educ. 2017, 2, 59. [Google Scholar] [CrossRef]
  51. Veale, M.; Zuiderveen Borgesius, F. Demystifying the Draft EU Artificial Intelligence Act—Analysing the good, the bad, and the unclear elements of the proposed approach. Comput. Law Rev. Int. 2021, 22, 97–112. [Google Scholar] [CrossRef]
  52. Hagendorff, T. The ethics of AI ethics: An evaluation of guidelines. Minds Mach. 2020, 30, 99–120. [Google Scholar] [CrossRef]
  53. Hu, K. ChatGPT Sets Record for Fastest-Growing User Base—Analyst Note. Available online: https://www.reuters.com/technology/chatgpt-sets-record-fastest-growing-user-base-analyst-note-2023-02-01/ (accessed on 15 May 2024).
  54. Lee, A.J. Algorithmic bias and the New Chicago School. Law Innov. Technol. 2022, 14, 95–112. [Google Scholar] [CrossRef]
  55. Australian Government. Australian Universities Accord: Interim Report; Department of Education: Canberra, Australia, 2023.
Figure 1. Human-centred design approach for constructing the universal AI literacy module.
Table 1. The four constructs and corresponding topics of the universal AI literacy module.
Constructs (with levels of learning from Bloom’s taxonomy) and their corresponding topics:
Foundational AI Knowledge (Remember, Understand)
- Applications of AI across diverse disciplines and industries
- The lifecycle of an AI application, from design to deployment
- Datasets and attributes used for AI model building
- Algorithms used for learning, reasoning, and optimisation
- AI models and hyperparameters
- AI model evaluation methods and metrics
- AI model deployment and scalability of AI solutions
- Management of AI systems and solutions
Solving Problems Using AI (Understand, Analyse, Apply)
- Basic to advanced skills in prompt engineering
- Using AI to produce creative work
- Identification and design of an AI solution
- Fit-for-purpose comparison of existing AI solutions
- AI model interpretation and explainability
- Insight generation using AI solutions
- Building human-in-the-loop AI systems
- Evaluation of AI-based decision-making methods and metrics
Ethical and Responsible Practice of AI (Analyse, Apply, Evaluate)
- AI regulations, local and international
- AI ethics guidelines, codes of conduct, and best practices
- Responsible approaches to prompt engineering
- Responsible approaches to AI creativity
- Bias detection, reporting, and remediation methods
- Responsible use and referencing of all AI-generated content
- Methods and metrics for the detection of AI-generated content
- Lifelong learning for the responsible practice of AI
Entrepreneurship and Innovation with AI (Evaluate, Create)
- Critical thinking and analytical mindset
- AI-first approaches to business strategy, operations, and planning
- Recruiting and leading AI teams
- Fundamentals of IP, patents, and commercialisation
- Pitching AI to investors
- Scaling AI solutions
- Presenting AI to non-technical audiences
Table 2. Exemplars of AI literacy by stakeholder group.
Stakeholder Group | Foundational Knowledge of AI | Solving Problems Using AI | Ethical and Responsible Practice of AI | Entrepreneurship and Innovation with AI
Coursework students | Describing the AI capabilities of a smartphone application | Understanding complex topics using AI-based explanations | Guidelines for the responsible use of Generative AI content in work, study, and personal settings | Recognising alternate career pathways for graduate employability
Research students | Unpacking the functionality of an AI research tool for literature review | Comparing research methods by expected outcomes | Ensuring the reproducibility of research outcomes when using AI-based research tools | Contributing a library of customised AI tools for discipline-specific research activities
Academic staff | Recognising how AI-generated content can be included in assignment submissions | Integrating classroom experience into personalised, authentic assessments | Knowing the risks of bias, inaccuracies, and fallacies when integrating AI into learning | Advocating and implementing personalised learner journeys that reduce attrition
Professional staff | Identifying opportunities for integrating AI into work activities | Using Generative AI for process automation | Preserving the privacy, confidentiality, and integrity of sensitive data when using AI tools | Progressing the digital transformation of institutional operations into AI transformation
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
