Article

Literacy in Artificial Intelligence as a Challenge for Teaching in Higher Education: A Case Study at Portalegre Polytechnic University

Eduardo Lérias, Cristina Guerra and Paulo Ferreira
1 Polytechnic Institute of Portalegre, 7350-092 Portalegre, Portugal
2 VALORIZA-Research Center for Endogenous Resource Valorization, 7300-555 Portalegre, Portugal
3 Centro de Estudos e Formação Avançada em Gestão e Economia, Instituto de Investigação e Formação Avançada, Universidade de Évora, Largo dos Colegiais 2, 7004-516 Évora, Portugal
* Author to whom correspondence should be addressed.
Information 2024, 15(4), 205; https://doi.org/10.3390/info15040205
Submission received: 11 March 2024 / Revised: 4 April 2024 / Accepted: 4 April 2024 / Published: 5 April 2024
(This article belongs to the Special Issue Advancing Educational Innovation with Artificial Intelligence)

Abstract

The growing impact of artificial intelligence (AI) on humanity is unavoidable, and "AI literacy" is therefore extremely important. In the field of education, known as AI in education (AIED), this technology is having a huge impact on the educational community and on the education system itself. The present study seeks to assess the level of AI literacy and knowledge among teachers at Portalegre Polytechnic University (PPU), aiming to identify gaps, find the main opportunities for innovation and development, assess the degree of relationship between the dimensions of an AI literacy questionnaire, and identify the predictive variables in this matter. As a measuring instrument, a validated questionnaire based on three dimensions (AI Literacy, AI Self-Efficacy, and AI Self-Management) was applied to a sample of 75 teachers across the schools of PPU. The results reveal an average level of AI literacy (3.28 on a Likert scale from 1 to 5), with 62.4% of responses at levels 3 and 4. They also show that the first dimension is highly significant for the overall AI literacy score, that no factor characterizing the sample is a predictor, and that the below-average result in the learning factor indicates a pressing need to focus on developing these skills.

1. Introduction and Literature Review

The recent dissemination of artificial intelligence (AI) to the general public has promoted studies on its application in everyday life. The growing impact of AI on humanity is unavoidable, and it is therefore extremely important to understand what it is and what it can do. The set of skills covering the use of, application of, and interaction with AI is currently called "AI literacy".
The importance of this topic arises, from the outset, in the field of education—AI in education (AIED)—where this technology is having a huge impact on the educational community and on the education system itself. Studying the use of AIED is a search for solutions that can add value to the teaching–learning process, supporting teachers and students, highlighting the human factor, their thinking skills, teamwork and flexibility, management of knowledge, ethics, and responsibility [1].
The scientific term “artificial intelligence”, as a science of intelligent machines, according to [2,3], dates back to 1956. This was followed during the 1980s by a great development of intellectual skills in machines, as well as the first attempts to replicate the teaching process using AI [1]. Ref. [3] states that the entire education system should be reviewed, not only to make it more practical, but also more open to the world of work and to anticipate the transformations in knowledge. AIED began as a field of recreation and research for computer scientists, with a great impact on education [4], and today fuels the controversy referred to by [5] regarding the use of AIED and the fear that the machine will replace the teacher [6,7].
The acquisition and development of digital skills are seen as essential tools to facilitate lifelong learning, and are therefore one of the main economic concerns in most developed countries. “Literacy”, the ability to read and write and to perceive and interpret what is read, undergoes an important development when it becomes clear that, despite having the ability to write and read, some people are unable to understand the meaning of what they read. In terms of information and communication technologies (ICTs), digital literacy has been studied in depth, but there is still no consensual definition, because the ability to use a computer is currently an insufficient measure to define digital literacy [8].
When AI is introduced into the concept of digital literacy, the scenario becomes even more complex. According to [9], AI literacy is more than knowing how to use AI-driven tools, as it involves lower- and higher-level thinking skills to understand the knowledge and capabilities behind AI technologies and make work easier. For this author, it will not be possible to adequately understand this technology as long as we insist on considering it only as knowledge and skills, as AI involves attitudes and moral decision making for the development of AI literacy and its responsible use. According to [10,11], AI literacy is composed of different competencies, enabling individuals to critically evaluate the use of those kinds of technologies, to communicate and collaborate with AI, and to use it in different contexts, its objective being to describe the skills necessary for a basic understanding of AI.
The widely reported and recognized need for AI regulation has led to new steps in this direction. On 26 October 2023, the Secretary-General of the United Nations (UN), António Guterres, launched a high-level multisectoral advisory body on AI to identify risks, challenges, and main opportunities. More recently, the Spanish presidency of the EU Council announced that the EU co-legislators, the Council and the European Parliament, had reached a provisional agreement on the world's first rules for AI, advancing the preparation of a regulation aiming to ensure that AI used in the EU is safe and respects European rights and values [12].
On 26 January 2024, the Council of the European Union approved the Proposal for a Regulation of the European Parliament and of the Council, establishing harmonized rules in the field of artificial intelligence (Artificial Intelligence Act) and amending certain legislative acts. Aiming to ensure a high level of protection of health, safety, and fundamental rights, including democracy, the rule of law, and environmental protection, the possible first AI Act includes sanctions for non-compliance, impact assessment on fundamental rights, provisions for testing high-risk AI systems, and rules and obligations for all general-purpose AI models, regulating the development, deployment, and use of artificial intelligence systems [13].
In Portugal, the most recent document with official recommendations for the use of AI is a guide for ethical, transparent, and responsible artificial intelligence in Public Administration, published in July 2022 by the Agency for Administrative Modernization [14]. This document advances a structuring conceptualization for ethical, responsible, and transparent AI, identifies barriers, challenges, and dangers, and presents recommendations and a tool for risk assessment. Despite its very complete content, the level of dissemination of this document and its effective contribution to AI literacy in Portuguese public administration remain unknown.
Although research into AIED has at its heart the desire to support student learning, experience from other areas of AI suggests that this ethical intention is not, in itself, sufficient [15,16,17,18]. Complementing this, education research shows that factors such as teacher–student interaction, educational programs, teachers’ attitudes, and their decisions made in the classroom are related to the ethical dimension [18].
So, for the ethical dimension of AI, there is a need to consider issues such as equity, responsibility, transparency, partiality, autonomy, and inclusion, and also to distinguish between "doing ethical things" and "doing things ethically", in order to understand and make pedagogical choices that are ethical and take into account the ever-present possibility of unintended consequences [18,19,20,21]. In academia, where the citizens of the future are prepared, there is a risk that these AI tools will be used constantly, irresponsibly, and without ethical principles [22]. As an example, it was recently detected that around 200 scientific papers had been written with ChatGPT and accepted by scientific journals (https://www.dailymail.co.uk/sciencetech/article-13211523/ChatGPT-scandal-AI-generated-scientific-papers.html, accessed on 28 March 2024).
According to [23], the generalized use of AIED can potentially harm teacher–student interaction and compromise the development of independent and capable students. This threat may be amplified by marketing efforts to make the public believe in neutral and objective AI algorithms. This reveals two dimensions of the real ethical problem of AI: the ethical user–system relation in AI systems and the ethical use of AI systems by users. Much work remains to be done in this area and, in this context, [4] recognizes that most AIED researchers do not have the training to deal with emerging ethical issues.
Indeed, Ref. [24] suggests some principles for ethical and reliable AIED that should be considered, namely
(i)
Governance and management principle: AIED governance and management must take into account interdisciplinary and multi-stakeholder perspectives, as well as all ethical considerations from relevant domains, including, among others, data ethics, learning analytics ethics, computational ethics, human rights and inclusion;
(ii)
Principle of transparency of data and algorithms: The process of collecting, analyzing, and communicating data must be transparent, with informed consent and clarity about data ownership, accessibility, and the objectives of its use;
(iii)
Accountability principle: AIED regulation must explicitly address recognition and responsibility for the actions of each stakeholder involved in the design and use of systems, including the possibility of auditing, the minimization and communication of negative side effects, trade-offs, and compensation;
(iv)
Principle of sustainability and proportionality: AIED must be designed, developed, and used in a way that does not disrupt the environment, the global economy, and society, namely the labor market, culture, and politics;
(v)
Privacy principle: AIED must guarantee the user’s informed consent and maintain the confidentiality of user information, both when they provide information and when the system collects information about them;
(vi)
Security principle: AIED must be designed and implemented to ensure that the solution is robust enough to effectively safeguard and protect data against cybercrime, data breaches, and corruption threats, ensuring the privacy and security of sensitive information;
(vii)
Safety principle: AIED systems must be designed, developed, and implemented according to a risk management approach, in order to protect users from unintentional and unexpected harm and reduce the number of serious situations;
(viii)
Principle of inclusion and accessibility: The design, development, and implementation of AIED must take into account infrastructure, equipment, skills, and social acceptance, allowing equitable access to and use of AIED;
(ix)
Human-centered AIED principle: The aim of AIED should be to complement and enhance human cognitive, social, and cultural capabilities, while preserving meaningful opportunities for freedom of choice and ensuring human control over AI-based work processes.
In turn, ref. [25] states that definitions of AI literacy differ in terms of the exact number and configuration of skills they entail and, referring to [26], indicates that conceptualizations of AI literacy in education can be organized into four concepts: (1) knowing and understanding AI, (2) using and applying AI, (3) evaluating and creating AI, and (4) AI ethics. For those authors, the vast majority of conceptualizations of AI literacy parallel Bloom's taxonomy in their general configuration of skills. Considering that this taxonomy constitutes the basis of countless formulations of competences in schools and universities, this parallel is of enormous importance for AIED.
It is still difficult to measure AI literacy. Four published scales are currently used for this purpose, three of which are not school-focused but can be used for more general measurement. Because they are not based on established theoretical models of competences, the interpretation of the latent factors of these scales can seem arbitrary [25]. In fact, Carolus et al. [25] developed a new measuring instrument based on the existing literature on AI literacy, which is modular, meets psychometric requirements, and includes other psychological skills in addition to the classical facets of AI literacy.
Although it is not objectively clear how the development of AI can be applied to education systems, enthusiasm is growing, with excessive optimism regarding the potential to transform current education systems [27]. Ref. [4] sought to identify potential aspects of threat, hype, and promise in AIED, highlighted the importance of traditional pedagogical values such as skepticism, and argued that the ultimate goal of education should be to promote responsible citizens and healthy educated minds. Therefore, the adoption of ethical frameworks for the use and development of AIED is extremely important, ensuring that they will be continually discussed and updated in light of the rapid development of AI techniques and their potential for widespread application [28].
At the same time, a set of questions must be carefully considered and comprehensively addressed as soon as possible: “What will be the future role of the teacher, and other school personnel, in education with AI systems? And how does this align with our beliefs or pedagogical theories? Do educational leaders and teachers have enough knowledge in the field of AI to distinguish a poorly developed system from a good one? Or how to apply them appropriately in the education context? Furthermore, how can we protect student and teacher data when the skills and knowledge to develop AIED systems are in the hands of for-profit organizations and not in the education sector?”. In particular, the issue of aligning AI with pedagogical theory must remain on the table, as any new technology integrated into education must be designed to fill a pedagogical need [4].
Although the use of questionnaires to assess AI literacy is still limited, mainly because few validated questionnaires exist, some evidence on AI literacy in higher education can be found in the literature. For example, ref. [29] concludes that the use of tools associated with artificial intelligence, in an exploratory learning environment, can benefit teaching itself. Ref. [30] relates teachers' knowledge of AI to data literacy, proposing an approach that maps the data literacy competencies needed to use AI. Based on a literature review and a survey applied in Serbia, in this case among students, ref. [31] concludes that AI, together with machine learning, has the potential to improve the learning levels of the student population. In a recent study, ref. [32] analyzes the adoption of artificial intelligence in higher education practices, relating it to teachers' literacy levels and their opinions on the conditions under which the use of AI tools is defensible, identifying clear concerns with justice and responsibility, as well as a lack of knowledge about the phenomenon of AI. In fact, despite the increase in knowledge about AI applied to education, it remains a challenge in the current context [33].
Considering the relevance of understanding the use of artificial intelligence in education, in particular in the higher education context, the present study seeks to assess the level of AI literacy and knowledge among lecturers at Portalegre Polytechnic University (PPU), aiming to identify gaps and find the main opportunities for innovation and development so that the education system can adopt AIED as an ally in promoting higher-quality education better prepared for the challenges of the future. As specific objectives, we seek to assess the degree of relationship between the dimensions of AI literacy and identify the predictive factors.
The remainder of this paper is organized as follows: Section 2 presents the materials and methods, in particular, the questionnaire; Section 3 presents the results; Section 4 discusses the results and concludes the analysis.

2. Materials and Methods

Despite the high number of studies produced on AI literacy to date, its measurement is still complex. The difficulties of conceptualization and the fact that many articles on the subject originate in an educational context limit the development of measurement scales and therefore their adoption in different contexts.
Ref. [25] developed a measuring instrument that builds on the existing literature on AI literacy. The questionnaire is modular (including distinct facets that can be used independently), is easily applicable to professional life, meets psychometric requirements, and includes other psychological skills besides the classic facets of AI literacy, having been tested for its factorial structure. Therefore, the questionnaire by [25] was applied in this study, adapted to the Portuguese language. It consists of 29 questions, based on three dimensions (AI Literacy, AI Self-Efficacy, and AI Self-Management), measured using a 5-point Likert scale (from 1 = "totally disagree" to 5 = "completely agree"). A final note about the questionnaire: we decided to keep the original names of the dimensions proposed by [25], although the first dimension (AI Literacy) may be confused with the final result, which the authors in fact name as a scale.
In the first dimension (AI Literacy), using and applying AI, according to [34], means applying knowledge, concepts, and applications of AI in different scenarios and implies understanding the applications of AI and how they can affect one's life. In turn, the knowing and understanding AI factor means knowing the basic functions of AI and knowing how to use its applications, covering the acquisition of fundamental concepts, skills, knowledge, and attitudes that do not require prior knowledge, as well as understanding the underlying technologies and basic concepts behind AI in different products and services. The AI ethics factor, advanced by the same author, means human-centered considerations (for example, equity, responsibility, transparency, ethics, and security), therefore incorporating knowledge of ethical issues relating to AI technologies. Still in this dimension, the detecting AI factor, according to [10,35], means distinguishing between technological equipment that does and does not use AI.
The second dimension (AI Self-Efficacy) integrates the Problem Solving factor. According to [35], this means voluntary behavior aimed at solving problems, based on belief in the advantages of behavioral success, external approval, and the level of control of internal and external factors. The learning factor, according to [36], means understanding how AI learns and can be affected by data, that is, having a basic understanding of how AI and machine learning work, as well as knowledge of the implications of data quality, feedback, and one's own interaction data. Still on this factor, ref. [37] integrates skills that allow the development of adaptive knowledge to benefit from self-learning and technological evolution, with [38] including the level of readiness for AI.
The third and final dimension (AI Self-Management) integrates the AI persuasion literacy factor, which, according to [36], means understanding how the human-like characteristics of AI systems can unconsciously manipulate users' perceptions and behaviors, and being able to counter such attempts at influence. According to the same author, the Emotion Regulation factor means the constructive management of negative emotions (such as frustration and anxiety) when interacting with AI systems.
The sample was made up of 75 teachers from the various schools of PPU, out of a total of 225, corresponding to one third of the population. The full set of teachers includes a relevant number of invited lecturers, who are normally less likely to answer this kind of questionnaire, which helps explain the 75 completed responses. The questionnaire was administered online, and all teachers were invited to answer it via institutional email. Participation was completely voluntary and anonymous. The questionnaire was preceded by an explanation of its objectives, and respondents' consent was explicitly obtained, following PPU procedures.
The respondents are aged from 25 to over 50, with 70.7% being 45 or over. A total of 41 participants were female (54.7%), 33 were male (44.0%), and 1 chose not to specify (1.3%). The largest proportion of participants came from the School of Technology, Management and Design (40.0%), followed by the School of Health (29.3%), the School of Education and Sciences (22.7%), and the School of Biosciences of Elvas (8.0%). It is noteworthy that most participants teach in more than one study cycle, with 25.3% teaching in higher technical courses and bachelor's degrees, 22.7% teaching bachelor's and master's degrees, and 21.3% teaching only bachelor's degrees, while 20.0% teach in all three study cycles (higher technical courses, bachelor's degrees, and master's degrees). The main areas of basic training of participants are health (29.3%), social and behavioral sciences (13.3%), and business sciences (12.0%).
The instrument used in the present study, designed by [25], covers the three dimensions based on the existing literature (AI Literacy, AI Self-Efficacy, and AI Self-Management), each containing more than one descriptor factor. The whole questionnaire can be consulted in Table A1.
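To make the scoring transparent, the factor, dimension, and overall means reported in Section 3 can be reproduced as simple item averages. The sketch below is a minimal illustration in Python, not the authors' own code: it assumes the 29 Likert responses sit in a pandas DataFrame, and the column names and grouping dictionaries are hypothetical labels chosen here to mirror Table A1.

```python
import pandas as pd

# Hypothetical column names mirroring Table A1; the real dataset may use other labels.
FACTORS = {
    "Use and apply AI":       [f"use_{i}" for i in range(1, 7)],
    "Know and understand AI": [f"know_{i}" for i in range(1, 6)],
    "Detect AI":              [f"detect_{i}" for i in range(1, 4)],
    "AI Ethics":              [f"ethics_{i}" for i in range(1, 4)],
    "Problem Solving":        [f"solve_{i}" for i in range(1, 4)],
    "Learning":               [f"learn_{i}" for i in range(1, 4)],
    "AI Persuasion Literacy": [f"persuade_{i}" for i in range(1, 4)],
    "Emotion Regulation":     [f"emotion_{i}" for i in range(1, 4)],
}
DIMENSIONS = {
    "AI Literacy": ["Use and apply AI", "Know and understand AI", "Detect AI", "AI Ethics"],
    "AI Self-Efficacy": ["Problem Solving", "Learning"],
    "AI Self-Management": ["AI Persuasion Literacy", "Emotion Regulation"],
}

def score(responses: pd.DataFrame) -> pd.Series:
    """Mean of all Likert answers (1-5) per factor, per dimension, and overall."""
    out = {}
    for factor, items in FACTORS.items():
        out[factor] = responses[items].stack().mean()   # pool all answers to the factor's items
    for dim, factors in DIMENSIONS.items():
        items = [i for f in factors for i in FACTORS[f]]
        out[dim] = responses[items].stack().mean()
    all_items = [i for items in FACTORS.values() for i in items]
    out["Total"] = responses[all_items].stack().mean()
    return pd.Series(out).round(2)

# Usage (df holds one row per respondent): print(score(df))
```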

3. Results

The results of the questionnaire, presented in Table 1, reveal an average level of AI literacy (3.28), highlighting that 62.4% of responses are at levels 3 and 4.
The AI Literacy dimension recorded the highest average response (3.56), with the using and applying AI factor having the highest average response (3.85), followed by the AI ethics factor with an average of 3.73. Still in this dimension, the knowing and understanding AI factor had an average response of 3.46 and detecting AI was at 3.20 (see Figure 1). In fact, most respondents report being able to use and apply AI, as well as to act ethically with regard to AI.
In turn, the AI Self-Efficacy dimension obtained the lowest average response (2.86), highlighting that learning was the factor with the lowest average response (2.49). Specifically in this factor, 65.4% of participants responded with levels 2 and 3, reflecting that they have more difficulty in handling problems and challenges related to AI (see Figure 2).
Finally, in the AI Self-Management dimension, which had an average response of 3.41, the AI persuasion literacy factor had an average response of 3.27 and Emotion Regulation had an average of 3.55, revealing respondents’ greater perception of the possibility of controlling their emotions regarding the use of AI than in considering the influence of AI in their daily life (see Figure 3).
A more detailed analysis of the means and standard deviation of each question can be seen in Appendix B (Table A2), with the respective mean values, ranging from 2.40 (“Despite the rapid changes in the field of artificial intelligence, I can always keep up to date”) to 4.25 (“I can operate AI applications in everyday life.”).
We continued the analysis by calculating the internal consistency of the instrument, obtaining a Cronbach's alpha of 0.930. The correlation coefficients between each question and the total suggest good internal validity, exceeding the critical threshold (0.20) even for the items with the lowest values; the vast majority of items have a correlation greater than 0.35, and some exceed 0.5 (see Table 2). It is noteworthy that, in the AI Self-Management dimension, two items related to AI persuasion literacy have a correlation value lower than 0.3. Even so, it was decided to keep them, as their elimination would not significantly improve the instrument and, above all, the aim is to maintain the theoretical framework chosen for the objective of this study, that is, to assess the sample's level of AI literacy.
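For completeness, both reliability statistics follow standard formulas and can be recomputed from the raw item responses. The following Python sketch is ours (the authors report SPSS-style output); `items` is assumed to be a DataFrame with one column per questionnaire item.

```python
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha: (k / (k - 1)) * (1 - sum of item variances / variance of the total score)."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

def corrected_item_total(items: pd.DataFrame) -> pd.Series:
    """Correlation of each item with the sum of the remaining items (the item is excluded from the total)."""
    total = items.sum(axis=1)
    return pd.Series({col: items[col].corr(total - items[col]) for col in items.columns})

# Hypothetical usage with a DataFrame `df` holding the 29 Likert items:
# print(round(cronbach_alpha(df), 3))   # the value reported in this study is 0.930
# print(corrected_item_total(df))       # compare with the Corrected Item-Total Correlation column of Table 2
```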
To study construct validity, the principal component factor analysis (PCFA) method with varimax rotation was chosen. Following this procedure, a Kaiser–Meyer–Olkin (KMO) measure of 0.812 was obtained, which reflects reasonable sampling adequacy [39]. Bartlett's test of sphericity is associated with a chi-square of 1783.012. Factor extraction followed the method advocated by [40], which consists of inspecting the scree plot. Results of the component factor analysis are presented in Table 3.
Inspection of the scree plot suggested the existence of three factors. Considering three factors, the results were relatively aligned with the reference instrument, with those factors explaining 58.6% of the variance (the first factor explained 36.09%, the second 16.20%, and the third 6.31%).
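The extraction procedure described above (KMO, Bartlett's test, principal components, varimax rotation, scree-plot inspection) can be reproduced, for instance, with the third-party factor_analyzer package in Python. This is an illustrative sketch under that assumption, not the software actually used in the study.

```python
import pandas as pd
from factor_analyzer import FactorAnalyzer
from factor_analyzer.factor_analyzer import calculate_bartlett_sphericity, calculate_kmo

def pcfa(items: pd.DataFrame, n_factors: int = 3):
    """KMO, Bartlett's sphericity test, and a principal-component extraction with varimax rotation."""
    chi_square, p_value = calculate_bartlett_sphericity(items)   # Bartlett's test of sphericity
    _, kmo_total = calculate_kmo(items)                          # overall Kaiser-Meyer-Olkin measure
    fa = FactorAnalyzer(n_factors=n_factors, rotation="varimax", method="principal")
    fa.fit(items)
    eigenvalues, _ = fa.get_eigenvalues()                        # plot these against rank for the scree test
    _, proportion, cumulative = fa.get_factor_variance()         # variance explained per retained factor
    loadings = pd.DataFrame(fa.loadings_, index=items.columns,
                            columns=[f"Component {i + 1}" for i in range(n_factors)])
    return kmo_total, (chi_square, p_value), eigenvalues, proportion, cumulative, loadings

# In this study the reported values were KMO = 0.812, Bartlett chi-square = 1783.012, and
# three retained components explaining roughly 36.1%, 16.2%, and 6.3% of the variance.
```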
In terms of correlations between dimensions, as presented in Table 4, the three-factor solution showed very high levels of correlation, with values from 0.501 to 0.766 between dimensions and from 0.636 to 0.949 between each dimension and the total. The results demonstrate that the first dimension is highly significant for the total, i.e., for AI literacy in the sample.
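The pairwise correlations and the significance flags in Table 4 can be checked, for example, with scipy's pearsonr, which returns both the coefficient and the two-tailed p-value; the DataFrame of component scores used below is a hypothetical placeholder.

```python
from itertools import combinations

import pandas as pd
from scipy.stats import pearsonr

def correlation_table(scores: pd.DataFrame) -> pd.DataFrame:
    """Pairwise Pearson correlations with two-tailed p-values for a set of component scores."""
    rows = []
    for a, b in combinations(scores.columns, 2):
        r, p = pearsonr(scores[a], scores[b])
        rows.append({"pair": f"{a} vs. {b}", "r": round(r, 3), "p (2-tailed)": round(p, 4)})
    return pd.DataFrame(rows)

# Hypothetical usage with columns "Component 1", "Component 2", "Component 3", and "Total":
# print(correlation_table(component_scores))
```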

4. Discussion and Conclusions

The objective of this study was to assess the AI literacy of PPU teachers, in order to identify gaps and find the main opportunities for innovation and development, specifically seeking to assess the degree of relationship between the dimensions of AI literacy and identify what could be the predictive variables in this matter.
The results of the questionnaire revealed an overall average level of AI literacy in the sample, meaning it will be desirable to implement strategies to develop faculty skills in AI matters given the growing impact of AIED on the education community and the education system itself. A higher level of AI literacy will allow us to find and implement better solutions to add value to the teaching–learning process through AI technologies and simultaneously support teachers and students, consistent, for example, with the results of [30].
The correlation between the three dimensions studied allows us to conclude that the AI Literacy dimension is the biggest predictor of the overall AI literacy score in the sample. Since this dimension integrates the factors of use and application, knowledge and understanding, detection, and ethics, specifically developing these skills should increase participants' overall level of literacy. However, the below-average result of the learning factor, which incorporates understanding of how AI technologies work, the development of adaptive knowledge, and the level of readiness for AI, indicates a pressing need to focus on developing these skills through awareness-raising policies and targeted training actions.
From the study of correlations, it was concluded that no factor characterizing the sample (age, gender, study cycle taught, or area of training) is a predictor. Therefore, these factors do not explain the sample’s level of literacy, nor do they determine or limit it.
Also noteworthy are teachers' higher levels in applying AI knowledge, concepts, and applications in different scenarios and in awareness of ethical issues relating to AI technologies, such as equity, responsibility, transparency, ethics, and safety, given the recent emergence of public and widespread use of AI applications by the education community (see, for example, [32]).
The results presented in this work are relevant to the reality studied, allowing specific measures to be taken to increase the level of AI literacy in its various components, not only in terms of knowledge of the different possible tools for teaching and their possible integration into the teaching–learning process, but also in terms of how to deal with challenges related to ethical concerns. A more in-depth analysis of the tools that may be used by both students and teachers may also be relevant, promoting the possible involvement of the academic community in the joint analysis of these issues. Also associated with the level of AI literacy, creating initiatives that can demystify the use of AI in higher education, in particular, by demonstrating ways in which it can be useful in the daily lives of the academic community, could help to encourage the fair use of this type of tool.
The main limitations identified are the sample size, which prevents generalization, and the absence of comparable applications of the same measuring instrument. This suggests future work applying the instrument to the student body at PPU and expanding similar studies to Portuguese polytechnic higher education, looking for possible predictors in a broader educational community and identifying intervention priorities to increase AI literacy in academia in Portugal. Another limitation is that the study was applied to a single higher education institution. In the future, these results could be complemented with assessments not only in other institutions, but also in other professional fields, including technical and non-technical professions, and in other areas where AI could be used. A final note is that this is one of the first applications of the questionnaire to higher education, making comparisons difficult. This should also be considered in future work, even comparing different kinds of higher education institutions.

Author Contributions

Conceptualization, E.L., C.G., and P.F.; methodology, E.L., C.G., and P.F.; validation, E.L., C.G., and P.F.; formal analysis, E.L., C.G., and P.F.; data curation, E.L., C.G., and P.F.; writing—original draft preparation, E.L., C.G., and P.F.; writing—review and editing, E.L., C.G., and P.F. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by Fundação para a Ciência e a Tecnologia (grant UIDB/05064/2020).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data will be supplied on request.

Conflicts of Interest

The authors declare no conflicts of interest.

Appendix A

Table A1. Questionnaire presentation.

AI Literacy
Use and Apply AI [9]
1. I can operate AI applications in everyday life.
2. I can use AI applications to make my everyday life easier.
3. I can use artificial intelligence meaningfully to achieve my everyday goals.
4. In everyday life, I can interact with AI in a way that makes my tasks easier.
5. In everyday life, I can work together gainfully with artificial intelligence.
6. I can communicate gainfully with artificial intelligence in everyday life.
Know and Understand AI [9]
7. I know the most important concepts of the topic "artificial intelligence".
8. I know definitions of artificial intelligence.
9. I can assess what the limitations and opportunities of using AI are.
10. I can think of new uses for AI.
11. I can imagine possible future uses of AI.
Detect AI [10,34]
12. I can tell if I am dealing with an application based on artificial intelligence.
13. I can distinguish devices that use AI from devices that do not.
14. I can distinguish if I interact with AI or a "real human".
AI Ethics [9]
15. I can weigh up the consequences of using AI for society.
16. I can incorporate ethical considerations when deciding whether to use data provided by AI.
17. I can analyze AI-based applications for their ethical implications.
AI Self-Efficacy
Problem Solving [35]
18. I can rely on my skills in difficult situations when using AI.
19. I can handle most problems in dealing with artificial intelligence well on my own.
20. I can usually also solve strenuous and complicated tasks well when working with artificial intelligence.
Learning [25,26,27]
21. I can keep up with the latest innovations in AI applications.
22. Despite the rapid changes in the field of artificial intelligence, I can always keep up to date.
23. Although there are often new AI applications, I manage to always be "up to date".
AI Self-Management
AI Persuasion Literacy [25]
24. I don't let AI influence me in my everyday decisions.
25. I can prevent AI from influencing me in my everyday decisions.
26. I realize it if artificial intelligence is influencing me in my everyday decisions.
Emotion Regulation [25]
27. I keep control over feelings like frustration and anxiety while doing everyday things with AI.
28. I can handle it when everyday interactions with AI frustrate or frighten me.
29. I can control my euphoria that arises when I use artificial intelligence for everyday purposes.

Appendix B

Table A2. Mean and standard deviation for each question.

Original Dimension | Item | Mean | Std. Dev.
AI Literacy | Use and apply AI_1 | 4.25 | 0.931
AI Literacy | Use and apply AI_2 | 4.21 | 0.920
AI Literacy | Use and apply AI_3 | 3.80 | 1.090
AI Literacy | Use and apply AI_4 | 4.05 | 0.985
AI Literacy | Use and apply AI_5 | 3.88 | 0.999
AI Literacy | Use and apply AI_6 | 2.92 | 1.171
AI Literacy | Know and understand AI_1 | 3.29 | 1.037
AI Literacy | Know and understand AI_2 | 3.44 | 1.030
AI Literacy | Know and understand AI_3 | 3.44 | 1.017
AI Literacy | Know and understand AI_4 | 3.41 | 1.079
AI Literacy | Know and understand AI_5 | 3.69 | 0.986
AI Literacy | Detect AI_1 | 3.27 | 1.031
AI Literacy | Detect AI_2 | 3.04 | 1.045
AI Literacy | Detect AI_3 | 3.28 | 0.966
AI Literacy | AI Ethics_1 | 3.57 | 1.068
AI Literacy | AI Ethics_2 | 3.85 | 1.062
AI Literacy | AI Ethics_3 | 3.77 | 1.073
AI Self-Efficacy | Problem Solving_1 | 3.48 | 1.070
AI Self-Efficacy | Problem Solving_2 | 3.12 | 1.078
AI Self-Efficacy | Problem Solving_3 | 3.09 | 0.989
AI Self-Efficacy | Learning_1 | 2.64 | 1.048
AI Self-Efficacy | Learning_2 | 2.40 | 1.053
AI Self-Efficacy | Learning_3 | 2.43 | 1.055
AI Self-Management | AI Persuasion Literacy_1 | 3.49 | 1.018
AI Self-Management | AI Persuasion Literacy_2 | 3.32 | 1.016
AI Self-Management | AI Persuasion Literacy_3 | 3.00 | 1.053
AI Self-Management | Emotion Regulation_1 | 3.39 | 1.126
AI Self-Management | Emotion Regulation_2 | 3.52 | 0.964
AI Self-Management | Emotion Regulation_3 | 3.73 | 0.991

References

  1. Bates, A.W. Educar na Era Digital: Design, Ensino e Aprendizagem; Tecnologia Educacional; Artesanato Educacional: São Paulo, Brazil, 2017. [Google Scholar]
  2. Ergen, M. What is Artificial Intelligence? Technical Considerations and Future Perception. Anatol. J. Cardiol. 2019, 22, 5–7. [Google Scholar] [CrossRef] [PubMed]
  3. Ganascia, J.-G. A Inteligência Artificial; Biblioteca Básica da Ciência e Cultura; Instituto Piaget: Lisbon, Portugal, 1993. [Google Scholar]
  4. Humble, N.; Mozelius, P. The threat, hype, and promise of artificial intelligence in education. Discov. Artif. Intell. 2022, 2, 22. [Google Scholar] [CrossRef]
  5. Tavares, L.A.; Meira, M.C.; Amaral, S.F.D. Inteligência Artificial na Educação: Survey. Braz. J. Dev. 2020, 6, 48699–48714. [Google Scholar] [CrossRef]
  6. Oliveira, L.; Pinto, M. A Inteligência Artificial na Educação—Ameaças e Oportunidades para o Processo Ensino-Aprendizagem. 2023. Available online: http://hdl.handle.net/10400.22/22779 (accessed on 8 February 2024).
  7. Ayed, I.A.H. Oman Higher Education Institutions Dealing with Artificial Intelligence. BUM—Teses de Doutoramento CIEd—Teses de Doutoramento em Educação/PhD Theses in Education. 2022. Available online: https://hdl.handle.net/1822/76188 (accessed on 8 February 2024).
  8. Miranda, P.; Isaias, P.; Pifano, S. Digital Literacy in Higher Education: A Survey on Students’ Self-assessment. In Learning and Collaboration Technologies. Learning and Teaching; Zaphiris, P., Ioannou, A., Eds.; Lecture Notes in Computer Science; Springer International Publishing: Cham, Switzerland, 2018; Volume 10925, pp. 71–87. [Google Scholar] [CrossRef]
  9. Ng, D.T.K.; Wu, W.; Leung, J.K.L.; Chu, S.K.W. Artificial Intelligence (AI) Literacy Questionnaire with Confirmatory Factor Analysis. In Proceedings of the IEEE International Conference on Advanced Learning Technologies (ICALT), Orem, UT, USA, 10–13 July 2023; IEEE: Piscataway, NJ, USA, 2023; pp. 233–235. [Google Scholar] [CrossRef]
  10. Long, D.; Magerko, B. What is AI Literacy? Competencies and Design Considerations. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems, Honolulu, HI, USA, 25–30 April 2020; ACM: New York, NY, USA; pp. 1–16. [Google Scholar] [CrossRef]
  11. Hornberger, M.; Bewersdorff, A.; Nerdel, C. What do university students know about Artificial Intelligence? Development and validation of an AI literacy test. Comput. Educ. Artif. Intell. 2023, 5, 100165. [Google Scholar] [CrossRef]
  12. Committee on Artificial Intelligence; Council of Europe. Draft Framework Convention on Artificial Intelligence, Human Rights, Democracy and The Rule of Law; Council of Europe: Strasbourg, France, 2023. [Google Scholar]
  13. Council of the European Union. Proposal for a Regulation of the European Parliament and of the Council amending Regulation (EC) No 561/2006 as Regards Minimum Requirements on Minimum Breaks and Daily and Weekly Rest Periods in the Occasional Passenger Transport Sector—Analysis of the Final Compromise Text with a View to Agreement. 2024. Available online: https://data.consilium.europa.eu/doc/document/ST-6021-2024-INIT/en/pdf (accessed on 28 March 2024).
  14. AMA. Guia para uma Inteligência Artificial Ética, Transparente e Responsável na AP. Available online: https://www.sgeconomia.gov.pt/destaques/amactic-guia-para-uma-inteligencia-artificial-etica-transparente-e-responsavel-na-ap.aspx (accessed on 2 November 2023).
  15. Boulay, B. The overlapping ethical imperatives of human teachers and their Artificially Intelligent assistants. In The Ethics of Artificial Intelligence in Education, 1st ed.; Routledge: New York, NY, USA, 2022; pp. 240–254. [Google Scholar] [CrossRef]
  16. Boulay, B. Artificial Intelligence in Education and Ethics. In Handbook of Open, Distance and Digital Education; Zawacki-Richter, O., Jung, I., Eds.; Springer Nature: Singapore, 2023; pp. 93–108. [Google Scholar] [CrossRef]
  17. Flores-Vivar, J.-M.; García-Peñalvo, F.-J. Reflections on the ethics, potential, and challenges of artificial intelligence in the framework of quality education (SDG4). Comun. Rev. Científica Comun. Educ. 2023, 31, 37–47. [Google Scholar] [CrossRef]
  18. Yildiz, Y. Ethics in education and the ethical dimensions of the teaching profession. ScienceRise 2022, 4, 38–45. [Google Scholar] [CrossRef]
  19. Eaton, S.E. Postplagiarism: Transdisciplinary ethics and integrity in the age of artificial intelligence and neurotechnology. Int. J. Educ. Integr. 2023, 19, 23. [Google Scholar] [CrossRef]
  20. Howley, I.; Mir, D.; Peck, E. Integrating AI ethics across the computing curriculum. In The Ethics of Artificial Intelligence in Education, 1st ed.; Routledge: New York, NY, USA, 2022; pp. 255–270. [Google Scholar] [CrossRef]
  21. Remian, D. Augmenting Education: Ethical Considerations for Incorporating Artificial Intelligence in Education. Master’s Thesis, University of Massachusetts, Boston, MA, USA, 2019. Instructional Design Capstones Collection 52. Available online: https://scholarworks.umb.edu/instruction_capstone/52 (accessed on 8 February 2024).
  22. Holmes, W.; Porayska-Pomsta, K.; Holstein, K.; Sutherland, E.; Baker, T.; Shum, S.; Santos, O.; Rodrigo, M.; Cukurova, M.; Bittencourt, I.; et al. Ethics of AI in Education: Towards a Community-Wide Framework. Int. J. Artif. Intell. Educ. 2022, 32, 504–526. [Google Scholar] [CrossRef]
  23. Bom, L. Regresso às provas orais. In 88 Vozes sobre Inteligência Artificial, 1st ed.; Camacho, F., Ed.; Oficina do Livro: Alfragide, Portugal, 2023; pp. 431–437. [Google Scholar]
  24. Nguyen, A.; Ngo, H.N.; Hong, Y.; Dang, B.; Nguyen, B.-P.T. Ethical principles for artificial intelligence in education. Educ. Inf. Technol. 2023, 28, 4221–4241. [Google Scholar] [CrossRef] [PubMed]
  25. Carolus, A.; Koch, M.J.; Straka, S.; Latoschik, M.E.; Wienrich, C. MAILS—Meta AI literacy scale: Development and testing of an AI literacy questionnaire based on well-founded competency models and psychological change- and meta-competencies. Comput. Hum. Behav. Artif. Hum. 2023, 1, 100014. [Google Scholar] [CrossRef]
  26. Ng, D.T.K.; Leung, J.K.L.; Chu, S.K.W.; Qiao, M.S. Conceptualizing AI literacy: An exploratory review. Comput. Educ. Artif. Intell. 2021, 2, 100041. [Google Scholar] [CrossRef]
  27. Holmes, W.; Persson, J.; Chounta, I.-A.; Wasson, B.; Dimitrova, V. Artificial Intelligence and Education: A Critical View through the Lens of Human Rights, Democracy and the Rule of Law; Council of Europe: Strasbourg, France, 2022. [Google Scholar]
  28. Birks, D.; Clare, J. Linking artificial intelligence facilitated academic misconduct to existing prevention frameworks. Int. J. Educ. Integr. 2023, 19, 20. [Google Scholar] [CrossRef]
  29. Mavrikis, M.; Geraniou, E.; Santos, S.; Poulovassilis, A. Intelligent analysis and data visualisation for teacher assistance tools: The case of exploratory learning. Br. J. Educ. Technol. 2019, 50, 2920–2942. [Google Scholar] [CrossRef]
  30. Olari, V.; Romeike, R. Addressing AI and data literacy in teacher education: A review of existing educational frameworks. In Proceedings of the 16th Workshop in Primary and Secondary Computing Education (WiPSCE ‘21), Virtual Event, 18–20 October 2021. [Google Scholar] [CrossRef]
  31. Kuleto, V.; Ilić, M.; Dumangiu, M.; Ranković, M.; Martins, O.M.D.; Păun, D.; Mihoreanu, L. Exploring Opportunities and Challenges of Artificial Intelligence and Machine Learning in Higher Education Institutions. Sustainability 2021, 13, 10424. [Google Scholar] [CrossRef]
  32. McGrath, C.; Pargman, T.; Juth, N.; Palmgren, P. University teachers’ perceptions of responsibility and artificial intelligence in higher education—An experimental philosophical study. Comput. Educ. Artif. Intell. 2023, 4, 100139. [Google Scholar] [CrossRef]
  33. Vazhayil, A.; Shetty, R.; Bhavani, R.; Akshay, N. Focusing on Teacher Education to Introduce AI in Schools: Perspectives and Illustrative Findings. In Proceedings of the 2019 IEEE Tenth International Conference on Technology for Education (T4E), Goa, India, 9–11 December 2019; pp. 71–77. [Google Scholar] [CrossRef]
  34. Wang, B.; Rau, P.-L.P.; Yuan, T. Measuring user competence in using artificial intelligence: Validity and reliability of artificial intelligence literacy scale. Behav. Inf. Technol. 2023, 42, 1324–1337. [Google Scholar] [CrossRef]
  35. Ajzen, I. From Intentions to Actions: A Theory of Planned Behavior; Springer: Berlin/Heidelberg, Germany, 1985. [Google Scholar]
  36. Carolus, A.; Augustin, Y.; Markus, A.; Wienrich, C. Digital interaction literacy model—Conceptualizing competencies for literate interactions with voice-based AI systems. Comput. Educ. Artif. Intell. 2023, 4, 100114. [Google Scholar] [CrossRef]
  37. Cetindamar, D.; Kitto, K.; Wu, M.; Zhang, Y.; Abedin, B.; Knight, S. Explicating AI Literacy of Employees at Digital Workplaces. IEEE Trans. Eng. Manag. 2024, 71, 810–823. [Google Scholar] [CrossRef]
  38. Dai, Y.; Chai, C.-S.; Lin, P.-Y.; Jong, M.S.-Y.; Guo, Y.; Qin, J. Promoting Students’ Well-Being by Developing Their Readiness for the Artificial Intelligence Age. Sustainability 2020, 12, 6597. [Google Scholar] [CrossRef]
  39. Martinez, L.; Ferreira, A. Análise dos Dados com SPSS. Primeiros Passos; Escolar Editora: Lisbon, Portugal, 2007. [Google Scholar]
  40. Cattell, R.B. The scree test for the number of factors. Multivar. Behav. Res. 1966, 1, 245–276. [Google Scholar] [CrossRef] [PubMed]
Figure 1. Results of the AI Literacy dimension.
Figure 2. Results of the AI Self-Efficacy dimension.
Figure 3. Results of the AI Self-Management dimension.
Table 1. Results of the AI Literacy questionnaire.

Dimension | Factor | 1 (Totally Disagree) | 2 (Somewhat Disagree) | 3 (Neither Disagree nor Agree) | 4 (Somewhat Agree) | 5 (Totally Agree) | Factor Mean | Dimension Mean
AI Literacy | Use and apply AI | 3.6% | 10.0% | 18.4% | 33.6% | 34.4% | 3.85 | 3.56
AI Literacy | Know and understand AI | 4.3% | 14.1% | 27.2% | 40.5% | 13.9% | 3.46 |
AI Literacy | Detect AI | 5.3% | 19.1% | 34.7% | 32.4% | 8.4% | 3.20 |
AI Literacy | AI Ethics | 4.0% | 9.3% | 21.8% | 39.1% | 25.8% | 3.73 |
AI Self-Efficacy | Problem Solving | 6.7% | 14.2% | 40.9% | 25.8% | 12.4% | 3.23 | 2.86
AI Self-Efficacy | Learning | 18.7% | 33.8% | 31.6% | 12.0% | 4.0% | 2.49 |
AI Self-Management | AI Persuasion Literacy | 4.9% | 16.0% | 40.0% | 25.3% | 13.8% | 3.27 | 3.41
AI Self-Management | Emotion Regulation | 4.9% | 6.2% | 38.2% | 30.7% | 20.0% | 3.55 |
Total | | 6.7% | 15.6% | 32.0% | 30.4% | 17.2% | | 3.28
Table 2. Item-Total Statistics.

Item | Scale Mean If Item Deleted | Scale Variance If Item Deleted | Corrected Item-Total Correlation | Cronbach's Alpha If Item Deleted
Use and apply AI_1 | 94.55 | 289.278 | 0.395 | 0.929
Use and apply AI_2 | 94.59 | 289.111 | 0.406 | 0.929
Use and apply AI_3 | 95.00 | 283.405 | 0.492 | 0.928
Use and apply AI_4 | 94.75 | 286.273 | 0.462 | 0.928
Use and apply AI_5 | 94.92 | 287.507 | 0.417 | 0.929
Use and apply AI_6 | 95.88 | 288.539 | 0.320 | 0.931
Know and understand AI_1 | 95.51 | 278.632 | 0.663 | 0.926
Know and understand AI_2 | 95.36 | 276.396 | 0.736 | 0.925
Know and understand AI_3 | 95.36 | 278.828 | 0.671 | 0.926
Know and understand AI_4 | 95.39 | 277.267 | 0.674 | 0.925
Know and understand AI_5 | 95.11 | 280.772 | 0.633 | 0.926
Detect AI_1 | 95.53 | 278.793 | 0.662 | 0.926
Detect AI_2 | 95.76 | 277.023 | 0.705 | 0.925
Detect AI_3 | 95.52 | 287.415 | 0.436 | 0.929
AI Ethics_1 | 95.23 | 284.799 | 0.464 | 0.928
AI Ethics_2 | 94.95 | 279.889 | 0.609 | 0.926
AI Ethics_3 | 95.03 | 279.215 | 0.622 | 0.926
Problem Solving_1 | 95.32 | 275.626 | 0.728 | 0.925
Problem Solving_2 | 95.68 | 279.302 | 0.616 | 0.926
Problem Solving_3 | 95.71 | 277.994 | 0.718 | 0.925
Learning_1 | 96.16 | 274.812 | 0.770 | 0.924
Learning_2 | 96.40 | 277.865 | 0.675 | 0.925
Learning_3 | 96.37 | 276.156 | 0.724 | 0.925
AI Persuasion Literacy_1 | 95.31 | 292.405 | 0.264 | 0.931
AI Persuasion Literacy_2 | 95.48 | 292.550 | 0.261 | 0.931
AI Persuasion Literacy_3 | 95.80 | 287.189 | 0.402 | 0.929
Emotion Regulation_1 | 95.41 | 289.894 | 0.300 | 0.931
Emotion Regulation_2 | 95.28 | 289.502 | 0.373 | 0.929
Emotion Regulation_3 | 95.07 | 286.090 | 0.465 | 0.928
Table 3. Component factor analysis with varimax rotation method.

Original Dimension | Item | Component 1 | Component 2 | Component 3
AI Literacy | Use and apply AI_1 | 0.407 | 0.807 | −0.080
AI Literacy | Use and apply AI_2 | 0.414 | 0.830 | −0.055
AI Literacy | Use and apply AI_3 | 0.509 | 0.758 | 0.082
AI Literacy | Use and apply AI_4 | 0.471 | 0.802 | 0.030
AI Literacy | Use and apply AI_5 | 0.431 | 0.787 | 0.001
AI Literacy | Use and apply AI_6 | 0.339 | 0.425 | 0.131
AI Literacy | Know and understand AI_1 | 0.721 | −0.186 | −0.052
AI Literacy | Know and understand AI_2 | 0.787 | −0.096 | −0.015
AI Literacy | Know and understand AI_3 | 0.733 | −0.329 | −0.168
AI Literacy | Know and understand AI_4 | 0.725 | 0.031 | 0.013
AI Literacy | Know and understand AI_5 | 0.684 | 0.037 | 0.012
AI Literacy | Detect AI_1 | 0.693 | −0.078 | 0.252
AI Literacy | Detect AI_2 | 0.742 | −0.222 | 0.236
AI Literacy | Detect AI_3 | 0.484 | −0.221 | 0.297
AI Literacy | AI Ethics_1 | 0.523 | −0.506 | −0.186
AI Literacy | AI Ethics_2 | 0.666 | −0.157 | −0.461
AI Literacy | AI Ethics_3 | 0.676 | −0.159 | −0.407
AI Self-Efficacy | Problem Solving_1 | 0.764 | −0.098 | −0.260
AI Self-Efficacy | Problem Solving_2 | 0.667 | −0.096 | −0.402
AI Self-Efficacy | Problem Solving_3 | 0.750 | 0.158 | −0.067
AI Self-Efficacy | Learning_1 | 0.814 | −0.057 | 0.020
AI Self-Efficacy | Learning_2 | 0.731 | −0.258 | 0.034
AI Self-Efficacy | Learning_3 | 0.774 | −0.061 | 0.006
AI Self-Management | AI Persuasion Literacy_1 | 0.318 | −0.549 | −0.070
AI Self-Management | AI Persuasion Literacy_2 | 0.288 | −0.540 | 0.306
AI Self-Management | AI Persuasion Literacy_3 | 0.431 | 0.023 | 0.286
AI Self-Management | Emotion Regulation_1 | 0.332 | −0.241 | 0.365
AI Self-Management | Emotion Regulation_2 | 0.388 | −0.077 | 0.621
AI Self-Management | Emotion Regulation_3 | 0.470 | 0.043 | 0.457
Table 4. Correlation between factors.

 | Component 1 | Component 2 | Component 3 | Total
Component 1 | 1 | 0.766 ** | 0.425 ** | 0.949 **
Component 2 | | 1 | 0.501 ** | 0.889 **
Component 3 | | | 1 | 0.636 **
Total | | | | 1
** Correlation is significant at the 0.01 level (2-tailed).
