Review

A Comprehensive Survey on the Investigation of Machine-Learning-Powered Augmented Reality Applications in Education

by Haseeb Ali Khan 1, Sonain Jamil 2,3, Md. Jalil Piran 4,*, Oh-Jin Kwon 2 and Jong-Weon Lee 1,*

1 Department of Software, Sejong University, Seoul 05006, Republic of Korea
2 Department of Electronics Engineering, Sejong University, Seoul 05006, Republic of Korea
3 Department of Computer Science, Norwegian University of Science and Technology (NTNU), 2815 Gjøvik, Norway
4 Department of Computer Engineering, Sejong University, Seoul 05006, Republic of Korea
* Authors to whom correspondence should be addressed.
Technologies 2024, 12(5), 72; https://doi.org/10.3390/technologies12050072
Submission received: 27 March 2024 / Revised: 14 May 2024 / Accepted: 17 May 2024 / Published: 19 May 2024

Abstract
Machine learning (ML) is helping augmented reality (AR) gain popularity in various fields, including gaming, entertainment, healthcare, and education. ML enhances AR applications in education by providing accurate visualizations of objects. In AR systems, ML algorithms support the recognition of objects and gestures for learners from kindergarten through university. The purpose of this survey is to provide an overview of the ways in which ML techniques can be applied to AR in education. We first describe the background of AR. We then discuss the ML models used in educational AR applications and how ML is employed within AR. Analyzing these frameworks allows the challenges and solutions of each subgroup to be identified. In addition, we outline several research gaps and future research directions for ML-based AR frameworks in education.

1. Introduction

AR augments the real world with computer-generated perceptual information. This technology has a positive impact on education: various educational levels, academic subjects, and learning situations can benefit from AR [1]. AR allows learners to interact with artificial 3D objects to enhance their learning. Through the use of 3D synthetic objects, AR can enhance the visual impression of target systems and environments, and students can examine these objects from a wide range of perspectives to improve their interpretation of them [2].
AR has been enhanced by various technologies, including ML, although this combination is still in its infancy. Despite this, it is used for various educational and training purposes, for example, in medical education [3]. Following the periods of inference, knowledge, and ML, deep learning (DL) [4,5] represents the next phase of artificial intelligence (AI). In addition to convolutional neural networks (CNNs), DL includes several other representative models [4,6,7]. ML and AR are important in medical education and learning [3]. Furthermore, they are used in plant education for precision farming [8]. Surgical education should produce surgeons, clinicians, researchers, and teachers [9,10,11]. Surgical training and education are becoming increasingly computer-based as the field evolves. ML algorithms are also used to classify patients’ states based on their records [12,13]. When the AR module is activated, digital information is displayed first. An ML algorithm is then used to distinguish affected tissue from the rest, based on the dataset previously provided by the system [14].
In sports education, rock climbing and basketball offer the most promising frameworks for AR development. Incorporating basketball AR into practitioners’ environments and spectators’ viewing experiences may be beneficial. The small area of bordering surfaces and the calibration of fixed holds make rock climbing a technical sport. Recent AR advances can also enhance baseball and soccer by augmenting game play and ball trajectories [15]. AR is more effective than traditional media for performing or preparing for a task. As a result of AR experience, students are more likely to transfer their knowledge to actual equipment use.
Content learned through AR experiences is remembered more vividly than content learned through non-AR media such as paper or video, even many weeks later [16]. AI education [17,18] using AR can be applied to non-engineering majors, and adult learners can learn effectively, stay engaged, and continuously adapt to society and technology [19]. AR can be used in the educational system to improve traditional education while reducing long-standing problems. Additionally, it facilitates collaboration between teachers and students. For educational applications, few technologies offer as much potential to explore as AR. Given ongoing technological advancements, educational domains need to be improved using efficient methods. A study found that AR may engage, trigger, and stimulate students to consider course materials from a variety of angles [20].
The main contributions of this survey can be summarized as follows.
  • ML techniques in AR applications are discussed for several areas of education.
  • An analysis of related works is presented in detail.
  • ML models for AR applications such as support vector machine (SVM), CNN, artificial neural network (ANN), etc., are discussed.
  • A detailed analysis of ML models in the context of AR is presented.
  • A set of challenges and possible solutions are presented.
  • Research gaps and future directions are discussed in several fields of education involving ML-based AR frameworks.
  • Emerging trends and developments in the use of ML and AR are recognized and analyzed in educational settings.
  • Insights are provided into areas that need more research or improvement.
  • Insights to help guide future research and development activities in the sector are provided.
The remainder of this article is organized as follows. Section 2 presents the related work. Section 3 explains the fundamentals of ML, its techniques, and the types of ML. Section 4 presents an introduction to AR, the types of AR systems, and the intersection of ML and AR in education. Section 5 explains ML techniques for AR in education and their uses. Section 6 presents supervised and unsupervised learning models in AR and their applications. Section 7 discusses open research challenges. Section 8 concludes the survey. Figure 1 shows the organization of the survey.

2. Related Work

Medical education and learning are the most common applications of ML-based AR. In [1], the authors explore which students AR systems benefit the most and analyze their impact on student learning. However, this study did not focus on ML models for AR in education.
Ren et al., in [3], concentrated on CNN, ANN, and SVM in AR, explaining how AR and DL can be employed in healthcare. Unfortunately, this study had a limitation as it did not include all ML models.
A survey of AR applications in plant education is presented by the authors in [8], with a focus on agriculture, particularly livestock and crops. Notably, the discussion in this paper centered on conventional methods rather than ML models.
In [21], the researchers explore the application of AR in education, providing an overview and describing the three generations of AR in education. However, this study did not delve into ML methods.
Khandelwal et al., in [14], introduced a surgical training system that combines AR, AI, and ML. Throughout the survey, SVM, KNN, and ANN models are discussed along with their applications.
In the domain of e-learning research, hierarchical linear modeling (HLM) was employed as a multilevel modeling technique [22]. Nonetheless, the authors did not provide a detailed explanation of how ML was used.
In [23], the focus was on AR technologies and limitations for neurosurgical training as an educational tool. However, the researchers did not elaborate on the use of ML in their survey.
The ongoing clinical applications of AR in education and surgery are reviewed in [24]. Despite this, the researchers did not mention the utilization of ML models.
In [25], the authors discuss the impact of AR on programming education, the challenges and issues it presents, and how it benefits student learning. Regrettably, this study did not delve into the usage of ML in the survey.
Real-time data collection, ML-aided processing, and visualization are anticipated goals for using AR technologies in the healthcare sector. The study in [26] focuses on the potential future application of AR in breast surgery education, describing two prospective applications (surgical remote telementoring and impalpable breast cancer localization using AR) as well as the technical requirements to make them viable.
The purpose of the research in [27] was to look into the impact of AR on student attitudes, engagement, and knowledge of mechanical engineering principles. The creation of an AR app for mobile devices to aid in the comprehension of planar mechanisms by exhibiting models of basic machines is the contribution of this work. The AR simulation provides a three-dimensional representation of planar mechanics, as well as a variety of interactions, charts, and calculations. Students utilized a smartphone app to complete a basic task. In addition, a questionnaire was used to collect their thoughts on using AR in their mechanical engineering lessons. The evaluation of the exercise, as well as the answers to the questions, revealed that the students had a favorable opinion of the usage of AR in the classroom. In addition, AR increased their involvement and grasp of the process components.
Furthermore, “LeARn” [28], a novel network-based collaborative learning environment, uses AR to transform a real-world surface into a virtual lab. The system contributes to the replacement of a face-to-face learning environment with an enhanced collaborative setting. A scenario with a virtual chemical lab is shown to showcase the concept. In the demo, any real-world surface is supplemented with virtual lab equipment used in a chemistry experiment. The instructor hosts the virtual lab, and all students can access it solely through their mobile phones or tablets. Each participant can interact with the lab equipment, which the instructor or fellow students can view in real time. The system enables real-time communication, creating a truly collaborative atmosphere. The resulting solution demonstrates that a sophisticated lab experiment may be carried out from a personalized location while retaining collaborative characteristics. In an uncontrolled user study, the system was implemented and reviewed, and the results demonstrate the effectiveness of an AR-based interactive and collaborative learning environment.
The study in [29] described an ML-augmented, wearable, self-powered, and long-lasting human–machine interface (HMI) sensor for human hand motion and virtual tasks. The triboelectric friction between the moving object and the specific electrode array was employed to generate a unique and stable electrical signal that regulated the programmable output curve of the instantaneous parameters. The study established that the motion of a movable object can be tracked and correctly reconstructed from the output signal by decoupling it into various motion patterns. Furthermore, the ML method can identify fast and slow finger actions with clear visualization performance. It also indicated that multiple linear regression (MLR) and PCA+K-means clustering exhibited significant efficacy in terms of grouping, visualization, and motion speed interference. This study not only established the viability of designing self-powered HMI sensors but also demonstrated a way to identify ML-augmented motion patterns.
The authors in [30] present the findings of a survey of touchless interaction studies in educational applications and propose the use of ML agents to achieve real-time touchless hand interaction inside kinesthetic learning. This study shows the design of two AR applications with real-time hand contact and ML agents, enabling engaged kinesthetic learning as an alternative learning interface.
Ref. [31] includes a review of the present literature, an investigation of the problems, the identification of prospective study areas, and, lastly, a report on the construction of two case studies that highlight the initial steps needed to address these research areas. Finally, the findings of this study reveal the research gaps that must be addressed to enable real-time touchless hand interaction, kinesthetic learning, and ML agents within a remote learning methodology.
The research in [32] presents an improved ML approach for evaluating extended reality (XR)-based simulators. Healthcare simulators are being developed for the training and instruction of medical residents and students, and many researchers have utilized ML to evaluate them. However, such evaluations lack standardization. Some academics have also investigated using ML to standardize the assessment process, but only for virtual reality (VR). The goal of that study is to create an enhanced framework that includes assessment techniques for virtual, mixed, and AR simulators as well as multiple ML models.
The following are our research questions:
  • How advanced are augmented reality applications in education today?
  • How is machine learning being integrated into educational augmented reality applications?
  • In comparison with conventional approaches, how successful and efficient are machine-learning-powered augmented reality applications in increasing learning outcomes?
  • What are the primary elements influencing student and instructor user experiences with machine-learning-powered augmented reality in education?
  • What technical challenges are there when combining machine learning and augmented reality in educational settings?
  • What emerging trends in the development and deployment of machine-learning-powered augmented reality applications in education are anticipated?
Based on our analysis of related work, there is a clear need for a comprehensive survey of ML models, including SVM, KNN, ANN, and CNN, for AR in various educational fields. As a result, this survey focuses on ML models in AR for education, providing an in-depth analysis of each model’s advantages, disadvantages, challenges, and limitations. Table 1 shows a summary of existing surveys.

2.1. Bibliometric Analysis and Methodology

Several databases were employed to select papers, including Google Scholar, Web of Science (WoS), IEEE Xplore, and ScienceDirect.

2.1.1. Bibliometric Analysis

The papers under consideration were published between 2017 and 2023. A total of 169 publications have been published on ML-assisted AR in education. In 2017, three articles related to ML-assisted AR in education were published, while seven were published in 2018. In total, 16 papers were published in 2019. Additionally, 26, 31, 51, and 31 papers were published in 2020, 2021, 2022, and 2023, respectively. Figure 2 illustrates the number of publications per year.
We present a world map indicating the countries that are most active in working on this topic. Between 2017 and 2023, China published 46 articles, making it the most active country. The United States of America (USA), Spain, Taiwan, Germany, India, Italy, and South Korea published 40, 14, 14, 13, 12, 10, and 10 papers, respectively, during the same period. Figure 3 depicts the leading countries in ML-assisted AR applications in education.

2.1.2. Methodology

Based on the selection criteria outlined in Algorithm 1, we selected 50 papers for analysis:
Algorithm 1 Article selection criteria
Require: Search on databases
Ensure: Articles from 2017 to 2023
    while keyword = “Augmented Reality”, “Machine Learning”, “Education” do
        if the article discusses an ML-assisted AR application OR evaluates performance OR analyzes an application in education then
                Consider for analysis
        else if the article does not discuss ML then
                Exclude from the analysis
        end if
    end while
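To make the screening procedure concrete, the following Python sketch implements a comparable keyword- and year-based filter. It is illustrative only: the record fields (year, title, abstract, discusses_ml) are hypothetical and do not reflect the authors’ actual selection tooling.

```python
# Illustrative sketch of a keyword/year screening filter in the spirit of
# Algorithm 1. The record fields below are hypothetical.
KEYWORDS = ("augmented reality", "machine learning", "education")

def is_eligible(record: dict) -> bool:
    """Return True if a candidate article passes the inclusion criteria."""
    if not (2017 <= record["year"] <= 2023):
        return False
    text = (record["title"] + " " + record["abstract"]).lower()
    if not all(keyword in text for keyword in KEYWORDS):
        return False
    # Exclude articles that do not discuss ML at all.
    return record.get("discusses_ml", False)

papers = [
    {"year": 2021, "title": "Machine learning for augmented reality in education",
     "abstract": "An ML-assisted AR application evaluated in classrooms.",
     "discusses_ml": True},
    {"year": 2015, "title": "Augmented reality in museums",
     "abstract": "A marker-based AR exhibit.", "discusses_ml": False},
]
selected = [p for p in papers if is_eligible(p)]
print(len(selected))  # -> 1
```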

3. Fundamentals of ML and AR

3.1. Overview of ML Techniques

ML has gained immense popularity and plays a crucial role in modern technology. Teaching ML in high school is essential to empower students with responsible and innovative skills. ML is a subset of AI [33]. At its core, ML automates the process of creating and solving analytical models based on training data [34]. It has become an integral component of various applications, such as image recognition [35], speech recognition [36], intelligent assistants [37], autonomous vehicles [38], and many others.
In ML, real-world problems are approached through learning rather than explicit programming. The system learns typical patterns, such as word combinations, from data. For example, in the context of social media analysis, ML systems can learn to identify words or phrases in tweets that indicate customer needs, leading to need classification [39]. ML professionals leverage various open-source ML frameworks available in the market to develop new projects and create impactful ML systems [40].

3.2. Types of ML

ML comprises four primary types: supervised learning (SL), unsupervised learning (UL), semi-supervised learning (SSL), and reinforcement learning (RL) [41,42]. Let us delve into each of these types.

3.2.1. SL

SL is a paradigm where a set of inputs is used to achieve specific target outcomes [43,44]. SL tackles both regression problems, which involve predicting continuous values, and classification problems, which involve categorizing data into distinct classes. In classification, the output variable is divided into various groups or categories, such as ‘red’ or ‘green’, or ‘car’ and ‘cycle’. An example of a regression problem is predicting cardiovascular disease risk. Common algorithms employed in SL include logistic regression, deep neural networks (DNNs), SVM, decision tree (DT), k-nearest neighbors (KNN), and ANN [45].
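As a minimal illustration of the two SL problem types, the following sketch (assuming scikit-learn is available) trains a decision tree on a classification task and a linear model on a synthetic regression task; neither dataset is drawn from the surveyed works.

```python
# Minimal supervised-learning sketch: classification with a decision tree and
# regression with linear regression, mirroring the two SL problem types above.
import numpy as np
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.linear_model import LinearRegression

# Classification: predict a discrete class label from measurements.
X, y = load_iris(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = DecisionTreeClassifier(max_depth=3).fit(X_tr, y_tr)
print("classification accuracy:", clf.score(X_te, y_te))

# Regression: predict a continuous value (here a synthetic score).
rng = np.random.default_rng(0)
X_reg = rng.normal(size=(200, 3))
y_reg = 2.0 * X_reg[:, 0] - X_reg[:, 1] + rng.normal(scale=0.1, size=200)
reg = LinearRegression().fit(X_reg, y_reg)
print("regression R^2:", reg.score(X_reg, y_reg))
```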

3.2.2. UL

UL is characterized by its data-driven approach, requiring no human-labeled data. UL techniques excel at identifying underlying trends and structures and at performing exploratory analysis [46]. Tasks within UL encompass density estimation, clustering, association rule mining, feature learning, and anomaly detection. Common algorithms in UL include self-organizing maps (SOMs), generative adversarial networks (GANs), and deep belief networks (DBNs) [47].
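A minimal UL sketch, again assuming scikit-learn, illustrates how k-means clustering recovers group structure from unlabeled data.

```python
# Minimal unsupervised-learning sketch: k-means clustering discovers group
# structure without any human-provided labels.
from sklearn.datasets import make_blobs
from sklearn.cluster import KMeans

X, _ = make_blobs(n_samples=300, centers=3, cluster_std=1.0, random_state=42)
kmeans = KMeans(n_clusters=3, n_init=10, random_state=42).fit(X)
print("cluster sizes:", [int((kmeans.labels_ == k).sum()) for k in range(3)])
print("cluster centers:\n", kmeans.cluster_centers_)
```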

3.2.3. SSL

SSL learns from a blend of labeled and unlabeled data. It is particularly advantageous when extracting relevant patterns from data is challenging and labeling examples is time-consuming. SSL techniques find utility in data labeling, fraud detection, text translation, and text classification [48].
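The following sketch, assuming scikit-learn (version 0.24 or later), illustrates the SSL idea with a self-training wrapper; unlabeled samples are marked with −1 and the data are synthetic.

```python
# Minimal semi-supervised sketch: a self-training wrapper propagates labels to
# unlabeled samples, which are marked with -1 in the target vector.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.svm import SVC
from sklearn.semi_supervised import SelfTrainingClassifier

X, y = make_classification(n_samples=400, n_features=10, random_state=0)
y_partial = y.copy()
rng = np.random.default_rng(0)
unlabeled = rng.choice(len(y), size=int(0.8 * len(y)), replace=False)
y_partial[unlabeled] = -1  # hide 80% of the labels

model = SelfTrainingClassifier(SVC(probability=True))
model.fit(X, y_partial)
print("accuracy against the full labels:", model.score(X, y))
```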

3.2.4. RL

RL stands in contrast to SL, as agents in RL learn by trial and error rather than relying on labeled data [49]. In RL, agents determine how to behave within an environment through interactions and observations of the outcomes. It is particularly relevant in scenarios where agents need to make sequential decisions and learn optimal strategies.
A variety of applications for RL can be found in computer-controlled board games, robotic hands, robotic mazes, and autonomous vehicles. Several RL algorithms are used, including Q-learning, R-learning, deep reinforcement learning (DRL), actor–critic, deep adversarial networks (DANs), temporal difference algorithms (TDAs), and the Sarsa algorithm [50].
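As a minimal illustration of the trial-and-error principle, the following sketch implements tabular Q-learning on a toy five-state corridor; the environment is invented for illustration and is not tied to any AR framework discussed in this survey.

```python
# Minimal tabular Q-learning sketch: the agent starts in state 0 and receives
# a reward only upon reaching state 4.
import numpy as np

n_states, n_actions = 5, 2          # actions: 0 = move left, 1 = move right
Q = np.zeros((n_states, n_actions))
alpha, gamma, eps = 0.1, 0.9, 0.2   # learning rate, discount, exploration rate
rng = np.random.default_rng(0)

for _ in range(2000):               # episodes
    s = 0
    while s != n_states - 1:
        # Epsilon-greedy action selection (explore while the Q row is untrained).
        if rng.random() < eps or Q[s].max() == 0.0:
            a = int(rng.integers(n_actions))
        else:
            a = int(Q[s].argmax())
        s_next = min(s + 1, n_states - 1) if a == 1 else max(s - 1, 0)
        r = 1.0 if s_next == n_states - 1 else 0.0
        # Temporal-difference update toward the Bellman target.
        Q[s, a] += alpha * (r + gamma * Q[s_next].max() - Q[s, a])
        s = s_next

# Terminal state 4 is never updated, so its entry stays at the initial value.
print("greedy policy (0 = left, 1 = right):", Q.argmax(axis=1))
```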

4. Introduction to AR

4.1. Definition and Characteristics

AR seamlessly intertwines the real and virtual worlds [51,52]. By overlaying digital information onto the physical world, AR creates the illusion of digital content being an integral part of the real environment. One of AR’s key strengths is its ability to immerse users without isolating them from their physical surroundings [53]. AR experiences are easily accessible through devices like tablets and smartphones equipped with AR applications [54]. These applications can be operated in handheld mode or leveraged with accessories like Google Cardboard to provide immersive 3D experiences. Additionally, there are free applications available, enabling students to create AR content and engage with AR without the need for costly equipment [55]. AR spans a range of viewing devices, from AR headsets like Microsoft HoloLens to VR and gaming headsets such as HTC Vive and Samsung Gear.
AR finds applications across educational levels, from primary [56] to university education [57]. It caters to diverse learner groups, encompassing K-12 students, kindergarteners, elderly individuals, adult learners, learners in vocational and technical higher education [58], and those with special needs [59]. The integration of AR into education necessitates the development of suitable methods and applications, presenting valuable research opportunities [60]. AR technology empowers users to experience scientific phenomena that would be inaccessible in the real world, such as visualizing complex chemical reactions, providing access to previously unattainable knowledge [2,61]. AR enables users to interact with virtual objects and observe phenomena that may be challenging to visualize in reality, enhancing understanding of abstract or unobservable concepts [2]. AR and VR technologies also support surgical training in laparoscopic surgery [62].

4.2. Types of AR Systems

AR systems are categorized into three primary types: marker-based AR, markerless AR, and location-based AR [63,64,65].

4.2.1. Marker-Based AR

Marker-based AR relies on markers, which can take the form of QR codes, 2D barcodes, or distinctive, highly visible images. When a device captures an image with its camera, the AR software identifies the marker, determines the camera’s position and orientation, and overlays virtual objects onto the screen [66]. This method has proven to be robust and accurate, and virtually all AR software development kits (SDKs) support marker-based tracking techniques. Marker-based AR provides precise information about the marker’s position in the camera’s coordinate system, enabling the identification of sequences of markers and their utilization for various control functions [67].
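The following sketch outlines the marker-based pipeline (detect the marker, estimate the camera pose, anchor virtual content) using OpenCV’s ArUco module. It assumes opencv-contrib-python 4.7 or later (older releases expose the same steps through cv2.aruco.detectMarkers), and the camera intrinsics and image path are placeholder values rather than a real calibration.

```python
# Sketch of marker-based tracking with OpenCV's ArUco module.
# Camera intrinsics and file name are placeholders, not a real calibration.
import cv2
import numpy as np

MARKER_SIZE = 0.05  # marker side length in metres
camera_matrix = np.array([[800.0, 0.0, 320.0],
                          [0.0, 800.0, 240.0],
                          [0.0, 0.0, 1.0]])
dist_coeffs = np.zeros(5)

dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)
detector = cv2.aruco.ArucoDetector(dictionary, cv2.aruco.DetectorParameters())

frame = cv2.imread("frame.png")          # placeholder: one camera frame with a marker
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
corners, ids, _ = detector.detectMarkers(gray)

if ids is not None:
    # 3D corner coordinates of the marker in its own coordinate system.
    half = MARKER_SIZE / 2.0
    obj_pts = np.array([[-half, half, 0], [half, half, 0],
                        [half, -half, 0], [-half, -half, 0]], dtype=np.float32)
    # Recover the marker pose in the camera coordinate system; a virtual object
    # rendered with this pose appears anchored to the marker.
    ok, rvec, tvec = cv2.solvePnP(obj_pts, corners[0].reshape(4, 2),
                                  camera_matrix, dist_coeffs)
    print("marker id:", int(ids[0][0]), "translation (m):", tvec.ravel())
```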

4.2.2. Markerless AR

Markerless AR, also known as markerless tracking, is concerned with determining an object’s position and orientation in relation to its surroundings. This capability is crucial in VR and AR, as it allows the virtual world to adapt to the user’s perspective and field of view, ensuring that AR content aligns seamlessly with the physical environment [68]. Unlike marker-based approaches, markerless tracking does not require specialized optical markers, offering greater flexibility. It eliminates the need for predefined environments with fiducial markers, allowing users to move freely in various settings while receiving precise positional feedback [69].
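A minimal markerless sketch, assuming OpenCV, matches ORB features between a reference image (for example, a textbook page) and a camera frame and estimates a homography, which is one common way to anchor AR content without fiducial markers; the file names are placeholders.

```python
# Sketch of markerless anchoring via natural-feature matching with ORB:
# features of a reference image are matched against the live frame and a
# homography is estimated so content can be overlaid without markers.
import cv2
import numpy as np

reference = cv2.imread("reference_page.png", cv2.IMREAD_GRAYSCALE)  # placeholder
frame = cv2.imread("camera_frame.png", cv2.IMREAD_GRAYSCALE)        # placeholder

orb = cv2.ORB_create(nfeatures=1000)
kp_ref, des_ref = orb.detectAndCompute(reference, None)
kp_frm, des_frm = orb.detectAndCompute(frame, None)

matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des_ref, des_frm), key=lambda m: m.distance)[:100]

src = np.float32([kp_ref[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
dst = np.float32([kp_frm[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
H, inliers = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
print("homography mapping the reference into the frame:\n", H)
```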

4.2.3. Location-Based AR

Location-based AR applications deliver digital content to users as they arrive at specific physical locations or move through the real world. Typically, these applications are presented on mobile devices like smartphones or tablets, and they utilize Global Positioning System (GPS) or wireless network data to track the user’s location [70]. While location-based AR has been around for some time, it gained widespread popularity with games like Ingress and Pokémon Go. Another term often used to describe such applications is “location-aware AR”. Numerous industries, including tourism, entertainment, marketing, and education, have embraced location-based AR applications. These apps serve multiple purposes, such as entertaining and educating tourists while simultaneously achieving marketing objectives in the tourism sector [71].
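A minimal sketch of a location-based trigger follows: the haversine distance between the user’s GPS fix and a point of interest determines whether AR content is displayed. The coordinates and radius are invented example values.

```python
# Sketch of a location-based (geofence) trigger using the haversine distance.
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two WGS-84 coordinates."""
    r = 6371000.0  # mean Earth radius in metres
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlam / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

POI = {"name": "Museum exhibit", "lat": 37.5505, "lon": 127.0737, "radius_m": 50.0}
user_lat, user_lon = 37.5507, 127.0739  # latest GPS fix (example values)

if haversine_m(user_lat, user_lon, POI["lat"], POI["lon"]) <= POI["radius_m"]:
    print(f"Within range: show AR content for {POI['name']}")
else:
    print("Out of range: keep content hidden")
```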

4.3. The Intersection of ML and AR in Education

ML models are integrated with AR to enhance educational experiences. This survey explores the utilization of ML models, including SVM, KNN, ANN, CNN, and more, within AR for diverse educational purposes.

5. ML Techniques for AR in Education

In the realm of AR for education, various ML techniques play a pivotal role. These techniques empower AR applications to deliver engaging educational experiences.

5.1. SVM

SVM, an SL algorithm, finds applications in classifying data for AR in education. SVM establishes hyperplanes that partition the feature space into classes, maximizing the margin between the classes and thereby minimizing generalization error. In the educational context, SVM in AR enhances students’ comprehension, and the combination of ML and AR yields impressive results [72].
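A minimal SVM sketch, assuming scikit-learn, fits a linear SVC to synthetic data and reports the learned support vectors; it illustrates the maximum-margin idea described above rather than any specific educational dataset.

```python
# Minimal SVM sketch: a linear SVC learns the maximum-margin hyperplane
# separating two classes.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = make_classification(n_samples=300, n_features=4, n_informative=3,
                           n_redundant=0, random_state=1)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=1)

svm = SVC(kernel="linear", C=1.0).fit(X_tr, y_tr)
print("test accuracy:", svm.score(X_te, y_te))
print("support vectors per class:", svm.n_support_)
```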

5.2. KNN

KNN, another ML method, classifies unseen examples by comparing them with labeled examples stored in a database. It is a versatile technique, widely used not only in education but also in fields such as nephropathy prediction in children, fault classification, intrusion detection systems, and AI applications [73,74].
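A minimal KNN sketch, again assuming scikit-learn, assigns an unseen example the majority label of its k nearest stored neighbors.

```python
# Minimal KNN sketch: an unseen example receives the majority label of its
# k nearest stored neighbours.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

X, y = load_iris(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=7)

knn = KNeighborsClassifier(n_neighbors=5).fit(X_tr, y_tr)
print("test accuracy:", knn.score(X_te, y_te))
```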

5.3. ANN

ANNs mimic the human brain’s learning process, excelling in solving non-linear problems. In the brain, interconnected neurons handle complex tasks. In ANNs, artificial neurons, akin to biological neurons, process information through interconnected nodes. This technology finds utility in addressing intricate problems that defy linear solutions [75].
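A minimal ANN sketch, assuming scikit-learn, trains a small multilayer perceptron on a non-linearly separable toy problem (two interleaved moons), which a purely linear model cannot solve.

```python
# Minimal ANN sketch: a small multilayer perceptron of interconnected
# artificial neurons solves a non-linear classification problem.
from sklearn.datasets import make_moons
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = make_moons(n_samples=500, noise=0.2, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

ann = MLPClassifier(hidden_layer_sizes=(32, 16), activation="relu",
                    max_iter=1000, random_state=0).fit(X_tr, y_tr)
print("test accuracy:", ann.score(X_te, y_te))
```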

AR for Object Tracking and Visualization

Ref. [76] introduces IVM-CNN, which combines the best features of recurrent neural networks (RNNs) and CNNs for object tracking and machine vision tasks. It outperforms previous models on the M2CAI 2016 contest datasets, with a mean average precision (mAP) of 97.1 for device diagnosis and a mean rate of 96.9. It also runs at 50 frames per second (FPS), ten times faster than region-based CNNs. The paper describes the use of Mask R-CNN, which replaces the region proposal network (RPN) with a region proposal module (RPM) to generate more accurate bounding boxes while requiring less labeling, improving the model’s reliability and effectiveness. The paper also presents the development of Microsoft HoloLens software, which provides an AR-based approach to clinical education and assistance. This technology enhances the visualization and understanding of medical data, thus improving healthcare practices.

5.4. CNN

CNNs are pivotal in AR for education. They possess the ability to identify relevant features without human supervision, making them indispensable in various domains, including speech recognition, face recognition, computer vision, and AR applications [77]. The weight-sharing feature in CNNs reduces overfitting and enhances generalization, setting them apart from conventional neural networks.
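The following sketch, assuming PyTorch, defines a small CNN for 28 × 28 grayscale inputs (for example, handwritten letters with 26 output classes); the convolutional kernels are shared across all spatial positions, which is the weight-sharing property noted above. The architecture and sizes are illustrative, not those of any model surveyed here.

```python
# Minimal CNN sketch: convolutional layers share their kernel weights across
# spatial positions (the weight-sharing property mentioned above).
import torch
import torch.nn as nn

class SmallCNN(nn.Module):
    def __init__(self, n_classes: int = 26):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 7 * 7, n_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.features(x)              # (N, 32, 7, 7) for 28x28 inputs
        return self.classifier(x.flatten(1))

model = SmallCNN()
dummy_batch = torch.randn(8, 1, 28, 28)   # 8 fake grayscale images
logits = model(dummy_batch)
print(logits.shape)                        # torch.Size([8, 26])
```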

5.4.1. SVM and CNN in AR for Education

A study in 2018 explored the use of SVM and CNN in AR to detect English alphabets as markers, enhancing learning experiences for students. The CNN model achieved an impressive accuracy of 96.5%, while SVM reached 92.5%. The research involved the creation of a custom dataset for training and validation, contributing significantly to marker-based AR systems in education [77].

5.4.2. ML for Motor Skills Assessment

In 2022, researchers delved into assessing the motor skills of early education students using SVM, KNN, DT, and CNN image recognition methods. Among these, the CNN model outperformed the others, achieving an accuracy of 82%. This study is a testament to the potential of ML in evaluating students’ abilities in various educational contexts [78].

5.4.3. Simulating Circuits with Capsule Networks

The field of electrical engineering education witnessed innovation in 2021 with the introduction of a system that enables students to simulate circuits on mobile devices using image recognition. Capsule networks, a form of DL, played a vital role in recognizing and classifying characters within circuit diagrams. With a remarkable 96% accuracy, capsule networks outperformed traditional CNNs, making circuit simulation more accessible and engaging for students [79].

5.4.4. AR for Alphabet Handwriting Learning

In 2022, a novel AR application named “Learn2Write” emerged, designed to aid children in learning alphabet handwriting. Leveraging ML techniques, including several CNN models like DenseNet, BornoNet, Xception, EkushNet, and MobileNetV2, the application empowers children to practice and perfect their handwriting skills. Among the models, EkushNet stood out due to its efficiency, achieving a test accuracy of 96.71%. The app not only assists with handwriting but also offers a promising avenue for enhancing early education through AR and ML [80].

5.4.5. AR for Image Classification in Education

The authors in [81] describe a technique for automatically generating various 3D views of textbook pages to create a large dataset that is then used to train CNNs such as AlexNet, GoogLeNet, VGG, or ResNet. The system stores the trained model and returns it to the client for classification in a web browser with TensorFlow.js, allowing book page recognition. It also enables the display of 3D graphics on top of recognized book pages, providing an AR marker generation method that preserves the original images of the books while increasing detection accuracy. The research offers a promising and low-cost AR approach that can be applied in a variety of settings, including education and training.
In the context of chemical experiments, Ref. [82] addresses the use of a transformer-based object detection model called detection transformer (DETR) for object identification in images and its integration into an AR mobile application. AR and computer vision techniques together present a viable method for improving learning applications’ user experiences. The method they used consists of two steps: first, the DETR model is built and trained on the customized dataset; next, it is integrated into the augmented reality application to use a multi-class classification approach to predict the experiment name and detect objects.

6. SL and UL Models in AR

6.1. Gesture Recognition in AR for Children

In 2019, researchers ventured into the realm of gesture recognition in children’s education through AR. They harnessed SVM for static gestures and hidden Markov models (HMMs) for dynamic gestures, fostering a tangible connection between physical gestures and virtual learning experiences. While static gestures were well modeled, there was room for improvement in handling dynamic gestures in AR for education [83].
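For illustration only (this is not the implementation of [83]), the following sketch combines the two ideas: an SVM classifies static gestures from fixed-length feature vectors, while one Gaussian HMM per dynamic gesture class is fitted with the hmmlearn package and an unseen sequence is assigned to the model with the highest log-likelihood. All data here are synthetic, and the gesture names are made up.

```python
# Illustrative sketch only: SVM for static gestures, per-class Gaussian HMMs
# for dynamic gestures. Assumes scikit-learn and hmmlearn; data are synthetic.
import numpy as np
from sklearn.svm import SVC
from hmmlearn import hmm

rng = np.random.default_rng(0)

# Static gestures: one fixed-length feature vector (e.g., joint angles) per sample.
X_static = np.vstack([rng.normal(0, 1, (50, 8)), rng.normal(3, 1, (50, 8))])
y_static = np.array([0] * 50 + [1] * 50)      # two static gesture classes
static_clf = SVC(kernel="rbf").fit(X_static, y_static)

# Dynamic gestures: one HMM per class, trained on concatenated sequences.
def make_sequences(offset, n_seq=20, length=30, dim=3):
    seqs = [offset + np.cumsum(rng.normal(0, 0.1, (length, dim)), axis=0)
            for _ in range(n_seq)]
    return np.vstack(seqs), [length] * n_seq

models = {}
for label, offset in {"swipe": 0.0, "circle": 2.0}.items():
    X_seq, lengths = make_sequences(offset)
    m = hmm.GaussianHMM(n_components=3, covariance_type="diag", n_iter=50)
    models[label] = m.fit(X_seq, lengths)

test_seq, _ = make_sequences(2.0, n_seq=1)    # an unseen "circle"-like sequence
predicted = max(models, key=lambda k: models[k].score(test_seq))
print("predicted dynamic gesture:", predicted)
```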

6.2. ARChem for Chemistry Education

In 2022, ARChem, a cutting-edge mobile application, emerged to revolutionize chemistry education. This app combines AR, AI, and ML to assist students in their chemistry studies. It excels in chemical equation identification and correction, image processing, text summarization, and even sentiment analysis through a Chatbot. The fusion of mobile development techniques, ML, and DL makes ARChem a game-changer in the realm of virtual education, aiming to alleviate the challenges students face in comprehending and applying complex chemistry concepts [84].

6.3. Interactive Multi-Meter Tutorial

Another 2022 innovation came in the form of an interactive multi-meter tutorial using AR and DL. By amalgamating TensorFlow’s object detection API with Unity 3D and AR Foundation, this project empowers students to learn how to use multi-meters. DL models facilitated real-time recognition of meter components, providing step-by-step guidance. Such applications demonstrate the potential of ML and AR in technical education, simplifying complex topics for students [85]. Figure 4 shows the advantages of ML-driven AR in education as discussed earlier.

7. Open Research Challenges

In the realm of using ML and DL techniques to enhance the educational experience with AR, several research challenges have emerged. While these methods offer numerous advantages, there are still gaps in our understanding and implementation. As this is an emerging technology, researchers often find themselves creating their datasets due to the limited availability of relevant datasets [77].
One significant challenge is the removal of multi-media noise from the immersive AR environment, a process that can be time consuming [78]. Additionally, researchers face the need to conduct experiments with kindergarten students to test the AR devices they have developed. This requires not only technological proficiency but also effective training for young learners on how to use AR devices [77].
Another critical challenge in the combination of ML and AR is the accuracy and speed of object recognition within complex diagrams [79]. Aligning AR objects seamlessly with real-world scenes and training models with a large amount of data are also formidable tasks [79]. To ensure a comfortable visual experience with head-mounted displays (HMDs), frame rates should ideally reach around 60 frames per second, i.e., a per-frame budget of roughly 16.67 milliseconds. However, edge-based approaches for low-resolution video transmission can introduce latencies exceeding this 16.67-millisecond budget [86].
Privacy and data security pose additional challenges. AR devices often transmit data on users’ surroundings to the edge for processing. Depending on the context, this information may need to remain confidential or private, necessitating robust data encryption. Furthermore, in industrial settings, the accuracy of information delivered to users via AR applications must be unquestionable [87].
Moreover, AR devices are underutilized in many fields, and both teachers and students may need training to maximize their potential. Additionally, implementing AR in educational settings may require additional resources and equipment [80]. ML models integrated into AR systems sometimes lack precision, hindering effective education. Recognizing objects accurately in AR from a distance, especially from several meters away, remains a challenge [85]. Dynamic motion images also pose difficulties for AR applications that primarily excel with static images. However, researchers have also proposed solutions to these challenges, as follows:
  • The accuracy and speed of object recognition have improved through the utilization of DL models and AR target databases [85].
  • Vuforia (v9.8) has been instrumental in tracking and aligning AR objects with real-world scenes [88].
  • Developers can create AR applications using ML models and AR technology through platforms like Unity3D [89] and Apple’s iOS SDK [90].
  • Improving the performance of ML models in AR relies heavily on the quality and quantity of training data.
Exploring the use of ML techniques to incorporate real-time feedback mechanisms within AR applications is a promising route for improving the learning experience. Addressing the ethical issues of using ML in educational AR environments, such as developing effective privacy safeguards and mitigating algorithmic biases, is also critical for ensuring responsible and fair technology use. Implementing real-world ML-powered AR applications in educational institutions is essential to provide educators and policymakers with vital information about student engagement, learning outcomes, and teacher effectiveness.
Future studies should also focus on enhancing the overall user experience of these applications, investigating pedagogical ways for customization, and examining the generalizability of findings across various educational settings. To create a holistic knowledge of the elements influencing the success of AR applications in education, interdisciplinary collaboration between ML scientists, educators, and psychologists is encouraged. In summary, exploring these future research topics has the potential to advance our understanding of the interaction of ML and AR in education. By addressing these research gaps, we can all work together to create more effective and ethical educational tools.

8. Conclusions

In today’s modern world, technology, including AR, has profoundly impacted various aspects of life, most notably education. Traditional teaching methods are gradually giving way to more immersive and interactive alternatives like AR, which offer a deeper understanding of educational content.
This survey delved into the integration of ML models into AR applications for education, exploring the diverse ML techniques used in this context. CNNs emerged as a popular choice due to their remarkable accuracy. Throughout our exploration, we discovered numerous applications developed by researchers to provide students with immersive learning experiences, fostering a comprehensive understanding of their subjects and improving overall learning efficiency.
AR technology has found applications across multiple educational domains, and we explored how AR models are implemented using SDKs and platforms. These tools play pivotal roles in the creation and deployment of AR solutions in educational settings.
Finally, we discussed several open research challenges and future directions that warrant further investigation. These directions have been derived from our comprehensive discussions and insights, pointing toward exciting opportunities for future advancements in the field of ML-enhanced AR education.

Author Contributions

Conceptualization, H.A.K. and S.J.; methodology, H.A.K.; formal analysis, H.A.K. and S.J.; investigation, M.J.P. and O.-J.K.; resources, J.-W.L.; data curation, H.A.K.; writing—original draft preparation, H.A.K. and S.J.; writing—review and editing, M.J.P., O.-J.K. and J.-W.L.; visualization, S.J.; supervision, M.J.P. and J.-W.L.; project administration, J.-W.L.; funding acquisition, J.-W.L. All authors have read and agreed to the published version of the manuscript.

Funding

This research was supported by the MSIT (Ministry of Science and ICT), Korea, under the ITRC (Information Technology Research Center) support program (IITP-2023-RS-2022-00156354) supervised by the IITP (Institute for Information & Communications Technology Planning & Evaluation). It was also supported by Institute of Information & communications Technology Planning & Evaluation (IITP) under the metaverse support program to nurture the best talents (IITP-2023-RS-2023-00254529) grant funded by the Korea government (MSIT).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
AI	Artificial intelligence
ANN	Artificial neural network
AR	Augmented reality
CNN	Convolutional neural network
DL	Deep learning
KNN	K-nearest neighbors
ML	Machine learning
SVM	Support vector machine
SL	Supervised learning
UL	Unsupervised learning
RL	Reinforcement learning
SSL	Semi-supervised learning
VR	Virtual reality
DT	Decision tree
LSTM	Long short-term memory
SDK	Software development kit
SMILES	Simplified molecular input line entry system
SOMs	Self-organizing maps
GANs	Generative adversarial networks
DBNs	Deep belief networks
EEG	Electroencephalogram
DANs	Deep adversarial networks
TDAs	Temporal difference algorithms
DRL	Deep reinforcement learning

References

  1. Garzón, J.; Acevedo, J. Meta-analysis of the impact of Augmented Reality on students’ learning gains. Educ. Res. Rev. 2019, 27, 244–260. [Google Scholar] [CrossRef]
  2. Wu, H.K.; Lee, S.W.Y.; Chang, H.Y.; Liang, J.C. Current status, opportunities and challenges of augmented reality in education. Comput. Educ. 2013, 62, 41–49. [Google Scholar] [CrossRef]
  3. Ren, Y.; Yang, Y.; Chen, J.; Zhou, Y.; Li, J.; Xia, R.; Yang, Y.; Wang, Q.; Su, X. A scoping review of deep learning in cancer nursing combined with augmented reality: The era of intelligent nursing is coming. Asia-Pac. J. Oncol. Nurs. 2022, 9, 100135. [Google Scholar] [CrossRef] [PubMed]
  4. Menghani, G. Efficient deep learning: A survey on making deep learning models smaller, faster, and better. ACM Comput. Surv. 2023, 55, 1–37. [Google Scholar] [CrossRef]
  5. LeCun, Y.; Bengio, Y.; Hinton, G. Deep learning. Nature 2015, 521, 436–444. [Google Scholar] [CrossRef]
  6. Gheisari, M.; Ebrahimzadeh, F.; Rahimi, M.; Moazzamigodarzi, M.; Liu, Y.; Dutta Pramanik, P.K.; Heravi, M.A.; Mehbodniya, A.; Ghaderzadeh, M.; Feylizadeh, M.R.; et al. Deep learning: Applications, architectures, models, tools, and frameworks: A comprehensive survey. CAAI Trans. Intell. Technol. 2023, 8, 581–606. [Google Scholar] [CrossRef]
  7. Jamil, S.; Jalil Piran, M.; Kwon, O.J. A comprehensive survey of transformers for computer vision. Drones 2023, 7, 287. [Google Scholar] [CrossRef]
  8. Hurst, W.; Mendoza, F.R.; Tekinerdogan, B. Augmented reality in precision farming: Concepts and applications. Smart Cities 2021, 4, 1454–1468. [Google Scholar] [CrossRef]
  9. Kovoor, J.G.; Gupta, A.K.; Gladman, M.A. Validity and effectiveness of augmented reality in surgical education: A systematic review. Surgery 2021, 170, 88–98. [Google Scholar] [CrossRef]
  10. Lu, J.; Cuff, R.F.; Mansour, M.A. Simulation in surgical education. Am. J. Surg. 2021, 221, 509–514. [Google Scholar] [CrossRef]
  11. Keller, D.S.; Grossman, R.C.; Winter, D.C. Choosing the new normal for surgical education using alternative platforms. Surgery 2020, 38, 617–622. [Google Scholar] [PubMed]
  12. Zheng, T.; Xie, W.; Xu, L.; He, X.; Zhang, Y.; You, M.; Yang, G.; Chen, Y. A machine learning-based framework to identify type 2 diabetes through electronic health records. Int. J. Med. Inform. 2017, 97, 120–127. [Google Scholar] [CrossRef] [PubMed]
  13. Salari, N.; Hosseinian-Far, A.; Mohammadi, M.; Ghasemi, H.; Khazaie, H.; Daneshkhah, A.; Ahmadi, A. Detection of sleep apnea using Machine learning algorithms based on ECG Signals: A comprehensive systematic review. Expert Syst. Appl. 2022, 187, 115950. [Google Scholar] [CrossRef]
  14. Khandelwal, P.; Srinivasan, K.; Roy, S.S. Surgical education using artificial intelligence, augmented reality and machine learning: A review. In Proceedings of the 2019 IEEE International Conference on Consumer Electronics-Taiwan (ICCE-TW), Yilan, Taiwan, 20–22 May 2019; pp. 1–2. [Google Scholar]
  15. Soltani, P.; Morice, A.H. Augmented reality tools for sports education and training. Comput. Educ. 2020, 155, 103923. [Google Scholar] [CrossRef]
  16. Radu, I. Why should my students use AR? A comparative review of the educational impacts of augmented-reality. In Proceedings of the 2012 IEEE International Symposium on Mixed and Augmented Reality (ISMAR), Atlanta, GA, USA, 5–8 November 2012; pp. 313–314. [Google Scholar]
  17. Burgsteiner, H.; Kandlhofer, M.; Steinbauer, G. Irobot: Teaching the basics of artificial intelligence in high schools. In Proceedings of the AAAI conference On Artificial Intelligence, Phoenix, AZ, USA, 12–17 February 2016; Volume 30. [Google Scholar]
  18. Chiu, T.K. A holistic approach to the design of artificial intelligence (AI) education for K-12 schools. TechTrends 2021, 65, 796–807. [Google Scholar] [CrossRef]
  19. Kim, J.; Shim, J. Development of an AR-based AI education app for non-majors. IEEE Access 2022, 10, 14149–14156. [Google Scholar] [CrossRef]
  20. Bistaman, I.N.M.; Idrus, S.Z.S.; Abd Rashid, S. The use of augmented reality technology for primary school education in Perlis, Malaysia. J. Phys. Conf. Ser. 2018, 1019, 012064. [Google Scholar] [CrossRef]
  21. Garzón, J. An overview of twenty-five years of augmented reality in education. Multimodal Technol. Interact. 2021, 5, 37. [Google Scholar] [CrossRef]
  22. Lin, H.M.; Wu, J.Y.; Liang, J.C.; Lee, Y.H.; Huang, P.C.; Kwok, O.M.; Tsai, C.C. A review of using multilevel modeling in e-learning research. Comput. Educ. 2023, 198, 104762. [Google Scholar] [CrossRef]
  23. Cho, J.; Rahimpour, S.; Cutler, A.; Goodwin, C.R.; Lad, S.P.; Codd, P. Enhancing reality: A systematic review of augmented reality in neuronavigation and education. World Neurosurg. 2020, 139, 186–195. [Google Scholar] [CrossRef]
  24. Fourman, M.S.; Ghaednia, H.; Lans, A.; Lloyd, S.; Sweeney, A.; Detels, K.; Dijkstra, H.; Oosterhoff, J.H.; Ramsey, D.C.; Do, S.; et al. Applications of augmented and virtual reality in spine surgery and education: A review. In Seminars in Spine Surgery; Elsevier: Amsterdam, The Netherlands, 2021; Volume 33, p. 100875. [Google Scholar]
  25. Theodoropoulos, A.; Lepouras, G. Augmented Reality and programming education: A systematic review. Int. J. Child-Comput. Interact. 2021, 30, 100335. [Google Scholar] [CrossRef]
  26. Gouveia, P.F.; Luna, R.; Fontes, F.; Pinto, D.; Mavioso, C.; Anacleto, J.; Timóteo, R.; Santinha, J.; Marques, T.; Cardoso, F.; et al. Augmented Reality in Breast Surgery Education. Breast Care 2023, 18, 182–186. [Google Scholar] [CrossRef] [PubMed]
  27. Urbina Coronado, P.D.; Demeneghi, J.A.A.; Ahuett-Garza, H.; Orta Castañon, P.; Martínez, M.M. Representation of machines and mechanisms in augmented reality for educative use. Int. J. Interact. Des. Manuf. (IJIDeM) 2022, 16, 643–656. [Google Scholar] [CrossRef]
  28. Ahmed, N.; Lataifeh, M.; Alhamarna, A.F.; Alnahdi, M.M.; Almansori, S.T. LeARn: A Collaborative Learning Environment using Augmented Reality. In Proceedings of the 2021 IEEE 2nd International Conference on Human-Machine Systems (ICHMS), Magdeburg, Germany, 8–10 September 2021; pp. 1–4. [Google Scholar]
  29. Zhu, J.; Ji, S.; Yu, J.; Shao, H.; Wen, H.; Zhang, H.; Xia, Z.; Zhang, Z.; Lee, C. Machine learning-augmented wearable triboelectric human-machine interface in motion identification and virtual reality. Nano Energy 2022, 103, 107766. [Google Scholar] [CrossRef]
  30. Iqbal, M.Z.; Mangina, E.; Campbell, A.G. Exploring the real-time touchless hand interaction and intelligent agents in augmented reality learning applications. In Proceedings of the 2021 7th International Conference of the Immersive Learning Research Network (iLRN), Virtual Conference, 17 May–10 June 2021; pp. 1–8. [Google Scholar]
  31. Iqbal, M.Z.; Mangina, E.; Campbell, A.G. Current challenges and future research directions in augmented reality for education. Multimodal Technol. Interact. 2022, 6, 75. [Google Scholar] [CrossRef]
  32. Gupta, A.; Nisar, H. An Improved Framework to Assess the Evaluation of Extended Reality Healthcare Simulators using Machine Learning. In Proceedings of the 2022 IEEE/ACM Conference on Connected Health: Applications, Systems and Engineering Technologies (CHASE), Arlington, VA, USA, 17–19 November 2022; pp. 188–192. [Google Scholar]
  33. Martins, R.M.; Gresse Von Wangenheim, C. Findings on Teaching Machine Learning in High School: A Ten-Year Systematic Literature Review. Inform. Educ. 2022, 22, 421–440. [Google Scholar] [CrossRef]
  34. Janiesch, C.; Zschech, P.; Heinrich, K. Machine learning and deep learning. Electron. Mark. 2021, 31, 685–695. [Google Scholar] [CrossRef]
  35. Lou, G.; Shi, H. Face image recognition based on convolutional neural network. China Commun. 2020, 17, 117–124. [Google Scholar] [CrossRef]
  36. William, P.; Gade, R.; esh Chaudhari, R.; Pawar, A.; Jawale, M. Machine Learning based Automatic Hate Speech Recognition System. In Proceedings of the 2022 International Conference on Sustainable Computing and Data Communication Systems (ICSCDS), Erode, India, 7–9 April 2022; pp. 315–318. [Google Scholar]
  37. Polyakov, E.; Mazhanov, M.; Rolich, A.; Voskov, L.; Kachalova, M.; Polyakov, S. Investigation and development of the intelligent voice assistant for the Internet of Things using machine learning. In Proceedings of the 2018 Moscow Workshop on Electronic and Networking Technologies (MWENT), Moscow, Russia, 14–16 March 2018; pp. 1–5. [Google Scholar]
  38. Tuncali, C.E.; Fainekos, G.; Ito, H.; Kapinski, J. Simulation-based adversarial test generation for autonomous vehicles with machine learning components. In Proceedings of the 2018 IEEE Intelligent Vehicles Symposium (IV), Changshu, China, 26–30 June 2018; pp. 1555–1562. [Google Scholar]
  39. Kühl, N.; Schemmer, M.; Goutier, M.; Satzger, G. Artificial intelligence and machine learning. Electron. Mark. 2022, 32, 2235–2244. [Google Scholar] [CrossRef]
  40. Shinde, P.P.; Shah, S. A review of machine learning and deep learning applications. In Proceedings of the 2018 Fourth International Conference on Computing Communication Control and Automation (ICCUBEA), Pune, India, 16–18 August 2018; pp. 1–6. [Google Scholar]
  41. Ray, S. A quick review of machine learning algorithms. In Proceedings of the 2019 International Conference on Machine Learning, Big Data, Cloud and Parallel Computing (COMITCon), Faridabad, India, 14–16 February 2019; pp. 35–39. [Google Scholar]
  42. Jamil, S.; Piran, M.J.; Rahman, M.; Kwon, O.J. Learning-driven lossy image compression: A comprehensive survey. Eng. Appl. Artif. Intell. 2023, 123, 106361. [Google Scholar] [CrossRef]
  43. Sarker, I.H.; Kayes, A.; Badsha, S.; Alqahtani, H.; Watters, P.; Ng, A. Cybersecurity data science: An overview from machine learning perspective. J. Big Data 2020, 7, 1–29. [Google Scholar] [CrossRef]
  44. Uddin, S.; Khan, A.; Hossain, M.E.; Moni, M.A. Comparing different supervised machine learning algorithms for disease prediction. BMC Med. Inform. Decis. Mak. 2019, 19, 1–16. [Google Scholar] [CrossRef] [PubMed]
  45. Saravanan, R.; Sujatha, P. A state of art techniques on machine learning algorithms: A perspective of supervised learning approaches in data classification. In Proceedings of the 2018 Second International Conference on Intelligent Computing and Control Systems (ICICCS), Madurai, India, 14–15 June 2018; pp. 945–949. [Google Scholar]
  46. Sarker, I.H. Machine learning: Algorithms, real-world applications and research directions. SN Comput. Sci. 2021, 2, 160. [Google Scholar] [CrossRef] [PubMed]
  47. Dike, H.U.; Zhou, Y.; Deveerasetty, K.K.; Wu, Q. Unsupervised learning based on artificial neural network: A review. In Proceedings of the 2018 IEEE International Conference on Cyborg and Bionic Systems (CBS), Shenzhen, China, 25–27 October 2018; pp. 322–327. [Google Scholar]
  48. Van Engelen, J.E.; Hoos, H.H. A survey on semi-supervised learning. Mach. Learn. 2020, 109, 373–440. [Google Scholar] [CrossRef]
  49. Aradi, S. Survey of deep reinforcement learning for motion planning of autonomous vehicles. IEEE Trans. Intell. Transp. Syst. 2020, 23, 740–759. [Google Scholar] [CrossRef]
  50. Qiang, W.; Zhongli, Z. Reinforcement learning model, algorithms and its application. In Proceedings of the 2011 International Conference on Mechatronic Science, Electric Engineering and Computer (MEC), Jilin, China, 19–22 August 2011; pp. 1143–1146. [Google Scholar]
  51. Khan, T.; Johnston, K.; Ophoff, J. The impact of an augmented reality application on learning motivation of students. Adv. Hum.-Comput. Interact. 2019, 2019, 7208494. [Google Scholar] [CrossRef]
  52. Sirakaya, M.; Alsancak Sirakaya, D. Trends in educational augmented reality studies: A systematic review. Malays. Online J. Educ. Technol. 2018, 6, 60–74. [Google Scholar] [CrossRef]
  53. Tzima, S.; Styliaras, G.; Bassounas, A. Augmented reality applications in education: Teachers point of view. Educ. Sci. 2019, 9, 99. [Google Scholar] [CrossRef]
  54. Wei, X.; Weng, D.; Liu, Y.; Wang, Y. Teaching based on augmented reality for a technical creative design course. Comput. Educ. 2015, 81, 221–234. [Google Scholar] [CrossRef]
  55. Holley, D.; Hobbs, M. Augmented reality for education. In Encyclopedia of Educational Innovation; Springer: Singapore, 2019. [Google Scholar]
  56. Koutromanos, G.; Sofos, A.; Avraamidou, L. The use of augmented reality games in education: A review of the literature. Educ. Media Int. 2015, 52, 253–271. [Google Scholar] [CrossRef]
  57. Scrivner, O.; Madewell, J.; Buckley, C.; Perez, N. Augmented reality digital technologies (ARDT) for foreign language teaching and learning. In Proceedings of the 2016 Future Technologies Conference (FTC), San Francisco, CA, USA, 6–7 December 2016; pp. 395–398. [Google Scholar]
  58. Radosavljevic, S.; Radosavljevic, V.; Grgurovic, B. The potential of implementing augmented reality into vocational higher education through mobile learning. Interact. Learn. Environ. 2020, 28, 404–418. [Google Scholar] [CrossRef]
  59. Akçayır, M.; Akçayır, G. Advantages and challenges associated with augmented reality for education: A systematic review of the literature. Educ. Res. Rev. 2017, 20, 1–11. [Google Scholar] [CrossRef]
  60. Pochtoviuk, S.; Vakaliuk, T.; Pikilnyak, A. Possibilities of application of augmented reality in different branches of education. Educ. Dimens. 2020, 54, 179–197. [Google Scholar] [CrossRef]
  61. Akçayır, M.; Akçayır, G.; Pektas, H.; Ocak, M. AR in science laboratories: The effects of AR on university students’ laboratory skills and attitudes toward science laboratories. Comput. Hum. Behav. 2016, 57, 334–342. [Google Scholar] [CrossRef]
  62. Lee, S.; Shetty, A.S.; Cavuoto, L.A. Modeling of Learning Processes Using Continuous-Time Markov Chain for Virtual-Reality-Based Surgical Training in Laparoscopic Surgery. IEEE Trans. Learn. Technol. 2023, 17, 462–473. [Google Scholar] [CrossRef] [PubMed]
  63. Martin Sagayam, K.; Ho, C.C.; Henesey, L.; Bestak, R. 3D scenery learning on solar system by using marker based augmented reality. In Proceedings of the 4th International Conference of the Virtual and Augmented Reality in Education, VARE 2018, Budapest, Hungary, 17–19 September 2018; pp. 139–143. [Google Scholar]
  64. Brito, P.Q.; Stoyanova, J. Marker versus markerless augmented reality. Which has more impact on users? Int. J. Hum.-Interact. 2018, 34, 819–833. [Google Scholar] [CrossRef]
  65. Yu, J.; Denham, A.R.; Searight, E. A systematic review of augmented reality game-based Learning in STEM education. Educ. Technol. Res. Dev. 2022, 70, 1169–1194. [Google Scholar] [CrossRef]
  66. Bouaziz, R.; Alhejaili, M.; Al-Saedi, R.; Mihdhar, A.; Alsarrani, J. Using Marker Based Augmented Reality to teach autistic eating skills. In Proceedings of the 2020 IEEE International Conference on Artificial Intelligence and Virtual Reality (AIVR), Utrecht, The Netherlands, 14–18 December 2020; pp. 239–242. [Google Scholar] [CrossRef]
  67. Liu, B.; Tanaka, J. Virtual marker technique to enhance user interactions in a marker-based AR system. Appl. Sci. 2021, 11, 4379. [Google Scholar] [CrossRef]
  68. Sharma, S.; Kaikini, Y.; Bhodia, P.; Vaidya, S. Markerless augmented reality based interior designing system. In Proceedings of the 2018 International Conference on Smart City and Emerging Technology (ICSCET), Mumbai, India, 5 January 2018; pp. 1–5. [Google Scholar]
  69. Hui, J. Approach to the interior design using augmented reality technology. In Proceedings of the 2015 Sixth International Conference on Intelligent Systems Design and Engineering Applications (ISDEA), Guizhou, China, 18–19 August 2015; pp. 163–166. [Google Scholar]
  70. Georgiou, Y.; Kyza, E.A. The development and validation of the ARI questionnaire: An instrument for measuring immersion in location-based augmented reality settings. Int. J. Hum.-Comput. Stud. 2017, 98, 24–37. [Google Scholar] [CrossRef]
  71. Kleftodimos, A.; Moustaka, M.; Evagelou, A. Location-Based Augmented Reality for Cultural Heritage Education: Creating Educational, Gamified Location-Based AR Applications for the Prehistoric Lake Settlement of Dispilio. Digital 2023, 3, 18–45. [Google Scholar] [CrossRef]
  72. Burman, I.; Som, S. Predicting students academic performance using support vector machine. In Proceedings of the 2019 Amity International Conference on Artificial Intelligence (AICAI), Dubai, United Arab Emirates, 4–6 February 2019; pp. 756–759. [Google Scholar]
  73. Mohamed, A.E. Comparative study of four supervised machine learning techniques for classification. Int. J. Appl. 2017, 7, 1–15. [Google Scholar]
  74. Lopez-Bernal, D.; Balderas, D.; Ponce, P.; Molina, A. Education 4.0: Teaching the basics of KNN, LDA and simple perceptron algorithms for binary classification problems. Future Internet 2021, 13, 193. [Google Scholar] [CrossRef]
  75. Chen, C.H.; Wu, C.L.; Lo, C.C.; Hwang, F.J. An augmented reality question answering system based on ensemble neural networks. IEEE Access 2017, 5, 17425–17435. [Google Scholar] [CrossRef]
  76. K, P.; N, B.; D, M.; S, H.; M, K.; Kumar, V. Artificial Neural Networks in Healthcare for Augmented Reality. In Proceedings of the 2022 Fourth International Conference on Cognitive Computing and Information Processing (CCIP), Bengaluru, India, 23–24 December 2022; pp. 1–5. [Google Scholar] [CrossRef]
  77. Dash, A.K.; Behera, S.K.; Dogra, D.P.; Roy, P.P. Designing of marker-based augmented reality learning environment for kids using convolutional neural network architecture. Displays 2018, 55, 46–54. [Google Scholar] [CrossRef]
  78. Rodríguez, A.O.R.; Riaño, M.A.; Gaona-García, P.A.; Montenegro-Marín, C.E.; Sarría, Í. Image Classification Methods Applied in Immersive Environments for Fine Motor Skills Training in Early Education. Int. J. Interact. Multimed. Artif. Intell. 2019, 5, 151–158. [Google Scholar] [CrossRef]
  79. Alhalabi, M.; Ghazal, M.; Haneefa, F.; Yousaf, J.; El-Baz, A. Smartphone Handwritten Circuits Solver Using Augmented Reality and Capsule Deep Networks for Engineering Education. Educ. Sci. 2021, 11, 661. [Google Scholar] [CrossRef]
  80. Opu, M.N.I.; Islam, M.R.; Kabir, M.A.; Hossain, M.S.; Islam, M.M. Learn2Write: Augmented Reality and Machine Learning-Based Mobile App to Learn Writing. Computers 2022, 11, 4. [Google Scholar] [CrossRef]
  81. Le, H.; Nguyen, M.; Nguyen, Q.; Nguyen, H.; Yan, W.Q. Automatic Data Generation for Deep Learning Model Training of Image Classification used for Augmented Reality on Pre-school Books. In Proceedings of the 2020 International Conference on Multimedia Analysis and Pattern Recognition (MAPR), Ha Noi, Vietnam, 8–9 October 2020; pp. 1–5. [Google Scholar]
  82. Hanafi, A.; Elaachak, L.; Bouhorma, M. Machine learning based augmented reality for improved learning application through object detection algorithms. Int. J. Electr. Comput. Eng. (IJECE) 2023, 13, 1724–1733. [Google Scholar] [CrossRef]
  83. Sun, M.; Wu, X.; Fan, Z.; Dong, L. Augmented reality-based educational design for children. Int. J. Emerg. Technol. Learn. 2019, 14, 51. [Google Scholar] [CrossRef]
  84. Menikrama, M.; Liyanagunawardhana, C.; Amarasekara, H.; Ramasinghe, M.; Weerasinghe, L.; Weerasinghe, I. ARChem: Augmented Reality Chemistry Lab. In Proceedings of the 2021 IEEE 12th Annual Information Technology, Electronics and Mobile Communication Conference (IEMCON), Vancouver, BC, Canada, 27–30 October 2021; pp. 0276–0280. [Google Scholar]
  85. Estrada, J.; Paheding, S.; Yang, X.; Niyaz, Q. Deep-Learning-Incorporated Augmented Reality Application for Engineering Lab Training. Appl. Sci. 2022, 12, 5159. [Google Scholar] [CrossRef]
  86. Salman, S.M.; Sitompul, T.A.; Papadopoulos, A.V.; Nolte, T. Fog Computing for Augmented Reality: Trends, Challenges and Opportunities. In Proceedings of the 2020 IEEE International Conference on Fog Computing (ICFC), Sydney, NSW, Australia, 21–24 April 2020; pp. 56–63. [Google Scholar] [CrossRef]
  87. Langfinger, M.; Schneider, M.; Stricker, D.; Schotten, H.D. Addressing security challenges in industrial augmented reality systems. In Proceedings of the 2017 IEEE 15th International Conference on Industrial Informatics (INDIN), Emden, Germany, 24–26 July 2017; pp. 299–304. [Google Scholar]
  88. Amara, K.; Aouf, A.; Kerdjidj, O.; Kennouche, H.; Djekoune, O.; Guerroudji, M.A.; Zenati, N.; Aouam, D. Augmented Reality for COVID-19 Aid Diagnosis: Ct-Scan segmentation based Deep Learning. In Proceedings of the 2022 7th International Conference on Image and Signal Processing and their Applications (ISPA), Mostaganem, Algeria, 8–9 May 2022; pp. 1–6. [Google Scholar]
  89. Supruniuk, K.; Andrunyk, V.; Chyrun, L. AR Interface for Teaching Students with Special Needs. In Proceedings of the COLINS, Lviv, Ukraine, 23–24 April 2020; pp. 1295–1308. [Google Scholar]
  90. Sakshuwong, S.; Weir, H.; Raucci, U.; Martínez, T.J. Bringing chemical structures to life with augmented reality, machine learning, and quantum chemistry. J. Chem. Phys. 2022, 156, 204801. [Google Scholar] [CrossRef] [PubMed]
Figure 1. Organization of the survey.
Figure 2. Number of publications in each year from 2017 to 2023 based on Web of Science (data acquired on 30 April 2024).
Figure 3. Leading countries working on ML-assisted AR applications in education based on Web of Science.
Figure 4. Benefits of ML-driven AR in education.
Table 1. Summary of existing surveys. In the original table, icon columns under "Scope of the Surveys" indicate whether each work covers AR, SVM, KNN, ANN, and CNN; those icons are not reproduced here.

Research | Year | Contributions and Limitations
[1] | 2019 | Study of the medium's effect on student learning gains. ML models for AR were not focused on.
[3] | 2022 | Focused on uses of AR and DL in cancer nursing. Not all ML models were discussed.
[8] | 2021 | Discussed AR in plant education for precise farming. Only conventional methods were discussed, not ML models.
[21] | 2021 | Overview of AR; description of three generations of AR in education; challenges of AR applications.
[14] | 2019 | Explored the combination of AR, AI, and ML for surgical education.
[22] | 2021 | Highlighted the application of HLM as a multilevel modeling technique in e-learning research.
[23] | 2020 | Surveyed current technologies and limitations in AR for neurosurgical training as an educational tool.
[24] | 2021 | Reviewed current clinical applications of AR in spine surgery and education.
[25] | 2021 | Studied the impact of AR on programming education and its challenges and benefits for student learning.
This survey | 2024 | Focuses on ML models in AR for different fields of education, with the pros and cons of each model.