Article

Application of Virtual Reality in Developing the Digital Twin for an Integrated Robot Learning System

1 Institute of Learning Sciences and Technologies, National Tsing Hua University, Hsinchu 300044, Taiwan
2 Institute of Mathematics and Science Education, National Tsing Hua University, Hsinchu 300044, Taiwan
3 Graduate Degree Program of Robotics, National Yang Ming Chiao Tung University, Hsinchu 300093, Taiwan
4 Institute of Information Systems and Applications, National Tsing Hua University, Hsinchu 300044, Taiwan
* Author to whom correspondence should be addressed.
Electronics 2024, 13(14), 2848; https://doi.org/10.3390/electronics13142848
Submission received: 20 June 2024 / Revised: 14 July 2024 / Accepted: 17 July 2024 / Published: 19 July 2024
(This article belongs to the Special Issue Applications of Virtual, Augmented and Mixed Reality)

Abstract

Robotics includes complex mathematical calculations and coordinate transformations in forward and inverse kinematics, path planning, and robot dynamics. Students may experience a high cognitive load and lose learning motivation because robotics can be complex and challenging to understand. This study applied virtual reality (VR) technology in robotics education to simplify and visualize complex robot kinematics, aiming to increase learning motivation and reduce cognitive load. This study incorporated real and virtual robot control to develop an integrated robot learning system. This system enables learners to control the digital twin of a physical robot and observe the synchronized motion of both the virtual and physical robots. Users can operate the virtual robot to achieve the target position by setting joint parameters or using values calculated from inverse kinematics. They can also understand the principle of digital twins by observing the synchronous motion of both robots. A teaching experiment was conducted to explore the performance of applying VR in robotics education and its impacts on cognitive load and learning motivation. The system was improved based on user responses to facilitate subsequent promotional activities. VR can transform complex robotics into easily understandable learning experiences and provide an interactive user interface, making the system a suitable learning tool for STEM education.

1. Introduction

According to the International Organization for Standardization (ISO), robots are automated, position-controllable, multifunctional machines capable of executing complex actions. These machines, characterized by multiple axes, can handle materials, parts, tools, and special devices to perform various tasks [1]. In general, the term robot encompasses mechanical entities that simulate the behavior or thinking of humans or other biological beings. However, various classification methods and controversies exist regarding the narrow definition of robots, as some computer programs are also considered robots. In modern industry, robots refer to artificial devices capable of performing tasks autonomously, with the goal of replacing or assisting human labor [2].
The applications of robots are diverse, extending from daily life to various industries such as production, service, mining, construction, agriculture, healthcare, and aerospace exploration. Examples include Honda’s ASIMO, introduced in 2000 [3], TOSY Robotics’ TOPIO [4], and industrial robots and swarm robots that operate collectively [5]. Discussions on robots are also prevalent in science fiction films, and their scope even extends to nanorobots [6]. Robots find extensive applications in manufacturing, assembly, transportation, surgery, weaponry, and laboratory research. Specifically, they are well suited for dirty, dangerous, or tedious tasks [7].
Virtual Reality (VR) is a technology that creates a simulated environment, allowing users to interact with a three-dimensional space using special equipment, such as Head-Mounted Displays (HMDs), providing an immersive experience that can mimic or transcend real-world settings [8]. VR is highly useful in robotic simulation because it enhances both the development and operational aspects of robotics. This study utilized VR technology to create a Cyber-Physical Robot Learning System (CPRLS), enabling users to manipulate a virtual robot and observe the results of robot kinematics execution on both the virtual and physical robots. Users can understand how a robot completes tasks by setting joint parameters to achieve the target position and by avoiding collisions with obstacles through path planning. They can also gain insights into VR technology and applications of digital twins by observing the synchronous motion of the virtual and physical robots. This experience effectively enhances the understanding of robot kinematics and digital twins, making the system a suitable assistance tool for promoting robotics education.
The limited capacity of working memory can affect learning, especially the understanding of complex robot kinematics. Sweller et al. [9] defined cognitive load as the load placed on an individual's cognitive system when engaged in learning activities, and the difficulty of teaching materials is closely related to cognitive load. This theory identifies three types of cognitive load. Intrinsic cognitive load refers to the innate difficulty of the material being learned, influenced by learners' background knowledge and the material's complexity. Extraneous cognitive load is the unnecessary mental effort imposed by the way information is presented, often due to poor instructional design. Germane cognitive load is the beneficial mental effort dedicated to processing, understanding, and integrating new information with existing knowledge, facilitating the construction of schemas. Effective instructional design seeks to reduce extraneous load while managing intrinsic load and enhancing germane load to optimize learning.
Artino [10] explored the impacts of instructional strategies and presentation formats on learners' cognitive load during the learning process, identifying instructional designs that can effectively reduce the burden on working memory. Following Artino's suggestion, this study utilized VR technology to simplify and visualize complex robot kinematics, aiming to reduce learners' cognitive load and enhance their learning motivation. To this end, the objective of this study is to develop an integrated robot learning system, the CPRLS, which provides an immersive and interactive environment for learning robotics; the learning content includes forward kinematics, inverse kinematics, and path planning.
Traditional methods of teaching robotics often rely heavily on mathematical calculations and coordinate transformations, which can be difficult for students to understand. This often leads to lower learning motivation and a higher cognitive load. There is a need for more effective teaching tools that can simplify complex concepts and make learning more engaging and accessible. While VR technology has been explored in various educational fields, its application in robotics education, specifically in the context of an integrated learning system like CPRLS, remains underexplored.
This study aims to fill this gap by investigating whether VR technology can enhance learning outcomes and motivation and reduce cognitive load in robotics education. More precisely, this study used VR technology to simplify and visualize robot kinematics by creating a digital twin to synchronize with the physical robot. A teaching experiment was conducted to explore students’ learning effectiveness, learning motivation, cognitive load, and attitudes toward the application of VR in robotics education. The research questions in this study are listed as follows:
  • Does the CPRLS improve students’ learning achievement in robotics compared to traditional teaching methods?
  • Does the CPRLS enhance students’ motivation to learn robotics more than traditional teaching methods?
  • Does the CPRLS incur less cognitive load in learning robotics compared to traditional teaching methods?
  • What is the students’ level of satisfaction after using the CPRLS for learning robotics?
By addressing these research questions, this study aims to provide comprehensive evidence on the benefits and challenges of using VR in robotics education, potentially leading to more effective and engaging teaching methods.

2. Literature Review

In this section, a literature review is conducted on topics related to robots and STEM education, digital twins, virtual reality, and its application in learning robotics. This review aims to set the stage for the current research by connecting it with existing knowledge and identifying the problems for investigation.

2.1. Robots and STEM Education

Robotics is a multidisciplinary field that focuses on designing, building, operating, and applying robots. It integrates various disciplines, such as electrical engineering, mechanical engineering, control engineering, electronics, software engineering, computer engineering, mathematics, and bioengineering [11]. Robotics has gained momentum with the advances in precision mechanical technology, industrial automation, and wireless remote control, following rapid progress in the electronics industry. Generally speaking, robots are suitable for tasks characterized by high repeatability [12], and they are commonly used as teaching aids in STEM education [13].
STEM education is a cross-domain, integrated teaching approach focusing on science, technology, engineering, and mathematics. Unlike traditional cramming education or unidirectional knowledge transfer, STEM education guides students to think critically by combining knowledge and skills across disciplines to solve real-world problems. It aims to cultivate individuals who enjoy active learning and encourages children to explore autonomously and find answers to problems [14]. Therefore, STEM education can significantly enhance learners’ interest, motivation, and learning outcomes [15]. The application of robots in STEM education can increase creativity and improve hands-on skills. For example, students must design the structure of a robot and its control programs to accomplish designated tasks, but they may also encounter unexpected problems, thereby encouraging critical thinking and improving problem-solving skills.
The four industrial revolutions involve coal, gas, electronics, nuclear, the internet, and renewable energy, and they have significantly changed human life and global landscapes. The first industrial revolution harnessed the power of hydraulics and steam as energy sources to overcome the limitations of human and animal labor. The second industrial revolution utilized electricity to support mass production, realizing machines producing machines. The third industrial revolution employed electronic devices and information technology to reduce human influence by enhancing the precision and automation of industrial manufacturing. Industry 4.0, the fourth industrial revolution, focuses on integrating artificial intelligence (AI) with sensing and control systems.
High levels of automation allow the proactive elimination of production obstacles while utilizing the Internet of Things (IoT) to create an intelligent industry environment. A smart factory with adaptability, resource efficiency, and human-machine collaborative manufacturing connects the supply chain with workplace automation and customer demand. This leads to product-service hybrids and customized supply capabilities [16]. In modernized industries, the combination of robot control systems with intelligent awareness allows the testing and training of robots in automated workplace environments. The industry can use this approach to strengthen production efficiency and work effectiveness while driving innovation and promoting sustainable growth.

2.2. Digital Twins and AI Robots

This study utilized VR technology to simplify and visualize the complex principles of robot kinematics to increase learning motivation and reduce cognitive load. It combines the sensors and control systems of real robots to provide learners with an integrated virtual and real environment for robot training. Industry 4.0 leverages digital twin technology to enable information sharing and collaboration among automated equipment, industrial robots, and human workers in a factory. They can obtain real-time data from dynamic processes to prevent unexpected shutdowns and ensure workplace safety [17].
Digital twins refer to dynamic digital representations or counterparts of workflows, services, and physical devices. Digital twins can be applied based on analyzed data in a work environment, allowing for the assessment and operation of physical products [18]. Digital twin technology is a mechanism for fusing the virtual and real worlds into an integrated work environment. Essentially, digital twins use real-world data to create simulated environments through computer programs to predict the operation of products or processes. When combined with the IoT, AI, and software tools, digital twins can enhance production efficiency and reliability as well as product quality [19].
AI, also known as machine intelligence, is the manifestation of human intelligence in machines created by humans, and it typically involves the use of computer programs to demonstrate human-like cognitive functions. An AI system is a machine or computer program that simulates human thinking and cognitive functions, with the capability to learn and solve problems. As an area of computer science, AI concerns systems that perceive their environment and make decisions to enhance task success rates. Such systems can learn from past experiences, make rational decisions, and respond rapidly to stimuli or inputs [20]. The goals of AI researchers include constructing meaningful reasoning methods and developing computer programs that understand the surrounding environment and generate corresponding approaches to address societal challenges and enhance human welfare. The main components of AI are as follows:
  • Perception: The ability to gather data from the environment through sensors such as cameras, microphones, and other devices.
  • Reasoning and Decision-Making: The process of interpreting data, making inferences, and deciding on actions based on algorithms and models.
  • Learning: The capability to improve performance over time through various forms of learning, such as supervised, unsupervised, and reinforcement learning.
In AI-related research, problem-solving methods rooted in symbolic and logical reasoning are essential. When it comes to problems that individual actions cannot solve, the focus shifts to finding a series of steps to achieve the goal. Path planning for mobile robots is an example of AI solutions to such problems, and search algorithms are commonly used to find a suitable path in the state space. Generally, these algorithms fall into two categories: uninformed search and heuristic search. In addition, a high-order polynomial may be required to define the continuity of acceleration in a robot’s moving path.
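As a concrete illustration of an uninformed search, the sketch below performs a breadth-first search over a simple occupancy grid to find a collision-free path; the grid representation, class name, and four-connected neighborhood are illustrative assumptions rather than the planner used in this study.

```csharp
using System;
using System.Collections.Generic;

// A minimal sketch of uninformed (breadth-first) search on an occupancy grid.
// A heuristic search such as A* would add a cost estimate to the goal;
// the structure of the state-space exploration is otherwise the same.
public static class GridPathPlanner
{
    public static List<(int r, int c)> FindPath(bool[,] blocked, (int r, int c) start, (int r, int c) goal)
    {
        int rows = blocked.GetLength(0), cols = blocked.GetLength(1);
        var parent = new Dictionary<(int, int), (int, int)>();
        var queue = new Queue<(int, int)>();
        queue.Enqueue(start);
        parent[start] = start;

        int[] dr = { -1, 1, 0, 0 };
        int[] dc = { 0, 0, -1, 1 };

        while (queue.Count > 0)
        {
            var cur = queue.Dequeue();
            if (cur == goal) break;
            for (int k = 0; k < 4; k++)
            {
                var next = (cur.Item1 + dr[k], cur.Item2 + dc[k]);
                if (next.Item1 < 0 || next.Item1 >= rows || next.Item2 < 0 || next.Item2 >= cols) continue;
                if (blocked[next.Item1, next.Item2] || parent.ContainsKey(next)) continue;
                parent[next] = cur;   // remember how this cell was reached
                queue.Enqueue(next);
            }
        }

        if (!parent.ContainsKey(goal)) return null; // no collision-free path found

        // Reconstruct the path by walking back from the goal to the start.
        var path = new List<(int, int)> { goal };
        var node = goal;
        while (node != start)
        {
            node = parent[node];
            path.Add(node);
        }
        path.Reverse();
        return path;
    }
}
```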
The development of AI robots includes two stages: (1) basic robots with sensing, recognition, understanding, and judgment functions, and (2) advanced robots with the ability to learn from experience. Basic intelligent robots are already widely deployed, while advanced robots with learning capabilities are still experimental. The development of intelligent robots has the potential to change future lifestyles [21]. The ultimate goal is to perform in-depth analyses of large amounts of data, from production-line robots to real-time monitoring of factory operations for remote troubleshooting. These applications exemplify the characteristics of Industry 4.0, which focuses on developing AI systems through big data and machine learning to advance their understanding and capabilities [22]. A robot vision system is developed in this study to identify the shape, color, and position of an object, enabling the AI robot to locate and grab it automatically.

2.3. Virtual Reality

This study combined virtual reality with robotics to enhance learners' motivation and help them understand how robots are applied in industries and household chores. Virtual reality provides an immersive environment created through 3D visualization, utilizing human senses and special devices like HMDs to simulate real-life scenarios. In the virtual world, users can see, hear, and interact with virtual objects just as if they were in the real world. When users change their positions, the computer immediately performs complex calculations and transmits simulated visual images to the display to generate a sense of presence [23]. Virtual reality combines computer graphics, simulation and modeling, AI, positional sensing, and 3D display, constituting a real-time system developed from modern immersive technology for simulating various situations.
Virtual reality has attracted the attention of researchers in various fields, such as science, engineering, healthcare, education, and entertainment. A virtual and realistic simulation can achieve lifelike effects without facing potential hazards in real environments [24]. Dalgarno and Hedberg [25] proposed that VR has three attractive features: (1) simulating three-dimensional visual effects in the real world, (2) placing learners in meaningful cognitive environments to help them construct useful knowledge through human senses, and (3) allowing learners to observe at different scales or viewing angles. In the virtual world, users can manipulate objects to achieve higher-level cognition and understanding of surrounding contexts, making VR a powerful learning tool.
Burdea and Coiffet [26] described virtual reality as a real-time interactive system that integrates computers and peripheral devices, characterized by Immersion, Interaction, and Imagination (3I). VR uses human senses to simulate micro, macro, or real-world scenarios that are difficult to achieve, allowing users to interact with virtual objects and situations [27], which makes it suitable for applications in science education. VR can simulate real-world scenarios in a 3D space for users to visualize abstract concepts, principles, or experimental processes. Therefore, it plays a significant role in conveying information and provides students with an effective learning environment [28].
Traditionally, students passively obtain knowledge in the classroom, making it difficult to experience realistic situations. When applied in education, virtual reality provides an excellent human-computer interface to facilitate the construction of comprehensive concepts and useful knowledge through interactive exploration and active participation [29]. With the development and application of VR technology, this study created an integrated virtual and physical environment for learners to interact with robots, assisting them in learning the principles of robot kinematics and digital twins to enhance their readiness for careers in related fields and practical applications.
Virtual reality offers several benefits for learning scientific concepts, for example, using virtual experiments as learning scaffolds to construct scientific knowledge [30]. It can simulate invisible phenomena for observation and make abstract concepts tangible. Students can repeat virtual experiments to understand complex principles and concepts [31]. VR allows learners to change viewing angles and scales to understand molecular structures, enhancing spatial reasoning ability, learning effectiveness, and satisfaction [32]. Virtual experiments offer effective practice to improve experimental skills, reduce waiting times and implementation costs, and minimize errors during the explorative process. VR technology has potential applications in various disciplinary fields. Research results have shown significant improvements in learning achievement, learner engagement, and learning motivation. Therefore, this study utilized VR technology in robotics education to enhance students’ understanding of robot kinematics, path planning, and digital twins through interactive operation and 3D visualization.
Togias et al. [33] proposed a VR approach to control industrial robots through remote operation with the objective of reducing the time and effort required for operators to adjust robot parameters when they are not on-site. Verner et al. [34] investigated how students experienced robot operation through the internet and assessed the impact of VR on learning robot operation. The results indicated that VR can enhance the understanding of robotics and enable integrated thinking patterns in the learning process. Adami et al. studied whether VR could improve workers’ confidence, self-efficacy, mental workload, and situational awareness when operating construction robots remotely [35]. Malik et al. [36] explored the development of human-centered production systems by creating an integrated environment that combines VR and human-machine interfaces and proposed that it is effective in estimating work schedules and improving robot control procedures.
Heradio et al. [37] systematically reviewed the use of virtual and remote laboratories in robotics education. They analyzed the effectiveness, usability, and technologies used in their development. Tzafestas and Alifragis [38] compared remote and virtual labs in the context of engineering education, with a focus on their application in robotics. They evaluated the benefits and challenges from both educators’ and students’ perspectives. Alimisis [39] explored the integration of educational robotics with virtual labs in higher education by presenting case studies and best practices for enhancing robotics education through virtual environments. Bencomo et al. [40] discussed the design and implementation of virtual labs for robotics education, as well as the technical requirements, user interface design, and pedagogical benefits. Gomes and Bogosyan [41] examined the debate between simulation and remote experimentation in engineering education, with a focus on robotics labs by discussing their advantages and limitations.
In modern workplaces, workers can interact with production equipment and robots using VR simulation programs. Virtual robots can also assist them during the design process, which is useful in reducing risks. VR allows users to learn efficiently and safely, while interactive operation can enhance engagement and motivate them to acquire knowledge. VR is capable of simulating complex coordinate transformations, enabling users to interact with the digital twin of a physical robot to achieve assigned tasks. These features make it suitable for displaying the operating principles of robotics. Therefore, this study applied VR technology in robotics education to visualize complex robot kinematics, aiming to alleviate the cognitive burden and enhance learning motivation.
VR applications in digital twins can enhance the visualization and simulation of physical environments [42]. This integration allows for real-time interaction with digital replicas, improving the understanding and decision-making processes in fields such as construction, manufacturing, and urban planning. VR-based digital twins offer immersive training environments, especially in complex technical fields [43]. These tools provide hands-on experience without the risks associated with physical prototypes, thereby improving learning outcomes and operational safety. VR applications in digital twins facilitate predictive maintenance and operational efficiency by enabling real-time monitoring and analysis of systems [44]. This predictive capability helps preemptively address issues, reducing downtime and maintenance costs.
The interactive nature of VR enhances user engagement and provides a more intuitive understanding of digital twin data [45]. This engagement is crucial for effective decision-making in complex systems, such as smart cities and advanced manufacturing. VR digital twins are used to simulate patient-specific scenarios for diagnosis, treatment planning, and medical training [46]. These applications have the potential to revolutionize personalized medicine, although they also raise ethical and data privacy concerns. Despite these benefits, integrating VR with digital twins presents challenges, including high development costs, the need for robust data integration frameworks, and ensuring the accuracy and reliability of simulations [47]. Addressing these challenges requires interdisciplinary collaboration and ongoing technological advancements.
The integration of VR with digital twin technology is a rapidly growing field with significant potential across various industries. By enhancing visualization, improving training, and increasing operational efficiency, VR applications in digital twins offer substantial benefits. However, challenges such as high development costs and the need for accurate data integration must be addressed to fully realize their potential. Further research and interdisciplinary collaboration will be crucial in overcoming these obstacles and advancing the state of digital twin technology. This study incorporated real and virtual robot control to develop an integrated robot learning system. This system enables learners to remotely control the digital twin of a physical robot and observe the synchronized motion of both virtual and physical robots. Users can operate the virtual robot to achieve the target position by setting joint parameters or using the values calculated from inverse kinematics. They can also understand the principle of digital twins by observing the synchronous motion of both robots.

3. Materials and Methods

The objective of this study is to develop an integrated environment for learning robot kinematics and digital twins. The user interface includes options for setting joint parameters in forward kinematics, determining joint parameters according to the target position by inverse kinematics, configuring paths to avoid collisions with obstacles, and observing the motion of the physical robot synchronized with its digital twin. A teaching experiment was conducted in this study to evaluate learners' achievements, cognitive load, learning motivation, and user satisfaction, and the results can be used to improve the system. The concept map of the instructional design employed in this study is shown in Figure 1. In this concept map, the robotics content, including robot kinematics and path planning, is used for teaching analytics and the control of the digital twin, and the observation of synchronous motion between the physical robot and the digital twin is used for learning analytics.
A robotic arm is a fundamental tool for robots to accomplish tasks such as manipulating and moving objects. Its operation mimics the functionality of a human arm, which is capable of grasping objects and relocating them to designated positions while maintaining correct postures to fulfill task requirements. A robotic arm can rotate the wrist to maintain a predefined orientation and employ specific tools for assigned tasks. With multiple degrees of freedom, a robotic arm can perform actions such as extension, rotation, and elevation. For stable operations in unknown environments, robots also require sensing capabilities. In the context of a robotic grasping system, machine vision provides information on the detection and positioning of target objects. Additionally, it facilitates data transmission to the robot for obstacle avoidance.
The structure of a robotic arm is intricately related to its degrees of freedom and joint functionality. As the degrees of freedom or structural complexity increase, the dynamic analysis of the robotic arm becomes more complex. Robot kinematics encompasses both forward and inverse kinematics [48]. Given all joint parameters, forward kinematics calculates the orientation and position of the robot end-effector. In contrast, inverse kinematics involves determining all the joint parameters of a robotic arm, given the orientation and position of its end-effector. Figure 2 shows the joint parameters of the WLKATA six-axis industrial robotic arm manipulator, primarily developed for STEM makers, adolescent education, and higher education purposes. In this study, the Denavit-Hartenberg (DH) [49] parameters and matrices are utilized as a standard way to describe the kinematic chains of robotic manipulators. They simplify the representation of the relative positions and orientations of robot links and joints and provide a systematic method to model the geometry of a robotic arm to facilitate its analysis and control.
According to the DH parameters and joint coordinate systems of the robotic arm shown in Figure 3, where coordinate systems with different orientations are shown in different colors, we can calculate the transformation matrix at each joint as follows:
$$A_n = \mathrm{Rot}(z, \theta_n)\,\mathrm{Trans}(0, 0, d_n)\,\mathrm{Trans}(a_n, 0, 0)\,\mathrm{Rot}(x, \alpha_n) = \begin{bmatrix} c\theta_n & -s\theta_n c\alpha_n & s\theta_n s\alpha_n & a_n c\theta_n \\ s\theta_n & c\theta_n c\alpha_n & -c\theta_n s\alpha_n & a_n s\theta_n \\ 0 & s\alpha_n & c\alpha_n & d_n \\ 0 & 0 & 0 & 1 \end{bmatrix}$$
where Rot and Trans are the rotation and translation matrices for each joint, respectively, and $c\theta_n$, $s\theta_n$, $c\alpha_n$, and $s\alpha_n$ are shorthand for $\cos\theta_n$, $\sin\theta_n$, $\cos\alpha_n$, and $\sin\alpha_n$.

3.1. Forward Kinematics

Taking the WLKATA robotic arm as an example, the transformation matrix $T_6$ of its end-effector is obtained by multiplying the matrices $A_1, A_2, A_3, A_4, A_5, A_6$ of the six joints:

$$A_1 = \begin{bmatrix} c\theta_1 & 0 & s\theta_1 & 29.69\,c\theta_1 \\ s\theta_1 & 0 & -c\theta_1 & 29.69\,s\theta_1 \\ 0 & 1 & 0 & 127 \\ 0 & 0 & 0 & 1 \end{bmatrix}, \quad A_2 = \begin{bmatrix} c\theta_2 & -s\theta_2 & 0 & 108\,c\theta_2 \\ s\theta_2 & c\theta_2 & 0 & 108\,s\theta_2 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}$$

$$A_3 = \begin{bmatrix} c\theta_3 & 0 & s\theta_3 & 20\,c\theta_3 \\ s\theta_3 & 0 & -c\theta_3 & 20\,s\theta_3 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}, \quad A_4 = \begin{bmatrix} c\theta_4 & 0 & s\theta_4 & 0 \\ s\theta_4 & 0 & -c\theta_4 & 0 \\ 0 & 1 & 0 & 168.98 \\ 0 & 0 & 0 & 1 \end{bmatrix}$$

$$A_5 = \begin{bmatrix} c\theta_5 & 0 & s\theta_5 & 0 \\ s\theta_5 & 0 & -c\theta_5 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}, \quad A_6 = \begin{bmatrix} c\theta_6 & -s\theta_6 & 0 & 0 \\ s\theta_6 & c\theta_6 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}$$

The default position of the end-effector is calculated by setting $\theta_n = 0$ for all joints:

$$T_6 = A_1 A_2 A_3 A_4 A_5 A_6 = \begin{bmatrix} 1 & 0 & 0 & 29.69 \\ 0 & 0 & -1 & 0 \\ 0 & 1 & 0 & 127 \\ 0 & 0 & 0 & 1 \end{bmatrix} A_2 A_3 A_4 A_5 A_6 = \begin{bmatrix} 1 & 0 & 0 & 137.69 \\ 0 & 0 & -1 & 0 \\ 0 & 1 & 0 & 127 \\ 0 & 0 & 0 & 1 \end{bmatrix} A_3 A_4 A_5 A_6 = \begin{bmatrix} 1 & 0 & 0 & 157.69 \\ 0 & -1 & 0 & 0 \\ 0 & 0 & -1 & 127 \\ 0 & 0 & 0 & 1 \end{bmatrix} A_4 A_5 A_6 = \begin{bmatrix} 1 & 0 & 0 & 157.69 \\ 0 & 0 & 1 & 0 \\ 0 & -1 & 0 & -41.98 \\ 0 & 0 & 0 & 1 \end{bmatrix} A_5 A_6 = \begin{bmatrix} 1 & 0 & 0 & 157.69 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & -41.98 \\ 0 & 0 & 0 & 1 \end{bmatrix} A_6 = \begin{bmatrix} 1 & 0 & 0 & 157.69 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & -41.98 \\ 0 & 0 & 0 & 1 \end{bmatrix}$$

Therefore, the default position of the end-effector is $(157.69, 0, -41.98)$.
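The following is a minimal sketch of how the forward-kinematics chain above can be evaluated in C# (the language used for the system's control programs). The per-joint (d, a, α) values mirror the link lengths quoted above, but the sign convention and parameter table are assumptions for illustration, not the authors' implementation.

```csharp
using System;

// A minimal forward-kinematics sketch using Denavit-Hartenberg (DH) transforms.
// The DH rows below use the link lengths quoted above (29.69, 127, 108, 20, 168.98 mm);
// the alpha signs follow one common convention and are an assumption, not the
// authors' exact parameter table.
public static class ForwardKinematics
{
    // Build a single DH transform A_n from (theta, d, a, alpha), angles in radians.
    public static double[,] DH(double theta, double d, double a, double alpha)
    {
        double ct = Math.Cos(theta), st = Math.Sin(theta);
        double ca = Math.Cos(alpha), sa = Math.Sin(alpha);
        return new double[,]
        {
            { ct, -st * ca,  st * sa, a * ct },
            { st,  ct * ca, -ct * sa, a * st },
            {  0,       sa,       ca,      d },
            {  0,        0,        0,      1 }
        };
    }

    public static double[,] Multiply(double[,] A, double[,] B)
    {
        var C = new double[4, 4];
        for (int i = 0; i < 4; i++)
            for (int j = 0; j < 4; j++)
                for (int k = 0; k < 4; k++)
                    C[i, j] += A[i, k] * B[k, j];
        return C;
    }

    // theta: the six joint angles in radians; returns T6 = A1 A2 A3 A4 A5 A6.
    public static double[,] EndEffectorPose(double[] theta)
    {
        double deg90 = Math.PI / 2;
        // (d, a, alpha) per joint -- assumed values for illustration only.
        var dh = new (double d, double a, double alpha)[]
        {
            (127,    29.69, deg90),
            (0,      108,   0),
            (0,      20,    deg90),
            (168.98, 0,     deg90),
            (0,      0,     deg90),
            (0,      0,     0)
        };
        double[,] T = DH(theta[0], dh[0].d, dh[0].a, dh[0].alpha);
        for (int i = 1; i < 6; i++)
            T = Multiply(T, DH(theta[i], dh[i].d, dh[i].a, dh[i].alpha));
        return T; // the last column holds the end-effector position (x, y, z, 1)
    }
}
```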

3.2. Inverse Kinematics

Inverse kinematics is used to determine the joint parameters for achieving the target position and posture. This is important in robotics because joint variables are used to control the motion of a robot. From the equations of forward kinematics shown above, we have $T_6 = A_1 A_2 A_3 A_4 A_5 A_6$ and therefore $A_1^{-1} T_6 = A_2 A_3 A_4 A_5 A_6$ by the derivation below (where $C_i$ and $S_i$ denote $\cos\theta_i$ and $\sin\theta_i$, and $C_{23}$ and $S_{23}$ denote $\cos(\theta_2 + \theta_3)$ and $\sin(\theta_2 + \theta_3)$):

$$\begin{bmatrix} C_1 & S_1 & 0 & -29.69 \\ 0 & 0 & 1 & -127 \\ S_1 & -C_1 & 0 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} n_x & o_x & a_x & p_x \\ n_y & o_y & a_y & p_y \\ n_z & o_z & a_z & p_z \\ 0 & 0 & 0 & 1 \end{bmatrix} = \begin{bmatrix} \cdot & \cdot & \cdot & 108C_2 + 20C_{23} + 168.98S_{23} \\ \cdot & \cdot & \cdot & 108S_2 + 20S_{23} - 168.98C_{23} \\ \cdot & \cdot & \cdot & 0 \\ \cdot & \cdot & \cdot & 1 \end{bmatrix}$$
The first three columns affect the end-effector's orientation, so the calculation is performed on the fourth column to derive the following three equations:
$$C_1 p_x + S_1 p_y - 29.69 = 108C_2 + 20C_{23} + 168.98S_{23}$$ (1)
$$p_z - 127 = 108S_2 + 20S_{23} - 168.98C_{23}$$ (2)
$$S_1 p_x - C_1 p_y = 0$$ (3)
We can obtain $S_1 p_x = C_1 p_y$ from Equation (3), which implies $\tan\theta_1 = p_y/p_x$ and thus $\theta_1 = \mathrm{Atan2}(p_y, p_x)$. The following equation (useful for calculating $\theta_3$) can be obtained by adding the squares of Equations (1)–(3):
$$p_x^2 + p_y^2 + p_z^2 - 59.38\,(C_1 p_x + S_1 p_y) - 254\,p_z + 29.69^2 + 127^2 = 108^2 + 20^2 + 168.98^2 + 2 \times 108 \times 20\,C_3 + 2 \times 108 \times 168.98\,S_3$$
By letting $M_1 = \left(p_x^2 + p_y^2 + p_z^2 - 59.38\,(C_1 p_x + S_1 p_y) - 254\,p_z + 29.69^2 + 127^2 - 108^2 - 20^2 - 168.98^2\right)/216$, this can be simplified as $M_1 = 20C_3 + 168.98S_3$ and thus
$$\theta_3 = \mathrm{Atan2}\!\left(M_1, \pm\sqrt{20^2 + 168.98^2 - M_1^2}\right) - \mathrm{Atan2}(20, 168.98)$$ (4)
We can obtain $A_3^{-1} A_2^{-1} A_1^{-1} T_6 = A_4 A_5 A_6$ from $T_6 = A_1 A_2 A_3 A_4 A_5 A_6$ by the derivation of
$$A_3^{-1} A_2^{-1} A_1^{-1} = \begin{bmatrix} C_3 & S_3 & 0 & -20 \\ 0 & 0 & 1 & 0 \\ S_3 & -C_3 & 0 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} C_2 & S_2 & 0 & -108 \\ -S_2 & C_2 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix} A_1^{-1}$$
and thus
$$A_3^{-1} A_2^{-1} A_1^{-1} = \begin{bmatrix} C_1 C_{23} & S_1 C_{23} & S_{23} & -29.69C_{23} - 127S_{23} - 108C_3 - 20 \\ S_1 & -C_1 & 0 & 0 \\ C_1 S_{23} & S_1 S_{23} & -C_{23} & -29.69S_{23} + 127C_{23} - 108S_3 \\ 0 & 0 & 0 & 1 \end{bmatrix}$$ (5)
From the fourth column of the above matrix, we can derive the following equation using the parameters of the WLKATA robotic arm in Figure 2 as
$$A_3^{-1} A_2^{-1} A_1^{-1} \begin{bmatrix} n_x & o_x & a_x & p_x \\ n_y & o_y & a_y & p_y \\ n_z & o_z & a_z & p_z \\ 0 & 0 & 0 & 1 \end{bmatrix} = A_4 A_5 A_6 = \begin{bmatrix} \cdot & \cdot & \cdot & 0 \\ \cdot & \cdot & \cdot & 0 \\ \cdot & \cdot & \cdot & 168.98 \\ \cdot & \cdot & \cdot & 1 \end{bmatrix}$$ (6)
and then combine Equations (5) and (6) to obtain
$$C_1 C_{23} p_x + S_1 C_{23} p_y + S_{23} p_z - 29.69C_{23} - 127S_{23} - 108C_3 - 20 = 0$$
After rearranging the above equation by combining the first three terms, we have
$$(C_1 p_x + S_1 p_y - 29.69)\,C_{23} + (p_z - 127)\,S_{23} = 108C_3 + 20$$
By letting $M_2 = C_1 p_x + S_1 p_y - 29.69$, $M_3 = p_z - 127$, and $M_4 = 108C_3 + 20$, we obtain
$$\theta_{23} = \mathrm{Atan2}\!\left(M_4, \pm\sqrt{M_2^2 + M_3^2 - M_4^2}\right) - \mathrm{Atan2}(M_2, M_3)$$ (7)
Using the value of $\theta_{23}$ obtained from Equation (7), $\theta_2$ can be calculated as $\theta_{23} - \theta_3$. The three rear joints can be used to adjust the orientation of the end-effector on the robotic arm to keep it perpendicular to the platform for grasping an object easily. We have $\theta_5 = -(\theta_2 + \theta_3)$ and use the default value of $\theta_4 = 35^\circ$ to make the end-effector perpendicular to the platform when the WLKATA robotic arm is reset. If the attached tool is a suction cup, the value of $\theta_6$ can always remain unchanged. After calculating the joint parameters, we can move the robotic arm to a specified position according to the forward kinematics while maintaining the required orientation of the end-effector. The above derivations are also implemented in the control programs of the virtual robot.
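A minimal sketch of the closed-form steps above, assuming the same link lengths and solving the trigonometric equations of the form a·cos q + b·sin q = c with Atan2, as in Equations (4) and (7). The helper names and the choice of the "+" branch (one of the two elbow configurations) are illustrative.

```csharp
using System;

// A minimal sketch of the closed-form inverse-kinematics steps derived above.
// theta1 follows from Equation (3); theta3 and theta23 come from equations of the
// form a*cos(q) + b*sin(q) = c, solved with Atan2 as in Equations (4) and (7).
public static class InverseKinematics
{
    // Solve a*cos(q) + b*sin(q) = c for q; branch selects the +/- square root.
    static double SolveAcosBsin(double a, double b, double c, int branch = +1)
    {
        double root = Math.Sqrt(Math.Max(0, a * a + b * b - c * c));
        return Math.Atan2(c, branch * root) - Math.Atan2(a, b);
    }

    // Returns (theta1, theta2, theta3) in radians for a target position (px, py, pz) in mm.
    public static (double t1, double t2, double t3) Solve(double px, double py, double pz)
    {
        double theta1 = Math.Atan2(py, px);                 // from Equation (3)
        double c1 = Math.Cos(theta1), s1 = Math.Sin(theta1);

        // M1 = 20*C3 + 168.98*S3, from squaring and adding Equations (1)-(3); 59.38 = 2 x 29.69.
        double m1 = (px * px + py * py + pz * pz
                     - 59.38 * (c1 * px + s1 * py) - 254 * pz
                     + 29.69 * 29.69 + 127 * 127
                     - 108 * 108 - 20 * 20 - 168.98 * 168.98) / 216.0;
        double theta3 = SolveAcosBsin(20, 168.98, m1);      // Equation (4)

        // M2*C23 + M3*S23 = M4 gives theta23, and then theta2 = theta23 - theta3.
        double m2 = c1 * px + s1 * py - 29.69;
        double m3 = pz - 127;
        double m4 = 108 * Math.Cos(theta3) + 20;
        double theta23 = SolveAcosBsin(m2, m3, m4);         // Equation (7)

        return (theta1, theta23 - theta3, theta3);
    }
}
```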

3.3. Path Planning

A robot can move as planned by using an AI algorithm to keep its movement on the right track while avoiding obstacles on the way to the target position. If the robot's path is represented by a cubic polynomial $x(t)$ for a continuous time interval $0 \le t \le t_f$, given the initial position $x_i = x(0)$ and the end position $x_f = x(t_f)$, we have the following equations to describe the state variable $x(t)$ along the path:
$$x(t) = a_3 t^3 + a_2 t^2 + a_1 t + a_0$$ (8)
$$\dot{x}(t) = 3a_3 t^2 + 2a_2 t + a_1$$ (9)
$$\ddot{x}(t) = 6a_3 t + 2a_2$$ (10)
The above equations can be used to calculate each point of x ( t ) in the path with primary differential continuity along the path connecting the two segments. Assuming that the speeds of the robot’s starting point and end point are 0, we can calculate the coefficients of the path equation according to the following boundary conditions:
$$x(t_i) = x_i, \quad x(t_f) = x_f, \quad \dot{x}(t_i) = 0, \quad \dot{x}(t_f) = 0$$ (11)
The four conditions in Equation (11) can be used to solve the four unknown coefficients $a_0$, $a_1$, $a_2$, and $a_3$ of the cubic polynomial:
$$\begin{bmatrix} 1 & t_i & t_i^2 & t_i^3 \\ 1 & t_f & t_f^2 & t_f^3 \\ 0 & 1 & 2t_i & 3t_i^2 \\ 0 & 1 & 2t_f & 3t_f^2 \end{bmatrix} \begin{bmatrix} a_0 \\ a_1 \\ a_2 \\ a_3 \end{bmatrix} = \begin{bmatrix} x(t_i) \\ x(t_f) \\ \dot{x}(t_i) \\ \dot{x}(t_f) \end{bmatrix} = \begin{bmatrix} x_i \\ x_f \\ 0 \\ 0 \end{bmatrix}$$ (12)
Multiplying both sides of Equation (12) by the inverse of the time-parameter matrix gives
$$\begin{bmatrix} 1 & t_i & t_i^2 & t_i^3 \\ 1 & t_f & t_f^2 & t_f^3 \\ 0 & 1 & 2t_i & 3t_i^2 \\ 0 & 1 & 2t_f & 3t_f^2 \end{bmatrix}^{-1} \begin{bmatrix} 1 & t_i & t_i^2 & t_i^3 \\ 1 & t_f & t_f^2 & t_f^3 \\ 0 & 1 & 2t_i & 3t_i^2 \\ 0 & 1 & 2t_f & 3t_f^2 \end{bmatrix} \begin{bmatrix} a_0 \\ a_1 \\ a_2 \\ a_3 \end{bmatrix} = \begin{bmatrix} 1 & t_i & t_i^2 & t_i^3 \\ 1 & t_f & t_f^2 & t_f^3 \\ 0 & 1 & 2t_i & 3t_i^2 \\ 0 & 1 & 2t_f & 3t_f^2 \end{bmatrix}^{-1} \begin{bmatrix} x_i \\ x_f \\ 0 \\ 0 \end{bmatrix}$$ (13)
and we can obtain the coefficients of the cubic polynomial as
$$\begin{bmatrix} a_0 \\ a_1 \\ a_2 \\ a_3 \end{bmatrix} = \begin{bmatrix} 1 & t_i & t_i^2 & t_i^3 \\ 1 & t_f & t_f^2 & t_f^3 \\ 0 & 1 & 2t_i & 3t_i^2 \\ 0 & 1 & 2t_f & 3t_f^2 \end{bmatrix}^{-1} \begin{bmatrix} x_i \\ x_f \\ 0 \\ 0 \end{bmatrix}$$ (14)
We need to use a higher-order polynomial to define the continuity of acceleration in the moving path within a segment $[A, B]$ by the following conditions:
$$x_A(t^+) = x_A(t^-), \quad x_B(t^+) = x_B(t^-)$$
$$\dot{x}_A(t^+) = \dot{x}_A(t^-), \quad \dot{x}_B(t^+) = \dot{x}_B(t^-)$$
$$\ddot{x}_A(t^+) = \ddot{x}_A(t^-), \quad \ddot{x}_B(t^+) = \ddot{x}_B(t^-)$$ (15)
If the symmetry of the acceleration at the starting and end points is used, it can be reduced from the fifth-order to the fourth-order polynomial, as shown below:
$$x(t) = a_4 t^4 + a_3 t^3 + a_2 t^2 + a_1 t + a_0$$ (16)
$$\dot{x}(t) = 4a_4 t^3 + 3a_3 t^2 + 2a_2 t + a_1$$ (17)
$$\ddot{x}(t) = 12a_4 t^2 + 6a_3 t + 2a_2$$ (18)
In addition to moving the end-effector to the target position, we can perform path planning to simulate the movement of characters or vehicles in computer games, and the goal is to reach the target position from the starting point through continuous actions while avoiding collisions with obstacles during the movement.
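As a concrete illustration of Equations (8), (11), and (14), the sketch below computes the cubic coefficients for the common case $t_i = 0$ with zero boundary velocities, where the matrix inverse in Equation (14) reduces to a simple closed form; the class and method names are illustrative.

```csharp
using System;

// A small sketch of Equation (14) specialized to t_i = 0 and zero boundary
// velocities (Equation (11)); in that case the 4x4 inverse has a closed form and
// the cubic coefficients can be written directly. The duration tf is assumed > 0.
public static class CubicTrajectory
{
    // Returns (a0, a1, a2, a3) for x(t) = a3 t^3 + a2 t^2 + a1 t + a0 on [0, tf].
    public static (double a0, double a1, double a2, double a3) Coefficients(double xi, double xf, double tf)
    {
        double delta = xf - xi;
        double a0 = xi;                          // x(0)  = xi
        double a1 = 0;                           // x'(0) = 0
        double a2 = 3 * delta / (tf * tf);       // enforces x(tf) = xf
        double a3 = -2 * delta / (tf * tf * tf); // enforces x'(tf) = 0
        return (a0, a1, a2, a3);
    }

    // Evaluate the planned position at time t (Equation (8)).
    public static double Position(double t, (double a0, double a1, double a2, double a3) c)
        => c.a3 * t * t * t + c.a2 * t * t + c.a1 * t + c.a0;
}
```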

3.4. Digital Twin

A digital twin is a concept that integrates virtual and physical elements, commonly utilized in simulating the behavior of physical entities in a virtual space. In this study, a 3D model of the WLKATA six-axis robotic arm is developed using Blender 4.0 and Unity3D 2020 and installed on the Meta Quest 3 VR headset as the digital twin. The virtual robotic arm’s appearance and structure resemble the WLKATA six-axis robotic arm. Figure 4 shows the shaded 3D model of the virtual robotic arm created by Blender. After designing the user interface with Unity 3D, the application software is exported to the Meta Quest 3, manufactured by Meta Platforms, Inc. in Menlo Park, California, USA, to provide the user with an immersive operating experience.
For convenience, the user control panel is incorporated into the virtual platform with buttons and scroll bars for adjusting the angles of the six joints to simulate the operability of a physical robotic arm. When users operate the control panel in the virtual platform, the corresponding joints of the virtual robotic arm will rotate, and the control commands for the real robot can also be obtained. These commands are transmitted via the serial port to the physical robotic arm, ensuring synchronization between the virtual and physical robotic arms to achieve a coherent linkage with each other.
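A minimal Unity sketch of how one control-panel scroll bar can drive one joint of the virtual robotic arm; the field names, rotation axis, and angle range are illustrative assumptions rather than the study's actual scripts.

```csharp
using UnityEngine;
using UnityEngine.UI;

// A minimal Unity sketch: one UI slider on the virtual control panel rotates one
// joint of the virtual robotic arm. Field names, axis, and angle range are
// illustrative assumptions.
public class JointSliderControl : MonoBehaviour
{
    public Transform joint;                 // the virtual joint to rotate
    public Slider slider;                   // slider on the control panel (0..1)
    public Vector3 localAxis = Vector3.up;  // rotation axis in the joint's local frame
    public float minAngle = -110f, maxAngle = 110f;

    void Start()
    {
        slider.onValueChanged.AddListener(OnSliderChanged);
    }

    void OnSliderChanged(float normalized)
    {
        // Map the slider value to a joint angle and apply it to the virtual joint.
        float angle = Mathf.Lerp(minAngle, maxAngle, normalized);
        joint.localRotation = Quaternion.AngleAxis(angle, localAxis);
    }
}
```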
A C# program is used for serial-port communication to establish the connection between the virtual and physical robotic arms. The joint parameters of the virtual robotic arm are obtained from the control panel and transmitted to the serial port of the physical robot to control its motion. For example, sending the command “M21 G90 G01 X0.00 Y-35 Z40 A30 B-90 C0.00 F2000.00” with the coordinate angles of the six joints to the physical robotic arm will move the end-effector to the specified position. The command “$h” is the reset instruction that brings the robotic arm to its default position and orientation.
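A minimal sketch of this serial-port link using the .NET System.IO.Ports API; the port name and baud rate are assumptions, while the command strings follow the examples given above.

```csharp
using System.IO.Ports;

// A minimal sketch of the serial-port link described above, using System.IO.Ports.
// The port name "COM3" and baud rate 115200 are assumptions; the command strings
// ("$h" for reset and the M21/G90/G01 joint command) follow the examples in the text.
public class RobotSerialLink
{
    private readonly SerialPort port;

    public RobotSerialLink(string portName = "COM3", int baudRate = 115200)
    {
        port = new SerialPort(portName, baudRate);
        port.NewLine = "\r\n";
        port.Open();
    }

    // Send the six joint angles (in degrees) to the physical robotic arm.
    public void SendJointAngles(double x, double y, double z, double a, double b, double c, double feed = 2000)
    {
        string cmd = $"M21 G90 G01 X{x:F2} Y{y:F2} Z{z:F2} A{a:F2} B{b:F2} C{c:F2} F{feed:F2}";
        port.WriteLine(cmd);
    }

    // Reset the arm to its default position and orientation.
    public void Reset() => port.WriteLine("$h");

    public void Close() => port.Close();
}
```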
Using the Meta Quest 3 VR headset, Unity3D, and C# language, this study created a digital twin to simulate the motion of a robotic arm in a real environment. Users can change the angle of any joint by dragging the scroll bar or performing fine-tuning by pressing the ‘plus’ or ‘minus’ button. Serial-port communication with a physical robotic arm connects the virtual and physical environments and allows users to understand the applications of digital twins and their impacts on industries, education, and beyond, enhancing the effectiveness of learning robotics and operational training.
The virtual robotic arm developed in this study can utilize the images received by a robot vision system to obtain the current position and important features of a target object. Then, the joint parameters can be calculated through inverse kinematics to perform the task of grasping the object. The development of the virtual robotic arm is based on the Microsoft Windows 10 OS environment. This requires the design of C# programs and the utilization of a VR suite to complete the triggering of a virtual object with interactive functions in the virtual world. When the virtual robotic arm is completed, learners can operate it individually (Figure 5) or use it to control the physical robot (Figure 6).
The external sensors of a robot are mainly used to obtain important information from the work environment, such as the features of an object and its position or the status of whether the object is firmly grasped. Common types of robot sensors include the visual system and tactile sensors. In this study, the virtual robotic arm receives data from the robot vision system to determine the current position and features of an object (Figure 7) and then calculates the joint parameters by the formula of inverse kinematics. After that, both the virtual and physical robots can accurately perform the task of grasping the object. The major technical challenge encountered in the study was to ensure the digital twin’s accuracy, which was overcome after the cooperation of the research team in adjusting the virtual robot’s parameters and fine-tuning the 3D model. Finally, both the virtual and physical robots can move to the same position with the same orientation.
There are three major components in a robot vision system, that is, image acquisition, image processing and analysis, and image output or display. A typical robot vision system also includes a light source, optical imaging module, image capture module for image acquisition and digitization, intelligent image processing module, and control module for executing tasks. When playing a chess game, the robot can obtain the chessboard image through the vision system, digitize it to understand the current game status, and then use image processing and decision-making modules to determine the next move and calculate the chess position for movement. Figure 8 shows a flowchart for detecting an object using the robot vision system.
The robot vision system reads the image obtained from the camera and uses QR codes to identify the position of an object inside the working area. The image is converted from RGB to HSB format to facilitate segmentation and obtain the features of the object, including its color, shape, and position. After that, the feature data are transmitted to the VR system to create a virtual object as a digital twin of the real object.
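A sketch of the color-segmentation step described above (HSB is equivalent to HSV), assuming the OpenCvSharp bindings for OpenCV; the HSV thresholds are illustrative, and the QR-code localization of the working area is omitted.

```csharp
using OpenCvSharp;

// A minimal sketch of the color-segmentation step, assuming OpenCvSharp.
// The threshold values are illustrative; the actual system also uses QR codes
// to locate the working area, which is omitted here.
public static class ObjectDetector
{
    // Returns the pixel centroid of the largest region matching the HSV range,
    // or null if no matching region is found.
    public static Point? FindObjectCenter(Mat bgrImage, Scalar hsvLower, Scalar hsvUpper)
    {
        using var hsv = new Mat();
        Cv2.CvtColor(bgrImage, hsv, ColorConversionCodes.BGR2HSV);  // RGB/BGR -> HSV (HSB)

        using var mask = new Mat();
        Cv2.InRange(hsv, hsvLower, hsvUpper, mask);                  // keep pixels of the target color

        Cv2.FindContours(mask, out Point[][] contours, out _,
            RetrievalModes.External, ContourApproximationModes.ApproxSimple);
        if (contours.Length == 0) return null;

        // Pick the largest contour and compute its centroid from image moments.
        double bestArea = 0; Point[] best = null;
        foreach (var c in contours)
        {
            double area = Cv2.ContourArea(c);
            if (area > bestArea) { bestArea = area; best = c; }
        }
        var m = Cv2.Moments(best);
        return new Point((int)(m.M10 / m.M00), (int)(m.M01 / m.M00));
    }
}
```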

4. Teaching Experiment

A quasi-experimental research design with non-equivalent groups was adopted in this study to evaluate the performance of the integrated robot learning system. Seventy-five students (ages 16–18) from a senior high school in Hsinchu, Taiwan, were selected as the research subjects: 40 students as the control group (using the traditional teaching method) and 35 students as the experimental group (using the integrated robot learning system). In this study, the independent variable is the teaching method; learning effectiveness, cognitive load, and learning motivation are dependent variables; background knowledge of robotics is the covariate; and learning content, learning time, and the instructor are the control variables (Figure 9).
This teaching experiment aims to analyze the impacts of different teaching methods on the learning effectiveness of robotics. Questionnaires were employed to examine the cognitive load and learning motivation of the two groups, as well as the experimental group’s technology acceptance of the integrated robot learning system. Additionally, the researchers gathered the instructor’s opinions after implementing virtual reality for learning robotics, providing valuable insights for system improvement. The research tools in this study include data collection instruments (achievement tests and questionnaires), data analysis software (SPSS 26.0), and learning instruments.
  • Achievement test
To compare the learning effectiveness of the two groups, this study created an achievement test based on custom learning content covering industrial robots, forward kinematics, inverse kinematics, digital twins, and robot applications. Science teachers and robotics experts were invited to review and revise the questions for validity and alignment with the study’s learning objectives. The test consists of twelve multiple-choice questions, each with four options (see Appendix A), and each correct answer earns one point, with higher scores indicating better learning achievement.
  • Learning motivation questionnaire
To assess differences in learning motivation using various methods, this study revised Wang and Chen’s [50] motivation scale and consulted experts for validity. The questionnaire uses a five-point Likert scale (strongly agree = 5 to strongly disagree = 1) and includes five items with a Cronbach’s α of 0.925.
  • Cognitive load questionnaire
To assess the differences in cognitive load from various learning methods, this study used Hwang et al.’s [51] cognitive load scale and consulted experts for validity. The questionnaire employed a five-point Likert scale (strongly agree = 5 to strongly disagree = 1) with five items and Cronbach’s α of 0.836.
  • Technology acceptance questionnaire
To assess the experimental group’s attitudes toward the integrated robot learning system, this study designed a satisfaction questionnaire based on the technology acceptance model [52] and validated it with experts. The questionnaire uses a five-point Likert scale (strongly agree = 5 to strongly disagree = 1) and includes thirteen items, with a Cronbach’s α of 0.915.
  • The integrated robot learning system
In this study, a 3D model of the WLKATA six-axis robotic arm manipulator was developed using Blender and Unity3D. It was installed on the Meta Quest 3 VR headset as a digital twin. The appearance and structure of the virtual robotic arm are the same as those of the WLKATA six-axis robot arm manipulator. Users can control the virtual robotic arm to achieve the designated position by setting the joint parameters or using the parameters calculated from inverse kinematics. They can also understand the principle of digital twins by observing the synchronous motion between virtual and physical robots.
  • VR headset
Users can wear the Meta Quest 3 headset to operate the digital twin to complete learning tasks. This advanced VR device with mixed-reality capability offers a highly interactive and immersive experience, allowing users to engage with both virtual and real environments. It is ideal for observing and experiencing scenarios like archaeological sites, science experiments, and cultural events. The Meta Quest 3 headset also features advanced hand tracking, enabling interaction with the virtual world through gestures.
  • SPSS software
The SPSS (Statistical Package for the Social Sciences) is a comprehensive software suitable for data management and statistical analysis in social science research. It provides tools for performing a wide range of statistical tests, data manipulation, and visualization and features an intuitive interface, making it accessible to users with varying levels of statistical expertise. SPSS supports various data formats and offers advanced analytical techniques, such as regression, ANOVA, factor analysis, and non-parametric tests, helping researchers and analysts derive meaningful insights from their data. The research tools and statistical methods for the research questions are listed in Table 1.

5. Experimental Results

This study conducted a teaching experiment to compare learning effectiveness in robotics as well as learning motivation and cognitive load between the two groups to evaluate the integrated robot learning system. Data collected included pre-test and post-test scores, as well as responses to learning motivation, cognitive load, and satisfaction questionnaires. The statistical analysis results are described as follows.

5.1. Learning Effectiveness

As shown in Table 2, the experimental group had a pre-test mean of 3.60, with a standard deviation of 1.70, and a post-test mean of 8.71, with a standard deviation of 2.97. The control group had a pre-test mean of 2.93 with a standard deviation of 1.40 and a post-test mean of 3.75 with a standard deviation of 1.78. The post-test mean of the experimental group is notably higher than that of the control group. Additionally, the post-test scores of the control group have a smaller standard deviation, indicating a more concentrated score distribution compared to the experimental group.
To determine if there is significant progress between the pre-test and post-test scores of the experimental and control groups, a paired sample t-test was conducted. Table 3 shows that the experimental group had an average difference of −5.114, with a standard deviation of 3.367, while the control group had an average difference of −0.825, with a standard deviation of 1.893. These results indicate that the experimental group made more progress than the control group.
As shown in Table 3, the t-value for the experimental group before and after treatment is 8.895, with a p-value of less than 0.001, indicating a high level of significance. For the control group, the t-value is 2.756, and the p-value is 0.009, indicating a medium level of significance. These results suggest that both the integrated robot learning system and traditional teaching materials with real robot operations can improve learning effectiveness. However, to determine whether there is a significant difference in learning effectiveness between the two groups, an analysis of covariance (ANCOVA) is required.
Before conducting the ANCOVA, this study used the test for homogeneity of regression coefficients to confirm that the regression slopes of the two groups are consistent, supporting the acceptance of the null hypothesis. The test (F = 0.828, p = 0.366) indicated that the experimental and control groups have consistent regression slopes, allowing for the application of ANCOVA.
This study conducted an ANCOVA with pre-test scores as the covariate, post-test scores as the dependent variable, and learning methods as the independent variable to determine if different learning methods affect learning effectiveness. As shown in Table 4, the difference in learning effectiveness between the two groups is highly significant, with F = 71.868 and p < 0.001. Based on the previous analysis, the experimental group showed more progress than the control group, indicating that the integrated robot learning system is more effective than the traditional method for learning robotics.

5.2. Learning Motivation

In this study, we calculated the response scores for learning motivation and performed an independent samples t-test to determine whether there was a significant difference between the two groups. Table 5 presents the mean scores and standard deviations, where the experimental group has a mean score of 3.994 with a standard deviation of 0.735, while the control group has a mean score of 3.290 with a standard deviation of 0.923. As we can see, the experimental group’s mean score is higher, and their score distribution is more concentrated compared to the control group.
Further analysis revealed t = 3.621 and p < 0.001, indicating a significant difference in learning motivation between the two groups due to using different learning methods. The integrated robot learning system resulted in higher learning motivation than the traditional method. This is likely because the virtual reality system’s features—immersion, interaction, and imagination—enhanced students’ motivation. A comparison of learning motivation for individual questions between the two groups is shown in Table 6. It can be seen that all questions have achieved a significant difference. Especially, Question 3 “This method can increase my intention in learning robotics” has achieved a high level of significance (p < 0.001), supporting the finding that the integrated robot learning system can enhance students’ motivation to learn robotics.

5.3. Cognitive Load

In this study, we calculated the response scores for cognitive load and performed an independent samples t-test to determine if there was a significant difference between the two groups. Table 7 presents the mean scores and standard deviations of both groups, where the experimental group has a mean score of 2.149 with a standard deviation of 0.676, while the control group has a mean score of 2.970 with a standard deviation of 0.714. The experimental group's mean score is lower, and its score distribution is more concentrated compared to the control group.
Further analysis revealed t = −5.094 and p < 0.001, indicating a significant difference in cognitive load between the two groups due to the different learning methods. The integrated robot learning system incurred a lower cognitive load than the traditional method. This is likely because interaction with the digital twin and visualizing synchronous motion between virtual and physical robots can help learners understand robot kinematics and digital twins, which is effective in reducing the cognitive load for the experimental group. A comparison of the cognitive load for individual questions is shown in Table 8. Most questions have achieved a significant difference except Question 3 “I think this learning method cannot reduce the time pressure in learning robotics” (p = 0.108), indicating both learning methods required about the same amount of time in learning robotics. In addition, the first two questions have a high level of significance (p < 0.001), supporting the finding that the integrated robot learning system can reduce the cognitive load in learning robotics.

5.4. System Satisfaction Analysis

This study used the technology acceptance model (TAM) to evaluate the experimental group’s attitudes toward the integrated robot learning system. Table 9 shows an overall average satisfaction score of 4.16, with a standard deviation of 0.54, indicating a generally positive attitude. The dimensional mean scores are 4.29 for perceived usefulness, 4.05 for perceived ease of use, and 4.10 for behavioral intention. The average scores of all questions, except Question 10, are greater than 4, indicating the degrees of acceptance are mostly within the range of “strongly agree” and “agree”. According to the score (3.86) and description of Question 10 “I can use virtual robots to complete other tasks even if no one is there to tell me how to do it”, there is still room for improving the system’s “ease of use” by offering more on-demand support.
The experimental results indicate that the use of VR technology in robotics education can substantially improve students’ comprehension and mastery of complex robotics concepts, such as forward kinematics, inverse kinematics, and path planning, compared to traditional methods. The immersive and interactive nature of the VR-based CPRLS has been shown to boost students’ engagement and interest in learning robotics, addressing the common issue of low motivation associated with conventional educational approaches. Furthermore, by simplifying and visualizing complex robot kinematics through a digital twin, the VR system reduces the cognitive load, making it easier for students to grasp difficult concepts and enhancing their overall learning experience.
Additionally, the questionnaire results reveal high levels of student satisfaction with the CPRLS, indicating that the VR system is user-friendly and effective as a learning tool. This positive user experience suggests that VR technology can be widely accepted and adopted in STEM education. This research also sets a precedent for future innovations in educational tools by providing evidence for the successful integration of VR technology in robotics education. By demonstrating the practical application of digital twins in an educational context, this study contributes to the broader field of digital twin technology, showing its potential beyond industrial applications. Therefore, the integration of VR and digital twin technology can provide more effective, engaging, and accessible learning experiences in robotics and other complex subjects.

6. Discussion

The objective of this study is to develop an integrated robot learning system using VR technology and evaluate its performance through a teaching experiment. The system aims to provide an immersive and interactive environment for learning robotics, particularly focusing on forward kinematics, inverse kinematics, and digital twins. By leveraging VR technology, this study seeks to visualize complex robotics concepts in a more intuitive and comprehensible manner, allowing learners to manipulate the digital twin of a physical robotic arm and observe their synchronous motion in real-time.
The integrated robot learning system developed in this study differs from those in previous studies in several ways. Unlike Verner et al. [34] and Adami et al. [35], who focused on AR and VR for integrative thinking skills and human–robot interaction, respectively, our study emphasizes simplifying and visualizing complex robot kinematics with VR technology to reduce learners’ cognitive load. While Malik et al. [36] and Tzafestas and Alifragis [38] explored VR applications in manufacturing and engineering education, our study created an immersive and interactive environment tailored specifically for learning robotics, which enhances the practical understanding of the subject. Heradio et al. [37] and Bencomo et al. [40] reviewed virtual and remote labs without considering the cognitive load they incur, whereas our study used VR technology to reduce cognitive load by making robot kinematics more accessible and engaging. Our experimental results also support Artino’s recommendations on instructional design [10], showing that appropriate design can reduce cognitive load and enhance learning motivation. Alimisis [39] and Gomes and Bogosyan [41] focused on educational robotics and remote labs, discussing their advantages and limitations, but our study combined these elements with VR technology to develop a more comprehensive and interactive educational tool that enriches the learning experience in robotics.
The integrated robot learning system faithfully implements the principles of robot kinematics, including forward and inverse kinematics. By calculating the joint parameters accurately, it ensures that the virtual and physical robotic arms move synchronously in response to user inputs. From an educational standpoint, users may find the system effective in conveying complex robotics concepts in an understandable way. The results of the achievement test reveal that the learning effectiveness of the experimental group was better than that of the control group, indicating that the integrated robot learning system is more effective in helping students understand the principles of robotics and their applications in daily life. This provides a clear answer to the first research question: there is a significant difference in students’ learning achievement when utilizing the integrated robot learning system for learning robotics compared to traditional teaching methods.
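To make the forward kinematics concrete, the sketch below chains homogeneous transforms built from Denavit–Hartenberg (DH) parameters and returns the end-effector position for a given set of joint angles. The DH table shown is a placeholder chosen for illustration only and does not describe the WLKATA arm’s actual geometry.

```python
import numpy as np

def dh_transform(theta, d, a, alpha):
    """Homogeneous transform for one joint using standard DH parameters."""
    ct, st = np.cos(theta), np.sin(theta)
    ca, sa = np.cos(alpha), np.sin(alpha)
    return np.array([
        [ct, -st * ca,  st * sa, a * ct],
        [st,  ct * ca, -ct * sa, a * st],
        [0.0,      sa,       ca,      d],
        [0.0,     0.0,      0.0,    1.0],
    ])

def forward_kinematics(joint_angles, dh_table):
    """Chain the joint transforms and return the end-effector position (x, y, z)."""
    T = np.eye(4)
    for theta, (d, a, alpha) in zip(joint_angles, dh_table):
        T = T @ dh_transform(theta, d, a, alpha)
    return T[:3, 3]

# Placeholder DH table (d, a, alpha) in meters/radians; NOT the WLKATA parameters.
dh_table = [(0.10, 0.00,  np.pi / 2),
            (0.00, 0.12,  0.0),
            (0.00, 0.10,  0.0),
            (0.08, 0.00,  np.pi / 2),
            (0.00, 0.00, -np.pi / 2),
            (0.05, 0.00,  0.0)]

print(forward_kinematics(np.radians([0, 30, -30, 0, 45, 0]), dh_table))
```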
The integrated robot learning system provides a freely available educational tool for teachers, students, and the general public, contributing to the promotion of science popularization and STEM education in two ways: (1) Accessibility: making the system freely available ensures equitable access to high-quality educational resources for individuals from diverse backgrounds and geographic locations, regardless of financial or locational constraints. (2) Engaging learning experience: the interactive and immersive nature of the VR-based learning environment can captivate learners’ interest and foster active engagement with the concepts of robotics and digital twins, enhancing their understanding and retention of robotics and its operating principles. The questionnaire results reveal that interacting with the virtual robotic arm enhanced learners’ understanding of kinematics principles and improved their ability to comprehend abstract robotics concepts. The questionnaire results also reveal that the integrated robot learning system can enhance learning motivation and reduce cognitive load in learning robotics. This clearly answers the second and third research questions.
The integrated robot learning system can illustrate the principles of forward and inverse kinematics, as well as the concepts of digital twins, demonstrating the capability of VR technology as a tool for learning robotics. When operating in an integrated learning environment, learners gain practical experience that enhances their understanding of a robot system and its control mechanisms. Users can grasp abstract concepts through interactive operation, which improves their comprehension and retention of robotics and digital twins, and they can manipulate a virtual robotic arm and perform tasks in a user-friendly and intuitive manner. According to the results of the technology acceptance model questionnaire, students were satisfied with the ease of use and usefulness of the integrated robot learning system and intended to use it for learning robotics in the future. This answers the fourth research question.
A systematic review by Luo et al. [24] discusses the effectiveness of virtual reality in K-12 and higher education. The findings indicate that VR can significantly increase engagement and motivation in learning environments, which is similar to the experimental results obtained using the integrated robot learning system developed in this study. An example of the interaction between virtual and physical experiments in [30] highlights the complementary nature of VR and physical learning tools in education, which can lead to better learning outcomes. This supports the idea that integrating robotic systems with virtual learning mechanisms can provide a comprehensive and satisfying learning experience. The VR system for astronomy education presented in [31] demonstrates how VR can enhance the understanding of complex concepts through immersive and interactive experiences, which aligns with the reported simplicity and effectiveness of the integrated robot learning system. The study by Lee et al. [32] used structural modeling to show how desktop VR can improve learning outcomes by providing interactive and engaging learning experiences. This also supports the notion that the interactive nature of a robot learning system enhances learning satisfaction and intention to use it in the future. The use of a VR environment for industrial robot control and path design proposed in [33] emphasizes the practical applications and effectiveness of VR in complex learning scenarios. This justifies the usefulness of the integrated robot learning system in providing practical and hands-on learning experiences that are both engaging and effective.
The availability of the integrated robot learning system as a freely accessible educational tool allows for flexible learning modalities that accommodate various learning styles and preferences. Users can engage with the system at their own pace and convenience, supplementing traditional classroom instruction or independent study. Teachers can incorporate the integrated robot learning system into their curriculum to enrich classroom instruction and promote cooperative learning activities. Students can collaborate on robotics projects, share insights, and collectively solve problems to foster teamwork spirit. By promoting science popularization and STEM education, the integrated robot learning system is aligned with broader initiatives aimed at cultivating interest and proficiency in the fields of science, technology, engineering, and mathematics, contributing to workforce development and societal progress. After the teaching experiment, a series of activities, including workshops and short courses, were conducted in northern Taiwan for the promotion of STEM education and science popularization (Figure 10).

7. Conclusions and Future Works

This study used VR technology to simplify and visualize the complex principles of robotics by providing an immersive and interactive learning environment, allowing learners to change the joint parameters of a robotic arm to reach a designated position. Given the target position, the integrated robot learning system can use inverse kinematics to calculate the joint parameters, enabling precise control of the robotic arm to move its end-effector to the specified location. This study developed a digital twin using VR technology, allowing learners to monitor the current status of a physical robot from a virtual platform and control its movement by sending commands through the communication port. The results of the achievement tests show that the integrated robot learning system is more effective than the traditional teaching method for learning robotics. The questionnaire results show that most users were satisfied with the integrated robot learning system and agreed that it could enhance their learning motivation and reduce their cognitive load.
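As a simplified illustration of the inverse kinematics step, the sketch below solves a planar two-link arm in closed form; the integrated system solves the full six-joint problem, so the link lengths and geometry here are assumptions for demonstration only.

```python
import numpy as np

def two_link_ik(x, y, l1=0.12, l2=0.10):
    """Closed-form inverse kinematics for a planar two-link arm.

    Returns one pair of joint angles (radians) that places the end-effector
    at (x, y); raises ValueError if the target lies outside the workspace.
    """
    cos_q2 = (x * x + y * y - l1 * l1 - l2 * l2) / (2 * l1 * l2)
    if not -1.0 <= cos_q2 <= 1.0:
        raise ValueError("target is outside the arm's workspace")
    q2 = np.arccos(cos_q2)                                # elbow angle
    q1 = np.arctan2(y, x) - np.arctan2(l2 * np.sin(q2), l1 + l2 * np.cos(q2))
    return q1, q2

# Verify by substituting the angles back into the forward kinematics.
q1, q2 = two_link_ik(0.15, 0.10)
x = 0.12 * np.cos(q1) + 0.10 * np.cos(q1 + q2)
y = 0.12 * np.sin(q1) + 0.10 * np.sin(q1 + q2)
print(np.degrees([q1, q2]), round(x, 3), round(y, 3))     # recovers (0.15, 0.10)
```

Substituting the computed angles back into the forward kinematics, as in the last lines, mirrors how the system verifies that the end-effector actually reaches the specified location.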
By creating a more intuitive and realistic environment, VR can enhance the learning experience through diverse and complementary perspectives. The integrated robot learning system transforms complex robot kinematics into a straightforward and understandable learning experience. A series of promotional activities has been conducted in northern Taiwan to achieve the goals of science popularization and STEM education. The objective is to foster students’ interest and enthusiasm in learning robotics, prepare them to face the challenges of the digital age, and empower them to make meaningful contributions to the country and to human welfare.
The theoretical contributions of this study can be described based on its technological aspects and teaching experimental results. Firstly, the integrated robot learning system developed in this study advances the understanding of how VR and digital twin technology can be effectively incorporated into educational systems to simplify and visualize complex concepts, such as those found in robotics. This integration provides a new theoretical framework for enhancing students’ cognitive processes and learning effectiveness by reducing their cognitive load through immersive and interactive learning environments. The findings of the experimental results support the notion that VR technology can significantly improve comprehension and mastery of intricate topics, offering an alternative to traditional teaching methods that often rely heavily on abstract mathematical calculations and coordinate transformations.
Secondly, this study contributes to the body of knowledge about the impact of VR on student motivation and engagement. The experimental results showed that the integrated robot learning system can increase interest and participation in challenging subjects, such as robotics. This provides empirical evidence of the motivational benefits of immersive learning technologies. This insight helps refine educational theories related to student engagement and motivation, particularly in STEM education. Furthermore, the reported high levels of student satisfaction underscore the potential of VR and digital twin technologies to enhance the overall learning experience, paving the way for broader adoption and further theoretical exploration of these technologies in diverse educational contexts. The integrated robot learning system can be applied to various learning topics in STEM education, and our future work will include the extension of educational applications to:
  • Learning geometric shapes and spatial concepts
Students can learn the concepts of geometric shapes and spatial positions using the virtual robot developed in this study through interactive operation and visual simulation. They can control the virtual robot to pick up objects with various geometric shapes and place them into designated boxes. Users can interact with the robot by identifying the same shape from a list of objects. The robot can also be used to display the concepts of spatial positioning, such as above, below, in front of, and behind, for small children. They can control the robot to move an object to a specified position relative to other objects in the virtual environment. The robot can also be used in sorting activities, where users must control the robot to pick up objects according to their shapes or colors.
  • Learning vector operation and rotation in 3D space
The movement of the robotic arm is determined by coordinate transformations; therefore, operating a robotic arm can help users develop the concepts of vector addition and rotation in 3D space. The system can demonstrate vector addition by treating different segments of the robotic arm as the vectors to be added. Users can rotate one or more joints to observe the movement of the end-effector and understand how vector addition or rotation works by checking the magnitude and direction of the resultant vector. The robotic arm can also display the rotation of the end-effector in 3D space using input parameters (e.g., the angles and joints for rotation) from the control panel, so users can see how an object or vector rotates around the designated axis and understand the effects of rotation, as illustrated in the sketch below. These hands-on experiences not only enhance learning motivation but also help users develop proficiency in spatial concepts and 3D geometric operations.
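The sketch below gives a minimal numerical counterpart to these activities, assuming NumPy is available: it rotates a vector about the vertical axis using Rodrigues’ formula (as happens when the first joint turns) and sums illustrative link vectors to show how the end-effector position arises from vector addition.

```python
import numpy as np

def rotate_about_axis(v, axis, angle_deg):
    """Rotate vector v about a unit axis by angle_deg using Rodrigues' formula."""
    k = np.asarray(axis, dtype=float)
    k = k / np.linalg.norm(k)
    v = np.asarray(v, dtype=float)
    t = np.radians(angle_deg)
    return v * np.cos(t) + np.cross(k, v) * np.sin(t) + k * np.dot(k, v) * (1 - np.cos(t))

# Rotating a vector 90 degrees about the z-axis, as when the first joint turns:
# (1, 0, 0) becomes (0, 1, 0).
print(np.round(rotate_about_axis([1.0, 0.0, 0.0], [0.0, 0.0, 1.0], 90), 3))

# Vector addition: the end-effector position is the sum of the link vectors
# (illustrative link vectors in meters, not the actual arm dimensions).
links = np.array([[0.00, 0.00, 0.10],
                  [0.12, 0.00, 0.00],
                  [0.10, 0.00, 0.00]])
print(links.sum(axis=0))
```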

Author Contributions

Investigation and formal analysis: W.T. and T.-L.W.; methodology and investigation: Y.-J.W., L.-Y.Y., C.-W.T., C.-L.L. and Y.-C.L.; methodology, writing–review and editing: W.T. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Science and Technology Council (NSTC), Taiwan, under grant number 112-2410-H-007-043-MY2.

Institutional Review Board Statement

The study was conducted in accordance with the Declaration of Helsinki and approved by the Research Ethics Committee of National Tsing Hua University, Taiwan (REC No. 11112HT130, 10 February 2022).

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

Data are available on request due to restrictions.

Conflicts of Interest

The authors declare no conflicts of interest.

Appendix A

Achievement Test (12 multiple-choice questions)
1. ( ) Which of the following statements about industrial six-joint manipulators is correct?
A. The first and the last joints have angle limitations, while the other four joints do not.
B. They can operate successfully even without any prismatic joints.
(A) Only A is correct (B) Only B is correct (C) Both are correct (D) Both are incorrect
2. ( ) How is an industrial six-joint manipulator usually designed to control each joint?
(A) The first and the last joints control rotation, while the other joints control position.
(B) The first two joints control rotation, while the other joints control position.
(C) The first three joints control rotation, while the other joints control position.
(D) The first three joints control position, while the other joints control rotation.
3. ( ) Which of the following statements about industrial six-joint manipulators is correct?
A. Adjusting the first joint does not change the position of the end-effector.
B. When the last joint is adjusted, the end-effector does not change its position.
(A) Only A is correct (B) Only B is correct (C) Both are correct (D) Both are incorrect
4. ( ) Which of the following statements about forward and inverse kinematics is correct?
A. Because each joint may have its own angle limitations, inverse kinematics may not always have a solution.
B. If there is a candy on the table and I want to find the angles of all joints to grab it, I need to use forward kinematics.
(A) Only A is correct (B) Only B is correct (C) Both are correct (D) Both are incorrect
5. ( ) Which of the following statements about forward and inverse kinematics is correct?
A. In inverse kinematics, setting all parameters to 10 means that each joint of the robot arm rotates 10 degrees clockwise.
B. In forward kinematics, the position of the end-effector may change even if only one joint parameter is changed.
(A) Only A is correct (B) Only B is correct (C) Both are correct (D) Both are incorrect
The following image shows the rotation direction of each joint of the WLKATA six-joint manipulator. Please answer the following questions based on its operating principles:
[Image: rotation directions of the six joints of the WLKATA manipulator]
6. ( ) Which point can the end-effector move to from (−6, 8, 6) by adjusting only the first joint?
(A) (−10, 0, 6) (B) (−6, 8, 0) (C) (6, −8, 0) (D) (6, −6, 8)
7. ( ) Which point can the end-effector move to from (9, 12, 6) by adjusting only the first joint?
(A) (6, 12, 6) (B) (12, 9, 0) (C) (0, 15, 6) (D) (9, 12, 0)
8. ( ) If the end-effector moves from (17, 0, 9) to (9, 0, 17) and you only want to adjust the first joint, what is the difference between the new and old joint parameters of the first joint?
(A) 45 (B) 90 (C) 180 (D) It’s impossible to adjust only the first joint.
9. ( ) If the end-effector moves from (17, 0, 9) to (0, 17, 9) and you only want to adjust the first joint, what is the difference between the new and old joint parameters of the first joint?
(A) 45 (B) 90 (C) 180 (D) It’s impossible to adjust only the first joint.
10. ( ) Which set of joint parameters (J1, J2, J3, J4, J5, J6) can make the end-effector perpendicular to the platform?
(A) (60, 30, −30, 0, 0, −10) (B) (0, 45, 45, 0, 45, 0)
(C) (90, −30, −30, 0, −45, 0) (D) (60, 30, 45, 0, 45, −60)
11. ( ) If the joint parameters (0, 0, 45, 0, J5, −45) are to make the end-effector perpendicular to the platform, then what is the value of J5?
(A) 0 (B) 45 (C) −45 (D) All of the above are acceptable.
12. ( ) If the joint parameters (0, 90, 0, 0, −90, J6) are to make the end-effector perpendicular to the platform, then what is the value of J6?
(A) 0 (B) 90 (C) −90 (D) All of the above are acceptable.

References

  1. ISO. Manipulating Industrial Robots. ISO Standard 8373. 2023. Available online: https://www.iso.org/standard/75539.html (accessed on 12 December 2023).
  2. Robots. What Is a Robot? Top Roboticists Explain Their Definition of a Robot. 2023. Available online: https://robots.ieee.org/learn/what-is-a-robot (accessed on 12 December 2023).
  3. Honda. ASIMO by Honda: The World’s Most Advanced Humanoid Robot. 2023. Available online: https://global.honda/en/robotics (accessed on 10 June 2023).
  4. Rorel, B. A Ping-Pong-Playing Terminator. 2010. Available online: https://www.popsci.com/technology/article/2010-02/ping-pong-playing-terminator (accessed on 9 September 2023).
  5. Gurgul, M. Industrial Robots and Cobots: Everything You Need to Know about Your Future Co-Worker; INKPAD: Carmel, IN, USA, 2018; ISBN 978-83-952513-0-6. [Google Scholar]
  6. Ghosh, A.; Fischer, P. Controlled propulsion of artificial magnetic nanostructured propellers. Nano Lett. 2009, 9, 2243–2245. [Google Scholar] [CrossRef]
  7. Geeks for Geeks. Top 10 Applications of Robotics in 2020. Available online: https://www.geeksforgeeks.org/top-10-applications-of-robotics-in-2020 (accessed on 12 December 2023).
  8. Lindner, P. Better, virtually: The past, present, and future of virtual reality cognitive behavior therapy. Int. J. Cogn. Ther. 2021, 14, 23–46. [Google Scholar] [CrossRef]
  9. Sweller, J. Element interactivity and intrinsic, extraneous, and germane cognitive load. Educ. Psychol. Rev. 2010, 22, 123–138. [Google Scholar] [CrossRef]
  10. Artino, A.R., Jr. Cognitive load theory and the role of learner experience: An abbreviated review for educational practitioners. AACE J. 2008, 16, 425–439. [Google Scholar]
  11. Corke, P. Robotics, Vision & Control: Fundamental Algorithms in MATLAB; Springer: Berlin/Heidelberg, Germany, 2011. [Google Scholar] [CrossRef]
  12. Tao, J.; Qin, C.; Xiao, D.; Shi, H.; Ling, X.; Li, B.; Liu, C. Timely chatter identification for robotic drilling using a local maximum synchro-squeezing-based method. J. Intell. Manuf. 2020, 31, 1243–1255. [Google Scholar] [CrossRef]
  13. Tiryaki, A.; Adigüzel, S. The effect of STEM-based robotic applications on the creativity and attitude of students. J. Sci. Learn. 2021, 4, 288–297. [Google Scholar] [CrossRef]
  14. Apriyani, R.; Ramalis, T.R.; Suwarma, I.R. Analyzing students’ problem-solving abilities of direct current electricity in STEM-based learning. J. Sci. Learn. 2019, 2, 85–91. [Google Scholar] [CrossRef]
  15. Anikarnisia, N.M.; Wilujeng, I. Need assessment of STEM education based on local wisdom in junior high school. J. Phys. Conf. Ser. 2020, 1440, 012001. [Google Scholar] [CrossRef]
  16. Henning, K.; Wolfgang, W.; Johannes, H. Recommendations for Implementing the Strategic Initiative INDUSTRIE 4.0; Federal Ministry of Education and Research: Frankfurt, Germany, 2013.
  17. Aheleroff, S.; Xu, X.; Zhong, R.Y.; Lu, Y. Digital twin as a service (DTaaS) in Industry 4.0: An architecture reference model. Adv. Eng. Inform. 2021, 47, 101225. [Google Scholar] [CrossRef]
  18. Bao, J.; Guo, D.; Li, J.; Zhang, J. The modelling and operations for the digital twin in the context of manufacturing. Enterp. Inf. Syst. 2019, 13, 534–556. [Google Scholar] [CrossRef]
  19. He, B.; Bai, K.J. Digital twin-based sustainable intelligent manufacturing: A review. Adv. Manuf. 2021, 9, 1–21. [Google Scholar] [CrossRef]
  20. Kowalski, C.; Brinkmann, A.; Böhlen, C.F.; Hinrichs, P.; Hein, A. A rule-based robotic assistance system providing physical relief for nurses during repositioning tasks at the care bed. Int. J. Intell. Robot. Appl. 2022, 7, 1–12. [Google Scholar] [CrossRef]
  21. Russell, S.; Norvig, P. Artificial Intelligence—A Modern Approach, 4th ed.; Pearson: London, UK, 2016; Available online: http://aima.cs.berkeley.edu (accessed on 22 August 2022).
  22. Cox, A.M. Exploring the impact of Artificial Intelligence and robots on higher education through literature-based design fictions. Int. J. Educ. Technol. High. Educ. 2021, 18, 3. [Google Scholar] [CrossRef]
  23. Dede, C. Immersive interfaces for engagement and learning. Science 2009, 323, 66–69. [Google Scholar] [CrossRef]
  24. Luo, H.; Li, G.; Feng, Q.; Yang, Y.; Zuo, M. Virtual reality in K-12 and higher education: A systematic review of the literature from 2000 to 2019. J. Comput. Assist. Learn. 2021, 37, 887–901. [Google Scholar] [CrossRef]
  25. Dalgarno, B.; Hedberg, J. 3D learning environments in tertiary education. In Meeting at the Crossroads; University of Melbourne for Computers in Learning in Tertiary Education: Melbourne, Australia, 2001; pp. 33–36. [Google Scholar]
  26. Burdea, G.; Coiffet, P. Virtual Reality Technology; Wiley IEEE-Press: London, UK, 2003. [Google Scholar]
  27. Sherman, W.R.; Craig, A.B. Understanding Virtual Reality; Morgan Kaufmann Publishers: New York, NY, USA, 2003. [Google Scholar]
  28. Tarng, W.; Pan, I.C.; Ou, K.L. Effectiveness of virtual reality on attention training for elementary school students. Systems 2022, 10, 104. [Google Scholar] [CrossRef]
  29. Dunleavy, M.; Dede, C.; Mitchell, R. Affordances and limitations of immersive participatory augmented reality simulations for teaching and learning. J. Sci. Educ. Technol. 2016, 18, 7–22. [Google Scholar] [CrossRef]
  30. Lee, H.P. An example of the interaction between virtual and physical experiments in dynamics. Int. J. Mech. Eng. Educ. 2004, 32, 93–99. [Google Scholar] [CrossRef]
  31. Chen, C.H.; Yang, J.C.; Shen, S.; Jeng, M.C. A desktop virtual reality earth motion system in astronomy education. Educ. Technol. Soc. 2007, 10, 289–304. [Google Scholar]
  32. Lee, E.A.L.; Wong, K.W.; Fung, C.C. How does desktop virtual reality enhance learning outcomes? A structural equation modeling approach. Comput. Educ. 2010, 55, 1424–1442. [Google Scholar]
  33. Togias, T.; Gkournelos, C.; Angelakis, P.; Michalos, G.; Makris, S. Virtual reality environment for industrial robot control and path design. Procedia CIRP 2021, 100, 133–138. [Google Scholar] [CrossRef]
  34. Verner, I.; Cuperman, D.; Perez-Villalobos, H.; Polishuk, A.; Gamer, S. Augmented and virtual reality experiences for learning robotics and training integrative thinking skills. Robotics 2022, 11, 90. [Google Scholar] [CrossRef]
  35. Adami, P.; Rodrigues, P.; Woods, P.; Becerik-Gerber, B.; Soibelman, L.; Copur-Gencturk, Y.; Lucas, G. Impact of VR-Based Training on Human-Robot Interaction for Remote Operating Construction Robots. J. Comput. Civ. Eng. 2022, 36, 1943–5487. [Google Scholar] [CrossRef]
  36. Malik, A.; Masood, T.; Bilberg, A. Virtual reality in manufacturing: Immersive and collaborative artificial-reality in design of human-robot workspace. Int. J. Comput. Integr. Manuf. 2020, 33, 22–37. [Google Scholar] [CrossRef]
  37. Heradio, R.; de la Torre, L.; Galan, D.; Cabrerizo, F.J.; Herrera-Viedma, E.; Dormido, S. A Systematic Review of Virtual and Remote Laboratories for Robotics Education. Int. J. Interact. Multimed. Artif. Intell. 2016, 4, 9–24. [Google Scholar]
  38. Tzafestas, S.G.; Alifragis, M. Remote and Virtual Labs in Engineering Education: A Comparison Study. Int. J. Eng. Educ. 2015, 31, 414–422. [Google Scholar]
  39. Alimisis, D. Educational Robotics and Virtual Labs in Higher Education. Int. J. Adv. Robot. Syst. 2018, 15, 1–10. [Google Scholar]
  40. Bencomo, S.D.; Dormido, R.; Farias, G.; Vargas, H. Design and Implementation of Virtual Labs for Robotics Education. IEEE Trans. Educ. 2014, 57, 76–82. [Google Scholar] [CrossRef]
  41. Gomes, L.; Bogosyan, S. Remote Laboratories in Engineering Education: The Simulation vs. Remote Experimentation Debate. IEEE Trans. Ind. Electron. 2009, 56, 4744–4756. [Google Scholar] [CrossRef]
  42. Sacks, R.; Girolami, M.; Brilakis, I. Building Information Modelling, Artificial Intelligence and Construction Tech. Dev. Built Environ. 2020, 4, 100011. [Google Scholar] [CrossRef]
  43. Mourtzis, D.; Vlachou, E.; Dimitrakopoulos, G.; Zogopoulos, V. Cyber-Physical Systems and Education 4.0—The Teaching Factory 4.0 Concept. Procedia Manuf. 2018, 23, 129–134. [Google Scholar] [CrossRef]
  44. Tao, F.; Zhang, H.; Liu, A.; Nee, A. Digital Twin in Industry: State-of-the-Art. IEEE Trans. Ind. Inform. 2019, 15, 2405–2415. [Google Scholar] [CrossRef]
  45. Jones, D.; Snider, C.; Nassehi, A.; Yon, J.; Hicks, B. Characterising the Digital Twin: A Systematic Literature Review. CIRP J. Manuf. Sci. Technol. 2020, 29, 36–52. [Google Scholar] [CrossRef]
  46. Bruynseels, K.; Santoni de Sio, F.; van den Hoven, J. Digital Twins in Health Care: Ethical Implications of an Emerging Engineering Paradigm. Front. Genet. 2018, 9, 31. [Google Scholar] [CrossRef] [PubMed]
  47. Uhlemann, T.H.-J.; Schock, C.; Lehmann, C.; Freiberger, S.; Steinhilper, R. The Digital Twin: Demonstrating the Potential of Real Time Data Acquisition in Production Systems. Procedia Manuf. 2017, 9, 113–120. [Google Scholar] [CrossRef]
  48. Kucuk, S.; Bingul, Z. Robot Kinematics: Forward and Inverse Kinematics. In Industrial Robotics: Theory, Modelling, Control; Cubero, S., Ed.; Intech Open: Berlin, Germany, 2006. [Google Scholar]
  49. Siciliano, B.; Sciavicco, L.; Villani, L.; Oriolo, G. Robotics: Modelling, Planning and Control; Springer: Berlin/Heidelberg, Germany, 2009; ISBN 978-1846286414. [Google Scholar]
  50. Wang, L.C.; Chen, M.P. The effects of game strategy and preference-matching on flow experience and programming performance in game-based learning. Innov. Educ. Teach. Int. 2010, 47, 39–52. [Google Scholar] [CrossRef]
  51. Hwang, G.J.; Yang, L.H.; Wang, S.Y. A concept map-embedded educational computer game for improving students’ learning performance in natural science courses. Comput. Educ. 2013, 69, 121–130. [Google Scholar] [CrossRef]
  52. Briz, L.; García-Peñalvo, F. An empirical assessment of a technology acceptance model for apps in medical education. J. Med. Syst. 2015, 39, 176. [Google Scholar] [CrossRef]
Figure 1. Concept map of the instructional design employed in this study.
Figure 2. Joint parameters of the WLKATA six-axis robotic arm.
Figure 3. Coordinate systems of the WLKATA six-axis robotic arm.
Figure 4. Creating a 3D model of a virtual robotic arm using Blender.
Figure 5. Developing a digital twin for the WLKATA robotic arm.
Figure 6. Using the digital twin to control the physical robotic arm.
Figure 7. Developing a robot vision system for grasping an object.
Figure 8. Flowchart for detecting an object using the robot vision system.
Figure 9. Research variables in the quasi-experimental research design.
Figure 10. Promotional activities in STEM education conducted in Hsinchu, Taiwan.
Table 1. Research tools and statistical methods for the research questions.
Research Question | Research Tool | Statistical Method
1. Does the CPRLS improve students’ learning achievement in robotics compared to traditional teaching methods? | Achievement test | Paired sample t-test and one-way ANCOVA
2. Does the CPRLS enhance students’ motivation to learn robotics more than traditional teaching methods? | Learning motivation questionnaire | Independent samples t-test
3. Does the CPRLS incur less cognitive load in learning robotics compared to traditional teaching methods? | Cognitive load questionnaire | Independent samples t-test
4. What is the students’ level of satisfaction after using the CPRLS for learning robotics? | Satisfaction questionnaire | Descriptive statistics
Table 2. Experimental results of the pre-test and post-test scores for both groups.
Group | Pre-Test Mean | Pre-Test S.D. | Post-Test Mean | Post-Test S.D.
Experimental Group | 3.60 | 1.70 | 8.71 | 2.97
Control Group | 2.93 | 1.40 | 3.75 | 1.78
Table 3. Paired sample t-test results between the pre-test and post-test for the two groups.
Source | Mean | S.D. | t | Significance
Experimental Group | −5.114 | 3.367 | 8.895 | <0.001 ***
Control Group | −0.825 | 1.893 | 2.756 | 0.009 **
** p < 0.01, *** p < 0.001.
Table 4. Comparing the learning effectiveness of the two groups by ANCOVA.
Source | Type III Sum of Squares | Freedom | F | Significance | η²
Pre-test | 7.501 | 1 | 1.301 | 0.258 | 0.018
Group | 414.382 | 1 | 71.868 | <0.001 *** | 0.500
Deviation | 415.141 | 71 | | |
Sum | 3643.000 | 75 | | |
*** p < 0.001.
Table 5. Results of independent samples t-test on learning motivation for both groups.
Group | N | Mean | S.D. | df | t | p
Experimental Group | 35 | 3.994 | 0.735 | 73 | 3.621 | <0.001 ***
Control Group | 40 | 3.290 | 0.923 | | |
*** p < 0.001.
Table 6. The mean, standard deviation, and significance of learning motivation for both groups.
Evaluation Items | Experimental Group Mean | Experimental Group S.D. | Control Group Mean | Control Group S.D. | p
1. Learning robotics using this method is interesting to me. | 4.09 | 0.82 | 3.48 | 0.86 | 0.004 **
2. I believe this learning method can enhance my interest in robotics and improve my learning outcomes. | 4.03 | 0.82 | 3.42 | 0.95 | 0.006 **
3. This method can increase my intention in learning robotics. | 3.97 | 0.79 | 3.10 | 0.88 | 0.000 ***
4. This method allows me to focus on learning robotics. | 3.97 | 0.82 | 3.25 | 0.90 | 0.001 **
5. This learning method motivates me to explore robots. | 3.91 | 0.89 | 3.20 | 0.84 | 0.001 **
** p < 0.01, *** p < 0.001.
Table 7. Results of independent samples t-test on cognitive load for both groups.
Group | N | Mean | S.D. | df | t | p
Experimental Group | 35 | 2.149 | 0.676 | 73 | −5.094 | <0.001 ***
Control Group | 40 | 2.970 | 0.714 | | |
*** p < 0.001.
Table 8. The mean, standard deviation, and significance of cognitive load for both groups.
Evaluation Items | Experimental Group Mean | Experimental Group S.D. | Control Group Mean | Control Group S.D. | p
1. I find the forward kinematics of robots difficult to understand. | 2.03 | 0.79 | 3.20 | 0.96 | 0.000 ***
2. I find the inverse kinematics of robots difficult to understand. | 2.11 | 0.96 | 3.80 | 0.84 | 0.000 ***
3. I think this learning method cannot reduce the time pressure in learning robotics. | 2.20 | 0.83 | 2.45 | 0.80 | 0.108
4. I think this learning method cannot reduce the frustration in learning robotics. | 2.26 | 0.70 | 2.70 | 0.89 | 0.014 *
5. I think this learning method cannot reduce the mental stress in learning robotics. | 2.14 | 0.73 | 2.70 | 0.92 | 0.004 **
* p < 0.05, ** p < 0.01, *** p < 0.001.
Table 9. TAM questionnaire results for the integrated robot learning system.
Technology Acceptance Model | Mean | S.D.
Perceived Usefulness (Mean = 4.29) | |
1. I think the integrated robot learning system can help me understand the principles of forward kinematics. | 4.34 | 0.59
2. I think the integrated robot learning system can help me understand the principles of inverse kinematics. | 4.31 | 0.58
3. I think the integrated robot learning system can help me understand the digital twin and its functionality. | 4.29 | 0.67
4. I think the visual effects of virtual reality can help me understand the movement and paths of robots in three-dimensional space. | 4.34 | 0.68
5. I think the integrated robot learning system can provide me with more specific knowledge about robots. | 4.17 | 0.71
Perceived Ease of Use (Mean = 4.05) | |
6. I find the operation method of the virtual robot simple and easy to use. | 4.11 | 0.68
7. I find the user interface of the integrated robot learning system clear and easy to understand. | 4.17 | 0.71
8. I can easily complete the assigned tasks using the virtual robot. | 4.03 | 0.95
9. I find the description and explanation of the teaching materials in the integrated robot learning system clear and easy to understand. | 4.09 | 0.89
10. I can use the virtual robot to complete other tasks if no one is there to tell me how to do them. | 3.86 | 0.98
Behavioral Intention (Mean = 4.10) | |
11. I am willing to use the integrated robot learning system to learn robotics. | 4.17 | 0.79
12. When I want to study robotics, I will use the integrated robot learning system. | 4.11 | 0.80
13. When I need to learn robot-related content in the future, I am willing to use the integrated robot learning system. | 4.03 | 0.79
Overall technology acceptance | 4.16 | 0.54