Article

Towards Recognising Learning Evidence in Collaborative Virtual Environments: A Mixed Agents Approach †

1 Department of Computer Science and Electronic Engineering, University of Essex, Colchester CO4 3SQ, UK
2 Umm Alqura University, Makkah 24382, Saudi Arabia
* Author to whom correspondence should be addressed.
This paper is an extended version of our published paper: Felemban, S.; Gardner, M.; Callaghan, V. An event detection approach for identifying learning evidence in collaborative virtual environments. In Proceedings of the 8th Computer Science and Electronic Engineering Conference (CEEC) 2016, Colchester, UK, 28–30 September 2016.
Computers 2017, 6(3), 22; https://doi.org/10.3390/computers6030022
Submission received: 31 May 2017 / Revised: 22 June 2017 / Accepted: 22 June 2017 / Published: 26 June 2017

Abstract

Three-dimensional (3D) virtual environments bring people together in real time, irrespective of their geographical location, to facilitate collaborative learning and working in an engaging and fulfilling way. However, it can be difficult to amass suitable data to gauge how well students perform in these environments. With this in mind, the current study proposes a methodology for monitoring students’ learning experiences in 3D virtual worlds (VWs). It integrates a computer-based mechanism that mixes software agents with natural agents (users), in conjunction with a fuzzy logic model, to reveal evidence of learning in collaborative pursuits, replicating the sort of observation that would normally be made in a conventional classroom setting. Software agents are used to infer the extent of interaction from the number of clicks, the actions of users, and other events, while natural agents (the users themselves) evaluate the students and the way in which they perform. Together, these agents offer an effective method for assessing learning activities in 3D virtual environments.

1. Introduction

1.1. Collaborative Learning Environments

Collaborative learning is beneficial because it enables individual students to interact with their peers in ways that develop new skills and spread knowledge. In particular, collaborative learning is well suited to three-dimensional (3D) virtual worlds (VWs) [1,2,3]. There are various platforms upon which 3D VWs can be developed, including Open Simulator (http://opensimulator.org/), Active Worlds (https://www.activeworlds.com/), Open Wonderland (http://www.openwonderland.org/), and Second Life (http://secondlife.com/). VWs encourage students to engage with each other in ways that improve teamwork and decision-making by bringing them together in real time [4]. This sharing of ideas can help individuals to gain a better understanding of complex phenomena and relationships [5]. However, in order to get the most out of VWs and maximise their potential for learning, further analysis of how they function is required [6].
Many factors can affect a group’s performance [7,8], and educators should be aware of them if collaboration is to succeed. It is normal for teachers to be wary when they first attempt to use collaborative activities and to assess learning outcomes in such scenarios. “Learning within technology creates a pedagogical shift that requires teachers to think about measuring outcomes in non-traditional ways” [9]. In order to evaluate how individual students are performing in such environments, it is necessary to identify learning evidence. However, it is not easy to trace students’ behaviour in VWs or to evaluate learning outcomes such as a student’s skills. Collaborative learning certainly confers benefits in terms of encouraging the development of interpersonal skills, but assessing those skills in such an environment is problematic. Furthermore, several users may be contributing simultaneously to the learning activity, which makes monitoring each learner’s contribution more difficult.
Consequently, it would be highly beneficial to develop an event recognition technique that could simultaneously gather evidence of learning, and identify and appraise users’ actions. This would help with the collection of evidence during collaborative activities and with linking this evidence to learning outcomes.

1.2. Techniques and Approaches

1.2.1. Agents and Multi-Agent Systems

An agent is a computer-based system situated in an environment, with the ability to act in that environment to achieve the aim it is designed for [10,11]. Some agents have the capability to learn [12,13], while others do not; indeed, it has been argued that learning is undesirable for some kinds of agent [10]. An agent could be any computer application that takes input from an environment and acts in it to produce an output. Sánchez [14] identified several types of agent, classified according to their specific capabilities: (a) user agents, which are valuable to end users for collecting data, providing objects for user interfaces, or running user applications in order to observe how they behave; (b) network agents, which work in distributed systems; and (c) programmer agents, capable of working with hardware and software entities. Other types of agent used in distributed e-learning systems [15] are: (a) device agents, which can monitor a learner’s behaviour when using a learning website; and (b) teacher agents, which can collaborate with tutor agents to provide learners with beneficial information.
In a multi-agent system (MAS), numerous agents can have different roles and objectives but are connected through the outputs they produce. For a MAS to operate effectively, protocols are required to govern how agents interact and communicate with each other; it is often the case that agents must communicate using a common method [16]. Durfee and Lesser [17] defined a MAS as a group of problem-solvers (agents) that execute together, rather than alone, to achieve a goal. Working together allows problems to be solved that would otherwise be beyond an individual agent’s abilities. The importance of agents and MASs has extended their scope from an interesting research topic to the basis for novel emerging applications.
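To make these definitions concrete, the following is a minimal illustrative sketch (our own, not part of any cited system; all class and field names are hypothetical) of agents that act in a shared environment and pool their partial outputs through a trivial coordination protocol:

    from abc import ABC, abstractmethod

    class Agent(ABC):
        """An agent is situated in an environment and acts in it to achieve its goal."""
        @abstractmethod
        def act(self, environment: dict) -> dict:
            """Observe the environment and return a partial contribution."""

    class ClickCounter(Agent):
        # A user-agent-like observer: counts interface clicks.
        def act(self, environment: dict) -> dict:
            return {"clicks": len(environment.get("click_events", []))}

    class ChatCounter(Agent):
        # Another observer: counts chat messages.
        def act(self, environment: dict) -> dict:
            return {"messages": len(environment.get("chat_events", []))}

    def run_mas(agents, environment: dict) -> dict:
        """Coordination protocol: every agent reports; outputs are merged."""
        result = {}
        for agent in agents:
            result.update(agent.act(environment))
        return result

    env = {"click_events": ["c1", "c2", "c3"], "chat_events": ["hi"]}
    print(run_mas([ClickCounter(), ChatCounter()], env))  # {'clicks': 3, 'messages': 1}

Together, the two observers answer a question (how active is this user?) that neither answers alone, which is the essence of Durfee and Lesser’s definition.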

1.2.2. Fuzzy Logic

Some studies have found that integrating methods such as MAS and fuzzy logic yields further benefits. In this context, fuzzy logic (FL) may be regarded as expert reasoning placed into a form that computers can work with. It has been applied in expert systems and artificial intelligence, and it is considered the basis of fuzzy expert systems [18]. Fuzzy systems can deal with uncertainty in data and can model common-sense reasoning mechanisms that are often difficult to capture with conventional approaches. FL works with logical representations containing linguistic variables and multiple values such as “poor”, “average”, and “good”, unlike classical logic, which contains only true and false values. The major drawback of classical logic is being constrained to these two values, which does not reflect the complexity of the real, non-numerical world. Thus, FL can be considered a multi-valued logic designed for approximate reasoning: fuzzy reasoning derives imprecise but plausible conclusions from an initial set of premises. FL is especially compatible with our model, since humans reason in a fuzzy manner and we are trying to combine natural and software agents within a single system; using fuzzy logic to build our artificial agents introduces better consistency across the differing agents. A fuzzy control system is usually composed of the following stages [18,19] (see Figure 1; a minimal code sketch follows the list):
  • pre-processor: obtains the data (crisp values) and generates the fuzzy inputs;
  • fuzzification: converts the data into fuzzy variables using membership functions;
  • fuzzy inference: applies the rule base through the inference engine;
  • defuzzification: converts the output of the inference into a numeric value; and
  • post-processor: reduces the final data to be sent on to the process.
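As a minimal sketch, the five stages above might be realised as follows in Python, assuming a single crisp input on a 0–100 scale and triangular membership functions; the ranges, linguistic terms, and identity rules are illustrative, not taken from the paper:

    def tri(x, a, b, c):
        """Triangular membership function: rises from a to a peak at b, falls to c."""
        if x <= a or x >= c:
            return 0.0
        return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

    def fuzzy_controller(raw):
        # 1. Pre-processor: obtain the crisp value (here, clamp to a known range).
        crisp = max(0.0, min(100.0, raw))
        # 2. Fuzzification: map the crisp value onto linguistic terms.
        degrees = {
            "poor": tri(crisp, -1, 0, 50),
            "average": tri(crisp, 25, 50, 75),
            "good": tri(crisp, 50, 100, 101),
        }
        # 3. Fuzzy inference: fire the rule base (identity rules, for brevity).
        fired = {term: d for term, d in degrees.items() if d > 0}
        # 4. Defuzzification: weighted average over representative output values,
        #    a simplified stand-in for a full centre-of-area computation.
        centres = {"poor": 25.0, "average": 50.0, "good": 75.0}
        out = sum(centres[t] * d for t, d in fired.items()) / sum(fired.values())
        # 5. Post-processor: round the final value for the consuming process.
        return round(out, 1)

    print(fuzzy_controller(62.0))  # ~57.9: mostly "average", partly "good"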
This paper offers an innovative framework for monitoring learning experiences in collaborative groups. It demonstrates that a combination of software agents and user perceptions based on fuzzy reasoning is able to provide insight into learning activities in 3D VWs. This approach promises to transform the way in which learning evidence is collected and interpreted in order to better understand learning outcomes in 3D VWs.

2. Related Work

It is relatively simple for a teacher to monitor learning progress when a student is asked to write an essay or take a written examination. Thompson and Markauskaite [20] suggested that “educators need to move beyond traditional forms of assessment and search for evidence of learning in the learners’ interactions with each other and the virtual environment, and artefacts created.” For instance, Alrashidi [21] proposed the Pedagogical Virtual Machine (PVM), an approach for gathering pedagogical gains from online activities and providing real-time feedback to an individual learner in immersive environments.
An advantage of VWs is that all activities are largely instrumented or logged, an aspect we try to capitalise on in our model. However, it is significantly more difficult to monitor students’ progress when the learning environment involves collaborative educational games or VW scenarios, because these activities do not maintain a definite relationship between in-world objects and student actions. There are several problems when assessing learning outcomes in VWs. First, there is a lack of established evaluation techniques for 3D VWs. In addition, the empirical literature offers little guidance on interpreting student data streams from these environments in order to monitor performance [22].
In 3D VWs, analysing students’ actions and behaviours can help to assess learning. To analyse these actions, some researchers have applied a cognitive task analysis method, which involves developing logical rules to track users’ actions and to differentiate students’ skill levels [23]. Another way in which the performance of learners can be inferred is to examine automatically generated user log files: by studying these files, it is possible to monitor learners’ pathways and observe how they went about achieving the tasks they were set. This methodology was employed by Annetta et al. [24], who monitored user activities such as login times, decisions, chat logs, and interactions, and then applied various data mining methods to the collected data. Similarly, Kerr and Chung [25] studied learners’ log data using cluster analysis algorithms; examining educational game environments, their aim was to reveal the key features of how learners perform in such settings. However, analysing users’ performance from log files is complicated by the fact that the logs include both the correct and the incorrect actions of the students. Relying on log files as a source of data is also not ideal because of the vast amounts of data involved, which must be sifted through in order to interpret learning outcomes in VWs [26].
Tesfazgi [27] suggested that any attempt to monitor learning experiences and evaluate student performance will be more fruitful if the learning space has been designed from the outset to facilitate that purpose. It is not enough to merely collect data from immersive environments; what is needed is a clear understanding of how that data will be interpreted and analysed in order to assess learning outcomes. For instance, many studies in the empirical literature have relied on data mining methods, but these techniques are commonly limited to identifying correlations between the data and the intended educational goals.
Another aspect that needs to be clearly assessed in learning activities is the quality of the students’ performance. Collaboration lends itself to honing certain skills, but amassing evidence of this is by no means easy in 3D VWs [28]. For example, conventional assessment techniques are not ideal for gauging the quality of an individual’s interpersonal skills; they are better suited to merely counting the actions performed. Ibáñez et al. [29] proposed a methodology based on smart objects in 3D environments for assessing knowledge and competence with regard to learning outcomes; however, such an approach cannot provide insight into the quality of the performance generated in 3D VWs. Moreover, Shute [30] suggested a “stealth assessment” approach, which utilises a Bayesian network to model the learner’s behaviour in a virtual game and to evaluate their problem-solving skill level; the results showed that the inferred learning events closely matched the students’ actual learning gains. That study assessed a specific skill based on a player’s behaviour in a game context. Most research focuses on evaluating individual learners, whereas an important feature of VWs is that they permit knowledge sharing and collaboration, and there is a lack of theoretical guidance for assessing collaborative activities in order to identify the quality of learning in such spaces. Finally, as any monitoring technology is unlikely to be perfect, there is a risk that students will use gaming strategies to cheat the system, for example by learning which actions (even mindless ones) lead to higher scores.
From the above discussion, it is apparent that while technology is certainly useful for collecting data in a variety of scenarios, it is not well suited for analysing and interpreting data in order to determine the quality and quantity of learning. Therefore, in order to address these issues and evaluate the quality and quantity of learners’ performance, the Mixed Agents (MixAgent) model is proposed, which can be applied to monitor the behaviour of individuals and to accumulate evidence in real time in collaborative environments.

3. Mixed Agents Model (MixAgent)

Our earlier work led to the development of the Mixed Intelligent Virtual Observation (MIVO) framework [31]. MIVO includes the Observation Lenses model [32], which maps between observing learners in conventional classroom settings and in 3D VWs in order to assess how they perform. This requires a way of combining agents to replicate how a teacher would monitor their students and observe progress in a classroom setting. The work in this paper continues this research [33] by demonstrating a mechanism for collecting data in VWs to better understand the learning outcomes of groups and individuals. In order to record learning events, and to overcome some of the limitations discussed above, the MAS method is extended by adding natural agents to the software agents. Natural agents are the learners taking part in the task. In the MixAgent model (Figure 2), the data from the agents are sent to the fuzzy system to identify learning evidence and to assess the learning of each student.
The following list details the capabilities of the agents, including their particular assessment roles (an illustrative data-collection sketch follows it):
  • Software Agents (User Agent (UA)): When a user has received authorisation in the virtual world, a user agent (UA) is assigned to each student. UAs are able to monitor the activities of individuals in real time, save the data, and then transfer this data to the fuzzy inference engine.
  • Natural Agents (NA): Evaluation by peers is particularly suitable for group exercises [34], and can provide insight that conventional technology would struggle to pick up on [28]. The students themselves are the natural agents, providing details about the skills and qualities of other learners within the group setting. When learners are working together in assigned tasks, they can rate each other’s performance by using a rating tool. These quantitative scores are compiled and are then transferred for fuzzy reasoning. Natural agents can assist in measuring the quality of learning outcomes, which can be difficult to achieve when relying solely on automated approaches.
  • Fuzzy Inference: The agents have common objectives, and they collaborate in real time to amass data that can then be transferred for fuzzy inference. The inference is based on the fuzzy logic approach to make sense of the data collected by all of the agents. Once the data is collected, it can be analysed according to fuzzy rules in order to shed light on the performance of individual learners. Moreover, the inferences can be used to resolve relationships between the data and its meaning in order to infer further evidence of learning. The fuzzy logic method can handle multiple values and perform human-like reasoning, going some way towards providing a unified vision of agency within our model. More details about applying the fuzzy logic inference are given in Section 5.
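As an illustration of the division of labour between the two kinds of agent, the sketch below (a simplification with hypothetical names, not the model’s actual implementation) shows a user agent reducing its event log to crisp quantitative values, and repeated peer ratings from natural agents being aggregated alongside them, ready for fuzzy inference:

    from dataclasses import dataclass, field
    from statistics import mean

    @dataclass
    class UserAgent:
        """Software agent (UA): logs one student's in-world events in real time."""
        student: str
        events: list = field(default_factory=list)

        def record(self, kind, detail):
            self.events.append((kind, detail))

        def crisp_values(self):
            """Reduce the raw event log to quantitative inputs for fuzzy inference."""
            def count(k):
                return sum(1 for kind, _ in self.events if kind == k)
            return {"clicks": count("click"), "messages": count("chat"),
                    "tasks_done": count("task_complete")}

    def aggregate_ratings(ratings):
        """Natural agents (NA): average the repeated peer ratings (e.g. 1-5 stars)."""
        return mean(ratings) if ratings else 0.0

    ua = UserAgent("alice")
    ua.record("click", "rule_editor")
    ua.record("chat", "shall we test the lamp rule?")
    ua.record("task_complete", "task_1")
    crisp = {**ua.crisp_values(), "peer_rating": aggregate_ratings([4.0, 3.5])}
    print(crisp)  # these crisp values are the inputs that get fuzzified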

4. System Architecture

In order to realise the MixAgent model, the following system architecture has been devised for the research prototype (see Figure 3; an illustrative event-store sketch follows the list):
  • Authentication: Identifies the learners and teachers and attributes the functions that they perform.
  • Virtual 3D Environment: Provides the environment interface that enables users to fulfil their stated roles. Students collaborate on educational pursuits and monitor each other’s progress, while teachers use the interface to construct activities and give each student details of their tasks. Once the activities have been completed, the learners’ evaluations appear in the graphical user interface. Section 6 describes the learning environment used in this research.
  • MixAgent Model: As previously described, this model uses software agents to monitor the activities of users in real time. Students (natural agents) are not only learning but also monitoring the performance of their peers; the data yielded by natural agents is also used for evaluation. All of the data produced is transferred to the data manager, from where it can be retrieved whenever needed.
  • Fuzzy Model: Contains the different processes that the data are put through to obtain the final evaluation output; these stages are fuzzification, inferencing, and defuzzification.
  • Data Manager: Controls the flow of information to and from the data repositories. Data is received from the agents and then transferred to the repositories; the data manager also exchanges data with the fuzzy logic model.
  • Data Layer: Includes the database created to save the events and performed actions in real time.
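To illustrate the data manager and data layer, here is a possible minimal event store; the schema and field names are our own assumption, not the prototype’s actual database:

    import sqlite3, time

    # Hypothetical data-layer schema: one row per in-world event, written by the
    # data manager as agents report in real time.
    db = sqlite3.connect(":memory:")
    db.execute("""CREATE TABLE events (
        ts REAL,        -- time the event occurred
        student TEXT,   -- acting learner
        source TEXT,    -- 'UA' (software agent) or 'NA' (peer rating)
        kind TEXT,      -- e.g. click, chat, task_complete, rating
        detail TEXT)""")

    def save_event(student, source, kind, detail):
        """Data manager role: receive an event from an agent and persist it."""
        db.execute("INSERT INTO events VALUES (?, ?, ?, ?, ?)",
                   (time.time(), student, source, kind, detail))

    save_event("alice", "UA", "click", "rule_editor")
    save_event("alice", "NA", "rating", "4")
    print(db.execute("SELECT student, source, kind FROM events").fetchall())
    # [('alice', 'UA', 'click'), ('alice', 'NA', 'rating')]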

5. The Fuzzy Model

The fuzzy model processes (Figure 4) for evaluating the students’ performance are given below (a worked example follows the list):
  • Crisp Values: Crisp values represent the students’ data obtained by the system from both the software and natural agents.
  • Fuzzification: This is the process of changing the crisp values (students’ actions and ratings) into fuzzy input values with an appropriate membership function. We have used the triangular membership function [18] to convert the data collected from users into fuzzy sets. A membership function is identified by three parameters (x, y, z), and each student data item is defined by the function. For instance, the system gathers each user’s clicks, communications, and completed tasks via the software agent, gathers ratings from other sources (such as the natural agents), and translates the data into fuzzy sets. The fuzzy sets are then sent to the inference engine as inputs.
  • Inference: This stage applies linguistic rules for student evaluation. We have generated linguistic IF-THEN rules that define how the fuzzy inputs map to outputs during the inference process.
  • Fuzzy Output: Each active (“IF-THEN”) rule yields a membership function value as its output.
  • Defuzzification (students’ assessment): After the inference decision is complete, the resulting fuzzy output must be transformed back into a crisp value; this process, called defuzzification, yields the final student assessment. Several defuzzification techniques have been developed; the centre of area (centroid) method is used in this research because it is one of the most common [35].
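Putting the stages together, here is a worked sketch of a student assessment, assuming two crisp inputs (an activity count from the software agent and an average peer rating from the natural agents) and an illustrative rule base; all thresholds, linguistic terms, and output values are our own assumptions, not the study’s actual rules:

    def tri(x, a, b, c):
        """Triangular membership function with parameters (a, b, c), peaking at b."""
        if x <= a or x >= c:
            return 0.0
        return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

    def assess(interactions, peer_rating):
        """Illustrative student assessment on a 0-100 scale."""
        # Fuzzification of the two crisp inputs.
        activity = {"low": tri(interactions, -1, 0, 30),
                    "high": tri(interactions, 10, 60, 61)}
        quality = {"poor": tri(peer_rating, -1, 1, 3),
                   "good": tri(peer_rating, 2, 5, 6)}
        # IF-THEN rules; AND is taken as min, a common choice in fuzzy inference.
        rules = [
            (min(activity["high"], quality["good"]), 85.0),  # active and valued
            (min(activity["high"], quality["poor"]), 50.0),  # busy but low quality
            (min(activity["low"], quality["good"]), 60.0),   # quiet but valued
            (min(activity["low"], quality["poor"]), 20.0),   # weak contribution
        ]
        # Defuzzification: weighted average of rule outputs, a simplified
        # (discrete) form of the centre-of-area method.
        total = sum(w for w, _ in rules)
        return sum(w * out for w, out in rules) / total if total else 0.0

    # A student with 20 logged interactions and an average peer rating of 4/5:
    print(round(assess(20, 4.0), 1))  # ~69.4, blending two partially fired rules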

6. The Learning Environment

The virtual environment employed for the MixAgent model is the InterReality Portal [36], which was created using the Unity platform (https://unity3d.com/), a game engine for developing 2D/3D environments that is based on the C# programming language. The InterReality Portal was developed by a separate research project at the University of Essex [36]. It helps teachers to manage learning tasks in which students learn about the functionality of sensors in intelligent environments. This is achieved by means of IF-THEN-ELSE rules that govern actuators and sensors (see Figure 5; an illustrative rule is sketched below). The students can also observe, in real time, the effects of the created rules on the sensors.
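For example, a learner’s rule might couple a light sensor to a lamp actuator. A sketch of such a rule follows; the sensor name and threshold are hypothetical, not taken from the InterReality Portal:

    def lamp_rule(light_level):
        # Illustrative IF-THEN-ELSE rule of the kind students build in the activity.
        if light_level < 300:   # IF the light sensor reads a dark room...
            return "lamp ON"    # THEN switch the lamp actuator on
        else:                   # ELSE
            return "lamp OFF"   # keep the lamp off

    print(lamp_rule(120))  # lamp ON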

7. The Learning Scenario

Students will be grouped together, each group containing two or three students. They will then be given several collaborative programming activities to program actuators and sensors, in order to teach them the functionality of embedded systems. The program results will be reflected in the virtual smart home if the students generate syntactically correct rules (Figure 6). The graphical user interface (GUI) also permits collaboration between users, as well as communication through a messaging tool. All of the students’ actions and events will be saved in the repository, to be retrieved in real time.
When the students perform the activities and collaborate with their peers, we expect that there will be variation in their performance: some learners might be keen to contribute, while others might be less inclined to do so. Therefore, the GUI offers a rating tool (shown in Figure 6) that allows collaborators to repeatedly score the quality of each other’s performance.
Upon finishing the tasks, the teachers and students will receive dashboard reports assessing the individuals’ contributions and giving details of the learning outcomes and skills that they demonstrated. Figure 7 displays the scene that will appear to users when a group session ends. The left-hand screen illustrates the quantity of relevant student performance events (data collected by software agents). The right-hand screen illustrates the quality of their performance (data from natural agents).
Also, learners can view their social collaborative skills level in the learning activities as shown in Figure 8. The data is collected from both software and natural agents, and it is examined by the fuzzy logic model to produce a more in-depth evaluation of the students’ skills.
It is important to be able to measure the performance of individual students because it is only by doing so that one can determine whether a student has achieved the desired learning objectives. Such an approach could prove to be highly valuable for teachers when reviewing the learners’ work and then further enhancing the learning activities in VWs. Peer monitoring plays a central role in improving awareness of how students are performing. In addition, the feedback generated by the system can be used to show individual students their weakest areas so that they can work at these and improve their overall performance.

8. Conclusions

We have presented the MixAgent model, a conceptual model that could play a useful role in identifying events in real time, amassing evidence of learning, and appraising student performance in collaborative learning activities conducted in 3D VWs. The model uses fuzzy reasoning as a mechanism to combine natural agents (teachers and students, who intrinsically employ fuzzy reasoning) with artificial agents (software), creating a unified multi-agent platform that improves the overall means of evaluation and yields richer feedback. In this way, the fuzzy logic approach amalgamates the data generated by the agents to infer the learning outcomes that students acquire from their learning tasks.
When the MixAgent model is applied in the InterReality Portal, we anticipate that it will provide superior insight into the performance of individual students. One aim of this work is to evaluate the model and show that combining natural agents and software agents with the fuzzy logic approach can improve the collection of learning evidence. This evaluation is important for advancing the research, and its results will be reported in future work.

Acknowledgments

We are pleased to acknowledge Umm Alqura University, Saudi Arabia, for the funding of a PhD scholarship to the lead author. Furthermore, we wish to thank Anasol Pena-Rios for providing the InterReality Portal virtual environment and for her technical support with it.

Author Contributions

Samah Felemban worked on the theoretical and implementation parts of the proposed MixAgent model, and contributed to writing the manuscript. Michael Gardner and Victor Callaghan are the supervisors of the lead author. They both made significant contributions to the proposed conception, techniques, and models. In addition, they reviewed and edited the manuscript and gave final approval of the version to be published.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Andreas, K.; Tsiatsos, T.; Terzidou, T.; Pomportsis, A. Fostering collaborative learning in Second Life: Metaphors and affordances. Comput. Educ. 2010, 55, 603–615. [Google Scholar] [CrossRef]
  2. Belazoui, A.; Kazar, O.; Bourekkache, S. A Cooperative Multi-Agent System for modeling of authoring system in e-learning. J. E-Learn. Knowl. Soc. 2016, 12. [Google Scholar] [CrossRef]
  3. Felemban, S. Distributed pedagogical virtual machine (d-pvm). In Proceedings of the Immersive Learning Research Network Conference (iLRN 2015), Prague, Czech Republic, 13–14 July 2015; p. 58. [Google Scholar]
  4. Schouten, A.P.; van den Hooff, B.; Feldberg, F. Real Decisions in Virtual Worlds: Team Collaboration and Decision Making in 3D Virtual Worlds. In Proceedings of the International Conference on Information Systems (ICIS 2010), Saint Louis, MO, USA, 12–15 December 2010. [Google Scholar]
  5. Dalgarno, B.; Lee, M.J. What are the learning affordances of 3-D virtual environments? Br. J. Educ. Technol. 2010, 41, 10–32. [Google Scholar] [CrossRef]
  6. Duncan, I.; Miller, A.; Jiang, S. A taxonomy of virtual worlds usage in education. Br. J. Educ. Technol. 2012, 43, 949–964. [Google Scholar] [CrossRef]
  7. Zheng, L.; Huang, R. The effects of sentiments and co-regulation on group performance in computer supported collaborative learning. Internet High. Educ. 2016, 28, 59–67. [Google Scholar] [CrossRef]
  8. De Meo, P.; Messina, F.; Rosaci, D.; Sarné, G.M. Combining trust and skills evaluation to form e-Learning classes in online social networks. Inf. Sci. 2017, 405, 107–122. [Google Scholar] [CrossRef]
  9. Gardner, M.; Elliott, J. The Immersive Education Laboratory: Understanding affordances, structuring experiences, and creating constructivist, collaborative processes, in mixed-reality smart environments. EAI Endorsed Trans. Future Intell. Educ. Environ. 2014, 14, e6. [Google Scholar] [CrossRef]
  10. Weiss, G. Multiagent Systems: A Modern Approach to Distributed Artificial Intelligence; MIT Press: Cambridge, MA, USA; London, UK, 1999. [Google Scholar]
  11. Wooldridge, M.; Jennings, N.R. Intelligent agents: Theory and practice. Knowl. Eng. Rev. 1995, 10, 115–152. [Google Scholar] [CrossRef]
  12. Rosaci, D. CILIOS: Connectionist inductive learning and inter-ontology similarities for recommending information agents. Inf. Syst. 2007, 32, 793–825. [Google Scholar] [CrossRef]
  13. Rosaci, D.; Sarné, G.M. EVA: An evolutionary approach to mutual monitoring of learning information agents. Appl. Artif. Intell. 2011, 25, 341–361. [Google Scholar] [CrossRef]
  14. Sánchez, J.A. A Taxonomy of Agents. Rapport Technique; ICT-Universidad de las Américas-Puebla: Puebla, México, 1997. [Google Scholar]
  15. Rosaci, D.; Sarné, G.M. Efficient Personalization of E-Learning Activities Using a Multi-Device Decentralized Recommender System. Comput. Intell. 2010, 26, 121–141. [Google Scholar] [CrossRef]
  16. Nwana, H.S. Software agents: An overview. Knowl. Eng. Rev. 1996, 11, 205–244. [Google Scholar] [CrossRef]
  17. Durfee, E.H.; Lesser, V.R. Negotiating task decomposition and allocation using partial global planning. Distrib. Artif. Intell. 1989, 2, 229–244. [Google Scholar]
  18. Yadav, R.S.; Singh, V.P. Modeling academic performance evaluation using soft computing techniques: A fuzzy logic approach. Int. J. Comput. Sci. Eng. 2011, 3, 676–686. [Google Scholar]
  19. Albertos, P.; Sala, A. Fuzzy Expert Control Systems: Knowledge Base Validation; UNESCO Encyclopedia of Life Support Systems: Paris, France, 2002. [Google Scholar]
  20. Thompson, K.; Markauskaite, L. Identifying Group Processes and Affect in Learners: A Holistic Approach to. In Cases on the Assessment of Scenario and Game-Based Virtual Worlds in Higher Education; IGI Global: Hershey, PA, USA, 2014; p. 175. [Google Scholar]
  21. Alrashidi, M.; Almohammadi, K.; Gardner, M.; Callaghan, V. Making the Invisible Visible: Real-Time Feedback for Embedded Computing Learning Activity Using Pedagogical Virtual Machine with Augmented Reality. In Proceedings of the International Conference on Augmented Reality, Virtual Reality and Computer Graphics, Ugento, Italy, 12–15 June 2017; Springer: Cham, Switzerland. [Google Scholar]
  22. Gobert, J.D.; Sao Pedro, M.A.; Baker, R.S.; Toto, E.; Montalvo, O. Leveraging educational data mining for real-time performance assessment of scientific inquiry skills within microworlds. JEDM J. Educ. Data Min. 2012, 4, 111–143. [Google Scholar]
  23. Schunn, C.D.; Anderson, J.R. The generality/specificity of expertise in scientific reasoning. Cogn. Sci. 1999, 23, 337–370. [Google Scholar] [CrossRef]
  24. Annetta, L.A.; Folta, E.; Klesath, M. Assessing and Evaluating Virtual World Effectiveness. In V-Learning; Springer: Dordrecht, The Netherlands, 2010; pp. 125–151. [Google Scholar]
  25. Kerr, D.; Chung, G.K. Identifying key features of student performance in educational video games and simulations through cluster analysis. JEDM J. Educ. Data Min. 2012, 4, 144–182. [Google Scholar]
  26. Mislevy, R.J.; Almond, R.G.; Lukas, J.F. A brief introduction to evidence-centered design. ETS Res. Rep. Ser. 2003, 2003, i-29. [Google Scholar] [CrossRef]
  27. Tesfazgi, S.H. Survey on Behavioral Observation Methods in Virtual Environments; Research Assignment; Delft University of Technology: Delft, The Netherlands, 2003. [Google Scholar]
  28. Csapó, B.; Ainley, J.; Bennett, R.; Latour, T.; Law, N. Technological Issues for Computer-Based Assessment. In Assessment and Teaching of 21st Century Skills; Griffin, P., McGaw, B., Care, E., Eds.; Springer: Dordrecht, The Netherlands, 2012; pp. 143–230. [Google Scholar]
  29. Ibáñez, M.B.; Crespo, R.M.; Kloos, C.D. Assessment of knowledge and competencies in 3D virtual worlds: A proposal. In Key Competencies in the Knowledge Society; Springer: Berlin/Heidelberg, Germany, 2010; pp. 165–176. [Google Scholar]
  30. Shute, V.J. Stealth assessment in computer-based games to support learning. Comput. Games Instr. 2011, 55, 503–524. [Google Scholar]
  31. Felemban, S.; Gardner, M.; Callaghan, V. Virtual Observation Lenses for Assessing Online Collaborative Learning Environments. In Proceedings of the Immersive Learning Research Network (iLRN 2016), Santa Barbara, CA, USA, 27 June–1 July 2016; Verlag der Technischen Universität Graz: Graz, Austria, 2016; pp. 80–92. [Google Scholar]
  32. Felemban, S.; Gardner, M.; Callaghan, V.; Pena-Rios, A. Towards Observing and Assessing Collaborative Learning Activities in Immersive Environments. In Proceedings of the Immersive Learning Research Network: Third International Conference (iLRN 2017), Coimbra, Portugal, 26–29 June 2017; Beck, D., Allison, C., Morgado, L., Pirker, J., Khosmood, F., Richter, J., Gütl, C., Eds.; Springer: Cham, Switzerland, 2017; pp. 47–59. [Google Scholar]
  33. Felemban, S.; Gardner, M.; Callaghan, V. An event detection approach for identifying learning evidence in collaborative virtual environments. In Proceedings of the Computer Science and Electronic Engineering (CEEC), Colchester, UK, 28–30 September 2016. [Google Scholar]
  34. Eberly Center for Teaching Excellence, Carnegie Mellon University. How Can I Assess Group Work? Available online: https://www.cmu.edu/teaching/designteach/design/instructionalstrategies/groupprojects/assess.html (accessed on 31 May 2017).
  35. Padhy, N. Artificial Intelligence and Intelligent Systems; Oxford University Press: Oxford, UK, 2005. [Google Scholar]
  36. Pena-Rios, A. Exploring Mixed Reality in Distributed Collaborative Learning Environments. Ph.D. Thesis, School of Computer Science and Electronic Engineering, University of Essex, Colchester, UK, 2016. [Google Scholar]
Figure 1. Fuzzy control system structure.
Figure 2. Mixed Agents Model (MixAgent). Abbreviations: UA = user agent; NA = natural agent.
Figure 3. System architecture.
Figure 4. Proposed fuzzy model for student evaluation.
Figure 5. InterReality Portal—graphical user interface (GUI).
Figure 6. Students’ collaboration in the virtual environment.
Figure 7. Dashboards showing student interactions over time in the learning activity.
Figure 8. Student skill level dashboard.
