Article

An Active Learning Approach to Evaluate Networking Basics

1 Computer Engineering Department, Miguel Hernández University, 03202 Elche, Spain
2 Mathematics and Computer Science Department, University of the Balearic Islands, 07022 Palma de Mallorca, Spain
* Author to whom correspondence should be addressed.
Educ. Sci. 2024, 14(7), 721; https://doi.org/10.3390/educsci14070721
Submission received: 3 May 2024 / Revised: 18 June 2024 / Accepted: 19 June 2024 / Published: 2 July 2024
(This article belongs to the Special Issue Active Teaching and Learning: Educational Trends and Practices)

Abstract

Active learning is a paradigm where students take a more active role in their learning process. In this paper, two team-based evaluation activities were designed for students according to the principles of active learning in order to assess their knowledge, skills, and attitudes. These activities were based on scenarios involving network administration issues, and at a later stage, each team had to give a pitch presentation explaining what they did and how they addressed those scenarios. Each presentation was rated by all students on a peer review basis using a specific construct, which had previously been validated by a panel of experts. The results obtained revealed quite high scores with low variability and acceptable reliability. Additionally, the level of motivation was assessed, indicating that students were highly motivated while undertaking the activities.

1. Introduction

Active learning is a broad concept which has received much interest in recent years. The first thing to note is that active learning is not a matter of learning, but a matter of instruction [1]: instruction shifts from the teacher’s lectures in traditional learning to the students’ interactive and hands-on activities in active learning [2]. This new paradigm has been applied in all education fields [3], where many reports suggest an increase in students’ performance [4], with the results reported in STEM fields being significantly positive [5].
Furthermore, it has been reported that the use of active learning methodologies positively impacts learners’ well-being, related not only to their academic accomplishment, but also to their physical, emotional and social lives [6]. Likewise, it also furnishes them with multiple competences, related not only to technical skills, but also to soft skills [7]. Nonetheless, active learning sometimes faces some degree of resistance for various reasons: students may feel unfamiliar with this approach [8], or they may prefer learning strategies involving lower effort, such as listening to lectures, in spite of the better performance achieved with active learning [9].
One key concept in active learning is digital competence, as using digital tools in active learning contexts contributes to enhanced academic performance, along with improved participation and involvement [10]. Hence, digital competence can help achieve more inclusive and higher quality education [11]. Digital literacy typically refers to ICT skills and ICT usage [12], though other factors also need to be considered, such as safety, problem solving, digital content creation, ethics, communication and collaboration [13].
Digital competence encompasses the set of knowledge, skills, and attitudes that enables and ensures universal access to information [14], and which supports critical and creative usage of ICT and digital media [15]. In this sense, the integration of technology in the classroom for education purposes can help students to achieve digital competence [16] while, at the same time, enabling teachers to free up time to take care of students who need extra support [17]. In a broader sense, digital competence may be seen as the safe and critical use of technology for work, leisure and communication [18], despite its changing, transversal and flexible nature [19]. Digital competence is crucial and strategically important in the process of digital transformation that the world is undergoing [20].
Digital competence is also considered to be a key factor for social inclusion and quality of life, as it empowers citizens to participate in an ever-increasing digitalized society [21]. In this regard, Bilbao et al. presented a review of the level of digital competence in college teachers, stating that the level is moderate and that some areas need to be improved, such as reflective practice and learner empowerment [22]. Similarly, Martín-Párraga et al. analyzed different assessment frameworks for teachers, such as the European DigCompEdu and the Spanish “Marco de Referencia de la Competencia Digital Docente” (MRCDD), concluding that these models need to be enhanced [23].
Saltos et al. claimed that a high percentage of studies devoted to measuring digital competences fell short in terms of the validity and reliability of the measurements obtained [24]. Pérez-Escoda et al. described a study focusing on different areas of digital competence, such as use, learning, and critical thinking, which exhibited significant differences among both areas and countries [25]. Cabrero-Almenara et al. proposed the creation of teaching and counseling plans in order to develop the digital competence required in order to keep up with societal knowledge [26].
Ariz et al. analyzed the acquisition of digital competence at different educational levels, showing that the use of smartphones as learning tools is superseding the use of traditional PCs, with the use of messaging apps for digital learning being increasingly used as part of learning management systems (LMS) [27]. Fernández-Luque et al. presented a study on digital competence, focusing on communication, safety, and problem-solving [28]. López-Belmonte et al. reported a similar study in which the level of study and the type of work was shown to determine the degree of digital competence [29].
Digital competence is suggested to involve different areas according to the framework used. For instance, the DigComp framework proposes five competence areas, such as information, communication, content creation, safety and problem solving, with 21 further competences defined within these competence areas [30]. According to the DigComp framework, it is suggested that the knowledge and skills related to information searching are the most central, followed by those related to safety and communication, whilst those relating to content creation and problem solving are less emphasized [31]. It should be noted that publications about digital competence in general have grown significantly in recent years, reflecting a high level of interest in this field for all educational levels [32].
An effective way to improve digital competence is through the use of active learning, as it has been reported that active methodologies contribute to developing digital competence [33]. The use of digital tools in active learning environments has been stated to help improve academic performance and success rate, as well as participation and involvement [34]. Moreover, academic improvement has been tied to mastering digital competence, specifically with respect to how the information learned is encoded, how the information gained is communicated, and how significant knowledge is built [35].
In this context, team-based learning (TBL) can be defined as “an active learning and small group instructional strategy that provides students with opportunities to apply conceptual knowledge through a sequence of activities that includes individual work, teamwork, and immediate feedback” [36]. Hence, TBL not only focuses on the simple transfer of content, but is also centered on applying the knowledge and skills through problem-solving techniques, both conceptual and procedural [37]. Furthermore, TBL may be understood as a student-centered active learning method, which requires less instruction time than other active learning methodologies [38].
TBL is based on collaborative work in groups of a few students, which can be undertaken either in-class or on-line, or even in a blended fashion [39]. It has been reported that TBL improves academic performance, success rate and engagement level [40], as it allows for a deeper understanding of content. In this way, it can contribute to students being more effectively prepared for assessment, and thus obtaining improved learning outcomes [41]. Moreover, TBL can help students to develop other key skills, such as teamwork, coordination and communication [42], without undermining either knowledge acquisition or the positive attitudes of students [43].
The teaching objective of TBL is basically to teach students how to think, as opposed to how to memorize [44]. This is achieved by means of working collaboratively, where some roles are appointed within a team at their own discretion, such as a spokesperson, a coordinator, a supervisor, or a moderator [45]. In this way, not only are competences related to a specific task developed, but also others related to transversal competences, such as organizational skills, soft skills, or leadership [46]. In addition, learners’ motivation is another key point when adopting TBL strategies [47], as they can act as a driving force to boost academic results [48].
There is evidence that engagement is involved in TBL, as well as in many active learning activities. It has been described not only for STEM education, but also for any other educational area [49]. For instance, the level of engagement has been cited as the driving force of academic improvement in different education fields, such as chemistry [50], business [51], medicine [52], mathematics [53], law [54], or computer science [55]. However, the level of engagement experienced by students while undertaking active learning activities has not been explicitly measured in any of those studies.
Therefore, the aim of this article is to introduce an activity set that addresses an unmet need for engaging learning tools for digital competence, and also to test whether the activities actually meet these requirements. To do so, two team-based activities are used for student evaluation in the context of TBL within a course dedicated to learning the basics of network administration, with the aim of helping students develop digital competence and allowing teachers to measure their level of engagement.
The rest of the article is organized as follows: Section 2 presents the methodology used, where the TBL activities are described, Section 3 displays the results obtained, Section 4 exhibits the discussion about those results, and Section 5 draws the final conclusions.

2. Method

This research has been undertaken in a university course on network administration, dedicated to managing and maintaining configurations of network devices. The aim of this course is to provide quality of service and security to traffic flows traveling around a computer network. In addition, this course is part of an engineering degree at a university, such that students need to obtain a sound background on this topic, as network administration is one of the potential areas they may enter when they graduate and look for a job. The participants were twenty-four fourth-year students pursuing a Telecommunication Engineering degree during the first semester of the 2023/2024 academic year. The sample was composed of 9 women and 15 men, whose ages ranged from 20 to 23.
In order to apply the active learning paradigm through TBL in the evaluation of the lessons studied in this course, two activities were designed for students to work on in groups. It should be noted that students had no previous experience in managing and configuring networks before taking this course, so the tasks could not be overly complex. Hence, these activities were designed for students to work in a collaborative manner. Students were divided into 6 separate groups, each composed of 4 randomly chosen members. All teams had to work independently, so they were free to organize their own strategies, such as working as a whole team or in pairs on a peer review basis.
Each group had to undertake two complex activities, each carried out in a separate session. These activities, or missions, required each group to self-organize in order to become more efficient, as the first group clearing an activity obtained top marks. The ultimate goal of these TBL activities is for students to acquire and reinforce digital competences, such as data literacy, communication, or problem solving. Hence, the expected way to gain digital competence is by undertaking engaging active learning activities, as noted in the introduction, considering that this research is part of an engineering course.

2.1. Team Work 1

The first activity is divided into two similar tasks, whose target is to find a single digit in each case. The introductory plot of this activity is the following: “A meteorite is coming straight towards Earth, and if it hits, there will be no survivors. Only the launch of a rocket can save us, but the person who coded it has just died and has not left any documentation about it. He only had time to write the activation code but he missed the final two digits. We must find them!!!”. Moreover, a royalty-free picture taken from the Pixabay website is attached to immerse students in the game; it is depicted in Figure 1.
The topology layout of the two tasks included within this mission is provided to the learners in the form of a software application called Packet Tracer. This application is owned by Cisco Systems, although its use only requires registration on Cisco’s online learning platform, netacad.com. This piece of software was originally designed for learning the fundamentals of networking, though further functionalities were added at a later stage, such as those related to cybersecurity and the Internet of Things.
Nonetheless, activity 1 does not require the use of Packet Tracer, but it was chosen because it allows network topologies to be designed in a straightforward manner. Furthermore, Packet Tracer is used in the lab sessions of this course. Both tasks included in activity 1 present the same logical network topology, which is exhibited in Figure 2. However, the initial conditions of the two tasks differ, which leads students to undertake different calculations to solve each task.
In both tasks within activity 1, students are asked to carry out the following steps, focusing on the basic concepts of network subnetting:
(a)
Find out the appropriate subnetwork addressing scheme.
(b)
Assign the IP addressing to the appropriate devices within each subnet, according to the instructions given in each task.
(c)
Select the digits requested in each of the items presented.
(d)
Answer the final question related to those digits.
The logical topology provided for activity 1 exhibits 7 routers connected in a daisy-chain manner. The layout of these links is a string of Wide Area Network (WAN) links with the shape of an inverted C, though those WAN connections are irrelevant for the resolution of the task. Each of these routers has a Local Area Network (LAN) hanging off it, which is composed of a switch and a pair of end devices. The identifiers for the routers and switches have been randomly assigned, although the identifiers for the two end devices within each LAN always contain either an even or an odd number.
Regarding network addressing, the last assignable address in each subnetwork must be assigned to the router interface within that LAN. Then, the second-to-last assignable address must be assigned to the management interface of the switch. In turn, the third-to-last address must be assigned to the even end device, whilst the fourth-to-last address must be assigned to the odd end device.
Additionally, a pool of questions is proposed in order to select a specific digit in each case, which in turn leads to the answer to the final question. The questions for both tasks within activity 1 are as follows:
(i)
Last digit of the 4th octet in unicast IP address of PC2.
(ii)
First digit of the 4th octet in broadcast IP address of PC4.
(iii)
Last digit of the 4th octet in broadcast IP address of PC8.
(iv)
First digit of the 4th octet in unicast IP address of PC1.
(v)
Last digit of the 4th octet in unicast IP address of R4’s LAN.
(vi)
Last digit of the 4th octet in unicast IP address of S6.
(vii)
Last digit of the 4th octet in unicast IP address of PC9.
(viii)
Last digit of the 4th octet in unicast IP address of PC7.
Finally, the final question asks which of the selected digits is repeated the most among the answers to the eight questions proposed above.
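As a simple illustration of this final step, the following Python sketch counts the occurrences of a hypothetical set of eight answer digits and reports the most repeated one; the actual digits come from the tables presented in the subsections below.

from collections import Counter

# Hypothetical answers to the eight questions (the real digits come from Tables 3 and 7).
digits = [5, 2, 5, 0, 5, 4, 2, 5]
digit, frequency = Counter(digits).most_common(1)[0]
print(digit, frequency)   # -> 5 4, so 5 would be the missing digit in this example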
As stated before, both tasks within activity 1 share the same network topology, the same proposed questions and the same steps to reach the final question. However, they differ in three initial premises, which change the operations to be undertaken in order to solve each task. These differences are the subnet mask assigned to the available block of network addresses, the order in which the subnetworks are assigned, and the criterion used to size the necessary subnetwork addresses.
In the first task, the IP address block is 172.16.54.0/26, subnetting must be assigned clockwise, beginning from the LAN located on top left corner, and the minimum number of bits needs to be used to accommodate the end devices shown in each LAN.
In the second task, the IP address block is 172.16.54.0/24, subnetting must be assigned counterclockwise, beginning from the LAN situated just on the right hand side, which is the one hosting the servers, and the minimum number of bits needs to be used according to the amount of LANs exhibited.

2.1.1. How to Carry out Activity 1—Task 1

To start with, the block of network addresses 172.16.54.0/26 is proposed for this task. The goal is to perform subnetting using the minimum number of bits needed to accommodate the end devices within every LAN according to the network topology and considering the instructions given above. Furthermore, the sub-blocks must be assigned clockwise, starting from the LAN located in the top left corner.
The first step is to find out the minimum number of bits needed to cover all the addresses required in each LAN. Each LAN needs 4 addresses, as it contains one router, one switch and two end devices. As will be seen at a later stage, it is not possible to assign only 2 bits, even though 2^2 = 4, because one of those addresses has to be reserved to identify the network address, whereas another must be reserved to identify the broadcast address. Therefore, it is necessary to use at least 3 bits for the host part of each IP network address, as 2^3 = 8. Hence, as 3 bits are needed for the host part of each network address, its network part extends to cover 32 - 3 = 29 bits, as an IP address is composed of 32 bits overall. Thus, the network part covers 29 bits and the host part includes 3 bits.
The original network mask is 26 bits long, so it is necessary to dedicate 29 - 26 = 3 bits to carry out the subnetting in order to assign the least amount of bits to the host part. This provides 2^3 = 8 available subnets, which is greater than or equal to the number of subnets needed, namely 7, thus the 8th subnet will not be used. On the other hand, each of those subnets will have a subnet mask of 26 + 3 = 29 bits, whose counterpart in dotted decimal notation is 255.255.255.248. In summary, each of the 7 subnets has 29 bits for the network part and 32 - 29 = 3 bits for the host part.
Therefore, the first subnetwork address is the initial block with a subnet mask of /29, and the successive ones are found by adding 2^(32-29) = 2^3 = 8 to the fourth octet of the predecessor ones. Table 1 exhibits the subnetwork addresses resulting from these operations, where each subnet is assigned to a LAN, starting with the one connected to the router located in the top left corner, and moving clockwise, as stated in the instructions.
After that, the second step is geared to assigning the IP addresses to the devices within each subnet, according to the instructions given above. It should be noted that the first address in each subnet is the subnetwork address, which identifies the whole network, whereas the last address is the broadcast address, which allows traffic to be sent to all devices within that network. The usual way to assign IP addresses within a network is to exclude those two addresses from the pool of assignable addresses, thus there is a total of 2^(32-29) - 2 = 2^3 - 2 = 6 host addresses available to be assigned to devices within each subnet. Table 2 displays the IP address assignment of the fourth octet for the devices within all LANs, according to these operations.
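For illustration purposes, the subnetting and address assignment described above can be reproduced with a short Python sketch based on the standard ipaddress module (a minimal sketch, assuming that the "last address" to be assigned refers to the last usable host address, i.e., excluding the broadcast address):

import ipaddress

# Block proposed for task 1; each LAN needs 4 addresses (router, switch, 2 PCs),
# so 3 host bits are required and the /26 block is split into /29 subnets.
block = ipaddress.ip_network("172.16.54.0/26")
subnets = list(block.subnets(new_prefix=29))        # 2^(29-26) = 8 subnets, only 7 are used

for lan, net in enumerate(subnets[:7], start=1):
    hosts = list(net.hosts())                        # 2^3 - 2 = 6 usable addresses
    router, switch = hosts[-1], hosts[-2]            # last and second-to-last usable addresses
    even_pc, odd_pc = hosts[-3], hosts[-4]           # third- and fourth-to-last usable addresses
    print(f"LAN {lan}: {net} R={router} S={switch} even={even_pc} odd={odd_pc}")

Running this sketch should reproduce the subnetwork addresses and fourth-octet assignments summarized in Tables 1 and 2.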
Afterwards, the third step involves a battery of eight questions proposed in order to extract information related to the digits used in the IP addressing, which are used as clues when solving the final question. All those questions ask for a particular digit within the fourth octet, in decimal format, of a given IP address from the table shown above. Table 3 exhibits these questions, along with the fourth octet of the IP address, in decimal notation, for each device referred to, and the specific digit requested in each case.
Finally, the final question refers to the collection of digits selected in order to find the most repeated one. Table 4 displays the frequency of each of the chosen digits in the different questions proposed, along with the most selected one, which is actually the solution of this first task of activity 1. In this sense, the sign ✓ represents the answer to each question and the sign ⊚ identifies the most repeated answer. It should also be noted that the final answer is not found until the last question proposed is solved, as different digits are repeated across the questions.
Actually, the most selected digit in activity 1—task 1 is 5, as it is shown in the previous table. Therefore, this is the first digit expected as part of the solution for activity 1.

2.1.2. How to Carry out Activity 1—Task 2

This task is similar to the previous one, although some small changes in the instructions lead to different calculations and, therefore, to different results. The first change is the block of network addresses used to carry out this task, which is now 172.16.54.0/24. Another change is the assignment of sub-blocks, which must be assigned counterclockwise, starting from the LAN with servers, located on the right hand side of the design. The final change is to perform subnetting using the minimum number of bits needed to cover the number of LANs shown in the topology, whereas the rest of the instructions are the same as in task 1.
Starting with the subnetting, 3 bits are also needed to undertake it, as this results in 2^3 = 8 subnets. Hence, the subnet mask will have 24 + 3 = 27 bits, whose counterpart in dotted decimal notation is 255.255.255.224. Thus, the network part includes 27 bits, whereas the host part contains 32 - 27 = 5 bits. Hence, the first subnetwork address will be that of the initial block with a subnet mask of /27, and the successive ones will be found by adding 2^(32-27) = 2^5 = 32 to the fourth octet of the predecessor ones. Table 5 shows the subnetwork addresses after carrying out these operations, where each subnet is assigned to a LAN, beginning with the one connected to the router located on the right end, and moving counterclockwise, as described in the instructions.
Regarding the IP address assignment in each network, it should be noted that each subnet has 2^(32-27) - 2 = 2^5 - 2 = 30 addresses available to be assigned to the interfaces of interior devices. Table 6 displays the IP address assignment of the fourth octet for the devices within all LANs according to the aforesaid operations.
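The sketch shown for task 1 can be reused for this task by changing the address block and the target prefix; an illustrative variant is shown below.

import ipaddress

# Task 2 variant: the /24 block is split into /27 subnets (2^3 = 8 subnets, increment of 32).
block = ipaddress.ip_network("172.16.54.0/24")
for net in list(block.subnets(new_prefix=27))[:7]:
    print(net, "-> router interface:", list(net.hosts())[-1])   # last usable address per LAN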
In turn, Table 7 displays the questions proposed, along with the fourth octet of the IP address of each device referred to and the specific digit requested.
Finally, the final question relates to the different digits selected in order to find the most repeated one. Table 8 exhibits the frequency of each of the selected digits in the different questions proposed, along with the most selected one, which is actually the solution of this second task of activity 1. In this context, the sign ✓ indicates the answer to each question and the sign ⊚ identifies the most repeated answer, which is not found until the last question is solved, as different digits are repeated across the questions.
It is clear that the most selected digit in activity 1—task 2 is 9, as it is seen in the previous table. Hence, this is the second digit expected as part of the solution for activity 1. Therefore, the pair of digits missing in activity 1 are 5 and 9.

2.2. Team Work 2

The second activity is the continuation of the two tasks undertaken in the first activity. However, it is technically more demanding, as it requires more technical skills to clear it. The introductory plot of this activity is the following: “We have already obtained the two missing digits in order to launch the rocket to save us from the meteorite before it is too late. However, we need to establish communication with the launching pad from our premises. We must get it!!!”. As in the previous case, a royalty-free picture taken from the Pixabay website is attached to get students into the mood of the mission; it is exhibited in Figure 3.
The network topology related to this mission is shown in Packet Tracer, which presents a logical grid of routers. In more detail, the dimensions of this grid are 5 × 5, meaning that there are 5 rows and 5 columns, with a router located at the intersection of each row and column. This layout presents 25 routers, which are identified from 0 to 24. This kind of grid might be compared to the shape of a k-ary n-cube, except for the fact that there are no wraparound links in this structure.
Regarding the allocation of routers, it should be noted that Router0 is located in the bottom left corner, whilst Router24 is situated in the top right corner. Hence, there is a PC connected to Router0, which simulates the headquarters premises, and on the other hand, there is a Server connected to Router24, which represents the launching pad. With respect to the rest of the routers, those with lower identifiers are situated closer to the former, whereas those with higher identifiers are located nearer to the latter. However, the layout of all those routers within the grid is not clearly established, meaning that the different router identifiers are distributed around the grid with no definite pattern.
Additionally, the routers in the topology may only be connected by following the lines of the grid, meaning that links are only allowed between direct neighbors within the grid diagram, and no other interconnection is permitted. Hence, this logical grid simulates the internetworking paths available between a source and a destination, which in this case are represented by the PC and the Server, respectively. Figure 4 displays the logical layout proposed.
It should be noted that, in the networking field, the IP address of a device within a logical network topology is commonly cited only with a dot and the value of its last octet, such as “.x”, as long as the whole network address is also quoted. The goal of this technique is to save space in the picture, as the first octets of the IP addresses for all devices within a common network are the same. This is the case in both the lower left corner and the upper right corner of this picture, where the IP addresses of the devices are formed with the first three octets associated to the corresponding network address and the last octet assigned to the appropriate device, because the subnet mask is /24. For instance, the IP address of the PC in the lower left corner is 10.0.0.3/24, whereas the IP address of the Server in the upper right corner is 10.0.24.3/24.
However, the layout presented to the students was not as straightforward, because the grid was tilted and skewed a bit, thus resulting in a system of skew coordinates, as shown in Figure 5. This type of perspective made things a bit harder for students to clearly visualize the scenario, as the references to top, bottom, left and right sides in the picture were not represented in the usual manner.
Furthermore, the type of routers deployed throughout the grid is not uniform, as different models of routers have been used. This is not a trivial issue, as links must be established between pairs of neighbor routers within the grid, and the type of interface available in a particular router determines the type of link established. In other words, a router within the grid that is part of the path from source to destination needs to set up two links, namely one to its predecessor and one to its successor along the path, which involves the setup of different LAN or even WAN connections.
Finally, a further condition was imposed on the path construction from source to destination, as it was requested to find a path passing through all even routers on the way from the PC to the Server. Hence, it was not compulsory to find the shortest path joining both end points and including all even routers, though it was really convenient to search for it, as the lower the number of routers, the less time would be involved in configuring them. In this way, a path going through all the routers distributed along the structure would be valid, although it would represent a suboptimal solution.
In summary, the resolution of this activity may be divided into six steps:
(a)
Find the shortest possible path from source to destination where all even routers are included.
(b)
Set up the interfaces for all routers belonging to that path.
(c)
Identify the networks involved throughout that path.
(d)
Assign the proper addresses to the interfaces of each router.
(e)
For each router within that path, configure the IP addressing in the interfaces involved and the static routing needed. Moreover, it is recommended to configure the hostname of each router for clarity purposes.
(f)
Execute a tracert command from the source PC to reach the destination Server successfully, where the different hops to move end to end will be shown.

How to Carry out Activity 2

To start with, the first step requires spotting all routers with even identifiers and trying to find the shortest path between source and destination, namely between the PC and the Server, although longer paths are also permitted. Router identifiers range from 0 all the way to 24, where the former is attached to the source and the latter is attached to the destination, hence the path must traverse all remaining routers with even identifiers, which are just eleven. Figure 6 displays all even routers within the logical grid layout proposed.
Many combinations have been found to move end to end, where the simplest one goes through all twenty-five routers, although it is possible to obtain some combinations with only nineteen routers, such as the one shown in Figure 7.
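As an illustration of this first step, the following Python sketch checks whether a candidate path over the grid is feasible: it verifies that consecutive routers are direct neighbors and that every even-numbered router is visited. The mapping of router identifiers to grid coordinates is hypothetical (row-major), since the layout shown to the students follows no definite pattern, so it only serves to show the kind of check a team could perform.

# Hypothetical mapping of router id -> (row, column); the real one is read off the diagram.
positions = {r: (r // 5, r % 5) for r in range(25)}

def valid_path(path, positions, required):
    """True if consecutive routers are grid neighbors and all required ids are visited."""
    for a, b in zip(path, path[1:]):
        (ra, ca), (rb, cb) = positions[a], positions[b]
        if abs(ra - rb) + abs(ca - cb) != 1:   # only horizontal or vertical hops allowed
            return False
    return set(required).issubset(path)

even_routers = [r for r in range(25) if r % 2 == 0]

# A snake-like path visiting every router (25 hops) is always valid, although suboptimal.
snake = []
for row in range(5):
    ids = [row * 5 + col for col in range(5)]
    snake += ids if row % 2 == 0 else ids[::-1]

print(len(snake), valid_path(snake, positions, even_routers))   # -> 25 True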
After that, the second step is to set up the interfaces within this path. There are some key points to take into consideration for this task. To start with, the routers used in the topology layout are all commercial Cisco routers, although different models have been deployed across the grid. Hence, as each model was released at a different time, the types of Ethernet interfaces implemented differ as well, and the interface names also depend on the model.
On the other hand, there is a certain degree of freedom to choose the kind of interface to link two given routers, as well as the particular interface selected for each link. Regarding the former, it refers to Ethernet, Serial or any other guided media, and furthermore, Ethernet interfaces might be of different kinds, such as FastEthernet or GigabitEthernet, depending on the model of router chosen. With respect to the latter, it relates to the order of the port identifier selected, which can be port 0, port 1, and so on. Additionally, the interface names also depend on the model of router selected, as they can be just 0, or 0/0, or 0/0/0. In summary, all those features need to be considered when configuring each router.
The easiest way to link routers through guided connections in Packet Tracer is by means of Ethernet ports, regardless of the speed they work at, as those types of ports are available in all commercial routers within the application. There are other kinds of guided interfaces, offered as spare ports in those routers, such as Serial or Fiber ones. However, they need to be installed as external network modules, which requires first switching the device off, then inserting the appropriate network card into the proper slot in the device, and finally switching the device back on. Additionally, unguided connections are also available in Packet Tracer, such as Wi-Fi or 5G, though this activity specifically asks for guided connections.
Hence, Ethernet ports are preferably used to build up the links throughout the network topology, although the specific type of Ethernet port depends on the model of router. Furthermore, two interfaces are needed per router in order to build up the sequential path from source to destination, where one of the ports points to its predecessor in the path and the other one points to its successor. Therefore, the fastest way to obtain a sequential path in Packet Tracer is to use two Ethernet interfaces per router, as no extra network cards are required.
Moreover, two types of Ethernet cables are available in Packet Tracer, such as copper straight-through and copper cross-over, where the former is depicted as continuous black lines and the latter is exhibited as discontinuous black lines. If both ends of a link are GigabitEthernet ports or ports with higher speed, then the former can be used, no matter which device is located at each end. However, if any of the ends of a link is a FastEthernet port or a plain Ethernet port, then the latter must be used when interconnecting two devices of the same kind, or if an end device is connected to a router, whereas the former is used to connect a switch to either an end device or a router. In order to simplify the activity, all Ethernet interconnections between any pair of routers are going to be considered as copper cross-over cables.
However, special attention must be paid when dealing with two specific kinds of routers. First, the 2620XM model only presents one Ethernet port by default, so an additional network card needs to be installed. Different options are offered, although the most appropriate one in this case is the NM-1FE-TX, which is a network module with one FastEthernet interface for twisted pair. Second, the 819HGW model presents four Etherswitch ports, which are fit for layer 2 connections, although they are not suitable for establishing links in a daisy chain fashion. Hence, its only two guided interfaces fit for establishing a sequential path are a GigabitEthernet port and a Serial port, and consequently, the router on the other side of the link with the Serial port must have a Serial port as well.
In summary, Table 9 displays the types of interfaces available in each router within the sequential path. Nonetheless, the interface identifiers shown for each router are indicative, although they need to be taken into consideration when configuring the interfaces of each router. These interfaces have been assigned in an ordered manner, such that the one with the lower identifier points to the predecessor router and the following one points to the successor router. However, this rule of thumb has been broken when a new interface was added, such as in the 2620XM model, or when a serial interface was needed to connect to the serial interface of an 819HGW router located as the direct predecessor.
Figure 8 shows the links used to deploy the sequential path, though the interface identifiers are not shown. It should be noted that, in Packet Tracer, the Ethernet links exhibited in both the source LAN and the destination LAN are considered as copper straight-through cables, which are drawn as continuous lines, whereas the Ethernet links displayed between any pair of routers are considered as copper cross-over cables, which are displayed as dotted lines. This distinction is not necessary when dealing with GigabitEthernet links, although it is needed when dealing with FastEthernet or Ethernet links. However, all connections between routers have been presented as copper cross-over cables just for clarity purposes. On the other hand, Serial links have been depicted as red lightning-bolt lines, without specifying which end establishes the clock rate needed for these links.
Afterwards, the third step is to identify each of the networks involved along the whole path. In order to standardize the way of assigning the IP addressing for the particular network defined by the connection of any pair of routers, a definite addressing scheme has been applied. In this way, if a connection is horizontal, then the IP network address is “10.left.right.0/24”. In this case, ‘left’ indicates the identifier of the router located on the left end of the link, while ‘right’ indicates the identifier of the router situated on the right end of that link. Analogously, if the connection is vertical, then the IP network address is “10.up.down.0/24”. In that case, ‘up’ and ‘down’ refer to the identifiers of the routers located on the upper and lower ends of that link, respectively.
In order to summarize the IP addressing of all the networks involved, Figure 9 represents the different routers and networks within the grid. Specifically, each router identifier is highlighted in yellow, whereas the different network addresses between any pair of routers within the grid topology are exhibited as well.
Additionally, Figure 10 presents the path chosen to move from source to destination with the different routers being traversed, along with the network addresses assigned to each of the networks between any pair of routers. Therefore, the red line stands for the path selected to move across the grid from router 0, in the lower left corner, to router 24, in the upper right corner.
At this point, the fourth step is dedicated to assigning the corresponding IP address to all interfaces involved in the path. In order to simplify the procedure, a definite scheme has been implemented to assign the fourth octet within each of the networks shown above, which depends on whether the link defining a network is horizontal or vertical. If the link is horizontal, the last octet on the left end of that link will be 1, while it will be 2 on its right end. On the other hand, if the link is vertical, the last octet on the upper end of that link will be 1, while it will be 2 on its lower end.
For instance, Figure 11 exhibits an example to assign IP addresses to the interfaces of both links involved. Focusing on the vertical link in the picture, router 49 is located on its upper end, and router 50 is located on its lower end, meaning that the network joining both routers has the network address 10.49.50.0/24, according to the IP addressing scheme proposed above. Hence, the IP address of the vertical link of router 49 is 10.49.50.1/24, while the IP address of the vertical link of router 50 is 10.49.50.2/24.
On the other hand, taking the horizontal link in the picture, router 50 is situated on its left end, whilst router 51 is situated on its right end, meaning that the network connecting both routers has the network address 10.50.51.0/24, according to the IP addressing scheme described above. Therefore, the IP address of the horizontal link of router 50 is 10.50.51.1/24, while the IP address of the horizontal link of router 51 is 10.50.51.2/24.
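This addressing convention can be captured by a small helper; the sketch below (an illustrative Python function, not part of the original material) derives the network address and both interface addresses for a link, where the first argument is the router on the left end of a horizontal link or the upper end of a vertical one.

def link_addressing(first, second):
    """IP addressing for the link joining two neighbor routers within the grid."""
    network = f"10.{first}.{second}.0/24"
    first_iface = f"10.{first}.{second}.1/24"    # left or upper end of the link gets .1
    second_iface = f"10.{first}.{second}.2/24"   # right or lower end of the link gets .2
    return network, first_iface, second_iface

# Examples from Figure 11: vertical link 49 (upper)-50 (lower), horizontal link 50 (left)-51 (right).
print(link_addressing(49, 50))   # ('10.49.50.0/24', '10.49.50.1/24', '10.49.50.2/24')
print(link_addressing(50, 51))   # ('10.50.51.0/24', '10.50.51.1/24', '10.50.51.2/24')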
Once the addressing scheme is completed, the routers being part of the sequential path have to be configured. This can be performed in a systematic way for each router, as all routers need to have the following items configured:
  • Optionally, configure its hostname.
  • Configure the IP address on each interface involved in the sequential path.
  • Configure static routing to reach source and destination hosts by quoting the next hop address.
The relevant configurations for all routers involved in the shortest path proposed are presented in Appendix A.
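Since every router in the path is configured with the same three items, the configuration blocks listed in Appendix A can be generated systematically. The following Python sketch prints an IOS-style block equivalent to the Router0 configuration shown in Appendix A; interface names, addresses and routes for the remaining routers depend on their model and position in the path.

def router_config(hostname, interfaces, static_routes, mask="255.255.255.0"):
    """Build an IOS-style configuration block: hostname, interface addressing and static routes."""
    lines = [f"hostname {hostname}"]
    for name, address in interfaces:
        lines += [f"interface {name}", f" ip address {address} {mask}", " no shutdown"]
    for network, next_hop in static_routes:
        lines.append(f"ip route {network} {mask} {next_hop}")
    lines.append("end")
    return "\n".join(lines)

# Router0: LAN interface towards the source PC and path interface towards its successor.
print(router_config("Router0",
                    [("fa0/0", "10.0.0.1"), ("fa0/1", "10.0.2.1")],
                    [("10.0.24.0", "10.0.2.2")]))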
After the configuration has been carried out in all routers involved, the last part is to execute a ping from the PC to the Server in order to check whether there is end-to-end connectivity, where the whole path established in Figure 12 is traversed.
Alternatively, the execution of the tracert command from the command line interface (CLI) of the PC to reach the IP address of the Server displays as many hops as the number of routers included in such a sequential path, as shown in Figure 13.

3. Results

These TBL activities were scheduled in three sessions overall. Basically, during the first session, all teams had to undertake both tasks included in activity 1, and at the end of that session, each group had to fill in a small report describing the results they obtained. Then, the second session was dedicated to carrying out activity 2, where another small report had to be filled in with the results achieved at the end of that session. Additionally, in the third session, each team had to make a pitch presentation in order to present what they did and how they did it in both activities.
Furthermore, after each presentation was made, all students had to evaluate the work of the corresponding team with a specific construct on a peer review basis as an active evaluation method. This construct was composed of two items related to the first task of activity 1, another two items for the second task of activity 1, and four items for activity 2. The items or questions proposed are displayed in Table 10.
The questions of this construct were assessed by a panel of four experts, where two dimensions were evaluated, namely the construction and the clarity of each item. Table 11 exhibits the validity of the experts’ judgment, calculated by means of Aiken’s V test on a four-point Likert-type scale, whose values are 1 for strongly disagree, 2 for disagree, 3 for agree and 4 for strongly agree.
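For reference, Aiken’s V for a single item is the sum of the distances of each judge’s rating from the lowest category, divided by the number of judges times the number of scale steps. A minimal Python sketch with hypothetical ratings from the four judges is shown below; with this definition, for instance, the average construction rating of 3.781 reported in the Discussion corresponds to V = (3.781 - 1)/3 ≈ 0.927.

def aikens_v(ratings, low=1, high=4):
    """Aiken's V = S / (n * (c - 1)) for one item, on a scale from 'low' to 'high'."""
    s = sum(r - low for r in ratings)          # S: sum of distances from the lowest category
    return s / (len(ratings) * (high - low))   # n judges, c - 1 = high - low scale steps

# Hypothetical clarity ratings given by the four experts to one item.
print(round(aikens_v([4, 4, 4, 3]), 3))   # -> 0.917, above the 0.87 cutoff used here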
Regarding the results obtained by the students, these were collected with a construct containing the eight questions previously assessed by the panel of experts, each of which had to be rated individually on a five-point Likert-type scale. Therefore, the possible grades for each question ranged from 1 to 5, where 1 was assigned to strongly disagree, 2 to disagree, 3 to neither agree nor disagree, 4 to agree, and 5 to strongly agree. It should be noted that each country has its own grading system, and so does Spain, where academic marks go from 0 to 10 [56,57]. Hence, the marks obtained by each group were multiplied by 2 in order to adapt those scores to the Spanish grading system, because the top mark in the construct is 5, while it is 10 in the Spanish grading system. Table 12 displays the most common central tendency and dispersion statistics applied to the outcome obtained.
Furthermore, the reliability of these results has been measured with Cronbach’s alpha, as shown in Table 13.
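For completeness, Cronbach’s alpha can be computed directly from the matrix of individual ratings; the sketch below uses NumPy and randomly generated five-point ratings purely as placeholder data.

import numpy as np

def cronbach_alpha(scores):
    """scores: 2-D array with one row per respondent and one column per item."""
    k = scores.shape[1]
    item_variances = scores.var(axis=0, ddof=1)       # variance of each item
    total_variance = scores.sum(axis=1).var(ddof=1)   # variance of the summed scale
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# Placeholder data: 24 students rating the 4 items related to activity 1 on a 1-5 scale.
rng = np.random.default_rng(0)
ratings = rng.integers(1, 6, size=(24, 4)).astype(float)
print(round(cronbach_alpha(ratings), 3))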
Additionally, at the end of the session dedicated to presentations, another construct was presented to the students in order to measure their level of engagement during both activities. This was performed through the ISA engagement scale, composed of nine standard questions which are evenly distributed among three dimensions, namely intellectual, social, and affective [58]. Those questions are standard in the ISA engagement scale, so there is no need for them to be validated [59]. Each question or item is rated on a seven-point Likert scale, such that average scores above 6, the value corresponding to agree in the ISA engagement scale, denote a high level of engagement for each dimension and overall.
Table 14 shows the dimensions evaluated in the ISA engagement scale, along with their corresponding items. On the other hand, Table 15 displays the statistics corresponding to each dimension and overall.
In addition, the reliability of these results has been assessed with Cronbach’s alpha, as seen in Table 16.

4. Discussion

The evaluation of these activities was performed through an active learning approach, which required the use of three different constructs. First of all, a construct with a proposal of questions was sent out to a panel of four experts for them to judge the appropriateness of those questions. This construct originally had eight questions, half of which were dedicated to assessing activity 1, while the other half were devoted to activity 2. Furthermore, as activity 1 was composed of two tasks, half of the questions assigned to activity 1 were associated with task 1, whereas the other half were linked to task 2.
Each of those questions was assessed from two different points of view, also known as dimensions, namely the construction of the question and its clarity. This construct to evaluate the validity of the proposed questions was rated on a four-point Likert-type scale, and the average scores obtained in each of the dimensions established were 3.781 for construction and 3.843 for clarity, which resulted in an overall average of 3.812. Then, those values were used to calculate Aiken’s V in order to check the validity of the construct, which accounted for 0.927 and 0.948 for each dimension, respectively, and 0.938 overall.
Hence, as these values are higher than the cutoff mark proposed by Aiken, which was 0.87 [60], the construct proposed was validated and no question had to be dropped. Other common cutoff marks in the literature are 0.70, quoted by Charter [61], or 0.50, cited by Cicchetti [62], which allow for more discrepancy among judges, though in this case, Aiken’s cutoff was adopted, as it is more restrictive. Therefore, the questions proposed were considered to be valid to measure the results obtained by the students.
Once the construct was validated by the panel of experts, it was ready to be presented to the students, which took place in the session dedicated to pitch presentations. Students had to rate the questions within the construct for each presentation, where each question was rated on a five-point Likert-type scale on a peer review basis. The central tendency statistics obtained overall reached an average of 4.33, along with a mode of 4 and a median of 4 as well. Considering that the value 4 represents “Agree”, the results obtained show that students agreed with the activities.
However, focusing on the values obtained for each activity, the first one received better marks than the second one, most likely due to the difficulty of the latter, as commented by the students as well. Regarding the dispersion statistics, the values related to the first activity show lower variability than those of the second one, whereas the overall values are closer to the second one than to the first one. This also occurs with the coefficient of variation, although the overall value of 0.16 is considered to indicate low to moderate variability of the data, which is acceptable.
Furthermore, with respect to reliability, Cronbach’s alpha was 0.701 for activity 1, 0.720 for activity 2, and 0.719 overall. All values were slightly above 0.7, which is the benchmark for an acceptable value [63], so the values obtained denote internal consistency [64], and they also suggest a high correlation between the results obtained in both activities.
Finally, the assessment of the level of engagement provided values above 6 for all dimensions, which is considered the cutoff mark for high engagement [65]. Actually, the intellectual engagement attained was 6.60, whilst the social engagement was 6.63, and the affective engagement was 6.64, thus leading to an overall engagement of 6.62. Hence, these results indicate that the engagement level is high, as the value of 6 represents “Agree”. Furthermore, the dispersion statistics were quite low, leading to coefficients of variation around 0.1, which is taken as low variability of data.
Additionally, the reliability of the construct for measuring the level of engagement resulted in a value of 0.864 for the intellectual dimension, 0.957 for the social one, and 0.871 for the affective one, which are located in the area between good and excellent internal consistency of data. Moreover, the results for the overall construct presented a value of 0.731, which is considered as acceptable.
In summary, the results obtained meet the goal of presenting attractive digital competence learning tools and verifying their effectiveness. Regarding the outcome obtained in the peer review assessment, this is clearly linked to the digital competence objective, as communication, collaboration, and problem solving are key aspects in the completion of the activities proposed. With respect to the outcome related to the level of engagement, the high average scores on the ISA engagement scale are relevant, as they indicate that the activities are indeed engaging.
As a side note, with regard to Cronbach’s alpha, it makes more sense to calculate it for the peer review assessment, as some students may perform better than others, whereas it is much less relevant for the engagement level assessment, since a low value could simply imply that some participants scored consistently high on some items while scoring consistently low on others, while the activities should cause high levels of engagement among all participants. Nonetheless, the same statistical treatment has been applied to the results of both the peer review assessment and the engagement level assessment for methodological reasons.
It should be noted that, at the end of Section 1, it was stated that engagement is involved when applying TBL in any educational area. In this regard, some recent references from the literature were provided where the level of engagement is defined as the driving force of academic performance in different fields, such as chemistry [50], business [51], medicine [52], mathematics [53], law [54], or computer science [55]. However, none of those references offer any quantitative measurement of the level of engagement involved, hence it is not possible to make comparisons with those previous research papers.
Moreover, there is no control group in this research, as each group was free to become organized independently from the rest. The setup of a control group may be taken into account for future Team-Based Learning activities in order to be able to better establish the effect of methodology on the students’ commitment and performance, as the results obtained could simply be related to their personal characteristics.
Furthermore, the two activities proposed can be approached in different ways, so the optimal solutions for both activities have been given in detail in Section 2 in order to facilitate the reproducibility of such solutions. However, as happens with many engineering problems, there may be more than one way to solve them, where some of those ways could be optimal, whereas others could be suboptimal. On the other hand, many mathematical problems usually have only one solution, even though there may be alternative ways to reach it.
Focusing on the two tasks contained in activity 1, they are based on the mathematical background behind network subnetting, so they both have a unique solution. Hence, each step within both tasks is solved by applying the appropriate arithmetic calculations, though there are different approaches to complete them.
Therefore, the solution proposed in Section 2 is the optimal one, although there are alternative solutions, which may be suboptimal. In this sense, during the pitch presentations made by the different teams, four groups preferred the use of powers to obtain the required network addresses, while one group chose to use multiplications, and another one performed it by means of sums. Eventually, all groups agreed that the use of powers is the optimal way, though some of the groups felt more confident using alternative methods such as multiplications or sums instead.
On the other hand, focusing on activity 2, it is based on engineering, so different solutions may be designed in order to solve the issue proposed. Hence, the first two steps of activity 2 allow for different solutions because they are related to the engineering design of the solution. However, the other steps are related to mathematics, thus the same results will apply for the same design, though these results may be obtained in different ways.
For instance, regarding the first step, the shortest possible path is preferred, as it involves a smaller number of hops along the path. However, there are different alternative paths with the minimum length of 19 hops, which were mentioned during the pitch presentations. Additionally, other solutions with 20 and 21 hops were also proposed, though they are suboptimal.
With respect to the second step, most of the groups chose to use the lowest-numbered Ethernet interfaces, although one group selected the highest ones. Nonetheless, this does not affect the optimality of the solution in any way. Moreover, one group chose to use Serial interfaces with the 2620XM router, though that solution is suboptimal because any Ethernet interface is faster than a Serial one.

5. Conclusions

In this paper, two TBL activities were proposed in the context of a course within an engineering degree at university. The course was dedicated to learning the basics of network administration, and two activities were designed to be undertaken in groups; in turn, each team had to present their work to the rest of the students as a pitch presentation. After each presentation was completed, all students had to assess the work carried out by the corresponding team with a construct composed of eight questions or items, which had to be rated on a five-point Likert-type scale. Half of the questions within the construct were related to the first activity, whereas the other half were bound to the second activity. Furthermore, after finishing all presentations, a further construct was delivered to the students in order to assess their level of engagement during the activities.
To begin with, the questions within the construct for the students’ results were previously validated by a panel of experts by means of Aiken’s V test, which considered two dimensions for each item, namely construction and clarity. Then, the results obtained by the students presented quite high values, though the first activity was rated significantly higher than the second one, which can be explained by the fact that the latter was more difficult, as it required more technical skills to complete. This point is also supported by the greater dispersion statistics attained for the latter. With respect to the reliability of the results achieved, which was calculated through Cronbach’s alpha, both activities obtained acceptable values. Additionally, those values indicate the internal consistency of the data collected, as well as the high correlation between the results in both activities.
Finally, the level of engagement was measured by means of the ISA engagement scale, where all the dimensions included, namely intellectual, social, and affective, achieved really high values. This fact supports the idea that students were really engaged during the activities. Furthermore, the coefficients of variation attained in all cases indicate low variability of the values collected, supporting the point that most students were highly motivated. On the other hand, Cronbach’s alpha was used to calculate the reliability of the engagement level measurements, resulting in an acceptable overall value. In this sense, the diverse dimensions involved obtained moderate to excellent values, which reinforces the idea that students became motivated by the activities. Overall, the results obtained show that the activities developed are both highly engaging for the participants and allow the digital competences achieved by the participants to be tested.
As a reminder, the goal in this research is twofold; the first one is to develop digital competence by means of engaging learning tools, and the second one is to assess the level of engagement achieved. As has been described in the introduction, there are reports in the literature stating that active learning methodologies portray engagement due to the level of involvement experienced by students, which usually acts as the driving force for improving academic performance.
In this context, TBL is an active learning methodology which requires students to apply not only technical skills in order to solve complex problems, but also soft skills in order to establish an internal self-organization among the team members, as well as an efficient communication system among them. Hence, TBL requires a certain level of involvement for students in order to become integrated into the team and follow the team dynamics, so according to the previous paragraph, it ends up portraying engagement.
Therefore, the activity set presented in this manuscript allows students to acquire the digital competence expected in relation to the course on network administration, as the activity set is aligned to course objectives. On the other hand, the TBL format implies a certain level of involvement for students, which lead to engagement. Hence, joining both arguments, the first goal was met because digital competence is acquired by means of engaging learning tools.
Regarding the second goal, the measurement of the level of engagement has been carried out by the ISA engagement scale questionnaire. The outcome obtained has shown a high level of engagement when undertaking the activity set on a team basis, as the overall average level was above 6, which is the benchmark value. Hence, the second goal was also met.

Author Contributions

Conceptualization, P.J.R.; Formal analysis, P.J.R.; Supervision, P.J.R., S.A., K.G., C.B. and C.J.; Validation, P.J.R. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

This study was conducted in accordance with the Declaration of Helsinki. No approval by the Institutional Ethics Committee was necessary, as all data were collected anonymously from capable, consenting adults. The data are not traceable to participating individuals. The procedure complies with the general data protection regulation (GDPR).

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

The datasets used and analyzed during the current study are available from the corresponding author on reasonable request.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
ICT	Information and Communication Technologies
IP	Internet Protocol
IT	Information Technology
LAN	Local Area Network
PBL	Project-Based Learning
SDL	Self-Directed Learning
SEG	Serious Educational Games
STEM	Science, Technology, Engineering, Mathematics
TBL	Team-Based Learning
WAN	Wide Area Network

Appendix A. Annex I

The relevant configurations of the routers involved in the sequential path selected as the shortest path from the source PC to the destination Server are displayed here:
hostname Router0
interface fa0/0
 ip address 10.0.0.1 255.255.255.0
 no shutdown
interface fa0/1
 ip address 10.0.2.1 255.255.255.0
 no shutdown
ip route 10.0.24.0 255.255.255.0 10.0.2.2
end
————————————————————–
hostname Router2
interface fa0/0
 ip address 10.0.2.2 255.255.255.0
 no shutdown
interface fa0/1
 ip address 10.2.5.1 255.255.255.0
 no shutdown
ip route 10.0.0.0 255.255.255.0 10.0.2.1
ip route 10.0.24.0 255.255.255.0 10.2.5.2
end
————————————————————–
hostname Router5
interface gi0/0
 ip address 10.2.5.2 255.255.255.0
 no shutdown
interface gi0/1
 ip address 10.5.7.1 255.255.255.0
 no shutdown
ip route 10.0.0.0 255.255.255.0 10.2.5.1
ip route 10.0.24.0 255.255.255.0 10.5.7.2
end
————————————————————–
hostname Router7
interface fa0/0
 ip address 10.5.7.2 255.255.255.0
 no shutdown
interface gi0/1
 ip address 10.7.14.1 255.255.255.0
 no shutdown
ip route 10.0.0.0 255.255.255.0 10.5.7.1
ip route 10.0.24.0 255.255.255.0 10.7.14.2
end
————————————————————–
hostname Router14
interface e0/0
 ip address 10.7.14.2 255.255.255.0
 no shutdown
interface fa0/0
 ip address 10.18.14.2 255.255.255.0
 no shutdown
ip route 10.0.0.0 255.255.255.0 10.7.14.1
ip route 10.0.24.0 255.255.255.0 10.18.14.1
end
————————————————————–
hostname Router18
interface gi0/0
 ip address 10.18.14.1 255.255.255.0
 no shutdown
interface gi0/1
 ip address 10.13.18.2 255.255.255.0
 no shutdown
ip route 10.0.0.0 255.255.255.0 10.18.14.2
ip route 10.0.24.0 255.255.255.0 10.13.18.1
end
————————————————————–
hostname Router13
interface gi0
 ip address 10.13.18.1 255.255.255.0
 no shutdown
interface se0
 ip address 10.6.13.2 255.255.255.0
 clock rate 64000
 no shutdown
ip route 10.0.0.0 255.255.255.0 10.13.18.2
ip route 10.0.24.0 255.255.255.0 10.6.13.1
end
————————————————————–
hostname Router6
interface se0/0/0
 ip address 10.6.13.1 255.255.255.0
 no shutdown
interface gi0/0
 ip address 10.3.6.2 255.255.255.0
 no shutdown
ip route 10.0.0.0 255.255.255.0 10.6.13.2
ip route 10.0.24.0 255.255.255.0 10.3.6.1
end
————————————————————–
hostname Router3
interface fa0/0
 ip address 10.3.6.1 255.255.255.0
 no shutdown
interface fa0/1
 ip address 10.8.3.2 255.255.255.0
 no shutdown
ip route 10.0.0.0 255.255.255.0 10.3.6.2
ip route 10.0.24.0 255.255.255.0 10.8.3.1
end
————————————————————–
hostname Router8
interface gi0/0
 ip address 10.8.3.1 255.255.255.0
 no shutdown
interface gi0/1
 ip address 10.4.8.2 255.255.255.0
 no shutdown
ip route 10.0.0.0 255.255.255.0 10.8.3.2
ip route 10.0.24.0 255.255.255.0 10.4.8.1
end
————————————————————–
hostname Router4
interface fa0/0
 ip address 10.4.8.1 255.255.255.0
 no shutdown
interface fa0/1
 ip address 10.9.4.2 255.255.255.0
 no shutdown
ip route 10.0.0.0 255.255.255.0 10.4.8.2
ip route 10.0.24.0 255.255.255.0 10.9.4.1
end
————————————————————–
hostname Router9
interface fa0/0
 ip address 10.9.4.1 255.255.255.0
 no shutdown
interface fa0/1
 ip address 10.10.9.2 255.255.255.0
 no shutdown
ip route 10.0.0.0 255.255.255.0 10.9.4.2
ip route 10.0.24.0 255.255.255.0 10.10.9.1
end
————————————————————–
hostname Router10
interface gi0/0
 ip address 10.10.9.1 255.255.255.0
 no shutdown
interface gi0/1
 ip address 10.10.15.1 255.255.255.0
 no shutdown
ip route 10.0.0.0 255.255.255.0 10.10.9.2
ip route 10.0.24.0 255.255.255.0 10.10.15.2
end
————————————————————–
hostname Router15
interface gi0/0
 ip address 10.10.15.2 255.255.255.0
 no shutdown
interface gi0/1
 ip address 10.15.12.1 255.255.255.0
 no shutdown
ip route 10.0.0.0 255.255.255.0 10.10.15.1
ip route 10.0.24.0 255.255.255.0 10.15.12.2
end
————————————————————–
hostname Router12
interface gi0
 ip address 10.15.12.2 255.255.255.0
 no shutdown
interface se0
 ip address 10.12.16.1 255.255.255.0
 clock rate 64000
 no shutdown
ip route 10.0.0.0 255.255.255.0 10.15.12.1
ip route 10.0.24.0 255.255.255.0 10.12.16.2
end
————————————————————–
hostname Router16
interface se0/0/0
 ip address 10.12.16.2 255.255.255.0
 no shutdown
interface gi0/0
 ip address 10.16.20.1 255.255.255.0
 no shutdown
ip route 10.0.0.0 255.255.255.0 10.12.16.1
ip route 10.0.24.0 255.255.255.0 10.16.20.2
end
————————————————————–
hostname Router20
interface gi0/0
 ip address 10.16.20.2 255.255.255.0
 no shutdown
interface gi0/1
 ip address 10.20.22.1 255.255.255.0
 no shutdown
ip route 10.0.0.0 255.255.255.0 10.16.20.1
ip route 10.0.24.0 255.255.255.0 10.20.22.2
end
————————————————————–
hostname Router22
interface gi0/0
 ip address 10.20.22.2 255.255.255.0
 no shutdown
interface gi0/1
 ip address 10.24.22.2 255.255.255.0
 no shutdown
ip route 10.0.0.0 255.255.255.0 10.20.22.1
ip route 10.0.24.0 255.255.255.0 10.24.22.1
end
————————————————————–
hostname Router24
interface fa0/0
 ip address 10.24.22.1 255.255.255.0
 no shutdown
interface fa0/1
 ip address 10.0.24.1 255.255.255.0
 no shutdown
ip route 10.0.0.0 255.255.255.0 10.24.22.2
end
————————————————————–
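As a complementary illustration only (not part of the activity material), the following Python sketch parses a configuration in the style listed above and checks that every static route points to a next hop inside one of the router's directly connected subnets, which is a convenient sanity check when assembling such a long chain of routers by hand:

import ipaddress
import re

def connected_networks(config):
    # Return the directly connected networks declared under the interfaces.
    nets = []
    for ip, mask in re.findall(r"ip address (\S+) (\S+)", config):
        nets.append(ipaddress.ip_interface(f"{ip}/{mask}").network)
    return nets

def check_static_routes(config):
    # Yield (destination, next hop, reachable?) for every static route found.
    nets = connected_networks(config)
    for dest, mask, next_hop in re.findall(r"ip route (\S+) (\S+) (\S+)", config):
        hop = ipaddress.ip_address(next_hop)
        yield dest, next_hop, any(hop in net for net in nets)

# Example: the Router2 configuration from the listing above.
router2 = """
interface fa0/0
 ip address 10.0.2.2 255.255.255.0
interface fa0/1
 ip address 10.2.5.1 255.255.255.0
ip route 10.0.0.0 255.255.255.0 10.0.2.1
ip route 10.0.24.0 255.255.255.0 10.2.5.2
"""
for dest, hop, ok in check_static_routes(router2):
    print(dest, "via", hop, "->", "reachable" if ok else "unreachable next hop")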

References

  1. Hartikainen, S.; Rintala, H.; Pylväs, L.; Nokelainen, P. The Concept of Active Learning and the Measurement of Learning Outcomes: A Review of Research in Engineering Higher Education. Educ. Sci. 2019, 9, 276. [Google Scholar] [CrossRef]
  2. Lombardi, D.; Shipley, T.F.; Bailey, J.M.; Bretones, P.S.; Prather, E.E.; Ballen, C.J.; Knight, J.K.; Smith, M.K.; Stowe, R.L.; Cooper, M.M.; et al. The Curious Construct of Active Learning. Psychol. Sci. Public Interest 2021, 22, 8–43. [Google Scholar] [CrossRef]
  3. Freeman, S.; Eddy, S.L.; McDonough, M.; Smith, M.K.; Okoroafor, N.; Jordt, H.; Wenderoth, M.P. Active learning increases student performance in science, engineering, and mathematics. Proc. Natl. Acad. Sci. USA 2014, 111, 8410–8415. [Google Scholar] [CrossRef]
  4. Driessen, E.P.; Knight, J.K.; Smith, M.K.; Ballen, C.J. Demystifying the Meaning of Active Learning in Postsecondary Biology Education. CBE-Life Sci. Educ. 2020, 19, ar52. [Google Scholar] [CrossRef]
  5. Leksuwankun, S.; Bunnag, S.; Namasondhi, A.; Pongpitakmetha, T.; Ketchart, W.; Wangasaturaka, D.; Itthipanichpong, C. Students’ Attitude Toward Active Learning in Health Science Education: The Good, the Challenges, and the Educational Field Differences. Front. Educ. 2022, 7, 748939. [Google Scholar] [CrossRef]
  6. Ribeiro-Silva, E.; Amorim, C.; Aparicio-Herguedas, J.L.; Batista, P. Trends of Active Learning in Higher Education and Students’ Well-Being: A Literature Review. Front. Psychol. 2022, 13, 844236. [Google Scholar] [CrossRef]
  7. Umar, I.N.; Yusoff, M.T.M. A study on Malaysian Teachers’ Level of ICT Skills and Practices, and its Impact on Teaching and Learning. Procedia-Soc. Behav. Sci. 2014, 116, 979–984. [Google Scholar] [CrossRef]
  8. Gin, L.E.; Guerrero, F.A.; Cooper, K.M.; Brownell, S.E. Is Active Learning Accessible? Exploring the Process of Providing Accommodations to Students with Disabilities. CBE-Life Sci. Educ. 2020, 19, es12. [Google Scholar] [CrossRef]
  9. Li, R.; Lund, A.; Nordsteien, A. The link between flipped and active learning: A scoping review. Teach. High. Educ. 2023, 28, 1993–2027. [Google Scholar] [CrossRef]
  10. Colás-Bravo, P.; Conde-Jiménez, J.; Reyes-de-Cózar, S. The development of the digital teaching competence from a sociocultural approach. Comunicar 2019, 62, 19–30. [Google Scholar] [CrossRef]
  11. Cabero-Almenara, J.; Guillén-Gámez, F.D.; Ruiz-Palmero, J.; Palacios-Rodríguez, A. Teachers’ digital competence to assist students with functional diversity: Identification of factors through logistic regression methods. Br. J. Educ. Technol. 2022, 53, 41–57. [Google Scholar] [CrossRef]
  12. Alazam, A.O.; Bakar, A.R.; Hamzah, R.; Asmiran, S. Teachers’ ICT Skills and ICT Integration in the Classroom: The Case of Vocational and Technical Teachers in Malaysia. Creat. Educ. 2012, 3, 70–76. [Google Scholar] [CrossRef]
  13. Gümüs, M.M.; Kukul, V. Developing a digital competence scale for teachers: Validity and reliability study. Educ. Inf. Technol. 2023, 28, 2747–2765. [Google Scholar] [CrossRef] [PubMed]
  14. Kaarakainen, M.T.; Kivinen, O.; Vainio, T. Performance-based testing for ICT skills assessing: A case study of students and teachers’ ICT skills in Finnish schools. Univers. Access Inf. Soc. 2018, 17, 349–360. [Google Scholar] [CrossRef]
  15. Tretinjak, M.F.; Andelic, V. Digital Competences for Teacher: Classroom Practice. In Proceedings of the 39th International Convention on Information and Communication Technology, Electronics and Microelectronics (MIPRO), Opatija, Croatia, 30 May–3 June 2016. [Google Scholar]
  16. Instefjord, E.J.; Munthe, E. Educating digitally competent teachers: A study of integration of professional digital competence in teacher education. Teach. Teach. Educ. 2017, 67, 37–45. [Google Scholar] [CrossRef]
  17. Demissie, E.B.; Labiso, T.O.; Thuo, M.W. Teachers’ digital competencies and technology integration in education: Insights from secondary schools in Wolaita Zone, Ethiopia. Soc. Sci. Humanit. Open 2022, 6, 100355. [Google Scholar] [CrossRef]
  18. Cabero-Almenara, J.; Romero-Tena, R.; Palacios-Rodríguez, A. Evaluation of Teacher Digital Competence Frameworks Through Expert Judgement: The Use of the Expert Competence Coefficient. J. New Approaches Educ. Res. 2020, 9, 275–293. [Google Scholar] [CrossRef]
  19. Pérez-Escoda, A.; García-Ruiz, R.; Aguaded, I. Dimensions of digital literacy based on five models of development. Cult. Educ. 2019, 31, 232–266. [Google Scholar] [CrossRef]
  20. Ríoseco-País, M.; Silva-Quiroz, J.; Carrasco-Manríquez, C. Development of Digital Competences in Students of a Public State-Owned Chilean University Considering the Safety Area. Educ. Sci. 2023, 13, 710. [Google Scholar] [CrossRef]
  21. Kumpulainen, K.; Kajamaa, A.; Leskinen, J.; Byman, J.; Renlund, J. Mapping Digital Competence: Students’ Maker Literacies in a School’s Makerspace. Front. Educ. 2020, 5, 00069. [Google Scholar] [CrossRef]
  22. Bilbao-Aiastui, E.; Arruti, A.; Carballedo-Morillo, R. A systematic literature review about the level of digital competences defined by DigCompEdu in higher education. Aula Abierta 2021, 50, 841–850. [Google Scholar] [CrossRef]
  23. Martín-Párraga, L.; Llorente-Cejudo, C.; Cabrero-Almenara, J. Analysis of teachers’ digital competencies from assessment frameworks and instruments. Int. J. Educ. Res. Innov. (IJERI) 2022, 18, 62–79. [Google Scholar]
  24. Saltos-Rivas, R.; Novoa-Hernández, P.; Serrano-Rodríguez, R. On the quality of quantitative instruments to measure digital competence in higher education: A systematic mapping study. PLoS ONE 2021, 16, e0257344. [Google Scholar] [CrossRef] [PubMed]
  25. Pérez-Escoda, A.; Lena-Acebo, F.; García-Ruiz, R. Digital Competences for Smart Learning During COVID-19 in Higher Education Students from Spain and Latin America. Digit. Educ. Rev. 2021, 40, 122–140. [Google Scholar] [CrossRef]
  26. Cabrero-Almenara, J.; Gutiérrez-Castillo, J.J.; Palacios-Rodríguez, A.; Barroso-Osuna, J. Development of the Teacher Digital Competence Validation of DigCompEdu Check-In Questionnaire in the University Context of Andalusia (Spain). Sustainability 2020, 12, 6094. [Google Scholar] [CrossRef]
  27. Arif, M.Z.; Nurdin, D.; Sururi, S. Mapping the use of digital learning tools and methods for increasing teachers’ digital competence. J. Pendidikal Glas. 2023, 7, 226–235. [Google Scholar] [CrossRef]
  28. Fernández-Luque, A.M.; Ramírez-Montoya, M.S.; Cordón-García, J.A. Training in digital competencies for health professionals: Systematic mapping (2015–2019). Prof. Inf. 2021, 30, e300213. [Google Scholar] [CrossRef]
  29. López-Belmonte, J.; Pozo-Sánchez, S.; Fuentes-Cabrera, A.; Domínguez-Campoy, N. The Level of Digital Competence in Education Professionals: The Case of Spanish Physical Education Teachers. Zona Próxima 2020, 33, 146–165. [Google Scholar] [CrossRef]
  30. Evangelinos, G.; Holley, D. A Qualitative Exploration of the DIGCOMP Digital Competence Framework: Attitudes of students, academics and administrative staff in the health faculty of a UK HEI. EAI Endorsed Trans. E-Learn. 2015, 2, e1. [Google Scholar] [CrossRef]
  31. Napal-Fraile, M.; Peñalva-Vélez, A.; Mendióroz-Lacambra, A.M. Development of Digital Competence in Secondary Education Teachers’ Training. Educ. Sci. 2018, 8, 104. [Google Scholar] [CrossRef]
  32. Cisneros-Barahona, A.; Marqués-Molías, L.; Samaniego-Erazo, N.; Uvidia-Fassler, M.I.; Castro-Ortiz, W.; Villa-Yáñez, H. Digital competence, faculty and higher education. Int. Humanit. Rev. 2023, 16, 2–20. [Google Scholar]
  33. Romero-García, C.; Buzón-García, O.; de Paz-Lugo, P. Improving Future Teachers’ Digital Competence Using Active Methodologies. Sustainability 2020, 12, 7798. [Google Scholar] [CrossRef]
  34. Mosquera-Gende, I. Digital tools and active leaning in an online university: Improving the academic performance of future teachers. J. Technol. Sci. Educ. 2023, 13, 632–645. [Google Scholar] [CrossRef]
  35. Fernández-Jiménez, A. Active methodologies based on digital skills to improve academic performance. Hum. Rev. Int. Humanit. Rev. 2023, 17, 1–20. [Google Scholar]
  36. Parmelee, D.; Michaelsen, L.K.; Cook, S.; Hudes, P.D. Team-based learning: A practical guide: AMEE guide no 65. Med. Teach. 2012, 34, e275-87. [Google Scholar] [CrossRef] [PubMed]
  37. Burgess, A.; van Diggele, C.; Roberts, C.; Mellis, C. Team-based learning: Design, facilitation and participation. BMC Med. Educ. 2020, 20, 461. [Google Scholar] [CrossRef] [PubMed]
  38. Fatmi, M.; Hartling, L.; Hillier, T.; Campbell, S.; Oswald, A.E. The effectiveness of team-based learning on learning outcomes in health professions education: BEME Guide No. 30. Med. Teach. 2013, 35, e1608–e1624. [Google Scholar] [CrossRef] [PubMed]
  39. Costa e Silva, E.; Lino-Neto, T.; Ribeiro, E.; Rocha, M.; Costa, M.J. Going virtual and going wide: Comparing Team-Based Learning in-class versus online and across disciplines. Educ. Inf. Technol. 2022, 27, 2311–2329. [Google Scholar] [CrossRef] [PubMed]
  40. Dorodchi, M.; Dehbozorgi, N.; Benedict, A.; Al-Hossami, E.; Benedict, A. Scaffolding a Team-based Active Learning Course to Engage Students: A Multidimensional Approach. In Proceedings of the 2020 ASEE Virtual Annual Conference Content Access, Virtual Online, 22–26 June 2020. [Google Scholar]
  41. Swanson, E.; McCulley, L.V.; Osman, D.J.; Lewis, N.S.; Solis, M. The effect of team-based learning on content knowledge: A meta-analysis. Act. Learn. High. Educ. 2017, 20, 39–50. [Google Scholar] [CrossRef]
  42. Hazel, S.J.; Herbele, N.; McEwen, M.M.; Adams, K. Team-Based Learning Increases Active Engagement and Enhances Development of Teamwork and Communication Skills in a First-Year Course for Veterinary and Animal Science Undergraduates. J. Vet. Med. Educ. 2013, 40, 333–341. [Google Scholar] [CrossRef]
  43. Rawekar, A.; Garg, V.; Jagzape, A.; Deshpande, V.; Tankhiwale, S.; Chalak, S. Team Based Learning: A controlled trial of Active learning in Large Group Setting. IOSR J. Dent. Med. Sci. (IOSR-JDMS) 2013, 7, 42–48. [Google Scholar] [CrossRef]
  44. Ruiz-Bañuls, M.; Gómez-Trigueros, I.M.; Rovira-Collado, J.; Rico-Gómez, M.L. Gamification and transmedia in interdisciplinary contexts: A didactic intervention for the primary school classroom. Heliyon 2021, 7, e07374. [Google Scholar] [CrossRef]
  45. Reyssier, S.; Hallifax, S.; Serna, A.; Marty, J.C.; Simonian, S.; Lavoué, E. The Impact of Game Elements on Learner Motivation: Influence of Initial Motivation and Player Profile. IEEE Trans. Learn. Technol. 2022, 15, 42–54. [Google Scholar] [CrossRef]
  46. Okubo, F.; Shiino, T.; Minematsu, T.; Taniguchi, Y.; Shimada, A. Adaptive Learning Support System Based on Automatic Recommendation of Personalized Review Materials. IEEE Trans. Learn. Technol. 2023, 16, 92–105. [Google Scholar] [CrossRef]
  47. Moraga, D.; Soto, J. TBL—Team-Based Learning. Estud. Pedagógicos 2016, 42, 437–447. [Google Scholar] [CrossRef]
  48. López, E. Aplicación de TBL en el aula. Padres Maest. 2021, 385, 48–51. [Google Scholar] [CrossRef]
  49. Hodges, L.C. Student Engagement in Active Learning Classes. In Active Learning in College Science; Springer: Cham, Switzerland, 2020; pp. 27–41. [Google Scholar]
  50. Viswanathan, R.; Krishnamurthy, N. Engaging Students through Active Learning Strategies in a Medicinal Chemistry Course. J. Chem. Educ. 2023, 200, 4638–4643. [Google Scholar] [CrossRef]
  51. Jabulisile, N.; Sphelele, Z. Teaching Strategies to Engage Learners in Active Learning in Business Studies. Int. J. Innov. Technol. Soc. Sci. 2023, 3, 1–10. [Google Scholar] [CrossRef]
  52. Grijpma, J.W.; van der Vossen, M.M.; Kusurkar, R.A.; Meeter, M.; de la Croix, A. Medical student engagement in small-group active learning: A stimulated recall study. Med. Educ. 2022, 56, 432–443. [Google Scholar] [CrossRef]
  53. Naughton, L.; Butler, R.; Parkes, A.; Wilson, P.; Gascoyne, A. Raising aspirations using elements of team-based learning in mathematics: A pilot study. Int. J. Math. Educ. Sci. Technol. 2021, 52, 1491–1507. [Google Scholar] [CrossRef]
  54. Hanley, J. Team-based learning in social work law education: A practitioner enquiry. Soc. Work Educ. 2021, 40, 1038–1050. [Google Scholar] [CrossRef]
  55. Ada, M.B.; Foster, M.E. Enhancing postgraduate students’ technical skills: Perceptions of modified team-based learning in a six-week multi-subject Bootcamp-style CS course. Comput. Sci. Educ. 2023, 33, 186–210. [Google Scholar]
  56. Spanish Grading System (Higher Education). Available online: https://www.upv.es/entidades/OPII/infoweb/pi/info/1147768normali.html (accessed on 16 February 2024).
  57. Tabla de Conversión de Calificaciones (Higher Education). Available online: https://internacional.ugr.es/pages/movilidad/tablaconversioncalificaciones/! (accessed on 16 February 2024).
  58. Soane, E.; Bailey, C.; Alfes, K.; Shantz, A.; Rees, C.; Gatenby, M. Development and application of a new measure of employee engagement: The ISA Engagement Scale. Hum. Resour. Dev. Int. 2012, 15, 529–547. [Google Scholar] [CrossRef]
  59. Mañas-Rodríguez, M.A.; Pecino-Medina, V.; Limbert, C. Validation of the Spanish version of Soane’s ISA Engagement Scale. Rev. Psicol. Trab. Organ. 2016, 32, 87–93. [Google Scholar] [CrossRef]
  60. Aiken, L.R. Three Coefficients for Analyzing The Reliability And Validity of Rating. Educ. Psychol. Meas. 1985, 45, 131–142. [Google Scholar] [CrossRef]
  61. Charter, R.A. A breakdown on reliability coefficients by test type and reliability method, and the clinical implications of low reliability. J. Gen. Psychol. 2003, 130, 290–304. [Google Scholar] [CrossRef]
  62. Cicchetti, D.V. Guidelines, criteria, and rules of thumb for evaluating normed and standardized assessment instruments in psychology. Psychol. Assesments 1994, 6, 284–290. [Google Scholar] [CrossRef]
  63. Taber, K.S. The Use of Cronbach’s Alpha When Developing and Reporting Research Instruments in Science Education. Res. Sci. Educ. 2018, 48, 1273–1296. [Google Scholar] [CrossRef]
  64. Hoekstra, R.; Vugteveen, J.; Warrens, M.J.; Kruyen, P.M. An empirical analysis of alleged misunderstandings of coefficient alpha. Int. J. Soc. Res. Methodol. 2018, 22, 351–364. [Google Scholar] [CrossRef]
  65. Roig, P.J.; Alcaraz, S.; Gilly, K.; Bernad, C.; Juiz, C. Using Escape Rooms as Evaluation Tool in Active Learning Contexts. Educ. Sci. 2023, 13, 535. [Google Scholar] [CrossRef]
Figure 1. A meteorite is coming to Earth and we need to complete the mission to avoid collision.
Figure 2. Logical network topology involved in both tasks considered in activity 1.
Figure 3. A meteorite is getting closer to Earth and we need to complete the mission to avoid collision.
Figure 4. Logical grid involved in activity 2, set up in orthogonal coordinates.
Figure 5. Logical grid involved in activity 2, set up in skew coordinates.
Figure 6. Logical grid involved in activity 2 with all even routers highlighted.
Figure 7. Shortest path proposed to carry out activity 2, where only 19 routers are involved.
Figure 8. Links involved in the shortest path proposed to undertake activity 2.
Figure 9. Network addresses involved in the topology layout involved in activity 2.
Figure 10. Network addresses involved in the shortest path to carry out activity 2.
Figure 11. Values of the fourth octet for horizontal and vertical links.
Figure 12. Logical network topology with the shortest path proposed to carry out activity 2.
Figure 13. Final tracert executed from the PC towards the Server in activity 2.
Table 1. Subnetwork addresses involved in Activity 1—Task 1.
Subnet Id | Subnetwork Address | Router Acting as Default Gateway
1 | 172.16.54.0/29 | R5
2 | 172.16.54.8/29 | R4
3 | 172.16.54.16/29 | R3
4 | 172.16.54.24/29 | R2
5 | 172.16.54.32/29 | R1
6 | 172.16.54.40/29 | R0
7 | 172.16.54.48/29 | R6
8 | 172.16.54.56/29 | unused
Table 2. Address assignment's 4th octet for LANs involved in Activity 1—Task 1.
Fourth Octet of the IP Address | LAN R5 | LAN R4 | LAN R3 | LAN R2 | LAN R1 | LAN R0 | LAN R6
Network address | 0 | 8 | 16 | 24 | 32 | 40 | 48
Broadcast address | 7 | 15 | 23 | 31 | 39 | 47 | 55
Last usable address | 6 | 14 | 22 | 30 | 38 | 46 | 54
Second-to-last | 5 | 13 | 21 | 29 | 37 | 45 | 53
Third-to-last | 4 | 12 | 20 | 28 | 36 | 44 | 52
Fourth-to-last | 3 | 11 | 19 | 27 | 35 | 43 | 51
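The 4th-octet values in Table 2 can be reproduced programmatically. The short sketch below, offered merely as a worked check, enumerates the /29 subnets of 172.16.54.0/24 with Python's ipaddress module (using new_prefix=27 instead reproduces the values in Table 6):

import ipaddress

# Enumerate the /29 subnets of 172.16.54.0/24 and print the 4th octet of the
# network address, broadcast address and last four usable host addresses,
# matching the columns of Table 2 row by row.
base = ipaddress.ip_network("172.16.54.0/24")
for subnet in base.subnets(new_prefix=29):
    hosts = list(subnet.hosts())
    octets = [int(addr) & 0xFF for addr in
              (subnet.network_address, subnet.broadcast_address,
               hosts[-1], hosts[-2], hosts[-3], hosts[-4])]
    print(subnet, octets)   # e.g. 172.16.54.0/29 [0, 7, 6, 5, 4, 3]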
Table 3. Questions related to the requested digits involved in Activity 1—Task 1.
Id | Question | 4th Octet | Digit Requested
Q1 | Last digit of the 4th octet in unicast IP address of PC2 | 12 | 2
Q2 | First digit of the 4th octet in broadcast IP address of PC4 | 47 | 4
Q3 | Last digit of the 4th octet in broadcast IP address of PC8 | 55 | 5
Q4 | First digit of the 4th octet in unicast IP address of PC1 | 19 | 1
Q5 | Last digit of the 4th octet in unicast IP address of R4's LAN | 14 | 4
Q6 | Last digit of the 4th octet in unicast IP address of S6 | 45 | 5
Q7 | Last digit of the 4th octet in unicast IP address of PC9 | 51 | 1
Q8 | Last digit of the 4th octet in unicast IP address of PC7 | 35 | 5
Table 4. Frequency of the digits selected involved in Activity 1—Task 1.
Digit | Q1 | Q2 | Q3 | Q4 | Q5 | Q6 | Q7 | Q8 | Freq. | Final
1 |  |  |  | x |  |  | x |  | 2 |
2 | x |  |  |  |  |  |  |  | 1 |
3 |  |  |  |  |  |  |  |  | - |
4 |  | x |  |  | x |  |  |  | 2 |
5 |  |  | x |  |  | x |  | x | 3 |
6 |  |  |  |  |  |  |  |  | - |
7 |  |  |  |  |  |  |  |  | - |
8 |  |  |  |  |  |  |  |  | - |
9 |  |  |  |  |  |  |  |  | - |
0 |  |  |  |  |  |  |  |  | - |
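As a quick worked check of Table 4, the digits requested in Table 3 can be tallied directly. The list literal below simply transcribes the Q1 to Q8 answers from Table 3:

from collections import Counter

# Digits requested in Activity 1 - Task 1 (Q1..Q8, transcribed from Table 3).
digits = [2, 4, 5, 1, 4, 5, 1, 5]
print(Counter(digits))   # 5 appears 3 times, 4 and 1 twice, 2 once, matching Table 4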
Table 5. Subnetwork addresses involved in Activity 1—Task 2.
Subnet Id | Subnetwork Address | Router Acting as Default Gateway
1 | 172.16.54.0/27 | R2
2 | 172.16.54.32/27 | R3
3 | 172.16.54.64/27 | R4
4 | 172.16.54.96/27 | R5
5 | 172.16.54.128/27 | R6
6 | 172.16.54.160/27 | R0
7 | 172.16.54.192/27 | R1
8 | 172.16.54.224/27 | unused
Table 6. Address assignment's 4th octet for LANs involved in Activity 1—Task 2.
Fourth Octet of the IP Address | LAN R2 | LAN R3 | LAN R4 | LAN R5 | LAN R6 | LAN R0 | LAN R1
Network address | 0 | 32 | 64 | 96 | 128 | 160 | 192
Broadcast address | 31 | 63 | 95 | 127 | 159 | 191 | 223
Last usable address | 30 | 62 | 94 | 126 | 158 | 190 | 222
Second-to-last | 29 | 61 | 93 | 125 | 157 | 189 | 221
Third-to-last | 28 | 60 | 92 | 124 | 156 | 188 | 220
Fourth-to-last | 27 | 59 | 91 | 123 | 155 | 187 | 219
Table 7. Questions related to the requested digits involved in Activity 1—Task 2.
Id | Question | 4th Octet | Digit Requested
Q1 | Last digit of the 4th octet in unicast IP address of PC2 | 92 | 2
Q2 | First digit of the 4th octet in broadcast IP address of PC4 | 191 | 1
Q3 | Last digit of the 4th octet in broadcast IP address of PC8 | 159 | 9
Q4 | First digit of the 4th octet in unicast IP address of PC1 | 59 | 5
Q5 | Last digit of the 4th octet in unicast IP address of R4's LAN | 94 | 4
Q6 | Last digit of the 4th octet in unicast IP address of S6 | 189 | 9
Q7 | Last digit of the 4th octet in unicast IP address of PC9 | 155 | 5
Q8 | Last digit of the 4th octet in unicast IP address of PC7 | 219 | 9
Table 8. Frequency of the digits selected involved in Activity 1—Task 2.
Digit | Q1 | Q2 | Q3 | Q4 | Q5 | Q6 | Q7 | Q8 | Freq. | Final
1 |  | x |  |  |  |  |  |  | 1 |
2 | x |  |  |  |  |  |  |  | 1 |
3 |  |  |  |  |  |  |  |  | - |
4 |  |  |  |  | x |  |  |  | 1 |
5 |  |  |  | x |  |  | x |  | 2 |
6 |  |  |  |  |  |  |  |  | - |
7 |  |  |  |  |  |  |  |  | - |
8 |  |  |  |  |  |  |  |  | - |
9 |  |  | x |  |  | x |  | x | 3 |
0 |  |  |  |  |  |  |  |  | - |
Table 9. Types of interfaces used in each router within the path.
Router ID | Router Type | Default Interface Type | Interface to Predecessor | Interface to Successor
Router0 | 1841 | FastEthernet | FastEthernet0/0 | FastEthernet0/1
Router2 | 1841 | FastEthernet | FastEthernet0/0 | FastEthernet0/1
Router5 | 1941 | GigabitEthernet | GigabitEthernet0/0 | GigabitEthernet0/1
Router7 | 2811 | FastEthernet | FastEthernet0/0 | FastEthernet0/1
Router14 | 2620XM | Ethernet | Ethernet1/0 | FastEthernet0/0
Router18 | 1941 | GigabitEthernet | GigabitEthernet0/0 | GigabitEthernet0/1
Router13 | 819HGW | GigabitEthernet & Serial | GigabitEthernet0 | Serial0
Router6 | 2901 | GigabitEthernet | Serial0/0/0 | GigabitEthernet0/0
Router3 | 2621XM | FastEthernet | FastEthernet0/0 | FastEthernet0/1
Router8 | 2901 | GigabitEthernet | GigabitEthernet0/0 | GigabitEthernet0/1
Router4 | 2621XM | FastEthernet | FastEthernet0/0 | FastEthernet0/1
Router9 | 2811 | FastEthernet | FastEthernet0/0 | FastEthernet0/1
Router10 | 2911 | GigabitEthernet | GigabitEthernet0/0 | GigabitEthernet0/1
Router15 | 1941 | GigabitEthernet | GigabitEthernet0/0 | GigabitEthernet0/1
Router12 | 819HGW | GigabitEthernet & Serial | GigabitEthernet0 | Serial0
Router16 | 1941 | GigabitEthernet | Serial0/0/0 | GigabitEthernet0/0
Router20 | 2911 | GigabitEthernet | GigabitEthernet0/0 | GigabitEthernet0/1
Router22 | 2911 | GigabitEthernet | GigabitEthernet0/0 | GigabitEthernet0/1
Router24 | 1841 | FastEthernet | FastEthernet0/0 | FastEthernet0/1
Table 10. Questions organized by dimensions to assess the results obtained, associated with digital competence.
Dimension | Activity | Task | Questions
1 | Activity 1 | Task 1 | Q1: Methodology to calculate the digits. Q2: Degree of completion.
1 | Activity 1 | Task 2 | Q3: Methodology to calculate the digits. Q4: Degree of completion.
2 | Activity 2 | Task 1 | Q5: Establishment of the shortest path. Q6: Interconnection of devices. Q7: Configuration of the devices. Q8: Degree of completion.
Table 11. Validity of the construct obtained according to Aiken's V test.
 | Construction Dimension | Clarity Dimension | Overall Value
Average marks | 3.781 | 3.843 | 3.812
Aiken's V test | 0.927 | 0.948 | 0.938
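These values are consistent with the usual formulation of Aiken's V, namely V = (mean mark - lowest possible mark) / (highest possible mark - lowest possible mark). Assuming the experts rated each item on a 1-to-4 scale, which is an inference from the averages reported here rather than a statement of this table, the coefficients can be reproduced as follows:

# Aiken's V for one dimension, assuming a 1-to-4 expert rating scale.
def aikens_v(mean, low=1, high=4):
    return (mean - low) / (high - low)

for label, mean in [("construction", 3.781), ("clarity", 3.843), ("overall", 3.812)]:
    print(label, round(aikens_v(mean), 3))
# construction 0.927, clarity 0.948, overall 0.937 (Table 11 reports 0.938,
# presumably computed from the unrounded average mark)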
Table 12. Centralization and dispersion statistics related to the results obtained.
 | Related to Activity 1 | Related to Activity 2 | Overall Construct
Average | 4.66 | 4.01 | 4.33
Mode | 5 | 4 | 4
25th percentile | 4 | 4 | 4
Median | 5 | 4 | 4
75th percentile | 5 | 4 | 5
Variance | 0.24 | 0.46 | 0.46
Standard Deviation | 0.49 | 0.68 | 0.68
Coefficient of Variation | 0.11 | 0.17 | 0.16
Table 13. Reliability of the results obtained according to Cronbach's alpha.
 | Related to Activity 1 | Related to Activity 2 | Overall Construct
Sum of item variances, ∑σ²(Xi) | 0.964 | 1.728 | 2.692
Variance of the total score, σ²(Y) | 2.033 | 3.757 | 7.264
Cronbach's Alpha | 0.701 | 0.720 | 0.719
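These reliability figures follow from the standard Cronbach's alpha formula, alpha = k/(k - 1) * (1 - ∑σ²(Xi)/σ²(Y)), where k is the number of items. The short check below uses k = 4 items per activity and k = 8 for the overall construct, as described in the conclusions; the same formula with k = 3 per dimension and k = 9 overall reproduces the values of Table 16:

# Cronbach's alpha from the sum of item variances and the total-score variance.
def cronbach_alpha(k, sum_item_var, total_var):
    return k / (k - 1) * (1 - sum_item_var / total_var)

print(round(cronbach_alpha(4, 0.964, 2.033), 3))   # 0.701 (activity 1)
print(round(cronbach_alpha(4, 1.728, 3.757), 3))   # 0.720 (activity 2)
print(round(cronbach_alpha(8, 2.692, 7.264), 3))   # 0.719 (overall construct)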
Table 14. Items organized by dimensions to assess the level of engagement attained.
Dimension | Questions
1 Intellectual | Q1: I focus hard on my work. Q2: I concentrate on my work. Q3: I pay a lot of attention to my work.
2 Social | Q4: I share the same work values as my colleagues. Q5: I share the same work goals as my colleagues. Q6: I share the same work attitudes as my colleagues.
3 Affective | Q7: I feel positive about my work. Q8: I feel energetic in my work. Q9: I am enthusiastic in my work.
Table 15. Centralization and dispersion statistics related to the level of engagement.
 | Intellectual Dimension | Social Dimension | Affective Dimension | Overall Construct
Average | 6.60 | 6.63 | 6.64 | 6.62
Mode | 7 | 7 | 7 | 7
25th percentile | 6 | 6 | 6 | 6
Median | 7 | 7 | 7 | 7
75th percentile | 7 | 7 | 7 | 7
Variance | 0.44 | 0.35 | 0.32 | 0.37
Standard Deviation | 0.67 | 0.59 | 0.57 | 0.61
Coefficient of Variation | 0.10 | 0.09 | 0.09 | 0.09
Table 16. Reliability of the level of engagement attained according to Cronbach's alpha.
 | Intellectual Dimension | Social Dimension | Affective Dimension | Overall Construct
Sum of item variances, ∑σ²(Xi) | 1.356 | 1.056 | 0.982 | 3.394
Variance of the total score, σ²(Y) | 3.200 | 2.921 | 2.340 | 9.689
Cronbach's Alpha | 0.864 | 0.957 | 0.871 | 0.731
