#### *5.1. Testing Environment*

We deployed a testing environment composed of three MEC Hosts with different hardware resources, representative of a real smart classroom: a server, a desktop PC and a laptop. These devices can be used in the different scenarios presented in Section 3. The server was an Intel machine with a 12-core 3.50 GHz CPU and 32 GB of DDR4 RAM, the PC was an Intel machine with an 8-core 3.40 GHz CPU and 16 GB of DDR4 RAM, and the laptop was an Intel Celeron machine with a dual-core 1.10 GHz CPU and 4 GB of DDR4 RAM. Laptops have computational capabilities similar to those of tablets and mini-PCs, so our experimental results for the laptop are also indicative for those devices.

For each device, we set up a realistic evaluation environment running only the typical services and graphical interface in order to reduce overhead. The operating system of all hosts was 64-bit Ubuntu 18.04, and the containers were deployed with the latest version (19.03.6) of Docker Engine. No additional software components were needed to deploy the learning tools in our testing environment. Each learning tool was allocated in its own Docker container providing a single learning task.

Our testbeds evaluated the performance and efficiency of our solution by increasing the number of containers on each type of MEC Host. This allows us to observe how performance varies across scenarios according to each host's capabilities. We expect that switching between scenarios would have an impact on performance; e.g., a learning device installed at a classroom work table would require many more learning tools in a Tabletop Task Collaboration scenario than in a Programming Project-based Learning scenario. The number of students taking each class may also change, affecting the computational requirements. Therefore, the performance of each configuration must be well known to the Orchestrator so that it can properly reconfigure the learning devices in each class.

#### *5.2. Docker Container with High-Intensive Computing Application*

Several learning scenarios can require a face detection tool to identify students or infer affective states. As shown in Section 3 for an Intelligent Tutoring System, an MEC Host with a camera capturing a video feed of students' facial expressions can be used to infer affect (e.g., surprise, neutrality, confusion and anger) and identify when a student needs help. We used the *dlib* library to implement a Histogram of Oriented Gradients (HOG) face detection MEC App and created a Docker container that provides this app in our testing environment. HOG is one of the most reliable and widely applied algorithms for person identification, but it is also a computationally intensive task. Therefore, it is essential to properly manage the computing resources of the learning device that can be dedicated to the execution of this learning tool.

In order to evaluate the performance and efficiency of the Face Recognition application in Docker containers, our testbed used an H.264 video source with a 640 × 360 image size and applied the HOG algorithm to each video frame. We used the number of analyzed frames per second (FPS) as the performance index, since it measures how fast the HOG algorithm runs. A configuration with a higher FPS value provides higher video quality and smoother video. Figure 4 shows the experimental results obtained when increasing the number of containers on each type of learning device. The left graph depicts the maximum analyzed FPS for each configuration, and the right graph shows how many CPU cores are overloaded.
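The FPS measurement can be sketched as follows. This is a minimal harness, not the exact testbed code: *dlib*'s actual HOG detector call is shown in a comment, and a dummy fixed-cost workload stands in for it so the sketch stays self-contained (the function names, warm-up count and 30-frame placeholder clip are illustrative assumptions):

```python
import time

def measure_fps(detect, frames, warmup=5):
    """Run `detect` over every frame and return analyzed frames per second."""
    for frame in frames[:warmup]:      # warm-up pass so caches settle
        detect(frame)
    start = time.perf_counter()
    for frame in frames:
        detect(frame)
    elapsed = time.perf_counter() - start
    return len(frames) / elapsed

# In the real MEC App the detector comes from dlib's HOG implementation:
#   import dlib
#   detector = dlib.get_frontal_face_detector()
#   detect = lambda frame: detector(frame, 1)   # frame: grayscale image array
# Here a dummy numeric workload over a 640-value row stands in for it.
def dummy_detect(frame):
    return sum(frame) % 255

frames = [list(range(640)) for _ in range(30)]  # placeholder 30-frame clip
fps = measure_fps(dummy_detect, frames)
```

The same loop, run inside each container with the real detector, yields the per-configuration FPS figures plotted in Figure 4.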

**Figure 4.** Performance results for the Face Recognition application in Docker containers.

As can be seen in Figure 4, the maximum speed achieved was above 6 FPS for configurations with up to six containers on the server and up to four containers on the PC, whereas the throughput of the laptop was much lower, at less than 3 FPS. In addition, the server was fully overloaded with 12 containers, the PC with eight containers and the laptop with two containers; thus, each container consumed approximately one CPU core. These experimental results imply that a face detection tool can be provided in different configurations, e.g., a PC with eight cameras could serve a work table shared by eight students, and a laptop could serve a single student. Note that the server achieved the highest computational performance, and this performance could improve further if a graphics card were used to run the HOG algorithm.

#### *5.3. Docker Container with Medium Computing Application*

Identifying students by their voice through a microphone can be useful in several learning scenarios, as shown in our use case on project-based learning (see Section 3). An MEC Host with a microphone capturing the meeting audio can identify students, perform speech-to-text transcription, calculate speaker metrics (e.g., speaking time or counters) and infer the emotional state (e.g., anger, boredom or excitement).

We implemented an MEC App based on Mel-Frequency Cepstral Coefficients (MFCCs) to recognize persons and created a Docker container with this tool to carry out our experiments. MFCCs are widely used in automatic speech and speaker recognition: they transform the audio source into a sequence of feature vectors that characterize the voice signal. Our MEC App extracted feature vectors over one-second windows in order to perform real-time student recognition. Calculating MFCCs consists of framing the signal into short windows and then applying a series of mathematical operations, which makes it a medium-intensity computing task.
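The MFCC pipeline (framing, windowing, power spectrum, mel filterbank, log, DCT) can be sketched as below. The 25 ms window, 10 ms step and one-second analysis window match our testbed; the 16 kHz sample rate, 26 mel filters, 512-point FFT and 13 cepstral coefficients are common defaults assumed here for illustration:

```python
import numpy as np
from scipy.fftpack import dct

SR, WIN, HOP, NFFT, NFILT, NCEPS = 16000, 0.025, 0.010, 512, 26, 13

def mel_filterbank(nfilt=NFILT, nfft=NFFT, sr=SR):
    """Triangular filters evenly spaced on the mel scale."""
    mel = lambda f: 2595.0 * np.log10(1.0 + f / 700.0)
    imel = lambda m: 700.0 * (10.0 ** (m / 2595.0) - 1.0)
    pts = imel(np.linspace(mel(0.0), mel(sr / 2.0), nfilt + 2))
    bins = np.floor((nfft + 1) * pts / sr).astype(int)
    fb = np.zeros((nfilt, nfft // 2 + 1))
    for i in range(1, nfilt + 1):
        left, center, right = bins[i - 1], bins[i], bins[i + 1]
        fb[i - 1, left:center] = (np.arange(left, center) - left) / max(center - left, 1)
        fb[i - 1, center:right] = (right - np.arange(center, right)) / max(right - center, 1)
    return fb

def mfcc(signal, sr=SR):
    """13 MFCCs per 25 ms frame with a 10 ms step between frames."""
    win, hop = int(WIN * sr), int(HOP * sr)
    n_frames = 1 + (len(signal) - win) // hop
    idx = np.arange(win)[None, :] + hop * np.arange(n_frames)[:, None]
    frames = signal[idx] * np.hamming(win)           # frame + window
    power = np.abs(np.fft.rfft(frames, NFFT)) ** 2 / NFFT
    energies = np.maximum(power @ mel_filterbank().T, 1e-10)
    return dct(np.log(energies), type=2, axis=1, norm="ortho")[:, :NCEPS]

one_second = np.random.default_rng(0).standard_normal(SR)  # stand-in audio
features = mfcc(one_second)   # 98 frames x 13 coefficients per second
```

Each one-second chunk thus yields a 98 × 13 feature matrix, which is the input to the speaker recognition step.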

In order to evaluate the performance and efficiency of the Automatic Speaker Recognition (ASR) application in Docker containers, our testbed processed a stored audio signal in one-second chunks, computing MFCCs with an analysis window length of 25 ms and a step of 10 ms between successive windows. In this case, we used the processing time as the performance index, since it shows how fast the MFCC algorithm runs. A configuration with a lower processing time can handle more audio sources and serve more users. Figure 5 shows the experimental results obtained when increasing the number of containers on each type of MEC Host. The left graph depicts the time taken to analyze each second of the audio signal, and the right graph shows how many CPU cores are used in the processing.

(**a**) Feature extractor speed. (**b**) CPU usage per container. **Figure 5.** Performance results for Automatic Speaker Recognition application in Docker containers.

The audio feature extractor is a relatively inexpensive computational task that is well supported on the server, the PC and even the laptop. As shown in Figure 5, the processing time was always below 100 ms on all three learning devices and below 30 ms on the server and PC. However, CPU overload became relevant on the PC when the number of containers doubled its number of cores. In addition, the laptop stalled when the number of containers exceeded 10, whereas the server was not overloaded with up to 20 containers. These experimental results show that an ASR tool can easily be provided in our use cases, e.g., a laptop/tablet with a microphone could serve a six-student work group, and a server could serve 20 students simultaneously.

#### *5.4. Docker Container with High Data-Consuming Application*

Interactive simulation-based learning can be useful in multiple scenarios, for example when using an ITS in the classroom, as shown in Section 3. When students interact with a simulation, they generate events and clickstream data that can be stored and processed to calculate usage metrics (e.g., idle times or event counters) and even to infer aspects of their learning experience (e.g., perceived difficulty or simplicity).

There are several types of interactive simulations that could be used in a classroom. Physics simulations are widely used to improve the learning process in science and engineering education. We implemented a Matplotlib MEC App that builds an animated physics simulation showing wave motion. In particular, the simulation used a 1.5 GB array to plot an animated 3D surface. The size of the plotting array made the simulation a highly data-consuming task for learning devices or MEC Hosts.

In order to evaluate our physics simulation in Docker containers, our testbed updated the plot continuously on each learning device. We used the number of changes per second (CPS) of the simulation as the performance index, since it shows how fast the simulation runs. A configuration with a higher CPS value provides higher simulation quality and a more fluid animation. Figure 6 shows the experimental results obtained when increasing the number of containers on each type of learning device. The left graph depicts the maximum CPS for each configuration, and the right graph shows the percentage of RAM used.
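The CPS measurement can be sketched as follows with Matplotlib. This is a scaled-down, headless sketch rather than the testbed code: a small 80 × 80 grid and five frames stand in for the 1.5 GB plotting array and the continuously running animation (grid size, frame count and the wave equation are illustrative assumptions):

```python
import time
import numpy as np
import matplotlib
matplotlib.use("Agg")              # headless backend for the sketch
import matplotlib.pyplot as plt

# Small grid standing in for the 1.5 GB plotting array of the real MEC App.
x = np.linspace(-3.0, 3.0, 80)
X, Y = np.meshgrid(x, x)
R = np.sqrt(X ** 2 + Y ** 2)

fig = plt.figure()
ax = fig.add_subplot(projection="3d")

n_frames = 5
start = time.perf_counter()
for t in range(n_frames):
    ax.clear()
    Z = np.sin(R - 0.5 * t)        # travelling radial wave
    ax.plot_surface(X, Y, Z)
    fig.canvas.draw()              # force a full redraw; one "change"
cps = n_frames / (time.perf_counter() - start)
```

Counting full redraws per second in this way gives the CPS index reported in Figure 6; on the real array each redraw is far more expensive, which is what stresses the hosts' memory.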

Given the results shown in Figure 6, and that our physics simulation required at least 4 CPS to display a fluid animation, the laptop could serve only one container with our simulation, whereas the server and PC could reach 20 and 14 containers, respectively. Moreover, memory filled up and additional containers were rejected when the server launched more than 20 containers, the PC more than 14, and the laptop more than 2. These experimental results show that a highly data-consuming simulation can be used in different configurations, e.g., a laptop/tablet could serve a single student and a server up to 20 students in the classroom.

(**a**) Simulation speed. (**b**) RAM memory usage per container. **Figure 6.** Performance results for Computational Physics simulation in Docker containers.
