*5.5. Instrumentation*

The hardware used in this experiment was a host with 8 GB of RAM, a 3.1 GHz Core i3 processor, a WiFi module, and Ubuntu Linux 20.04 (64-bit) as the operating system. The software comprised: Shell scripts, executed through the bash command interpreter, for implementing the experiment and for monitoring and collecting the resulting data; K3S version v1.22.5+k3s1; Minikube version 1.15.1 (commit 23f40a012abb52eff365ff99a709501a61ac5876); and Kubernetes v1.19.4 on Docker 19.03.13 for running the Kubernetes cluster and Pods.

Metrics were collected at 60 s intervals for CPU utilization and memory consumption, while disk usage was sampled every 5 s. We considered these intervals sufficient to obtain enough samples while avoiding interference from the monitoring activity on actual system performance.
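The dual-interval collection described above can be sketched as a small shell loop. This is an assumed implementation, not the authors' actual script: only the 60 s and 5 s intervals come from the text, and the output file names, sample count, and choice of `df`/`free` as probes are illustrative (the sample count is shortened so the sketch terminates quickly).

```shell
#!/bin/sh
# Sketch of the metric-collection loop. Only the interval values (60 s for
# CPU/memory, 5 s for disk) come from the experiment description; everything
# else here is an illustrative assumption.
CPU_MEM_INTERVAL=${CPU_MEM_INTERVAL:-60}  # seconds between CPU/memory samples
DISK_INTERVAL=${DISK_INTERVAL:-5}         # seconds between disk-usage samples
SAMPLES=${SAMPLES:-3}                     # disk samples to take (shortened for illustration)

: > disk.csv
: > mem.csv

i=0
elapsed=0
while [ "$i" -lt "$SAMPLES" ]; do
    ts=$(date +%s)
    # Disk usage of the root filesystem, sampled on the fine-grained interval
    df -P / | awk -v ts="$ts" 'NR==2 {print ts ",disk_used_kb," $3}' >> disk.csv
    # Memory consumption, sampled only when the coarser interval elapses
    # (CPU could be sampled here the same way, e.g. from top -bn1)
    if [ $((elapsed % CPU_MEM_INTERVAL)) -eq 0 ]; then
        free -k | awk -v ts="$ts" 'NR==2 {print ts ",mem_used_kb," $3}' >> mem.csv
    fi
    i=$((i + 1))
    elapsed=$((elapsed + DISK_INTERVAL))
    [ "$i" -lt "$SAMPLES" ] && sleep "$DISK_INTERVAL"
done
```

With the defaults above, the loop takes three disk samples 5 s apart but only one memory sample, mirroring how the finer disk interval nests inside the coarser CPU/memory interval.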

#### **6. Experimental Results**

In this section, we present the results collected from the experiment for both the Minikube and K3S environments, considering the metrics of CPU utilization, memory consumption, disk utilization, and, finally, the requests made to the service. Each metric is described in the following subsections. These metrics characterize the continuity and performance of the UAM-ODT cloud infrastructure. The data were collected from the cloud infrastructure rather than from the vehicle, because we are investigating software aging problems in a private cloud hosting 24/7/365 operational digital twin services for UAM management.

It is worth highlighting that the total experiment time differs between Minikube and K3S due to a difference in the average time to restart the pods within the auto-scaling process. This time was also measured and is presented in Table 1: the restart executed 25.4% faster in the K3S environment than in Minikube, evidencing the improved auto-scaling efficiency of K3S.
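The restart-time measurement behind Table 1 can be sketched with a small timing helper. This is an assumed approach, not the authors' script: the pod label `app=uam-odt` is hypothetical, while the `kubectl delete` and `kubectl wait` subcommands are standard kubectl.

```shell
#!/bin/sh
# measure_duration: print the wall-clock seconds a command takes to finish.
measure_duration() {
    start=$(date +%s)
    "$@"
    end=$(date +%s)
    echo $((end - start))
}

# Against a live cluster (not executed here), pod restart time could be taken
# as the time from deleting a pod until its replacement reports Ready.
# The label app=uam-odt is a hypothetical example:
#   kubectl delete pod -l app=uam-odt --wait=false
#   measure_duration kubectl wait --for=condition=Ready pod -l app=uam-odt --timeout=300s
```

Averaging this value over repeated auto-scaling cycles would yield figures comparable to those reported in Table 1.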

**Table 1.** Average Pod Restart Time.

