*5.4. Experimental Design*

The following steps were developed for the execution of the experiment:

- **Step 3.a:** Execute the monitoring script for 2 h without any workload, before the cluster is started.
- **Step 3.b:** Run the script that starts the cluster with the container orchestrator, and keep monitoring for an initial 2 h without stress.
- **Step 3.c:** Run the high workload that emulates auto-scaling 420 times in a loop.
- **Step 3.d:** After the stress ends, wait 2 h and execute a script that terminates the container orchestrator as a possible software rejuvenation action.
- Repeat Steps 3.a, 3.b, 3.c, and 3.d until completing five cycles.
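The cycle above can be sketched as a small driver program. This is a minimal sketch, assuming illustrative action names (`monitor_idle`, `start_cluster`, and so on) rather than the paper's actual scripts; only the counts (five cycles, 420 stress runs, 2 h idle windows) come from the experimental design.

```python
CYCLES = 5               # five cycles, as in the experiment
STRESS_RUNS = 420        # stress-workload repetitions per cycle (Step 3.c)
IDLE_SECONDS = 2 * 3600  # 2 h idle monitoring window (Steps 3.a, 3.b, 3.d)

def build_plan(cycles=CYCLES, stress_runs=STRESS_RUNS):
    """Return the ordered list of (action, cycle) pairs for the experiment."""
    plan = []
    for c in range(1, cycles + 1):
        plan.append(("monitor_idle", c))            # Step 3.a: idle host, no cluster
        plan.append(("start_cluster", c))           # Step 3.b: start orchestrator
        plan.append(("monitor_no_stress", c))       #           monitor 2 h, no stress
        plan.extend([("stress", c)] * stress_runs)  # Step 3.c: 420 workload runs
        plan.append(("wait", c))                    # Step 3.d: 2 h cool-down
        plan.append(("stop_orchestrator", c))       #           rejuvenation action
    return plan

def execute(plan, dry_run=True):
    """Walk the plan; with dry_run=True just print what would happen."""
    for action, cycle in plan:
        if dry_run:
            print(f"cycle {cycle}: {action}")
        # else: invoke the corresponding helper script here,
        # e.g. via subprocess.run, which we omit in this sketch.

if __name__ == "__main__":
    execute(build_plan())
```

Building the plan separately from executing it makes the five-cycle structure easy to inspect before any long-running monitoring starts.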

To aid understanding of our experiment, Figure 8 depicts a diagram of the sequence of operations performed by the general script just described.

**Figure 8.** Diagram for cycles of operations performed by the experiment script.

Throughout all routines in Step 3, another script sends client requests to the service hosted in the cluster. These requests are serviced by any pods that may have been created during the stress workload. Figure 9 illustrates the interaction between a client and the service in the Kubernetes cluster, both in the Minikube environment and in the K3S environment. In both environments, the infrastructure is configured as in Figure 2: the Service defines a logical set of Pods and provides external traffic exposure, load balancing, and service discovery for them. Each Pod runs Nginx as a lightweight HTTP server, represented in the figure as "Other App".
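The client side can be sketched as a simple request loop. This is an illustrative sketch, not the paper's actual client script: the function names are ours, and the endpoint URL in the comment is a hypothetical NodePort address for the Nginx-backed Service.

```python
import time
import urllib.request
import urllib.error

def probe(url, timeout=5):
    """Send one request to the service; return (ok, status_or_error)."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return True, resp.status
    except (urllib.error.URLError, OSError) as exc:
        return False, str(exc)

def client_loop(url, interval=1.0, max_requests=None):
    """Continuously exercise the service, as the client script does
    throughout Step 3; any Pod behind the Service may answer each request."""
    results = []
    while max_requests is None or len(results) < max_requests:
        results.append(probe(url))
        time.sleep(interval)
    return results

# Hypothetical endpoint; in the experiment this would be the externally
# exposed address of the Nginx service in the cluster, e.g.:
# client_loop("http://192.168.49.2:30080/", interval=1.0)
```

Recording the outcome of each probe also gives a service-availability trace that can be correlated with the monitoring data collected during the cycles.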

**Figure 9.** Cluster and Client Interaction Overview.
