*2.1. Solution Architecture*

To meet the goals above for developing an EDSS as a service, an architectural design based on the latest available SaaS technology is proposed. Figure 2 shows how the different core layers are encapsulated; a detailed description of each layer follows.

**Figure 2.** The different layers in the EDSS as a service design. Note: Grey represents layers that will require changes between implementations of the EDSS. White represents the infrastructure layers that do not require changes between different implementations.

**Model**—The CE-QUAL-W2 model is the heart of the EDSS, as it serves as the kernel for decision making. In recent years, the model has been released only for the Windows operating system. Although it is possible to develop this kind of EDSS as a service in a Windows environment [14], we prefer the Linux environment since it is more compatible with cloud environments and license-free. To address this need, we created an open-source GitHub project that holds the files and instructions needed to compile the CE-QUAL-W2 source code for execution in a Linux environment [15]. Besides the model executable, this layer also includes the user-specific input files of a calibrated model ready for simulations. These input files are the template that the application layer modifies according to the algorithm requirements.
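As a minimal sketch of how the calibrated model's template input files might be parameterized, the following Python snippet fills named placeholders in a copy of a template file. The placeholder syntax, file names, and `render_input_file` helper are illustrative assumptions for this design, not part of CE-QUAL-W2 itself.

```python
from pathlib import Path

def render_input_file(template_path: str, params: dict, out_path: str) -> None:
    """Fill a calibrated-model template with scenario-specific values.

    Placeholders such as {OUTFLOW_RATE} in the template are replaced with
    the values supplied by the algorithm layer for one simulation.
    """
    text = Path(template_path).read_text()
    for name, value in params.items():
        text = text.replace("{" + name + "}", str(value))
    Path(out_path).write_text(text)
```

Each parallel simulation would receive its own rendered copy of the input files, leaving the calibrated template untouched.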

**Algorithm**—The algorithm layer is responsible for two primary operations: (1) Deciding on the permutations needed for the model simulations and supplying the different parameters required in the model input files for each of these simulations; (2) Analyzing the simulation results according to the developed algorithm. A single EDSS can offer the user multiple algorithms. For example, one algorithm can conduct a grid search for the simulation output that best matches the user's input targets, while another can plot a specific model output parameter as recorded across all the simulations.
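The two operations above can be sketched in Python as follows; the parameter grid, result format, and `key` field are hypothetical stand-ins for the actual model parameters and outputs.

```python
from itertools import product

def build_permutations(parameter_grid: dict) -> list:
    """Operation (1): enumerate every combination of candidate values,
    one dict of parameters per model simulation."""
    names = list(parameter_grid)
    return [dict(zip(names, values))
            for values in product(*(parameter_grid[n] for n in names))]

def grid_search(results: list, target: float, key: str) -> dict:
    """Operation (2), grid-search variant: pick the simulation whose
    output value is closest to the user's target."""
    return min(results, key=lambda r: abs(r[key] - target))
```

A plotting algorithm would implement operation (2) differently, iterating over the same collected results instead of selecting a single best match.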

**Application**—This layer has five responsibilities, whose interactions are described in Figure 3: (1) Providing a simple web-based user interface for the EDSS, allowing users to send requests to the service (Figure 4); an example of a user request is shown later in Section 2.2.1; (2) Passing the user request to the algorithm layer and receiving in response the list of needed simulation permutations; (3) Initiating parallel model execution requests according to the permutations; (4) Collecting all the model simulation results and sending them back to the algorithm once the simulations are completed; (5) Obtaining the analyzed results from the algorithm and displaying them in the web user interface.
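Responsibilities (3) and (4) — fanning the permutations out to parallel model runs and collecting the results — could be sketched as follows. Here `run_simulation` is a stub, since the real service launches CE-QUAL-W2 containers rather than an in-process function.

```python
from concurrent.futures import ThreadPoolExecutor

def run_simulation(params: dict) -> dict:
    # Stub: the real service would start a model container with input
    # files rendered from `params` and parse the simulation output.
    return {"params": params, "output": sum(params.values())}

def handle_request(permutations: list) -> list:
    """Run all permutations concurrently and collect their results in the
    original order, ready to hand back to the algorithm layer."""
    with ThreadPoolExecutor() as pool:
        return list(pool.map(run_simulation, permutations))
```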

**Figure 3.** Application layer interactions.

**Docker**—The Docker container layer was developed to allow the software to be isolated and packaged together with all of its dependencies. A container is an executable unit that can run in any computing environment, regardless of the operating system or hardware infrastructure [14]. The Docker infrastructure is free to use and offers the following benefits for the EDSS implementation: (1) There is no need to set up the infrastructure or the operating system; (2) It can run on cloud resources as well as on a single computer, and the Docker engine runs on Windows, Linux, or Mac operating systems; (3) Cloud providers supply a cost-effective and straightforward interface for deploying applications packaged with Docker. In this EDSS as a service, a single Docker container image was created. This image can be used either for running the application or for running individual model simulations in parallel across multiple model executions.
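Such a container image might be described by a Dockerfile along the following lines; the base image, build command, and file layout are assumptions for illustration and do not reproduce the project's actual Dockerfile.

```dockerfile
# Illustrative sketch only: paths, base image, and targets are assumptions.
FROM ubuntu:22.04

# gfortran is needed to compile the CE-QUAL-W2 Fortran sources for Linux
RUN apt-get update && apt-get install -y --no-install-recommends gfortran \
    && rm -rf /var/lib/apt/lists/*

WORKDIR /edss
COPY . /edss

# Compile the model once at image-build time (hypothetical Makefile target)
RUN make w2_exe

# Default command runs a single model simulation; the application layer
# overrides it when the container serves the web interface instead.
CMD ["./w2_exe"]
```

Overriding the default command is what lets one image play both roles, keeping the application and the simulation workers on identical dependencies.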

**Kubernetes (K8s)**—This layer was developed after organizations started to adopt the Docker solution and were looking for a way to streamline scaling and to coordinate multiple services encapsulated as Docker containers. Scaling involves launching additional Docker containers according to demand [16]. K8s is also open-source. In this EDSS, K8s is leveraged to manage multiple Docker containers running in parallel, each executing a different model simulation. This allows cloud computing power to scale automatically when needed, letting the user benefit from the "pay as you use" cloud computing model while all the simulations are performed in parallel.
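One way such a batch of parallel simulations can be expressed in K8s is a Job with a parallelism setting, as in the hedged sketch below; the image name, counts, and resource values are illustrative assumptions.

```yaml
# Illustrative sketch only: names and numbers are assumptions.
apiVersion: batch/v1
kind: Job
metadata:
  name: edss-simulations
spec:
  completions: 20      # one pod per model permutation
  parallelism: 20      # run all permutations at the same time
  template:
    spec:
      containers:
        - name: w2-model
          image: edss/w2-model:latest   # the single EDSS container image
          resources:
            requests:
              cpu: "1"                  # one core per simulation
      restartPolicy: Never
```

Because each simulation is an independent pod, K8s can schedule them onto as many cloud nodes as are available and release the resources when the Job completes.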

**Figure 4.** The web-based user interface of the EDSS.

**Helm**—This is a package manager for Kubernetes that was developed to simplify the deployment of K8s applications according to a predefined configuration file [17]. It is free to use and allows easy and repeatable deployment of this EDSS to the cloud computing environment. Without Helm, deploying the EDSS to a cloud provider would require many manual configuration steps.
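With Helm, the deployment-time configuration can be captured once in a values file; the chart name and keys below are hypothetical, sketched only to show the idea.

```yaml
# Illustrative values.yaml sketch: keys and names are assumptions.
image:
  repository: edss/w2-model
  tag: latest
application:
  replicas: 1          # a single UI pod during idle times
simulations:
  parallelism: 20      # upper bound on concurrent model pods
```

Deployment then reduces to a single command such as `helm install edss ./edss-chart -f values.yaml`, which can be repeated identically across environments and cloud providers.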

**Cloud computing**—This final layer allows the EDSS as a service to run a single compute unit serving only the user interface during idle times and to scale to multiple compute units when model simulations are needed. To keep the design generic for any cloud computing provider, the EDSS relies on basic computing building blocks offered by all providers, even though designing the system for a specific provider would have been easier. Where applicable, current industry standards and best practices were applied to ensure that the APIs and interfaces between the layers are supported by various third-party tools, simplifying the deployment, scaling, and management of the infrastructure layers.
