Article

Model-Driven Approach to Cloud-Portability Issue

Department of InfoComm Networks, Faculty of Management Science and Informatics, University of Zilina, 010 26 Zilina, Slovakia
* Author to whom correspondence should be addressed.
Appl. Sci. 2024, 14(20), 9298; https://doi.org/10.3390/app14209298
Submission received: 17 September 2024 / Revised: 3 October 2024 / Accepted: 9 October 2024 / Published: 12 October 2024

Abstract

This paper focuses on the portability of Cloud Computing (CC) services, specifically on the problems with the portability of Infrastructure as a Service (IaaS). We analyze the current state of CC with the intention of standardizing the portability of CC solutions. CC IaaS providers often use proprietary solutions, which leads to a problem known as “vendor lock-in”. Another problem can appear during migration between two providers when large orchestration scripts are written in a proprietary language. To solve the portability problem, we applied the Model-Driven Architecture (MDA) approach and propose a general IaaS reference architecture. Using a generic IaaS model, we are able to describe the entities of an IaaS environment in a simplified but flexible way and then design the transformation rules needed for specific IaaS environments. The transformation rules define how descriptions of IaaS services are transcribed. The CC-portability problem is thus solved by transforming a specific IaaS service description from one provider’s language to another through the generic model. This approach is extensible and can be adapted to the evolution of CC services. Therefore, it can be used as a generic solution to IaaS-portability issues. With this flexible approach, the introduction of a new CC environment requires the design of only a single transformation to and from the generic model, which avoids proprietary peer-to-peer full-mesh mappings. Thanks to the proposed model and the transformation rules described, we were able to experimentally confirm the functionality of the transfer of an environment description between three cloud providers.

1. Introduction

In today’s world of information technology (IT), we encounter virtualization in several forms, whether it is the virtualization of operating systems (OSs), network elements, development platforms or applications. Virtualization of this type is particularly beneficial for service providers, because they can provide services to multiple users on one physical device without these services colliding with each other. Virtualization is a way to optimize the usage of physical resource capacities with scalability and flexibility in their allocation. From the end user’s point of view, virtualization has virtually no influence on the service itself, because the virtual service behaves in the same way as a service executed directly on a physical device. Users often do not even notice that the service they are using is virtualized.
With the evolution of virtualization, we came to a state called Cloud Computing (CC). CC is a relatively new IT industry and has not yet been fully unified and standardized. Currently, several standardization groups are trying to unify the use of CC environments from the perspective of users and providers. We can divide standardization organizations into two groups. In the first group, there are organizations that deal with business relationships between individual participants. We can mention the European Commission and its working groups the Cloud Select Industry Group (C-SIG) and the European Commission Expert group on Cloud Computing Contracts [1,2]. Another organization is the European Telecommunications Standards Institute (ETSI) and its Cloud Standard Coordination group [3].
In the second group there are organizations dealing mainly with the technological aspects of CC. We will focus on the three best-known standardization organizations, NIST (National Institute of Standards and Technology), ITU-T (International Telecommunication Union–Telecommunication Standardization Sector) and ISO (International Organization for Standardization).
In NIST SP-500-291 [4], CC is defined as a model that enables ubiquitous, practical, network-accessible and on-demand computing resources (such as networks, servers, storage, applications and services) with minimal effort and without interaction with the provider of these resources. This model is composed of five basic features, three service models and four deployment models.
The ITU-T in the Y.3500 Recommendation defines CC as a paradigm that allows access to a set of shared physical or virtual resources via a network. These resources are scalable and managed by users. This paradigm is composed of key features, user roles, deployment models and cross-sectional CC aspects. A text identical to Y.3500 was published by ISO as standard ISO/IEC 17788:2014 [5].
Nowadays, CC environments are increasingly being used, mainly in corporate environments. According to [6,7,8,9,10,11,12], more than 80 percent of enterprises and small businesses use some type of CC, mainly Infrastructure as a Service (IaaS) and container technologies. The reasons differ, but one of them could be that CC significantly saves resources or opens up new opportunities in providing IT services. With the wider availability and knowledge of CC, users and providers are aware of CC benefits. On the one hand, CC saves resources for providers, because the optimized use of physical devices can meet the requirements of a far greater number of users than physically dedicated devices. On the other hand, CC saves resources for users, who are not forced to buy, operate and manage hardware or software. Providers mostly use the so-called “pay-as-you-go” model, where users pay only for the computing resources that they actually consume. Such a solution is particularly beneficial for larger organizations: they are not forced to manage their own data center and are able to dynamically increase and reduce leased computing power on demand. This can ultimately save a lot of money when properly managed. There is, however, the risk that a customer will use a vendor-specific feature and will then be unable to migrate the infrastructure or applications to another vendor. This problem is known in the literature as “vendor lock-in”.
Currently, there are several providers of the public IaaS service, which is used quite often. Administrators of IaaS services often use automation scripts to deploy large and complex environments. Creating complex scripts takes a lot of time and requires a fairly deep knowledge of the environment in which it is deployed. The problem arises if the given environment is to be moved to another IaaS provider. This is the case whether it is for financial, technical, administrative or any other reason. Even if the “vendor lock-in” problem mentioned above does not occur, it is very time-consuming to rewrite complex scripts from one specific language to another. The administrator must first learn the syntax of the new environment and then gradually rewrite the script. When moving again to another environment, the process is repeated. This process will be greatly simplified thanks to the general model we have proposed. Although the administrator will have to learn the syntax of our general model, one rewrite will be enough, after which the scripts will be automatically rewritten for other environments. If the administrator decides to write the script directly in the general model, he will not have to learn any specific syntax and will not waste valuable time manually rewriting scripts.
This article is divided into four main parts. After the Related Works section, there is a comparison of the three CC platforms that we used for experiments. Then, a generic model that solves the portability problem is proposed. Based on the model, we created transformation rules that are used for transformations between specific CC platforms and the generic model. The last part deals with the implementation and verification of the model and the transformation rules.

2. Related Works


2.1. Cloud Standardization

Cloud Computing is a relatively new IT industry trend and has not yet been fully standardized or even unified between providers. Currently, several standardization groups are trying to unify the use of CC environments from the perspective of users and providers.
The first group is mainly composed of organizations dealing with the technological aspects of CC. The best-known representatives are standardization organizations NIST (National Institute of Standards and Technology) and ITU-T (International Telecommunication Union).
The ITU-T in the Y.3500 Recommendation [13] defines CC as a paradigm that allows access to a set of shared, scalable and user-managed physical or virtual resources via networks. This paradigm mentions the key features, deployment models, user roles and cross-sectional aspects of CC services.
In the NIST SP-500-291 [4] standard, CC is defined as a computing model that enables ubiquitous, practical, network-accessible and on-demand computing resources with minimal effort and without interaction with the provider of these resources. This model is composed of five basic features, three service models and four deployment models.
The second group consists of organizations that deal with business relationships between individual participants. Those worth mentioning are European Commission and its working groups Cloud Select Industry Group (C-SIG) and European Commission Expert group on Cloud Computing Contracts [1,2]. Another organization is the European Telecommunications Standards Institute (ETSI) and its Cloud Standard Coordination group [3].

2.2. Service Models of Cloud Computing

According to NIST SP-500-291 [4], there are three basic service models of CC: SaaS, PaaS and IaaS. SaaS (Software as a Service) is an application provided to a user that is accessible from different client devices. A user of the application does not manage the CC infrastructure. In PaaS (Platform as a Service), a customer can run an application on the CC infrastructure. The customer is able to use programming languages, libraries, services and tools provided by the provider. IaaS (Infrastructure as a Service) is defined as the ability of the user to create basic computing resources (computing power, storage, networking, etc.). The user does not manage the infrastructure on which the computing resources are running, but has full control over these resources. In this paper, we deal with the IaaS service model only [14].

2.3. Interoperability and Portability of CC Systems

ITU-T in its recommendation Y.3502 [15] defined several cross-cutting aspects of Cloud Computing, two of which are interoperability and portability. Interoperability is defined as the ability of a cloud service customer to interact with a cloud service, exchange information according to a prescribed method and obtain predictable results. According to [16], general interoperability is the ability of different systems and organizations to cooperate. In [17], CC interoperability is the ability of public CC environments, private CC environments and other systems to share system and application interfaces, configuration, authentication, authorization, data formats and so on for the purpose of mutual cooperation. Interoperability is also the topic of many articles; in [18], the authors refer to the need to standardize CC in all respects so that the public can use it widely.
Portability is defined in ITU-T Y.3502 as the ability of CC customers to move their applications and data between multiple cloud service providers at low cost and with minimal disruption. The amount of cost and disruption that is acceptable can vary based upon the type of cloud service that is being used. While interoperability is defined as communication between different systems, portability is the ability to use systems or their components on different hardware or software platforms [16,19,20]. Portability can also be defined as the ability to transfer an individual entity or the entire environment without the need to modify it [21]. In [22,23], the portability of CC is divided into data, application and platform portability.
There are more papers and conference proceedings dealing with cloud migration and portability, such as [24,25,26,27,28,29,30]. There are also papers dealing with cloud and computer interoperability in other scientific fields such as medicine [31,32], smart cities and IoT [33,34,35], transportation [36] or agriculture [37]. However, they usually deal with virtual machine (VM) migration; none of them addresses the migration of a whole environment description between different cloud environments.
During our long-term research on the portability and interoperability of the IaaS model, we did not come across any articles addressing or examining this issue. Most research focused on the portability and interoperability of the SaaS model, or, to a lesser extent, the portability of the PaaS model [29,30,38]. The IaaS service model contains more technical and hardware details compared to the SaaS and PaaS models, making its description more complex and difficult to understand. This is also related to the portability issue, as the administrator needs to understand the various nuances of both the source and target environments in order to transform the description in the form of code without errors.

3. Cloud Computing Platform Comparison

There are many Cloud Computing platforms available on the market. According to [6,7,8,9,10,11,12], we have chosen the three most used platforms—namely, Amazon AWS, Microsoft Azure and OpenStack—to test our proposed script-transformation architecture on. AWS and Azure are public CC platforms, while OpenStack is a private one. For evaluation purposes, MS Azure may be used in private data centers, under restricted conditions. This setup is not recommended for production environments.
There are several pros and cons of private and public Cloud Computing platforms. When a public cloud platform is used, the customer does not need any trained staff to install, maintain, service or back up the platform. All these tasks are performed by the provider, who offers the platform as a service to its customers. On the other hand, all of the customers’ data are in a data center owned by the provider, which may not be suitable for every organization, especially if the company policy does not allow critical data to leave the organization and the data are considered sensitive. Another advantage of public cloud service usage is that customers only pay for consumed resources, for example, CPU hours, RAM usage or the amount of data stored on virtual storage drives. This leads to a further advantage related to hardware ownership: a customer does not need to own any hardware used for computing tasks, hardware which usually becomes obsolete in a few years. That is why customers can perform their tasks on current hardware without large investments in new equipment. Public cloud customers can also buy necessary resources on demand. For example, if the load on a service in a public cloud is high, the resources assigned to the service can be automatically increased.
Private clouds, on the other hand, are located in a local data center owned or rented by the organization. That means that the organization needs its own administrators for the CC environment, hardware for running its services and a data center or another place where the hardware is installed, including redundant energy sources, connectivity, cooling and so on. A customer (or owner) pays not only for the utilization of the hardware, but for the whole environment, spare parts and energy. This model is usually not economical when the CC platform is utilized at less than its maximum, because staff and energy must still be paid for while the hardware is running all the time. In a public cloud, a customer pays only for resources that are really consumed. Table 1 presents a brief comparison of the three CC platforms we have chosen for our testing purposes.
As we can see in Table 1, there are only minor differences between the presented platforms. All platforms allow customers to use their own images as a base for virtual machines, orchestrate whole environments by scripts or use the provided cloud environment only as data storage without the need to use full-featured virtual machines. The main difference is that with OpenStack, data are located in a private data center, while public clouds use the providers’ own data centers located all around the world.

Problem Specification

From the analysis of the problem, we can state that there currently does not exist a unified transformation method for migrating the description of the IaaS CC environment among providers of different CC environments. Therefore, it is not possible to provide IaaS service portability easily. To the best of our knowledge and according to the analysis of available sources, no solution is available that would allow automated or semi-automated transformation of IaaS service descriptions of the CC environment from one specific provider into the language of another provider and thereby provide the portability of the CC systems or environments.

4. Entities of Existing IaaS Platforms

In this section, we present an overview of six basic entities that are used in almost every virtual environment: parameters, virtual machine, SSH key, security group, network and subnet. We chose the three most used CC IaaS platforms to demonstrate the definition and usage of the basic entities: Amazon AWS, OpenStack and Microsoft Azure. In the sections below, we refer to the cloud platform description script simply as the “script”. Scripts are text files that define the entities of the desired topology. These files contain descriptions of each entity, their physical resources and their interconnections with each other.
The virtual topology of any CC IaaS service can consist of dozens of interconnected logical IaaS entities. The intention of this paper is to introduce a systematic design approach to IaaS service portability, practically demonstrated on a basic scope only. It is beyond the scope of this paper to create a fully functional model for all IaaS entities, but the approach and the design of the model are fully scalable for future expansion. For demonstration and testing purposes, we propose a simple topology of the IaaS service. The service consists of one virtual machine (VM) placed in a private network behind the logical router of the CC environment. Such a simple topology is usually the basis of, or a part of, more complex topologies of virtual environments of IaaS CC services. The diagram of this topology is shown in Figure 1. The basic logical entities that can be used to create the given topology are analyzed below.

4.1. Amazon Web Services

According to [8,9,10,11,12], AWS is the most widely used public CC IaaS solution in corporate environments. The following is a summary of the selected IaaS service entities as defined and used by AWS.

4.1.1. Parameters of the Environment

AWS offers a “parameters” section in scripts, whose items act as variables for the entire script. Each environment parameter entity has its own parameters; among them, the only mandatory parameter is Type, which describes the type of value it can contain. In AWS, this can be a string, a number or a comma-separated list of strings. Some optional parameters can be used to limit the range of usable values in the parameter entity. For a string, this may be the minimum or maximum length or the allowed characters defined by a regular expression. With numeric values, it is possible to limit the range of values.
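As an illustration, the following sketch shows how such a parameters section might look once the orchestration script is parsed into Python data structures in a transformation tool; the attribute names follow the AWS conventions described above, while the concrete parameter names and limits are invented for this example.

# A hypothetical "parameters" section of an AWS orchestration script,
# represented as the Python dictionary it becomes after YAML parsing.
aws_parameters = {
    "InstanceTypeParam": {
        "Type": "String",               # mandatory: type of the contained value
        "Default": "t2.micro",          # optional default value
        "AllowedPattern": "[a-z0-9.]+", # regular expression limiting characters
    },
    "VmCountParam": {
        "Type": "Number",               # numeric parameter
        "MinValue": 1,                  # optional limits on the value range
        "MaxValue": 10,
    },
}
# Simple check of the only mandatory attribute, "Type".
for name, definition in aws_parameters.items():
    assert "Type" in definition, f"parameter {name} is missing the mandatory Type"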

4.1.2. Virtual Machine

The virtual machine (VM) is one of the base entities within topologies. In scripts, it is referred to as AWS::EC2::Instance. A VM has one mandatory parameter, ImageId, which is a reference to the image of the operating system from which the VM will be cloned. All other parameters are optional. The InstanceType parameter represents a template according to which the VM is assigned computing resources: the number of CPU cores, the amount of RAM or the hard disk size. Another parameter is the security groups, which work as a packet filter. The AvailabilityZone parameter denotes the Amazon data center where the VM is created. Nowadays, Amazon has data centers in 21 geographical locations all over the world. The parameter SubnetId is the subnet to which the VM will be connected. The KeyName parameter is the SSH key that is used for remote access to the VM. One of the very important parameters is UserData. Using this parameter, it is possible to run a shell script during the first boot of a VM. A custom shell script can be written in any language that is supported by the guest operating system. Based on these scripts, it is possible to modify the VM as required. The last parameter is Tags, which represents simple attributes that help administrators to easily identify virtual instances.
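A minimal sketch of such a VM entity, again shown as the Python dictionary that results from parsing the script, might look as follows; the image identifier, key name and referenced resources are placeholders.

# A sketch of an AWS::EC2::Instance entity (all values are placeholders).
aws_vm = {
    "web_server": {
        "Type": "AWS::EC2::Instance",
        "Properties": {
            "ImageId": "ami-0123456789abcdef0",           # mandatory: OS image to clone
            "InstanceType": "t2.micro",                   # template for CPU/RAM/disk
            "SubnetId": {"Ref": "test_subnet"},           # subnet the VM connects to
            "SecurityGroupIds": [{"Ref": "web_sg"}],      # packet-filter groups
            "KeyName": "admin-key",                       # SSH key imported into the VM
            "UserData": "#!/bin/bash\napt-get update\n",  # first-boot script
            "Tags": [{"Key": "Name", "Value": "web-server"}],
        },
    }
}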

4.1.3. SSH Key

AWS does not allow one to create an SSH key using a script. The only way to create a key is through a web interface called “AWS Management Console”. In the script, you can simply refer to the name of an existing key that will be imported into the created VM.

4.1.4. Security Groups

The security group inside the script is defined as AWS::EC2::SecurityGroup. Each security group has one mandatory parameter, a description of the group. The group name is a unique identifier within a single project, but it is not a mandatory parameter. There are two additional optional parameters, a reference to groups of input and output rules. The last parameter is Tags, which serves only as a label of the group.
The input and output parameter groups have the same syntax. As the name suggests, they differ only in the direction of deployed packet filtering. The direction is determined from the VM viewpoint; that is, the input rules will be applied to the packets that are destined for the VM and output to those generated by the VM. These groups can be shared across multiple security groups, since they are separate logical entities called AWS::EC2::SecurityGroupIngress and AWS::EC2::SecurityGroupEgress. There are four mandatory parameters in these groups. The first is IpProtocol, which is the transport protocol carried in the packet. The next two mandatory parameters are related. They are FromPort and ToPort. These two parameters determine the port intervals that will be compared in that rule. The last parameter is CidrIp, which specifies the range of IP addresses that will be compared in that rule.
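To make the structure concrete, the following sketch shows a security group together with a separate ingress-rule entity, using the parameter names listed above; the rule itself (HTTP allowed from anywhere) is only an example.

# A sketch of an AWS security group with a separate ingress-rule entity.
aws_security_group = {
    "web_sg": {
        "Type": "AWS::EC2::SecurityGroup",
        "Properties": {
            "GroupDescription": "Allow HTTP to the web server",  # mandatory description
            "Tags": [{"Key": "Name", "Value": "web-sg"}],
        },
    },
    "web_sg_http_in": {
        "Type": "AWS::EC2::SecurityGroupIngress",  # separate logical entity for ingress rules
        "Properties": {
            "GroupId": {"Ref": "web_sg"},          # reference to the security group
            "IpProtocol": "tcp",                   # transport protocol in the packet
            "FromPort": 80,                        # start of the port interval
            "ToPort": 80,                          # end of the port interval
            "CidrIp": "0.0.0.0/0",                 # IP range compared in the rule
        },
    },
}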

4.1.5. Network

In AWS, the network has a single mandatory parameter, CidrBlock, which contains a CIDR record. All subnets from this network must have addresses of the given CIDR record. It also contains optional attributes such as tags. The network is indicated in scripts as AWS::EC2::VPC, where VPC stands for “Virtual Private Cloud”.

4.1.6. Subnet

Every subnet entity belongs to some network entity. The IP prefix of the subnet must be from the range defined in the network entity. A mandatory attribute is a reference to the network entity. With this reference, the subnet is uniquely assigned to one network, resulting in the ability to create subnets with the same prefixes for different users. Some values that can be configured in other CC systems are set by AWS. For example, DNS servers and default gateways are pre-defined; it is only possible to use addresses provided by AWS.

4.2. OpenStack

According to [8,9,10,11,12], OpenStack is the most widely used open-source CC IaaS solution deployed in a corporate environment. The following is a list of identified IaaS entities along with descriptions of mandatory and optional parameters used in this system.

4.2.1. Parameters of the Environment

Similarly to AWS, OpenStack also allows the use of parameters in descriptive scripts. The type parameter is mandatory and specifies the type of the parameter value, which can be a string, a number, a list of strings or a Boolean value. Optional parameters are a description of the parameter entity and constraints containing a list of limitations for the custom parameter value: the minimum and maximum string length, a value range for numeric parameters or a regular expression limiting the characters in a text string. There can also be a list of values from which the administrator can choose when applying the script.

4.2.2. Virtual Machine

A VM in OpenStack is referred to as OS::Nova::Server. The Nova module is generally responsible for creating, operating and modifying virtual instances of virtual computers. In OpenStack, a VM has one mandatory parameter: the flavor, a template from which the VM inherits its computing resources (CPU, RAM, HDD). Among the optional parameters, there is the VM name, which is unique within the project, and the name of the operating system image from which the new VM will be cloned. Another optional parameter is the security group, which is a simple packet filter at the CC system level. The parameter network is used to connect a VM to the network. Using this parameter, it is possible to connect a VM to a single subnet or to all subnets within a single network. The next optional parameter is the SSH key reference that is used for remote access and VM management. The user_data parameter allows one to run a supplied user script in the newly created VM. The script is executed only during the first boot of the VM and can modify the VM according to the administrator’s preferences. The script can be written in any language that is executable within the guest operating system. The last optional parameter is tag, which represents the VM tags. These are used only by the administrator and allow VMs to be sorted and filtered in various listings.
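For comparison with the AWS example above, the sketch below shows how the same VM could be described with OpenStack (HOT) entity names; the values are again placeholders.

# A sketch of an OS::Nova::Server entity as parsed from a HOT-style script.
openstack_vm = {
    "web_server": {
        "type": "OS::Nova::Server",
        "properties": {
            "flavor": "m1.small",                  # mandatory: compute-resource template
            "name": "web-server",                  # optional VM name, unique per project
            "image": "ubuntu-22.04",               # OS image to clone
            "networks": [{"network": "test_net"}], # connect to a whole network (all subnets)
            "security_groups": ["web_sg"],         # packet filter at the CC system level
            "key_name": "admin-key",               # SSH key reference
            "user_data": "#!/bin/bash\napt-get update\n",  # first-boot script
        },
    }
}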

4.2.3. SSH Key

In OpenStack, the SSH key can be created through a web browser, via a command line or directly within the script. If the SSH key is created using a browser, the key must exist before running the script. This behavior is the same as it is on the AWS platform. When creating a key within a script, it is necessary to define the name and other optional parameters. Subsequently, within the definition of VM there is a reference to the created key. The entity of the SSH key has the name OS::Nova::KeyPair.

4.2.4. Security Groups

The security group entity in the OpenStack system uses a similar security logic as in the AWS platform. It can be defined either via the web interface, through a command prompt or using a script, where its name is OS::Neutron::SecurityGroup. It has no mandatory parameters, only three optional ones. The first is a description that serves only as a description of the group. The second parameter is its name, which is unique within a single project and serves as the identifier of the given group. The last parameter is rules, which are the security rules. Each rule has five parameters. The first is the direction, which indicates whether the rule will be applied to incoming or outgoing packets. The other four parameters are the same as in AWS. Protocol refers to the transport protocol carried in the body of the packet. Further, the pair of parameters port_range_min and port_range_max indicates the port interval that will be compared within the rule. The last parameter is remote_ip_prefix, which specifies the range of IP addresses compared in the rule.

4.2.5. Network

In the OpenStack system, the Neutron module takes care of the network. The network is represented by the OS::Neutron::Net keyword in the script. In this system, the network has no required parameters; all parameters are optional. For our portability model, we have chosen the most commonly used parameters: name and tags.

4.2.6. Subnet

The subnet has two mandatory parameters and three optional ones. The mandatory parameters are a reference to the network entity to which the subnet belongs (each subnet can be uniquely identified by the network it belongs to) and an IP prefix. Optional parameters include a purely informative name, the DNS server address and the gateway address of the subnet. Another parameter is the IP protocol version, with which the administrator can set whether the subnet uses IPv4 or IPv6; each subnet can use only one version at a time. The last optional parameter is tag, which is information for administrators and allows subnets to be grouped into categories regardless of their functionality or their parent network.

4.3. Microsoft Azure

According to [8,9,10,11,12], Microsoft Azure is the second most widely used public CC system in the corporate environment in 2018. The following section identifies relevant IaaS entities of this cloud platform, along with their mandatory and optional parameters.

4.3.1. Parameters of the Environment

As in AWS and OpenStack, an administrator can define environmental parameters that can later be referenced in scripts. In the Azure system, every parameter entity has a single mandatory parameter, the type (text string, number or logical value). Other parameters are optional, such as a description, the default value used when the administrator does not specify a custom value, the string length, the range of values allowed for numeric parameters or a list of values from which the administrator can select when applying a script.

4.3.2. Virtual Machine

One of the basic entities is the virtual machine (VM). The virtual machine is located under “Microsoft.Compute” and the entire VM definition name is “Microsoft.Compute/virtualMachines”. As in AWS and OpenStack, both mandatory and optional parameters are available in the Azure system. The only mandatory parameter is the name of the VM. All other parameters are optional. Unlike in Amazon AWS and OpenStack, the parameters are more structured. The image identifier from which the VM will be cloned has the name “storageProfile/imageReference/id”, where the slashes separate the individual levels. The template for the VM, which sets the number of CPU cores, RAM size and HDD, has the name “vmSize” and is located under the “hardwareProfile” section. To create a VM in a specific zone, the “zone” directive needs to be defined. Under “networkProfile/networkInterfaces/id” it is possible to define VM connections to virtual networks. Azure allows one to run a script at the first boot of the VM; an administrator can define it in the “osProfile/customData” section. For remote sign-in, the public key value can be set on the VM, specifically in “osProfile/linuxConfiguration/ssh/publicKeys/keyData”. The last parameter is “tags”, which represents tags that the administrator can assign to a specific VM.
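The nesting described above can be illustrated by the following sketch of a single Azure VM entity; the resource and property names follow the sections quoted in this subsection, while the concrete values are placeholders.

# A sketch of the nested structure of an Azure virtual machine entity.
azure_vm = {
    "type": "Microsoft.Compute/virtualMachines",
    "name": "web-server",                               # the only mandatory parameter
    "properties": {
        "hardwareProfile": {"vmSize": "Standard_B1s"},  # CPU/RAM/disk template
        "storageProfile": {"imageReference": {"id": "<image-id>"}},
        "networkProfile": {"networkInterfaces": [{"id": "<nic-id>"}]},
        "osProfile": {
            "customData": "<base64-encoded first-boot script>",
            "linuxConfiguration": {
                "ssh": {"publicKeys": [{"keyData": "ssh-rsa AAAA..."}]},
            },
        },
    },
    "tags": {"role": "web"},
}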

4.3.3. SSH Key

In Azure, similar to Amazon AWS, it is not possible to create an SSH key using an automation script. According to the documentation, the only option is to create a key in the web interface and then put the value of the public key in the script. Azure does not even support the ability to import an existing SSH key into a VM.

4.3.4. Security Groups

The security group has the same logical structure as in AWS. Here it is defined as “Microsoft.Network/networkSecurityGroups” and has one required parameter, “name”, which indicates the name of the group. There are two optional parameters, one of which is “securityRules”, which contains a list of security rule entities. The second optional parameter is “tags”, which allows administrators to assign a tag to a particular group.
The security rule entity is defined as “Microsoft.Network/networkSecurityGroups/securityRules”. It has three mandatory and three optional parameters. The first mandatory parameter is “name”, which sets the rule name; then, “direction” defines the packet flow direction. The last mandatory parameter is “protocol”, which defines the transport protocol carried in the IP packet. From the optional parameters, the administrator can define “description”, which is a description of the defined rule, and the pair “sourceAddressPrefix” and “sourcePortPrefix”, which define the source IP address and source port. As in the AWS and OpenStack systems, Azure does not allow one to disable rules that are in operation.

4.3.5. Network

Azure has a different subnet allocation philosophy than AWS and OpenStack. In AWS and OpenStack, the administrator assigns a subnet to the network; in the Azure system, it is the other way around. The network entity contains the definition of the subnets that belong to it. The name of the network definition in Azure is “Microsoft.Network/virtualNetworks”. The network has one mandatory parameter, the network name called “name”. In addition to the name, there are two optional parameters. The first is “subnets”, which defines the list of subnets belonging to the network. The second is the tag that the administrator can assign to the network and is called “tags”.

4.3.6. Subnet

The subnet in Azure is located in the same section, “virtualNetworks”, and is defined as “Microsoft.Network/virtualNetworks/subnets”. It has three parameters: the name parameter is simply called “name”, the optional parameter “addressPrefix” defines the CIDR address range and the “tags” parameter allows administrators to add tags to a particular subnet.

4.4. Example of Code Differences

As an example, we present a definition of a network and its subnet in Amazon AWS and Microsoft Azure orchestration script syntax. Both examples are shown in YAML format. There are not only syntactical differences, but also logical ones. In AWS, you have to define in the subnet section which network it belongs to; in Azure, it is the other way around: in the network section, the administrator defines which subnets the network has. All the documentation of the orchestration syntax used in AWS, Azure and OpenStack can be found in [39,40,41].
============ Amazon AWS
test_net:
  Type: AWS::EC2::VPC
  Properties:
    CidrBlock: 192.168.0.0/16
test_subnet:
  Type: AWS::EC2::Subnet
  Properties:
    CidrBlock: 192.168.1.0/24
    VpcId: { Ref: test_net }
============ Microsoft Azure
test_net:
  type: Microsoft.Network/virtualNetworks
  name: test_network
  subnets:
    - test_subnet
test_subnet:
  type: Microsoft.Network/virtualNetworks/subnets
  name: test_subnet
  properties:
    addressPrefix: 192.168.1.0/24

5. Proposal of the Generic IaaS Model

As we showed above, all three CC systems use building blocks that have parameters. By using suitable transformations, we can transform an environment description from one CC platform to another. However, it would be very difficult to create direct transformations between every pair of platforms, because mapping every platform to every other platform results in a full mesh of transformation rules, and adding another platform further increases their number, which grows quadratically with the number of platforms. Therefore, it is appropriate to create a “meta model” that acts as an interface between the various CC platforms. Based on our research, we used Model-Driven Architecture as the software-development approach to propose this “meta model”.
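The benefit can be quantified with a short calculation; the sketch below only counts directed script transformations and is a rough illustration.

# Number of directed transformations needed for n platforms:
#  - full mesh: every platform maps directly to every other platform
#  - meta model: every platform maps only to and from the generic model
def full_mesh(n):
    return n * (n - 1)

def via_meta_model(n):
    return 2 * n

for n in (3, 4, 6, 10):
    print(f"{n} platforms: full mesh {full_mesh(n):3d}, via meta model {via_meta_model(n):3d}")
# 3 platforms: 6 vs 6; 4 platforms: 12 vs 8; 6 platforms: 30 vs 12; 10 platforms: 90 vs 20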

5.1. Overview of Model-Driven Architecture Principles

Model-Driven Architecture (MDA) is one of the software-development approaches that takes into account the Model-Driven Development (MDD) facts and actually puts them into practice. MDA is a specification released by OMG (Object Management Group [42]). This specification provides guidance on how to design structured specifications—models. The MDA specifies four layers, as shown in Figure 2.
As shown in several works [38,43,44,45,46,47], MDA is used for the proposal and transformation of information systems. In addition to the layers mentioned, MDA defines transformations between them. Transformations can be bi-directional, which means that a designer can make transformations from the first to the second layer and vice versa. As shown below, we have proposed an approach that uses MDA as a transformation framework applicable to the transformation between specific CC platforms and the generic model. We propose the generic model as a common layer in the architecture of transformations of specific CC platforms’ scripts between each other. Specific CC platforms are information systems, and we see parallels between the MDA layers (specifically PIM and PSM) and the transformations of description scripts between generic and specific CC platforms.
Although the main idea of MDA architecture is the creation of models on higher levels of abstraction, transformations among these models remain relevant as artefacts of the CC script-transformation architecture proposed in this article. MDA conceptually introduces the process of transformation of models of higher levels of abstraction to models of lower levels of abstraction. Transformation rules describe how elements of a source model should be transformed to elements of an object model. This paradigm is exactly what we use to transform specific CC platform scripts to proposed generic models and vice versa.
The Computation-Independent Model (CIM) is not included in our proposal. Its transformation into the Platform-Independent Model (PIM) is not automated and currently must be done manually by software analysts.
The PIM layer completely specifies the functionality of the software system that is transformed into a specific platform for implementation. It has a certain degree of abstraction from a particular solution in order to be easily used by any development platform. PIM contains information that is important for the solution and development of the software system. This can be algorithms, different kinds of rules or restrictions and so on.
The Platform-Specific Model (PSM) is dependent on the particular platform on which the system will be developed. According to this model, we can create specific structures in the resulting implementation platform (C++, Java, .NET). This model adequately reflects the code structure and is a sufficient basis for implementation. The PSM also contains certain objects that are used in the resulting programming process, for example classes, constructors, access to objects and so on.
The last layer is an implementation model that represents the “code”. In this layer, the software is programmed using the programming language of choice. The result of this layer is compilable code that is ready for deployment and usage by users.
MDA assumes transformation between these layers, with development progressing from the most generic model to a more specific platform where the proposed application will be implemented. MDA principles can also be used reversibly and the user is not required to use all layers within the transformation process.

5.2. Generic IaaS Model Proposal

The purpose of this paper is to introduce the proposal of our generic CC IaaS model as a contribution to the CC-portability issue that could establish an architectural base for future portability approaches. The model can be understood as an independent description of an IaaS environment. Our approach and the proposed design were inspired by the MDA model approach used in software development. To specify the model entities, we used a synthetic approach. The process of model entity selection was influenced by the existing implementations of the CC IaaS environment. The proposed model is applicable to any CC provider/user facing the problem of service portability.
Figure 3 shows the concept of the proposed model within the MDA paradigm. Each CC provider offers an opportunity for users to create a description of environments in a language specific to their environment. We describe this description in the PSM layer. We classify our model into the PIM layer because it is not composed of specific components of any CC implementation, but retains all of the attributes and properties of the environment from which we could create a functional platform-dependent description.
Based on an analysis of existing IaaS CC solutions, we present a proposal of our own generic model. In MDA terminology, this model corresponds to the PIM layer. The proposed and presented generic model includes IaaS entities, their parameters and relationships in a manner that is independent of the particular IaaS service implementation (AWS, Azure, etc.). The proposed generic model is flexible and easily expandable. Our proof-of-concept implementation [48] implements the proposed generic model, platform-dependent models and transformation rules. This tool can transform the IaaS descriptive service orchestration scripts (entities, parameters and their relationships) between different CC platforms. This transformation is performed between the general and dependent models and vice versa (no direct transformation between various dependent models is possible).
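The overall transformation flow can be summarized by the following sketch. The class and method names are illustrative only and do not necessarily match the proof-of-concept implementation in [48]; the sketch assumes the PyYAML library is available for reading and writing the orchestration scripts.

import yaml  # PyYAML, used only to read and write the orchestration scripts

class PlatformModule:
    """Illustrative interface that each platform-specific module provides."""
    def to_generic(self, entities: dict) -> dict:    # PSM -> PIM
        raise NotImplementedError
    def from_generic(self, generic: dict) -> dict:   # PIM -> PSM
        raise NotImplementedError

def transform(source_script: str, source: PlatformModule, target: PlatformModule) -> str:
    """Transform a platform-specific script into another platform's syntax by
    passing through the generic (platform-independent) model; there is no
    direct source-to-target mapping."""
    entities = yaml.safe_load(source_script)    # parse the source description
    generic = source.to_generic(entities)       # source PSM -> generic PIM
    converted = target.from_generic(generic)    # generic PIM -> target PSM
    return yaml.safe_dump(converted, sort_keys=False)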

5.3. Proposed Logical Entities

In the following sections, we present our own design of the IaaS logical entities for the general IaaS service model. The design was based on existing systems in order to keep the transformation rules simple.

5.3.1. Parameters of Environment

The proposed generic model also includes support for script parameters. Similarly to AWS and OpenStack, logical parameter entities in this model may have their own parameters. One of them is the type, which sets the type of the parameter’s own value. There are also a description of the parameter entity and a default value that is used if the administrator does not define a new value when applying the script. As in the previous systems, there are also limiting factors for the parameter’s own value: the string length, a range for numeric values, a regular expression or a list of allowed values.

5.3.2. Virtual Machine

There is virtually no difference between the AWS and OpenStack CC systems in the VM section. OpenStack additionally has the VM name parameter, which can be converted to AWS simply by using the Tags parameter. Another difference is that in the AWS system it is not possible to assign a VM to an entire network and thus to all of its subnets; a VM can only be assigned to specific subnets. If an OpenStack VM is connected to an entire network, we can interpret this connection as a connection to all the subnets of the network and then simply transform it into AWS.
The diagram below shows the VM dependency diagram in the proposed generic model. The diagram is essentially identical to the OpenStack one; however, unlike in the OpenStack system, there are no mandatory parameters, so there is no mandatory logical entity of the template for the computational resources assigned to the VM. As mentioned above, the differences between the AWS and OpenStack CC systems are relatively easy to resolve. For this reason, we could base the generic description on the system that offers more configuration options.

5.3.3. SSH Key

Within the generic model, we were inspired by the OpenStack system. We can say that these two entities are identical, as we believe that this functionality is fully sufficient for any IaaS environment. The diagram below shows the SSH key dependencies in the proposed generic model. Just as in OpenStack, the only mandatory parameter is the key name.

5.3.4. Security Groups

In the security groups section of the generic model, we were inspired by AWS. We think that the separation of security rules into separate logical entities allows greater variability of configurations than the direct integration of rules into the security group. For this reason, we propose to keep the security rules as separate logical entities. Like in the AWS and OpenStack systems, the security group has the parameters name, description, tags and custom security rules. The rules are divided into two entities, separately for incoming and for outgoing packets. Both rule groups have the same syntax; they include a name, the type of the transport protocol carried in a packet, a transport port interval delimited by two port values and an IP prefix in CIDR format. The security group has no external dependencies and may exist as a totally separate entity.

5.3.5. Network

In the generic model, we tried to combine attributes from all systems to cover the most commonly used parameters in each system.

5.3.6. Subnet

In the generic model, we were once again trying to design entities so that they could be transformed from/to any CC system. Based on the above, we can say that AWS and MS Azure use just a subset of the OpenStack system parameters. For this reason, we chose parameters that are essentially identical to OpenStack.

6. Transformation Rules

In this section, we show the mapping of parameters between our proposed generic model and three CC platforms. There are mappings of all six entities in the IaaS service: environmental parameters, virtual machine, SSH key, security groups, network and subnet.

6.1. Parameters of Environment

Table 2 shows the transformations for the environmental parameters. We can see that the AWS system can be transformed directly into the generic model. The OpenStack system is a little more complicated because restrictions such as the string length or numeric value range are encapsulated in the constraints section. Like AWS, Azure is directly transformable into the generic model. The exception is the parameter description, which is one level below, in the metadata section. The allowed_pattern parameter, which restricts usable values with a regular expression, is not implemented in Azure.
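The extra step needed for OpenStack can be sketched as follows; the flat generic-model keys (min_length, max_length, allowed_pattern) are hypothetical names used only for this illustration, since the actual mapping is given in Table 2.

# Illustrative mapping of parameter restrictions from a flat generic-model
# form into OpenStack's nested "constraints" section.
def generic_to_openstack_parameter(param):
    constraints = []
    if "min_length" in param or "max_length" in param:
        constraints.append({"length": {"min": param.get("min_length"),
                                       "max": param.get("max_length")}})
    if "allowed_pattern" in param:
        constraints.append({"allowed_pattern": param["allowed_pattern"]})
    result = {"type": param["type"]}
    if constraints:
        result["constraints"] = constraints   # restrictions are encapsulated here
    return result

print(generic_to_openstack_parameter({"type": "string", "min_length": 1, "max_length": 16}))
# {'type': 'string', 'constraints': [{'length': {'min': 1, 'max': 16}}]}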

6.2. Virtual Machine

Table 3 shows the virtual machine transformation rules between the OpenStack, AWS and Azure CC systems and the generic model. The AWS CC system does not include the VM name, but, as we have noted above, the name can be set in the Tags parameter.
Another difference can be seen in the sixth parameter, where the generic model and OpenStack have the network, AWS has a subnet and Azure can directly define an interface that can then be connected to a specific network or subnet. If we connect a VM to an entire network in OpenStack, it means connecting it to all of the network’s subnets. It is also possible to connect the VM to only one particular subnet.

6.3. SSH Key

In Table 4, we can see that the AWS and Azure systems have no transformation rules at all. This is because the key cannot be created within a script, but only through a web interface or by directly importing the key values. Since the generic model was inspired by the OpenStack system, there is no need to make any transformations between these two models; because the keywords are the same, the values are simply copied.

6.4. Security Groups

Based on the analysis, we found differences between AWS, Azure and OpenStack. The OpenStack system has the rules integrated into one logical entity, while AWS and Azure divide these rules into separate logical entities. In Table 5, we can see that the rules do not have a common intersection within the generic model. The OpenStack system uses a single rules parameter, while the generic model, AWS and Azure use separate input and output rule parameters. In addition, AWS splits these entities into ingress and egress rules, while Azure uses one common logical entity for both directions. As mentioned above, in the generic model we proposed two independent parameters, similar to AWS. For this reason, the transformation of parameters between the AWS system and the generic model is direct. In the transformation between OpenStack and the generic model, it is necessary to create new logical entities. Mapping to the Azure system is relatively straightforward; it is only necessary to divide the rules into the input and output entities according to the direction parameter found in each rule entity.
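The re-grouping of OpenStack rules into the separate incoming and outgoing entities of the generic model can be sketched as follows; the OpenStack field names follow Section 4, while the generic-model field names are hypothetical and serve only to illustrate the split.

# Illustrative splitting of OpenStack security rules (one list with a
# "direction" field) into separate ingress and egress rule entities.
def split_rules(openstack_rules):
    ingress, egress = [], []
    for rule in openstack_rules:
        target = ingress if rule.get("direction", "ingress") == "ingress" else egress
        target.append({
            "protocol": rule.get("protocol"),        # transport protocol
            "port_min": rule.get("port_range_min"),  # start of the port interval
            "port_max": rule.get("port_range_max"),  # end of the port interval
            "prefix": rule.get("remote_ip_prefix"),  # matched IP range
        })
    return ingress, egress

rules = [
    {"direction": "ingress", "protocol": "tcp",
     "port_range_min": 22, "port_range_max": 22, "remote_ip_prefix": "0.0.0.0/0"},
    {"direction": "egress", "protocol": "udp",
     "port_range_min": 53, "port_range_max": 53, "remote_ip_prefix": "0.0.0.0/0"},
]
ingress_rules, egress_rules = split_rules(rules)  # one rule ends up in each list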
Table 6 is a composite table. For AWS, it shows the transformations of the logical entities of rules for incoming and outgoing packets, while for OpenStack it shows the rules integrated into the body of the security group. As we can see in this table, the transformations of these parameters are relatively straightforward. One difference is in the group name parameter, which the OpenStack system does not need because the rule is located directly in the body of the security group. On the other hand, the OpenStack system has a direction parameter determining in which direction packets will be compared against the rule. This is not required in the general model or in the AWS system, because these rule groups are deployed directly in the desired flow direction.

6.5. Network

Table 7 displays the transformations of the individual network attributes into the general model. If a system does not use one of the parameters, the parameter is marked with a dash.

6.6. Subnet

The subnet is much less configurable in AWS and Azure than in OpenStack. However, this is partly due to the nature of the AWS and Azure systems, where the provider has predetermined some values that the user cannot change; for example, in both systems the default gateway always has the first possible address from the subnet. It is also not possible to choose the IP protocol version, because currently both AWS and Azure offer IPv4 connectivity only. The transformation rules can be found in Table 8.

7. Implementation and Verification

Several implementation options were considered during the implementation process. From several programming languages, we chose Python, specifically Python3, because this language is sufficiently high-level, interpreted and therefore platform-independent.
We tried to follow some conventions that are used when writing source code in order to make the code clearer. Class names start with a capital letter and other names with a lowercase initial letter. Attributes begin with the letter “a” and parameters begin with “pa”. Variable names have a lowercase initial letter and use the so-called “camel notation”; that is, if the name of a variable is multi-word, each subsequent word starts with a capital letter. As an example, the “number of objects” would be written in this notation as “numberOfObjects”. The complete source code is freely available in the public BitBucket git repository [48].
The application is designed with modular architecture in mind. This means that there is a shared core of an application and every individual CC IaaS implementation has its own module. The main class is “CloudMigration”, from which the functions of other classes are called. It represents an interface through which the application can be controlled.
Another class is the “MainWindow”, which includes a graphical user interface for user interaction with the application. From this approach, it is clear that this class can be replaced by any other class that would be used to provide control over the application. In the future, the application control over, for example, Command Line Interface (CLI) or an Application Program Interface (API) can be added.
The “Loader” class is used to initialize the modules and transformation rules. This class also instantiates the classes that make up the modules of the individual CC IaaS implementations (Generic, OpenStack, AWS and Azure). Each implementation of a CC IaaS solution has its own class that provides the transformations to and from the generic model.
The “Mapper” folder contains text files with definitions of direct resource mapping. For each resource, such as a subnet, a file is created that holds the parameter mapping for the different implementations of CC systems. If a system does not contain a parameter, it is replaced with a dash. For the sake of clarity, let us give an example section of a file with a subnet’s mapping rules. In Figure 4, we can see the source file syntax and format containing the mapping rules, which are further used for the transformations.
As we can see, AWS does not have a defined subnet name and Azure does not define the network to which the subnet belongs. If we needed to transform the IP prefix from the generic model (in the “CIDR” view) into the Azure system, the name of the parameter would be changed to “address-prefix”. These mappings correspond to the mappings listed in Section 6.
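As Figure 4 is not reproduced here, the following sketch shows a hypothetical reconstruction of such a mapping file and a minimal loader for it; the delimiter, column order and generic parameter names are assumptions made only for this illustration.

# A hypothetical subnet mapping file from the "Mapper" folder. Each row maps
# one generic-model parameter to its platform-specific counterparts; a dash
# means the platform has no such parameter.
MAPPING_FILE = """\
generic;openstack;aws;azure
name;name;-;name
network;network;VpcId;-
cidr;cidr;CidrBlock;address-prefix
"""

def load_mapping(text):
    """Return {generic_parameter: {platform: platform_parameter_or_None}}."""
    header, *rows = [line.split(";") for line in text.strip().splitlines()]
    platforms = header[1:]
    mapping = {}
    for row in rows:
        generic, *specific = row
        mapping[generic] = {p: (v if v != "-" else None)
                            for p, v in zip(platforms, specific)}
    return mapping

print(load_mapping(MAPPING_FILE)["cidr"]["azure"])  # address-prefix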
The “Template” class is responsible for reading the schemas found in the “Schemas” folder. Each module has its own schema of parameters that is loaded, and the translation is then performed according to these schemas. Schemas are written in JSON format. In these schemas, the properties of the individual parameters, such as their brief description and their type, are defined. The type can currently be one of three values: “value”, “list” and “special”. If the type is “value”, the transformation takes place directly; that is, the value is simply replaced. The “list” type works similarly and stipulates that the value is actually a list of values. The last type is “special”, which says that direct mapping cannot be performed and special handling is required in the code. As an example, we can mention special values such as a script version that is specific to a particular CC platform. In this case, a warning appears in the resulting script so that the administrator manually modifies and checks the parameter. The UML diagram of the classes is shown in Figure 5.
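The dispatch on these three types can be sketched as follows; the schema content and parameter names are invented for the example, and only the three type values (“value”, “list”, “special”) come from the description above.

import json

# An invented schema fragment in the JSON format described above.
SCHEMA = json.loads("""{
    "cidr":            {"description": "IP prefix of the subnet", "type": "value"},
    "dns_nameservers": {"description": "list of DNS servers",     "type": "list"},
    "version":         {"description": "script format version",   "type": "special"}
}""")

def transform_parameter(name, value, mapped_name):
    kind = SCHEMA[name]["type"]
    if kind == "value":                 # direct replacement of a single value
        return {mapped_name: value}
    if kind == "list":                  # the value is a list and is copied as a whole
        return {mapped_name: list(value)}
    # "special": no direct mapping; emit a warning for manual adjustment
    return {"__warning__": f"parameter '{name}' must be checked manually"}

print(transform_parameter("cidr", "192.168.1.0/24", "address-prefix"))
print(transform_parameter("version", "2018-09-01", None))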
In the future, if a module that implements the transformation of the description of another CC environment into the generic model is created, it needs to meet three conditions to be successfully included in the application. The first is to add the parameter transformations of all resources to the “Mapper” folder. The second is to create a schema of specific parameters in the “Schemas” folder, and the third is to create a class handling the special cases of transformations that cannot be performed directly.

Verification and Validation

Implementation requires model verification. In our case, we transformed descriptions from/into the generic model. For syntactic control of transformed descriptions, we used real implementations of CC systems that guarantee correctness of syntactic transformation. As a reference topology, we used one virtual instance of the server in the network behind the logical router, which is also shown in Figure 1.
As the OpenStack platform, we used our own deployment of the Pike release, which we run for our own purposes, mainly as a research and teaching platform. For the purposes of validating our generic model, we set up the Azure Stack Development Kit (ASDK), which is a private setup of Microsoft Azure. It is restricted to a single physical machine and intended only for evaluation purposes. The ASDK has the same Application Programming Interface (API) and control as the public cloud version from Microsoft. For testing the transformations to and from Amazon AWS, we asked colleagues in a company that has access to AWS consoles. Because every running instance is billed immediately, we only imported the transformed script into the environment, which was sufficient to test the correctness of the syntactical transformation.
We transformed the scripts from all of the three chosen Cloud Computing platforms into the generic model and then from the generic model back to the particular platform. The first step was to create a test topology script. In our case, we used OpenStack as the first transformation system because we have the most experience with this cloud platform. After syntax verification of the script in OpenStack was completed, the second step was to transform this script into a generic model. The third step was to transform the script in the general format back into the OpenStack format. This file was then successfully syntactically validated in OpenStack, which confirmed the correctness and functionality of our generic model implementation.
Using the same three-step method, we also verified syntactic translation accuracy for Amazon AWS and Microsoft Azure. In both cases, for AWS and Azure, we used a real implementation of CC environments to verify the accuracy of the script’s translation. As mentioned above, we transformed the same test topology that consisted of a single machine server located in the virtual network behind a logical router. As the real implementations of these cloud platforms were used for the testing process, we can guarantee the syntactical correctness and accuracy of the transformed scripts.
On the basis of the syntactic accuracy tests on real implementations of CC IaaS systems, we can state that our implementation of the generic model, as well as the transformation rules, is functional and ready to use. However, for production use, the generic model will have to be further generalized by adding more entities.

8. Discussion of Results

Nowadays, several equivalent solutions are available for deploying CC IaaS. The choice of a specific solution may depend on various factors (ease of use, extensive documentation, ease of deployment in the case of a private CC, etc.). One of the factors may also be the current price of the solution. However, the importance of each factor can change over time, and migration from one CC provider to another can also change their importance, especially that of the price. That is why many users consider migrating their IaaS service from one provider to another, and the problem of vendor lock-in therefore needs to be considered.
The tool we created is not dependent on any CC platform. Because it is standalone, the transformation (a one-time process) does not need to run on either the source or the destination CC platform and therefore does not affect the performance of the CC system as a whole.
The proposed transformation method is a rigorous solution to IaaS service portability between different IaaS environments and is designed to be generic and easily extensible. The specification will make it easier for users to migrate their services and helps to “open” the CC service environment. Due to the complexity of the problem, this paper focuses only on the IaaS service, but we assume that, after successful verification, the solution will also be usable for other CC service types. The paper can also be considered a contribution to the standardization of CC system portability.
We believe that this paper addresses a current problem. Our assumption is also supported by the fact that, in December 2017, ISO issued the ISO/IEC 19941 standard [49], which defines interoperability and portability from the point of view of both CC providers and users.

9. Conclusions

The aim of this paper was the proposal and verification of a generic model for CC environment scripts. The first step was an analysis of existing CC environments, based on an understanding of how these environments create logical topologies. The next step was to become familiar with these environments and gain practical skills by creating a script in each of them.
Subsequently, we designed a model that captures the intersection of the analyzed CC environments. The model should be as simple to implement as possible while transforming parameters with as little overhead as possible. We also tried to keep the model as similar as possible to the existing CC environments so that administrators would be familiar with its structure.
The last step was to implement the model in a programming language and verify it. An integral part of the verification was the testing of transformed scripts in real implementations of CC environments, which confirmed the syntactic accuracy of the transformation.
In our opinion, the main contribution of this article is the creation of a generic model for CC IaaS environment scripts, together with its implementation and practical verification. Another contribution is the overview of the current state of CC IaaS services worldwide.
We hope that the proposed model, together with its implementation, will be used in real transformations of orchestration scripts and will ease the work of administrators. The model’s proof-of-concept implementation was created in the Python 3 programming language, and its source code is publicly available in the repository [48].
This project, as well as research in the CC field, may be further extended with more transformation rules or may proceed towards the standardization of a descriptive language, which could lead to its wider usage in any CC environment. In our opinion, such standardization could unify the CC market and push CC providers to offer better services, which could have a positive impact on the experience of CC users.

Author Contributions

Conceptualization, M.M., P.S. and M.K.; methodology, M.M.; software, M.M. and P.S.; validation, M.M.; formal analysis, L.Z.; investigation, M.K. and P.S.; resources, M.K. and L.Z.; data curation, L.Z.; writing—original draft preparation, M.M.; writing—review and editing, P.S. and L.Z.; visualization, L.Z. and M.K.; supervision, P.S.; project administration, M.M.; funding acquisition, M.K. All authors have read and agreed to the published version of the manuscript.

Funding

This research was supported by the University of Žilina, Žilina, Slovakia grant scheme number 1/2021/FRI/MVP and by the Slovak Grant Agency KEGA project Improving the quality of education in the field of cyber security, No. 004ŽU-4/2024.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data that support the findings of this study are available from the corresponding author, M.M., upon reasonable request.

Conflicts of Interest

The authors declare no conflicts of interest. The funders had no role in the design of the study; in the collection, analyses or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.

References

  1. European Commission. Cloud Select Industry Group on Service Level Agreements; Technical Report; European Commission: Brussels, Luxembourg, 2018.
  2. European Commission. Cloud Select Industry Group on Code of Conduct; Technical Report; European Commission: Brussels, Luxembourg, 2018.
  3. European Telecommunications Standards Institute. Cloud Standards Coordination; Technical Report; European Telecommunications Standards Institute: Valbonne, France, 2016.
  4. National Institute of Standards and Technology. NIST Cloud Computing Standards Roadmap—SP 500-291 v2; Technical Report; National Institute of Standards and Technology: Gaithersburg, MD, USA, 2013.
  5. ISO/IEC 17788:2014; Information Technology—Cloud Computing—Overview and Vocabulary. ISO/IEC: Geneva, Switzerland, 2014.
  6. RightScale. The RightScale 2018 State of the Cloud Report; RightScale: Santa Barbara, CA, USA, 2018.
  7. RightScale. The RightScale 2019 State of the Cloud Report; RightScale: Santa Barbara, CA, USA, 2019.
  8. Flexera. The Flexera 2020 State of the Cloud Report; Flexera: Barcelona, Spain, 2020.
  9. Flexera. The Flexera 2021 State of the Cloud Report; Flexera: Barcelona, Spain, 2021.
  10. Flexera. The Flexera 2022 State of the Cloud Report; Flexera: Barcelona, Spain, 2022.
  11. Flexera. The Flexera 2023 State of the Cloud Report; Flexera: Barcelona, Spain, 2023.
  12. Flexera. The Flexera 2024 State of the Cloud Report; Flexera: Barcelona, Spain, 2024.
  13. ITU-T. Y.3500 Overview and Vocabulary; Technical Report; ITU-T: Geneva, Switzerland, 2014.
  14. Islam, S.; Ouedraogo, M.; Kalloniatis, C.; Mouratidis, H.; Gritzalis, S. Assurance of Security and Privacy Requirements for Cloud Deployment Models. IEEE Trans. Cloud Comput. 2018, 6, 387–400.
  15. ITU-T. Y.3502 Reference Architecture; Technical Report; ITU-T: Geneva, Switzerland, 2014.
  16. Petcu, D. Portability and Interoperability between Clouds: Challenges and Case Study. In Proceedings of the Towards a Service-Based Internet—4th European Conference, Poznan, Poland, 26–28 October 2011; pp. 62–74.
  17. Cloud Standards Customer Council. Interoperability and Portability for Cloud Computing: A Guide; Technical Report; Cloud Standards Customer Council: Needham, MA, USA, 2014.
  18. Parameswaran, A.; Chaddha, A. Cloud interoperability and standardization. SETlabs Brief. 2009, 7, 19–26.
  19. Jamshidi, P.; Ahmad, A.; Pahl, C. Cloud Migration Research: A Systematic Review. IEEE Trans. Cloud Comput. 2013, 1, 142–157.
  20. Tziritas, N.; Khan, S.U.; Loukopoulos, T.; Lalis, S.; Xu, C.Z.; Li, K.; Zomaya, A.Y. Online Inter-Datacenter Service Migrations. IEEE Trans. Cloud Comput. 2020, 8, 1054–1068.
  21. Bhavya, K.; Yamini, K.; Sreenivas, V. Cloud Services Portability for secure migration. Int. J. Comput. Trends Technol. IJCTT 2013, 4, 546–549.
  22. Bojanova, I. Cloud Interoperability and Portability II; Technical Report; IEEE Computer Society: Los Alamitos, CA, USA, 2018.
  23. The Open Group. Cloud Portability and Interoperability; Technical Report; The Open Group: Boston, MA, USA, 2016.
  24. Pamami, P.; Jain, A.; Sharma, N. Cloud Migration Metamodel: A framework for legacy to cloud migration. In Proceedings of the 2019 9th International Conference on Cloud Computing, Data Science Engineering (Confluence), Noida, India, 10–11 January 2019; pp. 43–50.
  25. Kolb, S.; Lenhard, J.; Wirtz, G. Application Migration Effort in the Cloud—The Case of Cloud Platforms. In Proceedings of the 2015 IEEE 8th International Conference on Cloud Computing, New York, NY, USA, 27 June–2 July 2015; pp. 41–48.
  26. Khajeh-Hosseini, A.; Greenwood, D.; Sommerville, I. Cloud Migration: A Case Study of Migrating an Enterprise IT System to IaaS. In Proceedings of the 2010 IEEE 3rd International Conference on Cloud Computing, Miami, FL, USA, 5–10 July 2010; pp. 450–457.
  27. Awasthi, A.; Gupta, R. Multiple hypervisor based Open Stack cloud and VM migration. In Proceedings of the 2016 6th International Conference—Cloud System and Big Data Engineering (Confluence), Noida, India, 14–15 January 2016; pp. 130–134.
  28. Suen, C.; Kirchberg, M.; Lee, B.S. Efficient Migration of Virtual Machines between Public and Private Cloud. In Proceedings of the 2011 IEEE Third International Conference on Cloud Computing Technology and Science, Athens, Greece, 29 November–1 December 2011; pp. 549–553.
  29. Kostoska, M.; Gusev, M.; Ristov, S. An Overview of Cloud Portability. In Proceedings of the Future Access Enablers for Ubiquitous and Intelligent Infrastructures, Skopje, North Macedonia, 23–25 September 2015; pp. 248–254.
  30. Ramalingam, C.; Mohan, P. Addressing Semantics Standards for Cloud Portability and Interoperability in Multi Cloud Environment. Symmetry 2021, 13, 317.
  31. Torab-Miandoab, A.; Samad-Soltani, T.; Jodati, A.; Rezaei-Hachesu, P. Interoperability of heterogeneous health information systems: A systematic literature review. BMC Med. Inform. Decis. Mak. 2023, 12, 100754.
  32. Abughazalah, M.; Alsaggaf, W.; Saifuddin, S.; Sarhan, S. Centralized vs. Decentralized Cloud Computing in Healthcare. Appl. Sci. 2024, 14, 7765.
  33. Cimmino, A.; Cano-Benito, J.; Fernández-Izquierdo, A.; Patsonakis, C.; Tsolakis, A.C.; García-Castro, R.; Ioannidis, D.; Tzovaras, D. A scalable, secure and semantically interoperable client for cloud-enabled Demand Response. Future Gener. Comput. Syst. 2023, 141, 54–66.
  34. Pliatsios, A.; Kotis, K.; Goumopoulos, C. A systematic review on semantic interoperability in the IoE-enabled smart cities. Internet Things 2023, 22, 100754.
  35. Ding, S.; Tukker, A.; Ward, H. Opportunities and risks of internet of things (IoT) technologies for circular business models: A literature review. J. Environ. Manag. 2023, 336, 117662.
  36. Atanasov, I.; Pencheva, E.; Trifonov, V.; Kassev, K. Railway Cloud: Management and Orchestration Functionality Designed as Microservices. Appl. Sci. 2024, 14, 2368.
  37. Falcão, R.; Matar, R.; Rauch, B.; Elberzhager, F.; Koch, M. A Reference Architecture for Enabling Interoperability and Data Sovereignty in the Agricultural Data Space. Information 2023, 14, 197.
  38. Kaur, K.; Sharma, S.; Kahlon, K.S. Towards a Model-Driven Framework for Data and Application Portability in PaaS Clouds. In Proceedings of the First International Conference on Sustainable Technologies for Computational Intelligence, Jaipur, India, 29–30 March 2020; Springer: Singapore, 2020; pp. 91–105.
  39. Amazon Web Services. AWS CloudFormation. Available online: https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/Welcome.html (accessed on 8 August 2023).
  40. Microsoft. Microsoft Azure Deployment Stacks. Available online: https://learn.microsoft.com/en-us/azure/azure-resource-manager/bicep/deployment-stacks?tabs=azure-powershell (accessed on 16 August 2023).
  41. OpenStack. OpenStack Resource Types. Available online: https://docs.openstack.org/heat/latest/template_guide/openstack.html (accessed on 20 September 2023).
  42. Object Management Group. Model Driven Architecture—A Technical Perspective; Technical Report; Object Management Group: Milford, MA, USA, 2011.
  43. Kherraf, S.; Lefebvre, E.; Suryn, W. Transformation from CIM to PIM Using Patterns and Archetypes. In Proceedings of the 19th Australian Conference on Software Engineering (ASWEC 2008), Perth, WA, Australia, 26–28 March 2008; pp. 338–346.
  44. De Castro, V.; Marcos, E.; Wieringa, R. Towards a service-oriented mda-based approach to the alignment of business processes with it systems: From the business model to a web service composition model. Int. J. Coop. Inf. Syst. 2009, 18, 225–260.
  45. Zhang, W.; Mei, H.; Zhao, H.; Yang, J. Transformation from CIM to PIM: A Feature-Oriented Component-Based Approach. In Proceedings of the 8th International Conference on Model Driven Engineering Languages and Systems, Montego Bay, Jamaica, 2–7 October 2005; Springer: Berlin/Heidelberg, Germany, 2005; pp. 248–263.
  46. Cao, X.X.; Miao, H.K.; Chen, Y.H. Transformation from computation independent model to platform independent model with pattern. J. Shanghai Univ. Engl. Ed. 2008, 12, 515–523.
  47. Drozdova, M.; Kardos, M.; Kurillova, Z.; Bucko, B. Transformation in model driven architecture. In Proceedings of the Advances in Intelligent Systems and Computing, Karpacz, Poland, 20–22 September 2015; Springer: Berlin/Heidelberg, Germany, 2016; Volume 429, pp. 193–203.
  48. Moravcik, M. BitBucket Repository. Available online: https://bitbucket.org/marekmoravcik123/cloudmigration/src/master/ (accessed on 20 September 2024).
  49. ISO/IEC 19941:2017; Information Technology—Cloud Computing—Interoperability and Portability. ISO/IEC: Geneva, Switzerland, 2017.
Figure 1. Testing topology.
Figure 2. MDA model—abstraction levels.
Figure 3. Proposal of the generic model.
Figure 4. Source file containing mapping rules.
Figure 5. UML diagram of classes.
Table 1. Cloud platforms comparison.

| | AWS | Azure | OpenStack |
| Type | Public | Public/Private | Private |
| Custom image | Yes | Yes | Yes |
| Usage of orchestration | Yes | Yes | Yes |
| Storage only | Yes | Yes | Yes |
| IPv6 | Not everywhere | Not everywhere | Yes |
| Data location | Provider’s data center | Provider’s data center | On premise |
Table 2. Transformation rules: parameters of environment.

| Generic Model | AWS | OpenStack | Azure |
| type | Type | type | type |
| description | Description | description | metadata—description |
| default | Default | default | defaultValue |
| min_length | MinLength | constraints—length | minLength |
| max_length | MaxLength | constraints—length | maxLength |
| min_value | MinValue | constraints—range | minValue |
| max_value | MaxValue | constraints—range | maxValue |
| allowed_values | AllowedValues | constraints—allowed_values | allowedValues |
| allowed_pattern | AllowedPattern | constraints—allowed_pattern | |
Table 3. Transformation rules: virtual machine.

| Generic Model | AWS | OpenStack | Azure |
| name | | name | name |
| properties—image | properties—ImageId | properties—image | properties—StorageProfile—imageReference—id |
| properties—instancetype | properties—InstanceType | properties—flavor | properties—hardwareProfile—vmSize |
| properties—securitygroups | properties—SecurityGroupIds | properties—securitygroups | |
| properties—availabilityzone | properties—AvailabilityZone | properties—availabilityzone | zones |
| properties—network | properties—SubnetId | properties—networks | properties—networkProfile—networkInterfaces—id |
| properties—sshkey | properties—KeyName | properties—keyname | properties—OSProfile—linuxConfiguration—ssh—publicKeys—keyData |
| properties—userdata | properties—UserData | properties—userdata | properties—OSProfile—CustomData |
| properties—tags | properties—Tags | properties—tags | tags |
Table 4. Transformation rules: SSH key.

| Generic Model | AWS | OpenStack | Azure |
| properties—name | | properties—name | |
| properties—save_private_key | | properties—save_private_key | |
| properties—public_key | | properties—public_key | |
| properties—user | | properties—user | |
Table 5. Transformation rules: security group.

| Generic Model | AWS | OpenStack | Azure |
| properties—name | properties—GroupName | properties—name | name |
| properties—description | properties—GroupDescription | properties—description | |
| properties—rules | | | securityRules |
| properties—ingress_rules | properties—SecurityGroupIngress | | |
| properties—egress_rules | properties—SecurityGroupEgress | | |
| properties—tags | properties—Tags | | tags |
Table 6. Transformation rules: security group—rules.

| Generic Model | AWS | OpenStack | Azure |
| name | GroupName | | name |
| description | | | description |
| direction | | direction | |
| protocol | IpProtocol | protocol | protocol |
| from_port | FromPort | port_range_min | sourcePortPrefix |
| to_port | ToPort | port_range_max | |
| cidr | CidrIp | remote_ip_prefix | sourceAddressPrefix |
Table 7. Transformation rules: network.

| Generic Model | AWS | OpenStack | Azure |
| properties—name | properties—VpcId | properties—name | name |
| properties—subnets | | | subnets |
| properties—cidr | properties—CidrBlock | | |
| properties—tags | properties—Tags | properties—tags | tags |
Table 8. Transformation rules: subnet.

| Generic Model | AWS | OpenStack | Azure |
| properties—name | | properties—name | name |
| properties—network | properties—VpcId | properties—network | |
| properties—cidr | properties—CidrBlock | properties—cidr | addressPrefix |
| properties—dns | | properties—dns_nameservers | |
| properties—gateway | | properties—gateway_ip | |
| properties—ip_version | | properties—ip_version | |
| properties—tags | properties—Tags | properties—tags | tags |
