1. Introduction
The usability and growing popularity of cloud computing encourage developers to investigate its environment intensively. Evaluating the performance of existing cloud platforms assists consumers in deciding on the optimal environment [1]. Cloud computing offers scalable and elastic network resources as services over the Internet; information, data, and shared resources are provided to consumers on demand [2]. However, evaluating the performance of applications deployed on cloud computing platforms is highly complex, as it is affected by the intricate infrastructure of the cloud environment in which the applications are executed [3].
Application Programming Interfaces (APIs) are sets of programming code that permit and control data transmission between connected applications. Any API includes two main components: the technical specifications and the software interface. The technical specifications define the rules and options for data exchange among the connected applications, including the data-delivery requests and the processing protocols. The software interface is written to implement these specifications. When one application (i.e., the requesting application) needs to access an operation or information from another application (i.e., the providing application), it calls the corresponding API, which determines how that operation or data can be provided. The providing application, in turn, returns the requested operation or information directly to the requesting application. The API thus specifies the interface through which the requesting and providing applications communicate.
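To make this request/response flow concrete, the following minimal sketch shows a requesting application calling a providing application's web-API over HTTP using Python's requests library. The endpoint URL and field names are purely illustrative assumptions and are not part of the applications studied here.

```python
import requests

# Hypothetical base URL of the providing application (illustrative only).
BASE_URL = "https://api.example.com"

def get_user_profile(user_id: int) -> dict:
    """Requesting application: ask the providing application for data."""
    # The API contract (path, parameters, response format) is defined by
    # the technical specifications of the providing application.
    response = requests.get(f"{BASE_URL}/users/{user_id}", timeout=10)
    response.raise_for_status()  # fail loudly on 4xx/5xx responses
    return response.json()       # the providing application returns JSON

if __name__ == "__main__":
    print(get_user_profile(42))
```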
APIs provide several advantages to the software development field by speeding up and simplifying the development process. Developers can add functionalities and operations from existing providers to their own software. APIs can also serve as a layer of abstraction between two systems, hiding the complex working details of the final system [4]. Today, many corporations provide applications and APIs over the Internet, and users can access these services and applications anytime and anywhere. Consequently, the servers that offer these applications and services must handle a massive number of simultaneous requests. Enterprises have therefore adopted the promising cloud computing technology, since not all companies have the resources to build data centers or purchase enough servers for these applications.
The performance evaluation of API-based applications deployed on cloud platforms is an important issue, as it assists in selecting the most suitable platform for running newly developed services and applications. Several research studies have been conducted in this field under different circumstances, cloud platforms, applications, and APIs. Here, we introduce a performance evaluation study that assesses different cloud computing platforms for different API-based application categories. Amazon AWS EC2 and Microsoft Azure are considered the most commonly adopted cloud platforms [5]; we used these two platforms in our experiments since they are the top cloud service providers. Moreover, we selected web-APIs to test different mobile applications. Two application categories were selected and compared in this work: educational (i.e., Sebawayh) and professional (i.e., TaqTak) services. Education and professional services have recently seen a high percentage of migration to the cloud environment, driven by the COVID-19 lockdowns, during which schools and universities ran online classes and many business activities likewise moved online. This work presents an experimental comparison of the performance of the API-based applications on the Amazon AWS EC2 and Microsoft Azure platforms in terms of response time, latency, processing time, and throughput. The experiments were performed with several numbers of users. From the experimental study, we infer that the Microsoft Azure platform is the best choice for deploying applications with high computational demands, whereas for educational applications, Amazon AWS EC2 provides lower latency than Microsoft Azure.
In this study, we measured the performance of cloud-API-based applications from a technical perspective, which becomes increasingly important as more API-based applications migrate to the cloud. The experimental analysis considered four sets of comparisons in order to provide a basis for a methodology that effectively compares the performance of cloud-API-based applications and thereby supports deployment decisions with technical arguments.
The remainder of this paper is organized as follows. Section 2 reviews previous studies in the literature. Section 3 presents the methodology and the experimental parameters of this study. Section 4 presents the obtained results and the discussion. Finally, Section 5 presents the conclusions and outlines future work.
2. Related Work
Several studies have assessed and evaluated cloud computing applications using various techniques [6,7,8]. Researchers have investigated the capacity of cloud servers and the performance of the applications uploaded to these cloud environments. Jackson et al. [6] presented an assessment study that compared HPC systems and the Amazon AWS EC2 cloud platform. The experiments employed real applications representing the workload of a supercomputing center. The experimental results showed that a Linux cluster was six-times faster than Amazon AWS EC2 and that modern HPC systems were twenty-times faster than Amazon AWS EC2. The performance of the tested applications was limited by the interconnect of the Amazon AWS EC2 platform, which also introduced substantial performance variability.
Moreover, Bautista et al. [7] used ISO quality characteristics to evaluate the performance of applications uploaded to the cloud computing environment. The main consideration in this experimental study was the high complexity of the cloud environment, directly inherited from its complex infrastructure. They adopted a new measurement framework and applied it to log data obtained from the data center in order to map and examine particular ISO quality characteristics (i.e., the behavior over a period of time). The study was conducted on a particular industrial private cloud, and the experimental results confirmed the effectiveness of the proposed framework at measuring and evaluating the performance of the uploaded applications. Ravanello et al. [3] investigated the performance of big data applications by combining concepts of software quality from the ISO 25010 model. Their experimental study aimed to fill the gap in the numerical representation of these concepts and in the measurement of big data applications' performance.
New models and frameworks have been proposed to facilitate the experimental study of applications uploaded to the cloud environment. Addamani and Basu [8] presented a new model to analyze the performance of web applications in the cloud computing environment. They modeled the IaaS platform as multiple queues and virtual machines (VMs) as service centers; the model depends on the application behavior of the reserved instances on Amazon EC2. Vasar et al. [9], on the other hand, proposed a framework that combines different monitoring and benchmarking tools. Performance engineers can use this framework to test applications under different loads and configurations. Moreover, the framework supports dynamic server allocation according to the incoming load by employing response-time-aware heuristics.
Many researchers have investigated different performance metrics of the cloud environment for several applications. An experimental assessment of the performance of an analytic application run under different configurations of the storage service was presented in [10]. The authors analyzed the performance achieved by analytic workloads and explained the problems caused by impedance mismatch, which occurs in some configurations. Marian et al. [11] tried to help users choose the optimal configuration for each application by creating a prediction model of the system offered by the cloud providers. The prediction model was built by applying machine-learning methods, and the OpenStack cloud system was used to validate the suggested methodology. Kotas et al. measured the performance of the most commonly used cloud platforms (i.e., Amazon AWS EC2 and Microsoft Azure); their experiments generated clusters of Azure and AWS instances and then executed benchmarks on the several instances. Another recent work investigated the performance of real-time stream-processing systems for Internet of Things (IoT) applications [12]; it measured and compared the performance of a number of IoT applications based on response time, throughput, jitter, and scalability. Moreover, Ismail and Materwala [13] evaluated and compared the performance of the blockchain and client/server cloud paradigms for healthcare applications.
Furthermore, scalability has been investigated in the cloud environment to determine the capacity of the uploaded applications. Expósito et al. [14] proposed methods to reduce the effect of virtualization overhead on the scalability of communication-intensive HPC codes, presenting a method that combines multithreading and message-passing techniques to run HPC applications; scalability and cost efficiency were the major metrics used to evaluate the performance. An experimental evaluation of the performance of auto-scaling policies was performed in [15] as well, with different group and pairwise comparisons conducted to identify the performance dissimilarities among the seven strategies. A recent work by Potodar et al. [16] presented a performance evaluation of cloud Docker containers and VMs in terms of CPU performance, throughput, disk input/output, load testing, and operation speed. The work used multiple benchmarking tools to compare Docker containers and VMs, and the comparisons showed that Docker containers performed better in terms of the chosen technical measurements. Al-Said Ahmad and Andras [17] also used technical scalability measurements for cloud-based software services, inspired by technical scalability and elasticity metrics. They employed two different cloud-based systems, MediaWiki and OrangeHRM, to illustrate the usefulness of their metrics and compared the scalability of these systems on two platforms: Amazon AWS EC2 and Microsoft Azure.
In this work, we aim to continue the work presented in [17]. To this end, we investigated the performance of two popular and commonly used API-based applications for different numbers of users on cloud platforms.
3. Methodology and Experimental Setting
As discussed earlier, cloud computing is one of the most promising and widely adopted technologies. Individuals, organizations, and enterprises can focus on developing their main activities while leaving the maintenance and development of IT services to the cloud providers. APIs act as the interfaces that connect service providers to customers or to other developed services. Most recent services on cloud platforms are delivered through APIs, which are used both to provide services to consumers and to manage and monitor them. Investigating the performance of different API-based applications on different cloud platforms is therefore an important issue: it facilitates selecting the most suitable platform for each developed application and/or service and identifying the main effects of the considered parameters.
3.1. Testing Process and Approach
In this work, we introduce a performance evaluation study that aims to assess different cloud computing platforms for different API-based application categories (i.e., educational and professional). These categories were selected because they have been the ones most commonly deployed in the cloud environment during the COVID-19 lockdowns. The entire world suddenly switched to online studies, and most professional businesses went online as well.
Figure 1 summarizes the sequential steps of the proposed experimental study. The first step was to select applications for the testing process and to investigate their characteristics and the affected parameters. In our work, two mobile applications based on web-APIs were selected from two different categories: Sebawayh (an educational application) and TaqTak (a professional application). Sebawayh is an educational web-API and smartphone app for Arabic language learning. It has three main user types: students, teachers, and system administrators, and it offers multiple learning levels and categories. The system has a registration form that students complete before starting any learning level, and its smartphone interface sends HTTP requests to the web-API. TaqTak, on the other hand, is a solar-power renewable-energy design, monitoring, and management application. It is used to monitor power system usage and to show reports about the system. Engineers can also use it to calculate the voltage requirements of any system; it calculates the voltages and battery power and monitors consumption. It can be connected to a third-party application to receive power usage data for the monitoring process.
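To illustrate the kind of design computation a tool such as TaqTak performs, the following sketch sizes a battery bank for a daily load using the common load-energy / (system voltage × depth of discharge × efficiency) sizing rule. The formula choice and all default values are illustrative assumptions, not TaqTak's actual method or parameters.

```python
def battery_capacity_ah(daily_load_wh: float,
                        system_voltage_v: float = 24.0,
                        depth_of_discharge: float = 0.5,
                        inverter_efficiency: float = 0.9) -> float:
    """Estimate the battery capacity (Ah) needed to cover one day's load.

    Uses the common sizing rule: capacity = energy / (V * DoD * efficiency).
    All defaults are illustrative assumptions, not TaqTak's parameters.
    """
    usable_fraction = depth_of_discharge * inverter_efficiency
    return daily_load_wh / (system_voltage_v * usable_fraction)

# Example: a 2.4 kWh/day load on a 24 V system needs roughly 222 Ah.
print(f"{battery_capacity_ah(2400):.0f} Ah")
```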
The next step was to lease cloud servers and configure them according to the tested applications. The most popular and commonly used platforms were selected in this study (i.e., Amazon AWS EC2 and Microsoft Azure). The third step was to deploy the APIs on the configured servers. A load-test method was used to compare the two applications on the two cloud platforms. To test these web services/APIs, an open-source load-testing framework named Apache JMeter was utilized [18]. This framework is used for application and web service testing, as well as for measuring and analyzing the performance of a variety of web services, including web-based/cloud applications. JMeter is a multi-threaded framework that allows concurrent workload sampling [19] and can be configured to automate web service testing with respect to different parameters. Finally, the collected results were analyzed and compared according to the considered parameters, as illustrated in Figure 1. In this study, we focused on collecting technical performance parameters; all monitoring data were extracted from JMeter after each test was completed [20]. The investigated parameters of this study were the following (a sketch showing how they can be computed from JMeter's output appears after the list):
Response time: the time required to handle a request and return the result;
Latency: the time required for a request to reach the server. It reflects the network quality between the server location and the clients and can also be affected by the load on the server;
Processing time: the time required by the operating system to process a request;
Throughput: the number of tasks completed during a certain period of time. In the cloud environment, throughput measures how fast the application can transfer data between a client and a server.
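As an illustration, the following minimal sketch computes approximations of these four parameters from a standard CSV-format JMeter results file (.jtl), assuming JMeter was run in non-GUI mode with CSV output. JMeter's elapsed and Latency columns report the full response time and the time to first byte in milliseconds; estimating processing time as their difference is an assumption of this sketch, not a JMeter-defined metric.

```python
import pandas as pd

# Load a JMeter CSV results file; the default CSV output includes, among
# others, the columns timeStamp, elapsed, Latency, and success.
df = pd.read_csv("results.jtl")

# Keep only successful samples (JMeter writes "true"/"false" strings).
ok = df[df["success"].astype(str).str.lower() == "true"]

# Response time: full time to handle the request and return the result (ms).
response_time_ms = ok["elapsed"].mean()

# Latency: time until the first byte of the response arrives (ms).
latency_ms = ok["Latency"].mean()

# Processing time (sketch assumption): response time minus latency.
processing_time_ms = (ok["elapsed"] - ok["Latency"]).mean()

# Throughput: completed requests per second over the whole test window.
duration_s = (df["timeStamp"].max() - df["timeStamp"].min()) / 1000.0
throughput_rps = len(ok) / duration_s

print(f"response time  : {response_time_ms:.1f} ms")
print(f"latency        : {latency_ms:.1f} ms")
print(f"processing time: {processing_time_ms:.1f} ms")
print(f"throughput     : {throughput_rps:.2f} req/s")
```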
3.2. Cloud Platforms’ Configuration
In order to perform the comparison study, we leased two cloud servers in the same geographical location: one configured for Amazon AWS and the other for the Microsoft Azure platform. The two web-APIs (i.e., Sebawayh and TaqTak) were deployed on these servers. Both applications adopt the Django web framework. The configuration parameters of the two servers are given in Table 1. In these experiments, each server ran the Ubuntu operating system with two CPUs, 4 GB of RAM, and 5 GB of secondary storage.
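For context, a Django-based web-API endpoint of the kind deployed here typically looks like the following minimal sketch. The URL path, view name, and returned data are illustrative assumptions and do not reproduce the actual Sebawayh or TaqTak code.

```python
# Minimal Django web-API sketch (URL routing and view in one module).
from django.http import JsonResponse
from django.urls import path

def course_levels(request):
    """Handle a GET request and return JSON, as the smartphone app expects."""
    # A real view would query the database; static data keeps the sketch short.
    levels = [{"id": 1, "name": "Beginner"}, {"id": 2, "name": "Advanced"}]
    return JsonResponse({"levels": levels})

urlpatterns = [
    path("api/levels/", course_levels),  # hypothetical path, not from Table 2
]
```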
Apache JMeter was configured to automate the performance tests with different parameters, such as the number of requests, the number of users, and the minimum and maximum delay times between requests. JMeter was configured separately for each of the two tested applications, and the load test was performed with these configurations on both leased servers.
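As an example of how such runs can be automated, the sketch below launches JMeter in non-GUI mode once per user count from Python. The test-plan file name and the users property are assumptions of this sketch (a property passed with -J takes effect only if the test plan references it via ${__P(users)}), not values taken from the study's actual configuration.

```python
import subprocess

# Hypothetical JMeter test plan; the 'users' property below must be
# referenced inside it via ${__P(users)} for the override to take effect.
TEST_PLAN = "api_load_test.jmx"

for users in (500, 1000, 2000, 4000):
    result_file = f"results_{users}_users.jtl"
    subprocess.run(
        ["jmeter", "-n",          # non-GUI (command-line) mode
         "-t", TEST_PLAN,         # test plan to execute
         f"-Jusers={users}",      # override the thread-count property
         "-l", result_file],      # write samples to a .jtl results file
        check=True,
    )
    print(f"finished load test with {users} users -> {result_file}")
```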
Table 2 shows the pages requested by the Sebawayh application. Two main HTTP methods were tested in this set of experiments: POST and GET, with different HTTP requests applied to the different APIs/paths. Moreover, Table 3 gives the pages requested by the TaqTak application and their paths.
During the experimental tests, all network applications were disabled on the computer used, and all other devices in the lab where we ran our experiments were disconnected from the Internet to reduce latency and delay. This set of experiments required around sixty hours of execution time on the leased servers. Furthermore, the tests were run for different numbers of users: 500, 1000, 2000, and 4000. Table 4 gives the parameter settings for the different numbers of tested users, including the experiment time, the number of iterations, the number of experiments, and the total running time.
5. Conclusions and Future Work
In this work, we evaluated the performance of two different categories of API-based applications on the Amazon AWS EC2 and Microsoft Azure cloud platforms. A performance evaluation study was conducted to compare the performance of Sebawayh and TaqTak on these two most commonly adopted cloud platforms. The experimental results showed that the measured performance depended mainly on the API category rather than on the number of users. The educational API (i.e., Sebawayh) had a lower response time and lower latency than the professional application (i.e., TaqTak) on the leased cloud platforms. The processing time on Azure was lower than that on AWS EC2, while the throughput of these applications remained unchanged or decreased when moving from one cloud platform to the other. This suggests that the Azure platform is the best choice for API-based applications that require heavy computations.
As future work, other API categories/taxonomies will be investigated, tested, and compared to show the differences among them; a full deployment guideline covering all categories can then be constructed. Other parameters could also be tested on cloud environments under different loads and basic settings. Moreover, the usability of API-based applications can be further investigated, also considering the users' quality of experience.