Article

Application of Proximal Policy Optimization for Resource Orchestration in Serverless Edge Computing

by Mauro Femminella 1,2,*,† and Gianluca Reali 1,2,†

1 Department of Engineering, University of Perugia, Via G. Duranti 93, 06125 Perugia, Italy
2 Consorzio Nazionale Interuniversitario per le Telecomunicazioni (CNIT), 43124 Parma, Italy
* Author to whom correspondence should be addressed.
† These authors contributed equally to this work.
Computers 2024, 13(9), 224; https://doi.org/10.3390/computers13090224
Submission received: 31 July 2024 / Revised: 1 September 2024 / Accepted: 4 September 2024 / Published: 6 September 2024
(This article belongs to the Special Issue Advances in High-Performance Switching and Routing)

Abstract

Serverless computing is a recent cloud computing model suitable for providing services in both large cloud and edge clusters. In edge clusters, autoscaling plays a key role on serverless platforms, since the dynamic scaling of function instances can lead to reduced latency and efficient resource usage, both typical requirements of edge-hosted services. However, a badly configured scaling function can introduce unexpected latency due to so-called "cold start" events, or even service request losses. In this work, we focus on optimizing resource-based autoscaling in OpenFaaS, the most widely adopted open-source Kubernetes-based serverless platform, leveraging real-world serverless traffic traces. We use the reinforcement learning algorithm Proximal Policy Optimization, trained on real traffic, to dynamically configure the target value of the Kubernetes Horizontal Pod Autoscaler. This is accomplished via a state space model that takes into account resource consumption, performance values, and time of day, together with a reward function designed to promote Service-Level Agreement (SLA) compliance. We evaluate the proposed agent by comparing its average latency, CPU usage, memory usage, and loss percentage with those of the baseline system. The experimental results show the benefits of the proposed agent, which achieves a service time within the SLA while limiting resource consumption and service loss.
Keywords: serverless; edge computing; Kubernetes; horizontal pod autoscaling; reinforcement learning; performance evaluation
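
As a rough illustration of the control loop the abstract describes, the sketch below pairs a toy Gymnasium environment with the off-the-shelf PPO implementation from stable-baselines3. Everything in it is an assumption for illustration: the class name HPAEnv, the SLA_LATENCY threshold, the discrete set of candidate HPA utilization targets, and the simulated dynamics all stand in for measurements the paper takes from a real OpenFaaS/Kubernetes cluster. It is not the authors' code, and it does not reproduce their actual state space, reward, or action set; it only shows the shape of an agent whose state combines resource usage, latency, and time of day, and whose reward trades SLA compliance against resource consumption.

```python
import numpy as np
import gymnasium as gym
from gymnasium import spaces
from stable_baselines3 import PPO


class HPAEnv(gym.Env):
    """Toy serverless-autoscaling environment (hypothetical, for illustration).

    State:  [cpu_usage, mem_usage, latency, time_of_day], all normalized to [0, 1].
    Action: pick one of a few discrete HPA CPU-utilization targets.
    Reward: +1 when latency meets the (assumed) SLA, -1 otherwise,
            minus a small penalty proportional to resource usage.
    """

    SLA_LATENCY = 0.5                # assumed normalized SLA threshold
    TARGETS = [0.3, 0.5, 0.7, 0.9]   # assumed candidate HPA utilization targets

    def __init__(self):
        super().__init__()
        self.observation_space = spaces.Box(0.0, 1.0, shape=(4,), dtype=np.float32)
        self.action_space = spaces.Discrete(len(self.TARGETS))
        self.t = 0

    def reset(self, seed=None, options=None):
        super().reset(seed=seed)
        self.t = 0
        return np.full(4, 0.5, dtype=np.float32), {}

    def step(self, action):
        target = self.TARGETS[action]
        # Crude stand-in dynamics: a lower utilization target spawns more
        # replicas, hence lower latency but higher aggregate resource usage.
        latency = float(np.clip(target + self.np_random.normal(0, 0.05), 0, 1))
        usage = float(np.clip(1.0 - target + self.np_random.normal(0, 0.05), 0, 1))
        reward = (1.0 if latency <= self.SLA_LATENCY else -1.0) - 0.2 * usage
        self.t += 1
        # Time of day encoded as the fraction of a 288-step (5-minute) day.
        obs = np.array([usage, usage, latency, (self.t % 288) / 288],
                       dtype=np.float32)
        return obs, reward, self.t >= 288, False, {}


model = PPO("MlpPolicy", HPAEnv(), verbose=0)
model.learn(total_timesteps=10_000)
```

A discrete action set keeps the policy small and the chosen target directly interpretable as an HPA configuration value; in a real deployment the observations would come from cluster metrics (e.g., Prometheus) and each action would patch the HPA object rather than update a simulator.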
