Article

A Parallel Compression Pipeline for Improving GPU Virtualization Data Transfers

1 Departamento de Informática de Sistemas y Computadores, Universitat Politècnica de València, 46022 Valencia, Spain
2 Departament d’Informàtica, Escola Tècnica Superior d’Enginyeria (ETSE-UV), Universitat de València, 46010 Valencia, Spain
* Author to whom correspondence should be addressed.
Sensors 2024, 24(14), 4649; https://doi.org/10.3390/s24144649
Submission received: 28 May 2024 / Revised: 2 July 2024 / Accepted: 16 July 2024 / Published: 17 July 2024
(This article belongs to the Section Internet of Things)

Abstract

GPUs are commonly used to accelerate the execution of applications in domains such as deep learning. Deep learning applications are applied to an increasing variety of scenarios, edge computing being one of them. However, edge devices present severe limitations in terms of computing power and energy. In this context, the use of remote GPU virtualization solutions is an efficient way to address these concerns. Nevertheless, the limited network bandwidth might become an issue. This limitation can be alleviated by leveraging on-the-fly compression within the communication layer of remote GPU virtualization solutions: data exchanged with the remote GPU is transparently compressed before being transmitted, thus increasing the effective network bandwidth. In this paper, we present the implementation of a parallel compression pipeline designed to be used within remote GPU virtualization solutions. A thorough performance analysis shows that the effective network bandwidth can be increased by a factor of up to 2×.
Keywords: deep learning; on-the-fly compression; parallel compression pipeline; network bandwidth; remote GPU virtualization; CUDA; rCUDA
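The idea summarized in the abstract — splitting each transfer buffer into chunks and compressing them concurrently so that compression overlaps with transmission — can be sketched as follows. This is a minimal illustration only: the chunk size, worker count, and the use of `zlib` are assumptions for the example, not the codec or pipeline actually implemented inside the rCUDA communication layer.

```python
import zlib
from concurrent.futures import ThreadPoolExecutor

CHUNK_SIZE = 64 * 1024  # pipeline granularity (illustrative value)


def compress_pipeline(data: bytes, workers: int = 4) -> list:
    """Split a transfer buffer into chunks and compress them in parallel.

    In a real remote GPU virtualization pipeline, each compressed chunk
    would be handed to the network layer as soon as it is ready, so
    compression of chunk i+1 overlaps with transmission of chunk i.
    """
    chunks = [data[i:i + CHUNK_SIZE] for i in range(0, len(data), CHUNK_SIZE)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(zlib.compress, chunks))


def decompress_pipeline(compressed: list, workers: int = 4) -> bytes:
    """Decompress the received chunks in parallel and reassemble the buffer."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return b"".join(pool.map(zlib.decompress, compressed))


if __name__ == "__main__":
    # Highly compressible stand-in for real GPU transfer data.
    payload = b"deep learning tensor data " * 10000
    packets = compress_pipeline(payload)
    assert decompress_pipeline(packets) == payload
    print(f"{len(payload)} bytes sent as {sum(len(p) for p in packets)} bytes")
```

Because the transformation is applied transparently below the API, the application is unaware of it; it simply observes a higher effective transfer rate whenever the data compresses well.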

Share and Cite

MDPI and ACS Style

Peñaranda, C.; Reaño, C.; Silla, F. A Parallel Compression Pipeline for Improving GPU Virtualization Data Transfers. Sensors 2024, 24, 4649. https://doi.org/10.3390/s24144649


Note that from the first issue of 2016, this journal uses article numbers instead of page numbers.
