Heterogeneous and Energy-Efficient Computing Systems

A special issue of Electronics (ISSN 2079-9292). This special issue belongs to the section "Computer Science & Engineering".

Deadline for manuscript submissions: 15 October 2024

Special Issue Editor


Dr. Jieyang Chen
Guest Editor
Department of Computer Science, The University of Alabama at Birmingham, Birmingham, AL 35294, USA
Interests: high performance computing; GPU heterogeneous computing; energy-efficient computing; big data analysis and management

Special Issue Information

Dear Colleagues,

In the era of modern computing systems, escalating operational carbon footprints and energy costs have made energy efficiency a primary design parameter and constraint for both software and hardware across the entire range of computing devices. This includes diverse platforms such as large-scale high-performance computing (HPC) systems, cloud platforms, data centers, personal workstations, and low-power edge and mobile systems. As advanced machine learning (ML) technologies, complex scientific computations, and edge computing for the Internet of Things (IoT) continue to evolve, heterogeneous systems incorporating computational accelerators (e.g., GPUs, TPUs, FPGAs, and domain-specific architectures) have been developed to enhance the performance of critical computational tasks. While most accelerators aim to achieve higher energy efficiency than traditional CPUs, effectively utilizing these heterogeneous systems to enable end-to-end energy-efficient computation across various applications remains challenging.

Addressing the challenges of energy-efficient heterogeneous computing requires expertise from many domains, including computer science, computer engineering, electrical engineering, mathematics, and specific application areas. Energy efficiency cannot be optimized in isolation, as it affects other vital aspects of computing systems, such as reliability and computational performance. A holistic approach is therefore necessary, one that combines comprehensive study of the problem with the development of integrated solutions. With the emergence of new computational workloads and novel heterogeneous architectures spanning multiple domains, it is imperative for the community to comprehend the intricate relationships among algorithm and application design, system scheduling and resource management, data management, programming models, system software, and hardware architecture design and configuration. Understanding how each of these components contributes to the end-to-end energy efficiency of computation is paramount.

Within this context, the topics of interest of this Special Issue (SI) include, but are not limited to, the following:

  • Evaluating and modeling the energy efficiency of emerging computation workloads on new heterogeneous architectures. This involves assessing and quantifying the energy consumption of diverse computational tasks running on these novel architectures, enabling researchers to gain insight into the energy characteristics and requirements of different workloads (see the brief measurement sketch below).
  • Understanding the trade-offs between energy efficiency and other crucial aspects of computing, such as reliability and performance. This includes exploring the intricate relationships between energy efficiency and these key factors to identify potential trade-offs and synergies; the resulting knowledge will guide the development of strategies that optimize energy efficiency while maintaining acceptable levels of reliability and performance.

  • Developing effective solutions for achieving high energy efficiency in heterogeneous computing systems. This encompasses various approaches, including algorithm- and application-level techniques, system-level optimizations, and software-hardware co-design strategies.

Addressing these research topics will contribute to advancing energy-efficient computing and ensuring the optimal utilization of heterogeneous architectures in a variety of domains.
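As a concrete illustration of the first topic above, the following minimal sketch samples an NVIDIA GPU's cumulative energy counter around a workload using the NVML Python bindings (pynvml). It is only a sketch: it assumes a GPU and driver that expose nvmlDeviceGetTotalEnergyConsumption (Volta-class or newer), and run_workload is a hypothetical placeholder for the computation under study.

```python
# Minimal sketch: measuring the GPU energy of a workload with NVML (pynvml).
# Assumes an NVIDIA GPU whose driver exposes the cumulative energy counter
# nvmlDeviceGetTotalEnergyConsumption (Volta-class GPUs and newer).
import time
import pynvml


def measure_gpu_energy(run_workload, device_index=0):
    """Return (elapsed_seconds, energy_joules) consumed while run_workload() executes."""
    pynvml.nvmlInit()
    try:
        handle = pynvml.nvmlDeviceGetHandleByIndex(device_index)
        # The counter reports millijoules accumulated since the driver was loaded.
        e_start = pynvml.nvmlDeviceGetTotalEnergyConsumption(handle)
        t_start = time.time()

        run_workload()  # hypothetical placeholder for the task under study

        elapsed = time.time() - t_start
        e_end = pynvml.nvmlDeviceGetTotalEnergyConsumption(handle)
        return elapsed, (e_end - e_start) / 1000.0  # mJ -> J
    finally:
        pynvml.nvmlShutdown()


if __name__ == "__main__":
    seconds, joules = measure_gpu_energy(lambda: time.sleep(2.0))
    print(f"{joules:.1f} J over {seconds:.2f} s (avg. power {joules / seconds:.1f} W)")
```

Dividing the measured energy by the work completed (e.g., joules per training step or per solved system) yields the kind of workload-level efficiency metric the first topic calls for; note that this simple before/after reading also captures idle power and any co-located activity on the device.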

Technical Program Committee Member:

Name: Dr. Hadi Zamani Sabzi
Email: [email protected]
Affiliation: Advanced Micro Devices, Inc., Santa Clara, CA 95054, USA
Research Interests: energy-efficient heterogeneous computing

Dr. Jieyang Chen
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Electronics is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • energy-efficient computing
  • heterogeneous computing
  • computational accelerators
  • GPU
  • TPU
  • FPGA
  • high-performance computing
  • cloud computing
  • green computing

Published Papers (1 paper)

Research

14 pages, 2743 KiB  
Article
VerSA: Versatile Systolic Array Architecture for Sparse and Dense Matrix Multiplications
by Juwon Seo and Joonho Kong
Electronics 2024, 13(8), 1500; https://doi.org/10.3390/electronics13081500 - 15 Apr 2024
Abstract
A key part of modern deep neural network (DNN) applications is matrix multiplication. As DNN applications are becoming more diverse, there is a need for both dense and sparse matrix multiplications to be accelerated by hardware. However, most hardware accelerators are designed to accelerate either dense or sparse matrix multiplication. In this paper, we propose VerSA, a versatile systolic array architecture for both dense and sparse matrix multiplications. VerSA employs intermediate paths and SRAM buffers between the rows of the systolic array (SA), thereby enabling an early termination in sparse matrix multiplication with a negligible performance overhead when running dense matrix multiplication. When running sparse matrix multiplication, 256 × 256 VerSA brings performance (i.e., an inverse of execution time) improvement and energy saving by 1.21×–1.60× and 7.5–30.2%, respectively, when compared to the conventional SA. When running dense matrix multiplication, VerSA results in only a 0.52% performance overhead compared to the conventional SA.
(This article belongs to the Special Issue Heterogeneous and Energy-Efficient Computing Systems)
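As a purely software-level illustration of the dense-versus-sparse workload distinction this article targets (it is not the VerSA hardware design), the short sketch below contrasts the two cases using NumPy and SciPy; the 256 × 256 size matches the array size cited above, while the 10% density is chosen arbitrarily.

```python
# Sketch: dense vs. sparse matrix multiplication in software (not the VerSA
# hardware). Illustrates why sparse operands allow most multiply-accumulate
# (MAC) work to be skipped, which is what sparse accelerators exploit.
import numpy as np
from scipy import sparse

n = 256  # matches the 256 x 256 systolic array size cited above
rng = np.random.default_rng(0)

a_dense = rng.standard_normal((n, n))
b_dense = rng.standard_normal((n, n))
c_dense = a_dense @ b_dense          # every output element costs n MACs

# Sparse operand with ~10% non-zeros (density chosen arbitrarily).
a_sparse = sparse.random(n, n, density=0.1, format="csr", random_state=0)
c_sparse = a_sparse @ b_dense        # only stored non-zeros generate work

print("dense  MACs:", n * n * n)
print("sparse MACs:", a_sparse.nnz * n)  # roughly 10x fewer operations
```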