Performance, Power and Energy-Efficiency Optimization in Computer Architectures

A special issue of Energies (ISSN 1996-1073). This special issue belongs to the section "A1: Smart Grids and Microgrids".

Deadline for manuscript submissions: closed (17 April 2020) | Viewed by 9211

Special Issue Editors


Prof. Leonel Sousa
Guest Editor
Professor of ECE, INESC-ID, Instituto Superior Técnico, Universidade de Lisboa, Lisboa, Portugal
Interests: high performance and parallel computing; micro-architectures for general purpose and specialized processors; computer arithmetic; cryptographic systems; multimedia systems

Dr. Nuno Roma
Guest Editor
INESC-ID, Instituto Superior Técnico, Universidade de Lisboa, Lisboa, Portugal
Interests: computer architectures; specialized and dedicated structures for digital signal processing; heterogeneous processing structures (GPU, FPGA, and hybrid accelerators); parallel processing and high-performance computing systems; energy-aware computing; multimedia systems design

Special Issue Information

Dear Colleagues,

To satisfy the growing demand for higher application performance together with reduced power and energy consumption, performance and energy-efficiency optimization have become fundamental requirements across processor architectures (homogeneous/heterogeneous many-core GPPs), accelerators and co-processors (e.g., APUs, GPUs), and the embedded domain (e.g., SoCs, FPGAs, ASICs). In all of these domains, the development of energy-saving methodologies is a fundamental issue, not only for mobile, hand-held, and wireless applications but also for reaching exascale computing.

Meeting the required performance levels under new thermal, power, and energy constraints calls for innovative solutions that effectively optimize the delivered throughput and/or minimize power/energy consumption. Among many other research directions, this challenge naturally involves the design of high-performance and energy-efficient architectures and communication infrastructures, as well as the development of novel algorithms and tools addressing scheduling, mapping, load balancing, and scalability, together with innovative compilation techniques.

The goal of this Special Issue is to collect novel contributions covering the prevailing issues and prominent challenges related to optimizing performance, power, and energy efficiency in computing devices and systems.

Topics of interest include (but are not limited to) the following:

  • Computer architecture trends for performance and energy efficiency:
    • ISA diversity and morphable structures;
    • Run-time reconfiguration/adaptation and dynamic scalability;
    • CPU accelerator co-design (GPUs, APUs, FPGAs, etc.);
    • Heterogeneous and parallel processing architectures;
    • Approximate computing techniques and architectures;
    • Neuromorphic architectures.
  • Energy/power management and control:
    • Run-time power/energy monitoring and sensing;
    • Performance, power, energy, and heat/temperature modeling;
    • Dynamic voltage and frequency scaling (DVFS) (see the illustrative sketch after this list);
    • Power/clock gating strategies;
    • Performance vs. power/energy scaling and management.
  • Tools and algorithms:
    • Programming languages, compilers, and models for energy-aware computing;
    • Profiling and simulation tools for heat/power/energy estimation;
    • Scheduling, mapping, and task/thread migration policies for performance and power/energy optimization;
    • Performance- and energy-aware resource management;
    • Operating system support and energy management tools.
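
As an illustrative anchor for the management topics above, the following minimal Python sketch (an illustration only, not part of the call) inspects and switches DVFS policies through the Linux cpufreq sysfs interface; it assumes a kernel exposing cpufreq for cpu0, and writing the governor requires root privileges.

    # Minimal DVFS sketch: query and switch the frequency governor of cpu0
    # via the Linux cpufreq sysfs interface (writing requires root).
    CPUFREQ = "/sys/devices/system/cpu/cpu0/cpufreq"

    def read(name):
        with open(f"{CPUFREQ}/{name}") as f:
            return f.read().strip()

    def set_governor(governor):
        with open(f"{CPUFREQ}/scaling_governor", "w") as f:
            f.write(governor)

    print("available governors:", read("scaling_available_governors"))
    print("current governor:   ", read("scaling_governor"))
    print("current freq (kHz): ", read("scaling_cur_freq"))
    set_governor("powersave")  # e.g., trade peak performance for energy

Analogous sysfs entries under /sys/class/powercap (the RAPL hierarchy on Intel platforms) expose the run-time energy counters referred to in the monitoring topics.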

Prof. Leonel Sousa
Dr. Nuno Roma
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once registered, proceed to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, as well as short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Energies is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Published Papers (3 papers)

Research

24 pages, 4399 KiB  
Article
Performance and Energy Trade-Offs for Parallel Applications on Heterogeneous Multi-Processing Systems
by A. M. Coutinho Demetrios, Daniele De Sensi, Arthur Francisco Lorenzon, Kyriakos Georgiou, Jose Nunez-Yanez, Kerstin Eder and Samuel Xavier-de-Souza
Energies 2020, 13(9), 2409; https://doi.org/10.3390/en13092409 - 11 May 2020
Cited by 10 | Viewed by 3421
Abstract
This work proposes a methodology to find performance and energy trade-offs for parallel applications running on Heterogeneous Multi-Processing systems with a single instruction-set architecture. These offer flexibility in the form of different core types and voltage and frequency pairings, defining a vast design space to explore. Therefore, for a given application, choosing a configuration that optimizes the performance and energy consumption is not straightforward. Our method proposes novel analytical models for performance and power consumption whose parameters can be fitted using only a few strategically sampled offline measurements. These models are then used to estimate an application’s performance and energy consumption for the whole configuration space. In turn, these offline predictions define the choice of estimated Pareto-optimal configurations of the model, which are used to inform the selection of the configuration that the application should be executed on. The methodology was validated on an ODROID-XU3 board for eight programs from the PARSEC Benchmark, Phoronix Test Suite and Rodinia applications. The generated Pareto-optimal configuration space represented a 99% reduction of the universe of all available configurations. Energy savings of up to 59.77%, 61.38% and 17.7% were observed when compared to the performance, ondemand and powersave Linux governors, respectively, with higher or similar performance.
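
To make the Pareto-selection step described above concrete, here is a minimal Python sketch (a generic illustration with made-up numbers, not the authors' models or code): given predicted execution time and energy for a few hypothetical big.LITTLE-style configurations, it keeps only the non-dominated ones.

    # Generic Pareto-front selection over (time, energy) predictions.
    # Configuration names and numbers below are hypothetical.

    def pareto_front(configs):
        """Keep configurations not dominated in both time and energy."""
        front = []
        for name, (time, energy) in configs.items():
            dominated = any(
                t <= time and e <= energy and (t < time or e < energy)
                for other, (t, e) in configs.items() if other != name
            )
            if not dominated:
                front.append((name, time, energy))
        return sorted(front, key=lambda entry: entry[1])

    predictions = {                      # (predicted time [s], energy [J])
        "4xbig@2.0GHz":    (5.0, 80.0),
        "4xbig@1.8GHz":    (6.0, 90.0),  # dominated by 4xbig@2.0GHz
        "2xbig@1.6GHz":    (7.5, 55.0),
        "4xLITTLE@1.4GHz": (12.0, 30.0),
        "1xLITTLE@0.6GHz": (40.0, 28.0),
    }

    for name, t, e in pareto_front(predictions):
        print(f"{name}: {t:.1f} s, {e:.1f} J")

A run-time governor (or the user) can then pick from this reduced set according to the desired performance/energy balance.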

20 pages, 744 KiB  
Article
AHEAD: Automatic Holistic Energy-Aware Design Methodology for MLP Neural Network Hardware Generation in Proactive BMI Edge Devices
by Nan-Sheng Huang, Yi-Chung Chen, Jørgen Christian Larsen and Poramate Manoonpong
Energies 2020, 13(9), 2180; https://doi.org/10.3390/en13092180 - 1 May 2020
Cited by 2 | Viewed by 2357
Abstract
The prediction of a high-level cognitive function based on a proactive brain–machine interface (BMI) control edge device is an emerging technology for improving the quality of life for disabled people. However, maintaining the stability of multiunit neural recordings is made difficult by the nonstationary nature of neurons and can affect the overall performance of proactive BMI control. Thus, it requires regular recalibration to retrain a neural network decoder for proactive control. However, retraining may lead to changes in the network parameters, such as the network topology. In terms of the hardware implementation of the neural decoder for real-time and low-power processing, it takes time to modify or redesign the hardware accelerator. Consequently, handling the engineering change of the low-power hardware design requires substantial human resources and time. To address this design challenge, this work proposes AHEAD: an automatic holistic energy-aware design methodology for multilayer perceptron (MLP) neural network hardware generation in proactive BMI edge devices. By taking a holistic analysis of the proactive BMI design flow, the approach makes judicious use of the intelligent bit-width identification (BWID) and configurable hardware generation, which autonomously integrate to generate the low-power hardware decoder. The proposed AHEAD methodology begins with the trained MLP parameters and golden datasets and produces an efficient hardware design in terms of performance, power, and area (PPA) with the least loss of accuracy. The results show that the proposed methodology is up to 4X faster in performance, 3X lower in terms of power consumption, and achieves a 5X reduction in area resources, with exact accuracy, compared to floating-point and half-floating-point designs on a field-programmable gate array (FPGA), which makes it a promising design methodology for proactive BMI edge devices.
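
As a rough illustration of the bit-width identification idea (a simplified sketch under stated assumptions, not the AHEAD flow itself, which is driven by accuracy on golden datasets), the Python snippet below searches for the smallest fixed-point fractional bit-width that keeps the weight quantization error of a hypothetical MLP layer under a tolerance.

    # Sketch: pick the fewest fractional bits for fixed-point MLP weights
    # such that the quantization error stays below a tolerance (a proxy
    # for the accuracy-driven search described in the abstract).
    import numpy as np

    def quantize(weights, frac_bits):
        scale = 2.0 ** frac_bits
        return np.round(weights * scale) / scale

    def smallest_frac_bits(weights, max_bits=16, tol=1e-3):
        for bits in range(1, max_bits + 1):
            if np.max(np.abs(weights - quantize(weights, bits))) < tol:
                return bits
        return max_bits

    rng = np.random.default_rng(0)
    layer = rng.normal(scale=0.5, size=(64, 32))   # hypothetical layer weights
    print("fractional bits needed:", smallest_frac_bits(layer))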

19 pages, 2708 KiB  
Article
Containergy—A Container-Based Energy and Performance Profiling Tool for Next Generation Workloads
by Wellington Silva-de-Souza, Arman Iranfar, Anderson Bráulio, Marina Zapater, Samuel Xavier-de-Souza, Katzalin Olcoz and David Atienza
Energies 2020, 13(9), 2162; https://doi.org/10.3390/en13092162 - 1 May 2020
Cited by 6 | Viewed by 2633
Abstract
Run-time profiling of software applications is key to energy efficiency. Even the most optimized hardware combined with optimally designed software may become inefficient if operated poorly. Moreover, the diversification of modern computing platforms and the broadening of their run-time configuration space make the task of optimally operating software ever more complex. With the growing financial and environmental impact of data center operation and cloud-based applications, optimal software operation becomes increasingly relevant to existing and next-generation workloads. In order to guide software operation towards energy savings, energy and performance data must be gathered to provide a meaningful assessment of the application behavior under different system configurations, which is not appropriately addressed in existing tools. In this work, we present Containergy, a new performance evaluation and profiling tool that uses software containers to perform application run-time assessment, providing energy and performance profiling data with negligible overhead (below 2%). It is focused on energy efficiency for next-generation workloads. Practical experiments with emerging workloads, such as video transcoding and machine-learning image classification, are presented. The profiling results are analyzed in terms of performance and energy savings under a Quality-of-Service (QoS) perspective. For video transcoding, we verified that wrong choices in the configuration space can lead to an increase of over 300% in energy consumption for the same task and operational levels. Considering the image classification case study, the results show that the choice of the machine-learning algorithm and model significantly affects the energy efficiency. Profiling datasets of AlexNet and SqueezeNet, which present similar accuracy, indicate that the latter provides energy savings of 55.8% compared to the former.
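
For readers new to this kind of profiling, the sketch below (a simplified assumption, not the Containergy tool itself) measures wall-clock time and package energy for a containerized workload on Linux using the Intel RAPL powercap counters; the RAPL domain path is standard on Intel systems, the container image name is hypothetical, and reading the counters may require root.

    # Sketch: package energy and runtime of a containerized workload,
    # read from the Intel RAPL powercap sysfs counters (Linux, Intel CPUs).
    import subprocess, time

    RAPL = "/sys/class/powercap/intel-rapl:0"   # package-0 domain

    def read_uj(name):
        with open(f"{RAPL}/{name}") as f:
            return int(f.read())

    def measure(cmd):
        """Run cmd; return (elapsed seconds, consumed energy in joules)."""
        wrap = read_uj("max_energy_range_uj")
        e0, t0 = read_uj("energy_uj"), time.time()
        subprocess.run(cmd, check=True)
        e1, t1 = read_uj("energy_uj"), time.time()
        return t1 - t0, ((e1 - e0) % wrap) / 1e6  # handle counter wraparound

    # Hypothetical transcoding container:
    elapsed, joules = measure(["docker", "run", "--rm", "my-transcode-image"])
    print(f"{elapsed:.1f} s, {joules:.1f} J, {joules / elapsed:.1f} W average")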