High-Performance Computing and Supercomputing

A special issue of Applied Sciences (ISSN 2076-3417). This special issue belongs to the section "Computing and Artificial Intelligence".

Deadline for manuscript submissions: closed (31 January 2021) | Viewed by 6324

Special Issue Editor


Prof. Dr. Jose Miguel-Alonso
Guest Editor
Department of Computer Architecture and Technology, The University of the Basque Country UPV/EHU
Interests: architecture of HPC systems, focused mainly on the interconnection network; performance evaluation; simulation; scheduling in HPC environments

Special Issue Information

Dear Colleagues,

Now more than ever, supercomputing systems are essential to the advancement of science and engineering. Twice a year we see impressive advances in the Top500/Green500 lists, achieved through high-performance interconnects linking thousands of compute nodes, often supplemented by accelerators/coprocessors. These large-scale systems are rarely used by a single application; they are shared by multiple concurrent users and applications. Additionally, despite implementing different measures to achieve energy efficiency, top-of-the-list supercomputers consume several megawatts. The range of applications for which supercomputers are used is vast (biosciences, Earth sciences, energy, materials, computer architecture, and many more), and we are seeing a convergence between these “classic” applications and newer ones based on what is known as “big data”: the analysis of massive amounts of data in order to extract knowledge from them.

This reality motivates the topics of this Special Issue on High-Performance Computing and Supercomputing of the journal Applied Sciences:

  • current trends and emerging technologies in the architecture of HPC systems, including data storage systems, interconnection networks, accelerators/coprocessors, and energy efficiency measures;
  • task management and scheduling in HPC systems, seeking to optimize performance and energy efficiency;
  • novel uses of HPC, with a special focus on HPC–big data convergence;
  • programming and run-time environments for HPC.

Prof. Dr. Jose Miguel-Alonso
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Applied Sciences is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • architecture of HPC and supercomputing systems
  • storage systems
  • interconnection networks
  • accelerators and co-processors
  • task management and scheduling
  • energy efficiency
  • HPC–big data convergence
  • programming languages and run-time systems

Published Papers (2 papers)


Research

16 pages, 1912 KiB  
Article
Enhancing Robustness of Per-Packet Load-Balancing for Fat-Tree
by Chansook Lim
Appl. Sci. 2021, 11(6), 2664; https://doi.org/10.3390/app11062664 - 17 Mar 2021
Cited by 3 | Viewed by 1844
Abstract
Fat-tree networks have many equal-cost redundant paths between two hosts. To achieve low flow completion time and high network utilization in fat-trees, there have been many efforts to exploit topological symmetry. For example, packet scatter schemes, which spray packets across all equal-cost paths relying on topological symmetry, work well when there is no failure in the network. However, when the symmetry of a network is disturbed by a network failure, packet scatter schemes may suffer massive packet reordering. In this paper, we propose a new load balancing scheme named LBSP (Load Balancing based on Symmetric Path groups) for fat-trees. LBSP partitions equal-cost paths into equal-sized path groups and assigns a path group to each flow so that packets of a flow are forwarded across paths within the selected path group. When a link failure occurs, the flows affected by the failure are assigned an alternative path group which does not contain the failed link. Consequently, packets in one flow can still experience almost the same queueing delay. Simulation results show that LBSP is more robust to network failures compared to the original packet scatter scheme. We also suggest a solution to the queue length differentials between path groups.
(This article belongs to the Special Issue High-Performance Computing and Supercomputing)
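
The abstract describes LBSP only at a high level. The Python sketch below illustrates the general idea (partitioning equal-cost paths into groups, assigning each flow to one group, spraying packets within that group, and moving affected flows to an intact group after a link failure). The class and function names, the hash-based flow-to-group mapping, and the random spraying policy are illustrative assumptions, not the authors' implementation.

    import random
    from typing import Dict, List, Set, Tuple

    Path = Tuple[str, ...]  # a path is a sequence of links, e.g. ("l0", "l4")

    def partition_paths(paths: List[Path], group_size: int) -> List[List[Path]]:
        """Split the equal-cost paths into equal-sized path groups."""
        return [paths[i:i + group_size] for i in range(0, len(paths), group_size)]

    class LBSPBalancer:
        def __init__(self, paths: List[Path], group_size: int):
            self.groups = partition_paths(paths, group_size)
            self.flow_group: Dict[int, int] = {}   # flow id -> assigned group index
            self.failed_links: Set[str] = set()

        def _group_ok(self, gidx: int) -> bool:
            """A group is usable if none of its paths traverses a failed link."""
            return all(self.failed_links.isdisjoint(p) for p in self.groups[gidx])

        def assign_group(self, flow_id: int) -> int:
            """Assign a flow to a path group (hash-based mapping, purely illustrative)."""
            gidx = self.flow_group.get(flow_id, flow_id % len(self.groups))
            if not self._group_ok(gidx):
                # Link failure: move the affected flow to an unaffected group.
                candidates = [g for g in range(len(self.groups)) if self._group_ok(g)]
                gidx = random.choice(candidates) if candidates else gidx
            self.flow_group[flow_id] = gidx
            return gidx

        def pick_path(self, flow_id: int) -> Path:
            """Spray each packet of a flow across the paths of its assigned group."""
            group = self.groups[self.assign_group(flow_id)]
            return random.choice(group)

        def link_failed(self, link: str) -> None:
            self.failed_links.add(link)

    # Usage: 4 equal-cost paths split into 2 groups of 2.
    paths = [("l0", "l4"), ("l1", "l5"), ("l2", "l6"), ("l3", "l7")]
    lb = LBSPBalancer(paths, group_size=2)
    print(lb.pick_path(flow_id=7))  # packet of flow 7 sprayed within its group
    lb.link_failed("l6")            # a failure breaks symmetry for one group
    print(lb.pick_path(flow_id=1))  # affected flows are reassigned to an intact group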

28 pages, 10759 KiB  
Article
HPC Cloud Architecture to Reduce HPC Workflow Complexity in Containerized Environments
by Guohua Li, Joon Woo and Sang Boem Lim
Appl. Sci. 2021, 11(3), 923; https://doi.org/10.3390/app11030923 - 20 Jan 2021
Cited by 7 | Viewed by 3669
Abstract
The complexity of high-performance computing (HPC) workflows is an important issue in the provision of HPC cloud services in most national supercomputing centers. This complexity problem is especially critical because it affects HPC resource scalability, management efficiency, and convenience of use. To solve this problem, while exploiting the advantage of bare-metal-level high performance, container-based cloud solutions have been developed. However, various problems still exist, such as an isolated environment between HPC and the cloud, security issues, and workload management issues. We propose an architecture that reduces this complexity by using Docker and Singularity, which are the container platforms most often used in the HPC cloud field. This HPC cloud architecture integrates both image management and job management, which are the two main elements of HPC cloud workflows. To evaluate the serviceability and performance of the proposed architecture, we developed and implemented a platform in an HPC cluster experiment. Experimental results indicated that the proposed HPC cloud architecture can reduce complexity to provide supercomputing resource scalability, high performance, user convenience, various HPC applications, and management efficiency.
(This article belongs to the Special Issue High-Performance Computing and Supercomputing)
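
As a rough illustration of the kind of containerized workflow the abstract describes (image management followed by job management), the Python sketch below converts a Docker image into a Singularity SIF file and submits a Slurm batch job. It assumes a cluster where the Singularity and Slurm command-line tools are available; the image name, file names, and job parameters are placeholders, not the platform developed by the authors.

    import subprocess
    from pathlib import Path

    def build_sif_from_docker(docker_ref: str, sif_path: Path) -> None:
        """Image management step: convert a Docker image into a Singularity image (SIF)."""
        subprocess.run(
            ["singularity", "build", str(sif_path), f"docker://{docker_ref}"],
            check=True,
        )

    def write_job_script(sif_path: Path, script_path: Path) -> None:
        """Write a minimal Slurm batch script that runs the containerized application."""
        script_path.write_text(
            "#!/bin/bash\n"
            "#SBATCH --job-name=container-demo\n"
            "#SBATCH --nodes=1\n"
            "#SBATCH --time=00:10:00\n"
            f"singularity exec {sif_path} hostname\n"
        )

    def submit_job(script_path: Path) -> None:
        """Job management step: hand the batch script over to the scheduler."""
        subprocess.run(["sbatch", str(script_path)], check=True)

    if __name__ == "__main__":
        sif = Path("app.sif")
        script = Path("job.sh")
        build_sif_from_docker("ubuntu:22.04", sif)  # placeholder image
        write_job_script(sif, script)
        submit_job(script)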
