Advanced AI Hardware Designs Based on FPGAs
A special issue of Electronics (ISSN 2079-9292). This special issue belongs to the section "Artificial Intelligence Circuits and Systems (AICAS)".
Deadline for manuscript submissions: closed (31 July 2021)
Special Issue Editor
Prof. Dr. Joo-Young Kim
Interests: VLSI design; computer architecture; data center architectures; FPGA; AI accelerators; domain specific processors; hardware/software co-design; distributed machine learning; processing-in-memory (PIM)
Special Issue Information
Dear Colleagues,
Machine learning (ML) and artificial intelligence (AI) technologies have revolutionized how computers perform cognitive tasks by learning from massive amounts of observed data. As more industries adopt these technologies, demand is growing rapidly for new hardware that enables faster and more energy-efficient processing of AI workloads.
In recent years, traditional hardware vendors such as Intel and Nvidia, as well as start-up companies such as Graphcore, Wave Computing, and Habana, have competed to offer the best computing platform for complex ML algorithms. Although the GPU remains the preferred computing platform thanks to its large user base and well-established programming interface, its top spot is not guaranteed, as GPUs often suffer from low hardware utilization and poor energy efficiency.
Beyond energy efficiency and ease of programming, the ability to keep up with fast-changing AI/ML algorithms is another central topic in AI hardware. The FPGA has a clear advantage on this point, as it can be reprogrammed quickly to amend its processing within a relatively low power budget. In this Special Issue, we invite the latest developments in advanced FPGA-based AI hardware design that demonstrate the device's strengths over other types of hardware, such as hardware/software co-design, customization, and scalability.
Topics include but are not limited to:
- DNN inference/training accelerators on FPGAs;
- Multi-FPGA approaches for scalable ML acceleration;
- Distributed deep learning architecture on multiple FPGAs;
- Hardware/Software co-design for energy-efficient ML on FPGA;
- Narrow-precision and efficient floating-point representation on FPGA for ML applications (a brief illustration follows this list);
- Design automation from the ML algorithm to FPGAs;
- Soft DNN processor to cover a wide range of ML applications.
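To make the narrow-precision topic above more concrete, here is a minimal sketch, in C, of one representation commonly mapped onto FPGA resources: truncating IEEE 754 single-precision floats to bfloat16 (1 sign bit, 8 exponent bits, 7 mantissa bits), which halves memory footprint and datapath width. This is our illustration, not part of the call; the helper names float_to_bfloat16 and bfloat16_to_float are hypothetical.

```c
/* Illustrative sketch: float32 <-> bfloat16 conversion.
 * NaN inputs are not handled specially here, for brevity. */
#include <stdint.h>
#include <string.h>
#include <stdio.h>

static uint16_t float_to_bfloat16(float f) {   /* hypothetical helper */
    uint32_t bits;
    memcpy(&bits, &f, sizeof bits);            /* bit-level view, no aliasing UB */
    /* Round to nearest even, then keep the top 16 bits. */
    bits += 0x00007FFFu + ((bits >> 16) & 1u);
    return (uint16_t)(bits >> 16);
}

static float bfloat16_to_float(uint16_t h) {   /* hypothetical helper */
    uint32_t bits = (uint32_t)h << 16;         /* zero-fill the dropped mantissa bits */
    float f;
    memcpy(&f, &bits, sizeof f);
    return f;
}

int main(void) {
    float x = 3.14159265f;
    uint16_t h = float_to_bfloat16(x);
    /* Prints 3.1415927 -> 0x4049 -> 3.1406250: only ~2-3 decimal digits survive. */
    printf("%.7f -> 0x%04X -> %.7f\n", x, h, bfloat16_to_float(h));
    return 0;
}
```

Round-to-nearest-even, as above, is the usual choice because plain truncation introduces a systematic downward bias that can accumulate during training.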
Prof. Dr. Joo-Young Kim
Guest Editor
Manuscript Submission Information
Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.
Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Electronics is an international peer-reviewed open access semimonthly journal published by MDPI.
Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.
Keywords
- DNN inference/training accelerators on FPGAs
- Multi-FPGA approaches for scalable ML acceleration
- Distributed deep learning architecture on multiple FPGAs
- Hardware/Software co-design for energy-efficient ML on FPGA
- Narrow-precision and efficient floating-point representation on FPGA for ML applications
- Design automation from the ML algorithm to FPGAs
- Soft DNN processor to cover a wide range of ML applications
Benefits of Publishing in a Special Issue
- Ease of navigation: Grouping papers by topic helps scholars navigate broad-scope journals more efficiently.
- Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
- Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
- External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
- e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.
Further information on MDPI's Special Issue policies can be found here.