Hardware Design and Verification with Large Language Models: A Scoping Review, Challenges, and Open Issues
Abstract
1. Introduction
- Identification of core applications: we detail the fundamental ways in which LLMs are currently applied in hardware design, debugging, and verification, providing a solid foundation to understand their impact.
- Analysis of challenges: this paper presents a critical analysis of the inherent challenges in applying LLMs to hardware design, such as data scarcity, the need for specialized training, and integration with existing tools.
- Future directions and open issues: we outline potential future applications of LLMs in hardware design and verification and discuss methodological improvements to bridge the identified gaps.
1.1. A Brief History of LLMs
1.2. State-of-the-Art in the Application of LLMs in Different Domains
1.3. How Do LLMs Facilitate Hardware Design and Verification?
- Improved Context Handling: GPT-4o can process significantly larger input contexts (up to 128k tokens) than GPT-3.5. This capability is critical for analyzing complex, multi-file HDL projects, where maintaining contextual relationships between modules, signals, and timing constraints is essential.
- Higher Code Accuracy and Reliability: GPT-4o demonstrates superior accuracy in generating and debugging HDL code. Its advanced reasoning capabilities reduce the syntax errors and hallucinations that were common in earlier models. In benchmark comparisons against earlier code models such as OpenAI Codex (https://openai.com/index/openai-codex/, accessed on 29 December 2024), GPT-4o achieved higher pass rates on code generation tasks across diverse programming languages, including Verilog and VHDL.
- Integration with Multi-Modal Systems: GPT-4o’s multi-modal capabilities allow it to process text and visual information simultaneously. In hardware verification, this means it can analyze waveforms, simulation results, and timing diagrams alongside textual descriptions, enhancing its ability to detect errors or inconsistencies.
- Applications in Code Verification: GPT-4o can enhance formal verification processes by assisting with property generation for assertions and test cases. For example, GPT-4o can translate specifications into formal properties expressed as SystemVerilog Assertions (SVAs), as sketched below.
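As a concrete, hedged illustration: given a requirement such as "after req is asserted, ack must follow within four clock cycles", an LLM might emit a property like the following. The module, signal, and property names here are hypothetical, and any generated assertion would still need review by a verification engineer before use.

```systemverilog
// Hypothetical checker for: "after req is asserted, ack must
// follow within four clock cycles, unless reset is active."
module req_ack_checker (
  input logic clk,
  input logic rst_n,
  input logic req,
  input logic ack
);
  // The kind of SVA property an LLM could derive from the sentence above.
  property p_req_ack;
    @(posedge clk) disable iff (!rst_n)
      req |-> ##[1:4] ack;
  endproperty

  a_req_ack: assert property (p_req_ack)
    else $error("ack did not follow req within 4 cycles");
endmodule
```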
2. Methods
2.1. Review Question
2.2. Eligibility Criteria
2.3. Exclusion Criteria
2.4. Search Strategy
2.5. Data Extraction and Data Synthesis
3. Results
3.1. Selection of Sources
3.2. Synthesis of Results
4. Discussion
4.1. Overview of LLMs in Hardware Design
4.2. Different Categories of LLMs for Hardware Design and Verification
4.2.1. Hardware Design
4.2.2. Hardware/Software Codesign
4.2.3. Hardware Accelerators
4.2.4. Hardware Security
4.2.5. Hardware Debugging
4.2.6. Hardware Verification
4.3. Use Cases and Successful Stories
5. Challenges
5.1. Training Challenges
5.2. Adaptation to Hardware-Specific Vocabulary
5.3. Explainability and Interpretability
5.4. Integration with Existing Design Tools
5.5. Scalability
6. Open Issues
6.1. Unexplored Applications
6.2. Research Gaps
- HLS (High-Level Synthesis)
- Bit-width optimization: minimizing the width of variables without sacrificing accuracy [198].
- Control flow management: managing control flow statements (if-else, switch-case) for hardware synthesis [199].
- Interface generation: creating interfaces for communication between blocks during synthesis [202].
- HDL Generation
- Synthesis-ready HDL code generation: automatically generating Verilog or VHDL that is ready for synthesis [203].
- Parameterized HDL code: creating reusable code with configurable parameters [204] (see the sketch after this group).
- Hierarchical module design: automatically generating modular and hierarchical HDL blocks [212].
- Code formatting and cleanup: ensuring HDL code is clean, well-formatted, and error-free (https://www.einfochips.com, accessed on 29 December 2024, https://blogs.sw.siemens.com, accessed on 29 December 2024).
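To make the "parameterized, synthesis-ready" gap concrete, the following is a minimal sketch (not taken from any cited tool) of the kind of output such generation should target: a register pipeline whose width and depth are elaboration-time parameters, so one description covers many instances.

```systemverilog
// Illustrative parameterized register pipeline; WIDTH and DEPTH are
// resolved at elaboration, keeping the code reusable and synthesizable.
module pipe_reg #(
  parameter int WIDTH = 8,
  parameter int DEPTH = 2
) (
  input  logic             clk,
  input  logic             rst_n,
  input  logic [WIDTH-1:0] din,
  output logic [WIDTH-1:0] dout
);
  logic [WIDTH-1:0] stage [DEPTH];

  always_ff @(posedge clk or negedge rst_n) begin
    if (!rst_n) begin
      for (int i = 0; i < DEPTH; i++) stage[i] <= '0';
    end else begin
      stage[0] <= din;
      for (int i = 1; i < DEPTH; i++) stage[i] <= stage[i-1];
    end
  end

  assign dout = stage[DEPTH-1];
endmodule
```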
- Component Integration
- Interface synthesis: automatically generating interfaces (e.g., Advanced eXtensible Interface (AXI), Advanced Microcontroller Bus Architecture specification (AMBA)) between hardware modules [214] (a simplified interface sketch follows this group).
- Signal mapping: automating the signal connection and mapping between modules [215].
- Inter-module communication: managing and optimizing data and control flow between different hardware blocks [216].
- Bus arbitration: designing efficient bus systems for shared resources [217].
- Protocol handling: automating protocol management for communication between modules (https://www.mhtechin.com, accessed on 29 December 2024).
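As a hedged sketch of what interface synthesis might emit, the SystemVerilog interface below models a simple valid/ready stream with producer and consumer modports. It is deliberately far simpler than AXI or AMBA, and the names are illustrative only.

```systemverilog
// Simplified valid/ready streaming interface with directional modports;
// full AXI/AMBA interfaces add addressing, bursts, and response channels.
interface stream_if #(parameter int WIDTH = 32) (input logic clk);
  logic [WIDTH-1:0] data;
  logic             valid;
  logic             ready;

  modport producer (output data, valid, input ready);
  modport consumer (input data, valid, output ready);
endinterface
```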
- Design Optimization
- Data path optimization: optimizing the data path for reduced latency and better resource utilization [230].
- FSM Design
- Hierarchical FSM design: creating complex FSMs using a hierarchical approach [239].
- Power-aware FSM design: creating FSMs optimized for low power consumption [242].
- State encoding optimization: optimizing state encodings (e.g., one-hot, binary) for efficiency [243] (see the sketch after this group).
- Timing-aware FSM design: ensuring that FSMs meet timing constraints [244].
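The sketch below, written for illustration only, shows the kind of artifact these items describe: a small FSM with an explicit one-hot encoding, a trade-off (more flip-flops, simpler next-state logic) that an encoding-aware generator would choose deliberately.

```systemverilog
// Illustrative three-state FSM with explicit one-hot state encoding.
module handshake_fsm (
  input  logic clk,
  input  logic rst_n,
  input  logic start,
  input  logic done,
  output logic busy
);
  typedef enum logic [2:0] {
    IDLE = 3'b001,
    RUN  = 3'b010,
    DONE = 3'b100
  } state_t;

  state_t state, next;

  // Registered state with asynchronous active-low reset.
  always_ff @(posedge clk or negedge rst_n)
    if (!rst_n) state <= IDLE;
    else        state <= next;

  // Next-state logic; one-hot encoding keeps this decode shallow.
  always_comb begin
    next = state;
    case (state)
      IDLE:    if (start) next = RUN;
      RUN:     if (done)  next = DONE;
      DONE:               next = IDLE;
      default:            next = IDLE;
    endcase
  end

  assign busy = (state == RUN);
endmodule
```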
- DSE (Design Space Exploration)
- Pareto-optimal design space exploration: exploring the design space to identify Pareto-optimal trade-offs between power, area, and performance [245].
- Multi-objective optimization: optimizing designs for multiple conflicting objectives (e.g., power vs. performance) [246].
- Parametric design exploration: exploring various parameter configurations to achieve optimal results [247].
- Constraint-driven design: ensuring that all design options meet predefined constraints [248].
- Scenario-based DSE: exploring designs based on different use-case scenarios (e.g., high-performance vs. low-power modes) [252].
- Power-Aware Design
- Timing Analysis and Optimization
- Static Timing Analysis (STA): automatically analyzing and optimizing timing paths [264].
- Critical path analysis: identifying and optimizing the critical path to ensure timing closure [265].
- Clock skew minimization: optimizing the clock distribution to minimize the skew in the design [266].
- Hold and setup time optimization: ensuring that all paths meet the hold and setup time constraints [269].
- Path delay optimization: shortening the longest paths in the design to improve performance [270].
- Floorplanning and Physical Design
- Component placement optimization: placing components to minimize delays and area usage [271].
- Power grid design: designing power distribution networks to ensure reliable power delivery [272].
- Routing congestion management: optimizing placement to avoid routing congestion and improve performance [273].
- Timing-aware floorplanning: ensuring that critical timing paths are optimized in the placement process [276].
- Low-Power Design Techniques
- Hardware Accelerators
- CTS (Clock Tree Synthesis)
- Clock skew minimization: ensuring that clock signals arrive at all components simultaneously to minimize skew [299].
- Power-aware CTS: designing clock trees to minimize power consumption [300].
- Multi-domain clock tree design: managing multiple clock domains to ensure efficient clock distribution [301].
- Clock buffer insertion: strategically placing buffers to reduce clock delay and skew [302].
- CTS for low-power designs: applying techniques that reduce clock power consumption, such as multi-threshold designs or clock gating [305] (a clock-gating sketch follows this group).
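The structure behind clock gating is small enough to sketch directly. The module below is a behavioral model of a latch-based integrated clock-gating (ICG) cell; production flows instantiate a characterized library cell instead, but the glitch-free enable idea is the same.

```systemverilog
// Behavioral latch-based clock gate: the enable is captured while the
// clock is low, so the gated clock cannot glitch mid-cycle.
module clk_gate (
  input  logic clk,
  input  logic enable,
  output logic gclk
);
  logic en_latched;

  always_latch
    if (!clk) en_latched <= enable;

  assign gclk = clk & en_latched;
endmodule
```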
- Chip Architecture Design
- Physical Layout and Routing
- Area optimization: minimizing the total area occupied by the physical layout of the components [334].
- ASIC Design
- Standard cell library selection: choosing the right standard cell libraries for performance, power, and area trade-offs [335].
- Custom cell design: designing custom logic cells optimized for specific performance and area requirements [336].
- Power grid optimization: designing efficient power distribution networks across ASICs [337].
- Packaging and I/O design: optimizing external interfaces and packaging for the ASIC [344].
- Fault-Tolerant Design
- Verification Plan Generation
- Random test generation: creating random test sequences to stress the design and catch edge cases [359].
- Constraint-based verification: defining constraints for test generation to ensure valid input/output scenarios [360] (see the sketch after this group).
- UVM and SystemVerilog: implementing advanced verification techniques using UVM and SystemVerilog [367].
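A minimal sketch of the constrained-random style referenced above, with hypothetical class and field names: the constraint block keeps generated stimulus inside the legal input space, while the distribution constraint biases the mix toward writes.

```systemverilog
// Illustrative constrained-random transaction and a tiny test loop.
class bus_txn;
  rand bit [7:0]  addr;
  rand bit [31:0] data;
  rand bit        write;

  // Keep addresses in a (hypothetical) mapped region.
  constraint c_legal { addr inside {[8'h10 : 8'hEF]}; }
  // Bias stimulus 70/30 toward writes.
  constraint c_mix   { write dist {1 := 7, 0 := 3}; }
endclass

module tb;
  initial begin
    bus_txn t = new();
    repeat (5) begin
      if (!t.randomize()) $error("randomization failed");
      $display("addr=%h data=%h write=%b", t.addr, t.data, t.write);
    end
  end
endmodule
```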
6.3. Methodological Improvements
- (1) Domain-specific understanding and contextual knowledge: LLMs need a deeper and more precise understanding of hardware design languages, methodologies, and tools. While LLMs excel at NLP, they often lack in-depth knowledge of domain-specific languages like Verilog, VHDL, SystemVerilog, and UVM. To truly aid hardware designers, LLMs must be fine-tuned on vast datasets of HDLs, verification code, design documentation, and real-world projects. Additionally, understanding the context of a design, such as the specific requirements of a given project (e.g., low-power design, high performance, etc.), will enable LLMs to make more relevant suggestions during both the design and verification processes.
- (2) Enhanced formal reasoning capabilities: LLMs need to improve their formal reasoning abilities, especially for tasks such as formal verification, model checking, and constraint satisfaction, which are essential to hardware verification. Hardware design often involves proving that a design meets certain formal specifications, such as safety or liveness properties. Currently, LLMs struggle with formal logic and mathematical rigor. Enhancing their capability to handle formal methods—like understanding temporal logic, SVA, and finite state machines—would significantly improve their utility in verification tasks. This would allow LLMs to automatically generate and validate formal properties from natural language specifications, ensuring that hardware designs conform to their intended behavior.
- (3) Code generation for synthesis-ready HDL: While LLMs can generate HDL code from behavioral descriptions, they must become more adept at creating synthesis-ready code. This requires not only understanding how to describe hardware behavior but also generating optimized code that meets the constraints of modern hardware synthesis tools. To achieve this, LLMs need reinforcement in optimizing generated code for real-world constraints such as timing, power, and area. Incorporating feedback from synthesis and place-and-route tools into the LLM’s training data can improve its ability to generate resource-efficient, high-performance HDL designs.
- (4) Design space exploration and optimization: One of the critical tasks in hardware design is balancing multiple design constraints—performance, power, area, and cost—through DSE. LLMs should be reinforced with advanced optimization techniques and predictive modeling capabilities to help guide the exploration of various design parameters. Reinforcement learning approaches combined with LLMs can enable them to predict the impact of parameter choices on design metrics and suggest optimal configurations based on trade-offs. By enhancing LLMs’ ability to navigate complex design spaces, designers could receive better support in exploring Pareto-optimal designs that balance competing objectives.
- (5) Error detection and debugging: LLMs can play a crucial role in identifying bugs and design flaws, but their error detection capabilities must be reinforced to be more effective in the hardware domain. This includes being able to recognize subtle errors in HDL, such as incorrect state machine transitions, misaligned clock domain crossings, or resource contention. LLMs need to be trained on common hardware design errors and verification failures, improving their ability to offer precise feedback on potential issues. Additionally, LLMs should be reinforced with an understanding of simulation and synthesis reports, enabling them to trace errors back to their root causes and provide actionable debugging suggestions.
- (6) Verification automation and coverage analysis: Verification is one of the most time-consuming aspects of hardware development. To improve LLMs’ contributions in this domain, they should be reinforced with better tools for generating comprehensive testbenches, performing functional coverage analysis, and creating directed random tests. Specifically, LLMs should be enhanced to recognize coverage gaps in verification plans and generate appropriate tests to fill those gaps, ensuring that designs are thoroughly tested. Furthermore, improving the LLM’s ability to integrate with simulation tools, extract meaningful insights from waveform data, and recommend additional verification steps will reduce the manual burden on verification teams (a functional-coverage sketch follows this list).
- (7) Learning from hardware development iterations: LLMs should be able to learn from previous iterations of a hardware design to assist in continuous improvement. By analyzing successive versions of a design, LLMs can identify what changes led to better performance, lower power consumption, or reduced area. Reinforcing this ability would enable LLMs to provide context-specific recommendations based on past design choices, helping hardware designers optimize their designs more effectively across multiple development cycles. This capability could also be extended to learning from community-wide datasets of hardware designs to suggest best practices and design patterns that are tailored to specific project goals.
- (8) Interaction with EDA tools and integration into workflows: For LLMs to be more effective in hardware design, they need tighter integration with EDA tools and workflows. LLMs should be able to interface with common hardware design tools (such as simulation, synthesis, and formal verification tools), extract relevant data, and act on that information in real time. By integrating LLMs with these tools, designers can receive real-time feedback on design choices, simulation results, or synthesis reports. LLMs should also be capable of automating repetitive tasks within the EDA workflow, such as setting up project configurations, running simulations, and analyzing results, reducing the overall design time.
- (9) Memory and state tracking for large-scale projects: Hardware design projects can span months or years and involve numerous changes over time. LLMs should be reinforced with better long-term memory and state-tracking capabilities so that they can keep track of ongoing changes across large projects. This would allow LLMs to assist designers by recalling relevant design decisions, tracking the evolution of specific modules or components, and ensuring consistency across the entire design. This state-tracking ability is crucial for handling complex projects with multiple designers, where coordination and memory of past decisions are key to success.
- (10) Security and safety in hardware design: LLMs should be enhanced to understand and enforce security and safety requirements during the design process. With the growing need for hardware security, LLMs must be able to detect potential vulnerabilities, such as insecure communication protocols or improper handling of sensitive data. Similarly, in safety-critical designs, such as automotive or aerospace systems, LLMs need reinforcement to ensure compliance with safety standards and protocols. By improving LLMs’ capabilities in these areas, designers can be alerted to potential security risks and safety violations early in the design phase.
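To ground point (6), the fragment below shows the kind of functional-coverage model such tooling would reason about: unhit bins in a covergroup are exactly the "coverage gaps" an LLM would be asked to target with new tests. The signal and bin names are illustrative only.

```systemverilog
// Minimal functional-coverage model; simulators report which bins
// (and crosses) remain unhit, exposing gaps in the verification plan.
module coverage_demo (
  input logic       clk,
  input logic [3:0] opcode,
  input logic       mode
);
  covergroup op_cov @(posedge clk);
    cp_op : coverpoint opcode {
      bins alu[]  = {[0:7]};   // one bin per ALU opcode
      bins mem    = {[8:9]};
      bins branch = {[10:15]};
    }
    cp_mode   : coverpoint mode;
    op_x_mode : cross cp_op, cp_mode;
  endgroup

  op_cov cov = new();
endmodule
```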
7. Conclusions
7.1. Summary of Findings
- (1) Core applications: LLMs are applied to various tasks, including generating HDL code, optimizing design parameters, and automating verification processes like test case generation and bug detection. They can also enhance documentation and project management.
- (2) Challenges: Despite the advancements, challenges such as data scarcity, the need for specialized training, and integration with existing tools remain. The complexity of hardware design requires fine-tuning LLMs for specific tasks.
- (3) Future directions: The paper suggests potential areas for future research, including improving LLM integration in hardware design workflows, refining LLM-generated outputs, and addressing open issues such as handling high-dimensional data and design complexity.
7.2. Implications and Recommendations
- (1) Domain-specific LLMs: Developing LLMs tailored to the specific needs of hardware design and verification could enhance their effectiveness. This includes models trained on HDL, circuit layouts, and specialized verification protocols.
- (2) Improving verification capabilities: Expanding the capacity of LLMs to automatically verify hardware designs through formal methods and simulation could reduce the burden of manual verification and lead to more robust, error-free hardware.
- (3) Hybrid systems: Combining LLMs with other AI and traditional formal verification techniques could result in hybrid systems that leverage the strengths of both approaches, improving the accuracy and reliability of hardware designs.
- (4) Explainability and interpretability: Ensuring that LLM-generated hardware descriptions are transparent and interpretable by engineers is critical. Future research could focus on developing methods to make the reasoning behind LLM outputs more understandable and trustworthy.
- (5) Real-world applications: More real-world case studies are needed to evaluate the practical utility of LLMs in large-scale hardware projects. This will provide insights into the models’ performance in complex, industrial settings and help identify further areas of improvement.
- (6) Data privacy and security: Addressing concerns around the secure use of LLMs in proprietary hardware design environments, including techniques for ensuring that sensitive data remains protected during model training and deployment, will be crucial for industrial adoption.
Supplementary Materials
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
References
- Pinker, S. The Language Instinct: How the Mind Creates Language; Penguin UK: New York, NY, USA, 2003. [Google Scholar]
- Hauser, M.D.; Chomsky, N.; Fitch, W.T. The faculty of language: What is it, who has it, and how did it evolve? Science 2002, 298, 1569–1579. [Google Scholar] [CrossRef] [PubMed]
- Turing, A.M. Computing Machinery and Intelligence; Springer: Berlin/Heidelberg, Germany, 2009. [Google Scholar]
- Chernyavskiy, A.; Ilvovsky, D.; Nakov, P. Transformers: “the end of history” for natural language processing? In Proceedings of the Machine Learning and Knowledge Discovery in Databases. Research Track: European Conference, ECML PKDD 2021, Bilbao, Spain, 13–17 September 2021; Proceedings, Part III 21. Springer: Berlin/Heidelberg, Germany, 2021; pp. 677–693. [Google Scholar]
- Zhao, W.X.; Zhou, K.; Li, J.; Tang, T.; Wang, X.; Hou, Y.; Min, Y.; Zhang, B.; Zhang, J.; Dong, Z.; et al. A survey of large language models. arXiv 2023, arXiv:2303.18223. [Google Scholar]
- Bommasani, R.; Hudson, D.A.; Adeli, E.; Altman, R.; Arora, S.; von Arx, S.; Bernstein, M.S.; Bohg, J.; Bosselut, A.; Brunskill, E.; et al. On the opportunities and risks of foundation models. arXiv 2021, arXiv:2108.07258. [Google Scholar]
- Wei, J.; Tay, Y.; Bommasani, R.; Raffel, C.; Zoph, B.; Borgeaud, S.; Yogatama, D.; Bosma, M.; Zhou, D.; Metzler, D.; et al. Emergent abilities of large language models. arXiv 2022, arXiv:2206.07682. [Google Scholar]
- Bahl, L.R.; Brown, P.F.; De Souza, P.V.; Mercer, R.L. A tree-based statistical language model for natural language speech recognition. IEEE Trans. Acoust. Speech Signal Process. 1989, 37, 1001–1008. [Google Scholar] [CrossRef]
- Jelinek, F. Statistical Methods for Speech Recognition; MIT Press: Cambridge, MA, USA, 1999. [Google Scholar]
- Gao, J.; Lin, C.Y. Introduction to the special issue on statistical language modeling. ACM Trans. Asian Lang. Inf. Process. 2004, 3, 87–93. [Google Scholar] [CrossRef]
- Bellegarda, J.R. Statistical language model adaptation: Review and perspectives. Speech Commun. 2004, 42, 93–108. [Google Scholar] [CrossRef]
- Zhai, C. Statistical language models for information retrieval a critical review. Found. Trends Inf. Retr. 2008, 2, 137–213. [Google Scholar] [CrossRef]
- Bengio, Y.; Ducharme, R.; Vincent, P. A neural probabilistic language model. Adv. Neural Inf. Process. Syst. 2000, 13, 1137–1155. [Google Scholar]
- Mikolov, T.; Karafiát, M.; Burget, L.; Cernockỳ, J.; Khudanpur, S. Recurrent neural network based language model. In Proceedings of the Interspeech, Makuhari, Chiba, Japan, 26–30 September 2010; Volume 2, pp. 1045–1048. [Google Scholar]
- Kombrink, S.; Mikolov, T.; Karafiát, M.; Burget, L. Recurrent Neural Network Based Language Modeling in Meeting Recognition. In Proceedings of the Interspeech, Florence, Italy, 27–31 August 2011; Volume 11, pp. 2877–2880. [Google Scholar]
- Vaswani, A.; Shazeer, N.; Parmar, N.; Uszkoreit, J.; Jones, L.; Gomez, A.N.; Kaiser, L.; Polosukhin, I. Attention is all you need. Adv. Neural Inf. Process. Syst. 2017, 30. [Google Scholar] [CrossRef]
- Shaw, P.; Uszkoreit, J.; Vaswani, A. Self-attention with relative position representations. arXiv 2018, arXiv:1803.02155. [Google Scholar]
- Devlin, J.; Chang, M.W.; Lee, K.; Toutanova, K. Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv 2018, arXiv:1810.04805. [Google Scholar]
- Ghojogh, B.; Ghodsi, A. Attention Mechanism, Transformers, BERT, and GPT: Tutorial and Survey; OSF Preprints: Charlottesville, VA, USA, 2020. [Google Scholar]
- Liu, Q.; Kusner, M.J.; Blunsom, P. A survey on contextual embeddings. arXiv 2020, arXiv:2003.07278. [Google Scholar]
- Ge, Y.; Hua, W.; Mei, K.; Tan, J.; Xu, S.; Li, Z.; Zhang, Y. Openagi: When llm meets domain experts. Adv. Neural Inf. Process. Syst. 2024, 36. [Google Scholar] [CrossRef]
- Alex, N.; Lifland, E.; Tunstall, L.; Thakur, A.; Maham, P.; Riedel, C.J.; Hine, E.; Ashurst, C.; Sedille, P.; Carlier, A.; et al. RAFT: A real-world few-shot text classification benchmark. arXiv 2021, arXiv:2109.14076. [Google Scholar]
- Qin, C.; Zhang, A.; Zhang, Z.; Chen, J.; Yasunaga, M.; Yang, D. Is ChatGPT a general-purpose natural language processing task solver? arXiv 2023, arXiv:2302.06476. [Google Scholar]
- Gao, J.; Zhao, H.; Yu, C.; Xu, R. Exploring the feasibility of chatgpt for event extraction. arXiv 2023, arXiv:2303.03836. [Google Scholar]
- Ma, Y.; Cao, Y.; Hong, Y.; Sun, A. Large language model is not a good few-shot information extractor, but a good reranker for hard samples! arXiv 2023, arXiv:2303.08559. [Google Scholar]
- Cheng, D.; Huang, S.; Bi, J.; Zhan, Y.; Liu, J.; Wang, Y.; Sun, H.; Wei, F.; Deng, D.; Zhang, Q. Uprise: Universal prompt retrieval for improving zero-shot evaluation. arXiv 2023, arXiv:2303.08518. [Google Scholar]
- Ren, R.; Qu, Y.; Liu, J.; Zhao, W.X.; She, Q.; Wu, H.; Wang, H.; Wen, J.R. Rocketqav2: A joint training method for dense passage retrieval and passage re-ranking. arXiv 2021, arXiv:2110.07367. [Google Scholar]
- Sun, W.; Yan, L.; Ma, X.; Wang, S.; Ren, P.; Chen, Z.; Yin, D.; Ren, Z. Is ChatGPT good at search? investigating large language models as re-ranking agents. arXiv 2023, arXiv:2304.09542. [Google Scholar]
- Ziems, N.; Yu, W.; Zhang, Z.; Jiang, M. Large language models are built-in autoregressive search engines. arXiv 2023, arXiv:2305.09612. [Google Scholar]
- Tay, Y.; Tran, V.; Dehghani, M.; Ni, J.; Bahri, D.; Mehta, H.; Qin, Z.; Hui, K.; Zhao, Z.; Gupta, J.; et al. Transformer memory as a differentiable search index. Adv. Neural Inf. Process. Syst. 2022, 35, 21831–21843. [Google Scholar]
- Dai, S.; Shao, N.; Zhao, H.; Yu, W.; Si, Z.; Xu, C.; Sun, Z.; Zhang, X.; Xu, J. Uncovering chatgpt’s capabilities in recommender systems. In Proceedings of the 17th ACM Conference on Recommender Systems, Singapore, 18–22 September 2023; pp. 1126–1132. [Google Scholar]
- Zheng, B.; Hou, Y.; Lu, H.; Chen, Y.; Zhao, W.X.; Wen, J.R. Adapting large language models by integrating collaborative semantics for recommendation. arXiv 2023, arXiv:2311.09049. [Google Scholar]
- Wang, L.; Ma, C.; Feng, X.; Zhang, Z.; Yang, H.; Zhang, J.; Chen, Z.; Tang, J.; Chen, X.; Lin, Y.; et al. A survey on large language model based autonomous agents. Front. Comput. Sci. 2024, 18, 186345. [Google Scholar] [CrossRef]
- Wang, L.; Zhang, J.; Chen, X.; Lin, Y.; Song, R.; Zhao, W.X.; Wen, J.R. Recagent: A novel simulation paradigm for recommender systems. arXiv 2023, arXiv:2306.02552. [Google Scholar]
- Du, Y.; Liu, Z.; Li, J.; Zhao, W.X. A survey of vision-language pre-trained models. arXiv 2022, arXiv:2202.10936. [Google Scholar]
- Gan, Z.; Li, L.; Li, C.; Wang, L.; Liu, Z.; Gao, J. Vision-language pre-training: Basics, recent advances, and future trends. Found. Trends Comput. Graph. Vis. 2022, 14, 163–352. [Google Scholar] [CrossRef]
- Chen, W.; Su, Y.; Yan, X.; Wang, W.Y. KGPT: Knowledge-grounded pre-training for data-to-text generation. arXiv 2020, arXiv:2010.02307. [Google Scholar]
- Wang, X.; Wang, Z.; Liu, J.; Chen, Y.; Yuan, L.; Peng, H.; Ji, H. Mint: Evaluating llms in multi-turn interaction with tools and language feedback. arXiv 2023, arXiv:2309.10691. [Google Scholar]
- Zhang, X.; Yu, B.; Yu, H.; Lv, Y.; Liu, T.; Huang, F.; Xu, H.; Li, Y. Wider and deeper llm networks are fairer llm evaluators. arXiv 2023, arXiv:2308.01862. [Google Scholar]
- Singhal, K.; Azizi, S.; Tu, T.; Mahdavi, S.S.; Wei, J.; Chung, H.W.; Scales, N.; Tanwani, A.; Cole-Lewis, H.; Pfohl, S.; et al. Large language models encode clinical knowledge. Nature 2023, 620, 172–180. [Google Scholar] [CrossRef] [PubMed]
- Jeblick, K.; Schachtner, B.; Dexl, J.; Mittermeier, A.; Stüber, A.T.; Topalis, J.; Weber, T.; Wesp, P.; Sabel, B.O.; Ricke, J.; et al. ChatGPT makes medicine easy to swallow: An exploratory case study on simplified radiology reports. Eur. Radiol. 2024, 34, 2817–2825. [Google Scholar] [CrossRef] [PubMed]
- Chen, S.; Kann, B.H.; Foote, M.B.; Aerts, H.J.; Savova, G.K.; Mak, R.H.; Bitterman, D.S. The utility of chatgpt for cancer treatment information. medRxiv 2023, 16. [Google Scholar] [CrossRef]
- Singhal, K.; Tu, T.; Gottweis, J.; Sayres, R.; Wulczyn, E.; Hou, L.; Clark, K.; Pfohl, S.; Cole-Lewis, H.; Neal, D.; et al. Towards expert-level medical question answering with large language models. arXiv 2023, arXiv:2305.09617. [Google Scholar]
- Yang, K.; Ji, S.; Zhang, T.; Xie, Q.; Ananiadou, S. On the evaluations of chatgpt and emotion-enhanced prompting for mental health analysis. arXiv 2023, arXiv:2304.03347. [Google Scholar]
- Tang, R.; Han, X.; Jiang, X.; Hu, X. Does synthetic data generation of llms help clinical text mining? arXiv 2023, arXiv:2303.04360. [Google Scholar]
- Rane, N.L.; Tawde, A.; Choudhary, S.P.; Rane, J. Contribution and performance of ChatGPT and other Large Language Models (LLM) for scientific and research advancements: A double-edged sword. Int. Res. J. Mod. Eng. Technol. Sci. 2023, 5, 875–899. [Google Scholar]
- Dai, W.; Lin, J.; Jin, H.; Li, T.; Tsai, Y.S.; Gašević, D.; Chen, G. Can large language models provide feedback to students? A case study on ChatGPT. In Proceedings of the 2023 IEEE International Conference on Advanced Learning Technologies (ICALT), Orem, UT, USA, 10–13 July 2023; pp. 323–325. [Google Scholar]
- Young, J.C.; Shishido, M. Investigating OpenAI’s ChatGPT potentials in generating Chatbot’s dialogue for English as a foreign language learning. Int. J. Adv. Comput. Sci. Appl. 2023, 14. [Google Scholar] [CrossRef]
- Kasneci, E.; Seßler, K.; Küchemann, S.; Bannert, M.; Dementieva, D.; Fischer, F.; Gasser, U.; Groh, G.; Günnemann, S.; Hüllermeier, E.; et al. ChatGPT for good? On opportunities and challenges of large language models for education. Learn. Individ. Differ. 2023, 103, 102274. [Google Scholar] [CrossRef]
- Susnjak, T.; McIntosh, T.R. ChatGPT: The end of online exam integrity? Educ. Sci. 2024, 14, 656. [Google Scholar] [CrossRef]
- Tamkin, A.; Brundage, M.; Clark, J.; Ganguli, D. Understanding the capabilities, limitations, and societal impact of large language models. arXiv 2021, arXiv:2102.02503. [Google Scholar]
- Nay, J.J. Law informs code: A legal informatics approach to aligning artificial intelligence with humans. Nw. J. Tech. Intell. Prop. 2022, 20, 309. [Google Scholar] [CrossRef]
- Yu, F.; Quartey, L.; Schilder, F. Legal prompting: Teaching a language model to think like a lawyer. arXiv 2022, arXiv:2212.01326. [Google Scholar]
- Trautmann, D.; Petrova, A.; Schilder, F. Legal prompt engineering for multilingual legal judgement prediction. arXiv 2022, arXiv:2212.02199. [Google Scholar]
- Sun, Z. A short survey of viewing large language models in legal aspect. arXiv 2023, arXiv:2303.09136. [Google Scholar]
- Savelka, J.; Ashley, K.D.; Gray, M.A.; Westermann, H.; Xu, H. Explaining legal concepts with augmented large language models (gpt-4). arXiv 2023, arXiv:2306.09525. [Google Scholar]
- Cui, J.; Li, Z.; Yan, Y.; Chen, B.; Yuan, L. Chatlaw: Open-source legal large language model with integrated external knowledge bases. arXiv 2023, arXiv:2306.16092. [Google Scholar]
- Guha, N.; Nyarko, J.; Ho, D.; Ré, C.; Chilton, A.; Chohlas-Wood, A.; Peters, A.; Waldon, B.; Rockmore, D.; Zambrano, D.; et al. Legalbench: A collaboratively built benchmark for measuring legal reasoning in large language models. Adv. Neural Inf. Process. Syst. 2024, 36. [Google Scholar] [CrossRef]
- Araci, D. Finbert: Financial sentiment analysis with pre-trained language models. arXiv 2019, arXiv:1908.10063. [Google Scholar]
- Li, Y.; Wang, S.; Ding, H.; Chen, H. Large language models in finance: A survey. In Proceedings of the Fourth ACM International Conference on AI in Finance, Brooklyn, NY, USA, 27–29 November 2023; pp. 374–382. [Google Scholar]
- Yang, H.; Liu, X.Y.; Wang, C.D. Fingpt: Open-source financial large language models. arXiv 2023, arXiv:2306.06031. [Google Scholar] [CrossRef]
- Son, G.; Jung, H.; Hahm, M.; Na, K.; Jin, S. Beyond classification: Financial reasoning in state-of-the-art language models. arXiv 2023, arXiv:2305.01505. [Google Scholar]
- Shah, A.; Chava, S. Zero is not hero yet: Benchmarking zero-shot performance of llms for financial tasks. arXiv 2023, arXiv:2305.16633. [Google Scholar] [CrossRef]
- Jin, Q.; Dhingra, B.; Liu, Z.; Cohen, W.W.; Lu, X. Pubmedqa: A dataset for biomedical research question answering. arXiv 2019, arXiv:1909.06146. [Google Scholar]
- Mahadi Hassan, M.; Knipper, A.; Kanti Karmaker Santu, S. ChatGPT as your Personal Data Scientist. arXiv 2023, arXiv:2305.13657. [Google Scholar]
- Irons, J.; Mason, C.; Cooper, P.; Sidra, S.; Reeson, A.; Paris, C. Exploring the Impacts of ChatGPT on Future Scientific Work; SocArXiv Papers: Eveleigh, Australia, 2023. [Google Scholar]
- Altmäe, S.; Sola-Leyva, A.; Salumets, A. Artificial intelligence in scientific writing: A friend or a foe? Reprod. Biomed. Online 2023, 47, 3–9. [Google Scholar] [CrossRef]
- Zheng, Y.; Koh, H.Y.; Ju, J.; Nguyen, A.T.; May, L.T.; Webb, G.I.; Pan, S. Large language models for scientific synthesis, inference and explanation. arXiv 2023, arXiv:2310.07984. [Google Scholar]
- Aczel, B.; Wagenmakers, E.J. Transparency Guidance for ChatGPT Usage in Scientific Writing; PsyArXiv Preprint: Charlottesville, VA, USA, 2023.
- Jin, H.; Huang, L.; Cai, H.; Yan, J.; Li, B.; Chen, H. From llms to llm-based agents for software engineering: A survey of current, challenges and future. arXiv 2024, arXiv:2408.02479. [Google Scholar]
- Kimura, A.; Scholl, J.; Schaffranek, J.; Sutter, M.; Elliott, A.; Strizich, M.; Via, G.D. A decomposition workflow for integrated circuit verification and validation. J. Hardw. Syst. Secur. 2020, 4, 34–43. [Google Scholar] [CrossRef]
- Roy, D.; Zhang, X.; Bhave, R.; Bansal, C.; Las-Casas, P.; Fonseca, R.; Rajmohan, S. Exploring llm-based agents for root cause analysis. In Proceedings of the Companion Proceedings of the 32nd ACM International Conference on the Foundations of Software Engineering, Porto de Galinhas, Brazil, 15–19 July 2024; pp. 208–219. [Google Scholar]
- Guo, C.; Cheng, F.; Du, Z.; Kiessling, J.; Ku, J.; Li, S.; Li, Z.; Ma, M.; Molom-Ochir, T.; Morris, B.; et al. A Survey: Collaborative Hardware and Software Design in the Era of Large Language Models. arXiv 2024, arXiv:2410.07265. [Google Scholar]
- Xu, N.; Zhang, Z.; Qi, L.; Wang, W.; Zhang, C.; Ren, Z.; Zhang, H.; Cheng, X.; Zhang, Y.; Liu, Z.; et al. ChipExpert: The Open-Source Integrated-Circuit-Design-Specific Large Language Model. arXiv 2024, arXiv:2408.00804. [Google Scholar]
- Zheng, Y.; Chen, Y.; Qian, B.; Shi, X.; Shu, Y.; Chen, J. A Review on Edge Large Language Models: Design, Execution, and Applications. arXiv 2024, arXiv:2410.11845. [Google Scholar]
- Hirschberg, J.; Ballard, B.W.; Hindle, D. Natural language processing. AT&T Tech. J. 1988, 67, 41–57. [Google Scholar]
- Petrushin, V.A. Hidden markov models: Fundamentals and applications. In Proceedings of the Online Symposium for Electronics Engineer, Rapallo, Italy, 25–27 July 2000. [Google Scholar]
- Yin, W.; Kann, K.; Yu, M.; Schütze, H. Comparative study of CNN and RNN for natural language processing. arXiv 2017, arXiv:1702.01923. [Google Scholar]
- Hihi, S.; Bengio, Y. Hierarchical recurrent neural networks for long-term dependencies. Adv. Neural Inf. Process. Syst. 1995, 8, 493–499. [Google Scholar]
- Hochreiter, S. Recurrent neural net learning and vanishing gradient. Int. J. Uncertain. Fuzziness Knowl.-Based Syst. 1998, 6, 107–116. [Google Scholar] [CrossRef]
- Azunre, P. Transfer Learning for Natural Language Processing; Simon and Schuster: New York, NY, USA, 2021. [Google Scholar]
- Shi, Y.; Larson, M.; Jonker, C.M. Recurrent neural network language model adaptation with curriculum learning. Comput. Speech Lang. 2015, 33, 136–154. [Google Scholar] [CrossRef]
- Kovačević, A.; Kečo, D. Bidirectional LSTM networks for abstractive text summarization. In Proceedings of the Advanced Technologies, Systems, and Applications VI: Proceedings of the International Symposium on Innovative and Interdisciplinary Applications of Advanced Technologies (IAT), Bosnia and Herzegovina, 17 November 2021; Springer: Berlin/Heidelberg, Germany, 2022; pp. 281–293. [Google Scholar]
- Wu, Y.; Schuster, M.; Chen, Z.; Le, Q.V.; Norouzi, M.; Macherey, W.; Krikun, M.; Cao, Y.; Gao, Q.; Macherey, K.; et al. Google’s neural machine translation system: Bridging the gap between human and machine translation. arXiv 2016, arXiv:1609.08144. [Google Scholar]
- Yadav, R.K.; Harwani, S.; Maurya, S.K.; Kumar, S. Intelligent Chatbot Using GNMT, SEQ-2-SEQ Techniques. In Proceedings of the 2021 International Conference on Intelligent Technologies (CONIT), Hubli, India, 25–27 June 2021; pp. 1–5. [Google Scholar]
- Luitse, D.; Denkena, W. The great transformer: Examining the role of large language models in the political economy of AI. Big Data Soc. 2021, 8, 20539517211047734. [Google Scholar] [CrossRef]
- Topal, M.O.; Bas, A.; van Heerden, I. Exploring transformers in natural language generation: Gpt, bert, and xlnet. arXiv 2021, arXiv:2102.08036. [Google Scholar]
- Bird, J.J.; Ekárt, A.; Faria, D.R. Chatbot Interaction with Artificial Intelligence: Human data augmentation with T5 and language transformer ensemble for text classification. J. Ambient. Intell. Humaniz. Comput. 2023, 14, 3129–3144. [Google Scholar] [CrossRef]
- Radford, A.; Narasimhan, K.; Salimans, T.; Sutskever, I. Improving Language Understanding by Generative Pre-Training; OpenAI: San Francisco, CA, USA, 2018. [Google Scholar]
- Radford, A.; Wu, J.; Amodei, D.; Amodei, D.; Clark, J.; Brundage, M.; Sutskever, I. Better language models and their implications. OpenAI Blog, 14 February 2019. Available online: https://openai.com/index/better-language-models/ (accessed on 19 December 2024).
- Brown, T.; Mann, B.; Ryder, N.; Subbiah, M.; Kaplan, J.D.; Dhariwal, P.; Neelakantan, A.; Shyam, P.; Sastry, G.; Askell, A.; et al. Language models are few-shot learners. Adv. Neural Inf. Process. Syst. 2020, 33, 1877–1901. [Google Scholar]
- Achiam, J.; Adler, S.; Agarwal, S.; Ahmad, L.; Akkaya, I.; Aleman, F.L.; Almeida, D.; Altenschmidt, J.; Altman, S.; Anadkat, S.; et al. Gpt-4 technical report. arXiv 2023, arXiv:2303.08774. [Google Scholar]
- Huang, J.; Chang, K.C.C. Towards reasoning in large language models: A survey. arXiv 2022, arXiv:2212.10403. [Google Scholar]
- Xi, Z.; Chen, W.; Guo, X.; He, W.; Ding, Y.; Hong, B.; Zhang, M.; Wang, J.; Jin, S.; Zhou, E.; et al. The rise and potential of large language model based agents: A survey. arXiv 2023, arXiv:2309.07864. [Google Scholar]
- Hadi, M.U.; Al Tashi, Q.; Shah, A.; Qureshi, R.; Muneer, A.; Irfan, M.; Zafar, A.; Shaikh, M.B.; Akhtar, N.; Wu, J.; et al. Large language models: A comprehensive survey of its applications, challenges, limitations, and future prospects. Authorea Preprints. 12 August 2024. Available online: https://www.techrxiv.org/doi/full/10.36227/techrxiv.23589741.v6 (accessed on 19 December 2024).
- Naveed, H.; Khan, A.U.; Qiu, S.; Saqib, M.; Anwar, S.; Usman, M.; Barnes, N.; Mian, A. A comprehensive overview of large language models. arXiv 2023, arXiv:2307.06435. [Google Scholar]
- Fan, L.; Li, L.; Ma, Z.; Lee, S.; Yu, H.; Hemphill, L. A bibliometric review of large language models research from 2017 to 2023. arXiv 2023, arXiv:2304.02020. [Google Scholar] [CrossRef]
- Raiaan, M.A.K.; Mukta, M.S.H.; Fatema, K.; Fahad, N.M.; Sakib, S.; Mim, M.M.J.; Ahmad, J.; Ali, M.E.; Azam, S. A review on large Language Models: Architectures, applications, taxonomies, open issues and challenges. IEEE Access 2024, 12, 26839–26874. [Google Scholar] [CrossRef]
- Minaee, S.; Mikolov, T.; Nikzad, N.; Chenaghlu, M.; Socher, R.; Amatriain, X.; Gao, J. Large language models: A survey. arXiv 2024, arXiv:2402.06196. [Google Scholar]
- Liu, Y.; He, H.; Han, T.; Zhang, X.; Liu, M.; Tian, J.; Zhang, Y.; Wang, J.; Gao, X.; Zhong, T.; et al. Understanding llms: A comprehensive overview from training to inference. arXiv 2024, arXiv:2401.02038. [Google Scholar] [CrossRef]
- Cui, C.; Ma, Y.; Cao, X.; Ye, W.; Zhou, Y.; Liang, K.; Chen, J.; Lu, J.; Yang, Z.; Liao, K.D.; et al. A survey on multimodal large language models for autonomous driving. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, Waikoloa, HI, USA, 3–8 January 2024; pp. 958–979. [Google Scholar]
- Chang, Y.; Wang, X.; Wang, J.; Wu, Y.; Yang, L.; Zhu, K.; Chen, H.; Yi, X.; Wang, C.; Wang, Y.; et al. A survey on evaluation of large language models. ACM Trans. Intell. Syst. Technol. 2024, 15, 1–45. [Google Scholar] [CrossRef]
- Kachris, C. A survey on hardware accelerators for large language models. arXiv 2024, arXiv:2401.09890. [Google Scholar]
- Islam, R.; Moushi, O.M. GPT-4o: The cutting-edge advancement in multimodal LLM. Authorea Preprints, 2 July 2024. Available online: https://easychair.org/publications/preprint/z4TJ/open (accessed on 19 December 2024).
- Šimsová, J. Examining Cognitive Abilities and Multilingual Performance of Large Language Models: A Comparative Analysis of GPT-3 and GPT-4; Univerzita Karlova, Filozofická Fakulta: Prague, Czech Republic, 2024. [Google Scholar]
- Tricco, A.C.; Lillie, E.; Zarin, W.; O’Brien, K.K.; Colquhoun, H.; Levac, D.; Moher, D.; Peters, M.D.; Horsley, T.; Weeks, L.; et al. PRISMA extension for scoping reviews (PRISMA-ScR): Checklist and explanation. Ann. Intern. Med. 2018, 169, 467–473. [Google Scholar] [CrossRef]
- Alsaqer, S.; Alajmi, S.; Ahmad, I.; Alfailakawi, M. The potential of llms in hardware design. J. Eng. Res. 2024, in press. [CrossRef]
- Zhang, H.; Ning, A.; Prabhakar, R.; Wentzlaff, D. A Hardware Evaluation Framework for Large Language Model Inference. arXiv 2023, arXiv:2312.03134. [Google Scholar]
- Korvala, A. Analysis of LLM-Models in Optimizing and Designing VHDL Code. Master’s Thesis, Modern SW and Computing Technologies, Oulu University of Applied Sciences, Oulu, Finland, 2023. [Google Scholar]
- Thakur, S.; Blocklove, J.; Pearce, H.; Tan, B.; Garg, S.; Karri, R. Autochip: Automating hdl generation using llm feedback. arXiv 2023, arXiv:2311.04887. [Google Scholar]
- Thakur, S.; Ahmad, B.; Fan, Z.; Pearce, H.; Tan, B.; Karri, R.; Dolan-Gavitt, B.; Garg, S. Benchmarking large language models for automated verilog rtl code generation. In Proceedings of the 2023 Design, Automation & Test in Europe Conference & Exhibition (DATE), Antwerp, Belgium, 17–19 April 2023. [Google Scholar]
- Blocklove, J.; Garg, S.; Karri, R.; Pearce, H. Chip-chat: Challenges and opportunities in conversational hardware design. In Proceedings of the 2023 ACM/IEEE 5th Workshop on Machine Learning for CAD (MLCAD), Snowbird, UT, USA, 10–13 September 2023; pp. 1–6. [Google Scholar]
- Chang, K.; Wang, Y.; Ren, H.; Wang, M.; Liang, S.; Han, Y.; Li, H.; Li, X. Chipgpt: How far are we from natural language hardware design. arXiv 2023, arXiv:2305.14019. [Google Scholar]
- Martínez, P.A.; Bernabé, G.; García, J.M. Code Detection for Hardware Acceleration Using Large Language Models. IEEE Access 2024, 12, 35271–35281. [Google Scholar] [CrossRef]
- DeLorenzo, M.; Gohil, V.; Rajendran, J. CreativEval: Evaluating Creativity of LLM-Based Hardware Code Generation. arXiv 2024, arXiv:2404.08806. [Google Scholar]
- Tomlinson, M.; Li, J.; Andreou, A. Designing Silicon Brains using LLM: Leveraging ChatGPT for Automated Description of a Spiking Neuron Array. arXiv 2024, arXiv:2402.10920. [Google Scholar]
- Xiang, M.; Goh, E.; Teo, T.H. Digital ASIC Design with Ongoing LLMs: Strategies and Prospects. arXiv 2024, arXiv:2405.02329. [Google Scholar]
- Wang, H. Efficient Algorithms and Hardware for Natural Language Processing. Ph.D. Thesis, Massachusetts Institute of Technology, Cambridge, MA, USA, 2020. [Google Scholar]
- Fu, Y.; Zhang, Y.; Yu, Z.; Li, S.; Ye, Z.; Li, C.; Wan, C.; Lin, Y.C. Gpt4aigchip: Towards next-generation ai accelerator design automation via large language models. In Proceedings of the 2023 IEEE/ACM International Conference on Computer Aided Design (ICCAD), San Francisco, CA, USA, 28 October–2 November 2023; pp. 1–9. [Google Scholar]
- Fu, W.; Li, S.; Zhao, Y.; Ma, H.; Dutta, R.; Zhang, X.; Yang, K.; Jin, Y.; Guo, X. Hardware Phi-1.5 B: A Large Language Model Encodes Hardware Domain Specific Knowledge. arXiv 2024, arXiv:2402.01728. [Google Scholar]
- Wang, H.; Wu, Z.; Liu, Z.; Cai, H.; Zhu, L.; Gan, C.; Han, S. Hat: Hardware-aware transformers for efficient natural language processing. arXiv 2020, arXiv:2005.14187. [Google Scholar]
- Chang, K.; Ren, H.; Wang, M.; Liang, S.; Han, Y.; Li, H.; Li, X.; Wang, Y. Improving Large Language Model Hardware Generating Quality through Post-LLM Search. In Proceedings of the Machine Learning for Systems 2023, Zhuhai, China, 17–20 February 2023. [Google Scholar]
- Guo, C.; Tang, J.; Hu, W.; Leng, J.; Zhang, C.; Yang, F.; Liu, Y.; Guo, M.; Zhu, Y. Olive: Accelerating large language models via hardware-friendly outlier-victim pair quantization. In Proceedings of the 50th Annual International Symposium on Computer Architecture, Orlando, FL, USA, 17–21 June 2023; pp. 1–15. [Google Scholar]
- Liu, S.; Fang, W.; Lu, Y.; Zhang, Q.; Zhang, H.; Xie, Z. Rtlcoder: Outperforming gpt-3.5 in design rtl generation with our open-source dataset and lightweight solution. arXiv 2023, arXiv:2312.08617. [Google Scholar]
- Lu, Y.; Liu, S.; Zhang, Q.; Xie, Z. Rtllm: An open-source benchmark for design rtl generation with large language model. In Proceedings of the 2024 29th Asia and South Pacific Design Automation Conference (ASP-DAC), Incheon, Republic of Korea, 22–25 January 2024; pp. 722–727. [Google Scholar]
- Pandelea, V.; Ragusa, E.; Gastaldo, P.; Cambria, E. Selecting Language Models Features VIA Software-Hardware Co-Design. In Proceedings of the ICASSP 2023–2023 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Rhodes Island, Greece, 4–10 June 2023; pp. 1–5. [Google Scholar]
- Cerisara, C. SlowLLM: Large Language Models on Consumer Hardware. Ph.D. Thesis, CNRS, Paris, France, 2023. [Google Scholar]
- Li, M.; Fang, W.; Zhang, Q.; Xie, Z. Specllm: Exploring generation and review of vlsi design specification with large language model. arXiv 2024, arXiv:2401.13266. [Google Scholar]
- Kurtić, E.; Frantar, E.; Alistarh, D. ZipLM: Inference-Aware Structured Pruning of Language Models. Adv. Neural Inf. Process. Syst. 2024, 36. Available online: https://proceedings.neurips.cc/paper_files/paper/2023/hash/ced46a50befedcb884ccf0cbe8c3ad23-Abstract-Conference.html (accessed on 19 December 2024).
- Thorat, K.; Zhao, J.; Liu, Y.; Peng, H.; Xie, X.; Lei, B.; Zhang, J.; Ding, C. Advanced language model-driven verilog development: Enhancing power, performance, and area optimization in code synthesis. arXiv 2023, arXiv:2312.01022. [Google Scholar]
- Huang, Y.; Wan, L.J.; Ye, H.; Jha, M.; Wang, J.; Li, Y.; Zhang, X.; Chen, D. New Solutions on LLM Acceleration, Optimization, and Application. arXiv 2024, arXiv:2406.10903. [Google Scholar]
- Goh, E.; Xiang, M.; Wey, I.; Teo, T.H. From English to ASIC: Hardware Implementation with Large Language Model. arXiv 2024, arXiv:2403.07039. [Google Scholar]
- Zhang, H.; Ning, A.; Prabhakar, R.B.; Wentzlaff, D. Llmcompass: Enabling efficient hardware design for large language model inference. In Proceedings of the 2024 ACM/IEEE 51st Annual International Symposium on Computer Architecture (ISCA), Buenos Aires, Argentina, 29 June–3 July 2024; pp. 1080–1096. [Google Scholar]
- Chang, K.; Wang, K.; Yang, N.; Wang, Y.; Jin, D.; Zhu, W.; Chen, Z.; Li, C.; Yan, H.; Zhou, Y.; et al. Data is all you need: Finetuning llms for chip design via an automated design-data augmentation framework. In Proceedings of the 61st ACM/IEEE Design Automation Conference, San Francisco, CA, USA, 23–27 June 2024; pp. 1–6. [Google Scholar]
- Nakkab, A.; Zhang, S.Q.; Karri, R.; Garg, S. Rome was Not Built in a Single Step: Hierarchical Prompting for LLM-based Chip Design. In Proceedings of the 2024 ACM/IEEE International Symposium on Machine Learning for CAD, Salt Lake City, UT, USA, 9–11 September 2024; pp. 1–11. [Google Scholar]
- Hossain, S.; Gohil, A.; Wang, Y. Using LLM such as ChatGPT for Designing and Implementing a RISC Processor: Execution, Challenges and Limitations. arXiv 2024, arXiv:2401.10364. [Google Scholar]
- Zhang, Y.; Yu, Z.; Fu, Y.; Wan, C.; Lin, Y.C. Mg-verilog: Multi-grained dataset towards enhanced llm-assisted verilog generation. In Proceedings of the 2024 IEEE LLM Aided Design Workshop (LAD), San Jose, CA, USA, 28 June 2024; pp. 1–5. [Google Scholar]
- Mudigere, D.; Hao, Y.; Huang, J.; Jia, Z.; Tulloch, A.; Sridharan, S.; Liu, X.; Ozdal, M.; Nie, J.; Park, J.; et al. Software-hardware co-design for fast and scalable training of deep learning recommendation models. In Proceedings of the 49th Annual International Symposium on Computer Architecture, New York, NY, USA, 18–22 June 2022; pp. 993–1011. [Google Scholar]
- Wan, L.J.; Huang, Y.; Li, Y.; Ye, H.; Wang, J.; Zhang, X.; Chen, D. Software/Hardware Co-design for LLM and Its Application for Design Verification. In Proceedings of the 2024 29th Asia and South Pacific Design Automation Conference (ASP-DAC), Incheon, Republic of Korea, 22–25 January 2024; pp. 435–441. [Google Scholar]
- Yan, Z.; Qin, Y.; Hu, X.S.; Shi, Y. On the viability of using llms for sw/hw co-design: An example in designing cim dnn accelerators. In Proceedings of the 2023 IEEE 36th International System-on-Chip Conference (SOCC), Santa Clara, CA, USA, 5–8 September 2023; pp. 1–6. [Google Scholar]
- Collini, L.; Garg, S.; Karri, R. C2HLSC: Can LLMs Bridge the Software-to-Hardware Design Gap? arXiv 2024, arXiv:2406.09233. [Google Scholar]
- Blocklove, J.; Garg, S.; Karri, R.; Pearce, H. Evaluating LLMs for Hardware Design and Test. arXiv 2024, arXiv:2405.02326. [Google Scholar]
- Batten, C.; Pinckney, N.; Liu, M.; Ren, H.; Khailany, B. PyHDL-Eval: An LLM Evaluation Framework for Hardware Design Using Python-Embedded DSLs. In Proceedings of the 2024 ACM/IEEE International Symposium on Machine Learning for CAD, Salt Lake City, UT, USA, 9–11 September 2024; pp. 1–17. [Google Scholar]
- Nazzal, M.; Vungarala, D.; Morsali, M.; Zhang, C.; Ghosh, A.; Khreishah, A.; Angizi, S. A Dataset for Large Language Model-Driven AI Accelerator Generation. arXiv 2024, arXiv:2404.10875. [Google Scholar]
- Vungarala, D.L.V.D. Gen-Acceleration: Pioneering Work for Hardware Accelerator Generation Using Large Language Models. Master’s Thesis, Electrical and Computer Engineering, New Jersey Institute of Technology, Newark, NJ, USA, 2023. [Google Scholar]
- Heo, G.; Lee, S.; Cho, J.; Choi, H.; Lee, S.; Ham, H.; Kim, G.; Mahajan, D.; Park, J. NeuPIMs: NPU-PIM Heterogeneous Acceleration for Batched LLM Inferencing. In Proceedings of the 29th ACM International Conference on Architectural Support for Programming Languages and Operating Systems, La Jolla, CA, USA, 27 April 2024; Volume 3, pp. 722–737. [Google Scholar]
- Lai, C.; Zhou, Z.; Poptani, A.; Zhang, W. LCM: LLM-focused Hybrid SPM-cache Architecture with Cache Management for Multi-Core AI Accelerators. In Proceedings of the 38th ACM International Conference on Supercomputing, Kyoto, Japan, 4–7 June 2024; pp. 62–73. [Google Scholar]
- Mao, Y.; You, Y.; Tan, X.; Huang, Y.; You, X.; Zhang, C. FLAG: Formula-LLM-Based Auto-Generator for Baseband Hardware. In Proceedings of the 2024 IEEE International Symposium on Circuits and Systems (ISCAS), New Delhi, India, 18–19 October 2024; pp. 1–5. [Google Scholar]
- Chen, H.M.; Luk, W.; Yiu, K.F.C.; Li, R.; Mishchenko, K.; Venieris, S.I.; Fan, H. Hardware-aware parallel prompt decoding for memory-efficient acceleration of llm inference. arXiv 2024, arXiv:2405.18628. [Google Scholar]
- Paria, S.; Dasgupta, A.; Bhunia, S. Divas: An llm-based end-to-end framework for soc security analysis and policy-based protection. arXiv 2023, arXiv:2308.06932. [Google Scholar]
- Srikumar, P. Fast and wrong: The case for formally specifying hardware with LLMS. In Proceedings of the International Conference on Architectural Support for Programming Languages and Operating Systems (ASPLOS), La Jolla, CA, USA, 27 April 2023; ACM Press: New York, NY, USA, 2023. [Google Scholar]
- Ahmad, B.; Thakur, S.; Tan, B.; Karri, R.; Pearce, H. Fixing hardware security bugs with large language models. arXiv 2023, arXiv:2302.01215. [Google Scholar]
- Kokolakis, G.; Moschos, A.; Keromytis, A.D. Harnessing the power of general-purpose llms in hardware trojan design. In Proceedings of the 5th Workshop on Artificial Intelligence in Hardware Security, in Conjunction with ACNS, Abu Dhabi, United Arab Emirates, 5 March 2024; Volume 14. [Google Scholar]
- Saha, D.; Tarek, S.; Yahyaei, K.; Saha, S.K.; Zhou, J.; Tehranipoor, M.; Farahmandi, F. Llm for soc security: A paradigm shift. arXiv 2023, arXiv:2310.06046. [Google Scholar] [CrossRef]
- Wang, Z.; Alrahis, L.; Mankali, L.; Knechtel, J.; Sinanoglu, O. LLMs and the Future of Chip Design: Unveiling Security Risks and Building Trust. arXiv 2024, arXiv:2405.07061. [Google Scholar]
- Ahmad, B.; Thakur, S.; Tan, B.; Karri, R.; Pearce, H. On hardware security bug code fixes by prompting large language models. IEEE Trans. Inf. Forensics Secur. 2024, 19, 4043–4057. [Google Scholar] [CrossRef]
- Kande, R.; Pearce, H.; Tan, B.; Dolan-Gavitt, B.; Thakur, S.; Karri, R.; Rajendran, J. (Security) Assertions by Large Language Models. IEEE Trans. Inf. Forensics Secur. 2024, 19, 4374–4389. [Google Scholar] [CrossRef]
- Paria, S.; Dasgupta, A.; Bhunia, S. Navigating SoC Security Landscape on LLM-Guided Paths. In Proceedings of the Great Lakes Symposium on VLSI 2024, Clearwater, FL, USA, 12–14 June 2024; pp. 252–257. [Google Scholar]
- Tarek, S.; Saha, D.; Saha, S.K.; Tehranipoor, M.; Farahmandi, F. SoCureLLM: An LLM-driven Approach for Large-Scale System-on-Chip Security Verification and Policy Generation. Cryptol. ePrint Arch. 2024. Available online: https://eprint.iacr.org/2024/983 (accessed on 19 December 2024).
- Kande, R.; Gohil, V.; DeLorenzo, M.; Chen, C.; Rajendran, J. LLMs for Hardware Security: Boon or Bane? In Proceedings of the 2024 IEEE 42nd VLSI Test Symposium (VTS), Tempe, AZ, USA, 22–24 April 2024; pp. 1–4. [Google Scholar]
- Saha, D.; Yahyaei, K.; Saha, S.K.; Tehranipoor, M.; Farahmandi, F. Empowering Hardware Security with LLM: The Development of a Vulnerable Hardware Database. In Proceedings of the 2024 IEEE International Symposium on Hardware Oriented Security and Trust (HOST), Tysons Corner, VA, USA, 6–9 May 2024; pp. 233–243. [Google Scholar]
- Akyash, M.; Kamali, H.M. Self-HWDebug: Automation of LLM Self-Instructing for Hardware Security Verification. arXiv 2024, arXiv:2405.12347. [Google Scholar]
- Yao, X.; Li, H.; Chan, T.H.; Xiao, W.; Yuan, M.; Huang, Y.; Chen, L.; Yu, B. Hdldebugger: Streamlining hdl debugging with large language models. arXiv 2024, arXiv:2403.11671. [Google Scholar]
- Fu, W.; Yang, K.; Dutta, R.G.; Guo, X.; Qu, G. LLM4SecHW: Leveraging domain-specific large language model for hardware debugging. In Proceedings of the 2023 Asian Hardware Oriented Security and Trust Symposium (AsianHOST), Tianjin, China, 13–15 December 2023; pp. 1–6. [Google Scholar]
- Fang, W.; Li, M.; Li, M.; Yan, Z.; Liu, S.; Zhang, H.; Xie, Z. AssertLLM: Generating and Evaluating Hardware Verification Assertions from Design Specifications via Multi-LLMs. arXiv 2024, arXiv:2402.00386. [Google Scholar]
- Orenes-Vera, M.; Martonosi, M.; Wentzlaff, D. Using llms to facilitate formal verification of rtl. arXiv 2023, arXiv:2309.09437. [Google Scholar]
- Varambally, B.S.; Sehgal, N. Optimising design verification using machine learning: An open source solution. arXiv 2020, arXiv:2012.02453. [Google Scholar]
- Liu, M.; Pinckney, N.; Khailany, B.; Ren, H. Verilogeval: Evaluating large language models for verilog code generation. In Proceedings of the 2023 IEEE/ACM International Conference on Computer Aided Design (ICCAD), San Francisco, CA, USA, 28 October–2 November 2023; pp. 1–8. [Google Scholar]
- Sun, C.; Hahn, C.; Trippel, C. Towards improving verification productivity with circuit-aware translation of natural language to systemverilog assertions. In Proceedings of the First International Workshop on Deep Learning-Aided Verification, Paris, France, 18 July 2023. [Google Scholar]
- Liu, M.; Ene, T.D.; Kirby, R.; Cheng, C.; Pinckney, N.; Liang, R.; Alben, J.; Anand, H.; Banerjee, S.; Bayraktaroglu, I.; et al. Chipnemo: Domain-adapted llms for chip design. arXiv 2023, arXiv:2311.00176. [Google Scholar]
- Zhang, Z.; Chadwick, G.; McNally, H.; Zhao, Y.; Mullins, R. Llm4dv: Using large language models for hardware test stimuli generation. arXiv 2023, arXiv:2310.04535. [Google Scholar]
- Kande, R.; Pearce, H.; Tan, B.; Dolan-Gavitt, B.; Thakur, S.; Karri, R.; Rajendran, J. Llm-assisted generation of hardware assertions. arXiv 2023, arXiv:2306.14027. [Google Scholar]
- Qayyum, K.; Hassan, M.; Ahmadi-Pour, S.; Jha, C.K.; Drechsler, R. Late breaking results: LLM-assisted automated incremental proof generation for hardware verification. In Proceedings of the 61st ACM/IEEE Design Automation Conference, San Francisco, CA, USA, 23–27 June 2024; pp. 1–2. [Google Scholar]
- Xiao, C.; Deng, Y.; Yang, Z.; Chen, R.; Wang, H.; Zhao, J.; Dai, H.; Wang, L.; Tang, Y.; Xu, W. LLM-Based Processor Verification: A Case Study for Neuromorphic Processor. In Proceedings of the 2024 Design, Automation & Test in Europe Conference & Exhibition (DATE), Valencia, Spain, 25–27 March 2024; pp. 1–6. [Google Scholar]
- Ma, R.; Yang, Y.; Liu, Z.; Zhang, J.; Li, M.; Huang, J.; Luo, G. VerilogReader: LLM-Aided Hardware Test Generation. arXiv 2024, arXiv:2406.04373. [Google Scholar]
- Makatura, L.; Foshey, M.; Wang, B.; Hähnlein, F.; Ma, P.; Deng, B.; Tjandrasuwita, M.; Spielberg, A.; Owens, C.E.; Chen, P.Y.; et al. Large Language Models for Design and Manufacturing. MIT Explor. Gener. AI. Available online: https://mit-genai.pubpub.org/pub/nmypmnhs (accessed on 19 December 2024).
- Du, Y.; Deng, H.; Liew, S.C.; Chen, K.; Shao, Y.; Chen, H. The Power of Large Language Models for Wireless Communication System Development: A Case Study on FPGA Platforms. arXiv 2024, arXiv:2307.07319. [Google Scholar]
- Englhardt, Z.; Li, R.; Nissanka, D.; Zhang, Z.; Narayanswamy, G.; Breda, J.; Liu, X.; Patel, S.; Iyer, V. Exploring and Characterizing Large Language Models For Embedded System Development and Debugging. arXiv 2023, arXiv:2307.03817. [Google Scholar]
- Lian, X.; Chen, Y.; Cheng, R.; Huang, J.; Thakkar, P.; Zhang, M.; Xu, T. Configuration Validation with Large Language Models. arXiv 2024, arXiv:2310.09690. [Google Scholar]
- Patil, R.; Gudivada, V. A review of current trends, techniques, and challenges in large language models (LLMs). Appl. Sci. 2024, 14, 2074. [Google Scholar] [CrossRef]
- Kumar, P. Large language models (LLMs): Survey, technical frameworks, and future challenges. Artif. Intell. Rev. 2024, 57, 260. [Google Scholar] [CrossRef]
- Li, R.; Fu, D.; Shi, C.; Huang, Z.; Lu, G. Efficient LLMs Training and Inference: An Introduction. IEEE Access 2024. [Google Scholar] [CrossRef]
- Luz, A. Enhancing the Interpretability and Explainability of AI-Driven Risk Models Using LLM Capabilities; Technical Report; EasyChair: Stockport, UK, 2024. [Google Scholar]
- Fujiwara, K.; Sasaki, M.; Nakamura, A.; Watanabe, N. Measuring the Interpretability and Explainability of Model Decisions of Five Large Language Models; Open Science Framework: Charlottesville, VA, USA, 2024. [Google Scholar]
- Weber, I. Large Language Models as Software Components: A Taxonomy for LLM-Integrated Applications. arXiv 2024, arXiv:2406.10300. [Google Scholar]
- Hu, E.J.; Shen, Y.; Wallis, P.; Allen-Zhu, Z.; Li, Y.; Wang, S.; Wang, L.; Chen, W. LoRA: Low-rank adaptation of large language models. arXiv 2021, arXiv:2106.09685. [Google Scholar]
- Nijkamp, E.; Pang, B.; Hayashi, H.; Tu, L.; Wang, H.; Zhou, Y.; Savarese, S.; Xiong, C. CodeGen: An open large language model for code with multi-turn program synthesis. arXiv 2022, arXiv:2203.13474. [Google Scholar]
- Xu, F.F.; Alon, U.; Neubig, G.; Hellendoorn, V.J. A systematic evaluation of large language models of code. In Proceedings of the 6th ACM SIGPLAN International Symposium on Machine Programming, San Diego, CA, USA, 13 June 2022; pp. 1–10. [Google Scholar]
- Tihanyi, N.; Jain, R.; Charalambous, Y.; Ferrag, M.A.; Sun, Y.; Cordeiro, L.C. A new era in software security: Towards self-healing software via large language models and formal verification. arXiv 2023, arXiv:2305.14752. [Google Scholar]
- Sandal, S.; Akturk, I. Zero-Shot RTL Code Generation with Attention Sink Augmented Large Language Models. arXiv 2024, arXiv:2401.08683. [Google Scholar]
- Parchamdar, B.; Schafer, B.C. Finding Bugs in RTL Descriptions: High-Level Synthesis to the Rescue. In Proceedings of the 61st Design Automation Conference (DAC), San Francisco, CA, USA, 23–27 June 2024. [Google Scholar]
- Tavana, M.K.; Teimouri, N.; Abdollahi, M.; Goudarzi, M. Simultaneous hardware and time redundancy with online task scheduling for low energy highly reliable standby-sparing system. ACM Trans. Embed. Comput. Syst. 2014, 13, 1–13. [Google Scholar] [CrossRef]
- Luo, Q.; Hu, S.; Li, C.; Li, G.; Shi, W. Resource scheduling in edge computing: A survey. IEEE Commun. Surv. Tutor. 2021, 23, 2131–2165. [Google Scholar] [CrossRef]
- Kumar, S.; Singh, S.K.; Aggarwal, N.; Gupta, B.B.; Alhalabi, W.; Band, S.S. An efficient hardware supported and parallelization architecture for intelligent systems to overcome speculative overheads. Int. J. Intell. Syst. 2022, 37, 11764–11790. [Google Scholar] [CrossRef]
- Kao, S.C.; Jeong, G.; Krishna, T. Confuciux: Autonomous hardware resource assignment for dnn accelerators using reinforcement learning. In Proceedings of the 2020 53rd Annual IEEE/ACM International Symposium on Microarchitecture (MICRO), Athens, Greece, 17–21 October 2020; pp. 622–636. [Google Scholar]
- Alwan, E.H.; Ketran, R.M.; Hussein, I.A. A Comprehensive Survey on Loop Unrolling Technique In Code Optimization. J. Univ. Babylon Pure Appl. Sci. 2024, 32, 108–117. [Google Scholar] [CrossRef]
- Liu, Y.; Ma, Y.; Zhang, B.; Liu, L.; Wang, J.; Tang, S. Improving the computational efficiency and flexibility of FPGA-based CNN accelerator through loop optimization. Microelectron. J. 2024, 147, 106197. [Google Scholar] [CrossRef]
- Hasan, B.M.S.; Abdulazeez, A.M. A review of principal component analysis algorithm for dimensionality reduction. J. Soft Comput. Data Min. 2021, 2, 20–30. [Google Scholar]
- Wang, Q.; Li, X.; Yue, C.; He, Y. A Survey of Control Flow Graph Recovery for Binary Code. In Proceedings of the CCF National Conference of Computer Applications, Suzhou, China, 16–20 July 2023; Springer: Berlin/Heidelberg, Germany, 2023; pp. 225–244. [Google Scholar]
- Talati, N.; May, K.; Behroozi, A.; Yang, Y.; Kaszyk, K.; Vasiladiotis, C.; Verma, T.; Li, L.; Nguyen, B.; Sun, J.; et al. Prodigy: Improving the memory latency of data-indirect irregular workloads using hardware-software co-design. In Proceedings of the 2021 IEEE International Symposium on High-Performance Computer Architecture (HPCA), Seoul, Republic of Korea, 27 February–3 March 2021; pp. 654–667. [Google Scholar]
- Ayers, G.; Litz, H.; Kozyrakis, C.; Ranganathan, P. Classifying memory access patterns for prefetching. In Proceedings of the Twenty-Fifth International Conference on Architectural Support for Programming Languages and Operating Systems, Lausanne, Switzerland, 16–20 March 2020; pp. 513–526. [Google Scholar]
- Kastner, R.; Gong, W.; Hao, X.; Brewer, F.; Kaplan, A.; Brisk, P.; Sarrafzadeh, M. Physically Aware Data Communication Optimization for Hardware Synthesis. Available online: https://cseweb.ucsd.edu/~kastner/papers/iwls05-phy_aware_data_comm.pdf (accessed on 19 December 2024).
- Fan, Z. Automatically Generating Verilog RTL Code with Large Language Models. Master’s Thesis, New York University Tandon School of Engineering, New York, NY, USA, 2023. [Google Scholar]
- Lekidis, A. Automated Code Generation for Industrial Applications Based on Configurable Programming Models. Preprints 2023. [Google Scholar] [CrossRef]
- Bhandari, J.; Knechtel, J.; Narayanaswamy, R.; Garg, S.; Karri, R. LLM-Aided Testbench Generation and Bug Detection for Finite-State Machines. arXiv 2024, arXiv:2406.17132. [Google Scholar]
- Kibria, R.; Farahmandi, F.; Tehranipoor, M. FSMx-Ultra: Finite State Machine Extraction from Gate-Level Netlist for Security Assessment. IEEE Trans. Comput.-Aided Des. Integr. Circuits Syst. 2023, 42, 3613–3627. [Google Scholar] [CrossRef]
- Gauthier, L.; Ishikawa, Y. HDLRuby: A Ruby Extension for Hardware Description and its Translation to Synthesizable Verilog HDL. ACM Trans. Embed. Comput. Syst. 2024, 23, 1–26. [Google Scholar] [CrossRef]
- Rashid, M.I.; Schafer, B.C. VeriPy: A Python-Powered Framework for Parsing Verilog HDL and High-Level Behavioral Analysis of Hardware. In Proceedings of the 2024 IEEE 17th Dallas Circuits and Systems Conference (DCAS), Richardson, TX, USA, 19–21 April 2024; pp. 1–6. [Google Scholar]
- Morgan, F.; Byrne, J.P.; Bupathi, A.; George, R.; Elahi, A.; Callaly, F.; Kelly, S.; O’Loughlin, D. HDLGen-ChatGPT Case Study: RISC-V Processor VHDL and Verilog Model-Testbench and EDA Project Generation. In Proceedings of the 34th International Workshop on Rapid System Prototyping, Hamburg, Germany, 21 September 2023; pp. 1–7. [Google Scholar]
- Kumar, B.; Nanda, S.; Parthasarathy, G.; Patil, P.; Tsai, A.; Choudhary, P. HDL-GPT: High-Quality HDL is All You Need. arXiv 2024, arXiv:2407.18423. [Google Scholar]
- Qiu, R.; Zhang, G.L.; Drechsler, R.; Schlichtmann, U.; Li, B. AutoBench: Automatic Testbench Generation and Evaluation Using LLMs for HDL Design. In Proceedings of the 2024 ACM/IEEE International Symposium on Machine Learning for CAD, Salt Lake City, UT, USA, 9–11 September 2024; pp. 1–10. [Google Scholar]
- Wenzel, J.; Hochberger, C. Automatically Restructuring HDL Modules for Improved Reusability in Rapid Synthesis. In Proceedings of the 2022 IEEE International Workshop on Rapid System Prototyping (RSP), Shanghai, China, 13 October 2022; pp. 43–49. [Google Scholar]
- Witharana, H.; Lyu, Y.; Charles, S.; Mishra, P. A survey on assertion-based hardware verification. ACM Comput. Surv. CSUR 2022, 54, 1–33. [Google Scholar] [CrossRef]
- Agostini, N.B.; Haris, J.; Gibson, P.; Jayaweera, M.; Rubin, N.; Tumeo, A.; Abellán, J.L.; Cano, J.; Kaeli, D. AXI4MLIR: User-Driven Automatic Host Code Generation for Custom AXI-Based Accelerators. In Proceedings of the 2024 IEEE/ACM International Symposium on Code Generation and Optimization (CGO), Edinburgh, UK, 2–6 March 2024; pp. 143–157. [Google Scholar]
- Vivekananda, A.A.; Enoiu, E. Automated test case generation for digital system designs: A mapping study on VHDL, Verilog, and SystemVerilog description languages. Designs 2020, 4, 31. [Google Scholar] [CrossRef]
- Wang, N. HDL Synthesis, Inference and Technology Mapping Algorithms for FPGA Configuration. Int. J. Eng. Technol. 2024, 16, 32–38. [Google Scholar]
- Cardona Nadal, J. Practical Strategies to Monitor and Control Contention in Shared Resources of Critical Real-Time Embedded Systems. Ph.D. Thesis, Universitat Politècnica de Catalunya, Barcelona, Spain, 2023. [Google Scholar]
- Jayasena, A.; Mishra, P. Directed test generation for hardware validation: A survey. ACM Comput. Surv. 2024, 56, 1–36. [Google Scholar] [CrossRef]
- Srivastava, A.; Mukherjee, R.; Marschner, E.; Seeley, C.; Dobre, S. Low Power SoC Verification: IP Reuse and Hierarchical Composition using UPF. In Proceedings of the Design and Verification Conference (DVCon), San Jose, CA, USA, 2012. Available online: https://dvcon-proceedings.org/document/low-power-soc-verification-ip-reuse-and-hierarchical-composition-using-upf/ (accessed on 19 December 2024).
- Mullane, B.; MacNamee, C. Developing a reusable IP platform within a System-on-Chip design framework targeted towards an academic R&D environment. Design and Reuse 2008. Available online: https://www.design-reuse.com/articles/16039/developing-a-reusable-ip-platform-within-a-system-on-chip-design-framework-targeted-towards-an-academic-r-d-environment.html (accessed on 19 December 2024).
- Leipnitz, M.T.; Nazar, G.L. High-level synthesis of approximate designs under real-time constraints. ACM Trans. Embed. Comput. Syst. TECS 2019, 18, 1–21. [Google Scholar] [CrossRef]
- Gangadharan, S.; Churiwala, S. Constraining Designs for Synthesis and Timing Analysis; Springer: Berlin/Heidelberg, Germany, 2013. [Google Scholar]
- Namazi, A.; Abdollahi, M. PCG: Partially clock-gating approach to reduce the power consumption of fault-tolerant register files. In Proceedings of the 2017 Euromicro Conference on Digital System Design (DSD), Vienna, Austria, 30 August–1 September 2017; pp. 323–328. [Google Scholar]
- Namazi, A.; Abdollahi, M.; Safari, S.; Mohammadi, S. LORAP: Low-overhead power and reliability-aware task mapping based on instruction footprint for real-time applications. In Proceedings of the 2017 Euromicro Conference on Digital System Design (DSD), Vienna, Austria, 30 August–1 September 2017; pp. 364–367. [Google Scholar]
- Namazi, A.; Abdollahi, M. LPVM: Low-Power Variation-Mitigant Adder Architecture Using Carry Expedition. In Proceedings of the Workshop on Early Reliability Modeling for Aging and Variability in Silicon Systems, Dresden, Germany, 18 March 2016; pp. 41–44. [Google Scholar]
- Chandra, A.; Chattopadhyay, S. Design of hardware efficient FIR filter: A review of the state-of-the-art approaches. Eng. Sci. Technol. Int. J. 2016, 19, 212–226. [Google Scholar] [CrossRef]
- Chegini, M.; Abdollahi, M.; Baniasadi, A.; Patooghy, A. Tiny-RFNet: Enabling Modulation Classification of Radio Signals on Edge Systems. In Proceedings of the 2024 5th CPSSI International Symposium on Cyber-Physical Systems (Applications and Theory) (CPSAT), Tehran, Iran, 16–17 October 2024; pp. 1–8. [Google Scholar]
- Narayanan, D.; Harlap, A.; Phanishayee, A.; Seshadri, V.; Devanur, N.R.; Ganger, G.R.; Gibbons, P.B.; Zaharia, M. PipeDream: Generalized pipeline parallelism for DNN training. In Proceedings of the 27th ACM Symposium on Operating Systems Principles, Huntsville, ON, Canada, 27–30 October 2019; pp. 1–15. [Google Scholar]
- Osawa, K.; Li, S.; Hoefler, T. PipeFisher: Efficient training of large language models using pipelining and Fisher information matrices. Proc. Mach. Learn. Syst. 2023, 5, 708–727. [Google Scholar]
- Chen, S.; Zhang, H.; Austin, T. Zipper: Latency-Tolerant Optimizations for High-Performance Buses. In Proceedings of the Asia and South Pacific Design Automation Conference (ASP-DAC), Tokyo, Japan, 20–23 January 2025. [Google Scholar]
- Shammasi, M.; Baharloo, M.; Abdollahi, M.; Baniasadi, A. Turn-aware application mapping using reinforcement learning in power gating-enabled network on chip. In Proceedings of the 2022 IEEE 15th International Symposium on Embedded Multicore/Many-core Systems-on-Chip (MCSoC), Penang, Malaysia, 19–22 December 2022; pp. 345–352. [Google Scholar]
- Aligholipour, R.; Baharloo, M.; Farzaneh, B.; Abdollahi, M.; Khonsari, A. TAMA: Turn-aware mapping and architecture–a power-efficient network-on-chip approach. ACM Trans. Embed. Comput. Syst. TECS 2021, 20, 1–24. [Google Scholar] [CrossRef]
- Abdollahi, M.; Namazi, A.; Mohammadi, S. Clustering effects on the design of opto-electrical network-on-chip. In Proceedings of the 2016 24th Euromicro International Conference on Parallel, Distributed, and Network-Based Processing (PDP), Heraklion, Greece, 17–19 February 2016; pp. 427–430. [Google Scholar]
- Jayakrishnan, M.; Chang, A.; Kim, T.T.H. Power and area efficient clock stretching and critical path reshaping for error resilience. J. Low Power Electron. Appl. 2019, 9, 5. [Google Scholar] [CrossRef]
- Smith, F.; Van den Berg, A.E. Hardware genetic algorithm optimisation by critical path analysis using a custom VLSI architecture. S. Afr. Comput. J. 2015, 56, 120–135. [Google Scholar]
- Barkalov, A.; Titarenko, L.; Mielcarek, K.; Mazurkiewicz, M. Hardware reduction for FSMs with extended state codes. IEEE Access 2024, 12, 42369–42384. [Google Scholar] [CrossRef]
- Barkalov, A.; Titarenko, L.; Chmielewski, S. Hardware reduction in CPLD-based Moore FSM. J. Circuits Syst. Comput. 2014, 23, 1450086. [Google Scholar] [CrossRef]
- Barkalov, A.; Titarenko, L.; Malcheva, R.; Soldatov, K. Hardware reduction in FPGA-based Moore FSM. J. Circuits Syst. Comput. 2013, 22, 1350006. [Google Scholar] [CrossRef]
- Fummi, F.; Sciuto, D. A complete testing strategy based on interacting and hierarchical FSMs. Integration 1997, 23, 75–93. [Google Scholar] [CrossRef]
- Farahmandi, F.; Rahman, M.S.; Rajendran, S.R.; Tehranipoor, M. CAD for Fault Injection Detection. In CAD for Hardware Security; Springer: Berlin/Heidelberg, Germany, 2023; pp. 149–168. [Google Scholar]
- Minns, P.D. Digital System Design Using FSMs: A Practical Learning Approach; John Wiley & Sons: Hoboken, NJ, USA, 2021. [Google Scholar]
- Barkalov, A.; Titarenko, L.; Bieganowski, J.; Krzywicki, K. Basic Approaches for Reducing Power Consumption in Finite State Machine Circuits—A Review. Appl. Sci. 2024, 14, 2693. [Google Scholar] [CrossRef]
- Okada, S.; Ohzeki, M.; Taguchi, S. Efficient partition of integer optimization problems with one-hot encoding. Sci. Rep. 2019, 9, 13036. [Google Scholar] [CrossRef]
- Uyar, M.Ü.; Fecko, M.A.; Sethi, A.S.; Amer, P.D. Testing protocols modeled as FSMs with timing parameters. Comput. Netw. 1999, 31, 1967–1988. [Google Scholar] [CrossRef]
- Amir, M.; Givargis, T. Pareto optimal design space exploration of cyber-physical systems. Internet Things 2020, 12, 100308. [Google Scholar] [CrossRef]
- Tian, Y.; Si, L.; Zhang, X.; Cheng, R.; He, C.; Tan, K.C.; Jin, Y. Evolutionary large-scale multi-objective optimization: A survey. ACM Comput. Surv. CSUR 2021, 54, 1–34. [Google Scholar] [CrossRef]
- Yang, L.; Shami, A. On hyperparameter optimization of machine learning algorithms: Theory and practice. Neurocomputing 2020, 415, 295–316. [Google Scholar] [CrossRef]
- Balasubramaniam, D.; Jefferson, C.; Kotthoff, L.; Miguel, I.; Nightingale, P. An automated approach to generating efficient constraint solvers. In Proceedings of the 2012 34th International Conference on Software Engineering (ICSE), Zurich, Switzerland, 2–9 June 2012; pp. 661–671. [Google Scholar]
- Abdollahi, M.; Mashhadi, S.; Sabzalizadeh, R.; Mirzaei, A.; Elahi, M.; Baharloo, M.; Baniasadi, A. IODnet: Indoor/Outdoor Telecommunication Signal Detection through Deep Neural Network. In Proceedings of the 2023 IEEE 16th International Symposium on Embedded Multicore/Many-core Systems-on-Chip (MCSoC), Singapore, 18–21 December 2023; pp. 134–141. [Google Scholar]
- Mashhadi, S.; Diyanat, A.; Abdollahi, M.; Baniasadi, A. DSP: A Deep Neural Network Approach for Serving Cell Positioning in Mobile Networks. In Proceedings of the 2023 10th International Conference on Wireless Networks and Mobile Communications (WINCOM), Istanbul, Turkey, 26–28 October 2023; pp. 1–6. [Google Scholar]
- Abdollahi, M.; Sabzalizadeh, R.; Javadinia, S.; Mashhadi, S.; Mehrizi, S.S.; Baniasadi, A. Automatic Modulation Classification for NLOS 5G Signals with Deep Learning Approaches. In Proceedings of the 2023 10th International Conference on Wireless Networks and Mobile Communications (WINCOM), Istanbul, Turkey, 26–28 October 2023; pp. 1–6. [Google Scholar]
- Yoo, H.J.; Lee, K.; Kim, J.K. Low-Power NoC for High-Performance SoC Design; CRC Press: Boca Raton, FL, USA, 2018. [Google Scholar]
- Baharloo, M.; Aligholipour, R.; Abdollahi, M.; Khonsari, A. ChangeSUB: A power efficient multiple network-on-chip architecture. Comput. Electr. Eng. 2020, 83, 106578. [Google Scholar] [CrossRef]
- Yenugula, M. Data Center Power Management Using Neural Network. Int. J. Adv. Acad. Stud. 2021, 3, 320–325. [Google Scholar] [CrossRef]
- Kose, N.A.; Jinad, R.; Rasheed, A.; Shashidhar, N.; Baza, M.; Alshahrani, H. Detection of Malicious Threats Exploiting Clock-Gating Hardware Using Machine Learning. Sensors 2024, 24, 983. [Google Scholar] [CrossRef]
- Wang, Y.; Sheng, M.; Wang, X.; Wang, L.; Li, J. Mobile-edge computing: Partial computation offloading using dynamic voltage scaling. IEEE Trans. Commun. 2016, 64, 4268–4282. [Google Scholar] [CrossRef]
- Joshi, S.; Li, D.; Ogrenci-Memik, S.; Deptuch, G.; Hoff, J.; Jindariani, S.; Liu, T.; Olsen, J.; Tran, N. Multi-Vdd design for content addressable memories (CAM): A power-delay optimization analysis. J. Low Power Electron. Appl. 2018, 8, 25. [Google Scholar] [CrossRef]
- Tiwari, A.; Bisht, M.R. Leakage Power Reduction in CMOS VLSI Circuits using Advance Leakage Reduction Method. Int. J. Res. Appl. Sci. Eng. Technol. 2021, 9, 962–966. [Google Scholar] [CrossRef]
- Pathak, A.; Sachan, D.; Peta, H.; Goswami, M. A modified SRAM based low power memory design. In Proceedings of the 2016 29th International Conference on VLSI Design and 2016 15th International Conference on Embedded Systems (VLSID), Kolkata, India, 4–8 January 2016; pp. 122–127. [Google Scholar]
- Birla, S.; Singh, N.; Shukla, N. Low-power memory design for IoT-enabled systems: Part 2. In Electrical and Electronic Devices, Circuits and Materials; CRC Press: Boca Raton, FL, USA, 2021; pp. 63–80. [Google Scholar]
- Cao, R.; Yang, Y.; Gu, H.; Huang, L. A thermal-aware power allocation method for optical network-on-chip. IEEE Access 2018, 6, 61176–61183. [Google Scholar] [CrossRef]
- Dehghani, F.; Mohammadi, S.; Barekatain, B.; Abdollahi, M. Power loss analysis in thermally-tuned nanophotonic switch for on-chip interconnect. Nano Commun. Netw. 2020, 26, 100323. [Google Scholar] [CrossRef]
- Abdollahi, M.; Chegini, M.; Hesar, M.H.; Javadinia, S.; Patooghy, A.; Baniasadi, A. NoCSNet: Network-on-Chip Security Assessment Under Thermal Attacks Using Deep Neural Network. In Proceedings of the 2024 17th IEEE/ACM International Workshop on Network on Chip Architectures (NoCArc), Austin, TX, USA, 3 November 2024; pp. 1–6. [Google Scholar]
- Bhasker, J.; Chadha, R. Static Timing Analysis for Nanometer Designs: A Practical Approach; Springer Science & Business Media: Berlin/Heidelberg, Germany, 2009. [Google Scholar]
- Willis, R. Critical path analysis and resource constrained project scheduling—theory and practice. Eur. J. Oper. Res. 1985, 21, 149–155. [Google Scholar] [CrossRef]
- Kao, C.C. Clock skew minimization in multiple dynamic supply voltage with adjustable delay buffers restriction. J. Signal Process. Syst. 2015, 79, 99–104. [Google Scholar] [CrossRef]
- Hatture, S.; Dhage, S. Multi-clock domain synchronizers. In Proceedings of the 2015 International Conference on Computation of Power, Energy, Information and Communication (ICCPEIC), Melmaruvathur, India, 22–23 April 2015; pp. 403–408. [Google Scholar]
- Saboori, E.; Abdi, S. Rapid design space exploration of multi-clock domain MPSoCs with Hybrid Prototyping. In Proceedings of the 2016 IEEE Canadian Conference on Electrical and Computer Engineering (CCECE), Vancouver, Canada, 15–18 May 2016; pp. 1–6. [Google Scholar]
- Chentouf, M.; Ismaili, Z.E.A.A. A PUS based nets weighting mechanism for power, hold, and setup timing optimization. Integration 2022, 84, 122–130. [Google Scholar] [CrossRef]
- Wang, C.Y.; Liao, H.Y.M.; Yeh, I.H. Designing network design strategies through gradient path analysis. arXiv 2022, arXiv:2211.04800. [Google Scholar]
- Mirhoseini, A.; Goldie, A.; Yazgan, M.; Jiang, J.W.; Songhori, E.; Wang, S.; Lee, Y.J.; Johnson, E.; Pathak, O.; Nazi, A.; et al. A graph placement methodology for fast chip design. Nature 2021, 594, 207–212. [Google Scholar] [CrossRef] [PubMed]
- Dey, S.; Nandi, S.; Trivedi, G. PowerPlanningDL: Reliability-aware framework for on-chip power grid design using deep learning. In Proceedings of the 2020 Design, Automation & Test in Europe Conference & Exhibition (DATE), Grenoble, France, 9–13 March 2020; pp. 1520–1525. [Google Scholar]
- Szentimrey, H.; Al-Hyari, A.; Foxcroft, J.; Martin, T.; Noel, D.; Grewal, G.; Areibi, S. Machine learning for congestion management and routability prediction within FPGA placement. ACM Trans. Des. Autom. Electron. Syst. TODAES 2020, 25, 1–25. [Google Scholar] [CrossRef]
- Lin, J.M.; Chang, W.Y.; Hsieh, H.Y.; Shyu, Y.T.; Chang, Y.J.; Lu, J.M. Thermal-aware floorplanning and TSV-planning for mixed-type modules in a fixed-outline 3-D IC. IEEE Trans. Very Large Scale Integr. VLSI Syst. 2021, 29, 1652–1664. [Google Scholar] [CrossRef]
- Guan, W.; Tang, X.; Lu, H.; Zhang, Y.; Zhang, Y. Thermal-Aware Fixed-Outline 3-D IC Floorplanning: An End-to-End Learning-Based Approach. IEEE Trans. Very Large Scale Integr. VLSI Syst. 2023, 12, 1882–1895. [Google Scholar] [CrossRef]
- Kim, D.; Kim, M.; Hur, J.; Lee, J.; Cho, J.; Kang, S. TA3D: Timing-Aware 3D IC Partitioning and Placement by Optimizing the Critical Path. In Proceedings of the 2024 ACM/IEEE International Symposium on Machine Learning for CAD, Salt Lake City, UT, USA, 9–11 September 2024; pp. 1–7. [Google Scholar]
- Xu, Q.; Rocha, R.T.; Algoos, Y.; Feron, E.; Younis, M.I. Design, simulation, and testing of a tunable MEMS multi-threshold inertial switch. Microsyst. Nanoeng. 2024, 10, 31. [Google Scholar] [CrossRef]
- Hosseini, S.A.; Roosta, E. A novel technique to produce logic ‘1’ in multi-threshold ternary circuits design. Circuits Syst. Signal Process. 2021, 40, 1152–1165. [Google Scholar] [CrossRef]
- Haj-Yahya, J.; Alser, M.; Kim, J.; Yağlıkçı, A.G.; Vijaykumar, N.; Rotem, E.; Mutlu, O. SysScale: Exploiting multi-domain dynamic voltage and frequency scaling for energy efficient mobile processors. In Proceedings of the 2020 ACM/IEEE 47th Annual International Symposium on Computer Architecture (ISCA), Valencia, Spain, 30 May–3 June 2020; pp. 227–240. [Google Scholar]
- Tsou, W.J.; Yang, W.H.; Lin, J.H.; Chen, H.; Chen, K.H.; Wey, C.L.; Lin, Y.H.; Lin, S.R.; Tsai, T.Y. 20.2 digital low-dropout regulator with anti PVT-variation technique for dynamic voltage scaling and adaptive voltage scaling multicore processor. In Proceedings of the 2017 IEEE International Solid-State Circuits Conference (ISSCC), San Francisco, CA, USA, 5–9 February 2017; pp. 338–339. [Google Scholar]
- Lungu, A.; Bose, P.; Buyuktosunoglu, A.; Sorin, D.J. Dynamic power gating with quality guarantees. In Proceedings of the 2009 ACM/IEEE International Symposium on Low Power Electronics and Design, San Francisco, CA, USA, 19–21 August 2009; pp. 377–382. [Google Scholar]
- Jahanirad, H. Dynamic power-gating for leakage power reduction in FPGAs. Front. Inf. Technol. Electron. Eng. 2023, 24, 582–598. [Google Scholar] [CrossRef]
- Scarabottolo, I.; Ansaloni, G.; Constantinides, G.A.; Pozzi, L.; Reda, S. Approximate logic synthesis: A survey. Proc. IEEE 2020, 108, 2195–2213. [Google Scholar] [CrossRef]
- Wu, J.; Zhang, Y.; Zukerman, M.; Yung, E.K.N. Energy-efficient base-stations sleep-mode techniques in green cellular networks: A survey. IEEE Commun. Surv. Tutor. 2015, 17, 803–826. [Google Scholar] [CrossRef]
- Ning, S.; Zhu, H.; Feng, C.; Gu, J.; Jiang, Z.; Ying, Z.; Midkiff, J.; Jain, S.; Hlaing, M.H.; Pan, D.Z.; et al. Photonic-Electronic Integrated Circuits for High-Performance Computing and AI Accelerators. J. Light. Technol. 2024. [Google Scholar] [CrossRef]
- Park, H.; Kim, S. Hardware accelerator systems for artificial intelligence and machine learning. In Advances in Computers; Elsevier: Amsterdam, The Netherlands, 2021; Volume 122, pp. 51–95. [Google Scholar]
- Hu, X.; Li, X.; Huang, H.; Zheng, X.; Xiong, X. Tinna: A tiny accelerator for neural networks with efficient DSP optimization. IEEE Trans. Circuits Syst. II Express Briefs 2022, 69, 2301–2305. [Google Scholar] [CrossRef]
- Liu, S.; Cao, Y.; Sun, S. Mapping and optimization method of SpMV on Multi-DSP accelerator. Electronics 2022, 11, 3699. [Google Scholar] [CrossRef]
- Dai, K.; Xie, Z.; Liu, S. DCP-CNN: Efficient Acceleration of CNNs With Dynamic Computing Parallelism on FPGA. IEEE Trans. Comput.-Aided Des. Integr. Circuits Syst. 2024. [Google Scholar] [CrossRef]
- Zacharopoulos, G.; Ejjeh, A.; Jing, Y.; Yang, E.Y.; Jia, T.; Brumar, I.; Intan, J.; Huzaifa, M.; Adve, S.; Adve, V.; et al. Trireme: Exploration of hierarchical multi-level parallelism for hardware acceleration. ACM Trans. Embed. Comput. Syst. 2023, 22, 1–23. [Google Scholar] [CrossRef]
- Jamilan, S.; Abdollahi, M.; Mohammadi, S. Cache energy management through dynamic reconfiguration approach in opto-electrical noc. In Proceedings of the 2017 25th Euromicro International Conference on Parallel, Distributed and Network-based Processing (PDP), St. Petersburg, Russia, 6–8 March 2017; pp. 576–583. [Google Scholar]
- Sanca, V.; Ailamaki, A. Post-Moore’s Law Fusion: High-Bandwidth Memory, Accelerators, and Native Half-Precision Processing for CPU-Local Analytics. In Proceedings of the Joint Workshops at 49th International Conference on Very Large Data Bases (VLDBW’23), Vancouver, BC, Canada, 28 August–1 September 2023. [Google Scholar]
- Hur, S.; Na, S.; Kwon, D.; Kim, J.; Boutros, A.; Nurvitadhi, E.; Kim, J. A fast and flexible FPGA-based accelerator for natural language processing neural networks. ACM Trans. Archit. Code Optim. 2023, 20, 1–24. [Google Scholar] [CrossRef]
- Kabir, E.; Kabir, M.A.; Downey, A.R.; Bakos, J.D.; Andrews, D.; Huang, M. FAMOUS: Flexible Accelerator for the Attention Mechanism of Transformer on UltraScale+ FPGAs. arXiv 2024, arXiv:2409.14023. [Google Scholar]
- Lee, H.; Lee, J.; Kang, S. A Robust Test Architecture for Low-Power AI Accelerators. IEEE Trans. Comput.-Aided Des. Integr. Circuits Syst. 2024. [Google Scholar] [CrossRef]
- Lee, S.; Park, J.; Park, S.; Kim, H.; Kang, S. A New Zero-Overhead Test Method for Low-Power AI Accelerators. IEEE Trans. Circuits Syst. II Express Briefs 2023, 71, 2649–2653. [Google Scholar] [CrossRef]
- Shah, N.; Meert, W.; Verhelst, M. Efficient Execution of Irregular Dataflow Graphs: Hardware/Software Co-Optimization for Probabilistic AI and Sparse Linear Algebra; Springer Nature: Berlin/Heidelberg, Germany, 2023. [Google Scholar]
- Rashidi, B.; Gao, C.; Lu, S.; Wang, Z.; Zhou, C.; Niu, D.; Sun, F. UNICO: Unified Hardware Software Co-Optimization for Robust Neural Network Acceleration. In Proceedings of the 56th Annual IEEE/ACM International Symposium on Microarchitecture, Toronto, ON, Canada, 28 October–1 November 2023; pp. 77–90. [Google Scholar]
- Arman, G. New Approach of IO Cell Placement Addressing Minimized Data and Clock Skews in Top Level. In Proceedings of the 2023 IEEE East-West Design & Test Symposium (EWDTS), Batumi, Georgia, 22–25 September 2023; pp. 1–5. [Google Scholar]
- Deng, C.; Cai, Y.C.; Zhou, Q. Register clustering methodology for low power clock tree synthesis. J. Comput. Sci. Technol. 2015, 30, 391–403. [Google Scholar] [CrossRef]
- Kyriakakis, E.; Tange, K.; Reusch, N.; Zaballa, E.O.; Fafoutis, X.; Schoeberl, M.; Dragoni, N. Fault-tolerant clock synchronization using precise time protocol multi-domain aggregation. In Proceedings of the 2021 IEEE 24th International Symposium on Real-Time Distributed Computing (ISORC), Daegu, Republic of Korea, 1–3 June 2021; pp. 114–122. [Google Scholar]
- Han, K.; Kahng, A.B.; Li, J. Optimal generalized H-tree topology and buffering for high-performance and low-power clock distribution. IEEE Trans. Comput.-Aided Des. Integr. Circuits Syst. 2018, 39, 478–491. [Google Scholar] [CrossRef]
- Rahman, M.S.; Guo, R.; Kamali, H.M.; Rahman, F.; Farahmandi, F.; Abdel-Moneum, M.; Tehranipoor, M. O’Clock: Lock the clock via clock-gating for SoC IP protection. In Proceedings of the 59th ACM/IEEE Design Automation Conference, San Francisco, CA, USA, 10–14 July 2022; pp. 775–780. [Google Scholar]
- Hu, K.; Hou, X.; Lin, Z. Advancements In Low-Power Technologies: Clock-Gated Circuits and Beyond. Highlights Sci. Eng. Technol. 2024, 81, 218–225. [Google Scholar] [CrossRef]
- Erra, R.; Stine, J.E. Power Reduction of Montgomery Multiplication Architectures Using Clock Gating. In Proceedings of the 2024 IEEE 67th International Midwest Symposium on Circuits and Systems (MWSCAS), Springfield, MA, USA, 11–14 August 2024; pp. 474–478. [Google Scholar]
- Namazi, A.; Safari, S.; Mohammadi, S.; Abdollahi, M. SORT: Semi online reliable task mapping for embedded multi-core systems. ACM Trans. Model. Perform. Eval. Comput. Syst. TOMPECS 2019, 4, 1–25. [Google Scholar] [CrossRef]
- Namazi, A.; Abdollahi, M.; Safari, S.; Mohammadi, S.; Daneshtalab, M. Lrtm: Life-time and reliability-aware task mapping approach for heterogeneous multi-core systems. In Proceedings of the 2018 11th International Workshop on Network on Chip Architectures (NoCArc), Fukuoka, Japan, 20 October 2018; pp. 1–6. [Google Scholar]
- Abumwais, A.; Obaid, M. Shared Cache Based on Content Addressable Memory in a Multi-Core Architecture. Comput. Mater. Contin. 2023, 74. [Google Scholar] [CrossRef]
- Bahn, H.; Cho, K. Implications of NVM based storage on memory subsystem management. Appl. Sci. 2020, 10, 999. [Google Scholar] [CrossRef]
- Sarkar, R.; Abi-Karam, S.; He, Y.; Sathidevi, L.; Hao, C. FlowGNN: A dataflow architecture for real-time workload-agnostic graph neural network inference. In Proceedings of the 2023 IEEE International Symposium on High-Performance Computer Architecture (HPCA), Montreal, QC, Canada, 25 February–1 March 2023; pp. 1099–1112. [Google Scholar]
- Kenter, T.; Shambhu, A.; Faghih-Naini, S.; Aizinger, V. Algorithm-hardware co-design of a discontinuous Galerkin shallow-water model for a dataflow architecture on FPGA. In Proceedings of the Platform for Advanced Scientific Computing Conference, Geneva, Switzerland, 5–9 July 2021; pp. 1–11. [Google Scholar]
- Besta, M.; Kanakagiri, R.; Kwasniewski, G.; Ausavarungnirun, R.; Beránek, J.; Kanellopoulos, K.; Janda, K.; Vonarburg-Shmaria, Z.; Gianinazzi, L.; Stefan, I.; et al. SISA: Set-centric instruction set architecture for graph mining on processing-in-memory systems. In Proceedings of the MICRO-54: 54th Annual IEEE/ACM International Symposium on Microarchitecture, Athens, Greece, 18–22 October 2021; pp. 282–297. [Google Scholar]
- Sahabandu, D.; Mertoguno, J.S.; Poovendran, R. A natural language processing approach for instruction set architecture identification. IEEE Trans. Inf. Forensics Secur. 2023, 18, 4086–4099. [Google Scholar] [CrossRef]
- Baharloo, M.; Abdollahi, M.; Baniasadi, A. System-level reliability assessment of optical network on chip. Microprocess. Microsystems 2023, 99, 104843. [Google Scholar] [CrossRef]
- Abdollahi, M.; Baharloo, M.; Shokouhinia, F.; Ebrahimi, M. RAP-NOC: Reliability assessment of photonic network-on-chips, a simulator. In Proceedings of the Eight Annual ACM International Conference on Nanoscale Computing and Communication, Virtual, 7–9 September 2021; pp. 1–7. [Google Scholar]
- Hasanzadeh, M.; Abdollahi, M.; Baniasadi, A.; Patooghy, A. Thermo-Attack Resiliency: Addressing a New Vulnerability in Opto-Electrical Network-on-Chips. In Proceedings of the 2024 25th International Symposium on Quality Electronic Design (ISQED), San Francisco, CA, USA, 3–5 April 2024; pp. 1–9. [Google Scholar]
- Anuradha, P.; Majumder, P.; Sivaraman, K.; Vignesh, N.A.; Jayakar, A.; Anthonirj, S.; Mallik, S.; Al-Rasheed, A.; Abbas, M.; Soufiene, B.O. Enhancing High-Speed Data Communications: Optimization of Route Controlling Network on Chip Implementation. IEEE Access 2024, 12, 123514–123528. [Google Scholar] [CrossRef]
- Nisa, U.U.; Bashir, J. Towards efficient on-chip communication: A survey on silicon nanophotonics and optical networks-on-chip. J. Syst. Archit. 2024, 152, 103171. [Google Scholar] [CrossRef]
- Abdollahi, M.; Firouzabadi, Y.; Dehghani, F.; Mohammadi, S. THAMON: Thermal-aware High-performance Application Mapping onto Opto-electrical network-on-chip. J. Syst. Archit. 2021, 121, 102315. [Google Scholar] [CrossRef]
- Abdollahi, M.; Tavana, M.K.; Koohi, S.; Hessabi, S. ONC3: All-optical NoC based on cube-connected cycles with quasi-DOR algorithm. In Proceedings of the 2012 15th Euromicro Conference on Digital System Design, Izmir, Turkey, 5–8 September 2012; pp. 296–303. [Google Scholar]
- Bai, C.; Huang, J.; Wei, X.; Ma, Y.; Li, S.; Zheng, H.; Yu, B.; Xie, Y. ArchExplorer: Microarchitecture exploration via bottleneck analysis. In Proceedings of the 56th Annual IEEE/ACM International Symposium on Microarchitecture, Toronto, ON, Canada, 28 October–1 November 2023; pp. 268–282. [Google Scholar]
- Dave, S.; Nowatzki, T.; Shrivastava, A. Explainable-DSE: An Agile and Explainable Exploration of Efficient HW/SW Codesigns of Deep Learning Accelerators Using Bottleneck Analysis. In Proceedings of the 28th ACM International Conference on Architectural Support for Programming Languages and Operating Systems, Vancouver, BC, Canada, 25–29 March 2023; Volume 4, pp. 87–107. [Google Scholar]
- Bernstein, L.; Sludds, A.; Hamerly, R.; Sze, V.; Emer, J.; Englund, D. Freely scalable and reconfigurable optical hardware for deep learning. Sci. Rep. 2021, 11, 3144. [Google Scholar] [CrossRef]
- Jia, H.; Ozatay, M.; Tang, Y.; Valavi, H.; Pathak, R.; Lee, J.; Verma, N. 15.1 a programmable neural-network inference accelerator based on scalable in-memory computing. In Proceedings of the 2021 IEEE International Solid-State Circuits Conference (ISSCC), San Francisco, CA, USA, 13–22 February 2021; Volume 64, pp. 236–238. [Google Scholar]
- Lakshmanna, K.; Shaik, F.; Gunjan, V.K.; Singh, N.; Kumar, G.; Shafi, R.M. Perimeter degree technique for the reduction of routing congestion during placement in physical design of VLSI circuits. Complexity 2022, 2022, 8658770. [Google Scholar] [CrossRef]
- Chen, X.; Liu, G.; Xiong, N.; Su, Y.; Chen, G. A survey of swarm intelligence techniques in VLSI routing problems. IEEE Access 2020, 8, 26266–26292. [Google Scholar] [CrossRef]
- Karimullah, S.; Vishnuvardhan, D. Experimental analysis of optimization techniques for placement and routing in ASIC design. In Proceedings of ICDSMLA 2019: The 1st International Conference on Data Science, Machine Learning and Applications, Hyderabad, India, 29–30 March 2019; Springer: Berlin/Heidelberg, Germany, 2020; pp. 908–917. [Google Scholar]
- Ramesh, S.; Manna, K.; Gogineni, V.C.; Chattopadhyay, S.; Mahapatra, S. Congestion-Aware Vertical Link Placement and Application Mapping Onto Three-Dimensional Network-On-Chip Architectures. IEEE Trans. Comput.-Aided Des. Integr. Circuits Syst. 2024, 43, 2249–2262. [Google Scholar] [CrossRef]
- Rocher-Gonzalez, J.; Escudero-Sahuquillo, J.; Garcia, P.J.; Quiles, F.J. Congestion management in high-performance interconnection networks using adaptive routing notifications. J. Supercomput. 2023, 79, 7804–7834. [Google Scholar] [CrossRef]
- Cho, Y.; Kim, H.; Lee, K.; Jo, H.; Lee, H.; Kim, M.; Im, Y. Fast and Real-Time Thermal-Aware Floorplan Methodology for SoC. IEEE Trans. Components Packag. Manuf. Technol. 2024, 14, 1568–1576. [Google Scholar] [CrossRef]
- Cho, Y.; Kim, H.; Lee, K.; Im, Y.; Lee, H.; Kim, M. Thermal Aware Floorplan Optimization of SoC in Mobile Phone. In Proceedings of the 2023 22nd IEEE Intersociety Conference on Thermal and Thermomechanical Phenomena in Electronic Systems (ITherm), Orlando, FL, USA, 30 May–2 June 2023; pp. 1–7. [Google Scholar]
- Dehghani, F.; Mohammadi, S.; Barekatain, B.; Abdollahi, M. ICES: An innovative crosstalk-efficient 2 × 2 photonic-crystal switch. Opt. Quantum Electron. 2021, 53, 1–15. [Google Scholar] [CrossRef]
- Kaur, M.; Singh, G.; Kumar, Y. RF and Crosstalk Characterization of Chip Interconnects Using Finite Element Method. Indian J. Eng. Mater. Sci. IJEMS 2023, 30, 132–137. [Google Scholar]
- Kashif, M.; Cicek, I. Field-programmable gate array (FPGA) hardware design and implementation of a new area-efficient elliptic curve crypto-processor. Turk. J. Electr. Eng. Comput. Sci. 2021, 29, 2127–2139. [Google Scholar] [CrossRef]
- Bardon, M.G.; Sherazi, Y.; Jang, D.; Yakimets, D.; Schuddinck, P.; Baert, R.; Mertens, H.; Mattii, L.; Parvais, B.; Mocuta, A.; et al. Power-performance trade-offs for lateral nanosheets on ultra-scaled standard cells. In Proceedings of the 2018 IEEE Symposium on VLSI Technology, Honolulu, HI, USA, 18–22 June 2018; pp. 143–144. [Google Scholar]
- Gao, X.; Qiao, Q.; Wang, M.; Niu, M.; Liu, H.; Maezawa, M.; Ren, J.; Wang, Z. Design and verification of SFQ cell library for superconducting LSI digital circuits. IEEE Trans. Appl. Supercond. 2021, 31, 1–5. [Google Scholar] [CrossRef]
- Dannan, B.; Kuszewski, J.; Vincent, R.; Wu, S.; McCaffrey, W.; Park, A. Improved methodology to accurately perform system level power integrity analysis including an ASIC die. Presented at DesignCon, Santa Clara, CA, USA, 5–7 April 2022. [Google Scholar]
- Meixner, A.; Gullo, L.J. Design for Test and Testability. Des. Maintainab. 2021, 245–264. [Google Scholar] [CrossRef]
- Huhn, S.; Drechsler, R. Design for Testability, Debug and Reliability; Springer: Berlin/Heidelberg, Germany, 2021. [Google Scholar]
- Deshpande, N.; Sowmya, K. A review on ASIC synthesis flow employing two industry standard tools. Int. J. Eng. Res. Technol. 2020, 8. [Google Scholar] [CrossRef]
- Taraate, V. ASIC Design and Synthesis; Springer Nature: Berlin/Heidelberg, Germany, 2021. [Google Scholar]
- Golshan, K. The Art of Timing Closure; Springer: Berlin/Heidelberg, Germany, 2020. [Google Scholar]
- Sariki, A.; Sai, G.M.V.; Khosla, M.; Raj, B. ASIC Design using Post Route ECO Methodologies for Timing Closure and Power Optimization. Int. J. Microsystems IoT 2023, 1, 195–204. [Google Scholar]
- Lau, J.H. Recent advances and trends in advanced packaging. IEEE Trans. Components, Packag. Manuf. Technol. 2022, 12, 228–252. [Google Scholar] [CrossRef]
- Abdollahi, M.; Mohammadi, S. Vulnerability assessment of fault-tolerant optical network-on-chips. J. Parallel Distrib. Comput. 2020, 145, 140–159. [Google Scholar] [CrossRef]
- Hiller, M.; Kürzinger, L.; Sigl, G. Review of error correction for PUFs and evaluation on state-of-the-art FPGAs. J. Cryptogr. Eng. 2020, 10, 229–247. [Google Scholar] [CrossRef]
- Djambazova, E.; Andreev, R. Redundancy Management in Dependable Distributed Real-Time Systems. Probl. Eng. Cybern. Robot. 2023, 79, 37–54. [Google Scholar] [CrossRef]
- Oszczypała, M.; Ziółkowski, J.; Małachowski, J. Redundancy allocation problem in repairable k-out-of-n systems with cold, warm, and hot standby: A genetic algorithm for availability optimization. Appl. Soft Comput. 2024, 165, 112041. [Google Scholar] [CrossRef]
- Hantos, G.; Flynn, D.; Desmulliez, M.P. Built-in self-test (BIST) methods for MEMS: A review. Micromachines 2020, 12, 40. [Google Scholar] [CrossRef] [PubMed]
- Li, M.; Lin, Y.; Gupta, S. Built in self test (BIST) for RSFQ circuits. In Proceedings of the 2024 IEEE 42nd VLSI Test Symposium (VTS), Tempe, AZ, USA, 22–24 April 2024; pp. 1–7. [Google Scholar]
- Verducci, O.; Oliveira, D.L.; Batista, G. Fault-tolerant finite state machine quasi delay insensitive in commercial FPGA devices. In Proceedings of the 2022 IEEE 13th Latin America Symposium on Circuits and System (LASCAS), Santiago, Chile, 1–4 March 2022; pp. 1–4. [Google Scholar]
- Salauyou, V. Fault Detection of Moore Finite State Machines by Structural Models. In Proceedings of the International Conference on Computer Information Systems and Industrial Management, Tokyo, Japan, 22–24 September 2023; Springer: Berlin/Heidelberg, Germany, 2023; pp. 394–409. [Google Scholar]
- Pavan Kumar, M.; Lorenzo, R. A review on radiation-hardened memory cells for space and terrestrial applications. Int. J. Circuit Theory Appl. 2023, 51, 475–499. [Google Scholar] [CrossRef]
- Lee, M.; Cho, S.; Lee, N.; Kim, J. New radiation-hardened design of a CMOS instrumentation amplifier and its tolerant characteristic analysis. Electronics 2020, 9, 388. [Google Scholar] [CrossRef]
- Wang, Z.; Chen, L.; Wang, S.; Zhou, J.; Tian, C.; Feng, H. AIP-SEM: An Efficient ML-Boost In-Place Soft Error Mitigation Method for SRAM-Based FPGA. In Proceedings of the 2024 2nd International Symposium of Electronics Design Automation (ISEDA), Xi’an, China, 10–13 May 2024; pp. 351–354. [Google Scholar]
- Xie, Y.; Qiao, T.; Xie, Y.; Chen, H. Soft error mitigation and recovery of SRAM-based FPGAs using brain-inspired hybrid-grained scrubbing mechanism. Front. Comput. Neurosci. 2023, 17, 1268374. [Google Scholar] [CrossRef]
- Xu, F.; Ding, N.; Li, N.; Liu, L.; Hou, N.; Xu, N.; Guo, W.; Tian, L.; Xu, H.; Wu, C.M.L.; et al. A review of bearing failure Modes, mechanisms and causes. Eng. Fail. Anal. 2023, 152, 107518. [Google Scholar] [CrossRef]
- Huang, J.; You, J.X.; Liu, H.C.; Song, M.S. Failure mode and effect analysis improvement: A systematic literature review and future research agenda. Reliab. Eng. Syst. Saf. 2020, 199, 106885. [Google Scholar] [CrossRef]
- Chen, B.; Zhang, F.; Nguyen, A.; Zan, D.; Lin, Z.; Lou, J.G.; Chen, W. CodeT: Code generation with generated tests. arXiv 2022, arXiv:2207.10397. [Google Scholar]
- Unno, H.; Terauchi, T.; Koskinen, E. Constraint-based relational verification. In Proceedings of the International Conference on Computer Aided Verification, Los Angeles, CA, USA, 18–24 July 2021; Springer: Berlin/Heidelberg, Germany; pp. 742–766. [Google Scholar]
- Jha, C.K.; Qayyum, K.; Coşkun, K.Ç.; Singh, S.; Hassan, M.; Leupers, R.; Merchant, F.; Drechsler, R. veriSIMPLER: An Automated Formal Verification Methodology for SIMPLER MAGIC Design Style Based In-Memory Computing. IEEE Trans. Circuits Syst. Regul. Pap. 2024, 71, 4169–4179. [Google Scholar] [CrossRef]
- Coudert, S.; Apvrille, L.; Sultan, B.; Hotescu, O.; de Saqui-Sannes, P. Incremental and Formal Verification of SysML Models. SN Comput. Sci. 2024, 5, 714. [Google Scholar] [CrossRef]
- Ayalasomayajula, A.; Farzana, N.; Tehranipoor, M.; Farahmandi, F. Automatic Asset Identification for Assertion-Based SoC Security Verification. IEEE Trans. Comput.-Aided Des. Integr. Circuits Syst. 2024, 43, 3264–3277. [Google Scholar] [CrossRef]
- Rostami, H.; Hosseini, M.; Azarpeyvand, A.; Iman, M.R.H.; Ghasempouri, T. Automatic High Functional Coverage Stimuli Generation for Assertion-based Verification. In Proceedings of the 2024 IEEE 30th International Symposium on On-Line Testing and Robust System Design (IOLTS), Brittany, France, 3–5 July 2024; pp. 1–7. [Google Scholar]
- Tian, K.; Mitchell, E.; Yao, H.; Manning, C.D.; Finn, C. Fine-tuning language models for factuality. arXiv 2023, arXiv:2311.08401. [Google Scholar]
- Yang, Z. Scalable Equivalence Checking for Behavioral Synthesis. Ph.D. Thesis, Computer Science Department, Portland State University, Portland, OR, USA, 2015. [Google Scholar]
- Aboudeif, R.A.H. Design and Implementation of UVM-Based Verification Framework for Deep Learning Accelerators. Master’s Thesis, School of Sciences and Engineering, The American University in Cairo, New Cairo, Egypt, 2024. [Google Scholar]
Paper | LLMs Model | LLMs Application Programming Interface (API) | LLMs Dataset | Domain LLMs | Taxonomy | LLMs Architecture | LLMs Configurations | ML Comparisons | Performance | Parameters and Hardware Specification | Scope | Key Findings | Methodology and Approach |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Huang et al. [93] | ✓ | × | ✓ | × | ✓ | ✓ | ✓ | ✓ | ✓ | × | LLM reasoning abilities | Explores LLMs’ reasoning abilities and evaluation methodologies | Reasoning-focused review |
Xi et al. [94] | ✓ | × | ✓ | ✓ | × | ✓ | × | × | ✓ | × | LLM-based AI agents for multiple domains | Highlights potential for LLMs as general-purpose agents | Agent-centric analysis |
Hadi et al. [95] | ✓ | × | ✓ | × | ✓ | ✓ | ✓ | × | ✓ | ✓ | Comprehensive review of LLMs, applications, and challenges | Highlights potential of LLMs in various domains, discusses challenges | Literature review and analysis |
Naveed et al. [96] | ✓ | × | ✓ | × | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | Overview of LLM architectures and performance | Challenges and advancements in LLM training, architectural innovations, and emergent abilities | Comparative review of models and training methods |
Fan et al. [97] | ✓ | × | ✓ | × | ✓ | ✓ | × | × | × | × | Bibliometric review of LLM research (2017–2023) | Tracks research trends, collaboration networks, and evolution of LLM research | Bibliometric analysis using topic modeling and citation networks |
Zhao et al. [5] | ✓ | ✓ | ✓ | × | ✓ | ✓ | ✓ | ✓ | ✓ | × | Comprehensive survey of LLM models, taxonomy | Detailed analysis of LLMs evolution, taxonomy, emergent abilities, adaptation, and evaluation | Thorough review, structured methodology, and various benchmarks |
Raiaan et al. [98] | ✓ | × | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | Comprehensive review of LLM architectures, applications, and challenges | Discusses LLM development, applications in various domains, and societal impact | Extensive literature review with comparisons and analysis of open issues |
Minaee et al. [99] | ✓ | × | ✓ | × | ✓ | ✓ | ✓ | ✓ | ✓ | × | Comprehensive survey of LLM architectures, datasets, and performance | Comprehensive review of LLM architectures, datasets, and evaluations | Comprehensive survey and analysis |
Liu et al. [100] | ✓ | × | ✓ | × | ✓ | ✓ | ✓ | × | ✓ | ✓ | Training and inference in LLMs | Cost-efficient training and inference techniques are crucial for LLMs | Comprehensive review of training techniques and inference optimizations |
Cui et al. [101] | ✓ | × | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | MLLMs for autonomous driving with extensive dataset coverage | Explores the potential of MLLMs in autonomous vehicle systems | Survey focusing on perception, planning, and control |
Chang et al. [102] | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | Comprehensive evaluation of LLMs across multiple domains and tasks | Details LLM evaluation protocols, benchmarks, and task categories | Survey of evaluation methods for LLMs |
Kachris et al. [103] | ✓ | × | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | Hardware solutions for accelerating LLMs | Energy efficiency improvements through hardware | Survey on hardware accelerators for LLMs |
Category | Task | Description
---|---|---
Design | HDL Code Generation | Automatically generate Verilog, VHDL, or SystemC code from high-level design descriptions or specifications.
 | Design Specification Translation | Convert natural language specifications into formal design requirements or constraints.
 | Design Optimization Suggestions | Provide recommendations for optimizing design parameters such as power, performance, and area.
 | Component Selection | Suggest suitable components based on design requirements and existing libraries.
 | Documentation Generation | Create detailed design documentation, including block diagrams, interface definitions, and data sheets.
 | Design Space Exploration | Propose and evaluate different design alternatives based on specified criteria.
 | IP Core Integration | Automate the integration of IP cores into larger systems, including interface matching and configuration.
Verification | Test Bench Generation | Automatically generate test benches, including stimulus and expected results, from high-level test plans.
 | Test Case Generation | Create individual test cases based on design specifications and verification requirements.
 | Bug Detection and Suggestion | Analyze simulation logs and error reports to identify potential bugs and suggest debugging steps.
 | Assertion Generation | Generate assertions for formal verification to ensure the correctness of design behavior (see the sketch after this table).
 | Coverage Analysis | Analyze coverage reports to identify untested areas and suggest additional tests.
 | Regression Test Management | Automate the organization, execution, and analysis of regression test suites.
 | Simulation Script Generation | Create scripts for running simulations with different configurations and scenarios.
Collaborative and Supportive Tasks | Code Review Assistance | Provide automated feedback on HDL code quality, compliance with coding standards, and potential issues.
 | Documentation Summarization | Summarize lengthy documentation and highlight key points for quicker understanding.
 | Training Material Creation | Generate tutorials, guides, and FAQs for training new team members on tools and processes.
 | Knowledge Base Maintenance | Organize and maintain a knowledge base of best practices, common issues, and solutions.
 | Natural Language Queries | Answer queries in natural language about design specifications, verification results, and other relevant topics.
Design and Verification Workflow Automation | Requirement Traceability | Track design requirements through all stages of development and verification, ensuring all requirements are met.
 | Change Impact Analysis | Analyze the impact of design changes on the overall system and suggest necessary verification updates.
 | Project Management Support | Assist in tracking project milestones, deadlines, and deliverables related to design and verification.
Advanced Automation | Design Validation | Validate design correctness against high-level specifications using formal methods and simulation.
 | Error Diagnosis | Diagnose errors in simulation results and suggest possible fixes based on historical data.
 | Performance Analysis | Perform detailed performance analysis and suggest improvements based on simulation data.
 | Automated Synthesis | Guide the synthesis process to optimize the design for specific targets (e.g., low power, high performance).
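To ground the Assertion Generation row above, the following is a minimal sketch of the kind of SystemVerilog Assertion (SVA) checker an LLM might derive from the natural-language property "every request must be granted within four cycles". The module name, the signals req and gnt, and the four-cycle bound are illustrative assumptions, not the output of any surveyed tool.

```systemverilog
// Hypothetical SVA checker sketched from a natural-language property.
// Assumes a simple handshake in which req is held high until gnt arrives.
module handshake_sva (
    input logic clk,
    input logic rst_n,
    input logic req,
    input logic gnt
);
    // "Every request must be granted within four cycles."
    property p_req_granted;
        @(posedge clk) disable iff (!rst_n)
            req |-> ##[1:4] gnt;
    endproperty

    // "Grant must never appear unless a request was pending one cycle earlier."
    property p_no_spurious_gnt;
        @(posedge clk) disable iff (!rst_n)
            gnt |-> $past(req);
    endproperty

    a_req_granted:     assert property (p_req_granted)
                       else $error("req not granted within 4 cycles");
    a_no_spurious_gnt: assert property (p_no_spurious_gnt)
                       else $error("gnt asserted without a pending req");
endmodule
```

In the flows surveyed here, such a checker would typically be bound to the design under test and exercised in simulation or handed to a formal property checker.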
Aspect | ChatGPT | Copilot | Gemini |
---|---|---|---|
Parameterization | Fully parameterized with WIDTH parameter. | Parameterizes WIDTH but less detailed implementation. | Parameterization included but lacks flexibility in edge cases. |
Reset Logic | Proper reset handling with all bits set to 0. | Reset logic initializes data correctly to 0. | Resets internal_data properly to 0. |
Enable Logic | Handles enable logic to load input data correctly. | Covers enable logic but misses finer details. | Covers enable logic for loading input data. |
Scan Logic | Includes scan logic with separate handling for WIDTH == 1. | Handles scan logic but lacks specific checks for WIDTH == 1. | Handles scan logic but without sufficient detail for edge cases. |
Error Handling (e.g., WIDTH == 1; see the sketch after this table) | Specifically handles WIDTH == 1 case to avoid out-of-bounds indexing. | Does not handle WIDTH == 1 case explicitly, which may cause issues. | Lacks handling for WIDTH == 1 case explicitly.
Clarity of Code | Code is well-structured and readable, with inline comments. | Readable but less comprehensive in handling edge cases. | Readable and concise, but could use more comments for clarity. |
Adherence to Specification | Adheres strictly to the input specification and handles edge cases. | Covers basic functionality but misses edge-case handling. | Mostly adheres to the specification but misses edge-case handling. |
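The comparison above concerns a parameterized scan register. As a concrete reference point, here is a minimal sketch of such a design, assuming the signature implied by the table (a WIDTH parameter, reset, enable, scan logic, and an internal_data state register); it is not any model's verbatim output. A generate block separates the WIDTH == 1 case, because the part-select internal_data[WIDTH-2:0] is ill-formed at elaboration when WIDTH == 1, which is exactly the out-of-bounds issue flagged in the Error Handling row.

```systemverilog
// Hypothetical parameterized scan register matching the behaviors
// compared in the table above: reset, enable, scan, and WIDTH == 1.
module scan_reg #(
    parameter WIDTH = 8
) (
    input  wire             clk,
    input  wire             rst_n,    // active-low reset: clear all bits
    input  wire             en,       // parallel load of data_in
    input  wire             scan_en,  // serial shift of scan_in
    input  wire             scan_in,
    input  wire [WIDTH-1:0] data_in,
    output reg  [WIDTH-1:0] internal_data
);
    generate
        if (WIDTH == 1) begin : g_width1
            // Single-bit case: no part-select, scan_in simply replaces the bit.
            always @(posedge clk or negedge rst_n) begin
                if (!rst_n)       internal_data <= 1'b0;
                else if (en)      internal_data <= data_in;
                else if (scan_en) internal_data <= scan_in;
            end
        end else begin : g_widthn
            always @(posedge clk or negedge rst_n) begin
                if (!rst_n)       internal_data <= {WIDTH{1'b0}};
                else if (en)      internal_data <= data_in;
                else if (scan_en) internal_data <= {internal_data[WIDTH-2:0], scan_in};
            end
        end
    endgenerate
endmodule
```

The priority order in each branch (reset, then enable, then scan) mirrors the reset and enable semantics described in the table.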
Parameter | Approach | Description | References
---|---|---|---
Scope and Focus | | Optimizing hardware specifically for LLM inference and performance | [108,119,121,131]
 | | Generation and optimization of hardware design code using LLMs | [109,110,117,128,135,137]
 | | Exploring broader challenges and opportunities in conversational and natural language-based hardware design | [112,113]
 | | RISC processor design automation | [136]
 | | LLM applications in hardware design | [107]
 | | Code detection for acceleration and quantization techniques | [114,123]
Methodologies | Benchmarking and Evaluation | Benchmarking and evaluating LLM performance in hardware-related tasks | [108,111,125,132]
 | Automated Generation | Methodologies for automating HDL generation and design specification using LLM feedback | [107,110,124,128,135,136]
 | Optimization Techniques | Exploring specific optimization techniques like hardware-aware transformers and structured pruning | [114,121,129,131,137]
Innovative Contributions | Creativity and Originality | Evaluating the creativity of LLM-generated hardware code | [107,115,132,136,137]
 | Hierarchical Prompting | Automated design of complex modules using hierarchical prompts | [135]
 | Neuromorphic Hardware | Focusing on designing neuromorphic hardware (spiking neuron arrays) using LLMs, highlighting an innovative application in the field | [116]
 | Consumer Hardware Feasibility | Investigating the feasibility of running LLMs on consumer-grade hardware, addressing practical deployment challenges | [127,131]
Application Areas | AI Accelerators | Automation of AI accelerator design, reflecting the growing importance of specialized hardware for AI tasks. | [119,124,131]
 | VLSI Design | VLSI design specifications, an area critical for complex integrated circuit design. | [128]
 | General Hardware Design | Looking at various aspects of hardware design and integration with LLMs. | [113,117,118,132,136,137]
Performance and Efficiency | | Making LLMs more efficient and hardware-friendly, addressing the computational and resource challenges associated with large models. | [121,123,131,135,136]
 | | Discussing frameworks and techniques to enhance inference performance, which are crucial for deploying LLMs in real-world applications. | [107,108,129,132,137]
Parameter | Approach | References
---|---|---
Objective and Focus | Developing a comprehensive dataset to support LLM-driven AI accelerator generation. | [144]
 | Detecting code patterns suitable for hardware acceleration using LLMs. | [114]
 | Automating hardware accelerator design using LLMs. | [145]
 | Optimizing batched LLM inferencing with a heterogeneous acceleration approach combining NPUs and PIMs. | [146]
 | Optimizing memory management and reducing compilation times for multi-core AI accelerators targeting large language models using a hybrid SPM-cache architecture. | [147]
Approach and Methodology | Curating a diverse set of hardware design examples and specifications for LLMs. | [144]
 | Training LLMs on a corpus of annotated code examples to detect hardware-accelerable code. | [114]
 | Using LLMs to interpret high-level hardware design specifications and generate accelerators. | [145]
 | Integrating NPUs and PIMs to handle computation-intensive and memory-bound tasks. | [146]
 | Integrating a shared cache with AI cores and employing a TMU for cache management, along with tile-level hardware prefetching and dead-block prediction. | [147]
Evaluation and Result | Evaluating the dataset by the performance of LLMs in generating accurate hardware designs; improvements noted. | [144]
 | Measuring the accuracy of LLMs in detecting acceleratable code sections and the resulting performance gains; significant improvements found. | [114]
 | Comparing LLM-generated accelerators with manually designed ones; LLM designs show comparable or superior performance. | [145]
 | Benchmarking against CPU and GPU setups; significant improvements in speed and energy efficiency. | [146]
 | The system outperforms traditional SPM in mixed-precision quantization scenarios. | [147]
Innovation and Impact | First standardized dataset for the intersection of LLMs and hardware accelerator design; potential to advance the field. | [144]
 | Application of LLMs to code optimization for hardware acceleration; automates the optimization process. | [114]
 | Automates the traditionally manual hardware design process, reducing development time and cost. | [145]
 | Combines NPU and PIM technologies to optimize LLM inferencing; addresses computational and memory challenges. | [146]
 | The hybrid SPM-cache architecture introduces novel hardware-level cache management for AI accelerators, especially beneficial for LLMs. | [147]
Future Directions | Expand and diversify the dataset; enhance LLM capabilities for complex tasks. | [144]
 | Develop more sophisticated models, integrate with different hardware platforms, and expand the dataset. | [114]
 | Refine models, expand applicability to different accelerators, and integrate with design tools. | [145]
 | Refine NPU and PIM integration, explore other heterogeneous configurations, and expand to other AI workloads. | [146]
 | Further optimize cache replacement policies and better integrate this architecture into future AI accelerator designs for large-scale AI models. | [147]
Parameter | Approach | References
---|---|---
Objective and Focus | Providing comprehensive LLM-based frameworks that enhance security analysis in SoC design by automating tasks such as bug fixing, vulnerability detection, and policy generation. | [150,154,158,159]
| Addressing the specific challenges of detecting and fixing bugs in hardware code, as well as the potential misuse of LLMs for malicious purposes such as designing hardware Trojans. | [152,153,156,160]
| Emphasizing the risks of relying on LLM-generated specifications and assertions and advocating for integrating LLMs with formal verification methods to ensure correctness and security. | [151,157]
Approach and Methodology | Utilizing LLMs in a broad range of security tasks, from HDL generation and verification to vulnerability detection and policy enforcement; SoCureLLM stands out for its scalability and focus on large-scale SoC designs. | [150,154,158,159,160]
| Ref. [151] advocates combining LLMs with formal verification techniques, while ref. [157] focuses on using LLMs to generate security assertions. | [151,157]
| Exploring how LLMs can assist in identifying and fixing hardware security bugs, presenting frameworks that analyze hardware code for vulnerabilities. | [152,156,161]
Evaluation and Results | Demonstrating significant improvements in detecting vulnerabilities and generating security policies through case studies and experiments, with SoCureLLM in particular outperforming traditional methods in large-scale SoC designs. | [150,154,158,159]
| Showing that combining LLMs with formal methods and generating security assertions can enhance the security of hardware specifications. | [151,157]
| Presenting empirical evidence of LLMs effectively fixing hardware security bugs, though effectiveness varies with bug complexity. | [152,156]
Innovation and Impact | Ref. [159] is unique in addressing scalability issues and applying LLMs to large-scale designs, setting a new standard for hardware security verification frameworks. | [159]
| Pioneering the integration of LLMs for comprehensive security analysis and policy enforcement in SoC designs. | [150,154]
| Refs. [152,156] showcase innovative methods for automating hardware bug fixing, while ref. [151] proposes integrating formal methods to avoid the semantic errors associated with LLM-generated specifications. | [151,152,156]
| Raising critical concerns about the potential misuse of LLMs and suggesting countermeasures. | [153,162]
Future Directions | Refining LLM integration and expanding applicability to larger designs and real-time scenarios. | [150,158,159]
| Improving the robustness of LLMs and expanding their applicability to more complex scenarios. | [152,156]
| Continuing to advocate the integration of formal verification techniques to mitigate the risks posed by LLM-generated code. | [151]
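
To make the assertion-oriented approaches of refs. [151,157] concrete, the sketch below shows the kind of security property such flows aim to produce. It is illustrative only and not taken from the cited works; the module, the signal names (auth_ok, dbg_unlock), and the policy itself are assumptions.

```systemverilog
// Hypothetical security assertion of the kind LLM-assisted flows generate:
// the debug port must never unlock without successful authentication.
// Signal names and the policy are illustrative, not from the cited papers.
module dbg_lock_checker (
    input logic clk,
    input logic rst_n,
    input logic auth_ok,    // authentication completed successfully
    input logic dbg_unlock  // debug interface unlocked this cycle
);
    a_no_unauth_unlock: assert property (
        @(posedge clk) disable iff (!rst_n)
        dbg_unlock |-> auth_ok
    ) else $error("Security violation: debug unlocked without authentication");
endmodule
```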
Aspect | ChatGPT | Copilot | Gemini |
---|---|---|---|
Understanding the Problem | Clearly identifies the issue of WIDTH == 1 leading to a negative index. | Identifies the problem, but the explanation lacks depth. | Identifies the problem with WIDTH == 1, but the explanation is minimal.
Error Identification (WIDTH == 1) | Explicitly mentions invalid indexing and the need for conditional handling. | Acknowledges the issue but does not explore the root cause fully. | Mentions the issue but does not go into technical depth. |
Proposed Solution Clarity | Proposes a clear solution with conditional logic for WIDTH == 1. | Provides a basic solution but lacks clarity in handling edge cases. | Solution is concise but not detailed for all edge cases. |
Edge Case Handling | Handles edge cases robustly, ensuring the design works across all valid WIDTH values. | Edge cases are not fully addressed, particularly WIDTH == 1. | Edge cases, especially WIDTH == 1, are not fully handled. |
Code Readability and Comments | Code is structured and commented for readability and understanding. | Code is readable but lacks detailed comments or structure for complex scenarios. | Code is concise and readable but lacks comprehensive commenting. |
Adherence to Problem Statement | Adheres strictly to the problem requirements and provides detailed reasoning. | Partially adheres to the problem requirements but misses critical edge handling. | Mostly adheres to the problem statement but misses finer details. |
Flexibility of Solution | Flexible solution adaptable to other scenarios with minimal modification. | Solution is somewhat rigid and less adaptable to broader cases. | Limited flexibility due to lack of detailed handling for edge scenarios. |
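
The design under debug is not reproduced here; the sketch below is a minimal reconstruction of the failure mode the table describes: a parameterized shift register whose part-select q[WIDTH-2:0] degenerates to a negative index when WIDTH == 1, together with the generate-based conditional handling that the stronger responses propose.

```systemverilog
// Minimal sketch (assumed DUT, not the exact code given to the tools) of
// the WIDTH == 1 bug: {q[WIDTH-2:0], d} implies the part-select q[-1:0].
module shift_reg #(
    parameter int WIDTH = 8
) (
    input  logic             clk,
    input  logic             rst_n,
    input  logic             d,
    output logic [WIDTH-1:0] q
);
    generate
        if (WIDTH == 1) begin : g_single
            // Degenerate case: a 1-bit shift register is just a flip-flop.
            always_ff @(posedge clk or negedge rst_n)
                if (!rst_n) q <= '0;
                else        q <= d;
        end else begin : g_multi
            always_ff @(posedge clk or negedge rst_n)
                if (!rst_n) q <= '0;
                else        q <= {q[WIDTH-2:0], d}; // safe: WIDTH >= 2 here
        end
    endgenerate
endmodule
```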
Parameter | Approach | References
---|---|---
Objective and Focus | Focusing on general HDL debugging, aiming to automate the identification and correction of syntax and semantic errors in HDL code. | [163]
| Targeting hardware debugging with a specific emphasis on security-related issues, leveraging a domain-specific LLM trained on hardware security data. | [164]
Approach and Methodology | Using a general-purpose LLM adapted for HDL debugging, with modules for parsing code, generating suggestions, and integrating user feedback. | [163]
| Employing a specialized LLM trained specifically on hardware security datasets, providing targeted debugging assistance for security vulnerabilities. | [164]
Evaluation and Results | Showing effectiveness in identifying and correcting a wide range of common HDL errors, demonstrating significant improvements in debugging efficiency. | [163]
| Demonstrating superior performance in detecting and resolving security-related issues in hardware designs compared with general-purpose LLMs, highlighting its accuracy and relevance in security contexts. | [164]
Innovation and Impact | Integrating LLMs into the general HDL debugging process, reducing the manual effort and expertise required for traditional debugging. | [163]
| Focusing on security-specific hardware debugging, addressing the more complex and critical aspect of hardware design vulnerabilities. | [164]
Future Directions | Expanding the system’s knowledge base and incorporating advanced machine learning techniques to handle more complex debugging scenarios. | [163]
| Enhancing the model’s performance by expanding the training dataset and refining its understanding of complex security scenarios. | [164]
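
As a concrete instance of the "syntax and semantic errors" targeted by ref. [163], the fragment below shows a classic semantic bug that such debuggers are expected to flag. The example is ours, not drawn from the cited paper.

```systemverilog
// Illustrative semantic bug (not from ref. [163]): blocking assignments
// in clocked logic collapse an intended two-stage pipeline, because
// stage2 reads the freshly updated stage1 in the same evaluation:
//     stage1 = data_in;
//     stage2 = stage1;   // stage2 == data_in: pipeline depth lost
module pipe2 (
    input  logic       clk,
    input  logic [7:0] data_in,
    output logic [7:0] stage2
);
    logic [7:0] stage1;

    // Fix: nonblocking '<=' preserves the one-cycle delay per stage.
    always_ff @(posedge clk) begin
        stage1 <= data_in;
        stage2 <= stage1;
    end
endmodule
```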
Aspect | ChatGPT | Copilot | Gemini |
---|---|---|---|
UVM Environment Completeness | Comprehensive and includes all necessary UVM components. | Includes UVM components but lacks some details in implementation. | Provides an overview of UVM components but lacks full implementation. |
Components Implementation (Driver, Monitor, Scoreboard, etc.) | Implements driver, monitor, scoreboard, sequencer, and testbench clearly. | Implements basic UVM components but misses detailed scoreboard and checks. | Mentions UVM components, but implementation is incomplete. |
Scalability and Reusability | Highly reusable and scalable for different shift register configurations. | Moderately reusable but requires additional customization for scaling. | Limited scalability due to incomplete or generalized implementation. |
Code Structure and Readability | Code is well-structured, clear, and includes comments. | Readable code but lacks detailed comments and structure for clarity. | Readable but lacks detailed comments and structural clarity. |
Explanation of UVM Flow | Provides a detailed explanation of the UVM flow and components. | Basic explanation of UVM flow, not very detailed. | Provides a brief overview of UVM flow, not comprehensive. |
Test Case Coverage | Covers a wide range of test cases, including edge cases like WIDTH == 1. | Covers basic test cases but lacks focus on edge cases. | Covers basic functionality but misses detailed test case scenarios. |
Error Handling and Debugging Assistance | Excellent handling of errors and debugging guidance provided. | Minimal error handling and limited debugging assistance. | Minimal focus on error handling and debugging. |
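
For readers unfamiliar with the components the table scores, the skeleton below sketches the smallest pieces of such an environment: a sequence item and a driver for the shift-register DUT. All names (shift_item, shift_if, the signal set) are assumptions for illustration; the complete generated environments also include a monitor, scoreboard, agent, and test.

```systemverilog
// Minimal UVM skeleton (names hypothetical; a full environment adds a
// monitor, scoreboard, agent, and test on top of these pieces).
import uvm_pkg::*;
`include "uvm_macros.svh"

class shift_item extends uvm_sequence_item;
    rand bit din;                       // one stimulus bit per transaction
    `uvm_object_utils(shift_item)
    function new(string name = "shift_item");
        super.new(name);
    endfunction
endclass

class shift_driver extends uvm_driver #(shift_item);
    `uvm_component_utils(shift_driver)
    virtual shift_if vif;               // assumed DUT interface (clk, d)
    function new(string name, uvm_component parent);
        super.new(name, parent);
    endfunction
    task run_phase(uvm_phase phase);
        forever begin
            seq_item_port.get_next_item(req);
            vif.d <= req.din;           // drive one bit per clock edge
            @(posedge vif.clk);
            seq_item_port.item_done();
        end
    endtask
endclass
```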
Parameter | Approach | References
---|---|---
Objective and Focus | Refs. [165,169,172] focus on generating verification assertions, but ref. [165] uses multiple LLMs for better accuracy. | [165,169,172]
| Focusing on enhancing verification through formal methods and machine learning, respectively. | [166,167]
| Focusing on the generation and evaluation of Verilog code and on domain-adapted LLMs for chip design. | [168,170]
| Focusing on generating hardware test stimuli, providing a distinct angle on improving the verification process. | [171,173]
Approach and Methodology | Refs. [165,172] use LLMs to interpret design documents and generate assertions, but ref. [165] emphasizes a multi-LLM approach. | [165,172]
| Automating the generation of properties for formal verification. | [166]
| Using ML techniques rather than purely LLMs to optimize the verification process. | [167]
| Fine-tuning and benchmarking LLMs for specific tasks related to Verilog and chip design. | [168,170]
| Utilizing LLMs to generate test stimuli based on hardware design specifications. | [171]
Evaluation and Results | Improving accuracy and efficiency in assertion generation. | [165,172]
| Enhancing error detection and reducing verification time. | [166]
| Improving verification coverage and saving time using ML. | [167]
| Highlighting the strengths and weaknesses of different LLMs in Verilog code generation. | [168]
| Showing the benefits of domain adaptation in LLMs for chip design tasks. | [170]
| Providing evidence of effective test case generation, improving coverage, and identifying design issues. | [171]
Innovation and Impact | Using multiple LLMs for assertion generation is innovative in its multi-model approach. | [165]
| Integrating LLMs into the formal verification process, traditionally a manual task. | [166]
| Providing an open-source solution that encourages community development. | [167]
| Offering a comprehensive benchmarking framework for evaluating LLM performance in Verilog code generation. | [168]
| Emphasizing domain adaptation, showing significant performance improvements in chip design tasks. | [170]
| Focusing on automating different aspects of the verification process, enhancing efficiency and effectiveness. | [171,172]
Future Directions | Refining LLM training datasets, integrating frameworks with existing tools, and enhancing model architectures. | [165,166,167,168,170,172]
| Improving the understanding of complex hardware designs and developing further adaptation techniques. | [165,170,172]
| Highlighting the need for more sophisticated ML and LLM models to handle complex verification tasks. | [166,167]
| Emphasizing continued benchmarking and adaptation to specific hardware design requirements. | [168,170]
| Integrating more advanced LLMs and expanding test generation capabilities within verification frameworks. | [171]
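
As an illustration of what translating design documents into assertions produces in practice, consider a specification sentence such as "every request must be granted within three cycles". The property below is the kind of SVA output these flows aim for; it is our example, not taken from refs. [165,172].

```systemverilog
// Illustrative SVA (not from the cited works) for the spec sentence
// "every request must be granted within three cycles".
module req_gnt_checker (
    input logic clk,
    input logic rst_n,
    input logic req,
    input logic gnt
);
    property p_req_gnt;
        @(posedge clk) disable iff (!rst_n)
        req |-> ##[1:3] gnt;   // a grant must follow within 1 to 3 cycles
    endproperty

    a_req_gnt: assert property (p_req_gnt)
        else $error("req not granted within 3 cycles");
endmodule
```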
Domain | Task | LLM Use |
---|---|---|
HLS | Automating the translation of high-level code to RTL. | Optimizing for performance, area, and power.
HDL Generation | Creating RTL from specifications. | Automating Verilog, VHDL, or SystemVerilog generation. |
Component Integration | Managing interactions between hardware modules. | Automating interface generation and integration. |
Design Optimization | Improving performance, power, and area iteratively. | Suggesting optimal configurations and design alternatives. |
FSM Design | Designing FSMs to control hardware modules. | Generating and optimizing FSM transitions and states. |
Design Space Exploration | Exploring multiple configurations for performance, power, and area. | Suggesting optimal configurations and trade-offs. |
Power-Aware Design | Designing hardware with a focus on power efficiency. | Recommending power-saving techniques like clock gating. |
Timing Analysis | Ensuring hardware meets timing constraints. | Optimizing clock trees and fixing timing violations. |
Floorplanning | Optimizing the placement of components on a chip. | Assisting in module placement and layout optimization. |
Low-Power Design | Implementing low-power design techniques. | Suggesting balanced performance–power trade-offs. |
Hardware Accelerators | Designing specialized hardware accelerators. | Creating optimized architectures for AI hardware like GPUs and TPUs. |
Clock Tree Synthesis | Creating a balanced clock distribution network. | Optimizing clock tree generation for minimal skew. |
Chip Architecture Design | Defining the overall chip architecture and data flow. | Generating architectural suggestions and optimizing data flow. |
Physical Layout | Determining how components are placed and routed. | Suggesting efficient routing paths and placements. |
ASIC Design | Designing custom integrated circuits. | Automating design optimizations for ASICs. |
Fault-Tolerant Design | Creating hardware with built-in redundancy. | Assisting in the creation of error-correcting codes and self-test logic. |
Verification Plans | Creating verification plans for hardware. | Generating comprehensive verification plans and test cases. |
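
To ground one row of the table, the sketch below shows the kind of output the "FSM Design" task calls for: a small handshake controller of the sort an LLM might be prompted to generate. The states, signals, and protocol are illustrative assumptions, not a design from any cited work.

```systemverilog
// Minimal sketch of the "FSM Design" task: a three-state handshake
// controller (states and signals are hypothetical).
module handshake_fsm (
    input  logic clk,
    input  logic rst_n,
    input  logic start,
    input  logic done,
    output logic busy
);
    typedef enum logic [1:0] {IDLE, RUN, FINISH} state_t;
    state_t state, next;

    // State register with asynchronous active-low reset.
    always_ff @(posedge clk or negedge rst_n)
        if (!rst_n) state <= IDLE;
        else        state <= next;

    // Next-state logic: IDLE -> RUN on start, RUN -> FINISH on done.
    always_comb begin
        next = state;
        case (state)
            IDLE:    if (start) next = RUN;
            RUN:     if (done)  next = FINISH;
            FINISH:  next = IDLE;
            default: next = IDLE;
        endcase
    end

    assign busy = (state != IDLE);
endmodule
```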
Category | Gap | Impact |
---|---|---|
Integration with Formal Methods | LLMs lack integration with formal verification methods. | Risk to safety-critical designs. |
Lack of Contextual Understanding for Design Optimizations | LLMs struggle with trade-offs among power, performance, and area (PPA). | Multi-objective optimization challenges in hardware design.
Limited Exploration of Hardware Security Vulnerabilities | LLMs are not widely applied to hardware-specific security issues. | Hardware designs remain vulnerable to attacks and misconfigurations. |
Inadequate Training Data for Hardware-Specific Tasks | Lack of specialized datasets for hardware design. | LLMs perform poorly on tasks like digital circuit design or corner case verification. |
Challenges in Scaling LLMs for Large Hardware Designs | Scaling LLMs for complex hardware like SoCs is difficult. | Full-chip verification is not efficiently managed by current LLM systems. |
Underdeveloped Use in Analog and Mixed-Signal Design | Few applications of LLMs in AMS design. | AMS circuits are critical in many systems, and research in this area is lacking. |
Lack of Research on Hardware/Software Codesign | Limited research on LLMs for hardware/software optimization. | Co-optimization of hardware and software in SoCs remains unaddressed. |
Challenges in Post-Silicon Validation and Debugging | LLMs are not used in post-silicon validation. | Detecting issues after fabrication is not automated by LLM systems. |
Limited Explainability and Interpretability in Hardware Design | LLMs often lack clear explanations for their design choices. | Designers lack trust in LLM solutions. |
Lack of Efficient DSE | LLMs have not been fully exploited for design space exploration (DSE). | Optimizing design variants for power, area, and performance remains a challenge.
Minimal Use in Advanced Verification Techniques (UVM, SystemVerilog Assertions) | Research on UVM and SystemVerilog Assertions with LLMs is limited. | Verification of complex designs remains unoptimized.
Underdeveloped Role in Fault-Tolerant Hardware Design | Fault-tolerance design using LLMs is unexplored. | Missed opportunity to design reliable systems for industries like aerospace. |
Limited Optimization for FPGA Design Automation | LLMs are not widely applied to FPGA design processes like place-and-route. | FPGA design and prototyping are slower without LLM automation. |
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2024 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).