Department
Computer Science and Cybersecurity
Document Type
Poster
Abstract
The increasing complexity of FPGA designs, particularly those incorporating AI and machine learning accelerators, makes it extremely difficult to manually analyze and secure the millions of configurable logic blocks and routing channels. Adversaries can exploit this complexity to embed sophisticated hardware Trojans or timing-based side-channel vulnerabilities that evade conventional verification tools. Design for Security (DfS) is often treated as a post-design afterthought rather than an integrated part of the development process. A critical research gap therefore exists in applying AI and machine learning to automatically identify and mitigate security vulnerabilities during the design and synthesis of FPGA bitstreams. This study focuses on Predictive Vulnerability Analysis: training AI models on vast datasets of both secure and known-compromised FPGA designs. The models learn to predict potential vulnerabilities, such as timing side-channels and Trojan insertion points, early in the design phase, enabling the generation of hardened bitstreams without sacrificing performance or efficiency.
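As a rough illustration of this predictive workflow (not the authors' implementation), the Python sketch below trains a binary classifier on per-design features labeled secure versus known-compromised and flags high-risk designs before bitstream generation. The feature set, synthetic data, and review threshold are purely hypothetical assumptions.

# Minimal sketch of predictive vulnerability analysis: classify FPGA designs
# as secure (0) or known-compromised (1) from netlist-level features.
# All features and data here are illustrative, not the poster's dataset.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical per-design features: LUT utilization, worst negative slack,
# routing congestion, and count of rarely-toggled nets (a common Trojan cue).
X = rng.random((500, 4))
# Synthetic labels correlated with the last two features, for demonstration.
y = (X[:, 3] + 0.3 * X[:, 2] + 0.1 * rng.random(500) > 0.8).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
clf = GradientBoostingClassifier().fit(X_tr, y_tr)

# Flag designs whose predicted compromise probability exceeds a review
# threshold, so hardening can happen before bitstream generation.
risk = clf.predict_proba(X_te)[:, 1]
print("held-out accuracy:", clf.score(X_te, y_te))
print("designs flagged for review:", int((risk > 0.5).sum()))

In practice, the training corpus would pair feature vectors extracted from synthesized netlists with ground-truth labels from known-compromised designs, as described in the abstract.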
Publication Date
12-4-2025
Recommended Citation
Yang, C., & Mohamed Abdirahman, M. (2025, December 4). FPGAs: Reinforcement learning for attack generation [Poster presentation]. Student Research Conference Fall 2025, Saint Paul, MN, United States. https://metroworks.metrostate.edu/student-scholarship/29
Creative Commons License
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.