Department
Computer Science and Cybersecurity
Document Type
Poster
Abstract
The complexity of modern FPGA designs has outpaced the scalability of manual security assessments by human experts, and existing adversarial AI models lack the hardware-specific generalization needed to transfer across different FPGA architectures. This research proposes an autonomous Reinforcement Learning (RL) framework integrated with the Model Context Protocol (MCP). Equipping a specialized AI agent with MCP capabilities gives it enhanced contextual awareness and direct tool access, enabling it to autonomously discover optimal paths to security compromises.
Publication Date
Spring 4-9-2026
Recommended Citation
Yang, C. & Mohamed Abdirahman, M. (2026, April 9). FPGAs: Reinforcement learning for attack generation [Poster presentation]. Student Research Conference Spring 2026, Saint Paul, MN, United States. https://metroworks.metrostate.edu/student-scholarship/33
Creative Commons License

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
Comments
Spring 2026: Student Research Conference
Best Visual Award
Excellence in Knowledge Sharing Award: Calvin Yang