Multiple AI agents in semiconductor workflows can speed up design tasks, but they also reduce transparency and raise security concerns.

The introduction of agent-based artificial intelligence into semiconductor design flows is prompting concerns about control and security. Multiple AI agents can operate in parallel or interact across systems, raising the risk of unintended behaviour and opaque decision-making.
Hardware security remains a critical layer. Reliable operation of AI agents requires trusted, uncompromised hardware, as system-level manipulation can influence AI behaviour. While transparency tools exist for model developers, they are not widely available to design engineers, making output validation essential.
AI is already used in design tools through targeted machine learning models embedded in EDA workflows. These support tasks such as verification, simulation, and optimisation for advanced devices including multi-die assemblies and sub-2nm SoCs.
Agentic AI extends this capability by distributing tasks across different computing environments and allowing variable levels of autonomy.
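As a rough illustration of what "variable levels of autonomy" can mean in practice, the following Python sketch shows one way a task could be routed to a compute environment while an autonomy setting gates whether it runs unattended. The class names, environment labels, and approval hook are hypothetical, not taken from any EDA vendor's API.

```python
from dataclasses import dataclass
from enum import Enum

class Autonomy(Enum):
    SUGGEST_ONLY = 1   # agent proposes a change, a human applies it
    APPROVE_EACH = 2   # agent acts, but each step needs sign-off
    AUTONOMOUS = 3     # agent acts within its boundary without review

@dataclass
class AgentTask:
    name: str
    target_env: str    # e.g. "on-prem-farm", "cloud-batch", "workstation"
    autonomy: Autonomy

def dispatch(task: AgentTask, human_approves) -> str:
    """Route a task to its compute environment, gating execution on the
    configured autonomy level (illustrative only)."""
    if task.autonomy is Autonomy.SUGGEST_ONLY:
        return f"Proposal for '{task.name}' sent to engineer; nothing executed."
    if task.autonomy is Autonomy.APPROVE_EACH and not human_approves(task):
        return f"'{task.name}' blocked pending approval."
    return f"'{task.name}' running on {task.target_env}."

# Example: a regression-triage task allowed to run unattended on the farm.
print(dispatch(AgentTask("triage_failures", "on-prem-farm", Autonomy.AUTONOMOUS),
               human_approves=lambda task: False))
```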
The complexity arises from limited visibility into how agents reach conclusions. Training data may contain bias or embedded code, and cooperating agents can create their own communication strategies. In some experimental cases, agents have demonstrated goal-oriented behaviour that diverged from intended constraints.
To limit risk, current implementations confine agents within fixed operational boundaries. Access permissions are matched to those of the human user, preventing an agent from reaching repositories or design files outside authorised scope. EDA vendors are maintaining self-contained AI modules, using retrieval-augmented generation or constrained reasoning to ensure predictable outputs.
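A minimal sketch of how such a boundary might be enforced is shown below, assuming the agent's allow-list is derived from the human user's existing permissions. The class and directory paths are hypothetical and stand in for whatever access-control layer a given tool chain actually uses.

```python
from pathlib import Path

class ScopedAgentContext:
    """Illustrative guard that limits an agent's file access to the
    repositories its human operator is already authorised to read."""

    def __init__(self, user_allowed_roots: list[Path]):
        # The allow-list comes from the user's permissions,
        # never from the agent's own requests.
        self.allowed_roots = [p.resolve() for p in user_allowed_roots]

    def open_design_file(self, requested: Path) -> str:
        resolved = requested.resolve()
        if not any(resolved.is_relative_to(root) for root in self.allowed_roots):
            # An out-of-scope request is refused rather than escalated.
            raise PermissionError(f"Agent denied access to {resolved}")
        return resolved.read_text()

# Usage: the agent may read files under the user's own project, nothing else.
ctx = ScopedAgentContext([Path("/projects/alice/soc_top")])
# ctx.open_design_file(Path("/projects/bob/secret_block/rtl.v"))  # raises PermissionError
```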