EDA is pushing AI beyond consumer LLMs as Siemens deploys domain-adapted models and multi-agent systems to cut design cycles and ensure verifiability.
Isn’t it funny how the very models driving demand for new silicon are outpacing the cycles of the chip industry itself?
Models double in size and complexity every quarter, yet new architectures still take twelve to eighteen months to design, verify, and tape out, and that gap is widening. Unless it closes, the promise of AI hardware hits a hard wall.
This was the real-world driver Siemens VP and GM of Digital Industries Ankur Gupta led with in his talk at the AI Infra Summit last week. For Gupta, the bottleneck isn’t GPUs or fabs. It’s electronic design automation, or EDA, the software stack that turns device physics into manufacturable silicon.
And it is here, in this unforgiving corner of the industry, that AI is being asked to do more than draft copy or generate code. It is being asked to co-design the chips that make AI possible.
“Would you base the design of your chip on a consumer-grade LLM that’s giving you some scripts or flows on how to design your processor or memory and then take it out with that $300 million cost? You would not,” Gupta said.
The divide between consumer AI and industrial AI stops being academic when one wrong answer can burn a hundred million dollars.
Consumer systems, even the most powerful of today’s LLMs, are brittle, opaque, and prone to hallucinations. That makes them fine for experimentation, but deadly for jobs where a single mistake compounds into cascading failure. Industrial AI, as Gupta described it, has to be verifiable, auditable, and robust. It has to carry the weight of domain knowledge and not just mimic the surface patterns of language.
“We have to consider very seriously verifiability, usability, generality, robustness of these solutions that we deliver to you using AI in EDA,” he told us. This is a checklist for survival, he says, and together those criteria form a blueprint not only for chip design but for every enterprise system that wants to move AI from toy to tool.
The need is pressing because the talent pipeline is collapsing just as demand is spiking. Thirty years ago, he says, electrical engineering and computer science majors were split about evenly between hardware and software. Today, the hardware share has dwindled toward four percent, representing a coming gap of 76,000 engineers.
Without a way to transfer knowledge faster, to compress the years it takes to onboard new designers, the whole ecosystem risks slowing to a crawl. In Gupta’s view, AI is the only plausible way to fill the widening gulf between the pace of AI workloads and the human capacity to build the chips that serve them.
That is why Siemens has taken the route of building domain-adapted, pre-trained models specifically for EDA.
Quizzing a general-purpose LLM about Verilog or GDS won’t get you far. The model needs to know the language of design flows, the nuance of RTL, the quirks of toolchains. “We took it upon us to do domain-adaptive, pre-trained LLM models,” Gupta said, “and you will see how we are delivering that to the market.” The work goes hand in hand with partnerships. He pointed to Nvidia’s Blackwell platform as both the driver of demand (20 petaflops in eight years) and the foundation for the microservices Siemens now integrates into its own EDA AI system. In Gupta’s view, keeping pace with Nvidia’s hardware roadmap requires remaking the design stack with AI inside it.
Siemens is doing much more than adding gen AI around the edges of its tools. As Gupta explains, assistants and reasoners are part of the story, helping engineers navigate documentation, debug error logs, and cut through the noise of design iterations.
More important to this broader AI strategy is the emergence of agents. “If there is a task which is repetitive but can be algorithmically defined, that’s a good AI agent use case,” Gupta said. In Aprisa, Siemens’ RTL-to-GDS platform, reinforcement learning already drives placement engines, generative models broaden the search space of design exploration, and agents take on the rote but critical loops that used to consume weeks of human effort.
The horizon, Gupta explained, is not single agents working in isolation but multiple agents negotiating with each other to create an agentic flow.
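Gupta didn’t detail how such a negotiation is implemented, but the idea can be sketched in miniature: two agents, each responsible for one constraint, repeatedly adjust a shared design state until neither wants to change it. The agents, metrics, and update rules below are invented for illustration and do not reflect how Siemens’ Aprisa platform actually works.

```python
# Toy "agentic flow": two hypothetical agents negotiate over a shared
# design state until they reach a fixed point both are happy with.
# All models here are invented stand-ins, not real EDA behavior.
from dataclasses import dataclass

POWER_BUDGET = 1.0  # hypothetical power ceiling


@dataclass
class DesignState:
    clock_ns: float  # clock period: longer = easier timing, slower chip
    vdd: float       # supply voltage: higher = faster, but more power


def timing_slack(state: DesignState) -> float:
    # Invented model: longer clock and higher voltage both improve slack.
    return state.clock_ns * state.vdd - 1.0


def power(state: DesignState) -> float:
    # Invented model: power grows with voltage squared, falls with clock period.
    return state.vdd ** 2 / state.clock_ns


def timing_agent(state: DesignState) -> DesignState:
    # If timing fails, relax the clock slightly; otherwise leave the state alone.
    if timing_slack(state) < 0:
        return DesignState(state.clock_ns * 1.05, state.vdd)
    return state


def power_agent(state: DesignState) -> DesignState:
    # If the power budget is blown, shave the supply voltage a little.
    if power(state) > POWER_BUDGET:
        return DesignState(state.clock_ns, state.vdd * 0.98)
    return state


def agentic_flow(state: DesignState, max_iters: int = 50) -> DesignState:
    # Alternate the agents until neither proposes a change (a fixed point).
    for _ in range(max_iters):
        new_state = power_agent(timing_agent(state))
        if new_state == state:
            return state
        state = new_state
    return state
```

The rote loop a human would run by hand, tweak one knob, rerun, check the report, is exactly the “repetitive but algorithmically defined” task Gupta identifies as an agent’s sweet spot.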
It’s hard to argue with the numbers. Gupta reports tenfold productivity improvements, ten percent better power-performance-area, and, perhaps most surprisingly, less compute consumed in the process.
“We are at the stage of offering AI agents for multiple tasks,” Gupta said. “Everything put together, this offers 10× productivity with lesser compute and overall better results, 10% better PPA results.”
The logic behind those numbers is straightforward, he explains. Human designers rely on recipes: the settings that worked on the last generation, the defaults that feel safe. An AI system is free of habit. It can explore combinations no one has tried, run them at scale, and converge on results that push past what experts expect.
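That habit-free exploration is, at its simplest, a search over tool settings. The sketch below uses plain random search with an invented cost function standing in for a real power-performance-area evaluation; the knobs, defaults, and scores are hypothetical, not any actual EDA tool’s parameters.

```python
# Toy design-space exploration: random search over hypothetical tool
# knobs, scored by an invented PPA-style cost model (lower is better).
import random

# Hypothetical settings an engineer might normally leave at "safe" defaults.
SEARCH_SPACE = {
    "effort":       ["low", "medium", "high"],
    "utilization":  [0.6, 0.7, 0.8, 0.9],
    "aspect_ratio": [0.5, 1.0, 2.0],
}

EXPERT_DEFAULT = {"effort": "medium", "utilization": 0.7, "aspect_ratio": 1.0}


def ppa_cost(cfg: dict) -> float:
    # Invented stand-in for a real power-performance-area evaluation.
    effort_penalty = {"low": 1.2, "medium": 1.0, "high": 0.9}[cfg["effort"]]
    layout_penalty = abs(cfg["utilization"] - 0.8) + 0.1 * abs(cfg["aspect_ratio"] - 1.0)
    return effort_penalty + layout_penalty


def explore(n_trials: int = 200, seed: int = 0) -> dict:
    # Start from the expert recipe, then sample the space without bias
    # toward it, keeping whichever configuration scores best.
    rng = random.Random(seed)
    best = EXPERT_DEFAULT
    for _ in range(n_trials):
        cfg = {knob: rng.choice(values) for knob, values in SEARCH_SPACE.items()}
        if ppa_cost(cfg) < ppa_cost(best):
            best = cfg
    return best
```

Production systems would use reinforcement learning or generative models rather than blind sampling, as the article notes for Aprisa, but the principle is the same: the search is not anchored to last generation’s recipe.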
What comes back, he says, is not a sealed black box but a transparent record engineers can examine, question, and refine.
Gupta also showed what that looks like at the user level. A junior engineer typed into the interface: “How do I run that?” The system parsed the documentation, built the flow, executed it, and returned the results.
“Can you show me the overall congestion?” More charts, more suggestions.
What once meant weeks of reading manuals and bothering colleagues now takes minutes, he says.
If AI agents can co-design chips, the most unforgiving technical workflow in the entire technology stack, they can extend into other domains where the tolerance for error is equally low. Medicine, finance, aerospace, energy: all have the same demand for verifiability, security, and transparent reasoning.
“AI and human engineers should lead to massive productivity improvements,” he said toward the close of his talk. “And the 10× that I’m talking about, it’s not just a number. We actually measure it with our data, customers, and partners.”
It was a simple line, but it carried a larger truth. The AI world spends endless hours debating agents and speculating about futures. In EDA, the future is already here, and it works.
