EDA Finds a Common Framework for AI

By David White, Cadence, 04.29.19

There's a model emerging for designing commercial decision systems such as EDA tools with embedded machine learning, writes an AI expert at Cadence in the first installment of a two-part article.

Last year, I was asked to serve on a panel at the NATO Science and Technology Board meeting. The group included folks from the various NATO countries working on AI and machine learning, mostly for applications associated with military-related operations, logistics, piloting, and surveillance-related decision-making. Over the past six months, our discussions have continued as we see more commonality with the military and aerospace communities in the area of AI-based applications for complex decision systems.

My talk was focused on the use of AI and other technologies to improve electronic design automation, given that the cost, performance, and reliability of electronics are critical to the mission success of many systems and vehicles. There are also growing concerns about the cost of electronics development and processes related to the verification and support of next-generation AI chips, whether they use conventional or neuromorphic architectures.

Many presentations focused on how machine and deep learning may impact the future of the four-step loop for decision-making in uncertain or changing environments called OODA, which stands for observe, orient, decide, and act. It is based on military strategist John Boyd’s work and has become a standard template for military decision-making, as well as for how automated and human-in-the-loop decision systems are designed.

In more recent years, OODA has become a popular template for decision processes in the business world. In fact, it is not uncommon to see systems architecture descriptions for the Internet of Things or automated driving that combine elements of both OODA and reinforcement learning.


A placement step in EDA layout shown as a form of OODA loop.
(Image: Cadence)

OODA and classical reinforcement learning loops have many similarities. Both use the following steps:

- Observe: sense the current state of the environment.
- Orient: interpret those observations to estimate the situation (in reinforcement learning, the state).
- Decide: select an action or policy based on that estimate.
- Act: execute the action, changing the environment and producing new observations for the next iteration.

Both OODA and reinforcement learning have had significant influence on modern systems analysis. Given the interest in using machine and deep learning, the two worlds seem destined to merge where commonality exists. This will likely begin with augmented intelligence, in which machine and deep learning assist existing human-in-the-loop systems (including human-driven CAD systems), and evolve toward more automated decisions made using machine and deep learning.
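As a loose illustration of how a placement step maps onto an OODA-style loop, consider the toy Python sketch below. The cost model, cell count, and greedy acceptance rule are all invented for this example; they are not taken from any Cadence tool, which uses far more sophisticated objectives and optimizers.

```python
import random

random.seed(0)

# Toy "netlist": six cells on a 1-D row, chained by five two-pin nets.
positions = [random.uniform(0.0, 10.0) for _ in range(6)]
nets = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5)]

def wirelength(pos):
    """Observe: measure the current quality of the layout."""
    return sum(abs(pos[a] - pos[b]) for a, b in nets)

def propose_move(pos):
    """Orient: form a candidate perturbation of one randomly chosen cell."""
    candidate = list(pos)
    i = random.randrange(len(candidate))
    candidate[i] += random.uniform(-1.0, 1.0)
    return candidate

initial_cost = wirelength(positions)
for step in range(200):
    current = wirelength(positions)       # observe
    candidate = propose_move(positions)   # orient
    if wirelength(candidate) < current:   # decide: greedy accept
        positions = candidate             # act: commit the move

print(f"wirelength: {initial_cost:.2f} -> {wirelength(positions):.2f}")
```

A reinforcement learning agent would replace the hand-written greedy rule in the "decide" step with a learned policy, but the surrounding loop structure stays the same, which is exactly the commonality the two communities are converging on.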

This approach allows machine and deep learning to work within existing software flows to gain greater user acceptance before introducing more automation. The evolution probably will start with automation of low-level decisions and continue hierarchically, moving from tactical to strategic levels.

Different applications may have different performance requirements with regard to latency, throughput, and scale, as well as the desired degree of autonomy. The problems are similar enough that different communities can learn and benefit from one another, but no single approach will fit every application.

In chip design, one of the things that keeps us up at night is the lack of attention and progress in verifying the results of AI techniques in real products. As AI hardware and software are introduced into existing and new systems and vehicles, how do we verify that these systems will adapt in expected, robust ways and do no harm?

This is one major concern I will explore in a follow-up article later this week.

--David White is a senior R&D group director at Cadence where he manages product teams for the Virtuoso and OrbitIO tools and leads a technical task force on AI.
