
AI Challenges for Next-Gen EDA

By David White, Cadence, 05.03.19
https://www.eetimes.com/author.asp?section_id=36&doc_id=1334643

In the second of a two-part series, an AI expert at Cadence discusses the challenges of applying an emerging model for machine learning in decision-support systems to EDA tools.

In an earlier article, I presented a common framework for adaptive decision processes discussed at a NATO science and technology board meeting I attended. Here, I will discuss common challenges shared across several industries that were presented there and that we continue to discuss and work on today.

I participated in a panel on the topic whose members represented logistics, operations, transportation, and surveillance as well as electronics design. The number of challenges we shared was striking.

In real-time continuous learning, unobservable factors may exist in the use model or environment, or observable factors may change over time. This uncertainty requires an ability to detect anomalies and adapt quickly.
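
As a minimal illustration of this idea (not anything from a Cadence tool), the sketch below tracks an observed metric with an exponentially weighted running estimate, flags values that fall far outside it, and adapts faster once a shift is suspected. The thresholds and adaptation factors are assumptions chosen for the example.

```python
# Minimal sketch: flag observations far from a running estimate and adapt
# faster when a genuine shift in the observed metric is suspected.
class DriftDetector:
    def __init__(self, alpha=0.05, z_thresh=4.0):
        self.alpha = alpha          # normal learning rate
        self.z_thresh = z_thresh    # anomaly threshold in standard deviations
        self.mean = None
        self.var = 1.0

    def update(self, x):
        if self.mean is None:
            self.mean = x
            return False
        z = abs(x - self.mean) / (self.var ** 0.5 + 1e-9)
        anomalous = z > self.z_thresh
        # Adapt faster on anomalous inputs so the estimate tracks a real
        # change instead of ignoring it.
        a = self.alpha * (5.0 if anomalous else 1.0)
        self.mean = (1 - a) * self.mean + a * x
        self.var = (1 - a) * self.var + a * (x - self.mean) ** 2
        return anomalous

detector = DriftDetector()
for value in [1.0, 1.1, 0.9, 1.0, 5.0, 5.1, 5.0]:
    print(value, detector.update(value))
```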

However, we must ensure that systems adapt in a stable way and that we know when things change. So, we need formal verification processes to ensure stable learning and robust reactions to unexpected inputs.

In addition, some machine-learning methods are black boxes that cannot tell you why they produced a particular answer or how confident they are in that answer. So, we need more focus on building systems where learned parameters can be interpreted and reviewed, producing a probabilistic confidence for each answer.
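
A simple, generic example of what "interpretable with a confidence" can mean is a linear classifier whose coefficients can be reviewed and which reports a probability for each prediction. The feature names below are hypothetical stand-ins for design metrics, not anything taken from the article.

```python
# Illustrative only: learned parameters that can be inspected, plus a
# probability reported alongside each prediction.
import numpy as np
from sklearn.linear_model import LogisticRegression

X = np.array([[0.2, 1.0], [0.8, 3.0], [0.3, 1.2], [0.9, 2.8]])  # e.g. density, wirelength
y = np.array([0, 1, 0, 1])                                      # e.g. timing violation or not

model = LogisticRegression().fit(X, y)
print("coefficients:", model.coef_)                  # reviewable learned parameters
print("confidence:", model.predict_proba([[0.7, 2.5]]))  # probability per class
```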

Finally, we agreed we need to support cold starts and sparse-data solutions. EDA tools, for example, have their greatest value when you do something new, such as designing at a new node where there's little data.
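
One common sparse-data tactic (assumed here for illustration, not described in the article) is to shrink a noisy estimate from the new node toward a prior learned from earlier nodes until enough local data accumulates.

```python
# Sketch: blend a prior with sparse local data; the prior dominates on a
# cold start and fades as local samples accumulate.
def blended_estimate(local_samples, prior_mean, prior_weight=10.0):
    n = len(local_samples)
    local_mean = sum(local_samples) / n if n else 0.0
    return (prior_weight * prior_mean + n * local_mean) / (prior_weight + n)

print(blended_estimate([], prior_mean=1.5))        # cold start: falls back to the prior
print(blended_estimate([2.0, 2.1, 1.9], 1.5))      # blends the prior with sparse data
```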

Many of these challenges are related to a class of sequential decision and optimization problems measured against an objective function and constraints. For these types of problems, verification is even more critical and more complex to complete.
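
To make the problem class concrete, here is a toy sequential decision over two design stages, scored by a single objective with a hard constraint. The options, costs, and slack values are made up, and the greedy choice shown is a simplification of what a real flow would need; it is exactly this kind of sequence that makes verification harder.

```python
# Toy sequential decision: at each stage, pick the feasible option that
# minimizes cumulative cost while a hard slack constraint stays satisfied.
stages = [
    {"fast": (3.0, 0.9), "dense": (5.0, 0.6)},   # option -> (wirelength, timing slack)
    {"fast": (2.0, 0.7), "dense": (4.0, 0.5)},
]
MIN_SLACK = 0.55

total_cost, plan, slack = 0.0, [], 1.0
for stage in stages:
    feasible = {k: v for k, v in stage.items() if min(slack, v[1]) >= MIN_SLACK}
    choice = min(feasible, key=lambda k: feasible[k][0])   # greedy on wirelength
    cost, stage_slack = stage[choice]
    total_cost += cost
    slack = min(slack, stage_slack)
    plan.append(choice)

print(plan, total_cost, slack)
```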

A user's intent may be expressed as constraints or encoded in the objective function. In other cases, the intent may have to be inferred based on observed behavior patterns in the context of scenarios.
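
The two explicit encodings can be shown side by side. In the sketch below (with made-up numbers), the same intent, keeping slack above a floor, appears once as a hard constraint and once as a penalty term folded into the objective.

```python
# Sketch: the same intent expressed as a hard constraint versus as a
# penalty term inside the objective function.
def objective(wirelength, slack, slack_floor=0.55, penalty=100.0):
    hard_ok = slack >= slack_floor                                     # intent as a constraint
    soft_cost = wirelength + penalty * max(0.0, slack_floor - slack)   # intent in the objective
    return hard_ok, soft_cost

print(objective(3.0, 0.70))   # feasible, no penalty added
print(objective(2.5, 0.40))   # infeasible, heavily penalized cost
```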



Sequential decision problems in an EDA physical design flow present challenges where the right path is dependent on the design intent expressed as an objective function.

(Source: Cadence)

It's worth noting that having a large quantity of data can breed a false sense of security. A high volume of data is not necessarily valuable if it doesn't sufficiently cover all possible situations that may be encountered.

A closed decision process will continually exploit the known mapped environment (i.e., the data you have) and may be ripe for instability in a changing world. State space exploration and the ability to extend the prior learned space are the key to stable adaptation. Hierarchical goals and hierarchies of cost functions, aligned to drive decisions along a high-level strategy, have been examined for reinforcement learning and other optimization approaches.
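
The exploration/exploitation balance can be illustrated with a standard epsilon-greedy bandit; the strategies and rewards below are invented for the example and are not specific to any EDA tool.

```python
# Minimal epsilon-greedy bandit: mostly exploit the best known option, but
# occasionally explore so untried options keep getting sampled.
import random

true_reward = {"strategy_a": 0.8, "strategy_b": 0.5, "strategy_c": 0.6}
estimates = {k: 0.0 for k in true_reward}
counts = {k: 0 for k in true_reward}

def choose(epsilon=0.1):
    if random.random() < epsilon:
        return random.choice(list(estimates))      # explore: extend the mapped space
    return max(estimates, key=estimates.get)       # exploit: best known option

for _ in range(2000):
    s = choose()
    reward = random.gauss(true_reward[s], 0.1)     # noisy observed outcome
    counts[s] += 1
    estimates[s] += (reward - estimates[s]) / counts[s]   # running-average update

print(estimates)   # estimates approach the true rewards
print(counts)      # most pulls go to the best strategy, a few to exploration
```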

Our envisioned decision process is similar to classic reinforcement learning approaches or the OODA (observe, orient, decide, act) loop. An example for an EDA physical design flow (below) shows a hierarchical data infrastructure combined with data-driven solutions that include analytics, machine and deep learning, optimization, and massive distributed processing.


Decisions can be made across multiple design steps with a multi-stage cost function.

(Source: Cadence)
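
As a generic stand-in for the classic reinforcement-learning loop mentioned above, the sketch below runs a tiny tabular Q-learning update over a three-stage decision sequence. The states, actions, and reward model are hypothetical placeholders, not a description of any actual flow.

```python
# Minimal tabular Q-learning over a short, multi-stage decision sequence.
import random

Q = {}                                   # (state, action) -> value
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.2
ACTIONS = ["spread_cells", "pack_cells"]

def step(state, action):
    # Placeholder environment: reward a placement-style action based on a
    # fake downstream outcome.
    reward = 1.0 if (state + (action == "spread_cells")) % 2 else -1.0
    return state + 1, reward

def choose(state):
    if random.random() < EPS:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q.get((state, a), 0.0))

for episode in range(200):
    state = 0
    for _ in range(3):                   # multi-stage decision sequence
        action = choose(state)
        nxt, reward = step(state, action)
        best_next = max(Q.get((nxt, a), 0.0) for a in ACTIONS)
        q = Q.get((state, action), 0.0)
        Q[(state, action)] = q + ALPHA * (reward + GAMMA * best_next - q)
        state = nxt

print(sorted(Q.items()))
```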

Analytics are used to help determine which observed data is important and should be learned, as well as how it is labeled and stored. Analytics also can drive the design of cost functions for positive and negative reinforcement during optimization. Machine and deep learning are used to capture complex design interactions and to model behavior and decisions.
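
One simple way analytics could feed cost-function design, sketched here with made-up logged runs, is to weight cost terms by how strongly each metric correlates with failed outcomes.

```python
# Hypothetical sketch: weight cost-function terms by their correlation with
# failure across logged runs.
runs = [
    {"congestion": 0.7, "wirelength": 1.2, "failed": 1},
    {"congestion": 0.2, "wirelength": 1.1, "failed": 0},
    {"congestion": 0.8, "wirelength": 1.0, "failed": 1},
    {"congestion": 0.3, "wirelength": 1.3, "failed": 0},
]

def correlation_with_failure(metric):
    xs = [r[metric] for r in runs]
    ys = [r["failed"] for r in runs]
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy + 1e-9)

weights = {m: abs(correlation_with_failure(m)) for m in ("congestion", "wirelength")}
print(weights)   # congestion tracks failure most closely, so it gets the larger weight
```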

Optimization is used to drive the solution to a desired condition or cost in the required time. The larger opportunity is to perform what-if exploration in the background across the design space to find better solutions and, in some cases, to perform cost-benefit analysis where the requirements and mission have some flexibility.
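
Background what-if exploration can be as simple as sampling candidate settings across the design space and keeping the best trade-off found so far; the parameters and cost model below are invented for illustration.

```python
# Sketch: random what-if sampling over two hypothetical knobs against a
# stand-in cost model (lower is better).
import random

def evaluate(density, effort):
    wirelength = 1.0 / (0.2 + density) + 0.1 * effort   # fake wirelength trend
    runtime = effort ** 2                                # fake runtime trend
    return wirelength + 0.3 * runtime

best = None
for _ in range(500):
    candidate = (random.uniform(0.1, 0.9), random.uniform(0.5, 3.0))
    cost = evaluate(*candidate)
    if best is None or cost < best[0]:
        best = (cost, candidate)

print("best cost %.3f at density=%.2f, effort=%.2f" % (best[0], *best[1]))
```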

Many of these algorithms require significant computational bandwidth, and distributed processing is necessary to speed the process. Optimization in particular can consume a lot of computation, so it relies on parallel processing: the more computational power available, the more exploration the system can do.
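
A minimal sketch of evaluating candidates in parallel with Python's standard library is below; a production flow would use a distributed scheduler rather than a local process pool, and the workload here is a placeholder.

```python
# Sketch: fan candidate evaluations out across local worker processes.
from multiprocessing import Pool

def evaluate(candidate):
    # Placeholder for an expensive optimization or analysis job.
    return candidate, sum(x * x for x in range(candidate * 100000)) % 97

if __name__ == "__main__":
    candidates = list(range(1, 9))
    with Pool(processes=4) as pool:
        results = pool.map(evaluate, candidates)   # more workers, more exploration
    print(results)
```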

Cloud computing may play an important role in setting standards for compute infrastructure given the broad array of internal IT capabilities across electronics companies. Our approach is hierarchical, starting with low-level decision sequences, building hierarchically to multiple sequences and then to a flow where optimization is conducted across design steps, feeding back routing quality to influence future placements. This approach may require significant computing resources that may not be available in an on-premises installation.
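
The cross-step feedback described above can be caricatured as a loop in which a routing quality score from one iteration adjusts a placement parameter in the next; the placement knob and quality model below are hypothetical.

```python
# Sketch: feed routing quality back to influence the next placement.
def place(spread):
    return {"spread": spread}                        # stand-in placement result

def route(placement):
    # Fake quality model: congestion eases as cells are spread, up to a point.
    congestion = max(0.0, 0.9 - placement["spread"]) + 0.1 * placement["spread"]
    return 1.0 - congestion                          # higher is better

spread = 0.1
for iteration in range(5):
    quality = route(place(spread))
    print("iter %d: spread=%.2f routing quality=%.2f" % (iteration, spread, quality))
    spread += 0.2 * (0.8 - quality)                  # routing quality adjusts future placement
```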

The next wave of machine and deep learning must address many of the fundamental needs for intelligent embedded systems required to adapt within their environment. These systems include applications such as ADAS, robotics, CAD, transportation, and logistics as well as distributed IoT apps.

The next wave also will have greater emphasis on explainable AI, especially where human factors and safety are involved. In addition, the next wave will require significant progress in verification processes and standards that can ensure robust adaptation. Progress in all these challenges will enable the realization of truly adaptive, commercial decision systems.

--David White is a senior R&D group director at Cadence where he manages product teams for the Virtuoso and OrbitIO tools and leads a technical task force on AI.
