This article is translated from "Advanced Logic Synthesis" by André Inácio Reis and Rolf Drechsler.
Electronic Design Automation is ripe for a paradigm change. But is it ready? The ubiquitous availability of ample compute power will allow for a different way for designers to interact with their design data and will allow for new optimization algorithms to be invented. In this chapter we will introduce the concept of warehouse-scale computing, its software stack, and how it will apply to EDA analysis and optimization.
In the early days of Electronic Design Automation, EDA1.0, separate applications were developed, each running on an individual workstation. Verification, synthesis, placement, and routing were separate tasks carried out by separate tools. The size of the designs that could be handled was limited by the available compute power and the scalability of the algorithms. But since Moore's law had not really taken off, these tools were sufficient for the design sizes of the time. The design flow was carried out by moving design data from tool to tool, with little interaction between the individual steps.
Due to Moore's law and technology scaling, designs became larger and larger. At the same time, it became more difficult to predict the effect of decisions made early in the design flow on the final result. Wire delay started to play an increasingly large role in the overall delay of a path, because wire delays have not scaled as well as gate delays over the last several technology generations. In addition, due to the larger design sizes, a certain percentage of the wires became longer and longer. Logic synthesis therefore needed an increasingly accurate understanding of interconnect effects to make sensible decisions. To make reasonably accurate timing predictions, placement and routing need to be fairly complete. In the era of EDA2.0, this was handled by integrating the individual tools such as synthesis, placement, and routing, as well as the analysis tools (such as timing), into an integrated tool suite. These integrated tool suites would typically run on larger servers. Scaling to even larger design sizes was obtained by multi-threading many of the core algorithms such that they could take advantage of the many cores and threads in these larger servers.
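To illustrate the kind of multi-threading mentioned above, here is a minimal sketch of levelized arrival-time propagation over a toy combinational netlist: gates within one topological level have no dependencies on each other, so each level can be fanned out across a thread pool. The netlist, gate names, and delays are all invented for illustration; a production timing engine is of course far more elaborate.

```python
# Minimal sketch: multi-threading one core analysis step (arrival-time
# propagation). The netlist below is a toy example, not real design data.
from concurrent.futures import ThreadPoolExecutor

# netlist: gate -> (list of fanin gates, gate delay)
netlist = {
    "a": ([], 0.0), "b": ([], 0.0),   # primary inputs
    "g1": (["a", "b"], 1.0),
    "g2": (["a"], 2.0),
    "g3": (["g1", "g2"], 1.5),        # output gate
}

def levelize(netlist):
    """Group gates into topological levels so that each level depends
    only on earlier levels and can be processed in parallel."""
    level = {}
    def depth(g):
        if g not in level:
            fanins, _ = netlist[g]
            level[g] = 1 + max((depth(f) for f in fanins), default=-1)
        return level[g]
    for g in netlist:
        depth(g)
    levels = {}
    for g, l in level.items():
        levels.setdefault(l, []).append(g)
    return [levels[l] for l in sorted(levels)]

def propagate(netlist, workers=4):
    """Compute arrival times level by level, gates within a level in
    parallel across the thread pool."""
    arrival = {}
    with ThreadPoolExecutor(max_workers=workers) as pool:
        for gates in levelize(netlist):
            def visit(g):
                fanins, delay = netlist[g]
                return g, delay + max((arrival[f] for f in fanins), default=0.0)
            # gates within one level are independent -> safe to fan out
            for g, t in pool.map(visit, gates):
                arrival[g] = t
    return arrival

print(propagate(netlist))
```

The key design point is that the parallelism comes from the structure of the problem (independent gates within a level), which is how many EDA core algorithms were made to exploit multi-core servers without changing their results.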
In the last few generations, technology progress has slowed down from a power/performance perspective. Getting the utmost power and performance out of the smallest possible design has become more crucial to making a design in a new technology worthwhile. However, technology scaling has allowed design sizes to continue to grow. At the same time, design rule complexity has continued to go up, and people are advocating that handling complex rules needs to become an integral part of the design flow by providing in-design checking tools. As a result, the design work and the possibilities for optimization have gone up tremendously. The amount of data that needs to be dealt with in an end-to-end design flow has exploded.
Design teams have increased in size. Despite that, it is impossible to complete a large design on an economically feasible schedule without lots of reuse. This has helped fill up the chips with homogeneous structures. But how many homogeneous cores do we want to continue to put on the same chip? The drive for better power/performance on specific workloads advocates for a lot more heterogeneity on a chip with functions that target a specific workload. To deliver this larger variety of designs, an additional boost in designer productivity will be required. It is time that we look beyond the individual, albeit integrated tools, and start to optimize the iterative end-to-end design flows.
Many of the challenges for the future of EDA were outlined in a report on the 2009 NSF Workshop that focused on EDA's Past, Present, and Future. This report was published by Brayton and Cong in two parts in IEEE D&T. The second part of that paper outlines the key EDA challenges. Interestingly, it has only a few challenges printed in bold: intuitive design environments, simplified user interfaces, standardized interfaces, and scalable design methodologies, all leading to disciplined and predictable design. These challenges do not really drive the need for new EDA point tools. Instead, they all point to problems that need to be solved in the end-to-end design flows. They point to the improvement that is needed in the design environment through which designers interact with design tools, and to the scale of the problems that need to be solved.
These challenges have substantial overlap with the areas that Big Data and Analytics applications have focused on and made tremendous progress in. One can certainly argue that a Big Data application like Google Maps has a simple and intuitive user interface, has standard APIs to annotate data, and has been architected to be very scalable. It is therefore very pertinent to look at how these applications have been architected and what that means for EDA3.0 applications.
It is time for the next era in EDA, one that attacks these problems. EDA3.0 will deliver this next step up in productivity. In this era, EDA needs to provide designers with analysis tools that do not just analyze the results and produce reports, but tools that provide real insight. The size of the reports has already become overwhelming for most designers. Analysis tools need to provide data that gives designers insight into how to make their design better. We need to move from the era of analysis tools to analytics tools. These tools should take advantage of the compute power of large warehouse-scale clusters instead of individual servers, such that they can provide near real-time insight into the state of a design. At the same time, we want to harness the power of these large compute clusters to devise smarter optimization algorithms that explore a larger part of the design space.
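As a toy illustration of the difference between a report and analytics, the following sketch reduces a flat list of timing endpoints to per-block insight: the worst slack and the number of failing paths in each piece of the hierarchy. The report contents, hierarchy names, and slack values are all invented for illustration.

```python
# Minimal sketch: turning a flat timing report into per-block insight,
# instead of handing the designer the raw endpoint list. All data invented.
from collections import defaultdict

report = [  # (endpoint hierarchical name, slack in ns)
    ("core/alu/add_0", -0.12),
    ("core/alu/add_1", 0.03),
    ("core/fpu/mul_0", -0.40),
    ("uncore/noc/rtr_3", 0.25),
]

def summarize(report):
    """Aggregate endpoints at the second hierarchy level: worst slack
    and count of failing (negative-slack) paths per block."""
    summary = defaultdict(lambda: {"worst": float("inf"), "failing": 0})
    for endpoint, slack in report:
        block = "/".join(endpoint.split("/")[:2])
        s = summary[block]
        s["worst"] = min(s["worst"], slack)
        s["failing"] += slack < 0
    return dict(summary)

# worst blocks first, so the designer sees where to act next
for block, s in sorted(summarize(report).items(), key=lambda kv: kv[1]["worst"]):
    print(f"{block}: worst slack {s['worst']:+.2f} ns, {s['failing']} failing paths")
```

At warehouse scale, the same reduction would run as a distributed aggregation over millions of endpoints, but the shape of the computation (group, reduce, rank) is the same.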
What will need to happen to make EDA3.0 a reality? First, we will have to capitalize on the changing nature of IT. We need to learn from Big Data, Cognitive, and other data- and graph-parallel systems. This will allow us to create an integrated design flow on very large compute clusters. Next, we need to change the way designers interact with design data and allow them to get much better insight into the state of their design. They need to understand what needs to be done next to meet their constraints. The analytics tools need to provide this insight. Finally, a new class of optimization algorithms needs to be invented that delivers faster convergence, and therefore shorter designer turnaround time (TAT), in meeting the design objectives on increasingly larger designs.
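To make the graph-parallel idea concrete, here is a hedged sketch of a Pregel-style superstep loop that computes the logic depth of each node in a tiny netlist graph by message passing. In a warehouse-scale system, the vertices would be partitioned across machines and the supersteps synchronized between them; here the loop is simply simulated on one machine, and the graph is invented for illustration.

```python
# Minimal sketch of vertex-centric (Pregel-style) graph processing:
# each active vertex sends its depth + 1 to its fanout; a vertex that
# receives a larger depth updates itself and becomes active for the
# next superstep. Converges when no messages are in flight.
edges = {  # vertex -> fanout list (toy netlist graph, invented)
    "a": ["g1", "g2"],
    "b": ["g1"],
    "g1": ["g3"],
    "g2": ["g3"],
    "g3": [],
}

def pregel_depth(edges):
    depth = {v: 0 for v in edges}
    active = set(edges)            # superstep 0: every vertex is active
    while active:
        # message phase: active vertices write to their successors' inboxes
        inbox = {}
        for v in active:
            for succ in edges[v]:
                inbox.setdefault(succ, []).append(depth[v] + 1)
        # compute phase: vertices with a larger incoming depth update
        # and stay active for the next superstep
        active = set()
        for v, msgs in inbox.items():
            d = max(msgs)
            if d > depth[v]:
                depth[v] = d
                active.add(v)
    return depth

print(pregel_depth(edges))
```

The point of the formulation is that each vertex's update uses only local state plus incoming messages, which is what lets the computation be partitioned across a large cluster.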