The Concept of Electronic Design Automation: Short History of EDA
The First Generation of EDA
The advent of EDA came in the early 70s, some years after the invention of the integrated circuit by Jack Kilby, when chips with 20 to 100 transistors were state of the art. Popular ICs were multiple logic gates and flip-flops (e.g. the 7400 series), and operational amplifiers. Development of those circuits was hand work, and design was based upon empirical methods, simplified formulas, estimations, and assumptions. Layout was done manually, and polygons were cut out of Rubylith foils for mask making.
In the light of modern design flows, that process sounds primitive and time-consuming. The biggest design problem, however, was that the intrinsic non-linearity of the transistors did not allow one to accurately predict the behavior of the circuit.
With the increasing complexity of the chips it became more likely that first prototypes would not work as desired. Several prototype runs were needed to allow for learning and iterative refinement of the circuit (redesign) until an implementation had been found that met the specification. The computer program SPICE, developed at the University of California at Berkeley [2.7], laid the foundation for EDA. That program, after many revisions still in use today, can simulate electric networks even if they contain non-linear elements like transistors, and allows one to accurately predict either time or frequency behavior 1). Circuit optimization no longer had to rely on trial and error. Productivity received a boost because simulation on a computer is orders of magnitude faster and cheaper compared to the physical implementation of a prototype.
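The numerical core of such a circuit simulator can be illustrated with a small sketch. The snippet below is not SPICE itself, just an illustration in Python with invented component values: it uses Newton-Raphson iteration, the same principle a SPICE DC solver applies, to solve the non-linear node equation of a hypothetical resistor-diode circuit.

```python
import math

# Hypothetical circuit: voltage source Vs drives a diode through resistor R.
# Node equation: (Vs - V)/R - Is*(exp(V/Vt) - 1) = 0  -- non-linear in V.
Vs, R = 5.0, 1000.0      # 5 V source, 1 kOhm resistor (assumed values)
Is, Vt = 1e-12, 0.025    # diode saturation current and thermal voltage

def f(v):
    """Residual of the node equation; zero at the operating point."""
    return (Vs - v) / R - Is * (math.exp(v / Vt) - 1.0)

def df(v):
    """Derivative of the residual, needed for the Newton step."""
    return -1.0 / R - (Is / Vt) * math.exp(v / Vt)

v = 0.6                  # initial guess near a typical diode drop
for _ in range(50):      # Newton-Raphson iteration
    step = f(v) / df(v)
    v -= step
    if abs(step) < 1e-12:
        break

print(f"diode voltage = {v:.4f} V, current = {(Vs - v) / R * 1000:.3f} mA")
```

No closed-form solution exists for this equation; the iteration converges to the operating point in a handful of steps, which is exactly why numerical simulation replaced hand analysis.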
Computer Aided Design (CAD) arrived on the horizon once computers became more compact and affordable. Initially CAD targeted mechanical and construction engineering, but it became clear that those tools could be used for any kind of geometric design work. CAD systems substantially improved the productivity of layout design. Polygon data could now be entered into a computer via a mouse, and easily be modified and maintained. Once polygon capture is complete, a mechanical optical system (old fashioned, no longer used) or an electron beam writer transfers the geometric structures into a physical image (the masks).
Design errors have been a burden for chip design since the early days. Simulation allowed the prediction of the electrical behavior, but there was no proof that the manual transformation into the geometric domain was correct. Of course, manual checks were applied before the design data were released for production, but these checks became increasingly inefficient as complexity grew. More time had to be spent, and the chance of an error being missed in the check grew.
Layout verification tools were developed and solved that problem. Two kinds of layout verification tool exist today:
• Geometric checks (DRC: Design Rule Check). This is software that uncovers violations of geometric rules such as the minimum width of shapes or the minimum spacing between them.
• Layout extraction is the recognition of active and parasitic elements and their interconnects. This is a prerequisite for checking the match between the layout (physical design) and the symbolic/structural (schematic) design. The latter task is commonly referred to as the layout versus schematic (LVS) check. Extraction of parasitic elements such as wire capacitance is needed for estimating their impact on delays.
1) SPICE numerically solves the mesh and node equations of a network, which represent a system of non-linear differential equations with several unknowns.
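A minimum-width check of the DRC kind can be sketched in a few lines. The rectangles and the 2-micron rule below are invented for illustration; a real DRC operates on arbitrary polygons driven by a full technology rule deck.

```python
# Each shape is an axis-aligned rectangle (x1, y1, x2, y2) in microns (assumed data).
layout = [
    (0.0, 0.0, 10.0, 3.0),   # 3 um wide -- passes
    (12.0, 0.0, 13.5, 8.0),  # only 1.5 um wide -- violation
]

MIN_WIDTH = 2.0  # hypothetical design rule: minimum feature width in microns

def min_width_violations(shapes, min_width):
    """Return the shapes whose smaller side is below the minimum width."""
    bad = []
    for (x1, y1, x2, y2) in shapes:
        width = min(abs(x2 - x1), abs(y2 - y1))
        if width < min_width:
            bad.append((x1, y1, x2, y2))
    return bad

violations = min_width_violations(layout, MIN_WIDTH)
print(f"{len(violations)} DRC violation(s):", violations)
```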
To summarise, the first generation of EDA comprised tools for verification of the electric behavior (simulation) and tools for correlating the electrical and geometrical representations. It came along with a substantial reduction of time-consuming and costly redesigns. A layout editor was another member of the first EDA generation, valued as an efficient capture tool for mask data. Layout editors are still used today for the design of analog circuits and cell libraries.
The first generation of EDA did not contain a tool for the automation of an implementation step (hence the term EDA is not fully correct). Layout editors may offer some very limited automation capability, but this can only be applied to repetitive structures typical of memories.
Figure 2.7 shows the trace through the Y chart when using the tools of the first EDA generation. We walk down the behavioral axis and break the functional behavior down into Boolean equations. Then we create structure by crafting a transistor level netlist matching the Boolean equations. Circuit simulation (SPICE) will then solve the differential equations to verify that transformation. Once the transistor netlist has been found to be correct it will be translated into polygons. That translation is to be verified by DRC and LVS.
The Second Generation of EDA
In the early 80s semiconductor technology had advanced enough to allow for the integration of up to ten thousand gates. Layout creation by means of a layout editor and circuit simulation using SPICE started to consume intolerable amounts of time and cost. Additionally, computer requirements skyrocketed, in line with the amount of data. A new approach was needed.
Automatic Place & Route tools arrived on the horizon and rendered manual layout entry obsolete. Those tools create chip layout by instantiating basic cells and interconnecting their terminals.
Logic simulators were introduced, and they boosted the productivity of electrical and logic verification. In contrast to a circuit simulator, a signal is considered to have only three discrete states (0, 1, X = unknown) rather than being solved for its analog voltage level. Although not implied by the term ‘logic simulator’, delay calculation, or at least delay estimation, is performed to ensure that the design will meet the performance targets. For that purpose logic simulators incorporate a delay model that uses data obtained from the characterization of the cell library.
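The three-state signal algebra used by such simulators can be sketched as follows. The truth tables are the standard ones for 0/1/X propagation; the string encoding and function names are this sketch's own.

```python
# Three-valued logic: '0', '1', and 'X' (unknown), as in early logic simulators.
def and3(a, b):
    if a == '0' or b == '0':
        return '0'          # a controlling 0 forces the AND output regardless of X
    if a == '1' and b == '1':
        return '1'
    return 'X'              # any remaining combination involves an unknown

def or3(a, b):
    if a == '1' or b == '1':
        return '1'          # a controlling 1 forces the OR output
    if a == '0' and b == '0':
        return '0'
    return 'X'

def not3(a):
    return {'0': '1', '1': '0', 'X': 'X'}[a]

# X propagates only when no input is controlling:
print(and3('0', 'X'))   # 0 dominates AND, so the output is known
print(or3('X', '1'))    # 1 dominates OR, so the output is known
print(and3('1', 'X'))   # here the unknown propagates
```

Evaluating such tables is vastly cheaper than solving the network's differential equations, which is where the productivity gain over circuit simulation comes from.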
Initially circuits were captured in the form of a netlist – a tabular representation listing every occurrence of the basic cells, including all wires (nets) needed for interconnect. Netlists are efficient in terms of data storage, but they are not well suited for human understanding. Electrical engineers ‘think’ in schematics, which graphically show all interconnects between circuit elements. Schematics had been used for documentation purposes since the early days of electrical engineering. Once schematic editors became available they quickly became the state of the art for circuit capture. These tools allow for graphical entry, capture, and output (plot) of the circuit schematic. However, the graphics information is only needed for human understanding – it is redundant for the circuit itself, and is removed before design data are sent to other EDA tools. Typically EDA tools use standardized netlist formats (e.g. EDIF or Verilog) for the exchange of structural circuit information.
The second EDA generation allowed the electrical design work to move up to the gate level, i.e., no manual work at lower levels was needed any longer. The structural gate level schematic or netlist becomes the most detailed design description created manually and can be considered the reference database for all further computer aided (or automated) design work (see figure 2.8). The logic simulator is the tool for verifying this last manual transformation step; it allows for correlating the schematic or netlist with the Boolean equations and the timing behavior.
The Third Generation of EDA
The early eighties saw a few universities starting to work on algorithms for logic design automation. The goal was to read in, e.g., a register transfer design or Boolean equations and automatically perform all design steps down to the polygon level. The term silicon compiler was coined. The early experimental tools demonstrated nice examples, but failed completely in real life. Looking back, we can say that it is inappropriate to perform several transformation steps (as seen, e.g., in the Y chart) without human interaction. Every ASIC has custom requirements which mandate a high degree of customization in the design flow. Early silicon compilers simply did not provide that level of flexibility.
An important prerequisite for the later success of design automation was the standardization of hardware description languages, e.g. VHDL in 1987 by the IEEE and, later, Verilog. They had initially been intended for functional design documentation and cell level modeling at the RT level 1). Later they also gained acceptance for capturing the functionality of complex digital systems. Based upon these languages, logic synthesis tools were developed which allowed for the automatic translation of RTL descriptions into gate level netlists.
After early versions of logic synthesis tools had provided the world of engineering with limited success, the company Synopsys introduced their tool DesignCompiler. This tool performs logic synthesis and is closely tied to a companion tool for RTL synthesis called HDL Compiler. DesignCompiler and HDL Compiler can effectively perform the two tasks in a single step. This tool suite saw some early adopters, but it took several years until it became the de facto standard for RTL synthesis. The reasons for the slow acceptance were:
• The price tag for those tools was considerable;
• RTL synthesis revolutionized the design methodology. There was no place for the schematic diagrams widely used in those days, and engineers had to change their ‘way of thinking’;
• The quality of results of early synthesis runs did not meet that of manual designs.
The early weaknesses of that product were resolved, and the VHDL and Verilog languages were finally taught at universities. Soon young engineers ‘thinking’ RTL rather than schematics arrived on the work floor. In addition, by the early 90s ASICs had started to become so complex that the traditional design methodology began to break down.
Register transfer level synthesis automates the implementation steps below the RT level. The RTL description becomes the reference database (see Figure 2.9) and is typically expressed in VHDL or Verilog. Now simulation tools are needed to predict the logic behavior of the design: VHDL and Verilog simulators work similarly to a gate level simulator, but they can handle the high level constructs available in those languages. This additional abstraction compared to the gate level opens the door for computer run time improvements because of similarities between VHDL or Verilog and computer programming languages. A ‘+’ operation can be executed in a single machine instruction, while at gate level the operator is decomposed into individual bits and a number of full adder cells, each having a logic complexity of several gates.
After verifying the correct behavior of an RTL design with the aid of VHDL or Verilog simulation, RTL synthesis will translate the design into a gate level netlist. How can we know that the translation was error-free? At first glance one could argue that a computer program does not make mistakes and the results would be correct by construction. Real life has shown many cases where synthesized gate level netlists did not show the desired behavior. Tool flaws and ambiguities of the RTL languages, especially with respect to clock trees, are the main sources of wrong synthesis results. Because of the high incremental cost of a redesign and the associated project delay, it is mandatory to verify the netlist even if it is machine-generated.
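The decomposition of a ‘+’ operator into full adder cells can be made concrete with a sketch (the 8-bit width and all names are illustrative): a ripple-carry chain built from single-bit full adders reproduces the result the RTL simulator obtains with one machine instruction.

```python
def full_adder(a, b, cin):
    """One full adder cell: gate-level sum and carry-out of three input bits."""
    s = a ^ b ^ cin
    cout = (a & b) | (a & cin) | (b & cin)
    return s, cout

def ripple_carry_add(x, y, width=8):
    """Add two integers the way a synthesized gate-level netlist would."""
    carry, result = 0, 0
    for i in range(width):           # one full adder cell per bit position
        s, carry = full_adder((x >> i) & 1, (y >> i) & 1, carry)
        result |= s << i
    return result

# The gate-level chain agrees with the single '+' machine instruction
# for every pair of 8-bit operands (modulo the word width):
for x in range(256):
    for y in range(256):
        assert ripple_carry_add(x, y) == (x + y) % 256
print("ripple-carry adder matches '+' for all 8-bit operands")
```

The contrast in effort is the point: the RTL simulator executes one addition, while a gate level simulator must evaluate every full adder cell in the chain.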
The conventional approach to verifying the generated netlist is to fall back on the second generation EDA tools. A logic simulator can use the same set of stimuli as an RTL simulator. Fortunately, VHDL and Verilog simulators are also capable of gate level simulation; therefore, the stimuli (the ‘test bench’) do not have to be re-written in another format. However, with very complex circuits (e.g. more than a few hundred thousand gates) gate level simulation requires excessive run times and becomes the bottleneck for design productivity.
Formal verification and static timing analysis are the remedy. A formal verification tool checks whether two circuits are equivalent, i.e., have identical logic behavior. The circuits can be represented at different levels of abstraction, e.g. one is given by its RTL model and the other is described at the gate level. The principle of formal verification is quite simple: the behavior of logic gates can be modeled by very simple Boolean expressions, and the behavior of a circuit represented by its gate level netlist can be derived from the gate instances and their interconnects. Using Boolean algebra the tool can then check whether the gate level behavior is equivalent to that captured in the RTL design. Of course, formal verification tools are very complex (and expensive), but they are very fast indeed.
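The principle can be sketched by brute force; real tools use far more efficient representations (e.g. BDDs or SAT), and both circuit descriptions below are invented for illustration. Each representation is turned into a Boolean function and the two are compared on every input assignment.

```python
from itertools import product

# "RTL" view of a hypothetical circuit: a 2-input XOR written behaviorally.
def rtl_model(a, b):
    return a != b

# "Gate-level" view after synthesis: the XOR decomposed into AND/OR/NOT gates.
def gate_netlist(a, b):
    n1 = a and (not b)      # internal net: a AND (NOT b)
    n2 = (not a) and b      # internal net: (NOT a) AND b
    return n1 or n2

def equivalent(f, g, n_inputs):
    """Exhaustively compare two combinational circuits on all input vectors."""
    return all(f(*bits) == g(*bits)
               for bits in product([False, True], repeat=n_inputs))

print("equivalent:", equivalent(rtl_model, gate_netlist, 2))
```

Because the comparison is over the circuits' structure and Boolean algebra rather than over simulated waveforms, no test vectors are needed, which is what makes the method ‘static’.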
1) The intent of VHDL and Verilog was to capture RT level behavior. For practical reasons it was necessary to introduce the concept of hierarchy, similar to subprograms in software programming languages. Hierarchy provides a link to the structural domain, and VHDL and Verilog can, therefore, be used to capture gate level netlists.
Static timing analysis is the method of calculating and adding up gate delays along a signal path. After considering all structurally possible paths between a given start and end point, those with the minimum and maximum delays are considered for further analysis.
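The path-delay bookkeeping can be sketched on a toy netlist (the gate graph and all delay values are invented): a traversal of the combinational DAG yields the latest arrival time at every node, and the largest arrival time at the endpoint is the critical-path delay.

```python
# Combinational netlist as a DAG: node -> (gate delay in ns, fan-in nodes).
# Primary inputs have zero delay and no fan-in.  All values are invented.
netlist = {
    "in1": (0.0, []),
    "in2": (0.0, []),
    "g1":  (1.2, ["in1", "in2"]),
    "g2":  (0.8, ["in1"]),
    "g3":  (1.5, ["g1", "g2"]),
    "out": (0.3, ["g3", "g2"]),
}

def critical_path_delay(netlist, endpoint):
    """Latest arrival time at the endpoint: max over fan-ins, plus own delay."""
    memo = {}
    def arrival(node):
        if node not in memo:
            delay, fanin = netlist[node]
            memo[node] = delay + max((arrival(f) for f in fanin), default=0.0)
        return memo[node]
    return arrival(endpoint)

print(f"critical path delay: {critical_path_delay(netlist, 'out'):.1f} ns")
```

Note that the analysis visits each node once regardless of how many paths pass through it, which is why static timing analysis scales where exhaustive path simulation does not.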
Formal verification and static timing analysis are referred to as ‘static’ tools because they do not require stimuli (test vectors). Static tools come with a tremendous improvement in computer run times: a circuit requiring days for logic simulation can be checked in several minutes with static tools. The use of static tools mandates a synchronous design style. Ideally this implies that the whole circuit changes its state only at the active edge of a single clock. In practice, this implies that all clock pins of all registers are connected to the same clock signal.
To summarise,
• Hardware description languages;
• RTL simulation;
• Logic synthesis;
• Formal verification; and
• Static timing analysis;
together make up the tool suite of the third EDA generation. They represent a complete development system for design at the RT level and its transformation into the gate level. An additional tool is also considered a member of this generation:
Automatic Test Pattern Generation (ATPG)
ATPG cannot be seen as being detached from logic synthesis: as mentioned in section 2.2.7, a plain functional test is not feasible and a structural test is the only resort. This methodology calls for creating the test vectors needed to identify physical wire defects (stuck-at faults). This task cannot start before the netlist has been frozen, and, similarly to any work below the RT level, it is automated according to the charter of the third EDA generation.
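The essence of structural test generation can be sketched by brute force; the circuit and the fault below are invented for illustration, and real ATPG uses directed algorithms rather than enumeration. The search looks for an input vector on which the fault-free circuit and a copy with one internal node stuck at a fixed value disagree at the output.

```python
from itertools import product

def circuit(a, b, c, fault_n1=None):
    """Toy netlist: out = (a AND b) OR c, with optional fault injection on n1."""
    n1 = a & b
    if fault_n1 is not None:        # inject a stuck-at fault on internal node n1
        n1 = fault_n1
    return n1 | c

def atpg_stuck_at(fault_value):
    """Find an input vector that makes the stuck-at fault visible at the output."""
    for a, b, c in product([0, 1], repeat=3):
        if circuit(a, b, c) != circuit(a, b, c, fault_n1=fault_value):
            return (a, b, c)
    return None                     # fault is undetectable

vec = atpg_stuck_at(0)              # test vector for "n1 stuck-at-0"
print("test vector:", vec)
```

The vector found both activates the fault (drives n1 to the opposite of its stuck value) and propagates the difference to an observable output, which is exactly what a production test pattern must do.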
Figure 2.9 sketches the traces within the Y chart if the third EDA generation is applied. A deeper understanding can be found in [2.5] and [2.8].
Design Productivity
The amount of time needed for the development of an ASIC has a major impact on the commercial success of an electronic product. ASICs are primarily used within new products with short life cycles. If the development of the ASIC is late, then the market introduction of the target product will be delayed as well, and the life cycle will be shortened. Less time left to sell the product also means less revenue and less profit 1).
1) Within the context of the product life cycle the market window is an important parameter. This is the duration between product acceptance in the market place and its replacement by a more powerful product. For typical high tech products lifetimes are in the range of one to two years. If development is late and delays the start of production by just 2 months, then potential revenue decays by roughly 8 %.
Figure 2.10 demonstrates how the complexity of microprocessors and DRAMs – expressed in terms of transistor count – has evolved over the last 30 years. ASICs enjoyed very similar progress. According to Moore’s law, complexity increases by a factor of ten every 8 years, on average. Let us now define design productivity as the number of logic gates a designer can implement in an ASIC within one year.
If productivity does not change over time, then ASIC development time would also grow tenfold every 8 years – if the size of the design team remains constant. Alternatively, we could grow the design team tenfold to keep the development time constant. Neither approach is viable; design issues would then soon invalidate any progress in semiconductor manufacturing technology. A continuous increase in design productivity is the only resort. Fortunately, the evolution of EDA does provide the required productivity improvements.
In table 2.1 we have summarized typical figures for design productivity when using the tool suites available in the EDA generations covered in the previous sections. The author interviewed several ASIC design teams and managers to compile that overview. Of course, design productivity is not a simple constant, but strongly depends on the type of ASIC, especially on the amount of regularity within the circuitry. The ratio between the values in table 2.1, however, can be considered typical. By migrating from EDA generation I to generation III, design productivity grew by a factor of 100. From that we can conclude that a typical ASIC design of 1979 using EDA tools of generation I required approximately the same amount of human resources as an ASIC typical of 1995 using the tool suite of generation III.
Fourth EDA Generation
For another ten or fifteen years we can expect to see the complexity of ASICs grow as a result of advances in semiconductor technology. EDA needs to accommodate that trend: design productivity will have to increase continuously, and new tools will be needed. Currently the following trends can be seen:
Design Re-use (multiple usage of a single design)
Design re-use had been widely practiced at a lower level (in the context of the Y chart) for a while: the usage of pre-designed standard cells is a form of design re-use. We can easily boost design productivity if we raise design re-use to a higher level, e.g. by reusing microprocessor cores rather than simple gates. Here, however, arises one of the challenges of design re-use: the specification of logic gates is quite easy because of their simple logic behavior. Semiconductor manufacturers not only provide their customers with a detailed data book of their cell libraries, but also provide the characterization data in the various formats (views) needed by EDA tools. For a more complex circuit, however, the size of, and effort in, documentation grows rapidly.
Historically, circuit ideas had not been considered marketable products and nobody was willing to spend the cost of user-ready documentation. More recently design re-use has been seen as an essential key to system integration (system on a chip), and the Virtual Socket Initiative, an international forum [2.12], pushes, promotes, and develops standards for design re-use. A new market place has developed, and several new firms, but also existing companies, sell or broker circuit designs, frequently referred to as intellectual property, e.g. [2.10].
The significance of design re-use can be demonstrated within the Y chart. By inserting an existing circuit in a new design we can immediately migrate to the structural axis rather than having to traverse further down the functional arm, and there is no need to describe behavior at lower levels.
Graphic State Chart Editors
During the era of the second EDA generation, schematic editors had been introduced to allow electronic designers to think in schematics. At the RT level some members of the designer community also desire to design graphically. However, RTL descriptions are usually viewed more like a computer program than as a structured electronic circuit. Flow charts describing behavior would benefit most from graphical entry, but they are applicable only to a fraction of a typical circuit. It depends on the designers’ preference whether a mixed graphical and language oriented entry is preferred over a pure language oriented one. Several EDA companies offer state chart editors today, but these have not yet gained much ground.
Behavioral Synthesis
Behavioral synthesis is the continued evolution of the concept of EDA: raise manual design work to higher levels of abstraction. It automates the transformation of an algorithmic description into the register transfer level. C level synthesis is a particular type of behavioral synthesis; those tools read the ‘C’ language rather than VHDL or Verilog. These tools are gaining ground [2.4], [2.5], [2.6], but they still appear to be in their infancy.
Hardware Software Co-design
Most electronic systems contain a microprocessor. In such systems the software executed by the microprocessor controls all peripheral hardware. When developing such systems the designers have the choice of implementing a certain function in either hardware or software. A software solution, in general, is cheaper, but whenever highest performance is required a hardware oriented solution may be needed. The partitioning between hardware and software impacts the overall cost of a system. Hardware software co-design improves the productivity of designing such a hybrid system by allowing one to simulate hardware and software together. Tools are available today, but again they are in a state of infancy.
In general, the emerging fourth generation of EDA systems will support the design of systems on a chip (SOC), increasingly calling for support of mixed analog digital (mixed signal) circuitry. The extension of VHDL to VHDL-AMS allows for capturing the behavior of mixed signal designs and is a step in that direction.