Circuit Verification: Verification by Simulation
There are several approaches to verifying the correctness of a circuit. Since designing and manufacturing integrated circuits are very time-consuming and expensive tasks, it is necessary to detect errors as soon as possible in order to avoid additional costs (e.g., see [9.2]).
We have to distinguish between different kinds of errors (fig. 9.1). The fundamental prerequisite for all following work is that the specification of a design is correct, such that it can be used for all following checks of correctness. Typically the specification is given at a very abstract description level. It specifies what has to be done, but it does not describe how it is done. In an extreme case the specification will not even determine whether a task has to be done in software or hardware.
During the phase of circuit design the designer describes how to perform the specified task with a circuit. For the sake of simplicity we assume the task is completely done in hardware and that no software/hardware co-design is necessary. The design verification (section 14.6) then has to compare the design produced against the specification to find out whether both are equivalent.
Typically the designer will perform several modifications and refinements to describe details of the circuit before he finally creates mask data for the manufacturing of chips. The correctness of every such manual or automated conversion step has to be verified so that the chips produced will perform the specified task. In the following sections of this chapter we consider techniques available for such verifications. Please note that whenever an error occurs during the design process all chips produced will have the same faulty behavior. The job of verification is to localize such design errors.
Additional errors may occur during the manufacturing of chips. In this case only single chips of a series may be faulty, whilst others are working correctly. Obviously such faults can only be detected by a chip test checking the behavior of each individual manufactured chip. Since the test effort grows strongly with the complexity of the circuits, and since testing each individual chip is very time consuming, one uses special techniques to obtain reliable statements about the correctness of chips in reasonable time. This topic is treated in detail in chapter 15. Generally the techniques for testing chips are not suitable for detecting design errors. Indeed, to detect production errors one has to assume the design to be correct.
Verification by Simulation
One method of checking the correctness of a circuit is to perform a simulation of the circuit. For this purpose one uses typical input patterns (stimuli) and checks whether the simulated outputs are identical to the intended outputs of the circuit. Any differences are due to design errors or inaccurate simulations. In both cases one has to localize the error and eliminate it. On the other hand, if the simulated results are identical to the desired values there is a certain confidence in the correctness of the design. But unfortunately this is no guarantee of correctness.
Since each simulation uses simplified models of reality a simulation cannot discover all possible errors. For example, the digital simulation considered in chapter 11 uses a simple description of gates and a highly simplified timing model. On the other hand the more precise analog simulation described in chapter 10 can discover more design errors, but is extremely time consuming, so it is not suitable for complex integrated circuits. Furthermore, using a very quick high level simulation supported by design languages such as VHDL (chapter 5), the simulated results and the behavior of the circuit may disagree because of awkward high level circuit descriptions. Such phenomena are discussed in section 9.1.1. For these reasons simulations should be treated with great caution, even if the simulated results are as expected.
A further problem with circuit simulation is the selection of suitable stimuli. In practice, even for fast simulators it is impossible to simulate a complex circuit considering all possible input patterns and all possible states of the circuit. For example, the exhaustive simulation of a multiplier for two binary numbers of length 32 bits would have to consider 2^64 = 18,446,744,073,709,551,616 input samples. Even with a simulation rate of 100 million multiplications per second the simulator would need 5,849 years. Thus in practice one has to restrict the simulation to the most relevant stimuli. Then in the case of a wrong selection of stimuli possible design errors may remain undetected.
A further problem is the evaluation of observed simulation results. For the multiplier example it is quite simple to check the outputs against mathematical software. But verification becomes more difficult if, for example, the circuit generates control signals for some other device. Then it may be harder to decide whether a complicated sequence of signals will have the desired effect. In this case one needs a detailed specification of a complete system, including the environment of the chip, to check the results. For example, using the language VHDL this can be done by a test bench (section 4.10). But because any misunderstanding of a circuit specification may result in a faulty test bench, design errors may again remain undetected.
Verification by High Level Simulation
Using description languages such as VHDL one achieves a very fast high level simulation because of a high degree of abstraction. However, with high level simulation there is the risk that in the case of an awkward circuit modeling the simulated behavior may be quite different from the real behavior of a chip. This is not only due to neglected details, such as signal delay times, but also due to different interpretations of VHDL statements by simulation and synthesis.
This is demonstrated by the following incorrect VHDL examples of behavioral descriptions. They lead to drastic mismatches between the results obtained by high level simulation and gate level simulation. Because of such examples, the interpretation of simulation results should be carried out very carefully. As long as a simulation produces unexpected results giving hints of design errors it is very useful. However, when the simulation produces correct results one has to use more exact simulation methods to confirm the correctness.
For the VHDL description from listing 9.1 the incorrect results are due to the treatment of undefined signals in conditional statements. In addition to the boolean values ‘0’ and ‘1’ the data types std_logic and std_ulogic from the IEEE library implementing standard 1164 also contain the values ‘U’, ‘X’, ‘Z’, ‘W’, ‘L’, ‘H’ and ‘-’. But for synthesis only the values ‘0’, ‘1’ and ‘Z’ are relevant. Thus the semantics of the language VHDL is different for simulation and synthesis [9.5].
The intended effect of process P is that signals a and b always have inverse values such that the signal z, independently of s, always becomes ‘1’. As a consequence the output y should always be identical to the input x. Indeed, synthesizing the circuit from the given VHDL description we obtain a circuit in accordance with y <= x consisting of only a single wire.
But during simulation the internal signal s obtains the value ‘X’ because of a missing initialization. Then in both conditional statements of process P the else cases are executed, assigning the value ‘1’ to signals a and b. Thus we obtain z = ‘0’ and therefore y = ‘0’, so that obviously the simulation results are in contradiction with the synthesized circuit.
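Listing 9.1 itself is not reproduced in this excerpt; the following sketch is a hedged reconstruction from the discussion above. Only the signal names x, y, a, b, s, z and process P follow the text, while the entity name and the exact form of the conditions and of z are assumptions:

```vhdl
library ieee;
use ieee.std_logic_1164.all;

entity wire_demo is                 -- entity name assumed
  port (x : in  std_logic;
        y : out std_logic);
end wire_demo;

architecture behave of wire_demo is
  signal s, a, b, z : std_logic;    -- s is never driven, so in simulation
begin                               -- it keeps an undefined value
  P : process (s)
  begin
    if s = '1' then a <= '0'; else a <= '1'; end if;
    if s = '0' then b <= '0'; else b <= '1'; end if;
  end process;

  -- intended: a and b are always inverse, hence z is always '1'
  z <= a xor b;
  y <= x when z = '1' else '0';     -- synthesis reduces this to y <= x
end behave;
```

In a sketch like this, both comparisons fail for the undefined value of s, so both else branches run, a = b = ‘1’, z = ‘0’, and y is stuck at ‘0’, which is exactly the contradiction between simulation and synthesis described above.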
The second VHDL example concerns the activation of processes, comparison of signals, and timing problems. For this, listing 9.2 presents an incorrect description of a modulo counter.
The designer intended to increment the state s of the counter with every rising clock edge, and when obtaining the value 6 to replace it by 0. Using the input reset the counter should be synchronously reset with the rising edge of clk. Figure 9.2 shows the expected behavior and the result of simulating the VHDL description.
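Listing 9.2 is likewise not reproduced in this excerpt. A plausible reconstruction showing the properties discussed below (integer state, modulo check outside the clocked branch, signal s missing from the sensitivity list) could look like this; the entity name and port details are assumptions:

```vhdl
library ieee;
use ieee.std_logic_1164.all;

entity mod6_counter is               -- entity name assumed
  port (clk, reset : in std_logic);
end mod6_counter;

architecture behave of mod6_counter is
  signal s : integer range 0 to 7;   -- integer: simulation starts at 0
begin
  P : process (clk)                  -- s is missing from the sensitivity list
  begin
    if rising_edge(clk) then
      if reset = '1' then
        s <= 0;                      -- intended synchronous reset
      else
        s <= s + 1;                  -- signal assignment: takes effect only
      end if;                        -- after the process suspends
    end if;
    if s = 6 then                    -- outside the clocked branch: synthesized
      s <= 0;                        -- as an asynchronous correction
    end if;
  end process;
end behave;
```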
It is noticeable that in the VHDL simulation, with a rising edge of clk, the state 6 is reached and the correction to s = 0 does not occur before the next falling edge. This is because the assignment s <= s + 1 is performed concurrently, so that for the following test if s = 6 ... the signal s still holds the old value 5 instead of the new value 6. The next activation of process P with a successful test on s = 6 occurs with the next modification of clk, thus with the falling edge.
After synthesizing the VHDL description we obtain the circuit shown in figure 9.3. One easily recognizes the synchronous increment operation and the asynchronous modulo calculation. Obviously the correction from state 6 to state 0 is performed
immediately, without waiting for the falling edge of the clock signal. The cause for this deviation between simulation and synthesis is the absence of signal s in the sensitivity list of process P. If s is included in the list, the modulo calculation will also be performed directly after the rising edge within the simulation. This modification has no effect on synthesis.
Another remarkable fact is that at the beginning of the simulation the counter is in state 0, even without use of the reset signal. In general this will not be true for the synthesized circuit. Indeed, the gate level simulation tells us that at the beginning the state is undefined. This deviation between both kinds of simulation is due to modeling the state with the data type integer. If we replace this data type by unsigned (a std_logic based vector type), the state will be undefined until it is initialized by signal reset.
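The difference can be sketched with two alternative state declarations (assumed forms; the original listing is not shown here):

```vhdl
-- alternative 1: an integer signal starts simulation at integer'left,
-- here 0, which silently masks the missing reset:
signal s : integer range 0 to 7;

-- alternative 2: a std_logic based vector (package ieee.numeric_std)
-- starts simulation at "UUU" and stays undefined until reset is applied:
signal s : unsigned(2 downto 0);
```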
All presented modifications of the behavioral description have no effect on synthesis, but are useful for obtaining a more realistic simulation. Although the VHDL simulation then shows exactly the desired behavior and the FPGA prototype implementation also works well, there is one more problem with gate level simulation. In that simulation the state remains undefined even if the reset signal is set. The reason for this unexpected phenomenon is a special feature of the synthesized circuit. Signal s_next of the counter is derived from the current state and from the signal reset. Figure 9.4 shows a small part of the combinatorial circuit computing the bit s_next1. The labels are according to the applied input reset = 1. Correctly we would obtain a defined value for s_next1 here, but the gate level simulation propagates the undefined state bits as ‘X’ through the gates, so s_next1 also remains undefined.
Obviously in this case the gate simulation is too careful. When we manually initialize the counter to any defined value the gate level simulation also shows the intended behavior of the circuit. Nevertheless, the circuit described is poorly suited for practical use. Depending upon the wiring and sizing of the gates we will obtain different capacitances, and thus different signal delays, which may influence the calculation of s_next2, s_next1 and s_next0. For example, if signal s_next1 is slower than the other two signals the transition from s = 3 = (011)bin to s = 4 = (100)bin will briefly produce the intermediate result s_next = (110)bin = 6 and thus start an asynchronous reset operation for the flip flops.
This erroneous behavior is shown in figure 9.2 as case e).
Altogether the aforementioned examples show that simulation results should be interpreted very carefully and that different kinds of simulation may produce contradictory results. The problems shown arise from an awkward or incorrect behavioral description of circuits given by the designer. Of course in practice the designer should try to avoid such problems by using well formulated descriptions. Nevertheless, there is no guarantee that a VHDL description will be error-free.
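As an example of such a well formulated description, one possible cleaner version of the counter process (a sketch, not taken from the book's listings) keeps every assignment to s inside the clocked branch and wraps around before the critical value is ever reached:

```vhdl
P : process (clk)
begin
  if rising_edge(clk) then
    if reset = '1' or s = 5 then  -- wrap from 5 back to 0: the state
      s <= 0;                     -- never reaches 6, so there is neither
    else                          -- an asynchronous correction nor a
      s <= s + 1;                 -- glitch-triggered reset
    end if;
  end if;
end process;
```

With this formulation simulation and synthesis agree: all state changes happen with the rising clock edge, and the delay-dependent intermediate values of s_next can no longer trigger a spurious reset.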
Concept of Formal Verification
As presented in section 9.1, it is not practicable to carry out an exhaustive simulation using all possible stimuli. Formal verification is a method of avoiding this problem. Here the correctness of the design is proved, for example, by showing the equivalence of two descriptions. This technique is explained in more detail in chapter 14. For this methodology there are convincing sample applications. For instance, if a circuit is manually modified to improve its time performance or to minimize the chip area then one can check by formal verification whether the circuit descriptions before and after the modification are equivalent [9.3], [9.1]. If, for example, the designer makes a mistake by cutting a wire, or by causing a short circuit between wires, or by using a wrong gate, then the modification will change the logical behavior of the circuit and the error will be detected by a comparison of both circuit descriptions.
Typically, the formal verification will generate a stimulus for each detected error so that the error can also be observed by simulating both circuits for the given stimulus. The great advantage of formal verification is that it is not necessary to try out a large number of stimuli to detect an error. Instead, the relevant simulation inputs are calculated by the verification tool. Of course the verification task becomes simpler if both circuit descriptions are very similar.
The major problem of formal verification is that we need a precise specification of the desired chip behavior which can be formally compared to the actual circuit design. The creation of this specification may be just as difficult as designing the circuit. Furthermore, if specification and design both contain the same error it cannot be detected by formal verification.
With none of the verification methods do we obtain a guarantee for the correctness of a circuit’s implementation. In practice all we can achieve is to avoid special types of design errors. For that reason we restrict the following presentation to the task of verifying special properties of a design. For example, we may verify the timing to ensure that for a specified clock rate there is enough time to achieve the next stable state from every current state of the circuit and for any input data.