By paul ~ January 9th, 2009. Filed under: Industry, Systems Engr.
Sorry for two posts about the same subject, but Brian Bailey’s whitepaper (System Level Virtual Prototyping becomes a reality with OVP donation from Imperas) really got me thinking.
One of Brian’s most valuable contributions in the paper is a lucid discussion of the level of timing accuracy required for different development tasks. Brian touches on this throughout the paper, but deals with it head-on in a section headed “Accuracy and Timing.” He makes an important observation there:
So if hardware has not yet been implemented, how can we know the exact timing? The only thing that can be shown is that it meets the requirements. That is one of the purposes of a system level virtual prototype – to work out the timing necessary and other details of the architecture long before they are frozen. Thus timing is an artifact of implementation, and the lack of timing in the early stages of a project does not imply inaccuracy, just the degrees of freedom currently left in the design process. (Emphasis mine.)
Well put! One of the concerns that I have often heard expressed when discussing the usefulness of system-level modeling at high levels of abstraction is this issue of accuracy. Since we know that, by its very nature, an abstraction cannot be accurate in an absolute sense, is it useful? After all, systems fail because of timing problems in the nanoseconds, don’t they?
Yes, at some level accuracy is important, but it is also expensive. In the early stages of the design process, when we’re trying to settle on an architecture that will satisfy the system requirements, detailed accuracy matters much less. It is at this point that we can benefit from designing at a higher level of abstraction, creating a system that is more robust against timing error.
As I read Brian’s paper, I was struck by the fact that he makes a powerful argument for system-level modeling and simulation, even before you get to the kind of virtual prototyping enabled by the OVP technologies. For complex designs, it is critical to explore the design space through a malleable high-level model before you invest in detailed hardware design and start software development. The hardware/software partitioning decision itself must be made carefully. Software architecture and technology selection (ASIP vs. GP core, single processor vs. multiple processors, switched fabric vs. serial vs. parallel bus, etc.) should always be worked out before committing the kind of investment required to build a virtual prototype upon which to develop software.
High-level modeling tools, such as Foresight, enable the rapid prototyping and evaluation of architectures prior to committing to detailed design and implementation. The resulting models are accurate enough to make excellent design optimization decisions and are quick and easy to use. Because the broad-brush decisions have been made in the higher level model, less energy is wasted in exploring dead-ends with the more labor-intensive virtual platform model.
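To make the idea concrete, here is a toy sketch of the kind of broad-brush architecture comparison such a model supports. Everything in it is hypothetical for illustration: the architecture names, the per-task and bus-overhead figures, and the latency budget are invented, and this is a deliberately crude first-order analytic model, not Foresight output or a real virtual platform.

```python
# Toy analytic model: rank candidate architectures against a latency budget.
# All names and numbers are hypothetical illustrations.

from dataclasses import dataclass


@dataclass
class Architecture:
    name: str
    cores: int
    per_task_ms: float      # coarse estimate of CPU time per task on one core
    bus_overhead_ms: float  # coarse estimate of interconnect cost per task

    def latency_ms(self, tasks: int) -> float:
        # Ideal parallel speedup plus a flat bus cost per task: crude,
        # but good enough to rule candidates in or out early.
        compute = tasks * self.per_task_ms / self.cores
        comms = tasks * self.bus_overhead_ms
        return compute + comms


BUDGET_MS = 50.0
candidates = [
    Architecture("single GP core", cores=1, per_task_ms=2.0, bus_overhead_ms=0.0),
    Architecture("dual core, shared bus", cores=2, per_task_ms=2.0, bus_overhead_ms=0.4),
    Architecture("dual core, switched fabric", cores=2, per_task_ms=2.0, bus_overhead_ms=0.1),
]

for arch in candidates:
    t = arch.latency_ms(tasks=40)
    verdict = "meets" if t <= BUDGET_MS else "misses"
    print(f"{arch.name}: {t:.1f} ms ({verdict} the {BUDGET_MS:.0f} ms budget)")
```

A model at this level of abstraction cannot tell you nanosecond timing, and it doesn’t need to: its job is to eliminate dead-end architectures before any labor-intensive detailed modeling begins.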
Please understand that I am not arguing against the use of virtual platform methods. These developments are very exciting and definitely a valuable enabler. My suggestion is simply to lift your eyes a bit higher still, reach for another rung, and see if additional benefit cannot be gained from an even higher level of abstraction.