By paul ~ November 18th, 2008. Filed under: Systems Engr.
The essence of a Model Driven Systems Engineering methodology is that a model becomes the backbone of the systems design process. At the beginning of the process, the engineering activity centers on the model, and “designing” and “modeling” are synonymous. During the latter stages of the process, the model is a reference, a resource, and a trouble-shooting tool. In a true Model Driven methodology, a model has persistent value and contributes to the entire project life cycle. I want to make this point and explain “why” because, in many systems engineering flows, modeling and simulation are relegated to the Architectural Design activity. This seriously undervalues the model in the systems engineering process.
In this post, I’ll be trying to be as general as possible as I believe that the methodologies we’re talking about are applicable to many different domains. However, as always, what I say will be heavily influenced by my experience with complex embedded systems which have significant electronic hardware and software content. I’ll rely on the same definitions of Modeling Language and Model that we discussed in a previous post Model Driven Design (For Real). As a refresher, these are:
- The Modeling Language is used to express the designer’s intent. As such, it must be concise, expressive, easy to learn, able to express concepts at a level of abstraction above the hardware/software/mechanical decision, and formal (machine analyzable and executable). Several of these imply that it should leverage diagrams and not only be textual. In order to make it easy to learn and expressive, it should leverage familiar modes of expression.
- A Model, fundamentally, behaves sufficiently like the real system to be useful, is analyzable, communicates intent, is inexpensive to produce and is malleable (easily changed to explore options). To be analyzable by machines and humans, it must be at least somewhat formal. To communicate intent in a useful fashion, it must be understandable to all stakeholders with a minimum of “re-education.” Ideally, it is possible to communicate intent to implementation via automation (i.e. implementation or more detailed design can be generated directly from the model).
Modeling Through the Flow
The central distinctive characteristic of Model Driven Systems Engineering is the use of simulatable models in the highly iterative process stages from Requirements Analysis (requirements identification, elaboration, discovery, and documentation) through Architectural Design (system partitioning, technology evaluation and selection, design space exploration, and interface design). This part of the process tends to be very iterative because the Architectural Design process inevitably exposes ambiguity in the existing requirements and reveals new requirements. This is a healthy phenomenon that should be encouraged by the design process. In addition, the Architectural Design activity itself is very iterative in order to effectively explore the design space. New technologies are evaluated, trade studies are performed, and alternative architectures are evaluated to arrive at an optimal high-level design before the architecture can be handed off to Detailed Design and Implementation tasks. Throughout this process, simulatable models can be used as virtual prototypes to aid in analysis.
Creating and using a simulatable system model as the backbone of your system design process has many benefits. I have broken up the discussion of these benefits into sections aligned with the design flow to better illustrate the value of modeling throughout the entire design process. Note that in the accompanying graphic, System Verification has been drawn to extend from the earliest phases of the system design to the end. This illustrates what all excellent systems engineering teams know: “Begin with the end in view”. If you don’t start planning (and doing) verification for complex systems from the beginning, you will find it extremely difficult to accomplish at all.
Requirements Definition and Analysis
The model is a useful focal point for exchanges with stakeholders regarding their needs. For example, “When you said you needed X, did you mean this?” “This is our proposed approach to meeting requirement X. Does this approach look acceptable to you?” “As we have analyzed approaches to meeting both requirement X and requirement Y, we have discovered that we can only reach 80% of Y while providing X. Here’s why…” In these situations, well crafted models ensure that all constituents, including end users, agree on the definition of the requirements.
This value can be particularly great for systems that present a user interface. A simulatable model can be a great aid to human interface development because it behaves as the intended system will. In the end, this model can also be utilized in user training. (Note that as the model progresses and necessary detail is added, the model may no longer simulate fast enough to be usable for human interface development and training. At this point, we either separate the model into two models with different purposes, or provide a “switch” in the model that will allow it to run at a higher level of abstraction for the HMI purposes.)
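As a minimal sketch of what such a “switch” might look like (the class, method, and numeric values here are invented for illustration, not taken from any particular tool), a fidelity parameter can select between a fast, high-abstraction behavior for HMI work and a detailed behavior for analysis:

```python
# Hypothetical sketch: one model with a fidelity "switch" so it can run fast
# enough for HMI prototyping/training, or in full detail for analysis.

class SensorModel:
    def __init__(self, fidelity="fast"):
        # "fast" for HMI/training use, "detailed" for engineering analysis
        self.fidelity = fidelity

    def sample(self, true_value):
        if self.fidelity == "fast":
            # High-abstraction behavior: ideal sensor, negligible compute cost.
            return true_value
        # Detailed behavior: bias and quantization, as the real hardware would show.
        bias = 0.05   # assumed sensor offset
        lsb = 0.1     # assumed quantization step of the ADC
        return round((true_value + bias) / lsb) * lsb

fast = SensorModel("fast")
detailed = SensorModel("detailed")
print(fast.sample(1.234))      # ideal value, suitable for fast HMI simulation
print(detailed.sample(1.234))  # biased, quantized value for analysis
```

The design point is that both behaviors live in one model, so the HMI and analysis uses stay synchronized as the design evolves.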
As I mentioned earlier, verification must be considered at the requirements analysis stage. Verifying complex systems is very difficult and the challenges compound as design proceeds beyond the initial requirements definition. Many problems arise from a single root cause: poor requirements traceability. By “embedding” requirements in the model, with appropriate links between requirements and model, it is possible to easily trace requirements to the components in the system that will (must) implement them. This not only makes it easier to develop verification plans, but facilitates the flow-down of the requirements to the appropriate development functions. [Trust me, if you’ve ever tried to verify a complex system built with the help of subcontractors, you’ll really appreciate this help with requirements traceability!]
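To make the “embedding” idea concrete, here is a toy sketch (the requirement IDs, component names, and query are invented; a real flow would link a requirements database to model elements rather than use dictionaries): once components declare the requirements they implement, traceability questions become simple queries.

```python
# Illustrative sketch of embedding requirements in a model: each model
# component carries links back to requirement IDs, so traceability reports
# (e.g., which requirements are not yet allocated?) fall out of a query.

requirements = {
    "REQ-001": "System shall process a frame in under 50 ms",
    "REQ-002": "System shall log all operator commands",
    "REQ-003": "System shall survive a single sensor failure",
}

# Each component in the system model declares the requirements it implements.
model_components = {
    "VideoPipeline": ["REQ-001"],
    "CommandLogger": ["REQ-002"],
}

def unallocated(requirements, components):
    """Requirements not yet traced to any component -- a verification gap."""
    allocated = {rid for rids in components.values() for rid in rids}
    return sorted(set(requirements) - allocated)

print(unallocated(requirements, model_components))  # -> ['REQ-003']
```

The same links support flow-down: inverting the mapping tells a subcontractor exactly which requirements ride along with the component they are building.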
An innovation whose time has come is “Model Driven Acquisition” (my extension of “Simulation Based Acquisition”). While, to my knowledge, this has rarely been utilized below the operational simulation level, we should start seeing it at the subsystem level. What this would mean is that system models are “handed down” with the RFP, with the expectation that models satisfying the requirements would be “handed back” at the preliminary design review. This would put powerful analysis capability in the hands of the customer for evaluating proposed designs, as well as provide bidding teams with a better understanding of the requirements in the RFP.
Architectural Design
The analyzable model facilitates exploration of the design space via trade studies. Because we have an analyzable model in hand, we can quantitatively evaluate design alternatives in order to minimize our cost function (whether formal or visceral). This is, of course, the heart of design optimization. Some examples of the kinds of analyses that can be performed include schedulability analysis, performance analysis, functional simulation, cost analysis, reliability analysis, usability & supportability analysis, and the evaluation of new technologies. All of these allow for the quantitative evaluation of design alternatives against the requirements early in the design cycle.
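A toy illustration of such a trade study (the candidate architectures, metrics, and weights are all invented): each alternative is scored with a weighted cost function derived from stakeholder priorities, and the minimum-cost candidate is selected.

```python
# Hedged sketch of a quantitative trade study over candidate architectures.
# Metrics might come from simulating the system model under each alternative.

candidates = {
    "single-board":   {"unit_cost": 120, "latency_ms": 45, "power_w": 8},
    "dual-processor": {"unit_cost": 210, "latency_ms": 22, "power_w": 14},
    "fpga-offload":   {"unit_cost": 300, "latency_ms": 9,  "power_w": 11},
}

# Weights encode what the stakeholders care about (lower total is better).
weights = {"unit_cost": 1.0, "latency_ms": 5.0, "power_w": 3.0}

def cost(metrics, weights):
    """Weighted cost function over the evaluated metrics."""
    return sum(weights[k] * metrics[k] for k in weights)

best = min(candidates, key=lambda name: cost(candidates[name], weights))
for name in candidates:
    print(f"{name}: {cost(candidates[name], weights):.1f}")
print("selected:", best)
```

In practice the “visceral” cost function the post mentions is harder to write down, but even a rough weighting like this makes the comparison explicit and repeatable as the model evolves.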
Many complex systems engineering activities deliver capability in an incremental fashion. A system model that can persist from increment to increment can be invaluable both in charting the course and in enabling the discussion that must exist between the phased development efforts.
Detailed Design and Implementation
The model presents a more complete specification to the downstream processes of Detailed Design and Implementation. The rigor required to create a simulatable (and, more generally, an analyzable) model, and the artifacts that accompany it, make a much better specification than paper specifications or a collection of diagrams as used in most organizations. Ultimately, the goal is machine synthesis from model to design and then implementation, but this is not a requirement to realize the other benefits mentioned here.
The model facilitates identifying solutions to problems that may appear during implementation. For instance, if a performance problem is identified, the model can be updated with necessary detail and analyzed to determine what the cause of the problem is and identify and evaluate potential solutions. [This is much more efficient than the approach of discovering the problem, THEN developing a model to help resolve it!] This value is particularly important where the deployed behavior is difficult or impossible to produce in the development environment. Often these behaviors can be reproduced in a system-level model.
It is possible for the system model to further facilitate implementation by acting as a test bench and/or producing “test vectors” for testing modules as they are completed. We call the former “implementation-in-the-loop” simulation: the implementation is actually included in the system model, which allows the implementation to be tested and the performance of the system with the actual implementation to be predicted.
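A minimal sketch of the test-vector idea, with an invented reference behavior (a saturating scaler) standing in for the modeled module: the model supplies golden input/output pairs, and the implementation is then exercised against them.

```python
# Illustrative sketch: the system model doubles as a test bench, producing
# (input, expected output) "test vectors" for a module under implementation.

def model_reference(x):
    """Golden behavior of the module, as captured in the system model."""
    y = x * 2
    return max(-100, min(100, y))  # saturate to the interface's stated range

def generate_test_vectors(inputs):
    """Run the model over representative inputs to produce test vectors."""
    return [(x, model_reference(x)) for x in inputs]

vectors = generate_test_vectors([-80, -10, 0, 10, 80])

def implementation(x):
    """Stand-in for the actually implemented module under test."""
    return max(-100, min(100, 2 * x))

# The implementation is checked against the vectors the model produced.
for x, expected in vectors:
    assert implementation(x) == expected, f"mismatch at input {x}"
print("all", len(vectors), "vectors passed")
```

For implementation-in-the-loop, the `implementation` function would be replaced by a call into the real code (or a co-simulation interface to real hardware), with the rest of the system still supplied by the model.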
As mentioned in the Requirements Analysis section above, a system model can be an excellent communications medium for the requirements discussion. This came up again in the Detailed Design section where we discussed the model being an effective specification for hand-off to detailed design and implementation tasks. For the same reasons, it can be a powerful tool for communicating with subcontractors throughout the project life cycle.
System Integration
Performing the integration in the system model early, before we get to the final integration step, can expose many latent problems that would be extremely expensive to resolve in the integration activity. As we all know, the key to taking the pain out of integration is to consider and do it (on a trial basis) as early as possible, before everyone has poured concrete!
System Verification
The model greatly facilitates verification, both early and final. As alluded to above, the model can be used from the beginning to evaluate proposed design approaches against the requirements very early in the design cycle. But the value of the analyzable model to verification actually extends well beyond that. First, as the system design is developed from the requirements, the requirements become “embedded” in the model in a variety of ways. Secondly, in many complex systems that I have been involved with, verification by test is simply not possible for many requirements until the system is nearly finished. This leaves the project with Verification by Analysis as the verification method until nearly the end. For critical performance requirements, it is simply not acceptable to perform an analysis at the beginning and then wait until test to assure ourselves that the requirement is met. An excellent solution to this problem is a system model that persists and is updated throughout the development phase, updating analysis results as we go along.
One of the “embedding” methods is to actually build some verification directly into the model in the form of “assertions” where the values tested come directly from the requirements database as parameters. Simply simulating or analyzing the model can then verify that the current state of the design meets the requirements.
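A sketch of this assertion pattern (the requirement, its threshold, and the toy latency formula are invented; a real flow would pull the parameter from the project's requirements database and run an actual simulation): the assertion's limit comes from the requirement record, so every simulation run doubles as a verification check.

```python
# Sketch of building verification into the model as assertions whose
# thresholds are parameters drawn from the requirements database.

requirements_db = {
    "REQ-LAT-001": {
        "text": "End-to-end latency shall not exceed 50 ms",
        "max_latency_ms": 50.0,
    },
}

def simulate_latency(load):
    """Toy stand-in for a simulation run that predicts latency (ms)."""
    return 10.0 + 30.0 * load  # grows with system load

def check_latency_requirement(load, db=requirements_db):
    limit = db["REQ-LAT-001"]["max_latency_ms"]
    latency = simulate_latency(load)
    # Because the threshold is a parameter from the requirement record,
    # re-simulating after any design change re-verifies the requirement.
    assert latency <= limit, (
        f"REQ-LAT-001 violated: {latency:.1f} ms > {limit} ms at load {load}")
    return latency

print(check_latency_requirement(0.5))  # passes: 25.0 ms, within the 50 ms limit
```

If the requirement later tightens to 20 ms, only the database entry changes; the next simulation run flags every design state that no longer complies.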
Taking it to the Bank
Well-crafted models form the backbone of a system design. A properly executed Model Driven Systems Engineering flow will significantly reduce the risk and cost of complex system designs by creating a strong, continuous verification framework for the systems engineering process. The system model becomes not just an artifact but a powerful tool that dramatically improves communication, collaboration, and execution throughout the product cycle. Model Driven Systems Engineering is the key to deploying complex systems on spec, on time, and on budget.