By paul ~ December 9th, 2008. Filed under: Announcements, RAMS, SDR & SCA, Systems Engr., Tips & Tricks.
For the last 4 years, I’ve had the privilege of using Foresight for performance analysis on a number of different Joint Tactical Radio System (JTRS)-related Software Defined Radio (SDR) projects. It was a privilege because it is not often that tool developers have the opportunity to gain first-hand experience in the use and application of their tools on real-world projects. For me, this has been very enjoyable, rewarding work and I look forward to continuing it.
For those who aren’t familiar with SDR, a software defined radio is a wireless communications device in which functions traditionally implemented in analog electronics, such as modulation, are handled in software running on a DSP and/or MCU. The radio functionality is usually referred to as a waveform; the radio device itself (and its basic software services) is referred to as the platform. If you’re like me, when you hear the word “waveform” you think of an analog signal. In this case, the word waveform refers to the entire software path from baseband to the hand-off to analog components at the RF front end. Waveforms often include digital modem, encryption, and networking capabilities. [This is NOT your father’s ham radio!]
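To make “modulation in software” concrete, here is a toy BPSK modulator sketch. This is purely illustrative (real SDR waveforms involve far more elaborate DSP chains), but it shows the core idea: arithmetic on sample buffers replaces analog mixing.

```python
import math

def bpsk_modulate(bits, samples_per_symbol=8):
    """Toy BPSK: map each bit to a +1/-1 amplitude and multiply by a carrier.

    Illustrative only -- production waveforms add pulse shaping,
    filtering, framing, and much more.
    """
    samples = []
    for bit in bits:
        amplitude = 1.0 if bit else -1.0
        for n in range(samples_per_symbol):
            # One carrier cycle per symbol period.
            phase = 2.0 * math.pi * n / samples_per_symbol
            samples.append(amplitude * math.cos(phase))
    return samples

signal = bpsk_modulate([1, 0, 1])
print(len(signal))  # 3 bits * 8 samples/symbol = 24 samples
```

In a real radio, buffers like this would flow through further software stages (filtering, up-conversion) before the hand-off to the analog RF front end.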
The Software Communications Architecture (SCA) provides the foundation for JTRS radios. The SCA is a software architecture designed to enable a highly modular and portable approach to developing software defined radios. The SCA, in turn, is based on CORBA. In theory, a platform publishes the resources it provides, and a waveform publishes the resources it requires (and its required internal connectivity) in XML files. Functionality is bound to resources dynamically as the waveform is loaded onto the platform. A Core Framework delivers the central, common services of the SCA.
Modern JTRS waveforms are extremely complex. Imagine having a high-end Cisco network appliance (router, firewall, security, QoS management, etc.) and a broadband radio modem that provides full simultaneous support for multiple RF channels in one box. As a result, these waveforms are very demanding of the computational power of the platform.
JTRS radios are complex embedded systems that have strict size, weight, reliability, power, and performance (startup time, latency, throughput, jitter, error rate, etc.) requirements. The fact that multiple contractors develop the radio waveforms and platforms further exacerbates the design challenge.
A waveform must be designed and verified to meet requirements on all target platforms. This is a challenge because the waveforms are developed independently of the platforms. Assumptions made about platform capability become requirements on the platform design. It is difficult to modify waveform designs to accommodate performance problems identified during platform development.
A platform must be designed and verified to meet requirements for a selected group of waveforms. These platforms frequently contain multiple general purpose processors (GPPs), as well as dedicated security processors, DSPs, FPGAs, volatile and non-volatile memory, and inter-processor busses. It is often the case that there are requirements on maximum processor, bus, and memory utilization in order to provide for future expansion.
I hope that you’ll forgive me if this next bit sounds a little like a commercial. The marketing guy did not ask me to write this, nor did he put words in my mouth. I’m just very “sold” myself (based on my experience) on Foresight’s ability to solve real problems for real systems engineers. So, excuse the froth, but it really works!
Foresight’s Resource Aware Modeling and Simulation (RAMS) approach is an excellent fit for both the design and simulation-based verification of software defined radios. (In the interest of brevity, I’m going to assume you’ve read that paper and know what I’m talking about.) My approach is to independently model waveforms and platforms. The waveform model is the RAMS Behavioral Model and the SDR platform model becomes the RAMS Platform Model. [This is a little bit of a simplification. The SCA platform actually contains functional components as well as resources (and components that exhibit both behaviors.) The reality is that the separation is not at all difficult and is easy to make as the models are developed.]
The waveform models include not only a high-level control and data flow model of each waveform component, but also port behaviors. The middleware (SCA port and underlying CORBA behaviors) must be considered directly, as it has a significant impact on the performance of the radio. Data Flow Diagrams (DFDs) specify waveform structure, intra-component data and control flow, and concurrency. State Transition Diagrams and Minispecs specify component behavior. Foresight’s parameterization capability is used heavily throughout to allow potentially variable inputs (such as resource mapping, processing cost, queue lengths, and alternative behaviors) to be managed and configured externally to the model structure.
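The parameterization pattern above can be sketched in plain Python. This is not Foresight’s actual mechanism, just a hypothetical illustration of the idea: the model structure (a waveform component with a bounded input queue) stays fixed, while per-experiment values are supplied from outside it.

```python
# Hypothetical external parameterization: structure is fixed in code,
# values (processing cost, queue length, resource mapping) come from
# a config dict, analogous to Foresight parameters configured
# externally to the model.

DEFAULTS = {"cpu_cost_us": 120, "queue_len": 16, "mapped_to": "GPP1"}

class WaveformComponent:
    def __init__(self, name, params=None):
        cfg = dict(DEFAULTS, **(params or {}))
        self.name = name
        self.cpu_cost_us = cfg["cpu_cost_us"]  # processing cost per packet
        self.queue = []
        self.queue_len = cfg["queue_len"]      # bounded input queue
        self.mapped_to = cfg["mapped_to"]      # resource mapping

    def offer(self, packet):
        """Enqueue a packet; report a drop if the queue is full."""
        if len(self.queue) >= self.queue_len:
            return False
        self.queue.append(packet)
        return True

# One component definition serves multiple experiments by swapping
# (hypothetical) parameter sets.
baseline = WaveformComponent("modem", {"cpu_cost_us": 120})
stressed = WaveformComponent("modem", {"cpu_cost_us": 200, "queue_len": 4})
```

The payoff is the same as in the Foresight workflow: experiments vary only the parameter sets, never the model structure.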
The platform model includes not only the hardware resources in the platform (processors, busses, and memory) but also software resources (RTOS, ORB, and Core Framework services such as the file system, logging, device manager, etc.). Again, Foresight’s DFDs, State Transition Diagrams, and Minispecs model platform and resource behavior. Parameterization is used heavily for configuration. A single, parameterized platform model is often able to represent many similar platform configurations, which reduces modeling effort, improves reusability, and facilitates tradeoff analysis.
Foresight’s ability to create user-defined resources allows us to accurately model the Integrity RTOS scheduling behavior. This in turn makes it possible to tune priority, weight, and partitioning in the context of the model. Further, we use Foresight’s external call SDK to directly link the Linux scheduler code into the model (we did the same thing with the Linux DiffServ code) such that we didn’t have to attempt to correctly model the implementation in Foresight. The result is a user-defined resource with behavior defined by real code running in the context of the Foresight simulation (running under Windows!)
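The kind of behavior captured in a user-defined scheduling resource can be sketched minimally as fixed-priority “pick next task” logic. To be clear, this is not the Integrity scheduler (which adds partitioning, weights, and time budgets); it is a hypothetical stand-in showing the shape of what such a resource models.

```python
# Minimal fixed-priority dispatch sketch -- a stand-in for the kind of
# behavior one encodes in a user-defined RTOS scheduling resource.
# Hypothetical; real RTOS schedulers add partitioning and budgets.

def pick_next(ready_tasks):
    """Return the name of the highest-priority ready task.

    ready_tasks: list of (name, priority) tuples; larger number means
    higher priority. Ties go to the earliest-listed task (a stand-in
    FIFO tie-break rule).
    """
    if not ready_tasks:
        return None
    best = ready_tasks[0]
    for task in ready_tasks[1:]:
        if task[1] > best[1]:
            best = task
    return best[0]

print(pick_next([("logger", 2), ("modem_rx", 9), ("gui", 1)]))  # modem_rx
```

Tuning priorities or tie-break rules in a model like this, then re-running the simulation, is exactly the kind of what-if exploration the user-defined resource enables.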
The models are instrumented so that key performance data is logged against time throughout the simulation. This includes per-component (and per-port) latency per packet, resource utilization, RTOS activity, and any other measures deemed useful for analysis and troubleshooting. The data is post-processed using common desktop tools such as Microsoft Access and Excel (programmed with VBA) to produce presentation-ready graphs and tables communicating resource utilization over time, resource consumption per component, startup-timeline Gantt charts, throughput and latency per packet QoS class, etc.
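The per-component latency summaries described above can be sketched in a few lines. The actual workflow used Access and Excel with VBA; this is just an illustrative Python equivalent over a made-up log format (component, packet id, enqueue time, dequeue time, in microseconds).

```python
# Hypothetical simulation log: (component, packet_id, t_in_us, t_out_us).
log = [
    ("modem",  1, 0,   180),
    ("modem",  2, 50,  260),
    ("crypto", 1, 180, 220),
    ("crypto", 2, 260, 310),
]

def latency_stats(records):
    """Return {component: (mean_latency_us, max_latency_us)}."""
    by_component = {}
    for comp, _pkt, t_in, t_out in records:
        by_component.setdefault(comp, []).append(t_out - t_in)
    return {c: (sum(v) / len(v), max(v)) for c, v in by_component.items()}

print(latency_stats(log))
# {'modem': (195.0, 210), 'crypto': (45.0, 50)}
```

Grouping by packet QoS class instead of component, or bucketing by time window for utilization-over-time plots, follows the same pattern.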
Traffic generators, created in Foresight, stimulate the composite models with a variety of load models aimed at testing the system under a variety of stressing and typical traffic conditions.
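A minimal sketch of one such load model is Poisson traffic, i.e. packets with exponentially distributed inter-arrival times at a configurable mean rate. This is an illustrative assumption, not Foresight’s generator: the real generators also model packet sizes, bursts, and QoS-class mixes.

```python
import random

def generate_arrivals(rate_pkts_per_s, duration_s, seed=0):
    """Return packet arrival timestamps (seconds) within duration_s.

    Poisson traffic sketch: inter-arrival gaps drawn from an
    exponential distribution. Seeded for repeatable experiments.
    """
    rng = random.Random(seed)
    t, arrivals = 0.0, []
    while True:
        t += rng.expovariate(rate_pkts_per_s)  # exponential gap
        if t >= duration_s:
            return arrivals
        arrivals.append(t)

arrivals = generate_arrivals(rate_pkts_per_s=1000, duration_s=0.1)
print(len(arrivals))  # roughly rate * duration, i.e. around 100 packets
```

Seeding the generator matters for the batch-experiment workflow described below: re-running an experiment against an updated model reproduces the identical offered load, so result differences come from the model change alone.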
Foresight’s integrated simulator and model debugging functionality make diagnosing issues with these complex models straightforward. (I wish that I could claim to create bug-free models that just work the first time, but it wouldn’t be true. I really benefit from the debugging features!) It’s pretty sweet to be able to create, simulate, and debug your model all in one environment. The model is truly a “white box” with all data and behavior observable, including breakpoints, monitors, etc. It beats translation to, and debugging in, a software debugger all to pieces!
The result of all of this is an integrated, simulatable Foresight model of the radio. A series of experiments is set up and batch-executed overnight, with analyses performed the next day. As updated information becomes available, the model can be updated and the automated experiments re-run, with new results available in a timely fashion. As questions arise about the results, a little data mining usually produces the answers. If necessary, new experiments are set up and executed to address the questions.
The following are some of the more notable results we’ve achieved from these efforts:
- We are able to routinely predict processor and bus utilization, latency, and throughput for waveform-platform combinations.
- We are able to evaluate QoS traffic shaping strategies (in a waveform) against system-level requirements.
- We are able to evaluate inter-processor bus technologies (such as RapidIO vs. Ethernet) for best fit.
- We are able to identify bottlenecks in startup and waveform operation and develop strategies for fixes.
- We are able to predict the overhead (in latency and processor utilization) of CORBA and evaluate the performance of various call strategies (such as optimized same-address-space calls.)
- We are able to re-use large portions of the models across multiple projects.
- We are able to effectively and clearly communicate the design and results to stakeholders.
- We have a performance prediction and verification methodology that satisfies our customers!
Frankly, I have been immensely pleased with what we have been able to accomplish with Foresight in assisting the design and verification of software defined radios. It just plain works! In these days when all kinds of claims are being made for languages, tools, and technologies, it’s pretty refreshing to be able to just get your work done. However, the approach and benefits are not limited to just SDR. This approach has been shown to work equally well for any complex embedded or computer-based system. Seriously, you need this for your next complex systems design!
I’ll end with a comment from one of our customers (I didn’t contribute to this project):
“I recently have been able to compare our Foresight model results with actual CNI hardware in the lab. The results from the lab are correlating nicely so far. The lab data correlates much better than I would have hoped, actually. I generally worry about microsecond to microsecond performance comparisons (since the model cannot be as complex as the actual system). Also, it’s [very difficult to get] everything configured correctly to do a fair “apples to apples” test. Generally [the test team doesn’t] have too much flexibility to set up sophisticated tests. However, we did a fairly high resolution model of our hardware (and software) components, so it tracks rather well when looking at critical latencies.”
Foresight user at Northrop Grumman, designing an SDR project.