By paul ~ April 6th, 2006. Filed under: FAQ, Tips & Tricks.
We recently received the following questions from a user and felt that the discussion is useful for all. Note that the user was using v5.2.3 and was running into the Windows 2GB/application memory limit. This was subsequently solved by moving to 64-bit Windows XP where the simulation mapped approximately 3GB!
In case our memory issues cannot be resolved by modifying the 2 GB application limit, I am trying to think of other ways to reduce the memory required by our current model. I have a few questions…
1. When I start up a simulation (i.e., after the sim has completed “initialization”, but before starting the run), the task manager shows the amount of (peak) memory used by Foresight up until that point. What would make this number vary from run to run if I am running the identical model, configured (through parameters) identically? If I compress the model each time before a simulation, shouldn’t the memory usage for the application be about the same?
I agree with you and would expect that any variation would be small, provided that we’re also talking about a fresh Foresight session each time (as would occur in a batch run.) The Foresight versions prior to v5.3.1 had some TERRIBLE memory leaks that occurred if you closed and restarted the simulator panel multiple times within the same session. The fact that you are compressing the model before each simulation leads me to believe that these are in separate sessions, so that is not the explanation.
As for compressing the model, you shouldn’t need to do that more than very occasionally. On the large models I’ve been working with, I only compress in the (very rare since v5.3.1) instance where some kind of database corruption occurs. Otherwise, there’s little value in doing it beyond reducing the persistent size of the model somewhat. It does increase your run-time, though, since cached persistent structures have to be re-created after compression when you turn the simulator on. (The loss of these, particularly compiled minispec/std code, accounts for a good part of the reduction in persistent model size.) Other than that, there is no downside to compressing every time; it’s just not buying you much. You might try not compressing between runs, though, and see if that slightly reduces your memory utilization when you turn the simulator on. If there happened to be a leak in the compiler, you’d be adding that to your memory utilization.
2. Are there any typical memory hogs in a model? For example, a certain data type. We have some large two-dimensional arrays in the model whose size seems to affect the memory usage significantly, but since we cannot modify these we are looking for other ways to reduce memory requirements. How about leaving unused spec items in some of the subsystems? Could this affect memory even if these items are not part of the simulation?
Complex alternatives (unions) are memory hogs. We just fixed this for the v5.3.3 release. I’m not aware of any other types to stay away from. The internal representation of most types is very close to the ‘C’ representation and is therefore pretty efficient. If you have unused spec items that are instantiated many times (through multiple instantiations of a reusable), that could significantly increase your memory utilization. Unused spec items that are never instantiated should not be a problem, as long as they don’t contain large static structures (like a global array or something). One caveat: if I have an unused DFD sitting in a subsystem, that DFD will ‘run’ even though it isn’t instantiated elsewhere. It looks like a top-level DFD to the simulator. So any periodic sources, initialization, instantiation of reusables, strip-charts, etc. within that DFD will be activated, even though it is never used. That is an important thing to know.
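Since most types are stored close to their ‘C’ representation, a back-of-envelope calculation shows why large two-dimensional arrays dominate memory use, especially when they are instantiated through multiple copies of a reusable. This is plain arithmetic, not anything Foresight-specific; the element sizes are ordinary C sizes, not measured values:

```python
# Rough footprint of a 2-D array, assuming a C-like internal
# representation (an assumption for illustration, not a measurement).

def array_bytes(rows, cols, elem_size, copies=1):
    """Approximate memory for `copies` instances of a rows x cols array."""
    return rows * cols * elem_size * copies

# A 1000 x 1000 array of 8-byte doubles is about 8 MB on its own:
print(array_bytes(1000, 1000, 8))
# The same array repeated in 50 instantiations of a reusable is ~400 MB:
print(array_bytes(1000, 1000, 8, copies=50))
```

A few minutes with numbers like these usually identifies which arrays (or which reusable instantiations) are worth attacking first.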
3. Does the data dictionary size affect memory usage? (Some of us are not good about cleaning up the Data Dictionary.)
The Data Dictionary requires storage in each subsystem; however, it occurs only once per subsystem and shouldn’t account for much of the memory usage. One thing that REALLY helps with DD management is the #include directive documented in the v5.2.2 release notes (sections 1.2 and 1.3). This lets you ensure that each type is defined in only one place and then included where needed. Cleaning up a data dictionary then becomes as simple as deleting all of the entries (in Foresight) and re-importing the appropriate text file. This is the ONLY way to fly for large models with many subsystems; otherwise, it’s just too hard to keep them in sync.
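As a sketch of how this looks in practice (the exact directive syntax and file layout are described in the v5.2.2 release notes; the file names below are made up for illustration):

```
# subsystem_A_dd.txt -- data-dictionary import file for one subsystem.
# Instead of repeating shared type definitions here, pull them in from
# a single shared file so every subsystem stays in sync:
#include "common_types.txt"
#include "message_formats.txt"
```

The payoff is that a type changes in exactly one file, and a clean-up is just delete-all-entries followed by a re-import.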
A run-time question…
1. We have had a “sim accelerator” in our model from the very beginning – basically a square wave generator @ 50%. I believe this is necessary to prevent the sim from running at wall-clock time. Does the newer version of Foresight (we are at 5.2.3) still require this? Since some of our runs can be over 100,000 seconds, I wonder if the additional events caused by the square wave generator could be affecting run time significantly.
Yes, you will still need something in the model that creates events with a period of less than 1 second in order for the model to run at full speed. Resetting the period of your square wave generator to 1.8 would cut the number of events due to that generator almost in half (a square wave with a 1.8-second period still toggles every 0.9 seconds, so its events stay under the 1-second threshold). I really doubt that this generator accounts for a very high percentage of the number of events in the system, though, particularly if it’s just routed into a sink. Note that a strip-chart with a sample frequency greater than 1.0 (i.e., a sample period under one second) will accomplish the same thing. If you have strip-charts in the model, double-check them to make sure they aren’t performance problems. One thing I do is make a simulation run with event recording turned on, then look through the file to see where the bulk of my events are coming from. We need to create a tool to make this easier, but I usually just look at it in EMACS and grep for strings that look common. (Yes, I’m a throwback: I use cygwin on Windows so that I can run the convenient UNIX commands, like “grep EventPattern event.txt | wc -l”, which tells me the number of events that match that pattern. There’s probably a convenient Windows way to do this, but, due to laziness, I don’t know it.) Doing this can be very informative.
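For readers without cygwin, the same grep-and-count idea can be sketched in a few lines of Python. The file name and pattern strings below are examples only; substitute the path to your own event-recording output and whatever substrings look common in it:

```python
# Cross-platform equivalent of "grep EventPattern event.txt | wc -l":
# count event-file lines containing each pattern, to see where the bulk
# of the events come from.
from collections import Counter

def count_events(path, patterns):
    """Return a Counter of how many lines in `path` contain each pattern."""
    counts = Counter()
    with open(path) as f:
        for line in f:
            for p in patterns:
                if p in line:
                    counts[p] += 1
    return counts

# Example (hypothetical file and pattern names):
# for pattern, n in count_events("event.txt", ["SquareWave", "StripChart"]).most_common():
#     print(pattern, n)
```

Sorting with `most_common()` puts the chattiest event sources at the top, which is usually all you need to decide what to investigate.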
One further thing: in v5.3.3 we have added an (as-yet undocumented) option that may help users who are trying to improve the performance of their models. The enableProfiling option has been added to the foresight.ini file. The entry in foresight.ini is as follows:
# Enable model profiling timestamp output in events file
# options are: true false
enableProfiling = true
When this flag is set to true, Foresight adds a wall-clock timestamp as the first field of each line in the events output when events are recorded. This is crude, but it can be used to determine which parts of your model execution are taking the most time.
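A crude but effective way to mine that output is to look for the largest wall-clock gaps between consecutive recorded events; those gaps mark the stretches where the model spends the most real time. The sketch below assumes only that the timestamp is the first whitespace-separated field, as described above; the rest of the line format is left alone:

```python
# Report the largest wall-clock gaps between consecutive event lines,
# assuming each line starts with a numeric timestamp (the enableProfiling
# format described above; adjust the split if your fields differ).

def largest_gaps(path, top=10):
    """Return up to `top` (gap_seconds, event_line) pairs, largest first."""
    rows = []
    prev = None
    with open(path) as f:
        for line in f:
            fields = line.split()
            if not fields:
                continue
            try:
                stamp = float(fields[0])
            except ValueError:
                continue  # skip lines without a leading timestamp
            if prev is not None:
                rows.append((stamp - prev[0], prev[1].rstrip()))
            prev = (stamp, line)
    rows.sort(reverse=True)
    return rows[:top]

# Example (hypothetical file name):
# for gap, event in largest_gaps("event.txt"):
#     print(f"{gap:8.3f}s after: {event}")
```

Each reported line is the event *preceding* the gap, i.e., the last thing that happened before the slow stretch, which is usually the best clue to what was executing.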
1. Do you suspect the upgrade of large, complicated models from v5.2.3 to v5.3.1 may be a problem? I don’t recall if upgrading some of our older models to v5.2.3 involved any extra work – besides just running the DB tool.
Some great news is that the move to v5.3.x didn’t change the database format at all. As a result, upgrading is as simple as loading the model and doing a complete analysis.
2. Will I be able to install two different versions of Foresight on a single machine?
You can, but sadly the installer doesn’t make this easy. If you move to v5.3.3 or later, it’s made more difficult because you have to upgrade to Exceed 10 at the same time.