Palisade Knowledge Base


4.17. Convergence by Standard Deviation: Always Same Number of Iterations

Applies to: @RISK 5.x–7.x

When I select "auto" for number of iterations, and my test for convergence is on standard deviation, no matter the characteristics of my model it always converges at 2200 iterations. Even if it's just a uniform distribution designated as an output, converging on the standard deviation still takes 2200 iterations. Why is that?

If I enable multiple CPU, the convergence generally takes a few hundred iterations longer. Why?

Surprising though it may be, this result is correct. It has to do with the interaction of the confidence interval for the standard deviation with @RISK's method of testing convergence.

How does @RISK test convergence? The default is to test every 100 iterations for a 95% confidence level on a tolerance of ±3%, so let's work with that. Every 100 iterations, @RISK takes the current simulated statistic — mean, standard deviation, or percentile, whichever one you chose — subtracts 3% from it to get L, adds 3% to it to get U, and then figures the cumulative probability between L and U. If

cumulative probability between L and U ≥ 95%

then you are at least 95% confident that the true value is between L and U. In other words, your mean, standard deviation, or percentile has converged at the 95% confidence level.

But how does @RISK find the cumulative probability when testing for convergence of the standard deviation? A confidence interval for standard deviation is computed from a confidence interval for variance, which uses the χ² distribution. Therefore, to test convergence of the simulated standard deviation, @RISK actually tests convergence of the variance. After n iterations, the current simulated standard deviation is s and the simulated variance is s². The bounds for variance are therefore

L = (0.97s)² = 0.9409s²    and    U = (1.03s)² = 1.0609s²

But here's the surprise: to place a candidate variance value on the χ² curve, you divide (n−1)s² by that value. Recall the requirement for convergence: the cumulative probability between L and U must be ≥ 95%. Using the χ² distribution, that translates to

cum. prob. of χ² with df=n−1 between (n−1)s²/(1.0609s²) and (n−1)s²/(0.9409s²)   ≥   95%

(L and U have traded places, because 1/1.0609 is smaller than 1/0.9409.)  The s² factors in the fractions cancel out:

cum. prob. of χ² with df=n−1 between (n−1)/1.0609 and (n−1)/0.9409   ≥   95%

This is an inequality in just one variable: n, the number of iterations. It's completely independent of the estimated standard deviation and of the underlying distribution you're simulating. (The tolerance comes into the .9409 and 1.0609 coefficients, and the confidence level is the number on the right-hand side of the inequality.)
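To see the cancellation numerically, here is a minimal sketch (not @RISK's actual code) that evaluates the χ² probability from the pre-cancellation bounds for several different values of s and gets the same answer every time. It uses the Wilson–Hilferty normal approximation to the χ² CDF so that only the Python standard library is needed; an exact CDF would give essentially the same numbers.

```python
import math

def chi2_cdf(x, k):
    """Chi-square CDF with k degrees of freedom, via the
    Wilson-Hilferty normal approximation (stdlib-only sketch)."""
    if x <= 0:
        return 0.0
    z = ((x / k) ** (1 / 3) - (1 - 2 / (9 * k))) / math.sqrt(2 / (9 * k))
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

def sd_coverage(n, s, tol=0.03):
    """Probability that the true standard deviation is within +/-tol
    of the simulated s after n iterations, computed from the chi-square
    interval BEFORE cancelling the s**2 terms."""
    k = n - 1
    lo = k * s**2 / ((1 + tol) * s) ** 2   # (n-1)s² / (1.03 s)²
    hi = k * s**2 / ((1 - tol) * s) ** 2   # (n-1)s² / (0.97 s)²
    return chi2_cdf(hi, k) - chi2_cdf(lo, k)

# The same n gives the same coverage for wildly different s values:
for s in (0.01, 1.0, 250.0):
    print(round(sd_coverage(1000, s), 6))
```

All three printed values agree, because s² divides out exactly as in the algebra above.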

Solving the inequality gives n ≥ 2140: to converge to within ±3% with 95% confidence takes at least 2140 iterations. You can do similar calculations for any other convergence tolerance or confidence level.

Why 2200 iterations, then? The default is to test for convergence every 100 iterations, and the first multiple of 100 iterations ≥ 2140 is 2200.

Why are more iterations needed with multiple CPUs? The master CPU does the convergence testing, based on iteration values it gets from the worker CPUs. If the workers are doing 1000-iteration blocks, for example, then convergence can't be tested more often than every 1000 iterations, and @RISK will report convergence at 3000 iterations, the first multiple of 1000 that is ≥ 2140.

Why doesn't this happen when means and percentiles are tested for convergence? Because the confidence intervals for means and percentiles don't involve the χ² distribution, and therefore convergence does depend on the progressive estimates of the mean or percentile. It's only for standard deviation or variance that convergence depends purely on number of iterations.
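For contrast, here is a back-of-the-envelope sketch for the mean (again not @RISK's internal code): with the usual CLT confidence interval z·s/√n, the iteration count needed for ±3% at 95% confidence depends on the coefficient of variation s/|mean| of the output, so different models converge at different iteration counts.

```python
import math

def mean_iterations(cv, tol=0.03, z95=1.959964):
    """Iterations for the running mean to converge to within +/-tol
    of itself at ~95% confidence, by the CLT interval z*s/sqrt(n).
    cv = s / |mean| of the simulated output, so the answer depends
    on the model being simulated -- unlike the standard-deviation test."""
    return math.ceil((z95 * cv / tol) ** 2)

# Uniform(0,1): mean 0.5, sd 1/sqrt(12)  ->  cv ~ 0.577
print(mean_iterations(math.sqrt(1 / 12) / 0.5))   # about 1423
# Uniform(1,2): mean 1.5, same sd        ->  cv ~ 0.192
print(mean_iterations(math.sqrt(1 / 12) / 1.5))   # about 159
```

Two uniform distributions with the same spread but different means converge at very different iteration counts, which is exactly what never happens with the standard-deviation test.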

If you have a version of @RISK 6 earlier than 6.2, you will find that convergence to 3% tolerance with 95% confidence takes 8547 iterations, not 2140. Those versions of @RISK were testing for a ±3% tolerance in variance, not standard deviation. The actual tolerance for standard deviation was therefore about 1.5% (since √1.03 ≈ 1.0149), which needed more iterations. If you have an earlier version, you may want to upgrade to the current release.
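The pre-6.2 behavior can be checked with the same kind of stdlib sketch (again using the Wilson–Hilferty approximation to the χ² CDF, so the crossover may differ slightly from the exact 8547): apply the ±3% tolerance to the variance rather than to the standard deviation, and solve for n.

```python
import math

def chi2_cdf(x, k):
    """Chi-square CDF, Wilson-Hilferty normal approximation."""
    z = ((x / k) ** (1 / 3) - (1 - 2 / (9 * k))) / math.sqrt(2 / (9 * k))
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

def var_coverage(n, tol=0.03):
    """Coverage when the +/-tol band is applied to the VARIANCE
    (the @RISK 6.0/6.1 behavior), i.e. L = 0.97 s^2, U = 1.03 s^2."""
    k = n - 1
    return chi2_cdf(k / (1 - tol), k) - chi2_cdf(k / (1 + tol), k)

n = 2
while var_coverage(n) < 0.95:
    n += 1
print(n)   # about 8547, the figure quoted above
```

The narrower effective band on the standard deviation (√1.03 ≈ 1.0149, roughly ±1.5%) is what pushes the requirement from about 2140 iterations up to about 8547.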

Last edited: 2015-10-14
