Growing Pains

Monte Carlo, that Old Faithful of financial analytics, is more widely available today than ever before. But here's the catch: its long-standing limitations are more worrisome in today's maturing markets.

By Karen Spinner

Monte Carlo analysis has been a reliable standby technique for both dealers and users of complex derivatives. Today it has two critical uses at most financial institutions. First, it is an important valuation tool for financial instruments that are either highly structured or include very complex prepayment assumptions. As a rule, if an instrument is too complex to price with closed formulae such as the Black-Scholes model and if it is resistant to lattice techniques, then it may be appropriate to turn to Monte Carlo for a price. Indeed, Monte Carlo, which at bottom means valuation by repeated random sampling, has been around for years as the pricing methodology of last resort.

Monte Carlo analysis also plays an important role as a risk management tool. Typically it is used to generate scenarios that can then be used in a value-at-risk (VAR) calculation to determine the maximum expected loss of the portfolio, at a certain level of confidence, over a specific time horizon. Monte Carlo, along with historical and parametric approaches, is one of three popular ways to generate VAR scenarios.

But ironically, Monte Carlo's near-ubiquity on trading desks and in risk management departments has coincided with a growing awareness of its limitations. These include less than perfect accuracy, the vast number of trial runs required to obtain a reasonably accurate result and the length of time it can take to process, particularly in the case of large portfolios.

1001 uses for Monte Carlo

Monte Carlo is, of course, an extremely flexible and useful technique. It is the pricing court of last resort, and to date it remains the pricing methodology of choice for CMOs and related instruments. Although most experts recommend that Monte Carlo analysis be used only when no other, more precise methodology is available, it can provide a quick-and-dirty price estimate for most instruments. Bob Geske, vice president at C*ATS, says that Monte Carlo is particularly useful in cases where more than one deterministic solution exists and there is disagreement over which solution is more correct. It has also caught on as a method for determining probable portfolio variance without resorting to either pure historical data or a parametric assumption that market factors are normally distributed.

And today Monte Carlo analysis, the derivatives valuation jack-of-all-trades, is easier and less expensive to obtain than ever before. It is available as a standard feature in many sophisticated spreadsheet add-ins. Says George Williams, a principal at Kalotay and Associates, a consultancy which specializes in the development of sophisticated pricing models for interest rate-based instruments: "The temptation of Monte Carlo is that it is so powerful, yet so easy to program. You can whip up a Monte Carlo generator in a single afternoon that can give you a quick and very reasonable price estimate of some very complex financial instruments."

Finally, in a way Monte Carlo simulation is the mother of all models. Russel Caflisch, professor of mathematics at the University of California, Los Angeles, explains that Monte Carlo is an extremely useful tool for validating new pricing models under a multitude of "random" market conditions.

The drawbacks

But just as Monte Carlo has definite strengths, the technique also possesses some considerable weaknesses which, perhaps, have become more significant now that the technique is so widely used. Here are some of its well-known limitations:

General inaccuracy. As a valuation tool, Monte Carlo typically works as follows. Start by generating random market factors, such as interest rates and currencies, based on their historical distributions. Feed each set of randomly generated data into a model designed to predict the behavior of the instrument in question as these market factors change; the model spits out a price. This cycle is repeated until a statistical distribution of prices has been created. The mean of this distribution is then taken as the instrument's approximate price.

The accuracy of the results will depend on the number of times the simulation is run, but there is no linear relationship between the number of Monte Carlo sequences performed and the degree of pricing accuracy achieved. Instead, the degree of error in any Monte Carlo simulation is inversely proportional to the square root of the number of iterations used. This means that increasing the number of iterations from 100 to 1,000 improves accuracy not tenfold but only by a factor of about three. And importantly, the larger the number of iterations already run, the larger the number of additional runs needed to obtain substantial improvements in accuracy.
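This pricing loop and its square-root convergence can be sketched in a few lines. The example below, with purely illustrative parameters, prices a plain European call under lognormal dynamics; the instrument and numbers are assumptions for demonstration, not a claim about how any firm mentioned here runs its models:

```python
import math
import random

def mc_european_call(s0, k, r, sigma, t, n_paths, seed=0):
    """Price a European call by sampling terminal prices under
    geometric Brownian motion. Returns (estimate, standard error)."""
    rng = random.Random(seed)
    drift = (r - 0.5 * sigma**2) * t
    vol = sigma * math.sqrt(t)
    disc = math.exp(-r * t)
    total = total_sq = 0.0
    for _ in range(n_paths):
        z = rng.gauss(0.0, 1.0)
        payoff = max(s0 * math.exp(drift + vol * z) - k, 0.0)
        total += payoff
        total_sq += payoff * payoff
    mean = total / n_paths
    var = total_sq / n_paths - mean * mean
    return disc * mean, disc * math.sqrt(var / n_paths)

# Error shrinks like 1/sqrt(N): 100x more paths, only ~10x less error.
for n in (1_000, 100_000):
    price, err = mc_european_call(100, 100, 0.05, 0.2, 1.0, n)
    print(n, round(price, 3), round(err, 3))
```

Running the loop confirms the article's point: the standard error reported alongside the price falls by roughly a factor of ten, not one hundred, as the path count rises a hundredfold.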

And Williams points out that there is no such thing as a perfectly accurate Monte Carlo-based price. He says: "Monte Carlo cannot be perfectly calibrated to today's markets. Consider the example of three option-free cash securities: a floating rate note, a swap and a fixed rate bond. From a structural perspective, the note and the swap can be combined to create the bond and, if you have available market prices for the note and the bond, you can solve the following for the price of the swap:

Price(swap) = Price(fixed rate bond) - Price(floating rate note)
"Suppose," says Williams, "that the market tells me that the bond is a par bond. Suppose further that the floating rate note resets at the fair short-term rate so that it is also priced at par. In this case the value of the swap should be zero, but Monte Carlo will provide a slighly different answer. Of course, Monte Carlo analysis will misprice the bond and the note as well so that the equation cited above will work out under Monte Carlo, although the individual terms will be wrong."

The greatest danger, concludes Williams, would arise from mixing Monte Carlo prices with prices obtained through another technique, since the offsetting errors would then no longer cancel.

C*ATS's Geske, however, explains that this calibration problem is not limited to Monte Carlo analysis. Many instruments rely on "implied" market factors. For example, in the case of options, volatilities may be calculated by obtaining current market prices and then solving Black-Scholes or a similar model for the implied volatility. If the equation were solved for the "implied price" of the swap above and then a deterministic equation were used to solve for the price of the swap, the answers would be slightly different, as in the case of Monte Carlo.

Calibration errors can be more severe in the case of Monte Carlo. If, for instance, the closed-form valuation formula included an implied forward rate, then the valuation may be off to the extent that the implied forward rate is off, says Geske. Likewise, if a binomial or trinomial model is used, where implied forward rates are assigned to a finite number of nodes, the cumulative discrepancies in those forward rates will have a slightly larger impact on the final price. But under Monte Carlo analysis, the imperfect forward rate is used thousands of times, and depending on the extent to which the discrepancies cancel one another out, the effect on the final price can be more significant still.

Processing speed. In order to get extremely accurate, though by no means perfect, results from Monte Carlo, it is necessary to run a lot of simulations. Consider the findings of Jonathan Berk and Richard Roll of Goldman Sachs, who required 10,000 iterations in order to value one adjustable rate mortgage to within a single standard deviation. Faced with a whole book full of adjustable rate mortgages, it doesn't take a systems expert to see that this process will take a very, very long time.

The same basic principle holds true when using Monte Carlo analysis to determine the likely variance of a portfolio. According to Dan Rosen, a senior financial engineer for software firm Algorithmics, one of the most critical drawbacks to Monte Carlo as used on a portfolio basis is processing speed. Consider running a value-at-risk calculation on a portfolio of 20,000 swaps using 5,000 randomly selected sets of market factors. This kind of task, says Rosen, becomes very difficult to manage in real time.

Of course, the amount of time such a problem takes to process depends on the market factors chosen. For example, if on every Monte Carlo pass multifactor yield curves must be generated in several currencies, then the problem will be far more complex than if a concerted attempt is made to simplify these market factors, thus reducing the dimensionality. And C*ATS's Geske stresses that instruments within a portfolio being analyzed for risk management purposes would necessarily have to be valued by a deterministic approach: "Monte Carlo-within-Monte Carlo" is not viable.

"Specific" inaccuracy. While Monte Carlo simulation is in a sense "always" slightly inaccurate, there are some instruments for which Monte Carlo cannot produce a meaningful result. Consider the classic case of the American-style option.

An American-style option can be exercised at any time during its life, and so valuation must include some sort of early-exercise model to predict when, exactly, the investor is most likely to exercise the option. Says Williams, "To decide whether to exercise an option, one must know two things: the benefit realized and the value forfeited by exercise. For European options, this is trivial. But American options have an incremental time value beyond their intrinsic exercise value, and an option should only be exercised when its time value is dominated by the exercise value, say, at least ten to one."

The problem here, according to Williams, is that Monte Carlo can only approximate the incremental time value left on the option at exercise. He says, "Monte Carlo works okay for discounting known cash flows, as long as it is used consistently; however, it should not be used to make the decisions required in managing options." Emmanuel Fruchard, head of financial engineering for Summit Software, concurs: "Monte Carlo has problems with American options because the method is highly dependent on users' assumptions about the circumstances under which the option will be exercised, and thus provides a less than optimal tool for decision-making."

Of course Monte Carlo would never be the valuation model of choice for American-style options. C*ATS's Geske says, "In most cases, everyone agrees on the early-exercise function and, generally speaking, it is not all that significant." He stresses that Monte Carlo is typically used in cases where it is impossible to accurately predict a prepayment function, as in the case of CMOs, where the forces of "QFRD" (Quit, Fire, Retire and Die) make such a prediction too complex to be represented by simple deterministic formulae.

And they're getting worse

The classic drawbacks to Monte Carlo analysis described above are certainly no well-kept secret. Financial engineers have known about these issues for years. The reason these drawbacks have become more significant today is that the derivatives markets are maturing and users are becoming more technologically proficient. Consider the following:

Tiny spreads. Over the past five or so years, the derivatives markets have to a large extent become commoditized. This means that swaps, for example, which used to yield 80 or 100 basis points in profits to the typical dealer, may now only yield, say, five basis points. And this drastic reduction in profit margins has also affected more complex instruments. Says Williams: "Monte Carlo pricing is no longer an adequate approximation now that the markets have tightened. Even small variations in price are much more critical now."

In other words, the calibration errors described earlier, which can be more severe in Monte Carlo analysis, become more critical when there is little margin for error. And although it is possible to eventually reduce the error in Monte Carlo simulations to a virtually negligible level, the amount of work required to do this can be prohibitive.

Operational risk. The widespread availability of Monte Carlo-capable software has increased the probability that it may be used improperly. A considerable danger exists that systems may not provide a full, obvious disclosure of Monte Carlo's limitations. Dan van Deventer, president of Japan-based Kamakura, a risk management consulting firm, provides the following scenario.

Consider an asset/liability manager who regularly marks his balance sheet to market using Monte Carlo analysis. Let's say he runs 200 iterations and his balance sheet is valued at 100. Any competently written software will inform him of his sampling error, which in this case might be expressed as a two-standard-deviation band around the estimated mean of 100, say, a range from 98 to 102. However, if the software does not display this fact, then the asset manager could blithely assume his valuation is more accurate than it really is.

This kind of basic misconception can cause problems with hedging decisions. For example, many risk managers use the concept of delta in order to determine appropriate hedges for their exposures. "Delta" is the projected change in a security's price in response to a change in a market factor. Using Monte Carlo to arrive at the delta for a portfolio can be particularly dangerous because of sampling error, says van Deventer. It is very easy to get a "long" delta where, in reality, the position is "short" delta, and vice versa. On the basis of this faulty information, a risk manager can easily enter into a hedge that exacerbates the problem rather than ameliorating it.

Solutions, alternatives for valuation...

Given the potential pitfalls of Monte Carlo, coupled with a market characterized by constantly narrowing margins, it is not surprising that many researchers are working on alternatives to Monte Carlo in the pricing arena. Consider the following popular alternatives:

Finite difference methods. According to van Deventer, one promising avenue for alternatives to Monte Carlo is to be found in "finite difference" methods, particularly for products that incorporate American-style options. This process is very flexible and is consistent with a wide variety of statistical market factor distributions, says van Deventer. Thus, using finite difference methods, a risk manager does not have to assume that market factors are normally or lognormally distributed.

Multifactor models. From simple one-factor binomial models to high-order polynomial approximations, node-based models are considered the second-best way to value derivative instruments. Typically, these methods involve valuing the instrument in question at a finite number of nodes to which predetermined market factors are assigned.

Of these techniques, the simplest, and therefore the most efficient, is the one-factor model. Sometimes known as "recursive," "lattice" or "bushy tree" techniques, these models track one critical market factor from the present to the instrument's maturity and back again.

One-factor models rely on either binomial or trinomial trees. A binomial tree is a discrete time model that describes the movement of a single random variable whose movement at each "node" in the tree can be expressed as an upward or downward movement with a known probability, explains van Deventer. Since each node has two branches, it is simple enough to predict the values of all the nodes on the tree. To price a relatively simple instrument, whose price is largely derived from a single factor, a manager needs to travel "up" the tree, into the future, to generate market prices for each node, and back "down" the tree to arrive at values for the derivative instrument at each node. Probabilities are then used to arrive at a final price.
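The up-the-tree, down-the-tree procedure can be sketched compactly. The example below is a standard Cox-Ross-Rubinstein binomial tree (a common construction, though not the only one) applied to an American put with illustrative parameters; the early-exercise check at every node during the backward pass is precisely the decision the article says plain Monte Carlo handles poorly:

```python
import math

def binomial_american_put(s0, k, r, sigma, t, steps):
    """Cox-Ross-Rubinstein binomial tree for an American put:
    build terminal payoffs 'up' the tree, then roll 'down' by
    backward induction, checking early exercise at each node."""
    dt = t / steps
    u = math.exp(sigma * math.sqrt(dt))   # up-move factor
    d = 1.0 / u                           # down-move factor
    disc = math.exp(-r * dt)
    p = (math.exp(r * dt) - d) / (u - d)  # risk-neutral up probability
    # option values at maturity (j = number of up moves)
    values = [max(k - s0 * u**j * d**(steps - j), 0.0)
              for j in range(steps + 1)]
    # backward induction with the early-exercise check
    for i in range(steps - 1, -1, -1):
        for j in range(i + 1):
            cont = disc * (p * values[j + 1] + (1 - p) * values[j])
            exercise = max(k - s0 * u**j * d**(i - j), 0.0)
            values[j] = max(cont, exercise)
    return values[0]

print(round(binomial_american_put(100, 100, 0.05, 0.2, 1.0, 500), 4))
```

Because the exercise decision is made node by node against the continuation value, the tree never needs a user-supplied exercise rule, which is the structural advantage over a simulated-path approach.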

Trinomial trees, unsurprisingly, have three branches: one representing "up," one representing "down" and one representing "stays the same." In the case of interest rates, three parameters are typically used: short-term rate bias, known as "drift"; volatility; and mean reversion, which represents the probability that short-term interest rates will be pulled back to a historical mean value.

These one-factor models, however, often have substantial drawbacks. For example, while a trinomial tree might do well in pricing a relatively simple swap or bond, what about a convertible bond or a currency swap? Many shops are at work on effective two-factor, bushy-tree-type models that can handle multiple market factors, but these tend to be quite complex and arduous to program.

Beyond the one- and two-factor models are high-order polynomial or "finite difference" techniques which, although more complex than the lattice techniques, still avoid the randomness associated with Monte Carlo. These methods use grids from which solutions to the partial differential equation that describes a particular instrument's price can be obtained.

These methods do require that a distribution pattern be chosen for the market factors. Says Geske, "Your price will be correct only if you have chosen the optimal statistical distribution."

Trades which are exposed to more than two risk factors cannot be effectively priced using a lattice-building technique, counters Fruchard of Summit Software. For example, a European swaption on a currency swap would involve exposure to multiple interest rates, yield curves and currencies. With this many dimensions, any lattice-type framework would have too many nodes to evaluate efficiently.

Still, Williams explains that recursive and finite-difference techniques can replace Monte Carlo as the pricing methodology of choice for many sorts of path dependent securities, including index amortizing swaps and adjustable-rate mortgages. The one sticky point, he explains, comes with the path dependent nature of CMOs. The need to model the interactions among the multiple tranches of the underlying pool of mortgages, coupled with prepayment models driven in part by some moving average of rates, leads to computational demands that seem to render recursive valuation unsuitable to the task. "But," he says, "We're working on it."

"Deterministic," "quasi-random," "low discrepancy" sequences. Another technique-which is essentially a "tweaked" version of Monte Carlo analysis-is known variously as "deterministic simulation," "quasi-random selection" and "low-discrepancy sequences." Basically, this technique takes the "random" out of Monte Carlo by providing rules which guide how each market factor is generated. Quasi-random selection can correct for the bunching of market factors which frequently happens with classic Monte Carlo, says UCLA's Caflisch. (See chart.)

Randomly generated numbers tend to clump, and vast numbers of iterations may be required to correct for that clumping. Quasi-random sequencing provides a smoother distribution much earlier on, thus reducing the number of iterations required for a reasonably accurate measure. According to Caflisch, the error in a quasi-random simulation is roughly inversely proportional to the number of sequences run. Relative to Monte Carlo, where error is inversely related to the square root of the total number of iterations, quasi-random simulation can provide an acceptable answer much faster.
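One of the simplest low-discrepancy constructions, the base-2 van der Corput sequence, shows the effect directly. The toy problem below (estimating the mean of x squared on the unit interval, a stand-in for a one-dimensional pricing integral, with the exact answer 1/3 known in advance) is an illustration, not a claim about any vendor's implementation:

```python
import random

def van_der_corput(n, base=2):
    """First n points of the base-b van der Corput low-discrepancy
    sequence: the radical inverse of 1..n, which fills [0, 1)
    evenly instead of clumping the way pseudorandom draws do."""
    points = []
    for i in range(1, n + 1):
        x, denom, k = 0.0, 1.0, i
        while k > 0:
            denom *= base
            x += (k % base) / denom
            k //= base
        points.append(x)
    return points

def integrate(points):
    # sample-mean estimate of the integral of x^2 on [0, 1]; exact: 1/3
    return sum(x * x for x in points) / len(points)

n = 4096
pseudo = [random.Random(1).random() for _ in range(n)]
quasi = van_der_corput(n)
print(abs(integrate(pseudo) - 1 / 3), abs(integrate(quasi) - 1 / 3))
```

With the same number of points, the quasi-random estimate lands within a few parts in ten thousand of the true value, while the pseudorandom error is typically an order of magnitude larger, mirroring the convergence-rate comparison Caflisch describes.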

And, in fact, many firms are looking into the potential applications of deterministic simulation. Toronto-based Algorithmics, which offers sophisticated models and risk management technology, is exploring potential uses for low discrepancy sequences. Algorithmics is conducting a joint project with University of Toronto's RiskLab to identify where the use of nonrandom sequences will likely be most productive.

But while quasi-random simulation sounds great in theory, it has significant drawbacks. Remember IBM's announcement last year that a new breakthrough methodology would revolutionize how banks priced their path-dependent securities? (See Derivatives Strategy, December 1995.) Despite the fanfare, Big Blue is still bogged down in the testing phase. Although quasi-random methods can dramatically speed the pricing of certain relatively simple CMOs, the technique tends to break down when used for more complex instruments.

This type of deterministic simulation has two key drawbacks. First, the dramatic improvements in processing speed that quasi-random methods deliver on relatively simple problems do not carry over to complex problems, because complex problems tend to have a high "dimensionality," says Caflisch. Dimensionality refers to the number of variables which must be independently simulated during each "pass." Caflisch explains that, in a quasi-random sequence, it is necessary to create a separate set of rules for each dimension. The rules specified for each dimension must be independent of the rules specified for the other dimensions and, for problems with hundreds of dimensions, this can be quite challenging. The rule of thumb, then, is that the more complex the problem, the less improvement in speed quasi-random methods will provide over standard Monte Carlo.

Furthermore, it is very difficult to pin down, exactly, the degree of error present in quasi-random-generated pricing results. Caflisch explains that while the exact standard deviation associated with a Monte Carlo sequence can be determined, that is not possible with quasi-random. This point alone is enough to make most bankers a little edgy.

"Nevertheless," says Caflisch, "it is important to keep in mind that while quasi-random sequencing is not appropriate for all problems, they have a great deal of potential for pricing both simple and moderately complex instruments. The trick will be to isolate the most critical variables in any problem and then to minimize the total number of variables that must be processed."

Solutions and alternatives for risk management

Monte Carlo analysis is just one of three techniques that can be used to generate scenarios in the context of calculating value-at-risk. Of the two alternative methods, Geske says that historical simulation is used more frequently as a supplement to other forms of analysis because it is widely recognized that history is not necessarily the best predictor of future market behavior. Furthermore, utilizing large quantities of historical data does not necessarily yield great advantages in terms of processing time over Monte Carlo methods. The other alternative, parametric simulation, assumes that market factors are normally distributed and uses correlation and volatility to predict market prices. The biggest advantage of this method is the speed with which a value can be obtained. Data for this simulation is easily available from several sources, including the RiskMetrics web site.

Monte Carlo, because it neither assumes a particular distribution of market forces nor relies on historical data, is considered by many to be a more rigorous test of a portfolio's likely future behavior, given uncertain future market conditions. But the biggest drawback remains the time it can take to process a large portfolio under Monte Carlo.
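The scenario-to-quantile mechanics behind a Monte Carlo VAR number can be sketched for a single position; all parameters below are illustrative, and a real calculation would revalue an entire portfolio, not one asset, under each scenario:

```python
import math
import random

def monte_carlo_var(position_value, mu, sigma, horizon,
                    n_scen, confidence=0.99, seed=7):
    """One-asset sketch of Monte Carlo VAR: simulate horizon
    returns under lognormal dynamics, revalue the position in
    each scenario, and read off the loss at the chosen quantile."""
    rng = random.Random(seed)
    losses = []
    for _ in range(n_scen):
        ret = math.exp((mu - 0.5 * sigma**2) * horizon
                       + sigma * math.sqrt(horizon) * rng.gauss(0, 1)) - 1.0
        losses.append(-position_value * ret)
    losses.sort()
    return losses[int(confidence * n_scen)]

# 99% 10-day VAR of a $1m equity position at 20% annualized
# volatility (hypothetical numbers, zero expected drift)
print(round(monte_carlo_var(1_000_000, 0.0, 0.20, 10 / 252, 20_000)))
```

The expensive step in practice is not the quantile at the end but the revaluation inside the loop, which is why the compression techniques described next aim at shrinking either the portfolio or the scenario space.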

The goal of many researchers then is not to replace Monte Carlo as a technique for generating risk scenarios, but rather to find new ways of representing the portfolio and other market factors so they can be easily processed through Monte Carlo.

There are two relatively accessible techniques that could be used to make Monte Carlo more efficient as an engine for VAR and other portfolio-wide analytics. The first is portfolio compression, wherein an index is created which imitates the behavior of a very large portfolio. This portfolio replication can be accomplished using, among other tools, Algorithmics' patented scenario optimization technique. Rosen explains that the replicated portfolio need not behave like the large original portfolio forever and under all conditions. Instead, it should imitate the portfolio under a small, representative set of likely states over the time horizon over which VAR is being calculated.

Algorithmics already supports a limited use of this technique. Offline, the larger portfolio is compressed over a small set of scenarios. Then the compressed portfolio, which consists of a small number of simple instruments, can be used for intra-day calculations, even if some market conditions change. This way the replicated portfolio captures the behavior of complex instruments without incorporating their pricing complexities. For example, the changes in the value of exotic and path-dependent options may be captured by an optimal combination of simpler plain vanilla instruments.
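The flavor of portfolio replication can be conveyed with a generic least-squares fit over scenarios. To be clear, this is a stand-in for the idea, not Algorithmics' patented scenario optimization technique, and the bond and swap scenario values below are invented for the illustration:

```python
def replicate(target, candidates):
    """Least-squares weights so that a few candidate instruments'
    scenario values best match the target portfolio's, solved via
    the normal equations. Pure-Python sketch for small problems."""
    k, m = len(candidates), len(target)
    # build A^T A and A^T b from the scenario value matrix
    ata = [[sum(candidates[i][s] * candidates[j][s] for s in range(m))
            for j in range(k)] for i in range(k)]
    atb = [sum(candidates[i][s] * target[s] for s in range(m))
           for i in range(k)]
    # Gaussian elimination (no pivoting; fine for this tiny example)
    for col in range(k):
        for row in range(col + 1, k):
            f = ata[row][col] / ata[col][col]
            for c in range(col, k):
                ata[row][c] -= f * ata[col][c]
            atb[row] -= f * atb[col]
    weights = [0.0] * k
    for row in range(k - 1, -1, -1):
        s = atb[row] - sum(ata[row][c] * weights[c]
                           for c in range(row + 1, k))
        weights[row] = s / ata[row][row]
    return weights

# Hypothetical book that is exactly 2 bonds + 5 swaps, so the fit
# over four scenarios should recover the weights (2, 5)
bond = [101.0, 99.5, 100.2, 98.7]   # bond value in each scenario
swap = [0.4, -0.6, 0.1, -1.2]       # swap value in the same scenarios
book = [2 * b + 5 * s for b, s in zip(bond, swap)]
print(replicate(book, [bond, swap]))
```

Once the weights are found offline, intra-day calculations can run against the handful of simple instruments rather than the full book, which is the speed gain Rosen describes.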

The second method is to simplify or compress the horizon of risk factors over which Monte Carlo sampling is performed prior to running the simulation. This can be accomplished by applying a transformation process that directly incorporates information on the target portfolio. The result is a more focused sampling process and a faster valuation of the portfolio.

This technique is similar to traditional principal component analysis (PCA) in some respects. But while standard PCA applied to scenario generation would focus on capturing the variability of the risk factors, the components obtained through the compression technique are optimally chosen to reflect the portfolio's behavior. Moreover, the resulting functional relationship between the portfolio and the new risk factors is much simpler. Although this method requires some analytical work, the combination of a smaller simulation space and a simpler valuation can result in a considerable speed enhancement over standard Monte Carlo.
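The PCA baseline against which this compression technique is compared can itself be sketched with power iteration, which extracts the single factor explaining the most scenario variance. The covariance matrix below is a toy assumption (two highly correlated rates plus a weak third factor), not data from any firm named here:

```python
def top_component(cov, iters=200):
    """Power iteration: leading eigenvalue and eigenvector of a
    covariance matrix, i.e. the dominant risk factor. Repeatedly
    applying the matrix and renormalizing converges to the
    direction of greatest variance."""
    n = len(cov)
    v = [1.0] * n
    for _ in range(iters):
        w = [sum(cov[i][j] * v[j] for j in range(n)) for i in range(n)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    # Rayleigh quotient gives the corresponding eigenvalue
    lam = sum(v[i] * sum(cov[i][j] * v[j] for j in range(n))
              for i in range(n))
    return lam, v

# Toy 3-factor covariance: factors 1 and 2 move almost together,
# factor 3 is small and nearly independent
cov = [[1.0, 0.9, 0.1],
       [0.9, 1.0, 0.1],
       [0.1, 0.1, 0.2]]
lam, v = top_component(cov)
print(round(lam, 3), [round(x, 3) for x in v])
```

In this toy case one component carries most of the variance, so sampling along it alone already captures the bulk of the risk; the portfolio-aware compression described above goes further by choosing components for how the portfolio responds, not merely for factor variance.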

Pulling it all together...

So, despite the growing pains, Monte Carlo analysis remains a basic building block of financial analytics. And as pricing technology becomes increasingly sophisticated, Monte Carlo's most important function may be as a risk management tool and a mechanism for validating new, more self-contained models.

Dos and Don'ts

Do:

  • Use Monte Carlo to price instruments for which no other method is readily available
  • Make sure you know how accurate your Monte Carlo analysis really is
  • Select software that provides clear, easy-to-understand information on accuracy
  • Use Monte Carlo to validate pricing models
  • Use Monte Carlo to obtain portfolio-wide value at risk

Don't:

  • Price instruments with Monte Carlo if other more accurate methods are available
  • Price instruments with Monte Carlo "because it comes with my software package"
  • Cut down on the number of iterations to save time
  • Use Monte Carlo to price instruments with complex optionality without considering the dangers