Pricing Weather Derivatives

Izzy Nelken, president of Super Computer Consulting Inc., examines various methods for pricing weather derivatives and suggests an alternative.

The weather derivatives market has never been hotter. Some 1,600 over-the-counter deals worth approximately $3.5 billion have been done in the United States so far, and the Chicago Mercantile Exchange recently began listing these products.

But pricing these deals—which usually take the form of options on degree days—is tricky. Black-Scholes-Merton, the standard option-pricing methodology, is based on the notion of continuous hedging. This works well when pricing options on currencies, stocks, commodities or other fungible assets that can be traded in the spot market. But in weather derivatives, the underlying is not traded. "You can't buy a sunny day,” goes the old saying.

Burn analysis

The insurance industry uses an approach called burn analysis that's useful in the weather markets. Burn analysis asks, in effect, "What would we have paid out had we sold a similar put option every year for the past 50 years?” There are six steps in the burn analysis process. First, one must collect the historical weather data, convert them to degree days (heating degree days [HDD] or cooling degree days [CDD]), and make some corrections. Then, for every year in the past, one must determine what the option would have paid out, find the average of these payout amounts and discount back to the settlement date.
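The six steps can be sketched in a few lines of code. This is a minimal illustration, not a production pricer: the temperature history, strike, tick size and discounting convention below are all hypothetical, and the correction step is assumed to have been applied to the data before it is passed in.

```python
import statistics

def burn_analysis_price(daily_temps_by_year, strike_hdd, tick, rate,
                        years_to_settle, is_call=True):
    """Price a degree-day option by burn analysis.

    daily_temps_by_year: dict mapping year -> daily average temperatures
    for the contract period (hypothetical, already scrubbed and corrected).
    """
    payouts = []
    for temps in daily_temps_by_year.values():
        # Step 2: convert temperatures to HDDs (base 65 degrees F).
        hdd = sum(max(0.0, 65.0 - t) for t in temps)
        # Step 3 (corrections such as trend adjustment) is assumed done upstream.
        # Step 4: what would the option have paid out that year?
        intrinsic = (hdd - strike_hdd) if is_call else (strike_hdd - hdd)
        payouts.append(max(0.0, intrinsic) * tick)
    # Steps 5-6: average the historical payouts, discount to settlement.
    return statistics.mean(payouts) / (1 + rate) ** years_to_settle

# Two hypothetical winters of daily temperatures for a three-day period:
history = {1998: [50, 52, 48], 1999: [40, 42, 44]}
price = burn_analysis_price(history, strike_hdd=50, tick=1000,
                            rate=0.05, years_to_settle=0.25)
```

With these invented inputs, only the colder 1999 winter would have triggered a payout, and the price is the discounted average of the two historical payouts.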

The most difficult steps in the process are the first and third. Collecting historical data can be tricky. While there are Internet sites with downloadable historical weather information for the United States, obtaining historical weather data for the United Kingdom and the rest of Europe is quite costly. Even when the data are available, there are missing data, gaps and errors. The historical data must be "scrubbed” before they are used for pricing.

The correction step, meanwhile, presents its own problems. Consider the following examples:

  • The period in question is a leap year, so there are more days in the period from November 1, 1999, to March 31, 2000, than there are in the corresponding period the year before.
  • The weather station may have had to be moved as a result of construction, or may have been moved from the sun to the shade.
  • How many years of historical data should one consider? Ten years? Twenty? Many cities exhibit the "urban island effect,” in which, as a result of heavy industrial activity, construction and pollution, the weather gradually grows warmer over time. In some urban centers, it is possible to detect warming trends in the weather. These trends must be accounted for when pricing the option.
  • Sometimes extreme weather patterns such as El Niño and La Niña occur. The pricing of an option in an El Niño year is different from the pricing in a year that is not.

It is possible to resolve these problems in a reasonable manner. A weather trend may be corrected for by observing the average HDDs for the past 10 years and comparing it with the corresponding trailing 10-year average of the HDDs for each of the years in the past. For example, assume that

  • For the year 1988, in the period from November 1, 1988, to March 1, 1989, the HDD count was 5,050.
  • The average of the HDD counts for the 10 years from 1989 to 1998 was 5,000.
  • The average of the HDD counts for the 10 years preceding and including 1988 was 5,020.

When we compare these numbers, we find that the HDDs have dropped from 5,020 to 5,000. This is consistent with a warming of the weather. In this case, it may be reasonable to shift the data relating to 1988. A linear shift would be:

Shifted HDD = Observed HDD + last average - previous average = 5050 + (5000-5020) = 5030

It is also possible to find reasonable corrections for the other effects.
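The trend correction above is a one-line calculation once the trailing averages are in hand. A sketch, taking the 10-year window as given:

```python
def shift_hdd(observed_hdd, last_avg, previous_avg):
    """Linear trend correction: shift an old observation by the change
    in the trailing 10-year average HDD count."""
    return observed_hdd + (last_avg - previous_avg)

def trailing_average(hdd_by_year, end_year, window=10):
    """Average HDD count for the `window` years ending with end_year."""
    years = range(end_year - window + 1, end_year + 1)
    return sum(hdd_by_year[y] for y in years) / window

# The 1988 example: the 10-year average fell from 5,020 to 5,000,
# so the 1988 observation of 5,050 is shifted down to 5,030.
shifted = shift_hdd(5050, last_avg=5000, previous_avg=5020)
```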

Most market participants use some sort of burn analysis in computing the fair value of the option. These degree day-based models are simple to construct. All that is required is a good source of historical data.

A flaw

There is, however, a serious flaw in the degree day-based models, which manifests itself most profoundly in periods when temperatures hover around 65 degrees. In most areas of the United States, these are the spring and autumn seasons—the so-called shoulder months.

To observe this flaw, look at a simple hypothetical example. Consider an option whose period is three days: February 3, 4 and 5, 2000. We observe the historical weather in City A:

        Feb 3   Feb 4   Feb 5
  1996     64      64      64
  1997     66      66      66
  1998     64      64      64
  1999     67      67      67

We also observe the historical weather in City B:
        Feb 3   Feb 4   Feb 5
  1996     64      64      64
  1997     96      96      96
  1998     64      64      64
  1999     97      97      97

Obviously these two cities exhibit different weather patterns. We would not want to sell a weather derivative on City A for the same price as a weather derivative on City B. Note, however, that any degree day-based model would not be able to distinguish between these two cities. The HDDs generated are exactly the same for both cities: 3 for 1996, 0 for 1997, 3 for 1998 and 0 for 1999—clearly a problem.
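A few lines of code make the aliasing explicit: any model that sees only the HDD counts receives identical inputs for the two cities.

```python
def period_hdd(daily_temps, base=65.0):
    """HDD count for a period: sum of daily shortfalls below the base."""
    return sum(max(0.0, base - t) for t in daily_temps)

city_a = {1996: [64, 64, 64], 1997: [66, 66, 66],
          1998: [64, 64, 64], 1999: [67, 67, 67]}
city_b = {1996: [64, 64, 64], 1997: [96, 96, 96],
          1998: [64, 64, 64], 1999: [97, 97, 97]}

hdd_a = {year: period_hdd(temps) for year, temps in city_a.items()}
hdd_b = {year: period_hdd(temps) for year, temps in city_b.items()}
# Both cities collapse to the same degree-day history: 3, 0, 3, 0.
```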

Another flaw

What is the strike of a zero-cost swap? Using burn-rate analysis, the answer to this question depends on the maximal payout assumption. This seems counterintuitive, since we would expect the strike of a zero-cost swap to be the same regardless of the maximal payout amount.

For example, consider a swap in which we are long HDD in Chicago. The period is November 1, 1999, to March 31, 2000. Assume that there is a $10,000 payment per HDD and that the maximal payout is $10 million. Taking the last 10 years of data (1989–1998), without trending or adjusting for leap years, we note that the average HDD level is 5,018.75. A swap with a strike of 5,018.75 would indeed be a zero-cost swap.

On the other hand, assume the maximal amount that can be paid out (either way) under the swap is only $1.7 million. In this case, the burn analysis gives a totally different result. A swap with a strike of 5,018.75 would actually show an average payout of negative $202,000!

If we restrict the maximal payout, then in some of the 10 years of history we would get the maximal payout of $1.7 million and in some we would pay it. The average of these payouts will not, in general, be zero. In some cases (as in our example) it would be quite different from zero. To make the average payout zero, we would have to change the strike price to 4,945.

The problem does not disappear even if we use more data. Using 30 years of historical data gives similar results. This counterintuitive result stems from the fact that we are using a burn-rate analysis and that the payout of the options is bounded.
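The effect is easy to reproduce. The sketch below uses 10 hypothetical winters whose average HDD count is 5,018.75 (invented numbers, not the actual Chicago history) and bisects for the strike that zeroes out the capped swap's average payout:

```python
def average_swap_payout(hdds, strike, tick=10_000, cap=1_700_000):
    """Average historical payout of a long-HDD swap, capped either way."""
    return sum(max(-cap, min(cap, (h - strike) * tick)) for h in hdds) / len(hdds)

def zero_cost_strike(hdds, tick=10_000, cap=1_700_000, lo=4000.0, hi=6000.0):
    """Bisect for the strike at which the capped average payout is zero.
    The average payout falls as the strike rises, so the search is monotone."""
    for _ in range(100):
        mid = (lo + hi) / 2
        if average_swap_payout(hdds, mid, tick, cap) > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# Ten hypothetical winters averaging exactly 5,018.75 HDDs:
hdds = [4700, 4800, 4850, 4900, 4950, 5050, 5100, 5200, 5300, 5337.5]

uncapped = average_swap_payout(hdds, 5018.75, cap=float("inf"))  # zero by construction
capped = average_swap_payout(hdds, 5018.75)    # negative once the cap binds
fair = zero_cost_strike(hdds)                  # below the plain 10-year average
```

With an unbounded payout, the average-HDD strike is indeed zero cost; once the $1.7 million cap binds in the extreme years, the same strike shows a nonzero average payout and the zero-cost strike shifts lower, just as in the article's example.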

Temperature-based models

More sophisticated weather derivative models are based on modeling the weather directly. These models do not suffer from the flaws mentioned above. Such models require the following steps: (1) collecting the historical weather data, (2) making some corrections, (3) creating a statistical model of the weather, (4) simulating possible weather patterns in the future, (5) calculating the payout of the option for each weather pattern, (6) finding the average of these payout amounts and (7) discounting back to the settlement date.

The fundamental difference between the two approaches is that we are building a model for the weather, not the degree days. The simulation done in step four could be performed using a Monte Carlo algorithm. Such procedures generate random numbers, which can then be used to simulate the behavior of the phenomena we are trying to model.

It is possible to price foreign exchange options using Monte Carlo. Most market participants agree that foreign exchange fluctuates according to a random walk described by a geometric Brownian motion.

The stochastic process is described by the differential equation:

dP = (r-q)P dt + v P dz
P is the price of the security (or the foreign exchange rate),
dP is the instantaneous change to the price P,
dt is an infinitesimally small unit of time,
r is the domestic interest rate of the payout currency,
q is the foreign interest rate,
v is the annualized volatility of the exchange rate, and
dz is a Wiener process, dz = w √dt, where w is drawn from a normal distribution with a mean of zero and a standard deviation of 1.

It is possible, and not too difficult, to show that an algorithm that involves running many simulations of this Monte Carlo algorithm, taking the final exchange rate on the expiration of the option, computing the payout of the option for each of the simulations, averaging these payouts and discounting the average back to the settlement date will give precisely the same results as the Black-Scholes-Merton formula.
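That equivalence is easy to check numerically. The sketch below (with made-up market inputs) prices a European call on an exchange rate both ways; since the terminal rate under geometric Brownian motion is lognormal, one large step per path suffices.

```python
import math
import random

def bsm_call(S, K, r, q, v, T):
    """Black-Scholes-Merton price of a European call on an asset
    (or currency) with continuous yield q."""
    d1 = (math.log(S / K) + (r - q + 0.5 * v * v) * T) / (v * math.sqrt(T))
    d2 = d1 - v * math.sqrt(T)
    N = lambda x: 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))
    return S * math.exp(-q * T) * N(d1) - K * math.exp(-r * T) * N(d2)

def mc_call(S, K, r, q, v, T, n_paths=100_000, seed=7):
    """Monte Carlo price: simulate the terminal rate under the GBM above,
    average the payouts and discount back to the settlement date."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_paths):
        w = rng.gauss(0.0, 1.0)          # dz = w * sqrt(dt), one step to expiry
        ST = S * math.exp((r - q - 0.5 * v * v) * T + v * math.sqrt(T) * w)
        total += max(0.0, ST - K)
    return math.exp(-r * T) * total / n_paths

# Hypothetical inputs: the two prices agree to within sampling error.
exact = bsm_call(1.50, 1.50, r=0.05, q=0.03, v=0.12, T=1.0)
approx = mc_call(1.50, 1.50, r=0.05, q=0.03, v=0.12, T=1.0)
```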

So can we use the same model for temperature?

Unfortunately not. It is possible for exchange rates or stock prices to fluctuate sharply over time. For example, many stocks have doubled their value within one year. It seems unlikely, however, that the temperature next year will be double what it was this year. We therefore must choose a different model for the weather. To model weather, we can use mean-reverting models.

Mean-reverting models

Mean-reverting models have been used extensively to model interest rates. In the United States, where interest rates are approximately 5 percent, it is unlikely that rates will be 50 percent. There are many different models of interest rates. For an excellent review of the different models, see the chapter by Kerry Back in my book Option Embedded Bonds (Irwin Professional Publishing).

To illustrate a mean-reverting model, consider the "simple Gaussian model.” The differential equation describing the model is given by:

dr = a*(b-r) dt + v dz
r is the continuously compounded instantaneous interest rate,
dr is the instantaneous change in r,
dt is an infinitesimally small unit of time,
b is the mean interest rate,
a is the speed of mean reversion,
v is the volatility, and
dz is a Wiener process based on a normal distribution with a mean of zero and a standard deviation of 1.

This is an example of a simple mean-reverting model. Intuitively, r, the instantaneous interest rate, changes by an amount equal to dr. In this model, it is assumed that interest rates will converge to some long-term mean b. If r is greater than b, then the contribution (b-r) is negative. This will tend to pull interest rates to a lower level. Similarly, if r is less than b, then the term (b-r) is positive, which will tend to pull the interest rate higher.

The term (b-r) is the "pull to the mean.” It is multiplied by a, the "speed of mean reversion,” and the product is one term in the sum representing dr. In addition, there is a random component to the short-term interest rate, represented by the product of the volatility with the Wiener process, v dz. Obviously the random component may be positive or negative.
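A one-path simulation of this model (Euler time steps, invented parameters) shows the pull in action: starting above the long-term mean b, the path drifts back toward it while the v dz term adds noise.

```python
import math
import random

def simulate_short_rate(r0, a, b, v, dt=1 / 252, n_steps=252, seed=1):
    """One path of dr = a*(b - r) dt + v dz, Euler-discretized."""
    rng = random.Random(seed)
    r, path = r0, [r0]
    for _ in range(n_steps):
        dz = rng.gauss(0.0, 1.0) * math.sqrt(dt)
        r += a * (b - r) * dt + v * dz   # pull to the mean plus random shock
        path.append(r)
    return path

# Start at 8 percent with a 5 percent long-term mean: the (b - r) term
# is negative, so the path is dragged down toward b over the year.
path = simulate_short_rate(r0=0.08, a=2.0, b=0.05, v=0.01)
```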

Mean-reverting models similar to the simple Gaussian are used to price interest rate options, such as caps, floors and swaptions. The main difficulty here lies in determining a, b and v. Obviously, once the parameters are known, it is possible to compute the prices of various derivative instruments. Calibration is the reverse process in which the market prices of the liquid derivative instruments are used to determine the parameters of the model. We are trying to determine the set of model parameters that would result in prices that are as close as possible to the market prices on a large variety of instruments.

Intuitively, calibration answers the question, "What is the set of model parameters that would result in the model matching the observed market prices or coming as close to them as possible?” Model calibration is computationally quite intensive and typically requires high-dimensional nonlinear optimization. Of course, it relies on the availability of market prices for the liquid instruments.

A model for weather

It is natural to assume a mean-reverting model for the weather. As with interest rates, it is unlikely that the temperature next year will be 10 times what it was this year. The weather model is similar to the models used in interest rate derivatives, with a few caveats:

  • The weather changes with the season. Hence we allow the mean of the weather to vary. The parameter representing the mean, b, is replaced with b(i), which represents the mean for day number i.
  • Similarly, the volatility may depend on the day in question. In many cities, the weather is more volatile in the winter than in the summer, so the volatility parameter v is changed to v(i), the volatility for a particular day.
  • By the same token, we allow the mean-reversion rate to vary. The parameter a, which represents the mean reversion rate, is allowed to change over time. The mean reversion rate for day i is represented by a(i).
  • There is a natural seasonal effect in weather. Assume that it is now spring and that the temperature today is exactly equal to its long-term mean. We may well expect that the temperature tomorrow will be slightly warmer than it is today. In other words, there is a natural "drift” to the weather.

The most important difference between interest rate derivative models and models for weather derivatives is the calibration process. Interest rate derivative models are calibrated to the market prices of liquid instruments, whereas weather derivative models are calibrated to past data. So far, an active and liquid market does not yet exist for weather derivatives. On the other hand, we have a wealth of historical weather data. The calibration process asks, "What is the set of model parameters that would have the highest probability of generating the past weather patterns?”

This is essentially a "maximum likelihood” question. Assume that the observed data is the result of a stochastic process. We are determining the parameters for which the probability of having generated the observed data is maximal.

For example, say you flip a coin 1,000 times and it comes out 900 heads and 100 tails. It could be that this is a fair coin and that this is an unlikely sequence of coin flips. On the other hand, it could be that the coin is not even and intrinsically has a much higher chance of showing heads than tails. The maximum-likelihood technique would tend to choose the second explanation.
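In code, the coin example amounts to maximizing the Bernoulli log-likelihood:

```python
import math

def log_likelihood(p, heads=900, tails=100):
    """Log-probability of observing 900 heads and 100 tails at bias p."""
    return heads * math.log(p) + tails * math.log(1.0 - p)

# The likelihood is maximized at p = heads / flips = 0.9, so maximum
# likelihood prefers "the coin is uneven" to "the coin is fair."
p_hat = 900 / 1000
```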

The weather derivatives model calibrates the model to the observed past data using a maximum-likelihood technique. Once the model parameters are determined, weather sequences are generated using a Monte Carlo process. The random sequences drive a mean-reverting model, similar to models used to price interest rate derivatives. Here, many sequences are generated, with each representing a possible future weather pattern. For each weather sequence, the payout of the option is determined. The average payout of the option under the various scenarios is deemed to be the expected payout of the option. Taking the present value of the expected payout gives us a fair-value price.
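Putting the pieces together, the pricing loop looks roughly like this. Everything here is illustrative: the seasonal parameters b(i), a(i) and v(i) would in practice come out of the maximum-likelihood calibration rather than being hard-coded, and the contract terms are invented.

```python
import random

def simulate_temps(t0, mean, speed, vol, seed):
    """One simulated temperature path under the mean-reverting model,
    with day-dependent mean b(i), reversion a(i) and volatility v(i)."""
    rng = random.Random(seed)
    T, path = t0, []
    for i in range(len(mean)):
        dz = rng.gauss(0.0, 1.0)
        T += speed[i] * (mean[i] - T) + vol[i] * dz
        path.append(T)
    return path

def price_hdd_call(t0, mean, speed, vol, strike, tick, discount, n_paths):
    """Average the option payout over simulated weather sequences,
    then take the present value of the expected payout."""
    total = 0.0
    for s in range(n_paths):
        temps = simulate_temps(t0, mean, speed, vol, seed=s)
        hdd = sum(max(0.0, 65.0 - t) for t in temps)
        total += max(0.0, hdd - strike) * tick
    return discount * total / n_paths

# A 30-day winter contract with hypothetical "calibrated" parameters:
days = 30
mean = [35.0 + 0.1 * i for i in range(days)]   # seasonal mean b(i)
speed = [0.3] * days                           # mean-reversion rate a(i)
vol = [5.0] * days                             # daily volatility v(i)
price = price_hdd_call(t0=35.0, mean=mean, speed=speed, vol=vol,
                       strike=850, tick=1000, discount=0.99, n_paths=5_000)
```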

For a more complete version of this paper, please contact the author at izzy@supercc.com or go to www.supercc.com.

New Approaches to Credit Risk Management

James Gleason, author of Risk: The New Management Imperative in Finance, highlights the importance of measuring portfolio exposure and charging for credit risk.

Financial services firms are paying increasing attention to the management of credit risk. Many are building Monte Carlo-based processes to measure presettlement exposure on a global basis, thereby capturing the benefits of portfolio offsets and netting. In addition, some firms are creating internal markets with charges for credit risk to manage presettlement risk. This market process is more effective than limits are at allocating scarce credit resources and eliminating internal arbitrage.

Measuring portfolio credit exposure

Financial firms have recently been migrating to Monte Carlo simulation methods that project the worst-case credit exposure at the portfolio level, including enforceable closeout netting agreements. This enables them to achieve significant benefits compared with the mark-to-market-plus (MTM+) methodology currently in use. MTM+ captures the potential worst-case change in exposure over each deal's remaining term. In MTM+, each deal is marked to market and an add-on representing the worst possible change in value is then calculated. This add-on calculation is based on the remaining tenor, the notional amount and the volatility of underlying factors such as interest rate, exchange rate, index rate, stock quote and so on. But the MTM+ method overstates projected exposure for a number of reasons.

Exposure profiles generated from Monte Carlo simulations reflect the natural offsets and allowable netting within the portfolio. This method typically projects much lower exposures than the MTM+ calculation. The biggest reductions occur among market dealers with large trading volume and a diverse book. For those portfolios, the average reduction in exposure is 50 percent to 60 percent, although 80 percent reductions have also been achieved. Figure 1 illustrates the difference between MTM+ and Monte Carlo simulation exposure profiles.
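The source of the reduction is easy to see in a toy portfolio. The sketch below (invented numbers, independent shocks) compares a deal-by-deal MTM+ total against a netted, simulated 95th-percentile exposure for a matched pair of swaps with one counterparty:

```python
import random

def mtm_plus(mtms, add_ons):
    """MTM+ exposure: each deal's positive mark plus its own worst-case
    add-on, summed deal by deal with no offsets."""
    return sum(max(0.0, m) + a for m, a in zip(mtms, add_ons))

def simulated_netted_exposure(mtms, vols, quantile=0.95,
                              n_sims=50_000, seed=3):
    """Monte Carlo exposure with closeout netting: shock every deal,
    net the portfolio, floor at zero, then take a high percentile."""
    rng = random.Random(seed)
    outcomes = []
    for _ in range(n_sims):
        portfolio = sum(m + v * rng.gauss(0.0, 1.0)
                        for m, v in zip(mtms, vols))
        outcomes.append(max(0.0, portfolio))
    outcomes.sort()
    return outcomes[int(quantile * n_sims)]

# Two largely offsetting swaps: MTM+ ignores the offset entirely.
mtms, vols = [10.0, -9.0], [3.0, 3.0]
gross = mtm_plus(mtms, add_ons=[5.0, 5.0])
netted = simulated_netted_exposure(mtms, vols)
```

Even with independent shocks, the netted figure comes in well under half the MTM+ number; correlated, offsetting deals widen the gap further.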

Moreover, when banks convert to measuring credit exposure with Monte Carlo simulations, they significantly increase the trading line's measured performance (return on risk capital) by reducing the capital required for credit risk. These banks are also well-positioned to reduce their regulatory-capital requirements for credit when the regulators eventually allow it.

[Figure 1: MTM+ vs. Monte Carlo Simulation Portfolio]

Of course, there are challenges to overcome. Monte Carlo simulations have enormous data requirements—all on a centralized basis. The major sets of data are transaction, customer, market and model data. Global banks are usually unable to provide these data sets as required. Obtaining timely, accurate transaction data is the biggest challenge, since many sources must be tapped. There are numerous situations in which data problems can be accommodated within local processes, often through manual adjustments, but these problems cannot be accommodated as easily in a standard global process. For example, it may take several days to capture correctly a complex, multi-legged derivatives trade in the local systems—and a local process that calculates local usage against a local limit can adapt to this shortcoming much more easily than a global process can.

The data output works on a different operational paradigm as well. Credit availability—limit minus exposure for each tenor—must be delivered promptly to all traders globally. Mechanisms to capture incremental trades and reflect them promptly in the global utilization structure must also be built. This is considerably different from the local or regional operations that are in place today.

Despite these hurdles, however, many global institutions are undertaking to capture and manage their credit exposure for trading on a global level using Monte Carlo simulations. Some are also looking past the current impediments toward trading without limits.

The trouble with credit limits

Currently, credit risk for trading is almost universally controlled through limits on exposure. This is similar to a command economy's setting of quotas. One of the outcomes in command economies is low-quality products that fail to meet market needs. Similarly, using limits for trading removes the incentive to find an appropriate product mix. Traders can use scarce credit resources on low-margin products as easily as they can on high-margin products. Not charging for credit leads to indiscriminate usage. There is nothing to stop the huge volumes of low-margin trades.

In addition, not charging for credit creates arbitrage opportunities within a firm. Traders can sometimes, with minimal effort, reduce charges for market risk by converting market risk to credit risk. This incremental credit risk is a cost to the firm, but not directly to the traders. Not charging appropriately for credit risk therefore creates incentives for traders that often are not consistent with a firm's objectives.

There are operational considerations regarding limits as well. The limit monitoring and enforcement process at most firms is creaky and operationally onerous. Typically, a report of credit limit excesses is produced overnight in each local back office. An administrator goes through the report to identify and explain the incorrect entries—usually more than half the entries. There are many sources of the erroneous entries: different naming conventions between front office and back office, limit reallocations that aren't reflected in the back-office systems, expired limits and so on. Toward the end of the trading day, traders are asked to recall and explain apparent trading excesses that occurred the prior day. But a day is a huge time frame for traders. They prefer to focus on current market conditions and are reasonably loath to recall and explain events that occurred 24 hours earlier.

Charging for credit

The solution to this problem is to drive the cost for credit down to a more granular level. There are several real cases in which credit charges, based on the marginal impact to the global portfolio, are driven to the desk level. This creates opportunities for win-win situations. In some cases, a firm can reduce both market risk and credit risk by laying off market risk with the right counterparty.

Although the process of charging for credit exposure can be implemented relatively easily, providing the requisite analytical tools is much more difficult. Traders need to see the credit cost of a deal before they do it. They need to be able to find a low-cost counterparty to whom they can lay off the risk of a pending deal. They need to know which trades will add to a counterparty's presettlement exposure and which will reduce it. Also, lawyers and credit departments need to be able to examine a global portfolio under various netting and margin/collateral schemes to determine which agreements should be negotiated with each counterparty and where negotiating resources should be deployed. All of these analytical insights, based on the marginal impact of credit risk on the global counterparty portfolio, need to be delivered in a timely way to all desktops. That's a tall order. Figure 2 illustrates the conceptual framework for pricing credit exposure for the trading process.

[Figure 2: A Conceptual Framework]

The framework needs to incorporate default probability, deal tenor, netting, collateral, and the contribution of specific deals and subportfolios to exposure. The scheme depicted charges for the marginal contribution of each deal or portfolio of deals to the overall credit exposure within various tenor buckets. The marginal contribution includes consideration for netting and offsets. The basic charge incorporates default probability and considerations for credit portfolio concentrations. As credit exposure approaches a firm's overall appetite, charges go up, discriminating toward higher-margin products. Collateral agreements can either cap the potential exposure or lower the base price.

This framework is conceptual, but many firms are moving closer to it. The first Monte Carlo systems that were developed and implemented did not address the need to allocate scarce credit resources efficiently. Five years ago when they were created, it was considered a coup simply to run a simulation in fewer than 24 hours. Computing speeds have accelerated considerably since then. Recent initiatives have started to deliver the analytical capabilities needed to move to this scheme. At several firms, a new credit-exposure profile is calculated before every significant deal is concluded. At one firm, a prospective deal is tested against all liquidity providers to guide traders to the best counterparty for a balancing trade—that is, the one for whom presettlement exposure increases the smallest amount. For some deals with certain counterparties, exposure can even drop, as a result of offsets and netting. Finally, one financial services firm charges for credit risk while also maintaining credit limits. Desks incur a cost (on their P&L) for credit exposure based on their portfolio's marginal contribution to the overall credit exposure. The charges are levied within tenor buckets and are based on current credit market prices (credit spreads) for each counterparty. Credit limits for trading have become a secondary concern at this firm.

It's curious that charges for credit risk occur in all markets except the internal ones within banks and securities firms. This shortcoming is understandable, considering the explosive growth of over-the-counter derivatives and the complexity of these products, whose exposure levels vary. This deficiency has led to inefficient use of scarce credit resources, significant trader arbitrage and high operational overhead. Inevitably, the firms that correct this deficiency will perform better. Appropriate credit charges coupled with analytical tools can eliminate some of these inefficiencies and arbitrage opportunities in a firm by aligning traders' incentives with the firm's overall objectives. When an efficient internal credit market is put in place, trading based on optimized incentives occurs, unfettered by limits.

This column, excerpted from James Gleason's Risk: The New Management Imperative in Finance, is reprinted by permission of Bloomberg Press. ©2000 by James T. Gleason.

Risk Allocation Works!

S. Waite Rawls, chief operating officer at Ferrell Capital Management, compares the asset allocation and risk allocation approaches to portfolio management.

The phrase risk allocation has become popular on the conference circuit and in the investment press. We're always hearing about how a large, sophisticated pension plan sponsor is "investigating risk allocation” or how a leading college endowment is "analyzing risk budgeting.” It seems that risk allocation is the next logical step beyond risk management, the "in” thing for trend-setters to talk about. But we cannot fail to notice that, when asked, most participants are hard-pressed to define what risk allocation is, how it works and how they should use it.

A portfolio that uses risk rather than notional asset size to allocate its investments will outperform a portfolio that uses a conventional approach to asset allocation. The risk-allocated portfolio is more responsive to changing markets and market conditions, is equally long-term in its view, and delivers a better intuitive fit by matching a portfolio's exposure to the investor's appetite and tolerance for fluctuations in portfolio value.

The risk allocation process

There are four principal differences between asset allocation and risk allocation.

First, there is a difference in emphasis on the three inputs to an optimal portfolio—the expected return, the expected risk and the expected correlations. In the initial allocation process, the asset allocator tends to put extra emphasis on return—which asset class or sector he expects to outperform which others. The risk allocator spends relatively more time on the volatility and, especially, the correlation characteristics of the classes.

The second difference is how they monitor the allocations. The asset allocator only monitors the dollar amount of increases and decreases in the various allocations caused by profits or losses. If stock returns exceed bond returns, the, say, 60 percent allocation to stocks will grow. Having analyzed the volatilities and correlations of the various components to create the initial allocations, the asset allocator usually ignores his forecast error in estimating those characteristics. And he ignores changing patterns in those characteristics until it is time to do another asset allocation study, often several years later. The risk allocator knows that the portfolio exposure to each component is not a function simply of the dollars allocated to it, but also of the volatility and correlations.

Third, rebalancing is different for an asset allocator than it is for a risk allocator. The asset allocator is using the dollar amount of assets as the proxy for risk. When one component grows beyond its intended allocation—for example, if stocks grow to 70 percent from 60 percent—the asset allocator sees that as a sign to reduce risk and rebalances back to 60 percent by moving money from stocks to bonds. The risk allocator would see growth in risk exposure to a class if the amount grew, if the volatility grew or if the correlation to the rest of the portfolio went higher, and he would rebalance to the desired risk allocation if some combination of those three things happened. This does not necessarily connote a more frequent rebalancing; it merely connotes a different stimulus for rebalancing.

Finally, for monitoring and rebalancing, the asset allocator would examine only the composition of the parts—not the entire portfolio. The risk allocator would look first at the whole and then at the component parts. The 60/40 asset allocator would not respond to a market in which the volatility of both stocks and bonds went up and, at the same time, became more positively correlated. The risk allocator would see the portfolio risk going up beyond the target and would try to reduce that level of portfolio risk, perhaps by introducing a third, noncorrelating asset class or by "delevering”—that is, by reducing the allocation to both stocks and bonds and putting part of the assets into cash. On the other hand, if markets became quiet, and portfolio risk dropped, the risk allocator might have wanted to add leverage to the portfolio to get the risk up to the desired level.
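The two-asset arithmetic behind this is simple. In the sketch below (illustrative numbers only), the asset allocator's 60/40 weights are identical in both regimes, but the portfolio's risk nearly doubles when volatilities rise and the stock-bond correlation turns strongly positive:

```python
import math

def portfolio_vol(w_stock, w_bond, vol_stock, vol_bond, corr):
    """Annualized volatility of a two-asset portfolio."""
    var = ((w_stock * vol_stock) ** 2 + (w_bond * vol_bond) ** 2
           + 2 * w_stock * w_bond * vol_stock * vol_bond * corr)
    return math.sqrt(var)

# Quiet regime: the 60/40 book sits near a ~10 percent risk target.
quiet = portfolio_vol(0.60, 0.40, vol_stock=0.15, vol_bond=0.05, corr=0.20)
# Stressed regime: both vols rise and the correlation turns positive.
# The weights are unchanged, so the asset allocator sees nothing to do;
# the risk allocator sees the portfolio risk well above target and delevers.
stressed = portfolio_vol(0.60, 0.40, vol_stock=0.25, vol_bond=0.09, corr=0.80)
```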

Which works better?

The differences between asset and risk allocation are substantial. Ultimately, the question is whether or not risk allocation will improve the expected results of a portfolio. Before we go into the empirical analysis, however, let's review a few intuitive differences.

The theoretical goal for doing an allocation is to maximize (or optimize) the return from a portfolio that is consistent with a desired risk profile. In the world before easy access to computers and information sources, managers did not have the data or tools to monitor the markets and their portfolio against their desired risk tolerance. Therefore, they used the data they had—monthly returns and holdings reports from custodians—and monitored the amount allocated to an asset class as a proxy for risk, developing rules of thumb for allocations. For example, a 60/40 allocation might be about the right amount of risk for a pension plan, but a high-net-worth individual might prefer 75/25. So the vocabulary of asset allocation, over the years, became the norm. Today, even nonfinancial people who sit on investment committees are familiar with the lingo of asset allocation, even if they do not understand the subtleties of efficient frontier curves.

There are two ways to define the word conservative. The first is "traditional and opposed to change.” Asset allocation fits that definition. So for those who want to manage the conventional way, asset allocation is appropriate. The actual risk of the portfolio, however, will fluctuate widely as the markets become quieter or more volatile.

The second definition of conservative is "cautious, moderate and risk-averse.” Risk allocation fits this latter definition. It is not traditional. In fact, it is a new outgrowth of the risk management culture that swept Wall Street in the early 1990s. But it focuses much more precisely on the risk tolerance of the investor.

Risk allocation acknowledges that market conditions and manager behavior can and will change. It takes advantage of modern technology, which permits continuous monitoring and analysis of market fluctuations. And it uses top-down rebalancing in the allocation of assets to keep the portfolio's actual risk consistent with the investor's desired tolerance level. It is, therefore, more conservative by the second, and far more appropriate, definition.

Of course, we need to test the thoughts outlined earlier to see whether a portfolio using the proposed risk allocation approach outperforms a portfolio using the conventional asset allocation model. "Outperform” is defined in two ways: Will risk allocation result in a higher absolute return? Will risk allocation result in a higher risk-adjusted return, measured by the Sharpe ratio?

Asset allocation—the model

We decided to build a model that would use a reasonable historical period to build a base case performance for asset allocation. The initial portfolio was 60 percent stocks and 40 percent bonds, starting on December 1, 1984, and ending on July 1, 1999. The start date allowed almost 15 years of data and, importantly, included the October 1987 period of stress. We employed the two methods of rebalancing that are used by most investors: First, in the annual rebalancing, we reset the allocation to 60 percent/40 percent on December 31 of each year, regardless of how much it deviated in the interim. Second, in the range rebalancing, we reset the allocation to 60 percent/40 percent if the stock allocation exceeded 65 percent or fell below 55 percent as a result of market prices and their effect on profits and losses, regardless of how much time had passed. Weekly price data were used. Volatility and correlations were determined using a trailing 52-week period. No leverage was allowed.
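The two rebalancing rules can be sketched in a few lines. This is a minimal illustration using synthetic weekly returns, not the historical stock and bond data the study actually used; the drift mechanics and trigger conditions are the point, not the numbers.

```python
import random

def simulate(stock_rets, bond_rets, rule, target=0.60, band=0.05):
    """Track the stock weight week by week and apply one of the two
    rebalancing rules: 'annual' resets every 52 weeks, 'range' resets
    whenever the weight drifts outside target +/- band.  Returns the
    number of rebalancings triggered."""
    w = target          # current stock weight
    rebalances = 0
    for week, (rs, rb) in enumerate(zip(stock_rets, bond_rets), start=1):
        # weights drift with the relative performance of the two assets
        stock_val = w * (1 + rs)
        bond_val = (1 - w) * (1 + rb)
        w = stock_val / (stock_val + bond_val)
        if rule == "annual" and week % 52 == 0:
            w, rebalances = target, rebalances + 1
        elif rule == "range" and not (target - band <= w <= target + band):
            w, rebalances = target, rebalances + 1
    return rebalances

random.seed(0)
stocks = [random.gauss(0.0015, 0.02) for _ in range(52 * 15)]   # ~15% ann. vol
bonds = [random.gauss(0.0010, 0.007) for _ in range(52 * 15)]   # ~5% ann. vol
print(simulate(stocks, bonds, "annual"), simulate(stocks, bonds, "range"))
```

Note that the range rule trades only when drift forces it to, which is why it can rebalance less often yet keep the allocation closer to target.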

The results of the two rebalancing methodologies differed. Range rebalancing earned a return 0.7 percent higher with similar volatility, giving it a higher Sharpe ratio. Despite rebalancing less frequently, the range method kept the allocation closer to the 60 percent/40 percent target.

The risk analysis is even more interesting. Common assumptions are a volatility of 15 percent for stocks, a volatility of 5 percent for bonds, and a modest correlation between the two, giving the portfolio an expected volatility of about 10 percent. At 9.8 percent to 10 percent over the entire period, the actual volatility of both methods is close to the forecast. The typical asset allocator, however, assumes that if the asset allocation is held close to the target by rebalancing, then the risk will likewise be held close to the expected 10 percent volatility.
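The 10 percent expectation follows from the standard two-asset variance formula. A quick check, taking "modest" correlation as 0.3 (our assumption; the article does not give a number):

```python
import math

def portfolio_vol(w_s, w_b, vol_s, vol_b, corr):
    """Annualized volatility of a two-asset stock/bond portfolio."""
    var = ((w_s * vol_s) ** 2 + (w_b * vol_b) ** 2
           + 2 * w_s * w_b * corr * vol_s * vol_b)
    return math.sqrt(var)

# 60/40 mix, 15% stock vol, 5% bond vol, correlation assumed to be 0.3
print(round(portfolio_vol(0.60, 0.40, 0.15, 0.05, 0.30), 3))  # 0.098
```

The result, about 9.8 percent, matches the realized volatility the study reports for both asset allocation methods.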

Risk allocation—the model

One of the central purposes of applied risk management is to maintain portfolio risk at the desired level (or at least within a range around that level), changing the allocation of actual assets to accommodate the risk goal. Therefore, the risk allocator needs to monitor the ongoing volatility and correlations of the components to calculate the ongoing risk in his portfolio as currently allocated. If the actual risk of the current portfolio exceeds his range of risk tolerance, he changes the asset allocation to get the risk back in line. Our model does just that.
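The monitoring step can be sketched as follows: estimate each asset's trailing volatility and their correlation from the last 52 weekly returns, then plug them into the portfolio-risk formula. This is a simplified estimator, not the study's actual code.

```python
import math

def trailing_stats(stock_rets, bond_rets, window=52):
    """Annualized trailing volatility for each asset and their correlation,
    estimated from the last `window` weekly returns."""
    s, b = stock_rets[-window:], bond_rets[-window:]
    n = len(s)
    ms, mb = sum(s) / n, sum(b) / n
    var_s = sum((x - ms) ** 2 for x in s) / (n - 1)
    var_b = sum((x - mb) ** 2 for x in b) / (n - 1)
    cov = sum((x - ms) * (y - mb) for x, y in zip(s, b)) / (n - 1)
    ann = math.sqrt(52)  # annualize weekly volatility
    vol_s, vol_b = math.sqrt(var_s) * ann, math.sqrt(var_b) * ann
    corr = cov / math.sqrt(var_s * var_b)
    return vol_s, vol_b, corr

def current_portfolio_vol(w_s, vol_s, vol_b, corr):
    """Risk of the portfolio as currently allocated (w_s in stocks, rest in bonds)."""
    w_b = 1 - w_s
    return math.sqrt((w_s * vol_s) ** 2 + (w_b * vol_b) ** 2
                     + 2 * w_s * w_b * corr * vol_s * vol_b)
```

If `current_portfolio_vol` drifts outside the tolerance band, the allocator trades; otherwise the portfolio is left alone.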

Figure 1
                             Annual Asset     Range Asset
                             Rebalancing      Rebalancing
Return                       9.0%             9.7%
Volatility                   10.0%            9.8%
Sharpe ratio                 0.24             0.32
Best 1-year return           24.5% (1995)     27.0% (1995)
Worst 1-year return          -6.1% (1994)     -5.6% (1994)
Number of times rebalanced   14               10
Maximum stock allocation     70%              65%
Minimum stock allocation     57%              55%

We wanted to compare two equally risky portfolios, so we started with a desired portfolio risk level at 10 percent volatility and an initial asset allocation of 60 percent/40 percent, as before. This starting point puts the two portfolios on a common risk footing, since the target for the risk allocation is equal to the expectation for the asset allocation. Using 52-week trailing volatility and correlation levels, we monitored the risk in the portfolio.

The range of risk tolerance was set at ±2 percentage points, or 8 percent to 12 percent. If the portfolio risk rose above 12 percent, we rebalanced to bring it back to the target level of 10 percent. The method of rebalancing was simple during times of increasing risk. Initially, we decreased stocks and increased bonds. If portfolio risk continued higher still, we added cash and optimized to get the allocation to stocks, bonds and cash at the proportions that deliver a 10 percent portfolio risk. In other words, cash was used to "delever” the portfolio and reduce its risk. During times of low market risk, we allowed the allocation to rise to 90 percent stocks and 10 percent bonds, but no further. Again, no leverage was permitted.
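The rebalancing logic just described can be sketched as a simple rule. The grid search below stands in for the study's optimizer, whose details are not given; the volatility and correlation inputs would come from the trailing 52-week estimates.

```python
import math

def rebalance_for_risk(vol_s, vol_b, corr, target=0.10, band=0.02,
                       w_s=0.60, max_stock=0.90):
    """Return (stock, bond, cash) weights.  If the current mix breaches
    target +/- band, scan fully invested stock/bond mixes (no leverage)
    for the one closest to the target risk; if even the least-risky mix
    is too hot, scale the portfolio into cash to 'delever' it."""
    def vol(ws, wb):
        return math.sqrt((ws * vol_s) ** 2 + (wb * vol_b) ** 2
                         + 2 * ws * wb * corr * vol_s * vol_b)
    current = vol(w_s, 1 - w_s)
    if target - band <= current <= target + band:
        return w_s, 1 - w_s, 0.0          # within tolerance: no trade
    # search mixes from all-bonds up to the 90% stock cap, in 1% steps
    err, ws = min((abs(vol(ws, 1 - ws) - target), ws)
                  for ws in [i / 100 for i in range(0, int(max_stock * 100) + 1)])
    if vol(ws, 1 - ws) > target:          # still too risky: add cash
        scale = target / vol(ws, 1 - ws)
        return ws * scale, (1 - ws) * scale, 1 - scale
    return ws, 1 - ws, 0.0
```

With calm-market inputs (15 percent stock vol, 5 percent bond vol, 0.3 correlation) the 60/40 mix sits inside the band and no trade occurs; with stressed inputs, the rule shifts toward bonds and then into cash until the 10 percent target is restored.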

Risk allocation—the effect on risk and return

The results were interesting. For the entire period, the volatility of returns was roughly the same as it was for the two asset allocation methodologies, so our portfolios were roughly equal from a risk point of view (see Figure 2).

Figure 2 (chart not reproduced): portfolio volatility over time for the annual asset rebalancing (from before), the range asset rebalancing (from before), and the risk allocation and rebalancing.

More important, the pattern of risk-taking was improved dramatically. The asset allocation, which is intended to express a consistent risk tolerance for the portfolio, actually allowed volatility to go as high as 15 percent and as low as 5 percent. This allowed the portfolio to be far riskier than intended in some periods and did not adequately express our appetite for risk in others. The risk allocation kept the portfolio's actual volatility far closer to the intended 10 percent risk level.

The effect on returns was even more impressive, particularly since the wide fluctuations in risk levels narrowed considerably (see Figure 4).

Moreover, managed risk produced higher returns: the risk-allocated portfolio outperformed the two asset allocation methods by more than 2 percent. More impressive still was the comparison of the Sharpe ratios. The risk-allocated portfolio produced a Sharpe ratio of 0.50, while the asset-allocated portfolio's was 0.32.

Since any historical analysis may simply reflect the distortions of the period of analysis, we investigated further to see when and why the risk allocation worked so well during the 1986–1999 period (see Figure 5).

Figure 4
                             Risk Allocation    Range Asset Rebalancing
Return                       11.4%              9.7%
Sharpe ratio                 0.50               0.32
Best 1-year return           32.4% (1995)       27.0% (1995)
Worst 1-year return          -4.7% (1990)       -5.6% (1994)
Number of times rebalanced   12                 10
Maximum stock allocation     90%                65%
Minimum stock allocation     44%                55%

Figure 5
Year    Risk Allocation    Range Allocation    Difference
1986    15.1%              15.5%               -0.4%
1987    6.0%               1.0%                +5.0%
1988    7.2%               7.0%                +0.2%
1989    19.5%              19.3%               +0.2%
1990    -4.7%              -4.8%               +0.1%
1991    14.7%              17.5%               -2.8%
1992    6.5%               5.3%                +1.2%
1993    7.1%               6.5%                +0.6%
1994    -2.5%              -5.6%               +3.1%
1995    32.4%              27.0%               +5.4%
1996    20.8%              11.6%               +9.2%
1997    18.0%              14.9%               +3.1%
1998    19.2%              21.8%               -2.6%

Detailed Sensitivity Analysis

We were impressed with how the risk allocation approach compared with both asset allocation approaches, but we wanted to see how robust the results were to other inputs. Therefore, we ran the model under several other scenarios.

The most interesting finding of the analysis is the comparison of Sharpe ratios. The traditional asset allocation approaches generated a Sharpe ratio of 0.24 for the annual rebalancing and 0.32 for the range rebalancing. Every risk allocation model outperformed both of the traditional asset allocations, with Sharpe ratios for the entire period ranging from a low of 0.40 to a high of 0.58. Furthermore, the Sharpe ratio outperformance was not a function of any particular period within the 14-year study: the Sharpe differential between the risk allocations and the asset allocations never deviated materially. Even in the 1994–97 period, when the risk allocations significantly outperformed the asset allocations in absolute terms, they did so by raising volatility while keeping the Sharpe differential relatively constant. This difference in Sharpe ratio performance is, we believe, the most material finding of the study.
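As a sanity check, the reported returns, volatilities, and Sharpe ratios are mutually consistent if we assume a risk-free rate of roughly 6.5 percent (our assumption; the article does not state the rate, though it is plausible for 1984–99):

```python
def sharpe(annual_return, volatility, risk_free=0.065):
    """Sharpe ratio: excess return over the risk-free rate per unit of risk."""
    return (annual_return - risk_free) / volatility

# Figures from the study; the 6.5% risk-free rate is assumed, not stated
print(round(sharpe(0.090, 0.100), 2))  # annual rebalancing: ~0.25 (reported 0.24)
print(round(sharpe(0.097, 0.098), 2))  # range rebalancing:  ~0.33 (reported 0.32)
print(round(sharpe(0.114, 0.100), 2))  # risk allocation:    ~0.49 (reported 0.50)
```

The small discrepancies are consistent with rounding in the reported figures.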

Risk allocation outperforms asset allocation, both by increasing return and increasing the return on risk. We have been employing these techniques in our own allocation process for a number of years in a multi-manager fund of long/short strategies, successfully meeting our volatility targets.

Like asset allocation, risk allocation allows the investor to determine the appropriate risk tolerance level for the portfolio and to set the initial allocation commensurate with that level. From that point forward, however, risk allocation makes it possible for the investor to adapt to changing market conditions and any errors in the initial determination of expectations. The portfolio continues to be allocated consistently, on an ex ante basis, with the initial determination of risk tolerance. The result is a portfolio whose actual volatility, on an ex post basis, stays closer to the desired risk levels than does a portfolio that uses traditional asset allocation techniques. The investor gets the benefit of a higher Sharpe ratio. More importantly, the investor gets a higher level of comfort that the allocation will work.