There's no perfect way to estimate volatility, but here
are some of the best approaches.
By Karen Spinner
The area of volatility, as any trader knows, is itself volatile. The
key question is how one can use predictions of volatility to help manage
positions and seek out new profit opportunities. Volatility has undergone
striking changes during the past several years in terms of how it is measured,
predicted and used by traders and risk managers alike.
In the dark ages of volatility analysis, analysts preferred quantity over quality: the more historical data the better, and volatilities were routinely projected one, three, even five years in advance. Soon, however,
quants discovered that recent volatilities might be more relevant in terms
of predicting the near future. Traders rapidly joined the Cult of the Short
Term, using ever-smaller samples of data to estimate volatility. As the
use of delta and gamma hedging increased, implied volatilities increasingly
were used more and more in conjunction with historical simulation to identify
new profit-making opportunities.
Volatility estimation has come to play an important role in risk management. The rise of Monte Carlo and variance/covariance Value-at-Risk, both of which
use volatility estimates to generate market factor matrices, has created
a whole new set of uses for volatility estimation techniques. In these cases,
volatility estimation becomes critical to the valuation of an entire portfolio, not just option positions. And, because volatility estimation as a risk management
tool is a relatively young discipline, it is in this area that a great deal
of original research is taking place, both in applying traditional trading techniques to risk and in generating volatility models that work better within the context of Value-at-Risk and other risk-measurement models.
From a philosophical perspective, volatility is the essence of trading.
It reflects the level of price variance in an underlying market. With options
it takes on a special meaning, because options derive their price in part
from how volatile the price of the underlying might be. Relative volatilities
in option markets tend to vary according to strike price. For example, if one plots the relative volatilities of options that are out of the money, at the money and in the money, all with the same underlying and maturity date, one will often find that both out-of-the-money and in-the-money volatilities are somewhat higher than at-the-money volatilities. This configuration is
known as the volatility "smile," and reflects market perception
of how much option prices are likely to fluctuate. The volatility smile
is not necessarily symmetrical. Instead, it may be skewed to one side or
the other based on what the market "thinks" of in-the-money versus out-of-the-money options.
Volatility is also a function of time: volatilities can vary a great deal based on an option's maturity date. For many underlying assets
in which there is an active forward market, volatility has a term structure.
This means that as contracts mature, their volatilities converge toward
a certain value. For traders, shifts in this term structure, and therefore in how volatility will fluctuate over time, are important to consider. "The
term-structure effect is particularly pronounced in the fixed-income markets,"
notes Robert Geske, a professor of mathematics at the University of California
at Los Angeles and vice president of C*ATS Software. Time also has an effect
on the volatility smile. Typically, the smile is more pronounced for short-dated
options and much flatter for very long-dated options.
For option traders, predicting volatility, and the market's perception of volatility, is critical. Most traders use historical models and implied volatilities together
to make better decisions. Historical volatilities can either be taken as
is, or plugged into one of a variety of predictive models. Implied volatilities, on the other hand, are calculated by taking live option prices and market data and then solving an option pricing model for volatility. Bob Jarrow, a professor at the Cornell
School of Business Management and research consultant for Kamakura Software,
notes that whether or not you receive a valid implied volatility depends
on the option model you are using to solve for volatility. "Any 'misspecifications'
in the model will affect the resulting implied volatility."
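Solving for implied volatility is a one-dimensional root search: given a live option price, invert the pricing model numerically. A minimal sketch, assuming the Black-Scholes model for a European call and made-up market inputs (as Jarrow notes, any misspecification in the chosen model flows straight into the resulting number):

```python
import math

def norm_cdf(x):
    # Standard normal CDF via the error function.
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def bs_call(S, K, T, r, sigma):
    # Black-Scholes price of a European call.
    d1 = (math.log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    return S * norm_cdf(d1) - K * math.exp(-r * T) * norm_cdf(d2)

def implied_vol(price, S, K, T, r, lo=1e-4, hi=5.0, tol=1e-8):
    # Bisection works because the call price is monotonically
    # increasing in sigma.
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if bs_call(S, K, T, r, mid) > price:
            hi = mid
        else:
            lo = mid
        if hi - lo < tol:
            break
    return 0.5 * (lo + hi)

# Round-trip check: price an option at 20% vol, then recover that vol.
market_price = bs_call(100, 100, 0.5, 0.05, 0.20)
print(round(implied_vol(market_price, 100, 100, 0.5, 0.05), 4))  # → 0.2
```

In practice a Newton or Brent solver converges faster, but bisection makes the "solve for volatility" step explicit.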
Geske's recent research on the volatility of Eurodollar futures and options indicates that implied volatility is always greater than actual volatility
in an efficient market. This becomes apparent when you compare actual, historical
volatilities with implied volatilities for the same time period. Other
highly liquid markets may show similar results.
The popularity of variance/covariance and Monte Carlo-based Value-at-Risk, which use volatility and correlation estimates to generate price points,
has spawned yet another demand for volatility forecasting. At the most basic
level, volatility can be used to run a stress test on a portfolio of options.
Lance Smith, a partner at New York-based Imagine Software, explains that
one use of historical volatilities is stress testing option positions. "For
example," he says, "if you have a position in oil options, you
can use, say, volatilities from the beginning of Desert Storm to 'shock' the portfolio."
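Such a stress test amounts to repricing the position under a crisis-era volatility and comparing. A sketch, assuming Black-Scholes pricing; the position size, strikes, and the "crisis" volatility level are illustrative assumptions, not market data:

```python
import math

def norm_cdf(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def bs_call(S, K, T, r, sigma):
    # Black-Scholes price of a European call.
    d1 = (math.log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * math.sqrt(T))
    return S * norm_cdf(d1) - K * math.exp(-r * T) * norm_cdf(d1 - sigma * math.sqrt(T))

# Hypothetical oil-option position: long 100 calls, currently marked at 30% vol.
qty, S, K, T, r = 100, 60.0, 60.0, 0.25, 0.05
base_vol, crisis_vol = 0.30, 0.80   # crisis_vol stands in for a Desert Storm-era level

base_value = qty * bs_call(S, K, T, r, base_vol)
shock_value = qty * bs_call(S, K, T, r, crisis_vol)
print(f"P&L under vol shock: {shock_value - base_value:+.2f}")
```

A long option position gains when volatility is shocked upward; a short position in the same scenario shows the corresponding loss.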
Currently, many risk managers use either a "moving average"
or an exponential model based on historical data to generate future volatility
estimates. Moving average refers to an arithmetic weighting of historical
volatilities during a preset time period; each past volatility within that
time period is weighted equally in the resulting volatility estimate. This
method is currently recommended by the Bank for International Settlements
and, according to Mark Garman, a professor of finance at the University
of California at Berkeley and president of Financial Engineering Associates,
it may be widely used because more people understand the math.
Exponential smoothing, on the other hand, gives more weight to the most
recent observations of volatility. For example, "a 30-day moving average
volatility measure suffers the disadvantage that it changes suddenly and
implausibly when a 'spike' in the volatility measured 30 days ago is no
longer included in the moving average," says Garman. Exponential smoothing
is used in JP Morgan's popular RiskMetrics model, and many prefer it to
the arithmetic moving average methods. Garman notes that the math associated
with exponential smoothing is actually quite easy, once one understands
the basic concept.
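The two weighting schemes are easy to contrast in a few lines. A sketch using an illustrative return series and the RiskMetrics-style decay factor of 0.94 (the window length and the data are assumptions):

```python
import math

def rolling_vol(returns, window):
    # Equal-weight ("moving average") estimate over the last `window` returns.
    recent = returns[-window:]
    mean = sum(recent) / len(recent)
    return math.sqrt(sum((r - mean) ** 2 for r in recent) / len(recent))

def ewma_vol(returns, lam=0.94):
    # Exponential smoothing a la RiskMetrics: today's squared return gets
    # weight (1 - lambda); older observations decay geometrically.
    var = returns[0] ** 2
    for r in returns[1:]:
        var = lam * var + (1.0 - lam) * r ** 2
    return math.sqrt(var)

# A quiet series with one old spike: the equal-weight estimate drops abruptly
# the day the spike leaves the window, while the EWMA decays smoothly.
returns = [0.001] * 10 + [0.05] + [0.001] * 20
print(rolling_vol(returns, 20), ewma_vol(returns))
```

This is exactly Garman's objection to the 30-day moving average: the estimate changes "suddenly and implausibly" for no reason other than an old observation aging out.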
Beyond moving average and exponential techniques, there is a whole family of statistical models designed to predict volatility. These models
can be effectively used in both the trading and risk-management environments,
and they include ARCH (autoregressive conditional heteroscedasticity), GARCH
(generalized ARCH), Kalman filters and enough other methods to fill a library
of textbooks. According to Jarrow, many of these models come out of the
academic community, where disagreements over the latest and greatest models
have become something of an armchair sport.
ARCH and GARCH are two widely used methods of forecasting volatility. Heteroscedasticity refers to a variable whose volatility changes over time. In some markets, such as energy, volatility can vary a great deal over time. (Likewise, homoscedasticity refers to a variable whose volatility is relatively constant over time.) In plain English, ARCH and GARCH models attempt to create rules describing how volatility behaves under certain market conditions. They typically combine the current volatility estimate with a selection of historical variables in order to predict volatility a short time into the future. In basic models these parameters are often drawn from historical
volatility patterns. More advanced models might also consider the underlying
commodity or contract, volatilities in other markets and so on.
ARCH, GARCH and their brethren (neural networks, genetic algorithms, Kalman filters and the like) are considered likely over time to become a more important
part of advanced portfolio risk-management techniques. Carol Alexander,
a professor of mathematics at the University of Sussex and a consultant
for Toronto-based Algorithmics, explains that GARCH enjoys a great deal
of popularity within the academic community because it is based on a statistically
sound methodology. "Moving average and exponential smoothing methods
assume that volatility has a constant term structure," she says. "Today's
volatilities will always reflect tomorrow's volatilities. GARCH models,
on the other hand, allow you to define a term structure for volatility.
This means that, based on historical analysis, you can identify a long-term
mean to which volatility for a given market is likely to revert over time."
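Alexander's point about mean reversion can be sketched with a GARCH(1,1) forecast: the h-step-ahead variance decays geometrically toward the long-run mean at the persistence rate alpha + beta. The parameters below are illustrative assumptions, not fitted values:

```python
# GARCH(1,1) conditional variance: sigma2_t = omega + alpha*r2_{t-1} + beta*sigma2_{t-1}
omega, alpha, beta = 0.00002, 0.08, 0.90
long_run_var = omega / (1.0 - alpha - beta)  # the level variance reverts to

def forecast_var(current_var, horizon):
    # h-step-ahead forecast: the gap between today's variance and the
    # long-run mean shrinks geometrically at the rate alpha + beta.
    return long_run_var + (alpha + beta) ** horizon * (current_var - long_run_var)

current_var = 0.0009  # today's conditional variance, below the long-run level
for h in (1, 10, 100):
    print(h, forecast_var(current_var, h))
```

Unlike a moving average or EWMA number, this produces a different forecast at each horizon, which is precisely the term structure Alexander describes.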
Alexander also emphasizes that GARCH models can be useful to traders
because they produce a stochastic volatility model rather than just a single
number as moving average and exponential methods do. Thus, GARCH can help
traders develop the most effective delta and gamma hedges. Delta, which signifies the amount by which an option's price will change for a unit price change in the underlying, and gamma, which signifies how much that delta will change under the same circumstances, are closely related to volatility.
A GARCH model can help traders more accurately match their delta hedges
to their positions over time. This principle has lately been extended to
use GARCH in order to fit the entire volatility smile surface and hence
predict short-term (that is, one day) movements in option prices.
Yet another use of GARCH and other statistical measures of volatility
is to compare statistical volatilities to implied volatilities in order
to see if an option is over- or under-valued by the market. If implied volatility
is substantially higher than your GARCH result, then you should sell volatility.
Likewise, if your implied volatility is lower than your GARCH result, then
you should buy volatility.
In conjunction with Algorithmics, Alexander has developed a GARCH model
tailored specifically for the portfolio risk-management environment. Known
as orthogonal GARCH, this particular variation is designed to produce large, positive definite covariance matrices that can be used to value a portfolio
in the context of Value-at-Risk. "Moving average methods do not produce
positive definite matrices; in fact, the RiskMetrics data can have many
zero eigenvalues. In practice, this means that your matrix may contain some
negative eigenvalues. Therefore, some cash portfolios could have negative
VAR measures, and some options' portfolio VARs cannot be measured."
According to Alexander, orthogonal GARCH works well as a volatility forecaster, particularly for use in portfolio-wide VAR. She notes, however, that in
order to account for "high stress" market scenarios, the model
needs to be trained on highly correlated subcategories within the global
risk factor categories of equities, foreign exchange, fixed income and commodities.
Then the whole, firm-wide covariance matrix can be obtained using the orthogonal
splicing procedure developed by Alexander for Algorithmics.
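The splicing procedure itself is Alexander's own, but the core "orthogonal" idea can be sketched for two risk factors: rotate returns onto uncorrelated principal components, model each component's variance on its own (univariate GARCH in the real method; plain sample variances stand in here as a simplified substitute), then rotate back. Because the component variances are nonnegative, the rebuilt covariance matrix is positive semidefinite by construction, avoiding the negative-eigenvalue problem Alexander describes:

```python
import math

def pca_covariance(x, y):
    # Sample moments of the two return series.
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cxx = sum((a - mx) ** 2 for a in x) / n
    cyy = sum((b - my) ** 2 for b in y) / n
    cxy = sum((a - mx) * (b - my) for a, b in zip(x, y)) / n

    # Closed-form eigenvector angle of the 2x2 sample covariance.
    theta = 0.5 * math.atan2(2 * cxy, cxx - cyy)
    c, s = math.cos(theta), math.sin(theta)

    # Project onto the two (uncorrelated) principal components and
    # estimate each component's variance separately.
    p1 = [c * (a - mx) + s * (b - my) for a, b in zip(x, y)]
    p2 = [-s * (a - mx) + c * (b - my) for a, b in zip(x, y)]
    v1 = sum(p ** 2 for p in p1) / n
    v2 = sum(p ** 2 for p in p2) / n

    # Rotate the diagonal, nonnegative component variances back:
    # the result is positive semidefinite by construction.
    return [[c * c * v1 + s * s * v2, c * s * (v1 - v2)],
            [c * s * (v1 - v2), s * s * v1 + c * c * v2]]
```

In the full method, forecasting each component's variance with its own GARCH model (rather than the sample variances used here) yields a forward-looking covariance matrix with the same positive semidefiniteness guarantee.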
In the future
Of course, historical models aren't the only way to predict volatility.
"Traders would almost rather be able to predict future volatility than
future price," observes Geske. He explains that some quants have already
tried to apply neural networking techniques-in which a program learns to
predict volatility or implied volatility-with mixed results. And, from a
risk management perspective, the next frontier could be managing the correlation
of various volatilities. According to Imagine's Smith, an important issue
when evaluating portfolio risk is the relationship between volatility and
correlation. "Let's say you are long some options and you want to hedge
them with a short position," explains Smith. "The first question
is whether or not there will be a mismatch in the volatilities associated
with the two positions. The second question is to what extent these volatilities
are correlated with each other."
This question, in fact, is already leading to the next frontier in volatility research. Traders and risk managers alike should keep their ears to the
ground and their coffee pots plugged in.