Models

Gena Ioffe, chief developer at Intermark Solutions, talks about how to make new theoretical models more sensitive to market values.

Making Theoretical Models Market-Savvy

Why is the theoretical framework for option pricing models still the same as in the original work by Black, Scholes and Merton? I ask this question because we all know that in determining a market price, a trader will consider other factors that are not parameters in the Black-Scholes model.

For the last several years, I have worked for a number of software vendors developing over-the-counter foreign exchange options trading and risk management systems. As a self-taught programmer with degrees in mathematics and statistics, I have been responsible for the development of new models to price each new exotic option that is created in the market. When our clients report that the system produces “wrong” option prices, I have always been responsible for “fixing” the problem.

Here’s the challenge: Theoretical option pricing models calculate “fair” values. But for many derivatives players, fair values are not enough. They also need to calculate the “market price” of the option with some degree of accuracy. Salespeople want the ability to quote a price without having to ask the trader. Smaller desks at the regional banks essentially “resell” options to their local clients and need tools to price options closer to market. Buy-side users need systems to mark their portfolios to market.

For each of these users, the “market” option price is not necessarily the “fair” value, but rather the price that the user will pay to or receive from the market-maker. Because they need software that generates “market” prices, they often ask me to “fix” the problem when their system shows prices considerably different from market-maker prices. While market-makers price options based on fair values, they sometimes include other factors not observable in the market. The common practice is to calibrate the theoretical model to the market through the volatility input (the “smile” effect).
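
To make that calibration step concrete, here is a minimal sketch in Python (mine, not the author’s production code): a Black-Scholes call price and a bisection solver that backs out the implied volatility reproducing an observed market quote. Repeating the solve across strikes traces out the smile. All inputs are hypothetical.

from math import exp, log, sqrt
from statistics import NormalDist

N = NormalDist().cdf  # standard normal cumulative distribution

def bs_call(spot, strike, rate, vol, t):
    """Black-Scholes fair value of a European call."""
    d1 = (log(spot / strike) + (rate + 0.5 * vol ** 2) * t) / (vol * sqrt(t))
    d2 = d1 - vol * sqrt(t)
    return spot * N(d1) - strike * exp(-rate * t) * N(d2)

def implied_vol(market_price, spot, strike, rate, t, lo=1e-4, hi=5.0):
    """Volatility that makes the Black-Scholes price match a market quote."""
    for _ in range(100):                 # bisection: the price is monotone in vol
        mid = 0.5 * (lo + hi)
        if bs_call(spot, strike, rate, mid, t) < market_price:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Hypothetical quote: a six-month at-the-money call trading at 5.00
print(implied_vol(market_price=5.0, spot=100.0, strike=100.0, rate=0.05, t=0.5))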

Here are the sources of some of the most frequently reported problems:

Erroneous software. While this problem is less common now that software systems have become more mature, I think our industry needs common option-pricing models, tested and validated by an independent body. In other words, a set of common models in the public domain that all software vendors can use. Today, the ability to code the Black-Scholes model for vanilla options, or even options with a single or double barrier, does not seem to be a competitive advantage. Even when someone buys inexpensive spreadsheet add-ins, they are really paying for the interfaces and value-added spreadsheets. How much does a buyer of a multimillion-dollar trading or risk management system pay for the models?

Bid-ask spreads. Several years ago, a foreign exchange vanilla options trader at a small German bank using our system for position-keeping reported that the end-of-day mark-to-market prices for his trades were different from the ones he received from his pricing software. After days of intense labor, we were certain that all the market data and option parameters we passed to our models exactly matched the inputs he entered into the pricing system. After several sleepless nights spent reviewing our error-free code, I could see no explanation.

It was then that we discovered that the bank’s pricing system used a proprietary averaging method to adjust inputs into the model when it calculated bid and ask prices. The pricing system was at the time the de facto market standard, and this proprietary averaging method was a source of notorious differences in mark-to-market prices between front- and middle-office systems. As far as I know, the proprietary method was never published.
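
The bank’s actual averaging method was never published, so the following is purely illustrative: one generic way a pricing system can turn a single mid volatility into bid and ask prices is to reprice the option with the volatility input shifted by half of a quoted volatility spread. The function name and numbers below are mine.

def bid_ask_from_vol_spread(price_fn, mid_vol, vol_spread, **params):
    """Reprice at mid_vol -/+ half the volatility spread to get (bid, ask)."""
    bid = price_fn(vol=mid_vol - 0.5 * vol_spread, **params)
    ask = price_fn(vol=mid_vol + 0.5 * vol_spread, **params)
    return bid, ask

# Reusing bs_call from the earlier sketch, with a hypothetical one-vol spread.
# (For vanilla options the price rises with volatility, so the lower vol gives the bid.)
bid, ask = bid_ask_from_vol_spread(bs_call, mid_vol=0.12, vol_spread=0.01,
                                   spot=100.0, strike=100.0, rate=0.05, t=0.5)
print(bid, ask)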

Calibration of models for barrier options. During the last couple of years, we have received numerous requests from our users to add double knockouts, range structures, window options and other complex options to our system. Like most other software vendors, we were enthusiastic, seeing this as an opportunity to add value to our systems. I employed an excellent mathematician who has spent his entire career solving partial differential equations in related fields. He quickly developed a formula that allowed us to create pricing models for new exotic options with barriers.

Unfortunately, however, users complained that these models were difficult to calibrate. The volatility of the volatility implied by the model is sometimes quite large. Meanwhile, the fair values calculated by our models perfectly match the fair values calculated by other software systems. I have asked several market-makers why there is sometimes such a difference between market prices and “fair” values. Their answer is always the same: to make a price, they consider many factors unobservable in the marketplace, such as the cost of hedging, market interventions when an underlying asset gets closer to the barrier and so on. For low-cost options with barriers, these unobservable factors can make a noticeable difference between “fair” values and actual market prices.

Fair-value models cannot meet all the practical needs of trading new advanced exotic options. New models should be developed to calculate market prices more accurately, and they should incorporate the following features:

  • The models should allow more degrees of freedom, such as more than a single volatility or a single interest rate. Some software models already allow users to price options using multiple volatilities and multiple interest rates. New models should also incorporate unobservable market factors such as hedging costs and provide for better calibration.
  • The models should be fair-value-based.
  • The models should be arbitrage-free.
  • The new models should be placed in the public domain.

Ioffe can be contacted at gioffe@fnx.com.


Money Management

William Ferrell, President of Ferrell Capital Management, talks about how to integrate VAR into the investment management process.

Reinventing Portfolio Management

We have witnessed a revolution in portfolio management over the past 10 years. The traditional portfolio of domestic stocks, bonds and cash has been replaced with one that might include foreign stocks and bonds, real estate, venture capital, private equity and total return strategies. Management strategies may now include currency positions, leverage, shorts and derivatives.

At the same time, there has been a revolution in technology. Information sources and data are now available on a real-time basis. And wholly different analytical approaches have been developed, using new software, to take advantage of the improved data availability and to cope with the additions to the portfolio structure.

These changes bring two questions to the fore: How have investment management techniques adapted to this revolution in available asset classes and strategies? And how are investors incorporating these newly developed technological and analytical capabilities?

Before we can look at the new approaches, we should review the conventional ones. The conventional approach to managing a portfolio started with an asset allocation, which concentrated on the proportions of the cash in the portfolio that would be invested in different asset classes. Either a rule of thumb, such as 60/35/5 stocks/bonds/cash, or an optimizer was used to divide the portfolio into asset classes (see Figure 1). Then managers were chosen to make the actual investments. The choice of managers was driven by a combination of their styles, their reputations, a consultant’s recommendation and their track records of producing top-quartile returns against their peers. There was little effort to examine the managers’ risk profiles, with most attempts focusing on their Sharpe ratio or information ratio (excess return divided by the volatility of excess return), which requires three to five years of monthly return data in consistent markets to be reliable.
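
For concreteness, here is a minimal sketch (illustrative, not from the article) of the information ratio described above, computed from monthly returns for a manager and its benchmark:

from statistics import mean, stdev

def information_ratio(manager_returns, benchmark_returns):
    """Annualized excess return divided by annualized volatility of excess return."""
    excess = [m - b for m, b in zip(manager_returns, benchmark_returns)]
    return (mean(excess) * 12) / (stdev(excess) * 12 ** 0.5)

# Hypothetical monthly returns; with only a handful of observations the
# estimate is noisy, which is why several years of data are needed.
manager   = [0.021, -0.008, 0.015, 0.004, -0.012, 0.019]
benchmark = [0.018, -0.010, 0.012, 0.006, -0.015, 0.016]
print(round(information_ratio(manager, benchmark), 2))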

Controlling risk was thought to be presumptuous. It was often heard, “We hire the best and give them the rope to succeed or to hang themselves.” With the accidents of the last several years, that attitude has changed, and strict guidelines have become the vogue of institutional investors. Give the managers extremely detailed instructions with pages of “Thou shalt nots” and ask the custodian to watch for compliance.

This traditional approach does not work well today. The number of markets involved has expanded and the relationships between markets are constantly changing. There is turnover in personnel at both the manager and the plan sponsor. Guideline surveillance is a logistical nightmare. The guideline approach uses proxies for risk rather than looking directly at risk itself. Perhaps most important, the guidelines can become constraints that perversely reduce returns and increase risk rather than the other way around.

What is a smart investor to do?

New approaches focus directly on the risk and return characteristics of the current portfolio’s holdings rather than the risk and return characteristics of historical holdings. This is a subtle but important development. We need a good predictor of return and risk over the next time frame, whether that be a month, a quarter or a year. Current holdings, measured in the conditions of the current market, provide a better predictor than the historical holdings in different markets.

The new approach of analyzing holdings, however, requires investors to monitor their managers more carefully and more often than return-based analysis required. We have noted before that leading-edge plan sponsors, endowments and foundations are creating new staff positions, called “risk manager.” I believe that the duty to conduct the holdings-based analysis of the portfolio lies with the risk manager.

Tremendous strides have been made to arm managers with market-specific analytical tools to analyze the risk and return of current holdings. Duration in fixed income and beta in stocks have become far more important in portfolio analysis. Different approaches to “factors” are being developed. But a portfolio manager and risk manager have difficulty comparing a bond portfolio whose duration is a year longer than the benchmark with a stock portfolio with an average beta of 1.2. And neither duration nor beta is helpful to total return strategies. The real challenge for the holdings-based approach is to develop a common vocabulary that can be applied across the entire portfolio—at the stock-selection level, at the manager selection and evaluation level, and at the asset and risk allocation level.

Common vocabulary

Many of the cutting-edge thinkers are applying value-at-risk to long-term investment portfolios because of its applicability across asset classes. VAR has its roots in the short-term world of trading, where today it clearly represents the “best practices” approach to risk management, capital adequacy and financial disclosures. If VAR’s time horizon is lengthened to a month or a quarter, it becomes applicable to investment management. Lengthened to a year, it becomes the Markowitz vocabulary of volatility and correlation and can be applied across the entire portfolio.
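
The horizon-lengthening point can be stated as a simple rule of thumb. Under the usual assumption of independent returns, a volatility-based VAR scales with the square root of time, so a short-horizon figure can be restated for a month, a quarter or a year. A minimal sketch with a hypothetical one-day VAR:

def scale_var(short_horizon_var, short_days, long_days):
    """Restate a VAR figure for a longer horizon via the square-root-of-time rule."""
    return short_horizon_var * (long_days / short_days) ** 0.5

one_day_var = 500000.0                              # hypothetical one-day VAR
for label, days in [("one month", 21), ("one quarter", 63), ("one year", 252)]:
    print(label, round(scale_var(one_day_var, 1, days)))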

Using VAR

VAR becomes a useful tool in bringing a current, risk-focused view of the portfolio. Figure 2 illustrates the virtuous cycle that is the portfolio management process. In the upper-left quadrant, the investor chooses asset classes, using appropriate benchmarks, and allocates to them consistent with the desired risk/return profile and the investment constraints. Moving to the top right, managers are chosen to achieve those benchmarks and to add value by taking risk vs. the benchmark. Continuing to the lower right, the old process assigned guidelines and constraints in the hope that they would keep the manager close to the desired benchmark. The new process replaces constraints with “managed risk.” The VAR that is inherent in the actual holdings is measured and compared with the VAR in the benchmark. The residual VAR—what is often called “tracking risk”—becomes an ex ante forecast of the potential range of excess returns—that is, a forecast of the volatility of excess returns. Now we have timely information with which to monitor the manager’s risk relative to the benchmark. Continuing to the lower-left quadrant, if the tracking risk is too large, shows a change in behavior or indicates style drift, we have the option of taking remedial action before the returns become a surprise. As we return to the upper left, we use the same information that gave us defensive insights to provide offensive insights. By doing so, we can increase returns by rebalancing or reallocating to those managers who represent the optimal future return on risk. In the old environment, two managers who have the same excess returns might be rewarded equally. In the new, even with equal returns, differences in risk profiles might suggest a reallocation to the better risk/return opportunity.
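
A minimal sketch of the tracking-risk calculation, with assumed weights and an assumed covariance matrix of asset returns: the ex ante risk of the holdings, of the benchmark, and of the difference between them, which is the residual or tracking risk.

import numpy as np

def ex_ante_risk(weights, cov):
    """Annualized volatility implied by a weight vector and a covariance matrix."""
    w = np.asarray(weights, dtype=float)
    return float(np.sqrt(w @ cov @ w))

cov = np.array([[0.04, 0.01],        # assumed annual covariance of two assets
                [0.01, 0.09]])
holdings  = np.array([0.50, 0.50])   # manager's current weights (hypothetical)
benchmark = np.array([0.60, 0.40])   # benchmark weights (hypothetical)

print("portfolio risk:", ex_ante_risk(holdings, cov))
print("benchmark risk:", ex_ante_risk(benchmark, cov))
print("tracking risk: ", ex_ante_risk(holdings - benchmark, cov))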

Figure 3 (Page 52) illustrates how this cycle of analysis can be performed at the manager level, adding useful information to the continual decision process. Equally, it can be applied to all of the managers in a particular asset class or to the entire portfolio. The monitoring and rebalancing become a continuum, informed by the most current holdings in the light of current market conditions.

How do these tools improve portfolio performance?

Monitoring revisited

What will increased monitoring show? Let’s use the case of a pension fund that hired an international equity manager whose benchmark is the unhedged EAFE. Figure 4 illustrates an example using the last six months of 1997. When the pension fund made an allocation to the asset class, it did so because of a set of expectations about international stocks—it expected a set of returns, volatility and correlations with other asset classes. So the first thing the plan sponsor wants to monitor is how the benchmark is doing. Its expected volatility for EAFE was 10 percent, based on an average of 9.6 percent for the period of 1994–96. The left side of the figure shows the value-at-risk of the benchmark (using the trailing 12 months to calculate volatility): in other words, how much we can forecast we would make or lose if we were fully invested in an index fund for that benchmark. We see that volatility and risk have picked up, from less than 10 percent to more than 14 percent, jumping with the Asian turmoil of October. This is now higher than expectations and bears watching carefully, but not yet so high as to warrant a change in allocation. If current risk were significantly higher or lower than expectations, however, we might consider increasing or decreasing the allocation to the class.
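
Here is a sketch of that monitoring step with made-up numbers (not the EAFE data behind Figure 4): estimate volatility from the trailing twelve monthly index returns, then restate it as a value-at-risk figure for the amount allocated to the asset class.

from statistics import stdev

def trailing_var(monthly_returns, allocation, z=1.65, horizon_years=1.0):
    """VAR of an indexed allocation from trailing monthly volatility (~95% one-tailed)."""
    annual_vol = stdev(monthly_returns) * 12 ** 0.5
    return allocation * annual_vol * horizon_years ** 0.5 * z

trailing_year = [0.02, -0.01, 0.03, -0.04, 0.01, 0.02,
                 -0.03, 0.05, -0.02, 0.01, 0.00, -0.01]   # hypothetical returns
print(round(trailing_var(trailing_year, allocation=100e6)))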

When the pension fund chose the manager to take the allocation, it also had a series of expectations about the manager. Let’s suppose that the manager had described his or her style as one that focused on country weightings, kept cash at a minimum and kept currency exposure close to the benchmark (in this case, fully exposed to the dollar). The center part of Figure 4 (Page 53) illustrates the absolute level of VAR of the holdings of our manager at the end of each month. We see that the manager’s risk has also risen, but has stayed below the absolute level of risk in the benchmark.

The key message of the monitoring process comes from the right side of the chart. Here we see the residual risk between the manager and the benchmark—the “tracking risk.” The level tells us how large the future range of excess returns might be, based on current holdings. And a change in the level is the best leading indicator we know of for shifts in style or behavior. In this case, a more detailed analysis showed that the manager, responding to the increased volatility in the market, sold stocks and raised the cash position to more than 15 percent. This decreased the risk in his portfolio but increased the risk to the investor, who had consciously chosen to take the risk of EAFE. The risk was that of underperforming a rising market, which is exactly what happened in the first quarter. Based on this ex ante analysis of holdings, the plan sponsor had the information in November and December to take corrective action, had he or she chosen to do so, rather than waiting until the end of the first quarter to see the results.

The example of a cash position is relatively easy to see with the naked eye and a portfolio listing. A shift to lower beta stocks or to lower volatility countries is much more subtle. It will escape notice. It will be within guidelines. And it may be riskier. This information is more timely than waiting until returns come in, and it is certainly better than attempting to control manager risk through guidelines. It is precisely on point.

Other applications

The preceding analysis helps us understand our risk better, but what about returns? Let’s suppose that over the full year 1997, the manager cited earlier beat EAFE by 100 basis points. A second manager with the same benchmark also had excess returns of 100 basis points. Everything else being equal, should they have equal allocations? How might we change this decision if we knew that the first manager averaged about 500 basis points of tracking risk while the second manager’s tracking risk was only 200 basis points? All else being equal, we should take money from the first and allocate it to the second. This reallocation would leave returns unchanged but would reduce the portfolio risk.

What else might we do with the information? The tracking risk should approximate the realized volatility of excess returns over time, so the 1997 performance indicates a potential Sharpe ratio of only .2 (100 basis points/500 basis points). And suppose we cannot find managers whose Sharpes are appreciably better. These ratios compare poorly to the long-term Sharpe ratio of the benchmark, which is closer to .4–.5. We might want to take less active management risk so that we could take more benchmark risk, which has a better risk/return tradeoff. So this analysis would support a decision to index the EAFE allocation. The same analysis, whether implicit or explicit, is the driver today of the increasingly popular decision to index large-cap domestic stocks, as the expectations of added value, measured by the Sharpe ratio, are poor.

We now need to look across the portfolio in order to allocate to the best returns for the overall profile. Let’s look at long/short strategies under the same microscope. We know that some offer returns of more than 10 percent, an expected Sharpe ratio of 1.0 or better, and no correlation to market indexes. These are powerful financial results. But we also know that those strategies employ leverage, derivatives and short sales—all of which many have previously attempted to constrain through the use of guidelines. If we can monitor holdings and quantify risk on a regular basis, we may be able to loosen the guidelines while actually improving our risk control. By doing so, we may consider the inclusion of the strategy in the portfolio, thereby increasing returns.

So the next question might be how to allocate between the indexed Standard & Poor’s 500, the active management of the long-only equities and the long/short strategy. The answer is to allocate risk among those approaches, not merely to allocate assets. If we look only at risk and return (the Sharpe ratio), we would index most of the equity allocation and add the long/short strategy. Some call the long/short strategy a new asset class, some call it an “overlay” and some call it “transportable alpha.” We simply call it a better portfolio, because it will offer better expected returns for the risk taken than the actively managed long-only portfolio or the fully indexed portfolio. When we look also at the diversification effect of the low correlation of the long/short strategy, the case for including it is even stronger. We can bring accuracy to this process by using an optimizer, but the intuitive rationale stands on its own.

This is exactly the information that risk managers are providing to leading investors, who are using the information to employ cutting-edge strategies.

Results

There are two objectives to the new approach:

  • Decrease surprises and narrow the range of results.
  • Increase returns for a given amount of risk and/or decrease risk for a given amount of expected return.

The key to the former is to monitor risks more closely and on a more timely basis than has been customary. The key to the latter is in the allocation process. Traditionally, assets were allocated, an approach that works well if indexed benchmarks are the only considerations. But return comes from risk, not from assets. Some types of risk require cash assets, such as the underlying indexed portion of an active equity strategy. Other types of risk use no incremental cash, such as the active management portion of the same equity investment or currency. Still other types of risk use some cash, but cash is not at all representative of the risk, such as many total return strategies.

Thus far I have focused on how decisions might be made if risk were the commodity to be allocated rather than assets. The mental model mirrors the optimizer’s mathematical model:

  • Where are the best risk-adjusted returns?
  • Where is the best diversification?
  • What are the correct proportions for each component?

A risk allocation is represented in Figure 5. The pie chart shows the allocation to markets (stocks and bonds indexed) and the allocation to management skill (active management of stocks and bonds and total return strategies). The proportions are those of relative contribution to aggregate portfolio risk, not relative proportions of cash usage. Unlike the more standard asset allocation shown in Figure 1, the relative proportions of the ultimate returns of this portfolio will mirror this risk allocation pie chart.
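
Here is a sketch of how the proportions behind such a pie chart can be computed (illustrative weights and covariances, not the author’s figures): each component’s contribution to aggregate portfolio risk is its weight times its marginal contribution to risk, and the contributions sum to total portfolio volatility.

import numpy as np

cov = np.array([[0.0225, 0.0045, 0.0010],    # assumed annual covariance:
                [0.0045, 0.0036, 0.0005],    # indexed stocks, indexed bonds,
                [0.0010, 0.0005, 0.0100]])   # long/short (skill) strategy
weights = np.array([0.55, 0.35, 0.10])       # cash allocated to each component

port_vol = float(np.sqrt(weights @ cov @ weights))
contribution = weights * (cov @ weights) / port_vol   # contributions sum to port_vol
risk_share = contribution / port_vol                  # fractions summing to 1.0
for name, share in zip(["indexed stocks", "indexed bonds", "long/short"], risk_share):
    print(name, round(float(share), 2))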

Increasingly, market leaders are using the new analytical tools that have emerged from improvements in technology. It is now possible to understand how the various parts and levels of a portfolio fit together from the perspective of risk, which drives return. We are convinced that, in the next several years, portfolios that are managed through professional risk allocation will outperform their peers. Their efficient frontiers will be higher than others, affording more return at each given level of risk. We believe that ultimately the term “risk management” will evolve to “risk allocation” and become synonymous with “portfolio management.”


New Software Purchases

BUYER | SOFTWARE BRAND/PRODUCT | DESCRIPTION
CECA (Spanish Confederation of Savings Banks) | C-ATS | Risk management and derivatives trading solution
Chase Manhattan; ABN AMRO | FNX: Sierra System | Back-office and general ledger modules for global futures and exchange-traded options business
JSB Rodina Bank; Banco Mello Valores | Intermark Solutions: Focus FX Advanced Exotics, Equities, Fixed-Income modules | Derivatives trading and risk management

New Software Releases

COMPANY | PRODUCT | DESCRIPTION
Algorithmics (Contact: Marianne Kupina, 416-217-4175) | RiskMapper 3.1; AlgoExpress; HistoRisk 2.1 | Software mapping tool and data service; advanced scenario-generation software
Axiom Software Laboratories (Contact: Patrick Connor, 212-753-1900) | RiskMonitor enhancements | Risk management engine linked to third-party plug-in analytics and valuation and pricing models offered by Financial Engineering Associates (FEA)
FAME Information Services (Contacts: Phil Cenatiempo, 212-506-0300; David Gow, 44-171-367-5205) | FAME 8.0 | Upgrade for FAME’s database management solution and integrated historical market information software suite of products (7.8)
FNX (Contact: Samantha Roady, 212-363-9500) | Sierra System | Multisite, multidirectional trade replication
Henwood Energy Services (HESI) (Contact: 916-569-0985) | Bid Wizard | Bidding generation software
Mamdouh Barakat Risk Management (MBRM) (Contact: Mamdouh Barakat, 44-171-628-2007) | Universal Convertibles Add-in 8.1 | Convertible bonds pricing and risk management system
NumeriX | TreeXpress+ | Toolkit for pricing financial derivatives
Triple Point Technology (Contact: Diane Kingsbury, 203-291-7979) | TANGO | Integrated energy commodities trading system for small and mid-sized groups trading in petroleum and petroleum products
