Roundtable: The Limits of Models

How far can models be trusted? How can they be checked and adapted? What other factors need to be considered?

The second session of the Derivatives Hall of Fame roundtable, entitled "First Kill All the Models,” examined the limitations of modeling in general. While at least one participant branded models as something to be avoided if at all possible, all agreed that they have serious limitations and should not be trusted implicitly. Models, at best, are flawed maps of an uncertain terrain—abstractions of reality. They need to be carefully adapted to the trading environment with sound financial concepts. To be truly useful, models should try to account for investment returns as well as investment risk. And ideally, they should also try to measure the operational and credit risks that are often ignored.

Much was made of the necessity of using input variables that are both intuitive and tradable. Models also need to be back-tested and calibrated against changing market conditions, and carefully examined to see how they affect P&L. In the final analysis, models may not be that critical to bottom-line success—a good trader, it was argued, can usually beat a good model, and a bad model may cause a host of new problems. That's why panelists concluded that models and their inputs need to be challenged in an ongoing debate between traders, managers and auditors.

"How should we use models? And what kinds of idealized assumptions do they make?"
—George Holt

George Holt: The topic here is "First Kill All the Models,” which would indicate that perhaps we can get away with throwing out all of our models and not worry about them anymore. However, trading, investment and financing activities involve decisions between choices that reflect the decision-maker's view or model of the world, so models are inherent to these activities.

But to step back a little bit from the topic, I think it's worth asking two questions: Which models are relevant, and which models should be used in specific situations?

There are a number of different ways we actually use models. We may use a model to communicate a proposed trade—quoting a Black-Scholes volatility for an option, for instance. In each case, you'd have to ask how much the counterparties need to know about what went into the model; a standardized model is required to have a productive trading conversation. You might also use a different model, such as Black-Derman-Toy, to place your own view on the actual value of the instrument traded. Your view will likely differ from your potential counterparty's; otherwise, there would be no incentive for trading. You might use yet another model to determine a replication strategy to hedge a traded instrument. Finally, you might use a different model again to provide a measure of the risk of a portfolio of instruments.

Many existing models make idealized assumptions about the behavior of derivative and underlying markets that are likely to hold only in a limited number of situations: liquidity of the underlying market; stability or "stationarity"—that underlying prices behave as a stationary stochastic process; divisibility of trades—when in reality liquid trading often occurs only in specific discrete lot sizes; fungibility—that similar underlying assets can be treated as being the same; and non-perishability—when in reality the underlying asset may not be storable, or its quality may change over time. We need to address how these considerations are reflected in models for the underlying and for derivative contract pricing.

Of course, models aren't the whole story here. The acquisition and analysis of model inputs are equally important, particularly the underlying price data describing what's really going on in the markets. It is important to have a thorough understanding of the underlying markets to make sure one's data are actually representative of what's going on.

These are some of the questions we can discuss as we go through this session.

Emanuel Derman: George mentioned a couple of things that were on my mind as well, and I'm going to try to expand on them. But first of all, I want to disagree with the title of this session, "First Kill All the Models." I actually believe the opposite. To paraphrase Mao: Let a thousand models bloom. And I'll try to say why I believe in a revival of models.

What's the purpose of models? Models are descriptions of idealized worlds; they are only approximations, if even that, to the real hurly-burly world of finance and people and markets. Even in engineering, that's true. Models don't describe the real world. They describe an approximation to the real world at best. But you try to use them to give you a value for something in the real world. The question is, How should you use models?

I try to keep several things in mind.

1. Be aware that the models you are using are "gedanken" experiments—the German word physicists use for the imaginary experiments they perform to get a feel for how things would behave, even when physical experiments can't be done carefully enough to keep up with their imaginations. Einstein, for example, would think about what it would be like to ride along the edge of a wave moving at the speed of light and what he would see. I think we're doing something like that. We are investigating imaginary worlds, trying to get some value out of them, and seeing which one best approximates our own.

There's nothing wrong with having a lot of different models. There really isn't much precision in finance—certainly not as much as there is in physics. I think you want to stress-test many world views, to see the answers you'd get for the problem in a whole bunch of different consistent worlds, even if they are imaginary. So a multiplicity of models is good.

2. In finance and in engineering, as opposed to physics, you need to be constantly aware that the models you are using are wrong or inadequate. Some people use the word "mis-specified," but I don't like that term, because there is no obvious Platonic specification of the trading world.

3. And then, what are you going to do about the fact that they're wrong or imperfect? You need to think about how to account for the mismatch between models and the real world. You want to think about things, as George mentioned, like transaction costs, illiquidity, jumps, the fact that markets close, the fact that there's contagion—and try to have models, or at least scientific thoughts, about all the corrections that should be made to the idealized models you're using.

"Find the model that best approximates your world...so the more models the better. Be aware of when your model is wrong and the corrections that need to be made. Use variables that are intuitive and easily understood. Make sure the models are calibrated properly. Use financial concepts—not math concepts.”
—Emanuel Derman

4. Models are used to relate things traders can think about easily to things they can't think about easily. If somebody asks what you should pay for a deep-out-of-the-money option compared with an at-the-money option, it's hard to say directly whether the answer is 23 cents or two dollars. But if you've got a model, and if you know what volatility is out of the money compared with at the money (and traders—or anybody in the real world—can get a feel for how volatility varies with strike), then the model is a filter that lets you turn your perceptions about volatility into dollar values or hedge ratios.
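(A minimal sketch of the "filter" Derman describes, assuming a textbook Black-Scholes call; the strikes and skewed volatilities below are purely illustrative, not anyone's market.)

```python
from math import log, sqrt, exp, erf

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def bs_call(spot, strike, t, rate, vol):
    """Black-Scholes price and delta of a European call."""
    d1 = (log(spot / strike) + (rate + 0.5 * vol ** 2) * t) / (vol * sqrt(t))
    d2 = d1 - vol * sqrt(t)
    price = spot * norm_cdf(d1) - strike * exp(-rate * t) * norm_cdf(d2)
    return price, norm_cdf(d1)

spot, t, rate = 100.0, 0.5, 0.05

# A trader's perception of the skew: a higher implied volatility for the
# deep-out-of-the-money strike. The model turns those perceptions into
# dollar values and hedge ratios.
for strike, vol in [(100.0, 0.20), (130.0, 0.28)]:
    price, delta = bs_call(spot, strike, t, rate, vol)
    print(f"K={strike:6.1f}  vol={vol:.0%}  price=${price:5.2f}  delta={delta:.3f}")
```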

That said, I think good models ought to use variables that people can think about intuitively. It is good to have describable variables or factors that have names. By and large, I prefer models where one of the variables is, say, the curvature of the yield curve rather than factor number three in some principal component model. So, models should have inputs that are at least intuitively well-understood. That's not to say you can't introduce new variables, but the concepts ought to be there before the mathematics, which is something I'm going to come to again later.

5. Another thing George mentioned is the inputs to the models. In the practitioner's world, one thing that always seems important to me, and is often ignored in academic life or by new people on the Street, is the problem of calibration. Models work best when they are used to extrapolate or interpolate from things whose values you know to things whose values you don't—for example, from listed instruments to more exotic over-the-counter instruments. They let you move smoothly into the unknown from the things you already know.

This is almost as important as the model. It's easy to come up with three-factor models, but what's most important is to make sure there's some way to fit that model to the world that you know and, in fact, fit it smoothly to the models people were using to apply their intuition to before.

If you want to build a yield curve model that incorporates interest rate volatility, you have to remember that today's bond yields are known and are described in terms of a deterministic forward curve. Somebody says, "a bond yields 9 percent." If you add interest rate volatility to that model, you have to make sure that, even with volatility, the model still values that bond at a 9 percent yield—the same yield everybody else gets, whatever model they use.

In the same way, if you want to add stochastic volatility to options models, you can't just wake up one day and put in a new factor, because it won't price options correctly. Today everybody is pricing off the Black-Scholes model, and you want to recover those market prices; otherwise there is a big calibration and consistency problem. It's not just a question of adding new variables—it's making sure that you add them in a way that's consistent with the prices that people give you today.
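(A toy illustration of the calibration point, in the spirit of Derman's 9 percent bond example: a one-period binomial rate model is re-centered by bisection until, volatility and all, it reprices the bond exactly. The 15 percent volatility and the one-period tree are assumptions for the sketch, not any production model.)

```python
from math import exp

SIGMA = 0.15                # assumed lognormal rate volatility
R1 = 0.09                   # today's known one-year rate
MARKET = 1.0 / 1.09 ** 2    # two-year zero priced off a 9 percent yield

def model_price(center):
    """Two-year zero on a one-period binomial tree: the year-two short
    rate is center*exp(+SIGMA) or center*exp(-SIGMA), each with
    risk-neutral probability one-half."""
    up, down = center * exp(SIGMA), center * exp(-SIGMA)
    year_two = 0.5 * (1.0 / (1.0 + up) + 1.0 / (1.0 + down))
    return year_two / (1.0 + R1)

# Calibration by bisection: shift the tree's center until the model,
# volatility included, reprices the 9 percent bond exactly.
lo, hi = 0.01, 0.30
for _ in range(60):
    center = 0.5 * (lo + hi)
    if model_price(center) > MARKET:    # model too rich -> rates too low
        lo = center
    else:
        hi = center

print(f"calibrated center rate: {center:.4%}")
print(f"model {model_price(center):.6f} vs market {MARKET:.6f}")
```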

6. Finally, along the lines of what Nassim Taleb was talking about earlier, I think that although mathematics is important, mathematics isn't primary. It's mostly complementary. You must have economic or financial insight first—or at least at the same time. Models are based on concepts, not on advanced mathematics. My feeling is that it's always better to start with some idea, some concrete economic or physical or financial mechanism, and then express it in mathematics, rather than just write down an equation.

It's surprising that math works so well, but think about physics. My daughter's doing high school mechanics now and she's trying to calculate how far something falls in three seconds. And it is pretty amazing to think that somebody can write down an equation that tells you how far a rock falls in three seconds. Because you look at the rock, and you think, how does it know how to get there in three seconds? Does it have a little Pentium inside that's computing where it should get in the next millisecond? It doesn't.
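(For the record, the calculation is the constant-acceleration law d = ½gt²: with g ≈ 9.8 m/s², a dropped rock falls about ½ × 9.8 × 3² ≈ 44 meters in three seconds.)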

So, it's a funny thing that math works. There's no obvious reason why it shouldn't work in finance too, but somehow it doesn't work quite as well. In physics, you are still playing against God, where the rules are to some extent there all the time and unchanging. In finance, it just doesn't work as well. And even if people are made by God, they somehow aren't as predictable as natural laws.

That is another reason why you want to have a lot of models describing a lot of different consistent worlds, and then be able to choose between them and see all the rational answers you could get in a bunch of rational worlds. Then you can decide which one of those is closest to the world you're living in and make corrections for that.

Joe Kolman: I think we're on a more positive note this time. I think we've acknowledged in the first session and the beginning of this one that a model is a map and it isn't the territory. It's not a perfect map, but the question then becomes: How do you make these maps better than they are?

James Lam: I think that's the right question. The debate shouldn't be whether we use value-at-risk models or not. The right question is how to make VAR applications better, more widely accepted and more valuable to the business. Let me propose a few ideas.

One is to incorporate return measurement in VAR models, whereas current VAR models tend to focus only on potential loss. Yet if you look at the models that have succeeded over time, such as capital asset pricing, option pricing, RAROC and EVA models, they all incorporate some quantification of upside potential. Thus, I think it is important for us to extend VAR applications to measure both risk and return.
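(A minimal sketch of pairing a return measure with VAR, assuming a historical-simulation VAR at the 95 percent level and a purely illustrative P&L series.)

```python
import random

random.seed(7)
# Illustrative daily P&L history for one desk, in $ thousands; in
# practice this would come from the firm's own revaluation records.
pnl = [random.gauss(20.0, 250.0) for _ in range(750)]

# Historical-simulation VAR: the loss at the 5th percentile of daily P&L.
var_95 = -sorted(pnl)[int(0.05 * len(pnl))]

# Lam's point: report the return alongside the risk, not the risk alone.
mean_daily = sum(pnl) / len(pnl)
print(f"95% one-day VAR       : {var_95:8.0f}")
print(f"mean daily P&L        : {mean_daily:8.0f}")
print(f"return per unit of VAR: {mean_daily / var_95:8.3f}")
```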

"Try to incorporate return measures into VAR analysis. Don't ignore operational or credit risks.”
—James Lam

Second, I believe it is important for risk management tools such as VAR to keep up with the trend toward enterprise-wide risk management. Think of the headline stories about major losses—nearly all of them resulted from operational issues. Yet few VAR models incorporate operational risk or credit risk. It's like being told to build a house and being handed a hammer: the hammer will be a very useful tool 20 percent of the time, but for the other 80 percent I won't have the proper tools. You know what? I'm not going to have a lot of confidence in building that house.

Third, current VAR models are mostly static. They measure potential loss of the current portfolio with no considerations of future actions or strategies, or even the reinvestment of future cash flows. While this may not be a critical issue for short-term trading positions, it can be misleading if you look at the investment portfolio or the rest of the balance sheet. It reminds me of when I started my career in the early 1980s when banks measured interest rate risk using the static maturity gap. Banks later discovered that static gap measures can be misleading, and they migrated to more dynamic duration and simulation models.

Kolman: Bill Margrabe?

William Margrabe: Well, there's an old saying you may have heard, "It isn't the size of the wand; it's the wizard who waves it.” I think that that's true about models.

It's important to take a look at the traders who use the models, and at the managers of the traders. There's a lot of interaction there. A good trader with a bad model can beat a bad trader with a good model. For example, a good trader using Black's model to price bond options could fleece a bad trader using a proprietary model if the good trader knows how to get the right inputs from brokers and the bad trader doesn't make all the required adjustments. Also, I've seen a good trader with no model at all take millions of dollars from a bad trader with a fancy model that is satisfactory for the equity market, but fatally flawed for the crude oil futures market.

"A good trader can beat a good model. A bad model can cause a trading manager big problems. Traders can play all sorts of games with model inputs, and risk managers are often reluctant to challenge them. Trading managers don't always want to know when their models are off base.”
—William Margrabe

Then there's the interaction between the trader, the model and the manager. A bad or dishonest trader with a model that might be off by only a few basis points can use that to create big problems for his or her firm and create huge amounts of fake P&L. If the manager is not on top of his or her game, then the manager will end up getting seriously embarrassed. We've seen that happen a few times also.

Stan Jonas: I have a question that is a corollary of Emanuel's: What is the purpose of a model, and how do you actually pick the superior one from among all these competing models? We see people talking about "marking their position to model." That always strikes me as a self-contradictory point of view. There is supposed to be some way of falsifying a model. Scientific honesty consists of specifying, in advance, an experiment such that if the result contradicts the theory, the theory has to be given up. Scientists put forward statements and test them step by step; in the empirical sciences, they construct hypotheses and test them against experience by observation and experiment. What tells us in finance whether the trading model we are using is right or wrong? And yet in finance, or at least in most banks, one takes the model, trades on it, and then marks one's positions to it. So in this kind of modeling there is, by definition, no possibility of falsification. We don't look for real truth; we just satisfy self-imposed consistencies.

"How can we know when a model is wrong? How can we make it consistent? What are the criteria for judging models? Too much attention to complicated models is a dangerous waste of time.”
—Stan Jonas

Derman: I agree with what you're saying, but I don't have a good answer. You can tell when a model is wrong. It is much more difficult to tell when a model is right. I don't think you can. I think a model works (serves its purpose) if it's a useful way of thinking about things. So, in Black-Scholes, you can ask, What will happen to my position if interest rates go up, if volatility goes up or if dividend yields go up? I honestly think that's about as good as you can do—having a rational way of exploring what might happen to you in a world in which you've articulated the variables that can affect value.
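(A sketch of the questioning Derman describes—bump one input at a time and see what happens to the position's value. The Black-Scholes call with a continuous dividend yield and all inputs below are illustrative assumptions.)

```python
from math import log, sqrt, exp, erf

def norm_cdf(x):
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def bs_call(spot, strike, t, rate, div, vol):
    """Black-Scholes call with a continuous dividend yield."""
    d1 = (log(spot / strike) + (rate - div + 0.5 * vol ** 2) * t) / (vol * sqrt(t))
    d2 = d1 - vol * sqrt(t)
    return spot * exp(-div * t) * norm_cdf(d1) - strike * exp(-rate * t) * norm_cdf(d2)

base = dict(spot=100.0, strike=100.0, t=1.0, rate=0.05, div=0.02, vol=0.20)
value = bs_call(**base)

# Derman's questions, asked one at a time: what happens to the value if
# rates, volatility or the dividend yield each move up one point?
for name in ("rate", "vol", "div"):
    bumped = dict(base, **{name: base[name] + 0.01})
    print(f"{name:4s} +1pt: {bs_call(**bumped) - value:+.3f}")
```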

Dan Mudge: Just following up on a couple of Stan's comments. How do we know when a model is right?

I find it ironic that in our first conversation we talked about VAR, where part of the formal requirements involve back-testing—to get an idea of how good VAR's predictive qualities are and to fine-tune it. Yet on trading desks, while individual traders sometimes back-test, it is rarely done systematically. It is important to do some form of back-testing on the models used on trading desks—using historical data, or going forward and tracking the sources of errors and problems in a formal way—just as is done for VAR.

Value-at-risk is often used in a strategic way with a fairly low level of precision, whereas with pricing models you're making fine-tuned decisions—and yet there is less formality in back-testing them.
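(A minimal sketch of the kind of systematic back-test Mudge is calling for: count the days on which realized losses exceeded the model's risk forecast and compare with the number expected. The series below are simulated placeholders for a desk's own records.)

```python
import random

random.seed(11)
days = 500
# Simulated stand-ins for a desk's records: each day's risk forecast
# (a 95 percent one-day VAR, here held constant) and the realized P&L.
var_forecast = [1.0] * days
realized_pnl = [random.gauss(0.0, 0.62) for _ in range(days)]

# An exception is a day whose loss exceeds the forecast.
exceptions = sum(1 for v, p in zip(var_forecast, realized_pnl) if p < -v)
print(f"exceptions: {exceptions} observed vs {0.05 * days:.0f} expected at 95%")
# Many more exceptions than expected suggests the model understates risk;
# many fewer suggests it is too conservative. Formal tests (Kupiec's,
# for example) sharpen this comparison.
```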

Following up on his second point—about how models can be used to motivate people—we often talk about risk in isolation from revenue or return. I would say that the accounting for your positions goes hand in hand with your risk. For example, say you buy a long-dated option that's marked to model. It doesn't really trade; it's illiquid. Say you buy it at 5 percent volatility, you think the real value is 8 percent, and you take a big profit up front. Your perception of the risk is entirely different than if you bought it for your firm, marked it at 0 percent volatility and took a loss. Your risk is very much influenced by the incentives that you give traders and the width of the parameters you give them in pricing. It will influence the types of decisions they'll make.
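(A worked illustration of Mudge's example, using a textbook Black-Scholes call with the rate set to zero to keep the numbers simple—the 5 percent purchase volatility and 8 percent mark are his, everything else is assumed.)

```python
from math import log, sqrt, exp, erf

def norm_cdf(x):
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def bs_call(spot, strike, t, rate, vol):
    d1 = (log(spot / strike) + (rate + 0.5 * vol ** 2) * t) / (vol * sqrt(t))
    return spot * norm_cdf(d1) - strike * exp(-rate * t) * norm_cdf(d1 - vol * sqrt(t))

spot = strike = 100.0
t, rate = 10.0, 0.0           # a long-dated option; zero rate for simplicity

paid   = bs_call(spot, strike, t, rate, 0.05)   # bought at 5 percent volatility
marked = bs_call(spot, strike, t, rate, 0.08)   # marked to model at 8 percent
print(f"paid {paid:.2f}, marked {marked:.2f}: day-one 'profit' of "
      f"{marked - paid:.2f} rests entirely on an unverifiable mark")
```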

Ron Dembo: James and Emanuel touched on the essence of modeling here. The reality is that mathematical modeling is not something that you do really believing that you have the absolute answer. The question—Is the model correct?—is almost a contradiction in terms, because a model is an abstraction of the real world and the model's never right. It would only be right if it was the real world.

The real use of models, and their value, is for comparative purposes. As a single trader looking at a single trade, you have no way in hell of comparing the relative merits of a number of trades unless you have some benchmark with which to make the comparison. The model provides that benchmark. So you never seriously believe that the number you get—44.27563—is really the absolute answer.

"Models are abstractions: don't expect answers to the fourth decimal point.”
—Ron Dembo

I remember when I was at Goldman Sachs, and we were producing these wonderful models for portfolio immunization. I had people asking me for the numbers down to the last cent. And they were really serious about this. These were numbers that involved billions of dollars, and by the time you get to the last cent, you have the most perfect random number generator imaginable.

So I think people take models too literally. You take the real world; you abstract it; you know you're making gross assumptions—but you've created a means of benchmarking alternatives. James' point is the one I liked most here: let's not debate whether we should or shouldn't use models; let's debate how we can make them better.

Richard Tanenbaum: Emanuel said that one of the things he'd like to see in a model is variables one can understand. I would take it a step further and say that you always want a bias toward observable variables in models. And when I say observable, I mean tradable. Implied volatilities are only observable if everyone agrees that Black-Scholes is what to use. Otherwise, if you use Black-Scholes and another guy uses Black-Scholes B, you end up with two different implied volatilities. So you haven't really observed it any more than you can trade it.

"Try for variables you can actually trade. Implied volatility may be less meaningful than you think.”
—Richard Tanenbaum

I would always argue that you should try to have a tradable variable inside the model instead of a non-tradable one. To take it a step further, I might propose that one of the reasons Black-Scholes has done so well over the last 25 years is that it has only one unobservable variable. That way you can always talk about what the implied volatility is, and it becomes a number that feels intuitive even though we can't observe it. If you have two implied things, like volatility and volatility of volatility, you can no longer solve for a single implied value: you have to fix one—say, this is the volatility of volatility—before the other can be implied. So it becomes more difficult. It means the single number we call implied volatility is, in some sense, an amalgamation of all the unobservable, untradable parameters. And that's why I find it funny: we talk about implied volatility as if it's meaningful, and yet to a large extent it is, by definition, everything that we cannot observe.
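(A sketch of Tanenbaum's point: with one unobservable, a single market price pins the implied value down by simple bisection; with two, it cannot. The Black-Scholes implementation and the numbers are illustrative.)

```python
from math import log, sqrt, exp, erf

def norm_cdf(x):
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def bs_call(spot, strike, t, rate, vol):
    d1 = (log(spot / strike) + (rate + 0.5 * vol ** 2) * t) / (vol * sqrt(t))
    return spot * norm_cdf(d1) - strike * exp(-rate * t) * norm_cdf(d1 - vol * sqrt(t))

def implied_vol(price, spot, strike, t, rate):
    """One unobservable, one price: bisection pins the volatility down."""
    lo, hi = 1e-4, 3.0
    for _ in range(80):
        mid = 0.5 * (lo + hi)
        if bs_call(spot, strike, t, rate, mid) < price:
            lo = mid
        else:
            hi = mid
    return mid

print(f"implied vol: {implied_vol(6.89, 100.0, 100.0, 0.5, 0.05):.4f}")
# With two free parameters -- volatility and volatility of volatility,
# say -- one market price no longer determines either: you must fix one
# by assumption before the other can be "implied."
```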

Michael Onak: The focus has been on the problems with and the shortcomings of models, but I think we all agree that they do serve a purpose. A model provides a convenience that allows us to try to communicate a lot of information in a condensed time frame. It's a simplified assumption with some degree of error, but it provides something, a benchmark. I suspect that if we had a world today without models, the first thing we'd do would be to build some.

"A model is a simplification that provides a benchmark.”
—Michael Onak

Jonas: I agree with Emanuel that we need to take an evolutionary approach to this, but much as the Lucas critique argues in macroeconomics, the models themselves—and people's belief in them—change the mental horizon under which we trade. It's impossible to go back before Monica Lewinsky's birth. We are now creatures of Black-Scholes, and that's the way we think. What happens, however, is that we start taking on risks that we probably wouldn't have taken on if we didn't have this false confidence in the models.

Why improve a model if we don't know it's wrong? We have to have some criteria by which we judge our models. It's nonsensical to think that just because a model exists, it's satisfactory. We need what Karl Popper called a "criterion of refutation," which has to be laid down beforehand; it must be agreed upon. Traders have such a criterion: when they lose money, they no longer trade the model—they modify it, change it or refute it entirely. With the Black-Scholes model in particular, it took many years before this practical (yet scientific, in a true sense) feedback began changing traditional academics' interest in how to look at the model. It's only within the last eight or nine years that academics have seriously begun looking at how to modify the model to catch up with the markets—a kind of Darwinian evolutionary process in which, as though through a complicated "genetic algorithm," the optimal model emerges.

If we take a look at the models that we always create and the ways derivative securities are produced, it seems as though we have bright analytical people creating models on one side, and then we have what are postured as rather naive (unscientific) people buying these products on the other side. Ironically (and in some moral sense, thankfully), over the long haul, it always seems that those naive people make a lot more money than the people who sell the product. (Laughter.)

As a financial institution's shareholder I would feel that there's something wrong with how we devote our resources when somebody who simply reads a newspaper and makes a bet based on what he thinks Alan Greenspan is going to do makes more money on a risk-adjusted basis than a team of former Russian physicists, who are trying to figure out what the correlation is and will be between, say, two-month LIBOR and 10-year LIBOR for the next 10 years. Perhaps it is a waste of resources and the risk capital at hand. If I were an investor, a shareholder in the bank, I just wouldn't want my money spent doing that.

Margrabe: Unfortunately, sometimes, it seems that the only way to falsify models these days is to have a shakeout. I'll give examples of how some traders, risk managers and management don't seem to be doing their jobs of trying to falsify the models.

Traders get to choose the inputs for their models. Well, what inputs do they use? There's always some excuse they can use to manipulate the numbers. The markets may not be liquid, so at the end of the day, the markets may settle on prices and volatilities that aren't realistic. So, some traders will argue that those prices and volatilities don't really reflect the markets, and that they should use another set of numbers instead. Many controllers are in no position to challenge them, so these traders get to mark their positions where they want.

Now, take a look at the risk managers. Many of them also lack the confidence to challenge the traders. I'll give you an example. I know a risk manager at a company that uses position limits based on VAR; the VAR model was a Monte Carlo model. One trader was over his limit, but the risk manager didn't have the nerve to challenge him—and he knew he'd lose the battle, because the guy was only a little bit over the limit. So he ran his Monte Carlo model again. This time, the trader was within the limit. Problem solved. (Laughter.)
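(A sketch of why rerunning helps: Monte Carlo VAR estimates carry sampling noise, so the same portfolio yields different numbers on different seeds. The toy position below is assumed.)

```python
import random

def mc_var(seed, paths=2000):
    """95 percent VAR of a toy position from a Monte Carlo revaluation;
    the position and its distribution are purely illustrative."""
    rng = random.Random(seed)
    pnl = sorted(rng.gauss(0.0, 1.0) for _ in range(paths))
    return -pnl[int(0.05 * paths)]

# Same portfolio, same model -- only the random seed differs.
for seed in (1, 2, 3):
    print(f"run {seed}: VAR = {mc_var(seed):.3f}")
# The run-to-run spread is pure sampling noise; rerunning until the
# number lands inside the limit exploits exactly that noise.
```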

Then you take a look at the managers. Some managers absolutely refuse to see the models of their traders falsified. They're like the guy in the joke who thought his wife was committing adultery, and sent a detective around to follow her. She and this guy went into a motel room and began hugging and kissing and tearing off their clothes, and then they closed the blinds and the detective reported these results back to the man. And the guy said, "Isn't that always the way it is. You just never know.” (Laughter.) I think some managers are the same way.
