The Crisis In Credit Modeling
Excerpts from “Trading and Management of Credit and Default Risk”
Sponsored by Chase Securities
Cosponsored by Fitch IBCA
Joyce Frost, moderator: vice president, Chase Securities
Brian Dvorak: director, KMV Corp.
Christopher Finger: partner, RiskMetrics Group
Michel Araten: senior vice president, Chase Manhattan Bank
Ethan Berman: partner, RiskMetrics Group
There's a disturbing lack of consensus in credit modeling these days. Do you prefer structural models based on equity or reduced-form models based on debt? For that matter, how should you measure default probabilities and the correlations between credit events? Then add in all the discrepancies between credits with the same rating, and you might begin to wonder whether these models are worth anything at all.
Brian Dvorak: “Long-Term Capital Debacle Exposes Limitations of Merton's Model,” declared a recent headline. Another recent article also criticized the Merton model. I'm here to tell you that reports of the death of the Merton model are greatly exaggerated. I'd like to tell you a little about what the Merton model is and why it's relevant. One of the key inputs for pricing credit derivatives, and for default swaps in particular, is the probability of default itself. Another key part of the picture is the joint probability of default of the reference asset and the protection seller, if you're buying credit derivatives. The Merton model provides guidance for how to calculate these probabilities and price credit-risky assets.
Just what is the Merton model? It falls into the class of models we call structural models. A structural model such as the Merton model says that a firm will default when the market value of its assets falls below the value of the obligations it has payable. That threshold is called the default point.
One of the problems with this model is that the market value of the assets is not observable—we can't simply look it up on a Bloomberg screen. We have to come up with some other methodology for inferring what the market value of assets is. Fortunately, we can rely on option-pricing theory to do that. We look at the market value of the equity and use option-pricing theory to come up with the market value of the assets. It's not quite that simple, because it also involves the volatility of assets, but it comes down to two equations and two unknowns, and from these we can determine how far away a firm is from defaulting. We state that distance to default as a certain number of standard deviations of asset value.
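The "two equations, two unknowns" step can be sketched as follows: equity is priced as a Black-Scholes call on the assets, and equity volatility is linked to asset volatility through the hedge ratio. All numbers here are purely illustrative, and a production implementation (such as KMV's) treats the default point and drift far more carefully.

```python
import math

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def merton_assets(E, sigma_E, D, r, T):
    """Back out asset value V and asset volatility sigma_V from observed
    equity value E and equity volatility sigma_E, treating equity as a
    call on the assets struck at the default point D maturing at T."""
    V, sigma_V = E + D, sigma_E * E / (E + D)        # starting guesses
    for _ in range(500):
        d1 = (math.log(V / D) + (r + 0.5 * sigma_V ** 2) * T) / (sigma_V * math.sqrt(T))
        d2 = d1 - sigma_V * math.sqrt(T)
        E_model = V * norm_cdf(d1) - D * math.exp(-r * T) * norm_cdf(d2)
        V_next = V - (E_model - E)                   # damped Newton step on V
        sigma_next = sigma_E * E / (V_next * norm_cdf(d1))  # hedge relation
        if abs(V_next - V) < 1e-10 and abs(sigma_next - sigma_V) < 1e-12:
            break
        V, sigma_V = V_next, sigma_next
    return V, sigma_V

# Illustrative inputs: equity worth 4, equity vol 60%,
# obligations of 10 due in one year, 5% riskless rate.
V, sigma_V = merton_assets(E=4.0, sigma_E=0.60, D=10.0, r=0.05, T=1.0)
dd = (V - 10.0) / (V * sigma_V)  # distance to default, in asset std. deviations
```

With these inputs the implied asset value comes out near 13.5 and the firm sits roughly one and a half asset standard deviations above its default point.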
There is an alternative methodology that has been proposed for modeling credit risk that falls into the category of reduced-form models, which assume away the causality of default. These models essentially stipulate that default occurs as a random event. This particular type of model requires us to have the market price of the firm's debt. We have to know what the spread is on the debt in order to be able to infer something about the incidence of default. Once we've got that, we can calculate the value of derivatives on that debt. It would be quite a useful tool for pricing derivatives—if we knew the price of the debt itself.
Unfortunately, this model can't be used for pricing the debt—that's an input to the process. In some ways, this assumes away the difficult part of the problem, because one of the things we would like is some sort of independent assessment of what the probability of default is. Spreads cannot tell us this, since they encompass not only the probability of default but other things such as optionality and some expectation of recovery, and sometimes it's difficult to disentangle these issues.
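The entanglement of default probability and recovery is easy to see with the standard reduced-form approximation, spread ≈ intensity × (1 − recovery). The spread input below is invented; the point is that the same observed spread implies very different default intensities depending on the recovery assumption.

```python
import math

def implied_intensity(spread, recovery):
    """Constant default intensity implied by a credit spread under the
    common reduced-form approximation: spread = lambda * (1 - recovery)."""
    return spread / (1.0 - recovery)

spread = 0.02  # 200 bp -- an input to the model, not an output
for recovery in (0.20, 0.40, 0.60):
    lam = implied_intensity(spread, recovery)
    pd_1y = 1.0 - math.exp(-lam)  # one-year risk-neutral default probability
    print(f"recovery {recovery:.0%}: intensity {lam:.4f}, 1y PD {pd_1y:.4f}")
```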
There's another modeling problem with reduced-form models: It's tricky to come up with a decent framework for looking at joint default correlations. Correlations have to be added to the model in an artificial way. In some models, you must add some randomness to the default rate itself, which has the effect of inducing correlation among default events. In contrast, if you have a structural model, you're able to look much more directly at the correlations between defaults.
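The structural route to joint defaults can be sketched directly: correlate two firms' normalized asset returns and count the scenarios in which both breach their default thresholds. All inputs are illustrative.

```python
import math
import random

def norm_cdf(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def norm_inv(p, lo=-8.0, hi=8.0):
    """Inverse normal CDF by bisection (adequate for illustration)."""
    for _ in range(80):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if norm_cdf(mid) < p else (lo, mid)
    return 0.5 * (lo + hi)

def joint_default_prob(p1, p2, rho, n=200_000, seed=7):
    """Probability that both firms' normalized asset returns breach their
    default thresholds, given asset correlation rho -- the structural view."""
    t1, t2 = norm_inv(p1), norm_inv(p2)
    rng = random.Random(seed)
    hits = 0
    for _ in range(n):
        z1 = rng.gauss(0.0, 1.0)
        z2 = rho * z1 + math.sqrt(1.0 - rho * rho) * rng.gauss(0.0, 1.0)
        hits += (z1 < t1) and (z2 < t2)
    return hits / n

# Two names with 2% individual PDs and 30% asset correlation: the joint
# default probability lands well above the independence product of 0.04%.
joint = joint_default_prob(0.02, 0.02, rho=0.30)
```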
All this raises some important questions. Reduced-form models may have problems, but if the Merton model doesn't work well, they may be a better choice. So is the Merton model any good? How well does it work in practice? Studies going back a number of years that have looked at the Merton model indicate that a naive implementation would give you some pretty bad results. The probability of default and the spread that would come out of the model would not be very indicative of the market price of a particular bond or loan. There are some good reasons for that, but this is not really news. The recent chorus of people saying “Aha, Merton's model doesn't work after all!” is really old news, and practitioners looking closely at these models know that.
One of the main reasons for this is the way the model maps the distance to default—the number of standard deviations that the market value of assets is above the default point. How you map that into the probability of default is a tricky question. It's something that can't be solved using some sort of assumed distribution such as a normal distribution. That's why practitioners building models of this form that work well are using empirical distributions based on actual default events to come up with that mapping process. If you do that, you can actually come up with good results. You can get good probabilities of default that are accurate in terms of predicting default events—and, consequently, you get good prices for debt and debt derivatives.
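The mapping question can be made concrete: a normal distribution implies PD = Phi(−DD), while an empirical mapping interpolates observed default frequencies. The table below is hypothetical, purely to illustrate the shape of the correction; a real implementation calibrates it to actual default histories.

```python
import math

def gaussian_pd(dd):
    """PD implied by assuming normally distributed asset returns: Phi(-DD)."""
    return 0.5 * (1.0 + math.erf(-dd / math.sqrt(2.0)))

# Hypothetical empirical mapping from distance to default to PD.
EMPIRICAL = [(1.0, 0.15), (2.0, 0.05), (3.0, 0.02), (4.0, 0.01), (6.0, 0.002)]

def empirical_pd(dd):
    """Piecewise-linear interpolation in the calibrated table."""
    pts = EMPIRICAL
    if dd <= pts[0][0]:
        return pts[0][1]
    for (x0, y0), (x1, y1) in zip(pts, pts[1:]):
        if dd <= x1:
            w = (dd - x0) / (x1 - x0)
            return y0 + w * (y1 - y0)
    return pts[-1][1]

# At four standard deviations, the normal assumption calls default a
# roughly 1-in-30,000 event; historical frequencies are far larger.
```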
So are these models any good at all? Just as rejecting the Merton model based on a poor implementation would be throwing out the baby with the bath water, rejecting all models would be throwing out the bathtub! I think that there is indeed a role for modeling in looking at credit risk. It's possible to have a good implementation of a model that gives you accurate measures of default risk as well as early warnings of default. How early a warning you get and whether it will come in advance of spreads moving or simply in advance of ratings moving is an interesting empirical question. The answer certainly varies from time to time. But in general, equity markets contain useful information that can be used in structural models to measure default risk and default events.
It comes down to which is more efficient: the equity markets or the debt markets? This may change over time with the growth of liquidity in loans, more transparency in the bond markets and the growth of credit derivatives. Maybe bond markets will become more efficient, but I'll leave it up to you to assess whether there's more efficiency now in the stock market or the debt market. The answer should influence your decision between structural and reduced-form models.
Christopher Finger: I'd like to address the issue of parameter uncertainty that Brian raised.
It's worth reiterating that CreditMetrics takes the parameters that Brian was just talking about as inputs. So the default probabilities are put in a table and we start from there. In a way, we've put the difficult part of the problem on other people. But that means we need to be concerned about how good that number is. You certainly can't even start to evaluate the assumptions we make about our model until you're satisfied that most of the numbers you put in are accurate.
The first parameter that's perhaps questionable is the probability of default for each of the issuers you're analyzing. There are other parameters that also come into question—namely, the correlations we use. If you agree with our assumptions about how things are correlated, do you buy the actual numbers that come out of that? Then there are valuation-type parameters: We use spreads that we observe in the market to value exposures, in order to account for the change in value when a credit migrates to a lower rating.
All these questions come up. One of our first goals was to make it easier to examine the effects of these types of uncertainty on portfolio results. Putting a stress-testing or sensitivity capability into the software was a high priority for us. It's something we actively encourage people to do: Don't take a capital number that comes out of the model as if it were written in stone, but examine the parameters that are uncertain and their effects with default probabilities that are 10, 20 or 100 basis points higher than what you've put in. Examine the effect of spreads tightening or widening. It's an important thing to look at—and it's a message that has been well-received.
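The stress exercise described here can be sketched in a few lines: recompute an expected-loss figure with every default probability shifted by a fixed number of basis points. The portfolio, PDs and severities below are invented for illustration.

```python
# Illustrative three-loan book: (exposure, default probability, loss given default).
portfolio = [
    (10_000_000, 0.0050, 0.45),
    ( 5_000_000, 0.0120, 0.60),
    ( 8_000_000, 0.0030, 0.35),
]

def expected_loss(book, pd_shift_bp=0):
    """Expected loss with every PD shifted up by pd_shift_bp basis points."""
    shift = pd_shift_bp / 10_000
    return sum(ead * min(pd + shift, 1.0) * lgd for ead, pd, lgd in book)

for bp in (0, 10, 20, 100):
    print(f"PD +{bp:>3} bp: expected loss {expected_loss(portfolio, bp):,.0f}")
```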
Most of the interesting comparisons have been made at the portfolio level. In other words, if I assume the same data and assume the same sort of biases in my estimation of those data, what capital numbers do I get with CreditMetrics or other, competing models? Results of studies have varied, but we've seen that the numbers aren't too different. While that makes us as modelers sigh a bit in relief, it also raises a second issue: Where is all this risk coming from? That leads to the next step in the analysis of these models. It's fine if two models give me the same capital number for a portfolio of 1,000 loans. But what are the models telling me about which loans are contributing to the capital required?
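One minimal way to answer "which loans contribute the capital" is a volatility decomposition. Assuming independent defaults and fixed severities (a deliberate simplification; real models add correlation), each loan's contribution to portfolio loss volatility is its variance share, and the contributions sum back to the portfolio figure. The loans are invented.

```python
import math

loans = [  # (name, exposure, default probability, loss given default)
    ("A", 10_000_000, 0.0050, 0.45),
    ("B",  5_000_000, 0.0120, 0.60),
    ("C",  8_000_000, 0.0030, 0.35),
]

# Variance of each loan's loss: (severity)^2 * p * (1 - p).
variances = {n: (e * l) ** 2 * p * (1 - p) for n, e, p, l in loans}
sigma_p = math.sqrt(sum(variances.values()))

# Contribution of loan i = var_i / sigma_p; these sum to sigma_p exactly.
contributions = {n: v / sigma_p for n, v in variances.items()}
ranked = sorted(contributions, key=contributions.get, reverse=True)
```

Note that the ranking need not follow exposure size: here the smallest loan, with the highest PD and severity, tops the list.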
Some regulators have said that issues like this are important only if there is a portfolio of liquid assets that can be actively managed. Our goal now is to convince regulators that it matters anyway. The loans that are being securitized in CLO structures aren't necessarily liquid loans in portfolios that are actively managed. So why did they get picked to securitize? Because they were contributing high levels of capital. There are smart people managing portfolios who look at where contributions to capital are coming from, and act according to this information. Any model or capital regulation has implications for these actions.
In short, we're heading toward more microanalysis of the models and we're also doing more work on the data going into the models.
Michel Araten: Many of the other speakers are academics and consultants, and could be characterized as ministers without portfolios. At Chase, we have a portfolio and we've taken some of these various concepts and used them to help us manage it.
One frustrating issue from a bank's point of view occurs when the bank looks at a credit derivative as opposed to a guarantee on a bank loan. Presumably, the two should be the same from an economic perspective. But what we see is that the different accounting and regulatory treatments result in different assessments of capital and risk. What's going on here?
If a credit derivative is in a trading book, there should be sufficient liquidity for the instrument to be realistically marked to market. There should also be observable volatility of credit spreads so one can conduct some kind of VAR analysis to determine the risks associated with that volatility.
With plain-vanilla derivatives, VAR is greatly affected by the period of time over which one looks at the volatility of spreads and rates. But what we're really concerned with in the credit world is event risk, and I'm concerned that this may be underestimated. How do you measure the volatility of credit spreads for an instrument in a given risk class? If you're looking at BBs as a risk class, you're looking at an index that reflects the volatility of spreads associated with all BBs. If there is a major change that occurs infrequently—a volatility episode, for instance, causing all spreads to widen at the same time—it will be difficult to pick that up using a relatively short observation window. Furthermore, that's quite different from what you're really interested in—namely, the volatility of spreads and the risk associated with the specific, idiosyncratic BB credit that you're evaluating.
The problem with looking at an index is that the names constituting the BB class over any specific period are not formed through cohort analysis; the class also includes names that used to carry ratings other than BB. You've got bonds going in and out of the risk class, as opposed to bonds that started out as BBs and migrated to something else. If one incorporates transition matrices, care must be taken in their use, since recent evidence indicates that there is a lot more migration, and therefore less likelihood that ratings remain stable.
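How migration compounds for a cohort can be seen with a toy transition matrix. The probabilities below are made up, and real matrices have many more rating states, but the mechanics are the same: squaring the one-year matrix gives the two-year migration picture, and the cumulative default rate grows faster than the one-year rate alone suggests.

```python
# Toy three-state one-year transition matrix over (BB, B, Default).
P = [
    [0.90, 0.08, 0.02],  # from BB
    [0.05, 0.85, 0.10],  # from B
    [0.00, 0.00, 1.00],  # default is absorbing
]

def matmul(A, B):
    """Plain matrix product, sufficient for small rating matrices."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

P2 = matmul(P, P)  # two-year transition probabilities for a fixed cohort
one_year_default_bb = P[0][2]   # 2.0%
two_year_default_bb = P2[0][2]  # 0.90*0.02 + 0.08*0.10 + 0.02*1.00 = 4.6%
```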
Referring to the discussion on estimating default probabilities, until the markets are more highly developed, my preference is to construct the estimates of default probability from first principles—either KMV/Merton-type models or what we would call our judgmental assessment using a historical financial analysis approach.
Clearly, one of the problems that banks have in comparing a guarantee on a loan with a credit derivative is a “disconnect” between the events with which they are concerned. Banks are primarily concerned with default, the loss from default and how that might affect their balance sheets. In the credit derivatives world, one is concerned with the migration event in addition to default. A migration produces a mark-to-market decline in real value, but that decline will not be reflected on banks' accrual books.
The differences between the various approaches to the valuation of credit risk will not be resolved unless banks are required to evaluate risks on some kind of mark-to-market basis. Perhaps banks should start to think of credit risk internally on a shadow type of accounting basis. That way, even though they may not mark their portfolio to market, they may understand some of the economic aspects of a mark-to-market valuation.
Joyce Frost: How long will it be before we get a more robust regulatory capital environment?
Ethan Berman: We posed the exact same question to people from the New York and Washington Federal Reserve Banks, and they gave an answer as foggy as the answer I'm about to give. I think they intend to put out something to address that in the first half of next year. Having said that, however, they noted a large caveat involving the models that they found people were using—and, primarily, the data that were going into the models.
The creative data solutions people were using—whether transition matrices on municipalities, transition matrices for Swiss industrials and so on—were so varied, and the market so lacking in data, that it was difficult for the regulators to come out and say, “Yes, this is an acceptable way of approaching the problem.”
Frost: Do you think it will be on an institution-by-institution basis?
Berman: It didn't really get to that point. But I think it may start with some broad-based big-picture initiative, and then proceed individually, institution by institution. It may not be that dissimilar from the way they handled value-at-risk on the market risk side. Although they noted that it seemed like a reasonable approach to the problem, they stressed that modeling credit risk is still in its infancy.
Finger: I would like to add that people at the Fed are actually some of the most progressive regulators in the world with respect to this issue. It was an interesting contrast to the language we heard in London a month ago at the Bank of England/FSA conference about the Basle Accord. I think there was a lot of discomfort with the models because of the lack of validation or back-testing. As we do research on these things, we've been able to hide behind the excuse that we don't have enough data to back-test. I think the time has come to do better than that, since full-blown portfolio credit models are not going to enter the Basle Accord without a fair amount of rigor on that point.
Frost: Along those lines, what type of regulation do you think will come down in the next year?
Berman: I think there will be some distinction given to the riskiness of the asset that doesn't exist today. There will be some way of distinguishing the risk characteristics of an AAA from an AA or an A, and so on. What they may do on correlation is unclear. I don't want to say they'll use equity-based correlations, but they may allow for the idea that there are some diversification benefits or concentration risks in a portfolio that need to be looked at differently from how they would be today. I think banks will ultimately be allowed to use some sort of internal modeling—within certain guidelines. But we're talking years, not months.
Araten: There has been a lot of talk about problems with data, but I don't really see a problem with the data. Maybe it depends on how you define data.
First of all, every portfolio model has to have some kind of default estimates. These could be subjectively estimated, derived from ratings or determined using some kind of model. Regulators have dealt with these default estimates for a long time now. When they examine a bank's books, they look at the bank's risk-rating assessment of individual loans and agree or disagree with it, focusing particularly on the ones that may be more problematic.
The second issue is the loss severity associated with a default. There are tons of data out there that are becoming more stable and reliable. The data vary by type of security, but they're not going to be that far off.
Then there's the question of what the exposure is. Other than for term loans, the measurement of exposure for unused contingent facilities can be a problem. There needs to be better data so one can more accurately assess the loan equivalent for contingent exposures in the event of default.
Then, of course, you've got the critical correlation issues. There are a number of different model approaches that can also be evaluated using sensitivity analysis.
Combining all these factors into an assessment of portfolio risk is not all that complicated. As we've said, there are a number of different approaches. What is required is that you be careful about how you specify each of these elements. Are you estimating tail risk? What kind of assumptions are you making about the shape of the loss distribution? I think it's also important to have a way to simulate an entire distribution. One needs resources to be able to do that.
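Combining the elements just listed, a minimal sketch of a simulated loss distribution uses a one-factor model: a PD, severity and exposure per loan, plus a single asset-correlation parameter, with the common factor driving joint defaults. All numbers are invented, and the tail estimate is only as good as the trial count.

```python
import math
import random

def norm_cdf(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def norm_inv(p, lo=-8.0, hi=8.0):
    """Inverse normal CDF by bisection (adequate for illustration)."""
    for _ in range(80):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if norm_cdf(mid) < p else (lo, mid)
    return 0.5 * (lo + hi)

def simulate_losses(book, rho=0.20, trials=20_000, seed=11):
    """One-factor model: each obligor's normalized asset return is
    sqrt(rho)*M + sqrt(1-rho)*eps; it defaults when that return falls
    below Phi^-1(PD), losing exposure * LGD."""
    thresholds = [(norm_inv(pd), ead * lgd) for ead, pd, lgd in book]
    rng = random.Random(seed)
    losses = []
    for _ in range(trials):
        m = rng.gauss(0.0, 1.0)  # common factor draw for this scenario
        loss = 0.0
        for t, severity in thresholds:
            z = math.sqrt(rho) * m + math.sqrt(1.0 - rho) * rng.gauss(0.0, 1.0)
            if z < t:
                loss += severity
        losses.append(loss)
    return sorted(losses)

book = [(1_000_000, 0.01, 0.5)] * 100  # 100 identical illustrative loans
losses = simulate_losses(book)
mean_loss = sum(losses) / len(losses)
tail_99 = losses[int(0.99 * len(losses))]  # 99th-percentile loss
```

The gap between the mean loss and the tail quantile is exactly the skewed, fat-tailed shape of the loss distribution that makes the tail-risk and distribution-shape assumptions mentioned above so consequential.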
The institutions that can demonstrate a reasoned approach toward assessing portfolio risk will probably be the ones that the Fed allows to use some kind of internal model. But I don't think data is the key issue, as long as the assumptions are clearly specified.