The World According to Christine Cumming

Christine Cumming is one of the key generals in the Federal Reserve’s efforts to monitor bank risk. Since 1994, she has been senior vice president of the bank supervision group at the Federal Reserve Bank of New York. She has tracked financial risk management since its inception and has taken the lead in educating fellow regulators about the issues risk managers face. Cumming was recently cited by the Global Association of Risk Professionals for her “ability to attack complex problems of model risk by narrowing the issue to insufficient controls within a financial institution” and for “taking the old stereotype image of regulators and replacing it with a new, flexible, more tailored approach where dialogue is encouraged.” She spoke with editor Joe Kolman in May.

Derivatives Strategy: You and your colleagues are in charge of making sure New York banks have adequate risk controls in place. How happy are you with the state of things right now?

Christine Cumming: From a supervisor’s standpoint, we can really be quite happy with the progress that’s been made on value-at-risk. I think the frontier for sophisticated institutions continues to be the option-related instruments and how to treat them within VAR models most effectively. We’re also seeing a new era in which the banks are looking much harder at how they can make stress testing a more integrated part of the risk management process.

DS: I’ve been told that less than 10 percent of the leading financial institutions are going beyond VAR to nonlinear risk measures.

CC: I think everyone recognizes that to deal with optionality effectively, you need to move to simulation models. That creates a computational burden, and how to manage it efficiently is a critical question. But for medium-size and smaller companies, there’s still a great deal of ground to be covered in understanding the basics of VAR and how it can help them manage their investment portfolios and other activities.
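The move from linear VAR to simulation that Cumming describes can be made concrete. Below is a minimal full-revaluation Monte Carlo VAR sketch for a single long call option; the Black-Scholes pricing model, the function names and every parameter value are illustrative assumptions, not anything from the interview.

```python
# Minimal full-revaluation Monte Carlo VAR sketch for one long call option.
# All parameters and the use of Black-Scholes are illustrative assumptions.
import math
import random

def norm_cdf(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def bs_call(spot, strike, vol, rate, t):
    """Black-Scholes price of a European call."""
    d1 = (math.log(spot / strike) + (rate + 0.5 * vol ** 2) * t) / (vol * math.sqrt(t))
    d2 = d1 - vol * math.sqrt(t)
    return spot * norm_cdf(d1) - strike * math.exp(-rate * t) * norm_cdf(d2)

def mc_var(spot=100.0, strike=100.0, vol=0.20, rate=0.03, expiry=0.5,
           horizon=1.0 / 252.0, n_paths=50_000, confidence=0.99, seed=7):
    """One-day VAR by simulating the underlying and repricing the option."""
    random.seed(seed)
    v0 = bs_call(spot, strike, vol, rate, expiry)
    losses = []
    for _ in range(n_paths):
        z = random.gauss(0.0, 1.0)
        # Lognormal one-day move of the underlying.
        s1 = spot * math.exp((rate - 0.5 * vol ** 2) * horizon
                             + vol * math.sqrt(horizon) * z)
        # Full revaluation captures the option's curvature (gamma),
        # which a linear, delta-only VAR would miss.
        losses.append(v0 - bs_call(s1, strike, vol, rate, expiry - horizon))
    losses.sort()
    return losses[int(confidence * n_paths)]  # loss at the chosen quantile
```

Repricing every instrument on every path is exactly the computational burden at issue: a real book holds thousands of positions, so managing that cost efficiently becomes a design question in its own right.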

DS: What are the dangers of using measures that are too simplistic?

CC: In the structured note market of the early 1990s, many banks were surprised by the market risk exposure of those instruments when interest rates rose in 1994.

Today we’re starting to see structured notes that take on credit exposure. These are notes in which the coupon payment might depend on the behavior of a bond or other credit instrument and where the holder of the note would see a reduction in principal in the event of nonpayment. Instead of simply being exposed to the credit risk of the issuer of the note, the holder is exposed to the credit risk of the underlying bond or equity or loan.

That probably calls for additional analytical tools on the part of the holder. If you could simulate some of the outcomes from those models, or have one or more dealers simulate them for you, it would help you understand better how those instruments might behave in unusual market circumstances.

DS: That’s a pretty sophisticated thing to do. These days most dealers will claim they have a pretty good handle on market risk, but they admit that credit risk is something entirely different.

CC: I think that reflects the evolution of risk measurement thinking. The big struggle in credit over the last couple of decades has been to get good exposure measurement, so that as financial institutions become more global, they fully understand what and where the credit exposures are. That includes traditional banking activities and derivatives activities, as well as other trading-related exposures such as settlement lines in the foreign exchange or securities markets.

Banks have spent a lot of effort building systems that help them to aggregate that information in a fairly timely way. The next challenge is trying to assess more fully and more consistently the nature of credit risk in sets of credit exposures. Are they related simply to default risk, the combination of default risk and drawdown in a letter of credit, or the interaction of credit risk and the market risk in a derivatives contract?

DS: What is the Fed feeling about all the new activity in credit derivatives and credit modeling?

CC: The development of credit modeling and credit derivatives is important and exciting in and of itself, but these developments have opened up a new potential to—let’s be blunt—arbitrage our capital requirements. There are all sorts of new opportunities to securitize commercial credit that just weren’t there 10 years ago. In the United States 10 years ago we had an active asset-backed securities market trading primarily consumer credit exposures. That was because you could rely on statistical regularities in a consumer portfolio to really understand what the likely default probabilities were.

Today with credit risk modeling, we’re starting to see some growing confidence in the market, so that you can get at the default probability in a portfolio of commercial credits. You also have the ability to transfer credit risk through credit derivatives via instruments such as credit-linked notes.
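As a rough illustration of what "getting at the default probability in a portfolio of commercial credits" can involve, here is a one-factor default-simulation sketch in the spirit of standard portfolio credit models. The structure (a single systematic Gaussian factor driving correlated defaults) and every parameter are hypothetical assumptions, not details from the interview.

```python
# Hypothetical one-factor default simulation for a portfolio of commercial
# credits. All parameters (default probability, correlation, recovery) are
# illustrative, not taken from the interview.
import math
import random

def norm_cdf(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def inv_norm(p, lo=-8.0, hi=8.0):
    """Bisection inverse of the normal CDF (sufficient for a sketch)."""
    for _ in range(80):
        mid = (lo + hi) / 2.0
        if norm_cdf(mid) < p:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0

def simulate_portfolio_losses(n_credits=100, pd=0.02, corr=0.2,
                              exposure=1.0, recovery=0.4,
                              n_scenarios=20_000, seed=11):
    """Return the simulated portfolio loss in each scenario."""
    random.seed(seed)
    threshold = inv_norm(pd)        # default when asset value falls below this
    a = math.sqrt(corr)             # loading on the common (systematic) factor
    b = math.sqrt(1.0 - corr)       # idiosyncratic loading
    losses = []
    for _ in range(n_scenarios):
        z = random.gauss(0.0, 1.0)  # systematic factor shared by all credits
        n_defaults = sum(
            1 for _ in range(n_credits)
            if a * z + b * random.gauss(0.0, 1.0) < threshold
        )
        losses.append(n_defaults * exposure * (1.0 - recovery))
    return losses

losses = simulate_portfolio_losses()
expected_loss = sum(losses) / len(losses)
tail_loss = sorted(losses)[int(0.99 * len(losses))]  # 99th-percentile loss
```

Turning the correlation up fattens the loss tail even though the expected loss stays put, which is precisely the kind of surprise a rising-default environment would expose.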

We’ve seen a number of significant securitizations over the last six months to a year. CLOs and CBOs have made many of us in the supervisory community believe that the current structure of capital requirements, which puts a lot of weight on all commercial credit regardless of its credit rating, probably will be eroded rapidly by securitization techniques that can change the effective capital requirement considerably.

DS: Are you optimistic about the ability of these institutions to use innovative structures to get more capital bang for the buck?

CC: The market has developed very quickly. As always with relatively new financial developments, they need to stand the test of time. In particular, they need to stand the test of a distressed market scenario, a downturn or rising interest rates or other sources of stress.

One test that these commercial credit securitization approaches have not yet really had to pass is an environment in which defaults are rising. The important questions will be: How good were those initial risk assessments of the portfolios, and how will investors respond to any surprises there?

DS: But a lot of people are hoping that when you effectively combine market risk and credit risk into a uniform risk measure you’ll be able to reduce the capital required by some huge percentage. Some people are even talking about reductions of more than 50 percent. Is that in the cards, or are they smoking something?

CC: I think that is a really difficult question to answer, in part because I don’t think we have fully answered the question of how to move from a risk measurement to a capital charge. It will take some time to resolve some of these questions and fully understand the relationship between the current level of capital, which I think supervisors have become comfortable with, and the capital level that would come out of some new approach.

DS: How long do you think all this will take? Are we talking about potential reductions five or 10 years down the road?

CC: I hesitate to say anything about reductions in required capital, because I don’t know at all that that’s where we’d end up. If you look at how the financial business has unfolded the last few years, you wouldn’t conclude that this is a business in which you can afford to be thinly capitalized. I think the evidence goes heavily in the other direction.

Here’s another way to look at the question. There’s a gap between the structure of risk as expressed in those capital requirements and the structure of risk that the institutions themselves use to characterize the same risk. That has changed, and the gap between those two approaches is wider today than it was 10 years ago. So we may not disagree as much on the level of capital as on the expectation of what particular businesses and transactions and exposures attract what level of capital.

DS: How is that gap between regulators and users widening?

CC: If you go back to the Basle risk-based capital accords, which treated all commercial credit with an 8 percent weight, there initially was recognition that it was a simplification, but it was something one could live with. Today, of course, we’ve got techniques like credit risk modeling and securitization that allow you to make much finer and much more compelling distinctions between a triple-A-type credit and a triple-C-type credit. That’s to say nothing of the fact that the bottom end of the credit market has expanded quite a bit with the increasing marketability of high-yield bonds and similar instruments. As a result, what was an acceptable simplifying assumption in 1988 becomes tough to live with today.

DS: Let’s turn to your role as a model checker. How does the Fed examine the accuracy of bank models?

CC: There is no way we can check every model in a bank. We don’t want to be in that business and we couldn’t be in that business even if we wanted to be. There are just too many of them. The typical VAR model really is a collection of models that might start at the instrument level with pricing models or some kind of risk factor assessment models. Those need to be looked at in some way. You also have to look at the aggregation or simulation approach that goes on at the institution.

One of the critical things for us is making sure the bank has the internal checks and balances that ensure that all the models are in good shape and are reasonable and that the assumptions that are being made are good. If you read through the qualitative part of the market risk amendment, it really lays out a series of things that we as supervisors will be looking for.

The first thing is to make sure that there is some independent risk measurement process in the bank. That is, the person who is responsible for the risk measurement process is not the same as the person who is heading up the trading businesses.

The second thing that we look for is that whatever risk measurement process is being used for regulatory purposes is essentially the same as what the bank is using for its own purposes. If that’s the case, the institution has a strong incentive to keep the models up to date to make sure they accurately reflect the risk in the business.

We also make sure the models are vetted by someone independent of the area that developed the models. So, for example, if the traders develop the model for a new derivative product, our expectation is that somebody independent of the traders will check that model and make sure it’s sound.

DS: That doesn’t have to be outside the bank.

CC: It doesn’t at all have to be outside the bank. In fact, as a practical matter, it’s really desirable for it to be someone inside the bank, because the model that a bank uses to price instruments on its book might be used to track things on the operational side of the house. The robustness of the model is important to the financial institution itself.

It is a resource commitment that the bank needs to make. If you have bright people who develop models in trading or in research, then you’re going to need sufficiently bright people who are able to check these models.

DS: A group that mirrors the original group?

CC: You don’t need an advanced rocket scientist to check the work of the most advanced rocket scientist. But you need people who are really quite literate in these financial products. That’s probably the most contentious thing that we discuss with banks.

We’ve found that as banks have developed their market risk management process, they are likely to develop a model inventory and to keep a library of approved models. So model control within institutions has become a much more important feature of the risk management process. That was a development that we welcomed, but it was very much driven by the industry.

Those are the kinds of things the examiners look for. They would probably review a few models to see that the elements of the process are in place, that there’s independent checking, that the models themselves look reasonable and that there are adequate controls around the model to make sure it can’t be corrupted by someone. The examiners also need to understand how the models fit into the broader management processes of the bank.

The other potentially controversial thing we discuss with banks is the pace at which the modeling evolves within the institution. We are interested in making sure banks match the sophistication of the risk measurement system to the sophistication of their business. That can be somewhat contentious because there are definitely greater computational requirements involved in fully capturing the risk of options in a risk measurement system.

DS: What happens when your auditors or examiners discover an apparent discrepancy in a model? Give us a sense of the back-and-forth that might follow.

CC: We have had situations in which the examiners have looked at a model and believed that it didn’t really capture the risk adequately for the product, or that there might be a risk that cannot be fully captured in the model.

A really good example of the latter would be quantos, where you might have an interaction between a foreign exchange risk and an equity risk. The examiners will start a dialogue with the bank. Does the bank see the same missing elements in the model that we see? How do they assess the size of those missing elements? Are there other techniques that they’re using to monitor the risk in something like a quanto?

For example, it might be perfectly acceptable for a bank simply to track the size of its quanto exposure and set a limit of some kind. If traders and management are aware of what the potential risk is, that may be perfectly acceptable, even if the interactive risk element is not fully captured in the model.

The other thing the examiners will be alert to is whether this is a one-off problem or whether it reflects a more pervasive problem within the institution’s modeling. If the examiners think it necessary, they can look at additional models to make sure. If the examiners find something that the institution agrees is a problem, the bank will generally fix it right away.

DS: I understand you’re also looking at other risk measures that aren’t directly related to VAR and stress testing.

CC: One way of understanding capital charges and internal capital allocations is to view them as a way of providing self-insurance against unusual events, operational breakdowns, big errors and things like that.

One strand in the literature on banking institutions has looked at insurance models, particularly in the context of using deposit insurance as a way to understand what is an adequate capital level for an institution.

Although people are focusing on the VAR approach for both capital and market risk, it’s natural for researchers to keep broadening the horizon of models they consider to help them understand the risk of financial institutions. Some people are starting to look at the world of insurance risk and the modeling of catastrophic events to see if there are useful techniques that could be applied in the financial institution management context.

I think insurance models are a potentially rich area of exploration, particularly if you go back to the question of how to move from a risk measure to deciding the appropriate level of prudence in a capital charge. That avenue may provide another set of answers that could help us.
