Credit Risk

Satyajit Das, an Australia-based risk management consultant, shows how financial institutions must reorganize themselves in order to trade credit derivatives and benefit from the latest credit management techniques.

Rethinking the Credit Function

The credit derivatives market has grown at a rapid rate over the last few years. But new growth has created organizational stresses that are becoming increasingly evident as various aspects of credit derivatives trading are more closely scrutinized. Two questions beg for answers: What is the optimal organization of credit derivatives trading groups? And what are the operational requirements for trading credit derivatives?

The advent of credit derivatives requires a fundamental reassessment of the credit function in financial institutions. It demands a complete adjustment in how credit is handled. This reassessment extends to credit approval processes and the pricing of credit risk.

Credit occupies a central role in most organizations and transcends product and geographic boundaries. This makes managing credit risk and organizing a credit derivatives trading function within an institution difficult. It may also generate inefficiencies in the interactions between units, which further reduce the utility of credit derivative products.

Most financial institutions have a number of areas that trade credit-derived products and have a legitimate claim to an involvement of some kind with credit derivatives. These include fixed-income desks, high-yield desks, emerging market desks, repo or finance desks, asset swap desks, syndication and asset sales desks, distressed debt trading, securitization desks, equity desks (particularly convertible trading), individual credit officers and account managers, and credit portfolio management. The potential users of credit derivatives clearly cross product and functional boundaries. (The purposes of the various desks and the products they trade are summarized in Table 1.)

Each group has different motives for using credit derivative structures. From an operational perspective, this makes it difficult to organize a credit derivatives unit. This structural dilemma, however, stems in part from the evolution of the product itself.

Financial institution activity in credit derivatives is predicated on the fact that these organizations hold substantial credit risk through their normal operations. In this respect, credit derivatives can be regarded as an adjunct to traditional methods of managing credit exposures and to loan asset trading and distribution activities.

The initial activity in credit derivatives was driven by trading desks (typically derivatives units), and focused on the management of specific exposures to individual counterparties. As a result, different units within the same bank became involved in credit derivatives trading.

While this has spurred greater interest in the product and its applications, the overall impact has been negative because it has been difficult for institutions to maximize the effectiveness of these products in managing the credit risk of the institution as a whole. To do this, the organization of credit risk management within the institution needs to be reformed substantively.

The ideal structure

Organizing credit derivatives within an institution requires an understanding of the banking process model of the future. This is best illustrated by following the transaction flow.

Assume a bank wishes to enter into a transaction—for example, to issue a loan, purchase a security or enter into a derivative transaction. In the present model, credit approval is sought from the credit function. Under current practice, this approval is binary (yes or no) and may have pricing guidelines for the transaction. The transaction is then booked against credit risk limits allocated either for the transaction or to the entity.

Under the future banking process model, the steps will be somewhat different. The originator will still seek approval from the credit function, but instead of making a decision based on the traditional binary approval process, the credit function will assign the originator a credit capital charge for allocating lines to do the transaction. This charge will be reconciled with the profit-and-loss account of the transaction’s originator (whether it is a relationship manager, trader or capital markets desk).

The credit function will price the credit charge from the lower of two sources—internal and external. The internal figure will be based on the proposed transaction’s marginal contribution to portfolio risk, and the return will match what is required to cover the incremental expected and unexpected losses resulting from the transaction. The external figure for the credit charge will be based on the market price for purchasing default protection (through a total return swap or a credit default swap) from an acceptable financial counterparty.
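The two-source pricing rule described above can be sketched in a few lines of code. This is a minimal illustration, not the author's model: all function names and figures are invented assumptions, and the internal charge is simplified to expected loss plus a hurdle-rate return on unexpected-loss capital.

```python
# Hypothetical sketch of the lower-of-two-sources credit charge.
# Names and numbers are illustrative assumptions, not a production model.

def internal_charge(expected_loss, unexpected_loss_capital, hurdle_rate):
    """Charge covering expected loss plus the required return on the
    capital held against unexpected loss (marginal portfolio view)."""
    return expected_loss + hurdle_rate * unexpected_loss_capital

def external_charge(cds_spread_bp, notional):
    """Market cost of buying default protection on the exposure."""
    return notional * cds_spread_bp / 10_000

def credit_charge(notional, expected_loss, unexpected_loss_capital,
                  hurdle_rate, cds_spread_bp):
    # The credit function charges the originator the LOWER of the two.
    return min(internal_charge(expected_loss, unexpected_loss_capital,
                               hurdle_rate),
               external_charge(cds_spread_bp, notional))

# Example: $10m loan, 40bp expected loss, $500k unexpected-loss capital,
# 15% hurdle rate, 90bp market CDS spread.
charge = credit_charge(10_000_000, 40_000, 500_000, 0.15, 90)
print(charge)  # internal 115,000 vs. external 90,000 -> 90000.0
```

Here the external (market) price of protection is the binding figure, so the originator would be charged the 90,000 it would cost to lay the risk off through a credit default swap.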

If the transaction is undertaken, the credit risk will be treated internally as a credit default swap written between the credit function and the transaction originator. The credit function will then have responsibility for managing the credit risk assumed.

To do this, the credit function could create provisions and holding capital against the risk; purchase protection against the risk through a credit derivative transaction; or package selected credit risks and sell them down through securitization structures, such as collateralized bond obligations or collateralized loan obligations, or by issuing credit-linked notes.

The most significant impact of this change is the migration of the credit derivative function from the derivatives desk to the credit desk within the institution. But this change also requires a shift in the philosophy of credit risk management. So far, a number of organizations, including JP Morgan, Union Bank of Switzerland and Credit Suisse, have made this transition—at least in part.

This change requires a great deal of integration between credit and market risk. For example, when a currency swap or market-sensitive instrument is traded, the resulting credit exposure is dynamic. Although the exposure is modeled and limits are established, changes in market rates beyond those forecast can lead rapidly to the actual exposure exceeding the forecast exposure. This risk is inherent in the credit risk management of all market-value instruments but is not explicitly and systematically managed.

In the new banking process model, the exposure could be managed by the credit risk function estimating worst-case and average exposures and then purchasing protection against movements in rates beyond the forecast levels. The protection would be in the form of out-of-the-money options purchased from the relevant market risk desk. This would permit the accurate quantification of these risks and the capital costs of assuming the risks. This would, in turn, allow for the accurate pricing of these risks.
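The exposure estimation step above can be sketched with a toy simulation. This is a hedged illustration only: the exposure model (the positive part of a normally distributed mark-to-market) and every parameter are assumptions made for the sketch, not a description of any institution's methodology.

```python
# Illustrative estimate of average and worst-case counterparty exposure
# on a market-sensitive instrument, as the credit function might compute
# before buying out-of-the-money protection from the market risk desk.
import random
import statistics

random.seed(42)

def simulate_exposures(n_paths=10_000, vol=0.10, notional=1_000_000):
    """Exposure is the positive part of a normally distributed
    mark-to-market P&L; negative values mean no credit exposure."""
    return [max(random.gauss(0.0, vol) * notional, 0.0)
            for _ in range(n_paths)]

exposures = simulate_exposures()
average_exposure = statistics.mean(exposures)
worst_case = sorted(exposures)[int(0.95 * len(exposures))]  # 95th percentile

# Protection (e.g. an out-of-the-money option) would be struck so that it
# pays off when rates move far enough for exposure to exceed the forecast.
print(round(average_exposure), round(worst_case))
```

The gap between the average and the worst-case figure is the tail the article suggests hedging with out-of-the-money options, which also puts an explicit market price on that tail risk.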

The process described earlier relates to the overall restructuring of the capture and management of credit risk within an institution. At a lower level, credit derivative instruments will become fairly generic building-block tools for financial engineering across entire business units. The nonfunded nature of total return swaps will make them generic devices to finance assets and arbitrage markets across various desks. Moreover, the nonfunded nature of credit default products, along with the ability to shift the exposure to default risk, will make these instruments useful in repackaging assets or customizing risk-attribute bundles for investors. And the ability to trade and monetize expectations of spread movement independent of absolute interest rate movements will make credit spread products useful to various desks as a mechanism for managing or acquiring exposure to spread risk.

In effect, credit derivatives will be embedded in an organization’s structure at two distinct levels—organizations will need to have a centralized credit trading function as well as a management function that manages the total credit risk of the institution’s portfolio. Eventually, these instruments will become increasingly homogeneous and will be absorbed into the trading or financial engineering activities of all business units. This process is similar to the way in which derivatives trading is gradually being merged with the cash markets for the underlying instruments.

Tools of the trade

This new paradigm requires a major reengineering of operational functions within financial institutions. The release of CreditMetrics and CreditRisk+, along with the increased interest in products such as KMV’s EDF default prediction models, has advanced the debate about credit portfolio models.

However, the lack of an accepted credit pricing model (such as Black-Scholes-Merton for options) creates an inherent lack of transparency that discourages trading. In addition, the difficulties in modeling parameters such as default risk, recovery rates and default correlations for credit portfolios remain significant. Considerable work still remains to be done in this area, since the availability of credit risk management systems currently lags behind that of market risk management systems. The need for software to price and trade credit derivatives, as well as to manage credit portfolios, is urgent.

Most traditional middle offices and operations areas are not equipped to settle and monitor credit derivative transactions. They have little experience dealing with the complexity of default language, the options for calculating default payments and the different demands on counterparties.

The lack of transparency and the difficulty in marking to market individual transactions also create problems. To address these issues, some institutions are establishing separate middle-office functions for credit derivatives, merging credit derivatives operations with loan administration, or reengineering the credit monitoring function to encompass these products. What is increasingly clear in all this is that as the market for credit derivatives continues to evolve, institutions will need to integrate their credit risk functions in order to improve their ability to manage credit risk.

A version of this article was first published as “Credit Derivatives: Development Issues” in Financial Products, Issue 91.

Table 1: Potential Users of Credit Derivatives Within a Financial Institution

Fixed Income
Responsibilities: Trading, risk management, client-driven products
Total return swaps: Finance positions for clients
Credit default swaps: Trade/manage credit risk, synthesize fixed-income products for clients
Credit spread products: Trade/manage spread risk, synthesize credit spread products for clients

High Yield
Responsibilities: Trading, risk management, client-driven products
Total return swaps: Finance positions for clients
Credit default swaps: Trade/manage credit risk, synthesize fixed-income products for clients
Credit spread products: Trade/manage spread risk, synthesize credit spread products for clients

Emerging Markets
Responsibilities: Trading, risk management, client-driven products
Total return swaps: Finance positions for clients
Credit default swaps: Trade/manage credit risk, synthesize fixed-income products for clients
Credit spread products: Trade/manage spread risk, synthesize credit spread products for clients
Other products: Trade currency inconvertibility protection for clients and for proprietary accounts

Repo/Finance
Responsibilities: Financing inventory, assisting clients, funding assets
Total return swaps: Finance positions synthetically (nongovernment obligations), arbitrage against repo rates

Asset Swaps
Responsibilities: Trading, risk management of asset swaps inventory, client-driven products
Total return swaps: Create leveraged/unfunded asset swap products for clients
Credit default swaps: Manage credit risk of inventory, synthesize from asset swaps, arbitrage against asset swap pricing
Credit spread products: Manage spread risk of inventory, synthesize from asset swaps, arbitrage against asset swap pricing
Other products: Embed credit spread optionality in asset swaps

Syndications/Loan Sales
Responsibilities: Selling down/acquiring loan risk
Total return swaps: Utilize as de facto unfunded risk participations
Credit default swaps: Utilize as de facto unfunded risk participations
Credit spread products: Manage syndication spread risks
Other products: Use synthetic lending facilities/assets or swaptions to synthesize revolving credit facilities for investors

Distressed Debt
Responsibilities: Trading distressed debt, funding distressed debt positions
Total return swaps: Finance positions synthetically
Credit default swaps: Assume risks (identical to selling puts on the distressed debts), assume recovery rate positions

Securitization
Responsibilities: Securitizing credit portfolios, enhancing the credit of securitized assets
Credit default swaps: Alternative to monoline insurers as a form of credit enhancement
Other products: Use credit-linked notes as alternatives to CBO/CLO structures

Equity (Convertible) Trading
Responsibilities: Funding convertible positions, repackaging the credit risk of convertibles
Total return swaps: Finance positions synthetically
Credit default swaps: Hedge/manage the risk of convertible portfolios, synthesize fixed-income products for clients

Individual Account Managers
Responsibilities: Managing exposure to individual credits
Total return swaps: Sell down/acquire exposure to clients as part of maximizing client revenues
Credit default swaps: Sell down/acquire exposure to clients as part of maximizing client revenues

Credit Portfolio Management
Responsibilities: Managing aggregate portfolio risk characteristics, managing concentration risk, managing return on economic credit capital, managing regulatory credit capital
Total return swaps: Sell down/acquire exposure to clients as part of maximizing client revenues
Credit default swaps: Sell down/acquire exposure to clients as part of maximizing client revenues
Other products: Use credit-linked notes to manage the total portfolio in terms of economic returns and regulatory capital, trade credit risk


Risk Management

Energy specialists Kevin Boothby, George Holt, Ram Shivakumar and Don Ellithorpe outline the myriad intricacies buried within electricity industry contracts.

Analyzing Structured Power Contracts

The trading of structured power contracts in the U.S. over-the-counter markets has increased rapidly in recent months as deregulation of the electric power industry has accelerated. This rapid growth has occurred despite the complexity of many of these structured power contracts. Many electric power contracts, for instance, contain embedded options that are unusual by the standards of financial markets.

The complexity of even vanilla power contracts would be considered exotic in mature financial markets. The valuation of structured power products generally requires different methodologies and models from those used to price financial market instruments. Consequently, new models are being developed by market participants to provide traders and risk managers with a more accurate perspective on valuation and risk measurement.

Given the risk management challenges in decomposing these structured products into their constituent components, replicating and valuing composite deals, and measuring the sensitivity of these values to changes in market variables, it is important to ask why electric utilities have chosen to trade these products.

One explanation is that these structured products provide long-term protection against drastic changes in market conditions. Indeed, while the media have reported on the drastic financial repercussions to some electric utilities from the recent run-up in electricity prices, they have not addressed the possibility that other electric utilities may have avoided a similar fate because of their long position in structured power contracts.

A second explanation is that structured power contracts provide utilities and others with the flexibility to procure and sell power in response to unexpected operating shocks. For instance, the shutdown of a nuclear plant as a consequence of a severe thunderstorm may enable an electric utility with a long position in a structured power contract to minimize its financial losses.

The new market in electricity derivatives presents an enormous challenge to risk managers and quantitative analysts—particularly those making the transition from the financial world to the energy industry, who must understand the attributes of power delivery agreements (PDAs) and the structure and composition of power contracts.

Structured power contracts are generally contingent claims on one or more elemental commodities, including:

Electric power: Electricity is delivered over a period of time in specified quantities.

Transmission capacity: Transmission capacity is required to deliver electric power between two or more locations over a period of time.

Fuels: Coal, oil, natural gas and other fuels are used to generate electric power, and their cost is reflected in power contracts.

Emission allowances: Power generating units are required to cover their emission of pollutants with emission allowances. These allowances, issued by the federal government to utilities with generating plants, are traded in a secondary market.

Other production inputs: Miscellaneous inputs to electric power generation such as construction costs and fuel transportation costs are reflected in power contracts.

The important attributes of structured power contracts include:

Amount of power: Specified in megawatts.

Hours of power delivery: Specified as hours of the day during which power will be delivered. Power may be delivered during on-peak hours, off-peak hours or both.

Duration of power delivery: Specified as the period of time over which power is delivered. A contract, for instance, may specify the delivery of 100 MWh of power during on-peak hours in the period from July 1 through August 31.

Location of power delivery: Specified as a system interconnection point or control area. The contract, for example, may specify the delivery of power to a specific company’s control area.

Days of power delivery: Specified as the days of the week when power will be delivered. For example, power may be delivered only on weekdays, only on weekends, on both, or on some subset of those days.

Degree of dependability: Specifies whether power delivery can or cannot be interrupted (non-firm vs. firm power).

Terms of settlement: Specifies whether the contract is physically or financially settled. In general, most power contracts are physically settled.

Nature of the underlying: Specifies whether the underlying is a spot contract, a forward contract or an option contract. If the underlying is a spot contract, exercise of an option will imply the immediate delivery of power for a given period of time. If the underlying is a forward contract, exercise of the option will imply the delivery of power for a given period of time beginning at some future date. If the underlying is an option contract, exercise of an option will give the buyer an option to purchase or sell power at or before some future date.

Number of times options are exercisable: Specifies the maximum and/or minimum number of times the options may be exercised. Generally, buyers of structured power contracts purchase the right to exercise options on multiple occasions over a given time interval. A typical example is a daily exercise option on power. Some multiple-exercise instruments are essentially volumetric instruments with cumulative volume constraints that require that some total (or average) quantity of power be purchased over a given time interval.

Varying exercise price: Specifies the exercise price during each exercise period. The exercise price often varies with the passage of time according to some predetermined formula. The exercise price determined by the formula will typically depend on an average or index of market prices for power.
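The attributes listed above can be collected into a simple data structure. This is only an illustrative sketch: the field names, defaults and the "Utility X" delivery point are invented for the example and are not industry-standard terms.

```python
# Hypothetical record of the structured power contract attributes above.
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class StructuredPowerContract:
    amount_mw: float                  # amount of power, in megawatts
    delivery_hours: str               # "on-peak", "off-peak" or "both"
    delivery_start: date              # start of the delivery period
    delivery_end: date                # end of the delivery period
    delivery_point: str               # interconnection point or control area
    delivery_days: str                # e.g. "weekdays", "weekends", "all"
    firm: bool                        # firm (True) vs. non-firm (False)
    physical_settlement: bool = True  # most power contracts settle physically
    underlying: str = "spot"          # "spot", "forward" or "option"
    max_exercises: Optional[int] = None   # cap on option exercises, if any
    strike_formula: Optional[str] = None  # formula/index for exercise price

contract = StructuredPowerContract(
    amount_mw=100, delivery_hours="on-peak",
    delivery_start=date(1998, 7, 1), delivery_end=date(1998, 8, 31),
    delivery_point="Utility X control area", delivery_days="weekdays",
    firm=False, max_exercises=62)
print(contract.underlying)
```

Making the attributes explicit like this is the first step toward the decomposition and valuation tasks discussed later in the article.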

While conventional derivatives such as simple forwards and options are present in most structured power contracts, the major challenges in valuation and risk management arise as a consequence of a variety of complex and unique (to the electricity industry) embedded derivatives. Perhaps the most unusual embedded derivative common to most structured power products is the swing option, which is similar to a conventional call option.

The buyer of a swing call option has the right to purchase power on or before a specified expiration date at a specified exercise price (K). Unlike a conventional call option, however, the buyer of a swing call option has the right to purchase a variable amount of power (Q), where Q may not exceed the notional amount of the option (N). Exercise will only occur if the market price of power (P) exceeds K. An outlay of K*Q occurs upon exercise, and the value of the option at expiration is given by:

Max [(P-K)*Q,0]
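The payoff above translates directly into code. In this simplified sketch, because the payoff is linear in Q, the optimal exercise quantity is all-or-nothing (take the full notional N whenever P exceeds K); real swing contracts often add minimum-take constraints that make Q genuinely variable, which this sketch does not model.

```python
# Direct transcription of the swing call payoff Max[(P-K)*Q, 0],
# with the holder choosing the quantity Q up to the notional N.

def swing_call_payoff(P, K, N):
    """P: market price of power; K: exercise price; N: notional amount."""
    Q = N if P > K else 0.0  # optimal Q for a linear payoff: all or nothing
    return max((P - K) * Q, 0.0)

print(swing_call_payoff(P=32.0, K=25.0, N=50.0))  # (32-25)*50 = 350.0
print(swing_call_payoff(P=22.0, K=25.0, N=50.0))  # out of the money: 0.0
```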

A typical structured power transaction contains the following embedded options:

Flexible power: The buyer has the right to purchase variable power (on-peak and off-peak) each day at stated prices.

Optional flexible power: The buyer has the option to purchase an additional flexible power contract on certain future dates.

Optional flexible power exchange: The seller has the option to exchange the buyer’s remaining flexible power option for another option with a shorter duration.

Interruptible: The seller has the right to interrupt purchased power on an hourly basis, offsetting the buyer’s right to power.

Buy-through: The buyer has the right to override a prospective power interruption by a seller. If the buyer exercises the option, the buyer will pay a higher price for power than before interruption.

Termination: The buyer has the right to terminate the structured power transaction before expiration under certain conditions.

Emission allowance pass-through: The buyer pays the cost of the emission allowances required to generate power delivered by the seller.

Most participants in the electric power markets have one or more of the following needs:

Structuring: Designing structured transactions with reference to customer needs and constraints.

Decomposition: Identifying transaction structure and components, and assessing the qualitative behavior of transactions under alternative market scenarios.

Valuation: Modeling the composite value of the structured product in terms of component values, and the sensitivity of this value to variation in contract terms.

Risk measurement: Determining the sensitivity of the value of the structured product to changes in market prices, based on differential, stress and stochastic scenarios.

Risk mitigation: Identifying replicating portfolios that mitigate the risk of the components of the structured product.
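The "differential scenario" form of risk measurement can be sketched as a bump-and-revalue calculation. The valuation function here is deliberately trivial (a linear forward position) so the finite-difference answer can be checked by hand; it stands in for whatever composite valuation model the desk actually uses.

```python
# Minimal bump-and-revalue sketch: sensitivity of a position's value to
# the power price, estimated by a central finite difference.

def forward_value(price, strike, quantity):
    """Mark-to-market of a simple forward purchase of power."""
    return (price - strike) * quantity

def price_sensitivity(value_fn, price, bump=0.01, **kwargs):
    up = value_fn(price + bump, **kwargs)
    down = value_fn(price - bump, **kwargs)
    return (up - down) / (2 * bump)  # central difference

delta = price_sensitivity(forward_value, price=30.0,
                          strike=25.0, quantity=100.0)
print(round(delta, 6))  # a forward's price delta is just its quantity
```

The same bump-and-revalue machinery applies unchanged when `forward_value` is replaced by a model for an option or a full structured deal; only the cost of each revaluation grows.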

The flexible power component is thus a complex substructure, often comprising two strips of swing options on peak and off-peak one-day power. On each day during the tenor of the flexible power contract, the option holder can independently exercise a swing option for peak and off-peak power for the following day. Each swing option has an identical notional amount (such as 50 MW), but the exercise prices of successive options might step up annually to reflect the writer’s marginal cost of procuring power.
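As a hedged illustration of the strip's multiple-exercise feature, the sketch below values it against a known price path with a cap on the number of exercises: with prices known in advance, the optimal policy is simply to exercise on the most profitable days. Real valuation under price uncertainty requires a lattice or Monte Carlo model, and all prices here are invented.

```python
# Payoff of a daily-exercise (multiple-exercise) option strip against a
# KNOWN price path, subject to a maximum number of exercises.

def multi_exercise_payoff(prices, strike, max_exercises, quantity=1.0):
    """With a deterministic path, exercise on the max_exercises days
    with the greatest positive intrinsic value."""
    intrinsic = [max(p - strike, 0.0) for p in prices]
    best_days = sorted(intrinsic, reverse=True)[:max_exercises]
    return quantity * sum(best_days)

# Example: a week of daily prices, strike 25, at most 3 exercises.
prices = [24.0, 27.5, 26.0, 30.0, 23.5, 28.0, 25.5]
print(multi_exercise_payoff(prices, 25.0, 3))  # 5.0 + 3.0 + 2.5 = 10.5
```

A cumulative-volume constraint of the kind mentioned earlier would replace the simple exercise cap with a minimum or maximum total quantity over the interval, which turns the selection above into a constrained optimization.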

Exercise of individual swing options in the flexible power agreement might deliver an interruptible PDA under which the seller will supply on-peak and off-peak power on the following day to the buyer. The notional amount of interruptible power is determined by the quantity established when the swing option is exercised, and the price per MW of energy is the exercise price of the swing option.

Electricity market dynamics are difficult to model because of the number and variety of drivers—transmission, generation, fuels, weather and technology. Heightening the risk management challenge is the complexity of many structured power products. The first step toward addressing this challenge is to be able to identify quickly the complex structure of these products, decompose them into their constituent components, value each component and the composite product, and replicate the composite deal.

Boothby, Holt and Shivakumar are consultants in the derivatives and treasury risk management group at Arthur Andersen. Ellithorpe is a senior analyst with Koch Industries.


Systems

Maureen Callahan of Callahan Co. explains why it’s important for everybody to understand the consequences of choosing between competing middleware standards.

CORBA Vs. DCOM: Making Sense of Geek-Speak

This is an article for business people designed to take some of the alphabet-soup geek-speak out of distributed-object middleware architecture and make it accessible to normal human beings.

Anyone who’s been on Wall Street for more than 20 minutes has suffered from incompatible systems. It matters little if your analysis is stochastic or deterministic if you don’t know what you own—and if your trading desks are locked into their own islands of technology, preserving this well-kept secret. Middleware promises to solve that problem, but at the moment, anyway, competing standards make it a Beta vs. VHS war out there.

At its simplest, middleware is a virtual pipeline for information and behavior. It facilitates communication between applications, clients and servers, and masks differences or incompatibilities among networks, hardware, software, databases and so forth. The promise of middleware is more productivity for the geeks. With the right middleware, they have to do things once—instead of over and over for each piece of hardware and software.

That means less pain for normal human beings, more savings and a better understanding of how our business works—a win-win situation.

Getting this right is so important and so strategic that in April 1998, several Wall Street firms, led by Morgan Stanley and Goldman Sachs, formed a committee to select standards and establish best practices for middleware. The group includes the usual suspects: Chase, Citibank, CSFB, JP Morgan, Lehman Brothers, Merrill Lynch and Salomon Smith Barney. Middleware issues are so important that fierce competitors are now collaborating to get it right.

The Internet is, of course, a giant piece of middleware. It lets us move information and functionality around. In global financial firms, however, we require more sophistication—scale, performance, security, guaranteed delivery, fault tolerance and, at its best, a real plug-and-play environment that allows users to move intelligent behavior across their firms to where and when they need it most.

Nirvana? No, but we’re getting there via two basic architectures that are continually being improved and are now being widely adopted on Wall Street. These are DCOM/DNA from Microsoft and CORBA from the rest of the world—in other words, the Object Management Group (OMG), a consortium of technology firms that now stands at more than 850 members. Each camp has its own proponents (and bigots) and the rhetoric sometimes escalates, just like the religious wars in 16th-century France.

The OMG was formed in 1989 to publish a standard to support distributed objects in a distributed computing framework. The result was CORBA, the Common Object Request Broker Architecture, which is an open architecture that supports multiple platforms (NT, the many flavors of UNIX, OS/2, OS/400, MVS, VMS, Macintosh and so on) and multiple languages. (Software development has always been a Tower of Babel.) CORBA was first specified in 1991 and has been available commercially since 1992. CORBA, thus, is relatively mature and provides a stable base for high-volume, enterprise-distributed global computing. CORBA is the initial focus of the Wall Street Middleware Working Group and is the architecture of global, high-volume trading systems such as Fintrack’s CHARM and CHEERS products. The successes of CORBA—first, that a consortium could actually (and rather quickly) agree to a specification, and second, that robust products could be built, understood and adopted just as quickly—probably have as much to do with the elegance of the solution as with the claque of technophiles who love to hate Microsoft.

Some critics have characterized Microsoft’s distributed-object computing framework as inelegant, sprawling, technologically mediocre, stunningly complex and—sin of sins—not an object environment.

Friends of Bill (Gates), however, counter that success is the best revenge. Microsoft, of course, is the other standard, de facto, because Microsoft already owns the desktop and wants to own the Internet. Microsoft’s standard comprises the currently available Distributed Component Object Model (DCOM) and the recently announced Distributed interNet Applications (DNA) architecture. DCOM, available since 1996, is a proprietary (as opposed to “open”) distributed-object computing infrastructure that supports the ever-pervasive Windows/NT environment.

Windows/NT is clearly already the stated standard, at least on the desktop, for most of Wall Street. Hence, DCOM is the choice for departmental-level distributed applications that run in a Microsoft-only environment. Indeed, it offers the added benefit of allowing users to couple their DCOM application tightly with the entire Microsoft Office suite and, like CORBA, it allows them to distribute computationally intense processes over all the servers in their environment. DCOM provides three key advantages in global risk management software: ease-of-use, flexibility and scalability. Plus, you don’t need nine Ph.D.s to maintain it.

That’s fine, say the anti-Bill claque, but NT servers simply don’t yet give users the firepower (or reliability) they need to price and risk-manage complex path-dependent derivatives such as collateralized mortgage obligations. Forget path dependent—any nonlinear model is expensive in terms of performance. That’s why you still need UNIX to do the heavy lifting on the server side, and why CORBA is useful. It’s an excellent mechanism to bridge the gap between the Microsoft desktop and UNIX servers. Besides, most of what runs Wall Street is mainframe and UNIX, not Microsoft.

Bill Gates believes that DNA will solve this problem—it’s the basis of his straight-through processing (STP) strategy. And most likely it will, but DNA is not yet real (unless you’re talking about the national FBI repository in a secret location). Right now it amounts to little more than a white paper and a media blitz; it will be some time before it’s produced and shipped.

Are we forced to choose between the nice, tight coupling DCOM gives us in the Microsoft world (where most of us live anyway) and the interoperability (lots of different hardware and software) of CORBA? So far, we are. The OMG has been beavering away at defining a bridge between DCOM and CORBA, and a few vendors are working on the problem as well. But we all know about the slip ’twixt cup and lip when it comes to software development: It’s always late and it’s never what we expected. All of which raises a question for business people: Why should I care? Because speedy and transparent access to data, wherever they live in your empire, is the key to quality risk management. Because the global business is moving too quickly to allow you the luxury of repeating past mistakes. Because unless you think the Internet is a passing fancy and your clients are indifferent to technology, you need the right middleware architecture.


Credit Risk

Richard Skora, president of Skora and Co., explains how correlation risk makes collateralized debt obligations difficult to model.

Correlation: The Hidden Risk in CDOs

Collateralized debt obligations are one of the most interesting innovations of the securitization market in the 1990s. They create new, customized asset classes by allowing various investors to share the risk and return of an underlying pool of debt obligations. The attractiveness to investors is determined by the underlying debt and the rules for sharing the risk and return.

But a CDO is a correlation product. Investors in this product are taking on correlation risk—more specifically, they are betting that the individual underlying instruments will not default simultaneously. To make sure that they are getting a fair return for this risk, they must be able to measure it.

Since the 1950s, when Harry Markowitz did his pioneering work on portfolio theory, there has been intense study of correlation between equity investments. Equities are liquid and have relatively small transactions costs—lending themselves well to portfolio rebalancing. However, correlation between debt securities has not been studied so intensely—possibly because debt securities other than U.S. Treasuries are not liquid and have high transactions costs.

Up to now, CDOs have not been subjected to intense correlation analysis. This is a result of the historical development of the market. In a relatively short time, the CDO market experienced two extremes. In the late 1980s and early 1990s, these securities were sold mainly as credit arbitrage investments. The spreads between non-investment-grade credits and Treasuries were at historical highs—much higher, in fact, than the historical loss rate on non-investment-grade credits. This meant that the spread more than compensated the investor for the additional risk. Thus, CDOs created windfall profits for all investors—and, in particular, those who took the greatest credit risk, sometimes earning them returns of 30 percent to 40 percent or more.

More recently, credit spreads have completely reversed. In the last few years, spreads between Treasuries and non-investment-grade corporate bonds and emerging markets have been at historical lows, barely enough to justify the extra credit risk (as the market may now be finding out).

Nevertheless, CDOs have remained popular as capital arbitrage mechanisms. Banks use them to free up regulatory capital by securitizing the higher layers of credit risk. The instruments are backed by a pool of debt obligations, including bonds, loans, revolving credit facilities, structured finance obligations and almost any other kind of instrument one can imagine.

The interest and principal payments from the underlying instruments are redirected to the various investors through tranches so that losses in interest or principal to the collateral are absorbed first by the lowest-level tranche, then the next tranche and so on. The lowest tranche is the riskiest and is called the equity tranche. All the tranches except the equity tranche are rated, with the highest tranche usually rated AAA. The mechanism for distributing the losses to the various tranches is called the waterfall.
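The waterfall can be illustrated with a minimal sketch. The tranche names and sizes below are hypothetical, chosen only to show losses being absorbed from the bottom up:

```python
# Hypothetical tranche sizes in $ millions, ordered junior to senior.
# These names and numbers are illustrative, not from any actual deal.
tranches = [("equity", 20), ("BBB", 30), ("A", 50), ("AAA", 400)]

def waterfall(loss, tranches):
    """Allocate a collateral loss to the tranches, junior tranches first."""
    allocation = {}
    for name, size in tranches:
        hit = min(loss, size)   # a tranche absorbs losses up to its full size
        allocation[name] = hit
        loss -= hit             # any remainder passes up to the next tranche
    return allocation

# A $35 million collateral loss wipes out the $20 million equity tranche
# and eats $15 million of the BBB tranche; the senior tranches are untouched.
print(waterfall(35, tranches))
```

The seniority ordering is the whole game: each tranche’s risk is precisely the probability that losses reach its layer of the stack.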

Losses occur when there is some kind of credit event. A credit event is usually caused by a default of the underlying collateral, or a credit downgrade of the collateral. In either case, the market value of the collateral drops.

The timing and the severity of losses associated with these credit events are important. Credit events are not completely independent of each other; a macroeconomic change that affects one instrument in the portfolio may affect others as well. Conversely, the less related the underlying instruments—that is, the better the diversification—the more stable the CDO investment.

The ultimate value of the CDO is based on the value of the proceeds of the various tranches, which is the same as the value of the underlying pool of debt, less management fees. Because the tranche values must sum to the value of the pool, if one miscalculates the risk of one tranche, and therefore “misprices” that tranche, then one automatically misprices the other tranches.

To measure the risks and returns built into the CDO accurately, one must quantify the diversification of the individual instruments. An investor in a particular tranche or a rating agency wants to know the likelihood of sustaining a loss and the likely severity of that loss. In statistical terms, they want to know the exact probability distribution of losses to the underlying pool of debt.

The probability distribution depends on the probability of a credit event and the relationship between two or more credit events. In statistical language this is called nonindependence of events. Although correlation is a measure of nonindependence, it is only one number and does not capture the complicated nature of credit events. For example, it is well-known that the correlation between interest rates and defaults is almost zero. A careful examination of historical data, however, shows that defaults tend to occur in extreme interest rate environments—either low or high rates.
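The point about near-zero correlation masking real dependence is easy to reproduce numerically. The sketch below is our own illustration (the rate levels and default probabilities are invented): defaults are made far likelier at both extremes of the rate distribution, yet the linear correlation comes out near zero because the two tails cancel.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical rate scenarios around a 5 percent mean (assumed numbers).
rates = rng.normal(0.05, 0.02, 100_000)
extreme = np.abs(rates - 0.05) > 0.03        # very low or very high rates

# Defaults are assumed 20x likelier in extreme rate environments.
default = rng.random(100_000) < np.where(extreme, 0.20, 0.01)

corr = np.corrcoef(rates, default)[0, 1]
print(f"linear correlation: {corr:.3f}")     # near zero: the two tails cancel
print(f"default rate, extreme vs. normal rates: "
      f"{default[extreme].mean():.3f} vs. {default[~extreme].mean():.3f}")
```

A single correlation number reports "no relationship" here, even though conditioning on the rate environment changes the default rate twentyfold.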

The science of credit analysis is quite sophisticated, but people have only recently begun to think about credit ratings in terms of the probability of default. The correlation of defaults has been thought about even less. Work done to date on default is based on historical data. Unlike other risk factors such as forward rates and volatility, correlation is not easily observable or tradable. Indeed, many derivative instruments exist for trading rates and volatility. No such instruments exist for correlation.

There are many models used to measure credit risk and the correlation of credit risk. The first modelers of default were Black and Scholes (1973) and Merton (1974), who observed that a corporation defaults when its assets fall below its debt obligations. This led to various attempts to model assets. One problem with most credit models is that they try to measure nonindependence by a single number—namely, correlation. But, as we’ve seen above, the nonindependence associated with credit events is often more complicated than a single number.
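A common way to reduce this observation to a single correlation number is a one-factor version of the Merton-style asset model: each obligor defaults when a mix of a shared "economy" factor and an idiosyncratic shock falls below a threshold. The sketch below is a generic illustration with assumed parameters (100 names, a 2 percent default probability, 30 percent asset correlation), not the model of any particular firm:

```python
import numpy as np
from statistics import NormalDist

rng = np.random.default_rng(1)

n_names, n_sims = 100, 50_000
p, rho = 0.02, 0.30                     # assumed default prob and asset correlation
threshold = NormalDist().inv_cdf(p)     # default barrier for standard-normal assets

M = rng.standard_normal((n_sims, 1))            # common macro factor, one per scenario
eps = rng.standard_normal((n_sims, n_names))    # idiosyncratic shocks
assets = np.sqrt(rho) * M + np.sqrt(1 - rho) * eps
losses = (assets < threshold).sum(axis=1)       # defaults per scenario

# The mean stays near n_names * p = 2 defaults, but the common factor
# fattens the tail that the senior tranches are priced on.
print(losses.mean(), np.quantile(losses, 0.999))
```

With rho set to zero the 99.9th percentile would sit around 8 defaults (an ordinary binomial tail); the common factor pushes it several times higher. That tail, not the mean, is what separates the models in Figures 1 and 2.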

A second problem with these models is that they assume that correlation, and even nonindependence, is constant over time. A casual review of historical data shows that industries that may have been related at one point in time may not be related later on. To compensate for these inadequacies, modelers often use conservative inputs to the model. As a result, the model then undervalues the senior tranches or, equivalently, undervalues their credit quality.

Recently, there have been many strides in modeling and, in particular, econometric modeling. Some have concluded that the most important inputs into the calculation of the probability distribution are outlying events, not average default rates or average correlations. An analysis of historical defaults shows that default rates are not constant over time. There have been periods when the number of defaults has spiked up.

CDOs are likely to evolve and remain a staple in every investment manager’s portfolio. But in order to gain greater acceptance and application, there must be a parallel development in our understanding of defaults—and, in particular, correlation. This knowledge will not only benefit CDOs, but will find applications across all lines of risk management, including all kinds of portfolio risk management and capital allocation.

Richard Skora is president of Skora and Co., a credit risk management consulting business. His e-mail address is richard.skora@skora.com.

Figure 1
Losses in the Collateral of a CDO

Figure 1 illustrates how various models approximate the probability distribution of losses in the collateral of a CDO. All the models basically agree on expected losses but disagree greatly on the variability of the losses. The losses are concentrated in the left of the graph. These losses affect the lowest tranche of the CDO first. All the tranches are concerned with the variability of the losses, but the lowest tranche is affected more than the others by the way these losses are correlated with the rest of the portfolio. The highest tranches are affected when the losses are excessive and in the range of $150 million or more. The last of the four distributions was calculated by a model that incorporates both the nonstationarity of defaults and the nonindependence of defaults.

Figure 2
Large Losses in the Collateral

Figure 2 provides a close look at how various models approximate the probability distribution of losses by looking at losses above $150 million. In this region, the shortcomings of the models are amplified. Slight changes in correlation assumptions have a profound effect on the calculation of expected losses to the higher tranches.
