The World According to William Sharpe

William Sharpe is the STANCO 25 Professor of Finance at Stanford University’s Graduate School of Business. He is one of the originators of the Capital Asset Pricing Model and the binomial method for valuing options, as well as the creator of the Sharpe ratio for investment performance analysis. He is the author of six books, including Portfolio Theory and Capital Markets (McGraw-Hill, 1970) and Asset Allocation Tools (Scientific Press, 1987). In 1990 he received the Nobel Prize in Economic Sciences. He spoke with editor Joe Kolman in February.

Derivatives Strategy: The Sharpe ratio has become a standard risk measurement tool in finance. What inspired you to develop it?

William Sharpe: I came up with the Sharpe ratio in 1962 or 1963, but it wasn’t called that. The decision context was trying to figure out what would happen if you put some of your money in a mutual fund and the rest in the bank, or maybe even borrowed some money and put all your money plus the borrowed money in the mutual fund. If you cared about risk and return à la Harry Markowitz, you would look for the mutual fund that had the highest ratio of excess return—that is, the highest expected return over the borrowing rate or lending rate divided by the per-unit risk.
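The reward-to-variability ratio Sharpe describes is the mean excess return over the riskless rate divided by the standard deviation of that excess return. A minimal sketch, using invented fund returns and an invented riskless rate:

```python
# Illustrative reward-to-variability (Sharpe) ratio. The fund returns
# and riskless rate below are made-up numbers, not real data.
import statistics

fund_returns = [0.12, 0.08, -0.04, 0.15, 0.07]  # hypothetical annual returns
risk_free = 0.03                                 # hypothetical borrowing/lending rate

excess = [r - risk_free for r in fund_returns]
mean_excess = statistics.mean(excess)
stdev_excess = statistics.stdev(excess)          # sample standard deviation

sharpe_ratio = mean_excess / stdev_excess
print(round(sharpe_ratio, 3))  # ≈ 0.636 for these made-up numbers
```

In Sharpe's decision context, the fund with the highest such ratio is the one to lever up or down with riskless borrowing or lending.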

I called that the “reward-to-variability ratio.” It sort of lay fallow. It had been published and was out there in the literature, but no one paid much attention to it until someone in the commodities business—I have no notion who—started using it to rank commodities advisers in some publication on a continuing basis.

So the commodities derivatives folks started using this as a measure of what returns you got per unit of risk. You can understand why they did that. With derivatives, you can let the risk be almost any level you want by varying the amount of leverage you put in the position. You can’t simply compare the returns of somebody who’s got a 100 percent margin and somebody who’s got a 10 percent margin. At some point, someone called it the Sharpe ratio.

DS: Are you happy with how it’s been used?

WS: It’s all quite gratifying, but I’ve argued that in many cases it’s actually being misused. The problem is that when most people evaluate an investment strategy, be it a mutual fund or a derivatives position, they are really thinking in the context of an overall portfolio. If you’re trying to decide whether to put A in 10 percent of your portfolio or to put B in 10 percent of your portfolio, you need to think not only about the risk and return of A and the risk and return of B, but also about the extent to which A and B are correlated with the other stuff in the portfolio. So you decide that A has a slightly higher added return per unit of risk, but it’s highly correlated with everything else you hold. Whereas B, even though it’s got a slightly lower added return per unit of risk, is uncorrelated with everything else and is a great diversifier. So you’d prefer B, even though A would have the higher Sharpe ratio.

DS: Is there a way around that problem?

WS: Yes. One way is to take the return of the fund minus the return on a benchmark that has the same sort of exposures to asset classes. If you were considering a U.S. growth equity mutual fund, for example, you would compare it with an index fund that holds a whole bunch of U.S. growth stocks, and you would take the average difference in performance divided by the standard deviation of the difference in performance. That is what I call the “selection Sharpe ratio.” Some people call it the information ratio.

It’s a better number to use when you’re thinking of putting a particular mutual fund in a portfolio with a bunch of other things, because that difference presumably is either uncorrelated or not as correlated with the differences to the other things. But even there, you’re still throwing out some information you need, since even two “growth funds” can differ significantly in the exposures to growth stocks and, for that matter, cash, value stocks and the like.
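The selection Sharpe ratio (information ratio) replaces the riskless rate with a style-matched benchmark: the mean of the active return divided by its standard deviation. A sketch with hypothetical numbers:

```python
# Sketch of the "selection Sharpe ratio" (information ratio): average
# return difference versus a style-matched index fund, divided by the
# standard deviation of that difference. All numbers are hypothetical.
import statistics

fund      = [0.12, 0.08, -0.04, 0.15, 0.07]  # hypothetical growth fund
benchmark = [0.10, 0.09, -0.06, 0.13, 0.08]  # hypothetical growth index fund

active = [f - b for f, b in zip(fund, benchmark)]  # active (differential) return
info_ratio = statistics.mean(active) / statistics.stdev(active)
```

Because the benchmark absorbs the asset-class exposures, the active return is (ideally) less correlated with everything else in the portfolio, which is exactly the point Sharpe makes above.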

DS: You started thinking about risk quite a bit before most people in the academic world. What do you think of the enormous growth in new risk measurement methodologies?

WS: Certainly the growth of procedures for estimating risks is impressive; however, the focus on particular parts of the probability distribution of outcomes is not especially new—and in some cases may be dangerous. We’ve always known that the best strategy is to look at the probability distribution. In a certain context, and for certain decision-making, you can simplify the process by looking at some aspects of it—perhaps the mean and the standard deviation, or the mean and value-at-risk. But typically, you need to look at least at two things. You can’t simplify it down to one.

In some cases, maybe you can combine those two things into one thing, but the fewer elements you look at, the more dangerous it can be, even if you’ve got the real probability distribution. Of course, if you don’t have the real distribution, then you have even bigger problems. What’s new is the widespread application of increasingly sophisticated models to estimate the distribution.

Is VAR a better measure than standard deviation? It depends. If you are comparing a set of alternatives with similar distributions, it is often the case that if you give me a VAR and a mean, I can give you a standard deviation and a mean, or vice versa. It follows that I can separate efficient and inefficient strategies using either pair of measures.
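Sharpe's interchangeability point can be made concrete under one common distributional assumption. If returns are normal, a 95 percent VaR is just the mean shifted by about 1.645 standard deviations, so either pair of measures determines the other. A sketch, with illustrative parameters:

```python
# Under a normal-returns assumption, (mean, VaR) and (mean, std dev)
# carry the same information: VaR_95 = -(mu - 1.645*sigma) as a loss,
# so sigma is recoverable from VaR and vice versa. The mean and
# standard deviation below are illustrative, not real data.
Z_95 = 1.645  # approximate 95% one-tailed standard normal quantile

def var_from_sigma(mu, sigma, z=Z_95):
    """95% value-at-risk (loss, expressed as a positive number)."""
    return -(mu - z * sigma)

def sigma_from_var(mu, var, z=Z_95):
    """Invert the relation: standard deviation implied by mean and VaR."""
    return (var + mu) / z

mu, sigma = 0.05, 0.20
v = var_from_sigma(mu, sigma)
assert abs(sigma_from_var(mu, v) - sigma) < 1e-9  # round-trips exactly
```

When the distributions being compared are not similar (fat tails, skewed option payoffs), the two pairs stop being interchangeable, which is where the "it depends" comes in.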

DS: You’ve done a lot of consulting for financial institutions on risk. Have you found any recurring shortcomings in the marketplace’s understanding of risk?

WS: I’ve worked mainly with pension funds, endowments and foundations over the years, at a firm of my own and now through Financial Engines, a new company I’m involved with. The technology we’ve developed and adapted there includes optimization, Monte Carlo simulation, style analysis and factor models.

The technology built up over the years in industry and by academics is powerful, valuable stuff. But you have to have some experience in using it to know when the power is a little too much for the quality of data you’re putting in it. Formally, you may have to acknowledge that you don’t know precisely the probability distribution of the returns for, say, a derivatives position. You don’t know the precise nature of the stochastic process for interest rates. So you don’t want to turn powerful mathematical procedures loose on your best estimates without some control and caution, because you might get some silly answers.

DS: How did you come up with the idea of binomial option-pricing?

WS: When I was working on the first edition of my Investments book in the 1970s, I got to the chapter on options. Since this was a book for MBA students, I was trying to figure out the easiest way to explain the notion of hedging and the notion of valuation by doing a replicating hedge—the fundamental idea that drives the Black-Scholes-Merton model.

Imagine you’ve got a stock that could only go up by 10 percent or down by 5 percent. And imagine you’ve got a Treasury bond that only goes up by 1 percent. Now imagine somebody wants to do a deal in which you get paid a certain amount if the stock goes up and a different amount if it goes down—an option, if you will. I started horsing around with this little numerical example and found I was able to explain how the option would allow you to achieve the same effect with the right combination of the stock and bond.
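The one-period replication Sharpe describes can be written out directly: choose a stock position and a bond position that match the option's payoff in both states, and the cost of that portfolio is the option's value. The 10 percent / –5 percent / 1 percent moves come from his example; the $100 stock price and the payoff amounts are invented for illustration:

```python
# One-period replicating hedge for Sharpe's textbook example. The stock
# moves (+10%/-5%) and bond return (+1%) are from the text; the $100
# stock price and option payoffs (10 if up, 0 if down) are invented.
s0, up, down, rf = 100.0, 1.10, 0.95, 1.01
pay_up, pay_down = 10.0, 0.0

# Solve: delta*s0*up + b*rf = pay_up  and  delta*s0*down + b*rf = pay_down
delta = (pay_up - pay_down) / (s0 * up - s0 * down)  # shares of stock
b = (pay_up - delta * s0 * up) / rf                  # bond position (negative = borrow)
option_value = delta * s0 + b                        # cost of the replicating portfolio
print(round(option_value, 2))  # ≈ 3.96
```

The option's value follows purely from no-arbitrage: any other price could be arbitraged against the replicating portfolio.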

When I got that one written up, I tried to make it a little harder. Let’s assume there are two years. Each year the stock could go up 10 percent or down 5 percent, and each year the Treasury bond will go up 1 percent. You end up with four different things that could happen. Then we design a little option and show how to value it. I can show you how to solve it recursively and figure out how to replicate and hedge it.

At that point, I had a little rudimentary curve that looked like the Black-Scholes-Merton curve, except it had three line segments. I guessed that if we went to three years or five or 10 years, this thing would smooth out and look more like the standard Black-Scholes-Merton curve.

Then I realized that by using this sort of simple discrete-time, discrete-state approach, which is now called binomial pricing, I could get results quite similar to those of Black-Scholes-Merton in the context for which Black-Scholes-Merton was designed. The great thing, of course, was that I could now deal with all kinds of exotic things that I couldn’t handle with the Black-Scholes-Merton model, such as options with payouts, early exercise and Lord knows what else.
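The smoothing-out Sharpe guessed at can be demonstrated numerically: a discrete-time tree in the Cox-Ross-Rubinstein parameterization converges to the Black-Scholes-Merton price as the number of steps grows. A sketch with illustrative parameters (at-the-money call, 20 percent volatility):

```python
# Sketch of the convergence Sharpe describes: a Cox-Ross-Rubinstein
# binomial tree approaches the Black-Scholes-Merton price as steps
# increase. All parameter values below are illustrative.
import math

def bsm_call(s, k, r, sigma, t):
    """Black-Scholes-Merton European call price."""
    d1 = (math.log(s / k) + (r + 0.5 * sigma**2) * t) / (sigma * math.sqrt(t))
    d2 = d1 - sigma * math.sqrt(t)
    N = lambda x: 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))  # normal CDF
    return s * N(d1) - k * math.exp(-r * t) * N(d2)

def binomial_call(s, k, r, sigma, t, n):
    """n-step CRR binomial price of the same call, by backward recursion."""
    dt = t / n
    u = math.exp(sigma * math.sqrt(dt))
    d = 1.0 / u
    q = (math.exp(r * dt) - d) / (u - d)   # risk-neutral up-move weight
    disc = math.exp(-r * dt)
    values = [max(s * u**j * d**(n - j) - k, 0.0) for j in range(n + 1)]
    for _ in range(n):
        values = [disc * (q * values[j + 1] + (1 - q) * values[j])
                  for j in range(len(values) - 1)]
    return values[0]

s, k, r, sigma, t = 100.0, 100.0, 0.05, 0.2, 1.0
exact = bsm_call(s, k, r, sigma, t)        # ≈ 10.45
approx = binomial_call(s, k, r, sigma, t, 200)
```

With a handful of steps the binomial price is the kinked, segmented curve Sharpe describes; by a few hundred steps it sits within a penny or two of the continuous-time answer.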

I remember showing it to John Cox in my office one night, and he was impressed. Then John and Mark Rubinstein and Steve Ross took this idea and expanded on it. The net result was the famous Cox-Ross-Rubinstein paper. If you check the Cox-Rubinstein book, you’ll find that they say it was built on my idea.

That’s where the binomial option-pricing model came from, in a sense. It’s an incredibly powerful and practical method. Now, of course, people have gone way beyond that to trinomial models and many variations on the theme. Some say that it and these other discrete models are being used more than continuous-time models.

DS: What are the important theoretical economic influences on your own work in option-pricing?

WS: If you really start thinking about it, all this uses a process similar to what is called Arrow-Debreu time/state pricing theory in economics. The authors were Kenneth Arrow of Stanford University and Gerard Debreu of the University of California at Berkeley, who developed it in the 1950s and later won Nobel Prizes for this and related work.

The basic idea is to think of a world in which, at any given time, there are a certain number of states that exist. Let’s say that, in principle, you can buy a claim to give you $1 at a given time, if and only if a given state occurs, but nothing otherwise. If you take that view of the world, you’ll find the mathematics are simple and linear, and you get incredible insights.

Many of us in finance in the 1960s knew about all this, but everyone said it wasn’t practical and you couldn’t use the stuff—there aren’t a zillion different states of the world and nobody trades things like that. So we concentrated on the mean-variance Markowitz approach, which has, of course, served us quite well.

Now, because of derivatives, we realize that all this has real practical application. I didn’t realize that binomial pricing was really only doing a special case of Arrow-Debreu. Although I knew of Arrow-Debreu, I didn’t connect it at that time. And I think it’s kind of sad that the whole financial engineering profession has been built up without, in some way, even realizing that it is using Arrow and Debreu’s procedures.

People use a lot of what’s called risk-neutral pricing or risk-neutral probabilities. They say, “We’ll find the ‘expected value’ using ‘risk-neutral probabilities,’ and we’ll discount that and that’s how we’ll value these things.” I take umbrage at those terms as an academic. These things, in fact, are not risk-neutral and they’re not probabilities. They are forward prices and they are Arrow-Debreu prices.
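Sharpe's objection can be made precise in a few lines: the so-called risk-neutral probabilities are Arrow-Debreu state prices rescaled to sum to one, and the "discounting" step is just multiplication by that sum (the price of a riskless $1). The state prices and payoff below are hypothetical:

```python
# "Risk-neutral probabilities" unpacked as Sharpe describes: they are
# Arrow-Debreu state prices normalized to sum to one, and discounting
# multiplies back by that sum. The numbers are hypothetical.
state_prices = [0.30, 0.45, 0.20]
discount = sum(state_prices)              # price today of $1 in every state
q = [p / discount for p in state_prices]  # the "probabilities" (sum to 1)

payoff = [10.0, 4.0, 0.0]
value_direct = sum(p * x for p, x in zip(state_prices, payoff))
value_rn = discount * sum(qi * x for qi, x in zip(q, payoff))
assert abs(value_direct - value_rn) < 1e-9  # identical by construction
```

The two valuations agree identically, so nothing probabilistic is actually being asserted: the q's are relative prices, not beliefs about likelihoods.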

DS: How would somebody learn more about this?

WS: I’ve been trying to write a book that uses that approach as the bedrock. It’s on my web site (www.wsharpe.com) under Investment Text.

DS: What other important risk management issues are on the horizon?

WS: A lot of academics and people in the industry are beginning to concentrate a little bit more on endogeneity. What that means is: if you’re big enough and other people think you’re smart and want to try to emulate what you’re doing, you have to build into your models the fact that your decisions are going to affect the probability distributions. So if you take a position in Russia and a position in Brazil, and other people are looking over your shoulder, the correlation between Russia and Brazil may be higher than it would otherwise have been.

A lot of evaluation, risk estimation and hedging calculations use the classical assumption that there are probabilities and risks and returns—but that they’re given. But at some point, if you’re big enough, you have to start recognizing that your actions may change the risks and returns in the wrong direction. So you’d better not do anything unless everything still looks favorable after you’ve taken into account the effect of your actions on the probabilities.

That becomes much harder. When you reach a certain scale or have a certain prominence, you have to deal with that. Traders know this, but it’s a lot harder to build it into your formal risk-modeling process. I think a lot of people haven’t built it in. In some cases, that’s fine; in other cases, well, maybe they should work harder and take this into account.

The other issue that people are wrestling with is liquidity. There are times at which people seem to want it more than they do at other times. It seems like this risk is different from traditional risk, but it can hurt you just as badly if, for example, you’re on the wrong side of a flight to quality. I think academics are just beginning to try to understand what liquidity is and to what extent it’s different from traditional sources of risk. How do you deal with it? How should you measure it? How should you predict it?

Fortunately, although we’ve come a long way in understanding the problems of the financial world, the industry keeps presenting us with sufficient new ones to keep researchers and practitioners employed into the foreseeable future.
