Foreword

According to modern portfolio theory, risk and return go hand in hand. But risk (whether portfolio risk, firm risk, security risk, or other) rarely gets the same amount of attention as return does. For instance, the annual return on the S&P 500 Index is a commonly mentioned number in the financial press. But how often does one see the annual volatility of the S&P 500 mentioned? Risk management for the asset management world encompasses a wide variety of concerns, some of which can be measured and quantified and some of which must be handled subjectively. Investors and asset managers must be prepared to address these risks if they are to achieve optimal investment performance. That is, investors and managers must understand the principles and practices behind the design, implementation, and interpretation of risk management systems for the asset management industry. The authors in this proceedings come at the issue of risk management from many angles, including a behavioral perspective. They cover the topics of risk management during market crises, fiduciary risks,
firmwide risks, risk management tools, implementing risk management systems, and the role of credit risk in understanding equity risk. And although quantifying risk, through the use of such measures as value at risk, can help managers and investors get a handle on risk, subjective judgment must always come into play to make sure that the measures and systems used are relevant and accurate. We are grateful to Bluford H. Putnam at CDC Investment Management Corporation for serving as moderator for the conference and for providing the introduction for this book. We also wish to thank all the authors for their assistance in producing this book: Richard M. Bookstaber, Moore Capital Management; Stephen Kealhofer, KMV Corporation; Andrew W. Lo, Massachusetts Institute of Technology's Sloan School of Management; Jacques Longerstaey, Goldman Sachs Asset Management; Michelle McCarthy, Deutsche Bank; Desmond Mac Intyre, General Motors Investment Management Corporation; Robert M. McLaughlin, Eaton & Van Winkle; Brian D. Singer, CFA, Brinson Partners; and Charles W. Smithson, CIBC World Markets.
Terence E. Burns, CFA
Vice President, Educational Products
©Association for Investment Management and Research
Introduction to Risk Management
Bluford H. Putnam
President, CDC Investment Management Corporation

Integrating risk analysis into the investment process and into the core operational processes of managing an asset management company requires a commitment to discipline. This integration requires using a number of building-block concepts about risk management, but these building blocks will only improve investment performance and the overall returns of the asset management company if they are assembled in a theoretically consistent fashion. Developing a disciplined approach to risk management involves understanding a broad array of concepts. In this proceedings, key themes resonate that will help in guiding the risk management process toward one of enhancing investment returns. These building-block themes need to be noted up front because they appear and reappear as important elements in each presentation. The authors in this proceedings take these themes and apply them to a number of key topics, thereby challenging the risk management world.
Editor's note: The opinions in this introduction are solely those of the author and do not necessarily reflect the opinions of the other authors in this proceedings.

Building-Block Concepts

The distinction between risk management and risk measurement must always be in the forefront of one's thoughts. Many risk management departments have nothing to do with risk management and are focused solely on risk measurement. The difference is critical. Risk measurement is part of the fiduciary oversight process and does not necessarily directly influence investment decisions. Risk management, by contrast, is tightly linked to the assessment of return potential in the investment process and directly affects the construction of portfolios. For example, calculating the historical 100-day value at risk (VAR) for a given portfolio is a risk measurement task. Risk management requires evaluating various estimates of risk, including forward-looking or judgment-based measures, and then relating the risk estimates to return estimates in a portfolio context to allow one to make investment decisions. Government regulators, clients of investment companies, and the chief investment officers and stockholders of investment companies will want to review risk measurement information as part of an ongoing and important due diligence process. To benefit from improved risk-return characteristics, however, risk management must be fully and seamlessly integrated into the investment process.

The appropriate use of judgment is another key building-block concept. Many investors suffer from the delusion of "spurious specificity." Historical measures of risk can be calculated to the tenth decimal place or more with impressive accuracy. Asserting that these pinpoint measures of risk are in any way, shape, or form good forecasts of future risks is, however, a huge leap of faith. Moreover, having a portfolio with return expectations based on a forward-looking view of the world paired with risk estimates based on a backward-looking view of the world is a prescription for disaster in terms of evaluating risk-return trade-offs at the portfolio level. The construction process is internally inconsistent, even though this same historical risk measure may provide useful information in the risk oversight process. Those professionals assigned exclusively to risk measurement tasks are notoriously uncomfortable with making decisions and integrating judgment into the risk estimation process. Risk management, once fully integrated into the investment process, is very much about the appropriate use of judgment to improve on the essentially quantitative estimates from the risk measurement calculations. The key to using judgment effectively in risk management, and in investment processes in general, is to have a strong appreciation of the assumptions embodied in specific risk-estimation calculations. When developing theory, assumptions are made to simplify the analysis. When applying theory to practical cases, the user of any theoretical concept needs to check the simplifying assumptions to make sure that they do not embody hidden risks that are not being taken into account.
For example, option-pricing theory, in many of its most commonly used versions, assumes unlimited borrowing capacity at the risk-free rate, no taxes, perfect market liquidity with continuous prices and no gaps, constant price volatility, and more. Each of those assumptions comes with its own danger signals. As Michelle McCarthy discusses, knowing that many of these assumptions are embedded in common risk measurements can give one the confidence to use judgment appropriately in the risk management process.
Risk Management: Principles and Practices

Although this proceedings is about risk management, a critical and final building-block concept is to focus on a consistent measure of returns. In general, returns should be measured as the excess return over the risk-free rate or, in some cases, the excess return over a given benchmark. To make these measurements properly, one needs to train oneself to think of every portfolio in two parts: a benchmark portfolio and an overlay portfolio. Even a market-neutral hedge fund should have a benchmark portfolio, even if it is only the 90-day T-bill rate. For U.S. equities, the benchmark is commonly the S&P 500 Index. The overlay portfolio is simply the total portfolio with the benchmark positions subtracted. Thinking in terms of benchmark and overlay portfolios, as well as in terms of excess returns, greatly enhances the ability to consistently integrate risk management into the investment process.

Probabilities, Prices, and Preferences

In modern portfolio theory, risk and return are two sides of the same coin. An investor's financial situation is improved if higher returns can be earned for the same level of risk or if the same returns can be earned by taking less risk. This view of risk and return assumes a symmetry of preferences for risk taking that needs to be explored further. As Andrew Lo discusses, a complete understanding of the relationship between risk and return requires one to think in terms of probabilities, prices, and preferences. A key lesson for those involved with risk management is that the laws of probability apply regardless of how the probabilities are estimated. Confusion sometimes occurs between probabilities that are purely objective in nature and those that are subjective. An objective probability, such as the one-in-six chance of one side of a fair six-sided die coming up "six," is accurately quantifiable. By contrast, an example of a subjective probability is someone saying the probability of life forms existing somewhere else in the universe is one in six. Such a probability is more of a belief than a quantifiable estimate, even if assigned a specific number. Once probabilities are assigned, however, they follow the same laws. That is, if Event Y is subjectively assigned a probability of 0.7, then the probability of Event Y not happening is 0.3; Event Y either occurs, or it does not. Other laws of how to calculate conditional probabilities apply as well, regardless of the objective or subjective nature of the original probability estimates. This point may seem trivial and obvious, but when one moves into the world of subjective probabilities, it is not hard to contrive complex cases in which the probabilities are inconsistent with each other. For example, this inconsistency can happen quite easily when one is casually thinking about the probabilities of the French franc or the German mark appreciating against the U.S. dollar. Such subjective probabilities of currency movements may be inconsistent when looked at from another perspective, such as against the U.K. pound or Japanese yen, indicating an inconsistency in the probabilities of the cross-exchange rates. The lesson is that one must be extremely disciplined in manipulating risk and correlation estimates and in assigning related probability estimates so that internal mathematical consistency is always maintained.

Another issue is whether to introduce a bias into risk estimates. In the practical application of risk measurement, it is not unusual for "conservative" risk estimates to be chosen for a wide variety of economic or financial events. The person in charge of risk measurement often believes that the only proper response is to err on the side of caution when measuring risks. Unfortunately, when this type of conservatism is practiced for measuring risks in an investment process, the outcome will be substandard returns relative to the actual risks taken. The conservative bias in the risk measurement process will translate, as a practical matter, into an unnecessary reduction in risk-bearing capacity, which, in turn, will limit return opportunities compared with an investment process based on internally consistent, fair, and unbiased estimates of future risks.

The price of reducing risks is also important to the risk management process. As Lo discusses, one can often observe explicit prices for certain types of financial hedging transactions, or one can use various versions of option-pricing theory to estimate the price of a given hedging transaction. Obviously, for an economic system, considering the price of reducing risk is essential. What many analysts forget is that one should not stop at analyzing the price of hedging a single transaction. One can also reduce risk at the level of the whole portfolio. In this case, correlations between pairs of financial positions matter a great deal in determining whether a lower price for minimizing risks can be obtained at the portfolio level or at the transaction level. In most practical cases, correlations are such that significant reductions in the price of limiting risk may be obtained when they are viewed at the portfolio level instead of the transaction level.

Once probabilities and prices are on the table, thinking about preferences is much easier. What is important is that the assumption of symmetry in preferences should be explicitly recognized and explored. Is there something in the nature of the investor or the investment problem that suggests that risk-bearing capacity should be different for gains and losses? Behavioral finance indicates that in many cases, people do express an asymmetry of preferences. In financial theory, the building block of put and call options allows a rich approach to tailoring portfolios to match the asymmetry in preferences, but at a cost. As a result, one is explicitly forced to think consistently and simultaneously in terms of probabilities, prices, and preferences to get a full understanding of risk management.
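The cross-rate example can be made concrete. Because the log of a cross rate is the sum of the logs of its two legs, the variance of the cross rate is pinned down by the two leg volatilities and their correlation; given three volatility estimates, the implied correlation must lie between -1 and 1, or the estimates are mutually inconsistent. A minimal sketch of this consistency check (the volatility numbers are illustrative, not market data):

```python
def implied_correlation(vol_ab: float, vol_bc: float, vol_ac: float) -> float:
    """Correlation between log returns of A/B and B/C implied by the three
    volatilities, from Var(A/C) = Var(A/B) + Var(B/C) + 2*rho*vol_ab*vol_bc,
    which holds because log(A/C) = log(A/B) + log(B/C)."""
    return (vol_ac**2 - vol_ab**2 - vol_bc**2) / (2.0 * vol_ab * vol_bc)

def consistent(vol_ab: float, vol_bc: float, vol_ac: float) -> bool:
    """Three volatility estimates are internally consistent only if the
    correlation they imply is a valid correlation."""
    rho = implied_correlation(vol_ab, vol_bc, vol_ac)
    return -1.0 <= rho <= 1.0

# Illustrative annualized volatilities for USD/DEM, DEM/FRF, and USD/FRF.
print(consistent(0.10, 0.03, 0.11))  # True: these estimates can coexist
print(consistent(0.10, 0.03, 0.14))  # False: the implied rho exceeds 1
```

The same discipline generalizes: any full set of volatility and correlation estimates must form a valid (positive semidefinite) covariance structure, whether the inputs are objective or subjective.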
Measuring vs. Managing with VAR

To use VAR approaches effectively in the risk management process and as part of the investment process, one needs a very thoughtful and complete understanding of the limits of VAR as a risk measurement tool, which is not to say that VAR is not a critical tool in the risk management process. But as with any tool, especially quantitative tools, one needs to know its limits to use it properly. Michelle McCarthy provides a thorough examination of VAR, noting carefully its advantages and disadvantages.

Several characteristics of VAR analysis, as commonly used, need highlighting. First, VAR is generally calculated using short-run, historical, daily data. This common habit has some unappreciated side effects. Daily data, for example, introduce the likelihood of underestimating correlations, especially in global portfolios. This underestimation occurs because markets in different continents and time zones close at different times. Financially important information may become available after one market closes but before another closes on the same day. Closing prices will then show a discrepancy, reflecting which information each market could incorporate before its close, that gets corrected the next day when the lagging market reopens. The result is correlations estimated from daily prices that are lower than is truly the case. Using weekly or monthly data to calculate correlations goes a long way toward minimizing this problem. Two-day averaging is a very crude first step that addresses the problem in a minimal way, but it is a start. A low estimate of correlations results in an overdependence on diversification that may be an illusion. Daily data, when used for global portfolios, are also plagued with holidays. The common practice is to carry the previous day's closing price through the holiday period until the market in question reopens for trading. This practice lowers estimated correlations as well as volatility.
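The time-zone effect is easy to reproduce in a stylized simulation. Below, two markets are driven by the same "world" factor, which arrives in two sessions each day; market B closes before the second session, so its close reflects that news a day late. Daily returns then understate the true correlation, and the crude two-day aggregation mentioned above recovers much of it. All parameters here are illustrative, not calibrated to real markets:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000

# Each day's common news arrives in two sessions: f1 while both markets
# are open, f2 after market B has already closed for the day.
f1 = rng.normal(0.0, 1.0, n)
f2 = rng.normal(0.0, 1.0, n)

# Market A's close reflects both sessions; market B picks up f2 a day late.
r_a = f1 + f2 + rng.normal(0.0, 0.5, n)
r_b = f1 + np.roll(f2, 1) + rng.normal(0.0, 0.5, n)
r_a, r_b = r_a[1:], r_b[1:]  # drop the first day (np.roll wraps around)

corr_daily = np.corrcoef(r_a, r_b)[0, 1]

# Crude fix from the text: aggregate adjacent days before estimating.
corr_2day = np.corrcoef(r_a[1:] + r_a[:-1], r_b[1:] + r_b[:-1])[0, 1]

print(f"daily correlation:   {corr_daily:.2f}")   # understated
print(f"two-day correlation: {corr_2day:.2f}")    # closer to the truth
```

In this setup the population daily correlation is roughly 0.44 while the two-day figure is roughly 0.67, against a true synchronized correlation near 0.89, which is why weekly or monthly data close the gap further.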
When short periods are analyzed, such as the past 100 days, the biases that are introduced through the use of daily data can matter a lot.
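As a sketch of what the risk-measurement side looks like, here is a rolling 100-day historical VAR; note that it is purely backward looking, which is exactly the limitation discussed above. The window length and 95 percent confidence level are conventional choices, not prescriptions, and the return series is synthetic:

```python
import numpy as np

def historical_var(returns: np.ndarray, confidence: float = 0.95) -> float:
    """One-day historical VAR: the loss at the (1 - confidence) quantile of
    the empirical return distribution, reported as a positive number."""
    return -float(np.percentile(returns, 100.0 * (1.0 - confidence)))

def rolling_var(returns: np.ndarray, window: int = 100,
                confidence: float = 0.95) -> np.ndarray:
    """Historical VAR recomputed each day from the trailing `window` returns.
    In a calm stretch the estimate drifts down even if true risk is rising."""
    return np.array([historical_var(returns[t - window:t], confidence)
                     for t in range(window, len(returns) + 1)])

# Toy illustration: a calm year followed by a one-day rout. The trailing
# estimate is near its lowest right before the shock hits.
rng = np.random.default_rng(1)
calm = rng.normal(0.0, 0.002, 250)   # placid daily returns, like 1993
shock = np.array([-0.04])            # a 1994-style one-day rout
series = np.concatenate([calm, shock])
estimates = rolling_var(series)
print(f"trailing 95% VAR the day before the rout: {estimates[-2]:.4f}")
```

The day-before estimate is an order of magnitude smaller than the loss that follows, which is the mean-reversion trap described in the text: the calmer the trailing window, the lower the reported risk.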
Using commonly calculated historical VAR measures in a portfolio-construction process runs into the danger of mixing forward-looking judgments about returns with backward-looking estimates of risk. For example, fixed-income markets were quite calm in the United States in 1993, when the U.S. Federal Reserve kept its federal funds rate locked at 3 percent all year. Price volatility declined for fixed-income markets, as measured with historical daily data, throughout 1993. Thus, using historically based VAR estimates for risk put one in the position of seeing lower and lower risks, even as the true risks of a Federal Reserve tightening and a bond market collapse were increasing. The worst case hit in February 1994, when historically based VAR signals were still showing relatively low risks in the fixed-income sector. The issue involves the potential for risk to be mean reverting. That is, if high-risk periods are followed by lower risk periods and vice versa, then historical VAR measures may well regularly send low-risk signals just before financial storms and send high-risk signals after storms and before the calming periods. One needs to use some judgment in these cases, and the naive assumption that the recent past will repeat itself is probably a very poor one. One can also build quantitative systems that use historical data in a more appropriate and dynamically adjusting fashion to gain some improved insights into future risks. Even better, one can introduce return-forecasting systems and processes into the risk-estimation process. Changes in both return forecasts and factors influencing returns can have important, forward-looking implications for estimating risks. Another limitation of VAR, as commonly used, is its inability to deal with option positions, whether explicit or embedded in structured positions. Options are a wonderful way of capturing asymmetric risks.
Symmetric risk measures, such as common uses of VAR, are handicapped when option-related positions are important in a portfolio. And embedded options are more common than many investors realize. For example, high-yield bonds are best analyzed as put options. The lender (investor) is writing (short) a put option on the assets of the company. If everything goes well, the company repays the loan on schedule with the agreed-on interest. If everything goes badly, the lender ends up owning the assets of the company just when those assets have significantly less value than when the loan was made. What this example clearly argues is that one needs an option-theoretic approach to estimating the probabilities of default. Some path-breaking quantitative work has been done in this regard by the KMV Corporation, the results of which are discussed by Stephen Kealhofer.
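The put-option reading of a high-yield bond (and the corresponding call-option reading of equity) follows from the terminal payoffs in a stylized one-date, Merton-style view of the firm; `firm_value` and `face` below are hypothetical inputs, not a calibrated model:

```python
def lender_payoff(firm_value: float, face: float) -> float:
    """Lender receives the promised face value unless assets fall short:
    min(V, F) = F - max(F - V, 0), a riskless bond minus a put on the
    assets. The high-yield investor is, in effect, short that put."""
    return min(firm_value, face)

def equity_payoff(firm_value: float, face: float) -> float:
    """Shareholders keep whatever is left after the debt is paid:
    max(V - F, 0), a call on the assets struck at the face value of debt."""
    return max(firm_value - face, 0.0)

# If things go well, the lender is repaid in full and equity keeps the rest;
# if things go badly, the lender "owns" assets worth less than the loan.
print(lender_payoff(140.0, 100.0), equity_payoff(140.0, 100.0))  # 100.0 40.0
print(lender_payoff(60.0, 100.0), equity_payoff(60.0, 100.0))    # 60.0 0.0
```

Because the two payoffs always sum to the firm's asset value, the volatility of the assets, not the symmetric volatility of past bond prices, is what drives the risk of each claim.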
In addition, equities are like call options, particularly in entrepreneurial companies with short track records, tremendous potential, and little profit. The investor pays an up-front fee for the stock and is betting on everything going well in the long run and on the entrepreneurial company developing into the next Microsoft or Federal Express. In the case of high-yield bonds (put options) and equity in young companies (call options), the option-theoretic approach to risk measurement focuses attention on the volatility of the underlying cash flows and the value of the assets. In these cases, going beyond VAR to estimate forward-looking risks is essential to capture the risk asymmetry embedded in these option-like positions. Rare events also cause problems for the VAR methodology. One tends to use the recent past in the VAR calculation, and this practice may ignore some very big and risky past events. The obvious supplement is to look to scenario tests and Monte Carlo simulations to help capture the risk involved in rarely occurring events and financial shocks. Rare events are complicated by the likelihood that, within a given asset class, the correlations between exposures tend to rise, and rise sharply toward unity, in a crisis situation. An example of this "increasing correlations in a crisis" problem was the credit and liquidity crisis of the summer of 1998. One can argue that what brought down the investment firm of Long-Term Capital Management (LTCM) was that its long (leveraged) positions in the credit markets were only weakly correlated in normal times, but in the crisis, volatilities and correlations rose sharply, making LTCM's portfolio much riskier than standard, historically based models would have suggested before the crisis. There are, of course, other reasons for LTCM's problems, and many more subtleties and caveats surrounding the episode, as Richard Bookstaber discusses in his presentation.
To close this discussion of VAR on a more positive note, one of the key advantages of any VAR-based approach to risk measurement is that it focuses on the portfolio and not on individual trades. As such, VAR introduces directly into the process an appreciation of correlations and the power of diversification as a risk-reducing tool. Single-position risk-estimation techniques, such as option pricing, do not have this ability, and as a result, they miss the power of correlation analysis that is captured automatically in VAR approaches.
The Geometry of Risk and Correlation

One of the factors that is absolutely critical to effectively integrating risk measurement techniques into the investment process is using the intuition provided by different risk measures to gain insights into managing risks. Some interesting visual aids can help greatly in the process of converting risk measurements into judgment and intuition. Over the years, Ronald Layard-Liesching, the research director at Pareto Partners, has explored the ways that mathematicians can display volatility and correlation in a spatial context. At Brinson Partners, much valuable work has been done on making risks and correlations more intuitive through the use of simple geometry, as Brian Singer illustrates in his presentation.

A number of intuitive ideas can be developed from some simple geometrical concepts. Basically, one visualizes volatility (such as standard deviation) as length (inches) and correlation as the angle between lines. Putting two portfolios together in this way generates an intuitive visualization of the power of correlations. For example, if an investor buys $100 of the S&P 500 as the benchmark portfolio and then adds an overlay portfolio, what is the total risk of the two portfolios? Suppose the risk of the S&P 500 portfolio is estimated at 16 percent. (It has been measured somewhat lower in the recent bull market of 1993-1999.) Now, consider three different overlay portfolios. The first is a $50 hedge (short S&P 500 futures, for example). The correlation between the benchmark and the hedge is -1.0 (a perfect hedge). The investor can draw a line 16 inches long for the benchmark and then retrace (an angle of 0 degrees) 8 inches to represent the 50 percent hedge. The total risk is 8 percent, or 8 inches. The second overlay involves leverage, in which the investor borrows $50 and invests that $50 in the S&P 500 as well. The correlation between the benchmark and the leveraged overlay is 1.0 (perfect). The investor can draw a line 16 inches long to represent the benchmark and then keep drawing for another 8 inches to represent the overlay. The angle is 180 degrees, and the total risk is 24 percent, or 24 inches.

For the more interesting case of market neutrality, the overlay is constructed with $50 of supporting capital and 8 percent of risk in a way that makes it likely that the overlay will have zero correlation with the S&P 500. Zero correlation is represented by an angle of 90 degrees. The investor draws a line 16 inches long to represent the benchmark and then, at the end of that line, draws a perpendicular line 8 inches long to represent the market-neutral overlay. The line that completes the right triangle (i.e., the hypotenuse) represents the total risk of the combined portfolios. By using the Pythagorean Theorem, the investor can calculate total risk as the square root of the sum of 16 squared plus 8 squared. That is, total risk is the square root of 320 (256 + 64), or 17.89 percent. Note that the 8 percent risk overlay adds only 1.89 percent of additional risk to the total risk of the combined portfolios relative to the 16 percent risk of the benchmark portfolio. If the excess return potential of the overlay portfolio has any kind of reasonable risk-return trade-off (information ratio, that is), then the incremental excess return potential is a bargain relative to the incremental risk in the total (combined) portfolio.

By contrast, if the overlay portfolio has a correlation of 0.71 with the benchmark, the investor will need to draw a line at 135 degrees (45 degrees for a -0.71 correlation) that extends total risk in a way that is much more similar to the leveraged example. In this case, the investor will see some total risk reduction relative to the leverage example, but not very much, especially when compared with the market-neutral overlay example. Unfortunately, most U.S. equity managers try to outperform the S&P 500 by running a portfolio of under- and overweights relative to the S&P 500 that is highly correlated with the index, and they fail to take advantage of the risk-reducing properties of designing market-neutral overlays. As more and more investors understand and appreciate the intuition offered by the geometry of risk, they may well start to demand that their asset managers actively measure the correlation of the implicit overlay portfolio to the benchmark portfolio and, eventually, demand that managers design portfolios that ex ante are expected to have zero or negative correlations to the benchmark yet still earn a reasonable excess return for the risks being taken.
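The geometric constructions above all reduce to one formula: combining two "risk vectors" with volatilities s1 and s2 and correlation rho gives a total volatility of sqrt(s1^2 + s2^2 + 2*rho*s1*s2), the law of cosines in disguise. A sketch reproducing the numbers in the text:

```python
import math

def combined_risk(sigma_bench: float, sigma_overlay: float, rho: float) -> float:
    """Volatility of benchmark plus overlay, treating each as a vector
    whose length is its volatility and whose relative angle encodes the
    correlation (law of cosines)."""
    return math.sqrt(sigma_bench**2 + sigma_overlay**2
                     + 2.0 * rho * sigma_bench * sigma_overlay)

print(combined_risk(16.0, 8.0, -1.0))  # 50% hedge: 8.0
print(combined_risk(16.0, 8.0, 1.0))   # leverage: 24.0
print(combined_risk(16.0, 8.0, 0.0))   # market neutral: ~17.89
print(combined_risk(16.0, 8.0, 0.71))  # correlated overlay: ~22.4
```

The market-neutral case is where the geometry pays off: an 8-percent overlay adds under 2 points of total risk to a 16-percent benchmark, while the 0.71-correlated overlay adds more than 6.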
Managing Operational Risks

Although much risk measurement and risk management attention is focused on the investment process, asset management companies probably face at least as much risk in terms of how they manage their complex and interrelated fiduciary and regulatory responsibilities. In actuality, asset management companies are relatively complex organizational structures. This complexity can be appreciated when one tries to trace the whole process from making an investment decision to reconciling the trade, reporting results to clients, marketing to clients, and meeting the huge burden of regulatory reporting and complex rules for different types of investment structures. Indeed, using the terms of industrial organization theory, asset management companies face both "interactive complexity" (i.e., they have a high degree of interrelated processes) and "tight coupling" (i.e., many important processes are linked in a critical path to each other).
Both of these traits, when they occur in concentrated forms, are known to increase the probability of operational risks. In turn, operational risks, much more than poor investment performance, can put the asset management company in legal or regulatory jeopardy. Robert McLaughlin discusses some of the legal issues of being a fiduciary institution. From a practical point of view, asset management companies can take a number of steps to reduce the severity of fiduciary and operational risks. The first step is to simplify internal reporting lines, where possible, taking care to tightly link responsibility with accountability and with decision-making authority. These organizational and cultural changes help reduce the operational risks of complex organizations. In addition, asset management companies can learn something from the corporate finance literature concerning the use of incentives in corporations. Indeed, asset management companies can learn many lessons from corporate finance theory, some of which Charles Smithson discusses. One of the interesting subjects for examination is the use of incentive compensation. In the corporate world, some evidence exists that bonuses related to stock options can increase risk-taking incentives relative to cash or stock payments. This idea makes some sense, given that high volatility raises the value of a given option contract, everything else being equal. Another key element that links operational risks and compensation is the willingness of a company to invest in human capital. If asset management companies make a commitment to training all their employees (in the back office, in client service, and so on), not just portfolio managers, then the employees will get a keen sense of support from the parent institution and understand that they are not just earning a salary but that the company is investing in their personal future. Operational risk benefits are derived from employee education as well.
For example, the better the back-office personnel understand the investment process, the more likely they will catch trading errors and more efficiently reconcile trades and notice mistakes in net asset value calculations. In short, operational risk management starts with developing straightforward business processes, but it also includes how the company compensates and invests in its employees, with money and with training.
Risk Management to Enhance Returns

Risk management has gone through a number of phases of development in the asset management
world. Phase one focused on risk measurement, not risk management. At first, risk measurement tools were not integrated into the portfolio-construction process, but as the clients of asset management services increasingly focused on the excess returns being earned relative to the risks being taken (i.e., the information ratio), a strong desire emerged to move into phase two: integrating risk management into the investment process. Two authors in this proceedings share some special insights on going beyond risk measurement to making risk management part of both the portfolio-construction and the overall asset-liability management process. One approach is to look at practical implementation issues, which are discussed by Jacques Longerstaey. A very different perspective is provided from the pension fund point of view by Desmond Mac Intyre, who looks explicitly at the experience of General Motors Investment Management Corporation in administering its retirement benefit programs. One point that emerges in almost every discussion of risk management is that taking some of the useful, but always flawed, tools of risk measurement
and integrating them into the investment process requires judgment. Risk measurers are notorious for avoiding explicit judgments, but even risk measurement tools have critical built-in assumptions that simplify the world, sometimes in very misleading ways. Risk managers who are actively involved in the risk-return calculus that is the essence of constructing efficient portfolios know that forward-looking judgment is always going to be a key element. The message is that, as asset managers, we should not be focused on the elusive search for the Holy Grail of risk measurement: the one number that summarizes our total risk position. Instead, we must focus on making sure we pay attention to all the components of the portfolio process: excess return expectations, risk estimates, and correlation estimates. Historically, we may have spent too much time on returns, not enough time on risk, and not remotely enough time thinking about the implications of correlations. If we can get more balance into our investment process, then there is a strong likelihood that we can effectively use risk management to enhance excess returns relative to the risks being taken.
A Framework for Understanding Market Crisis
Richard M. Bookstaber
Head of Risk Management, Moore Capital Management
The key to truly effective risk management lies in the behavior of markets during times of crisis, when investment value is most at risk. Observing markets under stress teaches important lessons about the role and dynamics of markets and the implications for risk management.
© 1999 Richard M. Bookstaber

No area of economics has the wealth of data that we enjoy in the field of finance. The normal procedure when using these data is to throw away the outliers and focus on the bulk of the data, which we assume will contain the key information and relationships we want to analyze. That is, if we have 10 years of daily data (2,500 data points), we might throw out the 10 or 20 data points that are totally out of line (e.g., the crash of 1987, the problems in mid-January 1991 during the Gulf War) and use the rest to test our hypotheses about the markets. If the objective is to understand the typical day-to-day workings of the market, this approach may be reasonable. But if the objective is to understand the risks, we would be making a grave mistake. Although we would get some good risk management information from the 2,490 data points, that information would produce a risk management approach that works almost all the time but fails when it matters most. This situation has happened many times in the past: Correlations that looked good on a daily basis suddenly went wrong at exactly the time the market was in turmoil; value at risk (VAR) numbers that tracked fairly well day by day suddenly had no relationship to what was going on in the market. In the context of effective risk management, what we really should do is throw out the 2,490 data points and focus on the remaining 10, because they hold the key to the behavior of markets when investments are most at risk. This presentation considers the nature of the market that surrounds those outlier points, the points of market crisis. It covers the sources of market crisis and uses three case studies: the equity market crash
of 1987, the problems in the junk bond market in the early 1990s, and the recent problems at Long-Term Capital Management (LTCM), to illustrate the nature of crisis and the lessons for risk management. This presentation also addresses several policy issues that could influence the future of risk management.
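The point about outliers can be made concrete with a small simulation. The return series below is entirely invented: roughly 2,490 "quiet" days plus 10 hypothetical crisis-day moves.

```python
import random
import statistics

random.seed(7)

# Hypothetical daily returns: 2,490 "normal" days of ~1% volatility,
# plus 10 crisis-day outliers of the kind usually thrown away.
normal_days = [random.gauss(0.0, 0.01) for _ in range(2490)]
crisis_days = [-0.20, -0.10, -0.08, 0.07, -0.06, 0.09, -0.12, -0.05, 0.06, -0.07]
all_days = normal_days + crisis_days

vol_trimmed = statistics.stdev(normal_days)  # risk estimate with outliers removed
vol_full = statistics.stdev(all_days)        # risk estimate with outliers kept

# The trimmed estimate describes the typical day well but says nothing
# about the worst day, which is where the risk actually lives.
print(vol_trimmed, vol_full, min(normal_days), min(all_days))
```

A risk model fit to the trimmed series would look accurate day by day and then fail exactly on the days that resemble the 10 discarded points.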
Sources of Crisis

The sources of market crisis lie in the nature and role of the market, which can best be understood by departing from the mainstream view of the market.

Market Efficiency. The mainstream academic view of financial markets rests on the foundation of the efficient market hypothesis. This hypothesis states that market prices reflect all information. That is, the current market price is the market's "best guess" of where the price should be. The guess may be wrong, but it will be unbiased; it is as likely to be too high as too low. In the efficient market paradigm, the role of the markets is to provide estimates of asset values for the economy to use for planning and capital allocation. Market participants have information from different sources, and the market provides a mechanism that combines the information to create the full-information market price. Investors observe that price and can plan efficiently because they know, from that price, all of the information and expectations of the market. A corollary to the efficient market hypothesis is that, because all information is already embedded in the markets, no one can systematically make money trading without nonpublic information. If new public information comes into the market, the price will
instantaneously move to its new fair level before anybody can make money on that new information. At any point in time, just by luck, some traders will be ahead in the game and some will be behind, but in the long run, the best strategy is simply to buy and hold the overall market. I must confess that I never felt comfortable with the efficient market approach. As a graduate student who was yet to be fully indoctrinated into this paradigm, I could see many simple features of the market that did not seem to fit. Why do intraday prices bounce around as much as they do? The price of a futures contract or a stock moves around much more than one would expect from new information coming in. What information could possibly cause the price instantaneously to jump two ticks, one tick, three ticks, two ticks second by second throughout the trading day? How do we justify the enormous overhead of having a continuous market with real-time information? Can that overhead be justified simply on the basis of providing the marketplace with price information for planning purposes? In the efficient market context, what kind of planning would people be doing in which they had to check the market and instantly make a decision on the basis of a tick up or down in price?

Liquidity and Immediacy. All someone has to do is sit with a broker/dealer trader to see that more than information is moving prices. On any given day, the trader will receive orders from the derivative desk to hedge a swap position, from the mortgage desk to hedge out mortgage exposure, and from clients who need to sell positions to meet liabilities. None of these orders will have anything to do with information; each one will have everything to do with a need for liquidity. And the liquidity is manifest in the trader's own activities.
If inventory grows too large and the trader feels overexposed, the trader will aggressively hedge or liquidate a portion of the position, and will do so in a way that respects the liquidity constraints of the market. A trader who needs to sell 2,000 bond futures to reduce exposure does not say, "The market is efficient and competitive, and my actions are not based on any information about prices, so I will just put those contracts in the market and everybody will pay the fair price for them." If the trader puts 2,000 contracts into the market all at once, that offer obviously will affect the price, even though the trader has no new information. Indeed, the trade would affect the market price even if the market knew the trader was selling without any informational edge.
The principal reason for intraday price movement is the demand for liquidity. A trader is uncomfortable with the level of exposure and is willing to pay up to get someone to take the position. The more uncomfortable the trader is, the more the trader will pay. The trader has to pay up because someone else is getting saddled with the risk of the position, someone who most likely did not want to take on that position at the existing market price; otherwise, that person would already have gone into the market to get it. This view of the market is a liquidity view rather than an informational view. In place of the conventional academic perspective, in which the market is efficient and exists solely for informational purposes, this view holds that the role of the market is to provide immediacy for liquidity demanders. The globalization of markets and the widespread dissemination of real-time information have made liquidity demand all the more important. With more market information disseminated to a wider and wider set of participants, less opportunity exists for trading based on an informational advantage, and the growth in the number of market participants means there are more incidents of liquidity demand. To provide this immediacy for liquidity demanders, there must be market participants who are liquidity suppliers. These liquidity suppliers must have free cash available, a healthy risk appetite, and risk management capabilities, and they must stand ready to buy and sell assets when a participant demands that a transaction be done immediately. By accepting the notion that markets exist to satisfy liquidity demand and liquidity supply, the framework is in place for understanding what causes market crises, which are the times when liquidity and immediacy matter most.

Liquidity Demanders.
Liquidity demanders are demanders of immediacy: a broker/dealer who needs to hedge a bond purchase taken on from a client, a pension fund that needs to liquidate a stock position because it has a liability outflow, a mutual fund that suddenly has inflows of cash that it must put into the index or the target fund, or a trader who has to liquidate because of margin requirements or because of being at an imposed limit or stop-loss level in the trading strategy. In all these cases, the defining characteristic is that time is more important than price. Although these participants may be somewhat price sensitive, they need to get the trade done immediately and are willing to pay to do so. A huge bond position can lose a lot more if the bondholder haggles over getting the right price than if the bondholder just pays up a few ticks to put the hedge on. Traders who have hit their risk limits do not have any choice; they are going to get out, and they are not in a
good position to argue whether or not the price is right or fair. One could think of liquidity demanders as the investors and the hedgers in the market.

Liquidity Suppliers. Liquidity suppliers meet the liquidity demand. Liquidity suppliers have a view of the market and take a position when the price deviates from what they think the fair price should be. To liquidity suppliers, price matters much more than time. They hold cash or inventory and wait for an opportunity in which a liquidity demander's need for liquidity creates a divergence in price. Liquidity suppliers then provide the liquidity at that price. Liquidity suppliers include hedge funds and speculators. Many people have difficulty understanding why hedge funds and speculators exist and why they make money in an efficient market. Their work seems to be nothing more than a big gambling enterprise; none of them should consistently make money if markets are efficient. If they did have an informational advantage, it should erode over time, and judging by their operations, most speculators and traders do not have an informational advantage, especially in a world awash in information. So, why do speculators and liquidity suppliers exist? What function do they provide? Why do, or should, they make money? The answer is that they provide a valuable economic function. They invest in their business by keeping capital readily available for investment and by applying their expertise in risk management and market judgment. They look for cases in which a differential exists between price and value, and they provide the liquidity. In short, they take risk, use their talents, and absorb the opportunity cost of maintaining ready capital. For this function, they receive an economic return. The risk of providing liquidity takes several forms.
First, a trader cannot know for sure that a price discrepancy is the result of liquidity demand. The discrepancy could be caused by information or even manipulation. But suppose somebody waves a white flag and announces that they are trading strictly because of a liquidity need; they have no special information or view of the market and are willing to discount the price an extra point to get someone to take the position off their hands. The trader who buys the position still faces a risk, because no one can guarantee that, between the time the trader takes on the position and the time it can be cleared out, the price will not fall further. Many other liquidity-driven sellers may be lurking behind that one, or a surprise economic announcement might affect the market. The liquidity supplier should expect to make money on the trade, because there is an opportunity
cost in holding cash free for speculative opportunities. The compensation should also be a function of the volatility of the market; the more volatile the market, the higher the probability in any time period that prices will run away from the liquidity suppliers. In addition, their compensation should be a function of the liquidity of the market; the less liquid the market, the longer they will have to hold the position and thus the longer they will be subject to the volatility of the market.

Interaction of Liquidity Supply and Demand in a Market Crisis. A market behaves qualitatively differently in a market crisis than in "normal" times. This difference is not a matter of the market being "more jumpy" or of a lot more news suddenly flooding into the market. The difference is that the market reacts in a way that it does not in normal times. The core of this difference is that market prices become countereconomic. The normal economic consequence of a decline in market prices is that fewer people have an incentive to sell and more people have an incentive to buy. In a market crisis, everything goes the wrong way. A falling price, instead of deterring people from selling, triggers a growing flood of selling, and instead of attracting buyers, a falling price drives potential buyers from the market (or, even worse, turns potential buyers into sellers). This outcome happens for a number of related reasons: Suppliers who were in early have already committed their capital; suppliers turn into demanders because they have pierced their stop-loss levels and must liquidate their holdings; and others find the cost of doing business too high, with widening spreads, increased volatility, and reduced liquidity making the risk-return trade-offs of market participation undesirable. It is as if the market is struck with an autoimmune disease and is attacking its own system of self-regulation. An example of this drying up of supply can be seen during volatility spikes.
Almost every year in some major market, option volatilities go up to a level that no rational person would think sustainable. During the Asian crisis in 1998, equity market volatility in the United States, Hong Kong, and Germany more than doubled. During the European exchange rate crisis in September 1992, currency volatility went up manyfold. During the oil crisis that accompanied the Gulf War, oil volatilities exceeded 80 percent. Volatilities for stocks went from the mid-teens to more than 100 percent in the crash of 1987. Did option traders really think stock prices would be at 100 percent volatility levels during the three months following the crash? Probably not. But the traders who normally would have been available to take the other side of a trade were out of the market. At the very time everybody needed the insurance that options provide and was
willing to pay up for it, the people who could sell that insurance were out of the market. They had already "made their move," risking their capital at much lower levels of volatility, and now were stopped out of their positions by management or, worse still, had lost their jobs. Even those who still had their jobs kept their capital on the sidelines. Entering the market in the face of widespread destruction was considered imprudent, and the cost of entry was (and still is) fairly high. Information did not cause the dramatic price volatility. It was caused by the crisis-induced demand for liquidity at a time when liquidity suppliers were shrinking from the market.

Market Habitat. All investors and traders have a market habitat where they feel comfortable trading and committing their capital, where they know the market, have their contacts, have a feel for liquidity, know how the risks are managed, and know where to look for information. The habitat may be determined by an individual's risk preferences, knowledge, experience, time frame, and institutional constraints, and by market liquidity. Investors will roam away from their habitat only if they believe incremental returns are available to them. Someone who is used to trading in technology stocks will need more time for evaluation, and a better opportunity, to take a position in, say, the automotive sector than in the more familiar technology sector. Nowadays, the preferred market habitat for most investors and traders is expanding because of low barriers to entry and easy access to information. Anyone can easily set up an account to trade in many markets, ranging from the G-7 countries to the emerging markets. Anyone can get information, often real-time information, on a wide variety of bonds and stocks that used to be available only to professionals. The days of needing to call a broker to check the price of a favorite stock now seem a distant memory.
More information and fewer barriers to entry expand habitat. Higher levels of risk also tend to expand habitat. The distinction among assets blurs as risk increases. In addition, market participants become more like one another, which means that liquidity demanders all demand pretty much the same assets and grab whatever sources of liquidity are available. This situation is characterized in the market as "contagion," but in my view, what is happening is an expansion of habitat because the risk of the market has made every risky asset look pretty much the same. If all investors are in the same markets, they will run into trouble at the same time and will start liquidating the same markets to get financing and reduce their risks. Think of how the investor's focus shifts as the investor moves from a normal market environment
to a fairly energetic market environment, and then to a crash environment. In a normal market, investors have time to worry about the little things: the earnings of this company versus that company, P/Es, dividends, future prospects, and who is managing what. As the energy level goes up in the market, investors no longer have the luxury of considering the subtleties of this particular stock or that stock. They need to concentrate on sectors. If the technology sector is underperforming, all technology stocks look the same. If oil prices go up, an oil company's management and earnings prospects no longer matter; all that matters is that the company is in the energy sector. Turn the heat up further to a crash environment, and all that participants care about is that it is a stock and that they can sell it. All stocks look the same, and the correlations get close to 1.0, because the only characteristic that matters is that this asset is a stock or, for that matter, is risky. In fact, the situation can get even worse; junk bonds may be viewed as similar enough to stocks that they trade like stocks. The analysis and market history of the normal market environment no longer apply. The environment is different; the habitat has changed. An analogy from high-energy physics helps to illustrate the situation. As energy increases, the constituents of matter blur. At low energy levels (room temperature), molecules and atoms are distinct and differentiated. As energy goes up, the molecules break apart and what is left are the basic building blocks of matter, the elements. As energy goes up even more, the atoms break apart and plasma is left. Everything is a diffuse blob of matter. As the energy of the market increases, the same transformation happens to the constituents of the market.
In a market crisis, all the distinct elements of the market, the stocks (e.g., IBM and Intel), the market sectors (e.g., technology and transportation), and the assets (e.g., corporate bonds and swap spreads), turn into an undifferentiated plasma. Just as in high-energy physics, where all matter becomes an undifferentiated "soup," in the high-energy state of a market crisis, all assets blur into undifferentiated risk. One of the most troubling aspects of a market crisis is that diversification strategies fail. Assets that are normally uncorrelated suddenly become highly correlated, and all the positions go down together. The reason for the failure of diversification is that in a high-energy market, all assets in fact are the same. The factors that differentiate them in normal times are no longer relevant. What matters is no longer the economic or financial relationship between assets but the degree to which they share habitat. What matters is who holds the assets. If mortgage derivatives are held by the same traders as Japanese swaps, these
two types of unrelated assets will become highly correlated, because a loss in the one asset will force the traders to liquidate the other. What is most disturbing about this situation is not that the careful formulation of an optimized, risk-minimizing portfolio comes to naught but that there is no way to determine which assets will be correlated with which other assets during a market crisis. That is, not only will diversification fail to work at the very time it is most critical, but determining the way in which it will fail will be impossible. Liquidity demanders use price to attract liquidity suppliers, which sometimes works and sometimes does not. In a high-risk or crisis market, a drop in prices actually reduces supply and increases demand. This is the critical point that participants must look for. Unfortunately, most people never know how thin the ice is until it breaks. Most people did not see any indications in the market in early October 1987 or early August 1998 that made them think they were on thin ice and that a little more weight would dislocate the market and turn prices into an adverse signal. Of course, the indications seem obvious after the fact, but it should suggest something about the complexity of the market that these indications are missed until it is too late. For example, option prices, particularly put option prices, were rising before the crash of 1987. After the crash, this phenomenon was pointed to as an indicator that there was more risk inherent in the market and more demand for protection. In the month or so before Long-Term Capital Management (LTCM) had its problems, the U.S. swap spread was at its lowest volatility level in a decade. This low volatility demonstrated a lack of liquidity and commitment in the swap market. In the case of the 1987 market crash, the missed indicator was high volatility; in the case of the LTCM crisis, the missed indicator was low volatility.
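The claim that correlations approach 1.0 when a common liquidation factor dominates can be sketched with a toy two-regime simulation; the factor structure and every number below are hypothetical.

```python
import random

def pearson(xs, ys):
    """Sample Pearson correlation coefficient of two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

random.seed(1)

# Normal regime: two unrelated assets driven by independent idiosyncratic factors.
a_normal = [random.gauss(0, 0.01) for _ in range(500)]
b_normal = [random.gauss(0, 0.01) for _ in range(500)]

# Crisis regime: a common "forced liquidation" factor dominates both assets,
# with only small idiosyncratic noise left over.
common = [random.gauss(-0.01, 0.03) for _ in range(500)]
a_crisis = [c + random.gauss(0, 0.005) for c in common]
b_crisis = [c + random.gauss(0, 0.005) for c in common]

print(pearson(a_normal, b_normal))  # near zero
print(pearson(a_crisis, b_crisis))  # near one
```

The normal-regime history gives no hint of the crisis-regime correlation, which is the sense in which diversification fails in ways that cannot be predicted from the data.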
Case Studies

Three case studies help to demonstrate the nature of market crises: the equity market crash of 1987, the junk bond crisis, and the LTCM default.

1987 Equity Market Crash. The market crash of 1987 occurred on Monday, October 19. But it was set up by the smaller drop of Friday, October 16, and by the reaction to that drop from a new and popular strategy: portfolio insurance hedging. Portfolio insurance is a strategy in which a manager overlays a dynamic hedge on top of the investment portfolio in order to replicate a put option. Operationally, the hedge is reduced as the portfolio increases in value and increased as the portfolio declines in value. The hedge provides a floor to the
portfolio, because as the portfolio value drops beyond a prespecified level, the hedge increases to the point of offsetting future portfolio declines one for one. The selling point for portfolio insurance is that it provides this floor protection while retaining upside potential by systematically reducing the hedge as the portfolio rises above the floor. This hedging strategy is not without cost. Because the hedge is being reduced as the portfolio rises and increased as the portfolio drops, the strategy essentially requires buying on the way up and selling on the way down. The result is a slippage or friction cost, because the buying and selling happen in reaction to the price moves; that is, they occur slightly after the fact. The cumulative cost of this slippage can be computed mathematically using the tools of option-pricing theory; it should be about the same as the cost of a put option with an exercise price equal to the hedge floor. The key requirement for a successful hedge, and especially a successful dynamic hedge, is liquidity. If the hedge cannot be put on and taken off, then obviously all bets are off. Although liquidity is not much of a concern if the portfolio is small and the manager is the only one hedging with a particular objective, it becomes a potential nightmare when everyone in the market has the same objective, which in a nutshell is what happened on October 19. On Monday morning, October 19, everybody who was running a portfolio insurance program looked at the computer runs from Friday's market decline and saw that they had to increase their hedges. They had to short out more of their exposure to the market, and the hedging instrument of choice was the S&P 500 Index futures contract. Shortly after the open on October 19, the hedges hit the S&P pit. Time mattered and price did not; once their programs were triggered, the hedge had to be increased and an order was placed at the market price.
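The sell-on-the-way-down mechanics described above can be sketched with the Black-Scholes delta of the replicated put. The floor, volatility, and rate below are illustrative assumptions, not the parameters of any actual 1987 program.

```python
from math import log, sqrt, erf

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def put_hedge_ratio(spot, floor, vol, t, r=0.05):
    """Fraction of the portfolio to short in order to replicate a protective
    put struck at `floor` (the Black-Scholes put delta, sign flipped).
    All parameter values used below are hypothetical."""
    d1 = (log(spot / floor) + (r + 0.5 * vol ** 2) * t) / (vol * sqrt(t))
    put_delta = norm_cdf(d1) - 1.0  # between -1 and 0
    return -put_delta               # short this fraction of the exposure

# As the market falls from 100 toward a floor of 90, the required short
# hedge grows: the strategy mechanically sells into a declining market.
for spot in (100, 95, 90, 85):
    print(spot, round(put_hedge_ratio(spot, 90.0, 0.20, 0.25), 2))
```

The monotone growth of the hedge as the spot falls is exactly the feedback loop in the text: each price drop triggers more selling, which drops the price further.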
And a lot of programs were triggered. Portfolio insurance was first introduced by LOR (Leland O'Brien Rubinstein) in 1984, and portfolio insurance programs were heavily and successfully marketed to pension funds, which overlaid tens of billions of dollars of equity assets. The traders in the S&P pit are very fast at execution. When someone wants to sell a position at the market, a trader in the pit will buy it immediately. Once the market maker takes the position, the market maker will want to take the first opportunity to get rid of it. The market makers on the floor make money on the bid-offer spread (on turnover) and not by holding speculative positions. Among the sources they rely on to unload their inventory are program traders and cash futures arbitrageurs. The program
traders and arbitrageurs buy S&P contracts from the futures pit while selling the individual stocks that make up the S&P 500 on the NYSE. If the price of the basket of stocks differs from the price of the futures by more than the transaction costs of doing this trade, then they make a profit. This trade effectively transfers the stock market activities of the futures pit to the individual stocks on the NYSE. It is here that things broke down in 1987, and they broke down for a simple reason: Although the cash futures arbitrageurs, program traders, and market makers in the pit are all very quick on the trigger, the specialists and equity investors who frequent the NYSE are not so nimble. The problem might be called "time disintermediation." That is, the time frame for being able to do transactions is substantially different between the futures market and the equity market. This situation is best understood with a stylized example. Suppose that you are the specialist on the NYSE floor for IBM. On Monday morning, October 19, you wait for the market to open. Suddenly, a flood of sell orders comes in from the program traders. You do not have infinite capital. Your job is simply to make the market. So, you drop the price of IBM a half point and wait. Not many buyers are coming, so you drop it a full point, figuring now people will come. Meanwhile, suppose I am an investment manager in Boston who is bullish on IBM, and I am planning to add more IBM to my portfolio. I come in, glance at the screen, and see that IBM is down a half point. After coming back from getting some coffee, I check again; IBM is now down a full point. The price of IBM looks pretty good, but I have to run to my morning meeting. Half an hour has gone by, and you and the other specialists are getting worried. A flood of sell orders is still coming in, and nowhere near enough buyers are coming in to take them off your hands.
Price is your only tool, so you drop IBM another point and then two more points to try to dredge up some buying interest. By the time I come back to my office, I notice IBM is down four points. If IBM had been down a half point or a full point, I would have put an order in, but at four points, I start to wonder what is going on with IBM-and the market generally. I decide to wait until I can convene the investment committee that afternoon to assess the situation. The afternoon is fine for me, but for you, more shares are piling into your inventory with every passing minute. Other specialists are faced with the same onslaught, and prices are falling all around you. You now must not only elicit buyers, but you must also compete with other stocks for the buyers' capital. You drop the offer price down 10 points from the open.
The result is a disaster. The potential liquidity suppliers and investment buyers are scared off by the higher volatility and wider spreads. And, more important, the drop in price is actually inducing more liquidity-based selling as the portfolio insurance programs trigger again and again, increasing their selling to add to their hedges. So, because of time disintermediation and the specialists' insufficient capital, the price of IBM drops too quickly, the suppliers are scared off, and the portfolio insurance hedgers demand even more liquidity than they would have otherwise. This IBM example shows basically what happened in the crash of 1987. Demand for liquidity moved beyond ignoring price and focusing on immediacy to actually increasing as a function of the drop in price because of the built-in portfolio insurance rules. Supply dried up because of the difference in time frames between the demanders and suppliers, which led prices to move so precipitously that the suppliers took the drop as a negative signal. The key culprit was the difference in the trading time frames between the demanders and the suppliers. If the sellers could have waited longer for the liquidity they demanded, the buyers would have had time to react and the market would have cleared at a higher price.

1991 Junk Bond Debacle. Junk bonds, or more euphemistically, high-yield bonds, were the mainstay of many corporate finance strategies that developed in the 1980s. The best-known use of high-yield bonds was in leveraged buyouts (LBOs) and hostile takeovers. Both of these strategies followed the same course over the 1980s. They started as good ideas that were selectively applied in the most promising of situations. But over time, more and more questionable deals chased the prospect of huge returns, and judgment was replaced with avarice.
The investment banks played more the role of cheerleader than of advisor, because they stood to gain no matter what the long-term outcome, and they had a growing brood of investment banking mouths and egos to feed. The size of the average LBO transaction peaked in 1987. But deal makers continued working to maintain their historical volumes even as the universe of leverageable companies declined. Volume was maintained in part by lowering the credit quality threshold of LBO candidates. The failed buyout of United Airlines in 1989 is one example of this situation, because airlines are cyclical and previously had not been considered good candidates for a highly levered capital structure. Leverage in the LBOs also increased over the course of the 1980s. Cash flow multiples rose from the 5x range in 1984 and 1985 to the 10x range in 1987 and 1988. This
increase turned out to be fatal for many companies. An earnings shortfall that is manageable at 5 times cash flow can lead to default if the investors pay 10 times cash flow. Although LBOs moved from larger to smaller deals, hostile takeovers went after bigger game as time went on. The RJR debt of nearly $10 billion represented approximately 5 percent of the high-yield market's total debt outstanding. Many institutions had limitations on the total amount of exposure they could have to any one name, which became a constraint given the size of the RJR issues. The justification for hostile takeovers was that, starting in the mid-1970s, the market value of companies was less than their replacement cost. Thus, after a hostile takeover, the acquirer could sell off the assets and inventories for more than the cost of buying the company holding those assets. The activity of hostile takeovers, and possibly the threat of further takeovers, woke the market up to the disparity between the market value and the replacement cost of companies' assets, and the gap closed by 1990. The arbitrage plays implicit in hostile takeovers led to an improvement of market efficiency in textbook fashion, and the raison d'être for the hostile takeovers disappeared. But the hope for financial killings remained and led to continued demand for the leverage of high-yield bonds as ammunition to bag the prey. The following scenario summarizes the life cycle of LBOs and hostile takeovers. With these financial strategies still virgin territory, and with the first practitioners of the strategies the most talented and creative, the profits from the first wave of LBOs and hostile takeovers made headlines. More investors and investment bankers entered the market, and credit quality and potential profitability were stretched in the face of the high demand for high-yield financing.
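The fatal arithmetic of the higher purchase multiples can be sketched with hypothetical round numbers; the 90 percent debt share and 12 percent coupon below are assumptions for illustration, not figures from the source.

```python
def interest_coverage(current_cash_flow, purchase_multiple,
                      initial_cash_flow=100.0, debt_share=0.9, rate=0.12):
    """Cash flow available per dollar of interest due, for a deal struck at
    `purchase_multiple` times the initial cash flow and financed
    `debt_share` with debt at `rate` (both hypothetical)."""
    debt = purchase_multiple * initial_cash_flow * debt_share
    interest = debt * rate
    return current_cash_flow / interest

# The same 30 percent earnings shortfall: cash flow falls from 100 to 70.
cov_5x = interest_coverage(70.0, 5)    # deal struck at 5x cash flow
cov_10x = interest_coverage(70.0, 10)  # deal struck at 10x cash flow
print(round(cov_5x, 2), round(cov_10x, 2))
```

At 5x the shortfall still leaves interest covered (coverage above 1.0); at 10x the same operating result cannot service the debt (coverage below 1.0), which is the mechanism behind the default wave described in the text.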
Rising multiples were paid for LBOs and were accepted in hostile takeovers because of both the higher demand for financing and the increase in equity prices. The stretching into lower-quality deals and the higher multiples paid for the companies led to more defaults. The defaults hit the market even harder than the earlier LBO and hostile takeover profits had. Within a few short months, high-yield bonds were branded as an imprudent asset class. In 1991, the high-yield bond market was laid to waste. Bond spreads widened fourfold, and prices plummeted. The impact of the price drop was all the more dramatic because, even though the bonds were not investment grade, investors had some expectation of price stability. The impact on the market was the same as having the U.S. stock market drop by 70 percent. As with the 1987
stock market crash, the junk bond debacle was not the result of information but of a shift in liquidity. In 1991, the California Insurance Commission seized Executive Life. The reaction to this seizure was many-faceted, and each facet spelled disaster for the health of the market. Insurance companies that had not participated in the high-yield bond market lobbied for stricter constraints on high-yield bond holdings. It is difficult to know whether this action was taken in the interest of securing the industry's reputation, avoiding liability for the losses of competitors through guaranty funds, stemming further failures (such as Executive Life), or meeting the threat of further insurance regulation. Insurance companies were anxious to stand out from their competitors in their holdings of high-yield bonds and featured their minimal holdings of junk bonds as a competitive marketing point. A number of savings and loans (S&Ls) seized on the high-yield market as a source of credit disintermediation. Federal deposit guarantees converted their high-risk portfolios into portfolios that were essentially risk free. The S&L investors captured the spread between the bond returns and the risk-free return provided to the depositors. That this situation was a credit arbitrage at the government's expense became clear in the late 1980s. The government responded with the Financial Institutions Reform, Recovery and Enforcement Act in 1989. This act not only barred S&Ls from further purchases of high-yield bonds, but it also required them to liquidate their high-yield bond portfolios over the course of five years. The prospect of the new regulation and the stiffening of capital requirements by the Federal Home Loan Bank Board led S&Ls to reduce their holdings even in early 1989 by 8 percent, compared with an increase in holdings in the previous quarter of 10 percent. Investors reacted quickly to the weakness in the high-yield bond market.
In July 1989, high-yield bond returns started to decline, eventually turning negative. For investors who did not understand the risk of high-yield bonds, the realization of negative returns must have been a rude wake-up call. Over the third quarter of 1989, the net asset value of high-yield mutual funds declined by as much as 10 percent. The erosion of principal, coupled with media reports of the defaults looming in the high-yield market, led to widespread selling. As with any other financial market, the junk bond market had both liquidity suppliers and liquidity demanders. Some poor-quality junk bonds made it to the market, which caused some investors who normally would have been suppliers of liquidity to spurn that market because it was considered imprudent. Consequently, financing was reduced. These
Risk Management: Principles and Practices

people then had financial problems, which seemed to demonstrate that junk bonds were imprudent and drove more people out of the market. So, the liquidity suppliers who had been willing to take on the bonds became liquidity demanders. They wanted to get rid of their junk bonds, and the more the price dropped, the more they wanted to get rid of them. Junk bonds were less than 5 percent of most holders' portfolios, so owning junk bonds was not going to ruin an entire portfolio, but the managers holding them could have lost their jobs. Suddenly, suppliers were disappearing and turning into demanders. The price drop created the wrong signal; it made the bonds look worse than they actually were. The junk bond crash of 1991 was precipitated by several junk-bond-related defaults. But the extent of the catastrophe came from liquidity, not default. Institutional and regulatory pressure accentuated the need for many junk bond holders to sell, and to sell at any price. Because the usual liquidity suppliers were themselves now needing to sell, not enough capital was in the market to absorb the flow. The resulting drop in bond prices, rather than drawing more buyers into the market, actually increased the selling pressure, because the lower prices provided confirmation that high-yield bonds were an imprudent asset class. Regulatory pressure and senior management concerns, not to mention losses on existing bond positions, vetoed what many traders saw as a unique buying opportunity.

1998 LTCM Default. Long-Term Capital Management is a relative-value trading firm. Relative-value trading looks at every security as a set of factors and finds, within that set, a factor that is mispriced between one security and another. The manager then tries to hedge out all the other factors of exposure so that all that is left is long exposure to the factor in one security and short exposure to the factor in the other security.
One security is cheaper than the other, so the manager makes money when the prices converge. Ideally, in relative-value trading, the positions should be self-financing so that the manager can wait as long as necessary for the two prices to converge. If a spread takes, say, three years to converge, that is no problem if the position is self-financed. The most common relative-value trading is spread trading. Spread trading is attractive because all that matters is the relative value between the two instruments. This approach has great advantages for analytically based trading because it is easier to determine whether one instrument is mispriced relative to another instrument than to determine whether an instrument is correctly priced in absolute terms. A relative-value trader can still get it right even after making an erroneous assumption, so long as that assumption
affects both instruments similarly. Another advantage of relative-value trading is that a relative-value trade is immune to some of the most unpredictable features of the market. If a macroeconomic shock hits the market, it will affect similar instruments in a similar way. Although both instruments might drop in price, the relative value of the two may remain unaffected. One of the problems of relative-value trading, and of working with spread trades in particular, is that the spreads between instruments are typically very small, a direct result of trading two very similar instruments whose prices differ only slightly. To put on this risk, and thereby get double-digit expected returns, the relative-value trader is usually highly leveraged, even though in the end the dollar risk may be the same as that of an outright trade. Relative-value trading has other problems as well. First, these very big positions are hard to liquidate, and the newer, less-liquid markets are usually the very markets that exhibit the spread discrepancies. Yet these are also the markets where experience is limited and observers have not seen the risks played out over and over. Second, in a relative-value trade, the manager requires price convergence between the two assets in a spread position. Sooner or later that convergence should take place, but the manager does not know when and thus may have a long holding period. Third, because of the myriad risks and small spreads, the modeling in relative-value trading has to be very precise; if a manager is $10 billion long in one instrument and $10 billion short in another and the model is off by 1 percent, the manager stands to lose a lot of money. In terms of relative-value trading at LTCM, the traders were doing such things as buying LIBOR against Treasuries, so they were short credit risk. They were buying emerging market bonds versus Brady bonds and mortgages versus Treasuries.
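The precision point above can be made concrete with toy arithmetic (the position sizes and spread are illustrative assumptions): in a spread trade the two legs are huge relative to the spread being captured, so a small valuation error swamps the expected gain.

```python
# Toy arithmetic (hypothetical sizes): the legs of a spread trade are large
# relative to the spread captured, so small modeling errors produce large
# dollar losses.

notional = 10_000_000_000    # $10B long one instrument, $10B short the other
spread_bp = 10               # expected convergence: 10 basis points
expected_gain = notional * spread_bp / 10_000
error_loss = notional * 0.01  # being off by 1 percent on relative value

print(f"Gain if the 10 bp spread converges: ${expected_gain:,.0f}")
print(f"Loss from a 1% valuation error:     ${error_loss:,.0f}")
```

Under these assumptions, a 1 percent error costs ten times what the spread convergence would have earned, which is why the modeling has to be very precise.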
While they had the trades on, they decided to reduce their capital. In the early part of 1998, LTCM returned nearly $3 billion of capital to its investors, reducing its capital base from about $7 billion to a little more than $3 billion. Normally, LIBOR, Treasuries, and mortgages (the markets that LTCM invested in) are very liquid. The liquidity that the traders at LTCM had, however, was lower than what they expected for several reasons, some completely unanticipated. Even in a normal market environment, if a trader is dealing in really large size, the market is not very liquid; if the trader starts to sell, nobody wants to buy, because they know there is a lot more supply where that came from. LTCM's real problems, however, started on July 7,
1998. On that day, the New York Times ran a story that Salomon Smith Barney was closing its U.S. fixed-income proprietary trading unit. Even though I was the head of risk management at Salomon, I did not know this decision had been made. I certainly questioned the move after the fact on several grounds: the proprietary trading area at Salomon was responsible for virtually all the retained earnings of Salomon during the previous five years. Furthermore, this was an announcement that no trader would ever want made public. Closing the trading unit meant that Salomon's inventory would probably be thrown into the market. If Salomon was closing its proprietary trading area in the United States, it probably would do so in London as well. So, the logical assumption was that Salomon's London inventory would be coming into the market too. The result was that nobody would take the other side of that market; who wants to buy the first $100 million of $10 billion of inventory knowing another $9.9 billion will follow? Salomon should have quietly reduced its risk and exposure. Once the risk and exposure were down and inventory was low, Salomon could have announced whatever it wanted. As it was, the nature of the announcement worked to dampen demand in the market, which did not bode well for LTCM. Another event that was not favorable for LTCM occurred in August 1998, when Russia started to have problems. LTCM, like everybody else, had exposure to Russia. The result was that LTCM had to liquidate assets because its cash reserve was gone. Liquidating assets is only a big deal when nobody wants the assets. Not only did nobody want the assets because of the glut of inventory resulting from the closing of Salomon's proprietary trading units; they now did not want the assets because they knew LTCM was selling because it had financial problems and because they did not know how deep LTCM's inventory was.
At the time LTCM was demanding immediacy, liquidity suppliers did not exist in the market. To make matters worse, LTCM was itself a major liquidity supplier in the market. LTCM was providing the other side of the market for people who wanted to hedge out their credit exposure in various instruments. The reason LTCM was making money was that it was supplying liquidity. It was providing a side of the market that people needed. Once LTCM was gone, not many other people were left. And those who were left were not going to stay in the face of this huge overhang of supply. So, when LTCM had to sell, a market did not exist for its positions, because LTCM was the market. LTCM's selling drove the price down enough so that, just as in the case of portfolio insurance, LTCM had to sell even more. LTCM did manage to sell some of its positions but at such low prices
that when it marked its remaining holdings to market, they had dropped enough to require even more margin and thus even more selling. So, a cycle developed, and as the spreads widened, anybody who might have provided liquidity on the other side was unwilling to. If people had had more time, the downward cycle would have been halted; someone would have taken the assets off LTCM's hands because the assets were unbelievably mispriced, not only in their price levels but in totally inconsistent directions. How could fixed-income instruments in Germany have almost historically low volatility while LIBOR instruments in the United Kingdom had historically wide spreads? The issue was strictly one of liquidity and immediacy; buyers simply were not there quickly enough. Many things have been written about LTCM, some of which are not very favorable to the principals of the firm. But the fact is that the principals are among the brightest people in finance. They have done relative-value trading longer than anybody else on Wall Street. The failure of LTCM says more about the inherent risk and complexity of the market than it does about LTCM; the market is sufficiently complex that even the smartest and most experienced can fail. Who would have anticipated a closing of U.S. fixed-income proprietary trading at Salomon? Who would have anticipated that this closing would be revealed in a public announcement? Who would have anticipated the speed and severity of the Russian debacle hard on the heels of the Salomon announcement? It is that very complexity that the risk analysis models failed to capture.
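The forced-selling cycle described above can be sketched as a small simulation. All parameters here are stylized, made-up numbers, not a model of LTCM's actual book: selling into a one-sided market depresses the price, the remaining holdings are marked down, the mark-to-market loss reopens the margin shortfall, and the shortfall forces more selling.

```python
# A minimal sketch (stylized parameters) of a mark-to-market liquidation
# spiral: each sale moves the price against the seller, the remaining
# position is marked down, and the resulting loss triggers more selling.

def liquidation_spiral(price, units, equity, margin=0.05, impact=0.02, rounds=10):
    """Return (price, units, equity) after repeated forced sales."""
    for _ in range(rounds):
        required = margin * units * price          # equity required to hold position
        if equity >= required:
            break                                  # margin satisfied; spiral stops
        shortfall = required - equity
        units_sold = shortfall / (margin * price)  # sell just enough to meet the call
        units -= units_sold
        new_price = price * (1 - impact)           # the sale pushes the price lower
        equity -= units * (price - new_price)      # remaining holdings marked down
        price = new_price
    return price, units, equity

p, u, e = liquidation_spiral(price=100.0, units=1_000.0, equity=4_000.0)
print(f"price fell to {p:.1f}, position shrank to {u:.0f} units, equity {e:.0f}")
```

With these assumed parameters, each round's mark-to-market loss reopens the shortfall, so the position keeps shrinking at ever-lower prices rather than stabilizing after the first margin call.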
Lessons Learned

These market crises share some common elements that can teach all of us important lessons about risk management. First, it is not just capital that matters. What matters is the willingness to put that capital into the market, to commit capital at times of crisis and high risk. During the LTCM crisis, if somebody had been willing to commit capital at a time when the spreads were at unbelievably wide levels, the crisis would have been averted. I was in charge of risk management at Salomon Smith Barney at the time of this crisis and encouraged, unsuccessfully as it turned out, a more aggressive position in the market. Salomon Smith Barney was in a position to stay in these spread trades, because the firm had sizeable capital and, through its proprietary trading group, more expertise on staff than anybody else in the world. (Remember that LTCM was dubbed "Salomon North" because the bulk of its talent came from
Salomon, but Salomon retained an exceptional talent for relative-value trading even after John Meriwether and others left the firm.) Nevertheless, in spite of its far stronger capital position and its trading expertise, Salomon Smith Barney was just as quick to get out of the market as LTCM. So, what matters is not just capital or expertise. What matters is capital and expertise and the willingness to use that capital at the time the market really needs liquidity. Second, speculative capital is needed to meet liquidity demand. Either the markets must slow down to allow people more time to respond to the demand for immediacy, or more participants must enter the markets who can act quickly and meet that immediacy. In the crash of 1987, circuit breakers would have slowed things down so that the portfolio insurance programs could have triggered at a pace that the traders in New York and elsewhere could have matched. Or on the futures side, more speculators with capital could have made the market and held onto those positions. Or on the stock exchange side, specialists with more capital and staying power could have held onto the inventory until the stock investors had gotten settled for the day. Third, the markets must have differentiated participation. As the financial markets become more integrated, there is increasing focus on systemic risk, the risk inherent in the institutions that make up the financial system. A nondifferentiated ecosystem has a lot of systemic risk. One little thing goes wrong, and everything dies. Complexity and differentiation are valuable because if one little thing goes wrong, other things can make up for it. Systemic risk has its roots in the lack of differentiation among market participants. Modern portfolio theory focuses on the concept of diversification within a portfolio, which is fine in a low-energy market.
As a market moves to a high-energy state and habitats expand, what matters is not so much diversification among asset classes but diversification among market participants. If everything I hold is also held by other market participants, all of whom have the same sort of portfolio and risk preferences that I have, I am not diversified. In a low-energy state, this lack of diversification will not be apparent, because prices will be dictated by macroeconomics and firm performance. As the market moves to a high-energy state, things change. What matters then is which assets look like which other assets based on the liquidity demanders and suppliers who will be dumping assets into the market. So, in a low-energy state, I am well diversified, but in a high-energy state, everything goes against me, because what matters now is not what the assets are but the fact that they are pure risk and that they are all held by the same sort of people.
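The low-energy versus high-energy distinction can be sketched numerically. In this stylized simulation (all parameters are made-up assumptions), two assets with independent fundamentals look well diversified in a calm market, but when a common liquidation shock from shared holders dominates both, their correlation jumps toward one.

```python
# Stylized sketch (made-up parameters): two assets with independent
# fundamentals are uncorrelated in a calm market, but a common forced-selling
# shock hitting both makes them move together.

import random

def pearson(xs, ys):
    """Plain Pearson correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

random.seed(7)

# Calm ("low-energy") regime: returns driven by independent fundamentals.
calm_a = [random.gauss(0, 1) for _ in range(500)]
calm_b = [random.gauss(0, 1) for _ in range(500)]

# Crisis ("high-energy") regime: a common liquidation shock dominates both.
shocks = [random.gauss(-1, 1) for _ in range(500)]
crisis_a = [0.3 * random.gauss(0, 1) + s for s in shocks]
crisis_b = [0.3 * random.gauss(0, 1) + s for s in shocks]

print(f"calm-market correlation:   {pearson(calm_a, calm_b):+.2f}")
print(f"crisis-market correlation: {pearson(crisis_a, crisis_b):+.2f}")
```

The calm-regime correlation is near zero while the crisis-regime correlation is close to one, illustrating why Markowitz-style diversification can evaporate when the same holders are forced to dump everything at once.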
Finally, Wall Street has experienced a lot of consolidation (Citigroup and Morgan Stanley Dean Witter, for example). Big firms are sensitive to institutional and political pressure; they have to go through many checks and sign-offs and thus are slow to react. The habitat is becoming less diverse, and more systemic failures are occurring because everybody looks the same and is holding the same assets. Big firms never seem to be as risk taking as their smaller counterparts. When two firms merge, the trading floor does not become twice as large. The trading floor stays about the same size as it was before the two firms merged, but the total risk-taking capability is about half of what it was before. In fact, the situation gets even worse, because two firms do not merge into one big firm in order to become a hedge fund. Firms merge in order to conduct retail, high-franchise business. Risk taking becomes less important, even somewhat of an annoyance. Although with consolidation the firm has more capital and more capability to take risk, it is less willing to take risk.
Policy Issues

The markets are changing, and thus, risk management must change along with them. But often, changes resulting from reactions to market crises create more problems than they solve. Policy issues surrounding transparency, regulation, and consolidation could dramatically affect the future of risk management.
Transparency. The members of the LTCM bank consortium (the creditors of LTCM that took over the firm in September 1998) complained that they were caught unaware by the huge leverage of the hedge fund. Reacting to the losses and embarrassment they faced from the collapse, some of the consortium members entered the vanguard for increased transparency in the market. They argued that the only way to know whether another LTCM is lurking is by knowing their trading clients' positions. The issue of hedge fund transparency may deserve a fuller hearing, but opaqueness was not the culprit for LTCM. A simple back-of-the-envelope calculation would have been sufficient to demonstrate to the creditors that they were dealing with a very highly leveraged hedge fund. The banks, and everyone else in the professional investment community, knew that LTCM's bread-and-butter trading was swap spreads and mortgage spreads. Everyone also knew that on a typical day, these spreads move by just a few basis points (a few one-hundredths of a percent). Yet historically, LTCM generated returns for its investors on these trades of 30 percent or more. The only way to get from 5 or 10 basis points to 30 or 40 percent is to lever more than 100 to 1.
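That back-of-the-envelope calculation can be written out. The sketch below uses the simplifying assumption that the return on capital comes from a single capture of the quoted spread move (real trades compound many captures and carry, so this is an upper bound on the implied leverage, not a reconstruction of LTCM's balance sheet):

```python
# Back-of-the-envelope arithmetic (illustrative assumption: the whole return
# comes from capturing the spread move once): small spread captures require
# enormous leverage to produce a 30 percent return on capital.

def implied_leverage(target_return, spread_capture_bp):
    """Notional-to-capital ratio needed for a spread capture (in basis
    points) to produce the target return on capital."""
    return target_return / (spread_capture_bp / 10_000)

for bp in (5, 10):
    lev = implied_leverage(0.30, bp)
    print(f"{bp} bp captured -> {lev:.0f}:1 leverage for a 30% return")
```

Even at a generous 10 bp capture, the implied leverage is in the hundreds to one, comfortably above the "more than 100 to 1" the text says any creditor could have inferred.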
If the banks were unable to do this simple calculation, it is hard to see how handing over reams of trading data would have brought them to the same conclusion. Often in trading and risk management, it is not lack of information that matters; it is lack of perceiving and acting on that information. Indeed, looking back at the major crises at financial institutions (whether at Barings Securities; Kidder, Peabody & Co.; LTCM; or DBS), finding even one case in which transparency would have made a difference is hard. The information was there for those who were responsible to monitor it. The problem was that they either failed to look at the information, failed to ask the right questions, or ignored the answers. Indeed, if anything, the LTCM crisis teaches us that trading firms have good reasons for being opaque. Obviously, broadcasting positions dissipates potential profit because others try to mirror the positions of successful firms, but it also reduces market liquidity. If others learn about the positions and take them on, fewer participants will be in the market ready to take the opposite position. Also, if others know the size of a position and observe the start of liquidation, they will all stand on the sidelines; no one will want to take on the position when they think a flood of further liquidation is about to take place. Transparency will come at the cost of less liquidity, and it is low liquidity that is at the root of market crisis.

Regulation. Regulation is reactive. It addresses problems that have been laid bare but does not consider the structure that makes sense for the risks that have yet to occur. And indeed, by creating further rules and reporting requirements to react to the ever-increasing set of risks that do become manifest, regulation may actually become counterproductive by obscuring financial institutions' view of the areas of risk that have yet to be identified.
At some point, the very complexity of the risk management system gets in its own way and actually causes more problems than it prevents. We are not at that point yet in the financial markets, but some precedent exists for this phenomenon in other highly regulated industries, such as airlines and nuclear energy. The thing to remember is that every new risk management measure and report required by regulation is not only one more report that takes limited resources away from other, less well-defined risk management issues; it is also one more report that makes risk managers more complacent in thinking they are covering all the bases.

Consolidation. I have already discussed the implications of consolidation for risk taking. With every financial consolidation, the capacity of the market to take risk is reduced. Large financial supermarkets and conglomerates are created to build franchise, not to enhance risk taking. Consolidation also increases the risk of the market, especially the risk of market crisis. The increase in risk occurs because the market becomes less differentiated. A greater likelihood exists that everyone will be in the same markets at the same time and will share the same portfolios. The investment habitat becomes less diverse. The drop in habitat diversity from financial consolidation looks a lot like the drop in retail diversity that has occurred as interstate highways and mass media have put a mall in every town and the same stores in every mall. Whether in food, clothing, or home furnishings, regional distinctions are disappearing. "The malling of America" is creating a single, uniform retail habitat. Coming soon will be "the malling of Wall Street." Broker/dealers are consolidating into a small set of investment "superstores." On the investor side, more and more investors are taking advantage of ready access to information and markets, but along with this information advantage comes a convergence of views among investors (particularly the retail or individual investors) because the information sources are all the same. When the Glass-Steagall Act was passed, in all likelihood Congress did not have in mind diversifying the ecosystem of the financial markets. Glass-Steagall created a separation between different types of financial institutions in order to protect investors. The separation and resistance to certain types of consolidation are still needed but now for another reason: to maintain a diverse habitat. The goal of any Glass-Steagall-type reform should be to maintain different types of risk takers. It should encourage differentiation among financial market participants so that if one liquidity supplier is not supplying liquidity in a particular adverse circumstance, another one is, thus helping to prevent or minimize a full-blown crisis.
Some people think of speculative traders as gamblers who earn too much money and provide no economic value. But to avoid crises, markets must have liquidity suppliers who react quickly, who take contrarian positions when doing so seems imprudent, who search out unoccupied habitats and populate them to provide the diversity that is necessary, and who focus on risk taking and risk management. By having and fostering this differentiated role of risk taking, market participants will find that crises become less frequent and less severe, with less onerous consequences for risk management systems. Hedge funds, speculative traders, and market makers fill this role.
Question and Answer Session
Richard M. Bookstaber

Question: Could you discuss the U.S. Federal Reserve's role in the LTCM crisis?

Bookstaber: Other solutions could probably have been found if more time had been available. The Fed could have waited until things worked out, but the Fed took another course because it perceived a time of real financial crisis. These were the major financial markets of the world, and if something had not been done, the situation could have been much worse. It was already much worse from a systemic standpoint than the crash of 1987, but from the perspective of most individual investors, the crisis was behind the scenes because it dealt with esoteric instruments. For the financial marketplace, however, these were the primary financial instruments. The Fed has taken a lot of heat for its activist role, but in that position, you have to step up and do what you think is right even if you have to explain afterwards. It is a mark of courage and perspicacity on the part of the Fed that it would take the step that was necessary, even if the action was unorthodox and opened the Fed up to criticism. The alternative would have been far worse. At least we have the luxury of debating the propriety of the Fed's actions and whether there was some conflict of interest. I would rather be debating than dealing with the aftermath if nobody had protected these markets.

Question: How do investors protect themselves from the malling of Wall Street and the lack of diversification among participants?

Bookstaber: If you are an individual investor, the malling of
Wall Street probably does not matter quite so much, because your positions are small and you can get out quickly. If you are an institutional investor, you have to start looking at diversification in a different dimension. Low-energy diversification is the Markowitz diversification. High-energy diversification means diversifying among asset classes, among market participants, and among habitats so that if something happens in one area, it is less likely to affect your holdings in other areas. The more that globalization and the malling of Wall Street occur, the harder it is to achieve that high-energy diversification, because Wall Street goes beyond the boundaries of Wall Street or the United States. Capital can flow from anyplace to anyplace else.

Question: If these crises are the result of a time disintermediation between liquidity suppliers and demanders, why don't the markets recover much faster?

Bookstaber: If you think it took a long time for recovery (whether from the crash of 1987, LTCM, or the junk bond crisis, which was a multiyear ordeal), that is, unfortunately, the nature of systemic risk. Recovery could have been much slower and more painful than it was. In a normal market, liquidity demanders are serviced by liquidity suppliers who are in the market, and participation in the market is a function of price. When a cycle is created in which prices do the opposite of what they are supposed to do and suppliers disappear or become demanders themselves, that is a wrenching experience for all concerned, especially those who have not had such
a previous experience. As is the case with any experience that shatters our illusions and causes us to rethink long-held assumptions, recovery comes slowly. If the suppliers had been there at the same time as the demanders, October 19, 1987, would have just been another day, and prices would not have dropped 20 percent. If the suppliers had been there for LTCM so that when LTCM had that first margin call it could have sold at a reasonable price and met the margin, then life would have gone on. Neither scenario happened, and recovery was difficult.

Question: How would you describe your view of risk management?

Bookstaber: I think about the markets as a scientific enterprise rather than an accounting enterprise. Many facets of the markets are accounting oriented, or the mathematical equivalent of accounting; examples include modern portfolio theory and the capital asset pricing model. These accounting-type models are important, but we have to look beyond the simple relationships and resulting output. During the oil crisis in the mid-1970s, the speed limit was dropped to 55 miles per hour. One firm ran this information through its models and discovered that auto insurers would profit from the reduction in the speed limit. We have to learn to make this type of connection between an oil crisis, a lower speed limit, and the decision to buy stock in auto insurance companies. When Chernobyl blew up, a lot of people saw it only as a terrible event, but somebody saw it as an opportunity to buy wheat futures.
Making that kind of connection is easy to do after the fact and does not require deep analytical tools, but it does require a scientific or analytical view of how the world is tied together. Looking at risk management from a scientific perspective is important because the risk that finally hurts most is the risk that
you do not know about. Refining our bread-and-butter measures of risk (VAR, stress tests, and similar tools) will not bring us much closer to uncovering the most critical risks. Granted, they are valuable tools for measuring well-known risks, and they are capable of assessing the likelihood of somebody losing money because a
known market factor, such as interest rates or equity prices, moves precipitously. But what matters most are the risks we do not recognize until they occur; after the fact, it is always easy to say, "I should have known that." The challenge is to try to see the risk ahead of time, to imagine the unimaginable.
Risk Management and Fiduciary Duties

Robert M. McLaughlin¹
Partner
Eaton & Van Winkle
Risk management for firms in the investment profession must address the potential for fiduciary violations, especially in derivative-related activities. Analysis of fiduciary relationships, laws, duties, and court cases provides guidance for minimizing the risk of fiduciary violations.
Examination of the activities of fiduciaries involves, above all, an inquiry into the propriety of profit-making. What is at stake is whether the court should sanction or stigmatize a particular act performed by a businessman in a commercial context.²

It is striking to see contemporary courts . . . haul professional trustees over the coals for investment policies that few financial economists would find exceptional.³

Fiduciary law is a highly compartmentalized, complex field with as many different branches of law as there are types of institutions, investors, and investment managers. Worse yet, a distinctive feature of fiduciary law, especially in its application to derivatives, is its often elusive and unpredictable moral underpinning. When large unexpected losses occur, it is all but inevitable that charges of fiduciary wrongs will follow; indeed, large losses are often construed as invitations to litigation and regulatory enforcement actions. But capital markets depend on risk taking, and when risky economic decisions result in judicial and regulatory responses that are based on attacks against the decision makers for supposed fiduciary violations, the results can be highly destructive. Penalizing business and investment decisions that happen to turn out badly risks stifling economic activity (in addition to being unfair) and thus has the potential for doing more harm than good. It also causes doctrinal confusion and, of significant importance to fiduciaries and their advisors, legal uncertainty.
1 This presentation is adapted from Robert M. McLaughlin, Over-the-Counter Derivative Products: A Guide to Business and Legal Risk Management and Documentation (New York: McGraw-Hill, 1999).
2 Ernest J. Weinrib, "The Fiduciary Obligation," University of Toronto Law Journal, vol. 25 (1975):1-2.
3 Jeffrey N. Gordon, "The Puzzling Persistence of the Constrained Prudent Man Rule," N.Y.U. Law Review (April 1987):52, 66.
An important task, then, is to develop an analytical approach that reduces the legal uncertainty surrounding fiduciary conduct and, by so doing, provides practical guidance to (1) fiduciaries, (2) those who wish to benefit from fiduciary law's protections, and (3) those who are charged with responsibility for administering fiduciary law. In this presentation, I attempt to develop such an approach and thereby offer practical guidance on minimizing the risk of potential fiduciary violations and the risk that the protections afforded by fiduciary law will inadvertently be forfeited. The presentation does so by focusing on
• the structure of fiduciary relationships,
• the function of fiduciary law, which follows directly from that structure - namely, the law protects benefited parties from the risks that arise from their "structural dependence" on their fiduciaries,
• important differences in the way fiduciary and contract law treat written contracts, which can have a direct impact on risk management practices, and
• lessons from the limited derivative-related fiduciary duty case law and from cases on other matters relevant to the supervision - internal and external - of derivative activities.
Note that the presentation will not cover "controls" as such, although the discussion will have definite control implications. Nor will it focus on the fiduciary law of any particular jurisdiction or regulatory regime.
Fiduciary Relationships A threshold point to be made in any discussion of fiduciary relationships, and one often overlooked, is
© 1999 Robert M. McLaughlin
that whether a relationship is or is not legally a fiduciary relationship is a question ultimately decided by courts - and not always according to the parties' intentions. In classifying relationships, courts give great weight to those intentions, especially as set forth in written agreements and evidenced by other facts and circumstances. Courts ordinarily honor, for example, both express disclaimers and express assumptions of fiduciary duties. Yet although the parties' intentions and contractual provisions are important, they are not controlling. Courts readily look beyond the "four corners" of a contract to examine such external factors as equity, public policy, and state-imposed limitations on the parties' capacity and freedom to structure their dealings privately. External factors can readily lead to rulings that defeat the parties' intentions by unexpectedly imposing fiduciary obligations or, in contrast, by rendering the protections of fiduciary law unavailable. Because of courts' willingness to examine external factors in classifying relationships, it can be dangerous for parties to rely solely on the language of their written agreements to determine their legal rights and obligations. The inconclusive nature of written agreements can make it difficult to decide when someone has incurred fiduciary duties and, if so, what the specific content of those duties might be. A functional analysis of fiduciary relationships, on the other hand, helps to reduce that uncertainty and minimize surprises. It does so by focusing on six common elements shared by all fiduciary relationships. Relationship for Provision of Services. The first element of a fiduciary relationship is that it must have been established for the provision of services by one party (the fiduciary) to another party (which, for lack of a better term, I will follow Frankel and call the "entrustor") with respect to property or assets.
Frankel coined the term "entrustor" to connote the two "unifying features of all fiduciary relations."4 First, the root "trust" identifies the substitution function that a fiduciary performs, standing in the entrustor's place as to entrusted matters. Second, "entrust" suggests a delegation of powers to the fiduciary for performing the contemplated services. "Entrustor," although an imperfect term, is more descriptive and less confusing than the available alternatives. The most common, of course, is "beneficiary," but that seems an odd term for describing such diverse persons as general partners, joint venturers, patients of physicians or psychiatrists, clients of lawyers, union members, stockholders, and at times, even bondholders and institutional lenders - all of whom enjoy some fiduciary law protections.
4 Tamar Frankel, "Fiduciary Law," California Law Review (May 1983):800, Note 17.
The principal concern here is with two important categories of fiduciary services: the management of investment portfolios and the management of large-scale, indivisible business enterprises that are funded with the pooled capital of numerous security holders who, in turn, share ownership and risks. Delegated, Discretionary Power. The second element is that a fiduciary receives delegated, discretionary power over some property of the entrustor and it receives that power to enable it to perform the contemplated services. Potential reasons for the delegation are, of course, numerous. An entrustor might, for example, simply prefer not to perform the relevant services for itself. Or perhaps, the entrustor lacks the time, expertise, or facilities to perform those services efficiently. In any event, as discussed below, if the fiduciary is to be able to perform those services properly, the power delegated to it must be discretionary. Moreover, in exercising that power, the fiduciary acts for the benefit of the entrustor as its substitute or "alter ego." Of critical legal importance is that the delegation is not for the fiduciary's benefit but solely to facilitate the performance of the particular services for the entrustor. Thus, the fiduciary has no independent right to use or assume the property or powers for its own benefit; it has no inherent right to share in any investment gain or corporate profit. Any independent right to use the delegated power or property for the fiduciary's benefit must be expressly stated in a contract or otherwise unambiguously provided for. Prohibitive Costs. The third defining element of fiduciary relationships is that the exact actions to be taken by the fiduciary in discharging its responsibilities are subject to so much uncertainty and so many variables that prespecifying those actions would be futile or impractical.
Futility arises when any effort to precisely predetermine the fiduciary's conduct would deprive the fiduciary of the ability to exercise its expertise meaningfully; impracticality occurs when the costs - in terms of time and money - of prespecifying are prohibitive. Portfolio investing and corporate management frequently involve so much risk and uncertainty that it is impossible to dictate in advance the fiduciary's specific behavior without undermining the purpose and benefits of the relationship. Specifying, for example, counterparty credit concentrations and country exposure limits is important and often sufficiently measurable to permit objective verification of the fiduciary's performance. But trying meaningfully to specify, for example, how the fiduciary should react to proposed deal terms prior to any offer being made or how the fiduciary ought to respond
to specific changes in market conditions without knowing in advance what those changes might be at the time is pointless. The difficulties of prespecifying the fiduciary's conduct are exacerbated by the fact that fiduciaries, particularly portfolio managers and executives of financial and industrial companies, are hired not because they are especially honest or trustworthy but precisely because of their knowledge, expertise, and judgment. In restricting the exercise of discretion, an entrustor limits its fiduciary's ability to apply that knowledge and expertise and its independent judgment. "Structural Dependence": Risk of Negligence or Misappropriation. By delegating discretionary
power over their property, entrustors become exposed to the risk that their fiduciaries may exercise that power carelessly or for their own benefit. Here lies the central problem of all fiduciary relationships: Namely, the delegation of discretionary power to the fiduciary to enable it to perform the contemplated services renders the entrustor dependent on its fiduciary for the performance of those services and for the protection of the entrusted property. Furthermore, as discussed below, the entrustor's structural dependence on its fiduciary cannot be satisfactorily alleviated through direct control and monitoring of the fiduciary's performance. Inadequacy of Direct Control and Monitoring. The fifth element of a fiduciary relationship is
that direct control of the fiduciary by the entrustor is so impractical, or costly and inefficient, that it undermines the purpose of the relationship. For example, once entrustors have hired expert money managers or corporate executives, they seldom want, or have the time and ability, to consider, direct, and review every investment decision or business judgment that needs to be made. And when numerous investors pool their capital, potentially serious and costly "collective action" problems arise that hinder any effort to exercise direct control. Consider the traditional trust relationship, for example. Assume that many beneficiaries would benefit from a lawsuit brought to compel the trustee to take a given course of action. But which beneficiary is going to fund the lawsuit, given that the beneficiary's costs and expenses are generally not reimbursable from the trust's assets and that the other beneficiaries will have no legal obligation to share in them? Or consider the fate of a public stockholder who wants to launch a potentially costly proxy battle against an allegedly dishonest management team. Even though all other stockholders could benefit if the allegations are correct and the stockholder is successful, those others will have no direct legal obligation to help fund the proxy contest.
Monitoring is also limited in its effectiveness. Among other things, the effectiveness of any monitoring effort will depend on the quality and frequency of reporting. Unfortunately, the reporting is often too infrequent. Moreover, the information reported is typically prepared either by, or under the supervision of, the fiduciary. Monitoring, as a form of after-the-fact protection, is also flawed in a more unsettling way. In particular, reports about the results of a manager's decisions may say little about the quality of those decisions given the circumstances under which they were made. Good decisions can have bad results, and results can be ambiguous or otherwise difficult to evaluate. A lot of smart, diligent people have lost money following perfectly reasonable investment strategies. In short, outcomes are inconclusive. Alternative Controls. Finally, courts will usually find fiduciary relationships and, therefore, impose fiduciary obligations only if they are convinced that no effective alternative controls, market-based or otherwise, are present to limit the entrustors' dangers of delegation. The most common alternative control is the availability of a trading market that enables an investor to dispose of investments, thereby terminating any potential fiduciary relationship with the issuer's management. Public stockholders, for example, who hold highly liquid shares can sell their stock if they disapprove of management's actions. Corporate directors and officers usually have a powerful self-interest in maintaining high stock values and ought to be reluctant to take actions that are contrary to stockholders' interests. Reliance on market forces alone is of limited use because it works only with securities trading in liquid markets and under favorable market conditions. Entrustors are ordinarily unable to "vote" against management, except at great cost, by selling their securities when the issuer's securities are thinly traded.
Fiduciary Law A functional approach to fiduciary relationships illuminates the major reasons why the law is so concerned with the protection of entrustors. This approach begins by separating the fiduciary's contemplated services from the entrusted property that enables the fiduciary to perform those services. "Extraordinary" Risk of Loss. A risk arises in any fiduciary investment relationship from the fact that the fiduciary's services, even if valuable, are likely to be less valuable than the invested capital. By delegating power over its property to a fiduciary, and by parting with that power, an entrustor exposes itself to a risk of loss that may have nothing
to do with the investment itself; losses may occur solely as a result of the fiduciary's carelessness or misappropriation. Because its capital is at risk, the entrustor's potential loss can be extraordinary and disproportionate to the benefits to be derived from the relationship - such loss may greatly exceed the value of the fiduciary's services. Furthermore, abuse by a fiduciary that leads to investment loss can be exceptionally difficult to detect so long as the fiduciary maintains legitimate and exclusive possession of the entrusted property. "Fiduciary risk" is unlike market or investment risk, from which one might reasonably expect to incur losses from time to time as perhaps an unavoidable cost of generating profits; entrustors do not enter into business or investment relationships expecting their fiduciaries occasionally to misappropriate funds or invest carelessly. An entrustor may, then, be caught off guard by its fiduciary's abuse, with little realistic ability to protect itself. Function of, and Limitations on, Fiduciary Law. The function of fiduciary law is to protect
entrustors from the disproportionate, extraordinary risks inherent in the structure of fiduciary relationships. The law endeavors to protect the entrustor from the risk of loss resulting from a fiduciary's potential carelessness with, or misappropriation of - often likened to embezzlement - entrusted property. Properly understood, fiduciary law is neither a guarantee that a portfolio or business will never incur significant losses nor an assurance that a portfolio or business will perform as expected. The law is sophisticated enough to recognize that even sound portfolio investment principles and business strategies can produce unexpected losses. Fiduciary law's basic concern is rooted in the strong public policy of promoting socially desirable relationships by affording entrustors legal protections that might otherwise be unavailable, too costly, or impractical to obtain. One of the most compelling public policy rationales for fiduciary law is the phenomenon of specialization: Specialization increases within society the sum of available expert services, which is essential to modern economies because it enhances economic efficiency. Exclusive Benefit Principle. Fiduciary law's one-sided concern, embodied in its so-called exclusive benefit principle, is to protect the entrustor by imposing on the fiduciary mandatory legal obligations - specifically, the duties of loyalty (which essentially means no self-dealing with entrusted property) and care. Fiduciary law is not concerned with how a relationship is established or with the relative sophistication of the parties but only with the structure of the relationship. Although structure may be evidenced by
contractual terms, the law does not require an agreement - written or otherwise - in order to find a fiduciary relationship. Rather, courts can impose fiduciary duties as a matter of law and even contrary to the parties' intentions if they find the requisite delegation of power over an entrustor's property under circumstances that expose the entrustor to extraordinary risks of its fiduciary's misconduct. For that reason, one may be surprised to learn that courts can readily find fiduciary violations by persons who never intended to assume fiduciary obligations. Even a fiduciary's right of compensation and of reimbursement of expenses is designed to protect the entrustor. Without compensation, few people would be willing to act as fiduciaries in most commercial relationships, especially those that are viewed as economically risky. Similarly, the fiduciary's right of reimbursement is designed to ensure that the fiduciary takes all necessary and appropriate steps to perform its services properly. If, for example, immediate action is essential - maybe to preserve capital or take advantage of a fleeting investment opportunity - it is likely to be in the entrustor's interest that the fiduciary not forgo that action simply because of concern over who will bear the expense. Relationship Structure. When courts analyze a relationship's structure to determine whether it gives rise to fiduciary duties, they begin with and look carefully at the express and implied terms of the parties' agreement. One can often infer a relationship's structure from its contractual terms. Nevertheless, as previously noted, the law does not require an express or even an implied agreement by the fiduciary to assume any obligations that are fiduciary in character. As discussed earlier, a delegation is both a grant (to the fiduciary) and a ceding (by the entrustor) of power that renders the entrustor dependent on the fiduciary as to the entrusted property.
Express and detailed restrictions and controls on the fiduciary would be either unrealistic or so costly that they would defeat the purpose of the relationship. Nevertheless, the delegation of power and inability of the entrustor to monitor and directly control the use of that power expose the entrustor to real risk. Of interest in derivative cases is that the dependency that triggers fiduciary duties can arise even with highly sophisticated entrustors who in most other business and investment contexts would not be likely to be the kinds of parties that the law is solicitous of protecting. Most of the major derivative litigation to date has involved otherwise sophisticated institutions, such as Procter & Gamble and Gibson Greetings. Moral Stigma. The entrustor's structural dependence has long been the source of the law's moral
indignation with violations of fiduciary duties. In justifying the imposition of onerous penalties on fiduciaries, modern courts seem to reflexively recite Judge Cardozo's famous 1928 dictum that a fiduciary is "held to something stricter than the morals of the market place. Not honesty alone, but the punctilio of an honor the most sensitive, is then the standard of behavior."5 A finding of a violation implies dishonorable or irresponsible conduct, and a serious moral stigma generally attaches to fiduciaries who are found to have violated their fiduciary duties. In addition to potentially resulting in monetary and other damages, the stigma of fiduciary violations can cause serious reputational harm. Stigmatizing complex financial decisions that happen to turn out badly is deeply disturbing, particularly so with derivatives, where one should expect even perfectly designed and executed risk management programs to incur derivative losses from time to time. Hedging programs that use derivatives often involve joint transactions in which gains on derivatives offset losses on underlying assets, and vice versa. Logically, in a perfectly designed and implemented hedging program, either the derivative or the hedged asset is expected to incur a loss as market conditions change. Punishing fiduciaries simply because the losses incurred in a sound hedging program happened to fall on the derivative side of the joint transaction strikes one as arbitrary and unfair. Even the case Cardozo wrote about betrays the tensions that can arise in judicial attempts to stigmatize complex business decisions. The case involved a real estate venture in which the parties' agreements simply did not contemplate, expressly or otherwise, the events that led to their dispute.
Suffice it to say that one party, a property manager, took advantage of a business opportunity presented to him during the term of the venture without informing his coventurer of that opportunity or affording him a chance to participate in it. Although the manager may have adopted an aggressive business approach, the claim that the plaintiff was in any meaningful way harmed by or structurally dependent on the manager is dubious. And it is unlikely that either party would have expected during the course of their relationship that the relationship was fiduciary in nature. The decision was rendered by a sharply divided court; three of seven judges dissented vehemently, arguing compellingly that nothing in the law or facts before them warranted the imposition of fiduciary duties.
5 Meinhard v. Salmon, 249 N.Y. 458 (1928):464. Emphasis added by author.
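The offsetting mechanics of a hedging program, in which a loss is expected on either the derivative leg or the hedged asset as markets move, can be sketched numerically. The positions, prices, and helper functions below are invented purely for illustration:

```python
# Hypothetical hedged position: long an asset, short an offsetting
# futures contract of equal size (a "perfect" hedge, for illustration only).

def leg_pnl(entry_price: float, exit_price: float, units: float) -> float:
    """Profit or loss on one leg of the joint transaction."""
    return (exit_price - entry_price) * units

def hedged_pnl(entry_price: float, exit_price: float, units: float):
    """P&L of the asset leg, the short derivative leg, and the net position."""
    asset = leg_pnl(entry_price, exit_price, units)        # long the asset
    derivative = leg_pnl(entry_price, exit_price, -units)  # short futures: opposite sign
    return asset, derivative, asset + derivative

# Market falls: the loss lands on the asset leg; the derivative gains.
print(hedged_pnl(50.0, 45.0, 1000))  # (-5000.0, 5000.0, 0.0)

# Market rises: now the loss lands on the derivative leg instead.
print(hedged_pnl(50.0, 55.0, 1000))  # (5000.0, -5000.0, 0.0)
```

As the sketch shows, which leg of a sound hedge shows the loss depends only on which way the market happened to move; judging the derivative leg in isolation would brand a perfectly functioning hedge a failure.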
Fiduciary Law versus Contract Law A meaningful understanding of the essential differences between fiduciary and contract lawparticularly the different weights they give written agreements-is vital to any effort to minimize risks of fiduciary violations. Fiduciary Law. The principal objective of fiduciary law in the economy is to foster beneficial business and investment relationships by protecting entrustors' property rights. Fiduciary law, therefore, imposes substantial restrictions on fiduciaries' freedom to "contract around" or "out of" their duties of care and loyalty. Fiduciary law is particularly aggressive in protecting the interests of passive investors from fiduciaries' opportunistic behavior, especially when those investors are information disadvantaged (i.e., are dependent on their fiduciaries to provide relevant information about their investments). At a minimum, the law subjects any attempt made by fiduciaries to limit the nature and scope of their fiduciary duties to rigorous procedural preconditions. Before giving effect to waivers of fiduciary duties, courts typically demand convincing evidence that waivers, for example, were granted only after full disclosure to the entrustor of all material facts; they also require evidentiary showings that the waivers were knowingly made by entrustors who possessed a realistic ability to refuse to waive. Contract Law. Contract law, on the other hand, stresses the values of personal freedom and autonomy. It assumes that contracting parties are fully capable of looking out for and protecting their own best interests. Thus, contract law places great weight on the terms of the parties' agreement, which the parties are presumed to have freely chosen for themselves. Absent evidence to the contrary, the parties are assumed to have acted in good faith, and each party acts and expects the other to act in his or her own best interest. 
Under a traditional contract law analysis, the state's role is limited to merely ensuring completion of the contract; thus, the only role that a court should play in a contract dispute is to determine and give effect to the parties' actual intentions, based on the express terms of their agreement and any other terms that are implied from the nature of the relationship or transaction. Contract law traditionally does not take the expansive approach to interpreting duties that fiduciary law does. Some commentators and at least one prominent federal court argue that the traditional, narrow approach to interpreting contracts should also be used to interpret fiduciary duties. In particular, the U.S. Court of Appeals for the Seventh Circuit has advocated strenuously for a contractual approach to
fiduciary duties. It has, in effect, asserted that fiduciary duties are merely the equivalent of implied contract terms - that they are in essence gap fillers that merely complete the missing details of an agreement that the parties may not have taken the time themselves to supply. In the court's words: "The fiduciary duty is an off-the-rack guess about what parties would agree to if they dickered about the subject explicitly."6 Market-Based Protections. Whether under a contract law or fiduciary law analysis, courts generally recognize that alternative market mechanisms (e.g., the presence of a liquid secondary market that enables stockholders to exit their investment relationships with minimal transaction costs) may reduce investors' need for expansive legal protections, especially those afforded by traditional fiduciary law. Courts, such as the Seventh Circuit mentioned earlier and Delaware state courts, emphasize that modern economies depend on important and beneficial business and investment relationships. Accordingly, overdeterrence of managerial risk taking threatens those relationships, and punishing managers for economic decisions that are morally inconclusive or ambiguous could kill risk taking by increasing managers' natural risk aversion. Such punishment could also deter qualified candidates from becoming investment managers or corporate officers and directors. Moreover, courts have recently voiced concerns over the disproportionality of imposing liability for corporate losses on officers and directors. In the words of a Delaware court: Given the scale of operation of modern public corporations, this stupefying disjunction between risk and reward for corporate directors threatens undesirable effects. . . . The law protects shareholder investment interests against the uneconomic consequences that the presence of . . . second-guessing would have on director action and shareholder wealth.7
Trust Law versus Corporate Law Trust law and corporate law also differ in their treatment of fiduciary relationships. The differences can add a layer of complexity - beyond that encountered in the tension between fiduciary and contract law - to any examination of the duties business and investment managers might owe to their investors.
6 Jordan v. Duff and Phelps, Inc., 815 F.2d 429 (7th Cir. 1987) (Easterbrook, J.), cert. dismissed, 485 U.S. 901 (1988).
7 Gagliardi v. Trifoods International, Inc., 683 A.2d 1049, 1052 (Del. Ch. July 19, 1996).
Trust Law. Traditional trust law imposes the strongest duties on fiduciaries and encourages the highest degree of risk aversion because the beneficiary's structural dependence is at its greatest in a trust relationship. As mentioned earlier, beneficiaries usually have little meaningful ability to remove trustees, and they are usually not involved in trustee selection. Moreover, trust beneficiaries have little practical ability to exit the relationship without suffering substantial losses. Consider the options available to those who become trust beneficiaries through inheritance: Few have any meaningful ability to break their trusts and acquire direct control of the trust assets. Indeed, the very purpose of the trust is often to prevent the beneficiary from gaining such direct control. Furthermore, when a trust has multiple beneficiaries, those beneficiaries usually face substantial collective-action problems should they ever wish to attack the trustee's decisions. As noted earlier, such beneficiaries are usually not entitled to reimbursement from the trust assets for costs and expenses incurred in attempting to influence the trustee's actions. Historical approach. Trust law throughout the United States used to be - and in a diminishing number of states still is - severely, almost arbitrarily, hostile toward most financial activities involving either investment risk or speculation. Under the old view, which typically goes under the heading of the Prudent Man Rule, investment risk is seen as the one-sided chance of loss, or what many today call "downside risk." Until as late as the 1950s, trust law even labeled investments in common stock as automatically "speculative" and, therefore, impermissible. In fact, the Prudent Man Rule holds that many categories of investments are "imprudent per se.
Accordingly, the exercise of care, skill, and caution would be no defense [to liability] if the property acquired or retained by a trustee or the strategy pursued by a trust was characterized as impermissible."8 That is, until relatively recently, trust law was openly hostile toward most financial activities that involve any kind of investment risk. Until recently, the dominant philosophy underlying trust law was deeply antagonistic toward allowing trustees to offset portfolio gains and losses. It also revealed a near absolute emphasis on ensuring that each individual investment within a portfolio was designed to minimize risk of loss on that investment.
8 Restatement of the Law Third, Trusts, Prudent Investor Rule, as adopted and promulgated by the American Law Institute, Washington, D.C., May 18, 1990 (American Law Institute Publishers, 1992):3-4.
Even the law's express diversification requirement had as its sole purpose "minimizing the risk of large losses." That form of diversification requirement, which also continues in a diminishing number of states, mandated diversification solely for the purpose of minimizing the risk of loss. There was no evident awareness that through diversification, trustees might improve the economic "efficiency" of their portfolios. Ultimately, the restrictions placed on trustees rendered trust law, albeit unintentionally, also antagonistic toward beneficiaries because the beneficiaries bore the ultimate costs - in the form of suboptimal portfolios and unnecessary costs and expenses - of their trustees' inability to rely on modern portfolio theory and investment techniques. Note that it is prudent today for any trustee or other fiduciary who manages risky investments, whether or not including derivatives, to determine whether the law that governs the investment relationship is the old Prudent Man Rule. If so, certain categories of instruments, such as derivatives, may be deemed imprudent per se. In that case, even the exercise of care, skill, and caution will afford no defense against losses if either an investment made or strategy pursued is characterized as impermissible. Recent approach. By 1990, a dramatic shift had taken place in the law of many jurisdictions. Trust investment law, through a new Prudent Investor Rule, began explicitly to recognize and accept modern investment principles. In particular, it now views risk in the modern sense of two-sided uncertainty of outcomes, comprising both "upside" and "downside" exposures. The new law incorporates modern portfolio theory into its diversification requirement, and it treats no investment strategy or technique, including those that use derivatives, as automatically prohibited.
The reason for the shift is a growing recognition that "prudent risk management is concerned with more than ... the loss of dollar value. It takes account of all hazards that may follow from inflation, volatility of price and yield, lack of liquidity, and the like."9 The new Prudent Investor Rule expressly refrains from classifying any investment or technique as imprudent in the abstract. Instead, it attempts to provide the law with a measure of generality and flexibility and thus attempts to free trustees from rigid and arbitrary investment constraints. In jurisdictions that have adopted the Prudent Investor Rule, the prudence of a trustee's conduct will be analyzed based on a more informed assessment of all relevant facts and circumstances.
9 Id., at General Comment e.
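The portfolio-level, two-sided view of risk that the Prudent Investor Rule embraces can be sketched with a small volatility calculation showing why diversification improves a portfolio's economic efficiency. The weights, volatilities, and correlation below are invented purely for illustration:

```python
import math

def portfolio_volatility(weights, vols, corr):
    """Volatility (standard deviation) of a two-asset portfolio:
    sigma_p^2 = (w1*s1)^2 + (w2*s2)^2 + 2*w1*w2*s1*s2*rho."""
    w1, w2 = weights
    s1, s2 = vols
    variance = (w1 * s1) ** 2 + (w2 * s2) ** 2 + 2 * w1 * w2 * s1 * s2 * corr
    return math.sqrt(variance)

# Two risky assets, each with 20% annual volatility, imperfectly correlated.
solo = 0.20
diversified = portfolio_volatility((0.5, 0.5), (solo, solo), corr=0.3)
print(round(diversified, 4))  # 0.1612 -- lower than either asset held alone
```

Under the old Prudent Man Rule, each 20%-volatility asset could be condemned in isolation; the portfolio view shows that holding both together is less risky than holding either alone whenever they are not perfectly correlated.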
Some commentators have observed that the new rule "liberates" sophisticated trustees by expressly sanctioning the use of popular investment techniques and instruments and eliminating per se, or automatic, liability. In fact, the new rule makes successfully attacking a trustee's conduct more difficult than under the old rule. Given the new rule's focus on overall portfolio strategy and the complexity of any assessment of a trustee's performance, courts should be more reluctant under the new rule to conclude that a trustee has acted imprudently. One important lesson emerging from the new Prudent Investor Rule is that in assessing the prudence of a trustee's conduct, courts must now as a practical matter focus more closely on process than results. Central factors that are likely to be considered in most litigation over fiduciaries' investment decisions are the manner in which the fiduciary has documented its activities and whether the fiduciary can demonstrate its conformity to agreed-on investment guidelines. Compliance with fiduciary standards is now to be judged ... not with the benefit of hindsight or by taking account of developments that occurred after the time of a decision to make, retain, or sell an investment. The question of whether a breach of trust has occurred turns on the prudence of the trustee's conduct, not on the eventual results of investment decisions.10
The new Prudent Investor Rule clarifies that in delegating investment authority, a trustee must exercise care, skill, and caution in (1) selecting a suitable delegee, (2) establishing the scope and terms of the delegation, (3) periodically reviewing the delegee's compliance with the scope of the delegation, and (4) controlling the costs of the delegation. ERISA, the U.S. federal Employee Retirement Income Security Act governing the trustees of U.S. corporate pension funds, likewise accommodates derivative activities, at least as interpreted by the U.S. Department of Labor. The DOL wrote in 1996 that "Investments in derivatives are subject to the fiduciary responsibility rules in the same manner as are any other plan investments."11

Corporate Law. For several reasons, corporate law's fiduciary duties are more lenient than those imposed by trust investment law, but the most important reason is that stockholders, at least in public

10 Id., at General Comment b.
11 Letter dated March 21, 1996, from Olena Berg, U.S. Department of Labor, addressed to Hon. Eugene A. Ludwig, Comptroller of the Currency; see also Lynch v. J.P. Stevens & Co., Inc., 758 F.Supp. 976, 1013 (D.N.J. 1991), in which the court observed, in dicta, that nothing in the allegations before it suggested that an investment of plan assets in futures and options was unlawful.
companies, are usually far less structurally dependent on their fiduciaries than are trust beneficiaries. Stockholders of most public companies can usually sell their securities and exit the relationship relatively easily, and they enjoy voting rights within the corporation. They can also seek judicial dissolution of the corporation. Moreover, stockholders usually enter into their corporate investment relationships voluntarily. Finally, as noted earlier, minimizing the "stupefying disjunction between risk and reward" is a judicially recognized countervailing policy arguing against the imposition of strict duties on officers and directors. Consequently, courts have developed the so-called Business Judgment Rule, a rebuttable presumption that in making a business decision management "acted on an informed basis, in good faith, and in the honest belief that the decision was in the best interest of the company and its shareholders."12 When successfully invoked, the rule provides officers and directors with a near absolute shield against liability. Because of the rule's protections, officers and directors are rarely held liable (absent evidence of self-dealing or conflicts of interest) for breaches of the duty of care. Most recent discussions and judicial analyses of the Business Judgment Rule, however, offer confusing guidance in understanding how the rule would likely be applied in derivative contexts. The bulk of recent corporate fiduciary duty litigation has arisen in the context of heated takeover battles in which strong arguments are typically made that management, particularly in rejecting takeover proposals, acted out of its own self-interest (i.e., out of a desire to perpetuate itself in office) rather than the interests of the company and its stockholders. Decisions made by management out of self-interest are not protected by the Business Judgment Rule.
The reason that most takeover precedent cases are of limited use here is that derivative cases seem less likely than takeover battles to raise questions of managerial self-interest.
Fiduciary Duties

Ultimately, courts decide the presence and extent of any fiduciary duties, typically based either directly or indirectly on state law. No overarching federal fiduciary law exists. The U.S. Securities and Exchange Commission (SEC) has attempted, in effect, to create such a law, but the U.S. Supreme Court and a number of federal circuit courts have repeatedly refused to allow the SEC's view to prevail. (Nevertheless, the SEC does rely on a number of fiduciary issues and fiduciary theories to support the enforcement actions

12 Aronson v. Lewis, 473 A.2d 805, 812 (Del. Supr. 1984).
it brings involving alleged violations of federal securities laws.) In addition, although ERISA does impose fiduciary duties on trustees of corporate pension funds, the principles applicable to those trustees are primarily those derived from state trust investment law in general.13 Under state trust law, the mere invocation of a fiduciary duty in a contract is insufficient (although it helps) to give rise to fiduciary duties if the relationship fails to display the requisite structural dependence; likewise, an express disclaimer is insufficient to avoid the establishment of fiduciary duties if that dependence exists. Courts generally (an important exception appears to be the U.S. Court of Appeals for the Seventh Circuit) follow the approach of New York's highest court, which has stated that "Mere words will not blind us to realities." Because various approaches to both disclaimers and assumptions of fiduciary duties have been advocated by some industry groups, examining briefly a couple of prominent examples may be useful.

• Assumption example. Consider Risk Standard 1: Acknowledgment of Fiduciary Responsibility, contained in the Risk Standards for Institutional Investment Managers and Institutional Investors, published in 1996 and prepared by a working group of 11 individuals from the institutional investment community under the technical guidance of Capital Market Risk Advisors. The Risk Standards offer guidelines that institutional investors and investment managers may use in their own risk management activities. Risk Standard 1 asserts that "Fiduciary responsibilities should be defined in writing and acknowledged in writing by the parties responsible." As discussed earlier, most courts would likely give effect to an express and detailed written acknowledgment of fiduciary duties, such as those suggested in the Risk Standards.
Nevertheless, that acknowledgment should not end one's analysis: A court may, depending on the structure of a relationship and any relevant external factors (such as equity or public policy), disagree with the parties' definitions of those fiduciary responsibilities. Perhaps more important, the mere absence of an express acknowledgment of fiduciary duties, such as that suggested in Risk Standard 1, should not be interpreted as conclusive proof that a manager has

13 See, for example, First National Bank of Chicago v. A.M. Castle & Company Employee Trust, 1999 U.S. App. LEXIS 11891 (7th Cir., June 9, 1999), in which, where the trustee was an ERISA fiduciary, the court found that general trust investment law principles applied "rather than anything special to either the regulation of national banks or to ERISA." But cf. Rice v. Rochester Laborers' Annuity Fund, 888 F. Supp. 494 (W.D.N.Y. 1995): The fiduciary duties established under ERISA are a more stringent version of the Prudent Person Rule than under the state common law of trusts.
not assumed any fiduciary duties: Courts have repeatedly demonstrated that parties do not need a written acknowledgment to have entered into a fiduciary relationship; courts readily find fiduciary relationships and impose fiduciary duties, even without a written acknowledgment, when a relationship displays the requisite structural dependence.

• Disclaimer example. In contrast, the International Swaps and Derivatives Association (ISDA) has published a suggested standard form of nonreliance provision, entitled Representation Regarding Relationships between Parties, which the parties to an ISDA Master Agreement may include as an amendment to their agreement. Clause (c) of that provision, entitled Status of the Parties, contemplates that each party will acknowledge and represent that "the other party is not acting as a fiduciary for or an advisor to it." Thus, this provision acts as an express disclaimer of any fiduciary relationship or duties. The most important question confronting many parties who incorporate the ISDA disclaimer is whether it will be judicially upheld, and most courts would seem likely to uphold it. Nevertheless, those who wish to disclaim fiduciary relationships by means of such a waiver are well-advised to make certain that the structure of the relationship does not suggest that the other party is structurally dependent on it. Otherwise, again, a court could easily ignore the waiver and impose fiduciary duties. A subsidiary question is whether the lack of the disclaimer might be deemed to constitute evidence that the parties to an ISDA Master Agreement have entered into a fiduciary relationship. That conclusion is doubtful, especially under New York law, where courts are loath to find fiduciary duties among parties to business relationships.
Lessons from Case Law

Two fairly recent cases should prove especially revealing for anyone attempting to assess fiduciary duties in the context of derivative activities. Among other things, the cases suggest several key risk management principles applicable to derivatives that fiduciaries and their legal advisors should consider. The first case, Brane v. Roth,14 is an Indiana case in which a trial court held directors liable for breaching fiduciary duties of care in connection with a failed futures hedging program. The second, Caremark,15 demonstrates that to invoke the shield of the Business Judgment Rule, corporate officers and directors must first make a business decision; it also casts some doubt over whether a board's performance of its "oversight" role, absent an identifiable decision, is enough to invoke the rule.

14 590 N.E.2d 587 (Ind. App., 1st Dist., 1992).
15 In re Caremark International Inc. Derivative Litigation, No. 13670 (Del. Ch., Sept. 25, 1996).

Brane v. Roth. Ironically, this is the derivative-related case that seems to have caused the greatest alarm among commentators, even though it properly should have remained fairly obscure. The case resulted from a successful action brought by stockholders of a rural grain elevator cooperative against the co-op's directors for losses the co-op suffered on its grain sales, losses that could have been prevented through adequate hedging. The directors had, in fact, authorized the use of futures to hedge the co-op's grain price exposure. Nevertheless, the co-op failed to hedge; virtually 95-98 percent of its exposures remained unhedged long after the hedging program had been authorized. Although several commentators argue that Brane v. Roth is an anomalous case standing for the proposition that directors at times have a general "duty to hedge," the case stands for no such thing. It is simply a case in which a court reiterated the longstanding precept that the Business Judgment Rule does not protect directors from liability for decisions made on an uninformed basis. (The Brane directors were found not to have bothered to learn the "fundamentals" of hedging with futures and not to have actively supervised the actual hedging that was done.) That is, the decision to hedge must be made on an informed basis, and once a decision to hedge has been made, directors have a duty to supervise the hedging program. The court said that the directors' lack of understanding of hedging rendered them fundamentally unable to rely on the Business Judgment Rule. And for corporate fiduciaries, the inability to rely on the Business Judgment Rule is usually fatal, because in the vast majority of cases, officers and directors who are unable to rely on it are held liable.

Caremark.
Although not a financial derivative case, Caremark is important because it involved a review of director oversight responsibilities in circumstances that do not necessarily call for actions or decisions, circumstances that might easily be found in many derivative cases. The facts in Caremark presented questions as to whether the directors failed to satisfy their so-called duty of attention and whether that duty is somehow legally distinct from the ordinary duty of care. Violations of federal law had occurred deep within the organization, and the plaintiffs claimed that the directors ought to have been held liable for losses that resulted from the failure to prevent those violations. The court, in holding that the directors were not liable, noted that the claimed breach relied on "possibly the most difficult theory in corporation law upon which a plaintiff might hope
to win a judgment." The court found that the directors were simply not liable and had not done anything wrong. Caremark involved the oversight responsibilities of directors under Delaware corporate law, and Delaware is generally perceived as the most director-friendly jurisdiction in the nation. For cases in jurisdictions whose law is arguably less hospitable to corporate fiduciaries, officers' and directors' risk of being held liable may be greater than Caremark suggests. Currently, little case law exists on this issue, at least outside Delaware.

Implications. Brane and Caremark offer guidance on several important risk management issues, particularly as to derivative-related fiduciary duties. First, prudence is process, not results. The Brane court, for example, focused less on the content of the derivative program at issue than on the directors' failure to attempt to learn the fundamentals of that program. Second, before officers and directors (and, for that matter, portfolio managers, trustees, and supervisors) decide to authorize the use of derivatives for risk management purposes, they must possess a sufficiently sound understanding of the fundamentals of the contemplated risk management activity to make an informed judgment as to its suitability under the circumstances. They do not necessarily need to know the details of the mathematics underlying a derivative program or, for example, how to perform scenario analyses or stress testing, but they must have a fundamental sense of the economic logic of the contemplated derivative strategy and of that strategy's objectives.
Third, once the directors and trustees decide on the use of any risk management program, they must actively supervise and monitor its actual use or implementation to make sure it complies with the activities authorized. Finally, despite what some commentators, including one former chair of the Commodity Futures Trading Commission, have read into Brane v. Roth, current case law does not suggest that corporate directors at times have a fiduciary duty to use derivatives to hedge.
Conclusion

The risk of violating fiduciary duties in general, or derivative-related fiduciary duties in particular, need not be disconcerting. There are compelling reasons to conclude that fiduciary duties apply to derivative activities no differently from how they apply to any other commercially significant economic activity. Most doubts to date have arisen because the two main bodies of fiduciary law (state corporate law and the state law of trust investments) offer little specific guidance as to how the duties they establish apply to derivatives. Fortunately, the law of trust investments, through the Prudent Investor Rule, is undergoing a rapid and long overdue modernization that expressly anticipates the use of derivatives. Under corporate law, the shield provided to officers and directors by the Business Judgment Rule should protect most derivative-related risk management activities, provided that they are undertaken on an informed basis and subject to appropriate supervision.
Question and Answer Session
Robert M. McLaughlin

Question: If a firm has state-of-the-art risk management practices, would that firm be judged by a higher standard in a court of law than a similar firm in similar circumstances that had not bothered with risk management?
McLaughlin: I cannot say that the first firm would be held to a higher standard, but it most likely would be expected to use and take advantage of its expertise, especially if the use of that expertise was broadly advertised or otherwise expressly contemplated by investors or entrustors in their agreements with the firm. Keep in mind that the firm without that expertise would also be held to some sort of objective standard. For example, suppose the second firm manages investment portfolios and investors specifically complain about the firm's failure to use sophisticated risk management techniques. If the investors present a compelling case that the firm ought to be using those techniques and that they are readily available, then there is a danger that the failure to do so could give rise to liability for any subsequent losses. In any event, whether under trust law or corporate law, the firm would most likely be held to some sort of standard concerning the process that it undertakes in evaluating the costs and benefits of its existing risk management systems and the proposed new techniques. Prudence is process, and a firm that fails to undertake such an evaluation could easily be seen as failing to exhibit the requisite prudence.

Question: Where are the more friendly (besides Delaware) and less friendly jurisdictions?
McLaughlin: New York, in my view, is a friendly jurisdiction, whether under corporate or trust investment law. For example, the most famous derivative case, Procter & Gamble Company v. Bankers Trust Company,1 was an Ohio federal court case in which the court's decision was based on New York law. In that case, Judge Feikens in essence found that P&G had no reasonable expectation that Bankers Trust would be acting as its fiduciary; in the terminology of the analysis presented earlier, P&G was not structurally dependent on Bankers Trust for its derivative expertise. P&G and Bankers Trust were parties to a business relationship, and Judge Feikens found that under New York law, no fiduciary relationship can arise between parties to a business relationship.2 As such, P&G should have expected to be relying on its own expertise rather than Bankers Trust's, despite Bankers Trust's "superior knowledge in the swaps transactions." So, this aspect of the decision was certainly a welcome one for Bankers Trust and any other sophisticated derivative market participant. As to trust law, New York has also enacted a version of the Prudent Investor Rule that, in my view, is highly protective of sophisticated trustees. In general, the more hostile environments will be those states where the new Prudent Investor Rule has not been implemented.

1 925 F. Supp. 1270 (S.D. Ohio, May 9, 1996).
2 Recently, a New York court asserted that it disagreed with Judge Feikens' analysis, "inasmuch as a confidential relationship may indeed arise between the parties to a business relationship." See Société Nationale d'Exploitation Industrielle des Tabacs et Allumettes v. Salomon Brothers International Limited, QDS: 12101179, New York Law Journal (June 18, 1998): 27.
Question: Do stock rights and warrants require any special documentation?

McLaughlin: They certainly do. Most of the time, stock rights and warrants are contingent equity claims that may fall under state securities laws, SEC rules, and possibly Commodity Futures Trading Commission (CFTC) regulations. Although no discrete fiduciary law exists for documenting stock rights and warrants, standard disclosure and contract rules, SEC administrative rules, and CFTC administrative rules will likely dominate any fiduciary consideration. And state anti-fraud law would also apply.

Question: Do you see any signs that the SEC, the CFTC, or the DOL will be revising its rules to make them more friendly to modern portfolio theory? Is new legislation by the U.S. Congress likely with respect to derivative activities?

McLaughlin: The SEC is the most proactive with respect to bringing regulation into line with modern portfolio theory and investment practices. It has proposed an entirely new system of regulating securities activities, and a recent SEC release has attempted to restructure modern securities law. And although I am not aware of any effort to modify ERISA rules expressly to codify the acceptance of modern portfolio theory, the DOL's benign treatment of derivatives (recall that in the DOL's view ERISA effectively treats a trustee's investment in derivatives in the same way it treats any other plan investment) may imply the DOL's
general acceptance of modern portfolio theory. However, the DOL's hands are somewhat tied by ERISA's statutory standard of care, which adopts the language of the former Prudent Man Rule. That rule is hostile to the modern notion of diversification, the purpose of which is to improve the efficiency of a portfolio by minimizing the risk assumed to generate an expected return or by maximizing expected returns for a specified level of risk. Unfortunately, the applicable ERISA provision, 29 U.S.C.A. § 1104(a)(1)(C), states that the purpose of the diversification requirement is simply "to minimize the risk of large losses"; it does not mention improving portfolio efficiency. In general, an investment manager who is concerned about whether applicable law and regulations follow modern portfolio theory should try to confirm that the jurisdiction governing its activities has expressly adopted some form of the Prudent Investor Rule. Lastly, I do not think Congress is about to pass any new derivative legislation, at least not in the near future. The CFTC has injected some uncertainty into the area by championing an effort to reopen much of the regulatory debate concerning over-the-counter derivatives, but few observers think that effort is likely to succeed over the short run.
A Behavioral Perspective on Risk Management

Andrew W. Lo
Harris & Harris Group Professor of Finance
Massachusetts Institute of Technology's Sloan School of Management
Traditional risk management approaches emphasize statistical and economic considerations. But comprehensive financial risk management should also incorporate the role of human preferences in rational decision making under risk.
Since the market turmoil of August and September 1998, skepticism has undoubtedly increased about the relevance of quantitative techniques for the practice of risk management. If, as most industry experts now acknowledge, the general "flight to quality" and subsequent widening of credit spreads was unprecedented and, therefore, unforecastable, what good are Value-at-Risk measures that are based on the statistics of historical data? These concerns are well-founded but somewhat misplaced in their focus. The fault lies not in the methods but, rather, in the unrealistic expectations we have in their application. In a broader context, rational decision making under uncertainty requires a focus on three specific components, which I have previously described as the "three P's of total risk management": prices, probabilities, and preferences.1 Although any complete risk management protocol should contain elements of all three P's, to date most risk management practices have focused primarily on prices and probabilities, with almost no attention to preferences. In this article, I will emphasize the role of preferences in rational decision making under risk through three illustrative examples: the nature of loss aversion, the difference between risk and uncertainty, and the interpretation of probabilities. Before launching into these examples, let me emphasize that despite the term "behavioral" in the title of this article, and the increasing popularity of "behavioral finance," the importance of behavior is certainly not new to modern finance. However, the
1 For a complete discussion of the three P's of risk management, see A. Lo, "The Three P's of Total Risk Management," Financial Analysts Journal (January/February 1999): 13-26.
enormous progress that psychologists, cognitive scientists, and neuroscientists have made in recent years has created a renaissance in research on human behavior, of which one aspect is economic and financial decision making. This may very well lead to an entirely new field of "financial decision analysis" in which the gains from cross-disciplinary research are especially prominent, and financial risk management is the obvious starting point.
Loss Aversion

An individual's decision making under risk, rational or otherwise, is heavily influenced by the concept of loss aversion. Suppose you are offered two investment opportunities, A and B: Investment A gives you a sure payoff of $240,000, and investment B gives you a lottery ticket with a chance of winning $1 million with a probability of 25 percent and a chance of winning nothing with a probability of 75 percent. If you must choose between A and B, which one would you prefer? Now, investment B has an expected value of $250,000, a higher expected value than A's payoff, but this may not be all that important to you because you will receive either $1 million or nothing. Clearly, there is no right or wrong choice here; the answer is simply a matter of personal preferences. Faced with this choice, most people prefer A to B. Now suppose you are faced with another two choices, C and D: Investment C yields a sure loss of $750,000, and investment D is a lottery ticket with a chance of losing nothing with 25 percent probability and a chance of losing $1 million with 75 percent probability. In this case, C and D have exactly the same expected value: -$750,000. If you must choose
© 1999 Andrew W. Lo
between these two undesirable choices, which would you prefer? (This situation is not as absurd as it might seem at first glance; one can easily imagine situations that require choosing the lesser of two evils.) In this case, most people choose D. These two sets of choices are based on an experiment that was conducted by Stanford psychologists Kahneman and Tversky almost 20 years ago.2 When Kahneman and Tversky performed this experiment, and in many repetitions since then, the results have shown that an overwhelming proportion of individuals preferred A to B and D to C. These choices reveal an interesting fact about individual preferences for risk. Those who choose A and D are selecting the equivalent of a single lottery ticket that offers the chance of winning $240,000 with 25 percent probability and losing $760,000 with 75 percent probability.3 However, those who choose B and C (the combination that most individuals shun) have the same probabilities of losses and gains, 25 and 75 percent, respectively, but when they win, they win $250,000 instead of $240,000, and when they lose, they lose $750,000 instead of $760,000. In fact, choice B and C is equivalent to choice A and D plus $10,000 free and clear, with no risk at all, because $10,000 is added to both the winning and losing alternatives. Faced with this information, would you still pick A and D? A common reaction to this example is: "It isn't fair; when you told us about A and B, you did not tell us about C and D." But this example is not nearly so contrived as it may first appear to be; in a multinational company, the London office may be faced with choices A and B and the Tokyo office with choices C and D. Locally (in London and in Tokyo), there is no right or wrong answer; the choice between A and B and the choice between C and D are matters of personal risk preferences. But the globally consolidated book for the company will show a very different story.
From the financial perspective, there is indeed a right and wrong answer for the company. The purpose of financial technology is to provide a framework for analyzing problems such as this. Financial technology should prevent people from engaging in the kind of behavior that gives rise to these apparent arbitrage opportunities.
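The arithmetic behind the combined positions can be checked in a few lines. The sketch below (illustrative Python, not part of the original talk) tabulates the two joint outcomes and their expected values, confirming that B and C together beat A and D together by exactly $10,000 in every state of the world.

```python
# Outcome tables for the combined choices described above.
# Each entry maps a probability to the total dollar payoff in that state.
# A + D: keep the sure $240,000; lose $1 million with 75 percent probability.
outcomes_AD = {0.25: 240_000, 0.75: 240_000 - 1_000_000}
# B + C: pay the sure $750,000 loss; win $1 million with 25 percent probability.
outcomes_BC = {0.25: 1_000_000 - 750_000, 0.75: -750_000}

def expected_value(table):
    return sum(p * payoff for p, payoff in table.items())

print(expected_value(outcomes_AD))  # -510000.0
print(expected_value(outcomes_BC))  # -500000.0
# In each state (win or lose), B + C pays exactly $10,000 more than A + D:
print({p: outcomes_BC[p] - outcomes_AD[p] for p in outcomes_BC})  # {0.25: 10000, 0.75: 10000}
```

The state-by-state comparison is the point: B plus C is not merely better on average; it dominates A plus D outcome by outcome.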
Risk versus Uncertainty

The distinction between risk and uncertainty is a subtle one but quite important from the perspective of the individual investor. The following example, based on the well-known Ellsberg (1961) Paradox,4 illustrates that risk management must take into account the uncertainty of risks. Suppose 100 balls, 50 red and 50 black, are placed into Urn A. You are asked to pick a color, red or black, and write it down on a piece of paper without revealing it to anyone. A ball is then drawn randomly from the urn; if it is the color you selected, you will receive a $10,000 prize, and otherwise you will receive nothing. What is the most you would be willing to pay to play this game (which is to be played only once)? Most financial industry professionals name their top price as $5,000, which is not surprising because this is the expected value of the game. However, other individuals typically bid considerably lower, usually not more than $4,000, a discount from the expected value that indicates risk aversion, a common trait among most of us. Now consider the same game with the same terms but with Urn B, which contains 100 red and black balls of unknown proportion (it might be 100 red balls and no black balls, 100 black balls and no red balls, or anything in between). What is the most you would be willing to pay to play this game (also to be played only once)? The majority of individuals asked say that they would pay much less than they would to play the first game with Urn A (offers as low as $100 are not unusual among individuals unfamiliar with basic probability theory). But this seems to be wholly inconsistent with the risks of this game, which are mathematically identical to those of the first game in which Urn A is used.5 Alternatively, suppose you have already paid $5,000 to play the game but are given the choice of which urn to use, A or B. Which urn would you prefer?
Most individuals prefer Urn A, despite the fact that the probability of drawing a red or black ball is exactly the same for both urns. This game, a variant of Ellsberg's Paradox, illustrates a deep phenomenon regarding the typical individual's differing levels of uncertainty about his or her risks. How can that be? The words "uncertainty" and "risk" are usually considered synonyms, and yet individuals seem to prefer knowing what kind of uncertainty they are facing. Somehow,
2 D. Kahneman and A. Tversky, "The Psychology of Preferences," Scientific American, vol. 246 (1982): 160-173.
3 In choosing A, you receive $240,000 for sure. In choosing D, you lose nothing with 25 percent probability, hence you keep the $240,000, and with 75 percent probability you lose $1 million, in which case you are down a net $760,000.
4 D. Ellsberg, "Risk, Ambiguity, and the Savage Axioms," Quarterly Journal of Economics, vol. 75 (1961): 643-669.
5 For Urn B, you do not know what the proportion of red and black balls is; you literally have no information. Presumably, what this means is that it is 50/50 (this represents the "maximum degree of ignorance").
not knowing about that uncertainty is worse than knowing about the uncertainty. This brings up the obvious question: "Do people care about the uncertainty of the uncertainty of the risk?" Unfortunately, we do not yet have a satisfactory answer to this question, and only recently have researchers begun to study it in the context of financial risk management. This example is particularly compelling because it illustrates all of the elements, the three P's again, that a complete risk management protocol ought to include. First, the game requires an understanding of the statistics of the phenomenon, that is, the probabilities. For the game with Urn A, the odds are 50/50. For the game with Urn B, you do not know the proportion, but it also turns out to be 50/50. The second aspect is an economic aspect, or prices: How much are you willing to pay? Third, and what I argue is the most important aspect, is the personal aspect: How do you feel about the uncertainty surrounding the risks?
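The claim that the two urns carry mathematically identical risks can be checked by simulation. The sketch below (illustrative Python, not part of the original talk) models "no information" about Urn B as a uniform prior over its possible compositions, an assumption in the spirit of footnote 5; under that prior, the chance of winning is 50 percent with either urn.

```python
import random

def play(red_balls: int, pick: str) -> bool:
    """Draw one ball from an urn of 100 balls, red_balls of them red."""
    ball = "red" if random.random() < red_balls / 100 else "black"
    return ball == pick

random.seed(12345)
trials = 200_000

# Urn A: composition known to be 50 red / 50 black.
wins_a = sum(play(50, "red") for _ in range(trials))

# Urn B: composition unknown; model total ignorance as a uniform draw
# over 0..100 red balls before each game (an assumption, not the only
# possible model of "no information").
wins_b = sum(play(random.randint(0, 100), "red") for _ in range(trials))

print(wins_a / trials, wins_b / trials)  # both close to 0.5
```

The simulation shows why the lower bids for Urn B cannot be justified by the odds alone: whatever makes Urn B less attractive is a preference about ambiguity, not a difference in win probability.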
Interpreting Probabilities

Even a strict focus on the statistical aspect of risk management, namely probabilities, cannot avoid the issue of preferences; how people interpret probabilities often interacts with their preferences in peculiar ways. Here I illustrate a curious interaction between probabilities and personal preferences and show that probability-based risk management analytics can be improved in concrete ways by incorporating preference information.

Probability-Based VAR. Value at Risk (VAR) is based on probabilities. In fact, the RiskMetrics documentation defines VAR in the following way: "Value at Risk is an estimate, with a predefined confidence interval, of how much one can lose from holding a position over a set horizon."6 VAR attempts to provide a quantitative answer to the question: "What is the probability of losing $100 million over the next month, given the current portfolio?" Although the focus, the probability of extreme dollar losses, seems to be straightforward, interpreting VAR may raise as many questions as it answers. For example, is VAR based on conditional or unconditional probabilities? That is, does VAR indicate the probability of losing $100 million on any given day, or is VAR talking about the probability of losing $100

6 Morgan Guaranty Trust Company, Introduction to RiskMetrics, 4th ed. (New York: Morgan Guaranty Trust, 1995).
million after a specific event has occurred, such as a 5 standard deviation drop in the yen/dollar exchange rate? How does VAR handle consistency across portfolios and across time? Are the probabilities that are either imposed or extracted from, say, a derivatives portfolio consistent with a foreign currency hedging strategy? Does VAR have any mutual checks to make sure that the VAR probabilities are consistent across time? If the probabilities are not consistent, arbitrage opportunities could arise. Does VAR make use of prior information or preferences? Any sound risk management framework needs to address these kinds of questions.

Conditional Probabilities. The role of preferences in interpreting probabilities becomes clear in the following serious example taken from the epidemiology literature: AIDS testing. Suppose a blood test for AIDS is 99 percent accurate. By that, I mean the probability of the blood test turning out positive if you have AIDS is 99 percent, and the probability of the test turning out negative if you do not have AIDS is also 99 percent. Now, suppose you take this blood test, and the test result is positive. What is your personal assessment of the probability that you have AIDS? Do not use any other external information to answer this question; only consider the fact that the blood test is positive and that this test is 99 percent accurate. Many people would say the probability of having AIDS is 100 percent, and almost everybody would say the probability has to be more than 50 percent. But the answer is not given by the 99 percent accuracy of the test, which refers to the probability of the test being positive given your condition. The relevant probability is the probability of your condition given the outcome of a 99 percent accurate test. The distinction between these two probabilities is very important, and Bayes' rule links the two formally.
Specifically, according to Bayes' rule, the probability of having AIDS given a positive blood test is equal to the probability of a positive blood test given that you have AIDS, multiplied by the unconditional probability of having AIDS, divided by the probability that the blood test is positive:

Prob(AIDS|+) = Prob(+|AIDS) × Prob(AIDS)/Prob(+).

A Behavioral Perspective on Risk Management

To assess your probability of having AIDS given a positive blood test, you need two other pieces of data in addition to the fact that the blood test is 99 percent accurate: the unconditional probability of AIDS, Prob(AIDS), and the unconditional probability of a positive blood test, Prob(+), which are approximately 0.1 percent and 1.098 percent, respectively.7 Therefore, the probability that you have AIDS given a positive blood test is:

Prob(AIDS|+) = 99% × (0.1%/1.098%) ≈ 9.02%.
The relevant probability, the conditional probability of AIDS given a positive blood test, is not 100 percent or even 50 percent, but 9 percent! This is a surprisingly small number given the accuracy of the blood test, but recall that before testing positive, the unconditional probability of AIDS was only 0.1 percent. Testing positive does yield a great deal of information; indeed, the probability of AIDS increases almost a hundredfold, but it is by no means a certainty that you have AIDS. When we make use of probabilities, we must keep in mind that we need to focus on the right probabilities. Moreover, researchers and practitioners need to think about how simple probabilities interact with other factors, such as conditioning on prior information (the AIDS example illustrates the importance of conditioning information). Experience, judgment, and intuition are also critical in assessing prior information, as are preferences and human biology.

Interpreting Zero-Probability Events. In the AIDS example, if you guessed, as many people do, that the probability of your having AIDS given a positive blood test was 100 percent, you concluded that the probability that you did not have AIDS was zero, a very strong conclusion. Zero-probability events create interesting conundrums for modern finance. Suppose an event E has never occurred in the past. Because of the nature of human cognition, most people will act as if the probability of such an event is zero, despite the fact that they might be able to contemplate the occurrence of such an event if asked. But what if another set of individuals thinks that the probability of E is not zero? In that case, at least one group (and possibly both groups) will be convinced that an arbitrage opportunity, a "free lunch" transaction, exists.
In particular, the group that believes the probability of E is zero should be pleased to write a low-cost insurance policy that pays $100 million if E occurs and nothing if E does not. As long as this group receives a positive premium for writing such an insurance contract, it will be happy to do so and will write as many policies as it can, because the group believes that it will never need to pay out (since it believes the probability of E is zero). However, the group that believes the probability of E is positive should be pleased to purchase such an insurance policy at some positive price. Both groups believe they are receiving a bargain, yet this may be a recipe for financial disaster if E is indeed something that can occur, even if it occurs infrequently.

This scenario may seem rather simplistic, but consider the turmoil in the hedge fund industry during the summer of 1998. Some of the hedge fund managers involved have argued that the events of August and September 1998 were unprecedented and virtually impossible to anticipate. All the relevant models and risk analytics indicated that the possibility of such a massive global flight to quality and such huge increases in credit spreads was an extraordinarily unlikely event (a 27 standard deviation event, by some accounts). In other words, what actually happened was, ex ante, a zero-probability event! Such zero-probability events can create some very serious gaps in risk management systems. These gaps are related to the distinction between objective and subjective probabilities. That is, if you and I have different probability assessments, then as a practical matter, an objective probability may not be relevant. Rather, multiple subjective probabilities exist, and subjective probabilities are influenced by none other than human preferences.

7 The unconditional probability is the probability of an event without reference to any other event or information. In this case, the unconditional probability of having AIDS is the probability that any randomly selected individual has AIDS, which is roughly approximated by the number of individuals known to have AIDS in the United States (about 250,000) divided by the total U.S. population (about 250 million), which yields 0.1 percent. The unconditional probability of testing positive then follows from:

Prob(+) = Prob(+|AIDS) × Prob(AIDS) + Prob(+|No AIDS) × [1 − Prob(AIDS)].
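The arithmetic behind footnote 7 and the 9.02 percent posterior can be verified with a short script (a sketch; the 99 percent test accuracy and 0.1 percent base rate are the figures used in the text):

```python
# Sketch: checking the Bayes' rule arithmetic from the AIDS-testing example.
accuracy = 0.99    # Prob(+|AIDS), also equal to Prob(-|No AIDS) in the example
base_rate = 0.001  # unconditional Prob(AIDS)

# Total probability of a positive test (the formula in footnote 7):
p_positive = accuracy * base_rate + (1 - accuracy) * (1 - base_rate)

# Bayes' rule: Prob(AIDS|+) = Prob(+|AIDS) * Prob(AIDS) / Prob(+)
posterior = accuracy * base_rate / p_positive

print(f"Prob(+)      = {p_positive:.4%}")  # approximately 1.098%
print(f"Prob(AIDS|+) = {posterior:.2%}")   # approximately 9.02%
```

The script reproduces both numbers in the text: the 1.098 percent unconditional probability of a positive test and the 9.02 percent conditional probability of AIDS given a positive test.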
Summary and Conclusion

Existing risk management practices focus on the statistical and economic aspects of risk management, an endeavor that should be called "statistical risk management." They do not focus on the personal aspects of risk management, which is where much more thought and research should be devoted. This suggests the possibility of a new approach to risk management, called financial risk management, in which the importance of preferences is explicitly acknowledged and a serious attempt is made to measure preferences and consider their interaction with prices and probabilities to arrive at optimal financial decisions. Although financial risk management is more challenging than statistical risk management (it does require additional structure, more sophisticated estimation and inference, and more careful interpretation), the payoff is a genuine capability of managing, rather than just measuring, financial risks.
Question and Answer Session
Andrew W. Lo

Question: If some guarantor of last resort will bail people out when they have huge losses, should risk management systems be adjusted accordingly?

Lo: Not only should people be adjusting their risk management practices for the likelihood of a bailout, but in practice, people do adjust for it. That is, they take into account the implicit insurance that they have at their disposal, which is a very serious problem that underlies not only hedge funds but also mutual funds and individuals investing in Individual Retirement Accounts and 401(k) plans. This insurance phenomenon even influences compensation contracts for typical managers.

The interaction between individual preferences and compensation contracts is very complex. For example, the typical hedge fund manager has a compensation contract that is convex. The manager gets a management fee and an incentive fee. The incentive fee creates a bias toward taking on more risk, and the manager has his or her own preferences that have to be layered on top of the compensation contract. The problem is determining the overall risk profile when you put all those pieces together.

Another way of thinking of this problem is to fix the compensation contract and the desired risk profile of the manager. Then, under these constraints, how many managers do you have to interview before you find the one who has the right personal risk preferences such that, when the individual's preferences interact with the compensation contract, you arrive at the proper risk behavior? Working through the problem this way means that you will have to spend a lot of time thinking about how to measure risk preferences, which is one of the projects that I am working on now with psychologists and neuroscientists.

Question: How do you actually incorporate a zero-probability event into your models?

Lo: One of the unique abilities of human cognition is being able to create a mental model of events that do not exist. Humans have the unique ability to dream up all sorts of bizarre scenarios and ideas and plans and expectations, to plan for contingencies that have never existed. This ability is what allows us to dominate the environment the way we do. The problem from a risk management standpoint is: How many different events do you think there are that have never occurred but that might be relevant for the next five years? There are many such events. What we need to do is use our creativity, our judgment, our heuristics, and in fact all of our experiences to try to come up with events that, although they have a very low probability of occurring, do not have a probability of zero. For other events, we simply have to assign zero probabilities, because we cannot possibly analyze all of these events. The events of August and September 1998 are important not because we learned so much about specific details of statistical analysis or risk management systems but because we learned that something we thought was not possible was possible. We have broadened our mindset in terms of zero-probability events. We need to do a lot more of that mind broadening, but in the end, we are never
going to foresee all possible disasters that can occur.

Question: In your AIDS example, what happens if the accuracy of the blood test is 100 percent?

Lo: If the test accuracy goes from 99 percent to 100 percent, a lot changes. If you are saying that the test perfectly predicts whether somebody has AIDS, then if that test is positive, the person definitely has AIDS. So, to that person, the difference between 99 percent and 100 percent accuracy is all the difference in the world! What is remarkable is that people simply do not make such fine distinctions. We do not distinguish between 99 percent events and 100 percent events, because we are not structured to do so. Think about human evolution and how we came to be able to process the kind of information that we do. We are the product of hundreds of millions of years of environmental forces that impinged on our probability of survival. So, do you think that being able to distinguish between a 99 percent event and a 100 percent event would lead to a higher probability of survival when you are being chased by a saber-toothed tiger? I doubt it. What is intriguing, and what I think has implications for evolutionary biology, is that the probability of survival in the next millennium may well be linked to being able to discern between a 99 percent probability and a 100 percent probability. The rise of financial markets, financial interactions, financial engineering, and money as a medium of exchange and a measure of "fitness" may influence the nature of human evolution. We in finance will have a lot to say to
the evolutionary biologists about what they ought to start learning (the Black-Scholes model and the capital asset pricing model, for example) during the next 20 years!

Question: What are the implications of behavioral finance for the investment management world?
Lo: One general implication is the impact of behavioral biases on investment decision making. For example, there is a difference between active management from the point of view of pursuing a particular long-run financial goal and active management that is an outcome of trading decisions that are influenced by behavioral biases. Humans are extraordinarily risk averse when it comes to gains. That is, we believe that a bird in the hand is worth two in the bush. If we are ahead, we want to lock in those gains. But, when it comes to losses, we are much more risk seeking. That is, if we are threatened with a loss, we would rather double up than take a sure loss.
This behavior is perfectly reasonable from an evolutionary perspective. If your very existence is threatened, the last thing you will do is try to calculate probabilities and take a loss that would make you even less likely to exist. You will gamble to try to get out of that danger. In investment situations, we are not talking about losing our lives, but the fact is that our brains are wired to respond to risk in that particular way. One of the experiments that I hope to conduct is to take a look at how traders' brain activities shift as they are faced with losses versus gains. I want to try to isolate exactly what part of the cognitive process is associated with these kinds of biases. In terms of the practical implications of behavioral finance, one implication is to focus on risk management with the knowledge that these biases exist. Another, which is even more important, is to educate clients (e.g., pension plan sponsors) and, ultimately, individual investors about how to think about risk in a more systematic fashion (which does not necessarily mean "in a more rational fashion"). There is
nothing irrational about these biases. After all, they are what helped us survive the past 100 million years. They are inappropriate, however, in a financial context. Being able to understand when these biases are appropriate and inappropriate is critical for dealing with investment problems. Finally, managers should be thinking about the entire risk management process from beginning to end. They should think about risk as a multidimensional, multiattribute phenomenon that needs to be dealt with in a much more sophisticated manner. In active management, it is not just the beta or the sigma or the tracking error that is relevant. What are also important are draw-downs, the dynamics of the risk and how they shift through time and across regimes, and how correlations among various securities change in response to institutional and political changes. This is a very complicated task that cannot be completed overnight, but I think that current research will enable us to provide some tools to allow individuals to manage those risks better.
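The asymmetry Lo describes, risk aversion over gains and risk seeking over losses, is the reflection effect of prospect theory. A minimal sketch using the Kahneman-Tversky value function (the curvature and loss-aversion parameters are the well-known Tversky and Kahneman 1992 estimates, not figures from this text):

```python
# Sketch of the prospect-theory value function: mildly concave over gains
# (risk aversion) and convex and steeper over losses (risk seeking plus
# loss aversion). Parameters are the Tversky-Kahneman (1992) estimates.
ALPHA = 0.88    # curvature of the value function
LAMBDA = 2.25   # loss-aversion coefficient

def value(x):
    """Subjective value of a gain or loss of x, relative to the reference point."""
    return x**ALPHA if x >= 0 else -LAMBDA * (-x)**ALPHA

# A sure gain of 50 is preferred to a 50/50 shot at 100 (a bird in the hand) ...
assert value(50) > 0.5 * value(100)
# ... but a 50/50 shot at losing 100 is preferred to a sure loss of 50.
assert 0.5 * value(-100) > value(-50)
```

The two assertions reproduce exactly the pattern in the answer above: locking in gains, but gambling to avoid a sure loss.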
The Plan Sponsor's Perspective on Risk Management Programs Desmond Mac Intyre Vice President, Financial Planning General Motors Investment Management Corporation
A sound risk management program must be grounded in the organization's philosophy, objectives, and mission. General Motors Investment Management Corporation has integrated a variety of key criteria, from audit/review to output, into comprehensive building blocks for a total risk management system.
Looking back and trying to pick the point at which an organization first began its risk management activities is difficult, especially when people believe strongly that risk management always existed. In fact, much of the early work that an organization does in developing a risk management program simply centers on documenting what has actually happened to that point. Eventually, most organizations move to a more structured environment in which explicit objectives are set and practices are continually documented; in other words, they move to a formalized point at which it is no longer sufficient to say that the organization has always operated in a particular way and everyone knows what that way is. This presentation discusses the experience of General Motors Investment Management Corporation (GMIMCo) in building a broadly constructed risk management program to cover the more than $110 billion under its management; included are program objectives and scope, critical building blocks, key criteria, and potential benefits.

Background

The first step toward setting up a formal risk management structure was taken in 1995, with the formation of an internal risk management task force. That task force had a mandate to figure out how GMIMCo should take a formal approach to risk management; in so doing, the task force members had to confront several issues. For instance, they had to consider the implications of the dynamics between group responsibility and individual accountability, and they had to weigh the cultural trade-off between tightly defined rules on the one hand and an environment of trust with minimal definition on the other. People were understandably concerned that a formal program would stifle imagination and creativity and limit the opportunity sets GMIMCo might consider. The task force members also had to consider the possibility that they would create an approach and standards that could not be adhered to, thus causing more harm than good.

Two other factors had to be taken into account. First, any risk program or standards must be relevant to both the staff and the investment process and thus earn collective "buy-in" across the organization. Although the risk management role must be independent and protected from interference, involving and educating everyone with respect to risk issues is absolutely critical. Second, any risk management program must be forward looking. A retrospective viewpoint is usually dangerous, inevitably degenerating into a "blame game" and being perceived as a "witch hunt."

After balancing all these concerns, the task force recommended the establishment of a risk management function and the appointment of a director of risk management, reporting directly to the president and CEO of GMIMCo. With the risk management function in place, a process began of reviewing industry best standards and practices, the most relevant of which turned out to be those of the Risk Standards Working Group, with its strong focus on the end user and institutional investors. Its 20 standards helped add momentum to GMIMCo's internal efforts and undoubtedly those of many other organizations as well. Following this broad sweep of standards and practices, GMIMCo began to create its philosophy, a set of objectives, and a mission with respect to formal risk management.

©Association for Investment Management and Research
Philosophy, Objectives, and Mission

GMIMCo's philosophy and objectives culminated in a mission statement that encompasses the formal risk management approach.

Philosophy. GMIMCo's philosophy begins with the notion that risk, in and of itself, is not negative. What does have a potentially negative impact on General Motors and GMIMCo is the undertaking of risk that is not properly priced, not managed effectively, and/or misunderstood or simply not known. Second, risk management is a holistic endeavor and must be very broad based, transcending quantitative and qualitative measures. Third, all risks must be managed, not just those that receive the attention of the press, although it is fair to say that managers who repeat the very visible mistakes of others probably get what they deserve. Fourth, risk management should be proactive, not reactive. Finally, managing risk is the responsibility of everyone in the organization, and one of the dangers in moving to a formal approach is that risk management will come to be perceived as being the province of one or a few "experts."

Objectives. GMIMCo's objectives are threefold: to implement a depersonalized (objective) approach for evaluating and monitoring current risks within the context of an overall program; to sensitize employees and management to investment and operational risk; and to satisfy General Motors and GMIMCo senior management that risks are known, controlled, and acceptable worldwide.

Mission. From the philosophy and objectives has come a mission statement that addresses several key elements of the risk management process. First, risk needs to be measured, monitored, and managed within a consistent framework under the active oversight of senior management. In that regard, GMIMCo has set up a risk management committee, which is discussed later.
The purpose of active oversight should be to continually determine whether the risk management program is practical, is relevant to our activities at GMIMCo, and is not strangling our opportunities. Second, we need to ensure that we are adequately rewarded for the risks we take. Third, the director of risk management should widen the recognition of existing and potential risks, should identify the critical elements of both absolute and relative risk, and should seek and develop appropriate measures for both absolute and relative risk.
Risk Management Approach

GMIMCo's approach to implementing its risk management philosophy, objectives, and mission is a fairly standard one. First, the risk director, with broad input from the rest of the organization, identified and selected suitable benchmarks as a starting point and conducted a firmwide risk audit against those benchmarks. In that regard, we believe that accountability is the biggest form of risk control, so we worked to ensure accountability for our own standards as they evolved. The entire framework was clarified; the resolution of action items necessitated by the risk audit was benchmarked; reporting lines were clarified; findings, procedures, and policies were well documented; and clear timelines were established, with stated consequences of failing to take necessary risk management actions. One comment of the Risk Standards Working Group members was that many risk standards are not actually adhered to, so we made our standards a guidance document of best practice to steer our future direction and activities.

Circle of Risks. As an organization, GMIMCo developed a common framework for viewing the scope of risks that we face every day, what we call the "circle of concern." GMIMCo identified 10 key risks, as follows.
• Compliance risk. The possibility that existing procedures do not adequately ensure that GMIMCo and its clients are in compliance with the rules and regulations of governmental and regulatory bodies and industry standards of practice. Compliance risk also includes the possibility that the record keeping needed to document compliance is not sufficient to show that GMIMCo and its clients are, or have been, in compliance.
• Corporate or financial risk. The potential that events and/or decisions at GMIMCo will have an adverse impact on the financial position of GMIMCo itself or its parent, General Motors.
• Credit/counterparty risk.
This risk has two aspects: (1) the risk of a counterparty's credit deteriorating, thus substantially affecting the price of the security, and (2) the potential that the issuer of a security may default or fail to honor its financial obligations to GMIMCo or its clients.
• Fiduciary risk. The potential exposure of the fiduciaries for each client to legal and regulatory actions precipitated by a breakdown in controls or failure to execute due diligence on behalf of the plan.
• Liquidity risk. The potential failure to maintain sufficient funds, primarily cash and marketable securities, to meet short-term obligations. Also, market liquidity risk is the inability to close out or liquidate market positions at fair market value within a reasonable time frame.
• Monitoring risk. The potential for losses because of unintended bets or a breakdown in due diligence with respect to manager relations, or the potential for unintended consequences from the results of investment initiatives that were not fully understood at the outset.
• Operational risk. The potential for discontinuity because of the possibility of a breakdown in operational procedures, particularly as they relate to a process breakdown; this risk is distinct from the design, implementation, and maintenance of computerized information systems.
• Market risk. The possibility of loss resulting from movements in market prices (e.g., from changes in interest rates, foreign exchange rates, volatility, correlations between markets, or capital flows).
• Modeling risk. The potential for loss because of actions taken or policies implemented based on views of the world, in general, and the investment community, in particular, that are derived from improper models. These views are derived from representations of reality that do not capture all significantly relevant information or are inappropriately applied throughout the investment program.
• Systems risk. The potential that current system designs or implementations are inappropriate or ineffective to the extent that information obtained from or disseminated through the system environment is incorrect or incorrectly perceived and, therefore, that the decisions made based on that information are suboptimal. In addition, this risk includes the security of information against unauthorized access and the continuity of operational and information system capability in the event of a disaster.
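As an illustration of how such a taxonomy might be made operational, the 10 risks could be encoded in a simple machine-readable register (a hypothetical sketch; the category names come from the text, and the horizontal/vertical scope tags follow the workshop categorization described later, in which qualitative operational risks are "horizontal" and quantifiable, asset-class-specific risks are "vertical"):

```python
# Hypothetical sketch: GMIMCo's "circle of concern" as a risk register.
# scope "horizontal" = qualitative, common across business units;
# scope "vertical"   = quantifiable, specific to an asset class.
RISK_REGISTER = {
    "compliance":          {"scope": "horizontal"},
    "corporate":           {"scope": "vertical"},
    "credit_counterparty": {"scope": "vertical"},
    "fiduciary":           {"scope": "horizontal"},
    "liquidity":           {"scope": "vertical"},
    "monitoring":          {"scope": "horizontal"},
    "operational":         {"scope": "horizontal"},
    "market":              {"scope": "vertical"},
    "modeling":            {"scope": "vertical"},
    "systems":             {"scope": "horizontal"},
}

def risks_by_scope(scope):
    """Return the risk categories carrying the given scope tag, sorted by name."""
    return sorted(name for name, attrs in RISK_REGISTER.items()
                  if attrs["scope"] == scope)

print(risks_by_scope("vertical"))
# ['corporate', 'credit_counterparty', 'liquidity', 'market', 'modeling']
```

A register like this makes it trivial to answer firmwide questions such as "which risks are common to every business unit?" by querying the horizontal set.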
Circle of Influence. Once we identified the scope of risks facing GMIMCo, the task team then looked at the extent to which GMIMCo could influence and/or control for these risks, which we call our "circle of influence." For each risk, we tried to define the best industry standards for that particular risk, and we defined a road map to translate those standards into best practice. Recognizing that not all risk-influencing goals were attainable in the short term, we prioritized action items. We tried to relate best practices to the different needs of each product and each business unit and to fit those practices into the organizational culture. For example, to try to influence or control monitoring risk, we focused on the portfolio impact of managers' violating investment guidelines or engaging in unauthorized transactions, excessive concentrations, or outright fraud. Under the standards of best practice for monitoring risk, we addressed risk-adjusted performance, risk limits, stress testing, return attribution, investment profiling, due diligence, optimal structures, target tracking errors, information ratios and alphas, benchmarks, and new product review groups, essentially putting together an entire structure from which to monitor and review our ongoing investment program for internal and external managers.

Training. To translate these standards to our organizational culture, we held workshops in each business unit. Although GMIMCo has six business units, divided along asset class groupings, many of the risks are common to all groups. After meeting with one or two groups, we came up with about 80 percent of the risks we believe we face as an organization. We categorized our risk exposures as either horizontal or vertical. Horizontal risks are qualitative in nature: the operational risks involved in financial accounting and controls, legal, personnel, research, and systems. Vertical risks (corporate, modeling, market/credit, and liquidity risk) are quantifiable and specific to a particular asset class, thus differing from business unit to business unit. By looking at the risks as horizontal and vertical, we were able to determine our organizational exposures quite quickly, to translate standards and practices consistently across the organization, and to get collective "buy-in" as we implemented our program.

Building Blocks of Risk Management

GMIMCo's formal approach to risk management can be viewed as consisting of systematic building blocks arranged in five rows, as shown in Exhibit 1. The first row of building blocks begins with a review of objectives and resources with respect to risk management. Small firms might be faced with not having enough resources, but external resources can be leveraged via consultants, risk bureaus, and the like. We then proceed (horizontally in the first row) to an understanding of best practice and how relevant that practice is to our organization's unique operations. Having established best practice (and received "buy-in" from management, which is important), the next step is to review, audit, and document existing operations versus best practice. The final step of the first row is to prioritize, based on the audit results, the areas where actions are needed. The emphasis should be on what is feasible and achievable and on guarding against unrealistic expectations.

The second row of building blocks in Exhibit 1 focuses on management considerations. First is corporate governance, the process and structure by which client and corporate objectives are met. GMIMCo is currently in the midst of an exercise in which we have identified every major decision in our
Exhibit 1. Building Blocks of Risk Management

Row 1: Review: Objectives/Resources | Best Practice | Review/Audit/Document | Prioritize Action
Row 2: Corporate Governance | Investment Principles/Objectives | Management Structure/Accountability | Business Continuity
Row 3: Asset-Liability Reviews | Risk Levels/Alpha Targets | Investment Control Framework | Selection Criteria: New Products/Managers
Row 4: Data: Source/Validation/Valuation | Performance Measurement/Attribution | Monitoring/Exception Reporting | Escalation Procedures
Row 5: Benchmarking | Education | Independent Review | Review: Objectives/Resources
organization, the key information points required to make that decision, and the people responsible for implementation and oversight. For an investment organization, a key building block obviously must be continual consideration of investment principles, objectives, and philosophy. Management structure and accountability is particularly important to ensure consistency in approach and process across asset classes. Business continuity touches on several issues, from disaster recovery planning (who is dealing with year 2000 issues?) to succession planning (who is going to replace the head of a business unit?).

The third row of building blocks focuses on investment-specific issues and revolves around how to translate corporate and fiduciary objectives into an asset mix. Asset-liability reviews are critical; GMIMCo conducts asset-liability reviews every three years for General Motors' U.S. pension plans, in addition to an annual validation of the investment policy guidelines stemming from the review and the capital market inputs used. For each asset class, and obviously in the context of the asset-liability process, we have identified certain return targets and tracking limits versus those targets. In turn, we have related our incentive compensation structure to those alpha targets. We have also established a consistent investment control framework covering investment philosophy, fund construction, and analysis. Finally, at this level, we have increasingly formalized the new product development criteria and the selection criteria we use to hire new managers. This step has become even more important recently with the evolution of exotic instruments.

The fourth row of building blocks in Exhibit 1 deals with performance measurement for risk management purposes. Obviously, a key element of any measurement system is the data being used and reported; trust in the source and validation of the data quality are absolute necessities. GMIMCo had several custodians in place for all of our assets until recently, which made gathering all those data a huge exercise. We have recently consolidated to two custodians and implemented a master record-keeping structure so that, in essence, we are building a data reservoir from which to work. If the data are not cleaned up and validated, the exercise of modeling data at the aggregate level is time consuming at best and certainly of questionable value at worst. We have a similar stance with respect to performance measurement and attribution. We are actively looking for standardization in performance attribution across all of our asset classes, and we are standardizing to the extent possible all performance measurement analytics across all of our asset classes. We have recently commingled our defined-benefit and defined-contribution assets, and 50 percent of our assets are now valued in a daily net asset value (NAV) environment. Although this change to a NAV environment brings with it a whole new set of risks, increased transparency and daily performance measurement are important advances for us.

Once a program is in place, monitoring that program and reporting exceptions in some formal way becomes a necessity. Many organizations have automated exception reporting systems. GMIMCo has an entire set of procedures and standards for reviewing managers on a semiannual basis, with set agendas for those manager reviews. We have also engaged in the exercise of involving people from different asset
classes to review certain managers. The simple, basic, and direct questions from someone who is not familiar with a certain asset class can often be the most telling. In all cases, the focus is on documenting and recording all of this information and holding ourselves accountable. If the monitoring process reveals a breach, whether internal or external, escalation procedures, such as error and omission policies, then come into play. A key reason why the risk program has to be independent is that it has to report to top management to avoid potential interference in these escalation procedures.

The fifth and final row of building blocks attempts to ensure that the entire risk management system "completes the loop" with respect to continuous feedback and review. Benchmarking progress should be done against peers, against stated objectives, and against the best standards that have been adopted. Risk management education should be continuous. Our legal staff and our investment staff hold internal workshops in which we work through ideas and annual workshops in each business unit in which we go over the relevance of our risk management system. Independent reviews need to be integrated into the system. A formal annual audit process is a good first step in this regard, whether internal or external. Finally, the right-most building block in the fifth row actually ends the process where it began (review of objectives and resources), although in this case, the review is conducted in light of changing market conditions, the changing use of products, and changing risk appetites and objectives.
Key Criteria
Any organization embarking on a formal risk management program should recognize the key success criteria, which can be organized into four categories: audit/review, dependencies, required management response, and output.

Audit/Review. This category includes identifying and internalizing best practice, engaging in an interactive risk review, prioritizing action items, tasking individuals and making them accountable, and benchmarking the resolution of items. It is especially important in the review process to engage everyone in the organization, questioning each person about every aspect of his or her business as it relates to risk exposure and risk control.

Dependencies. The dependencies criteria refer to the necessary resource and system requirements. Centralized and aggregated information systems are critical, especially to those organizations whose operations are global. Knowledge and continuous education are key, as are clear communication, marshalling of necessary resources, total group involvement, and ardent and visible management support.

Required Management Response. Not only is management support critical, but certain management responses are also required periodically. Clearly stated risk management objectives and a well-defined and up-to-date plan, a solid organizational structure, a meaningful reporting format that cannot be "gamed," clear accountability, periodic assessment (i.e., review, benchmarking, and recalibration), and clear linkage of risk performance to compensation: all of these must be seen by management as required components of an effective risk management system.

Output. A risk management system's performance can be judged by its output, by the tangible evidence of its existence. Such evidence may include more-defined and better-documented control processes and overall control environment, written acknowledgment of responsibilities, documented risk and return limits and objectives, risk templates for various investment programs, established reporting and escalation procedures, and a centralized risk management platform from which to run a continuous risk management cycle. GMIMCo's experience provides numerous examples in two key areas, structure and reporting, of this risk management system output.

• Structure examples. With respect to structure, an internal valuation committee was established so that we would have hierarchical price structures and rules for every single asset class. The previously noted risk management committee, a broadly based group consisting of people from investments, senior management, controls, legal, and other areas, meets quarterly with a set agenda to consider. This committee has well-defined reports to analyze, which gives some degree of independence and protection to the risk manager, and it reports on an annual basis to a joint committee of GM Corporation and GMIMCo. A formal risk management team is now in place and has recently been expanded in size, and all risk management objectives are linked into the investment management process. We have developed educational programs and workshops and hold annual group risk workshops, in which we review what has been achieved in the past year and give people the opportunity to voice their concerns, in the spirit of asking them to express their concerns going forward, not dwelling on mistakes made in the past. We have formalized escalation procedures and developed errors and omission policies and funding arrangements for such policies. We have consolidated our custodial structure and now basically operate off the platform of a master record keeper, and each of our fund managers has access to all of the daily data.
• Reporting examples. With respect to reporting, GMIMCo has adopted a value at risk (VAR) approach that is supplemented appreciably with scenario analysis (regarding our derivative and foreign exchange activities) for liquidity management and assessing counterparty risk. We have standardized review formats for all of our external and internal managers. We have also set up ad hoc reviews for external managers, in which we get a mixture of investment, control, and risk management staff to visit the external manager and assess the entire organization, from the research department to the trading process to the formalization and construction of portfolios. We have documented and standardized our investment philosophy, processes, and procedures organizationwide. We have also reviewed our investment guidelines. In one asset class, for instance, we had 350 investment rules, of which probably 20 were appropriate. The review caused us to focus anew on what is important to the client and what is critical to
our business, rather than on setting up multiple rules just to be seen as "in control." We have also developed a process for reporting and resolving exceptions. Finally, we have set up quarterly benchmarking against our objectives and developed consolidated, independent risk management reporting.
Conclusion
Risk management is not about generating a risk number. It is about setting up a quality control environment in which everyone is encouraged to ask questions and generate solutions and in which everyone is a risk manager. Developing and formalizing a risk management program provides clear business benefits. Such a program strengthens and supports the decision-making process, increases senior management's comfort level and awareness, helps identify risk exposures and especially opportunities, strengthens compliance with regulatory requirements, and in general, promotes a stronger, more effective control environment.
Question and Answer Session
Desmond Mac Intyre

Question: What have been the biggest surprises from establishing this risk management program?
Mac Intyre: I am somewhat surprised by the unintended benefits and by the willingness of everyone to get involved. Better communication is probably one of the most valuable results of setting up a program such as this. You have to get together all the various strands and different viewpoints in your organization, and what becomes very clear are the different mindsets, different philosophies, and indeed different appetites for risk. The formalization and standardization of the process has also been a valuable benefit. In all honesty, having risk standards in an informal capacity as part of your investment process is dangerous and actually makes investment managers' jobs more difficult, not less. If you can formalize the areas in which they are asked to act, that situation is better and cleaner for investment managers; we have found their reactions and involvement to be widespread and positive. Far from being resistant, the investment personnel are the actual designers of our risk management program.
Question: In what ways does this program affect your external managers?
Mac Intyre: First, we do not abdicate the responsibility for analysis ourselves. We have established rigorous reporting requirements for our external managers, but we want reporting to be independent and internalized. We are, however, ultimately responsible for that analysis ourselves. In fact, one of the main reasons that we developed our internal management activity was so that those people could better understand the managers they were managing; for example, we have a European portfolio manager who is responsible for European managers.

Second, we have standardized the review format. All of our external managers are subject to semiannual review, and all of our internal managers are subject to quarterly review. In those reviews, we have a fixed agenda that we ask all of our investment personnel to work through every meeting. Also, in moving a large portion of our assets to a daily NAV environment, we have (1) set in place more-rigorous standards in terms of reconciliation, (2) set pricing hierarchies for various asset classes and securities, and (3) established a three-way process that involves GMIMCo, its external managers, and custodians.

Third, the risk management team gets involved in the selection of external managers, not to serve as a roadblock but to become familiar with the risk exposures and risk requirements that might be unique to that relationship.

Finally, communication is key. Whether in developing guidelines or understanding objectives, our relationships with external managers have to be a two-way process, and often a simple conversation will clear up
a miscommunication or a lack of understanding.

Question: Other than VAR, what measures do you use to review market risk?

Mac Intyre: To a degree, we encourage everyone to look at a broad set of risk measures. At the portfolio level, we have alpha targets, and we look at multiple risk measures, such as tracking error and semi-variance data. Although we often hear that the best set of risk measures is one that could be standardized, there is still some value in profiling managers from different perspectives and using different shortfall measures. It is also important that any risk measure, and any risk measurement product, have total buy-in at the portfolio manager level. We are working with several vendors to establish an aggregate platform from which to view portfolio risk, asset class risk, and total plan risk.

Question: How have you incorporated risk into the compensation scheme for internal and external managers?

Mac Intyre: Everyone's compensation and/or bonus is in part qualitative and in part quantitative. To a degree, the risk element is contained in the qualitative measure, which reflects the fact that we started off with a more holistic rather than quantitative view of risk management.
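The shortfall measures mentioned in the answer above are straightforward to make precise. The following is a generic sketch, not GMIMCo's actual methodology; the function names and the monthly annualization convention are assumptions. Tracking error is the annualized standard deviation of active (portfolio minus benchmark) returns, whereas downside semivariance averages only the squared deviations that fall below a target:

```python
import math

def tracking_error(portfolio, benchmark, periods_per_year=12):
    """Annualized standard deviation of active (portfolio - benchmark) returns."""
    active = [p - b for p, b in zip(portfolio, benchmark)]
    mean = sum(active) / len(active)
    # sample variance of the active returns
    var = sum((a - mean) ** 2 for a in active) / (len(active) - 1)
    return math.sqrt(var) * math.sqrt(periods_per_year)

def downside_semivariance(returns, target=0.0):
    """Average squared deviation below the target return (a shortfall measure)."""
    shortfalls = [(target - r) ** 2 for r in returns if r < target]
    return sum(shortfalls) / len(returns)
```

Tracking error penalizes deviation from the benchmark in both directions; the semivariance measure is one of the "shortfall measures" in the answer that looks only at the downside.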
Managing Risk in the Industrial Company
Charles W. Smithson
Managing Director
CIBC World Markets
Industrial companies are important users of risk management products, albeit with a different perspective from that of investment firms. Using such products, especially for companies with certain characteristics, should reduce risk and increase company value, and apparently it does.
Interest in the uses and applications of risk management products (especially those involving derivatives) has been sharpened by derivative-related headlines, mostly negative, during the past several years. Exhibit 1 shows the losses associated with derivatives by type of company in each year from 1994 through 1998. At different times, nonfinancial users, institutional investors, and dealers have all felt the sting of huge derivative losses. In that context, what U.S. industrial corporations do to control risks has applications for what pension funds and money management firms should be doing to hedge their risks.

CIBC World Markets has been carrying out research for a long time on the risk management practices of nonfinancial companies because we sell risk management products (i.e., derivative products) to these companies (see Bodnar, Marston, and Hayt 1998).1 This presentation discusses the use of risk management products by industrial corporations, explores the theoretical arguments for why risk management products should add value, and examines the available empirical evidence on whether risk management "works"; that is, if a company uses such products, do its risks diminish and/or its value increase?
Use of Risk Management Products
With the Wharton School, CIBC has carried out three surveys of industrial companies' use of derivatives since 1994. Of the full sample of surveyed companies, 35 percent reported that they used derivatives in 1994, 41 percent in 1995, and 50 percent in 1998. These raw numbers imply that use is increasing, but if only the companies that responded to at least two of the three surveys are examined, the level is flat, at about 41 percent for respondents to three surveys and 44 percent for respondents to two surveys. Although use of derivatives by industrial companies has not increased, neither has it decreased in the wake of the horror stories in the Wall Street Journal and elsewhere.

Moreover, for those companies that do use derivatives, the level of usage is up. When asked in the 1998 survey about the intensity of their use of derivative products, 42 percent of respondents reported that use had increased over the previous year, 46 percent reported use had remained constant, and only 13 percent reported use had decreased.

Users and Uses of Derivatives. The survey results indicate that large companies are by far the biggest users of derivatives. In the 1998 survey by Bodnar et al., 83 percent of the large companies (those with fiscal year 1996 total sales greater than $1.2 billion), 45 percent of the medium-sized companies (sales between $150 million and $1.2 billion), and 12 percent of the small companies (sales less than $150 million) that responded use derivatives. That result is surprising because theory suggests that small companies would benefit more from these products than large companies. The reason, however, could be that managers of the small companies are not as familiar, and thus not as comfortable, with derivative products as their larger company counterparts. As business schools increasingly make derivatives a part of their regular curricula and companies, regardless of size, move up the learning curve, the numbers for use by small companies should grow.

As for the businesses of the companies using derivatives, the 1998 survey indicates the greatest use

1Complete information on CIBC's surveys and research is available on the School of Financial Products' World Wide Web site: www.schoolfp.cibc.com.

© 1999 Charles W. Smithson
Exhibit 1. Losses Associated with Derivative Products, 1994-98

1994
  Nonfinancial users: Codelco ($207 million); Gibson Greetings ($20 million); Procter & Gamble Company ($157 million); Mead ($7 million); Air Products and Chemicals, Inc. ($70 million); Federal Paper ($19 million); Caterpillar ($13 million)
  Institutional investors: Askin Capital Management; Arco ($22 million); Investors Life Insurance Company ($90 million); Piper Jaffray; Odessa College ($11 million); Orange County, CA ($1.5 billion)

1995
  Institutional investors: Wisconsin Investment Board ($95 million); Escambia County, FL ($19 million); Common Fund ($138 million)
  Dealers: Barings Securities ($1 billion)

1996
  Sumitomo Corporation ($2.6 billion)

1997
  Dealers: NatWest Markets ($127 million); Bank of Tokyo-Mitsubishi ($53 million); UBS ($431 million); J.P. Morgan & Co.; IBJ ($120 million); Salomon Brothers ($100 million); Peregrine Holdings Limited

1998
  Dealers: Salomon Smith Barney ($700+ million); Bankers Trust Company ($488 million); Nomura Securities International ($1.16 billion); Goldman, Sachs & Company; UBS ($600 million); Westdeutsche Genossenschafts-Zentralbank ($230 million)
by primary product companies, which include agriculture, mining, energy, and utilities (at 68 percent), followed by manufacturing companies (at 48 percent), and service companies (at 42 percent). Industrial companies use derivatives to reduce a variety of exposures, but the most frequent use is for reducing foreign exchange risk. The 1998 survey indicates that 81 percent use derivatives to manage foreign exchange exposure, 67 percent to reduce interest rate risk, and 42 percent to manage exposure to commodity prices. So, the kind of nonfinancial company most likely to be using derivatives is one that is involved in foreign exchange transactions.

Instruments. Nonfinancial companies use varying combinations of options, forwards, futures, and swaps, both exchange traded and OTC, depending on the exposure being managed. The 1997 CIBC/University of Waterloo survey of Canadian industrial
companies, the sister survey to the Wharton survey discussed by Fortin (1998), indicates that forwards are the favorite product of industrial companies for managing foreign exchange risk. The likely explanation is that those companies are using OTC forwards to hedge transaction exposures (i.e., to lock in the value of receivables or payables). Forward contracts, by the way, are also the tool most widely used by institutional investors for managing foreign exchange risk.

For managing interest rate risk, industrial companies tend to favor interest rate swaps, an OTC product. In contrast, institutional investors favor exchange-traded futures contracts. This difference arises from the basic nature of the business that each type of firm conducts and the investment horizon each faces. Investment firms are in the business of securities and derivative trading, and their investment horizon is relatively short. Institutional investors, therefore, like to use exchange-traded instruments in case they need
to get out of a position quickly because they know they are likely to find the liquidity they need on the exchanges. Industrial companies only use derivatives to facilitate their real business, and their horizons can be quite long. So, even though few dealers offer the customized OTC products and such positions are, as a result, harder to unwind than futures, the interest rate swap market is so large that a major and relatively long-lasting market disruption is the only potential problem for those industrial companies that tap that market. For managing commodity exposure, industrial companies prefer forwards and, to a lesser extent, options.

Reasons for Use. The 1997 CIBC/University of Waterloo survey asked industrial companies about the objectives of their derivative use. The responding companies indicated that their "most important objective" was managing cash flows (nearly 40 percent of the respondents), followed by managing earnings (25 percent) and minimizing financial distress (almost 15 percent). The respondents indicated that their "second most important objective" was managing cash flows (about 25 percent), followed by minimizing financial distress (more than 20 percent) and managing earnings (more than 15 percent). Companies identified their "third most important objective" to be reducing market risk (about 25 percent), followed by managing cash flows, minimizing financial distress, and maintaining a competitive position (all about 15 percent each). The point is that these companies tend to be managing some kind of flow measure rather than a stock measure.

Value at risk (VAR), by contrast, is a stock measure that was developed inside banks as a way of communicating between the trading desks and senior managers. VAR is valuable for OTC derivative dealers and managers because they are interested in value changes, whether an absolute change or a rate of change.
VAR is much less useful for most industrial corporations because they do not manage their companies in terms of present value; they manage in terms of having sufficient cash flows this quarter to meet an investment budget or to pay a dividend, for instance.
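The stock-versus-flow distinction can be made concrete. A one-period historical-simulation VAR (a generic sketch of the technique, not any particular dealer's system; the quantile convention here is an assumption) asks how much value could be lost at a given confidence level:

```python
def historical_var(pnl_history, confidence=0.95):
    """One-period value at risk by historical simulation: the loss level
    exceeded in only about (1 - confidence) of the observed periods."""
    losses = sorted(-p for p in pnl_history)   # positive numbers = losses
    # index of the confidence-level quantile in the ascending loss list
    idx = min(int(confidence * len(losses)), len(losses) - 1)
    return max(losses[idx], 0.0)               # VAR is floored at zero
```

Running the same arithmetic on projected quarterly cash flows rather than on daily portfolio values turns this stock measure into the kind of flow measure (a "cash flow at risk") that the surveyed industrial companies actually manage.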
How Can Risk Management Increase Firm Value?
Should the shareholders of industrial companies want the companies to practice risk management? The answer is simple: They should only if risk management can increase the value of the company. One reason risk management might increase the value of the company is through reducing the risk to
shareholders. Risk management products change the variance of returns; the expected net present value of a swap or an option or any other financial product at origination (leaving out the bid-offer spread) is zero. Derivatives reduce volatility, but portfolio theory makes clear that they are useful only to the extent that investors cannot diversify their risks in other ways. If risks were completely diversifiable, nobody would need or want these products.

The true value of risk management products to the shareholders of widely held firms is found in the work of Merton Miller and Franco Modigliani. The so-called M&M Proposition 1 is as follows: In a world of no taxes, no transaction costs, and a given, specified investment decision, the debt-equity structure of a corporation does not matter. The proposition is actually even broader. In such a world, no financial policy (to issue common or preferred stock, to manage risk or not) matters. But in fact, financial policy decisions do matter in the real corporate world because such decisions have an impact on (1) the taxes the company pays, (2) the transaction costs, and (3) the investment decision. So, a shareholder has a stake in whether the firm hedges or not. If the firm can hedge in such a way as to reduce its tax liability or reduce its cost of financial distress, the shareholder should be pleased because the shareholder cannot achieve those effects as an individual investor. Those effects can only be achieved at the firm level.

The most important reason, however, for the shareholder to care about a firm's financial policies is that investment decisions are not fixed; the firm may be able to improve its investment decisions. At the simplest level, the shareholder would like the firm to make the following investment decisions: Accept all projects with positive net present values, and reject all projects with negative net present values.
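That accept/reject rule can be written down directly; a minimal sketch, in which the discount rate and cash flows are hypothetical:

```python
def npv(rate, cash_flows):
    """Net present value of cash_flows, where cash_flows[t] arrives at time t."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

def accept(rate, cash_flows):
    """Accept exactly the projects with positive net present value."""
    return npv(rate, cash_flows) > 0
```

For example, at a 10 percent discount rate, a project costing 100 today that returns 60 in each of the next two years has a small positive NPV (about 4.13) and should be accepted.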
In short, if risk management can move the firm toward those decisions, shareholders want the firm to practice risk management.

Suppose a hypothetical U.S. company, Acme Pharmaceuticals, is exposed to foreign exchange rate risk, interest rate risk, and commodity price risk. The company's biggest exposure is foreign exchange risk because it is a U.S. dollar-based firm that sells drugs in all of the countries of the world and has royalties coming in from all over the world. All those foreign currencies are eventually converted into U.S. dollars, so when the dollar is strong, Acme's pretax cash flows will be low. When the dollar is weak, Acme's pretax cash flows will be high. Its cash flows are definitely volatile. On average over time, if Acme does nothing about the volatility, its cash flows will be at the mean. Why would Acme go to a derivative dealer that will charge it half the bid-offer spread to move the cash flows toward the mean sooner than if the company
simply waited for the flows to settle on the mean? Acme cares about this volatility because if it does nothing about it, the volatility can change the economics of this company. The issue Acme is worried about is that if the dollar is strong enough, its cash flows will be low, and if the cash flows drop low enough, that drop will trigger a cutback in R&D spending. A pharmaceutical company does not want to cut R&D, the lifeblood of its revenue and income stream.

To generalize, risk management could increase the value of a company by reducing its taxes if the company has more tax loss carryforwards, more tax credits, and/or more income in the progressive (higher marginal rate) region of the tax schedule. Similarly, risk management could increase the value of a firm by reducing the costs associated with financial distress (a specific transaction cost) if the firm has less interest coverage, more financial price risk, more leverage, and/or lower credit ratings. Finally, risk management could increase the value of a firm by facilitating its optimal investment decisions if the firm has more R&D expenditures and/or a higher market-to-book ratio.

Academic research has generated both theoretical predictions and empirical evidence with respect to the signs of these determinants of risk management use. The consensus predictions from 3 theoretical papers and the consensus evidence from 15 empirical studies are shown in Exhibit 2. Note that the empirical evidence supports the theoretical predictions. Based on this empirical evidence, what kind of companies would be more likely to use derivative instruments? The answer is companies with more tax loss carryforwards, more tax credits, less interest expense coverage, more leverage, and higher R&D budgets.
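The Acme R&D logic illustrates why a zero-NPV hedge can still add value: when a cash-flow shortfall forces the firm to forgo positive-NPV R&D, reducing volatility raises expected firm value even though the hedge does not change the mean cash flow. A small Monte Carlo sketch makes the point; all of the numbers below are illustrative assumptions, not figures from the presentation:

```python
import random

def expected_value(vol, n_trials=20_000, seed=1):
    """Expected firm value when a cash flow below a threshold forces the
    firm to forgo R&D that would have added value. Illustrative numbers."""
    random.seed(seed)
    mean_cash_flow = 100.0   # expected pretax cash flow
    rd_threshold = 80.0      # cash flow needed to fund the R&D program
    rd_payoff = 25.0         # value added by the (positive-NPV) R&D program
    total = 0.0
    for _ in range(n_trials):
        cash = random.gauss(mean_cash_flow, vol)
        # R&D is funded only when cash flow clears the threshold
        total += cash + (rd_payoff if cash >= rd_threshold else 0.0)
    return total / n_trials

# Hedging lowers vol without changing the mean cash flow, yet it raises
# expected firm value: expected_value(5) exceeds expected_value(30).
```

With these assumed numbers, cutting volatility from 30 to 5 raises expected value by several points simply because the R&D program almost never gets cut.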
Researchers have also examined agency relationships in companies that use derivatives. A company that wants its managers to work hard is likely to motivate them with the reward of either shares or options on shares. All other things being equal, who is more likely to manage risk, the option receiver or the share receiver? Put another way, which of the two likes volatility? The option holder likes volatility because it increases the value of that option position. Theoretically, if managers are compensated with options, one would expect them to do less hedging than managers who are compensated with actual equity. Consistent with theory, Tufano (1996) and other researchers (Géczy, Minton, and Schrand 1997; Wysocki 1996; and Berkman and Bradbury 1996) found a positive relationship between equity compensation and level of hedging. Tufano also found a negative relationship between option compensation and level of hedging. Gay and Nam (1998) found the opposite relations, and Géczy et al. also found a positive relationship between option compensation and hedging activity, so the empirical evidence is mixed.
Does Risk Management Work?
Although theory predicts that risk management should work, the real question is whether risk management does work. That is, do risk management products contribute to lower risk, higher share price, and/or improved investment decisions?

Risk. In a study of whether companies that use risk management products actually reduce their risk, Guay (1999) examined changes in company risk for new users of derivatives according to three measures of risk: total risk, interest rate risk, and foreign exchange rate risk. Table 1 summarizes Guay's
Exhibit 2. Signs of Determinants of Risk Management Activity: Consensus Theoretical Predictions and Empirical Evidence

Determinant                                      Consensus Theoretical Prediction   Consensus Empirical Evidence
To reduce taxes
  Tax loss carryforward                          Indeterminate                      Positive
  Tax credits                                    Indeterminate                      Positive
  Income in progressive region of tax schedule   Indeterminate                      Indeterminate
To reduce cost of financial distress
  Interest coverage                              Negative                           Negative
  Interest rate or foreign exchange risk         Positive                           Positive
  Leverage                                       Positive                           Positive
  Credit rating                                  Negative                           Indeterminate
To facilitate optimal investment
  R&D expenditures                               Positive                           Positive
  Market value/book value                        Positive                           Positive

Note: For a list of the studies used to compile this exhibit, see Smithson (1998).
Table 1. Mean Changes in Risk for New Users of Derivatives (from Year t-1 to Year t)

Type of Risk              Mean Change
Total risk                -0.56%
Interest rate risk        -0.14*
Exchange rate exposure    -0.25*

*Statistically significant at a 90 percent confidence level.
Source: Guay (1999).
results: All three measures of risk were lower, by at least 9 basis points (bps) and by as much as 72 bps, after risk management products were used. Tufano (1998) examined the effect of risk management on gold mining companies. He measured the sensitivity of the company's share value to the price of gold and found that companies that were hedging their gold price risk exhibited less gold price risk than companies that did not. What about Beta? One might question whether the preceding results represent diversifiable or nondiversifiable risk. Interest rate risk, foreign exchange risk, and commodity price risk are all diversifiable risks. For example, if investors know a company has positive interest rate risk, they can diversify their portfolios by holding the stock of a company that has negative interest rate risk. So, what should happen to beta if the firm uses risk management? Theoretically, nothing should happen to beta, but the empirical research that has looked at beta has found that beta became smaller. What is causing a change in beta? The answer could be that the capital asset pricing model is wrong and that the market prices risk other than market risk. But the answer could also be that investors are using risk management as a signal of management quality. Share Price. Allayannis and Weston (1998) studied the relationship between a company's use of foreign exchange derivatives and the value of the firm. Their results, as summarized in Table 2, suggest that the values of those companies that hedge their foreign exchange risk are higher than those of companies that did not use risk management. In his ongoing research on gold mining firms, Tufano has been examining the effect of risk management on the stock price performance of gold mining companies. 
He has formed portfolios of gold mining firms that hedge and portfolios of gold mining firms that do not hedge and has tracked the performance of these portfolios during the 1990s. Not surprisingly, in periods of falling gold prices, the hedged firms outperformed the unhedged firms. What is surprising is that in periods of rising gold prices, when the unhedged firms would be expected to outperform
the hedged firms, the two portfolios exhibited essentially the same performance. The reasons are not clear and could be several, but one plausible explanation is that the hedging activity is a signal to investors of quality of management and of lower overall risk, and investors react positively to that signal, even when market price movements are seemingly inopportune for hedging.

Several years ago, Chris Turner and I were interested in the way that a company's stock price reacts when the company announces a risk management initiative (see Smithson 1998). To examine this question, we looked at a sample of 158 hybrid debt issues (i.e., debt issues that could be decomposed into a standard debt issue plus an embedded derivative). These hybrids were indexed to interest rates, foreign exchange rates, commodity prices, or other financial indexes. As the preceding evidence indicates, when the firm issued the hybrid, the market's perception of the riskiness of the firm changed. We separated our sample into those hybrid debt issues that were likely done for risk management motives ("risk managers") and those that were not ("others"). We found that the reaction to the hybrid issuance for the risk managers was positive and marginally significant and that the reaction for the others was negative but statistically insignificant.

Investment Decisions. Minton and Schrand (forthcoming 1999) considered why a shareholder would want a company to manage risk. For example, why would the shareholders of a pharmaceutical firm want the firm to manage its foreign exchange risk? One possible explanation is that those investors do not want the pharmaceutical firm to cut back on R&D. Minton and Schrand examined the relationship between earnings volatility and investment. They found a negative relationship between earnings vol-
Table 2. Mean Value of Tobin's Q: Firms That Use Foreign Exchange Derivatives versus Those That Do Not, 1990-95

        Firms That Use        Firms That Do Not Use
Year    FX Derivatives        FX Derivatives           Difference
1990    1.53                  1.28                     0.25a
1991    1.63                  1.39                     0.24a
1992    1.41                  1.21                     0.20a
1993    1.42                  1.17                     0.25a
1994    1.29                  1.12                     0.17a
1995    1.22                  1.13                     0.09a

Note: Comparison of median market values.
aRank-sum test indicates p-value less than 0.05.
Source: Allayannis and Weston (1998).
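The rank-sum test cited in the table note can be illustrated with a small sketch. The sample values below are invented for illustration (they echo the table's yearly figures but are not the study's underlying firm data), and the normal approximation used here ignores ties:

```python
# Hypothetical sketch of a rank-sum (Mann-Whitney) comparison like the one
# behind Table 2, via the normal approximation. Sample data are made up.
import math

def rank_sum_z(sample_a, sample_b):
    """Rank-sum z-statistic for sample_a versus sample_b (no tie handling)."""
    pooled = sorted((v, label)
                    for label, sample in (("a", sample_a), ("b", sample_b))
                    for v in sample)
    ranks_a = [i + 1 for i, (_, label) in enumerate(pooled) if label == "a"]
    n_a, n_b = len(sample_a), len(sample_b)
    w = sum(ranks_a)                          # observed rank sum for sample A
    mean_w = n_a * (n_a + n_b + 1) / 2        # expected rank sum under H0
    sd_w = math.sqrt(n_a * n_b * (n_a + n_b + 1) / 12)
    return (w - mean_w) / sd_w

# Toy Tobin's Q samples: hedgers versus non-hedgers (illustrative only)
hedgers = [1.53, 1.63, 1.41, 1.42, 1.29, 1.22]
non_hedgers = [1.28, 1.39, 1.21, 1.17, 1.12, 1.13]
print(round(rank_sum_z(hedgers, non_hedgers), 2))
```

A z-statistic above roughly 1.96 corresponds to a two-sided p-value below 0.05, which is the threshold the table's note refers to.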
atility and investment, implying that if companies can reduce their cash flow volatility, their capital, R&D, and advertising expenditures all increase.
Conclusion

A number of theoretical and empirical studies have provided insight about the use of risk management products by industrial companies. In the United States, about half of all companies, both large and small, are using risk management instruments, primarily to manage cash flows and earnings. Looking at the survey data more closely, one can see that large companies are more likely to use risk management products than small companies, and primary producers are more likely to use the products than manufacturing or service firms.
Theory predicts that risk management could increase firm value if it (1) reduces the firm's tax liability, (2) reduces the firm's transaction costs (e.g., the costs associated with financial distress), and (3) optimizes the firm's investment expenditures. The available empirical evidence suggests that risk management works. Companies that use these products are less risky by several measures of risk and along multiple risk dimensions: total risk and financial price risk, as well as possibly market (beta) risk. And to the extent that risk management reduces the volatility of the firm's cash flows or earnings, the empirical evidence links that reduction to higher levels of internal investment spending. Finally, and most importantly, the payoff in share price performance to risk management is positive.
References

Allayannis, George, and James Weston. 1998. "The Use of Foreign Currency Derivatives and Firm Market Value." Working paper. University of Virginia (January).

Berkman, Henk, and Michael E. Bradbury. 1996. "Empirical Evidence on the Corporate Use of Derivatives." Financial Management (Summer):5-13.

Bodnar, Gordon M., Richard C. Marston, and Greg Hayt. 1998. "1998 Survey of Financial Risk Management by U.S. Non-Financial Firms." Wharton School and CIBC World Markets (July).

Fortin, Steve. 1998. "University of Waterloo Second Survey of Canadian Derivatives Use and Hedging Activities." In Managing Financial Risk, Yearbook 1998. Edited by Charles W. Smithson. New York: CIBC World Markets.

Gay, Gerald D., and Jouahn Nam. 1998. "The Underinvestment Problem and Corporate Derivatives Use." Financial Management (Winter):53-69.

Geczy, Christopher, Bernadette Minton, and Catherine Schrand. 1997. "Why Firms Use Currency Derivatives." Journal of Finance (September):1323-54.

Guay, Wayne. 1999. "The Impact of Derivatives on Firm Risk: An Empirical Examination of New Derivative Users." Journal of Accounting and Economics (January):319-351.

Minton, Bernadette, and Catherine Schrand. Forthcoming 1999. "The Impact of Cash Flow Volatility on Discretionary Investment and the Costs of Debt and Equity Financing." Journal of Financial Economics.

Smithson, Charles W. 1998. "Questions Regarding the Use of Financial Price Risk Management by Industrial Corporations." CIBC School of Financial Products Web site (www.schoolfp.cibc.com).

Tufano, Peter. 1996. "Who Manages Risk? An Empirical Examination of Risk Management Practices in the Gold Mining Industry." Journal of Finance (September):1097-1137.

---. 1998. "The Determinants of Stock Price Exposure: Financial Engineering and the Gold Mining Industry." Journal of Finance (June):1015-52.

Wysocki, Peter D. 1996. "Managerial Motives and Corporate Use of Derivatives: Some Evidence." Working paper. Simon School of Business, University of Rochester.
Managing Risk in the Industrial Company
Question and Answer Session
Charles W. Smithson

Question: Traditional investment management firms, which rely on index funds and mutual funds, tend to be publicly owned, whereas hedge funds tend to be privately owned. Does something in the nature of risk management tools explain this difference in corporate structure?

Smithson: The different corporate structures have evolved because of asymmetrical information. If a firm is following an index, it needs no special information, so it can have a broad, shareholder-managed structure. By contrast, a firm that has valuable information does not want to disclose that information to shareholders, who want to see it before they buy shares. So, a firm that wants to tell outsiders as little as possible about its risks will not go public.

How many financial institutions are willing to report their value at risk numbers on their financial statements? The answer is many, and in the next five years, even if the U.S. regulators do not require VAR reporting, nearly all financial institutions will report VAR numbers in their annual reports. Publicly traded firms cannot afford not to report it. When shareholders receive an annual report that does not contain a VAR number, they think either the number is so big the firm does not want to disclose it or the firm does not know how to calculate VAR. Neither reaction is good for the firm.
Consider what happened with industrial corporations. The first two companies to report VAR in their annual reports (which happened in the same year) were British Petroleum and Mobil Oil. Why? Because both were known to be active traders: BP in the foreign exchange markets and Mobil in hedging interest rate risks. If an investor knows that BP is actively trading foreign exchange, that investor wants to know BP's VAR number. The same situation affects investment firms. Publicly traded financial institutions that are going to be active derivative traders will have to disclose VAR.

Question: Do you think regulatory pressure and events such as those that occurred in the credit markets during the summer of 1998 will cause changes in the way VAR is measured and reported?

Smithson: VAR was originally designed to answer the simple question of how much risk is in a given portfolio. It was a shorthand way of conveying whether risk was a lot or a little. Rather than being concerned about whether we all had wrong VAR numbers in the summer of 1998 (numbers that, by the way, were wrong in the right direction), I am more troubled by VAR being used in ways that were never intended and are not appropriate. VAR is primarily a tool for communication but is being used as a tool for control. For instance, using VAR as a stop-loss measure is a problem. When you use VAR
as a stop loss, if you exceed your VAR today, you are required to liquidate that portfolio starting tomorrow. That liquidation simply increases the volatility in the market even more, which means that your VAR will get breached again tomorrow, and the negative cycle is now established and intensified, all because you used VAR inappropriately.

I am also troubled by statements in the press such as "the models didn't work in the summer of 1998." At CIBC we generated a regular VAR number, but we also stress tested our VAR; we looked at other scenarios and knew what kind of losses we could end up with if those unusual states of the world came about. We made a business decision that those scenarios were not going to happen, and we left investment positions alone. What happened is that one of those scenarios we thought would not happen did happen. So, all we could do was say, "Rats!" We all wish it had not happened, but the situation did not catch us by surprise, and we lost what those models said we were going to lose under those circumstances. The models worked; we simply made business decisions that did not turn out the way we hoped.

The real question for the financial institutions that were using these models is: If you lost more than your one-day VAR suggested you were going to lose, was your loss within the bounds of your stress test? If the answer is yes, then your model worked.
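The distinction drawn above between a one-day VAR and a stress-test bound can be sketched as follows. All position sizes, volatilities, correlations, and scenario shocks here are invented for illustration, and a parametric (normal) VAR stands in for whatever model a desk actually runs:

```python
# Sketch: one-day parametric VAR answers "how much risk on a normal day,"
# while a stress test bounds the loss in a named bad scenario.
# Every number below is a hypothetical assumption, not real desk data.
import math

positions = {"ust_10y": 50.0, "em_debt": 20.0}    # $ millions (hypothetical)
daily_vol = {"ust_10y": 0.004, "em_debt": 0.02}   # daily return vol (hypothetical)
corr = 0.3                                        # assumed cross-asset correlation

# Portfolio one-day standard deviation for the two-asset case
a, b = (positions[k] * daily_vol[k] for k in ("ust_10y", "em_debt"))
port_sd = math.sqrt(a * a + b * b + 2 * corr * a * b)
var_99 = 2.33 * port_sd                           # 99% one-day VAR, normal assumption

# Stress scenario: EM spreads gap out while Treasuries rally (assumed shocks)
stress_loss = positions["em_debt"] * 0.15 - positions["ust_10y"] * 0.01

print(f"one-day 99% VAR: ${var_99:.2f}m, stress-scenario loss: ${stress_loss:.2f}m")
```

A realized loss that exceeds `var_99` but stays inside `stress_loss` is consistent with the point made in the answer above: both the VAR model and the stress test "worked"; the firm had simply accepted that scenario as a business decision.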
Practical Issues in Choosing and Applying Risk Management Tools Jacques Longerstaey Vice President, Co-Head-Risk Management Group Goldman Sachs Asset Management
Effective risk management encompasses many concerns and requires a complete program of organizational capabilities. Defining risk, agreeing on and critiquing measures of risk, and deciding whether to buy or build a risk management model-all are key steps in choosing and applying risk management tools.
Risk management systems range from the overly simple to the numbingly complex. Somewhere in between is the appropriate approach to risk management for most investment management organizations-an approach that addresses key risk exposures with understandable risk measures in a user-friendly risk management model. This presentation focuses on some of the practical issues involved with trying to implement a risk management framework-issues that include defining risk, agreeing on risk measures, recognizing deficiencies in such widely used measures as tracking error, and deciding whether to buy or build the appropriate risk measurement models.
Effective Risk Management

Gerald Corrigan, former president of the New York Federal Reserve Bank, described risk management as getting the right information to the right people at the right time. His description is more telling than its brevity might suggest.

The "right information" refers to having enough, but not too much, information. Many risk management reporting systems get bogged down in a mass of information, and the danger is that the system will produce data that are not actionable. Portfolio managers and the firm's senior management-the "right people"-need data and information that they can act on, which is why and how the risk measurement group in an organization can add value.

The "right time" is not always easy to identify, particularly when someone has to look at the pros and cons of different methodologies and different systems. The trade-off is frequently between accuracy and speed. Often, some accuracy must be
sacrificed in order for the information to be actionable by management. That trade-off is part of where the art meets the science.

Asset Manager Risks. Many of the risks borne by asset managers are similar to those borne by other financial institutions: performance risk, credit risk, operational risk, the risk of fraud, and business concentration risk. What differentiates asset management firms from other financial institutions is that some of these risks are shared with clients. In that context, the distinction between the risk that a client is taking in a portfolio and the risk that the manager is ultimately bearing is inevitably a blurry one, and the safest posture for the manager may well be to act as if he or she were managing personal funds.

Another way to draw the distinction between risk management for other financial institutions and risk management for asset management is to contrast tactical and strategic risk management. Michelle McCarthy focused on the tactical part of risk management.1 The strategic part of risk management, however, asks what performance risk is in a particular portfolio, in a series of portfolios, or in the whole organization. The risk management group of an asset management firm also has a responsibility to focus on the business risks that the firm is exposed to. The ultimate business risk is that the firm has so many portfolio losses that, over time, the firm's client base starts to diminish. For example, value at risk (VAR) models are important to our broker/dealer business at Goldman Sachs Asset Management (GSAM) for estimating

1 See Ms. McCarthy's presentation in this proceedings.
©Association for Investment Management and Research
how much we can lose in our trading books. The biggest potential risk to us as an institution, however, is not the loss incurred by a trading desk. The biggest potential risk is a sustained bear market that affects our entire initial public offering business. That risk is substantially bigger for us, or any other bank on Wall Street, than the trading losses that we incur as a result of market movements.

Concerns. Risk management encompasses many concerns, and many systems need to be put in place to reflect those concerns adequately. Probably of greatest importance for a risk management group to work effectively is to make senior management adequately aware of the workings of the group. If senior management does not "buy in" to the process, the risk management group will have either no power or nothing to do. Unfortunately, often an "accident" has to take place to ensure management awareness. If a firm wants to implement a comprehensive risk management program, it should also
• follow "best practices" that already exist in the industry,
• have independent monitoring of positions,
• make sure no conflicts of interest exist among the various people in the investment process,
• undertake independent price verification of inventory and contracts to ensure adequate liquidity,
• establish processes for controlling exposure to operational, legal compliance, credit, and reputational risks (what we call Wall Street Journal risk), and
• understand the potential market and performance risks.

Establishing a Program. Four basic ingredients comprise a top-notch risk management group: culture, data, technology, and process.

• Culture. The essence of an appropriate culture is organizational acceptance of risk management control principles and the development of a "language" of risk. Still, the risk management culture is very difficult to define; I often say that it is one of those things that I know when I see it.
The risk culture is affected by the "soundness" of the hiring process and the types of risk-reward policies in place. In a good risk management culture, the people throughout the organization are conscious of the risk issues and the performance risk issues resulting from any of their decisions. For example, at GSAM, our objective is to produce consistent, stable, replicable return distributions. Achieving that objective can be hard to do when managers accept absolutely every benchmark that every consultant can think of, because no one can effectively monitor performance risk versus
©Association for Investment Management and Research
a large number of benchmarks. For funds that have a customized benchmark, we may not be able to calculate the tracking error because we might not know the composition of the benchmark. This risk would not be picked up by a VAR model, but it is something to be aware of, and we are trying to sensitize everybody in the organization to that problem.

Creating culture is a long process, and it starts when people are hired, which is particularly difficult in a rapidly growing organization. For example, we often rotate new analysts through the risk management group for a period of three months. They are assigned a variety of tasks, and we hope that they forge links with the risk management group that will last over time. We organize internal seminars to make people aware of certain types of risk exposures that we have. We have also created a risk committee, in which the business heads of all of the areas meet every two weeks, review performance, discuss subjects related to risk management in general, and make presentations to the risk committee on their own specific activities. The goal is to try to create a culture in which the portfolio manager is the person responsible and the risk group serves as the safety net.

• Data. Position data, market data, factor data, historical return data-a risk management group requires a variety of data in seemingly huge quantities. The more data we get, the happier we are, because then we can design anything we feel comfortable with. But those data need to have high integrity and must be integrated with respect to historical returns, current positions, and the analytics being undertaken. Thus, a risk management group is a significant technological investment, and fortunately, the asset management world is slowly overcoming its historical reluctance to spend money on risk management.

• Technology. A risk management group needs a system that captures, analyzes, and distributes risk information.
Although a lot of systems do a good job capturing and analyzing risk, very few systems do a good job distributing that information and formatting it for the people who actually need to manage risk. One often gets the impression that the people who designed the reports have never managed a portfolio. At GSAM, we spent a reasonable amount of time redesigning reports to identify what is really going to hit us in our risk systems-what Bob Litterman calls the "hot spots" in a portfolio.2

• Process. The final ingredient for establishing effective risk management is designing a process to put in place appropriate responsibilities, limits, policies, and procedures. Much of this work is common sense, but the details can be overwhelming.

2 Robert Litterman, "Hot Spots and Hedges," Journal of Portfolio Management (December 1996):52-75.
Defining Risk

At GSAM, the first step in managing risk is to define what performance risk means for a particular client. For example, should the focus be on absolute VAR or relative VAR (i.e., tracking error)?3 Although those two concepts are so similar that they are often difficult to distinguish, they do differ in terms of the horizon and level of confidence used. Typically, the client defines the exact risk measure, but even when the client defines the risk measure, does the client absolutely, always want to use that risk measure? If clients say that they are measuring performance relative to a particular benchmark, will that always be true? Certainly, many portfolio managers argue that measuring performance relative to a benchmark is valid on the upside but often not on the downside. On the downside, clients basically look at performance versus cash. So, measuring against a benchmark will not work in all cases. In some cases, implementing an absolute risk measure, as well as a relative measure, is a good idea.

In addition to risk defined against a benchmark, certain clients stipulate that managers have to beat the competition. From a risk management perspective, beating the competition is difficult, because knowing exactly what the competition is doing, or even in some cases who the competitors are, is difficult. Trying to beat the competition is like trying to manage against a benchmark without knowing its composition. Therefore, the relative risk is an unknown, and one cannot add a lot of value to an unknown.

For a particular fund, we must also determine whether risk is symmetrical. Distributions might be skewed because the fund has derivative positions, and even absent derivative exposure, certain markets, such as emerging markets, can produce fat-tailed distributions. Looking at just one number is not enough; the whole distribution of returns has to be examined.
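The absolute-versus-relative distinction can be made concrete with a small sketch. The return series below are simulated placeholders (not fund data), and the 99th-percentile cutoff and square-root-of-252 annualization are assumed conventions for how each measure might be scaled:

```python
# Sketch: absolute VAR is a percentile of the portfolio's own return
# distribution; relative VAR (tracking error) is the dispersion of returns
# versus the benchmark. Simulated daily returns stand in for real data.
import math
import random
import statistics

random.seed(0)
bench = [random.gauss(0.0004, 0.01) for _ in range(500)]   # daily benchmark returns
port = [r + random.gauss(0.0001, 0.002) for r in bench]    # portfolio tracks closely

# Absolute VAR: 99th-percentile one-day loss of the portfolio itself
losses = sorted(-r for r in port)
var_99 = losses[int(0.99 * len(losses))]

# Relative VAR: annualized tracking error (std dev of active returns)
active = [p - b for p, b in zip(port, bench)]
te_annual = statistics.stdev(active) * math.sqrt(252)

print(f"absolute 99% VAR: {var_99:.2%}, tracking error: {te_annual * 1e4:.0f} bps")
```

Note how a fund can carry substantial absolute VAR (it holds the benchmark's full market risk) while showing only a few hundred basis points of tracking error, which is exactly why a client must decide which measure governs.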
Clients must thoroughly understand what the risk measure means, no matter whether we or the client selected that measure. Even if we are not using tracking error and are using something that is scaled to the 99th percentile, does the client understand that a 1 percent chance of loss is not the same as never, especially given that the 1 percent chance always seems to happen in the first quarter that the money is under management? Thus, the educational process that we go through with clients and others within the organization is quite important. Our risk management group works with the marketing group and clients to make sure we are all speaking the same risk language. Most
of us in the risk management group at GSAM came from the banking or broker/dealer risk management side, so we had to learn and adapt to the terminology used in investment management. One of the first things we did was establish a glossary, and in doing that, we discovered that many people were using the same term to mean different things, which is itself another source of risk for an organization. For example, variance to me is a statistical term; it does not mean the difference in performance between a portfolio and its benchmark.

Finally, clients and managers must be clear as to whether performance matters more than consistency. That question is a philosophical one. Although I do not have the definitive answer, I lean toward consistency; some people favor performance. Performance and consistency are basically two different product offerings. Therefore, the risk management frameworks that an organization puts in place for them may differ.
Risk Measures

After defining the performance risk issues, the next step is to make sure that everyone agrees on the risk measure used. Agreement, however, is an all-encompassing term, and an in-depth look at tracking error serves to illustrate the difficulties inherent in settling on a certain risk measure and the importance of being able to objectively critique any specific measure.
3 For more information, see the Goldman Sachs Asset Management report "Tracking Error: VAR by Any Other Name."
Tracking Error. Tracking error is probably the most commonly used measure of performance risk, but does everyone agree on what tracking error actually is or how it is calculated? Tracking error can be calculated in different ways: Are we going to look at historical tracking error or forecast tracking error, and what type of model is going to be used? Suppose a client gives us tracking-error guidelines. If the client asks us to measure compliance risk, we would need to go back to the client and ask what he or she means. In this context, what does having 500 basis points (bps) of projected tracking error mean? Depending on which VAR system we run the portfolio through, we can get hugely different results. Thus, we can be in compliance with one system and not in compliance with another, so what does compliance risk mean to this particular client?

Tracking error also does not provide insights into "one-sixth events"-those events that are in the lower left-hand tail of a portfolio distribution and that are going to affect the value of the portfolio about one-sixth of the time. So, the risk management group might need to put other indicators in place, in addition to tracking error, to monitor risk. For example,
the group might want to look at style drift to make sure that managers are in line with their mandates or with their typical strategies. The group may also want to look at consistency of performance across accounts, which is more of a strategic risk management consideration, particularly for those concerned with the replicability and scalability of their business. Finally, the risk management group might want to look at short-term changes in correlations versus the benchmark to see whether certain portfolio managers are starting to drift away from their mandates and/or their benchmarks. Another way to look at this problem is style analysis.

Deficiencies in Methodology. Although many people are quick to cite the failures and shortcomings of VAR, tracking error actually suffers from most of the same shortcomings, because it is partially the same methodology.

The evaluation horizon for asset managers is typically longer than that for traders, but it is shorter than the investment horizon. Thus, distinguishing between an investment horizon and an evaluation horizon is important. A manager might have a 5- or 10-year investment horizon, but people are going to look at the manager's performance every three months or even more frequently. I have heard of managers getting calls from clients on the 20th of the month asking why the portfolio has underperformed 200 bps since the beginning of the month. Unfortunately, even if managers have a long investment horizon, they must look at risk measures that are consistent with their somewhat shorter evaluation horizons.

The 1 standard deviation measure that is typically used does not intuitively provide managers with the probability or size of underperformance in the case of event risk. Even if a manager does not have options or complex derivatives to manage, some distributions may not scale normally from the 1, 2, or 3 standard deviation level.
This phenomenon is particularly true for emerging market portfolios, which tend to have fat-tailed return distributions.

A tracking error represented by one number also does not give managers a good idea of what a client's utility function is. Utility function is one of those concepts in economics that has always been intuitively understandable but very hard to measure. A manager can, however, phrase questions in ways that reveal how clients feel about particular outcomes. In this way, the manager can come to a closer understanding of the outcomes that would make the client panic versus the outcomes that, if they happen, would be acceptable.
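A deterministic toy series shows why fat tails defeat normal scaling: mix quiet days with occasional jump days, and the empirical 99th-percentile loss exceeds the 2.33-standard-deviation loss that a normal assumption would predict. The mixture below is invented purely for illustration and is not fund data:

```python
# Sketch: a fat-tailed (mixture) return series whose tail loss is far larger
# than the normal-scaling rule implies. All numbers are hypothetical.
import statistics

# 980 quiet days of +/-0.5% and 20 jump days of +/-5% (made-up mixture)
returns = [0.005 if i % 2 else -0.005 for i in range(980)]
returns += [0.05 if i % 2 else -0.05 for i in range(20)]

sd = statistics.stdev(returns)
losses = sorted(-r for r in returns)
empirical_99 = losses[int(0.99 * len(losses))]   # actual 99th-percentile loss
normal_99 = 2.33 * sd                            # what normal scaling predicts

print(f"empirical 99% loss: {empirical_99:.4f}, normal rule: {normal_99:.4f}")
```

Here the realized 99th-percentile loss (5 percent) is more than twice the roughly 2 percent loss the 2.33-sigma rule would suggest, which is the scaling failure described above for fat-tailed markets.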
Another problem with using tracking error is that clients typically have asymmetrical responses to performance in rising and declining markets. That asymmetry has a significant bearing on how a manager might structure a client's portfolio. If a client does not reward a manager as much for outperformance as the client penalizes the manager for underperformance, the manager might use a strategy that caps the upside and protects the downside. The problem is that a dichotomy often arises between a client's utility function and the investment guidelines. If a client has strong risk aversion on the downside, that aversion argues for using derivatives to protect the downside, but often, a client's guidelines indicate that options cannot be used. The potential conflict is quite clear.

Tracking error, by definition, reflects relative returns, which are questionable if, as is usually the case, the benchmarks do not represent the client's liabilities. We assume that whatever benchmark we are given to manage against is the appropriate representation of the client's liabilities, but that assumption is often not true. Although our role is not to second-guess our clients, we still try to model those liabilities and make sure that whatever investment performance we are asked to generate is consistent with those liabilities and with the benchmarks.

Even so, tracking-error forecasts are often a function of the benchmark. A manager can calculate tracking error versus any benchmark, but if the client's portfolio is composed of different securities from those in the benchmark, the tracking-error number can be meaningless. The resulting tracking-error number is exposed to substantially more model risk than the number that the manager would get from assuming that the benchmark looks very much like the portfolio. For example, suppose you are managing a fund versus the S&P 500 Index and you have only S&P 500 equities in the fund.
A reasonable assumption for this fund is that the correlations have less risk of breaking down (remember that the equities are all part of the same universe) than if you were managing a small-capitalization fund against a large-cap index. When a small-cap fund is measured against a large-cap index, at times those securities will be correlated and the tracking error will be low, but when a significant event occurs, those correlations will break down and the tracking error will rise significantly. Therefore, the appropriateness of benchmarks becomes a key issue in assessing whether the tracking-error measure is meaningful.

Deficiencies in Models. Another problem with tracking error is that many of the estimates generated by the models vary significantly depending on the particular model used. For example, using daily
returns for a U.S. growth and income equity fund for the period January 1997 to January 1999, the annualized historical tracking error is 796 bps. Figure 1 shows that the tracking error for rolling 20-day periods is between 5 percent and 10 percent on average, although it did exceed 15 percent in January 1999. With monthly data, the tracking-error estimate is about 530 bps, but that number is probably affected by the sample size, which is only 24 observations for a two-year sample period. Monthly data for a longer historical period show that the tracking error moves back to about 775 bps.

The question then becomes which tracking-error number is correct, and the answer is a judgment call that depends on what the risk manager thinks the fund is currently doing. For example, I have a tendency to focus on short-term movements. Therefore, I am biased toward looking at the higher numbers, particularly because the 20-day rolling tracking-error number has drifted up in the latter part of the sample period. But the tracking-error numbers are ambiguous and raise as many questions as they answer.
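The rolling 20-day estimate behind a chart like Figure 1 can be sketched as follows. Each window's standard deviation of daily active returns is annualized with the square root of 252; the synthetic series below merely mimics a late volatility spike, since the real fund data behind the 796 bps figure are not reproduced here:

```python
# Sketch: rolling 20-day annualized tracking error from daily active returns.
# The alternating synthetic series is a stand-in for real fund-minus-benchmark
# returns; the sqrt(252) annualization convention is an assumption.
import math
import statistics

def rolling_te(active, window=20):
    """Annualized tracking error for each trailing window of daily actives."""
    out = []
    for i in range(window, len(active) + 1):
        sd = statistics.stdev(active[i - window:i])
        out.append(sd * math.sqrt(252))
    return out

active = [0.003 * (1 if i % 2 else -1) for i in range(480)]   # calm regime
active += [0.009 * (1 if i % 2 else -1) for i in range(20)]   # vol spike at the end

tes = rolling_te(active)
print(f"first window: {tes[0] * 1e4:.0f} bps, last window: {tes[-1] * 1e4:.0f} bps")
```

The last window's estimate is roughly three times the first window's, which is the kind of late-sample drift that would bias a short-horizon risk manager toward the higher numbers.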
Figure 2. Rolling 20-Day Return Correlations, 1997-99
(Correlation, 0.3 to 0.9, plotted 1/97 to 1/99; solid line: U.S. Growth and Income Fund versus S&P 500; dotted line: S&P 500 versus S&P 500 Value.)
One question we might ask, for instance, is whether style drift explains why the tracking error went up at the end of the sample period or whether some more-fundamental change was at work. Figure 2 shows the 20-day rolling return correlation for the same growth and income fund against the S&P 500 and the 20-day rolling correlation between the S&P 500 and the S&P 500 Value Index. Up until November 1998, the two lines followed each other closely. Because the correlation between the S&P 500 and the S&P 500 Value Index did not suffer the same dissociation evidenced in Figure 1, the idea of the tracking error going up because of style drift is probably not appropriate. If I were monitoring the risk of this fund, these data would be a signal to talk to the portfolio manager and determine the causes of the spike in tracking error.

Tracking error, either historical or prospective, will not identify issues, such as extreme events, that involve the whole distribution of returns, which is where simulations can contribute information. Figure 3 shows the distributions of monthly variances for historical returns and the current positions based on Monte Carlo simulations. We included a series of funds across different asset classes, some of which actually used derivatives, to make sure the distribution would not be totally symmetrical. We first simulated the historical distribution of the aggregate of the funds' returns, shown by the solid line. Then, we reran the simulation, shown by the dotted line, using historical data on the instruments in the funds but using the funds' current positions. The dotted line shows that the risk has been significantly reduced: The distribution is narrower, and although it still has a kink on the left-hand side, the distribution does not have the fat tail that the solid line has. Thus, the current fund positions are less risky than the historical fund positions. This view of the total risk of the portfolio could not be achieved by looking at tracking error alone.

Risk managers must make their clients aware that even if a fund has a constant tracking error over the year, during the year, the fund might spend some time outside that return distribution. Figure 4 shows the tracking-error levels for a fund that seeks to outperform the benchmark by 3 percent annually with a
©Association for Investment Management and Research
Figure 1. Rolling 20-Day Historical Tracking Error of a U.S. Growth and Income Fund versus the S&P 500 Total Return Index, 1997-99
[Figure: rolling 20-day annualized tracking error, January 1997 to January 1999, on a scale of 0 to 25 percent]
Practical Issues in Choosing and Applying Risk Management Tools
Figure 3. Distributions of Monthly Variances Using Monte Carlo Simulations
[Figure: distributions of returns from -2.0 percent to 2.0 percent for current positions (dotted line) and historical returns (solid line)]
Figure 4. Year-to-Date Cumulative Returns for U.S. Equity Growth and Income Fund versus S&P 500 by Target Tracking-Error Levels, 1998
[Figure: cumulative returns over 1998 for the equity fund versus the S&P 500, plotted against the expected-return line and the +1, -1, and -3 tracking-error bands; the fund's path passes through area A before ending the year at Point B]
tracking error of 6 percent. At the end of the year, the fund's return will hopefully lie at Point B, which is within the predetermined tracking-error level. Within that year, however, the fund spends some time outside that distribution, in area A.
The fund shown in Figure 4 (the heavy solid line) was managed versus an established benchmark. Over the course of 1998, the fund's performance, as measured by tracking error, degraded substantially. But the benchmark was inappropriate, and therefore, the
two series of client accounts that are managed the same way and created histograms of their distributions. Panel A shows a distribution of returns that is very tight around its mean. These accounts are for the most part being managed consistently. The distribution in Panel B is scattered, even though the accounts should be managed in a consistent fashion, and requires further investigation. Although there could be some good reasons (client guidelines, restricted stocks) why the distribution is scattered in Panel B, there could also be some reasons that are not as defensible and that would require a change in process. A firm that has made a strategic decision to strive for consistency of performance wants to have distributions similar to those in Panel A, not Panel B.
tracking-error estimate was probably not a good indication of the overall risk. Backtesting. Backtesting is one way to assess the accuracy of tracking-error forecasts. To create Figure 5, we ran a U.S. growth and income fund through a tracking-error model developed by a software vendor and then backtested the model's results. We found that the model performs relatively poorly; 13 percent of the observations are outside the 2.33 tracking-error band. In the banking sector, backtesting is taken very seriously, and models typically are not released until they are adequately backtested. That has not been the case in the investment management industry, but backtesting will simply have to become a more important aspect of model design and development. In the meantime, portfolio managers often intuitively or subjectively, on the basis of their own experiences, adjust model risk estimates.
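The backtest just described amounts to counting exceptions. A sketch (the 2.33 band matches the figure; under normality roughly 2 percent of observations should fall outside it, so the 13 percent reported signals a poorly calibrated model):

```python
import numpy as np

def exception_rate(active_returns, forecast_te, band=2.33):
    # Share of observations falling outside +/- band * forecast tracking error.
    active = np.asarray(active_returns)
    return float(np.mean(np.abs(active) > band * forecast_te))

# Hypothetical check: well-calibrated normal data should give roughly 2 percent
# exceptions at a 2.33 standard deviation band.
rng = np.random.default_rng(1)
rate = exception_rate(rng.normal(0.0, 1.0, size=10_000), forecast_te=1.0)
```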
Strategic Risk Management Measures. Firms also want to make sure that their fund performance is not affected by credit concentrations or by a firmwide style bias. Credit concentrations may not be important on a portfolio-by-portfolio basis but may have substantial liquidity implications in the aggregate. Also, firms do not want to be betting their business on what investment asset class is, or will be, in style in any particular year. Firms do not want to bet their franchise, and their ability to attract or retain assets, on things that they cannot control.
Strategic Perspective. To achieve consistency of performance, which is important at the strategic management level, the investment firm might want to measure factors that are totally unrelated to tracking error. Figure 6 shows the distributions of monthly relative returns for two account categories. We took
Figure 5. Weekly Returns for a U.S. Growth and Income Equity Fund versus S&P 500 Returns, November 14, 1997, to November 20, 1998
[Figure: scatter of weekly relative returns from November 1997 to October 1998 against the +1/-1 and +2.33/-2.33 tracking-error bands; scale -1.5 percent to 2.0 percent]
Figure 6. Distributions of Monthly Relative Returns for Two Account Categories
[Figure: Panel A, Account Category 1, and Panel B, Account Category 2; histograms of monthly relative returns (percent)]
rest. The primary disadvantage of building a system is the large investment in cost and time; the primary advantages are flexibility, hopefully increased accuracy and precision, and competitive differentiation. Manager A cannot tell a client that he or she manages risk better than Manager B if they are both using the same vendor-generated analytics. This competitive differentiation will help to separate the top asset management firms from those in the second tier. The advantages of buying a system are relatively low cost and support from the provider, but at GSAM, we find that the market for providers of performance risk analytics and reporting systems to the investment management industry is not very large or diverse. As a result, this scarcity of providers has affected the quality not only of the analytics but also of the reporting software. A number of the systems have decent analytics, but they do not necessarily work all the time, and the system architectures are usually difficult to adapt. So, at GSAM, we built our risk management system, which uses different components from different vendors and some internally developed applications, basically combining risk models from third parties and what we use internally on the broker/dealer side. So, we have the GSAM risk system as the framework and the delivery system, but any portfolio can be run through a variety of external or internal risk models.
Conclusion
Buy or Build? My personal recommendation for creating a risk management system is to buy the best and build the
A practical approach to risk management recognizes the investment risks that need to be measured, the organizational concerns that need to be addressed, and the elements of a meaningful program: culture, data, technology, and process. Those organizations that are able to define their relevant performance risks, agree on measures of risk that avoid some of the serious deficiencies of widely used measures, and assess the trade-offs involved in buying versus building risk measurement models are most likely to implement a truly useful risk management system.
Question and Answer Session
Jacques Longerstaey

Question: Do you change your method of risk management depending on the fund you are looking at?

Longerstaey: I believe in sticking with one overall approach to risk management, although the approach might be modified for each type of product and perhaps for different clients. Over time, however, you want to ensure stability. By using one approach, you know its shortcomings, and even if the absolute number has some faults, as the number evolves over time, it will become ever more meaningful because you can make consistent comparisons and judgments. Because different variations of an approach may be used for different asset classes, the aggregation is particularly important and complex. You might, for instance, use one type of factor model for looking at equities and a totally different factor model for looking at fixed income. Thus, aggregating the data is typically a problem. Fortunately, at the aggregated level, people are less concerned with the absolute pinpoint accuracy of that risk measure and more concerned with the big picture.

Question: Do you use a standard tracking-error number?

Longerstaey: We have spent a reasonable amount of time with our portfolio managers and our marketing people to position our products so that we have a diversified product offering. A diversified product offering means different levels of tracking error for different products for different clients. For example, the tracking error for a Japanese equity fund varies substantially depending on
where it is distributed. The tracking error might be lower if that fund is distributed as a component of an international equity fund than if the same fund is distributed locally in Japan. International investors would be looking for generic exposure to the Japanese market, but domestic Japanese investors would be looking for more-aggressive risk taking. The tracking error depends on how you position your fund and which client you are dealing with.

Question: Do you use risk-return ratios in your analysis?
Longerstaey: The risk management group works with management to develop performance measures, such as risk-return ratios or information ratios, for portfolio managers and to ensure that everybody feels comfortable with those measures. The number that everybody thinks they can achieve for long-only portfolios is 0.5. That is, for a 3 percent return over the benchmark, 6 percent risk is a good guideline. One of the first things we did was to look at whether that ratio is meaningful and which part of the percentile distribution the ratio lies in. On the active equity side, a manager with a 0.5 information ratio is a star, in the top percentile of the distribution. But also keep in mind that there is likely to be a relationship between how many managers are pursuing a certain strategy or sector or style and the ability to achieve a 0.5 ratio. Other manager types would have substantially different ratios; a hedge fund manager, for instance, might be able to achieve information ratios between 1.2 and 3.
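The 0.5 guideline above is just the ratio of target excess return to tracking error; a minimal sketch:

```python
def information_ratio(excess_return, tracking_error):
    # Annualized excess return over the benchmark divided by
    # annualized tracking error.
    return excess_return / tracking_error

# The guideline cited: 3 percent over the benchmark at 6 percent risk.
ir = information_ratio(0.03, 0.06)  # 0.5
```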
Question: How do you view the comparative advantages of a historically based VAR perspective, a Monte Carlo simulation, or another kind of parametric method?

Longerstaey: We all use history. The one big advantage that the different versions of parametric and Monte Carlo methods have over pure historical simulation is that they allow us to take into account the time-varying nature of volatility. With simple historical simulation, we do not necessarily know what type of regime existed when those simulated results occurred. An event might have happened in a low-volatility regime, and the volatility could get a lot worse. I favor methods that incorporate the time-varying nature of volatility.

Question: How do you deal with changes in the composition of a benchmark?

Longerstaey: The moving benchmark is just as difficult to deal with as the benchmark for which you do not know the composition. One of the things that we are doing for our own marketing people is creating categories of benchmarks: the ones that we like, the ones that we can tolerate, and the ones that we do not want to use. Volatility of composition (or from a positive perspective, transparency of construction and content) is a key factor in determining which of the three categories a benchmark falls in. Interestingly, most of the opposition to certain benchmarks comes not from the risk management group but from portfolio managers or from the performance measurement group, whose lives are directly complicated by these difficult benchmarks.
Question: In backtesting, we assume that portfolios do not change during the measurement period, but portfolio managers do adjust their portfolios. How do you handle that problem?
Longerstaey: Actually, you don't get this problem if you use high-frequency returns (i.e., daily or weekly). Another way of addressing this problem is to look
at risk and performance in constant portfolios. The issue in this case is that you may have to calculate returns on positions you've never held since the positions were unwound over the evaluation horizon, which may be more costly than moving your organization to higher frequency data. Also, with regard to using tracking error in another fashion, we are contemplating creating two
risk-adjusted performance measures: one defined as performance divided by realized tracking error and another defined as performance divided by anticipated tracking error. The ratio between these two performance ratios would be a measure of the portfolio manager's efficiency at converting potentially higher risk into lower realized volatility.
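The proposed efficiency measure can be written down directly; note that performance cancels, leaving the ratio of anticipated to realized tracking error (the names here are illustrative):

```python
def manager_efficiency(performance, realized_te, anticipated_te):
    # Ratio of the two risk-adjusted performance measures described:
    # (performance / realized TE) over (performance / anticipated TE),
    # which algebraically reduces to anticipated_te / realized_te.
    realized_ratio = performance / realized_te
    anticipated_ratio = performance / anticipated_te
    return realized_ratio / anticipated_ratio

# A manager who anticipated 6% tracking error but realized only 5%:
eff = manager_efficiency(0.04, realized_te=0.05, anticipated_te=0.06)  # 1.2
```

A value above 1 would indicate a manager who realized less volatility than was budgeted for the risk taken.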
How Risk Management Can Benefit Portfolio Managers
Michelle McCarthy
Risk Product Manager
IQ Financial Systems, Inc.1
Using value at risk to analyze portfolio risk may appear to be inaccurate and to present yet another constraint on portfolio managers. But VAR, which measures an investment's potential loss exposure over a specified time period at a given confidence level, can help senior managers in investment firms practice unified and disciplined risk management, giving investors more-reliable results and permitting portfolio managers to use a less restricted range of investment instruments.
Risk measurements help senior managers at investment firms supervise portfolio managers and help clients monitor the mix and concentration of risks in their portfolios. Although individual portfolio managers may not see the benefits of risk management, it does provide genuine benefits to this group as well. Part of the "disconnect" perhaps lies in different notions of what constitutes risk itself and thus how risk can, or should, be managed. This presentation defines risk and risk management for an investment firm, reviews the background of value at risk and describes the process of adopting VAR for investment portfolios, and discusses the benefits, criticisms, and limitations of VAR in a portfolio management context.
Defining Risk
Risk for an investment management firm can be viewed from three perspectives: absolute versus relative risk, fund-specific risk versus risk among a group of funds, and surprise losses. Absolute versus Relative Risk. Risk can be thought of in an absolute sense: risk that the assets lose more money than the person who owns the assets thought possible. Such absolute losses can occur because of market risk factors (the usual focus of VAR methodology), factors specific to an issuer, and operational problems (ranging from fraud to not processing an order on time).

1 Ms. McCarthy is now a managing director at Deutsche Bank.
Risk can also be thought of in a relative sense: underperforming the stated intention of the fund, or more likely a specified benchmark, by an amount greater than the person who owns the assets thought possible. If a client has given an investment firm funds with the intention of outperforming a particular benchmark, relative risk is the more relevant risk measure for those funds. Fund-Specific Risk. Investment firms sometimes apply risk measures across the firm to make sure that the aggregate position of all the funds is not too long or too short. This approach, I argue, is less useful and less justifiable than looking at the risk of each individual fund. Pooling together all the funds, however, may help an investment firm monitor whether its fee income could drop because all its funds are concentrated in the equity markets, for instance, making it vulnerable to an equity crash. I believe that risk management in an investment firm should look for unacceptably large potential losses fund-by-fund, as opposed to across all the funds, because all the funds are not all one person's money. Thus, risk management should evaluate each fund relative to the constraints and desires of the investor. This approach measures the risk of misexecuting fiduciary duties or losing customer satisfaction rather than the risk of reduced fee income because of market fluctuations. Surprise Losses. Risk can also be viewed in the context of unanticipated, or surprise, losses. Often,
when surprise losses happen, certain culprits can be identified. Sometimes the management information system, particularly if it is based on financial accounts, misses something important. Financial accounts do not usefully display such key risk factors as portfolio duration, currency impact, and derivative exposure. So, management information based on financial accounts will miss, among others, surprises resulting from duration risk, currency risk, and derivative risk. If a senior manager or investment committee uses only categories that appear on a balance sheet to monitor how clients' money is being invested, the manager will miss a great portion of the actual risk. Investment guidelines tackle some risks that do not appear on a balance sheet, but they can be easily outstripped by changes in the mix of products available in the capital markets. If the guidelines try to specify what asset classes or what long or short positions are or are not allowed, the guidelines can easily become extremely complex and may not control risk at all. In multicurrency, multiasset funds, investment guidelines cannot do the trick. Investment guidelines that are meant to keep funds safe cannot accommodate all the things happening in a fund, and thus surprises can occur. Surprises are also likely to happen when the manager and client do not discuss how much risk is allowed in pursuit of return. If they discuss what potential outperformance may occur, then the potential underperformance, which is caused by the same risks that allow for outperformance, must be discussed as well. VAR attempts to monitor that potential underperformance over time, to help firms ensure that it remains at an acceptable level.
Defining Risk Management For too many investment firms, risk management means "derivatives." Furthermore, when the discussion broadens beyond derivatives to incorporate all the instruments in a fund, confusion often arises between two kinds of risk management: the offense and the defense. Offense. Portfolio managers may use risk measurement and optimization techniques while seeking to add return to the portfolio. When a portfolio manager makes choices to buy, sell, or hedge, choices based on an analysis of the risk-return potential of the portfolio, those choices are often based on risk forecasts that are grounded in history. The manager may, however, forecast that the future will be entirely different from history and invest accordingly. Choosing assets based on a model, perhaps changing the model based on a belief that the future will be different from the past, is part of the "offense" of risk management.
Defense. Risk management used defensively refers to supervisors looking at a portfolio to see if the level of risk is acceptable. For example, oversight bodies within investment management organizations, such as investment committees, or plan sponsors might measure the risks in the portfolios managed on their behalf to verify that they are comfortable with the risks being taken. In these cases, amending the historical data and embedding forecasts in the models are rare; one tenet of defensive risk measurement and management is that managers should not be amending historical data except in a conservative direction. VAR, for the most part, is used as defense in risk management.
VAR Background VAR is defined as the potential change in value, or potential loss, of a portfolio over some time horizon at some confidence interval (e.g., how much the fund could lose in the next week with 95 percent confidence, how much the fund could lose with respect to its benchmark in the next quarter at 99 percent confidence). A loss at 99 percent confidence means that if the model does its job correctly, the portfolio will suffer a worse loss no more than 1 percent of the time. VAR uses historical data to quantify how much a portfolio might lose, given the assets the portfolio currently holds. The key data that VAR uses to help decide whether a fund is risky are the volatility of the assets and the correlations between the assets. That statement should not be surprising because VAR is grounded in modern portfolio theory and is not very different from risk measures that investment managers have been using for many years. In the 1980s, banks began to adapt modern portfolio theory, thus creating VAR, mainly because the rules they had been using to manage their portfolios were not working. When they used guidelines that simply indicated how many bonds and futures to buy, they were outstripped by the complexity of portfolios. When they tried to look at duration sensitivities, they were stymied by arbitrage portfolios; they could not decide which long-short portfolio was truly riskless and which one was not. VAR gives banks the ability to analyze the risk exposures of their increasingly complex portfolios, and banks make great use of VAR today, not the least because banking regulation has adopted VAR as a popular method of assessing market risk capital requirements. In many aspects, banks are different from most investment firms, and VAR could not have been applied to the latter without significant adaptation. Banks, for instance, tend to be more involved with fixed-income and foreign exchange exposures and have much lower equity exposures than investment
A final adaptation that increases VAR's effectiveness for asset managers is to incorporate the clients into the process. In this approach, clients first create a benchmark asset allocation and compare that benchmark with their liabilities to make sure they are comfortable. They can then delegate risk to their asset managers. They assign a benchmark to their fund managers and further clarify how much they want to outperform that benchmark and what potential underperformance they find acceptable in pursuit of the target outperformance. This process clearly defines for the asset management firm what it should be monitoring to be a good fiduciary and to avoid client dissatisfaction.
firms. Banks also have a need to measure the worst case loss because their regulators want to make sure the banks set aside enough capital for that worst case scenario. Therefore, much of the VAR theory and application focuses on quantifying the worst case loss. Finally, banks tend to be dealers in options and have a large number of nonlinear exposures, which require a great deal of refinement in VAR measurement. Banks' concerns are often driven by the heavy amount of nonlinear product in their portfolios and the fact that they are option buyers and sellers, requiring banks to run hedged option portfolios. So, for VAR to work effectively for asset managers, several changes have to be made. First, VAR must look at risk not only in the absolute but also versus a benchmark (relative risk). Second, banks tend to express VAR in terms of absolute dollars; percent of net asset value is a much more helpful display for investors because it compares neatly with performance reporting. Banks cannot meaningfully express VAR in this way because so many of their portfolios are long-short and, therefore, have tiny net asset values, even though they have a large amount of VAR, which tends to make percentage expressions infinitely large-and not very helpful. Dollar-based measures are similarly unhelpful for investors. A third adaptation for asset managers is to use a lower confidence interval for VAR than a bank would typically use. In the absence of regulatory capital requirements and option use, imposing a high confidence interval does not yield extra information, and using a lower confidence interval means that the manager can better test that the model is a good one. If the manager runs a 1 standard deviation VAR model, the manager can get many more data points to confirm whether or not the model is working. 
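Because VAR is built from asset volatilities and correlations, a 1 standard deviation parametric VAR reduces to the portfolio volatility of modern portfolio theory. A sketch with two hypothetical assets (all numbers illustrative, assuming normally distributed returns):

```python
import numpy as np

def parametric_var(weights, vols, corr, z=1.0):
    # z standard deviations of portfolio return, built from asset
    # volatilities and a correlation matrix (normality assumed).
    w = np.asarray(weights)
    cov = np.outer(vols, vols) * np.asarray(corr)
    return z * float(np.sqrt(w @ cov @ w))

# Hypothetical 60/40 mix of a 20%-vol and a 10%-vol asset, correlation 0.3:
var_1sd = parametric_var([0.6, 0.4],
                         vols=np.array([0.20, 0.10]),
                         corr=[[1.0, 0.3], [0.3, 1.0]])
```

The same engine produces relative VAR when the weights are active weights versus the benchmark rather than absolute holdings.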
The number will also tie in well with managers' experiences: A manager can more easily judge whether a loss of 8 percent versus a benchmark is too heavy to sustain in a normal year (84.5 percent/1 standard deviation confidence interval) than whether a loss of 18.64 percent versus the same benchmark is too heavy to sustain 1 year out of 100 (99 percent/2.33 standard deviations). The fund composition is the same in both cases, but given that most people's careers are shorter than a century, the first question is easier to answer than the second. Fourth, VAR for investment portfolios requires a stronger focus on equities. In most well-documented VAR models in the public arena for banks, equities are treated as benchmarks, which does not help asset managers determine the extent to which their equity funds might fail to track their benchmarks. Equities can be brought into VAR as single stocks, which is very data intensive, or aggregated into industries or other relevant subsectors.
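Under the normality assumption, moving between the two confidence levels in that example is just a rescaling of the z multipliers:

```python
def rescale_var(var, z_from, z_to):
    # Normal-based VAR scales linearly in the z multiplier.
    return var * z_to / z_from

# 8% at 1 standard deviation (84.5%) becomes 18.64% at 99% (2.33 sd):
var_99 = rescale_var(0.08, z_from=1.0, z_to=2.33)
```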
Approach. When using VAR for risk management of investment funds, the investment management firm must quantify for each portfolio the level of potential underperformance that is unacceptably risky compared with a benchmark (i.e., tracking error). This number becomes the "risk guideline" or "threshold" for that fund. For hedge funds and funds for which a benchmark is not relevant, the manager must determine how much absolute risk is unacceptably risky in order to determine the risk guideline. The investment firm must measure VAR for the portfolio periodically, either in absolute terms or with respect to the benchmark, and when the potential loss exceeds the investment firm's risk guideline for that fund, then the manager must take some action to get back within the comfort zone. For example, suppose a manager goes 100 percent into cash using futures, a change that may not be explicitly
Adopting VAR An investment firm contemplating adopting VAR has several tasks to accomplish: understanding the purposes of VAR, taking an approach to implementing VAR, dealing with special cases, measuring the performance of VAR, and recognizing the benefits and limitations of VAR. Purposes. VAR has two primary purposes for risk management of investment portfolios. First, VAR measurements allow comparisons of risks among asset classes and funds. VAR numbers are like having a sensible set of financial accounts that help highlight the role of risks in a fund. VAR is not a "magic bullet," but it can be the beginning of an analysis based on comparability among all funds. Second, VAR allows for the regular monitoring of risk to see how a fund's risk exposure has changed and to see whether that change is acceptable. Consequently, VAR allows unacceptable risk to be changed before that risk becomes crystallized as a loss.
covered by investment guidelines. VAR will warn when a portfolio manager has gone too far into cash; the tracking error with respect to the benchmark will increase drastically. When such a move is brought to the attention of the senior managers or investment committee of the investment firm, these senior managers will have to decide whether being that much out of the market is a good idea. Of course, the fund manager probably has made that move because he or she believed something terrible was about to happen. But if the manager knows with great certainty what will happen tomorrow, why not use the leverage to go short the market? The penalty of being 100 percent in cash is that the manager could miss that potential uptick in the market that everybody else catches and seriously underperform the benchmark. Limits need to be set ahead of time to help determine how much risk is too much for any strategy. Absolute risk limits may be required in addition to, or instead of, risk limits versus a benchmark (tracking error). In that regard, however, VAR does not replace the use of investment guidelines. Many guidelines are still useful for making sure that the portfolio or fund stays within specific issuer concentration limits, liquidity constraints, and other specified exposures. Without such guidelines, VAR might show that a portfolio looks terrific, but in reality, the portfolio may be overconcentrated in one issue and lose too much money on that issue or it may have inadequate liquidity. Implementation. Implementing VAR as part of an investment management risk system requires two pieces of information: the instruments comprising each benchmark and the instruments comprising each fund.
The fund contents and the benchmark contents must be broken down into risk factors, which in effect are pricing sensitivities, such as the sensitivity of a given note to a 1 basis point increase at different points of the yield curve or a foreign exchange forward's sensitivity to the change in the relevant foreign exchange rate and changes in the yield curves of both currencies. These sensitivities (also known as "exposures" or "positions") are brought into the VAR engine, and VAR measures, both absolute and relative, are produced. Table 1 shows possible VAR measures for a one-year holding period and a 1 standard deviation (84.5 percent) confidence level for four funds: an active U.S. equity fund, an international equity index fund, a money market fund, and a hedge fund. The VAR engine indicates that the active U.S. equity fund might lose 15 percent of its absolute value but only 5 percent with respect to its benchmark, the S&P 500 Index, because the mix of assets in the fund is different from those in the benchmark. Similarly, the international equity fund could lose 20 percent in that one year in absolute terms and 8 percent with respect to its benchmark, the Morgan Stanley Capital International Europe/Australasia/Far East Index (MSCI EAFE). The money market fund, as expected, has a tiny VAR in both absolute and relative terms. The hedge fund has a large VAR in an absolute sense, and because the hedge fund's benchmark is T-bills, its absolute risk is the same as its relative risk. The last two columns in Table 1 show the VAR risk thresholds that were set ahead of time. The investment committee might determine these thresholds based on how the fund has been marketed or on the worst loss the fund has suffered; these thresholds might be ones explicitly stated by the plan sponsor, or they might be simply common sense. In any case, these numbers represent a standard against which the VAR numbers can be compared. For example, the active U.S. equity fund should not be able to underperform by more than 4 percent (relative terms). The international equity index fund has both an absolute and a relative risk limit; the money market fund has only a relative risk measurement; and the hedge fund has only an absolute risk measurement. The VAR engine shows that the active U.S. equity fund's relative risk is higher than the threshold. What the manager does at this point depends on the way his or her firm works.
In some firms, the investment committee looks at those VAR numbers that are over the threshold and decides whether or not being over the threshold is acceptable. Others require the manager to reduce over- or underweight positions immediately in order to bring the VAR figure under the threshold.
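The absolute and relative measures such a VAR engine produces can be sketched with a simple parametric (normal-distribution) calculation. The covariance matrix, weights, and the `parametric_var` helper below are illustrative assumptions, not figures from this chapter:

```python
import numpy as np

def parametric_var(weights, cov, confidence_sds=1.0):
    """One-period VAR as a multiple of portfolio standard deviation.

    weights: asset weights; cov: covariance matrix of annual returns.
    A 1 standard deviation VAR corresponds to roughly the 84th
    percentile under a normal assumption, as in Table 1.
    """
    variance = weights @ cov @ weights
    return confidence_sds * np.sqrt(variance)

# Hypothetical two-asset example (covariances are invented).
cov = np.array([[0.0400, 0.0300],
                [0.0300, 0.0625]])   # annualized return covariances

fund = np.array([0.60, 0.40])        # fund weights
benchmark = np.array([0.50, 0.50])   # benchmark weights

absolute_var = parametric_var(fund, cov)
# Relative VAR applies the same engine to the active weights
# (fund minus benchmark), measuring benchmark-relative loss.
relative_var = parametric_var(fund - benchmark, cov)

print(f"absolute VAR: {absolute_var:.1%}")
print(f"relative VAR: {relative_var:.1%}")
```

The point of the sketch is that the same engine produces both numbers; only the weight vector changes.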
Table 1. Fund VAR Measures (Absolute and Relative) and Risk Thresholds

                                             Fund VAR Measures*      Risk Thresholds
Fund/Benchmark                               Absolute   Relative     Absolute   Relative
Active U.S. equity/S&P 500                     15.0%      5.0%                    4.0%
International equity index fund/MSCI EAFE      20.0       8.0         23.0%      10.0
Money market fund/Index                         0.1       0.1                     0.13
Hedge fund/T-bills                             28.0      28.0         35.0

*VAR measures specified for one-year holding period and 1 standard deviation (84.5 percent) confidence level.
©Association for Investment Management and Research
Risk Management: Principles and Practices
Implementing a risk management system such as that envisioned in Table 1 is not trivial. Breaking out all the data on the funds and benchmarks into risk factors is difficult. The system also needs a VAR engine and historical data with which to compare the VAR numbers. For every fixed-income and derivative product, the system needs valuation models to break these products down into risk factors. Finally, data (often from disparate systems) must be consolidated and converted. A regular reexamination of what risk thresholds are acceptable is also part of the system, and having clients who are like-minded and agree with the risk levels is very helpful. In addition, scenario analysis should be used to supplement the VAR numbers. Once portfolios have been broken down into risk factors, running them through a historical scenario, such as the crash of 1987, is relatively simple to do. The only portfolio that would have been bulletproof against every crisis that occurred in the past 20 years is T-bills, but being immune to all potential crises cannot be the goal of this analysis, because doing so would entirely preclude any risk taking and any chance of gain above the risk-free rate. If the investment firm does not believe that history will repeat itself, it can run the portfolio through a simulated scenario. Even positions that look acceptable in the VAR framework may not pass historical or simulated stress testing, in which case the manager might actually (and for good reason) construct positions that are suboptimal in a VAR sense.

One important issue for most investment firms is the new product review process. The manager needs to make sure that the VAR system will capture the data and all the risks inherent in new products. For example, if a manager has taken on prepayment risk for the first time in the fund, the system must be able to capture that risk. Liquidity and credit screens are important to ensure that a manager does not own too much of one issue in one market. If a firm measures possible risk by looking at the historical volatility of price movements in an asset when normal-sized lots are being traded but the manager owns 90 percent of the market in that asset, the manager will not experience the measured volatility the day he or she goes to trade. Firms should be sure their positions are not oversized with respect to average daily trading volume; if they are, the firms must acknowledge this fact and adjust for any oversized positions by increasing their VAR measures accordingly.

Special Cases. Hedge funds and emerging market funds pose special challenges to VAR adoption.
• Hedge funds. Sometimes investors want to roll their hedge funds into their overall portfolio VAR measurements, but unless they have access to portfolio holdings of the hedge funds, doing this analysis is pointless. Compared with its use for traded investments, VAR is weaker for the analysis of nontraded investments, for instance, venture capital investments or real estate partnerships. If no reliable, publicly available periodic price data exist for an investment, then it is difficult (if not impossible) to build portfolio proxies, which VAR requires. VAR is useful, however, for hedge funds that deal in tradeable securities, whether they use long-short or other types of strategies. If an asset class can be neatly matched to benchmarks and financial instruments, VAR is applicable, no matter how complex the strategy. Long-short strategies, however, do require much more refined VAR analysis than long-only strategies. If a manager simply buys assets and compares them with a benchmark, the manager needs only a modestly refined analytical system. But if the manager is taking relative risk, such as with a long-short strategy, the manager needs a high standard of proof and a high degree of refinement in his or her VAR model. For example, suppose the historical data for a portfolio VAR model include only 1-year, 3-year, and 10-year data for French bonds, rather than the entire French yield curve. The portfolio adopts a strategy that is long $1 billion of the 3-year and short $775 million of the 4-year French government bonds and makes a significant amount of money with that strategy. If the bonds are duration weighted, the unadjusted VAR model may collapse them both into a 3-year category so that the portfolio looks to be long $1 billion duration equivalent of the 3-year bonds and short $1 billion duration equivalent of the 3-year bonds (which is the $775 million 4-year position duration adjusted into a 3-year bond equivalent), thus showing zero risk yet a high return.

With more-detailed historical data on French bonds (1-year, 2-year, 3-year, 4-year, 5-year, etc.), the VAR analysis would show that the portfolio is actually $1 billion long the 3-year and $775 million short the 4-year bond. If those two bonds are not 100 percent correlated, VAR will show some risk coming out of that analysis. That risk was missing before because the VAR modeling was too crude for the actual strategy being pursued. So, without explicit categories for the two legs of a very important long-short strategy, that strategy will show zero risk, which is, of course, misleading.
• Emerging markets. Emerging market investments also pose challenges for VAR analysis. The problem of illiquidity is again crucial; standard deviation is a less effective measure for typically illiquid markets and for markets in which the manager is overconcentrated. If an asset normally trades infrequently and/or in small sizes, then any
time a manager makes a normal-sized trade in that asset, that trade itself will raise the standard deviation. If the manager happens to be overconcentrated in that illiquid market, the effect is that much more pronounced. So, standard deviations are suspect in the case of illiquid markets, which obviously include emerging markets, and VAR is thus weaker for those who invest in such markets. In addition, illiquid markets have "fat tails," which means that rare events are more common than what is estimated by statistics built around a normal distribution. Illiquid markets are characterized by long periods of boredom punctuated by short periods of terror, not a smooth distribution of price changes. Furthermore, in many of these markets, finding historical data to build VAR analysis is difficult. For example, when investors first moved into Russia, no historical market data were available, and yet investors trusted analyses that required quantification. Finally, in these markets, VAR and VAR-like models may not suitably capture the characteristics of rapidly evolving financing instruments, such as convertibles. Models typically used to price these instruments and break them down into risk factors have been shown to be unreliable in the less liquid markets.
• Adjustments. These difficulties have led VAR users to adjust their models in various ways. Some managers, for instance, create a more fat-tailed distribution for their emerging market investments. They model the normal behavior statistically and then say that a 99 percent event for that market is three or four times the standard deviation. Most managers prefer the historical-simulation VAR model over those models that rely on volatilities and correlations. Historical simulation does not model high confidence intervals by extrapolating out from the standard deviations of market movements, making this model preferable for illiquid markets.
It tends to more accurately reflect the "fat tails" than other approaches. Some managers keep the VAR measure as is but supplement it with extra stress tests from those markets. By doing so, they are trying to indicate that the possibility exists that a much larger event could happen than what would ever be predicted. Some managers add an arbitrary "charge" to their VAR estimates for certain holdings or markets. For example, the standard deviation of a market may be 15 percent, but the manager might think the "true" volatility of that market is higher than 15 percent. The manager may arbitrarily add a charge to the VAR engine reflecting the extra volatility of the positions in this market, in which the charge, when added to the base 15 percent volatility, will never show a total volatility in that market of greater than 100 percent. When
managers first started trading in Russian equities, they frequently charged 100 percent VAR for their Russian equity positions. This process is not very scientific, but it does attempt to recognize the special nature of such markets. Another similarly arbitrary adjustment that managers sometimes make is with respect to individually modeled instruments. Convertible or high-yield bonds in emerging market countries may be treated as 100 percent equity, for example, instead of being treated as partially equity, partially fixed income. Such treatment may add too much conservatism to a VAR model during good times but just about the right amount during bad times. Finally, some managers increase the holding period assumption for illiquid markets. If they are using a one quarter and a 1 standard deviation measure for their more "normal" funds, they might use a longer period for their emerging market funds to reflect the decreased ability to close out these positions compared with their more liquid positions. Performance Feedback. Backtesting is a process that attempts to determine if all of this risk analysis has done any good. The manager predicts how much the fund might lose with respect to the benchmark every day or every week and then records the actual result at the end of that period. Figure 1 shows how well this VAR model predicted the actual losses over a one-week time frame. The squares represent the predictions made at the beginning of the week, and the dots represent the actual results measured at the end of the week. If this model used a 99 percent confidence interval, 99 percent of the dots should be above the VAR predictions and no more than 1 percent of the sample should be below the predictions. If the actual
Figure 1. Backtesting: Actual Gains and Losses versus VAR Predictions Using Weekly Data
[Chart omitted: weekly gains and losses plotted over time, with the predicted VAR levels shown as squares and the actual results as dots.]
gains are below the predicted gains more than 1 percent of the time, then this model requires amendment. The manager might have a lot of exposure to illiquid markets, or the manager might have an overconcentration in a single issue, both of which would distort the model. The manager can continue tweaking the model, or if the manager is completely in venture capital or real estate, he or she may decide to discontinue using the model or to use different proxies for the assets in the portfolio. With backtesting, the manager needs to make reasonably frequent measures of VAR and portfolio performance. Backtesting does not work on a quarterly basis; a weekly basis can even be a stretch because weekly testing misses intraperiod rebalancing that may be the source of profit and loss. With weekly testing, the manager might find that the predicted and actual numbers are very different from one another.
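The backtest comparison in Figure 1 amounts to counting exceptions: periods in which the actual loss exceeded the predicted VAR. A minimal sketch, in which the P&L series, the VAR level, and the function name are invented for illustration:

```python
import numpy as np

def count_exceptions(actual_pnl, var_predictions):
    """Count periods where the actual loss exceeded the predicted VAR.

    actual_pnl: realized gains/losses per period (losses negative).
    var_predictions: predicted worst loss per period (positive numbers).
    """
    actual_pnl = np.asarray(actual_pnl)
    var_predictions = np.asarray(var_predictions)
    return int(np.sum(actual_pnl < -var_predictions))

# Hypothetical backtest: 100 weekly observations against a 99 percent VAR.
rng = np.random.default_rng(0)
pnl = rng.normal(0.0, 100.0, size=100)   # illustrative weekly P&L
var_99 = np.full(100, 233.0)             # ~99th percentile of N(0, 100)

exceptions = count_exceptions(pnl, var_99)
# At a 99 percent confidence level, roughly 1 exception in 100 weeks is
# expected; a rate near 10 percent would signal that the model failed.
print(exceptions)
```

The same counting logic applies at any confidence level; only the expected exception rate changes.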
Comparison with Other Measures. One important benefit of VAR, especially compared with asset-specific investment guidelines, is that VAR is not limited to one asset class. An equity fund's guidelines might specify that its beta should not be higher than x, and a fixed-income fund's guidelines might specify that its duration should not be longer than y. If the fixed-income fund goes one month longer in duration than y and the equity fund goes 0.1 higher in beta than x, which is riskier? Those two statistics are not comparable if one wants to look at a fund complex to see which managers are too close to their risk tolerance and which are comfortably within their risk tolerance. When using only beta and duration, how would someone look at the risk of a multicurrency and multiasset fund or of a balanced fund that has fixed income and equities and perhaps even international assets? Analyzing the risk of such portfolios is very difficult with only beta and duration, but very possible with VAR. Standard deviation of a fund's past results is comparable among asset classes, but standard deviation is not an early warning of whether a fund has suddenly changed its composition. Because standard deviation is historical in nature, it will show how the fund manager behaved in the past. But if a manager wants to see if a new position might exceed a client's risk tolerance, standard deviation will not help because it is backward looking; it does not use the current set of assets. VAR, however, uses the current set of assets to generate the risk measure. Unlike investment guidelines, VAR does not rely on the names of assets in order to categorize them. For equities, using VAR is not very different from using investment guidelines. But for fixed-income and derivative instruments, an especially important characteristic of VAR is that it does not look at the name of an asset but, rather, runs each asset through a pricing model and identifies each of its pricing variables individually. For example, structured notes were able to get into portfolios because they obeyed the letter, although not the spirit, of the law. Suppose a manager had a one-year AAA note suitable for a money market fund, but its performance was actually linked through a formula in its coupon payment to the behavior of a 30-year CCC note. The one-year AAA note fit the guidelines, but it had the price volatility of a 30-year CCC note. VAR models each instrument in a pricing model and identifies its separate pricing variables. If VAR had been used with this AAA note, it would have shown the note clearly as a 30-year CCC note.
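The structured-note example can be made concrete with a toy volatility calculation contrasting the name-based and pricing-variable views. Every number below is invented for illustration:

```python
import numpy as np

# Hypothetical illustration: a one-year AAA structured note whose coupon
# formula is linked to a 30-year CCC bond. Classifying by name assigns
# the plain 1-year AAA volatility; pricing the coupon formula reveals
# the CCC-driven volatility.

vol_1y_aaa = 0.005   # annual price volatility of a plain 1-year AAA note
vol_30y_ccc = 0.25   # annual price volatility of a 30-year CCC bond
linkage = 1.0        # coupon formula passes CCC price moves through 1:1

# Guideline view: risk keyed off the instrument's name.
risk_by_name = vol_1y_aaa

# VAR view: risk keyed off the pricing variables in the coupon formula.
# The note's variance combines its own small rate risk with the linked
# CCC exposure (assumed independent here for simplicity).
risk_by_pricing = np.sqrt(vol_1y_aaa**2 + (linkage * vol_30y_ccc)**2)

print(f"risk by name:    {risk_by_name:.1%}")     # prints 0.5%
print(f"risk by pricing: {risk_by_pricing:.1%}")  # prints 25.0%
```

Under these assumed numbers, the name-based classification understates the note's volatility by a factor of roughly fifty, which is exactly the gap the pricing-model decomposition is meant to expose.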
Benefits to Portfolio Managers. VAR has the potential to allow portfolio managers greater freedom. Compared with using only investment guidelines to control risk, VAR is a better system for funds that may need to use different instruments or may need to have a currency-hedging strategy. VAR is a targeted way to express the role of leverage; it captures leverage by measuring how much loss could occur in the geared assets and how much loss could occur from the difference between assets that were shorted versus those that were bought. So, managers who want to use creative strategies should view VAR favorably. I do not mean to suggest that managers should have no investment guidelines but, rather, that judicious use of VAR can avoid a manager's being unable to use strategies that are sensible but difficult to address in investment guidelines. Numerous cases now exist in which a client agrees on VAR limits for funds that are going to use leverage or derivatives, and the client monitors the manager's use of these instruments through VAR reporting to confirm that the risk suits his or her tolerance. Portfolio managers who are prevented from using useful instruments because they fall into categories prohibited by the investment guidelines could add value by agreeing with the client to adopt VAR monitoring in lieu of certain kinds of guidelines and thus be able to use a wider range of instruments. So, some of the investment guidelines and constraints in place could be relaxed if clients become more comfortable and have more time and experience with VAR as an alternative control tool. VAR can be used across different portfolios as a supervisory tool because it applies the same measure across every fund, whether the fund is passive, active, equity, fixed income, balanced, or a single asset. VAR also provides a language for allocating responsibility more effectively, especially if a client explicitly states what potential underperformance is acceptable in the pursuit of gain. 
As a framework,
VAR clearly states that asset allocation is the client's responsibility and that performance versus a benchmark is the fund manager's job, in which the manager is responsible for staying within the tracking-error boundary established by the VAR risk threshold (rather than trying to manage absolute risk). Using VAR can also impart a competitive advantage. After a year such as 1998, when some funds sustained significant losses, clients want to see what framework a manager has in place to avoid having surprises and losing money. VAR is a way to avoid some of the surprises that would be buried in other kinds of reporting.

Criticisms and Limitations. VAR is frequently subject to criticism, much of which can be addressed, and has real limitations, which must be recognized when a firm chooses to adopt VAR measurement.
• Criticisms. By using VAR, a manager is specifically acknowledging potential investment loss or underperformance. Managers at banks frequently discuss their potential losses; they set capital aside for losses and respond to regulators who receive regular reporting of a bank's loss potential. The investment management industry, however, does not use the word "loss" (the "L" word) very often. Clients should explicitly agree that a loss of a certain amount is acceptable or possible in the pursuit of gain, which is not common in the investment industry and which is required for a proper acknowledgment of who has what authority and what responsibility. Clients must recognize that they are signing up for a potential loss as they pursue outperformance. If a client cannot acknowledge a potential loss, then no potential loss is ever acceptable. Using a single measure to evaluate all managers means that they are all being measured against the same time horizon.
No matter whether one manager's strategy relies on day trading or another manager's strategy relies on a five-year holding horizon, they are both being measured against the same time horizon, be it one day, one quarter, or one year. This use of a fixed time horizon is one reason why VAR is often unpopular. When faced with this fixed time horizon, managers sometimes say, "I would not lose that much in a quarter. I would day trade out of it" or "A quarter does not matter in my fund; I have a five-year horizon." The problem with this sort of objection is that a fund does have unacceptable annual, quarterly, or monthly performance numbers; they are, in fact, all compared with the same standard at regular intervals. If a client is going to look at performance numbers for a time period less than a full market cycle, then measuring risk to the same time horizon makes sense. If a client is very disciplined and never reacts to a quarterly performance number, then the client will be very
disciplined and never react to a quarterly risk number. The client will set enormous risk thresholds. Investors do not fall asleep for five years and then look at performance at the end of five years; they look at returns intraperiod and occasionally instigate changes based on intraperiod performance. Similarly, managers' risk numbers can be compared intraperiod. The snapshot style of VAR measurement does not take into account the dynamism of active strategies. VAR takes today's portfolio, assumes the manager falls asleep for the holding period (say one quarter), and predicts the possible loss in a quarter. Use of this snapshot approach is a genuine criticism. To test the validity of this criticism, one could measure over time the extent to which VAR failed to track the actual performance that managers sustained. If the tracking error is marked and consistent, then this dynamism really matters. If VAR actually did a good job predicting actual results, then the dynamism matters less. Defensive risk management should be independent; it cannot rely on managers' own pictures of their risk, else it is not a double check. It will always be cruder than the tools managers use to monitor and craft their own strategies. It must be; otherwise, it would involve overly costly duplication of effort. VAR analysis always simplifies complex strategies, always boils things down to broader categories, and thus will always be subject to criticism by managers because they are aware of its shortcomings compared with their own modeling. Additionally, the "offense" will never blindly use historical data alone to come to conclusions about the portfolio, as VAR measures often do. It is almost a requirement of portfolio management to believe the future will be different from the past, else investing is a zero-sum game. Managers will amend models based on historical data to reflect their unique insights about the future or use no historical data at all.
This approach is exactly what should be done to earn good returns but should not be duplicated in the "defense" system. The answer to most criticisms of VAR comes in the backtest when one can see how well the model predicted the results. If VAR does a good enough job, then it does not need a heavy and highly layered infrastructure. In general, VAR is criticized because it is backward looking. Criticizing a backward-looking model is easy. The model everyone wants has the future data set in it. And by comparison, the backward data set always looks inadequate, but it is the only one available. History will always take an unexpected turn from time to time, but that fact does not defeat the value of taking a disciplined look at how the current portfolio would have gone through history. History predicts a great deal of portfolio movement, even if it does not predict it all.
Losses within the specified confidence interval do not constitute model failure. In other words, at a confidence level of 99 percent, a loss 1 percent of the time is not an indication of model failure. If that loss happened 10 percent of the time, then the model would have failed to predict losses. In addition, if the user does not follow the rule of making only conservative adjustments to the model, then the user cannot claim model failure. Finally, if management does not implement any thresholds beyond which action would be taken and an unacceptable amount of money is lost, management cannot blame the model. Therefore, when evaluating an instance in which a model was supposed to have failed, one must look to see if the loss occurred within the confidence interval, if adjustments were made to the model, and if any limits whatsoever existed. If management had not set any limits, for example, then the loss cannot be blamed on model failure.
The model is supposed to be wrong a certain percent of the time (100 minus the confidence interval), and it may be wrong in grand style on those occasions. Instead of focusing on whether VAR is a perfect prediction of potential loss, it is more useful to observe that VAR can help highlight the fact that a fund has become riskier from one day to the next because of a change in its assets. VAR may not quantify the actual perfect amount of loss that could be sustained, but it helps highlight when, because of a change of positions, the risk has gone up. That ability to highlight change is of greater value than the actual perfection of the forecast and allows the manager to trim the positions before the VAR figure, or a larger number, is realized. Perhaps the main reason for VAR's unpopularity is that it adds another constraint in a highly constrained industry. Asset managers already have to deal with various legal constraints and investment guidelines, and VAR presents another set of activities that could constrain a manager's freedom and chance to add alpha to a fund. As noted previously, the silver lining comes if VAR is allowed to substitute for less useful guidelines, thereby removing bad constraints and introducing more logical rules. • Limitations. Risk measurement in general does not prevent losses altogether, and VAR is no different. When a firm loses money, the explanation is not necessarily that its VAR system failed. Even though VAR users are quantifying risk every day, they must keep in mind that the potential loss measured by this technique could actually occur. VAR also does not decide what risk threshold is acceptable: that decision must be made by management (e.g., the plan sponsor or the investment committee). VAR measures how much risk is in a fund, and management must determine when to start reacting to that exposure by making changes in the fund.
Conclusion

VAR is a disciplined, unified way to look at the risk in an investment portfolio. Its value is clear to senior managers in investment firms and to plan sponsors. It allows them to compare portfolio managers and to flag situations that may require attention. From a portfolio manager's viewpoint, VAR can be another layer of constraint, but VAR monitoring can actually permit the removal of some constraints that are placed on portfolio managers, particularly if they want to use strategies that involve instruments that normally cannot be accommodated by investment guidelines. If a firm's clients agree with the VAR system thresholds, there will be less room for misinterpretation and the question of who is responsible for what portion of the fiduciary decisions will be clarified.
Question and Answer Session Michelle McCarthy Question: Once you have an approved risk process, how do you permit exceptions? McCarthy: VAR analysis is different in this regard from investment guidelines. For some investment firms, any exception to a guideline triggers an automatic action. When I was in the investment management group at Bankers Trust Company, when we first implemented the system, we tried to be a little slower to react to VAR monitoring because we needed to understand what the model told us compared with our business judgment. If you have just adopted a set of thresholds and you have not tested them in practice, you need to decide whether VAR is sounding an alarm too often or whether the risk is normal. VAR provides a chance for human judgment and management to come into play, rather than an automatic action being taken. Even once you have approved and adopted the model, it is constantly evolving. You might add things to improve the risk measures. You may see that your risk measurement system overstates the risk of a certain portfolio in a strange way and that if you continue limiting the portfolio and forbidding it to be in that position, it may not have a chance of getting any kind of outperformance. You are continually balancing the need to control with the need to add alpha in a fund. Question: Do VAR numbers vary among vendors? McCarthy: If all the vendors use the same type of VAR model, their results should not vary. But if you compare a Monte Carlo-type VAR
result with a historical-simulation-type VAR result with a parametric-type VAR result, you should not get the same answer. Even if the vendors are using the same type of model, you should check that they are using the same confidence interval and holding period and the same data history.

Question: Please discuss the importance of the time horizon and the length of the data history.
McCarthy: The most likely thing to happen tomorrow is what happened yesterday, but such a short data history (yesterday's price changes) has very little to do with what investment managers are interested in. A longer data history, such as three years, makes backtesting less accurate but captures tail events or crises better than a short data history, simply because more things happen in a longer period of time. Some investment managers criticize the three-year holding period because it does not contain a full market cycle. But if you use 20 years of data, the backtest results will be even less accurate in the near term. You will have old data and relationships between markets that do not exist anymore, and you will have limited ability to calibrate your model and test its validity. With 20 years of data, however, you will have a lot of market crises and tail events that you might want to capture. I believe that having a long data history is important because it will incorporate more market moves than those occurring during the past couple of weeks. Something in the three- to five-year horizon usually captures enough market excitement to satisfy most people. Clearly, if you use only the past 100 days, you can miss something. The bottom line is that nobody knows which piece of market history best predicts the period that you are concerned about in the future. The best thing is to stick with one historical period and backtest the results of your VAR model for a period of time. If you are dissatisfied with the backtest, you may want to tinker with the data history at that point. I think that you should always assume that if a period has been overly calm, the future is likely to be worse than predicted, so VAR users who are concerned with this likelihood will need to be doubly sure to couple VAR measurement with rigorous stress testing and scenario analysis of the portfolio.

Question: If you use a long data history, should you decay the data?

McCarthy: Decaying the data (e.g., in 100 days of data the most recent 10 days might have 90 percent of the weight) has value, but you have to be careful. If you decay the data, your VAR number will not drop abruptly when an old crisis disappears out of your data series. The decaying technique progressively discounts older data points, so the crisis will be weaned out over time. Decaying the data does mean, however, that you are making less use of that crisis in your VAR estimate as time goes on. It is a judgment call, and my own judgment is that with investment management, we are usually talking about long holding periods, so theories that make sense for predicting the next day or the next 10 days are less valid than those for longer time periods. For investment management applications,
having a longer data history makes sense, and if you refrain from decaying or de-emphasizing some data or generally trying to smooth the data out, you let the record speak for itself.

Question: How do you identify which security is making the portfolio VAR too high?
McCarthy: Most VAR methodologies have extra measures to identify the portions of a portfolio that are contributing most heavily to the portfolio's risk. A classic measure is called marginal VAR or incremental VAR, which can be measured in terms of the benchmark tracking error. That is, if I eliminate this stock, this sector, or this country, incremental or marginal VAR will indicate how much the tracking error would be reduced. If you do this analysis, however, for every single asset, every sector, and every country in the portfolio, you will get information overkill. Thus, most VAR users try to identify the subcomponents that they want to monitor regularly and see how much those broader categories contribute to the overall portfolio risk.
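The contribution arithmetic McCarthy describes can be sketched with the standard covariance decomposition, in which the per-asset contributions sum exactly to the portfolio standard deviation. The covariance matrix and weights below are illustrative assumptions:

```python
import numpy as np

def component_var(weights, cov, confidence_sds=1.0):
    """Split portfolio VAR into per-asset contributions.

    Contribution i is w_i * (cov @ w)_i / sigma, scaled by the chosen
    confidence multiple. The contributions sum exactly to the portfolio
    VAR, which is what makes them useful for spotting the positions
    that contribute most heavily to overall risk.
    """
    weights = np.asarray(weights, dtype=float)
    sigma = np.sqrt(weights @ cov @ weights)
    return confidence_sds * weights * (cov @ weights) / sigma

# Hypothetical three-asset portfolio (all numbers invented).
cov = np.array([[0.04, 0.01, 0.00],
                [0.01, 0.09, 0.02],
                [0.00, 0.02, 0.16]])
w = np.array([0.5, 0.3, 0.2])

parts = component_var(w, cov)
total = parts.sum()  # equals the 1 standard deviation portfolio VAR
print(parts, total)
```

In practice, as the answer notes, one would report these contributions only at the level of the subcomponents worth monitoring (sectors, countries) rather than for every asset.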
Question: How would you assess a risk management system that used only historical price information rather than a VAR methodology?

McCarthy: Such a system can convey only partial information about risk exposures. If a manager knows only the price of a compound type of security and does not break it up into risk factors, the manager cannot simulate how that security will behave under different market scenarios. If the only data a manager has on a convertible bond are its price and how many bonds he or she owns, the manager does not know how much of the bond's performance and risk is equity derived, bond derived, or currency derived. Also, if you ran a firm by history-based risk management alone, you would not bother investing. Risk management assumes a zero-sum game, and it assumes the future will be like the past. History is a disciplined way of looking at what can happen to a portfolio because we do not have the future data set; however, it is not the only thing that portfolio managers should have at their disposal. They
should have their knowledge of the market and their beliefs about how history will change. VAR is a way to make sure that a manager's positions that are based on beliefs about the future (beliefs that the future will be different from the past) do not have the capability to really hurt the portfolio if history does in fact repeat itself.

Question: Do you recommend using daily or weekly data collection for measuring actual numbers?

McCarthy: Data collection and measurement frequency should relate to your trading frequency. Daily collection makes sense for day traders and financial institutions, but daily data are incredibly noisy. Even if you do not trade particularly frequently, however, it does not make sense to look at data only once a quarter if some meaningful portfolio changes are likely to take place in that quarter. Weekly data are a nice compromise, giving more flexibility than monthly or quarterly data without the noisiness of daily data.
©Association for Investment Management and Research
Risk Analysis: A Geometric Approach

Brian D. Singer, CFA
Managing Director, Brinson Partners, Incorporated
Quantitative methods for risk management should allow investors and portfolio managers to look at, and try to manage, risk in new ways. A geometric approach can help in displaying the risk characteristics of a portfolio and its benchmark and in assessing the impact of portfolio constraints.
Risk management and quantitative methods are typically considered to be almost interchangeable, even to the extent that risk management is assumed to require or depend on quantitative sophistication. Although a quantitative perspective can certainly be useful in risk management, too often quantitative methods and analytical elegance provide the illusion of control over risk. Such approaches make investors feel as if they have grasped uncertainty and dealt with it simply by the act of quantifying it. But risks, by their very nature, are unexpected. So, quantitative methods should not turn confidence into arrogance; rather, what quantitative methods should do is allow investors (and portfolio managers) to look at risk in new ways, to try to manage risk in ways that they previously could not, or were not comfortable doing. This presentation discusses a process that uses Euclidean geometry to visualize risk. Such an approach is somewhat avant-garde for risk management and is decidedly quantitative, but the intent is to illustrate a tool that enables quantitative risk managers to communicate with nonquantitative portfolio managers and nonquantitative clients. Although the approach applies to any investment horizon, this discussion takes a relatively long-term perspective on risk, one that is typical of an investment policy perspective, and allows direct analysis of portfolio risk and portfolio constraints.
Risk Estimation: Data

Risk estimation relies on volatility and correlation data to construct covariance matrixes; one of the key questions, of course, is which data?

Historical Data. Risk estimation typically begins with the use of historical data: the computation of historical volatilities and correlations. Historical data are consistent, easy to obtain, and often easy to compute. A manager, for instance, can compute a covariance matrix very easily with Microsoft Excel. The problem with historical data is that the data are almost certain to be inappropriate representations of the future. An investor looking at a broad index of the U.S. bond market, such as the Lehman Brothers Aggregate Bond Index, would find that the volatility of that index was in excess of 10 percent in the late 1970s and early 1980s. Currently, that same index has a volatility of 4-5 percent. That historical period (late 1970s and early 1980s) was characterized by high and volatile inflation, which is not the case now. Thus, that investor would not want, in any forward-looking sense, to rely on that period as a foundation for his or her risk estimates unless the investor believed, for example, that the U.S. Federal Reserve Board was planning to monetize, or in effect provide an inflation tax for, fiscal policy. This is a very real investment problem: Suppose at Brinson Partners we are trying to set the investment policy, the normal policy mix, for a pension plan, an endowment, or a foundation. In that instance, the client's time horizon is long term, so we do not want a daily or weekly value at risk estimate. History might not necessarily represent what we think could happen in the future, but an analysis of monthly or quarterly data going back several decades aids in our understanding of risk in various economic and market environments. Granted, a number of advances in statistical methodology applied to historical data have occurred: the use of volatility clustering, the use of generalized autoregressive conditional heteroscedasticity, and the
use of mean reversion for forecasting volatilities. All of those historical approaches have been beneficial for estimating risk, especially over short horizons, but investors are still faced with regime changes (some notable, some not; some identifiable, some not), and every regime change decreases the relevance of historical data. An interesting example comes from New Zealand, which for years suffered from high and volatile interest rates and, therefore, volatile bond returns. In an attempt to change that environment, New Zealand altered the charter for its central bank. The New Zealand central bank now operates under an inflation target, and if the head of the central bank does not meet that target, he or she is fired. New Zealand's inflation volatility is now much lower than it was in the past.

Forward-Looking Data. Because such regime changes are possible, forward-looking volatilities and correlations can be, although are not always, better representations of the future. That regime change in New Zealand was quick and identifiable, but historical data would not have predicted it. Thus, having some type of forward-looking perspective, in terms of the covariance matrix, is a good idea. But forward-looking matrixes also have problems, the biggest one being the limits of human imagination. In a forward-looking sense, people can incorporate only what they can imagine, but risks, by their nature, are unexpected. Therefore, it is difficult to incorporate the appropriate forward-looking events or regime changes that might affect the covariance matrix.
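The volatility-clustering idea mentioned above can be illustrated with a RiskMetrics-style exponentially weighted variance recursion. This is a generic sketch for illustration, not the speaker's method; the decay factor of 0.94 is a conventional choice, not a figure from the text, and the zero seed is a simplification.

```python
import numpy as np

def ewma_variance(returns, lam=0.94):
    """Exponentially weighted variance: var_t = lam*var_{t-1} + (1-lam)*r_t**2.
    Recent shocks dominate, so clusters of volatility show up quickly, and an
    old crisis fades out of the estimate gradually rather than abruptly."""
    var = 0.0  # simplified zero seed; a sample-variance seed is also common
    for r in np.asarray(returns, dtype=float):
        var = lam * var + (1.0 - lam) * r ** 2
    return var
```

After n constant returns of size c, the recursion converges toward c squared, having reached c**2 * (1 - lam**n), which makes the gradual "weaning out" of old observations explicit.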
Geometric Representation. Other important difficulties with using a forward-looking perspective are achieving consistency and intuition, which is where Euclidean geometry comes in. A geometric interpretation of volatilities and correlations has the potential to make risk estimation consistent, practical, and more intuitive to understand and communicate, especially between quantitatively oriented people and non-quantitatively oriented people. Why take a geometric approach? The mathematician Keith Devlin has commented that mathematicians may be able to express their thoughts using the language of algebra, but generally, they do not think that way. Even a highly trained mathematician may find it hard to follow a long, algebraic argument. But every single one of us is able to manipulate mental pictures and shapes with ease. By translating a complicated problem into geometry, the mathematician is able to take advantage of this fundamental human capability.

This ability to manipulate shapes starts in childhood. Children at a very young age learn to put round pegs in round holes, square pegs in square holes, and triangular pegs in triangular holes. They understand and learn to manipulate shapes long before they are able to grasp algebra and other mathematical concepts.

Currency Risk

Assume that from a U.S. dollar perspective, the U.K. pound has a volatility of 12 percent and the German mark also has a volatility of 12 percent. The correlation between the pound and the mark is 0.71. Although with this information an investor can construct a very simple covariance matrix, another way of looking at the covariance matrix is geometrically, as shown in Figure 1. Volatilities are shown as distances, and correlations are shown as angles. To construct Figure 1 from a U.S. dollar perspective, I first made a point for the dollar. Because the volatility of the mark compared with the dollar is 12 percent, I drew a line of length 12. That line could be 12 inches, 12 centimeters, or 12 kilometers; it does not matter, just 12 units of length. The volatility of the pound against the dollar is also 12 percent, so I needed to draw another line of length 12. The question is, what is the relationship (or the angle) between those two lines? The answer is that the relationship is determined by the correlation, which is 0.71 in this example; specifically, the cosine of the angle is equal to 0.71. The cosine of 45° is 0.71, so I drew the second line at a 45° angle to the first line. Thus, I have portrayed the same covariance matrix, but instead of
Figure 1. Visual Representation of Volatilities and Correlations from a U.S. Dollar Perspective
[Triangle with vertices Dollar, Pound, and Mark; the dollar-pound and dollar-mark sides each have length 12.0 percent, and the angle α at the Dollar vertex satisfies cos(α) = ρ(£/DM) = 0.71. Note: ρ is the correlation coefficient.]
using strictly numbers, I have portrayed it as part of a triangle.1

One of the tools we use at Brinson Partners is what I refer to as the correlation protractor. John Zerolis, one of the more quantitatively oriented people at Brinson Partners, generated the protractor by computing the correlation associated with each angle. We use the protractor for discussions in which immediate visual representations are useful. One of the interesting things people notice when looking at the correlation protractor is that not all correlation changes are created equal in risk space. Suppose a correlation goes from 0.9 to 1.0. This might not seem like a big change, but moving from 0.9 to 1.0 is a 26° angle on the protractor. Similarly, a 26° angle from zero moves the correlation from zero to about 0.45. In risk space, a movement in correlation from zero to 0.45 is similar to a movement in correlation from 0.9 to 1.0. This relationship is readily apparent from a geometric representation but not at all obvious from a set of formulas.

Now suppose I want to know the volatility of the pound from a mark perspective. All I need to do is draw a line connecting the pound and the mark and measure the length of that line. The dotted line in Figure 1 indicates that the length is 9.1 (hence the volatility is 9.1 percent). In addition, if I look at the angle between the dotted line and the solid dollar/mark line, I can tell that from a mark perspective, the dollar and the pound have a correlation of 0.38. This technique makes it possible to understand a single covariance matrix from the perspective of every investor in the world, regardless of the investor's base currency. For U.S. investors, we focus on the dollar vertex of the triangle. For German investors, we focus on the mark vertex, and so on. We can use any number of different base currencies.
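The triangle arithmetic above is the law of cosines with correlations read as cosines of angles. As a sketch (the function names are mine, not the presentation's):

```python
import math

def cross_volatility(vol_a, vol_b, corr):
    """Length of the third side of the risk triangle: the volatility of
    asset A measured from asset B's perspective (law of cosines)."""
    return math.sqrt(vol_a ** 2 + vol_b ** 2 - 2.0 * corr * vol_a * vol_b)

def corr_to_angle(corr):
    """The correlation 'protractor': the angle (degrees) whose cosine is corr."""
    return math.degrees(math.acos(corr))
```

Here cross_volatility(12, 12, 0.71) reproduces the 9.1 percent pound volatility from the mark perspective in Figure 1, and corr_to_angle(0.9) is about 26 degrees, which is why a move from 0.9 to 1.0 is such a large step in risk space.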
With just three currencies, we can geometrically represent the correlation matrix on a piece of paper; with four currencies, we would need a three-dimensional tetrahedron. With five currencies, visualization must occur in triangular or tetrahedral subsegments, but the intuition is still the same. One benefit of this approach, in terms of consistency, is being able to see the implications of a covariance matrix from any base currency perspective. Suppose we think that the United Kingdom is going to join the European Monetary Union (EMU) and that the pound's correlation with the euro (represented by the mark) will probably increase to 0.95 as the United

1 For further discussion, see Brian D. Singer, Kevin Terhaar, and John Zerolis, "Maintaining Consistent Global Asset Views (with a Little Help from Euclid)," Financial Analysts Journal (January/February 1998): 63-69.
Kingdom approaches joining the EMU. Figure 2 shows what happens if the correlation between the pound and the mark is 0.95, which corresponds to an angle of about 18°: A correlation of 0.95 means that the pound must have a volatility of 3.8 percent from a mark perspective. So, compared with Figure 1 (where the correlation was 0.71), the volatility dropped from 9.1 to 3.8. If we are not comfortable with that change in volatility, then we cannot be comfortable with our correlation estimate of 0.95. Notice that if the correlation between the pound and the mark were 1.0, the line would essentially become flat, which implies that the pound would have no volatility from a mark perspective.

Figure 2. Effect of Correlation Change
[Triangle with vertices Dollar, Pound, and Mark; with ρ(£/DM) = 0.95, the pound-mark side shortens to 3.8 percent. Note: ρ is the correlation coefficient.]
Portfolio Risk Analysis

Risk analysis of a portfolio relative to its benchmark is a simple application of this geometric approach, as shown in Figure 3. The volatility of the benchmark (benchmark risk) is represented by the base of the triangle, the volatility of the portfolio (portfolio risk) is represented by the side of the triangle drawn with the solid line, and the portfolio's tracking error is represented by the side of the triangle drawn with a dotted line. The correlation between the benchmark and the portfolio is represented by the angle θ, and

Figure 3. Portfolio and Benchmark Risk Analysis in Geometric Terms
[Triangle with the benchmark along the base of length σ_Benchmark, split into segments βσ_Benchmark and (1 − β)σ_Benchmark; the portfolio side has length σ_Portfolio at angle θ to the base, with cos(θ) = ρ(Portfolio, Benchmark); the dotted side is the tracking error.]
the vertical dashed line indicates the portfolio's residual risk.

Portfolio, Benchmark, and Residual Risks.
In the context of a single-index model, the return of the portfolio is equal to a benchmark bet and the residual return, which is uncorrelated with the benchmark. Similarly, portfolio volatility comes from two sources: one that is perfectly correlated with the benchmark bet, or systematic risk, and one that is uncorrelated with the benchmark risk, or residual risk. So, in Figure 3, the line for residual risk is at a right angle to the line for the benchmark risk because a right angle is associated with a correlation of zero. Thus, Figure 3 shows two right triangles, and consequently, the Pythagorean theorem (the square of the length of the hypotenuse is equal to the sum of the squares of the other two sides) can be used to help with the risk analysis. This approach allows us to look at the risks visually. We do not have to wonder what will happen if the residual risk of our portfolio goes up by 10 percent: The residual risk line will become 10 percent longer, the correlation of our portfolio with the benchmark will go down because the angle will increase, and the volatility of our portfolio will increase (the portfolio line will lengthen). We do not have to calculate anything to achieve that intuitive understanding.

Residual Risk, Benchmark Relative Bets, and Tracking Error. The right-hand, shaded triangle in
Figure 3 shows residual risk, benchmark relative bets, and tracking error. Tracking error can be thought of in this context as value-added risk: the risk of the portfolio from the perspective of the benchmark, or the risk of the difference between the portfolio returns and the benchmark returns. In essence, what this figure indicates is that tracking error is a combination of two things: (1) the benchmark relative bet (the base of the shaded triangle), which is the portion of active management that involves an increase or decrease (as in this example) in benchmark exposure, and (2) the residual risk, which is that portion of the risk of active management that is not in any way correlated with the benchmark.
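The two right triangles in Figure 3 make the whole decomposition computable from three inputs. As an illustrative sketch (the function name is mine): given benchmark volatility, portfolio volatility, and beta, the Pythagorean theorem recovers correlation, systematic risk, residual risk, and tracking error.

```python
import math

def risk_triangle(vol_bench, vol_port, beta):
    """Decompose portfolio risk using the geometry of Figure 3."""
    corr = beta * vol_bench / vol_port                    # cos(theta)
    systematic = beta * vol_bench                         # base of the left triangle
    residual = math.sqrt(vol_port ** 2 - systematic ** 2) # vertical leg
    # hypotenuse of the shaded triangle: benchmark relative bet plus residual risk
    tracking = math.sqrt(((1.0 - beta) * vol_bench) ** 2 + residual ** 2)
    return corr, systematic, residual, tracking
```

For instance, a portfolio with the same 15 percent volatility as its benchmark but a beta of 0.8 has correlation 0.8, residual risk 9 percent, and tracking error of roughly 9.5 percent, close to the disguised tracking error in the S&P 500 example discussed later under portfolio constraints.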
Ex Post Analysis. In a portfolio performance sense (an ex post sense), this geometric approach is a tool that can help investors understand the overall performance of their portfolios a little more intuitively and clearly. Suppose I am a plan sponsor and one of my managers comes to me and says, "The benchmark has a volatility of x, the portfolio has a volatility of y, and the beta is z." From that information, not only can I compute the correlation, determine the cosine, and create the entire triangle, including the tracking error and residual risk (even though the manager did not give me that information), but I can also quickly visualize and gauge the tracking error and residual risk without doing any computations. Although I could use algebraic or trigonometric formulas to calculate the tracking error, using geometry is often easier because it allows me to visualize how the various risks move relative to each other. As in Figure 3, I can see that the residual risk of the portfolio is found by dropping a perpendicular line down from the point for the portfolio; I can see that the base of the left-most triangle, found by multiplying the beta times the benchmark risk, is the systematic risk.

Ex Ante Analysis. From an ex ante strategy perspective, the geometric view also helps in making decisions about changing the portfolio. For example, it can help investors understand how a certain strategy or change in strategy might affect a portfolio in absolute and/or relative risk terms. Say we have a portfolio that holds some cash, and we think that taking out that cash might reduce the tracking error. If we take out the cash, we do not change the portfolio's correlation with the benchmark. All we do is increase the risk, so we have to make the line for the portfolio longer. Does increasing the portfolio line decrease the tracking error? Not necessarily. The tracking error decreases to a point but then begins to increase again. Having a risk hedge in the portfolio might reduce risk or it might increase risk relative to the benchmark, which is easy to see from a geometric, visual perspective.

Portfolio Constraints

Portfolio managers more often than not operate under a variety of constraints, such as beta, tracking error, or residual risk. But implementing those constraints can cause difficulties. Once again, geometric interpretation can be used to portray feasible sets of alternative portfolios that are consistent with client constraints.
Beta Constraint. Suppose a client wants an essentially defensive portfolio. Figure 4 shows four portfolios that have a beta of 0.9. I can create many portfolios that have a beta of 0.9, and some of those portfolios might be considered defensive, but some would not. Portfolio A, for example, would likely be considered defensive. It has a low volatility, and it has relatively low tracking error and relatively low residual risk. When the correlation with the benchmark is decreased while maintaining the beta of 0.9, the volatility of the portfolio has to increase. Portfolio D still has a beta of 0.9, but its volatility greatly exceeds that of the benchmark, and it has a relatively
substantial tracking error. All of the portfolios in Figure 4 have a beta of 0.9, but they are very different portfolios; the beta constraint still allows dramatically different levels of portfolio risk, residual risk, and tracking error.

Figure 4. Portfolios with Constant Beta of 0.9
[Portfolios A, B, C, and D sit at increasing heights (residual risk) above the same point on the benchmark line, whose length σ_Benchmark is split into segments 0.9σ_Benchmark and 0.1σ_Benchmark.]
Tracking-Error Constraint. Suppose a manager or a plan sponsor wants a portfolio with a tracking error of 5 percent or less. Again, I can create a number of portfolios that have a tracking error of 5 percent, as shown in Figure 5. I simply draw a circle with a radius length of 5, representing a tracking error of 5 percent, around the benchmark position. Portfolio A, which can be formed by combining the benchmark with cash, has a tracking error of 5 percent; it also has no residual risk and a relatively low volatility. Portfolio D has a beta of 1; its residual risk is equal to its tracking error of 5 percent. Its volatility is slightly greater than that of the benchmark, which is very different from Portfolio A. Portfolio F still has a tracking error of 5 percent and no residual risk, which sounds a lot like Portfolio A, but Portfolio F is much more volatile than Portfolio A. In fact, if the benchmark has a volatility of 15 percent, Portfolio A would have a volatility of 10 percent and Portfolio F would have a volatility of 20 percent. Figure 5 clearly illustrates that a tracking-error constraint still permits wide variations in volatility, beta, and residual risk.

Figure 5. Portfolios with Constant Tracking Error Less than 5 Percent
[Portfolios such as A, D, and F lie on a circle of radius σ_TrackingError = 5 percent centered on the benchmark point at the end of the benchmark line of length σ_Benchmark.]

Residual Risk Constraint. I can also construct a number of portfolios that have the same residual risk, as shown in Figure 6, but those portfolios are decidedly different from each other. Portfolio A has relatively low volatility but relatively high tracking error. By increasing the risk of the portfolio and its correlation with the benchmark, I get to Portfolio C. All three portfolios in Figure 6 have the same residual risk, but Portfolio C has the highest volatility and the lowest tracking error. Thus, a constraint on residual risk places few limits on volatility, beta, or tracking error.

Figure 6. Portfolios with Constant Residual Risk
[Portfolios A, B, and C lie on a horizontal line at height σ_ResidualRisk above the benchmark line of length σ_Benchmark.]

Multiple Constraints. Using this type of analysis allows one to look at the interactions of the investment guidelines imposed on a manager. Figure 7 shows a simple example of this type of interaction, in which the portfolio has two constraints. The first constraint is that the total risk cannot be any greater than that of the benchmark, indicated by the circle with the center at Point A and the radius equal to the benchmark (Point B). The portfolio can be anywhere within that circle. The second constraint is that the tracking error cannot be greater than some specified amount, indicated by a semicircle around the benchmark point (B) with a radius equal to the tracking-error constraint. When I combine those two constraints, the only feasible portfolios are within the cross-hatched area. Consequently, the beta cannot be greater than 1, and the correlation cannot be anything less than about 0.9. Portfolio P represents maximum total risk and maximum residual risk, and the minimum risk portfolio is Portfolio C.

Figure 7. Feasible Portfolios with Given Total Risk and Tracking Error
[Feasible portfolios lie in the cross-hatched intersection of a circle of radius σ_Benchmark centered at Point A and a semicircle around the benchmark point B with radius equal to the tracking-error constraint; Portfolio P marks the maximum-risk and Portfolio C the minimum-risk feasible portfolios.]

A simple example provides a good illustration of the interaction of multiple constraints. Consider a portfolio with the same risk as the S&P 500 Index but with a beta, with respect to the S&P 500, of 0.8. On the surface, those constraints sound fine, but looking at the implications geometrically may indicate otherwise. First, we draw a line whose length represents the volatility of the S&P 500. Because the portfolio and the benchmark have the same volatility, the beta and the correlation both are 0.8. Second, we draw a line whose angle corresponds to a correlation of 0.8 and that has the same length as the S&P 500 line. That line represents the portfolio. A straight line connecting the S&P 500 and the portfolio reveals that the tracking error is 8-9 percent. Thus, the client's constraints seem reasonable (same volatility as the benchmark and a beta of 0.8), but those constraints effectively create a disguised risk: tracking error of 8-9 percent. Does the client really want a portfolio with a tracking error of 8-9 percent? Probably not.
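The constraint interactions in this section reduce to two one-line formulas, again via the law of cosines. This is an illustrative sketch with hypothetical names; the 14 percent benchmark volatility below is an assumed figure, since the text only says the tracking error works out to 8-9 percent.

```python
import math

def implied_tracking_error(vol_bench, vol_port, beta):
    """Tracking error implied by a volatility level plus a beta constraint.
    With equal volatilities, the correlation equals the beta."""
    corr = beta * vol_bench / vol_port
    return math.sqrt(vol_port ** 2 + vol_bench ** 2
                     - 2.0 * corr * vol_port * vol_bench)

def min_correlation(vol_port_cap, vol_bench, te_cap):
    """Lowest portfolio-benchmark correlation reachable when both a total-risk
    cap and a tracking-error cap bind, as in Figure 7's feasible set."""
    return ((vol_port_cap ** 2 + vol_bench ** 2 - te_cap ** 2)
            / (2.0 * vol_port_cap * vol_bench))
```

For example, implied_tracking_error(14, 14, 0.8) is about 8.9 percent, consistent with the 8-9 percent disguised risk in the S&P 500 example, and min_correlation(15, 15, 5) is about 0.94, in the spirit of Figure 7's "not less than about 0.9."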
Conclusion

Geometrically displaying the risk characteristics of a portfolio and its associated benchmark is a simple but powerful tool. The geometric representation of portfolio performance and portfolio strategies helps in simultaneously analyzing multiple risks: absolute risk, relative risk, systematic risk, residual risk, and tracking error. The geometric decomposition also produces an intuitive understanding of the interactions, sometimes subtle and often unintended, that result from imposing portfolio constraints.
Question and Answer Session
Brian D. Singer, CFA

Question: Can you represent information ratios geometrically?

Singer: Yes, information ratios could be represented by drawing what are known as iso-return lines in Figure 3. These lines might start at zero return (cash return if the analysis is in risk-premium terms) at the dollar vertex of the triangle and go out in parallel fashion at return levels of 5 percent, 10 percent, 15 percent, 20 percent, and so on. Having superimposed these iso-return lines over the risk triangle, we can begin to do return and risk analysis simultaneously.
Question: How do you incorporate fundamental factors with this approach?

Singer: We use geometric risk analysis, looking, for example, at the risk relationships between a portfolio basket of securities and an industry or other factor basket. In the risk triangle, the benchmark line could be thought of as the industry or factor basket, with the length of the line indicating the volatility of that industry or factor. The angle at the dollar vertex represents the portfolio's correlation with the industry or factor, and the loading on the industry or factor is measured just as
the benchmark bet (systematic risk) would be measured. This would be a univariate loading on one industry or factor, but we can also do multivariate loadings with multiple industries and factors. In fact, that is how we build our forward-looking covariance matrix-by considering country, currency, equity market, and bond market factors. We do not and could not build thousands of pairwise correlations in any consistent way. Rather, what we do is build aggregate factors, which might be regions, industries, and so on, and think about what the loading of each market is on those various factors.